We'll trust you to
- Work closely with senior stakeholders of a global investment bank
- Oversee the modernization of data platforms to improve the stability, reliability, and scalability of existing data and quality-control processes, and to automate data extraction from various sources using data pipelines and data mapping
- Have a comprehensive understanding of data engineering solutions for financial and banking data
- Be responsible for the end-to-end delivery of various projects related to data, research, and platforms focusing on financial data
- Implement and enhance existing data quality controls, governance, and policy processes
- Work on data migration and mapping processes
- Ensure the taxonomy and ontology for compiled data are adopted by end users
- Ensure data consistency across multiple systems and business units
- Coordinate design sessions with data stewards, data engineers, engineering teams, data scientists, and product teams
- Ensure the adoption of best practices for data quality, validation, and wrangling
- Contribute toward creating open data and making data findable, accessible, interoperable, and reusable
- Support data migration from legacy systems, as well as data inserts and updates not supported by current applications
- Understand business capability needs and processes by partnering with product managers and functional IT stakeholders
- Participate in data scraping, curation, and compilation
What you'll need to have
- A graduate or postgraduate degree in computer science or engineering from a reputed university, with experience in big data and team management
- A strong background in data handling, wrangling, and management
- Comprehensive understanding of the Hadoop ecosystem
- Experience in creating data pipelines via Spark, Python, and Airflow
- Experience in implementing data lakes, preferably Delta Lake
- Experience in designing and developing systems using Databricks