JOB SUMMARY:
- The Data Engineer will be responsible for the design and delivery of data solutions in alignment with business and IT strategy and projects.
- The role is expected to be a hands-on expert who understands the flow of information, data structures and data pipelines, and knows how to create and maintain a data platform on the cloud. The role will be a curious builder who builds and maintains the data platform to meet the enterprise's objectives, keeping best practices in mind.
- The role requires a self-starter who is result-oriented and a clear communicator in a very agile environment. It will involve creating value through data exploitation and data-enabled strategies, enabling all forms of business outcomes by making data available in a single repository.
- The role will also be responsible for aligning data policy and administration with relevant regulatory, legal and ethical mandates.
KEY RESPONSIBILITIES:
Main roles and responsibilities:
You will integrate into a high-performing, agile team of Data Scientists, Data Engineers, Software Engineers, Data Analysts, Data Stewards, PowerBI Developers and Engagement Leads, working closely with the deployment lead and subject matter experts. Under the supervision of the Head of Data & Analytics, you will be in charge of building data pipelines across a variety of internal and external data sources and functions, helping to create and enrich the data repository on the cloud.
Project Delivery: 40%
- Understand business issues and translate these into technical needs
- Gather and organize large and complex data assets, perform relevant analyses
- Ensure the quality of the data in coordination with Data Scientist, analysts & PowerBI developers (peer validation)
- Propose and implement a relevant data model for each business case
- Communicate results and findings in a structured way
- Partner with Data Stewards, DBAs, Software Engineers and PowerBI Developers to prioritize the pipeline implementation plan
- Partner with PowerBI Developers and Data Scientists to design pipelines relevant for each business case
- Work closely with the Data Industrialization team (IT) to ensure that the proof of concept (POC) will be eligible for industrialization
- Leverage existing or create new "standard pipelines" within Pernod-Ricard to bring value through business use cases
- Actively contribute to Data platform community
- Ensure documentation is up to date and that solutions developed are transitioned to Service Delivery teams in accordance with set guidelines.
- Drive solution risk and compliance assessments and the mitigation of risks and gaps, in liaison with the Information Security lead.
- Support audits for related solutions delivery areas
Understand Data Model needs: 20%
- Understand the work done by business and development teams and analyze the different documents formalized during inflection/incubation
- Identify inputs, outputs and cleansing rules
- Define the success factor per dataset aligned with business requirements
- Deep dive on "data preparation / data engineering" topics raised by business and identify potential overlaps with Data Cleansing Factory
Hands-on work on Data: 20%
- Manually analyze files (inputs and outputs) to confirm the documented rules
- Test the documented rules by manually cleansing new datasets
- Hold technical discussions with business and developers to ensure compliance of the generated outputs
Formalize Process: 20%
- Identify all steps needed to clean the different datasets
- Work closely with the business teams to define the processes
- Train business users, developers and data scientists on the available data sources and how to use them
- Write and deploy processes to monitor data quality
- Develop and deploy data pipelines to automate the data cleansing
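As an illustration of the cleansing and quality-monitoring work described above, a minimal sketch in Python with pandas (the dataset, cleansing rules and column names are all hypothetical, not taken from the role's actual systems):

```python
import pandas as pd

def cleanse_orders(raw: pd.DataFrame) -> pd.DataFrame:
    """Apply documented cleansing rules to a hypothetical orders dataset."""
    df = raw.copy()
    # Rule 1: normalize free-text country codes
    df["country"] = df["country"].str.strip().str.upper()
    # Rule 2: drop rows without an order identifier
    df = df.dropna(subset=["order_id"])
    # Rule 3: negative quantities are treated as data-entry errors
    df = df[df["quantity"] > 0]
    return df

def quality_report(df: pd.DataFrame) -> dict:
    """Simple metrics to monitor data quality after each pipeline run."""
    return {
        "rows": len(df),
        "null_ratio": float(df.isna().mean().mean()),
        "duplicate_ids": int(df["order_id"].duplicated().sum()),
    }

raw = pd.DataFrame({
    "order_id": [1, 2, None, 4],
    "country": [" fr", "FR ", "es", "ES"],
    "quantity": [3, -1, 2, 5],
})
clean = cleanse_orders(raw)
report = quality_report(clean)
```

In practice such rules would be validated against the documented cleansing specifications and the report pushed to a monitoring dashboard.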
Develop following the Data Platform guidelines:
- Create target data models on Azure, Snowflake or other cloud platforms
- Develop and deploy pipelines to feed the different data models
- Ensure the data quality of the inputs and outputs
- Define ways of working to industrialize the automations
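A sketch of the kind of pipeline step that feeds a target data model, again with purely hypothetical table and column names, including an output-side quality check as called for above:

```python
import pandas as pd

def build_sales_model(transactions: pd.DataFrame) -> pd.DataFrame:
    """Feed a hypothetical target model: sales aggregated per brand and month."""
    df = transactions.copy()
    df["month"] = pd.to_datetime(df["date"]).dt.to_period("M").astype(str)
    model = (
        df.groupby(["brand", "month"], as_index=False)["amount"]
          .sum()
          .rename(columns={"amount": "total_amount"})
    )
    # Quality check: aggregation must preserve the grand total
    assert abs(model["total_amount"].sum() - df["amount"].sum()) < 1e-9
    return model

transactions = pd.DataFrame({
    "brand": ["A", "A", "B"],
    "date": ["2024-01-05", "2024-01-20", "2024-02-03"],
    "amount": [100.0, 50.0, 75.0],
})
model = build_sales_model(transactions)
```

In an industrialized setup this logic would run inside an orchestrated pipeline (e.g. Databricks jobs scheduled by Azure Data Factory) rather than as a standalone script.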
PROFILE DETAILS:
Education required:
- A bachelor's or master's degree in computer science, information science or a related field, or equivalent work experience.
- Academic qualification or professional training and experience in cloud computing will add significant value.
Experience / Background:
- 3-5 years of experience as a Data Engineer in an IT team
- Experience in an FMCG company is a strong plus
- Good knowledge of cloud computing
- Data tools: storage (Azure Data Lake, SQL Database, Azure Analysis Services), processing and orchestration (Databricks, Azure Data Factory, Logic Apps), exploration (Power BI, Superset), API Management and Azure Functions for serverless applications
- Project management & support
Skills / Competencies:
- Strong expertise in Python and SQL
- Proactive, dynamic and motivated
- Ability to engage in discussions with business and technical stakeholders
- Strong customer orientation
- Fluency in English is mandatory
- Strong results orientation
- Strong collaboration and influencing skills
- Ability to drive continuous improvement
INTERACTIONS:
INTERNALLY:
- Data Scientists, Developers, Software Engineers, Data Owners and other Data Engineers
- Solutions Delivery lead
- IT Demand Management BRM, Architects, Technology Leads, Information Security leads and IT Service Delivery leads, and their teams
- Program Management Office (PMO)
- Internal audit/risk assurance team
- Head of HR
- Enterprise data and analytics service providers
EXTERNALLY:
- System integrators/ implementation partners
- IT infrastructure / application support partner/vendor and IT product/service providers
- Business and IT consultants