Posted By

Shruti

TA at Uncap Research Labs

Last Login: 31 October 2022

Job Views: 292
Applications: 20
Recruiter Actions: 4

Posted in

IT & Systems

Job Code

1169549

Senior Role - Data Engineering - Data Platforms

12 - 20 Years | Gurgaon/Gurugram
Posted 2 years ago

Experience: Must have 12-14 years of experience, with a minimum of 9 years of experience building Big Data applications at scale.

About Role:

We are looking for experienced Data Engineers with excellent problem-solving skills to develop machine learning-powered data products designed to enhance customer experiences.

About us:

Nurtured from the seed of a single great idea - to empower the traveller - MakeMyTrip went on to pioneer India's online travel industry. Founded in 2000 by Deep Kalra, MakeMyTrip has since transformed how India travels. One of our most memorable moments was ringing the bell at NASDAQ in 2010.

Post-merger with the Ibibo Group in 2017, we created a stronger identity and greater traction for our portfolio of brands, increasing the pace of product and technology innovation. We were ranked amongst the LinkedIn Top 25 Companies in 2018.

GO-MMT is the corporate entity of three giants in the online travel industry - Goibibo, MakeMyTrip and RedBus. The GO-MMT family combines the strengths of its brands, and the group company is easily the most sought-after corporate in the online travel industry.

About the team:

MakeMyTrip, as India's leading online travel company, generates petabytes of raw data that serve business growth, analytics, and machine learning needs.

The Data Platform team is a horizontal function at MakeMyTrip that supports various LOBs (Flights, Hotels, Holidays, Ground) and works heavily on streaming datasets, which power personalized experiences for every customer, from recommendations to in-location engagement.
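As a toy illustration of the kind of streaming aggregation such a platform performs - counting events per customer in fixed time windows - here is a minimal sketch in plain Java. The class and field names are hypothetical; a production pipeline would run on an engine like Spark Structured Streaming or Flink rather than over in-memory lists.

```java
import java.util.*;
import java.util.stream.*;

// Illustrative tumbling-window count over a stream of page views.
// Names here are hypothetical, not MakeMyTrip APIs; production systems
// would use Spark Structured Streaming or Flink for this aggregation.
class WindowedCounts {
    record PageView(String customerId, long timestampMs) {}

    // Count views per (customerId, window index) for fixed windows of windowMs.
    static Map<List<Object>, Long> countPerWindow(List<PageView> views, long windowMs) {
        return views.stream().collect(Collectors.groupingBy(
            v -> List.<Object>of(v.customerId(), v.timestampMs() / windowMs),
            Collectors.counting()));
    }
}
```

The same group-by-key-and-window shape maps directly onto a streaming engine's windowed aggregation, where the window index comes from event time and results are emitted incrementally instead of all at once.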

There are two key responsibilities of the Data Engineering team:

First, to develop the platform for data capture, storage, processing, serving and querying.

Second, to develop data products, including:

- Personalization & recommendation platform

- Customer segmentation & intelligence

- Data insights engine for persuasions

- A customer engagement platform that helps marketers craft contextual, personalized campaigns over multi-channel communications to users

We developed Feature Store, an internal unified data analytics platform that helps us build reliable data pipelines, simplify featurization and accelerate model training. It gives us actionable insights into what customers want, at scale, and drives richer, personalized online experiences.
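To make "featurization" concrete, here is a minimal, hypothetical sketch of the kind of transformation a feature store performs: rolling raw click events up into per-customer feature rows that a model-training job can consume. The record and method names are illustrative only (the posting does not describe Feature Store's actual internals).

```java
import java.util.*;
import java.util.stream.*;

// Hypothetical featurization step: aggregate raw click events into
// per-customer feature rows, as a feature store pipeline might do.
// ClickEvent and FeatureRow are illustrative names, not real APIs.
class Featurizer {
    record ClickEvent(String customerId, String lob, long timestampMs) {}
    record FeatureRow(String customerId, int totalClicks,
                      Set<String> lobsBrowsed, long lastSeenMs) {}

    // Group events by customer and derive simple engagement features.
    static Map<String, FeatureRow> featurize(List<ClickEvent> events) {
        return events.stream()
            .collect(Collectors.groupingBy(ClickEvent::customerId))
            .entrySet().stream()
            .collect(Collectors.toMap(e -> e.getKey(), e -> {
                List<ClickEvent> evs = e.getValue();
                Set<String> lobs = evs.stream()
                    .map(ClickEvent::lob).collect(Collectors.toSet());
                long last = evs.stream()
                    .mapToLong(ClickEvent::timestampMs).max().orElse(0L);
                return new FeatureRow(e.getKey(), evs.size(), lobs, last);
            }));
    }
}
```

In practice the same transformation would run on a distributed engine such as Spark, with the resulting feature rows versioned and served consistently to both training and online inference.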

Role Responsibilities:

- Solution Architecture in Big Data and Advanced Analytics domains

- Build 'Reference Architecture' for Data / Big Data technology domain

- Define and own end-to-end Architecture from definition phase to go-live phase for large and complex engagements

- Build scalable architectures for data storage, transformation and analysis

- Design platforms as consumable data services across the organization using the Big Data tech stack

- Think of solutions as scalable generic reusable organization-wide platforms

- Architect and build near real-time (low latency) platforms for segmentation, personalized recommendations, reporting etc.

- Build and execute data mining and modeling activities using agile development techniques

- Lead big data projects successfully from scratch to production

- Appreciate and understand the cloud delivery model and how that affects application solutions - both delivery and deployment

- Solve problems in robust and creative ways and demonstrate solid verbal, interpersonal and written communication skills

- Work in an ambiguous environment and drive clarity about unknowns, both technical and functional

Technology experience:

- Extensive experience working with large data sets with hands-on technology skills to design and build robust data architecture

- Extensive experience in data modeling and database design

- At least 5 years of hands-on experience with the Spark/Big Data tech stack

- Stream processing engines - Spark Structured Streaming/Flink

- Analytical processing on Big Data using Spark

- At least 9 years of experience in Java/Scala

- Hands-on administration, configuration management, monitoring, performance tuning of Spark workloads, Distributed platforms, and JVM based systems

- At least 2 years of cloud deployment experience - AWS | Azure | Google Cloud Platform

- At least 2 product deployments of big data technologies - business data lakes, NoSQL databases, etc.

- Awareness of, and decision-making ability to choose among, various big data, NoSQL, and analytics tools and technologies

- Experience in architecting and implementing domain-centric big data solutions

- Ability to frame architectural decisions and provide technology leadership & direction

- Excellent problem solving, hands-on engineering, and communication skills

Leadership Experience:

- Strong people management skills - ability to build high-performing teams, mentor team members, build a strong second line, and attract and retain talent

- Seen as a thought leader in the industry in the areas of Data/Big Data, analytics - prescriptive and predictive - and insights visualization for effective storytelling

- Experience in building strong partnerships with leaders at Big Data technology providers such as Databricks, Cloudera, Cassandra, AWS, etc.
