- Design and implement large-scale, future-ready data architectures and real-time data pipelines for data curation, feature engineering, and machine learning on enterprise and open-source tools; transition data & AI workloads to cloud platforms (AWS, GCP, Azure)
- Define modern distributed data store architectures and designs, leveraging a hybrid cloud environment, distributed data pipelines, and automated governance
- Work on opportunity pursuits to shape and solution data programs; engage with key client executives and effectively communicate the value proposition of next-gen data capabilities
- Lead client assessments, preparing current-state and future-state architectures along with go-forward recommendations
- Work with multi-technology, cross-functional teams and customer stakeholders to guide and manage the full lifecycle of a big data solution
Qualifications:
- Architect with 6 to 8 years of relevant experience in designing, architecting, and implementing large-scale data processing, data storage, and data distribution systems
- Strong understanding of data analytics platforms and ETL in the context of streaming and processing technologies such as Kafka, Flink, and Spark
- Experience designing solutions for multiple large data warehouses, with a good understanding of cluster and parallel architectures, high-scale or distributed RDBMS, cloud-native data stores on AWS, Azure, and Google Cloud, as well as NoSQL and graph stores
- Extensive experience in data modelling and working with API-based data exchange
- Expertise in enterprise data governance, data security, data quality, and data lineage tools
- Expertise in decision intelligence, DataOps, MLOps, and multi-cloud data platform implementation
- Ability to frame architectural decisions and provide technology leadership and direction
- Excellent written and verbal communication skills
- Strong senior stakeholder management skills
- Strong client-interfacing and business presentation skills