Job Views: 1358
Applications: 30
Recruiter Actions: 17
Posted in: IT & Systems
Job Code: 535194

Capgemini - Cloud Native Big Data Architect

14 - 18 Years | Bangalore/Mumbai
Posted 6 years ago

Job Description

Role: Cloud Native Big Data Architect

Location: Bangalore/Mumbai

- 14+ years of software development experience building large-scale distributed data processing systems/applications or large-scale internet systems

- At least 3 years of experience architecting Big Data solutions at enterprise scale, with at least one end-to-end implementation

- Strong understanding of and experience with Hadoop ecosystem components such as HDFS, MapReduce, YARN, Spark, Scala, Hive, HBase, Cassandra, ZooKeeper, Pig, Hadoop Streaming, and Sqoop

- Knowledge of Hadoop Security, Data Management and Governance

- Ability to articulate the pros and cons of "to-be" design/architecture decisions across a wide spectrum of factors

- Work closely with the Operations team to size, scale, and tune existing and new architectures

- Experience working on core Big Data development projects; should be able to perform hands-on development, particularly in Spark, HBase/Cassandra, Hive, and shell scripts (see the illustrative sketch after this list)

- Responsible for designing, developing, testing, tuning and building a large-scale data processing system

- Troubleshoot and develop on Hadoop technologies including HDFS, Hive, HBase, Phoenix, Spark, Scala, and MapReduce, as well as Hadoop ETL development via tools such as Talend

- Must have strong knowledge of Hadoop security components such as Kerberos, SSL, and encryption using TDE, etc.

- Ability to engage in senior-level technology discussions.

- The ideal candidate is proactive, sees the big picture, and can prioritize the right work items to optimize overall team output.

- Should have worked in agile environments; exposure to DevOps is a plus

- Excellent oral and written communication skills

- Hadoop Developer/Architect certification will be an added advantage

- Should be able to benchmark systems, analyze system bottlenecks and propose solutions to eliminate them

- Should be able to clearly articulate pros and cons of various technologies and platforms

- Should be able to document use cases, solutions and recommendations

- Should have a good understanding of the AWS and Azure ecosystems (e.g. Azure HDInsight and AWS Elastic MapReduce) and of IaaS/PaaS/SaaS models

- Must have good knowledge of third-party tools supporting Big Data

- Must possess good knowledge of NoSQL database systems
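To illustrate the kind of hands-on Spark development referenced in the list above, here is a minimal sketch of a Scala batch job that reads raw CSV events, aggregates them, and writes the result to a Hive table. The object name, input path, and table/column names (EventCountJob, /data/raw_events, event_type, events_by_type) are hypothetical examples, not part of the role description.

    import org.apache.spark.sql.SparkSession

    // Minimal sketch of a Spark batch job of the kind described above:
    // read raw CSV events, aggregate them, and persist the result as a
    // Hive table. All names and paths below are hypothetical.
    object EventCountJob {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("EventCountJob")
          .enableHiveSupport()            // allow writing to Hive-managed tables
          .getOrCreate()

        // Hypothetical HDFS location of raw CSV files with a header row
        val events = spark.read
          .option("header", "true")
          .csv("/data/raw_events")

        // Count events per type and overwrite the hypothetical Hive table
        events.groupBy("event_type")
          .count()
          .write
          .mode("overwrite")
          .saveAsTable("events_by_type")

        spark.stop()
      }
    }

In practice, a job like this would be packaged as a JAR and submitted to a YARN cluster via spark-submit, which is representative of the day-to-day development work expected in this role.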
