Job Summary:
We are looking for bright, driven, and talented individuals to join our team of passionate and innovative software engineers. In this role, you will use your experience with Java/Scala, Spark, Big Data, and streaming technologies to build a lending platform based on a data lake.
Job Duties:
- Developing and deploying distributed Big Data applications using Apache Spark on MapR Hadoop (experience with other distributions such as Hortonworks or Cloudera is also acceptable)
- Leveraging DevOps practices such as Continuous Integration, Continuous Deployment, Test Automation, Build Automation, and Test-Driven Development to enable rapid delivery of working code, using tools like Jenkins, Maven, Nexus, Ansible, Terraform, Git, and Docker
- Helping drive cross-team design and development through technical leadership and mentoring
Essential Skills:
- Total experience: 15+ years
- 5+ years of experience with the Hadoop Stack
- At least 7 years of professional programming experience in Java or Scala (3+)
- 2+ years of experience with distributed computing frameworks such as Apache Spark and Hadoop
- Experience with Elasticsearch and Spark is a plus
- Experience with database and ETL development, including big data platforms such as Hadoop and Informatica BD
- Strong knowledge of Object-Oriented Analysis and Design, software design patterns, and Java coding principles
- Experience with Core Java development preferred
- Familiarity with Agile engineering practices