Technical Skills:
- 3+ years of engineering and software development experience, with demonstrable architecture experience in a large organization.
- Strong hands-on experience with big data components and frameworks such as Hadoop, HDFS, Hive, Spark and Kafka.
- Strong programming skills in Python, PySpark or Scala, along with the associated IDEs.
- Experience with cloud-based big data platforms built on Amazon Web Services (AWS) services such as S3, EC2, EMR, Glue, Athena and RDS (see the PySpark sketch after this list).
- Ability to work with large, complex data sets and solve difficult analytical problems, applying advanced analytical methods as needed.
- Knowledge of software engineering best practices across the development lifecycle, including agile methodologies, coding standards, code reviews, build processes, testing and operations.
- Ability to conduct end-to-end analyses covering data gathering, requirements specification, processing, analysis, ongoing deliverables and presentations.
- Build and prototype analysis pipelines iteratively to provide insights at scale.
- Hands-on experience working in a Unix environment.
- Experience with scheduling and workflow management tools; Airflow is preferred.
- Hands-on experience with CI/CD processes would be a plus.
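
As an illustration only, below is a minimal PySpark sketch of the kind of S3-backed transformation work described above; the bucket paths, column names and job name are hypothetical.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("daily_events_transform").getOrCreate()

    # Read raw events from S3 (path is illustrative only).
    events = spark.read.parquet("s3://example-bucket/raw/events/")

    # Remediate a simple quality issue and aggregate by day.
    daily = (
        events
        .dropna(subset=["event_id"])                      # drop rows missing a key field
        .withColumn("event_date", F.to_date("event_ts"))  # derive a partition column
        .groupBy("event_date")
        .agg(F.countDistinct("user_id").alias("daily_users"))
    )

    # Write results back to S3, partitioned for downstream querying.
    daily.write.mode("overwrite").partitionBy("event_date").parquet(
        "s3://example-bucket/curated/daily_users/")

On AWS, a job like this would typically run on EMR or as a Glue job, with the output registered in the Glue catalog so that Athena can query it.
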
Job Responsibilities:
- Contribute to the full development lifecycle, including requirement analysis, functional design, technical design, programming, testing, documentation, implementation and ongoing technical support.
- Analyse data, develop transformation scripts, remediate quality issues and work with team members to implement and test new data-driven use cases.
- Work with product/data engineering leaders to ensure robust and accurate data systems.
- Create reusable components across the available data sources.
- Develop agile processes for the fluid delivery of data, information and analyses to the business (see the scheduling sketch after this list).
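
As a sketch only, the transformation script shown earlier could be scheduled with an Airflow DAG along these lines; the DAG id, schedule and spark-submit path are assumptions.

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.bash import BashOperator

    with DAG(
        dag_id="daily_events_transform",   # illustrative name
        start_date=datetime(2024, 1, 1),
        schedule_interval="@daily",
        catchup=False,
    ) as dag:
        # Submit the PySpark job; the script path is an assumption.
        run_transform = BashOperator(
            task_id="run_transform",
            bash_command="spark-submit /opt/jobs/daily_events_transform.py",
        )
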