Please note: this is a Fintech company looking for people who can join immediately or within 1-2 weeks.
Mandate: Hive, Tableau, Spark, Data Analysis, Scala / Python / Java.
Only those who fit the above requirements need apply.
Roles & Responsibilities.
1) Overall 3+ years of experience with Hadoop Big Data, SQL, Hive, RDBMS, Spark, HDFS and Scala.
2) Experience extracting data from a variety of sources; SQL and either Spark Scala or Spark Java is mandatory (an illustrative sketch follows this list).
3) Experience in Hive with HQL, and expertise in Unix commands and Bash scripting.
4) Must have expertise in Tableau.
5) Build Hadoop Data Lake and analytics applications that meet business stakeholder requirements.
6) Provide best practices for Data Lake design, including the use of multiple data access zones across the Data Lake (raw/landing, operational, self-service, data marts, etc.), and understand how to apply data governance in each.
7) Develop highly scalable and extensible Big Data platforms to ingest, store, model, assure quality standards, and analyse massive data sets from numerous channels and in varying formats.
8) Monitor, manage and tune Hadoop cluster job performance, capacity planning, and security.
9) Ensure proper configuration management and change controls are implemented during code creation and deployment.
10) Strong analytical skills, demonstrated by the ability to research and apply problem-solving skills to complex technical problems.
11) Excellent communication skills (must be able to interface with both technical and business leaders in the organization).
12) Should be able to work independently.
13) Experience with a job scheduling tool.
14) Experience with object-oriented programming using Python is an added advantage.
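For context, here is a minimal sketch of the kind of Hive plus Spark Scala extract-transform-load work described above. The table names, columns, and Data Lake paths are hypothetical and only illustrate the expected skill set, not an actual task.

```scala
// Minimal sketch: read a (hypothetical) Hive table with Spark Scala,
// aggregate it, and write curated output to a Data Lake zone.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object TransactionSummary {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("transaction-summary")
      .enableHiveSupport()   // allows Spark SQL to read Hive-managed tables
      .getOrCreate()

    // Extract: pull raw records from a hypothetical Hive table.
    val transactions = spark.sql(
      "SELECT account_id, txn_amount, txn_date FROM raw_zone.transactions")

    // Transform: aggregate transactions per account per day.
    val daily = transactions
      .groupBy(col("account_id"), col("txn_date"))
      .agg(
        sum("txn_amount").alias("total_amount"),
        count("*").alias("txn_count"))

    // Load: write to the curated zone of the Data Lake as Parquet.
    daily.write
      .mode("overwrite")
      .partitionBy("txn_date")
      .parquet("/datalake/curated/transaction_summary")

    spark.stop()
  }
}
```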