Job description for Requirement 1:
- Experience with major Hadoop distributions - Cloudera, Hortonworks, BigInsights, MapR - or cloud-based Hadoop distributions - AWS EMR, Azure HDInsight, Watson Data Platform (Bluemix)
- Experience with Apache Hadoop ecosystem components - MapReduce, Pig, Hive, HBase, Flume, Sqoop, Storm, Spark, Ambari, Oozie, ZooKeeper
- Proficient in SQL, Spark, Python, Scala, and Java
- Experience in implementing use cases for Hadoop - DW Augmentation and Data Exploration
- Good understanding of, and experience with, traditional DW architecture solutions - Kimball and Inmon implementations
- Experience and knowledge of traditional DW database platforms - DB2, Oracle, SQL Server, Netezza, Teradata, Greenplum, or other appliance-based solutions
- Experience designing/developing RDBMS-to-Hadoop migrations
- Experience in developing Data Lake Solutions
- Experience with metadata management tools - Cloudera Navigator, Apache Atlas, Apache Falcon
- Experience with NoSQL databases - MongoDB, Cassandra, CouchDB, or Cloudant
- Experience in Customer Experience Management, Risk Management, or real-time data management use cases and projects such as Marketing and Web Analytics, Social Analytics, Sensor Data for IoT, and Risk Analytics
- Ability to size and develop/validate the physical architecture of a data platform