- Minimum 10 years of solid IT consulting experience in data warehousing, operational data stores, and large-scale implementations.
- Hands-on experience with Hadoop applications (e.g., administration, configuration management, monitoring, debugging, and performance tuning).
- Experience working with Hortonworks/Cloudera/Azure.
- Experience with Hadoop, MapReduce, Hive, HBase, and MongoDB.
- Experience with Spark and Scala/R.
- Experience with Kafka and Storm.
- Experience with Impala, Oozie, Mahout, Flume, ZooKeeper, and Sqoop.
- Proficiency in major programming/scripting languages such as Java, PHP, Ruby, Python, and/or R, along with Linux shell scripting.
- Experience working with ETL tools such as Informatica.
- Experience designing solutions for multiple large data warehouses, with a good understanding of cluster and parallel architectures, high-scale or distributed RDBMS, and/or NoSQL platforms.
- Experience with one of the major cloud-computing infrastructure solutions, such as Amazon Web Services or Microsoft Azure.
- Skilled architect with cross-industry, cross-functional, and cross-domain know-how.