- Hands-on experience working with Hadoop distribution platforms such as Hortonworks, Cloudera, and MapR.
- Ability to work with large volumes of data to derive business intelligence.
- Ability to analyze data, uncover information, derive insights, and propose data-driven strategies.
- Knowledge of database concepts, principles, structures, and best practices.
- Thorough knowledge of Hadoop architecture and HDFS.
- Good knowledge of data warehousing, business intelligence, data management, and data modeling concepts.
- Comprehensive understanding of the Hadoop/MapReduce ecosystem and architecture.
- Experience building stream-processing systems using solutions such as Storm or Spark Streaming.
- Good knowledge of big data querying tools such as Pig, Hive, and Impala.
- Experience with Spark and NoSQL databases such as HBase, Cassandra, and MongoDB.
- Knowledge of various ETL techniques and frameworks, such as Flume.
- Experience with various messaging systems, such as Kafka or RabbitMQ.
- Experience building end-to-end data pipelines, optimizing performance, and troubleshooting with Hadoop ecosystem tools.