Job Description:
Objectives of this Role
- Develop, implement, and maintain leading-edge data solutions, turning complicated collections/underwriting operational problems into simple data frameworks
- Identify trends and opportunities for growth by structuring complex data sets in partnership with the Operations, Contact Management, and Ops Analytics teams
- Evaluate organizational methods and provide source-to-target mappings and information-model specification documents for data sets
- Create best-practice reports based on data mining, analysis, and visualization
- Evaluate internal systems for efficiency, problems, and inaccuracies, developing and maintaining protocols for handling, processing, and cleaning data
- Work directly with management and users to gather requirements, provide status updates, and build relationships
Daily and Monthly Responsibilities
- Work closely with project managers and key stakeholders to understand and maintain focus on their IM and data needs, including identifying critical metrics and KPIs, and deliver actionable insights to relevant decision-makers
- Proactively analyze data to answer key questions from stakeholders, or out of self-initiated curiosity, with an eye for what drives business performance, investigating and communicating areas for improvement in efficiency and productivity
- Create and maintain rich interactive visualizations through data interpretation and integrating various components from multiple data sources
- Define and implement data acquisition and integration logic, selecting the appropriate combination of methods and tools within the defined technology stack to ensure optimal scalability and performance of the solution
- Develop and maintain databases by acquiring data from primary and secondary sources, and build scripts that make our data-evaluation process more flexible and scalable across data sets
Experience:
- Bachelor's degree in a discipline such as Mathematics, Statistics, Economics, Finance, or Computer Science from a reputed university
- 6-10 years of experience in SAS, Big Data, Hadoop, data-ingestion capabilities, PySpark, and cloud technologies
- 4-8 years of experience in the banking industry, with exposure to risk and operations data, systems, and reporting
- Experience in managing a team of 5-10 people, guiding them technically and functionally, and handling stakeholder and business-partner conversations
- Proven experience in defining and using Hadoop architecture to store and process data, with development and data-analytics experience building jobs in SAS DI Studio (optional)
- Good knowledge of data-ingestion automation and data-validation checks; development experience with Hadoop (HDFS and MapReduce), Hive, Sqoop, HBase, etc.
- Exposure to machine-learning and automation techniques
- Awareness of and working experience with GCP/AWS cloud platforms and technologies
- Experience with languages such as R or Python is an added advantage
Skills needed:
- Design and develop Big Data-based solutions leveraging frameworks such as Hortonworks or MapR
- Study source-system files and formats and develop solutions to ingest the data accordingly
- Work with the team to implement Big Data-based software solutions
- Good interpersonal and teamwork skills
- Good written and oral communication capability
- Experienced in establishing, documenting, and enforcing policies and procedures to standardize technology operations, security controls, and reliability standards.
- Highly experienced in supporting development teams in an Agile environment
- Expert in system availability, performance, capacity, and monitoring, with proper response to incidents, events, and problems
- Good organizational, project management, analytical, problem-solving and verbal/written communication skills.