As an Enterprise Data Solution Architect, you are responsible for supporting the design, implementation, and ongoing scaling of LPL's cloud-based Enterprise Data Lake (EDL)/EDU. You will own the foundational architecture of the EDL platform. Architectural oversight includes the maturation of data microservices (cataloging and master data management), data access patterns (APIs, integrations), storage mediums (S3, Hadoop HDFS, MongoDB, DynamoDB, RDS), and the data science production framework. Duties include leading solution design, producing architectural artifacts, building stakeholder consensus, and partnering with stakeholders on technology selection. You are expected to continually monitor new technologies and partner with leaders to set the vision and define future-state capabilities, with AI in mind. Success in this role requires domain expertise in data management, cloud computing, and microservice design.
Responsibilities:
- Function as a thought leader for the EDL teams; identify upcoming issues and present solutions in advance to support ongoing development
- Analyze complex technical scenarios and provide recommendations based on research and proof-of-concepts
- Create documentation and visual artifacts outlining designs that enable engineers to replicate solutions
- Lead cloud data storage selection; analyze and provide recommendations to match data storage mediums to access patterns and non-functional requirements
- Support the design and scaling of LPL's Cloud Data Lake and Enterprise Data Warehouse
- Support the design and development of LPL's data science production environment
- Lead microservice design for core data utilities, including master data management, data cataloging, lineage, and other core functions
- Advise on monitoring, optimization, and operationalization of services within the EDL
- Support technology cost analysis and capacity planning
- Monitor the ever-changing technology landscape and provide recommendations that best support LPL's data-centric vision
Skills & Qualifications:
- A Bachelor's or Master's degree in Computer Science
- 5+ years of progressively responsible technology experience
- 7 to 10 years overall experience
- Proven ability to communicate clearly to a variety of stakeholders over different mediums
- Able to work independently, navigate ambiguity on requirements and scope, and manage multiple disparate projects with competing deadlines
- Working knowledge of IT processes such as ITIL, Agile, DevOps, change management, and service level agreements
- Strong data domain experience required, such as developing or maintaining data pipelines, writing ETL scripts, performing database administration, or supporting other automated data management functions
- Experience with performance testing and tuning or production support, preferably over data pipelines
- Experience with cloud data storage and processing required
- Experience with integration pattern design, including API endpoint design, file transfer, message queuing, and caching
- Experience with microservice design principles
- Experience with data modeling in relational and NoSQL data stores (AWS DynamoDB, AWS Neptune, MongoDB)
- Familiarity with data science development processes and platforms
- Experience designing and implementing solutions that support large data volumes and streaming data preferred