Lead Data Engineer
Mexico
With you Chubb is better!
Chubb is the world’s largest publicly traded P&C insurance company and a leading commercial lines insurer in the United States. With operations in 54 countries and territories, Chubb provides commercial and personal property and casualty insurance, personal accident and supplemental health insurance, reinsurance, and life insurance to a diverse group of clients. As an underwriting company, we assess, assume, and manage risk with insight and discipline. We service and pay our claims fairly and promptly. We combine the precision of craftsmanship with decades of experience to conceive, craft and deliver the very best insurance coverage and service to individuals and families, and businesses of all sizes.
Chubb is also defined by its extensive product and service offerings, broad distribution capabilities, direct-to-consumer platform partnerships, exceptional financial strength and local operations globally. The company serves multinational corporations, mid-size and small businesses with property and casualty insurance and risk engineering services; affluent and high net worth individuals with substantial assets to protect; individuals purchasing life, personal accident, supplemental health, homeowners, automobile and specialty personal insurance coverage; companies and affinity groups providing or offering accident and health insurance programs and life insurance to their employees or members; and insurers managing exposures with reinsurance coverage.
Position Overview:
We are seeking a highly skilled and experienced Lead Data Engineer to join our dynamic team. The ideal candidate will lead the design, development, and implementation of scalable data pipelines and ensure the effective use of data in business intelligence and decision-making.
Position Summary
This is a hands-on technical role providing direction and thought leadership in the implementation of data solutions and related downstream systems, including the build of repositories such as data warehouses and data lakes, using appropriate technologies. The role requires a strong technical background; hands-on experience with Informatica IICS, Databricks, Python, ADF, Snowflake, and other Azure technologies; data analysis skills; and leadership abilities. The candidate will work on multiple projects delivering data solutions, including new development, maintenance, and enhancements, and will also assist with the daily operations, management, and oversight of the team's deliverables.
Primary Responsibilities:
- Design and implement new solutions for complex data ingestion from multiple sources into enterprise data products, with a focus on automation, performance, resilience, and scalability.
- Partner with Lead Architect, Data Product Manager (Product Owner) and Lead Data Integration Engineer to create strategic solutions introducing new technologies.
- Work with stakeholders including Management, Domain leads, and Teams to assist with data-related technical issues and support their data infrastructure needs.
- Apply strong development and programming experience across the full data stack, from database management, ETL processes, data modeling, and infrastructure through to APIs, front-end applications, and automated data pipelines.
- Experience with Informatica IICS, Python, ADF, Azure Synapse, Snowflake, Cosmos DB, and Databricks.
- Solid understanding of databases, real-time integration patterns, and ETL/ELT best practices.
- Define data retention policies, monitor performance, and advise on any necessary infrastructure changes based on functional and non-functional requirements.
- Ensure enterprise data policies, best practices, standards, and processes are followed.
- Write and maintain technical specifications, design documents, and process flows.
- Mentor a team of onshore and offshore development resources to analyze, design, construct and test software development projects focused on analytics and data integration.
- Elaborate user stories for the technical team and ensure the team understands the deliverables.
- Effectively communicate, coordinate & collaborate with business, IT architecture and data teams across multi-functional areas to complete deliverables.
- Provide direction to the Agile development team and stakeholders throughout the project.
- Assist in Data Architecture design, tool selection and data flows analysis.
- Work with large amounts of data, interpret data, analyze results, perform gap analysis and provide ongoing reports.
- Handle ad-hoc analysis & report generation requests from the business.
- Respond to data related inquiries to support business and technical teams.
Technical Skills / Experience:
- 7+ years of proven working experience in ETL methodologies, data integration, and data migration. Hands-on development skills in Informatica IICS, Databricks/Spark, and Python are a must.
- Hands-on experience with database systems (SQL Server, Oracle, Azure Synapse, Snowflake, Cosmos DB), cloud platforms (e.g., AWS, Azure, Google Cloud), and NoSQL databases (e.g., Cosmos DB, MongoDB, DynamoDB).
- Skilled in backend technologies (Python, Java, Scala) and front-end development (JavaScript, React, Angular).
- Extensive experience developing complex data ecosystem solutions.
- Extensive knowledge of data and analytics frameworks supporting data lakes, warehouses, marts, reporting, etc.
- In-depth knowledge of data engineering and architecture disciplines.
- Extensive experience working with Big Data tools and building data solutions for advanced analytics and Machine learning frameworks.
- Solid understanding of P&C Insurance data
- Technical expertise in data architecture, data models, and database design and development.
- Strong knowledge of and experience with Java, SQL, XML, Python, ETL frameworks, and Databricks.
- Working knowledge/familiarity with Git version control.
- Strong knowledge of dataset analysis using Excel.
- Strong analytical skills with the ability to collect, organize, analyze, and disseminate significant amounts of information with attention to detail and accuracy.
- Quick to learn new technologies, with the ability to understand their capabilities and work with others to guide them into development.
- Good communication and presentation skills
- Solid problem solving, decision making and analytical skills.
- Knowledge of and working experience with Duck Creek is an added plus.
- Knowledge of and working experience with Insurity Policy Decisions and/or IEV is an added plus.
- Experience with JIRA
Other Skills / Experience:
- Experience being part of high-performance agile teams in a fast-paced environment.
- Must understand the system scope and project objectives, and meet project needs through matrix management and collaboration with other enterprise teams.
- Proven ability to produce results in the analysis, design, testing and deployment of applications.
- Strong team emphasis and relationship building skills; partners well with business and other IT/Data areas.
- Strong coaching / mentoring skills
- Applies technical knowledge to determine solutions and solve complex problems.
- Ability to be proactive, self-motivated, detail-oriented, creative, inquisitive, and persistent.
- Excellent communication and negotiation skills.
- Ability to organize, plan and implement work assignments, juggle competing demands and work under pressure of frequent and tight deadlines.