Job Description
Be an integral part of an agile team that's constantly pushing the envelope to enhance, build, and deliver top-notch technology products.
As a Lead Software Engineer at JPMorgan Chase within the Enterprise Technology - Risk team, you will play a crucial role in an agile team that is dedicated to developing, enhancing, and delivering top-tier technology products in a secure, stable, and scalable manner. Your technical expertise and problem-solving skills will be instrumental in promoting significant business impact and addressing a wide range of challenges across various technologies and applications. This role involves leading the development of a data pipeline application, a key system for migrating data from on-premises infrastructure to the cloud.
We are seeking a skilled and motivated Data Engineer to join our team. The ideal candidate will be responsible for designing, building, and maintaining scalable data pipelines and systems to support our data-driven decision-making processes. This role requires a strong understanding of data architecture, data modeling, and ETL processes.
Job responsibilities
- Design, develop, and maintain robust data pipelines and ETL processes to ingest, process, and store large volumes of data from various sources.
- Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and deliver solutions that meet business needs.
- Optimize and improve existing data systems for performance, scalability, and reliability.
- Implement data quality checks and validation processes to ensure data accuracy and integrity.
- Monitor and troubleshoot data pipeline issues, ensuring timely resolution and minimal disruption.
- Stay up-to-date with industry trends and best practices in data engineering and incorporate them into our processes.
Required qualifications, capabilities, and skills
- Formal training or certification on software engineering concepts and 5+ years of applied experience.
- Proven experience as a Data Engineer or in a similar role.
- Strong proficiency in SQL and experience with relational databases (e.g., MySQL, PostgreSQL).
- Experience with big data technologies (e.g., Hadoop, Spark) and cloud platforms, particularly AWS.
- Proficiency in programming languages such as Python, Java, or Scala.
- Familiarity with data warehousing solutions, especially Snowflake, and ETL tools.
- Experience with infrastructure as code tools, particularly Terraform.
- Experience with Apache Airflow or AWS MWAA (Managed Workflows for Apache Airflow).
- Experience with containerization and orchestration tools, especially Kubernetes.
- Proficiency with AWS services such as EKS (Elastic Kubernetes Service), EMR (Elastic MapReduce), Lambda, DynamoDB, and ECS (Elastic Container Service).
- Excellent problem-solving skills and attention to detail.
Preferred qualifications, capabilities, and skills
- Experience with Python, Java, Scala, and AWS MWAA.
- Knowledge of Hadoop, AWS, and Terraform concepts and frameworks.
- Bachelor's degree in Computer Science, Engineering, or a related field.