Following the successful launch of Chase in 2021, we're a new team with a new mission. We're creating products that solve real-world problems and put customers at the center, all in an environment that nurtures skills and helps you realize your potential. Our team is key to our success. We're people-first, and we value collaboration, curiosity and commitment.
As a Data Platform Engineer at JPMorgan Chase within the Accelerator business, you are the heart of this venture, focused on getting smart ideas into the hands of our customers. You have a curious mindset, thrive in collaborative squads, and are passionate about new technology. By your nature, you are also solution-oriented, commercially savvy and have a head for fintech. You thrive in working in tribes and squads that focus on specific products and projects – and depending on your strengths and interests, you'll have the opportunity to move between them.
Technologies we use: Java, Kotlin, Kubernetes, Apache Kafka, GCP, BigQuery, Spark
While we're looking for professional skills, culture is just as important to us. We understand that everyone's unique, and that diversity of thought, experience and background is what makes a good team great. By bringing people with different points of view together, we can represent everyone and truly reflect the communities we serve. This way, there's scope for you to make a huge difference, both on us as a company and on our clients and business partners around the world.
Job responsibilities:
Build infrastructure to support financial products at scale.
Set up the data platform that complements the application platform, providing modern data services (ingestion, querying, governance, etc.) for the applications running on it.
Use open-source products whenever we can, and roll our own solutions when that makes sense.
Help teams identify their data needs and leverage the platform in the best possible way.
Be a point of contact for other teams on the regulatory and control aspects of data, as we tailor our solutions to accommodate them.

Required qualifications, capabilities and skills:
Formal training or certification on problem-solving concepts and proficient applied experience.
A problem solver: you can independently analyze a problem and come up with options for solving it.
Flexibility regarding tools and languages: for example, you are open to debugging an SSO issue in a Python service one day and digging into a Java/Kotlin out-of-memory issue the next (of course, we take your expertise into account, and you will have team members to help you out!).
Knowledge of data structures.
Experience with either Kubernetes or Docker.

Preferred qualifications, capabilities and skills:
Experience with at least one cloud platform.
Experience with message brokers (Kafka, RabbitMQ, Pulsar, etc.).
Experience setting up data platforms and setting standards, not just pipelines, preferred.
Experience with a distributed data processing environment/framework (e.g. Spark or Flink) preferred.