Software Engineer, Data Platform
Job description
Our mission is to provide a world-class platform that empowers the business to leverage data to enhance, monitor, and support our products. We are responsible for data ingestion systems, processing pipelines, and various data stores, all operating in the cloud. We operate at petabyte scale and support near real-time use cases as well as more traditional batch approaches.

We are looking for a Senior Software Engineer, Data to join our Data Interfaces team. The team builds APIs, backend services, and tooling that enable teams and services across Epic to access and interact with our core data platform, including systems that support operational telemetry collection, data querying, visualization, alerting, and integrations with internal services. In this role, you will design and build backend systems that power how teams across Epic access and leverage data at scale.

Responsibilities
- Design and build backend systems in C# / .NET that enable teams across Epic to access and leverage data at scale
- Work on systems that collect and process operational telemetry used across Epic products
- Collaborate closely with other data platform teams to support scalable data ingestion and access patterns
- Contribute to the architecture and reliability of distributed data systems operating at large scale and support the evolution of these systems over time
- Partner with engineers across teams to integrate data platform capabilities into internal services
Requirements
- Strong software engineering experience building backend systems using C# / .NET
- Experience designing and operating scalable distributed systems
- Experience building or working with data platforms or data-intensive systems
- Hands-on experience with distributed event streaming systems such as Apache Kafka
- Experience working with container orchestration systems such as Kubernetes
- Experience building and operating services in AWS or other cloud platforms
- Familiarity with OLAP databases such as Apache Pinot or ClickHouse
- Experience with modern data lake or warehouse technologies such as S3, Databricks, or Snowflake
- Experience with distributed data processing frameworks such as Apache Flink or Apache Spark, or experience working on large-scale analytics or telemetry platforms is preferred
- Strong communication skills and the ability to collaborate effectively with distributed teams