Senior Data Engineer - Data Pipelines
Job description
Are you ready to build high-performance data pipelines that turn complex science into real impact for patients? In this role, you will transform raw bioinformatics and scientific data into trusted, reusable assets that drive discovery and decision-making across our research programs.
You will join a team that fuses data engineering with cutting-edge science, using HPC and AWS to deliver reproducible workflows at scale. From first ingestion to consumption by scientists and AI models, you will set the standard for reliability, speed, and governance across our data foundation.
Do you thrive where learning is continuous and bold ideas are encouraged? You will have the freedom to experiment, the support to grow, and the opportunity to see your work influence breakthroughs as they take shape.
Accountabilities:
- Pipeline Engineering: Design, implement, and operate fit-for-purpose data pipelines for bioinformatics and scientific data, from ingestion to consumption.
- Workflow Orchestration: Build reproducible pipelines using frameworks such as Nextflow (preferred) or Snakemake; integrate with schedulers and HPC/cloud resources.
- Data Platforms: Develop data models, warehousing layers, and metadata/lineage; ensure data quality, reliability, and governance.
- Scalability and Performance: Optimize pipelines for throughput and cost across Unix/Linux HPC and cloud environments (AWS preferred); implement observability and reliability practices.
- Collaboration: Translate scientific and business requirements into technical designs; partner with CPSS stakeholders, R&D IT, and DS&AI to co-create solutions.
- Engineering Excellence: Establish and maintain version control, CI/CD, automated testing, code review, and design patterns to ensure maintainability and compliance.
- Enablement: Produce documentation and reusable components; mentor peers and promote best practices in data engineering and scientific computing.
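To make the data-quality expectation in the accountabilities above concrete, here is a minimal, illustrative sketch of the kind of validation step a pipeline might run between ingestion and consumption. The schema, column names, and checks are hypothetical examples, not part of this role's actual stack.

```python
import csv
import io

# Hypothetical required schema for an ingested sample sheet; a real
# pipeline would derive this from the platform's data model.
REQUIRED_COLUMNS = {"sample_id", "assay", "read_count"}

def validate_sample_sheet(text: str) -> list[str]:
    """Return a list of data-quality errors found in a CSV sample sheet."""
    errors = []
    reader = csv.DictReader(io.StringIO(text))
    missing = REQUIRED_COLUMNS - set(reader.fieldnames or [])
    if missing:
        return [f"missing columns: {sorted(missing)}"]
    # Header occupies line 1, so data rows start at line 2.
    for line_no, row in enumerate(reader, start=2):
        if not row["sample_id"]:
            errors.append(f"line {line_no}: empty sample_id")
        if not row["read_count"].isdigit():
            errors.append(f"line {line_no}: read_count is not an integer")
    return errors
```

A check like this would typically run as an early pipeline task so that bad batches fail fast instead of propagating errors downstream to scientists and AI models.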
Essential Skills/Experience:
- Strong programming skills in Python and Bash for workflow development and scientific computing.
- Experience with containerization and packaging (Docker, Singularity, Conda) for reproducible pipelines.
- Familiarity with data warehousing and analytics platforms (e.g., Redshift, Snowflake, Databricks) and data catalog/lineage tools.
- Experience with observability and reliability tooling (Prometheus/Grafana, ELK, tracing) in HPC and cloud contexts.
- Knowledge of infrastructure as code and cloud orchestration (Terraform, CloudFormation, Kubernetes).
- Understanding of FAIR data principles and domain-specific bioinformatics formats and standards.
- Track record of mentoring engineers and enabling cross-functional teams with reusable components and documentation.
- Experience optimizing performance and cost on AWS, including spot strategies, autoscaling, and storage tiers.
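As one illustration of the storage-tiers point in the list above, the sketch below encodes a simple tiering heuristic as a pure function. The thresholds are assumptions for the example only; in practice such rules are usually expressed as S3 lifecycle configuration rather than application code.

```python
def choose_storage_class(days_since_access: int, retrievals_per_month: float) -> str:
    """Pick an S3 storage class from simple access-pattern heuristics.

    Thresholds are illustrative, not a recommendation.
    """
    if days_since_access < 30 or retrievals_per_month >= 1:
        return "STANDARD"          # hot data: frequent or recent access
    if days_since_access < 180:
        return "STANDARD_IA"       # infrequent access, still millisecond reads
    return "GLACIER"               # archival (Glacier Flexible Retrieval)
```

The value of writing the rule down, even as a toy, is that throughput and cost trade-offs become reviewable and testable instead of living in one engineer's head.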
When we put unexpected teams in the same room, we unleash bold thinking with the power to inspire life-changing medicines. In-person working gives us the platform we need to connect, work at pace and challenge perceptions. That's why we work, on average, a minimum of three days per week from the office. But that doesn't mean we're not flexible. We balance the expectation of being in the office while respecting individual flexibility. Join us in our unique and ambitious world.