Senior Data Engineer - Data Pipelines
Job description
Here, the answers aren't always available, so you'll need to bring a fearless, self-starter mindset to navigate uncharted territories. You'll harness your energy to discover and build the connections with colleagues needed to shape the future and achieve maximum impact.
You will join a team that fuses data engineering with cutting-edge science, using HPC and AWS to deliver reproducible workflows at scale. From first ingestion to consumption by scientists and AI models, you will set the standard for reliability, speed, and governance across our data foundation.
Do you thrive where learning is continuous and bold ideas are encouraged? You will have the freedom to experiment, the support to grow, and the opportunity to see your work influence breakthroughs as they take shape.
Accountabilities:
- Pipeline Engineering: Design, implement, and operate fit-for-purpose data pipelines for bioinformatics and scientific data, from ingestion to consumption (a minimal sketch of such a step follows this list).
- Workflow Orchestration: Build reproducible pipelines using frameworks such as Nextflow (preferred) or Snakemake; integrate with schedulers and HPC/cloud resources.
- Data Platforms: Develop data models, warehousing layers, and metadata/lineage; ensure data quality, reliability, and governance.
- Scalability and Performance: Optimize pipelines for throughput and cost across Unix/Linux HPC and cloud environments (AWS preferred); implement observability and reliability practices.
- Collaboration: Translate scientific and business requirements into technical designs; partner with CPSS stakeholders, R&D IT, and DS&AI to co-create solutions.
- Engineering Excellence: Establish and maintain version control, CI/CD, automated testing, code review, and design patterns to ensure maintainability and compliance.
- Enablement: Produce documentation and reusable components; mentor peers and promote best practices in data engineering and scientific computing.
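To make the pipeline-engineering and governance accountabilities above concrete, here is a minimal Python sketch of a single ingestion step that applies a basic data-quality gate and writes lineage metadata alongside its output. It is illustrative only: the file paths, column names, and helper functions are assumptions for this example, not part of any existing pipeline.

```python
"""Hypothetical sketch: a quality-checked ingestion step with a lineage sidecar.
File paths, columns, and thresholds are illustrative assumptions only."""
import csv
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path


def sha256_of(path: Path) -> str:
    """Checksum the raw input so downstream consumers can verify provenance."""
    return hashlib.sha256(path.read_bytes()).hexdigest()


def ingest(raw_csv: Path, out_dir: Path) -> Path:
    """Read a raw CSV, apply a simple quality gate, and emit curated data plus lineage."""
    out_dir.mkdir(parents=True, exist_ok=True)
    with raw_csv.open(newline="") as handle:
        rows = list(csv.DictReader(handle))

    # Basic data-quality gate: required columns present, no empty sample IDs.
    required = {"sample_id", "measurement"}  # hypothetical column names
    if rows and not required.issubset(rows[0]):
        raise ValueError(f"missing required columns: {required - set(rows[0])}")
    clean = [r for r in rows if r["sample_id"].strip()]

    # Curated output for downstream consumers (scientists, models, warehouse loads).
    curated = out_dir / f"{raw_csv.stem}.curated.csv"
    with curated.open("w", newline="") as handle:
        writer = csv.DictWriter(handle, fieldnames=sorted(required))
        writer.writeheader()
        writer.writerows({k: r[k] for k in required} for r in clean)

    # Lineage sidecar: where the data came from, what was kept, and when.
    lineage = {
        "source": str(raw_csv),
        "source_sha256": sha256_of(raw_csv),
        "rows_in": len(rows),
        "rows_out": len(clean),
        "ingested_at": datetime.now(timezone.utc).isoformat(),
    }
    (out_dir / f"{raw_csv.stem}.lineage.json").write_text(json.dumps(lineage, indent=2))
    return curated


if __name__ == "__main__":
    # Hypothetical paths for demonstration.
    ingest(Path("data/raw/assay_batch_001.csv"), Path("data/curated"))
```

In practice a step like this would run inside an orchestrated workflow (for example a Nextflow or Snakemake task) rather than as a standalone script; the sketch only shows the quality-gate and lineage pattern.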
Your wellbeing means a lot to us, and we're here to support you through all of life's ups and downs. That's why we offer an unpaid leave policy, annual leave, reduced-hours timetables and a host of benefits, including a retirement plan, long service award, and health and travel insurance.
Requirements
Ready to make an impact in your career? If you're passionate, growth-oriented and a true team player, we'll help you succeed. Here are some of the skills and capabilities we look for.
Seize ownership and excel with autonomy to enjoy the constant rush of ground-breaking discovery. Your ability to anticipate sudden shifts and adapt swiftly will prove critical as you make your mark in an environment that rewards initiative and resilience.
- Strong programming in Python and Bash for workflow development and scientific computing.
- Experience with containerization and packaging (Docker, Singularity, Conda) for reproducible pipelines.
- Familiarity with data warehousing and analytics platforms (e.g., Redshift, Snowflake, Databricks) and data catalog/lineage tools.
- Experience with observability and reliability tooling (Prometheus/Grafana, ELK, tracing) in HPC and cloud contexts.
- Knowledge of infrastructure as code and cloud orchestration (Terraform, CloudFormation, Kubernetes).
- Understanding of FAIR data principles and domain-specific bioinformatics formats and standards.
- Track record of mentoring engineers and enabling cross-functional teams with reusable components and documentation.
- Experience optimizing performance and cost on AWS, including spot strategies, autoscaling, and storage tiers (see the storage-tiering sketch after this list).
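As an illustration of the storage-tiering point above, here is a small, hedged boto3 sketch that applies an S3 lifecycle rule so archived run outputs transition to cheaper storage classes. The bucket name, prefix, and transition windows are hypothetical, and running it would require boto3 plus credentials allowed to set lifecycle configuration on the bucket.

```python
"""Illustrative only: tier cold pipeline outputs to cheaper S3 storage classes.
The bucket, prefix, and day thresholds below are hypothetical assumptions."""
import boto3


def apply_cold_data_tiering(bucket: str) -> None:
    """Attach a lifecycle rule that moves archived run outputs to cheaper tiers."""
    s3 = boto3.client("s3")
    s3.put_bucket_lifecycle_configuration(
        Bucket=bucket,
        LifecycleConfiguration={
            "Rules": [
                {
                    "ID": "tier-archived-run-outputs",
                    "Filter": {"Prefix": "runs/archive/"},  # hypothetical prefix
                    "Status": "Enabled",
                    "Transitions": [
                        # Infrequently accessed outputs move to Standard-IA, then Glacier.
                        {"Days": 30, "StorageClass": "STANDARD_IA"},
                        {"Days": 180, "StorageClass": "GLACIER"},
                    ],
                }
            ]
        },
    )


if __name__ == "__main__":
    apply_cold_data_tiering("example-scientific-data-bucket")  # hypothetical bucket
```

Comparable cost levers mentioned above, such as spot purchasing for burst compute and autoscaling worker pools, follow the same idea; the lifecycle rule is simply the most self-contained example to show.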
When we put unexpected teams in the same room, we unleash bold thinking with the power to inspire life-changing medicines. In-person working gives us the platform we need to connect, work at pace and challenge perceptions. That's why we work, on average, a minimum of three days per week from the office. But that doesn't mean we're not flexible: we balance the expectation of being in the office with respect for individual flexibility. Join us in our unique and ambitious world.