Data Engineer (Azure)

Hays plc
Charing Cross, United Kingdom
2 days ago

Role details

Contract type
Permanent contract
Employment type
Full-time (> 32 hours)
Working hours
Shift work
Languages
English
Compensation
£143K

Job location

Charing Cross, United Kingdom

Tech stack

Azure
Software Documentation
Code Review
Continuous Integration
Information Engineering
ETL
Data Security
Data Warehousing
Python
SQL Databases
Data Processing
Data Ingestion
Snowflake
Spark
Data Lake
Terraform
Databricks

Job description

Your new company
Working for a renowned financial services organisation.

Your new role
We are seeking a Data Engineer to help design and maintain scalable batch and near-real-time ingestion pipelines, modernise legacy ETL/ELT processes into Azure and Snowflake, and implement best-practice patterns such as CDC, incremental loading, schema evolution, and automated ingestion frameworks. You will build cloud-native solutions using Azure Data Factory/Synapse, Databricks/Spark, ADLS Gen2, and Snowflake capabilities, including stages, file formats, COPY INTO, and Streams/Tasks, to support raw-to-curated data modelling.

The role involves creating reusable components and Python libraries to accelerate delivery across teams, enforcing data quality through validation, observability, and robust pipeline design, and ensuring strong security, governance, and documentation standards. Collaboration within agile workflows, including CI/CD, code reviews, and iterative planning, is also key to delivering consistent, reliable, and secure data solutions.
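To illustrate one of the patterns the role mentions, here is a minimal, library-free sketch of incremental (high-water-mark) loading in Python. The function and field names (`load_increment`, `updated_at`) are illustrative assumptions, not part of the posting; a production pipeline would track the watermark in durable storage and use tools such as Snowflake Streams or ADF change-data-capture features.

```python
def load_increment(source_rows, target, watermark):
    """Append only rows newer than the stored watermark; return the new watermark.

    source_rows: iterable of dicts with a comparable 'updated_at' field (assumed name)
    target:      list acting as a stand-in for the destination table
    watermark:   highest 'updated_at' value already loaded
    """
    new_rows = [r for r in source_rows if r["updated_at"] > watermark]
    target.extend(new_rows)
    if new_rows:
        watermark = max(r["updated_at"] for r in new_rows)
    return watermark


rows = [{"id": 1, "updated_at": 10}, {"id": 2, "updated_at": 20}]
target = []
wm = load_increment(rows, target, 0)   # first run loads both rows
wm = load_increment(rows, target, wm)  # re-run is a no-op: nothing newer
```

Re-running with an unchanged watermark loads nothing, which is what makes the pattern safely re-executable after a failure.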

What you'll need to succeed

  • Strong hands-on data engineering experience, with a particular focus on data ingestion

  • Experience building production pipelines using Azure Data Factory, Databricks, Synapse

  • Solid SQL skills and experience working with modern cloud data warehouses, ideally Snowflake

  • Proficiency in Python for data processing, automation, and pipeline utilities

  • Good understanding of data lake/lakehouse concepts and ingestion patterns

  • Infrastructure-as-Code exposure (Terraform) and CI/CD (Azure DevOps)

  • Able to prototype quickly while adhering to Group standards and controls

  • Communicates clearly with business stakeholders and technical teams

  • Familiarity with orchestration frameworks such as Dagster (desirable)

  • Energy commodity trading experience is a real advantage

What you'll get in return
Flexible working options available.

What you need to do now
If you're interested in this role, click 'apply now' to forward an up-to-date copy of your CV, or call us now. #4782100 - Joshua


Apply for this position