AWS Data Engineer
Requirements
We are looking for a Data Engineer with strong hands-on experience in designing and delivering enterprise-scale data pipelines using AWS Glue and PySpark. The role will involve building and optimising ETL processes, working with raw and curated datasets, and ensuring data is processed efficiently and to a high standard.

You will be responsible for developing scalable, production-grade data workflows, integrating data from multiple systems, and applying best practices around data modelling, data quality, and automation. Experience working within a modern cloud data stack is essential, along with an understanding of how to structure data for analytics, reporting and downstream consumption.

The ideal candidate will have a solid background in Spark-based engineering, particularly PySpark, and be confident working with Glue jobs, the Glue Data Catalog, S3, and other AWS-native services used within a data platform build.
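To illustrate the kind of raw-to-curated transformation the role involves, here is a minimal sketch. It is written in plain Python rather than PySpark so it stands alone; in a real Glue job the same logic would be expressed against a Spark DataFrame. All field names and cleaning rules below are hypothetical examples, not part of the role description.

```python
# Hypothetical raw-to-curated cleaning step: deduplicate by id,
# drop records missing a required field, and normalise a text field.
# A Glue/PySpark job would do the equivalent with DataFrame operations.

def curate(raw_records):
    """Return curated records: unique ids, required 'amount' present,
    'country' upper-cased."""
    seen = set()
    curated = []
    for rec in raw_records:
        rid = rec.get("id")
        if rid is None or rid in seen or not rec.get("amount"):
            continue  # skip duplicates and incomplete records
        seen.add(rid)
        curated.append({**rec, "country": rec.get("country", "").upper()})
    return curated

raw = [
    {"id": 1, "amount": 10.0, "country": "gb"},
    {"id": 1, "amount": 10.0, "country": "gb"},   # duplicate id
    {"id": 2, "amount": None, "country": "fr"},   # missing amount
    {"id": 3, "amount": 5.5, "country": "de"},
]
print(curate(raw))  # only records 1 and 3 survive, countries upper-cased
```

In a production pipeline this step would typically read raw data from S3, register the curated output in the Glue Data Catalog, and be orchestrated as a scheduled Glue job.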