Data Pipeline Engineer: On-Prem to AWS Migration
Job description
Reporting to the Lead Data Engineer, the Data Engineer is responsible for designing and maintaining scalable data pipelines, ensuring data availability, quality, and performance to support analytics and operational decision-making. They contribute to the data engineering roadmap through research, technical vision, business alignment, and prioritisation.
The post-holder is also expected to act as a subject matter expert in data engineering, fostering collaboration, continuous improvement, and adaptability within the organisation. They serve as a mentor and enabler of best practices, advocating for automation, modularity, and an agile approach to data engineering, and they champion a culture of agility, encouraging everyone to embrace Agile values and practices.
The role also has accountability to deputise for their line manager whenever necessary, and is expected to support the product owner community while driving a positive culture (primarily through role modelling) across the Technology department and the wider business.
Key responsibilities
- Design, develop, and maintain scalable, secure, and efficient data pipelines, ensuring data is accessible, high-quality, and optimised for analytical and operational use (see the illustrative sketch after this list).
- Contribute to the data engineering roadmap, ensuring solutions align with business priorities, technical strategy, and long-term sustainability.
- Optimise data workflows and infrastructure, leveraging automation and best practices to improve performance, cost efficiency, and scalability.
- Collaborate with Data Science, Insight, and Governance teams to support data-driven decision-making, ensuring seamless integration of data across the organisation.
- Implement and uphold data governance, security, and compliance standards, ensuring adherence to regulatory and organisational best practices.
- Identify and mitigate risks, issues, and dependencies, ensuring continuous improvement in data engineering processes and system reliability, in line with the company risk framework.
- Ensure quality is maintained throughout the data engineering lifecycle, delivering robust, scalable, and cost-effective solutions on time and within budget.
- Monitor and drive the data pipeline lifecycle, from design and implementation through to optimisation and post-deployment performance analysis.
- Support operational resilience, ensuring data solutions are maintainable, supportable, and aligned with architectural principles.
- Engage with third-party vendors where required, ensuring external contributions align with technical requirements, project timelines, and quality expectations.
- Continuously assess emerging technologies and industry trends, identifying opportunities to enhance data engineering capabilities.
- Document and maintain clear technical processes, facilitating knowledge sharing and operational continuity within the data engineering function.
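To give a concrete, purely illustrative flavour of the pipelines described above, the sketch below shows a minimal Apache Airflow DAG for a daily on-prem-to-Redshift load. It assumes Airflow 2.4+ (where `schedule` replaces `schedule_interval`), and every identifier in it, from the DAG id to the placeholder callables, is hypothetical rather than drawn from this posting.

```python
# Hedged sketch only: DAG id, task ids, schedule, and callables are hypothetical.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_from_onprem():
    # Placeholder: in practice this would pull the daily increment from the
    # on-prem source and stage it in S3.
    print("extracting daily increment from on-prem source")


def load_to_redshift():
    # Placeholder: in practice this would issue a Redshift COPY against the
    # staged S3 files.
    print("loading staged files into Redshift")


with DAG(
    dag_id="onprem_to_redshift_daily",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=5)},
) as dag:
    extract = PythonOperator(
        task_id="extract_from_onprem", python_callable=extract_from_onprem
    )
    load = PythonOperator(
        task_id="load_to_redshift", python_callable=load_to_redshift
    )

    extract >> load  # linear dependency: extract, then load
```

In a real migration the placeholder callables would stage extracts in S3 and issue a Redshift COPY, keeping each task idempotent so retries are safe.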
Requirements
- Excellent communication skills, with the ability to convey technical concepts to both technical and non-technical audiences.
- Experience working with large-scale data processing and distributed computing frameworks.
- Knowledge of cloud platforms such as AWS, Azure, or GCP, with hands-on experience in cloud-based data services.
- Proficiency in SQL and Python for data manipulation and transformation.
- Experience with modern data engineering tools, including Apache Spark, Kafka, and Airflow (a short PySpark sketch follows this list).
- Strong understanding of data modelling, schema design, and data warehousing concepts.
- Familiarity with data governance, privacy, and compliance frameworks (e.g., GDPR, ISO27001).
- Hands-on experience with version control systems (e.g., Git) and infrastructure as code (e.g., Terraform, CloudFormation).
- Understanding of Agile methodologies and DevOps practices for data engineering.
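As a hedged illustration of the stack this role works with (Python, Spark, S3, Redshift), the following PySpark sketch cleanses a raw on-prem extract and stages it as Parquet for a later Redshift COPY; the bucket names, paths, and column names are invented for the example.

```python
# Hedged sketch only: bucket names, paths, and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("onprem_orders_staging").getOrCreate()

# Raw extract landed from the on-prem system (hypothetical location and schema).
raw = spark.read.csv("s3://example-landing/orders/", header=True)

cleaned = (
    raw.dropDuplicates(["order_id"])                     # de-duplicate on the key
    .filter(F.col("order_id").isNotNull())               # drop unusable rows
    .withColumn("order_ts", F.to_timestamp("order_ts"))  # enforce timestamp type
    .withColumn("order_date", F.to_date("order_ts"))     # derive partition column
)

# Stage as Parquet, partitioned by date, ready for a downstream Redshift COPY.
cleaned.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-staging/orders/"
)
```

Staging columnar Parquet in S3 and bulk-loading via COPY, rather than writing row by row, is the conventional pattern for Redshift loads and is the kind of design choice this role would own.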
Tech stack
- AWS
- Cloud
- Data engineering
- Redshift