Senior Data Engineer

FlexionHire
Charing Cross, United Kingdom
8 days ago

Role details

Contract type
Permanent contract
Employment type
Full-time (> 32 hours)
Working hours
Regular working hours
Languages
English
Experience level
Senior
Compensation
£110K

Job location

Charing Cross, United Kingdom

Tech stack

Amazon Web Services (AWS)
Test Automation
Software as a Service
Data Engineering
Data Infrastructure
Data Systems
Python
SQL Databases
Systems Integration
Spark
Git
PySpark
Data Management
Software Version Control
Databricks

Job description

  • Playing a key role in shaping the foundations of a Databricks-based lakehouse platform: designing how the catalogue is structured, defining core dimensions/facts, and ensuring the platform is discoverable and useful across the business.
  • Writing clean, performant Python and SQL, and working confidently with Spark/PySpark.
  • Integrating third-party tools, connectors, and SaaS data sources into a cohesive data ecosystem.
  • Owning software components end-to-end: from idea, to build, to production (ensuring reliability and maintainability).
  • Championing continuous improvement and modern engineering practices.
  • Working closely with cross-functional stakeholders to turn real-world problems into elegant data solutions.
  • Producing clear, concise technical documentation.
  • Adapting within a fast-evolving environment and contributing across the data remit wherever needed.

Requirements

Do you have experience in Spark?

  • Have hands-on experience building Databricks lakehouse architectures and are excited by shaping foundational data infrastructure.

  • Understand how to engineer data platforms for trust, scalability, and discoverability, rather than just producing pipelines.
  • Are confident with Databricks, AWS, and the modern data stack.
  • Enjoy fast-paced, iterative delivery and creating user-friendly, value-driven outcomes.
  • Collaborate naturally, share ideas openly, and learn from those around you.
  • Are adaptable, curious, and motivated by continuous improvement and learning.
  • Bring strong experience in data engineering, particularly in greenfield or scaling environments (or equivalent).
  • Embrace "data as a product" thinking: ensuring datasets have clear purpose, documentation, quality checks, version control, and measurable value.
  • Think like a seasoned engineer: Git, CI, modular code, automated tests, alerting, and clean architecture are second nature.
  • Are excited to establish foundational patterns that others will follow.
