Senior Data Engineer
Job description
FanDuel is looking for an experienced Senior Data Engineer with a deep understanding of large-scale data handling and processing best practices in a cloud environment to help us build scalable systems. Our data is a key component of the business, used by almost every facet of the company, including product development, marketing, operations, and finance, so it is vital that we deliver robust solutions that ensure reliable access to data with a focus on quality and availability.
Our competitive edge comes from making decisions based on accurate and timely data, and your work will provide access to that data across the whole company. Looking ahead to the next phase of our data platform, we are keen to do more with real-time data processing and to work with our data scientists to create machine learning pipelines.
THE GAME PLAN
Everyone on our team has a part to play
Build & Maintain Data Pipelines
- Design, build, and maintain scalable batch and streaming data pipelines for analytics and business operations, capable of handling millions of transactions during peak business hours.
- Write clean, efficient, and well-documented code and test cases using tools like Python, SQL, and Spark.
- Ensure data is reliable, accurate, and delivered in a timely manner.
Collaborate Across Teams
- Work with data analysts, data scientists, and product managers to understand requirements and deliver actionable data solutions.
- Translate business questions into engineering tasks and contribute to technical planning.
- Participate in code reviews, sprint planning, and retrospectives as part of an agile team.
Data Quality & Operations
- Monitor data pipelines and troubleshoot issues in a timely, systematic manner.
- Implement data quality checks and contribute to observability and testing practices.
- Document data sources, transformations, and architecture decisions to support long-term maintainability.
Requirements
- 5+ years of experience writing Python scripts
- 5+ years of SQL experience, including working with relational databases
- Experience building processes supporting data transformation, data structures, metadata, dependency, and workload management
- Proficiency with complex ETL processes
- Demonstrated ability to optimize processes for resource trade-offs (RAM vs. I/O)
- Knowledge of data integrity and relational rules
- Experience setting up self-healing, resilient data pipelines using Airflow, with monitoring and alerting via Monte Carlo, PagerDuty, and Datadog
- Understanding of AWS and Google Cloud
- Ability to quickly learn new technologies
- Proficiency with agile or lean development practices
PLAYER CONTRACT
We treat our team right