Artem Volk & Fabian Zillgens

Building the platform for providing ML predictions based on real-time player activity

What if data scientists could deploy their own ML models without engineering bottlenecks? This serverless architecture makes it possible.

#1 · about 3 minutes

Customizing the player experience in real time

The business goal is to use real-time player activity to deliver personalized in-game content, such as customized store offers.

#2 · about 3 minutes

Designing the high-level system architecture

The platform follows a three-stage architecture for event collection, data processing, and customization delivery using a standard AWS tech stack.

#3 · about 2 minutes

Building a resilient event collection pipeline

A slim API endpoint ingests high-volume, potentially out-of-order player events and uses an Amazon Kinesis stream to decouple it from downstream processing.
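As a rough illustration of how such an endpoint might hand events to Kinesis (the stream name, field names, and batching helper here are assumptions, not the speakers' code): partitioning by player ID keeps each player's events ordered within a shard, and grouping records respects the 500-record limit of a single PutRecords call.

```python
import json

def build_put_records(events, max_batch=500):
    """Group raw player events into Kinesis PutRecords batches.

    Using the player ID as the partition key keeps each player's
    events in order within a shard; 500 is the per-call record
    limit for PutRecords.
    """
    batch = []
    for event in events:
        batch.append({
            "Data": json.dumps(event).encode("utf-8"),
            "PartitionKey": str(event["player_id"]),
        })
        if len(batch) == max_batch:
            yield batch
            batch = []
    if batch:
        yield batch

# The slim endpoint would then forward each batch, e.g. with boto3:
#   kinesis.put_records(StreamName="player-events", Records=batch)
```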

#4 · about 2 minutes

Separating offline and online data processing

The system uses a dual-path approach, with Apache Spark for offline analytics and Apache Flink with Flink SQL for real-time feature extraction.
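A toy stand-in for the kind of feature the Flink SQL path might compute (the table and column names in the docstring are assumptions): counting events per player per tumbling window.

```python
from collections import defaultdict

def tumbling_window_counts(events, window_seconds=60):
    """Toy, in-memory version of a tumbling-window aggregation that
    Flink SQL would express roughly as:

        SELECT player_id,
               TUMBLE_START(event_time, INTERVAL '60' SECOND) AS w,
               COUNT(*) AS events_in_window
        FROM player_events
        GROUP BY TUMBLE(event_time, INTERVAL '60' SECOND), player_id
    """
    counts = defaultdict(int)
    for event in events:
        # Align each event timestamp to the start of its window.
        window_start = event["ts"] - event["ts"] % window_seconds
        counts[(event["player_id"], window_start)] += 1
    return dict(counts)
```

In the real pipeline Flink handles watermarks and late data; this sketch only shows the windowing arithmetic.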

#5 · about 2 minutes

Creating a low-latency user profile service

A user profile API stores a real-time snapshot of the player's state, updated by the Flink stream with a latency of around 200 milliseconds.
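A minimal in-memory sketch of the profile upsert logic (the real service would sit behind an API and likely use a low-latency store; the class and field names are assumptions): per-feature timestamps make updates last-write-wins, so the out-of-order events mentioned earlier cannot overwrite newer state.

```python
class ProfileStore:
    """In-memory stand-in for the user profile service."""

    def __init__(self):
        self._profiles = {}

    def apply(self, player_id, feature, value, ts):
        """Upsert one feature from the Flink output stream.

        Keep the value only if it is at least as new as what is
        stored, so a late event cannot clobber fresher state.
        """
        profile = self._profiles.setdefault(player_id, {})
        current = profile.get(feature)
        if current is None or ts >= current[1]:
            profile[feature] = (value, ts)

    def snapshot(self, player_id):
        """What the profile API would return: newest value per feature."""
        return {k: v for k, (v, _) in self._profiles.get(player_id, {}).items()}
```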

#6 · about 3 minutes

Delivering customizations via decoupled ML models

Machine learning models are deployed as independent AWS Lambda functions that data scientists can manage, allowing the game to pull personalized content on demand.
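A sketch of what one such model function could look like (the scoring rule, offer IDs, and field names are hypothetical): a standard Lambda handler that takes a profile snapshot and returns a personalized offer, which a data scientist can redeploy without touching the rest of the platform.

```python
import json

def score_offer(profile):
    """Hypothetical scoring logic a data scientist might ship; a real
    model artifact would be loaded once at cold start, outside the
    handler, and invoked here."""
    if profile.get("sessions_last_7d", 0) > 10:
        return {"offer_id": "loyalty-bundle", "discount": 0.2}
    return {"offer_id": "starter-pack", "discount": 0.1}

def lambda_handler(event, context):
    """AWS Lambda entry point: the game (or a thin gateway in front
    of it) invokes this with the player's profile snapshot and gets
    a personalized offer back on demand."""
    profile = json.loads(event["body"])
    return {"statusCode": 200, "body": json.dumps(score_offer(profile))}
```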

#7 · about 5 minutes

Analyzing system latency and architectural trade-offs

Monitoring tools give data scientists visibility into end-to-end latency metrics and highlight the advantages and costs of a highly decoupled system.
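The kind of latency summary such a dashboard surfaces can be sketched as a simple percentile report over end-to-end samples (event ingest to profile update); the function name and sample shape are assumptions.

```python
import statistics

def latency_report(samples_ms):
    """Summarize end-to-end latency samples (milliseconds) into the
    percentiles a monitoring dashboard would typically show."""
    ordered = sorted(samples_ms)

    def pct(p):
        # Nearest-rank style percentile over the sorted samples.
        return ordered[min(len(ordered) - 1, int(p / 100 * len(ordered)))]

    return {
        "p50": pct(50),
        "p95": pct(95),
        "p99": pct(99),
        "mean": statistics.mean(ordered),
    }
```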

#8 · about 2 minutes

Implementing AWS cost optimization strategies

Costs are managed through techniques like event batching, data compression, aggressive Kinesis autoscaling, and S3 data partitioning and storage classes.
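Two of those techniques, batching and compression, can be sketched together (the record layout is an assumption): packing many small events into one gzip-compressed newline-delimited record means fewer Kinesis PUT payload units and smaller S3 objects, both of which cut cost directly.

```python
import gzip
import json

def compress_batch(events):
    """Pack a list of player events into one newline-delimited JSON
    blob and gzip it, returning (raw, compressed) for comparison."""
    raw = "\n".join(json.dumps(e) for e in events).encode("utf-8")
    return raw, gzip.compress(raw)
```

Repetitive event payloads compress well, so the savings compound with volume.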

#9 · about 7 minutes

Q&A on model quality, scale, and player privacy

The team answers audience questions about event volume, ensuring model quality, load balancing, using AWS ML services, and handling player data privacy.
