Dieter Flick

Building Real-Time AI/ML Agents with Distributed Data using Apache Cassandra and Astra DB

Are your LLMs stuck on outdated data? Learn how the RAG pattern and a vector database can help you build smarter, context-aware AI agents.

#1 · about 3 minutes

Introducing the DataStax real-time data cloud

The platform combines Apache Cassandra, Apache Pulsar, and Kaskada to provide a flexible database, streaming, and machine learning solution for developers.

#2 · about 3 minutes

Interacting with Astra DB using GraphQL and REST APIs

A live demonstration shows how to create a schema, ingest data, and query tables in Astra DB using both GraphQL and REST API endpoints.
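The requests in the demo can be sketched as follows. The database ID, region, keyspace, table, and token below are all placeholder values, and the endpoint shapes follow the Stargate-style APIs Astra DB exposes; check the current Astra DB documentation before relying on them.

```python
# Sketch: constructing REST and GraphQL requests against Astra DB.
# All identifiers here are hypothetical placeholders; no request is sent.

DB_ID = "00000000-0000-0000-0000-000000000000"  # hypothetical database ID
REGION = "us-east1"                              # hypothetical region
KEYSPACE = "demo_ks"                             # hypothetical keyspace
TOKEN = "AstraCS-placeholder-token"              # application token placeholder

base = f"https://{DB_ID}-{REGION}.apps.astra.datastax.com"
rest_url = f"{base}/api/rest/v2/keyspaces/{KEYSPACE}/users"  # REST v2, "users" table
graphql_url = f"{base}/api/graphql/{KEYSPACE}"               # GraphQL endpoint
headers = {"X-Cassandra-Token": TOKEN, "Content-Type": "application/json"}

# A GraphQL query selecting rows from a hypothetical "users" table
query = """
query {
  users(value: { id: "42" }) {
    values { id name }
  }
}
"""
```

With a real token and database, these URLs and headers would be passed to an HTTP client such as `requests` to create schemas, ingest rows, and query tables as shown in the demo.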

#3 · about 1 minute

Understanding real-time AI and its applications

Real-time AI leverages the most recent data to power predictive analytics and automated actions, as seen in use cases from Uber and Netflix.

#4 · about 2 minutes

What is Retrieval Augmented Generation (RAG)?

RAG is a pattern that allows large language models to access and use your proprietary, up-to-date data to provide contextually relevant responses.

#5 · about 3 minutes

Key steps for building a generative AI agent

The process involves defining the agent's purpose, choosing an LLM, selecting context data, picking an embedding model, and performing prompt engineering.
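The steps above can be sketched as a single agent configuration. Every value here is an illustrative assumption, not the talk's actual choices:

```python
# One entry per step from the talk: purpose, LLM, context data,
# embedding model, and prompt engineering. All values are placeholders.
agent_config = {
    "purpose": "answer product questions from internal docs",   # define the purpose
    "llm": "some-chat-model",                                   # choose an LLM
    "context_sources": ["product_docs", "faq"],                 # select context data
    "embedding_model": "some-embedding-model",                  # pick an embedding model
    "prompt_template": "Use the context to answer: {question}", # prompt engineering
}
```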

#6 · about 3 minutes

Exploring the architecture of a RAG system

A RAG system uses a vector database to perform a similarity search on data embeddings, finding relevant context to enrich the prompt sent to the LLM.
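The similarity-search step can be illustrated with a toy in-memory index: embeddings are just vectors, and "relevant" means nearest by cosine similarity. A real system would use an embedding model and a vector database such as Astra DB instead of the hand-written vectors below.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Toy "document" embeddings; in practice an embedding model produces these.
index = {
    "cassandra docs": [0.9, 0.1, 0.0],
    "pulsar docs":    [0.1, 0.9, 0.0],
    "kaskada docs":   [0.0, 0.2, 0.9],
}

def top_k(query_vec, k=2):
    """Return the k document names most similar to the query vector."""
    scored = sorted(index.items(),
                    key=lambda kv: cosine(query_vec, kv[1]),
                    reverse=True)
    return [name for name, _ in scored[:k]]

nearest = top_k([0.8, 0.2, 0.1])  # chunks that would enrich the prompt
```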

#7 · about 3 minutes

Generating vector embeddings from text content

A Jupyter Notebook demonstrates splitting source text into chunks and using an embedding model to create vector representations for storage and search.
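The chunking step can be sketched as a simple word-based splitter with overlap, so a sentence cut at one boundary still appears whole in the next chunk. The chunk size, overlap, and whitespace splitting are illustrative choices, not the notebook's exact parameters.

```python
def chunk_text(text, chunk_size=20, overlap=5):
    """Split text into word-based chunks of chunk_size words,
    each overlapping the previous chunk by `overlap` words."""
    words = text.split()
    step = chunk_size - overlap
    chunks = []
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + chunk_size]))
        if start + chunk_size >= len(words):
            break
    return chunks

# A synthetic 50-word document for demonstration
doc = " ".join(f"word{i}" for i in range(50))
chunks = chunk_text(doc)
# Each chunk would then be passed to an embedding model and stored
# alongside its vector for later similarity search.
```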

#8 · about 4 minutes

The end-to-end data flow of a RAG query

A user's question is converted into an embedding, which drives a similarity search in the vector store; the results are then combined with other context to build the final prompt.
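The prompt-assembly step of that flow can be sketched as below. The template wording is an assumption, and the retrieval is stubbed out with a pre-fetched list of chunks.

```python
def build_prompt(question, retrieved_chunks, history=""):
    """Combine retrieved context, optional history, and the user's
    question into a single prompt string for the LLM."""
    context = "\n".join(f"- {c}" for c in retrieved_chunks)
    return (
        "Answer using only the context below.\n"
        f"Context:\n{context}\n"
        f"{history}"
        f"Question: {question}\n"
    )

# Chunks here stand in for the results of the vector similarity search
prompt = build_prompt(
    "How does Cassandra replicate data?",
    ["Cassandra replicates rows across nodes per the replication factor."],
)
```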

#9 · about 3 minutes

Executing a RAG prompt to get an LLM response

The demo shows how the context-enriched prompt is sent to an LLM to generate a relevant answer, including how to add memory for conversational history.
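Adding memory for conversational history can be sketched as appending past turns to each new prompt. The `call_llm` function below is a stub standing in for a real LLM API call, and the prompt layout is an assumption rather than the demo's exact format.

```python
def call_llm(prompt):
    # Placeholder: a real implementation would call an LLM API here.
    return f"(answer derived from {len(prompt)} chars of prompt)"

class Conversation:
    """Keeps (question, answer) turns and replays them as history."""

    def __init__(self):
        self.turns = []

    def ask(self, question, context):
        history = "\n".join(f"Q: {q}\nA: {a}" for q, a in self.turns)
        prompt = (
            f"Context: {context}\n"
            f"History:\n{history}\n"
            f"Question: {question}"
        )
        answer = call_llm(prompt)
        self.turns.append((question, answer))
        return answer

chat = Conversation()
chat.ask("What is Astra DB?", "Astra DB is a managed Cassandra service.")
chat.ask("Does it support vector search?", "Astra DB offers vector search.")
```

The second call sees the first question and answer in its history, which is what lets the LLM resolve follow-up references like "it".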

#10 · about 3 minutes

Getting started with the Astra DB vector database

Resources are provided for getting started with Astra DB, including quick starts, a free tier for developers, and information on multi-cloud region support.
