Roberto Carratalá & Cedric Clyburn
Self-Hosted LLMs: From Zero to Inference
#1 · about 3 minutes
The rise of self-hosted open source AI models
Self-hosting large language models offers developers greater privacy, cost savings, and control compared to third-party cloud AI services.
#2 · about 2 minutes
Key benefits of local LLM deployment for developers
Running models locally improves the development inner loop, provides full data privacy, and allows for greater customization and control over the AI stack.
#3 · about 3 minutes
Comparing open source tools for serving LLMs
Explore different open source tools like Ollama for local development, vLLM for scalable production, and Podman AI Lab for containerized AI applications.
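One practical consequence of this tool landscape: both Ollama and vLLM can expose an OpenAI-compatible chat-completions endpoint, so the same client code works against either backend. The sketch below only builds the request body (no server call); the model name is an illustrative assumption, not something prescribed by the talk.

```python
import json

# Ollama (`ollama serve`, default port 11434) and vLLM's OpenAI-compatible
# server (default port 8000) both accept this /v1/chat/completions payload
# shape, so client code stays identical while you swap serving backends.

def chat_payload(model: str, prompt: str, temperature: float = 0.2) -> dict:
    """Build an OpenAI-style chat-completions request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
    }

payload = chat_payload("granite3.1-dense:8b", "Summarize RAG in one sentence.")
print(json.dumps(payload, indent=2))
```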
#4 · about 3 minutes
How to select the right open source LLM
Navigate the vast landscape of open source models by understanding different model families, their specific use cases, and naming conventions.
#5 · about 3 minutes
Using quantization to run large models locally
Model quantization compresses LLMs to reduce their memory footprint, enabling them to run efficiently on consumer hardware like laptops with CPUs or GPUs.
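To make the memory trade-off concrete, here is a toy sketch of symmetric 8-bit quantization: map float weights onto integer levels, store the integers plus one scale factor, and dequantize at inference time. Real schemes used by the tools above (GGUF, GPTQ, AWQ, and similar) work block-wise and are far more sophisticated; this only illustrates the core idea.

```python
def quantize_int8(weights: list[float]) -> tuple[list[int], float]:
    """Symmetric int8 quantization: w ≈ q * scale, with q in [-127, 127]."""
    scale = max(abs(w) for w in weights) / 127
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q: list[int], scale: float) -> list[float]:
    return [v * scale for v in q]

weights = [0.42, -1.27, 0.05, 0.9]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)

# Each weight now fits in 1 byte instead of 4 (float32): roughly a 4x
# size reduction, at the cost of a small rounding error per weight.
print(max(abs(a - b) for a, b in zip(weights, restored)))
```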
#6 · about 1 minute
Strategies for integrating local LLMs with your data
Learn three key methods for connecting local models to your data: Retrieval-Augmented Generation (RAG), local code assistants, and building agentic applications.
#7 · about 6 minutes
Demo: Building a RAG system with local models
Use Podman AI Lab to serve a local LLM and connect it to AnythingLLM to create a question-answering system over your private documents.
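The retrieval step a tool like AnythingLLM performs before querying the model can be sketched in a few lines: score each document chunk against the question, then prepend the best match to the prompt. Real systems use embedding models and a vector store; bag-of-words cosine similarity stands in here so the example runs without any model, and the sample documents are invented.

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question: str, chunks: list[str]) -> str:
    """Return the chunk most similar to the question."""
    q = Counter(question.lower().split())
    return max(chunks, key=lambda c: cosine(q, Counter(c.lower().split())))

chunks = [
    "Invoices are archived under the finance share, sorted by year.",
    "The VPN gateway address changed in March.",
]
context = retrieve("where are invoices archived", chunks)

# The retrieved chunk becomes grounding context for the local model.
prompt = f"Answer using only this context:\n{context}\n\nQuestion: where are invoices archived"
print(context)
```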
#8 · about 5 minutes
Demo: Setting up a local AI code assistant
Integrate a self-hosted LLM with the Continue VS Code extension to create a private, offline-capable AI pair programmer for code generation and analysis.
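Continue reads its model list from a user-level configuration file; the exact schema varies between releases, so treat the shape below as an illustrative assumption (it matches older `config.json`-era Continue releases) rather than the current spec, and the model name is likewise invented. Check the Continue documentation for the format your version expects.

```python
import json

# Illustrative Continue-style configuration pointing the extension at a
# locally served Ollama model instead of a cloud API. Schema and model
# name are assumptions for illustration only.
config = {
    "models": [
        {
            "title": "Local Granite (Ollama)",
            "provider": "ollama",
            "model": "granite-code:8b",
            "apiBase": "http://localhost:11434",
        }
    ],
}
print(json.dumps(config, indent=2))
```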
#9 · about 4 minutes
Demo: Building an agentic app with external tools
Create an agentic application that uses a local LLM with external tools via the Model Context Protocol (MCP) to perform complex, multi-step tasks.
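The pattern MCP standardizes can be reduced to a small dispatch loop: the model emits a structured tool call, the host executes the matching tool, and the result is fed back into the conversation. The toy below fakes the model side so the dispatch step is visible without a running server; the tool name and the hard-coded "model" reply are invented for illustration.

```python
import json

# Host-side tool registry. In a real MCP setup these tools would be
# advertised to the model by an MCP server, not hard-coded here.
TOOLS = {
    "get_weather": lambda city: f"18°C and cloudy in {city}",
}

def run_agent_step(model_reply: str) -> str:
    """Parse a structured tool call from the model's reply and execute it."""
    call = json.loads(model_reply)
    tool = TOOLS[call["tool"]]
    return tool(**call["arguments"])

# In a real agent, this JSON comes from the LLM's tool-calling output;
# the host would append the result and ask the model to continue.
fake_model_reply = '{"tool": "get_weather", "arguments": {"city": "Ulm"}}'
result = run_agent_step(fake_model_reply)
print(result)  # → 18°C and cloudy in Ulm
```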
#10 · about 1 minute
Conclusion and the future of open source AI
Self-hosting provides a powerful, private, and customizable alternative to third-party services, highlighting the growing potential of open source AI for developers.