Nimrod Kor
The Limits of Prompting: Architecting Trustworthy Coding Agents
#1 · about 2 minutes
Prototyping a basic AI code review agent
A simple prototype using a GitHub webhook and a single LLM call reveals the potential for understanding code semantics beyond static analysis.
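A prototype of this shape can be sketched in a few lines. This is a minimal illustration, not the speaker's actual code: the `llm` callable is a stand-in for whatever model API is used, and the payload fields follow GitHub's `pull_request` webhook schema.

```python
def build_review_prompt(event: dict, diff: str) -> str:
    """Turn a GitHub pull_request webhook payload plus the PR diff into
    one self-contained review prompt (a single LLM call, no machinery)."""
    pr = event["pull_request"]
    return (
        "You are a senior code reviewer.\n"
        f"PR title: {pr['title']}\n"
        f"PR description: {pr.get('body') or '(none)'}\n\n"
        f"Unified diff:\n{diff}\n\n"
        "List concrete issues (bugs, security, style) as bullet points. "
        "If the change looks fine, say so explicitly."
    )

def review_pull_request(event: dict, diff: str, llm) -> str:
    """`llm` is any callable str -> str; injecting it keeps the flow
    testable offline, while production would wrap a real model API."""
    return llm(build_review_prompt(event, diff))

if __name__ == "__main__":
    event = {"pull_request": {"title": "Add retry logic",
                              "body": "Retries failed HTTP calls"}}
    diff = "+ for i in range(3):\n+     resp = get(url)"
    fake_llm = lambda prompt: "- Consider backoff between retries."
    print(review_pull_request(event, diff, fake_llm))
```

Even at this size, the model can comment on intent and semantics, which is exactly what static analyzers miss.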
#2 · about 2 minutes
Iteratively improving prompts to handle edge cases
Simple prompts fail to consider developer comments or model knowledge cutoffs, requiring more detailed instructions to improve accuracy.
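The fix for these failure modes is to spell the rules out in the prompt. A hedged sketch of what such instructions might look like (the exact wording is illustrative, not taken from the talk):

```python
REVIEW_RULES = """\
Before flagging an issue, follow these rules:
1. Read in-code comments first: if a developer comment explains why the
   code is written this way (e.g. "# intentionally O(n^2), n < 10"),
   do not flag that pattern.
2. Never claim a library, API, or version does not exist just because it
   post-dates your knowledge cutoff; say you cannot verify it instead.
3. Only report issues you can tie to a specific line; no generic advice.
"""

def build_prompt(diff: str) -> str:
    # Prepend the rules to every review request.
    return f"{REVIEW_RULES}\nReview this diff:\n{diff}"
```

Each rule here targets one observed failure (ignored comments, knowledge-cutoff hallucinations, vague output); iterating on this list is the cheap first lever before any architectural change.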
#3 · about 5 minutes
Establishing a robust benchmarking process for agents
A reliable benchmarking pipeline uses a large dataset, concurrent execution, and an LLM-as-a-judge (LLJ) to measure and track performance improvements.
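The shape of such a pipeline can be sketched as follows. This is a simplified assumption of the setup: `agent` and `judge` are injected callables (in practice both would wrap model APIs), and the judge returns a 0..1 score per case.

```python
from concurrent.futures import ThreadPoolExecutor
from statistics import mean

def run_benchmark(dataset, agent, judge, workers=8):
    """Run the agent over every case concurrently, then let an
    LLM-as-a-judge (any callable here) score each review 0..1."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        reviews = list(pool.map(agent, (case["diff"] for case in dataset)))
    scores = [judge(case["expected"], review)
              for case, review in zip(dataset, reviews)]
    return mean(scores)

if __name__ == "__main__":
    dataset = [{"diff": "d1", "expected": "bug"},
               {"diff": "d2", "expected": "ok"}]
    agent = lambda diff: "bug" if diff == "d1" else "ok"   # stand-in agent
    judge = lambda expected, got: 1.0 if expected in got else 0.0
    print(run_benchmark(dataset, agent, judge))  # → 1.0
```

Tracking this single mean score across prompt and architecture changes is what makes improvements measurable rather than anecdotal.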
#4 · about 2 minutes
Decomposing large tasks into specialized agents
To combat inconsistency and hallucinations, a single large task like code review is broken down into multiple smaller, specialized agents.
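A minimal sketch of this fan-out, with hypothetical specialist charters (the actual agent split in the talk may differ):

```python
# Each specialist gets a narrow charter instead of one
# do-everything review prompt.
SPECIALISTS = {
    "security": "Only report security issues (injection, secrets, authz).",
    "performance": "Only report performance problems.",
    "naming": "Only report unclear or inconsistent names.",
}

def run_specialists(diff: str, llm) -> dict:
    """Send the same diff to every narrow agent; `llm` is any
    callable str -> str standing in for a model API."""
    return {name: llm(f"{charter}\nDiff:\n{diff}")
            for name, charter in SPECIALISTS.items()}
```

Because each agent's prompt is small and single-purpose, it is easier to benchmark, tune, and trust than one monolithic reviewer; an aggregation step then merges and deduplicates the findings.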
#5 · about 6 minutes
Leveraging codebase context for deeper insights
Moving beyond prompts, providing codebase context via vector similarity (RAG) and module dependency graphs (AST) unlocks high-quality, human-like feedback.
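The vector-similarity half of this can be sketched as plain cosine retrieval over pre-embedded code chunks (embeddings are assumed precomputed; the AST dependency-graph half is omitted here):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def top_k_context(query_vec, chunks, k=2):
    """chunks: list of (embedding, snippet). Return the k snippets most
    similar to the changed code, to be pasted into the review prompt."""
    ranked = sorted(chunks, key=lambda c: cosine(query_vec, c[0]),
                    reverse=True)
    return [text for _, text in ranked[:k]]
```

Retrieval supplies code that *looks like* the change; the dependency graph supplies code that *calls or is called by* it. Feeding both into the prompt is what lets the reviewer raise the cross-file, human-like findings a diff-only prompt cannot.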
#6 · about 3 minutes
Introducing Awesome Reviewers for community standards
Awesome Reviewers is a collection of prompts derived from open-source projects that can be used to enforce team-specific coding standards.
#7 · about 1 minute
Key takeaways for building reliable LLM agents
The path to a reliable agent involves starting with a proof-of-concept, benchmarking rigorously, using prompt engineering for quick fixes, and investing in deep context.