Vijay Krishan Gupta & Gauravdeep Singh Lotey
Creating Industry ready solutions with LLM Models
#1 · about 3 minutes
Understanding LLMs and the transformer self-attention mechanism
Large Language Models (LLMs) are defined by their parameters and training data, with the transformer's self-attention mechanism being key to resolving ambiguity in language.
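The chapter summary above points to self-attention as the mechanism that resolves ambiguity. The snippet below is a minimal NumPy sketch of single-head scaled dot-product attention; it is illustrative only (real transformers add multiple heads, masking, and learned projections) and is not code from the talk.

```python
import numpy as np

def self_attention(X, W_q, W_k, W_v):
    """Scaled dot-product self-attention over a sequence of token embeddings X."""
    Q, K, V = X @ W_q, X @ W_k, X @ W_v              # project tokens into queries, keys, values
    scores = Q @ K.T / np.sqrt(K.shape[-1])          # how strongly each token attends to every other
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over each row
    return weights @ V                               # each output mixes value vectors by attention weight

# Toy example: 4 tokens with 8-dimensional embeddings and random projection matrices.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
W_q, W_k, W_v = [rng.normal(size=(8, 8)) for _ in range(3)]
print(self_attention(X, W_q, W_k, W_v).shape)        # -> (4, 8)
```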
#2 · about 4 minutes
Exploring the business adoption and emergent abilities of LLMs
Businesses are rapidly adopting LLMs due to their emergent abilities like in-context learning, instruction following, and chain-of-thought reasoning, which go beyond their original design.
#3 · about 9 minutes
Demo of an enterprise assistant for integrated systems
The Simplify Path demo showcases a unified chatbot interface that integrates with various enterprise systems like HRMS, Jira, and Salesforce for both informational queries and transactional tasks.
#4 · about 3 minutes
Demo of a document compliance checker for pharmaceuticals
The Doc Compliance tool validates pharmaceutical documents against a source-of-truth compliance document to ensure all parameters meet regulatory requirements.
#5 · about 3 minutes
Demo of a chatbot builder for any website
Web Water is a product that converts any website into an interactive chatbot by scraping its HTML, text, and media content to answer user questions.
#6 · about 5 minutes
Navigating the common challenges of building with LLMs
Key challenges in developing LLM applications include managing hallucinations, ensuring data privacy for sensitive industries, improving usability, and addressing the lack of repeatability.
#7 · about 7 minutes
Using prompt optimization to improve LLM usability
Prompt optimization techniques, such as defining a role, using zero-shot, few-shot, and chain-of-thought prompting, can significantly improve the quality and relevance of LLM outputs.
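As a rough illustration of the prompting styles named above, the sketch below shows a role-setting system message plus zero-shot, few-shot, and chain-of-thought prompts as plain message payloads. The HR scenario and wording are assumptions, not taken from the talk, and the messages can be sent to any chat-completion API.

```python
# Role definition via a system message, plus zero-shot, few-shot, and
# chain-of-thought user prompts. Plain strings/dicts only; the scenario is hypothetical.

system_role = {
    "role": "system",
    "content": "You are a support assistant for an HR management system. "
               "Answer concisely and only from approved HR policy documents.",
}

zero_shot = ("Classify the sentiment of this review as positive or negative: "
             "'The onboarding portal was confusing.'")

few_shot = (
    "Review: 'Payroll was processed on time.' -> positive\n"
    "Review: 'I could not find my leave balance.' -> negative\n"
    "Review: 'The onboarding portal was confusing.' ->"
)

chain_of_thought = (
    "A team of 12 people each take 2 days of leave in a 5-day week. "
    "How many person-days of work remain? Think step by step before giving the final number."
)

# Example payload for a chat-completion call: the role message plus one prompt style.
messages = [system_role, {"role": "user", "content": few_shot}]
```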
#8 · about 4 minutes
Advanced techniques like RAG, function calling, and fine-tuning
Overcome LLM limitations by using Retrieval-Augmented Generation (RAG) for domain-specific knowledge, function calling for real-time tasks, and fine-tuning for specialized models.
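One way to picture the function-calling pattern mentioned above: the application advertises a tool schema, the model returns a structured call instead of free text, and the application executes it and feeds the result back. The `get_leave_balance` helper, the schema, and the JSON response below are hypothetical, shown only to sketch the flow.

```python
import json

# Hypothetical business function the LLM is allowed to call for real-time data.
def get_leave_balance(employee_id: str) -> dict:
    # A real system would query the HRMS here; this is a stub.
    return {"employee_id": employee_id, "remaining_days": 12}

TOOLS = {"get_leave_balance": get_leave_balance}

# Tool description handed to the model so it knows what it may call.
tool_spec = [{
    "name": "get_leave_balance",
    "description": "Return the remaining leave days for an employee.",
    "parameters": {"type": "object",
                   "properties": {"employee_id": {"type": "string"}},
                   "required": ["employee_id"]},
}]

# Suppose the model answers with a structured tool call instead of free text:
model_tool_call = '{"name": "get_leave_balance", "arguments": {"employee_id": "E-1042"}}'

call = json.loads(model_tool_call)
result = TOOLS[call["name"]](**call["arguments"])   # dispatch to the real function
print(result)  # the result is then passed back to the model to compose the final answer
```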
#9 · about 10 minutes
Code walkthrough for building a RAG-based chatbot
A practical code demonstration shows how to build a RAG pipeline using LangChain, ChromaDB for vector storage, and an open-source Llama 2 model to answer questions from a specific document.
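The talk's walkthrough is not reproduced here, but a condensed sketch of the same kind of pipeline might look like the following. Exact imports depend on the LangChain version, the document path and GGUF model file are placeholders, and the speakers' own code may differ in detail.

```python
# Condensed RAG sketch: load a document, chunk it, embed into Chroma,
# and answer questions with a locally hosted Llama 2 model.
from langchain.document_loaders import TextLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.vectorstores import Chroma
from langchain.llms import LlamaCpp
from langchain.chains import RetrievalQA

# 1. Load and chunk the source document (placeholder path).
docs = TextLoader("company_policy.txt").load()
chunks = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50).split_documents(docs)

# 2. Embed the chunks and persist them in a local Chroma vector store.
embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")
vectordb = Chroma.from_documents(chunks, embeddings, persist_directory="chroma_db")

# 3. Point a local Llama 2 model (GGUF build, placeholder filename) at the retriever.
llm = LlamaCpp(model_path="llama-2-7b-chat.Q4_K_M.gguf", n_ctx=2048, temperature=0)
qa = RetrievalQA.from_chain_type(llm=llm, retriever=vectordb.as_retriever(search_kwargs={"k": 3}))

# 4. Questions are now answered from the retrieved chunks rather than the model's pretraining alone.
print(qa.run("What is the notice period for resigning employees?"))
```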
#10 · about 9 minutes
Q&A on integration, offline RAG, and the future of LLMs
The discussion covers integrating LLMs into organizations, running RAG offline, suitability for small businesses, and the evolution towards large action models (LAMs).
Matching moments
25:25 · Exploring practical industry use cases for LLMs (from: Exploring LLMs across clouds)
03:36 · The rapid evolution and adoption of LLMs (from: Building Blocks of RAG: From Understanding to Implementation)
07:45 · Using large language models for voice-driven development (from: Speak, Code, Deploy: Transforming Developer Experience with Voice Commands)
23:35 · Defining key GenAI concepts like GPT and LLMs (from: Enter the Brave New World of GenAI with Vector Search)
17:00 · Designing developer tools and documentation for LLMs (from: WAD Live 22/01/2025: Exploring AI, Web Development, and Accessibility in Tech with Stefan Judis)
00:04 · Three pillars for integrating LLMs in products (from: Using LLMs in your Product)
09:55 · Shifting from traditional code to AI-powered logic (from: WWC24 - Ankit Patel - Unlocking the Future Breakthrough Application Performance and Capabilities with NVIDIA)
00:05 · Moving beyond hype with real-world generative AI (from: Semantic AI: Why Embeddings Might Matter More Than LLMs)
Related Videos
Data Privacy in LLMs: Challenges and Best Practices
Aditi Godbole
How to Avoid LLM Pitfalls - Mete Atamel and Guillaume Laforge
Mete Atamel & Guillaume Laforge
Using LLMs in your Product
Daniel Töws
Lies, Damned Lies and Large Language Models
Jodie Burchell
From Traction to Production: Maturing your LLMOps step by step
Maxim Salnikov
Building Blocks of RAG: From Understanding to Implementation
Ashish Sharma
Exploring LLMs across clouds
Tomislav Tipurić
DevOps for AI: running LLMs in production with Kubernetes and KubeFlow
Aarno Aukia
From learning to earning
Jobs that call for the skills explored in this talk.

Machine Learning Engineer - Large Language Models (LLM) - Startup
Startup
Charing Cross, United Kingdom
PyTorch
Machine Learning

Agentic AI Architect - Python, LLMs & NLP
FRG Technology Consulting
Intermediate
Azure
Python
Machine Learning

Manager of Machine Learning (LLM/NLP/Generative AI) - Visas Supported
European Tech Recruit
Bilbao, Spain
Junior
GIT
Python
Docker
Computer Vision
Machine Learning
+2

FTE / Full-Time Position: Data Engineer - AI/ML (LLM, Agentic AI) & Python experience - Onsite: Bournemouth, UK
KBC Technologies UK LTD
Bournemouth, United Kingdom
NLTK
NumPy
Scrum
React
Python
+5

Machine Learning Engineer (LLM)
European Tech Recruit
Madrid, Spain
Intermediate
Python
PyTorch
Computer Vision
Machine Learning

R&D AI Software Engineer / End-to-End Machine Learning Engineer / RAG and LLM
Pathway
Paris, France
Remote
€72-75K
GIT
Python
Unit Testing
+2



Conversational AI & Machine Learning Engineer
Deloitte
Leipzig, Germany
Azure
DevOps
Python
Docker
PyTorch
+6