This course provides a comprehensive understanding of building, deploying, evaluating, and monitoring Generative AI solutions using Databricks. It covers key concepts such as Retrieval-Augmented Generation (RAG), LLM applications, prompt engineering, data preparation, deployment, and evaluation techniques.
Target Audience
Machine Learning Practitioners, Data Engineers, AI Solution Architects, and Technical Leaders interested in developing Generative AI applications using Databricks.
Prerequisites
Familiarity with Natural Language Processing (NLP) concepts
Knowledge of prompt engineering best practices
Basic understanding of the Databricks Data Intelligence Platform
Familiarity with concepts like vector databases, embeddings, and RAG architecture
Course Outline
Module 1: Generative AI Basics
What is Generative AI?
Generative Models
Generative AI Use Cases
LLMs and Generative AI
LLMs are not hype: they are changing the AI game
What is an LLM?
How do LLMs work?
LLMs generate outputs for NLP tasks (see the sketch after this module's topics)
LLM business use cases
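To make "How do LLMs work?" and "LLMs generate outputs for NLP tasks" concrete, here is a minimal sketch (not taken from the course materials) that runs a summarization task through the Hugging Face transformers pipeline; the model name is only one illustrative choice.

```python
# Minimal sketch: a language model producing output for an NLP task (summarization).
# Assumes the `transformers` package is installed; the model name is illustrative.
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

text = (
    "Databricks provides a unified platform for data engineering, analytics, "
    "and machine learning, and it now supports building Generative AI "
    "applications such as retrieval-augmented chatbots."
)

# The model reads the input text and generates a condensed summary.
result = summarizer(text, max_length=40, min_length=10, do_sample=False)
print(result[0]["summary_text"])
```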
Module 2: LLMs and Generative AI
LLM Applications
LLM Flavors
Using Proprietary Models (LLMs-as-a-Service); see the sketch after this module's topics
Databricks AI
Databricks Data Intelligence Platform
Building Gen AI applications on Databricks
Databricks AI — a data-centric AI platform
Databricks AI — optimized for Generative AI
AI Adoption Preparation
How to Prepare for the AI Revolution
Strategic Roadmap for AI Adoption
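As a hedged sketch of the "LLMs-as-a-Service" pattern listed above: proprietary and Databricks-hosted models are commonly exposed through an OpenAI-compatible chat API. The workspace URL, token variable, and endpoint name below are placeholders and assumptions, not values from the course.

```python
# Sketch: querying an LLM served as a service via an OpenAI-compatible API.
# The base_url, credential, and model/endpoint name are placeholders (assumptions).
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["DATABRICKS_TOKEN"],                # placeholder credential
    base_url="https://<workspace-host>/serving-endpoints",  # placeholder endpoint URL
)

response = client.chat.completions.create(
    model="databricks-meta-llama-3-1-70b-instruct",  # illustrative endpoint name
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "List three Generative AI use cases for retail."},
    ],
    max_tokens=200,
)
print(response.choices[0].message.content)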
Module 3: Legal and Ethical Considerations
Potential Risks and Challenges
Legal Issues
Ethical Issues
Social/Environmental Issues
Legal Considerations
Data Privacy in Generative AI
Data Security in Generative AI
Intellectual Property Protection
Litigation and other regulatory risks
Ethical Considerations
Fairness and Bias in Data
Bias Reinforcement Loop
Reliability and Accuracy of AI Systems
Auditing Generative AI Models
Human-AI Interactions
How Will AI Impact Society?
AI and Workforce
Module 4: From Prompt Engineering to RAG
Prompt Engineering Primer
What is prompt engineering?
Elements of a prompt (instruction, context, input data, output format)
Basic prompt engineering techniques (see the sketch after this primer)
Zero-shot learning
Few-shot learning
In-context learning
Prompt-chaining / Chain-of-Thought prompting
Best practices for prompt engineering
Formatting tips
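To illustrate the prompt elements and the few-shot / chain-of-thought techniques listed in this primer, here is a small sketch that assembles a prompt string; the wording, labels, and examples are illustrative rather than taken from the course.

```python
# Sketch: assembling a prompt from the four elements (instruction, context,
# input data, output format) with few-shot examples and a chain-of-thought cue.
few_shot_examples = [
    ("The delivery was late and the box was damaged.", "negative"),
    ("Setup took two minutes and it works perfectly.", "positive"),
]

def build_prompt(review: str) -> str:
    shots = "\n".join(f"Review: {r}\nSentiment: {s}" for r, s in few_shot_examples)
    return (
        "Instruction: Classify the sentiment of the customer review.\n"  # instruction
        "Context: Reviews come from an electronics store.\n"             # context
        f"{shots}\n"                                                      # few-shot examples
        f"Review: {review}\n"                                             # input data
        "Think step by step, then answer on the last line as "           # chain-of-thought cue
        "'Sentiment: positive|negative'."                                 # output format
    )

print(build_prompt("The battery died after a week, but support replaced it quickly."))
```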
Introduction to RAG
How language models learn (pre-training, fine-tuning, and contextual information)
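As a minimal, hedged sketch of the RAG idea introduced above (retrieve relevant context, then add it to the prompt), the snippet below uses sentence-transformers for in-memory retrieval; in a Databricks setting this role is typically played by a managed vector store, and the model and documents here are illustrative.

```python
# Minimal RAG sketch: embed documents, retrieve the most relevant ones for a
# question, and build an augmented prompt for an LLM. Model and docs are illustrative.
from sentence_transformers import SentenceTransformer, util

documents = [
    "Unity Catalog governs access to tables, models, and vector indexes.",
    "Model Serving exposes models behind REST endpoints.",
    "Vector Search keeps embeddings in sync with Delta tables.",
]

encoder = SentenceTransformer("all-MiniLM-L6-v2")
doc_embeddings = encoder.encode(documents, convert_to_tensor=True)

question = "How are embeddings kept up to date?"
query_embedding = encoder.encode(question, convert_to_tensor=True)

# Retrieve the top-2 most similar documents by cosine similarity.
hits = util.semantic_search(query_embedding, doc_embeddings, top_k=2)[0]
context = "\n".join(documents[hit["corpus_id"]] for hit in hits)

prompt = (
    "Answer using only the context below.\n"
    f"Context:\n{context}\n\n"
    f"Question: {question}"
)
print(prompt)  # this augmented prompt would then be sent to an LLM
```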