
Course Description

In this course, you will use Python to quantify the next-word predictions of large language models (LLMs) and understand how these models assign probabilities to text. You’ll compare raw scores from LLMs, transform them into probabilities, and explore uncertainty measures such as entropy. You’ll also build n-gram language models from scratch, apply smoothing to handle unseen words and contexts, and work with log probabilities to avoid numerical underflow.
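To give a taste of the kind of computation the course covers, here is a minimal sketch (not actual course material) of how raw model scores (logits) can be converted into a probability distribution and summarized with entropy. The logit values are made up for illustration:

```python
import math

def softmax(logits):
    # Subtract the max logit for numerical stability before exponentiating
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def entropy(probs):
    # Shannon entropy in bits; higher means the model is less certain
    return -sum(p * math.log2(p) for p in probs if p > 0)

logits = [2.0, 1.0, 0.5, -1.0]   # hypothetical next-word scores
probs = softmax(logits)
print(probs)
print(entropy(probs))
```

With four candidate words, entropy ranges from 0 bits (all probability on one word) to 2 bits (a uniform distribution).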

By the end of this course, you will be able to evaluate entire sentences for their likelihood, implement your own model confidence checks, and decide when and how to suggest completions for real-world text applications.
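The sentence-likelihood idea can be sketched with a toy bigram model. The counts below are invented for illustration, and add-alpha (Laplace) smoothing stands in for whichever smoothing methods the course covers:

```python
import math

# Hypothetical corpus statistics for a tiny bigram language model
bigram_counts = {("the", "cat"): 3, ("cat", "sat"): 2, ("the", "dog"): 1}
unigram_counts = {"the": 4, "cat": 3, "sat": 2, "dog": 1}
vocab_size = len(unigram_counts)

def bigram_logprob(prev, word, alpha=1.0):
    # Add-alpha smoothing gives unseen pairs a small nonzero probability
    num = bigram_counts.get((prev, word), 0) + alpha
    den = unigram_counts.get(prev, 0) + alpha * vocab_size
    return math.log(num / den)

def sentence_logprob(words):
    # Summing log probabilities avoids the numerical underflow that
    # multiplying many small probabilities would cause
    return sum(bigram_logprob(p, w) for p, w in zip(words, words[1:]))

print(sentence_logprob(["the", "cat", "sat"]))
```

A sentence built from frequent bigrams scores a higher (less negative) log probability than one containing rare or unseen pairs.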

Before enrolling, you are required to have completed the following course or to have equivalent experience:

  • LLM Tools, Platforms, and Prompts

Faculty Author

David Mimno

Benefits to the Learner

  • Implement Python code to generate and sample next-word probabilities for a given text corpus
  • Apply methods that extend predictions to new, unseen contexts
  • Use log probabilities to quantify text likelihood and develop heuristics for interpreting them
  • Leverage the Hugging Face API to evaluate and compare next-word probabilities over larger sequences
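As a rough sketch of the Hugging Face workflow mentioned above (this is an assumption about the tooling, using gpt2 as an example checkpoint and assuming the transformers and PyTorch libraries are installed):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load an example causal language model and its tokenizer
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("The cat sat on the", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Logits for the position after the last input token, as probabilities
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, 5)
for p, i in zip(top.values, top.indices):
    print(repr(tokenizer.decode(int(i))), float(p))
```

The same pattern extends to whole sequences: score each position's actual next token, then sum the log probabilities.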

Target Audience

  • Engineers
  • Developers
  • Analysts
  • Data scientists
  • AI engineers
  • Entrepreneurs
  • Data journalists
  • Product managers
  • Researchers
  • Policymakers
  • Legal professionals

Applies Towards the Following Certificates

Enrollment Sections

  • 2-week section: Nov 05, 2025 to Nov 18, 2025; 16.0 total hours; Contract Fee $100.00
  • 2-week section: Jan 28, 2026 to Feb 10, 2026; 16.0 total hours; Contract Fee $100.00
  • 2-week section: Apr 22, 2026 to May 05, 2026; 16.0 total hours; Contract Fee $100.00