Probabilistic graphical models (PGMs) are a rich framework for encoding probability distributions over complex domains: joint (multivariate) distributions over large numbers of random variables that interact with each other. These representations sit at the intersection of statistics and computer science, relying on concepts from probability theory, graph algorithms, machine learning, and more. They are the basis for the state-of-the-art methods in a wide variety of applications, such as medical diagnosis, image understanding, speech recognition, natural language processing, and many, many more. They are also a foundational tool in formulating many machine learning problems.
About this Course
Skills you will gain
- Bayesian Network
- Graphical Model
- Markov Random Field
Offered by

Stanford University
The Leland Stanford Junior University, commonly referred to as Stanford University or Stanford, is an American private research university located in Stanford, California, on an 8,180-acre (3,310 ha) campus near Palo Alto.
Syllabus - What you will learn from this course
Introduction and Overview
This module provides an overall introduction to probabilistic graphical models, and defines a few of the key concepts that will be used later in the course.
Bayesian Network (Directed Models)
In this module, we define the Bayesian network representation and its semantics. We also analyze the relationship between the graph structure and the independence properties of a distribution represented over that graph. Finally, we give some practical tips on how to model a real-world situation as a Bayesian network.
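For illustration, here is a minimal sketch (not part of the course materials) of the chain-rule factorization that a Bayesian network encodes; the variables and probabilities are made up.

```python
from itertools import product

# Toy Bayesian network: Rain -> Sprinkler, Rain -> GrassWet, Sprinkler -> GrassWet.
# Each CPD maps parent assignments to P(variable = 1 | parents); numbers are made up.
p_rain = 0.2                                   # P(Rain = 1)
p_sprinkler = {0: 0.4, 1: 0.01}                # P(Sprinkler = 1 | Rain)
p_wet = {(0, 0): 0.0, (0, 1): 0.8,             # P(GrassWet = 1 | Rain, Sprinkler)
         (1, 0): 0.9, (1, 1): 0.99}

def bernoulli(p, value):
    """Return P(X = value) for a binary variable with P(X = 1) = p."""
    return p if value == 1 else 1.0 - p

def joint(rain, sprinkler, wet):
    """Chain-rule factorization: P(R, S, W) = P(R) * P(S | R) * P(W | R, S)."""
    return (bernoulli(p_rain, rain)
            * bernoulli(p_sprinkler[rain], sprinkler)
            * bernoulli(p_wet[(rain, sprinkler)], wet))

# The factored joint is a proper distribution: it sums to (numerically) 1 over all assignments.
print(sum(joint(r, s, w) for r, s, w in product([0, 1], repeat=3)))
```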
Template Models for Bayesian Networks
In many cases, we need to model distributions that have a recurring structure. In this module, we describe representations for two such situations. One is temporal scenarios, where we want to model a probabilistic structure that holds constant over time; here, we use Hidden Markov Models, or, more generally, Dynamic Bayesian Networks. The other is aimed at scenarios that involve multiple similar entities, each of whose properties is governed by a similar model; here, we use Plate Models.
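As a sketch of the temporal case, the forward (filtering) recursion for a two-state Hidden Markov Model is shown below; the states, observations, and all parameters are made up for illustration.

```python
# Forward (filtering) recursion for a toy 2-state HMM with made-up parameters.
# Hidden states: 0 and 1; observations: 'a' and 'b'.
initial = [0.6, 0.4]                        # P(Z_1)
transition = [[0.7, 0.3],                   # P(Z_t | Z_{t-1})
              [0.2, 0.8]]
emission = [{'a': 0.9, 'b': 0.1},           # P(X_t | Z_t = 0)
            {'a': 0.2, 'b': 0.8}]           # P(X_t | Z_t = 1)

def filter_states(observations):
    """Return P(Z_t | x_1..t) for each t, renormalizing at every step."""
    belief, beliefs = None, []
    for x in observations:
        if belief is None:
            unnorm = [initial[z] * emission[z][x] for z in (0, 1)]
        else:
            unnorm = [sum(belief[zp] * transition[zp][z] for zp in (0, 1)) * emission[z][x]
                      for z in (0, 1)]
        total = sum(unnorm)
        belief = [u / total for u in unnorm]
        beliefs.append(belief)
    return beliefs

for t, b in enumerate(filter_states(['a', 'a', 'b']), start=1):
    print(f"t={t}: P(state 0)={b[0]:.3f}, P(state 1)={b[1]:.3f}")
```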
Structured CPDs for Bayesian Networks
A table-based representation of a CPD in a Bayesian network has a size that grows exponentially in the number of parents. There are a variety of other forms of CPDs that exploit some type of structure in the dependency model to allow for a much more compact representation. Here we describe a number of the ones most commonly used in practice.
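One commonly used structured CPD is the noisy-OR model; the sketch below (with made-up parameters) shows how it specifies P(Y = 1 | parents) with one number per parent instead of a full table of size 2^n.

```python
from itertools import product

def noisy_or(parent_values, per_parent_prob, leak=0.0):
    """Noisy-OR CPD: each active parent independently fails to turn Y on
    with probability (1 - p_i); Y stays off only if every cause fails."""
    fail = 1.0 - leak
    for value, p in zip(parent_values, per_parent_prob):
        if value == 1:
            fail *= (1.0 - p)
    return 1.0 - fail                          # P(Y = 1 | parents)

# With n parents, a full table has 2**n rows; the noisy-OR needs only n numbers (plus a leak).
probs = [0.8, 0.5, 0.3]                        # made-up per-parent activation probabilities
for assignment in product([0, 1], repeat=3):
    print(assignment, round(noisy_or(assignment, probs, leak=0.01), 4))
```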
Markov Networks (Undirected Models)
In this module, we describe Markov networks (also called Markov random fields): probabilistic graphical models based on an undirected graph representation. We discuss the representation of these models and their semantics. We also analyze the independence properties of distributions encoded by these graphs, and their relationship to the graph structure. We compare these independencies to those encoded by a Bayesian network, giving us some insight on which type of model is more suitable for which scenarios.
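As a small sketch (all potentials made up), a pairwise Markov network over three binary variables: the unnormalized measure is a product of factors, and the partition function Z is the sum of that product over all assignments.

```python
from itertools import product

# Pairwise Markov network over binary variables A - B - C (a chain).
# Each factor maps a pair of values to a nonnegative potential (made-up numbers).
phi_AB = {(0, 0): 5.0, (0, 1): 1.0, (1, 0): 1.0, (1, 1): 5.0}   # "A and B tend to agree"
phi_BC = {(0, 0): 3.0, (0, 1): 1.0, (1, 0): 1.0, (1, 1): 3.0}   # "B and C tend to agree"

def unnormalized(a, b, c):
    """Product of factors; not a probability until divided by Z."""
    return phi_AB[(a, b)] * phi_BC[(b, c)]

# Partition function: sum of the unnormalized measure over all assignments.
Z = sum(unnormalized(a, b, c) for a, b, c in product([0, 1], repeat=3))

def prob(a, b, c):
    return unnormalized(a, b, c) / Z

print("Z =", Z)
print("P(A=1, B=1, C=1) =", round(prob(1, 1, 1), 4))
```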
Decision Making
In this module, we discuss the task of decision making under uncertainty. We describe the framework of decision theory, including some aspects of utility functions. We then talk about how decision making scenarios can be encoded as a graphical model called an Influence Diagram, and how such models provide insight both into decision making and the value of information gathering.
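Below is a sketch of the expected-utility computation that underlies influence diagrams, with made-up probabilities and utilities: for each candidate decision, average the utility over the chance variable, pick the decision with maximum expected utility, and compare against deciding after observing the chance variable to get the value of that information.

```python
# Toy decision problem: decide whether to take an umbrella under uncertain weather.
# All probabilities and utilities below are made up for illustration.
p_weather = {'rain': 0.3, 'sun': 0.7}                  # chance node
utility = {('umbrella', 'rain'): 70, ('umbrella', 'sun'): 60,
           ('no_umbrella', 'rain'): 0, ('no_umbrella', 'sun'): 100}
decisions = ('umbrella', 'no_umbrella')

def expected_utility(decision):
    return sum(p * utility[(decision, w)] for w, p in p_weather.items())

# Maximum expected utility (MEU) when deciding before observing the weather.
meu = max(expected_utility(d) for d in decisions)
print({d: expected_utility(d) for d in decisions}, "-> MEU =", meu)

# Value of (perfect) information: if the weather is observed before deciding,
# the best decision can be chosen per outcome; the gain over MEU is the value of information.
eu_with_info = sum(p * max(utility[(d, w)] for d in decisions)
                   for w, p in p_weather.items())
print("value of observing the weather:", eu_with_info - meu)
```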
Reviews
- 5 stars: 74.83%
- 4 stars: 17.71%
- 3 stars: 5.20%
- 2 stars: 0.93%
- 1 star: 1.30%
Top reviews from PROBABILISTIC GRAPHICAL MODELS 1: REPRESENTATION
Learned a lot. Lectures were easy to follow and the textbook was able to more fully explain things when I needed it. Looking forward to the next course in the series.
A good introduction to PGMs, from very basic concepts to some more in-depth features. A big disadvantage is the Matlab/Octave programming assignments.
A comprehensive introduction and review of how to represent joint probability distributions as graphs and basic causal reasoning and decision making.
The subject covered in this course is very helpful for me, as someone interested in inference methods, machine learning, computer vision, and optimization.
About the Probabilistic Graphical Models Specialization

Frequently Asked Questions
When will I have access to the lectures and assignments?
What will I get if I subscribe to this Specialization?
Is financial aid available?
Have more questions? Visit the Learner Help Center.