Probabilistic graphical models (PGMs) are a rich framework for encoding probability distributions over complex domains: joint (multivariate) distributions over large numbers of random variables that interact with each other. These representations sit at the intersection of statistics and computer science, relying on concepts from probability theory, graph algorithms, machine learning, and more. They are the basis for the state-of-the-art methods in a wide variety of applications, such as medical diagnosis, image understanding, speech recognition, natural language processing, and many, many more. They are also a foundational tool in formulating many machine learning problems.
Offered by:

Stanford University
The Leland Stanford Junior University, commonly referred to as Stanford University or Stanford, is an American private research university located in Stanford, California on an 8,180-acre (3,310 ha) campus near Palo Alto, California, United States.
Syllabus - What you will learn in this course
Inference Overview
This module provides a high-level overview of the two main types of inference tasks typically encountered in graphical models: conditional probability queries and finding the most likely assignment (MAP inference).
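To make the two query types concrete, here is a minimal Python/NumPy sketch over a hypothetical joint distribution of two binary variables (the table values are invented for illustration and are not from the course):

    import numpy as np

    # Toy joint P(A, B) over two binary variables, stored as a 2x2 table.
    # Rows index A, columns index B; the numbers are made up for illustration.
    joint = np.array([[0.30, 0.10],
                      [0.20, 0.40]])

    # Conditional probability query: P(A | B = 1) = P(A, B = 1) / P(B = 1).
    evidence = joint[:, 1]
    print("P(A | B=1) =", evidence / evidence.sum())    # [0.2, 0.8]

    # MAP query: the single most likely joint assignment (a*, b*).
    a_star, b_star = np.unravel_index(joint.argmax(), joint.shape)
    print("MAP assignment (A, B):", (a_star, b_star))   # (1, 1)

The point of the course is, of course, that materializing the full joint table as above becomes infeasible as the number of variables grows; the modules below cover algorithms that avoid it.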
Variable Elimination
This module presents the simplest algorithm for exact inference in graphical models: variable elimination. We describe the algorithm, and analyze its complexity in terms of properties of the graph structure.
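As a minimal sketch of the idea, assuming a toy chain A -> B -> C with invented table values (not from the course material):

    import numpy as np

    # Variable elimination on a tiny chain A -> B -> C (all binary).
    # The conditional probability tables below are illustrative only.
    p_a = np.array([0.6, 0.4])                  # P(A)
    p_b_given_a = np.array([[0.7, 0.3],         # P(B | A): rows = A, cols = B
                            [0.2, 0.8]])
    p_c_given_b = np.array([[0.9, 0.1],         # P(C | B): rows = B, cols = C
                            [0.4, 0.6]])

    # Eliminate A: multiply P(A) into P(B | A), then sum A out.
    tau_b = (p_a[:, None] * p_b_given_a).sum(axis=0)    # tau(B) = P(B)

    # Eliminate B: multiply tau(B) into P(C | B), then sum B out.
    p_c = (tau_b[:, None] * p_c_given_b).sum(axis=0)    # P(C)

    print("P(C) =", p_c)                                # [0.65, 0.35]

The cost is driven by the size of the largest intermediate factor created during elimination, which is why the module's complexity analysis is phrased in terms of properties of the graph structure.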
Belief Propagation Algorithms
This module describes an alternative view of exact inference in graphical models: that of message passing between clusters, each of which encodes a factor over a subset of variables. This framework provides a basis for a variety of exact and approximate inference algorithms. We focus here on the basic framework and on its instantiation in the exact case of clique tree propagation. An optional lesson describes the loopy belief propagation (LBP) algorithm and its properties.
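A minimal sketch of clique tree propagation on the same toy chain as above (cliques {A, B} and {B, C} connected by the sepset {B}; all numbers invented for illustration):

    import numpy as np

    # Clique tree for the chain A - B - C: C1 = {A, B}, C2 = {B, C},
    # connected by the sepset {B}. Table values are illustrative only.
    p_a = np.array([0.6, 0.4])
    p_b_given_a = np.array([[0.7, 0.3], [0.2, 0.8]])
    p_c_given_b = np.array([[0.9, 0.1], [0.4, 0.6]])

    psi1 = p_a[:, None] * p_b_given_a   # initial potential of C1 over (A, B)
    psi2 = p_c_given_b                  # initial potential of C2 over (B, C)

    # Message C1 -> C2: sum out everything not in the sepset {B}.
    delta12 = psi1.sum(axis=0)                    # message over B
    belief2 = delta12[:, None] * psi2             # belief over (B, C)

    # Message C2 -> C1 completes calibration.
    delta21 = psi2.sum(axis=1)                    # message over B
    belief1 = psi1 * delta21[None, :]             # belief over (A, B)

    # After both passes, the cliques agree on the sepset marginal P(B).
    print(belief1.sum(axis=0), belief2.sum(axis=1))   # both [0.5, 0.5]

On a tree-structured cluster graph like this one, two passes suffice for exact calibration; loopy belief propagation (the optional lesson) applies the same message updates to a cluster graph with cycles and yields only approximate answers.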
MAP Algorithms
This module describes algorithms for finding the most likely assignment for a distribution encoded as a PGM (a task known as MAP inference). We describe message passing algorithms, which are very similar to the algorithms for computing conditional probabilities, except that we also need to consider how to decode the results to construct a single assignment. In an optional lesson, we describe a few other algorithms that use very different techniques, exploiting the combinatorial-optimization nature of the MAP task.
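A minimal max-product sketch on the same toy chain, showing the decoding (traceback) step the paragraph mentions; all values are invented for illustration:

    import numpy as np

    # Max-product (Viterbi-style) MAP inference on the chain A -> B -> C,
    # with a traceback to decode the single best assignment.
    p_a = np.array([0.6, 0.4])
    p_b_given_a = np.array([[0.7, 0.3], [0.2, 0.8]])
    p_c_given_b = np.array([[0.9, 0.1], [0.4, 0.6]])

    # Forward pass: replace sums with maxes, storing argmaxes for decoding.
    m_ab = p_a[:, None] * p_b_given_a        # scores over (A, B)
    best_a = m_ab.argmax(axis=0)             # best A for each value of B
    mu_b = m_ab.max(axis=0)

    m_bc = mu_b[:, None] * p_c_given_b       # scores over (B, C)
    best_b = m_bc.argmax(axis=0)             # best B for each value of C
    mu_c = m_bc.max(axis=0)

    # Decode: pick the best C, then trace the stored argmaxes backward.
    c = mu_c.argmax()
    b = best_b[c]
    a = best_a[b]
    print("MAP assignment (A, B, C):", (a, b, c))   # (0, 0, 0)
    print("probability of MAP assignment:", mu_c[c])  # 0.378

The only change from the sum-product computation is the max in place of the sum, plus the bookkeeping needed to recover a single consistent assignment rather than a set of per-variable marginals.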
Sampling Methods
In this module, we discuss a class of algorithms that uses random sampling to provide approximate answers to conditional probability queries. Most commonly used among these is the class of Markov Chain Monte Carlo (MCMC) algorithms, which includes the simple Gibbs sampling algorithm, as well as a family of methods known as Metropolis-Hastings.
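As a sketch of the simplest case, here is Gibbs sampling used to approximate a conditional probability query on the same toy chain (all numbers invented; the iteration and burn-in counts are arbitrary choices, not from the course):

    import numpy as np

    rng = np.random.default_rng(0)

    # Gibbs sampling on the chain A -> B -> C to estimate P(A | C = 0).
    p_a = np.array([0.6, 0.4])
    p_b_given_a = np.array([[0.7, 0.3], [0.2, 0.8]])
    p_c_given_b = np.array([[0.9, 0.1], [0.4, 0.6]])

    c = 0                    # observed evidence
    a, b = 0, 0              # arbitrary initial state
    counts = np.zeros(2)

    for step in range(20000):
        # Resample A from P(A | b), proportional to P(A) P(b | A).
        w = p_a * p_b_given_a[:, b]
        a = rng.choice(2, p=w / w.sum())
        # Resample B from P(B | a, c), proportional to P(B | a) P(c | B).
        w = p_b_given_a[a] * p_c_given_b[:, c]
        b = rng.choice(2, p=w / w.sum())
        if step >= 2000:     # discard burn-in samples
            counts[a] += 1

    print("estimated P(A | C=0):", counts / counts.sum())  # ~[0.69, 0.31]

Each update resamples one variable from its conditional distribution given the current values of its neighbors; Gibbs sampling is the special case of Metropolis-Hastings in which every such proposal is accepted.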
Inference in Temporal Models
In this brief lesson, we discuss some of the complexities that arise when applying the exact and approximate inference algorithms from earlier in this course to dynamic Bayesian networks.
Reviews
Top reviews for PROBABILISTIC GRAPHICAL MODELS 2: INFERENCE
Just like the first course of the specialization, this course is really good. It is well organized and taught in the best way which really helped me to implement similar ideas for my projects.
I have clearly learnt a lot during this course. Even though some things should be updated and maybe completed, I would definitely recommend it to anyone whose interest lies in PGMs.
Very good course. The subject is quite complex, and a lack of concrete examples made it hard to be sure the concepts were well understood. I had to go through the course twice to understand the concepts well.
Great course. The assignments are old and not worth doing, but the content is good for those who are interested in the basics of Probabilistic Graphical Models.
Frequently Asked Questions
When will I have access to the lectures and assignments?
What will I get if I subscribe to this Specialization?
Is financial aid available?
Learning Outcomes: By the end of this course, you will be able to take a given PGM and run exact or approximate inference on it to answer conditional probability queries and find MAP assignments.
Have more questions? Visit the Learner Help Center.