Offered by:

Machine Learning Specialization

University of Washington

About this Course

55,093

Case Studies: Analyzing Sentiment & Loan Default Prediction
In our case study on analyzing sentiment, you will create models that predict a class (positive/negative sentiment) from input features (text of the reviews, user profile information, ...). In our second case study for this course, loan default prediction, you will tackle financial data and predict when a loan is likely to be risky or safe for the bank. These tasks are examples of classification, one of the most widely used areas of machine learning, with a broad array of applications, including ad targeting, spam detection, medical diagnosis and image classification.
In this course, you will create classifiers that provide state-of-the-art performance on a variety of tasks. You will become familiar with the most successful techniques, which are most widely used in practice, including logistic regression, decision trees and boosting. In addition, you will be able to design and implement the underlying algorithms that can learn these models at scale, using stochastic gradient ascent. You will implement these techniques on real-world, large-scale machine learning tasks. You will also address significant tasks you will face in real-world applications of ML, including handling missing data and measuring precision and recall to evaluate a classifier. This course is hands-on, action-packed, and full of visualizations and illustrations of how these techniques will behave on real data. We've also included optional content in every module, covering advanced topics for those who want to go even deeper!
Learning Objectives: By the end of this course, you will be able to:
-Describe the input and output of a classification model.
-Tackle both binary and multiclass classification problems.
-Implement a logistic regression model for large-scale classification.
-Create a non-linear model using decision trees.
-Improve the performance of any model using boosting.
-Scale your methods with stochastic gradient ascent.
-Describe the underlying decision boundaries.
-Build a classification model to predict sentiment in a product review dataset.
-Analyze financial data to predict loan defaults.
-Use techniques for handling missing data.
-Evaluate your models using precision-recall metrics.
-Implement these techniques in Python (or in the language of your choice, though Python is highly recommended).
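As a concrete illustration of the precision-recall objective in the list above, here is a minimal hand-rolled sketch in Python (the labels and predictions are made up for illustration):

```python
# Minimal sketch: precision and recall for a binary classifier.
# Labels follow the course's convention: +1 = positive, -1 = negative.

def precision_recall(y_true, y_pred):
    """Return (precision, recall) for the positive class (+1)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == -1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == -1)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Hypothetical sentiment predictions: 3 true positives, 1 false
# positive, 1 false negative -> precision 0.75, recall 0.75.
y_true = [1, 1, 1, -1, -1, 1]
y_pred = [1, -1, 1, 1, -1, 1]
p, r = precision_recall(y_true, y_pred)
```

Precision asks "of the reviews I flagged positive, how many really were?", while recall asks "of the truly positive reviews, how many did I find?"; the course's evaluation module trades these off.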

Start now and learn on your own schedule.

Reset deadlines to fit your schedule.

Recommended: 7 weeks of study, 5-8 hours/week

Subtitles: English, Korean, Arabic

Logistic Regression, Statistical Classification, Classification Algorithms, Decision Tree


Week 1

Classification is one of the most widely used techniques in machine learning, with a broad array of applications, including sentiment analysis, ad targeting, spam detection, risk assessment, medical diagnosis and image classification. The core goal of classification is to predict a category or class y from some inputs x. Through this course, you will become familiar with the fundamental models and algorithms used in classification, as well as a number of core machine learning concepts. Rather than covering all aspects of classification, you will focus on a few core techniques, which are widely used in the real world to get state-of-the-art performance. By following our hands-on approach, you will implement your own algorithms on multiple real-world tasks, and deeply grasp the core techniques needed to be successful with these approaches in practice. This introduction to the course provides you with an overview of the topics we will cover and the background knowledge and resources we assume you have.

8 videos (Total 27 min), 3 readings

What is this course about? (6m)

Impact of classification (1m)

Course overview (3m)

Outline of first half of course (5m)

Outline of second half of course (5m)

Assumed background (3m)

Let's get started! (45s)

Important Update regarding the Machine Learning Specialization (10m)

Slides presented in this module (10m)

Reading: Software tools you'll need (10m)

Linear classifiers are amongst the most practical classification methods. For example, in our sentiment analysis case study, a linear classifier associates a coefficient with the counts of each word in the sentence. In this module, you will become proficient in this type of representation. You will focus on a particularly useful type of linear classifier called logistic regression, which, in addition to allowing you to predict a class, provides a probability associated with the prediction. These probabilities are extremely useful, since they provide a degree of confidence in the predictions. In this module, you will also be able to construct features from categorical inputs, and to tackle classification problems with more than two classes (multiclass problems). You will examine the results of these techniques on a real-world product sentiment analysis task.
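The linear-classifier-plus-probability idea described above can be sketched in a few lines of Python (the word-count features and coefficient values here are hypothetical, not taken from the course materials):

```python
import math

def sigmoid(score):
    """The logistic link function: squashes a real-valued score into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-score))

def predict_probability(coefficients, features):
    """P(y = +1 | x) under a logistic regression model:
    sigmoid of the linear score w . x."""
    score = sum(w * x for w, x in zip(coefficients, features))
    return sigmoid(score)

# Hypothetical word-count features for one review:
# [constant intercept, count of "awesome", count of "awful"]
coefficients = [0.0, 1.5, -2.0]
features = [1.0, 2.0, 0.0]          # "awesome" appears twice
prob = predict_probability(coefficients, features)  # sigmoid(3.0), about 0.95
```

The score alone already gives a class (positive if above 0); the sigmoid adds the degree-of-confidence interpretation the module emphasizes.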

18 videos (Total 78 min), 2 readings, 2 quizzes

Intuition behind linear classifiers (3m)

Decision boundaries (3m)

Linear classifier model (5m)

Effect of coefficient values on decision boundary (2m)

Using features of the inputs (2m)

Predicting class probabilities (1m)

Review of basics of probabilities (6m)

Review of basics of conditional probabilities (8m)

Using probabilities in classification (2m)

Predicting class probabilities with (generalized) linear models (5m)

The sigmoid (or logistic) link function (4m)

Logistic regression model (5m)

Effect of coefficient values on predicted probabilities (7m)

Overview of learning logistic regression models (2m)

Encoding categorical inputs (4m)

Multiclass classification with 1 versus all (7m)

Recap of logistic regression classifier (1m)

Slides presented in this module (10m)

Predicting sentiment from product reviews (10m)

Linear Classifiers & Logistic Regression (10m)

Predicting sentiment from product reviews (24m)

Week 2

Once familiar with linear classifiers and logistic regression, you can now dive in and write your first learning algorithm for classification. In particular, you will use gradient ascent to learn the coefficients of your classifier from data. You will first need to define the quality metric for these tasks using an approach called maximum likelihood estimation (MLE). You will also become familiar with a simple technique for selecting the step size for gradient ascent. An optional, advanced part of this module will cover the derivation of the gradient for logistic regression. You will implement your own learning algorithm for logistic regression from scratch, and use it to learn a sentiment analysis classifier.
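A minimal sketch of the learning algorithm this module builds, gradient ascent on the log-likelihood of a logistic regression model, might look like the following (the toy dataset, step size, and iteration count are illustrative choices, not the course's):

```python
import math

def sigmoid(s):
    return 1.0 / (1.0 + math.exp(-s))

def gradient_ascent(X, y, step_size=0.1, n_iters=500):
    """Learn logistic regression coefficients by maximizing the
    log-likelihood with gradient ascent.
    X: list of feature lists (include a constant 1.0 for the intercept);
    y: labels in {+1, -1}."""
    w = [0.0] * len(X[0])
    for _ in range(n_iters):
        for j in range(len(w)):
            # Derivative of the log-likelihood w.r.t. w[j]:
            # sum over points of feature_j * (indicator(y=+1) - P(y=+1|x)).
            grad_j = sum(
                xi[j] * ((1 if yi == 1 else 0)
                         - sigmoid(sum(a * b for a, b in zip(w, xi))))
                for xi, yi in zip(X, y)
            )
            w[j] += step_size * grad_j   # ascent: move *up* the gradient
    return w

# Tiny separable dataset: [intercept, feature]; positive when feature > 0.
X = [[1.0, 2.0], [1.0, 3.0], [1.0, -1.0], [1.0, -2.0]]
y = [1, 1, -1, -1]
w = gradient_ascent(X, y)
```

After training, `sigmoid(w . x)` should be above 0.5 for the positive points and below 0.5 for the negative ones; the module's step-size discussion covers why too large a `step_size` makes this procedure diverge.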

18 videos (Total 83 min), 2 readings, 2 quizzes

Intuition behind maximum likelihood estimation (4m)

Data likelihood (8m)

Finding best linear classifier with gradient ascent (3m)

Review of gradient ascent (6m)

Learning algorithm for logistic regression (3m)

Example of computing derivative for logistic regression (5m)

Interpreting derivative for logistic regression (5m)

Summary of gradient ascent for logistic regression (2m)

Choosing step size (5m)

Careful with step sizes that are too large (4m)

Rule of thumb for choosing step size (3m)

(VERY OPTIONAL) Deriving gradient of logistic regression: Log trick (4m)

(VERY OPTIONAL) Expressing the log-likelihood (3m)

(VERY OPTIONAL) Deriving probability y=-1 given x (2m)

(VERY OPTIONAL) Rewriting the log likelihood into a simpler form (8m)

(VERY OPTIONAL) Deriving gradient of log likelihood (8m)

Recap of learning logistic regression classifiers (1m)

Slides presented in this module (10m)

Implementing logistic regression from scratch (10m)

Learning Linear Classifiers (12m)

Implementing logistic regression from scratch (16m)

As we saw in the regression course, overfitting is perhaps the most significant challenge you will face as you apply machine learning approaches in practice. This challenge can be particularly significant for logistic regression, as you will discover in this module, since you not only risk getting an overly complex decision boundary, but your classifier can also become overly confident about the probabilities it predicts. In this module, you will investigate overfitting in classification in significant detail, and obtain broad practical insights from some interesting visualizations of the classifiers' outputs. You will then add a regularization term to your optimization to mitigate overfitting. You will investigate both L2 regularization to penalize large coefficient values, and L1 regularization to obtain additional sparsity in the coefficients. Finally, you will modify your gradient ascent algorithm to learn regularized logistic regression classifiers. You will implement your own regularized logistic regression classifier from scratch, and investigate the impact of the L2 penalty on real-world sentiment analysis data.
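The modified update this module derives, gradient ascent with an L2 penalty term subtracted from each coefficient's derivative, can be sketched as follows (leaving the intercept unpenalized is a common convention; the numbers in the usage note are made up):

```python
def regularized_gradient_step(w, grad, step_size, l2_penalty):
    """One gradient-ascent step on the L2-penalized log-likelihood.
    The penalty -l2_penalty * sum(w[j]^2) contributes -2*l2_penalty*w[j]
    to the derivative for each non-intercept coefficient."""
    new_w = list(w)
    for j in range(len(w)):
        # By convention, coefficient 0 is the intercept and is not penalized.
        penalty = 0.0 if j == 0 else 2.0 * l2_penalty * w[j]
        new_w[j] = w[j] + step_size * (grad[j] - penalty)
    return new_w

# With w = [1.0, 2.0], grad = [0.5, 0.5], step 0.1, penalty 1.0:
# intercept moves to 1.05, while w[1] is pulled down to 1.65.
updated = regularized_gradient_step([1.0, 2.0], [0.5, 0.5], 0.1, 1.0)
```

The larger a coefficient grows, the harder the penalty pulls it back toward zero, which is exactly how L2 regularization prevents the overconfident probabilities discussed above.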

13 videos (Total 66 min), 2 readings, 2 quizzes

Review of overfitting in regression (3m)

Overfitting in classification (5m)

Visualizing overfitting with high-degree polynomial features (3m)

Overfitting in classifiers leads to overconfident predictions (5m)

Visualizing overconfident predictions (4m)

(OPTIONAL) Another perspective on overfitting in logistic regression (8m)

Penalizing large coefficients to mitigate overfitting (5m)

L2 regularized logistic regression (4m)

Visualizing effect of L2 regularization in logistic regression (5m)

Learning L2 regularized logistic regression with gradient ascent (7m)

Sparse logistic regression with L1 regularization (7m)

Recap of overfitting & regularization in logistic regression (58s)

Slides presented in this module (10m)

Logistic Regression with L2 regularization (10m)

Overfitting & Regularization in Logistic Regression (16m)

Logistic Regression with L2 regularization (16m)

Week 3

Along with linear classifiers, decision trees are amongst the most widely used classification techniques in the real world. This method is extremely intuitive, simple to implement and provides interpretable predictions. In this module, you will become familiar with the core decision tree representation. You will then design a simple, recursive greedy algorithm to learn decision trees from data. Finally, you will extend this approach to deal with continuous inputs, a fundamental requirement for practical problems. In this module, you will investigate a brand-new case study in the financial sector: predicting the risk associated with a bank loan. You will implement your own decision tree learning algorithm on real loan data.
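The greedy feature-selection step at the heart of the recursive algorithm, picking the split with the lowest classification error, might be sketched like this (the tiny loan-style dataset and feature names are invented for illustration):

```python
def best_split(data, target, features):
    """Greedy choice: the binary feature whose split yields the lowest
    classification error when each branch predicts its majority class.
    data: list of dicts mapping feature names (0/1 values) and the
    target name to a label in {+1, -1}."""
    def error_after_split(feature):
        mistakes = 0
        for value in (0, 1):
            branch = [row[target] for row in data if row[feature] == value]
            if branch:
                majority = max(set(branch), key=branch.count)
                mistakes += sum(1 for label in branch if label != majority)
        return mistakes / len(data)
    return min(features, key=error_after_split)

# Hypothetical loan records: 'short_term' perfectly predicts 'safe',
# while 'good_credit' is uninformative, so the greedy step picks 'short_term'.
loans = [
    {'short_term': 1, 'good_credit': 1, 'safe': 1},
    {'short_term': 1, 'good_credit': 0, 'safe': 1},
    {'short_term': 0, 'good_credit': 1, 'safe': -1},
    {'short_term': 0, 'good_credit': 0, 'safe': -1},
]
chosen = best_split(loans, 'safe', ['good_credit', 'short_term'])
```

The full recursive learner repeats this choice on each branch's subset of the data until a stopping condition is met.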

13 videos (Total 47 min), 3 readings, 3 quizzes

Intuition behind decision trees (1m)

Task of learning decision trees from data (3m)

Recursive greedy algorithm (4m)

Learning a decision stump (3m)

Selecting best feature to split on (6m)

When to stop recursing (4m)

Making predictions with decision trees (1m)

Multiclass classification with decision trees (2m)

Threshold splits for continuous inputs (6m)

(OPTIONAL) Picking the best threshold to split on (3m)

Visualizing decision boundaries (5m)

Recap of decision trees (56s)

Slides presented in this module (10m)

Identifying safe loans with decision trees (10m)

Implementing binary decision trees (10m)

Decision Trees (22m)

Identifying safe loans with decision trees (14m)

Implementing binary decision trees (14m)

Week 4

Out of all machine learning techniques, decision trees are amongst the most prone to overfitting. No practical implementation is possible without including approaches that mitigate this challenge. In this module, through various visualizations, you will investigate why decision trees suffer from significant overfitting problems. Using the principle of Occam's razor, you will mitigate overfitting by learning simpler trees. At first, you will design algorithms that stop the learning process before the decision trees become overly complex. In an optional segment, you will design a very practical approach that learns an overly-complex tree, and then simplifies it with pruning. Your implementation will investigate the effect of these techniques on mitigating overfitting on our real-world loan data set.
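The early-stopping rules this module introduces can be collected into a single guard applied before growing a node further, roughly like the following (the default thresholds are arbitrary illustrative values, not recommendations from the course):

```python
def should_stop(labels, depth, n_features_left, max_depth=6, min_node_size=10):
    """Early-stopping tests for recursive decision tree learning.
    Returns True when the current node should become a leaf."""
    if len(set(labels)) <= 1:        # node is pure: nothing left to learn
        return True
    if depth >= max_depth:           # cap the tree depth
        return True
    if len(labels) < min_node_size:  # too few points to split reliably
        return True
    if n_features_left == 0:         # no remaining features to split on
        return True
    return False
```

Tightening `max_depth` or raising `min_node_size` yields simpler trees, trading a little training accuracy for better generalization; the optional pruning lectures take the complementary approach of growing first and simplifying afterwards.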

8 videos (Total 40 min), 2 readings, 2 quizzes

Overfitting in decision trees (5m)

Principle of Occam's razor: Learning simpler decision trees (5m)

Early stopping in learning decision trees (6m)

(OPTIONAL) Motivating pruning (8m)

(OPTIONAL) Pruning decision trees to avoid overfitting (6m)

(OPTIONAL) Tree pruning algorithm (3m)

Recap of overfitting and regularization in decision trees (1m)

Slides presented in this module (10m)

Decision Trees in Practice (10m)

Preventing Overfitting in Decision Trees (22m)

Decision Trees in Practice (28m)

Real-world machine learning problems are fraught with missing data. That is, very often, some of the inputs are not observed for all data points. This challenge is very significant, happens in most cases, and needs to be addressed carefully to obtain great performance. Yet this issue is rarely discussed in machine learning courses. In this module, you will tackle the missing data challenge head on. You will start with the two most basic techniques to convert a dataset with missing data into a clean dataset, namely skipping missing values and imputing missing values. In an advanced section, you will also design a modification of the decision tree learning algorithm that builds decisions about missing data right into the model. You will also explore these techniques in your real-data implementation.
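The two basic strategies named above, skipping rows with missing values and imputing them, might be sketched as follows (the `income` feature name and values are hypothetical):

```python
def drop_missing(rows):
    """Strategy 1 (skipping): keep only rows with no missing (None) values.
    Simple, but can discard a large fraction of the data."""
    return [r for r in rows if None not in r.values()]

def impute_missing(rows, feature):
    """Strategy 2 (imputing): replace missing values of one numeric feature
    with the mean of its observed values, keeping every row."""
    observed = [r[feature] for r in rows if r[feature] is not None]
    mean = sum(observed) / len(observed)
    return [dict(r, **{feature: mean if r[feature] is None else r[feature]})
            for r in rows]

# Hypothetical records with one unobserved income.
rows = [{'income': 50.0}, {'income': None}, {'income': 70.0}]
cleaned = drop_missing(rows)            # 2 rows survive
filled = impute_missing(rows, 'income') # the gap becomes the mean, 60.0
```

The module's advanced section goes further than either strategy, routing missing values down a dedicated branch inside the decision tree itself.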

6 videos (Total 25 min), 1 reading, 1 quiz

Strategy 1: Purification by skipping missing data (4m)

Strategy 2: Purification by imputing missing data (4m)

Modifying decision trees to handle missing data (4m)

Feature split selection with missing data (5m)

Recap of handling missing data (1m)

Slides presented in this module (10m)

Handling Missing Data (14m)

4.7

453 reviews

Started a new career after completing this course

Got a tangible career benefit from this course

Got a pay increase or promotion

SS • Oct 16th 2016

Hats off to the team who put the course together! Prof Guestrin is a great teacher. The course gave me in-depth knowledge regarding classification and the math and intuition behind it. It was fun!

CJ • Jan 25th 2017

Very impressive course, I would recommend taking course 1 and 2 in this specialization first since they skip over some things in this course that they have explained thoroughly in those courses

Founded in 1861, the University of Washington is one of the oldest state-supported institutions of higher education on the West Coast and is one of the preeminent research universities in the world.

This Specialization from leading researchers at the University of Washington introduces you to the exciting, high-demand field of Machine Learning. Through a series of practical case studies, you will gain applied experience in major areas of Machine Learning including Prediction, Classification, Clustering, and Information Retrieval. You will learn to analyze large and complex datasets, create systems that adapt and improve over time, and build intelligent applications that can make predictions from data.

When will I have access to the lectures and assignments?

Once you enroll in the course, you get access to all videos, quizzes, and programming assignments (if applicable). Peer-review assignments can only be submitted and reviewed once your session has begun. If you choose to explore the course without purchasing, you may not be able to access certain assignments.

What will I get if I subscribe to this Specialization?

When you enroll in the course, you get access to all of the courses in the Specialization, and you earn a certificate when you complete the work. Your electronic Certificate will be added to your Accomplishments page, where you can print it or add it to your LinkedIn profile. If you only want to read and view the course content, you can audit the course for free.

What is the refund policy?

Is financial aid available?

Have more questions? Visit the Learner Help Center.