This course introduces you to one of the main families of supervised Machine Learning models: Classification. You will learn how to train predictive models to classify categorical outcomes and how to use error metrics to compare models. The hands-on section of this course focuses on best practices for classification, including train/test splits and handling data sets with unbalanced classes.
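As a taste of those practices, here is a minimal sketch (not taken from the course materials) of a stratified train/test split combined with a class_weight setting for an imbalanced data set; the synthetic data and parameter values are illustrative assumptions.

```python
# Illustrative sketch: stratified train/test split plus class weighting
# for an imbalanced two-class problem (assumed synthetic data).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report

# Create an imbalanced data set (roughly 90% / 10% class split).
X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=42)

# Stratify so both splits keep the original class proportions.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

# class_weight="balanced" re-weights the loss to counter the imbalance.
clf = LogisticRegression(class_weight="balanced", max_iter=1000)
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```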
About this Course
Skills you will gain
- Decision Tree
- Ensemble Learning
- Classification Algorithms
- Supervised Learning
- Machine Learning (ML) Algorithms
Offered by

IBM
IBM is the global leader in business transformation through an open hybrid cloud platform and AI, serving clients in more than 170 countries around the world. Today 47 of the Fortune 50 Companies rely on the IBM Cloud to run their business, and IBM Watson enterprise AI is hard at work in more than 30,000 engagements. IBM is also one of the world’s most vital corporate research organizations, with 28 consecutive years of patent leadership. Above all, guided by principles for trust and transparency and support for a more inclusive society, IBM is committed to being a responsible technology innovator and a force for good in the world.
Syllabus - What you will learn in this course
Logistic Regression
Logistic regression is one of the most studied and most widely used classification algorithms, in part because of its popularity in regulated industries and financial settings. Although more modern classifiers may produce models with higher accuracy, logistic regression remains a great baseline model thanks to its high interpretability and parametric nature. This module will walk you through extending a linear regression example into a logistic regression, as well as the most common error metrics you can use to compare several classifiers and select the one that best suits your business problem.
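A minimal sketch of this workflow, assuming the scikit-learn breast cancer data set rather than any data used in the course: fit a logistic regression and report several of the error metrics commonly used to compare classifiers.

```python
# Illustrative sketch: logistic regression plus common classification metrics.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score, roc_auc_score

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

# Standardizing the features helps the solver converge.
model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X_train, y_train)

y_pred = model.predict(X_test)
y_prob = model.predict_proba(X_test)[:, 1]  # probability of the positive class

print("accuracy :", accuracy_score(y_test, y_pred))
print("precision:", precision_score(y_test, y_pred))
print("recall   :", recall_score(y_test, y_pred))
print("f1       :", f1_score(y_test, y_pred))
print("ROC AUC  :", roc_auc_score(y_test, y_prob))
```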
K Nearest Neighbors
K Nearest Neighbors is a popular classification method because it is computationally simple and easy to interpret. This module walks you through the theory behind k nearest neighbors, along with a demo where you practice building k nearest neighbors models with sklearn.
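The sklearn workflow for this looks roughly like the sketch below; the iris data set and the choice of five neighbors are illustrative assumptions, not material from the course demo.

```python
# Illustrative sketch: scale the features, fit KNeighborsClassifier, score it.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

# Scaling matters for k nearest neighbors because it is a distance-based method.
knn = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))
knn.fit(X_train, y_train)
print("test accuracy:", knn.score(X_test, y_test))
```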
Support Vector Machines
This module will walk you through the main idea of how support vector machines construct hyperplanes that map your data into regions, each concentrating a majority of data points of a certain class. Although support vector machines are widely used for regression, outlier detection, and classification, this module will focus on classification.
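A minimal sketch of a support vector classifier in scikit-learn follows; the data set, kernel, and hyperparameter values are illustrative assumptions rather than choices made in the course.

```python
# Illustrative sketch: support vector machine for classification.
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_moons(n_samples=500, noise=0.25, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

# An RBF kernel lets the separating hyperplane (in the induced feature space)
# carve out non-linear regions for each class.
svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
svm.fit(X_train, y_train)
print("test accuracy:", svm.score(X_test, y_test))
```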
Decision Trees
Decision tree methods are a common baseline model for classification tasks because of their visual appeal and high interpretability. This module walks you through the theory behind decision trees and a few hands-on examples of building decision tree models for classification. You will learn the main pros and cons of these techniques, background that will be useful when you are presented with decision tree ensembles in the next module.
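A minimal sketch of fitting and inspecting a decision tree classifier with scikit-learn; the data set and the max_depth value are illustrative assumptions.

```python
# Illustrative sketch: fit a shallow decision tree and print its rules.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

# A shallow tree is easy to read but may underfit; a deep tree can overfit.
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(X_train, y_train)

print("test accuracy:", tree.score(X_test, y_test))
# The fitted tree can be printed as text, part of its visual appeal.
print(export_text(tree, feature_names=list(load_iris().feature_names)))
```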
Reviews
- 5 stars: 88.46%
- 4 stars: 10.89%
- 3 stars: 0.64%
Top reviews from SUPERVISED MACHINE LEARNING: CLASSIFICATION
Great course and very well structured. I'm really impressed with the instructor, who gives a thorough walkthrough of the code.
I recommend this course to everyone who wants to excel in Machine Learning. This is a Great Course!
A well-structured and practical course which helps me answer lots of my concerns from the past until now.
The course is very well structured, and the explanations very clear. I would only suggest enhancing the peer-review community since it takes a long time to get a review sometimes.
Frequently Asked Questions
When will I have access to the lectures and assignments?
What will I get if I subscribe to this Certificate?