
Learner reviews and feedback for Get Familiar with ML basics in a Kaggle Competition by Coursera Project Network

4.4
stars
14 ratings
3 reviews

About the Course

In this 1-hour long project, you will learn how to predict which passengers survived the Titanic shipwreck and make your first submission to a Machine Learning competition on the Kaggle platform. As a beginner in Machine Learning applications, you will also gain a solid understanding of how to start building predictions with basic supervised Machine Learning models. We will choose classifiers to train and predict with, and perform an Exploratory Data Analysis (also called EDA). By the end, you will know how to measure a model's performance, submit your predictions to the competition, and get a score from Kaggle. This guided project is for beginners in Data Science who want a practical application of Machine Learning. You will get familiar with the methods used in machine learning applications and data analysis. In order to be successful in this project, you should have an account on the Kaggle platform (no cost is necessary) and be familiar with some basic Python programming; we will use the numpy and pandas libraries. Some background in Statistics, such as knowledge of probability, is appreciated but not required.
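For readers who want a concrete picture of the workflow described above, the following is a minimal sketch, not the instructor's notebook: it loads the standard Titanic competition files (train.csv / test.csv), trains a basic supervised classifier, estimates its accuracy on a held-out split, and writes the submission.csv file that Kaggle expects. The choice of LogisticRegression and of the feature columns is purely illustrative.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Load the standard Titanic competition files.
train = pd.read_csv("train.csv")
test = pd.read_csv("test.csv")

# Very simple feature preparation: pick a few columns,
# encode 'Sex' as a number, and fill missing values with the median.
features = ["Pclass", "Sex", "Age", "SibSp", "Parch", "Fare"]
for df in (train, test):
    df["Sex"] = df["Sex"].map({"male": 0, "female": 1})
    df["Age"] = df["Age"].fillna(train["Age"].median())
    df["Fare"] = df["Fare"].fillna(train["Fare"].median())

X = train[features]
y = train["Survived"]

# Hold out part of the training data to estimate performance locally,
# keeping the survived/not-survived ratio the same in both splits.
X_tr, X_val, y_tr, y_val = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

model = LogisticRegression(max_iter=1000)
model.fit(X_tr, y_tr)
print("Validation accuracy:", accuracy_score(y_val, model.predict(X_val)))

# Predict on the test set and write the file Kaggle expects:
# one row per PassengerId with a 0/1 Survived column.
submission = pd.DataFrame({
    "PassengerId": test["PassengerId"],
    "Survived": model.predict(test[features]),
})
submission.to_csv("submission.csv", index=False)
```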

Top Reviews

1–4 of 4 reviews for Get Familiar with ML basics in a Kaggle Competition

By 121910303051 V S T

Mar 6, 2021

Great to start with the basics, but needed a little more explanation of the libraries.

By Isara S

Sep 17, 2021

This is a really good guided project to start with Kaggle competitions. I learnt all the basics required to start with Kaggle.

By Mustak A

Aug 5, 2021

Needs some more explanation about Kaggle.

By Hideki O

Oct 22, 2021

This course should be called the basics of how to use JupyterLab rather than ML basics. The instructor goes through some rudimentary data preprocessing, but there is very little theoretical explanation of why the preprocessing should be done, and a beginner would find it difficult to understand why the instructor did it. For example, there was no explanation of why the "stratify" option was used when splitting the training and test data with the train_test_split() function. I was able to figure out the meaning of the option and why it matters by Googling it, but I think it should have been explained in the lecture. This is just one example. Overall, there was too little explanation of the theoretical background in this class.
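For context on the stratify point raised in this review, the snippet below is a small illustration (using made-up labels, not the course data) of what stratify=y does in scikit-learn's train_test_split: it keeps the class proportions the same in both splits, which matters when one class is much rarer than the other.

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Imbalanced toy labels: 80% zeros, 20% ones.
y = np.array([0] * 80 + [1] * 20)
X = np.arange(100).reshape(-1, 1)

# Without stratify, the 0/1 ratio in each split can drift by chance;
# with stratify=y, both splits keep roughly the 80/20 ratio.
_, _, _, y_val_plain = train_test_split(X, y, test_size=0.25, random_state=0)
_, _, _, y_val_strat = train_test_split(X, y, test_size=0.25, random_state=0, stratify=y)

print("plain split, share of 1s:", y_val_plain.mean())
print("stratified split, share of 1s:", y_val_strat.mean())
```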