
Learner Reviews & Feedback for deeplearning.ai's Improving Deep Neural Networks: Hyperparameter tuning, Regularization and Optimization

4.9
40,838 ratings
4,343 reviews

About the Course

This course will teach you the "magic" of getting deep learning to work well. Rather than the deep learning process being a black box, you will understand what drives performance and be able to get good results more systematically. You will also learn TensorFlow. After 3 weeks, you will:

- Understand industry best practices for building deep learning applications.
- Be able to effectively use common neural network "tricks", including initialization, L2 and dropout regularization, batch normalization, and gradient checking.
- Be able to implement and apply a variety of optimization algorithms, such as mini-batch gradient descent, Momentum, RMSprop, and Adam, and check for their convergence.
- Understand new best practices for the deep learning era for setting up train/dev/test sets and analyzing bias/variance.
- Be able to implement a neural network in TensorFlow.

This is the second course of the Deep Learning Specialization.
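As a quick illustration of the kind of material the description lists, here is a minimal NumPy sketch of the Adam update rule. The function and variable names are my own, not taken from the course assignments; the hyperparameter defaults follow the values commonly cited for Adam.

    import numpy as np

    def adam_update(w, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
        # Exponentially weighted estimates of the gradient's first and second moments
        m = beta1 * m + (1 - beta1) * grad
        v = beta2 * v + (1 - beta2) * grad ** 2
        # Bias correction compensates for the zero initialization of m and v
        m_hat = m / (1 - beta1 ** t)
        v_hat = v / (1 - beta2 ** t)
        # Scale the step by the (smoothed) inverse gradient magnitude
        w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
        return w, m, v

    # Toy usage: minimize f(w) = w**2 - w, whose gradient is 2*w - 1
    w, m, v = np.zeros(3), np.zeros(3), np.zeros(3)
    for t in range(1, 101):
        w, m, v = adam_update(w, 2 * w - 1, m, v, t)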

Top Reviews

HD

Dec 06, 2019

I enjoyed it, it is really helpful. I'd like to have the opportunity to implement all of this deeply in a real example.

The only thing I didn't have completely clear is the batch norm; it is so confusing.

CV

Dec 24, 2017

Exceptional course. The hyperparameter explanations are excellent; every tip and piece of advice provided helped me so much to build better models. I also really liked the introduction of TensorFlow.

Thanks.


Reviews 3951–3975 of 4,278 for Improving Deep Neural Networks: Hyperparameter tuning, Regularization and Optimization

By Giacomo

Mar 09, 2018

As always, a great course from Andrew: easy to understand, with useful training and exercises. The lectures are explained slowly, repeating the important concepts, which is always a good thing.

Thanks! I will proceed with my Specialization :)

By Peter T

Apr 17, 2018

Useful information and good intuition, but a lack of formal results. More homework would improve the learning experience.

By Tri W G

Mar 10, 2018

Not much different from the material in Prof. Andrew Ng's own Machine Learning course. If you don't have the time to finish the ML course, then you should take this one.

By Donguk L

Nov 25, 2017

Maybe providing some video or reading resources on the backpropagation process for batch norm would be good?
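For readers looking for the same thing: the batch norm backward pass has a compact closed form once the forward pass caches the normalized inputs and the inverse standard deviation. A minimal NumPy sketch, with my own naming rather than the course's:

    import numpy as np

    def batchnorm_backward(dout, x_hat, gamma, inv_std):
        # dout:    upstream gradient dL/dy, shape (N, D)
        # x_hat:   normalized inputs cached from the forward pass, shape (N, D)
        # gamma:   scale parameter, shape (D,)
        # inv_std: 1 / sqrt(var + eps) cached from the forward pass, shape (D,)
        N = dout.shape[0]
        dbeta = dout.sum(axis=0)             # gradient w.r.t. the shift beta
        dgamma = (dout * x_hat).sum(axis=0)  # gradient w.r.t. the scale gamma
        dxhat = dout * gamma
        # Mean and variance branches of the chain rule folded into one expression
        dx = (inv_std / N) * (N * dxhat
                              - dxhat.sum(axis=0)
                              - x_hat * (dxhat * x_hat).sum(axis=0))
        return dx, dgamma, dbeta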

By Yuri G

May 12, 2018

It's a very useful course, but it comes across as if the creators put it out and then completely abandoned it: the contact email given at the beginning is a black hole (they could have used no_reply), and while volunteer mentors are supporting the forums, multiple well-known bugs and mistakes are not being fixed. The programming assignments are designed to give the impression that students can reach farther on their own than they actually can, often by turning them into copy-and-paste of small pieces into code that is already mostly written for them. One would expect that if one ever needs to use this "knowledge" from the programming assignments for work later, it would be free and readily available. Wrong assumption.

By DANG R

Jun 17, 2018

Somewhat abstract.

By Hans E

Feb 18, 2018

Great material, very clear and pleasant teaching, good software environment for the programming exercises. The exercises are a bit boring at times (cut and paste without much thinking) but maybe this is a quick way to memorize the material...

Some long-known problems in the exercises should REALLY REALLY be addressed! (Would have given 5 stars otherwise.)

By HongZhang

Jun 14, 2018

Great course to deepen my knowledge after the first course. However, I would like access to more programming exercises for practice. That would be perfect!

By Dan B

Jul 04, 2018

The lectures are great, but the Jupyter notebook assignments are hell: they crash frequently, and most of the time spent on the assignments is invested in dealing with the notebook instead of the exercise. (The content of the exercises is great, though.)

By Tamás J

Jun 14, 2018

Jupyter Notebook fails too often! I had to close the window and start again, which is very annoying!

By Harry L

Jun 20, 2018

The second class on machine learning is still very informative. However, it's very hands-on and mainly teaches me how to tune learning algorithms to run faster. Hence, it's not very intellectually stimulating. Nonetheless, this is still a very educational course overall!

By Nazmus S

Apr 02, 2019

Learning a lot, but it is full of boilerplate code. It would be great if students were challenged with the programming. Writing a formula, even in code, is easy for most students.

By Łukasz Z

May 02, 2019

bugs

By bayu a n

May 04, 2019

Needs more depth on parameter tuning, but it's a very good course overall.

By Aaron E

May 05, 2019

It's a good intro, if a little simplistic in the coding exercises. Bring back the mid-lecture quizzes!

By Surya J

Apr 23, 2019

Great course to build intuition about tuning neural networks. A solid foundation in a very short duration.

By Amardip G

Apr 22, 2019

Useful for Debugging

By Youssouf B

Apr 22, 2019

What I did notice in the Deep Learning Specialization is that there are no further-reading suggestions or a reading syllabus like in the other courses.

By Oceanusity

May 11, 2019

pretty good one

By John M

Apr 04, 2019

TensorFlow needs more explaining

By kritika

Mar 25, 2019

There was a lot of hand-holding in the programming assignments. They need to be more rigorous.

By Xiaoliang L

Mar 25, 2019

The practice exercises are more like "type after me" than a real learning opportunity.

By 成文辉

Mar 26, 2019

A programming assignment on batch normalization and softmax would be helpful.

By Nicolás E C

Apr 09, 2019

Nice course. TensorFlow might need some more detailed explanation because it's different from plain Python programming, but it was really nice.

By Vishal

Mar 28, 2019

Tough concepts, like dropout regularization, are not explained clearly.
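For reference, the core of inverted dropout (the variant taught in the course) is only a few lines. A minimal NumPy sketch, with names of my own choosing rather than the course's:

    import numpy as np

    def dropout_forward(a, keep_prob=0.8, training=True):
        if not training:
            return a                                 # dropout is disabled at test time
        mask = np.random.rand(*a.shape) < keep_prob  # keep each unit with probability keep_prob
        return a * mask / keep_prob                  # rescale so the expected activation is unchanged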