
Learner Reviews & Feedback for Improving Deep Neural Networks: Hyperparameter tuning, Regularization and Optimization by deeplearning.ai

4.9
42,913 ratings
4,599 reviews

About the Course

This course will teach you the "magic" of getting deep learning to work well. Rather than the deep learning process being a black box, you will understand what drives performance and be able to get good results more systematically. You will also learn TensorFlow. After 3 weeks, you will:
- Understand industry best practices for building deep learning applications.
- Be able to effectively use the common neural network "tricks", including initialization, L2 and dropout regularization, batch normalization, and gradient checking.
- Be able to implement and apply a variety of optimization algorithms, such as mini-batch gradient descent, Momentum, RMSprop, and Adam, and check for their convergence.
- Understand new best practices for the deep learning era of how to set up train/dev/test sets and analyze bias/variance.
- Be able to implement a neural network in TensorFlow.
This is the second course of the Deep Learning Specialization....
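Since the description names the optimizers without showing them, here is a minimal NumPy sketch of the Adam update rule taught in the course (hyperparameter names and defaults follow the common convention; this is an illustration, not the course's assignment code):

```python
import numpy as np

def adam_update(w, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam step: a momentum-style and an RMSprop-style moving average
    of the gradient, each with bias correction."""
    m = beta1 * m + (1 - beta1) * grad       # first moment (Momentum)
    v = beta2 * v + (1 - beta2) * grad ** 2  # second moment (RMSprop)
    m_hat = m / (1 - beta1 ** t)             # bias correction for early steps
    v_hat = v / (1 - beta2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v

# Usage: carry m and v as running state; t is the 1-based step counter.
w, m, v = np.ones(3), np.zeros(3), np.zeros(3)
w, m, v = adam_update(w, grad=np.array([0.1, -0.2, 0.3]), m=m, v=v, t=1)
```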

Top Reviews

HD

Dec 06, 2019

I enjoyed it, it is really helpful; I'd like to have the opportunity to implement all of this in depth in a real example.

The only thing I didn't have completely clear is batch norm; it is so confusing.

CV

Dec 24, 2017

Exceptional course. The hyperparameter explanations are excellent; every tip and piece of advice provided helped me so much to build better models. I also really liked the introduction of TensorFlow.

Thanks.


4426-4450 of 4,540 Reviews for Improving Deep Neural Networks: Hyperparameter tuning, Regularization and Optimization

By Mor k

Aug 30, 2019

excellent

By Luis E O

May 17, 2019

Excellent

By IURII B

Apr 03, 2018

Thank you

By Suman D

Jul 27, 2018

Awesome.

By Давид К

Jul 13, 2018

easy bb

By 刘倬瑞

Nov 02, 2017

helpful

By qiaohong

Oct 28, 2019

The assignments are too easy.

By SONIA D

Jan 30, 2019

Useful

By Mohd F B Z

Aug 29, 2017

Good!

By Aditya S

Aug 09, 2019

good

By Łukasz Z

May 02, 2019

bugs

By Aakarapu S P

Jul 03, 2018

good

By Dheeraj M P

Feb 23, 2018

good

By Mohamed S

Oct 20, 2019

e

By Shah Y A

Oct 28, 2019

TL;DR: lectures are awesome, notebooks are bad.

The lectures by Prof. Ng are amazing, comprehensive, and intuitive. He starts from first principles of simple neural networks and goes on to show concepts like normalization, bias, variance, overfitting, underfitting, regularization, dropout, L1 and L2 regularization, exponentially weighted averages, stochastic, mini-batch, and batch gradient descent, momentum, RMSprop, Adam optimization, batch normalization, and an intro to deep learning frameworks. He not only gives the mathematical foundations and code implementations of each concept, but spends a lot of time explaining the intuition behind it, so that we grasp the concept well. It's amazing how he starts from decades-old neural networks in the first video and, within 2-3 hours of lecturing, brings us to state-of-the-art deep learning models. Thank you Prof. Ng!

But the notebooks have many flaws. The lectures don't set you up for the programming needed in the notebooks. The descriptions in the notebooks lack a proper tutorial in many places, leaving students unprepared for the exercises that follow. Example: in the Week 3 TensorFlow tutorial's sigmoid function exercise, the description above the exercise doesn't really teach you how to effectively use placeholders and variables. I was confused and had to go through the noisy discussion forum. Please fix it, and if you'd really like more constructive criticism from me, contact me: yasser.aziz94 (at da rate ov) gee mail dut com. (lol)
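For context, here is a minimal sketch of the TF1-style placeholder pattern this reviewer is referring to (TensorFlow 1.x, as used in the course; the variable names are illustrative, not the notebook's own code):

```python
import tensorflow as tf  # TensorFlow 1.x API

# A placeholder is a graph node with no value of its own; a concrete
# value is supplied at run time through the feed_dict of Session.run.
x = tf.placeholder(tf.float32, name="x")
result = tf.sigmoid(x)

with tf.Session() as sess:
    # feed_dict maps each placeholder to the value it should take.
    print(sess.run(result, feed_dict={x: 0.0}))  # prints 0.5
```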

By Joshua P J

Jun 08, 2018

I've loved Andrew Ng's other courses, but this course was boring and not well organized. The lectures were unfocused and rambled a lot; they're nearly the opposite in style from Prof. Ng's other material, which I found extremely well organized. Most topics could be shortened 33-50% with no loss of clarity.

The course structure itself could use improvement:

The first part of Week 3 (Hyperparameter Tuning) belongs in Week 2.

The third part of Week 3 (Multi-Class Classification) should be its own week and its own assignment and could really be its own course. This is *THE* problem that almost every "applied" machine learning paper I've read is attempting to solve, whether by deep learning or some other class of algorithms. (Context and full disclosure: I'm a Ph.D. Geophysicist and my research is in seismology and volcanology.)

The introduction to TensorFlow needs to explain how objects and data structures work in TF. It really needs to explain the structure and syntax of the feed dictionary.

In the programming assignment for Week 3, there are three issues: (a) The correct use of feed_dict in 1.3 is completely new and cannot be guessed from the instructions or the TF website, and it's not clear why we use float32 for Y instead of int64; (b) In 1.4, "tf.one_hot(labels, depth, axis)" should be "tf.one_hot(labels, depth, axis=axis_number)". (c) In 2.1, the expected output for Y should have shape (6,?), not (10,?).
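To illustrate point (b), here is a small sketch of the keyword fix (TensorFlow 1.x; the label values are made up for illustration). In the TF1 signature the third positional argument of tf.one_hot is on_value, not axis, so axis must be passed by keyword:

```python
import numpy as np
import tensorflow as tf  # TensorFlow 1.x, as used in the assignment

labels = np.array([1, 2, 0])  # illustrative label values
C = 3                         # number of classes

# axis=0 puts the class dimension first, giving a (classes, examples) matrix.
one_hot_matrix = tf.one_hot(labels, C, axis=0)

with tf.Session() as sess:
    print(sess.run(one_hot_matrix))
    # [[0. 0. 1.]
    #  [1. 0. 0.]
    #  [0. 1. 0.]]
```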

By David M C

Jul 22, 2019

Nice explanation of Adam. Extremely minimal introduction to TensorFlow; I felt unprepared to deal with all the programming error messages I encountered when using TF. I would have liked more exposure to softmax outputs as well; the multi-class case is new here. My biggest complaint is that quite a bit of time was spent trying to explain batch normalization with no corresponding programming assignment. Also, in the past I felt I had my hand held a little too much in the programming exercises, whereas when TensorFlow was introduced I felt I'd been thrown by that hand into the abyss; the expected output could not help me debug because it seemingly was designed to remind me over and over that tf.Session.run was needed to give values to tf variables. Yeah... I think you guys have some work to do on this course.

By Todd J

Aug 18, 2017

Very mixed feelings about this course. The course title and nearly all (but 20 minutes) of the video content are on the topic of hyperparameter tuning, regularization, and optimization of neural nets. This material is excellent. However, the programming assignment for Week 3 is about building a simple model in Tensorflow, with no coverage of the rest of the material from the week. It is as if they included the wrong assignment, or just forgot to include the appropriate assignments to practice the actual content of the course. In addition, the Tensorflow intro in the videos and the Tensorflow assignment are not that great an introduction to the concepts behind Tensorflow. There are much better tutorials available on the web, such as those from Tensorflow.org and codelabs.developers.google.com

By Navaneethan S

Sep 20, 2017

This course was much less rigorous and theoretically-grounded than the first. There didn't seem to be much justification for any of the techniques presented, which was a stark contrast to the first course.

However, the topics are important and useful to know, so I'm glad they were covered. To me, the most useful sections were on softmax regression and deep learning frameworks, which I really enjoyed. The TensorFlow assignment was also interesting and (relative to the others) challenging.

I think there is a lot of scope for this course to be improved and I hope Dr Ng and team will do so in the near future.

By Peter G

Dec 05, 2017

Nice course, but again, the main emphasis is on the practical side, with a 'never mind, you don't need to know the details' approach. Optional parts showing the theory behind the batch-normalization implementation and the derivation of the softmax derivative would be very desirable. Another not-so-great thing is that the final TensorFlow-related practice exercises are too 'quick', in the sense that 99% of the code is written for you and hints are given in such a way that you literally don't even have to use half of your brain. It is also frustrating when everything is already done for you.
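For the curious, the derivation this reviewer asks for is short: for s_i = exp(z_i) / Σ_j exp(z_j), the Jacobian is ∂s_i/∂z_j = s_i (δ_ij − s_j), and combined with the cross-entropy loss L = −Σ_i y_i log s_i this collapses to ∂L/∂z = s − y. A minimal NumPy sketch verifying this numerically (illustrative values, not course code; the check itself is the gradient-checking technique from the lectures):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())  # shift for numerical stability
    return e / e.sum()

z = np.array([1.0, 2.0, 0.5])
y = np.array([0.0, 1.0, 0.0])                 # one-hot target
loss = lambda zz: -y @ np.log(softmax(zz))    # cross-entropy loss

grad = softmax(z) - y                         # analytic gradient dL/dz

# Numerical gradient via central differences
h = 1e-6
num = np.array([(loss(z + h * np.eye(3)[i]) - loss(z - h * np.eye(3)[i])) / (2 * h)
                for i in range(3)])
print(np.allclose(grad, num, atol=1e-5))      # True
```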

By Minglei X

Oct 22, 2017

Some processes that were discussed in detail in previous courses are mostly omitted in the new context. While that is sometimes nice for saving time and focusing on new ideas, I feel there are sometimes subtleties in them. For example, I could not imagine how backward propagation should be implemented in batch norm. I'm not sure whether there really are subtleties that you think are too tedious and unnecessary to introduce in a short video. If that is the case, I still hope you could provide more detailed information about them somewhere, just for curious people like me.
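For readers with the same curiosity: the subtlety is that the batch mean and variance themselves depend on x, so their gradients must be chained through as well. A minimal NumPy sketch of the training-time backward pass (variable names are illustrative, not the course's assignment code):

```python
import numpy as np

def batchnorm_backward(dout, x, gamma, eps=1e-8):
    """Gradients of out = gamma * x_hat + beta w.r.t. x, gamma, beta,
    where x_hat = (x - mu) / sqrt(var + eps) uses batch statistics."""
    N = x.shape[0]
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    std = np.sqrt(var + eps)
    x_hat = (x - mu) / std

    dbeta = dout.sum(axis=0)
    dgamma = (dout * x_hat).sum(axis=0)
    dx_hat = dout * gamma

    # mu and var both depend on x, which is the subtle part:
    dvar = (dx_hat * (x - mu) * -0.5 * std**-3).sum(axis=0)
    dmu = (-dx_hat / std).sum(axis=0) + dvar * (-2.0 * (x - mu)).mean(axis=0)
    dx = dx_hat / std + dvar * 2.0 * (x - mu) / N + dmu / N
    return dx, dgamma, dbeta
```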

By Ashvin L

Aug 25, 2018

The course builds on the first course and provides some ideas on how to tune networks to perform better. However, at the core, I find the number of parameters overwhelming, and it appears that by changing the parameters we can get any answer we want. There is no "formal", mathematical basis for changing the parameters. This is a bit disconcerting.

The assignments were trivial. More importantly, at least one assignment appeared to indicate that the results are entirely dependent on the weights chosen (at random) on the first iteration. This should not be the case.

By Vikash C

Jan 28, 2019

Content was good.

But the grading system that checks our submitted code marks it wrong even when it is written correctly.

In the Week 2 assignment, when I submitted the code, it flagged many functions as incorrectly coded.

I resubmitted the code after a few changes, for instance changing a += 2 to a = a + 2 and string literals like 'W' to "W". It then worked fine and gave 100 points.

In short, what I observed is that the grading system treats a += 2 and a = a + 2 as different, and likewise 'W' and "W", even though they produce identical output.
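The reviewer's observation is easy to verify: for immutable values like ints and strings, the two forms are semantically identical in Python, so a grader that distinguishes them must be matching source text rather than program output. A quick check:

```python
a = 3
a += 2         # augmented assignment
b = 3
b = b + 2      # explicit rebind
print(a == b)       # True: both are 5
print('W' == "W")   # True: quote style does not change the string
```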

By William K

Oct 01, 2018

I thought the content was well chosen and typically presented clearly. However, unlike the previous course in this specialization, the assignments had an egregious number of typos and a lot of missing information. I found these errors confusing and time-consuming.

From the staff's forum activity, it looks like they are no longer actively involved in this course. I hope that Coursera will hire someone—an intern would probably be plenty capable—to take this course and carefully fix as many of the errors in it as she or he can find.

By Dimitrios G

Nov 28, 2017

The course continues on the path set by the previous Deep Learning course, but I found the use of TensorFlow somewhat limiting. It is a great tool that simplifies the training and running of NNs, but it does not allow for easy debugging or for easily looking inside the built-in functions to spot problems. I felt that we were treating many tf functions as black boxes, and I am not so fond of this. Otherwise the course was fairly useful.