Excellent. Isn't Laurence just great! Fantastically deep knowledge, easy learning style, very practical presentation. And funny! A pure joy, highly relevant and extremely useful of course. Thank you!
Great course for anyone interested in NLP! This course focuses on practical learning instead of overburdening students with theory. Would recommend this to every NLP beginner/enthusiast out there!!
By José D•
This third course presents the main NLP concepts using simple Keras example code. Just like Courses 1 & 2, there's no math; as explained in the videos, if you want a deeper understanding, you want the "Deep Learning" specialization. Only quizzes, no graded exercises for this course.
By J E•
It was good but there are several errors in the code for some weekly exercises.
I wanted to raise a PR in the author's GitHub repo to fix these. However, upon seeing the backlog of unaddressed PRs there, I didn't bother, as they will probably not be looked at.
By Ramón W•
The length of the videos is fine. Personally, it bothered me that there were no programming tasks, the quizzes were too short and some of the questions were repetitive. I would have liked to see programming tasks, more quizzes and also intermediate questions in the videos.
By Rajat Y•
Since the course title doesn't mention "Introduction" to NLP, I thought it would provide detailed insight into Natural Language Processing, but it only covers the basics. Also, as far as TensorFlow is concerned, I was expecting more hands-on experience with it.
By Ignacio L•
The lack of graded exercises makes this course somewhat messy. Much of the code given to analyze doesn't work out of the box. The back and forth between datasets and classification cases, in my case at least, did not help me fully grasp what was going on.
By afshin m•
Weeks 2 and 3 are disorganized: the examples don't run without making modifications based on information in the forums.
However the overall course is worth it. I hope they pay more attention to making the examples accessible and making them work.
By Peter-John H•
This course did not require lab submissions, which I really liked; it still gave me an objective and helped me learn more. It also introduced topics such as LSTM, global average pooling, and regularizers, which I feel were covered too quickly.
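The global average pooling mentioned above is simple enough to sketch without Keras: `GlobalAveragePooling1D` just averages the embedded sequence over the time axis, turning a variable-length sentence into one fixed-size vector. A minimal numpy illustration (the toy embedding values are made up):

```python
import numpy as np

# Toy batch of one embedded sentence: shape (batch, timesteps, embed_dim).
# Each row is the embedding vector of one word (values made up).
x = np.array([[[1.0, 2.0],
               [3.0, 4.0],
               [5.0, 6.0]]])

# GlobalAveragePooling1D collapses the time axis by averaging,
# giving one fixed-size vector per sentence regardless of its length.
pooled = x.mean(axis=1)
print(pooled)  # [[3. 4.]]
```

That fixed-size vector is what then feeds the Dense layers in the course's simple text classifiers.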
By Thusitha C•
Nothing against the instructor, he was really nice. But the content is extremely basic, to the extent that the whole course could be completed in one day. At least the previous courses had graded assignments, but this one was way too easy.
By PRATIK K C•
One text-classification example could have been worked out theoretically, for example classification using an RNN/LSTM. How is a word vector passed as input to one LSTM unit? Seeing it on paper would make the concepts clearer.
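To the question above — how a word vector reaches one LSTM unit — here is a paper-style sketch of a single LSTM timestep in plain numpy. The sizes and random weights are made up for illustration; in a trained Keras model, `W`, `U`, and `b` would be learned, and `x_t` would come from the Embedding layer.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy sizes (assumed for illustration).
embed_dim, units = 4, 3
rng = np.random.default_rng(0)

x_t = rng.normal(size=embed_dim)  # embedding vector of the current word
h_prev = np.zeros(units)          # hidden state from the previous timestep
c_prev = np.zeros(units)          # cell state from the previous timestep

# One input weight matrix, one recurrent matrix, and one bias per gate:
# order here is input gate, forget gate, cell candidate, output gate.
W = rng.normal(size=(4, units, embed_dim))
U = rng.normal(size=(4, units, units))
b = np.zeros((4, units))

i = sigmoid(W[0] @ x_t + U[0] @ h_prev + b[0])  # input gate
f = sigmoid(W[1] @ x_t + U[1] @ h_prev + b[1])  # forget gate
g = np.tanh(W[2] @ x_t + U[2] @ h_prev + b[2])  # candidate cell state
o = sigmoid(W[3] @ x_t + U[3] @ h_prev + b[3])  # output gate

c_t = f * c_prev + i * g       # new cell state
h_t = o * np.tanh(c_t)         # new hidden state, passed to the next timestep
```

The word vector `x_t` enters every gate alongside the previous hidden state `h_prev`; the course's `LSTM(units)` layer repeats this step once per word in the sequence.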
By Hector B•
The course is good but lacks graded coding homework; graded assignments are the most powerful learning tool because they make you reflect on the material, and even when they have bugs or version mismatches, they matter most!
By Pandu D•
Please give an explanation for each piece of code in Colab, as in the previous course. Moving between the explanation video tab and the Colab tab was troublesome. Moreover, some labs contain an error, namely predict_classes().
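For the record, `Sequential.predict_classes()` was deprecated and later removed from Keras (gone as of TensorFlow 2.6), which is likely the error those labs hit. The replacement is to take the probabilities from `model.predict()` and derive the classes yourself; a numpy sketch of the pattern (the probability values below are made up):

```python
import numpy as np

# model.predict(x) returns per-class probabilities, e.g. softmax output:
probs = np.array([[0.1, 0.9],
                  [0.8, 0.2]])

# Multiclass replacement for the removed predict_classes():
classes = np.argmax(probs, axis=-1)
print(classes)  # [1 0]

# For a single sigmoid output, threshold at 0.5 instead:
sigmoid_out = np.array([[0.3], [0.7]])
binary = (sigmoid_out > 0.5).astype(int)
print(binary.ravel())  # [0 1]
```

Swapping this pattern in is enough to make the affected lab cells run on current TensorFlow versions.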
By Reem•
Good overview but the assignments seem a bit disconnected from the classes at times (e.g. when asking us to use regularizers). Transformers are a lot more popular now and they were not touched on in the course.
By Chidvilas K R•
There was no continuity in the videos. Felt it could have been much better. And the level of course is too low. You should change the title to "Basics of Natural Language Processing in TensorFlow".
By Kaivalya B•
The exercises were unclear and ungraded. It is essential to apply the skills learnt from the videos. The programming assignments should be graded, and their comments should be more descriptive as well.
By Pang C H J•
I tried the first ungraded exercise on tokenizing, then realized that Google Colab doesn't have the NLTK library (for tokenization) pre-installed. I then decided not to follow up with the later exercises.
By Haikal A•
This is a very good course for beginners, but it focuses only on practical examples. I wish there were more theory behind the course and also more challenging graded assessments.
By Lautaro R S•
Not as good as the previous courses of the specialization. It is not as engaging; maybe it is the lack of graded exercises, but maybe the contents are also not taught as well as previously.
By Robert K•
Too basic for me. I was hoping for some nice hands-on Tensorflow (not Keras) tutorials. Some deeper modeling and understanding. Good for children :), although they now know more than me.
By Andreas F•
Overall a good course. I would like to see more commentary in the notebooks, though. Furthermore, in my opinion, no Python 2 code or deprecated Keras/TensorFlow functions should be used.
By Purnendu S•
Good and clear lectures, but the programming exercises are ungraded and the weekly quiz questions are too easy. It felt less like learning and more like rushing to complete the course.
By 曹杰•
I got the basic idea behind the sequence models, but a lot of the code shown in the course deals with processing the text and preparing for training, which leaves a huge knowledge gap.
By Ankit G•
I guess there should have been a bit more explanation of the techniques used, to provide more understanding rather than just skimming from 50,000 ft instead of at least 20,000 ft.
By Naveen D B•
Lack of programming assignments makes this course not highly rated in my opinion. You might as well watch the videos on YouTube or audit courses to get the same information.
By Phung T•
The first two courses of the series were awesome, but this course is quite a letdown, in my opinion. There are no graded labs, the content feels pretty rushed, and overall it feels quite lacking.
By David R C S•
A good course to introduce you to the NLP world, but it lacks explanations, so you have to supplement it with other sources even to understand what you are doing.