
Learner Reviews & Feedback for Data Science Capstone by Johns Hopkins University

4.5 stars
1,061 ratings
276 reviews

About the Course

The capstone project class will allow students to create a usable/public data product that can be used to show their skills to potential employers. Projects will be drawn from real-world problems and will be conducted with industry, government, and academic partners....

Top Reviews

NT

Mar 05, 2018

The Capstone did provide a true test of Data Analytics skills. It's like being left alone in a jungle to survive for a month. Either you succumb to nature or come out alive with a smile and confidence.

SS

Mar 29, 2017

Wow, I finally managed to finish the specialization! I definitely learned a lot and also found out the difficulties of building predictors by trying to balance speed, accuracy and memory constraints!


176–200 of 267 Reviews for Data Science Capstone

By Raja J

Mar 27, 2018

Awesome course

By Ahmed Z

Oct 03, 2019

Great Course

By Pedro M

Jan 30, 2020

Pretty cool

By Shailesh P

Apr 28, 2020

Very Good

By Anand V

Jun 19, 2017

Excellent

By Diego T B

Oct 19, 2018

engaging

By Laro N P

Sep 13, 2018

Awesome.

By Sergio R

May 10, 2018

Thanks!

By Amit K

Jul 06, 2017

Thanks.

By Abdelbarre C

Jan 09, 2018

Thanks

By Efejiro A

Feb 23, 2019

Cool

By Ganapathi N K

May 24, 2018

Nice

By Sherif H M A A

Feb 13, 2018

Good

By Thuyen H

May 31, 2016

good

By Prabhakar B

Jan 15, 2019

E

By Anil G

Jul 27, 2018

E

By Dwayne D

Sep 02, 2017

Completion of this project requires most (all?) of the skills you will have learned in completing the prerequisite courses. If you've worked to ensure you truly understand the concepts, tools and techniques presented in the prerequisite courses, you will be able to complete this project. The problem domain is a little different from most of the examples in the prerequisite courses. I find that a good thing. Whenever I learn something I believe to be useful, I always wonder how it applies in other contexts. This course was an exercise in doing just that — applying what you've learned to a "new" (i.e., new to me) domain.

Heads up / Be aware: If you're "like me" — inexperienced with NLP, and one of those people who doesn't feel quite right about using a recommended toolset or algorithm until I understand why it's the right tool for the job — you should start reading up on the basics of text mining, NLP and next-word prediction models 1-2 weeks before you start the course. For some, that might be overkill; but I'm a slow reader at the end of a workday (we all have day jobs, right!?). Given this foundational understanding, I felt comfortable making tradeoffs among the state-of-the-art and the practical, given the project objectives, my own time constraints, etc. Reading the course forums and reviews, I think some who had trouble completing the project weren't able to take sufficient time to get oriented with this domain before attempting to build their first word prediction model.

Note: By "foundational", I mean enough to intuitively grasp why what's accepted as best practice is that. When I've read about someone's approach to solving a problem, and I'm able to say "makes sense, but I probably don't need to do X or Y to meet the need for this effort", then that's often enough… But :-) because I at times overthink things (don't we all!), I get a little more comfortable when I at least skim over descriptions of how a couple others have solved a similar problem; and I can see patterns of convergence… I do NOT mean enough to write your own thesis, unless that's what you really want to do. Whatever floats your boat! LOL

I have a software development background (and completed the previous courses in the specialization), so translating approaches I found described in various sources into code wasn't "easy"; but it wasn't a barrier, either. I was helped along GREATLY by the existence of R packages such as tm and tokenizers, and I was always able to find guidance on addressing thorny issues via "good ole Google Search". Most often, my searches would lead me to StackOverflow or write-ups from capstone project alumni. While I did my own write-ups and wrote my own code, I benefited in a big way from lessons learned by others who've already tackled similar problems.
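For readers wondering what the "word prediction model" reviewers keep referring to actually boils down to, here is a rough sketch of a count-based n-gram predictor with backoff. This is Python purely for illustration (the capstone itself is done in R with packages like tm, tokenizers, or quanteda), and every function name below is my own, not from the course materials:

```python
from collections import Counter, defaultdict

def train_ngram_model(sentences):
    """Count bigram and trigram continuations from tokenized sentences."""
    bigram = defaultdict(Counter)   # (w1,)    -> counts of the next word
    trigram = defaultdict(Counter)  # (w1, w2) -> counts of the next word
    for tokens in sentences:
        for i in range(len(tokens) - 1):
            bigram[(tokens[i],)][tokens[i + 1]] += 1
            if i + 2 < len(tokens):
                trigram[(tokens[i], tokens[i + 1])][tokens[i + 2]] += 1
    return bigram, trigram

def predict_next(bigram, trigram, context):
    """Suggest the next word: try the trigram table, back off to bigrams."""
    if len(context) >= 2 and tuple(context[-2:]) in trigram:
        return trigram[tuple(context[-2:])].most_common(1)[0][0]
    if context and tuple(context[-1:]) in bigram:
        return bigram[tuple(context[-1:])].most_common(1)[0][0]
    return None  # context never seen in training

# Tiny toy corpus; a real submission trains on the provided text corpora.
corpus = [s.split() for s in
          ["i want to eat", "i want to sleep", "i want to eat"]]
bg, tg = train_ngram_model(corpus)
predict_next(bg, tg, ["want", "to"])  # most frequent continuation: "eat"
```

A real submission layers smoothing, pruning of rare n-grams, and memory-conscious storage on top of this, which is exactly where the speed/accuracy/memory tradeoffs reviewers mention come from.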

I would recommend the Data Science Specialization by JHU, which (as it should be) is a package deal with the capstone project. Applying what I learned to a new domain really solidified my understanding and has whetted my appetite for the next challenge.

By Angela W

Apr 17, 2018

Overall, I was semi-satisfied with the capstone project:

On the negative side, my foremost issue is that the project has very little to do with what we learned in the nine courses before. I get that you will always see new data formats as a data scientist, but having the whole course cover numeric data and then having the final project be on text data where you can't apply what you learned seems sub-optimal. Also, to me it seemed that the accuracy increased mostly with how much data you train your algorithms on, and not so much how you design your algorithm. My second issue is that the class only starts every two months, and the assignments are blocked before the session starts so you can't see them if you're trying to get a head start. What happened to everyone learning at their own pace? I have a lot to do and had to switch sessions at least once for most classes, and this class was really stressful for me because I didn't want to move my completion back by two months. Lastly, I really hate RPresenter and that the instructors force us to use it, but maybe that's just me.

On the positive side, I did learn a lot: the basics of text prediction, how to do parallel programming in R, and how to set up an RStudio instance on AWS (the latter two are not very hard; I recommend them to anyone struggling with gigantic runtimes, as long as you're willing to invest $40 or so for the computing power). I liked that the guidelines were very broad, so there was a lot of room for creativity. I also finally found out how to make a pretty(-ish) presentation in R, though I would always choose PowerPoint in real life.

I really enjoyed the series as a whole and learned a great deal.

By Telvis C

Jul 16, 2016

I enjoyed the course. This course took me waaaay more time than I thought because I struggled with a few issues. First, I wish I'd started by taking the NLP online course before starting the Capstone (https://www.youtube.com/watch?v=-aMYz1tMfPg). There was an issue installing RWeka and rJava, and it took me several days to work through it. I eventually moved to using quanteda (https://cran.r-project.org/web/packages/quanteda/vignettes/quickstart.html). I also waited far too long to develop a method to test my model using a subset of the training data, so I could test whether changes to my model improved or reduced performance. It turns out that my model trained on a 25% sample performed just as well as a model trained on 100%. I'm thankful for the Discussion Forum and final peer review process. Both helped me learn how I can improve my model and demo application. I really appreciate the instructors for creating this specialization. I've learned a lot.
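The held-out testing this reviewer wishes he had set up earlier can be sketched minimally like this (Python for illustration, though the capstone is done in R; the function names and toy data are my own): train on part of the corpus, then score top-1 next-word accuracy only on sentences the model never saw.

```python
from collections import Counter, defaultdict

def bigram_counts(sentences):
    """Next-word counts keyed by the preceding word."""
    counts = defaultdict(Counter)
    for tokens in sentences:
        for w1, w2 in zip(tokens, tokens[1:]):
            counts[w1][w2] += 1
    return counts

def top1_accuracy(counts, sentences):
    """Fraction of held-out (word, next-word) pairs predicted correctly."""
    hits = total = 0
    for tokens in sentences:
        for w1, w2 in zip(tokens, tokens[1:]):
            total += 1
            if w1 in counts and counts[w1].most_common(1)[0][0] == w2:
                hits += 1
    return hits / total if total else 0.0

# Split sentences BEFORE training (in practice, shuffle first and use
# a much larger corpus); score only on the held-out portion.
sentences = [s.split() for s in
             ["i want to eat", "i want to sleep", "i want to eat"]]
train, held_out = sentences[:2], sentences[2:]
model = bigram_counts(train)
accuracy = top1_accuracy(model, held_out)
```

With a fixed held-out set like this, you can retrain on 25%, 50%, or 100% samples of the training data and compare accuracy numbers directly, which is exactly the comparison the reviewer describes.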

By Romain F

Jul 03, 2017

A very tough and challenging project, but a great way to learn a lot about Natural Language Processing and algorithm coding in R, and in the end to have a cool Shiny app to add to your portfolio. The project's weekly structure could be enhanced (maybe adding one more week could help) and the weekly instructions, while informative, could also be improved. Thankfully the forum has been very helpful. Informative and motivating videos, but where were the SwiftKey people mentioned? Finally, quizzes 2 and 3 should be replaced by other exercises with more educational value. Overall an interesting learning opportunity!

By Rose G

Mar 31, 2020

Interesting project. Not too tough if you take it slow and simple: my first version was already quite good and I earned a full score without going through the weird circumvolutions I saw other students get into.

New subject I loved to learn about.

It is a bit disappointing though that this does not draw on the complete set of 10 classes: in my case, the knowledge I learnt in "Statistical inference", "Regression Models" and "Machine Learning" was not applied in this project, whereas we could say those courses are the core of the specialization.

By Jay B

Oct 04, 2016

This is not for beginners with no experience. The estimated weekly hours are absurdly low.

No one has seen any sign whatsoever of the industry partner, SwiftKey, despite claims they will be around to help. The field has advanced dramatically since the course was developed. Be prepared to do a lot of research and trial and error.

The specialization has been an excellent way to learn a fair amount on the topic, but it is just the beginning. The capstone will challenge you. It is rewarding when you complete it.

By Victoria A

Mar 29, 2017

With this course I learned to work through a data problem from scratch, get to a real data product, and document it. My only constructive comment is that, when reviewing the projects of classmates, there is a huge dispersion in the effort and quality of the products presented, from very basic and simple apps to very professional products, and the scoring of them all is quite the same, perhaps one or two points of difference out of a maximum score of eleven points.

By Neeraj A

Sep 08, 2019

Feeling proud after completing all the courses under the Data Science Specialization. This was not an easy task to complete, especially if you are not familiar with statistics. It requires continuous dedication and motivation to follow and complete. The course is well designed and covers most of the topics. It's just that the stats part can be enhanced further to cover some basic aspects. Thanks for all the support.

By Rizwan M

Feb 04, 2020

It's good learning; however, NLP is never introduced in the course. It's very hard to compute the metrics and train the model in R. Accuracy of the model is very low, as it's not easy to train models in R, which require humongous data sets to get up to 95% accuracy.

Neural networks (LSTMs) in Python with TensorFlow are the best choice for projects like this.