
Learner reviews and feedback for Sample-based Learning Methods by the University of Alberta

4.8
382 ratings
73 reviews

About the Course

In this course, you will learn about several algorithms that can learn near-optimal policies based on trial-and-error interaction with the environment: learning from the agent's own experience. Learning from actual experience is striking because it requires no prior knowledge of the environment's dynamics, yet can still attain optimal behavior. We will cover intuitively simple but powerful Monte Carlo methods, and temporal difference learning methods including Q-learning. We will wrap up this course by investigating how we can get the best of both worlds: algorithms that combine model-based planning (similar to dynamic programming) and temporal difference updates to radically accelerate learning. By the end of this course you will be able to:

- Understand Temporal-Difference learning and Monte Carlo as two strategies for estimating value functions from sampled experience
- Understand the importance of exploration when using sampled experience rather than dynamic programming sweeps within a model
- Understand the connections between Monte Carlo, Dynamic Programming, and TD
- Implement and apply the TD algorithm for estimating value functions
- Implement and apply Expected Sarsa and Q-learning (two TD methods for control)
- Understand the difference between on-policy and off-policy control
- Understand planning with simulated experience (as opposed to classic planning strategies)
- Implement a model-based approach to RL, called Dyna, which uses simulated experience
- Conduct an empirical study to see the improvements in sample efficiency when using Dyna...
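The Q-learning method mentioned in the description is a tabular, off-policy TD control algorithm. A minimal sketch of its update rule follows; the toy two-state environment, hyperparameters, and function names here are illustrative placeholders, not the course's actual notebook code.

```python
import random
from collections import defaultdict

def epsilon_greedy(q, state, actions, epsilon=0.1):
    """Pick a random action with probability epsilon, else a greedy one."""
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: q[(state, a)])

def q_learning_update(q, s, a, r, s_next, actions, alpha=0.5, gamma=0.9):
    """Off-policy TD update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    target = r + gamma * max(q[(s_next, a2)] for a2 in actions)
    q[(s, a)] += alpha * (target - q[(s, a)])

# Toy example: in state 0, action 1 yields reward +1 and moves to state 1.
# Repeated updates drive Q(0, 1) toward the reward of 1.0.
q = defaultdict(float)
for _ in range(100):
    q_learning_update(q, s=0, a=1, r=1.0, s_next=1, actions=[0, 1])
```

Note that the update bootstraps from the maximum action value in the next state regardless of which action the behavior policy would take; that is what makes Q-learning off-policy, in contrast to Expected Sarsa.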

Top Reviews

KM

Jan 10, 2020

Really great resource to follow along with the RL book. Important suggestion: do not skip the reading assignments; they are really helpful, and following the videos and assignments becomes easy.

KN

Oct 03, 2019

Great course! The notebooks are a perfect level of difficulty for someone learning RL for the first time. Thanks Martha and Adam for all your work on this!! Great content!!

Filter by:

51–73 of 73 reviews for Sample-based Learning Methods

By John H

Nov 10, 2019

It was good.

By Sohail

Oct 07, 2019

Fantastic!

By LuSheng Y

Sep 10, 2019

Very good.

By chao p

Dec 29, 2019

Great

By JDH

Sep 23, 2019

Rating: 4.3 stars – so far (first two classes combined)

Lectures: 4.0 stars

Quizzes: 4.0 stars

Programming assignments: 4.5 stars

Book (Sutton and Barto): 4.5 stars

In the spectrum from the theoretical to practical where you have, very roughly,...

(1) “Why”: Why you are doing what you are doing

(2) “What”: What you are doing

(3) “How”: How to implement it (e.g. programming)…

...this is a “what-how” class.

To cover the “why-what” I strongly recommend augmenting this class with David Silver’s lectures (on YouTube) and notes from a class he gave at UCL. These cover more of the theory/math behind RL but less of the coding. Combined with this class, it probably comprises the best RL education you can get *anywhere*, creating a 5-star combo.

http://www0.cs.ucl.ac.uk/staff/d.silver/web/Teaching.html

By Neil S

Sep 12, 2019

This is THE course to go with Sutton & Barto's Reinforcement Learning: An Introduction.

It's great to be able to repeat the examples from the book and end up writing code that outputs the same diagrams, e.g. the Dyna-Q comparisons for planning. The notebooks strike a good balance between hand-holding for new topics and letting you make your own mistakes and learn from them.

I would rate five stars, but decided to drop one for now as there are still some glitches in the coding of Notebook assignments, requiring work-arounds communicated in the course forums. I hope these will be worked on and the course materials polished to perfection in future.
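The Dyna-Q comparisons this reviewer reproduces combine direct TD updates with planning from a learned model. A minimal sketch of one Dyna-Q step follows, assuming a deterministic tabular model; the function name, hyperparameters, and toy transition are illustrative, not the notebook's actual code.

```python
import random
from collections import defaultdict

def dyna_q_step(q, model, s, a, r, s_next, actions,
                alpha=0.1, gamma=0.95, planning_steps=10):
    # (a) Direct RL: one Q-learning update from the real transition.
    best_next = max(q[(s_next, a2)] for a2 in actions)
    q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
    # (b) Model learning: remember the observed (deterministic) transition.
    model[(s, a)] = (r, s_next)
    # (c) Planning: replay simulated transitions sampled from the model.
    for _ in range(planning_steps):
        (ps, pa), (pr, ps_next) = random.choice(list(model.items()))
        best = max(q[(ps_next, a2)] for a2 in actions)
        q[(ps, pa)] += alpha * (pr + gamma * best - q[(ps, pa)])

# Toy usage: one real transition, then 10 planning updates replay it.
q = defaultdict(float)
model = {}
dyna_q_step(q, model, s=0, a=1, r=1.0, s_next=1, actions=[0, 1])
```

The planning loop is why Dyna-Q is more sample-efficient than plain Q-learning: each real transition is reused many times in simulation.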

By Scott L

Sep 26, 2019

This course series is an incredible introduction to the basics of reinforcement learning, full stop. The course ... style, if you will, is a bit weird at first, but it seems to have been done on purpose with the aim of making the course somewhat timeless; they are presenting maths that will not change, in a format that will (hilariously) be no more corny and weird in 2030 than it is in 2019.

By David C

Oct 10, 2019

A very good course. The lectures are brief and provide a quick overview of the topics. The quizzes require more in-depth reading to pass (covering material not discussed in the lectures) and the projects are difficult but rewarding and really help to cement the information. My only suggestion would be to lengthen the lectures to provide more discussion on the topics.

By Marius L

Sep 20, 2019

Overall, I found the course well made, inspiring and balanced. The videos really helped me to understand the rather austere textbook. I give 4 stars because some of the coding exercises felt more like work in progress, without the help of other students I would not have been able to overcome these issues.

By Yicong H

Dec 05, 2019

It jumps from here to there, but it's nice to have all these algorithms. My gut tells me something is not right: there is too much focus on experience, which means a lot of data. The model part is touched on very little, and the main focus is on when the model is wrong...

By Sebastian T

Feb 28, 2020

It was good in substance, but there are plenty of issues with the automated grader. You spend most of your time dealing with the latter, not on actually learning the material.

By Navid H

Oct 16, 2019

Definitely interesting subjects, but I do not like the teaching method. Very mechanical and dull, with not enough connection to the real world.

By Tri W G

Mar 20, 2020

Pretty clear explanations! Nice starting point if you want to deep dive into RL. It gives clear picture over some confusing terms in RL.

By Cristian V

Mar 31, 2020

The course provides a lot of value. I only give 4 stars because the classes are scripted and feel unnatural to me.

By Max C

Oct 24, 2019

Some of the programming homeworks were difficult to debug due to the feedback from autograder being unhelpful.

By Maxim V

Jan 12, 2020

Good content, but a lot of annoying issues with the grader.

By hope

Jan 25, 2020

This course is ok if you're reading the Sutton & Barto RL book and would like to have some quizzes to follow along. The programming assignments are not really "programming" because you're constrained to typing a handful of lines in a few places into a solution that has largely been prepared for you. With "hints" like

# given the state, select the action using self.choose_action_egreedy(),

# and save current state and action (~2 lines)

### self.past_state = ?

### self.past_action = ?

it is impossible to get them wrong. These exercises are ok as labs (comparing various algorithms, etc.), but the programming part can be done by rote. Coursera has classes with more intense and creative programming assignments, and the learning there seems to be much deeper.
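To illustrate the fill-in-the-blank pattern this reviewer describes: the notebook supplies an agent skeleton and comments, and the learner types the two indicated lines. The names `choose_action_egreedy`, `past_state`, and `past_action` follow the review's quoted hints; the rest of this class is a hypothetical stand-in, not the actual notebook code.

```python
import random

class SketchAgent:
    """Illustrative agent skeleton in the style the review describes."""

    def __init__(self, num_actions=4, epsilon=0.1):
        self.num_actions = num_actions
        self.epsilon = epsilon
        self.q = {}  # (state, action) -> value, learned elsewhere

    def choose_action_egreedy(self, state):
        # Explore with probability epsilon, otherwise act greedily.
        if random.random() < self.epsilon:
            return random.randrange(self.num_actions)
        values = [self.q.get((state, a), 0.0) for a in range(self.num_actions)]
        return values.index(max(values))

    def agent_start(self, state):
        # given the state, select the action using self.choose_action_egreedy(),
        # and save current state and action (~2 lines)
        self.past_state = state
        self.past_action = self.choose_action_egreedy(state)
        return self.past_action
```

The two `self.past_*` assignments are the kind of "~2 lines" the hint asks for; everything around them is scaffolding the learner receives pre-written.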

By Andrew G

Dec 24, 2019

The course needs more support and / or error message output for the programming assignments. Code that seems correct can easily fail the autograder, and the only method of recourse is posting in the forums, which may or may not be received by a moderator.

By Liam M

Mar 27, 2020

The assignments are an exercise in programming far more than they are a learning tool for RL. The course lectures are good, and I recommend auditing the course.

By Chan Y F

Nov 04, 2019

The video content is not elaborated enough; you need to read the book and search the web to understand the ideas.

By Bernard C

Mar 22, 2020

Course was good but assignment graders were terrible.

By Duc H N

Feb 03, 2020

The last test is a little bit tricky.

By Juan C E

Mar 07, 2020

Many mistakes with grading, and a 100% penalty applied for tasks not completed on time, even though the rules say you can submit your assignments and do your quizzes after the deadlines without any penalty.