About this Course
This course will show how one can treat the Internet as a source of data. We will scrape, parse, and read web data, and we will access data using web APIs, working with HTML, XML, and JSON data formats in Python. The course covers Chapters 11-13 of the textbook “Python for Everybody”. To succeed in this course, you should be familiar with the material covered in Chapters 1-10 of the textbook and in the first two courses of this specialization: variables and expressions, conditional execution (loops, branching, and try/except), functions, Python data structures (strings, lists, dictionaries, and tuples), and manipulating files. This course covers Python 3.
What you will learn
- Use regular expressions to extract data from strings
- Understand the protocols web browsers use to retrieve documents and web apps
- Retrieve data from websites and APIs using Python
- Work with XML (eXtensible Markup Language) data (see the sketch after this list)
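To make that last outcome concrete, here is a minimal sketch of parsing XML with the standard library's xml.etree.ElementTree module; the sample document is illustrative only and not taken from the course assignments.

```python
import xml.etree.ElementTree as ET

# Illustrative XML document (not from the course materials).
data = '''<person>
  <name>Chuck</name>
  <phone type="intl">+1 734 303 4456</phone>
  <email hide="yes"/>
</person>'''

# Parse the string into an element tree and pull out a text value
# and an attribute value.
tree = ET.fromstring(data)
print('Name:', tree.find('name').text)         # Name: Chuck
print('Hide:', tree.find('email').get('hide'))  # Hide: yes
```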
Skills you will gain
- JSON
- XML
- Python Programming
- Web Scraping
Offered by

University of Michigan
The mission of the University of Michigan is to serve the people of Michigan and the world through preeminence in creating, communicating, preserving and applying knowledge, art, and academic values, and in developing leaders and citizens who will challenge the present and enrich the future.
Syllabus - What you will learn in this course
Getting Started
In this section you will install Python and a text editor. In previous classes in the specialization this was an optional assignment, but in this class it is the first requirement. From this point forward we will no longer use the browser-based Python grading environment (Skulpt), because it cannot run the more complex programs we will be developing in this class.
Regular Expressions (Chapter 11)
Regular expressions are a specialized language unto themselves, one that lets us succinctly search strings and extract data from them. It is not essential to know regular expressions, but they can be quite useful and powerful.
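As a taste of what this looks like in Python, here is a minimal sketch using the re module; the sample line and pattern are illustrative rather than taken from the course assignments.

```python
import re

# An illustrative line of text; the pattern pulls out anything that looks
# like an email address (non-whitespace characters around an @ sign).
line = 'From: stephen.marquard@uct.ac.za Sat Jan  5 09:14:16 2008'
emails = re.findall(r'\S+@\S+', line)
print(emails)  # ['stephen.marquard@uct.ac.za']
```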
Networks and Sockets (Chapter 12)
In this section we learn about the protocols that web browsers use to retrieve documents and that web applications use to interact with Application Program Interfaces (APIs).
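As a rough sketch of what speaking HTTP at the socket level looks like, here is a minimal example; the host and document (data.pr4e.org/romeo.txt) are an assumption borrowed from a well-known Python for Everybody example, not a requirement of this section.

```python
import socket

# Open a TCP connection to a web server and send an HTTP request by hand.
mysock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
mysock.connect(('data.pr4e.org', 80))
mysock.send('GET http://data.pr4e.org/romeo.txt HTTP/1.0\r\n\r\n'.encode())

# Read the response (headers plus document body) 512 bytes at a time.
while True:
    data = mysock.recv(512)
    if len(data) < 1:
        break
    print(data.decode(), end='')

mysock.close()
```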
Programs that Surf the Web (Chapter 12)
In this section we learn to use Python to retrieve data from websites and APIs over the Internet.
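A minimal sketch of that idea using urllib from the standard library follows; the URL is a hypothetical placeholder for whatever page or API endpoint you want to read.

```python
import urllib.request

# Hypothetical URL: substitute the page or API endpoint you want to retrieve.
url = 'http://www.example.com/'

# Retrieve the document over HTTP and print it line by line.
with urllib.request.urlopen(url) as response:
    for line in response:
        print(line.decode().strip())
```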
Reviews
- 5 stars: 80.87%
- 4 stars: 15.69%
- 3 stars: 2.62%
- 2 stars: 0.46%
- 1 star: 0.34%
Top reviews from USING PYTHON TO ACCESS WEB DATA
This course was really interesting and did a good job introducing complicated topics in usefully simplified form. It was a pleasure to listen to the instructor and I got everything I wanted out of it.
Interesting course. Well structured and paced, with practical real-life examples and clear study materials. I strongly recommend this course for anyone considering learning Python. Thank you Dr. Chuck!
Great course. The lectures are clear and thorough. The assignments are challenging yet doable. The only thing I would like is to see more assignments to get more practice with the techniques learned.
It is a very good course even for an absolute beginner. It gives knowledge of APIs, web scraping, and different web protocols. Dr. Severance is engaging and very good at explaining hard things in a simple way.
About the Python for Everybody Specialization
This Specialization builds on the success of the Python for Everybody course and will introduce fundamental programming concepts including data structures, networked application program interfaces, and databases, using the Python programming language. In the Capstone Project, you’ll use the technologies learned throughout the Specialization to design and create your own applications for data retrieval, processing, and visualization.

Frequently Asked Questions
When will I have access to the lectures and assignments?
What will I get if I subscribe to this Specialization?
Is financial aid available?
More questions? Visit the Learner Help Center.