This course is all about big data. It is for students with SQL experience who want to take the next step on their data journey by learning distributed computing with Apache Spark. Students will gain a thorough understanding of this open-source standard for working with large datasets, along with the fundamentals of data analysis using SQL on Spark, setting the foundation for combining data with advanced analytics at scale and in production environments. The four modules build on one another, and by the end of the course you will understand: the Spark architecture, queries within Spark, common ways to optimize Spark SQL, and how to build reliable data pipelines.
About this Course
What you will learn
Use the collaborative Databricks workspace to write scalable Spark SQL code that executes against a cluster of machines
Inspect the Spark UI to analyze query performance and identify bottlenecks
Create an end-to-end pipeline that reads data, transforms it, and saves the result
Build a medallion (bronze, silver, gold) lakehouse architecture with Delta Lake to ensure the reliability, scalability, and performance of your data
Skills you will gain
- Data Science
- Apache Spark
- Delta Lake
- SQL
Offered by

University of California, Davis
UC Davis, one of the nation’s top-ranked research universities, is a global leader in agriculture, veterinary medicine, sustainability, environmental and biological sciences, and technology. With four colleges and six professional schools, UC Davis and its students and alumni are known for their academic excellence, meaningful public service and profound international impact.
Syllabus - What you will learn from this course
Introduction to Spark
In this module, you will be able to discuss the core concepts of distributed computing and recognize when and where to apply them. You'll be able to identify the basic data structure of Apache Spark™, known as a DataFrame. Additionally, you will use the collaborative Databricks workspace to write SQL code that executes against a cluster of machines.
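As a taste of what this looks like in practice, here is a minimal Spark SQL sketch (the file path and column names are hypothetical) that registers a file-backed view and queries it; Spark distributes the query across the cluster's executors:

```sql
-- Hypothetical example: expose a Parquet file as a temporary view,
-- then query it with ordinary SQL.
CREATE OR REPLACE TEMPORARY VIEW fire_calls
USING parquet
OPTIONS (path "/mnt/data/fire_calls.parquet");

-- A familiar SQL aggregation, executed in parallel on the cluster.
SELECT call_type, COUNT(*) AS call_count
FROM fire_calls
GROUP BY call_type
ORDER BY call_count DESC;
```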
Spark Core Concepts
In this module, you will be able to explain the core concepts of Spark. You will learn common ways to increase query performance by caching data and modifying Spark configurations. You will also use the Spark UI to analyze performance and identify bottlenecks, as well as optimize queries with Adaptive Query Execution.
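For illustration, these are the kinds of commands involved; the table name is hypothetical, while the configuration keys are standard Spark SQL settings:

```sql
-- Cache the table so repeated queries read from cluster memory
-- instead of re-scanning storage.
CACHE TABLE fire_calls;

-- Adaptive Query Execution re-optimizes query plans at runtime using
-- shuffle statistics (enabled by default in recent Spark versions).
SET spark.sql.adaptive.enabled = true;

-- A common tuning knob: the number of partitions used for shuffles.
SET spark.sql.shuffle.partitions = 8;
```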
Engineering Data Pipelines
In this module, you will be able to identify and discuss the general demands of data applications. You'll be able to access data in a variety of formats and compare and contrast the tradeoffs between them. You will explore and examine semi-structured JSON data (common in big data environments), as well as schemas and parallel data writes. You will be able to create an end-to-end pipeline that reads data, transforms it, and saves the result.
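A minimal sketch of such a pipeline in Spark SQL, with a hypothetical JSON source and column names: read semi-structured data, clean it, and write the result out in a columnar format in parallel.

```sql
-- Read semi-structured JSON; Spark infers the schema.
CREATE OR REPLACE TEMPORARY VIEW raw_events
USING json
OPTIONS (path "/mnt/data/events.json");

-- Transform and persist the cleaned result as Parquet.
CREATE TABLE cleaned_events
USING parquet
AS SELECT userId,
          eventType,
          CAST(ts AS TIMESTAMP) AS event_time
   FROM raw_events
   WHERE userId IS NOT NULL;
```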
Data Lakes, Warehouses and Lakehouses
In this module, you will identify the key characteristics of data lakes, data warehouses, and lakehouses. Lakehouses combine the scalability and low-cost storage of data lakes with the speed and ACID transactional guarantees of data warehouses. You will build a production grade lakehouse by combining Spark with the open-source project, Delta Lake. Whoever said time travel isn't possible hasn't been to a lakehouse!
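A sketch of the bronze/silver/gold pattern with Delta Lake, using hypothetical table and column names; the `VERSION AS OF` clause at the end is Delta Lake's time travel syntax:

```sql
-- Bronze: raw data ingested as-is into a Delta table.
CREATE TABLE bronze_events USING DELTA AS
SELECT * FROM raw_events;

-- Silver: filtered, typed, and deduplicated.
CREATE TABLE silver_events USING DELTA AS
SELECT DISTINCT userId,
                eventType,
                CAST(ts AS TIMESTAMP) AS event_time
FROM bronze_events
WHERE userId IS NOT NULL;

-- Gold: business-level aggregates for analytics.
CREATE TABLE gold_daily_counts USING DELTA AS
SELECT DATE(event_time) AS day, COUNT(*) AS events
FROM silver_events
GROUP BY DATE(event_time);

-- "Time travel": query an earlier version of a Delta table.
SELECT * FROM silver_events VERSION AS OF 0;
```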
Reviews
- 5 stars: 65.11%
- 4 stars: 23.89%
- 3 stars: 7.39%
- 2 stars: 1.47%
- 1 star: 2.11%
Top reviews from DISTRIBUTED COMPUTING WITH SPARK SQL
Great course, really well taught and delivered. Only thing I would say is you would really need knowledge of python to really understand this course 100%
A good introduction to Spark SQL. A pity that the course assignment was not very challenging but more focused on understanding of Spark itself.
Great course! I loved the exercises, they were very helpful and I learned a lot about Spark and ML from this course.
This has been an amazing course. What is worth mentioning is how the content was delivered. Nice hands on. Highly recommended for anyone who is new to Spark
About the Learn SQL Basics for Data Science Specialization
This Specialization is intended for learners with no previous coding experience seeking to develop SQL query fluency. Through four progressively more difficult SQL projects with data science applications, you will cover topics such as SQL basics, data wrangling, SQL analysis, AB testing, distributed computing using Apache Spark, Delta Lake, and more. These topics will prepare you to apply SQL creatively to analyze and explore data; demonstrate efficiency in writing queries; create data analysis datasets; conduct feature engineering; use SQL with other data analysis and machine learning toolsets; and use SQL with unstructured data sets.

Frequently Asked Questions
When will I have access to the lectures and assignments?
What will I get if I subscribe to this Specialization?
Is financial aid available?
More questions? Visit the Learner Help Center.