This course is for students with SQL experience who now want to take the next step: gaining familiarity with distributed computing using Spark. Students will learn when to use Spark and how Spark, as an engine, uniquely combines data and AI technologies at scale. The four modules build on one another, and by the end of the course students will understand Spark architecture, Spark DataFrames, optimizing reading and writing data, and how to build a machine learning model. The first module introduces Spark, including how Spark handles distributed computing and what Spark DataFrames are. Module 2 covers core Spark concepts such as storage versus compute, caching, partitions, and the Spark UI. The third module covers engineering data pipelines: connecting to databases, schemas and types, file formats, and writing good data. The final module looks at applying Spark to machine learning through a business use case, a short introduction to what machine learning is, building and applying models, and a course conclusion. By understanding when to use Spark, whether scaling out when the model or data is too large to process on a single machine or simply needing faster results, students will hone their SQL skills and become more adept data scientists.
About this Course
What you will learn
Use the collaborative Databricks workspace and write SQL code that executes against a cluster of machines
Use Spark UI to analyze performance and identify bottlenecks
Create an end-to-end pipeline that reads data, transforms it, and saves the result
Build a linear regression model and make predictions using Spark SQL
Skills you will gain
Offered by:

University of California, Davis
UC Davis, one of the nation’s top-ranked research universities, is a global leader in agriculture, veterinary medicine, sustainability, environmental and biological sciences, and technology. With four colleges and six professional schools, UC Davis and its students and alumni are known for their academic excellence, meaningful public service and profound international impact.
Syllabus - What you will learn from this course
Introduction to Spark
In this module, you will be able to discuss the core concepts of distributed computing and be able to recognize when and where to apply them. You'll be able to identify the basic data structure of Apache Spark™, known as a DataFrame. You will be able to use the collaborative Databricks workspace and write SQL code that executes against a cluster of machines.
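To make the idea concrete, here is a minimal sketch (not taken from the course notebooks) of the pattern this module teaches: exposing a Spark DataFrame as a table and querying it with plain SQL. The column names and values are made up for illustration; on Databricks a SparkSession named `spark` already exists, so the builder call is only needed when running locally.

```python
from pyspark.sql import SparkSession

# On Databricks, `spark` is provided; locally, create a session.
spark = SparkSession.builder.appName("intro-to-spark").getOrCreate()

# A tiny illustrative DataFrame.
people = spark.createDataFrame(
    [("Alice", 34), ("Bob", 29), ("Carol", 41)],
    ["name", "age"],
)

# Register the DataFrame as a temporary view so SQL can reference it by name.
people.createOrReplaceTempView("people")

# The SQL statement is planned and executed by Spark across the cluster.
spark.sql("SELECT name FROM people WHERE age > 30").show()
```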
Spark Core Concepts
In this module, you will be able to explain the core concepts of Spark, and increase query performance by caching your data and modifying Spark configurations. You will also be able to use the Spark UI to analyze performance and identify bottlenecks.
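As a rough illustration of the two performance levers this module covers, the sketch below (building on the hypothetical `people` DataFrame from the previous example) caches a DataFrame and adjusts a Spark SQL configuration; the values chosen are illustrative, not course defaults.

```python
# Reduce the number of shuffle partitions for a small dataset;
# the default (200) is often too high for toy data.
spark.conf.set("spark.sql.shuffle.partitions", "8")

# Mark the DataFrame for caching; the cache is populated lazily.
people.cache()

# An action materializes the cache, so later queries read from memory.
people.count()

# Subsequent aggregations reuse the cached data; compare timings and
# storage in the Spark UI ("Storage" and "SQL" tabs).
spark.sql("SELECT age, COUNT(*) AS n FROM people GROUP BY age").show()
```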
Engineering Data Pipelines
In this module, you will be able to identify and discuss the general demands of data applications. You'll be able to access data in a variety of formats and compare and contrast the tradeoffs between them. You will explore semi-structured JSON data, which is common in big data environments, along with schemas and parallel data writes. You will be able to create an end-to-end pipeline that reads data, transforms it, and saves the result.
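A minimal pipeline sketch in the spirit of this module follows: read semi-structured JSON with an explicit schema, transform it, and write the result in a columnar format. The paths, schema, and column names are hypothetical.

```python
from pyspark.sql import functions as F
from pyspark.sql.types import (
    StructType, StructField, StringType, DoubleType, TimestampType
)

schema = StructType([
    StructField("user_id", StringType(), True),
    StructField("amount", DoubleType(), True),
    StructField("event_time", TimestampType(), True),
])

# Supplying the schema avoids a costly inference pass over the raw files.
events = spark.read.schema(schema).json("/tmp/events/*.json")

# A simple transformation: daily spend per user.
daily_spend = (
    events
    .withColumn("event_date", F.to_date("event_time"))
    .groupBy("user_id", "event_date")
    .agg(F.sum("amount").alias("total_amount"))
)

# Parquet writes are parallelized across partitions; overwrite keeps reruns idempotent.
daily_spend.write.mode("overwrite").parquet("/tmp/daily_spend")
```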
Machine Learning Applications of Spark
In this module, you will be able to define the basics of machine learning and identify the difference between regression and classification problems. You will build a linear regression model and use it to make predictions using Spark SQL. You will also be able to describe how machine learning fits in with concepts you learned in this course and from the other courses in this series. And lastly, you will be able to explain how a machine learning model is trained.
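For orientation, here is an illustrative sketch of the kind of model the final module builds: a linear regression trained with Spark MLlib on features assembled from DataFrame columns. The column names and data are hypothetical, not from the course.

```python
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.regression import LinearRegression

# Toy training data: predict price from size and number of rooms.
homes = spark.createDataFrame(
    [(1200.0, 3.0, 250000.0), (1600.0, 4.0, 320000.0), (900.0, 2.0, 180000.0)],
    ["sqft", "rooms", "price"],
)

# MLlib estimators expect a single vector column of features.
assembler = VectorAssembler(inputCols=["sqft", "rooms"], outputCol="features")
train = assembler.transform(homes)

# Fit the model and inspect its coefficients.
lr = LinearRegression(featuresCol="features", labelCol="price")
model = lr.fit(train)
print(model.coefficients, model.intercept)

# Predictions come back as a new DataFrame column, queryable with Spark SQL.
model.transform(train).select("price", "prediction").show()
```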
Reviews
Top reviews from DISTRIBUTED COMPUTING WITH SPARK SQL
I highly recommend this course for anyone in the BI and Data space interested in learning Spark. The course gives an easy-to-understand introduction to the framework and applicable hands-on examples.
This course provides hands-on experience in using Databricks's Spark. I learned a lot. I recommend that people with some SQL experience take this course to build efficient programming skills!
Amazing course that really cuts through the fundamentals of using distributed computing power to analyze and manipulate data. Well organised structure on fundamentals
Great introduction to Spark with Databricks, which seems to be an intuitive tool! Really cool to make the link between SQL and data science with a basic ML example!
About the Learn SQL Basics for Data Science Specialization
This Specialization is intended for learners with no previous coding experience who want to develop SQL query fluency. Through four progressively more difficult SQL projects with data science applications, you will cover topics such as SQL basics, data wrangling, SQL analysis, A/B testing, distributed computing using Apache Spark, and more. These topics will prepare you to apply SQL creatively to analyze and explore data; demonstrate efficiency in writing queries; create data analysis datasets; conduct feature engineering; use SQL with other data analysis and machine learning toolsets; and use SQL with unstructured data sets.
