Data Analysis Using Pyspark

46 ratings
Coursera Project Network
2,248 learners already enrolled.
In this Guided Project, learners will:

Learn how to set up Google Colab for distributed data processing

Learn to apply different queries to your dataset to extract useful information

Learn how to visualize this information using matplotlib

1.5 hours
No download needed
Split-screen video
English
Desktop only

One of the important topics every data analyst should be familiar with is distributed data processing. As a data analyst, you should be able to apply different queries to your dataset to extract useful information from it. But what if your data is so big that working with it on your local machine is no longer practical? That is when distributed data processing and Spark technology come in handy. In this project, we are going to work with the pyspark module in Python, using the Google Colab environment, to apply queries to a dataset from Last.fm, an online music service where users can listen to different songs. The dataset consists of two CSV files, listening.csv and genre.csv. We will also learn how to visualize our query results using matplotlib.

Skills you will develop

Google Colab, Data Analysis, Python Programming, PySpark SQL

Learn step-by-step

In a video that plays in a split screen alongside your workspace, your instructor will walk you through these steps:

  1. Prepare Google Colab for distributed data processing

  2. Mount our Google Drive into the Google Colab environment

  3. Import the first file of our dataset (1 GB) into a PySpark DataFrame

  4. Apply some queries to extract useful information from our data

  5. Import the second file of our dataset (3 MB) into a PySpark DataFrame

  6. Join the two DataFrames and prepare the result for more advanced queries

  7. Visualize our query results using matplotlib

How Guided Projects work

Your workspace is a cloud desktop that loads right in your browser; no download is required.

In a split-screen video, your instructor walks you through the project step by step.




Frequently asked questions


More questions? Visit the Learner Help Center.