DES230 "Big Data Analytics Using Spark" by Yoav Freund, Professor of Computer Science and Engineering UC San Diego. Learn how to analyze large datasets using Jupyter notebooks, MapReduce and Spark as a platform. Part 4 of Data Science MicroMasters® Program on edX.
In data science, data is called "big" if it cannot fit into the memory of a single standard laptop or workstation. Analyzing big datasets therefore requires a cluster of tens, hundreds, or thousands of computers. Using such clusters effectively requires distributed file systems, such as the Hadoop Distributed File System (HDFS), and corresponding computational models, such as Hadoop MapReduce and Spark. In this course, part of the Data Science MicroMasters program, we will learn where the bottlenecks lie in massively parallel computation and how to use Spark to minimize them. We will learn how to perform supervised and unsupervised machine learning on massive datasets using Spark's Machine Learning Library (MLlib).
In this course, as in the others in this MicroMasters program, we will gain hands-on experience using PySpark within the Jupyter notebook environment.
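As a small taste of that hands-on work, here is a minimal sketch of a MapReduce-style word count in PySpark. A local Spark installation is assumed, and the input path is a hypothetical placeholder.

```python
from pyspark import SparkContext

# Run Spark locally, using all available cores.
sc = SparkContext("local[*]", "WordCount")

counts = (sc.textFile("data/sample.txt")          # hypothetical input file
            .flatMap(lambda line: line.split())   # map: line -> words
            .map(lambda word: (word, 1))          # map: word -> (word, 1)
            .reduceByKey(lambda a, b: a + b))     # reduce: sum counts per word

print(counts.take(5))                             # inspect a few (word, count) pairs
sc.stop()
```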
Programming Spark using PySpark
Identifying the computational tradeoffs in a Spark application
Performing data loading and cleaning using Spark and Parquet (see the sketch after this list)
Modeling data through statistical and machine learning methods
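To make the last two outcomes concrete, here is a minimal sketch of loading and cleaning a Parquet file with Spark and fitting an MLlib model to it. The file path and column names are hypothetical, chosen only for illustration.

```python
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.clustering import KMeans

spark = SparkSession.builder.appName("sketch").getOrCreate()

# Load a columnar Parquet file and drop rows with missing values.
df = spark.read.parquet("data/weather.parquet").dropna()

# Assemble numeric columns into the feature vector MLlib expects.
assembler = VectorAssembler(inputCols=["tmin", "tmax"], outputCol="features")
model = KMeans(k=3, seed=1).fit(assembler.transform(df))

print(model.clusterCenters())
spark.stop()
```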
This is a ten-week course.
(1) Memory hierarchy: latency vs. throughput
(2) Spark basics
(3) DataFrames and SQL (see the sketch after this list)
(4) PCA and weather analysis
(5) K-means and intrinsic dimensions
(6) Decision trees, boosting, and random forests
(7) Neural networks and TensorFlow
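For a flavor of the DataFrames and SQL material, the following sketch runs the same aggregation twice, once as a SQL query and once with the DataFrame API. The toy data, table name, and column names are assumptions for illustration.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sql-sketch").getOrCreate()

# A tiny hypothetical dataset of temperature readings by state.
df = spark.createDataFrame(
    [("NY", 11.2), ("NY", 13.4), ("CA", 21.0)],
    ["state", "temp"],
)
df.createOrReplaceTempView("weather")

# The same aggregation, expressed in SQL and with the DataFrame API.
spark.sql(
    "SELECT state, AVG(temp) AS avg_temp FROM weather GROUP BY state"
).show()
df.groupBy("state").avg("temp").show()

spark.stop()
```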