Taught by a 4-person team including 2 Stanford-educated ex-Googlers and 2 ex-Flipkart Lead Analysts. This team has decades of practical experience working with Java and with billions of rows of data.
Get your data to fly using Spark for analytics, machine learning and data science
Let’s parse that.
What's Spark? If you are an analyst or a data scientist, you're used to juggling multiple systems for working with data: SQL, Python, R, Java, etc. With Spark, you get a single engine where you can explore and play with large amounts of data, run machine learning algorithms, and then use the same system to productionize your code.
Analytics: Using Spark and Python, you can analyze and explore your data in an interactive environment with fast feedback. The course will show you how to leverage the power of RDDs and DataFrames to manipulate data with ease.
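To give a flavor of that interactive, chained style, here is a plain-Python sketch of a map/filter/reduce pipeline over some hypothetical flight-delay values (illustrative only; a real Spark job would start from sc.parallelize or sc.textFile):

```python
# Plain-Python sketch of the chained RDD style Spark encourages.
# The delay values below are made up for illustration.
delays = [12, -3, 45, 0, 7, 60]  # per-flight delays, in minutes

# Same shape as rdd.filter(...).reduce(...): keep late flights, sum their delays.
late = list(filter(lambda d: d > 0, delays))
total_late = sum(late)
print(total_late)  # 124
```

With a real RDD, each step would be lazy and distributed across the cluster rather than eager and in-memory.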
Machine Learning and Data Science: Spark's core functionality and built-in libraries make it easy to implement complex algorithms like recommendations in very few lines of code. We'll cover a variety of datasets and algorithms, including PageRank, MapReduce, and graph datasets.
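As a taste of the MapReduce pattern mentioned above, here is the canonical word-count example sketched in plain Python, without a cluster (in Spark this would be roughly rdd.flatMap(str.split).map(lambda w: (w, 1)).reduceByKey(add)):

```python
from collections import Counter

# Word count: the "hello world" of MapReduce.
# The sample lines are made up for illustration.
lines = ["spark makes data fly", "data science with spark"]

words = [w for line in lines for w in line.split()]  # the "map" phase
counts = Counter(words)                              # the "reduce" phase
print(counts["spark"])  # 2
```

Spark's version distributes both phases across partitions; the shape of the computation is the same.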
Lots of cool stuff ..
.. and of course all of Spark's basic and advanced features:
This course covers:
What does Donald Rumsfeld have to do with data analysis?
Why is Spark so cool?
An introduction to RDDs - Resilient Distributed Datasets
Built-in libraries for Spark
The PySpark Shell
Transformations and Actions
See it in Action: Munging Airlines Data with PySpark - I
[For Linux/Mac OS Shell Newbies] Path and other Environment Variables
RDD Characteristics: Partitions and Immutability
RDD Characteristics: Lineage (RDDs know where they came from)
What can you do with RDDs?
Create your first RDD from a file
Average distance travelled by a flight using map() and reduce() operations
Get delayed flights using filter(), cache data using persist()
Average flight delay in one-step using aggregate()
Frequency histogram of delays using countByValue()
See it in Action: Analyzing Airlines Data with PySpark - II
Special Transformations and Actions
Average delay per airport, use reduceByKey(), mapValues() and join()
Average delay per airport in one step using combineByKey()
Get the top airports by delay using sortBy()
Lookup airport descriptions using lookup(), collectAsMap(), broadcast()
See it in Action: Analyzing Airlines Data with PySpark - III
Get information from individual processing nodes using accumulators
See it in Action: Using an Accumulator variable
Long-running programs using spark-submit
See it in Action: Running a Python script with spark-submit
Behind the scenes: What happens when a Spark script runs?
Running MapReduce operations
See it in Action: MapReduce with Spark
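The one-step average from the aggregate() lecture above can be sketched in plain Python: fold each element into a (sum, count) pair, then divide once at the end (hypothetical delay values; a real run would call rdd.aggregate with a seqOp and a combOp):

```python
from functools import reduce

# One-pass average via a (sum, count) accumulator, mirroring
# rdd.aggregate((0, 0), seqOp, combOp). Sample delays are made up.
delays = [10, 0, 25, 5]  # flight delays in minutes

def seq_op(acc, value):
    """Fold one element into the (sum, count) accumulator."""
    return (acc[0] + value, acc[1] + 1)

total, count = reduce(seq_op, delays, (0, 0))
print(total / count)  # 10.0
```

In Spark, a second function (the combOp) would merge per-partition accumulators; with a single local list, one fold suffices.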
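Similarly, the per-airport average from the reduceByKey()/mapValues() lectures can be mimicked with a dictionary keyed by airport (the (airport, delay) pairs below are invented; in Spark they would form a pair RDD):

```python
from collections import defaultdict

# Plain-Python analogue of reduceByKey() followed by mapValues().
# Sample (airport, delay) pairs are made up for illustration.
pairs = [("SFO", 10), ("JFK", 30), ("SFO", 20), ("JFK", 10)]

sums = defaultdict(lambda: (0, 0))
for airport, delay in pairs:          # the reduceByKey step
    s, c = sums[airport]
    sums[airport] = (s + delay, c + 1)

averages = {k: s / c for k, (s, c) in sums.items()}  # the mapValues step
print(averages["SFO"])  # 15.0
```

combineByKey() collapses the same computation into one operation by letting you supply the creator, merger, and combiner functions directly.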
We value your feedback. If you have any recommendations for improving our services, notice any bugs, or have any suggestions, feel free to share them with us by sending an email to firstname.lastname@example.org