In this guide, I'm going to introduce some techniques for tuning your Apache Spark jobs for optimal efficiency. Using Spark on massive datasets quickly becomes nontrivial, especially once you are dealing with terabytes of data or more. The first instinct is often to throw a bigger cluster at the problem, with hundreds of machines, hundreds of cores, and terabytes of RAM, but an over-sized cluster comes with a cost that can grow very quickly. That's why I wrote this guide: to help you achieve better performance and save costs.
The full guide is published online at https://umbertogriffo.gitbook.io/apache-spark-best-practices-and-tuning/.
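To give a taste of the kind of knobs this guide covers, here is a minimal sketch of how resource and shuffle settings are expressed when building a Spark application. The specific values are illustrative assumptions, not recommendations; the following chapters discuss how to choose them for your workload.

```scala
import org.apache.spark.sql.SparkSession

// Minimal sketch: the settings below are the kind of knobs this guide helps you size.
// The values are illustrative assumptions, not recommendations.
val spark = SparkSession.builder()
  .appName("tuning-example")
  .config("spark.executor.memory", "8g")          // memory allocated to each executor
  .config("spark.executor.cores", "4")            // CPU cores per executor
  .config("spark.sql.shuffle.partitions", "200")  // number of partitions used for shuffles
  .getOrCreate()
```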