I have built a k-means application for both Spark and Flink.
My test case is a clustering of 1 million points on a 3-node cluster.
When memory becomes a bottleneck, Flink starts spilling to disk and runs slowly, but it works.
Spark, however, loses executors when memory is full and starts over (an infinite loop?).
I tried to customize the memory settings with help from the mailing list, thanks, but Spark still does not work.
Are there any configurations that need to be set? I mean, Flink works with low memory, so Spark should be able to as well, or not?
I am not a Spark expert (I am a Flink contributor). As far as I know, Spark is not able to spill to disk if there is not enough main memory, which is one advantage of Flink over Spark. However, Spark has announced a new project called "Tungsten" to enable managed memory similar to Flink's. I don't know whether this feature is available yet: https://databricks.com/blog/2015/04/28/project-tungsten-bringing-spark-closer-to-bare-metal.html
There are a couple of SO questions about Spark out-of-memory problems (an Internet search for "spark out of memory" yields many results, too):
spark java.lang.OutOfMemoryError: Java heap space
Spark runs out of memory when grouping by key
Spark out of memory
Maybe one of those helps.
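Beyond those links, a low-memory workaround that is often suggested is to persist the cached k-means input with a disk-backed storage level rather than the default memory-only cache, and to size executor memory explicitly. The sketch below is a minimal PySpark illustration; the path, memory values, and parsing logic are assumptions, not settings known to fix your cluster.

```python
# Hedged sketch: cache the k-means input with a disk-backed storage level so
# partitions that do not fit in memory are spilled to disk instead of causing
# executors to fail. Paths and memory values are illustrative assumptions.
from pyspark import SparkConf, SparkContext, StorageLevel

conf = (SparkConf()
        .setAppName("kmeans-low-memory")
        .set("spark.executor.memory", "2g")      # assumed per-executor heap
        .set("spark.memory.fraction", "0.6"))    # unified memory share (Spark 1.6+)
sc = SparkContext(conf=conf)

# hypothetical input: one comma-separated point per line
points = (sc.textFile("hdfs:///data/points.csv")
            .map(lambda line: [float(x) for x in line.split(",")]))
points.persist(StorageLevel.MEMORY_AND_DISK)  # spill to disk instead of OOM
```

The same StorageLevel is available from the Java and Scala APIs if your k-means job is not written in Python.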
Related
I currently own only one computer, and I won't have another.
I run Spark on its CPU cores with master=local[5], using it directly: I declare spark-core and spark-sql as dependencies, do almost no other configuration, and my programs start immediately. It's comfortable, of course.
But should I attempt to create an architecture with a master and some workers by means of Docker containers or minikube (Kubernetes) on my computer?
Will solution #2, with all the settings it requires, reward me with better performance because Spark is truly designed to work that way, even on a single computer?
Or will I lose time, because the mode I'm currently running in, with no network usage and no need for data locality, will always perform better, so that solution #1 will always be the best on a single computer?
My hypothesis is that #1 is fine, but I have no real measurements and no basis for comparison. Who has experience with both approaches on a single computer?
It really depends on your goals. If you will always run your Spark code on a single node with the local master, then just use it. But if you intend to run the resulting code in distributed mode on multiple machines, then emulating a cluster with Docker could be useful: you'll get your code running in a truly distributed manner, and you'll be able to find problems that are not always apparent when you run your code with the local master.
Instead of using Docker directly (which could be tricky to set up, although it's still possible), you might consider running Spark on Kubernetes, for example via minikube - there are plenty of articles on this topic to be found via Google.
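For quick experiments, it can also help to keep the master URL out of the code, so the same program runs with local[5] today and against a Docker or minikube cluster later. A minimal sketch, assuming the master URL is supplied through an environment variable (the variable name and URLs are illustrative):

```python
# Minimal sketch: keep the master URL configurable so the same code runs with
# local[5] and, later, against a Docker/minikube cluster. SPARK_MASTER is an
# assumed environment variable name, not a Spark convention.
import os
from pyspark.sql import SparkSession

master = os.environ.get("SPARK_MASTER", "local[5]")  # e.g. "spark://host:7077" or "k8s://https://..."

spark = (SparkSession.builder
         .master(master)
         .appName("single-machine-vs-cluster")
         .getOrCreate())

df = spark.range(10 * 1000 * 1000)   # small synthetic workload
print(df.selectExpr("sum(id)").first()[0])
spark.stop()
```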
Having done testing on this with executor sizes, the cutover point where it makes sense to use multiple executors is more than 32 CPUs. AWS EMR's Spark runtime defaults to at least 4 CPUs per executor, and Databricks always uses fat executors, which means more than 32 CPUs on the 8xl instances. Your greatest limitation tends to be the JVM's garbage collection, which caps the size of the heap. Local mode has a couple of performance advantages compared to cluster mode:
Whole-stage code generation has to run on both the driver and every single executor. For short queries this can add several hundred milliseconds per stage.
Driver <-> executor communication has latency.
Memory is shared between the driver and executors, which reduces the chance of OOM and reduces the amount of spilling to disk.
People end up choosing multiple executors/instances not because it would be faster than a single instance, but because it is the only way to scale up in terms of data volume and parallelization (and also for failure recovery).
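To make the fat-versus-small trade-off concrete, here is a hedged sketch of the kind of sizing settings involved when carving up a single 32-core, 128 GB machine; the numbers are illustrative assumptions, not recommendations:

```python
# Hedged sketch: two ways to carve a 32-core / 128 GB node into executors.
# All numbers are illustrative; GC behaviour ultimately decides what works.
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("executor-sizing-sketch")
         # "fat" layout: few JVMs with large heaps (more GC pressure per JVM)
         .config("spark.executor.instances", "2")
         .config("spark.executor.cores", "16")
         .config("spark.executor.memory", "56g")
         # a many-small alternative would be 8 instances x 4 cores x 14g each
         .getOrCreate())
```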
If you're feeling ambitious, there's a performance benchmark called TPC-DS that runs a set of data-processing queries against a standardized dataset:
https://github.com/databricks/spark-sql-perf
https://github.com/maropu/spark-tpcds-datagen
Also, if you're feeling adventurous, the Spark codebase includes a script to fire up a mini cluster on minikube if you want a quick and easy way to test this.
I have set up a JupyterHub and configured a PySpark kernel for it. When I open a PySpark notebook (under username Jeroen), two processes are added: a Python process and a Java process. The Java process is assigned 12g of virtual memory (see image). When running a test script on a range of 1B numbers it grows to 22g. Is that something to worry about when we work on this server with multiple users? And if it is, how can I prevent Java from allocating so much memory?
You don't need to worry about virtual memory usage; reserved memory is much more important here (the RES column).
You can control the size of the JVM heap using the --driver-memory option passed to Spark (if you use the PySpark kernel on JupyterHub, you can find it in the environment under the PYSPARK_SUBMIT_ARGS key). This is not exactly the memory limit for your application (there are other memory regions on the JVM), but it is very close.
So, when you have a multi-user setup, you should teach users to set an appropriate driver memory (the minimum they need for their processing) and to shut down notebooks after they finish working.
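As an illustration, here is a minimal sketch of capping the driver heap by setting PYSPARK_SUBMIT_ARGS before the SparkContext is created; the 2g value is an assumption, not a recommendation, and in a JupyterHub setup the variable would normally live in the kernel spec rather than in the notebook itself:

```python
# Hedged sketch: cap the JVM driver heap for a PySpark session by setting
# PYSPARK_SUBMIT_ARGS before the SparkContext is created. The 2g value is an
# assumption; pick the minimum each user actually needs.
import os

os.environ["PYSPARK_SUBMIT_ARGS"] = "--driver-memory 2g pyspark-shell"

from pyspark import SparkContext

sc = SparkContext(appName="jupyterhub-user-session")
print(sc.range(10).sum())  # quick sanity check that the context started
sc.stop()
```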
I find that memory usage is very high when a shuffle occurs in my Spark job.
The following figure shows the memory metric when I use 700MB of data and just three rdd.map operations.
(I use Ganglia as the monitoring tool and show just three nodes of my cluster. The x-axis is the time series, the y-axis is memory usage.)
[figure: Ganglia memory usage for the three rdd.map operations]
The following figure shows the memory metric when I use the same data with three rdd.groupBy and three rdd.flatMap operations (order: groupBy1 -> flatMap1 -> groupBy2 -> flatMap2 -> groupBy3 -> flatMap3).
[figure: Ganglia memory usage for the groupBy/flatMap pipeline]
As you can see, the memory of all three nodes increases considerably (by several GB) even though I use just 700MB of data. In fact I have 8 worker nodes, and the memory of all 8 workers increases considerably.
I think the main cause is the shuffle, since rdd.map involves no shuffle but rdd.groupBy does.
In this situation, I wonder about the three points below:
Why is so much memory used? (More than 15GB across my worker nodes when the input is only 700MB.)
Why does it seem that the memory used by old shuffles is not released before the Spark application finishes?
Is there any way to reduce the memory usage or to release the memory held by old shuffles?
P.S. My environment:
Cloud platform: MS Azure (8 worker nodes)
Spec of one worker: 8-core CPU, 16GB RAM
Language: Java
Spark version: 1.6.2
Java version: 1.7 (development), 1.8 (execution)
Run in Spark standalone mode (not using YARN or Mesos)
In Spark, the operating system decides whether the shuffle data can stay in its buffer cache or should be spilled to disk. Each map task creates as many shuffle spill files as there are reducers. Spark doesn't merge and partition the shuffle spill files into one big file, as Apache Hadoop does.
Example: if there are 6000 (R) reducers and 2000 (M) map tasks, there will be M*R = 2000*6000 = 12 million shuffle files. This is because, in Spark, each map task creates as many shuffle spill files as there are reducers. This causes performance degradation.
Please refer to this post, which explains this in detail, as a continuation of the explanation above.
You can also refer to the Optimizing Shuffle Performance in Spark paper.
~Kedar
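Regarding the third question in the post above, one commonly suggested way to shrink what the shuffle has to carry is to replace groupBy with a combining operation such as reduceByKey or aggregateByKey, which pre-aggregates on the map side. A minimal PySpark sketch with made-up keys and values (the question uses the Java API, where the same methods exist on JavaPairRDD):

```python
# Hedged sketch: pre-aggregate with reduceByKey instead of grouping all values
# per key, so the shuffle carries one partial sum per key per partition
# instead of every record. The (key, value) data below is hypothetical.
from pyspark import SparkContext

sc = SparkContext(appName="shuffle-reduction-sketch")

pairs = sc.parallelize([("a", 1), ("b", 2), ("a", 3), ("b", 4)])

# instead of: pairs.groupByKey().mapValues(sum)
sums = pairs.reduceByKey(lambda x, y: x + y)  # map-side combine, smaller shuffle

print(sums.collect())
sc.stop()
```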
So assume I've got a cluster with 100 GB of memory for Spark to use, and a 2000 GB dataset on which I want to run an iterative application with 200 iterations.
My question is: when using .cache(), will Spark keep the first 100 GB in memory and perform the 200 iterations before automatically reading the next 100 GB?
When working within the memory limit, Spark's advantages are very clear, but when working with larger datasets I'm not entirely sure how Spark and YARN manage the data.
This is not the behaviour you will see. Spark's caching uses LRU eviction, so if you cache a dataset that is too big for memory, only the most recently used part will be kept in memory. However, Spark also has a MEMORY_AND_DISK persistence mode (described in more detail at https://spark.apache.org/docs/latest/programming-guide.html#rdd-persistence), which sounds like it could be a good fit for your case.
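A minimal sketch of that persistence mode, assuming an RDD-based job; the input path, iteration count, and per-iteration logic are placeholders:

```python
# Hedged sketch: persist an oversized dataset with MEMORY_AND_DISK so
# partitions that don't fit in memory are written to local disk instead of
# being recomputed on every iteration.
from pyspark import SparkContext, StorageLevel

sc = SparkContext(appName="iterative-on-big-dataset")

data = sc.textFile("hdfs:///data/big-input")  # hypothetical 2000 GB input
data.persist(StorageLevel.MEMORY_AND_DISK)    # instead of data.cache()

result = None
for i in range(200):
    # placeholder for the real per-iteration logic
    result = data.map(len).reduce(lambda a, b: a + b)

print(result)
sc.stop()
```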
I am confused by a Hadoop namenode memory problem.
When the namenode's memory usage rises above a certain percentage (say 75%), reading and writing HDFS files through the Hadoop API fails (for example, some calls to open() throw an exception). What is the reason? Has anyone encountered the same thing?
P.S. This time the namenode disk I/O is not high and the CPU is relatively idle.
What determines the namenode's QPS (Queries Per Second)?
Thanks very much!
Since the namenode is basically just an RPC server managing a HashMap with the blocks, you have two major memory problems:
Java's HashMap is quite costly, and its collision resolution (separate chaining) is costly as well, because it stores collided elements in a linked list.
The RPC server needs threads to handle requests. Hadoop ships with its own RPC framework, and you can configure it with dfs.namenode.service.handler.count for the datanodes (the default is 10), or with dfs.namenode.handler.count for other clients, such as MapReduce jobs and JobClients that want to submit a job. When a request comes in and a new handler has to be created, the namenode may run out of memory (new threads also allocate a good chunk of stack space; you may need to increase this).
So these are the reasons why your namenode needs so much memory.
What determines the namenode's QPS (Queries Per Second)?
I haven't benchmarked it yet, so I can't give you very good tips on that. Certainly, tune the handler counts higher than the number of tasks that can run in parallel, plus speculative execution.
Depending on how you submit your jobs, you have to fine-tune the other property as well.
Of course, you should always give the namenode enough memory so that it has headroom and does not fall into full garbage collection cycles.