I am working with Dask on a distributed cluster, and I noticed a peak in memory consumption when getting the results back to the local process.
My minimal example consists of instantiating the cluster and creating a simple array of ~1.6 GB with dask.array.arange.
I expected the memory consumption to be around the array size, but I observed a memory peak of around 3.2 GB.
Is there any copy done by Dask during the computation? Or does JupyterLab need to make a copy?
import dask.array
import dask_jobqueue
import distributed
cluster_conf = {
    "cores": 1,
    "log_directory": "/work/scratch/chevrir/dask-workspace",
    "walltime": '06:00:00',
    "memory": "5GB"
}
cluster = dask_jobqueue.PBSCluster(**cluster_conf)
cluster.scale(n=1)
client = distributed.Client(cluster)
client
# ~1.6 GB in memory
a = dask.array.arange(2e8)
%load_ext memory_profiler
%memit a.compute()
# peak memory: 3219.02 MiB, increment: 3064.36 MiB
What happens when you do compute():
the graph of your computation is constructed (this is small) and sent to the scheduler
the scheduler gets workers to produce the pieces of the array, which should total about 1.6 GB on the workers
the client constructs an empty array for the output you are asking for, knowing its type and size
the client receives bunches of bytes across the network or IPC from each worker that holds pieces of the output; these are copied into the client's output array
the complete array is returned to you
You can see that the penultimate step here necessarily requires duplication of data. The original byte buffers may eventually be garbage collected later.
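If you only need an aggregate rather than the full 1.6 GB array in the local process, you can sidestep that duplication by reducing on the cluster and pulling back only the small result. A minimal sketch, assuming the same array as above:
import dask.array

a = dask.array.arange(2e8)
total = a.sum().compute()  # only a single scalar is transferred to the client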
Related
I'm trying to use Dask to process a dataset larger than memory, stored in chunks saved as NumPy files. I'm loading the data lazily:
import numpy as np
import dask
import dask.array as da

# Lazily wrap each shard in a delayed np.load, then concatenate the pieces
array = da.concatenate([
    da.from_delayed(
        dask.delayed(np.load)(path),
        shape=(size, window_len, vocab_size),
        dtype=np.float32
    )
    for path, size in zip(shard_paths, shard_sizes)
])
Then I run some processing on the array using da.map_blocks:
da.map_blocks(fn, array, drop_axis=[-1]).compute()
When I run this, my process gets killed, presumably due to high memory usage (not only is the data larger than memory, but there is also a memory limit on each process).
I could easily limit the memory by processing the chunks sequentially, but that way I would not benefit from the parallelism provided by Dask.
How can I limit the memory used by Dask (e.g. by only loading a certain number of chunks at a time) while still parallelizing over as many chunks as possible?
It is possible to limit the memory used by the process on Unix using the resource module:
import resource
# max_memory is the address-space limit for this process, in bytes
resource.setrlimit(resource.RLIMIT_AS, (max_memory, max_memory))
Dask seems to be able to reduce its memory usage once it reaches this limit.
However, the process can still crash on the delayed np.load, so this doesn't necessarily solve the problem.
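A minimal sketch of applying this limit before running the computation from the question; the 4 GB cap is an arbitrary example value, and fn and array are the objects defined above:
import resource

max_memory = 4 * 1024 ** 3  # example cap in bytes, not a recommendation
resource.setrlimit(resource.RLIMIT_AS, (max_memory, max_memory))

result = da.map_blocks(fn, array, drop_axis=[-1]).compute()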
I want to understand the efficient memory management process for Dask objects. I have set up a Dask GPU cluster and I am able to execute tasks that run across the cluster. However, with the Dask objects, especially when I run the compute function, the process that runs on the GPU quickly grows by using more and more memory, and soon I get an out-of-memory error.
I want to understand how I can release the memory from a Dask object once I am done using it. In the following example, how can I release that object after the compute function? I am running the following code a few times, and the memory keeps growing in the process where it is running.
import cupy as cp
import pandas as pd
import cudf
import dask_cudf
nrows = 100000000
df2 = cudf.DataFrame({'a': cp.arange(nrows), 'b': cp.arange(nrows)})
ddf2 = dask_cudf.from_cudf(df2, npartitions=5)
ddf2['c'] = ddf2['a'] + 5
ddf2
ddf2.compute()
Please check this blog post by Nick Becker. You may want to set up a client first.
You read into cudf first, which you shouldn't do as a general practice. You should read directly into dask_cudf.
When dask_cudf computes, the result returns as a cudf dataframe, which MUST fit into the remaining memory of your GPU. Chances are that reading into cudf first already took a chunk of your memory.
Then, you can delete a Dask object when you are done with it using client.cancel().
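A rough sketch of that pattern, assuming a distributed GPU cluster is already running; the scheduler address and CSV paths are placeholders, and the persist() call is an addition here so that client.cancel() has distributed data to release:
import dask_cudf
from dask.distributed import Client

client = Client("tcp://scheduler:8786")   # placeholder scheduler address

ddf = dask_cudf.read_csv("data/*.csv")    # read directly into dask_cudf, not cudf
ddf["c"] = ddf["a"] + 5
ddf = ddf.persist()                       # materialize the partitions on the workers
result = ddf.compute()                    # the collected result must fit on one GPU

# ... use result ...

del result                                # drop the local cudf dataframe
client.cancel(ddf)                        # release the distributed pieces
del ddf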
I am trying to use some of the uncore hardware counters, such as skx_unc_imc0-5::UNC_M_WPQ_INSERTS, which is supposed to count the number of allocations into the Write Pending Queue. The machine has 2 Intel Xeon Gold 5218 (Cascade Lake) CPUs, with 2 memory controllers per CPU. The Linux version is 5.4.0-3-amd64. I have the following simple loop and I am reading this counter for it. Array elements are 64 bytes in size, equal to a cache line.
for (int i = 0; i < 1000000; i++) {
    array[i].value = 2;
}
For this loop, when I map memory to the DRAM NUMA node, the counter gives around 150,000 as a result, which maybe makes sense: there are 6 channels in total for the 2 memory controllers in front of this NUMA node, which use DRAM DIMMs in interleaving mode. Then for each channel there is one separate WPQ, I believe, so skx_unc_imc0 gets 1/6 of all the stores. There are skx_unc_imc0-5 counters that I got with papi_native_avail, supposedly one per channel.
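As a back-of-the-envelope check of that expectation (my own arithmetic, not measured output):
stores = 1_000_000        # loop iterations, one 64-byte cache line written each
channels = 6              # 2 iMCs x 3 channels serving this NUMA node
print(stores / channels)  # ~166,667 inserts per channel, close to the observed ~150,000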
The unexpected result is when, instead of mapping to the DRAM NUMA node, I map the program to Non-Volatile Memory, which is presented as a separate NUMA node on the same socket. There are 6 NVM DIMMs per socket, which form one interleaved region. So when writing to NVM, there should similarly be 6 different channels used, with the same single WPQ in front of each, which should again get 1/6 of the write inserts.
But UNC_M_WPQ_INSERTS returns only around 1,000 as a result on NV memory. I don't understand why; I expected it to similarly give around 150,000 writes in the WPQ.
Am I interpreting/understanding something wrong? Or are there two different WPQs per channel depending on whether the write goes to DRAM or NVM? Or what else could be the explanation?
It turns out that UNC_M_WPQ_INSERTS counts the number of allocations into the Write Pending Queue only for writes to DRAM.
Intel has added a corresponding hardware counter for Persistent Memory: UNC_M_PMM_WPQ_INSERTS, which counts write requests allocated in the PMM Write Pending Queue for Intel® Optane™ DC persistent memory.
However, there is no such native event showing up in papi_native_avail, which means it can't be monitored with PAPI yet. In Linux 5.4, some of the PMM counters can be found directly in perf list under uncore events, such as unc_m_pmm_bandwidth.write - Intel Optane DC persistent memory bandwidth write (MB/sec), derived from unc_m_pmm_wpq_inserts, unit: uncore_imc. This implies that even though UNC_M_PMM_WPQ_INSERTS is not directly listed in perf list as an event, it should exist on the machine.
As described here, the EventCode for this counter is 0xE7, so it can be used with perf as a raw hardware event descriptor as follows: perf stat -e uncore_imc/event=0xe7/. However, it seems that it does not support event modifiers to specify user-space counting with perf. After pinning the thread to the same socket as the NVM NUMA node, for the program that basically only does the loop described in the question, the result from perf roughly makes sense:
Performance counter stats for 'system wide':

    1,035,380    uncore_imc/event=0xe7/
So far this seems to be the best guess.
I'm distributing the computation of some functions using Dask. My general layout looks like this:
from dask.distributed import Client, LocalCluster, as_completed
cluster = LocalCluster(
    processes=config.use_dask_local_processes,
    n_workers=1,
    threads_per_worker=1,
)
client = Client(cluster)
cluster.scale(config.dask_local_worker_instances)
fcast_futures = []
# For each group do work
for group in groups:
    fcast_futures.append(client.submit(_work, group))

# Wait till the work is done
for done_work in as_completed(fcast_futures, with_results=False):
    try:
        result = done_work.result()
    except Exception as error:
        log.exception(error)
My issue is that for a large number of jobs I tend to hit memory limits. I see a lot of:
distributed.worker - WARNING - Memory use is high but worker has no data to store to disk. Perhaps some other process is leaking memory? Process memory: 1.15 GB -- Worker memory limit: 1.43 GB
It seems that each future isn't releasing its memory. How can I trigger that? I'm using dask==1.2.0 on Python 2.7.
Results are held by the scheduler so long as there is a future on a client pointing to them. Memory is released when (or shortly after) the last future is garbage-collected by Python. In your case you are keeping all of your futures in a list throughout the computation. You could try modifying your loop:
for done_work in as_completed(fcast_futures, with_results=False):
    try:
        result = done_work.result()
    except Exception as error:
        log.exception(error)
    done_work.release()
or replacing the as_completed loop with something that explicitly removes futures from the list once they have been processed.
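A minimal sketch of that second option, removing each future from the list as soon as it has been processed so that nothing keeps a reference to its result:
for done_work in as_completed(fcast_futures, with_results=False):
    try:
        result = done_work.result()
    except Exception as error:
        log.exception(error)
    fcast_futures.remove(done_work)  # the list no longer holds this future
    del done_work                    # let the last reference be collected promptly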
This is a follow-up to this question.
I'm experiencing problems with persisting a large dataset in distributed memory. I have a scheduler running on one machine and 8 workers each running on their own machines connected by 40 gigabit ethernet and a backing Lustre filesystem.
Problem 1:
ds = DataSlicer(dataset) # ~600 GB dataset
dask_array = dask.array.from_array(ds, chunks=(13507, -1, -1), name=False) # ~22 GB chunks
dask_array = client.persist(dask_array)
When inspecting the Dask status dashboard, I see all 28 tasks being assigned to and processed by one worker while the other workers do nothing. Additionally, when every task has finished processing and the tasks are all in the "In memory" state, only 22 GB of RAM (i.e. the first chunk of the dataset) is actually stored on the cluster. Access to indices within the first chunk is fast, but any other index forces a new round of reading and loading the data before the result returns. This seems contrary to my belief that .persist() should pin the complete dataset across the memory of the workers once it finishes execution. In addition, when I increase the chunk size, one worker often runs out of memory and restarts because it is assigned multiple huge chunks of data.
Is there a way to manually assign chunks to workers instead of the scheduler piling all of the tasks on one process? Or is this abnormal scheduler behavior? Is there a way to ensure that the entire dataset is loaded into RAM?
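One possible direction (a sketch under the assumption that the persist simply has not finished or been balanced yet, not a confirmed fix): block until the persist completes, then ask the scheduler to spread the in-memory chunks across the workers.
from dask.distributed import wait

dask_array = client.persist(dask_array)
wait(dask_array)    # block until every chunk is computed and held in memory
client.rebalance()  # redistribute in-memory results across the workers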
Problem 2:
I found a temporary workaround by treating each chunk of the dataset as its own separate dask array and persisting each one individually.
dask_arrays = [
    da.from_delayed(lazy_slice, shape, dtype, name=False)
    for lazy_slice, shape in zip(lazy_slices, shapes)
]
for i in range(len(dask_arrays)):
    dask_arrays[i] = client.persist(dask_arrays[i])
I tested the bandwidth from persisted and published dask arrays to several parallel readers by calling .compute() on different chunks of the dataset in parallel. I could never achieve more than 2 GB/s aggregate bandwidth from the dask cluster, far below our network's capabilities.
Is the scheduler the bottleneck in this situation, i.e. is all data being funneled through the scheduler to my readers? If this is the case, is there a way to get in-memory data directly from each worker? If this is not the case, what are some other areas in dask I may be able to investigate?
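For reference, a minimal sketch of the kind of parallel-reader bandwidth test described above; the scheduler address, published dataset name, and slice bounds are illustrative assumptions, and one copy of this would run per reader process:
import time
from dask.distributed import Client

client = Client("tcp://scheduler:8786")        # placeholder scheduler address
arr = client.get_dataset("persisted_dataset")  # placeholder published name

start = time.time()
chunk = arr[0:13507].compute()                 # each reader pulls a different slice
elapsed = time.time() - start
print(f"{chunk.nbytes / elapsed / 1e9:.2f} GB/s from this reader")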