Hive job on Tez execution engine fails on memory allocation

org.apache.parquet.hadoop.MemoryManager$1: New Memory allocation 1046531 bytes is smaller than the minimum allocation size of 1048576 bytes.

Related

Getting CUDA out of memory error on Google Colab when there is enough free memory

I am getting a CUDA out of memory error on Google Colab even though there is enough free memory.
RuntimeError: CUDA out of memory. Tried to allocate 2.00 MiB (GPU 0; 15.90 GiB total capacity; 14.96 GiB already allocated; 21.75 MiB free; 14.98 GiB reserved in total by PyTorch)
I am using a batch size of 1 and I have emptied the cache using torch.cuda.empty_cache(). Does anyone have any ideas about what could be happening?
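Not part of the original question, but a minimal sketch of the checks that usually go with this situation, assuming a PyTorch training/evaluation loop; model, inputs, and output are placeholder names:

    import torch

    # Inspect what PyTorch itself is holding on the GPU. The "free" memory reported
    # by nvidia-smi excludes blocks cached by PyTorch, which empty_cache() releases.
    print(torch.cuda.memory_allocated() / 1024**2, "MiB allocated")
    print(torch.cuda.memory_reserved() / 1024**2, "MiB reserved by the caching allocator")

    # Typical culprits even at batch size 1:
    #   1. Tensors kept alive across iterations (e.g. accumulating `loss` instead of `loss.item()`).
    #   2. Gradients being tracked during evaluation; wrap inference in no_grad().
    with torch.no_grad():            # hypothetical evaluation step
        output = model(inputs)       # `model` and `inputs` are placeholders

    del output                       # drop references before freeing the cache
    torch.cuda.empty_cache()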

Word size and memory addresses

My understanding of word size and memory addresses is as follows. An 8-bit machine will have an 8-bit address bus and therefore 256 memory addresses. A memory address is the location of a single unit of storage, so this machine can make use of 256 bytes of RAM. Now a 32-bit machine has a 32-bit word size, i.e. 4 bytes. At this point I get confused. In terms of usable memory, online sources tell me 4 GB, but if each memory address is 4 bytes in size then surely it only has about a billion (2^30) total memory addresses available, i.e. only 1 GB of RAM can be used? What am I mixing up here?
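For reference, the two quantities being conflated here can be separated with a quick calculation (a sketch, assuming the common case of a byte-addressable machine):

    address_bits = 32

    # Byte-addressable machine: every address names one byte, regardless of word size.
    byte_addressable = 2 ** address_bits * 1      # 4294967296 bytes = 4 GiB

    # Word-addressable machine with 4-byte words: every address names a whole word.
    word_addressable = 2 ** address_bits * 4      # 16 GiB, not 1 GiB

    print(byte_addressable // 2**30, word_addressable // 2**30)   # 4 16

The word size describes the width of registers and ALU operations; it does not shrink the number of addresses, which is fixed by the width of the address bus.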

tensorflow GPU: resource exhausted error

I'm currently running a CNN on 3D medical images on TensorFlow GPU. Whenever I run the code, a resource exhausted error appears in the command prompt. I have already tried running the code in small batches of size 1 (one patient at a time).
My GPU is an NVIDIA GeForce GTX 960. I'm looking at my GPU's specifications, but I'm not sure which component is limiting the memory. Is it the Standard Memory Config (2 GB)?
The command prompt returns the following:
2017-06-11 16:23:37.095587: W c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\core\framework\op_kernel.cc:1152] Resource exhausted: OOM when allocating tensor with shape[3,3,3,8,16]
2017-06-11 16:23:47.096178: W c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\core\common_runtime\bfc_allocator.cc:273] Allocator (GPU_0_bfc) ran out of memory trying to allocate 365.63MiB.
Current allocation summary follows.
2017-06-11 16:23:47.096349: I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\core\common_runtime\bfc_allocator.cc:643] Bin (256): Total Chunks: 0, Chunks in use: 0 0B allocated for chunks. 0B client-requested for chunks. 0B in use in bin. 0B client-requested in use in bin.
......
2017-06-11 16:23:47.144036: I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\core\common_runtime\bfc_allocator.cc:696] 3 Chunks of size 13824 totalling 40.5KiB
2017-06-11 16:23:47.144745: I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\core\common_runtime\bfc_allocator.cc:696] 3 Chunks of size 383385600 totalling 1.07GiB
2017-06-11 16:23:47.145486: I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\core\common_runtime\bfc_allocator.cc:696] 1 Chunks of size 398141184 totalling 379.70MiB
2017-06-11 16:23:47.146771: I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\core\common_runtime\bfc_allocator.cc:700] Sum Total of in-use chunks: 1.44GiB
2017-06-11 16:23:47.146796: I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\core\common_runtime\bfc_allocator.cc:702]
Stats:
Limit: 1548396134
InUse: 1548374272
MaxInUse: 1548396032
NumAllocs: 35
MaxAllocSize: 398141184
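Not from the original post, but a common first mitigation for this setup, assuming the TensorFlow 1.x session API that matches the 2017 log above (a sketch, not a guaranteed fix on a 2 GB card):

    import tensorflow as tf

    config = tf.ConfigProto()
    config.gpu_options.allow_growth = True                     # allocate GPU memory on demand
    config.gpu_options.per_process_gpu_memory_fraction = 0.9   # optional hard cap

    sess = tf.Session(config=config)
    # If OOM persists, the usual next steps are smaller input patches/volumes or
    # fewer/smaller filters, since a 2 GB card must hold the 3D activation maps,
    # not just the small [3,3,3,8,16] kernel weights.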

Maximum memory size a system can support

Suppose that I have a computer with an address register of size 16 bits (MAR, for example). The smallest addressable unit in this computer is a word and each word is of size 2 bytes. What is the maximum memory size (in bytes) this system can support?
I thought it would be 2^16 = 65536 bytes, but the part about the smallest addressable unit implies that this is not the way to solve it.
Thanks in advance
There is no direct correlation between the maximum amount of memory a system can support and the size of its address registers.
16-bit computers 30 years ago could very well support more than 64 kilobytes. On the other hand, modern 64-bit processors typically only have lanes for 52 bits (or fewer), and even so a typical computer cannot come close to supporting 2^52 bytes of memory.
Typical 64-bit computers today could in theory address 16 exbibytes, but present-time CPUs only support 4 petabytes of physical and 256 terabytes of per-process virtual memory. Typical desktop mainboards support 128 GiB at most, if you buy extra-expensive DIMMs. With affordable DIMMs, you're limited to about half as much (there are only so many slots).
Operating systems typically allow for main memory sizes in the hundreds of gigabytes only (e.g. 512 GiB for Windows 8 Enterprise/Professional and 128 GiB otherwise, or as little as 16 GiB for Windows 7 Home Premium).
Generally the smallest addressable unit is one byte, as you assumed in your calculation; if that were the case here, the maximum would indeed be 2^16 * 1 = 65536 bytes. However, because on this system each address refers to a two-byte word, it is actually 2^16 * 2 = 131072 bytes.
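The same arithmetic, written out as a small sketch:

    address_bits   = 16        # width of the MAR
    bytes_per_unit = 2         # the smallest addressable unit is a 2-byte word

    addressable_units = 2 ** address_bits             # 65536 distinct addresses
    max_memory_bytes  = addressable_units * bytes_per_unit
    print(max_memory_bytes)                            # 131072 bytes = 128 KiB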

Understanding memory usage in CUDA

I have an NVIDIA GTX 570 graphics card running on an Ubuntu 10.10 system with CUDA 4.0.
I know that for performance, we need to access memory efficiently, and use registers and shared memory on the device cleverly.
However, I don't understand how to calculate the number of registers available per thread, or how much shared memory a single block can use, and other such simple but important quantities for particular kernel configurations.
I want to understand this by an explicit example.
Incidentally, I am currently trying to write a particle code, in which one of the kernels should look like this.
Each block is a 1-D collection of threads, and each grid is a 1-D collection of blocks.
Number of blocks : 16384
Number of threads per block : 32 ( => total threads 32*16384 = 524288)
Each thread block is given a 32 x 32 two-dimensional integer array of shared memory to work with.
Within a thread I would like to store some numbers of type double, but I am not sure how many such doubles I can store without any registers spilling into local memory (which is on the device). Can someone tell me how many doubles can be stored per thread for this kernel configuration?
Also, is the above-mentioned shared-memory configuration valid for each of my blocks?
A sample computation of how one would go about deducing these things would be very illustrative and helpful.
Here is the information about my GTX 570 (using deviceQuery from the CUDA SDK):
[deviceQuery] starting...
./deviceQuery Starting...
CUDA Device Query (Runtime API) version (CUDART static linking)
Found 1 CUDA Capable device(s)
Device 0: "GeForce GTX 570"
CUDA Driver Version / Runtime Version 4.0 / 4.0
CUDA Capability Major/Minor version number: 2.0
Total amount of global memory: 1279 MBytes (1341325312 bytes)
(15) Multiprocessors x (32) CUDA Cores/MP: 480 CUDA Cores
GPU Clock Speed: 1.46 GHz
Memory Clock rate: 1900.00 Mhz
Memory Bus Width: 320-bit
L2 Cache Size: 655360 bytes
Max Texture Dimension Size (x,y,z) 1D=(65536), 2D=(65536,65535), 3D=(2048,2048,2048)
Max Layered Texture Size (dim) x layers 1D=(16384) x 2048, 2D=(16384,16384) x 2048
Total amount of constant memory: 65536 bytes
Total amount of shared memory per block: 49152 bytes
Total number of registers available per block: 32768
Warp size: 32
Maximum number of threads per block: 1024
Maximum sizes of each dimension of a block: 1024 x 1024 x 64
Maximum sizes of each dimension of a grid: 65535 x 65535 x 65535
Maximum memory pitch: 2147483647 bytes
Texture alignment: 512 bytes
Concurrent copy and execution: Yes with 1 copy engine(s)
Run time limit on kernels: Yes
Integrated GPU sharing Host Memory: No
Support host page-locked memory mapping: Yes
Concurrent kernel execution: Yes
Alignment requirement for Surfaces: Yes
Device has ECC support enabled: No
Device is using TCC driver mode: No
Device supports Unified Addressing (UVA): Yes
Device PCI Bus ID / PCI location ID: 2 / 0
Compute Mode:
< Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >
deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 4.0, CUDA Runtime Version = 4.0, NumDevs = 1, Device = GeForce GTX 570
[deviceQuery] test results...
PASSED
Press ENTER to exit...
So, the kernel configuration is a little complicated. You should use the CUDA Occupancy Calculator. On the other hand, you have to study how warps work. Once a block is assigned to an SM, it is further divided into 32-thread units called warps. We can say that a warp is the unit of thread scheduling in an SM. We can calculate the number of warps that reside in an SM for a given block size and a given number of blocks assigned to each SM. In your case a warp consists of 32 threads, so if you have a block with 256 threads then you have 8 warps. Choosing a correct kernel configuration depends on your data and operations; remember that you want to fully occupy an SM, that is, reach full thread capacity in each SM and have the maximum number of warps available for scheduling around long-latency operations. Another important thing is not to exceed the limit on threads per block, in your case 1024.
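To make that concrete with the deviceQuery numbers above, here is a back-of-the-envelope sketch. The 63-registers-per-thread cap is the compute capability 2.0 hardware limit, and the compiler reserves some registers for addresses and temporaries, so treat the result as an upper bound (the exact per-kernel register count is reported by nvcc with -Xptxas -v):

    # Figures taken from the deviceQuery output for the GTX 570 (compute capability 2.0).
    regs_per_block    = 32768
    smem_per_block    = 49152          # bytes
    threads_per_block = 32

    # Shared-memory check: a 32 x 32 array of 4-byte ints per block.
    smem_needed = 32 * 32 * 4                          # 4096 bytes
    assert smem_needed <= smem_per_block               # the proposed configuration fits easily

    # Register budget: the block-level pool would allow 1024 registers per thread,
    # but compute capability 2.0 caps each thread at 63 registers.
    regs_per_thread    = min(regs_per_block // threads_per_block, 63)
    doubles_per_thread = regs_per_thread // 2          # a double occupies two 32-bit registers
    print(doubles_per_thread)                          # ~31, before compiler-reserved registers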

Resources