I am using kafka-streams and the off-heap memory usage grows up to the physical limit of the machine. However, when running kafka-streams in Docker, memory usage grows past the limit of the container, so the container gets OOM killed.
My assumption is that rocksdb is allocating the off-heap space. -Xmx can be used to limit the heap usage, but I cannot find anything similar for the rocksdb off-heap usage.
How does rocksdb detect the physical memory limit and is there a way to simulate this limit in a container?
This is an issue of memory fragmentation.
You can either tune the glibc memory allocator by setting the environment variable MALLOC_ARENA_MAX=2 or switch the memory allocator from glibc to jemalloc.
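For example (a rough sketch; the image name is a placeholder, jemalloc must be installed in the image, and the library path varies by distro), either fix could be applied when starting the container:
# limit the number of glibc malloc arenas
docker run -e MALLOC_ARENA_MAX=2 my-kafka-streams-app
# or preload jemalloc instead of the glibc allocator
docker run -e LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so.2 my-kafka-streams-app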
Related
I'm encountering a critical problem. I'm working on a Java RCP app (desktop) which frequently crashes on my machine (but not on my colleague's machine).
I set the Xmx and Xms to the same values as in my colleague's configuration and ran a memory diagnostic, but the app still crashes.
According to the memory diagnostic charts, at the moment of the crash the non-heap memory is increasing (because of the growing number of loaded classes). My colleague's computer has about 3 times as much RAM as my machine. After setting the heap memory (Xmx & Xms) as on the other machine, I'm still suspecting a heap memory problem (I'm using the latest version of Java 8).
Is there any way to know the limit of the heap memory?
Does the amount of RAM affect the non-heap memory?
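As a quick check (a hedged sketch; this assumes a Linux/macOS shell, on Windows use findstr instead of grep), you can ask the JVM which maximum heap size it actually resolved to:
java -XX:+PrintFlagsFinal -version | grep -i MaxHeapSize
The output reflects whatever -Xmx resolves to, or, when no -Xmx is given, the default the JVM derives from the machine's physical RAM.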
I am using the official elasticsearch docker image. Since ES requires a certain number of memory-mapped regions (as documented), I increased it using
docker-machine ssh <NAME> sudo sysctl -w vm.max_map_count=262144
I also read here that the memory allocated should be around 50% of the total system memory.
I am confused about how these two play together. How does allocating more memory-mapped regions affect the RAM allocated? Is it part of the RAM, or is it taken on top of the RAM allocated to elasticsearch?
To sum it up very shortly, the heap is used by Elasticsearch only, and Lucene will use the rest of the memory to map index files directly into memory for blazing fast access.
That's the main reason why the best practice is to allocate half the memory to the ES heap and leave the remaining half to Lucene. However, there's also another best practice of not allocating more than 32-ish GB of RAM to the ES heap (and sometimes even less, around 30GB).
So, if you have a machine with 128GB of RAM, you won't allocate 64GB to ES but still a maximum of 32-ish GB, and Lucene will happily gobble up all the remaining 96GB of memory to map its index files.
Tuning the memory settings is a careful mix of giving enough memory (but not too much) to ES and making sure Lucene can have a blast by using as much of the remaining memory as possible.
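As a concrete sketch for the official Docker image (the 16g figure is just an assumption for a 32GB host and the version tag is a placeholder; size the heap to roughly half your RAM, capped around 31GB), the heap can be set via the ES_JAVA_OPTS environment variable:
docker run -e "ES_JAVA_OPTS=-Xms16g -Xmx16g" docker.elastic.co/elasticsearch/elasticsearch:<version>
The remaining RAM is then left to the OS page cache, which is what Lucene's memory-mapped index files rely on.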
My Java application runs on Windows Server 2008 R2 and JDK 1.6
When I monitor it with JConsole, the committed virtual memory increases continually over time, while the heap memory usage stays below 50MB.
The max heap size is 1024MB.
The application creates many small, short-lived event objects over time. The behavior is as if each heap allocation is counted against the committed virtual memory.
When the committed virtual memory size approaches 1000MB, the application crashes with a native memory allocation failure.
java.lang.OutOfMemoryError: requested 1024000 bytes for GrET in C:\BUILD_AREA\jdk6\hotspot\src\share\vm\utilities\growableArray.cpp. Out of swap space?
My conclusion is the virtual address space of 2GB has been exhausted.
Why does JConsole show the committed virtual address space growing over time even though the heap is not growing over time?
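One possible sanity check suggested by that conclusion (hedged, since the exact cause isn't known from the above): a ~2GB virtual address space limit is typical of a 32-bit JVM process on Windows, so it may be worth confirming which JVM is actually running:
java -version
A 64-bit HotSpot JVM includes "64-Bit Server VM" in its output; if that string is missing, the process is confined to roughly 2GB of user address space no matter how much RAM the server has, which would match the behavior described above.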
I read somewhere that pinned memory in CUDA is a scarce resource. What is the upper bound on pinned memory, on Windows and on Linux?
Pinned memory is just physical RAM in your system that is set aside and not allowed to be paged out by the OS. So once pinned, that amount of memory becomes unavailable to other processes (effectively reducing the memory pool available to the rest of the OS).
The maximum pinnable memory is therefore determined by what other processes (other apps, the OS itself) are competing for system memory. Which processes are concurrently running in either Windows or Linux (e.g. whether they themselves are pinning memory) will determine how much memory is available for you to pin at that particular time.
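Since the practical limit is just whatever physical memory is not already claimed by other processes, a rough (hedged) way to check it at a given moment on Linux is simply:
free -h
The "available" column is approximately what you could still pin before starving the rest of the system.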
I am currently testing insertion of keys in a database Redis (on local).
I have more than 5 million keys and just 4GB of RAM, so at some point I hit the RAM capacity and swapping kicks in (and my PC grinds to a halt)...
My problem: how can I monitor memory usage on the machine hosting the Redis database, and use that to stop inserting keys into Redis once memory runs low?
Thanks.
Memory is a critical resource for Redis performance. Used memory is the total number of bytes allocated by Redis using its allocator (either standard libc, jemalloc, or an alternative allocator such as tcmalloc).
You can collect all memory utilization metrics for a Redis instance by running “info memory”.
127.0.0.1:6379> info memory
# Memory
used_memory:1007280
used_memory_human:983.67K
used_memory_rss:2002944
used_memory_rss_human:1.91M
used_memory_peak:1008128
used_memory_peak_human:984.50K
Sometimes, when Redis is configured with no max memory limit, memory usage will eventually reach system memory and the server will start throwing “Out of Memory” errors. At other times, Redis is configured with a max memory limit but with the noeviction policy, which causes the server not to evict any keys and thus blocks any writes until memory is freed. The solution to such problems is to configure Redis with a max memory limit and a suitable eviction policy; the server then starts evicting keys according to that policy as memory usage approaches the max.
Memory RSS (Resident Set Size) is the number of bytes that the operating system has allocated to Redis. If the ratio of used_memory_rss to used_memory is greater than ~1.5, it signifies memory fragmentation. The fragmented memory can be recovered by restarting the server.
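That ratio is also reported directly by Redis as mem_fragmentation_ratio, so a quick check (a small sketch using the same info memory fields shown above) could be:
redis-cli info memory | grep -E 'used_memory:|used_memory_rss:|mem_fragmentation_ratio:'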
Concerning memory usage, I'd advise you to look at the redis.io FAQ and this article about using redis as an LRU cache.
You can either cap the memory usage via the maxmemory configuration setting, in which case once the memory limit is reached all write requests will fail with an error, or you can set maxmemory-policy to allkeys-lru, for example, to start overwriting the least recently used data on the server with the data you currently need. For most use cases this gives you enough flexibility to handle such problems through proper config.
My advice is to keep things simple and manage this issue through configuration of the redis server rather than introducing additional complexity through os-level monitoring or the like.
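A minimal sketch of that configuration (the 2gb limit is just an example value; add the same two settings to redis.conf to make them survive a restart):
redis-cli config set maxmemory 2gb
redis-cli config set maxmemory-policy allkeys-lru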
There is a good Unix utility named vmstat. It is like top but for the command line, so you can get the memory usage and be prepared before your system halts. You can also use ps v PID to get this info about a specific process. Redis's PID can be retrieved this way: pidof redis-server
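Put together (the 5 is just a sampling interval in seconds), a small example of both:
vmstat 5
ps v $(pidof redis-server)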