We have a three-node Kafka cluster with the following machine configurations:
node0: 32 cores, 128 GB
node1: 4 cores, 8 GB
node2: 32 cores, 128 GB
Only -Xms is configured on all three nodes; -Xmx is not specified.
But after a while,
Node0 occupies 12.7 GB of memory
Node0 top
Node1 occupies 2.1 GB of memory
Node1 top
Node2 occupies 3.1 GB of memory
Node2 top
For node1, the JVM's -Xmx therefore defaults to 25% of the machine's physical memory (2 GB on an 8 GB machine).
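You can verify the ergonomic default on each node (a sketch; assumes a JDK is on the PATH):

```shell
# Print the JVM's ergonomically chosen heap defaults, if java is available:
command -v java >/dev/null &&
  java -XX:+PrintFlagsFinal -version 2>/dev/null |
  grep -E 'InitialHeapSize|MaxHeapSize' || true

# With no -Xmx, MaxHeapSize defaults to 1/4 of physical RAM; e.g. on node1:
awk 'BEGIN { print 8192 / 4 " MiB" }'   # 8 GiB machine -> 2048 MiB
```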
Arthas has the following memory information for node0:
node jvm dashboard
In Arthas's dashboard, node0's heap memory footprint is similar to node2's, but the mmap portion is much larger.
I have two questions:
1. Why does the memory usage of Kafka's three nodes differ so much?
2. Why is node0's RES memory smaller than the sum of Arthas's in-heap and off-heap memory?
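On the second question: top's RES counts only resident pages, while mmap'd log segments and index files (the large mmap portion Arthas reports) contribute their full size to the address space even when only some pages are resident. One way to see this per process (PID 1234 is a placeholder):

```shell
# PID of the Kafka broker (placeholder; substitute your own)
PID=1234
# Sum resident pages across all mappings. mmap'd files count fully toward
# virtual size but only touched pages toward Rss, so RES being smaller than
# heap + off-heap totals is normal.
awk '/^Rss:/ { sum += $2 } END { print sum " kB resident" }' "/proc/$PID/smaps" 2>/dev/null || true
```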
Here's the scenario:
On a Debian GNU/Linux 9 (stretch) VM I have two containers running. The day before yesterday I got a warning from monitoring that memory usage was relatively high. After looking at the VM, it turned out that it was not the containers but the Docker daemon itself that was consuming the memory. htop
After a restart of the service I noticed a new increase in memory demand after two days. See the graphic.
RAM + Swap overview
Is there a known memory leak for this version?
Docker version
Memory development (container) after 2 days:
Container 1 is unchanged
Container 2 increased from 21.02MiB to 55MiB
Memory development (VM) after 2 days:
The MEM increased on the machine from 273M (after reboot) to 501M
dockerd
- after restart: 1.3% MEM
- 2 days later: 6.0% MEM
Monitor your containers to see if their memory usage changes over time:
> docker stats
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
623104d00e43 hq 0.09% 81.16MiB / 15.55GiB 0.51% 6.05kB / 0B 25.5MB / 90.1kB 3
We saw a similar issue and it seems to have been related to the gcplogs logging driver. We saw the problem on docker 19.03.6 and 19.03.9 (the most up-to-date that we can easily use).
Switching back to using a log forwarding container (e.g. logspout) resolved the issue for us.
We have a question about memory consumption when scaling out Kafka. Any suggestions or solutions for the query below would be very helpful.
We are running Kafka as a Docker container on Kubernetes.
A memory limit of 4 GiB is configured for the Kafka broker pod. Under a large load, the Kafka broker pod's memory reached 4 GiB, so we manually scaled the broker pod replicas from 1 to 3. But after scale-out, under the same load, each Kafka broker pod consumes 4 GiB of memory. We expected each pod's consumption to drop to ~1.33 GiB, since we are running 3 pods for the same amount of load.
Before Kafka broker scale-out:
1 broker
6 topics with 1 partition each
Memory consumption: 4 GiB
After scale-out and rebalancing topics across all brokers:
3 brokers
6 topics with 1 partition each
Memory consumption: 10 GiB (Pod1: 2 GiB, Pod2: 4 GiB, Pod3: 4 GiB)
After scale-out, rebalancing, and increasing to 3 partitions per topic:
3 brokers
6 topics with 3 partitions each
Memory consumption: 12 GiB (Pod1: 4 GiB, Pod2: 4 GiB, Pod3: 4 GiB)
All deployments were tested with the same amount of load. The replication factor for all topics is 1.
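The expectation above is simply the old footprint divided evenly across the new replicas. In practice each broker keeps its own JVM heap, its own page cache (which cgroup accounting charges to the pod), and its own mmap'd partition indexes, so per-pod memory does not divide this way:

```shell
# Naive per-pod share if 4 GiB split evenly across 3 brokers.
# Real per-broker usage stays higher because page cache and index mmaps
# are duplicated per broker rather than shared.
awk 'BEGIN { printf "%.2f GiB\n", 4 / 3 }'
```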
I'm trying to limit my container so that it doesn't take up all the RAM on the host. From the Docker docs I understand that --memory limits the RAM and --memory-swap limits (RAM+swap). From the docker-compose docs it looks like the terms for those are mem_limit and memswap_limit, so I've constructed the following docker-compose file:
> cat docker-compose.yml
version: "2"
services:
  stress:
    image: progrium/stress
    command: '-m 1 --vm-bytes 15G --vm-hang 0 --timeout 10s'
    mem_limit: 1g
    memswap_limit: 2g
The progrium/stress image just runs stress, which in this case spawns a single thread which requests 15GB RAM and holds on to it for 10 seconds.
I'd expect this to crash, since 15>2. (It does crash if I ask for more RAM than the host has.)
The kernel has cgroups enabled, and docker stats shows that the limit is being recognised:
> docker stats
CONTAINER CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
7624a9605c70 0.00% 1024MiB / 1GiB 99.99% 396B / 0B 172kB / 0B 2
So what's going on? How do I actually limit the container?
Update:
Watching free, it looks like the RAM usage is effectively limited (only 1 GB of RAM is used) but the swap is not: the container gradually increases swap usage until it has eaten through all of the swap, and then stress crashes (it takes about 20 seconds to get through 5 GB of swap on my machine).
Update 2:
Setting mem_swappiness: 0 causes an immediate crash when requesting more memory than mem_limit, regardless of memswap_limit.
Running docker info shows WARNING: No swap limit support
According to https://docs.docker.com/engine/installation/linux/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities this is disabled by default ("Memory and swap accounting incur an overhead of about 1% of the total available memory and a 10% overall performance degradation.") You can enable it by editing the /etc/default/grub file:
Add or edit the GRUB_CMDLINE_LINUX line to add the following two key-value pairs:
GRUB_CMDLINE_LINUX="cgroup_enable=memory swapaccount=1"
then update GRUB with update-grub and reboot.
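The edit can be scripted (a sketch; the sed pattern assumes GRUB_CMDLINE_LINUX is present and double-quoted):

```shell
# Append the two flags to GRUB_CMDLINE_LINUX, preserving existing options.
add_flags='s/^GRUB_CMDLINE_LINUX="\([^"]*\)"/GRUB_CMDLINE_LINUX="\1 cgroup_enable=memory swapaccount=1"/'

# Demonstrate the substitution on a sample line:
echo 'GRUB_CMDLINE_LINUX="quiet"' | sed "$add_flags"

# On the real system:
#   sudo sed -i "$add_flags" /etc/default/grub && sudo update-grub && sudo reboot
```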
I have run my Docker container containing a RabbitMQ instance. I used the docker run command with three parameters (among others):
-m 300m
--kernel-memory="300m"
--memory-swap="400m"
docker stats shows:
> CONTAINER CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O
> 1f50929f8e4e 0.40% 126.8 MB / 349.2 MB 36.30% 908.2 kB / 1.406 MB 24.69 MB / 1.114 MB
I expected that RabbitMQ would see only 300 MB of RAM, but the high watermark visible in the Rabbit UI shows 5.3 GB. My host has 8 GB available, so RabbitMQ probably reads the memory size from the host.
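If RabbitMQ is indeed reading the host's total memory rather than the cgroup limit, newer RabbitMQ releases let you pin the watermark absolutely or override the detected total in rabbitmq.conf (a sketch; option names assume the new-style ini config format):

```ini
# Pin the high watermark to an absolute value instead of 40% of detected RAM
vm_memory_high_watermark.absolute = 300MB

# Or (RabbitMQ 3.8+) override what the node treats as total available memory
total_memory_available_override_value = 300MB
```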
I'm running my rethinkdb container in Kubernetes cluster. Below is what I notice:
Running top on the host, which is CoreOS, the rethinkdb process takes about 3 GB:
$ top
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
981 root 20 0 53.9m 34.5m 20.9m S 15.6 0.4 1153:34 hyperkube
51139 root 20 0 4109.3m 3.179g 22.5m S 15.0 41.8 217:43.56 rethinkdb
579 root 20 0 707.5m 76.1m 19.3m S 2.3 1.0 268:33.55 kubelet
But running docker stats to check the rethinkdb container shows it taking about 7 GB!
$ docker ps | grep rethinkdb
eb9e6b83d6b8 rethinkdb:2.1.5 "rethinkdb --bind al 3 days ago Up 3 days k8s_rethinkdb-3.746aa_rethinkdb-rc-3-eiyt7_default_560121bb-82af-11e5-9c05-00155d070266_661dfae4
$ docker stats eb9e6b83d6b8
CONTAINER CPU % MEM USAGE/LIMIT MEM % NET I/O
eb9e6b83d6b8 4.96% 6.992 GB/8.169 GB 85.59% 0 B/0 B
$ free -m
total used free shared buffers cached
Mem: 7790 7709 81 0 71 3505
-/+ buffers/cache: 4132 3657
Swap: 0 0 0
Can someone explain why the container is taking a lot more memory than the rethinkdb process itself?
I'm running docker v1.7.1, CoreOS v773.1.0, kernel 4.1.5
In the top command you are looking at the amount of physical memory (RES). The stats command also includes disk-cached RAM, so it is always bigger than the physical amount. When the application really needs more RAM, the disk cache will be released for it to use.
Indeed, the memory usage is pulled via the cgroup's memory.usage_in_bytes; you can access it at /sys/fs/cgroup/memory/docker/long_container_id/memory.usage_in_bytes. And according to the Linux documentation https://www.kernel.org/doc/Documentation/cgroups/memory.txt, section 5.5:
5.5 usage_in_bytes
For efficiency, as other kernel components, memory cgroup uses some
optimization to avoid unnecessary cacheline false sharing.
usage_in_bytes is affected by the method and doesn't show 'exact'
value of memory (and swap) usage, it's a fuzz value for efficient
access. (Of course, when necessary, it's synchronized.) If you want to
know more exact memory usage, you should use RSS+CACHE(+SWAP) value in
memory.stat(see 5.2).
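Following that suggestion, a more exact figure can be derived from memory.stat (container ID path is a placeholder, as in the question):

```shell
# cgroup v1 path for the container (placeholder ID)
CG=/sys/fs/cgroup/memory/docker/long_container_id
# RSS + CACHE (+ SWAP), per section 5.2 of the cgroup memory docs:
awk '$1 == "rss" || $1 == "cache" || $1 == "swap" { sum += $2 } END { print sum " bytes" }' "$CG/memory.stat" 2>/dev/null || true
```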