docker pull
I pulled the image provided by Apache Ignite, address: https://hub.docker.com/r/apacheignite/ignite.
Pull command: docker pull apacheignite/ignite.
docker run
Then execute the docker run command: docker run -itd --cpuset-cpus="0" -m 4096m --memory-reservation 4096m --name apacheignite_ignite --net=host apacheignite/ignite:2.10.0
result
But after running for 2 days, we found that the memory it uses keeps increasing, as shown in the figure:
How can I resolve this situation where the memory keeps growing?
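One way to confirm and quantify the growth over time (a measurement sketch only, not a fix; the container name comes from the run command above, and ignite-mem.log is an arbitrary file name) is to sample docker stats periodically:
$ while true; do docker stats --no-stream --format "{{.MemUsage}}" apacheignite_ignite >> ignite-mem.log; sleep 60; done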
Related
How can you analyze the size of a Docker container at runtime?
I have a docker image which has 1.5GB.
$ docker images
my-image latest 36ccda75244c 3 weeks ago 1.49GB
Much more space is required on the hard drive while the container is running. How can I get this storage space displayed? With dive, docker inspect, etc. you only get information about the packed image.
You can use docker stats for that, here's an example:
$ docker run --rm -d nginx
e3c2fd
$ docker stats --all --format "table {{.Container}}\t{{.CPUPerc}}\t{{.MemUsage}}" --no-stream e3c2fdc
CONTAINER CPU % MEM USAGE / LIMIT
e3c2fdc 0.00% 2.715MiB / 9.489GiB
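docker stats covers memory and CPU at runtime; for the on-disk footprint of a running container specifically, the Docker CLI can also report per-container disk usage, for example:
$ docker ps -s          # adds a SIZE column: writable layer size (and virtual size including the image)
$ docker system df -v   # verbose disk usage, including per-container sizes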
I am running several Java applications with the Docker image jboss/wildfly:20.0.1.Final on Kubernetes 1.19.3. The Wildfly server runs on OpenJDK 11, so the JVM supports container memory limits (cgroups).
If I set a memory limit, it is completely ignored by the container when running in Kubernetes, but it is respected when I run the same container in plain Docker on the same machine:
1. Run Wildfly in Docker with a memory limit of 300M:
$ docker run -it --rm --name java-wildfly-test -p 8080:8080 -e JAVA_OPTS='-XX:MaxRAMPercentage=75.0' -m=300M jboss/wildfly:20.0.1.Final
verify Memory usage:
$ docker stats
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
515e549bc01f java-wildfly-test 0.14% 219MiB / 300MiB 73.00% 906B / 0B 0B / 0B 43
As expected the container will NOT exceed the memory limit of 300M.
2. Run Wildfly in Kubernetes with a memory limit of 300M:
Now I start the same container within kubernetes.
$ kubectl run java-wildfly-test --image=jboss/wildfly:20.0.1.Final --limits='memory=300M' --env="JAVA_OPTS='-XX:MaxRAMPercentage=75.0'"
verify memory usage:
$ kubectl top pod java-wildfly-test
NAME CPU(cores) MEMORY(bytes)
java-wildfly-test 1089m 441Mi
The memory limit of 300M is totally ignored and exceeded immediately.
Why does this happen? Both tests can be performed on the same machine.
Answer
I'm placing this answer as community wiki since it might be helpful for the community. kubectl top was displaying incorrect data. OP solved the problem by uninstalling the kube-prometheus stack and installing the metrics-server.
The reason for the high values was an incorrect output of the metric data received from the kube-prometheus project. After uninstalling kube-prometheus and installing the metrics-server instead, all data was displayed correctly using kubectl top. It now shows the same values as docker stats. I do not know why kube-prometheus computed wrong data; in fact it was reporting double the values for all memory data.
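Independently of what the metrics stack reports, the limit itself can be sanity-checked from inside the pod by reading the cgroup files (assuming the node uses cgroup v1):
$ kubectl exec java-wildfly-test -- cat /sys/fs/cgroup/memory/memory.limit_in_bytes
$ kubectl exec java-wildfly-test -- cat /sys/fs/cgroup/memory/memory.usage_in_bytes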
We are running ArangoDB v3.5.2. The Docker container unexpectedly restarts at random intervals, and all connected clients get disconnected. After further investigation, we found that the Docker container running Arango is fully using the memory allocated to it. The memory fills up incrementally from the moment the container starts and never goes down, until it is full and the container restarts.
Below is the docker command used to run the container
docker run -d --name test -v /mnt/test:/var/lib/arangodb3 --restart always --memory="1200m" --cpus="1.5" -p 8529:8529 --log-driver="awslogs" --log-opt awslogs-region="eu-west-1" --log-opt awslogs-group="/docker/test" -e ARANGO_RANDOM_ROOT_PASSWORD=1 -e ARANGO_STORAGE_ENGINE=rocksdb arangodb/arangodb:3.5.2 --log.level queries=warn --log.level performance=warn --rocksdb.block-cache-size 256MiB --rocksdb.enforce-block-cache-size-limit true --rocksdb.total-write-buffer-size 256MiB --cache.size 256MiB
Why does the memory keep increasing and never go down, especially when the database is not being used? How do I solve this issue?
My Environment
ArangoDB Version: 3.5.2
Storage Engine: RocksDB
Deployment Mode: Single Server
Deployment Strategy: Manual Start in Docker
Configuration:
Infrastructure: AWS t3a.small Machine
Operating System: Ubuntu 16.04
Total RAM in your machine: 2GB. However, the container's limit is 1.2GB
Disks in use: SSD
Used Package: Docker
There's a known issue if you have mixed settings with comments in the configuration file arangodb.conf (source: https://github.com/arangodb/arangodb/issues/5414)
And here's the part where they talk about the solution: https://github.com/arangodb/arangodb/issues/5414#issuecomment-498287668
I think the config file syntax does not support lines such as
block-cache-size = 268435456 # 256M
I think effectively this will be interpreted as something like
block-cache-size = 268435456256000000, which is way higher than intended.
just remove the comments, and retry!
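For illustration, the corrected entries might then look like this (a sketch only; the section and option names mirror the --rocksdb.* and --cache.* startup options used above, 268435456 bytes = 256MiB, and the exact layout depends on your config file):
[rocksdb]
block-cache-size = 268435456
enforce-block-cache-size-limit = true
total-write-buffer-size = 268435456

[cache]
size = 268435456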
I created a docker container from an image which is of size: ~90 MB.
The container is running fine.
I wanted to know how much RAM a container must be using on its host machine, so I ran the docker stats command and it shows the following output:
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
66c266363746 sbrconfigurator_v0_2 0.00% 32.29MiB / 740.2MiB 4.36% 0B / 0B 15.1MB / 4.1kB 10
Here it shows the memory usage as follows:
MEM USAGE / LIMIT
32.29MiB / 740.2MiB
I don't know what this 740.2MiB means: whether it means that 740.2MiB of RAM has been allocated to this container (i.e. 740.2MiB of RAM is being used by this container) or not.
Please help me understand how much of the host machine's RAM this container must be using.
Host machine is Linux, Ubuntu.
The memory limit shows how much memory Docker will allow the container to use before killing the container with an OOM. If you do not set a memory limit on a container, it defaults to all of the memory available on the Docker host. With Docker for Win/Mac, this is the memory allocated to the embedded VM, which you can adjust in the settings. You can set the memory limit in the compose file (a sketch follows the example below), or directly on the docker run CLI:
$ docker run -itd --name test-mem -m 1g busybox /bin/sh
f2f9f041a76c0b74e4c6ae51dd57070258a06c1f3ee884d07fef5a103f0925d4
$ docker stats --no-stream test-mem
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
f2f9f041a76c test-mem 0.00% 360KiB / 1GiB 0.03% 5.39kB / 0B 3.29MB / 0B 1
In the above example, busybox is not using 1GB of memory, only 360KB.
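The compose-file route might look like this minimal sketch (Compose file format 2.x, where mem_limit corresponds to -m; the service name is just an example):
version: "2.4"
services:
  test-mem:
    image: busybox
    command: /bin/sh
    stdin_open: true
    tty: true
    mem_limit: 1g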
Without setting the limit, the memory limit shown (converted from GiB to KiB: GiB × 1024 × 1024) comes out very close to the total memory reported by the free command on the host. I'm not sure if the small difference between the two accounts for the kernel or some other overhead.
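A quick way to compare the two figures on a Linux host (nothing container-specific here):
$ free -b | awk '/^Mem:/ {print $2}'     # total host memory in bytes
$ docker info --format '{{.MemTotal}}'   # total memory as seen by the Docker daemon, also in bytes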
The image size of 90MB is only the size on disk; it doesn't have much to do with occupied memory (RAM).
When you start a Docker container on Linux, e.g.:
$ docker run -itd --name test busybox /bin/sh
74a1cb1ecf7ad902c5ddb842d055b4c2e4a11b3b84efd1e600eb47aefb563cb3
Docker will create a directory in cgroups fs, typically mounted in /sys/fs/cgroup under the long ID of the container.
/sys/fs/cgroup/memory/docker/74a1cb1ecf7ad902c5ddb842d055b4c2e4a11b3b84efd1e600eb47aefb563cb3/
In order to make the path shorter I'll write
/sys/fs/cgroup/memory/docker/$(docker inspect -f "{{.Id}}" test)
instead. The directory contains a number of plain text files that are read and written by the kernel.
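For example, listing that directory shows the control files referred to below, among them memory.limit_in_bytes, memory.usage_in_bytes, memory.max_usage_in_bytes and memory.stat (cgroup v1):
$ ls /sys/fs/cgroup/memory/docker/$(docker inspect -f "{{.Id}}" test)/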
When you run a Docker container without the -m / --memory flag, the container is able to allocate all memory available on your system, which would be equal to:
$ numfmt --to=iec $(cat /sys/fs/cgroup/memory/docker/memory.max_usage_in_bytes)
When you specify a memory limit e.g. -m 1g
$ docker run -itd --name test -m 1g busybox /bin/sh
Docker will write the limit into file memory.limit_in_bytes:
$ numfmt --to=iec $(cat /sys/fs/cgroup/memory/docker/$(docker inspect -f "{{.Id}}" test)/memory.limit_in_bytes)
1.0G
This tells the Linux kernel to enforce a hard memory limit on the container. In case the container tries to allocate more memory than the limit, the kernel will invoke the infamous OOM killer.
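Whether that actually happened to a container can be checked afterwards via docker inspect, which exposes a State.OOMKilled field:
$ docker inspect -f '{{.State.OOMKilled}}' test   # prints true if the container was killed by the OOM killer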
The "MEM USAGE" is probably read from memory.usage_in_bytes, which is approximation of actual memory usage.
$ numfmt --to=iec $(cat /sys/fs/cgroup/memory/docker/$(docker inspect -f "{{.Id}}" test)/memory.usage_in_bytes)
312K
According to cgroups documentation more precise value can be obtained from memory.stat file.
If you want to know more exact memory usage, you should use RSS+CACHE(+SWAP)
value in memory.stat(see 5.2).
You'd have to sum those three lines yourself. Normally, memory.usage_in_bytes is a good enough estimate.
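A sketch of that sum on cgroup v1, reusing the container from above (the swap line only appears when swap accounting is enabled):
$ CID=$(docker inspect -f "{{.Id}}" test)
$ awk '$1 ~ /^(rss|cache|swap)$/ {sum += $2} END {print sum}' /sys/fs/cgroup/memory/docker/$CID/memory.stat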
I have created a new Docker service and am determining its required resources. Since allocating RAM to a new service is greedy (if I say the container can have 8GB of RAM, it will take them), I don't want to waste the cluster's resources.
Now I am trying to find out how much RAM a docker run took at its peak.
For example, I created an httpie image (for the rightfully paranoid, the Dockerfile is also on Docker Hub) that I execute via:
docker run -it k0pernikus/httpie-docker-alpine HEAD https://stackoverflow.com/
I know that there is a docker stats command, yet it appears to show only the current memory usage, and I don't really want to keep monitoring that.
If I run it after the container has ended, it shows 0. (To get the container ID, I use the -d flag.)
$ docker run -itd k0pernikus/httpie-docker-alpine HEAD https://stackoverflow.com/
132a93ffc9e297250b8ca37b2563aa2b5e423e146890fe3383a91a7f26ef990c
$ docker stats 132a93ffc9e297250b8ca37b2563aa2b5e423e146890fe3383a91a7f26ef990c
it will show:
CONTAINER CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
132a93ffc9e297250b8ca37b2563aa2b5e423e146890fe3383a91a7f26ef990c 0.00% 0 B / 0 B 0.00% 0 B / 0 B 0 B / 0 B 0
Yet how much RAM did it consume at maximum?
tl;dr:
Enable memory accounting
cat /sys/fs/cgroup/memory/docker/${CONTAINER_ID}/memory.max_usage_in_bytes
Docker uses cgroups under the hood. We only have to ask the kernel the right question, that is, cat the correct file. For this to work, memory accounting has to be enabled (as noted in the docs).
On systemd-based systems this is quite straightforward. Create a drop-in config for the docker daemon:
systemctl set-property docker.service MemoryAccounting=yes
systemctl daemon-reload
systemctl restart docker.service
(This will add a little overhead, though, since a counter has to be updated each time the container allocates RAM. Further details: https://lwn.net/Articles/606004/)
Then by using the full container id that you can discover via docker inspect:
docker inspect --format="{{.Id}}" $CONTAINER_NAME
you can get the maximum memory used:
cat /sys/fs/cgroup/memory/docker/${CONTAINER_ID}/memory.max_usage_in_bytes
The container has to be running for this to work.
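Putting both steps together, with numfmt for a human-readable value (cgroup v1 path, same layout as in the answers above):
$ CONTAINER_ID=$(docker inspect --format "{{.Id}}" "$CONTAINER_NAME")
$ numfmt --to=iec $(cat /sys/fs/cgroup/memory/docker/${CONTAINER_ID}/memory.max_usage_in_bytes)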