We are running ArangoDB v3.5.2. The Docker container unexpectedly restarts at random intervals, and all connected clients get disconnected. After further investigation, we found that the Docker container running ArangoDB gradually fills the memory allocated to it. Memory usage climbs steadily from the moment the container starts and never goes down; once the limit is reached, the container restarts (see the docker stats snippet after the environment details below).
Below is the docker command used to run the container
docker run -d --name test -v /mnt/test:/var/lib/arangodb3 --restart always --memory="1200m" --cpus="1.5" -p 8529:8529 --log-driver="awslogs" --log-opt awslogs-region="eu-west-1" --log-opt awslogs-group="/docker/test" -e ARANGO_RANDOM_ROOT_PASSWORD=1 -e ARANGO_STORAGE_ENGINE=rocksdb arangodb/arangodb:3.5.2 --log.level queries=warn --log.level performance=warn --rocksdb.block-cache-size 256MiB --rocksdb.enforce-block-cache-size-limit true --rocksdb.total-write-buffer-size 256MiB --cache.size 256MiB
Why does the memory keep increasing and never go down, especially when the database is not being used? How do I solve this issue?
My Environment
ArangoDB Version: 3.5.2
Storage Engine: RocksDB
Deployment Mode: Single Server
Deployment Strategy: Manual Start in Docker
Configuration:
Infrastructure: AWS t3a.small Machine
Operating System: Ubuntu 16.04
Total RAM in the machine: 2 GB; however, the container's limit is 1.2 GB
Disks in use: SSD
Used Package: Docker
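Not part of the original report, but for reference: the steady growth can be watched from the host with docker stats, using the container name from the run command above:
docker stats test
or, for periodic one-shot samples that are easier to log:
watch -n 60 docker stats --no-stream test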
There's a known issue if you mix settings with inline comments in the configuration file arangodb.conf (source: https://github.com/arangodb/arangodb/issues/5414)
And here's the part where they talk about the solution: https://github.com/arangodb/arangodb/issues/5414#issuecomment-498287668
I think the config file syntax does not support lines such as
block-cache-size = 268435456 # 256M
I think effectively this will be interpreted as something like
block-cache-size = 268435456256000000, which is way higher than intended.
Just remove the inline comments and retry!
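For illustration only, a cleaned-up arangodb.conf fragment matching the sizes used in the run command above might look like this (the section and option names here are assumptions based on the corresponding command-line flags, not taken from the linked issue):
[rocksdb]
block-cache-size = 268435456
total-write-buffer-size = 268435456

[cache]
size = 268435456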
Related
I am running several Java applications with the Docker image jboss/wildfly:20.0.1.Final on Kubernetes 1.19.3. The WildFly server runs on OpenJDK 11, so the JVM supports container memory limits (cgroups).
If I set a memory limit, that limit is totally ignored by the container when running in Kubernetes, but it is respected on the same machine when I run it in plain Docker:
1. Run Wildfly in Docker with a memory limit of 300M:
$ docker run -it --rm --name java-wildfly-test -p 8080:8080 -e JAVA_OPTS='-XX:MaxRAMPercentage=75.0' -m=300M jboss/wildfly:20.0.1.Final
verify Memory usage:
$ docker stats
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
515e549bc01f java-wildfly-test 0.14% 219MiB / 300MiB 73.00% 906B / 0B 0B / 0B 43
As expected the container will NOT exceed the memory limit of 300M.
2. Run Wildfly in Kubernetes with a memory limit of 300M:
Now I start the same container within Kubernetes.
$ kubectl run java-wildfly-test --image=jboss/wildfly:20.0.1.Final --limits='memory=300M' --env="JAVA_OPTS='-XX:MaxRAMPercentage=75.0'"
verify memory usage:
$ kubectl top pod java-wildfly-test
NAME CPU(cores) MEMORY(bytes)
java-wildfly-test 1089m 441Mi
The memory limit of 300M is totally ignored and exceeded immediately.
Why does this happen? Both tests can be performed on the same machine.
Answer
The reason for the high values was incorrect metric data received from the kube-prometheus project. After uninstalling kube-prometheus and installing the metrics-server instead, all data is displayed correctly by kubectl top; it now shows the same values as docker stats. I do not know why kube-prometheus computed wrong data, but it was reporting double the actual values for all memory data.
I'm placing this answer as community wiki since it might be helpful for the community. kubectl top was displaying incorrect data; the OP solved the problem by uninstalling the kube-prometheus stack and installing the metrics-server.
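As an extra sanity check not mentioned in the original answer (and assuming a cgroup v1 node), the limit actually applied to the container can be read from inside the pod:
kubectl exec java-wildfly-test -- cat /sys/fs/cgroup/memory/memory.limit_in_bytes
If this prints roughly 300M worth of bytes, the limit is enforced and only the reported metrics were wrong.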
Setting up a Docker instance of an Elasticsearch cluster.
In the instructions it says
Make sure Docker Engine is allotted at least 4GiB of memory
I am SSH'ing into the host, not using Docker Desktop.
How can I see the resource allotments from the command line?
reference URL
https://www.elastic.co/guide/en/elastic-stack-get-started/current/get-started-docker.html
I had the same problem with Docker Desktop on Windows 10 while running Linux containers on WSL2.
I found this issue: https://github.com/elastic/elasticsearch-docker/issues/92 and tried to apply similar logic to the solution there.
I entered the WSL instance's terminal with the
wsl -d docker-desktop command.
Then I ran the sysctl -w vm.max_map_count=262144 command to raise the kernel's vm.max_map_count setting, which Elasticsearch requires.
After these steps I could run Elasticsearch's docker compose example.
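A side note that is not part of the original answer: if the goal is specifically to make sure Docker Engine is allotted at least 4 GiB, the memory given to the WSL2 backend can be raised in %USERPROFILE%\.wslconfig on the Windows host, for example:
[wsl2]
memory=4GB
processors=2
followed by wsl --shutdown and a restart of Docker Desktop.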
I'd like to go about it by just using one command.
docker stats --all
This will give output such as the following:
$ docker stats --all
CONTAINER ID NAME CPU% MEM USAGE/LIMIT MEM% NET I/O BLOCK I/O PIDS
5f8a1e2c08ac my-compose_my-nginx_1 0.00% 2.25MiB/1.934GiB 0.11% 1.65kB/0B 7.35MB/0B 2
To modify the limits:
When writing your docker-compose.yml, add a deploy.resources section under the service you want to constrain (here with a 4 GiB limit, i.e. 4096M):
services:
  elasticsearch:        # replace with the name of the service you want to limit
    deploy:
      resources:
        limits:
          memory: 4096M
        reservations:
          memory: 4096M
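Note that the deploy.resources keys are honored by docker stack deploy and the newer docker compose CLI; with the classic docker-compose binary you may need the --compatibility flag, or you can fall back to the older per-service keys instead (a sketch assuming compose file format version 2):
services:
  elasticsearch:
    mem_limit: 4g
    mem_reservation: 4g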
I'm running the Seq docker image on an AWS EC2 instance.
In order to have the logs written to persistent storage, I've attached an EBS volume to the instance, and mounted it from within the instance with the rexray/ebs plugin:
docker plugin install rexray/ebs:latest REXRAY_PREEMPT=true EBS_REGION=eu-central-1a --grant-all-permissions EBS_ACCESSKEY=... EBS_SECRETKEY=...
docker volume create --driver rexray/ebs --name SeqData
Then instructed Seq to use that volume:
docker run -d --name seq -e ACCEPT_EULA=Y -v SeqData:/data -p 80:80 -p 5341:5341 datalust/seq:latest
Seq runs fine for a while (sometimes for a few hours, sometimes a few days), then I notice that the container is no longer running, and the AWS console shows that the volume has been detached. The AWS logs show that a DetachVolume event was initiated by the instance.
I reattach the volume manually in the AWS console, and restart the container. Seq resumes its normal operation, then after a while the phenomenon repeats itself.
The docker log doesn't give any hint. It just shows Seq logging its normal activity (retention, indexing, etc.) approximately every 5 minutes - up till about 10 minutes before the time that the detaching occurred.
I have limited experience with AWS or Docker, so I'll be grateful if anyone can help me out.
For Seq's memory management to work effectively, both --memory and --memory-swap need to be passed to the docker run command. Normally these should have the same value (i.e. no swap).
docker run --memory=4g --memory-swap=4g <other args> datalust/seq
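Applied to the run command from the question, that would look something like the following (the 1g figure is only an assumption sized to a small EC2 instance; pick a value that fits yours):
docker run -d --name seq -e ACCEPT_EULA=Y --memory=1g --memory-swap=1g -v SeqData:/data -p 80:80 -p 5341:5341 datalust/seq:latest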
So I have a container, and to create it in docker toolbox I used:
docker run --memory=4096m -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=MY_PASSWORD' -p 1433:1433 -d --name CONTAINER_NAME microsoft/mssql-server-linux
I have a different name and password, but I swapped those out here. Every time I run that, the container is created but then exits immediately. When I use:
docker ps -a
to check on it, under status it says:
Exited (1) 7 minutes ago
and then when I try to run:
docker logs CONTAINER_NAME
to check what happened, I get an error saying:
sqlservr: This program requires a machine with at least 2000 megabytes of memory.
My computer has plenty of RAM available, and when I created the container I gave it 4 GB, so I don't understand what the issue is here. Also, I cannot use Docker for Windows.
The fix was to remove the "default" VM that gets created automatically, using:
docker-machine rm default
and then re-creating it with the command:
docker-machine -D create -d virtualbox --virtualbox-memory 8096 --virtualbox-disk-size "100000" default
which gives it 8 gigs of memory and 100 gigs of disk space. Also, keeping the name default keeps Kitematic working, which is a plus.
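As a quick check that is not part of the original answer, you can confirm the recreated VM really has the intended resources before starting the SQL Server container again:
docker-machine ssh default free -m
docker info | grep -i "total memory"
The second command asks the Docker engine itself how much total memory it sees.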
I'm trying to set absolute limits on Docker container CPU usage. The CPU shares concept (docker run -c <shares>) is relative, but I would like to say something like "let this container use at most 20ms of CPU time every 100ms". The closest answer I can find is a hint from the mailing list on using cpu.cfs_quota_us and cpu.cfs_period_us. How does one use these settings with docker run?
I don't have a strict requirement for either LXC-backed Docker (e.g. pre-0.9) or later versions; I just need to see an example of these settings being used. Any links to relevant documentation or helpful blog posts are very welcome too. I am currently using Ubuntu 12.04, and under /sys/fs/cgroup/cpu/docker I see these options:
$ ls /sys/fs/cgroup/cpu/docker
cgroup.clone_children cpu.cfs_quota_us cpu.stat
cgroup.event_control cpu.rt_period_us notify_on_release
cgroup.procs cpu.rt_runtime_us tasks
cpu.cfs_period_us cpu.shares
I believe I've gotten this working. I had to restart my Docker daemon with --exec-driver=lxc as I could not find a way to pass cgroup arguments to libcontainer. This approach worked for me:
# Run with absolute limit
sudo docker run --lxc-conf="lxc.cgroup.cpu.cfs_quota_us=50000" -it ubuntu bash
The necessary CFS docs on bandwidth limiting are here.
I briefly confirmed with sysbench that this does seem to introduce an absolute limit, as shown below:
$ sudo docker run --lxc-conf="lxc.cgroup.cpu.cfs_quota_us=10000" --lxc-conf="lxc.cgroup.cpu.cfs_period_us=50000" -it ubuntu bash
root@302e651c0686:/# sysbench --test=cpu --num-threads=1 run
<snip>
total time: 90.5450s
$ sudo docker run --lxc-conf="lxc.cgroup.cpu.cfs_quota_us=20000" --lxc-conf="lxc.cgroup.cpu.cfs_period_us=50000" -it ubuntu bash
root@302e651c0686:/# sysbench --test=cpu --num-threads=1 run
<snip>
total time: 45.0423s
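For completeness, and assuming a considerably newer Docker release than the one the original answer targeted, the same CFS settings can now be passed to docker run directly, without the LXC exec driver:
# 20ms of CPU time per 100ms period, i.e. 20% of one core
docker run --cpu-period=100000 --cpu-quota=20000 -it ubuntu bash
# shorthand available on Docker 1.13 and later
docker run --cpus=0.2 -it ubuntu bash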