We are using Docker Swarm on the server, with OpenJDK 8. If I do:
docker service ls
I see this result:
ID NAME MODE REPLICAS IMAGE PORTS
7l89205dje61 integration_api replicated 1/1 docker.repo1.tomba.com/koppu/koppu-api:3.1.2.96019dc
.................
I am trying to update the JVM heap size for this service, so I tried:
docker service update --env-add JAVA_OPTS="-Xms3G -Xmx3G -XX:MaxPermSize=1024m" integration_api
I saw this result:
integration_api
overall progress: 1 out of 1 tasks
1/1: running [==================================================>]
Now I am trying to verify the new heap size, but I can't find a way: when I tried to get inside the container using the ID above, as in
docker exec -it 7l89205dje61 bash
I get the error:
this container does not exist.
Any suggestion?
Perhaps you can exec into the running container and display the current heap size with something like this?
# get the ID of a running container for your service with docker ps, then:
docker exec -it <CONTAINER-ID> bash
# after execing into the container,
java -XX:+PrintFlagsFinal -version | grep HeapSize
Use this Stack Overflow post to figure out how to exec into a service.
Got the java code to print heap settings from this Stack Overflow post
Note that, to my knowledge, there isn't a good public example of these ideas yet. However, one good approach is to implement a "healthcheck" process that queries JVM statistics (heap usage and so on) and reports them to another system.
Another way is exposing the Spring Boot Actuator API so that Prometheus can read and track it over time.
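To make the flag output easier to eyeball after the update, you could pipe it through a tiny converter; this is only a sketch, and the awk field positions assume the usual PrintFlagsFinal column layout:

```shell
# PrintFlagsFinal reports MaxHeapSize in bytes, on a line like:
#    uintx MaxHeapSize := 3221225472 {product}
# $4 is the byte value; convert it to GiB for readability
to_gib() { awk '/MaxHeapSize/ {printf "%.1f GiB\n", $4 / (1024*1024*1024)}'; }

# inside the container you would run:
#   java -XX:+PrintFlagsFinal -version | to_gib
```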
Related
We are running a Docker swarm and using Monit to watch resource utilisation. The process memory for dockerd keeps growing over time. This happens on all nodes that perform at least one Docker action, e.g. docker inspect or docker exec. I suspect it might be related to these actions, but I'm not sure how to replicate it. I have a script like
#!/bin/sh
set -eu

# Loop forever, re-reading the container list on every pass
# (the original read it once, so the list went stale)
while true; do
    for container in $(docker container ls --format '{{.Names}}'); do
        echo "Running inspect on $container"
        CONTAINER_STATUS="$(docker inspect "$container" -f '{{.State}}')"
    done
done
but I'm open to other suggestions
Assuming you can use Ansible to run a command over SSH on all servers:
ansible swarm -a "docker stats --no-stream"
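If you want to track dockerd's own growth rather than per-container stats, its resident set size can be read from /proc on each node; a minimal sketch, assuming a Linux host with pidof available:

```shell
# read a /proc/<pid>/status stream on stdin and print its VmRSS (kB) as MiB
rss_mib() { awk '/^VmRSS:/ {printf "%.0f MiB\n", $2 / 1024}'; }

# on a swarm node you would run, e.g.:
#   rss_mib < "/proc/$(pidof dockerd)/status"
```

Sampling this periodically (via cron or the Ansible ad-hoc command above) gives you a trend line for the leak without any extra tooling.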
A more SRE-style solution is containerd + Prometheus + Alertmanager / Grafana to gather metrics from the swarm nodes and then alert when container thresholds are exceeded.
Don't forget you can simply set a resource constraint on Swarm services to limit the amount of memory and CPU that service tasks can consume before being restarted. Then just look for services that keep getting OOM-killed.
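For instance, in a version 3 compose file deployed via docker stack deploy, a memory cap plus restart policy looks roughly like this (the image is the one from the question; the 4G value is illustrative):

```yaml
services:
  api:
    image: docker.repo1.tomba.com/koppu/koppu-api:3.1.2.96019dc
    deploy:
      resources:
        limits:
          memory: 4G      # tasks exceeding this get OOM-killed
      restart_policy:
        condition: on-failure
```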
We are using Docker Swarm on the server for orchestration, with OpenJDK 8. My backend application is a REST service named "api". On the master, if I do:
docker service ls
I see this result:
ID NAME MODE REPLICAS IMAGE PORTS
7l89205dje61 integration_api replicated 1/1 docker.repo1.tomba.com/koppu/koppu-api:3.1.2.96019dc
.................
From time to time I see this error in the service log (docker service logs integration_api):
java.lang.OutOfMemoryError: Java heap space
Hence I am trying to update the JVM heap size for this service, so I tried:
docker service update --env-add JAVA_OPTS="-Xms3G -Xmx3G -XX:MaxPermSize=1024m" integration_api
I saw this result:
integration_api
overall progress: 1 out of 1 tasks
1/1: running [==================================================>]
However, it does not actually seem to increase the heap in the container. What should I do differently?
I am setting up a Docker instance of an Elasticsearch cluster.
In the instructions it says
Make sure Docker Engine is allotted at least 4GiB of memory
I am SSH'ing into the host, not using Docker Desktop.
How can I see the resource allotments from the command line?
reference URL
https://www.elastic.co/guide/en/elastic-stack-get-started/current/get-started-docker.html
I had the same problem with Docker Desktop on Windows 10, running Linux containers on WSL2.
I found this issue: https://github.com/elastic/elasticsearch-docker/issues/92 and tried to apply similar logic to the solution there.
I entered the WSL instance's terminal with the
wsl -d docker-desktop command.
Then I ran sysctl -w vm.max_map_count=262144 to set the 'allotted memory' (strictly speaking, this raises the kernel's maximum memory-map count, which is the setting Elasticsearch actually checks).
After these steps I could run Elasticsearch's docker compose example.
I'd like to go about it by just using one command.
docker stats --all
This will give output such as the following:
$ docker stats --all
CONTAINER ID NAME CPU% MEM USAGE/LIMIT MEM% NET I/O BLOCK I/O PIDS
5f8a1e2c08ac my-compose_my-nginx_1 0.00% 2.25MiB/1.934GiB 0.11% 1.65kB/0B 7.35MB/0B 2
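Since the question is how to see the allotment from the command line, note that docker info can also print the engine's total memory directly via its Go-template --format option; a small helper makes the byte count readable:

```shell
# turn a byte count (as printed by: docker info --format '{{.MemTotal}}') into GiB
bytes_to_gib() { awk '{printf "%.1f GiB\n", $1 / (1024*1024*1024)}'; }

# on the host:
#   docker info --format '{{.MemTotal}}' | bytes_to_gib
```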
To modify the limits:
When writing your docker-compose.yml, include the following under the relevant service. Note that 4 GiB is 4096M (not 4048m), and that in a version 3 compose file resources must sit under the service's deploy: key, which docker stack deploy honours:
deploy:
  resources:
    limits:
      memory: 4096M
    reservations:
      memory: 4096M
I am going through the Part 3 tutorial of Docker's Getting Started guide.
I was able to run the load-balanced app with 5 instances using the command below:
$ docker stack deploy -c docker-compose.yml getstartedlab
top-level network "webnet" is ignored
service "web": network "webnet" is ignored
Waiting for the stack to be stable and running...
web: Ready [pod status: 5/5 ready, 0/5 pending, 0/5 failed]
But, when I try to list the services with command docker service ls it does not show any data.
$ docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
Am I doing something wrong here? Could anyone please guide me?
It looks like you're using Kubernetes instead of Swarm as the orchestrator. In the current implementation, you can only get the services for a specific stack, not list all services.
docker stack services getstartedlab
Perhaps you didn't mean to use Kubernetes as the stack orchestrator? You can disable it by going to the Docker menu → Preferences… → Kubernetes and unchecking “Enable Kubernetes”.
I have a test environment for code in a docker image which I use by running bash in the container:
me@host$ docker run -ti myimage bash
Inside the container, I launch a program normally by saying
root@docker# ./myprogram
However, I want the process of myprogram to have a negative niceness (there are valid reasons for this), but:
root@docker# nice -n -7 ./myprogram
nice: cannot set niceness: Permission denied
Given that docker is run by the docker daemon, which runs as root, and I am root inside the container, why doesn't this work, and how can I force a negative niceness?
Note: The docker image is running debian/sid and the host is ubuntu/12.04.
Try adding
--privileged=true
to your run command.
[edit] privileged=true is the old method. It looks like
--cap-add=SYS_NICE
should work as well.
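As a quick illustration of the distinction (image and program names are the ones from the question): lowering priority needs no special privileges, but raising it (negative niceness) requires CAP_SYS_NICE. Note that `nice` invoked with no operands prints the current niceness:

```shell
# raising priority needs the capability granted to the container:
#   docker run -ti --cap-add=SYS_NICE myimage nice -n -7 ./myprogram
# lowering priority (positive niceness) works unprivileged anywhere:
nice -n 5 sh -c 'nice'    # prints the child's niceness, e.g. 5 when starting from 0
```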
You could also set the CPU priority of the whole container with -c for CPU shares.
Docker docs: http://docs.docker.com/reference/run/#runtime-constraints-on-cpu-and-memory
CGroups/cpu.shares docs: https://www.kernel.org/doc/Documentation/scheduler/sched-design-CFS.txt