How can you get the Docker image size at runtime?

How can you analyze the size of a Docker container at runtime?
I have a Docker image that is 1.5 GB:
$ docker images
my-image latest 36ccda75244c 3 weeks ago 1.49GB
Considerably more disk space is used while the container is running. How can I display this storage usage? With dive, docker inspect, etc. you only get information about the packed image.

You can use docker stats for that; here's an example:
$ docker run --rm -d nginx
e3c2fdc
$ docker stats --all --format "table {{.Container}}\t{{.CPUPerc}}\t{{.MemUsage}}" --no-stream e3c2fdc
CONTAINER CPU % MEM USAGE / LIMIT
e3c2fdc 0.00% 2.715MiB / 9.489GiB
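Note that docker stats reports memory and CPU, not disk. If it's the on-disk size you're after, docker ps with the --size flag shows the writable layer of each running container. A quick sketch (output abbreviated, sizes illustrative):
$ docker ps --size
CONTAINER ID   IMAGE   ...   SIZE
e3c2fdc        nginx   ...   1.09kB (virtual 133MB)
The first figure is the space the container's writable layer consumes at runtime; the virtual figure adds the read-only image layers.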


How to specify the number of CPU a container will use?

I am working on Windows Server 2019 and trying to run a CentOS Docker container on it. I am running the following:
PS C:\Windows\system32> docker run -dit --name=testing23 --cpu-shares=12 raycentos:1.0
6a3ffb86c1d9509a9d80f0de54fc6acf5731ca645ee74e6aabe41d7153b3af70
PS C:\Windows\system32> docker exec -it 6a3ffb86c1d9509a9d80f0de54fc6acf5731ca645ee74e6aabe41d7153b3af70 bash
(app-root) bash-4.2# nproc
2
It still reports only 2 CPUs, not 32. How can we assign more CPUs to the container?
Refer to this topic for more details: https://docs.docker.com/config/containers/resource_constraints/#cpu
Note that --cpu-shares is only a relative weight for CPU time, not a CPU count, so it does not change what nproc reports. To control the CPUs a container can use, try:
--cpus=<value> for the maximum CPU resources a container can use
--cpuset-cpus=<list> to pin the container to specific CPUs, e.g. --cpuset-cpus=0-11 for 12 CPUs
By default, all can be used, or you can limit it per container using the --cpuset-cpus parameter.
docker run --cpuset-cpus="0-2" myapplication:latest
That would restrict the container to 3 CPUs (0, 1, and 2). See the docker run docs for more details.
The preferred way to limit CPU usage of containers is with a fractional limit on CPUs:
docker run --cpus 2.5 myapplication:latest
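A quick way to see the difference between the two flags (a sketch using busybox as a throwaway image, and assuming its nproc, like GNU nproc, respects CPU affinity): --cpuset-cpus changes what the container sees, while --cpus enforces a time quota without changing the visible core count.
$ docker run --rm --cpuset-cpus="0-11" busybox nproc   # prints 12 (needs a host with at least 12 cores)
$ docker run --rm --cpus=2.5 busybox nproc             # still prints the host's full core count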

clean docker volume with no reclaimable space on local volumes

I am running Jenkins and Docker, and Jenkins periodically stops working due to a lack of space.
Until now, Docker's cleanup command:
docker system prune -a
cleared up enough space to resolve this error; however, the available space was smaller each time. What I noticed is that
docker system df
produces the following output:
TYPE TOTAL ACTIVE SIZE RECLAIMABLE
Images 4 4 882.9MB 79.15MB (8%)
Containers 6 6 16.45kB 0B (0%)
Local Volumes 4 4 31.95GB 0B (0%)
Build Cache 0B 0B
As you can see, Local Volumes are taking up a giant 32 GB, so:
Is this normal?
How can I safely reduce the size of the local volumes?
Thanks
Yes, it is normal. When you run docker system prune, it doesn't prune the volumes by default; you have to add an extra option to trigger it:
use:
$ docker system prune --volumes
or
$ docker volume prune
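Before removing anything, it can help to see which volumes actually hold the space; both of these are standard commands (the volume name is a placeholder):
$ docker system df -v               # per-volume size breakdown
$ docker volume inspect <volume>    # shows the volume's Mountpoint on the host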
Here is a temporary solution that does not involve removing or partitioning the volume; the key is to stop the active containers before pruning:
find the container id:
docker ps
stop that (or all) containers:
docker container stop <id>
then prune:
docker system prune -a
then, if you get a "getsockopt: connection refused" error, I believe you need to recreate the Docker registry:
docker run -d -p 5000:5000 --restart=always --name registry registry:2
This cleans out most of the volume, until it fills up again. I would appreciate it if someone could address why it fills up in the first place.

How to create and download image from Docker container running in Docker Swarm

Imagine the following scenario, which anyone can hit in production:
We are running Elasticsearch as Docker containers, indexing some data. We would like to back up the data every 3 months, which means we need to create a Docker image from the running container and upload it to a registry.
I haven't found any clues in the documentation on how to do that.
With the swarm orchestration, your individual containers/tasks inside of the service may be restarted (e.g. if you have a node failure or your application crashes). For persistent data, I'd use an external volume and back that volume up directly. If you want to do this in swarm, you can commit the containers it creates by locating the specific container and committing it with the standard commands:
$ docker service create --name test-commit busybox /bin/sh -c 'while true; do ls / >/tmp/ls.`date +%T`.log; sleep 30; done'
2vbnf5s39vs0jfc53at3ko1cg
$ docker service ls
ID NAME REPLICAS IMAGE COMMAND
2vbnf5s39vs0 test-commit 1/1 busybox /bin/sh -c while true; do ls / >/tmp/ls.`date +%T`.log; sleep 30; done
$ docker service ps test-commit
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR
eu28da042s9tdwlddzk6adkan test-commit.1 busybox docker-demo Running Running 9 seconds ago
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
545e7fe6f5bd busybox:latest "/bin/sh -c 'while tr" 28 seconds ago Up 26 seconds test-commit.1.eu28da042s9tdwlddzk6adkan
$ docker diff test-commit.1.eu28da042s9tdwlddzk6adkan
C /tmp
A /tmp/ls.12:02:13.log
A /tmp/ls.12:02:43.log
$ docker commit test-commit.1.eu28da042s9tdwlddzk6adkan test-commit:1
sha256:2255b476b307b69cf20afbc7c46fae43f05c92a70f1525aa5d745c26a406dc90
$ docker images | grep test-commit
test-commit 1 2255b476b307 9 seconds ago 1.093 MB
You can use docker commit to turn a container into an image.
But I would advise against doing that in this case. It's better to use some kind of volume for your data and back that up separately.
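For example, a common pattern for backing up a named volume is to mount it read-only into a throwaway container and tar its contents to the host (a sketch; esdata is a hypothetical volume name holding the Elasticsearch data):
$ docker run --rm \
    -v esdata:/data:ro \
    -v "$PWD":/backup \
    busybox tar czf /backup/esdata-$(date +%F).tar.gz -C /data .
# restore by extracting the archive into a fresh volume mounted at /data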

Resize disk usage of a Docker container

Every Docker container is configured with 10 GB of disk space, which is the default configuration of devicemapper in CentOS. How can I configure every newly created container with more than 10 GB of disk space by default? (The host server runs CentOS 6 and Docker 1.7.1.)
Yes you can. Use the dm.basesize attribute when starting the Docker daemon. For example:
docker daemon --storage-opt dm.basesize=50G ...
More info can be found in the official docs.
(Optional) If you have already downloaded any images via docker pull, you need to remove them first; otherwise they won't be resized:
docker rmi your_image_name
Edit the storage config
vi /etc/sysconfig/docker-storage
There should be something like DOCKER_STORAGE_OPTIONS="...", change it to DOCKER_STORAGE_OPTIONS="... --storage-opt dm.basesize=100G"
Restart the Docker daemon:
service docker restart
Pull the image
docker pull your_image_name
(Optional) Verification:
docker run -i -t your_image_name /bin/bash
df -h
I was struggling with this a lot until I found this link: http://www.projectatomic.io/blog/2016/03/daemon_option_basedevicesize/
It turns out you have to remove and re-pull the image after enlarging the basesize.
For those who use Mac, here's an easier solution:
Click "Preference" from Docker icon in the status bar:
Then navigate to "Disk" tab, adjust the disk image size with the slider. Docker will take a moment to restart.
That's it.
The answers above correctly suggest editing the dm.basesize attribute of devicemapper, but the proposed solutions are out of date or simply did not work in my case.
First make sure your storage driver is devicemapper with:
docker info | grep "Storage Driver"
You can also check the current maximum container size (default 10 GB) with:
docker info | grep "Base Device Size"
From the devicemapper documentation:
1) Edit dm.basesize in the /etc/docker/daemon.json file, or create the file if it does not exist:
{
"storage-opts": [
"dm.basesize=30G"
]
}
2) Restart the Docker daemon:
sudo systemctl stop docker
sudo systemctl start docker
3) Run the command below again to check that the size changed:
docker info | grep "Base Device Size"
4) It's important to remove and re-pull existing images so the change is applied.
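For example, re-using the placeholder image name from the steps earlier in this thread:
docker rmi your_image_name
docker pull your_image_name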

Memory usage of Docker containers

I am using Docker to run some containerized apps. I am interested in measuring how many resources they consume (in terms of CPU and memory usage).
Is there any way to measure the resources consumed by Docker containers, like RAM and CPU usage?
Thank you.
You can get this from docker stats e.g:
$ docker stats --no-stream
CONTAINER CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
6b5c0fcfa7d4 0.13% 2.203 MiB / 4 MiB 55.08% 5.223 kB / 648 B 102.4 kB / 876.5 kB 3
Update: See Adrian Mouat's answer above, as Docker now supports docker stats!
There isn't a way to do this that's built into Docker in the current version. Future versions will support this via an API or plugin.
https://github.com/dotcloud/docker/issues/36
It does look like there's an lxc project that you should be able to use to track CPU and memory:
https://github.com/Soulou/acadock-live-lxc
Also, you can read resource metrics directly from cgroups.
See the example below (I am running Debian Jessie and Docker 1.2):
> docker ps -q
afa03c363af5
> ls /sys/fs/cgroup/memory/system.slice/ | grep docker-afa03c363af5
docker-afa03c363af54815d721d938e01fe4cb2debc4f6c15ebff1851e20f6cde3ae0e.scope
> cd docker-afa03c363af54815d721d938e01fe4cb2debc4f6c15ebff1851e20f6cde3ae0e.scope
> cat memory.usage_in_bytes
4358144
> cat memory.limit_in_bytes
1073741824
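On newer hosts that mount cgroup v2 (the unified hierarchy), the same counters live under different file names; a sketch, assuming the systemd cgroup layout used above, where <id> is the full container ID:
> cat /sys/fs/cgroup/system.slice/docker-<id>.scope/memory.current   # analogous to memory.usage_in_bytes
> cat /sys/fs/cgroup/system.slice/docker-<id>.scope/memory.max       # analogous to memory.limit_in_bytes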
Check out the commands below for getting the CPU and memory usage of Docker containers:
docker stats container_ID # to check a single container's resources
for i in $(docker ps -q); do docker stats $i --no-trunc --no-stream ; echo "--------"; done # to check/list all container resources
docker stats --all # to check all container resources live
docker system df -v # to check storage-related information
Disk usage of Docker objects (note that docker system df reports disk usage, not memory):
docker system df -v
Local disk space on the host:
df -kh
