Dangling and unused Docker images in RAM? - docker

Can anyone tell me why RAM is released when dangling and unused images are deleted?
I have a Swarm cluster with 3 nodes, each with 8 GB of RAM and 100 GB of HDD.
Note that there was no free RAM left at all, even though docker stats did not show prohibitive memory values.
The solution was to run the docker image prune -f command.
And as you can see from the graphs, not only was HDD space released, but RAM as well.
So do dangling and unused images tie up RAM?
I can't understand it.
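For reference, a minimal sketch of the cleanup and the before/after checks on a node (note that docker image prune only removes dangling images unless -a is added to also remove unused ones):
# Docker's view of disk usage before the cleanup
docker system df
# remove dangling images without a confirmation prompt; add -a to also remove unused images
docker image prune -f
# disk and memory on the node after the cleanup
df -h /var/lib/docker
free -h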

Related

Colima: increase docker image size limit

I'm running Docker through Colima and my total image size hit ~10 GB. I need to increase this limit in order to continue.
Is there a way to define this somewhere in Colima?
I had the same issue, and it is possible to customise the Colima VM's CPU, memory (GiB) and disk (GiB):
colima start --cpu 4 --memory 4 --disk 100
But it is weird, because the documentation states:
the default VM created by Colima has 2 CPUs, 2GiB memory and 60GiB storage
Colima - Customizing the VM
The default VM created by Colima has 2 CPUs, 2GiB memory and 60GiB storage.
The VM can be customized either by passing additional flags to colima start (e.g. --cpu, --memory, --disk, --runtime), or by editing the config file with colima start --edit.
NOTE: disk size cannot be changed after the VM is created.
Customization Examples
Create a VM with 1 CPU, 2GiB memory and 10GiB storage.
colima start --cpu 1 --memory 2 --disk 10
Modify an existing VM to 4 CPUs and 8GiB memory.
colima stop
colima start --cpu 4 --memory 8
Resize storage to 100GiB:
Since disk size cannot be changed after the VM is created, the workaround is to destroy and recreate the VM, accepting the data loss:
colima delete
colima start --disk 100
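The same values can also be kept in the config file mentioned above (opened with colima start --edit); a minimal sketch, assuming the default profile lives at ~/.colima/default/colima.yaml:
# ~/.colima/default/colima.yaml (path is an assumption; older versions use ~/.colima/colima.yaml)
cpu: 4      # number of CPUs
memory: 8   # memory in GiB
disk: 100   # disk in GiB; still only applied when the VM is created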
Reference
https://stackoverflow.com/a/74402260/9345651 by Carlos Cavero
https://github.com/abiosoft/colima#customizing-the-vm
https://github.com/abiosoft/colima/blob/main/docs/FAQ.md

How to set the RAM of a Docker container from the terminal or the Dockerfile

I need to have a Docker container with 6 GB of RAM.
I tried this command:
docker run -p 5311:5311 --memory=6g my-linux
But it doesn't work: I logged in to the container and checked the amount of memory available. This is the output, which shows that only 2 GB are available:
>> cat /proc/meminfo
MemTotal: 2046768 kB
MemFree: 1747120 kB
MemAvailable: 1694424 kB
I tried setting Preferences -> Advanced in the Docker application.
If I set 6 GB there, it works... I mean, I get a container with 6 GB of MemTotal.
But this way all my containers will have 6 GB...
I was wondering how to allocate 6 GB of memory for only one container, using some command or setting something in the Dockerfile. Any help?
Don't rely on /proc/meminfo for tracking memory usage from inside a Docker container. /proc/meminfo is not containerized, which means that the file displays the meminfo of your host system.
Your /proc/meminfo indicates that your host system has 2G of memory available. The only way you'll be able to make 6G available in your container without adding more physical memory is to create a swap partition.
Once you have a swap partition larger than or equal to ~4G, your container will be able to use that memory (by default, Docker imposes no memory limit on running containers).
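On a plain Linux host, adding swap could look roughly like this (a sketch; the size and path are placeholders, and Docker Desktop users would instead resize the VM in the preferences as described above):
# create and enable a 4G swap file on the host (run as root)
fallocate -l 4G /swapfile
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile
# verify the new swap space is visible
swapon --show
free -h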
If you want to explicitly limit the amount of memory available to your container to 6G, you could do docker run -p 5311:5311 --memory=2g --memory-swap=6g my-linux, which means that out of a total memory limit of 6G (--memory-swap), up to 2G may be physical memory (--memory). More information about this here.
There is no way to set memory limits in the Dockerfile that I know of (and I think there shouldn't be: Dockerfiles are there for building images, not running containers), but docker-compose supports the above options through the mem_limit and memswap_limit keys.
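A minimal sketch of the equivalent compose service, mirroring the docker run flags above (compose file format v2, where mem_limit and memswap_limit are service-level keys; the service and image names are taken from the question):
# docker-compose.yml
version: "2"
services:
  my-linux:
    image: my-linux
    ports:
      - "5311:5311"
    mem_limit: 2g       # physical memory limit (--memory)
    memswap_limit: 6g   # memory + swap limit (--memory-swap)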

How memory allocation happens in Docker

I have a Docker image with a virtual size of 6.5 GB:
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
Image_Name latest d8dcd701981e About an hour ago 6.565 GB
but the RAM in my system is only 4 GB. The container is working at a good speed, though. I am really confused about how RAM allocation is done for Docker containers. Is there any limit to the RAM allocated to a container, given that in the end a Docker container is just another isolated process running in the operating system?
The virtual size of an image has nothing to do with memory allocation.
If your huge image, once launched as a container, does very little, it won't reserve much memory (or consume much CPU).
For more on memory allocation, see this answer: you can limit the maximum memory allocation at runtime, independently of the image size.
For example:
$ docker run -ti -m 300M --memory-swap -1 ubuntu:14.04 /bin/bash
This sets a memory limit and disables the swap limit, which means the processes in the container can use 300M of memory and as much swap as they need (if the host has swap configured).
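To confirm what limit a running container actually got, something like this can be checked from the host (the container name is a placeholder):
# limits recorded by the daemon, in bytes (-1 for MemorySwap means unlimited swap)
docker inspect -f '{{.HostConfig.Memory}} {{.HostConfig.MemorySwap}}' my_container
# live usage versus the limit
docker stats --no-stream my_container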

How can my Docker hard drive be bigger than the host's?

I run some Docker images on an EC2 host and recently noticed that the Docker FS is always 100 GB. The host FS is only 8 GB, though.
What would happen if I use more than 8 GB on the Docker image? Magic?
That comes from PR 14709 and the docker daemon --storage-opt dm.basesize= option:
Current default basesize is 10G. Change it to 100G. Reason being that for some people 10G is turning out to be too small and we don't have capabilities to grow it dynamically.
This is just overcommitting and no real space is allocated till the container actually writes data. And this is no different than fs-based graphdrivers, where the virtual size of a container root is unlimited.
So when you go over 8 GB, you should get a "no space left on device" error. No magic.
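If a different base size is wanted, it can be set on the daemon for the devicemapper storage driver, for example via the daemon config (a sketch; it only affects devicemapper, and images/containers created before the change keep the old size):
# /etc/docker/daemon.json -- restart the Docker daemon after editing
{
  "storage-opts": [
    "dm.basesize=100G"
  ]
}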

Docker volume size limit on Bluemix

Docker containers let us conveniently mount volumes for persistent data. I've researched this, and if I understand correctly, the volume's space allocation is bound by the container host's drive space.
My question is: how does this translate to a cloud system like Bluemix? With a container (on Bluemix), you can set the drive limit to, say, 32 GB, and know you can run the image with 32 GB available to the container. Are any created volumes also capped and rolled into that 32 GB limit?
I'm not able to find any documentation on this. The closest I found was creating "Data Containers", where the volume limit is the size of the data container. But if I just create a volume and mount it to a container, what rules govern the size limit of that particular volume?
Running inspect on the volume returns:
{
"hostPath": "/vol/af2f348b-cad6-4b86-ac0c-b1bc072ca241/PGDATA",
"spaceGuid": "af2f348b-cad6-4b86-ac0c-b1bc072ca241",
"volName": "PGDATA"
}
This question seems specific to Bluemix, but not necessarily, since it might shed light on practices other "container as a service" providers might use.
On Bluemix you could use external Docker volumes for data persistence: if the storage is inside a container, its persistence is limited to the lifetime of the container.
You could create a volume using the cf CLI
cf ic volume create volume_name
and check for it through
cf ic volume list
Then you could mount it on the container on Bluemix through the cf ic command with the -v option, or through the Dockerfile when building on Bluemix.
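For illustration, the mount step might look like the following (a sketch that assumes cf ic run mirrors docker run's -v volume:path syntax; the image name and mount path are placeholders):
# mount the PGDATA volume into the container at start time (hypothetical image name)
cf ic run -v PGDATA:/var/lib/postgresql/data --name my-postgres registry.ng.bluemix.net/my_namespace/my-postgres-image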
For a reference
https://www.ng.bluemix.net/docs/containers/container_single_ov.html
Edit, Jan 18th:
There is a relationship between memory and storage size for containers on Bluemix; it is shown by the dashboard options (according to the account/org type):
Pico: 64 MB memory, 4 GB storage
Nano: 128 MB memory, 8 GB storage
Micro: 256 MB memory, 16 GB storage (default)
Tiny: 512 MB memory, 32 GB storage
Small: 1 GB memory, 64 GB storage
Medium: 2 GB memory, 128 GB storage
Large: 4 GB memory, 256 GB storage
X-Large: 8 GB memory, 512 GB storage
XX-Large: 16 GB memory, 1 TB storage
