I run the following docker image:
docker run -d -v /home/myapp:/home/node/app example1:latest
Let's assume that we run container example1 and it writes extensively into the /home/node/app directory (1 GB every minute). So in 10 minutes, we have written 10 GB.
If the docker container wasn't restarted, am I right that the size of the directory /home/myapp as well as the size of the container's /home/node/app will be 10 GB? So does our write operation take 20 GB of space in total?
Is there a way to restrict the size used inside a docker container, for example to no more than 3 GB? Would Swarm or K8s help us with that?
Am I right that the size of the directory /home/myapp as well as the size of the container's /home/node/app will be 10 GB?
No, you are not. Your /home/myapp directory will be mounted into the container, so instead of writing into the container, it will write into your /home/myapp directory.
However, from the container's perspective, /home/node/app will be 10 GB in size, and it will report as much if you jump into the container and have a look.
This follows UNIX mechanics, where you have a virtual file system and storage media are mounted at different paths inside it. In your case, said storage medium is the host path /home/myapp, which is mounted at /home/node/app.
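You can see both sides of this yourself; a quick check, assuming the container started above is named my_container (the name and sizes are illustrative):
docker exec my_container sh -c 'dd if=/dev/zero of=/home/node/app/test.bin bs=1M count=1024'   # write 1 GB from inside the container
du -sh /home/myapp                                 # ~1.0G on the host, outside the container
docker exec my_container du -sh /home/node/app     # the same ~1.0G, seen from inside
docker ps -s --filter name=my_container            # the container's own writable layer stays tiny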
Related
My problem is: I have an Ubuntu machine with 2 partitions, root and home. I have a MySQL Docker image of 100 GB. I keep it under root, in /var/lib/docker, so the Docker image itself uses 100 GB of my root partition.
Now, if I run this particular Docker image, a container gets created and tries to use another 100 GB of disk on the root partition while running, so 200 GB in total is used from the root partition.
Is there any way I can keep the Docker images on the root partition and, while running them, have the container use the other partition of the hard disk (not the one where the images are stored)?
I am not sure whether this is feasible.
Thanks in advance for the help.
Make a soft link for the image folder:
systemctl stop docker
mv /var/lib/docker/image /partition_2/
ln -s /partition_2/image /var/lib/docker/image
systemctl start docker
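After the restart you can verify the move worked; a hedged check, assuming a standard systemd-based install and the paths used above:
ls -l /var/lib/docker/image        # should now be a symlink pointing to /partition_2/image
docker info | grep -i 'root dir'   # Docker Root Dir should still be /var/lib/docker
docker images                      # your images should still be listed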
I am running a Docker Compose setup on an AWS EC2 instance with three Docker containers.
After a few weeks of running my Docker images, the size of the /containers directory has increased quite a bit:
8,1G /var/lib/docker/containers
0 /var/lib/docker/plugins
3,1G /var/lib/docker/overlay2
When I stop all my containers, remove both the containers and the images, and then restart my Docker images, it looks like this:
96K /var/lib/docker/containers
0 /var/lib/docker/plugins
3,1G /var/lib/docker/overlay2
A docker image prune --all did not free anything.
So how can I prevent /var/lib/docker/containers from growing that much?
This happens because you are writing data into the container itself. You should write data to an external volume instead (see the sketch after the quote below). Every write into the container lands in a writable layer on top of the image.
After a while, your /var/lib/docker/containers directory will have collected a lot of changed/written files and will keep growing.
Each time you remove your container, that writable layer is removed, and you are back to the original state of the image as it was when you built it.
Quote:
Containers that write a lot of data consume more space than containers that do not. This is because most write operations consume new space in the container's thin writable top layer.

Note: for write-heavy applications, you should not store the data in the container. Instead, use Docker volumes, which are independent of the running container and are designed to be efficient for I/O. In addition, volumes can be shared among containers and do not increase the size of your container's writable layer.
Reference: https://docs.docker.com/storage/storagedriver/
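A minimal sketch of the volume approach; the image name my_app and the path /app/data are assumptions for illustration:
docker volume create app_data
docker run -d --name my_app -v app_data:/app/data my_app:latest
docker ps -s --filter name=my_app    # the container's writable layer stays small
docker system df -v                  # the data shows up under the volume instead
With the named volume mounted at the write-heavy path, the data survives container removal and no longer accumulates under the container's writable layer.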
Let's say I have 10 Docker images, and many layers are shared between them.
When using docker save -o, each saved image is standalone, and therefore the total size grows much bigger (for 10 images, the size is around 9 GB).
After pulling the Docker images, I explored:
/var/lib/docker
--- aufs (~3GB only)
--- containers (Few KBs)
--- image (Few KBs)
--- ...
--- mnt
Is there any way to efficiently export images?
I also tried copying the aufs and image folders to a new host, but some containers can't start.
When inspecting the log:
/usr/bin/sudo must be owned by uid 0 and have the setuid bit set
Note: I already referred to this. This question is not a duplicate of it; it didn't solve the use case I mentioned above.
I don't know whether this is a correct approach, but the method below worked for me. I tested it with 30 Kolla OpenStack Docker images.
Why do I have a problem with docker save?
As I said, I have 30 Docker images. When I save them using docker save -o <save image to path> <image name>, the total size is 15 GB, which is too big to be portable.
What I did:
Long story short: I carefully copied the aufs and image folders.
Step 1:
On the machine you want to export from: make sure you only have the images that are to be exported. Remove all containers, both running and stopped.
On the machine you want to import to: make sure you don't have any images or containers.
Step 2: tar -cvf docker-images.tar /var/lib/docker/aufs /var/lib/docker/image. This packs all of your image layers and their metadata into a single tar file; the size is only 3 GB.
Step 3: On the machine where you want to import the images,
tar -xvf docker-images.tar -C /var/lib/docker/.
Now restart Docker: service docker restart
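After the restart, a quick sanity check (the image name placeholder is whatever you exported; this is just an illustrative verification):
docker images                                        # all exported images should be listed again
docker run --rm <one of the imported images> true    # confirms a container can start from an imported image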
I have a Docker container which does a lot of reads and writes to disk. I would like to test what happens when my entire Docker filesystem is in memory. I have seen some answers here that say it will not be a real performance improvement, but this is for testing.
The ideal solution I would like to test is sharing the common parts of each image and copying them into memory only when needed.
Each container's files created at runtime should be in memory as well, and kept separate. It shouldn't be more than a 5 GB filesystem at idle and up to 7 GB while processing.
Simple solutions would duplicate all shared files (even those part of the OS you never use) for each container.
There's no difference between the storage of the image and the base filesystem of the container: the layered filesystem accesses the image layers directly as read-only layers, with the container using a read-write layer on top to catch any changes. Therefore your goal of having the container run in memory while the Docker installation remains on disk has no easy implementation.
If you know where your read-write activity is occurring (it's fairly easy to check with docker diff on a running container), the best option to me would be a tmpfs mounted at that location in your container, which is natively supported by Docker (from the docker run reference):
$ docker run -d --tmpfs /run:rw,noexec,nosuid,size=65536k my_image
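To find where that read-write activity actually lands, docker diff lists what has changed in a running container; the container name and output below are illustrative:
docker diff my_container
# C /run          (C = changed directory)
# A /run/app.pid  (A = added file; D would mean deleted)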
Docker stores image, container, and volume data in its directory by default. A container's filesystem is made up of the original image plus the 'container layer'.
You might be able to set this up using a RAM disk. You would hard-allocate some RAM, mount it, and format it with the filesystem of your choice, then move your Docker installation to the mounted RAM disk and symlink it back to the original location.
Setting up a Ram Disk
Best way to move the Docker directory
Obviously this is only useful for testing, as Docker and its images, volumes, containers, etc. would be lost on reboot.
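A rough sketch of that approach; the mount point, the 8 GB size, and the use of tmpfs (which needs no separate formatting) are assumptions, and everything moved there is lost on reboot:
systemctl stop docker
mkdir -p /mnt/ramdisk
mount -t tmpfs -o size=8g tmpfs /mnt/ramdisk    # backs the directory with RAM
mv /var/lib/docker /mnt/ramdisk/docker
ln -s /mnt/ramdisk/docker /var/lib/docker       # symlink back to the original location
systemctl start docker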
The Docker glossary says that
A Docker container consists of
A Docker image
Execution environment
A standard set of instructions
When I type docker images, I see 324.2 MB in SIZE column of mysql:5.6.
When I type docker ps -s -a, this command tells me that the size of the container, which is created by docker run -d mysql:5.6, is also 324.2 MB.
Does this mean that the "Execution environment" and "A standard set of instructions" do not occupy any disk space?
Or that the disk space they use is less than 0.1 MB?
Or does docker ps -s -a just list the size of the container's image?
Because of the copy-on-write mechanism, the size of a container is... at first 0.
Meaning, you can launch 100 containers and they won't take 100 times the size of the image; they will all share the filesystem provided by the image.
Then any modification made during the life of a container is written to a new layer, one per container.
See more at "Understand images, containers, and storage drivers":
When you create a new container, you add a new, thin, writable layer on top of the underlying stack. This layer is often called the “container layer”.
All changes made to the running container - such as writing new files, modifying existing files, and deleting files - are written to this thin writable container layer. The diagram below shows a container based on the Ubuntu 15.04 image.
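You can observe this with docker ps -s; the commands and numbers below are illustrative, and the MYSQL_ALLOW_EMPTY_PASSWORD variable is only set so the demo container starts:
docker run -d --name db1 -e MYSQL_ALLOW_EMPTY_PASSWORD=yes mysql:5.6
docker ps -a -s --filter name=db1
# SIZE reads something like: 2B (virtual 324.2MB)
# the first number is the container's own writable layer; the virtual size includes the shared image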