What is the best way/tool to monitor an EBS volume's available space when it is mounted inside a Docker container?
I really need to monitor the available disk space in order to prevent a crash due to "no space left on device".
Do you know of any tool that can monitor that, like Datadog, New Relic, Grafana, Prometheus, or something open source?
The Telegraf/InfluxDB/Grafana stack can monitor the space left on disk. Kapacitor can also be added if you want alerts. If you want to enforce a limit, you have to use a dedicated partition / mount point or a btrfs subvolume with quotas.
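For example, a minimal Telegraf disk input might look like this (a sketch only; the /data mount point and the drop-in config path are assumptions that depend on your setup):

cat > /etc/telegraf/telegraf.d/disk.conf <<'EOF'
# Report usage for the mount point backing the EBS volume (assumed to be /data)
[[inputs.disk]]
  mount_points = ["/data"]
  ignore_fs = ["tmpfs", "devtmpfs", "overlay"]
EOF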
Another option is to set up a cron job that cleans up unused Docker images, unused volumes, and exited containers. I use this method myself.
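As a rough sketch (the schedule is arbitrary, and -f skips the confirmation prompts), a crontab entry for this could look like:

# Every night at 03:00: remove exited containers, dangling images, and unused volumes
0 3 * * * docker container prune -f && docker image prune -f && docker volume prune -f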
Related
My server is at 0% free space. I just deleted 100 GB of data from one of my Docker volumes in an actively running container.
How do I free up the space and release it to the host system so that I am not at 0%? Do I need to stop the Docker container to release it?
Thanks!
If the container is still running, and deleting files doesn't show any drop in disk usage, odds are good that the process inside your container has those file handles open. The OS won't release the underlying file and reclaim the space until all processes with open file handles close those file handles, or the process exits. In short, you likely need to restart the container.
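As a quick check (the container name is a placeholder, and this assumes lsof is available inside the image), you can look for deleted files that are still held open, then restart the container to release the space:

# Files with a link count of 0 are deleted but still held open by a process
$ docker exec my_container lsof +L1
# Restarting closes the handles so the kernel can reclaim the space
$ docker restart my_container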
What you need to do is clean up your Docker system by using:
docker system prune (see https://docs.docker.com/config/pruning/)
This will remove stopped containers, dangling images, unused networks, and more.
An error occurred because there is not enough disk space.
I decided to check how much space was free, and this is what I found (see the screenshot).
I cleaned up via docker system prune -a and
docker container prune -f
docker image prune -f
docker system prune -f
But only 9 GB was freed.
Prune only removes containers that are stopped and images that are unused/dangling. I would suggest you do a docker ps -a and then stop/remove all the containers that you don't want with docker stop <container-id> and docker rm <container-id>, then move on to list the images with docker images and remove the ones you don't need with docker rmi <image-name>.
Once you have stopped/removed all the unwanted containers, run docker system prune --volumes to remove unused volumes, the build cache, and dangling images.
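Put together, the sequence looks roughly like this (container and image names are placeholders):

$ docker ps -a                    # list all containers, running and stopped
$ docker stop <container-id>      # stop the ones you no longer need
$ docker rm <container-id>        # remove them
$ docker images                   # list images
$ docker rmi <image-name>         # remove the images you don't need
$ docker system prune --volumes   # clean up dangling images, unused volumes, and the build cache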
Don't forget to prune the volumes! Those almost always take up way more space than images and containers. docker volume prune. Be careful if you have anything of value stored in them though.
It could be the logging of your running containers. I've seen Docker container logs fill a disk completely. By default, Docker containers can write unlimited logs.
I always add a logging configuration to my docker-compose files to restrict the total size. See Docker Logging Configuration.
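For reference, the plain docker run equivalent of that compose setting looks something like this (the sizes and image name are just an example):

# Cap logs at 3 rotated files of 10 MB each with the json-file driver
$ docker run -d --log-driver json-file --log-opt max-size=10m --log-opt max-file=3 my_image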
From the screenshot I think there's some confusion about what's taking up how much space. Overlay filesystems are assembled from directories on the parent filesystem. In this case, that parent filesystem is within /var/lib/docker, which is part of / in your example. So the df output for each overlay filesystem is showing how much disk space is used/available within /dev/vda2. Each container isn't using 84G; that's just how much is used in your entire / filesystem.
To see how much space is being used by Docker containers, images, volumes, etc., you can use docker system df. If you have running containers (likely, from the above screenshot), Docker will not clean those up; you need to stop them before they are eligible for pruning. Once containers have been deleted, you can then prune images and volumes. Note that deleting volumes deletes any data you were storing in that volume, so be sure it's not data you wanted to save.
It's not uncommon for Docker to use a lot of disk space from downloading lots of images (Docker makes it easy to try new things), and those are the easiest to prune when the containers are stopped. However, what's harder to see are the logs that containers are outputting, which will slowly fill the drive from within the containers directory. For more details on how to clean the logs automatically, see this answer.
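A quick way to see where the space actually went (the log path assumes the default json-file logging driver):

$ docker system df -v                                   # per-image/container/volume breakdown
$ sudo du -sh /var/lib/docker/containers/*/*-json.log   # size of each container's log file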
If you want, you could dig deeper at a granular level and pinpoint the file(s) that are the cause of this much disk usage.
du -sh /var/lib/docker/overlay2/<container hash>/merged/* | sort -h
This should help you reach a conclusion much more easily.
Is there any way to limit the size that a mounted Docker volume can grow to? I'm thinking of doing it like it's done here: How to set limit on directory size in Linux?, but I feel it's a bit too convoluted for what I need.
By default when you mount a host directory/volume into your Docker container while running it, the Docker container gets full access to that directory and can use as much space as is available.
The way you're trying is, yes, tedious.
What you can do instead is attach a new partition of limited size to your server (e.g. a separate EBS volume on your EC2 instance), mount it on the host, and then mount that path inside your container; that will serve your purpose.
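A rough sketch of that approach on EC2 (the device name /dev/xvdf, the mount point /mnt/limited, and the image name are assumptions):

# Format and mount a small, dedicated EBS volume on the host
$ sudo mkfs.ext4 /dev/xvdf
$ sudo mkdir -p /mnt/limited
$ sudo mount /dev/xvdf /mnt/limited
# Bind-mount it into the container; writes there can't grow past the volume's size
$ docker run -d -v /mnt/limited:/data my_image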
I have a Docker container which does a lot of reads/writes to disk. I would like to test what happens when my entire Docker filesystem is in memory. I have seen some answers here that say it will not be a real performance improvement, but this is for testing.
The ideal solution I would like to test is sharing the common parts of each image and copying them into memory only when needed.
Files that each container creates at runtime should be in memory as well, and kept separate. The filesystem shouldn't be more than 5 GB when idle and up to 7 GB during processing.
Simple solutions would duplicate all shared files (even the parts of the OS you never use) for each container.
There's no difference between the storage of the image and the base filesystem of the container; the layered FS accesses the image's layers directly as RO layers, with the container using a RW layer above to catch any changes. Therefore your goal of having the container running in memory while the Docker installation remains on disk doesn't have an easy implementation.
If you know where your RW activity is occurring (it's fairly easy to check the docker diff of a running container), the best option to me would be a tmpfs mounted at that location in your container, which is natively supported by docker (from the docker run reference):
$ docker run -d --tmpfs /run:rw,noexec,nosuid,size=65536k my_image
Docker stores image, container, and volume data in its data directory (/var/lib/docker on Linux) by default. A container's filesystem is made up of the original image plus the 'container layer'.
You might be able to set this up using a RAM disk. You would hard-allocate some RAM, mount it, and format it with your filesystem of choice. Then move your Docker installation to the mounted RAM disk and symlink it back to the original location.
Setting up a Ram Disk
Best way to move the Docker directory
Obviously this is only useful for testing, as Docker and its images, volumes, containers, etc. would be lost on reboot.
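A minimal sketch of that setup (the 8g size and the paths are assumptions; stop Docker first so the copy is consistent):

# Create and mount a RAM-backed filesystem
$ sudo mkdir -p /mnt/ramdisk
$ sudo mount -t tmpfs -o size=8g tmpfs /mnt/ramdisk
# Move the Docker data directory onto it and symlink it back
$ sudo systemctl stop docker
$ sudo rsync -a /var/lib/docker/ /mnt/ramdisk/docker/
$ sudo mv /var/lib/docker /var/lib/docker.bak
$ sudo ln -s /mnt/ramdisk/docker /var/lib/docker
$ sudo systemctl start docker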
Assume I am starting a large number of Docker containers which are based on the same Docker image. It means that each Docker container is running the same application. It could be the case that the application is big and requires a lot of hard drive space.
How does Docker deal with this?
Do all Docker containers share the static part defined in the Docker image?
If not, does it make sense to copy the application into some directory on the host machine that runs the Docker containers and to mount this app directory into each Docker container?
Docker shares resources at the kernel level. This means application logic is never replicated when it is run. If you start Notepad 1000 times, it is still stored only once on your hard disk; the same goes for Docker instances.
If you run 100 instances of the same Docker image, all you really do is keep the state of the same piece of software in your RAM in 100 separate timelines. The host's processor(s) advance the in-memory state of each of these container instances independently, so you DO consume 100 times the RAM required for running the application.
There is no point in physically storing the exact same byte-code for the software 100 times because this part of the application is always static and will never change. (Unless you write some crazy self-altering piece of software, or you choose to rebuild and redeploy your container's image)
This is why containers don't allow persistence out of the box, and how Docker differs from regular VMs that use virtual hard disks. However, this is only true for persistence inside the container. Files on the hard disk that need to be changed by the software are "mounted" into containers using Docker volumes and thus aren't really part of the Docker environment, but just mounted into it. (Read more about this at: https://docs.docker.com/userguide/dockervolumes/)
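For example (the volume and image names are placeholders), a named volume keeps the changing data outside the container's layered filesystem:

# Data written to /var/lib/app lives in the volume, not in the container layer
$ docker volume create app_data
$ docker run -d -v app_data:/var/lib/app my_image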
Another question you might want to ask when you think about this is how Docker stores the changes it makes to its disk at runtime. What is really sweet to check out is how Docker actually manages to get this working. The original state of the container's hard disk is what is given to it by the image. It can NOT write to this image. Instead of writing to the image, a diff is made of what has changed in the container's internal state compared to what is in the Docker image.
Docker uses a technology called "Union Filesystem", which creates a diff layer on top of the initial state of the docker image.
This "diff" (referenced as the writable container in the image below) is stored in memory and disappears when you delete your container. (Unless you use the command "docker commit", however: I don't recommend this. The state of your new docker image is not represented in a dockerfile and can not easily be regenerated from a rebuild)