I have a Docker swarm with 10 worker nodes and I'm experiencing issues with Docker image storage (in the thin pool). It keeps getting full because the nodes have rather small disks (30GB-60GB).
The error:
Thin Pool has 7330 free data blocks which is less than minimum required 8455 free data blocks. Create more free space in thin pool or use dm.min_free_space option to change behavior
Because of that, the cleaning strategy has to be aggressive, meaning all images are deleted three times a day. This aggressive cleaning results in broken pulls (when cleaning happens at the same time someone is pulling an image), and it means developers cannot use cached images; instead they have to re-download the images that the cleaning mechanism just deleted.
However, there is an option to use GlusterFS storage, and I want to mount GlusterFS volumes on each Docker node and use them to back the thin pool for Docker images and /var/lib/docker.
I'm looking for a guide on how to do that exactly. Has anyone tried it?
P.S. I did my research on shared storage for Docker images between multiple Docker nodes, and it seems it's not possible, as stated here. However, mounting a separate volume on each Docker node should be possible.
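Roughly what I have in mind, as a sketch only (the Gluster volume name, the mount points, and the per-node naming are placeholders, and I haven't verified this myself; it simply relocates /var/lib/docker onto the Gluster mount rather than rebuilding the devicemapper thin pool):
# on each worker node: mount a dedicated GlusterFS volume for that node
$ mount -t glusterfs gluster1:/docker-data-node01 /mnt/docker-data
# stop docker and move its data directory onto the Gluster mount
$ systemctl stop docker
$ rsync -a /var/lib/docker/ /mnt/docker-data/
# point the daemon at the new location in /etc/docker/daemon.json
#   { "data-root": "/mnt/docker-data" }
$ systemctl start docker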
Related
Currently I am migrating to Docker Swarm and have begun using docker configs to offload most of the configuration files, but I have one file remaining, several GB in size, that is used by my tileserver. Right now I have 1 master / 4 workers, and I am looking for a way to share that file with all nodes in the swarm to prepare for the time when the tileserver goes down.
Any ideas?
If you want highly available data, then you need a solution that distributes the data amongst the nodes (or servers).
One approach would be deploying an object storage solution onto the swarm: something like MinIO gives you an S3-compatible REST API and, when deployed with a minimum of 4 disks in erasure-coding mode, tolerates 1 disk down for writing and 2 disks down for reading (assuming you have a node per disk).
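A minimal single-host sketch of such a MinIO deployment (paths and credentials are placeholders; older MinIO releases use MINIO_ACCESS_KEY/MINIO_SECRET_KEY while newer ones use MINIO_ROOT_USER/MINIO_ROOT_PASSWORD, so check the docs for your version):
$ docker run -d --name minio -p 9000:9000 \
    -e MINIO_ACCESS_KEY=admin -e MINIO_SECRET_KEY=changeme \
    -v /mnt/disk1:/data1 -v /mnt/disk2:/data2 \
    -v /mnt/disk3:/data3 -v /mnt/disk4:/data4 \
    minio/minio server /data1 /data2 /data3 /data4
# with 4 drives MinIO switches to erasure-coding mode automatically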
If re-jigging your app to work with object storage isn't in scope, then investigate something like GlusterFS, which you will want to install on the metal rather than in Docker. GlusterFS will give you a unified filesystem with decent HA on 3 nodes, and you can add disks on the fly.
Obviously, with MinIO it's expected that your app would use the S3 API to access its files. With GlusterFS you would need to mount the gfs volumes at host locations, and containers would then bind-mount those host paths as volumes to gain access to that network storage.
That is, unless you are willing to go wandering through the world of REX-Ray and other community-supported Docker volume drivers, which either haven't seen an update in years or are literally maintained by one person for fun, and which can bring some first-class support for GlusterFS-based Docker volumes to your hopefully non-production Docker swarm.
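A rough sketch of the mount-on-the-host approach described above (the Gluster volume name, mount point, and image name are placeholders):
# on each docker host: mount the gluster volume somewhere on the host
$ mount -t glusterfs gluster1:/shared-data /mnt/shared-data
# then bind-mount that host path into the service's containers
$ docker service create --name tileserver \
    --mount type=bind,source=/mnt/shared-data,target=/data \
    my-tileserver-image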
I have a Kubernetes cluster with 1 master node and 3 worker nodes. All nodes are running CentOS 7 with Docker 19.06. I'm also running Longhorn for dynamic provisioning of volumes (if that's important).
My problem is that every few days one of the worker nodes' HDD usage grows to 85% (43GB). This is not a linear increase but happens over a few hours, sometimes quite rapidly. I can "solve" this problem for a few days by first restarting the docker service and then running docker system prune -a. If I don't restart the service first, the prune removes next to nothing (only a few MB).
I also tried to find out which container is taking up all that space, but docker system df says it doesn't use the space. I used df and du to crawl along the /var/lib/docker subdirectories too, and it seems none of the folders (alone or all together) takes up much space either. Continuing this all over the system, I can't find any other big directories either. There are 24GB that I just can't account for. What makes me think this is a docker problem nonetheless is that a restart and prune just solves it every time.
Googling around I found a lot of similar issues where most people just decided to increase disk space. I'm not keen on accepting this as the preferred solution, as it feels like kicking the can down the road.
Would you have any smart ideas on what to do instead of increasing disk space?
It seems like this is expected behavior; in the Docker documentation you can read:
Docker takes a conservative approach to cleaning up unused objects (often referred to as “garbage collection”), such as images, containers, volumes, and networks: these objects are generally not removed unless you explicitly ask Docker to do so. This can cause Docker to use extra disk space. For each type of object, Docker provides a prune command. In addition, you can use docker system prune to clean up multiple types of objects at once. This topic shows how to use these prune commands.
So it seems like you have to clean it up manually using docker system/image/container prune. Another issue might be that those containers produce too many logs, and you may need to clean those up as well.
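For example (a sketch; the rotation values are only illustrative, and the log options apply to newly created containers):
# prune unused containers, images and networks (add --volumes to prune volumes too)
$ docker system prune -a -f
# /etc/docker/daemon.json - cap and rotate json-file container logs
{
  "log-driver": "json-file",
  "log-opts": { "max-size": "10m", "max-file": "3" }
}
$ systemctl restart docker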
I haven't had enough courage to start using Docker until now; I feel like I've come from the last century. I want to clear up my doubts about Docker before getting started. My question is mainly about deploying/running Docker images in a cloud or hosting environment.
Can I build a Docker image with any type of server (e.g. WildFly, Payara) and/or database server (e.g. MySQL, Oracle), and will it work on a Docker-enabled cloud/datacenter?
If yes, what about persistent data like database files and static storage (e.g. images, uploaded documents, logs)? Are those stored in the Docker images or somewhere else? What will happen to those files when I update my application and redeploy a new image?
I've read posts about what Docker is, but I couldn't find a specific answer. Forgive me for not doing enough googling.
I have run Docker on AWS and other cloud providers. It is really not that hard if you have some experience with system administration and/or devops. Regarding cloud hosters and getting started, most providers have some sort of tutorial on how to get started using Docker with their infrastructure:
http://docs.aws.amazon.com/AmazonECS/latest/developerguide/docker-basics.html
https://azure.microsoft.com/en-us/documentation/articles/virtual-machines-linux-dockerextension/
Can I build a Docker image with any type of server (e.g. WildFly, Payara) and/or database server (e.g. MySQL, Oracle), and will it work on a Docker-enabled cloud/datacenter?
To get a server up and running, you just need the Docker Engine installed on the host; there are packages for many distros:
https://docs.docker.com/engine/installation/
After the Docker Engine is installed, you can create Dockerfiles for basically any server or service. Hopefully you do not need to in most cases, since there are countless Dockerfiles and pre-configured, vendor-maintained images already available on Docker Hub (I use WildFly, the ELK stack, and MySQL, for example). Be careful to select images that are actually maintained, otherwise you end up with security issues in your images that might never get fixed! Or you have to fix them yourself!
Example images:
https://hub.docker.com/r/jboss/wildfly/
https://hub.docker.com/_/mysql/
https://hub.docker.com/_/oraclelinux/
https://hub.docker.com/u/payara/
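For instance, running one of those pre-built images is usually a one-liner (the container name, password, and tag here are just placeholders):
$ docker run -d --name mydb -e MYSQL_ROOT_PASSWORD=secret -p 3306:3306 mysql:5.7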
If yes, what about persistent data like database files and static storage (e.g. images, uploaded documents, logs)? Are those stored in the Docker images or somewhere else? What will happen to those files when I update my application and redeploy a new image?
In general, you will want to store persistent data external to the Docker image and mount it into the container as a volume:
https://docs.docker.com/engine/tutorials/dockervolumes/
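As a sketch, here is the same kind of container with its data directory kept on the host (the host path /srv/mysql-data is a placeholder), so the data survives removing the container and redeploying a new image:
$ docker run -d --name mydb \
    -e MYSQL_ROOT_PASSWORD=secret \
    -v /srv/mysql-data:/var/lib/mysql \
    mysql:5.7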
Some cloud based storage providers might be easier to mount or connect to in other ways, but this volume approach is standard, IMO.
For logfiles, I actually push them to an ELK server, so having a volume for the logs is not necessarily required. However, since the ELK server is also a docker image, it does have a volume where the data is persisted.
So you have:
documentation from your cloud hoster (or from Docker themselves)
a host in your cloud running the Docker Engine
0..n images that you can either grab from Docker Hub or build yourself
storage for persistent data on this host, or mounted from elsewhere, that you mount into your containers on startup. This is where e.g. the MySQL data folders live, or where you can persist logs, etc.
Of course, it can get much more complex from there, e.g. how to transparently scale and update your environment etc., but that is something for e.g. kubernetes or docker swarm or some other solution (I've scripted a bit on my own but do not need the robustness or elastic scalability of large systems).
Regarding cluster management, it should be noted that Swarm is now included in the Docker Core. This has created some controversy in the community and even talks of a fork of the core:
https://technologyconversations.com/2015/11/04/docker-clustering-tools-compared-kubernetes-vs-docker-swarm/
https://jaxenter.com/docker-1-12-is-probably-the-most-important-release-since-1-0-129080.html
http://searchitoperations.techtarget.com/news/450303918/Docker-fork-talk-prompts-container-standardization-brawl
http://www.infoworld.com/article/3118345/cloud-computing/why-kubernetes-is-winning-the-container-war.html
I have experience running Docker on Alibaba Cloud and AWS as well. I did not see any difference in working with Docker on either cloud provider. Docker images can be built the same way on any Linux platform regardless of the cloud provider. However, persistence of data needs to be taken care of using Docker volumes. For databases, it is recommended to use a managed service such as RDS in Alibaba Cloud instead of running them in Docker.
Can I build a Docker image with any type of server (e.g. WildFly, Payara) and/or database server (e.g. MySQL, Oracle), and will it work on a Docker-enabled cloud/datacenter?
You can build your own Docker images or use solutions that are already pre-packaged and proven by cloud providers. For example, here is an auto-clustering Docker-based implementation of GlassFish that can be run and managed on Jelastic PaaS.
If yes, what about persistent data like database files and static storage (e.g. images, uploaded documents, logs)? Are those stored in the Docker images or somewhere else? What will happen to those files when I update my application and redeploy a new image?
With the above-mentioned cluster, all data is kept inside the containers and survives a restart unchanged. As an option, you can also attach a separate data storage container if you wish to share data across other containers.
The general scenario is that we have a cluster of servers and we want to set up virtual clusters on top of that using Docker.
For that we have created Dockerfiles for different services (Hadoop, Spark etc.).
Regarding the Hadoop HDFS service, however, we have the situation that the disk space available to the Docker containers equals the disk space available on the server. We want to limit the available disk space on a per-container basis so that we can dynamically spawn an additional datanode with a given storage size to contribute to the HDFS filesystem.
We had the idea of using loopback files formatted with ext4 and mounting these onto directories that we use as volumes in the Docker containers. However, this implies a large performance loss.
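For reference, the loopback setup we tried looks roughly like this (the file size, paths, data directory /hadoop/dfs/data, and image name are placeholders):
# create a fixed-size file, format it, and mount it via a loop device
$ dd if=/dev/zero of=/srv/datanode1.img bs=1M count=10240
$ mkfs.ext4 -F /srv/datanode1.img
$ mkdir -p /srv/datanode1
$ mount -o loop /srv/datanode1.img /srv/datanode1
# bind-mount it as the datanode's data directory
$ docker run -d -v /srv/datanode1:/hadoop/dfs/data our-hadoop-datanode-image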
I found another question on SO (Limit disk size and bandwidth of a Docker container), but the answers are almost 1.5 years old, which, given the speed of Docker's development, is ancient.
Which approach or storage backend would allow us to:
limit storage on a per-container basis
achieve near bare-metal performance
avoid repartitioning the server drives
You can specify runtime constraints on memory and CPU, but not disk space.
The ability to set constraints on disk space has been requested (issue 12462, issue 3804), but isn't yet implemented, as it depends on the underlying filesystem driver.
This feature is going to be added at some point, but not right away. It's a bit more difficult to add this functionality right now because a lot of chunks of code are moving from one place to another. After this work is done, it should be much easier to implement this functionality.
Please keep in mind that quota support can't be added as a hack to devicemapper, it has to be implemented for as many storage backends as possible, so it has to be implemented in a way which makes it easy to add quota support for other storage backends.
Update August 2016: as shown below, and in an issue 3804 comment, PR 24771 and PR 24807 have since been merged. docker run now allows setting storage driver options per container:
$ docker run -it --storage-opt size=120G fedora /bin/bash
This (size) allows you to set the container rootfs size to 120G at creation time.
This option is only available for the devicemapper, btrfs, overlay2, windowsfilter and zfs graph drivers.
Documentation: docker run/#Set storage driver options per container.
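Aside from the per-container flag, a daemon-wide default can be set for the devicemapper driver via dm.basesize in /etc/docker/daemon.json (a sketch; 20G is just an example value, and changes generally only apply to newly created images/containers). Note also that for overlay2 the per-container size option only works when the backing filesystem is xfs mounted with the pquota option.
{
  "storage-driver": "devicemapper",
  "storage-opts": [ "dm.basesize=20G" ]
}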
As far as I understand, Docker uses memory-mapped files to start from an image. Since I can do this over and over again, and, as far as I remember, also start different instances of the same image in parallel, I guess Docker abstracts the filesystem and stores changes somewhere else.
I wonder if Docker can be configured (or does this by default) to run in a memory-only mode without some sort of temporary file?
Docker uses layered, copy-on-write storage drivers (union filesystems such as AUFS and OverlayFS, or block-level drivers like devicemapper and Btrfs) that allow it to work in "layers". Copy-on-write makes starting new containers cheap, and when a container performs its first write, a new layer is actually created.
When you start a container from an image, you are not using memory-mapped files to restore a frozen process (unless you built all of that into the image yourself...). Rather, you're starting a normal Unix process but inside a sandbox where it can only see its own unionfs filesystem.
Starting many copies of an image where no copy writes to disk is generally cheap and fast. But if you have a process with a long start-up time, you'll still pay that cost for every instance.
As for running Docker containers wholly in memory, you could create a RAM disk and point Docker's storage location at it (this is configurable, but typically lives under /var/lib/docker).
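A sketch of that idea (size and path are placeholders; everything on the RAM disk is lost on reboot, and the data-root key assumes a reasonably recent Docker version):
# create a RAM disk and point Docker's data root at it
$ mkdir -p /mnt/docker-ram
$ mount -t tmpfs -o size=8g tmpfs /mnt/docker-ram
# /etc/docker/daemon.json
#   { "data-root": "/mnt/docker-ram" }
$ systemctl restart docker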
In typical use cases, I would not expect this to be a useful performance tweak. First, you'll spend a lot of memory holding files you won't access. The base layer of an image contains most Linux system files. If you fetch 10 packages from Docker Hub, you'll probably hit 20G worth of images easily (after that the storage cost tends to plateau). Second, the system already manages memory and swapping pretty well (which limits how much a RAM disk actually buys you), and you get all of that applied to processes running inside a container. Third, for most of the cases where a RAM disk might help, you can use the -v flag to mount the RAM disk as a volume on the container rather than needing to store your whole union filesystem there.
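A sketch of that last option (the mount point, size, and image name myapp are placeholders): mount a tmpfs on the host and hand just that to the container, or let Docker create a tmpfs inside the container via the --tmpfs flag:
# a tmpfs on the host, handed to the container as a volume
$ mount -t tmpfs -o size=1g tmpfs /mnt/ramdisk
$ docker run -d -v /mnt/ramdisk:/scratch myapp
# or have Docker create a tmpfs inside the container directly
$ docker run -d --tmpfs /scratch:rw,size=1g myapp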