I have an SSD drive mounted at /ssd. I'd like to create a docker volume container for a MySQL server that will be run in a container, and have it use this SSD drive for data storage. It seems that by default, docker creates data volume containers in /var/lib/docker. How can I force docker to use the SSD drive for the data volume container?
The point of containers is that they're decoupled from the host. There's no per-container way to control where Docker puts its data on the local filesystem.
However, what I do for my databases is bind-mount a host directory through to the container:
docker create -v /ssd:/path/to/data --name data_for_mysql <imagename> /bin/true
Then you can run your MySQL container with --volumes-from data_for_mysql, and it will write directly to /ssd on the host filesystem.
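Putting it together for MySQL, a minimal sketch (the mysql image name and its default datadir /var/lib/mysql are assumptions; adjust for your image):

```shell
# Data volume container whose volume is a bind mount of the SSD.
# /bin/true exits immediately; the container only needs to exist:
docker create -v /ssd:/var/lib/mysql --name data_for_mysql mysql /bin/true

# MySQL container reusing that volume; all data lands on /ssd:
docker run -d --volumes-from data_for_mysql --name mysql_server \
  -e MYSQL_ROOT_PASSWORD=secret mysql
```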
I have a container with code in it, running on a production server. How can I share the code folder in this container with my local machine? Perhaps with a Samba server, mounting (cifs) this folder on my machine? Some examples would help.
Using
docker cp <containerId>:/file/path/within/container /host/path/target
you can copy data out of the container. If the data in the container and on your machine need to stay in sync constantly, I suggest you use a data volume to share a directory from your server with the container. This directory can then be shared from the server to your local machine by any method you like (e.g. sshfs).
The docker documentation about Manage data in containers shows how to add a volume:
$ docker run -d -P --name web -v /webapp training/webapp python app.py
The data from your server's training/webapp location will then be available in the docker container at /webapp.
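A sketch of the sync setup described above (all paths and the hostname are hypothetical):

```shell
# On the production server: share a host directory into the container.
docker run -d -P --name web -v /srv/webapp:/webapp training/webapp python app.py

# On your local machine: mount that same server directory over SSH.
mkdir -p ~/webapp-local
sshfs user@production-server:/srv/webapp ~/webapp-local
```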
the scenario: I have a host with a running docker daemon and a working docker client and socket. I have one docker container that was started from the host and has the docker socket mounted within it, along with the docker client binary from the host. So I'm able to issue docker commands at will from within this docker container using the aforementioned mechanism.
the need: I want to start another docker container from within this docker container; in other words, I want to start a sibling docker container from another sibling docker container.
the problem: A problem arises when I want to mount files that live on the host filesystem into the sibling container I want to spin up from the other sibling. It is a problem because when issuing docker run, the docker daemon reached through the mounted socket is really looking at the host filesystem. So I need access to host paths from within the docker container that is trying to start the sibling.
In other words, I need something along the lines of:
# running from within another docker container:
docker run --name another_sibling \
-v {DockerGetHostPath: path_to_host_file}:path_inside_the_sibling \
bash -c 'some_exciting_command'
Is there a way to achieve that? Thanks in advance.
Paths are always on the host, it doesn't matter that you are running the client remotely (or in a container).
Remember: the docker client is just a REST client, the "-v" is always about the daemon's file system.
There are multiple ways to achieve this.
You can always make sure that each container mounts the correct host directory
You can use --volumes-from, e.g.:
docker run -it --volumes-from=keen_sanderson --entrypoint=/bin/bash debian
--volumes-from Mount volumes from the specified container(s)
You can use named volumes (docker volume create), which the daemon manages and any container can mount by name
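For the sibling-container case, a named volume sidesteps host paths entirely: both siblings refer to the volume by name and the daemon resolves it. A sketch (the volume name and busybox image are arbitrary choices):

```shell
docker volume create shared_data

# First sibling writes into the volume:
docker run --rm -v shared_data:/data busybox sh -c 'echo hello > /data/msg'

# Second sibling, started from wherever the daemon is reachable, reads it:
docker run --rm -v shared_data:/data busybox cat /data/msg
```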
I am not looking to save or commit docker images or containers. This is a question on how to save files (e.g., to drive configs which can be volume mounted on docker containers).
I am looking for a way to save files on the docker host filesystem which will survive docker restarts e.g., boot2docker down + boot2docker up.
That would be using volumes (docker volume create/ls/...):
Those volumes can be mounted by containers (--volumes-from) and live inside disk.vmdk, the boot2docker VM disk stored on the host.
They are persisted in /var/lib/docker/volumes inside the VM, so they survive a boot2docker session (docker-machine stop/start).
If you really need to back them up directly on the host (outside the virtual disk), you can copy /var/lib/docker/volumes to /c/Users/... (or /Users/...).
A docker volume ls combined with docker volume inspect will list the folders within /var/lib/docker/volumes that you need to consider for backup.
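A sketch of that inspection step (the volume name is arbitrary):

```shell
docker volume create mydata
docker volume ls
# Print the backing directory to copy for backup; on a default Linux
# install this is /var/lib/docker/volumes/mydata/_data:
docker volume inspect --format '{{ .Mountpoint }}' mydata
```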
Is it possible to share a directory between docker instances to allow the different docker instances / containers running on the same server directly share access to some data?
You can mount the same host directory in both containers (docker run -v /host/shared:/mnt/shared ...), or use docker run --volumes-from=some_container to mount a volume from another container.
Yes, this is what "Docker volumes" are. See Managing Data in Containers:
Mount a Host Directory as a Data Volume
[...] you can also mount a directory from your own host into a container.
$ sudo docker run -d -P --name web -v /src/webapp:/opt/webapp training/webapp python app.py
This will mount the local directory, /src/webapp, into the container as the /opt/webapp directory.
[...]
Creating and mounting a Data Volume Container
If you have some persistent data that you want to share between containers, or want to use from non-persistent containers, it's best to create a named Data Volume Container, and then to mount the data from it.
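Following that pattern, a minimal sketch (the training/postgres image and /dbdata path mirror the docs' example and are assumptions here):

```shell
# Named data volume container; /bin/true exits immediately, since the
# container only needs to exist, not run:
docker create -v /dbdata --name dbstore training/postgres /bin/true

# Any number of containers can then mount its data:
docker run -d --volumes-from dbstore --name db1 training/postgres
docker run -d --volumes-from dbstore --name db2 training/postgres
```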
Why does docker have docker volumes and volume containers? What is the primary difference between them. I have read through the docker docs but couldn't really understand it well.
Docker volumes
You can use Docker volumes to create a new volume in your container and to mount it to a folder of your host. E.g. you could mount the folder /var/log of your Linux host to your container like this:
docker run -d -v /var/log:/opt/my/app/log:rw some/image
This would create a folder called /opt/my/app/log inside your container. And this folder will be /var/log on your Linux host. You could use this to persist data or share data between your containers.
Docker volume containers
Now, if you mount a host directory to your containers, you somehow break the nice isolation Docker provides. You will "pollute" your host with data from the containers. To prevent this, you could create a dedicated container to store your data. Docker calls this container a "Data Volume Container".
This container will have a volume which you want to share between containers, e.g.:
docker run -d -v /some/data/to/share --name MyDataContainer some/image
This container will run some application (e.g. a database) and has a folder called /some/data/to/share. You can share this folder with another container now:
docker run -d --volumes-from MyDataContainer some/image
This container will also see the same volume as in the previous command. You can share the volume between many containers as you could share a mounted folder of your host. But it will not pollute your host with data - everything is still encapsulated in isolated containers.
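One common use of such a data volume container is backing up its volume with a throwaway container; a sketch (names and paths follow the example above):

```shell
# Tar the shared volume into the current host directory via a
# temporary busybox container:
docker run --rm --volumes-from MyDataContainer \
  -v "$(pwd)":/backup busybox \
  tar cvf /backup/data.tar /some/data/to/share
```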
My resources
https://docs.docker.com/userguide/dockervolumes/