I start several Docker containers through docker-compose up -d, and they all have the restart: always flag enabled.
When I reboot the host machine (Windows 10 Pro), the containers restart, but it appears the volumes are not mounted. I know this because I keep the config files on the mounted volume, and the services bring up their initial setup screens when I go to their webpages; Plex, for example, walks through setting up a new server. When I run docker-compose stop and then docker-compose up -d, they have the volumes mounted and are using the correct config files. Is there a difference between how the containers get started on a reboot versus docker-compose up?
For host-mounted volumes, it's a known problem that the volumes are not available immediately after restart: https://github.com/docker/for-win/issues/584#issuecomment-286792858
I recommend using non-host mounted volumes or other workarounds.
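As a sketch of that workaround, a named volume can replace the host path; here the service and image are taken from the Plex example in the question, and the volume name plex-config is made up:

```yaml
# docker-compose.yml sketch: a named volume instead of a host-path mount
services:
  plex:
    image: plexinc/pms-docker   # assumed image name; use whichever you run
    restart: always
    volumes:
      - plex-config:/config     # named volume, managed by Docker

volumes:
  plex-config: {}               # created automatically on first "up"
```

Because the named volume lives inside Docker's own storage rather than on the Windows host filesystem, it is available as soon as the daemon starts.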
Related
I am using a multi-container Docker application on an EC2 Linux instance.
I have started it with: docker-compose -p myapplication up -d
I also have mounted my EFS under (in my EC2 host machine): /mnt/efs/fs1/
Everything is working fine at that point.
Now I need to access this EFS from one of my docker containers.
So I guess I have to add a volume to one of my containers, mapping /mnt/efs/fs1/ (on the host) to /mydestinationpath (in the container).
I can see my running containers IDs and images with: docker container ls
How can I attach the volume to my container?
Edit the docker-compose.yml file to add the volumes: entries you need, and re-run the same docker-compose up -d command. Compose will notice that those specific services' configurations have changed, and will delete and recreate just those containers.
Most of the configuration for a Docker container (image name and tag, environment variables, published ports, volume mounts, ...) can only be specified when the container is first created. At the same time, the general expectation is that there's nothing important in a container filesystem. So it's extremely routine to delete and recreate a container to change options like this, and Compose can automate it for you.
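A minimal sketch of such a change, assuming a service named app (the service name and the container path /mydestinationpath are placeholders from the question):

```yaml
# docker-compose.yml sketch: bind-mount the host's EFS path into one service
services:
  app:
    # ...existing image, ports, etc. stay unchanged...
    volumes:
      - /mnt/efs/fs1:/mydestinationpath   # host path : container path
```

After saving, re-running docker-compose -p myapplication up -d recreates only the changed service.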
I put the Docker host on one machine and the client on another. Then I try to run something like this from machine 2 (the client):
docker -H tcp://machine1:port run -v ./dummy-folder:/dummy-folder alpine sh
Is that dummy-folder going to work through TCP connection?
Is the same valid for docker-compose too?
Is the same valid for Docker swarm mode?
The volume mount happens locally on the Docker host where the container runs; there is no path over the TCP connection for the volume mount (there is a build-time packaging of the build context, which is sent from the client to the server). Swarm is unchanged: if you mount a volume, it will mount on whatever host the container happens to run on.
If you can't replicate your data across the hosts, then you'll want to use a volume mount over the network to a shared storage location, or use a volume driver that does the replication for you (e.g. nfs, Infinit, glusterfs, flocker).
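For the NFS case, Compose's built-in local volume driver can mount the share directly on whichever host runs the container; a sketch, where the server address and export path are hypothetical:

```yaml
# docker-compose.yml sketch: a named volume backed by an NFS share
volumes:
  shared-data:
    driver: local
    driver_opts:
      type: nfs
      o: "addr=nfs.example.com,rw"   # hypothetical NFS server
      device: ":/exported/path"      # hypothetical exported directory

services:
  app:
    image: alpine
    volumes:
      - shared-data:/dummy-folder    # same data on whichever host runs it
```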
I'm using a docker-compose.yml file to set up Docker containers, and I have started the services using docker-compose up -d.
Now every time I deploy the application to the server I need to restart one of the services.
Previously I used to run the container without docker-compose using just the docker run command like this: docker run --name test-mvn -v "$(pwd)":/usr/src/app test/mvn-spring-boot -d.
And to restart the container I used to do docker restart test-mvn.
But now there are two options out there docker-compose restart and docker restart. I'm not sure which one I should prefer.
I want to know what is the difference between these two options and which one I should use in my case.
With docker-compose you manage services, which typically consist of multiple containers, while docker manages individual containers. Thus docker-compose restart will restart all the containers of your services, and docker restart only the given container.
Assuming "one of the services" in your question refers to an individual container I would suggest docker restart.
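Both forms can target a single container; a sketch, where the service name web and the project name myproject are made up:

```shell
# Restart one service through Compose (only that service's containers):
docker-compose restart web

# Equivalent effect with plain docker, using the name Compose generated:
docker restart myproject_web_1
```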
I am using docker toolbox on Mac. The setup looks like:
docker host - Boot2Docker VirtualBox VM running on Mac
docker client - Mac
I am using the following command to run a container with a volume mount: docker run -it -v $PWD/dir_on_docker_client:/dir_inside_container ubuntu:14.04 /bin/bash. I wonder: how is Docker able to mount a volume from the Docker client (in this case, the Mac) into a container running on the Docker host (in this case, the VM running on the Mac)?
The toolbox VM includes a shared directory from the client. /c/Users (C:\Users) on Windows and /Users on Mac.
Directories in these folders, on the client, can be added as volumes in a container.
Note, though, that if you add for example /tmp as a volume, it will be /tmp inside the Toolbox VM, not on the Mac.
The main problem is that VirtualBox shares only your home folder with the docker machine, so at the moment you can only share content inside this directory. It's inconvenient, but the only way I found to resolve this problem is with the bootlocal.sh file: you can write this file inside your docker-machine to mount an additional directory after boot.
https://github.com/boot2docker/boot2docker/blob/master/doc/FAQ.md#local-customisation-with-persistent-partition
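A sketch of such a bootlocal.sh, assuming a VirtualBox shared folder named my-share has already been configured for the VM (the share name and mount point are hypothetical):

```shell
#!/bin/sh
# /var/lib/boot2docker/bootlocal.sh - runs inside the boot2docker VM at boot
mkdir -p /mnt/my-share
# "my-share" must match the shared-folder name defined in VirtualBox
mount -t vboxsf -o defaults my-share /mnt/my-share
```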
Yesterday at DockerCon they announced a public beta of "Docker for Mac". I think you can replace docker-machine with this tool; it provides the best experience with Docker on macOS, and it resolves this problem:
https://www.docker.com/products/docker
Why does docker have docker volumes and volume containers? What is the primary difference between them. I have read through the docker docs but couldn't really understand it well.
Docker volumes
You can use Docker volumes to mount a folder of your host into your container. E.g. you could mount the folder /var/log of your Linux host into your container like this:
docker run -d -v /var/log:/opt/my/app/log:rw some/image
This would create a folder called /opt/my/app/log inside your container, backed by /var/log on your Linux host. You can use this to persist data or to share data between your containers.
Docker volume containers
Now, if you mount a host directory into your containers, you somewhat break the nice isolation Docker provides: you "pollute" your host with data from the containers. To prevent this, you can create a dedicated container to store your data. Docker calls this a "data volume container".
This container will have a volume which you want to share between containers, e.g.:
docker run -d -v /some/data/to/share --name MyDataContainer some/image
This container will run some application (e.g. a database) and has a folder called /some/data/to/share. You can share this folder with another container now:
docker run -d --volumes-from MyDataContainer some/image
This container will see the same volume as in the previous command. You can share the volume among many containers, just as you could share a mounted folder of your host, but it will not pollute your host with data - everything stays encapsulated in isolated containers.
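Putting the two commands together: the data outlives any single consumer container, as long as a container still references the volume. This sketch reuses the names from the examples above:

```shell
docker run -d -v /some/data/to/share --name MyDataContainer some/image
docker run -d --volumes-from MyDataContainer --name Consumer some/image
docker rm -f Consumer   # the volume and its data remain with MyDataContainer
docker run --rm --volumes-from MyDataContainer alpine ls /some/data/to/share
```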
My resources
https://docs.docker.com/userguide/dockervolumes/