I read this article http://blog.docker.io/2013/09/docker-can-now-run-within-docker/ and I want to share images between my "host" docker and "child" docker. But when I run
sudo docker run -v /var/lib/docker:/var/lib/docker --privileged -t -i jpetazzo/dind
I can't connect to the "child" docker from the dind container.
root@5a0cbdc2b7df:/# docker version
Client version: 0.8.1
Go version (client): go1.2
Git commit (client): a1598d1
2014/03/13 18:37:49 Can't connect to docker daemon. Is 'docker -d' running on this host?
How can I share my local images between host and child docker?
You shouldn't do that! Docker assumes that it has exclusive access to /var/lib/docker, and if you (or another Docker instance) meddle with this directory, it could have unexpected results.
There are multiple solutions, depending on what you want to achieve.
If you want to be able to run Docker commands from within a container, but don't need a separate daemon, then you can share the Docker control socket with this container, e.g.:
docker run -v /var/run/docker.sock:/var/run/docker.sock \
-v /usr/bin/docker:/usr/bin/docker \
-t -i ubuntu bash
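Note that bind-mounting the host's docker binary can break if the client was dynamically linked against libraries the container image lacks; a minimal sketch that sidesteps this, assuming the official docker CLI image, would be:
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock \
    -t -i docker:cli sh
# inside the container, the client talks to the host daemon:
docker ps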
If you really want to run a different Docker daemon (e.g. because you're hacking on Docker, or want to run a different version) but want to access the same images, you could run a private registry in a container and use that registry to share images between Docker-in-the-Host and Docker-in-the-Container.
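For illustration, a rough sketch of that approach (the image names here are placeholders, and a plain-HTTP registry may require starting the inner daemon with --insecure-registry):
# on the host daemon: run a registry, then tag and push an image into it
docker run -d -p 5000:5000 --name registry registry:2
docker tag myimage:latest localhost:5000/myimage:latest
docker push localhost:5000/myimage:latest
# inside the dind container: pull the same image back from the host
# (172.17.0.1 is the usual host address on the default bridge; verify yours)
docker pull 172.17.0.1:5000/myimage:latest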
Don't hesitate to give more details about your use-case so we can tell you the most appropriate solution!
Related
How to create a local-registry container, that mounts a volume from the host machine and persist locally all the images that get pulled?
I want to avoid downloading images more than once, if not necessary, even after the registry (or the whole Docker VM) is thrown away and recreated.
This is useful when you have a slow connection or no connectivity. It would also allow mounting a backup with pre-downloaded images as a docker volume, skipping altogether the need for an internet connection.
This latter is already possible, but it would be more convenient than having to manually docker push/docker pull onto the local registry, or to docker save/docker load each image that needs to be available there.
It's a rephrasing of this, which wasn't reopened for lack of feedback. The main purpose is to make the answer available for search, but feel free to propose better solutions.
Here are the step-by-step instructions. Hopefully they will save time and make life easier for somebody else, travelling or living in disadvantaged areas of the world where internet connections can't reach the Docker world, because they are too limited or sometimes absent altogether!
Instructions are for macOS and Minikube, but they can be adapted for a VM running on Windows or via Docker Desktop.
(note: you will need to check whether your virtualization technology provides automount of the system user directory)
Configuration
First define your environment variables with the desired values. See the env vars in the code below (PROXIED_REGISTRY, REGISTRY_USER, REGISTRY_PASSWORD, PATH_WHERE_TO_PERSIST_IMAGES, etc.)
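For example, a hypothetical set of values (every one of them is a placeholder to adjust):
export PROXIED_REGISTRY="registry.example.com"      # upstream registry to proxy
export REGISTRY_USER="myuser"                       # credentials for the upstream registry
export REGISTRY_PASSWORD="mysecret"
export MACOS_USERNAME="jane"                        # macOS user owning the persistence path
export PATH_WHERE_TO_PERSIST_IMAGES="docker-backup" # relative to /Users/${MACOS_USERNAME}
export REPOSITORY="library" IMAGE="alpine" IMAGE_TAG="latest"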
On the host machine
Minikube
If using Minikube, first bind your shell to the Docker daemon on its VM:
eval $(minikube docker-env)
or run the commands directly from inside the VM, via minikube ssh.
Create local registry
(note: some envs might be unnecessary; check Docker docs to see what you need)
The -v option mounts onto the local registry the path where you want to persist the registry data (repository folders and image layers).
When you use Minikube, it will automatically mount the home folder from the host (/Users/, on macOS) onto the virtual machine where Docker runs.
docker run -d -p 5000:5000 \
-e STANDALONE=false \
-e "REGISTRY_LOG_LEVEL=debug" \
-e "REGISTRY_REDIRECT_DISABLE=true" \
-e MIRROR_SOURCE="https://${PROXIED_REGISTRY}" \
-e REGISTRY_PROXY_REMOTEURL="https://${PROXIED_REGISTRY}" \
-e REGISTRY_PROXY_USERNAME="${REGISTRY_USER}" \
-e REGISTRY_PROXY_PASSWORD="${REGISTRY_PASSWORD}" \
-v /Users/${MACOS_USERNAME}/${PATH_WHERE_TO_PERSIST_IMAGES}/docker/registry:/var/lib/registry \
--restart=always \
--name local-registry \
registry:2
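As a quick sanity check, you can ask the registry for its catalog (assuming it answers on localhost:5000):
curl http://localhost:5000/v2/_catalog
# a fresh registry returns: {"repositories":[]}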
Login to your local registry
echo -n "${REGISTRY_PASSWORD}" | docker login -u "${REGISTRY_USER}" --password-stdin "localhost:5000"
(optional) Verify that the persist directories are present
docker exec local-registry ls -la /var/lib/registry/docker/registry
ll /Users/${MACOS_USERNAME}/${PATH_WHERE_TO_PERSIST_IMAGES}/docker/registry/docker/registry
Try to pull one image from your private registry
(to see it proxied through the registry at localhost:5000)
docker pull localhost:5000/${REPOSITORY}/${IMAGE}:${IMAGE_TAG}
(optional) Verify that the image data has been synced onto the local host, where desired
docker exec local-registry ls -la /var/lib/registry/docker/registry
ll /Users/${MACOS_USERNAME}/${PATH_WHERE_TO_PERSIST_IMAGES}/docker/registry/docker/registry
If using Kubernetes
change the deployment spec's container image to:
localhost:5000/${REPOSITORY}/${IMAGE}:${IMAGE_TAG}
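Alternatively, a hedged one-liner with kubectl (the deployment and container names here are hypothetical):
kubectl set image deployment/my-app my-container=localhost:5000/${REPOSITORY}/${IMAGE}:${IMAGE_TAG}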
Et voilà!
You can now keep the images downloaded from your repository stored on your host machine!
If internet is available, the local registry will keep the most recent version of your pulled images, requesting it from the proxied registry (private, or the Docker Hub).
And you will have a last-resort backup to run your containers when your internet connection is too slow to re-download everything you need, or is unavailable altogether!
(really useful with Minikube, when you need to destroy your Docker virtual machine)
References:
https://docs.docker.com/registry/recipes/mirror/#run-a-registry-as-a-pull-through-cache
https://minikube.sigs.k8s.io/docs/handbook/mount/#driver-mounts
How to create a local-registry container that mounts a volume from the host machine and persist locally all the images that get pulled?
Local Docker registry with persisted images
It should be possible to have an ephemeral registry container (and its docker volume), allowing images not to be downloaded more than once, even after the registry (or the whole Docker VM) is thrown away and recreated.
This would allow pulling the images just once, having them available when internet connectivity isn't good (or available at all); it would also allow mounting a docker volume with pre-downloaded images.
It would be more convenient than having to manually docker push/docker pull onto the local registry, or to docker save/docker load each image that needs to be available there.
Notes:
destination of the mount should probably be /var/lib/registry/docker/registry.
it is possible to configure a local Docker registry as a pull-through cache.
my specific setup runs Docker via Minikube, on macOS; but the answer doesn't have to be specific to it.
I managed it; the step-by-step instructions are the ones given above. Hopefully they will make life easier for somebody else!
I'm looking for a way to pull the latest image in vanilla Docker after a container crashed/exited.
In my current architecture I don't have access to the Docker Engine API, only to the container itself, so I want the container to be updated from the latest image after the service exits.
The Docker way to upgrade containers seems to be the following:
docker pull mysql
docker stop my-mysql-container
docker rm my-mysql-container
docker run --name=my-mysql-container --restart=always \
-e MYSQL_ROOT_PASSWORD=mypwd -v /my/data/dir:/var/lib/mysql -d mysql
But that's based on the Docker Engine CLI, and as I explained before, that's not an approach I want to take.
Is there a way to configure Docker so that the container pulls the latest image from the repository again upon restart/crash?
What you are asking for seems possible using docker service update, for which you will need Docker Swarm. With plain Docker installed on a single VM, it doesn't seem feasible.
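A minimal sketch, assuming a swarm service named my-service already exists (the image name is a placeholder):
# re-resolve the tag and roll the service onto the newest image
docker service update --image myrepo/myimage:latest my-service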
Hope this helps.
On *nix systems, it is possible to bind-mount the docker socket from the host machine to the VM by doing something like this:
docker run -v /var/run/docker.sock:/var/run/docker.sock ...
Is there an equivalent way to do this when running docker on a windows host?
I tried various combinations like:
docker run -v tcp://127.0.0.1:2376:/var/run/docker.sock ...
docker run -v "tcp://127.0.0.1:2376":/var/run/docker.sock ...
docker run -v localhost:2376:/var/run/docker.sock ...
none of these have worked.
For Docker for Windows, the following seems to work:
-v //var/run/docker.sock:/var/run/docker.sock
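A quick way to verify it, assuming Docker Desktop with Linux containers and the official docker CLI image:
docker run --rm -v //var/run/docker.sock:/var/run/docker.sock docker:cli docker version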
As the Docker documentation states:
If you are using Docker Machine on Mac or Windows, your Engine daemon has only limited access to your OS X or Windows filesystem. Docker Machine tries to auto-share your /Users (OS X) or C:\Users (Windows) directory. So, you can mount files or directories on OS X using:
docker run -v /Users/<path>:/<container path> ...
On Windows, mount directories using:
docker run -v /c/Users/<path>:/<container path> ...
All other paths come from your virtual machine's filesystem, so if you want to make some other host folder available for sharing, you need to do additional work. In the case of VirtualBox you need to make the host folder available as a shared folder in VirtualBox. Then, you can mount it using the Docker -v flag.
With all that being said, you can still use:
docker run -v /var/run/docker.sock:/var/run/docker.sock ...
The first /var/run/docker.sock refers to the same path in your boot2docker virtual machine.
For example, when I run my own Jenkins image using the following command on a Windows machine:
$ docker run -dP -v /var/run/docker.sock:/var/run/docker.sock alidehghanig/jenkins
I can still talk to the Docker Daemon in the host machine using the typical docker commands. For example, when I run docker ps in the Jenkins container, I can see running containers in the host machine:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
65311731f446 jen... "/bi.." 10... Up 10.. 0.0.0.0:.. jenkins
Just to top off the answers provided earlier:
When using docker-compose, one must set COMPOSE_CONVERT_WINDOWS_PATHS=1 by either:
1) creating a .env file at the same location as the project's docker-compose.yml file, or
2) setting COMPOSE_CONVERT_WINDOWS_PATHS=1 in the CLI
before running the docker-compose up command (a minimal sketch of option 1 follows below).
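A minimal sketch of option 1 (the file carries the single variable assignment):
# create the .env file next to docker-compose.yml, then bring the stack up
echo "COMPOSE_CONVERT_WINDOWS_PATHS=1" > .env
docker-compose up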
This never worked for me on Windows 10, even with a Linux container:
-v /var/run/docker.sock:/var/run/docker.sock
But this did:
-v /usr/local/bin/docker:/usr/bin/docker
Solution taken from this issue I opened: https://github.com/docker/for-win/issues/4642
Some containers (e.g. portainer) work fine with -v /var/run/docker.sock:/var/run/docker.sock
The Jenkins container required --user root permission on the docker run command to successfully access the Docker UNIX socket (using Docker Desktop on Windows).
By default, a unix domain socket (or IPC socket) is created at
/var/run/docker.sock, requiring either root permission, or docker
group membership.
Source: https://docs.docker.com/engine/reference/commandline/dockerd/#daemon-socket-option
--group-add docker had no effect using Docker Desktop on Windows.
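For illustration, a sketch of such a run command for Jenkins under Docker Desktop on Windows (the port mapping is illustrative):
docker run -u root -p 8080:8080 \
    -v //var/run/docker.sock:/var/run/docker.sock \
    jenkins/jenkins:lts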
To bind to a Windows container you need to use pipes.
-v \\.\pipe\docker_engine:\\.\pipe\docker_engine
What worked for me on Windows 10 was:
-v "\\.\pipe\docker_engine:\\.\pipe\docker_engine"
Keep in mind that I was trying to access Portainer, which I highly recommend; it's a great app. For that I use this command:
docker run -d -p 9000:9000 -v "\\.\pipe\docker_engine:\\.\pipe\docker_engine" portainer/portainer
And then just go to:
http://localhost:9000/
I never made it work myself, but I know it works on Windows containers, on Docker for Windows Server 2016, using this technique:
https://docs.docker.com/engine/reference/commandline/dockerd/#daemon-socket-option
At the shop we actually have VSTS agents on Windows containers that use the host Docker like that:
# listen using the default unix socket, and on 2 specific IP addresses on this host.
$ sudo dockerd -H unix:///var/run/docker.sock -H tcp://192.168.59.106 -H tcp://10.10.10.2
# then you can execute remote docker commands (from container to host for example)
$ docker -H tcp://0.0.0.0:2375 ps
This is what actually made it work for me:
docker run -p 8080:8080 -p 50000:50000 -v D:\docker-data\jenkins:/var/jenkins_home -v /usr/local/bin/docker:/usr/bin/docker -v /var/run/docker.sock:/var/run/docker.sock -u root jenkins/jenkins:lts
It works well:
docker run -it -v //var/run/docker.sock:/var/run/docker.sock -v /usr/local/bin/docker:/usr/bin/docker ubuntu
I'd like to dockerize my Strongloop Loopback-based Node server and start using a Process Manager (PM) to keep it running.
I've been using RancherOS on AWS which rocks.
I copied (but didn't add anything to) the following Dockerfile as a template for my own Dockerfile:
https://hub.docker.com/r/strongloop/strong-pm/~/dockerfile/
I then:
docker build -t somename .
(Dockerfile is in .)
It now appears in:
docker images
But when I try to start it, it exits right away:
docker run --detach --restart=no --publish 8701:8701 --publish 3001:3001 --publish 3002:3002 --publish 3003:3003 somename
AND if I run the strong-pm image, after opening ports on AWS, it works as above with strongloop/strong-pm instead of somename
(I can browse aws-instance:8701/explorer)
Also, these instructions to deploy my app https://strongloop.com/strongblog/run-create-node-js-process-manager-docker-images/ require:
slc deploy http://docker-host:8701/
but RancherOS doesn't come with npm (or curl) installed, and when I bash into the VM, slc isn't installed either, so it seems like slc needs to run "outside" the VM
docker exec -it fb94ddab6baa bash
If you're still reading, nice. I think what I'm trying to do is add a Dockerfile to my git repo that will deploy my app server (including pulling code from repos) on any Docker box.
The workflow for the strongloop/strong-pm docker image assumes you are deploying to it from a workstation. The footprint of npm install -g strongloop is significantly larger than strong-pm alone, which is why the docker image has only strong-pm installed in it.
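For illustration, a sketch of that workstation-side workflow, assuming Node.js is installed locally and the strong-pm container is reachable at docker-host:8701 (the host name is a placeholder):
# on the workstation, not inside the container
npm install -g strongloop
# from the app's directory: build, then deploy to the remote process manager
slc build
slc deploy http://docker-host:8701/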