I have seen a Docker-in-Docker container for Ubuntu/Linux. As per the replies in this thread, the following command works:
docker run -v /var/run/docker.sock:/run/docker.sock -v $(which docker):/bin/docker [your image]
Are there any similar commands available for docker in Windows 7?
I am using the command below on Windows 10 to run Docker inside Docker. The Docker image is based on Alpine. Note that the path is //var/run/docker.sock (with a leading double slash).
docker run -it --rm --privileged --name dockerindocker -v //var/run/docker.sock:/var/run/docker.sock docker
/ # docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS               NAMES
02285c22006f        docker              "docker-entrypoint..."   3 seconds ago       Up 2 seconds                            dockerindocker
/ # cat /etc/alpine-release
3.6.2
Unfortunately Windows doesn't support true docker-in-docker yet.
All the answers here are about running a Docker client in a container that connects to the top-level Docker daemon on the host (the same Docker instance that runs the container you invoke docker from). That is not real Docker-in-Docker.
See the discussion here for more details: https://github.com/docker-library/docker/issues/49
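For contrast, here is a minimal sketch of the two approaches using the official docker images (container names are arbitrary, a Linux host is assumed, and disabling TLS on the inner daemon is for brevity only):

# 1) Client only: the container's docker CLI talks to the HOST daemon through the
#    mounted socket, so "docker ps" lists the host's containers.
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock docker docker ps

# 2) Real docker-in-docker: a separate daemon runs inside a privileged container,
#    with its own image cache and container list, independent of the host.
docker run -d --name dind --privileged -e DOCKER_TLS_CERTDIR="" docker:dind
docker exec dind docker ps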
I created a new build for my TeamCity pipeline. For the first time I am using the Docker build step. After I set everything up, I realized the build agent does not seem to be ready for it.
I understand that my agent is not ready for building with Docker, but nothing actually tells me how to fix that. I read the official guides, but there is no word on how to actually install Docker into my agent (if that is the way to solve the problem).
Can someone tell me what I have to do to get it to work?
EDIT
@Senior Pomidor helped me get one step closer. I added his first example to the docker run command:
docker run -it -e SERVER_URL="<url to TeamCity server>" \
--privileged -e DOCKER_IN_DOCKER=start \
jetbrains/teamcity-agent
After doing so, I got rid of the messages mentioned in the screenshot. My agent's configuration now has the following:
docker.server.osType linux
docker.server.version 18.06.1
docker.version 18.06.1
But TeamCity is still complaining with this message (see screenshot), which leaves me clueless again.
Final solution:
The issue described in EDIT2 below could be resolved by simply restarting the TeamCity server instance. The agent was actually able to run the build, but TeamCity could not recognize that without a restart.
EDIT2
Request Information:
My CI Server OS:
PRETTY_NAME="Debian GNU/Linux 9 (stretch)"
NAME="Debian GNU/Linux"
VERSION_ID="9"
VERSION="9 (stretch)"
Running Container:
CONTAINER ID        IMAGE                       COMMAND              CREATED             STATUS              PORTS                  NAMES
0f8e0b04d6a6        jetbrains/teamcity-agent    "/run-services.sh"   19 hours ago        Up 19 hours         9090/tcp               teamcity-agent
20964c22b2d9        jetbrains/teamcity-server   "/run-services.sh"   37 hours ago        Up 37 hours         0.0.0.0:80->8111/tcp   teamcity-server-instance
The containers were started with:
## Server
docker run -dit --name teamcity-server-instance -v /data/teamcity:/data/teamcity_server/datadir -v /var/log/teamcity:/opt/teamcity/logs -p 80:8111 jetbrains/teamcity-server
## Agent
docker run -itd --name teamcity-agent -e SERVER_URL="XXX.XXX.XXX.XXX:80" --privileged -e DOCKER_IN_DOCKER=start -v /etc/teamcity/agent/conf:/data/teamcity_agent/conf jetbrains/teamcity-agent
Build Step Information:
TC restricted the configuration because the TeamCity agent does not start the Docker daemon.
You should pass -e DOCKER_IN_DOCKER=start to automatically start the Docker daemon in the container. Also, the Docker daemon needs the Docker socket. In a Linux container, if you need a Docker daemon available inside your builds, you have two options:
--privileged flag: a new Docker daemon running within your container
-v docker_volumes:/var/lib/docker: Docker from the host (in this case you will benefit from the caches shared between the host and all your containers, but there is a security concern: your build may actually harm your host Docker, so use it at your own risk)
Examples:
docker run -it -e SERVER_URL="<url to TeamCity server>" \
--privileged -e DOCKER_IN_DOCKER=start \
jetbrains/teamcity-agent
docker run -it -e SERVER_URL="<url to TeamCity server>" \
-v /var/run/docker.sock:/var/run/docker.sock \
jetbrains/teamcity-agent
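An optional follow-up check once the agent container is up (the container name teamcity-agent matches the question's setup and is otherwise an assumption):

# From the host: confirm a Docker daemon is reachable from inside the agent
docker exec teamcity-agent docker info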
UPD
docker.server.osType is required because the build step sets the OS type to linux.
What worked for me was changing the permissions of /var/run/docker.sock inside the agent container.
Run a shell inside the container:
docker exec -u 0 -it <CONTAINER_ID> bash
Change permissions of the docker socket:
chmod 666 /var/run/docker.sock
Verify that the Docker client in the container can use the socket:
docker version
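The same steps as a one-shot sketch (the container name teamcity-agent is an assumption; note that the chmod is lost whenever the socket is recreated, e.g. after a daemon restart, and has to be reapplied):

docker exec -u 0 teamcity-agent chmod 666 /var/run/docker.sock
docker exec teamcity-agent docker version   # should now show both client and server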
I'm running https://hub.docker.com/r/jenkinsci/blueocean/ in Docker and am trying to build a Docker image in Jenkins,
but I get the following error:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Clearly the Jenkins instance running in Docker does not have access to the Docker binary. I confirmed this by running:
docker exec -it db4292380977 bash
docker images
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
"db4292380977" is the running container. It shows the same error.
Question:
how do I allow access to docker in the jenkins container?
The Docker client is installed in the jenkinsci/blueocean image, but the daemon is not. The Docker client talks to a daemon (by default via the socket unix:///var/run/docker.sock) and needs one in order to work; you can read Docker Architecture for more info.
What you can do:
Use docker-in-docker (DinD) image
The library docker image provides a way to run a Docker daemon in Docker, which you can then use from another container. For example, using the plain docker CLI:
docker run --name docker-dind --privileged -d docker:stable-dind
docker run --name jenkins --link=docker-dind -d jenkinsci/blueocean
docker exec jenkins docker -H docker-dind images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
The Docker daemon runs in the docker-dind container and can be reached using that hostname. You just need to point the Docker client at the daemon host (-H docker-dind in the example; you can also use the DOCKER_HOST environment variable, as described in the docs).
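As a hedged variant of the DOCKER_HOST remark, the same jenkins container can be started with the daemon address set once via the environment instead of passing -H on every call (this assumes the stable-dind daemon listens on plain TCP port 2375 without TLS, which was the default for that tag):

docker run --name jenkins --link=docker-dind \
  -e DOCKER_HOST=tcp://docker-dind:2375 -d jenkinsci/blueocean
docker exec jenkins docker images   # no -H needed, the env variable is inherited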
Mount the host machine's /var/run/docker.sock in your container
As described in @Herman Garcia's answer:
docker run -p 8080:8080 --user root \
-v /var/run/docker.sock:/var/run/docker.sock jenkinsci/blueocean
You need to mount your local /var/run/docker.sock and run the container as the root user.
NOTE: this might be a security flaw, so be careful who has access to the Jenkins container.
docker run -p 8080:8080 --user root \
-v /var/run/docker.sock:/var/run/docker.sock jenkinsci/blueocean
You will be able to execute docker inside the container:
➜ ~ docker exec -it gracious_agnesi bash
bash-4.4# docker ps
CONTAINER ID        IMAGE                 COMMAND                  CREATED             STATUS              PORTS                               NAMES
c4dc85b0d88c        jenkinsci/blueocean   "/sbin/tini -- /usr/…"   18 seconds ago      Up 16 seconds       0.0.0.0:8080->8080/tcp, 50000/tcp   gracious_agnesi
Just try the same commands with sudo at the beginning.
For example
sudo docker images
sudo docker exec -it db4292380977 bash
To avoid using sudo in the future, run this command on a Unix OS:
sudo usermod -aG docker <your-user>
Replace <your-user> with the user you are currently using. Remember to log out and back in for this to take effect! See the Docker installation documentation for more information.
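A quick way to confirm the change took effect after logging back in (the docker group is created by the standard Docker packages):

groups <your-user>   # should now include "docker"
docker images        # should work without sudo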
I have a GitLab CI job that is currently using DinD. The CI runs inside a docker container.
What I am trying to accomplish is:
The CI job docker container, using dind, runs a docker container with a volume.
docker run --name cvmfs --pid=host --user 0 --privileged --restart always -v /cvmfsmounts:/cvmfsmounts:rshared <our_registry>/vcs/cvmfs-automounter:master
The CI job docker container runs another docker container using the same volume.
docker run --rm -v /cvmfsmounts/cvmfs:/cvmfs:rslave busybox ls -lrt /cvmfs/atlas.cern.ch
This is trying to automount a volume on the second docker container. It works when not using dind.
The main issue is this:
Error response from daemon: linux mounts: path /cvmfsmounts is mounted on / but it is not a shared mount
Any idea what is wrong with it?
My CentOS version and Docker version (Docker installed via yum):
I get a common error when using docker inside the container.
My docker run command:
docker run -it -d -u root --name jenkins3 -v /var/run/docker.sock:/var/run/docker.sock -v $(which docker):/usr/bin/docker docker.io/jenkins/jenkins
but I get an error when I run docker info in the Jenkins container:
/usr/bin/docker: 2: .: Can't open /etc/sysconfig/docker
Exposing the host's docker socket to your jenkins container will work with
-v /var/run/docker.sock:/var/run/docker.sock
but you will need to have the docker executable installed in your jenkins image via a Dockerfile.
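For illustration, a minimal Dockerfile sketch that bakes a static Docker CLI into the Jenkins image (the Docker version and the static-binary URL layout are assumptions, and curl and tar are assumed to be available in the base image):

FROM docker.io/jenkins/jenkins
USER root
# download a static docker build and keep only the client binary
RUN curl -fsSL https://download.docker.com/linux/static/stable/x86_64/docker-24.0.7.tgz \
      | tar -xz -C /tmp \
    && mv /tmp/docker/docker /usr/local/bin/docker \
    && rm -rf /tmp/docker
USER jenkins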
It is likely the example you are looking at is already using the docker image. A quick Google search brings up https://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/, whose example uses the docker image (which already has the executable installed):
docker run -v /var/run/docker.sock:/var/run/docker.sock \
-ti docker
Also note from that same post your exact issue with mounting the binary:
Former versions of this post advised to bind-mount the docker binary from the host to the container. This is not reliable anymore, because the Docker Engine is no longer distributed as (almost) static libraries.
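Putting it together for the command from the question, a hedged sketch that keeps the socket mount and drops the binary mount (the tag my-jenkins-with-docker is hypothetical and assumed to be built from a Dockerfile like the sketch above):

docker run -it -d -u root --name jenkins3 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  my-jenkins-with-docker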
I am trying to build a new Docker image using the Docker-provided base Ubuntu image. I'll be using a Dockerfile to run a few scripts and install applications on the base image. However, my scripts require that the hostname stays the same. I couldn't find any information on hostnames for Docker images. Does anybody know whether the hostname stays the same once we add layers to a Docker image?
You can set the hostname with the -h argument to docker run; otherwise the container gets the short form of the container ID as its hostname:
$ docker run --rm -it debian bash
root@0d36e1b1ac93:/# exit
exit
$ docker run --rm -h myhost -it debian bash
root@myhost:/# exit
exit
As far as I know, you can't tell docker build to use a given hostname, but see Dockerfile HOSTNAME Instruction for docker build like docker run -h.
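If the scripts only need the fixed hostname when they run rather than while the image is being built, one hedged workaround is to bake the scripts into the image and set the hostname at run time (the image tag and script path here are hypothetical):

docker build -t myimage .
docker run --rm -h myhost myimage ./myscript.sh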