docker inside docker container - docker

I want to install docker inside a running docker container.
docker run -it centos:centos7
My base container uses CentOS. I can log in to the running container using docker exec, and when I install Docker inside it with yum install -y docker, the installation succeeds.
But somehow I can't start the Docker daemon with docker -d &; it gives me this error:
INFO[0000] Option DefaultNetwork: bridge
WARN[0000] Running modprobe bridge nf_nat br_netfilter failed with message: , error: exit status 1
FATA[0000] Error starting daemon: Error initializing network controller: Error initializing bridge driver: Setup IP forwarding failed: open /proc/sys/net/ipv4/ip_forward: read-only file system
Is there a way I can install Docker inside a Docker container, or build an image that already has Docker running? I have already seen these examples, but none of them works for me.
The output of uname -r on the host machine:
[fedora# ~]$ uname -r
4.2.6-200.fc22.x86_64
Any help would be appreciated.
Thanks in advance

Update
Thanks to https://stackoverflow.com/a/38016704/372019 I want to show another approach.
Instead of mounting the host's docker binary, you should copy or install a container-specific release of the docker binary. Since you're only using it in client mode, you don't need to install it as a system service. You still need to mount the Docker socket into the container so that you can easily communicate with the host's Docker engine.
Assuming that you got a base image with a working Docker binary (e.g. the official docker image), the example now looks like this:
docker run \
    -v /var/run/docker.sock:/var/run/docker.sock \
    docker:1.12 docker info
Without actually answering your question, I'd suggest you read Using Docker-in-Docker for your CI or testing environment? Think twice.
It explains why running docker-in-docker should be replaced with a setup where Docker containers run as siblings of the "outer" or "base" container. The article also links to the original https://github.com/jpetazzo/dind project, where you can find working examples of how to run Docker in Docker, in case you still want docker-in-docker.
An example of how to enable a container to access the host's Docker daemon looks like this:
docker run \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v /usr/bin/docker:/usr/bin/docker \
    busybox:latest /usr/bin/docker info

If you are on a Mac with Docker Toolbox, the command below WON'T WORK:
docker run \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v /usr/bin/docker:/usr/bin/docker \
    busybox:latest /usr/bin/docker info
Because /var/run/docker.sock will not be on your OS X filesystem;
the Docker daemon is running inside the boot2docker VM, and that's where the Unix socket is.
So you have to run the container from the boot2docker VM:
$ docker-machine ssh default
$ docker run \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v $(which docker):/usr/bin/docker \
    busybox:latest /usr/bin/docker info
$ exit
This looks like Docker-in-Docker and feels like Docker-in-Docker, but it's not Docker-in-Docker: when this container creates more containers, they are created by the top-level Docker daemon.
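A quick way to see the sibling behaviour (a minimal sketch, reusing the socket and binary mounts from above): list the containers from inside and compare with docker ps on the host; the two lists are the same.
$ docker run \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v $(which docker):/usr/bin/docker \
    busybox:latest /usr/bin/docker ps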

You need the --privileged parameter.
By default, Docker containers are “unprivileged” and cannot, for example, run a Docker daemon inside a Docker container.
Source
Run your base image with the command docker run --privileged -it centos:centos7 bash. Then you can install Docker inside that container and run containers from it.
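A rough end-to-end sequence might look like this (a sketch only; the exact daemon command depends on the Docker version packaged for CentOS 7, and the daemon may still need extra storage-driver configuration):
docker run --privileged -it centos:centos7 bash   # on the host
# inside the container:
yum install -y docker
dockerd &            # older packages use `docker daemon &` or `docker -d &`
docker run hello-world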

I had a similar problem in my VMs.
I solved it by changing the Docker storage driver to vfs in the daemon.json file (the original answer showed this as a screenshot).
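The screenshot isn't reproduced here; a daemon.json that switches the storage driver to vfs would look roughly like this (a sketch, your file may carry additional settings):
{
  "storage-driver": "vfs"
}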
To build this as an image, first create a base image; in my case with CentOS 7:
FROM centos:7
ENV container docker
RUN (cd /lib/systemd/system/sysinit.target.wants/; for i in *; do [ $i == \
systemd-tmpfiles-setup.service ] || rm -f $i; done); \
rm -f /lib/systemd/system/multi-user.target.wants/*;\
rm -f /etc/systemd/system/*.wants/*;\
rm -f /lib/systemd/system/local-fs.target.wants/*; \
rm -f /lib/systemd/system/sockets.target.wants/*udev*; \
rm -f /lib/systemd/system/sockets.target.wants/*initctl*; \
rm -f /lib/systemd/system/basic.target.wants/*;\
rm -f /lib/systemd/system/anaconda.target.wants/*;
VOLUME [ "/sys/fs/cgroup" ]
CMD ["/usr/sbin/init"]
With this image built (in my case I called it local/c7-systemd), create a second image that installs Docker and copies daemon.json into it:
FROM local/c7-systemd
RUN yum install -y yum-utils
RUN yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
RUN yum install -y docker-ce docker-ce-cli containerd.io
RUN curl -L "https://github.com/docker/compose/releases/download/1.28.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
RUN chmod +x /usr/local/bin/docker-compose
RUN ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose
COPY daemon.json /etc/docker/daemon.json
RUN yum install -y nano
RUN systemctl enable docker
EXPOSE 80
EXPOSE 8080
EXPOSE 8161
EXPOSE 6379
EXPOSE 8761
CMD ["/usr/sbin/init"]
enjoy!

Related

Docker file not found error inside the container to create a new image

I need to create a container from which I'm able to create new images.
My first guess was to run Docker in Docker, but I found that the right way to do this is to use the --privileged argument so the container has access to the Docker daemon.
For this I'm running the following command:
docker run --privileged -v /var/run/docker.sock:/var/run/docker.sock -v /home/user/container_data:/app/app -d -p 5100:5100 mcf2:latest
I'm using -v /home/user/container_data:/app/app because I'm creating the folders for the new images from templates for Flask apps and saving them in that directory.
One of the files I create from the templates is create_image.sh, which contains the docker build statement, e.g.:
docker build -t new_container:latest .
For that I'm running the following code inside the running container:
import subprocess

bash_path = 'app/classification_model/create_image.sh'
subprocess.call([bash_path], shell=True)
But I always get this error:
/bin/sh: 1: app/model/create_image.sh: docker: not found
But the file does exist; if I do ls in the container, app/ is in the list of folders.
I have also checked the bind directory, and /home/user/container_data/classification_model/create_image.sh does exist.
I have tried changing bash_path to
bash_path= '/app/classification_model/create_image.sh'
and
bash_path= '/app/app/classification_model/create_image.sh'
But I get the same error in all cases.
EDIT:
I have changed the Docker file to:
FROM docker:dind
FROM ubuntu:18.04
RUN apt-get update -y && \
apt-get install -y python3-pip python3-dev
...
...
And run again:
docker run --privileged -v /var/run/docker.sock:/var/run/docker.sock -v /home/user/container_data:/app/app -d -p 5100:5100 mcf2:latest
I'm still getting the same error:
/bin/sh: 1: docker: not found
You are mixing up two things:
Docker in Docker
Docker in Docker with the host's Docker socket
In both cases, the Docker client must be installed in the container; mounting -v /var/run/docker.sock:/var/run/docker.sock alone does not mean that any container will be able to run docker commands.
In the first option, containers are started as child containers.
In the second option, the container has access to the Docker socket and can therefore start containers. Except that instead of starting “child” containers, it will start “sibling” containers.
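For the second option with your ubuntu:18.04 base, one way to get the docker client into the image is the convenience script (a sketch; installing docker-ce-cli from Docker's apt repository works as well):
FROM ubuntu:18.04
RUN apt-get update -y && \
    apt-get install -y python3-pip python3-dev curl
# install the Docker client so `docker build` works once the host socket is mounted
RUN curl -fsSL https://get.docker.com | sh
Run it as before with -v /var/run/docker.sock:/var/run/docker.sock so the client inside talks to the host's daemon.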
Update:
The official Docker dind image is Alpine-based, so you can install packages using apk instead of apt:
FROM docker:dind
RUN apk add --no-cache python3 python3-dev
https://pkgs.alpinelinux.org/packages

How can I call docker daemon of the host-machine from a container?

Here is exactly what I need. I already have a project which is starting up a particular set of docker images and it works completely fine.
But I want to create another image, specifically to build this project from scratch with all of its dependencies inside. The problem is that, to build Docker images, the building container needs to access the Docker daemon running on the host machine.
Is there any way of doing this?
If you need to access docker on the host from inside a container, you can simply expose the Docker socket inside the container using a host mount (-v /host/path:/container/path on the docker run command line).
For example, if I start a new fedora container exposing the docker socket on my host:
$ docker run -it -v /var/run/docker.sock:/var/run/docker.sock fedora bash
Then install docker inside the container:
[root@d28650013548 /]# yum -y install docker
...many lines elided...
I can now talk to docker on my host:
[root@d28650013548 /]# docker info
Containers: 6
Running: 1
Paused: 0
Stopped: 5
Images: 530
Server Version: 17.05.0-ce
...
You can give the container access to the host's Docker daemon through the Docker socket and "trick" it into having the docker executable without installing Docker inside it, like this (with an Ubuntu Xenial container as the example):
docker run --name dockerInsideContainer -ti -v /var/run/docker.sock:/var/run/docker.sock -v $(which docker):/usr/bin/docker ubuntu:xenial
Inside it, you can run any docker command, for example docker images, to check that it's working.
If you see an error like docker: error while loading shared libraries: libltdl.so.7: cannot open shared object file: No such file or directory, you should install a package called libltdl7 inside the container. You can either add it to a Dockerfile for the container or install it directly at run time:
FROM ubuntu:xenial
RUN apt update && apt install -y libltdl7
or
docker run --name dockerInsideContainer -ti -v /var/run/docker.sock:/var/run/docker.sock -v $(which docker):/usr/bin/docker ubuntu:xenial bash -c "apt update && apt install libltdl7 && bash"
Hope it helps

Start ssh using systemctl inside the docker container

I'm a beginner with Docker.
I have pulled a CentOS 7 image from Docker Hub and run it.
I need to SSH into the Docker container (CentOS 7) from my host.
I got the container's IP using docker inspect container-id.
I have installed the following:
initscripts
systemd.x86_64
systemd-libs.x86_64
open-ssh
firewalld
net-tools
When I tried to start the firewall to open the SSH port (22):
[root@a6f3e3eb095c ~]# systemctl start firewall
Failed to get D-Bus connection: Operation not permitted
Also tried,
[root@a6f3e3eb095c ~]# /usr/lib/systemd/systemd --system &
[1] 353
[root@a6f3e3eb095c ~]# systemd 219 running in system mode. (+PAM +AUDIT +SELINUX +IMA -APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ -LZ4 -SECCOMP +BLKID +ELFUTILS +KMOD +IDN)
Detected virtualization xen.
Detected architecture x86-64.
Welcome to CentOS Linux 7 (Core)!
Set hostname to <a6f3e3eb095c>.
Cannot determine cgroup we are running in: No such file or directory
Failed to allocate manager object: No such file or directory
[1]+ Exit 1 /usr/lib/systemd/systemd --system
How to start the firewall/ssh inside the docker container ?
Inside the Docker container, run the following commands:
yum update -y glibc-common
yum install -y sudo passwd openssh-server openssh-clients tar screen crontabs strace telnet perl libpcap bc patch ntp dnsmasq unzip pax which
rpm -Uvh http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
yum install -y hiera lsyncd sshpass rng-tools
service sshd start;
sed -i 's/UsePAM yes/#UsePAM yes/g' /etc/ssh/sshd_config;
sed -i 's/#UsePAM no/UsePAM no/g' /etc/ssh/sshd_config;
sed -i 's/#PermitRootLogin yes/PermitRootLogin yes/' /etc/ssh/sshd_config;
sed -i 's/enabled=0/enabled=1/' /etc/yum.repos.d/CentOS-Base.repo
mkdir -p /root/.ssh/;
rm -f /var/lib/rpm/.rpm.lock;
echo "StrictHostKeyChecking=no" > /root/.ssh/config;
echo "UserKnownHostsFile=/dev/null" >> /root/.ssh/config
echo "root:password" | chpasswd
Alternatively, you can simply pull a CentOS image with SSH already set up from Docker Hub:
https://hub.docker.com/search/?isAutomated=0&isOfficial=0&page=1&pullCount=0&q=centos+ssh&starCount=0
https://hub.docker.com/r/kinogmt/centos-ssh/
https://hub.docker.com/r/jdeathe/centos-ssh/
You can avoid the "Failed to get D-Bus connection: Operation not permitted" error, i.e. installing systemd inside a Docker container, by using https://github.com/gdraheim/docker-systemctl-replacement. After that, doing things inside the container via docker exec should work fine.
If you really do need an ssh or sftp container, then you can use my Docker Image as a source image for your own or run it directly:
If using the official CentOS-7 Image and you require systemd, there are instructions on how to enable it under the section "Systemd integration".
However, based on the following:
I need to ssh in to the docker container(CentOS 7) from my host.
You can use docker exec to run commands in a running (backgrounded) container, so for images that have bash available you can get an interactive tty and run bash as follows from your host, where <container> can be either the name or the id:
docker exec --tty --interactive <container> bash
OR
docker exec -ti <container> bash
Finally, it's unlikely to be necessary to install the firewall package in your image: the operator decides which of the exposed ports to publish, and you can use Docker networking to expose only the necessary public-facing services.
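For example, publishing only the SSH port when the image exposes it (a sketch with a placeholder image name):
docker run -d -p 2222:22 --name ssh-test my-centos-sshd
ssh -p 2222 root@localhost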
If you are using the Docker CLI, then you can get into the Docker container using the following command
docker exec -it containerId bash
I am not sure how to SSH into the Docker container, but if you want to do basic operations inside it, you can make use of the above docker command.

How to install docker in docker container?

This is my Dockerfile:
FROM golang
# RUN cat /etc/*release
RUN apt-get update
RUN apt-get -y install apt-transport-https ca-certificates curl gnupg2 software-properties-common
RUN curl -fsSL https://download.docker.com/linux/debian/gpg | apt-key add -
RUN add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/debian $(lsb_release -cs) stable"
RUN apt-get update
RUN apt-get -y install docker-ce
RUN docker run hello-world
The golang Dockerfile is the official one; it is based on
Debian GNU/Linux 8 (jessie)
so I wrote this Dockerfile by following the install steps from the Docker install tutorial for Debian.
But the output is
Step 8/8 : RUN docker run hello-world
---> Running in b183b8cc5d10
docker: Cannot connect to the Docker daemon at
unix:///var/run/docker.sock. Is the docker daemon running?.
See 'docker run --help'.
How can I solve this problem? I want to run Docker containers inside the host Docker container.
I had a similar problem trying to install Docker inside a Bamboo Server image. To solve it, first remove the line RUN docker run hello-world from your Dockerfile.
The simplest way is to just expose the Docker socket, by bind-mounting it with the -v flag or mounting a volume using Docker Compose:
docker run -v /var/run/docker.sock:/var/run/docker.sock ...
Use Docker-in-Docker for this task. They have already solved many of the problems for you.
In your Dockerfile, add this line to install Docker:
RUN curl -fsSL https://get.docker.com | sh
After the build is done, when running your container, add a volume mapping to the host's Docker socket with the -v switch, e.g.:
docker run --rm -it -v /var/run/docker.sock:/var/run/docker.sock my-container
Then, from within the container shell, check the connection by running:
# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
8bf420851572 my-image "bash" 8 minutes ago Up 8 minutes my-container
The easiest way is to use the official Docker-in-Docker images from https://hub.docker.com/_/docker/ with the :dind tag (which is the successor of the project Hendrikvh already mentioned).
You definitely need to use the --privileged flag as well:
docker run --privileged --name yourDockerContainerNameHere -d docker:dind
With that, your Docker-in-Docker experiments should work - but be aware of the many stumbling blocks that could be in your way: https://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/
# create a container in privileged mode
sudo docker container run -it --name uob_20.04 --privileged=true <dockerhub-image> /bin/bash
# give access to the Docker socket and start the daemon
sudo chmod ugo+rw /var/run/docker.sock
sudo nohup dockerd > /dev/null 2>&1 &
# check the docker installation
docker images
Try starting the Docker service before executing any docker command.
Add this line:
RUN service docker start
to your Dockerfile, above this line:
RUN docker run hello-world

How to 'avahi-browse' from a docker container?

I'm running a container based on ubuntu:14.04, and I need to be able to use avahi-browse inside it. However:
(.env)root@8faa2c44e53e:/opt/cluster-manager# avahi-browse -a
Failed to create client object: Daemon not running
(.env)root@8faa2c44e53e:/opt/cluster-manager# service avahi-daemon status
Avahi mDNS/DNS-SD Daemon is running
The actual problem I have is a pybonjour error, pybonjour.BonjourError: (-65537, 'unknown'), but I've read that it is linked to the problem with the avahi-daemon.
So; how do I connect to the avahi-daemon from the container ?
P.S. I have to switch dbus off in the avahi-daemon.conf file to make it possible to start it; otherwise avahi-daemon won't start, with a D-Bus error like this:
(.env)root@8faa2c44e53e:/opt/cluster-manager# avahi-daemon
Found user 'avahi' (UID 103) and group 'avahi' (GID 107).
Successfully dropped root privileges.
avahi-daemon 0.6.31 starting up.
dbus_bus_get_private(): Failed to connect to socket /var/run/dbus/system_bus_socket: No such file or directory
WARNING: Failed to contact D-Bus daemon.
avahi-daemon 0.6.31 exiting.
As far as I can test, you can use the host's avahi-daemon through its Unix socket for mDNS resolution, and /var/run/dbus for avahi-browse to work.
E.g.:
docker run -v /var/run/dbus:/var/run/dbus -v /var/run/avahi-daemon/socket:/var/run/avahi-daemon/socket -ti debian:10-slim bash
To test inside container:
apt-get update && apt-get install avahi-utils iputils-ping -y
ping whatever.local
avahi-browse -a
Avahi requires D-BUS in order to communicate with clients. Sounds like your docker container isn't starting the system D-BUS. If you do that, then Avahi should work.
You need D-BUS for most of Avahi's functionality (including avahi-browse) so disabling it won't really help.
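A minimal sequence inside the container would be something like this (a sketch, assuming the dbus and avahi packages are already installed; paths can differ between distributions):
mkdir -p /var/run/dbus
dbus-daemon --system          # start the system bus
avahi-daemon --daemonize      # now starts without the D-Bus error
avahi-browse -a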
There is a Docker image that supposedly supports Avahi from within the container. The trick seems to be to mount /var/run/dbus from the host into the container.
Note that I couldn't get this image to run on my 16.04 host.
I ran into the same problem getting avahi and dbus to operate correctly on Ubuntu 14.04 (specifically, I was trying to use ROS TurtleBot). I solved it by incorporating a modified version of the instructions in docker-systemd into my Dockerfile:
FROM ubuntu:14.04
RUN apt-get update &&\
apt-get install -y avahi-utils avahi-daemon libnss-mdns systemd
RUN cd /lib/systemd/system/sysinit.target.wants/;\
    ls | grep -v systemd-tmpfiles-setup | xargs rm -f; \
rm -f /lib/systemd/system/multi-user.target.wants/*;\
rm -f /etc/systemd/system/*.wants/*;\
rm -f /lib/systemd/system/local-fs.target.wants/*; \
rm -f /lib/systemd/system/sockets.target.wants/*udev*; \
rm -f /lib/systemd/system/sockets.target.wants/*initctl*; \
rm -f /lib/systemd/system/basic.target.wants/*;\
rm -f /lib/systemd/system/anaconda.target.wants/*; \
rm -f /lib/systemd/system/plymouth*; \
rm -f /lib/systemd/system/systemd-update-utmp*
RUN mkdir -p /var/run/dbus
ENV init /lib/systemd/systemd
After modifying your Dockerfile to include these instructions, you should create a container using the following command:
docker run --rm --privileged -v /sys/fs/cgroup:/sys/fs/cgroup:ro -it <DOCKER_IMAGE> /bin/bash
Finally, once you're inside the container, you must execute the following commands before attempting to use avahi-browse (directly or indirectly):
$ dbus-daemon --system
$ /etc/init.d/avahi-daemon start
Another solution is to use mdns-repeater on the host to forward mDNS packets to the Docker network
mdns-repeater eth1 docker0
I needed to add two parameters to my docker run command for avahi-browse -at to work inside the container:
--privileged and -v /var/run/dbus:/var/run/dbus
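Put together, the run command looked roughly like this (the image name is just a placeholder):
docker run --privileged -ti \
    -v /var/run/dbus:/var/run/dbus \
    <your-image> avahi-browse -at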
