Running containers over an Ubuntu container - docker

I need to separate environments so my team can work without port conflicts. My idea was to use an Ubuntu container to run a number of other containers and map only the ports we would use, without conflicts.
Unfortunately, after installing Docker inside the Ubuntu container, it gives the following error:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is
the docker daemon running?
Is it possible to run Docker inside containers? Does this idea work?
Also, if this is not the best way to solve the original problem, could you please suggest a better solution?

First question:
I think you have to bind-mount the host's Docker socket into your Ubuntu container:
-v /var/run/docker.sock:/var/run/docker.sock
Or, alternatively, use the official docker image with the dind tag (Docker in Docker), which is based on Docker 18.09:
docker run --privileged --name some-docker -v /my/own/var-lib-docker:/var/lib/docker -d docker:dind
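To check that the nested daemon works, you can run the docker client inside that container (using the some-docker name from the command above), for example:
docker exec -it some-docker docker version
docker exec -it some-docker docker ps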
Second question:
Instead of the Ubuntu container running Docker, you could put a reverse proxy in front of your other service containers, for example Traefik or nginx.
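As a minimal sketch of the reverse-proxy idea with Traefik v2 (the network name, router name, and whoami demo service are only illustrative), only the proxy's ports need to be published on the host:
docker network create proxy
docker run -d --name traefik --network proxy -p 80:80 \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  traefik:v2.10 --providers.docker=true \
  --providers.docker.exposedByDefault=false \
  --entryPoints.web.address=:80
docker run -d --name whoami --network proxy \
  -l 'traefik.enable=true' \
  -l 'traefik.http.routers.whoami.rule=Host(`whoami.localhost`)' \
  -l 'traefik.http.routers.whoami.entrypoints=web' \
  traefik/whoami
Each team member's service then only needs a unique Host rule instead of a unique host port.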

You can use Kubernetes: create a separate namespace for each developer, then use nginx with a dynamic server_name to map URLs to the different namespaces.
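A rough sketch of the per-developer namespace idea (the namespace and manifest names are just examples):
kubectl create namespace dev-alice
kubectl create namespace dev-bob
kubectl -n dev-alice apply -f app.yaml
kubectl -n dev-bob apply -f app.yaml
An nginx or ingress layer in front can then route a hostname such as alice.dev.example.com to the services in dev-alice.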

Related

Dockerized app needs to interact with other containers over localhost

I have an app that launches a docker container and automates a few routines.
Now I have dockerized this app, but it is not able to talk to the other containers over localhost. I tried setting
--network host
when launching the container, and now I'm not able to access the containerized webapp over localhost:.
Any pointers?
localhost won't work. Suppose you are running a VM and try to talk to your host or to other VMs running on your machine. If you call localhost from one of the VMs, it is localhost for that VM only, not for your host. So you won't be able to talk from one VM to another by calling localhost. Docker behaves the same way with regard to localhost. You have two options:
Use a network
If you go with a network, create one and add all the containers to that network. This is the way Docker now suggests:
docker network create <your-network-name>
docker run --network <your-network-name> --name <container-name1> <image>
docker run --network <your-network-name> --name <container-name2> <image>
Then use the container name (container-name1) to reach that service from the other service (container-name2).
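As a concrete illustration (the network, container, and image names and the DB_HOST variable are only placeholders), an app container can reach a database container by its name:
docker network create app-net
docker run -d --network app-net --name db -e POSTGRES_PASSWORD=secret postgres
docker run -d --network app-net --name web -e DB_HOST=db my-web-image
Inside web, the hostname db resolves to the database container.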
Use --link option
Alternatively, you could use the --link option, which is a legacy feature of Docker. The Docker docs say that unless you have a specific reason to use it, you shouldn't use --link anymore.
docker run --name <container1> <image>
docker run --link <container1> --name <container2> <image>
You can then use the name container1 to reach that container from container2, for example in places like the DB host setting.
Did you try creating a common bridge network and attaching your containers to the same network?
Create the network:
docker network create networkname
Then add the switch --network=networkname to the docker run command.
I figured it out later after going over a lot of other documents.
Step 1: install Docker inside the container. I added the following line to my Dockerfile:
RUN curl -sSL https://get.docker.com/ | sh
Step 2: provide the volume mapping in the docker run command:
-v /var/run/docker.sock:/var/run/docker.sock
Now the host's docker commands are accessible from within my container, and without changing the --network for the current container I'm able to access the other containers over localhost.
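Putting both steps together, the run command ends up looking roughly like this (the container and image names are placeholders):
docker run -d --name automation-app \
  -v /var/run/docker.sock:/var/run/docker.sock \
  my-automation-image
Because the container talks to the host's daemon through the socket, any containers it starts are siblings on the host rather than nested inside it.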

GitLab - Docker inside gitlab/gitlab-ce gives errors

I'm running a gitlab/gitlab-ce container on Docker. Then, inside it, I want to run a gitlab-runner service using Docker as the runner executor. For every single command that I run (e.g. docker ps, docker container ...), I get this error:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is
the docker daemon running?
P.S.: I've tried service docker restart and reinstalling docker and gitlab-runner.
By default it is not possible to run docker-in-docker (as a security measure).
You can run your GitLab container in privileged mode, mount the socket (-v /var/run/docker.sock:/var/run/docker.sock) and try again.
Also, there is a Docker image that has been modified for docker-in-docker usage. You can read up on it and create your own custom gitlab/gitlab-ce image.
In both cases the end result will be the same, since this kind of docker-in-docker isn't really docker-in-docker but rather lets you manage the host's docker engine from within a docker container. So just running the GitLab runner docker image on the same host gives the same result and is a lot easier.
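A rough sketch of that simpler approach, running the runner container directly on the host with the host's socket mounted (the config volume name is just an example):
docker run -d --name gitlab-runner --restart always \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v gitlab-runner-config:/etc/gitlab-runner \
  gitlab/gitlab-runner:latest
You can then register it against your GitLab instance with docker exec -it gitlab-runner gitlab-runner register.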
By default the docker container running GitLab does not have access to the docker daemon on your host. The docker client uses a socket connection to communicate with the docker daemon, and this socket is not available in your container.
You can use a docker volume to make the socket of your host available in the container:
docker run -v /var/run/docker.sock:/var/run/docker.sock gitlab/gitlab-ce
Afterwards you will be able to use the docker client in your container to communicate with the docker daemon on the host.

How to use docker inside docker container in a safe way

I have some docker containers running on my docker environment (on a CentOS VM) which need docker inside. So I mount /var/run/docker.sock inside the containers.
Now I'm creating /etc/default/docker in which I put
DOCKER_OPTS="-H tcp://xx.xx.xx.xx:2376"
But now my question is: which IP is xx.xx.xx.xx? Is it the IP of the host or the IP of a container? Also, is this the safest way to let a docker container use the socket (i.e. docker in docker)?
Running docker within docker is not trivial, and you should make sure you have a good reason for doing it.
The last time I did that, I was using dind (docker in docker): I had to mount the socket (/var/run/docker.sock) and use it in combination with the --privileged flag. However, things may have changed by now (see https://github.com/docker/docker/pull/15596) and it should be possible to run it without the socket mount:
docker run --privileged -d docker:dind
So be sure to check out this comprehensive guide at https://hub.docker.com/_/docker/
Working with Docker in Docker can be tricky. I would recommend using the official Docker image with the dind tag. You shouldn't need to specify the DOCKER_HOST in options as it will be correctly configured. For example running:
docker run -ti --name docker -v /var/run/docker.sock:/var/run/docker.sock --privileged docker:dind sh
will drop you into a shell inside the container. Then if you run docker ps you should see a list of the containers running on the host machine. Note that the --privileged flag is required in this case as we are accessing the Docker daemon outside the container.
Hope this helps!
Dylan
Edit
Drop the --privileged flag from the above command due to the security issues highlighted by Alexander in the comments. You can also drop the dind tag, as it's not required.
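With those changes, the command from above reduces to something like:
docker run -ti --name docker -v /var/run/docker.sock:/var/run/docker.sock docker sh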

docker-compose.yml to start containers on multiple VM's

I have created Docker containers using docker-compose.yml on a single host.
Can anybody tell me whether a docker-compose.yml file can be used to start Docker containers on multiple VMs? If yes, how?
The compose file cannot be used with the new Docker "Swarm Mode" introduced in June 2016 (Docker 1.12). The "legacy" Docker Swarm accepts compose files, but you should really focus on learning Docker "Swarm Mode", not the old Docker Swarm. It's much simpler too, except for the missing support for compose files.
"swarm mode" accepts dab files and there is a way to convert compose files to dab, but it's experimental (which means that a lot of what you have put in your compose file won't translate). So the current best way is to create bash scripts with the CLI commands eg: docker service create --name nginx nginx:1.10-alpine.
And do have a look at Matt's link about learning the basics of Docker swarm mode: http://docs.docker.com/engine/swarm/key-concepts
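For reference, a minimal swarm mode workflow looks something like this (the replica count and published port here are arbitrary):
docker swarm init
docker service create --name nginx --replicas 2 --publish 80:80 nginx:1.10-alpine
docker service ls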
You can quickly spin up a legacy swarm from the external swarm containers and a node list (by IP like 10.0.0.1 or by hostname like nodeb):
docker run -d -P --restart=always --name swarm-manager swarm manager \
"nodes://10.0.0.1:2376,nodeb:2376,nodec:2376"
export DOCKER_HOST=$(docker port swarm-manager 2375)
docker-compose up
Before running this, you'd need to configure the engines to listen on 2376 with TLS configured, a client key/certificate, and the appropriate network access. See docker's documentation on TLS for more details on configuring this.
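For reference, the daemon side of that TLS setup looks roughly like the following, per Docker's TLS documentation (the certificate file names are placeholders):
dockerd --tlsverify \
  --tlscacert=ca.pem \
  --tlscert=server-cert.pem \
  --tlskey=server-key.pem \
  -H=0.0.0.0:2376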

Pass --net=host to docker build

To pass other options to docker build, you can specify DOCKER_OPTS in /etc/default/docker; however, --net is not available. Is it possible to use the host's networking stack when building a container?
I'm running Docker version 1.3.2, build 39fa2fa.
Thanks!
Try --network instead of --net. It is new in the 1.25 API and sets the networking mode for the RUN instructions.
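For example, assuming a Dockerfile in the current directory (the image tag is arbitrary):
docker build --network=host -t my-image .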
To solve the problem, configure the docker daemon to use your company's DNS server. For instance, if your resolv.conf has the following configuration:
$> cat /etc/resolv.conf
domain mycompany
search mycompany
nameserver 10.123.123.123
Change /etc/default/docker to contain the following:
DOCKER_OPTS="--dns 10.123.123.123"
And restart the docker daemon with:
sudo service docker restart
Now, containers will have access to the intranet during the build operation.
Related answer: Dockerfile: Docker build can't download packages: centos->yum, debian/ubuntu->apt-get behind intranet
From the newest versions (currently Docker CE v17) it is possible to add --network=host to your docker build command, which is similar to --net=host when using docker run!
