docker network port binding - docker

I was playing around with docker and containers.
I have the Docker engine set up on an Ubuntu box (running in VMware Player) and am trying to bind the daemon to a network port with the following command:
root@ubuntu:~# docker -H 10.0.0.7:2375 -d &
[1] 10046
root@ubuntu:~# flag provided but not defined: -d
See 'docker --help'.
Why is the -d parameter throwing it off? I am very new to Linux, so any suggestion is welcome.
Thanks in advance.

You're looking for docker daemon, not docker -d. This was moved to dockerd in 1.12, but calling docker daemon still works there (it's just a pass-through to the new command).
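For the command in the question, a sketch (the IP is taken from the question; your host must actually own that address, and exposing the daemon on TCP without TLS is insecure):
docker daemon -H tcp://10.0.0.7:2375 &
On Docker 1.12 and later the standalone binary is the equivalent:
dockerd -H tcp://10.0.0.7:2375 &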

Related

Running containers inside an Ubuntu container

I need to separate the environments so my team can work without port conflicts. My idea was to use an Ubuntu container to run a lot of other containers and map just the ports we would use, without conflicts.
Unfortunately, after installing Docker inside the Ubuntu container, it gives the following error:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is
the docker daemon running?
Is it possible to run Docker inside containers? Does this idea work?
Plus, if this is not the best way to solve the original problem, could you please suggest a better solution?
First question:
I think you have to bind-mount the Docker daemon's socket into your Ubuntu container:
-v /var/run/docker.sock:/var/run/docker.sock
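As a full command, a rough sketch (the image tag is an assumption; you also need a docker CLI inside the container to talk to the mounted socket, here borrowed from the host):
docker run -it -v /var/run/docker.sock:/var/run/docker.sock -v $(which docker):/usr/bin/docker ubuntu:18.04 bash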
Or, optionally, use the official docker image with the dind tag (Docker in Docker), which is based on Docker 18.09:
docker run --privileged --name some-docker -v /my/own/var-lib-docker:/var/lib/docker -d docker:dind
Second question:
Instead of the Ubuntu container with Docker in it, you could use a reverse proxy in front of your other service containers.
For example, Traefik or nginx.
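A rough Traefik sketch (the image tag, entrypoint flag, and router label below are assumptions, not from the question): Traefik watches the host's Docker socket and routes by container labels, so only port 80 has to be published on the host.
docker run -d --name traefik -p 80:80 \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  traefik:v2.9 --providers.docker=true --entrypoints.web.address=:80
docker run -d --label 'traefik.http.routers.app1.rule=Host(`app1.localhost`)' some-service-image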
You can also use Kubernetes and create a namespace for each developer, then use nginx and a dynamic server_name to map URLs to the different namespaces.
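If you go the Kubernetes route, creating the per-developer namespaces is the simple part; a sketch (the namespace names and manifest file are placeholders):
kubectl create namespace dev-alice
kubectl create namespace dev-bob
kubectl -n dev-alice apply -f stack.yaml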

How to use Docker inside a Docker container in a safe way

I have some docker containers running on my docker environment (on a CentOS VM) which need docker inside. So I mount /var/run/docker.sock inside the containers.
Now I'm creating /etc/default/docker in which I put
DOCKER_OPTS="-H tcp://xx.xx.xx.xx:2376"
But now my question is: which IP is xx.xx.xx.xx? Is it the IP of the host or the IP of a container? Also, is this the safest way to let a docker container use the socket (i.e. to use Docker in Docker)?
Running Docker within Docker is not trivial, so you should have a good reason for doing it.
The last time I did that, I was using dind (Docker in Docker) and had to mount the socket (/var/run/docker.sock) in combination with the --privileged flag. However, things might have changed since then (see https://github.com/docker/docker/pull/15596) and you should be able to run it without the socket mount:
docker run --privileged -d docker:dind
So be sure to check out this comprehensive guide at https://hub.docker.com/_/docker/
Working with Docker in Docker can be tricky. I would recommend using the official Docker image with the dind tag. You shouldn't need to specify DOCKER_HOST in the options, as it will be configured correctly. For example, running:
docker run -ti --name docker -v /var/run/docker.sock:/var/run/docker.sock --privileged docker:dind sh
will drop you into a shell inside the container. If you then run docker ps, you should see a list of the containers running on the host machine. Note that the --privileged flag is required in this case, as we are accessing the Docker daemon outside the container.
Hope this helps!
Dylan
Edit
Drop the --privileged flag from the above command because of the security issues highlighted by Alexander in the comments. You can also drop the dind tag, as it's not required.
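With both changes applied, the command from the answer would look roughly like this:
docker run -ti --name docker -v /var/run/docker.sock:/var/run/docker.sock docker sh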

Docker container doesn't expose ports when --net=host is mentioned in the docker run command

I have a CentOS docker container on a CentOS docker host. When I run the image with docker run -d --net=host -p 8777:8777 ceilometer:1.x, the container gets the host's IP but doesn't have any ports assigned to it.
If I run the same command without --net=host, docker run -d -p 8777:8777 ceilometer:1.x, docker exposes the ports but with a different IP. The Docker version is 1.10.1. I want the container to have the same IP as the host, with ports exposed. I have also put the EXPOSE 8777 instruction in the Dockerfile, but it has no effect when --net=host is used in the docker run command.
I was confused by this answer. Apparently my docker image should have been reachable on port 8080, but it wasn't. Then I read
https://docs.docker.com/network/host/
To quote
The host networking driver only works on Linux hosts, and is not supported on Docker for Mac, Docker for Windows, or Docker EE for Windows Server.
That's rather annoying as I'm on a Mac. The docker command should report an error rather than let me think it was meant to work.
Discussion on why it does not report an error
https://github.com/docker/for-mac/issues/2716
Not sure I'm convinced.
The docker version is 1.10.1. I want the docker container to have the same IP as the host with ports exposed.
When you use --net=host, it tells the container to use the host's networking stack. So you can't expose ports to the host, because the container is the host, as far as the network stack is concerned.
docker inspect might not show the exposed ports, but if you have an application listening on a port, it will be available as if it were running directly on the host.
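To illustrate with the image from the question (a sketch; it assumes the application inside actually listens on 8777, and no -p mapping is needed or honored with host networking):
docker run -d --net=host ceilometer:1.x
curl http://localhost:8777/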
On Linux, I have always used --net=host when myapp needed to connect to another docker container hosting PostgreSQL.
myapp reads an environment variable DATABASE in this example
As Shane mentions, this does not work on macOS or Windows...
docker run -d -p 127.0.0.1:5432:5432 postgres:latest
So my app can't connect to my other docker container:
docker run -e DATABASE=127.0.0.1:5432 --net=host myapp
To work around this, you can use host.docker.internal instead of 127.0.0.1 to resolve your host's IP address.
Therefore, this works
docker run -e DATABASE=host.docker.internal:5432 -d myapp
Hope this saves someone time!

Problems getting docker containers to see (ping) each other by name

I have three docker containers,
java container (JC): for my java application (spring boot)
elasticsearch container (EC): for ElasticSearch
test container (TC): testing container to troubleshoot with ping test
Currently, JC cannot see EC by "name". And when I say "see", I mean that if I ping EC from JC, I get ping: unknown host. Interestingly, if I ping EC from TC, I do get a response.
Here is how I start the containers.
docker run -dit --name JC myapp-image
docker run -d --name EC elasticsearch:1.5.2 elasticsearch -Des.cluster.name=es
docker run --rm --name TC -it busybox:latest
Then, to ping EC from JC, I issue the following commands.
docker exec JC ping -c 2 EC
I get a ping: unknown host
With the TC, since I am already at the shell, I can just do a ping -c 2 EC and I get 2 replies.
I thought maybe this had something to do with my Java application, but I doubt it because I modified my Dockerfile to just stand up the container. The Dockerfile looks like the following.
FROM java:8
VOLUME /tmp
Note that you can build the above docker image with docker build --no-cache -t myapp-image . (the trailing dot is the build context).
Also note that I have Docker Weave Net installed, and this does not seem to help getting the JC to see the EC by name. On the other hand, I tried to find the IP address of each container as follows.
docker inspect -f '{{ .NetworkSettings.IPAddress }}' JC --> 172.17.0.4
docker inspect -f '{{ .NetworkSettings.IPAddress }}' EC --> 172.17.0.2
docker inspect -f '{{ .NetworkSettings.IPAddress }}' TC --> 172.17.0.3
I can certainly ping EC from JC by IP address: docker exec JC ping -c 2 172.17.0.2. But getting the containers to see each other by IP address does not help as my Java application needs a hostname reference as a part of its configuration.
Any ideas on what's going on? Is it the container images themselves? Why would the busybox container image be able to ping the ElasticSearch container by name but the java container not?
Some more information.
VirtualBox 5.0.10
Docker 1.9.1
Weave 1.4.0
CentOS 7.1.1503
I am running docker inside a CentOS VM on a Windows 10 desktop as a staging environment before deployment to AWS
Any help is appreciated.
Within the same docker daemon, you can use the old --link option to update the /etc/hosts of each container and make sure one can ping the other:
docker run -d --name EC elasticsearch:1.5.2 elasticsearch -Des.cluster.name=es
docker run -dit --name JC --link EC myapp-image
docker run --rm --name TC -it busybox:latest
Then, a docker exec JC ping -c 2 EC should work.
If it does not, check if this isn't because of the base image and a security issue: see "Addressing Problems with Ping in Containers on Atomic Hosts".
JC is based on the official java:8 image, which is itself based on jessie-curl and jessie.
Containers in this default network are able to communicate with each other using IP addresses. Docker does not support automatic service discovery on the default bridge network. If you want to communicate with container names in this default bridge network, you must connect the containers via the legacy docker run --link option. (docs.docker.com)
It should also work using the new networking.
docker network create -d bridge non-default
docker run --net non-default ...
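Applied to the containers from the question (names and images taken from there), a sketch would be; user-defined bridge networks provide built-in name resolution, so the ping by name should succeed:
docker run -d --net non-default --name EC elasticsearch:1.5.2 elasticsearch -Des.cluster.name=es
docker run -dit --net non-default --name JC myapp-image
docker exec JC ping -c 2 EC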
There isn't a specific option which applies this behavior to the default network (AFAICT from looking at docker network inspect). I guess it's just triggered by the option "com.docker.network.bridge.default_bridge".
In the first part of another question, Docker 1.9.0 "bridge" versus a custom bridge network results in difference in hosts file and SSH_CLIENT env variable, it's suggested this was changed in Docker 1.9. Note that Docker 1.9 was when the new networking system was turned on in the stable release; the section of the user guide that I quoted above did not exist in version 1.8.

Delete Docker from Docker?

Is it possible to control (list/start/stop/delete) docker containers from a docker container running on the same machine?
The idea/intent is to have docker container which monitors/controls neighbours.
Both low/high level details would be useful.
Thanks!
Yes, the easiest way is to mount the docker socket from the host inside the docker container, e.g.:
$ docker run -it -v /var/run/docker.sock:/var/run/docker.sock -v $(which docker):/usr/bin/docker debian /bin/bash
root@dcd3b64945ed:/# docker ps -q
dcd3b64945ed
3178d5269041
e59d5e37e0f6
Mounting the docker socket is the easiest approach, but it's insecure, as it gives root access to everyone who has access to docker.sock.
I'd suggest using the Docker Remote API to do the list/start/stop/etc. with a program that hides the Docker remote (in your case, local) daemon.
Ref: https://docs.docker.com/articles/basics/
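The Remote API suggested above can also be exercised directly with curl against the local socket; a sketch (the rough equivalent of docker ps, endpoint per the Docker Engine API):
curl --unix-socket /var/run/docker.sock http://localhost/containers/json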
