I run a Docker swarm in swarm mode. Let's say I have 4 nodes: 1 manager and 3 workers. The hostnames are:
manager0
worker0
worker1
worker2
I start the service in global mode, so every node runs the service once.
Let's say the command looks like this:
docker service create --name myservice --mode global --network mynetwork ubuntu sleep 3600
mynetwork is an overlay network.
Now I am trying to access the hostname of the docker host in the containers, so I can pass the hostname to an application in the container.
I tried to pass the hostname with an environment variable (--env hostname=$(hostname)), but $(hostname) is actually evaluated only on the manager where I run the command, so the hostname is set to manager0 on all nodes.
Is there a way to access the hostname or pass the hostname to the containers?
You can use service creation templates to set the hostname.
Here is the feature request, which was implemented in Docker 17.10:
https://github.com/moby/moby/issues/30966
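For example, a sketch based on the command from the question (the NODE_HOSTNAME variable name is just an illustration):

docker service create --name myservice --mode global --network mynetwork \
  --hostname="{{.Node.Hostname}}" \
  --env NODE_HOSTNAME="{{.Node.Hostname}}" \
  ubuntu sleep 3600

The {{.Node.Hostname}} placeholder is expanded on each node rather than on the manager, so inside each container $NODE_HOSTNAME holds the Docker host's hostname (e.g. worker1), which you can pass on to your application.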
I have 2 containers on a Docker bridge network. One of them runs an Apache server that I am using as a reverse proxy to forward users to the server in the other container. The other container runs a server that is listening on port 8081. I have verified that both containers are on the same network, and when I log into an interactive shell on each container I can successfully ping the other container.
The problem is that when I am logged into the container with the Apache server, I am not able to reach the actual server in the other container.
The IP address of the container with the server is 172.17.0.2.
How I create the Docker network:
docker network create -d bridge jakeypoo
How I start the containers:
docker container run -p 8080:8080 --network="jakeypoo" --name="idpproxy" idpproxy:latest
docker run -p 8081:8080 --name geoserver --network="jakeypoo" geoserver:1.1.0
Wouldn't the URI to reach the server be http://172.17.0.2:8081/?
PS: I am sure more information will be needed; I am new to Stack Overflow and will happily answer any other questions I can.
Since you started the two containers on the same --network, you can use their --name as hostnames to talk to each other. Use the port the service inside the second container is listening on, 8080 in this case: port remappings from docker run -p are ignored for container-to-container traffic, and you don't need a -p option at all to communicate between containers.
In your Apache config, you'd set up something like
ProxyPass "/" "http://geoserver:8080/"
ProxyPassReverse "/" "http://geoserver:8080/"
It's not usually useful to look up the container-private IP addresses: they will change whenever you recreate the container, and in most environments they can't be used outside of Docker (and inside of Docker the name-based lookup is easier).
(Were you to run this under Docker Compose, it automatically creates a network for you, and each service is accessible under its Compose service name. You do not need to manually set networks: or container_name: options, and like the docker run -p option, Compose ports: are not required and are ignored if present. Networking in Compose in the Docker documentation describes this further.)
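For reference, a minimal Compose sketch of this setup (only the image names are taken from the question; the rest is assumed) could look like:

version: "3.8"
services:
  idpproxy:
    image: idpproxy:latest
    ports:
      - "8080:8080"
  geoserver:
    image: geoserver:1.1.0

With this file, the Apache configuration above works unchanged: the idpproxy container reaches the other service at http://geoserver:8080/.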
Most probably this is the reason:
When you log into one of the containers, that container does not know anything about the other container's network. When you ping, that container thinks you are trying to ping a service inside itself.
Try to use Docker Compose if you can use it in your context. Refer to this link:
https://docs.docker.com/compose/
I have 2 docker containers running on my Mac host - container 1 is Jenkins from Docker Hub and container 2 is SonarQube from Docker Hub. I have both containers running successfully. I can access Jenkins from my host by going to http://localhost:8080/ and I can access my SonarQube by going to http://localhost:9000/.
The Jenkins container was started like this:
docker run -d -p 8080:8080 -p 50000:50000 jenkins/jenkins:latest
The SonarQube container was started like this:
docker run -d -p 9000:9000 sonarqube
Now I want the containers to communicate with each other, so I need to provide each container with the IP address of the other.
I got the IP address of each container by executing this:
docker inspect --format '{{ .NetworkSettings.IPAddress }}' container_name_or_id
This returns an IP address of 172.17.0.2 for the Jenkins container and 172.17.0.3 for the SonarQube container. But when I try to access the Jenkins container from my host by going to http://172.17.0.2:8080, I get a request timeout. The same thing happens when I try to access the SonarQube container from my host by going to http://172.17.0.3:9000.
Is this normal behavior?
Shouldn't I be able to access each container from my host by their internal IP address?
And how can I test that one container (e.g. Jenkins) can access the other container (e.g. SonarQube) by IP address?
Is this normal behavior? Shouldn't I be able to access each container from my host by their internal IP address?
What you describe is normal behavior: you can't directly reach the Docker-internal IP addresses from a macOS host. See "Per-container IP addressing is not possible" in the Docker for Mac docs.
How can I test that one container (e.g. Jenkins) can access the other container (e.g. SonarQube) by IP address?
This isn't something I normally "test" per se. Start up both processes and have them make their normal (HTTP) connections; if it works you'll see appropriate log messages, and if it doesn't work you'll see complaints. (Getting a root shell in a container to send ICMP packets from one container to another seems to be a popular option but doesn't prove much.)
Also: don't make this connection by explicit IP address. As you've noticed already the Docker-internal IP addresses aren't usable in some contexts, and they'll change whenever you restart containers. Instead, Docker provides an internal DNS service that can resolve host names when communicating between containers, but you need to explicitly set up a non-default bridge network. That setup would look like:
docker network create jenkinsnet
docker run --name sonarqube -d --net jenkinsnet \
-p 9000:9000 \
sonarqube
docker run --name jenkins -d --net jenkinsnet \
-p 8080:8080 -p 50000:50000 \
-e SONARQUBE_URL=http://sonarqube:9000 \
jenkins/jenkins:latest
So I've explicitly created a network; started both containers connected to it; and told the client container (via an environment variable) where the server container is. You don't have to publish ports with docker run -p to reach them this way; whether you do or not, use the port the server process is listening on (the second port number in the docker run -p option).
From the host, your only (portable, reliable) path to reach the container is via its published ports.
It looks like you are using the default bridge network. Internal IPs are meant for container-to-container communication under bridge networking; you cannot access them from the host.
There are multiple options for you.
You can configure http://172.17.0.3:9000 as your SonarQube endpoint in Jenkins.
You can configure http://172.17.0.2:8080 as your Jenkins endpoint in SonarQube.
If you don't want to hard-code the IPs above, both of your containers can also talk to each other via the Docker default gateway IP (172.17.0.1) and the published host port, so essentially you can configure http://172.17.0.1:9000 and http://172.17.0.1:8080 as well.
Note: the default gateway IP changes if you define a user-defined bridge network.
https://docs.docker.com/v17.09/engine/userguide/networking/#the-default-bridge-network
https://docs.docker.com/network/network-tutorial-standalone/
If you want to spin up both containers using docker-compose, then you can address each container by its service name. Just follow Networking in Compose.
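For instance, a minimal docker-compose.yml sketch for this pair (the SONARQUBE_URL variable mirrors the earlier answer and is an assumption about how your Jenkins job is configured):

version: "3.8"
services:
  jenkins:
    image: jenkins/jenkins:latest
    ports:
      - "8080:8080"
      - "50000:50000"
    environment:
      - SONARQUBE_URL=http://sonarqube:9000
  sonarqube:
    image: sonarqube
    ports:
      - "9000:9000"

Compose puts both services on a shared default network automatically, so jenkins can resolve sonarqube by name.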
The accepted answer (https://stackoverflow.com/a/53992787/7730554) already provides valid options, of which I personally prefer Docker Compose.
But as you are running Docker on Mac, you could also use host.docker.internal in combination with the published host port. Docker takes care that host.docker.internal resolves to the corresponding host IP even if your host IP changes.
See https://docs.docker.com/desktop/mac/networking/.
Note that this is meant for development only and works when you use Docker Desktop.
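For example, with the ports published in the question, the Jenkins container could be pointed at SonarQube via the host like this (the variable name is just an illustration):

SONARQUBE_URL=http://host.docker.internal:9000

and SonarQube could reach Jenkins at http://host.docker.internal:8080.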
We have a swarm running on 6 hosts with about 15 containers. There is one access point, published on port 3010.
On every host that is a node of the swarm, there is a local isolated network with 3 Docker containers. On each host, one of these containers wants to connect to that published port 3010.
I would like to use the port on the host that is currently running the container, but I do not know if this is wise.
How do I resolve the hostname to use in a Docker container to connect to the local swarm port? localhost and 127.0.0.1 are not available. I can connect to a container on the swarm overlay network, but that is not possible when starting the container, because of the local isolated network.
How do I resolve the hostname to use in a Docker container to connect to the local swarm port?
It is the name of your service.
E.g. when you run docker service create --name blue --network dev markuman/color,
then you can attach to this service's container by figuring out its exact task name:
docker ps | grep blue
eb46c52d0568   markuman/color:latest   "/bin/sh -c '/bin/..."   51 seconds ago   Up 49 seconds   80/tcp   blue.1.o5w76smq3kh5jlomltf6yohj3
and simply do
docker exec -ti blue.1.o5w76smq3kh5jlomltf6yohj3 bash
That's it. From there you can ping or ssh into other services which are assigned to the same network.
E.g. when docker service create --name apache ... is running in the same network, just do a ping apache. That's sufficient.
I'm trying to setup some very simple networking between a pair of Docker containers and so far all the documentation I've seen is far more complex than for what I am trying to do.
My use case is simple:
Container 1 is already running and is listening on port 28016
Container 2 will start after container 1 and needs to connect to container 1 on port 28016.
I am aware I can set this up with ease via Docker Compose; however, Container 1 is long-lived and for this use case I do not want to shut it down. Container 2 needs to start and automatically connect to Container 1 via port 28016. Also, both containers are running on the same machine. I cannot figure out how to do this.
I've exposed 28016 in Container 1's Dockerfile, and I'm running it with -p 28016:28016. What do I need to do for Container 2 to connect to Container 1?
There are a few ways of solving this. Most don't require you to publish the ports.
Using a user-defined network
If you start your long-running container in a user-defined network, Docker will handle name resolution for you:
docker network create service-network
docker run --net=service-network --name Container1 service-image
If you then start your ephemeral container in the same network, it will be able to refer to the long-running container by name, e.g.:
docker run --name Container2 --net=service-network ephemeral-image
Using the existing container network namespace
You can just run the ephemeral container inside the network namespace of the long-running container:
docker run --name Container2 --net=container:Container1 ephemeral-image
In this case, the service would be available via localhost:28016.
Accessing the service on the host
Since you've published the service on the host with -p 28016:28016, you can reach it using the address of the host, which from inside the container is going to be the default gateway. You can get that with something like:
address=$(ip route | awk '$1 == "default" {print $3}')
And your service would be available on ${address}:28016.
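For instance, assuming the service in Container 1 speaks HTTP (the question does not say which protocol it uses), Container 2 could then connect with:

curl "http://${address}:28016/"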
Here are the steps to perform:
Create a network: docker network create my-net
Attach the network to the already running container: docker network connect my-net <container-name>
Start the new container with --network my-net, or with docker-compose add a networks property:
    ...
    networks:
      - my-net

networks:
  my-net:
    external: true
The containers should now be able to communicate with each other, using the container name as a DNS hostname.
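Putting those steps together on the command line, a minimal sketch (container and image names are placeholders):

docker network create my-net
docker network connect my-net container1
docker run -d --name container2 --network my-net some-image

Both containers can now resolve each other by name on my-net.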
Trying to get acquainted with Docker, so bear with me...
If I create a database container (psql) with port 5432 exposed, and then create another webapp container which wants to connect on 5432, they get assigned IP addresses on Docker's bridge network, probably 172.17.0.2 and 172.17.0.3 respectively. I can fire up the containers and inspect their IPs with docker network inspect <bridge id>.
If I then take those IPs and plug them, with the port, into my webapp settings, everything works great...
BUT I shouldn't have to run my webapp, shell into it, change settings, and then run a server; I should be able to just run the container...
So what am I missing here? Is there a way to have these two containers networked without having to do all of that?
Use a Docker network
docker network create myapp
docker run --network myapp --name db [first-container...]
docker run --network myapp --name webapp [second-container...]
# ... and so on
Now you can refer to containers by their names from within other containers, just as if they were hostnames in DNS.
In the application running in the webapp container, you can configure the database server using db as if it were a hostname.
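For example, a hypothetical Postgres connection string in the webapp (credentials and database name are made up for illustration) would use the container name as the host:

postgres://appuser:secret@db:5432/appdb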