I am trying to start a jenkins container. The port 8080 is mapped to the host port 80.
docker run -p 80:8080 -p 50000:50000 -v jenkins_home:/var/jenkins_home jenkins/jenkins:lts
I can curl the Jenkins app from the host, but I cannot reach it from outside the host.
curl localhost:80 works on the host,
but
curl <fqdn or ip address>:80 from another machine on the same network as the host times out.
If I set --net=host, the above works, but I would rather not use the host network in case I need to add more services in the future.
I don't think it is a firewall issue, because with the host network the connection works.
With tcpdump I can see the requests on the host's main interface, but no packets are forwarded to the docker0 interface. Is this normal behaviour?
What can I do to resolve the issue and reach Jenkins from outside the host?
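In case it helps anyone hitting the same symptom, here is a diagnostic sketch, assuming a Linux host with the default docker0 bridge (the eth0 interface name is an assumption):
# kernel IP forwarding must be on for bridged containers (Docker normally enables it)
sysctl net.ipv4.ip_forward
# the FORWARD chain must not be dropping traffic headed for docker0
sudo iptables -L FORWARD -n -v
# Docker's DNAT rule for the published port should appear in the nat table
sudo iptables -t nat -L DOCKER -n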
Related
I have Jenkins running in a Docker container on a Linux EC2 instance. I am running Testcontainers within it and want to expose all ports to the host, so I am using the host network.
When I run the jenkins container with -p 8080:8080 everything works fine and I am able to access jenkins on {ec2-ip}:8080
docker run -id -p 8080:8080 -p 50000:50000 jenkins/jenkins:lts
However, if I run the same image using --network=host, since I want to expose all ports to the host:
docker run -id --network=host jenkins/jenkins:lts
{ec2-ip}:8080 becomes unreachable. I can curl it locally within the container at localhost:8080, but accessing Jenkins from the browser doesn't work.
I am not sure how host networking would change the way I access Jenkins on port 8080. Shouldn't the application still be available on port 8080 on the host IP address?
Check that port 8080 is allowed in the security group for the instance.
When a Docker container is running in the host network mode using the --network=host option, it shares the network stack with the Docker host. This means that the container is not isolated and uses the same network interface as the host.
In your case, you should be able to access Jenkins from the browser at ec2-ip:8080.
I tested it by running Jenkins with the following command:
docker run -id --name jenkins --network=host jenkins/jenkins:lts
If the issue still persists, you can check the following (see the check commands below):
make sure the container is running
make sure that no other process is listening on port 8080 on the host
make sure that you opened port 8080 in the security group for your EC2 instance
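A quick sketch of those checks (the security-group ID is a placeholder, and the ss and aws commands assume those tools are installed on the instance):
# is the container up, and is anything else already bound to 8080 on the host?
docker ps --filter name=jenkins
sudo ss -ltnp | grep ':8080'
# does the instance's security group allow inbound 8080?
aws ec2 describe-security-groups --group-ids <sg-id> --query 'SecurityGroups[].IpPermissions'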
AFAIU, --network doesn't do what you expect it to do. The --network flag allows you to connect the container to a network. For example, when you use --network=host, your container uses the Docker host's network stack; it is not the other way around. Take a look at the official documentation.
Figured it out. I needed to update iptables to allow port 8080 when using the host network.
sudo iptables -I INPUT -i eth0 -p tcp -m tcp --dport 8080 -m comment --comment "# jenkins #" -j ACCEPT
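To double-check, a small sketch (the eth0 interface name is carried over from the rule above); note the rule will not survive a reboot unless it is persisted, for example with the iptables-persistent package on Debian/Ubuntu:
# confirm the ACCEPT rule is present and its packet counters increase on each request
sudo iptables -L INPUT -n -v --line-numbers | grep 8080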
I'm stuck on port mapping in Docker.
I want to map port 8090 on the outside of a container to port 80 on the inside of the container.
Here is the container running:
ea41c430105d tag-xx "/usr/local/openrest…" 4 minutes ago Up 4 minutes 8090/tcp, 0.0.0.0:8090->80/tcp web
Notice that it says that port 8090 is mapped to port 80.
Now inside another container I do
curl web
I get a 401 response, which means the container responds. So far so good.
But when I do curl web:8090 I get:
curl: (7) Failed to connect to web port 8090: Connection refused
Why is port mapping not working for me?
Thanks
P.S. I know it is specifically my container that responds to curl web with the 401, because when I stop it (docker stop web) and run curl web again, I get could not resolve host: web.
You cannot connect to a published port from inside another container because those are only available on the host. In your case:
From host:
curl localhost:8090 will connect to your container
curl localhost:80 won't connect to your container because the port isn't published
From another container on the same network:
curl web will work
curl web:8090 won't work because the only port the web service exposes and listens on is 80.
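To make the difference concrete, a quick sketch using the names from the question (assuming both containers are on the same Docker network):
# from the host: use the published port
curl localhost:8090
# from another container: use the container name and the port the
# process actually listens on inside the container
curl web:80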
Unless otherwise specified, Docker containers connect to the default bridge network. The default bridge network does not support automatic DNS resolution between containers, and it looks like you are most likely on the default bridge network. However, on the default bridge network you can connect using the container's IP address, which can be found with the following command:
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <container name>
So, curl <IP Address of web container>:80 should work (the published port 8090 exists only on the host; inside the Docker network the web service listens on 80).
It is always better to create a user-defined bridge network and attach the containers to it. On a user-defined bridge network, the connected containers have their ports exposed to each other and not to the outside world. A user-defined bridge network also supports automatic DNS resolution, so you can refer to a container by name instead of by IP address. You can try the following commands to create a user-defined bridge network and attach your containers to it:
docker network create --driver bridge my-net
docker network connect my-net web
docker network connect my-net <other container name>
Now, from the other container you should be able to run curl on the 'web' container.
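A small verification sketch (assuming the other container's image has curl installed):
# both containers should show up as attached to the new network
docker network inspect my-net --format '{{range .Containers}}{{.Name}} {{end}}'
# then resolve the 'web' container by name from the other container
docker exec <other container name> curl -s http://web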
You can create a network to connect the containers.
Or you can use --link (now a legacy feature):
docker run --name container1 -p 80:???? -d image (publish on host port 80)
docker run --name container2 --link container1:lcontainer1 -d image
and inside container2 you can use :
curl lcontainer1
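For example, from the host (assuming curl is available in container2's image):
docker exec container2 curl http://lcontainer1
Note that user-defined networks are the recommended replacement for --link.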
Hope it helps
I have 2 docker containers running on my Mac host - container 1 is Jenkins from Docker Hub and container 2 is SonarQube from Docker Hub. I have both containers running successfully. I can access Jenkins from my host by going to http://localhost:8080/ and I can access my SonarQube by going to http://localhost:9000/.
The Jenkins container was started like this:
docker run -d -p 8080:8080 -p 50000:50000 jenkins/jenkins:latest
The SonarQube container was started like this:
docker run -d -p 9000:9000 sonarqube
Now I want the two containers to communicate with each other, so I need to provide each container with the IP address of the other.
I got the IP address of each container by executing this:
docker inspect --format '{{ .NetworkSettings.IPAddress }}' container_name_or_id
This returns an IP address of 172.17.0.2 for the Jenkins container and 172.17.0.3 for the SonarQube container. But when I try to access the Jenkins container from my host by going to http://172.17.0.2:8080, I get a request timeout. The same thing happens when I try to access the SonarQube container from my host by going to http://172.17.0.3:9000.
Is this normal behavior?
Shouldn't I be able to access each container from my host by their internal IP address?
And how can I test that one container (e.g. Jenkins) can access the other container (e.g. SonarQube) by IP address?
Is this normal behavior? Shouldn't I be able to access each container from my host by their internal IP address?
What you describe is normal behavior: you can't directly reach the Docker-internal IP addresses from a MacOS host. See "Per-container IP addressing is not possible" in the Docker for Mac docs.
How can I test that one container (e.g. Jenkins) can access the other container (e.g. SonarQube) by IP address?
This isn't something I normally "test" per se. Start up both processes and have them make their normal (HTTP) connections; if it works you'll see appropriate log messages, and if it doesn't work you'll see complaints. (Getting a root shell in a container to send ICMP packets from one container to another seems to be a popular option but doesn't prove much.)
Also: don't make this connection by explicit IP address. As you've noticed already the Docker-internal IP addresses aren't usable in some contexts, and they'll change whenever you restart containers. Instead, Docker provides an internal DNS service that can resolve host names when communicating between containers, but you need to explicitly set up a non-default bridge network. That setup would look like:
docker network create jenkinsnet
docker run --name sonarqube -d --net jenkinsnet \
-p 9000:9000 \
sonarqube
docker run --name jenkins -d --net jenkinsnet \
-p 8080:8080 -p 50000:50000 \
-e SONARQUBE_URL=http://sonarqube:9000 \
jenkins/jenkins:latest
So I've explicitly created a network; started both containers connected to it; and told the client container (via an environment variable) where the server container is. You don't have to publish ports with docker run -p to reach them this way; whether you do or not, use the port the server process is listening on (the second port number in the docker run -p option).
From the host, your only (portable, reliable) path to reach the container is via its published ports.
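A quick way to exercise both paths, assuming curl is available in the Jenkins image:
# container-to-container: by name, on the port the server process listens on
docker exec jenkins curl -s http://sonarqube:9000
# host-to-container: only through the published ports
curl -s http://localhost:8080
curl -s http://localhost:9000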
It looks like you are using the default bridge network. The internal IPs are meant for the containers to talk to each other over the bridge network; you cannot access them from the host.
There are multiple options for you.
You can configure http://172.17.0.3:9000 as your SonarQube endpoint in Jenkins.
You can configure http://172.17.0.2:8080 as your Jenkins endpoint in SonarQube.
If you don't want to hard-code the above IPs, both of your containers can also talk to each other using the Docker default gateway IP (172.17.0.1) and the corresponding published port, so essentially you can configure http://172.17.0.1:<published port> as well.
Note: the default gateway IP changes if you define a user-defined bridge network.
https://docs.docker.com/v17.09/engine/userguide/networking/#the-default-bridge-network
https://docs.docker.com/network/network-tutorial-standalone/
If you want to spin up both containers using docker-compose, you can refer to each container by its service name. Just follow Networking in Compose.
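For reference, a minimal docker-compose.yml sketch for this pair (the SONARQUBE_URL variable is carried over from the accepted answer as an example of passing the peer's address; Compose puts both services on one project network with DNS by service name):
version: "3.8"
services:
  sonarqube:
    image: sonarqube
    ports:
      - "9000:9000"
  jenkins:
    image: jenkins/jenkins:latest
    ports:
      - "8080:8080"
      - "50000:50000"
    environment:
      - SONARQUBE_URL=http://sonarqube:9000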
The accepted answer (https://stackoverflow.com/a/53992787/7730554) already provides valid options; of those, I personally usually prefer docker compose.
But as you are running Docker on a Mac, you could also use host.docker.internal in combination with the published host port. Docker takes care of resolving host.docker.internal to the corresponding IP even if your host IP changes.
See https://docs.docker.com/desktop/mac/networking/.
Note that this is intended for development only and works when you use Docker Desktop.
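For example, from inside the Jenkins container (assuming the container is named jenkins as in the accepted answer, curl is available in the image, and SonarQube is published on host port 9000):
docker exec jenkins curl -s http://host.docker.internal:9000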
I'm new to Docker and maybe this is something I don't fully understand yet, but what I'm trying to do is connect to an open port in a running Docker container. I've pulled and run the rabbitmq container from Docker Hub (https://hub.docker.com/_/rabbitmq/). The rabbitmq container uses port 5672 for clients to connect to.
After running the container (as instructed in the hub page):
$ docker run -d --hostname my-rabbit --name some-rabbit rabbitmq:3
Now what I want to do is telnet into the open port (it is possible on a regular RabbitMQ installation and should be on a container as well).
I've (at least I think I did) gotten the container IP address using the following command:
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' container_name_or_id
And the result I got was 172.17.0.2. When I try to access using telnet 172.17.0.2 5672 it's unsuccessful.
The address 172.17.0.2 seems strange to me because if I run ipconfig on my machine I don't see any interface using 172.17.0.x address. I do see Ethernet adapter vEthernet (DockerNAT) using the following ip: 10.0.75.1. Is this how it is supposed to be?
If I do port binding (adding -p 5672:5672) then I can telnet into this port using telnet localhost 5672 and immediately connect.
What am I missing here?
As you pointed out, you need port binding in order to achieve the result you want, because you are running the application over the default bridge network (on Windows, I guess).
From the official Docker docs:
Containers connected to the same user-defined bridge network automatically expose all ports to each other, and no ports to the outside world. [...]
If you run the same application stack on the default bridge network, you need to open both the web port and the database port, using the -p or --publish flag for each. This means the Docker host needs to block access to the database port by other means.
Later on the rabbitmq Docker Hub page there is a reference to the management plugin, which is run by executing the command:
docker run -d --hostname my-rabbit --name some-rabbit -p 8080:15672 rabbitmq:3-management
which publishes host port 8080 to container port 15672, the management UI, which I think may be what you need.
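If the goal is to telnet the AMQP port itself from the host, a sketch that publishes both the client port and the management UI (the host-port choices are just an example; remove the earlier some-rabbit container first or pick another name):
docker run -d --hostname my-rabbit --name some-rabbit -p 5672:5672 -p 8080:15672 rabbitmq:3-management
# then, from the host
telnet localhost 5672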
You should also notice that they talk about clusters and nodes there, maybe they meant the container to be run as a service in a swarm (hence using the overlay network and not the bridge one).
Hope I could help somehow :)
I have two machines connected via SSH tunnelling, such that connecting to localhost:2222 on machine1 reaches machine2:2222. machine2 runs the container docker2 and exposes services on port 2222 to localhost only. I can access these from machine1 on port 2222.
I would like to be able to access machine1's localhost:2222 from docker1, a container running on machine1. I can determine the gateway IP address from within docker1; however, connections are rejected because they come from the IP address assigned to docker1 rather than from localhost.
So, what is the best way to access services on machine2 from docker1 on machine1? Solutions I've seen seem to involve modifying iptables on the host machine which doesn't seem all that portable.
This is what the --net flag is for:
user@machine1:~$ docker run --net="host" -ti docker1 /bin/bash
root@machine1:/# wget localhost:2222
(this will download whatever a request to machine2:2222 provides)