I have two HTTP servers on my host machine; one listening on 8080, the other on 8081. The one on 8080 is a webapp, and the one on 8081 is an API.
I also have a Docker container that should connect to the webapp on 8080 using an automated tool, and that webapp should make HTTP requests to the API that's on 8081.
Here is a visual representation of what I want:
Host machine HTTP 8080
⇩ ⇖
⇧ Docker container
Host machine HTTP 8081
The problem I'm having is that the Docker container cannot connect to the webapp on the host machine's 8080. I'm not sure why, because I set the --network=host flag, so shouldn't it be using the host machine's network?
This is my Dockerfile:
## Redacted irrelevant stuff...
EXPOSE 8080 8081
This is how I run the container:
docker run -d -p 8080:8080 -p 8081:8081 --network=host --name=app app
Any ideas what's wrong with my setup?
So you have two services running directly on the machine and you want to deploy a Docker container that should connect to one of those services.
In that case, you shouldn't map those ports to the container, and you shouldn't expose them in the Dockerfile, since those ports are not served by the container.
Remove the EXPOSE instruction from the Dockerfile.
Start the container using docker run -d --network=host --name=app app. The container should be able to access the services using localhost:8080.
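For a quick sanity check after starting it that way (assuming curl is available in the image; app is the container name from the command above):
docker exec app curl -sSf http://localhost:8080   # webapp on the host
docker exec app curl -sSf http://localhost:8081   # API on the host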
Related
I have Jenkins running in a Docker container on a Linux EC2 instance. I am running Testcontainers within it, and I want to expose all ports to the host. For that I am using the host network.
When I run the jenkins container with -p 8080:8080 everything works fine and I am able to access jenkins on {ec2-ip}:8080
docker run -id -p 8080:8080 -p 50000:50000 jenkins/jenkins:lts
However, if I run the same image using --network=host, because I want to expose all ports to the host:
docker run -id --network=host jenkins/jenkins:lts
{ec2-ip}:8080 becomes unreachable. I can curl it locally within the container at localhost:8080, but accessing Jenkins from the browser doesn't work.
I am not sure how host networking would change the way I access Jenkins on port 8080. Shouldn't the application still be available on port 8080 on the host IP address?
Check whether you have enabled port 8080 in the security group for the instance.
When a Docker container is running in the host network mode using the --network=host option, it shares the network stack with the Docker host. This means that the container is not isolated and uses the same network interface as the host.
In your case, you should be able to access Jenkins from the browser at ec2-ip:8080.
I tested it by running Jenkins with the following command:
docker run -id --name jenkins --network=host jenkins/jenkins:lts
If the issue still persists, you can check the following (see the commands sketched below):
make sure the container is running
make sure that no other process is running on port 8080
make sure that you enabled port 8080 in the security group for your EC2 instance
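A minimal way to run these checks on the EC2 host (assuming the container is named jenkins, as in the command above; these are standard Docker and Linux tools):
docker ps --filter name=jenkins
# see what is listening on port 8080 on the host
sudo ss -lntp | grep 8080
# confirm Jenkins answers locally before testing from the outside
curl -I http://localhost:8080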
AFAIU --network doesn't do what you expect it to do. The --network flag allows you to connect the container to a network. For example, when you use --network=host your container will be able to use the Docker host's network stack, not the other way around. Take a look at the official documentation.
Figured it out. I needed to update iptables to allow port 8080 when using the host network.
sudo iptables -I INPUT -i eth0 -p tcp -m tcp --dport 8080 -m comment --comment "# jenkins #" -j ACCEPT
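To double-check that the rule is in place, something like this should do (standard iptables usage, nothing specific to this setup):
sudo iptables -L INPUT -n --line-numbers | grep 8080
# then re-test from outside the instance
curl -I http://{ec2-ip}:8080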
I am running a Docker container with docker run -p 8080:8080. Other computers can visit my server at [my ip]:8080. However, for security reasons, I want only localhost (127.0.0.1) to be able to access my server. I do not want other people to connect to it. How do I make a Docker container listen only on the host's 127.0.0.1?
You can use:
docker run -p 127.0.0.1:8080:8080 your_image_name
This binds the published port to the host's loopback interface, so the container's port 8080 is reachable only via 127.0.0.1:8080 on the host.
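A quick way to verify the binding (assuming the container is up and serving on port 8080):
# on the host: the port is bound to the loopback address only
ss -lnt | grep 8080            # shows 127.0.0.1:8080 instead of 0.0.0.0:8080
curl http://127.0.0.1:8080     # works from the host
# from another machine, http://[my ip]:8080 should now be refused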
I am trying to run a Jenkins container. I used "docker run --restart always --name myjenkins -p 8080:80 jenkins" but cannot access Jenkins at http://localhost:8080 in the browser. If I use docker run --restart always --name myjenkins -p 8080:8080 jenkins, I can access the Jenkins URL.
Thanks in advance
Without Docker
Each application must use a different port.
You can access your applications directly through their ports (if they are reachable, of course):
APP_A : 192.168.4.5:8080
APP_B : 10.10.10.15:8081
APP_C : www.app.com:8082
With Docker
Applications can use any port, because each container "is a different world".
You cannot access your Docker applications using their internal ports:
APP_A : 192.168.4.5:8080
APP_B : 10.10.10.15:8080
APP_C : www.app.com:8080
That is because, for instance, port 8080 of APP_B is only visible inside the APP_B container. Nobody can access these applications.
In order to access your Docker applications, you must explicitly establish a relationship between:
Linux host ports <-> container internal ports.
To do that you can use the -p parameter:
docker run -d -p 8080:8080 APP_A ...
docker run -d -p 8081:8080 APP_B ...
docker run -d -p 8082:8080 APP_C ...
After this you can access your Docker applications using their new ports:
APP_A : 192.168.4.5:8080
APP_B : 10.10.10.15:8081
APP_C : www.app.com:8082
Also, a common error when docker-compose and Docker networks are used is using localhost instead of an IP when one Docker app needs to connect to another Docker app. As you can see, you need to use the IP or domain plus the external port instead of localhost:8080.
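As a small illustration of that last point (a sketch only: the container name app_a is hypothetical, the IP 10.10.10.15 is the APP_B example above, and it assumes curl is available in the image):
# from inside APP_A, reach APP_B through the host IP and its published port
docker exec app_a curl http://10.10.10.15:8081
# this would fail: nothing listens on 8081 inside the APP_A container itself
docker exec app_a curl http://localhost:8081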
What is the difference between publishing 8080:80 and 8080:8080 in a docker run?
With 8080:80 you expect your application to listen on internal port 80 inside the container.
With 8080:8080 you expect your application to listen on internal port 8080 inside the container.
You just need to find out which internal port your Jenkins uses inside the container and put it in docker run -p ...
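One way to look that up (standard Docker commands; the jenkins image name is taken from the question):
# list the ports the image declares via EXPOSE
docker image inspect jenkins --format '{{.Config.ExposedPorts}}'
# for the Jenkins image this should report 8080/tcp (and 50000/tcp for agents)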
8080:80 means that inside the container you are using port 80 and you are forwarding that port to the host machine's port 8080. So in the first scenario you are running Jenkins on port 80 inside your container, whereas in scenario 2 you are running Jenkins on port 8080 inside the container and exposing it over the same port on the host machine.
For example, if I am running MySQL in a container I may use 8080:3306, so MySQL would run on port 3306 but be exposed on port 8080 of the host machine. But if I chose 8080:80 for MySQL it would not work, because MySQL binds itself to port 3306, not port 80. The same applies in your Jenkins case.
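A rough sketch of that MySQL example (the container name and password are placeholders; the official image listens on 3306):
docker run -d --name mydb -e MYSQL_ROOT_PASSWORD=example -p 8080:3306 mysql
# connect from the host through the published port
mysql -h 127.0.0.1 -P 8080 -u root -p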
When you say 8080:80, it means any request coming to port 8080 on the host will be forwarded to the service running on port 80 inside your Docker container.
Similarly, 8080:8080 means any request coming to port 8080 will be forwarded to the service running on port 8080 inside your container.
You can also think of it as -
Port for Outside World: Actual Port of service in container
Hope this helps
The syntax looks like the following. More details are in the documentation for the -p flag.
docker run -p [ip-on-host:]port-on-host:port-in-container image-name
In your case, -p 8080:80 means forwarding all traffic arriving on host port 8080 to port 80 in the container. If you check the port status on the host with netstat -lntp | grep 8080, you will see a process managed by docker-proxy listening on port 8080 on the host machine. It handles all traffic routing between port 8080 on the host and port 80 in the container.
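Putting that together for the Jenkins case in the question (image and container name taken from the question; docker port and netstat are standard tools):
docker run -d --restart always --name myjenkins -p 8080:8080 jenkins
docker port myjenkins            # e.g. 8080/tcp -> 0.0.0.0:8080
netstat -lntp | grep 8080        # docker-proxy listening on host port 8080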
I'm new to Docker.
I was wondering whether it is possible to run many containers on the same AWS EC2 instance and expose all of the containers' ports through that single EC2 instance.
Suppose that we have 3 containers:
container1 that run apache2 on port 80
container2 that run nginx on port 80
container3 with tomcat on port 8080
How can I access these services from my PC?
To do this I read that I need to publish ports with the option -p externport:containerport, but it's not working,
so I thought about changing the network and used the option --network=host to map all the ports to the same IP, but that doesn't work either.
I'd just like to access these containers in this way:
my-ec2-instance-public-dns:8080 -> container1
my-ec2-instance-public-dns:8081 -> container2
my-ec2-instance-public-dns:8082 -> container3
Can anyone help me?
It is not possible to map two services to the same host port. You can map container ports to host ports using the -p flag, formatted hostPort:containerPort, when the container uses the default bridge network mode (-p has no effect together with --network=host).
In your case, it could be
docker run -p 8080:80 apache2
docker run -p 8081:80 nginx
docker run -p 8082:8080 tomcat
Make sure you set the AWS security group of your virtual machine to allow traffic from your IP to ports 8080-8082.
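If you manage the security group from the CLI, a rule along these lines would open that range (the group ID and source CIDR are placeholders):
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 8080-8082 --cidr 203.0.113.10/32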
My Docker container (an SCTP server) is listening on SCTP port 36412. However, my SCTP client on the host machine is unable to communicate with the container. How do I expose this port from the container to the host? Is it not the same as TCP/UDP?
When I run docker run -p 36412:36412 myimage, I get below error.
Invalid proto: sctp
From reading the source code, the general form of the docker run -p option is
docker run -p ipAddr:hostPort:containerPort/proto
Critically, the "protocol" part of this is allowed to be any of tcp, udp, or sctp; it is lowercased, and defaults to tcp if not specified.
It looks like for your application, you should be able to
docker run -p 36412:36412/sctp ...
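For the image in the question that would look something like this (myimage is the question's image name; docker port is a standard command):
docker run -d -p 36412:36412/sctp myimage
docker port <containerId>        # should show 36412/sctp -> 0.0.0.0:36412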
Use the -p flag when running the container to map an open port on your host machine to the container port. The example below maps port 36412 on the host to 36412 in the container.
docker run -p 36412:36412 mysctpimage
To view the ports exposed by your container and where they are mapped to:
docker port <containerId>
This will tell you which port and protocol the container maps to your host machine. For example, running a simple WebApi project may yield:
80/tcp -> 0.0.0.0:32768
Docker Port Documentation
How to publish or expose a port when running a container