docker within-container port forwarding

Is there a way to forward one container-local port to another container-local port?
I know that
docker run -d --name web_lb -p 8000:80 --link web_1:web_1 --link web_2:web_2 tutum/haproxy
forwards the host port 8000 to the container port 80, but how would I forward the container's port 8000 to the container's port 80?
thanks

You normally wouldn't need to do that: any EXPOSEd port of web_1 is directly reachable from the linked container.
If multiple linked containers expose the same port (like web_1 and web_2 here), then the running container needs its own reverse-proxy service (usually NGINX or HAProxy) to proxy-pass to one or the other.
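As a quick sketch of "linked ports are directly reachable" (container names are the ones from the question; whether the proxy image ships wget is an assumption):

```shell
# Inside the proxy container, a linked container's EXPOSEd port is
# reachable by its link alias -- no extra port forwarding is needed.
docker exec web_lb ping -c 1 web_1              # the link adds a hosts entry
docker exec web_lb wget -qO- http://web_1:80/   # talk straight to web_1's port 80
```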


Docker is only listening to port 80

I'm learning docker and I'm testing running containers. It works fine only when I run a container listening on port 80.
Example:
Works OK:
docker run -d --name fastapicontainer_4 -p 8090:80 fastapitest
docker run -d --name fastapicontainer_4 -p 8050:80 fastapitest
Doesn't work:
docker run -d --name fastapicontainer_4 -p 8050:8080 fastapitest
When I change the container-side port to something other than 80, the page doesn't load. Does anyone know if it's possible to use a port other than 80, and how? I'm using FastAPI.
Thanks,
Guillermo
The syntax of the -p argument is <host port>:<container port>. You can make the host port be anything you want, and Docker will arrange for it to redirect to the container port, but you cannot set the container port to an arbitrary value. There needs to be a service in the container listening on that port.
So if you have a web server in the container running on port 80, then the <container port> part of the -p option must always be 80, unless you change the web server configuration to listen on a different port.
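A quick way to see this (the image name fastapitest and container names are taken from the question; the app is assumed to listen on 80):

```shell
# The host port is arbitrary; the container port must match the listener.
docker run -d --name web_a -p 8090:80 fastapitest   # works: app listens on 80
docker run -d --name web_b -p 9000:80 fastapitest   # also works, any host port
curl -s http://localhost:8090/                      # reaches the app via host port 8090
```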
What you are doing:
docker run -d --name fastapicontainer_4 -p 8050:8080 fastapitest
Explanation: This forwards host port 8050 to container port 8080. If your FastAPI service is not listening on port 8080, the connection will fail.
Host 8050 -> Container 8080
Correct way of doing it:
docker run -d --name fastapicontainer_4 -p 8080:80 fastapitest
Explanation: This is forwarding the host port 8080 to the container port 80
Host 8080 -> Container 80
Note: Docker doesn't validate the connection when you share a port, it just opens the gate so you can do whatever you want with that open port, so even if your service is not listening on that port, docker doesn't care.
You need to specify the custom port you want FastAPI to run on.
e.g.
import uvicorn
uvicorn.run(app, host="0.0.0.0", port=8050)
Now if you map host port 8050 (or any other host port) to container port 8050, it should work:
docker run -d --name fastapicontainer_4 -p 8050:8050 fastapitest
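Putting the steps together (a sketch; the image name, container name, and Dockerfile layout are the question's, and the build step assumes the source is in the current directory):

```shell
# 1. Make the app listen on 8050 (uvicorn.run(..., port=8050) in your code)
# 2. Rebuild the image so the change is baked in
docker build -t fastapitest .
# 3. Publish matching ports: host 8050 -> container 8050
docker run -d --name fastapicontainer_4 -p 8050:8050 fastapitest
# 4. Verify from the host
curl -s http://localhost:8050/
```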

Coexistence of UCP and HTTP server on Docker

I have a Docker EE running on a Host with IP 172.10.100.17. I have installed UCP using the default parameters and I have also deployed nginx container with host port 443 mapped to 443 on the container.
docker run -it --rm --name ucp -v /var/run/docker.sock:/var/run/docker.sock docker/ucp install --host-address 172.10.100.17 --interactive
docker run -it -d --name ngx -p 80:80 -p 443:443 nginx
Can UCP and Nginx co-exist with both serving at
https://172.10.100.17?
What is the best practice for deploying UCP when my primary goal is to have nginx/apache serving on Host IP?
Would it be recommended to set a static IP to nginx container/service?
(Note: HTTPS is enabled on nginx)
The key is in the -p parameter, which handles port mapping. The first port listed is on the host, and the second is in the container. So -p 80:80 means to map port 80 on the host to port 80 in the container.
Let’s expand this to Nginx. I’m going to assume you want to use HTTPS with both UCP and Nginx. Only one application can listen per port on a host. So, if two containers both expose port 443, you can have one use port 443 on the host (-p 443:443) and the other use a different port (-p 4443:443). Then you’ll access them at ports 443 and 4443 on the host, respectively, even though both containers expose port 443 - Docker is doing the port forwarding.
It may be that you’re asking how to run both containers on a single port using Nginx as a reverse proxy. That’s a possibility as well, though more complex.
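A sketch of the two-port arrangement (the nginx run command is adapted from the question; host port 4443 is an arbitrary choice):

```shell
# UCP keeps host port 443; nginx moves to host port 4443.
docker run -it -d --name ngx -p 80:80 -p 4443:443 nginx
# UCP:    https://172.10.100.17/
# nginx:  https://172.10.100.17:4443/
```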

Docker aws port

I'm new using docker.
I was wondering if it is possible to run many containers on the same AWS EC2 instance, exposing each container's port on its own port of the EC2 instance.
Suppose that we have 3 container:
container1 that run apache2 on port 80
container2 that run nginx on port 80
container3 with tomcat on port 8080
How can I access these services from my PC?
I read that to do this I need to publish ports with the option -p externport:containerport, but it's not working,
so I tried changing the network with --network=host to expose all ports on the same IP, but that doesn't work either.
I'd just like to access these containers like this:
my-ec2-instance-public-dns:8080 -> container1
my-ec2-instance-public-dns:8081 -> container2
my-ec2-instance-public-dns:8082 -> container3
Can anyone help me?
It is not possible to map two services to the same host port. You can map container ports to host ports using the -p flag, formatted hostPort:containerPort, when the container uses the default bridge network.
In your case, to match the mapping you listed, it could be:
docker run -p 8080:80 apache2
docker run -p 8081:80 nginx
docker run -p 8082:8080 tomcat
Make sure you set the AWS security group of your virtual machine to allow traffic from your IP to ports 8080-8082.
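With the AWS CLI, opening that port range would look roughly like this (the security-group ID and client CIDR are placeholders you must replace):

```shell
# Allow TCP 8080-8082 from a single client IP to the instance's security group.
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 8080-8082 \
  --cidr 203.0.113.10/32
```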

Docker networking: Why can I have 2 containers which have opened the same port

I understand port mapping with -p. I understand I can only map my container port on one port on the host network:
$ docker run -d -p 8080:80 nginx
No other container can map its port to host port 8080 while that container is running. Host port 8080 is forwarded through docker0 to that container's port 80.
But I don't really understand why I can run another nginx:
$ docker run -d -p 8888:80 nginx
I have to map it to a different host port (8888), but why can the docker0 network open port 80 twice? There are two containers behind it, each with port 80. I know it works, I just don't understand why.
Each container runs in a separate network namespace. This is an isolated network environment that does not share network resources (addresses, interfaces, routes, etc.) with the host. When you start a service in a container, it is as if you have started it on another machine.
Just as you can have two different machines on your network with webservers running on port 80, you can have two different containers on your host with webservers running on port 80.
Because they are in different network namespaces, there is no conflict.
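You can see the namespaces at work (image and container names here are arbitrary):

```shell
# Two containers, each binding port 80 inside its own namespace:
docker run -d --name web1 -p 8080:80 nginx
docker run -d --name web2 -p 8888:80 nginx
# Both listen on :80 inside; only the host-side ports differ:
docker port web1   # 80/tcp -> 0.0.0.0:8080
docker port web2   # 80/tcp -> 0.0.0.0:8888
```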
For more reading on network namespaces:
https://blog.scottlowe.org/2013/09/04/introducing-linux-network-namespaces/
https://lwn.net/Articles/580893/

docker: mutual access of container and host ports

From my docker container I want to access the MySQL server running on my host at 127.0.0.1, and I want to access the web server running in my container from the host. I tried this:
docker run -it --expose 8000 --expose 8001 --net='host' -P f29963c3b74f
But none of the ports show up as exposed:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
093695f9bc58 f29963c3b74f "/bin/sh -c '/root/br" 4 minutes ago Up 4 minutes elated_volhard
$
$ docker port 093695f9bc58
If I don't have --net='host', the ports are exposed, and I can access the web server on the container.
How can the host and container mutually access each others ports?
On --expose, the documentation says:
The port number inside the container (where the service listens) does
not need to match the port number exposed on the outside of the
container (where clients connect). For example, inside the container
an HTTP service is listening on port 80 (and so the image developer
specifies EXPOSE 80 in the Dockerfile). At runtime, the port might be
bound to 42800 on the host. To find the mapping between the host ports
and the exposed ports, use docker port.
With --net=host
--network="host" gives the container full access to local system services such as D-bus and is therefore considered insecure.
Here the PORTS column is empty because, with host networking, the container shares all of the host's ports directly.
If you don't want to use the host network, you can reach host ports from inside a container via the docker0 bridge interface:
- How to access host port from docker container
- From inside of a Docker container, how do I connect to the localhost of the machine?
When you want to access container from host you need to publish ports to host interface.
The -P option publishes all the ports to the host interfaces. Docker
binds each exposed port to a random port on the host. The range of
ports are within an ephemeral port range defined by
/proc/sys/net/ipv4/ip_local_port_range. Use the -p flag to explicitly
map a single port or range of ports.
In short, when you define just --expose 8000 and publish with -P, the port is bound to a random host port, not to 8000. When you want the container's port 8000 reachable on host port 8000, map it explicitly with -p 8000:8000.
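To see the difference (any image works for the mapping itself, since, as noted above, Docker doesn't check that something is listening; nginx is used as a stand-in):

```shell
docker run -d --name a --expose 8000 -P nginx
docker port a    # 8000/tcp -> 0.0.0.0:<random ephemeral port>
docker run -d --name b -p 8000:8000 nginx
docker port b    # 8000/tcp -> 0.0.0.0:8000
```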
Docker's network model is to create a new network namespace for your container. That means that container gets its own 127.0.0.1. If you want a container to reach a mysql service that is only listening on 127.0.0.1 on the host, you won't be able to reach it.
--net=host will put your container into the same network namespace as the host, but this is not advisable since it is effectively turning off all of the other network features that docker has-- you don't get isolation, you don't get port expose/publishing, etc.
The best solution will probably be to make your mysql server listen on an interface that is routable from the docker containers.
If you don't want to make mysql listen to your public interface, you can create a bridge interface, give it a random ip (make sure you don't have any conflicts), connect it to nothing, and configure mysql to listen only on that ip and 127.0.0.1. For example:
sudo brctl addbr myownbridge                               # create the bridge
sudo ifconfig myownbridge 10.255.255.255                   # give it an otherwise-unused address
sudo docker run --rm -it alpine ping -c 1 10.255.255.255   # reachable from a container
That IP address will be routable from both your host and any container running on that host.
Another approach would be to containerize your mysql server. You could put it on the same network as your other containers and get to it that way. You can even publish its port 3306 to the host's 127.0.0.1 interface.
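A sketch of that containerized approach (the network name, image tags, password, and app image are all placeholders):

```shell
# Put mysql and the app on one user-defined network; publish 3306
# only on the host's loopback so it stays private to the host.
docker network create appnet
docker run -d --name db --network appnet \
  -e MYSQL_ROOT_PASSWORD=secret \
  -p 127.0.0.1:3306:3306 mysql:8
# The app reaches mysql by container name "db" on port 3306:
docker run -d --name app --network appnet myapp
```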
