From my Docker container I want to access the MySQL server running on my host at 127.0.0.1, and I want to access the web server running in my container from the host. I tried this:
docker run -it --expose 8000 --expose 8001 --net='host' -P f29963c3b74f
But none of the ports show up as exposed:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
093695f9bc58 f29963c3b74f "/bin/sh -c '/root/br" 4 minutes ago Up 4 minutes elated_volhard
$
$ docker port 093695f9bc58
If I don't have --net='host', the ports are exposed, and I can access the web server on the container.
How can the host and container mutually access each others ports?
When you use --expose, you define:
The port number inside the container (where the service listens) does
not need to match the port number exposed on the outside of the
container (where clients connect). For example, inside the container
an HTTP service is listening on port 80 (and so the image developer
specifies EXPOSE 80 in the Dockerfile). At runtime, the port might be
bound to 42800 on the host. To find the mapping between the host ports
and the exposed ports, use docker port.
With --net=host
--network="host" gives the container full access to local system services such as D-bus and is therefore considered insecure.
Here you see nothing under PORTS because, with host networking, the container shares all of the host's ports directly.
If you don't want to use the host network, you can reach host ports from the Docker container through the Docker bridge interface; see these questions, and the sketch after the links:
- How to access host port from docker container
- From inside of a Docker container, how do I connect to the localhost of the machine?.
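A minimal sketch of that, assuming Docker Engine 20.10 or later (which added the special host-gateway value) and a hypothetical MySQL user myuser; the host's MySQL must also listen on an address reachable from the Docker bridge, not only on 127.0.0.1:
# Map a name inside the container to the host's gateway IP,
# then reach the host's MySQL through that name:
docker run --rm -it --add-host=host.docker.internal:host-gateway mysql:8 \
  mysql -h host.docker.internal -u myuser -p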
When you want to access container from host you need to publish ports to host interface.
The -P option publishes all the ports to the host interfaces. Docker binds each exposed port to a random port on the host. The range of ports is within an ephemeral port range defined by /proc/sys/net/ipv4/ip_local_port_range. Use the -p flag to explicitly map a single port or range of ports.
In short, when you define just --expose 8000 and publish with -P, the port is not published on host port 8000 but on some random ephemeral port. When you want port 8000 to be reachable on host port 8000, you need to publish it explicitly with -p 8000:8000.
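A short sketch of the difference, reusing the image ID from the question; the random host port shown is only an example of what -P might pick:
# -P publishes every exposed port to a random ephemeral host port:
docker run -d --expose 8000 -P f29963c3b74f
docker port <container-id>    # e.g. 8000/tcp -> 0.0.0.0:32768
# -p pins the mapping so the host port is predictable:
docker run -d --expose 8000 -p 8000:8000 f29963c3b74f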
Docker's network model is to create a new network namespace for your container. That means that container gets its own 127.0.0.1. If you want a container to reach a mysql service that is only listening on 127.0.0.1 on the host, you won't be able to reach it.
--net=host will put your container into the same network namespace as the host, but this is not advisable since it is effectively turning off all of the other network features that docker has-- you don't get isolation, you don't get port expose/publishing, etc.
The best solution will probably be to make your mysql server listen on an interface that is routable from the docker containers.
If you don't want to make mysql listen to your public interface, you can create a bridge interface, give it a random ip (make sure you don't have any conflicts), connect it to nothing, and configure mysql to listen only on that ip and 127.0.0.1. For example:
sudo brctl addbr myownbridge
sudo ifconfig myownbridge 10.255.255.1 netmask 255.255.255.0
sudo docker run --rm -it alpine ping -c 1 10.255.255.1
That IP address will be routable from both your host and any container running on that host.
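A hedged sketch of the matching MySQL configuration; note that a comma-separated bind-address list requires MySQL 8.0.13 or later (older versions accept only a single address):
# /etc/mysql/my.cnf (fragment)
[mysqld]
bind-address = 10.255.255.1,127.0.0.1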
Another approach would be to containerize your mysql server. You could put it on the same network as your other containers and get to it that way. You can even publish its port 3306 to the host's 127.0.0.1 interface.
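A minimal sketch of that approach; the network name, container name, and root password here are illustrative, not from the question:
# Run MySQL on a user-defined network and publish 3306 only on loopback:
docker network create app-net
docker run -d --name mysql --network app-net \
  -p 127.0.0.1:3306:3306 -e MYSQL_ROOT_PASSWORD=secret mysql:8
# Containers on app-net reach it as mysql:3306;
# the host reaches it only via 127.0.0.1:3306.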
Related
I need to understand how TCP uses ephemeral ports in a container. I understand the network is namespaced and that a TCP port in a container would be NAT'd to a host port. Does that mean that, for two containers running on the same host, if one container binds to 64000 ports (using up the ~64k ports available inside the container via TCP bind(), without binding to host ports), the other container won't be able to use any port because all ports on the host system are used up?
Assuming one IP per host, of course.
Hi TheJoker, if you use a local Docker engine and run two containers that each listen on, say, port 80 with a simple nginx server, you can run them without any problem as long as you are not binding them to a host port. If you bind container port 80 to host port 80, you can obviously do that for only one container.
If you run this command twice:
docker run -d -p 80:80 nginx
you'll receive a message similar to this one:
docker: Error response from daemon: driver failed programming external connectivity on endpoint angry_mclean (d8bbf5af6503b4d54d234f1bf69ee372a8ada6ef07a5ebd138479691d5679994): Bind for 0.0.0.0:80 failed: port is already allocated.
To sum up: you can run as many containers as you want with an exposed port, but you can bind only one of them to any given host port.
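For example, both containers can keep listening on container port 80 as long as they are published to different host ports:
docker run -d -p 8080:80 nginx
docker run -d -p 8081:80 nginx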
If you were to run a container with 64000 ports bound to your host (the -P option binds all exposed ports), then that container would occupy all of those host ports (not actually possible, since the host system already uses some ports, but theoretically).
UPDATE:
For more information please see :
https://docs.docker.com/engine/reference/builder/#expose
https://docs.docker.com/network/iptables/
Right now, when I bind a docker container port to a port on my computer, it can be accessed through every IP address belonging to my computer.
I know this since I tried connecting to the port through another computer using my Docker host's static LAN ip address.
I want to restrict that specific container to be accessible exclusively by my docker host (127.0.0.1 or localhost). When I change my web server's IP to localhost, it becomes inaccessible from my docker host (probably because that makes it local to the container, not the host).
How can I make a docker container local to the host?
If you run the container like this, it will be accessible only from 127.0.0.1:
docker run --rm -it -p 127.0.0.1:3333:80 httpd
--rm: I use it for testing; it removes the container after it exits.
-it: interactive tty.
-p: port mapping, map 3333 on the host to 80 in the container and restrict access only from localhost.
The docker-compose equivalent would be:
services:
  db:
    ports:
      - "127.0.0.1:3333:80"
I'm stuck on port mapping in Docker.
I want to map port 8090 on the outside of a container to port 80 on the inside of the container.
Here is the container running:
ea41c430105d tag-xx "/usr/local/openrest…" 4 minutes ago Up 4 minutes 8090/tcp, 0.0.0.0:8090->80/tcp web
Notice that it says that port 8090 is mapped to port 80.
Now inside another container I do
curl web
I get a 401 response. Which means that the container responds. So far so good.
But when I do curl web:8090 I get:
curl: (7) Failed to connect to web port 8090: Connection refused
Why is port mapping not working for me?
Thanks
P.S. I know that specifically my container responds to curl web with a 401 because when I stop docker stop web and do curl web again, I get could not resolve host: web.
You cannot connect to a published port from inside another container because those are only available on the host. In your case:
From host:
curl localhost:8090 will connect to your container
curl localhost:80 won't connect to your container because the port isn't published
From another container in the same network
curl web will work
curl web:8090 won't work because the only port exposed and listening for the web service is port 80.
Unless told otherwise, Docker containers connect to the default bridge network, which does not support automatic DNS resolution between containers. It looks like you are most likely on the default bridge network. Even on the default bridge network, however, you can connect using the container's IP address, which can be found with the following command:
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <container name>
So, curl <IP Address of web container>:8090 should work.
It is always better to create a user-defined bridge network and attach the containers to it. On a user-defined bridge network, the connected containers have their ports exposed to each other and not to the outside world. A user-defined bridge network also supports automatic DNS resolution, so you can refer to a container by name instead of IP address. You can use the following commands to create a user-defined bridge network and attach your containers to it:
docker network create --driver bridge my-net
docker network connect my-net web
docker network connect my-net <other container name>
Now, from the other container you should be able to run curl on the 'web' container.
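For example, assuming the other container is named client and has curl installed (both assumptions, not from the question):
docker exec client curl -s -o /dev/null -w '%{http_code}\n' http://web/    # expect 401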
You can create network to connect between containers.
Or you can use --link (a legacy feature):
docker run --name container1 -p 80:???? -d image (expose on port 80)
docker run --name container2 --link container1:lcontainer1 -d image2
and inside container2 you can use:
curl lcontainer1
Hope it helps
My Docker container (an SCTP server) is running SCTP on port 36412. However, my SCTP client on the host machine is unable to communicate with the container. How do I expose this port from the container to the host? Is it not the same as for TCP/UDP?
When I run docker run -p 36412:36412 myimage, I get the error below.
Invalid proto: sctp
From reading source code, the general form of the docker run -p option is
docker run -p ipAddr:hostPort:containerPort/proto
Critically, the "protocol" part of this is allowed to be any of tcp, udp, or sctp; it is lowercased, and defaults to tcp if not specified.
It looks like for your application, you should be able to
docker run -p 36412:36412/sctp ...
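To double-check the mapping (this assumes Docker 18.03 or later, which introduced SCTP support; the output format is an expectation, not verbatim):
docker run -d --name sctp-server -p 36412:36412/sctp myimage
docker port sctp-server    # expect: 36412/sctp -> 0.0.0.0:36412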
Use the -p flag when running the container to map an open port on your host machine to the container port. The example below maps port 36412 on the host to 36412 in the container.
docker run -p 36412:36412 mysctpimage
To view the ports running on your container and where they are mapping to:
docker port <containerId>
This will tell you what port and protocol the container is mapping to your host machine. For example, running a simple WebApi project may yield:
80/tcp -> 0.0.0.0:32768
Docker Port Documentation
How to publish or expose a port when running a container
I understand port mapping with -p, and I understand I can map my container port to only one port on the host network:
$ docker run -d -p 8080:80 nginx
No other container can map its port to 8080 because a container is already bound there. Host port 8080 is forwarded through docker0 to port 80 of that container.
But I don't really understand why I can have another nginx:
$ docker run -d -p 8888:80 nginx
I have to map my port to a different host port (8888), but why can my docker0 network open port 80 twice? There are two containers behind it, each with port 80. I know it works; I just don't understand why.
Each container runs in a separate network namespace. This is an isolated network environment that does not share network resources (addresses, interfaces, routes, etc.) with the host. When you start a service in a container, it is as if you had started it on another machine.
Just as you can have two different machines on your network with webservers running on port 80, you can have two different containers on your host with webservers running on port 80.
Because they are in different network namespaces, there is no conflict.
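A hedged illustration; the IP addresses are examples and depend on your bridge network:
# Both containers bind container port 80 without conflict,
# because each lives in its own network namespace:
docker run -d --name web1 -p 8080:80 nginx
docker run -d --name web2 -p 8888:80 nginx
docker inspect -f '{{.NetworkSettings.IPAddress}}' web1    # e.g. 172.17.0.2
docker inspect -f '{{.NetworkSettings.IPAddress}}' web2    # e.g. 172.17.0.3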
For more reading on network namespaces:
https://blog.scottlowe.org/2013/09/04/introducing-linux-network-namespaces/
https://lwn.net/Articles/580893/