How does docker --net=host handle port conflicts?

How does Docker handle docker run -d --net=host <image> if I run 2 images that have the exact same port EXPOSEd?
For example, if I run:
$ docker run -d --net=host nginx
$ docker run -d --net=host nginx
$ docker run -d --net=host httpd
# I now have 3 containers running, all of which EXPOSE port 80
# what does the following return?
$ curl http://localhost:80/
What response do I get? The first nginx? The second nginx? The Apache httpd? And how does Docker manage it under the covers? There is no NAT involved, since I used --net=host.

Well, I have my answer. Just like a process that tries to bind to a port already in use fails with EADDRINUSE, so too will a container fail if it tries to bind to a port already in use on the host when running with --net=host.
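You can see this for yourself (a minimal sketch; the exact log message depends on the nginx version). With --net=host, Docker does not reserve host ports at all, so the second docker run itself succeeds, but the nginx process inside fails to bind and the container exits:
$ docker run -d --net=host nginx
$ docker run -d --net=host nginx
$ docker ps -a                        # the second container shows status "Exited"
$ docker logs <second-container-id>   # expect a bind error like "Address already in use"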

Related

Expected exposed port on Redis container isn't reachable, even after binding the port

I'm having a rather awful issue with running a Redis container. For some reason, even though I have attempted to bind the port and what have you, it won't expose the Redis port it claims to expose (6379). Obviously, I've checked this by scanning the open ports on the IP assigned to the Redis container (172.17.0.3) and it returned no open ports whatsoever. How might I resolve this issue?
Docker Redis Page (for reference to where I pulled the image from): https://hub.docker.com/_/redis/
The command variations I have tried:
docker run --name ausbot-ranksync-redis -p 127.0.0.1:6379:6379 -d redis
docker run --name ausbot-ranksync-redis -p 6379:6379 -d redis
docker run --name ausbot-ranksync-redis -d redis
docker run --name ausbot-ranksync-redis --expose=6379 -d redis
https://gyazo.com/991eb379f66eaa434ad44c5d92721b55 (The last container I scan is a MariaDB container)
The command variations I have tried:
docker run --name ausbot-ranksync-redis -p 127.0.0.1:6379:6379 -d redis
docker run --name ausbot-ranksync-redis -p 6379:6379 -d redis
Those two should work and make the port available on your host.
Obviously, I've checked this by scanning the open ports on the IP assigned to the Redis container (172.17.0.3) and it returned no open ports whatsoever. How might I resolve this issue?
You shouldn't be checking the ports directly on the container from outside of docker. If you want to access the container from the host or outside, you publish the port (as done above), and then access the port on the host IP (or 127.0.0.1 on the host in your first example).
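For example (a minimal check, assuming the container was started with one of the -p variants above), verify the mapping from the host instead of scanning the container IP:
docker port ausbot-ranksync-redis    # prints the host-side mapping, e.g. 6379/tcp -> 0.0.0.0:6379
nc -zv 127.0.0.1 6379                # tests the published port on the host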
For docker networking, you need to run your application listening on all interfaces (not localhost/loopback). The official redis image already does this, and you can verify with:
docker run --rm --net container:ausbot-ranksync-redis nicolaka/netshoot netstat -lnt
or
docker run --rm --net container:ausbot-ranksync-redis nicolaka/netshoot ss -lnt
To access the container from outside of docker, you need to publish the port (docker run -p ... or ports in the docker-compose.yml). Then you connect to the host IP and the published port.
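For instance (a sketch; redis-cli must be installed on the client machine, and <host-ip> stands for your docker host's address):
redis-cli -h <host-ip> -p 6379 ping
A PONG reply confirms the published port is reachable. Note that with the 127.0.0.1:6379:6379 variant, the port is reachable only from the host itself.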
To access the container from inside of docker, you create a shared network, run your containers there, and access using docker's DNS and the container port (publish and expose are not needed for this):
docker network create app
docker run --name ausbot-ranksync-redis --net app -d redis
docker run --name redis-cli --rm --net app redis redis-cli -h ausbot-ranksync-redis ping

Docker curl to another container on the same host

I have 2 containers: the first one is running on port 80, and the other is running on port 8022. I try to make a curl request from the container that is running on port 8022 to the container on port 80, and I get an empty response.
For the container on port 8022 I run this command:
docker run -d -it --privileged -p 0.0.0.0:8022:80 -v ~/path/to/my/app:/var/www/app --network=bridge --memory 1073741824 my/app:latest
If I make a curl request to another host, for example Google, I get the response correctly.
Thanks for the help
UPDATE
OK, I solved this by creating a network and using it in both containers, then adding the IP of the port 80 container to the hosts file of the 8022 container.
Thanks to @zero298 for the help!!
This is a very simple example of how to do inter-container communication over a user-defined, bridged network. Here are the 3 files that I have defined to make this possible:
~/Desktop/code/bootstrap.sh
This will kick off the demo by first creating a user-defined, isolated network named example_nw that your containers can talk over. It then creates a new container, named "servertest", that will hold the server we will curl to. I'm using a node container because I'm just more familiar with it.
It will also create a volume that binds to your machine's ~/Desktop/code/ directory, which should contain all the code that we are using, including the node server.js script. The server listens on port 3000 and responds with "Hello World!".
After creating the server, it kicks off another container, named "curler", that will install curl (the debian image doesn't come with it installed). After that, it curls to servertest:3000 and gets the correct reply, because both containers are connected to the same user-defined docker network: example_nw.
After completing, it cleans up by killing the server container and removing the example_nw network.
#!/usr/bin/env bash
# Create user-defined network
docker network create example_nw
# Create server to listen to pings on our network
docker run \
--rm \
-d \
-v ~/Desktop/code:/my/stuff \
--name servertest \
--expose 3000 \
--network example_nw \
node node /my/stuff/server
# Create a curler
docker run \
-it \
--rm \
-v ~/Desktop/code:/my/stuff \
--name curler \
--network example_nw debian \
/my/stuff/curler.sh
# Clean up
docker container stop servertest
docker network rm example_nw
~/Desktop/code/server.js
This is a really simple node.js script that will create a server that listens on port 3000.
/*jslint node:true, esversion:6*/
"use strict";

const http = require("http"),
    port = 3000;

// Make a server
const server = http.createServer((req, res) => {
    console.log("Got request");
    res.end("Hello World!\n");
});

// Have the server listen
server.listen(port, (err) => {
    if (err) {
        return console.error(err);
    }
    console.log(`Listening on port: ${port}`);
});
~/Desktop/code/curler.sh
This just installs curl in the container and then curls to servertest:3000.
#!/usr/bin/env bash
apt update
apt install -y curl
curl servertest:3000
Running ~/Desktop/code/bootstrap.sh will demonstrate the communication.
I would recommend reading the Docker documentation: Work with network commands because it gives a lot of good examples and use cases.
This is an example using a user-defined network:
docker network create mynetwork
docker run --network=mynetwork -d -p 127.0.0.1:8022:80 --name mynginx nginx
docker run -it --network=mynetwork appropriate/curl http://mynginx
Otherwise you can use the older (deprecated) --link option:
docker run -d -p 127.0.0.1:8022:80 --name mynginx nginx
docker run -it --link=mynginx appropriate/curl http://mynginx
Your container is not listening on port 8022; you've published host port 8022 to forward to the container's port 80. From container to container, you connect to the container port, not the published port on the host. So remove 8022 from your curl command to go to the default port 80.
For container-to-container networking, you need the containers to be on a common docker network. And for docker's internal DNS, you need to be on a docker network other than the default bridge network named "bridge" (that network has some historical properties, one of which is that DNS is not enabled on it). You can docker network create $network_name and then run your containers with the --net $network_name option to implement this.
You do not need to modify hosts files or use the deprecated "link" functionality for container-to-container networking.
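As a sketch for your two containers (the names web80 and app8022 and the image my/web:latest are hypothetical, and curl must be available inside the app image):
docker network create mynet
docker run -d --name web80 --net mynet my/web:latest
docker run -d -it --privileged -p 0.0.0.0:8022:80 --net mynet -v ~/path/to/my/app:/var/www/app --name app8022 my/app:latest
docker exec app8022 curl -s http://web80/
The curl uses the container name web80 and the container port (80, the default for http), not the published host port.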

Docker host network not working

I am really confused by this problem. I have two computers on our internal network. Both computers can ping internal servers.
Both computers have same docker version.
I run a simple docker container on both computers with the command docker run -it --rm --name cont1 --net=host java:8. Then I ssh into the containers and try to ping an internal server. One of the containers can ping an internal server, but the other one can't reach any internal server.
How can that be possible? Do you have any idea about that?
Thank you
Connecting a container to other systems in the same network is done by port mapping.
For that you need to run the docker container with port mapping,
like: docker run -it --rm --name cont1 -p host_ip:host_port:container_port java:8
e.g., docker run -it --rm --name cont1 -p 192.168.134.122:1234:1500 java:8
NOTE: the container port given in docker run should be exposed in the Dockerfile.
Now, for example, the container IP will be 172.17.0.2 and the port given in the run command is 1500.
A request sent to host_ip (192.168.134.122) on host_port (1234) is redirected to the container with IP 172.17.0.2 on port 1500.
See the binding details with iptables -L -n -t nat.
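For example (a quick check from the host, assuming something inside the container is actually listening on port 1500):
nc -zv 192.168.134.122 1234    # should connect if the mapping and the listener are in place
iptables -t nat -L DOCKER -n   # shows the DNAT rule docker added for the mapping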
Thanks

Make container accessible only from localhost

I have Docker engine installed on Debian Jessie and I am running there container with nginx in it. My "run" command looks like this:
docker run -p 1234:80 -d -v /var/www/:/usr/share/nginx/html nginx:1.9
It works fine; the problem is that the content of this container is now accessible via http://{server_ip}:1234. I want to run multiple containers (domains) on this server, so I want to set up reverse proxies for them.
How can I make sure that the container is only accessible via the reverse proxy and not directly from IP:port? E.g.:
http://{server_ip}:1234 # not found, connection refused, etc...
http://localhost:1234 # works fine
//EDIT: Just to be clear - I am not asking how to set up a reverse proxy, but how to run a Docker container so that it is accessible only from localhost.
Specify the required host IP in the port mapping
docker run -p 127.0.0.1:1234:80 -d -v /var/www/:/usr/share/nginx/html nginx:1.9
If you are doing a reverse proxy, you might want to put all the containers on a user-defined network along with your reverse proxy; then everything is in a container and accessible on their internal network:
docker network create web
docker run -d --net=web -v /var/www/:/usr/share/nginx/html nginx:1.9
docker run -d -p 80:80 --net=web haproxy
Well, the solution is pretty simple: you just have to specify 127.0.0.1 when mapping the port:
docker run -p 127.0.0.1:1234:80 -d -v /var/www/:/usr/share/nginx/html nginx:1.9
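You can verify the binding from the host (ss may be netstat on older systems):
ss -lnt | grep 1234              # should show 127.0.0.1:1234 rather than 0.0.0.0:1234
curl http://127.0.0.1:1234/      # works from the host
curl http://{server_ip}:1234/    # from another machine: connection refused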

Docker in Docker: Port Mapping

I have found a similar thread but failed to get it to work. So, the use case is:
I start a container on my Linux host
docker run -i -t --privileged -p 8080:2375 mattgruter/doubledocker
When in that container, I want to start another one with the GAE SDK devserver running.
Then I need to access the running app from the host system's browser.
When I start a container in the container as
docker run -i -t -p 2375:8080 image/name
I get an error saying that port 2375 is already in use. I can start the app and curl 0.0.0.0:8080 from inside both containers (when using another mapping, 8080:8080 for example), but I cannot preview the app from the host system, since localhost:8080 on the host maps to port 2375 in the first container, and that port cannot be used when launching the second container.
I'm able to do that using the image jpetazzo/dind. Here is a test I have done that worked (as an example):
From my host machine I run the container with docker installed:
docker run --privileged -t -i --rm -e LOG=file -p 18080:8080 jpetazzo/dind
Then inside the container I pulled the nginx image and ran it with:
docker run -d -p 8080:80 nginx
And from the host environment I can browse the nginx welcome page at http://localhost:18080.
With the image you were using (mattgruter/doubledocker) I had some problems running it (something related to attaching logs).
