Problems accessing multiple docker containers remotely

I'm trying to set up some docker container demo blogs but I'm having problems when I try to access more than one:
docker run --volumes-from my-data -p 80:8080 --name site1 tutum/wordpress
docker run --volumes-from my-data -p 80:8081 --name site2 tutum/wordpress
I can access the first one from myhost:8080 but I can't access the second one from myhost:8081
Is there anything obvious I'm missing?

Yes. The -p argument tells Docker how to map external (host) addresses to internal (container) addresses. As written, you are mapping port 80 of all host interfaces to port 8080/8081 of the respective container, so both commands compete for host port 80. Assuming the container processes really listen on ports 8080/8081, try -p 8080:8080 / -p 8081:8081. If the containers run standard web servers on port 80, use -p 8080:80 / -p 8081:80 instead. With the correct mapping, each container's service becomes reachable on port 8080/8081 of all host interfaces.
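A sketch of the corrected commands, assuming the tutum/wordpress image serves HTTP on container port 80 (the second case described above):

```shell
# Publish each container's web port (80 inside the container)
# on a distinct host port so the two don't collide:
docker run --volumes-from my-data -p 8080:80 --name site1 tutum/wordpress
docker run --volumes-from my-data -p 8081:80 --name site2 tutum/wordpress
```

With this mapping, site1 answers at http://myhost:8080 and site2 at http://myhost:8081.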

Related

Why "port is already allocated" if I try run within inner network?

I'm trying to run three containers: nginx and two flask + uwsgi containers, one for staging and one for live.
First I create a Docker network:
docker network create --attachable --subnet 1.1.1.0/29 some-network
And run the containers like this:
# nginx
docker run -d --rm \
--net some-network --ip 1.1.1.2 \
-p 80:80 -p 443:443 \
--name my-nginx "$REGISTRY/$IMAGE"
# flask+uwsgi on :8080 (live)
docker run -d --rm \
--net some-network --ip 1.1.1.3 \
-p 8080:8080 \
--name flask-live "$REGISTRY/$IMAGE"
My default nginx server config has these upstreams:
upstream flask_live {
server 1.1.1.3:8080;
}
upstream flask_dev {
server 1.1.1.4:8080;
}
server {
listen 80;
server_name hostname.com;
location / {
include uwsgi_params;
uwsgi_pass flask_live;
}
}
server {
listen 80;
server_name develop.hostname.com;
location / {
include uwsgi_params;
uwsgi_pass flask_dev;
}
}
But when I try to run the third container (the flask develop one):
docker run -d --rm \
--net some-network --ip 1.1.1.4 \
-p 8080:8080 \
--name flask-dev "$REGISTRY/$IMAGE"
I get the error: Bind for 0.0.0.0:8080 failed: port is already allocated
I don't understand why the port is allocated when I'm publishing on the 1.1.1.0/29 subnet.
I've read the Docker docs about networking and googled around, but I still don't understand how to "isolate" the container, or whatever is needed here.
-p 8080:8080 is equivalent to -p 0.0.0.0:8080:8080 where:
0.0.0.0 - the address on the host to redirect from
first 8080 - the host port to redirect from
second 8080 - the container's port (inside the Docker network)
So, your error message says that you cannot bind to port 8080 of the host (not of your internal Docker network).
This option makes your container reachable from the host network (e.g. localhost:8080 forwards to port 8080 of your container). Under the hood it uses iptables to redirect packets from one network to another. So when you pass -p 8080:8080, Docker redirects packets from host port 8080 to your first container; when you pass the same option for the second container, it fails because host port 8080 is already in use. You cannot use a single host port to redirect to both containers at the same time.
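On hosts where Docker manages iptables, this redirection can be observed directly (requires root; the container IP shown is just an example):

```shell
# List the NAT rules Docker installed for published ports
sudo iptables -t nat -L DOCKER -n
# Each -p mapping appears as a DNAT rule, along the lines of:
# DNAT  tcp  --  0.0.0.0/0  0.0.0.0/0  tcp dpt:8080 to:172.17.0.2:8080
```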
Based on your description, you don't even need to publish ports for your flask-uwsgi containers, because you have an nginx proxy in front that routes to them by host name. The ports remain reachable inside your Docker network; you just don't publish them to your host OS.
If you still need to access the flask-uwsgi containers directly (without nginx), publish them on different host ports, e.g. the first with -p 8081:8080 and the second with -p 8082:8080.
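A sketch of that second approach, keeping the image variables from the question (note the third container also needs a unique --name, e.g. flask-dev):

```shell
# live instance: host port 8081 -> container port 8080
docker run -d --rm --net some-network \
  -p 8081:8080 --name flask-live "$REGISTRY/$IMAGE"
# develop instance: host port 8082 -> container port 8080
docker run -d --rm --net some-network \
  -p 8082:8080 --name flask-dev "$REGISTRY/$IMAGE"
```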
The docker run -p option opens a port on the host that forwards to a port in the container. That generally means you can't run two containers that have the same first -p port number, and that first port number also can't conflict with a non-container process running on the host.
Separately, Docker maintains its own networking layer. It's useful to know that, as an implementation detail, each container happens to have its own IP address, so you can run processes in separate containers that each listen on port 8080 and they won't conflict. You don't need a -p option to do this.
So: connections from outside Docker...
Use the host's DNS name or IP address
Use the first docker run -p port number (which must be unique across all containers and host processes)
The process inside the container must listen on the second -p port number and the special 0.0.0.0 "everywhere" IP address
Connections between containers...
Must be on the same docker run --net network
Use the other container's docker run --name as a DNS name
Use the port number the destination container's process is listening on; it also must be listening on the special 0.0.0.0 "everywhere" IP address
Ignore docker run -p port mappings
Notice that neither of these cases actually needs to know the container-private IP address, and it's not usually useful to specify these or look them up. In your example I would delete the docker network create --subnet option and the docker run --ip options.
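Following that advice, the whole setup could be sketched like this (names and image variables taken from the question; the backend port 8080 is assumed from the upstream config):

```shell
# Network without a hand-picked subnet; Docker assigns addresses itself
docker network create --attachable some-network

# Only nginx publishes ports to the host
docker run -d --rm --net some-network -p 80:80 -p 443:443 \
  --name my-nginx "$REGISTRY/$IMAGE"

# The backends publish nothing; nginx reaches them by container name
docker run -d --rm --net some-network --name flask-live "$REGISTRY/$IMAGE"
docker run -d --rm --net some-network --name flask-dev "$REGISTRY/$IMAGE"
```

The nginx upstreams then point at server flask-live:8080; and server flask-dev:8080; instead of the 1.1.1.x addresses.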

Parts of a Docker command

I came across the command docker run -d -p 80:80 docker/getting-started, which appeared to be a demonstration command to initialize a container. However, I am curious as to what the 80:80 does in regard to the overall command. What does this do? (If an answer to my question can be found in their documentation or some other resource, please do link it as I have done a good deal of searching around to no avail and am more than willing to do the reading myself. Thanks!)
The -p HOST_PORT:CONTAINER_PORT flag binds a container port to a host port. In your case it's 80:80, meaning the container's port 80 is bound to port 80 of the host (TCP by default).
https://docs.docker.com/config/containers/container-networking/
Breaking down docker run -d -p 80:80 docker/getting-started:
docker run: start your container
-d: detach your container when you start it (run it in the background)
-p 80:80: map your container port to your host port; when you connect to port 80 on the host, you are connected to port 80 in the container. The pattern is -p {host_port}:{container_port}
docker/getting-started: your image name
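Once the container is running, you can ask Docker what it mapped (the container name here is one we pick ourselves):

```shell
docker run -d -p 80:80 --name getting-started docker/getting-started
# Show the port mappings Docker set up for this container
docker port getting-started
# Typically prints something like: 80/tcp -> 0.0.0.0:80
```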

Why docker container is not able to access other container?

I have 3 Docker applications (containers), one of which communicates with the other two. If I run the containers using the commands below, container 3 is able to access containers 1 and 2.
docker run -d --network="host" --env-file container1.txt -p 8001:8080 img1:latest
docker run -d --network="host" --env-file container2.txt -p 8080:8080 img2:latest
docker run -d --network="host" --env-file container3.txt -p 8000:8080 img3:latest
But this works only with the host network; if I remove the --network="host" option, I am not able to access the application from outside (in a web browser). To access it from outside, I need to make the host and container ports the same, as below.
docker run -d --env-file container1.txt -p 8001:8001 img1:latest
docker run -d --env-file container2.txt -p 8080:8080 img2:latest
docker run -d --env-file container3.txt -p 8000:8000 img3:latest
With the commands above I can access my application in a web browser, but container 3 is not able to communicate with container 1. Container 3 can still access container 2 because there I am publishing host port 8080 to container port 8080, but I can't publish host port 8080 again for container 3.
How do I resolve this?
Ultimately, the application should be accessible in a browser without using the host network; it should use a bridge network, and container 3 needs to communicate with containers 1 and 2.
On user-defined networks, containers can not only communicate by IP address but can also resolve a container name to an IP address. This capability is called automatic service discovery.
Read this for more details on Docker container networking.
You can perform the following steps to achieve the desired result.
Create a private bridge network.
docker network create --driver bridge private-net
Now start your application containers with --network private-net added to your docker run commands.
docker run -d --env-file container1.txt -p 8001:8001 --network private-net img1:latest
docker run -d --env-file container2.txt -p 8080:8080 --network private-net img2:latest
docker run -d --env-file container3.txt -p 8000:8000 --network private-net img3:latest
This way, all three containers will be able to communicate with each other and also with the internet.
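To see the service discovery in action, give the containers names and resolve them from one another (a sketch; the --name values are made up here, and curl is assumed to exist inside img3):

```shell
docker network create --driver bridge private-net
docker run -d --env-file container1.txt -p 8001:8001 --network private-net --name app1 img1:latest
docker run -d --env-file container3.txt -p 8000:8000 --network private-net --name app3 img3:latest
# From inside app3, reach app1 by container name on its container port:
docker exec app3 curl -s http://app1:8001/
```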
In this case, when you use --network=host, you are telling Docker not to isolate the network but to use the host's network directly. All the containers are then on the same network, hence they can communicate with each other without any issues. However, when you remove --network=host, Docker isolates the network again, thereby preventing container 3 from communicating with container 1.
You will need some sort of orchestration service such as Docker Compose, Docker Swarm, etc.

Two docker's container see each others in the same machine

I created a Docker image named sample, with nginx installed and listening on port 80, serving a simple index.html.
Then I used the commands below to run the containers:
docker run -it -p 80:80 --name sample1 sample
docker run -it -p 81:80 --name sample2 sample
I can successfully see the index.html of both containers from the main OS, but from inside container sample1 I can't see the index.html of sample2, and it doesn't work the other way around either.
The -p option is short for --publish. When you do -p, you are binding the container's port 80 to a port on its host.
So sample1 and sample2 are merely binding their respective port 80 to the host's ports 80 and 81; there is no direct link between them.
To make the containers visible to each other, first use the --link option, and then --expose to allow the containers to see each other through the exposed port.
Example:
docker run -it -p 80:80 --name sample1 sample
docker run -it -p 81:80 --link=sample1 --expose="80" --name sample2 sample
Essentially, --link allows the container to see the linked container by the given name;
--expose means the linked containers are able to communicate through that exposed port.
Note: linking the containers alone is not sufficient; you need to expose ports for them to communicate.
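A quick check of the linked setup (a sketch; curl is assumed to exist inside the sample image, and note that --link is nowadays a legacy feature, with user-defined networks as the usual replacement):

```shell
docker run -d -p 80:80 --name sample1 sample
docker run -d -p 81:80 --link=sample1 --expose=80 --name sample2 sample
# sample2 can now resolve sample1 by name and reach its nginx on port 80:
docker exec sample2 curl -s http://sample1/
```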
You might want to refer to the docker-compose documentation for more details:
https://docs.docker.com/compose/compose-file/
While that documentation is for docker-compose, the options are much the same as for the raw docker binary, and everything is nicely laid out on one page. That's why I prefer looking there.
In Docker you can bind a container's port to a port of the Docker machine (the machine Docker is installed on) using
docker run -it -p 80:80 image
Then you can use the Docker machine's IP and port from inside another container.

How can I run a docker container on localhost over the default IP?

I'm following a tutorial on how to start a basic nginx server in a Docker container. However, the example's nginx container runs on localhost (0.0.0.0), as shown here:
Meanwhile, when I run it, for some reason it runs on the IP 10.0.75.2:
Is there any particular reason why this is happening? And is there any way to get it to run on localhost like in the example?
Edit: I tried using --net=host but had no results:
The default network is bridge. The 0.0.0.0:49166->443 output shows a mapping of the container's exposed ports to high-numbered ports on your host because of the -P option. You can manually map specific ports by changing that flag to something like -p 8080:80 -p 443:443, which maps ports 8080 and 443 on your host into the container.
You can also change the default network to be your host network as you've requested. This removes some of the isolation and protections provided by the container, and limits your ability to configure integrations between containers, which is why it is not the default option. That syntax would be:
docker run --name nginx1 --net=host -d nginx
Edit: from your comments and a reread, I see you're also asking where the 10.0.75.2 IP address comes from. This depends on how you launch the Docker daemon: that IP binding is assigned when you pass the --ip flag to the daemon (documentation here). If you're running Docker in a VM with docker-machine, I'd expect this to be the IP of your VM.
A good workaround is to use the -p flag (short for --publish):
docker run -d -p 3000:80 --name <container_name> nginx:<version_tag>
