Docker: cannot link containers in --net=host mode

I have a Couchbase server container named db, launched with the --net=host option, which exposes port 11210. Now I have to link another container to it.
If I use the --link option when running my new container, that is, if I type:
docker run -d -P --name my_name --link db:db my_image
I get:
Error response from daemon: Conflicting options: host type networking can't be used with links. This would result in undefined behavior.
How can I solve this?

You can't.
"Linking" containers doesn't make any sense when using --net=host. When you link containers, Docker creates entries in /etc/hosts so that the containers can connect to each other by name, but when using --net=host your containers do not have unique addresses. They are sharing the host network environment.
You can just use localhost to access services running in either container, or any valid address on your host (assuming that your service is configured to listen on all available addresses).
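For example, with both containers sharing the host's network stack, the second container can reach Couchbase on the loopback interface. A minimal sketch, assuming the names from the question and that nc is available in the client image:
docker run -d --name db --net=host couchbase
docker run -d --name my_name --net=host my_image
# from inside my_name, the Couchbase data port is reachable on localhost
nc -z localhost 11210 && echo "couchbase reachable"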

Related

Docker network bridge

I'm trying to run multiple containers with the same ports on docker.
For this, I have created a network in bridge mode and specified a subnet.
docker network create -d bridge --subnet 192.168.99.0/24 mynetwork
Then I connected the docker containers to it with static IPs.
docker run -i -t -d -p 2377:2377 -p 7946:7946 -p 4789:4789 --name container image
docker network connect --ip 192.168.99.98 mynetwork container
I did this with three containers (using different IPs); after starting the second one I got:
Error response from daemon: driver failed programming external connectivity on endpoint container(...): Bind for 0.0.0.0:7946 failed: port is already allocated
As far as I can tell, I should not be getting this error, since the containers are on a bridge network.
The docker run -p option allocates a port on the host system; host ports are shared across all containers, regardless of which Docker-private network they're using. They will also conflict with non-Docker processes running on the host.
If your goal is just to communicate between containers on the same network, you don't need a -p option at all. One container can use another's --name and the port the service inside that container is listening on.
If you're trying to run multiple Docker container stacks at the same time, you need to decide which specific instance port 2377 on your host will route to, and change the other containers' -p options.
Setting the Docker-internal private IP addresses explicitly (or worrying about them at all) is almost never necessary. I'd delete the --subnet and --ip options. To communicate between containers, put them on the same network as described above; from outside, you need a (unique) -p option.
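As a minimal sketch of that setup (image and container names are placeholders), the containers talk to each other by name with no -p at all, and only the externally reachable container publishes a unique host port:
docker network create mynetwork
# no -p needed for container-to-container traffic
docker run -d --name backend --network mynetwork my_backend_image
# only the externally reachable container publishes a (unique) host port
docker run -d --name frontend --network mynetwork -p 8080:80 my_frontend_image
# inside "frontend", the backend is reachable at backend:<service port>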

Dockerized app needs to interact with other containers over localhost

I have an app that launches a docker container and automates a few of the routines.
Now I have dockerized this app, but it is not able to talk to other containers over localhost. I tried setting
--network host
when launching the container, and now I'm not able to access the containerized webapp over localhost:<port>.
Any pointers?
localhost won't work. Suppose you are running a VM and try to talk to your host or to other VMs running on your machine. If you call localhost from one of the VMs, it is localhost for that VM only, not for your host, so you cannot talk from one VM to another by calling localhost. Docker behaves the same way with regard to localhost. You have two options:
Use a network
Create a network and add all the containers to it. This is the approach Docker now recommends.
docker network create <your-network-name>
docker run --network <your-network-name> --name <container-name1> <image>
docker run --network <your-network-name> --name <container-name2> <image>
Then use the container name (container-name1) to talk to that service from the other service (container-name2).
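For example, Docker's embedded DNS resolves container names on a user-defined network, so a quick connectivity check might look like this (the nginx and curlimages/curl images are just illustrative choices):
docker network create app-net
docker run -d --network app-net --name web nginx
docker run --rm --network app-net curlimages/curl http://web:80/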
Use the --link option
Alternatively, you could use the --link option, which is a legacy feature of Docker. The Docker docs say not to use --link anymore unless you have a specific reason to.
docker run --name <container1> <image>
docker run --name <container2> --link <container1> <image>
Container2 can then reach container1 by name (note that links are one-directional: container2 can resolve container1, not the other way around). You can use the container name in places like a DB host setting.
Did you try creating a common bridge network and attaching your containers to the same network?
Create the network:
docker network create networkname
Then add the switch --network=networkname to your docker run command.
I figured it out later, after going over a lot of other documents.
Step 1: Install Docker inside the container. I added the following line to my Dockerfile:
RUN curl -sSL https://get.docker.com/ | sh
Step 2: Provide a volume mapping in the docker run command:
-v /var/run/docker.sock:/var/run/docker.sock
Now the host's docker commands are accessible from within my container, and without changing the --network of the current container, I'm able to access other containers over localhost.
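Putting the two steps together, a minimal sketch might look like this (the base image and app name are placeholders; note that mounting the socket effectively gives the container root-level control over the host's Docker daemon, so treat it as privileged):
# Dockerfile
FROM ubuntu:22.04
RUN apt-get update && apt-get install -y curl
RUN curl -sSL https://get.docker.com/ | sh
# build and run with the host's Docker socket mounted
docker build -t my_automation_app .
docker run -d -v /var/run/docker.sock:/var/run/docker.sock my_automation_app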

Communicating between a Windows and a Linux docker container on the same host

This may seem trivial, but after some trial and error I come to the SO community for a little help!
I create a network, call it docker-net.
I have a Linux container, let's call it LC1, that has a published port of 6789 (so when created it had the parameter -p 6789:6789), and I make it join the docker-net network (--network docker-net).
This works fine, through my host, I can communicate with it no problem.
I switch to the Windows containers and check that LC1 is still running. It is! Amazing.
I create a container, let's call it WC1. It also publishes a port of 9000 that maps internally to 80 (-p 9000:80)
The application inside WC1 tries to connect to LC1 using the IP assigned by the network (from docker inspect LC1), but it can't communicate.
There's probably a concept I can't get my head around.
I understand that WC1 and LC1 have different gateways and subnets. Could that be the culprit?
Any help getting this to work is appreciated!
EDIT:
Here are the commands I ran for the scenario above:
docker network create docker-net
docker run -d -p 6789:6789 --name LC1 --network docker-net LC1
docker inspect LC1
The IP is 172.18.0.2
switch to the windows container
docker run -d -p 9000:80 --name WC1 WC1
The docker network connect documentation states that you can assign an IP address to a container; the same should work with docker run --network <name> --ip <address>. Then use that IP to access the container.
Specify the IP address a container will use on a given network
You can specify the IP address you want to be assigned to the container's interface.
$ docker network connect --ip 10.10.36.122 multi-host-network container2
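A hedged sketch of the docker run variant (note that a static --ip only works on a user-defined network created with an explicit --subnet; the names and addresses here are illustrative):
docker network create --subnet 172.20.0.0/16 fixed-net
docker run -d --network fixed-net --ip 172.20.0.10 --name LC1 my_linux_image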
I have found these:
a deleted question on serverfault about the same issue. See the cached-by-google version: Connect Windows container to Linux container running on same Docker host [closed]
an article: Run Linux and Windows Containers on Windows 10
and I think that the only way to make the two containers communicate is through the host, by exposing ports. For example, LC1 would use -p [your app port]:8080 and WC1 -p [your app port]:9090.
By [your app port] I mean that it is up to you to decide what to expose (a TCP/UDP listening socket, a REST API, ...).
As docker evolves maybe there will be a better solution in the near future.
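As a sketch of the host-mediated approach, assuming Docker Desktop on Windows, where host.docker.internal resolves to the host (ports and image names are placeholders):
# the Linux container publishes its port on the host
docker run -d -p 6789:6789 --name LC1 my_linux_image
# the Windows container reaches LC1 through the host, not through a shared network
docker run -d -p 9000:80 --name WC1 my_windows_image
# inside WC1, connect to host.docker.internal:6789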

How can I run a docker container on localhost over the default IP?

I'm following a tutorial on how to start a basic nginx server in a docker container. However, the tutorial's nginx container runs on localhost (0.0.0.0), while when I run it, for some reason it runs on the IP 10.0.75.2.
Is there any particular reason why this is happening? And is there any way to get it to run on localhost like in the example?
Edit: I tried using --net=host but had no results.
The default network is bridged. The 0.0.0.0:49166->443 shows a mapping of ports exposed in the container to high-numbered ports on your host, because of the -P option. You can manually map specific ports by changing that flag to something like -p 8080:80 -p 443:443 to have ports 8080 and 443 on your host map into the container.
You can also change the default network to be your host network as you've requested. This removes some of the isolation and protections provided by the container, and limits your ability to configure integrations between containers, which is why it is not the default option. That syntax would be:
docker run --name nginx1 --net=host -d nginx
Edit: from your comments and a reread I see you're also asking where the 10.0.75.2 IP address comes from. This depends on how you launch the Docker daemon: that binding is assigned when the --ip flag is passed to the daemon (see the dockerd documentation). If you're running Docker in a VM with docker-machine, I'd expect this to be the IP of your VM.
A good workaround is to use the -p flag (--publish in its long form):
docker run -d -p 3000:80 --name <container_name> nginx:<version_tag>
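With that mapping in place, the server should answer on the host's loopback address, for example:
curl http://localhost:3000/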

docker network link to 2 or multiple containers

As per the docker link docs, I can only --link one (already running) container to access its internal ports.
How can I link one container to 2 or more other containers? (MongoDB and another web service in my case.)
(Right now I am exposing the second container's ports to the host and then accessing them via host:port; another possible workaround might be linking the two containers to each other.)
docker run -d --link node1:node1 --link node2:node2 --link node3:node3 -p hostport:containerport your-image
I ran the command above and it works.
Alternatively, you can turn on inter-container communication by adding --icc=true to the Docker daemon's command line; then you won't have to link the containers, and can just access them using the Docker host's IP address and the containers' published ports.
Docker Networking
For an easy solution you could use Docker Compose. In your compose file (docker-compose.yml), use the links option:
Link to containers in another service. Either specify both the service name and a link alias (SERVICE:ALIAS), or just the service name.
container_name:
  links:
    - node1
    - node2
    - node3:alias3
    - noden
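A slightly more complete sketch of such a compose file (service and image names are placeholders):
version: "2"
services:
  web:
    image: your-web-image
    ports:
      - "8080:80"
    links:
      - node1
      - node2
      - node3:alias3
  node1:
    image: mongo
  node2:
    image: your-service-image
  node3:
    image: your-service-image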
