docker network link to 2 or more containers

As per the docker link docs, I can only --link to one (already running) container to access that container's internal ports.
How can I link one container to 2 or more other containers? (MongoDB and another web service in my case.)
(Right now I am exposing the second container's ports to the host and accessing them via host:port; another possible workaround might be to link the two containers to each other.)

docker run -d --link node1:node1 --link node2:node2 --link node3:node3 -p hostport:containerport your-image
I ran the command above and it works; you can pass --link multiple times, once per container you want to reach.

Alternatively, you can turn on inter-container communication by adding --icc=true to the Docker daemon's command line. Then you won't have to link the containers; just access them using the Docker host's IP address and the containers' published ports.
Docker Networking
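A minimal sketch of what that looks like (how you pass flags to the daemon varies by distro and init system; the host IP and port below are placeholders, not values from the question):
dockerd --icc=true
# then reach one container from another via the Docker host's address
# and the published port, e.g.:
curl http://<docker-host-ip>:<published-port>/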

For an easy solution you could use Docker Compose. In your compose file (docker-compose.yml), use the links option:
Link to containers in another service. Either specify both the service name and a link alias (SERVICE:ALIAS), or just the service name.
container_name:
  links:
    - node1
    - node2
    - node3:alias3
    - noden
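For the MongoDB-plus-web-service case from the question, a minimal sketch of a complete docker-compose.yml might look like this (the mongo image is real; the web service and second service names are illustrative, and the port placeholders come from the docker run example above):
version: "2"
services:
  web:
    image: your-image            # the image from the docker run example
    ports:
      - "hostport:containerport"
    links:
      - mongo
      - otherservice:alias3      # SERVICE:ALIAS form
  mongo:
    image: mongo
  otherservice:
    image: other-web-service     # hypothetical second service
Note that links is a legacy option; on current Docker, services on the same Compose network can already reach each other by service name.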

Related

Docker networks: How to get container1 to communicate with server in container2

I have 2 containers on a Docker bridge network. One of them runs an Apache server that I am using as a reverse proxy to forward users to the server in the other container. The other container runs a server that is listening on port 8081. I have verified that both containers are on the same network, and when I log into an interactive shell on each container I can successfully ping the other container.
The problem is that when I am logged into the container with the Apache server, I am not able to ping the actual server in the other container.
The IP address of the container with the server is 172.17.0.2.
How I create the Docker network:
docker network create -d bridge jakeypoo
How I start the containers:
docker container run -p 8080:8080 --network="jakeypoo" --name="idpproxy" idpproxy:latest
docker run -p 8081:8080 --name geoserver --network="jakeypoo" geoserver:1.1.0
Wouldn't the URI to reach the server be
http://172.17.0.2:8081/
?
PS: I am sure more information will be needed; I am new to Stack Overflow and will happily answer any other questions I can.
Since you started the two containers on the same --network, you can use their --names as host names to talk to each other. If the service inside the second container is listening on port 8080, use that port number: remappings from docker run -p options only apply to connections from the host, and you don't need a -p option at all for container-to-container communication.
In your Apache config, you'd set up something like
ProxyPass "/" "http://geoserver:8080/"
ProxyPassReverse "/" "http://geoserver:8080/"
It's not usually useful to look up the container-private IP addresses: they will change whenever you recreate the container, and in most environments they can't be used outside of Docker (and inside of Docker the name-based lookup is easier).
(Were you to run this under Docker Compose, it would automatically create a network for you, and each service would be accessible under its Compose service name. You do not need to manually set networks: or container_name: options, and, as with the docker run -p option, Compose ports: are not required for inter-container communication. Networking in Compose in the Docker documentation describes this further; see the sketch below.)
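A minimal sketch of that Compose setup, using the image names from the question (everything else is illustrative):
version: "3"
services:
  idpproxy:
    image: idpproxy:latest
    ports:
      - "8080:8080"       # only needed so the host can reach the proxy
  geoserver:
    image: geoserver:1.1.0
    # no ports: needed; idpproxy reaches it at http://geoserver:8080/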
Most probably this is the reason: when you log into one of the containers, that container does not know anything about the other container's network. When you ping, that container thinks you are trying to ping a service inside itself.
Try to use Docker Compose if you can use it in your context. Refer to this link:
https://docs.docker.com/compose/

Installing 2-node cluster of MarkLogic in docker

I'd like to install MarkLogic in Docker and form a cluster, i.e. two or more ML node instances running on the same machine. How can I achieve that?
The Building a MarkLogic Docker Container blog entry describes how to create and initialize a Docker image running MarkLogic.
Near the bottom of the article, it describes how to link multiple containers using the --link switch, and how docker-compose can assist in managing a cluster of Docker containers:
Linking Containers
You are the one who tells Docker how containers should communicate! When using the docker run command, you can also pass in a --link flag.
Consider the following examples:
docker run -d --name=marklogic1 --hostname=marklogic1.local -p 8000-8002:8000-8002 marklogic:8.05-preinitialized
docker run -d --name=marklogic2 --hostname=marklogic2.local --link marklogic1:marklogic1 -p 18000-18002:8000-8002 marklogic:8.05-preinitialized
The above creates two MarkLogic containers. The second has the --link flag. Docker networking sets environment variables and /etc/hosts entries inside the linking container and each container being linked, which lets the Docker containers communicate over the internal Docker network. The --hostname flag is used to be consistent with MarkLogic, which uses the full domain name when contacting other MarkLogic servers in the cluster, so we simply add the .local domain to the name of the container.
Finally, note that the -p flag on the second container maps MarkLogic's ports 8000 to 8002 to the host computer's ports 18000 to 18002. Why not use the host computer's ports 8000 to 8002? Because the first container is already using them. Remember, Docker shares networking with the host computer! But of course, you can choose any range of open ports on your host computer to map the container's MarkLogic ports to.
Now, simply point your browser to port 8001 in the first container (marklogic1) and go through the post-installation steps, skipping the step of joining a cluster. When finished, point your browser to port 18001 for the second container (marklogic2) and go through the post-installation steps. When asked to join a cluster, simply use the host name of localhost and leave the port number at 8001. MarkLogic in the second container will contact MarkLogic in the first container, and the configuration will be updated so that marklogic2 joins the cluster with marklogic1. Create and add a third MarkLogic container, also linking it with marklogic1:marklogic1 and marklogic2:marklogic2, and you'll soon have a proper 3-node MarkLogic cluster!
Using docker-compose
Docker has created another tool to aid in managing clusters of Docker containers. docker-compose has commands to create multiple containers and network them together; you can then create, start, and stop them using docker-compose commands. Docker uses a file called Dockerfile to build images; docker-compose uses a file called docker-compose.yml to build networks of containers.
docker-compose is available as a separate download.
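Putting the two docker run commands above into a docker-compose.yml might look roughly like this (a sketch only; the image tag, hostnames, and port mappings follow the blog's examples, the rest is illustrative):
version: "2"
services:
  marklogic1:
    image: marklogic:8.05-preinitialized
    hostname: marklogic1.local
    ports:
      - "8000-8002:8000-8002"
  marklogic2:
    image: marklogic:8.05-preinitialized
    hostname: marklogic2.local
    links:
      - marklogic1
    ports:
      - "18000-18002:8000-8002"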

dockerized app needs to interact with other dockers over localhost

I have an app that launches a Docker container and automates a few of the routines.
Now I have dockerized this app, and it is not able to talk to other containers over localhost. I tried setting
--network host
when launching the container, and now I am not able to access the containerized webapp over localhost:.
Any pointers?
localhost won't work. Suppose you are running a VM and try to talk to your host or to other VMs running on your machine. If you call localhost from one of the VMs, it's localhost for that VM only, not for your host. So you won't be able to talk from one VM to another by calling localhost. Docker works the same way with regard to localhost. You have two options:
Use a network
If you use a network, create one and add all the containers to it. This is the newer approach suggested by Docker.
docker network create <your-network-name>
docker run --network <your-network-name> --name <container-name1> <image>
docker run --network <your-network-name> --name <container-name2> <image>
Then use the container name (container-name1) to talk to that service from the other service (container-name2).
Use --link option
Or you could use the --link option, which is a legacy feature in Docker. The Docker docs say that unless you have a specific reason to use it, you shouldn't use --link anymore.
docker run --name <container1> <image>
docker run --name <container2> --link <container1>:<container1> <image>
With the link in place, container2 can reach container1 by name (links are one-directional, so add a link in each direction if both sides need it). You can use these container names in places like a DB host setting.
Did you try creating a common bridge network and attaching your containers to the same network?
Create the network:
docker network create networkname
Then, in the docker run command, add the switch --network=networkname (see the sketch below).
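A minimal sketch of the whole sequence (container and image names are illustrative):
docker network create networkname
docker run -d --network=networkname --name app1 image1
docker run -d --network=networkname --name app2 image2
# app1 can now reach app2 by name, e.g. http://app2:<port>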
I figured it out later, after going over a lot of other documents.
Step 1: install Docker inside the container. I added the following line to my Dockerfile:
RUN curl -sSL https://get.docker.com/ | sh
Step 2: provide the volume mapping in the docker run command:
-v /var/run/docker.sock:/var/run/docker.sock
Now the host's docker commands are accessible from within my current container, and without changing the --network of the current container, I'm able to access other containers over localhost.
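For reference, the full docker run invocation would look something like this (the image name is hypothetical):
docker run -d \
  -v /var/run/docker.sock:/var/run/docker.sock \
  my-automation-app
# mounting the host's Docker socket means any containers this app launches
# are siblings created by the host daemon, not children inside this container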

Docker: cannot link containers in --net=host mode

I have a Couchbase server container named db, launched with the --net=host option, which exposes port 11210, and now I have to link another container to it.
If I use the --link option while running my new container, that is, if I type:
docker run -d -P --name my_name --link db:db my_image
I get:
Error response from daemon: Conflicting options: host type networking can't be used with links. This would result in undefined behavior.
How can I solve this?
You can't.
"Linking" containers doesn't make any sense when using --net=host. When you link containers, Docker creates entries in /etc/hosts so that the containers can connect to each other by name, but with --net=host your containers do not have unique addresses; they share the host's network environment.
You can just use localhost to access services running in either container, or any valid address on your host (assuming the service is configured to listen on all available addresses).
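A minimal sketch (my_name and my_image come from the question; 11210 is the Couchbase port mentioned above):
docker run -d --net=host --name my_name my_image
# inside this container, Couchbase in the "db" container is reachable
# at localhost:11210 -- no --link needed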

Link & Expose Docker Container Simultaneously

Does the following command link two containers and also expose the port on my network?
docker run -d -p 5000:5000 --link my-postgres:postgres danwahlin/aspnetcore
I'm watching Dan Wahlin's course on Docker, and this one command is blowing my mind. Does this mean that port 5000 will be accessible from my network AND linked between the two containers? If so, then the link isn't essential for communicating between the containers, since they could just use the IP and port from a config file. Correct?
It looks like you're confusing "legacy linking" with "container networks". Creating a link, as your example shows, creates an entry in the container's hosts file so the containers can resolve each other by name.
In the example above, you created an alias of "postgres" for the "my-postgres" container. Think name resolution here. This does nothing to isolate the network stack.
Next you have the --publish (-p) switch, which publishes a container port on the host network. Here you are publishing port 5000. Without this switch you would not expose anything and, therefore, would not receive any incoming calls from outside.
Should you want to isolate containers you could do so using a "bridge network" like so:
docker network create --driver bridge mynetwork
Once the network is created, only containers added to the network can communicate with each other over it. For example:
docker run -d --net=mynetwork --name postgres postgres:latest
docker run -d --net=mynetwork --name node node:latest
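With both containers attached to mynetwork, the node container can reach Postgres by container name; a quick sketch (5432 is Postgres's default port, and the connection string is hypothetical):
docker exec -it node ping postgres     # name resolution check, if ping is installed in the image
# from the node app, something like: postgres://user:pass@postgres:5432/mydb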
