Docker container communication restrictions

My setup is based on running two Docker containers, one with an API and the other with a DB.
With this setup, both containers currently expose a port to the outside world.
What I want instead is for the DB container (toolname-db) to be reachable only from the API container (toolname-api), so that the DB is not exposed to the outside directly.
How do I have to alter my setup to achieve this?
Currently I use the following commands:
sudo docker build -t toolname .
sudo docker run -d -p 3333:3333 --name=toolname-db mdillon/postgis
sudo docker run -it -p 4444:4444 --name=toolname-api --network=host -d toolname

A container will only be reachable from outside Docker space if it has published ports. So you need to remove the -p option from your database container.
For the two containers to be able to talk to each other they need to be on the same network. Docker's default bridge network exists for compatibility with what is now a very old networking setup, so you need to create a network yourself, though it doesn't need any special settings.
Finally, you don't need --net host. That disables all of Docker's networking setup; port mappings with -p are disabled, and you can't communicate with containers that don't themselves have ports published. (I usually see it recommended as a hack to work around hard-coded localhost connection strings.)
That leaves your final setup as:
sudo docker build -t toolname .
sudo docker network create tool
sudo docker run -d --net=tool --name=toolname-db mdillon/postgis
sudo docker run -d --net=tool -p 4444:4444 --name=toolname-api toolname
As @BentCoder suggests in a comment, it's very common to use Docker Compose to run multiple containers together. If you do, it creates a network for you, which saves you a step.
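A minimal docker-compose.yml sketch of this setup, written as a shell snippet so it can be pasted directly (the service names, image, and build context come from the question; everything else is an assumption):
cat > docker-compose.yml <<'EOF'
version: "3"
services:
  toolname-db:
    image: mdillon/postgis   # no ports: entry, so only reachable on the Compose network
  toolname-api:
    build: .
    ports:
      - "4444:4444"          # only the API is published to the outside
EOF
sudo docker-compose up -d
The API container can then reach the database at the hostname toolname-db on the network Compose creates.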

Related

How Docker containers can access each other

In my docker-compose setup, I have 2 containers.
How can I make these 2 containers access each other as if they were installed on one host without containers?
How can they see each other and each other's file systems?
To allow inter-container communication, create a common bridge network and put both containers on that network. The build phase does not strictly need the --network switch, assuming nothing needs to "talk" to anything else during the build.
docker network create jointops
docker build --network jointops -t srv1 /srv1
docker build --network jointops -t srv2 /srv2
docker run --network jointops -d -t srv1
docker run --network jointops -d -t srv2
To check that both containers are now on the same network, issue the command
docker network inspect jointops
You should see an IP allocation for both containers.
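If you only want the container names and addresses, a Go-template filter is a quick way to see them (a sketch; the exact fields can vary between Docker versions):
docker network inspect -f '{{range .Containers}}{{println .Name .IPv4Address}}{{end}}' jointops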
OK... so how do they communicate?
The user-defined bridge network jointops performs DNS resolution of container names by default.
So if srv1 has something like
curl http://srv2/bla/bla/bla
This will be resolved correctly.
Regarding shared data access: do not run 2 apps in 1 container. Instead:
create a docker volume
run 2 separate containers
each container can mount the same volume (see the sketch below)
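A minimal sketch of that (the volume name, image names, and the /data mount point are all assumptions):
docker volume create shared-data
docker run -d --name srv1 -v shared-data:/data srv1
docker run -d --name srv2 -v shared-data:/data srv2
Both containers now read and write the same files under /data.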
Each container encapsulates its contents, so for inter-container communication use ports rather than trying to openly expose the full filesystem of one container to another.
If both applications need access to the same filesystem, consider running both in the same container. That is supported.

Dockerized app needs to interact with other containers over localhost

I have an app that launches a docker container and automates a few of the routines.
Now I have dockerized this app, but it is not able to talk to other containers over localhost. I tried setting
--network host
when launching the container, and now I am not able to access the containerized webapp over localhost:.
Any pointers?
localhost won't work. Suppose you are running a VM and try to talk to your host or to other VMs running on your machine. If you call localhost from one of those VMs, it refers to that VM only, not to your host. So you won't be able to talk from one VM to another by calling localhost. Docker works the same way with regard to localhost. You have two options:
Use a network
If you go with a network, create one and add all the containers to it. This is the approach Docker now recommends.
docker network create <your-network-name>
docker run --network <your-network-name> --name <container-name1> <image>
docker run --network <your-network-name> --name <container-name2> <image>
Then use the container name (container-name1) to reach that service from the other service (container-name2).
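For example, assuming container-name1 runs an HTTP service on port 8080 (a made-up port), container-name2 can reach it by name:
# run from inside container-name2; the name resolves via Docker's embedded DNS
curl http://container-name1:8080/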
Use --link option
Or you could use the --link option, which is a legacy feature of Docker. The Docker docs say that unless you have a specific reason to use it, you should not use --link anymore.
docker run --name <container1> <image>
docker run --name <container2> --link <container1>:<container1> <image>
With this, container2 can reach container1 by its name (legacy links only work in the direction they are declared). You could use these container names in places like the DB host, etc.
Did you try creating a common bridge network and attaching your containers to the same network?
Create the network:
docker network create networkname
and then add the --network=networkname switch to your docker run commands.
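For example (the container and image names are placeholders):
docker run -d --network=networkname --name=app1 <image1>
docker run -d --network=networkname --name=app2 <image2>
# app1 and app2 can now reach each other by container name on this network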
I figured it out later after going over a lot of other documents.
Step 1: install Docker inside the container. I added the following line to my Dockerfile:
RUN curl -sSL https://get.docker.com/ | sh
Step 2: provide a volume mapping in the docker run command:
-v /var/run/docker.sock:/var/run/docker.sock
Now the host's Docker daemon is reachable from within my container, and without changing the --network of the current container I am able to access the other containers over localhost.
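Putting the two steps together, the launcher container might be started roughly like this (the image and container names are made up):
# mounting the host's Docker socket lets the docker CLI inside the container
# talk to the host's daemon, so containers it starts behave like any other host container
docker run -d \
  -v /var/run/docker.sock:/var/run/docker.sock \
  --name launcher launcher-image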

How to set docker back to default configuration

I installed Docker Compose and used it a little, then decided I did not need it. Now, when I create containers by hand, they are assigned a network with an IP address, gateway and other settings. When I inspected older containers, created before I installed Docker Compose, they did not have these network settings.
I have tried uninstalling Docker Compose and reinstalling Docker, which did not work. Is there anything I can do? The reason I am asking is that I can't link containers together, because every new container is assigned an IP address and other network settings.
Docker always does that; it has nothing to do with Compose. Compose doesn't modify your Docker installation in any way; it purely connects to the daemon and runs commands under the hood.
By linking containers together I'm assuming you just mean so they can communicate with each other? --link has been deprecated for some time now in favor of docker network .... Try the following:
$ docker network create test-net
$ docker run -d --name c1 --net test-net alpine:3.3 sleep 20000
$ docker run -it --name c2 --net test-net alpine:3.3 ping c1

Multiple docker containers as web server on a single IP

I have multiple docker containers on a single machine. Each container runs a process plus a web server that provides an API for that process.
My question is: how can I access the API from my browser when the default port is 80? To be able to access the web server inside a docker container I do the following:
sudo docker run -p 80:80 -t -i <yourname>/<imagename>
This way I can do from my computers terminal:
curl http://hostIP:80/foobar
But how to handle this with multiple containers and multiple web servers?
You can either publish the containers on different host ports, e.g.
docker run -p 8080:80 -t -i <yourname>/<imagename>
docker run -p 8081:80 -t -i <yourname1>/<imagename1>
or put a proxy (nginx, apache, varnish, etc.) in front of your API containers.
Update:
The easiest way to set up a proxy would be to link it to the API containers, e.g. with an Apache config like
RewriteEngine On
RewriteRule ^api1/(.*)$ http://api1/$1 [proxy]
RewriteRule ^api2/(.*)$ http://api2/$1 [proxy]
you may run your containers like this:
docker run --name api1 <yourname>/<imagename>
docker run --name api2 <yourname1>/<imagename1>
docker run --link api1:api1 --link api2:api2 -p 80:80 <my_proxy_container>
This might be somewhat cumbersome, though, if you need to restart the API containers, as the proxy container would have to be restarted as well (links are fairly static in Docker as of yet). If this becomes a problem, you might look at approaches like Fig or an auto-updated proxy configuration: http://jasonwilder.com/blog/2014/03/25/automated-nginx-reverse-proxy-for-docker/ . The latter link also shows proxying with nginx.
Update II:
In more modern versions of Docker it is possible to use a user-defined network instead of the links shown above, which overcomes some of the inconveniences of the deprecated link mechanism.
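A rough sketch of the same proxy setup with a user-defined network instead of links (the network name is made up):
docker network create api-net
docker run -d --name api1 --network api-net <yourname>/<imagename>
docker run -d --name api2 --network api-net <yourname1>/<imagename1>
docker run -d --network api-net -p 80:80 <my_proxy_container>
# the proxy still resolves api1 and api2 by container name, as in the RewriteRule examples above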
Only one process can be bound to a given host port at a time, so running multiple containers means each will be published on a different host port number. Docker can assign these automatically for you with the -P flag.
sudo docker run -P -t -i <yourname>/<imagename>
You can use the "docker port" and "docker inspect" commands to see the actual port number allocated to each container.
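For example (the container name is a placeholder):
docker port <container-name>
docker inspect -f '{{json .NetworkSettings.Ports}}' <container-name>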

How to assign a host port to container port using docker if container is already created and running?

We can create a new container and publish the application port in the docker run command, like
sudo docker run -d -p 5000:5000 training/webapp python app.py
or
sudo docker run -d -P training/webapp python app.py
But what if someone forgot to specify the -p or -P option in the docker run command? The container gets created and runs the application locally. Now how could I map the port on which the application is running inside the container to a port of my Ubuntu host machine?
Kindly help with this.
Thanks.
Short answer: you can't. You need to stop the container (or leave it running) and start a new one with the proper parameters.
Docker spins up a local proxy and sets up iptables rules for the NAT. If you really can't start a new container, you could manually set up the iptables rules and spin up a socat process. You can take a look at the networking part of the Docker code for more info.
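A rough sketch of that workaround (the IP and port below are assumptions; look up the real values first):
# find the container's internal IP on the default bridge
docker inspect -f '{{.NetworkSettings.IPAddress}}' <container-name>
# forward host port 5000 to the app inside the container
# (assumes the inspect above printed 172.17.0.2 and the app listens on 5000)
socat TCP-LISTEN:5000,fork TCP:172.17.0.2:5000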

Resources