Docker – fix service IP addresses [duplicate]

This question already has answers here:
Assign static IP to Docker container
(8 answers)
Closed 7 years ago.
I have a docker-compose setup with a bunch of backend services (postgres, redis, ...), a few apps (rails, node, ...) and an nginx on top of it.
The apps are connected to the databases using docker env variables (e.g. DOCKERCOMPOSEDEMO_POSTGRES_1_PORT_5432_TCP_ADDR), and the nginx is connected to the apps using the docker generated /etc/hosts: (e.g. upstream nodeapp1-upstream { server dockercomposedemo_node_app1_1:3000; })
The problem is that each time I restart a service it gets a new IP address, and thus everything on top of it can no longer connect to it. So restarting a Rails app requires restarting nginx, and restarting a database requires restarting the apps and nginx.
Am I doing something wrong, or is this the intended behaviour? Always restarting all that stuff doesn't look like a good solution.
Thank you

It is the intended behaviour. There are many ways to avoid restarting dependent services; I'm using the following approach:
I run most of my dockerized services tied to their own static IPs:
I create IP aliases for all services on the Docker host (see the sketch after the sample below).
Then I run each service, redirecting ports from that IP into the container, so each service has its own static IP which can be used by external users and by other containers.
Sample:
docker run --name dns --restart=always -d -p 172.16.177.20:53:53/udp dns
docker run --name registry --restart=always -d -p 172.16.177.12:80:5000 registry
docker run --name cache --restart=always -d -p 172.16.177.13:80:3142 -v /data/cache:/var/cache/apt-cacher-ng cache
docker run --name mirror --restart=always -d -p 172.16.177.19:80:80 -v /data/mirror:/usr/share/nginx/html:ro mirror
...
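The host-side IP aliases themselves can be created with something like the following (a sketch; the interface name eth0 is an assumption, and on a real host you would persist these in the network configuration rather than adding them by hand):
# add secondary addresses (IP aliases) on the Docker host's interface
sudo ip addr add 172.16.177.20/24 dev eth0
sudo ip addr add 172.16.177.12/24 dev eth0
sudo ip addr add 172.16.177.13/24 dev eth0
sudo ip addr add 172.16.177.19/24 dev eth0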

Related

Docker container talks to docker container in the same local host? [duplicate]

This question already has answers here:
accessing a docker container from another container
(8 answers)
Closed 1 year ago.
I am confused right now. I have tried many things I found on the web, but none of them solved it. I have Windows 10 and Docker Desktop installed, using WSL 2 to host Linux containers. I use the following command to start the Jenkins website.
docker run --name jenkins-master-c -d -p 8080:8080 -p 50000:50000 -v jenkins_home:/var/jenkins_home jenkins/jenkins:2.282-alpine
This works fine. I can access the website using http://localhost:8080/
The problem is that when I try to curl http://localhost:8080 from another Alpine Docker container, I don't get the web page back; it says connection refused. I tried my own tiny web service on my Windows machine without Docker. Same thing. I can access the web service using a web browser on Windows 10. However, from inside a container, I couldn't access the web service on localhost.
I know I am missing something really basic, because the web doesn't seem to cover this topic. I am just on my own computer without anything fancy, so I just want to use localhost. The web says the default is supposed to be the bridge network, on which containers should be able to talk to each other easily, but it is not working for me. What am I missing? Maybe I shouldn't type localhost? But what else should I do?
Thank you
Edit: I just want to explain what I did to get my problem solved. Creating a network with --network my-network-name was what I originally did, which failed because the way I curled the web page was wrong. I used --name jenkins-master-c only to make it easy to locate my container in docker ps. But, as suspected in my question, localhost was wrong, which is confirmed by the solution. Instead of using localhost, I do curl http://jenkins-master-c:8080, which worked. Thanks
localhost is always a question of perspective; it refers to the current machine. This means that if you call localhost from a container, the container talks to itself, not to the machine you see as localhost. If you want to call a service running on the host, you have to use the host's real IP address.
You can think of Docker containers as individual virtual machines: they have their own localhost, and they are isolated from your host PC and from other containers.
Now, if you want two or more Docker containers to communicate with each other, you can use a bridge network. From Docker's perspective, a bridge network allows containers connected to the same bridge network to communicate, while providing isolation from containers which are not connected to that bridge network. See the Docker docs on bridge networks.
On the other hand, if you want to communicate with your Docker container from your host, you need to port-forward, i.e. open/expose a port to connect to the container (which you did with -p 8080:8080).
Another way to bring your containers under a shared local host is Kubernetes: in Kubernetes you can run one or more containers in a pod, and they will then share the same network namespace. See the Kubernetes docs on pods.
Probably these two containers are not on the same network, so they cannot see and talk to each other.
First of all, create a network with the Docker command docker network create SOMENAME, and then run the containers again (both of them):
docker run --name jenkins-master-c --network SOMENAME -d -p 8080:8080 -p 50000:50000 -v jenkins_home:/var/jenkins_home jenkins/jenkins:2.282-alpine
Now it should be reachable from any other Docker container attached to the same network.
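For example, from a second container attached to the same network (a sketch; curlimages/curl is just one convenient image that ships with curl, and the container name comes from the command above):
docker run --rm --network SOMENAME curlimages/curl -s http://jenkins-master-c:8080/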

What goes in "some-network" placeholder in dockerized redis cli?

I'm looking at documentation here, and see the following line:
$ docker run -it --network some-network --rm redis redis-cli -h some-redis
What should go in the --network some-network field? The docker run command in the preceding field did default port mapping, i.e. docker run -d -p 6379:6379, etc.
I'm starting my redis server with default docker network configuration, and see this is in use:
$ docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
abcfa8a32de9 redis "docker-entrypoint.s…" 19 minutes ago Up 19 minutes 0.0.0.0:6379->6379/tcp some-redis
However, using the default bridge network produces:
$ docker run -it --network bridge --rm redis redis-cli -h some-redis
Could not connect to Redis at some-redis:6379: Name or service not known
Ignore the --network bridge command and use:
docker exec -it some-redis redis-cli
Docker includes support for networking containers through the use of network drivers. By default, Docker provides two network drivers for you, the bridge and the overlay drivers. You can also write a network driver plugin so that you can create your own drivers but that is an advanced task.
Read more here.
https://docs.docker.com/engine/tutorials/networkingcontainers/
https://docs.docker.com/v17.09/engine/userguide/networking/
You need to run
docker network create some-network
It doesn't matter what name some-network is, just so long as the Redis server, your special CLI container, and any clients talking to the server all use the same name. (If you're using Docker Compose this happens for you automatically and the network will be named something like directoryname_default; use docker network ls to find it.)
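For example, a minimal end-to-end sequence might look like this (a sketch, reusing the names from the documentation; the server container must also be attached to the network):
docker network create some-network
docker run -d --name some-redis --network some-network redis
docker run -it --network some-network --rm redis redis-cli -h some-redis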
If your Redis server is already running, you can use docker network connect to attach the existing container to the new network. This is one of the few settings you're able to change after you've created a container.
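For example (assuming the already-running container is named some-redis, as above):
docker network connect some-network some-redis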
If you're just trying to run a client to talk to this Redis, you don't need Docker for this at all. You can install the Redis client tools locally and run redis-cli, pointing at your host's IP address and the first port in the docker run -p option. The Redis wire protocol is simple enough that you can also use primitive tools like nc or telnet as well.
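For example, with the -p 6379:6379 mapping shown earlier and redis-cli installed locally (a sketch):
redis-cli -h 127.0.0.1 -p 6379 ping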

dockerized app needs to interact with other dockers over localhost

I have an app that launches a docker container and automates a few of the routines.
Now I have dockerized this app, and it is not able to talk to other containers over localhost. I tried setting
--network host
when launching the container, and now I am not able to access the containerized web app over localhost:.
Any pointers?
localhost won't work. Suppose you are running a VM and try to talk to your host or to other VMs running on your machine. If you call localhost from one of the VMs, it is localhost for that VM only, not for your host. So you won't be able to talk from one VM to another by calling localhost. Docker works the same way with regard to localhost. You have two options:
Use a network
If you use a network, create one and add all the containers to that network. This is the newer way suggested by Docker.
docker network create <your-network-name>
docker run --network <your-network-name> --name <container-name1> <image>
docker run --network <your-network-name> --name <container-name2> <image>
Then use the container name (container-name1) to talk to that service from the other service (container-name2).
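For example, from inside container-name2 you could reach the first service by name (a sketch; the port 8080 is an assumption about what the service listens on):
curl http://<container-name1>:8080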
Use the --link option
Or you could use the --link option, which is a legacy feature of Docker. The Docker docs say that unless you have a specific reason to use it, you should not use --link anymore.
docker run --name <container1> <image>
docker run --name <container2> --link <container1>:<container1> <image>
You can then use the name container1 from inside container2 (links are one-directional). You could use these container names in places like the DB host, etc.
Did you try creating a common bridge network and attaching your containers to the same network?
Create the network:
docker network create networkname
and then add the switch --network=networkname to your docker run command.
I figured it out later after going over a lot of other documents.
Step 1: install Docker inside the container. I added the following line to my Dockerfile:
RUN curl -sSL https://get.docker.com/ | sh
Step 2: provide the volume mapping in the docker run command:
-v /var/run/docker.sock:/var/run/docker.sock
Now the host's Docker commands are accessible from within my current container, and without changing the --network for the current container, I'm able to access other containers over localhost.
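Putting the two steps together, the run command for the automation container might look roughly like this (a sketch; the image and container names are assumptions):
# mount the host's Docker socket so the container can drive the host's Docker daemon
docker run -d --name automation-app -v /var/run/docker.sock:/var/run/docker.sock automation-app-image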

Multiple docker containers as web server on a single IP

I have multiple Docker containers on a single machine. Each container runs a process and a web server that provides an API for that process.
My question is, how can I access the API from my browser when the default port is 80? To be able to access the web server inside a Docker container, I do the following:
sudo docker run -p 80:80 -t -i <yourname>/<imagename>
This way I can do from my computers terminal:
curl http://hostIP:80/foobar
But how to handle this with multiple containers and multiple web servers?
You can either expose multiple ports, e.g.
docker run -p 8080:80 -t -i <yourname>/<imagename>
docker run -p 8081:80 -t -i <yourname1>/<imagename1>
or put a proxy (nginx, Apache, Varnish, etc.) in front of your API containers.
Update:
The easiest way to set up a proxy would be to link it to the API containers, e.g. with an Apache config like
RewriteRule ^api1/(.*)$ http://api1/$1 [proxy]
RewriteRule ^api2/(.*)$ http://api2/$1 [proxy]
you may run your containers like this:
docker run --name api1 <yourname>/<imagename>
docker run --name api2 <yourname1>/<imagename1>
docker run --link api1:api1 --link api2:api2 -p 80:80 <my_proxy_container>
This might be somewhat cumbersome though if you need to restart the API containers, as the proxy container would have to be restarted as well (links are fairly static in Docker at the moment). If this becomes a problem, you might look at approaches like fig or auto-updated proxy configuration: http://jasonwilder.com/blog/2014/03/25/automated-nginx-reverse-proxy-for-docker/ . The latter link also shows proxying with nginx.
Update II:
In more modern versions of Docker it is possible to use a user-defined network instead of the links shown above, to overcome some of the inconveniences of the deprecated link mechanism.
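With a user-defined network, the example above could look roughly like this (a sketch; the network name api-net is an assumption, and the proxy then reaches the APIs by container name instead of via links):
docker network create api-net
docker run --name api1 --network api-net <yourname>/<imagename>
docker run --name api2 --network api-net <yourname1>/<imagename1>
docker run --network api-net -p 80:80 <my_proxy_container>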
Only a single process can be bound to a given host port at a time. So running multiple containers means each will be exposed on a different port number. Docker can assign these automatically for you using the -P flag.
sudo docker run -P -t -i <yourname>/<imagename>
You can use the "docker port" and "docker inspect" commands to see the actual port number allocated to each container.

How do I run 2 environments of SkyDns/Skydock simultaneously?

Ref: https://github.com/crosbymichael/skydock
https://github.com/crosbymichael/skydns
First I fired up those two instances.
docker run -d -p 8080:8080 -p 172.17.42.1:53:53/udp --name skydns crosbymichael/skydns -nameserver 8.8.8.8:53 -domain docker
docker run -d -v /var/run/docker.sock:/docker.sock --name skydock crosbymichael/skydock -ttl 30 -environment dev -s /docker.sock -domain docker -name skydns
And this setup is working as expected.
Now I want to spawn another, production, environment. This time I only fired up another skydock container with the environment prod, as follows.
docker run -d -v /var/run/docker.sock:/docker.sock --name skydock-prod crosbymichael/skydock -ttl 30 -environment prod -s /docker.sock -domain docker -name skydns
Querying the API doesn't show the production skydock.
curl $(docker-ip):8080/skydns/services/
And now I am wondering how to set up the production version of skydock.
Do I have to run it on a separate Docker host?
If I fire it up on the same Docker host, under which DNS entry will the new containers be available?
Do I have to pass some flags/variables when I start new containers so that they become available in the production environment?
I don't know of a way to make 2 or more skydock instances listen to the same docker.sock (within a single host machine). I think it is conceptually not right: Docker containers know nothing about your logical environments (production, staging, ...).
I have a multi-host setup with skydns and skydock. I run skydns on a separate host. Each of the two other servers runs a single instance of skydock, which registers all Docker container IPs in the centralised SkyDNS, so that all containers are visible by DNS name across the different physical hosts.
All of that works on top of the Flannel network overlay https://github.com/coreos/flannel (which requires etcd).
