Docker networking: How to know which IP will be assigned?

Trying to get acquainted with Docker, so bear with me...
If I create a database container (PostgreSQL) with port 5432 exposed, and then create another webapp container which wants to connect on 5432, they get assigned IP addresses on Docker's bridge network, probably 172.17.0.2 and 172.17.0.3 respectively. If I fire up the containers, I can inspect their IPs with docker network inspect <bridge id>.
If I then take those IPs and plug them, along with the port, into my webapp settings, everything works great...
BUT I shouldn't have to run my webapp, shell into it, change settings, and then start the server; I should be able to just run the container...
So what am I missing here? Is there a way to have these two containers networked without having to do all of that?

Use a Docker network
docker network create myapp
docker run --network myapp --name db [first-container...]
docker run --network myapp --name webapp [second-container...]
# ... and so on
Now you can refer to containers by their names from within other containers, just as if they were hostnames in DNS.
In the application running in the webapp container, you can configure the database server using db as if it were a hostname.
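For example, instead of shelling in to change settings afterwards, you could hand the webapp its database settings at startup. A minimal sketch, where the image name, credentials, and database name are placeholders:
docker run --network myapp --name webapp \
  -e DATABASE_URL=postgres://user:password@db:5432/mydb \
  webapp-image
Docker's embedded DNS resolves db to the database container's current address, whatever it happens to be after a restart.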

Related

Docker networks: How to get container1 to communicate with server in container2

I have 2 containers on a Docker bridge network. One of them runs an Apache server that I am using as a reverse proxy to forward users to the server in the other container. The other container runs a server that is listening on port 8081. I have verified both containers are on the same network, and when I log into an interactive shell on each container I can successfully ping the other container.
The problem is that when I am logged into the container with the Apache server, I am not able to reach the actual server in the other container.
The IP address of the container with the server is 172.17.0.2.
How I create the Docker network:
docker network create -d bridge jakeypoo
How I start the containers:
docker container run -p 8080:8080 --network="jakeypoo" --name="idpproxy" idpproxy:latest
docker run -p 8081:8080 --name geoserver --network="jakeypoo" geoserver:1.1.0
Wouldn't the URI to reach the server be
http://172.17.0.2:8081/
?
PS: I am sure more information will be needed; I am new to Stack Overflow and will happily answer any other questions I can.
Since you started the two containers on the same --network, you can use their --name as hostnames to talk to each other. If the service inside the second container is listening on port 8080, use that port number: for container-to-container traffic, remappings from docker run -p options are ignored, and you don't need a -p option at all to communicate between containers.
In your Apache config, you'd set up something like
ProxyPass "/" "http://geoserver:8080/"
ProxyPassReverse "/" "http://geoserver:8080/"
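As a quick sanity check, you can test name-based reachability from inside the proxy container before touching the Apache config (this assumes curl is available in the idpproxy image; if not, ping geoserver at least verifies the name resolves):
docker exec -it idpproxy curl -s http://geoserver:8080/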
It's not usually useful to look up the container-private IP addresses: they will change whenever you recreate the container, and in most environments they can't be used outside of Docker (and inside of Docker the name-based lookup is easier).
(Were you to run this under Docker Compose, it automatically creates a network for you, and each service is accessible under its Compose service name. You do not need to manually set networks: or container_name: options, and like the docker run -p option, Compose ports: are not required and are ignored if present. Networking in Compose in the Docker documentation describes this further.)
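For illustration, a minimal docker-compose.yml for this pair might look like the following sketch (the image names are taken from your commands; everything else is Compose defaults):
version: "2"
services:
  idpproxy:
    image: idpproxy:latest
    ports:
      - "8080:8080"
  geoserver:
    image: geoserver:1.1.0
Compose puts both services on one network automatically, so Apache can still reach http://geoserver:8080/.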
Most probably this is the reason:
When you log into one of the containers, that container does not know anything about the other container's network. When you ping, that container thinks you are trying to ping a service inside itself.
Try to use Docker Compose if you can use it in your context. Refer to this link:
https://docs.docker.com/compose/

Connecting Multiple Docker containers to host

TL;DR: I just want a way to forward traffic addressed to localhost on to the host, without using --net=host.
I'm running multiple containers on the same host, and they need to access an instance of Redis that's available at localhost:6379. Also, I need to use port forwarding, so using --net=host is not an option.
How can I start multiple containers and allow all of them to forward traffic addressed to localhost on to the host?
I have also tried docker run --add-host localhost:<private ip address> -p <somehostport>:<somecontainerport> my_image, with no success (I still get "connection to 127.0.0.1:6379 refused", as if localhost were not resolved to the host's private IP).
I'm running multiple containers on the same host, and need them to access an instance of Redis that's available at localhost:6379.
You can't.
If something is listening only on localhost, then you can't connect to it from another computer, from a virtual machine, or from a container. However, if your service is listening on any other address on your host, then you can simply point your containers at that address.
One solution is to configure your Redis service to listen on the address of the docker0 bridge, and then point your containers at that address.
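For instance, on many hosts the docker0 bridge sits at 172.17.0.1 (check with ip addr show docker0; yours may differ). A sketch of the relevant redis.conf change:
bind 127.0.0.1 172.17.0.1
Containers would then connect to 172.17.0.1:6379 instead of localhost:6379.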
This is better solved by a small redesign. Move redis into a container. Connect containers via container networking, and publish the redis port to localhost for anything that still isn't in a container. E.g.
docker network create redis
docker run -d --net redis -p 127.0.0.1:6379:6379 --name redis redis
docker run -d --net redis -e REDIS_URL=redis:6379 your_app
Containers communicate by container name over a user-created network, so your app will need to be configured with the new Redis URL (changing localhost to redis).
The only other solution I've seen for this involves hacking iptables rules, which isn't very stable when containers get redeployed.
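As a quick check of the redesign above (the redis image ships with redis-cli; both commands should answer PONG):
redis-cli -h 127.0.0.1 -p 6379 ping
docker run --rm --net redis redis redis-cli -h redis ping
The first goes through the published 127.0.0.1:6379 port from the host; the second resolves the redis name on the user-created network.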

Add docker host into docker network

Step 1: create a Docker network
docker network create my-work-network
Step 2: add some containers to this network
docker run ... --name my-work-pg --network my-work-network image1
docker run ... --name my-work-redis --network my-work-network image2
My question is:
Currently, from my-work-pg I can ping my-work-redis, and vice versa.
But on my Docker host I have to look up the two containers' IP addresses to access them. Is there a solution that lets my Docker host JOIN this virtual network?
Purpose
Just for development convenience, I want to access every container started in this network directly, as if the Docker host were itself a container. E.g. I could access the PostgreSQL database with the following config:
host: my-work-pg # pg container host name
port: 5432
But this is not exactly what I want.
For your purpose, the best way is to use the link option --link, passing container_name:alias. For example, to link redis with pg in your configuration, it would be something like this:
docker run ... --name my-work-redis --link my-work-pg image2
In this case the pg container's own name is used as its hostname inside the redis container, so you don't need something like my-work-pg:name-pg-internal-redis.
I can assure you there is no way to add the Docker daemon host into the user-defined network my-work-network.
Just like you wrote above, your purpose is development convenience, so I don't think your configuration host: my-work-pg is essential. As @Sebastian mentioned, you could map the application's port in the container to a port on the host with the -p flag. Then you can access the postgres service through localhost:5432.
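Concretely, that would mean publishing the port when you start the pg container and pointing your development config at the host. A sketch (adjust the host-side port if 5432 is already taken):
docker run ... --name my-work-pg --network my-work-network -p 5432:5432 image1
host: localhost
port: 5432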
I suggest you read the docs on docker.com (https://docs.docker.com/).
Hope it helps~

How to connect two docker containers through localhost?

I have two services running in separate containers: one is grunt (the application), which runs on port 9000, and the other is Sails.js (the server), which runs on port 1337. What I want to do is have the client app connect with the server through localhost:1337. Is this feasible? Thanks.
HOST
You won't be able to connect to the other container with localhost (as localhost is the current container), but you can connect via the container host (the host that is running your containers). In your case you need the boot2docker VM's IP (echo $(boot2docker ip)). For this to work, you need to expose your port at the host level (which you are doing with -p 1337:1337).
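For example, from your host shell (assuming the server was started with -p 1337:1337 as above):
curl http://$(boot2docker ip):1337/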
LINK
Another solution that is most common and that I prefer when possible, is to link the containers.
You need to add the --name flag to the server docker run command:
--name sails_server
You need to add the --link flag to the application docker run command:
--link sails_server:sails_server
And inside your application, you will be able to access the server at sails_server:1337.
You could also use environment variables to get the server IP. See documentation: https://docs.docker.com/userguide/dockerlinks/
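For example, with --link sails_server:sails_server, Docker injects variables like these into the application container (assuming the server image exposes port 1337; the address shown is illustrative):
SAILS_SERVER_PORT_1337_TCP_ADDR=172.17.0.2
SAILS_SERVER_PORT_1337_TCP_PORT=1337
You can inspect them with env | grep SAILS_SERVER from inside the container.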
BONUS: DOCKER-COMPOSE
Your run commands may start to get a bit long... in this case I like to use docker-compose, which allows me to define my containers and their relationships (volumes, names, links, commands...) in one file.
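A minimal sketch of such a docker-compose.yml, using the link from above (the image names are placeholders):
sails_server:
  image: my-sails-image
  ports:
    - "1337:1337"
app:
  image: my-app-image
  links:
    - sails_server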
Yes: if you use the docker parameter -p 1337:1337 in your docker run command, it will publish port 1337 from inside the container to localhost:1337 on your host.

Running Docker container on a specific port

I have a Rails app deployed with Dokku on DigitalOcean. I've created a Postgres database and linked it with the Rails app. Everything worked fine until I restarted the droplet. I figured out that the apps stopped working because on restart every Docker container gets a new port, and the Rails application isn't able to connect to it. If I run dokku postgresql:info myapp it shows the old port, even though the actual port has changed. If I manually change database.yml and push it to the dokku repo, everything works.
So how do I prevent Docker from assigning different port each time the server restarts? Or maybe there is an option to change ports of running containers.
I don't have much experience with Dokku, but in Docker there's no such thing as a container's own fixed port.
In Docker you can expose a container's port to receive incoming requests and map it to a specific port on your host machine.
With that you can, for instance, run your postgres inside a container and tell Docker that you want to expose 5432, the default PostgreSQL port, to receive incoming requests:
sudo docker run --expose=5432 -P <IMAGE> <COMMAND>
The --expose=5432 tells Docker to expose port 5432 to receive incoming connections from the outside world.
The -P flag tells Docker to map all exposed ports in your container to randomly chosen ports on the host machine.
With that you can connect to postgres by pointing at your host's IP and the mapped port.
If you want to map a container's port to a specific host machine port, you can use the -p flag:
sudo docker run --expose=5432 -p 666:5432 <IMAGE> <COMMAND>
This maps host port 666 to the container's port 5432.
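If you let Docker pick the host port (the -P case above), you can ask it afterwards which host port a container's 5432 was mapped to:
docker port <CONTAINER> 5432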
Not sure if this can help you with Dokku environment, but I hope so.
For more information about docker's run command see: https://docs.docker.com/reference/commandline/cli/#run
