Networking in Docker Compose file

I am writing a docker compose file for my web app. If I use 'links' to connect services with each other, do I also need to include 'ports'? And is 'depends_on' an alternative to 'links'? What is the best way to connect services to one another in a compose file?

The core setup for this is described in Networking in Compose. If you do absolutely nothing, then one service can call another using its name in the docker-compose.yml file as a host name, using the port the process inside the container is listening on.
Up to startup-order issues, here's a minimal docker-compose.yml that demonstrates this:
version: '3'
services:
  server:
    image: nginx
  client:
    image: busybox
    command: wget -O- http://server/
    # Hack to make the example actually work:
    # command: sh -c 'sleep 1; wget -O- http://server/'
You shouldn't use links: at all. It was an important part of first-generation Docker networking, but it's not useful on modern Docker. (Similarly, there's no reason to put expose: in a Docker Compose file.)
You always connect to the port the process inside the container is listening on. ports: are optional; if you have ports:, cross-container calls always connect to the second port number, and the remapping has no effect. In the example above, the client container always connects to the default HTTP port 80, even if you add ports: ['12345:80'] to the server container to make it externally accessible on a different port.
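As a concrete (hedged) variation of the example above, adding a port remapping to server changes only how the host reaches it; client is unaffected:
version: '3'
services:
  server:
    image: nginx
    ports:
      - '12345:80'   # host port 12345 -> container port 80
  client:
    image: busybox
    # still talks to port 80; the remapping above is invisible here
    command: wget -O- http://server/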
depends_on: affects two things. Try adding depends_on: [server] to the client container in the example above. If you look at the "Starting..." messages that Compose prints out when it starts, this will force server to start starting before client starts starting, but this is not a guarantee that server is up and running and ready to serve requests (this is a very common problem with database containers). If you start only part of the stack with docker-compose up client, this also causes server to start with it.
A more complete typical example might look like:
version: '3'
services:
  server:
    # The Dockerfile COPYs static content into the image
    build: ./server-based-on-nginx
    ports:
      - '12345:80'
  client:
    # The Dockerfile installs
    # https://github.com/vishnubob/wait-for-it
    build: ./client-based-on-busybox
    # ENTRYPOINT and CMD will usually be in the Dockerfile
    entrypoint: wait-for-it.sh server:80 --
    command: wget -O- http://server/
    depends_on:
      - server
SO questions in this space seem to have a number of other unnecessary options. container_name: explicitly sets the name of the container for non-Compose docker commands, rather than letting Compose choose it, and it provides an alternate name for networking purposes, but you don't really need it. hostname: affects the container's internal host name (what you might see in a shell prompt, for example) but it has no effect on other containers. You can manually create networks:, but Compose provides a default network for you and there's no reason not to use it.

Related

Running an executable inside a docker container from another container

I am trying to run an executable file from another docker container while already inside a docker container. Is this possible?
version: '3.7'
services:
  py:
    build: .
    tty: true
    networks:
      - dataload
    volumes:
      - './src:/app'
      - '~/.ssh:/ssh'
  winexe:
    build:
      context: ./winexe
      dockerfile: Dockerfile
    networks:
      - dataload
    ports:
      - '8001:8001'
    volumes:
      - '~/path/to/winexe:/usr/bin/winexe'
      - '~/.ssh:/ssh'
    depends_on:
      - py
networks:
  dataload:
    driver: bridge
I am trying to access winexe from 'py'.
Assuming you mean running another Docker container from inside a container, this can be done in several ways:
Install the docker command inside your container, and either:
1) Contact the hosting Docker instance over TCP/IP. For this you will have to have exposed the Docker host to the network, which is neither default nor recommended.
2) Map the Docker socket (usually /var/run/docker.sock) into your container using a volume; see the sketch after this list. This will allow the docker command inside the container to contact the host instance directly.
Be aware this essentially gives the container root-level access to the host! I'm sure there are many more ways to do the same, but approach number 2 is the one I see most often.
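As a minimal sketch of approach 2 applied to the compose file above (this also assumes the docker CLI is installed in the py image, which the original Dockerfile may not do):
services:
  py:
    build: .
    tty: true
    networks:
      - dataload
    volumes:
      - './src:/app'
      - '~/.ssh:/ssh'
      # expose the host's Docker daemon to this container
      - '/var/run/docker.sock:/var/run/docker.sock'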
If you mean to run another executable inside another - already running - Docker container, you can do that in the above way as well, by using docker exec, or run some kind of daemon in the second container that accepts commands and runs the required command for you.
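For example, with the socket mounted and the docker CLI available inside py, something like this would run a command in the already-running winexe container (the container name is a placeholder, since Compose derives it from your project name, and the winexe arguments are purely illustrative):
docker exec <project>_winexe_1 /usr/bin/winexe --help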
So you need to think of your containers as if they were two separate computers, or servers, and they can interact accordingly.
Happily, docker-compose gives you a url you can use to communicate between the containers. In the case of your docker-compose file, you could access the winexe container from your py container like so:
http://winexe:8001 // or ws://winexe:8001 or postgres://winexe:8001 (you get the idea)
(I've used port 8001 here because that's the port you've made available for winexe – I have no idea if it could be used for this).
So now what you need is something in your winexe container that listens for that call and sends a useful reply (like a browser sending an ajax call to a server).
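For instance, from inside the py container you could trigger it with something like this (the /run path is purely hypothetical; whatever listener you write defines it):
curl http://winexe:8001/run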
Learn more here:
https://docs.docker.com/compose/networking/

Docker RUN multiple instance of a image with different parameters

I am new to docker, so this may be a bit of a basic question.
I have a VS .NET Core 2 console application that is able to take some command-line parameters and provide different services. So in a normal command prompt I can run something like:
c:>dotnet myapplication.dll 5000 .\mydb1.db
c:>dotnet myapplication.dll 5001 .\mydb2.db
which creates two instances of this application listening on ports 5000 and 5001.
I now want to create one docker image for this application, run multiple instances of that image, and have the ability to pass these parameters on the command line to the docker run command. However, I am unable to see how to configure this in either the docker-compose.yml or the Dockerfile.
Dockerfile
FROM microsoft/dotnet:2.1-aspnetcore-runtime AS base
WORKDIR /app
EXPOSE 80
# ignoring some of the code here
ENTRYPOINT ["dotnet", "myapplication.dll"]
docker-compose.yml
version: '3.4'
services:
  my.app:
    image: ${DOCKER_REGISTRY}my/app
    ports:
      - 5000:80
    build:
      context: .
      dockerfile: dir/Dockerfile
I am trying to avoid creating multiple images, one per combination of command-line arguments. So is it possible to achieve what I am looking for?
Docker containers are started with an entrypoint and a command; when the container actually starts they are simply concatenated together. If the ENTRYPOINT in the Dockerfile is structured like a single command then the CMD in the Dockerfile or command: in the docker-compose.yml contains arguments to it.
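For example, with the Dockerfile above, whatever you pass after the image name is appended to the entrypoint (the image name is shortened here for illustration):
docker run my/app 80 db1.db
# actually executes: dotnet myapplication.dll 80 db1.db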
This means you should be able to set up your docker-compose.yml as:
services:
  my.app1:
    image: ${DOCKER_REGISTRY}my/app
    ports:
      - 5000:80
    command: [80, db1.db]
  my.app2:
    image: ${DOCKER_REGISTRY}my/app
    ports:
      - 5001:80
    command: [80, db2.db]
(As a side note: if one of the options to the program is the port to listen on, this needs to match the second port in the ports: specification, and in my example I've chosen to have both listen on the "normal" HTTP port and remap it on the host using the ports: setting. One container could reach the other, if it needed to, as http://my.app2/ on the default HTTP port.)

docker rabbitmq how to expose port and reuse container with a docker file

Hi, I am finding it very confusing how I can create a docker file that would run a rabbitmq container, where I can expose the port so I can navigate to the management console via localhost and a port number.
I see someone has provided this dockerfile example, but I'm unsure how to run it.
version: "3"
services:
rabbitmq:
image: "rabbitmq:3-management"
ports:
- "5672:5672"
- "15672:15672"
volumes:
- "rabbitmq_data:/data"
volumes:
rabbitmq_data:
I have got rabbit working locally fine, but everyone tells me docker is the future, and at this rate I don't get it.
Does the above look like a valid way to run a rabbitmq container? where can I find a full understandable example?
Do I need a docker file or am I misunderstanding it?
How can I specify the port? In the example above, what are the first numbers in 5672:5672 and what are the last ones?
How can I be sure that when I run the container again, say after a machine restart, I get the same container?
Many thanks
Andrew
Docker-compose
What you posted is not a Dockerfile. It is a docker-compose file.
To run that, you need to
1) Create a file called docker-compose.yml and paste the following inside:
version: "3"
services:
rabbitmq:
image: "rabbitmq:3-management"
ports:
- "5672:5672"
- "15672:15672"
volumes:
- "rabbitmq_data:/data"
volumes:
rabbitmq_data:
2) Download docker-compose (https://docs.docker.com/compose/install/)
3) (Re-)start Docker.
4) On a console run:
cd <location of docker-compose.yml>
docker-compose up
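Optionally, you can run it in the background and follow the logs instead:
docker-compose up -d
docker-compose logs -f rabbitmq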
Do I need a docker file or am I misunderstanding it?
You have a docker-compose file. The rabbitmq:3-management is the Docker image built using the RabbitMQ Dockerfile (which you don't need). The image will be downloaded the first time you run docker-compose up.
How can I specify the port? In the example above what are the first numbers 5672:5672 and what are the last ones?
"5672:5672" specifies the port of the queue.
"15672:15672" specifies the port of the management plugin.
The numbers on the left-hand side are the ports you can access from outside of the container. So, if you want to work with different ports, change the ones on the left. The right ones are defined internally.
This means you can access the management plugin at http://localhost:15672 (or, more generically, http://<host-ip>:<host port mapped to 15672>).
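For example, to serve the management UI on a different host port, you would change only the left-hand number (a sketch):
ports:
  - "8080:15672"   # management UI now at http://localhost:8080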
You can see more info on the RabbitMQ Image on the Docker Hub.
How can I be sure that when I rerun the container, say after a machine restart that I get the same container?
I assume you want the same container because you want to persist the data. You can use docker-compose stop, restart your machine, then run docker-compose start. Then the same container is used. However, if the container is ever deleted, you lose the data inside it.
That is why you are using volumes. The data collected in your container also gets stored on your host machine. So, if you remove your container and start a new one, the data is still there because it was stored on the host machine.
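A quick way to convince yourself, assuming the compose file above (note that Compose prefixes the volume name with the project name):
docker-compose down      # removes the container, keeps the named volume
docker-compose up -d     # a fresh container re-attaches rabbitmq_data
docker volume ls         # the rabbitmq_data volume is still listed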

docker-compose scale with nginx and without environment variable

I use docker-compose to describe the deployment of one of my applications. The application is composed of:
a mongodb database,
a nodejs application,
an nginx front end serving the static files of the nodejs app.
If I scale the nodejs application, I would like nginx to adapt to the scaled instances automatically.
Recently I have been using the following code snippet:
https://gist.github.com/cmoore4/4659db35ec9432a70bca
This is based on the fact that some environment variables are created on link, and change when new servers are present.
But now, with version 2 of the docker-compose file and the new link system of docker, these environment variables don't exist anymore.
How can my nginx now detect the scaling of my application?
version: '2'
services:
  nodejs:
    build:
      context: ./
      dockerfile: Dockerfile.nodejs
    image: docker.shadoware.org/passprotect-server:1.0.0
    expose:
      - 3000
    links:
      - mongodb
    environment:
      - MONGODB_HOST=mongodb://mongodb:27017/passprotect
      - NODE_ENV=production
      - DEBUG=App:*
  nginx:
    image: docker.shadoware.org/nginx:1.2
    links:
      - nodejs
    environment:
      - APPLICATION_HOST=nodejs
      - APPLICATION_PORT=3000
  mongodb:
    image: docker.shadoware.org/database/mongodb:3.2.7
Documentation states here that:
Containers for the linked service will be reachable at a hostname identical to the alias, or the service name if no alias was specified.
So I believe that you could just set your service names in that nginx conf file like:
upstream myservice {
    server yourservice1;
    server yourservice2;
}
as they would be exported as host entries in /etc/hosts for each container.
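For completeness, here's a sketch of how that upstream block would typically be consumed elsewhere in the same nginx config:
server {
    listen 80;
    location / {
        proxy_pass http://myservice;
    }
}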
But if you really want to have that host:port information as environment variables, you could write a script to parse that docker-compose.yml and define an .env file, or do it manually.
UPDATE:
You can get that port information from outside the container; this will return the ports:
docker inspect --format='{{range $p, $conf := .NetworkSettings.Ports}} {{$p}} -> {{(index $conf 0).HostPort}} {{end}}' your_container_id
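For just the port mappings, the built-in docker port subcommand is a shorter equivalent:
docker port your_container_id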
But if you want to do it from inside a container, then what you want is a service discovery system like zookeeper.
There's a long feature request thread in docker's repo about that.
One workaround solution caught my attention. You could try building your own nginx image based on that.

Access host machine dns from a docker container

I'm using docker beta on a mac and have some services set up in service-a/docker-compose.yml:
version: '2'
services:
  service-a:
    # ...
    ports:
      - '4000:80'
I then set up the following in /etc/hosts:
::1 service-a.here
127.0.0.1 service-a.here
and I've got an nginx server running that proxies service-a.here to localhost:4000.
So on my mac I can just run: curl http://service-a.here. This all works nicely.
Now, I'm building another service, service-b/docker-compose.yml:
version: '2'
services:
  service-b:
    # ...
    ports:
      - '4001:80'
    environment:
      SERVICE_A_URL: service-a.here
service-b needs service-a for a couple of things:
1. It needs to redirect the user in the browser to the $SERVICE_A_URL
2. It needs to perform HTTP requests to service-a, also using the $SERVICE_A_URL
With this setup, only the redirection (1.) works. HTTP requests (2.) do not work because the service-b container has no notion of service-a.here in its DNS.
I tried adding service-a.here using the extra_hosts configuration option, but I'm not sure what to set it to. localhost will not work, of course.
Note that I really want to keep the docker-compose files separate (joining them would not fix my problem by the way) because they both already have a lot of services running inside of them.
Is there a way to have access to the DNS resolving on localhost from inside a docker container, so that for instance curl service-a.here will work from inside a container?
You can use the 'links' instruction in your docker-compose.yml file to automatically resolve the address of service-a from your service-b container.
service-b:
  image: blabla
  links:
    - service-a:service-a
service-a:
  image: blablabla
You will now have a line in the /etc/hosts of your service-b saying:
service-a 172.17.0.X
And note that service-a will be created before service-b when composing your app. I'm not sure how you can specify a particular IP after that, but Docker's documentation is pretty well done. Hope that's what you were looking for.
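To verify, you could inspect the hosts file from the running service-b container (a sketch using Compose):
docker-compose exec service-b cat /etc/hosts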
