Now that links are deprecated in docker-compose.yml (and we're able to use the new networking feature to communicate between containers), we've lost a way to explicitly define dependencies between containers. How can we now tell our mysql container to come up first, before our api-server container starts up (which connects to mysql via the DNS entry myapp_mysql_1 defined in docker-compose.yml)?
You can use volumes_from as a workaround until the depends_on feature (discussed below) is introduced. Assuming you have an nginx container depending on a php container, you could do the following:
nginx:
  image: nginx
  ports:
    - "42080:80"
  volumes:
    - ./config/docker/nginx/default.conf:/etc/nginx/conf.d/default.conf:ro
  volumes_from:
    - php

php:
  build: config/docker/php
  ports:
    - "42022:22"
  volumes:
    - .:/var/www/html
  env_file: config/docker/php/.env.development

mongo:
  image: mongo
  ports:
    - "42017:27017"
  volumes:
    - /var/mongodata/wa-api:/data/db
  command: --smallfiles
One big caveat of the above approach is that the volumes of php are exposed to nginx, which is not desired. But at the moment this is one Docker-specific workaround that can be used.
depends_on feature

This would be a forward-looking answer, because the functionality is not yet implemented in Docker Compose (as of 1.9).

There is a proposal to introduce depends_on in the new networking feature introduced by Docker, but there is a long-running debate about it: https://github.com/docker/compose/issues/374. Hence, once it is implemented, depends_on could be used to order container start-up, but at the moment you would have to resort to the approach above.
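For reference, depends_on was eventually implemented, and in later Compose file formats (2.1+) it can be combined with a healthcheck so that start-up also waits for the dependency to actually be ready. A minimal sketch; the service names and the mysqladmin health command are assumptions about your setup:

```yaml
version: "2.1"
services:
  mysql:
    image: mysql
    healthcheck:
      # report healthy once the server answers a ping
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
      interval: 10s
      timeout: 5s
      retries: 5
  api-server:
    build: .
    depends_on:
      mysql:
        condition: service_healthy  # wait for the healthcheck, not just the start
```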
Related
I have this docker-compose.yml, with a Postgres database and Grafana running on top of it to query the data.
version: "3"
services:
  db:
    image: postgres
    container_name: db
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_PASSWORD=my_secret_password
  grafana:
    image: grafana/grafana
    container_name: grafana
    depends_on:
      - db
    ports:
      - "3000:3000"
I start this compose with the command docker-compose up, but then, if I don't want to lose any data, I must run docker-compose stop instead of docker-compose down.
I also read about docker commit, but "the commit operation will not include any data contained in volumes mounted inside the container", so I guess it's no use for my needs.
What's the proper way to store the created volumes and reuse them with the up/down commands, even when recreating the containers? Must I use some sort of backup method provided by each image (so, for example, a DB export for Postgres, and some other type of export for Grafana), or is there a way to do this inside docker-compose.yml?
EDIT:
I also read about volumes, but is there a standard way to store everything?
In the link provided by @DannyB, setting the volume to ./postgres-data:/var/lib/postgresql instead of ./postgres-data:/var/lib/postgresql/data caused the container not to store the actual data folder.

My question is: must every image follow a particular pattern like the one above? Is the path to the data that the volume should cover documented in every Docker image's README? Or is there something like:
volumes:
  - ./my_image_root:/
Docker provides volumes as the way to persist data between container invocations and to share data between containers.
They are quite simple to declare and use in compose files:
volumes:
  postgres:
  grafana:

services:
  db:
    image: postgres
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_PASSWORD=my_secret_password
    volumes:
      - postgres:/var/lib/postgresql/data
  grafana:
    image: grafana/grafana
    depends_on:
      - db
    volumes:
      - grafana:/var/lib/grafana
    ports:
      - "3000:3000"
Optionally, you can also set a local directory as your container volume, with the added convenience of having the files easily accessible, not only from inside the container. This is especially helpful for mounting specific config files to their locations in the container: you can edit the file locally like any other file and restart the container with the updated configuration (certificates and other similar files also make good use of this option). You do that like so:
volumes:
  - /home/myusername/postgres_data/:/var/lib/postgresql/data/
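Related to this, if you want a named volume to survive even an accidental docker-compose down -v, you can declare it external so Compose never removes it. A small sketch, assuming the volume was created beforehand with docker volume create postgres:

```yaml
volumes:
  postgres:
    external: true  # managed outside Compose; `down -v` will not delete it
```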
PS. I have omitted the container_name and version directives from this compose.yml because (as of Docker 20.10) the Compose spec determines the version automatically, and docker compose exposes enough functionality that accessing the containers directly by short names is usually not necessary.
Having the following architecture:
Microservice 1 + DB (microservice1/docker-compose.yml)
Microservice 2 + DB (microservice2/docker-compose.yml)
Proxy (proxy/docker-compose.yml)
Which of the following options would be the best to deploy in the production environment?
Docker Compose Overriding. Have a docker-compose file for each microservice and another for the proxy. When the production deployment is done, all the docker-compose files would be merged into one (with docker-compose -f microservice1/docker-compose.yml -f microservice2/docker-compose.yml -f proxy/docker-compose.yml up). In this way, the proxy container, for example nginx, would have access to the microservices and be able to redirect to one or the other depending on the request.
Shared external network. Have a docker-compose file for each microservice and another for the proxy. First, an external network would have to be created to link the proxy container with the microservices: docker network create nginx_network. Then, in each docker-compose file, this network should be referenced in the necessary containers so that the proxy has visibility of the microservices and can use them in its configuration. An example is in the following link: https://stackoverflow.com/a/48081535/6112286.
The first option is simple, but offers little flexibility when configuring many microservices or applications, since the docker-compose files of all applications would need to be merged to generate the final configuration. The second option uses networks, which are a fundamental pillar of Docker, and it doesn't require all the docker-compose files to be merged.
Of these two options, given the scenario of having several microservices and needing a single proxy to configure access, which would be the best? Why?
Thanks in advance.
There is a third approach, for example documented in https://www.bogotobogo.com/DevOps/Docker/Docker-Compose-Nginx-Reverse-Proxy-Multiple-Containers.php and https://github.com/Einsteinish/Docker-compose-Nginx-Reverse-Proxy-II/. The gist of it is to have the proxy join all the other networks. Thus, you can keep the other compose files, possibly from a software distribution, unmodified.
docker-compose.yml
version: '3'
services:
  proxy:
    build: ./
    networks:
      - microservice1
      - microservice2
    ports:
      - 80:80
      - 443:443
networks:
  microservice1:
    external:
      name: microservice1_default
  microservice2:
    external:
      name: microservice2_default
Proxy configuration
The proxy will refer to the hosts by their names microservice1_app_1 and microservice2_app_1, assuming the services are called app in directories microservice1 and microservice2.
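To illustrate, the proxy's nginx config could then reference those names directly as upstream hosts. This is a sketch only: the server names and the port 8080 are placeholders, not values from the question:

```nginx
server {
    listen 80;
    server_name service1.example.com;                # placeholder domain
    location / {
        proxy_pass http://microservice1_app_1:8080;  # container name as host
    }
}

server {
    listen 80;
    server_name service2.example.com;                # placeholder domain
    location / {
        proxy_pass http://microservice2_app_1:8080;
    }
}
```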
docker-compose is designed to orchestrate multiple containers in a single file. I do not know the content of your docker-compose files, but the right way is to write a single docker-compose.yml that could contain:
version: '3.7'
services:
  microservice1_app:
    image: ...
    volumes: ...
    networks:
      - service1_app
      - service1_db
  microservice1_db:
    image: ...
    volumes: ...
    networks:
      - service1_db
  microservice2_app:
    image: ...
    volumes: ...
    networks:
      - service2_app
      - service2_db
  microservice2_db:
    image: ...
    volumes: ...
    networks:
      - service2_db
  nginx:
    image: ...
    volumes: ...
    networks:
      - default
      - service1_app
      - service2_app
volumes:
  ...
networks:
  service1_app:
  service1_db:
  service2_app:
  service2_db:
  default:
    name: proxy_frontend
    driver: bridge
In this way, the nginx container is able to communicate with the microservice1_app container through the microservice1_app hostname. If other hostnames are needed, they can be configured with the aliases subsection within a service's networks section.
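For example, a sketch of such an alias (the name app1.internal here is invented for illustration):

```yaml
services:
  microservice1_app:
    networks:
      service1_app:
        aliases:
          - app1.internal  # nginx can now also reach this service as app1.internal
```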
Security Bonus
In the above configuration, microservice1_db is only visible to microservice1_app (same for microservice2), nginx is only able to see microservice1_app and microservice2_app, and nginx is reachable from outside of Docker (bridge mode).
I'm not quite sure about the correct usage of docker networks.
I'm running a (single-host) reverse proxy and the containers for the application itself, but I would like to set up networks like proxy, frontend and backend, the last one per project (e.g. project1), assuming there could be multiple projects in the end.
But I'm not even sure if this structure is the way it should be done. I think the backend should only be accessible to the frontend, and the frontend should be accessible to the proxy.
So this is my current working structure with only one network (bridge) - which doesn't make sense:
Reverse proxy (network: reverse-proxy):
  jwilder/nginx-proxy
  jrcs/letsencrypt-nginx-proxy-companion
Database:
  mongo:3.6.2
Project 1:
  one/frontend
  one/backend
Project 2:
  two/frontend
  two/backend
So my first docker-compose looks like this:
version: '3.5'
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    container_name: nginx-proxy
    networks:
      - reverse-proxy
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/docker/nginx-proxy/vhost.d:/etc/nginx/vhost.d:rw
      - html:/usr/share/nginx/html
      - /opt/nginx/certs:/etc/nginx/certs:ro
      - /var/run/docker.sock:/tmp/docker.sock:ro
  nginx-letsencrypt:
    image: jrcs/letsencrypt-nginx-proxy-companion
    container_name: nginx-letsencrypt
    networks:
      - reverse-proxy
    depends_on:
      - nginx-proxy
    volumes:
      - /var/docker/nginx-proxy/vhost.d:/etc/nginx/vhost.d:rw
      - html:/usr/share/nginx/html
      - /opt/nginx/certs:/etc/nginx/certs:rw
      - /var/run/docker.sock:/var/run/docker.sock:rw
    environment:
      NGINX_PROXY_CONTAINER: "nginx-proxy"
  mongodb:
    container_name: mongodb
    image: mongo:3.6.2
    networks:
      - reverse-proxy
volumes:
  html:
networks:
  reverse-proxy:
    external:
      name: reverse-proxy
That means I had to create the reverse-proxy before. I'm not sure if this is correct so far.
The project applications - frontend containers and backend containers - are created by my CI using docker commands (not docker compose):
docker run \
  --name project1-one-frontend \
  --network reverse-proxy \
  --detach \
  -e VIRTUAL_HOST=project1.my-server.com \
  -e LETSENCRYPT_HOST=project1.my-server.com \
  -e LETSENCRYPT_EMAIL=mail@my-server.com \
  project1-one-frontend:latest
How should I split this into useful networks?
TL;DR: You can attach multiple networks to a given container, which lets you isolate traffic to a great degree.
useful networks
Point of context: I'm inferring from the question that "useful" means there's some degree of isolation between services.

I think the backend should only be accessible to the frontend and the frontend should be accessible to the proxy.
This is pretty simple with docker-compose. Just specify the networks you want at the top level, just like you've done for reverse-proxy:
networks:
  reverse-proxy:
    external:
      name: reverse-proxy
  frontend:
  backend:
Then something like this:
version: '3.5'
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    container_name: nginx-proxy
    networks:
      - reverse-proxy
    ports:
      - "80:80"
      - "443:443"
    volumes:
      ...
  frontend1:
    image: some/image
    networks:
      - reverse-proxy
      - backend
  backend1:
    image: some/otherimage
    networks:
      - backend
  backend2:
    image: some/otherimage
    networks:
      - backend
...
Set up like this, only frontend1 can reach backend1 and backend2. I know this isn't an option, since you said you're running the application containers (frontends and backends) via docker run. But I think it's a good illustration of how to achieve roughly what you're after within Docker's networking.
So how can you do what's illustrated in docker-compose.yml above? I found this: https://success.docker.com/article/multiple-docker-networks
To summarize, you can only attach one network using docker run, but you can use docker network connect <container> <network> to connect running containers to more networks after they're started.
The order in which you create networks, run docker-compose up, or run your various containers in your pipeline is up to you. You can create the networks inside the docker-compose.yml if you like, or use docker network create and import them into your docker-compose stack. It depends on how you're using this stack, and that will determine the order of operations here.
The guiding rule, probably obvious, is that the networks need to exist before you try to attach them to a container. The most straightforward pipeline might look like..
1. docker-compose up with all networks defined in the docker-compose.yml
2. For each app container:
   - docker run the container
   - docker network connect the right networks
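Sketched as commands, with placeholder image, container and network names:

```shell
# 1. bring up the stack; Compose creates any networks defined in the file
docker-compose up -d

# 2. start an app container (docker run can attach only one network at launch)
docker run --detach --name app1 --network frontend some/image

# 3. attach the running container to any additional networks
docker network connect backend app1
```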
... would like to set up networks like proxy, frontend and backend. ... I think the backend should only be accessible to the frontend and the frontend should be accessible to the proxy.
Networks in docker don't talk to other docker networks, so I'm not sure if the above was in reference to networks or containers on those networks. What you can have is a container on multiple docker networks, and it can talk with services on either network.
The important part about designing a network layout with docker is that any two containers on the same network can communicate with each other and will find each other using DNS. Where people often mess this up is creating something like a proxy network for a reverse proxy, attaching multiple microservices to the proxy network and suddenly find that everything on that proxy network can find each other. So if you have multiple projects that need to be isolated from each other, they cannot exist on the same network.
In other words if app-a and app-b cannot talk to each other, but do need to talk to the shared proxy, then the shared proxy needs to be on multiple app specific networks, rather than each app being on the same shared proxy network.
This can get much more complicated depending on your architecture. E.g. one design that I've been tempted to use is to have each stack have its own reverse proxy that is attached to the application's private network and to a shared proxy network, without publishing any ports. A global reverse proxy then publishes the port and talks to each stack-specific reverse proxy. The advantage there is that the global reverse proxy does not need to know all of the potential app networks in advance, while you still expose only a single port and avoid microservices connecting to each other through the shared proxy network.
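A sketch of that layout for a single stack; every name and image below is invented for illustration:

```yaml
# Per-stack compose file: the stack's own reverse proxy joins the private
# application network and a shared proxy network, publishing no ports.
services:
  app:
    image: example/app
    networks: [private]
  stack-proxy:
    image: nginx
    networks: [private, shared-proxy]   # note: no ports published here
networks:
  private:
  shared-proxy:
    external: true   # created once, shared with the global reverse proxy
```

The global reverse proxy would then be the only container publishing 80/443, attached only to shared-proxy, and it would forward to each stack-proxy rather than to the microservices directly.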
I have two different Docker containers and each has a different image. Each app in the containers uses non-conflicting ports. See the docker-compose.yml:
version: "2"
services:
  service_a:
    container_name: service_a.dev
    image: service_a.dev
    ports:
      - "6473:6473"
      - "6474:6474"
      - "1812:1812"
    depends_on:
      - postgres
    volumes:
      - ../configs/service_a/var/conf:/opt/services/service_a/var/conf
  postgres:
    container_name: postgres.dev
    hostname: postgres.dev
    image: postgres:9.6
    ports:
      - "5432:5432"
    volumes:
      - ../configs/postgres/scripts:/docker-entrypoint-initdb.d/
I can curl each service successfully from the host machine (macOS), e.g. curl -k https://localhost:6473/service_a/api/version works. What I'd like to do is to be able to refer to the postgres container from the service_a container via localhost, as if these two containers were one and shared the same localhost. I know that it's possible to use the hostname postgres.dev from inside the service_a container, but I'd like to be able to use localhost. Is this possible? Please note that I am not very well versed in networking or Docker.
Mac version: 10.12.4
Docker version: Docker version 17.03.0-ce, build 60ccb22
I have done quite some prior research, but couldn't find a solution.
Relevant: https://forums.docker.com/t/localhost-and-docker-compose-networking-issue/23100/2
The right way: don't use localhost. Instead, use Docker's built-in DNS networking and reference the containers by their service name. You shouldn't even be setting the container name, since that breaks scaling.
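For example, service_a would be configured to reach the database at the hostname postgres (the DB_HOST environment variable is an assumption about how service_a reads its configuration):

```yaml
version: "2"
services:
  service_a:
    image: service_a.dev
    environment:
      - DB_HOST=postgres   # resolved by Docker's DNS to the postgres service
    depends_on:
      - postgres
  postgres:
    image: postgres:9.6
```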
The bad way: if you don't want to use the docker networking feature, then you can switch to host networking, but that turns off a very key feature and other docker capabilities like the option to connect containers together in their own isolated networks will no longer work. With that disclaimer, the result would look like:
version: "2"
services:
  service_a:
    container_name: service_a.dev
    image: service_a.dev
    network_mode: "host"
    depends_on:
      - postgres
    volumes:
      - ../configs/service_a/var/conf:/opt/services/service_a/var/conf
  postgres:
    container_name: postgres.dev
    image: postgres:9.6
    network_mode: "host"
    volumes:
      - ../configs/postgres/scripts:/docker-entrypoint-initdb.d/
Note that I removed port publishing from the container to the host, since you're no longer in a container network. And I removed the hostname setting since you shouldn't change the hostname of the host itself from a docker container.
The linked forum posts you reference show how when this is a VM, the host cannot communicate with the containers as localhost. This is an expected limitation, but the containers themselves will be able to talk to each other as localhost. If you use a VirtualBox based install with docker-toolbox, you should be able to talk to the containers by the virtualbox IP.
The really wrong way: abuse the container network mode. The mode is available for debugging container networking issues and specialized use cases and really shouldn't be used to avoid reconfiguring an application to use DNS. And when you stop the database, you'll break your other container since it will lose its network namespace.
For this, you'll likely need to run two separate docker-compose.yml files because docker-compose will check for the existence of the network before taking any action. Start with the postgres container:
version: "2"
services:
  postgres:
    container_name: postgres.dev
    image: postgres:9.6
    ports:
      - "5432:5432"
      # publish service_a's ports here too: service_a will share this
      # container's network namespace and cannot publish its own ports
      - "6473:6473"
      - "6474:6474"
      - "1812:1812"
    volumes:
      - ../configs/postgres/scripts:/docker-entrypoint-initdb.d/
Then you can make a second service in that same network namespace:
version: "2"
services:
  service_a:
    container_name: service_a.dev
    image: service_a.dev
    network_mode: "container:postgres.dev"
    # no ports here: port publishing is not allowed together with a
    # container network mode, so the ports are published on postgres.dev
    volumes:
      - ../configs/service_a/var/conf:/opt/services/service_a/var/conf
Specifically for Mac, and during local testing, I managed to get multiple containers working using the docker.for.mac.localhost approach. I documented it at http://nileshgule.blogspot.sg/2017/12/docker-tip-workaround-for-accessing.html
I want to restart a container automatically if it crashes, and I am not sure how to go about doing this. I have a compose file, docker-compose-deps.yml, that has elasticsearch, redis, nats, and mongo. I run this in the terminal to set it up: docker-compose -f docker-compose-deps.yml up -d. After this I set up my containers by running docker-compose up -d. Is there a way to make these containers restart if they crash? I noticed that Docker has a built-in restart policy, but I don't know how to implement it.
After some feedback I added restart: always to my docker-compose file and my docker-compose-deps.yml file. Does this look correct? Or is this how you would implement the restart always?
docker-compose sample
myproject-server:
  build: "../myproject-server"
  dockerfile: Dockerfile-dev
  restart: always
  ports:
    - 5880:5880
    - 6971:6971
  volumes:
    - "../myproject-server/src:/src"
  working_dir: "/src"
  external_links:
    - nats
    - mongo
    - elasticsearch
    - redis

myproject-associate:
  build: "../myproject-associate"
  dockerfile: Dockerfile-dev
  restart: always
  ports:
    - 5870:5870
  volumes:
    - "../myproject-associate/src:/src"
  working_dir: "/src"
  external_links:
    - nats
    - mongo
    - elasticsearch
    - redis
docker-compose-deps.yml sample
nats:
  image: nats
  container_name: nats
  restart: always
  ports:
    - 4222:4222

mongo:
  image: mongo
  container_name: mongo
  restart: always
  volumes:
    - "./data:/data"
  ports:
    - 27017:27017
If you're using Compose, it has a restart option analogous to the one in the docker run command, so you can use that. Here is a link to the documentation about this part:
https://docs.docker.com/compose/compose-file/
When you deploy out, it depends where you deploy to. Most container clusters like kubernetes, mesos or ECS would have some configuration you can use to auto-restart your containers. If you don't use any of these tools you are probably starting your containers manually and can then just use the restart flag just as you would locally.
Looks good to me. What you want to understand when working with Docker restart policies is what each one means. The always policy means that if the container stops for any reason, Docker automatically restarts it.
So why would you ever want to use always as opposed to, say, on-failure?

In some cases, you might have a container that you always want to ensure is running, such as a web server. If you are running a public web application, chances are you want that server to be available 100% of the time.

So for a web application, I expect you want to use always. On the other hand, if you are running a worker process that operates on a file and then naturally exits, that is a good use case for the on-failure policy: the worker container might be finished processing the file, and you probably want to let it exit and not have it restart.

That's where I would expect to use the on-failure policy. So it's not just about knowing the syntax, but about when to apply which policy and what each one means.
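Put side by side, a sketch with a hypothetical worker service:

```yaml
services:
  web:
    image: nginx
    restart: always       # public-facing: keep it running no matter why it stopped
  file-worker:
    image: example/worker # hypothetical batch worker
    restart: on-failure   # restart only on a non-zero exit; let clean exits finish
```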