Why are containers called services in Docker Compose?

I am creating containers using docker-compose. In Docker Compose, the technical name for a container is a service. Why is that?
version: "3.9"
services:
web:
build: .
ports:
- "1026:5000"
depends_on:
- redis
volumes:
- vol1:/code
redis:
image: "redis:alpine"
container_name: db
volumes:
vol1:

A service is a specification for running containers. The service persists while, at any point in time, there may be zero, one, or many containers that are its tasks, or replicas.
With Compose, you can scale the number of replicas of a service using the scale option. There are then many containers, but still one service to manage them.
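For example, with the compose file above, a quick sketch of scaling the web service:

docker-compose up -d --scale web=3   # run three replicas of the web service
docker-compose ps web                # one service, three containers

(Note: the fixed host port "1026:5000" would have to be dropped or changed for this to work, since multiple replicas cannot share one host port.)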
If containers shut down, the service's restart policy can have rules about restarting them; this causes new containers to be created. Each stopped container can still be examined for its own crash (or success) logs, but the service remains the single point of contact for retrieving the list of containers, their status, and so on.
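A minimal sketch of both pieces, assuming the web service above:

services:
  web:
    build: .
    restart: on-failure   # the engine restarts the container when it exits with a non-zero status

docker-compose ps        # list the service's containers and their status, including exited ones
docker-compose logs web  # retrieve logs for all of the service's containers through the service name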

Related

Limit deployment of certain services with docker swarm/compose

I have this compose file:
version: "3.3"
services:
icecc-scheduler:
build: services/icecc-scheduler
restart: unless-stopped
network_mode: host
icecc-daemon:
build: services/icecc-daemon
restart: unless-stopped
network_mode: host
I then have a docker swarm configured with 5 machines, the one I'm on is the manager. When I deploy my stack I want the icecc-daemon container to be deployed to all nodes in the swarm while the icecc-scheduler is only deployed once (preferably to the swarm manager). Is there any way to have this level of control with docker compose/stack/swarm?
Inside docker swarm, you can achieve the desired behaviour by using placement constraints.
To ensure a service is deployed only to the manager node, the constraint should be: "node.role==manager"
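In a compose file for docker stack deploy, that constraint sits under the service's deploy section, e.g.:

deploy:
  placement:
    constraints:
      - "node.role==manager"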
To ensure the service is deployed only once, you can use the

deploy:
  mode: replicated
  replicas: 1

section. This will make your service run as a single replica across the whole swarm cluster.
To have the service deployed as exactly one container per swarm node, you can use:

deploy:
  mode: global
More information on these parameters can be found in the official docs.
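Putting this together for the stack in the question, a sketch with the same service names (note that docker stack deploy ignores build:, so the images would need to be built and pushed to a registry first):

version: "3.3"
services:
  icecc-scheduler:
    build: services/icecc-scheduler
    restart: unless-stopped
    network_mode: host
    deploy:
      mode: replicated
      replicas: 1
      placement:
        constraints:
          - "node.role==manager"
  icecc-daemon:
    build: services/icecc-daemon
    restart: unless-stopped
    network_mode: host
    deploy:
      mode: global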

Multiple apps (microservices) and one proxy (nginx) docker-compose configuration/architecture

Having the following architecture:
Microservice 1 + DB (microservice1/docker-compose.yml)
Microservice 2 + DB (microservice2/docker-compose.yml)
Proxy (proxy/docker-compose.yml)
Which of the following options would be the best to deploy in the production environment?
Docker Compose Overriding. Have a docker-compose file for each microservice and another for the proxy. When the production deployment is done, all the docker-compose files would be merged to create only one (with docker-compose -f microservice1/docker-compose.yml -f microservice2/docker-compose.yml -f proxy/docker-compose.yml up). In this way, the proxy container, for example nginx, would have access to the microservices to be able to redirect to one or the other depending on the request.
Shared external network. Have a docker-compose file for each microservice and another for the proxy. First, an external network would have to be created to link the proxy container with the microservices: docker network create nginx_network. Then, in each docker-compose file, this network should be referenced in the necessary containers so that the proxy has visibility of the microservices and can use them in its configuration. An example is in the following link: https://stackoverflow.com/a/48081535/6112286.
The first option is simple, but offers little flexibility when configuring many microservices or applications, since the docker-compose files of all applications would need to be merged to generate the final configuration. The second option uses networks, which are a fundamental pillar of Docker, and doesn't require all the docker-compose files to be merged.
Of these two options, given the scenario of having several microservices and needing a single proxy to configure access, which would be the best? Why?
Thanks in advance.
There is a third approach, for example documented in https://www.bogotobogo.com/DevOps/Docker/Docker-Compose-Nginx-Reverse-Proxy-Multiple-Containers.php and https://github.com/Einsteinish/Docker-compose-Nginx-Reverse-Proxy-II/. The gist of it is to have the proxy join all the other networks. Thus, you can keep the other compose files, possibly from a software distribution, unmodified.
docker-compose.yml
version: '3'
services:
  proxy:
    build: ./
    networks:
      - microservice1
      - microservice2
    ports:
      - 80:80
      - 443:443
networks:
  microservice1:
    external:
      name: microservice1_default
  microservice2:
    external:
      name: microservice2_default
Proxy configuration
The proxy will refer to the hosts by their names microservice1_app_1 and microservice2_app_1, assuming the services are called app in directories microservice1 and microservice2.
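As a quick sanity check that the proxy can actually resolve those names, something along these lines (assuming ping is available in the proxy image):

docker-compose exec proxy ping -c 1 microservice1_app_1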
docker-compose is designed to orchestrate multiple containers in one single file. I do not know the content of your docker-compose files, but the right way is to write one single docker-compose.yml that could contain:
version: '3.7'
services:
  microservice1_app:
    image: ...
    volumes: ...
    networks:
      - service1_app
      - service1_db
  microservice1_db:
    image: ...
    volumes: ...
    networks:
      - service1_db
  microservice2_app:
    image: ...
    volumes: ...
    networks:
      - service2_app
      - service2_db
  microservice2_db:
    image: ...
    volumes: ...
    networks:
      - service2_db
  nginx:
    image: ...
    volumes: ...
    networks:
      - default
      - service1_app
      - service2_app
volumes:
  ...
networks:
  service1_app:
  service1_db:
  service2_app:
  service2_db:
  default:
    name: proxy_frontend
    driver: bridge
In this way the nginx container is able to communicate with the microservice1_app container through the microservice1_app hostname. If other hostnames are needed, they can be configured with the aliases subsection within a service's networks section.
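For instance, a sketch of such an alias (the name api1.internal is hypothetical):

services:
  microservice1_app:
    networks:
      service1_app:
        aliases:
          - api1.internal   # extra DNS name for this container on the service1_app network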
Security Bonus
In the above configuration, microservice1_db is only visible to microservice1_app (same for microservice2), and nginx is only able to see microservice1_app and microservice2_app while being reachable from outside of Docker (bridge mode).

How to connect my docker container (frontend) to a containerized database running on a different VM

Unable to connect to containers running on separate docker hosts.
I've got 2 docker Tomcat containers running on 2 different Ubuntu VMs. System-A has a webservice running and System-B has a db. I haven't been able to figure out how to connect the application running on System-A to the db running on System-B. When I run the database on System-A, the application (which is also running on System-A) can connect to it. I'm using docker-compose to set up the network, which works fine when both containers are running on the same VM. I've exec'd into the application container on System-A and looked at its /etc/hosts file, and I think what's missing is the IP address of System-B.
services:
  db:
    image: mydb
    hostname: mydbName
    ports:
      - "8012:8012"
    networks:
      data:
        aliases:
          - mydbName
  api:
    image: myApi
    hostname: myApiName
    ports:
      - "8810:8810"
    networks:
      data:
networks:
  data:
You would configure this exactly the same way you would if Docker weren't involved: configure the Tomcat instance with the DNS name or IP address of the other server. You also need to make sure the database is published outside of Docker space using a ports: directive.
On server-a.example.com you could run this docker-compose.yml file:
version: '3'
services:
  api:
    image: myApi
    ports:
      - "8810:8810"
    environment:
      DATABASE_URL: "http://server-b.example.com:8012"
And on server-b.example.com:
version: '3'
services:
  db:
    image: mydb
    ports:
      - "8012:8012"
In principle it would be possible to set up an overlay network connecting the two hosts, but this is a significantly more complicated setup.
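For completeness, a sketch of that overlay approach using swarm mode (the join token placeholder comes from the init output):

# on server-a.example.com: put the engine into swarm mode
docker swarm init

# on server-b.example.com: join, using the token printed by the command above
docker swarm join --token <token> server-a.example.com:2377

# create an overlay network that standalone containers can attach to
docker network create -d overlay --attachable shared-net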
(You definitely don't want to use docker exec to modify /etc/hosts in a container: you'll have to repeat this step every time you delete and recreate the container, and manually maintaining hosts files is tedious and error-prone, particularly if you're moving containers between hosts. Consul could work as a service-discovery system that provides a DNS service.)

rationale behind docker compose "links" order

I have a Redis - Elasticsearch - Logstash - Kibana stack in docker which I am orchestrating using docker compose.
Redis will receive the logs from a remote location, forward them to Logstash, and then on to the customary Elasticsearch and Kibana.
In the docker-compose.yml, I am confused about the order of "links"
Elasticsearch links to no one while logstash links to both redis and elasticsearch
elasticsearch:
redis:
logstash:
  links:
    - elasticsearch
    - redis
kibana:
  links:
    - elasticsearch
Is this order correct? What is the rationale behind choosing the "link" direction?
Why don't we say elasticsearch is linked to logstash?
Instead of using the Legacy container linking method, you could instead use Docker user defined networks. Basically you can define a network for your services and then indicate in the docker-compose file that you want the container to run on that network. If your containers all run on the same network they can access each other via their container name (DNS records are added automatically).
1) Create User Defined Network
docker network create pocnet
2) Update docker-compose file
You want to add your containers to the network you just created. Your docker-compose file would look something along the lines of this:
version: '2'
services:
  elasticsearch:
    image: elasticsearch
    container_name: elasticsearch
    ports:
      - "{your:ports}"
    networks:
      - pocnet
  redis:
    image: redis
    container_name: redis
    ports:
      - "{your:ports}"
    networks:
      - pocnet
  logstash:
    image: logstash
    container_name: logstash
    ports:
      - "{your:ports}"
    networks:
      - pocnet
  kibana:
    image: kibana
    container_name: kibana
    ports:
      - "5601:5601"
    networks:
      - pocnet
networks:
  pocnet:
    external: true
3) Start Services
docker-compose up
Note: you might want to open a new shell window to run step 4.
4) Test
Go into the Kibana container and see if you can ping the elasticsearch container.
yourMachine:/$ docker exec -it kibana bash
kibana#123456:/# ping elasticsearch
First of all, links in Docker are unidirectional.
More info on links:
There are legacy links, and links in user-defined networks.
The legacy link provided four major functionalities on the default bridge network:
name resolution
name alias for the linked container using --link=CONTAINER-NAME:ALIAS
secured container connectivity (in isolation via --icc=false)
environment variable injection
Comparing the above four functionalities with non-default, user-defined networks: without any additional config, a docker network provides
automatic name resolution using DNS
an automatically secured, isolated environment for the containers in a network
the ability to dynamically attach to and detach from multiple networks
support for the --link option to provide a name alias for the linked container
(A short sketch contrasting the two approaches follows below.)
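A minimal sketch of the difference (image names illustrative):

# legacy link on the default bridge: unidirectional, alias "search"
docker run -d --name es elasticsearch
docker run --rm --link es:search busybox ping -c 1 search

# user-defined network: automatic DNS, no --link needed
docker network create mynet
docker run -d --name es2 --network mynet elasticsearch
docker run --rm --network mynet busybox ping -c 1 es2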
In your case, automatic DNS on a user-defined network will help you. First create a new network:
docker network create ELK -d bridge
With this approach you don't need to link containers on the same user-defined network. You just have to put your ELK stack + Redis containers in the ELK network and remove the link directives from the compose file.
Your order looks fine to me. If you have any problem regarding the order, or with waiting for services to come up in dependent containers, you can use something like the following:
version: "2"
services:
web:
build: .
ports:
- "80:8000"
depends_on:
- "db"
entrypoint: ./wait-for-it.sh db:5432
db:
image: postgres
This will make the web container wait until it can connect to the db.
You can get the wait-for-it script from its GitHub repository.
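Note that entrypoint as written replaces the image's default startup command; wait-for-it can chain to the real command after --. A sketch (the python app.py command is hypothetical):

entrypoint: ["./wait-for-it.sh", "db:5432", "--", "python", "app.py"]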

execute binary on linked container in docker

I have 3 Docker containers: one running nginx, another php, and one running Serf by HashiCorp.
I want to use the PHP exec function to call the serf binary to fire off a serf event.
In my docker compose I have written
version: '2'
services:
  web:
    restart: always
    image: `alias`/nginx-pagespeed:1.11.4
    ports:
      - 80
    volumes:
      - ./web:/var/www/html
      - ./conf/nginx/default.conf:/etc/nginx/conf.d/default.conf
    links:
      - php
    environment:
      - SERVICE_NAME=${DOMAIN}
      - SERVICE_TAGS=web
  php:
    restart: always
    image: `alias`/php-fpm:7.0.11
    links:
      - serf
    external_links:
      - mysql
    expose:
      - "9000"
    volumes:
      - ./web:/var/www/html
      - ./projects:/var/www/projects
      - ./conf/php:/usr/local/etc/php/conf.d
  serf:
    restart: always
    dns: 172.17.0.1
    image: `alias`/serf
    container_name: serf
    ports:
      - 7496:7496
      - 7496:7496/udp
    command: agent -node=${SERF_NODE} -advertise=${PRIVATE_IP}:7496 -bind=0.0.0.0:7496
I was imagining that, in PHP, I would do something like exec('serf serf event "test"'), where serf is the hostname of the container.
Or perhaps someone can give an idea of how to get something like this set up using alternative methods?
The "linked" containers allow network level discovery between containers. With docker networks, the linked feature is now considered legacy and isn't really recommended anymore. To run a command in another container, you'd need to either open up a network API functionality on the target container (e.g. a REST based http request to the target container), or you need to expose the host to the source container so it can run a docker exec against the target container.
The latter requires that you install the docker client in your source container, and then expose the server with either an open port on the host or mounting the /var/run/docker.sock in the container. Since this allows the container to have root access on the host, it's not a recommended practice for anything other than administrative containers where you would otherwise trust the code running directly on the host.
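A sketch of that second option, mounting the Docker socket into the php container (with the security caveat above):

services:
  php:
    image: `alias`/php-fpm:7.0.11
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock   # grants root-equivalent access to the host

# then, from inside the php container (assuming the docker CLI is installed there):
docker exec serf serf event "test"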
The only other option I can think of is to remove the isolation between the containers with a shared volume.
An ideal solution is to use a message queuing service that allows multiple workers to spin up and process requests at their own pace. The source container sends a request to the queue, and the target container listens for requests when it's running. This also allows the system to continue even when workers are currently down, activities simply queue up.
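A minimal compose sketch of that queue pattern, assuming Redis as the broker; the environment variable and worker command are hypothetical and stand in for whatever the PHP code and worker actually implement:

services:
  php:
    image: `alias`/php-fpm:7.0.11
    environment:
      - QUEUE_HOST=queue   # hypothetical variable the PHP code reads before pushing jobs
  queue:
    image: redis:alpine
  serf:
    image: `alias`/serf
    command: worker --queue redis://queue:6379   # hypothetical worker that pops jobs and runs `serf event`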
