How to auto-restart a docker-compose cluster when EC2 instance reboots - docker

I had this docker-compose.yml file:
version: '2.2'
services:
  kibana:
    restart: always
    depends_on:
      - es01
      - es02
    image: docker.elastic.co/kibana/kibana:7.3.1
    container_name: kibana
    ports:
      - 5601:5601
    environment:
      ELASTICSEARCH_HOSTS: http://es01:9200
      ELASTICSEARCH_URL: http://es01:9200
  es01:
    restart: always
    image: docker.elastic.co/elasticsearch/elasticsearch:7.3.1
    container_name: es01
    environment:
      - node.name=es01
      - discovery.seed_hosts=es02
      - cluster.initial_master_nodes=es01,es02
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata01:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
      - 9300:9300
  es02:
    restart: always
    image: docker.elastic.co/elasticsearch/elasticsearch:7.3.1
    container_name: es02
    environment:
      - node.name=es02
      - discovery.seed_hosts=es01
      - cluster.initial_master_nodes=es01,es02
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata02:/usr/share/elasticsearch/data
volumes:
  esdata01:
    driver: local
  esdata02:
    driver: local
but these containers did not restart when the EC2 instance rebooted. Maybe I should use something like this instead:
docker-compose up -d --restart   # does a --restart flag exist?
Notice the restart properties in the yml file; I guess they didn't do anything in this case?
But it turns out there is no --restart flag:
(account-api) ubuntu@account_management5-interos:~/interos/repos/elastic-search-app$ docker-compose up -d --restart
Builds, (re)creates, starts, and attaches to containers for a service.
Unless they are already running, this command also starts any linked services.
The `docker-compose up` command aggregates the output of each container. When
the command exits, all containers are stopped. Running `docker-compose up -d`
starts the containers in the background and leaves them running.
If there are existing containers for a service, and the service's configuration
or image was changed after the container's creation, `docker-compose up` picks
up the changes by stopping and recreating the containers (preserving mounted
volumes). To prevent Compose from picking up changes, use the `--no-recreate`
flag.
If you want to force Compose to stop and recreate all containers, use the
`--force-recreate` flag.
Usage: up [options] [--scale SERVICE=NUM...] [SERVICE...]

Options:
    -d, --detach               Detached mode: Run containers in the background,
                               print new container names. Incompatible with
                               --abort-on-container-exit.
    --no-color                 Produce monochrome output.
    --quiet-pull               Pull without printing progress information
    --no-deps                  Don't start linked services.
    --force-recreate           Recreate containers even if their configuration
                               and image haven't changed.
    --always-recreate-deps     Recreate dependent containers.
                               Incompatible with --no-recreate.
    --no-recreate              If containers already exist, don't recreate
                               them. Incompatible with --force-recreate and -V.
    --no-build                 Don't build an image, even if it's missing.
    --no-start                 Don't start the services after creating them.
    --build                    Build images before starting containers.
    --abort-on-container-exit  Stops all containers if any container was
                               stopped. Incompatible with -d.
    -t, --timeout TIMEOUT      Use this timeout in seconds for container
                               shutdown when attached or when containers are
                               already running. (default: 10)
    -V, --renew-anon-volumes   Recreate anonymous volumes instead of retrieving
                               data from the previous containers.
    --remove-orphans           Remove containers for services not defined
                               in the Compose file.
    --exit-code-from SERVICE   Return the exit code of the selected service
                               container. Implies --abort-on-container-exit.
    --scale SERVICE=NUM        Scale SERVICE to NUM instances. Overrides the
                               `scale` setting in the Compose file if present.
I am looking for the compose equivalent of the --restart flag in:
docker run -d -p 27017:27017 \
  --restart unless-stopped \
  --name 'interos-mongo' \
  'mongo:4.0'

In fact, docker-compose doesn't handle restarts itself; those are done by dockerd.
The restart policy written in the compose file ends up as the container's restart policy, which you can check with the following command:
docker inspect --format '{{.HostConfig.RestartPolicy}}' your-container-ID-or-name
Going back to your question: have you set dockerd up to start automatically, i.e. systemctl enable docker?
xref: https://docs.docker.com/compose/production/
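For completeness, a minimal sketch of what that looks like on a systemd-based host such as Ubuntu on EC2, reusing the container names from the compose file above (the compose equivalent of docker run --restart unless-stopped is simply restart: unless-stopped on the service):
# Restart policies are applied by dockerd, so the daemon itself must come back after a reboot.
sudo systemctl enable docker
systemctl is-enabled docker    # should print "enabled"
# Verify that the policy from the compose file actually reached the containers.
docker inspect --format '{{.Name}} -> {{.HostConfig.RestartPolicy.Policy}}' kibana es01 es02
# An already-created container can also be patched in place:
docker update --restart unless-stopped es01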

Related

Configuring security in Elasticsearch on docker container

How do I enable basic authentication for Kibana and Elasticsearch on a Docker container?
I want to have authentication enabled in Kibana. With a normal installation we can simply set the flag
xpack.security.enabled=true and generate the passwords, but since I am running Elasticsearch and Kibana in Docker, how do I do it?
This is my current docker-compose file:
version: '3.7'
services:
  elasticsearch:
    image: elasticsearch:7.9.2
    ports:
      - '9200:9200'
    environment:
      - discovery.type=single-node
    ulimits:
      memlock:
        soft: -1
        hard: -1
  kibana:
    image: kibana:7.9.2
    ports:
      - '5601:5601'
You can pass it as an env var in the docker run command for Elasticsearch.
Something like this:
docker run -p 9200:9200 -p 9300:9300 -e "xpack.security.enabled=true" docker.elastic.co/elasticsearch/elasticsearch:7.14.0
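Since the question uses docker-compose, roughly the same setting can go into the compose file as an environment variable. A rough sketch, written as an override file so the original stays untouched (ELASTIC_PASSWORD and ELASTICSEARCH_USERNAME/PASSWORD are, as far as I know, supported by the official images; changeme is obviously a placeholder):
cat > docker-compose.override.yml <<'EOF'
version: '3.7'
services:
  elasticsearch:
    environment:
      - discovery.type=single-node
      - xpack.security.enabled=true
      - ELASTIC_PASSWORD=changeme    # bootstrap password for the built-in elastic user
  kibana:
    environment:
      - ELASTICSEARCH_USERNAME=elastic
      - ELASTICSEARCH_PASSWORD=changeme
EOF
docker-compose up -d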

Docker swarm across multiple hosts using the same docker-compose file

I am building a docker swarm across 3 hosts for the following services: Grakn, Redis, Elasticsearch, MinIO and RabbitMQ.
My questions are:
Can I use one docker-compose.yml so that everything is deployed across the 3 hosts? Or do we need 3 docker-compose.yml files?
In order to have HA, I also want to add 3 more hosts so that if one (physical) host fails, the services running on it are transferred to another one and service won't be interrupted.
Can I use docker stack here, and if so, how?
services:
  grakn:
    image: graknlabs/grakn:1.7.2
    ports:
      - 48555:48555
    volumes:
      - grakndata:/grakn-core-all-linux/server/db
    restart: always
  redis:
    image: redis:6.0.5
    restart: always
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.8.0
    volumes:
      - esdata:/usr/share/elasticsearch/data
    environment:
      - discovery.type=single-node
    restart: always
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
  minio:
    image: minio/minio:RELEASE.2020-05-16T01-33-21Z
    volumes:
      - s3data:/data
    ports:
      - "9000:9000"
    environment:
      MINIO_ACCESS_KEY: ${MINIO_ACCESS_KEY}
      MINIO_SECRET_KEY: ${MINIO_SECRET_KEY}
    command: server /data
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
      interval: 30s
      timeout: 20s
      retries: 3
    restart: always
  rabbitmq:
    image: rabbitmq:3.8-management
    environment:
      - RABBITMQ_DEFAULT_USER=${RABBITMQ_DEFAULT_USER}
      - RABBITMQ_DEFAULT_PASS=${RABBITMQ_DEFAULT_PASS}
    restart: always
Can I use one docker-compose.yml so that everything is deployed across the 3 hosts? Or do we need 3 docker-compose.yml files?
Yes, you should use one docker-compose.yml file. There you declare your services and their desired state, including the number of replicas.
In order to have HA, I also want to add 3 more hosts so that if one (physical) host fails, the services running on it are transferred to another one and service won't be interrupted.
If you initialized a cluster of Docker Engines in swarm mode and these engines run on different hosts, service replicas can run on any host (unless you restrict service placement using Docker labels).
Can I use docker stack here, and if so, how?
Yes, run docker stack deploy --compose-file [path to your Compose file].
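A rough sketch of the whole flow, assuming the three hosts can reach each other and using a made-up stack name (replica counts go under deploy: in a version 3 compose file; note that swarm mode ignores restart: always and uses deploy.restart_policy instead):
# On the first host: initialise the swarm and note the join command it prints.
docker swarm init
# On the other two hosts, using the token and IP from the output above:
docker swarm join --token <token> <manager-ip>:2377
# Back on the manager: deploy everything from the single compose file.
docker stack deploy --compose-file docker-compose.yml mystack
docker service ls    # check that all replicas came up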

docker-compose run with specified network name

I have a docker-compose file with three services (Solr, PostgreSQL and pgAdmin), all sharing a Docker network.
version: '2'
services:
  solr:
    image: solr:7.7.2
    ports:
      - '8983:8983'
    networks:
      primus-dev:
        ipv4_address: 10.105.1.101
    volumes:
      - data:/opt/solr/server/solr/mycores
    entrypoint:
      - docker-entrypoint.sh
      - solr-precreate
      - primus
      - /opt/solr/server/solr/configsets/sample_techproducts_configs
    environment:
      - SOLR_HEAP=2048m
    logging:
      options:
        max-size: 5m
  db:
    image: "postgres:11.5"
    container_name: "primus_postgres"
    ports:
      - "5432:5432"
    networks:
      primus-dev:
        ipv4_address: 10.105.1.102
    volumes:
      - primus_dbdata:/var/lib/postgres/data
    environment:
      - POSTGRES_DB=primus75
      - POSTGRES_USER=primus
      - POSTGRES_PASSWORD=primstav
  pgadm4:
    image: "dpage/pgadmin4"
    networks:
      primus-dev:
        ipv4_address: 10.105.1.103
    ports:
      - "3050:80"
    volumes:
      - /home/nils/docker-home:/var/docker-home
    environment:
      - PGADMIN_DEFAULT_EMAIL=nils.weinander@kulturit.se
      - PGADMIN_DEFAULT_PASSWORD=dev
networks:
  primus-dev:
    driver: bridge
    ipam:
      config:
        - subnet: 10.105.1.0/24
volumes:
  data:
  primus_dbdata:
This works just fine after docker-compose up (at least pgAdmin can talk to PostgreSQL).
But then I have a script (actually a make target, but that's not the point here) which builds, runs and deletes a container with docker-compose run:
docker-compose run -e HOME=/app -e PYTHONPATH=/app/server -u 0 --rm backend \
bash -c 'cd /app/server && python tools/reindex_mp.py -s -n'
This does not work, as reindex_mp.py cannot reach Solr on 10.105.1.101: the one-shot container is not on the same Docker network. So, is there a way to tell docker-compose to use a named network with docker-compose run? docker run has a --network option, but that is not available for docker-compose.
You can create a docker network outside your docker-compose and use that network while running services in docker-compose.
docker network create my-custom-created-network
Now, inside your docker-compose file, use this network like this:
services:
  serv1:
    image: img
    networks:
      - my-custom-created-network
networks:
  my-custom-created-network:
    external: true
The network creation example creates a bridge network.
To access containers across hosts, use an overlay network.
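If you actually need the cross-host case, a rough sketch with the same network name as above (overlay networks only work between engines joined in a swarm, and --attachable is needed so ordinary containers can join):
docker swarm init    # on the first host
docker network create --driver overlay --attachable my-custom-created-network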
You can also use the network created by docker-compose and connect containers to that network.
Docker creates a default network for each docker-compose project, and services which do not have any network configuration specified will use that default network.
You can find the network name by executing this command:
docker network ls
Use the appropriate network name when starting a container, like this:
docker run [options] --network <network-name> <image-name>
Note: containers on the same network are reachable via their container names, so you can use names instead of IPs.
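Putting that together for the compose file in the question, a rough sketch could look like the following; the project name primus is a guess (docker network ls shows the real prefix) and backend-image stands for whatever image the backend service uses:
# Compose names the network <project>_<network>; find the real name first.
docker network ls --filter name=primus-dev
# Run the one-off job attached to that network with plain docker run:
docker run --rm --network primus_primus-dev \
  -e HOME=/app -e PYTHONPATH=/app/server -u 0 \
  backend-image bash -c 'cd /app/server && python tools/reindex_mp.py -s -n'
Alternatively, declaring networks: [primus-dev] on the backend service in the compose file should make docker-compose run attach the one-off container to that network as well.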

How to connect metricbeat to elasticsearch and kibana with docker

I've set up Elasticsearch and Kibana with docker-compose. Elasticsearch is deployed on localhost:9200 while Kibana is deployed on localhost:5601.
When trying to deploy Metricbeat with docker run, I got the following errors:
$ docker run docker.elastic.co/beats/metricbeat:6.3.2 setup -E setup.kibana.host=kibana:5601 -E output.elasticsearch.hosts=["localhost:9200"]
Exiting: Couldn't connect to any of the configured Elasticsearch hosts. Errors: [Error connection to Elasticsearch http://localhost:9200: Get http://localhost:9200: dial tcp [::1]:9200: connect: cannot assign requested address]
Exiting: Couldn't connect to any of the configured Elasticsearch hosts. Errors: [Error connection to Elasticsearch http://elasticsearch:9200: Get http://elasticsearch:9200: lookup elasticsearch on 192.168.65.1:53: no such host]
My docker-compose.yml:
# ./docker-compose.yml
version: "3.7"
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.3.2
    environment:
      # - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - elkdata:/usr/share/elasticsearch/data
    ports:
      - "9200:9200"
    restart: always
  kibana:
    image: docker.elastic.co/kibana/kibana:6.3.2
    volumes:
      - kibana:/usr/share/kibana/config
    ports:
      - "5601:5601"
    depends_on:
      - elasticsearch
    restart: always
volumes:
  elkdata:
  kibana:
First, edit your docker-compose file by adding a name for the default Docker network:
version: "3.7"
services:
elasticsearch:
image: docker.elastic.co/elasticsearch/elasticsearch:6.3.2
environment:
# - cluster.name=docker-cluster
- bootstrap.memory_lock=true
- "ES_JAVA_OPTS=-Xms512m -Xmx512m"
ulimits:
memlock:
soft: -1
hard: -1
volumes:
- elkdata:/usr/share/elasticsearch/data
ports:
- "9200:9200"
networks:
- my-network
restart: always
kibana:
image: docker.elastic.co/kibana/kibana:6.3.2
volumes:
- kibana:/usr/share/kibana/config
ports:
- "5601:5601"
networks:
- my-network
depends_on:
- elasticsearch
restart: always
volumes:
elkdata:
kibana:
networks:
my-network:
name: awesome-name
Execute docker-compose up and then start metricbeat with the command below (the --network flag has to come before the image name, otherwise it is passed to metricbeat instead of docker run):
$ docker run --network=awesome-name docker.elastic.co/beats/metricbeat:6.3.2 setup -E setup.kibana.host=kibana:5601 -E 'output.elasticsearch.hosts=["elasticsearch:9200"]'
Explanation:
When you try to deploy metricbeat, you provide the settings below:
setup.kibana.host=kibana:5601
output.elasticsearch.hosts=["localhost:9200"]
I will start with the second one. With the docker run command you are telling the metricbeat container that it can access Elasticsearch on localhost:9200. So when the container starts, it will access localhost on port 9200 expecting to find Elasticsearch running there. But since a container is an isolated process with its own network stack, localhost resolves to the container itself, not to your Docker host machine as you were expecting.
Regarding the Kibana host setting, you should first understand how docker-compose works. By default, when you execute docker-compose up, a Docker network is created and all services defined in the yml file are added to it. Inside this network, and only there, services are reachable through their service names. In your case, as defined in the yml file, those names are elasticsearch and kibana.
So in order for the metricbeat container to be able to communicate with the elasticsearch and kibana containers, it has to be added to the same Docker network. This can be achieved by setting the --network flag on the docker run command.
Another approach would be to share the Docker host's network with your containers by using network mode host, but I would not recommend that.
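As a quick sanity check, a throwaway container attached to the same network can reach Elasticsearch by service name (busybox is just an arbitrary small image here):
docker run --rm --network awesome-name busybox wget -qO- http://elasticsearch:9200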
References:
Docker compose
docker run

docker-compose stop not working after docker-compose -p <name> up

I am using docker-compose version 2. I am starting containers with docker-compose -p some_name up -d and trying to kill them with docker-compose stop. The command exits with code 0 but the containers are still up and running.
Is this the expected behaviour for this version? If yes, any idea how I can work around it?
My docker-compose.yml file looks like this:
version: '2'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:5.3.0
    ports:
      - "9200:9200"
    environment:
      ES_JAVA_OPTS: "-Xmx512m -Xms512m"
      xpack.security.enabled: "false"
      xpack.monitoring.enabled: "false"
      xpack.graph.enabled: "false"
      xpack.watcher.enabled: "false"
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 262144
        hard: 262144
  kafka-server:
    image: spotify/kafka
    environment:
      - TOPICS=my-topic
    ports:
      - "9092:9092"
  test:
    build: .
    depends_on:
      - elasticsearch
      - kafka-server
Update:
I found that the problem is caused by using the -p parameter and giving an explicit project name prefix to the containers. Still looking for the best way to solve it.
docker-compose -p [project_name] stop worked in my case. I had the same problem.
Try forcing the running containers to stop by sending a SIGKILL:
docker-compose -p some_name kill
I just read about and experimented with the Compose CLI environment variables for the project name. You have to pass -p some_name when stopping or killing the containers as well; otherwise Compose assumes the project name from the directory name and does not find them.
Kindly let me know if this helped.
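A short sketch of both options, using the project name from the question (COMPOSE_PROJECT_NAME is the standard Compose variable and can also live in an .env file next to the compose file):
# The project name has to match on every command, otherwise Compose derives it
# from the directory name and looks for the wrong containers.
docker-compose -p some_name up -d
docker-compose -p some_name stop     # or: docker-compose -p some_name down
# Alternatively, pin the project name via the environment so plain commands work:
export COMPOSE_PROJECT_NAME=some_name
docker-compose up -d
docker-compose stop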
