Does docker-compose support init container? - docker

Init containers are a great feature in Kubernetes, and I wonder whether docker-compose supports them. They allow me to run some commands before launching the main application.
I came across this PR https://github.com/docker/compose-cli/issues/1499 which mentions support for init containers, but I can't find any related documentation in their reference.

This was a discovery for me, but yes, it is now possible to use init containers with docker-compose since version 1.29, as can be seen in the PR you linked in your question.
However, at the time of writing, this feature does not yet seem to have found its way into the documentation.
You can define a dependency on another container with a condition that is basically "when that other container has successfully finished its job". This leaves room to define containers that run any kind of script and exit when they are done, before another dependent container is launched.
To illustrate, I crafted an example with a pretty common scenario: spin up a db container, make sure the db is up, and initialize its data prior to launching the application container.
Note: initializing the db (at least as far as the official mysql image is concerned) does not require an init container, so this example is more an illustration than a rock-solid typical workflow.
The complete example is available in a public github repo, so I will only show the key points in this answer.
Let's start with the compose file:
---
x-common-env: &cenv
  MYSQL_ROOT_PASSWORD: totopipobingo

services:
  db:
    image: mysql:8.0
    command: --default-authentication-plugin=mysql_native_password
    environment:
      <<: *cenv

  init-db:
    image: mysql:8.0
    command: /initproject.sh
    environment:
      <<: *cenv
    volumes:
      - ./initproject.sh:/initproject.sh
    depends_on:
      db:
        condition: service_started

  my_app:
    build:
      context: ./php
    environment:
      <<: *cenv
    volumes:
      - ./index.php:/var/www/html/index.php
    ports:
      - 9999:80
    depends_on:
      init-db:
        condition: service_completed_successfully
You can see I define 3 services:
The database, which is the first to start
The init container, which starts only once db is started. This one only runs a script (see below) that exits once everything is initialized
The application container, which will only start once the init container has successfully done its job.
The initproject.sh script run by the init-db container is very basic for this demo: it simply retries connecting to the db every 2 seconds until it succeeds or reaches a limit of 50 tries, then creates a db/table and inserts some data:
#! /usr/bin/env bash
# Test we can access the db container, allowing time for it to start
for i in {1..50}; do mysql -u root -p${MYSQL_ROOT_PASSWORD} -h db -e "show databases" && s=0 && break || s=$? && sleep 2; done
if [ ! $s -eq 0 ]; then exit $s; fi
# Init some stuff in db before leaving the floor to the application
mysql -u root -p${MYSQL_ROOT_PASSWORD} -h db -e "create database my_app"
mysql -u root -p${MYSQL_ROOT_PASSWORD} -h db -e "create table my_app.test (id int unsigned not null auto_increment primary key, myval varchar(255) not null)"
mysql -u root -p${MYSQL_ROOT_PASSWORD} -h db -e "insert into my_app.test (myval) values ('toto'), ('pipo'), ('bingo')"
The Dockerfile for the app container is trivial (adding a mysqli driver for php) and can be found in the example repo, as well as the php script to test that the init was successful by calling http://localhost:9999 in your browser.
The interesting part is to observe what's going on when launching the service with docker-compose up -d.
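If you want to watch the ordering yourself, the standard compose commands are enough (run from the directory containing the compose file above); init-db should be reported as exited with code 0 once its script has finished, and only then should my_app start:
docker-compose up -d
docker-compose ps
docker-compose logs -f init-db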
The only limit to what can be done with such a feature is probably your imagination ;) Thanks for making me discover this.

Related

How can I create additional user in influxdb2 with a docker-compose.yml

I am able to run a docker-compose.yml that starts an influxdb2 instance and configures it with an admin user, org and bucket. My problem is that I am not able to create an additional user (without admin privileges) via the docker-compose.yml.
I would appreciate it if someone could give me a hint.
docker-compose.yml:
version: "3.5"
services:
  influxdb:
    image: influxdb:latest
    container_name: influxdb2
    volumes:
      - influxdb-storage:/etc/influxdb2:rw
      - influxdb-storage:/var/lib/influxdb2:rw
    environment:
      - DOCKER_INFLUXDB_INIT_MODE=setup
      - DOCKER_INFLUXDB_INIT_USERNAME=adminuser
      - DOCKER_INFLUXDB_INIT_PASSWORD=adminpassword
      - DOCKER_INFLUXDB_INIT_ORG=myOrg
      - DOCKER_INFLUXDB_INIT_BUCKET=myBucket
      - DOCKER_INFLUXDB_INIT_ADMIN_TOKEN=randomTokenValue
    ports:
      - "8086:8086"
    restart: unless-stopped
I tried adding an entrypoint to somehow run the following command:
influx user create -n john -p user -o myOrg
but that did not work.
The influxdb:latest image is described in the following repository: influxdata-docker/influxdb/2.6/
If you check these files, you will see that the user is created inside the entrypoint.sh script, in the setup_influxd() function, which is called from main() -> init_influxd().
There is a run_user_scripts() function which runs user-defined scripts on startup, if the directory specified by the ${USER_SCRIPT_DIR} variable exists:
# Allow users to mount arbitrary startup scripts into the container,
# for execution after initial setup/upgrade.
declare -r USER_SCRIPT_DIR=/docker-entrypoint-initdb.d
...
# Execute all shell files mounted into the expected path for user-defined startup scripts.
function run_user_scripts () {
    if [ -d ${USER_SCRIPT_DIR} ]; then
        log info "Executing user-provided scripts" script_dir ${USER_SCRIPT_DIR}
        run-parts --regex ".*sh$" --report --exit-on-error ${USER_SCRIPT_DIR}
    fi
}
I think you could use this functionality to perform the additional steps, but you'll probably need to guard against running these steps twice.
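For example, something along these lines might work (a sketch I have not tested; the script name, guard file and user password are placeholders). Mount a script into /docker-entrypoint-initdb.d via a volume entry such as - ./create-extra-user.sh:/docker-entrypoint-initdb.d/create-extra-user.sh:ro and let it create the extra user exactly once:
#!/bin/bash
# create-extra-user.sh - executed by run_user_scripts() after the initial setup
set -e

# Guard: skip if the extra user has already been created on a previous start
GUARD_FILE=/var/lib/influxdb2/.extra-user-created
if [ -f "$GUARD_FILE" ]; then
    exit 0
fi

# Same command as attempted in the question; this assumes the influx CLI has
# already been configured with admin credentials by the setup step
influx user create -n john -p user -o myOrg

touch "$GUARD_FILE"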

ERROR Disk error while locking directory /var/kafka-logs in Kafka 3.1.0

I am using Kafka 3.1.0, Portainer 2.9.0 and Docker 20.10.11 to build a cluster with 1 broker, 1 consumer and 1 producer.
I am trying to map the log dirs from the container to the host machine via the docker-compose file in order to persist the content of that directory (because if the container goes down, that information will be lost). I know it is recommended to have more than 1 broker, but since I am just testing this feature, I don't want to overcomplicate things.
The problem I get is:
ERROR Disk error while locking directory /var/kafka-logs (kafka.server.LogDirFailureChannel)
java.nio.file.AccessDeniedException: /var/kafka-logs/.lock
[2022-03-31 12:00:53,986] ERROR [KafkaServer id=1] Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
I have checked and the user that executes the broker has all permissions (since I created that directory with my Dockerfile).
RUN mkdir /var/kafka-logs \
    && chown -R kafka:kafka /var/kafka-logs \
    && chmod -R 777 /var/kafka-logs
I have seen that this problem was a thing in version 3.0 and was fixed in 3.1, and also that it only happened on Windows, so I don't know the source of this problem.
Edit: I have checked, and even without the mapping it still prints that error. It must be a problem with changing the log.dirs property to a non-/tmp directory, because if I leave the default configuration it works just fine.
By default I mean the following:
log.dirs=/tmp/kafka-logs
My docker-compose:
version: "3.8"

networks:
  net:
    external: true

services:
  kafka-broker1:
    image: registry.gitlab.com/repo/kafka:2.13_3.1.0_v0.1
    volumes:
      - /var/volumes/kafka/config/server1.properties:/opt/kafka/config/server.properties
    networks:
      - net
  kafka-producer:
    image: registry.gitlab.com/repo/kafka:2.13_3.1.0_v0.1
    stdin_open: true
    tty: true
    networks:
      - net
  kafka-consumer:
    image: registry.gitlab.com/repo/kafka:2.13_3.1.0_v0.1
    stdin_open: true
    tty: true
    networks:
      - net
The problem was that I had been creating several Docker images and containers with the same name, and the container didn't pick up the newest image.
Once I removed the old images and the container picked up the latest one, it all worked just fine, so it was basically a problem of not having enough permissions to get the lock on that directory.
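If you run into the same situation, forcing compose to pull the newest image and recreate the containers (generic commands, nothing specific to this setup) avoids reusing a stale image or container:
docker-compose pull
docker-compose up -d --force-recreate
docker image prune   # optionally clean up dangling images from earlier builds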

Docker wait until a service is completely ready

I'm dockerizing my existing Django application.
I have an entrypoint.sh script which is run as the entrypoint by the Dockerfile:
ENTRYPOINT ["/app/scripts/docker/entrypoint.sh"]
Its contents include a script that runs migrations when an environment variable is set to enable migration:
#!/bin/sh
#set -e

# Run the command and exit with the custom message when the command fails to run
safeRunCommand() {
    cmnd="$*"
    echo cmnd="$cmnd"
    eval "$cmnd"
    ret_code=$?
    if [ $ret_code != 0 ]; then
        printf "Error : [code: %d] when executing command: '$cmnd'\n" $ret_code
        exit $ret_code
    else
        echo "Command run successfully: $cmnd"
    fi
}

runDjangoMigrate() {
    echo "Migrating database"
    cmnd="python manage.py migrate --noinput"
    safeRunCommand "$cmnd"
    echo "Done: Migrating database"
}

# Run Django migrate command.
# The command is run only when environment variable `DJANGO_MANAGE_MIGRATE` is set to `on`.
if [ "x$DJANGO_MANAGE_MIGRATE" = 'xon' ] && [ ! "x$DEPLOYMENT_MODE" = 'xproduction' ]; then
    runDjangoMigrate
fi

# Accept other commands
exec "$@"
Now, in the docker-compose file, I have services like this:
version: '3.7'

services:
  database:
    image: mysql:5.7
    container_name: 'qcg7_db_mysql'
    restart: always
  web:
    build: .
    command: ["./wait_for_it.sh", "database:3306", "--", "./docker_start.sh"]
    volumes:
      - ./src:/app
    depends_on:
      - database
    environment:
      DJANGO_MANAGE_MIGRATE: 'on'
But when I build the image using
docker-compose up --build
It fails to run the migration command from the entrypoint script with the error:
(2002, "Can't connect to MySQL server on 'database' (115)")
This is due to the fact that the database server has not yet started.
How can I make the web service wait until the database service is completely started and ready to accept connections?
Unfortunately, there is no native way in Docker to wait for the database service to be ready before the Django web app attempts to connect; depends_on will only ensure that the web app is started after the database container is launched.
Because of this limitation, you will need to solve this problem in how your container runs. The easiest solution is to modify entrypoint.sh to sleep for 10-30 seconds so that your database has time to initialize before executing any additional commands. This official MySQL entrypoint.sh shows an example of how to block until the database is ready.
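As an alternative to a fixed sleep, the entrypoint could poll the database before running the migrations. A minimal sketch, my own addition rather than part of the original answer; it assumes the mysql client tools are installed in the web image and that the database service is reachable under the hostname database, as in the compose file above:
# Hypothetical helper: block until MySQL answers, give up after about 60 seconds
waitForDatabase() {
    echo "Waiting for database ..."
    i=0
    # mysqladmin ping exits 0 once the server is reachable; add -u/-p options if your setup requires credentials
    while ! mysqladmin ping -h database --silent; do
        i=$((i + 1))
        if [ "$i" -ge 30 ]; then
            echo "Database did not become ready in time" >&2
            exit 1
        fi
        sleep 2
    done
    echo "Database is up"
}
Calling waitForDatabase just before runDjangoMigrate in entrypoint.sh would then make the migration wait for the database instead of failing immediately.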

Local Vault using docker-compose

I'm having big trouble running Vault in docker-compose.
My requirements are :
running as a daemon (so it restarts when I restart my Mac)
secrets being persisted between container restarts
no human intervention between restarts (unsealing, etc.)
using a generic token
My current docker-compose
version: '2.3'

services:
  vault-dev:
    image: vault:1.2.1
    restart: always
    container_name: vault-dev
    environment:
      VAULT_DEV_ROOT_TOKEN_ID: "myroot"
      VAULT_LOCAL_CONFIG: '{"backend": {"file": {"path": "/vault/file"}}, "default_lease_ttl": "168h", "max_lease_ttl": "720h"}'
    ports:
      - "8200:8200"
    volumes:
      - ./storagedc/vault/file:/vault/file
However, when the container restarts, I get the log
==> Vault server configuration:
Api Address: http://0.0.0.0:8200
Cgo: disabled
Cluster Address: https://0.0.0.0:8201
Listener 1: tcp (addr: "0.0.0.0:8200", cluster address: "0.0.0.0:8201", max_request_duration: "1m30s", max_request_size: "33554432", tls: "disabled")
Log Level: info
Mlock: supported: true, enabled: false
Storage: file
Version: Vault v1.2.1
Error initializing Dev mode: Vault is already initialized
Is there any recommendation on that matter?
I'm going to pseudo-code an answer to work around the problems specified, but please note that this is a massive hack and should NEVER be deployed in production as a hard-coded master key and single unseal key is COLOSSALLY INSECURE.
So, you want a test vault server, with persistence.
You can accomplish this, but it will need a little bit of work because of the default behavior of the vault container: if you just start it, you get a dev-mode server, which won't allow for persistence. Just adding persistence via the environment variable won't solve that problem entirely, because it will conflict with the default start mode of the container.
So we need to replace the entrypoint script with something that does what we want it to do instead.
First we copy the script out of the container:
$ docker create --name vault vault:1.2.1
$ docker cp vault:/usr/local/bin/docker-entrypoint.sh .
$ docker rm vault
For simplicity, we're going to edit the file and mount it into the container using the docker-compose file. I'm not going to make it really functional, just enough to get it to do what's desired. The entire point here is a sample, not something that is usable in production.
My customizations all start at about line 98: first we launch a dev-mode server in order to record the unseal key, then we terminate the dev-mode server.
# Here's my customization:
if [ ! -f /vault/unseal/sealfile ]; then
    # start in dev mode, in the background, to record the unseal key
    su-exec vault vault server \
        -dev -config=/vault/config \
        -dev-root-token-id="$VAULT_DEV_ROOT_TOKEN_ID" \
        2>&1 | tee /vault/unseal/sealfile &
    while ! grep -q 'core: vault is unsealed' /vault/unseal/sealfile; do
        sleep 1
    done
    kill %1
fi
Next we check for supplemental config. This is where the extra config goes for disabling TLS, and for binding the appropriate interface.
if [ -n "$VAULT_SUPPLEMENTAL_CONFIG" ]; then
    echo "$VAULT_SUPPLEMENTAL_CONFIG" > "$VAULT_CONFIG_DIR/supplemental.json"
fi
Then we launch vault in 'release' mode:
if [ "$(id -u)" = '0' ]; then
    set -- su-exec vault "$@"
    "$@" &
Then we get the unseal key from the sealfile:
    unseal=$(sed -n 's/Unseal Key: //p' /vault/unseal/sealfile)
    if [ -n "$unseal" ]; then
        while ! vault operator unseal "$unseal"; do
            sleep 1
        done
    fi
We just wait for the process to terminate:
    wait
    exit $?
fi
There's a full gist for this on github.
Now the docker-compose.yml for doing this is slightly different to your own:
version: '2.3'

services:
  vault-dev:
    image: vault:1.2.1
    restart: always
    container_name: vault-dev
    command: [ 'vault', 'server', '-config=/vault/config' ]
    environment:
      VAULT_DEV_ROOT_TOKEN_ID: "myroot"
      VAULT_LOCAL_CONFIG: '{"backend": {"file": {"path": "/vault/file"}}, "default_lease_ttl": "168h", "max_lease_ttl": "720h"}'
      VAULT_SUPPLEMENTAL_CONFIG: '{"ui":true, "listener": {"tcp":{"address": "0.0.0.0:8200", "tls_disable": 1}}}'
      VAULT_ADDR: "http://127.0.0.1:8200"
    ports:
      - "8200:8200"
    volumes:
      - ./vault:/vault/file
      - ./unseal:/vault/unseal
      - ./docker-entrypoint.sh:/usr/local/bin/docker-entrypoint.sh
    cap_add:
      - IPC_LOCK
The command is the command to execute. This is what ends up in the "$@" & part of the script changes.
I've added VAULT_SUPPLEMENTAL_CONFIG for the non-dev run. It needs to specify the interfaces, and it needs to turn off TLS. I added the ui option, so I can access it using http://127.0.0.1:8200/ui. This is part of the changes I made to the script.
Because this is all local, for test purposes, I'm mounting ./vault as the data directory, ./unseal as the place to record the unseal code, and ./docker-entrypoint.sh as the entrypoint script.
I can docker-compose up this and it launches a persistent vault. There are some errors in the log as it tries to unseal before the server has launched, but it works, and it persists across multiple docker-compose runs.
Again, this is completely unsuitable for any form of long-term use. You're better off using docker's own secrets engine if you're doing things like this.
I'd like to suggest a simpler solution for local development with docker-compose.
Vault is always unsealed
Vault UI is enabled and accessible at http://localhost:8200/ui/vault on your dev machine
Vault has a predefined root token which can be used by services to communicate with it
docker-compose.yml
vault:
  hostname: vault
  container_name: vault
  image: vault:1.12.0
  environment:
    VAULT_ADDR: "http://0.0.0.0:8200"
    VAULT_API_ADDR: "http://0.0.0.0:8200"
  ports:
    - "8200:8200"
  volumes:
    - ./volumes/vault/file:/vault/file:rw
  cap_add:
    - IPC_LOCK
  entrypoint: vault server -dev -dev-listen-address="0.0.0.0:8200" -dev-root-token-id="root"
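To verify the setup from the host, the standard Vault CLI can be used (a quick sketch; the root token matches the -dev-root-token-id above):
export VAULT_ADDR=http://127.0.0.1:8200
vault status                        # should report Sealed: false
vault login root                    # authenticate with the predefined root token
vault kv put secret/demo foo=bar    # write a test secret to confirm it works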

Prisma Deploy Docker error "Could not connect to server"

These are the steps I have done:
prisma init
I set postgresql as the database in my local environment (it does not exist yet).
It created 3 files: datamodel.graphql, docker-compose.yml, prisma.yml
docker-compose up -d
I confirmed it was running successfully.
But when I call prisma deploy, it shows me this error:
Could not connect to server at http://localhost:4466. Please check if your server is running.
All I have done is the standard operation described in the manual, and there is no customization in
https://www.prisma.io/docs/tutorials/deploy-prisma-servers/local-(docker)-meemaesh3k
And this is docker-compose.yml
version: '3'
services:
  prisma:
    image: prismagraphql/prisma:1.11
    restart: always
    ports:
      - "4466:4466"
    environment:
      PRISMA_CONFIG: |
        port: 4466
        # uncomment the next line and provide the env var PRISMA_MANAGEMENT_API_SECRET=my-secret to activate cluster security
        # managementApiSecret: my-secret
        databases:
          default:
            connector: postgres
            host: localhost
            port: '5432'
            database: databasename
            schema: public
            user: postgres
            password: root
            migrations: true
What am I missing?
I found this solution to the same problem I was facing:
docker-machine ip default
Take the IP returned by the above command and use it to replace "localhost" in the prisma.yml file, so the endpoint looks something like this:
endpoint: http://1xx.1xx.xx.xxx:4466
The answer is referred from this Github Link
The documentation mentions:
docker ps
You should see output similar to this:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2b799c529e73 prismagraphql/prisma:1.7 "/bin/sh -c /app/sta…" 17 hours ago Up 7 hours 0.0.0.0:4466->4466/tcp myapp_prisma_1
757dfba212f7 mysql:5.7 "docker-entrypoint.s…" 17 hours ago
(Here shown with mysql, but valid with postgresql too)
The point is: there should be two containers running, not one.
Check docker-compose logs to see why the second one (database) did not start.
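For example (generic commands; the service names come from your own docker-compose.yml):
docker-compose ps      # list the services and their current state
docker-compose logs    # show the logs of all services, including the database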
Instead of docker-compose up -d
USE:
docker-compose up
and keep the window running, which will keep localhost:4466 alive.
Note: If you want to connect to the database created in docker, you need to map the port in the following way:
docker run --name <ENTER_NAME> -e POSTGRES_PASSWORD=<ENTER_PASSWORD> -d -p 5433:5432 postgres
In the above case PORT(5433) = HOST_PORT and PORT(5432) = CONTAINER_PORT
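With that mapping, a client on the host connects through the host port rather than the container port, for example (a sketch, assuming the psql client is installed on the host):
psql -h localhost -p 5433 -U postgres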
