How can I create an additional user in influxdb2 with a docker-compose.yml?

I am able to run a docker-compose.yml that starts an influxdb2 instance and configures it with an admin user, org, and bucket. My problem is that I am not able to create an additional user (without admin privileges) via the docker-compose.yml.
I would appreciate if someone could give me a hint.
docker-compose.yml:
version: "3.5"
services:
  influxdb:
    image: influxdb:latest
    container_name: influxdb2
    volumes:
      - influxdb-storage:/etc/influxdb2:rw
      - influxdb-storage:/var/lib/influxdb2:rw
    environment:
      - DOCKER_INFLUXDB_INIT_MODE=setup
      - DOCKER_INFLUXDB_INIT_USERNAME=adminuser
      - DOCKER_INFLUXDB_INIT_PASSWORD=adminpassword
      - DOCKER_INFLUXDB_INIT_ORG=myOrg
      - DOCKER_INFLUXDB_INIT_BUCKET=myBucket
      - DOCKER_INFLUXDB_INIT_ADMIN_TOKEN=randomTokenValue
    ports:
      - "8086:8086"
    restart: unless-stopped
I tried adding an entrypoint to somehow run the following command:
influx user create -n john -p user -o myOrg
but that did not work.

The influxdb:latest image is described in the following repository: influxdata-docker/influxdb/2.6/
If you check these files, you will see that the user is created inside the entrypoint.sh script, in the setup_influxd() function, which is called from main() -> init_influxd().
There is a run_user_scripts() function which runs user-defined scripts on startup, if the directory specified by the ${USER_SCRIPT_DIR} variable exists:
# Allow users to mount arbitrary startup scripts into the container,
# for execution after initial setup/upgrade.
declare -r USER_SCRIPT_DIR=/docker-entrypoint-initdb.d
...
# Execute all shell files mounted into the expected path for user-defined startup scripts.
function run_user_scripts () {
    if [ -d ${USER_SCRIPT_DIR} ]; then
        log info "Executing user-provided scripts" script_dir ${USER_SCRIPT_DIR}
        run-parts --regex ".*sh$" --report --exit-on-error ${USER_SCRIPT_DIR}
    fi
}
I think you could use this functionality to perform the additional steps, but you'll probably need to guard against running them twice. For example:
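A minimal sketch of such a script (the file name, the marker-file guard, and reusing the DOCKER_INFLUXDB_INIT_* variables are my own choices, not something the image prescribes):

#!/bin/bash
# /docker-entrypoint-initdb.d/create-user.sh -- hypothetical name
set -e

# Guard file so the user is only created once (path is an arbitrary choice).
MARKER=/var/lib/influxdb2/.extra-users-created
[ -f "$MARKER" ] && exit 0

# The DOCKER_INFLUXDB_INIT_* variables are plain container env vars, so they
# are visible here and let us authenticate with the admin token from setup.
influx user create \
    --name john \
    --password johnspassword \
    --org "${DOCKER_INFLUXDB_INIT_ORG}" \
    --host http://localhost:8086 \
    --token "${DOCKER_INFLUXDB_INIT_ADMIN_TOKEN}"

touch "$MARKER"

You would then make the script executable (run-parts requires that) and mount its directory into the container, e.g. by adding - ./scripts:/docker-entrypoint-initdb.d to the volumes of the influxdb service.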

Related

Keep running container (dev-container) if there is an error?

I'm currently developing in an Odoo container, but every time the main process terminates while I have VS Code attached, I need to reload to make it connect again.
To prevent this, inside .devcontainer, modify the file docker-compose.yml with the following line:
#....more configs
# Overrides default command so things don't shut down after the process ends.
command: /bin/sh -c "while sleep 1000; do :; done"
This is my devcontainer.json:
// For format details, see https://aka.ms/devcontainer.json. For config options, see the README at:
// https://github.com/microsoft/vscode-dev-containers/tree/v0.245.2/containers/docker-existing-docker-compose
// If you want to run as a non-root user in the container, see .devcontainer/docker-compose.yml.
{
    "name": "Existing Docker Compose (Extend)",
    // Update the 'dockerComposeFile' list if you have more compose files or use different names.
    // The .devcontainer/docker-compose.yml file contains any overrides you need/want to make.
    "dockerComposeFile": [
        "../docker-compose.yml",
        "docker-compose.yml"
    ],
    // The 'service' property is the name of the service for the container that VS Code should
    // use. Update this value and .devcontainer/docker-compose.yml to the real service name.
    "service": "web",
    // The optional 'workspaceFolder' property is the path VS Code should open by default when
    // connected. This is typically a file mount in .devcontainer/docker-compose.yml
    "workspaceFolder": "/home",
    // Use 'forwardPorts' to make a list of ports inside the container available locally.
    // "forwardPorts": [],
    // Uncomment the next line if you want start specific services in your Docker Compose config.
    // "runServices": [],
    // Uncomment the next line if you want to keep your containers running after VS Code shuts down.
    // "shutdownAction": "none",
    // Uncomment the next line to run commands after the container is created - for example installing curl.
    // "postCreateCommand": "apt-get update && apt-get install -y curl",
    // Uncomment to connect as a non-root user if you've added one. See https://aka.ms/vscode-remote/containers/non-root.
    "remoteUser": "root"
}
Even with all this, when I get an error I have to bring the container up again with docker-compose up, which causes VS Code to disconnect.
Inside the container I use a config option that restarts the service every time it detects a code change, similar to nodemon in Node.js; but unlike nodemon, if a fatal "uncompilable" error occurs, the service stops completely and exits with an error code.
How can I avoid this behavior? Is there a way to ignore the error code so I don't have to reload vscode?
UPDATE
This is an example of my docker compose file:
version: '3.1'
services:
  web:
    image: odoo:14.0
    depends_on:
      - db
    ports:
      - "8069:8069"
    volumes:
      - odoo-web-data:/var/lib/odoo
      - ./config:/etc/odoo
      - ./addons:/mnt/extra-addons
    environment:
      - PASSWORD_FILE=/run/secrets/postgresql_password
    secrets:
      - postgresql_password
  db:
    image: postgres:13
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_PASSWORD_FILE=/run/secrets/postgresql_password
      - POSTGRES_USER=odoo
      - PGDATA=/var/lib/postgresql/data/pgdata
    volumes:
      - odoo-db-data:/var/lib/postgresql/data/pgdata
    secrets:
      - postgresql_password
volumes:
  odoo-web-data:
  odoo-db-data:
secrets:
  postgresql_password:
    file: odoo_pg_pass
Odoo Image
You need to add tty: true, so that bash can create an interactive session and the container stays running:
# docker-compose.yml
services:
  container-name:
    image: $IMAGE
    ...
    tty: true   # docker run -t
    ...
See https://docs.docker.com/engine/reference/run/#foreground
In foreground mode (the default when -d is not specified), docker run
can start the process in the container and attach the console to the
process’s standard input, output, and standard error. It can even
pretend to be a TTY (this is what most command line executables
expect) and pass along signals.
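Combining both ideas for the dev container, the override compose file could look like this (a sketch; the web service name comes from the devcontainer.json above):

# .devcontainer/docker-compose.yml -- override sketch
services:
  web:
    # Keep the container alive even if the process you run inside it exits.
    command: /bin/sh -c "while sleep 1000; do :; done"
    tty: true   # docker run -t

With this, a fatal error in the service you start inside the container no longer takes the container down, so the attached VS Code session survives.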

Running docker-entrypoint-initdb.d file is not working in docker-compose MongoDB

I have a docker-compose that looks like this:
stock-trading-system-db:
  container_name: stock-trading-system-db
  image: mongo
  restart: always
  ports:
    - "27017:27017"
  environment:
    - MONGO_INITDB_DATABASE=stock-trading-system-db
  volumes:
    - ./mongo-seed/import.js:/docker-entrypoint-initdb.d/import.js
    - ./mongo-seed/mongod.conf:/etc/mongod.conf
    - ./mongo/data:/data/db
    - ./mongo-seed/MOCK_DATA.csv:/mongo-seed/MOCK_DATA.csv
and import.js looks like this:
let exec = require('child_process').exec

let command1 = 'mongod --config /etc/mongod.conf'
let command2 = 'mongo --eval "rs.initiate();"'
let command3 = 'mongoimport --host=mongo1 --db=stock-trading-system-db --collection=stocks --type=csv --headerline --file=/mongo-seed/MOCK_DATA.csv'

exec(command1, (err, stdout, stderr) => {
    // check for errors or if it was successful
    if (!err) {
        exec(command2, (err, stdout, stderr) => {
            // check for errors or if it was successful
            if (!err) {
                exec(command3, (err, stdout, stderr) => {
                    // check for errors or if it was successful
                    if (!err) {
                        console.log('MongoDB seed was successful');
                    }
                })
            }
        })
    }
})
But it doesn't seem like import.js even gets recognised by the container.
The mongo docs say the following:
This variable allows you to specify the name of a database to be used for creation scripts in /docker-entrypoint-initdb.d/*.js (see Initializing a fresh instance below). MongoDB is fundamentally designed for "create on first use", so if you do not insert data with your JavaScript files, then no database is created.
This is in relation to the MONGO_INITDB_DATABASE variable in the docker-compose file, which I have included.
Where am I going wrong?
P.S. This is all to try and get a single-node replica set working in a mongo container so that I can use change streams in my Node app, so if you know an easier way to do this while also importing a CSV file into the DB, please mention it :)
First, you shouldn't be trying to start mongod in your init script;
the image already does this for you. When you run your first command:
mongod --config /etc/mongod.conf
That's it. That command doesn't exit, and nothing progresses beyond
that point, so your additional commands will not execute nor will the
normal container startup process complete.
So don't do that.
Next, if you want to run rs.initiate() (or any other javascript
command) as part of the startup process, you don't need to wrap it in
a call to mongo --eval. Just drop in a .js file that contains:
rs.initiate()
And if you're just going to run a shell command, it's probably easiest just to drop that in a .sh file.
I would replace your import.js script with two separate files. In rs-initiate.js, I would have the call to rs.initiate(), as above, and in import.sh I would have your call to mongoimport...
#!/bin/sh
mongoimport \
    --db=stock-trading-system-db \
    --collection=stocks \
    --type=csv \
    --headerline \
    --file=/mongo-seed/MOCK_DATA.csv
...so that the docker-compose.yaml would look like:
version: "3"
services:
  mongo:
    image: docker.io/mongo:latest
    ports:
      - "27017:27017"
    environment:
      - MONGO_INITDB_DATABASE=stock-trading-system-db
    volumes:
      - ./mongo-seed/import.sh:/docker-entrypoint-initdb.d/import.sh
      - ./mongo-seed/rs-initiate.js:/docker-entrypoint-initdb.d/rs-initiate.js
      - ./mongo-seed/MOCK_DATA.csv:/mongo-seed/MOCK_DATA.csv
      - ./mongo-seed/mongod.conf:/etc/mongod.conf
      - mongo_data:/data/db
volumes:
  mongo_data:
I'm using a named volume for the mongo data directory here, rather than a bind mount, but that's not necessary; everything will work fine with the bind mount as long as you have permissions configured correctly.
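One more note on the replica-set goal: rs.initiate() will only succeed if mongod itself is started with replication enabled. A minimal sketch of what the mounted mongod.conf would need (the set name rs0 is an arbitrary choice):

# mongod.conf -- minimal sketch; "rs0" is an arbitrary replica set name
replication:
  replSetName: rs0

Note that the official image does not load /etc/mongod.conf by default; you would pass it explicitly on the mongo service, e.g. command: ["mongod", "--config", "/etc/mongod.conf"].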

Does docker-compose support init container?

Init containers are a great feature in Kubernetes, and I wonder whether docker-compose supports them. They allow me to run some commands before launching the main application.
I came across this issue, https://github.com/docker/compose-cli/issues/1499, which mentions support for init containers, but I can't find any related docs in their reference.
This was a discovery for me, but yes, it is now possible to use init containers with docker-compose since version 1.29, as can be seen in the issue you linked in your question.
As I write these lines, it seems that this feature has not yet found its way into the documentation.
You can define a dependency on another container with a condition that is basically "when that other container has successfully finished its job". This leaves room to define containers that run any kind of script and exit when they are done, before another dependent container is launched.
To illustrate, I crafted an example with a pretty common scenario: spin up a db container, make sure the db is up and initialize its data prior to launching the application container.
Note: initializing the db (at least as far as the official mysql image is concerned) does not require an init container, so this example is more an illustration than a rock-solid typical workflow.
The complete example is available in a public github repo so I will only show the key points in this answer.
Let's start with the compose file
---
x-common-env: &cenv
  MYSQL_ROOT_PASSWORD: totopipobingo

services:
  db:
    image: mysql:8.0
    command: --default-authentication-plugin=mysql_native_password
    environment:
      <<: *cenv

  init-db:
    image: mysql:8.0
    command: /initproject.sh
    environment:
      <<: *cenv
    volumes:
      - ./initproject.sh:/initproject.sh
    depends_on:
      db:
        condition: service_started

  my_app:
    build:
      context: ./php
    environment:
      <<: *cenv
    volumes:
      - ./index.php:/var/www/html/index.php
    ports:
      - 9999:80
    depends_on:
      init-db:
        condition: service_completed_successfully
You can see I define 3 services:
The database, which is the first to start.
The init container, which starts only once db is started. It only runs a script (see below) that exits once everything is initialized.
The application container, which will only start once the init container has successfully done its job.
The initproject.sh script run by the init-db container is very basic for this demo: it simply retries connecting to the db every 2 seconds until it succeeds or reaches a limit of 50 tries, then creates a db/table and inserts some data:
#! /usr/bin/env bash
# Test we can access the db container allowing for start
for i in {1..50}; do mysql -u root -p${MYSQL_ROOT_PASSWORD} -h db -e "show databases" && s=0 && break || s=$? && sleep 2; done
if [ ! $s -eq 0 ]; then exit $s; fi
# Init some stuff in db before leaving the floor to the application
mysql -u root -p${MYSQL_ROOT_PASSWORD} -h db -e "create database my_app"
mysql -u root -p${MYSQL_ROOT_PASSWORD} -h db -e "create table my_app.test (id int unsigned not null auto_increment primary key, myval varchar(255) not null)"
mysql -u root -p${MYSQL_ROOT_PASSWORD} -h db -e "insert into my_app.test (myval) values ('toto'), ('pipo'), ('bingo')"
The Dockerfile for the app container is trivial (adding a mysqli driver for PHP) and can be found in the example repo, as well as the PHP script to test that the init was successful by calling http://localhost:9999 in your browser.
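Presumably something along these lines (a sketch, not necessarily the repo's exact file):

# Dockerfile -- sketch of the app image: official PHP + the mysqli extension
FROM php:8.0-apache
RUN docker-php-ext-install mysqli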
The interesting part is to observe what's going on when launching the service with docker-compose up -d.
The only limit to what can be done with such a feature is probably your imagination ;) Thanks for making me discover this.

Apache Nifi (on docker): only one of the HTTP and HTTPS connectors can be configured at one time error

I have a problem adding authentication, due to new requirements, while using Apache NiFi (NiFi) without SSL, running it in a container.
The image version is apache/nifi:1.13.0
SSL is said to be unconditionally required to add authentication, and it is recommended to use the tls-toolkit in the NiFi image to add SSL. I worked through the following process:
I removed the environment variable nifi.web.http.port used for HTTP communication, and started the container in standalone mode with nifi.web.https.port=9443:
docker-compose up
I entered the container and ran the tls-toolkit script from the NiFi toolkit:
cd /opt/nifi/nifi-toolkit-1.13.0/bin && \
sh tls-toolkit.sh standalone \
    -n 'localhost' \
    -C 'CN=yangeok,OU=nifi' \
    -O -o $NIFI_HOME/conf
Attempt 1
I organized the files in the directory $NIFI_HOME/conf. Three files, keystore.jks, truststore.jks, and nifi.properties, were created in the folder localhost, named after the value of the -n option of the tls-toolkit script.
cd $NIFI_HOME/conf &&
cp localhost/*.jks .
I did not overwrite $NIFI_HOME/conf/nifi.properties with the generated $NIFI_HOME/conf/localhost/nifi.properties as a whole; only the following properties were imported into $NIFI_HOME/conf/nifi.properties:
nifi.web.http.host=
nifi.web.http.port=
nifi.web.https.host=localhost
nifi.web.https.port=9443
Restarted container
docker-compose restart
The container died with the following error log:
Only one of the HTTP and HTTPS connectors can be configured at one time
Attempt 2
After executing the tls-toolkit script, all files were overwritten, including nifi.properties:
cd $NIFI_HOME/conf &&
cp localhost/* .
Restarted container
docker-compose restart
The container died with the same error log
Hint
The dead container's volume was still accessible, so I copied and checked nifi.properties; after docker-compose up or restart, it changed as follows.
The part I overwrote or modified:
nifi.web.http.host=
nifi.web.http.port=
nifi.web.http.network.interface.default=
#############################################
nifi.web.https.host=localhost
nifi.web.https.port=9443
The changed part after re-executing the container:
nifi.web.http.host=a8e283ab9421
nifi.web.http.port=9443
nifi.web.http.network.interface.default=
#############################################
nifi.web.https.host=a8e283ab9421
nifi.web.https.port=9443
I'd like to know how to run the container with http.host and http.port empty. My docker-compose.yml file is as follows:
version: '3'
services:
  nifi:
    build:
      context: .
      args:
        NIFI_VERSION: ${NIFI_VERSION}
    container_name: nifi
    user: root
    restart: unless-stopped
    network_mode: bridge
    ports:
      - ${NIFI_HTTP_PORT}:8080/tcp
      - ${NIFI_HTTPS_PORT}:9443/tcp
    volumes:
      - ./drivers:/opt/nifi/nifi-current/drivers
      - ./templates:/opt/nifi/nifi-current/templates
      - ./data:/opt/nifi/nifi-current/data
    environment:
      TZ: 'Asia/Seoul'
      ########## JVM ##########
      NIFI_JVM_HEAP_INIT: ${NIFI_HEAP_INIT} # The initial JVM heap size.
      NIFI_JVM_HEAP_MAX: ${NIFI_HEAP_MAX} # The maximum JVM heap size.
      ########## Web ##########
      # NIFI_WEB_HTTP_HOST: ${NIFI_HTTP_HOST} # nifi.web.http.host
      # NIFI_WEB_HTTP_PORT: ${NIFI_HTTP_PORT} # nifi.web.http.port
      NIFI_WEB_HTTPS_HOST: ${NIFI_HTTPS_HOST} # nifi.web.https.host
      NIFI_WEB_HTTP_PORT: ${NIFI_HTTPS_PORT} # nifi.web.https.port
Thank you

Local Vault using docker-compose

I'm having big trouble running Vault in docker-compose.
My requirements are :
running as a daemon (so restarting when I restart my Mac)
secret being persisted between container restart
no human intervention between restart (unsealing, etc.)
using a generic token
My current docker-compose.yml:
version: '2.3'
services:
  vault-dev:
    image: vault:1.2.1
    restart: always
    container_name: vault-dev
    environment:
      VAULT_DEV_ROOT_TOKEN_ID: "myroot"
      VAULT_LOCAL_CONFIG: '{"backend": {"file": {"path": "/vault/file"}}, "default_lease_ttl": "168h", "max_lease_ttl": "720h"}'
    ports:
      - "8200:8200"
    volumes:
      - ./storagedc/vault/file:/vault/file
However, when the container restarts, I get the following log:
==> Vault server configuration:
    Api Address: http://0.0.0.0:8200
    Cgo: disabled
    Cluster Address: https://0.0.0.0:8201
    Listener 1: tcp (addr: "0.0.0.0:8200", cluster address: "0.0.0.0:8201", max_request_duration: "1m30s", max_request_size: "33554432", tls: "disabled")
    Log Level: info
    Mlock: supported: true, enabled: false
    Storage: file
    Version: Vault v1.2.1

Error initializing Dev mode: Vault is already initialized
Is there any recommendation on that matter?
I'm going to pseudo-code an answer to work around the problems specified, but please note that this is a massive hack and should NEVER be deployed in production as a hard-coded master key and single unseal key is COLOSSALLY INSECURE.
So, you want a test vault server, with persistence.
You can accomplish this, but it will take a little work because of the default behavior of the vault container: if you just start it, you get a dev-mode container, which won't allow for persistence. Just adding persistence via the environment variable won't solve the problem entirely, because it conflicts with the default start mode of the container.
So we need to replace the entrypoint script with something that does what we want instead.
First we copy the script out of the container:
$ docker create --name vault vault:1.2.1
$ docker cp vault:/usr/local/bin/docker-entrypoint.sh .
$ docker rm vault
For simplicity, we're going to edit the file and mount it into the container using the docker-compose file. I'm not going to make it really functional, just enough to get it to do what's desired. The entire point here is a sample, not something that is usable in production.
My customizations all start at about line 98 - first we launch a dev-mode server in order to record the unseal key, then we terminate the dev mode server.
# Here's my customization:
if [ ! -f /vault/unseal/sealfile ]; then
    # start in dev mode, in the background, to record the unseal key
    su-exec vault vault server \
        -dev -config=/vault/config \
        -dev-root-token-id="$VAULT_DEV_ROOT_TOKEN_ID" \
        2>&1 | tee /vault/unseal/sealfile &
    while ! grep -q 'core: vault is unsealed' /vault/unseal/sealfile; do
        sleep 1
    done
    kill %1
fi
Next we check for supplemental config. This is where the extra config goes for disabling TLS, and for binding the appropriate interface.
if [ -n "$VAULT_SUPPLEMENTAL_CONFIG" ]; then
    echo "$VAULT_SUPPLEMENTAL_CONFIG" > "$VAULT_CONFIG_DIR/supplemental.json"
fi
Then we launch vault in 'release' mode:
if [ "$(id -u)" = '0' ]; then
    set -- su-exec vault "$@"
    "$@" &
Then we get the unseal key from the sealfile:
    unseal=$(sed -n 's/Unseal Key: //p' /vault/unseal/sealfile)
    if [ -n "$unseal" ]; then
        while ! vault operator unseal "$unseal"; do
            sleep 1
        done
    fi
We just wait for the process to terminate:
    wait
    exit $?
fi
There's a full gist for this on GitHub.
Now, the docker-compose.yml for doing this is slightly different from your own:
version: '2.3'
services:
  vault-dev:
    image: vault:1.2.1
    restart: always
    container_name: vault-dev
    command: [ 'vault', 'server', '-config=/vault/config' ]
    environment:
      VAULT_DEV_ROOT_TOKEN_ID: "myroot"
      VAULT_LOCAL_CONFIG: '{"backend": {"file": {"path": "/vault/file"}}, "default_lease_ttl": "168h", "max_lease_ttl": "720h"}'
      VAULT_SUPPLEMENTAL_CONFIG: '{"ui":true, "listener": {"tcp":{"address": "0.0.0.0:8200", "tls_disable": 1}}}'
      VAULT_ADDR: "http://127.0.0.1:8200"
    ports:
      - "8200:8200"
    volumes:
      - ./vault:/vault/file
      - ./unseal:/vault/unseal
      - ./docker-entrypoint.sh:/usr/local/bin/docker-entrypoint.sh
    cap_add:
      - IPC_LOCK
The command is the command to execute. This is what ends up in the "$@" & part of the script changes.
I've added VAULT_SUPPLEMENTAL_CONFIG for the non-dev run. It needs to specify the interfaces and it needs to turn off TLS. I added the UI, so I can access it at http://127.0.0.1:8200/ui. This is part of the changes I made to the script.
Because this is all local, for my test purposes, I'm mounting ./vault as the data directory, ./unseal as the place to record the unseal code, and ./docker-entrypoint.sh as the entrypoint script.
I can docker-compose up this and it launches a persistent vault. There are some errors in the log as the script tries to unseal before the server has launched, but it works and persists across multiple docker-compose runs.
Again, let me stress that this is completely unsuitable for any form of long-term use. You're better off using Docker's own secrets engine if you're doing things like this.
I'd like to suggest a simpler solution for local development with docker-compose.
Vault is always unsealed
Vault UI is enabled and accessible at http://localhost:8200/ui/vault on your dev machine
Vault has predefined root token which can be used by services to communicate with it
docker-compose.yml
vault:
  hostname: vault
  container_name: vault
  image: vault:1.12.0
  environment:
    VAULT_ADDR: "http://0.0.0.0:8200"
    VAULT_API_ADDR: "http://0.0.0.0:8200"
  ports:
    - "8200:8200"
  volumes:
    - ./volumes/vault/file:/vault/file:rw
  cap_add:
    - IPC_LOCK
  entrypoint: vault server -dev -dev-listen-address="0.0.0.0:8200" -dev-root-token-id="root"
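With this running, a quick way to check the setup from your dev machine is the standard Vault CLI (a sketch; secret/ is the KV mount that dev mode enables by default):

# Point the CLI at the dev server and log in with the predefined root token.
export VAULT_ADDR=http://127.0.0.1:8200
vault login root

# Write and read back a test secret.
vault kv put secret/myapp db_password=changeme
vault kv get secret/myapp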
