Running docker-entrypoint-initdb.d file is not working in docker-compose MongoDB - docker

I have a docker-compose that looks like this:
stock-trading-system-db:
  container_name: stock-trading-system-db
  image: mongo
  restart: always
  ports:
    - "27017:27017"
  environment:
    - MONGO_INITDB_DATABASE=stock-trading-system-db
  volumes:
    - ./mongo-seed/import.js:/docker-entrypoint-initdb.d/import.js
    - ./mongo-seed/mongod.conf:/etc/mongod.conf
    - ./mongo/data:/data/db
    - ./mongo-seed/MOCK_DATA.csv:/mongo-seed/MOCK_DATA.csv
and import.js looks like this:
let exec = require('child_process').exec
let command1 = 'mongod --config /etc/mongod.conf'
let command2 = 'mongo --eval "rs.initiate();"'
let command3 = 'mongoimport --host=mongo1 --db=stock-trading-system-db --collection=stocks --type=csv --headerline --file=/mongo-seed/MOCK_DATA.csv'
exec(command1, (err, stdout, stderr) => {
  // check for errors or if it was successful
  if (!err) {
    exec(command2, (err, stdout, stderr) => {
      // check for errors or if it was successful
      if (!err) {
        exec(command3, (err, stdout, stderr) => {
          // check for errors or if it was successful
          if (!err) {
            console.log('MongoDB seed was successful');
          }
        })
      }
    })
  }
})
But it doesn't seem like import.js even gets recognised by the container.
The mongo docs say the following:
This variable allows you to specify the name of a database to be used for creation scripts in /docker-entrypoint-initdb.d/*.js (see Initializing a fresh instance below). MongoDB is fundamentally designed for "create on first use", so if you do not insert data with your JavaScript files, then no database is created.
This is in relation to the MONGO_INITDB_DATABASE variable in the docker-compose file, which I have included.
Where am I going wrong?
P.S. This is all to get a single-node replica set working in a mongo container so that I can use change streams in my Node app, so if you know an easier way to do this while also importing a CSV file into the DB, then please mention it :)

First, you shouldn't be trying to start mongod in your init script;
the image already does this for you. When you run your first command:
mongod --config /etc/mongod.conf
That's it. That command doesn't exit, and nothing progresses beyond
that point, so your additional commands will not execute nor will the
normal container startup process complete.
So don't do that.
Next, if you want to run rs.initiate() (or any other javascript
command) as part of the startup process, you don't need to wrap it in
a call to mongo --eval. Just drop in a .js file that contains:
rs.initiate()
And if you're just going to run a shell command, it's probably easiest just to drop that in a .sh file.
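One detail worth noting: the entrypoint executes the files in /docker-entrypoint-initdb.d in alphabetical order, so if one script must run before another, prefix the file names with numbers. A quick sketch (the file names here are hypothetical):

```shell
# Init scripts run in alphabetical order, so numeric prefixes control ordering.
mkdir -p demo-initdb
printf 'rs.initiate()\n' > demo-initdb/01-rs-initiate.js
printf 'echo "import would run here"\n' > demo-initdb/02-import.sh
ls demo-initdb  # 01-rs-initiate.js is listed (and would run) first
```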
I would replace your import.js script with two separate files. In rs-initiate.js, I would have the call to rs.initiate(), as above, and in import.sh I would have your call to mongoimport...
#!/bin/sh
mongoimport \
  --db=stock-trading-system-db \
  --collection=stocks \
  --type=csv \
  --headerline \
  --file=/mongo-seed/MOCK_DATA.csv
...so that the docker-compose.yaml would look like:
version: "3"
services:
  mongo:
    image: docker.io/mongo:latest
    ports:
      - "27017:27017"
    environment:
      - MONGO_INITDB_DATABASE=stock-trading-system-db
    volumes:
      - ./mongo-seed/import.sh:/docker-entrypoint-initdb.d/import.sh
      - ./mongo-seed/rs-initiate.js:/docker-entrypoint-initdb.d/rs-initiate.js
      - ./mongo-seed/MOCK_DATA.csv:/mongo-seed/MOCK_DATA.csv
      - ./mongo-seed/mongod.conf:/etc/mongod.conf
      - mongo_data:/data/db
volumes:
  mongo_data:
I'm using a named volume for the mongo data directory here, rather than a bind mount, but that's not necessary; everything will work fine with the bind mount as long as you have permissions configured correctly.
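On the P.S.: for a single-node replica set, the mongod.conf you're mounting would also need a replication section, something like the sketch below (the set name rs0 is my assumption), so that the rs.initiate() call has something to initiate. Note that the official image won't read /etc/mongod.conf unless you pass it to mongod, e.g. with command: --config /etc/mongod.conf in the compose file.

```yaml
# mongo-seed/mongod.conf (sketch; the set name is an assumption)
replication:
  replSetName: rs0
```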

Related

How can I create additional user in influxdb2 with a docker-compose.yml

I am able to run a docker-compose.yml that starts an influxdb2 instance and configures it with an admin user, org and bucket. My problem is that I am not able to create an additional user (without admin privileges) via the docker-compose.yml.
I would appreciate it if someone could give me a hint.
docker-compose.yml:
version: "3.5"
services:
  influxdb:
    image: influxdb:latest
    container_name: influxdb2
    volumes:
      - influxdb-storage:/etc/influxdb2:rw
      - influxdb-storage:/var/lib/influxdb2:rw
    environment:
      - DOCKER_INFLUXDB_INIT_MODE=setup
      - DOCKER_INFLUXDB_INIT_USERNAME=adminuser
      - DOCKER_INFLUXDB_INIT_PASSWORD=adminpassword
      - DOCKER_INFLUXDB_INIT_ORG=myOrg
      - DOCKER_INFLUXDB_INIT_BUCKET=myBucket
      - DOCKER_INFLUXDB_INIT_ADMIN_TOKEN=randomTokenValue
    ports:
      - "8086:8086"
    restart: unless-stopped
I tried adding an entrypoint to somehow run the following command:
influx user create -n john -p user -o myOrg
but that did not work.
The influxdb:latest image is described in the following repository: influxdata-docker/influxdb/2.6/
If you check these files, you will see that the user is created inside the entrypoint.sh script in the setup_influxd() function, called from main() -> init_influxd().
There is a run_user_scripts() function which runs user-defined scripts on startup, if the directory specified by the ${USER_SCRIPT_DIR} variable exists:
# Allow users to mount arbitrary startup scripts into the container,
# for execution after initial setup/upgrade.
declare -r USER_SCRIPT_DIR=/docker-entrypoint-initdb.d
...
# Execute all shell files mounted into the expected path for user-defined startup scripts.
function run_user_scripts () {
  if [ -d ${USER_SCRIPT_DIR} ]; then
    log info "Executing user-provided scripts" script_dir ${USER_SCRIPT_DIR}
    run-parts --regex ".*sh$" --report --exit-on-error ${USER_SCRIPT_DIR}
  fi
}
I think you could use this functionality to do the additional steps. But probably you'll need to guard against running these steps twice.
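A sketch of such a guard, using a marker file on the persistent volume (the helper name, the marker path, and the wrapping of the influx invocation are my assumptions):

```shell
# Run a step at most once across container restarts by recording a marker
# file on a volume that outlives the container.
run_once() {
  marker=$1; shift
  if [ ! -f "$marker" ]; then
    "$@" && touch "$marker"
  fi
}
# In a real /docker-entrypoint-initdb.d/*.sh script this could be:
# run_once /var/lib/influxdb2/.extra-user-created \
#   influx user create -n john -p user -o myOrg
```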

Keep running container (dev-container) if there is an error?

I'm currently developing in an Odoo container, but every time the main process terminates while I have VS Code attached, I need to reload to make it connect again.
To prevent this, inside .devcontainer I modified the docker-compose.yml file with the following line:
#....more configs
# Overrides default command so things don't shut down after the process ends.
command: /bin/sh -c "while sleep 1000; do :; done"
This is my devcontainer.json:
// For format details, see https://aka.ms/devcontainer.json. For config options, see the README at:
// https://github.com/microsoft/vscode-dev-containers/tree/v0.245.2/containers/docker-existing-docker-compose
// If you want to run as a non-root user in the container, see .devcontainer/docker-compose.yml.
{
  "name": "Existing Docker Compose (Extend)",
  // Update the 'dockerComposeFile' list if you have more compose files or use different names.
  // The .devcontainer/docker-compose.yml file contains any overrides you need/want to make.
  "dockerComposeFile": [
    "../docker-compose.yml",
    "docker-compose.yml"
  ],
  // The 'service' property is the name of the service for the container that VS Code should
  // use. Update this value and .devcontainer/docker-compose.yml to the real service name.
  "service": "web",
  // The optional 'workspaceFolder' property is the path VS Code should open by default when
  // connected. This is typically a file mount in .devcontainer/docker-compose.yml
  "workspaceFolder": "/home",
  // Use 'forwardPorts' to make a list of ports inside the container available locally.
  // "forwardPorts": [],
  // Uncomment the next line if you want start specific services in your Docker Compose config.
  // "runServices": [],
  // Uncomment the next line if you want to keep your containers running after VS Code shuts down.
  // "shutdownAction": "none",
  // Uncomment the next line to run commands after the container is created - for example installing curl.
  // "postCreateCommand": "apt-get update && apt-get install -y curl",
  // Uncomment to connect as a non-root user if you've added one. See https://aka.ms/vscode-remote/containers/non-root.
  "remoteUser": "root"
}
Even with all this, when I get an error I have to bring the container up again with docker-compose up, which causes VS Code to disconnect.
Inside the container I use a config option that restarts the service every time it detects a code change, similar to nodemon in Node.js. But unlike nodemon, if a fatal "uncompilable" error occurs, the service stops completely and exits with an error code.
How can I avoid this behavior? Is there a way to ignore the error code so I don't have to reload vscode?
UPDATE
This is an example of my docker compose file:
version: '3.1'
services:
  web:
    image: odoo:14.0
    depends_on:
      - db
    ports:
      - "8069:8069"
    volumes:
      - odoo-web-data:/var/lib/odoo
      - ./config:/etc/odoo
      - ./addons:/mnt/extra-addons
    environment:
      - PASSWORD_FILE=/run/secrets/postgresql_password
    secrets:
      - postgresql_password
  db:
    image: postgres:13
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_PASSWORD_FILE=/run/secrets/postgresql_password
      - POSTGRES_USER=odoo
      - PGDATA=/var/lib/postgresql/data/pgdata
    volumes:
      - odoo-db-data:/var/lib/postgresql/data/pgdata
    secrets:
      - postgresql_password
volumes:
  odoo-web-data:
  odoo-db-data:
secrets:
  postgresql_password:
    file: odoo_pg_pass
Odoo Image
You need to add tty: true, so that bash can create an interactive session and the container will stay up:
#docker-compose.yml
services:
  container-name:
    image: $IMAGE
    ...
    tty: true # docker run -t
    ...
See https://docs.docker.com/engine/reference/run/#foreground
In foreground mode (the default when -d is not specified), docker run
can start the process in the container and attach the console to the
process’s standard input, output, and standard error. It can even
pretend to be a TTY (this is what most command line executables
expect) and pass along signals.

Does docker-compose support init container?

The init container is a great feature in Kubernetes, and I wonder whether docker-compose supports it? It allows me to run some commands before launching the main application.
I came across this PR https://github.com/docker/compose-cli/issues/1499 which mentions support for init containers, but I can't find a related doc in their reference.
This was a discovery for me, but yes, it is now possible to use init containers with docker-compose since version 1.29, as can be seen in the PR you linked in your question.
Meanwhile, as I write these lines, it seems that this feature has not yet found its way into the documentation.
You can define a dependency on another container with a condition being basically "when that other container has successfully finished its job". This leaves room to define containers running any kind of script that exit when they are done, before another dependent container is launched.
To illustrate, I crafted an example with a pretty common scenario: spin up a db container, make sure the db is up and initialize its data prior to launching the application container.
Note: initializing the db (at least as far as the official mysql image is concerned) does not require an init container so this example is more an illustration than a rock solid typical workflow.
The complete example is available in a public github repo so I will only show the key points in this answer.
Let's start with the compose file
---
x-common-env: &cenv
  MYSQL_ROOT_PASSWORD: totopipobingo

services:
  db:
    image: mysql:8.0
    command: --default-authentication-plugin=mysql_native_password
    environment:
      <<: *cenv

  init-db:
    image: mysql:8.0
    command: /initproject.sh
    environment:
      <<: *cenv
    volumes:
      - ./initproject.sh:/initproject.sh
    depends_on:
      db:
        condition: service_started

  my_app:
    build:
      context: ./php
    environment:
      <<: *cenv
    volumes:
      - ./index.php:/var/www/html/index.php
    ports:
      - 9999:80
    depends_on:
      init-db:
        condition: service_completed_successfully
You can see I define 3 services:
The database which is the first to start
The init container which starts only once db is started. This one only runs a script (see below) that will exit once everything is initialized
The application container which will only start once the init container has successfully done its job.
The initproject.sh script run by the init-db container is very basic for this demo: it simply retries connecting to the db every 2 seconds until it succeeds or reaches a limit of 50 tries, then creates a db/table and inserts some data:
#! /usr/bin/env bash
# Test we can access the db container allowing for start
for i in {1..50}; do mysql -u root -p${MYSQL_ROOT_PASSWORD} -h db -e "show databases" && s=0 && break || s=$? && sleep 2; done
if [ ! $s -eq 0 ]; then exit $s; fi
# Init some stuff in db before leaving the floor to the application
mysql -u root -p${MYSQL_ROOT_PASSWORD} -h db -e "create database my_app"
mysql -u root -p${MYSQL_ROOT_PASSWORD} -h db -e "create table my_app.test (id int unsigned not null auto_increment primary key, myval varchar(255) not null)"
mysql -u root -p${MYSQL_ROOT_PASSWORD} -h db -e "insert into my_app.test (myval) values ('toto'), ('pipo'), ('bingo')"
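The one-line retry at the top of the script is the key trick; generalized as a function (the name retry is mine), it reads:

```shell
# Retry a command up to $1 times, sleeping $2 seconds between attempts;
# returns 0 on the first success, 1 if every attempt fails.
retry() {
  attempts=$1; delay=$2; shift 2
  i=1
  while [ "$i" -le "$attempts" ]; do
    if "$@"; then return 0; fi
    sleep "$delay"
    i=$((i + 1))
  done
  return 1
}
# e.g.: retry 50 2 mysql -u root -p"${MYSQL_ROOT_PASSWORD}" -h db -e "show databases"
```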
The Dockerfile for the app container is trivial (adding a mysqli driver for php) and can be found in the example repo, as well as the php script to test that the init was successful by calling http://localhost:9999 in your browser.
The interesting part is to observe what's going on when launching the service with docker-compose up -d.
The only limit to what can be done with such a feature is probably your imagination ;) Thanks for making me discovering this.

Logstash missing config

I have the following issue. Every time I try to set the config for logstash, it doesn't see my file. I am sure that the path is set properly.
Here is the info:
[2018-09-14T09:28:44,073][INFO ][logstash.config.source.local.configpathloader] No config files found in path {:path=>"/home/jakub/IdeaProjects/test/logstash.conf"}
My docker-compose.yml looks following:
logstash:
  image: docker.elastic.co/logstash/logstash:6.4.0
  networks: ['stack']
  ports:
    - "9290:9290"
    - "4560:4560"
  command: logstash -f /home/jakub/IdeaProjects/test/logstash.conf
  depends_on: ['elasticsearch']
and logstash.conf:
input {
  redis {
    host => "redis"
    key => "log4j2"
    data_type => "list"
    password => "RedisTest"
  }
}
output {
  elasticsearch {
    host => "elasticsearch"
  }
}
What am I doing wrong? Can you give me some advice or solve my issue?
Thanks for everything.
Cheers
I guess your logstash.conf is on your host under /home/jakub/IdeaProjects/test/logstash.conf.
So it is not inside your container (unless there is some hidden mount). The command is executed from within the container, so it points to a non-existent file.
So, you may use docker cp /home/jakub/IdeaProjects/test/logstash.conf &lt;container&gt;:/home/jakub/IdeaProjects/test/logstash.conf (provided the directory exists in your container)
... or (better!) mount the path from your host to your container. Such as :
volumes:
- /home/jakub/IdeaProjects/test/logstash.conf:/home/jakub/IdeaProjects/test/logstash.conf:ro
... or to use config (the best option in my opinion if you are in swarm mode!). This is close to the "volume" option above, but you also have to pre-create the config (from the command line or from the docker-compose file).
There are other options, but the main point is that you have to make your file available from within your container!
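Putting the volume option together with the command, a sketch of the corrected service might look like this (the in-container path /usr/share/logstash/pipeline/logstash.conf is my choice; any path works as long as the command points at it):

```yaml
logstash:
  image: docker.elastic.co/logstash/logstash:6.4.0
  networks: ['stack']
  ports:
    - "9290:9290"
    - "4560:4560"
  volumes:
    - /home/jakub/IdeaProjects/test/logstash.conf:/usr/share/logstash/pipeline/logstash.conf:ro
  command: logstash -f /usr/share/logstash/pipeline/logstash.conf
  depends_on: ['elasticsearch']
```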

How to use `echo` in a command in docker-compose.yml to handle a colon (":") sign?

Here is my docker-compose.yml,
elasticsearch:
  ports:
    - 9200:9200/tcp
  image: elasticsearch:2.4
  volumes:
    - /data/elasticsearch/usr/share/elasticsearch/data:/usr/share/elasticsearch/data
  command: /bin/bash -c “echo 'http.cors.enabled: true' > /usr/share/elasticsearch/config/elasticsearch.yml"
it throws the error:
Activating (yaml: [] mapping values are not allowed in this context at line 7, column 49
It looks as if I cannot use the colon sign : in command. Is this true?
The colon is how YAML introduces a dictionary. If you have it in a value, you just need to quote the value, for example like this:
image: "elasticsearch:2.4"
Or by using one of the block scalar indicators, like this:
command: >
  /bin/bash -c “echo 'http.cors.enabled: true' > /usr/share/elasticsearch/config/elasticsearch.yml"
For more information, take a look at the YAML page on Wikipedia. You can always use something like this online YAML parser to test out your YAML syntax.
Properly formatted, your first document should look something like:
elasticsearch:
  ports:
    - 9200:9200/tcp
  image: "elasticsearch:2.4"
  volumes:
    - /data/elasticsearch/usr/share/elasticsearch/data:/usr/share/elasticsearch/data
  command: >
    /bin/bash -c “echo 'http.cors.enabled: true' > /usr/share/elasticsearch/config/elasticsearch.yml"
(The indentation of the list markers (-) from the key isn't strictly necessary, but I find that it helps make things easier to read)
A docker container can only run a single command. If you want to run multiple commands, put them in a shell script and copy that into the image.
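A sketch of that approach for this question's case (the config path defaults to /tmp here only so the sketch runs anywhere; in the image it would be /usr/share/elasticsearch/config/elasticsearch.yml, and the final exec line is commented out for the same reason):

```shell
#!/bin/sh
# Append the CORS setting, then hand control to the image's normal entrypoint.
ES_CONFIG="${ES_CONFIG:-/tmp/elasticsearch.yml}"
echo 'http.cors.enabled: true' >> "$ES_CONFIG"
# exec /docker-entrypoint.sh elasticsearch
```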
