Create two InfluxDB databases from a docker-compose YAML file - docker

I want to create two databases while running the docker-compose up command.
Below is the solution I have tried, but it didn't work:
version: '3.2'
services:
  influxdb:
    image: influxdb
    env_file: configuration.env
    ports:
      - '8086:8086'
    volumes:
      - 'influxdb:/var/lib/influxdb'
    environment:
      - INFLUXDB_DB=testDB
    command: sh -c Sample.sh
The error I am getting: influxdb_1_170f324e55e3 | sh: 1: Sample.sh: not found
Inside Sample.sh I have a curl command which, when run standalone, creates another database.

You should not override the run command of the InfluxDB container; if you override the CMD, you also need to start the influxd process yourself. It is better to use an init script and let the image populate the database at startup.
Initialization Files
If the Docker image finds any files with the extensions .sh or
.iql inside of the /docker-entrypoint-initdb.d folder, it will
execute them. The order they are executed in is determined by the
shell. This is usually alphabetical order.
Manually Initializing the Database
To manually initialize the database and exit, the /init-influxdb.sh
script can be used directly. It takes the same parameters as the
influxd run command. As an example:
$ docker run --rm \
-e INFLUXDB_DB=db0 -e INFLUXDB_ADMIN_ENABLED=true \
-e INFLUXDB_ADMIN_USER=admin -e INFLUXDB_ADMIN_PASSWORD=supersecretpassword \
-e INFLUXDB_USER=telegraf -e INFLUXDB_USER_PASSWORD=secretpassword \
-v $PWD:/var/lib/influxdb \
influxdb /init-influxdb.sh
You can check the entrypoint of the official InfluxDB image and explore the database initialization section on the official page.
So you need to place your script in a .iql or .sh file and mount its location in docker-compose:
volumes:
  - 'influxdb:/var/lib/influxdb'
  - ./init.db/init.iql:/docker-entrypoint-initdb.d/init.iql
It is better to create the database using InfluxQL: add the line below to your script and save it as init.iql.
CREATE DATABASE "NOAA_water_database"
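If you would rather keep a shell script (closer to the original Sample.sh approach), a rough sketch of an equivalent .sh init file could look like the one below. The file name is an assumption, and it relies on the official 1.x init mechanism running the script while a temporary influxd instance is already up:
#!/bin/bash
# hypothetical /docker-entrypoint-initdb.d/create-db.sh
# the official influxdb 1.x entrypoint executes this during initialization,
# so the bundled influx client can reach the temporary influxd instance
influx -execute 'CREATE DATABASE "NOAA_water_database"'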
You need to update the Dockerfile as well:
FROM influxdb
COPY init.iql /docker-entrypoint-initdb.d/
Now you can remove the command override, and the image's entrypoint should create the database:
version: '3.2'
services:
  influxdb:
    build: .
    env_file: configuration.env
    ports:
      - '8086:8086'
    volumes:
      - 'influxdb:/var/lib/influxdb'
    environment:
      - INFLUXDB_DB=testDB
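Once the container is up, you can check that both testDB (from INFLUXDB_DB) and the database created by init.iql exist. A hedged verification step, assuming the 1.x influx CLI is available in the image:
docker-compose exec influxdb influx -execute 'SHOW DATABASES'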

Related

unknown binary file with same name as docker volume

I have built a backup system for a docker data volume according to https://stackoverflow.com/a/56432886/3551483.
docker-compose.yml
version: '3.1'
services:
  redmine:
    image: redmine
    restart: always
    ports:
      - 8080:3000
    volumes:
      - dbdata:/usr/src/redmine/sqlite
  db_backup:
    image: alpine
    tty: false
    environment:
      - TARGET=dbdata
    volumes:
      - /opt/admin/redmine/backup:/backup
      - dbdata:/volume
    command: sh -c "tar cvzf /backup/$${TARGET} -C /volume ./redmine.db"
  db_restore:
    image: alpine
    environment:
      - SOURCE=dbdata
    volumes:
      - /opt/admin/redmine/backup:/backup
      - dbdata:/volume
    command: sh -c "rm -rf /volume/* /volume/..?* /volume/.[!.]* ; tar -C /volume/ -xvzf /backup/$${SOURCE}"
volumes:
  dbdata:
backup-script:
#!/bin/bash
set -e
servicefile="/opt/admin/redmine/docker-compose.yml"
servicename="redmine"
backupfilename="redmine_backup_$(date +"%y-%m-%d").tgz"
printf "Backing up redmine to %s...\n" "backup/${backupfilename}"
docker-compose -f ${servicefile} stop ${servicename}
docker-compose -f ${servicefile} run --rm -e TARGET=${backupfilename} db_backup
docker-compose -f ${servicefile} start ${servicename}
This works fine, yet whenever I execute the backup script, a binary file called dbdata is saved alongside redmine_backup_$(date +"%y-%m-%d").tgz in /opt/admin/redmine/backup.
This file is always as large as the tgz itself, yet it is a binary file. I cannot pinpoint why it is created or what its purpose is. It is quite clearly related to the named Docker volume, as the name changes when I change the volume name.

Docker compose not parsing env variables for the config file inside the container

We are trying out some Docker containers locally. For security purposes, the user and password are set as env variables referenced in the config file. The config file is mounted as a volume in the docker-compose file for one of the APIs. After docker-compose up, inside the container we still see the variable names and not the env variable values.
Config file inside the container copied as volume:
dbconfig:
  dbuser: ${USER}
  dbpass: ${PASSWORD}
  dbname:
  dbdrivername:
  tablename:
docker-compose.yaml:
services:
  api:
    image: ${API_IMAGE:api}:${VERSION:-latest}
    ports:
      - 8080:8080
    environment:
      - "USER=${USER}"
      - "PASSWORD=${PASSWORD}"
    volumes:
      - ./conf/config.yaml:/etc/api.yaml
    command: ["-config", "/etc/api.yaml"]
Config.yaml:
dbconfig:
  dbuser: ${USER}
  dbpass: ${PASSWORD}
  dbname:
  dbdrivername:
  tablename:
Please help us get rid of this error, as we are new to testing with Docker.
The issue was fixed with the solution mentioned here: How to run 2 different commands from docker-compose command.
We added a sed command to the entrypoint script which searches for the env variable placeholders inside the config and replaces them with their values. The env variables are passed from docker-compose for the service:
sed \
  -e "s/USER/${USER}/g" \
  -e "s/PASSWORD/${PASSWORD}/g" \
  -i /etc/api.yaml

docker-compose and docker: need to initialize with dummy data only the first time. What's the best way to do this?

I have a Django docker image and I am using docker-compose to start it along with PostgreSQL.
# docker-compose -p torsha-single -f ./github/docker-compose.yml --project-directory ./FINAL up --build --force-recreate --remove-orphans
# docker-compose -p torsha-single -f ./github/docker-compose.yml --project-directory ./FINAL exec fastapi /bin/bash
# My version of docker = 18.09.4-ce
# Compose file format supported till version 18.06.0+ is 3.7
version: "3.7"
services:
  postgresql:
    image: "postgres:13-alpine"
    restart: always
    volumes:
      - type: bind
        source: ./DO_NOT_DELETE_postgres_data
        target: /var/lib/postgresql/data
    environment:
      POSTGRES_DB: project
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: abc123
      PGDATA: "/var/lib/postgresql/data/pgdata"
    networks:
      - postgresql_network
  webapp:
    image: "django_image"
    depends_on:
      - postgresql
    ports:
      - 8000:8000
    networks:
      - postgresql_network
networks:
  postgresql_network:
    driver: bridge
Now, after I do docker-compose up for the first time, I have to create dummy data using
docker-compose exec webapp sh -c 'python manage.py migrate';
docker-compose exec webapp sh -c 'python manage.py shell < useful_scripts/intialize_dummy_data.py'
After this I don't need to do the above again.
Where should I place this script so that it checks whether this is the first run and only then executes these commands?
One of the Django documentation's suggestions for Providing initial data for models is to write it as a data migration. That will automatically load the data when you run manage.py migrate. Like other migrations, Django records the fact that it has run in the database itself, so it won't re-run it a second time.
This then reduces the problem to needing to run migrations when your application starts. You can write a shell script that first runs migrations, then runs some other command that's passed as arguments:
#!/bin/sh
python manage.py migrate
exec "$@"
This is exactly the form required for a Docker ENTRYPOINT script. In your Dockerfile, COPY this script in with the rest of your application, set the ENTRYPOINT to run this script, and set the CMD to run the application as before.
# probably already there
COPY . .
# must be JSON-array form
ENTRYPOINT ["./entrypoint.sh"]
# unchanged
CMD python manage.py runserver 0.0.0.0:8000
(If you already have an entrypoint wrapper script, add the migration line there. If your Dockerfile somehow splits ENTRYPOINT and CMD, combine them into a single CMD.)
Having done this, the container will run migrations itself whenever it starts. If this is the first time the container runs, it will also load the seed data. You don't need any manual intervention.
docker-compose run is designed for this type of problem. Pair it with the --rm flag to remove the container when the command completes. Common examples are exactly the sort of migrations and initialization you are trying to accomplish.
This is right out of the manual page for the docker-compose run command:
docker-compose run --rm web python manage.py db upgrade
You can think of this as a sort of disposable container, that does one job, and exits. This technique can also be used for scheduled jobs with cron.
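Applied to the compose file in the question (the service name webapp and the script path are taken from the question; this is just a sketch of the one-off pattern):
# run once after the stack is up; --rm discards the helper containers afterwards
docker-compose run --rm webapp python manage.py migrate
docker-compose run --rm webapp sh -c 'python manage.py shell < useful_scripts/intialize_dummy_data.py'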

docker-compose run returns /bin/ls cannot execute binary file

I have just started learning Docker, and have run into this issue which I don't know how to get around.
My Dockerfile looks like this:
FROM node:7.0.0
WORKDIR /app
COPY app /app
COPY hermes-entry /usr/local/bin
RUN chmod +x /usr/local/bin/hermes-entry
COPY entry.d /entry.d
RUN npm install
RUN npm install -g gulp
RUN npm install gulp
RUN gulp
My docker-compose.yml looks like this:
version: '2'
services:
  hermes:
    build: .
    container_name: hermes
    volumes:
      - ./app:/app
    ports:
      - "4000:4000"
    entrypoint: /bin/bash
    links:
      - postgres
    depends_on:
      - postgres
    tty: true
  postgres:
    image: postgres
    container_name: postgres
    volumes:
      - ~/.docker-volumes/hermes/postgresql/data:/var/lib/postgresql/data
    environment:
      POSTGRES_PASSWORD: password
    ports:
      - "2345:5432"
After starting the containers up with:
docker-compose up -d
I tried running a simple bash cmd:
docker-compose run hermes ls
And I got this error:
/bin/ls cannot execute binary file
Any idea on what I am doing wrong?
The entrypoint of your container is bash. By default, bash expects a shell script as its first argument, but /bin/ls is a binary, as the error says. If you want to run /bin/ls, you need to pass -c /bin/ls as your command: -c tells bash that the rest of the arguments are a command line rather than the path of a script, and that command line happens to run /bin/ls.
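Concretely, with the service from the question, something along these lines should work:
# -c makes bash treat "ls" as a command line instead of a script path
docker-compose run --rm hermes -c "ls"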
You can't run Gulp and Node at the same time in one container. Containers should always have one process each.
If you just want node to serve files, remove your entrypoint from the hermes service.
You can add another service to run gulp; if you are having it run tests, you'd need to map the same volume and add command: ["gulp"].
You'd also need to remove RUN gulp from your Dockerfile (unless you are using it to build your node files).
Then run docker-compose up.

docker copy file from one container to another?

Here is what I want to do:
docker-compose build
docker-compose $COMPOSE_ARGS run --rm task1
docker-compose $COMPOSE_ARGS run --rm task2
docker-compose $COMPOSE_ARGS run --rm combine-both-task2
docker-compose $COMPOSE_ARGS run --rm selenium-test
And a docker-compose.yml that looks like this:
task1:
  build: ./task1
  volumes_from:
    - task1_output
  command: ./task1.sh

task1_output:
  image: alpine:3.3
  volumes:
    - /root/app/dist
  command: /bin/sh

# essentially I want to copy task1 output into task2 because they each use different images and use different tech stacks...
task2:
  build: ../task2
  volumes_from:
    - task2_output
    - task1_output:ro
  command: /bin/bash -cx "mkdir -p task1 && cp -R /root/app/dist/* ."
So now all the required files are in the task2 container... how would I start up a web server and expose a port with the content in task2?
I am stuck here... how do I access the stuff from task2_output in my combine-tasks/Dockerfile:
combine-both-task2:
  build: ../combine-tasks
  volumes_from:
    - task2_output
In recent versions of docker, named volumes replace data containers as the easy way to share data between containers.
docker volume create --name myshare
docker run -v myshare:/shared task1
docker run -v myshare:/shared -p 8080:8080 task2
...
Those commands will set up one local volume, and the -v myshare:/shared argument will make that share available as the folder /shared inside each container.
To express that in a compose file:
version: '2'
services:
  task1:
    build: ./task1
    volumes:
      - 'myshare:/shared'
  task2:
    build: ./task2
    ports:
      - '8080:8080'
    volumes:
      - 'myshare:/shared'
volumes:
  myshare:
    driver: local
To test this out, I made a small project:
- docker-compose.yml (above)
- task1/Dockerfile
- task1/app.py
- task2/Dockerfile
I used node's http-server as task2/Dockerfile:
FROM node
RUN npm install -g http-server
WORKDIR /shared
CMD http-server
and task1/Dockerfile used python:alpine, to show two different stacks writing and reading.
FROM python:alpine
WORKDIR /app
COPY . .
CMD python app.py
here's task1/app.py
import time

count = 0
while True:
    fname = '/shared/{}.txt'.format(count)
    with open(fname, 'w') as f:
        f.write('content {}'.format(count))
    count = count + 1
    time.sleep(10)
Take those four files and run them via docker-compose up in the directory with docker-compose.yml, then visit $DOCKER_HOST:8080 to see a steadily updated list of files.
Also, I'm using docker version 1.12.0 and compose version 1.8.0 but this should work for a few versions back.
And be sure to check out the docker docs for details I've probably missed here:
https://docs.docker.com/engine/tutorials/dockervolumes/
For me the best way to copy a file from or to a container is using docker cp. For example, if you want to copy schema.xml from the apacheNutch container to the solr container:
docker cp apacheNutch:/root/nutch/conf/schema.xml /tmp/schema.xml
docker cp /tmp/schema.xml solr:/opt/solr-8.1.1/server/solr/configsets/nutch/conf
