unknown binary file with same name as docker volume - docker

I have built a backup system for a docker data volume according to https://stackoverflow.com/a/56432886/3551483.
docker-compose.yml
version: '3.1'
services:
  redmine:
    image: redmine
    restart: always
    ports:
      - 8080:3000
    volumes:
      - dbdata:/usr/src/redmine/sqlite
  db_backup:
    image: alpine
    tty: false
    environment:
      - TARGET=dbdata
    volumes:
      - /opt/admin/redmine/backup:/backup
      - dbdata:/volume
    command: sh -c "tar cvzf /backup/$${TARGET} -C /volume ./redmine.db"
  db_restore:
    image: alpine
    environment:
      - SOURCE=dbdata
    volumes:
      - /opt/admin/redmine/backup:/backup
      - dbdata:/volume
    command: sh -c "rm -rf /volume/* /volume/..?* /volume/.[!.]* ; tar -C /volume/ -xvzf /backup/$${SOURCE}"
volumes:
  dbdata:
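The db_restore command clears the volume with three globs before unpacking; the trio is needed because `*` alone skips dotfiles. A standalone sketch of the same pattern (run against a temporary directory, not the real volume):

```shell
# Demonstrates the cleanup globs from db_restore: '*' matches normal names,
# '..?*' matches names starting with two dots, and '.[!.]*' matches single-dot
# hidden names; '.' and '..' themselves are never matched.
tmp=$(mktemp -d)
touch "$tmp/file" "$tmp/.hidden" "$tmp/..odd"
rm -rf "$tmp"/* "$tmp"/..?* "$tmp"/.[!.]*
ls -A "$tmp"   # prints nothing: the directory is now empty
rmdir "$tmp"
```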
backup-script:
#!/bin/bash
set -e
servicefile="/opt/admin/redmine/docker-compose.yml"
servicename="redmine"
backupfilename="redmine_backup_$(date +"%y-%m-%d").tgz"
printf "Backing up redmine to %s...\n" "backup/${backupfilename}"
docker-compose -f ${servicefile} stop ${servicename}
docker-compose -f ${servicefile} run --rm -e TARGET=${backupfilename} db_backup
docker-compose -f ${servicefile} start ${containername}
This works fine, yet whenever I execute the backup script, a binary file called dbdata is saved alongside redmine_backup_$(date +"%y-%m-%d").tgz in /opt/admin/redmine/backup.
This file is always the same size as the tgz, yet it is a binary file. I cannot pinpoint why it is created or what its purpose is. It is quite clearly related to the named Docker volume, as the name changes when I change the volume name.

Related

gitlab variable is not accessible in the docker-compose.yml file

I am trying to create a CI/CD pipeline using GitLab and am now facing an issue with a GitLab variable: it is not accessible inside the docker-compose file.
This is my .gitlab-ci.yml file:
step-production:
  stage: production
  before_script:
    - export APP_ENVIRONMENT="$PRODUCTION_APP_ENVIRONMENT"
  only:
    - /^release.*$/
  tags:
    - release-tag
  script:
    - echo production env value is "$PRODUCTION_APP_ENVIRONMENT"
    - sudo curl -L "https://github.com/docker/compose/releases/download/1.26.0/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
    - sudo chmod +x /usr/local/bin/docker-compose
    - sudo docker-compose -f docker-compose.prod.yml build --no-cache
    - sudo docker-compose -f docker-compose.prod.yml up -d
  when: manual
and this is my docker compose file
version: "3"
services:
  redis:
    image: redis:latest
  app:
    build:
      context: .
    environment:
      - APP_ENVIRONMENT=${PRODUCTION_APP_ENVIRONMENT}
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - ./app:/app
    ports:
      - "8000:8000"
    restart: on-failure:5
    # network_mode: "host"
Can someone help me with how to access the GitLab variable inside the docker-compose file? I have spent more than a day on this issue.
The issue was resolved by the following method.
Edit the following line in the .gitlab-ci.yml file:
sudo docker-compose -f docker-compose.prod.yml build --build-arg DB_NAME=$DEVELOPMENT_DB_NAME --build-arg DB_HOST=$DEVELOPMENT_DB_HOST --no-cache
Define the values of $DEVELOPMENT_DB_NAME and $DEVELOPMENT_DB_HOST in the GitLab variables section.
In the Dockerfile, add ARG and ENV entries as follows:
ARG DB_NAME
ARG DB_HOST
ENV DB_NAME=${DB_NAME}
ENV DB_HOST=${DB_HOST}
Make sure that no environment variables with the same names are defined in the docker-compose yml file.
That's it !!!
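As an alternative to passing --build-arg on the command line, the same build args can be declared in the compose file itself. A sketch, assuming the variables are exported in the shell that actually runs docker-compose (service and file names here mirror the question and are otherwise illustrative):

```yaml
# Hypothetical docker-compose.prod.yml fragment: build args declared here are
# substituted from the invoking shell's environment before the build starts.
services:
  app:
    build:
      context: .
      args:
        - DB_NAME=${DEVELOPMENT_DB_NAME}
        - DB_HOST=${DEVELOPMENT_DB_HOST}
```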

why do i need tty: true in docker-compose.yml and other images do not?

I've been looking for the answer for a while, but I haven't found it and I need to understand before I go ahead with my tests.
I am creating an image based on Alpine, installing bash as in the following Dockerfile:
FROM alpine:3.12
RUN apk add --no-cache --upgrade bash rsync gzip \
&& rm -rf /var/cache/apk/*
COPY ./docker/backup/hello.sh /hello.sh
RUN mkdir /backup \
&& chmod u+x /hello.sh
WORKDIR /backup
ENTRYPOINT ["sh","/hello.sh"]
CMD ["/bin/bash"]
hello.sh
#!/bin/sh
echo "=> Hello Word"
echo "$@"
exec "$@"
The first time, I could not access bash with the following command:
docker-compose exec myalpine bash
But after searching, I found the answer: I had to put tty: true in my docker-compose.yml, and then I was able to access the myalpine container shell after launching docker-compose up -d.
The relevant part of my docker-compose.yml is as follows:
services:
  myalpine:
    build:
      context: ./
      dockerfile: ./docker/backup/Dockerfile
      args:
        - DOCKER_ENV=${DOCKER_ENV}
    restart: unless-stopped
    tty: true
    container_name: ${PROJECT_NAME}-files
    volumes:
      - appdata:/app
      - ./data/app/backup:/backup
  mysql:
    build:
      context: ./
      dockerfile: ./docker/mysql/Dockerfile
      args:
        - MYSQL_VERSION=${MYSQL_VERSION}
        - DOCKER_ENV=${DOCKER_ENV}
    restart: always
    container_name: ${PROJECT_NAME}-mysql
    environment:
      MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
      MYSQL_DATABASE: ${MYSQL_DATABASE}
      MYSQL_USER: ${MYSQL_USER}
      MYSQL_PASSWORD: ${MYSQL_PASSWORD}
    volumes:
      - dbdata:/var/lib/mysql
    networks:
      - db
And now my question: why can I access bash in other services of my docker-compose, like mysql, without adding tty: true?
Example:
docker-compose exec mysql bash
I can access it without having added tty: true to the docker-compose.yml, so there must be something in the Alpine image that I don't understand and would like to understand.
I reproduced your example with a cut-down version of the Dockerfile and the docker-compose.yaml and ran docker-compose up. First I ran without a tty attached, with the following result:
~$ docker-compose up
...
...
WARNING: Image for service myalpine was built because it did not already exist. To rebuild this image you must use `docker-compose build` or `docker-compose up --build`.
Creating test-fies ... done
Attaching to test-fies
test-fies | => Hello Word
test-fies | /bin/bash
~# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
9a72817b0e28 tty_up_myalpine "sh /hello.sh /bin/b…" 5 minutes ago Restarting (0) Less than a second ago myalpine-fies
As you can see, the container is restarting all the time. The reason is that hello.sh, as the entrypoint, receives the command /bin/bash and executes it. Bash tries to create an interactive shell, but since there is no tty, creating the shell fails and the container stops. Because the container is marked restart: unless-stopped, it sits in a constant loop of restarting.
Since the container is not running, you are not able to execute docker-compose exec myalpine bash.
Once you add a tty to the container, bash is able to create an interactive session and the container stays up.
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
745f5139f510 tty_up_myalpine "sh /hello.sh /bin/b…" 10 seconds ago Up 9 seconds myalpine-fies
The reason this is not the case with mysql is that that image starts a daemon process, which is a non-interactive process detached from any tty.
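The ENTRYPOINT/CMD handoff described above can be reproduced outside Docker. This sketch mirrors hello.sh receiving the CMD as its arguments, with a plain echo standing in for /bin/bash (file path and arguments are illustrative):

```shell
# Recreate hello.sh and invoke it the way Docker does: the ENTRYPOINT script
# runs first with the CMD appended as arguments; 'exec "$@"' then replaces the
# shell process with the CMD.
cat > /tmp/hello.sh <<'EOF'
#!/bin/sh
echo "=> Hello Word"
echo "$@"
exec "$@"
EOF
sh /tmp/hello.sh echo done
# prints:
# => Hello Word
# echo done
# done
```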

Create two influxdb from docker compose YAML file

I want to create two databases while running the docker-compose up command.
Below is the solution I tried, but it didn't work:
version: '3.2'
services:
  influxdb:
    image: influxdb
    env_file: configuration.env
    ports:
      - '8086:8086'
    volumes:
      - 'influxdb:/var/lib/influxdb'
    environment:
      - INFLUXDB_DB=testDB
    command: sh -c Sample.sh
The error I am getting: influxdb_1_170f324e55e3 | sh: 1: Sample.sh: not found
Inside Sample.sh I have a curl command which, when executed standalone, creates another db.
You should not override the run command of the InfluxDB container; if you override the CMD, then you need to start the influxd process yourself as well. So it is better to go with an init script and populate the database at run time.
Initialization Files
If the Docker image finds any files with the extensions .sh or .iql inside of the /docker-entrypoint-initdb.d folder, it will execute them. The order they are executed in is determined by the shell. This is usually alphabetical order.
Manually Initializing the Database
To manually initialize the database and exit, the /init-influxdb.sh script can be used directly. It takes the same parameters as the influxd run command. As an example:
$ docker run --rm \
-e INFLUXDB_DB=db0 -e INFLUXDB_ADMIN_ENABLED=true \
-e INFLUXDB_ADMIN_USER=admin -e INFLUXDB_ADMIN_PASSWORD=supersecretpassword \
-e INFLUXDB_USER=telegraf -e INFLUXDB_USER_PASSWORD=secretpassword \
-v $PWD:/var/lib/influxdb \
influxdb /init-influxdb.sh
You can check the entrypoint of the official InfluxDB image and explore the database initialization on the official page.
So you need to place your script in an .iql or .sh file and mount the location in docker-compose:
volumes:
  - 'influxdb:/var/lib/influxdb'
  - ./init.db/init.iql:/docker-entrypoint-initdb.d/init.iql
It is better to create the database using InfluxQL: add the line below to your script and save it as init.iql.
CREATE DATABASE "NOAA_water_database"
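Since the original goal was two databases, the same init.iql can simply hold two statements (the database names here are illustrative):

```sql
CREATE DATABASE "first_database"
CREATE DATABASE "second_database"
```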
You need to update the Dockerfile as well:
FROM influxdb
COPY init.iql /docker-entrypoint-initdb.d/
Now you can remove the command override from the compose file, and it should create the DB:
version: '3.2'
services:
  influxdb:
    build: .
    env_file: configuration.env
    ports:
      - '8086:8086'
    volumes:
      - 'influxdb:/var/lib/influxdb'
    environment:
      - INFLUXDB_DB=testDB

GitLab CI Docker WORKDIR not being created

I am trying to deploy my NodeJS repo to a DO droplet via GitLab CI. I have been following this guide to do so. What is odd is that the deployment pipeline seems to succeed, but if I SSH into the box, I can see that the app is not running, as it has failed to find a package.json in /usr/src/app, which is the WORKDIR my Dockerfile points to.
gitlab-ci.yml
cache:
  key: "${CI_COMMIT_REF_NAME} node:latest"
  paths:
    - node_modules/
    - .yarn
stages:
  - build
  - release
  - deploy
build:
  stage: build
  image: node:latest
  script:
    - yarn
  artifacts:
    paths:
      - node_modules/
release:
  stage: release
  image: docker:latest
  only:
    - master
  services:
    - docker:dind
  variables:
    DOCKER_DRIVER: "overlay"
  before_script:
    - docker version
    - docker info
    - docker login -u ${CI_REGISTRY_USER} -p ${CI_BUILD_TOKEN} ${CI_REGISTRY}
  script:
    - docker build -t ${CI_REGISTRY}/${CI_PROJECT_PATH}:latest --pull .
    - docker push ${CI_REGISTRY}/${CI_PROJECT_PATH}:latest
  after_script:
    - docker logout ${CI_REGISTRY}
deploy:
  stage: deploy
  image: gitlab/dind:latest
  only:
    - master
  environment: production
  when: manual
  before_script:
    - mkdir -p ~/.ssh
    - echo "${DEPLOY_SERVER_PRIVATE_KEY}" | tr -d '\r' > ~/.ssh/id_rsa
    - chmod 600 ~/.ssh/id_rsa
    - eval "$(ssh-agent -s)"
    - ssh-add ~/.ssh/id_rsa
    - ssh-keyscan -H ${DEPLOYMENT_SERVER_IP} >> ~/.ssh/known_hosts
  script:
    - printf "DB_URL=${DB_URL}\nDB_NAME=${DB_NAME}\nPORT=3000" > .env
    - scp -r ./.env ./docker-compose.yml root@${DEPLOYMENT_SERVER_IP}:~/
    - ssh root@${DEPLOYMENT_SERVER_IP} "docker login -u ${CI_REGISTRY_USER} -p ${CI_REGISTRY_PASSWORD} ${CI_REGISTRY}; docker-compose rm -sf scraper; docker pull ${CI_REGISTRY}/${CI_PROJECT_PATH}:latest; docker-compose up -d"
Dockerfile
FROM node:10
WORKDIR /usr/src/app
COPY package.json ./
RUN yarn
COPY . .
EXPOSE 3000
CMD [ "yarn", "start" ]
docker-compose.yml
version: "3"
services:
  scraper:
    build: .
    image: registry.gitlab.com/arby-better/scraper:latest
    volumes:
      - .:/usr/src/app
      - /usr/src/app/node_modules
    ports:
      - 3000:3000
    environment:
      - NODE_ENV=production
    env_file:
      - .env
I'm using GitLab shared runners for my pipeline. The pipeline appears to execute completely fine except for a symlink failure at the end, which I don't think is anything to worry about. If I SSH into my box, go to where the docker-compose file was copied, and inspect:
Docker has not created /usr/src/app.
Versions:
Docker: 19.03.1
Docker-compose: 1.22.0
My DO box is Docker 1-click btw. Any help appreciated!
EDIT
I have altered my Dockerfile to attempt to force the directory creation, adding RUN mkdir -p /usr/src/app before the line declaring it as the working dir. This still does not create the directory...
When I look at the container statuses (docker-compose ps), I can see that the containers are in an exit state, having exited with code 1 or 254... any idea as to why?
Your compose file is designed for a development environment, where the code directory is replaced by a volume mount of the code on the developer's machine. You don't have this persistent directory in production, nor should you depend on code outside of the image in production; that defeats the purpose of copying it into your image.
version: "3"
services:
  scraper:
    build: .
    image: registry.gitlab.com/arby-better/scraper:latest
    # Comment out or delete these lines, they do not belong in production
    #volumes:
    #  - .:/usr/src/app
    #  - /usr/src/app/node_modules
    ports:
      - 3000:3000
    environment:
      - NODE_ENV=production
    env_file:
      - .env

Test my go app in a container launched by docker compose with Gitlab CI

I have a Golang app that depends on an FTP server, so in Docker Compose I build an FTP service and refer to it in my tests.
In my docker-compose.yml I have:
version: '3'
services:
  mygoapp:
    build:
      dockerfile: ./Dockerfile.local
      context: ./
    volumes:
      - ./volume:/go
      - ./test_files:/var/test_files
    networks:
      mygoapp_network:
    env_file:
      - test.env
    tty: true
  ftpd-server:
    container_name: ftpd-server
    image: stilliard/pure-ftpd:hardened
    environment:
      PUBLICHOST: "0.0.0.0"
      FTP_USER_NAME: "julien"
      FTP_USER_PASS: "test"
      FTP_USER_HOME: "/home/www/julien"
    restart: on-failure
    networks:
      mygoapp_network:
networks:
  mygoapp_network:
    external: true
In my gitlab-ci.yml I have
variables:
  PACKAGE_PATH: /go/src/gitlab.com/xxx
  VOLUME_PATH: /var/test_files
stages:
  - test
# A hack to make Golang-in-Gitlab happy
.anchors:
  - &inject-gopath
    mkdir -p $(dirname ${PACKAGE_PATH})
    && ln -s ${CI_PROJECT_DIR} ${PACKAGE_PATH}
    && cd ${PACKAGE_PATH}
test:
  image: docker:18
  services:
    - docker:dind
  stage: test
  # only:
  #   - production
  before_script:
    - touch test.env
    - apk update
    - apk upgrade
    - apk add --no-cache py-pip
    - pip install docker-compose
    - docker network create mygoapp_network
    - mkdir -p volume/log
  script:
    - docker-compose -f docker-local.yaml up --build -d
    - docker exec project-0_mygoapp_1 ls /var/test_files
    - docker exec project-0_mygoapp_1 echo $VOLUME_PATH
    - docker exec project-0_mygoapp_1 go test ./... -v
All my services are up, but when I run
docker exec project-0_myapp_1 echo $VOLUME_PATH
I can see that $VOLUME_PATH is equal to /var/test_files. Yet inside the code, when I do
os.Getenv("VOLUME_PATH")
the variable is empty. Locally, with a docker exec, the variable is fine.
I also tried to put the variables into the test job definition, but it still doesn't work.
EDIT: The only way I could make it work is by setting the environment variables in docker-compose, but that is not so great.
Any idea how to fix it?
The behaviour of your script is predictable: all environment variables are expanded as soon as they are encountered (unless they are in single quotes). So your line
docker exec project-0_myapp_1 echo $VOLUME_PATH
is expanded before being executed, and $VOLUME_PATH is taken from the GitLab runner, not from the container.
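The expansion timing can be seen with plain sh, no Docker needed. A small sketch (/local/value is an arbitrary placeholder):

```shell
# Double quotes expand in the current shell; single quotes pass the text
# through unchanged so the child shell expands it instead.
VOLUME_PATH=/local/value
sh -c "echo $VOLUME_PATH"   # current shell substitutes: prints /local/value
sh -c 'echo $VOLUME_PATH'   # child expands; empty, the variable is not exported
export VOLUME_PATH
sh -c 'echo $VOLUME_PATH'   # now the child sees it: prints /local/value
```

This is also why single-quoting the command (docker exec ... sh -c 'echo $VOLUME_PATH') would defer the expansion to the shell inside the container.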
The only way I see to get this script to print an environment variable from inside the container is to put the command inside an sh file and call that file:
doit.sh
echo $VOLUME_PATH
gitlab-ci.yml
docker exec project-0_myapp_1 doit.sh