How to dump a database from an external URL into a Docker container

I want to dump tables from an Amazon URL into my MariaDB container, but it doesn't work.
Here is my Dockerfile:
FROM amazoncorretto:11.0.15-alpine
LABEL website = "PHEDON"
VOLUME /phedon-app
RUN apk update && apk add --update --no-cache curl
RUN curl https://myurl/dbdump/dump.sql --output dump.sql
COPY . .
RUN chmod +x ./gradlew
RUN ./gradlew assemble
RUN mv ./build/libs/phedon-spring-server.jar app.jar
ENTRYPOINT ["java","-jar","-Dspring.profiles.active=dev", "app.jar"]
EXPOSE 8080
And here is the database part of the docker-compose file:
phedon_db:
  image: "mariadb:10.6"
  container_name: mariadb
  restart: always
  command: [ --lower_case_table_names=1 ]
  healthcheck:
    test: [ "CMD", "mariadb-admin", "--protocol", "tcp", "ping" ]
    timeout: 3m
    interval: 10s
    retries: 10
  ports:
    - "3307:3306"
  networks:
    - phedon
  volumes:
    - container-volume:/var/lib/mysql
    - ./dump.sql:/docker-entrypoint-initdb.d/dump.sql
  environment:
    MYSQL_DATABASE: "phedondb"
    MYSQL_USER: "phedon"
    MYSQL_PASSWORD: "12345"
    MARIADB_ALLOW_EMPTY_ROOT_PASSWORD: "true"
  env_file:
    - .env
networks:
  phedon:
volumes:
  container-volume:
I have also tried using the ADD instruction instead, and still no results: my MariaDB database is still empty.

Method 1:
You can download the SQL file you want to restore directly onto the host system and run the command below to restore it:
docker exec -i some-mysql sh -c 'exec mysql -u<user> -p<password> <database>' < /<path to the file>/dump.sql
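Adapted to the compose file in the question (container mariadb, user phedon, password 12345, database phedondb), that would look roughly like this; the path to dump.sql is wherever you downloaded it on the host:
docker exec -i mariadb sh -c 'exec mysql -uphedon -p12345 phedondb' < ./dump.sql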
Method 2:
Since you are publishing the database on host port 3307, you can also connect to it through a client from the host. If you have the mysql command-line client installed, you can run the command below to restore it (note the port and database name from your compose file):
$ mysql --host=127.0.0.1 --port=3307 -u phedon -p phedondb < dump.sql
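If you would rather keep the docker-entrypoint-initdb.d approach from your compose file, the key point is that the dump has to exist on the host (the curl in your Dockerfile downloads it into the application image, not onto the host) and that MariaDB only runs init scripts when its data volume is empty. A minimal sketch, assuming the dump URL is reachable from the host:
# download the dump next to docker-compose.yml on the host
curl -fL https://myurl/dbdump/dump.sql --output dump.sql
# remove the already-initialized data volume so the init scripts run again
docker-compose down -v
# on first start, MariaDB imports /docker-entrypoint-initdb.d/dump.sql
docker-compose up -d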

Related

Custom gitea image doesn't find user with Docker Compose

I'm developing a Docker infrastructure with Ansible and Docker Compose and I have a problem with my custom image of Gitea.
I want to use a custom image because I need to implement authentication via LDAP.
The error that I get in the container log is:
sudo: unknown user: gitea
sudo: error initializing audit plugin sudoers_audit
This is my configuration:
app.ini (of Gitea)
[DEFAULT]
RUN_USER = git
RUN_MODE = prod
...
[database]
PATH = /data/gitea/gitea.db
DB_TYPE = postgres
HOST = db:5432
NAME = gitea
USER = gitea
PASSWD = gitea
LOG_SQL = false
...
Dockerfile
FROM gitea/gitea:1.16.8
RUN apk add sudo
RUN chmod 777 /home
COPY entrypoint /usr/bin/custom_entrypoint
COPY gitea-cli.sh /usr/bin/gitea-cli.sh
ENTRYPOINT /usr/bin/custom_entrypoint
entrypoint
#!/bin/sh
set -e
echo 'Started entrypoint'
while ! nc -z $GITEA__database__HOST; do sleep 1; done;
echo 'Starting operations'
gitea-cli.sh migrate
gitea-cli.sh admin auth add-ldap --name ansible-ldap --host 127.0.0.1 --port 1389 --security-protocol unencrypted --user-search-base dc=ldap,dc=vcc,dc=unige,dc=it --admin-filter "(objectClass=giteaAdmin)" --user-filter "(&(objectClass=inetOrgPerson)(uid=%s))" --username-attribute uid --firstname-attribute givenName --surname-attribute surname --email-attribute mail --bind-dn cn=admin,dc=ldap,dc=vcc,dc=unige,dc=it --bind-password admin --allow-deactivate-all
echo 'Ending entrypoint'
gitea-cli.sh
#!/bin/sh
echo 'Started gitea-cli'
USER=git HOME=/home/gitea GITEA_WORK_DIR=/var/lib/gitea sudo -E -u git gitea --config /data/gitea/conf/app.ini "$@"
docker-compose.yaml
db:
  image: postgres:14.3
  restart: always
  hostname: db
  environment:
    POSTGRES_DB: gitea
    POSTGRES_USER: gitea
    POSTGRES_PASSWORD: gitea
  ports:
    - 5432:5432
  volumes:
    - /data/postgres:/var/lib/postgresql/data
  networks:
    - vcc
openldap:
  image: bitnami/openldap:2.5
  ports:
    - 1389:1389
    - 1636:1636
  environment:
    BITNAMI_DEBUG: "true"
    LDAP_LOGLEVEL: 4
    LDAP_ADMIN_USERNAME: admin
    LDAP_ADMIN_PASSWORD: admin
    LDAP_ROOT: dc=ldap,dc=vcc,dc=unige,dc=it
    LDAP_CUSTOM_LDIF_DIR: /bitnami/openldap/backup
    LDAP_CUSTOM_SCHEMA_FILE: /bitnami/openldap/schema/schema.ldif
  volumes:
    - /data/openldap/:/bitnami/openldap
  networks:
    - vcc
gitea:
  image: 127.0.0.1:5000/custom_gitea:51
  restart: always
  hostname: git.localdomain
  build: /data/gitea/custom
  ports:
    - 4000:4000
    - 222:22
  environment:
    USER: git
    USER_UID: 1000
    USER_GID: 1000
    GITEA__database__DB_TYPE: postgres
    GITEA__database__HOST: db:5432
    GITEA__database__NAME: gitea
    GITEA__database__USER: gitea
    GITEA__database__PASSWD: gitea
    GITEA__security__INSTALL_LOCK: "true"
    GITEA__security__SECRET_KEY: XQolFkmSxJWhxkZrkrGbPDbVrEwiZshnzPOY
  volumes:
    - /data/gitea:/data
    - /etc/timezone:/etc/timezone:ro
    - /etc/localtime:/etc/localtime:ro
    - /data/gitea/app.ini:/data/gitea/conf/app.ini
  # deploy:
  #   mode: global
  depends_on:
    - db
    - openldap
    - openldap_admin
  networks:
    - vcc
The user gitea simply doesn't exist in the image.
docker run -it --rm --entrypoint /bin/sh gitea/gitea:1.16.8
/ # grep gitea /etc/shadow
/ # grep gitea /etc/passwd
/ #
The default user is git:
docker run -it --rm --entrypoint /bin/sh gitea/gitea:1.16.8
/ # tail -1 /etc/passwd
git:x:1000:1000:Linux User,,,:/data/git:/bin/bash
/ #
There are two solutions:
add a gitea user (not recommended)
use the default user provided by the image (git).
Adding the gitea user
Just add an adduser instruction to your Dockerfile and it should work:
FROM gitea/gitea:1.16.8
RUN adduser -D -s /bin/bash gitea # <---- HERE
RUN apk add sudo
COPY entrypoint /usr/bin/custom_entrypoint
COPY gitea-cli.sh /usr/bin/gitea-cli.sh
ENTRYPOINT /usr/bin/custom_entrypoint
You'll also have to change USER_UID and USER_GID to 1001 (UID 1000 is already taken by git).
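To check which UID the new gitea user actually gets, you can build the image and run id in a throwaway container; a quick sketch (the custom_gitea tag is just an example name):
docker build -t custom_gitea .
docker run -it --rm --entrypoint sh custom_gitea -c 'id gitea'
# typically prints uid=1001(gitea) gid=1001(gitea), since UID 1000 is already taken by git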
Using the default user
Just replace the user gitea with git in the gitea service of the docker-compose.yml and in app.ini.
After that, if you get an error like:
error saving to custom config: open /data/gitea/conf/app.ini permission denied
you have to add chown -R 1000:1000 /data/gitea/conf before gitea-cli.sh migrate in the entrypoint.
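For example, the top of the entrypoint could then look like this (a sketch based on the script from the question):
#!/bin/sh
set -e
echo 'Started entrypoint'
while ! nc -z $GITEA__database__HOST; do sleep 1; done;
# let the git user (UID/GID 1000) write the config before running migrations
chown -R 1000:1000 /data/gitea/conf
echo 'Starting operations'
gitea-cli.sh migrate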
Because you share volumes between the host and the container, this will only work if your host user has UID 1000. If not, you will have to modify the gitea service in docker-compose.yml.
Example with a user ID of 1002:
docker-compose.yml:
gitea:
  image: 127.0.0.1:5000/custom_gitea:51
  restart: always
  [...]
  environment:
    USER: git
    USER_UID: 1002
    USER_GID: 1002
  [...]
  user: 1002:1002 # <----- HERE
and add USER git before the ENTRYPOINT in the Dockerfile:
USER git
ENTRYPOINT ....
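Once the stack is up, you can confirm which user the Gitea process actually runs as, for example:
docker top $(docker-compose ps -q gitea)
# the first column shows the user (or numeric UID) of each process in the container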

ERROR: container_linux.go:367: starting container process caused: exec: "entrypoint.sh": executable file not found in $PATH: unknown

I am following this tutorial, but when I ran docker-compose run --no-deps app rails new . --force --database=mysql I got the error mentioned in the question title. Below are the contents of some of my files:
Dockerfile:
FROM ruby:2.6
RUN apt-get update -qq && apt-get install -y nodejs
WORKDIR /essence
COPY Gemfile /essence/Gemfile
COPY Gemfile.lock /essence/Gemfile.lock
RUN bundle install
COPY . /essence
# Add a script to be executed every time the container starts
COPY ./entrypoint.sh /usr/bin/
RUN chmod +x /usr/bin/entrypoint.sh
ENTRYPOINT ["/usr/bin/entrypoint.sh"]
EXPOSE 3000
# Configure the main process to run when running the image
CMD ["rails", "server", "-b", "0.0.0.0"]
docker-compose.yml:
version: '2'
services:
  db:
    image: mysql:8.0
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: *****
      MYSQL_DATABASE: *****
      MYSQL_USER: *****
      MYSQL_PASSWORD: *****
  app:
    build: .
    command: bash -c "rm -f /essence/tmp/pids/server.pid && bundle exec rails s -p 3000 -b '0.0.0.0'"
    volumes:
      - ".:/app"
    ports:
      - "3000:3000"
    depends_on:
      - db
    links:
      - db
    environment:
      DB_USER: *****
      DB_NAME: *****
      DB_PASSWORD: *****
      DB_HOST: *****
I tried about 20 different things, but none of them resolved this error. Does anyone see at first glance what could be wrong with my setup? Thank you!
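For reference, one quick way to check whether the script actually made it into the built image, is executable, and does not carry Windows CRLF line endings (a common cause of this exact error) is something like the following; the essence_app tag is just an assumed example name for the image:
docker build -t essence_app .
docker run --rm --entrypoint sh essence_app -c 'ls -l /usr/bin/entrypoint.sh && head -1 /usr/bin/entrypoint.sh | od -c'
# a \r before the \n at the end of the shebang line means the file has Windows CRLF endings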

One of the services started with docker-compose up doesn't stop with docker-compose stop

I have the file docker-compose.production.yml that contains configurations for 5 services. I start them all with the command sudo docker-compose -f docker-compose.production.yml up --build in the directory where the file is. When I want to stop all the services, I simply call sudo docker-compose stop in the directory where the file is. Strangely, 4 out of 5 services stop correctly, but 1 keeps running, and if I want to stop it, I must use sudo docker stop [CONTAINER]. The service is not even listed in the list of services that are being stopped after the stop command is run. It's like the service somehow "detaches" from the group. What could be causing this strange behaviour?
Here's an example of the docker-compose.production.yml file:
version: '3'
services:
  fe:
    build:
      context: ./fe
      dockerfile: Dockerfile.production
    ports:
      - 5000:80
    restart: always
  be:
    image: strapi/strapi:3.4.6-node12
    environment:
      NODE_ENV: production
      DATABASE_CLIENT: mysql
      DATABASE_NAME: some_db
      DATABASE_HOST: db
      DATABASE_PORT: 3306
      DATABASE_USERNAME: someuser
      DATABASE_PASSWORD: ${DATABASE_PASSWORD:?no database password specified}
      URL: https://some-url.com
    volumes:
      - ./be:/srv/app
      - ${SOME_DIRECTORY:?no directory specified}:/srv/something:ro
      - ./some-directory:/srv/something-else
    expose:
      - 1447
    ports:
      - 5001:1337
    depends_on:
      - db
    command: bash -c "yarn install && yarn build && yarn start"
    restart: always
  watcher:
    build:
      context: ./watcher
      dockerfile: Dockerfile
    environment:
      LICENSE_KEY: ${LICENSE_KEY:?no license key specified}
    volumes:
      - ./watcher:/usr/src/app
      - ${SOME_DIRECTORY:?no directory specified}:/usr/src/something:ro
  db:
    image: mysql:8.0.23
    environment:
      MYSQL_ROOT_PASSWORD: ${DATABASE_PASSWORD:?no database password specified}
      MYSQL_DATABASE: some_db
    volumes:
      - ./db:/var/lib/mysql
    restart: always
  db-backup:
    build:
      context: ./db-backup
      dockerfile: Dockerfile.production
    environment:
      MYSQL_HOST: db
      MYSQL_DATABASE: some_db
      MYSQL_ROOT_PASSWORD: ${DATABASE_PASSWORD:?no database password specified}
    volumes:
      - ./db-backup/backups:/backups
    restart: always
The service that doesn't stop together with the others is the last one, db-backup. Here's an example of its Dockerfile.production:
FROM alpine:3.13.1
COPY ./scripts/startup.sh /usr/local/startup.sh
RUN chmod +x /usr/local/startup.sh
# NOTE used for testing when needs to run cron tasks more frequently
# RUN mkdir /etc/periodic/1min
COPY ./cron/daily/* /etc/periodic/daily
RUN chmod +x /etc/periodic/daily/*
RUN sh /usr/local/startup.sh
CMD [ "crond", "-f", "-l", "8"]
And here's an example of the ./scripts/startup.sh:
#!/bin/sh
echo "Running startup script"
echo "Checking if mysql-client is installed"
apk update
if ! apk info | grep -Fxq "mysql-client";
then
echo "Installing MySQL client"
apk add mysql-client
echo "MySQL client installed"
fi
# NOTE this was used for testing. backups should run daily, thus script should
# normally be placed in /etc/periodic/daily/
# cron_task_line="* * * * * run-parts /etc/periodic/1min"
# if ! crontab -l | grep -Fxq "$cron_task_line";
# then
# echo "Enabling cron 1min periodic tasks"
# echo -e "${cron_task_line}\n" >> /etc/crontabs/root
# fi
echo "Startup script finished"
All this happens on all the Ubuntu 18.04 machines that I've tried running this on. Didn't try it on anything else.
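One way to narrow this down is to check whether the container that keeps running still carries the same Compose project labels as the ones that stop, since docker-compose stop only acts on containers whose project matches the file(s) passed to it; a sketch:
# compare the project/service labels of a container that stops and the one that doesn't
docker inspect --format '{{ index .Config.Labels "com.docker.compose.project" }} / {{ index .Config.Labels "com.docker.compose.service" }}' <container-that-stops> <db-backup-container>
# and run stop with the same -f flag that up was given, so both commands resolve the same project
sudo docker-compose -f docker-compose.production.yml stop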

Unable to Read Files in Container

Here is my docker-compose.yml:
version: '3.3'
services:
  etcd:
    container_name: 'etcd'
    image: 'quay.io/coreos/etcd'
    command: >
      etcd -name etcd
      -advertise-client-urls http://127.0.0.1:2379,http://127.0.0.1:4001
      -listen-client-urls http://0.0.0.0:2379,http://0.0.0.0:4001
      -initial-advertise-peer-urls http://127.0.0.1:2380
      -listen-peer-urls http://0.0.0.0:2380
    healthcheck:
      test: ["CMD", "curl", "-f", "http://etcd:2379/version"]
      interval: 30s
      timeout: 10s
      retries: 5
    networks:
      robotrader:
  kontrol:
    container_name: 'kontrol'
    env_file: 'variables.env'
    build:
      context: '.'
      dockerfile: 'Dockerfile'
    volumes:
      - '/certs:/certs'
    ports:
      - '6000:6000'
    depends_on:
      - 'etcd'
    networks:
      robotrader:
  mongo:
    container_name: 'mongo'
    image: 'mongo:latest'
    ports:
      - '27017:27017'
    volumes:
      - '/var/lib/mongodb:/var/lib/mongodb'
    networks:
      robotrader:
networks:
  robotrader:
... and here is the Dockerfile used to build kontrol:
FROM golang:1.8.3 as builder
RUN go get -u github.com/golang/dep/cmd/dep
RUN go get -d github.com/koding/kite
WORKDIR ${GOPATH}/src/github.com/koding/kite
RUN ${GOPATH}/bin/dep ensure
RUN go install ./kontrol/kontrol
RUN mv ${GOPATH}/bin/kontrol /tmp
FROM busybox
ENV APP_HOME /opt/robotrader
RUN mkdir -p ${APP_HOME}
RUN mkdir /certs
WORKDIR ${APP_HOME}
COPY --from=builder /tmp/kontrol .
ENTRYPOINT ["./kontrol", "-initial"]
CMD ["./kontrol"]
Finally when I issue the command...
sudo -E docker-compose -f docker-compose.yaml up
... both etcd and mongo start successfully, whereas kontrol fails with the following error:
kontrol | 2018/06/21 20:11:14 cannot read public key file: open "/certs/key_pub.pem": no such file or directory
If I log into the container..
sudo docker run -it --rm --name j3d-test --entrypoint sh j3d
... and look at folder /certs, the files are there:
ls -la /certs
-rw-r--r--. 1 root root 1679 Jun 21 21:11 key.pem
-rw-r--r--. 1 root root 451 Jun 21 21:11 key_pub.pem
What am I missing?
When you run this command:
sudo docker run -it --rm --name j3d-test --entrypoint sh j3d
you are not "logging into the container". You are creating a brand-new container and looking at the contents of /certs in the underlying image. However, in your docker-compose.yaml you have:
kontrol:
  [...]
  volumes:
    - '/certs:/certs'
  [...]
This configures a bind mount from /certs on the host onto the /certs directory in the container, which hides whatever the image put there. What does the /certs directory on your host contain?
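To see the difference concretely, you can compare the host directory (what the bind mount provides) with what the running container gets; a quick sketch using the container name from the compose file:
# contents of the bind-mount source on the host
ls -la /certs
# what the running kontrol container sees at /certs
docker exec kontrol ls -la /certs
# how the mount is wired up
docker inspect --format '{{ json .Mounts }}' kontrol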

CRON job cannot find environment variable set in docker-compose

I am setting some environment variables in docker-compose that are used by a Python application run by a cron job.
docker-compose.yaml:
version: '2.1'
services:
  zookeeper:
    container_name: zookeeper
    image: zookeeper:3.3.6
    restart: always
    hostname: zookeeper
    ports:
      - "2181:2181"
    environment:
      ZOO_MY_ID: 1
  kafka:
    container_name: kafka
    image: wurstmeister/kafka:1.1.0
    hostname: kafka
    links:
      - zookeeper
    ports:
      - "9092:9092"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: kafka
      KAFKA_CREATE_TOPICS: "topic:1:1"
      KAFKA_LOG_MESSAGE_TIMESTAMP_TYPE: LogAppendTime
      KAFKA_MESSAGE_TIMESTAMP_TYPE: LogAppendTime
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
  data-collector:
    container_name: data-collector
    #image: mystreams:0.1
    build:
      context: /home/junaid/eMumba/CASB/Docker/data_collector/
      dockerfile: Dockerfile
    links:
      - kafka
    environment:
      - KAFKA_HOST=kafka
      - OFFICE_365_APP_ID=98aff1c5-7a69-46b7-899c-186851054b43
      - OFFICE_365_APP_SECRET=zVyS/V694ffWe99QpCvYqE1sqeqLo36uuvTL8gmZV0A=
      - OFFICE_365_APP_TENANT=2f6cb1a6-ecb8-4578-b680-bf84ded07ff4
      - KAFKA_CONTENT_URL_TOPIC=o365_activity_contenturl
      - KAFKA_STORAGE_DATA_TOPIC=o365_storage
      - KAFKA_PORT=9092
      - POSTGRES_DB_NAME=casb
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=pakistan
      - POSTGRES_HOST=postgres_database
    depends_on:
      postgres_database:
        condition: service_healthy
  postgres_database:
    container_name: postgres_database
    build:
      context: /home/junaid/eMumba/CASB/Docker/data_collector/
      dockerfile: postgres.dockerfile
    #image: ayeshaemumba/casb-postgres:v3
    #volumes:
    #  - ./postgres/data:/var/lib/postgresql/data
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: pakistan
      POSTGRES_DB: casb
    expose:
      - "5432"
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 30s
      timeout: 30s
      retries: 3
When I exec into the data-collector container and echo any of the environment variables, I can see that it is set:
># docker exec -it data-collector sh
># echo $KAFKA_HOST
> kafka
But my cron job logs show KeyError: 'KAFKA_HOST'.
This means my cron job cannot find the environment variables.
Now I have two questions:
1) Why are the environment variables not set for the cron job?
2) I know that I can pass environment variables via a shell script and run it while building the image. But is there a way to pass environment variables from docker-compose?
Update:
The cron job is defined in the Dockerfile for the Python application.
Dockerfile:
FROM python:3.5-slim
# Creating Application Source Code Directory
RUN mkdir -p /usr/src/app
# Setting Home Directory for containers
WORKDIR /usr/src/app
# Installing python dependencies
COPY requirements.txt /usr/src/app/
RUN pip install --no-cache-dir -r requirements.txt
# Copying src code to Container
COPY . /usr/src/app
# Add storage crontab file in the cron directory
ADD crontab-storage /etc/cron.d/storage-cron
# Give execution rights on the storage cron job
RUN chmod 0644 /etc/cron.d/storage-cron
RUN chmod 0644 /usr/src/app/cron_storage_data.sh
# Create the log file to be able to run tail
RUN touch /var/log/cron.log
#Install Cron
RUN apt-get update
RUN apt-get -y install cron
# Run the command on container startup
CMD cron && tail -f /var/log/cron.log
crontab-storage:
*/1 * * * * sh /usr/src/app/cron_storage_data.sh
# Don't remove the empty line at the end of this file. It is required to run the cron job
cron_storage_data.sh:
#!/bin/bash
cd /usr/src/app
/usr/local/bin/python3.5 storage_data_collector.py
Cron doesn't inherit docker-compose environment variables by default. A potential workaround for this situation is to:
1. Pass the environment variables from docker-compose into a local .env file:
touch .env
echo "export KAFKA_HOST=$KAFKA_HOST" > /usr/src/app/.env
2. Source the .env file before the cron task executes:
* * * * * <username> . /usr/src/app/.env && sh /usr/src/app/cron_storage_data.sh
This is how the environment visible to the cron job will look:
Before:
{'SHELL': '/bin/sh', 'PWD': '/root', 'LOGNAME': 'root', 'PATH': '/usr/bin:/bin', 'HOME': '/root'}
After:
{'SHELL': '/bin/sh', 'PWD': '/root', 'KAFKA_HOST': 'kafka', 'LOGNAME': 'root', 'PATH': '/usr/bin:/bin', 'HOME': '/root'}
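A common way to implement step 1 without listing each variable by hand is to dump the container's whole environment into the .env file at startup, before cron is launched; a sketch of what could replace the existing CMD cron && tail -f /var/log/cron.log (assuming none of the values contain double quotes):
# write every environment variable as an export line the cron job can source (step 1),
# then start cron in the foreground as before
printenv | sed 's/^\([^=]*\)=\(.*\)$/export \1="\2"/' > /usr/src/app/.env
cron && tail -f /var/log/cron.log
Wrapped in a small start script, this keeps the crontab entry from step 2 unchanged while picking up every variable defined in docker-compose.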
