Here is my docker-compose.yml:
version: '3.3'
services:
  etcd:
    container_name: 'etcd'
    image: 'quay.io/coreos/etcd'
    command: >
      etcd -name etcd
      -advertise-client-urls http://127.0.0.1:2379,http://127.0.0.1:4001
      -listen-client-urls http://0.0.0.0:2379,http://0.0.0.0:4001
      -initial-advertise-peer-urls http://127.0.0.1:2380
      -listen-peer-urls http://0.0.0.0:2380
    healthcheck:
      test: ["CMD", "curl", "-f", "http://etcd:2379/version"]
      interval: 30s
      timeout: 10s
      retries: 5
    networks:
      robotrader:
  kontrol:
    container_name: 'kontrol'
    env_file: 'variables.env'
    build:
      context: '.'
      dockerfile: 'Dockerfile'
    volumes:
      - '/certs:/certs'
    ports:
      - '6000:6000'
    depends_on:
      - 'etcd'
    networks:
      robotrader:
  mongo:
    container_name: 'mongo'
    image: 'mongo:latest'
    ports:
      - '27017:27017'
    volumes:
      - '/var/lib/mongodb:/var/lib/mongodb'
    networks:
      robotrader:
networks:
  robotrader:
... and here is the Dockerfile used to build kontrol:
FROM golang:1.8.3 as builder
RUN go get -u github.com/golang/dep/cmd/dep
RUN go get -d github.com/koding/kite
WORKDIR ${GOPATH}/src/github.com/koding/kite
RUN ${GOPATH}/bin/dep ensure
RUN go install ./kontrol/kontrol
RUN mv ${GOPATH}/bin/kontrol /tmp
FROM busybox
ENV APP_HOME /opt/robotrader
RUN mkdir -p ${APP_HOME}
RUN mkdir /certs
WORKDIR ${APP_HOME}
COPY --from=builder /tmp/kontrol .
ENTRYPOINT ["./kontrol", "-initial"]
CMD ["./kontrol"]
Finally, when I issue the command...
sudo -E docker-compose -f docker-compose.yaml up
... both etcd and mongo start successfully, whereas kontrol fails with the following error:
kontrol | 2018/06/21 20:11:14 cannot read public key file: open "/certs/key_pub.pem": no such file or directory
If I log into the container...
sudo docker run -it --rm --name j3d-test --entrypoint sh j3d
... and look at folder /certs, the files are there:
ls -la /certs
-rw-r--r--. 1 root root 1679 Jun 21 21:11 key.pem
-rw-r--r--. 1 root root 451 Jun 21 21:11 key_pub.pem
What am I missing?
When you run this command:
sudo docker run -it --rm --name j3d-test --entrypoint sh j3d
You are not "logging into the container". You are creating a new container, and you are looking at the contents of /certs on the underlying image. However, in your docker-compose.yaml you have:
kontrol:
  [...]
  volumes:
    - '/certs:/certs'
  [...]
You have configured a bind mount from the host directory /certs onto the /certs directory in your container (an absolute path on the left-hand side refers to the host filesystem, not to the project directory or the image). That mount hides whatever the image put at /certs. What does your host's /certs directory contain?
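To inspect what the kontrol container actually sees at runtime, start a throwaway container through Compose itself so that the volumes from your docker-compose.yaml are applied (a sketch using your service name):
sudo docker-compose run --rm --entrypoint sh kontrol
ls -la /certs
If /certs is empty here, the host's /certs directory is empty (or does not exist), and the bind mount is shadowing the key files baked into the image.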
Related
First, I followed this guide step by step: https://docs.docker.com/language/golang/develop/
It works perfectly.
Then I started to try the same with my own Go project. It requires not only the db in a volume but also 'assets' and 'creds' directories, which I was able to provide with a plain Dockerfile and the --mount flag in the 'docker run' command.
So my approach was:
Create a volume 'roach'.
Create a temp container for copying the folders into the volume:
docker container create --name temp -v roach:/data busybox
docker cp assets/ temp:/data
docker rm temp
Run my container with
docker run -it --rm \
--mount 'type=volume,src=roach,dst=/usr/data' \
--network mynet \
--name postgres-server \
-p 80:8080 \
-e PGUSER=totoro \
-e PGPASSWORD=myfriend \
-e PGHOST=db \
-e PGPORT=26257 \
-e PGDATABASE=mydb \
postgres-server
The Go files have access to /usr/data/my_folders.
By the way, here is the Dockerfile:
# syntax=docker/dockerfile:1
FROM golang:1.18-buster AS build
WORKDIR /app
COPY go.mod .
RUN go mod download
COPY . .
RUN go mod tidy
RUN go build -o /t main/main.go main/inst_list.go
## Deploy
FROM gcr.io/distroless/base-debian10
ENV GO111MODULE=on
ENV GOOGLE_APPLICATION_CREDENTIALS='/usr/data/credentials/creds.json'
WORKDIR /
COPY --from=build /t /t
EXPOSE 8080
USER root:root
ENTRYPOINT ["/t"]
================================================================
Then I started trying to make a docker-compose.yml file like the one at the end of that example.
It has no --mount flags, but I found plenty of ways to specify a mount path.
I tried many more, but left 3 variants of it in the code below (2 of the 3 are commented out):
version: '3.8'
services:
  docker-t-roach:
    depends_on:
      - roach
    build:
      context: .
    container_name: postgres-server
    hostname: postgres-server
    networks:
      - mynet
    ports:
      - 80:8080
    environment:
      - PGUSER=${PGUSER:-totoro}
      - PGPASSWORD=${PGPASSWORD:?database password not set}
      - PGHOST=${PGHOST:-db}
      - PGPORT=${PGPORT:-26257}
      - PGDATABASE=${PGDATABASE-mydb}
    deploy:
      restart_policy:
        condition: on-failure
  roach:
    image: cockroachdb/cockroach:latest-v20.1
    container_name: roach
    hostname: db
    networks:
      - mynet
    ports:
      - 26257:26257
      - 8080:8080
    volumes:
      # - type: volume
      #   source: roach
      #   target: /usr/data
      - roach:/usr/data
      # - "${PWD}/cockroach-data/roach:/usr/data"
    command: start-single-node --insecure
volumes:
  roach:
networks:
  mynet:
    driver: bridge
and it still doesn't work. Moreover, it creates 2 volumes: 'roach' and 'WORKDIRNAME_roach'. I actually tried to copy my folders into both of them. It's not working. The output of the build command always looks like this:
postgres-server | STARTED AT
postgres-server | Sep 4 10:43:10
postgres-server | lstat /usr/data/assets/current_batch: no such file or directory
postgres-server | 2022/09/04 10:43:10 lstat /usr/data/assets/current_batch: no such file or directory
(the first 2 lines are produced by my Go files; 'assets' is the folder I'm copying)
I think that I'm searching in the wrong place: maybe the way I copy the folders doesn't work with this kind of build?
UPDATE:
At the same time, the command
docker run -it --rm -v roach:/data ubuntu ls /data/usr
shows that my folders are there. But the container is stuck in a kind of cycle that doesn't let it see them.
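One quick way to see the mismatch is to compare the mounts of the Compose-run container with that manual docker run (a sketch; postgres-server is the container_name from the compose file):
docker inspect -f '{{ json .Mounts }}' postgres-server
In the compose file above, the app service declares no volumes at all, so this prints an empty list: the roach volume is mounted only into the database container, which is exactly what the fix below addresses.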
Mihai tried to help, but at first I didn't understand what he meant. He actually meant that I had to add a volume to my app service as well. I have done that now and it works. In the example below I named the 2 volumes for the db and the app differently, just for clarity:
version: '3.8'
services:
  docker-parser:
    depends_on:
      - roach
    build:
      context: .
    container_name: parser
    hostname: parser
    networks:
      - mynet
    ports:
      - 80:8080
    volumes:
      - assets:/data
    environment:
      - PGUSER=${PGUSER:-totoro}
      - PGPASSWORD=${PGPASSWORD:?database password not set}
      - PGHOST=${PGHOST:-db}
      - PGPORT=${PGPORT:-26257}
      - PGDATABASE=${PGDATABASE-mydb}
    deploy:
      restart_policy:
        condition: on-failure
  roach:
    image: cockroachdb/cockroach:latest-v20.1
    container_name: roach
    hostname: db
    networks:
      - mynet
    ports:
      - 26257:26257
      - 8080:8080
    volumes:
      - roach:/db
    command: start-single-node --insecure
volumes:
  assets:
  roach:
networks:
  mynet:
    driver: bridge
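Note that a freshly created named volume like assets starts out empty, so it still has to be populated once, e.g. with the same temp-container trick as above (a sketch; Compose prefixes the volume name with the project/directory name, which is exactly where the 'WORKDIRNAME_roach' volume came from):
docker container create --name temp -v WORKDIRNAME_assets:/data busybox
docker cp assets/ temp:/data
docker rm temp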
I want to import a table dump from an Amazon URL into my MariaDB container, but it doesn't work.
Here is my Dockerfile:
FROM amazoncorretto:11.0.15-alpine
LABEL website = "PHEDON"
VOLUME /phedon-app
RUN apk update && apk add --update --no-cache curl
RUN curl https://myurl/dbdump/dump.sql --output dump.sql
COPY . .
RUN chmod +x ./gradlew
RUN ./gradlew assemble
RUN mv ./build/libs/phedon-spring-server.jar app.jar
ENTRYPOINT ["java","-jar","-Dspring.profiles.active=dev", "app.jar"]
EXPOSE 8080
And here is the db part of the docker compose file.
phedon_db:
  image: "mariadb:10.6"
  container_name: mariadb
  restart: always
  command:
    [ --lower_case_table_names=1 ]
  healthcheck:
    test: [ "CMD", "mariadb-admin", "--protocol", "tcp", "ping" ]
    timeout: 3m
    interval: 10s
    retries: 10
  ports:
    - "3307:3306"
  networks:
    - phedon
  volumes:
    - container-volume:/var/lib/mysql
    - ./dump.sql:/docker-entrypoint-initdb.d/dump.sql
  environment:
    MYSQL_DATABASE: "phedondb"
    MYSQL_USER: "phedon"
    MYSQL_PASSWORD: "12345"
    MARIADB_ALLOW_EMPTY_ROOT_PASSWORD: "true"
  env_file:
    - .env
networks:
  phedon:
volumes:
  container-volume:
I have tried to use the ADD command too, and still no results: my MariaDB database is still empty.
Method 1:
You can download the SQL file you want to restore directly onto the host system and execute the command below to restore it:
docker exec -i some-mysql sh -c 'exec mysql -u<user> -p<password> <database>' < /<path to the file>/dump.sql
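With the container name and credentials from your compose file, that would look something like this (assuming dump.sql sits in the current directory on the host):
docker exec -i mariadb sh -c 'exec mysql -uphedon -p12345 phedondb' < ./dump.sql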
Method 2:
As you are exposing the database on port 3307, you can try to connect to it with a client from the host. If you have the mysql client installed on your command line, you can execute the command below to restore it (note that the host-side port is 3307, not 3306, and the database name is phedondb):
$ mysql --host=127.0.0.1 --port=3307 -u phedon -p phedondb < dump.sql
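It is also worth knowing that the mariadb image only executes the files in /docker-entrypoint-initdb.d on the very first initialization, i.e. when the data directory is still empty. Since container-volume persists between runs, a dump mounted after that first start is silently ignored. Recreating the volumes forces the init scripts to run again (warning: this deletes the existing database data):
docker-compose down -v
docker-compose up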
I'm inheriting an open-source project where I have this script to deploy two containers (Django and nginx) on a server:
mkdir -p /app
rm -rf /app/* && tar -xf /tmp/project.tar -C /app
sudo docker-compose -f /app/docker-compose.yml build
sudo supervisorctl restart react-wagtail-project
sudo ufw allow port
The docker-compose.yml file is like this:
version: '3.7'
services:
  nginx_sarahmaso:
    build:
      context: .
      dockerfile: ./compose/production/nginx/Dockerfile
    restart: always
    volumes:
      - staticfiles_sarahmaso:/app/static
      - mediafiles_sarahmaso:/app/media
    ports:
      - 4000:80
    depends_on:
      - web_sarahmaso
    networks:
      spa_network_sarahmaso:
  web_sarahmaso:
    build:
      context: .
      dockerfile: ./compose/production/django/Dockerfile
    restart: always
    command: /start
    volumes:
      - staticfiles_sarahmaso:/app/static
      - mediafiles_sarahmaso:/app/media
      - sqlite_sarahmaso:/app/db
    env_file:
      - ./env/prod-sample
    networks:
      spa_network_sarahmaso:
networks:
  spa_network_sarahmaso:
volumes:
  sqlite_sarahmaso:
  staticfiles_sarahmaso:
  mediafiles_sarahmaso:
I'm wondering: what is the point of using sudo supervisorctl restart react-wagtail-project?
If I put restart: always on the two containers I'm running, is it useful to run a supervisor command on top of that to check that they are always up and running?
Or maybe is it there for the possibility of creating logs?
Thank you.
I'm trying to run uwsgi, and it works fine if I run it with docker run -d foo, but it fails if I do docker-compose up, with the error:
bind(): Permission denied [core/socket.c line 230]
docker-compose.yml
services:
  web:
    restart: always
    build:
      context: .
      dockerfile: ./compose/production/web/Dockerfile
    image: foo
    volumes:
      - /app
      - /var/log
Dockerfile
....
CMD uwsgi --uid www-data --gid www-data --emperor /app/momsite/momsite/conf/docker/etc/uwsgi/vassals
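bind(): Permission denied usually means the user uwsgi switches to (--uid www-data) is not allowed to create its socket at the configured path. A first diagnostic step could be to compare the two environments, since the image is the same in both cases (a sketch; web and foo are the service and image names from the compose file above):
docker run --rm foo id
docker-compose run --rm web id
If the users match, check the ownership and permissions of the directory the vassal configs point their socket to; the www-data process must be able to create the socket file there.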
I am setting some environment variables in docker-compose that are being used by a Python application run by a cron job.
docker-compose.yaml:
version: '2.1'
services:
  zookeeper:
    container_name: zookeeper
    image: zookeeper:3.3.6
    restart: always
    hostname: zookeeper
    ports:
      - "2181:2181"
    environment:
      ZOO_MY_ID: 1
  kafka:
    container_name: kafka
    image: wurstmeister/kafka:1.1.0
    hostname: kafka
    links:
      - zookeeper
    ports:
      - "9092:9092"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: kafka
      KAFKA_CREATE_TOPICS: "topic:1:1"
      KAFKA_LOG_MESSAGE_TIMESTAMP_TYPE: LogAppendTime
      KAFKA_MESSAGE_TIMESTAMP_TYPE: LogAppendTime
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
  data-collector:
    container_name: data-collector
    #image: mystreams:0.1
    build:
      context: /home/junaid/eMumba/CASB/Docker/data_collector/
      dockerfile: Dockerfile
    links:
      - kafka
    environment:
      - KAFKA_HOST=kafka
      - OFFICE_365_APP_ID=98aff1c5-7a69-46b7-899c-186851054b43
      - OFFICE_365_APP_SECRET=zVyS/V694ffWe99QpCvYqE1sqeqLo36uuvTL8gmZV0A=
      - OFFICE_365_APP_TENANT=2f6cb1a6-ecb8-4578-b680-bf84ded07ff4
      - KAFKA_CONTENT_URL_TOPIC=o365_activity_contenturl
      - KAFKA_STORAGE_DATA_TOPIC=o365_storage
      - KAFKA_PORT=9092
      - POSTGRES_DB_NAME=casb
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=pakistan
      - POSTGRES_HOST=postgres_database
    depends_on:
      postgres_database:
        condition: service_healthy
  postgres_database:
    container_name: postgres_database
    build:
      context: /home/junaid/eMumba/CASB/Docker/data_collector/
      dockerfile: postgres.dockerfile
    #image: ayeshaemumba/casb-postgres:v3
    #volumes:
    #  - ./postgres/data:/var/lib/postgresql/data
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: pakistan
      POSTGRES_DB: casb
    expose:
      - "5432"
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 30s
      timeout: 30s
      retries: 3
When I exec inside the data-collector container and echo any of the environment variables, I can see that it is set:
># docker exec -it data-collector sh
># echo $KAFKA_HOST
> kafka
But my cron job logs show KeyError: 'KAFKA_HOST'.
This means my cron job cannot find the environment variables.
Now I have two questions:
1) Why are the environment variables not set for the cron job?
2) I know that I can pass environment variables in a shell script and run it while building the image. But is there a way to pass environment variables from docker-compose?
Update:
The cron job is defined in the Dockerfile for the Python application.
Dockerfile:
FROM python:3.5-slim
# Creating Application Source Code Directory
RUN mkdir -p /usr/src/app
# Setting Home Directory for containers
WORKDIR /usr/src/app
# Installing python dependencies
COPY requirements.txt /usr/src/app/
RUN pip install --no-cache-dir -r requirements.txt
# Copying src code to Container
COPY . /usr/src/app
# Add storage crontab file in the cron directory
ADD crontab-storage /etc/cron.d/storage-cron
# Give execution rights on the storage cron job
RUN chmod 0644 /etc/cron.d/storage-cron
RUN chmod 0644 /usr/src/app/cron_storage_data.sh
# Create the log file to be able to run tail
RUN touch /var/log/cron.log
#Install Cron
RUN apt-get update
RUN apt-get -y install cron
# Run the command on container startup
CMD cron && tail -f /var/log/cron.log
crontab-storage:
*/1 * * * * sh /usr/src/app/cron_storage_data.sh
# Don't remove the empty line at the end of this file. It is required to run the cron job
cron_storage_data.sh:
#!/bin/bash
cd /usr/src/app
/usr/local/bin/python3.5 storage_data_collector.py
Cron doesn't inherit docker-compose environment variables by default, because cron runs its jobs with a minimal, fixed environment. A potential workaround for this situation is to:
1. Write the environment variables to a local .env file at container startup (e.g. in your entrypoint, before cron starts):
touch .env
echo "export KAFKA_HOST=$KAFKA_HOST" > /usr/src/app/.env
2. Source the .env file before the cron task executes:
* * * * * <username> . /usr/src/app/.env && sh /usr/src/app/cron_storage_data.sh
This is how the cron job's environment will look:
Before:
{'SHELL': '/bin/sh', 'PWD': '/root', 'LOGNAME': 'root', 'PATH': '/usr/bin:/bin', 'HOME': '/root'}
After:
{'SHELL': '/bin/sh', 'PWD': '/root', 'KAFKA_HOST': 'kafka', 'LOGNAME': 'root', 'PATH': '/usr/bin:/bin', 'HOME': '/root'}
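To confirm the fix from inside the container, you can temporarily add a cron entry that logs the environment the job actually sees (a sketch; root stands in for the <username> field of the /etc/cron.d entry, and the output goes to the cron.log the Dockerfile already tails):
* * * * * root . /usr/src/app/.env && env >> /var/log/cron.log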