First, I went through this tutorial step by step: https://docs.docker.com/language/golang/develop/
It works perfectly.
Then I tried the same with my own Go project. It needs not only the db in a volume but also 'assets' and 'creds' directories, which I was able to provide with a normal Dockerfile and the --mount flag in the 'docker run' command.
So my schema was:
Create a volume 'roach'.
Create a temp container for copying folders:
docker container create --name temp -v roach:/data busybox
docker cp assets/ temp:/data
docker rm temp
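To double-check that the copy landed in the volume, a throwaway container can list its contents (a quick sketch using the same 'roach' volume):
# mount the volume and list everything that was copied into it
docker run --rm -v roach:/data busybox ls -R /data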
Run my container with:
docker run -it --rm \
--mount 'type=volume,src=roach,dst=/usr/data' \
--network mynet \
--name postgres-server \
-p 80:8080 \
-e PGUSER=totoro \
-e PGPASSWORD=myfriend \
-e PGHOST=db \
-e PGPORT=26257 \
-e PGDATABASE=mydb \
postgres-server
My Go files have access to /usr/data/my_folders.
BTW, here is the Dockerfile:
# syntax=docker/dockerfile:1
FROM golang:1.18-buster AS build
WORKDIR /app
COPY go.mod .
RUN go mod download
COPY . .
RUN go mod tidy
RUN go build -o /t main/main.go main/inst_list.go
## Deploy
FROM gcr.io/distroless/base-debian10
ENV GO111MODULE=on
ENV GOOGLE_APPLICATION_CREDENTIALS='/usr/data/credentials/creds.json'
WORKDIR /
COPY --from=build /t /t
EXPOSE 8080
USER root:root
ENTRYPOINT ["/t"]
================================================================
Then I tried to make a docker-compose.yml file like the one at the end of that example.
It has no --mount flags, but I found plenty of ways to specify a mount path.
I tried many more, but left 3 variants of it in the code below (2 of the 3 are commented out):
version: '3.8'
services:
  docker-t-roach:
    depends_on:
      - roach
    build:
      context: .
    container_name: postgres-server
    hostname: postgres-server
    networks:
      - mynet
    ports:
      - 80:8080
    environment:
      - PGUSER=${PGUSER:-totoro}
      - PGPASSWORD=${PGPASSWORD:?database password not set}
      - PGHOST=${PGHOST:-db}
      - PGPORT=${PGPORT:-26257}
      - PGDATABASE=${PGDATABASE-mydb}
    deploy:
      restart_policy:
        condition: on-failure
  roach:
    image: cockroachdb/cockroach:latest-v20.1
    container_name: roach
    hostname: db
    networks:
      - mynet
    ports:
      - 26257:26257
      - 8080:8080
    volumes:
      # - type: volume
      #   source: roach
      #   target: /usr/data
      - roach:/usr/data
      # - "${PWD}/cockroach-data/roach:/usr/data"
    command: start-single-node --insecure
volumes:
  roach:
networks:
  mynet:
    driver: bridge
and it still doesn't work. Moreover, it creates 2 volumes: 'roach' and 'WORKDIRNAME_roach'. I actually tried copying my folders into both of them. It's not working. The output of the build command is always like this:
postgres-server | STARTED AT
postgres-server | Sep 4 10:43:10
postgres-server | lstat /usr/data/assets/current_batch: no such file or directory
postgres-server | 2022/09/04 10:43:10 lstat /usr/data/assets/current_batch: no such file or directory
(the first 2 lines are produced by my Go files; 'assets' is the folder I'm copying)
I think I'm looking in the wrong place: maybe the way I copy folders doesn't work with this kind of build?
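A side note on the two volumes: Compose prefixes named volumes with the project (working directory) name, so the roach: entry in the file creates 'WORKDIRNAME_roach' rather than reusing the pre-seeded 'roach' volume; declaring the volume with external: true (or name: roach) makes Compose use the existing one. To see which volume a container actually mounts (a sketch, using the container name from above):
docker volume ls
docker inspect --format '{{json .Mounts}}' postgres-server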
UPDATE:
At the same time, the command
docker run -it --rm -v roach:/data ubuntu ls /data/usr
shows that my folders are there. But the container is in some kind of loop that doesn't let it see them.
Mihai tried to help, but at first I didn't understand what he meant. He meant that I had to add the volume to my app service as well. I did that, and now it works. In the example below I gave the db and app volumes different names, just for clarity:
version: '3.8'
services:
  docker-parser:
    depends_on:
      - roach
    build:
      context: .
    container_name: parser
    hostname: parser
    networks:
      - mynet
    ports:
      - 80:8080
    volumes:
      - assets:/data
    environment:
      - PGUSER=${PGUSER:-totoro}
      - PGPASSWORD=${PGPASSWORD:?database password not set}
      - PGHOST=${PGHOST:-db}
      - PGPORT=${PGPORT:-26257}
      - PGDATABASE=${PGDATABASE-mydb}
    deploy:
      restart_policy:
        condition: on-failure
  roach:
    image: cockroachdb/cockroach:latest-v20.1
    container_name: roach
    hostname: db
    networks:
      - mynet
    ports:
      - 26257:26257
      - 8080:8080
    volumes:
      - roach:/db
    command: start-single-node --insecure
volumes:
  assets:
  roach:
networks:
  mynet:
    driver: bridge
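Note that the named 'assets' volume still needs to be seeded with the temp-container trick from the beginning; because of the project-name prefix, the target is the prefixed volume (a sketch; 'WORKDIRNAME' stands for the actual project directory name):
docker container create --name temp -v WORKDIRNAME_assets:/data busybox
docker cp assets/ temp:/data
docker rm temp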
================================================================
After starting the container without a command or entrypoint, I can enter it with docker exec -it reverse-proxy bash and run bad-bot.sh without any problems.
But if I run it via command or entrypoint in docker-compose, an error occurs and the restart repeats indefinitely.
bad-bot.sh:
wget https://raw.githubusercontent.com/mitchellkrogza/nginx-ultimate-bad-bot-blocker/master/conf.d/globalblacklist.conf -O /etc/nginx/conf.d/globalblacklist.conf
wget https://raw.githubusercontent.com/mitchellkrogza/nginx-ultimate-bad-bot-blocker/master/bots.d/blockbots.conf -O /etc/nginx/bots.d/blockbots.conf
wget https://raw.githubusercontent.com/mitchellkrogza/nginx-ultimate-bad-bot-blocker/master/bots.d/ddos.conf -O /etc/nginx/bots.d/ddos.conf
wget https://raw.githubusercontent.com/mitchellkrogza/nginx-ultimate-bad-bot-blocker/master/bots.d/whitelist-ips.conf -O /etc/nginx/bots.d/whitelist-ips.conf
wget https://raw.githubusercontent.com/mitchellkrogza/nginx-ultimate-bad-bot-blocker/master/bots.d/whitelist-domains.conf -O /etc/nginx/bots.d/whitelist-domains.conf
wget https://raw.githubusercontent.com/mitchellkrogza/nginx-ultimate-bad-bot-blocker/master/bots.d/blacklist-user-agents.conf -O /etc/nginx/bots.d/blacklist-user-agents.conf
wget https://raw.githubusercontent.com/mitchellkrogza/nginx-ultimate-bad-bot-blocker/master/bots.d/custom-bad-referrers.conf -O /etc/nginx/bots.d/custom-bad-referrers.conf
wget https://raw.githubusercontent.com/mitchellkrogza/nginx-ultimate-bad-bot-blocker/master/bots.d/blacklist-ips.conf -O /etc/nginx/bots.d/blacklist-ips.conf
wget https://raw.githubusercontent.com/mitchellkrogza/nginx-ultimate-bad-bot-blocker/master/bots.d/bad-referrer-words.conf -O /etc/nginx/bots.d/bad-referrer-words.conf
wget https://raw.githubusercontent.com/mitchellkrogza/nginx-ultimate-bad-bot-blocker/master/conf.d/botblocker-nginx-settings.conf -O /etc/nginx/conf.d/botblocker-nginx-settings.conf
echo "include /etc/nginx/bots.d/blockbots.conf;" | tee -a /etc/nginx/vhost.d/default
echo "include /etc/nginx/bots.d/ddos.conf;" | tee -a /etc/nginx/vhost.d/default
perl -i -pe 's/^.*server_names_hash_bucket_size 128;/#$&/' /etc/nginx/conf.d/default.conf
nginx -t
nginx -s reload
Using an entrypoint instead of the command doesn't work either:
entrypoint: bash -c "chmod 777 /etc/nginx/bad-bot-installer/bad-bot.sh && /etc/nginx/bad-bot-installer/bad-bot.sh"
and a change like this doesn't work either.
proxy.yml:
version: '3.9'
services:
  reverse-proxy:
    image: nginxproxy/nginx-proxy:latest
    container_name: reverse-proxy
    restart: always
    ports:
      - 80:80
      - 443:443
    volumes:
      - conf:/etc/nginx/conf.d
      - vhost:/etc/nginx/vhost.d
      - html:/usr/share/nginx/html
      - certs:/etc/nginx/certs:ro
      # for bad_bot ddos
      # https://github.com/mitchellkrogza/nginx-ultimate-bad-bot-blocker/blob/master/MANUAL-CONFIGURATION.md
      - /etc/nginx/bots.d:/etc/nginx/bots.d
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - /var/log/nginx-proxy:/var/log/nginx
      - ./Nginx/bad-bot.sh:/etc/nginx/bad-bot-installer/bad-bot.sh:rw
    command: bash -c "/etc/nginx/bad-bot-installer/bad-bot.sh"
  acme-companion:
    image: nginxproxy/acme-companion:latest
    depends_on:
      - reverse-proxy
    container_name: acme-companion
    restart: always
    environment:
      NGINX_PROXY_CONTAINER: reverse-proxy
    volumes:
      - conf:/etc/nginx/conf.d
      - vhost:/etc/nginx/vhost.d
      - html:/usr/share/nginx/html
      - certs:/etc/nginx/certs:rw
      - acme:/etc/acme.sh
      - /var/run/docker.sock:/var/run/docker.sock:ro
volumes:
  conf:
    driver: local
  vhost:
    driver: local
  certs:
    driver: local
  acme:
    driver: local
  html:
    driver: local
networks:
  default:
    name: my-network
Error when running docker exec -it reverse-proxy /bin/bash:
Error response from daemon: Container e8179bf5071d875a26d0ce17cad5290585991b71dce3832bb10a7b690ef99154 is restarting, wait until the container is running
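Since the container keeps restarting, the usual first steps are to read its logs and to compare the command: override with what the image runs by default (a sketch; note that command: replaces the image's default CMD):
# show why the last start attempt failed
docker logs --tail 50 reverse-proxy
# show the default entrypoint and command of the image
docker inspect nginxproxy/nginx-proxy:latest --format '{{.Config.Entrypoint}} {{.Config.Cmd}}'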
================================================================
I'm inheriting an open-source project where I have this script to deploy two containers (Django and nginx) on a server:
mkdir -p /app
rm -rf /app/* && tar -xf /tmp/project.tar -C /app
sudo docker-compose -f /app/docker-compose.yml build
sudo supervisorctl restart react-wagtail-project
sudo ufw allow port
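To see what that supervisor job actually does, checking its status and program definition may help (a sketch; the conf.d path and file name are assumptions based on the Debian/Ubuntu defaults):
sudo supervisorctl status react-wagtail-project
# the [program:react-wagtail-project] section shows the exact command supervisor keeps alive
cat /etc/supervisor/conf.d/react-wagtail-project.conf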
The docker-compose.yml file looks like this:
version: '3.7'
services:
  nginx_sarahmaso:
    build:
      context: .
      dockerfile: ./compose/production/nginx/Dockerfile
    restart: always
    volumes:
      - staticfiles_sarahmaso:/app/static
      - mediafiles_sarahmaso:/app/media
    ports:
      - 4000:80
    depends_on:
      - web_sarahmaso
    networks:
      spa_network_sarahmaso:
  web_sarahmaso:
    build:
      context: .
      dockerfile: ./compose/production/django/Dockerfile
    restart: always
    command: /start
    volumes:
      - staticfiles_sarahmaso:/app/static
      - mediafiles_sarahmaso:/app/media
      - sqlite_sarahmaso:/app/db
    env_file:
      - ./env/prod-sample
    networks:
      spa_network_sarahmaso:
networks:
  spa_network_sarahmaso:
volumes:
  sqlite_sarahmaso:
  staticfiles_sarahmaso:
  mediafiles_sarahmaso:
I'm wondering: what is the point of using sudo supervisorctl restart react-wagtail-project?
If I put restart: always on the two containers I'm running, is it useful to run a supervisor command on top of that to check they are always up and running?
Or maybe is it there for the possibility of creating logs?
Thank you
================================================================
docker-compose.yml:
version: '3'
volumes:
  wp-assets:
services:
  mariadb:
    build: ./requirements/mariadb
    environment:
      - MYSQL_ROOT_HOST=${MYSQL_ROOT_HOST}
      - MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD}
      - MYSQL_DATABASE=${MYSQL_DATABASE}
      - MYSQL_USER=${MYSQL_USER}
      - MYSQL_PASSWORD=${MYSQL_PASSWORD}
    ports:
      - "127.0.0.1:3306:3306"
      - "127.0.0.1:9999:9999" # test
  wordpress:
    environment:
      - WORDPRESS_DB_HOST=${WORDPRESS_DB_HOST}
      - WORDPRESS_DB_USER=${WORDPRESS_DB_USER}
      - WORDPRESS_DB_PASSWORD=${WORDPRESS_DB_PASSWORD}
      - WORDPRESS_DB_NAME=${WORDPRESS_DB_NAME}
      - WORDPRESS_TABLE_PREFIX=${WORDPRESS_TABLE_PREFIX}
      - WORDPRESS_AUTH_KEY=${WORDPRESS_AUTH_KEY}
      - WORDPRESS_SECURE_AUTH_KEY=${WORDPRESS_SECURE_AUTH_KEY}
      - WORDPRESS_LOGGED_IN_KEY=${WORDPRESS_LOGGED_IN_KEY}
      - WORDPRESS_NONCE_KEY=${WORDPRESS_NONCE_KEY}
      - WORDPRESS_AUTH_SALT=${WORDPRESS_AUTH_SALT}
      - WORDPRESS_SECURE_AUTH_SALT=${WORDPRESS_SECURE_AUTH_SALT}
      - WORDPRESS_LOGGED_IN_SALT=${WORDPRESS_LOGGED_IN_SALT}
      - WORDPRESS_NONCE_SALT=${WORDPRESS_NONCE_SALT}
    volumes:
      - wp-assets:/var/wp-assets
    build: ./requirements/wordpress
    ports:
      # host_port == 127.0.0.1:9000, allow only localhost
      - "127.0.0.1:9000:9000"
  nginx:
    # image: nginx:latest
    depends_on:
      - wordpress
    volumes:
      - wp-assets:/var/wp-assets
    build: ./requirements/nginx
    ports:
      # host_port == 0.0.0.0:8080, allow all interfaces
      - "8080:80"
mariadb/Dockerfile
FROM debian:buster
# install mariadb-server
RUN apt update && apt install -y mariadb-server
# allow connection from wordpress (host name)
RUN sed -e 's/127.0.0.1/wordpress/' \
-i '/etc/mysql/mariadb.conf.d/50-server.cnf'
# used for socket
RUN mkdir -p /var/run/mysqld && \
chown -R mysql:mysql /var/lib/mysql /var/run/mysqld && \
chmod 777 /var/run/mysqld && \
touch /var/run/mysqld/mysqld.sock
# init db here
COPY docker-entrypoint.sh /
RUN chmod +x /docker-entrypoint.sh
#ENTRYPOINT ["/docker-entrypoint.sh"]
ENTRYPOINT ["tail", "-f"]
I was trying to connect mariadb with wordpress (via mariadb-client) and got an error:
Can't connect to MySQL server on 'mariadb'
So I tested whether the ports were good. While other ports like nginx:80 or wordpress:9000 can be reached from the other containers, mariadb's ports refuse connections.
I couldn't figure out what the difference is between the mariadb container and the others. What's the problem?
The ENTRYPOINT of your MariaDB Dockerfile is
ENTRYPOINT ["tail", "-f"]
so it doesn't actually run MariaDB.
You probably need to comment that out and comment back in
ENTRYPOINT ["/docker-entrypoint.sh"]
I was confused about ports. I thought the containers could communicate just by opening ports, but there were no LISTENing ports. Docker Compose's ports: just binds ports; it has nothing to do with LISTEN.
================================================================
I have two docker run commands; the second container needs to run in a folder created by the first, as shown below.
docker run -v $(pwd):/projects \
-w /projects \
gcr.io/base-project/mainmyoh:v1 init myprojectname
cd myprojectname
The myprojectname folder above was created by the first container. I need to run the second container in this folder, as below.
docker run -v $(pwd):/project \
-w /project \
-p 3000:3000 \
gcr.io/base-project/myoh:v1
Here is the docker-compose file I have so far:
version: '3.3'
services:
  firstim:
    volumes:
      - '$(pwd):/projects'
    restart: always
    working_dir: /project
    image: gcr.io/base-project/mainmyoh:v1
    command: 'init myprojectname'
  secondim:
    image: gcr.io/base-project/myoh:v1
    working_dir: /project
    volumes:
      - '$(pwd):/projects'
    ports:
      - 3000:3000
What needs to change to achieve this?
You can make the two services use a shared named volume:
version: '3.3'
services:
  firstim:
    volumes:
      - '.:/projects'
      - 'my-project-volume:/projects/myprojectname'
    restart: always
    working_dir: /project
    image: gcr.io/base-project/mainmyoh:v1
    command: 'init myprojectname'
  secondim:
    image: gcr.io/base-project/myoh:v1
    working_dir: /project
    volumes:
      - 'my-project-volume:/projects'
    ports:
      - 3000:3000
volumes:
  my-project-volume:
Also, just an observation: in your example the working_dir: references /project while the volumes point to /projects. I assume this is a typo and this might be something you want to fix.
You can build a custom image that does the required setup for you. When secondim runs, you want the current working directory to be /project, you want the current directory's code to be embedded there, and you want the init command to have run. That's easy to express in Dockerfile syntax:
FROM gcr.io/base-project/mainmyoh:v1
WORKDIR /project
COPY . .
RUN init myprojectname
CMD whatever should be run to start the real project
Then you can tell Compose to build it for you:
version: '3.5'
services:
  # no build-only first image
  secondim:
    build: .
    image: gcr.io/base-project/mainmyoh:v1
    ports:
      - '3000:3000'
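With that in place, one command builds the image and starts the service (a sketch):
# build the Dockerfile above, tag it with the image: name, and start secondim
docker-compose up --build secondim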
In another question you ask about running a similar setup in Kubernetes. This Dockerfile-based setup can translate directly into a Kubernetes Deployment/Service, without worrying about questions like "what kind of volume do I need to use" or "how do I copy the code into the cluster separately from the image".
================================================================
Here is my docker-compose.yml:
version: '3.3'
services:
  etcd:
    container_name: 'etcd'
    image: 'quay.io/coreos/etcd'
    command: >
      etcd -name etcd
      -advertise-client-urls http://127.0.0.1:2379,http://127.0.0.1:4001
      -listen-client-urls http://0.0.0.0:2379,http://0.0.0.0:4001
      -initial-advertise-peer-urls http://127.0.0.1:2380
      -listen-peer-urls http://0.0.0.0:2380
    healthcheck:
      test: ["CMD", "curl", "-f", "http://etcd:2379/version"]
      interval: 30s
      timeout: 10s
      retries: 5
    networks:
      robotrader:
  kontrol:
    container_name: 'kontrol'
    env_file: 'variables.env'
    build:
      context: '.'
      dockerfile: 'Dockerfile'
    volumes:
      - '/certs:/certs'
    ports:
      - '6000:6000'
    depends_on:
      - 'etcd'
    networks:
      robotrader:
  mongo:
    container_name: 'mongo'
    image: 'mongo:latest'
    ports:
      - '27017:27017'
    volumes:
      - '/var/lib/mongodb:/var/lib/mongodb'
    networks:
      robotrader:
networks:
  robotrader:
... and here is the Dockerfile used to build kontrol:
FROM golang:1.8.3 as builder
RUN go get -u github.com/golang/dep/cmd/dep
RUN go get -d github.com/koding/kite
WORKDIR ${GOPATH}/src/github.com/koding/kite
RUN ${GOPATH}/bin/dep ensure
RUN go install ./kontrol/kontrol
RUN mv ${GOPATH}/bin/kontrol /tmp
FROM busybox
ENV APP_HOME /opt/robotrader
RUN mkdir -p ${APP_HOME}
RUN mkdir /certs
WORKDIR ${APP_HOME}
COPY --from=builder /tmp/kontrol .
ENTRYPOINT ["./kontrol", "-initial"]
CMD ["./kontrol"]
Finally, when I issue the command...
sudo -E docker-compose -f docker-compose.yaml up
... both etcd and mongo start successfully, whereas kontrol fails with the following error:
kontrol | 2018/06/21 20:11:14 cannot read public key file: open "/certs/key_pub.pem": no such file or directory
If I log into the container...
sudo docker run -it --rm --name j3d-test --entrypoint sh j3d
... and look at the /certs folder, the files are there:
ls -la /certs
-rw-r--r--. 1 root root 1679 Jun 21 21:11 key.pem
-rw-r--r--. 1 root root 451 Jun 21 21:11 key_pub.pem
What am I missing?
When you run this command:
sudo docker run -it --rm --name j3d-test --entrypoint sh j3d
You are not "logging into the container". You are creating a new container, and you are looking at the contents of /certs on the underlying image. However, in your docker-compose.yaml you have:
kontrol:
  [...]
  volumes:
    - '/certs:/certs'
  [...]
You have configured a bind mount from the host's /certs directory onto the /certs directory in your container. What does that directory on your host contain?
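A quick way to compare what the running container actually mounts against what is baked into the image (a sketch; container name as in the compose file above):
# list the container's mounts: Source on the host, Destination in the container
docker inspect kontrol --format '{{json .Mounts}}'
# then compare with the host side
ls -la /certs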
contain?