I want to create a docker-compose file with several services, and in it I want to generate a certificate for my domain with Certbot/Let's Encrypt. But when I run it, I always get an error saying it can't find a certificate, even though I do everything necessary to generate one.
version: '3.8'
services:
proxy-nginx:
build: .
ports:
- 80:80
- 443:443
volumes:
- ./certbot/www:/var/www/certbot/
- ./certbot/conf/:/etc/nginx/ssl/
depends_on:
- nestjs
restart: unless-stopped
certbot:
image: certbot/certbot:latest
depends_on:
- proxy-nginx
volumes:
- ./certbot/www/:/var/www/certbot/
- ./certbot/conf/:/etc/letsencrypt/
command: certonly --webroot --webroot-path=/var/www/certbot --email emain@gmail.com --agree-tos --no-eff-email --staging --force-renewal -d www.mydomaine -d mydomaine
nestjs:
build:
context: ./BACKEND
dockerfile: Dockerfile
ports:
- 3000:3000
Here is the result:
cannot load certificate "/etc/nginx/ssl/live/mydomaine/fullchain.pem": BIO_new_file() failed (SSL: error:02001002:system library:fopen:No such file or directory:fopen('/etc/nginx/ssl/live/mydomaine/fullchain.pem','r') error:2006D080:BIO routines:BIO_new_file:no such file)
In my nginx.conf file I have one proxy server and one server for the front end and back end of my application. But the problem is that nginx can't find the certificate, and I don't know why.
Normally the certificate should be generated in the folder /etc/nginx/ssl/live/mydomaine.be/, but that's not the case.
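A likely cause, though the thread doesn't state it explicitly: nginx exits at startup when the ssl_certificate file it references doesn't exist yet, and certbot can't answer the webroot challenge while nginx is down. A common bootstrap is to start nginx with an HTTP-only server block that serves the ACME challenge, obtain the certificate, and only then enable the 443 block. A sketch, with the domain and paths mirroring the compose file above:

```nginx
# HTTP-only bootstrap config: lets certbot complete the http-01
# challenge before any certificate exists on disk.
server {
    listen 80;
    server_name mydomaine www.mydomaine;

    # Must match the certbot webroot volume (./certbot/www)
    location /.well-known/acme-challenge/ {
        root /var/www/certbot;
    }
}
```

After the first successful issuance, the HTTPS server block pointing at /etc/nginx/ssl/live/... can be added and nginx reloaded.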
This is how I use it and it works.
docker-compose.yml
services:
node:
container_name: node-server
build: .
environment: # process.env.
NODE_ENV: production
networks:
- app-network
nginx:
image: 'nginx:1.23.3'
container_name: nginx-server
depends_on:
- node
volumes:
- './volumes/nginx/production/nginx.conf:/etc/nginx/nginx.conf:ro'
- './volumes/nginx/production/conf.d/:/etc/nginx/conf.d'
- './volumes/certbot/letsencrypt:/etc/letsencrypt'
- './volumes/certbot/www:/var/www/certbot'
networks:
- app-network
ports:
- '80:80' # To access nginx from outside
- '443:443' # To access nginx from outside
networks:
app-network:
driver: bridge
Docker run certbot
docker run --rm --name temp_certbot \
-v /home/app-folder/volumes/certbot/letsencrypt:/etc/letsencrypt \
-v /home/app-folder/volumes/certbot/www:/tmp/letsencrypt \
-v /home/app-folder/volumes/certbot/log:/var/log \
certbot/certbot:v1.8.0 \
certonly --webroot --agree-tos --renew-by-default \
--preferred-challenges http-01 --server https://acme-v02.api.letsencrypt.org/directory \
--text --email info@domain.com \
-w /tmp/letsencrypt -d domain.com
Related
Two months ago, I set up a website with SSL thanks to Let's Encrypt. The details of how I did it are now quite blurry.
The site is hosted inside several docker containers (nginx, PHP, MySQL). There is a certbot container which should perform the renewal of the SSL certificate. This container is launched once a week and aborts immediately.
I have checked the logs and found this error. My research was unsuccessful and I have no idea what file certbot is complaining about.
usage:
certbot [SUBCOMMAND] [options] [-d DOMAIN] [-d DOMAIN] ...
Certbot can obtain and install HTTPS/TLS/SSL certificates. By default,
it will attempt to use a webserver both for obtaining and installing the
certificate.
certbot: error: Unable to open config file: certonly -n --webroot --webroot-path=/var/lib/challenge --email contact#**********.com --agree-tos --no-eff-email -d www.**********.com --key-type ecdsa. Error: No such file or directory
Do you have any idea of the problem?
Thanks in advance,
EDIT:
The contents of /etc/letsencrypt are
accounts archive cli.ini csr keys live renewal renewal-hooks
Inside cli.ini, I have :
key-type = ecdsa
elliptic-curve = secp384r1
rsa-key-size = 4096
email = contact@attom.eu
authenticator = webroot
webroot-path = /var/lib/challenge
agree-tos = true
The docker-compose.yml contains :
version: '3'
services:
nginx:
build: ./nginx
container_name: nginx
restart: unless-stopped
depends_on:
- php
networks:
- app-network
volumes:
- {{ mounted_dir_app }}/public:/var/www/html:ro
- certbotdata:/etc/letsencrypt:ro
- challenge:/home/challenge
ports:
- "80:80"
- "443:443"
env_file:
- .env
- ".env.$ENV"
healthcheck:
test: curl -IsLk $$SITE_URL | head -n 1 | grep -q -e ^HTTP -e 200
start_period: 30s
interval: 10s
timeout: 3s
retries: 5
php:
#skip
mysql:
#skip
certbot:
depends_on:
nginx:
condition: service_healthy
build:
context: ./certbot
args:
- "ENV=$ENV"
container_name: certbot
env_file:
- .env
- ".env.$ENV"
volumes:
- certbotdata:/etc/letsencrypt
- challenge:/var/lib/challenge
networks:
app-network:
driver: bridge
volumes:
dbdata:
certbotdata:
challenge:
Edit:
The CERTBOT Dockerfile is
ARG ENV
FROM certbot/certbot as cert-prod
CMD certonly -n --webroot --webroot-path=/var/lib/challenge --email contact@**********.com --agree-tos --no-eff-email -d www.**********.com --key-type ecdsa
FROM alpine as cert-dev
RUN apk update && apk add openssl
CMD mkdir -p /etc/letsencrypt/live/www.**********.com && \
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
-subj "/C=**/ST=**********/L=**********" \
-keyout /etc/letsencrypt/live/www.**********.com/privkey.pem -out /etc/letsencrypt/live/www.**********.com/fullchain.pem
FROM cert-${ENV}
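A plausible cause of the error above, inferred from Docker's CMD semantics rather than stated in the thread: the cert-prod stage uses the shell form of CMD, so Docker wraps it as /bin/sh -c "certonly …" and appends that to the image's certbot ENTRYPOINT. Certbot then parses the -c as its --config flag and tries to open the entire command string as a config file, which is exactly the error reported. The exec form of CMD avoids the wrapper (example.com stands in for the redacted domain):

```dockerfile
FROM certbot/certbot as cert-prod
# Exec-form CMD: each argument is passed to the certbot ENTRYPOINT
# verbatim, with no /bin/sh -c wrapper to be mistaken for --config.
CMD ["certonly", "-n", "--webroot", "--webroot-path=/var/lib/challenge", "--email", "contact@example.com", "--agree-tos", "--no-eff-email", "-d", "www.example.com", "--key-type", "ecdsa"]
```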
So what I am looking at is a docker run command being used to create a Docker container for OpenTelemetry that passes in a config command, and the code looks like...
$ git clone git#github.com:open-telemetry/opentelemetry-collector.git; \
cd opentelemetry-collector/examples; \
go build main.go; ./main & pid1="$!";
docker run --rm -p 13133:13133 -p 14250:14250 -p 14268:14268 \
-p 55678-55679:55678-55679 -p 4317:4317 -p 8888:8888 -p 9411:9411 \
-v "${PWD}/local/otel-config.yaml":/otel-local-config.yaml \
--name otelcol otel/opentelemetry-collector \
--config otel-local-config.yaml; \
kill $pid1; docker stop otelcol
(https://opentelemetry.io/docs/collector/getting-started/#docker)
What I don't understand is how a non-Docker-related config file (the OpenTelemetry config) fits into the "docker run --config" or "docker compose config" commands. Below is the OpenTelemetry config file, which seems to be unrelated to Docker:
extensions:
memory_ballast:
size_mib: 512
zpages:
endpoint: 0.0.0.0:55679
receivers:
otlp:
protocols:
grpc:
http:
processors:
batch:
memory_limiter:
# 75% of maximum memory up to 4G
limit_mib: 1536
# 25% of limit up to 2G
spike_limit_mib: 512
check_interval: 5s
exporters:
logging:
logLevel: debug
service:
pipelines:
traces:
receivers: [otlp]
processors: [memory_limiter, batch]
exporters: [logging]
metrics:
receivers: [otlp]
processors: [memory_limiter, batch]
exporters: [logging]
extensions: [memory_ballast, zpages]
https://github.com/open-telemetry/opentelemetry-collector/blob/main/examples/local/otel-config.yaml
Now I have looked at these Docker links
https://docs.docker.com/engine/swarm/configs/#how-docker-manages-configs
https://nickjanetakis.com/blog/docker-tip-43-using-the-docker-compose-config-command
but I couldn't figure out how to get the --config part of the OpenTelemetry docker run example working in Docker Compose with docker compose config. Here is my docker-compose:
version: "3.9"
services:
opentelemetry:
container_name: otel
image: otel/opentelemetry-collector:latest
volumes:
- ~/source/repos/CritterTrackerProject/DockerServices/OpenTelemetry/otel-collector-config.yml:/otel-local-config.yml
config:
- otel-local-config.yml
ports:
- 13133:13133
- 14250:14250
- 14268:14268
- 55678-55679:55678-55679
- 4317:4317
- 8888:8888
- 9411:9411
extra_hosts:
- "host.docker.internal:host-gateway"
networks:
- my-network
jaeger:
# restart: unless-stopped
container_name: jaeger
image: jaegertracing/all-in-one:latest
ports:
- 16686:16686
# - 14250:14250
# - 14268:14268
# - 5775:5775/udp
- 6831:6831/udp
# - 6832:6832/udp
# - 5778:5778
# - 9411:9411
extra_hosts:
- "host.docker.internal:host-gateway"
networks:
- my-network
postgres:
restart: always
container_name: postgres
image: postgres:latest
environment:
- POSTGRES_USER=code
- POSTGRES_PASSWORD=code
ports:
- 5432:5432
volumes:
- postgres:/var/lib/postgresql/data
extra_hosts:
- "host.docker.internal:host-gateway"
networks:
- my-network
nginx:
restart: always
container_name: webserver
image: nginx:latest
build:
context: ~/source/repos/CritterTrackerProject
dockerfile: DockerServices/Nginx/Dockerfile
ports:
- 80:80
- 443:443
extra_hosts:
- "host.docker.internal:host-gateway"
networks:
- my-network
volumes:
postgres:
networks:
my-network:
external: true
name: my-network
Here is my error after running docker compose up in a Git Bash terminal
$ docker compose -f ./DockerServices/docker-compose.yml up -d
services.opentelemetry Additional property config is not allowed
The general form of docker run is
docker run [docker options] image [command]
And if you look at your original command it matches this pattern
docker run \
--rm -p ... -v ... --name ... \ # Docker options
otel/opentelemetry-collector \ # Image
--config otel-local-config.yaml # Command
So what looks like a --config option is really the command part of the container setup; it overrides the Dockerfile CMD, and it is passed as additional arguments to the image's ENTRYPOINT.
In a Compose setup, then, this would be the container's command:.
services:
opentelemetry:
image: otel/opentelemetry-collector:latest
command: --config otel-local-config.yaml
Since this is an application-specific command string, it's unrelated to the docker-compose config command, which is a diagnostic tool that just dumps out parts of your Compose configuration.
What you're doing in the docker run command is the following mount:
${PWD}/local/otel-config.yaml on the local host is mapped to /otel-local-config.yaml inside the container.
You can achieve the same behavior with the volumes option in Docker Compose (note the list-item dash, and that the whole mapping is one entry):
volumes:
  - "${PWD}/local/otel-config.yaml:/otel-local-config.yaml"
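Putting the pieces together, a minimal sketch of the opentelemetry service (the host path and container name are taken from the question; the collector is given the in-container path of the mounted file as its --config argument):

```yaml
services:
  opentelemetry:
    container_name: otel
    image: otel/opentelemetry-collector:latest
    volumes:
      # Host config file mounted into the container
      - ~/source/repos/CritterTrackerProject/DockerServices/OpenTelemetry/otel-collector-config.yml:/otel-local-config.yml
    # Passed as arguments to the image's ENTRYPOINT (the collector binary),
    # replacing the invalid top-level "config:" key from the question
    command: --config /otel-local-config.yml
```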
I am following the DigitalOcean tutorial to install WordPress via Docker:
https://www.digitalocean.com/community/tutorials/how-to-install-wordpress-with-docker-compose
It says if the certbot exit status is anything other than 0 I should check the logs, but there are no log files where it says to look. Newish to Docker, thanks for helping all!
Edit: I'm noting that none of the volumes from this docker-compose were created on the host.
Name Command State Ports
-------------------------------------------------------------------------
certbot certbot certonly --webroot ... Exit 1
db docker-entrypoint.sh --def ... Up 3306/tcp, 33060/tcp
webserver nginx -g daemon off; Up 0.0.0.0:80->80/tcp
wordpress docker-entrypoint.sh php-fpm Up 9000/tcp
Docker-compose.yml here
version: '3'
services:
db:
image: mysql:8.0
container_name: db
restart: unless-stopped
env_file: .env
environment:
- MYSQL_DATABASE=wordpress
volumes:
- dbdata:/var/lib/mysql
command: '--default-authentication-plugin=mysql_native_password'
networks:
- app-network
wordpress:
depends_on:
- db
image: wordpress:5.1.1-fpm-alpine
container_name: wordpress
restart: unless-stopped
env_file: .env
environment:
- WORDPRESS_DB_HOST=db:3306
- WORDPRESS_DB_USER=$MYSQL_USER
- WORDPRESS_DB_PASSWORD=$MYSQL_PASSWORD
- WORDPRESS_DB_NAME=wordpress
volumes:
- wordpress:/var/www/html
networks:
- app-network
webserver:
depends_on:
- wordpress
image: nginx:1.15.12-alpine
container_name: webserver
restart: unless-stopped
ports:
- "80:80"
volumes:
- wordpress:/var/www/html
- ./nginx-conf:/etc/nginx/conf.d
- certbot-etc:/etc/letsencrypt
networks:
- app-network
certbot:
depends_on:
- webserver
image: certbot/certbot
container_name: certbot
volumes:
- certbot-etc:/etc/letsencrypt
- wordpress:/var/www/html
command: certonly --webroot --webroot-path=/var/www/html --email sammy@example.com --agree-tos --no-eff-email --staging -d example.com -d www.example.com
volumes:
certbot-etc:
wordpress:
dbdata:
networks:
app-network:
driver: bridge
The volumes being created here are named volumes.
To list named volumes, run:
docker volume ls
Also, per the comment above, you could check the certbot logs with:
docker-compose logs certbot
The volumes and container logs won't show up under plain docker commands unless you use the specific container and volume names, which you can find with:
docker-compose ps and docker volume ls
Or use the docker-compose variants above.
I'm trying to set up a Shopware Docker container for development. I wrote a Dockerfile for the Shopware initialization process, but every time I run the build, Shopware returns this error message:
mysql -u 'root' -p'root' -h 'dbs' --port='3306' -e "DROP DATABASE IF EXISTS `shopware6dev`"
ERROR 2005 (HY000): Unknown MySQL server host 'dbs' (-2)
I think Docker sets up the default network only after all build processes are done, but I need to connect to the database before all containers are ready. The depends_on option doesn't help. I hope someone has an idea how to solve this problem.
This is my docker-compose file:
version: '3'
services:
shopwaredev:
build:
context: ./docker/web
dockerfile: Dockerfile
volumes:
- ./log:/var/log/apache2
environment:
- VIRTUAL_HOST=shopware6dev.test,www.shopware6dev.test
- HTTPS_METHOD=noredirect
restart: on-failure:10
depends_on:
- dbs
adminer:
image: adminer
restart: on-failure:10
ports:
- 8080:8080
dbs:
image: "mysql:5.7"
volumes:
- ./mysql57:/var/lib/mysql
environment:
- MYSQL_ROOT_PASSWORD=root
- MYSQL_DATABASE=shopware6dev
restart: on-failure:10
nginx-proxy:
image: solution360/nginx-proxy
ports:
- "80:80"
- "443:443"
volumes:
- /var/run/docker.sock:/tmp/docker.sock:ro
- ./ssl:/etc/nginx/certs
restart: on-failure:10
and this is my dockerfile for web shopwaredev container:
FROM solution360/apache24-php74-shopware6
WORKDIR /var/www/html
RUN rm index.html
RUN git clone https://github.com/shopware/development.git .
RUN cp .psh.yaml.dist .psh.yaml
RUN sed -i 's|DB_USER: "app"|DB_USER: "root"|g' .psh.yaml
RUN sed -i 's|DB_PASSWORD: "app"|DB_PASSWORD: "root"|g' .psh.yaml
RUN sed -i 's|DB_HOST: "mysql"|DB_HOST: "dbs"|g' .psh.yaml
RUN sed -i 's|DB_NAME: "shopware"|DB_NAME: "shopware6dev"|g' .psh.yaml
RUN sed -i 's|APP_URL: "http://localhost:8000"|APP_URL: "http://shopware6dev.test"|g' .psh.yaml
RUN ./psh.phar install
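One possible way around this, sketched rather than verified against this setup: other compose services are only reachable at run time, never during docker build, so the database-dependent install step can be moved out of the build and into the container's start command, where the dbs hostname resolves. (apache2-foreground is a placeholder for whatever the base image normally runs.)

```dockerfile
FROM solution360/apache24-php74-shopware6
WORKDIR /var/www/html
RUN rm index.html && \
    git clone https://github.com/shopware/development.git . && \
    cp .psh.yaml.dist .psh.yaml
# ... the sed edits to .psh.yaml as above ...
# Defer the installer to container start, when the compose network
# (and the dbs container) actually exists, then hand over to the web server.
CMD ./psh.phar install && apache2-foreground
```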
I'm trying to set up a Spark development environment with Zeppelin on Docker, but I'm having trouble connecting the Zeppelin and Spark containers.
I'm deploying a Docker Stack, with the current docker-compose
version: '3'
services:
spark-master:
image: gettyimages/spark
command: bin/spark-class org.apache.spark.deploy.master.Master -h spark-master
hostname: spark-master
environment:
SPARK_CONF_DIR: /conf
SPARK_PUBLIC_DNS: 10.129.34.90
volumes:
- spark-master-volume:/conf
- spark-master-volume:/tmp/data
ports:
- 8000:8080
spark-worker:
image: gettyimages/spark
command: bin/spark-class org.apache.spark.deploy.worker.Worker spark://spark-master:7077
hostname: spark-worker
environment:
SPARK_MASTER_URL: spark-master:7077
SPARK_CONF_DIR: /conf
SPARK_PUBLIC_DNS: 10.129.34.90
SPARK_WORKER_CORES: 2
SPARK_WORKER_MEMORY: 2g
volumes:
- spark-worker-volume:/conf
- spark-worker-volume:/tmp/data
ports:
- "8081-8100:8081-8100"
zeppelin:
image: apache/zeppelin:0.8.0
ports:
- 8080:8080
- 8443:8443
volumes:
- spark-master-volume:/opt/zeppelin/logs
- spark-master-volume:/opt/zeppelin/notebookcd
environment:
MASTER: "spark://spark-master:7077"
SPARK_MASTER: "spark://spark-master:7077"
SPARK_HOME: /usr/spark-2.4.1
depends_on:
- spark-master
volumes:
spark-master-volume:
driver: local
spark-worker-volume:
driver: local
It builds normally, but when I try to run Spark on Zeppelin, it throws:
java.lang.RuntimeException: /zeppelin/bin/interpreter.sh: line 231: /usr/spark-2.4.1/bin/spark-submit: No such file or directory
I think the problem is in the volumes, but I can't figure out how to set them up right.
You need to install Spark in your Zeppelin Docker image in order to use spark-submit, and update the Spark interpreter config to point at your Spark cluster:
zeppelin_notebook_server:
container_name: zeppelin_notebook_server
build:
context: zeppelin/
restart: unless-stopped
volumes:
- ./zeppelin/config/interpreter.json:/zeppelin/conf/interpreter.json:rw
- ./zeppelin/notebooks:/zeppelin/notebook
- ../sample-data:/sample-data:ro
ports:
- "8085:8080"
networks:
- general
labels:
container_group: "notebook"
spark_base:
container_name: spark-base
build:
context: spark/base
image: spark-base:latest
spark_master:
container_name: spark-master
build:
context: spark/master/
networks:
- general
hostname: spark-master
ports:
- "3030:8080"
- "7077:7077"
environment:
- "SPARK_LOCAL_IP=spark-master"
depends_on:
- spark_base
volumes:
- ./spark/apps/jars:/opt/spark-apps
- ./spark/apps/data:/opt/spark-data
- ../sample-data:/sample-data:ro
spark_worker_1:
container_name: spark-worker-1
build:
context: spark/worker/
networks:
- general
hostname: spark-worker-1
ports:
- "3031:8081"
env_file: spark/spark-worker-env.sh
environment:
- "SPARK_LOCAL_IP=spark-worker-1"
depends_on:
- spark_master
volumes:
- ./spark/apps/jars:/opt/spark-apps
- ./spark/apps/data:/opt/spark-data
- ../sample-data:/sample-data:ro
spark_worker_2:
container_name: spark-worker-2
build:
context: spark/worker/
networks:
- general
hostname: spark-worker-2
ports:
- "3032:8082"
env_file: spark/spark-worker-env.sh
environment:
- "SPARK_LOCAL_IP=spark-worker-2"
depends_on:
- spark_master
volumes:
- ./spark/apps/jars:/opt/spark-apps
- ./spark/apps/data:/opt/spark-data
- ../sample-data:/sample-data:ro
Zeppelin docker file:
FROM "apache/zeppelin:0.8.1"
RUN wget http://apache.mirror.iphh.net/spark/spark-2.4.3/spark-2.4.3-bin-hadoop2.7.tgz --progress=bar:force && \
tar xvf spark-2.4.3-bin-hadoop2.7.tgz && \
mkdir -p /usr/local/spark && \
mv spark-2.4.3-bin-hadoop2.7/* /usr/local/spark/. && \
mkdir -p /sample-data
ENV SPARK_HOME "/usr/local/spark/"
Make sure your Zeppelin Spark interpreter config points at the same Spark master (spark://spark-master:7077).
Build a Dockerfile with content
FROM gettyimages/spark
ENV APACHE_SPARK_VERSION 2.4.1
ENV APACHE_HADOOP_VERSION 2.8.0
ENV ZEPPELIN_VERSION 0.8.1
RUN apt-get update
RUN set -x \
&& curl -fSL "http://www-eu.apache.org/dist/zeppelin/zeppelin-0.8.1/zeppelin-0.8.1-bin-all.tgz" -o /tmp/zeppelin.tgz \
&& tar -xzvf /tmp/zeppelin.tgz -C /opt/ \
&& mv /opt/zeppelin-* /opt/zeppelin \
&& rm /tmp/zeppelin.tgz
ENV SPARK_SUBMIT_OPTIONS "--jars /opt/zeppelin/sansa-examples-spark-2016-12.jar"
ENV SPARK_HOME "/usr/spark-2.4.1/"
WORKDIR /opt/zeppelin
CMD ["/opt/zeppelin/bin/zeppelin.sh"]
and then define your service within your docker-compose.yml file as follows:
version: '3'
services:
zeppelin:
build: ./zeppelin
image: zeppelin:0.8.1-hadoop-2.8.0-spark-2.4.1
...
Finally, use docker-compose -f docker-compose.yml build to build the customised image before running docker stack deploy.