I happen to run a domain on Cloudflare DNS that I want to use for an authentik deployment. From the corresponding documentation, it seems rather straightforward to use certbot to get ACME/Let's Encrypt certificates.
I modified the example snippet in docker-compose.override.yml to the following:
root@debian-2gb-nbg1-1:~# cat docker-compose.override.yml
version: "3.4"
services:
  certbot:
    image: docker.io/certbot/dns-cloudflare:latest
    volumes:
      - ./certs/:/etc/letsencrypt
    # Variables depending on DNS Plugin
    environment:
      CLOUDFLARE_API_TOKEN: <redacted>
    command:
      - certonly
      - --non-interactive
      - --agree-tos
      - --dns-cloudflare
      # - --dns-cloudflare-credentials cloudflare.ini
      - -m <redacted>
      - -d <redacted>
      - -v
certbot immediately exits after running docker-compose up -d
The confusing part to me is that the log file says:
certbot: error: unrecognized arguments: --dns-cloudflare-credentials cloudflare.ini
Whereas the documentation for certbot-dns-cloudflare says this is a required argument.
What am I missing?
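For context: each entry in a YAML command: list is handed to the program as a single argv element, so a flag and its value written together in one entry arrive as one token, which argparse then rejects as a whole. A sketch of how the credentials line would need to be split (the /etc/letsencrypt/cloudflare.ini path is illustrative; it assumes the file is placed in the mounted certs directory):

command:
  - certonly
  - --non-interactive
  - --agree-tos
  - --dns-cloudflare
  - --dns-cloudflare-credentials
  - /etc/letsencrypt/cloudflare.ini
  - -m
  - <redacted>
  - -d
  - <redacted>
  - -v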
Related
I am having trouble rewriting the route, or adding a path prefix to a route, for a JupyterLab service in Docker so that http://jupyter-test.localhost/user starts JupyterLab. I also tried removing the stripprefix middleware, with no luck. Any help would be appreciated, thank you.
docker-compose.yml
version: "3.8"
services:
reverse-proxy:
image: traefik:v2.4
command: --api.insecure=true --providers.docker # --log.level=DEBUG
ports:
- "80:80"
- "8080:8080"
volumes:
- /var/run/docker.sock:/var/run/docker.sock
labels:
- traefik.enable=false
jupyter_rewrite_path:
restart: always
image: jupyter/scipy-notebook
command: jupyter-lab --ip='*' --NotebookApp.token='' --NotebookApp.password='' --NotebookApp.base_url=/user
labels:
- traefik.http.routers.jupyter_rewrite_path.rule=Host(`jupyter-test.localhost`) && PathPrefix(`/user`)
- traefik.http.services.jupyter_rewrite_path.loadbalancer.server.port=8888
- "traefik.http.routers.jupyter_rewrite_path.middlewares=jupyter_rewrite_path_stripprefix"
- "traefik.http.middlewares.jupyter_rewrite_path_stripprefix.stripprefix.prefixes=/user"
Use docker-compose up.
When I start containers using your docker-compose.yaml file, I see that the jupyter_rewrite_path container is marked as "unhealthy". Look at the STATUS column in this output:
$ docker compose ps
NAME ... STATUS ...
jupyter_jupyter_rewrite_path_1 ... Up 58 seconds (unhealthy) ...
jupyter_reverse-proxy_1 ... Up 58 seconds ...
Traefik will not direct traffic to an unhealthy service; if you look at your Traefik dashboard (http://localhost:8080/dashboard/#/http/routers), you'll see that the Jupyter container doesn't show up in the list.
The container is marked unhealthy because of a healthcheck defined in the image; we can see that with docker image inspect which shows us:
"Healthcheck": {
"Test": [
"CMD-SHELL",
"wget -O- --no-verbose --tries=1 --no-check-certificate http${GEN_CERT:+s}://localhost:${JUPYTER_PORT}${JUPYTERHUB_SERVICE_PREFIX:-/}api || exit 1"
],
"Interval": 5000000000,
"Timeout": 3000000000,
"StartPeriod": 5000000000,
"Retries": 3
},
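For reference, one way to pull just that block out of the image metadata is with a Go-template format string (a sketch; other jupyter stack images should work the same way):

docker image inspect jupyter/scipy-notebook --format '{{json .Config.Healthcheck}}'

The durations are in nanoseconds, so the 5000000000 interval is 5 seconds.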
So it's connecting to /api on the container and expecting a successful response. As we can see from the container logs, it is in fact getting a 404 error:
jupyter_rewrite_path_1 | [W 2023-02-02 20:50:38.456 ServerApp] 404 GET /api (6d36d539cca44c57bb06702c21c5cc9b@127.0.0.1) 0.84ms referer=None
And that's because you've set --NotebookApp.base_url=/user, but the healthcheck is requesting /api rather than /user/api.
If you look at the healthcheck, you can see that it builds the URL from a number of variables:
http${GEN_CERT:+s}://localhost:${JUPYTER_PORT}${JUPYTERHUB_SERVICE_PREFIX:-/}api
By setting the JUPYTERHUB_SERVICE_PREFIX variable, we can get the healthcheck to connect to Jupyter at the expected path. That looks like:
jupyter_rewrite_path:
  restart: always
  image: docker.io/jupyter/scipy-notebook
  environment:
    JUPYTERHUB_SERVICE_PREFIX: /user/
  command:
    - jupyter-lab
    - --ip=*
    - --NotebookApp.token=
    - --NotebookApp.password=
    - --NotebookApp.base_url=/user
  labels:
    - traefik.enable=true
    - traefik.http.routers.jupyter_rewrite_path.rule=Host(`jupyter-test.localhost`) && PathPrefix(`/user`)
    - traefik.http.services.jupyter_rewrite_path.loadbalancer.server.port=8888
You'll note I've dropped the stripprefix bits here, because they're no longer necessary: by setting the --NotebookApp.base_url option, you're telling Jupyter that it's hosted at /user, so we don't need (or want) to strip the prefix.
With the above configuration, I can successfully access the notebook server at http://localhost/user/.
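To confirm the fix, you can watch the container's health state flip from starting to healthy (a sketch; substitute your actual container name):

docker inspect --format '{{.State.Health.Status}}' jupyter_jupyter_rewrite_path_1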
Using Docker Desktop with WSL2, the ultimate aim is to run a shell command to generate local SSL certs before starting an nginx service.
To docker-compose up we have:
version: '3.6'
services:
  # Frontend
  rp:
    environment:
      - COMPOSE_CONVERT_WINDOWS_PATHS=1
    container_name: revproxy
    image: nginx:latest
    user: root
    restart: always
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - .\conf:/home/conf
      - .\scripts:/home/scripts
So far so good. Now we would like to add a pre-startup script, /home/scripts/certs.sh, to create the SSL certs before launching the nginx server:
mkdir -p /home/ssl/certs
mkdir -p /home/ssl/private
openssl req -x509 -nodes -days 365 -subj "/C=CA/ST=QC/O=Company, Inc./CN=zero.url" -addext "subjectAltName=DNS:mydomain.com" -newkey rsa:2048 -keyout /home/ssl/private/nginx-zero.key -out /home/ssl/certs/nginx-zero.crt;
Now adding the following to docker-compose.yml causes the container to bounce between running and restarting: the script keeps recreating the certs and then exits, which ends the container, with no error message at all. I assume the script exits cleanly, which ends the container normally, and restart: always then triggers the restart.
command: /bin/sh -c "/home/scripts/certs.sh"
Following other answers, adding exec "$@" makes no difference.
As an alternative, I tried to copy the script into docker-entrypoint.d, the folder of scripts nginx runs before launch. This creates an error on docker-compose up:
version: '3.6'
services:
  # Frontend
  rp:
    environment:
      - COMPOSE_CONVERT_WINDOWS_PATHS=1
    container_name: revproxy
    image: nginx:latest
    user: root
    restart: always
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - .\conf:/home/conf
      - .\scripts:/home/scripts
    COPY /home/scripts/certs.sh /docker-entrypoint.d/certs.sh
This generates an error:
ERROR: yaml.scanner.ScannerError: while scanning a simple key
in ".\docker-compose.yml", line 18, column 7
could not find expected ':'
in ".\docker-compose.yml", line 18, column 64
The terminal process "C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -Command docker-compose -f "docker-compose.yml" up -d --build" terminated with exit code: 1.
So what are the options for running a script before the primary docker-entrypoint.sh script starts?
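(For what it's worth, the docker-entrypoint.d idea is sound in itself: the official nginx image runs any executable *.sh files it finds under /docker-entrypoint.d before starting nginx. Since COPY is not valid compose syntax, a volume mount is one sketch of getting the script there; the 40- prefix is just an ordering convention and the script needs a shebang and the executable bit:

volumes:
  - .\conf:/home/conf
  - .\scripts\certs.sh:/docker-entrypoint.d/40-certs.sh
)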
UPDATE:
As per the suggestion in a comment, changing the format of the flag did not help:
version: '3.6'
services:
  # Frontend
  rp:
    environment:
      - COMPOSE_CONVERT_WINDOWS_PATHS: 1
    container_name: revproxy
    image: nginx:latest
    user: root
    restart: always
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - .\conf:/home/conf
      - .\dc_scripts:/home/scripts
    COPY /home/scripts/certs.sh /docker-entrypoint.d/certs.sh
ERROR: yaml.scanner.ScannerError: while scanning a simple key
in ".\docker-compose.yml", line 17, column 7
could not find expected ':'
in ".\docker-compose.yml", line 18, column 7
The terminal process "C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -Command docker-compose -f "docker-compose.yml" up -d --build" terminated with exit code: 1.
Dockerfiles are used to build images, and contain a list of commands like RUN, ENTRYPOINT and COPY. They have a very shell-script-like syntax, with one command per line (for the most part).
A docker compose file, on the other hand, is a YAML-formatted file that is used to deploy built images to docker as running services. You cannot put commands like COPY in this file.
You can, for local deployments on non-Windows systems, map individual files in the volumes section:
volumes:
  - .\conf:/home/conf
  - .\scripts:/home/scripts
  - ./scripts/certs.sh:/usr/local/bin/certs.sh
But this syntax only works on Linux and macOS hosts, I believe.
An alternative is to restructure your project with a Dockerfile and a docker-compose.yml file.
With a Dockerfile:
FROM nginx:latest
COPY --chmod=0755 scripts/certs.sh /usr/local/bin
ENTRYPOINT ["certs.sh"]
Into the docker-compose file, add a build: node with the path to the Dockerfile; "." will do. docker-compose build will be needed to force a rebuild if the Dockerfile changes after the first build.
version: '3.9'
services:
  revproxy:
    environment:
      COMPOSE_CONVERT_WINDOWS_PATHS: 1
    image: nginx:custom
    build: .
    user: root
    restart: always
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - .\conf:/home/conf
      - .\scripts:/home/scripts
Now that you've changed the entrypoint of the nginx container to your custom script, you need to chain to the original one, calling it with the original command.
So, certs.sh needs to look like:
#!/bin/sh
# your cert setup here

# now, transfer control to the original entrypoint, passing along the
# command line that was given ("$0" is the script itself and is not part
# of "$@", so nothing needs to be shifted off first).
exec /docker-entrypoint.sh "$@"
docker inspect nginx:latest was used to discover the original entrypoint.
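A sketch of that discovery step, narrowed to just the two relevant fields:

docker inspect --format '{{json .Config.Entrypoint}} {{json .Config.Cmd}}' nginx:latest

This should print /docker-entrypoint.sh as the entrypoint, and nginx -g "daemon off;" as the default command that gets passed through to the script.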
Added after edit:
Also, COMPOSE_CONVERT_WINDOWS_PATHS doesn't look like an environment variable that nginx is going to care about. This variable should probably be set in your Windows user environment so that it is available before running docker-compose.
C:\> set COMPOSE_CONVERT_WINDOWS_PATHS=1
C:\> docker-compose build
...
C:\> docker-compose up
...
Also, the nginx page on Docker Hub indicates that /etc/nginx is the proper configuration folder for nginx, so I don't think that mapping things to /home/... is going to do anything; nginx should still display its default page, however.
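If the intent is for nginx to actually read those configs, a sketch of the mapping (assuming conf contains standard conf.d-style server blocks):

volumes:
  - .\conf:/etc/nginx/conf.d:ro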
I have a weird problem, as it seems to have been working fine until today; I can't tell what's changed since then, however. I run docker-compose up --build --force-recreate and the build fails saying that it can't resolve the host name.
The issue is specifically because of curl commands inside one of the Dockerfiles:
USER logstash
WORKDIR /usr/share/logstash
RUN ./bin/logstash-plugin install logstash-input-beats
WORKDIR /tmp
COPY templates/winlogbeat.template.json winlogbeat.template.json
COPY templates/metricbeat.template.json metricbeat.template.json
RUN curl -XPUT -H 'Content-Type: application/json' http://elasticsearch:9200/_template/metricbeat-6.3.2 -d@metricbeat.template.json
RUN curl -XPUT -H 'Content-Type: application/json' http://elasticsearch:9200/_template/winlogbeat-6.3.2 -d@winlogbeat.template.json
Originally, I had those commands running inside of the Elasticsearch Container, but it stopped working, reporting Could not resolve host: elasticsearch; Unknown error
I thought maybe it was trying to do the RUN commands too soon, so I moved the process to the Logstash container, but the issue remains. Logstash depends on Elasticsearch, so Elastic should be up and running by the time the Logstash container is trying to run this.
I've tried deleting images, containers, networks, etc., but nothing seems to let me run these curl commands during the build process.
I'm thinking that perhaps the Docker daemon is caching DNS names, but can't figure out how to reset it, as I've already deleted and recreated the network several times.
Can anyone offer any ideas?
Host: Ubuntu Server 18.04
SW: Docker-CE (current version)
ELK stack: All are the official 6.3.2 images provided by Elastic.
Docker-Compose.YML:
version: '2'
services:
  elasticsearch:
    build:
      context: elasticsearch/
    volumes:
      - esdata:/usr/share/elasticsearch/data
      - ./elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml:ro
    # ports:
    #   - "9200:9200"
    #   - "9300:9300"
    environment:
      ES_JAVA_OPTS: "-Xmx512m -Xms512m"
      HOSTNAME: "elasticsearch"
    networks:
      - elk
  logstash:
    build:
      context: logstash/
    volumes:
      - ./logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml:ro
      - ./logstash/pipeline:/usr/share/logstash/pipeline:ro
    ports:
      - "5000:5000"
      - "5044:5044"
      - "5045:5045"
    environment:
      LS_JAVA_OPTS: "-Xmx256m -Xms256m"
    networks:
      - elk
    depends_on:
      - elasticsearch
  kibana:
    build:
      context: kibana/
    volumes:
      - ./kibana/config/:/usr/share/kibana/config:ro
    # Port 5601 is not exposed outside of the container
    # Can be accessed through Nginx Reverse Proxy only
    # ports:
    #   - "5601:5601"
    networks:
      - elk
    depends_on:
      - elasticsearch
  nginx:
    build:
      context: nginx/
    environment:
      - APPLICATION_URL=http://docker.local
    volumes:
      - ./nginx/conf.d/:/etc/nginx/conf.d:ro
    ports:
      - "80:80"
    networks:
      - elk
    depends_on:
      - elasticsearch
  fouroneone:
    build:
      context: fouroneone/
    # No direct access, only through Nginx Reverse Proxy
    # ports:
    #   - "8181:80"
    networks:
      - elk
    depends_on:
      - elasticsearch
networks:
  elk:
    driver: bridge
volumes:
  esdata:
Running a curl against elasticsearch in a Dockerfile is the wrong shortcut: RUN commands execute at image build time, when the build container is not attached to the compose network, so service names like elasticsearch cannot resolve, and the service may not even be up yet. The Dockerfile may be the wrong place altogether.
Also, I would not put this script in the Dockerfile, but possibly use it to alter the ENTRYPOINT for the image if I really wanted to use a Dockerfile (again, I would not advise it).
Best to do here is to have a logstash service in the compose file whose image only adds the updated input plugin, and to remove all the rest of the lines in the Dockerfile. You could then have a logstash_setup service which does the setup bits, using the logstash image, or, even cleaner, a basic centos image, which has bash and curl installed, since all you do is run a couple of curl commands passing some files; a compose sketch of that service follows the script below.
The script I am talking about might look something like this:
#!/bin/bash
set -euo pipefail

es_url=http://elasticsearch:9200
# Wait for Elasticsearch to start up before doing anything.
until curl -s $es_url -k -o /dev/null; do
    sleep 1
done

# then put your code here
curl -XPUT -H 'Content-Type: application/json' http://elasticsearch:9200/_ ...
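A minimal sketch of that setup service in compose terms (the service name, script path, and centos image choice are illustrative, not prescriptive):

logstash_setup:
  image: centos:7
  volumes:
    - ./setup:/setup:ro
  command: /bin/bash /setup/load-templates.sh
  networks:
    - elk
  depends_on:
    - elasticsearch

The script above would then live at ./setup/load-templates.sh, with the template JSON files alongside it.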
I'm having an issue with my travis-ci before_script while trying to connect to my docker postgres container:
Error starting userland proxy: listen tcp 0.0.0.0:5432: bind: address already in use
I've seen this problem raised but never fully addressed around SO and GitHub issues, and I'm not clear whether it is specific to Docker or Travis. One linked issue (below) works around it by using 5433 as the host postgres port, but I'd like to know for sure what is going on before I jump into something.
my travis.yml:
sudo: required
services:
  - docker
env:
  DOCKER_COMPOSE_VERSION: 1.7.1
  DOCKER_VERSION: 1.11.1-0~trusty
before_install:
  # list docker-engine versions
  - apt-cache madison docker-engine
  # upgrade docker-engine to specific version
  - sudo apt-get -o Dpkg::Options::="--force-confnew" install -y docker-engine=${DOCKER_VERSION}
  # upgrade docker-compose
  - sudo rm /usr/local/bin/docker-compose
  - curl -L https://github.com/docker/compose/releases/download/${DOCKER_COMPOSE_VERSION}/docker-compose-`uname -s`-`uname -m` > docker-compose
  - chmod +x docker-compose
  - sudo mv docker-compose /usr/local/bin
before_script:
  - echo "Before Script:"
  - docker-compose -f docker-compose.ci.yml build
  - docker-compose -f docker-compose.ci.yml run app rake db:setup
  - docker-compose -f docker-compose.ci.yml run app /bin/sh
script:
  - echo "Running Specs:"
  - rake spec
My docker-compose.yml for CI:
postgres:
  image: postgres:9.4.5
  environment:
    POSTGRES_USER: web
    POSTGRES_PASSWORD: yourpassword
  expose:
    - '5432' # added this as an attempt to open the port
  ports:
    - '5432:5432'
  volumes:
    - web-postgres:/var/lib/postgresql/data
redis:
  image: redis:3.0.5
  ports:
    - '6379:6379'
  volumes:
    - web-redis:/var/lib/redis/data
web:
  build: .
  links:
    - postgres
    - redis
  volumes:
    - ./code:/app
  ports:
    - '8000:8000'
  # env_file: # setting these directly in the environment
  #   - .docker.env # (they work fine locally)
sidekiq:
  build: .
  command: bundle exec sidekiq -C code/config/sidekiq.yml
  links:
    - postgres
    - redis
  volumes:
    - ./code:/app
Docker & Postgres: Failed to bind tcp 0.0.0.0:5432 address already in use
How to get Docker host IP on Travis CI?
It seems that the Postgres service is enabled by default on Travis CI.
So you could:
Try to disable the Postgres service in your Travis config. See How to stop services on Travis CI running by default?. See also https://docs.travis-ci.com/user/database-setup/#PostgreSQL .
Or
Map your postgres container to another host port (!= 5432), like -p 5455:5432.
It could also be useful to check whether the service is already running: Check If a Particular Service Is Running on Ubuntu
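In compose terms, the remap only changes the host side of the mapping; the container still listens on 5432 and other services reach it there unchanged (5433 here is arbitrary):

ports:
  - '5433:5432'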
Do you use Travis' Postgres?
services:
- postgresql
It would be easier if you provided your travis.yml.
I have found the official sentry image on Docker Hub. But the documentation is incomplete and I can't set up the environment step by step.
We have to set up the database container first, but none of the docs tell how to set it up. Specifically, I don't know what username and password sentry will use.
And I also get the following error when I run the sentry container:
sudo docker run --name some-sentry --link some-mysql:mysql -d sentry
e888fcf2976a9ce90f80b28bb4c822c07f7e0235e3980e2a33ea7ddeb0ff18ce
sudo docker logs some-sentry
Traceback (most recent call last):
  File "/usr/local/bin/sentry", line 9, in <module>
    load_entry_point('sentry==6.4.4', 'console_scripts', 'sentry')()
  File "/usr/local/lib/python2.7/site-packages/sentry/utils/runner.py", line 310, in main
    initializer=initialize_app,
  File "/usr/local/lib/python2.7/site-packages/logan/runner.py", line 167, in run_app
    configure_app(config_path=config_path, **kwargs)
  File "/usr/local/lib/python2.7/site-packages/logan/runner.py", line 89, in configure_app
    raise ValueError("Configuration file does not exist at %r" % (config_path,))
ValueError: Configuration file does not exist at '/.sentry/sentry.conf.py'
UPDATE circa version 21
They don't seem to want to build the official image for us any more, as per the deprecation notice on Docker Hub. However, good news: https://develop.sentry.dev/self-hosted/#getting-started supplies an install script, and there is an official docker-compose included. It seems Kafka and Zookeeper are now required too. Follow the docs to stay up to date.
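The getting-started flow boils down to roughly the following (a sketch; the repository URL is the one the docs point to, and the exact compose invocation varies by release, so check the docs before pinning anything):

git clone https://github.com/getsentry/self-hosted.git
cd self-hosted
./install.sh
docker compose up -d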
This is a moving target, so I suggest checking https://hub.docker.com/_/sentry/ for updates; their documentation is pretty good.
Circa version 8 you can easily convert those instructions to use docker-compose
docker-compose.yml
version: "2"
services:
redis:
image: redis:3.0.7
networks:
- sentry-net
postgres:
image: postgres:9.6.1
environment:
- POSTGRES_USER:sentry
- POSTGRES_PASSWORD:sentry
# volumes:
# - ./data:/var/lib/postgresql/data:rw
networks:
- sentry-net
sentry:
image: sentry:${SENTRY_TAG}
depends_on:
- redis
- postgres
environment:
- SENTRY_REDIS_HOST=redis
- SENTRY_SECRET_KEY=${SECRET}
- SENTRY_POSTGRES_HOST=postgres
ports:
- 9000:9000
networks:
- sentry-net
sentry_celery_beat:
image: sentry:${SENTRY_TAG}
depends_on:
- sentry
environment:
- SENTRY_REDIS_HOST=redis
- SENTRY_SECRET_KEY=${SECRET}
- SENTRY_POSTGRES_HOST=postgres
command: "sentry run cron"
networks:
- sentry-net
sentry_celery_worker:
image: sentry:${SENTRY_TAG}
depends_on:
- sentry
environment:
- SENTRY_REDIS_HOST=redis
- SENTRY_SECRET_KEY=${SECRET}
- SENTRY_POSTGRES_HOST=postgres
command: "sentry run worker"
networks:
- sentry-net
networks:
sentry-net:
.env
SENTRY_TAG=8.10.0
Run docker run --rm sentry:8.10.0 config generate-secret-key and add the secret
.env updated
SENTRY_TAG=8.10.0
SECRET=somelongsecretgeneratedbythetool
First boot:
docker-compose up -d postgres
docker-compose up -d redis
docker-compose run sentry sentry upgrade
Full boot:
docker-compose up -d
Debug:
docker-compose ps
docker-compose logs --tail=10
Take a look at the sentry.conf.py file that is part of the official sentry docker image. It gets a bunch of properties from the environment, e.g. SENTRY_DB_NAME and SENTRY_DB_USER. Below is an excerpt from the file:
os.getenv('SENTRY_DB_PASSWORD')
    or os.getenv('MYSQL_ENV_MYSQL_PASSWORD')
    or os.getenv('MYSQL_ENV_MYSQL_ROOT_PASSWORD')
So, as for your question about how to specify the database password: it must be set in environment variables. You can do this by running:
sudo docker run --name some-sentry --link some-mysql:mysql \
-e SENTRY_DB_USER=XXX \
-e SENTRY_DB_PASSWORD=XXX \
-d sentry
As for your issue with the exception, you seem to be missing a config file: Configuration file does not exist at '/.sentry/sentry.conf.py'. That file is copied to /home/user/.sentry/sentry.conf.py inside the container. I am not sure why your sentry install is looking for it at /.sentry/sentry.conf.py; there may be an environment variable or a setting that controls this, or this may just be a bug in the container.
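If the lookup path really is /.sentry/sentry.conf.py, one workaround sketch is to bind-mount a config file to exactly where the container is looking (the local sentry.conf.py is assumed to exist already):

sudo docker run --name some-sentry --link some-mysql:mysql \
  -v $(pwd)/sentry.conf.py:/.sentry/sentry.conf.py \
  -d sentry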
This works for me: https://github.com/slafs/sentry-docker. We don't have to set up the database or anything else. I will learn more about the configuration in detail later.
Here is my docker-compose.yml, with the official image from https://hub.docker.com/_/sentry/:
https://gist.github.com/ebuildy/270f4ef3abd41e1490c1
Run:
docker-compose -p sw up -d
docker exec -ti sw_sentry_1 sentry upgrade
That's it!