I've got a small docker-compose.yaml file that looks like this (it's used to fire up a local test environment):
version: '2'
services:
  ubuntu:
    build:
      context: ./build
      dockerfile: ubuntu.dock
    volumes:
      - ./transfer:/home/
    ports:
      - "60000:22"
  python:
    build:
      context: ./build
      dockerfile: python.dock
    volumes:
      - .:/home/code
    links:
      - mssql
      - ubuntu
  mssql:
    image: "microsoft/mssql-server-linux"
    environment:
      SA_PASSWORD: "somepassword"
      ACCEPT_EULA: "Y"
    ports:
      - "1433:1433"
The issue that I run into is that when I run docker-compose up it fails at this step (which is in the python.dock file):
Step 10/19 : RUN curl https://packages.microsoft.com/keys/microsoft.asc | apt-key add -
---> Running in e4963c91a05b
The error looks like this:
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
curl: (60) server certificate verification failed. CAfile: /etc/ssl/certs/ca-certificates.crt CRLfile: none
More details here: http://curl.haxx.se/docs/sslcerts.html
The part where it fails in the python.dock file looks like this:
# This line fails
RUN curl https://packages.microsoft.com/keys/microsoft.asc | apt-key add -
RUN curl https://packages.microsoft.com/config/ubuntu/16.04/prod.list > /etc/apt/sources.list.d/mssql-release.list
RUN apt-get update -y
I've had issues with curl and Docker in the past, because we use a self-signed cert for decrypting/encrypting traffic at the firewall level (a network requirement). Is there a way for me to specify a self-signed cert for the containers to use?
That would allow curl to reach out and download the required files.
I've tried running something like docker-compose -f docker-compose.yaml --tlscert ~/certs/the-self-signed-ca.pem up but it fails.
How can I do this a bit more programmatically so I can just run docker-compose up?
If I understood it correctly, your firewall intercepts the TLS connection and re-encrypts it with a certificate from a local CA. I think this is generally a bad situation and you should aim for proper end-to-end encryption, but politics aside.
The --tlscert argument passed to docker-compose is used to communicate with the docker daemon, potentially running remotely and exposed on port 2376 by default. In such a scenario, your local docker-compose command orchestrates containers on a remote machine, including building the image.
In your case, the curl command runs within a container. It will use the CAs that are (usually) installed by the base image referenced in python.dock. To use your custom CA, you need to do one of the following:
1. Copy the CA certificate to the correct place within the image, e.g.,
COPY the-self-signed-ca.pem /etc/ssl/certs/
The exact procedure depends on your base image. This makes the certificate available in the container, and it will most likely be used by all subsequent processes. Other developers/users might not be aware that a custom CA is installed and that the connection is not end-to-end secure!
2. Copy the CA certificate to a custom place within the image, e.g.,
COPY the-self-signed-ca.pem /some/path/the-self-signed-ca.pem
and tell curl explicitly about the custom CA using the --cacert argument:
RUN curl --cacert /some/path/the-self-signed-ca.pem https://example.com/
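For Debian/Ubuntu-based images (which python.dock probably builds on), a fuller sketch of option 1 might look like the lines below. The file name and destination path are assumptions, not taken from your setup; update-ca-certificates registers the cert with the system bundle so that curl, apt and most other tools trust it:
# Hypothetical excerpt from python.dock, assuming a Debian/Ubuntu base image
COPY the-self-signed-ca.pem /usr/local/share/ca-certificates/the-self-signed-ca.crt
RUN apt-get update && apt-get install -y ca-certificates \
    && update-ca-certificates
# From here on, curl should accept connections re-signed by the firewall CA
RUN curl https://packages.microsoft.com/keys/microsoft.asc | apt-key add -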
Related
I am trying to find a way to import a realm in Keycloak version 17.0.1 when starting up a docker container (with docker-compose). I want to be able to do this in "start" mode and not "start-dev" mode, as in my experience so far "start-dev" in 17 forces an H2/in-mem database and does not allow me to point to an external db, which I would like to do to more closely resemble dev/prod environments when running locally.
Things I've tried:
1) It appears, according to recent conversations on GitHub (Issue 10216 and Issue 10754, to name a couple), that the environment variable that used to allow this (KEYCLOAK_IMPORT or KC_IMPORT_REALM in some versions) is no longer a trigger for this. In my attempts it also did not work for version 17.0.1.
2) I've also tried appending the following command in my docker-compose setup for keycloak and had no luck (also tried with just "start") - It appears to just ignore the command (no error or anything):
command: ["start-dev", "-Dkeycloak.import=/tmp/my-realm.json"]
3) I tried running the kc.sh command "import" in the Dockerfile (both before and after Entrypoint/start) but got the error: Unmatched arguments from index 1: '/opt/keycloak/bin/kc.sh', 'import', '--file', '/tmp/my-realm.json'
4) I've shifted gears and have tried to see if it is possible to just do it after the container starts (even with manual intervention) just to get some sanity restored. I attempted to use the admin-cli but after quite a few different attempts at different points/endpoints etc. I just get that localhost refuses to connect.
bin/kcadm.sh config credentials --server http://localhost:8080/auth --realm master --user admin --password adminpassword
It responds as shown below when hitting the following ports:
8080: Failed to send request - Connect to localhost:8080 [localhost/127.0.0.1] failed: Connection refused (Connection refused)
8443: Failed to send request - localhost:8443 failed to respond
I am sure there are other ways that I've tried and am forgetting - I've kind of spun my wheels at this point.
My code (largely the same as the latest docs on the Keycloak website):
Dockerfile:
FROM quay.io/keycloak/keycloak:17.0.1 as builder
ENV KC_METRICS_ENABLED=true
ENV KC_FEATURES=token-exchange
ENV KC_DB=postgres
RUN /opt/keycloak/bin/kc.sh build
FROM quay.io/keycloak/keycloak:17.0.1
COPY --from=builder /opt/keycloak/lib/quarkus/ /opt/keycloak/lib/quarkus/
WORKDIR /opt/keycloak
# for demonstration purposes only, please make sure to use proper certificates in production instead
ENV KC_HOSTNAME=localhost
RUN keytool -genkeypair -storepass password -storetype PKCS12 -keyalg RSA -keysize 2048 -dname "CN=server" -alias server -ext "SAN:c=DNS:localhost,IP:127.0.0.1" -keystore conf/server.keystore
ENTRYPOINT ["/opt/keycloak/bin/kc.sh", "start" ]
Docker-compose.yml:
version: "3"
services:
keycloak:
build:
context: .
volumes:
- ./my-realm.json:/tmp/my-realm.json:ro
env_file:
- .env
environment:
KC_DB_URL: ${POSTGRESQL_URL}
KC_DB_USERNAME: ${POSTGRESQL_USER}
KC_DB_PASSWORD: ${POSTGRESQL_PASS}
KEYCLOAK_ADMIN: admin
KEYCLOAK_ADMIN_PASSWORD: adminpassword
ports:
- 8080:8080
- 8443:8443 # <-- I've tried with only 8080 and with only 8443 as well. 8443 appears to be the only that I can get the admin console ui to even work on though.
networks:
- my_net
networks:
my_net:
name: my_net
Any suggestion on how to do this in a programmatic + "dev-opsy" way would be greatly appreciated. I'd really like to get this to work but am confused about how to get past this.
Importing a realm at docker startup through configuration is not supported yet. See https://github.com/keycloak/keycloak/issues/10216. They might release this feature in the next release, v18.
The workaround people have shared in the GitHub thread is to create your own docker image and import the realm from the JSON file when building it:
FROM quay.io/keycloak/keycloak:17.0.1
# Make the realm configuration available for import
COPY realm-and-users.json /opt/keycloak_import/
# Import the realm and user
RUN /opt/keycloak/bin/kc.sh import --file /opt/keycloak_import/realm-and-users.json
# The Keycloak server is configured to listen on port 8080
EXPOSE 8080
EXPOSE 8443
# Start the server (the realm was already imported at build time)
CMD ["start-dev"]
As @tboom said, it was not yet supported by Keycloak 17.x. But it is now supported by Keycloak 18.x using the --import-realm option:
bin/kc.[sh|bat] [start|start-dev] --import-realm
This feature does not work as it did before. The JSON file path must not be specified anymore: the JSON file only has to be copied into the <KEYCLOAK_DIR>/data/import directory (multiple JSON files are supported). Note that the import operation is skipped if the realm already exists, so incremental updates are not possible anymore (at least for the time being).
This feature is documented at https://www.keycloak.org/server/importExport#_importing_a_realm_during_startup.
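Applied to a compose setup like the one in the question, a minimal sketch might look like this. The paths are assumptions based on the default Keycloak install dir /opt/keycloak, and it assumes the image's ENTRYPOINT is plain /opt/keycloak/bin/kc.sh (as in the stock image), not one that already bakes in "start" like the Dockerfile above:
services:
  keycloak:
    build:
      context: .
    volumes:
      # Keycloak 18+ imports every JSON file found in data/import when --import-realm is passed
      - ./my-realm.json:/opt/keycloak/data/import/my-realm.json:ro
    command: ["start", "--import-realm"]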
I have a problem adding authentication, due to new requirements, while running Apache NiFi (NiFi) without SSL in a container.
The image version is apache/nifi:1.13.0
It's said that SSL is unconditionally required to add authentication, and it's recommended to use the tls-toolkit included in the NiFi image to add SSL. I worked through the following process:
Removed the environment variable nifi.web.http.port used for HTTP communication, and started the standalone-mode container with nifi.web.https.port=9443:
docker-compose up
Entered the container and ran the tls-toolkit script from the NiFi Toolkit:
cd /opt/nifi/nifi-toolkit-1.13.0/bin &&\
sh tls-toolkit.sh standalone \
-n 'localhost' \
-C 'CN=yangeok,OU=nifi' \
-O -o $NIFI_HOME/conf
Attempt 1
Organized the files in the directory $NIFI_HOME/conf. Three files, keystore.jks, truststore.jks, and nifi.properties, were created in the folder localhost, named after the value of the -n option of the tls-toolkit script.
cd $NIFI_HOME/conf &&
cp localhost/*.jks .
The file $NIFI_HOME/conf/localhost/nifi.properties was not copied over wholesale; only the following properties were carried over into $NIFI_HOME/conf/nifi.properties:
nifi.web.http.host=
nifi.web.http.port=
nifi.web.https.host=localhost
nifi.web.https.port=9443
Restarted container
docker-compose restart
The container died with the error log below:
Only one of the HTTP and HTTPS connectors can be configured at one time
Attempt 2
After executing the tls-toolkit script, all files were overwritten, including nifi.properties:
cd $NIFI_HOME/conf &&
cp localhost/* .
Restarted container
docker-compose restart
The container died with the same error log
Hint
The dead container's volume was still accessible, so I copied and checked nifi.properties; after running docker-compose up or restart, it changed as follows.
The part I overwrote or modified:
nifi.web.http.host=
nifi.web.http.port=
nifi.web.http.network.interface.default=
#############################################
nifi.web.https.host=localhost
nifi.web.https.port=9443
The changed part after re-executing the container:
nifi.web.http.host=a8e283ab9421
nifi.web.http.port=9443
nifi.web.http.network.interface.default=
#############################################
nifi.web.https.host=a8e283ab9421
nifi.web.https.port=9443
I'd like to know how to run the container with nifi.web.http.host and nifi.web.http.port left empty. The docker-compose.yml file is as follows:
version: '3'
services:
  nifi:
    build:
      context: .
      args:
        NIFI_VERSION: ${NIFI_VERSION}
    container_name: nifi
    user: root
    restart: unless-stopped
    network_mode: bridge
    ports:
      - ${NIFI_HTTP_PORT}:8080/tcp
      - ${NIFI_HTTPS_PORT}:9443/tcp
    volumes:
      - ./drivers:/opt/nifi/nifi-current/drivers
      - ./templates:/opt/nifi/nifi-current/templates
      - ./data:/opt/nifi/nifi-current/data
    environment:
      TZ: 'Asia/Seoul'
      ########## JVM ##########
      NIFI_JVM_HEAP_INIT: ${NIFI_HEAP_INIT} # The initial JVM heap size.
      NIFI_JVM_HEAP_MAX: ${NIFI_HEAP_MAX} # The maximum JVM heap size.
      ########## Web ##########
      # NIFI_WEB_HTTP_HOST: ${NIFI_HTTP_HOST} # nifi.web.http.host
      # NIFI_WEB_HTTP_PORT: ${NIFI_HTTP_PORT} # nifi.web.http.port
      NIFI_WEB_HTTPS_HOST: ${NIFI_HTTPS_HOST} # nifi.web.https.host
      NIFI_WEB_HTTP_PORT: ${NIFI_HTTPS_PORT} # nifi.web.https.port
Thank you
I'm trying to set up a Django project with docker + nginx following the tutorial Nginx and Let's Encrypt with Docker in Less Than 5 Minutes.
The issue is when I run the script init-letsencrypt.sh I end up with failed challenges.
Here is the content of my script:
#!/bin/bash
if ! [ -x "$(command -v docker-compose)" ]; then
  echo 'Error: docker-compose is not installed.' >&2
  exit 1
fi
domains=(xxxx.yyyy.net www.xxxx.yyyy.net)
rsa_key_size=4096
data_path="./data/certbot"
email="myemail#example.com" # Adding a valid address is strongly recommended
staging=1 # Set to 1 if you're testing your setup to avoid hitting request limits
if [ -d "$data_path" ]; then
read -p "Existing data found for $domains. Continue and replace existing certificate? (y/N) " decision
if [ "$decision" != "Y" ] && [ "$decision" != "y" ]; then
exit
fi
fi
if [ ! -e "$data_path/conf/options-ssl-nginx.conf" ] || [ ! -e "$data_path/conf/ssl-dhparams.pem" ]; then
  echo "### Downloading recommended TLS parameters ..."
  mkdir -p "$data_path/conf/"
  curl -s https://raw.githubusercontent.com/certbot/certbot/master/certbot-nginx/certbot_nginx/_internal/tls_configs/options-ssl-nginx.conf > "$data_path/conf/options-ssl-nginx.conf"
  curl -s https://raw.githubusercontent.com/certbot/certbot/master/certbot/certbot/ssl-dhparams.pem > "$data_path/conf/ssl-dhparams.pem"
  echo
fi
echo "### Creating dummy certificate for $domains ..."
path="/etc/letsencrypt/live/$domains"
mkdir -p "$data_path/conf/live/$domains"
docker-compose -f docker-compose-deploy.yml run --rm --entrypoint "\
openssl req -x509 -nodes -newkey rsa:$rsa_key_size -days 1\
-keyout '$path/privkey.pem' \
-out '$path/fullchain.pem' \
-subj '/CN=localhost'" certbot
echo
echo "### Starting nginx ..."
docker-compose -f docker-compose-deploy.yml up --force-recreate -d proxy
echo
echo "### Deleting dummy certificate for $domains ..."
docker-compose -f docker-compose-deploy.yml run --rm --entrypoint "\
rm -Rf /etc/letsencrypt/live/$domains && \
rm -Rf /etc/letsencrypt/archive/$domains && \
rm -Rf /etc/letsencrypt/renewal/$domains.conf" certbot
echo
echo "### Requesting Let's Encrypt certificate for $domains ..."
# Join $domains to -d args
domain_args=""
for domain in "${domains[@]}"; do
  domain_args="$domain_args -d $domain"
done
# Select appropriate email arg
case "$email" in
"") email_arg="--register-unsafely-without-email" ;;
*) email_arg="--email $email" ;;
esac
# Enable staging mode if needed
if [ $staging != "0" ]; then staging_arg="--staging"; fi
docker-compose -f docker-compose-deploy.yml run --rm --entrypoint "\
certbot -v certonly --webroot -w /var/www/certbot \
$staging_arg \
$email_arg \
$domain_args \
--rsa-key-size $rsa_key_size \
--agree-tos \
--force-renewal" certbot
echo
echo "### Reloading nginx ..."
docker-compose -f docker-compose-deploy.yml exec proxy nginx -s reload
And my nginx configuration file:
server {
    listen 80;
    server_name xxxx.yyyy.net;

    location ^~ /.well-known/acme-challenge/ {
        root /var/www/certbot;
    }

    location / {
        return 301 https://$server_name$request_uri;
    }
}

server {
    listen 443 ssl;
    server_name xxxx.yyyy.net;

    ssl_certificate /etc/letsencrypt/live/xxxx.yyyy.net/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/xxxx.yyyy.net/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;

    location /static {
        alias /vol/static;
    }

    location / {
        uwsgi_pass web:8000;
        include /etc/nginx/uwsgi_params;
    }
}
The output of the part that fails:
Requesting a certificate for xxxx.yyyy.net and www.xxxx.yyyy.net
Performing the following challenges:
http-01 challenge for xxxx.yyyy.net
http-01 challenge for www.xxxx.yyyy.net
Using the webroot path /var/www/certbot for all unmatched domains.
Waiting for verification...
Challenge failed for domain xxxx.yyyy.net
Challenge failed for domain www.xxxx.yyyy.net
http-01 challenge for xxxx.yyyy.net
http-01 challenge for www.xxxx.yyyy.net
Certbot failed to authenticate some domains (authenticator: webroot). The Certificate Authority reported these problems:
Domain: xxxx.yyyy.net
Type: connection
Detail: Fetching http://xxxx.yyyy.net/.well-known/acme-challenge/XJw9w39lRSSbPf-4tb45RLtTnSbjlUEi1f0Cqwsmt-8: Connection refused
Domain: www.xxxx.yyyy.net
Type: connection
Detail: Fetching http://www.xxxx.yyyy.net/.well-known/acme-challenge/b47s4WJARyOTS63oFkaji2nP7oOhiLx5hHp4kO9dCGI: Connection refused
Hint: The Certificate Authority failed to download the temporary challenge files created by Certbot. Ensure that the listed domains serve their content from the provided --webroot-path/-w and that files created there can be downloaded from the internet.
Cleaning up challenges
Some challenges have failed.
Ask for help or search for solutions at https://community.letsencrypt.org. See the logfile /var/log/letsencrypt/letsencrypt.log or re-run Certbot with -v for more details.
ERROR: 1
One of the comments said:
Check the certbot commit
But there's no further explanation as to how to solve it.
The problem is the nginx configuration file. The container fails to start up correctly because of the missing certificate files. I commented out the SSL server portion, rebuilt the image and executed the script again. Everything worked out just fine. After the certificates were generated I just uncommented the SSL configuration, rebuilt the image and composed up the services.
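For illustration, the temporary bootstrap configuration might look roughly like this while the script runs (a sketch based on the config shown in the question; the SSL server block is restored once the certificates exist):
server {
    listen 80;
    server_name xxxx.yyyy.net;

    # Serve the ACME challenge files so certbot can validate the domain
    location ^~ /.well-known/acme-challenge/ {
        root /var/www/certbot;
    }

    location / {
        return 301 https://$server_name$request_uri;
    }
}

# server {
#     listen 443 ssl;
#     ...   # uncomment again after init-letsencrypt.sh has generated the certificates
# }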
Had the same issue;
The solution was ensuring I defined the volume blocks in both the nginx and certbot services correctly.
# ...other services
nginx:
  container_name: nginx
  image: nginx:1.13
  ports:
    - "80:80"
    - "443:443"
  volumes:
    - ./config/nginx/conf.d:/etc/nginx/conf.d
    - ./data/certbot/conf:/etc/letsencrypt
    - ./data/certbot/www:/var/www/certbot
certbot:
  container_name: certbot
  image: certbot/certbot
  volumes:
    - ./data/certbot/conf:/etc/letsencrypt
    - ./data/certbot/www:/var/www/certbot
Also if you are using EC2 as your cloud server don't forget to add inbound rules for ports 80 and 443.
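If you manage the security group from the AWS CLI rather than the console, that might look roughly like the following (the security-group ID is a placeholder):
# Allow inbound HTTP and HTTPS on the instance's security group
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 80 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 443 --cidr 0.0.0.0/0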
A More Beginner-friendly Version!
I can confirm that the first answer that was posted (remove all lines regarding SSL certificate registration/HTTPS redirection when first running the init-letsencrypt.sh) works perfectly!
The lack of documentation is really annoying on this one, and I had to find the answer deep in the community section. Even for someone whose first language isn't English this answer would be really difficult to find. I wish they documented more on this matter. :(
So here are some of the steps that you have to follow to resolve this issue...
Basically gotta remove all the HTTPS SSL-related stuff from both the docker-compose.yml and the nginx.conf / nginx/app.conf file.
Then run the init-letsencrypt.sh script.
Then add the HTTPS SSL-related stuff back to both the docker-compose.yml and the nginx.conf / nginx/app.conf file. (If you're on Git, just revert your commits)
Then run docker-compose up -d --build. Then run the init-letsencrypt.sh script again.
Hope this helps, and wish y'all the best of luck!!
P/S: The back-end stack I used was Flask + Celery (allows Flask to run heavy tasks asynchronously) + Redis (a bridge/middleman between Flask and Celery) + NGINX + Certbot, all running inside individual docker containers, chained using docker-compose. I deployed it on a DigitalOcean Droplet VPS. (A VPS is essentially a computer OS that runs on the internet, 24/7.)
For newbies, Docker: Think of Python's virtualenv or Node.js's localized node_modules but for OS-level/C-based dependencies. Like those that can be only installed through package managers such as Linux's apt-get install, macOS's brew install, or Windows's choco install.
Docker Compose: e.g. the client and the server may have different OS-level dependencies and you want to separate them so they don't conflict with each other. You can only allow certain communications between them by "chaining" them through docker-compose.
What's NGINX? It's a reverse-proxy solution; TLDR: you can connect the domain/URL you purchased and direct it to your web app. Let's Encrypt allows the server to have that green chain lock thing next to your address for secure communication.
Also important thing to note: Do NOT install NGINX or Redis OUTSIDE of the Docker container on the Linux terminal! That will cause conflicts (ports 443 and 80 already being occupied). 443 is for HTTPS, 80 is for HTTP.
These are the tutorial I used for setting up my tech stack:
https://testdriven.io/blog/dockerizing-flask-with-postgres-gunicorn-and-nginx/
https://pentacent.medium.com/nginx-and-lets-encrypt-with-docker-in-less-than-5-minutes-b4b8a60d3a71
I can also share my docker-compose.yml file below for your reference:
version: '3.8'
services:
  web:
    build: .
    image: web
    container_name: web
    command: gunicorn --worker-class=gevent --worker-connections=1000 --workers=5 api:app --bind 0.0.0.0:5000
    volumes:
      - .:/usr/src/app
    environment:
      - CELERY_BROKER_URL=redis://redis:6379/0
      - CELERY_RESULT_BACKEND=redis://redis:6379/0
    depends_on:
      - redis
    expose:
      - 5000
  worker:
    build: .
    command: celery --app tasks.celery worker --loglevel=info
    volumes:
      - .:/usr/src/app
    environment:
      - CELERY_BROKER_URL=redis://redis:6379/0
      - CELERY_RESULT_BACKEND=redis://redis:6379/0
    depends_on:
      - web
      - redis
  nginx:
    image: nginx:1.15-alpine
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./server/nginx:/etc/nginx/conf.d
      - ./server/certbot/conf:/etc/letsencrypt
      - ./server/certbot/www:/var/www/certbot
    command: "/bin/sh -c 'while :; do sleep 6h & wait $${!}; nginx -s reload; done & nginx -g \"daemon off;\"'"
    depends_on:
      - web
  certbot:
    image: certbot/certbot
    volumes:
      - ./server/certbot/conf:/etc/letsencrypt
      - ./server/certbot/www:/var/www/certbot
    entrypoint: "/bin/sh -c 'trap exit TERM; while :; do certbot renew; sleep 12h & wait $${!}; done;'"
  redis:
    image: redis:6-alpine
    restart: always
    ports:
      - 6379:6379
    # HOW TO SET REDIS PASSWORD VIA ENVIRONMENT VARIABLE
    # https://stackoverflow.com/questions/68461172/docker-compose-redis-password-via-environment-variable
  dashboard:
    build: .
    command: celery --app tasks.celery flower --port=5555 --broker=redis://redis:6379/0
    ports:
      - 5556:5555
    environment:
      - CELERY_BROKER_URL=redis://redis:6379/0
      - CELERY_RESULT_BACKEND=redis://redis:6379/0
    depends_on:
      - web
      - redis
      - worker
Also sharing my Dockerfile JUST IN CASE,
# FOR FRONT-END DEPLOYMENT... (REACT)
FROM node:16-alpine as build-step
WORKDIR /app
ENV PATH /app/web/node_modules/.bin:$PATH
COPY web ./web
WORKDIR /app/web
RUN yarn install
RUN yarn build
# FOR BACK-END DEPLOYMENT... (FLASK)
FROM python:3.10.4-slim
WORKDIR /
# Don't forget "--from"! It acts as a bridge that connects two separate stages
COPY --from=build-step app ./app
WORKDIR /app
RUN apt-get update && apt-get install -y python3-pip python3-dev mesa-utils libgl1-mesa-glx libglib2.0-0 build-essential libssl-dev libffi-dev redis-server
COPY server ./server
WORKDIR /app/server
RUN pip3 install -r ./requirements.txt
# Pretty much pass everything in the root folder except for the client folder, as we do NOT want to overwrite the pre-generated client folder that is already in the ./app folder
# THIS IS CALLED MULTI-STAGE BUILDING IN DOCKER
EXPOSE 5000
All the notes I made while resolving this problem:
'''
TIPS & TRICKS
-------------
UPDATED ON: 2023-02-11
LAST EDITED BY:
WONMO "JOHN" SEONG,
LEAD DEV. AND THE CEO OF HAVIT
----------------------------------------------
HOW TO INSTALL DOCKER-COMPOSE ON DIGITALOCEAN VPS:
https://www.digitalocean.com/community/tutorials/how-to-install-and-use-docker-compose-on-ubuntu-22-04
DOCKERIZE FLASK + CELERY + REDIS APPLICATION WITH DOCKER-COMPOSE:
https://nickjanetakis.com/blog/dockerize-a-flask-celery-and-redis-application-with-docker-compose
https://testdriven.io/blog/flask-and-celery/ <-- PRIMARILY USED THIS TUTORIAL
CELERY VS. GUNICORN WORKERS:
https://stackoverflow.com/questions/24317917/difference-between-celery-and-gunicorn-workers
1. Gunicorn solves concurrency of serving HTTP requests - this is "online" code where each request triggers a Django view, which returns a response. Any code that runs in a view will increase the time it takes to get a response to the user, making the website seem slow. So long running tasks should not go in Django views for that reason.
2. Celery is for running code "offline", where you don't need to return an HTTP response to a user. A Celery task might be triggered by some code inside a Django view, but it could also be triggered by another Celery task, or run on a schedule. Celery uses the model of a worker pulling tasks off of a queue, there are a few Django compatible task frameworks that do this. I give a write up of this architecture here.
CELERY, GUNICORN, AND SUPERVISOR:
https://medium.com/sightwave-software/setting-up-nginx-gunicorn-celery-redis-supervisor-and-postgres-with-django-to-run-your-python-73c8a1c8c1ba
DEPLOY GITHUB REPO ON DIGITALOCEAN VPS USING SSH KEYS:
https://medium.com/swlh/how-to-deploy-your-application-to-digital-ocean-using-github-actions-and-save-up-on-ci-cd-costs-74b7315facc2
COMMANDS TO RUN ON VPS TO CLONE GITHUB REPO (WORKS ON BOTH PRIVATE AND PUBLIC REPOS):
1. Login as root
2. Set up your credentials (GitHub SSH-related) and run the following commands:
- apt-get update
- apt-get install git
- mkdir ~/github && cd ~/github
- git clone git@github.com:wonmor/HAVIT-Central.git
3. To get the latest changes, run git fetch origin
HOW TO RUN DOCKER-COMPOSE ON VPS:
https://www.digitalocean.com/community/tutorials/how-to-install-and-use-docker-compose-on-ubuntu-22-04
1. Login as root
2. Run the following commands:
- cd ~/github/HAVIT-Central
- docker compose up --build -d // builds and runs the containers in detached mode
OR docker compose up --build -d --remove-orphans // builds and runs the containers in detached mode and removes orphan containers
- docker compose ps // lists all running containers in Docker engine.
3. To stop the containers, run:
- docker-compose down
HOW TO SET UP NGINX ON UBUNTU VPS TO PROXY PASS TO GUNICORN ON DIGITALOCEAN:
https://www.datanovia.com/en/lessons/digitalocean-initial-ubuntu-server-setup/
https://www.digitalocean.com/community/tutorials/how-to-secure-nginx-with-let-s-encrypt-on-ubuntu-22-04
https://www.datanovia.com/en/lessons/digitalocean-how-to-install-nginx-and-ssl/
CAPROVER CLEAN/REMOVE ALL PREVIOUS DEPLOYMENTS:
docker container prune --force
docker image prune --all
FORCE MERGE USING GIT:
git reset --hard origin/main
NGINX - REDIRECT TO DOCKER CONTAINER:
https://gilyes.com/docker-nginx-letsencrypt/
https://github.com/nginx-proxy/acme-companion
https://github.com/nginx-proxy/acme-companion/wiki/Docker-Compose
https://github.com/evertramos/nginx-proxy-automation
https://github.com/buchdag/letsencrypt-nginx-proxy-companion-compose
https://testdriven.io/blog/dockerizing-flask-with-postgres-gunicorn-and-nginx/
https://pentacent.medium.com/nginx-and-lets-encrypt-with-docker-in-less-than-5-minutes-b4b8a60d3a71 <--- THIS IS THE BEST TUTORIAL
Simply run docker-compose up and enjoy your HTTPS-secured website or app.
Then run chmod +x init-letsencrypt.sh and sudo ./init-letsencrypt.sh.
VVIP: HOW TO RUN THIS APP ON VPS:
1. Login as root, run sudo chmod +x init_letsencrypt.sh
2. Now for the bit… that tends to go wrong. Navigate into your remote project folder, and run the initialization script (Run ./<Script-Name>.sh on Terminal). First, docker will build the images, and then run through the script step-by-step as described above. Now, this worked first time for me while putting together the tutorial, but in the past it has taken me hours to get everything set up correctly. The main problem was usually the locations of files: the script would save it to some directory, which was mapped to a volume that nginx was incorrectly mapped to, and so on. If you end up needing to debug, you can run the commands in the script yourself, substituting variables as you go. Pay close attention to the logs — nginx is often quite good at telling you what it’s missing.
3. If all goes to plan, you’ll see a nice little printout from Lets Encrypt and Certbot saying “Congratulations” and your script will exit successfully.
HOW TO OPEN/ALLOW PORTS ON DIGITALOCEAN:
https://www.digitalocean.com/community/tutorials/opening-a-port-on-linux
sudo ufw allow <PORT_NUMBER>
WHAT ARE DNS RECORDS?
https://docs.digitalocean.com/products/networking/dns/how-to/manage-records/
PS: The higher the TTL, the longer it takes for the DNS record to update.
But it will be cached for longer, which means that there will be less load on the DNS server.
TIP: MAKE SURE YOU SET UP THE CUSTOM NAMESPACES FOR DIGITALOCEAN ON GOOGLE DOMAINS:
https://docs.digitalocean.com/tutorials/dns-registrars/
DOCKER SWARM VS. DOCKER COMPOSE:
The difference between Docker Swarm and Docker Compose is that Compose is used for configuring multiple containers in the same host. Docker Swarm is different in that it is a container orchestration tool. This means that Docker Swarm lets you connect containers to multiple hosts similar to Kubernetes.
Cannot load certificate /etc/letsencrypt/live/havit.space/fullchain.pem: BIO_new_file() failed (SSL: error:02001002:system library:fopen:No such file or directory FIX:
https://community.letsencrypt.org/t/lets-encrypt-with-nginx-i-got-error-ssl-error-02001002-system-library-fopen-no-such-file-or-directory-fopen-etc-letsencrypt-live-xxx-com-fullchain-pem-r/20990/5
RUNNING MULTIPLE DOCKER COMPOSE FILES:
https://stackoverflow.com/questions/43957259/run-multiple-docker-compose
nginx: [emerg] open() "/etc/letsencrypt/options-ssl-nginx.conf" failed (2: No such file or directory) in /etc/nginx/conf.d/app.conf:20 FIX:
https://stackoverflow.com/questions/64940480/nginx-letsencrypt-error-etc-letsencrypt-options-ssl-nginx-conf-no-such-file-o
VVVIP: RESOLVE NGINX + DOCKER + LETSENCRYPT ISSUES!
https://stackoverflow.com/questions/68449947/certbot-failing-acme-challenge-connection-refused
Basically gotta remove all the HTTPS SSL-related stuff from both the docker-compose.yml and the nginx.conf file.
Then run the init-letsencrypt.sh script. Then add the HTTPS SSL-related stuff back to both the docker-compose.yml and the nginx.conf file.
Then run docker-compose up -d --build. Then run the init-letsencrypt.sh script again.
'''
I am trying to deploy keycloak using docker image (https://hub.docker.com/r/jboss/keycloak/ version 4.5.0-Final) and facing an issue with setting up SSL.
According to the docs
The Keycloak image allows you to specify both a private key and a certificate for serving HTTPS. In that case you need to provide two files:
tls.crt - a certificate
tls.key - a private key
Those files need to be mounted in the /etc/x509/https directory. The image will automatically convert them into a Java keystore and reconfigure Wildfly to use it.
I followed the given steps and provided the volume mount with a folder containing the necessary files (tls.crt and tls.key), but I am facing issues with the SSL handshake, getting an
ERR_SSL_VERSION_OR_CIPHER_MISMATCH
error that blocks Keycloak from loading in the browser when trying to access it.
I have used letsencrypt to generate pem files and used openssl to create .crt and .key files.
I also tried using just openssl to create those files to narrow down the issue, and the behavior is the same (some additional info in case it matters):
By default, when I simply specify just the port binding -p 8443:8443 without specifying the cert volume mount /etc/x509/https, the Keycloak server generates a self-signed certificate and I have no issue viewing the app in the browser.
I guess this might be more of a certificate creation issue than anything specific to Keycloak, but I'm unsure how to get this working.
Any help is appreciated
I also faced the issue of getting an ERR_SSL_VERSION_OR_CIPHER_MISMATCH error, using the jboss/keycloak Docker image and free certificates from letsencrypt, even after considering the advice from the other comments. Now I have a working (and quite easy) setup, which might also help you.
1) Generate letsencrypt certificate
At first, I generated my letsencrypt certificate for domain sub.example.com using the certbot. You can find detailed instructions and alternative ways to gain a certificate at https://certbot.eff.org/ and the user guide at https://certbot.eff.org/docs/using.html.
$ sudo certbot certonly --standalone
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Plugins selected: Authenticator standalone, Installer None
Please enter in your domain name(s) (comma and/or space separated) (Enter 'c' to cancel): sub.example.com
Obtaining a new certificate
Performing the following challenges:
http-01 challenge for sub.example.com
Waiting for verification...
Cleaning up challenges
IMPORTANT NOTES:
- Congratulations! Your certificate and chain have been saved at:
/etc/letsencrypt/live/sub.example.com/fullchain.pem
Your key file has been saved at:
/etc/letsencrypt/live/sub.example.com/privkey.pem
Your cert will expire on 2020-01-27. To obtain a new or tweaked
version of this certificate in the future, simply run certbot
again. To non-interactively renew *all* of your certificates, run
"certbot renew"
2) Prepare docker-compose environment
I use docker-compose to run keycloak via docker. The config and data files are stored in path /srv/docker/keycloak/.
Folder config contains the docker-compose.yml
Folder data/certs contains the certificates I generated via letsencrypt
Folder data/keycloak_db is mapped to the database container to make its data persistent.
Put the certificate files in the right path
When I first had issues using the original letsencrypt certificates for keycloak, I tried the workaround of converting the certificates to another format, as mentioned in the comments of the former answers, which also failed. Eventually, I realized that my problem was caused by the permissions set on the mapped certificate files.
So, what worked for me was just to copy and rename the files provided by letsencrypt, and mount them into the container.
$ cp /etc/letsencrypt/live/sub.example.com/fullchain.pem /srv/docker/keycloak/data/certs/tls.crt
$ cp /etc/letsencrypt/live/sub.example.com/privkey.pem /srv/docker/keycloak/data/certs/tls.key
$ chmod 755 /srv/docker/keycloak/data/certs/
$ chmod 604 /srv/docker/keycloak/data/certs/*
docker-compose.yml
In my case, I needed to use the host network of my docker host. This is not best practice and should not be required for your case. Please find information about configuration parameters in the documentation at hub.docker.com/r/jboss/keycloak/.
version: '3.7'
networks:
  default:
    external:
      name: host
services:
  keycloak:
    container_name: keycloak_app
    image: jboss/keycloak
    depends_on:
      - mariadb
    restart: always
    ports:
      - "8080:8080"
      - "8443:8443"
    volumes:
      - "/srv/docker/keycloak/data/certs/:/etc/x509/https" # map certificates to container
    environment:
      KEYCLOAK_USER: <user>
      KEYCLOAK_PASSWORD: <pw>
      KEYCLOAK_HTTP_PORT: 8080
      KEYCLOAK_HTTPS_PORT: 8443
      KEYCLOAK_HOSTNAME: sub.example.com
      DB_VENDOR: mariadb
      DB_ADDR: localhost
      DB_USER: keycloak
      DB_PASSWORD: <pw>
    network_mode: host
  mariadb:
    container_name: keycloak_db
    image: mariadb
    volumes:
      - "/srv/docker/keycloak/data/keycloak_db:/var/lib/mysql"
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: <pw>
      MYSQL_DATABASE: keycloak
      MYSQL_USER: keycloak
      MYSQL_PASSWORD: <pw>
    network_mode: host
Final directory setup
This is what my final file and folder setup looks like.
$ cd /srv/docker/keycloak/
$ tree
.
├── config
│ └── docker-compose.yml
└── data
├── certs
│ ├── tls.crt
│ └── tls.key
└── keycloak_db
Start container
Finally, I was able to start my software using docker-compose.
$ cd /srv/docker/keycloak/config/
$ sudo docker-compose up -d
We can doublecheck the mounted certificates within the container.
## open internal shell of keycloak container
$ sudo docker exec -it keycloak_app /bin/bash
## open directory of certificates
$ cd /etc/x509/https/
$ ll
-rw----r-- 1 root root 3586 Oct 30 14:21 tls.crt
-rw----r-- 1 root root 1708 Oct 30 14:20 tls.key
Considering the setup from the docker-compose.yml, keycloak is now available at https://sub.example.com:8443
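To doublecheck from the host that the Let's Encrypt chain (and not the fallback self-signed certificate) is being served, something like the following can help; the hostname is the example one used above, and /auth is the base path served by the jboss/keycloak image:
# Inspect the presented certificate's subject and issuer
openssl s_client -connect sub.example.com:8443 -servername sub.example.com </dev/null 2>/dev/null | openssl x509 -noout -subject -issuer

# Or simply fetch the welcome page headers over HTTPS
curl -vI https://sub.example.com:8443/auth/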
After some research, the following method worked (for self-signed certs; I still have to figure out how to do it with a letsencrypt CA for prod):
generate a self-signed cert using the keytool
keytool -genkey -alias localhost -keyalg RSA -keystore keycloak.jks -validity 10950
convert .jks to .p12
keytool -importkeystore -srckeystore keycloak.jks -destkeystore keycloak.p12 -deststoretype PKCS12
generate .crt from .p12 keystore
openssl pkcs12 -in keycloak.p12 -nokeys -out tls.crt
generate .key from .p12 keystore
openssl pkcs12 -in keycloak.p12 -nocerts -nodes -out tls.key
Then use the tls.crt and tls.key for volume mount /etc/x509/https
Also, in the application being secured, specify the following properties in the keycloak.json file:
"truststore" : "path/to/keycloak.jks",
"truststore-password" : "<jks-pwd>",
For anyone who is trying to run Keycloak with a passphrase protected private key file:
Keycloak runs the script /opt/jboss/tools/x509.sh to generate the keystore based on the provided files in /etc/x509/https as described in https://hub.docker.com/r/jboss/keycloak - Setting up TLS(SSL).
This script takes no passphrase into account unfortunately. But with a little modification at Docker build time you can fix it by yourself:
Within your Dockerfile add:
RUN sed -i -e 's/-out "${KEYSTORES_STORAGE}\/${PKCS12_KEYSTORE_FILE}" \\/-out "${KEYSTORES_STORAGE}\/${PKCS12_KEYSTORE_FILE}" \\\n -passin pass:"${SERVER_KEYSTORE_PASSWORD}" \\/' /opt/jboss/tools/x509.sh
This command modifies the script and appends the parameter to pass in the passphrase
-passin pass:"${SERVER_KEYSTORE_PASSWORD}"
The value of the parameter is an environment variable which you are free to set: SERVER_KEYSTORE_PASSWORD
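For example, the variable could be supplied from docker-compose when running the patched image; a minimal sketch, assuming the image is built from the Dockerfile containing the sed patch above (TLS_KEY_PASSPHRASE and the ./certs path are illustrative, not part of the original setup):
services:
  keycloak:
    build: .   # Dockerfile with the sed patch for /opt/jboss/tools/x509.sh
    environment:
      # Passphrase of the mounted tls.key, picked up by the patched x509.sh
      SERVER_KEYSTORE_PASSWORD: ${TLS_KEY_PASSPHRASE}
    volumes:
      - ./certs/:/etc/x509/https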
Tested with Keycloak 9.0.0
OS: Centos 7
Docker version 17.03.0-ce, build 60ccb22
docker-compose version 1.11.2, build dfed245
I need to map a large range of ports (40000-60000/udp) for a RED5Pro server, but I always get this error when creating the image:
ERROR: for red5pro UnixHTTPConnectionPool(host='localhost', port=None): Read timed out. (read timeout=60)
ERROR: An HTTP request took too long to complete. Retry with --verbose to obtain debug information.
If you encounter this issue regularly because of slow network conditions, consider setting COMPOSE_HTTP_TIMEOUT to a higher value (current value: 60).
Here is the compose file:
version: '2'
services:
  red5pro:
    build: ./red5pro/
    container_name: red5pro
    ports:
      - "5080:5080"
      - "1935:1935"
      - "8554:8554"
      - "6262:6262"
      - "8081:8081"
      - "40000-60000:40000-60000/udp"
and the Dockerfile
FROM java:8
ENV DEBIAN_FRONTEND noninteractive
RUN apt-get update && apt-get install -y \
libva1 \
libva-drm1 \
libva-x11-1 \
libvdpau1
WORKDIR /opt/red5pro
COPY / /opt/red5pro/
ENTRYPOINT ["sh","/opt/red5pro/red5.sh"]
On Mac, what I did was go to the Docker icon (upper right corner) and click Restart. Maybe it's not the best solution, but it's the fastest one.
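Alternatively, since the error message itself suggests raising COMPOSE_HTTP_TIMEOUT, you can try bumping the timeout before bringing the stack up (300 seconds is an arbitrary example value):
# Give docker-compose more time to wait for the daemon while the large port range is set up
export COMPOSE_HTTP_TIMEOUT=300
docker-compose up -d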