Docker-in-Docker issues connecting to an internal container network (Anchore Engine)

I am having issues trying to connect to a docker-compose network from inside a container. These are the files I am working with; the whole thing runs when I execute ./run.sh.
Dockerfile:
FROM docker/compose:latest
WORKDIR .
# EXPOSE 8228
RUN apk update
RUN apk add py-pip
RUN apk add jq
RUN pip install anchorecli
COPY dockertest.sh ./dockertest.sh
COPY docker-compose.yaml docker-compose.yaml
CMD ["./dockertest.sh"]
docker-compose.yaml:
services:
  # The primary API endpoint service
  engine-api:
    image: anchore/anchore-engine:v0.6.0
    depends_on:
      - anchore-db
      - engine-catalog
    #volumes:
    #- ./config-engine.yaml:/config/config.yaml:z
    ports:
      - "8228:8228"
  ..................
  ## A NUMBER OF OTHER CONTAINERS THAT ANCHORE-ENGINE USES ##
  ..................
networks:
  default:
    external:
      name: anchore-net
dockertest.sh:
echo "------------- INSTALL ANCHORE CLI ---------------------"
engineid=`docker ps | grep engine-api | cut -f 1 -d ' '`
engine_ip=`docker inspect $engineid | jq -r '.[0].NetworkSettings.Networks."cws-anchore-net".IPAddress'`
export ANCHORE_CLI_URL=http://$engine_ip:8228/v1
export ANCHORE_CLI_USER='user'
export ANCHORE_CLI_PASS='pass'
echo "System status"
anchore-cli --debug system status #This line throws error (see below)
run.sh:
#!/bin/bash
docker build . -t anchore-runner
docker network create anchore-net
docker-compose up -d
docker run --network="anchore-net" -v //var/run/docker.sock:/var/run/docker.sock anchore-runner
#docker network rm anchore-net
Error Message:
System status
INFO:anchorecli.clients.apiexternal:As Account = None
DEBUG:urllib3.connectionpool:Starting new HTTP connection (1): 172.19.0.6:8228
Error: could not access anchore service (user=user url=http://172.19.0.6:8228/v1): HTTPConnectionPool(host='172.19.0.6', port=8228): Max retries exceeded with url: /v1
(Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused',))
Steps:
run.sh builds container image and creates network anchore-net
the container has an entrypoint script, which does multiple things
firstly, it brings up the docker-compose network as detached FROM inside the container
secondly, it installs anchore-cli so I can run commands against the container network
lastly, it attempts to get a system status from anchore-engine (the docker-compose network), but that's where I am running into HTTP connection issues.
I am dynamically getting the IP of the anchore-engine API endpoint container and pointing the request URL at it. I have also tried passing those values on the command line, such as:
anchore-cli --u user --p pass --url http://$engine_ip/8228/v1 system status but that throws the same error.
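For anyone reproducing this, a quick way to sanity-check connectivity from inside the runner container (a sketch; engine-api is the compose service name, which Docker's embedded DNS should resolve on anchore-net, and busybox wget is assumed to ship with the alpine-based image):
# Probe the API by service name instead of the inspected IP
wget -qO- http://engine-api:8228/v1 || echo "engine not reachable yet"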
For those of you who took the time to read through this, I highly appreciate any input you can give me as to where the issue may lie. Thank you very much.

Related

Why MERN app can't communicate with backend if deployed with docker?

I deployed a MERN app to a DigitalOcean droplet with Docker. If I run my docker-compose.yml file locally on my PC, it works well. I have two containers: one backend, one frontend. If I try to compose up on the droplet, the frontend seems fine but can't communicate with the backend.
I use http-proxy-middleware; my setupProxy.js file:
const { createProxyMiddleware } = require('http-proxy-middleware');

module.exports = function (app) {
  app.use(
    '/api',
    createProxyMiddleware({
      target: 'http://0.0.0.0:5001',
      changeOrigin: true,
    })
  );
};
I tried target: 'http://main-be:5001' too, as main-be is the name of my backend container, but I get the same error; only the request URL changes to http://main-be:5001/api/auth/login in the Chrome DevTools Network tab.
My docker-compose.yml file:
version: '3.4'

networks:
  main:

services:
  main-be:
    image: main-be:latest
    container_name: main-be
    ports:
      - '5001:5001'
    networks:
      main:
    volumes:
      - ./backend/config.env:/app/config.env
    command: 'npm run prod'
  main-fe:
    image: main-fe:latest
    container_name: main-fe
    networks:
      main:
    volumes:
      - ./frontend/.env:/app/.env
    ports:
      - '3000:3000'
    command: 'npm start'
My Dockerfile in the frontend folder:
FROM node:12.2.0-alpine
COPY . .
RUN npm ci
CMD ["npm", "start"]
My Dockerfile in the backend folder:
FROM node:12-alpine3.14
WORKDIR /app
COPY . .
RUN npm ci --production
CMD ["npm", "run", "prod"]
backend/package.json file:
"scripts": {
  "start": "nodemon --watch --exec node --experimental-modules server.js",
  "dev": "nodemon server.js",
  "prod": "node server.js"
},
frontend/.env file:
SKIP_PREFLIGHT_CHECK=true
HOST=0.0.0.0
backend/config.env file:
DE_ENV=development
PORT=5001
My deploy.sh script to build images, copy to droplet...
#build and save backend and frontend images
docker build -t main-be ./backend & docker build -t main-fe ./frontend
docker save -o ./main-be.tar main-be & docker save -o ./main-fe.tar main-fe

#deploy services
ssh root@46.111.119.161 "pwd && mkdir -p ~/apps/first && cd ~/apps/first && ls -al && echo 'im in' && rm main-be.tar && rm main-fe.tar &> /dev/null"

#::scp file
#scp ./frontend/.env root@46.111.119.161:~/apps/first/frontend

#upload main-be.tar and main-fe.tar to VM via ssh
scp ./main-be.tar ./main-fe.tar root@46.111.119.161:~/apps/thesis/
scp ./docker-compose.yml root@46.111.119.161:~/apps/first/
ssh root@46.111.119.161 "cd ~/apps/first && ls -1 *.tar | xargs --no-run-if-empty -L 1 docker load -i"
ssh root@46.111.119.161 "cd ~/apps/first && sudo docker-compose up"
frontend/src/utils/axios.js:
import axios from 'axios';
export const baseURL = 'http://localhost:5001';
export default axios.create({ baseURL });
frontend/src/utils/constants.js:
const API_BASE_ORIGIN = `http://localhost:5001`;
export { API_BASE_ORIGIN };
I have been trying for days but can't see where the problem is, so any help is highly appreciated.
I am no expert on MERN (we mainly run Angular & .NET), but I have to warn you of one thing. We had the same issue when setting this up in the beginning: it worked locally in containers but not on our deployment servers, because we forgot the basic thing about web applications.
The application runs in your browser, whereas if you deploy an application stack somewhere else, the rest of the services (APIs, DB and such) do not. So referencing your IP/DNS/localhost inside your application won't work, because there is nothing there. A container that holds a web application only serves your browser the client files, and then the JS and the logic are executed inside your browser, not in the container.
I suspect this is what is affecting your ability to connect to the backend.
To solve this you have two options.
Create an HTTP proxy as an additional service that your FE calls (set up a domain and routing), for instance Nginx or Traefik; that proxy can then reference your backend by its service name, since it lives in the same environment as the API.
Expose the HTTP port directly from the container; then your FE can call remoteServerIP:exposedPort and connect directly to the container's interface. (NOTE: I do not recommend this for real use, only for testing direct connectivity without any proxy.)
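A quick way to see this in action (a sketch, assuming the compose stack from the question is up on the droplet):
# On the droplet itself the published port answers, so the backend is fine
curl -i http://localhost:5001/api/auth/login
# Inside the frontend container the service name resolves via Docker's DNS
docker exec main-fe wget -qO- http://main-be:5001/api/auth/login
# From a visitor's browser, however, http://main-be:5001 resolves to nothing:
# that name only exists inside the compose network "main"
In other words, the axios baseURL of http://localhost:5001 works on your PC only because the browser and the containers happen to share a localhost there; on the droplet, the browser needs the droplet's public IP/domain, or a relative URL behind a proxy, instead.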

Apache Nifi (on docker): only one of the HTTP and HTTPS connectors can be configured at one time error

I have a problem adding authentication, due to new requirements, while running Apache NiFi in a container without SSL.
The image version is apache/nifi:1.13.0
SSL is unconditionally required to add authentication, and the recommended way to add SSL is the tls-toolkit shipped in the NiFi image. I worked through the following process:
Removed the environment variable for nifi.web.http.port (HTTP communication) and started the standalone-mode container with nifi.web.https.port=9443:
docker-compose up
Entered the container and ran the tls-toolkit script from the nifi-toolkit:
cd /opt/nifi/nifi-toolkit-1.13.0/bin &&\
sh tls-toolkit.sh standalone \
-n 'localhost' \
-C 'CN=yangeok,OU=nifi' \
-O -o $NIFI_HOME/conf
Attempt 1
Organized the files in directory $NIFI_HOME/conf. Three files, keystore.jks, truststore.jks, and nifi.properties, were created in the folder localhost, named after the value passed to the -n option of the tls-toolkit script.
cd $NIFI_HOME/conf &&
cp localhost/*.jks .
The file $NIFI_HOME/conf/localhost/nifi.properties was not copied over wholesale; only the following properties were carried into $NIFI_HOME/conf/nifi.properties:
nifi.web.http.host=
nifi.web.http.port=
nifi.web.https.host=localhost
nifi.web.https.port=9443
Restarted the container:
docker-compose restart
The container died with the following error log:
Only one of the HTTP and HTTPS connectors can be configured at one time
Attempt 2
After executing the tls-toolkit script, all files were overwritten, including nifi.properties:
cd $NIFI_HOME/conf &&
cp localhost/* .
Restarted the container:
docker-compose restart
The container died with the same error log.
Hint
The dead container's volume was still accessible, so I copied out and checked nifi.properties; whenever I did docker-compose up or restart, it changed as follows.
The part I overwrote or modified:
nifi.web.http.host=
nifi.web.http.port=
nifi.web.http.network.interface.default=
#############################################
nifi.web.https.host=localhost
nifi.web.https.port=9443
The changed part after re-executing the container:
nifi.web.http.host=a8e283ab9421
nifi.web.http.port=9443
nifi.web.http.network.interface.default=
#############################################
nifi.web.https.host=a8e283ab9421
nifi.web.https.port=9443
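As far as I can tell, the image's start scripts regenerate nifi.properties from the NIFI_WEB_* environment variables on every boot, which would explain the values above. A sketch for checking what actually lands in the file (assuming the container name nifi from the compose file below):
docker exec nifi env | grep NIFI_WEB
docker exec nifi grep -E '^nifi\.web\.(http|https)\.' /opt/nifi/nifi-current/conf/nifi.properties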
I'd like to know how to start the container with nifi.web.http.host and nifi.web.http.port left empty. My docker-compose.yml file is as follows:
version: '3'

services:
  nifi:
    build:
      context: .
      args:
        NIFI_VERSION: ${NIFI_VERSION}
    container_name: nifi
    user: root
    restart: unless-stopped
    network_mode: bridge
    ports:
      - ${NIFI_HTTP_PORT}:8080/tcp
      - ${NIFI_HTTPS_PORT}:9443/tcp
    volumes:
      - ./drivers:/opt/nifi/nifi-current/drivers
      - ./templates:/opt/nifi/nifi-current/templates
      - ./data:/opt/nifi/nifi-current/data
    environment:
      TZ: 'Asia/Seoul'
      ########## JVM ##########
      NIFI_JVM_HEAP_INIT: ${NIFI_HEAP_INIT} # The initial JVM heap size.
      NIFI_JVM_HEAP_MAX: ${NIFI_HEAP_MAX} # The maximum JVM heap size.
      ########## Web ##########
      # NIFI_WEB_HTTP_HOST: ${NIFI_HTTP_HOST} # nifi.web.http.host
      # NIFI_WEB_HTTP_PORT: ${NIFI_HTTP_PORT} # nifi.web.http.port
      NIFI_WEB_HTTPS_HOST: ${NIFI_HTTPS_HOST} # nifi.web.https.host
      NIFI_WEB_HTTP_PORT: ${NIFI_HTTPS_PORT} # nifi.web.https.port
Thank you

Certbot failing acme-challenge (connection refused)

I'm trying to set up a Django project with docker + nginx following the tutorial Nginx and Let's Encrypt with Docker in Less Than 5 Minutes.
The issue is that when I run the script init-letsencrypt.sh, I end up with failed challenges.
Here is the content of my script:
#!/bin/bash

if ! [ -x "$(command -v docker-compose)" ]; then
  echo 'Error: docker-compose is not installed.' >&2
  exit 1
fi

domains=(xxxx.yyyy.net www.xxxx.yyyy.net)
rsa_key_size=4096
data_path="./data/certbot"
email="myemail@example.com" # Adding a valid address is strongly recommended
staging=1 # Set to 1 if you're testing your setup to avoid hitting request limits

if [ -d "$data_path" ]; then
  read -p "Existing data found for $domains. Continue and replace existing certificate? (y/N) " decision
  if [ "$decision" != "Y" ] && [ "$decision" != "y" ]; then
    exit
  fi
fi

if [ ! -e "$data_path/conf/options-ssl-nginx.conf" ] || [ ! -e "$data_path/conf/ssl-dhparams.pem" ]; then
  echo "### Downloading recommended TLS parameters ..."
  mkdir -p "$data_path/conf/"
  curl -s https://raw.githubusercontent.com/certbot/certbot/master/certbot-nginx/certbot_nginx/_internal/tls_configs/options-ssl-nginx.conf > "$data_path/conf/options-ssl-nginx.conf"
  curl -s https://raw.githubusercontent.com/certbot/certbot/master/certbot/certbot/ssl-dhparams.pem > "$data_path/conf/ssl-dhparams.pem"
  echo
fi

echo "### Creating dummy certificate for $domains ..."
path="/etc/letsencrypt/live/$domains"
mkdir -p "$data_path/conf/live/$domains"
docker-compose -f docker-compose-deploy.yml run --rm --entrypoint "\
  openssl req -x509 -nodes -newkey rsa:$rsa_key_size -days 1\
    -keyout '$path/privkey.pem' \
    -out '$path/fullchain.pem' \
    -subj '/CN=localhost'" certbot
echo

echo "### Starting nginx ..."
docker-compose -f docker-compose-deploy.yml up --force-recreate -d proxy
echo

echo "### Deleting dummy certificate for $domains ..."
docker-compose -f docker-compose-deploy.yml run --rm --entrypoint "\
  rm -Rf /etc/letsencrypt/live/$domains && \
  rm -Rf /etc/letsencrypt/archive/$domains && \
  rm -Rf /etc/letsencrypt/renewal/$domains.conf" certbot
echo

echo "### Requesting Let's Encrypt certificate for $domains ..."
# Join $domains to -d args
domain_args=""
for domain in "${domains[@]}"; do
  domain_args="$domain_args -d $domain"
done

# Select appropriate email arg
case "$email" in
  "") email_arg="--register-unsafely-without-email" ;;
  *) email_arg="--email $email" ;;
esac

# Enable staging mode if needed
if [ $staging != "0" ]; then staging_arg="--staging"; fi

docker-compose -f docker-compose-deploy.yml run --rm --entrypoint "\
  certbot -v certonly --webroot -w /var/www/certbot \
    $staging_arg \
    $email_arg \
    $domain_args \
    --rsa-key-size $rsa_key_size \
    --agree-tos \
    --force-renewal" certbot
echo

echo "### Reloading nginx ..."
docker-compose -f docker-compose-deploy.yml exec proxy nginx -s reload
And my nginx configuration file:
server {
    listen 80;
    server_name xxxx.yyyy.net;

    location ^~ /.well-known/acme-challenge/ {
        root /var/www/certbot;
    }

    location / {
        return 301 https://$server_name$request_uri;
    }
}

server {
    listen 443 ssl;
    server_name xxxx.yyyy.net;

    ssl_certificate /etc/letsencrypt/live/xxxx.yyyy.net/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/xxxx.yyyy.net/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;

    location /static {
        alias /vol/static;
    }

    location / {
        uwsgi_pass web:8000;
        include /etc/nginx/uwsgi_params;
    }
}
The output of the part that fails:
Requesting a certificate for xxxx.yyyy.net and www.xxxx.yyyy.net
Performing the following challenges:
http-01 challenge for xxxx.yyyy.net
http-01 challenge for www.xxxx.yyyy.net
Using the webroot path /var/www/certbot for all unmatched domains.
Waiting for verification...
Challenge failed for domain xxxx.yyyy.net
Challenge failed for domain www.xxxx.yyyy.net
http-01 challenge for xxxx.yyyy.net
http-01 challenge for www.xxxx.yyyy.net
Certbot failed to authenticate some domains (authenticator: webroot). The Certificate Authority reported these problems:
Domain: xxxx.yyyy.net
Type: connection
Detail: Fetching http://xxxx.yyyy.net/.well-known/acme-challenge/XJw9w39lRSSbPf-4tb45RLtTnSbjlUEi1f0Cqwsmt-8: Connection refused
Domain: www.xxxx.yyyy.net
Type: connection
Detail: Fetching http://www.xxxx.yyyy.net/.well-known/acme-challenge/b47s4WJARyOTS63oFkaji2nP7oOhiLx5hHp4kO9dCGI: Connection refused
Hint: The Certificate Authority failed to download the temporary challenge files created by Certbot. Ensure that the listed domains serve their content from the provided --webroot-path/-w and that files created there can be downloaded from the internet.
Cleaning up challenges
Some challenges have failed.
Ask for help or search for solutions at https://community.letsencrypt.org. See the logfile /var/log/letsencrypt/letsencrypt.log or re-run Certbot with -v for more details.
ERROR: 1
One of the comments said to check the certbot commit, but there's no further explanation as to how to solve it.
The problem is the nginx configuration file. The container fails to start up correctly because of the missing certificate files. I commented out the SSL server block, rebuilt the image, and executed the script again; everything worked out just fine. After the certificates were generated, I just uncommented the SSL configuration, rebuilt the image, and composed up the services.
Had the same issue; the solution was ensuring I defined the volume blocks in both the nginx and certbot services correctly:
# ...other services
nginx:
  container_name: nginx
  image: nginx:1.13
  ports:
    - "80:80"
    - "443:443"
  volumes:
    - ./config/nginx/conf.d:/etc/nginx/conf.d
    - ./data/certbot/conf:/etc/letsencrypt
    - ./data/certbot/www:/var/www/certbot
certbot:
  container_name: certbot
  image: certbot/certbot
  volumes:
    - ./data/certbot/conf:/etc/letsencrypt
    - ./data/certbot/www:/var/www/certbot
Also, if you are using EC2 as your cloud server, don't forget to add inbound rules for ports 80 and 443.
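For reference, a sketch of opening those ports with the AWS CLI (sg-0123456789abcdef0 is a placeholder for your instance's security group ID):
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 80 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 443 --cidr 0.0.0.0/0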
A More Beginner-Friendly Version!
I can confirm that the first answer posted (remove all lines regarding SSL certificate registration/HTTPS redirection when first running init-letsencrypt.sh) works perfectly!
The lack of documentation is really annoying on this one, and I had to find the answer deep in the community section. Even for someone whose first language isn't English, this answer would be really difficult to find. I wish they documented more on this matter. :(
So here are the steps you have to follow to resolve this issue (a rough sketch of the command sequence follows this list)...
Basically, you've got to remove all the HTTPS/SSL-related stuff from both the docker-compose.yml and the nginx.conf / nginx/app.conf file.
Then run the init-letsencrypt.sh script.
Then add the HTTPS/SSL-related stuff back to both the docker-compose.yml and the nginx.conf / nginx/app.conf file. (If you're on Git, just revert your commits.)
Then run docker-compose up -d --build. Then run the init-letsencrypt.sh script again.
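A minimal sketch of that bootstrap sequence, assuming the script and file names used in this thread:
# 1. Comment out the SSL/443 parts of docker-compose.yml and nginx.conf / nginx/app.conf
docker-compose up -d --build
sudo ./init-letsencrypt.sh
# 2. Restore the SSL config (e.g. git checkout -- docker-compose.yml nginx/app.conf), then
docker-compose up -d --build
sudo ./init-letsencrypt.sh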
Hope this helps, and wish y'all the best of luck!!
P/S: The back-end stack I used was Flask + Celery (lets Flask run heavy tasks asynchronously) + Redis (a bridge/middleman between Flask and Celery) + NGINX + Certbot, all running inside individual Docker containers, chained together using docker-compose. I deployed it on a DigitalOcean Droplet VPS. (A VPS is essentially a computer OS that runs on the internet, 24/7.)
For newbies, Docker: think of Python's virtualenv or Node.js's localized node_modules, but for OS-level/C-based dependencies, like those that can only be installed through package managers such as Linux's apt-get install, macOS's brew install, or Windows's choco install.
Docker Compose: e.g. the client and the server may have different OS-level dependencies, and you want to separate them so they don't conflict with each other. You then allow only certain communications between them by "chaining" them through docker-compose.
What's NGINX? It's a reverse-proxy solution; TL;DR: you can connect the domain/URL you purchased and direct it to your web app. Let's Encrypt allows the server to have that green lock thing next to your address for secure communication.
Also, an important thing to note: do NOT install NGINX or Redis OUTSIDE of the Docker containers on the Linux host! That will cause conflicts (ports 443 and 80 already being occupied). 443 is for HTTPS, 80 is for HTTP.
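A quick check for such conflicts before composing up (a sketch; ss ships with most modern Linux distributions):
# Anything already listening on 80/443 on the host will clash with the nginx container
sudo ss -ltnp | grep -E ':(80|443) '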
These are the tutorial I used for setting up my tech stack:
https://testdriven.io/blog/dockerizing-flask-with-postgres-gunicorn-and-nginx/
https://pentacent.medium.com/nginx-and-lets-encrypt-with-docker-in-less-than-5-minutes-b4b8a60d3a71
I can also share my docker-compose.yml file below for your reference:
version: '3.8'

services:
  web:
    build: .
    image: web
    container_name: web
    command: gunicorn --worker-class=gevent --worker-connections=1000 --workers=5 api:app --bind 0.0.0.0:5000
    volumes:
      - .:/usr/src/app
    environment:
      - CELERY_BROKER_URL=redis://redis:6379/0
      - CELERY_RESULT_BACKEND=redis://redis:6379/0
    depends_on:
      - redis
    expose:
      - 5000
  worker:
    build: .
    command: celery --app tasks.celery worker --loglevel=info
    volumes:
      - .:/usr/src/app
    environment:
      - CELERY_BROKER_URL=redis://redis:6379/0
      - CELERY_RESULT_BACKEND=redis://redis:6379/0
    depends_on:
      - web
      - redis
  nginx:
    image: nginx:1.15-alpine
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./server/nginx:/etc/nginx/conf.d
      - ./server/certbot/conf:/etc/letsencrypt
      - ./server/certbot/www:/var/www/certbot
    command: "/bin/sh -c 'while :; do sleep 6h & wait $${!}; nginx -s reload; done & nginx -g \"daemon off;\"'"
    depends_on:
      - web
  certbot:
    image: certbot/certbot
    volumes:
      - ./server/certbot/conf:/etc/letsencrypt
      - ./server/certbot/www:/var/www/certbot
    entrypoint: "/bin/sh -c 'trap exit TERM; while :; do certbot renew; sleep 12h & wait $${!}; done;'"
  redis:
    image: redis:6-alpine
    restart: always
    ports:
      - 6379:6379
    # HOW TO SET REDIS PASSWORD VIA ENVIRONMENT VARIABLE
    # https://stackoverflow.com/questions/68461172/docker-compose-redis-password-via-environment-variable
  dashboard:
    build: .
    command: celery --app tasks.celery flower --port=5555 --broker=redis://redis:6379/0
    ports:
      - 5556:5555
    environment:
      - CELERY_BROKER_URL=redis://redis:6379/0
      - CELERY_RESULT_BACKEND=redis://redis:6379/0
    depends_on:
      - web
      - redis
      - worker
Also sharing my Dockerfile, just in case:
# FOR FRONT-END DEPLOYMENT... (REACT)
FROM node:16-alpine as build-step
WORKDIR /app
ENV PATH /app/web/node_modules/.bin:$PATH
COPY web ./web
WORKDIR /app/web
RUN yarn install
RUN yarn build
# FOR BACK-END DEPLOYMENT... (FLASK)
FROM python:3.10.4-slim
WORKDIR /
# Don't forget "--from"! It acts as a bridge that connects two separate stages
COPY --from=build-step app ./app
WORKDIR /app
RUN apt-get update && apt-get install -y python3-pip python3-dev mesa-utils libgl1-mesa-glx libglib2.0-0 build-essential libssl-dev libffi-dev redis-server
COPY server ./server
WORKDIR /app/server
RUN pip3 install -r ./requirements.txt
# Pretty much pass everything in the root folder except for the client folder, as we do NOT want to overwrite the pre-generated client folder that is already in the ./app folder
# THIS IS CALLED MULTI-STAGE BUILDING IN DOCKER
EXPOSE 5000
All the notes I made while resolving this problem:
'''
TIPS & TRICKS
-------------
UPDATED ON: 2023-02-11
LAST EDITED BY:
WONMO "JOHN" SEONG,
LEAD DEV. AND THE CEO OF HAVIT
----------------------------------------------
HOW TO INSTALL DOCKER-COMPOSE ON DIGITALOCEAN VPS:
https://www.digitalocean.com/community/tutorials/how-to-install-and-use-docker-compose-on-ubuntu-22-04
DOCKERIZE FLASK + CELERY + REDIS APPLICATION WITH DOCKER-COMPOSE:
https://nickjanetakis.com/blog/dockerize-a-flask-celery-and-redis-application-with-docker-compose
https://testdriven.io/blog/flask-and-celery/ <-- PRIMARILY USED THIS TUTORIAL
CELERY VS. GUNICORN WORKERS:
https://stackoverflow.com/questions/24317917/difference-between-celery-and-gunicorn-workers
1. Gunicorn solves concurrency of serving HTTP requests - this is "online" code where each request triggers a Django view, which returns a response. Any code that runs in a view will increase the time it takes to get a response to the user, making the website seem slow. So long running tasks should not go in Django views for that reason.
2. Celery is for running code "offline", where you don't need to return an HTTP response to a user. A Celery task might be triggered by some code inside a Django view, but it could also be triggered by another Celery task, or run on a schedule. Celery uses the model of a worker pulling tasks off of a queue, there are a few Django compatible task frameworks that do this. I give a write up of this architecture here.
CELERY, GUNICORN, AND SUPERVISOR:
https://medium.com/sightwave-software/setting-up-nginx-gunicorn-celery-redis-supervisor-and-postgres-with-django-to-run-your-python-73c8a1c8c1ba
DEPLOY GITHUB REPO ON DIGITALOCEAN VPS USING SSH KEYS:
https://medium.com/swlh/how-to-deploy-your-application-to-digital-ocean-using-github-actions-and-save-up-on-ci-cd-costs-74b7315facc2
COMMANDS TO RUN ON VPS TO CLONE GITHUB REPO (WORKS ON BOTH PRIVATE AND PUBLIC REPOS):
1. Login as root
2. Set up your credentials (GitHub SSH-related) and run the following commands:
- apt-get update
- apt-get install git
- mkdir ~/github && cd ~/github
- git clone git@github.com:wonmor/HAVIT-Central.git
3. To get the latest changes, run git fetch origin
HOW TO RUN DOCKER-COMPOSE ON VPS:
https://www.digitalocean.com/community/tutorials/how-to-install-and-use-docker-compose-on-ubuntu-22-04
1. Login as root
2. Run the following commands:
- cd ~/github/HAVIT-Central
- docker compose up --build -d // builds and runs the containers in detached mode
OR docker compose up --build -d --remove-orphans // builds and runs the containers in detached mode and removes orphan containers
- docker compose ps // lists all running containers in Docker engine.
3. To stop the containers, run:
- docker-compose down
HOW TO SET UP NGINX ON UBUNTU VPS TO PROXY PASS TO GUNICORN ON DIGITALOCEAN:
https://www.datanovia.com/en/lessons/digitalocean-initial-ubuntu-server-setup/
https://www.digitalocean.com/community/tutorials/how-to-secure-nginx-with-let-s-encrypt-on-ubuntu-22-04
https://www.datanovia.com/en/lessons/digitalocean-how-to-install-nginx-and-ssl/
CAPROVER CLEAN/REMOVE ALL PREVIOUS DEPLOYMENTS:
docker container prune --force
docker image prune --all
FORCE MERGE USING GIT:
git reset --hard origin/main
NGINX - REDIRECT TO DOCKER CONTAINER:
https://gilyes.com/docker-nginx-letsencrypt/
https://github.com/nginx-proxy/acme-companion
https://github.com/nginx-proxy/acme-companion/wiki/Docker-Compose
https://github.com/evertramos/nginx-proxy-automation
https://github.com/buchdag/letsencrypt-nginx-proxy-companion-compose
https://testdriven.io/blog/dockerizing-flask-with-postgres-gunicorn-and-nginx/
https://pentacent.medium.com/nginx-and-lets-encrypt-with-docker-in-less-than-5-minutes-b4b8a60d3a71 <--- THIS IS THE BEST TUTORIAL
Simply run docker-compose up and enjoy your HTTPS-secured website or app.
Then run chmod +x init-letsencrypt.sh and sudo ./init-letsencrypt.sh.
VVIP: HOW TO RUN THIS APP ON VPS:
1. Login as root, run sudo chmod +x init_letsencrypt.sh
2. Now for the bit… that tends to go wrong. Navigate into your remote project folder, and run the initialization script (Run ./<Script-Name>.sh on Terminal). First, docker will build the images, and then run through the script step-by-step as described above. Now, this worked first time for me while putting together the tutorial, but in the past it has taken me hours to get everything set up correctly. The main problem was usually the locations of files: the script would save it to some directory, which was mapped to a volume that nginx was incorrectly mapped to, and so on. If you end up needing to debug, you can run the commands in the script yourself, substituting variables as you go. Pay close attention to the logs — nginx is often quite good at telling you what it’s missing.
3. If all goes to plan, you’ll see a nice little printout from Lets Encrypt and Certbot saying “Congratulations” and your script will exit successfully.
HOW TO OPEN/ALLOW PORTS ON DIGITALOCEAN:
https://www.digitalocean.com/community/tutorials/opening-a-port-on-linux
sudo ufw allow <PORT_NUMBER>
WHAT ARE DNS RECORDS?
https://docs.digitalocean.com/products/networking/dns/how-to/manage-records/
PS: The higher the TTL, the longer it takes for the DNS record to update, but it will be cached for longer, which means less load on the DNS server.
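A sketch for checking the remaining TTL of a record (dig prints it in the second column; example.com is a placeholder):
dig +noall +answer example.com A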
TIP: MAKE SURE YOU SET UP THE CUSTOM NAMESPACES FOR DIGITALOCEAN ON GOOGLE DOMAINS:
https://docs.digitalocean.com/tutorials/dns-registrars/
DOCKER SWARM VS. DOCKER COMPOSE:
The difference between Docker Swarm and Docker Compose is that Compose is used for configuring multiple containers in the same host. Docker Swarm is different in that it is a container orchestration tool. This means that Docker Swarm lets you connect containers to multiple hosts similar to Kubernetes.
Cannot load certificate /etc/letsencrypt/live/havit.space/fullchain.pem: BIO_new_file() failed (SSL: error:02001002:system library:fopen:No such file or directory FIX:
https://community.letsencrypt.org/t/lets-encrypt-with-nginx-i-got-error-ssl-error-02001002-system-library-fopen-no-such-file-or-directory-fopen-etc-letsencrypt-live-xxx-com-fullchain-pem-r/20990/5
RUNNING MULTIPLE DOCKER COMPOSE FILES:
https://stackoverflow.com/questions/43957259/run-multiple-docker-compose
nginx: [emerg] open() "/etc/letsencrypt/options-ssl-nginx.conf" failed (2: No such file or directory) in /etc/nginx/conf.d/app.conf:20 FIX:
https://stackoverflow.com/questions/64940480/nginx-letsencrypt-error-etc-letsencrypt-options-ssl-nginx-conf-no-such-file-o
VVVIP: RESOLVE NGINX + DOCKER + LETSENCRYPT ISSUES!
https://stackoverflow.com/questions/68449947/certbot-failing-acme-challenge-connection-refused
Basically gotta remove all the HTTPS SSL-related stuff from both the docker-compose.yml and the nginx.conf file.
Then run the init-letsencrypt.sh script. Then add the HTTPS SSL-related stuff back to both the docker-compose.yml and the nginx.conf file.
Then run docker-compose up -d --build. Then run the init-letsencrypt.sh script again.
'''

Can I run a docker container from the browser?

I don't suppose anyone knows if it's possible to call the docker run or docker compose up commands from a web app?
I have the following scenario: a React app that uses OpenLayers for its maps. When the user loses internet connection, it falls back to making requests to a map server running locally in Docker. The issue is that the user needs to manually start the server via the command line. To make things easier for the user, I added the following bash script and docker compose file to boot up the server with a single command, but I was wondering if I could incorporate that functionality into the web app and have the user boot the map server at the click of a button.
Just for reference's sake, these are my bash and compose files.
#!/bin/sh

dockerDown=`docker info | grep -qi "ERROR" && echo "stopped"`
if [ $dockerDown ]
then
  echo "\n ********* Please start docker before running this script ********* \n"
  exit 1
fi

skipInstall="no"
read -p "Have you imported the maps already and just want to run the app (y/n)?" choice
case "$choice" in
  y|Y ) skipInstall="yes";;
  n|N ) skipInstall="no";;
  * ) skipInstall="no";;
esac

pbfUrl='https://download.geofabrik.de/asia/malaysia-singapore-brunei-latest.osm.pbf'
#polyUrl='https://download.geofabrik.de/asia/malaysia-singapore-brunei.poly'
#-e DOWNLOAD_POLY=$polyUrl \

docker volume create openstreetmap-data
docker volume create openstreetmap-rendered-tiles

if [ $skipInstall = "no" ]
then
  echo "\n ***** IF THIS IS THE FIRST TIME, YOU MIGHT WANT TO GO GET A CUP OF COFFEE WHILE YOU WAIT ***** \n"
  docker run \
    -e DOWNLOAD_PBF=$pbfUrl \
    -v openstreetmap-data:/var/lib/postgresql/12/main \
    -v openstreetmap-rendered-tiles:/var/lib/mod_tile \
    overv/openstreetmap-tile-server \
    import
  echo "Finished Postgres container!"
fi

echo "\n *** BOOTING UP SERVER CONTAINER *** \n"
docker compose up
My docker-compose file:
version: '3'

services:
  map:
    image: overv/openstreetmap-tile-server
    volumes:
      - openstreetmap-data:/var/lib/postgresql/12/main
      - openstreetmap-rendered-tiles:/var/lib/mod_tile
    environment:
      - THREADS=24
      - OSM2PGSQL_EXTRA_ARGS=-C 4096
      - AUTOVACUUM=off
    ports:
      - "8080:80"
    command: "run"

volumes:
  openstreetmap-data:
    external: true
  openstreetmap-rendered-tiles:
    external: true
There is the Docker Engine API, and you are able to start containers with it.
See the Docker documentation:
https://docs.docker.com/engine/api/
To start containers using the Docker API:
https://docs.docker.com/engine/api/v1.41/#operation/ContainerStart
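A minimal sketch of those endpoints over the local Docker socket (note that a browser cannot talk to the socket directly, so in practice a small backend service would relay these calls; the container name map-server is a placeholder):
# Create a container from the tile-server image
curl --unix-socket /var/run/docker.sock -H "Content-Type: application/json" \
  -d '{"Image": "overv/openstreetmap-tile-server", "Cmd": ["run"]}' \
  -X POST "http://localhost/v1.41/containers/create?name=map-server"
# Start it
curl --unix-socket /var/run/docker.sock -X POST http://localhost/v1.41/containers/map-server/start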

Spring boot app fail to link consul in docker

I am trying to use Consul as a discovery service, with two other Spring Boot apps registering with Consul, all running in Docker. Here are my files:
following are my codes:
1. The app's application.yml:
server:
  port: 3333
spring:
  application:
    name: adder
  cloud:
    consul:
      host: consul
      port: 8500
      discovery:
        preferIpAddress: true
        healthCheckPath: /health
        healthCheckInterval: 15s
        instanceId: ${spring.application.name}:${spring.application.instance_id:${server.port}}
2. docker-compose.yml:
consul1:
  image: "progrium/consul:latest"
  container_name: "consul1"
  hostname: "consul1"
  command: "-server -bootstrap -ui-dir /ui"
adder:
  image: wsy/adder
  ports:
    - "3333:3333"
  links:
    - consul1
  environment:
    WAIT_FOR_HOSTS: consul1:8500
There is another similar question, Cannot link Consul and Spring Boot app in Docker; the answer there suggests the app should wait for Consul to be fully up by using depends_on, which I tried, but it didn't work.
The error message is as follows:
adder_1 | com.ecwid.consul.transport.TransportException: java.net.ConnectException: Connection refused
adder_1 | at com.ecwid.consul.transport.AbstractHttpTransport.executeRequest(AbstractHttpTransport.java:80) ~[consul-api-1.1.8.jar!/:na]
adder_1 | at com.ecwid.consul.transport.AbstractHttpTransport.makeGetRequest(AbstractHttpTransport.java:39) ~[consul-api-1.1.8.jar!/:na]
Besides the Spring Boot application.yml and docker-compose.yml, the following is the app's Dockerfile:
FROM java:8
VOLUME /tmp
ADD adder-0.0.1-SNAPSHOT.jar app.jar
RUN bash -c 'touch /app.jar'
ADD start.sh start.sh
RUN bash -c 'chmod +x /start.sh'
EXPOSE 3333
ENTRYPOINT ["/start.sh", " java -Djava.security.egd=file:/dev/./urandom -jar /app.jar"]
and the start.sh:
#!/bin/bash
set -e

wait_single_host() {
  local host=$1
  shift
  local port=$1
  shift
  echo "waiting for TCP connection to $host:$port..."
  while ! nc ${host} ${port} > /dev/null 2>&1 < /dev/null
  do
    echo "TCP connection [$host] not ready, will try again..."
    sleep 1
  done
  echo "TCP connection ready. Executing command [$host] now..."
}

wait_all_hosts() {
  if [ ! -z "$WAIT_FOR_HOSTS" ]; then
    local separator=':'
    for _HOST in $WAIT_FOR_HOSTS ; do
      IFS="${separator}" read -ra _HOST_PARTS <<< "$_HOST"
      wait_single_host "${_HOST_PARTS[0]}" "${_HOST_PARTS[1]}"
    done
  else
    echo "IMPORTANT : Waiting for nothing because no $WAIT_FOR_HOSTS env var defined !!!"
  fi
}

wait_all_hosts
exec $1
I can infer that your Consul configuration is located in your application.yml instead of bootstrap.yml; that's the problem.
According to this answer, bootstrap.yml is loaded before application.yml, and the Consul client has to read its configuration before the application itself starts, which is why it must live in bootstrap.yml.
Example of a working bootstrap.yml :
spring:
  cloud:
    consul:
      host: consul
      port: 8500
      discovery:
        prefer-ip-address: true
Run the Consul server, and do not forget the name option so it matches your configuration:
docker run -d -p 8500:8500 --name=consul progrium/consul -server -bootstrap
The Consul server is now running. Run your application image (built previously with your artifact) and link it to the Consul container:
docker run -d --name=my-consul-client-app --link consul:consul acme/spring-app
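If registration still fails, it can help to confirm that the Consul HTTP API answers from inside the client container before the app boots (a sketch; /v1/status/leader is part of Consul's status API, and curl being present in the image is an assumption):
docker exec my-consul-client-app curl -s http://consul:8500/v1/status/leader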
Your problem is that depends_on only controls the startup order of your services. You have to wait until the Consul server is actually up and running before starting your Spring app, which you can do with a script like this:
#!/bin/bash
set -e

default_host="database"
default_port="3306"
host="${2:-$default_host}"
port="${3:-$default_port}"

echo "waiting for TCP connection to $host:$port..."
while ! (echo >/dev/tcp/$host/$port) &>/dev/null
do
  sleep 1
done
echo "TCP connection ready. Executing command [$1] now..."
exec $1
Usage in your Dockerfile:
COPY wait.sh /wait.sh
RUN chmod +x /wait.sh
CMD ["/wait.sh", "java -jar yourApp-jar" , "consulURL" , "ConsulPort" ]
I just want to clarify that, in the end, I still don't have a solution and can't understand the situation here. I tried the suggestion from Ohmen; in the APP container I am able to ping consul1, but the APP still fails to connect to Consul.
If I only start Consul with the following command:
docker-compose -f docker-compose-discovery.yml up discovery
then I can run the APP directly (through Main), and it is able to connect with spring.cloud.consul.host: discovery.
But if I try to run the APP in a Docker container, like the following:
docker run --name adder --link discovery-consul:discovery wsy/adder
it fails again with connection refused.
I am very new to docker & docker-compose; I thought this would be a good example to start with, but it seems it's not that easy for me.
