Run Artifactory as Docker container responds with 404 - docker

I created a Docker container with this command:
docker run --name artifactory -d -p 8081:8081 \
-v /jfrog/artifactory:/var/opt/jfrog/artifactory \
-e EXTRA_JAVA_OPTIONS='-Xms128M -Xmx512M -Xss256k -XX:+UseG1GC' \
docker.bintray.io/jfrog/artifactory-oss:latest
and started Artifactory, but the response I get is 404 - not found.
If you access http://99.79.191.172:8081/artifactory you can see it.

If you follow the Artifactory Docker installation documentation, you'll see that you also need to expose port 8082 for the new JFrog Router, which now handles the traffic coming into the UI (and other services as needed).
This new architecture was introduced in Artifactory 7.x. By using latest as the image tag, you don't have full control over which version you are running...
So your command should look like:
docker run --name artifactory -p 8081:8081 -d -p 8082:8082 \
-v /jfrog/artifactory:/var/opt/jfrog/artifactory \
docker.bintray.io/jfrog/artifactory-oss:latest
For controlling the configuration (like the Java options you want), it's recommended to use the Artifactory system.yaml configuration. This file is the best way to control all aspects of the Artifactory system configuration.
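For example, with the volume mount from your docker run command, the same JVM options could go into the mounted system.yaml instead of the EXTRA_JAVA_OPTIONS environment variable. A minimal sketch (host path taken from your -v mapping; the values are just the ones you already use):
# /jfrog/artifactory/etc/system.yaml (on the host volume mounted into /var/opt/jfrog/artifactory)
shared:
    extraJavaOpts: "-Xms128M -Xmx512M -Xss256k -XX:+UseG1GC"
Restart the container afterwards so the new options are picked up.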

I start my instance with
sudo groupadd -g 1030 artifactory
sudo useradd -u 1030 -g artifactory artifactory
sudo chown artifactory:artifactory /daten/jfrog -R
docker run \
-d \
--name artifactory \
-v /daten/jfrog/artifactory:/var/opt/jfrog/artifactory \
--user "$(id -u artifactory):$(id -g artifactory)" \
--restart always \
-p 8084:8081 -p 9082:8082 releases-docker.jfrog.io/jfrog/artifactory-oss:latest
This is my /daten/jfrog/artifactory/etc/system.yaml (I changed nothing manually)
## #formatter:off
## JFROG ARTIFACTORY SYSTEM CONFIGURATION FILE
## HOW TO USE: comment-out any field and keep the correct yaml indentation by deleting only the leading '#' character.
configVersion: 1
## NOTE: JFROG_HOME is a place holder for the JFrog root directory containing the deployed product, the home directory for all JFrog products.
## Replace JFROG_HOME with the real path! For example, in RPM install, JFROG_HOME=/opt/jfrog
## NOTE: Sensitive information such as passwords and join key are encrypted on first read.
## NOTE: The provided commented key and value is the default.
## SHARED CONFIGURATIONS
## A shared section for keys across all services in this config
shared:
    ## Java 11 distribution to use
    #javaHome: "JFROG_HOME/artifactory/app/third-party/java"
    ## Extra Java options to pass to the JVM. These values add to or override the defaults.
    #extraJavaOpts: "-Xms512m -Xmx2g"
    ## Security Configuration
    security:
        ## Join key value for joining the cluster (takes precedence over 'joinKeyFile')
        #joinKey: "<Your joinKey>"
        ## Join key file location
        #joinKeyFile: "<For example: JFROG_HOME/artifactory/var/etc/security/join.key>"
        ## Master key file location
        ## Generated by the product on first startup if not provided
        #masterKeyFile: "<For example: JFROG_HOME/artifactory/var/etc/security/master.key>"
        ## Maximum time to wait for key files (master.key and join.key)
        #bootstrapKeysReadTimeoutSecs: 120
    ## Node Settings
    node:
        ## A unique id to identify this node.
        ## Default auto generated at startup.
        #id: "art1"
        ## Default auto resolved by startup script
        #ip:
        ## Sets this node as primary in HA installation
        #primary: true
        ## Sets this node as part of HA installation
        #haEnabled: true
    ## Database Configuration
    database:
        ## One of mysql, oracle, mssql, postgresql, mariadb
        ## Default Embedded derby
        ## Example for postgresql
        #type: postgresql
        #driver: org.postgresql.Driver
        #url: "jdbc:postgresql://<your db url, for example: localhost:5432>/artifactory"
        #username: artifactory
        #password: password
I see this in router-request.log
{"BackendAddr":"localhost:8040","ClientAddr":"127.0.0.1:43740","DownstreamContentSize":95,"DownstreamStatus":404,"Duration":3608608,"RequestMethod":"GET","RequestPath":"/access/api/v1/users/jffe#000?expand=groups","StartUTC":"2021-12-30T11:49:19.56803042Z","level":"info","msg":"","request_Uber-Trace-Id":"664d23ea1941d9b0:410817c2c69f2849:31b50a1adccb9846:0","request_User-Agent":"JFrog Access Java Client/7.29.9 72909900 Artifactory/7.29.8 72908900","time":"2021-12-30T11:49:19Z"}
{"BackendAddr":"localhost:8040","ClientAddr":"127.0.0.1:43734","DownstreamContentSize":95,"DownstreamStatus":404,"Duration":4000683,"RequestMethod":"GET","RequestPath":"/access/api/v1/users/jffe#000?expand=groups","StartUTC":"2021-12-30T11:49:19.567751867Z","level":"info","msg":"","request_Uber-Trace-Id":"23967a8743252dd8:436e2a5407b66e64:31cfc496ccc260fa:0","request_User-Agent":"JFrog Access Java Client/7.29.9 72909900 Artifactory/7.29.8 72908900","time":"2021-12-30T11:49:19Z"}
{"BackendAddr":"localhost:8040","ClientAddr":"127.0.0.1:43736","DownstreamContentSize":95,"DownstreamStatus":404,"Duration":4021195,"RequestMethod":"GET","RequestPath":"/access/api/v1/users/jffe#000?expand=groups","StartUTC":"2021-12-30T11:49:19.567751873Z","level":"info","msg":"","request_Uber-Trace-Id":"28300761ec7b6cd5:36588fa084ee7105:10fbdaadbc39b21e:0","request_User-Agent":"JFrog Access Java Client/7.29.9 72909900 Artifactory/7.29.8 72908900","time":"2021-12-30T11:49:19Z"}
{"BackendAddr":"localhost:8040","ClientAddr":"127.0.0.1:43622","DownstreamContentSize":95,"DownstreamStatus":404,"Duration":3918873,"RequestMethod":"GET","RequestPath":"/access/api/v1/users/jffe#000?expand=groups","StartUTC":"2021-12-30T11:49:19.567751891Z","level":"info","msg":"","request_Uber-Trace-Id":"6d57920d087f4d0f:26b9120411520de2:49b0e61895e17734:0","request_User-Agent":"JFrog Access Java Client/7.29.9 72909900 Artifactory/7.29.8 72908900","time":"2021-12-30T11:49:19Z"}
{"BackendAddr":"localhost:8040","ClientAddr":"127.0.0.1:43742","DownstreamContentSize":95,"DownstreamStatus":404,"Duration":2552815,"RequestMethod":"GET","RequestPath":"/access/api/v1/users/jffe#000?expand=groups","StartUTC":"2021-12-30T11:49:19.569112324Z","level":"info","msg":"","request_Uber-Trace-Id":"d4a7bb216cf31eb:5c783ae80b95778f:fd11882b03eb63f:0","request_User-Agent":"JFrog Access Java Client/7.29.9 72909900 Artifactory/7.29.8 72908900","time":"2021-12-30T11:49:19Z"}
{"BackendAddr":"localhost:8081","ClientAddr":"127.0.0.1:43730","DownstreamContentSize":45,"DownstreamStatus":200,"Duration":18106757,"RequestMethod":"POST","RequestPath":"/artifactory/api/auth/loginRelatedData","StartUTC":"2021-12-30T11:49:19.557661286Z","level":"info","msg":"","request_Uber-Trace-Id":"d4a7bb216cf31eb:640bf3bca741e43b:28f0abcfc40f203:0","request_User-Agent":"JFrog-Frontend/1.29.6","time":"2021-12-30T11:49:19Z"}
{"BackendAddr":"localhost:8081","ClientAddr":"127.0.0.1:43726","DownstreamContentSize":169,"DownstreamStatus":200,"Duration":19111069,"RequestMethod":"GET","RequestPath":"/artifactory/api/crowd","StartUTC":"2021-12-30T11:49:19.557426794Z","level":"info","msg":"","request_Uber-Trace-Id":"664d23ea1941d9b0:417647e0e0fd0911:55e80b7f7ab0724e:0","request_User-Agent":"JFrog-Frontend/1.29.6","time":"2021-12-30T11:49:19Z"}
{"BackendAddr":"localhost:8081","ClientAddr":"127.0.0.1:43724","DownstreamContentSize":496,"DownstreamStatus":200,"Duration":19308753,"RequestMethod":"GET","RequestPath":"/artifactory/api/securityconfig","StartUTC":"2021-12-30T11:49:19.557346739Z","level":"info","msg":"","request_Uber-Trace-Id":"6d57920d087f4d0f:7bdba564c07f8bc5:71b1b99e1e406d5f:0","request_User-Agent":"JFrog-Frontend/1.29.6","time":"2021-12-30T11:49:19Z"}
{"BackendAddr":"localhost:8081","ClientAddr":"127.0.0.1:43728","DownstreamContentSize":2,"DownstreamStatus":200,"Duration":19140699,"RequestMethod":"GET","RequestPath":"/artifactory/api/saml/config","StartUTC":"2021-12-30T11:49:19.557516365Z","level":"info","msg":"","request_Uber-Trace-Id":"23967a8743252dd8:2f9035e56dd9f0c5:4315ec00a6b32eb4:0","request_User-Agent":"JFrog-Frontend/1.29.6","time":"2021-12-30T11:49:19Z"}
{"BackendAddr":"localhost:8081","ClientAddr":"127.0.0.1:43732","DownstreamContentSize":148,"DownstreamStatus":200,"Duration":18907203,"RequestMethod":"GET","RequestPath":"/artifactory/api/httpsso","StartUTC":"2021-12-30T11:49:19.557786692Z","level":"info","msg":"","request_Uber-Trace-Id":"28300761ec7b6cd5:2767cf480f6ebd73:2c013715cb58b384:0","request_User-Agent":"JFrog-Frontend/1.29.6","time":"2021-12-30T11:49:19Z"}
I had to change the host port to 8084 (8081 is already occupied on my machine), but I run into a 404 as well.
Does anyone know how to solve this?
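One thing that may help narrow this down (a sketch, not a confirmed fix): since Artifactory 7.x serves the UI through the JFrog Router on port 8082, check the router rather than port 8081 directly, using the host port you mapped to 8082:
# router health endpoint (9082 is the host port mapped to 8082 in the docker run above)
curl -s http://localhost:9082/router/api/v1/system/health
# the UI itself is also served through the router port, not 8081
curl -sI http://localhost:9082/ui/
If the health endpoint reports services as unavailable, the access/artifactory services may still be starting or failing, which would also be consistent with the 404s in router-request.log.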

Related

Getting HTTP ERROR: 404 for Jenkins after forwarding port with public IP

I have Jenkins running locally on port 8081 on a Linux machine that is set up in the office.
I have a public IP that I am trying to use to make Jenkins publicly available.
I have entered the public IP with the port in Manage Jenkins -> Configure System -> Jenkins
URL, like: http://182.156.xxx.xx:8081/
Now if I browse to http://182.156.xxx.xx:8081/, it gives me an HTTP 404 error (screenshot attached).
Note: I have set up Jenkins on Ubuntu with the commands below:
wget -q -O - https://pkg.jenkins.io/debian-stable/jenkins.io.key | sudo apt-key add -
sudo sh -c 'echo deb https://pkg.jenkins.io/debian-stable binary/ >
/etc/apt/sources.list.d/jenkins.list'
sudo apt-get update
sudo apt-get install jenkins
/etc/default/jenkins file:
# defaults for Jenkins automation server
# pulled in from the init script; makes things easier.
NAME=jenkins
# arguments to pass to java
# Allow graphs etc. to work even when an X server is present
JAVA_ARGS="-Djava.awt.headless=true"
#JAVA_ARGS="-Xmx256m"
# make jenkins listen on IPv4 address
#JAVA_ARGS="-Djava.net.preferIPv4Stack=true"
PIDFILE=/var/run/$NAME/$NAME.pid
# user and group to be invoked as (default to jenkins)
JENKINS_USER=$NAME
JENKINS_GROUP=$NAME
# location of the jenkins war file
JENKINS_WAR=/usr/share/$NAME/$NAME.war
# jenkins home location
JENKINS_HOME=/var/lib/$NAME
# set this to false if you don't want Jenkins to run by itself
# in this set up, you are expected to provide a servlet container
# to host jenkins.
RUN_STANDALONE=true
# log location. this may be a syslog facility.priority
JENKINS_LOG=/var/log/$NAME/$NAME.log
#JENKINS_LOG=daemon.info
# Whether to enable web access logging or not.
# Set to "yes" to enable logging to /var/log/$NAME/access_log
JENKINS_ENABLE_ACCESS_LOG="no"
# OS LIMITS SETUP
# comment this out to observe /etc/security/limits.conf
# this is on by default because http://github.com/jenkinsci/jenkins/commit/2fb288474e980d0e7ff9c4a3b768874835a3e92e
# reported that Ubuntu's PAM configuration doesn't include pam_limits.so, and as a result the # of file
# descriptors are forced to 1024 regardless of /etc/security/limits.conf
MAXOPENFILES=8192
# set the umask to control permission bits of files that Jenkins creates.
# 027 makes files read-only for group and inaccessible for others, which some security sensitive users
# might consider benefitial, especially if Jenkins runs in a box that's used for multiple purposes.
# Beware that 027 permission would interfere with sudo scripts that run on the master (JENKINS-25065.)
#
# Note also that the particularly sensitive part of $JENKINS_HOME (such as credentials) are always
# written without 'others' access. So the umask values only affect job configuration, build records,
# that sort of things.
#
# If commented out, the value from the OS is inherited, which is normally 022 (as of Ubuntu 12.04,
# by default umask comes from pam_umask(8) and /etc/login.defs
# UMASK=027
# port for HTTP connector (default 8080; disable with -1)
HTTP_PORT=8081
# servlet context, important if you want to use apache proxying
PREFIX=/$NAME
# arguments to pass to jenkins.
# --javahome=$JAVA_HOME
# --httpListenAddress=$HTTP_HOST (default 0.0.0.0)
# --httpPort=$HTTP_PORT (default 8080; disable with -1)
# --httpsPort=$HTTP_PORT
# --argumentsRealm.passwd.$ADMIN_USER=[password]
# --argumentsRealm.roles.$ADMIN_USER=admin
# --webroot=~/.jenkins/war
# --prefix=$PREFIX
JENKINS_ARGS="--webroot=/var/cache/$NAME/war --httpPort=$HTTP_PORT"
In this Jenkins file, I have only changed the HTTP port from 8080 to 8081, because another Jenkins instance is already running on port 8080 with the same public IP.
Jenkins version : 2.289.2
Java version : 8
Ubuntu version : 20.04
jenkins_error_screenshot
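A quick check that can narrow this down (a sanity check, not a confirmed fix): see whether Jenkins answers on port 8081 from the office machine itself, so the Jenkins configuration and the public-IP port forwarding can be ruled in or out separately:
# run these on the Linux machine that hosts Jenkins
curl -I http://localhost:8081/
curl -I http://localhost:8081/login
If these respond locally but the public IP still returns 404, the problem is more likely in the forwarding/proxy layer than in Jenkins itself.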

Certbot failing acme-challenge (connection refused)

I'm trying to set up a Django project with docker + nginx following the tutorial Nginx and Let's Encrypt with Docker in Less Than 5 Minutes.
The issue is when I run the script init-letsencrypt.sh I end up with failed challenges.
Here is the content of my script:
#!/bin/bash
if ! [ -x "$(command -v docker-compose)" ]; then
echo 'Error: docker-compose is not installed.' >&2
exit 1
fi
domains=(xxxx.yyyy.net www.xxxx.yyyy.net)
rsa_key_size=4096
data_path="./data/certbot"
email="myemail#example.com" # Adding a valid address is strongly recommended
staging=1 # Set to 1 if you're testing your setup to avoid hitting request limits
if [ -d "$data_path" ]; then
read -p "Existing data found for $domains. Continue and replace existing certificate? (y/N) " decision
if [ "$decision" != "Y" ] && [ "$decision" != "y" ]; then
exit
fi
fi
if [ ! -e "$data_path/conf/options-ssl-nginx.conf" ] || [ ! -e "$data_path/conf/ssl-dhparams.pem" ]; then
echo "### Downloading recommended TLS parameters ..."
mkdir -p "$data_path/conf/"
curl -s https://raw.githubusercontent.com/certbot/certbot/master/certbot-nginx/certbot_nginx/_internal/tls_configs/options-ssl-nginx.conf > "$data_path/conf/options-ssl-nginx.conf"
curl -s https://raw.githubusercontent.com/certbot/certbot/master/certbot/certbot/ssl-dhparams.pem > "$data_path/conf/ssl-dhparams.pem"
echo
fi
echo "### Creating dummy certificate for $domains ..."
path="/etc/letsencrypt/live/$domains"
mkdir -p "$data_path/conf/live/$domains"
docker-compose -f docker-compose-deploy.yml run --rm --entrypoint "\
openssl req -x509 -nodes -newkey rsa:$rsa_key_size -days 1\
-keyout '$path/privkey.pem' \
-out '$path/fullchain.pem' \
-subj '/CN=localhost'" certbot
echo
echo "### Starting nginx ..."
docker-compose -f docker-compose-deploy.yml up --force-recreate -d proxy
echo
echo "### Deleting dummy certificate for $domains ..."
docker-compose -f docker-compose-deploy.yml run --rm --entrypoint "\
rm -Rf /etc/letsencrypt/live/$domains && \
rm -Rf /etc/letsencrypt/archive/$domains && \
rm -Rf /etc/letsencrypt/renewal/$domains.conf" certbot
echo
echo "### Requesting Let's Encrypt certificate for $domains ..."
#Join $domains to -d args
domain_args=""
for domain in "${domains[@]}"; do
domain_args="$domain_args -d $domain"
done
# Select appropriate email arg
case "$email" in
"") email_arg="--register-unsafely-without-email" ;;
*) email_arg="--email $email" ;;
esac
# Enable staging mode if needed
if [ $staging != "0" ]; then staging_arg="--staging"; fi
docker-compose -f docker-compose-deploy.yml run --rm --entrypoint "\
certbot -v certonly --webroot -w /var/www/certbot \
$staging_arg \
$email_arg \
$domain_args \
--rsa-key-size $rsa_key_size \
--agree-tos \
--force-renewal" certbot
echo
echo "### Reloading nginx ..."
docker-compose -f docker-compose-deploy.yml exec proxy nginx -s reload
And my nginx configuration file:
server {
    listen 80;
    server_name xxxx.yyyy.net;

    location ^~ /.well-known/acme-challenge/ {
        root /var/www/certbot;
    }

    location / {
        return 301 https://$server_name$request_uri;
    }
}

server {
    listen 443 ssl;
    server_name xxxx.yyyy.net;

    ssl_certificate /etc/letsencrypt/live/xxxx.yyyy.net/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/xxxx.yyyy.net/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;

    location /static {
        alias /vol/static;
    }

    location / {
        uwsgi_pass web:8000;
        include /etc/nginx/uwsgi_params;
    }
}
The output of the part that fails:
Requesting a certificate for xxxx.yyyy.net and www.xxxx.yyyy.net
Performing the following challenges:
http-01 challenge for xxxx.yyyy.net
http-01 challenge for www.xxxx.yyyy.net
Using the webroot path /var/www/certbot for all unmatched domains.
Waiting for verification...
Challenge failed for domain xxxx.yyyy.net
Challenge failed for domain www.xxxx.yyyy.net
http-01 challenge for xxxx.yyyy.net
http-01 challenge for www.xxxx.yyyy.net
Certbot failed to authenticate some domains (authenticator: webroot). The Certificate Authority reported these problems:
Domain: xxxx.yyyy.net
Type: connection
Detail: Fetching http://xxxx.yyyy.net/.well-known/acme-challenge/XJw9w39lRSSbPf-4tb45RLtTnSbjlUEi1f0Cqwsmt-8: Connection refused
Domain: www.xxxx.yyyy.net
Type: connection
Detail: Fetching http://www.xxxx.yyyy.net/.well-known/acme-challenge/b47s4WJARyOTS63oFkaji2nP7oOhiLx5hHp4kO9dCGI: Connection refused
Hint: The Certificate Authority failed to download the temporary challenge files created by Certbot. Ensure that the listed domains serve their content from the provided --webroot-path/-w and that files created there can be downloaded from the internet.
Cleaning up challenges
Some challenges have failed.
Ask for help or search for solutions at https://community.letsencrypt.org. See the logfile /var/log/letsencrypt/letsencrypt.log or re-run Certbot with -v for more details.
ERROR: 1
One of the comments said:
But there's no further explanation as to how to solve it.
Check the certbot commit
The problem is the nginx configuration file. The container fails to start up correctly because of missing certificate files. I commented out the SSL server portion, rebuilt the image and executed the script again. Everything worked out just fine. After the certificates were generated I just uncommented the SSL configuration, rebuilt the image and composed up the services.
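For reference, the temporary HTTP-only configuration might look like this (a sketch based on the config in the question, keeping only the ACME webroot location until the first certificate exists):
server {
    listen 80;
    server_name xxxx.yyyy.net;

    location ^~ /.well-known/acme-challenge/ {
        root /var/www/certbot;
    }

    # the HTTPS redirect and the whole 'listen 443 ssl' server block stay
    # commented out until the first certificate has been issued
}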
Had the same issue;
The solution was ensuring I defined the volume blocks in both the nginx and certbot services correctly.
# ... other services
nginx:
  container_name: nginx
  image: nginx:1.13
  ports:
    - "80:80"
    - "443:443"
  volumes:
    - ./config/nginx/conf.d:/etc/nginx/conf.d
    - ./data/certbot/conf:/etc/letsencrypt
    - ./data/certbot/www:/var/www/certbot
certbot:
  container_name: certbot
  image: certbot/certbot
  volumes:
    - ./data/certbot/conf:/etc/letsencrypt
    - ./data/certbot/www:/var/www/certbot
Also if you are using EC2 as your cloud server don't forget to add inbound rules for ports 80 and 443.
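If a host firewall is in play (ufw on Ubuntu, for example), those ports need to be open there too; on EC2 the equivalent is a security-group inbound rule. A quick sketch for ufw:
# allow HTTP (needed for the ACME challenge) and HTTPS
sudo ufw allow 80
sudo ufw allow 443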
A More Beginner-friendly Version!
I can confirm that the first answer that was posted (remove all lines regarding SSL certificate registration/HTTPS redirection when first running the init-letsencrypt.sh) works perfectly!
The lack of documentation is really annoying on this one, and I had to find the answer deep in the community section. Even for someone whose first language isn't English this answer would be really difficult to find. I wish they documented more on this matter. :(
So here are some of the steps that you have to follow to resolve this issue...
Basically, you have to remove all the HTTPS/SSL-related stuff from both the docker-compose.yml and the nginx.conf / nginx/app.conf file.
Then run the init-letsencrypt.sh script.
Then add the HTTPS/SSL-related stuff back to both the docker-compose.yml and the nginx.conf / nginx/app.conf file. (If you're on Git, just revert your commits.)
Then run docker-compose up -d --build, and run the init-letsencrypt.sh script again.
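Put together as commands, the sequence might look like this (a sketch, using the script name from the question):
# 1. comment out the HTTPS/SSL parts of docker-compose.yml and nginx.conf / nginx/app.conf
# 2. obtain the first certificates
chmod +x init-letsencrypt.sh
sudo ./init-letsencrypt.sh
# 3. restore the HTTPS/SSL configuration (e.g. revert the commits if the project is under Git)
# 4. rebuild, restart, and run the script once more
docker-compose up -d --build
sudo ./init-letsencrypt.sh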
Hope this helps, and wish y'all the best of luck!!
P/S: The back-end stack I used was Flask + Celery (Allows Flask to Run Heavy Tasks Asynchronously) + Redis (A Bridge/Middleman Between Flask and Celery) + NGINX + Certbot all running inside individual docker containers, chained using docker-compose. I deployed it on a DigitalOcean Droplet VPS. (VPS is essentially a computer OS that runs on the internet, 24/7)
For newbies, Docker: Think of Python's virtualenv or Node.js's localized node_modules but for OS-level/C-based dependencies. Like those that can be only installed through package managers such as Linux's apt-get install, macOS's brew install, or Windows's choco install.
Docker Compose: e.g. The client and the server may have different OS-level dependencies and you want to separate them so they don't conflict with each other. You can only allow certain communications between by "chaining" them through docker-compose.
What's NGINX? It's a reverse-proxy solution; TLDR: you can connect the domain/URL you purchased and direct it to your web app. Let's Encrypt allows the server to have that green chain lock thing next to your address for secure communication.
Also important thing to note: Do NOT install NGINX or Redis OUTSIDE of the Docker container on the Linux terminal! That will cause conflicts (ports 443 and 80 already being occupied). 443 is for HTTPS, 80 is for HTTP.
These are the tutorial I used for setting up my tech stack:
https://testdriven.io/blog/dockerizing-flask-with-postgres-gunicorn-and-nginx/
https://pentacent.medium.com/nginx-and-lets-encrypt-with-docker-in-less-than-5-minutes-b4b8a60d3a71
I can also share my docker-compose.yml file below for your reference:
version: '3.8'

services:
  web:
    build: .
    image: web
    container_name: web
    command: gunicorn --worker-class=gevent --worker-connections=1000 --workers=5 api:app --bind 0.0.0.0:5000
    volumes:
      - .:/usr/src/app
    environment:
      - CELERY_BROKER_URL=redis://redis:6379/0
      - CELERY_RESULT_BACKEND=redis://redis:6379/0
    depends_on:
      - redis
    expose:
      - 5000

  worker:
    build: .
    command: celery --app tasks.celery worker --loglevel=info
    volumes:
      - .:/usr/src/app
    environment:
      - CELERY_BROKER_URL=redis://redis:6379/0
      - CELERY_RESULT_BACKEND=redis://redis:6379/0
    depends_on:
      - web
      - redis

  nginx:
    image: nginx:1.15-alpine
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./server/nginx:/etc/nginx/conf.d
      - ./server/certbot/conf:/etc/letsencrypt
      - ./server/certbot/www:/var/www/certbot
    command: "/bin/sh -c 'while :; do sleep 6h & wait $${!}; nginx -s reload; done & nginx -g \"daemon off;\"'"
    depends_on:
      - web

  certbot:
    image: certbot/certbot
    volumes:
      - ./server/certbot/conf:/etc/letsencrypt
      - ./server/certbot/www:/var/www/certbot
    entrypoint: "/bin/sh -c 'trap exit TERM; while :; do certbot renew; sleep 12h & wait $${!}; done;'"

  redis:
    image: redis:6-alpine
    restart: always
    ports:
      - 6379:6379
    # HOW TO SET REDIS PASSWORD VIA ENVIRONMENT VARIABLE
    # https://stackoverflow.com/questions/68461172/docker-compose-redis-password-via-environment-variable

  dashboard:
    build: .
    command: celery --app tasks.celery flower --port=5555 --broker=redis://redis:6379/0
    ports:
      - 5556:5555
    environment:
      - CELERY_BROKER_URL=redis://redis:6379/0
      - CELERY_RESULT_BACKEND=redis://redis:6379/0
    depends_on:
      - web
      - redis
      - worker
Also sharing my Dockerfile JUST IN CASE,
# FOR FRONT-END DEPLOYMENT... (REACT)
FROM node:16-alpine as build-step
WORKDIR /app
ENV PATH /app/web/node_modules/.bin:$PATH
COPY web ./web
WORKDIR /app/web
RUN yarn install
RUN yarn build
# FOR BACK-END DEPLOYMENT... (FLASK)
FROM python:3.10.4-slim
WORKDIR /
# Don't forget "--from"! It acts as a bridge that connects two separate stages
COPY --from=build-step app ./app
WORKDIR /app
RUN apt-get update && apt-get install -y python3-pip python3-dev mesa-utils libgl1-mesa-glx libglib2.0-0 build-essential libssl-dev libffi-dev redis-server
COPY server ./server
WORKDIR /app/server
RUN pip3 install -r ./requirements.txt
# Pretty much pass everything in the root folder except for the client folder, as we do NOT want to overwrite the pre-generated client folder that is already in the ./app folder
# THIS IS CALLED MULTI-STAGE BUILDING IN DOCKER
EXPOSE 5000
All the notes I made while resolving this problem:
'''
TIPS & TRICKS
-------------
UPDATED ON: 2023-02-11
LAST EDITED BY:
WONMO "JOHN" SEONG,
LEAD DEV. AND THE CEO OF HAVIT
----------------------------------------------
HOW TO INSTALL DOCKER-COMPOSE ON DIGITALOCEAN VPS:
https://www.digitalocean.com/community/tutorials/how-to-install-and-use-docker-compose-on-ubuntu-22-04
DOCKERIZE FLASK + CELERY + REDIS APPLICATION WITH DOCKER-COMPOSE:
https://nickjanetakis.com/blog/dockerize-a-flask-celery-and-redis-application-with-docker-compose
https://testdriven.io/blog/flask-and-celery/ <-- PRIMARILY USED THIS TUTORIAL
CELERY VS. GUNICORN WORKERS:
https://stackoverflow.com/questions/24317917/difference-between-celery-and-gunicorn-workers
1. Gunicorn solves concurrency of serving HTTP requests - this is "online" code where each request triggers a Django view, which returns a response. Any code that runs in a view will increase the time it takes to get a response to the user, making the website seem slow. So long running tasks should not go in Django views for that reason.
2. Celery is for running code "offline", where you don't need to return an HTTP response to a user. A Celery task might be triggered by some code inside a Django view, but it could also be triggered by another Celery task, or run on a schedule. Celery uses the model of a worker pulling tasks off of a queue, there are a few Django compatible task frameworks that do this. I give a write up of this architecture here.
CELERY, GUNICORN, AND SUPERVISOR:
https://medium.com/sightwave-software/setting-up-nginx-gunicorn-celery-redis-supervisor-and-postgres-with-django-to-run-your-python-73c8a1c8c1ba
DEPLOY GITHUB REPO ON DIGITALOCEAN VPS USING SSH KEYS:
https://medium.com/swlh/how-to-deploy-your-application-to-digital-ocean-using-github-actions-and-save-up-on-ci-cd-costs-74b7315facc2
COMMANDS TO RUN ON VPS TO CLONE GITHUB REPO (WORKS ON BOTH PRIVATE AND PUBLIC REPOS):
1. Login as root
2. Set up your credentials (GitHub SSH-related) and run the following commands:
- apt-get update
- apt-get install git
- mkdir ~/github && cd ~/github
- git clone git@github.com:wonmor/HAVIT-Central.git
3. To get the latest changes, run git fetch origin
HOW TO RUN DOCKER-COMPOSE ON VPS:
https://www.digitalocean.com/community/tutorials/how-to-install-and-use-docker-compose-on-ubuntu-22-04
1. Login as root
2. Run the following commands:
- cd ~/github/HAVIT-Central
- docker compose up --build -d // builds and runs the containers in detached mode
OR docker compose up --build -d --remove-orphans // builds and runs the containers in detached mode and removes orphan containers
- docker compose ps // lists all running containers in Docker engine.
3. To stop the containers, run:
- docker-compose down
HOW TO SET UP NGINX ON UBUNTU VPS TO PROXY PASS TO GUNICORN ON DIGITALOCEAN:
https://www.datanovia.com/en/lessons/digitalocean-initial-ubuntu-server-setup/
https://www.digitalocean.com/community/tutorials/how-to-secure-nginx-with-let-s-encrypt-on-ubuntu-22-04
https://www.datanovia.com/en/lessons/digitalocean-how-to-install-nginx-and-ssl/
CAPROVER CLEAN/REMOVE ALL PREVIOUS DEPLOYMENTS:
docker container prune --force
docker image prune --all
FORCE MERGE USING GIT:
git reset --hard origin/main
NGINX - REDIRECT TO DOCKER CONTAINER:
https://gilyes.com/docker-nginx-letsencrypt/
https://github.com/nginx-proxy/acme-companion
https://github.com/nginx-proxy/acme-companion/wiki/Docker-Compose
https://github.com/evertramos/nginx-proxy-automation
https://github.com/buchdag/letsencrypt-nginx-proxy-companion-compose
https://testdriven.io/blog/dockerizing-flask-with-postgres-gunicorn-and-nginx/
https://pentacent.medium.com/nginx-and-lets-encrypt-with-docker-in-less-than-5-minutes-b4b8a60d3a71 <--- THIS IS THE BEST TUTORIAL
Simply run docker-compose up and enjoy your HTTPS-secured website or app.
Then run chmod +x init-letsencrypt.sh and sudo ./init-letsencrypt.sh.
VVIP: HOW TO RUN THIS APP ON VPS:
1. Login as root, run sudo chmod +x init_letsencrypt.sh
2. Now for the bit… that tends to go wrong. Navigate into your remote project folder, and run the initialization script (Run ./<Script-Name>.sh on Terminal). First, docker will build the images, and then run through the script step-by-step as described above. Now, this worked first time for me while putting together the tutorial, but in the past it has taken me hours to get everything set up correctly. The main problem was usually the locations of files: the script would save it to some directory, which was mapped to a volume that nginx was incorrectly mapped to, and so on. If you end up needing to debug, you can run the commands in the script yourself, substituting variables as you go. Pay close attention to the logs — nginx is often quite good at telling you what it’s missing.
3. If all goes to plan, you’ll see a nice little printout from Lets Encrypt and Certbot saying “Congratulations” and your script will exit successfully.
HOW TO OPEN/ALLOW PORTS ON DIGITALOCEAN:
https://www.digitalocean.com/community/tutorials/opening-a-port-on-linux
sudo ufw allow <PORT_NUMBER>
WHAT ARE DNS RECORDS?
https://docs.digitalocean.com/products/networking/dns/how-to/manage-records/
PS: The higher the TTL, the longer it takes for a DNS record change to propagate.
But the record will be cached for longer, which means there will be less load on the DNS server.
TIP: MAKE SURE YOU SET UP THE CUSTOM NAMESPACES FOR DIGITALOCEAN ON GOOGLE DOMAINS:
https://docs.digitalocean.com/tutorials/dns-registrars/
DOCKER SWARM VS. DOCKER COMPOSE:
The difference between Docker Swarm and Docker Compose is that Compose is used for configuring multiple containers in the same host. Docker Swarm is different in that it is a container orchestration tool. This means that Docker Swarm lets you connect containers to multiple hosts similar to Kubernetes.
Cannot load certificate /etc/letsencrypt/live/havit.space/fullchain.pem: BIO_new_file() failed (SSL: error:02001002:system library:fopen:No such file or directory FIX:
https://community.letsencrypt.org/t/lets-encrypt-with-nginx-i-got-error-ssl-error-02001002-system-library-fopen-no-such-file-or-directory-fopen-etc-letsencrypt-live-xxx-com-fullchain-pem-r/20990/5
RUNNING MULTIPLE DOCKER COMPOSE FILES:
https://stackoverflow.com/questions/43957259/run-multiple-docker-compose
nginx: [emerg] open() "/etc/letsencrypt/options-ssl-nginx.conf" failed (2: No such file or directory) in /etc/nginx/conf.d/app.conf:20 FIX:
https://stackoverflow.com/questions/64940480/nginx-letsencrypt-error-etc-letsencrypt-options-ssl-nginx-conf-no-such-file-o
VVVIP: RESOLVE NGINX + DOCKER + LETSENCRYPT ISSUES!
https://stackoverflow.com/questions/68449947/certbot-failing-acme-challenge-connection-refused
Basically gotta remove all the HTTPS SSL-related stuff from both the docker-compose.yml and the nginx.conf file.
Then run the init-letsencrypt.sh script. Then add the HTTPS SSL-related stuff back to both the docker-compose.yml and the nginx.conf file.
Then run docker-compose up -d --build. Then run the init-letsencrypt.sh script again.
'''

how do I set up NiFi on HTTPS in a container

I want to secure my NiFi with HTTPS using the tls-toolkit in standalone mode inside a Docker container, on a remote virtual machine running RHEL 8 (so it's actually Podman instead of Docker, but with the podman-docker module I can treat Podman like Docker). I want to use port 19443 for now, but eventually I will be using 9443.
I have created my simple testing Dockerfile:
FROM apache/nifi:latest
WORKDIR /opt/nifi/nifi-current
RUN /opt/nifi/nifi-toolkit-current/bin/tls-toolkit.sh standalone -n "localhost" -C "CN=user_1, OU=NiFi"
RUN ls localhost/
RUN cp -fv /opt/nifi/nifi-current/localhost/* /opt/nifi/nifi-current/conf/ # <- first problem, see build
RUN ls conf/
RUN /opt/nifi/nifi-current/bin/nifi.sh start
EXPOSE 19443
USER nifi
HTTP Works
I have pulled the apache/nifi image and I am using the command:
docker run --name my_nifi -p 19443:19443 -d -e NIFI_WEB_HTTP_PORT='19443' my_nifi
where the last my_nifi is the tag of the image that I have created from the Dockerfile.
With this container I can connect to
http://<the remote IP address>:19443/nifi
and it works, showing the NiFi page.
Dockerfile build
docker build -t my_nifi --no-cache .
Emulate Docker CLI using podman. Create /etc/containers/nodocker to quiet msg.
STEP 1: FROM apache/nifi:latest
STEP 2: WORKDIR /opt/nifi/nifi-current
c6788497ae98d998a561aab162f1cded42f17026abe3745e61021826858ff6db
STEP 3: RUN /opt/nifi/nifi-toolkit-current/bin/tls-toolkit.sh standalone -n "localhost" -C "CN=user_1, OU=NiFi"
2020/12/30 08:38:15 INFO [main] org.apache.nifi.toolkit.tls.standalone.TlsToolkitStandaloneCommandLine: No nifiPropertiesFile specified, using embedded one.
2020/12/30 08:38:16 INFO [main] org.apache.nifi.toolkit.tls.standalone.TlsToolkitStandalone: Running standalone certificate generation with output directory ../nifi-current
2020/12/30 08:38:16 INFO [main] org.apache.nifi.toolkit.tls.standalone.TlsToolkitStandalone: Generated new CA certificate ../nifi-current/nifi-cert.pem and key ../nifi-current/nifi-key.key
2020/12/30 08:38:16 INFO [main] org.apache.nifi.toolkit.tls.standalone.TlsToolkitStandalone: Writing new ssl configuration to ../nifi-current/localhost
2020/12/30 08:38:16 INFO [main] org.apache.nifi.toolkit.tls.standalone.TlsToolkitStandalone: Successfully generated TLS configuration for localhost 1 in ../nifi-current/localhost
2020/12/30 08:38:16 INFO [main] org.apache.nifi.toolkit.tls.standalone.TlsToolkitStandalone: Generating new client certificate ../nifi-current/CN=user_1_OU=NiFi.p12
2020/12/30 08:38:17 INFO [main] org.apache.nifi.toolkit.tls.standalone.TlsToolkitStandalone: Successfully generated client certificate ../nifi-current/CN=user_1_OU=NiFi.p12
2020/12/30 08:38:17 INFO [main] org.apache.nifi.toolkit.tls.standalone.TlsToolkitStandalone: tls-toolkit standalone completed successfully
0ce5790c026b4650615a6dc8e5745dece2fe6374104825cf4a9ecdc8dfbbdf46
STEP 4: RUN ls localhost/
keystore.jks nifi.properties truststore.jks
85710975c4ed5f1029ad9e7c70b7516e7cf63a9b568e20844d7cf74f8b33f648
STEP 5: RUN cp -fv /opt/nifi/nifi-current/localhost/* /opt/nifi/nifi-current/conf/
'/opt/nifi/nifi-current/localhost/keystore.jks' -> '/opt/nifi/nifi-current/conf/keystore.jks'
'/opt/nifi/nifi-current/localhost/nifi.properties' -> '/opt/nifi/nifi-current/conf/nifi.properties'
'/opt/nifi/nifi-current/localhost/truststore.jks' -> '/opt/nifi/nifi-current/conf/truststore.jks'
a2b99978024840cc4d2702b31f8f2346398673f31ace9d776af112b1aa3d45ac
STEP 6: RUN ls conf/
authorizers.xml login-identity-providers.xml
bootstrap-notification-services.xml nifi.properties
bootstrap.conf state-management.xml
logback.xml zookeeper.properties
0adb1c26826936d08f7edd6df604a0689c23cb9e3db47be06f1c9b4ce935a50d
STEP 7: RUN /opt/nifi/nifi-current/bin/nifi.sh start
Java home: /usr/local/openjdk-8
NiFi home: /opt/nifi/nifi-current
Bootstrap Config File: /opt/nifi/nifi-current/conf/bootstrap.conf
7146d8dc7f891643f42dfd2efef446cedf7b98cf2ecad90ebf6b5de335408b4e
STEP 8: EXPOSE 19443
72f941725ac0c9a66d2c2e0a21286b6db52b3a039c721dccd70234f75dfdd9fe
STEP 9: USER nifi
STEP 10: COMMIT my_nifi
77cf9574d75af00aeed7c6dbacbb853badad82e12f9f448a94f6162df2c1df44
In STEP 3 I use the NiFi tls-toolkit to create the jks keys and the new nifi.properties file, but:
in STEPS 5-6 I see the problem that even though the cp command
says the files have been copied into the conf/ folder, they are
not there when I list the contents of that folder.
after the build, I ran a new container (docker run --name my_nifi -p 19443:19443 -d my_nifi, and even adding -e NIFI_WEB_HTTPS_PORT='19443' makes no difference) and tried to enter it and manually cp
the files:
keystore.jks
nifi.properties
truststore.jks
into the conf/ folder, and this time they were copied.
But when restarting this second container I get this ERROR:
2020-12-30 08:50:33,022 INFO [main] org.eclipse.jetty.util.log Logging initialized #7671ms to org.eclipse.jetty.util.log.Slf4jLog
2020-12-30 08:50:33,066 WARN [main] org.apache.nifi.web.server.JettyServer Both the HTTP and HTTPS connectors are configured in nifi.properties. Only one of these connectors should be configured. See the NiFi Admin Guide for more details
2020-12-30 08:50:33,066 WARN [main] org.apache.nifi.web.server.JettyServer HTTP connector: http://8eafc1fa77d0:8080
2020-12-30 08:50:33,066 WARN [main] org.apache.nifi.web.server.JettyServer HTTPS connector: https://localhost:9443
2020-12-30 08:50:33,066 ERROR [main] org.apache.nifi.web.server.JettyServer NiFi only supports one mode of HTTP or HTTPS operation, not both simultaneously. Check the nifi.properties file and ensure that either the HTTP hostname and port or the HTTPS hostname and port are empty
2020-12-30 08:50:33,068 WARN [main] org.apache.nifi.web.server.JettyServer Failed to start web server... shutting down.
java.lang.IllegalStateException: Only one of the HTTP and HTTPS connectors can be configured at one time
at org.apache.nifi.web.server.JettyServer.configureConnectors(JettyServer.java:825)
at org.apache.nifi.web.server.JettyServer.<init>(JettyServer.java:178)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.nifi.NiFi.<init>(NiFi.java:151)
at org.apache.nifi.NiFi.<init>(NiFi.java:72)
at org.apache.nifi.NiFi.main(NiFi.java:301)
2020-12-30 08:50:33,068 INFO [Thread-1] org.apache.nifi.NiFi Initiating shutdown of Jetty web server...
2020-12-30 08:50:33,069 INFO [Thread-1] org.apache.nifi.NiFi Jetty web server shutdown completed (nicely or otherwise).
But the nifi.properties that was copied is the one below, and it does not have the HTTP values filled in:
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Core Properties #
nifi.flow.configuration.file=./conf/flow.xml.gz
nifi.flow.configuration.archive.enabled=true
nifi.flow.configuration.archive.dir=./conf/archive/
nifi.flow.configuration.archive.max.time=30 days
nifi.flow.configuration.archive.max.storage=500 MB
nifi.flow.configuration.archive.max.count=
nifi.flowcontroller.autoResumeState=true
nifi.flowcontroller.graceful.shutdown.period=10 sec
nifi.flowservice.writedelay.interval=500 ms
nifi.administrative.yield.duration=30 sec
# If a component has no work to do (is "bored"), how long should we wait before checking again for work?
nifi.bored.yield.duration=10 millis
nifi.queue.backpressure.count=10000
nifi.queue.backpressure.size=1 GB
nifi.authorizer.configuration.file=./conf/authorizers.xml
nifi.login.identity.provider.configuration.file=./conf/login-identity-providers.xml
nifi.templates.directory=./conf/templates
nifi.ui.banner.text=
nifi.ui.autorefresh.interval=30 sec
nifi.nar.library.directory=./lib
nifi.nar.library.autoload.directory=./extensions
nifi.nar.working.directory=./work/nar/
nifi.documentation.working.directory=./work/docs/components
####################
# State Management #
####################
nifi.state.management.configuration.file=./conf/state-management.xml
# The ID of the local state provider
nifi.state.management.provider.local=local-provider
# The ID of the cluster-wide state provider. This will be ignored if NiFi is not clustered but must be populated if running in a cluster.
nifi.state.management.provider.cluster=zk-provider
# Specifies whether or not this instance of NiFi should run an embedded ZooKeeper server
nifi.state.management.embedded.zookeeper.start=false
# Properties file that provides the ZooKeeper properties to use if <nifi.state.management.embedded.zookeeper.start> is set to true
nifi.state.management.embedded.zookeeper.properties=./conf/zookeeper.properties
# H2 Settings
nifi.database.directory=./database_repository
nifi.h2.url.append=;LOCK_TIMEOUT=25000;WRITE_DELAY=0;AUTO_SERVER=FALSE
# FlowFile Repository
nifi.flowfile.repository.implementation=org.apache.nifi.controller.repository.WriteAheadFlowFileRepository
nifi.flowfile.repository.wal.implementation=org.apache.nifi.wali.SequentialAccessWriteAheadLog
nifi.flowfile.repository.directory=./flowfile_repository
nifi.flowfile.repository.checkpoint.interval=20 secs
nifi.flowfile.repository.always.sync=false
nifi.flowfile.repository.encryption.key.provider.implementation=
nifi.flowfile.repository.encryption.key.provider.location=
nifi.flowfile.repository.encryption.key.id=
nifi.flowfile.repository.encryption.key=
nifi.flowfile.repository.retain.orphaned.flowfiles=true
nifi.swap.manager.implementation=org.apache.nifi.controller.FileSystemSwapManager
nifi.queue.swap.threshold=20000
# Content Repository
nifi.content.repository.implementation=org.apache.nifi.controller.repository.FileSystemRepository
nifi.content.claim.max.appendable.size=1 MB
nifi.content.repository.directory.default=./content_repository
nifi.content.repository.archive.max.retention.period=7 days
nifi.content.repository.archive.max.usage.percentage=50%
nifi.content.repository.archive.enabled=true
nifi.content.repository.always.sync=false
nifi.content.viewer.url=../nifi-content-viewer/
nifi.content.repository.encryption.key.provider.implementation=
nifi.content.repository.encryption.key.provider.location=
nifi.content.repository.encryption.key.id=
nifi.content.repository.encryption.key=
# Provenance Repository Properties
nifi.provenance.repository.implementation=org.apache.nifi.provenance.WriteAheadProvenanceRepository
nifi.provenance.repository.encryption.key.provider.implementation=
nifi.provenance.repository.encryption.key.provider.location=
nifi.provenance.repository.encryption.key.id=
nifi.provenance.repository.encryption.key=
# Persistent Provenance Repository Properties
nifi.provenance.repository.directory.default=./provenance_repository
nifi.provenance.repository.max.storage.time=30 days
nifi.provenance.repository.max.storage.size=10 GB
nifi.provenance.repository.rollover.time=10 mins
nifi.provenance.repository.rollover.size=100 MB
nifi.provenance.repository.query.threads=2
nifi.provenance.repository.index.threads=2
nifi.provenance.repository.compress.on.rollover=true
nifi.provenance.repository.always.sync=false
# Comma-separated list of fields. Fields that are not indexed will not be searchable. Valid fields are:
# EventType, FlowFileUUID, Filename, TransitURI, ProcessorID, AlternateIdentifierURI, Relationship, Details
nifi.provenance.repository.indexed.fields=EventType, FlowFileUUID, Filename, ProcessorID, Relationship
# FlowFile Attributes that should be indexed and made searchable. Some examples to consider are filename, uuid, mime.type
nifi.provenance.repository.indexed.attributes=
# Large values for the shard size will result in more Java heap usage when searching the Provenance Repository
# but should provide better performance
nifi.provenance.repository.index.shard.size=500 MB
# Indicates the maximum length that a FlowFile attribute can be when retrieving a Provenance Event from
# the repository. If the length of any attribute exceeds this value, it will be truncated when the event is retrieved.
nifi.provenance.repository.max.attribute.length=65536
nifi.provenance.repository.concurrent.merge.threads=2
# Volatile Provenance Repository Properties
nifi.provenance.repository.buffer.size=100000
# Component Status Repository
nifi.components.status.repository.implementation=org.apache.nifi.controller.status.history.VolatileComponentStatusRepository
nifi.components.status.repository.buffer.size=1440
nifi.components.status.snapshot.frequency=1 min
# Site to Site properties
nifi.remote.input.host=localhost
nifi.remote.input.secure=true
nifi.remote.input.socket.port=10443
nifi.remote.input.http.enabled=true
nifi.remote.input.http.transaction.ttl=30 sec
nifi.remote.contents.cache.expiration=30 secs
# web properties #
nifi.web.http.host=
nifi.web.http.port=
nifi.web.http.network.interface.default=
nifi.web.https.host=localhost
nifi.web.https.port=9443
nifi.web.https.network.interface.default=
nifi.web.jetty.working.directory=./work/jetty
nifi.web.jetty.threads=200
nifi.web.max.header.size=16 KB
nifi.web.proxy.context.path=
nifi.web.proxy.host=
nifi.web.max.content.size=
nifi.web.max.requests.per.second=30000
nifi.web.should.send.server.version=true
# security properties #
nifi.sensitive.props.key=
nifi.sensitive.props.key.protected=
nifi.sensitive.props.algorithm=PBEWITHMD5AND256BITAES-CBC-OPENSSL
nifi.sensitive.props.provider=BC
nifi.sensitive.props.additional.keys=
nifi.security.keystore=./conf/keystore.jks
nifi.security.keystoreType=jks
nifi.security.keystorePasswd=U/lgE52hjoAhCa0w9KD2XWZeVp1gyNPT5sAY9I0Kyng
nifi.security.keyPasswd=U/lgE52hjoAhCa0w9KD2XWZeVp1gyNPT5sAY9I0Kyng
nifi.security.truststore=./conf/truststore.jks
nifi.security.truststoreType=jks
nifi.security.truststorePasswd=EvHdoccmVKi8dQj51ohiOIYIuR/J/SaMWb176qBIVrY
nifi.security.user.authorizer=managed-authorizer
nifi.security.allow.anonymous.authentication=false
nifi.security.user.login.identity.provider=
nifi.security.ocsp.responder.url=
nifi.security.ocsp.responder.certificate=
# OpenId Connect SSO Properties #
nifi.security.user.oidc.discovery.url=
nifi.security.user.oidc.connect.timeout=5 secs
nifi.security.user.oidc.read.timeout=5 secs
nifi.security.user.oidc.client.id=
nifi.security.user.oidc.client.secret=
nifi.security.user.oidc.preferred.jwsalgorithm=
nifi.security.user.oidc.additional.scopes=
nifi.security.user.oidc.claim.identifying.user=
# Apache Knox SSO Properties #
nifi.security.user.knox.url=
nifi.security.user.knox.publicKey=
nifi.security.user.knox.cookieName=hadoop-jwt
nifi.security.user.knox.audiences=
# Identity Mapping Properties #
# These properties allow normalizing user identities such that identities coming from different identity providers
# (certificates, LDAP, Kerberos) can be treated the same internally in NiFi. The following example demonstrates normalizing
# DNs from certificates and principals from Kerberos into a common identity string:
#
# nifi.security.identity.mapping.pattern.dn=^CN=(.*?), OU=(.*?), O=(.*?), L=(.*?), ST=(.*?), C=(.*?)$
# nifi.security.identity.mapping.value.dn=$1#$2
# nifi.security.identity.mapping.transform.dn=NONE
# nifi.security.identity.mapping.pattern.kerb=^(.*?)/instance#(.*?)$
# nifi.security.identity.mapping.value.kerb=$1#$2
# nifi.security.identity.mapping.transform.kerb=UPPER
# Group Mapping Properties #
# These properties allow normalizing group names coming from external sources like LDAP. The following example
# lowercases any group name.
#
# nifi.security.group.mapping.pattern.anygroup=^(.*)$
# nifi.security.group.mapping.value.anygroup=$1
# nifi.security.group.mapping.transform.anygroup=LOWER
# cluster common properties (all nodes must have same values) #
nifi.cluster.protocol.heartbeat.interval=5 sec
nifi.cluster.protocol.heartbeat.missable.max=8
nifi.cluster.protocol.is.secure=true
# cluster node properties (only configure for cluster nodes) #
nifi.cluster.is.node=false
nifi.cluster.node.address=localhost
nifi.cluster.node.protocol.port=11443
nifi.cluster.node.protocol.threads=10
nifi.cluster.node.protocol.max.threads=50
nifi.cluster.node.event.history.size=25
nifi.cluster.node.connection.timeout=5 sec
nifi.cluster.node.read.timeout=5 sec
nifi.cluster.node.max.concurrent.requests=100
nifi.cluster.firewall.file=
nifi.cluster.flow.election.max.wait.time=5 mins
nifi.cluster.flow.election.max.candidates=
# cluster load balancing properties #
nifi.cluster.load.balance.host=
nifi.cluster.load.balance.port=6342
nifi.cluster.load.balance.connections.per.node=1
nifi.cluster.load.balance.max.thread.count=8
nifi.cluster.load.balance.comms.timeout=30 sec
# zookeeper properties, used for cluster management #
nifi.zookeeper.connect.string=
nifi.zookeeper.connect.timeout=10 secs
nifi.zookeeper.session.timeout=10 secs
nifi.zookeeper.root.node=/nifi
# Zookeeper properties for the authentication scheme used when creating acls on znodes used for cluster management
# Values supported for nifi.zookeeper.auth.type are "default", which will apply world/anyone rights on znodes
# and "sasl" which will give rights to the sasl/kerberos identity used to authenticate the nifi node
# The identity is determined using the value in nifi.kerberos.service.principal and the removeHostFromPrincipal
# and removeRealmFromPrincipal values (which should align with the kerberos.removeHostFromPrincipal and kerberos.removeRealmFromPrincipal
# values configured on the zookeeper server).
nifi.zookeeper.auth.type=
nifi.zookeeper.kerberos.removeHostFromPrincipal=
nifi.zookeeper.kerberos.removeRealmFromPrincipal=
# kerberos #
nifi.kerberos.krb5.file=
# kerberos service principal #
nifi.kerberos.service.principal=
nifi.kerberos.service.keytab.location=
# kerberos spnego principal #
nifi.kerberos.spnego.principal=
nifi.kerberos.spnego.keytab.location=
nifi.kerberos.spnego.authentication.expiration=12 hours
# external properties files for variable registry
# supports a comma delimited list of file locations
nifi.variable.registry.properties=
# analytics properties #
nifi.analytics.predict.enabled=false
nifi.analytics.predict.interval=3 mins
nifi.analytics.query.interval=5 mins
nifi.analytics.connection.model.implementation=org.apache.nifi.controller.status.analytics.models.OrdinaryLeastSquares
nifi.analytics.connection.model.score.name=rSquared
nifi.analytics.connection.model.score.threshold=.90
How do I solve this?
According to the documentation of the nifi image, you should add specific variables to your docker run command if you want to go HTTPS. I will try that, by providing an external keystore and truststore.
docker run --name nifi \
-v /User/dreynolds/certs/localhost:/opt/certs \
-p 8443:8443 \
-e AUTH=tls \
-e KEYSTORE_PATH=/opt/certs/keystore.jks \
-e KEYSTORE_TYPE=JKS \
-e KEYSTORE_PASSWORD=QKZv1hSWAFQYZ+WU1jjF5ank+l4igeOfQRp+OSbkkrs \
-e TRUSTSTORE_PATH=/opt/certs/truststore.jks \
-e TRUSTSTORE_PASSWORD=rHkWR1gDNW3R9hgbeRsT3OM3Ue0zwGtQqcFKJD2EXWE \
-e TRUSTSTORE_TYPE=JKS \
-e INITIAL_ADMIN_IDENTITY='CN=Random User, O=Apache, OU=NiFi, C=US' \
-d \
apache/nifi:latest
You could also try to build the image from scratch (i.e. by downloading NiFi in the Dockerfile, etc...)
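Two diagnostic commands may also help explain what happened in the question (a sketch, using the image and container names from above):
# check whether the base image declares conf/ (among others) as a volume; changes
# made by a RUN step to a declared volume are discarded when the step ends, which
# would explain why the copied keystore/nifi.properties are missing after the build
docker inspect --format '{{json .Config.Volumes}}' apache/nifi:latest
# in a running container, check which web connectors were actually configured;
# having both an HTTP and an HTTPS port set is exactly what the JettyServer error complains about
docker exec my_nifi grep 'nifi.web.http' /opt/nifi/nifi-current/conf/nifi.properties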

Openldap 2.4 within Docker container

I'm setting up an OpenLDAP server (slapd) within a Docker container. I just took the latest CentOS image (tags: latest, centos7, 7)
and then installed the following packages:
openldap-servers-2.4.44-5.el7.x86_64
openldap-2.4.44-5.el7.x86_64
openldap-clients-2.4.44-5.el7.x86_64
openldap-devel-2.4.44-5.el7.x86_64
and then just started the service with /usr/sbin/slapd -F /etc/openldap/slapd.d/
Then I try to add the domain and root user to the LDAP configuration using this LDIF file (db.ldif):
dn: olcDatabase={2}hdb,cn=config
changetype: modify
replace: olcSuffix
olcSuffix: dc=mydomain,dc=com

dn: olcDatabase={2}hdb,cn=config
changetype: modify
replace: olcRootDN
olcRootDN: cn=myadminuser,dc=mydomain,dc=com

dn: olcDatabase={2}hdb,cn=config
changetype: modify
replace: olcRootPW
olcRootPW: {SSHA}blablablabla
Then when I run ldapmodify -Y EXTERNAL -H ldapi:/// -f db.ldif
it throws this error: ldap_sasl_interactive_bind_s: Can't contact LDAP server (-1)
I can see the port is open, because telnet can connect to it, and ldapsearch from another machine actually works: ldapsearch -h $myserver -p $myport -x
And it responds with this:
# extended LDIF
#
# LDAPv3
# base <> (default) with scope subtree
# filter: (objectclass=*)
# requesting: ALL
#
# search result
search: 2
result: 32 No such object
# numResponses: 1
I really don't know what I'm missing.
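One thing worth checking (a sketch, not a confirmed fix): ldapmodify -Y EXTERNAL -H ldapi:/// talks to slapd over the ldapi UNIX socket, and that listener only exists if it was requested when slapd was started. Starting slapd with both listeners would look like:
# ldap:/// keeps the TCP listener, ldapi:/// adds the UNIX socket used by SASL EXTERNAL
/usr/sbin/slapd -F /etc/openldap/slapd.d/ -h "ldap:/// ldapi:///"
That would be consistent with ldapsearch over TCP working from another machine while the local ldapi bind fails.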

How to create different Jenkins2 images, unlocked and with preloaded plugins?

I launch a new Jenkins2 container based on the official Jenkins image.
But it needs the initial setup: the randomly generated unlock string must be entered, the admin user/pass must be set, and then the plugins must be installed.
I want to be able to set these up from the Dockerfile.
I made a list of the plugins that I want installed during the build, but how do I handle the other two?
Basically, I want to be able to create different images, each uniquely configured and ready to be used via a container.
Plugins
Installation of plugins (as per the documentation):
# Dockerfile
USER root
COPY plugins.txt /usr/share/jenkins/plugins.txt
RUN /usr/local/bin/plugins.sh /usr/share/jenkins/plugins.txt
# (...)
USER jenkins
If you wish to generate a plugins.txt based on your current manual Jenkins setup before running the above, run the following:
JENKINS_HOST=<user>:<passwd>@<hostname>:<port>
curl -sSL "http://$JENKINS_HOST/pluginManager/api/xml?depth=1&xpath=/*/*/shortName|/*/*/version&wrapper=plugins" | perl -pe 's/.*?<shortName>([\w-]+).*?<version>([^<]+)()(<\/\w+>)+/\1 \2\n/g'|sed 's/ /:/' > plugins.txt
# plugins.txt (example)
ace-editor:1.1
git-client:2.1.0
workflow-multibranch:2.9.2
script-security:1.24
durable-task:1.12
pam-auth:1.3
credentials:2.1.8
bitbucket:1.1.5
ssh-credentials:1.12
credentials-binding:1.10
mapdb-api:1.0.9.0
workflow-support:2.10
resource-disposer:0.3
workflow-basic-steps:2.3
email-ext:2.52
ws-cleanup:0.32
ssh-slaves:1.11
workflow-job:2.9
docker-commons:1.5
matrix-project:1.7.1
plain-credentials:1.3
workflow-scm-step:2.3
scm-api:1.3
matrix-auth:1.4
icon-shim:2.0.3
ldap:1.13
pipeline-build-step:2.3
subversion:2.7.1
ant:1.4
branch-api:1.11.1
pipeline-input-step:2.5
bouncycastle-api:2.16.0
workflow-cps:2.23
docker-slaves:1.0.5
cloudbees-folder:5.13
pipeline-stage-step:2.2
workflow-api:2.6
pipeline-stage-view:2.2
workflow-aggregator:2.4
github:1.22.4
token-macro:2.0
pipeline-graph-analysis:1.2
authentication-tokens:1.3
handlebars:1.1.1
gradle:1.25
git:3.0.0
external-monitor-job:1.6
structs:1.5
mercurial:1.57
antisamy-markup-formatter:1.5
jquery-detached:1.2.1
mailer:1.18
workflow-cps-global-lib:2.4
windows-slaves:1.2
workflow-step-api:2.5
docker-workflow:1.9
github-branch-source:1.10
pipeline-milestone-step:1.1
git-server:1.7
github-organization-folder:1.5
momentjs:1.1.1
build-timeout:1.17.1
github-api:1.79
workflow-durable-task-step:2.5
pipeline-rest-api:2.2
junit:1.19
display-url-api:0.5
timestamper:1.8.7
Disable Security & Admin user
This can be sorted by passing --env JAVA_OPTS="-Djenkins.install.runSetupWizard=false" to the docker run command.
Example:
docker run -d --name myjenkins -p 8080:8080 -p 50000:50000 --env JAVA_OPTS="-Djenkins.install.runSetupWizard=false" jenkins:latest
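If you want the wizard-skipping option baked into the image itself rather than passed at run time, a minimal Dockerfile sketch (untested, combining it with the plugin step above) could look like:
# Dockerfile (sketch)
FROM jenkins:latest
# skip the unlock/setup wizard on first start
ENV JAVA_OPTS="-Djenkins.install.runSetupWizard=false"
USER root
COPY plugins.txt /usr/share/jenkins/plugins.txt
RUN /usr/local/bin/plugins.sh /usr/share/jenkins/plugins.txt
USER jenkins
Creating the admin user at build time is usually done with a Groovy init script placed under /usr/share/jenkins/ref/init.groovy.d/, which the official image copies into JENKINS_HOME and runs at startup.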
