I am trying to install IBM Cloud Private Community Edition but am struggling with the execution of the sudo docker run command from the installation instructions:
> sudo docker run --net=host -t -e LICENSE=accept -v "$(pwd)":/installer/cluster ibmcom/icp-inception:2.1.0.3 install
When I execute it, it returns the following output with an error message (below):
user@kim:/opt/ibm-cloud-private-ce-2.1.0.3/cluster$ sudo docker run --net=host -t -e LICENSE=accept -v "$(pwd)":/installer/cluster ibmcom/icp-inception:2.1.0.3 install
PLAY [Checking Python interpreter] *********************************************
TASK [Checking Python interpreter] *********************************************
changed: [127.0.0.1]
PLAY [Checking prerequisites] **************************************************
TASK [Gathering Facts] *********************************************************
ok: [127.0.0.1]
TASK [docker-engine-check : Getting Docker engine version] *********************
changed: [127.0.0.1]
TASK [docker-engine-check : Checking docker engine if installed] ***************
changed: [127.0.0.1]
TASK [docker-engine : include] *************************************************
TASK [docker-engine : include] *************************************************
TASK [containerd-engine-check : Getting containerd version] ********************
TASK [containerd-engine-check : Checking cri-containerd if installed] **********
TASK [containerd-engine : include] *********************************************
TASK [containerd-engine : include] *********************************************
TASK [network-check : Checking for the network pre-check file] *****************
ok: [127.0.0.1 -> localhost]
TASK [network-check : include_tasks] *******************************************
included: /installer/playbook/roles/network-check/tasks/calico.yaml for 127.0.0.1
TASK [network-check : Calico Validation - Verifying hostname for lowercase] ****
TASK [network-check : Calico Validation - Initializing interface list to be verified] ***
ok: [127.0.0.1]
TASK [network-check : Calico Validation - Finding Interface when autodetection_method is first-found] ***
TASK [network-check : Calico Validation - Updating regex string to match interfaces to be excluded] ***
TASK [network-check : Calico Validation - Getting list of interfaces to be considered] ***
TASK [network-check : Calico Validation - Excluding default interface if defined] ***
TASK [network-check : Calico Validation - Finding Interface reg-ex when autodetection_method is interface(reg-ex)] ***
TASK [network-check : Calico Validation - Finding Interface when autodetection_method is interface(reg-ex)] ***
TASK [network-check : Calico Validation - Finding Domain/IP when autodetection_method is can-reach] ***
ok: [127.0.0.1]
TASK [network-check : Calico Validation - Finding IP for the Domain when autodetection_method is can-reach] ***
changed: [127.0.0.1]
TASK [network-check : Calico Validation - Finding Interface when autodetection_method is can-reach] ***
changed: [127.0.0.1]
TASK [network-check : Calico Validation - Finding Interface when lo is found] ***
changed: [127.0.0.1]
TASK [network-check : Calico Validation - Finding Interface when autodetection_method is can-reach] ***
ok: [127.0.0.1]
TASK [network-check : Calico Validation - Finding MTU for the detected Interface(s)] ***
fatal: [127.0.0.1]: FAILED! => {"msg": "The conditional check 'hostvars[inventory_hostname]['ansible_'~item]['mtu'] is defined' failed. The error was: error while evaluating conditional (hostvars[inventory_hostname]['ansible_'~item]['mtu'] is defined): 'dict object' has no attribute u'ansible_'\n\nThe error appears to have been in '/installer/playbook/roles/network-check/tasks/calico.yaml': line 86, column 7, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n - name: Calico Validation - Finding MTU for the detected Interface(s)\n ^ here\n"}
NO MORE HOSTS LEFT *************************************************************
NO MORE HOSTS LEFT *************************************************************
PLAY RECAP *********************************************************************
127.0.0.1 : ok=12 changed=6 unreachable=0 failed=1
Playbook run took 0 days, 0 hours, 0 minutes, 13 seconds
user@kim:/opt/ibm-cloud-private-ce-2.1.0.3/cluster$
I am working on Ubuntu 14.04 with Docker version 17.12.1-ce, build 7390fc6.
My hosts file looks like this:
[master]
127.0.0.1 ansible_user="user" ansible_ssh_pass="6CEd29CN" ansible_become=true ansible_become_pass="6CEd29CN" ansible_port="22" ansible_ssh_common_args="-oPubkeyAuthentication=no"
[worker]
127.0.0.1 ansible_user="user" ansible_ssh_pass="6CEd29CN" ansible_become=true ansible_become_pass="6CEd29CN" ansible_port="22" ansible_ssh_common_args="-oPubkeyAuthentication=no"
[proxy]
127.0.0.1 ansible_user="user" ansible_ssh_pass="6CEd29CN" ansible_become=true ansible_become_pass="6CEd29CN" ansible_port="22" ansible_ssh_common_args="-oPubkeyAuthentication=no"
#[management]
#4.4.4.4
#[va]
#5.5.5.5
The config.yaml file looks like this:
# Licensed Materials - Property of IBM
# IBM Cloud private
# # Copyright IBM Corp. 2017 All Rights Reserved
# US Government Users Restricted Rights - Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
---
###### docker0: 172.17.0.1
###### eth0: 192.168.240.14
## Network Settings
network_type: calico
# network_helm_chart_path: < helm chart path >
## Network in IPv4 CIDR format
network_cidr: 127.0.0.1/8
## Kubernetes Settings
service_cluster_ip_range: 127.0.0.1/24
## Makes the Kubelet start if swap is enabled on the node. Remove
## this if your production env wants to disable swap.
kubelet_extra_args: ["--fail-swap-on=false"]
# cluster_domain: cluster.local
# cluster_name: mycluster
# cluster_CA_domain: "{{ cluster_name }}.icp"
# cluster_zone: "myzone"
# cluster_region: "myregion"
## Etcd Settings
etcd_extra_args: ["--grpc-keepalive-timeout=0", "--grpc-keepalive-interval=0", "--snapshot-count=10000"]
## General Settings
# wait_for_timeout: 600
# docker_api_timeout: 100
## Advanced Settings
default_admin_user: user
default_admin_password: 6CEd29CN
# ansible_user: <username>
# ansible_become: true
# ansible_become_password: <password>
## Kubernetes Settings
# kube_apiserver_extra_args: []
# kube_controller_manager_extra_args: []
# kube_proxy_extra_args: []
# kube_scheduler_extra_args: []
## Enable Kubernetes Audit Log
# auditlog_enabled: false
## GlusterFS Settings
# glusterfs: false
## GlusterFS Storage Settings
# storage:
# - kind: glusterfs
# nodes:
# - ip: <worker_node_m_IP_address>
# device: <link path>/<symlink of device aaa>,<link path>/<symlink of device bbb>
# - ip: <worker_node_n_IP_address>
# device: <link path>/<symlink of device ccc>
# - ip: <worker_node_o_IP_address>
# device: <link path>/<symlink of device ddd>
# storage_class:
# name:
# default: false
# volumetype: replicate:3
## Network Settings
## Calico Network Settings
# calico_ipip_enabled: true
# calico_tunnel_mtu: 1430
calico_ip_autodetection_method: can-reach=127.0.0.1
## IPSec mesh Settings
## If user wants to configure IPSec mesh, the following parameters
## should be configured through config.yaml
# ipsec_mesh:
# enable: true
# interface: <interface name on which IPsec will be enabled>
# subnets: []
# exclude_ips: "<list of IP addresses separated by a comma>"
# kube_apiserver_secure_port: 8001
## External loadbalancer IP or domain
## Or floating IP in OpenStack environment
# cluster_lb_address: none
## External loadbalancer IP or domain
## Or floating IP in OpenStack environment
# proxy_lb_address: none
## Install in firewall enabled mode
# firewall_enabled: false
## Allow loopback dns server in cluster nodes
# loopback_dns: false
## High Availability Settings
# vip_manager: etcd
## High Availability Settings for master nodes
# vip_iface: eth0
# cluster_vip: 127.0.1.1
## High Availability Settings for Proxy nodes
# proxy_vip_iface: eth0
# proxy_vip: 127.0.1.1
## Federation cluster Settings
# federation_enabled: false
# federation_cluster: federation-cluster
# federation_domain: cluster.federation
# federation_apiserver_extra_args: []
# federation_controllermanager_extra_args: []
# federation_external_policy_engine_enabled: false
## vSphere cloud provider Settings
## If user wants to configure vSphere as cloud provider, vsphere_conf
## parameters should be configured through config.yaml
# kubelet_nodename: hostname
# cloud_provider: vsphere
# vsphere_conf:
# user: <vCenter username for vSphere cloud provider>
# password: <password for vCenter user>
# server: <vCenter server IP or FQDN>
# port: [vCenter Server Port; default: 443]
# insecure_flag: [set to 1 if vCenter uses a self-signed certificate]
# datacenter: <datacenter name on which Node VMs are deployed>
# datastore: <default datastore to be used for provisioning volumes>
# working_dir: <vCenter VM folder path in which node VMs are located>
## Disabled Management Services Settings
## You can disable the following management services: ["service-catalog", "metering", "monitoring", "istio", "vulnerability-advisor", "custom-metrics-adapter"]
disabled_management_services: ["istio", "vulnerability-advisor", "custom-metrics-adapter"]
## Docker Settings
# docker_env: []
# docker_extra_args: []
## The maximum size of the log before it is rolled
# docker_log_max_size: 50m
## The maximum number of log files that can be present
# docker_log_max_file: 10
## Install/upgrade docker version
# docker_version: 17.12.1
## ICP install docker automatically
# install_docker: true
## Ingress Controller Settings
## You can add your ingress controller configuration, and the allowed configuration can refer to
## https://github.com/kubernetes/ingress-nginx/blob/nginx-0.9.0/docs/user-guide/configmap.md#configuration-options
# ingress_controller:
# disable-access-log: 'true'
## Clean metrics indices in Elasticsearch older than this number of days
# metrics_max_age: 1
## Clean application log indices in Elasticsearch older than this number of days
# logs_maxage: 1
## Uncomment the line below to install Kibana as a managed service.
# kibana_install: true
# STARTING_CLOUDANT
# cloudant:
# namespace: kube-system
# pullPolicy: IfNotPresent
# pvPath: /opt/ibm/cfc/cloudant
# database:
# password: fdrreedfddfreeedffde
# federatorCommand: hostname
# federationIdentifier: "-0"
# readinessProbePeriodSeconds: 2
# readinessProbeInitialDelaySeconds: 90
# END_CLOUDANT
My goal is to set up ICP on my local machine (single node) and I'm very thankful for any help regarding this issue.
I resolved this error by uncommenting calico_ipip_enabled: true and setting it to false.
After that, though, I got another error because of my loopback IP:
fatal: [127.0.0.1] => A loopback IP is used in your DNS server configuration. For more details, see https://ibm.biz/dns-fails.
But there is a fix/workaround: setting loopback_dns: true, as mentioned in the link.
I can't close this question here, but this is how I resolved it.
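For reference, the relevant config.yaml lines after both changes look roughly like this (only a sketch; the rest of the file stays as posted above):

## Calico Network Settings
calico_ipip_enabled: false
calico_ip_autodetection_method: can-reach=127.0.0.1

## Allow loopback dns server in cluster nodes
loopback_dns: true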
IBM Cloud Private's supported OS is Ubuntu 16.04. Please check the URL below:
https://www.ibm.com/support/knowledgecenter/SSBS6K_2.1.0.3/supported_system_config/supported_os.html
Please check the system requirements.
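For example, you can confirm which release you are running with the standard Ubuntu commands:

lsb_release -a
cat /etc/os-release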
I am also trying to install it; I didn't face this issue.
Normally, we cannot specify 127.0.0.1 as an ICP node in the ICP hosts file. Thanks.
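For example, a single-node hosts file would point every role at the machine's real interface address instead of the loopback. A sketch, reusing the eth0 address 192.168.240.14 from the config comments above (substitute your own IP and credentials):

[master]
192.168.240.14 ansible_user="user" ansible_ssh_pass="<password>" ansible_become=true ansible_become_pass="<password>" ansible_port="22" ansible_ssh_common_args="-oPubkeyAuthentication=no"

[worker]
192.168.240.14

[proxy]
192.168.240.14

Since it is the same host, the inventory variables declared under [master] also apply to the [worker] and [proxy] entries.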
Related
I have a problem with Traefik: I want to send logs to a server running syslog-ng (in Docker).
I am getting logs, but they show the reverse proxy's name, and I want the source IP, not Traefik's name. I want to preserve the source IP from the host.
traefik.yml:
global:
  sendAnonymousUsage: false

api:
  dashboard: true
  insecure: true

providers:
  docker:
    endpoint: "unix:///var/run/docker.sock"
    watch: true
    useBindPortIP: true
    exposedByDefault: false
  file:
    filename: /etc/traefik/config.yml
    watch: true

log:
  level: INFO
  format: common

entryPoints:
  http:
    address: ":80"
    http:
      redirections:
        entryPoint:
          to: https
          scheme: https
  https:
    address: ":443"
  udp:
    address: ":514/udp"
  tcp:
    address: ":514"

udp:
  services:
    syslog-ng:
      loadBalancer:
        servers:
          - address: ":514/udp"

tcp:
  services:
    syslog-ng:
      loadBalancer:
        servers:
          - address: ":514"

forwardedHeaders: true

certificatesResolvers:
  le:
    acme:
      email: responsable.informatique@exemple.com
      storage: acme.json
      httpChallenge:
        # used during the challenge
        entryPoint: http
syslog-ng.conf:
# rsyslog configuration file
# For more information see /usr/share/doc/rsyslog-*/rsyslog_conf.html
# If you experience problems, see http://www.rsyslog.com/doc/troubleshoot.html
#### MODULES ####
# The imjournal module below is now used as a message source instead of imuxsock.
$ModLoad imuxsock # provides support for local system logging (e.g. via logger command)
$ModLoad imjournal # provides access to the systemd journal
#$ModLoad imklog # reads kernel messages (the same are read from journald)
#$ModLoad immark # provides --MARK-- message capability
# Provides UDP syslog reception
#$ModLoad imudp
#$InputUDPServerRun 514
#$UDPServerRun 514
# Provides TCP syslog reception
#$ModLoad imtcp
#$InputTCPServerRun 514
#### GLOBAL DIRECTIVES ####
# Where to place auxiliary files
$WorkDirectory /var/lib/rsyslog
# Use default timestamp format
$ActionFileDefaultTemplate RSYSLOG_TraditionalFileFormat
# File syncing capability is disabled by default. This feature is usually not required,
# not useful and an extreme performance hit
$ActionFileEnableSync on
# Include all config files in /etc/rsyslog.d/
$IncludeConfig /etc/rsyslog.d/*.conf
#*.* @lysca-app0037.ds-001.net
*.* @@10.84.50.186
#*.* action(type=omfwd" target="10.84.50.186" port="601" protocol="tcp"
# action.resumeRetryCount ="100"
# queue.type="linkedList" queue.size="10000")
# Turn off message reception via local log socket;
# local messages are retrieved through imjournal now.
$OmitLocalLogging on
# File to store the position in the journal
$IMJournalStateFile imjournal.state
#### RULES ####
# Log all kernel messages to the console.
# Logging much else clutters up the screen.
#kern.* /dev/console
# Log anything (except mail) of level info or higher.
# Don't log private authentication messages!
*.info;mail.none;authpriv.none;cron.none /var/log/messages
# The authpriv file has restricted access.
authpriv.* /var/log/secure
# Log all the mail messages in one place.
mail.* -/var/log/maillog
# Log cron stuff
cron.* /var/log/cron
# Everybody gets emergency messages
*.emerg :omusrmsg:*
# Save news errors of level crit and higher in a special file.
uucp,news.crit /var/log/spooler
# Save boot messages also to boot.log
local7.* /var/log/boot.log
# ### begin forwarding rule ###
# The statement between the begin ... end define a SINGLE forwarding
# rule. They belong together, do NOT split them. If you create multiple
# forwarding rules, duplicate the whole block!
# Remote Logging (we use TCP for reliable delivery)
#
# An on-disk queue is created for this action. If the remote host is
# down, messages are spooled to disk and sent when it is up again.
#$ActionQueueFileName fwdRule1 # unique name prefix for spool files
#$ActionQueueMaxDiskSpace 1g # 1gb space limit (use as much as possible)
#$ActionQueueSaveOnShutdown on # save messages to disk on shutdown
#$ActionQueueType LinkedList # run asynchronously
#$ActionResumeRetryCount -1 # infinite retries if host is down
# remote host is: name/ip:port, e.g. 192.168.0.1:514, port optional
#*.* @@remote-host:514
# ### end of the forwarding rule ###
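(Side note: the commented-out action() line above has a quoting typo, type=omfwd" instead of type="omfwd". In modern rsyslog syntax that forwarding action would look roughly like the following sketch, which is not part of the running config:)

*.* action(type="omfwd" target="10.84.50.186" port="601" protocol="tcp"
           action.resumeRetryCount="100"
           queue.type="linkedList" queue.size="10000")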
It's like reverse NAT, but I don't know where to find this configuration in Traefik.
Thanks for reading.
Regards
I have Jenkins running locally on port 8081 on a Linux machine that is set up in the office.
I have a public IP that I am trying to use to make Jenkins publicly available.
I have entered the public IP and port in Manage Jenkins -> Configure System -> Jenkins URL, like: http://182.156.xxx.xx:8081/
Now if I browse to http://182.156.xxx.xx:8081/, it gives me an HTTP 404 error (screenshot attached).
Note: I set up Jenkins on Ubuntu with the commands below:
wget -q -O - https://pkg.jenkins.io/debian-stable/jenkins.io.key | sudo apt-key add -
sudo sh -c 'echo deb https://pkg.jenkins.io/debian-stable binary/ > /etc/apt/sources.list.d/jenkins.list'
sudo apt-get update
sudo apt-get install jenkins
/etc/default/jenkins file:
# defaults for Jenkins automation server
# pulled in from the init script; makes things easier.
NAME=jenkins
# arguments to pass to java
# Allow graphs etc. to work even when an X server is present
JAVA_ARGS="-Djava.awt.headless=true"
#JAVA_ARGS="-Xmx256m"
# make jenkins listen on IPv4 address
#JAVA_ARGS="-Djava.net.preferIPv4Stack=true"
PIDFILE=/var/run/$NAME/$NAME.pid
# user and group to be invoked as (default to jenkins)
JENKINS_USER=$NAME
JENKINS_GROUP=$NAME
# location of the jenkins war file
JENKINS_WAR=/usr/share/$NAME/$NAME.war
# jenkins home location
JENKINS_HOME=/var/lib/$NAME
# set this to false if you don't want Jenkins to run by itself
# in this set up, you are expected to provide a servlet container
# to host jenkins.
RUN_STANDALONE=true
# log location. this may be a syslog facility.priority
JENKINS_LOG=/var/log/$NAME/$NAME.log
#JENKINS_LOG=daemon.info
# Whether to enable web access logging or not.
# Set to "yes" to enable logging to /var/log/$NAME/access_log
JENKINS_ENABLE_ACCESS_LOG="no"
# OS LIMITS SETUP
# comment this out to observe /etc/security/limits.conf
# this is on by default because http://github.com/jenkinsci/jenkins/commit/2fb288474e980d0e7ff9c4a3b768874835a3e92e
# reported that Ubuntu's PAM configuration doesn't include pam_limits.so, and as a result the # of file
# descriptors are forced to 1024 regardless of /etc/security/limits.conf
MAXOPENFILES=8192
# set the umask to control permission bits of files that Jenkins creates.
# 027 makes files read-only for group and inaccessible for others, which some security sensitive users
# might consider beneficial, especially if Jenkins runs in a box that's used for multiple purposes.
# Beware that 027 permission would interfere with sudo scripts that run on the master (JENKINS-25065.)
#
# Note also that the particularly sensitive part of $JENKINS_HOME (such as credentials) are always
# written without 'others' access. So the umask values only affect job configuration, build records,
# that sort of things.
#
# If commented out, the value from the OS is inherited, which is normally 022 (as of Ubuntu 12.04,
# by default umask comes from pam_umask(8) and /etc/login.defs
# UMASK=027
# port for HTTP connector (default 8080; disable with -1)
HTTP_PORT=8081
# servlet context, important if you want to use apache proxying
PREFIX=/$NAME
# arguments to pass to jenkins.
# --javahome=$JAVA_HOME
# --httpListenAddress=$HTTP_HOST (default 0.0.0.0)
# --httpPort=$HTTP_PORT (default 8080; disable with -1)
# --httpsPort=$HTTP_PORT
# --argumentsRealm.passwd.$ADMIN_USER=[password]
# --argumentsRealm.roles.$ADMIN_USER=admin
# --webroot=~/.jenkins/war
# --prefix=$PREFIX
JENKINS_ARGS="--webroot=/var/cache/$NAME/war --httpPort=$HTTP_PORT"
In this Jenkins file, I have only changed the HTTP port from 8080 to 8081, because another Jenkins is already running on port 8080 with the same public IP.
Jenkins version : 2.289.2
Java version : 8
Ubuntu version : 20.04
jenkins_error_screenshot
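One quick sanity check (my own suggestion, not part of the original setup) is whether Jenkins answers locally on port 8081 and what address it is actually bound to:

curl -I http://localhost:8081/
sudo ss -tlnp | grep 8081
sudo systemctl status jenkins

If the local curl already returns a 404, the problem is in the Jenkins/servlet configuration rather than in the network path to the public IP.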
I'm trying to set up a local Kibana instance with ActiveMQ for testing purposes. I've created a Docker network called elastic-network. I have 3 containers in my network: elasticsearch, kibana and finally activemq. In my kibana container, I downloaded Metricbeat using the following shell command:
curl -L -O https://artifacts.elastic.co/downloads/beats/metricbeat/metricbeat-7.11.2-linux-x86_64.tar.gz
In the configuration file metricbeat.reference.yml, I've changed the host for my ActiveMQ instance running under the container activemq:
- module: activemq
  metricsets: ['broker', 'queue', 'topic']
  period: 10s
  hosts: ['activemq:8161']
  path: '/api/jolokia/?ignoreErrors=true&canonicalNaming=false'
  username: admin # default username
  password: admin # default password
When I run Metricbeat with the verbose flag (./metricbeat -e), I get an error mentioning that the ActiveMQ API is unreachable. My problem is that Metricbeat ignores my ActiveMQ broker configuration and tries to connect to localhost.
Is there a reason why my configuration could be ignored?
After looking through the documentation, I saw that on Linux, unlike the other OSes, you also have to change the configuration in the module directory modules.d/activemq.yml, not just metricbeat.reference.yml:
# Module: activemq
# Docs: https://www.elastic.co/guide/en/beats/metricbeat/7.11/metricbeat-module-activemq.html

- module: activemq
  metricsets: ['broker', 'queue', 'topic']
  period: 10s
  hosts: ['activemq:8161']
  path: '/api/jolokia/?ignoreErrors=true&canonicalNaming=false'
  username: admin # default username
  password: admin # default password
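For completeness, the module can also be enabled and the configuration sanity-checked from the Metricbeat directory with the standard Metricbeat CLI:

./metricbeat modules enable activemq
./metricbeat test config
./metricbeat -e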
I want to secure my NiFi with HTTPS using the tls-toolkit in standalone mode inside a Docker container, on a remote virtual machine running RHEL 8 (so I'm actually using Podman instead of Docker, but thanks to the podman-docker package I can treat Podman as Docker). I want to use port 19443 for now, but eventually I will use 9443.
I have created my simple testing Dockerfile:
FROM apache/nifi:latest
WORKDIR /opt/nifi/nifi-current
RUN /opt/nifi/nifi-toolkit-current/bin/tls-toolkit.sh standalone -n "localhost" -C "CN=user_1, OU=NiFi"
RUN ls localhost/
RUN cp -fv /opt/nifi/nifi-current/localhost/* /opt/nifi/nifi-current/conf/ # <- first problem, see build
RUN ls conf/
RUN /opt/nifi/nifi-current/bin/nifi.sh start
EXPOSE 19443
USER nifi
HTTP Works
I have pulled the apache/nifi image and am using the command:
docker run --name my_nifi -p 19443:19443 -d -e NIFI_WEB_HTTP_PORT='19443' my_nifi
where the last my_nifi is the image tag that I have created from the Dockerfile.
With this container I can connect to
http://<the remote IP address>:19443/nifi
and it works, showing the NiFi page.
Dockerfile build
docker build -t my_nifi --no-cache .
Emulate Docker CLI using podman. Create /etc/containers/nodocker to quiet msg.
STEP 1: FROM apache/nifi:latest
STEP 2: WORKDIR /opt/nifi/nifi-current
c6788497ae98d998a561aab162f1cded42f17026abe3745e61021826858ff6db
STEP 3: RUN /opt/nifi/nifi-toolkit-current/bin/tls-toolkit.sh standalone -n "localhost" -C "CN=user_1, OU=NiFi"
2020/12/30 08:38:15 INFO [main] org.apache.nifi.toolkit.tls.standalone.TlsToolkitStandaloneCommandLine: No nifiPropertiesFile specified, using embedded one.
2020/12/30 08:38:16 INFO [main] org.apache.nifi.toolkit.tls.standalone.TlsToolkitStandalone: Running standalone certificate generation with output directory ../nifi-current
2020/12/30 08:38:16 INFO [main] org.apache.nifi.toolkit.tls.standalone.TlsToolkitStandalone: Generated new CA certificate ../nifi-current/nifi-cert.pem and key ../nifi-current/nifi-key.key
2020/12/30 08:38:16 INFO [main] org.apache.nifi.toolkit.tls.standalone.TlsToolkitStandalone: Writing new ssl configuration to ../nifi-current/localhost
2020/12/30 08:38:16 INFO [main] org.apache.nifi.toolkit.tls.standalone.TlsToolkitStandalone: Successfully generated TLS configuration for localhost 1 in ../nifi-current/localhost
2020/12/30 08:38:16 INFO [main] org.apache.nifi.toolkit.tls.standalone.TlsToolkitStandalone: Generating new client certificate ../nifi-current/CN=user_1_OU=NiFi.p12
2020/12/30 08:38:17 INFO [main] org.apache.nifi.toolkit.tls.standalone.TlsToolkitStandalone: Successfully generated client certificate ../nifi-current/CN=user_1_OU=NiFi.p12
2020/12/30 08:38:17 INFO [main] org.apache.nifi.toolkit.tls.standalone.TlsToolkitStandalone: tls-toolkit standalone completed successfully
0ce5790c026b4650615a6dc8e5745dece2fe6374104825cf4a9ecdc8dfbbdf46
STEP 4: RUN ls localhost/
keystore.jks nifi.properties truststore.jks
85710975c4ed5f1029ad9e7c70b7516e7cf63a9b568e20844d7cf74f8b33f648
STEP 5: RUN cp -fv /opt/nifi/nifi-current/localhost/* /opt/nifi/nifi-current/conf/
'/opt/nifi/nifi-current/localhost/keystore.jks' -> '/opt/nifi/nifi-current/conf/keystore.jks'
'/opt/nifi/nifi-current/localhost/nifi.properties' -> '/opt/nifi/nifi-current/conf/nifi.properties'
'/opt/nifi/nifi-current/localhost/truststore.jks' -> '/opt/nifi/nifi-current/conf/truststore.jks'
a2b99978024840cc4d2702b31f8f2346398673f31ace9d776af112b1aa3d45ac
STEP 6: RUN ls conf/
authorizers.xml login-identity-providers.xml
bootstrap-notification-services.xml nifi.properties
bootstrap.conf state-management.xml
logback.xml zookeeper.properties
0adb1c26826936d08f7edd6df604a0689c23cb9e3db47be06f1c9b4ce935a50d
STEP 7: RUN /opt/nifi/nifi-current/bin/nifi.sh start
Java home: /usr/local/openjdk-8
NiFi home: /opt/nifi/nifi-current
Bootstrap Config File: /opt/nifi/nifi-current/conf/bootstrap.conf
7146d8dc7f891643f42dfd2efef446cedf7b98cf2ecad90ebf6b5de335408b4e
STEP 8: EXPOSE 19443
72f941725ac0c9a66d2c2e0a21286b6db52b3a039c721dccd70234f75dfdd9fe
STEP 9: USER nifi
STEP 10: COMMIT my_nifi
77cf9574d75af00aeed7c6dbacbb853badad82e12f9f448a94f6162df2c1df44
In STEP 3 I use the NiFi tls-toolkit to create the JKS keystore/truststore and the new nifi.properties file, but:
in STEPS 5-6, I see the problem that even though the cp command says the files have been copied into the conf/ folder, they are not there when I list the contents of that folder.
after the build, I ran a new container (docker run --name my_nifi -p 19443:19443 -d my_nifi; even adding -e NIFI_WEB_HTTPS_PORT='19443' gives the same result), entered it, and manually copied the files keystore.jks, nifi.properties and truststore.jks into the conf/ folder, and this time they did get copied.
But when I restart this second container, I get this error:
2020-12-30 08:50:33,022 INFO [main] org.eclipse.jetty.util.log Logging initialized #7671ms to org.eclipse.jetty.util.log.Slf4jLog
2020-12-30 08:50:33,066 WARN [main] org.apache.nifi.web.server.JettyServer Both the HTTP and HTTPS connectors are configured in nifi.properties. Only one of these connectors should be configured. See the NiFi Admin Guide for more details
2020-12-30 08:50:33,066 WARN [main] org.apache.nifi.web.server.JettyServer HTTP connector: http://8eafc1fa77d0:8080
2020-12-30 08:50:33,066 WARN [main] org.apache.nifi.web.server.JettyServer HTTPS connector: https://localhost:9443
2020-12-30 08:50:33,066 ERROR [main] org.apache.nifi.web.server.JettyServer NiFi only supports one mode of HTTP or HTTPS operation, not both simultaneously. Check the nifi.properties file and ensure that either the HTTP hostname and port or the HTTPS hostname and port are empty
2020-12-30 08:50:33,068 WARN [main] org.apache.nifi.web.server.JettyServer Failed to start web server... shutting down.
java.lang.IllegalStateException: Only one of the HTTP and HTTPS connectors can be configured at one time
at org.apache.nifi.web.server.JettyServer.configureConnectors(JettyServer.java:825)
at org.apache.nifi.web.server.JettyServer.<init>(JettyServer.java:178)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.nifi.NiFi.<init>(NiFi.java:151)
at org.apache.nifi.NiFi.<init>(NiFi.java:72)
at org.apache.nifi.NiFi.main(NiFi.java:301)
2020-12-30 08:50:33,068 INFO [Thread-1] org.apache.nifi.NiFi Initiating shutdown of Jetty web server...
2020-12-30 08:50:33,069 INFO [Thread-1] org.apache.nifi.NiFi Jetty web server shutdown completed (nicely or otherwise).
But the nifi.properties that was copied is the one below, which does not have the HTTP values filled in:
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Core Properties #
nifi.flow.configuration.file=./conf/flow.xml.gz
nifi.flow.configuration.archive.enabled=true
nifi.flow.configuration.archive.dir=./conf/archive/
nifi.flow.configuration.archive.max.time=30 days
nifi.flow.configuration.archive.max.storage=500 MB
nifi.flow.configuration.archive.max.count=
nifi.flowcontroller.autoResumeState=true
nifi.flowcontroller.graceful.shutdown.period=10 sec
nifi.flowservice.writedelay.interval=500 ms
nifi.administrative.yield.duration=30 sec
# If a component has no work to do (is "bored"), how long should we wait before checking again for work?
nifi.bored.yield.duration=10 millis
nifi.queue.backpressure.count=10000
nifi.queue.backpressure.size=1 GB
nifi.authorizer.configuration.file=./conf/authorizers.xml
nifi.login.identity.provider.configuration.file=./conf/login-identity-providers.xml
nifi.templates.directory=./conf/templates
nifi.ui.banner.text=
nifi.ui.autorefresh.interval=30 sec
nifi.nar.library.directory=./lib
nifi.nar.library.autoload.directory=./extensions
nifi.nar.working.directory=./work/nar/
nifi.documentation.working.directory=./work/docs/components
####################
# State Management #
####################
nifi.state.management.configuration.file=./conf/state-management.xml
# The ID of the local state provider
nifi.state.management.provider.local=local-provider
# The ID of the cluster-wide state provider. This will be ignored if NiFi is not clustered but must be populated if running in a cluster.
nifi.state.management.provider.cluster=zk-provider
# Specifies whether or not this instance of NiFi should run an embedded ZooKeeper server
nifi.state.management.embedded.zookeeper.start=false
# Properties file that provides the ZooKeeper properties to use if <nifi.state.management.embedded.zookeeper.start> is set to true
nifi.state.management.embedded.zookeeper.properties=./conf/zookeeper.properties
# H2 Settings
nifi.database.directory=./database_repository
nifi.h2.url.append=;LOCK_TIMEOUT=25000;WRITE_DELAY=0;AUTO_SERVER=FALSE
# FlowFile Repository
nifi.flowfile.repository.implementation=org.apache.nifi.controller.repository.WriteAheadFlowFileRepository
nifi.flowfile.repository.wal.implementation=org.apache.nifi.wali.SequentialAccessWriteAheadLog
nifi.flowfile.repository.directory=./flowfile_repository
nifi.flowfile.repository.checkpoint.interval=20 secs
nifi.flowfile.repository.always.sync=false
nifi.flowfile.repository.encryption.key.provider.implementation=
nifi.flowfile.repository.encryption.key.provider.location=
nifi.flowfile.repository.encryption.key.id=
nifi.flowfile.repository.encryption.key=
nifi.flowfile.repository.retain.orphaned.flowfiles=true
nifi.swap.manager.implementation=org.apache.nifi.controller.FileSystemSwapManager
nifi.queue.swap.threshold=20000
# Content Repository
nifi.content.repository.implementation=org.apache.nifi.controller.repository.FileSystemRepository
nifi.content.claim.max.appendable.size=1 MB
nifi.content.repository.directory.default=./content_repository
nifi.content.repository.archive.max.retention.period=7 days
nifi.content.repository.archive.max.usage.percentage=50%
nifi.content.repository.archive.enabled=true
nifi.content.repository.always.sync=false
nifi.content.viewer.url=../nifi-content-viewer/
nifi.content.repository.encryption.key.provider.implementation=
nifi.content.repository.encryption.key.provider.location=
nifi.content.repository.encryption.key.id=
nifi.content.repository.encryption.key=
# Provenance Repository Properties
nifi.provenance.repository.implementation=org.apache.nifi.provenance.WriteAheadProvenanceRepository
nifi.provenance.repository.encryption.key.provider.implementation=
nifi.provenance.repository.encryption.key.provider.location=
nifi.provenance.repository.encryption.key.id=
nifi.provenance.repository.encryption.key=
# Persistent Provenance Repository Properties
nifi.provenance.repository.directory.default=./provenance_repository
nifi.provenance.repository.max.storage.time=30 days
nifi.provenance.repository.max.storage.size=10 GB
nifi.provenance.repository.rollover.time=10 mins
nifi.provenance.repository.rollover.size=100 MB
nifi.provenance.repository.query.threads=2
nifi.provenance.repository.index.threads=2
nifi.provenance.repository.compress.on.rollover=true
nifi.provenance.repository.always.sync=false
# Comma-separated list of fields. Fields that are not indexed will not be searchable. Valid fields are:
# EventType, FlowFileUUID, Filename, TransitURI, ProcessorID, AlternateIdentifierURI, Relationship, Details
nifi.provenance.repository.indexed.fields=EventType, FlowFileUUID, Filename, ProcessorID, Relationship
# FlowFile Attributes that should be indexed and made searchable. Some examples to consider are filename, uuid, mime.type
nifi.provenance.repository.indexed.attributes=
# Large values for the shard size will result in more Java heap usage when searching the Provenance Repository
# but should provide better performance
nifi.provenance.repository.index.shard.size=500 MB
# Indicates the maximum length that a FlowFile attribute can be when retrieving a Provenance Event from
# the repository. If the length of any attribute exceeds this value, it will be truncated when the event is retrieved.
nifi.provenance.repository.max.attribute.length=65536
nifi.provenance.repository.concurrent.merge.threads=2
# Volatile Provenance Repository Properties
nifi.provenance.repository.buffer.size=100000
# Component Status Repository
nifi.components.status.repository.implementation=org.apache.nifi.controller.status.history.VolatileComponentStatusRepository
nifi.components.status.repository.buffer.size=1440
nifi.components.status.snapshot.frequency=1 min
# Site to Site properties
nifi.remote.input.host=localhost
nifi.remote.input.secure=true
nifi.remote.input.socket.port=10443
nifi.remote.input.http.enabled=true
nifi.remote.input.http.transaction.ttl=30 sec
nifi.remote.contents.cache.expiration=30 secs
# web properties #
nifi.web.http.host=
nifi.web.http.port=
nifi.web.http.network.interface.default=
nifi.web.https.host=localhost
nifi.web.https.port=9443
nifi.web.https.network.interface.default=
nifi.web.jetty.working.directory=./work/jetty
nifi.web.jetty.threads=200
nifi.web.max.header.size=16 KB
nifi.web.proxy.context.path=
nifi.web.proxy.host=
nifi.web.max.content.size=
nifi.web.max.requests.per.second=30000
nifi.web.should.send.server.version=true
# security properties #
nifi.sensitive.props.key=
nifi.sensitive.props.key.protected=
nifi.sensitive.props.algorithm=PBEWITHMD5AND256BITAES-CBC-OPENSSL
nifi.sensitive.props.provider=BC
nifi.sensitive.props.additional.keys=
nifi.security.keystore=./conf/keystore.jks
nifi.security.keystoreType=jks
nifi.security.keystorePasswd=U/lgE52hjoAhCa0w9KD2XWZeVp1gyNPT5sAY9I0Kyng
nifi.security.keyPasswd=U/lgE52hjoAhCa0w9KD2XWZeVp1gyNPT5sAY9I0Kyng
nifi.security.truststore=./conf/truststore.jks
nifi.security.truststoreType=jks
nifi.security.truststorePasswd=EvHdoccmVKi8dQj51ohiOIYIuR/J/SaMWb176qBIVrY
nifi.security.user.authorizer=managed-authorizer
nifi.security.allow.anonymous.authentication=false
nifi.security.user.login.identity.provider=
nifi.security.ocsp.responder.url=
nifi.security.ocsp.responder.certificate=
# OpenId Connect SSO Properties #
nifi.security.user.oidc.discovery.url=
nifi.security.user.oidc.connect.timeout=5 secs
nifi.security.user.oidc.read.timeout=5 secs
nifi.security.user.oidc.client.id=
nifi.security.user.oidc.client.secret=
nifi.security.user.oidc.preferred.jwsalgorithm=
nifi.security.user.oidc.additional.scopes=
nifi.security.user.oidc.claim.identifying.user=
# Apache Knox SSO Properties #
nifi.security.user.knox.url=
nifi.security.user.knox.publicKey=
nifi.security.user.knox.cookieName=hadoop-jwt
nifi.security.user.knox.audiences=
# Identity Mapping Properties #
# These properties allow normalizing user identities such that identities coming from different identity providers
# (certificates, LDAP, Kerberos) can be treated the same internally in NiFi. The following example demonstrates normalizing
# DNs from certificates and principals from Kerberos into a common identity string:
#
# nifi.security.identity.mapping.pattern.dn=^CN=(.*?), OU=(.*?), O=(.*?), L=(.*?), ST=(.*?), C=(.*?)$
# nifi.security.identity.mapping.value.dn=$1@$2
# nifi.security.identity.mapping.transform.dn=NONE
# nifi.security.identity.mapping.pattern.kerb=^(.*?)/instance@(.*?)$
# nifi.security.identity.mapping.value.kerb=$1@$2
# nifi.security.identity.mapping.transform.kerb=UPPER
# Group Mapping Properties #
# These properties allow normalizing group names coming from external sources like LDAP. The following example
# lowercases any group name.
#
# nifi.security.group.mapping.pattern.anygroup=^(.*)$
# nifi.security.group.mapping.value.anygroup=$1
# nifi.security.group.mapping.transform.anygroup=LOWER
# cluster common properties (all nodes must have same values) #
nifi.cluster.protocol.heartbeat.interval=5 sec
nifi.cluster.protocol.heartbeat.missable.max=8
nifi.cluster.protocol.is.secure=true
# cluster node properties (only configure for cluster nodes) #
nifi.cluster.is.node=false
nifi.cluster.node.address=localhost
nifi.cluster.node.protocol.port=11443
nifi.cluster.node.protocol.threads=10
nifi.cluster.node.protocol.max.threads=50
nifi.cluster.node.event.history.size=25
nifi.cluster.node.connection.timeout=5 sec
nifi.cluster.node.read.timeout=5 sec
nifi.cluster.node.max.concurrent.requests=100
nifi.cluster.firewall.file=
nifi.cluster.flow.election.max.wait.time=5 mins
nifi.cluster.flow.election.max.candidates=
# cluster load balancing properties #
nifi.cluster.load.balance.host=
nifi.cluster.load.balance.port=6342
nifi.cluster.load.balance.connections.per.node=1
nifi.cluster.load.balance.max.thread.count=8
nifi.cluster.load.balance.comms.timeout=30 sec
# zookeeper properties, used for cluster management #
nifi.zookeeper.connect.string=
nifi.zookeeper.connect.timeout=10 secs
nifi.zookeeper.session.timeout=10 secs
nifi.zookeeper.root.node=/nifi
# Zookeeper properties for the authentication scheme used when creating acls on znodes used for cluster management
# Values supported for nifi.zookeeper.auth.type are "default", which will apply world/anyone rights on znodes
# and "sasl" which will give rights to the sasl/kerberos identity used to authenticate the nifi node
# The identity is determined using the value in nifi.kerberos.service.principal and the removeHostFromPrincipal
# and removeRealmFromPrincipal values (which should align with the kerberos.removeHostFromPrincipal and kerberos.removeRealmFromPrincipal
# values configured on the zookeeper server).
nifi.zookeeper.auth.type=
nifi.zookeeper.kerberos.removeHostFromPrincipal=
nifi.zookeeper.kerberos.removeRealmFromPrincipal=
# kerberos #
nifi.kerberos.krb5.file=
# kerberos service principal #
nifi.kerberos.service.principal=
nifi.kerberos.service.keytab.location=
# kerberos spnego principal #
nifi.kerberos.spnego.principal=
nifi.kerberos.spnego.keytab.location=
nifi.kerberos.spnego.authentication.expiration=12 hours
# external properties files for variable registry
# supports a comma delimited list of file locations
nifi.variable.registry.properties=
# analytics properties #
nifi.analytics.predict.enabled=false
nifi.analytics.predict.interval=3 mins
nifi.analytics.query.interval=5 mins
nifi.analytics.connection.model.implementation=org.apache.nifi.controller.status.analytics.models.OrdinaryLeastSquares
nifi.analytics.connection.model.score.name=rSquared
nifi.analytics.connection.model.score.threshold=.90
How do I solve this?
According to the documentation of the nifi image, you should add specific environment variables to your docker run command if you want to go HTTPS. I would try that, providing an external keystore and truststore:
docker run --name nifi \
-v /User/dreynolds/certs/localhost:/opt/certs \
-p 8443:8443 \
-e AUTH=tls \
-e KEYSTORE_PATH=/opt/certs/keystore.jks \
-e KEYSTORE_TYPE=JKS \
-e KEYSTORE_PASSWORD=QKZv1hSWAFQYZ+WU1jjF5ank+l4igeOfQRp+OSbkkrs \
-e TRUSTSTORE_PATH=/opt/certs/truststore.jks \
-e TRUSTSTORE_PASSWORD=rHkWR1gDNW3R9hgbeRsT3OM3Ue0zwGtQqcFKJD2EXWE \
-e TRUSTSTORE_TYPE=JKS \
-e INITIAL_ADMIN_IDENTITY='CN=Random User, O=Apache, OU=NiFi, C=US' \
-d \
apache/nifi:latest
You could also try to build the image from scratch (i.e. by downloading NiFi in the Dockerfile, etc.).
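Adapted to the setup in the question, that would look roughly like the sketch below. This is only a sketch: /path/on/host/localhost stands for whatever host directory holds the toolkit output, and the keystore/truststore passwords are the ones from the generated nifi.properties shown above.

docker run --name my_nifi \
  -v /path/on/host/localhost:/opt/certs \
  -p 19443:19443 \
  -e AUTH=tls \
  -e KEYSTORE_PATH=/opt/certs/keystore.jks \
  -e KEYSTORE_TYPE=JKS \
  -e KEYSTORE_PASSWORD=U/lgE52hjoAhCa0w9KD2XWZeVp1gyNPT5sAY9I0Kyng \
  -e TRUSTSTORE_PATH=/opt/certs/truststore.jks \
  -e TRUSTSTORE_TYPE=JKS \
  -e TRUSTSTORE_PASSWORD=EvHdoccmVKi8dQj51ohiOIYIuR/J/SaMWb176qBIVrY \
  -e NIFI_WEB_HTTPS_PORT='19443' \
  -e INITIAL_ADMIN_IDENTITY='CN=user_1, OU=NiFi' \
  -d apache/nifi:latest

This keeps the TLS material outside the image and lets the image's own start-up scripting write nifi.properties, instead of baking the files in at build time, where they appear to get overwritten on container start (hence the HTTP connector on 8080 in the log above).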
Dear whoever may read this topic,
I'm looking for help in setting up my JFrog OSS instance properly.
I downloaded and unpacked the package suggested in the installation steps.
I modified the ports in the docker-compose.yaml and gave it a shot.
Below are the outputs of several files and of the execution.
System Diagnostic
└──╼ $sudo ./systemDiagnostics.sh
************** Start Artifactory Diagnostics *******************
Discovered compose installer, proceeding with var directory as :[/root/.jfrog/artifactory/var]
******************************** CHECK PORTS AVAILABLITY
********************************
Running: lsof -i:$ports_needed [ERROR] RESULT: NOT OK
---
Access 8040 AVAILABLE AccessGrpc 8045 AVAILABLE MetaData 8086 AVAILABLE FrontEnd 8070 AVAILABLE Replicator 8048 AVAILABLE Router 8046 AVAILABLE RouterTraefik 8049 AVAILABLE RouterGrpc 8047 AVAILABLE
[ERROR] Artifactory 8184 NOT AVAILABLE used by processName docker-pr processId 30261 [ERROR] Either stop the process or change the port by configuring artifactory.port in system.yaml
[ERROR] RouterExternal 8183 NOT AVAILABLE used by processName docker-pr processId 30274 [ERROR] Either stop the process or change the port by configuring router.entrypoints.externalPort in system.yaml
******************************** CHECKING MAX OPEN FILES
********************************
Running: ulimit -n [ERROR] RESULT: NOT OK
---
[ERROR] Number found 1024 is less than recommended minimum of 32000 for USER "root"
******************************** CHECKING MAX OPEN PROCESSES
********************************
Running: ulimit -u RESULT: OK
******************************** CHECK FIREWALL SETTINGS
********************************
RESULT: FirewallD is not configured
******************************** CHECK FIREWALL IPTABLES SETTINGS
********************************
Running: iptables -L INPUT -n -v | grep $ports_needed | grep -i -E "REJECT|DROP" RESULT: OK
Artifactory 8184 AVAILABLE Access 8040 AVAILABLE AccessGrpc 8045 AVAILABLE MetaData 8086 AVAILABLE FrontEnd 8070 AVAILABLE Replicator 8048 AVAILABLE Router 8046 AVAILABLE RouterExternal 8183 AVAILABLE RouterTraefik 8049 AVAILABLE RouterGrpc 8047 AVAILABLE
******************************** CHECK FIREWALL IP6TABLES SETTINGS
********************************
Running: ip6tables -L INPUT -n -v | grep $ports_needed | grep -i -E "REJECT|DROP" RESULT: OK
Artifactory 8184 AVAILABLE Access 8040 AVAILABLE AccessGrpc 8045 AVAILABLE MetaData 8086 AVAILABLE FrontEnd 8070 AVAILABLE Replicator 8048 AVAILABLE Router 8046 AVAILABLE RouterExternal 8183 AVAILABLE RouterTraefik 8049 AVAILABLE RouterGrpc 8047 AVAILABLE
******************************** CHECKING LOCALHOST PING
********************************
Running: ping localhost -c 1 > /dev/null 2>&1 RESULT: OK
******************************** PROXY LIST
********************************
Checking proxy configured in HTTP_PROXY RESULT: OK Checking proxy configured in HTTPS_PROXY RESULT: OK Checking proxy configured in NO_PROXY RESULT: OK Checking proxy configured in ALL_PROXY RESULT: OK
************** End Artifactory Diagnostics *******************
docker-compose.yaml
version: '3'
services:
  postgres:
    image: ${DOCKER_REGISTRY}/postgres:9.6.11
    container_name: postgresql
    environment:
      - POSTGRES_DB=artifactory
      - POSTGRES_USER=artifactory
      - POSTGRES_PASSWORD=r00t
    ports:
      - 5437:5437
    volumes:
      - ${ROOT_DATA_DIR}/var/data/postgres/data:/var/lib/postgresql/data
      - /etc/localtime:/etc/localtime:ro
    restart: always
    logging:
      driver: json-file
      options:
        max-size: "50m"
        max-file: "10"
    ulimits:
      nproc: 65535
      nofile:
        soft: 32000
        hard: 40000
  artifactory:
    image: ${DOCKER_REGISTRY}/jfrog/artifactory-oss:${ARTIFACTORY_VERSION}
    container_name: artifactory
    volumes:
      - ${ROOT_DATA_DIR}/var:/var/opt/jfrog/artifactory
      - /etc/localtime:/etc/localtime:ro
    restart: always
    depends_on:
      - postgres
    ulimits:
      nproc: 65535
      nofile:
        soft: 32000
        hard: 40000
    environment:
      - JF_ROUTER_ENTRYPOINTS_EXTERNALPORT=${JF_ROUTER_ENTRYPOINTS_EXTERNALPORT}
    ports:
      - ${JF_ROUTER_ENTRYPOINTS_EXTERNALPORT}:${JF_ROUTER_ENTRYPOINTS_EXTERNALPORT} # for router communication
      - 8185:8185 # for artifactory communication
    logging:
      driver: json-file
      options:
        max-size: "50m"
        max-file: "10"
system.yaml
──╼ $sudo cat /root/.jfrog/artifactory/var/etc/system.yaml
shared:
  node:
    ip: 127.0.1.1
    id: parrot
    name: parrot
  database:
    type: postgresql
    driver: org.postgresql.Driver
    password: r00t
    username: artifactory
    url: jdbc:postgresql://127.0.1.1:5437/artifactory
router:
  entrypoints:
    externalPort: 8185
Could someone please help me out with this issue?
EDIT 1.
catalina localhost.log
04-Jun-2020 07:07:55.585 INFO [localhost-startStop-1] org.apache.catalina.core.ApplicationContext.log 1 Spring WebApplicationInitializers detected on classpath
04-Jun-2020 07:07:55.624 INFO [localhost-startStop-2] org.apache.catalina.core.ApplicationContext.log Using artifactory.home at '/opt/jfrog/artifactory' resolved from: System property
04-Jun-2020 07:07:58.411 INFO [localhost-startStop-1] org.apache.catalina.core.ApplicationContext.log Initializing Spring embedded WebApplicationContext
04-Jun-2020 07:09:59.503 INFO [localhost-startStop-3] org.apache.catalina.core.ApplicationContext.log Closing Spring root WebApplicationContext
console.log
2020-06-03T21:16:48.869Z [jfrt ] [INFO ] [ ] [o.j.c.w.FileWatcher:147 ] [Thread-5 ] - Starting watch of folder configurations
2020-06-03T21:16:48.938Z [tomct] [SEVERE] [ ] [org.apache.tomcat.jdbc.pool.ConnectionPool] [org.apache.tomcat.jdbc.pool.ConnectionPool init] - Unable to create initial connections of pool.
org.postgresql.util.PSQLException: Connection to 127.0.1.1:5432 refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.
.env
## This file contains environment variables used by the docker-compose yaml files
## IMPORTANT: During installation, this file may be updated based on user choices or existing configuration
## Docker registry to fetch images from
DOCKER_REGISTRY=docker.bintray.io
## Version of artifactory to install
ARTIFACTORY_VERSION=7.5.5
## The Installation directory for Artifactory. IF not entered, the script will prompt you for this input. Default [$HOME/.jfrog/artifactory]
ROOT_DATA_DIR=/root/.jfrog/artifactory
# Router external port mapping. This property may be overridden from the system.yaml (router.entrypoints.externalPort)
JF_ROUTER_ENTRYPOINTS_EXTERNALPORT=8185
EDIT 2.
I'm facing the same issue when trying to follow another tutorial.
artifactory@1cdf04cd9ed1:/opt/jfrog/artifactory/app/bin$ ./systemDiagnostics.sh
************** Start Artifactory Diagnostics *******************
Discovered debian installer, proceeding with var directory as :[/opt/jfrog/artifactory/var]
********************************
CHECK PORTS AVAILABLITY
********************************
Running: lsof -i:$ports_needed
RESULT: OK
Artifactory 8081 AVAILABLE
Access 8040 AVAILABLE
AccessGrpc 8045 AVAILABLE
MetaData 8086 AVAILABLE
FrontEnd 8070 AVAILABLE
Replicator 8048 AVAILABLE
Router 8046 AVAILABLE
RouterExternal 8082 AVAILABLE
RouterTraefik 8049 AVAILABLE
RouterGrpc 8047 AVAILABLE
********************************
CHECKING MAX OPEN FILES
********************************
BusyBox v1.30.1 (2019-02-14 18:11:39 UTC) multi-call binary.
Usage: grep [-HhnlLoqvsriwFE] [-m N] [-A/B/C N] PATTERN/-e PATTERN.../-f FILE [FILE]...
Search for PATTERN in FILEs (or stdin)
-H Add 'filename:' prefix
-h Do not add 'filename:' prefix
-n Add 'line_no:' prefix
-l Show only names of files that match
-L Show only names of files that don't match
-c Show only count of matching lines
-o Show only the matching part of line
-q Quiet. Return 0 if PATTERN is found, 1 otherwise
-v Select non-matching lines
-s Suppress open and read errors
-r Recurse
-i Ignore case
-w Match whole words only
-x Match whole lines only
-F PATTERN is a literal (not regexp)
-E PATTERN is an extended regexp
-m N Match up to N times per file
-A N Print N lines of trailing context
-B N Print N lines of leading context
-C N Same as '-A N -B N'
-e PTRN Pattern to match
-f FILE Read pattern from file
[WARN] User does not exist, skipping max open files check for this user
********************************
CHECKING MAX OPEN PROCESSES
********************************
BusyBox v1.30.1 (2019-02-14 18:11:39 UTC) multi-call binary.
Usage: grep [-HhnlLoqvsriwFE] [-m N] [-A/B/C N] PATTERN/-e PATTERN.../-f FILE [FILE]...
Search for PATTERN in FILEs (or stdin)
-H Add 'filename:' prefix
-h Do not add 'filename:' prefix
-n Add 'line_no:' prefix
-l Show only names of files that match
-L Show only names of files that don't match
-c Show only count of matching lines
-o Show only the matching part of line
-q Quiet. Return 0 if PATTERN is found, 1 otherwise
-v Select non-matching lines
-s Suppress open and read errors
-r Recurse
-i Ignore case
-w Match whole words only
-x Match whole lines only
-F PATTERN is a literal (not regexp)
-E PATTERN is an extended regexp
-m N Match up to N times per file
-A N Print N lines of trailing context
-B N Print N lines of leading context
-C N Same as '-A N -B N'
-e PTRN Pattern to match
-f FILE Read pattern from file
[WARN] User does not exist, skipping max open processes check for this user
********************************
CHECK FIREWALL SETTINGS
********************************
RESULT: FirewallD is not configured
********************************
CHECK FIREWALL IPTABLES SETTINGS
********************************
RESULT: Iptables is not configured
********************************
CHECK FIREWALL IP6TABLES SETTINGS
********************************
RESULT: Ip6tables is not configured
********************************
CHECKING LOCALHOST PING
********************************
Running: ping localhost -c 1 > /dev/null 2>&1
[ERROR] RESULT: NOT OK
---
[ERROR] Unable to resolve localhost. check if localhost is well defined in /etc/hosts
********************************
PROXY LIST
********************************
Checking proxy configured in HTTP_PROXY
RESULT: OK
Checking proxy configured in HTTPS_PROXY
RESULT: OK
Checking proxy configured in NO_PROXY
RESULT: OK
Checking proxy configured in ALL_PROXY
RESULT: OK
************** End Artifactory Diagnostics *******************
artifactory@1cdf04cd9ed1:/opt/jfrog/artifactory/app/bin$ cat /etc/host
host.conf hostname hosts
artifactory@1cdf04cd9ed1:/opt/jfrog/artifactory/app/bin$ cat /etc/hosts
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.17.0.2 1cdf04cd9ed1
Thanks for your answers to my questions. Your Postgres URL is being resolved from inside the Artifactory container.
Please change the URL to jdbc:postgresql://postgres:5437/artifactory and try again, or run both services with network_mode: host.
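Applied to the system.yaml from the question, only the database block needs to change; a sketch:

shared:
  database:
    type: postgresql
    driver: org.postgresql.Driver
    url: jdbc:postgresql://postgres:5437/artifactory
    username: artifactory
    password: r00t

Here postgres is the Compose service name, which is resolvable from the artifactory container on the same Compose network.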