Dear readers,
I'm looking for help with properly setting up my JFrog OSS (Artifactory) installation.
I downloaded and unpacked the package suggested in the installation steps, modified the ports in the docker-compose.yaml, and gave it a try.
Below are the outputs of the relevant files and of the run.
System Diagnostic
└──╼ $sudo ./systemDiagnostics.sh
************** Start Artifactory Diagnostics *******************
Discovered compose installer, proceeding with var directory as :[/root/.jfrog/artifactory/var]
********************************
CHECK PORTS AVAILABLITY
********************************
Running: lsof -i:$ports_needed
[ERROR] RESULT: NOT OK
---
Access 8040 AVAILABLE
AccessGrpc 8045 AVAILABLE
MetaData 8086 AVAILABLE
FrontEnd 8070 AVAILABLE
Replicator 8048 AVAILABLE
Router 8046 AVAILABLE
RouterTraefik 8049 AVAILABLE
RouterGrpc 8047 AVAILABLE
[ERROR] Artifactory 8184 NOT AVAILABLE used by processName docker-pr processId 30261
[ERROR] Either stop the process or change the port by configuring artifactory.port in system.yaml
[ERROR] RouterExternal 8183 NOT AVAILABLE used by processName docker-pr processId 30274
[ERROR] Either stop the process or change the port by configuring router.entrypoints.externalPort in system.yaml
********************************
CHECKING MAX OPEN FILES
********************************
Running: ulimit -n
[ERROR] RESULT: NOT OK
---
[ERROR] Number found 1024 is less than recommended minimum of 32000 for USER "root"
********************************
CHECKING MAX OPEN PROCESSES
********************************
Running: ulimit -u
RESULT: OK
********************************
CHECK FIREWALL SETTINGS
********************************
RESULT: FirewallD is not configured
********************************
CHECK FIREWALL IPTABLES SETTINGS
********************************
Running: iptables -L INPUT -n -v | grep $ports_needed | grep -i -E "REJECT|DROP"
RESULT: OK
Artifactory 8184 AVAILABLE
Access 8040 AVAILABLE
AccessGrpc 8045 AVAILABLE
MetaData 8086 AVAILABLE
FrontEnd 8070 AVAILABLE
Replicator 8048 AVAILABLE
Router 8046 AVAILABLE
RouterExternal 8183 AVAILABLE
RouterTraefik 8049 AVAILABLE
RouterGrpc 8047 AVAILABLE
********************************
CHECK FIREWALL IP6TABLES SETTINGS
********************************
Running: ip6tables -L INPUT -n -v | grep $ports_needed | grep -i -E "REJECT|DROP"
RESULT: OK
Artifactory 8184 AVAILABLE
Access 8040 AVAILABLE
AccessGrpc 8045 AVAILABLE
MetaData 8086 AVAILABLE
FrontEnd 8070 AVAILABLE
Replicator 8048 AVAILABLE
Router 8046 AVAILABLE
RouterExternal 8183 AVAILABLE
RouterTraefik 8049 AVAILABLE
RouterGrpc 8047 AVAILABLE
********************************
CHECKING LOCALHOST PING
********************************
Running: ping localhost -c 1 > /dev/null 2>&1
RESULT: OK
********************************
PROXY LIST
********************************
Checking proxy configured in HTTP_PROXY
RESULT: OK
Checking proxy configured in HTTPS_PROXY
RESULT: OK
Checking proxy configured in NO_PROXY
RESULT: OK
Checking proxy configured in ALL_PROXY
RESULT: OK
************** End Artifactory Diagnostics *******************
docker-compose.yaml
version: '3'
services:
  postgres:
    image: ${DOCKER_REGISTRY}/postgres:9.6.11
    container_name: postgresql
    environment:
      - POSTGRES_DB=artifactory
      - POSTGRES_USER=artifactory
      - POSTGRES_PASSWORD=r00t
    ports:
      - 5437:5437
    volumes:
      - ${ROOT_DATA_DIR}/var/data/postgres/data:/var/lib/postgresql/data
      - /etc/localtime:/etc/localtime:ro
    restart: always
    logging:
      driver: json-file
      options:
        max-size: "50m"
        max-file: "10"
    ulimits:
      nproc: 65535
      nofile:
        soft: 32000
        hard: 40000
  artifactory:
    image: ${DOCKER_REGISTRY}/jfrog/artifactory-oss:${ARTIFACTORY_VERSION}
    container_name: artifactory
    volumes:
      - ${ROOT_DATA_DIR}/var:/var/opt/jfrog/artifactory
      - /etc/localtime:/etc/localtime:ro
    restart: always
    depends_on:
      - postgres
    ulimits:
      nproc: 65535
      nofile:
        soft: 32000
        hard: 40000
    environment:
      - JF_ROUTER_ENTRYPOINTS_EXTERNALPORT=${JF_ROUTER_ENTRYPOINTS_EXTERNALPORT}
    ports:
      - ${JF_ROUTER_ENTRYPOINTS_EXTERNALPORT}:${JF_ROUTER_ENTRYPOINTS_EXTERNALPORT} # for router communication
      - 8185:8185 # for artifactory communication
    logging:
      driver: json-file
      options:
        max-size: "50m"
        max-file: "10"
system.yaml
──╼ $sudo cat /root/.jfrog/artifactory/var/etc/system.yaml
shared:
  node:
    ip: 127.0.1.1
    id: parrot
    name: parrot
  database:
    type: postgresql
    driver: org.postgresql.Driver
    password: r00t
    username: artifactory
    url: jdbc:postgresql://127.0.1.1:5437/artifactory
router:
  entrypoints:
    externalPort: 8185
Could someone please help me work through this issue?
EDIT 1.
catalina localhost.log
04-Jun-2020 07:07:55.585 INFO [localhost-startStop-1] org.apache.catalina.core.ApplicationContext.log 1 Spring WebApplicationInitializers detected on classpath
04-Jun-2020 07:07:55.624 INFO [localhost-startStop-2] org.apache.catalina.core.ApplicationContext.log Using artifactory.home at '/opt/jfrog/artifactory' resolved from: System property
04-Jun-2020 07:07:58.411 INFO [localhost-startStop-1] org.apache.catalina.core.ApplicationContext.log Initializing Spring embedded WebApplicationContext
04-Jun-2020 07:09:59.503 INFO [localhost-startStop-3] org.apache.catalina.core.ApplicationContext.log Closing Spring root WebApplicationContext
console.log
2020-06-03T21:16:48.869Z [jfrt ] [INFO ] [ ] [o.j.c.w.FileWatcher:147 ] [Thread-5 ] - Starting watch of folder configurations
2020-06-03T21:16:48.938Z [tomct] [SEVERE] [ ] [org.apache.tomcat.jdbc.pool.ConnectionPool] [org.apache.tomcat.jdbc.pool.ConnectionPool init] - Unable to create initial connections of pool.
org.postgresql.util.PSQLException: Connection to 127.0.1.1:5432 refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.
.env
## This file contains environment variables used by the docker-compose yaml files
## IMPORTANT: During installation, this file may be updated based on user choices or existing configuration
## Docker registry to fetch images from
DOCKER_REGISTRY=docker.bintray.io
## Version of artifactory to install
ARTIFACTORY_VERSION=7.5.5
## The Installation directory for Artifactory. IF not entered, the script will prompt you for this input. Default [$HOME/.jfrog/artifactory]
ROOT_DATA_DIR=/root/.jfrog/artifactory
# Router external port mapping. This property may be overridden from the system.yaml (router.entrypoints.externalPort)
JF_ROUTER_ENTRYPOINTS_EXTERNALPORT=8185
EDIT 2.
I'm facing the same issue when trying to follow another tutorial.
artifactory@1cdf04cd9ed1:/opt/jfrog/artifactory/app/bin$ ./systemDiagnostics.sh
************** Start Artifactory Diagnostics *******************
Discovered debian installer, proceeding with var directory as :[/opt/jfrog/artifactory/var]
********************************
CHECK PORTS AVAILABLITY
********************************
Running: lsof -i:$ports_needed
RESULT: OK
Artifactory 8081 AVAILABLE
Access 8040 AVAILABLE
AccessGrpc 8045 AVAILABLE
MetaData 8086 AVAILABLE
FrontEnd 8070 AVAILABLE
Replicator 8048 AVAILABLE
Router 8046 AVAILABLE
RouterExternal 8082 AVAILABLE
RouterTraefik 8049 AVAILABLE
RouterGrpc 8047 AVAILABLE
********************************
CHECKING MAX OPEN FILES
********************************
BusyBox v1.30.1 (2019-02-14 18:11:39 UTC) multi-call binary.
Usage: grep [-HhnlLoqvsriwFE] [-m N] [-A/B/C N] PATTERN/-e PATTERN.../-f FILE [FILE]...
Search for PATTERN in FILEs (or stdin)
-H Add 'filename:' prefix
-h Do not add 'filename:' prefix
-n Add 'line_no:' prefix
-l Show only names of files that match
-L Show only names of files that don't match
-c Show only count of matching lines
-o Show only the matching part of line
-q Quiet. Return 0 if PATTERN is found, 1 otherwise
-v Select non-matching lines
-s Suppress open and read errors
-r Recurse
-i Ignore case
-w Match whole words only
-x Match whole lines only
-F PATTERN is a literal (not regexp)
-E PATTERN is an extended regexp
-m N Match up to N times per file
-A N Print N lines of trailing context
-B N Print N lines of leading context
-C N Same as '-A N -B N'
-e PTRN Pattern to match
-f FILE Read pattern from file
[WARN] User does not exist, skipping max open files check for this user
********************************
CHECKING MAX OPEN PROCESSES
********************************
BusyBox v1.30.1 (2019-02-14 18:11:39 UTC) multi-call binary.
Usage: grep [-HhnlLoqvsriwFE] [-m N] [-A/B/C N] PATTERN/-e PATTERN.../-f FILE [FILE]...
Search for PATTERN in FILEs (or stdin)
-H Add 'filename:' prefix
-h Do not add 'filename:' prefix
-n Add 'line_no:' prefix
-l Show only names of files that match
-L Show only names of files that don't match
-c Show only count of matching lines
-o Show only the matching part of line
-q Quiet. Return 0 if PATTERN is found, 1 otherwise
-v Select non-matching lines
-s Suppress open and read errors
-r Recurse
-i Ignore case
-w Match whole words only
-x Match whole lines only
-F PATTERN is a literal (not regexp)
-E PATTERN is an extended regexp
-m N Match up to N times per file
-A N Print N lines of trailing context
-B N Print N lines of leading context
-C N Same as '-A N -B N'
-e PTRN Pattern to match
-f FILE Read pattern from file
[WARN] User does not exist, skipping max open processes check for this user
********************************
CHECK FIREWALL SETTINGS
********************************
RESULT: FirewallD is not configured
********************************
CHECK FIREWALL IPTABLES SETTINGS
********************************
RESULT: Iptables is not configured
********************************
CHECK FIREWALL IP6TABLES SETTINGS
********************************
RESULT: Ip6tables is not configured
********************************
CHECKING LOCALHOST PING
********************************
Running: ping localhost -c 1 > /dev/null 2>&1
[ERROR] RESULT: NOT OK
---
[ERROR] Unable to resolve localhost. check if localhost is well defined in /etc/hosts
********************************
PROXY LIST
********************************
Checking proxy configured in HTTP_PROXY
RESULT: OK
Checking proxy configured in HTTPS_PROXY
RESULT: OK
Checking proxy configured in NO_PROXY
RESULT: OK
Checking proxy configured in ALL_PROXY
RESULT: OK
************** End Artifactory Diagnostics *******************
artifactory@1cdf04cd9ed1:/opt/jfrog/artifactory/app/bin$ cat /etc/host
host.conf hostname hosts
artifactory@1cdf04cd9ed1:/opt/jfrog/artifactory/app/bin$ cat /etc/hosts
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.17.0.2 1cdf04cd9ed1
Thanks for your answers to my questions. Your Postgres URL is being resolved from inside the Artifactory container, so 127.0.1.1 does not reach the postgres container.
Please change the URL to jdbc:postgresql://postgres:5437/artifactory and try again, or run both services with network_mode: host.
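For illustration, here is a minimal sketch of the database block in system.yaml after that change. It assumes the compose service keeps the name postgres and that Postgres really listens on 5437 inside the container; if it still listens on the default 5432, use that port instead.
shared:
  database:
    type: postgresql
    driver: org.postgresql.Driver
    username: artifactory
    password: r00t
    url: jdbc:postgresql://postgres:5437/artifactory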
Related
I'm trying to install pihole on a Kubernetes cluster (running on Docker) via helm. I'm following this guide to do so. Everything seems to go smoothly, and the deployment reports success:
NAME: pihole
LAST DEPLOYED: Wed Sep 30 22:22:15 2020
NAMESPACE: pihole
STATUS: deployed
REVISION: 1
TEST SUITE: None
But the pihole pod never reaches the ready state; it just restarts after a couple of minutes. Upon inspecting the pod I see:
lastState:
terminated:
containerID: docker://16e2a318b460d4d5aebd502175fb688fc150993940181827a506c086e2cb326a
exitCode: 0
finishedAt: "2020-09-30T22:01:55Z"
reason: Completed
startedAt: "2020-09-30T21:59:17Z"
How do I prevent this from continually restarting once it's complete?
Here is the output of kubectl logs <POD_NAME>:
[s6-init] making user provided files available at /var/run/s6/etc...exited 0.
[s6-init] ensuring user provided files have correct perms...exited 0.
[fix-attrs.d] applying ownership & permissions fixes...
[fix-attrs.d] 01-resolver-resolv: applying...
[fix-attrs.d] 01-resolver-resolv: exited 0.
[fix-attrs.d] done.
[cont-init.d] executing container initialization scripts...
[cont-init.d] 20-start.sh: executing...
::: Starting docker specific checks & setup for docker pihole/pihole
[✓] Update local cache of available packages
[i] Existing PHP installation detected : PHP version 7.0.33-0+deb9u8
[i] Installing configs from /etc/.pihole...
[i] Existing dnsmasq.conf found... it is not a Pi-hole file, leaving alone!
[✓] Copying 01-pihole.conf to /etc/dnsmasq.d/01-pihole.conf
chown: cannot access '': No such file or directory
chmod: cannot access '': No such file or directory
chown: cannot access '/etc/pihole/dhcp.leases': No such file or directory
::: Pre existing WEBPASSWORD found
Using default DNS servers: 8.8.8.8 & 8.8.4.4
DNSMasq binding to default interface: eth0
Added ENV to php:
"PHP_ERROR_LOG" => "/var/log/lighttpd/error.log",
"ServerIP" => "0.0.0.0",
"VIRTUAL_HOST" => "pi.hole",
Using IPv4 and IPv6
::: Preexisting ad list /etc/pihole/adlists.list detected ((exiting setup_blocklists early))
https://raw.githubusercontent.com/StevenBlack/hosts/master/hosts
https://mirror1.malwaredomains.com/files/justdomains
::: Testing pihole-FTL DNS: FTL started!
::: Testing lighttpd config: Syntax OK
::: All config checks passed, cleared for startup ...
::: Docker start setup complete
[✗] DNS resolution is currently unavailable
You are not alone with this issue.
The resolution is here: chown: cannot access '/etc/pihole/dhcp.leases': No such file or directory
This happens for me as well; I used that same tutorial to set up my cluster. If you are also using a persistent volume, open an SSH connection to the machine holding your drive and run two commands from the /mnt/ssd directory described in the tutorial: ls -l shows the owner and group of each file, and they should all be www-data; if they are not, run sudo chown -R www-data:www-data pihole. This will also allow you to add more whitelists/blacklists/adlists from the web portal.
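A consolidated sketch of those steps (the user and host names are placeholders; the /mnt/ssd path comes from the tutorial):
ssh <user>@<node-with-the-drive>        # placeholder host
cd /mnt/ssd                             # path described in the tutorial
ls -l                                   # every entry should be owned by www-data
sudo chown -R www-data:www-data pihole  # fix the ownership recursively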
I have a Kafka/Zookeeper container and a Divolte container defined in https://github.com/divolte/docker-divolte/blob/master/docker-compose.yml, which start and work correctly with
docker-compose up -d --build
I want to add the HDFS container https://hub.docker.com/r/mdouchement/hdfs/, which starts and works correctly with
docker run -p 22022:22 -p 8020:8020 -p 50010:50010 -p 50020:50020 -p 50070:50070 -p 50075:50075 -it mdouchement/hdfs
But after adding this block to the yml:
hdfs:
image: mdouchement/hdfs
environment:
DIVOLTE_KAFKA_BROKER_LIST: kafka:9092
ports:
- "22022:22"
- "8020:8020"
- "50010:50010"
- "50020:50020"
- "50070:50070"
- "50075:50075"
depends_on:
- kafka
The web UI at http://localhost:50070 and the namenode at http://localhost:8020/ did not answer. Could you help me add the new container? Which of the HDFS ports do I have to use as the source connection port?
The logs of the HDFS container are:
2020-02-21T15:11:47.613270635Z Starting OpenBSD Secure Shell server: sshd.
2020-02-21T15:11:50.440130986Z Starting namenodes on [localhost]
2020-02-21T15:11:54.616344960Z localhost: Warning: Permanently added 'localhost' (ECDSA) to the list of known hosts.
2020-02-21T15:11:54.616369660Z localhost: starting namenode, logging to /opt/hadoop/logs/hadoop-root-namenode-278b399bc998.out
2020-02-21T15:11:59.328993612Z localhost: Warning: Permanently added 'localhost' (ECDSA) to the list of known hosts.
2020-02-21T15:11:59.329016212Z localhost: starting datanode, logging to /opt/hadoop/logs/hadoop-root-datanode-278b399bc998.out
2020-02-21T15:12:06.078269195Z Starting secondary namenodes [0.0.0.0]
2020-02-21T15:12:10.837364362Z 0.0.0.0: Warning: Permanently added '0.0.0.0' (ECDSA) to the list of known hosts.
2020-02-21T15:12:10.839375064Z 0.0.0.0: starting secondarynamenode, logging to /opt/hadoop/logs/hadoop-root-secondarynamenode-278b399bc998.out
2020-02-21T15:12:17.249040842Z starting portmap, logging to /opt/hadoop/logs/hadoop--portmap-278b399bc998.out
2020-02-21T15:12:18.253954832Z DEPRECATED: Use of this script to execute hdfs command is deprecated.
2020-02-21T15:12:18.253993233Z Instead use the hdfs command for it.
2020-02-21T15:12:18.254002633Z
2020-02-21T15:12:21.277829129Z starting nfs3, logging to /opt/hadoop/logs/hadoop--nfs3-278b399bc998.out
2020-02-21T15:12:22.284864146Z DEPRECATED: Use of this script to execute hdfs command is deprecated.
2020-02-21T15:12:22.284883446Z Instead use the hdfs command for it.
2020-02-21T15:12:22.284887146Z
Port description:
Ports
Portmap -> 111
NFS -> 2049
HDFS namenode -> 8020 (hdfs://localhost:8020)
HDFS datanode -> 50010
HDFS datanode (ipc) -> 50020
HDFS Web browser -> 50070
HDFS datanode (http) -> 50075
HDFS secondary namenode -> 50090
SSH -> 22
The docker-compose ps output is:
Name Command State Ports
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
divolte-streamsets-quickstart_divolte_1 /opt/divolte/start.sh Up 0.0.0.0:8290->8290/tcp
divolte-streamsets-quickstart_hdfs_1 /bin/sh -c service ssh sta ... Exit 0
divolte-streamsets-quickstart_kafka_1 supervisord -n Up 2181/tcp, 9092/tcp, 9093/tcp, 9094/tcp, 9095/tcp, 9096/tcp, 9097/tcp, 9098/tcp, 9099/tcp
divolte-streamsets-quickstart_streamsets_1 /docker-entrypoint.sh dc -exec Up 0.0.0.0:18630->18630/tcp
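For comparison, the docker run command that works above is interactive (-it), while the compose service is not. Below is a hedged sketch of the same service with the compose equivalents of those flags; this is only an assumption about why the container exits with code 0, not a verified fix.
  hdfs:
    image: mdouchement/hdfs
    tty: true          # compose equivalent of -t
    stdin_open: true   # compose equivalent of -i
    environment:
      DIVOLTE_KAFKA_BROKER_LIST: kafka:9092
    ports:
      - "22022:22"
      - "8020:8020"
      - "50010:50010"
      - "50020:50020"
      - "50070:50070"
      - "50075:50075"
    depends_on:
      - kafka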
I am new to Docker and all this logging stuff, so maybe I'm making a stupid mistake; thanks in advance for helping. I have ELK running in a Docker container (6.2.2) via the Dockerfile line:
FROM sebp/elk:latest
In a separate container I am installing and running Filebeat via the following Dockerfile lines:
RUN curl -L -O -k https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.2.2-amd64.deb
RUN dpkg -i filebeat-6.2.2-amd64.deb
COPY resources/filebeat/filebeat.yml /etc/filebeat/filebeat.yml
RUN chmod go-w /etc/filebeat/filebeat.yml
RUN /usr/share/filebeat/bin/filebeat -e -d "publish" &
My Filebeat configuration is:
filebeat.prospectors:
- type: log
  enabled: true
  paths:
    - /jetty/jetty-distribution-9.3.8.v20160314/logs/*.log
output.logstash:
  enabled: false
  hosts: ["elk-stack:9002"]
  #index: 'audit'
output.elasticsearch:
  enabled: true
  hosts: ["elk-stack:9200"]
  #index: "audit-%{+yyyy.MM.dd}"
path.config: "/etc/filebeat"
#setup.template.name: "audit"
#setup.template.pattern: "audit-*"
#setup.template.fields: "${path.config}/fields.yml"
As you can see, I was trying to use a custom index in Elasticsearch, but now I'm just trying to get the default working first. The Jetty logs all have global read permissions.
The docker container logs show no errors and after running I make sure the config and output are OK:
# filebeat test config
Config OK
# filebeat test output
elasticsearch: http://elk-stack:9200...
parse url... OK
connection...
parse host... OK
dns lookup... OK
addresses: 172.17.0.3
dial up... OK
TLS... WARN secure connection disabled
talk to server... OK
version: 6.2.2
/var/log/filebeat/filebeat shows:
2018-03-15T13:23:38.859Z INFO instance/beat.go:468 Home path: [/usr/share/filebeat] Config path: [/etc/filebeat] Data path: [/var/lib/filebeat] Logs path: [/var/log/filebeat]
2018-03-15T13:23:38.860Z INFO instance/beat.go:475 Beat UUID: ed5cecaf-cbf5-438d-bbb9-30bab80c4cb9
2018-03-15T13:23:38.860Z INFO elasticsearch/client.go:145 Elasticsearch url: http://elk-stack:9200
2018-03-15T13:23:38.891Z INFO elasticsearch/client.go:690 Connected to Elasticsearch version 6.2.2
However, when I hit localhost:9200/_cat/indices?v it doesn't return any indices:
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
How do I get this working? I am out of ideas. Thanks again for any help.
To answer my own question: you can't start Filebeat with
RUN /usr/share/filebeat/bin/filebeat -e -d "publish" &
and have it keep running once the container starts, because RUN executes only at build time. You need to start it manually, or have it start in its own container via a CMD or ENTRYPOINT instruction.
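For instance, here is a minimal sketch of the tail of the Filebeat Dockerfile using ENTRYPOINT instead of RUN; the flags mirror the ones above, and the -c path is an assumption matching where the config file was copied.
COPY resources/filebeat/filebeat.yml /etc/filebeat/filebeat.yml
RUN chmod go-w /etc/filebeat/filebeat.yml
# start Filebeat in the foreground when the container runs
ENTRYPOINT ["/usr/share/filebeat/bin/filebeat", "-e", "-d", "publish", "-c", "/etc/filebeat/filebeat.yml"]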
I used this guide to install Kubernetes on a Vagrant cluster:
https://kubernetes.io/docs/getting-started-guides/kubeadm/
At step (2/4), Initializing your master, I got some errors:
[root@localhost ~]# kubeadm init
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.6.4
[init] Using Authorization mode: RBAC
[preflight] Running pre-flight checks
[preflight] Some fatal errors occurred:
/proc/sys/net/bridge/bridge-nf-call-iptables contents are not set to 1
[preflight] If you know what you are doing, you can skip pre-flight checks with `--skip-preflight-checks`
I checked the content of /proc/sys/net/bridge/bridge-nf-call-iptables; it contains only a 0.
At step (3/4), Installing a pod network, I downloaded the kube-flannel file:
https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
And ran kubectl apply -f kube-flannel.yml, which failed with:
[root@localhost ~]# kubectl apply -f kube-flannel.yml
The connection to the server localhost:8080 was refused - did you specify the right host or port?
At this point, I don't know how to go on.
My Vagrantfile:
# Master Server
config.vm.define "master", primary: true do |master|
master.vm.network :private_network, ip: "192.168.33.200"
master.vm.network :forwarded_port, guest: 22, host: 1234, id: 'ssh'
end
You can set /proc/sys/net/bridge/bridge-nf-call-iptables by editing /etc/sysctl.conf. There you can add [1]
net.bridge.bridge-nf-call-iptables = 1
Then execute
sudo sysctl -p
And the changes will be applied. With this the pre-flight check should pass.
[1] http://wiki.libvirt.org/page/Net.bridge.bridge-nf-call_and_sysctl.conf
Update #2019/09/02
Sometimes modprobe br_netfilter is unreliable (you may need to redo it after re-login), so use the following instead on a systemd system:
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
systemctl restart systemd-modules-load.service
echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
echo 1 > /proc/sys/net/bridge/bridge-nf-call-ip6tables
YES, the accepted answer is right, but I ran into
cannot stat /proc/sys/net/bridge/bridge-nf-call-ip6tables: No such file or directory
So I did
modprobe br_netfilter
echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
echo 1 > /proc/sys/net/bridge/bridge-nf-call-ip6tables
sudo sysctl -p
Then solved.
On Ubuntu 16.04 I just had to:
modprobe br_netfilter
Default value in /proc/sys/net/bridge/bridge-nf-call-iptables is already 1.
Then I added br_netfilter to /etc/modules to load the module automatically on next boot.
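One way to do that (a one-line sketch; adjust if your distribution uses /etc/modules-load.d instead, as in the update above):
echo br_netfilter | sudo tee -a /etc/modules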
As mentioned in K8s docs - Installing kubeadm under the Letting iptables see bridged traffic section:
Make sure that the br_netfilter module is loaded. This can be done
by running lsmod | grep br_netfilter. To load it explicitly call
sudo modprobe br_netfilter.
As a requirement for your Linux Node's iptables to correctly see
bridged traffic, you should ensure
net.bridge.bridge-nf-call-iptables is set to 1 in your sysctl
config, e.g.
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system
Regarding the preflight errors, you can see in Kubeadm Implementation details under the preflight-checks section:
Kubeadm executes a set of preflight checks before starting the init,
with the aim to verify preconditions and avoid common cluster startup
problems.
The following missing configurations will produce errors:
.
.
if /proc/sys/net/bridge/bridge-nf-call-iptables file does not exist/does not contain 1
if advertise address is ipv6 and /proc/sys/net/bridge/bridge-nf-call-ip6tables does not exist/does not contain 1.
if swap is on
.
.
The one-liner way:
sysctl net.bridge.bridge-nf-call-iptables=1
In DockerCloud I am trying to get my container to speak with the other container. I believe the problem is the hostname not resolving (this is set in /conf.d/kafka.yaml shown below).
To get DockerCloud to have the two containers communicate, I have tried many variations including the full host-name kafka-development-1 and kafka-development-1.kafka, etc.
The error I keep getting is in the datadog-agent info:
Within the container I run ./etc/init.d/datadog-agent info and receive:
kafka
-----
- instance #kafka-kafka-development-9092 [ERROR]: 'Cannot connect to instance
kafka-development:9092 java.io.IOException: Failed to retrieve RMIServer stub:
javax.naming.CommunicationException [Root exception is java.rmi.ConnectIOException: error during JRMP connection establishment; nested exception is: \n\tjava.net.SocketException: Connection reset]' collected 0 metrics
- Collected 0 metrics, 0 events & 0 service checks
Here are the steps I take, in detail.
SSH Into Docker Node:
$ docker ps
CONTAINER | PORTS
datadog-agent-kafka-development-1.2fb73f62 | 8125/udp, 9001/tcp
kafka-development-1.3dc7c2d0 | 0.0.0.0:9092->9092/tcp
I log into the containers to see their values, this is the datadog-agent:
$ docker exec -it datadog-agent-kafka-development-1.2fb73f62 /bin/bash
$ > echo $DOCKERCLOUD_CONTAINER_HOSTNAME
datadog-agent-kafka-development-1
$ > tail /etc/hosts
172.17.0.7 datadog-agent-kafka-development-1
10.7.0.151 datadog-agent-kafka-development-1
This is the kafka container:
$ docker exec -it kafka-development-1.3dc7c2d0 /bin/bash
$ > echo $DOCKERCLOUD_CONTAINER_HOSTNAME
kafka-development-1
$ > tail /etc/hosts
172.17.0.6 kafka-development-1
10.7.0.8 kafka-development-1
$ > echo $KAFKA_ADVERTISED_HOST_NAME
kafka-development.c23d1d00.svc.dockerapp.io
$ > echo $KAFKA_ADVERTISED_PORT
9092
$ > echo $KAFKA_ZOOKEEPER_CONNECT
zookeeper-development:2181
Datadog conf.d/kafka.yaml:
instances:
  - host: kafka-development
    port: 9092 # This is the JMX port on which Kafka exposes its metrics (usually 9999)
    tags:
      kafka: broker
      env: development
# ... Defaults Below
Can anyone see what I am doing wrong?
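For reference, a hedged sketch of what the instance block might look like if it pointed at Kafka's JMX port rather than the 9092 broker port, as the inline comment in the config above suggests; 9999 is only an assumption, so use whatever JMX port the Kafka container actually exposes.
instances:
  - host: kafka-development
    port: 9999 # assumed JMX port, not the 9092 broker port
    tags:
      kafka: broker
      env: development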