I'm testing two platforms, centos-7 and ubuntu-16.04. Both converge successfully, but both fail during verify.
Ubuntu:
System Package apache2
✔ should be installed
Service apache2
× should be running
expected that `Service apache2` is running
Command curl localhost
✔ stdout should match /hello/i
✔ exit_status should eq 0
Port 80
✔ should be listening
Test Summary: 4 successful, 1 failure, 0 skipped
It seems strange that the apache2 running check fails while curl localhost succeeds.
I did a kitchen login and ran:
$ sudo systemctl status apache2
Failed to connect to bus: No such file or directory
so I tried
$ ps -eo comm,etime,user | grep apache2
apache2 06:34:11 root
apache2 06:34:11 www-data
apache2 06:34:11 www-data
Looks like apache2 is running.
CentOS-7:
System Package httpd
✔ should be installed
Service httpd
✔ should be running
Command curl localhost
✔ stdout should match /hello/i
✔ exit_status should eq 0
Port 80
× should be listening
expected `Port 80.listening?` to return true, got false
Test Summary: 4 successful, 1 failure, 0 skipped
Strange that httpd is running and curl works, yet the test says port 80 is not listening?
So I logged in and ran netstat
$ sudo netstat -tulpn | grep :80
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 562/httpd
Here are my tests:
package_name = service_name =
  case os[:family]
  when 'redhat' then 'httpd'
  when 'debian' then 'apache2'
  end
describe package(package_name) do
  it { should be_installed }
end

describe service(service_name) do
  it { should be_running }
end

describe command('curl localhost') do
  its('stdout') { should match(/hello/i) }
  its('exit_status') { should eq 0 }
end

describe port(80) do
  it { should be_listening }
end
Here is my .kitchen.yml
---
driver:
  name: docker
  privileged: true

provisioner:
  name: chef_zero

verifier:
  name: inspec

platforms:
  - name: ubuntu-16.04
  - name: centos-7
    driver:
      platform: rhel
      run_command: /usr/lib/systemd/systemd

suites:
  - name: default
    run_list:
      - recipe[hello_world_test::default]
      - recipe[hello_world_test::deps]
    verifier:
      inspec_tests:
        - test/integration/default
    attributes:
Any idea why I get these failures when, as far as I can tell, the services on the test machines are working as they should?
Thanks,
Andrew
The first failure is because systemd is not set up in your Ubuntu platform. By default, kitchen-docker doesn't set up systemd support (as you can see from how you had to set it up for centos-7).
The second issue is more likely something funky with ss, which is the modern replacement for netstat. InSpec does have some fallback logic to use netstat, but check out https://github.com/inspec/inspec/blob/8683c54510808462c7f3df6d92833aff3b21fe42/lib/resources/port.rb#L385-L421 and compare it to what's in your running container.
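If it helps, here is a minimal sketch of what mirroring that centos-7 setup for Ubuntu might look like in the platforms section of .kitchen.yml. The /lib/systemd/systemd path is my assumption for a stock Ubuntu 16.04 image, so verify it inside your container first:

platforms:
  - name: ubuntu-16.04
    driver:
      run_command: /lib/systemd/systemd   # assumed systemd path on Ubuntu 16.04; check your image
  - name: centos-7
    driver:
      platform: rhel
      run_command: /usr/lib/systemd/systemd

For the centos-7 port check, comparing the output of ss -tulpn inside the running container against the parsing in that port.rb link should show whether InSpec's ss path is what's tripping it up.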
Related
Dear whoever may read this topic,
I'm looking for help in properly setting up my JFrog Artifactory OSS.
I downloaded and unpacked the package suggested in the installation steps.
I modified the ports in the docker-compose.yaml and gave it a shot.
Below are the outputs of several files and the execution.
System Diagnostic
└──╼ $sudo ./systemDiagnostics.sh
************** Start Artifactory Diagnostics *******************
Discovered compose installer, proceeding with var directory as :[/root/.jfrog/artifactory/var]
********************************
CHECK PORTS AVAILABLITY
********************************
Running: lsof -i:$ports_needed
[ERROR] RESULT: NOT OK
---
Access 8040 AVAILABLE
AccessGrpc 8045 AVAILABLE
MetaData 8086 AVAILABLE
FrontEnd 8070 AVAILABLE
Replicator 8048 AVAILABLE
Router 8046 AVAILABLE
RouterTraefik 8049 AVAILABLE
RouterGrpc 8047 AVAILABLE
[ERROR] Artifactory 8184 NOT AVAILABLE used by processName docker-pr processId 30261
[ERROR] Either stop the process or change the port by configuring artifactory.port in system.yaml
[ERROR] RouterExternal 8183 NOT AVAILABLE used by processName docker-pr processId 30274
[ERROR] Either stop the process or change the port by configuring router.entrypoints.externalPort in system.yaml
********************************
CHECKING MAX OPEN FILES
********************************
Running: ulimit -n
[ERROR] RESULT: NOT OK
---
[ERROR] Number found 1024 is less than recommended minimum of 32000 for USER "root"
********************************
CHECKING MAX OPEN PROCESSES
********************************
Running: ulimit -u
RESULT: OK
********************************
CHECK FIREWALL SETTINGS
********************************
RESULT: FirewallD is not configured
********************************
CHECK FIREWALL IPTABLES SETTINGS
********************************
Running: iptables -L INPUT -n -v | grep $ports_needed | grep -i -E "REJECT|DROP"
RESULT: OK
Artifactory 8184 AVAILABLE
Access 8040 AVAILABLE
AccessGrpc 8045 AVAILABLE
MetaData 8086 AVAILABLE
FrontEnd 8070 AVAILABLE
Replicator 8048 AVAILABLE
Router 8046 AVAILABLE
RouterExternal 8183 AVAILABLE
RouterTraefik 8049 AVAILABLE
RouterGrpc 8047 AVAILABLE
********************************
CHECK FIREWALL IP6TABLES SETTINGS
********************************
Running: ip6tables -L INPUT -n -v | grep $ports_needed | grep -i -E "REJECT|DROP"
RESULT: OK
Artifactory 8184 AVAILABLE
Access 8040 AVAILABLE
AccessGrpc 8045 AVAILABLE
MetaData 8086 AVAILABLE
FrontEnd 8070 AVAILABLE
Replicator 8048 AVAILABLE
Router 8046 AVAILABLE
RouterExternal 8183 AVAILABLE
RouterTraefik 8049 AVAILABLE
RouterGrpc 8047 AVAILABLE
********************************
CHECKING LOCALHOST PING
********************************
Running: ping localhost -c 1 > /dev/null 2>&1
RESULT: OK
********************************
PROXY LIST
********************************
Checking proxy configured in HTTP_PROXY
RESULT: OK
Checking proxy configured in HTTPS_PROXY
RESULT: OK
Checking proxy configured in NO_PROXY
RESULT: OK
Checking proxy configured in ALL_PROXY
RESULT: OK
************** End Artifactory Diagnostics *******************
docker-compose.yaml
version: '3'
services:
  postgres:
    image: ${DOCKER_REGISTRY}/postgres:9.6.11
    container_name: postgresql
    environment:
      - POSTGRES_DB=artifactory
      - POSTGRES_USER=artifactory
      - POSTGRES_PASSWORD=r00t
    ports:
      - 5437:5437
    volumes:
      - ${ROOT_DATA_DIR}/var/data/postgres/data:/var/lib/postgresql/data
      - /etc/localtime:/etc/localtime:ro
    restart: always
    logging:
      driver: json-file
      options:
        max-size: "50m"
        max-file: "10"
    ulimits:
      nproc: 65535
      nofile:
        soft: 32000
        hard: 40000
  artifactory:
    image: ${DOCKER_REGISTRY}/jfrog/artifactory-oss:${ARTIFACTORY_VERSION}
    container_name: artifactory
    volumes:
      - ${ROOT_DATA_DIR}/var:/var/opt/jfrog/artifactory
      - /etc/localtime:/etc/localtime:ro
    restart: always
    depends_on:
      - postgres
    ulimits:
      nproc: 65535
      nofile:
        soft: 32000
        hard: 40000
    environment:
      - JF_ROUTER_ENTRYPOINTS_EXTERNALPORT=${JF_ROUTER_ENTRYPOINTS_EXTERNALPORT}
    ports:
      - ${JF_ROUTER_ENTRYPOINTS_EXTERNALPORT}:${JF_ROUTER_ENTRYPOINTS_EXTERNALPORT} # for router communication
      - 8185:8185 # for artifactory communication
    logging:
      driver: json-file
      options:
        max-size: "50m"
        max-file: "10"
system.yaml
──╼ $sudo cat /root/.jfrog/artifactory/var/etc/system.yaml
shared:
  node:
    ip: 127.0.1.1
    id: parrot
    name: parrot
  database:
    type: postgresql
    driver: org.postgresql.Driver
    password: r00t
    username: artifactory
    url: jdbc:postgresql://127.0.1.1:5437/artifactory
router:
  entrypoints:
    externalPort: 8185
Could someone please help me work through this issue?
EDIT 1.
catalina localhost.log
04-Jun-2020 07:07:55.585 INFO [localhost-startStop-1] org.apache.catalina.core.ApplicationContext.log 1 Spring WebApplicationInitializers detected on classpath
04-Jun-2020 07:07:55.624 INFO [localhost-startStop-2] org.apache.catalina.core.ApplicationContext.log Using artifactory.home at '/opt/jfrog/artifactory' resolved from: System property
04-Jun-2020 07:07:58.411 INFO [localhost-startStop-1] org.apache.catalina.core.ApplicationContext.log Initializing Spring embedded WebApplicationContext
04-Jun-2020 07:09:59.503 INFO [localhost-startStop-3] org.apache.catalina.core.ApplicationContext.log Closing Spring root WebApplicationContext
console.log
2020-06-03T21:16:48.869Z [jfrt ] [INFO ] [ ] [o.j.c.w.FileWatcher:147 ] [Thread-5 ] - Starting watch of folder configurations
2020-06-03T21:16:48.938Z [tomct] [SEVERE] [ ] [org.apache.tomcat.jdbc.pool.ConnectionPool] [org.apache.tomcat.jdbc.pool.ConnectionPool init] - Unable to create initial connections of pool.
org.postgresql.util.PSQLException: Connection to 127.0.1.1:5432 refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.
.env
## This file contains environment variables used by the docker-compose yaml files
## IMPORTANT: During installation, this file may be updated based on user choices or existing configuration
## Docker registry to fetch images from
DOCKER_REGISTRY=docker.bintray.io
## Version of artifactory to install
ARTIFACTORY_VERSION=7.5.5
## The Installation directory for Artifactory. IF not entered, the script will prompt you for this input. Default [$HOME/.jfrog/artifactory]
ROOT_DATA_DIR=/root/.jfrog/artifactory
# Router external port mapping. This property may be overridden from the system.yaml (router.entrypoints.externalPort)
JF_ROUTER_ENTRYPOINTS_EXTERNALPORT=8185
EDIT 2.
I'm facing the same issue when trying to follow another tutorial.
artifactory@1cdf04cd9ed1:/opt/jfrog/artifactory/app/bin$ ./systemDiagnostics.sh
************** Start Artifactory Diagnostics *******************
Discovered debian installer, proceeding with var directory as :[/opt/jfrog/artifactory/var]
********************************
CHECK PORTS AVAILABLITY
********************************
Running: lsof -i:$ports_needed
RESULT: OK
Artifactory 8081 AVAILABLE
Access 8040 AVAILABLE
AccessGrpc 8045 AVAILABLE
MetaData 8086 AVAILABLE
FrontEnd 8070 AVAILABLE
Replicator 8048 AVAILABLE
Router 8046 AVAILABLE
RouterExternal 8082 AVAILABLE
RouterTraefik 8049 AVAILABLE
RouterGrpc 8047 AVAILABLE
********************************
CHECKING MAX OPEN FILES
********************************
BusyBox v1.30.1 (2019-02-14 18:11:39 UTC) multi-call binary.
Usage: grep [-HhnlLoqvsriwFE] [-m N] [-A/B/C N] PATTERN/-e PATTERN.../-f FILE [FILE]...
Search for PATTERN in FILEs (or stdin)
-H Add 'filename:' prefix
-h Do not add 'filename:' prefix
-n Add 'line_no:' prefix
-l Show only names of files that match
-L Show only names of files that don't match
-c Show only count of matching lines
-o Show only the matching part of line
-q Quiet. Return 0 if PATTERN is found, 1 otherwise
-v Select non-matching lines
-s Suppress open and read errors
-r Recurse
-i Ignore case
-w Match whole words only
-x Match whole lines only
-F PATTERN is a literal (not regexp)
-E PATTERN is an extended regexp
-m N Match up to N times per file
-A N Print N lines of trailing context
-B N Print N lines of leading context
-C N Same as '-A N -B N'
-e PTRN Pattern to match
-f FILE Read pattern from file
[WARN] User does not exist, skipping max open files check for this user
********************************
CHECKING MAX OPEN PROCESSES
********************************
BusyBox v1.30.1 (2019-02-14 18:11:39 UTC) multi-call binary.
Usage: grep [-HhnlLoqvsriwFE] [-m N] [-A/B/C N] PATTERN/-e PATTERN.../-f FILE [FILE]...
Search for PATTERN in FILEs (or stdin)
-H Add 'filename:' prefix
-h Do not add 'filename:' prefix
-n Add 'line_no:' prefix
-l Show only names of files that match
-L Show only names of files that don't match
-c Show only count of matching lines
-o Show only the matching part of line
-q Quiet. Return 0 if PATTERN is found, 1 otherwise
-v Select non-matching lines
-s Suppress open and read errors
-r Recurse
-i Ignore case
-w Match whole words only
-x Match whole lines only
-F PATTERN is a literal (not regexp)
-E PATTERN is an extended regexp
-m N Match up to N times per file
-A N Print N lines of trailing context
-B N Print N lines of leading context
-C N Same as '-A N -B N'
-e PTRN Pattern to match
-f FILE Read pattern from file
[WARN] User does not exist, skipping max open processes check for this user
********************************
CHECK FIREWALL SETTINGS
********************************
RESULT: FirewallD is not configured
********************************
CHECK FIREWALL IPTABLES SETTINGS
********************************
RESULT: Iptables is not configured
********************************
CHECK FIREWALL IP6TABLES SETTINGS
********************************
RESULT: Ip6tables is not configured
********************************
CHECKING LOCALHOST PING
********************************
Running: ping localhost -c 1 > /dev/null 2>&1
[ERROR] RESULT: NOT OK
---
[ERROR] Unable to resolve localhost. check if localhost is well defined in /etc/hosts
********************************
PROXY LIST
********************************
Checking proxy configured in HTTP_PROXY
RESULT: OK
Checking proxy configured in HTTPS_PROXY
RESULT: OK
Checking proxy configured in NO_PROXY
RESULT: OK
Checking proxy configured in ALL_PROXY
RESULT: OK
************** End Artifactory Diagnostics *******************
artifactory@1cdf04cd9ed1:/opt/jfrog/artifactory/app/bin$ cat /etc/host
host.conf hostname hosts
artifactory@1cdf04cd9ed1:/opt/jfrog/artifactory/app/bin$ cat /etc/hosts
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.17.0.2 1cdf04cd9ed1
Thanks for your answers to my questions. Your postgres URL resolves inside the artifactory container itself rather than reaching the postgres container.
Please change the url to jdbc:postgresql://postgres:5437/artifactory and try again, or run both services with network_mode: host.
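For reference, a minimal sketch of that change in the system.yaml shown above; postgres is the compose service name, and 5437 is the port suggested here (adjust it if your postgres container actually listens on its default 5432 internally):

shared:
  database:
    type: postgresql
    driver: org.postgresql.Driver
    username: artifactory
    password: r00t
    # point at the compose service name instead of 127.0.1.1
    url: jdbc:postgresql://postgres:5437/artifactory

The network_mode: host alternative would instead go on both services in docker-compose.yaml, so that both containers share the host's network namespace and loopback addresses like 127.0.1.1 keep working.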
I'm new to SaltStack.
I need to install the NVIDIA driver on a minion server running CentOS 7 using SaltStack only.
In the gpu/init.sls file:
install_nvidia:
  cmd.script:
    - source: salt://gpu/files/NVIDIA-Linux-x86_64-375.20.run
    - user: root
    - group: root
    - shell: /bin/bash
    - args: -a
I run:
sudo salt minion_name state.apply gpu
I get the error:
...
stderr:
Error opening terminal: unknown.
...
...
Summary for minion_name
------------
Succeeded: 0 (changed=1)
Failed: 1
How can I get more verbose information about the reason it failed?
I believe it is waiting for user input, but I don't know for what.
Also, how can I install the NVIDIA driver on CentOS 7 in a non-interactive way?
Thanks.
You can get more verbose information about why a Salt state has failed by running it locally using salt-call -l debug.
salt-call -l debug state.apply gpu
In your case, be aware that installing the NVIDIA driver on Linux requires running the installer without a graphical session present. The simplest way to do this is to check whether you're currently in a graphical session (with systemd) and, if so, drop to multi-user.target:
enter-multiuser:
  cmd.run:
    - name: systemctl isolate multi-user.target
    - onlyif: systemctl status graphical.target
Then, you can install the NVIDIA driver silently using something like
gpu-prerequisites:
  pkg.installed:
    - pkgs:
      - kernel-devel

download-installer:
  file.managed:
    - name: /tmp/NVIDIA-Linux-x86_64-375.20.run
    - source: salt://gpu/files/NVIDIA-Linux-x86_64-375.20.run
    - mode: '0755'  # make the downloaded installer executable

install-driver:
  cmd.run:
    - name: /tmp/NVIDIA-Linux-x86_64-375.20.run -a -s -Z -X
    - require:
      - file: download-installer
      - pkg: gpu-prerequisites

start-graphical:
  cmd.run:
    - name: systemctl start graphical.target
    - unless: systemctl status graphical.target
    - watch:
      - cmd: install-driver
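Once the state applies cleanly (sudo salt minion_name state.apply gpu), a quick sanity check that the driver actually installed is to run nvidia-smi on the minion. This is just a verification step on top of the states above, not part of them; nvidia-smi ships with the NVIDIA installer:

sudo salt minion_name cmd.run 'nvidia-smi'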
I am new to Docker and all this logging stuff, so maybe I'm making a stupid mistake; thanks in advance for helping. I have ELK running in a Docker container (6.2.2) via the Dockerfile line:
FROM sebp/elk:latest
In a separate container I am installing and running Filebeat via the following Dockerfile lines:
RUN curl -L -O -k https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.2.2-amd64.deb
RUN dpkg -i filebeat-6.2.2-amd64.deb
COPY resources/filebeat/filebeat.yml /etc/filebeat/filebeat.yml
RUN chmod go-w /etc/filebeat/filebeat.yml
RUN /usr/share/filebeat/bin/filebeat -e -d "publish" &
My Filebeat configuration is:
filebeat.prospectors:
- type: log
  enabled: true
  paths:
    - /jetty/jetty-distribution-9.3.8.v20160314/logs/*.log

output.logstash:
  enabled: false
  hosts: ["elk-stack:9002"]
  #index: 'audit'

output.elasticsearch:
  enabled: true
  hosts: ["elk-stack:9200"]
  #index: "audit-%{+yyyy.MM.dd}"

path.config: "/etc/filebeat"

#setup.template.name: "audit"
#setup.template.pattern: "audit-*"
#setup.template.fields: "${path.config}/fields.yml"
As you can see, I was trying to set up a custom index in Elasticsearch, but for now I'm just trying to get the defaults working. The Jetty logs all have global read permissions.
The Docker container logs show no errors, and after running I make sure the config and output are OK:
# filebeat test config
Config OK
# filebeat test output
elasticsearch: http://elk-stack:9200...
parse url... OK
connection...
parse host... OK
dns lookup... OK
addresses: 172.17.0.3
dial up... OK
TLS... WARN secure connection disabled
talk to server... OK
version: 6.2.2
/var/log/filebeat/filebeat shows:
2018-03-15T13:23:38.859Z INFO instance/beat.go:468 Home path: [/usr/share/filebeat] Config path: [/etc/filebeat] Data path: [/var/lib/filebeat] Logs path: [/var/log/filebeat]
2018-03-15T13:23:38.860Z INFO instance/beat.go:475 Beat UUID: ed5cecaf-cbf5-438d-bbb9-30bab80c4cb9
2018-03-15T13:23:38.860Z INFO elasticsearch/client.go:145 Elasticsearch url: http://elk-stack:9200
2018-03-15T13:23:38.891Z INFO elasticsearch/client.go:690 Connected to Elasticsearch version 6.2.2
However, when I hit localhost:9200/_cat/indices?v it doesn't return any indices:
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
How do I get this working? I am out of ideas. Thanks again for any help.
To answer my own question: you can't start Filebeat with
RUN /usr/share/filebeat/bin/filebeat -e -d "publish" &
and have it keep running once the container starts, because RUN executes at image build time, so the backgrounded process doesn't survive into the running container. You need to start it manually, or have it start in its own container via a CMD/ENTRYPOINT instruction.
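A minimal sketch of that last option, reusing the install and config lines from the Dockerfile above and just swapping the backgrounded RUN for a foreground start at container runtime; the binary and config paths assume the stock filebeat-6.2.2 deb layout, so adjust them if your image differs:

# ...same curl/dpkg/COPY/chmod lines as above...
# Start Filebeat in the foreground when the container runs, not at build time
CMD ["/usr/share/filebeat/bin/filebeat", "-e", "-d", "publish", "-c", "/etc/filebeat/filebeat.yml"]

Running Filebeat in the foreground as PID 1 also means the container exits if Filebeat dies, which is usually what you want for a single-purpose container.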
I used this guide to install Kubernetes on a Vagrant cluster:
https://kubernetes.io/docs/getting-started-guides/kubeadm/
At (2/4) Initializing your master, I got some errors:
[root@localhost ~]# kubeadm init
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.6.4
[init] Using Authorization mode: RBAC
[preflight] Running pre-flight checks
[preflight] Some fatal errors occurred:
/proc/sys/net/bridge/bridge-nf-call-iptables contents are not set to 1
[preflight] If you know what you are doing, you can skip pre-flight checks with `--skip-preflight-checks`
I checked the content of /proc/sys/net/bridge/bridge-nf-call-iptables; it contains only a single 0.
At (3/4) Installing a pod network, I downloaded the kube-flannel file:
https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
and ran kubectl apply -f kube-flannel.yml, which got this error:
[root@localhost ~]# kubectl apply -f kube-flannel.yml
The connection to the server localhost:8080 was refused - did you specify the right host or port?
At this point, I don't know how to go on.
My Vagrantfile:
# Master Server
config.vm.define "master", primary: true do |master|
  master.vm.network :private_network, ip: "192.168.33.200"
  master.vm.network :forwarded_port, guest: 22, host: 1234, id: 'ssh'
end
You can set /proc/sys/net/bridge/bridge-nf-call-iptables by editing /etc/sysctl.conf. There you can add [1]:
net.bridge.bridge-nf-call-iptables = 1
Then execute
sudo sysctl -p
And the changes will be applied. With this the pre-flight check should pass.
[1] http://wiki.libvirt.org/page/Net.bridge.bridge-nf-call_and_sysctl.conf
Update #2019/09/02
Sometimes modprobe br_netfilter is unreliable and you may need to redo it after re-login, so use the following instead on a systemd system:
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
systemctl restart systemd-modules-load.service
echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
echo 1 > /proc/sys/net/bridge/bridge-nf-call-ip6tables
Yes, the accepted answer is right, but I ran into:
cannot stat /proc/sys/net/bridge/bridge-nf-call-ip6tables: No such file or directory
So I did
modprobe br_netfilter
echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
echo 1 > /proc/sys/net/bridge/bridge-nf-call-ip6tables
sudo sysctl -p
That solved it.
On Ubuntu 16.04 I just had to:
modprobe br_netfilter
Default value in /proc/sys/net/bridge/bridge-nf-call-iptables is already 1.
Then I added br_netfilter to /etc/modules to load the module automatically on next boot.
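For completeness, a one-liner sketch of that last step (it just appends a br_netfilter line to /etc/modules):

echo br_netfilter | sudo tee -a /etc/modules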
As mentioned in K8s docs - Installing kubeadm under the Letting iptables see bridged traffic section:
Make sure that the br_netfilter module is loaded. This can be done
by running lsmod | grep br_netfilter. To load it explicitly call
sudo modprobe br_netfilter.
As a requirement for your Linux Node's iptables to correctly see
bridged traffic, you should ensure
net.bridge.bridge-nf-call-iptables is set to 1 in your sysctl
config, e.g.
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system
Regarding the preflight errors - you can see in Kubeadm Implementation details under the preflight-checks section:
Kubeadm executes a set of preflight checks before starting the init, with the aim to verify preconditions and avoid common cluster startup problems.
The following missing configurations will produce errors:
.
.
if /proc/sys/net/bridge/bridge-nf-call-iptables file does not exist/does not contain 1
if advertise address is ipv6 and /proc/sys/net/bridge/bridge-nf-call-ip6tables does not exist/does not contain 1.
if swap is on
.
.
The one-liner way:
sysctl net.bridge.bridge-nf-call-iptables=1
In DockerCloud I am trying to get one container to talk to another container. I believe the problem is that the hostname is not resolving (it is set in /conf.d/kafka.yaml, shown below).
To get the two containers communicating in DockerCloud, I have tried many variations, including the full hostname kafka-development-1, kafka-development-1.kafka, etc.
The error I keep getting shows up in the datadog-agent info output.
Within the container I run ./etc/init.d/datadog-agent info and receive:
kafka
-----
- instance #kafka-kafka-development-9092 [ERROR]: 'Cannot connect to instance
kafka-development:9092 java.io.IOException: Failed to retrieve RMIServer stub:
javax.naming.CommunicationException [Root exception is java.rmi.ConnectIOException: error during JRMP connection establishment; nested exception is: \n\tjava.net.SocketException: Connection reset]' collected 0 metrics
- Collected 0 metrics, 0 events & 0 service checks
The steps I took to gather details:
SSH into the Docker node:
$ docker ps
CONTAINER | PORTS
datadog-agent-kafka-development-1.2fb73f62 | 8125/udp, 9001/tcp
kafka-development-1.3dc7c2d0 | 0.0.0.0:9092->9092/tcp
I log into the containers to see their values; this is the datadog-agent container:
$ docker exec -it datadog-agent-kafka-development-1.2fb73f62 /bin/bash
$ > echo $DOCKERCLOUD_CONTAINER_HOSTNAME
datadog-agent-kafka-development-1
$ > tail /etc/hosts
172.17.0.7 datadog-agent-kafka-development-1
10.7.0.151 datadog-agent-kafka-development-1
This is the kafka container:
$ docker exec -it kafka-development-1.3dc7c2d0 /bin/bash
$ > echo $DOCKERCLOUD_CONTAINER_HOSTNAME
kafka-development-1
$ > tail /etc/hosts
172.17.0.6 kafka-development-1
10.7.0.8 kafka-development-1
$ > echo $KAFKA_ADVERTISED_HOST_NAME
kafka-development.c23d1d00.svc.dockerapp.io
$ > echo $KAFKA_ADVERTISED_PORT
9092
$ > echo $KAFKA_ZOOKEEPER_CONNECT
zookeeper-development:2181
Datadog conf.d/kafka.yaml:
instances:
  - host: kafka-development
    port: 9092 # This is the JMX port on which Kafka exposes its metrics (usually 9999)
    tags:
      kafka: broker
      env: development
# ... Defaults Below
Can anyone see what I am doing wrong?