Kafka Manager: "No application loader is configured" - Docker

Here's my docker-compose:
version: '3'
services:
  nodered:
    container_name: nodered
    image: nodered/node-red
    ports:
      - "1880:1880"
    volumes:
      - ./nodered:/data
    depends_on:
      - mosquitto
    environment:
      TZ: "America/Toronto"
    restart: always
  mosquitto:
    image: eclipse-mosquitto
    container_name: mqtt
    restart: always
    ports:
      - "1883:1883"
    volumes:
      - "./mosquitto/config:/mosquitto/config"
      - "./mosquitto/data:/mosquitto/data"
      - "./mosquitto/log:/mosquitto/log"
    environment:
      - TZ=America/Toronto
    user: "${PUID}:${PGID}"
  portainer:
    ports:
      - "9000:9000"
    container_name: portainer
    restart: always
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"
      - "./portainer/portainer_data:/data"
    image: portainer/portainer-ce
  zookeeper:
    image: zookeeper
    container_name: zookeeper
    ports:
      - "2181:2181"
    volumes:
      - "zookeeper_data:/bitnami"
    environment:
      - ALLOW_ANONYMOUS_LOGIN=yes
  kafka:
    image: bitnami/kafka
    container_name: kafka
    ports:
      - "9092:9092"
    volumes:
      - "kafka_data:/bitnami"
    environment:
      - KAFKA_BROKER_ID=1
      - KAFKA_CFG_LISTENERS=PLAINTEXT://:9092
      - KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://127.0.0.1:9092
      - KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper:2181
      - ALLOW_PLAINTEXT_LISTENER=yes
    depends_on:
      - zookeeper
    restart: on-failure
  cmak:
    image: hlebalbau/kafka-manager
    container_name: cmak
    restart: always
    depends_on:
      - kafka
      - zookeeper
    ports:
      - "9080:9080"
    environment:
      - ZK_HOSTS=zookeper:2181
      - APPLICATION_SECRET=letmein
    command: bin/cmak -Dconfig.file=/opt/cmak/conf/application.conf -Dhttp.port=9080
volumes:
  zookeeper_data:
    driver: local
  kafka_data:
    driver: local
Port 9000 is already used by Portainer and that works properly, but when I try to run Kafka Manager on 9080, I get this error with no further explanation:
nodered | 14 Sep 21:59:41 - [info] Starting flows
nodered | 14 Sep 21:59:41 - [info] Started flows
cmak | Oops, cannot start the server.
cmak | java.lang.RuntimeException: No application loader is configured. Please configure an application loader either using the play.application.loader configuration property, or by depending on a module that configures one. You can add the Guice support module by adding "libraryDependencies += guice" to your build.sbt.
cmak | at scala.sys.package$.error(package.scala:30)
cmak | at play.api.ApplicationLoader$.play$api$ApplicationLoader$$loaderNotFound(ApplicationLoader.scala:44)
cmak | at play.api.ApplicationLoader$.apply(ApplicationLoader.scala:70)
cmak | at play.core.server.ProdServerStart$.start(ProdServerStart.scala:50)
cmak | at play.core.server.ProdServerStart$.main(ProdServerStart.scala:25)
cmak | at play.core.server.ProdServerStart.main(ProdServerStart.scala)
I have a feeling that either my path to kafka-manager is wrong, or I might have to expose the hostname on my Kafka container...
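Two things stand out in the compose file above, offered as hedged guesses rather than a confirmed fix for the loader error. First, ZK_HOSTS=zookeper:2181 misspells the zookeeper service name, so CMAK could not reach ZooKeeper even once the Play application boots. Second, advertising PLAINTEXT://127.0.0.1:9092 tells every client inside the compose network to connect back to itself; advertising the service name is the usual remedy. A minimal sketch of both corrections (the cmak command line is kept from the question and may still need checking against the image's documented entrypoint):

  kafka:
    environment:
      # advertise the compose service name so other containers can reach the broker
      - KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092
  cmak:
    environment:
      - ZK_HOSTS=zookeeper:2181   # "zookeeper", not "zookeper"
      - APPLICATION_SECRET=letmein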

Related

Got problem deploying docker-compose service (port issue)

I want to deploy a service that will allow me to use Spark and MongoDB in a Jupyter notebook.
I use docker-compose to build up the service, and it's as follows:
version: "3.3"
volumes:
shared-workspace:
networks:
spark-net:
driver: bridge
services:
spark-master:
image: uqteaching/cloudcomputing:spark-master-v1
container_name: spark-master
networks:
- "spark-net"
ports:
- "8080:8080"
- "7077:7077"
environment:
- INIT_DAEMON_STEP=setup_spark
- "PYSPARK_PYTHON=/usr/bin/python3"
- "PYSPARK_DRIVER_PYTHON=/usr/bin/python3"
spark-worker-1:
image: uqteaching/cloudcomputing:spark-worker-v1
container_name: spark-worker-1
depends_on:
- spark-master
networks:
- "spark-net"
ports:
- "8081:8081"
environment:
- "SPARK_MASTER=spark://spark-master:7077"
- "PYSPARK_PYTHON=/usr/bin/python3"
- "PYSPARK_DRIVER_PYTHON=/usr/bin/python3"
spark-worker-2:
image: uqteaching/cloudcomputing:spark-worker-v1
container_name: spark-worker-2
depends_on:
- spark-master
networks:
- "spark-net"
ports:
- "8082:8082"
environment:
- "SPARK_MASTER=spark://spark-master:7077"
- "PYSPARK_PYTHON=/usr/bin/python3"
- "PYSPARK_DRIVER_PYTHON=/usr/bin/python3"
mongo:
image: mongo
container_name: 'mongo'
networks:
- "spark-net"
ports:
- "27017:27017"
mongo_admin:
image: mongo-express
container_name: 'mongoadmin'
networks:
- "spark-net"
depends_on:
- mongo
links:
- mongo
ports:
- "8091:8091"
jupyter-notebook:
container_name: jupyternb
image: jupyter/all-spark-notebook:42f4c82a07ff
depends_on:
- mongo
- spark-master
links:
- mongo
expose:
- "8888"
networks:
- "spark-net"
ports:
- "8888:8888"
volumes:
- ./nbs:/home/jovyan/work/nbs
- ./events:/tmp/spark-events
environment:
- "PYSPARK_PYTHON=/usr/bin/python3"
- "PYSPARK_DRIVER_PYTHON=/usr/bin/python3"
command: "start-notebook.sh \
--ip=0.0.0.0 \
--allow-root \
--no-browser \
--notebook-dir=/home/jovyan/work/nbs \
--NotebookApp.token='' \
--NotebookApp.password=''
"
And the result is something like this:
I don't know why. Even though I set these two services to listen on different ports, they are both using 8081/tcp at the same time, which causes them both to crash.
I want to solve this.
mongo-express needs port 8081 internally, so use a different external port to be able to log in to the web UI.
With http://localhost:8092 it would then look something like this:
  mongo_admin:
    image: mongo-express
    container_name: 'mongoadmin'
    networks:
      - "spark-net"
    depends_on:
      - mongo
    links:
      - mongo
    ports:
      - "8092:8081"   # host 8092 -> container 8081, mongo-express's default port

Set up Prometheus and Cadvisor with docker-compose

I am new to Prometheus, cAdvisor, and docker-compose. I made a docker-compose file including my own application, named chat, together with a mongo container; those work fine. Now I want to monitor my containers with Prometheus and cAdvisor, but I'm getting the following errors:
cadvisor | W0419 11:41:00.576916 1 sysinfo.go:203] Nodes topology is not available, providing CPU topology
cadvisor | W0419 11:41:00.577437 1 sysfs.go:348] unable to read /sys/devices/system/cpu/cpu0/online: open /sys/devices/system/cpu/cpu0/online: no such file or directory
cadvisor | E0419 11:41:00.582000 1 info.go:114] Failed to get system UUID: open /etc/machine-id: no such file or directory
and
prometheus | ts=2022-04-19T11:54:19.051Z caller=main.go:438 level=error msg="Error loading config (--config.file=/etc/prometheus/prometheus.yml)" file=/etc/prometheus/prometheus.yml err="parsing YAML file /etc/prometheus/prometheus.yml: yaml: unmarshal errors:\n line 2: field scrape-interval not found in type config.plain"
I tried to change the config parameter in my docker-compose to the following, but it didn't change the error:
  command:
    - '--config.file=./prometheus/prometheus.yml'
docker-compose.yml:
version: '3.7'
services:
  chat-api:
    container_name: chat-api
    build:
      context: .
      dockerfile: ./Dockerfile
    ports:
      - '4000:4000'
    networks:
      - cchat
    restart: 'on-failure'
  userdb:
    image: mongo:latest
    container_name: mongodb
    volumes:
      - userdb:/data/db
    networks:
      - cchat
  prometheus:
    image: prom/prometheus:latest
    container_name: prometheus
    restart: always
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /etc/timezone:/etc/timezone:ro
      - ./prometheus/prometheus.yml:/etc/prometheus/prometheus.yml
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
    ports:
      - '9080:9080'
    networks:
      - cloudchat
  cadvisor:
    image: gcr.io/cadvisor/cadvisor:latest
    container_name: cadvisor
    restart: always
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /etc/timezone:/etc/timezone:ro
      - /:/rootfs:ro
      - /var/run:/var/run:rw
      - /sys:/sys:ro
      - /var/lib/docker:/var/lib/docker:ro
    devices:
      - /dev/kmsg:/dev/kmsg
    depends_on:
      - chat-api
    networks:
      - cchat
volumes:
  userdb:
networks:
  cchat:
prometheus.yml:
global:
  scrape-interval: 2s
scrape_configs:
  - job_name: 'cadvisor'
    static_configs:
      - targets: ['cadvisor:8080']
project structure: [picture of project structure]
I guess it's quite late, but you can try mounting /etc/machine-id:/etc/machine-id:ro.
Running in privileged mode could help too. This is my configuration, which works without problems:
  cadvisor:
    image: gcr.io/cadvisor/cadvisor:v0.47.0
    container_name: cadvisor
    restart: unless-stopped
    privileged: true
    ports:
      - "8080:8080"
    volumes:
      - /:/rootfs:ro
      - /var/run:/var/run:ro
      - /sys:/sys:ro
      - /var/lib/docker/:/var/lib/docker:ro
      - /dev/disk/:/dev/disk:ro
One important note: don't use latest; it seems it's not actually the newest version (source: https://github.com/google/cadvisor/issues/3066).
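As for the Prometheus error itself: the unmarshal message complains that the field scrape-interval is not found, and Prometheus indeed spells this setting with an underscore. Note also that --config.file must point at the path inside the container, so the original /etc/prometheus/prometheus.yml value was right and the ./prometheus/prometheus.yml variant cannot work. A corrected prometheus.yml, leaving everything else as in the question:

global:
  scrape_interval: 2s   # underscore, not hyphen
scrape_configs:
  - job_name: 'cadvisor'
    static_configs:
      - targets: ['cadvisor:8080']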

Creating spark cluster with drone.yml not working

I have a docker-compose.yml with the image and configuration below:
version: '3'
services:
  spark-master:
    image: bde2020/spark-master:2.4.4-hadoop2.7
    container_name: spark-master
    ports:
      - "8080:8080"
      - "7077:7077"
    environment:
      - INIT_DAEMON_STEP=setup_spark
  spark-worker-1:
    image: bde2020/spark-worker:2.4.4-hadoop2.7
    container_name: spark-worker-1
    depends_on:
      - spark-master
    ports:
      - "8081:8081"
    environment:
      - "SPARK_MASTER=spark://spark-master:7077"
Here is the docker-compose up log: https://jpst.it/1Xc4K
The containers come up and run fine, i.e. the Spark worker connects to the Spark master without any issues. Now the problem is that I created a drone.yml where I added a services section with:
services:
  jce-cassandra:
    image: cassandra:3.0
    ports:
      - "9042:9042"
  jce-elastic:
    image: elasticsearch:5.6.16-alpine
    ports:
      - "9200:9200"
    environment:
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
  janusgraph:
    image: janusgraph/janusgraph:latest
    ports:
      - "8182:8182"
    environment:
      JANUS_PROPS_TEMPLATE: cassandra-es
      janusgraph.storage.backend: cql
      janusgraph.storage.hostname: jce-cassandra
      janusgraph.index.search.backend: elasticsearch
      janusgraph.index.search.hostname: jce-elastic
    depends_on:
      - jce-elastic
      - jce-cassandra
  spark-master:
    image: bde2020/spark-master:2.4.4-hadoop2.7
    container_name: spark-master
    ports:
      - "8080:8080"
      - "7077:7077"
    environment:
      - INIT_DAEMON_STEP=setup_spark
  spark-worker-1:
    image: bde2020/spark-worker:2.4.4-hadoop2.7
    container_name: spark-worker-1
    depends_on:
      - spark-master
    ports:
      - "8081:8081"
    environment:
      - "SPARK_MASTER=spark://spark-master:7077"
But here the Spark worker does not connect to the Spark master and I get exceptions; here are the exception log details. Can someone please guide me on why I am facing this issue?
Note: I am trying to create these services in drone.yml for my integration testing.
Answering for better formatting. The comments suggest sleeping. Assuming this is the Dockerfile (https://hub.docker.com/r/bde2020/spark-worker/dockerfile), you could sleep by adding the command:
  spark-worker-1:
    image: bde2020/spark-worker:2.4.4-hadoop2.7
    container_name: spark-worker-1
    # run through a shell so the && is interpreted before starting the worker
    command: bash -c "sleep 10 && /bin/bash /worker.sh"
    depends_on:
      - spark-master
    ports:
      - "8081:8081"
    environment:
      - "SPARK_MASTER=spark://spark-master:7077"
Although sleep 10 is probably excessive; if that works, try sleep 5 or sleep 2.
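A sturdier alternative to a fixed sleep, sketched here on the assumption that the master's web UI on port 8080 is a usable readiness signal and that curl exists in the image, is a healthcheck plus a condition on depends_on (honored by docker-compose, though not by the v3 swarm schema):

  spark-master:
    image: bde2020/spark-master:2.4.4-hadoop2.7
    healthcheck:
      # assumes curl is present in the image; "wget -q -O-" would be the busybox variant
      test: ["CMD-SHELL", "curl -f http://localhost:8080 || exit 1"]
      interval: 5s
      timeout: 3s
      retries: 10
  spark-worker-1:
    image: bde2020/spark-worker:2.4.4-hadoop2.7
    depends_on:
      spark-master:
        condition: service_healthy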

docker-compose: zipkin cannot connect to elasticsearch

I'm trying to set up Zipkin, Elasticsearch, Prometheus and Grafana with docker-compose.yml. When I run the containers, I see this in the log:
dependencies_zipkin | 19/09/30 14:37:09 ERROR NetworkClient: Node [172.28.0.2:9200] failed (java.net.ConnectException: Connection refused (Connection refused)); no other nodes left - aborting...
I'm on MacOS X with Docker 2.1.0.3. The content of my docker-compose.yml is this:
version: '3.7'
services:
  storage:
    image: openzipkin/zipkin-elasticsearch7
    container_name: elasticsearch
    ports:
      - "9200:9200"
    environment:
      - "xpack.security.enabled=false"
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    restart: unless-stopped
  prometheus:
    image: prom/prometheus:latest
    container_name: prometheus
    volumes:
      - $PWD/prometheus:/etc/prometheus/
      - /tmp/prometheus:/prometheus/data:rw
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
      - '--storage.tsdb.path=/prometheus'
      - '--web.console.libraries=/usr/share/prometheus/console_libraries'
      - '--web.console.templates=/usr/share/prometheus/consoles'
    ports:
      - "9090:9090"
    restart: unless-stopped
  zipkin:
    image: openzipkin/zipkin
    container_name: zipkin
    depends_on:
      - dependencies
      - storage
    environment:
      - "STORAGE_TYPE=elasticsearch"
      - "ES_HOSTS=storage"
    ports:
      - "9411:9411"
    restart: unless-stopped
  grafana:
    image: grafana/grafana
    container_name: grafana
    ports:
      - "3000:3000"
    restart: unless-stopped
  dependencies:
    image: openzipkin/zipkin-dependencies
    container_name: dependencies_zipkin
    depends_on:
      - storage
    environment:
      - "STORAGE_TYPE=elasticsearch"
      - "ES_HOSTS=storage"
When I connect to localhost:9200, I see that Elasticsearch is working fine, and Zipkin is deployed on port 9411, but I get this error:
ERROR: cannot load service names: server error (Service Unavailable) (due to the network error)
In the log, I have this information:
dependencies_zipkin | 19/09/30 14:45:20 ERROR NetworkClient: Node [172.28.0.2:9200] failed (java.net.ConnectException: Connection refused (Connection refused)); no other nodes left - aborting...
and this one
zipkin | java.lang.IllegalStateException: couldn't connect any of [Endpoint{storage:80, ipAddr=172.28.0.2, weight=1000}]
Any idea?
UPDATE
Using MySQL it works fine, so the problem is at the level of Elasticsearch. I also tried using
"STORAGE_PORT_9200_TCP_ADDR=127.0.0.1"
but the issue still occurs.
UPDATE
As mentioned in the solution given by Brian, I have to use:
ES_HOSTS=http://storage:9300
The key is the port; I was using port 9200. The error between Zipkin and ES disappears, but it still occurs between ES and zipkin-dependencies.
The problem lies in your ES_HOSTS variable; from the docs here:
ES_HOSTS: A comma separated list of elasticsearch base urls to connect to ex. http://host:9200.
Defaults to "http://localhost:9200".
So you will need: ES_HOSTS=http://storage:9200
Finally I have this file:
version: '3.7'
services:
  storage:
    image: openzipkin/zipkin-elasticsearch7
    container_name: elasticsearch
    ports:
      - 9200:9200
  zipkin:
    image: openzipkin/zipkin
    container_name: zipkin
    environment:
      - STORAGE_TYPE=elasticsearch
      - "ES_HOSTS=elasticsearch:9300"
    ports:
      - 9411:9411
    depends_on:
      - storage
  dependencies:
    image: openzipkin/zipkin-dependencies
    container_name: dependencies
    entrypoint: crond -f
    depends_on:
      - storage
    environment:
      - STORAGE_TYPE=elasticsearch
      - "ES_HOSTS=elasticsearch:9300"
      - "ES_NODES_WAN_ONLY=true"
  prometheus:
    image: prom/prometheus:latest
    container_name: prometheus
    volumes:
      - $PWD/prometheus:/etc/prometheus/
      - /tmp/prometheus:/prometheus/data:rw
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
      - '--storage.tsdb.path=/prometheus'
      - '--web.console.libraries=/usr/share/prometheus/console_libraries'
      - '--web.console.templates=/usr/share/prometheus/consoles'
    ports:
      - "9090:9090"
  grafana:
    image: grafana/grafana
    container_name: grafana
    depends_on:
      - prometheus
    ports:
      - "3000:3000"
The main differences are the usage of
"ES_HOSTS=elasticsearch:9300"
instead of
"ES_HOSTS=storage:9300"
and, in the dependencies configuration, the added entrypoint:
entrypoint: crond -f
This one is really the key to avoiding the exception when I start docker-compose. To solve this issue, I checked this project: https://github.com/openzipkin/docker-zipkin
The remaining question is: why do I need to use entrypoint: crond -f?
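A plausible reading, going by the docker-zipkin project linked above rather than anything confirmed in this thread: zipkin-dependencies is a batch Spark job that computes the dependency links once and then exits, so running it as a long-lived service makes it start (and fail) immediately, before Elasticsearch is ready. The image ships a crontab that runs the job on a schedule, and entrypoint: crond -f keeps the container alive so the scheduled runs happen instead of one racy run at startup. The schedule can be inspected with:

docker exec dependencies crontab -l   # assumes BusyBox cron in an Alpine-based image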

Setting up a local zookeeper and kafka using docker and wurstmeister's images

I'm really having a hard time configuring my docker-compose to get Kafka running. I always get the following error in docker-compose logs:
java.lang.IllegalArgumentException: Error creating broker listeners
from 'PLAINTEXT://kafka:': Unable to parse PLAINTEXT://kafka: to a
broker endpoint
I have tried every possible IP address and name of my machine for KAFKA_ADVERTISED_HOST_NAME, but this does not change the situation. This is my current docker-compose.yml:
version: '3'
services:
  zookeeper:
    image: wurstmeister/zookeeper
    hostname: zookeeper
    restart: unless-stopped
    ports:
      - "2181:2181"
  kafka:
    image: wurstmeister/kafka
    hostname: kafka
    restart: unless-stopped
    # links:
    #   - zookeeper:zookeeper
    ports:
      - "9092:9092"
    environment:
      - KAFKA_ADVERTISED_HOST_NAME=kafka
      - KAFKA_BROKER_ID=1
      - KAFKA_NUM_PARTITIONS=1
      - KAFKA_CREATE_TOPICS="test:1:1"
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181
      - KAFKA_AUTO_CREATE_TOPICS_ENABLE=true
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./data:/kafka
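Reading the error literally: the listener string ends in a bare colon (PLAINTEXT://kafka:), i.e. the host resolved but the port half came out empty. With the wurstmeister image, KAFKA_ADVERTISED_HOST_NAME only supplies the host, so a hedged guess is to either add the port explicitly or replace the two legacy variables with full listener URIs:

    environment:
      # either add the port to go with the host name...
      - KAFKA_ADVERTISED_HOST_NAME=kafka
      - KAFKA_ADVERTISED_PORT=9092
      # ...or set complete listener URIs instead of the legacy variables:
      # - KAFKA_LISTENERS=PLAINTEXT://:9092
      # - KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092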
I have stopped using wurstmeister and switched to bitnami. There the config works straight from the example:
version: '3'
services:
  zookeeper:
    image: 'bitnami/zookeeper:latest'
    hostname: zookeeper
    restart: unless-stopped
    ports:
      - '2181:2181'
    environment:
      - ALLOW_ANONYMOUS_LOGIN=yes
    #volumes:
    #  - ./data/zookeeper:/bitnami/zookeeper
  kafka:
    image: 'bitnami/kafka:latest'
    hostname: kafka
    restart: unless-stopped
    ports:
      - '9092:9092'
    environment:
      - KAFKA_BROKER_ID=1
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181
      - ALLOW_PLAINTEXT_LISTENER=yes
    volumes:
      - ./data/kafka:/bitnami/kafka
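A quick smoke test for the bitnami setup, assuming the broker container is named kafka and that the image puts the Kafka CLI tools on the PATH (recent bitnami images do, and recent Kafka versions accept --bootstrap-server here):

docker exec -it kafka kafka-topics.sh --create --topic smoke-test --bootstrap-server localhost:9092
docker exec -it kafka kafka-topics.sh --list --bootstrap-server localhost:9092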
