Inspect method active_queues failed - celery flower - docker

I brought together the different components of an ML application (Celery worker and Flower, FastAPI, Redis, and RabbitMQ) into a docker-compose file, but I seem to be getting an error with the celery_flower service that affects any calls made to it. Unfortunately I can't share the images of the ML and celery_flower services, but any input on possible bugs in my docker-compose file would be very helpful.
version: "3.9"
services:
  web:
    image: custom-image
    container_name: web_app
    ports:
      - 8080:8080
      - 14565:14565
    depends_on:
      - amqp
      - celery_worker
      - redis
  amqp:
    image: rabbitmq:3
    container_name: rabbit_mq_test
    ports:
      - 4369:4369
      - 5672:5672
      - 25672:25672
      - 15672:15672
  redis:
    image: redis
    container_name: redis_container
    ports:
      - 6379:6379
  celery_worker:
    image: custom-image
    working_dir: /webapp/endpoints/contents/
    container_name: celery_worker
    command: /opt/conda/envs/webapp/bin/celery -A router.celery_app worker --loglevel=info -P threads
    depends_on:
      - amqp
      - redis
  celery_flower:
    container_name: celery_flower
    working_dir: /webapp/endpoints/contents/
    image: mher/flower:latest
    command: celery --broker 'amqp://guest:guest@amqp:5672//' --result-backend 'rpc://' flower
    environment:
      - CELERY_BROKER_URL=amqp://guest:guest@amqp:5672//
      - FLOWER_PORT=5555
    ports:
      - 5555:5555
    depends_on:
      - amqp
      - redis
Error logs from celery_flower:
celery_flower | [I 221213 19:33:42 command:162] Visit me at http://localhost:5555
celery_flower | [I 221213 19:33:42 command:170] Broker: amqp://guest:**@amqp:5672//
celery_flower | [I 221213 19:33:42 command:171] Registered tasks:
celery_flower | ['celery.accumulate',
celery_flower | 'celery.backend_cleanup',
celery_flower | 'celery.chain',
celery_flower | 'celery.chord',
celery_flower | 'celery.chord_unlock',
celery_flower | 'celery.chunks',
celery_flower | 'celery.group',
celery_flower | 'celery.map',
celery_flower | 'celery.starmap']
celery_flower | [W 221213 19:33:42 command:177] Running without authentication
celery_flower | [I 221213 19:33:48 mixins:225] Connected to amqp://guest:**@amqp:5672//
celery_flower | [W 221213 19:33:49 inspector:42] Inspect method active_queues failed
celery_flower | [W 221213 19:33:49 inspector:42] Inspect method active failed
celery_flower | [W 221213 19:33:49 inspector:42] Inspect method scheduled failed
celery_flower | [W 221213 19:33:49 inspector:42] Inspect method reserved failed
celery_flower | [W 221213 19:33:49 inspector:42] Inspect method registered failed
celery_flower | [W 221213 19:33:49 inspector:42] Inspect method revoked failed
celery_flower | [W 221213 19:33:49 inspector:42] Inspect method conf failed
celery_flower | [W 221213 19:33:49 inspector:42] Inspect method stats failed
I cannot spot anything wrong on the Celery Flower dashboard either, which is quite confusing.
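For what it's worth, the log shows the broker connection itself succeeding, and the "Inspect method ... failed" warnings usually mean Flower's broadcast inspect got no reply from any worker. A common cause is the worker and Flower not sharing the exact same broker URL (the worker's broker comes from inside router.celery_app, which we cannot see here). A sketch of pinning both services to one URL, assuming the app reads CELERY_BROKER_URL and using the service names from the compose file above:

```yaml
services:
  celery_worker:
    image: custom-image
    working_dir: /webapp/endpoints/contents/
    command: /opt/conda/envs/webapp/bin/celery -A router.celery_app worker --loglevel=info -P threads
    environment:
      # assumption: router.celery_app picks this up instead of a hard-coded default
      - CELERY_BROKER_URL=amqp://guest:guest@amqp:5672//
  celery_flower:
    image: mher/flower:latest
    # the app code is not inside the mher/flower image, so the broker
    # must be passed explicitly rather than via "-A router.celery_app"
    command: celery --broker=amqp://guest:guest@amqp:5672// flower
    depends_on:
      - amqp
      - celery_worker   # also make sure a worker is up before Flower polls
```

If the warnings appear only once right after startup, it may simply be Flower polling before the worker has registered; persistent failures point at a broker URL or Celery/Flower version mismatch.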

Related

Docker compose zookeeper java.net.NoRouteToHostException: No route to host

I have a local application with Kafka and Zookeeper, but when I run docker compose up on my Arch Linux desktop, Kafka shows this error:
container log:
kafka_1 | java.net.NoRouteToHostException: No route to host
kafka_1 | at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
kafka_1 | at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777)
kafka_1 | at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:344)
kafka_1 | at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1290)
kafka_1 | [main-SendThread(zookeeper:2181)] INFO org.apache.zookeeper.ClientCnxn - Opening socket connection to server zookeeper/172.18.0.2:2181.
kafka_1 | [main-SendThread(zookeeper:2181)] INFO org.apache.zookeeper.ClientCnxn - SASL config status: Will not attempt to authenticate using SASL (unknown error)
kafka_1 | [main-SendThread(zookeeper:2181)] WARN org.apache.zookeeper.ClientCnxn - Session 0x0 for server zookeeper/172.18.0.2:2181, Closing socket connection. Attempting reconnect except it is a SessionExpiredException.
docker compose file:
version: '3'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:latest
    restart: unless-stopped
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
    ports:
      - '2181:2181'
    networks:
      - domper-network
  kafka:
    image: confluentinc/cp-kafka:latest
    restart: unless-stopped
    depends_on:
      - zookeeper
    ports:
      - '9092:9092'
      - '9094:9094'
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL
      KAFKA_LISTENERS: INTERNAL://:9092,OUTSIDE://:9094
      KAFKA_ADVERTISED_LISTENERS: INTERNAL://kafka:9092,OUTSIDE://host.docker.internal:9094
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INTERNAL:PLAINTEXT,OUTSIDE:PLAINTEXT
    extra_hosts:
      - 'host.docker.internal:172.17.0.1' # docker gateway
    networks:
      - domper-network
  postgres:
    image: postgres
    restart: unless-stopped
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: Admin#2021!
      POSTGRES_DB: domper
      POSTGRES_HOST_AUTH_METHOD: password
    ports:
      - 5432:5432
    volumes:
      - postgres-domper-data:/var/lib/postgresql/data
    networks:
      - domper-network
  pgadmin:
    image: dpage/pgadmin4
    restart: unless-stopped
    environment:
      PGADMIN_DEFAULT_EMAIL: 'admin@admin.com.br'
      PGADMIN_DEFAULT_PASSWORD: 'Admin#2021!'
    ports:
      - 16543:80
    depends_on:
      - postgres
    networks:
      - domper-network
  api:
    build: ./
    restart: 'no'
    command: bash -c "npm i && npm run migration:run && npm run seed:run && npm run start:dev"
    ports:
      - 8888:8888
    env_file:
      - dev.env
    volumes:
      - ./:/var/www/api
      - /var/www/api/node_modules/
    depends_on:
      - postgres
      - kafka
    networks:
      - domper-network
    # healthcheck:
    #   test: ["CMD", "curl", "-f", "http://localhost:8888/healthcheck"]
    #   interval: 60s
    #   timeout: 5s
    #   retries: 5
  notification-service:
    build: ../repoDomperNotification/
    restart: 'no'
    command: npm run start
    ports:
      - 8889:8889
    env_file:
      - dev.env
    volumes:
      - ../repoDomperNotification/:/var/www/notification
      - /var/www/notification/node_modules/
    depends_on:
      - kafka
    networks:
      - domper-network
    healthcheck:
      test: ['CMD', 'curl', '-f', 'http://localhost:8889/healthcheck']
      interval: 60s
      timeout: 5s
      retries: 5
volumes:
  postgres-domper-data:
    driver: local
networks:
  domper-network:
But it works on my Windows desktop, and I don't know what this means; I think it's some host configuration.
I've tried everything I found on the internet and none of it worked.
Log after @OneCricketeer's suggestion (removing host.docker.internal and the extra_hosts entry):
repodompercore-kafka-1 | ===> Running preflight checks ...
repodompercore-kafka-1 | ===> Check if /var/lib/kafka/data is writable ...
repodompercore-kafka-1 | ===> Check if Zookeeper is healthy ...
repodompercore-kafka-1 | SLF4J: Class path contains multiple SLF4J bindings.
repodompercore-kafka-1 | SLF4J: Found binding in [jar:file:/usr/share/java/cp-base-new/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
repodompercore-kafka-1 | SLF4J: Found binding in [jar:file:/usr/share/java/cp-base-new/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
repodompercore-kafka-1 | SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
repodompercore-kafka-1 | SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
repodompercore-kafka-1 | log4j:WARN No appenders could be found for logger (io.confluent.admin.utils.cli.ZookeeperReadyCommand).
repodompercore-kafka-1 | log4j:WARN Please initialize the log4j system properly.
repodompercore-kafka-1 | log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
repodompercore-kafka-1 exited with code 1
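NoRouteToHostException happens before any Kafka configuration matters: the kafka container cannot open a TCP connection to zookeeper:2181 at all, and since the same file works on Windows, a host firewall (e.g. firewalld or iptables rules on Arch) rejecting traffic on the Docker bridge is a plausible culprit. For reference, a sketch of the kafka service with the hard-coded gateway removed, as suggested above (assumption: reaching the broker from the host via localhost:9094 is sufficient for local development):

```yaml
services:
  kafka:
    image: confluentinc/cp-kafka:latest
    restart: unless-stopped
    depends_on:
      - zookeeper
    ports:
      - '9094:9094'
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL
      KAFKA_LISTENERS: INTERNAL://:9092,OUTSIDE://:9094
      # advertise localhost instead of host.docker.internal; host clients
      # reach the broker through the published 9094 port
      KAFKA_ADVERTISED_LISTENERS: INTERNAL://kafka:9092,OUTSIDE://localhost:9094
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INTERNAL:PLAINTEXT,OUTSIDE:PLAINTEXT
    networks:
      - domper-network
```

Other containers keep using kafka:9092 over the compose network, so no extra_hosts entry is needed.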

Not able to connect Spring application to MySQL server in Docker?

I'm new to Docker. As part of dockerizing a Spring MVC application, I'm not able to connect my application to the MySQL server.
Dockerfile (this is a Spring MVC application, so the WAR needs to be copied into the Tomcat container):
FROM tomcat:8.0.20-jre8
COPY /target/CTH.war /usr/local/tomcat/webapps/
docker-compose.yml
version: '3'
services:
  mysql-standalone:
    image: 'mysql:5.7'
    command: mysqld --character-set-server=utf8mb4 --collation-server=utf8mb4_unicode_ci
    volumes:
      - ./docker/provision/mysql/init:/docker-entrypoint-initdb.d
    environment:
      - MYSQL_ROOT_PASSWORD= root
      - MYSQL_USER= root
      - MYSQL_PASSWORD= root
      - MYSQL_DATABASE= CTH
      - MYSQL_DATABASE= CTH_CHAT
    ports:
      - "3307:3306"
  cth-docker-container:
    image: cth-docker-container
    ports:
      - "8082:8082"
    environment:
      CTH_DATASOURCE_DATABASE: CTH
      CTH_DATASOURCE_SERVERNAME: mysql-standalone
      CTH_DATASOURCE_USERNAME: root
      CTH_DATASOURCE_PASSWORD: root
      CTH_DATASOURCE_PORT: 3307
      CTH_CHAT_DATASOURCE_DATABASE: CTH_CHAT
      CTH_CHAT_DATASOURCE_SERVERNAME: mysql-standalone
      CTH_CHAT_DATASOURCE_USERNAME: root
      CTH_CHAT_DATASOURCE_PASSWORD: root
      CTH_CHAT_DATASOURCE_PORT: 3307
    build:
      context: "./"
      dockerfile: "Dockerfile"
    depends_on:
      - mysql-standalone
application.properties (this is a Spring MVC application, and it uses MySQL with two databases):
First database:
dbuser.local=${CTH_DATASOURCE_USERNAME:root}
dbpassword.local=${CTH_DATASOURCE_PASSWORD:root}
dbdatabaseName.local=${CTH_DATASOURCE_DATABASE:CTH}
dbserverName.local=${CTH_DATASOURCE_SERVERNAME:localhost}
dbportNumber.local=${CTH_DATASOURCE_PORT:3306}
Second database:
dbuser.cth.chat.local=${CTH_CHAT_DATASOURCE_USERNAME:root}
dbpassword.cth.chat.local=${CTH_CHAT_DATASOURCE_PASSWORD:root}
dbdatabaseName.cth.chat.local=${CTH_CHAT_DATASOURCE_USERNAME:CTH_CHAT}
dbserverName.cth.chat.local=${CTH_CHAT_DATASOURCE_SERVERNAME:localhost}
dbportNumber.cth.chat.local=${CTH_CHAT_DATASOURCE_PORT:3306}
I followed these articles to create the Dockerfile and docker-compose file: https://medium.com/@asce4s/dockerize-spring-mvc-application-a9ffbd11eadb and https://github.com/abagayev/docker-bootstrap-collection/tree/master/mysql-few-databases/docker/provision/mysql/init
But I'm getting the following errors when I execute:
docker-compose -f docker-compose.yml up
Caused by: com.mysql.cj.core.exceptions.CJCommunicationsException: Communications link failure
cth-docker-container_1 |
cth-docker-container_1 | The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
cth-docker-container_1 | at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
cth-docker-container_1 | at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
cth-docker-container_1 | at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
cth-docker-container_1 | at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
cth-docker-container_1 | at com.mysql.cj.core.exceptions.ExceptionFactory.createException(ExceptionFactory.java:54)
cth-docker-container_1 | at com.mysql.cj.core.exceptions.ExceptionFactory.createException(ExceptionFactory.java:93)
cth-docker-container_1 | at com.mysql.cj.core.exceptions.ExceptionFactory.createException(ExceptionFactory.java:133)
cth-docker-container_1 | at com.mysql.cj.core.exceptions.ExceptionFactory.createCommunicationsException(ExceptionFactory.java:149)
cth-docker-container_1 | at com.mysql.cj.mysqla.io.MysqlaSocketConnection.connect(MysqlaSocketConnection.java:83)
cth-docker-container_1 | at com.mysql.cj.mysqla.MysqlaSession.connect(MysqlaSession.java:122)
cth-docker-container_1 | at com.mysql.cj.jdbc.ConnectionImpl.connectOneTryOnly(ConnectionImpl.java:1754)
cth-docker-container_1 | at com.mysql.cj.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:1624)
cth-docker-container_1 | ... 125 common frames omitted
cth-docker-container_1 | Caused by: java.net.UnknownHostException: mysql-standalone
cth-docker-container_1 | at java.base/java.net.InetAddress$CachedAddresses.get(InetAddress.java:797)
cth-docker-container_1 | at java.base/java.net.InetAddress.getAllByName0(InetAddress.java:1505)
cth-docker-container_1 | at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1364)
cth-docker-container_1 | at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1298)
cth-docker-container_1 | at com.mysql.cj.core.io.StandardSocketFactory.connect(StandardSocketFactory.java:179)
cth-docker-container_1 | at com.mysql.cj.mysqla.io.MysqlaSocketConnection.connect(MysqlaSocketConnection.java:57)
cth-docker-container_1 | ... 128 common frames omitted
I'm stuck and can't tell whether I have set a wrong environment variable or my approach is incorrect. Any suggestion would definitely help me.
Thanks!
After reading lots of articles, I managed to get my application up with the MySQL DB in a Docker container and accessible from MySQL Workbench.
Here is my docker-compose file:
version: '3'
services:
  app:
    container_name: cth-app
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - '8080:8080'
    environment:
      ## db1
      CTH_DATASOURCE_DATABASE: CTH
      CTH_DATASOURCE_HOST: mysqldb
      CTH_DATASOURCE_USERNAME: cth
      CTH_DATASOURCE_PASSWORD: root
      CTH_DATASOURCE_PORT: 3306
      ## db2
      CTH_CHAT_DATASOURCE_DATABASENAME: CTH_CHAT
      CTH_CHAT_DATASOURCE_HOST: mysqldb
      CTH_CHAT_DATASOURCE_USERNAME: cth
      CTH_CHAT_DATASOURCE_PASSWORD: root
      CTH_CHAT_DATASOURCE_PORT: 3306
    depends_on:
      - mysqldb
  mysqldb:
    image: mysql/mysql-server:5.7
    container_name: mysqldb
    command: mysqld --character-set-server=utf8mb4 --collation-server=utf8mb4_unicode_ci
    ## the dump file mounted here creates the two databases and the app user
    volumes:
      - ./docker/provision/mysql/init:/docker-entrypoint-initdb.d
    ports:
      - '3308:3306'
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_USER: cth
      MYSQL_PASSWORD: root
You need to create a dump file; refer to the article https://github.com/abagayev/docker-bootstrap-collection/tree/master/mysql-few-databases
When you run docker-compose up -d --build, your application is up and running on port 8080, with the MySQL DB running inside a container on the host machine.
Thanks!
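A minimal sketch of such an init dump (assumptions: database and user names follow the compose file above, and the file name is hypothetical; the mysql image runs the *.sql files in /docker-entrypoint-initdb.d in alphabetical order on the first start of an empty data directory):

```sql
-- docker/provision/mysql/init/01-databases.sql (hypothetical file name)
CREATE DATABASE IF NOT EXISTS CTH;
CREATE DATABASE IF NOT EXISTS CTH_CHAT;

-- the 'cth' user is created by MYSQL_USER/MYSQL_PASSWORD, but only gets
-- privileges on MYSQL_DATABASE by default, so grant both schemas here
GRANT ALL PRIVILEGES ON CTH.* TO 'cth'@'%';
GRANT ALL PRIVILEGES ON CTH_CHAT.* TO 'cth'@'%';
FLUSH PRIVILEGES;
```

Note that init scripts only run when the data directory is empty; re-running them requires removing the container's data volume first.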

getting connection refused between docker containers in the same docker compose

I have an akka-http application and a Keycloak docker container.
I used the sbt native packager docker plugin to create an image of my app, and then wrote a docker-compose file, but my app container is not discovering the Keycloak container.
Here is my build.sbt file:
enablePlugins(DockerPlugin)
enablePlugins(JavaAppPackaging)
dockerExposedPorts := Seq(8083)
Then I created a docker image of my project:
>docker:publishLocal
Then I created a docker-compose file:
version: '3.3'
services:
  keycloak:
    image: jboss/keycloak
    container_name: docker-keycloak-container
    ports:
      - "8080:8080"
    environment:
      - KEYCLOAK_USER=admin
      - KEYCLOAK_PASSWORD=admin
  akkahttpservice:
    image: myproject-auth:0.0.1
    container_name: docker-myproject-auth-container
    ports:
      - "8083:8083"
    depends_on:
      - keycloak
docker ps shows
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
181522f78d22 myproject-auth:0.0.1 "/opt/docker/bin/int…" 45 seconds ago Up 40 seconds 0.0.0.0:8083->8083/tcp docker-myproject-auth-container
d92d9c4f0a19 jboss/keycloak "/opt/jboss/tools/do…" 5 days ago Up 46 seconds 0.0.0.0:8080->8080/tcp, 8443/tcp docker-keycloak-container
But when I hit the route in my application that talks to Keycloak, I get:
14:13:32.020 [scala-execution-context-global-45] ERROR com.ifkaar.lufz.authentication.actors.worker.TokenManagerActor - Actor TokenManager: exception in fetching token
docker-myproject-auth-container | akka.stream.StreamTcpException: Tcp command [Connect(0.0.0.0:8080,None,List(),Some(10 seconds),true)] failed because of java.net.ConnectException: Connection refused
docker-myproject-auth-container | Caused by: java.net.ConnectException: Connection refused
docker-myproject-auth-container | at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
docker-myproject-auth-container | at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:714)
docker-myproject-auth-container | at akka.io.TcpOutgoingConnection$$anonfun$connecting$1.$anonfun$applyOrElse$4(TcpOutgoingConnection.scala:111)
docker-myproject-auth-container | at akka.io.TcpOutgoingConnection.akka$io$TcpOutgoingConnection$$reportConnectFailure(TcpOutgoingConnection.scala:53)
docker-myproject-auth-container | at akka.io.TcpOutgoingConnection$$anonfun$connecting$1.applyOrElse(TcpOutgoingConnection.scala:111)
docker-myproject-auth-container | at akka.actor.Actor.aroundReceive(Actor.scala:537)
docker-myproject-auth-container | at akka.actor.Actor.aroundReceive$(Actor.scala:535)
docker-myproject-auth-container | at akka.io.TcpConnection.aroundReceive(TcpConnection.scala:33)
docker-myproject-auth-container | at akka.actor.ActorCell.receiveMessage(ActorCell.scala:577)
docker-myproject-auth-container | at akka.actor.ActorCell.invoke(ActorCell.scala:547)
docker-myproject-auth-container | at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:270)
docker-myproject-auth-container | at akka.dispatch.Mailbox.run(Mailbox.scala:231)
docker-myproject-auth-container | at akka.dispatch.Mailbox.exec(Mailbox.scala:243)
docker-myproject-auth-container | at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
docker-myproject-auth-container | at java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
docker-myproject-auth-container | at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
docker-myproject-auth-container | at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:157)
But everything works when I run my app from sbt against the Keycloak docker container. Why is that?
Also, I am running my akka-http project on http://0.0.0.0:8083.
Kindly guide me on this.
You need to inject the address of Keycloak into your service via an environment variable. You can use the service name as the Keycloak address (Compose docs):
services:
  keycloak:
    image: jboss/keycloak
    container_name: docker-keycloak-container
    ports:
      - "8080:8080"
    environment:
      - KEYCLOAK_USER=admin
      - KEYCLOAK_PASSWORD=admin
  akkahttpservice:
    image: myproject-auth:0.0.1
    container_name: docker-myproject-auth-container
    environment:
      - KEYCLOAK_URL=http://keycloak:8080
    ports:
      - "8083:8083"
    depends_on:
      - keycloak

Error in running "docker-compose up" : Invalid config, exiting abnormally

I have three different nodes, each running Docker with an Ubuntu image on it. I want to make a Kafka cluster with these three nodes. I configured "zookeeper.properties" in the docker environment for "150.20.11.157" like this:
dataDir=/tmp/zookeeper/data
tickTime=2000
initLimit=10
syncLimit=5
server.1=0.0.0.0:2888:3888
server.2=150.20.11.134:2888:3888
server.3=150.20.11.137:2888:3888
clientPort=2186
For node 150.20.11.134, the "zookeeper.properties" file in the docker environment is:
dataDir=/tmp/zookeeper/data
tickTime=2000
initLimit=10
syncLimit=5
server.1=150.20.11.157:2888:3888
server.2=0.0.0.0:2888:3888
server.3=150.20.11.137:2888:3888
clientPort=2186
For node 150.20.11.137, the "zookeeper.properties" file in the docker environment is:
dataDir=/tmp/zookeeper/data
tickTime=2000
initLimit=10
syncLimit=5
server.1=150.20.11.157:2888:3888
server.2=150.20.11.134:2888:3888
server.3=0.0.0.0:2888:3888
clientPort=2186
Also, I set up "server.properties" like this for node 150.20.11.157:
broker.id=0
port=9092
listeners = PLAINTEXT://150.20.11.157:9092
log.dirs=/tmp/kafka-logs
zookeeper.connect=150.20.11.157:2186,150.20.11.134:2186,150.20.11.137:2186
"server.properties" for node 150.20.11.134 is:
broker.id=1
port=9092
listeners = PLAINTEXT://150.20.11.134:9092
log.dirs=/tmp/kafka-logs
zookeeper.connect=150.20.11.157:2186,150.20.11.134:2186,150.20.11.137:2186
"server.properties" for node 150.20.11.137 is:
broker.id=2
port=9092
listeners = PLAINTEXT://150.20.11.137:9092
log.dirs=/tmp/kafka-logs
zookeeper.connect=150.20.11.157:2186,150.20.11.134:2186,150.20.11.137:2186
Moreover, every node has a "myid" file in "/tmp/zookeeper/data" of the docker environment, with its server id inside it.
To make a Kafka cluster of three nodes like this picture, I made a "docker-compose.yaml" file and a Dockerfile for it.
This is my docker-compose file:
version: '3.7'
services:
  zookeeper:
    build: .
    command: /root/kafka_2.11-2.0.1/bin/zookeeper-server-start.sh /root/kafka_2.11-2.0.1/config/zookeeper.properties
    ports:
      - 2186:2186
  kafka1:
    build:
      context: .
      args:
        brokerId: 0
    command: /root/kafka_2.11-2.0.1/bin/kafka-server-start.sh /root/kafka_2.11-2.0.1/config/server.properties
    depends_on:
      - zookeeper
  kafka2:
    build:
      context: .
      args:
        brokerId: 1
    command: /root/kafka_2.11-2.0.1/bin/kafka-server-start.sh /root/kafka_2.11-2.0.1/config/server.properties
    depends_on:
      - zookeeper
  kafka3:
    build:
      context: .
      args:
        brokerId: 2
    command: /root/kafka_2.11-2.0.1/bin/kafka-server-start.sh /root/kafka_2.11-2.0.1/config/server.properties
    depends_on:
      - zookeeper
  producer:
    build: .
    command: bash -c "sleep 4 && /root/kafka_2.11-2.0.1/bin/kafka-topics.sh --create --zookeeper zookeeper:2186 --replication-factor 2 --partitions 3 --topic dates && while true; do date | /kafka_2.11-2.0.1/bin/kafka-console-producer.sh --broker-list kafka1:9092,kafka2:9092,kafka3:9092 --topic dates; sleep 1; done "
    depends_on:
      - zookeeper
      - kafka1
      - kafka2
      - kafka3
  consumer:
    build: .
    command: bash -c "sleep 6 && /root/kafka_2.11-2.0.1/bin/kafka-console-consumer.sh localhost:9092 --topic dates --bootstrap-server kafka1:9092,kafka2:9092,kafka3:9092"
    depends_on:
      - zookeeper
      - kafka1
      - kafka2
      - kafka3
The problem is that after "docker build .", when I do "sudo docker-compose up" on each node, it does not run completely. Some of my log follows:
zookeeper_1 | [2019-01-17 16:09:27,197] INFO Reading configuration from: /root/kafka_2.11-2.0.1/config/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
kafka3_1 | [2019-01-17 16:09:29,426] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
kafka3_1 | [2019-01-17 16:09:29,702] INFO starting (kafka.server.KafkaServer)
kafka3_1 | [2019-01-17 16:09:29,702] INFO Connecting to zookeeper on 150.20.11.157:2186,150.20.11.134:2186,150.20.11.137:2186 (kafka.server.KafkaServer)
kafka1_1 | [2019-01-17 16:09:30,012] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
zookeeper_1 | [2019-01-17 16:09:27,240] INFO Resolved hostname: 150.20.11.137 to address: /150.20.11.137 (org.apache.zookeeper.server.quorum.QuorumPeer)
kafka1_1 | [2019-01-17 16:09:30,486] INFO starting (kafka.server.KafkaServer)
kafka3_1 | [2019-01-17 16:09:29,715] INFO [ZooKeeperClient] Initializing a new session to 150.20.11.157:2186,150.20.11.134:2186,150.20.11.137:2186. (kafka.zookeeper.ZooKeeperClient)
zookeeper_1 | [2019-01-17 16:09:27,241] INFO Resolved hostname: 150.20.11.134 to address: /150.20.11.134 (org.apache.zookeeper.server.quorum.QuorumPeer)
zookeeper_1 | [2019-01-17 16:09:27,241] INFO Resolved hostname: 0.0.0.0 to address: /0.0.0.0 (org.apache.zookeeper.server.quorum.QuorumPeer)
kafka3_1 | [2019-01-17 16:09:29,720] INFO Client environment:zookeeper.version=3.4.13-2d71af4dbe22557fda74f9a9b4309b15a7487f03, built on 06/29/2018 00:39 GMT (org.apache.zookeeper.ZooKeeper)
zookeeper_1 | [2019-01-17 16:09:27,241] INFO Defaulting to majority quorums (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
kafka3_1 | [2019-01-17 16:09:29,721] INFO Client environment:host.name=be08b050be4c (org.apache.zookeeper.ZooKeeper)
zookeeper_1 | [2019-01-17 16:09:27,242] ERROR Invalid config, exiting abnormally (org.apache.zookeeper.server.quorum.QuorumPeerMain)
zookeeper_1 | org.apache.zookeeper.server.quorum.QuorumPeerConfig$ConfigException: Error processing /root/kafka_2.11-2.0.1/config/zookeeper.properties
zookeeper_1 | at org.apache.zookeeper.server.quorum.QuorumPeerConfig.parse(QuorumPeerConfig.java:156)
zookeeper_1 | at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:104)
zookeeper_1 | at org.apache.zookeeper.server.quorum.QuorumPeerMain.main(QuorumPeerMain.java:81)
zookeeper_1 | Caused by: java.lang.IllegalArgumentException: /tmp/zookeeper/data/myid file is missing
zookeeper_1 | at org.apache.zookeeper.server.quorum.QuorumPeerConfig.parseProperties(QuorumPeerConfig.java:408)
zookeeper_1 | at org.apache.zookeeper.server.quorum.QuorumPeerConfig.parse(QuorumPeerConfig.java:152)
zookeeper_1 | ... 2 more
kafka1_1 | [2019-01-17 16:09:30,487] INFO Connecting to zookeeper on 150.20.11.157:2186,150.20.11.134:2186,150.20.11.137:2186 (kafka.server.KafkaServer)
zookeeper_1 | Invalid config, exiting abnormall
In fact, I had configured the Kafka cluster without Docker on every node, and I could run Zookeeper and the Kafka server without any problem. The Kafka cluster was like this picture:
Would you please tell me what I am doing wrong in configuring this cluster?
Any help would be appreciated.
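The decisive line in the trace is "Caused by: java.lang.IllegalArgumentException: /tmp/zookeeper/data/myid file is missing": the myid file exists on the host, but nothing puts it inside the container's filesystem. A minimal sketch of creating it at container start (assumptions: server id 1 for this node and the kafka_2.11-2.0.1 paths from the compose file above):

```yaml
services:
  zookeeper:
    build: .
    # write the myid file before starting Zookeeper; use id 2 or 3 on the
    # other two nodes (or bake the file into the image in the Dockerfile)
    command: bash -c "mkdir -p /tmp/zookeeper/data && echo 1 > /tmp/zookeeper/data/myid && /root/kafka_2.11-2.0.1/bin/zookeeper-server-start.sh /root/kafka_2.11-2.0.1/config/zookeeper.properties"
    ports:
      - 2186:2186
```

Alternatively, a bind mount of the host's /tmp/zookeeper/data into the container would make the existing myid file visible.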
I changed the docker-compose file and the problem was solved. Zookeeper and the Kafka server run without any problem, the topic is created, and the consumer and producer work with the topic on all three nodes. My docker-compose for one node is like this:
version: '3.7'
services:
  zookeeper:
    image: ubuntu_mesos
    command: /root/kafka_2.11-2.0.1/bin/zookeeper-server-start.sh /root/kafka_2.11-2.0.1/config/zookeeper.properties
    environment:
      ZOOKEEPER_SERVER_ID: 1
      ZOOKEEPER_CLIENT_PORT: 2186
      ZOOKEEPER_TICK_TIME: 2000
      ZOOKEEPER_INIT_LIMIT: 10
      ZOOKEEPER_SYNC_LIMIT: 5
      ZOOKEEPER_SERVERS: 0.0.0.0:2888:3888;150.20.11.134:2888:3888;150.20.11.137:2888:3888
    network_mode: host
    expose:
      - 2186
      - 2888
      - 3888
    ports:
      - 2186:2186
      - 2888:2888
      - 3888:3888
  kafka:
    image: ubuntu_mesos
    command: bash -c "sleep 20; /root/kafka_2.11-2.0.1/bin/kafka-server-start.sh /root/kafka_2.11-2.0.1/config/server.properties"
    network_mode: host
    depends_on:
      - zookeeper
    environment:
      KAFKA_BROKER_ID: 0
      KAFKA_ZOOKEEPER_CONNECT: 150.20.11.157:2186,150.20.11.134:2186,150.20.11.137:2186
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://150.20.11.157:9092
    expose:
      - 9092
    ports:
      - 9092:9092
  producer:
    image: ubuntu_mesos
    command: bash -c "sleep 40; /root/kafka_2.11-2.0.1/bin/kafka-topics.sh --create --zookeeper 150.20.11.157:2186 --replication-factor 2 --partitions 3 --topic testFlink -- /root/kafka_2.11-2.0.1/bin/kafka-console-producer.sh --broker-list 150.20.11.157:9092 --topic testFlink"
    depends_on:
      - zookeeper
      - kafka
  consumer:
    image: ubuntu_mesos
    command: bash -c "sleep 44; /root/kafka_2.11-2.0.1/bin/kafka-console-consumer.sh --bootstrap-server 150.20.11.157:9092 --topic testFlink --from-beginning"
    depends_on:
      - zookeeper
      - kafka
The two other nodes have a docker-compose file like the above too.
I hope this is helpful for others.

How to stop a container from starting before a dependency is "really" done?

I am trying to hold a container (Apache+PHP, so ready to accept connections as soon as it starts) from starting before a dependency has finished. This is what my docker-compose.yml looks like:
version: '3'
services:
  webserver:
    image: reynierpm/docker-lamp
    dns:
      - 8.8.8.8
      - 8.8.4.4
    ports:
      - "80:80"
    volumes:
      - f:/sources:/var/www/html
    depends_on:
      - mysqldb
      - mongodb
    restart: on-failure
    environment:
      SERVER_URL: 'localhost'
      SERVER_SCHEME: 'http'
      HTTP_PORT: '80'
  mysqldb:
    image: mysql:latest
    healthcheck:
      test: "exit 0"
      interval: 1m30s
      timeout: 10s
      retries: 3
    env_file: .env
    environment:
      MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
      MYSQL_DATABASE: ${MYSQL_DATABASE}
      MYSQL_USER: ${MYSQL_USER}
      MYSQL_PASSWORD: ${MYSQL_PASSWORD}
    volumes:
      - f:/sources/db_dump:/docker-entrypoint-initdb.d
    ports:
      - "3306:3306"
    restart: on-failure
  mongodb:
    image: mongo:latest
    env_file: .env
    environment:
      MONGO_INITDB_ROOT_USERNAME: ${MONGO_INITDB_ROOT_USERNAME}
      MONGO_INITDB_ROOT_PASSWORD: ${MONGO_INITDB_ROOT_PASSWORD}
      MONGO_INITDB_DATABASE: ${MONGO_INITDB_DATABASE}
    ports:
      - "27017:27017"
    restart: on-failure
But the result of docker-compose up is as follows:
mysqldb_1 | mysql: [Warning] Using a password on the command line interface can be insecure.
mysqldb_1 |
mysqldb_1 |
mysqldb_1 | /usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/language.sql
mysqldb_1 | mysql: [Warning] Using a password on the command line interface can be insecure.
mysqldb_1 |
mysqldb_1 |
mysqldb_1 | /usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/license.sql
mysqldb_1 | mysql: [Warning] Using a password on the command line interface can be insecure.
webserver_1 | Enabling module rewrite.
webserver_1 | Enabling module expires.
webserver_1 | To activate the new configuration, you need to run:
webserver_1 | service apache2 restart
webserver_1 | AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 172.18.0.4. Set the 'ServerName' directive globally to suppress this message
mysqldb_1 |
mysqldb_1 |
mysqldb_1 | /usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/license_agreement.sql
mysqldb_1 | mysql: [Warning] Using a password on the command line interface can be insecure.
mysqldb_1 |
mysqldb_1 |
mysqldb_1 | /usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/license_forcast.sql
mysqldb_1 | mysql: [Warning] Using a password on the command line interface can be insecure.
As you can see, the webserver_1 container has already started in the middle of the MySQL import process, before mysqldb_1 is done. If I try to access the application, it fails (blank page) because MySQL isn't ready to accept connections, or needed tables haven't been created/imported yet.
How do I avoid this? What changes do I need to make in my docker-compose.yml to achieve it?
In your 'webserver' start-up script / entry point you could add a sleep, or you could use the docker wait command as described here:
https://docs.docker.com/engine/reference/commandline/wait/
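Another option worth sketching: give mysqldb a healthcheck that actually probes the server (the "exit 0" test above always succeeds immediately), and gate the webserver on it with the long form of depends_on. Assumption: a Compose version that honors depends_on conditions (version 2.1 files, or the newer Compose Specification used by the docker compose CLI; classic v3 files under docker-compose ignore condition):

```yaml
services:
  webserver:
    image: reynierpm/docker-lamp
    depends_on:
      mysqldb:
        condition: service_healthy   # start only after the healthcheck passes
  mysqldb:
    image: mysql:latest
    healthcheck:
      # actually ping the server instead of "exit 0"
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
      interval: 10s
      timeout: 5s
      retries: 5
```

Note that mysqladmin ping can report healthy while the init scripts in /docker-entrypoint-initdb.d are still running, so a wait-for loop in the webserver entrypoint that checks for a specific table is the more conservative approach.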
