Run supervisor outside Docker while nimbus runs as a Docker container

I'm trying to run nimbus as a container in Docker and use my machine as the supervisor. Here is what I've tried: a Docker Compose file that sets up the nimbus, zookeeper and ui containers:
nimbus:
  image: storm:2.4.0
  platform: linux/x86_64
  container_name: nimbus
  command: storm nimbus
  depends_on:
    - zookeeper
  links:
    - zookeeper
  restart: always
  ports:
    - '6627:6627'
  networks:
    - storm
zookeeper:
  image: bitnami/zookeeper:3.6.2
  container_name: zookeeper
  ports:
    - '2181'
  volumes:
    - ~/data/zookeeper:/bitnami/zookeeper
  environment:
    - ALLOW_ANONYMOUS_LOGIN=yes
  networks:
    - storm
ui:
  platform: linux/x86_64
  image: storm:2.4.0
  container_name: ui
  command: storm ui
  links:
    - nimbus
    - zookeeper
  restart: always
  ports:
    - "9090:8080"
  networks:
    - storm
I've exposed both services with ports. When I try to run storm supervisor from the terminal, with localhost as the nimbus seed and ZooKeeper server in storm.yaml, I get the following log and the supervisor fails to start.
2022-11-03 19:15:55.866 o.a.s.s.o.a.z.ClientCnxn main-SendThread(localhost:2181) [INFO] Socket error occurred: localhost/127.0.0.1:2181: Connection refused
2022-11-03 19:15:55.971 o.a.s.d.s.Supervisor main [ERROR] supervisor can't create stormClusterState
2022-11-03 19:15:55.975 o.a.s.u.Utils main [ERROR] Received error in thread main.. terminating server...
java.lang.Error: java.lang.RuntimeException: org.apache.storm.shade.org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /storm
at org.apache.storm.utils.Utils.handleUncaughtException(Utils.java:663) ~[storm-client-2.3.0.jar:2.3.0]
at org.apache.storm.utils.Utils.handleUncaughtException(Utils.java:667) ~[storm-client-2.3.0.jar:2.3.0]
at org.apache.storm.utils.Utils.lambda$createDefaultUncaughtExceptionHandler$2(Utils.java:1047) ~[storm-client-2.3.0.jar:2.3.0]
at java.lang.ThreadGroup.uncaughtException(ThreadGroup.java:1055) [?:?]
at java.lang.ThreadGroup.uncaughtException(ThreadGroup.java:1050) [?:?]
at java.lang.Thread.dispatchUncaughtException(Thread.java:1997) [?:?]
I'm able to access the UI at localhost:9090. Can someone please tell me how to resolve this?
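One thing worth noting: the zookeeper service above publishes only the container port ('2181'), which Compose maps to a random host port, so a supervisor running on the host cannot reach ZooKeeper at localhost:2181. A hedged sketch of that port change and of the host-side storm.yaml described above (the actual file wasn't shown, so the values are assumptions):

zookeeper:
  image: bitnami/zookeeper:3.6.2
  container_name: zookeeper
  ports:
    - '2181:2181'   # publish on a fixed host port so the host supervisor can reach it
  volumes:
    - ~/data/zookeeper:/bitnami/zookeeper
  environment:
    - ALLOW_ANONYMOUS_LOGIN=yes
  networks:
    - storm

# storm.yaml on the host machine (sketch)
storm.zookeeper.servers:
  - "localhost"
storm.zookeeper.port: 2181
nimbus.seeds: ["localhost"]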

Related

Why can't celery worker connect to rabbitmq using docker compose

I am trying to combine my services using docker compose but my celery and celery beat services can't connect with my rabbitmq service.
Here is my rabbitmq service in the docker-compose.yml file
rabbitmq:
  container_name: "rabbitmq"
  image: rabbitmq:3-management
  ports:
    - 15672:15672
    - 5672:5672
  environment:
    - RABBITMQ_DEFAULT_USER=guest
    - RABBITMQ_DEFAULT_PASS=guest
  depends_on:
    - server
  volumes:
    - rabbitmq:/var/lib/rabbitmq
and here are my celery worker and celery beat services in docker-compose.yml
celery_worker:
  container_name: celery-worker
  build: .
  command: celery -A tasks worker -E --loglevel=INFO
  environment:
    host_server: postgresqldb
    db_server_port: 5432
    database_name: db
    db_username: user
    db_password: password
    ssl_mode: prefer
  networks:
    - postgresqlnet
  depends_on:
    - rabbitmq
celery_beat:
  container_name: celery-beat
  build: .
  command: celery -A tasks beat
  environment:
    - host_server=postgresqldb
    - db_server_port=5432
    - database_name=db
    - db_username=user
    - db_password=password
    - ssl_mode=prefer
  networks:
    - postgresqlnet
  depends_on:
    - rabbitmq
I also have a celeryconfig.py where the broker URL is stored. Its contents are below:
broker_url = "amqp://guest:guest@localhost:5672//"
When I run docker compose up I get this output from celery and celery beat.
celery-worker | [2023-02-03 14:24:43,223: ERROR/MainProcess] consumer: Cannot connect to amqp://guest:**@127.0.0.1:5672//: [Errno 111] Connection refused.
celery-worker | Trying again in 32.00 seconds... (16/100)
celery-worker |
celery-beat | [2023-02-03 14:25:10,058: ERROR/MainProcess] beat: Connection error: [Errno 111] Connection refused. Trying again in 32.0 seconds...
I realize what I did wrong now. First I needed to connect all three services using Docker networking, and then use "rabbitmq" as the hostname in my celeryconfig file.
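A minimal sketch of that fix; the guest credentials come from the compose file above, and reusing the postgresqlnet network is an assumption (any network shared by all three services works):

# celeryconfig.py - use the compose service name, not localhost
broker_url = "amqp://guest:guest@rabbitmq:5672//"

# docker-compose.yml fragment - attach rabbitmq to the same network as the celery services
rabbitmq:
  image: rabbitmq:3-management
  networks:
    - postgresqlnet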

Grafana + prometheus node-exporter data source error - Instant query failed (bigger context)

My problem is that when I try to connect to a data source that is node-exporter, the following error pops up in the logs.
grafana | t=2021-12-31T07:37:52+0000 lvl=eror msg="Instant query failed" logger=tsdb.prometheus query=1+1 err="bad_response: readObjectStart: expect { or n, but found <, error found in #1 byte of ...|<html>\n\t\t\t<|..., bigger context ...|<html>\n\t\t\t<head><title>Node Exporter</title></head>|..."
The same thing is happening for both node-exporters.
I'm using Docker Compose and Docker containers to deploy prometheus, node-exporter and grafana, but even without containers I was receiving the same error.
Prometheus is pulling metrics from all targets, and it looks like it's working fine.
My docker compose settings for node:
node-exporter:
  image: prom/node-exporter:latest
  container_name: prometheus-node-rockwatch
  restart: unless-stopped
  ports:
    - "9100:9100"
  volumes:
    - /proc:/host/proc:ro
    - /sys:/host/sys:ro
    - /:/rootfs:ro
  command:
    - '--path.procfs=/host/proc'
    - '--path.rootfs=/rootfs'
    - '--path.sysfs=/host/sys'
    - '--collector.filesystem.mount-points-exclude=^/(sys|proc|dev|host|etc)($$|/)'
Thanks for any help!
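For what it's worth, the error above shows node-exporter's HTML landing page being returned where Grafana expected the Prometheus HTTP API, which suggests the data source URL points at the exporter (port 9100) rather than at the Prometheus server. A hedged provisioning sketch, assuming a Prometheus container reachable as prometheus on port 9090 on the same Docker network:

# grafana/provisioning/datasources/prometheus.yml (sketch)
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://prometheus:9090   # point Grafana at Prometheus, not at node-exporter:9100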

Apache NiFi Cluster in Docker over 3 VMs

I want to make a NiFi cluster in Docker over 3 VMs.
I found a docker-compose file that creates a cluster on one node and tried to edit this file.
I found out that I need ZooKeeper, but do I need one ZooKeeper per instance? And what ports should I open or map in Docker?
The docker-compose file I found:
version: "3"
services:
zookeeper:
hostname: zookeeper
container_name: zookeeper
image: zookeeper:3.6.1
environment:
- ALLOW_ANONYMOUS_LOGIN=yes
nifi:
image: apache/nifi:1.11.4
ports:
- 8080 # Unsecured HTTP Web Port
environment:
- NIFI_WEB_HTTP_PORT=8080
- NIFI_CLUSTER_IS_NODE=true
- NIFI_CLUSTER_NODE_PROTOCOL_PORT=8082
- NIFI_ZK_CONNECT_STRING=zookeeper:2181
- NIFI_ELECTION_MAX_WAIT=1 min
I changed the file like this (on each VM the IP is correct):
version: "3"
services:
zookeeper:
hostname: zookeeper
container_name: zookeeper
image: 'zookeeper:3.6.1'
ports:
- 2181
environment:
- ALLOW_ANONYMOUS_LOGIN=yes
nifi:
image: apache/nifi:1.11.4
ports:
- 8080 # Unsecured HTTP Web Port
- 8082
- 9001
environment:
- NIFI_WEB_HTTP_PORT=8080
- NIFI_CLUSTER_IS_NODE=true
- NIFI_CLUSTER_NODE_PROTOCOL_PORT=8082
# - NIFI_ZK_CONNECT_STRING=zookeeper:2181
- NIFI_ZK_CONNECT_STRING=192.168.2.10:2181,192.168.2.20:2181,192.168.2.30:2181
- NIFI_ELECTION_MAX_WAIT=1 min
- NIFI_CLUSTER_ADDRESS=192.168.2.XX
And in the logs I found this message but can't find any solution:
ERROR [Curator-Framework-0] o.a.c.f.imps.CuratorFrameworkImpl Background retry gave up
org.apache.curator.CuratorConnectionLossException: KeeperErrorCode = ConnectionLoss
at org.apache.curator.framework.imps.CuratorFrameworkImpl.performBackgroundOperation(CuratorFrameworkImpl.java:972)
at org.apache.curator.framework.imps.CuratorFrameworkImpl.backgroundOperationsLoop(CuratorFrameworkImpl.java:943)
at org.apache.curator.framework.imps.CuratorFrameworkImpl.access$300(CuratorFrameworkImpl.java:66)
at org.apache.curator.framework.imps.CuratorFrameworkImpl$4.call(CuratorFrameworkImpl.java:346)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
I found out that I need ZooKeeper, but do I need one ZooKeeper per instance?
No, you can use one ZooKeeper.
And what ports should I open or map in Docker?
As far as I know, you need 2888, 3888 and 2181 to be open, but only 2181 is needed to communicate with NiFi;
2888 and 3888 are for ZooKeeper cluster communication.
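A hedged sketch of what that could look like in the zookeeper service on the VM that hosts it (whether you publish 2888/3888 at all depends on whether you run a ZooKeeper ensemble):

zookeeper:
  hostname: zookeeper
  container_name: zookeeper
  image: 'zookeeper:3.6.1'
  ports:
    - "2181:2181"   # client port used by NiFi on the other VMs
    - "2888:2888"   # follower-to-leader traffic, only needed for a ZooKeeper ensemble
    - "3888:3888"   # leader election, only needed for a ZooKeeper ensemble
  environment:
    - ALLOW_ANONYMOUS_LOGIN=yes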

Unable to run Apache nifi docker container in swarm. Empty response

I am unable to run the official nifi image in Docker swarm.
When I start the container in regular mode:
docker run --name nifi -p 8080:8080 -d apache/nifi:latest
everything works fine and I can access the application under http://localhost:8080/nifi
However, when I try to run the application in Docker swarm:
docker swarm init
docker stack deploy -c docker-compose.yml nifi
with the following docker-compose.yml:
version: "3"
services:
zookeeper:
hostname: zookeeper
container_name: zookeeper
image: 'bitnami/zookeeper:latest'
environment:
- ALLOW_ANONYMOUS_LOGIN=yes
nifi:
image: apache/nifi:latest
ports:
- "8080:8080"
expose:
- "8080"
environment:
- NIFI_WEB_HTTP_PORT=8080
- NIFI_WEB_HTTP_HOST=localhost
- NIFI_CLUSTER_IS_NODE=true
- NIFI_CLUSTER_NODE_PROTOCOL_PORT=8082
- NIFI_ZK_CONNECT_STRING=zookeeper:2181
- NIFI_ELECTION_MAX_WAIT=1 min
The application starts (zookeeper and nifi) but is inaccessible at http://localhost:8080/nifi:
curl http://localhost:8080
curl: (52) Empty reply from server
However running the following code:
docker exec -it 629ecd6949d9 curl -v http://localhost:8080
shows that nifi is up and running, but for some reason it does not work from outside the container.
I am close to hitting the wall with my head. How can I fix this?
Best
Paweł
I refactored your compose file. Try it:
version: "3.3"
services:
zookeeper:
hostname: zookeeper
image: 'bitnami/zookeeper:latest'
environment:
- ALLOW_ANONYMOUS_LOGIN=yes
nifi:
image: apache/nifi:latest
ports:
- target: 8080
published: 8080
protocol: tcp
mode: host
environment:
- NIFI_WEB_HTTP_PORT=8080
- NIFI_WEB_HTTP_HOST=0.0.0.0
- NIFI_CLUSTER_IS_NODE=true
- NIFI_CLUSTER_NODE_PROTOCOL_PORT=8082
- NIFI_ZK_CONNECT_STRING=zookeeper:2181
- NIFI_ELECTION_MAX_WAIT=1 min
In order to make the NiFi image run in Docker swarm mode you need to add NIFI_WEB_HTTP_HOST=0.0.0.0 to the environment section of the docker-compose file:
version: "3"
services:
zookeeper:
hostname: zookeeper
container_name: zookeeper
image: 'bitnami/zookeeper:latest'
environment:
- ALLOW_ANONYMOUS_LOGIN=yes
nifi:
image: apache/nifi:latest
ports:
- "8080:8080"
expose:
- "8080"
environment:
- NIFI_WEB_HTTP_HOST=0.0.0.0 # This line right here
- NIFI_WEB_HTTP_PORT=8080
- NIFI_WEB_HTTP_HOST=localhost
- NIFI_CLUSTER_IS_NODE=true
- NIFI_CLUSTER_NODE_PROTOCOL_PORT=8082
- NIFI_ZK_CONNECT_STRING=zookeeper:2181
- NIFI_ELECTION_MAX_WAIT=1 min
Sorry, but if NIFI_WEB_HTTP_HOST=0.0.0.0 is set, it will cause problems when the nifi containers try to communicate with each other.
2020-02-20 03:20:13,509 WARN [Replicate Request Thread-5] o.a.n.c.c.h.r.ThreadPoolRequestReplicator
java.net.SocketTimeoutException: timeout
at okio.Okio$4.newTimeoutException(Okio.java:232)
at okio.AsyncTimeout.exit(AsyncTimeout.java:285)
at okio.AsyncTimeout$2.read(AsyncTimeout.java:241)
at okio.RealBufferedSource.indexOf(RealBufferedSource.java:355)
at okio.RealBufferedSource.readUtf8LineStrict(RealBufferedSource.java:227)
at okhttp3.internal.http1.Http1Codec.readHeaderLine(Http1Codec.java:215)
at okhttp3.internal.http1.Http1Codec.readResponseHeaders(Http1Codec.java:189)
at okhttp3.internal.http.CallServerInterceptor.intercept(CallServerInterceptor.java:88)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:147)
at okhttp3.internal.connection.ConnectInterceptor.intercept(ConnectInterceptor.java:45)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:147)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:121)
at okhttp3.internal.cache.CacheInterceptor.intercept(CacheInterceptor.java:93)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:147)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:121)
at okhttp3.internal.http.BridgeInterceptor.intercept(BridgeInterceptor.java:93)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:147)
at okhttp3.internal.http.RetryAndFollowUpInterceptor.intercept(RetryAndFollowUpInterceptor.java:126)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:147)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:121)
at okhttp3.RealCall.getResponseWithInterceptorChain(RealCall.java:200)
at okhttp3.RealCall.execute(RealCall.java:77)
at org.apache.nifi.cluster.coordination.http.replication.okhttp.OkHttpReplicationClient.replicate(OkHttpReplicationClient.java:138)
at org.apache.nifi.cluster.coordination.http.replication.okhttp.OkHttpReplicationClient.replicate(OkHttpReplicationClient.java:132)
at org.apache.nifi.cluster.coordination.http.replication.ThreadPoolRequestReplicator.replicateRequest(ThreadPoolRequestReplicator.java:647)
at org.apache.nifi.cluster.coordination.http.replication.ThreadPoolRequestReplicator$NodeHttpRequest.run(ThreadPoolRequestReplicator.java:839)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.SocketException: Socket closed
at java.net.SocketInputStream.read(SocketInputStream.java:204)
at java.net.SocketInputStream.read(SocketInputStream.java:141)
at okio.Okio$2.read(Okio.java:140)
at okio.AsyncTimeout$2.read(AsyncTimeout.java:237)
... 28 common frames omitted

consumer: Cannot connect to amqp://user:**@localhost:5672//: [Errno 111] Connection refused

I am trying to build my Airflow setup using Docker and RabbitMQ. I am using the rabbitmq:3-management image, and I am able to access the RabbitMQ UI and API.
In Airflow I am building the airflow webserver, scheduler, worker and flower. The airflow.cfg file is used to configure Airflow.
There I am using broker_url = amqp://user:password@127.0.0.1:5672/ and celery_result_backend = amqp://user:password@127.0.0.1:5672/
My docker compose file is as follows
version: '3'
services:
  rabbit1:
    image: "rabbitmq:3-management"
    hostname: "rabbit1"
    environment:
      RABBITMQ_ERLANG_COOKIE: "SWQOKODSQALRPCLNMEQG"
      RABBITMQ_DEFAULT_USER: "user"
      RABBITMQ_DEFAULT_PASS: "password"
      RABBITMQ_DEFAULT_VHOST: "/"
    ports:
      - "5672:5672"
      - "15672:15672"
    labels:
      NAME: "rabbitmq1"
  webserver:
    build: "airflow/"
    hostname: "webserver"
    restart: always
    environment:
      - EXECUTOR=Celery
    ports:
      - "8080:8080"
    depends_on:
      - rabbit1
    command: webserver
  scheduler:
    build: "airflow/"
    hostname: "scheduler"
    restart: always
    environment:
      - EXECUTOR=Celery
    depends_on:
      - webserver
      - flower
      - worker
    command: scheduler
  worker:
    build: "airflow/"
    hostname: "worker"
    restart: always
    depends_on:
      - webserver
    environment:
      - EXECUTOR=Celery
    command: worker
  flower:
    build: "airflow/"
    hostname: "flower"
    restart: always
    environment:
      - EXECUTOR=Celery
    ports:
      - "5555:5555"
    depends_on:
      - rabbit1
      - webserver
      - worker
    command: flower
I am able to build the images using Docker Compose. However, I am not able to connect my airflow scheduler to RabbitMQ. I am getting the following error:
consumer: Cannot connect to amqp://user:**@localhost:5672//: [Errno 111] Connection refused.
I have tried using both 127.0.0.1 and localhost.
What am I doing wrong?
From within your airflow containers, you should be able to connect to the service rabbit1. So all you need to do is change amqp://user:**@localhost:5672// to amqp://user:**@rabbit1:5672// and it should work.
Docker Compose creates a default network and attaches services that do not explicitly define a network to it.
You do not need to expose the 5672 and 15672 ports on rabbit1 unless you want to be able to access it from outside the application.
Also, it is generally not recommended to build images inside docker-compose.
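A sketch of the matching airflow.cfg entries, keeping the keys and credentials used in the question (the exact section layout of your airflow.cfg is an assumption):

# airflow.cfg - point Celery at the rabbit1 service name instead of 127.0.0.1
broker_url = amqp://user:password@rabbit1:5672/
celery_result_backend = amqp://user:password@rabbit1:5672/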
I solved this issue by installing the RabbitMQ server on my system with the command sudo apt install rabbitmq-server.
