We are using Testcontainers to spin up Kafka containers for testing an app that creates a Kafka Streams StateStore and a Kafka consumer to process some messages. The test itself passes and we can both produce and consume messages, but these tests continuously print annoying WARN messages:
(kafka-admin-client-thread)([]) WARN - NetworkClient - [AdminClient clientId=test_clientId] Connection to node 1 (localhost/127.0.0.1:55304) could not be established. Broker may not be available.
It seems the AdminClient is scheduled to poll every second, and this creates substantial log files.
With DEBUG logging enabled, I observe the following errors:
[2022-09-16T22:48:11,747Z](test_x-GlobalStreamThread)([]) DEBUG - NetworkClient - [Consumer clientId=test_x-global-consumer, groupId=null] Node 1 disconnected.
[2022-09-16T22:48:11,747Z](test_x-GlobalStreamThread)([]) WARN - NetworkClient - [Consumer clientId=test_x-global-consumer, groupId=null] Connection to node 1 (localhost/127.0.0.1:55304) could not be established. Broker may not be available.
[2022-09-16T22:48:11,747Z](test_x-GlobalStreamThread)([]) DEBUG - ConsumerNetworkClient - [Consumer clientId=test_x-global-consumer, groupId=null] Cancelled request with header RequestHeader(apiKey=FETCH, apiVersion=11, clientId=test_x-global-consumer, correlationId=11) due to node 1 being disconnected
[2022-09-16T22:48:11,747Z](test_x-GlobalStreamThread)([]) INFO - FetchSessionHandler - [Consumer clientId=test_x-global-consumer, groupId=null] Error sending fetch request (sessionId=1421844635, epoch=INITIAL) to node 1:
org.apache.kafka.common.errors.DisconnectException: null
[2022-09-16T22:48:11,792Z](kafka-producer-network-thread | producer-1)([]) DEBUG - NetworkClient - [Producer clientId=producer-1] Initialize connection to node localhost:55304 (id: 1 rack: null) for sending metadata request
[2022-09-16T22:48:11,792Z](kafka-admin-client-thread | adminclient-1)([]) DEBUG - NetworkClient - [AdminClient clientId=adminclient-1] Initiating connection to node localhost:55304 (id: 1 rack: null) using address localhost/127.0.0.1
[2022-09-16T22:48:11,792Z](kafka-producer-network-thread | producer-1)([]) DEBUG - NetworkClient - [Producer clientId=producer-1] Initiating connection to node localhost:55304 (id: 1 rack: null) using address localhost/127.0.0.1
[2022-09-16T22:48:11,793Z](kafka-producer-network-thread | producer-1)([]) DEBUG - Selector - [Producer clientId=producer-1] Connection with localhost/127.0.0.1 disconnected
java.net.ConnectException: Connection refused
at java.base/sun.nio.ch.Net.pollConnect(Native Method)
at java.base/sun.nio.ch.Net.pollConnectNow(Net.java:672)
at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:946)
at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50)
at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:219)
at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:530)
at org.apache.kafka.common.network.Selector.poll(Selector.java:485)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:544)
at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:325)
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:240)
at java.base/java.lang.Thread.run(Thread.java:833)
[2022-09-16T22:48:11,793Z](kafka-admin-client-thread | adminclient-1)([]) DEBUG - Selector - [AdminClient clientId=adminclient-1] Connection with localhost/127.0.0.1 disconnected
java.net.ConnectException: Connection refused
at java.base/sun.nio.ch.Net.pollConnect(Native Method)
at java.base/sun.nio.ch.Net.pollConnectNow(Net.java:672)
at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:946)
at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50)
at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:219)
at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:530)
at org.apache.kafka.common.network.Selector.poll(Selector.java:485)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:544)
at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.processRequests(KafkaAdminClient.java:1293)
at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1224)
at java.base/java.lang.Thread.run(Thread.java:833)
[2022-09-16T22:48:11,793Z](kafka-producer-network-thread | producer-1)([]) DEBUG - NetworkClient - [Producer clientId=producer-1] Node 1 disconnected.
[2022-09-16T22:48:11,793Z](kafka-admin-client-thread | adminclient-1)([]) DEBUG - NetworkClient - [AdminClient clientId=adminclient-1] Node 1 disconnected.
Similar logs also come from the GlobalStreamThread:
[2022-09-16T22:48:11,746Z](test_x-GlobalStreamThread)([]) DEBUG - NetworkClient - [Consumer clientId=test_x-global-consumer, groupId=null] Initiating connection to node localhost:55304 (id: 1 rack: null) using address localhost/127.0.0.1
[2022-09-16T22:48:11,747Z](test_x-GlobalStreamThread)([]) DEBUG - Selector - [Consumer clientId=test_x-global-consumer, groupId=null] Connection with localhost/127.0.0.1 disconnected
java.net.ConnectException: Connection refused
at java.base/sun.nio.ch.Net.pollConnect(Native Method)
at java.base/sun.nio.ch.Net.pollConnectNow(Net.java:672)
at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:946)
at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50)
at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:219)
at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:530)
at org.apache.kafka.common.network.Selector.poll(Selector.java:485)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:544)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:265)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:236)
at org.apache.kafka.clients.consumer.KafkaConsumer.pollForFetches(KafkaConsumer.java:1296)
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1237)
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1210)
at org.apache.kafka.streams.processor.internals.GlobalStreamThread$StateConsumer.pollAndUpdate(GlobalStreamThread.java:237)
at org.apache.kafka.streams.processor.internals.GlobalStreamThread.run(GlobalStreamThread.java:284)
The Docker port mapping looks like this:
0.0.0.0:55306->2181/tcp, 0.0.0.0:55305->9092/tcp, 0.0.0.0:55304->9093/tcp
The Kafka Testcontainer is started as:
KafkaContainer(DockerImageName.parse("confluentinc/cp-kafka:6.2.1"))
.withNetwork(Network.SHARED)
Testcontainers configures KAFKA_LISTENERS here:
https://github.com/testcontainers/testcontainers-java/blob/master/modules/kafka/src/main/java/org/testcontainers/containers/KafkaContainer.java#L49
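For reference, a minimal sketch of how the container and the clients under test would typically be wired together, assuming the bootstrap address is read from the running container via getBootstrapServers() (the application id, group id, and class name below are hypothetical):

import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.streams.StreamsConfig;
import org.testcontainers.containers.KafkaContainer;
import org.testcontainers.containers.Network;
import org.testcontainers.utility.DockerImageName;

public class KafkaTestWiring {

    public static void main(String[] args) {
        // Start the broker; the host port mapped to 9093 (55304 in the output above)
        // is picked at random for every test run.
        KafkaContainer kafka = new KafkaContainer(DockerImageName.parse("confluentinc/cp-kafka:6.2.1"))
                .withNetwork(Network.SHARED);
        kafka.start();

        // Returns something like PLAINTEXT://localhost:<mapped 9093 port>; it is only
        // valid after start() and must be passed to every client (streams, consumer, admin).
        String bootstrapServers = kafka.getBootstrapServers();

        Properties streamsProps = new Properties();
        streamsProps.put(StreamsConfig.APPLICATION_ID_CONFIG, "test_x");            // hypothetical
        streamsProps.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);

        Properties consumerProps = new Properties();
        consumerProps.put(ConsumerConfig.GROUP_ID_CONFIG, "test-group");            // hypothetical
        consumerProps.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);

        // ... build the topology / consumer with these properties, run the test, then kafka.stop() ...
    }
}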
Could you help me understand what is happening here, and how to provide the correct port mappings via Testcontainers so that these threads work correctly?
I'm trying to migrate a 3-node ZooKeeper ensemble from VMs to a Kubernetes cluster without downtime.
I know there are a lot of blog posts and articles on how to migrate ZooKeeper without downtime (VMs to VMs, bare metal to VMs, and so on), but I couldn't find one that covers migrating to Kubernetes without downtime.
This is the config on all zk nodes (zoo.cfg):
autopurge.purgeInterval=1
initLimit=10
syncLimit=5
autopurge.snapRetainCount=5
snapCount=5000
4lw.commands.whitelist=*
tickTime=2000
dataDir=/var/opt/zookeeper/data/data
admin.serverPort=8080
reconfigEnabled=true
admin.enableServer=True
standaloneEnabled=false
dynamicConfigFile=/opt/zookeeper/apache-zookeeper-3.7.1-bin/conf/zoo.cfg.dynamic
and /opt/zookeeper/current/conf/zoo.cfg.dynamic contains:
server.1=inzzk01:2888:3888;2181
server.2=inzzk02:2888:3888;2181
server.3=inzzk03:2888:3888;2181
Up to this point everything is fine and the cluster is formed.
I run ZooKeeper in Kubernetes as a StatefulSet based on this answer (by the way, a 3-pod cluster created this way works as expected on its own). So I scrap everything on Kubernetes to start from a clean cluster and add the following to the config on the VMs, then restart each node:
server.4=10.100.102.106:30888:31888;30181
server.5=10.100.102.232:30889:31889;30182
The two IP addresses above are valid Kubernetes node IP addresses, and the ports are correct as well.
In the logs everything looks normal; the warnings about servers 4 and 5 are expected at this point, since nothing is listening on the Kubernetes side yet:
2023-02-17 13:45:30,107 [myid:1] - WARN [QuorumConnectionThread-[myid=1]-3:QuorumCnxManager#401] - Cannot open channel to 4 at election address /10.100.102.106:31888
java.net.ConnectException: Connection refused (Connection refused)
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:607)
at org.apache.zookeeper.server.quorum.QuorumCnxManager.initiateConnection(QuorumCnxManager.java:384)
at org.apache.zookeeper.server.quorum.QuorumCnxManager$QuorumConnectionReqThread.run(QuorumCnxManager.java:458)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:750)
2023-02-17 13:45:30,113 [myid:1] - WARN [QuorumConnectionThread-[myid=1]-4:QuorumCnxManager#401] - Cannot open channel to 5 at election address /10.100.102.232:31889
java.net.ConnectException: Connection refused (Connection refused)
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:607)
at org.apache.zookeeper.server.quorum.QuorumCnxManager.initiateConnection(QuorumCnxManager.java:384)
at org.apache.zookeeper.server.quorum.QuorumCnxManager$QuorumConnectionReqThread.run(QuorumCnxManager.java:458)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:750)
I tried all types of Services (ClusterIP, headless and non-headless, LoadBalancer and NodePort). In the end I figured the simplest way to go is no Service at all, plus hostNetwork: true on the StatefulSet. This way the ports are bound directly on the Kubernetes nodes, so there is no proxy/SNAT/DNAT/xNAT :) and I can target them directly. Again, not recommended, but it works for the sake of this example.
kubectl -n infraservices get all
NAME READY STATUS RESTARTS AGE
pod/zk-0 0/1 Running 1 (65s ago) 2m45s
In the logs of the pod:
2023-02-17 14:00:48,308 [myid:4] - INFO [WorkerReceiver[myid=4]:FastLeaderElection$Messenger$WorkerReceiver#390] - Notification: my state:LOOKING; n.sid:4, n.state:LOOKING, n.leader:4, n.round:0x1, n.peerEpoch:0x0, n.zxid:0x0, message format version:0x2, n.config version:0x0
2023-02-17 14:00:48,315 [myid:4] - INFO [WorkerReceiver[myid=4]:FastLeaderElection$Messenger$WorkerReceiver#308] - 4 Received version: 1600000000 my version: 0
2023-02-17 14:00:48,315 [myid:4] - INFO [WorkerReceiver[myid=4]:FastLeaderElection$Messenger$WorkerReceiver#316] - restarting leader election
2023-02-17 14:00:48,315 [myid:4] - WARN [RecvWorker:2:QuorumCnxManager$RecvWorker#1408] - Interrupting SendWorker thread from RecvWorker. sid: 2. myId: 4
2023-02-17 14:00:48,393 [myid:4] - WARN [RecvWorker:3:QuorumCnxManager$RecvWorker#1408] - Interrupting SendWorker thread from RecvWorker. sid: 3. myId: 4
2023-02-17 14:00:48,394 [myid:4] - INFO [QuorumPeerListener:QuorumCnxManager$Listener#985] - Leaving listener
2023-02-17 14:00:48,395 [myid:4] - WARN [SendWorker:2:QuorumCnxManager$SendWorker#1288] - Interrupted while waiting for message on queue
java.lang.InterruptedException
at java.base/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(Unknown Source)
at java.base/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(Unknown Source)
at org.apache.zookeeper.util.CircularBlockingQueue.poll(CircularBlockingQueue.java:105)
at org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:1453)
at org.apache.zookeeper.server.quorum.QuorumCnxManager.access$900(QuorumCnxManager.java:99)
at org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:1277)
2023-02-17 14:00:48,395 [myid:4] - WARN [RecvWorker:1:QuorumCnxManager$RecvWorker#1402] - Connection broken for id 1, my id = 4
java.net.SocketException: Socket closed
at java.base/java.net.SocketInputStream.socketRead0(Native Method)
at java.base/java.net.SocketInputStream.socketRead(Unknown Source)
at java.base/java.net.SocketInputStream.read(Unknown Source)
at java.base/java.net.SocketInputStream.read(Unknown Source)
at java.base/java.io.BufferedInputStream.fill(Unknown Source)
at java.base/java.io.BufferedInputStream.read(Unknown Source)
at java.base/java.io.DataInputStream.readInt(Unknown Source)
at org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:1390)
2023-02-17 14:00:48,395 [myid:4] - WARN [SendWorker:3:QuorumCnxManager$SendWorker#1288] - Interrupted while waiting for message on queue
java.lang.InterruptedException
at java.base/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(Unknown Source)
at java.base/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(Unknown Source)
at org.apache.zookeeper.util.CircularBlockingQueue.poll(CircularBlockingQueue.java:105)
at org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:1453)
at org.apache.zookeeper.server.quorum.QuorumCnxManager.access$900(QuorumCnxManager.java:99)
at org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:1277)
2023-02-17 14:00:48,395 [myid:4] - INFO [WorkerReceiver[myid=4]:FastLeaderElection$Messenger$WorkerReceiver#472] - WorkerReceiver is down
2023-02-17 14:00:48,395 [myid:4] - WARN [SendWorker:1:QuorumCnxManager$SendWorker#1288] - Interrupted while waiting for message on queue
java.lang.InterruptedException
On the VM the logs look like this:
2023-02-17 14:03:42,165 [myid:1] - INFO [ListenerHandler-inzzk01/10.100.100.128:3888:QuorumCnxManager$Listener$ListenerHandler#1076] - Received connection request from /10.100.102.106:51674
2023-02-17 14:03:42,167 [myid:1] - WARN [RecvWorker:4:QuorumCnxManager$RecvWorker#1402] - Connection broken for id 4, my id = 1
java.io.EOFException
at java.io.DataInputStream.readInt(DataInputStream.java:392)
at org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:1390)
2023-02-17 14:03:42,167 [myid:1] - WARN [RecvWorker:4:QuorumCnxManager$RecvWorker#1408] - Interrupting SendWorker thread from RecvWorker. sid: 4. myId: 1
2023-02-17 14:03:42,167 [myid:1] - WARN [SendWorker:4:QuorumCnxManager$SendWorker#1288] - Interrupted while waiting for message on queue
java.lang.InterruptedException
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2014)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2088)
at org.apache.zookeeper.util.CircularBlockingQueue.poll(CircularBlockingQueue.java:105)
at org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:1453)
at org.apache.zookeeper.server.quorum.QuorumCnxManager.access$900(QuorumCnxManager.java:99)
at org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:1277)
2023-02-17 14:03:42,167 [myid:1] - WARN [SendWorker:4:QuorumCnxManager$SendWorker#1300] - Send worker leaving thread id 4 my id = 1
I am able to connect from the pod to the cluster with zkCli.sh
root#inccl02az12-23rpb-mvnnw:/apache-zookeeper-3.7.1-bin# bin/zkCli.sh -timeout 3000 -server inzzk01:2181
[zk: inzzk01:2181(CONNECTED) 6] get /zookeeper/config
server.1=inzzk01:2888:3888:participant;0.0.0.0:2181
server.2=inzzk02:2888:3888:participant;0.0.0.0:2181
server.3=inzzk03:2888:3888:participant;0.0.0.0:2181
version=1600000000
[zk: inzzk01:2181(CONNECTED) 7]
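As a side note, since reconfigEnabled=true is already set on the VMs, the membership change could also be driven through ZooKeeper's dynamic reconfiguration API instead of hand-editing zoo.cfg. A minimal sketch, assuming the session is allowed to run reconfig (which by default requires a superuser digest to be configured; the credentials below are placeholders):

import org.apache.zookeeper.admin.ZooKeeperAdmin;
import org.apache.zookeeper.data.Stat;

public class AddK8sMember {

    public static void main(String[] args) throws Exception {
        // Connect to any reachable member of the existing VM ensemble.
        ZooKeeperAdmin admin = new ZooKeeperAdmin("inzzk01:2181", 3000, event -> { });

        // Reconfig is restricted by default; this assumes a superuser digest has been set up
        // via zookeeper.DigestAuthenticationProvider.superDigest (placeholder credentials).
        admin.addAuthInfo("digest", "super:changeme".getBytes());

        // Add the new Kubernetes-hosted member, using the same server spec appended to zoo.cfg above.
        String joining = "server.4=10.100.102.106:30888:31888;30181";
        Stat stat = new Stat();
        byte[] newConfig = admin.reconfigure(joining, null, null, -1, stat);
        System.out.println(new String(newConfig));

        admin.close();
    }
}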
So how can I connect at least one ZooKeeper node running as a pod in Kubernetes to an existing cluster outside of Kubernetes?
NATS Server error:
node2-c1_1 | [1] 2021/12/13 15:35:15.464884 [INF] Creating MQTT streams/consumers with replicas 1 for account "ADKPM577BVYF7QXUNN6YOFEO4CJUQOPTD3WDEIFNXLW264566UABY3QG"
node2-c1_1 | [1] 2021/12/13 15:35:19.470607 [WRN] 172.18.0.1:60930 - mid:36 - "mqttClient-MQTT_3_1_1-86fbcb3e-3d62-4bef-9ac3-684f54945852" - Readloop processing time: 4.0328908s
node2-c1_1 | [1] 2021/12/13 15:35:19.470719 [ERR] 172.18.0.1:60930 - mid:36 - "mqttClient-MQTT_3_1_1-86fbcb3e-3d62-4bef-9ac3-684f54945852" - unable to connect: create messages stream for account "ADKPM577BVYF7QXUNN6YOFEO4CJUQOPTD3WDEIFNXLW264566UABY3QG": timeout for request type "SC" on "$JS.API.STREAM.CREATE.$MQTT_msgs" (reply="$MQTT.JSA.n50ntl9W.SC.FocGVk4ZKwLQbRYWqJEJUk")
node2-c1_1 | [1] 2021/12/13 15:35:19.470741 [DBG] 172.18.0.1:60930 - mid:36 - "mqttClient-MQTT_3_1_1-86fbcb3e-3d62-4bef-9ac3-684f54945852" - Client connection closed: Protocol Violation
Client Error:
CLIENT mqttClient-MQTT_3_1_1-86fbcb3e-3d62-4bef-9ac3-684f54945852: sending CONNECT with Mqtt3Connect: MqttConnect{keepAlive=60, cleanSession=true, simpleAuth=MqttSimpleAuth{username and password}}
PUBLISH: com.hivemq.client.mqtt.exceptions.ConnectionClosedException: Server closed connection without DISCONNECT.
at com.hivemq.client.internal.mqtt.MqttBlockingClient.connect(MqttBlockingClient.java:91)
at com.hivemq.client.internal.mqtt.mqtt3.Mqtt3BlockingClientView.connect(Mqtt3BlockingClientView.java:69)
at com.hivemq.cli.mqtt.MqttClientExecutor.mqtt3Connect(MqttClientExecutor.java:104)
at com.hivemq.cli.mqtt.AbstractMqttClientExecutor.connectMqtt3Client(AbstractMqttClientExecutor.java:266)
at com.hivemq.cli.mqtt.AbstractMqttClientExecutor.connect(AbstractMqttClientExecutor.java:203)
at com.hivemq.cli.mqtt.MqttClientExecutor.connect(MqttClientExecutor.java:67)
at com.hivemq.cli.mqtt.AbstractMqttClientExecutor.getMqttClientFromCacheOrConnect(AbstractMqttClientExecutor.java:434)
at com.hivemq.cli.mqtt.AbstractMqttClientExecutor.subscribe(AbstractMqttClientExecutor.java:84)
at com.hivemq.cli.mqtt.MqttClientExecutor.subscribe(MqttClientExecutor.java:67)
at com.hivemq.cli.commands.cli.SubscribeCommand.run(SubscribeCommand.java:114)
at picocli.CommandLine.executeUserObject(CommandLine.java:1729)
at picocli.CommandLine.access$900(CommandLine.java:145)
at picocli.CommandLine$RunLast.handle(CommandLine.java:2101)
at picocli.CommandLine$RunLast.handle(CommandLine.java:2068)
at picocli.CommandLine$AbstractParseResultHandler.execute(CommandLine.java:1935)
at picocli.CommandLine.execute(CommandLine.java:1864)
at com.hivemq.cli.MqttCLIMain.main(MqttCLIMain.java:73)
PUBLISH: Server closed connection without DISCONNECT.
I have the following setup:
Keycloak running in docker, public interface mapped to 127.0.0.1:8180, internal keycloak-n:8080
Quarkus running in docker, public interface mapped to 127.0.0.1:8080
Both run in the same docker network and can communicate
An external AuthzClient (not in Docker) that uses the token to communicate with Quarkus
Everything works if the client and Quarkus are outside of Docker and communicate with Keycloak via the same interface. As soon as Quarkus runs in Docker, I can't get it to work.
I've tried many changes so far. On Keycloak I set the frontendUrl with /subsystem=keycloak-server/spi=hostname/provider=default:write-attribute(name=properties.frontendUrl,value="http://127.0.0.1:8180/auth")
My current quarkus config (oidc part) looks like:
# OIDC Configuration
quarkus.oidc.auth-server-url=http://keycloak-n:8080/auth/realms/quarkus
quarkus.oidc.client-id=backend-service
quarkus.oidc.credentials.secret=85174256-b231-4385-9fa9-257dd0d27bf0
quarkus.oidc.token.lifespan-grace=20
quarkus.oidc.introspection-path=.well-known/openid-configuration
quarkus.oidc.jwks-path=.well-known/jwks.json
quarkus.oidc.token.issuer=http://127.0.0.1:8180/auth/realms/quarkus
# Enable Policy Enforcement
quarkus.keycloak.policy-enforcer.enable=true
If I remove the token issuer, Vert.x fails with an issuer validation error. With the current configuration the initial authentication works, but then I get a Connection refused (Connection refused) from the PolicyEnforcer, because it tries to communicate with 127.0.0.1. The stack trace is:
2020-08-03 05:43:27,933 DEBUG [org.apa.htt.imp.con.tsc.ConnPoolByRoute] (executor-thread-1) Releasing connection [{}->http://keycloak-n:8080][null]
2020-08-03 05:43:27,933 DEBUG [org.apa.htt.imp.con.tsc.ConnPoolByRoute] (executor-thread-1) Pooling connection [{}->http://keycloak-n:8080][null]; keep alive indefinitely
2020-08-03 05:43:27,933 DEBUG [org.apa.htt.imp.con.tsc.ConnPoolByRoute] (executor-thread-1) Notifying no-one, there are no waiting threads
2020-08-03 05:43:27,944 DEBUG [org.apa.htt.imp.con.tsc.ThreadSafeClientConnManager] (executor-thread-1) Get connection: {}->http://127.0.0.1:8180, timeout = 0
2020-08-03 05:43:27,944 DEBUG [org.apa.htt.imp.con.tsc.ConnPoolByRoute] (executor-thread-1) [{}->http://127.0.0.1:8180] total kept alive: 1, total issued: 0, total allocated: 1 out of 20
2020-08-03 05:43:27,944 DEBUG [org.apa.htt.imp.con.tsc.ConnPoolByRoute] (executor-thread-1) No free connections [{}->http://127.0.0.1:8180][null]
2020-08-03 05:43:27,944 DEBUG [org.apa.htt.imp.con.tsc.ConnPoolByRoute] (executor-thread-1) Available capacity: 20 out of 20 [{}->http://127.0.0.1:8180][null]
2020-08-03 05:43:27,944 DEBUG [org.apa.htt.imp.con.tsc.ConnPoolByRoute] (executor-thread-1) Creating new connection [{}->http://127.0.0.1:8180]
2020-08-03 05:43:27,944 DEBUG [org.apa.htt.imp.con.DefaultClientConnectionOperator] (executor-thread-1) Connecting to 127.0.0.1:8180
2020-08-03 05:43:27,945 DEBUG [org.apa.htt.imp.con.DefaultClientConnection] (executor-thread-1) Connection org.apache.http.impl.conn.DefaultClientConnection#6ba49b73 closed
2020-08-03 05:43:27,946 DEBUG [org.apa.htt.imp.con.DefaultClientConnection] (executor-thread-1) Connection org.apache.http.impl.conn.DefaultClientConnection#6ba49b73 shut down
2020-08-03 05:43:27,946 DEBUG [org.apa.htt.imp.con.tsc.ThreadSafeClientConnManager] (executor-thread-1) Released connection is not reusable.
2020-08-03 05:43:27,946 DEBUG [org.apa.htt.imp.con.tsc.ConnPoolByRoute] (executor-thread-1) Releasing connection [{}->http://127.0.0.1:8180][null]
2020-08-03 05:43:27,946 DEBUG [org.apa.htt.imp.con.DefaultClientConnection] (executor-thread-1) Connection org.apache.http.impl.conn.DefaultClientConnection#6ba49b73 closed
2020-08-03 05:43:27,946 DEBUG [org.apa.htt.imp.con.tsc.ConnPoolByRoute] (executor-thread-1) Notifying no-one, there are no waiting threads
2020-08-03 05:43:27,947 ERROR [org.key.ada.aut.PolicyEnforcer] (executor-thread-1) Could not lazy load resource with path [/hello/find/1] from server: java.lang.RuntimeException: Could not find resource
at org.keycloak.authorization.client.util.Throwables.retryAndWrapExceptionIfNecessary(Throwables.java:91)
at org.keycloak.authorization.client.resource.ProtectedResource.find(ProtectedResource.java:232)
at org.keycloak.authorization.client.resource.ProtectedResource.findByMatchingUri(ProtectedResource.java:291)
at org.keycloak.adapters.authorization.PolicyEnforcer$PathConfigMatcher.matches(PolicyEnforcer.java:268)
at org.keycloak.adapters.authorization.AbstractPolicyEnforcer.getPathConfig(AbstractPolicyEnforcer.java:351)
at org.keycloak.adapters.authorization.AbstractPolicyEnforcer.authorize(AbstractPolicyEnforcer.java:72)
at io.quarkus.keycloak.pep.runtime.KeycloakPolicyEnforcerAuthorizer.apply(KeycloakPolicyEnforcerAuthorizer.java:45)
at io.quarkus.keycloak.pep.runtime.KeycloakPolicyEnforcerAuthorizer.apply(KeycloakPolicyEnforcerAuthorizer.java:29)
at io.quarkus.vertx.http.runtime.security.HttpAuthorizer$1$1$1.run(HttpAuthorizer.java:68)
at org.jboss.threads.ContextClassLoaderSavingRunnable.run(ContextClassLoaderSavingRunnable.java:35)
at org.jboss.threads.EnhancedQueueExecutor.safeRun(EnhancedQueueExecutor.java:2046)
at org.jboss.threads.EnhancedQueueExecutor$ThreadBody.doRunTask(EnhancedQueueExecutor.java:1578)
at org.jboss.threads.EnhancedQueueExecutor$ThreadBody.run(EnhancedQueueExecutor.java:1452)
at org.jboss.threads.DelegatingRunnable.run(DelegatingRunnable.java:29)
at org.jboss.threads.ThreadLocalResettingRunnable.run(ThreadLocalResettingRunnable.java:29)
at java.base/java.lang.Thread.run(Thread.java:834)
at org.jboss.threads.JBossThread.run(JBossThread.java:479)
Caused by: java.lang.RuntimeException: Error executing http method [GET]. Response : null
at org.keycloak.authorization.client.util.HttpMethod.execute(HttpMethod.java:106)
at org.keycloak.authorization.client.util.HttpMethodResponse$3.execute(HttpMethodResponse.java:68)
at org.keycloak.authorization.client.resource.ProtectedResource$5.call(ProtectedResource.java:226)
at org.keycloak.authorization.client.resource.ProtectedResource$5.call(ProtectedResource.java:222)
at org.keycloak.authorization.client.resource.ProtectedResource.find(ProtectedResource.java:230)
... 15 more
Caused by: java.net.ConnectException: Connection refused (Connection refused)
at java.base/java.net.PlainSocketImpl.socketConnect(Native Method)
at java.base/java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:399)
at java.base/java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:242)
at java.base/java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:224)
at java.base/java.net.SocksSocketImpl.connect(SocksSocketImpl.java:403)
at java.base/java.net.Socket.connect(Socket.java:609)
at org.apache.http.conn.scheme.PlainSocketFactory.connectSocket(PlainSocketFactory.java:121)
at org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:180)
at org.apache.http.impl.conn.AbstractPoolEntry.open(AbstractPoolEntry.java:144)
at org.apache.http.impl.conn.AbstractPooledConnAdapter.open(AbstractPooledConnAdapter.java:134)
at org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:605)
at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:440)
at org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:835)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:108)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:56)
at org.keycloak.authorization.client.util.HttpMethod.execute(HttpMethod.java:84)
... 19 more
2020-08-03 05:43:27,951 DEBUG [org.key.ada.aut.AbstractPolicyEnforcer] (executor-thread-1) Checking permissions for path [http://127.0.0.1:8080/hello/find/1] with config [null].
2020-08-03 05:43:27,951 DEBUG [org.key.ada.aut.AbstractPolicyEnforcer] (executor-thread-1) Could not find a configuration for path [/hello/find/1]
Is there a real example of how to configure such a scenario? I already tried setting the frontendUrl to the internal address; that works for the runtime, but then the Keycloak web frontend is no longer accessible.
UPDATE:
The front-end code (abbreviated):
java.io.InputStream stream = Thread.currentThread().getContextClassLoader()
.getResourceAsStream("META-INF/keycloak.json");
auth=AuthzClient.create(stream);
response = auth.obtainAccessToken(user, password);
final String accessToken = response.getToken();
...
requestContext.getHeaders().add(HttpHeaders.AUTHORIZATION, AUTH_HEADER_PREFIX + accessToken);
...
and the config in keycloak.json is:
{
"realm": "quarkus",
"auth-server-url": "http://localhost:8180/auth/",
"ssl-required": "external",
"resource": "backend-service",
"verify-token-audience": true,
"credentials": {
"secret": "85174256-b231-4385-9fa9-257dd0d27bf0"
},
"confidential-port": 0,
"policy-enforcer": {}
}
Many thanks
So the following setup works for me:
frontendUrl: the external Docker host IP --> NOT localhost!
set via jboss-cli, e.g.:
/subsystem=keycloak-server/spi=hostname/provider=default:write-attribute(name=properties.frontendUrl,value="http://172.20.48.1:8180/auth")
##quarkus config
quarkus.oidc.auth-server-url=http://internal_keycloak_docker_IP:8080/auth/realms/quarkus
quarkus.oidc.token.issuer=http://external-docker-ip:8180/auth/realms/quarkus
##client json file
"auth-server-url": "http://external-docker-ip:8180/auth/"
Windows 10 setup:
ThingsBoard server running as a local service on Windows
thingsboard.yml MQTT parameters:
# MQTT server parameters
mqtt:
  bind_address: "${MQTT_BIND_ADDRESS:0.0.0.0}"
  bind_port: "${MQTT_BIND_PORT:1883}"
  adaptor: "${MQTT_ADAPTOR_NAME:JsonMqttAdaptor}"
  timeout: "${MQTT_TIMEOUT:10000}"
ThingsBoard gateway service running as a local service on Windows
tb-gateway.yml MQTT parameters:
mqtt:
  enabled: true
  configuration: mqtt-config.json
The MQTT configuration file is left at its defaults, shown below.
mqtt-config.json parameters:
"brokers": [
  {
    "host": "localhost",
    "port": 1883,
    "ssl": false,
    "retryInterval": 3000,
    "credentials": {
      "type": "anonymous"
    },
These are the only two services running on my laptop. I published an MQTT message as follows, per the docs:
mosquitto_pub -h localhost -p 1883 -u "XXXXXXXX" -t "sensors" -m '{"serialNumber":"TB-GW-SN-001","model":"TB-GW-T1000","temperature":35.2}'
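For reference, the equivalent publish from Java using the Eclipse Paho client (the same library the gateway itself uses) would look roughly like this; a minimal sketch that reuses the redacted username, topic, and payload from the command above:

import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttConnectOptions;
import org.eclipse.paho.client.mqttv3.MqttMessage;

public class PublishSensorReading {

    public static void main(String[] args) throws Exception {
        // Connect to the broker on localhost:1883, as configured in mqtt-config.json.
        MqttClient client = new MqttClient("tcp://localhost:1883", MqttClient.generateClientId());

        MqttConnectOptions options = new MqttConnectOptions();
        options.setUserName("XXXXXXXX"); // same redacted credential as in the mosquitto_pub call

        client.connect(options);

        String payload = "{\"serialNumber\":\"TB-GW-SN-001\",\"model\":\"TB-GW-T1000\",\"temperature\":35.2}";
        MqttMessage message = new MqttMessage(payload.getBytes());
        message.setQos(1);

        client.publish("sensors", message);

        client.disconnect();
        client.close();
    }
}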
I see errors in both logs.
thingsboard.log
2018-01-10 20:14:56,174 [nioEventLoopGroup-6-11] INFO o.t.s.t.mqtt.MqttTransportHandler - [mqtt815] Processing connect msg for client: efd91958-ba8f-480a-9a56-ad9d5588c8c7!
2018-01-10 20:14:56,177 [nioEventLoopGroup-6-12] INFO o.t.s.t.mqtt.MqttTransportHandler - [127.0.0.1:51192] Invalid message received
2018-01-10 20:14:59,183 [nioEventLoopGroup-6-1] INFO o.t.s.t.mqtt.MqttTransportHandler - [mqtt817] Processing connect msg for client: efd91958-ba8f-480a-9a56-ad9d5588c8c7!
2018-01-10 20:14:59,188 [nioEventLoopGroup-6-2] INFO o.t.s.t.mqtt.MqttTransportHandler - [127.0.0.1:51194] Invalid message received
2018-01-10 20:15:02,193 [nioEventLoopGroup-6-3] INFO o.t.s.t.mqtt.MqttTransportHandler - [mqtt819] Processing connect msg for client: efd91958-ba8f-480a-9a56-ad9d5588c8c7!
2018-01-10 20:15:02,197 [nioEventLoopGroup-6-4] INFO o.t.s.t.mqtt.MqttTransportHandler - [127.0.0.1:51196] Invalid message received
The error in the ThingsBoard gateway log is rather strange.
tb-gateway.log
2018-01-10 20:14:59,191 [main] WARN o.t.g.e.m.client.MqttBrokerMonitor - [localhost:1883] MQTT broker connection failed!
org.eclipse.paho.client.mqttv3.MqttException: Connection lost
at org.eclipse.paho.client.mqttv3.internal.CommsReceiver.run(CommsReceiver.java:164)
at java.lang.Thread.run(Unknown Source)
Caused by: java.io.EOFException: null
at java.io.DataInputStream.readByte(Unknown Source)
at org.eclipse.paho.client.mqttv3.internal.wire.MqttInputStream.readMqttWireMessage(MqttInputStream.java:92)
at org.eclipse.paho.client.mqttv3.internal.CommsReceiver.run(CommsReceiver.java:116)
... 1 common frames omitted
2018-01-10 20:15:02,198 [main] WARN o.t.g.e.m.client.MqttBrokerMonitor - [localhost:1883] MQTT broker connection failed!
org.eclipse.paho.client.mqttv3.MqttException: Connection lost
at org.eclipse.paho.client.mqttv3.internal.CommsReceiver.run(CommsReceiver.java:164)
at java.lang.Thread.run(Unknown Source)
Caused by: java.io.EOFException: null
at java.io.DataInputStream.readByte(Unknown Source)
at org.eclipse.paho.client.mqttv3.internal.wire.MqttInputStream.readMqttWireMessage(MqttInputStream.java:92)
at org.eclipse.paho.client.mqttv3.internal.CommsReceiver.run(CommsReceiver.java:116)
... 1 common frames omitted
What am I doing wrong? I have Mosquitto installed locally, hence I used the mosquitto_pub tool to publish the messages.
Any clues, folks?
All is well.
As https://stackoverflow.com/users/3203737/andrew pointed out, I wasn't running Mosquitto and the ports were the same (both the ThingsBoard MQTT transport and the broker the gateway expected were on 1883). I cleaned up the mess and the devices were registered perfectly in the dashboard via the TB gateway.
I have been using Flume for a while now. I have an agent and a collector running on the same machine.
Configuration
agent: exec("/usr/bin/tail -n +0 -F /path/to/file") | agentE2ESink("hostname", 35855)
collector: collectorSource(35855) | collector(10000) { collectorSink("/hdfs/path/to/sink","name") }
I'm facing issues on the agent node:
2012-06-04 19:13:33,625 [naive file wal consumer-27] INFO debug.InsistentOpenDecorator: open attempt 0 failed, backoff (1000ms): Failed to open thrift event sink to hostname:35855 : java.net.ConnectException: Connection refused
2012-06-04 19:13:34,625 [logicalNode hostname-19] ERROR connector.DirectDriver: Expected ACTIVE but timed out in state OPENING
2012-06-04 19:13:34,632 [naive file wal consumer-27] INFO debug.InsistentOpenDecorator: open attempt 1 failed, backoff (2000ms): Failed to open thrift event sink to hostname:35855 : java.net.ConnectException: Connection refused
2012-06-04 19:13:36,635 [naive file wal consumer-27] INFO debug.InsistentOpenDecorator: open attempt 2 failed, backoff (4000ms): Failed to open thrift event sink to hostname:35855 : java.net.ConnectException: Connection refused
and then empty ACKs are sent continuously:
2012-06-04 19:19:56,960 [Roll-TriggerThread-0] INFO endtoend.AckListener$Empty: Empty Ack Listener began 20120604-191956958+0530.881565921235084.00000026
2012-06-04 19:20:07,043 [Roll-TriggerThread-0] INFO hdfs.SeqfileEventSink: closed /tmp/flume-user1/agent/hostname/writing/20120604-191956958+0530.881565921235084.00000026
I don't understand why the connection is refused. Are there any system-level changes that need to be made?
Note: the collector is listening on the port, but the agent is unable to send data through port 35855.
Can anyone help me with this problem?
Thanks
If you are running both the agent and the collector on the same box, you should be using localhost as the address.
agentE2ESink("localhost", 35855)