I want to use the Schema Registry docker image (owned by Confluent) with the open-source Kafka I installed locally on my PC.
I am using the following command to run the image:
docker run -p 8081:8081 \
-e SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS=PLAINTEXT://127.0.0.1:9092 \
-e SCHEMA_REGISTRY_LISTENERS=http://0.0.0.0:8081 \
-e SCHEMA_REGISTRY_DEBUG=true confluentinc/cp-schema-registry:latest
but I am getting the following connection errors:
[kafka-admin-client-thread | adminclient-1] WARN org.apache.kafka.clients.NetworkClient - [AdminClient clientId=adminclient-1] Connection to node -1 (/127.0.0.1:9092) could not be established. Broker may not be available.
[kafka-admin-client-thread | adminclient-1] WARN org.apache.kafka.clients.NetworkClient - [AdminClient clientId=adminclient-1] Connection to node -1 (/127.0.0.1:9092) could not be established. Broker may not be available.
[main] ERROR io.confluent.admin.utils.ClusterStatus - Error while getting broker list.
java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.TimeoutException: Timed out waiting for a node assignment.
at org.apache.kafka.common.internals.KafkaFutureImpl.wrapAndThrow(KafkaFutureImpl.java:45)
at org.apache.kafka.common.internals.KafkaFutureImpl.access$000(KafkaFutureImpl.java:32)
at org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:89)
at org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:260)
at io.confluent.admin.utils.ClusterStatus.isKafkaReady(ClusterStatus.java:149)
at io.confluent.admin.utils.cli.KafkaReadyCommand.main(KafkaReadyCommand.java:150)
Caused by: org.apache.kafka.common.errors.TimeoutException: Timed out waiting for a node assignment.
[main] INFO io.confluent.admin.utils.ClusterStatus - Expected 1 brokers but found only 0. Trying to query Kafka for metadata again ...
I have Kafka installed on my localhost.
Any idea how to solve this, please?
I have Kafka installed on my localhost
As commented, localhost is unclear when you're actually using multiple machines (one physical and at least one virtual).
You need to use host.docker.internal:9092 (when running Docker Desktop on Windows or Mac):
https://docs.docker.com/docker-for-windows/networking/
On a Linux host, you need to use host networking mode instead:
https://docs.docker.com/network/host/
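A minimal sketch of both variants, keeping the rest of the original command (this also assumes the broker advertises an address the container can reach):

# Docker Desktop (Windows/Mac): host.docker.internal resolves to the host machine
docker run -p 8081:8081 \
  -e SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS=PLAINTEXT://host.docker.internal:9092 \
  -e SCHEMA_REGISTRY_LISTENERS=http://0.0.0.0:8081 \
  -e SCHEMA_REGISTRY_DEBUG=true confluentinc/cp-schema-registry:latest

# Linux: with host networking, 127.0.0.1 inside the container is the host itself (-p is not needed)
docker run --network host \
  -e SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS=PLAINTEXT://127.0.0.1:9092 \
  -e SCHEMA_REGISTRY_LISTENERS=http://0.0.0.0:8081 \
  -e SCHEMA_REGISTRY_DEBUG=true confluentinc/cp-schema-registry:latest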
Although, realistically, running Kafka in a container as well would be simpler for connecting the two.
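A hedged sketch of that alternative using Confluent images on one user-defined Docker network (the container names, network name, and tags here are illustrative, not from the question):

docker network create kafka-net

docker run -d --name zookeeper --network kafka-net \
  -e ZOOKEEPER_CLIENT_PORT=2181 \
  confluentinc/cp-zookeeper:latest

docker run -d --name kafka --network kafka-net \
  -e KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181 \
  -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092 \
  -e KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1 \
  confluentinc/cp-kafka:latest

docker run -d --name schema-registry --network kafka-net -p 8081:8081 \
  -e SCHEMA_REGISTRY_HOST_NAME=schema-registry \
  -e SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS=PLAINTEXT://kafka:9092 \
  confluentinc/cp-schema-registry:latest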
I have a CapRover instance on a DigitalOcean droplet that I created. I want to use the CapRover instance to run the CapRover sample apps.
I opened the DigitalOcean droplet web console in order to run a CapRover instance.
I ran the following commands:
ufw allow 80,443,3000,996,7946,4789,2377/tcp; ufw allow 7946,4789,2377/udp;
and got this:
Skipping adding existing rule
Skipping adding existing rule (v6)
Skipping adding existing rule
Skipping adding existing rule (v6)
I then ran this:
docker run -p 80:80 -p 443:443 -p 3000:3000 -v /var/run/docker.sock:/var/run/docker.sock -v /captain:/captain caprover/caprover
I got this:
Unable to find image 'caprover/caprover:latest' locally
latest: Pulling from caprover/caprover
Digest: sha256:39c3f188a8f425775cfbcdc4125706cdf614cd38415244ccf967cd1a4e692b4f
Status: Downloaded newer image for caprover/caprover:latest
docker: Error response from daemon: driver failed programming external connectivity on endpoint priceless_sammet (9da9028cfc4873818f113458237ebd00f9c64fa648b853730a60b10bea39c720): Bind for 0.0.0.0:3000 failed: port is already allocated.
I tried changing the ports to:
docker run -p 81:81 -p 444:444 -p 3321:3321 -v /var/run/docker.sock:/var/run/docker.sock -v /captain:/captain caprover/caprover
and got this:
Captain Starting ...
Installing Captain Service ...
Installation of CapRover is starting...
For troubleshooting, please see: https://caprover.com/docs/troubleshooting.html
>>> Checking System Compatibility <<<
Docker Version passed.
Ubuntu detected.
X86 CPU detected.
Total RAM 1033 MB
Are your trying to run CapRover on a local machine or a machine without public IP?
In that case, you need to add this to your installation command:
-e MAIN_NODE_IP_ADDRESS='127.0.0.1'
Otherwise, if you are running CapRover on a VPS with public IP:
Your firewall may have been blocking an in-use port: 80
A simple solution on Ubuntu systems is to run "ufw disable" (security risk)
Or [recommended] just allowing necessary ports:
ufw allow 80,443,3000,996,7946,4789,2377/tcp; ufw allow 7946,4789,2377/udp;
See docs for more details on how to fix firewall issues
Finally, if you are an advanced user, and you want to bypass this check (NOT RECOMMENDED),
you can append the docker command with an addition flag: -e BY_PASS_PROXY_CHECK='TRUE'
Installation failed.
Error: Port seems to be closed: 80
at Request._callback (/usr/src/app/built/utils/CaptainInstaller.js:149:24)
at Request.self.callback (/usr/src/app/node_modules/request/request.js:185:22)
at Request.emit (events.js:400:28)
at Request.<anonymous> (/usr/src/app/node_modules/request/request.js:1154:10)
at Request.emit (events.js:400:28)
at IncomingMessage.<anonymous> (/usr/src/app/node_modules/request/request.js:1076:12)
at Object.onceWrapper (events.js:519:28)
at IncomingMessage.emit (events.js:412:35)
at endReadableNT (internal/streams/readable.js:1334:12)
at processTicksAndRejections (internal/process/task_queues.js:82:21)
How can I open ports 80, 443, and 3000 so that I can run the CapRover instance?
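A hedged starting point rather than a definitive fix: the first error ("Bind for 0.0.0.0:3000 failed: port is already allocated") means something on the droplet is already publishing port 3000, and after remapping to 81/444/3321 the installer still probes port 80, which the remapped container no longer serves. Standard Ubuntu/docker tooling can show what is holding the ports:

# List containers and their published ports; a previous CapRover attempt may still hold 3000
docker ps --format '{{.Names}}\t{{.Ports}}'

# Show host processes listening on 80, 443 and 3000
ss -tlnp | grep -E ':(80|443|3000) '

# Confirm the ufw rules are active
ufw status verbose

Stopping whatever holds port 3000 (e.g. docker rm -f <container>, where <container> is whatever the listing shows) and re-running the original command with the default 80/443/3000 mappings is likely a better path than remapping, since CapRover expects those ports.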
I'm trying to run a local kafka-connect cluster using docker-compose.
I need to connect to a remote database, and I'm also using a remote Kafka and Schema Registry.
I have enabled access to these remote resources from my machine.
To start the cluster, from my project folder in my Ubuntu WSL2 terminal, I'm running
docker build -t my-connect:1.0.0 .
docker-compose up
The application runs successfully, but when I try to create a new connector, it returns error 500 with a timeout.
My Dockerfile
FROM confluentinc/cp-kafka-connect-base:5.5.0
RUN cat /etc/confluent/docker/log4j.properties.template
ENV CONNECT_PLUGIN_PATH="/usr/share/java,/usr/share/confluent-hub-components"
ARG JDBC_DRIVER_DIR=/usr/share/java/kafka/
RUN confluent-hub install --no-prompt confluentinc/kafka-connect-jdbc:5.5.0 \
&& confluent-hub install --no-prompt confluentinc/connect-transforms:1.3.2
ADD java/kafka-connect-jdbc /usr/share/confluent-hub-components/confluentinc-kafka-connect-jdbc/lib/
COPY java/kafka-connect-jdbc/ojdbc8.jar /usr/share/confluent-hub-components/confluentinc-kafka-connect-jdbc/lib/
ENTRYPOINT ["sh","-c","export CONNECT_REST_ADVERTISED_HOST_NAME=$(hostname -I);/etc/confluent/docker/run"]
My docker-compose.yaml
services:
  connect:
    image: my-connect:1.0.0
    ports:
      - 8083:8083
    environment:
      - CONNECT_KEY_CONVERTER_SCHEMA_REGISTRY_URL=http://schema-registry:8081
      - CONNECT_KEY_CONVERTER=io.confluent.connect.avro.AvroConverter
      - CONNECT_VALUE_CONVERTER_SCHEMA_REGISTRY_URL=http://schema-registry:8081
      - CONNECT_BOOTSTRAP_SERVERS=broker1.intranet:9092
      - CONNECT_GROUP_ID=kafka-connect
      - CONNECT_INTERNAL_KEY_CONVERTER=org.apache.kafka.connect.json.JsonConverter
      - CONNECT_VALUE_CONVERTER=io.confluent.connect.avro.AvroConverter
      - CONNECT_INTERNAL_VALUE_CONVERTER=org.apache.kafka.connect.json.JsonConverter
      - CONNECT_OFFSET_STORAGE_TOPIC=kafka-connect.offset
      - CONNECT_CONFIG_STORAGE_TOPIC=kafka-connect.config
      - CONNECT_STATUS_STORAGE_TOPIC=kafka-connect.status
      - CONNECT_CONNECTOR_CLIENT_CONFIG_OVERRIDE_POLICY=All
      - CONNECT_LOG4J_ROOT_LOGLEVEL=INFO
      - KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1
      - CONNECT_REST_ADVERTISED_HOST_NAME=localhost
My cluster is up:
~$ curl -X GET http://localhost:8083/
{"version":"5.5.0-ccs","commit":"606822a624024828","kafka_cluster_id":"OcXKHO7eT4m9NBHln6ACKg"}
Connector call
curl -i -X POST -H "Accept:application/json" -H "Content-Type:application/json" localhost:8083/connectors/ -d '
{
"name": "my-connector",
"config":
{
"connector.class" : "io.debezium.connector.oracle.OracleConnector",
"tasks.max": "1",
"database.user": "user",
"database.password": "pass",
"database.dbname":"SID",
"database.schema":"schema",
"database.server.name": "dbname",
"schema.include.list": "schema",
"database.connection.adapter":"logminer",
"database.hostname":"databasehost",
"database.port":"1521"
}
}'
Error
{"error_code": 500,"message": "IO Error trying to forward REST request: java.net.SocketTimeoutException: Connect Timeout"}
Log
connect_1 | [2021-07-01 19:08:50,481] INFO Database Version: Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
connect_1 | Version 19.4.0.0.0 (io.debezium.connector.oracle.OracleConnection)
connect_1 | [2021-07-01 19:08:50,628] INFO Connection gracefully closed (io.debezium.jdbc.JdbcConnection)
connect_1 | [2021-07-01 19:08:50,643] INFO AbstractConfig values:
connect_1 | (org.apache.kafka.common.config.AbstractConfig)
connect_1 | [2021-07-01 19:09:05,722] ERROR IO error forwarding REST request: (org.apache.kafka.connect.runtime.rest.RestClient)
connect_1 | java.util.concurrent.ExecutionException: java.net.SocketTimeoutException: Connect Timeout
Testing the connection to the database
$ telnet databasehostname 1521
Trying <ip>... Connected to databasehostname
Testing connection to kafka broker
$ telnet broker1.intranet 9092
Trying <ip>... Connected to broker1.intranet
Testing connection to remote schema-registry
$ telnet schema-registry.intranet 8081
Trying <ip>... Connected to schema-registry.intranet
What am I doing wrong? Do I need to configure something else to allow connection to this remote database?
You need to set rest.advertised.host.name correctly (or CONNECT_REST_ADVERTISED_HOST_NAME, if you're using Docker).
This is how a Connect worker communicates with other workers in the cluster.
For more details see Common mistakes made when configuring multiple Kafka Connect workers by Robin Moffatt.
In your case, try removing CONNECT_REST_ADVERTISED_HOST_NAME=localhost from the compose file.
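With that line removed, the export in the Dockerfile ENTRYPOINT ($(hostname -I)) takes effect, so the worker advertises its container address instead of localhost. A minimal sketch of retrying under that assumption (connector.json is a hypothetical file holding the JSON payload above):

# Rebuild the image and recreate the container so the change takes effect
docker build -t my-connect:1.0.0 .
docker-compose up -d --force-recreate

# Re-issue the connector call; the forwarded REST request should no longer time out
curl -i -X POST -H "Accept:application/json" -H "Content-Type:application/json" \
  localhost:8083/connectors/ -d @connector.json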
I am new to Jaeger and Kafka.
I am trying to use Kafka as an intermediate buffer.
I am using OpenTelemetry to send data to Jaeger-Collector directly using -Dotel.exporter.jaeger.endpoint.
Jaeger-Collector is deployed on Kubernetes and the Kafka is on another network but is accessible. I can confirm that the traces are sent to Jaeger-collector.
Hitting /metrics on the collector, the output tells me that spans were written successfully to Kafka:
jaeger_kafka_spans_written_total{status="success"} 21
The Collector logs indicate what topic I am writing to
{"Brokers":["myKafkaBroker......"}},"topic":"tp6"}
I want to get this (span) data from the Kafka queue into Elasticsearch. To do this I am starting the Jaeger ingester as follows:
docker run -e "SPAN_STORAGE_TYPE=elasticsearch" jaegertracing/jaeger-ingester:1.22 --kafka.consumer.topic=tp6 --kafka.consumer.brokers='myKafkaBroker' --es.tls.skip-host-verify
But the container stops after a fatal error:
{"level":"fatal","ts":1615546463.7784193,"caller":"command-line-arguments/main.go:64","msg":"Failed to init storage factory","error":"failed to create primary Elasticsearch client: health check timeout: Head \"http://127.0.0.1:9200\": dial tcp 127.0.0.1:9200: connect: connection refused: no Elasticsearch node available","stacktrace":"main.main.func1\n\tcommand-line-arguments/main.go:64\ngithub.com/spf13/cobra.(*Command).execute\n\tgithub.com/spf13/cobra#v0.0.7/command.go:838\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\tgithub.com/spf13/cobra#v0.0.7/command.go:943\ngithub.com/spf13/cobra.(*Command).Execute\n\tgithub.com/spf13/cobra#v0.0.7/command.go:883\nmain.main\n\tcommand-line-arguments/main.go:113\nruntime.main\n\truntime/proc.go:204"}
Elasticsearch and the ingester are being run on the same machine using docker. Elasticsearch is running on docker using:
docker run -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" docker.elastic.co/elasticsearch/elasticsearch:7.11.2
I have disabled TLS so that shouldn't be a problem. I am unable to get this to work.
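Note that the fatal error shows the ingester dialing http://127.0.0.1:9200, and inside the ingester's own container 127.0.0.1 is the container itself, not the machine where Elasticsearch's 9200 is published. A hedged sketch of two ways to point it at the host (--es.server-urls overrides the default Elasticsearch address; host.docker.internal assumes Docker Desktop, --network host assumes a Linux host):

# Docker Desktop (Windows/Mac)
docker run -e "SPAN_STORAGE_TYPE=elasticsearch" jaegertracing/jaeger-ingester:1.22 \
  --kafka.consumer.topic=tp6 --kafka.consumer.brokers='myKafkaBroker' \
  --es.server-urls=http://host.docker.internal:9200 --es.tls.skip-host-verify

# Linux host
docker run --network host -e "SPAN_STORAGE_TYPE=elasticsearch" jaegertracing/jaeger-ingester:1.22 \
  --kafka.consumer.topic=tp6 --kafka.consumer.brokers='myKafkaBroker' \
  --es.server-urls=http://127.0.0.1:9200 --es.tls.skip-host-verify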
I use the following Kafka Docker image: https://hub.docker.com/r/wurstmeister/kafka/
I'm able to start Apache Kafka with the following properties:
<KAFKA_ADVERTISED_HOST_NAME>${local.ip}</KAFKA_ADVERTISED_HOST_NAME>
<KAFKA_ADVERTISED_PORT>${kafka.port}</KAFKA_ADVERTISED_PORT>
<KAFKA_ZOOKEEPER_CONNECT>zookeeper:2181</KAFKA_ZOOKEEPER_CONNECT>
<KAFKA_MESSAGE_MAX_BYTES>15000000</KAFKA_MESSAGE_MAX_BYTES>
but I see the following warning when trying to send the message into the topic:
WARN 9248 --- [ad | producer-1] org.apache.kafka.clients.NetworkClient : [Producer clientId=producer-1] Error while fetching metadata with correlation id 4 : {post.sent=LEADER_NOT_AVAILABLE}
I saw a few articles on the internet saying that this issue can be related to the old properties KAFKA_ADVERTISED_HOST_NAME and KAFKA_ADVERTISED_PORT, and that I should reconfigure to KAFKA_ADVERTISED_LISTENERS and KAFKA_LISTENERS. But when I start the Kafka container with the following properties:
<KAFKA_ADVERTISED_LISTENERS>PLAINTEXT://${local.ip}:${kafka.port}</KAFKA_ADVERTISED_LISTENERS>
<KAFKA_LISTENERS>PLAINTEXT://${local.ip}:${kafka.port}</KAFKA_LISTENERS>
<KAFKA_ZOOKEEPER_CONNECT>zookeeper:2181</KAFKA_ZOOKEEPER_CONNECT>
<KAFKA_MESSAGE_MAX_BYTES>15000000</KAFKA_MESSAGE_MAX_BYTES>
my application is unable to connect to Kafka:
2018-08-25 16:20:57.407 INFO 17440 --- [ main] o.a.kafka.common.utils.AppInfoParser : Kafka version : 1.1.0
2018-08-25 16:20:57.408 INFO 17440 --- [ main] o.a.kafka.common.utils.AppInfoParser : Kafka commitId : fdcf75ea326b8e07
2018-08-25 16:20:58.513 WARN 17440 --- [| adminclient-1] org.apache.kafka.clients.NetworkClient : [AdminClient clientId=adminclient-1] Connection to node -1 could not be established. Broker may not be available.
2018-08-25 16:20:59.567 WARN 17440 --- [| adminclient-1] org.apache.kafka.clients.NetworkClient : [AdminClient clientId=adminclient-1] Connection to node -1 could not be established. Broker may not be available.
How to properly reconfigure the Docker Kafka in order to be able to use KAFKA_ADVERTISED_LISTENERS and KAFKA_LISTENERS?
From this awesome post, here's a good explanation of these properties:
LISTENERS are what interfaces Kafka binds to. ADVERTISED_LISTENERS are how clients can connect.
When your application connects to one of the addresses in LISTENERS, Kafka returns the corresponding KAFKA_ADVERTISED_LISTENER for the LISTENER you chose. The returned KAFKA_ADVERTISED_LISTENER is the address your application will actually use to communicate with Kafka.
So, in your application you have to use whatever you set in Kafka's LISTENERS property for PLAINTEXT.
Using the configuration as you put it:
<KAFKA_ADVERTISED_LISTENERS>PLAINTEXT://${local.ip}:${kafka.port}</KAFKA_ADVERTISED_LISTENERS>
<KAFKA_LISTENERS>PLAINTEXT://${local.ip}:${kafka.port}</KAFKA_LISTENERS>
In your application, you have to use:
Since you used ${local.ip}:${kafka.port} on the Kafka docker side, you have to get the IP assigned to the Kafka docker container and use it in your application.
Just to fill in the variables for this scenario, let's suppose your Kafka docker container IP is 192.250.0.1 and the Kafka port used is 9092; your application's bootstrap.servers property would then be: 192.250.0.1:9092
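For instance, a minimal sketch of pointing a console producer at that address (the IP, port, and the post.sent topic are the hypothetical values from this thread; kafka-console-producer.sh ships with Apache Kafka):

kafka-console-producer.sh --broker-list 192.250.0.1:9092 --topic post.sent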
Here's a command to see what Kafka returns when you try to connect using one of Kafka's listeners:
$ kafkacat -b 192.250.0.1:9092 -L
kafkacat is a very useful tool for testing and debugging Kafka.
I'm running my current infrastructure using Confluent docker images with version 3.3.1 and everything is working fine. Now I'm trying to set up a sandbox environment with v4.0.0, but I'm experiencing issues.
I'm having issues running cp-kafka-connect (v4.0.0) from the docker image provided by Confluent.
Infrastructure details: I'm using the docker image cp-kafka:4.0.0, as well as the one for zookeeper, and all the streams/consumer/producer applications I have on the infrastructure are working just fine. Only cp-kafka-connect:4.0.0 is not working.
Kafka is running with auto topic creation active.
I've run the connect container with these env variables:
-e CONNECT_BOOTSTRAP_SERVERS=kafka1.kafka:9092,kafka2.kafka:9092,kafka3.kafka:9092
-e CONNECT_GROUP_ID=connect-03
-e CONNECT_CONFIG_STORAGE_TOPIC=connect03-config
-e CONNECT_OFFSET_STORAGE_TOPIC=connect03-offsets
-e CONNECT_STATUS_STORAGE_TOPIC=connect03-status
-e CONNECT_KEY_CONVERTER=org.apache.kafka.connect.storage.StringConverter
-e CONNECT_VALUE_CONVERTER=org.apache.kafka.connect.json.JsonConverter
-e CONNECT_INTERNAL_KEY_CONVERTER=org.apache.kafka.connect.json.JsonConverter
-e CONNECT_INTERNAL_VALUE_CONVERTER=org.apache.kafka.connect.json.JsonConverter
-e CONNECT_REST_ADVERTISED_HOST_NAME=kafka-connect-03.kafka
-e CONNECT_REST_PORT=8083
-e CONNECT_LOG4J_ROOT_LOGLEVEL=TRACE
-e CONNECT_KEY_CONVERTER_SCHEMAS_ENABLE=false
-e CONNECT_VALUE_CONVERTER_SCHEMAS_ENABLE=false
-e CONNECT_INTERNAL_KEY_CONVERTER_SCHEMAS_ENABLE=false
-e CONNECT_INTERNAL_VALUE_CONVERTER_SCHEMAS_ENABLE=false
However, when I perform the REST call:
curl -X GET http://kafka-connect-03.kafka/connectors/ -H 'cache-control: no-cache' -H 'content-type: application/json'
I receive an HTTP 500 timeout error after a couple of minutes.
Tailing the kafka-connect container, I can see this (please note: this message is printed independently of the curl call, and is printed forever):
[2017-12-31 17:55:40,099] DEBUG [Consumer clientId=consumer-1, groupId=connect03] Sending GroupCoordinator request to broker kafka3.kafka:9092 (id: 1009 rack: null) (org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
[2017-12-31 17:55:40,099] TRACE [Consumer clientId=consumer-1, groupId=connect03] Sending FIND_COORDINATOR {coordinator_key=connect03,coordinator_type=0} with correlation id 3098 to node 1009 (org.apache.kafka.clients.NetworkClient)
[2017-12-31 17:55:40,100] TRACE [Consumer clientId=consumer-1, groupId=connect03] Completed receive from node 1009 for FIND_COORDINATOR with correlation id 3098, received {throttle_time_ms=0,error_code=15,error_message=null,coordinator={node_id=-1,host=,port=-1}} (org.apache.kafka.clients.NetworkClient)
[2017-12-31 17:55:40,100] DEBUG [Consumer clientId=consumer-1, groupId=connect03] Received GroupCoordinator response ClientResponse(receivedTimeMs=1514742940100, latencyMs=1, disconnected=false, requestHeader=RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=1, clientId=consumer-1, correlationId=3098), responseBody=FindCoordinatorResponse(throttleTimeMs=0, errorMessage='null', error=COORDINATOR_NOT_AVAILABLE, node=:-1 (id: -1 rack: null))) (org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
[2017-12-31 17:55:40,100] DEBUG [Consumer clientId=consumer-1, groupId=connect03] Group coordinator lookup failed: The coordinator is not available. (org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
[2017-12-31 17:55:40,100] DEBUG [Consumer clientId=consumer-1, groupId=connect03] Coordinator discovery failed, refreshing metadata (org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
Meanwhile, in the Kafka logs I can see no errors.
I've tried:
letting connect/kafka create the topics (config, offsets and status)
RESULT: only the offsets topic is created
creating the topics myself beforehand
RESULT: nothing worth noting
With every new attempt I've changed the groupId / appId and the config/offsets/status topic names (in order to avoid any issue from a dirty setup).
Any clue about this behaviour?
Thanks a lot for your support.
Eventually, I realized where the problem was.
I was starting up the kafka container without the KAFKA_BROKER_ID env variable. This brings up a Kafka broker with a pseudo-random ID (e.g. 1007).
It happened that some cluster config got messed up since I had started/stopped the kafka containers several times. In fact, inspecting the cluster, I saw that it was composed of 3 live brokers (1007, 1008, 1009) while another 3 were offline (1004, 1005, 1006).
When I created cp-kafka-connect the first time, it was with the first 3 brokers; then something got corrupted.
Now I have cleaned up everything, set KAFKA_BROKER_ID on each kafka broker, and everything works.
Thanks a lot.
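For reference, a hedged sketch of pinning a stable broker ID on a Confluent Kafka container (the container name, ZooKeeper address, and listener are illustrative; KAFKA_BROKER_ID is the relevant part):

docker run -d --name kafka1 \
  -e KAFKA_BROKER_ID=1 \
  -e KAFKA_ZOOKEEPER_CONNECT=zookeeper.kafka:2181 \
  -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka1.kafka:9092 \
  -e KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=3 \
  confluentinc/cp-kafka:4.0.0

With a fixed ID, a restarted container re-registers as the same broker instead of a new pseudo-random one, so the internal topics' replica assignments stay valid.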