kafka connect in distributed mode is not generating logs specified via log4j properties - docker

I have been using Kafka Connect in my work setup for a while now and it works perfectly fine.
Recently I thought of dabbling with a few connectors of my own in my Docker-based Kafka cluster with just one broker (ubuntu:18.04 with Kafka installed) and a separate node acting as a client for deploying connector apps.
Here is the problem:
Once my broker is up and running, I log in to the client node (with no broker running, just the vanilla Kafka installation), set up the classpath to point to my connector libraries, and set the KAFKA_LOG4J_OPTS environment variable to point to the location of the log file to generate, with debug mode enabled.
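For reference, the variable is set roughly like this before starting the worker (the path is illustrative; KAFKA_LOG4J_OPTS is just extra JVM options that kafka-run-class.sh passes to the JVM):
export KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:/opt/kafka/config/connect-log4j.properties"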
So every time I start the Kafka Connect worker with this command:
nohup /opt//bin/connect-distributed /opt//config/connect-distributed.properties > /dev/null 2>&1 &
the connector starts running, but I don't see the log file being generated.
I have tried several changes, but nothing works.
QUESTIONS:
Does this mean that connect-distributed.sh doesn't generate the log file after reading the KAFKA_LOG4J_OPTS variable? And if it does, could someone explain how?
NOTE:
(I have already stepped through the connect-distributed.sh script and tried it both with and without daemon mode. By default, if KAFKA_LOG4J_OPTS is not provided, it uses the connect-log4j.properties file in the config directory, but even then no log file is generated.)
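For reference, a minimal log4j properties file that writes to a log file looks roughly like this (the appender name and path are illustrative):
log4j.rootLogger=DEBUG, file
log4j.appender.file=org.apache.log4j.FileAppender
log4j.appender.file.File=/tmp/connect-worker.log
log4j.appender.file.layout=org.apache.log4j.PatternLayout
log4j.appender.file.layout.ConversionPattern=[%d] %p %m (%c:%L)%n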
OBSERVATION:
Only when I start ZooKeeper/the broker on the client node is the provided KAFKA_LOG4J_OPTS value picked up and logs start being generated, but nothing related to the Kafka connector appears. I have already verified the connectivity between the client and the broker using kafkacat.
The interesting part is:
I follow the same process at my workplace and logs are generated every time the worker (connect-distributed.sh) is started, but I haven't been able to replicate that behavior in my own setup, and I have no clue what I am missing here.
Could someone provide some reasoning? This is really driving me mad.

Related

Running Kafka connect in standalone mode, having issues with offsets

I am using this GitHub repo and folder path I found: https://github.com/entechlog/kafka-examples/tree/master/kafka-connect-standalone to run Kafka Connect locally in standalone mode. I have made some changes to the Docker Compose file, mainly changes that pertain to authentication.
The problem I am now having is that when I run the Docker image, I get this error multiple times, for each partition (there are 10 of them, 0 through 9):
[2021-12-07 19:03:04,485] INFO [bq-sink-connector|task-0] [Consumer clientId=connector-consumer-bq-sink-connector-0, groupId=connect-bq-sink-connector] Found no committed offset for partition <topic name here>-<partition number here> (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:1362)
I don't think there are any issues with authenticating or connecting to the endpoint(s); I think the consumer (the Connect sink) is not sending the offset back.
Am I missing an environment variable? You will see this Docker Compose file has CONNECT_OFFSET_STORAGE_FILE_FILENAME: /tmp/connect.offsets, and I tried adding CONNECTOR_OFFSET_STORAGE_FILE_FILENAME: /tmp/connect.offsets (CONNECT_ vs. CONNECTOR_), but then I get an error, Failed authentication with <Kafka endpoint here>, so now I'm just going in circles.
I think you are focused on the wrong output.
That is an INFO message, not an error.
The offsets file (or topic in distributed mode) is for source connectors.
Sink connectors use consumer groups. If there is no committed offset found for groupId=connect-bq-sink-connector, then the consumer group simply hasn't committed any offsets yet.
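To confirm that, you can describe the group with the console tool that ships with Kafka (the bootstrap address is a placeholder for your broker):
kafka-consumer-groups --bootstrap-server localhost:9092 --describe --group connect-bq-sink-connector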

Kafka Connect replication factor for license topic

I'm trying to run Kafka Connect locally with docker-compose. Much as I like Confluent products and Kafka, it's sometimes a huge quest to pass some config or find one consistent example.
In my docker-compose file I'm using the 6.0.0 versions for the broker, ZooKeeper, Schema Registry, and Kafka Connect right now, but I've tried older versions as well.
The broker (confluentinc/cp-server:6.0.0) fails with:
INFO [Admin Manager on Broker 1]: Error processing create topic
request CreatableTopic(name='_confluent-license', numPartitions=1,
replicationFactor=3, assignments=[],
configs=[CreateableTopicConfig(name='cleanup.policy',
value='compact'), CreateableTopicConfig(name='min.insync.replicas',
value='2')], linkName=null, mirrorTopic=null)
(kafka.server.AdminManager)
And I simply don't know how to pass confluent.topic.replication.factor as an env var to my workers. I've added both:
CONNECT_CONFLUENT_TOPIC_REPLICATION_FACTOR: "1"
CONFLUENT_TOPIC_REPLICATION_FACTOR: "1"
...but they are both ignored.
What's more, I can't even find _confluent-license mentioned anywhere in the docs, only _confluent-command.
How can I possibly make Connect work locally inside docker-compose without setting up 3 brokers?
The broker is failing because that topic is created by the broker itself, so you want the variable on the broker container:
KAFKA_CONFLUENT_TOPIC_REPLICATION_FACTOR: 1
Otherwise, if you don't need the commercial features (and their license topics), you want cp-kafka, not cp-server.
As OneCricketeer pointed out, the value must be set in the broker container, which also explains why it's the broker container that fails, not the Connect workers. That definitely makes sense: a broker can't fail because of misconfigured clients.
What probably made the solution harder to find is that the env var and configuration property for the license topic replication factor don't follow the usual pattern, where the env var is the uppercased configuration property with dots replaced by underscores (plus a prefix where applicable). In reality we get:
confluent.topic.replication.factor -> KAFKA_CONFLUENT_TOPIC_REPLICATION_FACTOR
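A minimal sketch of where that ends up in docker-compose, assuming the cp-server service is named broker:
broker:
  image: confluentinc/cp-server:6.0.0
  environment:
    # becomes confluent.topic.replication.factor in the broker config
    KAFKA_CONFLUENT_TOPIC_REPLICATION_FACTOR: 1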

GridGain Web Console with Docker: 404 Not Found

I'm trying to deploy GridGain Web Console 2020.03.01 on RHEL7 x86_64 with Docker, following the documentation here.
However, there is a 404 Not Found error when accessing the http://localhost:3000/swagger-ui.html page, which is used as a healthcheck. The backend logs show no errors. The last version I'm able to get the containers running with is 2019.12.02 (which in fact refuses to show a connected cluster, but that's another issue). Starting with 2020.01.00, all backend healthchecks fail. That looks suspicious, considering that the 2020.01.00 release notes include updates of io.springfox and swagger-ui-dist.
Besides that, the 2020.03.01 release notes say that the Console's default port has changed to 8008, but the server still starts on 3000.
Anyone had any luck deploying dockerized Web Console?
The Web Console consists of a backend and a frontend. The backend starts on port 3000, which is what is printed in the log, while the frontend indeed starts on port 8008, and that is most probably the one you want to use.
The docker-compose.yml given on the documentation site maps the container's port 8008 to the host's port 80; feel free to replace that with whatever port you want.
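That is, something like this in the frontend service of the compose file (the host side is up to you):
ports:
  - "80:8008"   # host:container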
Regarding the healthcheck, the endpoint has now changed to /health.
Swagger was removed in 2020.01.00 due to security concerns (the same GG-26726 issue mentioned in the release notes). You are right to be suspicious; I'll ask the right people to update the release notes and the docs. Sorry about the confusion, and thanks for pointing the issue out. Swagger was supposed to be an internal feature for the Web Console (WC) developer team only.
As you pointed out, starting with 2020.01.00 the Swagger-based health check won't work. Internally, the WC team uses dockerize to wait for the backend to start; here's an example from our E2E test suite compose file:
entrypoint: dockerize -wait http://backend:3000/health -timeout 2m -wait-retry-interval 5s node ./index.js --target=${TARGET:-on-premise}
This might work for you too, with some adaptation. You will most likely have to remove the "healthcheck" sections from docker-compose.yml as well, or modify them, if the "http://backend:3000/health" URL can indeed serve as a direct replacement for the old "http://localhost:3000/swagger-ui.html" URL, which I am not sure about.
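If you do keep a healthcheck, a sketch of what it could look like against the new endpoint, assuming curl is available inside the backend container:
healthcheck:
  test: ["CMD", "curl", "-f", "http://localhost:3000/health"]
  interval: 30s
  timeout: 10s
  retries: 5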

Connecting to scality/s3 server between docker containers

We are using a Python-based solution which loads and stores files from S3. For development and local testing we use a Vagrant environment with Docker and docker-compose. We have two docker-compose definitions: one for the assisting backend services (mongo, restheart, redis and s3) and one containing the Python-based solution that exposes a REST API and uses those backend services.
When our "front-end" docker-compose group interacts with restheart, this works fine (using the name of the restheart container as the server host in HTTP calls). When we do the same with the scality/s3 server, it does not work.
The interesting part is that we have a Python test suite running on the host (Windows 10) that talks to the scality/s3 server over the ports forwarded through Vagrant to the scality/s3 container within the docker-compose group. There we used localhost as the endpoint_url and it works perfectly.
In the error case (when the frontend web service wants to write to S3), the "frontend" service always responds with:
botocore.exceptions.ClientError: An error occurred (InvalidURI) when calling the CreateBucket operation: Could not parse the specified URI. Check your restEndpoints configuration.
And the s3server always responds with http 400 and the message:
s3server | {"name":"S3","clientIP":"::ffff:172.20.0.7","clientPort":49404,"httpMethod":"PUT","httpURL":"/raw-data","time":1521306054614,"req_id":"e385aae3c04d99fc824d","level":"info","message":"received request","hostname":"cdc8a2f93d2f","pid":83}
s3server | {"name":"S3","bytesSent":233,"clientIP":"::ffff:172.20.0.7","clientPort":49404,"httpMethod":"PUT","httpURL":"/raw-data","httpCode":400,"time":1521306054639,"req_id":"e385aae3c04d99fc824d","elapsed_ms":25.907569,"level":"info","message":"responded with error XML","hostname":"cdc8a2f93d2f","pid":83}
We are calling Scality with this boto3 code:
import boto3

# resource and client both point at the scality/s3 container by its compose service name
s3 = boto3.resource('s3',
                    aws_access_key_id='accessKey1',
                    aws_secret_access_key='verySecretKey1',
                    endpoint_url='http://s3server:8000')
s3_client = boto3.client('s3',
                         aws_access_key_id='accessKey1',
                         aws_secret_access_key='verySecretKey1',
                         endpoint_url='http://s3server:8000')
s3.create_bucket(Bucket='raw-data')  # here the exception is raised
bucket = s3.Bucket('raw-data')
This issue is quite common. In your config.json file, which I assume you mount into your Docker container, there is a restEndpoints section where you must associate a domain name with a default region. What that means is that the hostname your frontend uses should be listed there, mapped to a default region.
Do note that the default region does not prevent you from using other regions: it's just where your buckets will be created if you don't specify otherwise.
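Concretely, that would be something along these lines in config.json (the region name should be whichever location you use as the default, and s3server is the hostname your frontend calls):
"restEndpoints": {
    "localhost": "us-east-1",
    "127.0.0.1": "us-east-1",
    "s3server": "us-east-1"
}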
In the future, I'd recommend you open an issue directly on the Zenko Forum, as this is where most of the community and core developers are.
Cheers,
Laure

Cassandra Cluster Setup getting JMX error

I'm trying to set up a Cassandra cluster as a test bed, but I got a JMX remote connection error. I seem to have found the answer for my error on the Cassandra FAQ page:
Nodetool says "Connection refused to host: 127.0.1.1" for any remote host. What gives?
Nodetool relies on JMX, which in turn relies on RMI, which in turn sets up its own listeners and connectors as needed on each end of the exchange. Normally all of this happens behind the scenes transparently, but incorrect name resolution for either the host connecting, or the one being connected to, can result in crossed wires and confusing exceptions.
If you are not using DNS, then make sure that your /etc/hosts files are accurate on both ends. If that fails try passing the -Djava.rmi.server.hostname=$IP option to the JVM at startup (where $IP is the address of the interface you can reach from the remote machine).
But can somebody help me with how to set -Djava.rmi.server.hostname=$IP?
Or what should I add to the hosts file? I know that in hosts we normally add "IP alias", but whose IP and alias?
I don't know much Java or Linux.
I'm currently working on Ubuntu v10.04 and Cassandra v0.74.
Sudesh
For JMX you need to enable JMX-remoting:
java -Dcom.sun.management.jmxremote
Depending on where you want to access the JMX server from, you also need to specify a port:
-Dcom.sun.management.jmxremote.port=12345
and set or disable password authentication.
Have a look at http://download.oracle.com/javase/1.5.0/docs/guide/management/agent.html for more details.
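For Cassandra specifically, these flags usually go into the JVM options that the startup script picks up. A sketch, assuming a conf/cassandra-env.sh as shipped by recent releases (the port and IP are placeholders; use the address of the interface that the remote machine can reach):
# conf/cassandra-env.sh
JMX_PORT="7199"
JVM_OPTS="$JVM_OPTS -Dcom.sun.management.jmxremote.port=$JMX_PORT"
JVM_OPTS="$JVM_OPTS -Dcom.sun.management.jmxremote.authenticate=false"
JVM_OPTS="$JVM_OPTS -Dcom.sun.management.jmxremote.ssl=false"
# the IP of the interface that is reachable from the machine running nodetool
JVM_OPTS="$JVM_OPTS -Djava.rmi.server.hostname=192.168.1.10"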
