How to run Confluent Schema Registry using Docker on AWS ec2 - docker

I want to run Schema Registry for my AWS MSK cluster on EC2, within the same VPC as the MSK cluster, using confluentinc/cp-schema-registry.
But the container exits without any useful error message.
Here is my docker command:
docker run \
--net=host \
--name=schema-registry \
-e SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL=<PLAINTEXT-ZOOKEEPER-CONNECTION-URL> \
-e SCHEMA_REGISTRY_HOST_NAME=localhost \
-e SCHEMA_REGISTRY_LISTENERS=http://localhost:8081 \
-p 8081:8081 \
confluentinc/cp-schema-registry
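When a container exits silently like this, its exit code and final log output can usually be recovered before debugging further (a generic diagnostic step, not specific to this image; the container name is the one from the command above):

```shell
# Show the exit code of the stopped container
docker ps -a --filter name=schema-registry

# Show the container's last log output, which usually contains the real error
docker logs schema-registry
```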
===== UPDATE ======
I have also tried running Confluent Schema Registry directly (outside Docker) as follows:
bin/schema-registry-start etc/schema-registry/schema-registry.properties
But I get the error:
java.lang.RuntimeException: Error initializing the ssl context for RestService
Caused by: java.io.IOException: Keystore was tampered with, or password was incorrect
I have generated the signed certificate and added it to the keystore by following:
https://docs.aws.amazon.com/msk/latest/developerguide/msk-authentication.html
This keystore works fine with the console producer and consumer, but not with Schema Registry.
Here is the content of my schema-registry.properties:
listeners=http://0.0.0.0:8081
kafkastore.bootstrap.servers=<MY-MSK-BOOTSTRAP-SERVER>
kafkastore.topic=_schemas
debug=true
security.protocol=SSL
ssl.truststore.location=/tmp/kafka/kafka.client.truststore.jks
ssl.keystore.location=/tmp/kafka/kafka.client.keystore.jks
ssl.keystore.password=xxxx
ssl.key.password=xxxx
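A likely cause worth checking (my assumption, not confirmed in the thread): Schema Registry interprets bare ssl.* properties as settings for its own REST listener, while the settings for the connection to the Kafka cluster need the kafkastore. prefix. Under that assumption, the SSL section would look like this sketch, reusing the paths and passwords from the properties above:

```properties
# Security settings for the connection from Schema Registry to the MSK brokers
# carry the kafkastore. prefix; bare ssl.* applies to the REST listener instead.
kafkastore.security.protocol=SSL
kafkastore.ssl.truststore.location=/tmp/kafka/kafka.client.truststore.jks
kafkastore.ssl.keystore.location=/tmp/kafka/kafka.client.keystore.jks
kafkastore.ssl.keystore.password=xxxx
kafkastore.ssl.key.password=xxxx
```

That would also explain why the same keystore works with the console producer and consumer, which read the un-prefixed names.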

Related

cannot connect schema registry kafka

I'm trying to run the Confluent Schema Registry via the Docker image (on macOS Catalina, Docker version 19.03.12):
docker run --network="host" -e \
SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL=localhost:2181 \
-e SCHEMA_REGISTRY_HOST_NAME=localhost \
-e SCHEMA_REGISTRY_LISTENERS=http://0.0.0.0:8081 \
-e SCHEMA_REGISTRY_DEBUG=true confluentinc/cp-schema-registry
I'm running ZooKeeper (3.5.8) and a Kafka server on the local machine (no Docker); the image above picks up the 3.5.8 ZooKeeper client. However, Schema Registry is unable to connect:
[main-SendThread(localhost:2181)]
INFO org.apache.zookeeper.ClientCnxn -
Socket error occurred: localhost/127.0.0.1:2181: Connection refused
[main-SendThread(localhost:2181)]
INFO org.apache.zookeeper.ClientCnxn -
Opening socket connection to server localhost/127.0.0.1:2181.
Will not attempt to authenticate using SASL (unknown error)
I also tried mapping the port instead of running on the host network, but I get the same result:
docker run -p 8081:8081 -e \
SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL=host.docker.internal:2181 \
-e SCHEMA_REGISTRY_HOST_NAME=localhost \
-e SCHEMA_REGISTRY_LISTENERS=http://0.0.0.0:8081 \
-e SCHEMA_REGISTRY_DEBUG=true confluentinc/cp-schema-registry
Any ideas? Kafka, by the way, is running happily; I can consume and produce messages.
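One thing worth noting (an assumption about the setup, but it matches Docker Desktop's documented behaviour): on macOS, --network="host" does not attach the container to the Mac's network, because Docker runs inside a VM, so localhost:2181 inside the container is not the Mac. The host.docker.internal variant is the right direction, but ZooKeeper must also accept connections from non-local addresses. A sketch under those assumptions:

```shell
# In zoo.cfg, make sure ZooKeeper binds a reachable interface, not only
# localhost (assumption: the default config may be binding loopback only):
#   clientPortAddress=0.0.0.0
# Then reach the host machine from the container via host.docker.internal:
docker run -p 8081:8081 \
  -e SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL=host.docker.internal:2181 \
  -e SCHEMA_REGISTRY_HOST_NAME=localhost \
  -e SCHEMA_REGISTRY_LISTENERS=http://0.0.0.0:8081 \
  -e SCHEMA_REGISTRY_DEBUG=true confluentinc/cp-schema-registry
```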

Grafana with https - Cannot find SSL cert_file

I'm running Grafana in a Docker container on my NAS. Everything is fine when using http.
However, the container fails to start when I set up Grafana for HTTPS, because according to the Docker log the certificate file can't be found.
I created a self-signed certificate using OpenSSL in order to use Grafana with HTTPS.
I modified the Docker script to override the environment Server section for HTTPS and defined the paths to the cert and key files.
INFO[12-08|12:28:50] Config overridden from Environment variable logger=settings var="GF_SERVER_PROTOCOL=https"
INFO[12-08|12:28:50] Config overridden from Environment variable logger=settings var="GF_SERVER_CERT_FILE=/share/CACHEDEV2_DATA/Container/grafana/config/ssl/grafana.crt"
INFO[12-08|12:28:50] Config overridden from Environment variable logger=settings var="GF_SERVER_CERT_KEY=/share/CACHEDEV2_DATA/Container/grafana/config/ssl/grafana.key"
As far as I can see this looks fine, yet for some unknown reason the cert file isn't found, even though it is available at the defined path.
INFO[12-08|12:28:50] HTTP Server Listen logger=http.server address=0.0.0.0:3000 protocol=https subUrl= socket=
EROR[12-08|12:28:50] Stopped HTTPServer logger=server reason="Cannot find SSL cert_file at /share/CACHEDEV2_DATA/Container/grafana/config/ssl/grafana.crt"
When I check the path, I can see it is valid:
[/share/CACHEDEV2_DATA/Container/grafana] # ls -l /share/CACHEDEV2_DATA/Container/grafana/config/ssl/grafana.crt
-rw-r--r-- 1 admin administrators 1228 2019-12-08 10:55 /share/CACHEDEV2_DATA/Container/grafana/config/ssl/grafana.crt
Any idea what could be the reason for this?
Could the Certificate be invalid and the error message is just misleading?
Many thanks for a hint :)
Stefan
Edit:
The script I use to start the Docker Container:
GRAFANA_DIR_CONF=$(readlink -f ./config)
GRAFANA_VER='latest'
docker run -it \
--name=grafana \
-v $GRAFANA_DIR_CONF:/var/lib/grafana \
-v /etc/localtime:/etc/localtime:ro \
-e "GF_SECURITY_ALLOW_EMBEDDING=true" \
-e "GF_USERS_ALLOW_SIGN_UP=false" \
-e "GF_AUTH_ANONYMOUS_ENABLED=true" \
-e "GF_AUTH_BASIC_ENABLED=false" \
-e "GF_SERVER_PROTOCOL=https" \
-e "GF_SERVER_CERT_FILE=$GRAFANA_DIR_CONF/ssl/grafana.crt" \
-e "GF_SERVER_CERT_KEY=$GRAFANA_DIR_CONF/ssl/grafana.key" \
-d \
--restart=always \
-p 3000:3000 \
grafana/grafana:$GRAFANA_VER
[/share/CACHEDEV2_DATA/Container/grafana/config/ssl] # ls -l
total 16
-rw-r--r-- 1 admin administrators 1228 2019-12-08 10:55 grafana.crt
-rw-r--r-- 1 admin administrators 1702 2019-12-08 10:44 grafana.key
[/share/CACHEDEV2_DATA/Container/grafana/config/ssl] #
You are using a volume for the configuration folder, so the correct paths to the cert/key inside the container are:
-e "GF_SERVER_CERT_FILE=/var/lib/grafana/ssl/grafana.crt" \
-e "GF_SERVER_CERT_KEY=/var/lib/grafana/ssl/grafana.key" \
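The translation can be sketched mechanically: strip the host-side mount prefix and prepend the container-side mount point. Using the paths from the question:

```shell
# Host directory that the script bind-mounts into the container at /var/lib/grafana
GRAFANA_DIR_CONF=/share/CACHEDEV2_DATA/Container/grafana/config
HOST_CERT="$GRAFANA_DIR_CONF/ssl/grafana.crt"

# Translate a host path into its in-container equivalent by swapping the prefix
CONTAINER_CERT="/var/lib/grafana${HOST_CERT#"$GRAFANA_DIR_CONF"}"
echo "$CONTAINER_CERT"   # /var/lib/grafana/ssl/grafana.crt
```

Grafana runs inside the container, so it can only ever see the container-side path; the host-side path from the original script does not exist there, which is exactly why the file "can't be found" despite being valid on the NAS.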

Confluent Schema Registry Docker image not exposing port 8081 outside the container

I'm running the following container using the Docker image for the Confluent Schema Registry. Everything runs fine inside the container, meaning I can run a shell command inside the container against localhost:8081/subjects and get back an empty list, as expected.
However, I'm trying to spin up the Schema Registry in a container just so I can build an application locally that points at this Schema Registry instance. So I tried exposing port 8081 to my local machine, but localhost:8081 is not accessible from my machine. Is there no way to do what I'm trying to do here? I also tried running Schema Registry without Docker on my Windows machine, but I didn't see a Windows-specific schema-registry-start file.
docker run -d \
--net=host \
--add-host=linuxkit-00155da9f301:127.0.0.1 \
-p 8081:8081 \
--name=schema-registry \
-e SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL=iptozookeepernode1:2181,iptozookeepernode2:2181 \
-e SCHEMA_REGISTRY_HOST_NAME=localhost \
-e SCHEMA_REGISTRY_LISTENERS=http://localhost:8081 \
-e SCHEMA_REGISTRY_DEBUG=true \
confluentinc/cp-schema-registry:latest
For me, the issue was that port 8081 on localhost was already in use (by McAfee); I changed the port mapping to 8017:8081 and it's working fine.
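A related pitfall worth checking, complementary to the port conflict (an assumption on my part, not confirmed by the asker): with --net=host the -p mapping is a no-op, and a listener bound to localhost inside the container is not reachable through a published port anyway. A sketch without host networking, binding all interfaces (the ZooKeeper addresses are the placeholders from the question):

```shell
docker run -d \
  --name=schema-registry \
  -p 8081:8081 \
  -e SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL=iptozookeepernode1:2181,iptozookeepernode2:2181 \
  -e SCHEMA_REGISTRY_HOST_NAME=localhost \
  -e SCHEMA_REGISTRY_LISTENERS=http://0.0.0.0:8081 \
  -e SCHEMA_REGISTRY_DEBUG=true \
  confluentinc/cp-schema-registry:latest
```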

cannot configure jenkins slave in docker container from jenkins server in docker

I'm working through Docker in Practice by Manning.
The technique in question is about configuring a Jenkins slave that runs as a Docker container.
Below is the Dockerfile for jenkins_slave
FROM ubuntu:latest
ENV DEBIAN_FRONTEND noninteractive
RUN groupadd -g 1000 jenkins_slave
RUN useradd -d /home/jenkins_slave -s /bin/bash \
-m jenkins_slave -u 1000 -g jenkins_slave
RUN echo jenkins_slave:jpass | chpasswd
RUN apt-get update && \
apt-get install -y openssh-server openjdk-8-jre wget iproute2
RUN mkdir -p /var/run/sshd
CMD ip route | grep "default via" \
| awk '{print $3}' && /usr/sbin/sshd -D
I built the Docker image using the command:
docker build -t jenkins_slave .
Then I ran the image as a container using the command:
$ docker run --name jenkins_slave -it -p 2222:22 jenkins_slave
172.17.0.1
Then I ran the Jenkins server using the docker command below:
$ docker run --name jenkins_server -p 8080:8080 -p 50000:50000 dockerinpractice/jenkins:server
Below are the node configuration details (screenshot not reproduced here).
Then I get an error message saying This agent is offline because Jenkins failed to launch the agent process on it.
Below is the error stack trace
[12/07/17 08:50:00] [SSH] Opening SSH connection to 172.17.0.1:2222.
[SSH] No Known Hosts file was found at /var/jenkins_home/.ssh/known_hosts. Please ensure one is created at this path and that Jenkins can read it.
Key exchange was not finished, connection is closed.
java.io.IOException: There was a problem while connecting to 172.17.0.1:2222
at com.trilead.ssh2.Connection.connect(Connection.java:834)
at com.trilead.ssh2.Connection.connect(Connection.java:703)
at com.trilead.ssh2.Connection.connect(Connection.java:617)
at hudson.plugins.sshslaves.SSHLauncher.openConnection(SSHLauncher.java:1284)
at hudson.plugins.sshslaves.SSHLauncher$2.call(SSHLauncher.java:804)
at hudson.plugins.sshslaves.SSHLauncher$2.call(SSHLauncher.java:793)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.IOException: Key exchange was not finished, connection is closed.
at com.trilead.ssh2.transport.KexManager.getOrWaitForConnectionInfo(KexManager.java:95)
at com.trilead.ssh2.transport.TransportManager.getConnectionInfo(TransportManager.java:237)
at com.trilead.ssh2.Connection.connect(Connection.java:786)
... 9 more
Caused by: java.io.IOException: The server hostkey was not accepted by the verifier callback
at com.trilead.ssh2.transport.KexManager.handleMessage(KexManager.java:548)
at com.trilead.ssh2.transport.TransportManager.receiveLoop(TransportManager.java:790)
at com.trilead.ssh2.transport.TransportManager$1.run(TransportManager.java:502)
... 1 more
I have a simple build configuration called test, but the build is not running since the slave is offline.
Any idea why the Jenkins master is not identifying the slave server?
Just change the Host Key Verification Strategy to Non verifying Verification Strategy in the node configuration.
You are using a Host Key Verification Strategy that checks the known_hosts file (usually at ~/.ssh/known_hosts). However, the Jenkins server inside the Docker container is checking /var/jenkins_home/.ssh/known_hosts, which is probably empty right now.
You can also use the Manually provided key Verification Strategy and paste the slave's public host key there, or use one of the other methods with the help of this document.
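If you prefer to keep the known-hosts strategy instead of relaxing verification, the file the error points at can be pre-populated from the master container (a sketch, assuming the container names, address, and port from the question):

```shell
# Collect the slave's SSH host key and store it where the SSH Slaves plugin
# looks for it inside the jenkins_server container
docker exec jenkins_server bash -c \
  'mkdir -p /var/jenkins_home/.ssh && \
   ssh-keyscan -p 2222 172.17.0.1 >> /var/jenkins_home/.ssh/known_hosts'
```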

Docker behind VPN cannot start after downloading TFS container

I have a Docker cluster behind a VPN. I have downloaded the TFS agent container and want to connect it to our TFS, but it cannot connect, failing with:
Determining matching VSTS agent...
Downloading and installing VSTS agent...
curl: (35) gnutls_handshake() failed: Error in the pull function.
gzip: stdin: unexpected end of file
tar: Child returned status 1
tar: Error is not recoverable: exiting now
It can ping Google, but it cannot ping the public TFS. I would consider this a network issue, but the nginx container was pulled and started successfully.
docker run \
-e VSTS_ACCOUNT=xxx \
-e TFS_HOST=yyy \
-e VSTS_TOKEN=zzz \
-it microsoft/vsts-agent
also tried this:
docker run \
-e VSTS_ACCOUNT=xxx \
-e VSTS_AGENT="$(hostname)-agent" \
-e VSTS_TOKEN=yyy \
-e TFS_URL=zzz \
-e VSTS_POOL=eee \
-e VSTS_WORK='/var/vsts/$VSTS_AGENT' \
-v /var/vsts:/var/vsts \
-it microsoft/vsts-agent:ubuntu-14.04
Although it is behind a VPN, I can access the repo from a browser, by the way.
It seems Docker shows an SSL handshake error even when the underlying problem is a network issue, so the curl output was misleading. This issue was solved by adding the IP to a whitelist.
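To tell a TLS-interception problem apart from a plain connectivity problem, the handshake can be inspected from inside a container before blaming the agent image (a generic diagnostic; the TFS host placeholder stands in for your real URL):

```shell
# -v prints each step of the TLS handshake; a VPN or proxy that intercepts TLS
# typically shows up as an unexpected certificate issuer or an aborted handshake
docker run --rm -it ubuntu:latest \
  bash -c 'apt-get update -qq && apt-get install -y -qq curl && curl -v https://<your-tfs-host>/'
```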
