Unable to deploy debezium mysql connector using Confluent docker - docker

I am trying to run the Confluent Docker image to start the Connect service with the Debezium MySQL connector, but the connector class is not loaded after the container starts.
Docker Command
docker run -d \
--name=kafka-connect \
--net=host \
-e CONNECT_BOOTSTRAP_SERVERS="b-2.<bootstrap_server>.us-east-1.amazonaws.com:9092,b-3.<bootstrap_server>.us-east-1.amazonaws.com:9092,b-1.<bootstrap_server>.us-east-1.amazonaws.com:9092" \
-e CONNECT_REST_PORT=8083 \
-e CONNECT_PLUGIN_PATH="/usr/share/java,/tmp/connectors" \
-e CONNECT_GROUP_ID="quickstart" \
-e CONNECT_CONFIG_STORAGE_TOPIC="quick-config" \
-e CONNECT_OFFSET_STORAGE_TOPIC="quick-offsets" \
-e CONNECT_STATUS_STORAGE_TOPIC="quick-status" \
-e CONNECT_KEY_CONVERTER="org.apache.kafka.connect.json.JsonConverter" \
-e CONNECT_VALUE_CONVERTER="org.apache.kafka.connect.json.JsonConverter" \
-e CONNECT_INTERNAL_KEY_CONVERTER="org.apache.kafka.connect.json.JsonConverter" \
-e CONNECT_INTERNAL_VALUE_CONVERTER="org.apache.kafka.connect.json.JsonConverter" \
-e CONNECT_REST_ADVERTISED_HOST_NAME="ec2-<public_ip>.compute-1.amazonaws.com" \
-v /opt/connectors:/tmp/connectors \
confluentinc/cp-kafka-connect:3.1.2
The files in the instance directory used for the volume mount (-v) are as follows:
ubuntu@ip-<hostname>:/opt/connectors$ ls
debezium-connect-jdbc
The command below tests whether the Debezium MySQL connector class is loaded:
curl -s http://ec2-<public_ip>.compute-1.amazonaws.com:8083/connector-plugins | jq .
After running the above command, I do not see the class for the Debezium MySQL connector, shown below:
io.debezium.connector.mysql.MySqlConnector
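For reference, a quick way to filter the plugin list for just the Debezium class (only a convenience wrapper around the same curl):
curl -s http://ec2-<public_ip>.compute-1.amazonaws.com:8083/connector-plugins | jq '.[].class' | grep -i debezium
On my setup this returns nothing.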

Your problem is an outdated Kafka Connect version: confluentinc/cp-kafka-connect:3.1.2 ships Kafka 0.10.1.1, but classloading isolation (the plugin.path mechanism) was only added to Connect in 0.11.0.
Please use a newer image.
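As a sketch of what that could look like (the 5.4.1 tag below is just an example of a post-0.11.0 image, not a specific recommendation), the command stays the same and only the image tag changes:
docker run -d \
--name=kafka-connect \
--net=host \
-e CONNECT_BOOTSTRAP_SERVERS="b-2.<bootstrap_server>.us-east-1.amazonaws.com:9092,b-3.<bootstrap_server>.us-east-1.amazonaws.com:9092,b-1.<bootstrap_server>.us-east-1.amazonaws.com:9092" \
-e CONNECT_REST_PORT=8083 \
-e CONNECT_PLUGIN_PATH="/usr/share/java,/tmp/connectors" \
-e CONNECT_GROUP_ID="quickstart" \
-e CONNECT_CONFIG_STORAGE_TOPIC="quick-config" \
-e CONNECT_OFFSET_STORAGE_TOPIC="quick-offsets" \
-e CONNECT_STATUS_STORAGE_TOPIC="quick-status" \
-e CONNECT_KEY_CONVERTER="org.apache.kafka.connect.json.JsonConverter" \
-e CONNECT_VALUE_CONVERTER="org.apache.kafka.connect.json.JsonConverter" \
-e CONNECT_INTERNAL_KEY_CONVERTER="org.apache.kafka.connect.json.JsonConverter" \
-e CONNECT_INTERNAL_VALUE_CONVERTER="org.apache.kafka.connect.json.JsonConverter" \
-e CONNECT_REST_ADVERTISED_HOST_NAME="ec2-<public_ip>.compute-1.amazonaws.com" \
-v /opt/connectors:/tmp/connectors \
confluentinc/cp-kafka-connect:5.4.1
Once that container is up, the same curl against /connector-plugins should list io.debezium.connector.mysql.MySqlConnector, provided the Debezium jars sit under /opt/connectors on the host.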

Related

Container Listener not working on IP Address from server - Only works on IP from the Docker Network

SUMMARY
I am running a Zabbix Server container, but I am not able to communicate with its listening port, even locally.
OS / ENVIRONMENT / Used docker-compose files
This is the script I am currently using to run it:
docker run -d --name zabbix-server \
--restart always \
--link zabbix-snmptraper:zabbix-snmptraps --volumes-from zabbix-snmptraper \
-p 192.168.1.248:10052:10051 \
-e MYSQL_DATABASE="zabbix" \
-e MYSQL_USER="zabbix" \
-e MYSQL_PASSWORD="aro#123" \
-e ZBX_LISTENPORT=10052 \
-e ZBX_HOUSEKEEPINGFREQUENCY=12 \
-e ZBX_LOGSLOWQUERIES=1000 \
-e ZBX_STARTPOLLERSUNREACHABLE=1 \
-e ZBX_STARTPINGERS=5 \
-e ZBX_STARTTRAPPERS=1 \
-e ZBX_STARTDBSYNCERS=3 \
-e ZBX_STARTDISCOVERERS=4 \
-e ZBX_STARTPOLLERS=10 \
-e ZBX_TIMEOUT=30 \
-e ZBX_VALUECACHESIZE=32M \
-e ZBX_CACHESIZE=48M \
-e ZBX_MAXHOUSEKEEPERDELETE=432000 \
-e ZBX_ENABLE_SNMP_TRAPS=true \
-e MYSQL_ROOT_PASSWORD="my_root_pass_of_mysql..." \
-e DB_SERVER_HOST="mysql-server" \
-e DB_SERVER_PORT="3306" \
-v /etc/localtime:/etc/localtime:ro \
-v /mnt/dados/zabbix/external_scripts:/usr/lib/zabbix/externalscripts \
--network=zabbix-net \
zabbix/zabbix-server-mysql:5.4-ubuntu-latest
CONFIGURATION
The commands above are being run on Debian 11.
STEPS TO REPRODUCE
The container is up and running.
The passive checks all work: I can gather data from Zabbix agents, SNMP devices, etc.
The problem happens when I try to make an active check from outside to the Zabbix Server itself.
My deduction was that the Docker container did not create the necessary routes for this, so I must be missing an option or some configuration.
EXPECTED RESULTS
When I telnet to port 10052 on my Zabbix Server, the expected result is a successful connection.
ACTUAL RESULTS
Locally, on my own Zabbix Server, when I did:
sudo telnet 192.168.1.248 10052
I got telnet: Unable to connect to remote host: Connection refused
The strange thing is that when I do this against the container's IP on the Docker network (from docker inspect zabbix-server: "IPAddress": "172.18.0.4"):
sudo telnet 172.18.0.4 10052
Trying 172.18.0.4...
Connected to 172.18.0.4.
It worked, so there is a routing problem with this container.
Most containers create the necessary rules when they start, or at least document (in their logs or docs) how to do it, but I could not find anything about this.
Can you please help me?
I have been on this for more than two weeks and do not know what to do anymore.
If this is in the wrong section or forum, please direct me to the correct place.
I really appreciate the help.
Edit 1
Here is the output TCPDUMP gave me:
16:28:12.373378 IP 192.168.17.24.55114 > 192.168.1.248.10052: Flags [S], seq 2008667124, win 64240, options [mss 1460,nop,wscale 8,nop,nop,sackOK], length 0
As you can see, packets are coming through and arriving at the Docker host.
I tried adding the following rule to IPTables to see if it solved it:
sudo iptables -t nat -A PREROUTING -p tcp --dport 10052 -j DNAT --to-destination 172.18.0.4:10052 -m comment --comment "Redirect requests from IP 248 to the container IP"
But it did not work, or I created the rule incorrectly.
To list the rules I used the command:
sudo iptables -t nat -v -L PREROUTING -n --line-number
The rule was created fine.
While you configured Zabbix to listen on port 10052 (-e ZBX_LISTENPORT=10052), you map host port 10052 to the container's port 10051 instead (-p 192.168.1.248:10052:10051).
Use -p 192.168.1.248:10052:10052 to make it work:
docker run -d --name zabbix-server \
--restart always \
--link zabbix-snmptraper:zabbix-snmptraps --volumes-from zabbix-snmptraper \
-p 192.168.1.248:10052:10052 \
-e MYSQL_DATABASE="zabbix" \
-e MYSQL_USER="zabbix" \
-e MYSQL_PASSWORD="aro#123" \
-e ZBX_LISTENPORT=10052 \
-e ZBX_HOUSEKEEPINGFREQUENCY=12 \
-e ZBX_LOGSLOWQUERIES=1000 \
-e ZBX_STARTPOLLERSUNREACHABLE=1 \
-e ZBX_STARTPINGERS=5 \
-e ZBX_STARTTRAPPERS=1 \
-e ZBX_STARTDBSYNCERS=3 \
-e ZBX_STARTDISCOVERERS=4 \
-e ZBX_STARTPOLLERS=10 \
-e ZBX_TIMEOUT=30 \
-e ZBX_VALUECACHESIZE=32M \
-e ZBX_CACHESIZE=48M \
-e ZBX_MAXHOUSEKEEPERDELETE=432000 \
-e ZBX_ENABLE_SNMP_TRAPS=true \
-e MYSQL_ROOT_PASSWORD="my_root_pass_of_mysql..." \
-e DB_SERVER_HOST="mysql-server" \
-e DB_SERVER_PORT="3306" \
-v /etc/localtime:/etc/localtime:ro \
-v /mnt/dados/zabbix/external_scripts:/usr/lib/zabbix/externalscripts \
--network=zabbix-net \
zabbix/zabbix-server-mysql:5.4-ubuntu-latest
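After recreating the container, a quick sanity check (not part of the original answer, just standard Docker and telnet checks) could be:
docker port zabbix-server
sudo telnet 192.168.1.248 10052
docker port should now show 10052/tcp -> 192.168.1.248:10052, and the telnet connection should be accepted.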

installing transmission on debian with docker: Missing container [duplicate]

This question already has answers here:
Why docker container exits immediately
(16 answers)
Closed 9 months ago.
I am new to this. I have installed Docker on my Raspberry Pi and I am trying to run Transmission in a container. I use the following:
docker run --cap-add=NET_ADMIN -d \
--name=transmission \
-v /mnt/extDrive1:/data \
-v /etc/localtime:/etc/localtime:ro \
-e CREATE_TUN_DEVICE=true \
-e OPENVPN_PROVIDER=EXPRESSVPN \
-e OPENVPN_CONFIG=my_expressvpn_uk_-_london_udp \
-e OPENVPN_USERNAME=XXX \
-e OPENVPN_PASSWORD=XXX \
-e WEBPROXY_ENABLED=false \
-e LOCAL_NETWORK=192.168.0.0 \
--log-driver json-file \
--log-opt max-size=10m \
-p 9091:9091 \
haugene/transmission-openvpn
I went through the debugging guide at https://haugene.github.io/docker-transmission-openvpn/debug/
All is fine until I get to the section 'Checking if Transmission is running'.
When I run docker ps, there are no containers in the list.
What have I done wrong? Ultimately, I am trying to access transmission through localhost:9091.
Edit: I have made some progress, but am still having issues:
docker start transmission temporarily populates the container ID.
docker exec -it <container-id> bash fails with the following error:
Error response from daemon: Container XXXX is not running
It seems that the container is exiting because you are not running it in detached mode. Try this:
docker run -itd --cap-add=NET_ADMIN -d \
--name=transmission \
-v /mnt/extDrive1:/data \
-v /etc/localtime:/etc/localtime:ro \
-e CREATE_TUN_DEVICE=true \
-e OPENVPN_PROVIDER=EXPRESSVPN \
-e OPENVPN_CONFIG=my_expressvpn_uk_-_london_udp \
-e OPENVPN_USERNAME=XXX \
-e OPENVPN_PASSWORD=XXX \
-e WEBPROXY_ENABLED=false \
-e LOCAL_NETWORK=192.168.0.0 \
--log-driver json-file \
--log-opt max-size=10m \
-p 9091:9091 \
haugene/transmission-openvpn
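If the container still exits immediately, a couple of generic Docker checks (my addition, not from the answer above) usually show why:
docker logs transmission
docker inspect --format '{{.State.ExitCode}} {{.State.Error}}' transmission
The log output should point at the failing step; with this image that is typically something in the OpenVPN setup.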

NiFi: Why Does My User Have Insufficient Permissions?

I am following the steps in the "Standalone Instance, Two-Way SSL" section of https://hub.docker.com/r/apache/nifi. However, when I visit the NiFi page, my user has insufficient permissions. Below is the process I am using:
Generate self-signed certificates
mkdir conf
docker exec \
-ti toolkit \
/opt/nifi/nifi-toolkit-current/bin/tls-toolkit.sh \
standalone \
-n 'nifi1.bluejay.local' \
-C 'CN=admin,OU=NIFI'
docker cp toolkit:/opt/nifi/nifi-current/nifi-cert.pem conf
docker cp toolkit:/opt/nifi/nifi-current/nifi-key.key conf
docker cp toolkit:/opt/nifi/nifi-current/nifi1.bluejay.local conf
docker cp toolkit:/opt/nifi/nifi-current/CN=admin_OU=NIFI.p12 conf
docker cp toolkit:/opt/nifi/nifi-current/CN=admin_OU=NIFI.password conf
docker stop toolkit
Import client certificate to browser
Import the .p12 file into your browser.
Update /etc/hosts
Add "127.0.0.1 nifi1.bluejay.local" to the end of your /etc/hosts file.
Define a NiFi network
docker network create --subnet=10.18.0.0/16 nifi
Run NiFi in a container
docker run -d \
-e AUTH=tls \
-e KEYSTORE_PATH=/opt/certs/keystore.jks \
-e KEYSTORE_TYPE=JKS \
-e KEYSTORE_PASSWORD=$(grep keystorePasswd conf/nifi1.bluejay.local/nifi.properties | cut -d'=' -f2) \
-e TRUSTSTORE_PATH=/opt/certs/truststore.jks \
-e TRUSTSTORE_PASSWORD=$(grep truststorePasswd conf/nifi1.bluejay.local/nifi.properties | cut -d'=' -f2) \
-e TRUSTSTORE_TYPE=JKS \
-e INITIAL_ADMIN_IDENTITY="CN=admin,OU=NIFI" \
-e NIFI_WEB_PROXY_CONTEXT_PATH=/nifi \
-e NIFI_WEB_PROXY_HOST=nifi1.bluejay.local \
--hostname nifi1.bluejay.local \
--ip 10.18.0.10 \
--name nifi \
--net nifi \
-p 8443:8443 \
-v $(pwd)/conf/nifi1.bluejay.local:/opt/certs:ro \
-v /data/projects/nifi-shared:/opt/nifi/nifi-current/ls-target \
apache/nifi
Visit Page
When you visit https://localhost:8443/nifi, you'll be asked to select a certificate. Select the certificate (e.g. admin) that you imported.
At this point, I am seeing:
Insufficient Permissions
Unknown user with identity 'CN=admin, OU=NIFI'. Contact the system administrator.
In the examples I am seeing, there is no mention of this issue or how to resolve it.
How are permissions assigned to the Initial Admin Identity?
You are missing a space in the line:
-e INITIAL_ADMIN_IDENTITY="CN=admin,OU=NIFI"
See the error message: the identity taken from your certificate is 'CN=admin, OU=NIFI', with a space after the comma, and the initial admin identity must match it exactly.
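A minimal sketch of the corrected flag (everything else in the docker run stays as in the question):
-e INITIAL_ADMIN_IDENTITY="CN=admin, OU=NIFI" \
Note that NiFi only applies the initial admin identity when it first generates users.xml and authorizations.xml, so if the container has already been started once you will likely need to remove it (or those files) and start again with the corrected value.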

Issue in connecting ksql with kafka of confluent version 3.3.0 in docker

I am setting up ksql-cli with Confluent version 3.3.0 in the following way:
#zookeeper
docker run -d -it \
--net=host \
--name=zookeeper \
-e ZOOKEEPER_CLIENT_PORT=32181 \
confluentinc/cp-zookeeper:3.3.0
#kafka
docker run -d \
--net=host \
--name=kafka \
-e KAFKA_ZOOKEEPER_CONNECT=localhost:32181 \
-e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://localhost:29092 \
-e KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1 \
confluentinc/cp-kafka:3.3.0
#schema-registry
docker run -d \
--net=host \
--name=schema-registry \
-e SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL=localhost:32181 \
-e SCHEMA_REGISTRY_HOST_NAME=localhost \
-e SCHEMA_REGISTRY_LISTENERS=http://localhost:8081 \
confluentinc/cp-schema-registry:3.3.0
I am running the ksql-cli Docker image in the following manner:
docker run -it \
--net=host \
--name=ksql-cli \
-e KSQL_CONFIG_DIR="/etc/ksql" \
-e KSQL_LOG4J_OPTS="-Dlog4j.configuration=file:/etc/ksql/log4j-rolling.properties" \
-e STREAMS_BOOTSTRAP_SERVERS=localhost:29092 \
-e STREAMS_SCHEMA_REGISTRY_HOST=localhost \
-e STREAMS_SCHEMA_REGISTRY_PORT=8081 \
confluentinc/ksql-cli:0.5
When I run ksql-cli by going into the container's bash as follows:
docker exec -it ksql-cli bash
and then running ksql-cli like this:
./usr/bin/ksql-cli local
it gives me the following error:
Initializing KSQL...
Could not fetch broker information. KSQL cannot initialize AdminClient.
By default, ksql-cli attempts to connect to the Kafka brokers on localhost:9092. It looks like your setup is using a different port, so you'll need to provide this on the command line, e.g.
./usr/bin/ksql-cli local --bootstrap-server localhost:29092
You'll probably also need to specify the schema registry port, so you may want to use a properties file, e.g.:
./usr/bin/ksql-cli local --properties-file ./ksql.properties
Where ksql.properties has:
bootstrap.servers=localhost:29092
schema.registry.url=localhost:8081
Or provide both on the command line:
./usr/bin/ksql-cli local \
--bootstrap-server localhost:29092 \
--schema.registry.url http://localhost:8081
Note: from KSQL version 4.1 onwards the commands and properties change names. ksql-cli becomes just ksql. Local mode disappears: you'll need to run one or two ksql-server nodes explicitly. --properties-file becomes --config-file and schema.registry.url becomes ksql.schema.registry.url.
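For completeness, a rough sketch of what that looks like on 4.1+ (command names per the note above; the port and file name are illustrative assumptions):
# server side
ksql-server-start ./ksql-server.properties
# where ksql-server.properties contains at least:
#   bootstrap.servers=localhost:29092
#   ksql.schema.registry.url=http://localhost:8081
# client side: the CLI now talks to the server's REST endpoint (default port 8088)
ksql http://localhost:8088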

Running Dell Boomi atom on Docker

I retrieved the most recent image using docker pull boomi/atom:2.3.0
I then run the following script (using placeholders for USERNAME, PASSWORD and ACCOUNT_ID):
#!/bin/bash
atom_name=boomidemo01
docker stop $atom_name
docker rm $atom_name
docker run -p 9090:9090 -h boomidemo01 -e URL="platform.boomi.com" \
-e BOOMI_USERNAME=<USERNAME> -e BOOMI_PASSWORD=<PASSWORD> \
-e BOOMI_ATOMNAME=$atom_name \
-e BOOMI_CONTAINERNAME=$atom_name \
-e BOOMI_ACCOUNTID=<ACCOUNT_ID> \
--name $atom_name \
-d -t boomi/atom:2.3.0
But the Atom fails to start (I am not able to connect to port 9090 via a browser at http://127.0.0.1:9090). Has anyone managed to run a Boomi Atom with Docker?
I eventually figured it out; the following script works:
#!/bin/bash
atom_name=boomidemo01
host_dir=/home/user/Boomi
docker stop $atom_name
docker rm $atom_name
docker run -p 9090:9090 -h $atom_name \
-v $host_dir:/home/boomi/host_dir \
-e URL=https://platform.boomi.com \
-e BOOMI_USERNAME=<USERNAME> \
-e BOOMI_PASSWORD=<PASSWORD> \
-e BOOMI_ATOMNAME=$atom_name \
-e BOOMI_CONTAINERNAME=$atom_name \
-e BOOMI_ACCOUNTID=<ACCOUNT_ID> \
-e PROXY_HOST= \
-e PROXY_USERNAME= \
-e PROXY_PASSWORD= \
-e PROXY_PORT= \
-e DOCKERUID= \
-e SYMLINKS_DIR= \
-e ATOM_LOCALHOSTID=$atom_name \
-e INSTALL_TOKEN= \
--name $atom_name \
-d -t boomi/atom:2.3.0
Download the Docker install script from within the UI.
Generate the token.
Run the script with the name and token.
The port does not matter because the Atom is never pinged; it fetches processes from AtomSphere for you.
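If the Atom still does not show up in the platform, a generic first check (my addition, not from the answers above) is the container log:
docker logs -f boomidemo01
The startup output should indicate whether the Atom was able to reach platform.boomi.com and register with the account.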
