Schema Registry for a 3 Node Kafka Cluster with SSL - docker

I am configuring a 3-node Kafka cluster (3 brokers and 3 ZooKeepers, with SSL enabled) using Docker. Now I need to set up a Schema Registry. Is it possible to use just one Schema Registry instance? If yes, what should my SSL truststore and keystore configs look like when I run it?
I did refer to Confluent's documentation, which discusses Kafka-based leader election and ZooKeeper-based leader election, but it is not clear to me.
This is my faulty docker run command:
docker run -d \
--net=host \
--name=schema-registry \
-e SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL=localhost:22181,localhost:32181,localhost:42181 \
-e SCHEMA_REGISTRY_HOST_NAME=localhost \
-e SCHEMA_REGISTRY_DEBUG=true \
-e SCHEMA_REGISTRY_KAFKASTORE_SECURITY_PROTOCOL=SSL \
-e SCHEMA_REGISTRY_KAFKASTORE_SSL_TRUSTSTORE_LOCATION=kafka.broker1.truststore.jks \
-e SCHEMA_REGISTRY_KAFKASTORE_SSL_TRUSTSTORE_PASSWORD=broker1_truststore_creds \
-e SCHEMA_REGISTRY_KAFKASTORE_SSL_KEYSTORE_LOCATION=kafka.broker1.keystore.jks \
-e SCHEMA_REGISTRY_KAFKASTORE_SSL_KEYSTORE_PASSWORD=broker1_keystore_creds \
-e SCHEMA_REGISTRY_KAFKASTORE_SSL_KEY_PASSWORD=broker1_sslkey_creds \
-v ${KAFKA_SSL_SECRETS_DIR}:/etc/kafka/secrets \
confluentinc/cp-schema-registry:5.0.1
I am sure my understanding of how Schema Registry works in a clustered setup is not correct.
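(For reference: yes, a single Schema Registry instance is enough; multiple instances are only needed for high availability, with one elected leader. Below is a hedged sketch of how the SSL settings could look. Assumptions: the JKS files are mounted into the container under /etc/kafka/secrets, the *_PASSWORD variables receive the actual passwords rather than the names of the broker *_creds files, and the broker SSL listener addresses are placeholders for your own. Pointing SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS at the brokers, instead of at the ZooKeeper connection URL, is also what selects the Kafka-based leader election the Confluent docs describe.)
docker run -d \
--net=host \
--name=schema-registry \
-e SCHEMA_REGISTRY_HOST_NAME=localhost \
-e SCHEMA_REGISTRY_LISTENERS=http://0.0.0.0:8081 \
-e SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS=SSL://localhost:19092,SSL://localhost:29092,SSL://localhost:39092 \
-e SCHEMA_REGISTRY_KAFKASTORE_SECURITY_PROTOCOL=SSL \
-e SCHEMA_REGISTRY_KAFKASTORE_SSL_TRUSTSTORE_LOCATION=/etc/kafka/secrets/kafka.schemaregistry.truststore.jks \
-e SCHEMA_REGISTRY_KAFKASTORE_SSL_TRUSTSTORE_PASSWORD=<truststore-password> \
-e SCHEMA_REGISTRY_KAFKASTORE_SSL_KEYSTORE_LOCATION=/etc/kafka/secrets/kafka.schemaregistry.keystore.jks \
-e SCHEMA_REGISTRY_KAFKASTORE_SSL_KEYSTORE_PASSWORD=<keystore-password> \
-e SCHEMA_REGISTRY_KAFKASTORE_SSL_KEY_PASSWORD=<key-password> \
-v ${KAFKA_SSL_SECRETS_DIR}:/etc/kafka/secrets \
confluentinc/cp-schema-registry:5.0.1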

Related

Enable fine-grained admin permissions on Keycloak with Docker

I have set up Keycloak using Docker. My problem is that I need to make some modifications to the clients, which requires fine-grained authorization to be enabled. I have read the documentation and I know I should use the parameter -Dkeycloak.profile=preview or -Dkeycloak.profile.feature.admin_fine_grained_authz=enabled. My problem is that I tried to use that in my docker run command, but with no luck:
docker run --rm \
--name keycloak \
-p 80:8080 \
-e KEYCLOAK_USER=admin \
-e KEYCLOAK_PASSWORD=[adminPass] \
-e PROXY_ADDRESS_FORWARDING=true \
-e DB_VENDOR=MYSQL \
-e DB_ADDR=[SQL_Server] \
-e DB_DATABASE=keycloak \
-e DB_USER=[DBUSER] \
-e DB_PASSWORD=[DB_PASS] \
-e JDBC_PARAMS=useSSL=false \
-e -Dkeycloak.profile.feature.admin_fine_grained_authz=enabled \
jboss/keycloak
Any help?
It is documented in the Docker image readme: https://hub.docker.com/r/jboss/keycloak
Additional server startup options (extension of JAVA_OPTS) can be configured using the JAVA_OPTS_APPEND environment variable.
So in your case:
-e JAVA_OPTS_APPEND="-Dkeycloak.profile=preview"
My guess is that you need to pass the environment variables to the JVM when starting the WildFly server containing the Keycloak WAR. There is a runner shell script that starts when launching the container; you would need to add your environment variables to that call.
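Putting the two answers together, the original command would look something like this (the bracketed values stay placeholders exactly as in the question; only the JAVA_OPTS_APPEND line is new):
docker run --rm \
--name keycloak \
-p 80:8080 \
-e KEYCLOAK_USER=admin \
-e KEYCLOAK_PASSWORD=[adminPass] \
-e PROXY_ADDRESS_FORWARDING=true \
-e DB_VENDOR=MYSQL \
-e DB_ADDR=[SQL_Server] \
-e DB_DATABASE=keycloak \
-e DB_USER=[DBUSER] \
-e DB_PASSWORD=[DB_PASS] \
-e JDBC_PARAMS=useSSL=false \
-e JAVA_OPTS_APPEND="-Dkeycloak.profile.feature.admin_fine_grained_authz=enabled" \
jboss/keycloak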

How to deploy neoload in docker

Does anyone know how to deploy NeoLoad on Docker? I have looked at the NeoLoad package on Docker Hub but it doesn't seem to make much sense to me. I want to use it for performance testing. The link is https://hub.docker.com/r/neotys/neoload-controller/
As explained in the documentation, there are two ways to deploy your NeoLoad controller on Docker:
Managed: this mode only works with NeoLoad Web.
Standalone: basically, when you run your NeoLoad container you give it some parameters, like the NeoLoad project, the number of virtual users, etc. The test is launched as soon as the container starts.
From the docker hub documentation:
docker run -d --rm \
-e PROJECT_NAME={project-name} \
-e SCENARIO={scenario} \
-e NTS_URL={nts-url} \
-e NTS_LOGIN={login:password} \
-e COLLAB_URL={collab-url} \
-e LICENSE_ID={license-id} \
-e VU_MAX={vu-max} \
-e DURATION_MAX={duration-max} \
-e NEOLOADWEB_URL={nlweb-onpremise-apiurl:port} \
-e NEOLOADWEB_TOKEN={nlweb-token} \
-e PUBLISH_RESULT={publish-result} \
neotys/neoload-controller
You either have to pull the license from NeoLoad Web or from an NTS server.
I would need more information about your problem to help you further.
Regards
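For illustration, here is a hypothetical standalone launch against an NTS license server, with the placeholders filled in. Only the variable names come from the image documentation above; the project name, scenario, URLs, and license ID are invented values, and which variables you actually need depends on your licensing setup:
docker run -d --rm \
-e PROJECT_NAME=MyLoadTest \
-e SCENARIO=smoke-50vu \
-e NTS_URL=http://nts.example.com:8080/nts \
-e NTS_LOGIN=tester:secret \
-e LICENSE_ID=0123456789ABCDEF \
-e VU_MAX=50 \
-e DURATION_MAX=3600 \
neotys/neoload-controller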

Kafka FQDN in a container environment

Running Kafka in a container and trying to create a new pgsql container on the same host.
The pgsql container keeps exiting, and the logs indicate:
ERROR: Failed to connect to Kafka at kafka.domain, check the docker run -e KAFKA_FQDN= value
The Kafka container is built with the following attributes:
docker run -d \
--name=app_kafka \
-e KAFKA_FQDN=localhost \
-v /var/app/kafka:/data/kafka \
-p 2181:2181 -p 9092:9092 \
app/kafka
and the pgsql container with:
docker run -d --name app_psql \
-h app-psql \
-e KAFKA_FQDN=kafka.domain \
--add-host kafka.domain:172.17.0.1 \
-e MEM=16 \
--shm-size=512m \
-v /var/app/config:/config \
-v /var/app/postgres/main:/data/main \
-v /var/app/postgres/ts:/data/ts \
-p 5432:5432 -p 9005:9005 -p 8080:8080 \
app/postgres
If I'm using the docker0 IP address, the logs indicate no route to host; if I'm using the Kafka container's IP, I get connection refused.
I guess I'm missing something basic here that needs to be adapted to my environment, but I'm lacking the knowledge.
I will appreciate any assistance here.
You need to edit the container's hosts file. You can pass a script in the Dockerfile, like so:
COPY domain.sh .
ENTRYPOINT ["sh","domain.sh"]
And domain.sh:
#!/bin/sh
# print the names we are about to register, then append them to /etc/hosts
echo 'Environment container kafka is: kafka.domain'
echo 'PGSQL container is: pgsql.domain'
echo "127.0.0.1 kafka.domain" >> /etc/hosts
echo "127.0.0.1 pgsql.domain" >> /etc/hosts
Feel free to change the IPs or domains to your needs.
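As an alternative to rewriting /etc/hosts, a user-defined Docker bridge network lets containers resolve each other by container name through Docker's built-in DNS. A sketch, under the assumption that the images accept the broker hostname through KAFKA_FQDN (image and volume names taken from the question; the network name app-net is invented):
docker network create app-net

docker run -d --name app_kafka \
--network app-net \
-e KAFKA_FQDN=app_kafka \
-v /var/app/kafka:/data/kafka \
-p 2181:2181 -p 9092:9092 \
app/kafka

docker run -d --name app_psql \
--network app-net \
-e KAFKA_FQDN=app_kafka \
-v /var/app/config:/config \
-p 5432:5432 -p 9005:9005 -p 8080:8080 \
app/postgres
On a user-defined network (unlike the default bridge), app_psql can reach the broker simply as app_kafka, so no hard-coded IPs or --add-host entries are needed.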

Docker can't expose Mesos port 5050

I have a Mesos container running; the container has the port mapping 0.0.0.0:32772->5050/tcp.
If I run docker exec CONTAINER_ID curl 0.0.0.0:5050, I can see the thing I want. However, I can't access HOST_IP:32772. I've tried running nginx in the same container, and I can connect to that nginx server from the host, so I think it's a Mesos configuration problem? How can I fix it?
If I understand correctly, you're running your Mesos master(s) from a Docker container. You should use host networking instead of bridge networking.
These settings work, at least for me:
docker run \
--name=mesos_master \
--net=host \
-e MESOS_IP={YOUR_HOST_IP} \
-e MESOS_HOSTNAME={YOUR_HOST_IP} \
-e MESOS_CLUSTER=mesos-cluster \
-e MESOS_ZK=zk://{YOUR_ZK_SERVERS}/mesos \
-e MESOS_LOG_DIR=/var/log/mesos/master \
-e MESOS_WORK_DIR=/var/lib/mesos/master \
-e MESOS_QUORUM=2 \
mesosphere/mesos-master:0.27.1-2.0.226.ubuntu1404
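To sanity-check the host-networking setup, you can query the master's state endpoint from the host. With --net=host there is no port mapping at all, so the master answers on 5050 directly ({YOUR_HOST_IP} remains a placeholder as above; /master/state.json is the standard Mesos master state endpoint in that version range):
# run from the host, not via docker exec
curl http://{YOUR_HOST_IP}:5050/master/state.json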

How to store my docker registry in the file system

I want to set up a private registry behind an nginx server. To do that, I configured nginx with basic auth and started a docker container like this:
docker run -d \
-e STANDALONE=true \
-e INDEX_ENDPOINT=https://docker.example.com \
-e SETTINGS_FLAVOR=local \
-e STORAGE_PATH=/home/example/registry \
-p 5000:5000 \
registry
By doing that, I can log in to my registry and push/pull images... But if I stop the container and start it again, everything is lost. I would have expected my registry to be saved in /home/example/registry, but this is not the case. Can someone tell me what I missed?
I would have expected my registry to be saved in /home/example/registry but this is not the case
It is the case; it's just that the /home/example/registry directory is on the Docker container's file system, not the Docker host's file system.
If you run your container mounting one of your Docker host directories as a volume in the container, it will achieve what you want:
docker run -d \
-e STANDALONE=true \
-e INDEX_ENDPOINT=https://docker.example.com \
-e SETTINGS_FLAVOR=local \
-e STORAGE_PATH=/registry \
-p 5000:5000 \
-v /home/example/registry:/registry \
registry
Just make sure that /home/example/registry exists on the Docker host side.
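To confirm the volume mount actually persists data, something like the following should work (plain Docker commands; busybox is just an arbitrary test image, and the container ID is whatever docker run printed):
# push a test image to the registry
docker pull busybox
docker tag busybox localhost:5000/busybox
docker push localhost:5000/busybox

# restart the registry and check that the data survived on the host
docker restart <registry-container-id>
ls /home/example/registry
docker pull localhost:5000/busybox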
