schema-registry in Kafka not able to retrieve Cluster ID - Docker

I'm trying to set up a Kafka environment based on the Confluent images. After `docker-compose up` all my containers are up and running, but after about a minute the schema-registry container fails.
In the schema-registry log I found an error message explaining that it failed to get the Kafka cluster ID.
I checked the Kafka logs and found this:
"[2021-08-05 15:59:17,074] INFO Cluster ID = ddchQ8odQM-hF67TJO97Ng (kafka.server.KafkaServer)"
So the cluster ID is created correctly. It seems that schema-registry is not able to retrieve it, but I really don't understand what is happening here. I suspect a network issue and have tried many things to fix it, but without success.
Here is my docker-compose.yaml:
services:
  zookeeper:
    image: confluentinc/cp-zookeeper
    hostname: zookeeper
    container_name: zookeeper
    # networks:
    #   - my-network
    ports:
      - 2181:2181
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
    deploy:
      resources:
        limits:
          cpus: "1.00"
          memory: "1024M"
  kafka:
    image: confluentinc/cp-kafka
    container_name: kafka
    depends_on:
      - zookeeper
    # networks:
    #   - my-network
    ports:
      - 9092:9092
      - 30001:30001
    environment:
      # KAFKA_CREATE_TOPICS: toto
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:29092,PLAINTEXT_HOST://localhost:9092
      KAFKA_AUTO_CREATE_TOPICS_ENABLE: "true"
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 100
      KAFKA_JMX_PORT: 30001
      KAFKA_JMX_HOSTNAME: kafka
      KAFKA_CONFLUENT_SCHEMA_REGISTRY_URL: http://schema-registry:8081
    deploy:
      resources:
        limits:
          cpus: "1.00"
          memory: "2048M"
  kafka-jmx-exporter:
    build: ./materials/tools/prometheus-jmx-exporter
    container_name: jmx-exporter
    ports:
      - 8080:8080
    links:
      - kafka
    # networks:
    #   - my-network
    environment:
      JMX_PORT: 30001
      JMX_HOST: kafka
      HTTP_PORT: 8080
      JMX_EXPORTER_CONFIG_FILE: kafka.yml
    deploy:
      resources:
        limits:
          cpus: "1.00"
          memory: "1024M"
  prometheus:
    build: ./materials/tools/prometheus
    container_name: prometheus
    # networks:
    #   - my-network
    ports:
      - 9090:9090
  spark-master:
    container_name: spark-master
    build: ./materials/spark
    user: root
    # networks:
    #   - my-network
    volumes:
      - ./materials/spark/connectors:/connectors
      - ./materials/spark/scripts:/scripts/
      - ./materials/consumer:/scripts/consumer
      - ./secrets:/scripts/secrets
      - ./materials/spark/jars_dir:/opt/bitnami/spark/.ivy2:z
    ports:
      - 8085:8080
      - 7077:7077
      - 4040:4040
    environment:
      - INIT_DAEMON_STEP=setup_spark
      # - SPARK_MODE=master
      # - SPARK_RPC_AUTHENTICATION_ENABLED=no
      # - SPARK_RPC_ENCRYPTION_ENABLED=no
      # - SPARK_LOCAL_STORAGE_ENCRYPTION_ENABLED=no
      # - SPARK_SSL_ENABLED=no
    deploy:
      resources:
        limits:
          cpus: "1.00"
          memory: "1024M"
  spark-worker-1:
    container_name: spark-worker-1
    build: ./materials/spark
    user: root
    # networks:
    #   - my-network
    depends_on:
      - spark-master
    ports:
      - 8083:8085
      - 4041:4040
    environment:
      - "SPARK_MASTER=spark://spark-master:7077"
      - SPARK_MODE=worker
      - SPARK_MASTER_URL=spark://spark-master:7077
      - SPARK_WORKER_MEMORY=1G
      - SPARK_WORKER_CORES=1
      - SPARK_RPC_AUTHENTICATION_ENABLED=no
      - SPARK_RPC_ENCRYPTION_ENABLED=no
      - SPARK_LOCAL_STORAGE_ENCRYPTION_ENABLED=no
      - SPARK_SSL_ENABLED=no
    deploy:
      resources:
        limits:
          cpus: "1.00"
          memory: "2048M"
        reservations:
          cpus: "1.00"
          memory: "1024M"
  schema-registry:
    image: confluentinc/cp-schema-registry
    hostname: schema-registry
    container_name: schema-registry
    #command: /bin/sh -c 'tail -f /dev/null'
    command: /bin/schema-registry-start /etc/schema-registry/schema-registry.properties
    depends_on:
      - kafka
    ports:
      - 8081:8081
    # networks:
    #   - my-network
    environment:
      # SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: kafka:29092
      SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: kafka-1:9092
      SCHEMA_REGISTRY_HOST_NAME: schema-registry
      SCHEMA_REGISTRY_LISTENERS: http://0.0.0.0:8081
      SCHEMA_REGISTRY_DEBUG: "true"
      SCHEMA_REGISTRY_KAFKASTORE.INIT.TIMEOUT.MS: 120000
    deploy:
      resources:
        limits:
          cpus: "1.00"
          memory: "2048M"
  producer:
    build: ./materials/producer
    container_name: producer
    depends_on:
      - kafka
    # networks:
    #   - my-network
    environment:
      KAFKA_BROKER_URL: kafka-1:9092
      TRANSACTIONS_PER_SECOND: 30
  kafkastream:
    build: ./materials/kafkastream
    container_name: kafkastream
    depends_on:
      - kafka
    # networks:
    #   - my-network
    environment:
      KAFKA_BROKER_URL: kafka-1:9092
      TRANSACTIONS_PER_SECOND: 5
  rest-proxy:
    image: confluentinc/cp-kafka-rest
    depends_on:
      - kafka
      - schema-registry
    # networks:
    #   - my-network
    ports:
      - 8082:8082
    hostname: rest-proxy
    container_name: rest-proxy
    #command: /bin/kafka-rest-start
    environment:
      KAFKA_REST_HOST_NAME: rest-proxy
      KAFKA_REST_BOOTSTRAP_SERVERS: kafka:29092
      KAFKA_REST_LISTENERS: http://0.0.0.0:8082
      KAFKA_REST_SCHEMA_REGISTRY_URL: http://schema-registry:8081
#networks:
#  my-network:
#    external: false
My last attempt was to completely remove the custom network from the docker-compose file, which is why all the network-related lines are commented out above.
Any hint or idea would be appreciated.
Thanks

I finally found the solution. My mistake was adding the following line to my docker-compose.yml file: `command: /bin/schema-registry-start /etc/schema-registry/schema-registry.properties`. With that override, schema-registry starts from the default configuration in the schema-registry.properties file, which is of course not suitable for my local installation, and it ignores all the environment variables passed in docker-compose.yaml.
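Based on that finding, a minimal sketch of the fixed service definition: drop the `command:` override so the image's entrypoint renders its configuration from the `SCHEMA_REGISTRY_*` environment variables, and use the internal advertised listener (the value that is commented out in the compose file above):

```yaml
schema-registry:
  image: confluentinc/cp-schema-registry
  hostname: schema-registry
  container_name: schema-registry
  # no "command:" line -- the image's default entrypoint generates
  # schema-registry.properties from the SCHEMA_REGISTRY_* variables below
  depends_on:
    - kafka
  ports:
    - 8081:8081
  environment:
    SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: kafka:29092
    SCHEMA_REGISTRY_HOST_NAME: schema-registry
    SCHEMA_REGISTRY_LISTENERS: http://0.0.0.0:8081
```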

`PLAINTEXT_HOST://localhost:9092` is not reachable from inside another container; in `SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS`, change the hostname (there is no `kafka-1` service in this compose file) and use the internal listener `kafka:29092` instead.
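Why `localhost:9092` fails from another container can be sketched with the listener map from the compose file above. This is a toy model, not the Kafka protocol: the broker advertises a per-listener address, and the client must be able to resolve whichever one it is handed.

```python
# Toy model of Kafka advertised listeners (hypothetical helper, not a Kafka API).
ADVERTISED = {
    "PLAINTEXT": "kafka:29092",          # reachable from other containers
    "PLAINTEXT_HOST": "localhost:9092",  # reachable from the host machine only
}

def address_seen_by(client_location: str, listener: str) -> str:
    """Return the advertised address a client gets, flagging unusable ones."""
    addr = ADVERTISED[listener]
    host = addr.split(":")[0]
    # Inside a container, "localhost" is the container itself, not the broker.
    reachable = not (client_location == "container" and host == "localhost")
    return addr if reachable else f"{addr} (UNREACHABLE from a container)"

# schema-registry runs in a container, so it must bootstrap via the internal listener:
print(address_seen_by("container", "PLAINTEXT"))       # kafka:29092
print(address_seen_by("container", "PLAINTEXT_HOST"))  # flagged unreachable
print(address_seen_by("host", "PLAINTEXT_HOST"))       # localhost:9092
```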

Related

Docker does not link virtual interfaces to the virtual network (bridge)

When I create and start all my instances everything looks fine, but even though routing appears correct, my instances still cannot communicate with each other. I used this command for each instance:
ip link set <interface> master <bridge>
e.g.
ip link set vethb3735ba@if14 master br-bdf6dd295e3a
Here are my operating system details:
Linux arch 6.0.2-arch1-1 #1 SMP PREEMPT_DYNAMIC Sat, 15 Oct 2022 14:00:49 +0000 x86_64 GNU/Linux
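Note that Docker normally enslaves each container's veth to the network's bridge by itself; needing to re-master interfaces by hand usually means something else is broken (e.g. conflicting firewall/iptables rules). A sketch of how to check which bridge device backs a compose network (`<project>` is a placeholder for your compose project name):

```shell
# List Docker networks; bridge device names are "br-" + the first 12
# characters of the network id shown by inspect.
docker network ls
docker network inspect <project>_postgres --format '{{.Id}}'
ip link show type bridge
```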
services:
  postgres:
    container_name: postgres
    image: postgres
    environment:
      POSTGRES_USER: amigoscode
      POSTGRES_PASSWORD: password
      PGDATA: /data/postgres
    volumes:
      - postgres:/data/postgres
    ports:
      - "5432:5432"
    networks:
      - postgres
    restart: unless-stopped
    deploy:
      resources:
        limits:
          cpus: '0.5'
          memory: 200M
  pgadmin:
    container_name: pgadmin
    image: dpage/pgadmin4
    environment:
      PGADMIN_DEFAULT_EMAIL: ${PGADMIN_DEFAULT_EMAIL:-pgadmin4@pgadmin.org}
      PGADMIN_DEFAULT_PASSWORD: ${PGADMIN_DEFAULT_PASSWORD:-admin}
      PGADMIN_CONFIG_SERVER_MODE: 'False'
    volumes:
      - pgadmin:/var/lib/pgadmin
    ports:
      - "5050:80"
    networks:
      - postgres
    restart: unless-stopped
    deploy:
      resources:
        limits:
          cpus: '0.5'
          memory: 200M
  zipkin:
    image: openzipkin/zipkin
    container_name: zipkin
    ports:
      - "9411:9411"
    networks:
      - spring
    deploy:
      resources:
        limits:
          cpus: '0.5'
          memory: 200M
  rabbitmq:
    image: rabbitmq:3.9.11-management-alpine
    container_name: rabbitmq
    ports:
      - "5672:5672"
      - "15672:15672"
    networks:
      - spring
    deploy:
      resources:
        limits:
          cpus: '0.5'
          memory: 200M
  eureka-server:
    image: huseyinafsin/eureka-server:latest
    container_name: eureka-server
    ports:
      - "8761:8761"
    environment:
      - SPRING_PROFILES_ACTIVE=docker
    networks:
      - spring
    depends_on:
      - zipkin
    deploy:
      resources:
        limits:
          cpus: '0.5'
          memory: 200M
  apigw:
    image: huseyinafsin/apigw:latest
    container_name: apigw
    ports:
      - "8083:8083"
    environment:
      - SPRING_PROFILES_ACTIVE=docker
    networks:
      - spring
    depends_on:
      - zipkin
      - eureka-server
    deploy:
      resources:
        limits:
          cpus: '0.5'
          memory: 200M
  customer:
    image: huseyinafsin/customer:latest
    container_name: customer
    ports:
      - "8080:8080"
    environment:
      - SPRING_PROFILES_ACTIVE=docker
    networks:
      - spring
      - postgres
    depends_on:
      - zipkin
      - postgres
      - rabbitmq
      - eureka-server
    deploy:
      resources:
        limits:
          cpus: '0.5'
          memory: 200M
  fraud:
    image: huseyinafsin/fraud:latest
    container_name: fraud
    ports:
      - "8081:8081"
    environment:
      - SPRING_PROFILES_ACTIVE=docker
    networks:
      - spring
      - postgres
    depends_on:
      - zipkin
      - postgres
      - rabbitmq
      - eureka-server
    deploy:
      resources:
        limits:
          cpus: '0.5'
          memory: 200M
  notification:
    image: huseyinafsin/notification:latest
    container_name: notification
    ports:
      - "8082:8082"
    environment:
      - SPRING_PROFILES_ACTIVE=docker
    networks:
      - spring
      - postgres
    depends_on:
      - zipkin
      - postgres
      - rabbitmq
      - eureka-server
    deploy:
      resources:
        limits:
          cpus: '0.5'
          memory: 200M
networks:
  postgres:
    driver: bridge
  spring:
    driver: bridge
volumes:
  postgres:
  pgadmin:

Replicate a Standard_DS3_v2 Databricks cluster in Docker

I want to replicate a Standard_DS3_v2 Databricks cluster (14 GB, 4 cores) in Docker.
Would I just change spark-worker-1 and spark-worker-2 in the compose file from 512 MB each to 7 GB, and from 1 core to 2? Would this result in the exact same configuration as the Standard_DS3_v2 Databricks cluster?
Below is the Docker config:
version: "3.6"
volumes:
  shared-workspace:
    name: "hadoop-distributed-file-system"
    driver: local
services:
  jupyterlab:
    image: jupyterlab
    container_name: jupyterlab
    ports:
      - 8888:8888
    volumes:
      - shared-workspace:/opt/workspace
  spark-master:
    image: spark-master
    container_name: spark-master
    ports:
      - 8080:8080
      - 7077:7077
    volumes:
      - shared-workspace:/opt/workspace
  spark-worker-1:
    image: spark-worker
    container_name: spark-worker-1
    environment:
      - SPARK_WORKER_CORES=1
      - SPARK_WORKER_MEMORY=512m
    ports:
      - 8081:8081
    volumes:
      - shared-workspace:/opt/workspace
    depends_on:
      - spark-master
  spark-worker-2:
    image: spark-worker
    container_name: spark-worker-2
    environment:
      - SPARK_WORKER_CORES=1
      - SPARK_WORKER_MEMORY=512m
    ports:
      - 8082:8081
    volumes:
      - shared-workspace:/opt/workspace
    depends_on:
      - spark-master
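Assuming the `spark-worker` image honors `SPARK_WORKER_CORES`/`SPARK_WORKER_MEMORY`, the change described in the question would look like the fragment below. Note this only matches DS3_v2 in worker totals (4 cores, 14 GB); it says nothing about CPU generation, disk, network, or driver/executor JVM sizing, so it is not "exactly the same" cluster:

```yaml
spark-worker-1:
  image: spark-worker
  container_name: spark-worker-1
  environment:
    - SPARK_WORKER_CORES=2      # was 1
    - SPARK_WORKER_MEMORY=7g    # was 512m
  # ports, volumes, depends_on unchanged; apply the same change to spark-worker-2
```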

Why is http://localhost:9021/ not opening the Confluent Control Center?

I am working on the Confluent Admin training and running the labs in Docker Desktop. Please find the docker-compose.yaml file below.
The Confluent Control Center doesn't open in the browser at http://localhost:9021. Earlier it used to open, but not any more. The only change I have made on my computer is installing McAfee LiveSafe. I even tried turning off the firewall, but that didn't help either.
Can someone please share whether you have had a similar experience and how you overcame this issue?
docker-compose.yaml file:
version: "3.5"
services:
  zk-1:
    image: confluentinc/cp-zookeeper:5.3.1
    hostname: zk-1
    container_name: zk-1
    ports:
      - "12181:2181"
    volumes:
      - data-zk-log-1:/var/lib/zookeeper/log
      - data-zk-data-1:/var/lib/zookeeper/data
    networks:
      - confluent
    environment:
      - ZOOKEEPER_SERVER_ID=1
      - ZOOKEEPER_CLIENT_PORT=2181
      - ZOOKEEPER_TICK_TIME=2000
      - ZOOKEEPER_INIT_LIMIT=5
      - ZOOKEEPER_SYNC_LIMIT=2
      - ZOOKEEPER_SERVERS=zk-1:2888:3888;zk-2:2888:3888;zk-3:2888:3888
  zk-2:
    image: confluentinc/cp-zookeeper:5.3.1
    hostname: zk-2
    container_name: zk-2
    ports:
      - "22181:2181"
    volumes:
      - data-zk-log-2:/var/lib/zookeeper/log
      - data-zk-data-2:/var/lib/zookeeper/data
    networks:
      - confluent
    environment:
      - ZOOKEEPER_SERVER_ID=2
      - ZOOKEEPER_CLIENT_PORT=2181
      - ZOOKEEPER_TICK_TIME=2000
      - ZOOKEEPER_INIT_LIMIT=5
      - ZOOKEEPER_SYNC_LIMIT=2
      - ZOOKEEPER_SERVERS=zk-1:2888:3888;zk-2:2888:3888;zk-3:2888:3888
  zk-3:
    image: confluentinc/cp-zookeeper:5.3.1
    hostname: zk-3
    container_name: zk-3
    ports:
      - "32181:2181"
    volumes:
      - data-zk-log-3:/var/lib/zookeeper/log
      - data-zk-data-3:/var/lib/zookeeper/data
    networks:
      - confluent
    environment:
      - ZOOKEEPER_SERVER_ID=3
      - ZOOKEEPER_CLIENT_PORT=2181
      - ZOOKEEPER_TICK_TIME=2000
      - ZOOKEEPER_INIT_LIMIT=5
      - ZOOKEEPER_SYNC_LIMIT=2
      - ZOOKEEPER_SERVERS=zk-1:2888:3888;zk-2:2888:3888;zk-3:2888:3888
  kafka-1:
    image: confluentinc/cp-enterprise-kafka:5.3.1
    hostname: kafka-1
    container_name: kafka-1
    ports:
      - "19092:9092"
    networks:
      - confluent
    volumes:
      - data-kafka-1:/var/lib/kafka/data
    environment:
      KAFKA_BROKER_ID: 101
      KAFKA_ZOOKEEPER_CONNECT: zk-1:2181,zk-2:2181,zk-3:2181
      KAFKA_LISTENERS: DOCKER://kafka-1:9092,HOST://kafka-1:19092
      KAFKA_ADVERTISED_LISTENERS: DOCKER://kafka-1:9092,HOST://localhost:19092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: DOCKER:PLAINTEXT,HOST:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: DOCKER
      KAFKA_METRIC_REPORTERS: "io.confluent.metrics.reporter.ConfluentMetricsReporter"
      CONFLUENT_METRICS_REPORTER_BOOTSTRAP_SERVERS: "kafka-1:9092,kafka-2:9092,kafka-3:9092"
  kafka-2:
    image: confluentinc/cp-enterprise-kafka:5.3.1
    hostname: kafka-2
    container_name: kafka-2
    ports:
      - "29092:9092"
    networks:
      - confluent
    volumes:
      - data-kafka-2:/var/lib/kafka/data
    environment:
      KAFKA_BROKER_ID: 102
      KAFKA_ZOOKEEPER_CONNECT: zk-1:2181,zk-2:2181,zk-3:2181
      KAFKA_LISTENERS: DOCKER://kafka-2:9092,HOST://kafka-2:29092
      KAFKA_ADVERTISED_LISTENERS: DOCKER://kafka-2:9092,HOST://localhost:29092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: DOCKER:PLAINTEXT,HOST:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: DOCKER
      KAFKA_METRIC_REPORTERS: "io.confluent.metrics.reporter.ConfluentMetricsReporter"
      CONFLUENT_METRICS_REPORTER_BOOTSTRAP_SERVERS: "kafka-1:9092,kafka-2:9092,kafka-3:9092"
  kafka-3:
    image: confluentinc/cp-enterprise-kafka:5.3.1
    hostname: kafka-3
    container_name: kafka-3
    ports:
      - "39092:9092"
    networks:
      - confluent
    volumes:
      - data-kafka-3:/var/lib/kafka/data
    environment:
      KAFKA_BROKER_ID: 103
      KAFKA_ZOOKEEPER_CONNECT: zk-1:2181,zk-2:2181,zk-3:2181
      KAFKA_LISTENERS: DOCKER://kafka-3:9092,HOST://kafka-3:39092
      KAFKA_ADVERTISED_LISTENERS: DOCKER://kafka-3:9092,HOST://localhost:39092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: DOCKER:PLAINTEXT,HOST:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: DOCKER
      KAFKA_METRIC_REPORTERS: "io.confluent.metrics.reporter.ConfluentMetricsReporter"
      CONFLUENT_METRICS_REPORTER_BOOTSTRAP_SERVERS: "kafka-1:9092,kafka-2:9092,kafka-3:9092"
  schema-registry:
    image: confluentinc/cp-schema-registry:5.3.1
    hostname: schema-registry
    container_name: schema-registry
    ports:
      - "8081:8081"
    networks:
      - confluent
    environment:
      SCHEMA_REGISTRY_HOST_NAME: schema-registry
      SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: "kafka-1:9092,kafka-2:9092,kafka-3:9092"
      SCHEMA_REGISTRY_LISTENERS: "http://schema-registry:8081,http://localhost:8081"
      # Uses incorrect container utility belt (CUB) environment variables due to bug.
      # See https://github.com/confluentinc/cp-docker-images/issues/807. A fix was merged that
      # will be available in the CP 5.4 image.
      KAFKA_REST_CUB_KAFKA_TIMEOUT: 120
      KAFKA_REST_CUB_KAFKA_MIN_BROKERS: 3
  connect:
    image: confluentinc/cp-kafka-connect:5.3.1
    hostname: connect
    container_name: connect
    ports:
      - "8083:8083"
    volumes:
      - ./data:/data
    networks:
      - confluent
    environment:
      CONNECT_PRODUCER_INTERCEPTOR_CLASSES: io.confluent.monitoring.clients.interceptor.MonitoringProducerInterceptor
      CONNECT_CONSUMER_INTERCEPTOR_CLASSES: io.confluent.monitoring.clients.interceptor.MonitoringConsumerInterceptor
      CONNECT_BOOTSTRAP_SERVERS: kafka-1:9092,kafka-2:9092,kafka-3:9092
      CONNECT_GROUP_ID: "connect"
      CONNECT_CONFIG_STORAGE_TOPIC: "connect-config"
      CONNECT_OFFSET_STORAGE_TOPIC: "connect-offsets"
      CONNECT_STATUS_STORAGE_TOPIC: "connect-status"
      CONNECT_KEY_CONVERTER: "io.confluent.connect.avro.AvroConverter"
      CONNECT_VALUE_CONVERTER: "io.confluent.connect.avro.AvroConverter"
      CONNECT_KEY_CONVERTER_SCHEMA_REGISTRY_URL: "http://schema-registry:8081"
      CONNECT_VALUE_CONVERTER_SCHEMA_REGISTRY_URL: "http://schema-registry:8081"
      CONNECT_INTERNAL_KEY_CONVERTER: org.apache.kafka.connect.json.JsonConverter
      CONNECT_INTERNAL_VALUE_CONVERTER: org.apache.kafka.connect.json.JsonConverter
      CONNECT_REST_ADVERTISED_HOST_NAME: "connect"
      CONNECT_LOG4J_ROOT_LOGLEVEL: INFO
      CONNECT_LOG4J_LOGGERS: org.reflections=ERROR
      CONNECT_PLUGIN_PATH: /usr/share/java
      CONNECT_REST_HOST_NAME: "connect"
      CONNECT_REST_PORT: 8083
      CONNECT_CUB_KAFKA_TIMEOUT: 120
  ksql-server:
    image: confluentinc/cp-ksql-server:5.3.1
    hostname: ksql-server
    container_name: ksql-server
    ports:
      - "8088:8088"
    networks:
      - confluent
    environment:
      KSQL_CONFIG_DIR: "/etc/ksql"
      KSQL_LOG4J_OPTS: "-Dlog4j.configuration=file:/etc/ksql/log4j-rolling.properties"
      KSQL_BOOTSTRAP_SERVERS: kafka-1:9092,kafka-2:9092,kafka-3:9092
      KSQL_HOST_NAME: ksql-server
      KSQL_APPLICATION_ID: "etl-demo"
      KSQL_LISTENERS: "http://0.0.0.0:8088"
      # Set the buffer cache to 0 so that the KSQL CLI shows all updates to KTables for learning purposes.
      # The default is 10 MB, which means records in a KTable are compacted before showing output.
      # Change cache.max.bytes.buffering and commit.interval.ms to tune this behavior.
      KSQL_CACHE_MAX_BYTES_BUFFERING: 0
      KSQL_KSQL_SCHEMA_REGISTRY_URL: "http://schema-registry:8081"
      KSQL_PRODUCER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringProducerInterceptor"
      KSQL_CONSUMER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringConsumerInterceptor"
  control-center:
    image: confluentinc/cp-enterprise-control-center:5.3.1
    hostname: control-center
    container_name: control-center
    restart: always
    networks:
      - confluent
    ports:
      - "9021:9021"
    environment:
      CONTROL_CENTER_BOOTSTRAP_SERVERS: kafka-1:9092,kafka-2:9092,kafka-3:9092
      CONTROL_CENTER_ZOOKEEPER_CONNECT: zk-1:2181,zk-2:2181,zk-3:2181
      CONTROL_CENTER_STREAMS_NUM_STREAM_THREADS: 4
      CONTROL_CENTER_REPLICATION_FACTOR: 3
      CONTROL_CENTER_CONNECT_CLUSTER: "connect:8083"
      CONTROL_CENTER_KSQL_URL: "http://ksql-server:8088"
      CONTROL_CENTER_KSQL_ADVERTISED_URL: "http://localhost:8088"
      CONTROL_CENTER_SCHEMA_REGISTRY_URL: "http://schema-registry:8081"
  tools:
    image: cnfltraining/training-tools:5.3
    hostname: tools
    container_name: tools
    volumes:
      - ${PWD}/:/apps
    working_dir: /apps
    networks:
      - confluent
    command: /bin/bash
    tty: true
volumes:
  data-zk-log-1:
  data-zk-data-1:
  data-zk-log-2:
  data-zk-data-2:
  data-zk-log-3:
  data-zk-data-3:
  data-kafka-1:
  data-kafka-2:
  data-kafka-3:
networks:
  confluent:
All the Docker containers are up and running; all the respective Confluent services are up.
Thanks!!
I finally got an answer to this from Confluent Support.
The version of Control Center in the labs expires after 30 days.
This can be reset by removing all the containers and volumes on the PC.
`docker-compose down -v` will stop and remove all the containers and volumes.
Re-run `docker-compose up -d`.
Then give it a minute or two before opening Control Center in a browser.
P.S. Docker should be given at least 6 GB of memory to run all the containers.
Thanks.
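The reset steps above as commands, run from the directory containing the docker-compose.yaml (note `-v` deletes all the project's data volumes, so topics and configs are lost):

```shell
docker-compose down -v   # stop and remove containers, networks, and volumes
docker-compose up -d     # recreate the stack from scratch
# wait a minute or two, then open http://localhost:9021
```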

Cannot connect to Landoop Docker container on WSL2

I'm running a Landoop (image) container using Docker on Windows 10 Home with WSL2.
I am running a docker-compose.yaml file with multiple services:
#
# This docker-compose file starts and runs:
# * A 3-node kafka cluster
# * A 1-zookeeper ensemble
# * Schema Registry
# * Kafka REST Proxy
# * Kafka Connect
#
version: '3.7'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:5.2.2
    ports:
      - "2181:2181"
    environment:
      ZOOKEEPER_CLIENT_PORT: "2181"
  kafka0:
    image: confluentinc/cp-kafka:5.2.2
    ports:
      - "9092:9092"
    environment:
      KAFKA_BROKER_ID: 0
      KAFKA_ZOOKEEPER_CONNECT: "zookeeper:2181"
      KAFKA_ADVERTISED_LISTENERS: "INTERNAL://kafka0:19092,EXTERNAL://${DOCKER_HOST_IP:-127.0.0.1}:9092"
      KAFKA_INTER_BROKER_LISTENER_NAME: "INTERNAL"
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: "INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT"
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: "1"
    depends_on:
      - "zookeeper"
  schema-registry:
    image: confluentinc/cp-schema-registry:5.2.2
    ports:
      - "8081:8081"
    environment:
      SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: "PLAINTEXT://kafka0:19092"
      SCHEMA_REGISTRY_LISTENERS: "http://0.0.0.0:8081"
      SCHEMA_REGISTRY_HOST_NAME: "schema-registry"
      SCHEMA_REGISTRY_KAFKASTORE_TOPIC_REPLICATION_FACTOR: "1"
    depends_on:
      - "kafka0"
  rest-proxy:
    image: confluentinc/cp-kafka-rest:5.2.2
    ports:
      - "8082:8082"
    environment:
      KAFKA_REST_BOOTSTRAP_SERVERS: "PLAINTEXT://kafka0:19092"
      KAFKA_REST_LISTENERS: "http://0.0.0.0:8082/"
      KAFKA_REST_HOST_NAME: "rest-proxy"
      KAFKA_REST_SCHEMA_REGISTRY_URL: "http://schema-registry:8081/"
    depends_on:
      - "kafka0"
      - "schema-registry"
  connect:
    image: confluentinc/cp-kafka-connect:5.2.2
    ports:
      - "8083:8083"
    environment:
      CONNECT_BOOTSTRAP_SERVERS: "PLAINTEXT://kafka0:19092"
      CONNECT_GROUP_ID: "connect"
      CONNECT_REST_ADVERTISED_HOST_NAME: "connect"
      CONNECT_PLUGIN_PATH: "/usr/share/java"
      CONNECT_INTERNAL_KEY_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
      CONNECT_INTERNAL_VALUE_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
      CONNECT_KEY_CONVERTER: "io.confluent.connect.avro.AvroConverter"
      CONNECT_KEY_CONVERTER_SCHEMA_REGISTRY_URL: "http://schema-registry:8081"
      CONNECT_VALUE_CONVERTER: "io.confluent.connect.avro.AvroConverter"
      CONNECT_VALUE_CONVERTER_SCHEMA_REGISTRY_URL: "http://schema-registry:8081"
      CONNECT_CONFIG_STORAGE_TOPIC: "connect-config"
      CONNECT_OFFSET_STORAGE_TOPIC: "connect-offset"
      CONNECT_STATUS_STORAGE_TOPIC: "connect-status"
      CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: "1"
      CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: "1"
      CONNECT_STATUS_STORAGE_REPLICATION_FACTOR: "1"
    depends_on:
      - "kafka0"
      - "schema-registry"
  ksql:
    image: confluentinc/cp-ksql-server:5.2.2
    ports:
      - "8088:8088"
    environment:
      KSQL_BOOTSTRAP_SERVERS: "PLAINTEXT://kafka0:19092"
      KSQL_LISTENERS: "http://0.0.0.0:8088"
      KSQL_KSQL_SERVICE_ID: "ksql_service_docker"
      KSQL_KSQL_SCHEMA_REGISTRY_URL: "http://schema-registry:8081/"
    depends_on:
      - "kafka0"
      - "schema-registry"
  connect-ui:
    image: landoop/kafka-connect-ui:0.9.7
    ports:
      - "8084:8084"
    environment:
      PORT: "8084"
      PROXY: "true"
      CONNECT_URL: "http://connect:8083"
    depends_on:
      - "connect"
  topics-ui:
    image: landoop/kafka-topics-ui:0.9.4
    ports:
      - "8085:8085"
    environment:
      PORT: "8085"
      PROXY: "true"
      KAFKA_REST_PROXY_URL: "http://rest-proxy:8082"
    depends_on:
      - "rest-proxy"
  schema-registry-ui:
    image: landoop/schema-registry-ui:0.9.5
    ports:
      - "8086:8086"
    environment:
      PORT: "8086"
      PROXY: "true"
      SCHEMAREGISTRY_URL: "http://schema-registry:8081/"
    depends_on:
      - "schema-registry"
  postgres:
    image: postgres:11
    ports:
      - "5432:5432"
    environment:
      POSTGRES_USER: "cta_admin"
      POSTGRES_PASSWORD: "chicago"
      POSTGRES_DB: "cta"
    volumes:
      - ./producers/data/cta_stations.csv:/tmp/cta_stations.csv
      - ./load_stations.sql:/docker-entrypoint-initdb.d/load_stations.sql
For now I am only running services up to connect-ui, which should be bound to port 8084 according to the configuration file.
When I access the other services, like kafka0 or schema-registry, I can see output on the corresponding ports via localhost:<port>.
However, when I go to localhost:8084, which should bring up the Landoop interface, I get ERR_CONNECTION_REFUSED.
I faced the same problem and found that running the Docker commands (or any other command that creates listeners) with sudo solved it. Adding the current user to the docker group made no difference.
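A quick way to distinguish "nothing is listening on the port" (which produces ERR_CONNECTION_REFUSED) from a firewall or WSL2 port-forwarding problem is to probe the port directly, e.g. from inside the WSL2 distro and then from Windows. A generic sketch:

```python
import socket

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. port_open("localhost", 8084) -- False means nothing accepted the
# connection, matching the browser's ERR_CONNECTION_REFUSED
```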

Portainer in Docker swarm stack with Traefik refuses to connect

I am trying to include Portainer in a Docker swarm stack consisting of WordPress + MySQL and Traefik (reverse proxy). I am using the following definition:
version: '3'
services:
  traefik:
    image: "traefik:v2.0.0-rc3"
    command:
      - "--api.insecure=true"
      - "--providers.docker=true"
      - "--providers.docker.swarmmode=true"
      - "--providers.docker.exposedbydefault=false"
      - "--entrypoints.web.address=:80"
    ports:
      - "80:80"
      - "8080:8080"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      - traefik
    deploy:
      mode: global
      placement:
        constraints: [node.role==manager]
  portainer:
    image: portainer/portainer:latest
    command: -H unix:///var/run/docker.sock
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./portainer:/data
    networks:
      - traefik
    deploy:
      mode: replicated
      replicas: 1
      placement:
        constraints: [node.role==manager]
      restart_policy:
        condition: on-failure
        delay: 5s
        max_attempts: 3
      labels:
        - "traefik.enable=true"
        - "traefik.http.routers.portainer.entrypoints=web"
  db:
    image: mysql:5.7
    volumes:
      - ./db/initdb.d:/docker-entrypoint-initdb.d
    networks:
      - traefik
    environment:
      MYSQL_ROOT_PASSWORD: <root_password>
      MYSQL_DATABASE: <db_name>
      MYSQL_USER: <db_user>
      MYSQL_PASSWORD: <user_password>
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure
        delay: 5s
        max_attempts: 3
  app:
    image: my-repo/wordpress:latest
    networks:
      - traefik
    deploy:
      replicas: 2
      restart_policy:
        condition: on-failure
        delay: 5s
        max_attempts: 3
      labels:
        - "traefik.enable=true"
        - "traefik.http.routers.app.rule=Host(`example.org`)"
        - "traefik.http.routers.app.entrypoints=web"
        - "traefik.http.services.app.loadbalancer.server.port=80"
networks:
  traefik:
Everything works except portainer. When I visit localhost:9000 I just get a refused connection. The following non-swarm-mode docker-compose works, however:
version: '3'
services:
  traefik:
    image: "traefik:v2.0.0-rc3"
    container_name: "traefik"
    restart: always
    command:
      - "--api.insecure=true"
      - "--providers.docker=true"
      - "--providers.docker.exposedbydefault=false"
      - "--entrypoints.web.address=:80"
    ports:
      - "80:80"
      - "8080:8080"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      - traefik
  portainer:
    image: portainer/portainer
    command: -H unix:///var/run/docker.sock
    restart: always
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./portainer:/data
    ports:
      - "9000:9000"
      - "8000:8000"
    networks:
      - traefik
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.portainer.entrypoints=web"
  db:
    image: mysql:5.7
    restart: always
    volumes:
      - ./db/initdb.d:/docker-entrypoint-initdb.d
    networks:
      - traefik
    environment:
      MYSQL_ROOT_PASSWORD: <root_password>
      MYSQL_DATABASE: <db_name>
      MYSQL_USER: <db_user>
      MYSQL_PASSWORD: <user_password>
  app:
    image: my-repo/wordpress:latest
    restart: always
    depends_on:
      - db
    networks:
      - traefik
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.app.rule=Host(`example.org`)"
      - "traefik.http.routers.app.entrypoints=web"
networks:
  traefik:
What am I doing wrong? The logs are the same in each case. In non-swarm mode I can log in to the Portainer UI and see all my containers running, etc. But the swarm version simply refuses to connect, even when I add a Host rule (portainer.example.org). I have only been using Traefik for a few days and am very likely making a simple configuration error (hopefully!).
Port detection
Docker Swarm does not provide any port-detection information to Traefik.
Therefore you must specify the port to use for communication with the label `traefik.http.services.<service_name>.loadbalancer.server.port` (check the reference for this label in the routing section for Docker). Note also that the swarm version publishes no ports for portainer, so localhost:9000 will refuse the connection; in swarm mode the service must be reached through Traefik.
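Applied to the swarm stack above, that means adding the port label (and a router rule) under the portainer service's `deploy.labels`. A sketch, assuming the default Portainer UI port 9000 and a hypothetical hostname:

```yaml
portainer:
  # ... image, command, volumes, networks as before ...
  deploy:
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.portainer.rule=Host(`portainer.example.org`)"
      - "traefik.http.routers.portainer.entrypoints=web"
      # Required in swarm mode: tell Traefik which container port to target
      - "traefik.http.services.portainer.loadbalancer.server.port=9000"
```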
