I am trying to run a Kafka-Spark streaming application on Docker. Below is my project structure.
Dockerfile contents:
FROM gcr.io/datamechanics/spark:platform-3.1-dm14
ENV PYSPARK_MAJOR_PYTHON_VERSION=3
WORKDIR /opt/application/
RUN wget https://jdbc.postgresql.org/download/postgresql-42.2.5.jar
RUN mv postgresql-42.2.5.jar /opt/spark/jars
COPY requirements.txt .
RUN pip3 install -r requirements.txt
COPY main.py .
docker-compose.yml contents:
version: "2"
services:
  spark:
    image: docker.io/bitnami/spark:3
    environment:
      - SPARK_MODE=master
      - SPARK_RPC_AUTHENTICATION_ENABLED=no
      - SPARK_RPC_ENCRYPTION_ENABLED=no
      - SPARK_LOCAL_STORAGE_ENCRYPTION_ENABLED=no
      - SPARK_SSL_ENABLED=no
    ports:
      - '8080:8080'
  spark-worker:
    image: docker.io/bitnami/spark:3
    environment:
      - SPARK_MODE=worker
      - SPARK_MASTER_URL=spark://spark:7077
      - SPARK_WORKER_MEMORY=1G
      - SPARK_WORKER_CORES=1
      - SPARK_RPC_AUTHENTICATION_ENABLED=no
      - SPARK_RPC_ENCRYPTION_ENABLED=no
      - SPARK_LOCAL_STORAGE_ENCRYPTION_ENABLED=no
      - SPARK_SSL_ENABLED=no
  zookeeper:
    image: confluentinc/cp-zookeeper:latest
    container_name: zookeeper
    ports:
      - "22181:22181"
    environment:
      - ALLOW_ANONYMOUS_LOGIN=yes
      - ZOOKEEPER_CLIENT_PORT=2181
      - ZOOKEEPER_TICK_TIME=2000
  kafka-server1:
    image: confluentinc/cp-kafka:latest
    ports:
      - '9092:9092'
    environment:
      - KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper-server:2181
      - KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://127.0.0.1:9092
      - ALLOW_PLAINTEXT_LISTENER=yes
    depends_on:
      - zookeeper-server
I was able to pull the images and create containers successfully.
(venv) (base) johnny@Johnnys-MBP~/PyCharmProjects/dockerpractice/Johnny> docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d47d4b091b2c confluentinc/cp-zookeeper:latest "/etc/confluent/dock…" 16 minutes ago Up 12 minutes 2181/tcp, 2888/tcp, 3888/tcp, 0.0.0.0:22181->22181/tcp zookeeper
59c06d3cf754 confluentinc/cp-kafka:latest "/etc/confluent/dock…" 48 minutes ago Up 48 minutes 9092/tcp, 0.0.0.0:29092->29092/tcp kafka
26794da88c7d bitnami/spark:3 "/opt/bitnami/script…" About an hour ago Up About an hour dockerpractice-spark-worker-1
5a308035bd18 bitnami/spark:3 "/opt/bitnami/script…" About an hour ago Up About an hour 0.0.0.0:8080->8080/tcp dockerpractice-spark-1
I started a container from the ZooKeeper image and launched it like below:
docker run -i -t confluentinc/cp-zookeeper:latest /bin/bash
zookeeper-server-start /etc/kafka/zookeeper.properties
The above command starts ZooKeeper with no exceptions, and I tried to start my Kafka server the same way:
docker run -i -t confluentinc/cp-kafka:latest /bin/bash
kafka-server-start /etc/kafka/server.properties
But when I submit the command for kafka-server, I see the below exception:
[2022-04-04 18:13:11,276] WARN Session 0x0 for sever localhost/127.0.0.1:2181, Closing socket connection. Attempting reconnect except it is a SessionExpiredException. (org.apache.zookeeper.ClientCnxn)
java.net.ConnectException: Connection refused
at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777)
at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:344)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1290)
I tried various combinations, like changing the IP address and ports of the Kafka server and creating multiple Kafka servers, but all of them result in the same exception.
Could anyone let me know what I am doing wrong here and how I can correct it?
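One detail about the reproduction steps worth noting: a bare `docker run` starts a fresh container on Docker's default bridge network, not on the network Compose created for the project, so nothing answers on `localhost:2181` inside it. A hedged sketch (the project name, and hence the network name, is assumed from the container names shown in `docker ps`):

```shell
# Compose names its default network "<project>_default"; the container
# names above ("dockerpractice-spark-1", ...) suggest the project is
# "dockerpractice".
PROJECT=dockerpractice
NET="${PROJECT}_default"
echo "$NET"
# Attaching to that network would let a Kafka container resolve the
# "zookeeper" compose service by name, e.g.:
# docker run -it --network "$NET" confluentinc/cp-kafka:latest /bin/bash
# kafka-server-start /etc/kafka/server.properties \
#   --override zookeeper.connect=zookeeper:2181
```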
I am trying, with no success, to access Mercure's hub through my browser at this URL:
http://locahost:3000 => ERR_CONNECTION_REFUSED
I use Docker for my development. Here's my docker-compose.yml:
# docker/docker-compose.yml
version: '3'
services:
  database:
    container_name: test_db
    build:
      context: ./database
    environment:
      - MYSQL_DATABASE=${DATABASE_NAME}
      - MYSQL_USER=${DATABASE_USER}
      - MYSQL_PASSWORD=${DATABASE_PASSWORD}
      - MYSQL_ROOT_PASSWORD=${DATABASE_ROOT_PASSWORD}
    ports:
      - "3309:3306"
    volumes:
      - ./database/init.sql:/docker-entrypoint-initdb.d/init.sql
      - ./database/data:/var/lib/mysql
  php-fpm:
    container_name: test_php
    build:
      context: ./php-fpm
    depends_on:
      - database
    environment:
      - APP_ENV=${APP_ENV}
      - APP_SECRET=${APP_SECRET}
      - DATABASE_URL=mysql://${DATABASE_USER}:${DATABASE_PASSWORD}@database:3306/${DATABASE_NAME}?serverVersion=5.7
    volumes:
      - ./src:/var/www
  nginx:
    container_name: test_nginx
    build:
      context: ./nginx
    volumes:
      - ./src:/var/www
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf
      - ./nginx/sites/:/etc/nginx/sites-available
      - ./nginx/conf.d/:/etc/nginx/conf.d
      - ./logs:/var/log
    depends_on:
      - php-fpm
    ports:
      - "8095:80"
  caddy:
    container_name: test_mercure
    image: dunglas/mercure
    restart: unless-stopped
    environment:
      MERCURE_PUBLISHER_JWT_KEY: '!ChangeMe!'
      MERCURE_SUBSCRIBER_JWT_KEY: '!ChangeMe!'
      PUBLISH_URL: '${MERCURE_PUBLISH_URL}'
      JWT_KEY: '${MERCURE_JWT_KEY}'
      ALLOW_ANONYMOUS: '${MERCURE_ALLOW_ANONYMOUS}'
      CORS_ALLOWED_ORIGINS: '${MERCURE_CORS_ALLOWED_ORIGINS}'
      PUBLISH_ALLOWED_ORIGINS: '${MERCURE_PUBLISH_ALLOWED_ORIGINS}'
    ports:
      - "3000:80"
I have executed successfully:
docker-compose up -d
docker ps -a:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
0e4a72fe75b2 dunglas/mercure "caddy run --config …" 2 hours ago Up 2 hours 443/tcp, 2019/tcp, 0.0.0.0:3000->80/tcp, :::3000->80/tcp test_mercure
724fe920ebef nginx "/docker-entrypoint.…" 3 hours ago Up 3 hours 0.0.0.0:8095->80/tcp, :::8095->80/tcp test_nginx
9e63fddf50ef php-fpm "docker-php-entrypoi…" 3 hours ago Up 3 hours 9000/tcp test_php
e7989b26084e database "docker-entrypoint.s…" 3 hours ago Up 3 hours 0.0.0.0:3309->3306/tcp, :::3309->3306/tcp test_db
I can reach http://localhost:8095 to access my Symfony app, but I don't know at which URL I am supposed to reach my Mercure hub.
Thanks for your help!
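For what it's worth, the Mercure hub exposes a single endpoint at the fixed path /.well-known/mercure, so with the "3000:80" port mapping above a subscription URL from the host can be built like this (the topic is a placeholder):

```shell
HUB_BASE="http://localhost:3000"            # host port from the "3000:80" mapping
HUB_URL="${HUB_BASE}/.well-known/mercure"   # Mercure's fixed hub path
echo "$HUB_URL"
# Subscribing is a long-lived server-sent-events request; with curl,
# -N disables output buffering:
# curl -N "${HUB_URL}?topic=https://example.com/books/1"
```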
I tried for months to get symfony + nginx + mysql + phpmyadmin + mercure + docker to work both locally for development and in production (obviously). To no avail.
While this isn't directly answering your question, the only way I can contribute is with an "answer", as I don't have enough reputation to comment; otherwise I would have done that.
If you're not tied to nginx for any reason beyond using it as a web server, and can replace it with Caddy, I have a repo that is symfony + caddy + mysql + phpmyadmin + mercure + docker and works with SSL both locally and in production.
https://github.com/thund3rb1rd78/symfony-mercure-website-skeleton-dockerized
I am trying out the Katacoda playground for Load Balance Containers using Traefik - https://www.katacoda.com/courses/traefik/deploy-load-balancer:
Here is the exact Docker Compose script from the tutorial, which starts a Traefik node and two test containers:
traefik:
  image: traefik
  command: --web --docker --docker.domain=docker.localhost --logLevel=DEBUG
  ports:
    - "80:80"
    - "8080:8080"
    - "443:443"
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
    - /dev/null:/traefik.toml
machine:
  image: katacoda/docker-http-server
  labels:
    - "traefik.backend=machine-echo"
    - "traefik.frontend.rule=Host:machine-echo.example.com"
echo:
  image: katacoda/docker-http-server:v2
  labels:
    - "traefik.backend=echo"
    - "traefik.frontend.rule=Host:echo-echo.example.com"
I run the docker-compose command as given in the tutorial:
$ docker-compose up -d
Creating tutorial_traefik_1 ... done
Creating tutorial_echo_1 ... done
Creating tutorial_machine_1 ... done
However, when I check the container list, I see that only two containers were created; the Traefik container is missing:
$ docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
35e87a3ff6ed katacoda/docker-http-server "/app" 11 seconds ago Up 9 seconds 80/tcp tutorial_machine_1
a455019d16be katacoda/docker-http-server:v2 "/app" 11 seconds ago Up 9 seconds 80/tcp tutorial_echo_1
The next step fails too. This may be because the Traefik container is not running:
$ curl -H Host:machine-echo.example.com http://host01
curl: (7) Failed to connect to host01 port 80: Connection refused
Can anyone replicate this tutorial and please let me know the cause and fix for this error?
Just ran into this today; you must pin the Traefik version to 1.7.32:
traefik:
  image: traefik:1.7.32
  command: --web --docker --docker.domain=docker.localhost --logLevel=DEBUG
  ports:
    - "80:80"
    - "8080:8080"
    - "443:443"
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
    - /dev/null:/traefik.toml
machine:
  image: katacoda/docker-http-server
  labels:
    - "traefik.backend=machine-echo"
    - "traefik.frontend.rule=Host:machine-echo.example.com"
echo:
  image: katacoda/docker-http-server:v2
  labels:
    - "traefik.backend=echo"
    - "traefik.frontend.rule=Host:echo-echo.example.com"
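The pin helps because the bare `traefik` tag now pulls Traefik v2, which no longer understands the v1 flags (`--web`, `--docker`), so the container is created but exits immediately; that is why it never shows up in `docker container ls`. A small sketch of how one might confirm this (the container name is assumed from the "Creating tutorial_traefik_1" output in the question):

```shell
PROJECT=tutorial
NAME="${PROJECT}_traefik_1"   # name shown in the compose "Creating ..." output
echo "$NAME"
# Exited containers are hidden by default; -a shows them, and the logs
# should contain the flag-parsing error:
# docker container ls -a --filter "name=${NAME}"
# docker logs "$NAME"
```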
I installed Kafka on an Ubuntu 18.04 VM with the following compose file:
version: '2'
networks:
  kafka-net:
    driver: bridge
services:
  zookeeper-server:
    image: 'bitnami/zookeeper:latest'
    networks:
      - kafka-net
    ports:
      - '2181:2181'
    environment:
      - ALLOW_ANONYMOUS_LOGIN=yes
  kafka-server1:
    image: 'bitnami/kafka:latest'
    networks:
      - kafka-net
    ports:
      - '9092:9092'
    environment:
      - KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper-server:2181
      - KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://:9092
      - ALLOW_PLAINTEXT_LISTENER=yes
    depends_on:
      - zookeeper-server
  kafka-server2:
    image: 'bitnami/kafka:latest'
    networks:
      - kafka-net
    ports:
      - '9093:9092'
    environment:
      - KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper-server:2181
      - KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://:9093
      - ALLOW_PLAINTEXT_LISTENER=yes
    depends_on:
      - zookeeper-server
It installed without any problem.
sudo docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
39f38caf57cb bitnami/kafka:latest "/entrypoint.sh /run…" 3 hours ago Up 5 minutes 0.0.0.0:9092->9092/tcp kafka_kafka-server1_1
088a703b5b76 bitnami/kafka:latest "/entrypoint.sh /run…" 3 hours ago Up 3 hours 0.0.0.0:9093->9092/tcp kafka_kafka-server2_1
6a754bda47ea bitnami/zookeeper:latest "/entrypoint.sh /run…" 3 hours ago Up 3 hours 2888/tcp, 3888/tcp, 0.0.0.0:2181->2181/tcp, 8080/tcp kafka_zookeeper-server_1
Now, I want to connect to the Kafka on my VM. I test it from localhost with the following:
root@ubuntu:~# kafkacat -b 192.168.179.133:9092 -L
Metadata for all topics (from broker -1: 192.168.179.133:9092/bootstrap):
1 brokers:
broker 1001 at localhost:9092
0 topics:
But from my Windows 10 machine I cannot connect to 192.168.179.133:9092 with Conduktor.
As you can see, it returns an error: the ZooKeeper test is OK, but the Kafka connectivity test raises the error.
You should change KAFKA_CFG_ADVERTISED_LISTENERS if your Conduktor is not installed on the same machine as the Kafka cluster.
It should be like this for kafka-server1:
KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://192.168.179.133:9092
and kafka-server2:
KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://192.168.179.133:9093
Note: you should consider adding both Kafka servers in Conduktor for redundancy.
You can check this for more information.
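Applied to the compose file from the question, the change would look like this (only the changed lines are shown, using the VM address from the question; the rest of each service block stays as-is):

```yaml
kafka-server1:
  environment:
    - KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://192.168.179.133:9092
kafka-server2:
  environment:
    - KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://192.168.179.133:9093
```

After recreating the containers, the kafkacat check from the question should report the broker at the advertised address instead of localhost.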
I have a docker-compose setup with several services, like so:
version: '3.6'
services:
  web:
    build:
      context: ./services/web
      dockerfile: Dockerfile-dev
    volumes:
      - './services/web:/usr/src/app'
    ports:
      - 5001:5000
    environment:
      - FLASK_ENV=development
      - APP_SETTINGS=project.config.DevelopmentConfig
      - DATABASE_URL=postgres://postgres:postgres@web-db:5432/web_dev
      - DATABASE_TEST_URL=postgres://postgres:postgres@web-db:5432/web_test
      - SECRET_KEY=my_precious
    depends_on:
      - web-db
      - redis
  web-db:
    build:
      context: ./services/web/projct/db
      dockerfile: Dockerfile
    ports:
      - 5435:5432
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
  nginx:
    build:
      context: ./services/nginx
      dockerfile: Dockerfile-dev
    restart: always
    ports:
      - 80:80
    depends_on:
      - web
      - client
      #- redis
  client:
    build:
      context: ./services/client
      dockerfile: Dockerfile-dev
    volumes:
      - './services/client:/usr/src/app'
      - '/usr/src/app/node_modules'
    ports:
      - 3000:3000
    environment:
      - NODE_ENV=development
      - REACT_APP_WEB_SERVICE_URL=${REACT_APP_WEB_SERVICE_URL}
    depends_on:
      - web
  swagger:
    build:
      context: ./services/swagger
      dockerfile: Dockerfile-dev
    volumes:
      - './services/swagger/swagger.json:/usr/share/nginx/html/swagger.json'
    ports:
      - 3008:8080
    environment:
      - URL=swagger.json
    depends_on:
      - web
  scrapyrt:
    image: vimagick/scrapyd:py3
    restart: always
    ports:
      - '9080:9080'
    volumes:
      - ./services/web:/usr/src/app
    working_dir: /usr/src/app/project/api
    entrypoint: /usr/src/app/entrypoint-scrapyrt.sh
    depends_on:
      - web
  redis:
    image: redis:5.0.3-alpine
    restart: always
    expose:
      - '6379'
    ports:
      - '6379:6379'
  monitor:
    image: dev3_web
    ports:
      - 5555:5555
    command: flower -A celery_worker.celery --port=5555 --broker=redis://redis:6379/0
    depends_on:
      - web
      - redis
  worker-analysis:
    image: dev3_web
    restart: always
    volumes:
      - ./services/web:/usr/src/app
      - ./services/web/celery_logs:/usr/src/app/celery_logs
    command: celery worker -A celery_worker.celery --loglevel=DEBUG --logfile=celery_logs/worker_analysis.log -Q analysis
    environment:
      - CELERY_BROKER=redis://redis:6379/0
      - CELERY_RESULT_BACKEND=redis://redis:6379/0
      - FLASK_ENV=development
      - APP_SETTINGS=project.config.DevelopmentConfig
      - DATABASE_URL=postgres://postgres:postgres@web-db:5432/web_dev
      - DATABASE_TEST_URL=postgres://postgres:postgres@web-db:5432/web_test
      - SECRET_KEY=my_precious
    depends_on:
      - web
      - redis
      - web-db
    links:
      - redis:redis
      - web-db:web-db
  worker-scraping:
    image: dev3_web
    restart: always
    volumes:
      - ./services/web:/usr/src/app
      - ./services/web/celery_logs:/usr/src/app/celery_logs
    command: celery worker -A celery_worker.celery --loglevel=DEBUG --logfile=celery_logs/worker_scraping.log -Q scraping
    environment:
      - CELERY_BROKER=redis://redis:6379/0
      - CELERY_RESULT_BACKEND=redis://redis:6379/0
      - FLASK_ENV=development
      - APP_SETTINGS=project.config.DevelopmentConfig
      - DATABASE_URL=postgres://postgres:postgres@web-db:5432/web_dev
      - DATABASE_TEST_URL=postgres://postgres:postgres@web-db:5432/web_test
      - SECRET_KEY=my_precious
    depends_on:
      - web
      - redis
      - web-db
    links:
      - redis:redis
      - web-db:web-db
  worker-emailing:
    image: dev3_web
    restart: always
    volumes:
      - ./services/web:/usr/src/app
      - ./services/web/celery_logs:/usr/src/app/celery_logs
    command: celery worker -A celery_worker.celery --loglevel=DEBUG --logfile=celery_logs/worker_emailing.log -Q email
    environment:
      - CELERY_BROKER=redis://redis:6379/0
      - CELERY_RESULT_BACKEND=redis://redis:6379/0
      - FLASK_ENV=development
      - APP_SETTINGS=project.config.DevelopmentConfig
      - DATABASE_URL=postgres://postgres:postgres@web-db:5432/web_dev
      - DATABASE_TEST_URL=postgres://postgres:postgres@web-db:5432/web_test
      - SECRET_KEY=my_precious
    depends_on:
      - web
      - redis
      - web-db
    links:
      - redis:redis
      - web-db:web-db
  worker-learning:
    image: dev3_web
    restart: always
    volumes:
      - ./services/web:/usr/src/app
      - ./services/web/celery_logs:/usr/src/app/celery_logs
    command: celery worker -A celery_worker.celery --loglevel=DEBUG --logfile=celery_logs/worker_ml.log -Q machine_learning
    environment:
      - CELERY_BROKER=redis://redis:6379/0
      - CELERY_RESULT_BACKEND=redis://redis:6379/0
      - FLASK_ENV=development
      - APP_SETTINGS=project.config.DevelopmentConfig
      - DATABASE_URL=postgres://postgres:postgres@web-db:5432/web_dev
      - DATABASE_TEST_URL=postgres://postgres:postgres@web-db:5432/web_test
      - SECRET_KEY=my_precious
    depends_on:
      - web
      - redis
      - web-db
    links:
      - redis:redis
      - web-db:web-db
  worker-periodic:
    image: dev3_web
    restart: always
    volumes:
      - ./services/web:/usr/src/app
      - ./services/web/celery_logs:/usr/src/app/celery_logs
    command: celery beat -A celery_worker.celery --schedule=/tmp/celerybeat-schedule --loglevel=DEBUG --pidfile=/tmp/celerybeat.pid
    environment:
      - CELERY_BROKER=redis://redis:6379/0
      - CELERY_RESULT_BACKEND=redis://redis:6379/0
      - FLASK_ENV=development
      - APP_SETTINGS=project.config.DevelopmentConfig
      - DATABASE_URL=postgres://postgres:postgres@web-db:5432/web_dev
      - DATABASE_TEST_URL=postgres://postgres:postgres@web-db:5432/web_test
      - SECRET_KEY=my_precious
    depends_on:
      - web
      - redis
      - web-db
    links:
      - redis:redis
      - web-db:web-db
docker-compose -f docker-compose-dev.yml up -d and docker ps give me:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
396d7a1a5443 dev3_nginx "nginx -g 'daemon of…" 23 hours ago Up 18 minutes 0.0.0.0:80->80/tcp dev3_nginx_1
8ec7a51e2c2a dev3_web "celery worker -A ce…" 24 hours ago Up 19 minutes dev3_worker-analysis_1
e591e6445c64 dev3_web "celery worker -A ce…" 24 hours ago Up 19 minutes dev3_worker-learning_1
4d1fd17be3cb dev3_web "celery worker -A ce…" 24 hours ago Up 19 minutes dev3_worker-scraping_1
d25c40060fed dev3_web "celery beat -A cele…" 24 hours ago Up 17 seconds dev3_worker-periodic_1
76df1a600afa dev3_web "celery worker -A ce…" 24 hours ago Up 18 minutes dev3_worker-emailing_1
3442b0ce5d56 vimagick/scrapyd:py3 "/usr/src/app/entryp…" 24 hours ago Up 20 minutes 6800/tcp, 0.0.0.0:9080->9080/tcp dev3_scrapyrt_1
81d3ccea4de4 dev3_client "npm start" 24 hours ago Up 19 minutes 0.0.0.0:3000->3000/tcp dev3_client_1
aff5ecf951d2 dev3_web "flower -A celery_wo…" 24 hours ago Up 10 seconds 0.0.0.0:5555->5555/tcp dev3_monitor_1
864f17f39d54 dev3_swagger "/start.sh" 24 hours ago Up 19 minutes 80/tcp, 0.0.0.0:3008->8080/tcp dev3_swagger_1
e69476843236 dev3_web "/usr/src/app/entryp…" 24 hours ago Up 19 minutes 0.0.0.0:5001->5000/tcp dev3_web_1
22fd91b1ab6e redis:5.0.3-alpine "docker-entrypoint.s…" 24 hours ago Up 20 minutes 0.0.0.0:6379->6379/tcp dev3_redis_1
3a0b2115dd8e dev3_web-db "docker-entrypoint.s…" 24 hours ago Up 19 minutes 0.0.0.0:5435->5432/tcp dev3_web-db_1
They are all up, but I'm facing exceedingly slow network conditions, with a lot of instability. I have tried to check connectivity between containers and catch any potential lag, like so:
docker container exec -it e69476843236 ping aff5ecf951d2
PING aff5ecf951d2 (172.18.0.13): 56 data bytes
64 bytes from 172.18.0.13: seq=0 ttl=64 time=0.504 ms
64 bytes from 172.18.0.13: seq=1 ttl=64 time=0.254 ms
64 bytes from 172.18.0.13: seq=2 ttl=64 time=0.191 ms
64 bytes from 172.18.0.13: seq=3 ttl=64 time=0.168 ms
but timing seems alright by these tests, though now and then I get ping: bad address 'aff5ecf951d2' when some service goes down.
Sometimes I get this error:
ERROR: An HTTP request took too long to complete. Retry with --verbose to obtain debug information.
If you encounter this issue regularly because of slow network conditions, consider setting COMPOSE_HTTP_TIMEOUT to a higher value (current value: 60).
And too many times I just have to restart Docker in order to make it work.
How can I use docker inspect to dig deeper into the slow network conditions and figure out what is wrong? Can network issues be related to volumes?
The problem manifested itself as the number of containers and the app's complexity grew (something you should always keep an eye on).
In my case, I had changed one of the images from Alpine to Slim-Buster (Debian), which is significantly larger.
It turns out I could fix it by simply going to Docker 'Preferences', clicking on 'Advanced', and increasing the memory allocation.
Now it runs smoothly again.
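For anyone hitting similar slowness, a quick way to see whether containers are pressing up against the memory allocation (the bottleneck described above) is `docker stats`; a minimal sketch:

```shell
# --no-stream prints a single snapshot instead of refreshing live;
# sustained memory usage near the limit suggests raising Docker's
# allocation as described above.
FMT="table {{.Name}}\t{{.MemUsage}}\t{{.CPUPerc}}"
echo "$FMT"
# docker stats --no-stream --format "$FMT"
```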
I've tried the 'with docker' docs here but it's not working from localhost:7000, localhost:8081, or any other port I use. What am I missing?
REDIS_PORT=6379
### Redis ################################################
redis:
  container_name: redis
  hostname: redis
  build: ./redis
  volumes:
    - ${DATA_PATH_HOST}/redis:/data
  ports:
    - "${REDIS_PORT}:6379"
  networks:
    - backend
### REDISCOMMANDER ################################################
redis-commander:
  container_name: rediscommander
  hostname: redis-commander
  image: rediscommander/redis-commander:latest
  restart: always
  environment:
    - REDIS_HOSTS=local:redis:6379
  ports:
    - "7000:80"
  networks:
    - frontend
    - backend
  depends_on:
    - redis
Docker ps gives me:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
042c9a2e918a rediscommander/redis-commander:latest "/usr/bin/dumb-init …" About a minute ago Up About a minute (healthy) 8081/tcp, 0.0.0.0:7000->80/tcp rediscommander
86bc8c1ca5ff laradock_redis "docker-entrypoint.s…" About a minute ago Up About a minute 0.0.0.0:6379->6379/tcp redis
Docker logs rediscommander gives me:
$ docker logs rediscommander
Creating custom redis-commander config '/redis-commander/config/local-production.json'.
Parsing 1 REDIS_HOSTS into custom redis-commander config '/redis-commander/config/local-production.json'.
node ./bin/redis-commander
Using scan instead of keys
No Save: false
listening on 0.0.0.0:8081
access with browser at http://127.0.0.1:8081
Redis Connection redis:6379 using Redis DB #0
Redis Commander is listening on port 8081 inside the container. That is why you should change the port binding to
ports:
- "7000:8081"
in the redis-commander block and access it via localhost:7000.
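For completeness, the full service block with the corrected mapping would look like this (all other settings unchanged from the question):

```yaml
redis-commander:
  container_name: rediscommander
  hostname: redis-commander
  image: rediscommander/redis-commander:latest
  restart: always
  environment:
    - REDIS_HOSTS=local:redis:6379
  ports:
    - "7000:8081"   # host 7000 -> container 8081, where redis-commander listens
  networks:
    - frontend
    - backend
  depends_on:
    - redis
```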