Multiple Docker containers with the same ports - docker

I got the following containers:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f75e2629d5b5 sameersbn/gitlab:10.2.2 "/sbin/entrypoint.sh…" 12 minutes ago Up 11 minutes 80/tcp, 0.0.0.0:22->22/tcp, 443/tcp gitlab_app_1
8fc5b1cec6d5 sameersbn/redis:latest "/sbin/entrypoint.sh…" 12 minutes ago Up 12 minutes 6379/tcp gitlab_redis_1
44db2400787d sameersbn/postgresql:9.6-2 "/sbin/entrypoint.sh" 12 minutes ago Up 12 minutes 5432/tcp gitlab_postgresql_1
31a3423a66c7 nextcloud_web "nginx -g 'daemon of…" 37 minutes ago Up 37 minutes 80/tcp, 443/tcp nextcloud_web_1
14334d36116a nextcloud_app "/entrypoint.sh php-…" 37 minutes ago Up 37 minutes 9000/tcp nextcloud_app_1
258d317934a7 nextcloud_cron "/cron.sh" 37 minutes ago Up 37 minutes 9000/tcp nextcloud_cron_1
c66f31c762d8 mariadb "docker-entrypoint.s…" 37 minutes ago Up 37 minutes 3306/tcp nextcloud_db_1
53e8fa0e5a9f redis "docker-entrypoint.s…" 37 minutes ago Up 37 minutes 6379/tcp nextcloud_redis_1
e4c147824046 tvial/docker-mailserver:latest "/bin/sh -c 'supervi…" About an hour ago Up 33 minutes 0.0.0.0:25->25/tcp, 110/tcp, 0.0.0.0:143->143/tcp, 0.0.0.0:587->587/tcp, 465/tcp, 995/tcp, 0.0.0.0:993->993/tcp, 4190/tcp mail_mail_1
4d99cf5a542a nginx "nginx -g 'daemon of…" About an hour ago Up 33 minutes 80/tcp, 443/tcp mail_ssl_1
a4e9b76b91df jrcs/letsencrypt-nginx-proxy-companion "/bin/bash /app/entr…" About an hour ago Up About an hour main_letsencrypt_1
334d501060b4 jwilder/nginx-proxy "/app/docker-entrypo…" About an hour ago Up 22 minutes 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp main_nginx-proxy_1
2ff7189e6272 mattermost_db "/entrypoint.sh post…" About an hour ago Up About an hour 5432/tcp mattermost_db_1
4d99ebc5ec02 mattermost_app "/entrypoint.sh plat…" About an hour ago Up About an hour 80/tcp mattermost_app_1
94007cb05dd3 mattermost_web "/entrypoint.sh" About an hour ago Up About an hour 80/tcp, 443/tcp mattermost_web_1
As you can see, there are multiple containers exposing the same ports, for example nextcloud_redis and gitlab_redis. Since I am using nginx-proxy, I have to put all containers on the same network. This causes problems: both stacks define a service named redis, so, for example, gitlab_app ends up talking to nextcloud_redis.
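One way to see the clash (a hedged example; container names are taken from the docker ps output above) is to resolve the hostname redis from inside the GitLab container:
docker exec gitlab_app_1 ping -c 1 redis
Because both compose projects define a service called redis on the shared reverse-proxy network, Docker's embedded DNS may hand back the IP of either gitlab_redis_1 or nextcloud_redis_1.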
Below are the docker-compose files.
Nextcloud
version: '3'
services:
db:
image: mariadb
# image: mysql
restart: always
volumes:
- ./db:/var/lib/mysql
environment:
- MYSQL_ROOT_PASSWORD=XXX
env_file:
- db.env
redis:
image: redis
restart: always
app:
build: ./app
restart: always
volumes:
- ./data:/var/www/html
environment:
- MYSQL_HOST=db
env_file:
- db.env
depends_on:
- db
- redis
web:
build: ./web
restart: always
expose:
- "443"
- "80"
volumes:
- ./data:/var/www/html:ro
environment:
- VIRTUAL_HOST=cloud.XXX.net
- LETSENCRYPT_HOST=cloud.XXX.net
- LETSENCRYPT_EMAIL=admin@XXX.net
depends_on:
- app
cron:
build: ./app
restart: always
volumes:
- ./data:/var/www/html
entrypoint: /cron.sh
depends_on:
- db
- redis
networks:
default:
external:
name: reverse-proxy
Gitlab
version: '2'
services:
redis:
restart: always
image: sameersbn/redis:latest
command:
- --loglevel warning
volumes:
- ./redis:/var/lib/redis:Z
postgresql:
restart: always
image: sameersbn/postgresql:9.6-2
volumes:
- ./postgresql:/var/lib/postgresql:Z
environment:
- DB_USER=gitlab
- DB_PASS=XXX
- DB_NAME=gitlabhq_production
- DB_EXTENSION=pg_trgm
app:
restart: always
image: sameersbn/gitlab:10.2.2
depends_on:
- redis
- postgresql
expose:
- "80"
ports:
- "22:22"
volumes:
- ./data:/home/git/data:Z
environment:
- VIRTUAL_HOST=gitlab.XXX.net
- DEBUG=false
- DB_ADAPTER=postgresql
- DB_HOST=postgresql
- DB_PORT=5432
- DB_USER=gitlab
- DB_PASS=XXX
- DB_NAME=gitlabhq_production
- REDIS_HOST=redis
- REDIS_PORT=6379
- TZ=Europe/Berlin
- GITLAB_TIMEZONE=Berlin
- GITLAB_HTTPS=true
- SSL_SELF_SIGNED=true
[...]
- OAUTH_CROWD_SERVER_URL=
- OAUTH_CROWD_APP_NAME=
- OAUTH_CROWD_APP_PASSWORD=
- OAUTH_AUTH0_CLIENT_ID=
- OAUTH_AUTH0_CLIENT_SECRET=
- OAUTH_AUTH0_DOMAIN=
- OAUTH_AZURE_API_KEY=
- OAUTH_AZURE_API_SECRET=
- OAUTH_AZURE_TENANT_ID=
networks:
default:
external:
name: reverse-proxy
What's the best solution to fix this issue?

Try using networks:
(At the moment I have nowhere to try it, so I hope it works, or at least helps you decipher the dilemma.)
version: '3'
services:
db:
image: mariadb
# image: mysql
restart: always
volumes:
- ./db:/var/lib/mysql
environment:
- MYSQL_ROOT_PASSWORD=XXX
env_file:
- db.env
networks:
- net_internal_next
redis:
image: redis
restart: always
networks:
- net_internal_next
app:
build: ./app
restart: always
volumes:
- ./data:/var/www/html
environment:
- MYSQL_HOST=db
env_file:
- db.env
depends_on:
- db
- redis
networks:
- net_internal_next
web:
build: ./web
restart: always
expose:
- "443"
- "80"
volumes:
- ./data:/var/www/html:ro
environment:
- VIRTUAL_HOST=cloud.XXX.net
- LETSENCRYPT_HOST=cloud.XXX.net
- LETSENCRYPT_EMAIL=admin@XXX.net
depends_on:
- app
networks:
- net_internal_next
- net_external_next
cron:
build: ./app
restart: always
volumes:
- ./data:/var/www/html
entrypoint: /cron.sh
depends_on:
- db
- redis
networks:
- net_internal_next
networks:
net_external_next:
external:
name: reverse-proxy
net_internal_next:
driver: bridge
--
version: '2'
services:
redis:
restart: always
image: sameersbn/redis:latest
command:
- --loglevel warning
volumes:
- ./redis:/var/lib/redis:Z
networks:
- net_internal_git
postgresql:
restart: always
image: sameersbn/postgresql:9.6-2
volumes:
- ./postgresql:/var/lib/postgresql:Z
environment:
- DB_USER=gitlab
- DB_PASS=XXX
- DB_NAME=gitlabhq_production
- DB_EXTENSION=pg_trgm
networks:
- net_internal_git
app:
restart: always
image: sameersbn/gitlab:10.2.2
depends_on:
- redis
- postgresql
expose:
- "80"
ports:
- "22:22"
volumes:
- ./data:/home/git/data:Z
environment:
- VIRTUAL_HOST=gitlab.XXX.net
- DEBUG=false
- DB_ADAPTER=postgresql
- DB_HOST=postgresql
- DB_PORT=5432
- DB_USER=gitlab
- DB_PASS=XXX
- DB_NAME=gitlabhq_production
- REDIS_HOST=redis
- REDIS_PORT=6379
- TZ=Europe/Berlin
- GITLAB_TIMEZONE=Berlin
- GITLAB_HTTPS=true
- SSL_SELF_SIGNED=true
[...]
- OAUTH_CROWD_SERVER_URL=
- OAUTH_CROWD_APP_NAME=
- OAUTH_CROWD_APP_PASSWORD=
- OAUTH_AUTH0_CLIENT_ID=
- OAUTH_AUTH0_CLIENT_SECRET=
- OAUTH_AUTH0_DOMAIN=
- OAUTH_AZURE_API_KEY=
- OAUTH_AZURE_API_SECRET=
- OAUTH_AZURE_TENANT_ID=
networks:
- net_internal_git
- net_external_git
networks:
net_external_git:
external:
name: reverse-proxy
net_internal_git:
driver: bridge
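With this layout you can verify that only the web-facing services are attached to the proxy network (a hedged check; the exact container list depends on your compose project names):
docker network inspect reverse-proxy --format '{{range .Containers}}{{.Name}} {{end}}'
Only Nextcloud's web and GitLab's app should appear there, while each stack's redis and database containers stay on their own internal network, so REDIS_HOST=redis resolves unambiguously again.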

Related

Hive-metastore not finding Hadoop Datanode

I have a Hadoop cluster with one namenode and one datanode instantiated with docker compose. In addition, I am trying to launch Hive, but the hive-metastore does not seem to find my datanode even though it is up and running; in fact, the log shows:
namenode:9870 is available.
check for datanode:9871...
datanode:9871 is not available yet
try in 5s once again ...
Here is my docker-compose.yml
#HADOOP
namenode:
image: bde2020/hadoop-namenode:2.0.0-hadoop3.2.1-java8
container_name: namenode
restart: always
expose:
- "9870"
- "54310"
- "9000"
ports:
- 9870:9870
- 9000:9000
volumes:
- ./data/hadoop_data/:/hadoop_data
environment:
- CLUSTER_NAME=test
- CORE_CONF_fs_defaultFS=hdfs://namenode:9000
- CORE_CONF_hadoop_http_staticuser_user=root
- CORE_CONF_hadoop_proxyuser_hue_hosts=*
- CORE_CONF_hadoop_proxyuser_hue_groups=*
- CORE_CONF_io_compression_codecs=org.apache.hadoop.io.compress.SnappyCodec
- HDFS_CONF_dfs_webhdfs_enabled=true
- HDFS_CONF_dfs_permissions_enabled=false
- HDFS_CONF_dfs_namenode_datanode_registration_ip___hostname___check=false
- HDFS_CONF_dfs_safemode_threshold_pct=0
datanode:
image: bde2020/hadoop-datanode:2.0.0-hadoop3.2.1-java8
container_name: datanode
restart: always
expose:
- "9871"
environment:
SERVICE_PRECONDITION: "namenode:9870"
ports:
- "9871:9871"
env_file:
- hive.env
hive-server:
image: bde2020/hive:2.3.2-postgresql-metastore
container_name: hive-server
volumes:
- ./employee:/employee
env_file:
- hive.env
environment:
HIVE_CORE_CONF_javax_jdo_option_ConnectionURL: "jdbc:postgresql://hive-metastore/metastore"
SERVICE_PRECONDITION: "hive-metastore:9083"
depends_on:
- hive-metastore
ports:
- "10000:10000"
hive-metastore:
image: bde2020/hive:2.3.2-postgresql-metastore
container_name: hive-metastore
env_file:
- hive.env
command: /opt/hive/bin/hive --service metastore
environment:
SERVICE_PRECONDITION: "namenode:9870 datanode:9871 hive-metastore-postgresql:5432"
depends_on:
- hive-metastore-postgresql
ports:
- "9083:9083"
hive-metastore-postgresql:
image: bde2020/hive-metastore-postgresql:2.3.0
container_name: hive-metastore-postgresql
volumes:
- ./metastore-postgresql/postgresql/data:/var/lib/postgresql/data
depends_on:
- datanode
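One thing worth checking (a hedged suggestion, as no confirmed answer is shown here): the precondition probes datanode:9871 over HTTP, but the Hadoop 3 default for the datanode web UI is port 9864, and expose/ports alone do not change what the daemon binds to. You can check what the container is actually configured to serve:
docker exec datanode grep -A1 dfs.datanode.http.address /etc/hadoop/hdfs-site.xml
(the config path is assumed from the bde2020 images' default HADOOP_CONF_DIR). If nothing is bound to 9871, either change SERVICE_PRECONDITION to datanode:9864 or pin the port through the image's env-to-config mechanism, e.g. HDFS_CONF_dfs_datanode_http_address=0.0.0.0:9871 on the datanode service.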

failed Error: Calling enrollment endpoint failed with error [Error: connect ECONNREFUSED 127.0.0.1:7054] | ca_peerOrg1 and ca_peerOrg2 is not running

I am trying to run hyperledger fabric using container. Here is my docker-compose yml.
docker-compose.yaml
version: '2'
services:
ca.org1.sample.com:
image: hyperledger/fabric-ca:1.4
environment:
- FABRIC_CA_HOME=/etc/hyperledger/fabric-ca-server
- FABRIC_CA_SERVER_CA_NAME=ca-org1
- FABRIC_CA_SERVER_CA_CERTFILE=/etc/hyperledger/fabric-ca-server-config/ca.org1.sample.com-cert.pem
- FABRIC_CA_SERVER_CA_KEYFILE=/etc/hyperledger/fabric-ca-server-config/0c7af57d616f614fd42250b8ba14a0c777220874d328ecbd1464a47ef3f85b1a_sk
- FABRIC_CA_SERVER_TLS_ENABLED=true
- FABRIC_CA_SERVER_TLS_CERTFILE=/etc/hyperledger/fabric-ca-server-config/ca.org1.sample.com-cert.pem
- FABRIC_CA_SERVER_TLS_KEYFILE=/etc/hyperledger/fabric-ca-server-config/0c7af57d616f614fd42250b8ba14a0c777220874d328ecbd1464a47ef3f85b1a_sk
ports:
- "7054:7054"
command: sh -c 'fabric-ca-server start -b admin:adminpw -d'
volumes:
- ./channel/crypto-config/peerOrganizations/org1.sample.com/ca/:/etc/hyperledger/fabric-ca-server-config
container_name: ca_peerOrg1
ca.org2.sample.com:
image: hyperledger/fabric-ca:1.4
environment:
- FABRIC_CA_HOME=/etc/hyperledger/fabric-ca-server
- FABRIC_CA_SERVER_CA_NAME=ca-org2
- FABRIC_CA_SERVER_CA_CERTFILE=/etc/hyperledger/fabric-ca-server-config/ca.org2.sample.com-cert.pem
- FABRIC_CA_SERVER_CA_KEYFILE=/etc/hyperledger/fabric-ca-server-config/fc399b786271e773cc0011593c6bcae7c4b4ae0f4a595ebf0883154bddb4daa7_sk
- FABRIC_CA_SERVER_TLS_ENABLED=true
- FABRIC_CA_SERVER_TLS_CERTFILE=/etc/hyperledger/fabric-ca-server-config/ca.org2.sample.com-cert.pem
- FABRIC_CA_SERVER_TLS_KEYFILE=/etc/hyperledger/fabric-ca-server-config/fc399b786271e773cc0011593c6bcae7c4b4ae0f4a595ebf0883154bddb4daa7_sk
ports:
- "8054:7054"
command: sh -c 'fabric-ca-server start -b admin:adminpw -d'
volumes:
- ./channel/crypto-config/peerOrganizations/org2.sample.com/ca/:/etc/hyperledger/fabric-ca-server-config
container_name: ca_peerOrg2
orderer.sample.com:
container_name: orderer.sample.com
image: hyperledger/fabric-orderer:1.4
environment:
- FABRIC_LOGGING_SPEC=debug
- ORDERER_GENERAL_LISTENADDRESS=0.0.0.0
- ORDERER_GENERAL_GENESISMETHOD=file
- ORDERER_GENERAL_GENESISFILE=/etc/hyperledger/configtx/genesis.block
- ORDERER_GENERAL_LOCALMSPID=OrdererMSP
- ORDERER_GENERAL_LOCALMSPDIR=/etc/hyperledger/crypto/orderer/msp
- ORDERER_GENERAL_TLS_ENABLED=true
- ORDERER_GENERAL_TLS_PRIVATEKEY=/etc/hyperledger/crypto/orderer/tls/server.key
- ORDERER_GENERAL_TLS_CERTIFICATE=/etc/hyperledger/crypto/orderer/tls/server.crt
- ORDERER_GENERAL_TLS_ROOTCAS=[/etc/hyperledger/crypto/orderer/tls/ca.crt, /etc/hyperledger/crypto/peerOrg1/tls/ca.crt, /etc/hyperledger/crypto/peerOrg2/tls/ca.crt]
working_dir: /opt/gopath/src/github.com/hyperledger/fabric/orderers
command: orderer
ports:
- 7050:7050
volumes:
- ./channel:/etc/hyperledger/configtx
- ./channel/crypto-config/ordererOrganizations/sample.com/orderers/orderer.sample.com/:/etc/hyperledger/crypto/orderer
- ./channel/crypto-config/peerOrganizations/org1.sample.com/peers/peer0.org1.sample.com/:/etc/hyperledger/crypto/peerOrg1
- ./channel/crypto-config/peerOrganizations/org2.sample.com/peers/peer0.org2.sample.com/:/etc/hyperledger/crypto/peerOrg2
peer0.org1.sample.com:
container_name: peer0.org1.sample.com
extends:
file: base.yaml
service: peer-base
environment:
- CORE_PEER_ID=peer0.org1.sample.com
- CORE_PEER_LOCALMSPID=Org1MSP
- CORE_PEER_ADDRESS=peer0.org1.sample.com:7051
- CORE_PEER_GOSSIP_BOOTSTRAP=peer1.org1.sample.com:7051
- CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer0.org1.sample.com:7051
ports:
- 7051:7051
- 7053:7053
volumes:
- ./channel/crypto-config/peerOrganizations/org1.sample.com/peers/peer0.org1.sample.com/:/etc/hyperledger/crypto/peer
depends_on:
- orderer.sample.com
peer1.org1.sample.com:
container_name: peer1.org1.sample.com
extends:
file: base.yaml
service: peer-base
environment:
- CORE_PEER_ID=peer1.org1.sample.com
- CORE_PEER_LOCALMSPID=Org1MSP
- CORE_PEER_ADDRESS=peer1.org1.sample.com:7051
- CORE_PEER_GOSSIP_BOOTSTRAP=peer0.org1.sample.com:7051
- CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer1.org1.sample.com:7051
ports:
- 7056:7051
- 7058:7053
volumes:
- ./channel/crypto-config/peerOrganizations/org1.sample.com/peers/peer1.org1.sample.com/:/etc/hyperledger/crypto/peer
depends_on:
- orderer.sample.com
peer0.org2.sample.com:
container_name: peer0.org2.sample.com
extends:
file: base.yaml
service: peer-base
environment:
- CORE_PEER_ID=peer0.org2.sample.com
- CORE_PEER_LOCALMSPID=Org2MSP
- CORE_PEER_ADDRESS=peer0.org2.sample.com:7051
- CORE_PEER_GOSSIP_BOOTSTRAP=peer1.org2.sample.com:7051
- CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer0.org2.sample.com:7051
ports:
- 8051:7051
- 8053:7053
volumes:
- ./channel/crypto-config/peerOrganizations/org2.sample.com/peers/peer0.org2.sample.com/:/etc/hyperledger/crypto/peer
depends_on:
- orderer.sample.com
peer1.org2.sample.com:
container_name: peer1.org2.sample.com
extends:
file: base.yaml
service: peer-base
environment:
- CORE_PEER_ID=peer1.org2.sample.com
- CORE_PEER_LOCALMSPID=Org2MSP
- CORE_PEER_ADDRESS=peer1.org2.sample.com:7051
- CORE_PEER_GOSSIP_BOOTSTRAP=peer0.org2.sample.com:7051
- CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer1.org2.sample.com:7051
ports:
- 8056:7051
- 8058:7053
volumes:
- ./channel/crypto-config/peerOrganizations/org2.sample.com/peers/peer1.org2.sample.com/:/etc/hyperledger/crypto/peer
depends_on:
- orderer.sample.com
base.yaml file
version: '2'
services:
peer-base:
image: hyperledger/fabric-peer:1.4
environment:
- CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
- CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=artifacts_default
- FABRIC_LOGGING_SPEC=ERROR
- CORE_PEER_GOSSIP_USELEADERELECTION=true
- CORE_PEER_GOSSIP_ORGLEADER=false
- CORE_PEER_GOSSIP_SKIPHANDSHAKE=true
- CORE_PEER_MSPCONFIGPATH=/etc/hyperledger/crypto/peer/msp
- CORE_PEER_TLS_ENABLED=true
- CORE_PEER_TLS_KEY_FILE=/etc/hyperledger/crypto/peer/tls/server.key
- CORE_PEER_TLS_CERT_FILE=/etc/hyperledger/crypto/peer/tls/server.crt
- CORE_PEER_TLS_ROOTCERT_FILE=/etc/hyperledger/crypto/peer/tls/ca.crt
logging:
driver: "json-file"
options:
max-file: "2"
max-size: "5m"
working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
command: peer node start
volumes:
- /var/run/:/host/var/run/
After running these containers, I cannot see ca_peerOrg1 and ca_peerOrg2 among the running containers.
➜ sample_network git:(master) ✗ docker container ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
17e106be6872 hyperledger/fabric-peer:1.4 "peer node start" 42 seconds ago Up 35 seconds 0.0.0.0:8051->7051/tcp, 0.0.0.0:8053->7053/tcp peer0.org2.sample.com
fd72c0b378e3 hyperledger/fabric-peer:1.4 "peer node start" 42 seconds ago Up 35 seconds 0.0.0.0:7051->7051/tcp, 0.0.0.0:7053->7053/tcp peer0.org1.sample.com
f0198beef653 hyperledger/fabric-peer:1.4 "peer node start" 42 seconds ago Up 35 seconds 0.0.0.0:7056->7051/tcp, 0.0.0.0:7058->7053/tcp peer1.org1.sample.com
a9d2a0fabe6d hyperledger/fabric-peer:1.4 "peer node start" 42 seconds ago Up 35 seconds 0.0.0.0:8056->7051/tcp, 0.0.0.0:8058->7053/tcp peer1.org2.sample.com
8173c3c09e6a hyperledger/fabric-orderer:1.4 "orderer" 49 seconds ago Up 43 seconds 0.0.0.0:7050->7050/tcp orderer.sample.com
Here ca_peerOrg1 and ca_peerOrg2 are not running, so when I try to register from the Node SDK I get the following error:
POST request Enroll on Org1 ...
{"success":false,"message":"failed Error: Calling enrollment endpoint failed with error [Error: connect ECONNREFUSED 127.0.0.1:7054]"}
ORG1 token is null
POST request Enroll on Org2 ...
{"success":false,"message":"failed Error: Calling enrollment endpoint failed with error [Error: connect ECONNREFUSED 127.0.0.1:8054]"}
ORG2 token is null
Please help me to fix this issue.
Check whether the FABRIC_CA_SERVER_CA_KEYFILE and FABRIC_CA_SERVER_TLS_KEYFILE of the CA for each organization are written properly in docker-compose.yaml:
# ca_peerOrg1
ls ./channel/crypto-config/peerOrganizations/org1.sample.com/ca/
# ca_peerOrg2
ls ./channel/crypto-config/peerOrganizations/org2.sample.com/ca/
Among the files listed by the commands above, the name of the *_sk file is what should be written in FABRIC_CA_SERVER_CA_KEYFILE and FABRIC_CA_SERVER_TLS_KEYFILE.
If you can share the logs from the commands below, I can say something more definite:
docker logs ca_peerOrg1
docker logs ca_peerOrg2
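For example, if the ls for org1 prints a key file named <hash>_sk (a placeholder; yours will be a long hex string), both environment lines must point at exactly that file:
- FABRIC_CA_SERVER_CA_KEYFILE=/etc/hyperledger/fabric-ca-server-config/<hash>_sk
- FABRIC_CA_SERVER_TLS_KEYFILE=/etc/hyperledger/fabric-ca-server-config/<hash>_sk
A stale hash left over from an earlier crypto-config generation makes fabric-ca-server fail at startup, which would explain why the two CA containers never show up in docker ps.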

How to create znode on a docker compose cluster using "command" in the compose file?

I am trying to create a docker-compose file to run ZooKeeper and Solr (a 3-node cluster, official images). I am trying to define a znode in ZooKeeper using the "command" attribute in the compose file.
command: bash -c "/apache-zookeeper-3.5.8-bin/bin/zkCli.sh -server zoo1:2181 create /solr '' && zkServer.sh start-foreground"
The ZooKeeper node with this command keeps crashing again and again.
My docker compose file is :
version: '3.7'
services:
zoo1:
image: zookeeper:3.5
container_name: zoo1
restart: always
hostname: zoo1
ports:
- 2181:2181
environment:
ZOO_4LW_COMMANDS_WHITELIST: mntr,conf,ruok
ZOO_MY_ID: 1
ZOO_SERVERS: server.1=0.0.0.0:2888:3888;2181 server.2=zoo2:2888:3888;2181 server.3=zoo3:2888:3888;2181
networks:
- solr
volumes:
- 'zoo1_data:/data'
zoo2:
image: zookeeper:3.5
container_name: zoo2
restart: always
hostname: zoo2
ports:
- 2182:2181
environment:
ZOO_4LW_COMMANDS_WHITELIST: mntr,conf,ruok
ZOO_MY_ID: 2
ZOO_SERVERS: server.1=zoo1:2888:3888;2181 server.2=0.0.0.0:2888:3888;2181 server.3=zoo3:2888:3888;2181
networks:
- solr
volumes:
- 'zoo2_data:/data'
zoo3:
image: zookeeper:3.5
container_name: zoo3
restart: always
hostname: zoo3
ports:
- 2183:2181
environment:
ZOO_MY_ID: 3
ZOO_SERVERS: server.1=zoo1:2888:3888;2181 server.2=zoo2:2888:3888;2181 server.3=0.0.0.0:2888:3888;2181
ZOO_4LW_COMMANDS_WHITELIST: mntr,conf,ruok
command: bash -c "/apache-zookeeper-3.5.8-bin/bin/zkCli.sh -server zoo2:2181 create /solr '' && zkServer.sh start-foreground"
networks:
- solr
volumes:
- 'zoo3_data:/data'
depends_on:
- zoo1
- zoo2
solr1:
image: solr:8.3
container_name: solr1
ports:
- "8981:8983"
environment:
- ZK_HOST=zoo1:2181,zoo2:2181,zoo3:2181/solr
volumes:
- 'solr1varsolr:/var/solr'
networks:
- solr
depends_on:
- zoo1
- zoo2
- zoo3
solr2:
image: solr:8.3
container_name: solr2
ports:
- "8982:8983"
environment:
- ZK_HOST=zoo1:2181,zoo2:2181,zoo3:2181/solr
volumes:
- 'solr2varsolr:/var/solr'
networks:
- solr
depends_on:
- zoo1
- zoo2
- zoo3
solr3:
image: solr:8.3
container_name: solr3
ports:
- "8983:8983"
environment:
- ZK_HOST=zoo1:2181,zoo2:2181,zoo3:2181/solr
volumes:
- 'solr3varsolr:/var/solr'
networks:
- solr
depends_on:
- zoo1
- zoo2
- zoo3
networks:
solr:
volumes:
zoo1_data:
zoo2_data:
zoo3_data:
solr1varsolr:
solr2varsolr:
solr3varsolr:
From what I can tell, the second part of the command, which is responsible for starting ZooKeeper in the container, is not getting executed, but I cannot figure out why. Also, is there any other way to achieve this?
Resolved
Found the reason: "exec" was missing before "zkServer.sh start-foreground" in the command option, which resulted in the container being shut down after the command ran. With exec, zkServer.sh replaces the shell as the container's main process (PID 1), so the container keeps running and stop signals reach ZooKeeper directly.
Thanks @Yoeri Van Nieuwerburg for the sample compose file to compare with.
working compose file :
version: '3.7'
services:
zoo1:
image: zookeeper:3.5
container_name: zoo1
restart: always
hostname: zoo1
ports:
- 2181:2181
environment:
ZOO_4LW_COMMANDS_WHITELIST: mntr,conf,ruok
ZOO_MY_ID: 1
ZOO_SERVERS: server.1=0.0.0.0:2888:3888;2181 server.2=zoo2:2888:3888;2181 server.3=zoo3:2888:3888;2181
networks:
- solr
volumes:
- 'zoo1_data:/data'
zoo2:
image: zookeeper:3.5
container_name: zoo2
restart: always
hostname: zoo2
ports:
- 2182:2181
environment:
ZOO_4LW_COMMANDS_WHITELIST: mntr,conf,ruok
ZOO_MY_ID: 2
ZOO_SERVERS: server.1=zoo1:2888:3888;2181 server.2=0.0.0.0:2888:3888;2181 server.3=zoo3:2888:3888;2181
networks:
- solr
volumes:
- 'zoo2_data:/data'
zoo3:
image: zookeeper:3.5
container_name: zoo3
restart: always
hostname: zoo3
ports:
- 2183:2181
environment:
ZOO_MY_ID: 3
ZOO_SERVERS: server.1=zoo1:2888:3888;2181 server.2=zoo2:2888:3888;2181 server.3=0.0.0.0:2888:3888;2181
ZOO_4LW_COMMANDS_WHITELIST: mntr,conf,ruok
command: bash -c "/apache-zookeeper-3.5.8-bin/bin/zkCli.sh -server zoo2:2181 create /solr '' && exec zkServer.sh start-foreground"
networks:
- solr
volumes:
- 'zoo3_data:/data'
depends_on:
- zoo1
- zoo2
solr1:
image: solr:8.3
container_name: solr1
ports:
- "8981:8983"
environment:
- ZK_HOST=zoo1:2181,zoo2:2181,zoo3:2181/solr
volumes:
- 'solr1varsolr:/var/solr'
networks:
- solr
depends_on:
- zoo1
- zoo2
- zoo3
solr2:
image: solr:8.3
container_name: solr2
ports:
- "8982:8983"
environment:
- ZK_HOST=zoo1:2181,zoo2:2181,zoo3:2181/solr
volumes:
- 'solr2varsolr:/var/solr'
networks:
- solr
depends_on:
- zoo1
- zoo2
- zoo3
solr3:
image: solr:8.3
container_name: solr3
ports:
- "8983:8983"
environment:
- ZK_HOST=zoo1:2181,zoo2:2181,zoo3:2181/solr
volumes:
- 'solr3varsolr:/var/solr'
networks:
- solr
depends_on:
- zoo1
- zoo2
- zoo3
networks:
solr:
volumes:
zoo1_data:
zoo2_data:
zoo3_data:
solr1varsolr:
solr2varsolr:
solr3varsolr:
I may be missing something here, but I'm not sure why you're trying to "create" the solr directory on the zookeeper instance?
Solr is just an instance, connecting to the zookeeper node, right?
Or do you want to connect multiple systems to your ensemble? If so, I think you should check out this Solr doc, which shows exactly that ("Using a chroot").
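If the chroot is what you want, newer Solr releases can also create it for you; a sketch, assuming the official solr:8.3 image and the default compose network name <project>_solr:
docker run --rm --network <project>_solr solr:8.3 solr zk mkroot /solr -z zoo1:2181
That avoids scripting zkCli.sh inside the ZooKeeper container altogether.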
Anyway, if you just want to get it up and running without using the /solr endpoint, here's what one of my ZooKeeper nodes looks like in my docker-compose (for local development only):
zoo3:
image: library/zookeeper:3.5.7
container_name: zoo3
restart: always
hostname: zoo3
ports:
- 8384:8080
environment:
TZ: Europe/Paris
ZOO_MY_ID: 3
ZOO_SERVERS: server.1=zoo1:2888:3888;2181 server.2=zoo2:2888:3888;2181 server.3=zoo3:2888:3888;2181
networks:
- solr
command: >
sh -c "ln -snf /usr/share/zoneinfo/$TZ /etc/localtime &&
echo $TZ > /etc/timezone &&
sed -i 's/autopurge.purgeInterval=0/autopurge.purgeInterval=1/g' /conf/zoo.cfg &&
echo 4lw.commands.whitelist=mntr,conf,ruok >> /conf/zoo.cfg &&
exec zkServer.sh start-foreground"
The only thing you'll have to change in the solr services is the URL for zoo3, from zoo3:2181/solr to zoo3:2181... which is what's already being used for the other 2 nodes ;)
All the nodes use the same setup (except for the naming and the ZOO_SERVERS environment variable, of course :) )
Hope this helps; if not, I hope I didn't confuse you ;)

Docker - Slow Network Conditions

I have a docker-compose setup with several services, like so:
version: '3.6'
services:
web:
build:
context: ./services/web
dockerfile: Dockerfile-dev
volumes:
- './services/web:/usr/src/app'
ports:
- 5001:5000
environment:
- FLASK_ENV=development
- APP_SETTINGS=project.config.DevelopmentConfig
- DATABASE_URL=postgres://postgres:postgres@web-db:5432/web_dev
- DATABASE_TEST_URL=postgres://postgres:postgres@web-db:5432/web_test
- SECRET_KEY=my_precious
depends_on:
- web-db
- redis
web-db:
build:
context: ./services/web/projct/db
dockerfile: Dockerfile
ports:
- 5435:5432
environment:
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=postgres
nginx:
build:
context: ./services/nginx
dockerfile: Dockerfile-dev
restart: always
ports:
- 80:80
depends_on:
- web
- client
#- redis
client:
build:
context: ./services/client
dockerfile: Dockerfile-dev
volumes:
- './services/client:/usr/src/app'
- '/usr/src/app/node_modules'
ports:
- 3000:3000
environment:
- NODE_ENV=development
- REACT_APP_WEB_SERVICE_URL=${REACT_APP_WEB_SERVICE_URL}
depends_on:
- web
swagger:
build:
context: ./services/swagger
dockerfile: Dockerfile-dev
volumes:
- './services/swagger/swagger.json:/usr/share/nginx/html/swagger.json'
ports:
- 3008:8080
environment:
- URL=swagger.json
depends_on:
- web
scrapyrt:
image: vimagick/scrapyd:py3
restart: always
ports:
- '9080:9080'
volumes:
- ./services/web:/usr/src/app
working_dir: /usr/src/app/project/api
entrypoint: /usr/src/app/entrypoint-scrapyrt.sh
depends_on:
- web
redis:
image: redis:5.0.3-alpine
restart: always
expose:
- '6379'
ports:
- '6379:6379'
monitor:
image: dev3_web
ports:
- 5555:5555
command: flower -A celery_worker.celery --port=5555 --broker=redis://redis:6379/0
depends_on:
- web
- redis
worker-analysis:
image: dev3_web
restart: always
volumes:
- ./services/web:/usr/src/app
- ./services/web/celery_logs:/usr/src/app/celery_logs
command: celery worker -A celery_worker.celery --loglevel=DEBUG --logfile=celery_logs/worker_analysis.log -Q analysis
environment:
- CELERY_BROKER=redis://redis:6379/0
- CELERY_RESULT_BACKEND=redis://redis:6379/0
- FLASK_ENV=development
- APP_SETTINGS=project.config.DevelopmentConfig
- DATABASE_URL=postgres://postgres:postgres@web-db:5432/web_dev
- DATABASE_TEST_URL=postgres://postgres:postgres@web-db:5432/web_test
- SECRET_KEY=my_precious
depends_on:
- web
- redis
- web-db
links:
- redis:redis
- web-db:web-db
worker-scraping:
image: dev3_web
restart: always
volumes:
- ./services/web:/usr/src/app
- ./services/web/celery_logs:/usr/src/app/celery_logs
command: celery worker -A celery_worker.celery --loglevel=DEBUG --logfile=celery_logs/worker_scraping.log -Q scraping
environment:
- CELERY_BROKER=redis://redis:6379/0
- CELERY_RESULT_BACKEND=redis://redis:6379/0
- FLASK_ENV=development
- APP_SETTINGS=project.config.DevelopmentConfig
- DATABASE_URL=postgres://postgres:postgres@web-db:5432/web_dev
- DATABASE_TEST_URL=postgres://postgres:postgres@web-db:5432/web_test
- SECRET_KEY=my_precious
depends_on:
- web
- redis
- web-db
links:
- redis:redis
- web-db:web-db
worker-emailing:
image: dev3_web
restart: always
volumes:
- ./services/web:/usr/src/app
- ./services/web/celery_logs:/usr/src/app/celery_logs
command: celery worker -A celery_worker.celery --loglevel=DEBUG --logfile=celery_logs/worker_emailing.log -Q email
environment:
- CELERY_BROKER=redis://redis:6379/0
- CELERY_RESULT_BACKEND=redis://redis:6379/0
- FLASK_ENV=development
- APP_SETTINGS=project.config.DevelopmentConfig
- DATABASE_URL=postgres://postgres:postgres@web-db:5432/web_dev
- DATABASE_TEST_URL=postgres://postgres:postgres@web-db:5432/web_test
- SECRET_KEY=my_precious
depends_on:
- web
- redis
- web-db
links:
- redis:redis
- web-db:web-db
worker-learning:
image: dev3_web
restart: always
volumes:
- ./services/web:/usr/src/app
- ./services/web/celery_logs:/usr/src/app/celery_logs
command: celery worker -A celery_worker.celery --loglevel=DEBUG --logfile=celery_logs/worker_ml.log -Q machine_learning
environment:
- CELERY_BROKER=redis://redis:6379/0
- CELERY_RESULT_BACKEND=redis://redis:6379/0
- FLASK_ENV=development
- APP_SETTINGS=project.config.DevelopmentConfig
- DATABASE_URL=postgres://postgres:postgres@web-db:5432/web_dev
- DATABASE_TEST_URL=postgres://postgres:postgres@web-db:5432/web_test
- SECRET_KEY=my_precious
depends_on:
- web
- redis
- web-db
links:
- redis:redis
- web-db:web-db
worker-periodic:
image: dev3_web
restart: always
volumes:
- ./services/web:/usr/src/app
- ./services/web/celery_logs:/usr/src/app/celery_logs
command: celery beat -A celery_worker.celery --schedule=/tmp/celerybeat-schedule --loglevel=DEBUG --pidfile=/tmp/celerybeat.pid
environment:
- CELERY_BROKER=redis://redis:6379/0
- CELERY_RESULT_BACKEND=redis://redis:6379/0
- FLASK_ENV=development
- APP_SETTINGS=project.config.DevelopmentConfig
- DATABASE_URL=postgres://postgres:postgres@web-db:5432/web_dev
- DATABASE_TEST_URL=postgres://postgres:postgres@web-db:5432/web_test
- SECRET_KEY=my_precious
depends_on:
- web
- redis
- web-db
links:
- redis:redis
- web-db:web-db
docker-compose -f docker-compose-dev.yml up -d and docker ps give me:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
396d7a1a5443 dev3_nginx "nginx -g 'daemon of…" 23 hours ago Up 18 minutes 0.0.0.0:80->80/tcp dev3_nginx_1
8ec7a51e2c2a dev3_web "celery worker -A ce…" 24 hours ago Up 19 minutes dev3_worker-analysis_1
e591e6445c64 dev3_web "celery worker -A ce…" 24 hours ago Up 19 minutes dev3_worker-learning_1
4d1fd17be3cb dev3_web "celery worker -A ce…" 24 hours ago Up 19 minutes dev3_worker-scraping_1
d25c40060fed dev3_web "celery beat -A cele…" 24 hours ago Up 17 seconds dev3_worker-periodic_1
76df1a600afa dev3_web "celery worker -A ce…" 24 hours ago Up 18 minutes dev3_worker-emailing_1
3442b0ce5d56 vimagick/scrapyd:py3 "/usr/src/app/entryp…" 24 hours ago Up 20 minutes 6800/tcp, 0.0.0.0:9080->9080/tcp dev3_scrapyrt_1
81d3ccea4de4 dev3_client "npm start" 24 hours ago Up 19 minutes 0.0.0.0:3000->3000/tcp dev3_client_1
aff5ecf951d2 dev3_web "flower -A celery_wo…" 24 hours ago Up 10 seconds 0.0.0.0:5555->5555/tcp dev3_monitor_1
864f17f39d54 dev3_swagger "/start.sh" 24 hours ago Up 19 minutes 80/tcp, 0.0.0.0:3008->8080/tcp dev3_swagger_1
e69476843236 dev3_web "/usr/src/app/entryp…" 24 hours ago Up 19 minutes 0.0.0.0:5001->5000/tcp dev3_web_1
22fd91b1ab6e redis:5.0.3-alpine "docker-entrypoint.s…" 24 hours ago Up 20 minutes 0.0.0.0:6379->6379/tcp dev3_redis_1
3a0b2115dd8e dev3_web-db "docker-entrypoint.s…" 24 hours ago Up 19 minutes 0.0.0.0:5435->5432/tcp dev3_web-db_1
They are all up, but I'm facing exceedingly slow network conditions with a lot of instability. I have tried to check connectivity between containers and catch any occasional lag, like so:
docker container exec -it e69476843236 ping aff5ecf951d2
PING aff5ecf951d2 (172.18.0.13): 56 data bytes
64 bytes from 172.18.0.13: seq=0 ttl=64 time=0.504 ms
64 bytes from 172.18.0.13: seq=1 ttl=64 time=0.254 ms
64 bytes from 172.18.0.13: seq=2 ttl=64 time=0.191 ms
64 bytes from 172.18.0.13: seq=3 ttl=64 time=0.168 ms
but the timing looks alright in these tests, though now and then I get ping: bad address 'aff5ecf951d2' when some service goes down.
Sometimes I get this error:
ERROR: An HTTP request took too long to complete. Retry with --verbose to obtain debug information.
If you encounter this issue regularly because of slow network conditions, consider setting COMPOSE_HTTP_TIMEOUT to a higher value (current value: 60).
And too many times I just have to restart Docker to make it work.
How can I use docker inspect (or something deeper) to investigate these slow network conditions and figure out what is wrong? Can network issues be related to volumes?
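A few quick checks that usually narrow this down (hedged suggestions, not specific to this compose file; the network name is assumed from the dev3 project prefix):
docker stats --no-stream
docker events
docker network inspect dev3_default
docker stats shows per-container CPU and memory, where a memory-starved host shows up first; docker events, left running while you reproduce the slowness, will show die/restart events; the network inspect confirms all services sit on the expected bridge. Intermittent ping: bad address errors are consistent with containers being killed and restarted, since the DNS name simply disappears while a container is down. Volumes rarely cause network latency as such, but heavy bind-mount I/O (for example a mounted node_modules tree on macOS) can starve the host enough to look like a network problem.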
The problem manifested itself as the number of containers and the complexity of the app grew (something you should always keep an eye on).
In my case, I had changed one of the images from Alpine to Slim-Buster (Debian), which is significantly larger.
It turned out I could fix it by simply going to Docker 'Preferences', clicking on 'Advanced', and increasing the memory allocation.
Now it runs smoothly again.
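If you'd rather cap individual containers than raise the global allocation (a sketch; the container name is taken from the docker ps output above):
docker update --memory=1g --memory-swap=1g dev3_worker-analysis_1
Note that classic docker-compose with a version 3.x file only applies deploy.resources limits in swarm mode, so docker update is the quickest way to experiment on a plain compose setup.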

Docker multiple MYSQL containers

Docker newbie here.
What I'm trying to achieve is to run multiple MySQL containers with docker compose, in addition to an nginx, a PHP and a PhpMyAdmin container.
This is my docker-compose.yml:
version: '3'
services:
server:
build:
context: ./
dockerfile: server.docker
volumes:
- ./:/var/www
ports:
- "8080:80"
links:
- app
app:
build:
context: ./
dockerfile: app.docker
volumes:
- ./:/var/www
links:
- db_callcenter
- db_forecast
- db_logistics
- db_products
- db_rm
- db_rma
- db_settings
- db_tasks
- db_users
db_callcenter:
image: mysql:5.7
environment:
- "MYSQL_ROOT_PASSWORD=secret"
- "MYSQL_DATABASE=prj_callcenter"
ports:
- "33061:3306"
volumes:
- mysql_bkp:/var/lib/mysql
db_forecast:
image: mysql:5.7
environment:
- "MYSQL_ROOT_PASSWORD=secret"
- "MYSQL_DATABASE=prj_forecast"
ports:
- "33062:3306"
volumes:
- mysql_bkp:/var/lib/mysql
db_logistics:
image: mysql:5.7
environment:
- "MYSQL_ROOT_PASSWORD=secret"
- "MYSQL_DATABASE=prj_logistics"
ports:
- "33063:3306"
volumes:
- mysql_bkp:/var/lib/mysql
db_products:
image: mysql:5.7
environment:
- "MYSQL_ROOT_PASSWORD=secret"
- "MYSQL_DATABASE=prj_products"
ports:
- "33064:3306"
volumes:
- mysql_bkp:/var/lib/mysql
db_rm:
image: mysql:5.7
environment:
- "MYSQL_ROOT_PASSWORD=secret"
- "MYSQL_DATABASE=prj_rm"
ports:
- "33065:3306"
volumes:
- mysql_bkp:/var/lib/mysql
db_rma:
image: mysql:5.7
environment:
- "MYSQL_ROOT_PASSWORD=secret"
- "MYSQL_DATABASE=prj_rma"
ports:
- "33066:3306"
volumes:
- mysql_bkp:/var/lib/mysql
db_settings:
image: mysql:5.7
environment:
- "MYSQL_ROOT_PASSWORD=secret"
- "MYSQL_DATABASE=prj_settings"
ports:
- "33067:3306"
volumes:
- mysql_bkp:/var/lib/mysql
db_tasks:
image: mysql:5.7
environment:
- "MYSQL_ROOT_PASSWORD=secret"
- "MYSQL_DATABASE=prj_tasks"
ports:
- "33068:3306"
volumes:
- mysql_bkp:/var/lib/mysql
db_users:
image: mysql:5.7
environment:
- "MYSQL_ROOT_PASSWORD=secret"
- "MYSQL_DATABASE=prj_users"
ports:
- "33069:3306"
volumes:
- mysql_bkp:/var/lib/mysql
pma:
image: phpmyadmin/phpmyadmin
environment:
- "PMA_USER=root"
- "PMA_PASSWORD=secret"
ports:
- "8001:80"
links:
- db_callcenter
- db_forecast
- db_logistics
- db_products
- db_rm
- db_rma
- db_settings
- db_tasks
- db_users
volumes:
mysql_bkp:
But none of the MySQL containers are created. When I run docker ps I get:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
0228e9c8a267 phpmyadmin/phpmyadmin "/run.sh phpmyadmin" About a minute ago Up 30 seconds 0.0.0.0:8001->80/tcp prj_pma_1
e6c6b11905f1 prj_server "nginx -g 'daemon ..." 22 minutes ago Up 2 minutes 0.0.0.0:8080->80/tcp prj_server_1
2e7dd484c6e5 prj_app "docker-php-entryp..." 24 minutes ago Up 2 minutes 9000/tcp prj_app_1
UPDATE:
docker logs shows:
Unable to lock ./ibdata1 error: 11
or
InnoDB: Unsupported redo log format.
I don't know what I'm doing wrong or how I should start debugging. Any help would be greatly appreciated.
You can't have multiple mysql processes sharing the same data directory. In your compose file, every database container is using the same mysql_bkp volume. You will need to either create one volume per container, or configure mysql to use a unique subdirectory of /var/lib/mysql for storing data.
If you simply remove the volumes: key from each database service, they will all get a unique anonymous volume (because that's how the mysql image is configured).
Alternatively, you can declare and mount a separate volume for each service:
services:
db_logistics:
image: mysql:5.7
volumes:
- mysql_bkp_logistics:/var/lib/mysql
db_products:
image: mysql:5.7
volumes:
- mysql_bkp_products:/var/lib/mysql
volumes:
mysql_bkp_logistics:
mysql_bkp_products:
Etc.
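Once the stack is recreated, you can confirm that every database got its own volume and a clean data directory (the prj project prefix is assumed from the docker ps output above):
docker volume ls | grep mysql_bkp
docker logs --tail 20 prj_db_logistics_1
The Unable to lock ./ibdata1 errors should be gone, since no two mysqld processes share a data directory any more.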
