I'm trying to add sticky sessions on Docker Swarm. I started by deploying the backend and Traefik containers, but the Traefik dashboard isn't showing any providers.
loadbalancer:
image: registry.fif.tech/traefik:latest
command: --docker \
--docker.swarmmode \
--docker.watch \
--docker.exposedbydefault=false \
--web \
--entryPoints="Name:http Address::8001" \
--defaultentrypoints="http" \
--checknewversion=false \
--loglevel=DEBUG
ports:
- 8001:8001
- 9090:8080
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- /dev/null:/traefik.toml
deploy:
restart_policy:
condition: any
mode: replicated
replicas: 1
update_config:
delay: 2s
placement:
constraints: [node.role == manager]
networks:
- omni-net
web-desktop:
image: 'registry.fif.tech/omnichannel2-webdesktop:${TAG}'
command: dockerize -wait http://172.17.0.1:4001/ora-cmm-workflow-executor/PreProcessService?wsdl catalina.sh run
restart: always
deploy:
mode: replicated
replicas: 2
update_config:
parallelism: 1
delay: 10s
failure_action: continue
order: start-first
restart_policy:
condition: on-failure
delay: 5s
max_attempts: 3
window: 120s
ports:
- '9999:8080'
environment:
- TZ='${TZ}'
extra_hosts:
- "webdesktop:127.0.0.1"
- "cmm-server-jms:${CMM_JMS_SERVER_IP}"
- "techlog-server-jms:${TECHLOG_JMS_SERVER_IP}"
depends_on:
- "workflow"
- "redis-server"
secrets:
- DBMetadata
- DBSecuencial
- Desktop
- DesktopRedis
- DesktopKey
volumes:
- /logs-pool/tomcat:/cyberbank/logs
configs:
- source: recaptcha_config
target: /cyberbank/ebanking/v2/config/recaptcha.properties
logging:
driver: none
healthcheck:
test: ["CMD-SHELL", "curl --silent --fail http://localhost:8080/Techbank/sso || exit 1"]
interval: 30s
timeout: 2s
retries: 26
start_period: 2m
labels:
- "traefik.enable=true"
- "traefik.docker.network=omnichannel2_omni-net"
- "traefik.port=9999"
- "traefik.frontend.rule=PathPrefix:/Techbank;"
- "traefik.backend.loadbalancer.sticky=true"
networks:
- omni-net
Is there any problem with the stack definition?
In swarm mode the Traefik labels must be declared on the service rather than on the container, so move your labels into the deploy section.
https://docs.docker.com/compose/compose-file/#labels-1
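For example, a minimal sketch of the web-desktop service with those labels moved under deploy (label values copied from your file; note that in swarm mode traefik.port most likely needs to be the container port 8080 rather than the published port 9999):
web-desktop:
  image: 'registry.fif.tech/omnichannel2-webdesktop:${TAG}'
  # ... other keys unchanged ...
  deploy:
    mode: replicated
    replicas: 2
    labels:
      - "traefik.enable=true"
      - "traefik.docker.network=omnichannel2_omni-net"
      # container port, not the published host port, since Traefik reaches
      # the tasks directly over the Docker network
      - "traefik.port=8080"
      - "traefik.frontend.rule=PathPrefix:/Techbank"
      - "traefik.backend.loadbalancer.sticky=true"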
I created a simple Traefik instance with 2 services, HTTP only. I'm getting a Gateway Timeout for both services. This is the single file where I define my services and the Traefik proxy.
version: '3.4'
services:
reverse-proxy:
image: traefik:2.0 # The official Traefik docker image
ports:
- "80:80" # The HTTP port
- "10553:8080" # The Web UI (enabled by --api)
volumes:
- /var/run/docker.sock:/var/run/docker.sock # So that Traefik can listen to the Docker events
networks:
- default
command:
- "--api.insecure=true"
- "--providers.docker=true"
- "--providers.docker.network=demo_swarm_network"
- "--providers.docker.exposedbydefault=false"
- "--providers.docker.swarmMode=true"
- "--entrypoints.web.address=:80"
deploy:
mode: global
placement:
constraints:
- node.role == manager
update_config:
parallelism: 1
delay: 10s
restart_policy:
condition: on-failure
xxxxx-authentication-api:
image: xxxx_authentication_api_nightly:9999
deploy:
labels:
- "traefik.enable=true"
- "traefik.docker.lbswarm=true"
- "traefik.docker.network=demo_swarm_network"
- "traefik.http.routers.authenticationapi.rule=PathPrefix(`/api/authentication`)"
- "traefik.http.routers.authenticationapi.entrypoints=web"
- "traefik.http.services.xxxxx-authentication-api.loadbalancer.server.port=3000"
- "traefik.http.services.xxxxx-authentication-api.loadbalancer.server.scheme=http"
replicas: 1
update_config:
parallelism: 1
delay: 10s
order: stop-first
command: node ./server.js
environment:
- NODE_ENV=authentication
- LOG_LEVEL=info
- NODE_CONFIG_DIR=./config
networks:
- default
ports:
- "3000"
xxxxx-authentication-app:
image: xxxxx_authentication_app_nightly:9999
deploy:
labels:
- "traefik.enable=true"
- "traefik.docker.lbswarm=true"
- "traefik.docker.network=demo_swarm_network"
- "traefik.http.routers.authenticationapp.rule=PathPrefix(`/authentication`)"
- "traefik.http.routers.authenticationapp.entrypoints=web"
- "traefik.http.services.xxxxx-authentication-app.loadbalancer.server.port=80"
- "traefik.http.services.xxxxx-authentication-app.loadbalancer.server.scheme=http"
replicas: 1
update_config:
parallelism: 1
delay: 10s
order: stop-first
networks:
- default
ports:
- "80"
networks:
default:
external:
name: demo_swarm_network
The services are up and running, and so are the containers. Traefik is also running, but when I try localhost:80/api/authentication or localhost:80/authentication I get a Gateway Timeout.
Where is Traefik sending my requests? I've confirmed on the host ports that the apps behind both endpoints are running.
What's missing in my configuration?
Huzzah! The timeouts disappeared when I recreated the demo_swarm_network network with the overlay driver.
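For anyone hitting the same thing, a minimal sketch of the fix (the network name is taken from the compose file above; --attachable is an assumption on my part):
# recreate the external network as an overlay network:
#   docker network rm demo_swarm_network
#   docker network create --driver overlay --attachable demo_swarm_network
# the stack file can keep referencing it as an external network:
networks:
  default:
    external:
      name: demo_swarm_network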
I have two docker-compose.*.yml files, one for the testing stage and one for production. The testing-stage file is run with docker compose and the production one with docker swarm.
The docker compose setup works fine. With the production docker swarm setup I get a 504 timeout when accessing the rabbitmq management endpoint.
Since the logs of both containers, traefik as well as rabbitmq, do not show any errors, I do not know how to debug this.
Here are both files:
docker-compose.testing-stage.yml
(working example, executed with docker compose)
version: '3.7'
services:
traefik:
image: traefik:v2.2
hostname: traefik
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- /root/acme.json:/root/acme.json
- /root/credentials.txt:/root/credentials.txt
ports:
- 80:80
- 443:443
command:
- --api=true
- --log.level=WARN
- --providers.docker=true
- --entrypoints.web.address=:80
- --entrypoints.websecure.address=:443
- --providers.docker.exposedByDefault=false
- --certificatesresolvers.secure.acme.httpchallenge=true
- --certificatesresolvers.secure.acme.httpchallenge.entrypoint=web
- --certificatesresolvers.secure.acme.email=${MAIL_ADDRESS}
- --certificatesresolvers.secure.acme.storage=/root/acme.json
labels:
- traefik.enable=true
# dashboard
- traefik.http.routers.traefik.service=api@internal
- traefik.http.routers.traefik.rule=Host(`monitor.example.org`)
- traefik.http.routers.traefik.tls.certresolver=secure
- traefik.http.routers.traefik.middlewares=auth
- traefik.http.services.traefik.loadbalancer.server.port=8080
- traefik.http.middlewares.auth.basicauth.usersfile=/root/credentials.txt
# https redirect
- traefik.http.routers.detour.rule=hostregexp(`{host:[a-z-.]+}`)
- traefik.http.routers.detour.entrypoints=web
- traefik.http.routers.detour.middlewares=redirect-to-https
- traefik.http.middlewares.redirect-to-https.redirectscheme.scheme=https
- traefik.http.middlewares.sslheader.headers.customrequestheaders.X-Forwarded-Proto=https
- traefik.http.services.dummy-svc.loadbalancer.server.port=9999
rabbitmq:
image: registry.exampe.com/root/blicc/rabbitmq:test
hostname: rabbitmq
environment:
- RABBITMQ_ERLANG_COOKIE=${RABBITMQ_PASSWORD}
- RABBITMQ_DEFAULT_PASS=${RABBITMQ_PASSWORD}
- RABBITMQ_DEFAULT_USER=admin
ports:
- 15672:15672
labels:
- traefik.enable=true
- traefik.http.routers.rabbitmq.rule=Host(`messaging.example.org`)
- traefik.http.routers.rabbitmq.tls.certresolver=secure
- traefik.http.services.rabbitmq.loadbalancer.server.port=15672
docker-compose.prod.yml
(example which gives a timeout on messaging.prod-example.org, executed with docker swarm)
version: '3.7'
services:
traefik:
image: traefik:v2.2
hostname: traefik
ports:
- 80:80
- 443:443
command:
# entry points
- --api=true
- --entrypoints.web.address=:80
- --entrypoints.websecure.address=:443
# tls certificates
- --certificatesresolvers.secure.acme.httpchallenge=true
- --certificatesresolvers.secure.acme.httpchallenge.entrypoint=web
- --certificatesresolvers.secure.acme.email=${MAIL_ADDRESS}
- --certificatesresolvers.secure.acme.storage=/root/acme.json
# metrics
- --metrics=true
- --metrics.prometheus=true
# docker
- --providers.docker=true
- --providers.docker.exposedByDefault=false
- --providers.docker.swarmMode=true
- --providers.docker.network=traefik-public
- --providers.docker.endpoint=unix:///var/run/docker.sock
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- /root/acme.json:/root/acme.json
- /root/credentials.txt:/root/credentials.txt
deploy:
replicas: 1
update_config:
parallelism: 1
order: start-first
failure_action: rollback
delay: 10s
rollback_config:
parallelism: 0
order: stop-first
restart_policy:
condition: any
delay: 5s
max_attempts: 3
window: 120s
placement:
constraints:
- node.role == manager
labels:
- traefik.enable=true
# dashboard
- traefik.http.routers.traefik.service=api@internal
- traefik.http.routers.traefik.rule=Host(`monitor.prod-example.org`)
- traefik.http.routers.traefik.tls.certresolver=secure
- traefik.http.routers.traefik.middlewares=auth
- traefik.http.middlewares.auth.basicauth.usersfile=/root/credentials.txt
- traefik.http.services.traefik.loadbalancer.server.port=8080
# https redirect
- traefik.http.routers.detour.rule=hostregexp(`{host:[a-z-.]+}`)
- traefik.http.routers.detour.entrypoints=web
- traefik.http.routers.detour.middlewares=redirect-to-https
- traefik.http.middlewares.redirect-to-https.redirectscheme.scheme=https
- traefik.http.middlewares.sslheader.headers.customrequestheaders.X-Forwarded-Proto=https
- traefik.http.services.dummy-svc.loadbalancer.server.port=9999
rabbitmq:
image: registry.exampe.com/root/blicc/rabbitmq:latest
hostname: rabbitmq
environment:
- RABBITMQ_ERLANG_COOKIE=${RABBITMQ_PASSWORD}
- RABBITMQ_DEFAULT_PASS=${RABBITMQ_PASSWORD}
- RABBITMQ_DEFAULT_USER=admin
ports:
- 15672:15672
deploy:
replicas: 1
update_config:
parallelism: 1
order: start-first
failure_action: rollback
delay: 10s
rollback_config:
parallelism: 0
order: stop-first
restart_policy:
condition: any
delay: 5s
max_attempts: 3
window: 120s
placement:
constraints:
- node.role == manager
labels:
- traefik.enable=true
- traefik.http.routers.rabbitmq.rule=Host(`messaging.prod-example.org`)
- traefik.http.routers.rabbitmq.tls.certresolver=secure
- traefik.http.services.rabbitmq.loadbalancer.server.port=15672
Both servers run Ubuntu 18.04 with the same firewall and the same ports exposed. I am guessing that I made some mistake in the docker swarm setup for traefik, but I cannot figure out what. The only thing I basically changed was putting the labels under deploy.
The rabbitmq container has the UI exposed on port 15672, which I am mapping with the load balancer to port 443 on messaging.prod-example.org. Nevertheless, this endpoint gives me a timeout.
Does anyone see the misconfiguration I am making here?
Maybe you forgot to set "entrypoints" in the rabbitmq labels, like below:
traefik.http.routers.rabbitmq.entrypoints=XXX
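For example, a minimal sketch of the rabbitmq deploy labels with an entrypoint added (websecure is an assumption on my part, matching the --entrypoints.websecure.address=:443 entrypoint defined in the traefik command above):
labels:
  - traefik.enable=true
  - traefik.http.routers.rabbitmq.rule=Host(`messaging.prod-example.org`)
  # attach the router explicitly to the HTTPS entrypoint, as suggested above
  - traefik.http.routers.rabbitmq.entrypoints=websecure
  - traefik.http.routers.rabbitmq.tls.certresolver=secure
  - traefik.http.services.rabbitmq.loadbalancer.server.port=15672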
I have a docker-compose setup with Symfony on Apache and Angular on nginx. More docker-compose projects may be run alongside it, so I want to make my own DNS-like routing using Traefik: I want to set a hostname for each app, run docker-compose up, and resolve the apps by hostname once they are ready.
Traefik docker-compose:
version: '3.1'
networks:
proxy:
external: true
internal:
external: false
services:
traefik:
image: traefik:v2.1
command: --api.insecure=true --providers.docker
labels:
- traefik.frontend.rule=Host:monitor.docker.localhost
- traefik.port=8080
networks:
- proxy
ports:
- 80:80
- 8080:8080
volumes:
- /var/run/docker.sock:/var/run/docker.sock
Apps docker-compose:
# Run docker-compose build
# Run docker-compose up
# Live long and prosper
version: '3.1'
networks:
proxy:
external: true
internal:
external: false
services:
apache:
build: .docker/apache
container_name: sf4_apache
volumes:
- .docker/config/vhosts:/etc/apache2/sites-enabled
- ./backend:/home/wwwroot/sf4
depends_on:
- php
labels:
- traefik.http.routers.sf4_apache.rule=Host(`symfony.docker.localhost`)
- traefik.http.services.apache.loadbalancer.server.port=80
networks:
- internal
- proxy
php:
build: .docker/php
container_name: sf4_php
volumes:
- ./backend:/home/wwwroot/sf4
- ./executor:/home/wwwroot/pipe
networks:
- internal
labels:
- traefik.enable=false
nginx:
container_name: angular_nginx
build: .docker/nginx
volumes:
- ./frontend/dist/frontend:/usr/share/nginx/html
ports:
- "81:80"
- "443:443"
labels:
- traefik.http.routers.angular_nginx.rule=Host(`angular.docker.localhost`)
networks:
- internal
- proxy
node:
build: .docker/node
container_name: angular_node
ports:
- 4200:4200
volumes:
- ./frontend:/home/node/app/frontend
tty: true
command:
- /bin/sh
- -c
- |
cd /home/node/app/frontend && npm start
expose:
- "4200"
networks:
- internal
labels:
- traefik.enable=false
I can't make it work: sometimes I get Bad Gateway on the domains (symfony.docker.localhost), sometimes it crashes because both servers are using the same port, so please help me run this correctly.
First, the Docker frontend and backend labels are deprecated in version 2.1; check this link.
Here is an example of doing the same in Traefik 2.1:
version: '3.7'
networks:
traefik:
external: true
volumes:
db_data:
services:
proxy:
image: traefik:v2.1
command:
- '--providers.docker=true'
- '--entryPoints.web.address=:80'
- '--providers.providersThrottleDuration=2s'
- '--providers.docker.watch=true'
- '--providers.docker.swarmMode=true'
- '--providers.docker.swarmModeRefreshSeconds=15s'
- '--providers.docker.exposedbydefault=false'
- '--providers.docker.defaultRule=Host("local.me")'
- '--accessLog.bufferingSize=0'
- '--api=true'
- '--api.dashboard=true'
- '--api.insecure=true'
- '--ping.entryPoint=web'
volumes:
- '/var/run/docker.sock:/var/run/docker.sock:ro'
ports:
- '80:80'
- '8080:8080'
deploy:
restart_policy:
condition: any
delay: 5s
max_attempts: 3
window: 120s
update_config:
delay: 10s
order: start-first
parallelism: 1
rollback_config:
parallelism: 0
order: stop-first
logging:
driver: json-file
options:
'max-size': '10m'
'max-file': '5'
networks:
- traefik
mysql:
image: mysql:5.7
command: mysqld --general-log=1 --general-log-file=/var/log/mysql/general-log.log
deploy:
restart_policy:
condition: any
delay: 5s
max_attempts: 3
window: 120s
update_config:
delay: 10s
order: start-first
parallelism: 1
rollback_config:
parallelism: 0
order: stop-first
logging:
driver: json-file
options:
'max-size': '10m'
'max-file': '5'
networks:
- traefik
volumes:
- db_data:/var/lib/mysql
environment:
MYSQL_ROOT_PASSWORD: dummy
MYSQL_DATABASE: rails_blog_production
rails_blog_web:
image: wshihadeh/rails_blog:demo-v1
command: 'web'
deploy:
labels:
- traefik.enable=true
- traefik.http.services.blog.loadbalancer.server.port=8080
- traefik.http.routers.blog.rule=Host(`blog.local.me`)
- traefik.http.routers.blog.service=blog
- traefik.http.routers.blog.entrypoints=web
- traefik.docker.network=traefik
restart_policy:
condition: any
delay: 5s
max_attempts: 3
window: 120s
update_config:
delay: 10s
order: start-first
parallelism: 1
rollback_config:
parallelism: 0
order: stop-first
logging:
driver: json-file
options:
'max-size': '10m'
'max-file': '5'
networks:
- traefik
depends_on:
- mysql
environment:
DATABASE_URL: mysql2://root:dummy@mysql/rails_blog_production
RAILS_SERVE_STATIC_FILES: 'true'
For more information, you can check this blog post.
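Applied to the traefik dashboard labels from the question, a rough sketch of the v2 equivalents of the old frontend/port labels (the router name dashboard is arbitrary, api@internal is Traefik's built-in dashboard service, and the web entrypoint is the one defined in the example above):
labels:
  - traefik.enable=true
  # v1: traefik.frontend.rule=Host:monitor.docker.localhost
  - traefik.http.routers.dashboard.rule=Host(`monitor.docker.localhost`)
  - traefik.http.routers.dashboard.entrypoints=web
  # v1: traefik.port=8080 -> in v2 the dashboard is reached via api@internal
  - traefik.http.routers.dashboard.service=api@internal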
I am trying to deploy a docker stack that includes my development environment, but in random cases I get the following error:
> failed to create service < service_name >: Cannot connect to the
> Docker daemon at unix:///var/run/docker.sock. Is the docker daemon
> running?
I then restart the docker daemon. Sometimes that requires killing the docker processes and shims. I delete the old stack and build it again. Sometimes docker finishes the build successfully, but the socket crashes at the start-up stage.
Also, all containers work properly when I start them in regular mode, without swarm or stack; they fail only inside swarm.
I used the following command to deploy:
> $ docker stack deploy dev-env-stc -c docker-compose.yml
The environment runs on Antergos Linux (Arch).
The layout is as shown in the diagram.
The Nginx container and docker networks were created with these commands:
>$ docker run --detach --name nginx-main --net dev-env-ext --ip 10.20.20.10 --publish 80:80 --publish 443:443 --volume /env-vol/nginx/conf:/etc/nginx:ro --volume /env-vol/nginx/www:/usr/var/www --volume /env-vol/nginx/logs:/usr/var/logs --volume /env-vol/nginx/run:/usr/var/run --volume /env-vol/ssl:/usr/var/ssl:ro nginx-webserver
>
> $ docker network create --driver=bridge --attachable --ipv6 --subnet fd19:eb5a:3d2f:f15d::/48 --subnet 10.20.20.0/24 --gateway 10.20.20.1 dev-env-ext
>
> $ docker network create --driver=bridge --attachable --ipv6 --subnet fd19:eb5a:3e30:f15d::/48 --subnet 10.20.30.0/24 --gateway 10.20.30.1 dev-env-int
>
> $ docker network create --driver=overlay --attachable --ipv6 --subnet fd19:eb5a:3c1e:f15d::/48 --subnet 10.20.40.0/24 --gateway 10.20.40.1 dev-env-swarm
>
> $ docker network connect dev-env-swarm --ip=10.20.40.10 nginx-main
>
> $ docker network connect dev-env-int --ip=10.20.30.10 nginx-main
My docker-compose.yml file:
version: '3.6'
volumes:
postgres-data:
driver: local
redis-data:
driver: local
networks:
dev-env-swarm:
external: true
services:
gitlab:
image: gitlab/gitlab-ce:latest
hostname: gitlab.testenv.top
external_links:
- nginx-main
ports:
- 22:22
healthcheck:
test: ["CMD", "curl", "-f", "https://localhost:443"]
interval: 1m30s
timeout: 10s
retries: 3
start_period: 60s
deploy:
mode: global
endpoint_mode: vip
resources:
limits:
cpus: "0.50"
memory: 4096M
reservations:
cpus: "0.10"
memory: 512M
restart_policy:
condition: on-failure
delay: 20s
max_attempts: 3
window: 300s
networks:
dev-env-swarm:
aliases:
- gitlab.testenv.top
dns:
- 10.10.10.10
- 8.8.8.8
volumes:
- /env-vol/gitlab/config:/etc/gitlab
- /env-vol/gitlab/logs:/var/log/gitlab
- /env-vol/gitlab/data:/var/opt/gitlab
external_links:
- nginx-main
redis:
env_file: .env
image: redis:3.2.6-alpine
hostname: redis.testenv.top
external_links:
- nginx-main
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:6379"]
interval: 1m30s
timeout: 10s
retries: 3
start_period: 60s
deploy:
mode: global
endpoint_mode: dnsrr
resources:
limits:
cpus: "0.20"
memory: 1024M
reservations:
cpus: "0.05"
memory: 128M
restart_policy:
condition: on-failure
delay: 20s
max_attempts: 3
window: 60s
volumes:
- redis-data:/var/lib/redis
command: redis-server --appendonly yes
networks:
dev-env-swarm:
aliases:
- redis.testenv.top
dns:
- 10.10.10.10
- 8.8.8.8
redisco:
image: rediscommander/redis-commander:latest
hostname: redisco.testenv.top
external_links:
- nginx-main
depends_on:
- redis
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:8081"]
interval: 1m30s
timeout: 10s
retries: 3
start_period: 60s
deploy:
mode: global
endpoint_mode: dnsrr
resources:
limits:
cpus: "0.20"
memory: 512M
reservations:
cpus: "0.05"
memory: 256M
restart_policy:
condition: on-failure
delay: 20s
max_attempts: 3
window: 60s
networks:
dev-env-swarm:
aliases:
- redisco.testenv.top
dns:
- 10.10.10.10
- 8.8.8.8
environment:
REDIS_PORT: 6379
REDIS_HOST: redis.testenv.top
plantuml:
image: plantuml/plantuml-server:tomcat
hostname: plantuml.testenv.top
external_links:
- nginx-main
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:8080"]
interval: 1m30s
timeout: 10s
retries: 3
start_period: 60s
deploy:
mode: global
endpoint_mode: dnsrr
resources:
limits:
cpus: "0.20"
memory: 1024M
reservations:
cpus: "0.05"
memory: 256M
restart_policy:
condition: on-failure
delay: 20s
max_attempts: 3
window: 60s
networks:
dev-env-swarm:
aliases:
- plantuml.testenv.top
dns:
- 10.10.10.10
- 8.8.8.8
portainer-agent:
image: portainer/agent
external_links:
- nginx-main
expose:
- 9001
deploy:
mode: global
endpoint_mode: dnsrr
resources:
limits:
cpus: "0.20"
memory: 1024M
reservations:
cpus: "0.05"
memory: 256M
restart_policy:
condition: on-failure
delay: 20s
max_attempts: 3
window: 60s
environment:
AGENT_CLUSTER_ADDR: tasks.portainer-agent
AGENT_PORT: 9001
LOG_LEVEL: debug
volumes:
- /var/run/docker.sock:/var/run/docker.sock
networks:
dev-env-swarm:
aliases:
- portainer-agent.testenv.top
deploy:
mode: global
portainer:
image: portainer/portainer
command: -H tcp://tasks.portainer-agent:9001 --tlsskipverify
depends_on:
- portainer-agent
external_links:
- nginx-main
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:9000"]
interval: 1m30s
timeout: 10s
retries: 3
start_period: 60s
deploy:
mode: global
endpoint_mode: dnsrr
resources:
limits:
cpus: "0.20"
memory: 2024M
reservations:
cpus: "0.05"
memory: 512M
restart_policy:
condition: on-failure
delay: 20s
max_attempts: 3
window: 60s
volumes:
- /env-vol/portainer/data:/data
hostname: portainer.testenv.top
networks:
dev-env-swarm:
aliases:
- portainer.testenv.top
dns:
- 10.10.10.10
- 8.8.8.8
pgadmin4:
image: dpage/pgadmin4:latest
hostname: pgadmin.testenv.top
external_links:
- nginx-main
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost"]
interval: 1m30s
timeout: 10s
retries: 3
start_period: 60s
deploy:
mode: global
endpoint_mode: dnsrr
resources:
limits:
cpus: "0.20"
memory: 1024M
reservations:
cpus: "0.05"
memory: 256M
restart_policy:
condition: on-failure
delay: 20s
max_attempts: 3
window: 60s
environment:
PGADMIN_DEFAULT_EMAIL: email@example.com
PGADMIN_DEFAULT_PASSWORD: PASWORD
networks:
dev-env-swarm:
aliases:
- pgadmin.testenv.top
dns:
- 10.10.10.10
- 8.8.8.8
volumes:
- /env-vol/pgadmin:/var/lib/pgadmin
The problem with the socket came from a broken Python installation from sources and manual installation of libraries. It looks like I had installed incompatible versions. After I reinstalled Python from the repository, the problem did not appear again.
I am trying to set up a SolrCloud cluster within Docker.
I am using docker-compose to bring the cloud up.
I create a cloud containing 3 ZooKeeper and 4 Solr containers (solr1, solr2, solr3, solr4).
Creating a distributed collection with 4 shards and 2 replicas works fine.
The problem is creating the collection in a mounted volume, so that I have a backup on a machine outside Docker.
I can mount the host directory into each container, but I can't get the collection created there.
When I create the distributed collection with
docker exec -it solr1 /opt/solr/bin/solr create_collection -c publications -s 4 -rf 2 -p 8983
I get directories for two cores in each container; e.g. for solr1 there are
/opt/solr/server/solr/publications_shard2_replica_n6
and
/opt/solr/server/solr/publications_shard4_replica_n14
and for solr2, I have
/opt/solr/server/solr/publications_shard1_replica_n1
and
/opt/solr/server/solr/publications_shard3_replica_n8
etc.
The names of cores are dynamic.
How can I have them created in my volume directory, which is
/root/solr/cores
My docker-compose.yml is as follows
version: "3.1"
services:
solr1:
image: solr:latest
environment:
- JVM_OPTS=-Xmx12g -Xms12g -XX:MaxPermSize=1024m
ports:
- "8983:8983"
restart: always
container_name: solr1
volumes:
- /root/solr/mycores:/opt/solr/server/solr/mycores
deploy:
mode: replicated
replicas: 2
resources:
limits:
memory: 1g
restart_policy:
condition: on-failure
links:
- zookeeper1
- zookeeper2
- zookeeper3
command: bash -c '/opt/solr/bin/solr start -f -z zookeeper1:2181,zookeeper2:2182,zookeeper2:2183 -m 1g'
solr2:
image: solr:latest
ports:
- "8984:8984"
restart: always
container_name: solr2
volumes:
- /root/solr/cores:/opt/solr/server/solr/mycores
deploy:
replicas: 2
resources:
limits:
memory: 1g
restart_policy:
condition: on-failure
links:
- zookeeper1
- zookeeper2
- zookeeper3
- solr1
command: bash -c '/opt/solr/bin/solr start -f -z zookeeper1:2181,zookeeper2:2182,zookeeper3:2183 -m 1g'
solr3:
image: solr:latest
ports:
- "8985:8985"
restart: always
container_name: solr3
volumes:
- /root/solr/cores:/opt/solr/server/solr/mycores
deploy:
replicas: 2
resources:
limits:
memory: 1g
restart_policy:
condition: on-failure
links:
- zookeeper1
- zookeeper2
- zookeeper3
- solr1
- solr2
command: bash -c '/opt/solr/bin/solr start -f -z zookeeper1:2181,zookeeper2:2182,zookeeper3:2183 -m 1g'
solr4:
image: solr:latest
ports:
- "8986:8986"
restart: always
container_name: solr4
volumes:
- /root/solr/cores:/opt/solr/server/solr/mycores
deploy:
replicas: 2
resources:
limits:
memory: 1g
restart_policy:
condition: on-failure
links:
- zookeeper1
- zookeeper2
- zookeeper3
- solr1
- solr2
- solr3
command: bash -c '/opt/solr/bin/solr start -f -z zookeeper1:2181,zookeeper2:2182,zookeeper3:2183 -m 1g'
zookeeper1:
image: jplock/zookeeper:latest
container_name: zookeeper1
ports:
- "2181:2181"
- "2888:2888"
- "3888:3888"
restart: always
zookeeper2:
image: jplock/zookeeper:latest
container_name: zookeeper2
ports:
- "2182:2182"
- "2889:2889"
- "3889:3889"
restart: always
zookeeper3:
image: jplock/zookeeper:latest
container_name: zookeeper3
ports:
- "2183:2183"
- "2890:2890"
- "3890:3890"
restart: always
I found a solution. I needed to add the -t parameter in the docker-compose.yml file and point it at the data directory /opt/solr/server/solr/mycores.
The edited line is below:
command: bash -c '/opt/solr/bin/solr start -f -z zookeeper1:2181,zookeeper2:2182,zookeeper2:2183 -m 1g -t /opt/solr/server/solr/mycores'
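For reference, a minimal sketch of how the volume mount and the -t data directory line up for one of the Solr services (the host path /root/solr/cores and the ZooKeeper connection string are taken from the question's solr2-solr4 services):
solr1:
  image: solr:latest
  volumes:
    # host directory that receives the dynamically named core directories
    - /root/solr/cores:/opt/solr/server/solr/mycores
  # -t points Solr's data directory at the same path that is mounted above
  command: bash -c '/opt/solr/bin/solr start -f -z zookeeper1:2181,zookeeper2:2182,zookeeper3:2183 -m 1g -t /opt/solr/server/solr/mycores'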