How to run SolrCloud with docker-compose

Please help me with my docker-compose file.
Right now I'm running Solr from a Dockerfile, but I need to change it to SolrCloud. I need 2 Solr instances, an internal ZooKeeper, and Docker (local).
This is the docker-compose file I have so far:
version: "3"
services:
  mongo:
    image: mongo:latest
    container_name: mongo
    hostname: mongo
    networks:
      - gsec
    ports:
      - 27018:27017
  sqlserver:
    image: microsoft/mssql-server-linux:latest
    hostname: sqlserver
    container_name: sqlserver
    environment:
      SA_PASSWORD: "#Password123!"
      ACCEPT_EULA: "Y"
    networks:
      - gsec
    ports:
      - 1403:1433
  solr:
    image: solr
    container_name: solr
    ports:
      - "8983:8983"
    networks:
      - gsec
    volumes:
      - data:/opt/solr/server/solr/mycores
    entrypoint:
      - docker-entrypoint.sh
      - solr-precreate
      - mycore
volumes:
  data:
networks:
  gsec:
    driver: bridge
Thank you in advance.

The Solr Docker image has a ZooKeeper server embedded into it. You just have to start Solr with the right parameters and add the ZooKeeper port mapping 9983:9983 to the docker-compose file:
solr:
  image: solr
  container_name: solr
  ports:
    - "9983:9983"
    - "8983:8983"
  networks:
    - gsec
  volumes:
    - data:/opt/solr/server/solr/mycores
  entrypoint:
    - docker-entrypoint.sh
    - solr
    - start
    - -c
    - -f
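Since the question asks for two Solr instances, a second node could join the first node's embedded ZooKeeper (which listens on the Solr port plus 1000, i.e. 9983). This is a sketch of mine, not part of the original answer; the service name solr2 and the host port 8984 are assumptions:

solr2:
  image: solr
  container_name: solr2
  ports:
    - "8984:8983"
  networks:
    - gsec
  environment:
    # ZK_HOST makes the image start in cloud mode and join the
    # embedded ZooKeeper running inside the first container
    - ZK_HOST=solr:9983
  depends_on:
    - solr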
SolrCloud is basically a Solr cluster where ZooKeeper is used to coordinate and configure the cluster.
Usually you use SolrCloud with Docker because you're learning how it works, or because you're preparing your application (locally?) to deploy in a bigger environment.
On the other hand, it doesn't make much sense to run SolrCloud if you don't have a distributed configuration, i.e. Solr and ZooKeeper running on different nodes.
SolrCloud is the kind of cluster you need when you have hundreds or even thousands of searches per second against collections of millions or even billions of documents.
Your cluster has to scale horizontally.

A version to use with an external ZooKeeper.
'-t' changes the data dir inside the container.
To see the other options, run: solr start -help
version: '3'
services:
  solr1:
    image: solr
    ports:
      - "8984:8984"
    entrypoint:
      - solr
    command:
      - start
      - -f
      - -c
      - -h
      - "10.1.0.157"
      - -p
      - "8984"
      - -z
      - "10.1.0.157:2181,10.1.0.157:2182,10.1.0.157:2183"
      - -m
      - 1g
      - -t
      - "/opt/solr/server/solr/mycores"
    volumes:
      - "./data1/mycores:/opt/solr/server/solr/mycores"

I use this setup locally to test three instances of Solr and three instances of ZooKeeper, based on the official example.
version: '3.7'
services:
  solr-1:
    image: solr:8.7
    container_name: solr-1
    ports:
      - "8981:8983"
    environment:
      - ZK_HOST=zoo-1:2181,zoo-2:2181,zoo-3:2181
    networks:
      - solr
    depends_on:
      - zoo-1
      - zoo-2
      - zoo-3
    # command:
    #   - solr-precreate
    #   - gettingstarted
  solr-2:
    image: solr:8.7
    container_name: solr-2
    ports:
      - "8982:8983"
    environment:
      - ZK_HOST=zoo-1:2181,zoo-2:2181,zoo-3:2181
    networks:
      - solr
    depends_on:
      - zoo-1
      - zoo-2
      - zoo-3
  solr-3:
    image: solr:8.7
    container_name: solr-3
    ports:
      - "8983:8983"
    environment:
      - ZK_HOST=zoo-1:2181,zoo-2:2181,zoo-3:2181
    networks:
      - solr
    depends_on:
      - zoo-1
      - zoo-2
      - zoo-3
  zoo-1:
    image: zookeeper:3.6
    container_name: zoo-1
    restart: always
    hostname: zoo-1
    volumes:
      - zoo1data:/data
    ports:
      - 2181:2181
    environment:
      ZOO_MY_ID: 1
      ZOO_SERVERS: server.1=0.0.0.0:2888:3888;2181 server.2=zoo-2:2888:3888;2181 server.3=zoo-3:2888:3888;2181
    networks:
      - solr
  zoo-2:
    image: zookeeper:3.6
    container_name: zoo-2
    restart: always
    hostname: zoo-2
    volumes:
      - zoo2data:/data
    ports:
      - 2182:2181
    environment:
      ZOO_MY_ID: 2
      ZOO_SERVERS: server.1=zoo-1:2888:3888;2181 server.2=0.0.0.0:2888:3888;2181 server.3=zoo-3:2888:3888;2181
    networks:
      - solr
  zoo-3:
    image: zookeeper:3.6
    container_name: zoo-3
    restart: always
    hostname: zoo-3
    volumes:
      - zoo3data:/data
    ports:
      - 2183:2181
    environment:
      ZOO_MY_ID: 3
      ZOO_SERVERS: server.1=zoo-1:2888:3888;2181 server.2=zoo-2:2888:3888;2181 server.3=0.0.0.0:2888:3888;2181
    networks:
      - solr
networks:
  solr:
# persist the zookeeper data in volumes
volumes:
  zoo1data:
    driver: local
  zoo2data:
    driver: local
  zoo3data:
    driver: local
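As a usage note (my addition, not from the original answer): save this as docker-compose.yml and run docker-compose up -d; the Cloud screen of the Admin UI on any node (e.g. http://localhost:8981/solr/#/~cloud) should then show all three Solr nodes. If you want compose to wait until a node is actually serving requests, a healthcheck along these lines could be added to each Solr service (a sketch that assumes curl is available in the image):

solr-1:
  # ...same as above, plus a readiness probe
  healthcheck:
    # hypothetical probe: poll the node's system info endpoint until it answers
    test: ["CMD-SHELL", "curl -sf http://localhost:8983/solr/admin/info/system || exit 1"]
    interval: 10s
    timeout: 5s
    retries: 10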

Related

Persist nifi data and volume

I want to make my NiFi data volume and configuration persist, meaning that even if I delete the container and run docker compose up again, I would like to keep what I built so far in my NiFi. I tried to mount volumes as follows in the volumes section of my docker-compose file; nevertheless it doesn't work, and my NiFi processors are not saved. How can I do it correctly? Below is my docker-compose.yaml file.
version: "3.7"
services:
  nifi:
    image: koroslak/nifi:latest
    container_name: nifi
    restart: always
    environment:
      - NIFI_HOME=/opt/nifi/nifi-current
      - NIFI_LOG_DIR=/opt/nifi/nifi-current/logs
      - NIFI_PID_DIR=/opt/nifi/nifi-current/run
      - NIFI_BASE_DIR=/opt/nifi
      - NIFI_WEB_HTTP_PORT=8080
    ports:
      - 9000:8080
    depends_on:
      - openldap
    volumes:
      - ./volume/nifi-current/state:/opt/nifi/nifi-current/state
      - ./volume/database/database_repository:/opt/nifi/nifi-current/repositories/database_repository
      - ./volume/flow_storage/flowfile_repository:/opt/nifi/nifi-current/repositories/flowfile_repository
      - ./volume/nifi-current/content_repository:/opt/nifi/nifi-current/repositories/content_repository
      - ./volume/nifi-current/provenance_repository:/opt/nifi/nifi-current/repositories/provenance_repository
      - ./volume/log:/opt/nifi/nifi-current/logs
      #- ./volume/conf:/opt/nifi/nifi-current/conf
  postgres:
    image: koroslak/postgres:latest
    container_name: postgres
    restart: always
    environment:
      - POSTGRES_PASSWORD=secret123
    ports:
      - 6000:5432
    volumes:
      - postgres:/var/lib/postgresql/data
  pgadmin:
    container_name: pgadmin
    image: dpage/pgadmin4:4.18
    restart: always
    environment:
      - PGADMIN_DEFAULT_EMAIL=admin
      - PGADMIN_DEFAULT_PASSWORD=admin
    ports:
      - 8090:80
  metabase:
    container_name: metabase
    image: metabase/metabase:v0.34.2
    restart: always
    environment:
      MB_DB_TYPE: postgres
      MB_DB_DBNAME: metabase
      MB_DB_PORT: 5432
      MB_DB_USER: metabase_admin
      MB_DB_PASS: secret123
      MB_DB_HOST: postgres
    ports:
      - 3000:3000
    depends_on:
      - postgres
  openldap:
    image: osixia/openldap:1.3.0
    container_name: openldap
    restart: always
    ports:
      - 38999:389
  # Mocked source systems
  jira-api:
    image: danielgtaylor/apisprout:latest
    container_name: jira-api
    restart: always
    ports:
      - 8000:8000
    command: https://raw.githubusercontent.com/mvrabel/nifi-postgres-metabase/master/api_examples/jira-api.json
  pipedrive-api:
    image: danielgtaylor/apisprout:latest
    container_name: pipedrive-api
    restart: always
    ports:
      - 8100:8000
    command: https://raw.githubusercontent.com/mvrabel/nifi-postgres-metabase/master/api_examples/pipedrive-api.yaml
  restcountries-api:
    image: danielgtaylor/apisprout:latest
    container_name: restcountries-api
    restart: always
    ports:
      - 8200:8000
    command: https://raw.githubusercontent.com/mvrabel/nifi-postgres-metabase/master/api_examples/restcountries-api.json
volumes:
  postgres:
  nifi:
  openldap:
  metabase:
  pgadmin:
Using NiFi Registry you can ensure that all the changes you make in your NiFi are committed to git, i.e. if you change some processor configuration, it will be reflected in your git repo.
As for the flow files, you may need to fix your volume mappings.
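As a hedged illustration of "fix your volume mappings" (my reading, not confirmed by the answerer): NiFi stores the flow you build in the UI under conf/ (flow.xml.gz), so the commented-out conf mount in the question is likely the missing piece:

nifi:
  image: koroslak/nifi:latest
  volumes:
    # the processors you build live in conf/flow.xml.gz,
    # so conf/ must be mounted for them to survive a container re-create
    - ./volume/conf:/opt/nifi/nifi-current/conf
    # ...keep the state and repository mounts from the question as well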

Setting global environment variables in docker-compose

I am looking for a way to set global environment variables in my docker-compose.yml file so that I won't need to duplicate them. What is the best way to do it? As you can see, some environment variables are repeated in multiple containers, and I don't want to re-write them many times.
version: "3.4"
services:
  test-db:
    image: postgres-test:1.4.0
    environment:
      - POSTGRES_PASSWORD=password
      - DB_NAME_1=dbname1
      - DB_USER_1=dbuser1
      - DB_PASS_1=dbpass1
      - DB_NAME_2=dbname2
      - DB_USER_2=dbuser2
      - DB_PASS_2=dbpass2
    volumes:
      - test-db-data:/var/lib/postgresql/data
    restart: on-failure
    ports:
      - 5432:5432
    networks:
      - test
  test-key:
    image: key-test:1.4.0
    environment:
      DB_VENDOR: POSTGRES
      DB_ADDR: test-db
      DB_DATABASE: dbname1
      DB_USER: dbuser1
      DB_PASSWORD: dbpass1
      KEY_USER: admin
      KEY_PASSWORD: admin
      #KEY_LOGLEVEL: DEBUG
      KEY_FRONTEND_URL: http://test-key:8080/auth
      KEY_IMPORT: /test-key/test_realm.json
    command:
      - "-Djboss.as.management.blocking.timeout=3600"
    ports:
      - 8080:8080
    volumes:
      - $PWD/test_realm.json:/test-key/test_realm.json
    networks:
      - test
    depends_on:
      - test-db
  test-bus:
    image: bus-test:1.5.0
    environment:
      PYTEST_HOST: test-pytest
      PYTEST_PORT: 8090
      TEST_DB_PW: dbpass2
      TEST_DB_USER: dbuser2
      TEST_DB_NAME: dbname2
      TEST_DB_HOST: test-db
      TEST_KAF_HOST: test-kaf
      TEST_KAF_PORT: 9092 # for a connection inside test-network
      #TEST_SERVER_STATE: PRODUCTION
      TEST_SERVER_STATE: DEVELOPMENT
    ports:
      - 5000:5000
    networks:
      - test
    depends_on:
      - test-db
      - test-key
You can define your variables in an env file named .env next to the docker-compose.yml file.
.env
dbname1_key=dbname1
dbuser1_key=dbuser1
dbpass1_key=dbpass1
Then you can use the variables in the docker-compose.yml file:
docker-compose.yml
version: "3.4"
services:
  test-db:
    image: postgres-test:1.4.0
    environment:
      - POSTGRES_PASSWORD=password
      - DB_NAME_1=${dbname1_key}
      - DB_USER_1=${dbuser1_key}
      - DB_PASS_1=${dbpass1_key}
      - DB_NAME_2=dbname2
      - DB_USER_2=dbuser2
      - DB_PASS_2=dbpass2
    volumes:
      - test-db-data:/var/lib/postgresql/data
    restart: on-failure
    ports:
      - 5432:5432
    networks:
      - test
...
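A further option, as a sketch of my own on top of this answer: compose file format 3.4+ also supports YAML anchors via extension fields, so a block of shared variables can be declared once and merged into each service. Note that in the question the two services use different variable names for the same values (DB_PASS_2 vs TEST_DB_PW), so this only helps where the names match:

version: "3.4"

# shared variables declared once (keys prefixed with x- are ignored by compose)
x-db-env: &db-env
  DB_NAME_2: dbname2
  DB_USER_2: dbuser2
  DB_PASS_2: dbpass2

services:
  test-db:
    image: postgres-test:1.4.0
    environment:
      # merge the shared block, then add service-specific variables
      <<: *db-env
      POSTGRES_PASSWORD: password
  test-bus:
    image: bus-test:1.5.0
    environment:
      <<: *db-env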

docker-compose: zipkin cannot connect to elasticsearch

I am trying to set up zipkin, elasticsearch, prometheus, and grafana with docker-compose.yml.
When I run the containers, I see this in the log:
dependencies_zipkin | 19/09/30 14:37:09 ERROR NetworkClient: Node [172.28.0.2:9200] failed (java.net.ConnectException: Connection refused (Connection refused)); no other nodes left - aborting...
I'm on MacOS X with Docker 2.1.0.3.
The content of my docker-compose.yml is this:
version: '3.7'
services:
  storage:
    image: openzipkin/zipkin-elasticsearch7
    container_name: elasticsearch
    ports:
      - "9200:9200"
    environment:
      - "xpack.security.enabled=false"
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    restart: unless-stopped
  prometheus:
    image: prom/prometheus:latest
    container_name: prometheus
    volumes:
      - $PWD/prometheus:/etc/prometheus/
      - /tmp/prometheus:/prometheus/data:rw
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
      - '--storage.tsdb.path=/prometheus'
      - '--web.console.libraries=/usr/share/prometheus/console_libraries'
      - '--web.console.templates=/usr/share/prometheus/consoles'
    ports:
      - "9090:9090"
    restart: unless-stopped
  zipkin:
    image: openzipkin/zipkin
    container_name: zipkin
    depends_on:
      - dependencies
      - storage
    environment:
      - "STORAGE_TYPE=elasticsearch"
      - "ES_HOSTS=storage"
    ports:
      - "9411:9411"
    restart: unless-stopped
  grafana:
    image: grafana/grafana
    container_name: grafana
    ports:
      - "3000:3000"
    restart: unless-stopped
  dependencies:
    image: openzipkin/zipkin-dependencies
    container_name: dependencies_zipkin
    depends_on:
      - storage
    environment:
      - "STORAGE_TYPE=elasticsearch"
      - "ES_HOSTS=storage"
When I connect to localhost:9200, I see that elasticsearch is working fine, and zipkin is deployed on port 9411, but I have this error:
ERROR: cannot load service names: server error (Service Unavailable)(due to the network error
In the log, I have this information:
dependencies_zipkin | 19/09/30 14:45:20 ERROR NetworkClient: Node [172.28.0.2:9200] failed (java.net.ConnectException: Connection refused (Connection refused)); no other nodes left - aborting...
and this one:
zipkin | java.lang.IllegalStateException: couldn't connect any of [Endpoint{storage:80, ipAddr=172.28.0.2, weight=1000}]
Any idea?
UPDATE
By using mysql it works fine, so the problem is at the level of elasticsearch.
I also tried using
"STORAGE_PORT_9200_TCP_ADDR=127.0.0.1"
but the issue still occurs.
UPDATE
As mentioned in the solution given by Brian, I have to use:
ES_HOSTS=http://storage:9300
The key is the port: I was using port 9200.
The error between zipkin and es disappears, but it still occurs between es and zipkin-dependencies.
The problem lies in your ES_HOSTS variable. From the docs:
ES_HOSTS: A comma separated list of elasticsearch base urls to connect to ex. http://host:9200.
Defaults to "http://localhost:9200".
So you will need: ES_HOSTS=http://storage:9200
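Applied to the compose file from the question, the zipkin service environment would then read as follows (a sketch of the answer's suggestion; note that the asker's update above reports needing port 9300 for zipkin-dependencies):

zipkin:
  image: openzipkin/zipkin
  environment:
    - STORAGE_TYPE=elasticsearch
    # service name of the elasticsearch container plus its HTTP port
    - ES_HOSTS=http://storage:9200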
Finally I have this file:
version: '3.7'
services:
  storage:
    image: openzipkin/zipkin-elasticsearch7
    container_name: elasticsearch
    ports:
      - 9200:9200
  zipkin:
    image: openzipkin/zipkin
    container_name: zipkin
    environment:
      - STORAGE_TYPE=elasticsearch
      - "ES_HOSTS=elasticsearch:9300"
    ports:
      - 9411:9411
    depends_on:
      - storage
  dependencies:
    image: openzipkin/zipkin-dependencies
    container_name: dependencies
    entrypoint: crond -f
    depends_on:
      - storage
    environment:
      - STORAGE_TYPE=elasticsearch
      - "ES_HOSTS=elasticsearch:9300"
      - "ES_NODES_WAN_ONLY=true"
  prometheus:
    image: prom/prometheus:latest
    container_name: prometheus
    volumes:
      - $PWD/prometheus:/etc/prometheus/
      - /tmp/prometheus:/prometheus/data:rw
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
      - '--storage.tsdb.path=/prometheus'
      - '--web.console.libraries=/usr/share/prometheus/console_libraries'
      - '--web.console.templates=/usr/share/prometheus/consoles'
    ports:
      - "9090:9090"
  grafana:
    image: grafana/grafana
    container_name: grafana
    depends_on:
      - prometheus
    ports:
      - "3000:3000"
The main differences are the usage of
"ES_HOSTS=elasticsearch:9300"
instead of
"ES_HOSTS=storage:9300"
and, in the dependencies configuration, the added entrypoint:
entrypoint: crond -f
This one is really the key to not having the exception when I start docker-compose.
To solve this issue, I checked this project: https://github.com/openzipkin/docker-zipkin
The remaining question is: why do I need to use entrypoint: crond -f?
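A plausible explanation, offered as my own inference rather than something from the thread: zipkin-dependencies is a one-shot Spark job that aggregates the dependency links and then exits, so without an overriding entrypoint the container stops as soon as the job finishes (or fails at startup if elasticsearch is not ready yet). Running cron in the foreground keeps the container alive and lets the image's crontab re-run the job on a schedule:

dependencies:
  image: openzipkin/zipkin-dependencies
  # keep the container running; the aggregation job is triggered by cron
  # on a schedule instead of executing once at container start
  entrypoint: crond -f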

Use another container inside of the other

I started using Docker and I have created a basic docker-compose.yml file. The problem I am currently facing is that I have multiple containers that need to work as one. My docker-compose.yml file:
version: '3.7'
services:
  redis:
    container_name: redis
    image: redis
    ports:
      - "6379:6379"
  mongodb:
    container_name: mongodb
    image: mongo
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: devmyoy123
    ports:
      - "27017:27017"
  node:
    container_name: node
    image: node
    volumes:
      - ./node/:/var/app/node
    environment:
      REDIS_URI: redis://redis:6379
    links:
      - mongodb
      - redis
    ports:
      - "9000:9000"
  httpd:
    container_name: httpd
    image: php:7.2-apache
    volumes:
      - ./api/:/usr/local/apache2/htdocs
    environment:
      REDIS_URI: redis://redis:6379
    links:
      - mongodb
      - redis
      - node
    ports:
      - "80:80"
      - "443:443"
I need a way to use node inside the httpd container, and I also need the mongodb and redis parameters inside the httpd and node containers.
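As a hedged sketch of one common approach (my addition): rather than merging containers, each service can stay separate and receive connection parameters through environment variables, since compose services already resolve each other by service name. The variable names below are my own assumptions, reusing the credentials from the question:

httpd:
  container_name: httpd
  image: php:7.2-apache
  environment:
    REDIS_URI: redis://redis:6379
    # hypothetical variable names; the hostnames are the compose service names
    MONGO_URI: mongodb://root:devmyoy123@mongodb:27017
    NODE_API_URL: http://node:9000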

Docker elastic stack cannot receive connection error

I have a docker-compose file that looks like the one below:
version: '3'
services:
  redis:
    build: ./docker/redis
  postgresql:
    build: ./docker/postgresql
    ports:
      - "5433:5432"
    env_file:
      - .env
  graphql:
    build: .
    command: npm run start
    volumes:
      - ./logs/:/usr/app/logs/
    ports:
      - "3000:3000"
    env_file:
      - .env
    depends_on:
      - "redis"
      - "postgresql"
    links:
      - "redis"
      - "postgresql"
  elasticsearch:
    build: ./docker/elasticsearch
    container_name: elasticsearch
    ports:
      - "9200:9200"
    depends_on:
      - "graphql"
    links:
      - "kibana"
  kibana:
    build: ./docker/kibana
    ports:
      - "5601:5601"
    depends_on:
      - "graphql"
    networks:
      - elastic
    environment:
      - ELASTICSEARCH_URL=http://elasticsearch:9200
  metricbeat:
    build: ./docker/metricbeat
    depends_on:
      - "graphql"
      - "elasticsearch"
      - "kibana"
    networks:
      - elastic
    environment:
      - ELASTICSEARCH_URL=http://elasticsearch:9200
  packetbeat:
    build: ./docker/packetbeat
    depends_on:
      - "graphql"
      - "elasticsearch"
      - "kibana"
    networks:
      - elastic
    environment:
      - ELASTICSEARCH_URL=http://elasticsearch:9200
  logstash:
    build: ./docker/logstash
    ports:
      - "9600:9600"
    volumes:
      - ./logs:/usr/logs
    depends_on:
      - "graphql"
      - "elasticsearch"
      - "kibana"
    networks:
      - elastic
    environment:
      - ELASTICSEARCH_URL=http://elasticsearch:9200
networks:
  elastic:
    driver: bridge
When I run docker-compose build and docker-compose up, I get "unable to revive connection: http://elasticsearch:9200" from every container. I don't think any of the containers are able to talk to each other right now. However, it really feels like everything should work, because I have exposed all the ports for the elastic components, linked them with the same network, and the URL points to the correct alias. What am I doing wrong?
The Dockerfile settings are all correct, as each container runs correctly in isolation; they are just not able to talk to each other at all.
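One detail worth noting in the file itself (my observation, offered tentatively rather than as a confirmed answer): the elasticsearch, graphql, redis and postgresql services never declare networks, so they sit on the default network, while kibana, the beats and logstash are attached only to elastic; containers on different networks cannot resolve each other. A sketch of the likely fix is to put elasticsearch (and graphql, if the beats need it) on the same network:

elasticsearch:
  build: ./docker/elasticsearch
  container_name: elasticsearch
  ports:
    - "9200:9200"
  networks:
    # join the same network as kibana, the beats and logstash
    - elastic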
