ElasticSearch container won't start up in Docker

I'm attempting to run this script on Windows 10 to configure everything.
All containers initialize correctly except the elastic container, which times out and then exits with code 124.
https://imgur.com/a/FO8ckwc (some log output)
I'm running the script below; I haven't touched anything except the Windows-side ports (see the comments):
https://pastebin.com/7Z8Gnenr
version: '3.1'
# Generated on 23-04-2018
services:
  alfresco:
    image: openmbeeguest/mms-repo:3.2.4-SNAPSHOT
    environment:
      CATALINA_OPTS: "-Xmx1G -XX:+UseConcMarkSweepGC"
    depends_on:
      - postgresql
      - activemq
      - elastic
    networks:
      - internal
    ports:
      - 8080:8080
    volumes:
      - alf_logs:/usr/local/tomcat/logs
      - alf_data:/opt/alf_data
    tmpfs:
      - /tmp
      - /usr/local/tomcat/temp/
      - /usr/local/tomcat/work/
  solr:
    image: openmbeeguest/mms-solr:3.2.4-SNAPSHOT
    environment:
      CATALINA_OPTS: "-Xmx1G -XX:+UseG1GC -XX:+ParallelRefProcEnabled -XX:G1HeapRegionSize=8m -XX:MaxGCPauseMillis=200"
    depends_on:
      - alfresco
    networks:
      - internal
    volumes:
      - solr_logs:/usr/local/tomcat/logs/
      - solr_content_store:/opt/solr/ContentStore
    tmpfs:
      - /tmp
      - /usr/local/tomcat/temp/
      - /usr/local/tomcat/work/
  activemq:
    image: openmbeeguest/mms-activemq:3.2.4-SNAPSHOT
    ports:
      # I changed these Windows-side ports
      - 61615:61616
      - 61617:61614
      - 8162:8161
      # ORIGINAL
      #- 61616:61616
      #- 61614:61614
      #- 8161:8161
    volumes:
      - activemq-data-volume:/data/activemq
      - activemq-log-volume:/var/log/activemq
      - activemq-conf-volume:/opt/activemq/conf
    environment:
      - ACTIVEMQ_ADMIN_LOGIN=admin
      - ACTIVEMQ_ADMIN_PASSWORD=admin
    networks:
      - internal
  elastic:
    image: openmbeeguest/mms-elastic:3.2.4-SNAPSHOT
    environment:
      CLEAN: 'false'
    ports:
      - 9200:9200
    volumes:
      - elastic-data-volume:/usr/share/elasticsearch/data
    networks:
      - internal
  postgresql:
    image: openmbeeguest/mms-postgres:3.2.4-SNAPSHOT
    volumes:
      - pgsql_data:/var/lib/postgresql/data
    environment:
      - POSTGRES_USER=alfresco
      - POSTGRES_PASSWORD=alfresco
      - POSTGRES_DB=alfresco
    networks:
      - internal
volumes:
  alf_logs:
  alf_data:
  solr_logs:
  solr_content_store:
  pgsql_data:
  activemq-data-volume:
  activemq-log-volume:
  activemq-conf-volume:
  elastic-data-volume:
  nginx-external-volume:
networks:
  internal:
Any help would be greatly appreciated!

Do you have logs from the elasticsearch container to share? Without those it's hard to tell why it's exiting.
One thing that's tripped me up repeatedly, though, is the vm.max_map_count setting: the default in Docker is too low for Elasticsearch to function, so it's a good first thing to check.
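For reference, here's a quick way to check and raise it. This is a sketch assuming a Linux host; the last command is the Docker Desktop (WSL2 backend) variant for Windows 10:

# Check the current value as seen by containers:
docker run --rm busybox sysctl vm.max_map_count

# Raise it on a Linux host (Elasticsearch wants at least 262144):
sudo sysctl -w vm.max_map_count=262144

# On Docker Desktop for Windows with the WSL2 backend, set it inside the
# docker-desktop distribution:
wsl -d docker-desktop sysctl -w vm.max_map_count=262144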

Related

Docker container can not resolve .test domain running on localhost

I have Magento running in a Docker container using this tutorial (https://github.com/markshust/docker-magento).
The Docker container is accessed via https://magento.test and this works fine in the browser. We have a script in Magento that tries to connect to https://magento.test from within the container, but this fails with "Could not resolve host: magento.test".
Basically, the host can access magento.test and connect to the Docker container, but the Docker container cannot connect to itself.
I have tried adding extra_hosts to the docker-compose.yml (see below), but this has not worked. I am guessing the IP 127.0.0.1 is incorrect.
version: "3"
services:
app:
image: markoshust/magento-nginx:1.18-8
ports:
- "80:8000"
- "443:8443"
volumes: &appvolumes
- ~/.composer:/var/www/.composer:cached
- ~/.ssh/id_rsa:/var/www/.ssh/id_rsa:cached
- ~/.ssh/known_hosts:/var/www/.ssh/known_hosts:cached
- appdata:/var/www/html
- sockdata:/sock
- ssldata:/etc/nginx/certs
extra_hosts: &appextrahosts
## Selenium support, replace "magento.test" with URL of your site
- "magento.test:127.0.0.1"
phpfpm:
image: markoshust/magento-php:7.4-fpm-15
volumes: *appvolumes
env_file: env/phpfpm.env
#extra_hosts: *appextrahosts
db:
image: mariadb:10.4
command:
--max_allowed_packet=64M
--optimizer_use_condition_selectivity=1
--optimizer_switch="rowid_filter=off"
ports:
- "3306:3306"
env_file: env/db.env
volumes:
- dbdata:/var/lib/mysql
redis:
image: redis:6.2-alpine
ports:
- "6379:6379"
elasticsearch:
image: markoshust/magento-elasticsearch:7.16-0
ports:
- "9200:9200"
- "9300:9300"
environment:
- "discovery.type=single-node"
## Set custom heap size to avoid memory errors
- "ES_JAVA_OPTS=-Xms1g -Xmx1g"
## Avoid test failures due to small disks
## More info at https://github.com/markshust/docker-magento/issues/488
- "cluster.routing.allocation.disk.threshold_enabled=false"
- "index.blocks.read_only_allow_delete"
rabbitmq:
image: markoshust/magento-rabbitmq:3.9-0
ports:
- "15672:15672"
- "5672:5672"
volumes:
- rabbitmqdata:/var/lib/rabbitmq
env_file: env/rabbitmq.env
mailcatcher:
image: sj26/mailcatcher
ports:
- "1080:1080"
## Blackfire support, uncomment to enable
#blackfire:
# image: blackfire/blackfire:2
# ports:
# - "8307"
# env_file: env/blackfire.env
## Selenium support, uncomment to enable
#selenium:
# image: selenium/standalone-chrome-debug:3.8.1
# ports:
# - "5900:5900"
# extra_hosts: *appextrahosts
volumes:
appdata:
dbdata:
rabbitmqdata:
sockdata:
ssldata:
Any help would be greatly appreciated, thanks!
Does using the host network solve your problem?
services:
  app:
    image: markoshust/magento-nginx:1.18-8
    network_mode: "host" # share host network
    ports:
      - "80:8000"
      - "443:8443"
    volumes: &appvolumes
      - ~/.composer:/var/www/.composer:cached
      - ~/.ssh/id_rsa:/var/www/.ssh/id_rsa:cached
      - ~/.ssh/known_hosts:/var/www/.ssh/known_hosts:cached
      - appdata:/var/www/html
      - sockdata:/sock
      - ssldata:/etc/nginx/certs
    extra_hosts: &appextrahosts
      ## Selenium support, replace "magento.test" with URL of your site
      - "magento.test:127.0.0.1"

Issue with Collabora CODE, Nextcloud & Nginx using docker-compose

I am trying to get a docker-compose.yaml together to run Nextcloud and Collabora CODE via Nginx Proxy Manager.
So far I have Nginx and Nextcloud working really nicely, with persistent volumes so that my configs survive kill & rm. My issue is that I cannot get my Collabora CODE instance to link to Nextcloud. There are multiple bits that I might have got wrong, so I'll dump as much info as I can here.
I have the following subdomains all pointing at my server:
collabora.domain.tld nextcloud.domain.tld nginx.domain.tld
... and set up as proxy hosts (screenshot omitted).
As I mentioned, the Nginx and Nextcloud setups are great. When I point my browser at collabora.domain.tld I see the OK message. I can also access the admin page at collabora.domain.tld/loleaflet/dist/admin/admin.html
The NPM entry for collabora.domain.tld is below (screenshot omitted).
My docker-compose.yaml has gone through several iterations in an attempt to get this working, but my current attempt is below:
version: '3'
volumes:
  nextcloud-data:
  nextcloud-db:
  npm-data:
  npm-ssl:
  npm-db:
networks:
  frontend:
  backend:
services:
  code:
    image: collabora/code:latest
    restart: always
    environment:
      - password=${COLLABORA_PASSWORD:?Not defined!}
      - username=${COLLABORA_USERNAME:?Not defined!}
      - domain=${COLLABORA_DOMAIN:?Not defined!}
    expose:
      - "9980"
    networks:
      - frontend
      - backend
  nextcloud-app:
    image: nextcloud:stable
    restart: always
    volumes:
      - nextcloud-data:/var/www/html
    environment:
      - MYSQL_PASSWORD=${NC_MYSQL_PASSWORD:?Not defined!}
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=nextcloud-user
      - MYSQL_HOST=nextcloud-db
    networks:
      - frontend
      - backend
  nextcloud-db:
    image: mariadb
    restart: always
    command: --transaction-isolation=READ-COMMITTED --binlog-format=ROW --innodb-file-per-table=1 --skip-innodb-read-only-compressed
    volumes:
      - nextcloud-db:/var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=${NC_MYSQL_ROOT_PASSWORD:?Not defined!}
      - MYSQL_PASSWORD=${NC_MYSQL_PASSWORD:?Not defined!}
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=nextcloud-user
    networks:
      - backend
  npm-app:
    image: jc21/nginx-proxy-manager:latest
    restart: always
    ports:
      - "80:80"
      - "81:81"
      - "443:443"
    environment:
      - DB_MYSQL_HOST=npm-db
      - DB_MYSQL_PORT=3306
      - DB_MYSQL_USER=npm-user
      - DB_MYSQL_PASSWORD=${NPM_MYSQL_PASSWORD:?Not defined!}
      - DB_MYSQL_NAME=npm
    volumes:
      - npm-data:/data
      - npm-ssl:/etc/letsencrypt
    networks:
      - frontend
      - backend
  npm-db:
    image: jc21/mariadb-aria:latest
    restart: always
    environment:
      - MYSQL_ROOT_PASSWORD=${NPM_MYSQL_ROOT_PASSWORD:?Not defined!}
      - MYSQL_DATABASE=npm
      - MYSQL_USER=npm-user
      - MYSQL_PASSWORD=${NPM_MYSQL_PASSWORD:?Not defined!}
    volumes:
      - npm-db:/var/lib/mysql
    networks:
      - backend
$COLLABORA_DOMAIN is set to nextcloud.domain.tld.
Any ideas what I have done wrong, and how to get my Nextcloud connected to CODE?
What do your custom locations look like? See, e.g., https://www.collaboraoffice.com/code/nginx-reverse-proxy/
I have a very similar setup, except the Collabora instance is not in docker-compose, as that never worked for me.
Make sure you have specified your domain environment variable correctly (dot escaping etc.).
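For example, the collabora/code image treats the domain value as a regular expression, so the dots usually need escaping. A sketch, assuming the value is supplied via an .env file (the exact number of backslashes depends on how many layers of shell/YAML escaping sit in between):

# .env (hypothetical)
COLLABORA_DOMAIN=nextcloud\\.domain\\.tld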

multiple docker compose files with traefik (v2.1) and database networks

I would like to build a Docker landscape. I use a container with a Traefik (v2.1) image and a MySQL container for multiple databases.
traefik/docker-compose.yml
version: "3.3"
services:
traefik:
image: "traefik:v2.1"
container_name: "traefik"
restart: always
command:
- "--log.level=DEBUG"
- "--api=true"
- "--api.dashboard=true"
- "--providers.docker=true"
- "--providers.docker.exposedbydefault=false"
- "--providers.docker.network=proxy"
- "--entrypoints.web.address=:80"
- "--entrypoints.websecure.address=:443"
- "--entrypoints.traefik-dashboard.address=:8080"
- "--certificatesresolvers.devnik-resolver.acme.httpchallenge=true"
- "--certificatesresolvers.devnik-resolver.acme.httpchallenge.entrypoint=web"
#- "--certificatesresolvers.devnik-resolver.acme.caserver=https://acme-staging-v02.api.letsencrypt.org/directory"
- "--certificatesresolvers.devnik-resolver.acme.email=####"
- "--certificatesresolvers.devnik-resolver.acme.storage=/letsencrypt/acme.json"
ports:
- "80:80"
- "443:443"
- "8080:8080"
volumes:
- "./letsencrypt:/letsencrypt"
- "./data:/etc/traefik"
- "/var/run/docker.sock:/var/run/docker.sock:ro"
networks:
- "proxy"
labels:
- "traefik.enable=true"
- "traefik.http.routers.traefik.rule=Host(`devnik.dev`)"
- "traefik.http.routers.traefik.entrypoints=traefik-dashboard"
- "traefik.http.routers.traefik.tls.certresolver=devnik-resolver"
#basic auth
- "traefik.http.routers.traefik.service=api#internal"
- "traefik.http.routers.traefik.middlewares=auth"
- "traefik.http.middlewares.auth.basicauth.usersfile=/etc/traefik/.htpasswd"
#Docker Networks
networks:
proxy:
database/docker-compose.yml
version: "3.3"
services:
#MySQL Service
mysql:
image: mysql:5.7
container_name: mysql
restart: always
ports:
- "3306:3306"
volumes:
#persist data
- ./mysqldata/:/var/lib/mysql/
- ./init:/docker-entrypoint-initdb.d
networks:
- "mysql"
environment:
MYSQL_ROOT_PASSWORD: ####
TZ: Europe/Berlin
#Docker Networks
networks:
mysql:
driver: bridge
For the structure, I want to control all projects via multiple docker-compose files. These containers should run on the same network as the traefik container, and some of them also on the mysql container's network.
This also works for the following case (but only sometimes):
dev-releases/docker-compose.yml
version: "3.3"
services:
backend:
image: "registry.gitlab.com/devnik/dev-releases-backend/master:latest"
container_name: "dev-releases-backend"
restart: always
volumes:
#laravel logs
- "./logs/backend:/app/storage/logs"
#cron logs
- "./logs/backend/cron.log:/var/log/cron.log"
labels:
- "traefik.enable=true"
- "traefik.http.routers.dev-releases-backend.rule=Host(`dev-releases.backend.devnik.dev`)"
- "traefik.http.routers.dev-releases-backend.entrypoints=websecure"
- "traefik.http.routers.dev-releases-backend.tls.certresolver=devnik-resolver"
networks:
- proxy
- mysql
environment:
TZ: Europe/Berlin
#Docker Networks
networks:
proxy:
external:
name: "traefik_proxy"
mysql:
external:
name: "database_mysql"
As soon as I restart the containers in dev-releases/ via docker-compose up -d, I get the typical "Gateway Timeout" error when calling them in the browser.
As soon as I comment out the mysql network (networks: #- mysql) and restart the docker-compose in dev-releases/, it works again.
My guess is that I have not configured the external networks correctly. Is it not possible to use two external networks?
I'd like some containers to have access to the mysql network, but it should not be accessible to the whole traefik network.
Let me know if you need more information.
EDIT (26.03.2020)
I got it running.
I put all my containers into one network, "proxy". It seems mysql also has to be in the proxy network.
So I added the following to database/docker-compose.yml:
networks:
  proxy:
    external:
      name: "traefik_proxy"
And I removed the database_mysql network from dev-releases/docker-compose.yml.
Based on the directory names, your mysql network may not be called what you expect: Compose prefixes a network's name with the project (directory) name.
You can verify the actual name by executing
$> docker network ls
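If you'd rather not depend on the directory-derived prefix at all, Compose file format 3.5+ lets you pin a network's name explicitly. A sketch for database/docker-compose.yml (bump its version line to at least "3.5"):

networks:
  mysql:
    driver: bridge
    name: "database_mysql"  # fixed name, independent of the project/directory name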
You are also missing a couple of labels for your services, such as:
traefik command line:
- '--providers.docker.watch=true'
- '--providers.docker.swarmMode=true'
labels:
- traefik.docker.network=proxy
- traefik.http.services.dev-releases-backend.loadbalancer.server.port=yourport
- traefik.http.routers.dev-releases-backend.service=mailcatcher
You can check this for more info
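As a side note on why the extra mysql network can trigger the Gateway Timeout: when a container is attached to several networks, Traefik may pick the container's IP on a network Traefik itself isn't part of. The traefik.docker.network label (or the global --providers.docker.network=proxy flag the question already uses) pins the network used for routing. A minimal sketch for the backend service:

services:
  backend:
    labels:
      - "traefik.enable=true"
      - "traefik.docker.network=proxy"  # route to this container via the proxy network, not mysql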

Apache nifi java.net.UnknownHostException: e2a2e8ab6b6b: Name or service not known

Hi all, I've run into a small issue with the NiFi cluster I'm creating with the docker-compose file below:
services:
  zookeeper:
    hostname: zookeeper
    container_name: zookeeper
    image: 'bitnami/zookeeper:latest'
    environment:
      - ALLOW_ANONYMOUS_LOGIN=yes
    networks:
      - efactory-network
  nifi:
    image: apache/nifi:1.9.2
    ports:
      - 8080:8080 # Unsecured HTTP Web Port
      - 8081:8081
    environment:
      - NIFI_WEB_HTTP_PORT=8080
      - NIFI_CLUSTER_IS_NODE=true
      - NIFI_CLUSTER_NODE_PROTOCOL_PORT=8082
      - NIFI_ZK_CONNECT_STRING=zookeeper:2181
      - NIFI_ELECTION_MAX_WAIT=1 min
      - nifi.security.needClientAuth=false
    networks:
      - efactory-network
    volumes:
      - state:/opt/nifi/nifi-1.9.2/state
      - conf:/opt/nifi/nifi-1.9.2/conf
      - content:/opt/nifi/nifi-1.9.2/content_repository
      - db:/opt/nifi/nifi-1.9.2/database_repository
      - flowfile:/opt/nifi/nifi-1.9.2/flowfile_repository
      - provenance:/opt/nifi/nifi-1.9.2/provenance_repository
      - logs:/opt/nifi/nifi-1.9.2/logs
      - data:/opt/nifi/nifi-1.9.2/data
    extra_hosts:
      - nifi.at:159.69.214.42
networks:
  efactory-network:
    external:
      name: security-network
volumes:
  conf:
  content:
  db:
  flowfile:
  provenance:
  logs:
  state:
  data:
I persisted data with Docker volumes, so the state of the cluster should survive a docker-compose restart. I think the state is persisted, but I'm getting the error below:
java.net.UnknownHostException: ffcca3db4879
I'd be very grateful if someone could help me with this.
Can you add this on your nifi service?
depends_on:
  - zookeeper
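If that alone doesn't help: ffcca3db4879 looks like an auto-generated container ID, which changes every time the container is recreated, while the persisted state still references the old one. A common mitigation (an assumption on my part, not confirmed in this thread) is to pin a stable hostname on the nifi service:

nifi:
  hostname: nifi  # stable name instead of the random container ID
  image: apache/nifi:1.9.2
  # ... rest of the service definition unchanged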

how to run SolrCloud with docker-compose

Please help me with a docker-compose file.
Right now I'm using Solr via a Dockerfile, but I need to change it to SolrCloud. I need two Solr instances, an internal Zookeeper, and Docker (local).
This is the docker-compose file I came up with:
version: "3"
services:
mongo:
image: mongo:latest
container_name: mongo
hostname: mongo
networks:
- gsec
ports:
- 27018:27017
sqlserver:
image: microsoft/mssql-server-linux:latest
hostname: sqlserver
container_name: sqlserver
environment:
SA_PASSWORD: "#Password123!"
ACCEPT_EULA: "Y"
networks:
- gsec
ports:
- 1403:1433
solr:
image: solr
container_name: solr
ports:
- "8983:8983"
networks:
- gsec
volumes:
- data:/opt/solr/server/solr/mycores
entrypoint:
- docker-entrypoint.sh
- solr-precreate
- mycore
volumes:
data:
networks:
gsec:
driver: bridge
Thank you in advance.
The Solr Docker image has a ZooKeeper server embedded in it. You just have to start Solr with the right parameters and add the ZooKeeper port mapping 9983:9983 to the docker-compose file:
solr:
  image: solr
  container_name: solr
  ports:
    - "9983:9983"
    - "8983:8983"
  networks:
    - gsec
  volumes:
    - data:/opt/solr/server/solr/mycores
  entrypoint:
    - docker-entrypoint.sh
    - solr
    - start
    - -c
    - -f
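Once it's up, you can confirm Solr is actually running in cloud mode by querying the Collections API (a smoke test; adjust host/port as needed):

curl "http://localhost:8983/solr/admin/collections?action=CLUSTERSTATUS"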
SolrCloud is basically a Solr cluster where ZooKeeper is used to coordinate and configure the cluster.
Usually you use SolrCloud with Docker because you're learning how it works or because you're preparing your application (locally?) to deploy in a bigger environment.
On the other hand, it doesn't make much sense to run SolrCloud if you don't have a distributed configuration, i.e. Solr and ZooKeeper running on different nodes.
SolrCloud is the kind of cluster you need when you have hundreds or even thousands of searches per second against collections of millions or even billions of documents.
Your cluster has to scale horizontally.
A version to use with an external ZooKeeper.
'-t' changes the data dir in the container.
To see other options, run: solr start -help
version: '3'
services:
  solr1:
    image: solr
    ports:
      - "8984:8984"
    entrypoint:
      - solr
    command:
      - start
      - -f
      - -c
      - -h
      - "10.1.0.157"
      - -p
      - "8984"
      - -z
      - "10.1.0.157:2181,10.1.0.157:2182,10.1.0.157:2183"
      - -m
      - 1g
      - -t
      - "/opt/solr/server/solr/mycores"
    volumes:
      - "./data1/mycores:/opt/solr/server/solr/mycores"
I use this setup locally to test three instances of Solr and three instances of ZooKeeper, based on the official example.
version: '3.7'
services:
  solr-1:
    image: solr:8.7
    container_name: solr-1
    ports:
      - "8981:8983"
    environment:
      - ZK_HOST=zoo-1:2181,zoo-2:2181,zoo-3:2181
    networks:
      - solr
    depends_on:
      - zoo-1
      - zoo-2
      - zoo-3
    # command:
    #   - solr-precreate
    #   - gettingstarted
  solr-2:
    image: solr:8.7
    container_name: solr-2
    ports:
      - "8982:8983"
    environment:
      - ZK_HOST=zoo-1:2181,zoo-2:2181,zoo-3:2181
    networks:
      - solr
    depends_on:
      - zoo-1
      - zoo-2
      - zoo-3
  solr-3:
    image: solr:8.7
    container_name: solr-3
    ports:
      - "8983:8983"
    environment:
      - ZK_HOST=zoo-1:2181,zoo-2:2181,zoo-3:2181
    networks:
      - solr
    depends_on:
      - zoo-1
      - zoo-2
      - zoo-3
  zoo-1:
    image: zookeeper:3.6
    container_name: zoo-1
    restart: always
    hostname: zoo-1
    volumes:
      - zoo1data:/data
    ports:
      - 2181:2181
    environment:
      ZOO_MY_ID: 1
      ZOO_SERVERS: server.1=0.0.0.0:2888:3888;2181 server.2=zoo-2:2888:3888;2181 server.3=zoo-3:2888:3888;2181
    networks:
      - solr
  zoo-2:
    image: zookeeper:3.6
    container_name: zoo-2
    restart: always
    hostname: zoo-2
    volumes:
      - zoo2data:/data
    ports:
      - 2182:2181
    environment:
      ZOO_MY_ID: 2
      ZOO_SERVERS: server.1=zoo-1:2888:3888;2181 server.2=0.0.0.0:2888:3888;2181 server.3=zoo-3:2888:3888;2181
    networks:
      - solr
  zoo-3:
    image: zookeeper:3.6
    container_name: zoo-3
    restart: always
    hostname: zoo-3
    volumes:
      - zoo3data:/data
    ports:
      - 2183:2181
    environment:
      ZOO_MY_ID: 3
      ZOO_SERVERS: server.1=zoo-1:2888:3888;2181 server.2=zoo-2:2888:3888;2181 server.3=0.0.0.0:2888:3888;2181
    networks:
      - solr
networks:
  solr:
# persist the zookeeper data in volumes
volumes:
  zoo1data:
    driver: local
  zoo2data:
    driver: local
  zoo3data:
    driver: local
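To sanity-check this three-node setup once it's up (the exact commands are assumptions; adjust ports and names as needed):

docker-compose up -d

# Any node's admin UI shows the whole cluster under the Cloud tab,
# e.g. http://localhost:8981/solr/#/~cloud

# Or create a test collection with one shard per node:
docker exec solr-1 solr create_collection -c test -shards 3 -replicationFactor 1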
