Error while connecting Node to MongoDB using Docker

I'm getting this error while connecting Node to MongoDB:
techworld-js-docker-demo-app-mongo-express-1 | Could not connect to database using connectionString: mongodb://admin:password@mongodb:27017/"
techworld-js-docker-demo-app-mongo-express-1 | (node:7) UnhandledPromiseRejectionWarning: MongoNetworkError: failed to connect to server [mongodb:27017] on first connect [Error: connect ECONNREFUSED 172.19.0.3:27017
techworld-js-docker-demo-app-mongo-express-1 | at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1144:16) {
techworld-js-docker-demo-app-mongo-express-1 | name: 'MongoNetworkError'
techworld-js-docker-demo-app-mongo-express-1 | }]
techworld-js-docker-demo-app-mongo-express-1 | at Pool.<anonymous> (/node_modules/mongodb/lib/core/topologies/server.js:441:11)
techworld-js-docker-demo-app-mongo-express-1 | at Pool.emit (events.js:314:20)
techworld-js-docker-demo-app-mongo-express-1 | at /node_modules/mongodb/lib/core/connection/pool.js:564:14
techworld-js-docker-demo-app-mongo-express-1 | at /node_modules/mongodb/lib/core/connection/pool.js:1000:11
techworld-js-docker-demo-app-mongo-express-1 | at /node_modules/mongodb/lib/core/connection/connect.js:32:7
techworld-js-docker-demo-app-mongo-express-1 | at callback (/node_modules/mongodb/lib/core/connection/connect.js:300:5)
techworld-js-docker-demo-app-mongo-express-1 | at Socket.<anonymous> (/node_modules/mongodb/lib/core/connection/connect.js:330:7)
techworld-js-docker-demo-app-mongo-express-1 | at Object.onceWrapper (events.js:421:26)
techworld-js-docker-demo-app-mongo-express-1 | at Socket.emit (events.js:314:20)
techworld-js-docker-demo-app-mongo-express-1 | at emitErrorNT (internal/streams/destroy.js:92:8)
techworld-js-docker-demo-app-mongo-express-1 | at emitErrorAndCloseNT (internal/streams/destroy.js:60:3)
techworld-js-docker-demo-app-mongo-express-1 | at processTicksAndRejections (internal/process/task_queues.js:84:21)
techworld-js-docker-demo-app-mongo-express-1 | (node:7) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). To terminate the node process on unhandled promise rejection, use the CLI flag `--unhandled-rejections=strict` (see https://nodejs.org/api/cli.html#cli_unhandled_rejections_mode). (rejection id: 1)
docker-compose.yaml file:
version: '3'
services:
  my-app:
    image: nihalchandra/myapp:4.0
    ports:
      - 3000:3000
  mongodb:
    image: mongo
    restart: always
    ports:
      - 27017:27017
    environment:
      - MONGO_INITDB_ROOT_USERNAME=admin
      - MONGO_INITDB_ROOT_PASSWORD=password
    volumes:
      - mongo-data:/data/db
  mongo-express:
    image: mongo-express
    restart: always # fixes MongoNetworkError when mongodb is not ready when mongo-express starts
    ports:
      - 8081:8081
    environment:
      - ME_CONFIG_MONGODB_ADMINUSERNAME=admin
      - ME_CONFIG_MONGODB_ADMINPASSWORD=password
      - ME_CONFIG_MONGODB_SERVER=mongodb
volumes:
  mongo-data:
    driver: local
In the server.js file I have:
let mongoUrlLocal = "mongodb://admin:password@localhost:27017";
let mongoUrlDocker = "mongodb://admin:password@mongodb";
I tried changing the server but it still didn't work.

You should add a link to the database into your my-app definition:
  my-app:
    image: nihalchandra/myapp:4.0
    ports:
      - 3000:3000
    links:
      - mongodb:mongodb
Then your application will be able to connect to MongoDB using mongodb as the hostname.
You can find more details in the documentation: https://docs.docker.com/compose/networking/
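For reference, here is a minimal sketch of what the server.js connection could look like once the service name and the @ separator are in place. This assumes the mongodb 3.x Node driver that appears in the stack trace above, not the exact demo code:

const { MongoClient } = require("mongodb");

// Inside the compose network the hostname is the service name "mongodb",
// and the credential separator is "@".
const mongoUrlDocker = "mongodb://admin:password@mongodb:27017";

MongoClient.connect(mongoUrlDocker, { useUnifiedTopology: true }, (err, client) => {
  if (err) {
    console.error("Could not connect to MongoDB:", err.message);
    return;
  }
  console.log("Connected via the mongodb service name");
  client.close();
});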

Related

Next.js fetch get an ECONNREFUSED error in Docker (Strapi as backend)

Good day everyone. I set up a simple Next.js page to get data from Strapi via its API. It runs fine locally, but when I run them in Docker I get an ECONNREFUSED error.
Here is the error:
error - TypeError: fetch failed
frontend | at Object.fetch (node:internal/deps/undici/undici:11118:11)
frontend | at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
frontend | at async getStaticProps (webpack-internal:///./pages/index.tsx:38:17)
frontend | at async Object.renderToHTML (/usr/local/frontend/node_modules/next/dist/server/render.js:384:20)
frontend | at async doRender (/usr/local/frontend/node_modules/next/dist/server/base-server.js:708:34)
frontend | at async cacheEntry.responseCache.get.isManualRevalidate.isManualRevalidate (/usr/local/frontend/node_modules/next/dist/server/base-server.js:813:28)
frontend | at async /usr/local/frontend/node_modules/next/dist/server/response-cache/index.js:80:36 {
frontend | cause: Error: connect ECONNREFUSED 127.0.0.1:1337
frontend | at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1300:16) {
frontend | errno: -111,
frontend | code: 'ECONNREFUSED',
frontend | syscall: 'connect',
frontend | address: '127.0.0.1',
frontend | port: 1337
frontend | },
frontend | page: '/'
frontend | }
And here is the code in Next.js:
export async function getStaticProps() {
  const res = await fetch('http://localhost:1337/api/posts')
  const data = await res.json()
  return {
    props: { data },
  }
}
My docker-compose.yml:
version: '3.8'
services:
  backend:
    container_name: backend
    build:
      context: backend
      dockerfile: .docker/dev.dockerfile
    image: backend:latest
    # restart: unless-stopped
    env_file: ./backend/.env
    volumes:
      - strapi_data:/usr/local/backend/strapi/.tmp
      - strapi_upload:/usr/local/backend/strapi/public/uploads
    ports:
      - '1337:1337'
  frontend:
    container_name: frontend
    build:
      context: frontend
      dockerfile: .docker/dev.dockerfile
    image: frontend:latest
    # restart: unless-stopped
    env_file: ./frontend/.env
    volumes:
      - node_modules_next:/usr/local/frontend/node_modules
      - next_folder:/usr/local/frontend
    ports:
      - '3000:3000'
    depends_on:
      - 'backend'
volumes:
  node_modules_next:
  next_folder:
  strapi_data:
  strapi_upload:
I followed this guide to set up the middleware in Strapi:
export default [
  "strapi::errors",
  "strapi::security",
  // 'strapi::cors',
  {
    name: "strapi::cors",
    config: {
      origin: "*",
      methods: ["GET", "POST", "PUT", "PATCH", "DELETE", "HEAD", "OPTIONS"],
      headers: ["Content-Type", "Authorization", "Origin", "Accept"],
      keepHeaderOnError: true,
    },
  },
  "strapi::poweredBy",
  "strapi::logger",
  "strapi::query",
  "strapi::body",
  "strapi::session",
  "strapi::favicon",
  "strapi::public",
];
But it doesn't work.
I also tried changing getStaticProps to getServerSideProps, with no luck.
I figured it out myself. Once I run docker-compose on my server, I change the fetch URL from http://localhost:1337/api/posts to http://{ip}:1337/api/posts, and it works.
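A note on why the localhost URL fails: getStaticProps (and getServerSideProps) run inside the frontend container, so localhost there is the frontend container itself, not Strapi. Besides the host IP, another option is to reach Strapi through the compose service name. A rough sketch, where STRAPI_URL is a hypothetical environment variable you would set yourself (e.g. STRAPI_URL=http://backend:1337 in docker-compose):

export async function getStaticProps() {
  // Hypothetical env var; falls back to the backend service name on the compose
  // network. Browser-side code would still need a URL reachable from the host.
  const base = process.env.STRAPI_URL || 'http://backend:1337'
  const res = await fetch(`${base}/api/posts`)
  const data = await res.json()
  return {
    props: { data },
  }
}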

Elasticsearch connection refused with Docker Compose (connect ECONNREFUSED)

There are multiple services that I am trying to run (Redis, front-end, back-end and Elasticsearch), and I am not able to connect to the Elasticsearch service. I even tried giving the service a static IP (the networking part is currently commented out in the compose file attached). I tried changing the images and it still did not work.
When I test ES locally using curl localhost:9200/_cat/health (I have mapped the container port to the host), it reports that the cluster is green. I can connect to the other services like Redis without issues. As with Redis, I am using the service name, elasticsearch, to connect to it from the back-end service. Following is my docker-compose.yml file.
version: '3'
services:
  arc-external:
    image: arc-external
    build:
      context: ./arc-development-branch/arc-external
    ports:
      - '4201:4201'
    # networks:
    #   - vpcbr
  redis:
    image: redis:3.2.11-alpine
    ports:
      - '6379:6379'
    # networks:
    #   - vpcbr
  elasticsearch:
    image: elasticsearch:2
    ports:
      - '9200:9200'
      - '9300:9300'
    environment:
      - node.name=elasticsearch
      - cluster.name=datasearch
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - cluster.initial_master_nodes=elasticsearch
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - ./data/elastic:/usr/share/elasticsearch/data
    # networks:
    #   vpcbr:
    #     ipv4_address: 10.5.0.4
  api-external:
    image: api-external
    build: .
    ports:
      - '3001:3001'
    depends_on:
      - redis
      - elasticsearch
    # networks:
    #   - vpcbr
# networks:
#   vpcbr:
#     driver: bridge
#     ipam:
#       config:
#         - subnet: 10.5.0.0/16
#           gateway: 10.5.0.1
This is the exact error that I am getting when running docker-compose up:
api-external_1 | 2021-03-09 20:41:46.3253 - info: Finished setting up log directories
api-external_1 | 2021-03-09 20:41:46.3514 - info: Connection successful to mongodb @ mongodb://10.0.0.44:27017/arc
api-external_1 | 2021-03-09 20:41:46.3764 - info: Connection successful to redis at: host: redis port: 6379
api-external_1 | Elasticsearch ERROR: 2021-03-09T20:41:46Z
api-external_1 | Error: Request error, retrying
api-external_1 | HEAD http://elasticsearch:9200/ => connect ECONNREFUSED 172.24.0.4:9200
api-external_1 | at Log.error (/usr/src/app/api-external/node_modules/elasticsearch/src/lib/log.js:226:56)
api-external_1 | at checkRespForFailure (/usr/src/app/api-external/node_modules/elasticsearch/src/lib/transport.js:259:18)
api-external_1 | at HttpConnector.<anonymous> (/usr/src/app/api-external/node_modules/elasticsearch/src/lib/connectors/http.js:164:7)
api-external_1 | at ClientRequest.wrapper (/usr/src/app/api-external/node_modules/lodash/lodash.js:4935:19)
api-external_1 | at ClientRequest.emit (events.js:198:13)
api-external_1 | at ClientRequest.EventEmitter.emit (domain.js:448:20)
api-external_1 | at Socket.socketErrorListener (_http_client.js:401:9)
api-external_1 | at Socket.emit (events.js:198:13)
api-external_1 | at Socket.EventEmitter.emit (domain.js:448:20)
api-external_1 | at emitErrorNT (internal/streams/destroy.js:91:8)
api-external_1 | at emitErrorAndCloseNT (internal/streams/destroy.js:59:3)
api-external_1 | at process._tickCallback (internal/process/next_tick.js:63:19)
api-external_1 |
api-external_1 | Elasticsearch WARNING: 2021-03-09T20:41:46Z
api-external_1 | Unable to revive connection: http://elasticsearch:9200/
api-external_1 |
api-external_1 | Elasticsearch WARNING: 2021-03-09T20:41:46Z
api-external_1 | No living connections
api-external_1 |
api-external_1 | 2021-03-09 20:41:46.3844 - error: Error: Failed to connect to elasticsearch @ elasticsearch:9200
api-external_1 | at exports.esClient.ping (/usr/src/app/api-external/dist/setup/elastic-search.js:33:46)
api-external_1 | at respond (/usr/src/app/api-external/node_modules/elasticsearch/src/lib/transport.js:327:9)
api-external_1 | at sendReqWithConnection (/usr/src/app/api-external/node_modules/elasticsearch/src/lib/transport.js:226:7)
api-external_1 | at next (/usr/src/app/api-external/node_modules/elasticsearch/src/lib/connection_pool.js:214:7)
api-external_1 | at process._tickCallback (internal/process/next_tick.js:61:11)
api-external_1 | 2021-03-09 20:41:46.3854 - error: Error: No Living connections
api-external_1 | at sendReqWithConnection (/usr/src/app/api-external/node_modules/elasticsearch/src/lib/transport.js:226:15)
api-external_1 | at next (/usr/src/app/api-external/node_modules/elasticsearch/src/lib/connection_pool.js:214:7)
api-external_1 | at process._tickCallback (internal/process/next_tick.js:61:11)
api-external_1 | npm ERR! code ELIFEC
Frankly, I searched a lot and was not able to debug it. Any help would be appreciated.
I was able to figure out the answer: depends_on does not wait for a service to be completely up. Here, api-external starts as soon as redis and elasticsearch start, but Elasticsearch needs a bit of time to configure everything, so restarting the api-external service does the trick.
A more permanent solution is to write a script that waits until Elasticsearch is completely up before starting the api-external service, for example along the lines of the sketch below.
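A rough sketch of such a wait step in Node, using only the built-in http module. The hostname elasticsearch and port 9200 come from the compose file above; the retry count and delay are arbitrary:

const http = require("http");

// Resolves once Elasticsearch answers on port 9200, rejects after the retries run out.
function waitForElasticsearch(retriesLeft = 30, delayMs = 2000) {
  return new Promise((resolve, reject) => {
    const attempt = (remaining) => {
      const req = http.get("http://elasticsearch:9200/_cluster/health", (res) => {
        res.resume(); // drain the body
        resolve();    // any HTTP response means the port is accepting connections
      });
      req.on("error", () => {
        if (remaining <= 0) return reject(new Error("Elasticsearch never came up"));
        setTimeout(() => attempt(remaining - 1), delayMs); // wait and retry
      });
    };
    attempt(retriesLeft);
  });
}

// e.g. await waitForElasticsearch() before creating the ES client in api-external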

Error: Redis connection to 0.0.0.0:6379 failed - connect ECONNREFUSED 0.0.0.0:6379

I have been trying to build the classic voting app, but my Redis and MongoDB are giving the same error. I have to connect Redis to my Node app and I have tried the following:
//redis
const redisoption = {
  host: "redis",
  port: 6379
};
and
//redis
const redisoption = {
  host: "0.0.0.0",
  port: 6379
};
docker-compose.yml
version: "3"
services:
vote:
image: adityagaddhyan/voting-app
ports:
- "3000:3000"
depends_on:
- redis
networks:
- front-end
- back-end
volumes:
- ./vote:/app
result:
image: adityagaddhyan/voting-result
ports:
- "3011:3011"
depends_on:
- mongodb
networks:
- back-end
- front-end
volumes:
- ./result:/app
worker:
image: adityagaddhyan/service-worker
depends_on:
- redis
- mongodb
networks:
- back-end
volumes:
- ./worker:/app
redis:
image : redis
command: ["redis-server", "--bind", "0.0.0.0", "--port", "6379"
ports:
- "6379:6379"
networks:
- back-end
volumes:
- ./redis:/app
mongodb:
image: mongo
ports:
- "27017:27017"
volumes:
- ./db:/app
volumes:
db-data:
networks:
front-end:
back-end:
I have tried all sorts of combinations and am mainly getting two kinds of errors.
If I use host: "0.0.0.0", I get:
Error: Redis connection to 0.0.0.0:6379 failed - connect ECONNREFUSED 0.0.0.0:6379
vote_1 | at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1141:16)
vote_1 | Emitted 'error' event on RedisClient instance at:
vote_1 | at RedisClient.on_error (/usr/src/app/node_modules/redis/index.js:341:14)
vote_1 | at Socket.<anonymous> (/usr/src/app/node_modules/redis/index.js:222:14)
vote_1 | at Socket.emit (events.js:315:20)
vote_1 | at emitErrorNT (internal/streams/destroy.js:92:8)
vote_1 | at emitErrorAndCloseNT (internal/streams/destroy.js:60:3)
vote_1 | at processTicksAndRejections (internal/process/task_queues.js:84:21) {
vote_1 | errno: 'ECONNREFUSED',
vote_1 | code: 'ECONNREFUSED',
vote_1 | syscall: 'connect',
vote_1 | address: '0.0.0.0',
vote_1 | port: 6379
vote_1 | }
and if I use host: "redis", I get:
Error: Redis connection to 127.0.0.1:6379 failed - connect ECONNREFUSED 127.0.0.1:6379
worker_1 | at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1141:16)
worker_1 | Emitted 'error' event on RedisClient instance at:
worker_1 | at RedisClient.on_error (/usr/src/app2/node_modules/redis/index.js:341:14)
worker_1 | at Socket.<anonymous> (/usr/src/app2/node_modules/redis/index.js:222:14)
worker_1 | at Socket.emit (events.js:315:20)
worker_1 | at emitErrorNT (internal/streams/destroy.js:92:8)
worker_1 | at emitErrorAndCloseNT (internal/streams/destroy.js:60:3)
worker_1 | at processTicksAndRejections (internal/process/task_queues.js:84:21) {
worker_1 | errno: 'ECONNREFUSED',
worker_1 | code: 'ECONNREFUSED',
worker_1 | syscall: 'connect',
worker_1 | address: '127.0.0.1',
worker_1 | port: 6379
worker_1 | }
vote_1 | /usr/src/app/app.js:28
vote_1 | host: back-end,
vote_1 | ^
I am new to Docker and unable to solve this problem. Please help.

unable to connect bee-queue to docker container

For some reason I seem to be having difficulty pointing bee-queue's Arena at my other Docker container running Redis. Just wondering if anyone has any experience with this.
This is the pattern of the config I'm using:
{
  "queues": [
    {
      "hostId": "eyeshade-workers",
      "type": "bee",
      "name": "settlement-report",
      "redis": "redis://redis:3011"
    },
...
I have also tried
{
  "queues": [
    {
      "hostId": "eyeshade-workers",
      "type": "bee",
      "name": "settlement-report",
      "redis": {
        "url": "redis://redis:3011"
      }
    },
and
{
  "queues": [
    {
      "hostId": "eyeshade-workers",
      "type": "bee",
      "name": "settlement-report",
      "url": "redis://redis:3011"
    },
Here is my Redis and Arena docker-compose file:
version: "2.1"
networks:
ledger:
driver: bridge
services:
redis:
container_name: ledger-redis
image: redis:latest
ports:
- "3011:6379"
networks:
- ledger
arena:
container_name: worker-arena
image: mixmaxhq/arena:latest
networks:
- ledger
ports:
- "4567:4567"
depends_on:
- redis
- eyeshade-worker
volumes:
- ./queue/index.json:/opt/arena/src/server/config/index.json
But I continuously get this error saying that it is trying to connect to the default 127.0.0.1:6379:
worker-arena | (node:42) UnhandledPromiseRejectionWarning: Unhandled promise rejection (rejection id: 1): Error: Redis connection to 127.0.0.1:6379 failed - connect ECONNREFUSED 127.0.0.1:6379
worker-arena | (node:42) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.
worker-arena | (node:42) UnhandledPromiseRejectionWarning: Unhandled promise rejection (rejection id: 2): Error: Redis connection to 127.0.0.1:6379 failed - connect ECONNREFUSED 127.0.0.1:6379
worker-arena | events.js:182
worker-arena | throw er; // Unhandled 'error' event
worker-arena | ^
worker-arena |
worker-arena | Error: Redis connection to 127.0.0.1:6379 failed - connect ECONNREFUSED 127.0.0.1:6379
worker-arena | at Object.exports._errnoException (util.js:1016:11)
worker-arena | at exports._exceptionWithHostPort (util.js:1039:20)
worker-arena | at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1138:14)
worker-arena | [nodemon] app crashed - waiting for file changes before starting...

unable to run kibana and logstash with elasticsearch

Elasticsearch is running fine on port 9201, but I am unable to run Kibana and Logstash with docker-compose.
Logstash throws the error:
Attempted to resurrect connection to dead ES instance, but got an
error.
Kibana throws warnings:
"warning","elasticsearch","admin"],"pid":1,"message":"No living
connections"
Below is the docker-compose.yml file:
version: '2'
services:
  # Service 1 : elasticsearch
  elasticsearch-5-6:
    image: docker.elastic.co/elasticsearch/elasticsearch:5.6.3
    container_name: elasticsearch-5-6
    ports:
      - "9201:9200"
    volumes:
      - /etc/elasticsearch/elasticsearch-5-6.yml:/usr/share/elasticsearch/config/elasticsearch.yml
      - /var/elasticsearch/data/immunedata-5-6/:/usr/share/elasticsearch/data/
      #- /etc/elasticsearch/logging.yml:/usr/share/elasticsearch/config/logging.yml
      #- /var/log/elasticsearch/:/usr/share/elasticsearch/logs/
    environment:
      - cluster.name=docker-cluster-elasticsearch-5-6
      #- bootstrap.memory_lock=true
      - "ES_JAVA_OPTS: -Xmx2048m -Xms2048m"
      # Disabling the xpack security as it costs after one month of free trial.
      - xpack.security.enabled=false
  # Service 2 : logstash
  logstash-5-6:
    image: docker.elastic.co/logstash/logstash:5.6.3
    container_name: logstash-5-6
    ports:
      #- "5044:5044"
      - "5001:5001"
    volumes:
      - /etc/logstash/logstash.yml:/usr/share/logstash/config/logstash.yml
      - /etc/logstash/pipeline:/usr/share/logstash/pipeline
      #- /etc/logstash/logstash.yml:/usr/share/logstash/config/logstash.yml
      #- /var/logstash/pipeline:/usr/share/logstash/pipeline
    environment:
      - "ES_JAVA_OPTS: -Xmx2048m -Xms2048m"
    depends_on:
      - elasticsearch-5-6
  # Service 3 : kibana
  kibana-5-6:
    image: docker.elastic.co/kibana/kibana:5.6.3
    container_name: kibana-5-6
    ports:
      - "5601:5601"
    volumes:
      - /etc/kibana/kibana.yml:/usr/share/kibana/config/kibana.yml
      #- /var/kibana/immunedata-5-6/:/usr/share/kibana/data/
    environment:
      - xpack.security.enabled=false
      - xpack.graph.enabled=false
      - xpack.ml.enabled=false
      - xpack.monitoring.enabled=false
      - xpack.watcher.enabled=false
      - xpack.reporting.enabled=false
    depends_on:
      - elasticsearch-5-6
  # Service 4 : elasticsearch-head
  elasticsearch-head:
    image: mobz/elasticsearch-head:5
    container_name: elasticsearch-head
    # will not wait for elasticsearch to be ready.
    ports:
      - "9100:9100"
elasticsearch.yml
cluster.name: immunedata-cluster-5.6
node.name: "immunedata-cluster-5-6.node-1"
# Elasticsearch in docker access different data directory, defined mapping directory in docker-compose.yml
#path.data: /var/elasticsearch/data/immunedata-5-6/
path.data: /usr/share/elasticsearch/data/
#path.data: /var/elasticsearch/data
# NOTE : Since elasticsearch 5.x index level settings can NOT be set on the nodes configuration like the elasticsearch.yaml
#index.number_of_shards: 1
#index.number_of_replicas: 0
# Allow all host access
network.bind_host: 0.0.0.0
http.port: 9200
# To enable cross-origin resource sharing (Accessing on browser)
http.cors.enabled: true
http.cors.allow-origin : "*"
logstash.yml file
http.host: "0.0.0.0"
path.config: /usr/share/logstash/pipeline
#xpack.monitoring.elasticsearch.url: http://localhost:9201
##xpack.monitoring.elasticsearch.url: http://elasticsearch:9201
#xpack.monitoring.elasticsearch.username: logstash_system
#xpack.monitoring.elasticsearch.password: changeme
xpack.monitoring.enabled: false
kibana.yml file
server.name: kibana
server.host: "0"
elasticsearch.url: http://192.168.56.10:9201
xpack.monitoring.ui.container.elasticsearch.enabled: false
#elasticsearch.url: http://elasticsearch:9201
xpack.security.enabled: false
## Above I tried this - not working
#elasticsearch.username: elastic
#elasticsearch.password: changeme
#xpack.monitoring.ui.container.elasticsearch.enabled: false
#xpack.monitoring.ui.container.elasticsearch.enabled: true
# Extra:
ssl.verificationMode: false
Logs:
[elasticsearch-5.6.3.jar:5.6.3]
elasticsearch-5-6 | at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_141]
elasticsearch-5-6 | at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_141]
elasticsearch-5-6 | at java.lang.Thread.run(Thread.java:748) [?:1.8.0_141]
elasticsearch-5-6 | [2017-11-26T06:07:57,084][WARN ][o.e.m.j.JvmGcMonitorService] [immunedata-cluster-5-6.node-1] [gc][young][14][6] duration [18.2s], collections [1]/[18.5s], total [18.2s]/[23.5s], memory [178.2mb]->[79.5mb]/[1.9gb], all_pools {[young] [132.1mb]->[964kb]/[133.1mb]}{[survivor] [16.6mb]->[12.5mb]/[16.6mb]}{[old] [29.4mb]->[66.5mb]/[1.8gb]}
elasticsearch-5-6 | [2017-11-26T06:07:57,085][WARN ][o.e.m.j.JvmGcMonitorService] [immunedata-cluster-5-6.node-1] [gc][14] overhead, spent [18.2s] collecting in the last [18.5s]
elasticsearch-5-6 | [2017-11-26T06:07:57,298][ERROR][o.e.x.m.c.i.IndexRecoveryCollector] [immunedata-cluster-5-6.node-1] collector [index-recovery] failed to collect data
elasticsearch-5-6 | org.elasticsearch.cluster.block.ClusterBlockException: blocked by: [SERVICE_UNAVAILABLE/1/state not recovered / initialized];
elasticsearch-5-6 | at org.elasticsearch.cluster.block.ClusterBlocks.globalBlockedException(ClusterBlocks.java:165) ~[elasticsearch-5.6.3.jar:5.6.3]
elasticsearch-5-6 | at org.elasticsearch.action.admin.indices.recovery.TransportRecoveryAction.checkGlobalBlock(TransportRecoveryAction.java:114) ~[elasticsearch-5.6.3.jar:5.6.3]
elasticsearch-5-6 | at org.elasticsearch.action.admin.indices.recovery.TransportRecoveryAction.checkGlobalBlock(TransportRecoveryAction.java:52) ~[elasticsearch-5.6.3.jar:5.6.3]
elasticsearch-5-6 | at org.elasticsearch.action.support.broadcast.node.TransportBroadcastByNodeAction$AsyncAction.<init>(TransportBroadcastByNodeAction.java:256) ~[elasticsearch-5.6.3.jar:5.6.3]
elasticsearch-5-6 | at org.elasticsearch.action.support.broadcast.node.TransportBroadcastByNodeAction.doExecute(TransportBroadcastByNodeAction.java:234) ~[elasticsearch-5.6.3.jar:5.6.3]
elasticsearch-5-6 | at org.elasticsearch.action.support.broadcast.node.TransportBroadcastByNodeAction.doExecute(TransportBroadcastByNodeAction.java:79) ~[elasticsearch-5.6.3.jar:5.6.3]
elasticsearch-5-6 | at org.elasticsearch.action.support.TransportAction$RequestFilterChain.proceed(TransportAction.java:170) ~[elasticsearch-5.6.3.jar:5.6.3]
elasticsearch-5-6 | at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:142) ~[elasticsearch-5.6.3.jar:5.6.3]
elasticsearch-5-6 | at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:84) ~[elasticsearch-5.6.3.jar:5.6.3]
elasticsearch-5-6 | at org.elasticsearch.client.node.NodeClient.executeLocally(NodeClient.java:83) ~[elasticsearch-5.6.3.jar:5.6.3]
elasticsearch-5-6 | at
elasticsearch-5-6 | [2017-11-26T06:08:45,238][WARN ][o.e.x.w.e.ExecutionService] [immunedata-cluster-5-6.node-1] Failed to execute watch [XYNCje-TQzKm9OLdiH60gQ_elasticsearch_cluster_status_60e3c208-acca-4462-ba47-0711279d8f5e-2017-11-26T06:08:35.573Z]
elasticsearch-5-6 | [2017-11-26T06:08:54,886][WARN ][o.e.m.j.JvmGcMonitorService] [immunedata-cluster-5-6.node-1] [gc][young][63][9] duration [3.6s], collections [1]/[4.6s], total [3.6s]/[30.2s], memory [226.9mb]->[103.5mb]/[1.9gb], all_pools {[young] [127.5mb]->[1mb]/[133.1mb]}{[survivor] [16.6mb]->[11.3mb]/[16.6mb]}{[old] [82.7mb]->[91.2mb]/[1.8gb]}
elasticsearch-5-6 | [2017-11-26T06:08:54,886][WARN ][o.e.m.j.JvmGcMonitorService] [immunedata-cluster-5-6.node-1] [gc][63] overhead, spent [3.6s] collecting in the last [4.6s]
logstash-5-6 | Sending Logstash's logs to /usr/share/logstash/logs which is now configured via log4j2.properties
elasticsearch-5-6 | [2017-11-26T06:08:55,988][INFO ][o.e.c.r.a.AllocationService] [immunedata-cluster-5-6.node-1] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[.watcher-history-6-2017.11.20][0], [.monitoring-es-6-2017.11.20][0]] ...]).
logstash-5-6 | [2017-11-26T06:08:56,786][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"fb_apache", :directory=>"/usr/share/logstash/modules/fb_apache/configuration"}
logstash-5-6 | [2017-11-26T06:08:56,891][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"netflow", :directory=>"/usr/share/logstash/modules/netflow/configuration"}
logstash-5-6 | [2017-11-26T06:08:57,558][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"arcsight", :directory=>"/usr/share/logstash/vendor/bundle/jruby/1.9/gems/x-pack-5.6.3-java/modules/arcsight/configuration"}
logstash-5-6 | [2017-11-26T06:09:04,121][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elastic:xxxxxx#elasticsearch-5-6:9201/]}}
logstash-5-6 | [2017-11-26T06:09:04,123][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://elastic:xxxxxx#elasticsearch-5-6:9201/, :path=>"/"}
elasticsearch-5-6 | [2017-11-26T06:09:04,687][WARN ][o.e.c.r.a.DiskThresholdMonitor] [immunedata-cluster-5-6.node-1] high disk watermark [90%] exceeded on [eAlcHBJ2QVG58e0HJsgrdQ][immunedata-cluster-5-6.node-1][/usr/share/elasticsearch/data/nodes/0] free: 1.9gb[7.4%], shards will be relocated away from this node
elasticsearch-5-6 | [2017-11-26T06:09:04,687][INFO ][o.e.c.r.a.DiskThresholdMonitor] [immunedata-cluster-5-6.node-1] rerouting shards: [high disk watermark exceeded on one or more nodes]
logstash-5-6 | [2017-11-26T06:09:06,450][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://elastic:xxxxxx#elasticsearch-5-6:9201/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://elastic:xxxxxx#elasticsearch-5-6:9201/][Manticore::SocketException] Connection refused (Connection refused)"}
logstash-5-6 | [2017-11-26T06:09:06,452][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
logstash-5-6 | [2017-11-26T06:09:06,455][ERROR][logstash.outputs.elasticsearch] Failed to install template. {:message=>"Template file '' could not be found!", :class=>"ArgumentError", :backtrace=>["/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-7.4.2-java/lib/logstash/outputs/elasticsearch/template_manager.rb:37:in `read_template_file'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-7.4.2-java/lib/logstash/outputs/elasticsearch/template_manager.rb:23:in `get_template'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-7.4.2-java/lib/logstash/outputs/elasticsearch/template_manager.rb:7:in `install_template'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-7.4.2-java/lib/logstash/outputs/elasticsearch/common.rb:58:in `install_template'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-7.4.2-java/lib/logstash/outputs/elasticsearch/common.rb:25:in `register'", "/usr/share/logstash/logstash-core/lib/logstash/output_delegator_strategies/shared.rb:9:in `register'", "/usr/share/logstash/logstash-core/lib/logstash/output_delegator.rb:43:in `register'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:290:in `register_plugin'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:301:in `register_plugins'", "org/jruby/RubyArray.java:1613:in `each'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:301:in `register_plugins'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:310:in `start_workers'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:235:in `run'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:398:in `start_pipeline'"]}
logstash-5-6 | [2017-11-26T06:09:06,455][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//elasticsearch-5-6:9201"]}
logstash-5-6 | [2017-11-26T06:09:06,462][INFO ][logstash.pipeline ] Starting pipeline {"id"=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>250}
logstash-5-6 | [2017-11-26T06:09:09,818][INFO ][logstash.pipeline ] Pipeline main started
logstash-5-6 | [2017-11-26T06:09:10,341][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
logstash-5-6 | [2017-11-26T06:09:11,460][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://elastic:xxxxxx#elasticsearch-5-6:9201/, :path=>"/"}
logstash-5-6 | [2017-11-26T06:09:11,484][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://elastic:xxxxxx#elasticsearch-5-6:9201/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://elastic:xxxxxx#elasticsearch-5-6:9201/][Manticore::SocketException] Connection refused (Connection refused)"}
logstash-5-6 | [2017-11-26T06:09:16,491][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://elastic:xxxxxx#elasticsearch-5-6:9201/, :path=>"/"}
logstash-5-6 | [2017-11-26T06:09:16,500][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://elastic:xxxxxx#elasticsearch-5-6:9201/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://elastic:xxxxxx#elasticsearch-5-6:9201/][Manticore::SocketException] Connection refused (Connection refused)"}
kibana-5-6 | {"type":"log","#timestamp":"2017-11-26T06:09:21Z","tags":["warning","elasticsearch","config","deprecation"],"pid":1,"message":"Config key \"ssl.verify\" is deprecated. It has been replaced with \"ssl.verificationMode\""}
logstash-5-6 | [2017-11-26T06:09:21,513][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://elastic:xxxxxx#elasticsearch-5-6:9201/, :path=>"/"}
logstash-5-6 | [2017-11-26T06:09:21,523][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://elastic:xxxxxx#elasticsearch-5-6:9201/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://elastic:xxxxxx#elasticsearch-5-6:9201/][Manticore::SocketException] Connection refused (Connection refused)"}
kibana-5-6 | {"type":"log","#timestamp":"2017-11-26T06:09:26Z","tags":["status","plugin:kibana#5.6.3","info"],"pid":1,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
logstash-5-6 | [2017-11-26T06:09:26,536][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://elastic:xxxxxx#elasticsearch-5-6:9201/, :path=>"/"}
logstash-5-6 | [2017-11-26T06:09:26,570][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://elastic:xxxxxx#elasticsearch-5-6:9201/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://elastic:xxxxxx#elasticsearch-5-6:9201/][Manticore::SocketException] Connection refused (Connection refused)"}
kibana-5-6 | {"type":"log","#timestamp":"2017-11-26T06:09:26Z","tags":["status","plugin:elasticsearch#5.6.3","info"],"pid":1,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
kibana-5-6 | {"type":"log","#timestamp":"2017-11-26T06:09:26Z","tags":["status","plugin:xpack_main#5.6.3","info"],"pid":1,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
kibana-5-6 | {"type":"log","#timestamp":"2017-11-26T06:09:29Z","tags":["status","plugin:graph#5.6.3","info"],"pid":1,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
kibana-5-6 | {"type":"log","#timestamp":"2017-11-26T06:09:29Z","tags":["error","elasticsearch","admin"],"pid":1,"message":"Request error, retrying\nHEAD http://elasticsearch-5-6:9201/ => connect ECONNREFUSED 172.21.0.2:9201"}
kibana-5-6 | {"type":"log","#timestamp":"2017-11-26T06:09:29Z","tags":["status","plugin:monitoring#5.6.3","info"],"pid":1,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
kibana-5-6 | {"type":"log","#timestamp":"2017-11-26T06:09:29Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"Unable to revive connection: http://elasticsearch-5-6:9201/"}
kibana-5-6 | {"type":"log","#timestamp":"2017-11-26T06:09:29Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"No living connections"}
logstash-5-6 | [2017-11-26T06:09:31,585][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://elastic:xxxxxx#elasticsearch-5-6:9201/, :path=>"/"}
logstash-5-6 | [2017-11-26T06:09:31,603][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://elastic:xxxxxx#elasticsearch-5-6:9201/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://elastic:xxxxxx#elasticsearch-5-6:9201/][Manticore::SocketException] Connection refused (Connection refused)"}
kibana-5-6 | {"type":"log","#timestamp":"2017-11-26T06:09:33Z","tags":["reporting","warning"],"pid":1,"message":"Generating a random key for xpack.reporting.encryptionKey. To prevent pending reports from failing on restart, please set xpack.reporting.encryptionKey in kibana.yml"}
kibana-5-6 | {"type":"log","#timestamp":"2017-11-26T06:09:33Z","tags":["status","plugin:reporting#5.6.3","info"],"pid":1,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
kibana-5-6 | {"type":"log","#timestamp":"2017-11-26T06:09:33Z","tags":["status","plugin:xpack_main#5.6.3","error"],"pid":1,"state":"red","message":"Status changed from yellow to red - Unable to connect to Elasticsearch at http://elasticsearch-5-6:9201.","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
kibana-5-6 | {"type":"log","#timestamp":"2017-11-26T06:09:33Z","tags":["status","plugin:graph#5.6.3","error"],"pid":1,"state":"red","message":"Status changed from yellow to red - Unable to connect to Elasticsearch at http://elasticsearch-5-6:9201.","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
kibana-5-6 | {"type":"log","#timestamp":"2017-11-26T06:09:33Z","tags":["status","plugin:reporting#5.6.3","error"],"pid":1,"state":"red","message":"Status changed from yellow to red - Unable to connect to Elasticsearch at http://elasticsearch-5-6:9201.","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
kibana-5-6 | {"type":"log","#timestamp":"2017-11-26T06:09:33Z","tags":["status","plugin:elasticsearch#5.6.3","error"],"pid":1,"state":"red","message":"Status changed from yellow to red - Unable to connect to Elasticsearch at http://elasticsearch-5-6:9201.","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
kibana-5-6 | {"type":"log","#timestamp":"2017-11-26T06:09:33Z","tags":["status","plugin:searchprofiler#5.6.3","error"],"pid":1,"state":"red","message":"Status changed from uninitialized to red - Unable to connect to Elasticsearch at http://elasticsearch-5-6:9201.","prevState":"uninitialized","prevMsg":"uninitialized"}
kibana-5-6 | {"type":"log","#timestamp":"2017-11-26T06:09:34Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"Unable to revive connection: http://elasticsearch-5-6:9201/"}
kibana-5-6 | {"type":"log","#timestamp":"2017-11-26T06:09:34Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"No living connections"}
kibana-5-6 | {"type":"log","#timestamp":"2017-11-26T06:09:34Z","tags":["status","plugin:ml#5.6.3","error"],"pid":1,"state":"red","message":"Status changed from uninitialized to red - Unable to connect to Elasticsearch at http://elasticsearch-5-6:9201.","prevState":"uninitialized","prevMsg":"uninitialized"}
elasticsearch-5-6 | [2017-11-26T06:09:34,750][WARN ][o.e.c.r.a.DiskThresholdMonitor] [immunedata-cluster-5-6.node-1] high disk watermark [90%] exceeded on [eAlcHBJ2QVG58e0HJsgrdQ][immunedata-cluster-5-6.node-1][/usr/share/elasticsearch/data/nodes/0] free: 1.9gb[7.4%], shards will be relocated away from this node
logstash-5-6 | [2017-11-26T06:09:36,692][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://elastic:xxxxxx#elasticsearch-5-6:9201/, :path=>"/"}
kibana-5-6 | {"type":"log","#timestamp":"2017-11-26T06:09:37Z","tags":["status","plugin:ml#5.6.3","info"],"pid":1,"state":"yellow","message":"Status changed from red to yellow - Waiting for Elasticsearch","prevState":"red","prevMsg":"Unable to connect to Elasticsearch at http://elasticsearch-5-6:9201."}
logstash-5-6 | [2017-11-26T06:09:37,366][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://elastic:xxxxxx#elasticsearch-5-6:9201/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://elastic:xxxxxx#elasticsearch-5-6:9201/][Manticore::SocketException] Connection refused (Connection refused)"}
kibana-5-6 | {"type":"log","#timestamp":"2017-11-26T06:09:37Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"Unable to revive connection: http://elasticsearch-5-6:9201/"}
kibana-5-6 | {"type":"log","#timestamp":"2017-11-26T06:09:37Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"No living connections"}
kibana-5-6 | {"type":"log","#timestamp":"2017-11-26T06:09:37Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"Unable to revive connection: http://elasticsearch-5-6:9201/"}
kibana-5-6 | {"type":"log","#timestamp":"2017-11-26T06:09:37Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"No living connections"}
kibana-5-6 | {"type":"log","#timestamp":"2017-11-26T06:09:37Z","tags":
You called the Elasticsearch service elasticsearch-5-6 in your docker-compose.yml. That means the Elasticsearch container is available at http://elasticsearch-5-6:9200 to all other containers in your docker-compose.yml, and at http://127.0.0.1:9201 from the host machine.
In order to have a workable ELK stack you need to change the Logstash config to:
http.host: "0.0.0.0"
path.config: /usr/share/logstash/pipeline
xpack.monitoring.elasticsearch.url: http://elasticsearch-5-6:9200
#xpack.monitoring.elasticsearch.username: logstash_system
#xpack.monitoring.elasticsearch.password: changeme
xpack.monitoring.enabled: false
and the Kibana config to:
server.name: kibana
server.host: "0"
elasticsearch.url: http://elasticsearch-5-6:9200
xpack.monitoring.ui.container.elasticsearch.enabled: false
xpack.security.enabled: false
## Above I tried this - not working
#elasticsearch.username: elastic
#elasticsearch.password: changeme
#xpack.monitoring.ui.container.elasticsearch.enabled: false
#xpack.monitoring.ui.container.elasticsearch.enabled: true
# Extra:
ssl.verificationMode: false
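To double-check which address is which, a quick probe from a throwaway container attached to the same compose network can confirm that the in-network address is the service name on port 9200, while 127.0.0.1:9201 only works from the host. A minimal sketch in plain Node (no extra dependencies), using the service name from the compose file above:

const http = require("http");

// Run from a container on the same compose network as elasticsearch-5-6.
http
  .get("http://elasticsearch-5-6:9200/", (res) => {
    console.log("Elasticsearch answered with HTTP", res.statusCode);
    res.resume();
  })
  .on("error", (err) => {
    console.error("Could not reach Elasticsearch:", err.message);
  });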
ELK cluster with X-Pack disabled
You are missing ELASTICSEARCH_URL: "http://elasticsearch:9200" in Kibana and xpack.monitoring.elasticsearch.url: http://elasticsearch:9200 in Logstash.
Here is a sample yml configuration with all possible environment variables defined under environment:
version: '3.4'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.6.0
    container_name: elasticsearch
    environment:
      ES_JAVA_OPTS: '-Xms2048m -Xmx2048m'
      cluster.name: es-cluster
      node.name: es1
      network.bind_host: 0.0.0.0
      discovery.zen.minimum_master_nodes: 1
      discovery.zen.ping.unicast.hosts: elasticsearch1
      xpack.security.enabled: 'false'
      xpack.monitoring.enabled: 'false'
      xpack.watcher.enabled: 'false'
      xpack.ml.enabled: 'false'
      http.cors.enabled: 'true'
      http.cors.allow-origin: "*"
      http.cors.allow-methods: OPTIONS, HEAD, GET, POST, PUT, DELETE
      http.cors.allow-headers: X-Requested-With,X-Auth-Token,Content-Type, Content-Length
      logger.level: debug
    volumes:
      - /var/elasticsearch/db/elasticsearch/data:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
      - 9300:9300
    networks:
      - elastic
  logstash:
    image: docker.elastic.co/logstash/logstash:6.6.0
    container_name: logstash
    ports:
      - 5044:5044
      - 5001:5001
    volumes:
      - /var/elasticsearch/logstash/pipeline:/usr/share/logstash/pipeline
    environment:
      ES_JAVA_OPTS: "-Xmx2048m -Xms2048m"
      http.host: 0.0.0.0
      xpack.monitoring.enabled: 'false'
      xpack.monitoring.elasticsearch.url: http://elasticsearch:9200
    networks:
      - elastic
    depends_on:
      - elasticsearch
  kibana:
    image: docker.elastic.co/kibana/kibana:6.6.0
    container_name: kibana
    environment:
      ELASTICSEARCH_URL: "http://elasticsearch:9200"
      xpack.security.enabled: 'false'
      xpack.graph.enabled: 'false'
      xpack.ml.enabled: 'false'
      xpack.monitoring.enabled: 'false'
      xpack.watcher.enabled: 'false'
      xpack.reporting.enabled: 'false'
    ports:
      - 5601:5601
    networks:
      - elastic
    depends_on:
      - elasticsearch
  elasticsearch-head:
    image: mobz/elasticsearch-head:5
    container_name: elasticsearch-head
    ports:
      - "9100:9100"
    networks:
      - elastic
networks:
  elastic:
    driver: bridge
