Unable to run Kibana and Logstash with Elasticsearch - Docker

Elasticsearch is running fine on port 9201, but Kibana and Logstash fail to connect to it when started with docker-compose.
For Logstash it throws the error:
Attempted to resurrect connection to dead ES instance, but got an error.
For Kibana it throws warnings:
"warning","elasticsearch","admin"],"pid":1,"message":"No living connections"
Below is the docker-compose.yml file:
version: '2'
services:
  # Service 1 : elasticsearch
  elasticsearch-5-6:
    image: docker.elastic.co/elasticsearch/elasticsearch:5.6.3
    container_name: elasticsearch-5-6
    ports:
      - "9201:9200"
    volumes:
      - /etc/elasticsearch/elasticsearch-5-6.yml:/usr/share/elasticsearch/config/elasticsearch.yml
      - /var/elasticsearch/data/immunedata-5-6/:/usr/share/elasticsearch/data/
      #- /etc/elasticsearch/logging.yml:/usr/share/elasticsearch/config/logging.yml
      #- /var/log/elasticsearch/:/usr/share/elasticsearch/logs/
    environment:
      - cluster.name=docker-cluster-elasticsearch-5-6
      #- bootstrap.memory_lock=true
      - "ES_JAVA_OPTS: -Xmx2048m -Xms2048m"
      # Disabling the xpack security as it costs after one month of free trial.
      - xpack.security.enabled=false
  # Service 2 : logstash
  logstash-5-6:
    image: docker.elastic.co/logstash/logstash:5.6.3
    container_name: logstash-5-6
    ports:
      #- "5044:5044"
      - "5001:5001"
    volumes:
      - /etc/logstash/logstash.yml:/usr/share/logstash/config/logstash.yml
      - /etc/logstash/pipeline:/usr/share/logstash/pipeline
      #- /etc/logstash/logstash.yml:/usr/share/logstash/config/logstash.yml
      #- /var/logstash/pipeline:/usr/share/logstash/pipeline
    environment:
      - "ES_JAVA_OPTS: -Xmx2048m -Xms2048m"
    depends_on:
      - elasticsearch-5-6
  # Service 3 : kibana
  kibana-5-6:
    image: docker.elastic.co/kibana/kibana:5.6.3
    container_name: kibana-5-6
    ports:
      - "5601:5601"
    volumes:
      - /etc/kibana/kibana.yml:/usr/share/kibana/config/kibana.yml
      #- /var/kibana/immunedata-5-6/:/usr/share/kibana/data/
    environment:
      - xpack.security.enabled=false
      - xpack.graph.enabled = false
      - xpack.ml.enabled = false
      - xpack.monitoring.enabled = false
      - xpack.watcher.enabled = false
      - xpack.reporting.enabled = false
    depends_on:
      - elasticsearch-5-6
  # Service 4 : elasticsearch-head
  elasticsearch-head:
    image: mobz/elasticsearch-head:5
    container_name: elasticsearch-head
    # will not wait for elasticsearch to be ready.
    ports:
      - "9100:9100"
elasticsearch.yml file
cluster.name: immunedata-cluster-5.6
node.name: "immunedata-cluster-5-6.node-1"
# Elasticsearch in Docker uses a different data directory; the mapped directory is defined in docker-compose.yml
#path.data: /var/elasticsearch/data/immunedata-5-6/
path.data: /usr/share/elasticsearch/data/
#path.data: /var/elasticsearch/data
# NOTE : Since elasticsearch 5.x index level settings can NOT be set on the nodes configuration like the elasticsearch.yaml
#index.number_of_shards: 1
#index.number_of_replicas: 0
# Allow all host access
network.bind_host: 0.0.0.0
http.port: 9200
# To enable cross-origin resource sharing (Accessing on browser)
http.cors.enabled: true
http.cors.allow-origin : "*"
logstash.yml file
http.host: "0.0.0.0"
path.config: /usr/share/logstash/pipeline
#xpack.monitoring.elasticsearch.url: http://localhost:9201
##xpack.monitoring.elasticsearch.url: http://elasticsearch:9201
#xpack.monitoring.elasticsearch.username: logstash_system
#xpack.monitoring.elasticsearch.password: changeme
xpack.monitoring.enabled: false
kibana.yml file
server.name: kibana
server.host: "0"
elasticsearch.url: http://192.168.56.10:9201
xpack.monitoring.ui.container.elasticsearch.enabled: false
#elasticsearch.url: http://elasticsearch:9201
xpack.security.enabled: false
## Above I tried this - not working
#elasticsearch.username: elastic
#elasticsearch.password: changeme
#xpack.monitoring.ui.container.elasticsearch.enabled: false
#xpack.monitoring.ui.container.elasticsearch.enabled: true
# Extra:
ssl.verificationMode: false
Logs:
[elasticsearch-5.6.3.jar:5.6.3]
elasticsearch-5-6 | at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_141]
elasticsearch-5-6 | at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_141]
elasticsearch-5-6 | at java.lang.Thread.run(Thread.java:748) [?:1.8.0_141]
elasticsearch-5-6 | [2017-11-26T06:07:57,084][WARN ][o.e.m.j.JvmGcMonitorService] [immunedata-cluster-5-6.node-1] [gc][young][14][6] duration [18.2s], collections [1]/[18.5s], total [18.2s]/[23.5s], memory [178.2mb]->[79.5mb]/[1.9gb], all_pools {[young] [132.1mb]->[964kb]/[133.1mb]}{[survivor] [16.6mb]->[12.5mb]/[16.6mb]}{[old] [29.4mb]->[66.5mb]/[1.8gb]}
elasticsearch-5-6 | [2017-11-26T06:07:57,085][WARN ][o.e.m.j.JvmGcMonitorService] [immunedata-cluster-5-6.node-1] [gc][14] overhead, spent [18.2s] collecting in the last [18.5s]
elasticsearch-5-6 | [2017-11-26T06:07:57,298][ERROR][o.e.x.m.c.i.IndexRecoveryCollector] [immunedata-cluster-5-6.node-1] collector [index-recovery] failed to collect data
elasticsearch-5-6 | org.elasticsearch.cluster.block.ClusterBlockException: blocked by: [SERVICE_UNAVAILABLE/1/state not recovered / initialized];
elasticsearch-5-6 | at org.elasticsearch.cluster.block.ClusterBlocks.globalBlockedException(ClusterBlocks.java:165) ~[elasticsearch-5.6.3.jar:5.6.3]
elasticsearch-5-6 | at org.elasticsearch.action.admin.indices.recovery.TransportRecoveryAction.checkGlobalBlock(TransportRecoveryAction.java:114) ~[elasticsearch-5.6.3.jar:5.6.3]
elasticsearch-5-6 | at org.elasticsearch.action.admin.indices.recovery.TransportRecoveryAction.checkGlobalBlock(TransportRecoveryAction.java:52) ~[elasticsearch-5.6.3.jar:5.6.3]
elasticsearch-5-6 | at org.elasticsearch.action.support.broadcast.node.TransportBroadcastByNodeAction$AsyncAction.<init>(TransportBroadcastByNodeAction.java:256) ~[elasticsearch-5.6.3.jar:5.6.3]
elasticsearch-5-6 | at org.elasticsearch.action.support.broadcast.node.TransportBroadcastByNodeAction.doExecute(TransportBroadcastByNodeAction.java:234) ~[elasticsearch-5.6.3.jar:5.6.3]
elasticsearch-5-6 | at org.elasticsearch.action.support.broadcast.node.TransportBroadcastByNodeAction.doExecute(TransportBroadcastByNodeAction.java:79) ~[elasticsearch-5.6.3.jar:5.6.3]
elasticsearch-5-6 | at org.elasticsearch.action.support.TransportAction$RequestFilterChain.proceed(TransportAction.java:170) ~[elasticsearch-5.6.3.jar:5.6.3]
elasticsearch-5-6 | at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:142) ~[elasticsearch-5.6.3.jar:5.6.3]
elasticsearch-5-6 | at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:84) ~[elasticsearch-5.6.3.jar:5.6.3]
elasticsearch-5-6 | at org.elasticsearch.client.node.NodeClient.executeLocally(NodeClient.java:83) ~[elasticsearch-5.6.3.jar:5.6.3]
elasticsearch-5-6 | at
elasticsearch-5-6 | [2017-11-26T06:08:45,238][WARN ][o.e.x.w.e.ExecutionService] [immunedata-cluster-5-6.node-1] Failed to execute watch [XYNCje-TQzKm9OLdiH60gQ_elasticsearch_cluster_status_60e3c208-acca-4462-ba47-0711279d8f5e-2017-11-26T06:08:35.573Z]
elasticsearch-5-6 | [2017-11-26T06:08:54,886][WARN ][o.e.m.j.JvmGcMonitorService] [immunedata-cluster-5-6.node-1] [gc][young][63][9] duration [3.6s], collections [1]/[4.6s], total [3.6s]/[30.2s], memory [226.9mb]->[103.5mb]/[1.9gb], all_pools {[young] [127.5mb]->[1mb]/[133.1mb]}{[survivor] [16.6mb]->[11.3mb]/[16.6mb]}{[old] [82.7mb]->[91.2mb]/[1.8gb]}
elasticsearch-5-6 | [2017-11-26T06:08:54,886][WARN ][o.e.m.j.JvmGcMonitorService] [immunedata-cluster-5-6.node-1] [gc][63] overhead, spent [3.6s] collecting in the last [4.6s]
logstash-5-6 | Sending Logstash's logs to /usr/share/logstash/logs which is now configured via log4j2.properties
elasticsearch-5-6 | [2017-11-26T06:08:55,988][INFO ][o.e.c.r.a.AllocationService] [immunedata-cluster-5-6.node-1] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[.watcher-history-6-2017.11.20][0], [.monitoring-es-6-2017.11.20][0]] ...]).
logstash-5-6 | [2017-11-26T06:08:56,786][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"fb_apache", :directory=>"/usr/share/logstash/modules/fb_apache/configuration"}
logstash-5-6 | [2017-11-26T06:08:56,891][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"netflow", :directory=>"/usr/share/logstash/modules/netflow/configuration"}
logstash-5-6 | [2017-11-26T06:08:57,558][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"arcsight", :directory=>"/usr/share/logstash/vendor/bundle/jruby/1.9/gems/x-pack-5.6.3-java/modules/arcsight/configuration"}
logstash-5-6 | [2017-11-26T06:09:04,121][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elastic:xxxxxx#elasticsearch-5-6:9201/]}}
logstash-5-6 | [2017-11-26T06:09:04,123][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://elastic:xxxxxx#elasticsearch-5-6:9201/, :path=>"/"}
elasticsearch-5-6 | [2017-11-26T06:09:04,687][WARN ][o.e.c.r.a.DiskThresholdMonitor] [immunedata-cluster-5-6.node-1] high disk watermark [90%] exceeded on [eAlcHBJ2QVG58e0HJsgrdQ][immunedata-cluster-5-6.node-1][/usr/share/elasticsearch/data/nodes/0] free: 1.9gb[7.4%], shards will be relocated away from this node
elasticsearch-5-6 | [2017-11-26T06:09:04,687][INFO ][o.e.c.r.a.DiskThresholdMonitor] [immunedata-cluster-5-6.node-1] rerouting shards: [high disk watermark exceeded on one or more nodes]
logstash-5-6 | [2017-11-26T06:09:06,450][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://elastic:xxxxxx#elasticsearch-5-6:9201/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://elastic:xxxxxx#elasticsearch-5-6:9201/][Manticore::SocketException] Connection refused (Connection refused)"}
logstash-5-6 | [2017-11-26T06:09:06,452][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
logstash-5-6 | [2017-11-26T06:09:06,455][ERROR][logstash.outputs.elasticsearch] Failed to install template. {:message=>"Template file '' could not be found!", :class=>"ArgumentError", :backtrace=>["/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-7.4.2-java/lib/logstash/outputs/elasticsearch/template_manager.rb:37:in `read_template_file'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-7.4.2-java/lib/logstash/outputs/elasticsearch/template_manager.rb:23:in `get_template'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-7.4.2-java/lib/logstash/outputs/elasticsearch/template_manager.rb:7:in `install_template'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-7.4.2-java/lib/logstash/outputs/elasticsearch/common.rb:58:in `install_template'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-7.4.2-java/lib/logstash/outputs/elasticsearch/common.rb:25:in `register'", "/usr/share/logstash/logstash-core/lib/logstash/output_delegator_strategies/shared.rb:9:in `register'", "/usr/share/logstash/logstash-core/lib/logstash/output_delegator.rb:43:in `register'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:290:in `register_plugin'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:301:in `register_plugins'", "org/jruby/RubyArray.java:1613:in `each'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:301:in `register_plugins'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:310:in `start_workers'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:235:in `run'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:398:in `start_pipeline'"]}
logstash-5-6 | [2017-11-26T06:09:06,455][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//elasticsearch-5-6:9201"]}
logstash-5-6 | [2017-11-26T06:09:06,462][INFO ][logstash.pipeline ] Starting pipeline {"id"=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>250}
logstash-5-6 | [2017-11-26T06:09:09,818][INFO ][logstash.pipeline ] Pipeline main started
logstash-5-6 | [2017-11-26T06:09:10,341][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
logstash-5-6 | [2017-11-26T06:09:11,460][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://elastic:xxxxxx#elasticsearch-5-6:9201/, :path=>"/"}
logstash-5-6 | [2017-11-26T06:09:11,484][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://elastic:xxxxxx#elasticsearch-5-6:9201/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://elastic:xxxxxx#elasticsearch-5-6:9201/][Manticore::SocketException] Connection refused (Connection refused)"}
logstash-5-6 | [2017-11-26T06:09:16,491][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://elastic:xxxxxx#elasticsearch-5-6:9201/, :path=>"/"}
logstash-5-6 | [2017-11-26T06:09:16,500][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://elastic:xxxxxx#elasticsearch-5-6:9201/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://elastic:xxxxxx#elasticsearch-5-6:9201/][Manticore::SocketException] Connection refused (Connection refused)"}
kibana-5-6 | {"type":"log","#timestamp":"2017-11-26T06:09:21Z","tags":["warning","elasticsearch","config","deprecation"],"pid":1,"message":"Config key \"ssl.verify\" is deprecated. It has been replaced with \"ssl.verificationMode\""}
logstash-5-6 | [2017-11-26T06:09:21,513][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://elastic:xxxxxx#elasticsearch-5-6:9201/, :path=>"/"}
logstash-5-6 | [2017-11-26T06:09:21,523][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://elastic:xxxxxx#elasticsearch-5-6:9201/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://elastic:xxxxxx#elasticsearch-5-6:9201/][Manticore::SocketException] Connection refused (Connection refused)"}
kibana-5-6 | {"type":"log","#timestamp":"2017-11-26T06:09:26Z","tags":["status","plugin:kibana#5.6.3","info"],"pid":1,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
logstash-5-6 | [2017-11-26T06:09:26,536][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://elastic:xxxxxx#elasticsearch-5-6:9201/, :path=>"/"}
logstash-5-6 | [2017-11-26T06:09:26,570][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://elastic:xxxxxx#elasticsearch-5-6:9201/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://elastic:xxxxxx#elasticsearch-5-6:9201/][Manticore::SocketException] Connection refused (Connection refused)"}
kibana-5-6 | {"type":"log","#timestamp":"2017-11-26T06:09:26Z","tags":["status","plugin:elasticsearch#5.6.3","info"],"pid":1,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
kibana-5-6 | {"type":"log","#timestamp":"2017-11-26T06:09:26Z","tags":["status","plugin:xpack_main#5.6.3","info"],"pid":1,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
kibana-5-6 | {"type":"log","#timestamp":"2017-11-26T06:09:29Z","tags":["status","plugin:graph#5.6.3","info"],"pid":1,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
kibana-5-6 | {"type":"log","#timestamp":"2017-11-26T06:09:29Z","tags":["error","elasticsearch","admin"],"pid":1,"message":"Request error, retrying\nHEAD http://elasticsearch-5-6:9201/ => connect ECONNREFUSED 172.21.0.2:9201"}
kibana-5-6 | {"type":"log","#timestamp":"2017-11-26T06:09:29Z","tags":["status","plugin:monitoring#5.6.3","info"],"pid":1,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
kibana-5-6 | {"type":"log","#timestamp":"2017-11-26T06:09:29Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"Unable to revive connection: http://elasticsearch-5-6:9201/"}
kibana-5-6 | {"type":"log","#timestamp":"2017-11-26T06:09:29Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"No living connections"}
logstash-5-6 | [2017-11-26T06:09:31,585][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://elastic:xxxxxx#elasticsearch-5-6:9201/, :path=>"/"}
logstash-5-6 | [2017-11-26T06:09:31,603][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://elastic:xxxxxx#elasticsearch-5-6:9201/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://elastic:xxxxxx#elasticsearch-5-6:9201/][Manticore::SocketException] Connection refused (Connection refused)"}
kibana-5-6 | {"type":"log","#timestamp":"2017-11-26T06:09:33Z","tags":["reporting","warning"],"pid":1,"message":"Generating a random key for xpack.reporting.encryptionKey. To prevent pending reports from failing on restart, please set xpack.reporting.encryptionKey in kibana.yml"}
kibana-5-6 | {"type":"log","#timestamp":"2017-11-26T06:09:33Z","tags":["status","plugin:reporting#5.6.3","info"],"pid":1,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
kibana-5-6 | {"type":"log","#timestamp":"2017-11-26T06:09:33Z","tags":["status","plugin:xpack_main#5.6.3","error"],"pid":1,"state":"red","message":"Status changed from yellow to red - Unable to connect to Elasticsearch at http://elasticsearch-5-6:9201.","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
kibana-5-6 | {"type":"log","#timestamp":"2017-11-26T06:09:33Z","tags":["status","plugin:graph#5.6.3","error"],"pid":1,"state":"red","message":"Status changed from yellow to red - Unable to connect to Elasticsearch at http://elasticsearch-5-6:9201.","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
kibana-5-6 | {"type":"log","#timestamp":"2017-11-26T06:09:33Z","tags":["status","plugin:reporting#5.6.3","error"],"pid":1,"state":"red","message":"Status changed from yellow to red - Unable to connect to Elasticsearch at http://elasticsearch-5-6:9201.","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
kibana-5-6 | {"type":"log","#timestamp":"2017-11-26T06:09:33Z","tags":["status","plugin:elasticsearch#5.6.3","error"],"pid":1,"state":"red","message":"Status changed from yellow to red - Unable to connect to Elasticsearch at http://elasticsearch-5-6:9201.","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
kibana-5-6 | {"type":"log","#timestamp":"2017-11-26T06:09:33Z","tags":["status","plugin:searchprofiler#5.6.3","error"],"pid":1,"state":"red","message":"Status changed from uninitialized to red - Unable to connect to Elasticsearch at http://elasticsearch-5-6:9201.","prevState":"uninitialized","prevMsg":"uninitialized"}
kibana-5-6 | {"type":"log","#timestamp":"2017-11-26T06:09:34Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"Unable to revive connection: http://elasticsearch-5-6:9201/"}
kibana-5-6 | {"type":"log","#timestamp":"2017-11-26T06:09:34Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"No living connections"}
kibana-5-6 | {"type":"log","#timestamp":"2017-11-26T06:09:34Z","tags":["status","plugin:ml#5.6.3","error"],"pid":1,"state":"red","message":"Status changed from uninitialized to red - Unable to connect to Elasticsearch at http://elasticsearch-5-6:9201.","prevState":"uninitialized","prevMsg":"uninitialized"}
elasticsearch-5-6 | [2017-11-26T06:09:34,750][WARN ][o.e.c.r.a.DiskThresholdMonitor] [immunedata-cluster-5-6.node-1] high disk watermark [90%] exceeded on [eAlcHBJ2QVG58e0HJsgrdQ][immunedata-cluster-5-6.node-1][/usr/share/elasticsearch/data/nodes/0] free: 1.9gb[7.4%], shards will be relocated away from this node
logstash-5-6 | [2017-11-26T06:09:36,692][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://elastic:xxxxxx#elasticsearch-5-6:9201/, :path=>"/"}
kibana-5-6 | {"type":"log","#timestamp":"2017-11-26T06:09:37Z","tags":["status","plugin:ml#5.6.3","info"],"pid":1,"state":"yellow","message":"Status changed from red to yellow - Waiting for Elasticsearch","prevState":"red","prevMsg":"Unable to connect to Elasticsearch at http://elasticsearch-5-6:9201."}
logstash-5-6 | [2017-11-26T06:09:37,366][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://elastic:xxxxxx#elasticsearch-5-6:9201/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://elastic:xxxxxx#elasticsearch-5-6:9201/][Manticore::SocketException] Connection refused (Connection refused)"}
kibana-5-6 | {"type":"log","#timestamp":"2017-11-26T06:09:37Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"Unable to revive connection: http://elasticsearch-5-6:9201/"}
kibana-5-6 | {"type":"log","#timestamp":"2017-11-26T06:09:37Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"No living connections"}
kibana-5-6 | {"type":"log","#timestamp":"2017-11-26T06:09:37Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"Unable to revive connection: http://elasticsearch-5-6:9201/"}
kibana-5-6 | {"type":"log","#timestamp":"2017-11-26T06:09:37Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"No living connections"}
kibana-5-6 | {"type":"log","#timestamp":"2017-11-26T06:09:37Z","tags":

You named the Elasticsearch service elasticsearch-5-6 in your docker-compose.yml. That means the Elasticsearch container is reachable at http://elasticsearch-5-6:9200 from all other containers in the same docker-compose.yml, and at http://127.0.0.1:9201 from the host machine.
To get a working ELK stack you need to change the Logstash config to:
http.host: "0.0.0.0"
path.config: /usr/share/logstash/pipeline
xpack.monitoring.elasticsearch.url: http://elasticsearch-5-6:9200
#xpack.monitoring.elasticsearch.username: logstash_system
#xpack.monitoring.elasticsearch.password: changeme
xpack.monitoring.enabled: false
and kibana config to:
server.name: kibana
server.host: "0"
elasticsearch.url: http://elasticsearch-5-6:9200
xpack.monitoring.ui.container.elasticsearch.enabled: false
xpack.security.enabled: false
## Above I tried this - not working
#elasticsearch.username: elastic
#elasticsearch.password: changeme
#xpack.monitoring.ui.container.elasticsearch.enabled: false
#xpack.monitoring.ui.container.elasticsearch.enabled: true
# Extra:
ssl.verificationMode: false
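To verify that the containers can actually reach Elasticsearch under that name, you can curl it from inside the Logstash or Kibana container (assuming curl is present in those images, which it usually is for the 5.6 CentOS-based images):
docker exec -it logstash-5-6 curl -s http://elasticsearch-5-6:9200
docker exec -it kibana-5-6 curl -s http://elasticsearch-5-6:9200
If you get the JSON banner with the cluster name, the service name and port 9200 are correct inside the Compose network; a "Connection refused" here usually just means Elasticsearch has not finished starting yet.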

ELK cluster with X-Pack disabled
You are missing ELASTICSEARCH_URL: "http://elasticsearch:9200" in Kibana and xpack.monitoring.elasticsearch.url: http://elasticsearch:9200 in Logstash.
Here is a sample yml configuration with all possible environment variables defined under environment:
version: '3.4'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.6.0
    container_name: elasticsearch
    environment:
      ES_JAVA_OPTS: '-Xms2048m -Xmx2048m'
      cluster.name: es-cluster
      node.name: es1
      network.bind_host: 0.0.0.0
      discovery.zen.minimum_master_nodes: 1
      discovery.zen.ping.unicast.hosts: elasticsearch1
      xpack.security.enabled: 'false'
      xpack.monitoring.enabled: 'false'
      xpack.watcher.enabled: 'false'
      xpack.ml.enabled: 'false'
      http.cors.enabled: 'true'
      http.cors.allow-origin: "*"
      http.cors.allow-methods: OPTIONS, HEAD, GET, POST, PUT, DELETE
      http.cors.allow-headers: X-Requested-With,X-Auth-Token,Content-Type, Content-Length
      logger.level: debug
    volumes:
      - /var/elasticsearch/db/elasticsearch/data:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
      - 9300:9300
    networks:
      - elastic
  logstash:
    image: docker.elastic.co/logstash/logstash:6.6.0
    container_name: logstash
    ports:
      - 5044:5044
      - 5001:5001
    volumes:
      - /var/elasticsearch/logstash/pipeline:/usr/share/logstash/pipeline
    environment:
      ES_JAVA_OPTS: "-Xmx2048m -Xms2048m"
      http.host: 0.0.0.0
      xpack.monitoring.enabled: 'false'
      xpack.monitoring.elasticsearch.url: http://elasticsearch:9200
    networks:
      - elastic
    depends_on:
      - elasticsearch
  kibana:
    image: docker.elastic.co/kibana/kibana:6.6.0
    container_name: kibana
    environment:
      ELASTICSEARCH_URL: "http://elasticsearch:9200"
      xpack.security.enabled: 'false'
      xpack.graph.enabled: 'false'
      xpack.ml.enabled: 'false'
      xpack.monitoring.enabled: 'false'
      xpack.watcher.enabled: 'false'
      xpack.reporting.enabled: 'false'
    ports:
      - 5601:5601
    networks:
      - elastic
    depends_on:
      - elasticsearch
  elasticsearch-head:
    image: mobz/elasticsearch-head:5
    container_name: elasticsearch-head
    ports:
      - "9100:9100"
    networks:
      - elastic
networks:
  elastic:
    driver: bridge
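If the pipeline files mounted from /var/elasticsearch/logstash/pipeline still point at localhost, Logstash will keep failing even with the compose file above, because localhost inside the container is the Logstash container itself. A minimal sketch of a pipeline file that targets the service name instead (the beats input and the index name are only placeholders for whatever your pipeline actually uses):
input {
  beats {
    port => 5044
  }
}
output {
  elasticsearch {
    hosts => ["http://elasticsearch:9200"]
    index => "logstash-%{+YYYY.MM.dd}"
  }
}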

Related

ElasticSearch Logstash not connecting "Connection refused" - Docker

I need help! (who would have thought, right? lol)
I have a job interview in a few days and it would mean the world to me to be well prepared for it and have some working examples.
I am trying to set up an ELK pipeline to stream data from Kafka, through Logstash, to Elasticsearch, and finally read it from Kibana. The usual.
I am making use of containers, but the Logstash - Elasticsearch duo is giving me an aneurysm.
Everything else works perfectly fine. I've checked the Kafka logs and they are fine. Kibana is connected to Elasticsearch just fine as well. But Logstash and ES really don't want to talk to each other.
Here is the setup
docker-compose.yml
version: '3.6'
services:
  elasticsearch:
    image: elasticsearch:8.6.0
    container_name: elasticsearch
    #restart: always
    volumes:
      - elastic_data:/usr/share/elasticsearch/data/
    environment:
      cluster.name: elf-kafka-cluster
      ES_JAVA_OPTS: "-Xmx256m -Xms256m"
      discovery.type: single-node
      xpack.security.enabled: false
    ports:
      - '9200:9200'
      - '9300:9300'
    networks:
      - elk
  kibana:
    image: kibana:8.6.0
    container_name: kibana
    #restart: always
    ports:
      - '5601:5601'
    environment:
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
    depends_on:
      - elasticsearch
    networks:
      - elk
  logstash:
    image: logstash:8.6.0
    container_name: logstash
    #restart: always
    volumes:
      - type: bind
        source: ./logstash_pipeline/
        target: /usr/share/logstash/pipeline
        read_only: true
    command: logstash -f /home/ettore/Documenti/Portfolio/ELK/logstash/logstash.conf
    depends_on:
      - elasticsearch
    ports:
      - '9600:9600'
    environment:
      xpack.monitoring.enabled: true
      # LS_JAVA_OPTS: "-Xmx256m -Xms256m"
    links:
      - elasticsearch
    networks:
      - elk
volumes:
  elastic_data: {}
networks:
  elk:
    driver: bridge
logstash.conf
input {
  kafka {
    bootstrap_servers => "localhost:9092"
    topics => ["topic"]
  }
}
output {
  elasitcsearch {
    hosts => ["http://localhost:9200"]
    index => "topic"
    workers => 1
  }
}
These are the Logstash error logs when I run docker-compose up:
logstash | [2023-01-17T13:59:02,680][WARN ][deprecation.logstash.monitoringextension.pipelineregisterhook] Internal collectors option for Logstash monitoring is deprecated and targeted for removal in the next major version.
logstash | Please configure Metricbeat to monitor Logstash. Documentation can be found at:
logstash | https://www.elastic.co/guide/en/logstash/current/monitoring-with-metricbeat.html
logstash | [2023-01-17T13:59:04,711][INFO ][logstash.licensechecker.licensereader] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elasticsearch:9200/]}}
logstash | [2023-01-17T13:59:05,373][INFO ][logstash.licensechecker.licensereader] Failed to perform request {:message=>"Connect to elasticsearch:9200 [elasticsearch/172.20.0.2] failed: Connection refused", :exception=>Manticore::SocketException, :cause=>#<Java::OrgApacheHttpConn::HttpHostConnectException: Connect to elasticsearch:9200 [elasticsearch/172.20.0.2] failed: Connection refused>}
logstash | [2023-01-17T13:59:05,379][WARN ][logstash.licensechecker.licensereader] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"http://elasticsearch:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :message=>"Elasticsearch Unreachable: [http://elasticsearch:9200/][Manticore::SocketException] Connect to elasticsearch:9200 [elasticsearch/172.20.0.2] failed: Connection refused"}
logstash | [2023-01-17T13:59:05,436][INFO ][logstash.licensechecker.licensereader] Failed to perform request {:message=>"Connect to elasticsearch:9200 [elasticsearch/172.20.0.2] failed: Connection refused", :exception=>Manticore::SocketException, :cause=>#<Java::OrgApacheHttpConn::HttpHostConnectException: Connect to elasticsearch:9200 [elasticsearch/172.20.0.2] failed: Connection refused>}
logstash | [2023-01-17T13:59:05,444][WARN ][logstash.licensechecker.licensereader] Marking url as dead. Last error: [LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError] Elasticsearch Unreachable: [http://elasticsearch:9200/_xpack][Manticore::SocketException] Connect to elasticsearch:9200 [elasticsearch/172.20.0.2] failed: Connection refused {:url=>http://elasticsearch:9200/, :error_message=>"Elasticsearch Unreachable: [http://elasticsearch:9200/_xpack][Manticore::SocketException] Connect to elasticsearch:9200 [elasticsearch/172.20.0.2] failed: Connection refused", :error_class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError"}
logstash | [2023-01-17T13:59:05,449][WARN ][logstash.licensechecker.licensereader] Attempt to validate Elasticsearch license failed. Sleeping for 0.02 {:fail_count=>1, :exception=>"Elasticsearch Unreachable: [http://elasticsearch:9200/_xpack][Manticore::SocketException] Connect to elasticsearch:9200 [elasticsearch/172.20.0.2] failed: Connection refused"}
logstash | [2023-01-17T13:59:05,477][ERROR][logstash.licensechecker.licensereader] Unable to retrieve license information from license server {:message=>"No Available connections"}
logstash | [2023-01-17T13:59:05,567][ERROR][logstash.monitoring.internalpipelinesource] Failed to fetch X-Pack information from Elasticsearch. This is likely due to failure to reach a live Elasticsearch cluster.
logstash | [2023-01-17T13:59:05,661][INFO ][logstash.config.source.local.configpathloader] No config files found in path {:path=>"/home/ettore/Documenti/Portfolio/ELK/logstash/logstash.conf"}
logstash | [2023-01-17T13:59:05,664][ERROR][logstash.config.sourceloader] No configuration found in the configured sources.
logstash | [2023-01-17T13:59:06,333][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600, :ssl_enabled=>false}
logstash | [2023-01-17T13:59:06,411][INFO ][logstash.runner ] Logstash shut down.
logstash | [2023-01-17T13:59:06,419][FATAL][org.logstash.Logstash ] Logstash stopped processing because of an error: (SystemExit) exit
logstash | org.jruby.exceptions.SystemExit: (SystemExit) exit
logstash | at org.jruby.RubyKernel.exit(org/jruby/RubyKernel.java:790) ~[jruby.jar:?]
logstash | at org.jruby.RubyKernel.exit(org/jruby/RubyKernel.java:753) ~[jruby.jar:?]
logstash | at usr.share.logstash.lib.bootstrap.environment.<main>(/usr/share/logstash/lib/bootstrap/environment.rb:91) ~[?:?]
and this is to show that everything is working as intended with ES (or so it seems):
netstat -an | grep 9200
tcp 0 0 0.0.0.0:9200 0.0.0.0:* LISTEN
tcp6 0 0 :::9200 :::* LISTEN
unix 3 [ ] STREAM CONNECTED 49200
I've looked through everything and this is 100% not a duplicate, because I have tried it all. I really can't figure it out. I hope someone can help.
Thank you for your time.
You should set up logstash.yml.
Create a logstash.yml with the values below:
http.host: "0.0.0.0"
xpack.monitoring.elasticsearch.hosts: [ "http://localhost:9200" ]
In your docker-compose.yml, add another volume to the Logstash container as shown below:
- ./logstash.yml:/usr/share/logstash/config/logstash.yml
Additionally, it's good to run with a restart condition, as sketched below.
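One way the logstash service from the compose file above could look with that extra volume and a restart policy (a sketch, not the exact final file; the paths are assumptions to adjust to your layout, and it drops the command: override that pointed at a host path, so Logstash reads whatever is mounted under /usr/share/logstash/pipeline):
logstash:
  image: logstash:8.6.0
  container_name: logstash
  restart: unless-stopped
  volumes:
    - ./logstash_pipeline/:/usr/share/logstash/pipeline:ro
    - ./logstash.yml:/usr/share/logstash/config/logstash.yml
  depends_on:
    - elasticsearch
  ports:
    - '9600:9600'
  networks:
    - elk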

Connection refused for Flink with docker-compose

I have the following docker-compose file, which is a copy of the docker-compose example from the Apache Flink Docker site. The only difference is that I am using the Mac M1 (arm64) image.
version: "2.2"
services:
jobmanager:
image: arm64v8/flink:alpine
ports:
- "8081:8081"
command: standalone-job --job-classname com.job.ClassName [--job-id <job id>] [--fromSavepoint /path/to/savepoint [--allowNonRestoredState]] [job arguments]
volumes:
- ~/sg_flink/artifacts:/opt/flink/usrlib
networks:
- flink-network
environment:
- |
FLINK_PROPERTIES=
jobmanager.rpc.address: jobmanager
parallelism.default: 2
taskmanager:
image: arm64v8/flink:alpine
depends_on:
- jobmanager
command: taskmanager
scale: 1
volumes:
- ~/sg_flink/artifacts:/opt/flink/usrlib
networks:
- flink-network
environment:
- |
FLINK_PROPERTIES=
jobmanager.rpc.address: jobmanager
taskmanager.numberOfTaskSlots: 2
parallelism.default: 2
networks:
flink-network:
The error is that the connection is refused:
taskmanager_1 | 2021-11-03 17:43:02,724 INFO org.apache.flink.runtime.taskexecutor.TaskExecutor - Could not resolve ResourceManager address akka.tcp://flink#9cf35ea13c8b:6123/user/resourcemanager, retrying in 10000 ms: Could not connect to rpc endpoint under address akka.tcp://flink#9cf35ea13c8b:6123/user/resourcemanager..
taskmanager_1 | 2021-11-03 17:43:12,753 WARN akka.remote.transport.netty.NettyTransport - Remote connection to [null] failed with java.net.ConnectException: Connection refused: 9cf35ea13c8b/172.20.0.3:6123
taskmanager_1 | 2021-11-03 17:43:12,756 WARN akka.remote.ReliableDeliverySupervisor - Association with remote system [akka.tcp://flink#9cf35ea13c8b:6123] has failed, address is now gated for [50] ms. Reason: [Association failed with [akka.tcp://flink#9cf35ea13c8b:6123]] Caused by: [Connection refused: 9cf35ea13c8b/172.20.0.3:6123]
taskmanager_1 | 2021-11-03 17:43:12,758 INFO org.apache.flink.runtime.taskexecutor.TaskExecutor - Could not resolve ResourceManager address akka.tcp://flink#9cf35ea13c8b:6123/user/resourcemanager, retrying in 10000 ms: Could not connect to rpc endpoint under address akka.tcp://flink#9cf35ea13c8b:6123/user/resourcemanager..
The docker ps output looks like this:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
4416f88f60c2 arm64v8/flink:alpine "/docker-entrypoint.…" 44 seconds ago Up 44 seconds 6123/tcp, 8081/tcp sg_flink_taskmanager_1
c211940acf41 arm64v8/flink:alpine "/docker-entrypoint.…" 45 seconds ago Up 44 seconds 6123/tcp, 0.0.0.0:8081->8081/tcp sg_flink_jobmanager_1
I ran into this problem under the same conditions, and I fixed it using docker-compose links (the full taskmanager service is sketched below):
taskmanager:
  links:
    - jobmanager
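For context, this is roughly how the taskmanager service from the compose file above looks with the links entry added (everything except links is copied from the question; treat it as a sketch):
taskmanager:
  image: arm64v8/flink:alpine
  depends_on:
    - jobmanager
  links:
    - jobmanager
  command: taskmanager
  scale: 1
  volumes:
    - ~/sg_flink/artifacts:/opt/flink/usrlib
  networks:
    - flink-network
  environment:
    - |
      FLINK_PROPERTIES=
      jobmanager.rpc.address: jobmanager
      taskmanager.numberOfTaskSlots: 2
      parallelism.default: 2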

Elastic Search connection refused with Docker Compose (connect ECONNREFUSED )

There are multiple services that I have been trying to run (Redis, front-end, back-end and Elasticsearch), and I was not able to connect to the Elasticsearch service. I even tried giving the service a static IP (the networking part is currently commented out in the compose file attached). I tried changing the images and it still did not work.
When I test ES locally using curl localhost:9200/_cat/health (the container port is mapped to the host), it reports that the cluster is green. I can connect to the other services like Redis without issues. As with Redis, I am using the service name, elasticsearch, to connect the back-end service to it. Following is my docker-compose.yml file.
version: '3'
services:
  arc-external:
    image: arc-external
    build:
      context: ./arc-development-branch/arc-external
    ports:
      - '4201:4201'
    # networks:
    #   - vpcbr
  redis:
    image: redis:3.2.11-alpine
    ports:
      - '6379:6379'
    # networks:
    #   - vpcbr
  elasticsearch:
    image: elasticsearch:2
    ports:
      - '9200:9200'
      - '9300:9300'
    environment:
      - node.name=elasticsearch
      - cluster.name=datasearch
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - cluster.initial_master_nodes=elasticsearch
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - ./data/elastic:/usr/share/elasticsearch/data
    # networks:
    #   vpcbr:
    #     ipv4_address: 10.5.0.4
  api-external:
    image: api-external
    build: .
    ports:
      - '3001:3001'
    depends_on:
      - redis
      - elasticsearch
    # networks:
    #   - vpcbr
# networks:
#   vpcbr:
#     driver: bridge
#     ipam:
#       config:
#         - subnet: 10.5.0.0/16
#           gateway: 10.5.0.1
This is the exact error that I am getting when running docker-compose up:
api-external_1 | 2021-03-09 20:41:46.3253 - info: Finished setting up log directories
api-external_1 | 2021-03-09 20:41:46.3514 - info: Connection successful to mongodb # mongodb://10.0.0.44:27017/arc
api-external_1 | 2021-03-09 20:41:46.3764 - info: Connection successful to redis at: host: redis port: 6379
api-external_1 | Elasticsearch ERROR: 2021-03-09T20:41:46Z
api-external_1 | Error: Request error, retrying
api-external_1 | HEAD http://elasticsearch:9200/ => connect ECONNREFUSED 172.24.0.4:9200
api-external_1 | at Log.error (/usr/src/app/api-external/node_modules/elasticsearch/src/lib/log.js:226:56)
api-external_1 | at checkRespForFailure (/usr/src/app/api-external/node_modules/elasticsearch/src/lib/transport.js:259:18)
api-external_1 | at HttpConnector.<anonymous> (/usr/src/app/api-external/node_modules/elasticsearch/src/lib/connectors/http.js:164:7)
api-external_1 | at ClientRequest.wrapper (/usr/src/app/api-external/node_modules/lodash/lodash.js:4935:19)
api-external_1 | at ClientRequest.emit (events.js:198:13)
api-external_1 | at ClientRequest.EventEmitter.emit (domain.js:448:20)
api-external_1 | at Socket.socketErrorListener (_http_client.js:401:9)
api-external_1 | at Socket.emit (events.js:198:13)
api-external_1 | at Socket.EventEmitter.emit (domain.js:448:20)
api-external_1 | at emitErrorNT (internal/streams/destroy.js:91:8)
api-external_1 | at emitErrorAndCloseNT (internal/streams/destroy.js:59:3)
api-external_1 | at process._tickCallback (internal/process/next_tick.js:63:19)
api-external_1 |
api-external_1 | Elasticsearch WARNING: 2021-03-09T20:41:46Z
api-external_1 | Unable to revive connection: http://elasticsearch:9200/
api-external_1 |
api-external_1 | Elasticsearch WARNING: 2021-03-09T20:41:46Z
api-external_1 | No living connections
api-external_1 |
api-external_1 | 2021-03-09 20:41:46.3844 - error: Error: Failed to connect to elasticsearch # elasticsearch:9200
api-external_1 | at exports.esClient.ping (/usr/src/app/api-external/dist/setup/elastic-search.js:33:46)
api-external_1 | at respond (/usr/src/app/api-external/node_modules/elasticsearch/src/lib/transport.js:327:9)
api-external_1 | at sendReqWithConnection (/usr/src/app/api-external/node_modules/elasticsearch/src/lib/transport.js:226:7)
api-external_1 | at next (/usr/src/app/api-external/node_modules/elasticsearch/src/lib/connection_pool.js:214:7)
api-external_1 | at process._tickCallback (internal/process/next_tick.js:61:11)
api-external_1 | 2021-03-09 20:41:46.3854 - error: Error: No Living connections
api-external_1 | at sendReqWithConnection (/usr/src/app/api-external/node_modules/elasticsearch/src/lib/transport.js:226:15)
api-external_1 | at next (/usr/src/app/api-external/node_modules/elasticsearch/src/lib/connection_pool.js:214:7)
api-external_1 | at process._tickCallback (internal/process/next_tick.js:61:11)
api-external_1 | npm ERR! code ELIFEC
Frankly, I have searched a lot and was not able to debug it. Any help would be appreciated.
I was able to figure out the answer. depends_on does not wait for the services to be completely up; api-external starts as soon as redis and elasticsearch start. However, Elasticsearch needs a bit of time to configure everything, so restarting the api-external service will do the trick.
A more permanent solution is to wait until Elasticsearch is completely up before starting the api-external service, either with a wait script or a healthcheck, as sketched below.
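One way to do that, assuming curl is available inside the Elasticsearch image, is a Compose healthcheck on elasticsearch combined with the long form of depends_on on api-external (note that the condition: service_healthy form needs the 2.x compose file format or a recent Compose version; classic v3 files ignored it). A minimal sketch:
  elasticsearch:
    image: elasticsearch:2
    healthcheck:
      test: ["CMD-SHELL", "curl -sf http://localhost:9200/_cat/health || exit 1"]
      interval: 10s
      timeout: 5s
      retries: 12
  api-external:
    image: api-external
    depends_on:
      elasticsearch:
        condition: service_healthy
With this in place, api-external is only started once the cluster answers the health endpoint, so no manual restart is needed.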

Alfresco deployment with docker in a system that has a rabbitmq instance running

I am trying to deploy Alfresco Community Edition with its official docker-compose file. The problem I am facing is that a RabbitMQ instance (with default configs) is running on the host system, and I think ActiveMQ and RabbitMQ interfere with each other, causing Alfresco Content Services (ACS) to get stuck at "Starting 'Messaging' subsystem, ID: [Messaging, default]", even though ActiveMQ itself seems to run properly.
This is my docker-compose.yml (I changed the ActiveMQ ports):
version: "2"
services:
alfresco:
image: alfresco/alfresco-content-repository-community:6.2.0-ga
mem_limit: 1500m
environment:
JAVA_OPTS: "
-Ddb.driver=org.postgresql.Driver
-Ddb.username=alfresco
-Ddb.password=alfresco
-Ddb.url=jdbc:postgresql://postgres:5432/alfresco
-Dsolr.host=solr6
-Dsolr.port=8983
-Dsolr.secureComms=none
-Dsolr.base.url=/solr
-Dindex.subsystem.name=solr6
-Dshare.host=127.0.0.1
-Dshare.port=8080
-Dalfresco.host=localhost
-Dalfresco.port=8080
-Daos.baseUrlOverwrite=http://localhost:8080/alfresco/aos
-Dmessaging.broker.url=\"failover:(nio://activemq:11617)?timeout=3000&jms.useCompression=true\"
-Ddeployment.method=DOCKER_COMPOSE
-Dlocal.transform.service.enabled=true
-DlocalTransform.pdfrenderer.url=http://alfresco-pdf-renderer:8090/
-DlocalTransform.imagemagick.url=http://imagemagick:8090/
-DlocalTransform.libreoffice.url=http://libreoffice:8090/
-DlocalTransform.tika.url=http://tika:8090/
-DlocalTransform.misc.url=http://transform-misc:8090/
-Dlegacy.transform.service.enabled=true
-Dalfresco-pdf-renderer.url=http://alfresco-pdf-renderer:8090/
-Djodconverter.url=http://libreoffice:8090/
-Dimg.url=http://imagemagick:8090/
-Dtika.url=http://tika:8090/
-Dtransform.misc.url=http://transform-misc:8090/
-Dcsrf.filter.enabled=false
-Xms1500m -Xmx1500m
"
alfresco-pdf-renderer:
image: alfresco/alfresco-pdf-renderer:2.1.0
mem_limit: 1g
environment:
JAVA_OPTS: " -Xms256m -Xmx512m"
ports:
- 8090:8090
imagemagick:
image: alfresco/alfresco-imagemagick:2.1.0
mem_limit: 1g
environment:
JAVA_OPTS: " -Xms256m -Xmx512m"
ports:
- 8091:8090
libreoffice:
image: alfresco/alfresco-libreoffice:2.1.0
mem_limit: 1g
environment:
JAVA_OPTS: " -Xms256m -Xmx512m"
ports:
- 8092:8090
tika:
image: alfresco/alfresco-tika:2.1.0
mem_limit: 1g
environment:
JAVA_OPTS: " -Xms256m -Xmx512m"
ports:
- 8093:8090
transform-misc:
image: alfresco/alfresco-transform-misc:2.1.0
mem_limit: 1g
environment:
JAVA_OPTS: " -Xms256m -Xmx512m"
ports:
- 8094:8090
share:
image: alfresco/alfresco-share:6.2.0
mem_limit: 1g
environment:
REPO_HOST: "alfresco"
REPO_PORT: "8080"
JAVA_OPTS: "
-Xms500m
-Xmx500m
-Dalfresco.host=localhost
-Dalfresco.port=8080
-Dalfresco.context=alfresco
-Dalfresco.protocol=http
"
postgres:
image: postgres:11.4
mem_limit: 512m
environment:
- POSTGRES_PASSWORD=alfresco
- POSTGRES_USER=alfresco
- POSTGRES_DB=alfresco
command: postgres -c max_connections=300 -c log_min_messages=LOG
ports:
- 5432:5432
solr6:
image: alfresco/alfresco-search-services:1.4.0
mem_limit: 2g
environment:
#Solr needs to know how to register itself with Alfresco
- SOLR_ALFRESCO_HOST=alfresco
- SOLR_ALFRESCO_PORT=8080
#Alfresco needs to know how to call solr
- SOLR_SOLR_HOST=solr6
- SOLR_SOLR_PORT=8983
#Create the default alfresco and archive cores
- SOLR_CREATE_ALFRESCO_DEFAULTS=alfresco,archive
#HTTP by default
- ALFRESCO_SECURE_COMMS=none
- "SOLR_JAVA_MEM=-Xms2g -Xmx2g"
ports:
- 8083:8983 #Browser port
activemq:
image: alfresco/alfresco-activemq:5.15.8
mem_limit: 1g
ports:
- 1162:8161 # Web Console
- 1673:5672 # AMQP
- 11617:61616 # OpenWire
- 11614:61613 # STOMP
proxy:
image: alfresco/acs-community-ngnix:1.0.0
mem_limit: 128m
depends_on:
- alfresco
ports:
- 8080:8080
links:
- alfresco
- share
These are the ActiveMQ logs:
activemq_1 | INFO: Loading '/opt/activemq/bin/env'
activemq_1 | INFO: Using java '/usr/java/default/bin/java'
activemq_1 | INFO: Starting in foreground, this is just for debugging purposes (stop process by pressing CTRL+C)
activemq_1 | INFO: Creating pidfile /opt/activemq/data/activemq.pid
activemq_1 | Extensions classpath:
activemq_1 | [/opt/activemq/lib,/opt/activemq/lib/camel,/opt/activemq/lib/optional,/opt/activemq/lib/web,/opt/activemq/lib/extra]
activemq_1 | ACTIVEMQ_HOME: /opt/activemq
activemq_1 | ACTIVEMQ_BASE: /opt/activemq
activemq_1 | ACTIVEMQ_CONF: /opt/activemq/conf
activemq_1 | ACTIVEMQ_DATA: /opt/activemq/data
activemq_1 | Loading message broker from: xbean:activemq.xml
activemq_1 | INFO | Refreshing org.apache.activemq.xbean.XBeanBrokerFactory$1#73ad2d6: startup date [Mon Apr 27 09:57:23 UTC 2020]; root of context hierarchy
activemq_1 | INFO | Using Persistence Adapter: KahaDBPersistenceAdapter[/opt/activemq/data/kahadb]
activemq_1 | INFO | PListStore:[/opt/activemq/data/localhost/tmp_storage] started
activemq_1 | INFO | Apache ActiveMQ 5.15.8 (localhost, ID:7f445cd32cc5-39441-1587981447728-0:1) is starting
activemq_1 | INFO | Listening for connections at: tcp://7f445cd32cc5:61616?maximumConnections=1000&wireFormat.maxFrameSize=104857600
activemq_1 | INFO | Connector openwire started
activemq_1 | INFO | Listening for connections at: amqp://7f445cd32cc5:5672?maximumConnections=1000&wireFormat.maxFrameSize=104857600
activemq_1 | INFO | Connector amqp started
activemq_1 | INFO | Listening for connections at: stomp://7f445cd32cc5:61613?maximumConnections=1000&wireFormat.maxFrameSize=104857600
activemq_1 | INFO | Connector stomp started
activemq_1 | INFO | Listening for connections at: mqtt://7f445cd32cc5:1883?maximumConnections=1000&wireFormat.maxFrameSize=104857600
activemq_1 | INFO | Connector mqtt started
activemq_1 | INFO | Starting Jetty server
activemq_1 | INFO | Creating Jetty connector
activemq_1 | WARN | ServletContext#o.e.j.s.ServletContextHandler#8e50104{/,null,STARTING} has uncovered http methods for path: /
activemq_1 | INFO | Listening for connections at ws://7f445cd32cc5:61614?maximumConnections=1000&wireFormat.maxFrameSize=104857600
activemq_1 | INFO | Connector ws started
activemq_1 | INFO | Apache ActiveMQ 5.15.8 (localhost, ID:7f445cd32cc5-39441-1587981447728-0:1) started
activemq_1 | INFO | For help or more information please see: http://activemq.apache.org
activemq_1 | WARN | Store limit is 102400 mb (current store usage is 0 mb). The data directory: /opt/activemq/data/kahadb only has 20358 mb of usable space. - resetting to maximum available disk space: 20358 mb
activemq_1 | WARN | Temporary Store limit is 51200 mb (current store usage is 0 mb). The data directory: /opt/activemq/data only has 20358 mb of usable space. - resetting to maximum available disk space: 20358 mb
and this is the last Alfresco log line; it stays stuck there forever:
alfresco_1 | 2020-04-27 09:59:50,116 INFO [management.subsystems.ChildApplicationContextFactory] [localhost-startStop-1] Starting 'Messaging' subsystem, ID: [Messaging, default]

Docker/zookeeper Will not attempt to authenticate using SASL

Good Day,
I wanted to test the config store, which is built using Spring Boot. The instructions given to me were to run the project using the docker-compose.yml files. I'm new to this; I tried to run them, but while executing those commands in the iMac terminal I'm facing the following exception.
platform-config-store | 2018-03-05 11:55:12.167 INFO 1 --- [ main] org.apache.zookeeper.ZooKeeper : Initiating client connection, connectString=localhost:2181 sessionTimeout=60000 watcher=org.apache.curator.ConnectionState#22bbbe6
platform-config-store | 2018-03-05 11:55:12.286 INFO 1 --- [localhost:2181)] org.apache.zookeeper.ClientCnxn : Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
platform-config-store | 2018-03-05 11:55:12.314 WARN 1 --- [localhost:2181)] org.apache.zookeeper.ClientCnxn : Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
platform-config-store | java.net.ConnectException: Connection refused
platform-config-store | at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) ~[na:1.8.0_144]
platform-config-store | at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717) ~[na:1.8.0_144]
platform-config-store | at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361) ~[zookeeper-3.4.6.jar!/:3.4.6-1569965]
platform-config-store | at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081) ~[zookeeper-3.4.6.jar!/:3.4.6-1569965]
platform-config-store |
platform-config-store | 2018-03-05 11:55:13.422 INFO 1 --- [localhost:2181)] org.apache.zookeeper.ClientCnxn : Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
platform-config-store | 2018-03-05 11:55:13.424 WARN 1 --- [localhost:2181)] org.apache.zookeeper.ClientCnxn : Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
I've googled this problem, and some posts mentioned that the error occurs because the ZooKeeper server is not reachable by the client. So I configured a local ZooKeeper instance on my machine and changed the docker-compose.yml file to use it instead of pulling the image from the registry. It didn't work and I faced the same issue.
Some posts also said this is related to the firewall. I've verified that the firewall is turned off.
Following is the docker-compose file I'm executing.
docker-compose.yml
version: "3.0"
services:
zookeeper:
container_name: zookeeper
image: docker.*****.net/zookeeper
#image: zookeeper // tired to connect with local zookeeper instance
ports:
- 2181:2181
postgres:
container_name: postgres
image: postgres
ports:
- 5432:5432
environment:
- POSTGRES_PASSWORD=p3rmission
redis:
container_name: redis
image: redis
ports:
- 6379:6379
Could anyone please guide me on what I'm missing here? Help will be appreciated. Thanks.
