ElasticSearch Logstash not connecting "Connection refused" - Docker
I need help! (who would have thought, right? lol)
I have a job interview in a few days, and it would mean the world to me to be well prepared for it and have some working examples.
I am trying to set up an ELK pipeline to stream data from Kafka, through Logstash, into Elasticsearch, and finally read it from Kibana. The usual.
I am making use of containers, but the Logstash - Elasticsearch duo is giving me an aneurysm.
Everything else works perfectly fine. I've checked the Kafka logs and that side is working just fine. Kibana is connected to Elasticsearch just fine as well. But Logstash and Elasticsearch really don't want to talk to each other.
Here is the setup
docker-compose.yml
version: '3.6'
services:
  elasticsearch:
    image: elasticsearch:8.6.0
    container_name: elasticsearch
    #restart: always
    volumes:
      - elastic_data:/usr/share/elasticsearch/data/
    environment:
      cluster.name: elf-kafka-cluster
      ES_JAVA_OPTS: "-Xmx256m -Xms256m"
      discovery.type: single-node
      xpack.security.enabled: false
    ports:
      - '9200:9200'
      - '9300:9300'
    networks:
      - elk

  kibana:
    image: kibana:8.6.0
    container_name: kibana
    #restart: always
    ports:
      - '5601:5601'
    environment:
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
    depends_on:
      - elasticsearch
    networks:
      - elk

  logstash:
    image: logstash:8.6.0
    container_name: logstash
    #restart: always
    volumes:
      - type: bind
        source: ./logstash_pipeline/
        target: /usr/share/logstash/pipeline
        read_only: true
    command: logstash -f /home/ettore/Documenti/Portfolio/ELK/logstash/logstash.conf
    depends_on:
      - elasticsearch
    ports:
      - '9600:9600'
    environment:
      xpack.monitoring.enabled: true
      # LS_JAVA_OPTS: "-Xmx256m -Xms256m"
    links:
      - elasticsearch
    networks:
      - elk

volumes:
  elastic_data: {}

networks:
  elk:
    driver: bridge
logstash.conf
input {
  kafka {
    bootstrap_servers => "localhost:9092"
    topics => ["topic"]
  }
}
output {
  elasitcsearch {
    hosts => ["http://localhost:9200"]
    index => "topic"
    workers => 1
  }
}
These are the Logstash error logs when I compose up:
logstash | [2023-01-17T13:59:02,680][WARN ][deprecation.logstash.monitoringextension.pipelineregisterhook] Internal collectors option for Logstash monitoring is deprecated and targeted for removal in the next major version.
logstash | Please configure Metricbeat to monitor Logstash. Documentation can be found at:
logstash | https://www.elastic.co/guide/en/logstash/current/monitoring-with-metricbeat.html
logstash | [2023-01-17T13:59:04,711][INFO ][logstash.licensechecker.licensereader] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elasticsearch:9200/]}}
logstash | [2023-01-17T13:59:05,373][INFO ][logstash.licensechecker.licensereader] Failed to perform request {:message=>"Connect to elasticsearch:9200 [elasticsearch/172.20.0.2] failed: Connection refused", :exception=>Manticore::SocketException, :cause=>#<Java::OrgApacheHttpConn::HttpHostConnectException: Connect to elasticsearch:9200 [elasticsearch/172.20.0.2] failed: Connection refused>}
logstash | [2023-01-17T13:59:05,379][WARN ][logstash.licensechecker.licensereader] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"http://elasticsearch:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :message=>"Elasticsearch Unreachable: [http://elasticsearch:9200/][Manticore::SocketException] Connect to elasticsearch:9200 [elasticsearch/172.20.0.2] failed: Connection refused"}
logstash | [2023-01-17T13:59:05,436][INFO ][logstash.licensechecker.licensereader] Failed to perform request {:message=>"Connect to elasticsearch:9200 [elasticsearch/172.20.0.2] failed: Connection refused", :exception=>Manticore::SocketException, :cause=>#<Java::OrgApacheHttpConn::HttpHostConnectException: Connect to elasticsearch:9200 [elasticsearch/172.20.0.2] failed: Connection refused>}
logstash | [2023-01-17T13:59:05,444][WARN ][logstash.licensechecker.licensereader] Marking url as dead. Last error: [LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError] Elasticsearch Unreachable: [http://elasticsearch:9200/_xpack][Manticore::SocketException] Connect to elasticsearch:9200 [elasticsearch/172.20.0.2] failed: Connection refused {:url=>http://elasticsearch:9200/, :error_message=>"Elasticsearch Unreachable: [http://elasticsearch:9200/_xpack][Manticore::SocketException] Connect to elasticsearch:9200 [elasticsearch/172.20.0.2] failed: Connection refused", :error_class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError"}
logstash | [2023-01-17T13:59:05,449][WARN ][logstash.licensechecker.licensereader] Attempt to validate Elasticsearch license failed. Sleeping for 0.02 {:fail_count=>1, :exception=>"Elasticsearch Unreachable: [http://elasticsearch:9200/_xpack][Manticore::SocketException] Connect to elasticsearch:9200 [elasticsearch/172.20.0.2] failed: Connection refused"}
logstash | [2023-01-17T13:59:05,477][ERROR][logstash.licensechecker.licensereader] Unable to retrieve license information from license server {:message=>"No Available connections"}
logstash | [2023-01-17T13:59:05,567][ERROR][logstash.monitoring.internalpipelinesource] Failed to fetch X-Pack information from Elasticsearch. This is likely due to failure to reach a live Elasticsearch cluster.
logstash | [2023-01-17T13:59:05,661][INFO ][logstash.config.source.local.configpathloader] No config files found in path {:path=>"/home/ettore/Documenti/Portfolio/ELK/logstash/logstash.conf"}
logstash | [2023-01-17T13:59:05,664][ERROR][logstash.config.sourceloader] No configuration found in the configured sources.
logstash | [2023-01-17T13:59:06,333][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600, :ssl_enabled=>false}
logstash | [2023-01-17T13:59:06,411][INFO ][logstash.runner ] Logstash shut down.
logstash | [2023-01-17T13:59:06,419][FATAL][org.logstash.Logstash ] Logstash stopped processing because of an error: (SystemExit) exit
logstash | org.jruby.exceptions.SystemExit: (SystemExit) exit
logstash | at org.jruby.RubyKernel.exit(org/jruby/RubyKernel.java:790) ~[jruby.jar:?]
logstash | at org.jruby.RubyKernel.exit(org/jruby/RubyKernel.java:753) ~[jruby.jar:?]
logstash | at usr.share.logstash.lib.bootstrap.environment.<main>(/usr/share/logstash/lib/bootstrap/environment.rb:91) ~[?:?]
And this is to show that everything is working as intended with Elasticsearch (or so it seems):
netstat -an | grep 9200
tcp 0 0 0.0.0.0:9200 0.0.0.0:* LISTEN
tcp6 0 0 :::9200 :::* LISTEN
unix 3 [ ] STREAM CONNECTED 49200
I've looked through everything, and this is 100% not a duplicate because I have tried it all. I really can't figure it out. I hope someone can help.
Thank you for your time.
You should set up logstash.yml.
Create a logstash.yml with the values below:
http.host: "0.0.0.0"
xpack.monitoring.elasticsearch.hosts: [ "http://localhost:9200" ]
In your docker-compose.yml, add another volume in Logstash container as shown below:
./logstash.yml:/usr/share/logstash/config/logstash.yml
Additionally, it's good to run with a restart condition (restart: always).
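Putting both suggestions together, here is a sketch of how the logstash service block from the question's docker-compose.yml could look. This is only a sketch, not a verified setup: the ./logstash.yml path is assumed to sit next to the compose file, and the command: override is dropped because the "No config files found" log line above suggests it points at a host path that does not exist inside the container, while the pipeline directory is already bind-mounted.
  logstash:
    image: logstash:8.6.0
    container_name: logstash
    restart: always
    volumes:
      # pipeline files, as in the original bind mount
      - ./logstash_pipeline/:/usr/share/logstash/pipeline:ro
      # the new logstash.yml suggested above (path relative to the compose file is an assumption)
      - ./logstash.yml:/usr/share/logstash/config/logstash.yml
    depends_on:
      - elasticsearch
    ports:
      - '9600:9600'
    networks:
      - elk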
Related
Two rabbitmq instances on one server with docker compose - how to change the default port
I would like to run two instances of rabbitmq on one server. I create everything with docker-compose. The thing is how I can change the default node and management ports. I have tried setting it via ports but it didn't help. When I was facing the same scenario with mongo, I used command: mongod --port CUSTOM_PORT. What would be the analogous command here for rabbitmq? Here is my config for the second instance of rabbitmq:
version: '2'
services:
  rabbitmq:
    image: rabbitmq:3-management-alpine
    container_name: 'rabbitmq_test'
    ports:
      - 5673:5673
      - 15673:15673
    volumes:
      - ./rabbitmq/data/:/var/lib/rabbitmq/
      - ./rabbitmq/log/:/var/log/rabbitmq
    networks:
      - rabbitmq_go_net_test
    environment:
      RABBITMQ_DEFAULT_USER: 'test'
      RABBITMQ_DEFAULT_PASS: 'test'
      HOST_PORT_RABBIT: 5673
      HOST_PORT_RABBIT_MGMT: 15673
networks:
  rabbitmq_go_net_test:
    driver: bridge
And the outcome is below:
Management plugin: HTTP (non-TLS) listener started on port 15672
rabbitmq_test | 2021-03-18 11:32:42.553 [info] <0.738.0> Ready to start client connection listeners
rabbitmq_test | 2021-03-18 11:32:42.553 [info] <0.44.0> Application rabbitmq_prometheus started on node rabbit@fb24038613f3
rabbitmq_test | 2021-03-18 11:32:42.557 [info] <0.1035.0> started TCP listener on [::]:5672
We can see that ports 5672 and 15672 are still exposed instead of 5673 and 15673.
EDIT
ports:
  - 5673:5672
  - 15673:15672
I have tried the above conf as well, yet with no success:
rabbitmq_test | 2021-03-18 14:08:56.167 [info] <0.797.0> Management plugin: HTTP (non-TLS) listener started on port 15672
rabbitmq_test | 2021-03-18 14:08:56.167 [info] <0.903.0> Statistics database started.
rabbitmq_test | 2021-03-18 14:08:56.167 [info] <0.902.0> Starting worker pool 'management_worker_pool' with 3 processes in it
rabbitmq_test | 2021-03-18 14:08:56.168 [info] <0.44.0> Application rabbitmq_management started on node rabbit@9358e6f4d2a5
rabbitmq_test | 2021-03-18 14:08:56.208 [info] <0.44.0> Application prometheus started on node rabbit@9358e6f4d2a5
rabbitmq_test | 2021-03-18 14:08:56.213 [info] <0.916.0> Prometheus metrics: HTTP (non-TLS) listener started on port 15692
rabbitmq_test | 2021-03-18 14:08:56.213 [info] <0.44.0> Application rabbitmq_prometheus started on node rabbit@9358e6f4d2a5
rabbitmq_test | 2021-03-18 14:08:56.213 [info] <0.738.0> Ready to start client connection listeners
rabbitmq_test | 2021-03-18 14:08:56.216 [info] <0.1035.0> started TCP listener on [::]:5672
I have found the solution. I provided the configuration file to the rabbitmq container:
loopback_users.guest = false
listeners.tcp.default = 5673
default_pass = test
default_user = test
management.tcp.port = 15673
And a working docker-compose file:
version: '2'
services:
  rabbitmq:
    image: rabbitmq:3-management-alpine
    container_name: 'rabbitmq_test'
    ports:
      - 5673:5673
      - 15673:15673
    volumes:
      - ./rabbitmq/data/:/var/lib/rabbitmq/
      - ./rabbitmq/log/:/var/log/rabbitmq
      - ./conf/myrabbit.conf:/etc/rabbitmq/rabbitmq.conf
    networks:
      - rabbitmq_go_net_test
networks:
  rabbitmq_go_net_test:
    driver: bridge
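After recreating the container with that mounted configuration, the startup log should report the listener on the new port, i.e. something along the lines of (a hypothetical line, following the format of the logs quoted in the question):
rabbitmq_test | ... [info] <0.1035.0> started TCP listener on [::]:5673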
A working example with rabbitmq:3.9.13-management-alpine.
docker/rabbitmq/rabbitmq.conf:
loopback_users.guest = false
listeners.tcp.default = 5673
default_pass = guest
default_user = guest
default_vhost = /
docker/rabbitmq/Dockerfile:
FROM rabbitmq:3.9.13-management-alpine
COPY --chown=rabbitmq:rabbitmq rabbitmq.conf /etc/rabbitmq/rabbitmq.conf
EXPOSE 4369 5671 5672 5673 15691 15692 25672 25673
docker-compose.yml:
...
  rabbitmq:
    #image: "rabbitmq:3-management-alpine"
    build: './docker/rabbitmq/'
    container_name: my-rabbitmq
    environment:
      RABBITMQ_DEFAULT_VHOST: /
    ports:
      - 5673:5672
      - 15673:15672
    networks:
      - default
...
Consul agent. Check socket connection failed: error="dial tcp 172.19.0.6:50044: connect: connection refused"
I am having trouble with microservice health checks in my consul docker setup, which I believe is a symptom of failure in service discovery, as I only have one server in my registry. Below is the consul list of members from inside the docker container.
/ # consul members
Node          Address          Status  Type    Build  Protocol  DC   Segment
7b1edb14a647  172.19.0.6:8301  alive   server  1.7.4  2         dc1  <all>
/ #
Consul container logs repeat the same error below for all the microservices:
consul | 2020-06-16T12:19:11.087Z [WARN] agent: Check socket connection failed: check=service:ffa44b66c4869601c04abdbea6dc5be5 error="dial tcp 172.19.0.6:50044: connect: connection refused"
I am using docker-compose v.3.2 to create a network for containers. This is the consul service definition:
consul:
  container_name: consul
  ports:
    - '8400:8400'
    - '8500:8500'
    - '8600:53/udp'
  image: consul
  command: ['agent', '-server', '-bootstrap', '-ui', '-client', '0.0.0.0']
Microservice definition:
service-notification:
  build:
    context: .
    dockerfile: apps/service-notification/Dockerfile
    args:
      NODE_ENV: development
  depends_on:
    - consul
  image: 'service-notification:latest'
  restart: always
  environment:
    - CONSUL_HOST=consul
  ports:
    - '50044:50044'
I am using the CONSUL_HOST env variable to pass in the correct host url. Consul config for the microservice:
consul:
  host: ${{CONSUL_HOST}}
  port: 8500
service:
  discoveryHost: ${{CONSUL_HOST}}
  healthCheck:
    timeout: 1s
    interval: 10s
    tcp: ${{ service.discoveryHost }}:${{ service.port }}
    maxRetry: 5
    retryInterval: 5000
  tags: ["v1.0.0", "microservice"]
  name: io.ultimatebackend.srv.notification
  port: 50044
My conclusion so far is that the consul server container fails to reach the agents somehow. But I don't know why, and I feel like I am missing some obvious piece of the consul structure. Please advise.
I was configuring my service incorrectly. The discoveryHost should be the IP and port of the micro-service inside the docker network.
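For reference, a sketch of what that change could look like in the microservice's consul config from the question (assumption on my part: the compose service name service-notification resolves inside the shared docker network, so it can act as the discovery host):
service:
  # point discoveryHost at the microservice itself, not at the consul host,
  # so consul's TCP health check dials service-notification:50044
  discoveryHost: service-notification
  port: 50044
  healthCheck:
    timeout: 1s
    interval: 10s
    tcp: ${{ service.discoveryHost }}:${{ service.port }}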
rsyslog not connecting to elasticsearch in docker
I am trying to capture syslog messages sent over the network using rsyslog, and then have rsyslog capture, transform and send these messages to elasticsearch. I found a nice article on the configuration on https://www.reddit.com/r/devops/comments/9g1nts/rsyslog_elasticsearch_logging/ Problem is that rsyslog keeps popping up an error at startup that it cannot connect to Elasticsearch on the same machine on port 9200. Error I get is Failed to connect to localhost port 9200: Connection refused 2020-03-20T12:57:51.610444+00:00 53fd9e2560d9 rsyslogd: [origin software="rsyslogd" swVersion="8.36.0" x-pid="1" x-info="http://www.rsyslog.com"] start rsyslogd: omelasticsearch: we are suspending ourselfs due to server failure 7: Failed to connect to localhost port 9200: Connection refused [v8.36.0 try http://www.rsyslog.com/e/2007 ] Anyone can help on this? Everything is running in docker on a single machine. I use below docker compose file to start the stack. version: "3" services: elasticsearch: image: docker.elastic.co/elasticsearch/elasticsearch:7.6.1 environment: - discovery.type=single-node - xpack.security.enabled=false ports: - 9200:9200 networks: - logging-network kibana: image: docker.elastic.co/kibana/kibana:7.6.1 depends_on: - logstash ports: - 5601:5601 networks: - logging-network rsyslog: image: rsyslog/syslog_appliance_alpine:8.36.0-3.7 environment: - TZ=UTC - xpack.security.enabled=false ports: - 514:514/tcp - 514:514/udp volumes: - ./rsyslog.conf:/etc/rsyslog.conf:ro - rsyslog-work:/work - rsyslog-logs:/logs volumes: rsyslog-work: rsyslog-logs: networks: logging-network: driver: bridge rsyslog.conf file below: global(processInternalMessages="on") #module(load="imtcp" StreamDriver.AuthMode="anon" StreamDriver.Mode="1") module(load="impstats") # config.enabled=`echo $ENABLE_STATISTICS`) module(load="imrelp") module(load="imptcp") module(load="imudp" TimeRequery="500") module(load="omstdout") module(load="omelasticsearch") module(load="mmjsonparse") module(load="mmutf8fix") input(type="imptcp" port="514") input(type="imudp" port="514") input(type="imrelp" port="1601") # includes done explicitely include(file="/etc/rsyslog.conf.d/log_to_logsene.conf" config.enabled=`echo $ENABLE_LOGSENE`) include(file="/etc/rsyslog.conf.d/log_to_files.conf" config.enabled=`echo $ENABLE_LOGFILES`) #try to parse a structured log action(type="mmjsonparse") # this is for index names to be like: rsyslog-YYYY.MM.DD template(name="rsyslog-index" type="string" string="rsyslog-%$YEAR%.%$MONTH%.%$DAY%") # this is for formatting our syslog in JSON with #timestamp template(name="json-syslog" type="list") { constant(value="{") constant(value="\"#timestamp\":\"") property(name="timereported" dateFormat="rfc3339") constant(value="\",\"host\":\"") property(name="hostname") constant(value="\",\"severity\":\"") property(name="syslogseverity-text") constant(value="\",\"facility\":\"") property(name="syslogfacility-text") constant(value="\",\"program\":\"") property(name="programname") constant(value="\",\"tag\":\"") property(name="syslogtag" format="json") constant(value="\",") property(name="$!all-json" position.from="2") # closing brace is in all-json } # this is where we actually send the logs to Elasticsearch (localhost:9200 by default) action(type="omelasticsearch" template="json-syslog" searchIndex="rsyslog-index" dynSearchIndex="on") #################### default ruleset begins #################### # we emit our own messages to docker console: syslog.* :omstdout: include(file="/config/droprules.conf" mode="optional") 
# this permits the user to easily drop unwanted messages action(name="main_utf8fix" type="mmutf8fix" replacementChar="?") include(text=`echo $CNF_CALL_LOG_TO_LOGFILES`) include(text=`echo $CNF_CALL_LOG_TO_LOGSENE`)
First of all, you need to run all the containers on the same docker network, which in this case they are not. Second, after running the containers on the same network, log in to the rsyslog container and check whether port 9200 on Elasticsearch is reachable.
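A sketch of the network part in the compose file from the question (assumption: the rsyslog service simply needs to join the existing logging-network, like elasticsearch and kibana already do):
  rsyslog:
    image: rsyslog/syslog_appliance_alpine:8.36.0-3.7
    # ...ports, volumes and environment unchanged from the question...
    networks:
      - logging-network   # same network as elasticsearch, so the hostname "elasticsearch" resolves
With a shared network, the omelasticsearch action would also need to target the elasticsearch service name instead of its localhost default, via the plugin's server parameter (e.g. server="elasticsearch"), if I am reading the rsyslog docs correctly.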
unable to run kibana and logstash with elasticsearch
Elastic serach is running fine on 9201 port. But unable to run kibana and logstash with docker-compose. For logstash it throws the error: Attempted to resurrect connection to dead ES instance, but got an error. For kibana it throw warnings: "warning","elasticsearch","admin"],"pid":1,"message":"No living connections" Below is the docker-compose.yml file: version: '2' services: # Service 1 : elasticsearch elasticsearch-5-6: image: docker.elastic.co/elasticsearch/elasticsearch:5.6.3 container_name: elasticsearch-5-6 ports: - "9201:9200" volumes: - /etc/elasticsearch/elasticsearch-5-6.yml:/usr/share/elasticsearch/config/elasticsearch.yml - /var/elasticsearch/data/immunedata-5-6/:/usr/share/elasticsearch/data/ #- /etc/elasticsearch/logging.yml:/usr/share/elasticsearch/config/logging.yml #- /var/log/elasticsearch/:/usr/share/elasticsearch/logs/ environment: - cluster.name=docker-cluster-elasticsearch-5-6 #- bootstrap.memory_lock=true - "ES_JAVA_OPTS: -Xmx2048m -Xms2048m" # Disabling the xpack security as it costs after one month of free trail. - xpack.security.enabled=false # Service 2 : logstash logstash-5-6: image: docker.elastic.co/logstash/logstash:5.6.3 container_name: logstash-5-6 ports: #- "5044:5044" - "5001:5001" volumes: - /etc/logstash/logstash.yml:/usr/share/logstash/config/logstash.yml - /etc/logstash/pipeline:/usr/share/logstash/pipeline #- /etc/logstash/logstash.yml:/usr/share/logstash/config/logstash.yml #- /var/logstash/pipeline:/usr/share/logstash/pipeline environment: - "ES_JAVA_OPTS: -Xmx2048m -Xms2048m" depends_on: - elasticsearch-5-6 # Service 3 : kibana kibana-5-6: image: docker.elastic.co/kibana/kibana:5.6.3 container_name: kibana-5-6 ports: - "5601:5601" volumes: - /etc/kibana/kibana.yml:/usr/share/kibana/config/kibana.yml #- /var/kibana/immunedata-5-6/:/usr/share/kibana/data/ environment: - xpack.security.enabled=false - xpack.graph.enabled = false - xpack.ml.enabled = false - xpack.monitoring.enabled = false - xpack.watcher.enabled = false - xpack.reporting.enabled = false depends_on: - elasticsearch-5-6 # Service 4 : elasticseach-head elasticsearch-head: image: mobz/elasticsearch-head:5 container_name: elasticsearch-head # will not wait for elasticsearch to be ready. 
ports: - "9100:9100" elasticserach.yml cluster.name: immunedata-cluster-5.6 node.name: "immunedata-cluster-5-6.node-1" # Elasticsearch in docker access different data directory, defined mapping directory in docker-compose.yml #path.data: /var/elasticsearch/data/immunedata-5-6/ path.data: /usr/share/elasticsearch/data/ #path.data: /var/elasticsearch/data # NOTE : Since elasticsearch 5.x index level settings can NOT be set on the nodes configuration like the elasticsearch.yaml #index.number_of_shards: 1 #index.number_of_replicas: 0 # Allow all host access network.bind_host: 0.0.0.0 http.port: 9200 # To enable cross-origin resource sharing (Accessing on browser) http.cors.enabled: true http.cors.allow-origin : "*" logstash.yml file http.host: "0.0.0.0" path.config: /usr/share/logstash/pipeline #xpack.monitoring.elasticsearch.url: http://localhost:9201 ##xpack.monitoring.elasticsearch.url: http://elasticsearch:9201 #xpack.monitoring.elasticsearch.username: logstash_system #xpack.monitoring.elasticsearch.password: changeme xpack.monitoring.enabled: false kibana.yml file server.name: kibana server.host: "0" elasticsearch.url: http://192.168.56.10:9201 xpack.monitoring.ui.container.elasticsearch.enabled: false #elasticsearch.url: http://elasticsearch:9201 xpack.security.enabled: false ## Above I tired this - not working #elasticsearch.username: elastic #elasticsearch.password: changeme #xpack.monitoring.ui.container.elasticsearch.enabled: false #xpack.monitoring.ui.container.elasticsearch.enabled: true # Extra: ssl.verificationMode: false Logs: [elasticsearch-5.6.3.jar:5.6.3] elasticsearch-5-6 | at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_141] elasticsearch-5-6 | at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_141] elasticsearch-5-6 | at java.lang.Thread.run(Thread.java:748) [?:1.8.0_141] elasticsearch-5-6 | [2017-11-26T06:07:57,084][WARN ][o.e.m.j.JvmGcMonitorService] [immunedata-cluster-5-6.node-1] [gc][young][14][6] duration [18.2s], collections [1]/[18.5s], total [18.2s]/[23.5s], memory [178.2mb]->[79.5mb]/[1.9gb], all_pools {[young] [132.1mb]->[964kb]/[133.1mb]}{[survivor] [16.6mb]->[12.5mb]/[16.6mb]}{[old] [29.4mb]->[66.5mb]/[1.8gb]} elasticsearch-5-6 | [2017-11-26T06:07:57,085][WARN ][o.e.m.j.JvmGcMonitorService] [immunedata-cluster-5-6.node-1] [gc][14] overhead, spent [18.2s] collecting in the last [18.5s] elasticsearch-5-6 | [2017-11-26T06:07:57,298][ERROR][o.e.x.m.c.i.IndexRecoveryCollector] [immunedata-cluster-5-6.node-1] collector [index-recovery] failed to collect data elasticsearch-5-6 | org.elasticsearch.cluster.block.ClusterBlockException: blocked by: [SERVICE_UNAVAILABLE/1/state not recovered / initialized]; elasticsearch-5-6 | at org.elasticsearch.cluster.block.ClusterBlocks.globalBlockedException(ClusterBlocks.java:165) ~[elasticsearch-5.6.3.jar:5.6.3] elasticsearch-5-6 | at org.elasticsearch.action.admin.indices.recovery.TransportRecoveryAction.checkGlobalBlock(TransportRecoveryAction.java:114) ~[elasticsearch-5.6.3.jar:5.6.3] elasticsearch-5-6 | at org.elasticsearch.action.admin.indices.recovery.TransportRecoveryAction.checkGlobalBlock(TransportRecoveryAction.java:52) ~[elasticsearch-5.6.3.jar:5.6.3] elasticsearch-5-6 | at org.elasticsearch.action.support.broadcast.node.TransportBroadcastByNodeAction$AsyncAction.<init>(TransportBroadcastByNodeAction.java:256) ~[elasticsearch-5.6.3.jar:5.6.3] elasticsearch-5-6 | at 
org.elasticsearch.action.support.broadcast.node.TransportBroadcastByNodeAction.doExecute(TransportBroadcastByNodeAction.java:234) ~[elasticsearch-5.6.3.jar:5.6.3] elasticsearch-5-6 | at org.elasticsearch.action.support.broadcast.node.TransportBroadcastByNodeAction.doExecute(TransportBroadcastByNodeAction.java:79) ~[elasticsearch-5.6.3.jar:5.6.3] elasticsearch-5-6 | at org.elasticsearch.action.support.TransportAction$RequestFilterChain.proceed(TransportAction.java:170) ~[elasticsearch-5.6.3.jar:5.6.3] elasticsearch-5-6 | at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:142) ~[elasticsearch-5.6.3.jar:5.6.3] elasticsearch-5-6 | at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:84) ~[elasticsearch-5.6.3.jar:5.6.3] elasticsearch-5-6 | at org.elasticsearch.client.node.NodeClient.executeLocally(NodeClient.java:83) ~[elasticsearch-5.6.3.jar:5.6.3] elasticsearch-5-6 | at elasticsearch-5-6 | [2017-11-26T06:08:45,238][WARN ][o.e.x.w.e.ExecutionService] [immunedata-cluster-5-6.node-1] Failed to execute watch [XYNCje-TQzKm9OLdiH60gQ_elasticsearch_cluster_status_60e3c208-acca-4462-ba47-0711279d8f5e-2017-11-26T06:08:35.573Z] elasticsearch-5-6 | [2017-11-26T06:08:54,886][WARN ][o.e.m.j.JvmGcMonitorService] [immunedata-cluster-5-6.node-1] [gc][young][63][9] duration [3.6s], collections [1]/[4.6s], total [3.6s]/[30.2s], memory [226.9mb]->[103.5mb]/[1.9gb], all_pools {[young] [127.5mb]->[1mb]/[133.1mb]}{[survivor] [16.6mb]->[11.3mb]/[16.6mb]}{[old] [82.7mb]->[91.2mb]/[1.8gb]} elasticsearch-5-6 | [2017-11-26T06:08:54,886][WARN ][o.e.m.j.JvmGcMonitorService] [immunedata-cluster-5-6.node-1] [gc][63] overhead, spent [3.6s] collecting in the last [4.6s] logstash-5-6 | Sending Logstash's logs to /usr/share/logstash/logs which is now configured via log4j2.properties elasticsearch-5-6 | [2017-11-26T06:08:55,988][INFO ][o.e.c.r.a.AllocationService] [immunedata-cluster-5-6.node-1] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[.watcher-history-6-2017.11.20][0], [.monitoring-es-6-2017.11.20][0]] ...]). 
logstash-5-6 | [2017-11-26T06:08:56,786][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"fb_apache", :directory=>"/usr/share/logstash/modules/fb_apache/configuration"} logstash-5-6 | [2017-11-26T06:08:56,891][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"netflow", :directory=>"/usr/share/logstash/modules/netflow/configuration"} logstash-5-6 | [2017-11-26T06:08:57,558][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"arcsight", :directory=>"/usr/share/logstash/vendor/bundle/jruby/1.9/gems/x-pack-5.6.3-java/modules/arcsight/configuration"} logstash-5-6 | [2017-11-26T06:09:04,121][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elastic:xxxxxx#elasticsearch-5-6:9201/]}} logstash-5-6 | [2017-11-26T06:09:04,123][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://elastic:xxxxxx#elasticsearch-5-6:9201/, :path=>"/"} elasticsearch-5-6 | [2017-11-26T06:09:04,687][WARN ][o.e.c.r.a.DiskThresholdMonitor] [immunedata-cluster-5-6.node-1] high disk watermark [90%] exceeded on [eAlcHBJ2QVG58e0HJsgrdQ][immunedata-cluster-5-6.node-1][/usr/share/elasticsearch/data/nodes/0] free: 1.9gb[7.4%], shards will be relocated away from this node elasticsearch-5-6 | [2017-11-26T06:09:04,687][INFO ][o.e.c.r.a.DiskThresholdMonitor] [immunedata-cluster-5-6.node-1] rerouting shards: [high disk watermark exceeded on one or more nodes] logstash-5-6 | [2017-11-26T06:09:06,450][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://elastic:xxxxxx#elasticsearch-5-6:9201/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://elastic:xxxxxx#elasticsearch-5-6:9201/][Manticore::SocketException] Connection refused (Connection refused)"} logstash-5-6 | [2017-11-26T06:09:06,452][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil} logstash-5-6 | [2017-11-26T06:09:06,455][ERROR][logstash.outputs.elasticsearch] Failed to install template. 
{:message=>"Template file '' could not be found!", :class=>"ArgumentError", :backtrace=>["/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-7.4.2-java/lib/logstash/outputs/elasticsearch/template_manager.rb:37:in `read_template_file'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-7.4.2-java/lib/logstash/outputs/elasticsearch/template_manager.rb:23:in `get_template'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-7.4.2-java/lib/logstash/outputs/elasticsearch/template_manager.rb:7:in `install_template'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-7.4.2-java/lib/logstash/outputs/elasticsearch/common.rb:58:in `install_template'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-7.4.2-java/lib/logstash/outputs/elasticsearch/common.rb:25:in `register'", "/usr/share/logstash/logstash-core/lib/logstash/output_delegator_strategies/shared.rb:9:in `register'", "/usr/share/logstash/logstash-core/lib/logstash/output_delegator.rb:43:in `register'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:290:in `register_plugin'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:301:in `register_plugins'", "org/jruby/RubyArray.java:1613:in `each'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:301:in `register_plugins'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:310:in `start_workers'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:235:in `run'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:398:in `start_pipeline'"]} logstash-5-6 | [2017-11-26T06:09:06,455][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//elasticsearch-5-6:9201"]} logstash-5-6 | [2017-11-26T06:09:06,462][INFO ][logstash.pipeline ] Starting pipeline {"id"=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>250} logstash-5-6 | [2017-11-26T06:09:09,818][INFO ][logstash.pipeline ] Pipeline main started logstash-5-6 | [2017-11-26T06:09:10,341][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600} logstash-5-6 | [2017-11-26T06:09:11,460][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://elastic:xxxxxx#elasticsearch-5-6:9201/, :path=>"/"} logstash-5-6 | [2017-11-26T06:09:11,484][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://elastic:xxxxxx#elasticsearch-5-6:9201/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://elastic:xxxxxx#elasticsearch-5-6:9201/][Manticore::SocketException] Connection refused (Connection refused)"} logstash-5-6 | [2017-11-26T06:09:16,491][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://elastic:xxxxxx#elasticsearch-5-6:9201/, :path=>"/"} logstash-5-6 | [2017-11-26T06:09:16,500][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. 
{:url=>"http://elastic:xxxxxx#elasticsearch-5-6:9201/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://elastic:xxxxxx#elasticsearch-5-6:9201/][Manticore::SocketException] Connection refused (Connection refused)"} kibana-5-6 | {"type":"log","#timestamp":"2017-11-26T06:09:21Z","tags":["warning","elasticsearch","config","deprecation"],"pid":1,"message":"Config key \"ssl.verify\" is deprecated. It has been replaced with \"ssl.verificationMode\""} logstash-5-6 | [2017-11-26T06:09:21,513][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://elastic:xxxxxx#elasticsearch-5-6:9201/, :path=>"/"} logstash-5-6 | [2017-11-26T06:09:21,523][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://elastic:xxxxxx#elasticsearch-5-6:9201/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://elastic:xxxxxx#elasticsearch-5-6:9201/][Manticore::SocketException] Connection refused (Connection refused)"} kibana-5-6 | {"type":"log","#timestamp":"2017-11-26T06:09:26Z","tags":["status","plugin:kibana#5.6.3","info"],"pid":1,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"} logstash-5-6 | [2017-11-26T06:09:26,536][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://elastic:xxxxxx#elasticsearch-5-6:9201/, :path=>"/"} logstash-5-6 | [2017-11-26T06:09:26,570][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. 
{:url=>"http://elastic:xxxxxx#elasticsearch-5-6:9201/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://elastic:xxxxxx#elasticsearch-5-6:9201/][Manticore::SocketException] Connection refused (Connection refused)"} kibana-5-6 | {"type":"log","#timestamp":"2017-11-26T06:09:26Z","tags":["status","plugin:elasticsearch#5.6.3","info"],"pid":1,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"} kibana-5-6 | {"type":"log","#timestamp":"2017-11-26T06:09:26Z","tags":["status","plugin:xpack_main#5.6.3","info"],"pid":1,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"} kibana-5-6 | {"type":"log","#timestamp":"2017-11-26T06:09:29Z","tags":["status","plugin:graph#5.6.3","info"],"pid":1,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"} kibana-5-6 | {"type":"log","#timestamp":"2017-11-26T06:09:29Z","tags":["error","elasticsearch","admin"],"pid":1,"message":"Request error, retrying\nHEAD http://elasticsearch-5-6:9201/ => connect ECONNREFUSED 172.21.0.2:9201"} kibana-5-6 | {"type":"log","#timestamp":"2017-11-26T06:09:29Z","tags":["status","plugin:monitoring#5.6.3","info"],"pid":1,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"} kibana-5-6 | {"type":"log","#timestamp":"2017-11-26T06:09:29Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"Unable to revive connection: http://elasticsearch-5-6:9201/"} kibana-5-6 | {"type":"log","#timestamp":"2017-11-26T06:09:29Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"No living connections"} logstash-5-6 | [2017-11-26T06:09:31,585][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://elastic:xxxxxx#elasticsearch-5-6:9201/, :path=>"/"} logstash-5-6 | [2017-11-26T06:09:31,603][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://elastic:xxxxxx#elasticsearch-5-6:9201/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://elastic:xxxxxx#elasticsearch-5-6:9201/][Manticore::SocketException] Connection refused (Connection refused)"} kibana-5-6 | {"type":"log","#timestamp":"2017-11-26T06:09:33Z","tags":["reporting","warning"],"pid":1,"message":"Generating a random key for xpack.reporting.encryptionKey. 
To prevent pending reports from failing on restart, please set xpack.reporting.encryptionKey in kibana.yml"} kibana-5-6 | {"type":"log","#timestamp":"2017-11-26T06:09:33Z","tags":["status","plugin:reporting#5.6.3","info"],"pid":1,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"} kibana-5-6 | {"type":"log","#timestamp":"2017-11-26T06:09:33Z","tags":["status","plugin:xpack_main#5.6.3","error"],"pid":1,"state":"red","message":"Status changed from yellow to red - Unable to connect to Elasticsearch at http://elasticsearch-5-6:9201.","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"} kibana-5-6 | {"type":"log","#timestamp":"2017-11-26T06:09:33Z","tags":["status","plugin:graph#5.6.3","error"],"pid":1,"state":"red","message":"Status changed from yellow to red - Unable to connect to Elasticsearch at http://elasticsearch-5-6:9201.","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"} kibana-5-6 | {"type":"log","#timestamp":"2017-11-26T06:09:33Z","tags":["status","plugin:reporting#5.6.3","error"],"pid":1,"state":"red","message":"Status changed from yellow to red - Unable to connect to Elasticsearch at http://elasticsearch-5-6:9201.","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"} kibana-5-6 | {"type":"log","#timestamp":"2017-11-26T06:09:33Z","tags":["status","plugin:elasticsearch#5.6.3","error"],"pid":1,"state":"red","message":"Status changed from yellow to red - Unable to connect to Elasticsearch at http://elasticsearch-5-6:9201.","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"} kibana-5-6 | {"type":"log","#timestamp":"2017-11-26T06:09:33Z","tags":["status","plugin:searchprofiler#5.6.3","error"],"pid":1,"state":"red","message":"Status changed from uninitialized to red - Unable to connect to Elasticsearch at http://elasticsearch-5-6:9201.","prevState":"uninitialized","prevMsg":"uninitialized"} kibana-5-6 | {"type":"log","#timestamp":"2017-11-26T06:09:34Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"Unable to revive connection: http://elasticsearch-5-6:9201/"} kibana-5-6 | {"type":"log","#timestamp":"2017-11-26T06:09:34Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"No living connections"} kibana-5-6 | {"type":"log","#timestamp":"2017-11-26T06:09:34Z","tags":["status","plugin:ml#5.6.3","error"],"pid":1,"state":"red","message":"Status changed from uninitialized to red - Unable to connect to Elasticsearch at http://elasticsearch-5-6:9201.","prevState":"uninitialized","prevMsg":"uninitialized"} elasticsearch-5-6 | [2017-11-26T06:09:34,750][WARN ][o.e.c.r.a.DiskThresholdMonitor] [immunedata-cluster-5-6.node-1] high disk watermark [90%] exceeded on [eAlcHBJ2QVG58e0HJsgrdQ][immunedata-cluster-5-6.node-1][/usr/share/elasticsearch/data/nodes/0] free: 1.9gb[7.4%], shards will be relocated away from this node logstash-5-6 | [2017-11-26T06:09:36,692][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://elastic:xxxxxx#elasticsearch-5-6:9201/, :path=>"/"} kibana-5-6 | {"type":"log","#timestamp":"2017-11-26T06:09:37Z","tags":["status","plugin:ml#5.6.3","info"],"pid":1,"state":"yellow","message":"Status changed from red to yellow - Waiting for Elasticsearch","prevState":"red","prevMsg":"Unable to connect to Elasticsearch at http://elasticsearch-5-6:9201."} logstash-5-6 | [2017-11-26T06:09:37,366][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect 
connection to dead ES instance, but got an error. {:url=>"http://elastic:xxxxxx#elasticsearch-5-6:9201/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://elastic:xxxxxx#elasticsearch-5-6:9201/][Manticore::SocketException] Connection refused (Connection refused)"} kibana-5-6 | {"type":"log","#timestamp":"2017-11-26T06:09:37Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"Unable to revive connection: http://elasticsearch-5-6:9201/"} kibana-5-6 | {"type":"log","#timestamp":"2017-11-26T06:09:37Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"No living connections"} kibana-5-6 | {"type":"log","#timestamp":"2017-11-26T06:09:37Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"Unable to revive connection: http://elasticsearch-5-6:9201/"} kibana-5-6 | {"type":"log","#timestamp":"2017-11-26T06:09:37Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"No living connections"} kibana-5-6 | {"type":"log","#timestamp":"2017-11-26T06:09:37Z","tags":
You called the elasticsearch service elasticsearch-5-6 in your docker-compose.yml. That means the container with elasticsearch is available at the address http://elasticsearch-5-6:9200 for all other containers in your docker-compose.yaml, and it is available at http://127.0.0.1:9201 from the host machine. In order to have a workable ELK stack you need to change the logstash config to:
http.host: "0.0.0.0"
path.config: /usr/share/logstash/pipeline
xpack.monitoring.elasticsearch.url: http://elasticsearch-5-6:9200
#xpack.monitoring.elasticsearch.username: logstash_system
#xpack.monitoring.elasticsearch.password: changeme
xpack.monitoring.enabled: false
and the kibana config to:
server.name: kibana
server.host: "0"
elasticsearch.url: http://elasticsearch-5-6:9200
xpack.monitoring.ui.container.elasticsearch.enabled: false
xpack.security.enabled: false
## Above I tired this - not working
#elasticsearch.username: elastic
#elasticsearch.password: changeme
#xpack.monitoring.ui.container.elasticsearch.enabled: false
#xpack.monitoring.ui.container.elasticsearch.enabled: true
# Extra:
ssl.verificationMode: false
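The split between the two addresses comes straight from the port mapping in the question's compose file (host port on the left, container port on the right):
    ports:
      - "9201:9200"   # host machine: http://127.0.0.1:9201, other containers on the network: http://elasticsearch-5-6:9200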
ELK Cluster with Xpack disabled
You are missing ELASTICSEARCH_URL: "http://elasticsearch:9200" in kibana and xpack.monitoring.elasticsearch.url: http://elasticsearch:9200 in Logstash. Here is a sample yml configuration with all possible environment variables defined in environment:
version: '3.4'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.6.0
    container_name: elasticsearch
    environment:
      ES_JAVA_OPTS: '-Xms2048m -Xmx2048m'
      cluster.name: es-cluster
      node.name: es1
      network.bind_host: 0.0.0.0
      discovery.zen.minimum_master_nodes: 1
      discovery.zen.ping.unicast.hosts: elasticsearch1
      xpack.security.enabled: 'false'
      xpack.monitoring.enabled: 'false'
      xpack.watcher.enabled: 'false'
      xpack.ml.enabled: 'false'
      http.cors.enabled : 'true'
      http.cors.allow-origin : "*"
      http.cors.allow-methods : OPTIONS, HEAD, GET, POST, PUT, DELETE
      http.cors.allow-headers : X-Requested-With,X-Auth-Token,Content-Type, Content-Length
      logger.level: debug
    volumes:
      - /var/elasticsearch/db/elasticsearch/data:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
      - 9300:9300
    networks:
      - elastic
  logstash:
    image: docker.elastic.co/logstash/logstash:6.6.0
    container_name: logstash
    ports:
      - 5044:5044
      - 5001:5001
    volumes:
      - /var/elasticsearch/logstash/pipeline:/usr/share/logstash/pipeline
    environment:
      ES_JAVA_OPTS: "-Xmx2048m -Xms2048m"
      http.host: 0.0.0.0
      xpack.monitoring.enabled: 'false'
      xpack.monitoring.elasticsearch.url: http://elasticsearch:9200
    networks:
      - elastic
    depends_on:
      - elasticsearch
  kibana:
    image: docker.elastic.co/kibana/kibana:6.6.0
    container_name: kibana
    environment:
      ELASTICSEARCH_URL: "http://elasticsearch:9200"
      xpack.security.enabled: 'false'
      xpack.graph.enabled : 'false'
      xpack.ml.enabled : 'false'
      xpack.monitoring.enabled : 'false'
      xpack.watcher.enabled : 'false'
      xpack.reporting.enabled : 'false'
    ports:
      - 5601:5601
    networks:
      - elastic
    depends_on:
      - elasticsearch
  elasticsearch-head:
    image: mobz/elasticsearch-head:5
    container_name: elasticsearch-head
    ports:
      - "9100:9100"
    networks:
      - elastic
networks:
  elastic:
    driver: bridge
Dockerized Spring Cloud Stream services with Kafka broker unable to connect to Zookeeper
I'm testing a sample spring cloud stream application (running on a Ubuntu linux machine) with one source and one sink services. All my services are docker-containerized and I would like to use kafka as message broker. Below the relevant parts of the docker-compose.yml: zookeeper: image: confluent/zookeeper container_name: zookeeper ports: - "2181:2181" kafka: image: wurstmeister/kafka:0.9.0.0-1 container_name: kafka ports: - "9092:9092" links: - zookeeper:zk environment: - KAFKA_ADVERTISED_HOST_NAME=192.168.33.101 - KAFKA_ADVERTISED_PORT=9092 - KAFKA_DELETE_TOPIC_ENABLE=true - KAFKA_LOG_RETENTION_HOURS=1 - KAFKA_MESSAGE_MAX_BYTES=10000000 - KAFKA_REPLICA_FETCH_MAX_BYTES=10000000 - KAFKA_GROUP_MAX_SESSION_TIMEOUT_MS=60000 - KAFKA_NUM_PARTITIONS=2 - KAFKA_DELETE_RETENTION_MS=1000 . . . # not shown: eureka service registry, spring cloud config service, etc. myapp-service-test-source: container_name: myapp-service-test-source image: myapp-h2020/myapp-service-test-source:0.0.1 environment: SERVICE_REGISTRY_HOST: 192.168.33.101 SERVICE_REGISTRY_PORT: 8761 ports: - 8081:8080 . . . Here the relevant part of application.yml for my service-test-source service: spring: cloud: stream: defaultBinder: kafka bindings: output: destination: messages content-type: application/json kafka: binder: brokers: ${SERVICE_REGISTRY_HOST:192.168.33.101} zkNodes: ${SERVICE_REGISTRY_HOST:192.168.33.101} defaultZkPort: 2181 defaultBrokerPort: 9092 The problem is the following, if I launch the docker-compose above, in the test-source container log I notice that the service fails to connect to zookeeper, giving a repeated set of Connection refused error, and finishing with a ZkTimeoutException which makes the service terminate (see below). The strange fact is that, if instead of running my source (and sink) test services as docker containers I run them as jar files via maven mvn spring-boot:run <etc...> the services work fine and are able to exchange messages via kafka. (note that kafka, zookeeper, etc. are still running as docker containers). . . . *** THE FOLLOWING REPEATED n TIMES *** 2017-02-14 14:40:09.164 INFO 1 --- [localhost:2181)] org.apache.zookeeper.ClientCnxn : Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error) 2017-02-14 14:40:09.166 WARN 1 --- [localhost:2181)] org.apache.zookeeper.ClientCnxn : Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) ~[na:1.8.0_111] at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717) ~[na:1.8.0_111] at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361) ~[zookeeper-3.4.6.jar!/:3.4.6-1569965] at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081) ~[zookeeper-3.4.6.jar!/:3.4.6-1569965] . . . 
java.lang.reflect.InvocationTargetException at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.springframework.boot.loader.MainMethodRunner.run(MainMethodRunner.java:53) at java.lang.Thread.run(Thread.java:745) Caused by: org.springframework.context.ApplicationContextException: Failed to start bean 'outputBindingLifecycle'; nested exception is org.I0Itec.zkclient.exception.ZkTimeoutException: Unable to connect to zookeeper server within timeout: 10000 Any idea what the problem might be? edit: I discovered that in the "jar" execution logs the test-source service tries to connect to zookeeper through the IP 127.0.0.1, as can be seen from the log snipped below: 2017-02-15 14:24:04.159 INFO 10348 --- [localhost:2181)] org.apache.zookeeper.ClientCnxn : Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error) 2017-02-15 14:24:04.159 INFO 10348 --- [localhost:2181)] org.apache.zookeeper.ClientCnxn : Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error) 2017-02-15 14:24:04.178 INFO 10348 --- [localhost:2181)] org.apache.zookeeper.ClientCnxn : Socket connection established to localhost/127.0.0.1:2181, initiating session 2017-02-15 14:24:04.201 INFO 10348 --- [localhost:2181)] org.apache.zookeeper.ClientCnxn : Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x15a421fd9ec000a, negotiated timeout = 10000 2017-02-15 14:24:05.870 INFO 10348 --- [ main] org.apache.zookeeper.ZooKeeper : Initiating client connection, connectString=localhost:2181 sessionTimeout=6000 watcher=org.I0Itec.zkclient.ZkClient#72ba68e3 2017-02-15 14:24:05.882 INFO 10348 --- [localhost:2181)] org.apache.zookeeper.ClientCnxn : Opening socket connection to server localhost/0:0:0:0:0:0:0:1:2181. Will not attempt to authenticate using SASL (unknown error) 2017-02-15 14:24:05.883 INFO 10348 --- [localhost:2181)] org.apache.zookeeper.ClientCnxn : Socket connection established to localhost/0:0:0:0:0:0:0:1:2181, initiating session This explains why everything works on the jar execution but not the docker one (the zookeeper container exports its 2181 port to the host machine, so it's visible as localhost for the service process when running directly on the host machine), but doesn't solve the problem: Apparently the spring cloud stream kafka configuration is ignoring the property spring.cloud.stream.kafka.binder.zkNodes as set in the application.yml (note that if I log the value of such environment variable from the service, I see the correct value of 192.168.33.101 that I hardcoded there for debugging purposes).
You have set the defaultBinder to be rabbit while trying to use the Kafka binder configuration. Do you have both rabbit and kafka binders in the classpath of your application? In that case, you can enable here
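If both the rabbit and kafka binders really are on the classpath, the binder can also be pinned per binding. A minimal sketch of the relevant application.yml section, using standard Spring Cloud Stream property names (the destination is taken from the question; treat this as illustrative, not as the asker's exact config):
spring:
  cloud:
    stream:
      defaultBinder: kafka          # global default binder
      bindings:
        output:
          destination: messages
          binder: kafka             # per-binding override when several binders are present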
zookeeper:
  image: wurstmeister/zookeeper
  container_name: 'zookeeper'
  ports:
    - 2181:2181
--------------------- kafka --------------------------------
kafka:
  image: wurstmeister/kafka
  container_name: 'kafka'
  environment:
    - KAFKA_ADVERTISED_HOST_NAME=kafka
    - KAFKA_ADVERTISED_PORT=9092
    - KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181
    - KAFKA_CREATE_TOPICS=kafka_docker_topic:1:1
  ports:
    - 9092:9092
  depends_on:
    - zookeeper

spring:
  profiles: dev
  cloud:
    stream:
      defaultBinder: kafka
      kafka:
        binder:
          brokers: kafka       # i added brokers and zkNodes property
          zkNodes: zookeeper   #
      bindings:
        input:
          destination: message
          content-type: application/json