Connection refused for Flink with docker-compose - docker

I have the following docker-compose file, which is a copy of the docker-compose file from the Apache Flink Docker site. The only difference is that I am using the Mac M1 version of the image.
version: "2.2"
services:
jobmanager:
image: arm64v8/flink:alpine
ports:
- "8081:8081"
command: standalone-job --job-classname com.job.ClassName [--job-id <job id>] [--fromSavepoint /path/to/savepoint [--allowNonRestoredState]] [job arguments]
volumes:
- ~/sg_flink/artifacts:/opt/flink/usrlib
networks:
- flink-network
environment:
- |
FLINK_PROPERTIES=
jobmanager.rpc.address: jobmanager
parallelism.default: 2
taskmanager:
image: arm64v8/flink:alpine
depends_on:
- jobmanager
command: taskmanager
scale: 1
volumes:
- ~/sg_flink/artifacts:/opt/flink/usrlib
networks:
- flink-network
environment:
- |
FLINK_PROPERTIES=
jobmanager.rpc.address: jobmanager
taskmanager.numberOfTaskSlots: 2
parallelism.default: 2
networks:
flink-network:
The error is a connection refused:
taskmanager_1 | 2021-11-03 17:43:02,724 INFO org.apache.flink.runtime.taskexecutor.TaskExecutor - Could not resolve ResourceManager address akka.tcp://flink@9cf35ea13c8b:6123/user/resourcemanager, retrying in 10000 ms: Could not connect to rpc endpoint under address akka.tcp://flink@9cf35ea13c8b:6123/user/resourcemanager..
taskmanager_1 | 2021-11-03 17:43:12,753 WARN akka.remote.transport.netty.NettyTransport - Remote connection to [null] failed with java.net.ConnectException: Connection refused: 9cf35ea13c8b/172.20.0.3:6123
taskmanager_1 | 2021-11-03 17:43:12,756 WARN akka.remote.ReliableDeliverySupervisor - Association with remote system [akka.tcp://flink@9cf35ea13c8b:6123] has failed, address is now gated for [50] ms. Reason: [Association failed with [akka.tcp://flink@9cf35ea13c8b:6123]] Caused by: [Connection refused: 9cf35ea13c8b/172.20.0.3:6123]
taskmanager_1 | 2021-11-03 17:43:12,758 INFO org.apache.flink.runtime.taskexecutor.TaskExecutor - Could not resolve ResourceManager address akka.tcp://flink@9cf35ea13c8b:6123/user/resourcemanager, retrying in 10000 ms: Could not connect to rpc endpoint under address akka.tcp://flink@9cf35ea13c8b:6123/user/resourcemanager..
The docker ps output looks like this:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
4416f88f60c2 arm64v8/flink:alpine "/docker-entrypoint.…" 44 seconds ago Up 44 seconds 6123/tcp, 8081/tcp sg_flink_taskmanager_1
c211940acf41 arm64v8/flink:alpine "/docker-entrypoint.…" 45 seconds ago Up 44 seconds 6123/tcp, 0.0.0.0:8081->8081/tcp sg_flink_jobmanager_1

I ran into this same problem under the same conditions, and I fixed it using docker-compose links:
taskmanager:
  links:
    - jobmanager
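For reference, a minimal sketch of the taskmanager service with that fix applied (everything except the added links entry is assumed unchanged from the compose file above):
taskmanager:
  image: arm64v8/flink:alpine
  depends_on:
    - jobmanager
  links:
    - jobmanager
  command: taskmanager
  scale: 1
links forces an alias for jobmanager into the taskmanager container's name resolution; on a user-defined network the service name should normally resolve already, so this mainly helps when that resolution fails, as in the logs above.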

Related

ElasticSearch Logstash not connecting "Connection refused" - Docker

I need help! (who would have thought, right? lol)
I have a job interview in a few days and it would mean the world to me to be well prepared for it and have some working examples.
I am trying to set up an ELK pipeline to stream data from Kafka through Logstash into Elasticsearch, and finally read it from Kibana. The usual.
I am making use of containers, but the Logstash - Elasticsearch duo is giving me an aneurysm.
Everything else works perfectly fine. I've checked the Kafka logs and it is working just fine. Kibana is connected to Elasticsearch just fine as well. But Logstash and ES really don't want to talk to each other.
Here is the setup
docker-compose.yml
version: '3.6'
services:
  elasticsearch:
    image: elasticsearch:8.6.0
    container_name: elasticsearch
    #restart: always
    volumes:
      - elastic_data:/usr/share/elasticsearch/data/
    environment:
      cluster.name: elf-kafka-cluster
      ES_JAVA_OPTS: "-Xmx256m -Xms256m"
      discovery.type: single-node
      xpack.security.enabled: false
    ports:
      - '9200:9200'
      - '9300:9300'
    networks:
      - elk
  kibana:
    image: kibana:8.6.0
    container_name: kibana
    #restart: always
    ports:
      - '5601:5601'
    environment:
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
    depends_on:
      - elasticsearch
    networks:
      - elk
  logstash:
    image: logstash:8.6.0
    container_name: logstash
    #restart: always
    volumes:
      - type: bind
        source: ./logstash_pipeline/
        target: /usr/share/logstash/pipeline
        read_only: true
    command: logstash -f /home/ettore/Documenti/Portfolio/ELK/logstash/logstash.conf
    depends_on:
      - elasticsearch
    ports:
      - '9600:9600'
    environment:
      xpack.monitoring.enabled: true
      # LS_JAVA_OPTS: "-Xmx256m -Xms256m"
    links:
      - elasticsearch
    networks:
      - elk
volumes:
  elastic_data: {}
networks:
  elk:
    driver: bridge
logstash.conf
input {
  kafka {
    bootstrap_servers => "localhost:9092"
    topics => ["topic"]
  }
}
output {
  elasitcsearch {
    hosts => ["http://localhost:9200"]
    index => "topic"
    workers => 1
  }
}
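(As an aside for readers comparing against the logs below: inside the compose network, localhost refers to the Logstash container itself, and the output plugin is spelled elasticsearch, so a working output block would presumably look closer to this sketch, assuming Elasticsearch is reachable under its service name on the elk network:)
output {
  elasticsearch {
    # service name from the compose file above, not localhost
    hosts => ["http://elasticsearch:9200"]
    index => "topic"
    workers => 1
  }
}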
These are the Logstash error logs when I compose up:
logstash | [2023-01-17T13:59:02,680][WARN ][deprecation.logstash.monitoringextension.pipelineregisterhook] Internal collectors option for Logstash monitoring is deprecated and targeted for removal in the next major version.
logstash | Please configure Metricbeat to monitor Logstash. Documentation can be found at:
logstash | https://www.elastic.co/guide/en/logstash/current/monitoring-with-metricbeat.html
logstash | [2023-01-17T13:59:04,711][INFO ][logstash.licensechecker.licensereader] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elasticsearch:9200/]}}
logstash | [2023-01-17T13:59:05,373][INFO ][logstash.licensechecker.licensereader] Failed to perform request {:message=>"Connect to elasticsearch:9200 [elasticsearch/172.20.0.2] failed: Connection refused", :exception=>Manticore::SocketException, :cause=>#<Java::OrgApacheHttpConn::HttpHostConnectException: Connect to elasticsearch:9200 [elasticsearch/172.20.0.2] failed: Connection refused>}
logstash | [2023-01-17T13:59:05,379][WARN ][logstash.licensechecker.licensereader] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"http://elasticsearch:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :message=>"Elasticsearch Unreachable: [http://elasticsearch:9200/][Manticore::SocketException] Connect to elasticsearch:9200 [elasticsearch/172.20.0.2] failed: Connection refused"}
logstash | [2023-01-17T13:59:05,436][INFO ][logstash.licensechecker.licensereader] Failed to perform request {:message=>"Connect to elasticsearch:9200 [elasticsearch/172.20.0.2] failed: Connection refused", :exception=>Manticore::SocketException, :cause=>#<Java::OrgApacheHttpConn::HttpHostConnectException: Connect to elasticsearch:9200 [elasticsearch/172.20.0.2] failed: Connection refused>}
logstash | [2023-01-17T13:59:05,444][WARN ][logstash.licensechecker.licensereader] Marking url as dead. Last error: [LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError] Elasticsearch Unreachable: [http://elasticsearch:9200/_xpack][Manticore::SocketException] Connect to elasticsearch:9200 [elasticsearch/172.20.0.2] failed: Connection refused {:url=>http://elasticsearch:9200/, :error_message=>"Elasticsearch Unreachable: [http://elasticsearch:9200/_xpack][Manticore::SocketException] Connect to elasticsearch:9200 [elasticsearch/172.20.0.2] failed: Connection refused", :error_class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError"}
logstash | [2023-01-17T13:59:05,449][WARN ][logstash.licensechecker.licensereader] Attempt to validate Elasticsearch license failed. Sleeping for 0.02 {:fail_count=>1, :exception=>"Elasticsearch Unreachable: [http://elasticsearch:9200/_xpack][Manticore::SocketException] Connect to elasticsearch:9200 [elasticsearch/172.20.0.2] failed: Connection refused"}
logstash | [2023-01-17T13:59:05,477][ERROR][logstash.licensechecker.licensereader] Unable to retrieve license information from license server {:message=>"No Available connections"}
logstash | [2023-01-17T13:59:05,567][ERROR][logstash.monitoring.internalpipelinesource] Failed to fetch X-Pack information from Elasticsearch. This is likely due to failure to reach a live Elasticsearch cluster.
logstash | [2023-01-17T13:59:05,661][INFO ][logstash.config.source.local.configpathloader] No config files found in path {:path=>"/home/ettore/Documenti/Portfolio/ELK/logstash/logstash.conf"}
logstash | [2023-01-17T13:59:05,664][ERROR][logstash.config.sourceloader] No configuration found in the configured sources.
logstash | [2023-01-17T13:59:06,333][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600, :ssl_enabled=>false}
logstash | [2023-01-17T13:59:06,411][INFO ][logstash.runner ] Logstash shut down.
logstash | [2023-01-17T13:59:06,419][FATAL][org.logstash.Logstash ] Logstash stopped processing because of an error: (SystemExit) exit
logstash | org.jruby.exceptions.SystemExit: (SystemExit) exit
logstash | at org.jruby.RubyKernel.exit(org/jruby/RubyKernel.java:790) ~[jruby.jar:?]
logstash | at org.jruby.RubyKernel.exit(org/jruby/RubyKernel.java:753) ~[jruby.jar:?]
logstash | at usr.share.logstash.lib.bootstrap.environment.<main>(/usr/share/logstash/lib/bootstrap/environment.rb:91) ~[?:?]
And this is to prove that everything is working as intended with ES (or so it seems):
netstat -an | grep 9200
tcp 0 0 0.0.0.0:9200 0.0.0.0:* LISTEN
tcp6 0 0 :::9200 :::* LISTEN
unix 3 [ ] STREAM CONNECTED 49200
I've looked through everything and this is 100% not a duplicate because I have tried it all. I really can't figure it out. Hope anyone can help.
Thank you for your time.
You should set up logstash.yml.
Create a logstash.yml with the values below:
http.host: "0.0.0.0"
xpack.monitoring.elasticsearch.hosts: [ "http://elasticsearch:9200" ]
(The host is the Elasticsearch service name from the compose file; localhost inside the Logstash container would point at Logstash itself.)
In your docker-compose.yml, add another volume to the Logstash container as shown below:
./logstash.yml:/usr/share/logstash/config/logstash.yml
Additionally, it's good to run with a restart policy.
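For concreteness, a minimal sketch of the Logstash service with that extra volume and a restart policy added (bind-mount sources are assumed to sit next to docker-compose.yml; the command: override pointing at a host path is dropped so Logstash reads the pipeline from /usr/share/logstash/pipeline, which matches the "No config files found" error in the logs above):
logstash:
  image: logstash:8.6.0
  container_name: logstash
  restart: always
  volumes:
    - ./logstash.yml:/usr/share/logstash/config/logstash.yml
    - ./logstash_pipeline/:/usr/share/logstash/pipeline
  ports:
    - '9600:9600'
  depends_on:
    - elasticsearch
  networks:
    - elk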

Service Not accessible using Container Host Name from within the Container

We have been facing an issue while trying to spin up a local containerized Confluent Kafka ecosystem using the docker-compose tool.
Host OS (Virtual Machine connecting through RDP)
$ sudo lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 18.04.6 LTS
Release: 18.04
Codename: bionic
$ uname -a
Linux IS*******1 4.15.0-189-generic #200-Ubuntu SMP Wed Jun 22 19:53:37 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
ghosh.sayak@ISRXDLKFD001:~$
Docker Engine
$ sudo docker version
Client: Docker Engine - Community
Version: 20.10.20
API version: 1.41
Go version: go1.18.7
Git commit: 9fdeb9c
Built: Tue Oct 18 18:20:19 2022
OS/Arch: linux/amd64
Context: default
Experimental: true
Server: Docker Engine - Community
Engine:
Version: 20.10.20
API version: 1.41 (minimum version 1.12)
Go version: go1.18.7
Git commit: 03df974
Built: Tue Oct 18 18:18:11 2022
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: 1.6.16
GitCommit: 31aa4358a36870b21a992d3ad2bef29e1d693bec
runc:
Version: 1.1.4
GitCommit: v1.1.4-0-g5fd4c4d
docker-init:
Version: 0.19.0
GitCommit: de40ad0
docker-compose.yaml (Compose created network "kafka_default" using default bridge driver)
---
version: '2'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:7.3.0
    hostname: zookeeper
    container_name: zookeeper
    ports:
      - "2181:2181"
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
  broker:
    image: confluentinc/cp-server:7.3.0
    hostname: broker
    container_name: broker
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"
      - "9101:9101"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker:29092,PLAINTEXT_HOST://localhost:9092
      KAFKA_METRIC_REPORTERS: io.confluent.metrics.reporter.ConfluentMetricsReporter
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
      KAFKA_CONFLUENT_LICENSE_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_CONFLUENT_BALANCER_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
      KAFKA_JMX_PORT: 9101
      KAFKA_JMX_HOSTNAME: localhost
      KAFKA_CONFLUENT_SCHEMA_REGISTRY_URL: http://schema-registry:8081
      CONFLUENT_METRICS_REPORTER_BOOTSTRAP_SERVERS: broker:29092
      CONFLUENT_METRICS_REPORTER_TOPIC_REPLICAS: 1
      CONFLUENT_METRICS_ENABLE: 'true'
      CONFLUENT_SUPPORT_CUSTOMER_ID: 'anonymous'
  schema-registry:
    image: confluentinc/cp-schema-registry:7.3.0
    hostname: schema-registry
    container_name: schema-registry
    depends_on:
      - broker
    ports:
      - "8081:8081"
    environment:
      SCHEMA_REGISTRY_HOST_NAME: schema-registry
      SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: 'broker:29092'
      SCHEMA_REGISTRY_LISTENERS: http://0.0.0.0:8081
  connect:
    image: cnfldemos/cp-server-connect-datagen:0.6.0-7.3.0
    hostname: connect
    container_name: connect
    depends_on:
      - broker
      - schema-registry
    ports:
      - "8083:8083"
    environment:
      CONNECT_BOOTSTRAP_SERVERS: 'broker:29092'
      CONNECT_REST_ADVERTISED_HOST_NAME: connect
      CONNECT_REST_PORT: 8083
      CONNECT_GROUP_ID: compose-connect-group
      CONNECT_CONFIG_STORAGE_TOPIC: docker-connect-configs
      CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_OFFSET_FLUSH_INTERVAL_MS: 10000
      CONNECT_OFFSET_STORAGE_TOPIC: docker-connect-offsets
      CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_STATUS_STORAGE_TOPIC: docker-connect-status
      CONNECT_STATUS_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_SCHEMA_REGISTRY_URL: "http://schema-registry:8081"
      CONNECT_KEY_CONVERTER: org.apache.kafka.connect.storage.StringConverter
      CONNECT_VALUE_CONVERTER: io.confluent.connect.avro.AvroConverter
      CONNECT_VALUE_CONVERTER_SCHEMAS_ENABLE: "false"
      CONNECT_VALUE_CONVERTER_SCHEMA_REGISTRY_URL: http://schema-registry:8081
      CONNECT_INTERNAL_KEY_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
      CONNECT_INTERNAL_VALUE_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
      CONNECT_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      CLASSPATH: /usr/share/java/monitoring-interceptors/monitoring-interceptors-6.2.0.jar
      CONNECT_PRODUCER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringProducerInterceptor"
      CONNECT_CONSUMER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringConsumerInterceptor"
      CONNECT_PLUGIN_PATH: "/usr/share/java/ConnectPlugin"
      CONNECT_LOG4J_ROOT_LOGLEVEL: INFO
      CONNECT_LOG4J_LOGGERS: org.apache.zookeeper=ERROR,org.I0Itec.zkclient=ERROR,org.reflections=ERROR
      CONNECT_TOPIC_CREATION_ENABLE: 'false'
    volumes:
      - ./connect-plugins:/usr/share/java/ConnectPlugin
  control-center:
    image: confluentinc/cp-enterprise-control-center:7.3.0
    hostname: control-center
    container_name: control-center
    depends_on:
      - broker
      - schema-registry
      - connect
    ports:
      - "9021:9021"
    environment:
      CONTROL_CENTER_BOOTSTRAP_SERVERS: 'broker:29092'
      CONTROL_CENTER_CONNECT_CONNECT-DEFAULT_CLUSTER: 'connect:8083'
      CONTROL_CENTER_SCHEMA_REGISTRY_URL: "http://schema-registry:8081"
      CONTROL_CENTER_REPLICATION_FACTOR: 1
      CONTROL_CENTER_INTERNAL_TOPICS_PARTITIONS: 1
      CONTROL_CENTER_MONITORING_INTERCEPTOR_TOPIC_PARTITIONS: 1
      CONFLUENT_METRICS_TOPIC_REPLICATION: 1
      PORT: 9021
Now, when executing the command $ sudo docker-compose up -d, we observed that only the "zookeeper" container comes Up, while the "broker" (i.e. Kafka) container Exited with status 1 and the following error -
$ sudo docker container logs -t --details broker
2023-02-08T14:07:30.660123686Z ===> User
2023-02-08T14:07:30.662518281Z uid=1000(appuser) gid=1000(appuser) groups=1000(appuser)
2023-02-08T14:07:30.666677333Z ===> Configuring ...
2023-02-08T14:07:34.144645597Z ===> Running preflight checks ...
2023-02-08T14:07:34.147953062Z ===> Check if /var/lib/kafka/data is writable ...
2023-02-08T14:07:34.564183498Z ===> Check if Zookeeper is healthy ...
2023-02-08T14:07:35.861623527Z [2023-02-08 14:07:35,856] INFO Client environment:zookeeper.version=3.6.3--6401e4ad2087061bc6b9f80dec2d69f2e3c8660a, built on 04/08/2021 16:35 GMT (org.apache.zookeeper.ZooKeeper)
2023-02-08T14:07:35.861652177Z [2023-02-08 14:07:35,857] INFO Client environment:host.name=broker (org.apache.zookeeper.ZooKeeper)
2023-02-08T14:07:35.861683838Z [2023-02-08 14:07:35,857] INFO Client environment:java.version=11.0.16.1 (org.apache.zookeeper.ZooKeeper)
2023-02-08T14:07:35.861689083Z [2023-02-08 14:07:35,857] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper)
2023-02-08T14:07:35.861694000Z [2023-02-08 14:07:35,857] INFO Client environment:java.home=/usr/lib/jvm/zulu11-ca (org.apache.zookeeper.ZooKeeper)
2023-02-08T14:07:35.861703288Z [2023-02-08 14:07:35,857] INFO Client environment:java.class.path=/usr/share/java/cp-base-new/slf4j-api-1.7.36.jar:/usr/share/java/cp-base-new/disk-usage-agent-7.3.0.jar:/usr/share/java/cp-base-new/paranamer-2.8.jar:/usr/share/java/cp-base-new/jmx_prometheus_javaagent-0.14.0.jar:/usr/share/java/cp-base-new/jackson-annotations-2.13.2.jar:/usr/share/java/cp-base-new/metrics-core-2.2.0.jar:/usr/share/java/cp-base-new/jolokia-core-1.7.1.jar:/usr/share/java/cp-base-new/slf4j-reload4j-1.7.36.jar:/usr/share/java/cp-base-new/re2j-1.6.jar:/usr/share/java/cp-base-new/snakeyaml-1.30.jar:/usr/share/java/cp-base-new/utility-belt-7.3.0.jar:/usr/share/java/cp-base-new/zookeeper-jute-3.6.3.jar:/usr/share/java/cp-base-new/audience-annotations-0.5.0.jar:/usr/share/java/cp-base-new/scala-collection-compat_2.13-2.6.0.jar:/usr/share/java/cp-base-new/kafka-metadata-7.3.0-ccs.jar:/usr/share/java/cp-base-new/argparse4j-0.7.0.jar:/usr/share/java/cp-base-new/kafka-storage-api-7.3.0-ccs.jar:/usr/share/java/cp-base-new/jopt-simple-5.0.4.jar:/usr/share/java/cp-base-new/common-utils-7.3.0.jar:/usr/share/java/cp-base-new/jackson-dataformat-yaml-2.13.2.jar:/usr/share/java/cp-base-new/logredactor-1.0.10.jar:/usr/share/java/cp-base-new/zookeeper-3.6.3.jar:/usr/share/java/cp-base-new/zstd-jni-1.5.2-1.jar:/usr/share/java/cp-base-new/kafka-raft-7.3.0-ccs.jar:/usr/share/java/cp-base-new/kafka-server-common-7.3.0-ccs.jar:/usr/share/java/cp-base-new/scala-logging_2.13-3.9.4.jar:/usr/share/java/cp-base-new/jose4j-0.7.9.jar:/usr/share/java/cp-base-new/snappy-java-1.1.8.4.jar:/usr/share/java/cp-base-new/scala-reflect-2.13.5.jar:/usr/share/java/cp-base-new/scala-library-2.13.5.jar:/usr/share/java/cp-base-new/scala-java8-compat_2.13-1.0.2.jar:/usr/share/java/cp-base-new/jackson-core-2.13.2.jar:/usr/share/java/cp-base-new/minimal-json-0.9.5.jar:/usr/share/java/cp-base-new/kafka-clients-7.3.0-ccs.jar:/usr/share/java/cp-base-new/jackson-dataformat-csv-2.13.2.jar:/usr/share/java/cp-base-new/jackson-datatype-jdk8-2.13.2.jar:/usr/share/java/cp-base-new/kafka-storage-7.3.0-ccs.jar:/usr/share/java/cp-base-new/lz4-java-1.8.0.jar:/usr/share/java/cp-base-new/jolokia-jvm-1.7.1.jar:/usr/share/java/cp-base-new/jackson-module-scala_2.13-2.13.2.jar:/usr/share/java/cp-base-new/kafka_2.13-7.3.0-ccs.jar:/usr/share/java/cp-base-new/jackson-databind-2.13.2.2.jar:/usr/share/java/cp-base-new/logredactor-metrics-1.0.10.jar:/usr/share/java/cp-base-new/gson-2.9.0.jar:/usr/share/java/cp-base-new/json-simple-1.1.1.jar:/usr/share/java/cp-base-new/reload4j-1.2.19.jar:/usr/share/java/cp-base-new/metrics-core-4.1.12.1.jar:/usr/share/java/cp-base-new/commons-cli-1.4.jar (org.apache.zookeeper.ZooKeeper)
2023-02-08T14:07:35.861725300Z [2023-02-08 14:07:35,857] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper)
2023-02-08T14:07:35.861730555Z [2023-02-08 14:07:35,857] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper)
2023-02-08T14:07:35.861734946Z [2023-02-08 14:07:35,857] INFO Client environment:java.compiler=<NA> (org.apache.zookeeper.ZooKeeper)
2023-02-08T14:07:35.861740934Z [2023-02-08 14:07:35,857] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper)
2023-02-08T14:07:35.861745386Z [2023-02-08 14:07:35,857] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper)
2023-02-08T14:07:35.861749485Z [2023-02-08 14:07:35,857] INFO Client environment:os.version=4.15.0-189-generic (org.apache.zookeeper.ZooKeeper)
2023-02-08T14:07:35.861754326Z [2023-02-08 14:07:35,857] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper)
2023-02-08T14:07:35.862778083Z [2023-02-08 14:07:35,857] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper)
2023-02-08T14:07:35.862781895Z [2023-02-08 14:07:35,857] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper)
2023-02-08T14:07:35.862785144Z [2023-02-08 14:07:35,857] INFO Client environment:os.memory.free=242MB (org.apache.zookeeper.ZooKeeper)
2023-02-08T14:07:35.862788600Z [2023-02-08 14:07:35,857] INFO Client environment:os.memory.max=4006MB (org.apache.zookeeper.ZooKeeper)
2023-02-08T14:07:35.862792192Z [2023-02-08 14:07:35,857] INFO Client environment:os.memory.total=252MB (org.apache.zookeeper.ZooKeeper)
2023-02-08T14:07:35.875374527Z [2023-02-08 14:07:35,869] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=40000 watcher=io.confluent.admin.utils.ZookeeperConnectionWatcher@3c0a50da (org.apache.zookeeper.ZooKeeper)
2023-02-08T14:07:35.875395839Z [2023-02-08 14:07:35,873] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util)
2023-02-08T14:07:35.898794157Z [2023-02-08 14:07:35,894] INFO jute.maxbuffer value is 1048575 Bytes (org.apache.zookeeper.ClientCnxnSocket)
2023-02-08T14:07:35.910713716Z [2023-02-08 14:07:35,905] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn)
2023-02-08T14:07:36.094475506Z [2023-02-08 14:07:36,087] INFO Opening socket connection to server zookeeper/172.19.0.2:2181. (org.apache.zookeeper.ClientCnxn)
2023-02-08T14:07:36.094498351Z [2023-02-08 14:07:36,090] INFO SASL config status: Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
2023-02-08T14:07:36.114680668Z [2023-02-08 14:07:36,111] ERROR Unable to open socket to zookeeper/172.19.0.2:2181 (org.apache.zookeeper.ClientCnxnSocketNIO)
2023-02-08T14:07:36.121566533Z [2023-02-08 14:07:36,112] WARN Session 0x0 for sever zookeeper/172.19.0.2:2181, Closing socket connection. Attempting reconnect except it is a SessionExpiredException. (org.apache.zookeeper.ClientCnxn)
2023-02-08T14:07:36.121606559Z java.net.ConnectException: Connection refused
2023-02-08T14:07:36.121610572Z at java.base/sun.nio.ch.Net.connect0(Native Method)
2023-02-08T14:07:36.121614024Z at java.base/sun.nio.ch.Net.connect(Net.java:483)
2023-02-08T14:07:36.121617448Z at java.base/sun.nio.ch.Net.connect(Net.java:472)
2023-02-08T14:07:36.121620610Z at java.base/sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:692)
2023-02-08T14:07:36.121623850Z at org.apache.zookeeper.ClientCnxnSocketNIO.registerAndConnect(ClientCnxnSocketNIO.java:260)
2023-02-08T14:07:36.121627034Z at org.apache.zookeeper.ClientCnxnSocketNIO.connect(ClientCnxnSocketNIO.java:270)
2023-02-08T14:07:36.121630306Z at org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:1177)
2023-02-08T14:07:36.121633562Z at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1210)
2023-02-08T14:07:37.230453807Z [2023-02-08 14:07:37,224] INFO Opening socket connection to server zookeeper/172.19.0.2:2181. (org.apache.zookeeper.ClientCnxn)
2023-02-08T14:07:37.230488726Z [2023-02-08 14:07:37,224] INFO SASL config status: Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
2023-02-08T14:07:37.230492954Z [2023-02-08 14:07:37,224] ERROR Unable to open socket to zookeeper/172.19.0.2:2181 (org.apache.zookeeper.ClientCnxnSocketNIO)
2023-02-08T14:07:37.230496368Z [2023-02-08 14:07:37,225] WARN Session 0x0 for sever zookeeper/172.19.0.2:2181, Closing socket connection. Attempting reconnect except it is a SessionExpiredException. (org.apache.zookeeper.ClientCnxn)
To debug the connectivity, I have done a couple of preflight checks, like -
Created a netshoot container sharing the zookeeper container's network namespace using sudo docker run -it --net container:zookeeper nicolaka/netshoot (see the sketch below)
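(Note: --net container:zookeeper attaches to zookeeper's network namespace rather than joining the kafka_default bridge network, so zookeeper's listeners show up on localhost inside netshoot. A sketch of both styles of check, with the kafka_default name taken from the compose description above:)
# Share zookeeper's network namespace: its ports are visible on localhost
docker run -it --rm --net container:zookeeper nicolaka/netshoot nc -zv localhost 2181
# Join the compose bridge network instead: resolve and dial the service by name
docker run -it --rm --network kafka_default nicolaka/netshoot nc -zv zookeeper 2181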
Some useful information -
zookeeper# cat /etc/hosts
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.19.0.2 zookeeper
zookeeper# cat /etc/resolv.conf
search corp.***.com
nameserver 127.0.0.11
options ndots:0
Tried nslookup, netcat, ping, nmap, telnet from within the container, and the results are as follows -
zookeeper# nslookup zookeeper
Server: 127.0.0.11
Address: 127.0.0.11#53
Non-authoritative answer:
Name: zookeeper
Address: 172.19.0.2
zookeeper# nc -v -l -p 2181
nc: Address in use
zookeeper# ping -c 2 zookeeper
PING zookeeper (172.19.0.2) 56(84) bytes of data.
64 bytes from zookeeper (172.19.0.2): icmp_seq=1 ttl=64 time=0.054 ms
64 bytes from zookeeper (172.19.0.2): icmp_seq=2 ttl=64 time=0.049 ms
--- zookeeper ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1015ms
rtt min/avg/max/mdev = 0.049/0.051/0.054/0.002 ms
zookeeper# nmap -p0- -v -A -T4 zookeeper
Starting Nmap 7.93 ( https://nmap.org ) at 2023-02-08 15:43 UTC
NSE: Loaded 155 scripts for scanning.
NSE: Script Pre-scanning.
Initiating NSE at 15:43
Completed NSE at 15:43, 0.00s elapsed
Initiating NSE at 15:43
Completed NSE at 15:43, 0.00s elapsed
Initiating NSE at 15:43
Completed NSE at 15:43, 0.00s elapsed
Initiating SYN Stealth Scan at 15:43
Scanning zookeeper (172.19.0.2) [65536 ports]
Completed SYN Stealth Scan at 15:43, 2.96s elapsed (65536 total ports)
Initiating Service scan at 15:43
Initiating OS detection (try #1) against zookeeper (172.19.0.2)
Retrying OS detection (try #2) against zookeeper (172.19.0.2)
NSE: Script scanning 172.19.0.2.
Initiating NSE at 15:43
Completed NSE at 15:43, 0.01s elapsed
Initiating NSE at 15:43
Completed NSE at 15:43, 0.00s elapsed
Initiating NSE at 15:43
Completed NSE at 15:43, 0.00s elapsed
Nmap scan report for zookeeper (172.19.0.2)
Host is up (0.000060s latency).
Not shown: 65533 closed tcp ports (reset)
PORT STATE SERVICE VERSION
2181/tcp filtered eforward
8080/tcp filtered http-proxy
39671/tcp filtered unknown
Too many fingerprints match this host to give specific OS details
Network Distance: 0 hops
NSE: Script Post-scanning.
Initiating NSE at 15:43
Completed NSE at 15:43, 0.00s elapsed
Initiating NSE at 15:43
Completed NSE at 15:43, 0.00s elapsed
Initiating NSE at 15:43
Completed NSE at 15:43, 0.00s elapsed
Read data files from: /usr/bin/../share/nmap
OS and Service detection performed. Please report any incorrect results at https://nmap.org/submit/ .
Nmap done: 1 IP address (1 host up) scanned in 5.55 seconds
Raw packets sent: 65551 (2.885MB) | Rcvd: 131094 (5.508MB)
zookeeper# telnet localhost 2181
Connected to localhost
zookeeper# telnet zookeeper 2181
telnet: can't connect to remote host (172.19.0.2): Connection refused
BUT, when tried from the host machine, it succeeded -
ghosh.sayak@IS********1:~/Kafka$ telnet localhost 2181
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
^]
telnet> q
Connection closed.
Also, here is the current status of the containers -
ghosh.sayak@IS********1:~/Kafka$ sudo docker-compose ps
[sudo] password for ghosh.sayak:
Name Command State Ports
-------------------------------------------------------------------------------------------------------------------
broker /etc/confluent/docker/run Exit 1
connect /etc/confluent/docker/run Exit 1
control-center /etc/confluent/docker/run Exit 1
schema-registry /etc/confluent/docker/run Exit 1
zookeeper /etc/confluent/docker/run Up 0.0.0.0:2181->2181/tcp,:::2181->2181/tcp, 2888/tcp, 3888/tcp
NOTE: Docker Engine has been freshly installed by following this official documentation.
Stuck with this issue! Any help will be much appreciated!
Thanks in advance!

Symfony 5: Why am I getting this error? SQLSTATE[HY000] [2002] Connection refused

I'm getting this error:
An exception occurred in driver: SQLSTATE[HY000] [2002] Connection refused
I have tried changing the IP address in my .env to localhost but I then got a not found error.
I also tried changing my .env db host to match my docker compose file:
DB_HOST=mysql
docker-compose file:
version: "3.7"
services:
app:
image: kooldev/php:7.4-nginx
ports:
- ${KOOL_APP_PORT:-80}:80
environment:
ASUSER: ${KOOL_ASUSER:-0}
UID: ${UID:-0}
volumes:
- .:/app:delegated
networks:
- kool_local
- kool_global
database:
image: mysql:8.0
command: --default-authentication-plugin=mysql_native_password
ports:
- ${KOOL_DATABASE_PORT:-3306}:3306
I used kool.dev to do the Symfony install; that looks OK and the DB seems to be working as expected:
user@DESKTOP-QSCSABV:/mnt/c/dev/symfony-project$ kool status
+----------+---------+------------------------------------------------------+-------------------------+
| SERVICE  | RUNNING | PORTS                                                | STATE                   |
+----------+---------+------------------------------------------------------+-------------------------+
| app      | Running | 0.0.0.0:80->80/tcp, :::80->80/tcp, 9000/tcp          | Up 15 minutes           |
| database | Running | 0.0.0.0:3306->3306/tcp, :::3306->3306/tcp, 33060/tcp | Up 15 minutes (healthy) |
+----------+---------+------------------------------------------------------+-------------------------+
[done] Fetching services status
in my .env file:
DB_USERNAME=myusername
DB_PASSWORD=mypassword
DB_HOST=127.0.0.1
DB_PORT=3306
DB_DATABASE=mydatabase
DB_VERSION=8.0
DATABASE_URL="mysql://${DB_USERNAME}:${DB_PASSWORD}#${DB_HOST}:${DB_PORT}/${DB_DATABASE}?serverVersion=${DB_VERSION}"
Any suggestions on how to resolve this?
DB_HOST=127.0.0.1
in your environment file should be
DB_HOST=database
127.0.0.1 is the address of the container itself, so in your case the app container tries to make a connection to itself. Docker Compose creates a virtual network where each container can be addressed by its service name, so in your case you want to connect to the database service.
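Putting that together, the .env would presumably look like this (only DB_HOST changes; note the @ separator before the host in DATABASE_URL):
DB_USERNAME=myusername
DB_PASSWORD=mypassword
DB_HOST=database
DB_PORT=3306
DB_DATABASE=mydatabase
DB_VERSION=8.0
DATABASE_URL="mysql://${DB_USERNAME}:${DB_PASSWORD}@${DB_HOST}:${DB_PORT}/${DB_DATABASE}?serverVersion=${DB_VERSION}"
From inside the app container this resolves database via Compose's built-in DNS, while from the host one would still use 127.0.0.1:3306 through the published port.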

Two rabbitmq instances on one server with docker-compose: how to change the default ports

I would like to run two instances of rabbitmq on one server, all created with docker-compose. The question is how to change the default node and management ports. I have tried setting them via ports but it didn't help. When I faced the same scenario with mongo, I used command: mongod --port CUSTOM_PORT. What would be the analogous command here for rabbitmq?
Here is my config for the second instance of rabbitmq.
version: '2'
services:
  rabbitmq:
    image: rabbitmq:3-management-alpine
    container_name: 'rabbitmq_test'
    ports:
      - 5673:5673
      - 15673:15673
    volumes:
      - ./rabbitmq/data/:/var/lib/rabbitmq/
      - ./rabbitmq/log/:/var/log/rabbitmq
    networks:
      - rabbitmq_go_net_test
    environment:
      RABBITMQ_DEFAULT_USER: 'test'
      RABBITMQ_DEFAULT_PASS: 'test'
      HOST_PORT_RABBIT: 5673
      HOST_PORT_RABBIT_MGMT: 15673
networks:
  rabbitmq_go_net_test:
    driver: bridge
And the outcome is below
Management plugin: HTTP (non-TLS) listener started on port 15672
rabbitmq_test | 2021-03-18 11:32:42.553 [info] <0.738.0> Ready to start client connection listeners
rabbitmq_test | 2021-03-18 11:32:42.553 [info] <0.44.0> Application rabbitmq_prometheus started on node rabbit@fb24038613f3
rabbitmq_test | 2021-03-18 11:32:42.557 [info] <0.1035.0> started TCP listener on [::]:5672
We can see that the listeners are still on ports 5672 and 15672 instead of 5673 and 15673.
EDIT
ports:
  - 5673:5672
  - 15673:15672
I have tried the above conf as well, yet with no success:
rabbitmq_test | 2021-03-18 14:08:56.167 [info] <0.797.0> Management plugin: HTTP (non-TLS) listener started on port 15672
rabbitmq_test | 2021-03-18 14:08:56.167 [info] <0.903.0> Statistics database started.
rabbitmq_test | 2021-03-18 14:08:56.167 [info] <0.902.0> Starting worker pool 'management_worker_pool' with 3 processes in it
rabbitmq_test | 2021-03-18 14:08:56.168 [info] <0.44.0> Application rabbitmq_management started on node rabbit@9358e6f4d2a5
rabbitmq_test | 2021-03-18 14:08:56.208 [info] <0.44.0> Application prometheus started on node rabbit@9358e6f4d2a5
rabbitmq_test | 2021-03-18 14:08:56.213 [info] <0.916.0> Prometheus metrics: HTTP (non-TLS) listener started on port 15692
rabbitmq_test | 2021-03-18 14:08:56.213 [info] <0.44.0> Application rabbitmq_prometheus started on node rabbit@9358e6f4d2a5
rabbitmq_test | 2021-03-18 14:08:56.213 [info] <0.738.0> Ready to start client connection listeners
rabbitmq_test | 2021-03-18 14:08:56.216 [info] <0.1035.0> started TCP listener on [::]:5672
I have found the solution: I provided a configuration file to the rabbitmq container.
loopback_users.guest = false
listeners.tcp.default = 5673
default_pass = test
default_user = test
management.tcp.port = 15673
And a working docker-compose file:
version: '2'
services:
  rabbitmq:
    image: rabbitmq:3-management-alpine
    container_name: 'rabbitmq_test'
    ports:
      - 5673:5673
      - 15673:15673
    volumes:
      - ./rabbitmq/data/:/var/lib/rabbitmq/
      - ./rabbitmq/log/:/var/log/rabbitmq
      - ./conf/myrabbit.conf:/etc/rabbitmq/rabbitmq.conf
    networks:
      - rabbitmq_go_net_test
networks:
  rabbitmq_go_net_test:
    driver: bridge
A working example with rabbitmq:3.9.13-management-alpine
docker/rabbitmq/rabbitmq.conf:
loopback_users.guest = false
listeners.tcp.default = 5673
default_pass = guest
default_user = guest
default_vhost = /
docker/rabbitmq/Dockerfile:
FROM rabbitmq:3.9.13-management-alpine
COPY --chown=rabbitmq:rabbitmq rabbitmq.conf /etc/rabbitmq/rabbitmq.conf
EXPOSE 4369 5671 5672 5673 15691 15692 25672 25673
docker-compose.yml:
...
rabbitmq:
  #image: "rabbitmq:3-management-alpine"
  build: './docker/rabbitmq/'
  container_name: my-rabbitmq
  environment:
    RABBITMQ_DEFAULT_VHOST: /
  ports:
    - 5673:5672
    - 15673:15672
  networks:
    - default
...
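As a quick sanity check for either approach, one can list the ports RabbitMQ is actually listening on from inside the container (a sketch; rabbitmq-diagnostics ships with the official images, and rabbitmq_test is the container name from the first compose file above):
docker exec rabbitmq_test rabbitmq-diagnostics listeners
Whatever the listeners report is the container-side truth; the host side is then governed purely by the ports: mappings (e.g. 5673:5673 publishes a custom 5673 listener as-is, while 5673:5672 remaps a default 5672 listener to host port 5673).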

unable to run kibana and logstash with elasticsearch

Elasticsearch is running fine on port 9201, but I am unable to run Kibana and Logstash with docker-compose.
For Logstash it throws the error:
Attempted to resurrect connection to dead ES instance, but got an error.
For Kibana it throws warnings:
"warning","elasticsearch","admin"],"pid":1,"message":"No living connections"
Below is the docker-compose.yml file:
version: '2'
services:
  # Service 1 : elasticsearch
  elasticsearch-5-6:
    image: docker.elastic.co/elasticsearch/elasticsearch:5.6.3
    container_name: elasticsearch-5-6
    ports:
      - "9201:9200"
    volumes:
      - /etc/elasticsearch/elasticsearch-5-6.yml:/usr/share/elasticsearch/config/elasticsearch.yml
      - /var/elasticsearch/data/immunedata-5-6/:/usr/share/elasticsearch/data/
      #- /etc/elasticsearch/logging.yml:/usr/share/elasticsearch/config/logging.yml
      #- /var/log/elasticsearch/:/usr/share/elasticsearch/logs/
    environment:
      - cluster.name=docker-cluster-elasticsearch-5-6
      #- bootstrap.memory_lock=true
      - "ES_JAVA_OPTS: -Xmx2048m -Xms2048m"
      # Disabling the xpack security as it costs after one month of free trial.
      - xpack.security.enabled=false
  # Service 2 : logstash
  logstash-5-6:
    image: docker.elastic.co/logstash/logstash:5.6.3
    container_name: logstash-5-6
    ports:
      #- "5044:5044"
      - "5001:5001"
    volumes:
      - /etc/logstash/logstash.yml:/usr/share/logstash/config/logstash.yml
      - /etc/logstash/pipeline:/usr/share/logstash/pipeline
      #- /etc/logstash/logstash.yml:/usr/share/logstash/config/logstash.yml
      #- /var/logstash/pipeline:/usr/share/logstash/pipeline
    environment:
      - "ES_JAVA_OPTS: -Xmx2048m -Xms2048m"
    depends_on:
      - elasticsearch-5-6
  # Service 3 : kibana
  kibana-5-6:
    image: docker.elastic.co/kibana/kibana:5.6.3
    container_name: kibana-5-6
    ports:
      - "5601:5601"
    volumes:
      - /etc/kibana/kibana.yml:/usr/share/kibana/config/kibana.yml
      #- /var/kibana/immunedata-5-6/:/usr/share/kibana/data/
    environment:
      - xpack.security.enabled=false
      - xpack.graph.enabled=false
      - xpack.ml.enabled=false
      - xpack.monitoring.enabled=false
      - xpack.watcher.enabled=false
      - xpack.reporting.enabled=false
    depends_on:
      - elasticsearch-5-6
  # Service 4 : elasticsearch-head
  elasticsearch-head:
    image: mobz/elasticsearch-head:5
    container_name: elasticsearch-head
    # will not wait for elasticsearch to be ready.
    ports:
      - "9100:9100"
elasticsearch.yml
cluster.name: immunedata-cluster-5.6
node.name: "immunedata-cluster-5-6.node-1"
# Elasticsearch in docker access different data directory, defined mapping directory in docker-compose.yml
#path.data: /var/elasticsearch/data/immunedata-5-6/
path.data: /usr/share/elasticsearch/data/
#path.data: /var/elasticsearch/data
# NOTE : Since elasticsearch 5.x index level settings can NOT be set on the nodes configuration like the elasticsearch.yaml
#index.number_of_shards: 1
#index.number_of_replicas: 0
# Allow all host access
network.bind_host: 0.0.0.0
http.port: 9200
# To enable cross-origin resource sharing (Accessing on browser)
http.cors.enabled: true
http.cors.allow-origin : "*"
logstash.yml file
http.host: "0.0.0.0"
path.config: /usr/share/logstash/pipeline
#xpack.monitoring.elasticsearch.url: http://localhost:9201
##xpack.monitoring.elasticsearch.url: http://elasticsearch:9201
#xpack.monitoring.elasticsearch.username: logstash_system
#xpack.monitoring.elasticsearch.password: changeme
xpack.monitoring.enabled: false
kibana.yml file
server.name: kibana
server.host: "0"
elasticsearch.url: http://192.168.56.10:9201
xpack.monitoring.ui.container.elasticsearch.enabled: false
#elasticsearch.url: http://elasticsearch:9201
xpack.security.enabled: false
## Above I tried this - not working
#elasticsearch.username: elastic
#elasticsearch.password: changeme
#xpack.monitoring.ui.container.elasticsearch.enabled: false
#xpack.monitoring.ui.container.elasticsearch.enabled: true
# Extra:
ssl.verificationMode: false
Logs:
[elasticsearch-5.6.3.jar:5.6.3]
elasticsearch-5-6 | at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_141]
elasticsearch-5-6 | at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_141]
elasticsearch-5-6 | at java.lang.Thread.run(Thread.java:748) [?:1.8.0_141]
elasticsearch-5-6 | [2017-11-26T06:07:57,084][WARN ][o.e.m.j.JvmGcMonitorService] [immunedata-cluster-5-6.node-1] [gc][young][14][6] duration [18.2s], collections [1]/[18.5s], total [18.2s]/[23.5s], memory [178.2mb]->[79.5mb]/[1.9gb], all_pools {[young] [132.1mb]->[964kb]/[133.1mb]}{[survivor] [16.6mb]->[12.5mb]/[16.6mb]}{[old] [29.4mb]->[66.5mb]/[1.8gb]}
elasticsearch-5-6 | [2017-11-26T06:07:57,085][WARN ][o.e.m.j.JvmGcMonitorService] [immunedata-cluster-5-6.node-1] [gc][14] overhead, spent [18.2s] collecting in the last [18.5s]
elasticsearch-5-6 | [2017-11-26T06:07:57,298][ERROR][o.e.x.m.c.i.IndexRecoveryCollector] [immunedata-cluster-5-6.node-1] collector [index-recovery] failed to collect data
elasticsearch-5-6 | org.elasticsearch.cluster.block.ClusterBlockException: blocked by: [SERVICE_UNAVAILABLE/1/state not recovered / initialized];
elasticsearch-5-6 | at org.elasticsearch.cluster.block.ClusterBlocks.globalBlockedException(ClusterBlocks.java:165) ~[elasticsearch-5.6.3.jar:5.6.3]
elasticsearch-5-6 | at org.elasticsearch.action.admin.indices.recovery.TransportRecoveryAction.checkGlobalBlock(TransportRecoveryAction.java:114) ~[elasticsearch-5.6.3.jar:5.6.3]
elasticsearch-5-6 | at org.elasticsearch.action.admin.indices.recovery.TransportRecoveryAction.checkGlobalBlock(TransportRecoveryAction.java:52) ~[elasticsearch-5.6.3.jar:5.6.3]
elasticsearch-5-6 | at org.elasticsearch.action.support.broadcast.node.TransportBroadcastByNodeAction$AsyncAction.<init>(TransportBroadcastByNodeAction.java:256) ~[elasticsearch-5.6.3.jar:5.6.3]
elasticsearch-5-6 | at org.elasticsearch.action.support.broadcast.node.TransportBroadcastByNodeAction.doExecute(TransportBroadcastByNodeAction.java:234) ~[elasticsearch-5.6.3.jar:5.6.3]
elasticsearch-5-6 | at org.elasticsearch.action.support.broadcast.node.TransportBroadcastByNodeAction.doExecute(TransportBroadcastByNodeAction.java:79) ~[elasticsearch-5.6.3.jar:5.6.3]
elasticsearch-5-6 | at org.elasticsearch.action.support.TransportAction$RequestFilterChain.proceed(TransportAction.java:170) ~[elasticsearch-5.6.3.jar:5.6.3]
elasticsearch-5-6 | at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:142) ~[elasticsearch-5.6.3.jar:5.6.3]
elasticsearch-5-6 | at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:84) ~[elasticsearch-5.6.3.jar:5.6.3]
elasticsearch-5-6 | at org.elasticsearch.client.node.NodeClient.executeLocally(NodeClient.java:83) ~[elasticsearch-5.6.3.jar:5.6.3]
elasticsearch-5-6 | at
elasticsearch-5-6 | [2017-11-26T06:08:45,238][WARN ][o.e.x.w.e.ExecutionService] [immunedata-cluster-5-6.node-1] Failed to execute watch [XYNCje-TQzKm9OLdiH60gQ_elasticsearch_cluster_status_60e3c208-acca-4462-ba47-0711279d8f5e-2017-11-26T06:08:35.573Z]
elasticsearch-5-6 | [2017-11-26T06:08:54,886][WARN ][o.e.m.j.JvmGcMonitorService] [immunedata-cluster-5-6.node-1] [gc][young][63][9] duration [3.6s], collections [1]/[4.6s], total [3.6s]/[30.2s], memory [226.9mb]->[103.5mb]/[1.9gb], all_pools {[young] [127.5mb]->[1mb]/[133.1mb]}{[survivor] [16.6mb]->[11.3mb]/[16.6mb]}{[old] [82.7mb]->[91.2mb]/[1.8gb]}
elasticsearch-5-6 | [2017-11-26T06:08:54,886][WARN ][o.e.m.j.JvmGcMonitorService] [immunedata-cluster-5-6.node-1] [gc][63] overhead, spent [3.6s] collecting in the last [4.6s]
logstash-5-6 | Sending Logstash's logs to /usr/share/logstash/logs which is now configured via log4j2.properties
elasticsearch-5-6 | [2017-11-26T06:08:55,988][INFO ][o.e.c.r.a.AllocationService] [immunedata-cluster-5-6.node-1] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[.watcher-history-6-2017.11.20][0], [.monitoring-es-6-2017.11.20][0]] ...]).
logstash-5-6 | [2017-11-26T06:08:56,786][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"fb_apache", :directory=>"/usr/share/logstash/modules/fb_apache/configuration"}
logstash-5-6 | [2017-11-26T06:08:56,891][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"netflow", :directory=>"/usr/share/logstash/modules/netflow/configuration"}
logstash-5-6 | [2017-11-26T06:08:57,558][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"arcsight", :directory=>"/usr/share/logstash/vendor/bundle/jruby/1.9/gems/x-pack-5.6.3-java/modules/arcsight/configuration"}
logstash-5-6 | [2017-11-26T06:09:04,121][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elastic:xxxxxx@elasticsearch-5-6:9201/]}}
logstash-5-6 | [2017-11-26T06:09:04,123][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://elastic:xxxxxx@elasticsearch-5-6:9201/, :path=>"/"}
elasticsearch-5-6 | [2017-11-26T06:09:04,687][WARN ][o.e.c.r.a.DiskThresholdMonitor] [immunedata-cluster-5-6.node-1] high disk watermark [90%] exceeded on [eAlcHBJ2QVG58e0HJsgrdQ][immunedata-cluster-5-6.node-1][/usr/share/elasticsearch/data/nodes/0] free: 1.9gb[7.4%], shards will be relocated away from this node
elasticsearch-5-6 | [2017-11-26T06:09:04,687][INFO ][o.e.c.r.a.DiskThresholdMonitor] [immunedata-cluster-5-6.node-1] rerouting shards: [high disk watermark exceeded on one or more nodes]
logstash-5-6 | [2017-11-26T06:09:06,450][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://elastic:xxxxxx@elasticsearch-5-6:9201/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://elastic:xxxxxx@elasticsearch-5-6:9201/][Manticore::SocketException] Connection refused (Connection refused)"}
logstash-5-6 | [2017-11-26T06:09:06,452][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
logstash-5-6 | [2017-11-26T06:09:06,455][ERROR][logstash.outputs.elasticsearch] Failed to install template. {:message=>"Template file '' could not be found!", :class=>"ArgumentError", :backtrace=>["/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-7.4.2-java/lib/logstash/outputs/elasticsearch/template_manager.rb:37:in `read_template_file'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-7.4.2-java/lib/logstash/outputs/elasticsearch/template_manager.rb:23:in `get_template'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-7.4.2-java/lib/logstash/outputs/elasticsearch/template_manager.rb:7:in `install_template'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-7.4.2-java/lib/logstash/outputs/elasticsearch/common.rb:58:in `install_template'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-7.4.2-java/lib/logstash/outputs/elasticsearch/common.rb:25:in `register'", "/usr/share/logstash/logstash-core/lib/logstash/output_delegator_strategies/shared.rb:9:in `register'", "/usr/share/logstash/logstash-core/lib/logstash/output_delegator.rb:43:in `register'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:290:in `register_plugin'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:301:in `register_plugins'", "org/jruby/RubyArray.java:1613:in `each'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:301:in `register_plugins'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:310:in `start_workers'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:235:in `run'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:398:in `start_pipeline'"]}
logstash-5-6 | [2017-11-26T06:09:06,455][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//elasticsearch-5-6:9201"]}
logstash-5-6 | [2017-11-26T06:09:06,462][INFO ][logstash.pipeline ] Starting pipeline {"id"=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>250}
logstash-5-6 | [2017-11-26T06:09:09,818][INFO ][logstash.pipeline ] Pipeline main started
logstash-5-6 | [2017-11-26T06:09:10,341][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
logstash-5-6 | [2017-11-26T06:09:11,460][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://elastic:xxxxxx@elasticsearch-5-6:9201/, :path=>"/"}
logstash-5-6 | [2017-11-26T06:09:11,484][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://elastic:xxxxxx@elasticsearch-5-6:9201/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://elastic:xxxxxx@elasticsearch-5-6:9201/][Manticore::SocketException] Connection refused (Connection refused)"}
logstash-5-6 | [2017-11-26T06:09:16,491][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://elastic:xxxxxx@elasticsearch-5-6:9201/, :path=>"/"}
logstash-5-6 | [2017-11-26T06:09:16,500][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://elastic:xxxxxx@elasticsearch-5-6:9201/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://elastic:xxxxxx@elasticsearch-5-6:9201/][Manticore::SocketException] Connection refused (Connection refused)"}
kibana-5-6 | {"type":"log","@timestamp":"2017-11-26T06:09:21Z","tags":["warning","elasticsearch","config","deprecation"],"pid":1,"message":"Config key \"ssl.verify\" is deprecated. It has been replaced with \"ssl.verificationMode\""}
logstash-5-6 | [2017-11-26T06:09:21,513][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://elastic:xxxxxx@elasticsearch-5-6:9201/, :path=>"/"}
logstash-5-6 | [2017-11-26T06:09:21,523][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://elastic:xxxxxx@elasticsearch-5-6:9201/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://elastic:xxxxxx@elasticsearch-5-6:9201/][Manticore::SocketException] Connection refused (Connection refused)"}
kibana-5-6 | {"type":"log","@timestamp":"2017-11-26T06:09:26Z","tags":["status","plugin:kibana@5.6.3","info"],"pid":1,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
logstash-5-6 | [2017-11-26T06:09:26,536][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://elastic:xxxxxx@elasticsearch-5-6:9201/, :path=>"/"}
logstash-5-6 | [2017-11-26T06:09:26,570][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://elastic:xxxxxx@elasticsearch-5-6:9201/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://elastic:xxxxxx@elasticsearch-5-6:9201/][Manticore::SocketException] Connection refused (Connection refused)"}
kibana-5-6 | {"type":"log","@timestamp":"2017-11-26T06:09:26Z","tags":["status","plugin:elasticsearch@5.6.3","info"],"pid":1,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
kibana-5-6 | {"type":"log","@timestamp":"2017-11-26T06:09:26Z","tags":["status","plugin:xpack_main@5.6.3","info"],"pid":1,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
kibana-5-6 | {"type":"log","@timestamp":"2017-11-26T06:09:29Z","tags":["status","plugin:graph@5.6.3","info"],"pid":1,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
kibana-5-6 | {"type":"log","@timestamp":"2017-11-26T06:09:29Z","tags":["error","elasticsearch","admin"],"pid":1,"message":"Request error, retrying\nHEAD http://elasticsearch-5-6:9201/ => connect ECONNREFUSED 172.21.0.2:9201"}
kibana-5-6 | {"type":"log","@timestamp":"2017-11-26T06:09:29Z","tags":["status","plugin:monitoring@5.6.3","info"],"pid":1,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
kibana-5-6 | {"type":"log","@timestamp":"2017-11-26T06:09:29Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"Unable to revive connection: http://elasticsearch-5-6:9201/"}
kibana-5-6 | {"type":"log","@timestamp":"2017-11-26T06:09:29Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"No living connections"}
logstash-5-6 | [2017-11-26T06:09:31,585][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://elastic:xxxxxx@elasticsearch-5-6:9201/, :path=>"/"}
logstash-5-6 | [2017-11-26T06:09:31,603][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://elastic:xxxxxx@elasticsearch-5-6:9201/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://elastic:xxxxxx@elasticsearch-5-6:9201/][Manticore::SocketException] Connection refused (Connection refused)"}
kibana-5-6 | {"type":"log","@timestamp":"2017-11-26T06:09:33Z","tags":["reporting","warning"],"pid":1,"message":"Generating a random key for xpack.reporting.encryptionKey. To prevent pending reports from failing on restart, please set xpack.reporting.encryptionKey in kibana.yml"}
kibana-5-6 | {"type":"log","@timestamp":"2017-11-26T06:09:33Z","tags":["status","plugin:reporting@5.6.3","info"],"pid":1,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
kibana-5-6 | {"type":"log","@timestamp":"2017-11-26T06:09:33Z","tags":["status","plugin:xpack_main@5.6.3","error"],"pid":1,"state":"red","message":"Status changed from yellow to red - Unable to connect to Elasticsearch at http://elasticsearch-5-6:9201.","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
kibana-5-6 | {"type":"log","@timestamp":"2017-11-26T06:09:33Z","tags":["status","plugin:graph@5.6.3","error"],"pid":1,"state":"red","message":"Status changed from yellow to red - Unable to connect to Elasticsearch at http://elasticsearch-5-6:9201.","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
kibana-5-6 | {"type":"log","@timestamp":"2017-11-26T06:09:33Z","tags":["status","plugin:reporting@5.6.3","error"],"pid":1,"state":"red","message":"Status changed from yellow to red - Unable to connect to Elasticsearch at http://elasticsearch-5-6:9201.","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
kibana-5-6 | {"type":"log","@timestamp":"2017-11-26T06:09:33Z","tags":["status","plugin:elasticsearch@5.6.3","error"],"pid":1,"state":"red","message":"Status changed from yellow to red - Unable to connect to Elasticsearch at http://elasticsearch-5-6:9201.","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
kibana-5-6 | {"type":"log","@timestamp":"2017-11-26T06:09:33Z","tags":["status","plugin:searchprofiler@5.6.3","error"],"pid":1,"state":"red","message":"Status changed from uninitialized to red - Unable to connect to Elasticsearch at http://elasticsearch-5-6:9201.","prevState":"uninitialized","prevMsg":"uninitialized"}
kibana-5-6 | {"type":"log","@timestamp":"2017-11-26T06:09:34Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"Unable to revive connection: http://elasticsearch-5-6:9201/"}
kibana-5-6 | {"type":"log","@timestamp":"2017-11-26T06:09:34Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"No living connections"}
kibana-5-6 | {"type":"log","@timestamp":"2017-11-26T06:09:34Z","tags":["status","plugin:ml@5.6.3","error"],"pid":1,"state":"red","message":"Status changed from uninitialized to red - Unable to connect to Elasticsearch at http://elasticsearch-5-6:9201.","prevState":"uninitialized","prevMsg":"uninitialized"}
elasticsearch-5-6 | [2017-11-26T06:09:34,750][WARN ][o.e.c.r.a.DiskThresholdMonitor] [immunedata-cluster-5-6.node-1] high disk watermark [90%] exceeded on [eAlcHBJ2QVG58e0HJsgrdQ][immunedata-cluster-5-6.node-1][/usr/share/elasticsearch/data/nodes/0] free: 1.9gb[7.4%], shards will be relocated away from this node
logstash-5-6 | [2017-11-26T06:09:36,692][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://elastic:xxxxxx@elasticsearch-5-6:9201/, :path=>"/"}
kibana-5-6 | {"type":"log","@timestamp":"2017-11-26T06:09:37Z","tags":["status","plugin:ml@5.6.3","info"],"pid":1,"state":"yellow","message":"Status changed from red to yellow - Waiting for Elasticsearch","prevState":"red","prevMsg":"Unable to connect to Elasticsearch at http://elasticsearch-5-6:9201."}
logstash-5-6 | [2017-11-26T06:09:37,366][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://elastic:xxxxxx@elasticsearch-5-6:9201/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://elastic:xxxxxx@elasticsearch-5-6:9201/][Manticore::SocketException] Connection refused (Connection refused)"}
kibana-5-6 | {"type":"log","@timestamp":"2017-11-26T06:09:37Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"Unable to revive connection: http://elasticsearch-5-6:9201/"}
kibana-5-6 | {"type":"log","@timestamp":"2017-11-26T06:09:37Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"No living connections"}
kibana-5-6 | {"type":"log","@timestamp":"2017-11-26T06:09:37Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"Unable to revive connection: http://elasticsearch-5-6:9201/"}
kibana-5-6 | {"type":"log","@timestamp":"2017-11-26T06:09:37Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"No living connections"}
kibana-5-6 | {"type":"log","@timestamp":"2017-11-26T06:09:37Z","tags":
You called the elasticsearch service elasticsearch-5-6 in your docker-compose.yml. That means the Elasticsearch container is available at http://elasticsearch-5-6:9200 for all other containers in your docker-compose.yml, and at http://127.0.0.1:9201 from the host machine.
In order to have a workable ELK stack, you need to change the logstash config to:
http.host: "0.0.0.0"
path.config: /usr/share/logstash/pipeline
xpack.monitoring.elasticsearch.url: http://elasticsearch-5-6:9200
#xpack.monitoring.elasticsearch.username: logstash_system
#xpack.monitoring.elasticsearch.password: changeme
xpack.monitoring.enabled: false
and kibana config to:
server.name: kibana
server.host: "0"
elasticsearch.url: http://elasticsearch-5-6:9200
xpack.monitoring.ui.container.elasticsearch.enabled: false
xpack.security.enabled: false
## Above I tried this - not working
#elasticsearch.username: elastic
#elasticsearch.password: changeme
#xpack.monitoring.ui.container.elasticsearch.enabled: false
#xpack.monitoring.ui.container.elasticsearch.enabled: true
# Extra:
ssl.verificationMode: false
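To see both addressing rules from this answer in action, one can probe each path (a sketch; the compose default network name depends on the project directory, so <project>_default below is a placeholder):
# From the host: the published port
curl http://127.0.0.1:9201
# From a container on the compose network: service name plus container port
docker run --rm --network <project>_default curlimages/curl -s http://elasticsearch-5-6:9200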
ELK Cluster with Xpack disabled
You are missing the ELASTICSEARCH_URL: "http://elasticsearch:9200" in Kibana and xpack.monitoring.elasticsearch.url: http://elasticsearch:9200 in Logstash.
Here is a sample yml configuration with all possible environment variables defined in environment:
version: '3.4'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.6.0
    container_name: elasticsearch
    environment:
      ES_JAVA_OPTS: '-Xms2048m -Xmx2048m'
      cluster.name: es-cluster
      node.name: es1
      network.bind_host: 0.0.0.0
      discovery.zen.minimum_master_nodes: 1
      discovery.zen.ping.unicast.hosts: elasticsearch1
      xpack.security.enabled: 'false'
      xpack.monitoring.enabled: 'false'
      xpack.watcher.enabled: 'false'
      xpack.ml.enabled: 'false'
      http.cors.enabled: 'true'
      http.cors.allow-origin: "*"
      http.cors.allow-methods: OPTIONS, HEAD, GET, POST, PUT, DELETE
      http.cors.allow-headers: X-Requested-With,X-Auth-Token,Content-Type, Content-Length
      logger.level: debug
    volumes:
      - /var/elasticsearch/db/elasticsearch/data:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
      - 9300:9300
    networks:
      - elastic
  logstash:
    image: docker.elastic.co/logstash/logstash:6.6.0
    container_name: logstash
    ports:
      - 5044:5044
      - 5001:5001
    volumes:
      - /var/elasticsearch/logstash/pipeline:/usr/share/logstash/pipeline
    environment:
      ES_JAVA_OPTS: "-Xmx2048m -Xms2048m"
      http.host: 0.0.0.0
      xpack.monitoring.enabled: 'false'
      xpack.monitoring.elasticsearch.url: http://elasticsearch:9200
    networks:
      - elastic
    depends_on:
      - elasticsearch
  kibana:
    image: docker.elastic.co/kibana/kibana:6.6.0
    container_name: kibana
    environment:
      ELASTICSEARCH_URL: "http://elasticsearch:9200"
      xpack.security.enabled: 'false'
      xpack.graph.enabled: 'false'
      xpack.ml.enabled: 'false'
      xpack.monitoring.enabled: 'false'
      xpack.watcher.enabled: 'false'
      xpack.reporting.enabled: 'false'
    ports:
      - 5601:5601
    networks:
      - elastic
    depends_on:
      - elasticsearch
  elasticsearch-head:
    image: mobz/elasticsearch-head:5
    container_name: elasticsearch-head
    ports:
      - "9100:9100"
    networks:
      - elastic
networks:
  elastic:
    driver: bridge
