I've set up a ZooKeeper ensemble (version 3.4.9) with 3 instances. This works like a charm on the test system, but doesn't come up on the live system at all. The error message is the following:
2020-08-28 06:26:24,643 [myid:1] - WARN [WorkerSender[myid=1]:QuorumCnxManager@400] - Cannot open channel to 2 at election address /10.3.1.173:3888
java.net.NoRouteToHostException: Host is unreachable (Host unreachable)
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:589)
at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:381)
at org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:354)
at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:452)
at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:433)
at java.lang.Thread.run(Thread.java:745)
I've searched on here and in other places, but the only accepted solution to the problem is to set each node's own server address to 0.0.0.0, which doesn't work here. My setup is fully dockerized and rolled out with Ansible, so it might look a bit different from what people normally seem to do. But the connection string, e.g. for server.1, is this:
"server.1=0.0.0.0:2888:3888 server.2=10.3.1.173:2888:3888 server.3=10.3.1.175:2888:3888"
which is also applied to ZooKeeper's internal configuration, as the logs show (again for server.1):
ZooKeeper JMX enabled by default
Using config: /conf/zoo.cfg
2020-08-28 06:26:23,549 [myid:] - INFO [main:QuorumPeerConfig@124] - Reading configuration from: /conf/zoo.cfg
2020-08-28 06:26:23,559 [myid:] - INFO [main:QuorumPeer$QuorumServer@149] - Resolved hostname: 10.3.1.175 to address: /10.3.1.175
2020-08-28 06:26:23,559 [myid:] - INFO [main:QuorumPeer$QuorumServer@149] - Resolved hostname: 10.3.1.173 to address: /10.3.1.173
2020-08-28 06:26:23,560 [myid:] - INFO [main:QuorumPeer$QuorumServer@149] - Resolved hostname: 0.0.0.0 to address: /0.0.0.0
2020-08-28 06:26:23,560 [myid:] - INFO [main:QuorumPeerConfig@352] - Defaulting to majority quorums
(...)
2020-08-28 06:26:23,570 [myid:1] - INFO [main:QuorumPeerMain@127] - Starting quorum peer
2020-08-28 06:26:23,577 [myid:1] - INFO [main:Login@294] - successfully logged in.
2020-08-28 06:26:23,579 [myid:1] - INFO [main:NIOServerCnxnFactory@89] - binding to port 0.0.0.0/0.0.0.0:2181
This is applied to all 3 instances of ZooKeeper, but none of them can talk to the others.
Additional information:
Apart from the IP addresses of the servers, the configuration is identical to the test system. The Ansible Docker module is configured the same, the JAAS config (with DigestLoginModule) is the same, and the environment variables inside all of the Docker containers are the same, too.
Each server inside the live system can ping the other servers. I can also ping these servers from inside each ZooKeeper container. In addition, I can curl each ZooKeeper container on the JMX port from inside any other container of the live system. So they can definitely connect over the network.
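For completeness: ping and the curl on the JMX port don't exercise the quorum ports (2888/3888) themselves, and a firewall REJECT rule (e.g. firewalld's icmp-host-prohibited) produces exactly this NoRouteToHostException even while ping succeeds. A direct check from inside one of the containers would look something like this (assuming nc is available in the image):

# from inside the zoo1 container: test the peer (2888) and election (3888)
# ports of the other ensemble members
nc -zv -w 5 10.3.1.173 2888
nc -zv -w 5 10.3.1.173 3888
nc -zv -w 5 10.3.1.175 2888
nc -zv -w 5 10.3.1.175 3888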
Please help, thanks :D
Edit: @Stefano asked how I start the Docker containers, so I'll try to provide some insight. As mentioned, it's an Ansible setup: a task using the docker_container module, which is used in a playbook to install the 3 instances across machines:
---
- name: Install Zookeeper
  docker_container:
    name: zookeeper
    image: zookeeper:3.4.9
    state: started
    ports:
      - "2181:2181" # Zookeeper Port
      - "2888:2888"
      - "3888:3888" # Election ports
      - "9998:8080" # JMX metrics
    env:
      ZOO_MY_ID: "{{ ID }}" # this is 1 for server.1, etc.
      ZOO_PORT: "2181"
      ZOO_SERVERS: "{{ ZOO_SERVERS }}" # provided in host-vars
      SERVER_JVMFLAGS: "-Djava.security.auth.login.config=/etc/kafka/zookeeper_jaas.conf -javaagent:/opt/jmx-exporter/jmx_prometheus_javaagent-0.12.0.jar=8080:/opt/jmx-exporter/zookeeper.yml"
    volumes:
      - /home/ansible/volumes/zoo1/data:/data
      - /home/ansible/volumes/zoo1/datalog:/datalog
      - /home/ansible/jmx-exporter:/opt/jmx-exporter
      - /home/ansible/zookeeper_jaas.conf:/etc/kafka/zookeeper_jaas.conf
The ZOO_SERVERS values are taken from the hosts file:
all:
  (...)
  children:
    zookeeper:
      hosts:
        zoo1:
          ID: "1"
          ZOO_SERVERS: "server.1=0.0.0.0:2888:3888 server.2=10.3.1.173:2888:3888 server.3=10.3.1.175:2888:3888"
          ansible_host: 10.3.1.171
        zoo2:
          ID: "2"
          ZOO_SERVERS: "server.1=10.3.1.171:2888:3888 server.2=0.0.0.0:2888:3888 server.3=10.3.1.175:2888:3888"
          ansible_host: 10.3.1.173
        zoo3:
          ID: "3"
          ZOO_SERVERS: "server.1=10.3.1.171:2888:3888 server.2=10.3.1.173:2888:3888 server.3=0.0.0.0:2888:3888"
          ansible_host: 10.3.1.175
So when I read back what I commented above, I noticed that I am not actually using the confluentinc/cp-zookeeper Docker image, but the zookeeper Docker image.
Once I changed from zookeeper:3.4.9 to confluentinc/cp-zookeeper:5.4.0 and renamed the ZOO_PORT env var to ZOOKEEPER_CLIENT_PORT, it somehow worked.
This doesn't answer the "why", but maybe this workaround helps someone else. I'll mark this as the accepted answer for now, but please feel free to provide additional insight.
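For anyone going the same route: if you adopt the Confluent image's variable names fully, the env block of the Ansible task above would look roughly like this (names taken from the cp-zookeeper documentation; a sketch, not my exact setup):

env:
  ZOOKEEPER_SERVER_ID: "{{ ID }}"
  ZOOKEEPER_CLIENT_PORT: "2181"
  # cp-zookeeper expects semicolon-separated server entries, e.g.
  # "10.3.1.171:2888:3888;10.3.1.173:2888:3888;10.3.1.175:2888:3888"
  ZOOKEEPER_SERVERS: "{{ ZOO_SERVERS }}"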
I have been trying to teach myself how to deploy ELK on Docker on my local machine. The following problem has been occurring for a week now and I have not been able to find a solution online.
I run "docker deploy -c docker-compose.yml elk_stack" on the following configuration.
The problem I am facing is: after the logstash container is created, the logs show that the pipeline configuration was correctly picked up and the data flows through to the elasticsearch container. Then, once all the data has been moved, the logstash container destroys itself and a new container is created, which follows the same steps as the last one.
Why is this the case?
The following is my docker-compose.yml
version: "3"
networks:
elk_net:
services:
db:
image: mariadb:latest
command: --default-authentication-plugin=mysql_native_password
environment:
MYSQL_ROOT_PASSWORD: root
ports:
- 3306:3306
volumes:
- mysqldata:/var/lib/mysql
deploy:
placement:
constraints: [node.role == manager]
networks:
- elk_net
depends_on:
- elk_net
- mysqldata
adminer:
image: adminer
ports:
- "8080:8080"
deploy:
placement:
constraints: [node.role == manager]
networks:
- elk_net
depends_on:
- elk_net
elasticsearch:
image: docker.elastic.co/elasticsearch/elasticsearch:7.3.1
environment:
discovery.type: single-node
ports:
- 9200:9200
- 9300:9300
volumes:
- esdata01:/usr/share/elasticsearch/data
networks:
- elk_net
depends_on:
- elk_net
logstash:
image: logstash:custom
stdin_open: true
tty: true
volumes:
- ./dependency:/usr/local/dependency/
- ./logstash/pipeline/mysql:/usr/share/logstash/pipeline/
networks:
- elk_net
depends_on:
- elk_net
kibana:
image: docker.elastic.co/kibana/kibana:7.3.1
ports:
- 5601:5601
networks:
- elk_net
depends_on:
- elk_net
volumes:
esdata01:
driver: local
mysqldata:
driver: local
Here is my logstash conf
input {
  jdbc {
    jdbc_connection_string => "jdbc:mysql://db:3306/sonar_data"
    jdbc_user => "root"
    jdbc_password => "root"
    jdbc_driver_library => ""
    jdbc_driver_class => "com.mysql.cj.jdbc.Driver"
    jdbc_paging_enabled => true
    tracking_column => "accounting_entry_id"
    tracking_column_type => "numeric"
    use_column_value => true
    statement => "SELECT * FROM call_detail_record WHERE accounting_entry_id > :sql_last_value ORDER BY accounting_entry_id ASC"
  }
}
output {
  stdout { codec => json_lines }
  elasticsearch {
    hosts => ["elasticsearch:9200"]
    index => "cdr_data"
  }
}
Sample of the docker logs:
ravi@ravi-VirtualBox:~/Documents/git_personal/cdr-data-visualizer-elk$ sudo docker logs 2c89502d48b3 -f
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by com.headius.backport9.modules.Modules (file:/usr/share/logstash/logstash-core/lib/jars/jruby-complete-9.2.7.0.jar) to field java.io.FileDescriptor.fd
WARNING: Please consider reporting this to the maintainers of com.headius.backport9.modules.Modules
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
Thread.exclusive is deprecated, use Thread::Mutex
Sending Logstash logs to /usr/share/logstash/logs which is now configured via log4j2.properties
[2019-09-17T08:06:56,317][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.queue", :path=>"/usr/share/logstash/data/queue"}
[2019-09-17T08:06:56,339][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.dead_letter_queue", :path=>"/usr/share/logstash/data/dead_letter_queue"}
[2019-09-17T08:06:56,968][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"7.3.1"}
[2019-09-17T08:06:57,002][INFO ][logstash.agent ] No persistent UUID file found. Generating new UUID {:uuid=>"7a2b2d2a-157e-42c3-bcde-a14dc773750f", :path=>"/usr/share/logstash/data/uuid"}
[2019-09-17T08:06:57,795][WARN ][logstash.monitoringextension.pipelineregisterhook] xpack.monitoring.enabled has not been defined, but found elasticsearch configuration. Please explicitly set `xpack.monitoring.enabled: true` in logstash.yml
[2019-09-17T08:06:59,033][INFO ][logstash.licensechecker.licensereader] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elasticsearch:9200/]}}
[2019-09-17T08:06:59,316][WARN ][logstash.licensechecker.licensereader] Restored connection to ES instance {:url=>"http://elasticsearch:9200/"}
[2019-09-17T08:06:59,391][INFO ][logstash.licensechecker.licensereader] ES Output version determined {:es_version=>7}
[2019-09-17T08:06:59,393][WARN ][logstash.licensechecker.licensereader] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2019-09-17T08:06:59,720][INFO ][logstash.monitoring.internalpipelinesource] Monitoring License OK
[2019-09-17T08:06:59,725][INFO ][logstash.monitoring.internalpipelinesource] Validated license for monitoring. Enabling monitoring pipeline.
[2019-09-17T08:07:01,244][INFO ][org.reflections.Reflections] Reflections took 59 ms to scan 1 urls, producing 19 keys and 39 values
[2019-09-17T08:07:01,818][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elasticsearch:9200/]}}
[2019-09-17T08:07:01,842][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://elasticsearch:9200/"}
[2019-09-17T08:07:01,860][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>7}
[2019-09-17T08:07:01,868][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2019-09-17T08:07:01,930][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//elasticsearch:9200"]}
[2019-09-17T08:07:02,138][INFO ][logstash.outputs.elasticsearch] Using default mapping template
[2019-09-17T08:07:02,328][WARN ][org.logstash.instrument.metrics.gauge.LazyDelegatingGauge] A gauge metric of an unknown type (org.jruby.specialized.RubyArrayOneObject) has been create for key: cluster_uuids. This may result in invalid serialization. It is recommended to log an issue to the responsible developer/development team.
[2019-09-17T08:07:02,332][INFO ][logstash.javapipeline ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>1, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>125, :thread=>"#<Thread:0x2228b784 run>"}
[2019-09-17T08:07:02,439][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"index_patterns"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s", "number_of_shards"=>1}, "mappings"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}
[2019-09-17T08:07:02,947][INFO ][logstash.javapipeline ] Pipeline started {"pipeline.id"=>"main"}
[2019-09-17T08:07:03,178][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2019-09-17T08:07:04,327][WARN ][logstash.outputs.elasticsearch] You are using a deprecated config setting "document_type" set in elasticsearch. Deprecated settings will continue to work, but are scheduled for removal from logstash in the future. Document types are being deprecated in Elasticsearch 6.0, and removed entirely in 7.0. You should avoid this feature. If you have any questions about this, please visit the #logstash channel on freenode irc. {:name=>"document_type", :plugin=><LogStash::Outputs::ElasticSearch bulk_path=>"/_monitoring/bulk?system_id=logstash&system_api_version=7&interval=1s", hosts=>[http://elasticsearch:9200], sniffing=>false, manage_template=>false, id=>"7d7dfa0f023f65240aeb31ebb353da5a42dc782979a2bd7e26e28b7cbd509bb3", document_type=>"%{[@metadata][document_type]}", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>"plain_151a6660-4b00-4b2c-8a78-3d93f5161cbe", enable_metric=>true, charset=>"UTF-8">, workers=>1, template_name=>"logstash", template_overwrite=>false, doc_as_upsert=>false, script_type=>"inline", script_lang=>"painless", script_var_name=>"event", scripted_upsert=>false, retry_initial_interval=>2, retry_max_interval=>64, retry_on_conflict=>1, ilm_enabled=>"auto", ilm_rollover_alias=>"logstash", ilm_pattern=>"{now/d}-000001", ilm_policy=>"logstash-policy", action=>"index", ssl_certificate_verification=>true, sniffing_delay=>5, timeout=>60, pool_max=>1000, pool_max_per_route=>100, resurrect_delay=>5, validate_after_inactivity=>10000, http_compression=>false>}
[2019-09-17T08:07:04,499][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elasticsearch:9200/]}}
[2019-09-17T08:07:04,529][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://elasticsearch:9200/"}
[2019-09-17T08:07:04,550][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>7}
[2019-09-17T08:07:04,560][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2019-09-17T08:07:04,596][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["http://elasticsearch:9200"]}
[2019-09-17T08:07:04,637][INFO ][logstash.javapipeline ] Starting pipeline {:pipeline_id=>".monitoring-logstash", "pipeline.workers"=>1, "pipeline.batch.size"=>2, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>2, :thread=>"#<Thread:0x736c74cd run>"}
Loading class `com.mysql.jdbc.Driver'. This is deprecated. The new driver class is `com.mysql.cj.jdbc.Driver'. The driver is automatically registered via the SPI and manual loading of the driver class is generally unnecessary.
[2019-09-17T08:07:04,892][INFO ][logstash.javapipeline ] Pipeline started {"pipeline.id"=>".monitoring-logstash"}
[2019-09-17T08:07:04,920][INFO ][logstash.agent ] Pipelines running {:count=>2, :running_pipelines=>[:".monitoring-logstash", :main], :non_running_pipelines=>[]}
[2019-09-17T08:07:05,660][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2019-09-17T08:07:06,850][INFO ][logstash.inputs.jdbc ] (0.029802s) SELECT version()
[2019-09-17T08:07:07,038][INFO ][logstash.inputs.jdbc ] (0.007399s) SELECT version()
[2019-09-17T08:07:07,393][INFO ][logstash.inputs.jdbc ] (0.003612s) SELECT count(*) AS `count` FROM (SELECT * FROM call_detail_record WHERE accounting_entry_id > 0 ORDER BY accounting_entry_id ASC) AS `t1` LIMIT 1
[2019-09-17T08:07:07,545][INFO ][logstash.inputs.jdbc ] (0.041288s) SELECT * FROM (SELECT * FROM call_detail_record WHERE accounting_entry_id > 0 ORDER BY accounting_entry_id ASC) AS `t1` LIMIT 100000 OFFSET 0
************ A LOT OF RECORDS ARE PUSHED TO ELASTICSEARCH FROM MYSQL SUCCESSFULLY ************
(the line above repeats many times)
....
[2019-09-17T08:07:13,148][INFO ][logstash.javapipeline ] Pipeline terminated {"pipeline.id"=>".monitoring-logstash"}
[2019-09-17T08:07:13,633][INFO ][logstash.runner ] Logstash shut down.
ravi@ravi-VirtualBox:~/Documents/git_personal/cdr-data-visualizer-elk$
This was an annoying issue, but I found the answer after a lot of trial and error.
My issue was that I did not have a schedule cron expression configured in my Logstash pipeline config.
Adding the following line to the jdbc input did the trick.
schedule => "*/10 * * * *"
This post helped me out: Logstash not reading in new entries from MySQL
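This likely also explains the restart loop: without a schedule, the jdbc input runs its statement once, the pipeline then finishes, Logstash exits (the "Logstash shut down." line in the logs above), and swarm's restart policy creates a fresh container that repeats the whole import. With a schedule, the input stays alive and polls instead. A sketch of the input from above with only the schedule line added:

input {
  jdbc {
    # ... all jdbc settings exactly as in the config above ...
    schedule => "*/10 * * * *" # run the statement every 10 minutes
  }
}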
I am trying to configure a docker-compose.yml file (I am aware version and services are not stated; they are part of the file) to run a Neo4j instance. I am using Docker swarm and deploying a stack, i.e. I used the following commands:
docker swarm init
docker stack deploy -c docker-compose.yml neo
note_db:
  image: neo4j:latest
  environment:
    - NEO4J_AUTH=<username>/<password>
    - NEO4J_dbms_mode=CORE
    - NEO4J_ACCEPT_LICENSE_AGREEMENT=yes
    - NEO4J_dbms_connector_http_listen__address=:7474
    - NEO4J_dbms_connector_https_listen__address=:6477
    - NEO4J_dbms_connector_bolt_listen__address=:7687
  ports:
    - "7474:7474"
    - "6477:6477"
    - "7687:7687"
  volumes:
    - type: bind
      source: ~/neo4j/data
      target: /data
    - type: bind
      source: ~/neo4j/logs
      target: /logs
  deploy:
    replicas: 1
    resources:
      limits:
        cpus: "0.1"
        memory: 120M
    restart_policy:
      condition: on-failure
I have omitted the username and password. I am currently only trying to spin up one instance, as I am still testing. I have tried removing NEO4J_AUTH entirely as well as setting NEO4J_AUTH=none, with the same outcome.
The logs provide the following:
org.neo4j.commandline.admin.CommandFailed: initial password was not set because live Neo4j-users were detected.
at org.neo4j.commandline.admin.security.SetInitialPasswordCommand.setPassword(SetInitialPasswordCommand.java:83)
command failed: initial password was not set because live Neo4j-users were detected.
Starting Neo4j.
2018-09-17 16:12:39.396+0000 INFO ======== Neo4j 3.4.7 ========
2018-09-17 16:12:41.990+0000 INFO Starting...
2018-09-17 16:12:43.792+0000 ERROR Failed to start Neo4j: Starting Neo4j failed: Component 'org.neo4j.server.database.LifecycleManagingDatabase@70b0b186' was successfully initialized, but failed to start. Please see the attached cause exception "/logs/debug.log (Permission denied)".
In the debug.log file, the only thing I found is:
[o.n.b.s.a.BasicAuthentication] Failed authentication attempt for 'neo4j' (no other failures, errors or warnings).
Clearly, I have some sort of auth issue, but I am not sure where the error lies and how to address it. I have attempted NEO4J_AUTH=none and removing the env completely; it still does not work.
Someone has posted something along the lines of this issue, but it hasn't received any responses. I am hoping mine does.
Answer from user logisima:
You don't have any issue with auth; it's a permission issue: cause exception "/logs/debug.log (Permission denied)"
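A plausible follow-up, assuming the bind-mounted host directories are not writable by the user the Neo4j process runs as inside the container (the uid/gid below is whatever the check reports, not a known constant):

# check which uid/gid the neo4j process uses inside the container
docker exec <container-id> id
# then make the bind-mounted host directories writable for that uid/gid
sudo chown -R <uid>:<gid> ~/neo4j/data ~/neo4j/logs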
I'm trying to use Jetty's server push feature with HAProxy. I've set up Jetty 9.4.7 with PushCacheFilter and HAProxy in two Docker containers.
I think Jetty tries to push something, but no PUSH_PROMISE frames are delivered to the client (I've checked Chrome's net-internals tab).
I'm not sure if this is an issue with Jetty (maybe with h2c)!
Here's my HAProxy config (taken from Jetty's documentation):
global
    tune.ssl.default-dh-param 1024

defaults
    timeout connect 10000ms
    timeout client 60000ms
    timeout server 60000ms

frontend fe_http
    mode http
    bind *:80
    # Redirect to https
    redirect scheme https code 301

frontend fe_https
    mode tcp
    bind *:443 ssl no-sslv3 crt /usr/local/etc/domain.pem ciphers TLSv1.2 alpn h2,http/1.1
    default_backend be_http

backend be_http
    mode tcp
    server domain basexhttp:8984
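One way to double-check the front half of this chain is to confirm that ALPN actually negotiates h2 through HAProxy (assuming a curl build with HTTP/2 support):

# -k because the cert may be self-signed; look for the ALPN / HTTP/2 lines
curl -vk --http2 https://<your-domain>/ 2>&1 | grep -iE "alpn|http/2"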
And here's how Jetty starts:
[main] INFO org.eclipse.jetty.util.log - Logging initialized @377ms to org.eclipse.jetty.util.log.Slf4jLog
BaseX 9.0 beta 5cc42ae [HTTP Server]
[main] INFO org.eclipse.jetty.server.Server - jetty-9.4.7.v20170914
[main] INFO org.eclipse.jetty.webapp.StandardDescriptorProcessor - NO JSP Support for /, did not find org.eclipse.jetty.jsp.JettyJspServlet
[main] INFO org.eclipse.jetty.server.session - DefaultSessionIdManager workerName=node0
[main] INFO org.eclipse.jetty.server.session - No SessionScavenger set, using defaults
[main] INFO org.eclipse.jetty.server.session - Scavenging every 600000ms
[main] INFO org.eclipse.jetty.server.handler.ContextHandler - Started o.e.j.w.WebAppContext@7dc222ae{/,file:///opt/basex/webapp/,AVAILABLE}{/opt/basex/webapp}
[main] INFO org.eclipse.jetty.server.AbstractConnector - Started ServerConnector@3439f68d{h2c,[h2c, http/1.1]}{0.0.0.0:8984}
[main] INFO org.eclipse.jetty.server.Server - Started @784ms
HTTP Server was started (port: 8984).
HTTP Stop Server was started (port: 8985).
And here's the simple docker-compose.yml
version: '3.3'
services:
  basexhttp:
    container_name: pushcachefilter-basexhttp
    build: pushcachefilter-basexhttp/
    image: "pushcachefilter/basexhttp"
    volumes:
      - "${HOME}/data:/opt/basex/data"
      - "${HOME}/base/app-web/webapp:/opt/basex/webapp"
    networks:
      - web
  haproxy:
    container_name: haproxy_container
    build: ha-proxy/
    image: "my_haproxy"
    depends_on:
      - basexhttp
    ports:
      - 80:80
      - 443:443
    networks:
      - web
networks:
  web:
    driver: overlay
Please note that I have to configure Jetty using my own jetty.xml; the way shown in the documentation is not working for me.
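For context, PushCacheFilter builds its push cache from Referer headers and only associates requests whose Host header and port it recognizes, which can matter behind a TLS-terminating proxy where the external port (443) differs from Jetty's (8984). Jetty 9.4 documents hostHeader and ports init-params for exactly this case; here's a sketch of a web.xml registration (the values are illustrative, not my actual config):

<filter>
  <filter-name>PushFilter</filter-name>
  <filter-class>org.eclipse.jetty.servlets.PushCacheFilter</filter-class>
  <async-supported>true</async-supported>
  <!-- the port(s) clients actually connect to, i.e. HAProxy's TLS port -->
  <init-param>
    <param-name>ports</param-name>
    <param-value>443</param-value>
  </init-param>
</filter>
<filter-mapping>
  <filter-name>PushFilter</filter-name>
  <url-pattern>/*</url-pattern>
</filter-mapping>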
Thx in advance
Bodo
I'm testing a sample Spring Cloud Stream application (running on an Ubuntu Linux machine) with one source and one sink service. All my services are docker-containerized and I would like to use Kafka as the message broker.
Below are the relevant parts of the docker-compose.yml:
zookeeper:
  image: confluent/zookeeper
  container_name: zookeeper
  ports:
    - "2181:2181"
kafka:
  image: wurstmeister/kafka:0.9.0.0-1
  container_name: kafka
  ports:
    - "9092:9092"
  links:
    - zookeeper:zk
  environment:
    - KAFKA_ADVERTISED_HOST_NAME=192.168.33.101
    - KAFKA_ADVERTISED_PORT=9092
    - KAFKA_DELETE_TOPIC_ENABLE=true
    - KAFKA_LOG_RETENTION_HOURS=1
    - KAFKA_MESSAGE_MAX_BYTES=10000000
    - KAFKA_REPLICA_FETCH_MAX_BYTES=10000000
    - KAFKA_GROUP_MAX_SESSION_TIMEOUT_MS=60000
    - KAFKA_NUM_PARTITIONS=2
    - KAFKA_DELETE_RETENTION_MS=1000
.
.
.
# not shown: eureka service registry, spring cloud config service, etc.
myapp-service-test-source:
  container_name: myapp-service-test-source
  image: myapp-h2020/myapp-service-test-source:0.0.1
  environment:
    SERVICE_REGISTRY_HOST: 192.168.33.101
    SERVICE_REGISTRY_PORT: 8761
  ports:
    - 8081:8080
.
.
.
Here is the relevant part of the application.yml for my service-test-source service:
spring:
  cloud:
    stream:
      defaultBinder: kafka
      bindings:
        output:
          destination: messages
          content-type: application/json
      kafka:
        binder:
          brokers: ${SERVICE_REGISTRY_HOST:192.168.33.101}
          zkNodes: ${SERVICE_REGISTRY_HOST:192.168.33.101}
          defaultZkPort: 2181
          defaultBrokerPort: 9092
The problem is the following: if I launch the docker-compose above, in the test-source container log I notice that the service fails to connect to zookeeper, giving a repeated series of Connection refused errors and finishing with a ZkTimeoutException, which makes the service terminate (see below).
The strange fact is that if, instead of running my source (and sink) test services as Docker containers, I run them as jar files via Maven (mvn spring-boot:run <etc...>), the services work fine and are able to exchange messages via Kafka (note that kafka, zookeeper, etc. are still running as Docker containers).
.
.
.
*** THE FOLLOWING REPEATED n TIMES ***
2017-02-14 14:40:09.164 INFO 1 --- [localhost:2181)] org.apache.zookeeper.ClientCnxn : Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
2017-02-14 14:40:09.166 WARN 1 --- [localhost:2181)] org.apache.zookeeper.ClientCnxn : Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) ~[na:1.8.0_111]
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717) ~[na:1.8.0_111]
at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361) ~[zookeeper-3.4.6.jar!/:3.4.6-1569965]
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081) ~[zookeeper-3.4.6.jar!/:3.4.6-1569965]
.
.
.
java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.springframework.boot.loader.MainMethodRunner.run(MainMethodRunner.java:53)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.springframework.context.ApplicationContextException: Failed to start bean 'outputBindingLifecycle'; nested exception is org.I0Itec.zkclient.exception.ZkTimeoutException: Unable to connect to zookeeper server within timeout: 10000
Any idea what the problem might be?
Edit:
I discovered that in the "jar" execution logs the test-source service connects to zookeeper through the IP 127.0.0.1, as can be seen from the log snippet below:
2017-02-15 14:24:04.159 INFO 10348 --- [localhost:2181)] org.apache.zookeeper.ClientCnxn : Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
2017-02-15 14:24:04.159 INFO 10348 --- [localhost:2181)] org.apache.zookeeper.ClientCnxn : Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
2017-02-15 14:24:04.178 INFO 10348 --- [localhost:2181)] org.apache.zookeeper.ClientCnxn : Socket connection established to localhost/127.0.0.1:2181, initiating session
2017-02-15 14:24:04.201 INFO 10348 --- [localhost:2181)] org.apache.zookeeper.ClientCnxn : Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x15a421fd9ec000a, negotiated timeout = 10000
2017-02-15 14:24:05.870 INFO 10348 --- [ main] org.apache.zookeeper.ZooKeeper : Initiating client connection, connectString=localhost:2181 sessionTimeout=6000 watcher=org.I0Itec.zkclient.ZkClient@72ba68e3
2017-02-15 14:24:05.882 INFO 10348 --- [localhost:2181)] org.apache.zookeeper.ClientCnxn : Opening socket connection to server localhost/0:0:0:0:0:0:0:1:2181. Will not attempt to authenticate using SASL (unknown error)
2017-02-15 14:24:05.883 INFO 10348 --- [localhost:2181)] org.apache.zookeeper.ClientCnxn : Socket connection established to localhost/0:0:0:0:0:0:0:1:2181, initiating session
This explains why everything works in the jar execution but not in the Docker one (the zookeeper container exports its 2181 port to the host machine, so it's visible as localhost to a service process running directly on the host machine), but it doesn't solve the problem: apparently the Spring Cloud Stream Kafka configuration is ignoring the property spring.cloud.stream.kafka.binder.zkNodes as set in the application.yml (note that if I log the value of that property from the service, I see the correct value of 192.168.33.101 that I hardcoded there for debugging purposes).
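One thing worth trying to take property resolution out of the equation: set the binder properties directly as environment variables on the service, via Spring Boot's relaxed binding (a sketch based on the compose fragment above; untested in this exact setup):

myapp-service-test-source:
  container_name: myapp-service-test-source
  image: myapp-h2020/myapp-service-test-source:0.0.1
  environment:
    SERVICE_REGISTRY_HOST: 192.168.33.101
    SERVICE_REGISTRY_PORT: 8761
    # relaxed binding for spring.cloud.stream.kafka.binder.brokers / zkNodes
    SPRING_CLOUD_STREAM_KAFKA_BINDER_BROKERS: 192.168.33.101
    SPRING_CLOUD_STREAM_KAFKA_BINDER_ZKNODES: 192.168.33.101
  ports:
    - 8081:8080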
You have set the defaultBinder to be rabbit while trying to use the Kafka binder configuration. Do you have both the rabbit and kafka binders on the classpath of your application? In that case, a configuration like the following works:
zookeeper:
  image: wurstmeister/zookeeper
  container_name: 'zookeeper'
  ports:
    - 2181:2181

# --------------------- kafka --------------------------------
kafka:
  image: wurstmeister/kafka
  container_name: 'kafka'
  environment:
    - KAFKA_ADVERTISED_HOST_NAME=kafka
    - KAFKA_ADVERTISED_PORT=9092
    - KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181
    - KAFKA_CREATE_TOPICS=kafka_docker_topic:1:1
  ports:
    - 9092:9092
  depends_on:
    - zookeeper
spring:
  profiles: dev
  cloud:
    stream:
      defaultBinder: kafka
      kafka:
        binder:
          brokers: kafka # i added brokers and zkNodes property
          zkNodes: zookeeper
      bindings:
        input:
          destination: message
          content-type: application/json
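For what it's worth, the likely reason this works: both containers sit on the same Docker network, so the service names kafka and zookeeper resolve through Docker's embedded DNS from any other container, and KAFKA_ADVERTISED_HOST_NAME=kafka means the broker hands out an address that other containers can actually reach by name, rather than a localhost or host-side address.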