I'm trying to start the Docker containers. I am using a docker-elk.yml file to generate the ELK containers. The Elasticsearch and Kibana containers are working fine, but the Logstash container starts (and I can get into its bash), then after some time it stops on its own.
Logs of container:
[2019-04-11T08:48:26,882][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"6.6.0"}
[2019-04-11T08:48:33,497][INFO ][logstash.pipeline ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[2019-04-11T08:48:34,062][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elasticsearch:9200/]}}
[2019-04-11T08:48:34,310][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://elasticsearch:9200/"}
[2019-04-11T08:48:34,409][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>6}
[2019-04-11T08:48:34,415][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the type event field won't be used to determine the document _type {:es_version=>6}
[2019-04-11T08:48:34,469][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//elasticsearch:9200"]}
[2019-04-11T08:48:34,486][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
[2019-04-11T08:48:34,503][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[2019-04-11T08:48:34,960][INFO ][logstash.inputs.beats ] Beats inputs: Starting input listener {:address=>"0.0.0.0:5000"}
[2019-04-11T08:48:34,985][INFO ][logstash.pipeline ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#"}
[2019-04-11T08:48:35,077][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2019-04-11T08:48:35,144][INFO ][org.logstash.beats.Server] Starting server on port: 5000
[2019-04-11T08:48:35,499][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2019-04-11T08:48:50,591][INFO ][logstash.outputs.file ] Opening file {:path=>"/usr/share/logstash/output.log"}
[2019-04-11T13:16:51,947][WARN ][logstash.runner ] SIGTERM received. Shutting down.
[2019-04-11T13:16:56,498][INFO ][logstash.pipeline ] Pipeline has terminated {:pipeline_id=>"main", :thread=>"#"}
Does it try to require a relative path? That's been removed in Ruby 1.9.
uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/rubygems/core_ext/kernel_require.rb:59:in `require':
It seems your ruby installation is missing psych (for YAML output).
To eliminate this warning, please install libyaml and reinstall your ruby.
[ERROR] 2019-04-11 14:18:02.058 [main] Logstash - java.lang.IllegalStateException: Logstash stopped processing because of an error:
(GemspecError) There was a LoadError while loading logstash-core.gemspec:
load error: psych -- java.lang.RuntimeException: BUG: we can not copy embedded jar to temp directory
Does it try to require a relative path? That's been removed in Ruby 1.9.
uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/rubygems/core_ext/kernel_require.rb:59:in `require':
It seems your ruby installation is missing psych (for YAML output).
To eliminate this warning, please install libyaml and reinstall your ruby.
[ERROR] 2019-04-11 13:42:01.450 [main] Logstash - java.lang.IllegalStateException: Logstash stopped processing because of an error: (GemspecError) There was a LoadError while loading logstash-core.gemspec: load error: psych -- java.lang.RuntimeException: BUG: we can not copy embedded jar to temp directory
I have tried removing the tmp folder in the container, but that did not work.
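For reference, the "BUG: we can not copy embedded jar to temp directory" message usually means the JVM temp directory inside the container is not writable by the logstash user (or is full or mounted noexec), so JRuby cannot unpack its embedded psych jar. A minimal diagnostic sketch, assuming the container is simply named logstash and that LS_JAVA_OPTS is honoured by your image (it is in the official Logstash images):

# Check whether the logstash user can actually write to the temp directory
docker exec -it logstash sh -c 'id && ls -ld /tmp && touch /tmp/.probe && echo "/tmp is writable"'

# If it is not, point the JVM at a directory the logstash user owns, e.g. via the
# logstash service in docker-elk.yml (the keys below only illustrate the idea):
#   environment:
#     LS_JAVA_OPTS: "-Djava.io.tmpdir=/usr/share/logstash/data"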
I am trying to set up SonarQube on my MacBook, but I get the following error when I try to start it with sh sonar.sh console:
sudo sh sonar.sh console
Password:
/usr/bin/java
Running SonarQube...
Removed stale pid file: ./SonarQube.pid
INFO app[][o.s.a.AppFileSystem] Cleaning or creating temp directory /Applications/sonarqube/temp
INFO app[][o.s.a.es.EsSettings] Elasticsearch listening on [HTTP: 127.0.0.1:9001, TCP: 127.0.0.1:60506]
INFO app[][o.s.a.ProcessLauncherImpl] Launch process[ELASTICSEARCH] from [/Applications/sonarqube/elasticsearch]: /Applications/sonarqube/elasticsearch/bin/elasticsearch
INFO app[][o.s.a.SchedulerImpl] Waiting for Elasticsearch to be up and running
Exception in thread "main" java.lang.UnsupportedOperationException: The Security Manager is deprecated and will be removed in a future release
at java.base/java.lang.System.setSecurityManager(System.java:416)
at org.elasticsearch.bootstrap.Security.setSecurityManager(Security.java:99)
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:70)
2022.08.24 16:24:52 WARN app[][o.s.a.p.AbstractManagedProcess] Process exited with exit value [ElasticSearch]: 1
2022.08.24 16:24:52 INFO app[][o.s.a.SchedulerImpl] Process[ElasticSearch] is stopped
2022.08.24 16:24:52 INFO app[][o.s.a.SchedulerImpl] SonarQube is stopped
After some research on the internet I installed Java 11, but that did not help.
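For context, that UnsupportedOperationException from System.setSecurityManager is what the bundled Elasticsearch typically throws when it is started on a very recent JDK (18+), where installing a Security Manager is disallowed by default, and installing Java 11 on the side does not help as long as the wrapper still resolves /usr/bin/java to the newer JDK. A hedged sketch of how to check and override which Java SonarQube uses (the JDK home and the bin subdirectory are examples and depend on your installation):

# See which Java the wrapper will actually pick up
/usr/bin/java -version
which -a java

# Point SonarQube at a supported JDK explicitly; conf/wrapper.conf ships with the
# default line "wrapper.java.command=java".
sed -i.bak 's|^wrapper.java.command=.*|wrapper.java.command=/path/to/jdk-11/bin/java|' \
  /Applications/sonarqube/conf/wrapper.conf

sh /Applications/sonarqube/bin/macosx-universal-64/sonar.sh console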
I would like to set up my .NET Core microservice, running via a dotnet/tye project, with a local ELK Docker stack.
I followed these guidelines:
https://github.com/dotnet/tye/blob/main/docs/recipes/logging_elastic.md and added this to my tye.yaml:
extensions:
- name: elastic
  logPath: ./.logs
And I run the project with:
tye run --watch --logs elastic=http://localhost:9200
For some reason I'm not getting any indices in the Kibana configuration.
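As a first sanity check it can help to ask Elasticsearch directly whether any indices exist at all, independently of Kibana; a minimal sketch, assuming the stack is exposed on the default port 9200:

# Confirm Elasticsearch is reachable from the host Tye runs on
curl -s 'http://localhost:9200/'

# List every index Elasticsearch currently knows about; if nothing log-related shows up
# here, the problem is on the ingestion side rather than in the Kibana index pattern
curl -s 'http://localhost:9200/_cat/indices?v'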
Update 1
Elasticsearch logs attached below.
Also, since I'm using a Mac M1, I needed to build the sebp/elk image for arm64 locally (according to the official docs).
chown: changing ownership of '/var/lib/elasticsearch/indices/Ii0QK5l8QLi_yzEVLBZyyg/_state': Permission denied
chown: changing ownership of '/var/lib/elasticsearch/indices/Ii0QK5l8QLi_yzEVLBZyyg/0/_state/retention-leases-0.st': Permission denied
chown: changing ownership of '/var/lib/elasticsearch/indices/Ii0QK5l8QLi_yzEVLBZyyg/0/_state/state-4.st': Permission denied
chown: changing ownership of '/var/lib/elasticsearch/indices/Ii0QK5l8QLi_yzEVLBZyyg/0/_state': Permission denied
chown: changing ownership of '/var/lib/elasticsearch/indices/Ii0QK5l8QLi_yzEVLBZyyg/0/translog/translog.ckp': Permission denied
chown: changing ownership of '/var/lib/elasticsearch/indices/Ii0QK5l8QLi_yzEVLBZyyg/0/translog/translog-6.tlog': Permission denied
chown: changing ownership of '/var/lib/elasticsearch/indices/Ii0QK5l8QLi_yzEVLBZyyg/0/translog': Permission denied
chown: changing ownership of '/var/lib/elasticsearch/indices/Ii0QK5l8QLi_yzEVLBZyyg/0/index/segments_2': Permission denied
chown: changing ownership of '/var/lib/elasticsearch/indices/Ii0QK5l8QLi_yzEVLBZyyg/0/index/write.lock': Permission denied
chown: changing ownership of '/var/lib/elasticsearch/indices/Ii0QK5l8QLi_yzEVLBZyyg/0/index': Permission denied
chown: changing ownership of '/var/lib/elasticsearch/indices/Ii0QK5l8QLi_yzEVLBZyyg/0': Permission denied
chown: changing ownership of '/var/lib/elasticsearch/indices/Ii0QK5l8QLi_yzEVLBZyyg': Permission denied
chown: changing ownership of '/var/lib/elasticsearch/indices/BpgSZOOjQEGvmc8CikJ7JQ/_state': Permission denied
chown: changing ownership of '/var/lib/elasticsearch/indices/BpgSZOOjQEGvmc8CikJ7JQ/0/_state': Permission denied
chown: changing ownership of '/var/lib/elasticsearch/indices/BpgSZOOjQEGvmc8CikJ7JQ/0/translog': Permission denied
chown: changing ownership of '/var/lib/elasticsearch/indices/BpgSZOOjQEGvmc8CikJ7JQ/0/index': Permission denied
chown: changing ownership of '/var/lib/elasticsearch/indices/BpgSZOOjQEGvmc8CikJ7JQ/0': Permission denied
chown: changing ownership of '/var/lib/elasticsearch/indices/BpgSZOOjQEGvmc8CikJ7JQ': Permission denied
chown: changing ownership of '/var/lib/elasticsearch/indices': Permission denied
chown: changing ownership of '/var/lib/elasticsearch/nodes': Permission denied
chown: changing ownership of '/var/lib/elasticsearch/snapshot_cache/segments_6': Permission denied
chown: changing ownership of '/var/lib/elasticsearch/snapshot_cache/write.lock': Permission denied
chown: changing ownership of '/var/lib/elasticsearch/snapshot_cache': Permission denied
chown: changing ownership of '/var/lib/elasticsearch': Permission denied
* Starting Elasticsearch Server
...done.
waiting for Elasticsearch to be up (1/30)
waiting for Elasticsearch to be up (2/30)
waiting for Elasticsearch to be up (3/30)
waiting for Elasticsearch to be up (4/30)
waiting for Elasticsearch to be up (5/30)
waiting for Elasticsearch to be up (6/30)
waiting for Elasticsearch to be up (7/30)
waiting for Elasticsearch to be up (8/30)
waiting for Elasticsearch to be up (9/30)
waiting for Elasticsearch to be up (10/30)
Waiting for Elasticsearch cluster to respond (1/30)
logstash started.
* Starting Kibana5
...done.
==> /var/log/elasticsearch/elasticsearch.log <==
[2022-08-13T20:02:23,121][INFO ][o.e.b.BootstrapChecks ] [elk] bound or publishing to a non-loopback address, enforcing bootstrap checks
[2022-08-13T20:02:23,122][INFO ][o.e.c.c.Coordinator ] [elk] cluster UUID [BCwoAnKSQYqAe1XVWAkKQg]
[2022-08-13T20:02:23,243][INFO ][o.e.c.s.MasterService ] [elk] elected-as-master ([1] nodes joined)[{elk}{xL-uMsG2RD2KxDzGf8SGhw}{5WXUD0v3Txe1UtMW0ws07A}{192.168.208.2}{192.168.208.2:9300}{cdfhilmrstw} completing election, _BECOME_MASTER_TASK_, _FINISH_ELECTION_], term: 7, version: 169, delta: master node changed {previous [], current [{elk}{xL-uMsG2RD2KxDzGf8SGhw}{5WXUD0v3Txe1UtMW0ws07A}{192.168.208.2}{192.168.208.2:9300}{cdfhilmrstw}]}
[2022-08-13T20:02:23,352][INFO ][o.e.c.s.ClusterApplierService] [elk] master node changed {previous [], current [{elk}{xL-uMsG2RD2KxDzGf8SGhw}{5WXUD0v3Txe1UtMW0ws07A}{192.168.208.2}{192.168.208.2:9300}{cdfhilmrstw}]}, term: 7, version: 169, reason: Publication{term=7, version=169}
[2022-08-13T20:02:23,385][INFO ][o.e.h.AbstractHttpServerTransport] [elk] publish_address {192.168.208.2:9200}, bound_addresses {0.0.0.0:9200}
[2022-08-13T20:02:23,385][INFO ][o.e.n.Node ] [elk] started
[2022-08-13T20:02:23,566][WARN ][o.e.x.s.i.SetSecurityUserProcessor] [elk] Creating processor [set_security_user] (tag [null]) on field [_security] but authentication is not currently enabled on this cluster - this processor is likely to fail at runtime if it is used
[2022-08-13T20:02:23,683][INFO ][o.e.l.LicenseService ] [elk] license [2cc9ac6a-c421-4021-aa2d-daa12f2e2d0a] mode [basic] - valid
[2022-08-13T20:02:23,685][INFO ][o.e.g.GatewayService ] [elk] recovered [8] indices into cluster_state
[2022-08-13T20:02:25,510][INFO ][o.e.c.r.a.AllocationService] [elk] current.health="GREEN" message="Cluster health status changed from [RED] to [GREEN] (reason: [shards started [[.geoip_databases][0]]])." previous.health="RED" reason="shards started [[.geoip_databases][0]]"
==> /var/log/logstash/logstash-plain.log <==
==> /var/log/kibana/kibana5.log <==
==> /var/log/elasticsearch/elasticsearch.log <==
[2022-08-13T20:02:25,940][INFO ][o.e.i.g.DatabaseNodeService] [elk] successfully loaded geoip database file [GeoLite2-Country.mmdb]
[2022-08-13T20:02:26,012][INFO ][o.e.i.g.DatabaseNodeService] [elk] successfully loaded geoip database file [GeoLite2-ASN.mmdb]
[2022-08-13T20:02:26,934][INFO ][o.e.i.g.DatabaseNodeService] [elk] successfully loaded geoip database file [GeoLite2-City.mmdb]
==> /var/log/logstash/logstash-plain.log <==
[2022-08-13T20:02:39,552][INFO ][logstash.runner ] Log4j configuration path used is: /opt/logstash/config/log4j2.properties
[2022-08-13T20:02:39,561][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"8.1.0", "jruby.version"=>"jruby 9.2.20.1 (2.5.8) 2021-11-30 2a2962fbd1 OpenJDK 64-Bit Server VM 11.0.13+8 on 11.0.13+8 +indy +jit [linux-aarch64]"}
[2022-08-13T20:02:39,562][INFO ][logstash.runner ] JVM bootstrap flags: [-Xms1g, -Xmx1g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djruby.compile.invokedynamic=true, -Djruby.jit.threshold=0, -Djruby.regexp.interruptible=true, -XX:+HeapDumpOnOutOfMemoryError, -Djava.security.egd=file:/dev/urandom, -Dlog4j2.isThreadContextMapInheritable=true, --add-opens=java.base/java.security=ALL-UNNAMED, --add-opens=java.base/java.io=ALL-UNNAMED, --add-opens=java.base/java.nio.channels=ALL-UNNAMED, --add-opens=java.base/sun.nio.ch=ALL-UNNAMED, --add-opens=java.management/sun.management=ALL-UNNAMED, -Djava.io.tmpdir=/opt/logstash]
[2022-08-13T20:02:39,577][INFO ][logstash.settings ] Creating directory {:setting=>"path.queue", :path=>"/opt/logstash/data/queue"}
[2022-08-13T20:02:39,584][INFO ][logstash.settings ] Creating directory {:setting=>"path.dead_letter_queue", :path=>"/opt/logstash/data/dead_letter_queue"}
[2022-08-13T20:02:39,813][INFO ][logstash.agent ] No persistent UUID file found. Generating new UUID {:uuid=>"7710d596-7725-4b7c-b1c6-371f4360a636", :path=>"/opt/logstash/data/uuid"}
==> /var/log/elasticsearch/elasticsearch.log <==
[2022-08-13T20:02:41,002][INFO ][o.e.t.LoggingTaskListener] [elk] 190 finished with response BulkByScrollResponse[took=331.9ms,timed_out=false,sliceId=null,updated=19,created=0,deleted=0,batches=1,versionConflicts=0,noops=0,retries=0,throttledUntil=0s,bulk_failures=[],search_failures=[]]
[2022-08-13T20:02:41,034][INFO ][o.e.t.LoggingTaskListener] [elk] 184 finished with response BulkByScrollResponse[took=462.3ms,timed_out=false,sliceId=null,updated=11,created=0,deleted=0,batches=1,versionConflicts=0,noops=0,retries=0,throttledUntil=0s,bulk_failures=[],search_failures=[]]
==> /var/log/logstash/logstash-plain.log <==
[2022-08-13T20:02:41,833][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600, :ssl_enabled=>false}
[2022-08-13T20:02:43,307][INFO ][org.reflections.Reflections] Reflections took 181 ms to scan 1 urls, producing 120 keys and 417 values
[2022-08-13T20:02:43,821][INFO ][logstash.javapipeline ] Pipeline `main` is configured with `pipeline.ecs_compatibility: v8` setting. All plugins in this pipeline will default to `ecs_compatibility => v8` unless explicitly configured otherwise.
[2022-08-13T20:02:43,855][INFO ][logstash.outputs.elasticsearch][main] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//localhost"]}
[2022-08-13T20:02:44,105][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
[2022-08-13T20:02:44,281][WARN ][logstash.outputs.elasticsearch][main] Restored connection to ES instance {:url=>"http://localhost:9200/"}
[2022-08-13T20:02:44,292][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch version determined (8.1.0) {:es_version=>8}
[2022-08-13T20:02:44,294][WARN ][logstash.outputs.elasticsearch][main] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>8}
[2022-08-13T20:02:44,373][INFO ][logstash.outputs.elasticsearch][main] Config is not compliant with data streams. `data_stream => auto` resolved to `false`
[2022-08-13T20:02:44,381][INFO ][logstash.outputs.elasticsearch][main] Config is not compliant with data streams. `data_stream => auto` resolved to `false`
[2022-08-13T20:02:44,382][WARN ][logstash.outputs.elasticsearch][main] Elasticsearch Output configured with `ecs_compatibility => v8`, which resolved to an UNRELEASED preview of version 8.0.0 of the Elastic Common Schema. Once ECS v8 and an updated release of this plugin are publicly available, you will need to update this plugin to resolve this warning.
[2022-08-13T20:02:44,437][WARN ][logstash.filters.grok ][main] ECS v8 support is a preview of the unreleased ECS v8, and uses the v1 patterns. When Version 8 of the Elastic Common Schema becomes available, this plugin will need to be updated
[2022-08-13T20:02:44,530][WARN ][logstash.filters.grok ][main] ECS v8 support is a preview of the unreleased ECS v8, and uses the v1 patterns. When Version 8 of the Elastic Common Schema becomes available, this plugin will need to be updated
[2022-08-13T20:02:44,594][INFO ][logstash.javapipeline ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>500, "pipeline.sources"=>["/etc/logstash/conf.d/02-beats-input.conf", "/etc/logstash/conf.d/10-syslog.conf", "/etc/logstash/conf.d/11-nginx.conf", "/etc/logstash/conf.d/30-output.conf"], :thread=>"#<Thread:0x2926b0a run>"}
[2022-08-13T20:02:45,171][INFO ][logstash.javapipeline ][main] Pipeline Java execution initialization time {"seconds"=>0.57}
[2022-08-13T20:02:45,184][INFO ][logstash.inputs.beats ][main] Starting input listener {:address=>"0.0.0.0:5044"}
[2022-08-13T20:02:45,213][INFO ][logstash.javapipeline ][main] Pipeline started {"pipeline.id"=>"main"}
[2022-08-13T20:02:45,313][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2022-08-13T20:02:45,315][INFO ][org.logstash.beats.Server][main][829cd21b7fbde9c57f6074e54675a6dd14081ec403bdd5ea935fd37106249341] Starting server on port: 5044
==> /var/log/elasticsearch/elasticsearch.log <==
[2022-08-13T20:02:52,829][INFO ][o.e.c.m.MetadataMappingService] [elk] [.kibana_8.1.0_001/67wIFep3T4qg_Cn32rVbgg] update_mapping [_doc]
[2022-08-13T20:02:54,059][INFO ][o.e.c.m.MetadataMappingService] [elk] [.kibana_8.1.0_001/67wIFep3T4qg_Cn32rVbgg] update_mapping [_doc]
[2022-08-13T20:03:54,361][WARN ][o.e.x.s.i.SetSecurityUserProcessor] [elk] Creating processor [set_security_user] (tag [null]) on field [_security] but authentication is not currently enabled on this cluster - this processor is likely to fail at runtime if it is used
[2022-08-13T20:03:54,380][INFO ][o.e.c.m.MetadataMappingService] [elk] [.kibana_8.1.0_001/67wIFep3T4qg_Cn32rVbgg] update_mapping [_doc]
[2022-08-13T20:05:09,475][INFO ][o.e.c.m.MetadataMappingService] [elk] [.kibana_8.1.0_001/67wIFep3T4qg_Cn32rVbgg] update_mapping [_doc]
I started learning ELK and was trying to set up ELK locally on my Docker Desktop. The process works fine on Windows if I run the services separately, but if I run the services in Docker I get an error.
My Elasticsearch and Kibana are working fine.
Docker command
docker run -it --name=logstash --link elasticsearch:elasticsearch -v D:/logstash/pipeline/:/usr/share/logstash/pipeline/ docker.elastic.co/logstash/logstash:7.9.1
logstash.conf file
input {
  file {
    path => "D:/logs/service.log"
    start_position => "beginning"
  }
}
output {
  stdout {
    codec => rubydebug
  }
  elasticsearch {
    hosts => ["elasticsearch:9200"]
    index => "logs-%{+yyyy.MM.dd}"
  }
}
I get the following error:
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.jruby.ext.openssl.SecurityHelper (file:/tmp/jruby-1/jruby10434374664132949646jopenssl.jar) to field java.security.MessageDigest.provider
WARNING: Please consider reporting this to the maintainers of org.jruby.ext.openssl.SecurityHelper
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
Sending Logstash logs to /usr/share/logstash/logs which is now configured via log4j2.properties
[2020-09-24T03:57:01,161][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"7.9.1", "jruby.version"=>"jruby 9.2.13.0 (2.5.7) 2020-08-03 9a89c94bcc OpenJDK 64-Bit Server VM 11.0.8+10-LTS on 11.0.8+10-LTS +indy +jit [linux-x86_64]"}
[2020-09-24T03:57:01,210][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.queue", :path=>"/usr/share/logstash/data/queue"}
[2020-09-24T03:57:01,227][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.dead_letter_queue", :path=>"/usr/share/logstash/data/dead_letter_queue"}
[2020-09-24T03:57:01,597][INFO ][logstash.agent ] No persistent UUID file found. Generating new UUID {:uuid=>"975ca258-9df4-4e76-a40b-8ba27de762e7", :path=>"/usr/share/logstash/data/uuid"}
[2020-09-24T03:57:02,186][WARN ][logstash.monitoringextension.pipelineregisterhook] xpack.monitoring.enabled has not been defined, but found elasticsearch configuration. Please explicitly set `xpack.monitoring.enabled: true` in logstash.yml
[2020-09-24T03:57:02,191][WARN ][deprecation.logstash.monitoringextension.pipelineregisterhook] Internal collectors option for Logstash monitoring is deprecated and targeted for removal in the next major version.
Please configure Metricbeat to monitor Logstash. Documentation can be found at:
https://www.elastic.co/guide/en/logstash/current/monitoring-with-metricbeat.html
[2020-09-24T03:57:03,038][INFO ][logstash.licensechecker.licensereader] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elasticsearch:9200/]}}
[2020-09-24T03:57:03,226][WARN ][logstash.licensechecker.licensereader] Restored connection to ES instance {:url=>"http://elasticsearch:9200/"}
[2020-09-24T03:57:03,275][INFO ][logstash.licensechecker.licensereader] ES Output version determined {:es_version=>7}
[2020-09-24T03:57:03,281][WARN ][logstash.licensechecker.licensereader] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2020-09-24T03:57:03,441][INFO ][logstash.monitoring.internalpipelinesource] Monitoring License OK
[2020-09-24T03:57:03,442][INFO ][logstash.monitoring.internalpipelinesource] Validated license for monitoring. Enabling monitoring pipeline.
[2020-09-24T03:57:05,116][INFO ][org.reflections.Reflections] Reflections took 35 ms to scan 1 urls, producing 22 keys and 45 values
[2020-09-24T03:57:05,466][INFO ][logstash.outputs.elasticsearchmonitoring][.monitoring-logstash] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elasticsearch:9200/]}}
[2020-09-24T03:57:05,492][WARN ][logstash.outputs.elasticsearchmonitoring][.monitoring-logstash] Restored connection to ES instance {:url=>"http://elasticsearch:9200/"}
[2020-09-24T03:57:05,507][INFO ][logstash.outputs.elasticsearchmonitoring][.monitoring-logstash] ES Output version determined {:es_version=>7}
[2020-09-24T03:57:05,509][WARN ][logstash.outputs.elasticsearchmonitoring][.monitoring-logstash] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2020-09-24T03:57:05,611][INFO ][logstash.outputs.elasticsearchmonitoring][.monitoring-logstash] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearchMonitoring", :hosts=>["http://elasticsearch:9200"]}
[2020-09-24T03:57:05,641][WARN ][logstash.javapipeline ][.monitoring-logstash] 'pipeline.ordered' is enabled and is likely less efficient, consider disabling if preserving event order is not necessary
[2020-09-24T03:57:05,750][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elasticsearch:9200/]}}
[2020-09-24T03:57:05,757][WARN ][logstash.outputs.elasticsearch][main] Restored connection to ES instance {:url=>"http://elasticsearch:9200/"}
[2020-09-24T03:57:05,767][INFO ][logstash.outputs.elasticsearch][main] ES Output version determined {:es_version=>7}
[2020-09-24T03:57:05,768][WARN ][logstash.outputs.elasticsearch][main] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2020-09-24T03:57:05,792][INFO ][logstash.javapipeline ][.monitoring-logstash] Starting pipeline {:pipeline_id=>".monitoring-logstash", "pipeline.workers"=>1, "pipeline.batch.size"=>2, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>2, "pipeline.sources"=>["monitoring pipeline"], :thread=>"#<Thread:0x7f619e13 run>"}
[2020-09-24T03:57:05,805][INFO ][logstash.outputs.elasticsearch][main] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//elasticsearch:9200"]}
[2020-09-24T03:57:05,827][INFO ][logstash.outputs.elasticsearch][main] Using a default mapping template {:es_version=>7, :ecs_compatibility=>:disabled}
[2020-09-24T03:57:05,835][INFO ][logstash.javapipeline ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>3, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>375, "pipeline.sources"=>["/usr/share/logstash/pipeline/logstash.conf"], :thread=>"#<Thread:0x5094c85b run>"}
[2020-09-24T03:57:05,882][INFO ][logstash.outputs.elasticsearch][main] Attempting to install template {:manage_template=>{"index_patterns"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s", "number_of_shards"=>1}, "mappings"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}
[2020-09-24T03:57:06,588][INFO ][logstash.javapipeline ][.monitoring-logstash] Pipeline Java execution initialization time {"seconds"=>0.79}
[2020-09-24T03:57:06,648][INFO ][logstash.javapipeline ][main] Pipeline Java execution initialization time {"seconds"=>0.81}
[2020-09-24T03:57:06,664][INFO ][logstash.javapipeline ][.monitoring-logstash] Pipeline started {"pipeline.id"=>".monitoring-logstash"}
[2020-09-24T03:57:07,847][ERROR][logstash.agent ] Failed to execute action {:id=>:main, :action_type=>LogStash::ConvergeResult::FailedAction, :message=>"Could not execute action: PipelineAction::Create<main>, action_result: false", :backtrace=>nil}
[2020-09-24T03:57:08,054][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2020-09-24T03:57:10,001][INFO ][logstash.javapipeline ] Pipeline terminated {"pipeline.id"=>".monitoring-logstash"}
[2020-09-24T03:57:10,047][INFO ][logstash.runner ] Logstash shut down.
If I use the logstash.conf file as:
input {
  stdin {}
}
output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
  }
}
then Logstash starts perfectly and the logs start coming into Kibana.
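One detail worth noting, offered as a guess at the intent: inside the Linux container there is no D:/ drive, so a file input pointing at D:/logs/service.log can never see the host file unless that directory is also bind-mounted into the container and the input path refers to the in-container location. A sketch of that layout (the mount target /usr/share/logstash/external-logs is just an example name):

# Mount the host log directory alongside the pipeline directory
docker run -it --name=logstash --link elasticsearch:elasticsearch \
  -v D:/logstash/pipeline/:/usr/share/logstash/pipeline/ \
  -v D:/logs/:/usr/share/logstash/external-logs/ \
  docker.elastic.co/logstash/logstash:7.9.1

# and in logstash.conf reference the container-side path instead of the Windows one:
#   path => "/usr/share/logstash/external-logs/service.log"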
I am trying to start Apache NiFi version 1.2.0 on a Windows 8 machine. It used to start properly, but after I restarted the system NiFi does not start at all. When I check the status, I keep getting "Apache NiFi not running".
Below are the logs from the nifi.bootstrap.log file:
2017-07-05 15:41:57,105 WARN [NiFi Bootstrap Command Listener] org.apache.nifi.bootstrap.RunNiFi Failed to set permissions so that only the owner can read pid file E:\softwares\nifi-1.2.0\bin\..\run\nifi.pid; this may allows others to have access to the key needed to communicate with NiFi. Permissions should be changed so that only the owner can read this file
2017-07-05 15:41:57,142 WARN [NiFi Bootstrap Command Listener] org.apache.nifi.bootstrap.RunNiFi Failed to set permissions so that only the owner can read status file E:\softwares\nifi-1.2.0\bin\..\run\nifi.status; this may allows others to have access to the key needed to communicate with NiFi. Permissions should be changed so that only the owner can read this file
2017-07-05 15:41:57,168 INFO [NiFi Bootstrap Command Listener] org.apache.nifi.bootstrap.RunNiFi Apache NiFi now running and listening for Bootstrap requests on port 50765
2017-07-05 15:43:12,077 ERROR [NiFi logging handler] org.apache.nifi.StdErr Failed to start web server: Unable to start Flow Controller.
2017-07-05 15:43:12,078 ERROR [NiFi logging handler] org.apache.nifi.StdErr Shutting down...
2017-07-05 15:43:14,501 INFO [main] org.apache.nifi.bootstrap.RunNiFi NiFi never started. Will not restart NiFi
Stack trace from nifi.app.log:
2017-07-05 15:43:12,077 WARN [main] org.apache.nifi.web.server.JettyServer Failed to start web server... shutting down.
org.apache.nifi.web.NiFiCoreException: Unable to start Flow Controller.
at org.apache.nifi.web.contextlistener.ApplicationStartupContextListener.contextInitialized(ApplicationStartupContextListener.java:88)
at org.eclipse.jetty.server.handler.ContextHandler.callContextInitialized(ContextHandler.java:876)
at org.eclipse.jetty.servlet.ServletContextHandler.callContextInitialized(ServletContextHandler.java:532)
at org.eclipse.jetty.server.handler.ContextHandler.startContext(ContextHandler.java:839)
at org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:344)
at org.eclipse.jetty.webapp.WebAppContext.startWebapp(WebAppContext.java:1480)
at org.eclipse.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1442)
at org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:799)
at org.eclipse.jetty.servlet.ServletContextHandler.doStart(ServletContextHandler.java:261)
at org.eclipse.jetty.webapp.WebAppContext.doStart(WebAppContext.java:540)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:131)
at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:113)
at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:113)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:131)
at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:105)
at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:113)
at org.eclipse.jetty.server.handler.gzip.GzipHandler.doStart(GzipHandler.java:290)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:131)
at org.eclipse.jetty.server.Server.start(Server.java:452)
at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:105)
at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:113)
at org.eclipse.jetty.server.Server.doStart(Server.java:419)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
at org.apache.nifi.web.server.JettyServer.start(JettyServer.java:695)
at org.apache.nifi.NiFi.<init>(NiFi.java:160)
at org.apache.nifi.NiFi.main(NiFi.java:267)
Caused by: java.io.IOException: Expected to read a Sentinel Byte of '1' but got a value of '0' instead
at org.apache.nifi.repository.schema.SchemaRecordReader.readRecord(SchemaRecordReader.java:65)
at org.apache.nifi.controller.repository.SchemaRepositoryRecordSerde.deserializeRecord(SchemaRepositoryRecordSerde.java:115)
at org.apache.nifi.controller.repository.SchemaRepositoryRecordSerde.deserializeEdit(SchemaRepositoryRecordSerde.java:109)
at org.apache.nifi.controller.repository.SchemaRepositoryRecordSerde.deserializeEdit(SchemaRepositoryRecordSerde.java:46)
at org.wali.MinimalLockingWriteAheadLog$Partition.recoverNextTransaction(MinimalLockingWriteAheadLog.java:1096)
at org.wali.MinimalLockingWriteAheadLog.recoverFromEdits(MinimalLockingWriteAheadLog.java:459)
at org.wali.MinimalLockingWriteAheadLog.recoverRecords(MinimalLockingWriteAheadLog.java:301)
at org.apache.nifi.controller.repository.WriteAheadFlowFileRepository.loadFlowFiles(WriteAheadFlowFileRepository.java:381)
at org.apache.nifi.controller.FlowController.initializeFlow(FlowController.java:712)
at org.apache.nifi.controller.StandardFlowService.initializeController(StandardFlowService.java:953)
at org.apache.nifi.controller.StandardFlowService.load(StandardFlowService.java:534)
at org.apache.nifi.web.contextlistener.ApplicationStartupContextListener.contextInitialized(ApplicationStartupContextListener.java:72)
... 28 common frames omitted
Thanks in advance
After googling this error ("Caused by: java.io.IOException: Expected to read a Sentinel Byte of '1' but got a value of '0' instead") I found that it indicates a partial write to the repositories.
Here are a couple of things you can check or try to bring your dataflow back online:
Check that your disks are not full.
Did you launch NiFi with the same user? Did you run it with administrator privileges?
You can back up/move your repositories and try to start NiFi with empty repositories; you will still have your dataflows, but any file that was being processed when you shut down will be gone.
Could you please try that?
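A rough sketch of the "move the repositories aside" step on a Unix-like shell (on Windows, move the same folders under the NiFi install directory, e.g. E:\softwares\nifi-1.2.0, with Explorer or move); the directory names below are the NiFi defaults from nifi.properties and may differ if they were customized:

cd /path/to/nifi-1.2.0
./bin/nifi.sh stop                 # make sure NiFi is not running

# Move (do not delete) the repositories so NiFi rebuilds them on the next start
mkdir -p repo-backup
mv flowfile_repository content_repository provenance_repository database_repository repo-backup/

./bin/nifi.sh start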
I think the issue is an incompatible Java version; use Java 8.
If you haven't set JAVA_HOME, set it in your environment variables with a path like "C:/Program Files/jdk1.8".
There is a Jira tracking the problems when NiFi runs with Java 9, and the issue has not been resolved yet:
https://issues.apache.org/jira/browse/NIFI-4419
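A quick way to confirm which JDK NiFi will actually pick up (NiFi 1.2.0 expects a 1.8.x runtime; on Windows cmd, use echo %JAVA_HOME% instead of the first line):

# Both should point at the same JDK 8 installation
echo "$JAVA_HOME"
java -version          # expect something like: java version "1.8.0_xxx"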
Good Morning,
I'm trying to implement the massive Cassandra data dump example, using the bulk loading post (http://www.datastax.com/dev/blog/bulk-loading) as a guide.
The example resolves its dependencies with this script (http://www.datastax.com/wp-content/uploads/2011/08/DataImport), but the Cassandra libraries it needs are not located in the directories listed there, because I'm working with DSE with Cassandra 2.0. So, trying to cover those dependencies, I ended up with the following script.
#!/bin/sh
# paths to the cassandra source tree, cassandra jar and java
CASSANDRA_HOME="/usr/share/dse/cassandra"
# CASSANDRA_JAR="./apache-cassandra-2.0.10.jar"
JAVA=`which java`
# Java classpath. Must include:
# - directory of DataImportExample
# - directory with cassandra/log4j config files
# - cassandra jar
# - cassandra dependencies jar
CLASSPATH=".:/usr/share/dse/dse.jar:./slf4j-1.7.7/slf4-nop-1.7.7.jar:./slf4j-1.7.7/slf4j-simple-1.7.7.jar:/etc/dse/cassandra"
for jar in $CASSANDRA_HOME/lib/*.jar; do
CLASSPATH=$CLASSPATH:$jar
done
$JAVA -ea -cp $CLASSPATH -Xmx256M \
-Dlog4j.configuration=log4j-tools.properties \
CassandraDataBulk "$@"
CASSANDRA_JAR is commented out; I use "cassandra-all-2.0.8.39.jar", which is located in "/usr/share/dse/cassandra/lib" and is therefore already included by the loop.
I solved the slf4j dependencies by downloading version 1.7.7.
Because of the Cassandra version difference I also had to adapt the SSTableSimpleUnsortedWriter constructor.
IPartitioner partitioner = new RandomPartitioner();
SSTableSimpleUnsortedWriter sourcesWriter = new SSTableSimpleUnsortedWriter(
directory,
partitioner,
keyspace,
table,
AsciiType.instance,
null,
64
);
It seems that the problem now is that there are still missing dependencies.
Below is the error trace I get.
There is a missing dependency, but since it is "org.apache.commons.configuration.ConfigurationRuntimeException", the real problem could be something else. Could it be a bad "cassandra.yaml" configuration?
Thanks,
Regards!
[dmdb#vm-dmdb01 ~]$ ./init_env.sh export.csv
[main] ERROR org.apache.cassandra.cql3.QueryProcessor - Unable to initialize MemoryMeter (jamm not specified as javaagent). This means Cassandra will be unable to measure object sizes accurately and may consequently OOM.
[main] INFO org.apache.cassandra.config.YamlConfigurationLoader - Loading settings from file:/etc/dse/cassandra/cassandra.yaml
[main] INFO org.apache.cassandra.config.DatabaseDescriptor - Data files directories: [/data01, /data02]
[main] INFO org.apache.cassandra.config.DatabaseDescriptor - Commit log directory: /datatmp/commitlog
[main] INFO org.apache.cassandra.config.DatabaseDescriptor - DiskAccessMode 'auto' determined to be mmap, indexAccessMode is mmap
[main] INFO org.apache.cassandra.config.DatabaseDescriptor - disk_failure_policy is stop
[main] INFO org.apache.cassandra.config.DatabaseDescriptor - commit_failure_policy is stop
[main] INFO org.apache.cassandra.config.DatabaseDescriptor - Global memtable threshold is enabled at 61MB
[main] INFO com.datastax.bdp.snitch.Workload - Setting my workload to Cassandra
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/commons/configuration/ConfigurationRuntimeException
at com.datastax.bdp.config.ConfigUtil.defaultValue(ConfigUtil.java:18)
at com.datastax.bdp.config.DseConfig.<clinit>(DseConfig.java:51)
at com.datastax.bdp.snitch.DseDelegateSnitch.<init>(DseDelegateSnitch.java:42)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at java.lang.Class.newInstance(Class.java:374)
at org.apache.cassandra.utils.FBUtilities.construct(FBUtilities.java:488)
at org.apache.cassandra.config.DatabaseDescriptor.createEndpointSnitch(DatabaseDescriptor.java:508)
at org.apache.cassandra.config.DatabaseDescriptor.applyConfig(DatabaseDescriptor.java:341)
at org.apache.cassandra.config.DatabaseDescriptor.<clinit>(DatabaseDescriptor.java:111)
at org.apache.cassandra.io.sstable.AbstractSSTableSimpleWriter.<init>(AbstractSSTableSimpleWriter.java:50)
at org.apache.cassandra.io.sstable.SSTableSimpleUnsortedWriter.<init>(SSTableSimpleUnsortedWriter.java:96)
at org.apache.cassandra.io.sstable.SSTableSimpleUnsortedWriter.<init>(SSTableSimpleUnsortedWriter.java:80)
at org.apache.cassandra.io.sstable.SSTableSimpleUnsortedWriter.<init>(SSTableSimpleUnsortedWriter.java:91)
at CassandraDataBulk.main(CassandraDataBulk.java:35)
Caused by: java.lang.ClassNotFoundException: org.apache.commons.configuration.ConfigurationRuntimeException
at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
... 17 more
You are missing a "javaagent" parameter in your java call. Add the following:
-javaagent:$CASSANDRA_HOME/lib/jamm-0.2.5.jar
Your final call should look like:
$JAVA -ea -cp $CLASSPATH -Xmx256M \
-Dlog4j.configuration=log4j-tools.properties \
-javaagent:$CASSANDRA_HOME/lib/jamm-0.2.5.jar
CassandraDataBulk "$@"
NOTE: Adjust the path to jamm.jar as necessary
Reference
As for the runtime configuration error, download the Apache Commons 'lang' library and include it in your classpath.
Download here
If you receive new exceptions after applying that fix, download google-common.jar and guava-16.0.1.jar and include them in your classpath as well. These are all of the JARs that my own bulk loader has required so far.
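In the script from the question, the simplest way to pull these extra jars in is to append them to the CLASSPATH variable before the java invocation. A sketch, assuming the jars were downloaded next to the script (the file names and versions are only examples); note that the missing class org.apache.commons.configuration.ConfigurationRuntimeException itself lives in the commons-configuration jar, so that one is worth checking first:

# Append the downloaded jars to the classpath built earlier in the script
for jar in ./commons-configuration-1.10.jar ./commons-lang-2.6.jar ./guava-16.0.1.jar ./google-collections-1.0.jar; do
  [ -f "$jar" ] && CLASSPATH=$CLASSPATH:$jar
done

$JAVA -ea -cp $CLASSPATH -Xmx256M \
  -Dlog4j.configuration=log4j-tools.properties \
  -javaagent:$CASSANDRA_HOME/lib/jamm-0.2.5.jar \
  CassandraDataBulk "$@"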