dotnet/tye and ELK docker on local machine - no indexes - docker

I would like to set up my .NET Core microservice, running via a dotnet/tye project, with a local ELK Docker container.
I followed these guidelines:
https://github.com/dotnet/tye/blob/main/docs/recipes/logging_elastic.md and added this to my tye.yaml:
extensions:
- name: elastic
  logPath: ./.logs
And I'm running the project with:
tye run --watch --logs elastic=http://localhost:9200
For some reason I'm not getting any indexes in the Kibana configuration.
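To rule out Kibana itself, a quick way to see which indexes Elasticsearch actually has is the _cat API (this assumes the container's port 9200 is published on localhost, matching the URL I pass to tye):
# List every index Elasticsearch currently knows about
curl -s "http://localhost:9200/_cat/indices?v"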
Update 1
I've attached the Elasticsearch logs below.
Also, since I'm using a Mac M1, I had to build the sebp/elk image for arm64 locally (following the official docs).
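For reference, the local arm64 build looked roughly like the following; the repository URL and image tag here are my own assumptions rather than a copy of the official instructions, so adjust them to whatever the sebp/elk docs for your version say:
# Clone the image sources and build an arm64 variant locally (the tag name is arbitrary)
git clone https://github.com/spujadas/elk-docker.git
cd elk-docker
docker build --platform linux/arm64 -t sebp/elk:arm64 .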
chown: changing ownership of '/var/lib/elasticsearch/indices/Ii0QK5l8QLi_yzEVLBZyyg/_state': Permission denied
chown: changing ownership of '/var/lib/elasticsearch/indices/Ii0QK5l8QLi_yzEVLBZyyg/0/_state/retention-leases-0.st': Permission denied
chown: changing ownership of '/var/lib/elasticsearch/indices/Ii0QK5l8QLi_yzEVLBZyyg/0/_state/state-4.st': Permission denied
chown: changing ownership of '/var/lib/elasticsearch/indices/Ii0QK5l8QLi_yzEVLBZyyg/0/_state': Permission denied
chown: changing ownership of '/var/lib/elasticsearch/indices/Ii0QK5l8QLi_yzEVLBZyyg/0/translog/translog.ckp': Permission denied
chown: changing ownership of '/var/lib/elasticsearch/indices/Ii0QK5l8QLi_yzEVLBZyyg/0/translog/translog-6.tlog': Permission denied
chown: changing ownership of '/var/lib/elasticsearch/indices/Ii0QK5l8QLi_yzEVLBZyyg/0/translog': Permission denied
chown: changing ownership of '/var/lib/elasticsearch/indices/Ii0QK5l8QLi_yzEVLBZyyg/0/index/segments_2': Permission denied
chown: changing ownership of '/var/lib/elasticsearch/indices/Ii0QK5l8QLi_yzEVLBZyyg/0/index/write.lock': Permission denied
chown: changing ownership of '/var/lib/elasticsearch/indices/Ii0QK5l8QLi_yzEVLBZyyg/0/index': Permission denied
chown: changing ownership of '/var/lib/elasticsearch/indices/Ii0QK5l8QLi_yzEVLBZyyg/0': Permission denied
chown: changing ownership of '/var/lib/elasticsearch/indices/Ii0QK5l8QLi_yzEVLBZyyg': Permission denied
chown: changing ownership of '/var/lib/elasticsearch/indices/BpgSZOOjQEGvmc8CikJ7JQ/_state': Permission denied
chown: changing ownership of '/var/lib/elasticsearch/indices/BpgSZOOjQEGvmc8CikJ7JQ/0/_state': Permission denied
chown: changing ownership of '/var/lib/elasticsearch/indices/BpgSZOOjQEGvmc8CikJ7JQ/0/translog': Permission denied
chown: changing ownership of '/var/lib/elasticsearch/indices/BpgSZOOjQEGvmc8CikJ7JQ/0/index': Permission denied
chown: changing ownership of '/var/lib/elasticsearch/indices/BpgSZOOjQEGvmc8CikJ7JQ/0': Permission denied
chown: changing ownership of '/var/lib/elasticsearch/indices/BpgSZOOjQEGvmc8CikJ7JQ': Permission denied
chown: changing ownership of '/var/lib/elasticsearch/indices': Permission denied
chown: changing ownership of '/var/lib/elasticsearch/nodes': Permission denied
chown: changing ownership of '/var/lib/elasticsearch/snapshot_cache/segments_6': Permission denied
chown: changing ownership of '/var/lib/elasticsearch/snapshot_cache/write.lock': Permission denied
chown: changing ownership of '/var/lib/elasticsearch/snapshot_cache': Permission denied
chown: changing ownership of '/var/lib/elasticsearch': Permission denied
* Starting Elasticsearch Server
...done.
waiting for Elasticsearch to be up (1/30)
waiting for Elasticsearch to be up (2/30)
waiting for Elasticsearch to be up (3/30)
waiting for Elasticsearch to be up (4/30)
waiting for Elasticsearch to be up (5/30)
waiting for Elasticsearch to be up (6/30)
waiting for Elasticsearch to be up (7/30)
waiting for Elasticsearch to be up (8/30)
waiting for Elasticsearch to be up (9/30)
waiting for Elasticsearch to be up (10/30)
Waiting for Elasticsearch cluster to respond (1/30)
logstash started.
* Starting Kibana5
...done.
==> /var/log/elasticsearch/elasticsearch.log <==
[2022-08-13T20:02:23,121][INFO ][o.e.b.BootstrapChecks ] [elk] bound or publishing to a non-loopback address, enforcing bootstrap checks
[2022-08-13T20:02:23,122][INFO ][o.e.c.c.Coordinator ] [elk] cluster UUID [BCwoAnKSQYqAe1XVWAkKQg]
[2022-08-13T20:02:23,243][INFO ][o.e.c.s.MasterService ] [elk] elected-as-master ([1] nodes joined)[{elk}{xL-uMsG2RD2KxDzGf8SGhw}{5WXUD0v3Txe1UtMW0ws07A}{192.168.208.2}{192.168.208.2:9300}{cdfhilmrstw} completing election, _BECOME_MASTER_TASK_, _FINISH_ELECTION_], term: 7, version: 169, delta: master node changed {previous [], current [{elk}{xL-uMsG2RD2KxDzGf8SGhw}{5WXUD0v3Txe1UtMW0ws07A}{192.168.208.2}{192.168.208.2:9300}{cdfhilmrstw}]}
[2022-08-13T20:02:23,352][INFO ][o.e.c.s.ClusterApplierService] [elk] master node changed {previous [], current [{elk}{xL-uMsG2RD2KxDzGf8SGhw}{5WXUD0v3Txe1UtMW0ws07A}{192.168.208.2}{192.168.208.2:9300}{cdfhilmrstw}]}, term: 7, version: 169, reason: Publication{term=7, version=169}
[2022-08-13T20:02:23,385][INFO ][o.e.h.AbstractHttpServerTransport] [elk] publish_address {192.168.208.2:9200}, bound_addresses {0.0.0.0:9200}
[2022-08-13T20:02:23,385][INFO ][o.e.n.Node ] [elk] started
[2022-08-13T20:02:23,566][WARN ][o.e.x.s.i.SetSecurityUserProcessor] [elk] Creating processor [set_security_user] (tag [null]) on field [_security] but authentication is not currently enabled on this cluster - this processor is likely to fail at runtime if it is used
[2022-08-13T20:02:23,683][INFO ][o.e.l.LicenseService ] [elk] license [2cc9ac6a-c421-4021-aa2d-daa12f2e2d0a] mode [basic] - valid
[2022-08-13T20:02:23,685][INFO ][o.e.g.GatewayService ] [elk] recovered [8] indices into cluster_state
[2022-08-13T20:02:25,510][INFO ][o.e.c.r.a.AllocationService] [elk] current.health="GREEN" message="Cluster health status changed from [RED] to [GREEN] (reason: [shards started [[.geoip_databases][0]]])." previous.health="RED" reason="shards started [[.geoip_databases][0]]"
==> /var/log/logstash/logstash-plain.log <==
==> /var/log/kibana/kibana5.log <==
==> /var/log/elasticsearch/elasticsearch.log <==
[2022-08-13T20:02:25,940][INFO ][o.e.i.g.DatabaseNodeService] [elk] successfully loaded geoip database file [GeoLite2-Country.mmdb]
[2022-08-13T20:02:26,012][INFO ][o.e.i.g.DatabaseNodeService] [elk] successfully loaded geoip database file [GeoLite2-ASN.mmdb]
[2022-08-13T20:02:26,934][INFO ][o.e.i.g.DatabaseNodeService] [elk] successfully loaded geoip database file [GeoLite2-City.mmdb]
==> /var/log/logstash/logstash-plain.log <==
[2022-08-13T20:02:39,552][INFO ][logstash.runner ] Log4j configuration path used is: /opt/logstash/config/log4j2.properties
[2022-08-13T20:02:39,561][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"8.1.0", "jruby.version"=>"jruby 9.2.20.1 (2.5.8) 2021-11-30 2a2962fbd1 OpenJDK 64-Bit Server VM 11.0.13+8 on 11.0.13+8 +indy +jit [linux-aarch64]"}
[2022-08-13T20:02:39,562][INFO ][logstash.runner ] JVM bootstrap flags: [-Xms1g, -Xmx1g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djruby.compile.invokedynamic=true, -Djruby.jit.threshold=0, -Djruby.regexp.interruptible=true, -XX:+HeapDumpOnOutOfMemoryError, -Djava.security.egd=file:/dev/urandom, -Dlog4j2.isThreadContextMapInheritable=true, --add-opens=java.base/java.security=ALL-UNNAMED, --add-opens=java.base/java.io=ALL-UNNAMED, --add-opens=java.base/java.nio.channels=ALL-UNNAMED, --add-opens=java.base/sun.nio.ch=ALL-UNNAMED, --add-opens=java.management/sun.management=ALL-UNNAMED, -Djava.io.tmpdir=/opt/logstash]
[2022-08-13T20:02:39,577][INFO ][logstash.settings ] Creating directory {:setting=>"path.queue", :path=>"/opt/logstash/data/queue"}
[2022-08-13T20:02:39,584][INFO ][logstash.settings ] Creating directory {:setting=>"path.dead_letter_queue", :path=>"/opt/logstash/data/dead_letter_queue"}
[2022-08-13T20:02:39,813][INFO ][logstash.agent ] No persistent UUID file found. Generating new UUID {:uuid=>"7710d596-7725-4b7c-b1c6-371f4360a636", :path=>"/opt/logstash/data/uuid"}
==> /var/log/elasticsearch/elasticsearch.log <==
[2022-08-13T20:02:41,002][INFO ][o.e.t.LoggingTaskListener] [elk] 190 finished with response BulkByScrollResponse[took=331.9ms,timed_out=false,sliceId=null,updated=19,created=0,deleted=0,batches=1,versionConflicts=0,noops=0,retries=0,throttledUntil=0s,bulk_failures=[],search_failures=[]]
[2022-08-13T20:02:41,034][INFO ][o.e.t.LoggingTaskListener] [elk] 184 finished with response BulkByScrollResponse[took=462.3ms,timed_out=false,sliceId=null,updated=11,created=0,deleted=0,batches=1,versionConflicts=0,noops=0,retries=0,throttledUntil=0s,bulk_failures=[],search_failures=[]]
==> /var/log/logstash/logstash-plain.log <==
[2022-08-13T20:02:41,833][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600, :ssl_enabled=>false}
[2022-08-13T20:02:43,307][INFO ][org.reflections.Reflections] Reflections took 181 ms to scan 1 urls, producing 120 keys and 417 values
[2022-08-13T20:02:43,821][INFO ][logstash.javapipeline ] Pipeline `main` is configured with `pipeline.ecs_compatibility: v8` setting. All plugins in this pipeline will default to `ecs_compatibility => v8` unless explicitly configured otherwise.
[2022-08-13T20:02:43,855][INFO ][logstash.outputs.elasticsearch][main] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//localhost"]}
[2022-08-13T20:02:44,105][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
[2022-08-13T20:02:44,281][WARN ][logstash.outputs.elasticsearch][main] Restored connection to ES instance {:url=>"http://localhost:9200/"}
[2022-08-13T20:02:44,292][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch version determined (8.1.0) {:es_version=>8}
[2022-08-13T20:02:44,294][WARN ][logstash.outputs.elasticsearch][main] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>8}
[2022-08-13T20:02:44,373][INFO ][logstash.outputs.elasticsearch][main] Config is not compliant with data streams. `data_stream => auto` resolved to `false`
[2022-08-13T20:02:44,381][INFO ][logstash.outputs.elasticsearch][main] Config is not compliant with data streams. `data_stream => auto` resolved to `false`
[2022-08-13T20:02:44,382][WARN ][logstash.outputs.elasticsearch][main] Elasticsearch Output configured with `ecs_compatibility => v8`, which resolved to an UNRELEASED preview of version 8.0.0 of the Elastic Common Schema. Once ECS v8 and an updated release of this plugin are publicly available, you will need to update this plugin to resolve this warning.
[2022-08-13T20:02:44,437][WARN ][logstash.filters.grok ][main] ECS v8 support is a preview of the unreleased ECS v8, and uses the v1 patterns. When Version 8 of the Elastic Common Schema becomes available, this plugin will need to be updated
[2022-08-13T20:02:44,530][WARN ][logstash.filters.grok ][main] ECS v8 support is a preview of the unreleased ECS v8, and uses the v1 patterns. When Version 8 of the Elastic Common Schema becomes available, this plugin will need to be updated
[2022-08-13T20:02:44,594][INFO ][logstash.javapipeline ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>500, "pipeline.sources"=>["/etc/logstash/conf.d/02-beats-input.conf", "/etc/logstash/conf.d/10-syslog.conf", "/etc/logstash/conf.d/11-nginx.conf", "/etc/logstash/conf.d/30-output.conf"], :thread=>"#<Thread:0x2926b0a run>"}
[2022-08-13T20:02:45,171][INFO ][logstash.javapipeline ][main] Pipeline Java execution initialization time {"seconds"=>0.57}
[2022-08-13T20:02:45,184][INFO ][logstash.inputs.beats ][main] Starting input listener {:address=>"0.0.0.0:5044"}
[2022-08-13T20:02:45,213][INFO ][logstash.javapipeline ][main] Pipeline started {"pipeline.id"=>"main"}
[2022-08-13T20:02:45,313][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2022-08-13T20:02:45,315][INFO ][org.logstash.beats.Server][main][829cd21b7fbde9c57f6074e54675a6dd14081ec403bdd5ea935fd37106249341] Starting server on port: 5044
==> /var/log/elasticsearch/elasticsearch.log <==
[2022-08-13T20:02:52,829][INFO ][o.e.c.m.MetadataMappingService] [elk] [.kibana_8.1.0_001/67wIFep3T4qg_Cn32rVbgg] update_mapping [_doc]
[2022-08-13T20:02:54,059][INFO ][o.e.c.m.MetadataMappingService] [elk] [.kibana_8.1.0_001/67wIFep3T4qg_Cn32rVbgg] update_mapping [_doc]
[2022-08-13T20:03:54,361][WARN ][o.e.x.s.i.SetSecurityUserProcessor] [elk] Creating processor [set_security_user] (tag [null]) on field [_security] but authentication is not currently enabled on this cluster - this processor is likely to fail at runtime if it is used
[2022-08-13T20:03:54,380][INFO ][o.e.c.m.MetadataMappingService] [elk] [.kibana_8.1.0_001/67wIFep3T4qg_Cn32rVbgg] update_mapping [_doc]
[2022-08-13T20:05:09,475][INFO ][o.e.c.m.MetadataMappingService] [elk] [.kibana_8.1.0_001/67wIFep3T4qg_Cn32rVbgg] update_mapping [_doc]

Related

Crunchy Postgres log messages

I am new to Crunchy Postgres, and recently I installed a Crunchy PostgresCluster on an OpenShift environment. After the cluster started, I had a look at the container log messages.
I also checked the script startup.sh, which is called during PostgreSQL startup. In this shell script there are some lines (beginning with echo_info) used for log messages, for example:
echo_info "Starting PostgreSQL.."
But I could not see this message in the logs.
NAME READY STATUS RESTARTS AGE ROLE
demo-instance1-4vtv-0 5/5 Running 0 7h36m replica
demo-instance1-dg7j-0 5/5 Running 0 7h36m replica
demo-instance1-f696-0 5/5 Running 0 7h36m master
:~$ oc logs -f demo-instance1-f696-0 -c database | more
2022-07-08 07:42:31,064 INFO: No PostgreSQL configuration items changed, nothing to reload.
2022-07-08 07:42:31,068 INFO: Lock owner: None; I am demo-instance1-f696-0
2022-07-08 07:42:31,383 INFO: trying to bootstrap a new cluster
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.
The database cluster will be initialized with locale "en_US.utf-8".
The default text search configuration will be set to "english".
Data page checksums are enabled.
fixing permissions on existing directory /pgdata/pg14 ... ok
creating directory /pgdata/pg14_wal ... ok
creating subdirectories ... ok
selecting dynamic shared memory implementation ... posix
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting default time zone ... UTC
creating configuration files ... ok
running bootstrap script ... ok
performing post-bootstrap initialization ... ok
syncing data to disk ... ok
initdb: warning: enabling "trust" authentication for local connections
You can change this by editing pg_hba.conf or using the option -A, or
--auth-local and --auth-host, the next time you run initdb.
Success. You can now start the database server using:
/usr/pgsql-14/bin/pg_ctl -D /pgdata/pg14 -l logfile start
2022-07-08 07:42:35.953 UTC [92] LOG: pgaudit extension initialized
2022-07-08 07:42:35,955 INFO: postmaster pid=92
/tmp/postgres:5432 - no response
2022-07-08 07:42:35.998 UTC [92] LOG: redirecting log output to logging collector process
2022-07-08 07:42:35.998 UTC [92] HINT: Future log output will appear in directory "log".
/tmp/postgres:5432 - accepting connections
/tmp/postgres:5432 - accepting connections
2022-07-08 07:42:37,038 INFO: establishing a new patroni connection to the postgres cluster
2022-07-08 07:42:37,334 INFO: running post_bootstrap
2022-07-08 07:42:37,754 INFO: initialized a new cluster
2022-07-08 07:42:38,039 INFO: no action. I am (demo-instance1-f696-0), the leader with the lock
2022-07-08 07:42:48,504 INFO: no action. I am (demo-instance1-f696-0), the leader with the lock
2022-07-08 07:42:58,476 INFO: no action. I am (demo-instance1-f696-0), the leader with the lock
2022-07-08 07:43:08,497 INFO: no action. I am (demo-instance1-f696-0), the leader with the lock
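One thing that might be worth checking here (a guess on my part, not a confirmed explanation): the echo_info message could be emitted by a container other than the database one being tailed, so grepping the logs of every container in the pod would show whether it appears anywhere:
# Requires a kubectl/oc version that supports --all-containers
oc logs demo-instance1-f696-0 --all-containers=true | grep -i "Starting PostgreSQL"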

Logstash Failed to execute action {:id=>:main, :action_type=>LogStash::ConvergeResult::FailedAction, :message=>"Could not execute action:

I started learning ELK and was trying to set up ELK locally on my Docker Desktop. This process works fine on Windows if I run the services separately, but if I run the services in Docker I get an error.
My Elasticsearch and Kibana are working fine.
Docker command
docker run -it --name=logstash --link elasticsearch:elasticsearch -v D:/logstash/pipeline/:/usr/share/logstash/pipeline/ docker.elastic.co/logstash/logstash:7.9.1
logstash.conf file
input {
  file {
    path => "D:/logs/service.log"
    start_position => "beginning"
  }
}
output {
  stdout {
    codec => rubydebug
  }
  elasticsearch {
    hosts => ["elasticsearch:9200"]
    index => "logs-%{+yyyy.MM.dd}"
  }
}
I get the following error:
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.jruby.ext.openssl.SecurityHelper (file:/tmp/jruby-1/jruby10434374664132949646jopenssl.jar) to field java.security.MessageDigest.provider
WARNING: Please consider reporting this to the maintainers of org.jruby.ext.openssl.SecurityHelper
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
Sending Logstash logs to /usr/share/logstash/logs which is now configured via log4j2.properties
[2020-09-24T03:57:01,161][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"7.9.1", "jruby.version"=>"jruby 9.2.13.0 (2.5.7) 2020-08-03 9a89c94bcc OpenJDK 64-Bit Server VM 11.0.8+10-LTS on 11.0.8+10-LTS +indy +jit [linux-x86_64]"}
[2020-09-24T03:57:01,210][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.queue", :path=>"/usr/share/logstash/data/queue"}
[2020-09-24T03:57:01,227][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.dead_letter_queue", :path=>"/usr/share/logstash/data/dead_letter_queue"}
[2020-09-24T03:57:01,597][INFO ][logstash.agent ] No persistent UUID file found. Generating new UUID {:uuid=>"975ca258-9df4-4e76-a40b-8ba27de762e7", :path=>"/usr/share/logstash/data/uuid"}
[2020-09-24T03:57:02,186][WARN ][logstash.monitoringextension.pipelineregisterhook] xpack.monitoring.enabled has not been defined, but found elasticsearch configuration. Please explicitly set `xpack.monitoring.enabled: true` in logstash.yml
[2020-09-24T03:57:02,191][WARN ][deprecation.logstash.monitoringextension.pipelineregisterhook] Internal collectors option for Logstash monitoring is deprecated and targeted for removal in the next major version.
Please configure Metricbeat to monitor Logstash. Documentation can be found at:
https://www.elastic.co/guide/en/logstash/current/monitoring-with-metricbeat.html
[2020-09-24T03:57:03,038][INFO ][logstash.licensechecker.licensereader] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elasticsearch:9200/]}}
[2020-09-24T03:57:03,226][WARN ][logstash.licensechecker.licensereader] Restored connection to ES instance {:url=>"http://elasticsearch:9200/"}
[2020-09-24T03:57:03,275][INFO ][logstash.licensechecker.licensereader] ES Output version determined {:es_version=>7}
[2020-09-24T03:57:03,281][WARN ][logstash.licensechecker.licensereader] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2020-09-24T03:57:03,441][INFO ][logstash.monitoring.internalpipelinesource] Monitoring License OK
[2020-09-24T03:57:03,442][INFO ][logstash.monitoring.internalpipelinesource] Validated license for monitoring. Enabling monitoring pipeline.
[2020-09-24T03:57:05,116][INFO ][org.reflections.Reflections] Reflections took 35 ms to scan 1 urls, producing 22 keys and 45 values
[2020-09-24T03:57:05,466][INFO ][logstash.outputs.elasticsearchmonitoring][.monitoring-logstash] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elasticsearch:9200/]}}
[2020-09-24T03:57:05,492][WARN ][logstash.outputs.elasticsearchmonitoring][.monitoring-logstash] Restored connection to ES instance {:url=>"http://elasticsearch:9200/"}
[2020-09-24T03:57:05,507][INFO ][logstash.outputs.elasticsearchmonitoring][.monitoring-logstash] ES Output version determined {:es_version=>7}
[2020-09-24T03:57:05,509][WARN ][logstash.outputs.elasticsearchmonitoring][.monitoring-logstash] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2020-09-24T03:57:05,611][INFO ][logstash.outputs.elasticsearchmonitoring][.monitoring-logstash] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearchMonitoring", :hosts=>["http://elasticsearch:9200"]}
[2020-09-24T03:57:05,641][WARN ][logstash.javapipeline ][.monitoring-logstash] 'pipeline.ordered' is enabled and is likely less efficient, consider disabling if preserving event order is not necessary
[2020-09-24T03:57:05,750][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elasticsearch:9200/]}}
[2020-09-24T03:57:05,757][WARN ][logstash.outputs.elasticsearch][main] Restored connection to ES instance {:url=>"http://elasticsearch:9200/"}
[2020-09-24T03:57:05,767][INFO ][logstash.outputs.elasticsearch][main] ES Output version determined {:es_version=>7}
[2020-09-24T03:57:05,768][WARN ][logstash.outputs.elasticsearch][main] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2020-09-24T03:57:05,792][INFO ][logstash.javapipeline ][.monitoring-logstash] Starting pipeline {:pipeline_id=>".monitoring-logstash", "pipeline.workers"=>1, "pipeline.batch.size"=>2, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>2, "pipeline.sources"=>["monitoring pipeline"], :thread=>"#<Thread:0x7f619e13 run>"}
[2020-09-24T03:57:05,805][INFO ][logstash.outputs.elasticsearch][main] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//elasticsearch:9200"]}
[2020-09-24T03:57:05,827][INFO ][logstash.outputs.elasticsearch][main] Using a default mapping template {:es_version=>7, :ecs_compatibility=>:disabled}
[2020-09-24T03:57:05,835][INFO ][logstash.javapipeline ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>3, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>375, "pipeline.sources"=>["/usr/share/logstash/pipeline/logstash.conf"], :thread=>"#<Thread:0x5094c85b run>"}
[2020-09-24T03:57:05,882][INFO ][logstash.outputs.elasticsearch][main] Attempting to install template {:manage_template=>{"index_patterns"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s", "number_of_shards"=>1}, "mappings"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"#timestamp"=>{"type"=>"date"}, "#version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}
[2020-09-24T03:57:06,588][INFO ][logstash.javapipeline ][.monitoring-logstash] Pipeline Java execution initialization time {"seconds"=>0.79}
[2020-09-24T03:57:06,648][INFO ][logstash.javapipeline ][main] Pipeline Java execution initialization time {"seconds"=>0.81}
[2020-09-24T03:57:06,664][INFO ][logstash.javapipeline ][.monitoring-logstash] Pipeline started {"pipeline.id"=>".monitoring-logstash"}
[2020-09-24T03:57:07,847][ERROR][logstash.agent ] Failed to execute action {:id=>:main, :action_type=>LogStash::ConvergeResult::FailedAction, :message=>"Could not execute action: PipelineAction::Create<main>, action_result: false", :backtrace=>nil}
[2020-09-24T03:57:08,054][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2020-09-24T03:57:10,001][INFO ][logstash.javapipeline ] Pipeline terminated {"pipeline.id"=>".monitoring-logstash"}
[2020-09-24T03:57:10,047][INFO ][logstash.runner ] Logstash shut down.
If I use the logstash.conf file as:
input {
  stdin {}
}
output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
  }
}
Then Logstash starts perfectly and the logs start coming into Kibana.
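One hedged observation, since the pipeline only fails with the file input: path => "D:/logs/service.log" is a Windows host path, and the file input runs inside the container, so that path has to exist in the container's filesystem. Whether or not this is what breaks PipelineAction::Create, mounting the log directory and pointing the input at the in-container path seems worth a try (the /usr/share/logs path below is an illustrative choice, not something from the original setup):
docker run -it --name=logstash --link elasticsearch:elasticsearch \
  -v D:/logstash/pipeline/:/usr/share/logstash/pipeline/ \
  -v D:/logs/:/usr/share/logs/ \
  docker.elastic.co/logstash/logstash:7.9.1
# ...and in logstash.conf:
#   path => "/usr/share/logs/service.log"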

Logstash container is getting stopped automatically

I'm trying to start the Docker container. I am using a docker-elk.yml file to generate the ELK containers. The Elasticsearch and Kibana containers are working fine, but the Logstash container starts, lets me into its bash, and then stops automatically after some time.
Logs of container:
[2019-04-11T08:48:26,882][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"6.6.0"}
[2019-04-11T08:48:33,497][INFO ][logstash.pipeline ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[2019-04-11T08:48:34,062][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elasticsearch:9200/]}}
[2019-04-11T08:48:34,310][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://elasticsearch:9200/"}
[2019-04-11T08:48:34,409][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>6}
[2019-04-11T08:48:34,415][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the type event field won't be used to determine the document _type {:es_version=>6}
[2019-04-11T08:48:34,469][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//elasticsearch:9200"]}
[2019-04-11T08:48:34,486][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
[2019-04-11T08:48:34,503][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[2019-04-11T08:48:34,960][INFO ][logstash.inputs.beats ] Beats inputs: Starting input listener {:address=>"0.0.0.0:5000"}
[2019-04-11T08:48:34,985][INFO ][logstash.pipeline ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#"}
[2019-04-11T08:48:35,077][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2019-04-11T08:48:35,144][INFO ][org.logstash.beats.Server] Starting server on port: 5000
[2019-04-11T08:48:35,499][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2019-04-11T08:48:50,591][INFO ][logstash.outputs.file ] Opening file {:path=>"/usr/share/logstash/output.log"}
[2019-04-11T13:16:51,947][WARN ][logstash.runner ] SIGTERM received. Shutting down.
[2019-04-11T13:16:56,498][INFO ][logstash.pipeline ] Pipeline has terminated {:pipeline_id=>"main", :thread=>"#"}
Does it try to require a relative path? That's been removed in Ruby 1.9.
uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/rubygems/core_ext/kernel_require.rb:59:in `require':
It seems your ruby installation is missing psych (for YAML output).
To eliminate this warning, please install libyaml and reinstall your ruby.
[ERROR] 2019-04-11 14:18:02.058 [main] Logstash - java.lang.IllegalStateException: Logstash stopped processing because of an error:
(GemspecError) There was a LoadError while loading logstash-core.gemspec:
load error: psych -- java.lang.RuntimeException: BUG: we can not copy embedded jar to temp directory
Does it try to require a relative path? That's been removed in Ruby 1.9.
uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/rubygems/core_ext/kernel_require.rb:59:in `require':
It seems your ruby installation is missing psych (for YAML output).
To eliminate this warning, please install libyaml and reinstall your ruby.
[ERROR] 2019-04-11 13:42:01.450 [main] Logstash - java.lang.IllegalStateException: Logstash stopped processing because of an error: (GemspecError) There was a LoadError while loading logstash-core.gemspec: load error: psych -- java.lang.RuntimeException: BUG: we can not copy embedded jar to temp directory
I have tried removing the tmp folder in the container, but it is not working.
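Since the fatal error is "BUG: we can not copy embedded jar to temp directory", it may help to inspect the temp directory inside the container while it is still up (purely a diagnostic idea, not a confirmed fix; the container name placeholder below is whatever docker ps shows for the Logstash service):
# Check free space and permissions on /tmp inside the running Logstash container
docker exec -it <logstash-container> df -h /tmp
docker exec -it <logstash-container> ls -ld /tmp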

Cannot enable AlwaysOn SQL in DSE

I get this error when starting AlwaysOn SQL; I've tried many ways but the result is still the same. Any ideas why?
I'm using 1 cluster, 1 analytics+search datacenter, and 2 Ubuntu 16.04 nodes.
INFO [ALWAYSON-SQL] 2019-02-14 11:36:01,348 ALWAYSON-SQL AlwaysOnSqlRunner.scala:304 - Shutting down AlwaysOn SQL.
INFO [ALWAYSON-SQL] 2019-02-14 11:36:01,617 ALWAYSON-SQL AlwaysOnSqlRunner.scala:328 - Set status to stopped
INFO [ALWAYSON-SQL] 2019-02-14 11:36:01,620 ALWAYSON-SQL AlwaysOnSqlRunner.scala:382 - Reserve port for AlwaysOn SQL
INFO [ALWAYSON-SQL] 2019-02-14 11:36:04,621 ALWAYSON-SQL AlwaysOnSqlRunner.scala:375 - Release reserved port
INFO [ALWAYSON-SQL] 2019-02-14 11:36:04,622 ALWAYSON-SQL AlwaysOnSqlRunner.scala:805 - Set InCluster token to DseFs client
INFO [ForkJoinPool-1-worker-1] 2019-02-14 11:36:04,650 AlwaysOnSqlRunner.scala:740 - dsefs server heartbeat response: pong
INFO [ForkJoinPool-1-worker-3] 2019-02-14 11:36:04,757 AlwaysOnSqlRunner.scala:704 - Create DseFs directory /var/log/spark/alwayson_sql
INFO [ForkJoinPool-1-worker-3] 2019-02-14 11:36:04,758 AlwaysOnSqlRunner.scala:805 - Set InCluster token to DseFs client
ERROR [ForkJoinPool-1-worker-3] 2019-02-14 11:36:04,788 AlwaysOnSqlRunner.scala:722 - Failed to check dsefs directory alwayson_sql
com.datastax.bdp.fs.model.AccessDeniedException: Insufficient permissions to path /
at com.datastax.bdp.fs.model.DseFsJsonProtocol$ThrowableReader$.read(DseFsJsonProtocol.scala:258)
at com.datastax.bdp.fs.model.DseFsJsonProtocol$ThrowableReader$.read(DseFsJsonProtocol.scala:232)
at spray.json.JsValue.convertTo(JsValue.scala:31)
at com.datastax.bdp.fs.rest.RestResponse$stateMachine$macro$331$1.apply(RestResponse.scala:48)
at com.datastax.bdp.fs.rest.RestResponse$stateMachine$macro$331$1.apply(RestResponse.scala:44)
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:404)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:465)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:884)
at java.lang.Thread.run(Thread.java:748)
INFO [ALWAYSON-SQL] 2019-02-14 11:36:04,788 ALWAYSON-SQL AlwaysOnSqlRunner.scala:247 - ALWAYSON-SQL caused an exception in state RUNNING : com.datastax.bdp.fs.model.AccessDeniedException: Insufficient permissions to path /
com.datastax.bdp.fs.model.AccessDeniedException: Insufficient permissions to path /
at com.datastax.bdp.fs.model.DseFsJsonProtocol$ThrowableReader$.read(DseFsJsonProtocol.scala:258)
at com.datastax.bdp.fs.model.DseFsJsonProtocol$ThrowableReader$.read(DseFsJsonProtocol.scala:232)
at spray.json.JsValue.convertTo(JsValue.scala:31)
at com.datastax.bdp.fs.rest.RestResponse$stateMachine$macro$331$1.apply(RestResponse.scala:48)
at com.datastax.bdp.fs.rest.RestResponse$stateMachine$macro$331$1.apply(RestResponse.scala:44)
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:404)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:465)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:884)
at java.lang.Thread.run(Thread.java:748)
I have seen this problem too! It was a permissions problem in DSEFS! To fix it, log in with the root Cassandra user and change the ownership of your alwayson log directory to the alwayson user.
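A rough sketch of what that could look like; the role name, password placeholder, and exact dsefs shell syntax are assumptions on my side, so check the DataStax docs for your DSE version:
# Authenticate as a superuser and fix ownership of the AlwaysOn SQL directory in DSEFS
dse -u cassandra -p <password> fs "ls /var/log/spark"
dse -u cassandra -p <password> fs "chown <alwayson_sql_role> /var/log/spark/alwayson_sql"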

Apache NiFi is not starting up

I am trying to start Apache NiFi version 1.2.0 on a Windows 8 machine. It used to start properly, but after I restarted the system NiFi does not start at all. When I check the status, it keeps reporting "Apache NiFi not running".
Below are the logs from the nifi.bootstrap.log file:
2017-07-05 15:41:57,105 WARN [NiFi Bootstrap Command Listener] org.apache.nifi.bootstrap.RunNiFi Failed to set permissions so that only the owner can read pid file E:\softwares\nifi-1.2.0\bin\..\run\nifi.pid; this may allows others to have access to the key needed to communicate with NiFi. Permissions should be changed so that only the owner can read this file
2017-07-05 15:41:57,142 WARN [NiFi Bootstrap Command Listener] org.apache.nifi.bootstrap.RunNiFi Failed to set permissions so that only the owner can read status file E:\softwares\nifi-1.2.0\bin\..\run\nifi.status; this may allows others to have access to the key needed to communicate with NiFi. Permissions should be changed so that only the owner can read this file
2017-07-05 15:41:57,168 INFO [NiFi Bootstrap Command Listener] org.apache.nifi.bootstrap.RunNiFi Apache NiFi now running and listening for Bootstrap requests on port 50765
2017-07-05 15:43:12,077 ERROR [NiFi logging handler] org.apache.nifi.StdErr Failed to start web server: Unable to start Flow Controller.
2017-07-05 15:43:12,078 ERROR [NiFi logging handler] org.apache.nifi.StdErr Shutting down...
2017-07-05 15:43:14,501 INFO [main] org.apache.nifi.bootstrap.RunNiFi NiFi never started. Will not restart NiFi
Stack trace from nifi.app.log:
2017-07-05 15:43:12,077 WARN [main] org.apache.nifi.web.server.JettyServer Failed to start web server... shutting down.
org.apache.nifi.web.NiFiCoreException: Unable to start Flow Controller.
at org.apache.nifi.web.contextlistener.ApplicationStartupContextListener.contextInitialized(ApplicationStartupContextListener.java:88)
at org.eclipse.jetty.server.handler.ContextHandler.callContextInitialized(ContextHandler.java:876)
at org.eclipse.jetty.servlet.ServletContextHandler.callContextInitialized(ServletContextHandler.java:532)
at org.eclipse.jetty.server.handler.ContextHandler.startContext(ContextHandler.java:839)
at org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:344)
at org.eclipse.jetty.webapp.WebAppContext.startWebapp(WebAppContext.java:1480)
at org.eclipse.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1442)
at org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:799)
at org.eclipse.jetty.servlet.ServletContextHandler.doStart(ServletContextHandler.java:261)
at org.eclipse.jetty.webapp.WebAppContext.doStart(WebAppContext.java:540)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:131)
at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:113)
at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:113)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:131)
at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:105)
at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:113)
at org.eclipse.jetty.server.handler.gzip.GzipHandler.doStart(GzipHandler.java:290)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:131)
at org.eclipse.jetty.server.Server.start(Server.java:452)
at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:105)
at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:113)
at org.eclipse.jetty.server.Server.doStart(Server.java:419)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
at org.apache.nifi.web.server.JettyServer.start(JettyServer.java:695)
at org.apache.nifi.NiFi.<init>(NiFi.java:160)
at org.apache.nifi.NiFi.main(NiFi.java:267)
Caused by: java.io.IOException: Expected to read a Sentinel Byte of '1' but got a value of '0' instead
at org.apache.nifi.repository.schema.SchemaRecordReader.readRecord(SchemaRecordReader.java:65)
at org.apache.nifi.controller.repository.SchemaRepositoryRecordSerde.deserializeRecord(SchemaRepositoryRecordSerde.java:115)
at org.apache.nifi.controller.repository.SchemaRepositoryRecordSerde.deserializeEdit(SchemaRepositoryRecordSerde.java:109)
at org.apache.nifi.controller.repository.SchemaRepositoryRecordSerde.deserializeEdit(SchemaRepositoryRecordSerde.java:46)
at org.wali.MinimalLockingWriteAheadLog$Partition.recoverNextTransaction(MinimalLockingWriteAheadLog.java:1096)
at org.wali.MinimalLockingWriteAheadLog.recoverFromEdits(MinimalLockingWriteAheadLog.java:459)
at org.wali.MinimalLockingWriteAheadLog.recoverRecords(MinimalLockingWriteAheadLog.java:301)
at org.apache.nifi.controller.repository.WriteAheadFlowFileRepository.loadFlowFiles(WriteAheadFlowFileRepository.java:381)
at org.apache.nifi.controller.FlowController.initializeFlow(FlowController.java:712)
at org.apache.nifi.controller.StandardFlowService.initializeController(StandardFlowService.java:953)
at org.apache.nifi.controller.StandardFlowService.load(StandardFlowService.java:534)
at org.apache.nifi.web.contextlistener.ApplicationStartupContextListener.contextInitialized(ApplicationStartupContextListener.java:72)
... 28 common frames omitted
Thanks in advance
After Googling this error, "Caused by: java.io.IOException: Expected to read a Sentinel Byte of '1' but got a value of '0' instead", I found that it indicates a partial write to the repositories.
Here are a couple of things you can check/try to bring your dataflow back online:
Check that your disks are not full.
Did you launch NiFi with the same user? Did you run it with administrator privileges?
You can back up/move your repositories and try to start NiFi with empty repositories; you will still have your dataflows, but any file that was processing when you shut down will be gone (see the sketch below).
Could you please try that?
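A rough sketch of the "backup/move your repositories" step, written as shell commands (on Windows the equivalent move operations apply); the directory names are the NiFi defaults and may differ if nifi.properties was customized:
# Stop NiFi first, then move the repositories aside so NiFi recreates empty ones
cd /path/to/nifi-1.2.0    # E:\softwares\nifi-1.2.0 in this case
mkdir repo_backup
mv flowfile_repository content_repository provenance_repository database_repository repo_backup/
# Start NiFi again and watch nifi-app.log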
I think the issue is an incompatible Java version; use Java 8.
If you haven't set JAVA_HOME, set it in your environment variables to a path like "C:/program files/jdk1.8".
There is a Jira issue about NiFi run with Java 9, and it is not resolved yet:
https://issues.apache.org/jira/browse/NIFI-4419
