Parse a log file and send info to sensu - parsing

Is there a way to make a Sensu check that takes a .log file as input, parses it, and returns selected info to InfluxDB?
I'm very new to this, so maybe I didn't describe my problem the best way.

I found the best way to do this is with Logstash (mostly because I use ELK for general log aggregation anyway).
Set up a Logstash server.
https://www.elastic.co/products/logstash
Install logstash-forwarder on the client(s). Configure logstash-forwarder to read the logs you want and to send them to your Logstash server.
https://github.com/elastic/logstash-forwarder
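For reference, a minimal logstash-forwarder config (usually /etc/logstash-forwarder.conf) could look like the sketch below; the hostname, port, paths, and certificate location are placeholders, and the port must match the lumberjack input defined on the server:
{
  "network": {
    "servers": [ "logstash.example.com:5555" ],
    "ssl ca": "/etc/pki/tls/certs/logstash-forwarder.crt"
  },
  "files": [
    {
      "paths": [ "/var/log/myapp/*.log" ],
      "fields": { "type": "logs" }
    }
  ]
}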
In the Logstash server's config:
Define a lumberjack input for the log you want to send to Sensu (https://www.elastic.co/guide/en/logstash/current/plugins-inputs-lumberjack.html).
E.g.:
input {
  lumberjack {
    port => 5555
    # the lumberjack protocol is TLS-based; these certificate paths are examples
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
    type => "logs"
    tags => ["lumberjack", "influxdb"]
  }
}
Do your processing/filtering.
E.g.:
filter {
  if ("influxdb" in [tags]) {
    ...
  }
}
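What goes inside depends entirely on your logs. As a purely hypothetical sketch, assuming lines such as "2017-09-08 17:23:38 response_time=123", you could pull out a numeric field for InfluxDB with grok:
filter {
  if ("influxdb" in [tags]) {
    # hypothetical log format; replace with a pattern that matches your logs
    grok {
      match => { "message" => "%{TIMESTAMP_ISO8601:time} response_time=%{NUMBER:response_time:float}" }
    }
  }
}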
Define an InfluxDB output (https://www.elastic.co/guide/en/logstash/current/plugins-outputs-influxdb.html).
E.g.:
output {
  influxdb {
    ...
  }
}
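A filled-in version might look roughly like this; the host and database names are placeholders, and the exact option names can differ between plugin versions, so check the linked docs:
output {
  influxdb {
    host        => "influxdb.example.com"
    db          => "logs"
    measurement => "app_metrics"
    data_points => { "response_time" => "%{response_time}" }
  }
}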
This method would skip Sensu altogether. If you do want to send the logs to Sensu and see the output in Uchiwa, it would involve setting up some Sensu-friendly info in your Logstash filter:
filter {
  if ("influxdb" in [tags]) {
    # add_field must live inside a filter plugin such as mutate
    mutate {
      add_field => {
        "name"    => "SensuCheckName"
        "handler" => "SensuHandlerName"
        "output"  => "the stuff you want to send to sensu"
        "status"  => "1"   # Sensu status convention: 0 = OK, 1 = warning, 2 = critical
      }
    }
  }
}
Then send the logs to Sensu's RabbitMQ transport (https://www.elastic.co/guide/en/logstash/current/plugins-outputs-rabbitmq.html):
output {
  rabbitmq {
    exchange      => "results"
    exchange_type => "direct"
    host          => "192.168.0.5 or whatever it is"
    vhost         => "/sensu"
    user          => "sensuUser"
    password      => "whateverItIs"
  }
}
Define a Sensu handler for this (the name given above in the Logstash filter) and do any extra processing there before passing the data to InfluxDB.
If you haven't already got Sensu sending data to InfluxDB, go here: https://github.com/sensu-plugins/sensu-plugins-influxdb
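For completeness, a minimal pipe handler definition on the Sensu server could look like the snippet below; the handler name matches the one set in the Logstash filter above, and the command is a placeholder for whichever sensu-plugins-influxdb script you end up using:
{
  "handlers": {
    "SensuHandlerName": {
      "type": "pipe",
      "command": "/path/to/your-influxdb-handler.rb"
    }
  }
}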

Related

How to select field with whitespace in logstash

I have this field in the raw log:
"user ip port" : 192.xxx.xx.xx:8080
I want to process those fields using grok like this:
if "user ip port" {
grok{
match => { "c&c_ip_port" => ["^%{DATA:ip}\:%{DATA:port}$"] }
}
}
How do I select those fields in the if statement?
I already tried using ["user device ip"] and [user ip port], but the field won't be processed by grok.
Thanks

logstash elastic not using source timestamp

The current setup looks like this:
Spring Boot -> log-file.json (using logstash-logback-encoder) -> filebeat -> logstash -> elastic
I am able to see logs appearing in Elasticsearch OK. However, it's not using the dates provided in the log file; it's creating them on the fly.
json-example
{
  "#timestamp":"2017-09-08T17:23:38.677+01:00",
  "#version":1,
  "message":"A received request - withtimestanp",
  etc..
My logstash.conf looks like this.
input {
  beats {
    port => 5044
    codec => "json"
  }
}
output {
  elasticsearch {
    hosts => [ 'elasticsearch' ]
  }
}
If you take a look at the Kibana output for the log, it has the 9th, not the 8th (when I actually created the log).
I have now resolved this. Details of the fix are below.
logback.xml
<appender name="stash" class="ch.qos.logback.core.rolling.RollingFileAppender">
<filter class="ch.qos.logback.classic.filter.ThresholdFilter">
<level>info</level>
</filter>
<file>/home/rob/projects/scratch/log-tracing-demo/build/logs/tracing-A.log</file>
<rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
<fileNamePattern>/home/rob/projects/scratch/log-tracing-demo/build/logs/tracing-A.log.%d{yyyy-MM-dd}</fileNamePattern>
<maxHistory>30</maxHistory>
</rollingPolicy>
<encoder class="net.logstash.logback.encoder.LogstashEncoder" >
<includeContext>false</includeContext>
<fieldNames>
<message>msg</message>
</fieldNames>
</encoder>
</appender>
I renamed the message field to msg, because Logstash expects a different default for message when events come in from Beats.
json-file.log
Below is what the sample JSON output looks like:
{"#timestamp":"2017-09-11T14:32:47.920+01:00","#version":1,"msg":"Unregistering JMX-exposed beans","logger_name":"org.springframework.jmx.export.annotation.AnnotationMBeanExporter","thread_name":"Thread-19","level":"INFO","level_value":20000}
filebeat.yml
The JSON settings below now handle the timestamp issue where it didn't use the time from the log file.
They also move the JSON into the root of the event sent to Logstash, i.e. it's not nested within a Beats JSON event; it's part of the root.
filebeat.prospectors:
  - input_type: log
    paths:
      - /mnt/log/*.log
    json.overwrite_keys: true
    json.keys_under_root: true
    fields_under_root: true

output.logstash:
  hosts: ['logstash:5044']
logstash.conf
Using msg rather than message resolves the JSON parse error; the original data is now in the message field. See here:
https://discuss.elastic.co/t/logstash-issue-with-json-input-from-beats-solved/100039
input {
  beats {
    port => 5044
    codec => "json"
  }
}
filter {
  mutate {
    rename => {"msg" => "message"}
  }
}
output {
  elasticsearch {
    hosts => [ 'elasticsearch' ]
    user => 'elastic'
    password => 'changeme'
  }
}

Icinga2 Cluster?

I'm trying to configure an Icinga2 master server with 2 clients for a start. I want to configure everything on the master and synchronize the configs to the clients.
This already works, but if a client goes down, the master says it is still up, because the clients are checking themselves.
The tricky thing is that I can't work with IPs, because all IPs are dynamic and I can't register a dyn-DNS name for every server. Later there will be 30-50 servers.
Hope someone can help me.
You can use puppet-icinga2, which allows collecting information about nodes. On the client side you'd create exported resources (Puppet code follows):
@@icinga2::object::host { $::fqdn:
  display_name  => $::fqdn,
  address       => $::ipaddress_eth0,
  check_command => 'hostalive',
  target        => "/etc/icinga2/zones.d/${::domain}/hosts.conf",
  zone          => $::fqdn,
}

@@::icinga2::object::endpoint { "$::fqdn":
  host => "$::ipaddress_eth0",
}

@@::icinga2::object::zone { "$::fqdn":
  endpoints => [ "$::fqdn", ],
  parent    => 'master',
}
These will be propagated to the master (PuppetDB is required):
Icinga2::Object::Host <<| |>> { }
Icinga2::Object::Endpoint <<| |>> { }
Icinga2::Object::Zone <<| |>> { }
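For illustration, the exported resources above end up rendering something like the following on the master (hostname and address are made up):
// e.g. /etc/icinga2/zones.d/example.com/hosts.conf
object Host "client1.example.com" {
  display_name = "client1.example.com"
  address = "10.0.0.21"
  check_command = "hostalive"
}

object Endpoint "client1.example.com" {
  host = "10.0.0.21"
}

object Zone "client1.example.com" {
  endpoints = [ "client1.example.com" ]
  parent = "master"
}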
As long as the Puppet master has stable DNS, you'll have an up-to-date zones config. After a Puppet agent run on a client, that host's information gets registered in PuppetDB. Upon the next Puppet agent run on the master, it will have up-to-date information about the node.
Then you can implement a check from the Icinga master:
apply Service "ping" to Host {
import "generic-service"
check_command = "ping"
zone = "master" //execute check from master zone
assign where "linux-server" in host.groups
}
Note that there are also other automation integrations, such as Ansible, which might offer similar functionality.

Two different syntax in grok

A normal event could be like this:
2015-11-20 18:50:33,739 [TRE01_0101] [76] [10.117.10.220]
but sometimes I have a log with a "default" IP:
2015-11-04 23:14:27,469 [TRE01_0101] [40] [default]
If I have defined in grok a [SYNTAX:SEMANTIC] pattern as follows:
grok {
  match => { "message" => "%{TIMESTAMP_ISO8601:time} \[%{DATA:instance}\] \[%{NUMBER:numeric}\] \[%{IP:client}\]" }
}
How can I parse a log that contains default as the IP?
Right now I'm getting a _grokparsefailure because "default" does not match the IP syntax.
Thanks in advance
You can group things together and then make them conditional:
(%{IP:client}|default)
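Applied to the pattern from the question, that gives something like:
grok {
  match => { "message" => "%{TIMESTAMP_ISO8601:time} \[%{DATA:instance}\] \[%{NUMBER:numeric}\] \[(%{IP:client}|default)\]" }
}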

Parse docker logs with logstash

I have a Docker container that logs to stdout/stderr. Docker saves its output into /var/lib/docker/containers//-logs.json
The log has lines with the following structure:
{"log":"This is a message","stream":"stderr","time":"2015-03-12T19:27:27.310818102Z"}
Which input/codec/filter should I use to get only the log field as the message?
Thanks!
Use the json codec to parse the JSON string (you could instead use the json filter), then rename the "log" field to "message" with the mutate filter and finally use the date filter to parse the "time" field.
filter {
  mutate {
    rename => ["log", "message"]
  }
  date {
    match => ["time", "ISO8601"]
    remove_field => ["time"]
  }
}
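For the input side, a sketch using the json codec on a file input (the path glob is illustrative; point it at wherever your container's JSON log files actually live) could be:
input {
  file {
    # the Docker JSON log files on the host
    path  => "/var/lib/docker/containers/*/*.json"
    codec => "json"
  }
}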
