Logstash geolocation not functioning properly

My entire system crashes after I change the configuration to add a geolocation field.
My system runs correctly when my config looks like this:
input {
  syslog {
    host => "localhost4"
    port => 5140
    type => "system"
  }
}
filter {
  grok { match => { message => [ ".*ipaddr: %{IP:ipaddr}.*" ] }}
  grok { match => { message => [ ".*dnsname: %{HOSTNAME:query_name}.*" ] }}
  grok { match => { message => [ ".*mal_rank: %{NUMBER:malrank:int}.*" ] }}
  grok { match => { message => [ ".*packet_size: %{NUMBER:packetsize:int}.*" ] }}
  grok { match => { message => [ ".*source_ip: %{IP:sourceip}.*" ] }}
  grok { match => { message => [ ".*dest_ip: %{IP:dest_ip}.*" ] }}
  grok { match => { message => [ ".*source_ip: %{IP:src_ip}.*" ] }}
  grok { match => { message => [ ".*sport: %{NUMBER:sport:int}.*" ] }}
}
output {
  elasticsearch { hosts => ["localhost4:9200"] }
  stdout { codec => rubydebug }
}
But when I add the geoip block to my filter, the config becomes:
input {
  syslog {
    host => "localhost4"
    port => 5140
    type => "system"
  }
}
filter {
  grok { match => { message => [ ".*ipaddr: %{IP:ipaddr}.*" ] }}
  grok { match => { message => [ ".*dnsname: %{HOSTNAME:query_name}.*" ] }}
  grok { match => { message => [ ".*mal_rank: %{NUMBER:malrank:int}.*" ] }}
  grok { match => { message => [ ".*packet_size: %{NUMBER:packetsize:int}.*" ] }}
  grok { match => { message => [ ".*source_ip: %{IP:sourceip}.*" ] }}
  grok { match => { message => [ ".*dest_ip: %{IP:dest_ip}.*" ] }}
  grok { match => { message => [ ".*source_ip: %{IP:src_ip}.*" ] }}
  grok { match => { message => [ ".*sport: %{NUMBER:sport:int}.*" ] }}
  geoip {
    source => "ipaddr"
    target => "geoip"
    add_tag => ["geoip"]
    database => "/etc/logstash/GeoLiteCity.dat"
  }
}
output {
  elasticsearch { hosts => ["localhost4:9200"] }
  stdout { codec => rubydebug }
}
I can run the curl command and get the correct output:
curl http://localhost:9200/logstash-2016.04.19/_mapping/system/field/geoip.location?pretty
which returns:
{
  "logstash-2016.04.19" : {
    "mappings" : {
      "system" : {
        "geoip.location" : {
          "full_name" : "geoip.location",
          "mapping" : {
            "location" : {
              "type" : "geo_point"
            }
          }
        }
      }
    }
  }
}
But instead of getting anything, Logstash stops reading from syslog.
Any suggestions?

I'm not sure what the delay is, but if I let the system wait for about an hour it starts processing logs again. I just wanted to confirm that this code does function properly.
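If the stall only appears once the geoip filter is in place, one way to narrow it down (a sketch, not a confirmed fix) is to run the filter in an isolated pipeline with a stdin input and no Elasticsearch output, so the syslog listener and the index are out of the picture. The database path and grok pattern below are simply copied from the config above:
input {
  stdin { }
}
filter {
  grok { match => { message => [ ".*ipaddr: %{IP:ipaddr}.*" ] }}
  geoip {
    source => "ipaddr"
    target => "geoip"
    add_tag => ["geoip"]
    # same database path as above; adjust if your GeoLiteCity.dat lives elsewhere
    database => "/etc/logstash/GeoLiteCity.dat"
  }
}
output {
  stdout { codec => rubydebug }
}
Feeding it a single line such as "ipaddr: 8.8.8.8" should print an event with a geoip block almost immediately if the database loads cleanly; if this minimal pipeline also hangs, the delay is in loading or querying GeoLiteCity.dat rather than in the syslog input or the Elasticsearch output.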

Related

Add geo point in Elastic

I use ELK and Filebeat. I send a lot of logs with distinct fields.
Logstash config:
input {
  beats {
    port => 5044
    include_codec_tag => false
  }
}
filter {
  if [type] == "json" {
    json {
      source => "message"
      target => "msg"
    }
    mutate {
      remove_field => ["msg.ecs.version", "ecs.version", "@version"]
    }
  }
  if [type] != "json" {
    grok {
      match => {
        message => ["time=\"%{TIMESTAMP_ISO8601:time}\""]
      }
    }
    date {
      match => [ "time", "YYYY-MM-dd'T'HH:mm:ssZZ"]
      target => "time"
    }
  }
}
output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
    sniffing => true
    manage_template => false
    index => "%{[source][project]}-%{[source][application]}-%{+YYYY.MM.dd}"
  }
}
Some of my messages contain a location:
{
  "location": {
    "lat": 11.11,
    "lon": 22.22
  }
}
In Elasticsearch I can see my location (msg.location.lat and msg.location.lon), but I don't know how to convert it to a geo_point.
As I understand it, the current index mapping is created either by the Logstash plugin or by Elasticsearch's default template. What should I write, and where, to have my location mapped as a geo_point?
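Since the output sets manage_template => false and the index name is built from event fields, Elasticsearch falls back to dynamic mapping, which never produces a geo_point, so the mapping has to come from an index template created directly in Elasticsearch. A minimal sketch, assuming Elasticsearch 7.x; the pattern "myproject-*" is a placeholder for whatever your %{[source][project]}-%{[source][application]} indices are actually named:
curl -XPUT "http://elasticsearch:9200/_template/msg_location" -H 'Content-Type: application/json' -d'
{
  "index_patterns": ["myproject-*"],
  "mappings": {
    "properties": {
      "msg": {
        "properties": {
          "location": { "type": "geo_point" }
        }
      }
    }
  }
}'
A template only applies to indices created after it exists, so the geo_point mapping will show up on the next daily index (or after reindexing an existing one).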

logstash change type format

I have a RoR application whose admin dashboard lets the admin observe the location of his employees. In my case I use ELK to gather employee information containing latitude and longitude, which is sent to my map as they move. My problem is that Logstash creates a daily index based on a template, but recently I found that every field in the index has its type changed to text when the index is created.
This is the JSON that Logstash reads:
{"driver_id": 31,"driver_email": "ankith.ravindran@mailinator.com","location": {"latitude": "-35.2824767","longitude": "149.1326453"},"created_at": "2021-06-29 14:28:47", "required_matches": 1, "type": "location"}
This is my logstash.conf file:
input {
  file {
    path => ["/usr/share/logstash/MPD_LOCATION/*",
             "/usr/share/logstash/MPD_LOCATION/*/*",
             "/usr/share/logstash/MPD_LOCATION/*/*/*",
             "/usr/share/logstash/MPD_LOCATION/*/*/*/*",
             "/usr/share/logstash/MPD_LOCATION/*/*/*/*/*"]
    start_position => "beginning"
    type => "json"
    sincedb_path => "/dev/null"
  }
}
filter {
  mutate {
    gsub => ["message","/}+({)/", "}::{"]
  }
  mutate {
    gsub => ["message","/}+( )/", "}::"]
  }
  split {
    field => "message"
    terminator => "::"
  }
  json { source => "message" }
  mutate {
    add_field => { "uuid" => "D%{driver_id}T%{created_at}" }
    rename => {
      "[location][latitude]" => "[location][lat]"
      "[location][longitude]" => "[location][lon]"
    }
    convert => {
      "[location][lat]" => "float"
      "[location][lon]" => "float"
    }
  }
}
output {
  if ([type] == "location") {
    elasticsearch {
      hosts => "http://elasticsearch:9200"
      index => "live_locations_%{+YYYY_MM_dd}"
      # manage_template => true
      template => "/usr/share/logstash/Template/live_locations.json"
      template_name => "live_locations"
      # template_overwrite => true
      document_id => "%{uuid}"
    }
  } else if ([type] == "app_info") {
    elasticsearch {
      hosts => "http://elasticsearch:9200"
      index => "app_info_%{+YYYY_MM_dd}"
      document_id => "%{uuid}"
    }
  }
  stdout { codec => rubydebug }
}
This is my template file:
{
  "settings": {
    "index": {
      "number_of_shards": 5,
      "number_of_replicas": 1
    }
  },
  "mappings": {
    "properties": {
      "driver_id": { "type": "integer" },
      "email": { "type": "text" },
      "location": { "type": "geo_point" },
      "app-platform": { "type": "text" },
      "app-version": { "type": "text" },
      "created_at": { "type": "date", "format": "yyyy-MM-dd HH:mm:ss||yyyy-MM-dd||epoch_millis"},
      "required_matches": { "type": "integer" }
    }
  }
}
For example, I defined the type of created_at as date, but when the index is created the field comes back as text, and I can't understand what happened; likewise the location field comes back as float, so I can't use it as a geo_point. I should add that I use ELK 7.13, running on Docker.
Update: I have two types of JSON: one returns only the location of the employee, the other returns only the app_version and app_platform of the app the employee used.
Update 2: I changed my input from Logstash to Filebeat, but I still have the same problem.
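One thing worth checking (an assumption based on the config shown, not something confirmed in the post): on Elasticsearch 7.x the legacy template endpoint rejects a template that has no index_patterns entry, so if the file above is installed as-is through the Logstash template / template_name options, the install fails silently from the index's point of view and dynamic mapping takes over, which would produce exactly the text/float types described. A sketch of the template file with the pattern for the daily indices added:
{
  "index_patterns": ["live_locations_*"],
  "settings": {
    "index": {
      "number_of_shards": 5,
      "number_of_replicas": 1
    }
  },
  "mappings": {
    "properties": {
      "driver_id": { "type": "integer" },
      "email": { "type": "text" },
      "location": { "type": "geo_point" },
      "app-platform": { "type": "text" },
      "app-version": { "type": "text" },
      "created_at": { "type": "date", "format": "yyyy-MM-dd HH:mm:ss||yyyy-MM-dd||epoch_millis" },
      "required_matches": { "type": "integer" }
    }
  }
}
Templates only take effect for indices created after they exist, so the change would only be visible on the next day's live_locations_* index or after deleting and recreating today's.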

Convert logstash filter to fluentd

I'm really new to Fluentd configuration and need help converting this Logstash config to Fluentd to get started:
filter {
  if [syslog5424_host] =~ /apilog/ {
    if [syslog5424_msg] =~ /\"ApplicationType\"\:\"API\"/ {
      json {
        source => "syslog5424_msg"
        # Remove syslog5424_msg field only if json filter is successful
        remove_field => ["syslog5424_msg", "syslog5424_sd", "syslog5424_proc", "syslog5424_pri", "syslog5424_ver", "syslog_facility", "syslog_facility_code"]
      }
      mutate {
        add_tag => ["API"]
        replace => { "type" => "api-dev" }
      }
    }
    else {
      mutate {
        add_tag => ["API"]
      }
    }
  }
}

Failed to parse date field [0] with format [MMM, YY] with elastic search 5.0

I am trying to get the date parsed into a string format of month and two-digit year, like "JAN, 92". My query is as below:
{
  "size" => 0,
  "query" => {
    "bool" => {
      "must" => [
        {
          "term" => {
            "checkin_progress_for" => {
              "value" => "Goal"
            }
          }
        },
        {
          "term" => {
            "goal_owner_id" => {
              "value" => "#{current_user.access_key}"
            }
          }
        }
      ]
    }
  },
  "aggregations" => {
    "chekins_over_time" => {
      "range" => {
        "field" => "checkin_at",
        "format" => "MMM, YY",
        "ranges" => [
          {
            "from" => "now-6M",
            "to" => "now"
          }
        ]
      },
      "aggs" => {
        "checkins_monthly" => {
          "date_histogram" => {
            "field" => "checkin_at",
            "format" => "MMM, YY",
            "interval" => "month",
            "min_doc_count" => 0,
            "missing" => 0,
            "extended_bounds" => {
              "min" => "now-6M",
              "max" => "now"
            }
          }
        }
      }
    }
  }
}
It throws the following error:
elasticsearch.transport.RemoteTransportException: [captia-america][127.0.0.1:9300][indices:data/read/search[phase/query]]
Caused by: elasticsearch.ElasticsearchParseException: failed to parse date field [0] with format [MMM, YY]
If I remove the "MMM, YY" format and put the normal date format, it works.
What could be the solution to rectify this? Help appreciated.
Your checkins_monthly aggregation is a bit wrong. The missing parameter should use the same format as the field, i.e. a date to substitute when the field is missing; a 0 is not actually a date.
For example:
"aggs": {
  "checkins_monthly": {
    "date_histogram": {
      "field": "checkin_at",
      "format": "MMM, YY",
      "interval": "month",
      "min_doc_count": 0,
      "missing": "Jan, 17",
      "extended_bounds": {
        "min": "now-6M",
        "max": "now"
      }
    }
  }
}

Exclude nil values from ElasticSearch Aggregation

I was using this query to retrieve the most significant values:
keywords = Answer.search(
  :size => 5,
  :query => {
    :match => {
      :question_id => 32481
    }
  },
  :aggregations => {
    :keywords => {
      :significant_terms => {
        :field => 'text'
      }
    }
  }
)
The field is :text, but it has nil values, so the answer is always:
2.1.2 :135 > keywords.map(&:text)
=> [nil, nil, nil, nil, nil]
I tried to add a filter, as the documentation suggests, but it gives me a parse error:
keywords = Answer.search(
  :size => 5,
  :query => {
    :match => {
      :question_id => 32481
    },
    :filtered => {
      :filter => {
        :exists => { :field => 'text' }
      }
    }
  },
  :aggregations => {
    :keywords => {
      :significant_terms => {
        :field => 'text'
      }
    }
  }
)
I've tried many combinations, with no success. How can I get only the valid text answers?
I believe your ES query should translate to something like this:
"size": 5,
"query": {
"filtered": {
"query": { "match": { "question_id" : 32481 } },
"filter": {
"exists": {
"field": "text"
}
}
}
},
"aggs": {
"keywords": {
"significant_terms": {
"field": "text"
}
}
}
meaning your "question_id" match should be enclosed in the "filtered" element.
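Translated back into the Ruby search DSL used in the question (a sketch, assuming Answer.search passes the hash through to Elasticsearch unchanged), that would look roughly like:
keywords = Answer.search(
  :size => 5,
  :query => {
    :filtered => {
      :query => { :match => { :question_id => 32481 } },
      :filter => { :exists => { :field => 'text' } }
    }
  },
  :aggregations => {
    :keywords => {
      :significant_terms => {
        :field => 'text'
      }
    }
  }
)
Note that the filtered query was removed in Elasticsearch 5.0; on newer versions the equivalent is a bool query with the match clause under must and the exists clause under filter.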
