How to map location (lon, lat) in logstash to visualize in kibana? - geolocation

I have a CSV file holding longitude and latitude for some of the records (otherwise the field is " "). Now I want to use Logstash 5.1.2 to get the data into Elasticsearch 5.1.2. I've written the following conf file, but the location field is still mapped as text.
input {
  file {
    path => "/usr/local/Cellar/logstash/5.1.2/bin/data.csv"
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
}
filter {
  csv {
    columns => ['logtime', 'text', 'user', 'country', 'location']
    separator => ","
  }
  date {
    match => ["logtime", "yyyy-MM-dd HH:mm:ss"]
    timezone => "Europe/London"
    target => "Date"
  }
  if [latitude] and [longitude] {
    mutate { convert => {"latitude" => "float"} }
    mutate { convert => {"longitude" => "float"} }
    mutate { rename => {"latitude" => "[location][lat]"} }
    mutate { rename => {"longitude" => "[location][lon]"} }
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "twitter"
  }
}
What am I supposed to do to get the location field mapped as a geo_point so I can visualize the points on a map in Kibana 5.1.2? Thanks

You need to create a mapping that maps location to a geo_point. The easiest way to do that is with an index template, so that when you start using time-based indices the mapping is created automatically whenever a new index is created.
PUT /_template/twitter
{
  "order": 0,
  "template": "twitter*",
  "mappings": {
    "properties": {
      "location": {
        "type": "geo_point"
      }
    }
  }
}
Then delete your /twitter index and re-index your data.
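For example, from the Kibana Dev Tools console (the index name twitter is taken from the output config in the question):
DELETE /twitter
Re-running the Logstash pipeline afterwards recreates the index, and this time the template's geo_point mapping is picked up.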
The above template says that any index created with a name matching twitter* will have the location field of any _type turned into a geo_point.
**NOTE: In Elasticsearch 7.0 and above, mapping types were removed; a mapping no longer accepts a type name, which is a breaking change.**
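For the Elasticsearch 5.1.2 used in the question, the template body still needs a mapping type underneath mappings. A minimal sketch using the catch-all _default_ type (so it applies regardless of the document type Logstash writes):
PUT /_template/twitter
{
  "order": 0,
  "template": "twitter*",
  "mappings": {
    "_default_": {
      "properties": {
        "location": {
          "type": "geo_point"
        }
      }
    }
  }
}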

Related

Add geo point in Elastic

I use ELK and Filebeat. I send a lot of logs with distinct fields.
logstash config:
input {
  beats {
    port => 5044
    include_codec_tag => false
  }
}
filter {
  if [type] == "json" {
    json {
      source => "message"
      target => "msg"
    }
    mutate {
      remove_field => ["msg.ecs.version", "ecs.version", "@version"]
    }
  }
  if [type] != "json" {
    grok {
      match => {
        message => ["time=\"%{TIMESTAMP_ISO8601:time}\""]
      }
    }
    date {
      match => [ "time", "YYYY-MM-dd'T'HH:mm:ssZZ"]
      target => "time"
    }
  }
}
output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
    sniffing => true
    manage_template => false
    index => "%{[source][project]}-%{[source][application]}-%{+YYYY.MM.dd}"
  }
}
Some of my messages contain a location:
{
  "location": {
    "lat": 11.11,
    "lon": 22.22
  }
}
In Elasticsearch I can see my location (msg.location.lat and msg.location.lon), but I don't know how to convert it to a geo_point.
As I understand it, the current index mapping is created by the Logstash plugin or by Elasticsearch's default template. What should I write, and where, to have my location treated as a geo_point?
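One option (a sketch, not a confirmed fix) is to create an index template directly in Elasticsearch, since the output sets manage_template => false: declare msg.location as a geo_point so every new daily index picks it up. The myproject-myapp-* pattern below is a placeholder for whatever %{[source][project]}-%{[source][application]} expands to in your setup, and the legacy _template API is assumed:
PUT /_template/msg-location
{
  "index_patterns": ["myproject-myapp-*"],
  "mappings": {
    "properties": {
      "msg": {
        "properties": {
          "location": { "type": "geo_point" }
        }
      }
    }
  }
}
Templates only affect indices created after they exist, so the mapping will apply starting with the next daily index.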

logstash change type format

I have a RoR application in whose admin dashboard an admin can observe the locations of his employees. I use ELK to gather the employees' information, which contains latitude and longitude, and send it to my map as they move. My problem is that I have a template from which Logstash creates a daily index, but recently I found that every typed field in my index is changed to text when the index is created.
this is my json that logstash reads:
{"driver_id": 31,"driver_email": "ankith.ravindran#mailinator.com","location": {"latitude": "-35.2824767","longitude": "149.1326453"},"created_at": "2021-06-29 14:28:47", "required_matches": 1, "type": "location"}
this is my logstash.conf file:
input {
  file {
    path => ["/usr/share/logstash/MPD_LOCATION/*",
             "/usr/share/logstash/MPD_LOCATION/*/*",
             "/usr/share/logstash/MPD_LOCATION/*/*/*",
             "/usr/share/logstash/MPD_LOCATION/*/*/*/*",
             "/usr/share/logstash/MPD_LOCATION/*/*/*/*/*"]
    start_position => "beginning"
    type => "json"
    sincedb_path => "/dev/null"
  }
}
filter {
  mutate {
    gsub => ["message","/}+({)/", "}::{"]
  }
  mutate {
    gsub => ["message","/}+( )/", "}::"]
  }
  split {
    field => "message"
    terminator => "::"
  }
  json { source => "message" }
  mutate {
    add_field => { "uuid" => "D%{driver_id}T%{created_at}" }
    rename => {
      "[location][latitude]" => "[location][lat]"
      "[location][longitude]" => "[location][lon]"
    }
    convert => {
      "[location][lat]" => "float"
      "[location][lon]" => "float"
    }
  }
}
output {
  if ([type] == "location") {
    elasticsearch {
      hosts => "http://elasticsearch:9200"
      index => "live_locations_%{+YYYY_MM_dd}"
      # manage_template => true
      template => "/usr/share/logstash/Template/live_locations.json"
      template_name => "live_locations"
      # template_overwrite => true
      document_id => "%{uuid}"
    }
  } else if ([type] == "app_info") {
    elasticsearch {
      hosts => "http://elasticsearch:9200"
      index => "app_info_%{+YYYY_MM_dd}"
      document_id => "%{uuid}"
    }
  }
  stdout { codec => rubydebug }
}
this is my template file:
{
  "settings": {
    "index": {
      "number_of_shards": 5,
      "number_of_replicas": 1
    }
  },
  "mappings": {
    "properties": {
      "driver_id": { "type": "integer" },
      "email": { "type": "text" },
      "location": { "type": "geo_point" },
      "app-platform": { "type": "text" },
      "app-version": { "type": "text" },
      "created_at": { "type": "date", "format": "yyyy-MM-dd HH:mm:ss||yyyy-MM-dd||epoch_millis" },
      "required_matches": { "type": "integer" }
    }
  }
}
For example, I defined the type of created_at as date, but when the index is created the field comes back as text and I can't understand what happened; the location field likewise comes back as float, so I can't use my index as a geo_point. I should add that I use ELK version 7.13, running on Docker.
Update: I have two types of JSON; one only reports the employee's location, the other only reports the app_version and app_platform the employee used.
Update 2: I changed my input from Logstash to Filebeat, but I still have the same problem.
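One thing to check (a guess based only on the config shown, not a confirmed fix): the template file contains no index pattern, so Elasticsearch cannot apply it to the daily live_locations_* indices, and dynamic mapping takes over, which is exactly what turns created_at into text and the location values into float. A legacy-template version of the file with a pattern added (the live_locations_* pattern is assumed from the index option in the output) would look like this sketch:
{
  "index_patterns": ["live_locations_*"],
  "settings": {
    "index": {
      "number_of_shards": 5,
      "number_of_replicas": 1
    }
  },
  "mappings": {
    "properties": {
      "driver_id": { "type": "integer" },
      "email": { "type": "text" },
      "location": { "type": "geo_point" },
      "app-platform": { "type": "text" },
      "app-version": { "type": "text" },
      "created_at": { "type": "date", "format": "yyyy-MM-dd HH:mm:ss||yyyy-MM-dd||epoch_millis" },
      "required_matches": { "type": "integer" }
    }
  }
}
Enabling template_overwrite => true for one run and then letting the next daily index be created should let the geo_point and date mappings take effect.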

Convert logstash filter to fluentd

I'm really new to Fluentd configuration and need help converting this Logstash config to Fluentd to get started:
filter {
  if [syslog5424_host] =~ /apilog/ {
    if [syslog5424_msg] =~ /\"ApplicationType\"\:\"API\"/ {
      json {
        source => "syslog5424_msg"
        # Remove syslog5424_msg field only if json filter is successful
        remove_field => ["syslog5424_msg", "syslog5424_sd", "syslog5424_proc", "syslog5424_pri", "syslog5424_ver", "syslog_facility", "syslog_facility_code"]
      }
      mutate {
        add_tag => ["API"]
        replace => { "type" => "api-dev" }
      }
    }
    else {
      mutate {
        add_tag => ["API"]
      }
    }
  }
}

ElasticSearch/Tire: How to properly set partial word searches up

Even though I've seen many accounts describing this as relatively straightforward, I haven't managed to get it working properly. Let's say I have this:
class Car < ActiveRecord::Base
  settings analysis: {
    filter: {
      ngram_filter: { type: "nGram", min_gram: 3, max_gram: 12 }
    },
    analyzer: {
      partial_analyzer: {
        type: "snowball",
        tokenizer: "standard",
        filter: ["standard", "lowercase", "ngram_filter"]
      }
    }
  } do
    mapping do
      indexes :name, index_analyzer: "partial_analyzer"
    end
  end
end
And let's say I have a car named "Ford" and I update my index. Now, if I search for "Ford":
Car.tire.search { query { string "Ford" } }
My car is in my results. Now, If I look for "For":
Car.tire.search { query { string "For" } }
My car isn't found anymore. I thought the nGram filter would automatically take care of this for me, but apparently it doesn't. As a temporary solution I'm using a wildcard (*) for such searches, but that is definitely not the best approach, since the min_gram and max_gram definitions are key elements of my search. Can anyone tell me how they solved this?
I'm using Rails 3.2.12 with Ruby 1.9.3. The Elasticsearch version is 0.20.5.
You want to use the custom analyzer instead of the snowball one: Elasticsearch custom analyzer
Basically the other analyzers come with a predefined set of filters and tokenizers.
You probably also want to use the Edge-Ngram filter: Edge-Ngram filter
The difference between Edge-NGram and NGram is that Edge-NGram only sticks to the "edges" of a term, so it starts at the front or at the back: Ford -> [For] instead of -> [For, ord]
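You can see the difference with the _analyze API. The inline-filter form shown below only exists on modern Elasticsearch versions (not the 0.20.x from the question), so treat it as an illustration:
GET /_analyze
{
  "tokenizer": "standard",
  "filter": [
    "lowercase",
    { "type": "edge_ngram", "min_gram": 3, "max_gram": 12 }
  ],
  "text": "Ford"
}
This returns the tokens for and ford, which is why a three-letter prefix search can now match the indexed term.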
Some more advanced links on the topic of autocompletion:
Autocompletion with fuzziness (pure elasticsearch, no tire, but very good read)
Another useful question with links provided
Edit
Basically I have a setup very similar to yours, but with another analyzer for the title and a multi-field for both. And because of multi-language support there is an array of names instead of just a name.
I also specify the search_analyzer, and I use string keys instead of symbols. This is what I actually have:
settings "analysis" => {
"filter" => {
"name_ngrams" => {
"side" => "front",
"max_gram" => 20,
"min_gram" => 2,
"type" => "edgeNGram"
}
},
"analyzer" => {
"full_name" => {
"filter" => %w(standard lowercase asciifolding),
"type" => "custom",
"tokenizer" => "letter"
},
"partial_name" => {
"filter" => %w(standard lowercase asciifolding name_ngrams),
"type" => "custom",
"tokenizer" => "standard"
}
}
} do
mapping do
indexes :names do
mapping do
indexes :name, :type => 'multi_field',
:fields => {
"partial" => {
"search_analyzer" => "full_name",
"index_analyzer" => "partial_name",
"type" => "string"
},
"title" => {
"type" => "string",
"analyzer" => "full_name"
}
}
end
end
end
end
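For reference, a raw query against the partial sub-field of that mapping would look roughly like the sketch below (the cars index name is assumed; with the multi_field mapping above the sub-field is addressed as names.name.partial):
GET /cars/_search
{
  "query": {
    "match": {
      "names.name.partial": "For"
    }
  }
}
Because the partial field is indexed with the edge n-gram analyzer but searched with full_name, a three-letter prefix such as "For" matches the indexed "Ford" without any wildcard.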

mongo-ruby-driver will not create a new document on upsert when there is a custom _id

I want to upsert a document with the mongo-ruby-driver using something like the following:
id = "#{params[:id]}:#{Time.now.strftime("%y%m%d")}"
# db.collection('metrics').insert({'_id' => id})
db.collection('metrics').update(
{ '_id' => id },
{ '$inc' => { "hits" => 1 } },
{ 'upsert' => true }
)
Right now this will only update existing documents, and not create one if it doesn't already exist. The only way it will perform both actions is if I uncomment the insert() command above it.
If I use the mongo console and do this upsert directly (without needing the insert()), it works as expected.
You should use a symbol instead of a string for the upsert key in the options hash. This code works:
db.collection('metrics').update(
  { '_id' => id },
  { '$inc' => { "hits" => 1 } },
  { :upsert => true }
)
In fact, you can use symbols most everywhere. This also works:
db.collection(:metrics).update(
  { :_id => id },
  { :$inc => { :hits => 1 } },
  { :upsert => true }
)
