Node as Hash not Array in RABL - ruby-on-rails

I have this RABL template:
object :@pollution => nil
attributes :id, :time
node :components do |p|
  p.components.map do |component|
    { component.name => { level: component.level, main: component.main } }
  end
end
It renders
{ "id": 820,
  "time": "2017-05-12 20:00:00 UTC",
  "components": [ # I don't need this array
    { "component1": { "level": 3, "main": false } },
    { "component2": { "level": 5, "main": false } }
  ]
}
And I want this
{ "id": 820,
  "time": "2017-05-12 20:00:00 UTC",
  "components": {
    "component1": { "level": 3, "main": false },
    "component2": { "level": 5, "main": false }
  }
}
So, instead of an array of components, I need a hash whose keys are the component names and whose values are hashes of component data (level (Integer) and main (Boolean)).
I tried to render child :components, but it also renders an array.
Thanks for any help!

To get what you want, you need to change these lines:
p.components.map do |component|
  { component.name => { level: component.level, main: component.main } }
end
which return an array, to something like:
p.components.inject({}) do |components, component|
  components[component.name] = { level: component.level, main: component.main }
  components
end
that will build a hash instead of an array.
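For the same result, Ruby's each_with_object spares you from returning the accumulator at the end of the block. A minimal sketch of the full template using it (same names as in the question):

object :@pollution => nil
attributes :id, :time

node :components do |p|
  # each_with_object passes the accumulator as the second block argument
  # and returns it automatically once the loop finishes
  p.components.each_with_object({}) do |component, components|
    components[component.name] = { level: component.level, main: component.main }
  end
end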

Related

logstash change type format

I have a RoR application whose admin dashboard lets an admin observe the locations of his employees. In my case, I use ELK to gather employee information containing latitude and longitude, which is sent to my map as the employee moves. My problem is that Logstash creates a daily index based on a template I provide, but recently I found that every field in my index had its type changed to text when the index was created.
This is the JSON that Logstash reads:
{"driver_id": 31,"driver_email": "ankith.ravindran@mailinator.com","location": {"latitude": "-35.2824767","longitude": "149.1326453"},"created_at": "2021-06-29 14:28:47", "required_matches": 1, "type": "location"}
This is my logstash.conf file:
input {
  file {
    path => ["/usr/share/logstash/MPD_LOCATION/*",
             "/usr/share/logstash/MPD_LOCATION/*/*",
             "/usr/share/logstash/MPD_LOCATION/*/*/*",
             "/usr/share/logstash/MPD_LOCATION/*/*/*/*",
             "/usr/share/logstash/MPD_LOCATION/*/*/*/*/*"]
    start_position => "beginning"
    type => "json"
    sincedb_path => "/dev/null"
  }
}
filter {
  mutate {
    gsub => ["message","/}+({)/", "}::{"]
  }
  mutate {
    gsub => ["message","/}+( )/", "}::"]
  }
  split {
    field => "message"
    terminator => "::"
  }
  json { source => "message" }
  mutate {
    add_field => { "uuid" => "D%{driver_id}T%{created_at}" }
    rename => {
      "[location][latitude]" => "[location][lat]"
      "[location][longitude]" => "[location][lon]"
    }
    convert => {
      "[location][lat]" => "float"
      "[location][lon]" => "float"
    }
  }
}
output {
  if ([type] == "location") {
    elasticsearch {
      hosts => "http://elasticsearch:9200"
      index => "live_locations_%{+YYYY_MM_dd}"
      # manage_template => true
      template => "/usr/share/logstash/Template/live_locations.json"
      template_name => "live_locations"
      # template_overwrite => true
      document_id => "%{uuid}"
    }
  } else if ([type] == "app_info") {
    elasticsearch {
      hosts => "http://elasticsearch:9200"
      index => "app_info_%{+YYYY_MM_dd}"
      document_id => "%{uuid}"
    }
  }
  stdout { codec => rubydebug }
}
This is my template file:
{
  "settings": {
    "index": {
      "number_of_shards": 5,
      "number_of_replicas": 1
    }
  },
  "mappings": {
    "properties": {
      "driver_id": { "type": "integer" },
      "email": { "type": "text" },
      "location": { "type": "geo_point" },
      "app-platform": { "type": "text" },
      "app-version": { "type": "text" },
      "created_at": { "type": "date", "format": "yyyy-MM-dd HH:mm:ss||yyyy-MM-dd||epoch_millis" },
      "required_matches": { "type": "integer" }
    }
  }
}
For example, I defined the type of created_at as date, but when the index is created the field comes back as text, and I can't understand what happened. The same goes for the location field: it comes back as float, so I cannot use my index as a geo_point. I should add that I use ELK version 7.13, running on Docker.
Updated: I have two types of JSON: one returns only the employee's location; the other returns only the app_version and app_platform of the device the employee used.
Updated 2: I changed my input from Logstash to Filebeat, but I still have the same problem.

Elasticsearch: find out whether a user stops or is moving - possible?

I want to use an Elasticsearch mapping configuration to display users' locations and directions to the admin in my web app, so I create an index in Elasticsearch like:
{
  "settings": {
    "index": {
      "number_of_shards": 5,
      "number_of_replicas": 1
    },
    "analysis": {
      "analyzer": {
        "analyzer-name": {
          "type": "custom",
          "tokenizer": "keyword",
          "filter": "lowercase"
        }
      }
    }
  },
  "mappings": {
    "properties": {
      "driver_id": { "type": "integer" },
      "email": { "type": "text" },
      "location": { "type": "geo_point" },
      "app-platform": { "type": "text" },
      "app-version": { "type": "text" },
      "created_at": { "type": "date", "format": "yyyy-MM-dd HH:mm:ss||yyyy-MM-dd||epoch_millis" }
    }
  }
}
and start inserting user locations into Elasticsearch with this curl request:
{
  "driver_id": 357,
  "driver_email": "Andrew@mailinatior.com",
  "location": {
    "lat": 37.3,
    "lon": 59.52
  },
  "created_at": "2021-06-04 00:09:00"
}
This structure comes from the user's mobile device into my Elasticsearch. After that, I wrote this service to fetch the data for the web front end of my design:
module Api
  module V1
    module Drivers
      module Elastic
        class LiveLocation
          include Peafowl
          attribute :driver_id, ::Integer

          def call
            @driver = ::Driver.find(driver_id) if driver_id.present?
            result = []
            options = {
              headers: {
                'Content-Type' => 'application/json'
              },
              # note: `options` on the right-hand side refers to this
              # still-nil local variable, shadowing the method defined below
              body: @driver.present? ? options_with_driver : options
            }
            begin
              response = HTTParty.get(elasticseach_url.to_s, options)
              records = JSON.parse(response.body)['hits']['hits']
              if records.present?
                records.group_by { |r| r['_source']['driver_id'] }.to_a.each do |record|
                  driver = ::Driver.where(id: record[0]).first
                  if driver.present?
                    location = record[1][0]['_source']['location']
                    app_platform = record[1][0]['_source']['app-platform']
                    app_version = record[1][0]['_source']['app-version']
                    result.push(driver_id: driver.id, driver_email: driver.profile.email, location: location, app_platform: app_platform, app_version: app_version)
                  end
                end
              end
            rescue StandardError => error
              Rails.logger.info "Error => #{error}"
              result = []
            end
            context[:response] = result
          end

          def elasticseach_url
            "#{ENV.fetch('ELASTICSEARCH_BASE_URL', 'http://127.0.0.1:9200')}/#{ENV.fetch('ELASTICSEARCH_DRIVER_POSITION_INDEX', 'live_location')}/_search"
          end

          def options
            {
              query: {
                bool: {
                  filter: [
                    {
                      range: {
                        created_at: {
                          gte: (Time.now.beginning_of_day.strftime '%Y-%m-%d %H:%M:%S')
                        }
                      }
                    }
                  ]
                }
              },
              sort: [
                {
                  created_at: {
                    order: 'desc'
                  }
                }
              ]
            }.to_json
          end

          def options_with_driver
            {
              query: {
                bool: {
                  must: [
                    {
                      term: {
                        driver_id: {
                          value: @driver.id
                        }
                      }
                    }
                  ],
                  filter: [
                    {
                      range: {
                        created_at: {
                          gte: (Time.now.beginning_of_day.strftime '%Y-%m-%d %H:%M:%S')
                        }
                      }
                    }
                  ]
                }
              },
              sort: [
                {
                  created_at: {
                    order: 'desc'
                  }
                }
              ]
            }.to_json
          end
        end
      end
    end
  end
end
This structure works perfectly, but even when the user stops, Elasticsearch keeps saving his location. I need to filter the data so that if the user stays in one place for an hour, Elasticsearch recognizes it and stops saving the data. Is that possible?
I use Elasticsearch 7.1 and Ruby 2.5.
I know it's possible in Kibana, but I cannot use Kibana at this time.
I am not sure if this can be done via a single ES query...
However, you can use two queries:
- one to check whether the user's location during the last hour is the same
- if it is the same, then don't insert
But I don't recommend that. What you could do instead:
- use Redis or any in-memory cache to maintain the user's last geolocation and how long he has been there
- based on that, update or skip the update to Elasticsearch
PS: I am not familiar with the ES geolocation API.
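A minimal sketch of the Redis idea in the question's Ruby (the key layout, the one-hour threshold, and the should_index? helper are assumptions for illustration, not a definitive implementation):

require 'redis'
require 'json'

REDIS = Redis.new # assumes a reachable Redis instance

# Decide whether a new location report should go to Elasticsearch.
# Returns false once the driver has sat at the same spot for over an hour.
def should_index?(driver_id, lat, lon)
  key = "driver:#{driver_id}:last_location" # hypothetical key layout
  now = Time.now.to_i
  raw = REDIS.get(key)
  cached = raw && JSON.parse(raw)

  if cached && cached['lat'] == lat && cached['lon'] == lon
    return false if now - cached['since'] > 3600 # parked for an hour: skip the write
    since = cached['since'] # still parked: keep the original arrival time
  else
    since = now # new position: restart the clock
  end

  REDIS.set(key, { lat: lat, lon: lon, since: since }.to_json)
  true
end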

How to remove multiple attributes from a json using ruby

I have a JSON object. It has multiple "passthrough_fields" attributes, which are unnecessary for me, and I want to remove them. Is there a way to filter all of those attributes out?
JSON:
{
  "type": "playable_item",
  "id": "p06s0lq7",
  "urn": "urn:bbc:radio:episode:p06s0mk3",
  "network": {
    "id": "bbc_radio_five_live",
    "key": "5live",
    "short_title": "Radio 5 live",
    "logo_url": "https://sounds.files.bbci.co.uk/v2/networks/bbc_radio_five_live/{type}_{size}.{format}",
    "passthrough_fields": {}
  },
  "titles": {
    "primary": "Replay",
    "secondary": "Bill Shankly",
    "tertiary": null,
    "passthrough_fields": {}
  },
  "synopses": {
    "short": "Bill Shankly with Sue MacGregor in 1979 - five years after he resigned as Liverpool boss.",
    "medium": null,
    "long": "Bill Shankly in conversation with Sue MacGregor in 1979, five years after he resigned as Liverpool manager.",
    "passthrough_fields": {}
  },
  "image_url": "https://ichef.bbci.co.uk/images/ic/{recipe}/p06qbz1x.jpg",
  "duration": {
    "value": 1774,
    "label": "29 mins",
    "passthrough_fields": {}
  },
  "progress": null,
  "container": {
    "type": "series",
    "id": "p06qbzmj",
    "urn": "urn:bbc:radio:series:p06qbzmj",
    "title": "Replay",
    "synopses": {
      "short": "Colin Murray unearths classic sports commentaries and interviews from the BBC archives.",
      "medium": "Colin Murray looks back at 90 years of sport on the BBC by unearthing classic commentaries and interviews from the BBC archives.",
      "long": null,
      "passthrough_fields": {}
    },
    "activities": [],
    "passthrough_fields": {}
  },
  "availability": {
    "from": "2018-11-16T16:18:54Z",
    "to": null,
    "label": "Available for over a year",
    "passthrough_fields": {}
  },
  "guidance": {
    "competition_warning": false,
    "warnings": null,
    "passthrough_fields": {}
  },
  "activities": [],
  "uris": [
    {
      "type": "latest",
      "label": "Latest",
      "uri": "/v2/programmes/playable?container=p06qbzmj&sort=sequential&type=episode",
      "passthrough_fields": {}
    }
  ],
  "passthrough_fields": {}
}
Is there a way I can remove all those fields and store the updated json in a new variable?
You can do this recursively to tackle nested occurrences of passthrough_fields, whether they're found in an array or a sub-hash. Inline comments explain things a little as it goes:
hash = JSON.parse(input) # convert the JSON to a hash

def remove_recursively(hash, *to_remove)
  hash.each do |key, val|
    hash.except!(*to_remove) # the heavy lifting: remove all keys that match `to_remove` (Hash#except! comes from ActiveSupport)
    remove_recursively(val, *to_remove) if val.is_a? Hash # if a nested hash, run this method on it
    if val.is_a? Array # if a nested array, loop through it checking for hashes to run this method on
      val.each { |el| remove_recursively(el, *to_remove) if el.is_a? Hash }
    end
  end
end

remove_recursively(hash, 'passthrough_fields')
To demonstrate, with a simplified example:

hash = {
  "test" => { "passthrough_fields" => [1, 2, 3], "wow" => '123' },
  "passthrough_fields" => [4, 5, 6],
  "array_values" => [{ "to_stay" => "I am", "passthrough_fields" => [7, 8, 9] }]
}

remove_recursively(hash, 'passthrough_fields')
#=> {"test"=>{"wow"=>"123"}, "array_values"=>[{"to_stay"=>"I am"}]}

remove_recursively(hash, 'passthrough_fields', 'wow', 'to_stay')
#=> {"test"=>{}, "array_values"=>[{}]}
This will tackle any arrays, and will dig for nested hashes however deep it needs to go.
It takes any number of fields to remove, in this case a single 'passthrough_fields'.
Hope this helps, let me know how you get on.
I think that the easiest solution would be to:
1. convert the JSON into a hash (JSON.parse(input))
2. use this answer to extend the functionality of Hash (save it in config/initializers/except_nested.rb)
3. on the hash from step 1, call: without_passthrough = your_hash.except_nested('passthrough_fields')
4. convert the hash back to JSON (without_passthrough.to_json)
Please keep in mind that this works only for passthrough_fields nested directly in hashes. Your JSON contains the following part:

"uris" => [
  {
    "type" => "latest",
    "label" => "Latest",
    "uri" => "/v2/programmes/playable?container=p06qbzmj&sort=sequential&type=episode",
    "passthrough_fields" => {}
  }
]

In this case, the passthrough_fields will not be removed, because it sits inside an array. You would have to find a more sophisticated solution :)
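For reference, except_nested is not part of Ruby or Rails, so the initializer has to define it; a minimal sketch of what it might look like, with the same hash-only limitation just described:

# config/initializers/except_nested.rb
# Sketch of the Hash extension referenced above: drops the given key from
# the hash and from any nested hashes. It does not descend into arrays.
class Hash
  def except_nested(key)
    reject { |k, _| k == key }.transform_values do |value|
      value.is_a?(Hash) ? value.except_nested(key) : value
    end
  end
end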
You can do something like this:
def nested_except(hash, except_key)
sanitized_hash = {}
hash.each do |key, value|
next if key == except_key
sanitized_hash[key] = value.is_a?(Hash) ? nested_except(value, except_key) : value
end
sanitized_hash
end
json = JSON.parse(json_string)
sanitized = nested_except(json, 'passthrough_fields')
See example:
json = { :a => 1, :b => 2, :c => { :a => 1, :b => { :a => 1 } } }
nested_except(json, :a)
# => {:b=>2, :c=>{:b=>{}}}
This helper can easily be converted to support multiple keys to except: take except_keys = Array.wrap(except_key) and use next if except_keys.include?(key).
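Applied to the helper above, that suggestion might look like this (Array.wrap comes from ActiveSupport; plain Ruby could use Array(except_keys) instead):

def nested_except(hash, except_keys)
  except_keys = Array.wrap(except_keys) # accept a single key or a list
  sanitized_hash = {}
  hash.each do |key, value|
    next if except_keys.include?(key)
    sanitized_hash[key] = value.is_a?(Hash) ? nested_except(value, except_keys) : value
  end
  sanitized_hash
end

nested_except({ :a => 1, :b => { :a => 1, :c => 2 } }, [:a, :c])
# => {:b=>{}}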

Rails 4 - Iterate through nested JSON params

I'm passing nested JSON into rails like so:
{
  "product": {
    "vendor": "Acme",
    "categories": {
      "id": "3",
      "method": "remove"
    },
    "categories": {
      "id": "4"
    }
  }
}
in order to update the categories on a product. I am trying to iterate through the categories attribute in my products_controller so that I can add or remove the product to/from multiple categories at once:
def updateCategory
  @product = Product.find(params[:id])
  params[:product][:categories].each do |u|
    @category = Category.find_by(id: params[:product][:categories][:id])
    if params[:product][:categories][:method] == "remove"
      @product.remove_from_category(@category)
    else
      @product.add_to_category(@category)
    end
  end
end
However, this only uses the second 'categories' ID in the update and doesn't iterate through both.
Example response JSON:
{
  "product": {
    "id": 20,
    "title": "Heavy Duty Aluminum Chair",
    "product_price": "47.47",
    "vendor": "Acme",
    "categories": [
      {
        "id": 4,
        "title": "Category 4"
      }
    ]
  }
}
As you can see, it only added the category with ID = 4, and skipped over Category 3.
I'm fairly new to rails so I know I'm probably missing something obvious here. I've played around with the format of the JSON I'm passing in as well but it only made things worse.
You need to change your JSON structure. As you currently have it, the second "categories" reference will override the first one, since you can only have one instance of a key per object. To get what you want, change it to:
{
  "product": {
    "vendor": "Acme",
    "categories": [
      {
        "id": "3",
        "method": "remove"
      },
      {
        "id": "4"
      }
    ]
  }
}
You will also need to change your Ruby code to look like:
def updateCategory
  @product = Product.find(params[:id])
  params[:product][:categories].each do |u|
    @category = Category.find_by(id: u[:id])
    if u[:method] == "remove"
      @product.remove_from_category(@category)
    else
      @product.add_to_category(@category)
    end
  end
end
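With the array structure, each iteration now yields one category hash. A quick sanity check in plain Ruby (real Rails params arrive as ActionController::Parameters, but the iteration works the same way):

params = { product: { vendor: "Acme",
                      categories: [{ id: "3", method: "remove" }, { id: "4" }] } }

params[:product][:categories].each do |u|
  puts "category #{u[:id]} => #{u[:method] || 'add'}"
end
# category 3 => remove
# category 4 => add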

how to get the union of hashes in ruby for this json structure

Below is JSON I translated from a Ruby hash (via hash.to_json) for ease of representation in this question. Notice how the range key is repeated, since the values in the nested docs differ. How do I merge the ranges so that, for the weight key, both "gt": 2232 and "lt": 4444 fall under a single weight key inside range? Is there some union or collapse method in Ruby to sort of "compactify" hashes?
{
  "must": [
    {
      "match": {
        "status_type": "good"
      }
    },
    {
      "range": {
        "created_date": {
          "lte": 43252
        }
      }
    },
    {
      "range": {
        "created_date": {
          "gt": "42323"
        }
      }
    },
    {
      "range": {
        "created_date": {
          "gte": 523432
        }
      }
    },
    {
      "range": {
        "weight": {
          "gt": 2232
        }
      }
    },
    {
      "range": {
        "weight": {
          "lt": 4444
        }
      }
    }
  ],
  "should": [
    {
      "match": {
        "product_age": "old"
      }
    }
  ]
}
I want to change the above to this:
{
  "must": [
    {
      "range": {
        "created_date": {
          "gte": 523432,
          "gt": "42323"
        }
      }
    },
    {
      "range": {
        "weight": {
          "gt": 2232,
          "lt": 4444
        }
      }
    }
  ],
  "should": [
    {
      "match": {
        "product_age": "old"
      }
    }
  ]
}
I don't know of a built-in way to handle something like this, but you could write a method that does something like this:
def collapse(array, key)
  # Get only the hashes with :range
  to_collapse = array.select { |elem| elem.has_key? key }
  uncollapsed = array - to_collapse

  # Get the hashes that :range points to
  to_collapse = to_collapse.map { |elem| elem.values }.flatten

  collapsed = {}

  # Iterate through each range hash and their subsequent subhashes.
  # Collapse the values into the collapsed hash as necessary
  to_collapse.each do |elem|
    elem.each do |k, v|
      collapsed[k] = {} unless collapsed.has_key? k
      v.each do |inner_key, inner_val|
        collapsed[k][inner_key] = inner_val
      end
    end
  end

  [uncollapsed, collapsed].flatten
end

hash[:must] = collapse hash[:must], :range
Note that this is a specific solution that's mainly applicable to the presented problem. It only works for the hash/array depths specified here. You could probably write a recursive solution that could potentially work at any level of depth with a bit more work.
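As a sketch of that idea in plain Ruby (collapse_ranges is a made-up name), grouping the range clauses by field keeps the range wrapper and yields one merged clause per field, matching the desired output above:

# Merge all :range clauses into one clause per field; leave other clauses alone.
def collapse_ranges(clauses, key = :range)
  ranges, rest = clauses.partition { |c| c.key?(key) }
  merged = ranges.flat_map { |c| c[key].to_a }  # [[field, { op => val }], ...]
                 .group_by(&:first)             # group the pairs by field name
                 .map do |field, pairs|
                   { key => { field => pairs.map(&:last).reduce(:merge) } }
                 end
  rest + merged
end

hash[:must] = collapse_ranges(hash[:must])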
