Rails serializers - Controlling attributes of the association using selective include / exclude

I have 2 resources - post and author. post has an author. When I fetch the post details, I would like to selectively include the details of the author.
Example -
GET /authors/1
{
'id': 1,
'name': 'Some name',
'email': 'admin@example.com',
'address': 'Some address',
'mobile': '',
'language': 'ruby'
}
GET /posts/1
I would like to have -
{
'id': 1,
'created_at': '2017-01-01 10:00:00',
'views': 7500,
'seo_score': 4,
'author': {
'id': 1,
'name': 'Some name',
'email': 'admin@example.com'
}
}
instead of,
{
'id': 1,
'created_at': '2017-01-01 10:00:00',
'views': 7500,
'seo_score': 4,
'author': {
'id': 1,
'name': 'Some name',
'email': 'admin@example.com',
'address': 'Some address',
'mobile': '',
'language': 'ruby'
}
}
I would like to know if I can selectively choose the attributes of the association in the serializers, by using include, exclude, only, etc.
I know I can write a separate serializer and map it to the association while defining it, but just to include two specific attributes, I don't think adding a separate class makes sense.
I also know that I can override the association, fetch all the attributes, and selectively choose a few; but I would like to know if serializers support this by default.

You can create a new association with your selected columns, like this:
belongs_to :author_with_partial_details, -> { select([:id, :name, :email]) }, :class_name => "Author"
and use it something like this:
Post.includes(:author_with_partial_details).find(params[:id])
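If you would rather keep this in the serializer layer without defining a scoped association, one option (a rough sketch, assuming active_model_serializers ~> 0.10 and a PostSerializer along these lines) is to expose the author as a plain attribute and slice the fields there:
class PostSerializer < ActiveModel::Serializer
  attributes :id, :created_at, :views, :seo_score, :author

  # Build the nested author hash by hand so only the wanted fields appear
  def author
    object.author.as_json(only: [:id, :name, :email])
  end
end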
Alternatively, you can use the jbuilder gem to create your own JSON structure; here is the link to the gem: https://github.com/rails/jbuilder
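With jbuilder, the post view could look roughly like this (a sketch assuming the attributes from the question):
# app/views/posts/show.json.jbuilder
json.extract! @post, :id, :created_at, :views, :seo_score
json.author do
  json.extract! @post.author, :id, :name, :email
end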

Related

CSV Upload in rails

I am trying to add parent and child data to the parent and child tables. I have existing data in these tables; I am trying to add further data, and I don't want the data to be repeated. Below is the code I am using to upload the data. The child has parent_id.
parent.rb
has_many :children, dependent: :destroy
def self.import(file)
CSV.foreach(file.path, headers:true) do |row|
parent = Parent.find_or_update_or_create_by(
parent_1_firstname: row['parent_1_firstname'],
parent_1_lastname: row['parent_1_lastname'],
address: row['address'],
address_line_2: row['address_line_2'],
city: row['city'],
province: row['province'],
postal_code: row['postal_code'],
telephone_number: row['telephone_number'],
email: row['email'],
family_situation: row['admin_notes'],
gross_income: row['gross_income'],
created_by_admin: row['created_by_admin'],
status: row['status']
)
parent.children.find_or_create_by(
firstname: row['firstname'],
lastname: row['lastname'],
dateofbirth: row['dateofbirth'],
gender: row['gender']
)
end
end
child.rb
belongs_to :parent
The error occurs when I choose the CSV file to be uploaded; below is the error I am getting.
undefined method `find_or_update_or_create_by' for #<Class:0x00007f8797be74b0> Did you mean? find_or_create_by
I have added a sample CSV below. Please help me figure out the issue.
parent_1_firstname,parent_1_lastname,address,address_line_2,city,province,postal_code,telephone_number,email,admin_notes,gross_income, created_by_admin ,status,firstname,lastname,dateofbirth,gender
Nav,Deo,College Road,,Alliston,BC,N4c 6u9,500 000 0000,nav@prw.com,"HAPPY",13917, TRUE , Approved ,Sami,Kidane,2009-10-10,Male
AFAIK, there is no find_or_update_or_create_by method in Rails. Unless you have defined it as a class method in the Parent model, you can't call that method on a class. I believe you meant to use find_or_create_by. Change
Parent.find_or_update_or_create_by
to
Parent.find_or_create_by
Update:
You cannot call create unless the parent is saved
OK, so the parent isn't saved, which could be because a validation has failed. Change Parent.find_or_create_by to Parent.find_or_create_by! (as @jvillian stated), which will raise an exception with the validation error message. Fix that error and you are good to go.
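For illustration, here is a trimmed sketch of the corrected import (same models as above; only a few of the CSV columns are shown):
require 'csv'

def self.import(file)
  CSV.foreach(file.path, headers: true) do |row|
    # find_or_create_by! raises ActiveRecord::RecordInvalid when a validation
    # fails, so a bad row surfaces immediately instead of failing silently
    parent = Parent.find_or_create_by!(
      parent_1_firstname: row['parent_1_firstname'],
      parent_1_lastname:  row['parent_1_lastname'],
      email:              row['email']
      # remaining columns as in the original import
    )
    parent.children.find_or_create_by!(
      firstname:   row['firstname'],
      lastname:    row['lastname'],
      dateofbirth: row['dateofbirth'],
      gender:      row['gender']
    )
  end
end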
To avoid hard-coding various nested loops with find_or_create_by logic, there is a gem called DutyFree that makes imports and exports like this fairly painless. It intelligently analyses the has_many and belongs_to associations on models and, based on these relationships, identifies how to properly save each imported row across multiple destination tables. Either a create or an update is performed, depending on whether the data already exists.
To demonstrate your example from above, I wrote an RSpec test based on the CSV data you provided:
https://github.com/lorint/duty_free/blob/master/spec/models/parent_complex_spec.rb
There is also a simpler example available with just 6 columns:
https://github.com/lorint/duty_free/blob/master/spec/models/parent_simple_spec.rb
One nice thing about this gem is that after configuring the column definitions to do an import, you get export for free because everything works from the same template. For this example here's the template which allows the column names from your CSV to line up perfectly with the database columns:
IMPORT_TEMPLATE = {
uniques: [:firstname, :children_firstname],
required: [],
all: [:firstname, :lastname, :address, :address_line_2, :city, :province, :postal_code,
:telephone_number, :email, :admin_notes, :gross_income, :created_by_admin, :status,
{ children: [:firstname, :lastname, :dateofbirth, :gender] }],
as: {
'parent_1_firstname' => 'Firstname',
'parent_1_lastname' => 'Lastname',
'address' => 'Address',
'address_line_2' => 'Address Line 2',
'city' => 'City',
'province' => 'Province',
'postal_code' => 'Postal Code',
'telephone_number' => 'Telephone Number',
'email' => 'Email',
'admin_notes' => 'Admin Notes',
'gross_income' => 'Gross Income',
'created_by_admin' => 'Created By Admin',
'status' => 'Status',
'firstname' => 'Children Firstname',
'lastname' => 'Children Lastname',
'dateofbirth' => 'Children Dateofbirth',
'gender' => 'Children Gender'
}
}.freeze
With this in your parent.rb, you can call Parent.df_import(your_csv_object) or Parent.df_export, and the gem does the rest.

Query nested model with multiple plucks

I was wondering if the following is possible: I have a model with nested associated models. I want to be able to call render json: on current_user.reports.minned and have it eager-load the plucked values from each model. How can I accomplish this?
Here I use only two models as an example. In reality, the solution needs to work for n+1 nested models.
Does not work:
class Report
  has_many :templates

  def minned
    self.pluck(:id, :title)
    self.templates = templates.minned
  end
end

class Template
  belongs_to :report

  def minned
    self.pluck(:id, :name, :sections, :columns)
  end
end
....
# reports.minned.limit(limit).offset(offset)
# This should return something like:
[{
  'id': 0,
  'title': 'Rep',
  'templates': [{
    'id': 0,
    'name': 'Temp',
    'sections': [],
    'columns': []
  }]
},
{
  'id': 1,
  'title': 'Rep 1',
  'templates': [{
    'id': 0,
    'name': 'Temp',
    'sections': [],
    'columns': []
  },
  {
    'id': 1,
    'name': 'Temp 1',
    'sections': [],
    'columns': []
  }]
}]
Thanks for any help.
Edit:
I will add that I found a way to do this by overriding as_json for each model, but this applies the plucking to all requests. I need to have control over which requests give what pieces of information.
# in Report model
def as_json(options = {})
  super(only: [:id, :title]).merge(templates: templates)
end
# in Template model
def as_json(options = {})
  super(only: [:id, :name, :sections, :columns])
end
Thanks to eirikir, this is all I need to do:
Report model
def self.minned
  includes(:templates).as_json(only: [:id, :title], include: { templates: { only: [:id, :name, :sections, :columns] } })
end
Then, when using this with pagination, order, limit, or anything like that, just drop it at the end:
paginate pre_paginated_reports.count, max_per_page do |limit, offset|
  render json: pre_paginated_reports.order(id: :desc).limit(limit).offset(offset).minned
end
Now I'm not overriding as_json and have complete control over the data I get back.
If I understand correctly, you should be able to achieve this by specifying the output in the options given to as_json:
current_user.reports.includes(:templates).as_json(only: [:id, :title], include: {templates: {only: [:id, :name, :sections, :columns]}})
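For instance (a sketch assuming the Report and Template models from the question), rendering that from a controller returns the nested structure directly, and includes(:templates) keeps it to two queries instead of one per report:
# app/controllers/reports_controller.rb (hypothetical placement)
def index
  render json: current_user.reports
                           .includes(:templates)
                           .as_json(only: [:id, :title],
                                    include: { templates: { only: [:id, :name, :sections, :columns] } })
end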

Rails 4, elasticsearch-rails

I'm looking for some advice on the best way forward with my app, into which I have begun to integrate Elasticsearch for the first time. I'm a bit of a beginner in Rails but keen to dive in, so forgive any glaring errors!
I followed a tutorial (http://www.sitepoint.com/full-text-search-rails-elasticsearch/) and have also implemented some additional Elasticsearch DSL features from reading documentation, etc. I'm just not convinced I'm there yet. (I certainly need to move things out of the model, as currently most of it sits in the Product ActiveRecord model.)
I am trying to implement a search on the Product model with the ability to do partial-word search and fuzzy search (misspellings). From what I understand, I am able to set my own analyzers and filters for Elasticsearch, which I have done; they currently reside in the Product model. I would like to move these to a more sensible location too, once I have established whether I am actually doing this correctly. I do get results when I search, but I am also doing things like deleting the index and creating a new index with the mapping at the end of the Product model. If what I have below is not "the correct way", what better way is there to (1) implement Elasticsearch with Rails and (2) separate concerns more efficiently?
Thanks, much appreciated.
CODE:
lib/tasks/elasticsearch.rake:
require 'elasticsearch/rails/tasks/import'
View:
<%= form_tag search_index_path, class: 'search', method: :get do %>
<%= text_field_tag :query, params[:query], autocomplete: :off, placeholder: 'Search', class: 'search' %>
<% end %>
Gems I used:
gem 'elasticsearch-model', git: 'git://github.com/elasticsearch/elasticsearch-rails.git'
gem 'elasticsearch-rails', git: 'git://github.com/elasticsearch/elasticsearch-rails.git'
Search Controller:
class SearchController < ApplicationController
  def index
    if params[:query].nil?
      @products = []
    else
      @products = Product.search(params[:query])
    end
  end
end
Product Model:
require 'elasticsearch/model'
class Product < ActiveRecord::Base
# ElasticSearch
include Elasticsearch::Model
include Elasticsearch::Model::Callbacks
settings index: {
number_of_shards: 1,
analysis: {
filter: {
trigrams_filter: {
type: 'ngram',
min_gram: 2,
max_gram: 10
},
content_filter: {
type: 'ngram',
min_gram: 4,
max_gram: 20
}
},
analyzer: {
index_trigrams_analyzer: {
type: 'custom',
tokenizer: 'standard',
filter: ['lowercase', 'trigrams_filter']
},
search_trigrams_analyzer: {
type: 'custom',
tokenizer: 'whitespace',
filter: ['lowercase']
},
english: {
tokenizer: 'standard',
filter: ['standard', 'lowercase', 'content_filter']
}
}
}
} do
mappings dynamic: 'false' do
indexes :name, index_analyzer: 'index_trigrams_analyzer', search_analyzer: 'search_trigrams_analyzer'
indexes :description, index_analyzer: 'english', search_analyzer: 'english'
indexes :manufacturer_name, index_analyzer: 'english', search_analyzer: 'english'
indexes :type_name, analyzer: 'snowball'
end
end
# Gem Plugins
acts_as_taggable
has_ancestry
has_paper_trail
def all_sizes
product_attributes.where(key: 'Size').map(&:value).join(',').split(',')
end
def self.search(query)
__elasticsearch__.search(
{
query: {
query_string: {
query: query,
fuzziness: 2,
default_operator: "AND",
fields: ['name^10', 'description', 'manufacturer_name', 'type_name']
}
},
highlight: {
pre_tags: ['<em>'],
post_tags: ['</em>'],
fields: {
name: {},
description: {}
}
}
}
)
end
def as_indexed_json(options={})
as_json(methods: [:manufacturer_name, :type_name])
end
end
# Delete the previous products index in Elasticsearch
Product.__elasticsearch__.client.indices.delete index: Product.index_name rescue nil
# Create the new index with the new mapping
Product.__elasticsearch__.client.indices.create \
index: Product.index_name,
body: { settings: Product.settings.to_hash, mappings: Product.mappings.to_hash }
# Index all article records from the DB to Elasticsearch
Product.import(force: true)
If you are using Elasticsearch for searching, then I would recommend the 'chewy' gem together with an Elasticsearch server.
For more information, see the links below.
For chewy:
https://github.com/toptal/chewy
Integrating chewy with Elasticsearch:
http://www.toptal.com/ruby-on-rails/elasticsearch-for-ruby-on-rails-an-introduction-to-chewy
Thanks
I can recommend searchkick:
https://github.com/ankane/searchkick
I have several apps in production running with searchkick, and it's easy to use.
Also check out the searchkick documentation, where a search for products is described in detail with facets, suggestions, etc.
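For comparison, a minimal sketch of the same kind of Product search with searchkick (assuming the gem is installed and the existing self.search method is removed so it does not shadow searchkick's):
class Product < ActiveRecord::Base
  # word_middle enables partial-word matching on these fields
  searchkick word_middle: [:name, :description]
end

Product.reindex
# Misspellings are tolerated by default; fields can be boosted per query
Product.search(params[:query], fields: ["name^10", "description"], match: :word_middle)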

Multikey indexing in rails mongoid

I want to store data in this format.
{
  "_id": ObjectId(...),
  "title": "Grocery Quality",
  "comments": [
    { author_id: ObjectId(...),
      date: Date(...),
      text: "Please expand the cheddar selection." },
    { author_id: ObjectId(...),
      date: Date(...),
      text: "Please expand the mustard selection." },
    { author_id: ObjectId(...),
      date: Date(...),
      text: "Please expand the olive selection." }
  ]
}
I'm confused as to how to achieve this format for my data.
I am using mongoid; does Mongoid support Multikey Indexing?
How can I use Mongoid to achieve my desired format and behaviour?
I'm not sure if I understood your question correctly, but as I can't comment I'm answering right away. If this isn't what you asked, please explain a little bit more =)
You have your model with the fields you wrote above; I will call it the Post model. For the comments on it, I would suggest you create another model called Comment and embed it in the Post model:
class Post
  include Mongoid::Document
  field :title
  embeds_many :comments
end

class Comment
  include Mongoid::Document
  field :date
  field :text
  belongs_to :author # stores author_id on each comment, as in the desired document
  embedded_in :post
end
And to index the comments on the Post model you could do:
index({ :"comments.updated_at" => 1 })
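Usage would then look something like this (a sketch assuming the models above; MongoDB builds a multikey index automatically when the indexed field lives inside an array of embedded documents, so no special Mongoid option is needed):
post = Post.create!(title: "Grocery Quality")
post.comments.create!(date: Time.now, text: "Please expand the cheddar selection.")
post.comments.create!(date: Time.now, text: "Please expand the mustard selection.")
# The comments are stored inside the post document itself, giving the nested
# structure from the question; Post.create_indexes builds the declared index.
Post.create_indexes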

Active record result and transformed JSON

I need to transform active record JSON to something like this:
{
cols: [{id: 'task', label: 'Task', type: 'string'},
{id: 'hours', label: 'Hours per Day', type: 'number'}],
rows: [{c:[{v: 'Work'}, {v: 11}]},
{c:[{v: 'Eat'}, {v: 2}]},
{c:[{v: 'Commute'}, {v: 2}]},
{c:[{v: 'Watch TV'}, {v:2}]},
{c:[{v: 'Sleep'}, {v:7, f:'7.000'}]}
]
}
That is totally different from what to_json returns from ActiveRecord. What is the most Ruby-like way to transform the JSON?
Override the to_json method in your model
# your_model.rb, implement an instance method to_json
def to_json(options = {})
  {
    'cols' => [{ 'id' => 'whateveryoulike' }],
    'rows' => [{ 'id' => 'whateveryoulike' }]
  }.to_json(options)
end
Remember, it is important to accept options as a parameter to this method and pass it on to the to_json call on the hash (or any other to_json call you make inside this method, for that matter). Otherwise, the method may not behave as expected during collection JSON serialization. And of course, since you haven't given any details as to what your model is and how it maps to the desired JSON response, you will have to implement the representation of cols and rows as you like.
This also applies to to_xml.
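The target structure in the question looks like the Google Charts DataTable format; for that specific case, a sketch of such an override might be (assuming a hypothetical Task model with name and hours columns):
# task.rb (hypothetical model, for illustration only)
class Task < ActiveRecord::Base
  # Builds the { cols: [...], rows: [...] } structure from the question for all tasks
  def self.to_chart_json(options = {})
    {
      'cols' => [{ 'id' => 'task',  'label' => 'Task',          'type' => 'string' },
                 { 'id' => 'hours', 'label' => 'Hours per Day', 'type' => 'number' }],
      'rows' => all.map { |task| { 'c' => [{ 'v' => task.name }, { 'v' => task.hours }] } }
    }.to_json(options)
  end
end
You could then call render json: Task.to_chart_json from a controller.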
