How to join multiple tables with one-to-one relationships in Rails

Ruby on Rails is very new to me. I am trying to retrieve a set of columns from 3 different tables. I thought I could use an SQL view to retrieve my results, but could not find a way to use views in Rails. Here are my tables.
1) User table --> user name, password and email
2) UserDetails table --> foreign key: user_id, name, address1, city etc.
3) UserWorkDetails --> foreign key: user_id, work address1, work type, etc
These three tables have one-to-one relationships: table 2 belongs to table 1 and table 3 also belongs to table 1. Table 1 has one UserDetails record and one UserWorkDetails record.
I want to get user email, name, address1, city, work address1, work type using joins.
What is the best way to handle this?

The data live in the models; everything else is just an optimization. So address1 is at user.user_detail.address1, for instance.
If you have these models:
class User < ActiveRecord::Base
  has_one :user_detail
  has_one :user_work_detail
end
class UserDetail < ActiveRecord::Base
  belongs_to :user
end
class UserWorkDetail < ActiveRecord::Base
  belongs_to :user
end
With user_id columns in tables named user_details and user_work_details, everything else is done for you.
If you later need to optimize you can :include the owned models, but it's not necessary for everything to work.
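For example, once those associations are in place, reading across all three tables is just method calls on the user object (a sketch using the column names from the question):
user = User.find(some_id)
user.email                           # from users
user.user_detail.address1            # from user_details
user.user_detail.city
user.user_work_detail.work_address1  # from user_work_details
user.user_work_detail.work_type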

To get what you want done quickly, use the :include option to include both of the other tables when you query the primary table:
some_user_details = User.find(some_id, :include => [:user_details, :user_work_details])
This will just load all of the fields from the tables at once so there's only one query executed, then you can do what you need with the objects as they will contain all of the user data.
I find that this is simple enough and sufficient, and with this you're not optimising too early before you know where the bottlenecks are.
However if you really want to just load the required fields use the :select option as well on the ActiveRecord::Base find method:
some_user_details = User.find(some_id, :include => [:user_details, :user_work_details], :select => "id, email, name, address1, city, work_address1")
Although my SQL is a bit rusty at the moment.
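On newer Rails versions the same idea is written with includes; note the association names are singular if declared with has_one as in the other answer, so treat this as a sketch:
some_user_details = User.includes(:user_detail, :user_work_detail).find(some_id)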

User.find(:first, :joins => [:user_work_details, :user_details], :conditions => {:id => id})
I'd say that trying to just select the fields you want is a premature optimization at this point, but you could do it in a complicated :select hash.
:include will do three selects: one on users, one on user_details, and one on user_work_details. It's great for selecting a collection of objects from multiple tables in a minimal number of queries.
http://apidock.com/rails/ActiveRecord/Base/find/class

Related

Searching multiple tables with PostgreSQL 13 and Rails 6+

I provide a lot of context to set the stage for the question. What I'm trying to solve is fast and accurate fuzzysearch against multiple database tables using structured data, not full-text document search.
I'm using PostgreSQL 13.4+ and Rails 6+, if it matters.
I have fairly structured data for several tables:
class Contact
attribute :id
attribute :first_name
attribute :last_name
attribute :email
attribute :phone
end
class Organization
attribute :name
attribute :license_number
end
...several other tables...
I'm trying to implement a fast and accurate fuzzysearch so that I can search across all these tables (Rails models) at once.
Currently I have a separate search query using ILIKE that concats the columns I want to search against on-the-fly for each model:
# contact.rb
scope :search -> (q) { where("concat_ws(' ', first_name, last_name, email, phone) ILIKE :q", q: "%#{q}%")
# organization.rb
scope :search -> (q) { where("concat_ws(' ', name, license_number) ILIKE :q", q: "%#{q}%") }
In my search controller I query each of these tables separately and display the top 3 results for each model.
@contacts = Contact.search(params[:q]).limit(3)
@organizations = Organization.search(params[:q]).limit(3)
This works but is fairly slow and not as accurate as I would like.
Problems with my current approach:
Slow (relatively speaking) with only thousands of records.
Not accurate because ILIKE must have an exact match somewhere in the string and I want to implement fuzzysearch (ie, with ILIKE, "smth" would not match "smith").
Not weighted; I would like to weight the contacts.last_name column over, say, organizations.name, because the contacts table is generally the higher-priority search target.
My solution
My theoretical solution is to create a search_entries polymorphic table that has a separate record for each contact, organization, etc, that I want to search against, and then this search_entries table could be indexed for fast retrieval.
class SearchEntry < ApplicationRecord
  attribute :data
  belongs_to :searchable, polymorphic: true

  # Store data as all lowercase to optimize search (avoid calling lower() in PG)
  def data=(text)
    self[:data] = text.downcase
  end
end
However, what I'm getting stuck on is how to structure this table so that it can be indexed and searched quickly.
contact = Contact.first
SearchEntry.create(searchable: contact, data: "#{contact.first_name} #{contact.last_name} #{contact.email} #{contact.phone}")
organization = Organization.first
SearchEntry.create(searchable: organization, data: "#{organization.name} #{organization.license_number}")
This gives me the ability to do something like:
SearchEntry.where("data LIKE :q", q: "%#{q}%")
or even something like fuzzysearch using PG's similarity() function:
SearchEntry.connection.execute("SELECT * FROM search_entries ORDER BY SIMILARITY(data, '#{q}') LIMIT 10")
I believe I can use a GIN index with pg_trgm on this data field as well to optimize searching (not 100% on that...).
This simplifies my search into a single query on a single table, but it still doesn't allow me to do weighted column searching (ie, contacts.last_name is more important than organizations.name).
Questions
Would this approach enable me to index the data so that I could have very fast fuzzysearch? (I know "very fast" is subjective, so what I mean is an efficient usage of PG to get results as quickly as possible).
Would I be able to use a GIN index combined with pg_trgm tri-grams to index this data for fast fuzzysearch?
How would I implement weighting certain values higher than others in an approach like this?
One potential solution is to create a materialized view consisting of a union of data from the two (or more) tables. Take this simplified example:
CREATE MATERIALIZED VIEW searchables AS
SELECT
  resource_id,
  resource_type,
  name,
  weight
FROM (
  SELECT
    id AS resource_id,
    'Contact' AS resource_type,
    concat_ws(' ', first_name, last_name) AS name,
    2 AS weight
  FROM contacts
  UNION
  SELECT
    id AS resource_id,
    'Organization' AS resource_type,
    name,
    1 AS weight
  FROM organizations
) AS resources;
class Searchable < ApplicationRecord
  belongs_to :resource, polymorphic: true

  def readonly?
    true
  end

  # Search contacts and organizations with a higher weight on contacts
  def self.search(name)
    where(arel_table[:name].matches(name)).order(weight: :desc)
  end
end
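Usage would then be something like this (a sketch; you supply the LIKE wildcards yourself, and .resource takes you back to the underlying Contact or Organization record):
results = Searchable.search("%smith%").limit(10)
results.map(&:resource)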
Since materialized views are stored in a table-like structure, you can apply indexes just like you could with a normal table:
CREATE INDEX searchables_name_trgm ON searchables USING gist (name gist_trgm_ops);
To ActiveRecord it also behaves just like a normal table.
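In a Rails migration, enabling pg_trgm and creating that index could look roughly like this (a sketch; the migration class name is illustrative):
class AddTrigramIndexToSearchables < ActiveRecord::Migration[6.0]
  def up
    enable_extension "pg_trgm"
    execute <<-SQL
      CREATE INDEX searchables_name_trgm
      ON searchables USING gist (name gist_trgm_ops);
    SQL
  end

  def down
    execute "DROP INDEX IF EXISTS searchables_name_trgm;"
  end
end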
Of course, the complexity here will grow with the number of columns you want to search, and the end result might end up both underwhelming in functionality and overwhelming in complexity compared to an off-the-shelf solution with thousands of hours behind it.
The scenic gem can be used to make the migrations for creating materialized views simpler.
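With scenic, the SQL for the view lives in db/views (e.g. db/views/searchables_v01.sql) and the migration itself reduces to something like this sketch:
class CreateSearchables < ActiveRecord::Migration[6.0]
  def change
    create_view :searchables, materialized: true
  end
end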

Mean SQL in Rails 4 without SQL using joins and where

A location belongs to one or more entities. An entity can have one or more locations.
I'm trying to get all other locations that have the same entity as the current location.
I have the following model:
class Location < ActiveRecord::Base
has_many :location_assignments
has_many :entities, :through => :location_assignments
accepts_nested_attributes_for :location_assignments
end
class Entity < ActiveRecord::Base
has_many :location_assignments
has_many :locations, through: :location_assignments
accepts_nested_attributes_for :location_assignments
end
This is in SQL what I want
SELECT DISTINCT l.* FROM locations l, location_assignments la, entities e
WHERE l.id = la.location_id
AND la.entity_id = e.id
AND e.id in ( SELECT ee.id from entities ee, location_assignments laa
WHERE ee.id = laa.entity_id
AND laa.location_id = 1)
But I don't want to use SQL.
This is what I tried with Rails
Location.joins(:entities => :locations).where(:locations => {:id => location.id})
It gives me the current location several times. The number of rows is the same as the SQL returns without DISTINCT, rather than each matching location only once.
Any thoughts?
One way is to mimic your SQL which uses a subquery and just use standard ActiveRecord querying without dropping down to AREL. Let's start with the subquery:
Entity.joins(:location_assignments).where(:location_assignments => {:location_id => location.id})
This returns a relation which contains all the entities which have the location represented by location.id associated with them, just like your SQL subquery.
Then the main query, with the subquery as xxxxx for now for readability:
Location.joins(:entities).where(:entities => {:id => xxxxx})
This is the equivalent of your main query. Plugging in the subquery which returns what is basically an array of entities (ok, a relation, but same effect in this case) prompts ActiveRecord to turn the WHERE condition into an IN rather than just an =. ActiveRecord is also smart enough to use the id of each entity. So, plug in the subquery:
Location.joins(:entities).where(:entities => {:id => Entity.joins(:location_assignments).where(:location_assignments => {:location_id => location.id})})
Note that like your query, this returns the original location you started with, as well as all the others which share the same entities.
I think this should be equivalent to your SQL, but see how it goes on your data!
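If you also want to drop the starting location itself and the duplicate rows, in Rails 4 you can tack on where.not and uniq, something along these lines (untested against your schema):
Location.joins(:entities)
        .where(:entities => {:id => Entity.joins(:location_assignments).where(:location_assignments => {:location_id => location.id})})
        .where.not(:id => location.id)
        .uniq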
If you want to make the query more efficient using a self join (so that instead of two joins plus a subquery with another join, you have just two joins), but without using SQL fragment strings, I think you need to drop down to AREL, something like this:
l = Arel::Table.new(:locations)
la = Arel::Table.new(:location_assignments)
starting_location_la = la.alias
l_joined_la = l.join(la).on(l[:id].eq(la[:location_id]))
filtered_and_projected = l_joined_la.join(starting_location_la)
                                    .on(starting_location_la[:entity_id].eq(la[:entity_id]))
                                    .where(starting_location_la[:location_id].eq(location.id))
                                    .project(l[Arel.star])
Location.find_by_sql(filtered_and_projected)
This just joins all locations to their location assignments and then joins again with the location assignments using entity id, but only with those belonging to your starting location object so it acts like a filter. This gives me the same results as the previous approach using standard ActiveRecord querying, but again, see how it goes on your data!

How to select from a table that has been joined with same model/class/table?

I'm trying to get a count of how many subcontacts every contact has.
class Contact < ActiveRecord::Base
  has_many :subcontacts, class_name: "Contact", foreign_key: "supercontact_id"
  belongs_to :supercontact, class_name: "Contact"
end
And here's the ActiveRecord part I have so far; it's roughly what I'm trying to do.
Contact.joins{subcontacts.outer}.select(subcontacts.count as subcontact_count)
I think the problem is that the joins portion is looking for an association name while the select part is looking for a table name, and the trouble is that it's the same table... What's the best way to do this so that it stays a relation, or with SQL, while minimizing the number of queries so that it isn't an N+1 problem?
Try using
results = Contact.joins(:subcontacts).select("count(subcontacts.id) as count, contacts.id").group("contacts.id")
and the count can be fetched as
results.map do |result|
  "Contact ID: #{result.id} - Subcontacts Count: #{result['count']}"
end
Contact.all.each do |contact|
  puts "#{contact.name} => #{contact.subcontacts.count}"
end
OR
Contact.all.map { |contact| { contact.name => contact.subcontacts.count } }
The above will give you hash-like results of the form { contact_name => subcontacts_count }. Note, though, that both variants issue one count query per contact, which is exactly the N+1 pattern.
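If you want a single query instead of one count per contact, a grouped count also works; this is a sketch assuming the supercontact_id foreign key from the question:
# { supercontact_id => number_of_subcontacts } in one query
subcontact_counts = Contact.where("supercontact_id IS NOT NULL").group(:supercontact_id).count
subcontact_counts.fetch(contact.id, 0)  # 0 for contacts with no subcontacts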

See if one person is before another in the alphabet, ruby, rails

I'm doing an app for a membership database.
Each person may have a partner. When it comes to displaying the list, I only want to have one row for each family, so at the moment I'm comparing first names and not displaying the row if the person's name is second. Like this
person.first_name != [person.first_name, person.partner.first_name].sort[0]
This means each family only gets displayed once, not twice - once for each partner.
And I'm doing this in the view.
There must be a better way of doing this, and it'd be really great if I could do it at the database level. I'm using postgresql if that makes a difference.
Edit
Sorry if it was unclear.
Say Person 1 has the first_name "Edward" and Person 2 has the first_name "Fay". Edward and Fay are married.
I only want to show them once in my list - I want a row to look like this
Surname     First name   Address etc
Mysurname   Edward       ....
            Fay
I don't want to display the row again with Fay first because I've got both Fay and Edward in my list of people, so I use the Ruby from the first part of the question to check whether I should display the row: it compares their first names and only renders the row if the person has a first name that sorts before his/her partner's first name.
Here's the relevant part of my person model
class Person < ActiveRecord::Base
  has_one :relationship_link, :foreign_key => :person_id, :dependent => :destroy, :include => :partner
  has_one :partner, :through => :relationship_link, :source => :person_b, :class_name => "Person"
end
I hope that's clearer
You need to use DISTINCT ON or GROUP BY. In postgres you need to be careful to group by everything that you are selecting. If you only need to get the last names you can select("DISTINCT ON(last_name) last_name").pluck("last_name"). You will only get an array of last names though.
Maybe you can get records if you order by every other fields in your table, like this:
select("DISTINCT ON(people.last_name) people.*").order("people.last_name ASC, people.first_name ASC, people.field2 DESC, people.field3 ASC...")
You need to order by every attribute so the result is not ambiguous.
For this case, I would create a data structure (a Hash) to store people instances given a specific surname. Something like this:
def build_surnames_hash(people_array)
  surnames_hash = {}
  people_array.each do |person|
    last_name = person.last_name
    surnames_hash[last_name] ||= []
    surnames_hash[last_name] << person
  end
  surnames_hash
end
That way, you can iterate over the hash and display people using their surnames stored as hash's keys:
surnames_hash = build_surnames_hash(Person.all)
surnames_hash.each do |surname, person_instances_array|
  # display the surname once
  # iterate over person_instances_array displaying their properties
end
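Filling in that loop body, the rendering could be as simple as this sketch (the surname once, then each person's first name):
surnames_hash.each do |surname, person_instances_array|
  puts surname
  person_instances_array.each do |person|
    puts "  #{person.first_name}"
  end
end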

Datamapper: Sorting results through association

I'm working on a Rails 3.2 app that uses Datamapper as its ORM. I'm looking for a way to sort a result set by an attribute of the associated model. Specifically I have the following models:
class Vehicle
  include DataMapper::Resource
  belongs_to :user
end
class User
  include DataMapper::Resource
  has n, :vehicles
end
Now I want to be able to query the vehicles and sort them by the name of the driver. I tried the following but neither seems to work with Datamapper:
> Vehicle.all( :order => 'users.name' )
ArgumentError: +options[:order]+ entry "users.name" does not map to a property in Vehicle
> Vehicle.all( :order => { :users => 'name' } )
ArgumentError: +options[:order]+ entry [:users, "name"] of an unsupported object Array
Right now I'm using Ruby to sort the result set post-query, but obviously that's not helping performance, and it also stops me from chaining on other scopes.
I spent some more time digging around and finally turned up an old blog which has a solution to this problem. It involves manually building the ordering query in DataMapper.
From: http://rhnh.net/2010/12/01/ordering-by-a-field-in-a-join-model-with-datamapper
def self.ordered_by_vehicle_name(direction = :asc)
  order = DataMapper::Query::Direction.new(vehicle.name, direction)
  query = all.query
  query.instance_variable_set("@order", [order])
  query.instance_variable_set("@links", [relationships['vehicle'].inverse])
  all(query)
end
This will let you order by association and still chain on other scopes, e.g.:
User.ordered_by_vehicle_name(:desc).all( :name => 'foo' )
It's a bit hacky but it does what I wanted it to do at least ;)
Note: I'm not familiar with DataMapper and my answer might not be within the standards and recommendations of using DataMapper, but it should hopefully give you the result you're looking for.
I've been looking through various Google searches and the DataMapper documentation and I haven't found a way to "order by association attribute". The only solution I have thought of is "raw" SQL.
The query would look like this.
SELECT vehicles.* FROM vehicles
LEFT JOIN users ON vehicles.user_id = users.id
ORDER BY users.name
Unfortunately, from my understanding, when you query the database directly you won't get Vehicle objects back, just the raw data from the database.
From the documentation: http://datamapper.org/docs/find.html. It's near the bottom titled "Talking directly to your data-store"
Note that this will not return Zoo objects, rather the raw data straight from the database
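For completeness, talking to the data-store directly for this query might look roughly like the sketch below, assuming a DataObjects-based adapter that exposes select; it returns plain rows, not Vehicle objects:
sql = <<-SQL
  SELECT vehicles.* FROM vehicles
  LEFT JOIN users ON vehicles.user_id = users.id
  ORDER BY users.name
SQL
rows = DataMapper.repository(:default).adapter.select(sql)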
Vehicle.joins(:user).order('users.name').all
or in Rails 2.3,
Vehicle.all(:joins => "inner join users on vehicles.user_id = users.id", :order => 'users.name')
