How do I sort a Mongoid model by the length of an array field on the model?
The MongoDB documentation says:
You cannot use $size to find a range of sizes (for example: arrays
with more than 1 element). If you need to query for a range, create an
extra size field that you increment when you add elements. Indexes
cannot be used for the $size portion of a query, although if other
query expressions are included indexes may be used to search for
matches on that portion of the query expression.
So we cannot order using Mongo's $size.
You can solve this by adding a new field that stores the array size.
class Post
  include Mongoid::Document

  field :likes, type: Array, default: []
  field :likes_size, type: Integer

  before_save do
    self.likes_size = likes.size
  end
end
Sort posts by likes_size:
Post.order_by(likes_size: :desc)
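Note that before_save only populates likes_size when a document is saved, so documents created before the field existed need a one-off backfill. A minimal sketch using Mongoid's atomic set (field names as in the model above):
Post.all.each do |post|
  post.set(likes_size: post.likes.size)
end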
The documentation says you can't order by $size. Try adding a new field containing the array size and sorting on that, which works as an order by.
In Ruby, you can sort an array like this:
my_array.sort_by(&:my_attr)
It will sort the array my_array by the attribute my_attr of each element inside the array.
You can also write it like this:
my_array.sort_by { |element| element.my_attr }
This is exactly the same: it sorts by the my_attr attribute of each element. The second syntax is useful when you want a more complex sort condition than just the result of calling one method on each element.
Documentation : http://ruby-doc.org/core-2.3.1/Enumerable.html#method-i-sort_by
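Applied to the Post example above, an in-memory sort by array length could look like this (just a sketch; for large collections, sorting in the database via the likes_size field is preferable):
posts = Post.all.to_a
posts.sort_by { |post| post.likes.size }   # ascending by number of likes
posts.sort_by { |post| -post.likes.size }  # descending by number of likes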
Related
I want to search by an attribute that contains an array. I'm interested in returning all records where the array in this attribute contains a specific value.
Example object:
Location_1 {
  regions: ["on", "qc"]
}
I want to do something like Location.where(regions: "on"), but I'm not sure of the correct syntax.
What is the right way to do this?
Try this: Location.where('regions IN (?)', ['on', 'qc'])
The IN operator lets you specify multiple values in the WHERE clause.
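If regions is actually a PostgreSQL array column rather than a plain string column, a containment-style query may be closer to what you want. A sketch under that assumption:
Location.where("? = ANY(regions)", "on")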
Let's say I have a model Neighborhood with a jsonb[] field families: an array of JSON objects with arbitrary key/value pairs, like so: [{"name":"Smiths", "count":4}, {"name":"Miller","out_on_vacation":false}, {"name":"Bennet", "house_color":"red", "count": 4}]
I want to write an ActiveRecord query that finds Neighborhoods having certain objects inside their families array.
So if I did something like Neighborhood.where(families: {count: 4}), the result would be any Neighborhood whose families field contains a jsonb object with the key/value pair count: 4. I've played around with a bunch of different queries, but can't seem to get any of them to work without getting an error back. How would I go about writing an ActiveRecord query to get the desired results?
EDIT:
I had run a migration like so:
def change
  add_column :neighborhoods, :families, :jsonb, array: true, default: [], index: true
end
I believe you would do something like:
Neighborhood.where("families -> 'count' ? 4")
This article might help you: http://nandovieira.com/using-postgresql-and-jsonb-with-ruby-on-rails
Edit: Just noticed that you have an array inside of the jsonb, so this probably won't work.
Edit 2: This was answered over on Reddit and worked for me as well. Answering here as a reference for myself.
Neighborhood.where %q(families #> '[{"count":?}]'), 4
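For comparison, if families were stored as a single jsonb column holding the whole array (rather than a jsonb[] array column), the jsonb containment operator @> can express the same search. A sketch under that assumption:
Neighborhood.where("families @> ?", [{ count: 4 }].to_json)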
I'm using Rails 4.2.7. I have an array of my model objects, and currently I'm iterating through that array to find matching entries in the database based on a field of each object …
my_object_times.each_with_index do |my_object_time, index|
  found_my_object_time = MyObjectTime.find_by_my_object_id_and_overall_rank(my_object_id, my_object_time.overall_rank)
end
My question is: how can I rewrite the above to run one query instead of N queries, where N is the size of the array? What I want is to force the underlying database (Postgres 9.5) to do a "VALUE IN (…)" type of query, but I'm not sure how to extract all the attributes from my array and then pass them into a query appropriately.
I would do something like this:
found_my_object_times = MyObjectTime.where(
  my_object_id: my_object_id,
  overall_rank: my_object_times.map(&:overall_rank)
)
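If each in-memory object then needs its matching database row, the results of that single query can be indexed by rank. A sketch using the names from the question (and assuming overall_rank is unique per my_object_id):
found_by_rank = MyObjectTime.where(
  my_object_id: my_object_id,
  overall_rank: my_object_times.map(&:overall_rank)
).index_by(&:overall_rank)

my_object_times.each do |my_object_time|
  found_my_object_time = found_by_rank[my_object_time.overall_rank]
  # ... use found_my_object_time here ...
end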
I have a model A associated with model B via an INNER JOIN:
class A
  has_many :bees, as: :bable

  scope :bees, -> {
    joins("INNER JOIN bees AS b ON id = b.bable_id .......")
  }
end

class B
  self.table_name = "bees"
  belongs_to :bable, polymorphic: true
end
I need to filter using B's datetime field (created_at), so I declared a new attribute thus:
has bees.created_at, as: :b_created_at
The Sphinx query statement generated now includes:
GROUP_CONCAT(DISTINCT UNIX_TIMESTAMP(bees.`created_at`) SEPARATOR ',') AS `b_created_at`
After indexing, my Sphinx index file size exploded.
How much is the "GROUP_CONCAT" part of the query causing the problem, and is there a better way to filter by this attribute?
How can I debug the indexer and find other causes of the large index file being generated?
Thanks
It appears that the indexer is creating, within the index file, a comma-separated list of the created timestamps of all bees. As created timestamps are generally unique (!), this indexing is going to create one item for every bee, and if you have a lot of bees then this is going to be big.
I would look for some way to bypass Sphinx for this part of the query, if that is possible, and get it to add a direct SQL BETWEEN LowDateTs AND HighDateTs against the built-in created_at instead. I hope this is possible - it will definitely be better than using a text index to find it.
Hope this is of some help.
Edit:
Speed-reading Sphinx's docs:
[...] WHERE clause. This clause will map both to fulltext query and filters. Comparison operators (=, !=, <, >, <=, >=), IN, AND, NOT, and BETWEEN are all supported and map directly to filters [...]
So the key is to stop it treating the timestamp as a text search and use a BETWEEN, which will be vastly more efficient and hopefully stop it trying to use text indexing on this field.
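In Thinking Sphinx terms, an attribute declared with has can be filtered via :with, which maps to a Sphinx attribute filter (effectively a BETWEEN for ranges) rather than a fulltext match. A sketch using the attribute name assumed above:
A.search(
  with: { b_created_at: 1.month.ago.to_i..Time.zone.now.to_i }
)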
I am using some custom queries in Rails.
My code snippet looks like this:
@time_spent = TimeEntry.find(:all,
  :joins => "INNER JOIN sometable ON x = y",
  :select => "id, subject, spent_on")
Now, to get values, I am using:
@time_spent[index][:spent_on]
@time_spent[index][:subject]
What I want is to use index numbers in place of symbols, so that at run time I don't need to know the fields in the select clause.
For example, I want to do something similar to this:
@time_spent[index][1]
@time_spent[index][2]
Or, if I could get metadata about the result set, I could use that information.
Comments please?
When @time_spent is a collection of objects, this will get the attribute's value at the specified index for the first ([0]) item in that collection:
@time_spent[0].attributes.values[index]
So, for example, to get the 5th attribute's value for the 2nd object in the collection:
@time_spent[1].attributes.values[4]
To get the field names from the result set, use the attributes.keys method.
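Tying it together, a small sketch (the values shown are hypothetical, and the key order follows the attributes hash, which may not match the SELECT clause exactly):
record = @time_spent[0]
record.attributes.keys    # e.g. ["id", "subject", "spent_on"]
record.attributes.values  # e.g. [42, "Some ticket subject", "2011-05-04"]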