This is how I went about querying for one specific element:
results << read_db.collection("users").find(:created_at => {:$gt => initial_date}).to_a
Now, I am trying to query by more than one condition.
db.inventory.find({ $and: [ { price: 1.99 }, { qty: { $lt: 20 } }, { sale: true } ] } )
Now how do I build up my query? Essentially I will have a bunch of if statements; if true, I want to extend my query. I heard there is an .extend command in another language. Is there something similar in Ruby?
Essentially I want to do this:
if price
query = "{ price: 1.99 }"
end
if qty
query = query + "{ qty: { $lt: 20 } }"
end
and then just have
db.inventory.find({ $and: [query]})
This syntax is wrong; what is the best way to go about doing this?
You want to end up with something like this:
db.inventory.find({ :$and => some_array_of_mongodb_queries})
Note that I've switched to the hashrocket syntax; you can't use the JavaScript-style notation with symbols like :$and that aren't valid labels. The value for :$and should be an array of individual queries, not an array of strings, so you should build an array:
parts = [ ]
parts.push(:price => 1.99) if(price)
parts.push(:qty => { :$lt => 20 }) if(qty)
#...
db.inventory.find(:$and => parts)
BTW, you might run into some floating point problems with :price => 1.99; you should probably use an integer for that and work in cents instead of dollars. Some sort of check that parts isn't empty might be a good idea too.
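Putting it together, a rough sketch might look like this (price, qty, and sale are assumed to be flags available wherever the query is built, and the find call follows the same shorthand as above):
parts = []
parts.push(:price => 1.99)         if price  # or an integer number of cents, per the note above
parts.push(:qty => { :$lt => 20 }) if qty
parts.push(:sale => true)          if sale
# Only wrap in $and when there is actually something to filter on
query = parts.empty? ? {} : { :$and => parts }
db.inventory.find(query).to_a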
Currently I have two models
class Author
# gender
# name
end
class Book
# status -> ['published', 'in_progress']
has_one :author
end
I decided to use group_by to group the dataset
def group_by_gender_by_status
books.group_by { |book| [book.author.gender, book.status] }
end
What I get instead is this:
{["male", "published"] => [{BooksRecord}]
["female", "published"] => [{BooksRecord}]
["male", "in_progress"] => [{BooksRecord}]
["female", "in_progress"] => [{BooksRecord}]}
My goal is to get this result
{
female: {
published: 10,
in_progress: 7
},
male: {
published: 6,
in_progress: 9
}
}
so that I can access it via data[:male][:published], which makes it easier to present the data.
I think you can do something like this:
books.group_by { |book| book.author.gender }
.transform_values { |books| books.map(&:status).tally }
In particular, this is leveraging Enumerable#tally, which was added in Ruby 2.7.
You didn't specify which ruby version you're actually using though, so if you're stuck on an older one, you could replace the last line with:
.transform_values { |books| books.group_by(&:status).transform_values(&:count) }
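If you want the symbol-keyed access from the question (data[:male][:published]), you can symbolize the keys afterwards. A rough sketch, assuming gender and status are stored as strings and you're on Ruby 2.7+ for tally (2.5+ for transform_keys):
data = books.group_by { |book| book.author.gender }
            .transform_values { |bs| bs.map(&:status).tally }
            .map { |gender, counts| [gender.to_sym, counts.transform_keys(&:to_sym)] }
            .to_h

data[:male][:published] # => 6, given the sample counts above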
Enumerable#group_by just creates keys for grouping, so you cannot use it exclusively to produce your desired result. Additionally, as books grows, iterating in this fashion will become less and less performant.
You will be better off pushing the grouping and counting down to the database so that the return value is closer to your desired end result, like so:
def group_by_gender_by_status
books.joins(:author)
.group(Author.arel_attribute(:gender), Book.arel_attribute(:status))
.count
end
This will produce a Hash similar to your current group_by implementation; however, the counting and grouping will be performed on the database side before returning:
{["male", "published"] => 6,
["female", "published"] => 10,
["male", "in_progress"] => 9,
["female", "in_progress"] => 7}
To transform this into your desired nesting, we will need to post-process the data.
def group_by_gender_by_status
books.joins(:author)
.group(Author.arel_attribute(:gender), Book.arel_attribute(:status))
.count
.each_with_object(Hash.new {|h,k| h[k] = {}}) do |((gender,status),counter),obj|
obj[gender.to_sym][status.to_sym] = counter
end
end
The end result will be equivalent to your desired result and by moving the grouping and the counting to the database level it should degrade at a much slower rate.
Note: I have no idea where books came from or where this method currently lives. The implementation could potentially be simplified further with that knowledge.
I am trying to make a query that:
Finds/Gets the object (Coupon.code)
Checks if the coupon is expired (expires_at)
Checks if the coupon has been used up (coupons_remaining)
I found some syntax from a newer version of Ruby, but it isn't working with my version, 2.2.1. The syntax I have is:
def self.get(code)
where(code: normalize_code(code)).
where("coupon_count > ? OR coupon_count IS NULL", 0).
where("expires_at > ? OR expires_at IS NULL", Time.now).
take(1)
end
This throws an error of wrong number of arguments (2 for 1), which is because my Rails doesn't seem to recognize the two arguments ("coupon_count > ? OR coupon_count IS NULL", 0). So I have tried to change it, but when I change them to something like this (which in my heart felt horribly wrong):
def self.get(code)
where(code: normalize_code(code)).
where(coupon_count: self.coupon_count > 0 || self.coupon_count.nil? ).
where(expires_at: self.expires_at > Time.now || self.expires_at.nil? ).
take(1)
end
I get undefined method `coupon_count' for Coupon:Class
I am short on ideas. Can someone help me get the syntax for this get method in my model? By the way, if it matters, I am using Mongoid 5.1.0.
I feel your pain. Combining OR and AND in MongoDB is a bit messy because you're not really working with a query language at all; you're just building a hash. Similar complications apply if you might apply multiple conditions to the same field. This is also why you can't include SQL-like snippets like you can with ActiveRecord.
For example, to express:
"coupon_count > ? OR coupon_count IS NULL", 0
you need to build a hash like:
:$or => [
{ :coupon_count.gt => 0 },
{ :coupon_count => nil }
]
but if you try to add another OR to that, you'll overwrite the existing :$or key and get confusing results. Instead, you need to be aware that there will be multiple ORs and avoid the duplicate key manually by using :$and:
:$and => [
{
:$or => [
{ :coupon_count.gt => 0 },
{ :coupon_count => nil }
]
}, {
:$or => [
{ :expires_at.gt => Time.now },
{ :expires_at => nil }
]
}
]
Then adding the code condition is straightforward:
:code => normalize_code(code),
:$and => [ ... ]
That makes the whole thing a rather hideous monstrosity:
def self.get(code)
where(
:code => normalize_code(code),
:$and => [
{
:$or => [
{ :coupon_count.gt => 0 },
{ :coupon_count => nil }
]
}, {
:$or => [
{ :expires_at.gt => Time.now },
{ :expires_at => nil }
]
}
]
).first
end
You could also use find_by(that_big_mess) instead of where(that_big_mess).first. Also, if you expect the query to match multiple documents, then you probably want to add an order call to make sure you get the one you want. You could probably use the and and or query methods instead of a single hash but I doubt it will make things easy to read, understand, or maintain.
I try to avoid ORs with MongoDB because the queries lose their little minds fast and you're left with some gibbering eldritch horror that you don't want to think about too much. You're usually better off precomputing parts of your queries with generated fields (that you have to maintain and sanity check to make sure they are correct); for example, you could add another field that is true if coupon_count is positive or nil and then update that field in a before_validation hook when coupon_count changes.
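As a rough illustration of that precomputed-field idea (the redeemable field name and callback are made up for this sketch; the other fields are assumed to mirror your existing model):
class Coupon
  include Mongoid::Document

  field :code,         type: String
  field :coupon_count, type: Integer
  field :expires_at,   type: Time
  field :redeemable,   type: Mongoid::Boolean, default: true

  before_validation :refresh_redeemable, if: :coupon_count_changed?

  private

  # Keep the flag in sync so queries can use a plain equality check
  # instead of an $or over coupon_count.
  def refresh_redeemable
    self.redeemable = coupon_count.nil? || coupon_count > 0
    true # don't halt the callback chain when the assignment returns false
  end
end
The coupon_count check in the query then collapses to :redeemable => true, leaving only the expires_at $or to worry about.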
You've defined a class method, so self in this circumstance references the Coupon class rather than a Coupon instance.
Try the following:
scope :not_expired, -> { where("expires_at > ? OR expires_at IS NULL", Time.now) }
scope :previously_used, -> { where("coupon_count > 0 OR coupon_count IS NULL") }
def self.get(code)
previously_used.not_expired.find_by!(code: normalize_code(code))
end
I'm using the elasticsearch-rails gem and the elasticsearch-model gem and writing a query that happens to be really huge just because of the way the gem accepts queries.
The query itself isn't very long, but it's the filters that are very, very long, and I need to pass variables in to filter out the results correctly. Here is an example:
def search_for(input, question_id, tag_id)
query = {
:query => {
:filtered => {
:query => {
:match => {
:content => input
}
},
:filter => {
:bool => {
:must => [
{
# another nested bool with should
},
{
# another nested bool with must for question_id
},
{
# another nested bool with must for tag_id
}
]
}
}
}
}
}
User.search(query) # provided by elasticsearch-model gem
end
For brevity's sake, I've omitted the other nested bools, but as you can imagine, this can get quite long quite fast.
Does anyone have any ideas on how to store this? I was thinking of a yml file, but it seems wrong, especially because I need to pass in question_id and tag_id. Any other ideas?
If anyone is familiar with those gems and knows whether the gem's search method accepts other formats, I'd like to know that, too. It looks to me like it just wants something that can turn into a hash.
I think using a method is fine. I would separate the searching from the query:
def query_for(input, question_id, tag_id)
  query = {
    :query => {
      # ... the same nested query and filters as above ...
    }
  }
end
search query_for(input, question_id, tag_id)
Also, I see that this search functionality is in the User model, but I wonder if it belongs there. Would it make more sense to have a Search or Query model?
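Something along these lines, for example. This is only a sketch; the class name is made up, and the term filters merely stand in for the nested bools that were omitted above:
class UserSearchQuery
  def initialize(input, question_id, tag_id)
    @input       = input
    @question_id = question_id
    @tag_id      = tag_id
  end

  def to_hash
    {
      :query => {
        :filtered => {
          :query  => { :match => { :content => @input } },
          :filter => { :bool => { :must => filters } }
        }
      }
    }
  end

  private

  # Placeholder filters; swap in the real nested bool clauses here.
  def filters
    [
      { :term => { :question_id => @question_id } },
      { :term => { :tag_id => @tag_id } }
    ]
  end
end

User.search(UserSearchQuery.new(input, question_id, tag_id).to_hash)
Each filter then gets its own small, testable method instead of one giant hash literal.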
I have a model Event that is connected to MongoDB using Mongoid:
class Event
include Mongoid::Document
include Mongoid::Timestamps
field :user_name, type: String
field :action, type: String
field :ip_address, type: String
scope :recent, -> { where(:created_at.gte => 1.month.ago) }
end
Usually when I use ActiveRecord, I can do something like this to group results:
#action_counts = Event.group('action').where(:user_name =>"my_name").recent.count
And I get results with the following format:
{"action_1"=>46, "action_2"=>36, "action_3"=>41, "action_4"=>40, "action_5"=>37}
What is the best way to do the same thing with Mongoid?
Thanks in advance
I think you'll have to use map/reduce to do that. Look at this SO question for more details:
Mongoid Group By or MongoDb group by in rails
Otherwise, you can simply use the group_by method from Enumerable. Less efficient, but it should do the trick unless you have hundreds of thousands of documents.
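For example, the in-memory version might look roughly like this, assuming the scope from the question and that loading the matching documents is acceptable:
Event.where(:user_name => "my_name").recent
     .group_by(&:action)
     .each_with_object({}) { |(action, events), counts| counts[action] = events.size }
# => {"action_1" => 46, "action_2" => 36, ...}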
EDIT: Example of using map/reduce in this case
I'm not really familiar with it, and from reading the docs and playing around I couldn't reproduce the exact same hash you want, but try this:
def self.count_and_group_by_action
map = %Q{
function() {
key = this.action;
value = {count: 1};
emit(key, value);
// emit a new document {"_id" => "action", "value" => {count: 1}}
// for each input document our scope is applied to
}
}
# the idea now is to "flatten" the emitted documents that
# have the same key. Good, but we need to do something with the values
reduce = %Q{
function(key, values) {
var reducedValue = {count: 0};
// we prepare a reducedValue
// we then loop through the values associated to the same key,
// in this case, the 'action' name
values.forEach(function(value) {
reducedValue.count += value.count; // we increment the reducedValue
});
// and return the 'reduced' value for that key,
// an 'aggregate' of all the values associated to the same key
return reducedValue;
}
}
self.map_reduce(map, reduce).out(inline: true)
# we apply the map_reduce functions
# inline: true is because we don't need to store the results in a collection
# we just need a hash
end
So when you call:
Event.where(:user_name =>"my_name").recent.count_and_group_by_action
It should return something like:
[{ "_id" => "action1", "value" => { "count" => 20 }}, { "_id" => "action2" , "value" => { "count" => 10 }}]
Disclaimer: I'm no mongodb nor mongoid specialist, I've based my example on what I could find in the referenced SO question and Mongodb/Mongoid documentation online, any suggestion to make this better would be appreciated.
Resources:
http://docs.mongodb.org/manual/core/map-reduce/
http://mongoid.org/en/mongoid/docs/querying.html#map_reduce
Mongoid Group By or MongoDb group by in rails
Need some help with how to use atomic modifiers on an embedded document.
To illustrate, let's assume I've got a collection that looks like this.
Posts Collection
{
"_id" : ObjectId("blah"),
"title" : "Some title",
"comments" : [
{
"_id" : ObjectId("bleh"),
"text" : "Some comment text",
"score" : 0,
"voters" : []
}
]
}
What I'm looking to do with MongoMapper/MongoDB is perform an atomic update on a specific comment within a post document.
Something like:
class Comment
include MongoMapper::EmbeddedDocument
# Other stuff...
# For the current comment that doesn't have the current user voting, increment the vote score and add that user to the voters array so they can't vote again
def upvote!(user_id)
collection.update({"comments._id" => post_id, "comments.voters" => {"$ne" => user_id}},
{"$inc" => {"comments.score" => 1}, "$push" => {"comments.voters" => user_id}})
end
end
That's basically what I have now and it isn't working at all (nothing gets updated). Ideally, I'd also want to reload the document / embedded document but it seems as though there may not be a way to do this using MongoMapper's embedded document. Any ideas as to what I'm doing wrong?
Got this working, for anyone who's interested. There were two things I was missing:
Using $elemMatch to search objects within an array that need to satisfy two conditions (such as _id equal to the embedded document's id AND voters NOT containing the user_id)
Using the $ operator on the $inc and $push operations to ensure I'm modifying the specific object that's referenced by my query.
def upvote!(user_id)
# Use the Ruby Mongo driver to make a direct call to collection.update
collection.update(
{
'meanings' => {
'$elemMatch' => {
'_id' => self.id,
'voters' => {'$ne' => user_id}
}
}
},
{
'$inc' => { 'meanings.$.votes' => 1 },
'$push' => { 'meanings.$.voters' => user_id }
})
end