If I have a model and save it like this:
model = Website.new
model.attr = 1
model.id = 1
model.save #assume no errors in saving
then retrieve it like this:
model2 = Website.find(1)
Will model2 always be returned, ignoring errors saving to the database?
Is there a possible scenario where the data is not yet committed to the database, so that the find results in no records found? Do I need to delay the find to guarantee the row is returned?
Assuming no database errors, and assuming you haven't overridden save on Website, the only race condition you'd have is if you try to access the object (via find or otherwise) in the milliseconds before the record is created in the database.
So, to directly answer your question - yes, it's possible - but given a single database (e.g. no read-only slaves or anything like that), it's highly, highly unlikely.
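To make that concrete, here is a minimal sketch of the same flow (using save! purely so a failed insert raises instead of returning false; it assumes the save is not wrapped in some larger, still-open transaction):
model = Website.new
model.attr = 1
model.id = 1
model.save!               # raises if the INSERT fails
model2 = Website.find(1)  # on a single database, the row is already visible here; no delay needed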
Very short question; I feel like the answer must already be on StackOverflow, but I couldn't find it.
I have some incoming parameters. They each have a unique ID.
I query the database and get an ActiveRecord relation using something like:
existing_things = current_user.things.where(uuid: [param_uuids])
This gives me an ActiveRecord relation of those objects.
However, what I later do is something like:
existing_things.where(uuid: some_specific_uuid)
Of course, when I run the above query, it issues an SQL statement.
What I would love to do is find an object in a pre-loaded Active Record array of objects, and return that object without issuing another query.
How should I do that?
The same way you would if it were an ordinary array: Enumerable#find.
existing_things.find { |t| t.uuid == some_specific_uuid }
(or select if you're expecting more than one match)
If you're doing a lot of lookups like that, you might want to get a Hash involved:
things_by_uuid = existing_things.index_by(&:uuid)
things_by_uuid[some_specific_uuid]
(again, if you're expecting more than one match per value, there's group_by)
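For example, a minimal sketch of the group_by variant, reusing existing_things and the uuid values from the question (no further SQL is issued once the records are loaded):
things_grouped_by_uuid = existing_things.group_by(&:uuid)
things_grouped_by_uuid[some_specific_uuid] # => array of every loaded record with that uuid (or nil)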
In my Grails App, I have bootstrapped an object of a domain class as:
def user1 = new Users(userID: 1, name: "John Doe")
user1.save()
In my dashboard controller I have retrieved the object and modified its name property as:
Users userObj = Users.get((Long) 1)
println(userObj as JSON); // This gives me: {"class":"com.prabin.domains.Users","id":1,"name":"John Doe"}
userObj.name = "anonymous"
Now I retrieve the object with the same ID 1 again, into another Users variable, as
Users otherUserObj = Users.get((Long) 1) // **Line 2** Is this retrieving from database or from GORM session?
print(otherUserObj as JSON)// This gives me: {"class":"com.prabin.domains.Users","id":1,"name":"anonymous"}
But the value of the object in the database is not changed. And even when I retrieve the Users object with the same id 1 in another controller, it gives me the initial object rather than the modified one:
Users userObjAtDifferentController = Users.get(1);
print(userObjAtDifferentController) //This gives me: {"class":"com.prabin.domains.Users","id":1,"name":"John Doe"}
My question is: if the value is not changed in the database, why does it give me the modified object at Line 2, even though I retrieved the object using a GORM query (which I would guess retrieves from the database)?
I have also tried using save() after the modification, but the result is the same.
userObj.save() //doesn't show the changes in database.
My guess is that the object is not being saved to the database because some constraint(s) are invalid. You can determine whether this is the case by replacing your calls to save() with save(failOnError: true). If my guess is correct, an exception will be thrown if saving to the database fails.
When you call the save() method on a domain object, it may not persist in the database immediately. In order to persist the changed value to the database, you would need to do the following.
userObj.save(flush: true)
By using flush, you are telling GORM to persist immediately in the database.
In some cases when validation fails, the data will still not persist in the database. The save() method will fail silently. To catch validation errors as well as save to the database immediately, you would want to do the following:
userObj.save(flush:true, failOnError:true)
If validation errors exist, then GORM will throw a ValidationException (http://docs.grails.org/latest/api/grails/validation/ValidationException.html).
You need to consider two things:
If you do save(), the change is only retained in the Hibernate session; until you flush it, the changes do not persist in the database. So it is better to do save(flush: true, failOnError: true), put it in a try/catch block, and print the exception.
Another important thing is that your controller method needs the @Transactional annotation; without it your changes do not persist in the database, and you will encounter a "Connection is read-only. Queries leading to data modification are not allowed." exception.
Hope this helps, let me know if your issue is fixed. Happy coding. :)
I'm using the mongoid gem on a Rails project and I'm quite puzzled: I'm trying to modify the model in memory without ever saving it, so that I do not modify the db. I'm trying to modify an attribute of a model loaded in memory, but it does not work, as shown below:
mymodel = MyModel.where('some criteria')
mymodel.first.some_attribute = 0
mymodel.first.some_attribute == 0 # => false
So I guess Mongo reloads from the db each time we call first; even looping over each entry and setting some attribute has no effect, and if I loop again, all the attributes I set are back to their original values. Is there a way to commit the transaction and force mymodel to stay loaded in memory? It's hard for me to use the proper terminology, so I hope you get what I'm saying.
Calling first is a query so this is two distinct queries:
M.first
M.first
and two hits to the database that produce two completely different model instances. Similarly, calling M.each { ... } (or some other iteration method) twice will hit the database twice and produce two sets of completely distinct model instances. You could have a look at what #object_id says to verify this.
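A quick illustrative check along those lines (M stands in for your model class, as in the snippets above):
a = M.first
b = M.first
a.object_id == b.object_id # => false: two queries, two distinct in-memory instances
a.some_attribute = 0       # changes only a; b and any later M.first are unaffected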
If you want to load the objects and do things to them then be explicit about it:
m = M.first
m.attr = 0
# Now m.attr == 0 will be true and you can m.save to update the database
and for iterating, you can call #to_a to execute the query and pull a bunch of model instances from the database into local memory:
ms = M.some_query.to_a
ms.each { ... }
ms.each { ... } # iterates over the same model instances as the first ms.each
Suppose you have two ActiveRecord models - Problem and ProblemSet.
You have a @problem_set object, and you want to check if it has a problem with a certain title attribute.
You could say:
@problem_set.problems.where(:title => "Adding Numbers").any?
which returns true, by running the optimal SQL:
SELECT COUNT(*) FROM "problems" INNER JOIN "problem_sets_problems" ON "problems"."id" = "problem_sets_problems"."problem_id" WHERE "problem_sets_problems"."problem_set_id" = 1 AND "problems"."title" = 'Adding Numbers'
However, if @problem_set was only in memory, i.e. you got @problem_set by:
@problem_set = ProblemSet.new()
@problem_set.problems << Problem.new(:title => "Adding Numbers")
Then you will not be able to find the problem (ie. it returns false!). This is because the following SQL is run:
SELECT COUNT(*) FROM "problems" INNER JOIN "problem_sets_problems" ON "problems"."id" = "problem_sets_problems"."problem_id" WHERE "problem_sets_problems"."problem_set_id" IS NULL AND "problems"."title" = 'Adding Numbers'
A possible way to perform the check correctly for both persistent objects and in-memory objects, is:
@problem_set.problems.map(&:title).include? "Adding Numbers"
Which correctly returns true in both cases. However, in the case of a persistent object, it runs the non-optimal SQL (which retrieves all problems):
SELECT "problems".* FROM "problems" INNER JOIN "problem_sets_problems" ON "problems"."id" = "problem_sets_problems"."problem_id" WHERE "problem_sets_problems"."problem_set_id" = 1
Question: Is there a way to use the same code to check for both persistent objects and in-memory objects, while running optimal SQL code?
Note that a solution which checks for object persistence is permitted (but I don't see how to check the dirtiness of the collection). However, it should still work if a persistent object is modified (ie. the association collection attribute becomes dirty, and therefore the result from an SQL query would be out-of-date).
Ok, I finally worked it out.
Browsing through obscure Rails functions, I found the persisted? method. You can use @problem_set.persisted? to check if the object is persistent or in-memory only.
So the answer is:
if @problem_set.persisted?
  @problem_set.problems.where(:title => "Adding Numbers").any?
else
  @problem_set.problems.map(&:title).include? "Adding Numbers"
end
The remaining question is, what about persistent objects where the association collection is dirty? Well, by experimentation, I found out that it doesn't really happen. When you add an object to the collection, for example, one of the following:
@problem_set.problems << Problem.new(:title => "hello")
@problem_set.problems.push Problem.new(:title => "hello")
ActiveRecord immediately saves the data. Similarly, it immediately destroys the row from the associations table when you say:
@problem_set.problems.delete(@problem_set.problems[2])
That means, although there is no such method as @problem_set.problems_changed?, if there were, the current implementation would result in problems_changed? always returning false.
In effect, the collection<<, collection.push, and collection.delete methods auto-save (i.e. they call save automatically).
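To illustrate that auto-save behaviour, here is a minimal sketch (create! and the titles are just for the example, assuming the association from the question):
@problem_set = ProblemSet.create!                        # persisted parent
@problem_set.problems << Problem.new(:title => "hello")  # the join row is INSERTed immediately
@problem_set.problems.where(:title => "hello").any?      # => true; the collection is never left "dirty"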
I'm learning about concurrency in conjunction with EF4.0 and have a question about the locking pattern used.
Say I configure a fixed concurrency mode on a version number property.
Now say I fetch a record (entity) from the database (context) and edit some property. The version gets incremented when SaveChanges is called on its context. If the version currently in the database matches the version of the original record (entity), the save continues; otherwise an OptimisticConcurrencyException gets thrown by EF.
Now, my point of interest is the following: between the check of the versions and the actual save there is always a period of time; however small, it is there. So in theory someone else could've just updated the record between the comparison and the save, thus possibly corrupting the data.
How does this get solved? It feels as if the problem just gets pushed forward.
There is no period of time between checking the version and updating the record, because the database command looks like:
UPDATE SomeTable
SET SomeColumn = 'SomeValue'
WHERE Id = @Id AND Version = @OldVersion
SELECT @@ROWCOUNT
The check and the update are one atomic operation. @@ROWCOUNT will return 0 if no record with Id = @Id and Version = @OldVersion exists, and that zero is translated into the exception.
This can be (and probably is) solved using locking hints.
For SQL Server, EF can query (SELECT) from the database WITH UPDLOCK.
This tells the database engine that you want to read one or several records, and that nobody else can change those records until you perform an update thereafter.
If you want to see this for yourself, check out SQL Server Profiler, which will show you the queries in real time.
Hope that helps.
CAVEAT: I can't say for sure that this is the way EF handles this scenario because I haven't checked myself, but certainly, if you were going to do it yourself, this is one way to do it.