I'm trying to lock one of the records in my database when seeding (via the API). By locking I mean not being able to create topics under a certain movie, or just disabling the 'show' method.
It'll be simpler if I just show you my seeds.rb file:
require 'open-uri'
require 'nokogiri'

doc = Nokogiri::XML(URI.open("http://www.kinoballada.info/repertuar/export/small/dzien/xml"))

movie_array = []
doc.css('dzien').each do |node|
  children = node.children
  title = children.css('tytul').inner_text
  movie_array << title
  Movie.find_or_create_by(name: title)
end

movies_to_delete = Movie.where.not(name: movie_array)
movies_to_delete.destroy_all
The last two lines are the essential part: I want to LOCK the movies, not destroy them, with something like:
movies_to_lock = Movie.where.not(name: movie_array)
movies_to_lock.??????_all
Is there any way I can do this?
The easiest way to do what you want is to issue a SELECT ... FOR UPDATE. Once you select a set of rows, they'll be locked against other threads until you do something that causes the database to release the lock.
Not every RDBMS supports the command, and some older databases lock the entire table when you select from one of its rows. You'll probably want an RDBMS-agnostic solution on the application side that avoids such problems, preserving the freedom to switch databases later without additional worries.
Consider adding a boolean column called locked to your table which you can read before allowing the row to be included in any result set. This approach seems to come at a minimal expense, while allowing you to avoid database specific problems.
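A minimal sketch of that approach, using the Movie model from the question (the column name locked and the unlocked scope are my own choices, not something from the original code):

```ruby
# Migration: add a boolean flag, defaulting to unlocked.
class AddLockedToMovies < ActiveRecord::Migration
  def change
    add_column :movies, :locked, :boolean, default: false, null: false
  end
end

# Model: a scope so reads can exclude locked rows.
class Movie < ActiveRecord::Base
  scope :unlocked, -> { where(locked: false) }
end

# In seeds.rb, lock instead of destroy:
movies_to_lock = Movie.where.not(name: movie_array)
movies_to_lock.update_all(locked: true)
```

update_all writes the flag in a single UPDATE without instantiating each record; controllers would then read Movie.unlocked instead of Movie.all, and the show action can check locked before rendering.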
I have an Account model, and Asset, Capital and Revenue models that all inherit from Account (single-table inheritance). The Account model has three attributes: name, code and type. When I create an account, for example:
Account.create(name: "test123", code: "test123", type: "Asset")
I expect SQL to run two inserts: one for the Account model and one for the Asset table.
My Sunspot setup works well: it reindexes my database and I can search my params.
But when I update my Account model, SQL runs one insert and one update.
My question is: how can I reindex my model on update, for a particular record? I can do Sunspot.reindex, but that loads all the data from SQL, which is slow.
SQL will run two inserts: one for the Account model and one for the Asset table.
FYI, you use STI when you want multiple models to share the same database table because they are similar in attributes and behavior. For example, an AdminUser model is likely to have almost the same attributes/columns as PublisherUser or ReaderUser, so you might have a common users table (a User model) shared among them.
The point is: ActiveRecord will run a single SQL query, not two, like:
INSERT INTO "accounts" ("name", "code", "type") VALUES ('test123', 'test123', 'Asset')
My question is: how can I reindex my model on update, for a particular record? I can do Sunspot.reindex, but that loads all the data from SQL, which is slow.
Actually, sunspot_rails is designed to auto-reindex whenever you make changes to your model/record; it listens to the save callbacks.
But you need to make sure that you are not using methods like update_column(s). See the list of silent create/update methods, which do not trigger callbacks or validations at all.
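For example (a sketch using the Account model from the question): update runs the save callbacks that sunspot_rails hooks into, while update_column writes straight to the database and skips them:

```ruby
account = Account.find(1)

# Runs validations and save callbacks, so sunspot_rails
# automatically reindexes the record:
account.update(name: "new name")

# Writes directly to the database: no validations, no callbacks,
# and therefore no automatic Solr reindex:
account.update_column(:name, "another name")
```

If you must use a silent method, you'd have to call Sunspot.index(account) yourself afterwards.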
In addition, you need to understand Solr's concept of batch size. For performance reasons, new indexes are not committed immediately ("committed" here means writing the indexes out, like a commit in an RDBMS).
By default the batch_size for commits is 50, meaning the indexes are committed only after 50 executions of the index method; only then will you be able to search those records. To change it, use the following:
# in config/initializers/sunspot_config.rb
Sunspot.config.indexing.default_batch_size = 1 # or any number
or
# in models; it's not considered good practice, though
after_commit do
  Sunspot.commit
end
For manual re-indexing, you can do it as @Kathryn suggested.
But I don't think you need to intervene in the automatic operation. Most likely you just weren't seeing immediate results, which made you worry.
According to the documentation, objects will be indexed automatically if you are on Rails. But it also mentions you can reindex a class manually:
Account.reindex
Sunspot.commit
It also suggests using Sunspot.index on individual objects.
I put this in my model:
after_update do
  Sunspot.index Account.where(id: self.id)
end
Long-time reader, first-time poster.
I recently started using Ruby on Rails, so I am still very new to the environment (even though I have completed a few guides), so please be gentle.
What I want to do is create a sort of archive table of another table, which the user can access at any time (via a different link on the website).
So for example, if I have a users table, I want to be able to archive old users but still give someone the option to go and view them.
Basically, it would delete the user from the initial table and save his/her info into an archived_users table.
Thank you for your time.
I figured my comment was more of an answer, so I'm posting it here and adding more info.
In this situation you're better off adding some sort of "active" flag to the users table, which you can flip on or off as needed. That way you don't need to worry about dealing with yet another model class, and you can reuse all the same view and controller structures. In your views, you can then simply hide any inactive users (and maybe only show inactive folks if the logged-in user is an admin, etc.).
You also have the freedom to include other metadata, such as a "deactivated on" timestamp, for example.
Long story short, if you're concerned about performance, with proper indexing (and partitioning if necessary), you shouldn't really need to create a separate archive table.
The only reason I can think of to create a separate archive table is if you're dealing with billions upon billions of records, and/or growing at an insane rate (which is probably not your case).
The best way to do this is probably to add a column called deleted to the original users table. You can then filter out the old users under normal circumstances (preferably using a default scope) but still allow them to be seen/queried when needed.
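A sketch of that, assuming a boolean deleted column (default false) on the users table:

```ruby
class User < ActiveRecord::Base
  # Hide archived users from every normal query.
  default_scope { where(deleted: false) }
end

# Normal queries only see active users:
User.all

# "Archive" a user by flipping the flag instead of destroying the row:
User.find(42).update(deleted: true)

# The archive page can still reach them by removing the default scope:
User.unscoped.where(deleted: true)
```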
PaperTrail might work for you.
It creates a versions table and logs create/update/destroy events for any class which includes has_paper_trail. For example:
class User < ActiveRecord::Base
has_paper_trail
end
deleted_users = PaperTrail::Version.where(item_type: "User", event: "destroy")
deleted_users.last.reify.name # assuming the users table has a 'name' column
I have posts and organisations in my database. A post belongs_to an organisation, and an organisation has_many posts.
I have an existing post_id column in my posts table which, for now, I increment manually when I create a new post.
How can I add auto-increment to that column, scoped to the organisation_id?
Currently I use MySQL as my database, but I plan to switch to PostgreSQL, so the solution should ideally work for both :)
Thanks a lot!
@richard-huxton has the correct answer, and it is thread safe.
Use a transaction block with SELECT FOR UPDATE inside it. Here is my Rails implementation: transaction on a model class starts the transaction block, and lock on the row you want locks it, blocking all other concurrent access to that row, which is exactly what you want when generating a unique sequence number.
class OrderFactory
  def self.create_with_seq(order_attributes)
    order_attributes.symbolize_keys!
    raise "merchant_id required" unless order_attributes.has_key?(:merchant_id)
    merchant_id = order_attributes[:merchant_id]
    SequentialNumber.transaction do
      seq = SequentialNumber.lock.where(merchant_id: merchant_id, type: 'SequentialNumberOrder').first
      seq.number += 1
      seq.save!
      order_attributes[:sb_order_seq] = seq.number
      Order.create(order_attributes)
    end
  end
end
We run Sidekiq for background jobs, so I tested this method by creating 1000 background jobs that create orders, using 8 workers with 8 threads each. Without the lock and the transaction block, duplicate sequence numbers occurred as expected; with them, all sequence numbers were unique.
OK, I'll be blunt: I can't see the value in this. If you really want it, though, this is what you'll have to do.
First, create a table org_max_post(org_id, post_id). Populate it when you add a new organisation (I'd use a database trigger).
Then, when adding a new post you will need to:
1. BEGIN a transaction.
2. SELECT FOR UPDATE that organisation's row to lock it.
3. Increment the post_id by one and update the row.
4. Use that value to create your post.
5. COMMIT the transaction to complete your updates and release the locks.
You want all of this to happen within a single transaction, of course, with a lock held on the relevant row in org_max_post. You want to make sure that each new post_id gets allocated to one and only one post, and also that if the post fails to commit you don't waste post_ids.
If you want to get clever and reduce the SQL in your application code you can do one of:
Wrap the whole lot above in a custom insert_post() function.
Insert via a view that lacks the post_id and provides it via a rule/trigger.
Add a trigger that overwrites whatever is provided in the post_id column with a correctly updated value.
Deleting a post obviously doesn't affect your org_max_post table, so won't break your numbering.
Prevent any updates to the post_id at the database level with a trigger: check for changes between the OLD and NEW post_id and throw an exception if there is one.
Then delete your existing redundant id column in your posts table and use (org_id,post_id) as your primary key. If you're going to this trouble you might as well use it as your pkey.
Oh, and post_num or post_index is probably a better name than post_id, since it's not an identifier.
I've no idea how much of this will play nicely with Rails, I'm afraid; the last time I looked at it, the database handling was ridiculously primitive.
It's good to know how to implement it, but I would prefer to use a gem myself:
https://github.com/austinylin/sequential (based on sequenced)
https://github.com/djreimer/sequenced
https://github.com/felipediesel/auto_increment
First, I must say this is not good practice, but I will focus only on a solution for your problem:
You can always get the organisation's post count in your PostsController:
def create
  post = Post.new(...)
  ...
  post.post_id = Organization.find(organization_id).posts.count + 1
  post.save
  ...
end
You should not alter the database yourself. Let ActiveRecord take care of it.
I am building a rails app and the data should be reset every "season" but still kept. In other words, the only data retrieved from any table should be for the current season but if you want to access previous seasons, you can.
We basically need to have multiple instances of the entire database, one for each season.
The client's idea was to export the database at the end of the season, save it, and then start fresh. The problem with this is that we can't look at all of the data at once.
The only idea I have is to add a season_id column to every model. But in this scenario, every query would need where(season_id: CURRENT_SEASON). Should I just make this a default scope for every model?
Is there a good way to do this?
If you want all the data in a single database, then you'll have to filter it, so you're on the right track. This is totally fine; data is filtered all the time anyway, so it's not a big deal. What you're describing also sounds very similar to marking data as archived (anything not in the current season is essentially archived), something that is very commonly done, usually by setting a boolean flag on every record to hide it, or some equivalent method.
You'll probably want a scope or a default_scope. The main downside of a default_scope is that you must use .unscoped in every place where you want to access data outside of the current season, whereas not using a default scope means you must specify the scope on every call. Default scopes can also seem to get applied in funny places from time to time; in my experience I prefer to always be explicit about the scopes I'm using (so I never use default_scope), but that is a personal preference.
In terms of how to design the database you can either add the boolean flag for every record that tells whether or not that data is in the current season, or as you noted you can include a season_id that will be checked against the current season ID and filter it that way. Either way, a scope of some sort would be a good way to do it.
If using a simple boolean, then either at the end of the current season or the start of the new one, you would have to go and mark all current-season records as no longer current. A rake task or something similar would make this convenient, but it adds a small amount of maintenance.
If using a season_id plus a constant in the code indicating which season is current (perhaps via a config file), it would be easier to roll over to a new season, since no DB updates would be required from season to season.
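A sketch of the season_id variant; the constant, the config file and the Seasonal module are my own naming, not from the question:

```ruby
# config/initializers/season.rb
# Read the current season once at boot, e.g. from config/season.yml.
CURRENT_SEASON_ID = Rails.application.config_for(:season)["current_id"]

# app/models/concerns/seasonal.rb
module Seasonal
  extend ActiveSupport::Concern

  included do
    scope :current_season, -> { where(season_id: CURRENT_SEASON_ID) }
  end
end

# Any table that carries a season_id column includes the concern:
class Game < ActiveRecord::Base
  include Seasonal
end

# Game.current_season       # this season only
# Game.where(season_id: 3)  # a past season, explicitly
```

Rolling over to a new season is then a one-line config change rather than a mass UPDATE.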
[Disclaimer: I'm not familiar with Ruby so I'll just comment from the database perspective.]
The problem with this is that we can't look at all of the data at once.
If you need to keep the old versions accessible, then you should keep them in the same database.
Designing a "versioned" (or "temporal" or "historized") data model is something of a black art. Let me know what your model looks like now and I might have some suggestions for how to "version" it. Things can get especially complicated when handling connections between versioned objects.
In the meantime, take a look at this post, for an example of one such model (unrelated to your domain, but hopefully providing some ideas).
Alternatively, you could try using a DBMS-specific mechanism such as Oracle's flashback query, but this is obviously not available to everybody and may not be suitable for keeping the permanent history...
I'd like to be able to "reserve" an element, similar to how an airplane seat is locked for a short period of time before it's actually paid for. I think the best way is to do it through the database, preferably at the ORM layer.
Here's an example:
ActiveRecord::Base.transaction do
  bar = Bar.find(1, :lock => true)
  # do my stuff
end
I need a more flexible solution though.
Here's how I am imagining it to work conceptually:
# action1:
# put an expiring lock (30s) on an element (don't block unrelated code)
# other code
# action2 (after payment):
# come back to the locked element to claim ownership of it
UPDATE: Anyone trying to do this in Rails should try using built-in optimistic locking functionality first.
Add an additional column, locked_until, but beware of concurrency. I'd probably handle that down at the db layer.
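As a plain-Ruby sketch of the locked_until idea (the names are mine; in a real app the two fields would be database columns, and the check-and-set would be done atomically in SQL, e.g. UPDATE ... WHERE locked_until IS NULL OR locked_until < now):

```ruby
# In-memory model of an expiring reservation. A reservation holds for
# LOCK_DURATION seconds; after that, anyone else may take the element.
class Reservable
  LOCK_DURATION = 30 # seconds

  def initialize
    @locked_by = nil
    @locked_until = nil
  end

  # Try to reserve for `user`. Succeeds if the element is free,
  # already held by the same user, or the previous lock has expired.
  def reserve(user, now = Time.now)
    if @locked_until.nil? || now >= @locked_until || @locked_by == user
      @locked_by = user
      @locked_until = now + LOCK_DURATION
      true
    else
      false
    end
  end

  # After payment: claiming succeeds only while the caller still holds a live lock.
  def claim(user, now = Time.now)
    @locked_by == user && !@locked_until.nil? && now < @locked_until
  end
end
```

For example, seat.reserve(:alice) blocks seat.reserve(:bob) for 30 seconds; once the lock expires, Bob's reserve succeeds and Alice can no longer claim.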
I could have a separate table specifically for this purpose, called potential_owner. It would have a timestamp, so that one can figure out the timing. Basically it would work something like this:
# lock the table
# check latest record to see if the element is available
# add a new record or die
This is pretty simple to implement; however, the locking is not fine-grained. The table describes potential ownership of different elements, yet a simple check locks down the whole table. In Tass's solution, only the row for the particular element is locked.