I've set up a trigger-based partitioning scheme on one of our PostgreSQL 8.3 databases, following the partitioning section of the Postgres docs. Basically, I have a parent table along with several child tables. An insert trigger on the parent redirects any inserts on the parent into the appropriate child table -- this works well.
The ActiveRecord Postgres adapter, however, relies on the Postgres INSERT ... RETURNING "id" extension to get the id of the new row after the initial insert. But the trigger seems to break the RETURNING clause -- no id is returned, although the row is created correctly.
While I suppose this behavior makes sense -- after all, nothing is actually inserted into the parent table -- I really need some kind of workaround, since other child records will be inserted that require the row id of the just-inserted row.
I suppose I could add some kind of unique key to the row prior to insert and then re-read the row using that key after the insert, but this seems pretty kludgy. Does anyone have a better workaround?
Since Rails 2.2.1, you can turn off the 'returning id' behavior just by overriding the #supports_insert_with_returning? method in PostgreSQLAdapter:
class ActiveRecord::ConnectionAdapters::PostgreSQLAdapter
  def supports_insert_with_returning?
    false
  end
end
Currently it looks like my best option is to change the table prefix in a before_create callback so that the insert happens directly on the underlying partition table, bypassing the insert trigger altogether. This is not a perfect solution, but it seems to be the most performant and the simplest.
The only other solution I can come up with is to add a guid column to each table and re-read the row from the partition table by guid immediately after insert to get the id.
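That guid approach could be sketched like this in plain Ruby (a Hash stands in for the partition table, and all names here are hypothetical -- it only illustrates the tag-then-re-read idea, not a real adapter):

```ruby
require "securerandom"

# Sketch of the guid workaround: tag each row with a client-generated
# unique key before insert, then re-read the row by that key to recover
# the database-assigned id that RETURNING failed to give us.
partition_table = {}
next_id = 0

insert = lambda do |attrs|
  guid = SecureRandom.uuid          # generated client-side, before insert
  next_id += 1                      # stands in for the serial/sequence column
  partition_table[guid] = attrs.merge(id: next_id, guid: guid)
  guid
end

guid = insert.call(name: "example row")
row = partition_table[guid]         # "SELECT * FROM child WHERE guid = ..."
row[:id]                            # => 1
```

The cost is one extra indexed column and one extra SELECT per insert, which is why it feels kludgy compared to RETURNING.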
Any other suggestions are welcome. Thanx -- m
I have an existing Rails app with some tables that already contain data. I did the CRUD operations directly from a PostgreSQL client before using ActiveAdmin.
I don't know whether I missed something in the documentation or this is a bug: ActiveAdmin cannot detect the existing auto-increment id for my table.
If I resubmit the form until the auto-increment id surpasses the existing ids in my table, it works.
The first thing I can think of is that you have passed the id parameter in your permitted params. Please check, and if it is present, remove it.
Secondly, as mentioned in the post, there is already data in the database, so there is a problem with the generated sequence values, since each value can only be used once.
The solution is to set the sequence for your song_artists.id column to the highest value in the table with a query like this:
SELECT setval('song_artist_id_seq', (SELECT max(id) FROM song_artists));
I am assuming that your sequence is named "song_artist_id_seq", your table "song_artists", and your column "id".
To get the sequence name, run the following command:
SELECT pg_get_serial_sequence('tablename', 'columnname');
To reset all Postgres sequences from the Rails console:
ActiveRecord::Base.connection.tables.each do |t|
  ActiveRecord::Base.connection.reset_pk_sequence!(t)
end
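The failure mode behind this can be illustrated without a database. The ToySequence class below is a plain-Ruby stand-in for a Postgres sequence, not a real API; it only mirrors the nextval/setval behaviour:

```ruby
# Toy stand-in for a Postgres sequence: nextval advances and returns
# the counter, setval repositions it.
class ToySequence
  def initialize(value = 0)
    @value = value
  end

  def nextval
    @value += 1
  end

  def setval(v)
    @value = v
  end
end

existing_ids = [1, 2, 3, 4, 5]   # rows inserted outside the app
seq = ToySequence.new            # sequence never advanced: still at 0

seq.nextval                      # => 1, collides with an existing id

# Mirrors SELECT setval('...', (SELECT max(id) FROM ...)):
seq.setval(existing_ids.max)
seq.nextval                      # => 6, safely past the existing rows
```

This is exactly why the form only starts working once the sequence surpasses the ids that were inserted by hand.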
Another solution would be to override the save() method in your SongArtist model to manually set the song_artist id for new records, but this is not advisable.
I have an Account model, and Asset, Capital and Revenue models that all inherit from Account. My Account model has three attributes: name, code and type. When I create an account, I expect two inserts to happen: one in my accounts table and one in the table for its type. For example:
Account.create(name: "test123", code:"test123", type:"Asset")
SQL will run two inserts: one for the Account model and one for the Asset table.
Sunspot works well with this: it reindexes my database and I can find my records by the search params.
But when I update my Account model, SQL runs one insert and one update.
My question is: how can I reindex my model when I update a particular record? I can do Sunspot.reindex, but that loads all the data from my database, which is slow.
SQL will run two inserts: one for the Account model and one for the Asset table.
FYI, you use STI when you want to share the same database table between multiple models because they are similar in attributes and behavior. For example, an AdminUser model is likely to have almost the same attributes/columns as PublisherUser or ReaderUser, so you might wish to have a common table called users, or model User, and share it among the models mentioned above.
The point is: ActiveRecord will run a single SQL query, not two, like:
INSERT INTO "accounts" ("name", "code", "type") VALUES ('test123', 'test123', 'Asset')
My question is: how can I reindex my model when I update a particular record? I can do Sunspot.reindex, but that loads all the data from my database, which is slow.
Actually, sunspot_rails is designed to auto-reindex whenever you make changes to your model/record; it listens to the save callbacks.
But you need to make sure that you are not using methods like update_column(s). See the list of silent create/update methods, which do not trigger callbacks or validations at all.
In addition, you need to understand the concept of batch size in Solr. For performance reasons, new indexes are not committed immediately. Committing means writing the indexes to disk, much like commits in an RDBMS.
By default, the batch_size for commits is 50, meaning only after 50 index method executions are the indexes committed and the records searchable. To change it, use the following:
# in config/initializers/sunspot_config.rb
Sunspot.config.indexing.default_batch_size = 1 # or any number
or
# in models; it's not considered good practice though
after_commit do
  Sunspot.commit
end
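The batching behaviour can be pictured with a small plain-Ruby stand-in (ToyIndexer below is a hypothetical model of the buffering, not Sunspot's API):

```ruby
# Toy model of commit batching: index calls buffer documents, and only
# a commit makes them visible to search.
class ToyIndexer
  attr_reader :searchable

  def initialize(batch_size)
    @batch_size = batch_size
    @pending = []
    @searchable = []
  end

  def index(doc)
    @pending << doc
    commit if @pending.size >= @batch_size   # auto-commit at batch_size
  end

  def commit
    @searchable.concat(@pending)
    @pending.clear
  end
end

indexer = ToyIndexer.new(50)
indexer.index("record 1")
indexer.searchable.size   # => 0 -- indexed but not yet committed
indexer.commit            # what an explicit commit forces
indexer.searchable.size   # => 1
```

This is why a freshly saved record may not show up in search results even though indexing already happened: the commit simply hasn't fired yet.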
For manual reindexing, you can do as #Kathryn suggested.
But I don't think you need to intervene in the automatic operation. I suspect you were simply not seeing immediate results, and that is why you were worried.
According to the documentation, objects will be indexed automatically if you are on Rails. But it also mentions you can reindex a class manually:
Account.reindex
Sunspot.commit
It also suggests using Sunspot.index on individual objects.
I put this in my model:
after_update do
  Sunspot.index Account.where(id: self.id)
end
I have 3 models: movies, movie_tags and movie_tag_counts
It is a classic has_many :through relationship. My use case is that every movie can have multiple tags, and users can vote on tags that were already added.
My problem is that I can't seem to update an existing object in movie_tag_counts:
movie_tag_count = MovieTagCount.first
movie_tag_count.count += 1
movie_tag_count.save
The result is this error message:
TypeError: nil is not a symbol nor a string
My best guess is that the reason is that movie_tag_counts table doesn't have an id column of its own, but I still have no idea how to fix it.
My current workaround is to execute a SQL statement directly.
It turns out my guess was right: ActiveRecord expects an id column. I added it like this:
add_column :movie_tag_counts, :id, :primary_key
and everything worked perfectly. I'm sure there's a way to do it without the id column by overriding some ActiveRecord methods, but I guess having one more column won't hurt that much.
I have posts and organisations in my database. Posts belongs_to organisation and organisation has_many posts.
I have an existing post_id column in my posts table, which I currently increment manually when I create a new post.
How can I add auto increment to that column scoped to the organisation_id?
Currently I use MySQL as my database, but I plan to switch to PostgreSQL, so the solution should ideally work for both :)
Thanks a lot!
#richard-huxton has the correct answer, and it is thread safe.
Use a transaction block with SELECT FOR UPDATE inside it. Here is my Rails implementation: call 'transaction' on a model class to start a transaction block, and 'lock' on the row you want to lock, essentially blocking all other concurrent access to that row, which is what you want for ensuring a unique sequence number.
class OrderFactory
  def self.create_with_seq(order_attributes)
    order_attributes.symbolize_keys!
    raise "merchant_id required" unless order_attributes.has_key?(:merchant_id)
    merchant_id = order_attributes[:merchant_id]
    SequentialNumber.transaction do
      seq = SequentialNumber.lock.where(merchant_id: merchant_id, type: 'SequentialNumberOrder').first
      seq.number += 1
      seq.save!
      order_attributes[:sb_order_seq] = seq.number
      Order.create(order_attributes)
    end
  end
end
We run Sidekiq for background jobs, so I tested this method by creating 1000 background jobs that create orders, using 8 workers with 8 threads each. Without the lock or the transaction block, duplicate sequence numbers occur as expected. With the lock and the transaction block, all sequence numbers appear to be unique.
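The role the row lock plays can be demonstrated in plain Ruby, with a Mutex standing in for SELECT ... FOR UPDATE (an analogy only, not the database mechanism itself):

```ruby
# Many threads incrementing a shared counter, like concurrent workers
# bumping the sequence row. The Mutex plays the part of the row lock:
# only one thread at a time may read-increment-write.
lock = Mutex.new
counter = 0
issued = []

threads = 8.times.map do
  Thread.new do
    125.times do
      lock.synchronize do          # without this, duplicates can appear
        counter += 1
        issued << counter
      end
    end
  end
end
threads.each(&:join)

issued.size        # => 1000
issued.uniq.size   # => 1000 -- every sequence number is unique
```

In Postgres, the same serialization comes from the row lock being held until COMMIT, so the read-increment-write on the sequence row can never interleave.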
OK - I'll be blunt. I can't see the value in this. If you really want it though, this is what you'll have to do.
Firstly, create a table org_max_post (org_id, post_id). Populate it when you add a new organisation (I'd use a database trigger).
Then, when adding a new post you will need to:
BEGIN a transaction
SELECT FOR UPDATE that organisation's row to lock it
Increment the post_id by one, update the row.
Use that value to create your post.
COMMIT the transaction to complete your updates and release locks.
You want all of this to happen within a single transaction of course, and with a lock on the relevant row in org_max_post. You want to make sure that a new post_id gets allocated to one and only one post and also that if the post fails to commit that you don't waste post_id's.
If you want to get clever and reduce the SQL in your application code you can do one of:
Wrap the whole lot above in a custom insert_post() function.
Insert via a view that lacks the post_id and provides it via a rule/trigger.
Add a trigger that overwrites whatever is provided in the post_id column with a correctly updated value.
Deleting a post obviously doesn't affect your org_max_post table, so won't break your numbering.
Prevent any updates to the posts at the database level with a trigger. Check for any changes in the OLD vs NEW post_id and throw an exception if there is one.
Then delete your existing redundant id column in your posts table and use (org_id, post_id) as your primary key. If you're going to this much trouble, you might as well use it as your pkey.
Oh - and post_num or post_index is probably better than post_id since it's not an identifier.
I've no idea how much of this will play nicely with rails I'm afraid - the last time I looked at it, the database handling was ridiculously primitive.
It's good to know how to implement it, but I would prefer to use a gem myself:
https://github.com/austinylin/sequential (based on sequenced)
https://github.com/djreimer/sequenced
https://github.com/felipediesel/auto_increment
First, I must say this is not good practice, but I will focus only on a solution to your problem.
You can always get the organisation's post count in your PostsController:
def create
  post = Post.new(...)
  ...
  post.post_id = Organization.find(organization_id).posts.count + 1
  post.save
  ...
end
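One caveat worth noting: count + 1 is not safe under concurrent requests, because two requests can read the same count before either saves. A minimal deterministic simulation of that race in plain Ruby (all names hypothetical):

```ruby
# Two concurrent "requests" both read the organisation's post count
# before either one saves its new post.
existing_post_ids = [1, 2, 3, 4, 5]      # post_ids already taken

count_seen_by_a = existing_post_ids.size # request A reads count => 5
count_seen_by_b = existing_post_ids.size # request B reads count => 5

post_id_a = count_seen_by_a + 1          # => 6
post_id_b = count_seen_by_b + 1          # => 6 as well: a duplicate
```

This is exactly the race that the SELECT FOR UPDATE approach in the other answer is designed to prevent.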
You should not alter the database yourself. Let ActiveRecord take care of it.
So I've added the following trigger on a table:
INSERT INTO TNQueue (QueuedDate, Action)
VALUES (CURRENT_TIMESTAMP(), 'ManageLoadOrderTypes');
and it doesn't appear to do anything. I have several other, much more complicated triggers on other tables that all work great. They all do this sort of insert into this same table, but generally after checking for changes, deciding whether the record warrants an insert, deciding what data to insert, subquerying the __new and __old tables, etc.
The same trigger exists for both AFTER INSERT and AFTER UPDATE. I've tried with and without the __old/__new tables and memo data.
Any ideas?
When you created the trigger, was the table open by other users (or even your own user)?
If I remember correctly, if the table did not previously have any triggers and was open, new triggers do not take effect until ALL users close the table.
If you don't commit, I believe the change will automatically roll back.