I upgraded Active Storage to store variant records in the database using
config.active_storage.track_variants = true
I duplicated some items with images and it caused broken variants
I am generating variants like
item.image.variant(resize_to_limit: [800, nil]).processed
Due to these broken variants/images I want to delete the variant records from the database while keeping the original images, and then recreate the variants.
How can I remove only the variants?
To delete all variant records, you can call:
ActiveStorage::VariantRecord.delete_all
or
ActiveStorage::VariantRecord.destroy_all
To remove the variants of a specific item's image:
ActiveStorage::VariantRecord.where(blob_id: item.image.blob.id).delete_all
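For a fuller cleanup, here is a hedged sketch of the whole cycle: drop the broken variant records for one attachment, then regenerate. Using destroy_all instead of delete_all should also purge the processed variant files from storage, since each variant record has its processed image attached, whereas delete_all only removes the database rows:
ActiveStorage::VariantRecord.where(blob_id: item.image.blob.id).destroy_all
# Recreate the variant; .processed re-runs the transformation and stores a fresh file.
item.image.variant(resize_to_limit: [800, nil]).processed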
Here is the problem:
I have a Ruby on Rails project with a table of almost 100k rows, and a binary column whose content I want to change.
So I must iterate over those 100k rows, change the data in that column for each row, and save it back to the database.
But I must keep track of the changes, because the process could fail, and I need some way to restart the data change from where I stopped.
Here is how I thought of doing it:
Create a migration to add a MigrationTrack table that tracks all records that have been migrated.
Create a model for the table above.
Create a rake task that grabs all 100k rows from TableToUpdate and iterates over them, saving the data back to each row and recording its ID in MigrationTrack. Add a join between TableToUpdate and MigrationTrack to filter only the IDs that haven't been updated yet.
After the above migration has finished, create another migration to drop the MigrationTrack table and remove its model.
Is there any other "Rails way" to do that? Has anyone done such a change?
Thanks
I would do it like this:
Add and deploy a migration adding a new column with the desired data type to the database table.
Add code to your model that saves the value from the old column into the new column too (see the sketch after these steps).
Run a rake task or a simple one-liner in the console that touches all records, to make sure the code introduced in the previous step ran on each record.
After this step, you can manually verify that all records in the database have both columns set as expected.
Switch the code to use the new attribute instead of the old one.
Drop the old column.
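A minimal sketch of the model hook and the touch step, assuming a hypothetical Item model migrating an old binary column raw_data into a new column transformed_data (transform is a placeholder for whatever change you need to apply):

class Item < ApplicationRecord
  before_save :copy_raw_data_forward

  private

  def copy_raw_data_forward
    self.transformed_data = transform(raw_data)
  end
end

# Console one-liner: find_each loads records in batches of 1000; saving
# each record fires the before_save hook above and writes the new column.
Item.find_each(&:save!)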
For simple cases, first run a simple SELECT to check how the cast will turn out. For example, if your migration is
change_column :table, :boolean_field, 'integer USING CASE WHEN boolean_field THEN 1 ELSE 0 END'
then try a plain SELECT query with the same cast first. If you need more safety, you can define 'up' and 'down' methods in your migration: in 'up' you can create a backup table, and in 'down' you can restore the values from it.
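A hedged sketch of that up/down shape on PostgreSQL; table and column names are placeholders, and the migration version should match your Rails:

class ChangeBooleanFieldToInteger < ActiveRecord::Migration[5.2]
  def up
    # Keep a backup so the values can be restored if the cast goes wrong.
    execute "CREATE TABLE items_backup AS SELECT id, boolean_field FROM items"
    change_column :items, :boolean_field,
      'integer USING CASE WHEN boolean_field THEN 1 ELSE 0 END'
  end

  def down
    change_column :items, :boolean_field, 'boolean USING boolean_field = 1'
    drop_table :items_backup
  end
end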
I'm trying to lock one of the records in my database when seeding (using an API). By locking I mean not being able to create topics under a certain movie, or just disabling the 'show' method.
It'll be simpler if I just show you my seeds.rb file:
require 'open-uri'

@doc = Nokogiri::XML(open("http://www.kinoballada.info/repertuar/export/small/dzien/xml"))

movie_array = []

@doc.css('dzien').each do |node|
  children = node.children
  movie_array << children.css('tytul').inner_text
  Movie.find_or_create_by(
    :name => children.css('tytul').inner_text
  )
end

movies_to_delete = Movie.where.not(name: movie_array)
movies_to_delete.destroy_all
The last two lines are essential: I want to LOCK the movie, not destroy it, with something like:
movies_to_lock = Movie.where.not(name: movie_array)
movies_to_lock.??????_all
Is there any way I can do this?
The easiest way to do what you want is to issue a SELECT ... FOR UPDATE command. Once you select a set of rows, they'll be locked against other threads until you do something that causes the database to release the lock.
Not every RDBMS supports the command, and some older databases will lock the entire table when you select from one of its rows. You'll probably want an RDBMS-agnostic solution on the application side that avoids such problems, preserving the freedom to switch databases in the future without additional worries.
Consider adding a boolean column called locked to your table, which you can check before allowing a row to be included in any result set. This approach comes at minimal expense while avoiding database-specific problems.
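A minimal sketch of that approach; the column name is an assumption, and update_all fills in the ??????_all blank from the question:

class AddLockedToMovies < ActiveRecord::Migration[5.2]
  def change
    add_column :movies, :locked, :boolean, default: false, null: false
  end
end

# In seeds.rb: flag the movies instead of destroying them.
movies_to_lock = Movie.where.not(name: movie_array)
movies_to_lock.update_all(locked: true)  # a single UPDATE, skipping callbacks

# Elsewhere, filter locked rows out of result sets:
Movie.where(locked: false)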
for i, name in ipairs(redis.call('KEYS', 'cache:user_transaction_logs:*:8866666')) do redis.call('DEL', name) end
How can I optimise this Redis query?
We are using Redis as the cache store in Rails. Whenever a user makes a successful transaction, the receiver's and initiator's transaction history is expired from Redis.
The query cannot be optimized; it should be replaced in its entirety, because the use of KEYS is discouraged for anything other than debugging on non-production environments.
A preferable approach, instead of fetching the relevant key names ad hoc, is to manage them in a data structure (e.g. a Set or a List) and read from it when you perform the deletions.
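A hedged sketch of the Set-based approach with the redis-rb gem; the key names and helper methods are assumptions:

require 'redis'

redis = Redis.new

# When caching a log entry, record its key in a per-user index Set.
def cache_log(redis, user_id, log_key, payload)
  redis.set(log_key, payload)
  redis.sadd("cache:user_transaction_log_keys:#{user_id}", log_key)
end

# On a successful transaction, delete exactly the keys in the index,
# with no KEYS scan.
def expire_logs(redis, user_id)
  index = "cache:user_transaction_log_keys:#{user_id}"
  keys = redis.smembers(index)
  redis.del(*keys) unless keys.empty?
  redis.del(index)
end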
You need to change the approach for how you are storing cache entries for your users.
Your keys should look something like cache:user_transaction_logs:{user_id}.
Then you will be able to delete the entry by its key (the user_id).
If you need several cache entries per user_id, use Redis hashes (https://redis.io/commands#hash); then you will again be able to delete all entries per user_id with a single DEL command, or a single entry with HDEL.
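For illustration, the hash layout could look like this (the field names are assumed):

key = "cache:user_transaction_logs:#{user_id}"
redis.hset(key, log_id, payload)  # add one entry for the user
redis.hdel(key, log_id)           # remove a single entry
redis.del(key)                    # wipe all of the user's entries at once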
It is also a good idea to use Redis database numbers (0 is the default; 1-15 are available) and put separate functionalities on separate database numbers. Then, if you need to wipe the cache of a whole functionality, you can do it with the single command FLUSHDB.
I have a model "Entry" that has many items "Items" as embedded documents:
class Entry
  embeds_many :items, cascade_callbacks: true
  ...
end
The issue is I have to move a bunch of embedded Item documents around: deleting some, adding others, and moving others between Entries. It seems like any operation I do on Entry.items, like:
entry.items << item or entry.items.delete(i)
causes its own database write. And if I'm making many changes, that seems very expensive. Is there a way to tell Mongoid to let me add items, remove them, and move them around locally, and only when everything is done send a single entry.save! write to the database?
Replacing the items array by doing:
entry.items = new_items
is the most database-efficient approach. But it turns out to be buggy: make sure you have the latest version of Mongoid, and call entry.save if entry.changed? || entry.new_record?, because it occasionally won't save the entry above when you modify the items.
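A minimal sketch of that pattern; the selection logic is hypothetical:

# Build the desired list entirely in memory first.
new_items = entry.items.reject { |item| item.name == "obsolete" }
new_items << Item.new(name: "replacement")

# Assign once, then persist everything with a single write.
entry.items = new_items
entry.save! if entry.changed? || entry.new_record?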
I am using AR-Extensions to import a large number of objects to the DB, but synchronizing them back from the DB just isn't working.
My code:
posts = [Post.new(:name=>"kuku1"), Post.new(:name=>"kuku2"), ...]
Post.import posts, :synchronize=>posts
The posts are submitted to the DB, and each one is automatically allocated a primary key (id). But when I afterwards check the objects in the posts array, I see that they don't have the id field set, and the new_record flag is still true.
I also tried adding :reload=>true, but that doesn't help either.
Any idea why the synchronization doesn't work?
This is not possible right now with new records. As of ar-extensions 0.9.3 this will not work when synchronizing new records, because synchronizing expects the records you're syncing to already exist. It uses the primary key under the covers to determine what to load (but with new records the primary key is nil). This limitation* also exists in activerecord-import 0.2.5. If you can synchronize on other conditions, I'd be happy to release a new gem allowing conditions to be passed in. For Rails 3.x you need to use activerecord-import (it replaces ar-extensions). Please create a ticket/issue on GitHub: https://github.com/zdennis/activerecord-import/issues
For Rails 2.x you still want to use ar-extensions, and I'd likely backport the activerecord-import update and push out a new gem as well. If you'd like this functionality there, please create a ticket/issue on GitHub: https://github.com/zdennis/ar-extensions/
Patches are welcome as well.
*The limitation here is a database constraint: it's impossible to get the IDs of all newly created records after a single insert/import without doing something strange like table locking, which I don't think is a good solution to that problem. If anyone has ideas, I'm all ears.
UPDATE
activerecord-import 0.2.6 and ar-extensions 0.9.4 have been released and include support for specifying the fields you want to synchronize on. Those fields should be unique. See http://www.continuousthinking.com/2011/4/6/activerecord-import-0-2-6-and-ar-extensions-0-9-4
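Going by those release notes, the call would look something like the sketch below; the :synchronize_keys option name is an assumption, so check the gem's documentation for the exact API:

posts = [Post.new(:name => "kuku1"), Post.new(:name => "kuku2")]

# Synchronize on the unique :name column instead of the (still nil) primary key.
Post.import posts, :synchronize => posts, :synchronize_keys => [:name]

posts.first.id          # should now be populated from the database
posts.first.new_record? # => false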