I would like to know how I can delete all data from Active Storage, or even reset Active Storage entirely. Is there any way to do that? Thank you in advance!
NOTE: I'm using Rails 5.2
This question challenged me, so I did some tests on my dummy app with local storage.
I have the usual model User which has_one_attached :avatar
With local storage, files are saved in the /storage folder, under subfolders named with random two-character strings.
Information related to the files is stored in two tables:
ActiveStorage::Attachment
ActiveStorage::Blob
To completely clean the two tables, I ran this in the rails console:
ActiveStorage::Attachment.all.each { |attachment| attachment.purge }
This command deletes:
All records in that table: ActiveStorage::Attachment.any? #=> false
All the blobs: ActiveStorage::Blob.any? #=> false
All the files located under the /storage subfolders; of course, the now-empty subfolders are still there.
Active Storage still works properly afterwards.
I expect the same behaviour for remote storage, given the right privileges.
No doubt ActiveStorage::Attachment.all.each { |attachment| attachment.purge } will purge all records, but it will take a long time if you have lots of files.
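If the table is huge, a gentler variation (just a sketch, not part of the original answer) is to batch the loop and let purge_later hand the actual deletion off to Active Job, which Rails 5.2 supports:
# Sketch: iterate in batches and enqueue the deletions as background jobs
# instead of purging inline.
ActiveStorage::Attachment.find_each(batch_size: 100) do |attachment|
  attachment.purge_later
end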
For the development environment, you can simply remove all the attachment records from the database and remove the files from the storage folder.
Run rails dbconsole and execute the following queries to delete the attachment records:
delete from active_storage_attachments;
If you are storing variants in your database:
delete from active_storage_variant_records;
then finally,
delete from active_storage_blobs;
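If you prefer to stay in the rails console, a rough equivalent of the queries above (a sketch; the variant model only exists on newer Rails versions, hence the guard):
# Order matters, because attachments and variant records reference blobs.
# This only deletes the database rows; the files under /storage still have
# to be removed by hand (e.g. rm -rf storage/*).
ActiveStorage::Attachment.delete_all
ActiveStorage::VariantRecord.delete_all if defined?(ActiveStorage::VariantRecord)
ActiveStorage::Blob.delete_all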
On my local development environment, whenever I perform rails db:reset, I also run rm -rf storage to clear all previously-saved files. Rails will automatically recreate the storage directory the next time a file is uploaded.
I asked about this once already, but I didn't get a working response, and I may have a better way of asking this question.
Long story short, I've deleted some problematic migration files (the problem comes from out-of-order migration files; there is an add_column_to_stocks before a create_stocks file), but for whatever reason, Heroku continues to want to migrate these old, deleted files. I have no idea where these files are being stored.
If I do a heroku db:migrate:status, this is the response:
Status Migration ID Migration Name
--------------------------------------------------
up 20171231042756 Create articles
up 20171231044214 Add description to articles
up 20180116183526 Create users
up 20180116191414 Add user to articles
up 20180116195212 Add password digest to users
up 20180305082108 Create categories
up 20180305090315 Create article categories
down 20180515064500 Add latest price to stocks
down 20180517202216 Add timetables to stock
down 20180517205823 Add updatedtime to stocks
down 20180521021514 Create user stocks
The problems start at the first down file.
My local migration folder looks more like this:
20171231042756 Create articles
20171231044214 Add description to articles
20180116183526 Create users
20180116191414 Add user to articles
20180116195212 Add password digest to users
20180305082108 Create categories
20180305090315 Create article categories
20180515064499 Create stocks.rb
20180521021514 Create user stocks.rb
No matter what changes I make to my local migration files, it continues to want to migrate these problematic files, so I always get back the response:
PG::UndefinedTable: ERROR: relation "stocks" does not exist
: ALTER TABLE "stocks" ADD "latest_price" decimal
I tried getting into the heroku psql console and deleting them manually, but a delete from schema_migrations where version = 20180515064500 brings back a DELETE 0 response, meaning it hasn't deleted anything.
I'm friggen stumped and I've spent about a week and a half beating my head in over this.
Thank you all in advance!! Any help is appreciated.
The file with migration number 20180515064500 should be gone, as it is attempting to modify a table which doesn't exist.
Remove the files which are breaking the migrations:
git rm db/migrate/20180515064500*.rb
and deploy to Heroku.
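In practice the deploy step looks something like this (standard commands; adjust the branch name to your setup):
$ git commit -am "Remove broken migration"
$ git push heroku master
$ heroku run rake db:migrate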
Check whether your migration files declare your Rails version (5.2, for example), especially in the files that are still in the down state:
class CurrencyCreateUsers < ActiveRecord::Migration[5.2]
Then run rake db:migrate.
I have some .json files with data that is automatically updated from time to time. I also have a Ruby on Rails app where I want to insert the information that's in those files.
Right now I'm parsing the JSON and inserting the data into the database in the seeds.rb file, but I want to be able to add more data without having to restart the app, i.e. on the go.
From time to time, I'll check those files and if they have modifications, I want to insert those new items into my database.
What's the best way to do it?
Looks like a job for cron.
Create the code you need in a rake task (a .rake file under lib/tasks):
task :import => :environment do
  Importer.read_json_if_modified # your importer class
end
Then run this at whatever interval you want using your system's cron.
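The Importer class referenced above is hypothetical. A minimal sketch of what it might look like (the Item model, its columns and the JSON path are all assumptions, and it presumes a reasonably recent Rails):
# lib/importer.rb -- hypothetical sketch; adjust the model, columns and path
require 'json'

class Importer
  DATA_FILE = Rails.root.join('data', 'items.json')

  def self.read_json_if_modified
    # Cheap change check: skip the import when the file is older than the
    # most recently imported record.
    latest = Item.maximum(:updated_at)
    return if latest && File.mtime(DATA_FILE) <= latest

    JSON.parse(File.read(DATA_FILE)).each do |attrs|
      # Upsert so re-running the task never duplicates rows.
      item = Item.find_or_initialize_by(external_id: attrs['id'])
      item.update!(name: attrs['name'], price: attrs['price'])
    end
  end
end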
Can Rails 3.1 engines have their own databases and at the same time also have access to the database of the main app, for example for user authentication?
How can I configure this, if it's possible?
Thanks!
Yes, they can. I have built engines that use a separate sqlite3 database. This way all the engine's functionality and data is isolated. Remove the engine, remove the database, and everything is gone without leaving a trace.
First of all, it's preferable to generate a mountable engine. This creates a namespace and isolates the engine from your main app. It's not a requirement, but it is a best practice, and I assume you've done so in the examples that follow.
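Assuming Rails 3.1, generating one looks like this (the engine name is just an example):
$ rails plugin new your_engine --mountable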
At one point you are going to generate a model inside your engine. In the engine root path, type something like this:
$ rails generate resource Post
This will generate the Post controller, model and route. Everything's perfect except for the database migration, which you're going to delete. This migration is useless if you want to keep your data separate: the only purpose of migrations inside engines is to be copied over to the main app and run against its database. So go ahead and get rid of it:
$ rm -r db
Now hook up your root route and controller, like usual.
There's one more change to make inside the model, to make it connect to a separate database.
module YourEngine
  class Post < ActiveRecord::Base
    establish_connection :adapter => 'sqlite3', :database => 'db/your_engine.sqlite3'
  end
end
This way the engine's model will not use the main database, but the one you define. The key thing to understand is that the database file does not live inside the engine! It lives inside the host application. Since you are keeping everything separate, you must create this database by hand. Using the sqlite3 command-line tool and a hand-crafted create statement is quickest:
$ cd "the root dir of the host rails app"
$ sqlite3 db/your_engine.sqlite3
from where you create the table:
CREATE TABLE your_engine_posts (id INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL, name varchar(255) NOT NULL DEFAULT '', body text, created_at DATETIME NOT NULL, updated_at DATETIME NOT NULL);
Presto! Now it's just a matter of mounting the engine inside your app and booting it, and it should all be ready to roll. Obviously, now that your engine has a separate database, it's no use working with migrations. You will have to update the schema by hand.
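For reference, mounting is a one-liner in the host app's config/routes.rb (the mount path is up to you):
# config/routes.rb of the host application
mount YourEngine::Engine, :at => "/your_engine"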
If you're worried about table names clashing with the app you can use the 'isolate_namespace' method. This will prefix all your table names with the namespace of your Engine.
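That is a one-line addition in the engine class, typically lib/your_engine/engine.rb (engine name assumed):
module YourEngine
  class Engine < ::Rails::Engine
    isolate_namespace YourEngine
  end
end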
RailsCasts just had a good tutorial which uses this; you should check it out.
http://railscasts.com/episodes/277-mountable-engines
Yes, they can.
I wrote a guide on this here:
http://railsforum.com/viewtopic.php?id=42143
I need to populate particular tables in my production app's database with data, before anyone ever even touches the application. This data would also be required in development mode, since it's needed for testing against. Fixtures are normally the way to go for test data, but what's the Ruby on Rails "best practice" for shipping this data to the live database upon db creation as well?
Ultimately this is a two-part question, I suppose.
1) What's the best way to load test data into my database for development? This will be roughly 1,000 items. Is it through a migration or through fixtures? The reason this is a different answer than the question below is that in development there are certain fields in the tables that I'd like to make random; in production, these fields would all start with the same value of 0.
2) What's the best way to bootstrap a production db with the live data I need in it? Is this also through a migration or a fixture?
I think the answer is to seed as described here: http://lptf.blogspot.com/2009/09/seed-data-in-rails-234.html but I need a way to seed for development and seed for production. Also, why bother using Fixtures if seeding is available? When does one seed and when does one use fixtures?
Usually fixtures are used to provide your tests with data, not to populate data into your database. You can - and some people have, like the links you point to - use fixtures for this purpose.
Fixtures are OK, but using Ruby gives us some advantages: for example, being able to read from a CSV file and populate records based on that data set. Or reading from a YAML fixture file if you really want to: since you're starting with a programming language, your options are wide open from there.
My current team tried using db/seeds.rb and checking RAILS_ENV to load only certain data in certain environments.
The annoying thing about db:seed is that it's meant to be a one-shot thing: if you have additional items to add in the middle of development, or once your app has hit production, you need to take that into consideration (ActiveRecord's find_or_create_by...() method might be your friend here).
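For example, here is a rough sketch of a db/seeds.rb that stays idempotent across repeated runs and handles the dev-random/production-zero case from the question (the Product model and its attributes are assumptions, not from the question):
# db/seeds.rb -- sketch only; Product, code and popularity are made up
1000.times do |i|
  # the dynamic finder keeps rake db:seed from duplicating rows on re-runs
  product = Product.find_or_initialize_by_code("ITEM-#{i}")
  # randomized in development, fixed at 0 in production
  product.popularity = Rails.env.production? ? 0 : rand(100)
  product.save!
end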
We tried the Bootstrapper plugin, which puts a nice DSL over the RAILS_ENV checking and lets you run only the environment you want. It's pretty nice.
Our needs actually went beyond that - we found we needed database style migrations for our seed data. Right now we are putting normal Ruby scripts into a folder (db/bootstrapdata/) and running these scripts with Arild Shirazi's required gem to load (and thus run) the scripts in this directory.
Now this only gives you part of the database style migrations. It's not hard to go from this to creating something where these data migrations can only be run once (like database migrations).
Your needs might stop at Bootstrapper: we have pretty unique needs (developing the system when we only know half the spec, a large-ish Rails team, a big data migration from the previous generation of software); your needs might be simpler.
If you did want to use fixtures, the advantage over seed data is that you can also easily export.
A quick guess at how the rake tasks may look is as follows:
desc 'Export the data objects to Fixtures from data in an existing database. Defaults to development database. Set RAILS_ENV to override.'
task :export => :environment do
  sql = "SELECT * FROM %s"
  skip_tables = ["schema_info"]
  export_tables = [
    "roles",
    "roles_users",
    "roles_utilities",
    "user_filters",
    "users",
    "utilities"
  ]
  time_now = Time.now.strftime("%Y_%h_%d_%H%M")
  folder = "#{RAILS_ROOT}/db/fixtures/#{time_now}/"
  FileUtils.mkdir_p folder
  puts "Exporting data to #{folder}"

  ActiveRecord::Base.establish_connection(:development)
  export_tables.each do |table_name|
    i = "000"
    File.open("#{folder}/#{table_name}.yml", 'w') do |file|
      data = ActiveRecord::Base.connection.select_all(sql % table_name)
      file.write data.inject({}) { |hash, record|
        hash["#{table_name}_#{i.succ!}"] = record
        hash
      }.to_yaml
    end
  end
end
desc "Import the models that have YAML files in
db/fixture/defaults or from a specified path."
task :import do
location = 'db/fixtures/default'
puts ""
puts "enter import path [#{location}]"
location_in = STDIN.gets.chomp
location = location_in unless location_in.blank?
ENV['FIXTURES_PATH'] = location
puts "Importing data from #{ENV['FIXTURES_PATH']}"
Rake::Task["db:fixtures:load"].invoke
end
I have a migration in Rails that inserts a record into the database. The Category model depends on this record. Since RSpec clears the database before each example, this record is lost and furthermore never seems to be created since RSpec does not seem to generate the database from migrations. What is the best way to create/recreate this record in the database? Would it be using before(:all)?
It's not that RSpec clears the database; it's that Rails' rake db:test:prepare task copies the schema (but not the contents) of your dev database into your *_test db.
Yes, you can use before(:all), as transactions are wrapped around each individual example - but a simple fixture file would also do the same job.
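For the before(:all) route, here is a rough sketch (model and attribute assumed; note that records created there live outside the per-example transaction, so clean them up yourself):
describe Category do
  before(:all) do
    # created once, outside the per-example transaction
    Category.create!(:name => "Default")
  end

  after(:all) do
    # before(:all) data is not rolled back, so remove it explicitly
    Category.delete_all
  end

  it "can rely on the seeded record" do
    Category.find_by_name("Default").should_not be_nil
  end
end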
(There's a more complicated general solution to this issue: moving to a service-oriented architecture, where your 'dev' and 'test' services are completely separate instances. You can then point your test db config at the development database of your test service, disable rake db:test:prepare, and build your test service from migrations as you regenerate it. Then you can test your migrations and data transformations.)
What I like to do is create a folder in db/migrate called data and put YAML fixtures in there, in your case categories.yml.
Then I create a migration with the following
def self.up
  down
  directory = File.join(File.dirname(__FILE__), "data")
  Fixtures.create_fixtures(directory, "categories")
end

def self.down
  Category.delete_all
end