Rails 4: JSON backup back to Postgresql DB - ruby-on-rails

I have an issue re-creating my app DB from JSON backup file.
What I am trying to do now is to create all the records from a rake task that reads lines from the file and builds records from the parsed JSON like:
new_record = key.camelize.constantize.new(hs['key'])
new_record.save
The problem is that I cannot skip callbacks for all the models, so I can be sure I am not creating any duplicate records there. key.camelize.constantize.skip_callback(:after_create) is just not working; it gives me the error undefined method '_after_create_callbacks' for #<Class:.
So two questions here:
1) Is there any way to skip AR callbacks other way?
2) Are there any other options for re-creating db from JSON except SQL queries?

1. Skipping callbacks
You can ask AR to skip a specific callback when saving a record like this:
new_record.class.skip_callback(:create, :after, :specific_callback)
new_record.save
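For example, here is a minimal, hedged sketch of disabling one named after_create callback around a bulk restore and re-enabling it afterwards (the Movie model, backup_path and the :notify_subscribers callback are hypothetical names, not from the question):
parsed = JSON.parse(File.read(backup_path))            # backup_path is a placeholder
Movie.skip_callback(:create, :after, :notify_subscribers)   # hypothetical callback name
begin
  parsed.each do |attrs|
    Movie.new(attrs).save!
  end
ensure
  # Re-enable the callback so normal application behaviour is restored.
  Movie.set_callback(:create, :after, :notify_subscribers)
end
Note that skip_callback changes the class itself, so it affects every save until you call set_callback again.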
2. Importing JSON into PSQL
Since your data is in JSON, there is no magical way of importing it into your DB. But Postgres lets you import it fairly easily using SQL, as shown in this SO answer. Also, many others have written utilities to make your life easier.

Related

How to fill model(database) in ruby on rails with json file?

I have a JSON file with a lot of movies in it. I want to create a model 'Movie' and fill it with all the movies from that JSON file. How do I do it? I know that I can parse the JSON file into a hash, but that's not the thing I am looking for.
The correct term you're looking for is "seeding"!
You're going to need a database however, and a migration to create the database along with the associated movies table. (There are plenty of guides on how to do this, along with the official documentation).
After that's done, you'll need to "seed" your database with the data in your json file.
In the seeds.rb file, assuming that the JSON file is an array of Movies in JSON form, you should be able to loop over every Movie JSON object and insert it into your database.
To add to docaholic's helpful response, here are some steps/pseudo-code that may help.
Assuming you're using a SQL database and need to create a model:
# creates a migration file.
rails generate migration create_movies title:string duration_in_minutes:integer # ...or whatever fields you have
# edit the file to add other fields/ensure it has what you want.
rake db:migrate
Write a script to populate your database. There are many patterns for this (rake task, test fixtures, etc) and which one you'd want to use would depend on what you need (whether it's for testing, for production environment, as seed data for new environments, etc).
But generally what the code would look like is:
text_from_file = File.read(file_path)
JSON.parse(text_from_file).each do |json_movie_object|
  # JSON.parse returns string keys by default, so index with strings here.
  Movie.create!(title: json_movie_object['title'], other_attribute: json_movie_object['other_attribute'])
  # if the json attributes exactly match the column names, you can do
  # Movie.create!(json_movie_object)
end
This is not the most performant option for large amounts of data. For large files you can use insert_all for much greater efficiency, but it bypasses ActiveRecord validations and callbacks, so you'd want to understand what that means.
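As a rough sketch (assuming Rails 6+ for insert_all, and that the parsed hashes' keys already match the movies table's columns):
rows = JSON.parse(File.read(file_path))
# insert_all writes rows in bulk and skips ActiveRecord validations and callbacks,
# so the data needs to be clean before it gets here.
rows.each_slice(1000) do |batch|
  Movie.insert_all(batch)
end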
In my case, I needed to seed around 200 records for production from a JSON file, so I inserted the data from the JSON file into the database.
So at first I created the table with a migration in my Rails project:
rails generate migration create_movies title:string duration_in_minutes:integer
# or whatever fields you have
# edit the file to add other fields/ensure it has what you want.
rake db:migrate
Now it's time to seed the data!
Suppose your movies.json file has:
[
  {"id":"1", "name":"Titanic", "time":"120"},
  {"id":"2", "name":"Ruby tutorials", "time":"120"},
  {"id":"3", "name":"Something special", "time":"500"},
  {"id":"4", "name":"2HAAS", "time":"320"}
]
Now, just like these 4 entries, imagine your JSON file has 400+ records to input, which would be a nightmare to write into seed files manually.
You need the JSON library to parse the file (it ships with Ruby's standard library; if it is somehow missing from your bundle, run bundle add json).
In the db/seeds.rb file, add these lines to load those 400+ records into your database:
infos_from_json = File.read('your/JSON/file/path/movies.json')
JSON.parse(infos_from_json).each do |t|
  Movie.create!(title: t['name'], duration_in_minutes: t['time'])
end
Here:
infos_from_json reads the JSON file from the given path into a string.
JSON.parse turns that string into an array of hashes, and the each loop walks over it, with |t| holding the current record on every pass.
Movie.create!(title: t['name'], duration_in_minutes: t['time']) inserts each record into the database.
This is how we can easily add data to our database from a JSON file.
For more info about seeding data into the database, check out the documentation.
THANKS
Edit: as noted above, this uses the JSON library; if it is missing, just run bundle add json.

How can I add a Friendly-id slug through seed.rb?

I am adding thousands of entries to my database through seeds.rb and a CSV file. In order to do so I am using Fast_Seeder:
FastSeeder.seed_csv!(Artist, "artist_sample.csv", :name, :sort_name)
I use Friendly-id, and as it is now, it is not creating a slug because I am not feeding it through the file.
How do I go about creating it without having to change the file manually?
Any help would be much appreciated!
I'm not familiar with Fast_Seeder, but I am with friendly-id. Their docs say the following:
# If you're adding FriendlyId to an existing app and need
# to generate slugs for existing users, do this from the
# console, runner, or add a Rake task:
User.find_each(&:save)
It would obviously be more efficient to get friendly-id playing nicely in the first place, but barring that you can add that as the next line. You basically just need to trigger the save/validation cycle so the slug gets generated. I bet .find_each(&:valid?) might work too. This leads me to wonder whether FastSeeder is creating these records without hitting your validations.
EDIT: Yup, I just dug through their source and they are creating straight through the database. You'll probably need to go the route I outlined above.
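For example, a minimal sketch of what the seeding step could look like with a slug backfill tacked on (assuming friendly-id is already configured on Artist):
FastSeeder.seed_csv!(Artist, "artist_sample.csv", :name, :sort_name)

# FastSeeder inserts straight into the database, bypassing callbacks,
# so re-save each record to let friendly_id populate the slug column.
Artist.find_each(&:save)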

Rails - how to strip forward slash characters from data in database?

I have an app that I've converted over from another cms. The old URLs were being stored in the database like so:
/this-is-an-old-permalink/
And I need them to be like this:
this-is-an-old-permalink
Note the absence of forward slashes. What is the easiest way to go about removing them?
I'm not necessarily looking for the exact code to do it (although that'd be nice!) -- I'm asking also as a Rails newb: What is the best method to go about doing things like this? I've only really worked with Rails in setting up a model, controller, views and outputting data. I haven't had to do any processing like this. Would it go in the model? Any help is appreciated!
edit
Do I need to get all records, loop through them, do regex on that one field and then save it?
Since you're likely only going to write this once, your best bet is to create a script for it within lib, or to write a migration for it. I recommend the latter, because it will then be executed automatically with rake db:migrate if you restore from your old backup at a later date. You can then use all your standard Model processing tricks (like you would use on a Controller) within the migration without exposing the substitution code to a Controller.
EDIT:
You can add the following to a new file within lib/tasks to create a new rake task for this called db:substitute_slashes:
namespace :db do
desc "Remove slashes from old-style URLs"
task :substitute_slashes => :environment do
Modelname.find(:all).each do |obj|
obj.fieldname.gsub!(/regex here/,'')
obj.save!
end
end
end
The exclamation mark on the end of save! means it will throw an exception if the resulting object fails validation, which is a good thing to check for in this case.
You would then be able to execute this with the command rake db:substitute_slashes.
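If you prefer the migration route instead, a rough sketch might look like this (Page and permalink are hypothetical names; swap in your actual model and column):
class StripSlashesFromPermalinks < ActiveRecord::Migration
  def self.up
    Page.reset_column_information
    Page.find_each do |page|
      # Remove leading and trailing forward slashes only.
      page.update_attribute(:permalink, page.permalink.gsub(%r{\A/|/\z}, ''))
    end
  end

  def self.down
    # Not reversible: the original slashes are gone.
  end
end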

Ruby on Rails 2.3.5: Populating my prod and devel database with data (migration or fixture?)

I need to populate my production database app with data in particular tables. This is before anyone ever even touches the application. This data would also be required in development mode as it's required for testing against. Fixtures are normally the way to go for testing data, but what's the "best practice" for Ruby on Rails to ship this data to the live database also upon db creation?
Ultimately this is a two-part question, I suppose.
1) What's the best way to load test data into my database for development? This will be roughly 1,000 items. Is it through a migration or through fixtures? The reason this is a different answer from the question below is that in development there are certain fields in the tables that I'd like to make random. In production, these fields would all start with the same value of 0.
2) What's the best way to bootstrap a production DB with the live data I need in it? Is this also through a migration or a fixture?
I think the answer is to seed as described here: http://lptf.blogspot.com/2009/09/seed-data-in-rails-234.html but I need a way to seed for development and seed for production. Also, why bother using Fixtures if seeding is available? When does one seed and when does one use fixtures?
Usually fixtures are used to provide your tests with data, not to populate data into your database. You can - and some people have, like the links you point to - use fixtures for this purpose.
Fixtures are OK, but using Ruby gives us some advantages: for example, being able to read from a CSV file and populate records based on that data set. Or reading from a YAML fixture file if you really want to: since you're starting with a programming language, your options are wide open from there.
My current team tried using db/seeds.rb, checking RAILS_ENV to load only certain data in certain places.
The annoying thing about db:seed is that it's meant to be a one-shot thing: if you have additional items to add in the middle of development, or once your app has hit production, well, you need to take that into consideration (ActiveRecord's find_or_create_by...() methods might be your friend here).
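A minimal sketch of an idempotent, environment-aware db/seeds.rb under those constraints (the role names and the User attributes are hypothetical, not from the question):
# db/seeds.rb -- safe to re-run as new seed data is added
%w[admin editor viewer].each do |role_name|     # hypothetical roles
  Role.find_or_create_by_name(role_name)        # Rails 2.3-style dynamic finder
end

if RAILS_ENV == 'development'
  # development-only sample data with fields you can randomise as needed
  20.times { |i| User.find_or_create_by_login("dev_user_#{i}") }
end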
We tried the Bootstrapper plugin, which puts a nice DSL over the RAILS_ENV checking and lets you run only the environment you want. It's pretty nice.
Our needs actually went beyond that: we found we needed database-style migrations for our seed data. Right now we are putting normal Ruby scripts into a folder (db/bootstrapdata/) and using Arild Shirazi's required gem to load (and thus run) the scripts in this directory.
Now this only gives you part of the database style migrations. It's not hard to go from this to creating something where these data migrations can only be run once (like database migrations).
Your needs might stop at Bootstrapper: we have pretty unique needs (developing the system when we only know half the spec, a large-ish Rails team, and a big data migration from the previous generation of software); your needs might be simpler.
If you did want to use fixtures, the advantage over seeding is that you can also easily export your existing data.
A quick guess at how the rake tasks might look is as follows:
desc 'Export the data objects to Fixtures from data in an existing database. Defaults to development database. Set RAILS_ENV to override.'
task :export => :environment do
  sql = "SELECT * FROM %s"
  skip_tables = ["schema_info"]
  export_tables = [
    "roles",
    "roles_users",
    "roles_utilities",
    "user_filters",
    "users",
    "utilities"
  ]
  time_now = Time.now.strftime("%Y_%h_%d_%H%M")
  folder = "#{RAILS_ROOT}/db/fixtures/#{time_now}/"
  FileUtils.mkdir_p folder
  puts "Exporting data to #{folder}"
  ActiveRecord::Base.establish_connection(:development)

  export_tables.each do |table_name|
    i = "000"
    File.open("#{folder}/#{table_name}.yml", 'w') do |file|
      data = ActiveRecord::Base.connection.select_all(sql % table_name)
      file.write data.inject({}) { |hash, record|
        hash["#{table_name}_#{i.succ!}"] = record
        hash
      }.to_yaml
    end
  end
end
desc "Import the models that have YAML files in
db/fixture/defaults or from a specified path."
task :import do
location = 'db/fixtures/default'
puts ""
puts "enter import path [#{location}]"
location_in = STDIN.gets.chomp
location = location_in unless location_in.blank?
ENV['FIXTURES_PATH'] = location
puts "Importing data from #{ENV['FIXTURES_PATH']}"
Rake::Task["db:fixtures:load"].invoke
end

Ruby on Rails Migration - Create New Database Schema

I have a migration that runs an SQL script to create a new Postgres schema. When creating a new database in Postgres, by default it creates a schema called 'public', which is the main schema we use. The migration to create the new database schema seems to be working fine. However, the problem occurs after the migration has run: when Rails tries to update the 'schema_info' table that it relies on, it says that the table does not exist, as if it is looking for it in the new database schema and not in the default 'public' schema where the table actually is.
Does anybody know how I can tell rails to look at the 'public' schema for this table?
Example of SQL being executed:
CREATE SCHEMA new_schema;
COMMENT ON SCHEMA new_schema IS 'this is the new Postgres database schema to sit along side the "public" schema';
-- various tables, triggers and functions created in new_schema
Error being thrown:
RuntimeError: ERROR C42P01 Mrelation "schema_info" does not exist
L221 RRangeVarGetRelid: UPDATE schema_info SET version = ??
Thanks for your help
Chris Knight
Well, that depends on what your migration looks like, what your database.yml looks like, and what exactly you are trying to attempt. More information is needed: change the names if you have to, and post an example database.yml and the migration. Does the migration change the search_path for the adapter, for example?
But know that, in general, Rails and PostgreSQL schemas don't work well together (yet?).
There are a few places which have problems. Try to build an app that uses only one PostgreSQL database with 2 non-default schemas, one for dev and one for test, and tell me about it. (From the following I can already tell you that you will get burned.)
Maybe it has been fixed since the last time I played with it, but look at http://rails.lighthouseapp.com/projects/8994/tickets/390-postgres-adapter-quotes-table-name-breaks-when-non-default-schema-is-used, or this: http://rails.lighthouseapp.com/projects/8994/tickets/918-postgresql-tables-not-generating-correct-schema-list, or this in postgresql_adapter.rb:
# Drops a PostgreSQL database
#
# Example:
#   drop_database 'matt_development'
def drop_database(name) #:nodoc:
  execute "DROP DATABASE IF EXISTS #{name}"
end
(Yes, this is wrong if you use the same database with different schemas for both dev and test: it would drop the database shared by both environments each time you run the unit tests!)
I actually started writing patches. The first one was for the index methods in the adapter, which didn't care about the search_path and ended up creating duplicated indexes in some conditions. Then I started getting hurt by the rest and ended up abandoning the idea of using schemas: I wanted to get my app done, and I didn't have the extra time needed to fix the problems I had using schemas.
I'm not sure I understand what you're asking exactly, but rake will be expecting to write the Rails schema version into the schema_info table. Check your database.yml config file; this is where rake will be looking to find the table to update.
Is it a possibility that you are migrating to a new Postgres schema and rake is still pointing to the old one? I'm not sure then that a standard Rails migration is what you need. It might be best to create your own rake task instead.
Edit: If you're referencing two different databases or Postgres schemas, Rails doesn't support this in standard migrations. Rails assumes one database, so migrating from one database to another is usually not possible. When you run "rake db:migrate" it actually looks at the RAILS_ENV environment variable to find the correct entry in database.yml. If rake starts the migration using the "development" environment and database config from database.yml, it will expect to update that environment's database at the end of the migration.
So, you'll probably need to do this from outside the Rails stack as you can't reference two databases at the same time within Rails. There are attempts at plugins to allow this, but they're majorly hacky and don't work properly.
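That said, if the only symptom is that schema_info can no longer be found after the new schema is created, one hedged thing to try is keeping 'public' on the connection's search path; schema_search_path is the PostgreSQL adapter's accessor for it:
# e.g. from script/console or inside the migration, before the version update runs
ActiveRecord::Base.connection.schema_search_path = "new_schema, public"
# roughly equivalent to running: SET search_path TO new_schema, public
This is a sketch of a workaround, not a guaranteed fix for the version-update error described above.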
You can use pg_power. It provides an additional DSL for migrations to create PostgreSQL schemas, and more.
