Smartest way to import massive datasets into a Rails application? - ruby-on-rails

I've got multiple massive (multi gigabyte) datasets I need to import into a Rails app. The datasets are currently each in their own database on my development machine, and I need to read from them and create rows in tables in my Rails database based on the information they contain. The tables in my Rails database will not be exactly the same as the tables in the source databases.
What's the smartest way to go about this?
I was thinking migrations, but I'm not exactly sure how to connect a migration to those other databases, and even if that's possible, would it be ridiculously slow?

Without seeing the schemas or knowing the logic you want to apply to each row, I would say the fastest way to import this data is to create a view of each table you want to export, with the columns in the order you want (doing any processing in SQL), and then do a SELECT ... INTO OUTFILE on that view. You can then take the resulting file and import it into the target database.
This will not allow you to use any rails model validations on the imported data, though.
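For concreteness, here is a rough sketch of that route, assuming both the source and the target databases are MySQL and that the server allows SELECT ... INTO OUTFILE / LOAD DATA INFILE; the legacy_products/products tables, credentials, and file path are made up for illustration:
require "mysql2"

# 1. Reshape the data with SQL inside the source database, then dump it to a file.
source = Mysql2::Client.new(host: "localhost", username: "root", database: "legacy_db")
source.query(<<SQL)
  CREATE OR REPLACE VIEW products_export AS
    SELECT old_name AS name, old_price AS price FROM legacy_products
SQL
source.query("SELECT * FROM products_export INTO OUTFILE '/tmp/products.csv' " \
             "FIELDS TERMINATED BY ',' LINES TERMINATED BY '\\n'")

# 2. Pull the file into the matching table in the Rails database.
target = Mysql2::Client.new(host: "localhost", username: "root", database: "myapp_development")
target.query("LOAD DATA INFILE '/tmp/products.csv' INTO TABLE products " \
             "FIELDS TERMINATED BY ',' LINES TERMINATED BY '\\n' (name, price)")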
Otherwise, you have to go the slow way and create a model for each source database/table to extract the data (http://programmerassist.com/article/302 explains how to connect a given model to a different database) and import it that way. This will be quite slow, but you could set up a monster EC2 instance and run it as fast as possible.
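As a sketch of that slow path (the model names, columns, and connection settings below are placeholders, not anything from the question):
class LegacyProduct < ActiveRecord::Base
  self.table_name = "legacy_products"
  establish_connection(
    adapter:  "mysql2",
    host:     "localhost",
    username: "root",
    database: "legacy_db"
  )
end

LegacyProduct.find_each(batch_size: 1000) do |row|
  # Product is the corresponding model in the Rails database,
  # so its validations and callbacks run on every imported row.
  Product.create!(name: row.old_name, price: row.old_price)
end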
Migrations would work, but I wouldn't recommend them for something like this.

Since georgian suggested it, I'll post my comment as an answer:
If the changes are superficial (column names changed, columns removed, etc.), then I would just manually export the data from the old databases into the new one, and then run a migration to change the columns.
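For example, the follow-up migration could be as small as this (table and column names are made up; on newer Rails you would subclass the versioned ActiveRecord::Migration[x.y] instead):
class CleanUpImportedColumns < ActiveRecord::Migration
  def change
    rename_column :products, :old_name, :name
    remove_column :products, :legacy_flag, :boolean
  end
end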

Related

How to load relational database tables into neo4j database

I have tables which have millions of records. I need to load these records as nodes in neo4j.
Please help me out on how to do it as I'm new to neo4j.
It is quite easy: just map the entities that should become nodes into one set of CSV files, and the connections that should become relationships into another set of files.
Then run them with my batch-importer: https://github.com/jexp/batch-import/tree/20#binary-download
import.sh nodes1.csv,nodes2.csv rels1.csv,rels2.csv
Add types and index information to the headers and the batch.properties config file as needed.
You can use the batch-importer for the initial insert, but also for subsequent updates (the database has to be shut down for that, though).
It is pretty easy to connect to your existing database using its driver, extract the information in the right shape and kind, and insert it into your graph model, either using Cypher statements with parameters or the embedded, transactional Java API for ongoing updates.
See: http://jexp.de/blog/2013/05/on-importing-data-in-neo4j-blog-series/
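As a rough sketch of that driver-plus-Cypher route in Ruby, assuming a MySQL source, a Neo4j 2.x server with the HTTP transactional endpoint, and made-up table/label/property names:
require "mysql2"
require "net/http"
require "json"
require "uri"

# Read rows from the relational source...
source = Mysql2::Client.new(host: "localhost", username: "root", database: "legacy_db")
# ...and write them to Neo4j via parameterized Cypher over HTTP.
uri  = URI("http://localhost:7474/db/data/transaction/commit")
http = Net::HTTP.new(uri.host, uri.port)

source.query("SELECT id, name FROM people").each do |row|
  payload = {
    statements: [{
      statement:  "MERGE (p:Person {id: {id}}) SET p.name = {name}",  # {param} is the old 2.x parameter syntax
      parameters: { id: row["id"], name: row["name"] }
    }]
  }
  http.post(uri.path, payload.to_json, "Content-Type" => "application/json")
end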
You can export to CSV and import that into Neo4j (though that probably won't work well, since you have millions of records).
You can write a program to do it (this is what I am currently working on).
This also depends on what programming languages you know... but the bottom line is, because no two databases are created equally (unless on purpose), it's very difficult to create a catch-all solution for migrating data from SQL to Neo.
The best way that I've discovered so far is to create a program that queries the tables in the database, finds all related tables (i.e. foreign keys), and imports all those table rows into Neo, labeling the nodes using the Table name, then process the foreign keys as relationships.
It's not easy. I've been working on something for my database here for a week or so now... but I'm close!
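As a very rough sketch of that discovery step (assuming Rails 4.2+, whose MySQL/PostgreSQL adapters expose connection.foreign_keys; this only prints the import plan, it doesn't touch Neo4j):
conn = ActiveRecord::Base.connection
conn.tables.each do |table|
  label = table.classify                       # e.g. "order_items" => "OrderItem"
  puts "#{table}: import rows as (:#{label}) nodes"
  conn.foreign_keys(table).each do |fk|
    # each foreign key column becomes a relationship to the referenced table's label
    puts "  #{table}.#{fk.column}: (:#{label})-[:HAS_#{fk.to_table.upcase}]->(:#{fk.to_table.classify})"
  end
end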

Dynamic database connection in a Rails App

I'm quite new to Rails, but in my current assignment I have no choice but to use RoR. My problem is that in my app I would like to create, connect to, and destroy databases automatically on user demand, and as far as I understand it is quite hard to accomplish this with ActiveRecord. It would be nice to hear some advice from more experienced RoR developers on this issue.
The problem in details:
I have a main database (which I access with ActiveRecord). In this database I store a list of my active programs (and some template data for creating new programs). I would like to create a separate database for each of these programs (when a user creates a new program in my app).
In the programs' databases I would like to store the state and basic info of the particular program, plus a huge amount of program-related data (which is used to calculate the state and must be kept for audit reasons).
My problem is that, for example, I want a dashboard listing all the active programs and their state data. So first I have to get the list from my main database, and after that I have to connect to each of the required program databases and get the state data.
My question is what is the best practice to accomplish this? What should I use (ActiveRecord, a particular gem, etc.)?
Hi, thanks for your answers so far. I would like to add a couple of details to make my problem clearer:
First of all, I'm not confusing database and table. In my case there is a tool which processes log files. It's a legacy tool (written in Ruby 1.8.6), and before running it I have to run an SQL script which creates a database with prefilled and empty tables for this tool. The tool then processes the logs and inserts the calculated data into different tables in this database. The catch is that the new system should support running programs in parallel, which means I have to create different databases for different programs. (This was not an issue before, because the tool was configured by hand for each run, but now the configuration must be done automatically by my tool.) There is no way to change the legacy tool: it would be too complicated in the given time frame, and it's also a validated tool. So this is the reason I cannot use different tables for different programs; my solution has to be built around the other tool.
Summing my task up:
I have to create a complex tool using RoR and Ruby 2.0.0 which:
- creates a specific database for the legacy tool every time a user wants to start a new program
- configures this old tool on a daily basis to process the required logs and insert the calculated data into the appropriate database
- accesses these databases and shows dashboards based on their data
The database I'm using is MySQL.
I cannot use another framework, because the future owner of my tool won't be able to manage/change/update it. So I have to go with RoR, which is quite painful for me right now, and I really hope some of you can give me a little guidance.
Ok, this is certainly outside of the typical use case scenario, BUT it is very doable within Rails and ActiveRecord.
First of all, you're going to want to execute some SQL directly, which is fine, but you'll also have to take extra care, for instance if you're using user input to determine the name of the new database, and do your own escaping (or use one of ActiveRecord's lower-level escaping methods that we normally don't worry about). The basic idea, though, is something like:
create_sql = <<SQL
CREATE TABLE foo ...
SQL
ActiveRecord::Base.connection.execute(create_sql)
Although now that I look at ActiveRecord::ConnectionAdapters::Mysql2Adapter, there's a #create_database method that might help you.
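For the database itself, something like this should do (a sketch, assuming MySQL; "program_42" is a placeholder name):
# If the adapter supports it (the MySQL adapters do), creating the database is a one-liner.
ActiveRecord::Base.connection.create_database("program_42")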
The next step is actually doing different things in the context of different databases. The key there is ActiveRecord::Base.establish_connection. Using that, and passing in the params for the database you just created, you should be able to do whatever you need for that particular database. If the databases weren't being created dynamically, I'd put that call at the top of a standard ActiveRecord model so that the model would always connect to that database instead of the main one. If you want to use the same class and connect it to different databases (one at a time, of course), you would probably call remove_connection before calling establish_connection for the next one.
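A minimal sketch of that loop, with made-up table, credential, and database names:
class ProgramState < ActiveRecord::Base
  self.table_name = "states"   # hypothetical table that exists in every per-program database
end

%w[program_1 program_2].each do |db_name|
  ProgramState.establish_connection(
    adapter:  "mysql2",
    host:     "localhost",
    username: "app",
    password: "secret",
    database: db_name
  )
  puts "#{db_name}: #{ProgramState.count} state rows"   # gather dashboard data here
  ProgramState.remove_connection
end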
I hope this points you in the right direction. Good luck!

What's the best practice for handling mostly static data I want to use in all of my environments with Rails?

Let's say, for example, that I'm managing a Rails application that has static content that's relevant in all of my environments, but that I still want to be able to modify if needed. Examples: states, questions for a quiz, wine varietals, etc. There are relations between the user content and this static data, and I want to be able to modify it live if need be, so it has to be stored in the database.
I've always managed that with migrations, in order to keep my team and all of my environments in sync.
I've had people tell me dogmatically that migrations should only be for structural changes to the database. I see the point.
My counterargument is that this mostly "static" data is essential for the app to function and if I don't keep it up to date automatically (everyone's already trained to run migrations), someone's going to have failures and search around for what the problem is, before they figure out that a new mandatory field has been added to a table and that they need to import something. So I just do it in the migration. This also makes deployments much simpler and safer.
The way I've concretely been doing it is to keep my test fixture files up to date with the good data (which has the side effect of letting me write more realistic tests) and re-importing it whenever necessary. I do it with connection.execute "some SQL" rather than with the models, because I've found that Model.reset_column_information + a bunch of Model.create sometimes worked if everyone immediately updated, but would eventually explode in my face when I pushed to prod let's say a few weeks later, because I'd have newer validations on the model that would conflict with the 2 week old migration.
Anyway, I think this YAML + SQL process explodes a little less, but I also find it pretty kludgey. I was wondering how people manage that kind of data. Are there other tricks available right in Rails? Are there gems to help manage static data?
In an app I work with, we use a concept we call "DictionaryTerms" that works as lookup values. Every term has a category that it belongs to. In our case, they're demographic terms, including terms having to do with gender, race, and location (e.g. state), among others.
You can then use the typical CRUD actions to add/remove/edit dictionary terms. If you need to migrate terms between environments, you could write a rake task to export/import the data from one database to another via a CSV file.
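A rough sketch of such a rake task pair, assuming a DictionaryTerm model with category and name attributes (all names and paths are placeholders):
# lib/tasks/dictionary_terms.rake
require "csv"

namespace :dictionary_terms do
  desc "Dump all dictionary terms to a CSV file"
  task export: :environment do
    CSV.open("tmp/dictionary_terms.csv", "w") do |csv|
      csv << %w[category name]
      DictionaryTerm.find_each { |t| csv << [t.category, t.name] }
    end
  end

  desc "Load dictionary terms from a CSV file (idempotent)"
  task import: :environment do
    CSV.foreach("tmp/dictionary_terms.csv", headers: true) do |row|
      DictionaryTerm.where(category: row["category"], name: row["name"]).first_or_create!
    end
  end
end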
If you don't want to have to import/export, then you might want to host that data separate from the app itself, accessible via something like a JSON request, and have your app pull the terms from that request. That seems like a lot of extra work if your case is a simple one.

What are the best practices for importing large datasets into MongoDB?

We are just giving MongoDB a test run and have set up a Rails 3 app with Mongoid. What are the best practices for inserting large datasets into MongoDB? To flesh out a scenario: Say, I have a book model and want to import several million records from a CSV file.
I suppose this needs to be done in the console, so this may possibly not be a Ruby-specific question.
Edited to add: I assume it makes a huge difference whether the imported data includes associations or is supposed to go into one model only. Any comments on either scenario welcome.
MongoDB comes with import/export tools that parse JSON formatted data.
Assuming you have an existing database in SQL, the easiest way to migrate that data is to output your SQL data as JSON strings, then use the import tool for each collection.
This includes denormalization and nesting/embedding - so don't just copy a relational model into MongoDB; you should also consider refactoring your data model to leverage MongoDB features.
For example, a common task is to merge articles and tags to an articles collection with the tags embedded as an array. Do that in your export script, so all MongoDB sees is nice clean JSON coming in through the import :-)
You can still import all your tables as collections, but you're missing out on some of the true strengths of MongoDB by doing that.
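As a rough sketch of that kind of export script, assuming MySQL tables named articles, tags, and article_tags (mongoimport accepts one JSON document per line):
require "mysql2"
require "json"

db = Mysql2::Client.new(host: "localhost", username: "root", database: "legacy_db")
File.open("articles.json", "w") do |out|
  db.query("SELECT id, title, body FROM articles").each do |article|
    tag_names = db.query(
      "SELECT t.name FROM tags t
       JOIN article_tags at ON at.tag_id = t.id
       WHERE at.article_id = #{article['id'].to_i}"
    ).map { |t| t["name"] }
    # one JSON document per line, with the tags embedded as an array
    out.puts({ title: article["title"], body: article["body"], tags: tag_names }.to_json)
  end
end
# then: mongoimport --db myapp --collection articles --file articles.json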
If you only need to add this dataset once, you can use the db/seeds.rb file: read your CSV there and generate all the documents.
If you need to do it repeatedly, you can write a runner or a rake task.
For a task, define a .rake file under lib/tasks/, parse your file there, and generate the documents.
You can write a runner too.
It's the same kind of thing you would do with ActiveRecord.
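A minimal sketch of the seeds.rb route, assuming a Mongoid Book model with title and author fields, a db/books.csv with matching headers, and a recent Mongoid (5+) whose driver exposes insert_many; with older versions you would fall back to Book.create! per row:
# db/seeds.rb
require "csv"

batch = []
CSV.foreach(Rails.root.join("db", "books.csv"), headers: true) do |row|
  batch << { title: row["title"], author: row["author"] }
  if batch.size >= 1_000
    Book.collection.insert_many(batch)   # bulk insert; skips Mongoid validations
    batch = []
  end
end
Book.collection.insert_many(batch) if batch.any?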

Rails: Script to import data

I have a couple of scripts that generate & gather large amounts of data that I will need both to seed my database with and in the future to add large amounts more to it. What is the best way to import lots of relational data into a rails database as both seed data and intermittently during production?
I haven't settled on an output format for my script yet but the data's structure largely mirrors my rails model's and contains has_many associations that I would like the import to preserve.
I've googled a fair bit and seen ar-extensions and seed_fu as well as the idea of using fixtures.
With ar-extensions, all the examples seem to be straightforward CSV imports (likely from table dumps, which seems to be its primary use case) with no mention of handling associations or avoiding duplicate updates. In my case I have no IDs, foreign keys, or join tables in my script, so this seems like it wouldn't work for me unless I was prepared to handle that complexity myself.
With seed_fu, it looks like it could handle the relational aspects of the data creation but would still require me to specify ids (how can you know which ones are available in production?) and mix code with data.
Fixtures also have the same ID problem, and they additionally require objects to be named (I'd probably just end up using numbers for names), and I'm not sure how I would avoid accidental duplication of records.
Or would I be better off just putting my data into a local SQLite db first and then using the straight table-dumping techniques?
I've done this before with CSV files. I have a cron job that collects the data and puts it in accessible CSV files. Then I have a rake task (also in cron, but on another box) that looks for CSVs, and if there are any, calls model methods to create objects out of the CSV rows.
The models have a csv_create_or_update method that takes a CSV row. This technique sidesteps the id issue, and also allows validations to run (or not) on the data coming in.
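A rough sketch of that pattern, assuming a Book model keyed on an isbn column (the method, columns, and paths are placeholders, not the poster's actual code):
# app/models/book.rb
class Book < ActiveRecord::Base
  def self.csv_create_or_update(row)
    book = where(isbn: row["isbn"]).first_or_initialize
    book.title  = row["title"]
    book.author = row["author"]
    book.save                      # validations run; invalid rows simply return false
  end
end

# lib/tasks/import.rake
require "csv"

namespace :import do
  desc "Import any CSV files dropped into tmp/incoming"
  task books: :environment do
    Dir.glob("tmp/incoming/*.csv").each do |path|
      CSV.foreach(path, headers: true) { |row| Book.csv_create_or_update(row) }
    end
  end
end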
I was handed a Rails app recently that required some imported data, and it was all handled in a rake task. All the data came in the form of CSV-formatted files and was handled per class/model etc. as necessary. This worked out relatively well for me, being new to the system, as it was very easy to see where the data was going and how it was being applied. As you import the data you can check for ID conflicts/collisions and handle them accordingly.
