I have been working with and learning Rails for a few months and have recently taken up a project that calls for a gem that will periodically pull data from another site and compile it into a database-friendly format.
I have the Ruby code to do the pulling, editing, and formatting. My question is: how do I get the gem to insert that data into the database of the Rails app that will be built around it? What I want is to have a few models based on the data being mined by the gem.
Some background info on the app: it will be a stats-reporting app for sports, so the models based on the mined data will be Stats, Players, Teams, and Games. There will be other models in the application as well, such as Devise users.
Once again, I have already written the Ruby code that pulls the data using the 'json' and 'nokogiri' gems and puts it into (a lot of) hashes. I just have no idea how to store those hashes in a database usable by a Rails app, or any database for that matter. The only information I could turn up had to do with Engines and Railtie, but there were no thorough explanations.
Thanks for the help.
Related
I just started coding with Ruby on Rails, and I am currently trying to make an app: a search engine that finds books sold on Amazon. I also want users to be able to save books to their profiles, e.g. as a wish list.
I know there are a lot of gems out there (amazon-ecs, vacuum, etc.), but I am struggling to make any sense of the documentation. I am looking for a step-by-step guide, from installing the gem to what code goes in which files.
All help is much appreciated.
I am developing a multitenant Rails app using PostgreSQL schemas. All is going great, but my situation is a little different from the conventional multitenant apps out there; namely, my app requires that I pull customer data for each tenant from their database into mine.
Here is where it gets tricky. I wrote a JRuby gem that connects to each customer's database and pulls data to my server, then processes that data and loads it into my Rails app (each customer's set of data ends up in the appropriate tenant schema). This gem is therefore the only place that is aware of all my tenants and their configuration (database info, which tables to pull, and so on).
My question is: what do you think of this design choice? One problem I am already seeing is that it forces the app to straddle two Ruby runtimes: it normally runs on plain Ruby, but when I need to do a pull I have to switch to JRuby. Furthermore, it is hard to inspect tenant configuration without resorting to this gem.
Any comments or feedback on this? Is there another path I could have taken with this?
I have a Ruby script that downloads web pages containing financial statements for publicly traded companies, scrapes the pages for essential financial data, processes the financial data, and writes the results to a Postgres database.
I looked at the procedure for creating a Ruby gem at http://guides.rubygems.org/make-your-own-gem/, and I'm considering turning my Ruby web-scraping script into a gem. Unlike the Hello World exercise in that guide, my script needs a Postgres database ready to go.
I am working on a Rails app (Doppler Value Investing) that displays the stock parameters. Having a Ruby gem that nicely integrates into my app would be smoother and more elegant than the setup I would otherwise use. (At the moment, I have a separate Ruby app that does the scraping work and writes the results to the Postgres database.)
The one hitch I can think of is the need to manually create a Postgres database first. Is there a way to do this programmatically, or do I simply need to include in the README a statement that says something like "You MUST create a Postgres database with the name *db_name*, or this gem will not work"?
Just include the instruction in the README. Apart from anything else, you can't know ahead of time what privileges the user of your gem is going to have, so you'd have to deal with not being able to create the database programmatically anyway. It's a one-time task, so automating it doesn't make a huge amount of sense.
Once the database is set up, creating the schema automatically does make sense.
Edited
The question in short:
Is there a way (a gem?) in Rails to bind two databases with the same schema to one app, where Rails decides which database to use based on the top-level domain?
For example: if a user visits example.de, the site's data is loaded from a database called de_example_production, and if it's example.com, the data is loaded from us_example_production.
Details (old question):
I have an ecommerce Rails app that has been developed for some particular country. Now I am trying to extend it to another country.
The main requirement is that it should be the same app running on the same server (so that code updates apply to all countries), but since the countries have different data - cities, stores, products - I want them to be in separate databases. What's the best way to achieve this?
As an alternative, I thought of continuing with the current database by adding a country model on top of the hierarchy of models that I already have, but it seems to me this approach will add a lot of complexity and redundancy to the system.
Can you please help me out?
As Thomas pointed out, the paradigm I was looking for is called multitenancy. I decided to proceed with the Apartment gem as it perfectly suited my requirements.
These Pro RailsCasts episodes helped me in my implementation:
Multitenancy with Scopes
and Multitenancy with PostgreSQL.
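For the top-level-domain routing in the question, Apartment's generic Rack elevator can map the request host to a tenant. A sketch (the initializer path and tenant names are assumptions, not from the original):

```ruby
# config/initializers/apartment.rb -- hedged sketch, tenant names invented.
# Apartment switches the Postgres schema per request based on the host.
require "apartment/elevators/generic"

Rails.application.config.middleware.use Apartment::Elevators::Generic, lambda { |request|
  # example.de -> "de_example" schema, everything else -> "us_example"
  request.host.end_with?(".de") ? "de_example" : "us_example"
}
```

This keeps one codebase and one server while each country's data lives in its own schema.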
The iTunes Enterprise Partner Feed is "a data feed of the complete set of metadata from iTunes and the App Store" and "is available in two different formats - either as the files necessary to build a relational database or as stand-alone flat files that are country and media dependent."
I need to consume the data from this feed (which is essentially exported into flat files) and allow linking of my own Model objects (User, Activity, etc.) to data provided by the feed (App, Developer, etc.) The data is provided as a weekly full export and a daily incremental export.
I have two ideas for ways to implement this:
1. Create all of the models in my Rails app and write my own importer that inserts/updates records directly into my app's database daily via cron, using models I've created (App, Developer, etc.)
2. Keep this database entirely separate and open up a REST API that my own app will consume
My naive approach with #1 to keep everything in the Rails app is based on the need to be able to observe changes in the data I'm getting from the EPF. For example, if an App's description is updated, I want to be able to create an Activity object via an observer to track that update.
On one hand, #2 feels like a better approach because it creates a standalone API into the data that can be consumed from several different apps I create. On the other hand, I'm just not sure how I'd accomplish the data-change notifications without using observers directly on my own models, or how I would even consume the data in an object-oriented way that could still be used with my own models. It feels like a lot of duplicate work to have to query the API for, say, an App's data, create a proper Active Record object for it, and then save it just so it can be linked to one of my own models.
Is there a standard way to do this that I'm totally missing? Any pointers?
EDIT: Rails engines sound interesting, but they would mean that each app still needs to consume and insert the data separately. That doesn't sound very DRY. It sounds more and more like a REST API is the way to go; I just don't know how to bridge the gap from the API to an Active Record model.
Rails Engines might be a good fit for this. You can create a Rails Engine gem and add all of your models and rake tasks to consume the data. Then you can include this gem in any app that uses it and also create an API app which includes the gem. You should be able to create observers in your other apps that interact with the gem.
I have quite a few apps that interact with each other, and this approach works well for me. I have one central app that includes all the engines that consume data, and I run all of my cronjobs from this app. I use the use_db plugin, which allows my app to communicate with different databases. Each engine has use_db as a dependency, and I keep the database configuration inside the gem. One example:
Engine Gem = transaction_core
This gem consumes transaction data from a source and inserts it into my transaction database.
The gem is included in my central app, and I pull the transaction data with a rake task run from cron
I include this gem in several other apps that need to use the transaction data. Since the engine automatically adds the models and database config to the app, there is no additional work required to use the models in the app.
I have not used observers inside an app that includes my engines, but I see no reason why it would not work. With the engine, the models behave as if they were in your app/models directory. Hope this helps!
The Modest Rubyist has a good four-part tutorial on Rails 3 plugins that includes Engines:
http://www.themodestrubyist.com/2010/03/05/rails-3-plugins---part-2---writing-an-engine/