Querying an external Oracle DB in a Rails application - ruby-on-rails

I have a website that uses a MySQL database for its whole operation. But for a new requirement I need to query an external Oracle database (used by another component), compile a list of items, and display them on a page of the website. How is it possible to connect to an external database just for rendering a single page?
And is it possible to cache the queried result for, say, one month before invalidating the cache and getting the updated list of items? I don't want to query the external Oracle DB on each request.

Why not a monthly job that just copies the data from the Oracle database into the MySQL database?

As stated by Myers, a simple solution is to accept a data feed. For example, a cron job could pull data from the Oracle database at defined intervals, say daily or weekly, and then insert the data into your web application's local MySQL database. The whole process could be essentially transparent to your web application. The caching interval, or how long you go between feeds, would be up to you.
I'll also point out that this could be an opportunity for an API that would more readily support sharing of data between applications. This would, of course, be more work than a simple data feed, but has the possibility of being more useful to more people.
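The "refresh at most once a month" part can be sketched in plain Ruby, independent of Rails. In a real app this would more likely be `Rails.cache.fetch(..., expires_in: 1.month)` or a cron-driven rake task that refreshes a local MySQL table; the `CachedList` class and its fetcher block here are illustrative, with the block standing in for the query against the external Oracle database:

```ruby
# Illustrative read-through cache with a ~1 month TTL. The fetcher block
# stands in for the query against the external Oracle database.
class CachedList
  TTL = 30 * 24 * 60 * 60 # roughly one month, in seconds

  def initialize(&fetcher)
    @fetcher = fetcher
    @items = nil
    @fetched_at = nil
  end

  # Returns the cached list, refreshing it once the TTL has elapsed.
  # `now` is injectable so the expiry logic is easy to test.
  def items(now = Time.now)
    if @items.nil? || (now - @fetched_at) > TTL
      @items = @fetcher.call
      @fetched_at = now
    end
    @items
  end
end
```

With this shape, only the first request after expiry pays the cost of the Oracle query; every other request is served from the cached list.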

Related

Cache layer to json api (on rails)

I have a small social website on Rails (planning to port it to Phoenix) that uses React for the view; the backend is just a JSON API,
with more or less 3000 users online at any moment. It runs on Postgres/memcached.
When a user, for example, visits their feed page, I do:
Select activities from the database (20 per page)
Select the last 4 comments on each activity from the database (just 1 select)
Select all users referenced by an activity or comment from the database (select users.* from users where id in (1,3,4,5,...100))
I have a cache layer (memcached): when I load users, I first try to load from memcached; if the entry does not exist, I read from the database and put it in the cache.
BUT I also have some "listeners" on the users model (and on other referenced models like address and profile) to invalidate the cache if any field changes.
The problem:
This cache demands a lot of code.
Sometimes the cache runs out of sync.
I hate having these listeners; they are "side effects".
My question is: is anyone doing something like this?
I have searched a lot all over Google for a cache layer for a JSON API, and it looks like everyone is just using the database directly.
I know that Rails has its own solution (and I guess that Phoenix doesn't have one), but it always ends up using the updated_at column, which means I have to go to the database anyway.
Alternatives:
Live with it; life is not pretty.
Buy a more powerful Postgres instance... is anyone using memcached like that?
Remove the listeners, put some expires_in (1 or 2 minutes... or more), and let the app show out-of-sync data for a couple of minutes.
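For reference, the updated_at scheme Rails' cache helpers use is key-based expiration: bake the record's updated_at into the cache key, so a changed record reads a fresh key and the stale entry is simply never touched again (memcached's LRU evicts it), with no invalidation listeners. A minimal plain-Ruby sketch, with a hash standing in for memcached:

```ruby
# Key-based cache expiration: the key changes whenever the record does,
# so no explicit invalidation is needed. The hash stands in for memcached.
class KeyBasedCache
  def initialize
    @store = {}
  end

  # `record` is any hash-like object with :id and :updated_at.
  # The block builds the payload on a cache miss.
  def fetch(record)
    key = "users/#{record[:id]}-#{record[:updated_at]}"
    @store[key] ||= yield
  end
end
```

The trade-off is exactly the one noted above: you still hit the database for updated_at to build the key, but all the listener code and its side effects disappear.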
thanks for any help!

Deploying Neo4j database

So I developed a small Neo4j database with the aim of providing users with path-related information (the shortest path from A to B and properties of individual sections of the path). My programming skills are very basic, but I want to make the database very user-friendly.
Basically, I would like to have a screen where users can choose start location and end location from dropdown lists, click a button, and the results (shortest path, distance of the path, properties of the path segments) will appear. For example, if this database had been made in MS Access, I would have made a form, where users could choose the locations, then click a control button which would have executed a query and produced results on a nice report.
Please note that all the nodes, relationships and queries are already in place. All I am looking for are some tips regarding the most user-friendly way of making the information accessible to the users.
Currently, all I can do is have users install Neo4j, run it every time they need it, open the browser, edit the Cypher script (writing in strings as locations), and execute the query. This makes it rather impractical for users, and I am also worried that some user might corrupt the data.
I'd suggest making a web application using a web framework like Rails, especially if you're new to programming. You can use the neo4j gem to connect to your database and create models that access the data in a friendly way:
https://github.com/neo4jrb/neo4j
I'm one of the maintainers of that gem, so feel free to contact us if you have any questions:
neo4jrb@googlegroups.com
http://twitter.com/neo4jrb
Also, you might be interested in looking at my newest project, called meta_model:
https://github.com/neo4jrb/meta_model
It's a Rails app that lets you define your database model (or at least part of it) via the web app UI and then browse/edit the objects via the web app. It's still very much preliminary, but I'd like it to be able to do things like what you're talking about (letting users examine data and the relationships between it in a user-friendly way).
In general you would write a tiny (web/desktop/forms) application that contains the form, takes the form values, and issues the Cypher requests with the form values as parameters.
The results can then be rendered as a table, a chart, or whatever.
You could even run this from Excel or Access with a Macro (using the Neo4j http endpoint).
Depending on your programming skills (which programming languages you can write in), it can be anything. There is also a Neo4j .NET client (see http://neo4j.com/developer/dotnet).
And its author, Tatham Oddie, showed a while ago how to do that with Excel.
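As a sketch of the parameterized-query part: the JSON body for a request to Neo4j's transactional HTTP endpoint (POST to /db/data/transaction/commit), with the user's two dropdown choices passed as Cypher parameters rather than spliced into the query string. The :Location label, the name property, and the hop limit are assumptions about your model, not anything from the question (note that older Neo4j versions write parameters as {from} rather than $from):

```ruby
require "json"

# Build the JSON body for Neo4j's transactional HTTP endpoint. Passing
# form values as parameters avoids Cypher injection and lets Neo4j
# reuse the query plan.
def shortest_path_request(from, to)
  {
    "statements" => [
      {
        "statement" =>
          "MATCH (a:Location {name: $from}), (b:Location {name: $to}), " \
          "p = shortestPath((a)-[*..15]-(b)) " \
          "RETURN p",
        "parameters" => { "from" => from, "to" => to }
      }
    ]
  }.to_json
end
```

The web form's submit handler would POST this body with Content-Type: application/json and render the returned path as a table.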

CouchDB and iOS

I need some help with a CouchDB iOS project.
I'm using Apache CouchDB Server and the couchbase-lite iOS Framework.
On my CouchDB I have a template document.
- CouchDB Server
  - database
    - template
    - document 1
    - document 2
    - ...
My goal is to synchronise my iPad with only this template document, to get the latest data my application needs.
But when I enter some data on my iPad, I want that data to be pushed only to the Couchbase server.
How can I "tell" my application to synchronise only one document and not the entire database, and how can I "tell" it to push only the data that the user has entered?
More importantly, do I need two databases on my server? One for the template and a second one for user-input data?
If YES, then I just need to know how I can push only my data.
Guidance needed. Thanks.
This is how I solve this:
I tend to add a "last update" date to all my documents, and store it in a format that sorts in time order (epoch or yyyymmddhhmmss; both do).
Create a view that emits this update time as the key.
On your client, store the time you last updated.
When you update, access the view with the startkey parameter set to the last update date.
You can then use include_docs=true to get the documents as you query the view.
I tend to use include_docs=false, though, as it means that when a lot of documents have been updated I transfer less data in a single query; I then directly access each document id that the view returns.
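The incremental view query above can be sketched as URL construction; the database name (mydb) and design doc/view names (app, by_update_time) are placeholders, and epoch timestamps are assumed:

```ruby
require "json"
require "uri"

# Build the URL for querying an update-time view incrementally: only
# documents updated at or after `last_sync_epoch` are returned.
def updates_url(base, last_sync_epoch, include_docs: false)
  params = URI.encode_www_form(
    "startkey" => last_sync_epoch.to_json, # CouchDB view keys are JSON values
    "include_docs" => include_docs
  )
  "#{base}/mydb/_design/app/_view/by_update_time?#{params}"
end
```

Each sync cycle then becomes: query the view with the stored last-update time, apply the returned changes, and store the new time for next time.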

Ruby on Rails, sharing data between applications via a central REST API store

So here is the basic structure I'm proposing:
Data warehouse (for want of a better word)
E-commerce online
Back-end MIS
etc
So the idea is that I have an Order, for example. An order can be created via the e-commerce site or via the back-end MIS. In either case the order should propagate to the e-commerce site to show the order to the user, and vice versa.
There will be other apps in the future.
So the thinking is to have a central warehouse that wraps this data in a service API, and the other apps then push/pull to it.
Sound OK? I guess the question is how to sync the data. When I create an order, do I push it to the warehouse at create time, put it on some queue, or is there some other method to keep all of these in sync, assuming near-realtime to realtime sync is required?
Assume your REST server is just another data store. How would each client get updates from a plain old database when needed?
If you had each client poll the data store at regular intervals, that would be one solution.
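The polling approach can be sketched with a fake in-memory store standing in for the REST server; a real client would issue something like GET /orders?since_id=N on a timer, and all names here are illustrative:

```ruby
# Fake stand-in for the central REST store.
FakeStore = Struct.new(:orders) do
  def orders_since(id)
    orders.select { |o| o[:id] > id }
  end
end

# Client-side poller: remembers the highest order id it has seen and
# pulls only newer orders on each poll.
class OrderPoller
  attr_reader :local

  def initialize(store)
    @store = store
    @last_id = 0
    @local = []
  end

  def poll
    fresh = @store.orders_since(@last_id)
    @local.concat(fresh)
    @last_id = fresh.map { |o| o[:id] }.max unless fresh.empty?
    fresh
  end
end
```

Polling keeps the clients dumb and the warehouse simple; pushing to a queue at create time trades that simplicity for lower sync latency.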

Consuming web service data into DB?

I have created a website that has a store table of our company's stores. The store table will be populated from data received via a web service. It will have to add new stores, mark as closed stores that have closed, and mark as open stores that have reopened.
How do I populate this table from the web service?
1) Have some (cron) script that consumes the web service and syncs the data with the stores table?
2) Build this into the app itself, so that on app start it syncs the data? Then maybe somehow modify my model to sync the data every 10 minutes on a find (not really sure how this would work)?
3) Any other ideas?
I would go with a cron job. That keeps the population of the data separate from the application that uses it. It also means you can keep your data up to date even if your application goes offline. Finally, the data could potentially be used by a different application as well; if so, it wouldn't make much sense to tie the population of the data to one of the applications that uses it.
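The cron job's reconcile step can be sketched as follows, assuming the web service feed lists every currently-open store; the local table is modeled as a hash keyed by store id, and the field names are illustrative:

```ruby
# Reconcile the local stores table with the feed: upsert every store in
# the feed as open, and mark local stores absent from the feed as closed.
def sync_stores!(local, feed)
  feed_ids = feed.map { |s| s[:id] }
  feed.each do |store|
    local[store[:id]] = store.merge(open: true)
  end
  local.each_value do |store|
    store[:open] = false unless feed_ids.include?(store[:id])
  end
  local
end
```

A reopened store simply shows up in the feed again and is upserted back to open, so all three cases (new, closed, reopened) fall out of the same pass.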
Why do you need a database? Depending on what you're doing, it may be more practical to just talk to the Web service directly.
