Using statistical tables with Rails - ruby-on-rails

I'm building an app that needs to store a fair number of events that users carry out. (Think LOTS, as in millions per month.)
I need to report on these events (total of type x in the last month, etc.) and need something resilient and fast.
I've toyed with Redis and the like to store aggregates of the data, but that could just leave me building up a massive store of single-figure aggregates that can't be rebuilt.
Whilst that isn't a bad solution, I'm looking at storing the raw event data in tables that I can query as needed, and potentially generating aggregate counters periodically. That would give me the ability to add counters over time, and also to carry out ad-hoc inspections of what is going on, something aggregates alone don't allow.
The question is, what is the best way to do this? I obviously don't want to create a model for each table (which is what Rails would prefer), so do I just create the tables and interact with them via raw SQL as needed, or is there some other option for dealing with this sort of data?
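For concreteness, here is a minimal sketch of the model-less approach being asked about: a migration that creates a plain `events` table, then raw SQL through `ActiveRecord::Base.connection` to write and read it. The table layout, the `user` variable, and the event name are illustrative assumptions, not anything prescribed by Rails.

```ruby
# db/migrate/xxxxxxxxxxxxxx_create_events.rb -- a plain table with no model class
class CreateEvents < ActiveRecord::Migration[7.0]
  def change
    create_table :events do |t|
      t.string   :event_type, null: false
      t.integer  :user_id
      t.datetime :occurred_at, null: false
    end
    add_index :events, [:event_type, :occurred_at]
  end
end

# Anywhere in the app: write and read with raw SQL, no ActiveRecord model required.
conn = ActiveRecord::Base.connection

conn.execute(<<~SQL)
  INSERT INTO events (event_type, user_id, occurred_at)
  VALUES (#{conn.quote('page_view')}, #{conn.quote(user.id)}, #{conn.quote(Time.current)})
SQL

monthly_total = conn.select_value(<<~SQL)
  SELECT COUNT(*)
  FROM events
  WHERE event_type = #{conn.quote('page_view')}
    AND occurred_at >= #{conn.quote(1.month.ago)}
SQL
```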

I've worked on an app that had that type of data flow, and the solution was the following:
-> store everything
-> create aggregates
-> delete everything after a short period (1 week or something) to free up resources
So you can simply store events with Rails, have the aggregates created in the background by another fast script (cron + SQL), read the aggregates with Rails, and have yet another background script delete the raw events.
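A rough sketch of what those two background steps could look like as cron-driven rake tasks, assuming hypothetical `events` and `event_counts` tables. The upsert uses PostgreSQL's `ON CONFLICT` syntax and needs a unique index on `(event_type, day)`; adjust for your database.

```ruby
# lib/tasks/events.rake -- both tasks are meant to be driven by cron
namespace :events do
  desc "Roll recent raw events up into daily per-type counts (PostgreSQL upsert syntax)"
  task rollup: :environment do
    # Only recomputes the last couple of days, so totals for already-purged days are never clobbered.
    ActiveRecord::Base.connection.execute(<<~SQL)
      INSERT INTO event_counts (event_type, day, total)
      SELECT event_type, DATE(occurred_at) AS day, COUNT(*) AS total
      FROM events
      WHERE occurred_at >= CURRENT_DATE - INTERVAL '2 days'
      GROUP BY event_type, DATE(occurred_at)
      ON CONFLICT (event_type, day) DO UPDATE SET total = EXCLUDED.total
    SQL
  end

  desc "Purge raw events once they are older than a week and already aggregated"
  task purge: :environment do
    conn = ActiveRecord::Base.connection
    conn.execute("DELETE FROM events WHERE occurred_at < #{conn.quote(1.week.ago)}")
  end
end
```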
Also... Rails and performance don't usually go hand in hand ;)

Related

Rails saving and/or caching complicated query results

I have an application that, at its core, is a sort of data warehouse and report generator. People use it to "mine" through a large amount of data with ad-hoc queries, produce a report page with a bunch of distribution graphs, and click through those graphs to look at specific result sets of the underlying items being "mined." The problem is that the database is now many hundreds of millions of rows of data, and even with indexing, some queries can take longer than a browser is willing to wait for a response.
Ideally, at some arbitrary cutoff, I want to "offline" the user's query: perform it in the background, save the result set to a new table, and use a job to email the user a link that can use the saved set as a cached result, skipping directly to rendering the graphs in the browser. These jobs/results could be kept for a long time in case people wanted to revisit the particular problem they were working on, or emailed to coworkers. I would be tempted to just create a PDF of the result, but it's the interactive clicking of the graphs that I'm trying to preserve here.
None of the standard Rails caching techniques really captures this idea, so maybe I just have to do this all by hand, but I wanted to check to see if I wasn't missing something that I could start with. I could create a keyed model result in the in-memory cache, but I want these results to be preserved on the order of months, and I deploy at least once a week.
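None of this is built in, but hand-rolling it is not much code. A sketch of one possible shape, using ActiveJob and a hypothetical `saved_results` table; `SavedResult`, `ReportMailer`, and the column names are illustrative assumptions rather than anything Rails provides.

```ruby
# app/models/saved_result.rb
# columns (illustrative): token:string, query_sql:text, rows:jsonb, completed_at:datetime
class SavedResult < ApplicationRecord
end

# app/jobs/offline_query_job.rb
class OfflineQueryJob < ApplicationJob
  queue_as :reports

  def perform(saved_result_id)
    result = SavedResult.find(saved_result_id)
    rows   = ActiveRecord::Base.connection.select_all(result.query_sql).to_a
    result.update!(rows: rows, completed_at: Time.current)
    # ReportMailer is hypothetical: it mails the user a link built from result.token
    ReportMailer.result_ready(result).deliver_later
  end
end

# In the controller, when the query looks too expensive to run inline:
result = SavedResult.create!(token: SecureRandom.hex(16), query_sql: sql)
OfflineQueryJob.perform_later(result.id)
# ...then redirect to a "your report is being generated" page; the email links back by token.
```

Because the result rows live in the database rather than the in-memory cache, they survive deploys and can be kept for months.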
It sounds like the data is being loaded from lots of join tables, which is why it takes so long to load.
You are also performing calculation/visualisation tasks on the data fetched from the DB before showing it in the UI.
I'd like to recommend some approaches to your problem:
Minimize the number of joins/nested join DB queries
Add some denormalised tables/columns. For example, if you are showing the count of a user's comments, you can add a new column to the users table to store that count directly; a scheduled job or a callback can keep it up to date (see the sketch after this list)
Also try to minimise any calculations performed on the UI side
You can also use lazy loading to fetch the data in chunks
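A minimal sketch of that counter-column idea using Rails' built-in counter cache, plus a periodic repair pass; the `users.comments_count` column and the model names are illustrative:

```ruby
# In a migration: denormalise the comment count onto users.
add_column :users, :comments_count, :integer, default: 0, null: false

# app/models/comment.rb -- Rails keeps users.comments_count in sync on create/destroy.
class Comment < ApplicationRecord
  belongs_to :user, counter_cache: true
end

# Optional scheduled job (cron / whenever) to repair any drift in the cached counts.
User.find_each { |user| User.reset_counters(user.id, :comments) }
```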
Thanks, hope this will help you to decide where to start 🙂

Pre-Made ActiveRecord to Optimize Performance / Save Resources

Essentially, each time a visitor reaches the application, the controller performs a database query to determine the most relevant items to show.
Although the items shown vary with time, they are not personally selected for each user.
This means that instead of being calculated on every visit, it would be better for the system to perform a single query every 10 minutes or so, store the result, and reuse it on each visit.
What is the best way to implement this idea? I was thinking of cron jobs and maybe storing the result in Redis, but I'm not sure; any help is appreciated!
There are a number of ways to do this. One way that I've used in the past with success is to have a table in your database that represents the most relevant items and then have a cron job that updates that table.
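For example, the cron-plus-precomputed-result idea might look roughly like this; `Item`, `views_count`, and the relevance query are illustrative assumptions, and whether the result lives in a dedicated table, Redis, or `Rails.cache` (which can itself be backed by Redis) is largely interchangeable:

```ruby
# lib/tasks/relevant_items.rake -- scheduled every ~10 minutes from cron
namespace :relevant_items do
  desc "Recompute the most relevant items and cache their ids"
  task refresh: :environment do
    # The relevance query is illustrative; swap in whatever "most relevant" means for your app.
    ids = Item.order(views_count: :desc).limit(20).pluck(:id)
    Rails.cache.write("relevant_item_ids", ids, expires_in: 15.minutes)
  end
end

# In the controller, every visit reads the precomputed list instead of recomputing it.
def index
  ids    = Rails.cache.read("relevant_item_ids") || []
  @items = Item.where(id: ids)
end
```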
Fragment caching, as #wesley6j recommended, isn't a bad way to go either, and you can combine the two techniques if you want.
If you want more detailed suggestions, you can provide some more details about what you are trying to achieve.

How to keep track of daily data?

In my Rails application, I want to keep several daily metrics in order to see how the data changes over time. For example, I want to see how many times a user logged in on a particular date, which lets me accumulate this data over time.
Some of this data I can figure out through queries, such as the number of posts a user made on a particular day (because the post model includes a date). However, there are many different daily metrics I want to keep track of.
I thought of creating a DataPlayers model that has data for every player, with a new instance created every day, but I don't think that is the best approach.
Are there best practices for this type of data collection?
You could use a gem like SqlMetrics to track events as they happen.
It stores the events in your own database, so it's easy to query them via SQL.
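If you would rather roll it yourself, a small hand-written daily-counter model covers the same ground. The `DailyMetric` table, its columns, and the `record!` helper below are illustrative assumptions; with a unique index on `[user_id, metric, day]` you would want an upsert or a retry to be safe under concurrency.

```ruby
# columns (illustrative): user_id:integer, metric:string, day:date, count:integer (default 0)
# plus a unique index on [user_id, metric, day]
class DailyMetric < ApplicationRecord
  # Bump today's counter for a given user and metric name.
  def self.record!(user, metric, day: Date.current)
    row = find_or_create_by!(user_id: user.id, metric: metric, day: day)
    row.increment!(:count)
  end
end

# Wherever the event happens, e.g. after a successful sign-in:
DailyMetric.record!(current_user, "login")

# Reporting: logins per day over the last month.
DailyMetric.where(metric: "login", day: 1.month.ago.to_date..Date.current)
           .group(:day)
           .sum(:count)
```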

ASP.NET MVC 3 - Web Application - Efficiently Aggregate Data

I am running an ASP.NET MVC 3 web application and would like to gather statistics such as:
How often is a specific product viewed
Which search phrases typically return specific products in their result list
How often (for specific products) does a search result convert to a view
I would like to aggregate this data and break it down:
By product
By product by week
etc.
I'm wondering what are the cleanest and most efficient strategies for aggregating the data. I can think of a couple but I'm sure there are many more:
Insert the data into a staging table, then run a job to aggregate the data and push it into permanent tables.
Use a queuing system (MSMQ/Rhino/etc.) and create a service to aggregate this data before it ever gets pushed to the database.
My concerns are:
I would like to limit the number of moving parts.
I would like to reduce impact on the database. The fewer round trips and the less extraneous data stored, the better.
In certain scenarios (not listed) I would like the data to be somewhat close to real-time (accurate to the hour may be appropriate)
Does anyone have real-world experience with this, and if so, which approach would you suggest, and what are the positives and negatives? If there is a better solution that I am not thinking of, I'd love to hear it...
Thanks
JP
I needed to do something similar in a recent project. We implemented a full audit system in a secondary database; it tracks changes to every record in the live DB. Essentially, every insert, update and delete actually writes two records, one in the live DB and one in the audit DB.
Since we have this data in real time in the audit DB, we use this second database to fill any reports we might need. One of the tricks I've found when working with a reporting DB is to forget about normalisation. Just create a table for each report you want, and have it carry just the data needed for that report. It duplicates data, but the performance gains are worth it.
As to filling the actual data in the reports, we use a mixture. Daily reports are generated by a scheduled task at around 3am, ditto for the weekly and monthly reports, normally over weekends or late at night.
Other reports are generated on demand, mostly using the data since the last daily run, so it's not that many records, once again all from the secondary database.
I agree that you should create a separate database for your statistics, it will reduce the impact on your database.
You can go with your idea of having "staging" tables and "aggregate" tables; that way, when you want near-real-time data you go to the staging tables, and when you want historical data you go to the aggregates.
Finally, I would recommend saving your statistics with an asynchronous call; that way recording them will not impact your pages' response time.
I suggest that you create a separate database for this. The best way is to use BI techniques; SQL Server has separate services for BI.

Database design - recording transactions

I would appreciate some advice on how to structure a database for the following scenario:
I'm using Ruby on Rails, and so I have the following tables:
Products
Salespeople
Stores
Products are manufactured in batches, so each product item has a batch code; I think I will also need a table of batches, each of which refers to a product type.
Batch
In the real world, Salespeople take Product items (from a specific Batch) and in due course issue them to a Store. Importantly, Batches are large and may be spread across many Salespeople and, subsequently, Stores.
At some future date, I would like to run the following reports:
Show all Batches of a Product issued to a specific Store.
Show all Batches held by a Salesperson (i.e. not yet sold).
Now, I'm assuming I need to build a table of Transactions, something like,
Transaction
salesperson_id
batch_id (through which the product can be determined)
store_id
typeOfTransaction (whether the Salesperson has obtained some stock, or sold some stock)
quantity
By dynamically running through a table of Transaction records, I could derive the above reports. However, this seems inefficient and, over time, increasingly slow.
My question is: what is the best way to keep track of transactions like this, preferably without requiring dynamic processing of all transactions to derive total items from a batch given to a given store.
I don't believe I can just keep a central record of stock, as Product comes in Batches, and Batches are distributed by Salespeople across Stores.
Thank you.
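For reference, the transaction-derived reports described above would look roughly like this in ActiveRecord; the association names, the `type_of_transaction` values, and the receive/issue convention are illustrative assumptions:

```ruby
# 1) All batches of a given product issued to a specific store.
Transaction.joins(:batch)
           .where(type_of_transaction: "issue",
                  store_id: store.id,
                  batches: { product_id: product.id })
           .group(:batch_id)
           .sum(:quantity)

# 2) Stock still held by a salesperson: quantity received minus quantity issued, per batch.
received = Transaction.where(salesperson_id: salesperson.id, type_of_transaction: "receive")
                      .group(:batch_id).sum(:quantity)
issued   = Transaction.where(salesperson_id: salesperson.id, type_of_transaction: "issue")
                      .group(:batch_id).sum(:quantity)

on_hand = Hash.new(0)
received.each { |batch_id, qty| on_hand[batch_id] += qty }
issued.each   { |batch_id, qty| on_hand[batch_id] -= qty }
```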
My question is: what is the best way to keep track of transactions like this, preferably without requiring dynamic processing of all transactions to derive total items from a batch given to a given store.
I don't believe I can just keep a central record of stock, as Product comes in Batches, and Batches are distributed by Salespeople across Stores.
Believe it. :-)
In my experience, the only correct way to store this kind of stuff is to break it down into something akin to T-ledger accounting, i.e. debit/credit with a chart of accounts. It requires dynamic processing to derive totals, as you've found out, but anything short of that will lead to tricky queries when dealing with reports and audit trails.
You can speed things up significantly by maintaining partial or complete aggregate balances using triggers (e.g. monthly stock movements per store). This will reduce the number of rows you need to sum when running larger queries. Which of these you'll want to maintain will depend on your app and your reporting requirements.
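Database triggers would maintain such a balance at the SQL level; the same idea expressed as an ActiveRecord callback looks roughly like this, with a hypothetical `monthly_store_balances` table (unique on `[store_id, batch_id, month]`):

```ruby
# app/models/transaction.rb
class Transaction < ApplicationRecord
  belongs_to :batch
  belongs_to :salesperson
  belongs_to :store, optional: true

  after_create :update_monthly_balance

  private

  # Keep a running per-store, per-batch, per-month aggregate so reports
  # don't have to sum the full transaction history every time.
  def update_monthly_balance
    return unless store_id # only movements into a store affect the store balance

    balance = MonthlyStoreBalance.find_or_create_by!(
      store_id: store_id,
      batch_id: batch_id,
      month:    created_at.beginning_of_month.to_date
    )
    balance.increment!(:quantity, quantity)
  end
end
```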
