When is data erased from the OLAP DB? - data-warehouse

I am new to OLAP.
I understand the table structure and ETL process.
I don't understand when data is supposed to be deleted from the fact table.
Say I'm creating a reporting application for events. Each event has the duration it took to complete, the exit code, and the total bytes read. There are several dimensions, e.g. time and location.
Say I have 1 million new records ready for my fact table daily, a total of 1 GB.
If my ETL process only adds data to my fact table it grows indefinitely.
When should I delete data from my fact table? Should I divide the data into several fact tables (e.g. monthly tables)?
Is there any rule-of-thumb?
Thanks

History should never be deleted.
Period.
However, some people get nervous that 1 GB per day may turn into 1 TB every 3 years. This rarely actually matters, but some people still like to worry about the price of storage.
Your time spent designing a data purge can be more expensive than the storage you're attempting to save.
[I found 3 DBAs and 2 programmers debating ways to save a few hundred MB. I said that I would drive them all down to Best Buy and purchase a 500 GB disk drive with the spare change on the floor of my car. The price of 5 consultants merely walking into the room to discuss it had already exceeded the price of the storage they were attempting to "save".]
The question of "can we summarize?" is entirely up to the users. Sometimes you can't usefully summarize, so you can't easily delete anything either.
Some folks will say that the business cycle is 20 years or something like that, and want details for the first 20 years (on 7 TB) and then summaries for time periods before that.

Never. You can use partitioning to deal with old records and move old partitions to different drives. If you partition fact tables by date (month, quarter, year), then for all practical purposes you access only the few latest partitions most of the time.
Keep in mind that the DW belongs to business users, not to IT. Do not limit (or try to assume) the questions a business analyst may want to ask of the DW.
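The monthly-partitioning scheme mentioned above can be sketched like this. This is only an illustration of the routing idea: the fact-table name `fact_events` and the Java helper are made-up examples, and in a real warehouse the database does this declaratively (e.g. with a `PARTITION BY` clause).

```java
import java.time.LocalDate;

// Sketch: route each fact row to a monthly partition by its date key,
// so queries over recent data touch only the latest partitions.
public class PartitionRouter {
    static String partitionFor(LocalDate eventDate) {
        return String.format("fact_events_%04d_%02d",
                eventDate.getYear(), eventDate.getMonthValue());
    }

    public static void main(String[] args) {
        System.out.println(partitionFor(LocalDate.of(2013, 4, 17))); // fact_events_2013_04
    }
}
```

Old partitions (say, anything older than the current business cycle) can then be moved to cheaper storage without touching the hot ones.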

Related

How much data can a column of an Mnesia table store?

How much data can a column of an Mnesia table store? Is there any limit, or can we store as much as we want? Any pointers? (The table is a disc_only_copy.)
As with any potentially large data set (in terms of total entries, not total volume of bytes) the real question isn't how much you can cram into a single table, but how you want to partition the data and how unified or distinct those partitions should appear to the system.
In the context of a chat system, for example, you may want to be able to save the chat history forever, which is a reasonable goal. But you may not want all chat entries to be in the same table forever and ever (10 years? how long? who knows!) right next to chat entries made yesterday. You may also discover as time moves on that storing every chat message in a single table was a painfully naive decision to overcome later on down the road.
So this brings up the issue of partitioning. How do you want to do it? (Staying within the context of a chat system, but easily transferable to another problem...) By time? By channel? By user? By time and channel?
How do you want to locate the data later? This brings up obvious answers that are the same as above: By time? By channel? By user? By time and channel?
This issue exists whether you're dealing with Mnesia or with Postgres -- or any database -- when you're contemplating the storage of lots of entries. So think about your problem in the context of how you want to partition the data.
The second issue is the volume of the data in bytes, and the most natural representation of that data. Considering basic chat data, it's not that hard to imagine simply plugging everything into the database. But if it's a chat system that can have large files attached to a message, I would probably want those files stored as what they are (files) somewhere in a system made for that (like a file system!) and store only a reference to them in the database. If I were creating a movie archive I would certainly feel comfortable using Mnesia to store titles, actors, years, and a pointer (URL or file system path) to the movie, but I wouldn't dream of storing movie file data in my database, even if I was using Postgres (which can actually stand up to that sort of abuse... but think about the new awkwardness of database dumps and backups, and the massive bottleneck introduced when everyone's download/upload speed is whatever the core service's bandwidth to the database backend is!).
In addition to these issues, you want to think about how the data backend will interface with the rest of the system. What is the API you wish you could use? Write it now and think it through to see if it's silly. Once it seems perfect, go back through critically and toss out any elements you don't have an immediate need to actually use right now.
So, that gives us:
Partition scheme
Context of future queries
Volume of data in bytes
Natural state of the different elements of data you want to store
Interface to the overall system you wish you could use
When you start wondering how much data you can put into a database these are the questions you have to start asking yourself.
Now that all that's been written, here is a question that discusses Mnesia in terms of entries, bytes, and how many bytes different types of entries might represent: What is the storage capacity of a Mnesia database?
Mnesia started as an in-memory database, which means it is not designed to store large amounts of data. If you are asking yourself this question, it means you should look at another ejabberd backend.

Implementing offline Item based recommendation using Mahout

I am trying to add recommendations to our e-commerce website using Mahout. I have decided to use an item-based recommender; I have around 60K products, 200K users and 4M user-product preferences. I am looking for a way to provide recommendations by calculating the item similarities offline, so that the recommender.recommend() method would provide results in under 100 milliseconds.
DataModel dataModel = new FileDataModel(new File("/FilePath"));
_itemSimilarity = new TanimotoCoefficientSimilarity(dataModel);
_recommender = new CachingRecommender(new GenericBooleanPrefItemBasedRecommender(dataModel, _itemSimilarity));
I was hoping someone could point me to a method or a blog post to help me understand the procedure and challenges of computing the item similarities offline. Also, what is the recommended way of storing the pre-computed item similarities -- should they go in a separate DB, or in memcache?
PS - I plan to refresh the user-product preference data every 10-12 hours.
MAHOUT-1167, in the (soon-to-be-released) Mahout 0.8 trunk, introduced a way to calculate similarities in parallel on a single machine. I'm just mentioning it so you keep it in mind.
If you are just going to refresh the user-product preference data every 10-12 hours, you are better off having a batch process that stores these precomputed recommendations somewhere and then delivers them to the end user from there. I cannot give detailed information or advice, because this will vary greatly according to many factors, such as your current architecture, software stack, network capacity and so on. In other words, in your batch process, just run over all your users, ask for 10 recommendations for every one of them, then store the results somewhere to be delivered to the end user.
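The batch pattern just described can be sketched as follows. Everything here is illustrative: the recommender is a stub passed in as a function, where in practice the inner call would be Mahout's recommender.recommend(userId, howMany), and the resulting map stands in for whatever store (DB, memcache) delivers results to users.

```java
import java.util.*;
import java.util.function.BiFunction;

// Sketch: periodically ask the recommender for top-N items per user and
// cache the results; user-facing requests then read only from the cache.
public class BatchRecommendations {
    static Map<Long, List<Long>> precompute(Collection<Long> userIds,
            BiFunction<Long, Integer, List<Long>> recommender, int howMany) {
        Map<Long, List<Long>> cache = new HashMap<>();
        for (long userId : userIds) {
            cache.put(userId, recommender.apply(userId, howMany));
        }
        return cache;
    }

    public static void main(String[] args) {
        // Stub recommender: recommends item ids userId+1 .. userId+howMany.
        BiFunction<Long, Integer, List<Long>> stub = (userId, n) -> {
            List<Long> recs = new ArrayList<>();
            for (int i = 1; i <= n; i++) recs.add(userId + i);
            return recs;
        };
        Map<Long, List<Long>> cache = precompute(List.of(1L, 2L), stub, 3);
        System.out.println(cache.get(1L)); // [2, 3, 4]
    }
}
```

Serving a request is then a single key lookup, which comfortably fits the 100 ms budget.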
If you need a response within 100 milliseconds, it's better to do batch processing in the background on your server. That may include the following jobs:
Fetching data from your own user database (60K products, 200K users and 4M user-product preferences).
Prepare your data model based on the nature of your data (number of parameters, size of data, preference values, etc.). This can be an important step.
Run the algorithm on the data model (you need to choose the right algorithm for your requirements). The recommendation data is available at this point.
You may need to process the resultant data as per your requirements.
Store this data in a database (a NoSQL store in all my projects).
The above steps should be running periodically as a batch process.
Whenever a user requests recommendations, your service responds by reading the recommendation data from the pre-calculated DB.
You may look at Apache Mahout (for recommendations) for this kind of task.
These are the steps in brief. Hope this helps!
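As one illustration of the "run the algorithm" step above, here is a self-contained sketch of the Tanimoto (Jaccard) coefficient over boolean preferences -- the same measure the question's TanimotoCoefficientSimilarity uses. This is a from-scratch illustration, not Mahout's implementation: each item is represented by the set of user IDs who expressed a preference for it.

```java
import java.util.*;

// Sketch: Tanimoto similarity between two items is the ratio of users who
// liked both items to users who liked either item.
public class TanimotoSketch {
    static double tanimoto(Set<Long> usersOfA, Set<Long> usersOfB) {
        Set<Long> intersection = new HashSet<>(usersOfA);
        intersection.retainAll(usersOfB);
        int union = usersOfA.size() + usersOfB.size() - intersection.size();
        return union == 0 ? 0.0 : (double) intersection.size() / union;
    }

    public static void main(String[] args) {
        Set<Long> a = Set.of(1L, 2L, 3L);
        Set<Long> b = Set.of(2L, 3L, 4L);
        System.out.println(tanimoto(a, b)); // 0.5  (2 common users / 4 total)
    }
}
```

The offline batch job would compute this for candidate item pairs and persist the top similar items per item.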

calculating lots of statistics on database user data: optimizing performance

I have the following situation (in Rails 3): my table contains financial transactions for each user (users can buy and sell products). Since lots of such transactions occur, I present statistics related to the current user on the website, e.g. current balance, overall profit, how many products were sold/bought overall, averages, etc. (the same also on a per-month/per-year basis instead of overall). Parts of this information are displayed to the user on many forms/pages so that the user can always see his current account information (different bits of statistics are displayed on different pages, though).
My question is: how can I optimize database performance (and is it worth it)? Surely, if the user is just browsing, there is no need to re-calculate all of the values every time a new page is loaded unless a change to the underlying database has been made?
My first solution would be to store these statistics in their own table and update them once a financial transaction has been added/edited (in Rails maybe using :after_update ?). Taking this further, if, for example, a new transaction has been made, then I can just modify the average instead of re-calculating the whole thing.
My second idea would be to use some kind of caching (if this is possible?), or to store these values in the session object.
Which one is the preferred/recommended way, or is all of this a waste of time as the current largest number of financial transactions is in the range of 7000-9000?
You probably want to investigate summary tables, also known as materialized views.
This link may be helpful:
http://wiki.postgresql.org/wiki/Materialized_Views
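The incremental idea from the question (update stored aggregates when a transaction is saved, rather than re-scanning the table on every page load) can be sketched like this. The UserStats class and its field names are hypothetical; in Rails the onNewTransaction equivalent would live in an after_save callback or a database trigger.

```java
// Sketch: a summary row per user, maintained incrementally. Count and total
// are stored; averages are derived, so a new transaction is O(1) to absorb.
public class UserStats {
    long count = 0;
    double total = 0.0;   // e.g. running balance or profit

    void onNewTransaction(double amount) {
        count++;
        total += amount;
    }

    double average() {
        return count == 0 ? 0.0 : total / count;
    }

    public static void main(String[] args) {
        UserStats s = new UserStats();
        s.onNewTransaction(10.0);
        s.onNewTransaction(30.0);
        System.out.println(s.average()); // 20.0
    }
}
```

A materialized view achieves the same effect declaratively; the trade-off is write-time cost versus read-time cost, which at 7000-9000 transactions is small either way.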

Database design - recording transactions

I would appreciate some advice on how to structure a database for the following scenario:
I'm using Ruby on Rails, and so I have the following tables:
Products
Salespeople
Stores
Products are manufactured in batches; each product item has a Batch code, so I think I will also need a table of batches, which refers to a product type.
Batch
In the real world, Salespeople take Product items (from a specific Batch) and in due course issue it to a Store. Importantly, Batches are large, and may be spread across many Salespeople, and subsequently, Stores.
At some future date, I would like to run the following reports:
Show all Batches of a Product issued to a specific Store.
Show all Batches held by a Salesperson (i.e. not yet sold).
Now, I'm assuming I need to build a table of Transactions, something like,
Transaction
salesperson_id
batch_id (through which the product can be determined)
store_id
typeOfTransaction (whether the Salesperson has obtained some stock, or sold some stock)
quantity
By dynamically running through a table of Transaction records, I could derive the above reports. However, this seems inefficient and, over time, increasingly slow.
My question is: what is the best way to keep track of transactions like this, preferably without requiring dynamic processing of all transactions to derive total items from a batch given to a given store.
I don't believe I can just keep a central record of stock, as Product comes in Batches, and Batches are distributed by Salespeople across Stores.
Thank you.
My question is: what is the best way to keep track of transactions like this, preferably without requiring dynamic processing of all transactions to derive total items from a batch given to a given store.
I don't believe I can just keep a central record of stock, as Product comes in Batches, and Batches are distributed by Salespeople across Stores.
Believe it. :-)
In my experience, the only correct way to store this kind of stuff is to break it down into something akin to T-ledger accounting, i.e. debit/credit with a chart of accounts. It requires dynamic processing to derive totals, as you've found out, but anything short of that will lead to tricky queries when dealing with reports and audit trails.
You can speed things up significantly, by maintaining partial or complete aggregate balances using triggers (e.g. monthly stock movements per store). This will reduce the number of rows you need to sum when running larger queries. Which of these you'll want to maintain will depend on your app and your reporting requirements.
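A minimal sketch of the ledger idea, with illustrative names only (not a proposed schema): every stock movement is one immutable row with a signed quantity, and a salesperson's on-hand stock for a batch is derived by summing.

```java
import java.util.*;

// Sketch: debit/credit-style stock ledger. Positive qty = salesperson
// receives stock from a batch; negative qty = issues stock to a store.
public class StockLedger {
    record Entry(long salespersonId, long batchId, int qty) {}

    static int onHand(List<Entry> ledger, long salespersonId, long batchId) {
        return ledger.stream()
                .filter(e -> e.salespersonId() == salespersonId && e.batchId() == batchId)
                .mapToInt(Entry::qty)
                .sum();
    }

    public static void main(String[] args) {
        List<Entry> ledger = List.of(
                new Entry(1, 100, 50),   // salesperson 1 receives 50 from batch 100
                new Entry(1, 100, -20)); // issues 20 to a store
        System.out.println(onHand(ledger, 1, 100)); // 30
    }
}
```

Both reports in the question become filters over the same ledger: "batches held by a salesperson" is the rows summing to a positive balance, and "batches issued to a store" is the negative rows grouped by store.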

Tracking impressions/visits per web page

I have a site with several pages for each company, and I want to show how their page is performing in terms of the number of people visiting the profile.
We have already made sure that bots are excluded.
Currently, we are recording each hit in a DB with either an insert (for the first request in a day to a profile) or an update (for the following requests in a day to a profile). But, given that requests have gone from a few thousand per day to tens of thousands per day, these inserts/updates are causing major performance issues.
Assuming no JS solution, what will be the best way to handle this?
I am using Ruby on Rails, MySQL, Memcache, Apache, HaProxy for running overall show.
Any help will be much appreciated.
Thx
http://www.scribd.com/doc/49575/Scaling-Rails-Presentation-From-Scribd-Launch
You should start reading from slide 17.
I think performance isn't a problem if it's possible to build a solution like this for a website as big as Scribd.
Here are 4 ways to address this, from easy estimates to complex and accurate:
Track only a percentage (10% or 1%) of users, then multiply to get an estimate of the count.
After the first 50 counts for a given page, only update the count 1/13th of the time, by a count of 13. This helps when a few pages are doing most of the counting, while keeping small counts accurate. (Use 13 because it's hard to notice that the increment isn't 1.)
Save exact counts in a cache layer like memcache or local server memory, and flush them all to disk when they hit 10 counts or have been in the cache for a certain amount of time.
Build a separate counting layer that 1) always has the current count available in memory, 2) persists the count to its own tables/database, 3) has calls that adjust both places.
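Option 2 above can be sketched as follows (the threshold of 50 and step of 13 are the values suggested; the class name is made up). Small counts stay exact, and larger counts take roughly 13x fewer writes while the expected value stays correct.

```java
import java.util.Random;

// Sketch: probabilistic counting. Below the threshold, count every hit;
// above it, count only 1 hit in 13 but add 13 when you do.
public class SampledCounter {
    static final int THRESHOLD = 50, STEP = 13;
    long count = 0;
    final Random rng;

    SampledCounter(long seed) { rng = new Random(seed); }

    void hit() {
        if (count < THRESHOLD) count++;                 // small counts stay exact
        else if (rng.nextInt(STEP) == 0) count += STEP; // 1-in-13 chance, +13
    }

    public static void main(String[] args) {
        SampledCounter c = new SampledCounter(42);
        for (int i = 0; i < 100_000; i++) c.hit();
        System.out.println(c.count); // close to 100000, with ~13x fewer writes
    }
}
```

The relative error shrinks as counts grow, which is exactly when exact counting hurts the most.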
