I'm building a data warehouse. Each fact has its timestamp. I need to create reports by day, month, and quarter, but also by hour. Looking at the examples (e.g. on etl-tools.info), I see that dates tend to be saved in dimension tables.
But I think that makes no sense for time: the dimension table would grow and grow. On the other hand, a JOIN with a date dimension table is more efficient than using date/time functions in SQL.
What are your opinions/solutions?
(I'm using Infobright)
Kimball recommends having separate time and date dimensions:
design-tip-51-latest-thinking-on-time-dimension-tables
In previous Toolkit books, we have recommended building such a dimension with the minutes or seconds component of time as an offset from midnight of each day, but we have come to realize that the resulting end user applications became too difficult, especially when trying to compute time spans. Also, unlike the calendar day dimension, there are very few descriptive attributes for the specific minute or second within a day. If the enterprise has well defined attributes for time slices within a day, such as shift names, or advertising time slots, an additional time-of-day dimension can be added to the design where this dimension is defined as the number of minutes (or even seconds) past midnight. Thus this time-of-day dimension would either have 1440 records if the grain were minutes or 86,400 records if the grain were seconds.
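To make the sizes concrete, here is a minimal sketch of how such a minute-grain time-of-day dimension could be generated. This assumes PostgreSQL, and the table and column names (dim_time_of_day, time_key, etc.) are illustrative, not from any source above:

-- Populate a minute-grain time-of-day dimension (1440 rows, one per minute).
CREATE TABLE dim_time_of_day (
    time_key       integer PRIMARY KEY,  -- minutes past midnight, 0..1439
    hour_of_day    smallint NOT NULL,
    minute_of_hour smallint NOT NULL,
    am_pm          char(2)  NOT NULL
);

INSERT INTO dim_time_of_day
SELECT m                        AS time_key,
       m / 60                   AS hour_of_day,
       m % 60                   AS minute_of_hour,
       CASE WHEN m < 720 THEN 'AM' ELSE 'PM' END AS am_pm
FROM generate_series(0, 1439) AS m;

A second-grain version would simply use generate_series(0, 86399) and divide by 3600 and 60 instead.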
My guess is that it depends on your reporting requirements.
If you need something like
WHERE "Hour" = 10
meaning every day between 10:00:00 and 10:59:59, then I would use the time dimension, because it is faster than
WHERE date_part('hour', TimeStamp) = 10
because the date_part() function will be evaluated for every row.
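To illustrate the difference, here is a sketch of the two query shapes; fact_sales and the dimension names are assumptions for the example, not from the question, and how much the dimension lookup helps depends on the engine:

-- Dimension lookup: the predicate hits a small dimension table,
-- and the join key on the fact table can be indexed.
SELECT f.*
FROM   fact_sales f
JOIN   dim_time_of_day t ON t.time_key = f.time_key
WHERE  t.hour_of_day = 10;

-- Function on the fact column: date_part() runs for every row,
-- and a plain index on "TimeStamp" cannot serve this predicate.
SELECT *
FROM   fact_sales
WHERE  date_part('hour', "TimeStamp") = 10;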
You should still keep the TimeStamp in the fact table in order to aggregate over boundaries of days, like in:
WHERE TimeStamp between '2010-03-22 23:30' and '2010-03-23 11:15'
which gets awkward when using dimension fields.
Usually, time dimension has a minute resolution, so 1440 rows.
Time should be a dimension in data warehouses, since you will frequently want to aggregate over it. You could use a snowflake schema to reduce the overhead. In general, as I pointed out in my comment, hours seem like an unusually high resolution. If you insist on them, making the hour of the day a separate dimension might help, but I cannot tell you whether that is good design.
I would recommend having separate dimensions for date and time. The date dimension would have one record for each date within an identified valid range of dates, for example 01/01/1980 to 12/31/2025.
The separate time dimension would have 86,400 records, with each second having a record identified by the time key.
In the fact records, where you need both date and time, add both keys as references to these conformed dimensions.
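A sketch of what such a fact table could look like (PostgreSQL syntax; all names here are illustrative assumptions):

CREATE TABLE fact_event (
    date_key integer NOT NULL REFERENCES dim_date (date_key),  -- one row per calendar date
    time_key integer NOT NULL REFERENCES dim_time (time_key),  -- seconds past midnight, 0..86399
    measure  numeric NOT NULL
);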
CALL ga.timetree.single({time: 1463659567468, create: true})
https://github.com/graphaware/neo4j-timetree
https://graphaware.com/neo4j/2014/08/20/graphaware-neo4j-timetree.html
The above link says that time is in long format YYYYMMDDHHmmss, but the time parameter doesn't make any sense to me, and seemingly random nodes are getting generated in Neo4j.
What does the time parameter hold and what is the meaning of it?
The time parameter is a millisecond timestamp, i.e. milliseconds elapsed since the UNIX epoch. This is an extremely common way of storing time-related data; you can find it in use in nearly every digital system.
The timestamp cited here represents "2016-05-19 12:06:07". The timetree built starts from a root (this is a modeling convenience), and then its child is the year (2016) followed by the month (5), then the date of the month (19). Looks like it didn't automatically create any nodes for time resolutions beyond that.
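As a quick sanity check, you can convert the millisecond value yourself; for example, in PostgreSQL (dividing by 1000.0 turns milliseconds into seconds):

-- Epoch milliseconds to a readable timestamp.
SELECT to_timestamp(1463659567468 / 1000.0);
-- => 2016-05-19 12:06:07.468+00 (UTC)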
Keep in mind that now that Neo4j has native temporal values that you can use in Cypher and store as properties (as well as index), time trees are going to be less useful, as you can always do index lookups on indexed temporal properties.
There are still some cases where time trees can be very useful, however, such as when you're searching for events that happened within some unit of time regardless of its parent units: finding events that happened on Mondays regardless of month, or in January regardless of year, and so forth.
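For comparison, the relational equivalent of "events on Mondays, regardless of month" has to apply a function across rows; a sketch in PostgreSQL, where the events table and ts column are assumptions:

SELECT *
FROM   events
WHERE  extract(isodow FROM ts) = 1;  -- isodow: 1 = Monday, 7 = Sunday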
Which way of storing date and time will provide the quickest search if I plan to search for each part separately? I always store a datetime type, but maybe it would be more pragmatic to store them in separate columns (e.g. the date and time database types) for this purpose?
I haven't run any benchmarks, but I could only see the need to save them as different columns if you're planning on doing massive - and I mean massive - queries on the date alone.
timestamp (with time zone)    8 bytes    both date and time, with time zone
date                          4 bytes    date (no time of day)
time (without time zone)      8 bytes    time of day (no date)
time (with time zone)        12 bytes    time of day only, with time zone
So if you store them separately, it takes at least 4 more bytes (4 + 8 = 12 vs. 8), or 8 more (4 + 12 = 16) if you keep the time zone on the time field.
If you're going to have a massive number of rows to query, and you'll be querying only by date, then it might make sense; otherwise I don't think so. In fact, it might still make no sense, since with a massive number of rows the extra columns also use more space, possibly offsetting any perceived advantage.
I have used integer fields to represent dates down to month granularity (e.g. 201704 as an index), but only because the records were unique monthly records and there is no date type for year+month alone; otherwise PostgreSQL is already quite optimized for handling dates and timestamps.
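If you do keep a single timestamp column, one way to make date-only queries fast without a second column is an expression index; a sketch assuming PostgreSQL and a timestamp (without time zone) column, with illustrative names:

CREATE TABLE events (
    id bigserial PRIMARY KEY,
    ts timestamp NOT NULL
);

-- (ts::date) is immutable for timestamp without time zone,
-- so it can back an expression index.
CREATE INDEX idx_events_ts_date ON events ((ts::date));

-- This predicate can use the index:
SELECT count(*) FROM events WHERE ts::date = DATE '2017-04-01';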
I have been asked to model a star schema.
I have 3 dimensions:
Date (day, month, year, week, quarter, ...)
Place (500 distinct values)
Product (80k different products)
The main question is how many items (products) are stored at the end of a day in every place.
After some study of dimensional modeling, I think I should implement a periodic snapshot table. However, reading through the Kimball docs, I noticed that a periodic snapshot demands an entry for every combination of the dimensions. This means I would add 40M rows every day (80k * 500).
Knowing that the products are real slow movers and that many places store zero products for long periods, this sounds like extreme overkill.
FYI the transactions in the source DB are 150k rows after three years.
So should I really add 40M rows every day, or could I just add the non-empty stores with their products specified? Also if for whatever reason one day all stores are empty, should I make an entry for that day (with dimensions N/A for store and product)?
You modeled it correctly. It depends on the specifications, but normally you store only the products that are present in a location (you do not store zeroes), which could yield a number substantially lower than the maximum 80k.
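A sketch of such a sparse daily load (PostgreSQL; the table and column names are illustrative assumptions):

-- Insert only the non-zero place/product combinations for today.
INSERT INTO fact_inventory_snapshot (snapshot_date, place_key, product_key, qty_on_hand)
SELECT CURRENT_DATE, place_key, product_key, qty_on_hand
FROM   inventory_current
WHERE  qty_on_hand > 0;  -- empty combinations are simply absent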
If you want to reduce the numbers further, you could store only the last N daily snapshots and move older data into a "cold" table: for example, keep the last 10 daily snapshots, plus monthly snapshots, in the main "hot" fact table.
Do not exclude the possibility of calculating the snapshot on the fly in the reporting system; depending on your environment, it could be easy (in MDX or DAX, for example, it is). Mixed solutions are also possible (e.g. only the last month calculated on the fly).
Dataset: I'm given the number of minutes individual customers use a product each day and am trying to cluster this data in order to find common usage patterns.
My question: How can I format the data so that, for example, a power user with high levels of use for a year looks the same as a different power user who has only been able to use the device for a month before I ended data collection?
So far I've turned each customer into an array where each cell is the number of minutes used that day. This array starts when the user first uses the product and ends after the user's first year of use. All entries in the cells must be double values (e.g. 200.0 minutes used) for the clustering model. I've considered setting all cells/days after the last day of data collection to either -1.0 or NULL. Is either of these a valid approach? If not, what would you suggest?
For the problem where you want both users to look alike (one that used the product a lot every day for a year, and the other that used it a lot for one month), create a new feature whose value is:
avg_usage per time_bin
time_bin can be a month, a day or another time bin which best fits your needs.
This way, a user who uses a product, let's say, 200 minutes per day for one year will get:
200 * 30 * 12 / 12 = 6000 minutes per month
and the other user, who joined just last month with the exact same usage, will also get:
200 * 30 * 1 / 1 = 6000 minutes per month.
This way it doesn't matter when you started to use the product; the only thing that matters is the usage rate.
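If the daily minutes live in a table, the same normalization can be computed directly; a sketch in SQL (PostgreSQL), where daily_usage and its columns are assumptions:

-- Average minutes per month per user, independent of tenure.
SELECT user_id,
       SUM(minutes_used)::numeric
         / COUNT(DISTINCT date_trunc('month', usage_date)) AS avg_minutes_per_month
FROM   daily_usage
GROUP  BY user_id;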
An important thing you might take into consideration is that products may go unused for a while. For example, I own a computer but I'm away on vacation; the days I didn't use the computer (arguably) say nothing about my general usage of the product. So, based on your data, your product, and your intuition, you might consider removing gaps like that from the calculation.
The amount of time a user has used your product could be a signal in itself, but if a user indeed started only recently and is still using the product today, that may be something you need to take into account, and this average-binning technique can help.
What I want to do is very simple but I'm trying to find the best or most elegant way to do this. The Rails application I'm building now will have a schedule of daily classes. For each class the fields relevant to this question are:
Day of the week
Starting time
Ending time
A single entry could be something such as:
day of week: Wednesday
starting time: 10:00 am
ending time: Noon
Also I must mention that it's a bilingual Rails 2.2 app and I'm using the native Rails i18n feature. I actually have several questions.
Regarding the day of the week, should I create an extra table with a list of days, or is there a built-in way to create that list on the fly? Keep in mind these days of the week will have to be rendered in English or Spanish in the schedule view, depending on the locale variable.
While querying the schedule I will need to group and order the results by weekday, from Monday to Sunday, and of course order the classes within each day by starting time.
Regarding the starting time and ending time of each class would you use datetime fields or integer fields? If the latter how would you implement this exactly?
Looking forward to reading the different suggestions you guys will come up with.
I would just store the day of the week as an integer: 0 => Monday ... 6 => Sunday (or any way you want, e.g. 0 => Sunday). Then store the start time and end time as Time.
That would make grouping really easy. All you would have to do is sort by the day of the week and the start time.
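In SQL terms that is a single ordered query; a sketch where the table and column names are assumptions:

SELECT *
FROM   daily_classes
ORDER  BY day_of_week, start_time;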
You can display this in multiple ways, but here is what I would do.
Have finders like @sunday_classes = DailyClass.find_sunday_classes that return all the classes for Sunday sorted by start time. Then repeat for each day.
def self.find_sunday_classes
  find_all_by_day_of_week(6, :order => 'start_time')  # 6 => Sunday under the 0 => Monday mapping
end
Note: the dynamic finder matches the column name, so if you call the column day_of_week_id it becomes find_all_by_day_of_week_id; that's just preference in how you want to name the column.
If you want the full week, then call all seven from the controller and loop through them in the view. You could even create detail pages for each day.
Translation is the only tricky part. You can create a helper function that takes an integer and returns the text for the appropriate day of the week based on the locale.
That's very basic. Nothing complicated.
If your data is a Time then I would store that as a Time - otherwise you will always have to convert it out of the database when you do date and time related operations on it. The day is redundant data, as it will be part of the time object.
This should mean that you don't need to store a list of days.
If t is a time then
t.strftime('%A')
will always give you the day as a string in English. This could then be translated by i18n as required.
So you only need to store starting time and ending time, or starting time and duration. Both should be equivalent. I would be tempted to store ending time myself, in case you need to do data manipulations on ending times, which therefore won't have to be calculated.
I think most of the rest of what you describe should also fall out of storing time data as instances of Time.
Ordering by weekday and time will just be a matter of ordering by your time column, i.e.
DailyClass.find(:all, :conditions => ['whatever'], :order => 'starting_time')
Grouping by day is a little more tricky. However, there are good write-ups on how to group by week, and grouping by day will be analogous.
If you are dealing with non-trivial volumes of data, it may be better to do it in the database with find_by_sql; that will depend on your database's time and date functionality, but again, storing the data as a Time will also help you here. For example, in PostgreSQL (which I use), getting the week of a class is
date_trunc('week', starting_time)
which you can use in a GROUP BY clause, or as a value to use in some loop logic in Rails.
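Putting it together, a sketch of the weekly grouping query (PostgreSQL; daily_classes is an assumed table name):

SELECT date_trunc('week', starting_time) AS week_start,
       count(*)                          AS class_count
FROM   daily_classes
GROUP  BY date_trunc('week', starting_time)
ORDER  BY week_start;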
Re days-of-week (DOW): if you need to have e.g. classes that meet 09:00-10:00 on MWF, then you could either use a separate table for the days a class meets (keyed by both class ID and DOW) or be evil (i.e. non-normalized) and keep the equivalent of an array of DOW in each class. The classic argument is this:
The separate table can be indexed in a way to support either class-oriented or DOW-oriented selects, but takes a bit more glue to put the entire picture together for a class.
The array-of-DOW is simpler to visualize for beginning programmers and slightly simpler to code about, but means that reasoning about DOW requires looking at all classes.
If this is only for your personal class schedule, do what gets you the value you're looking for, and live with the consequences; if you're trying to build a real system for multiple users, I'd go with a separate table (a sketch follows below). All those normalization rules are there for a reason.
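A sketch of the separate-table design (PostgreSQL; the table names and the 0 = Monday convention are assumptions):

CREATE TABLE class_meeting_days (
    class_id    integer  NOT NULL REFERENCES classes (id),
    day_of_week smallint NOT NULL CHECK (day_of_week BETWEEN 0 AND 6),  -- 0 = Monday
    PRIMARY KEY (class_id, day_of_week)
);

-- All classes that meet on Wednesday (2 under the 0 = Monday convention):
SELECT c.*
FROM   classes c
JOIN   class_meeting_days d ON d.class_id = c.id
WHERE  d.day_of_week = 2;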
As far as (human-readable) DOW names, that's a presentation-layer issue, and shouldn't be in the core concept of DOW. (Suppose you decided to move to Montreal, and needed French? That should be another "face" and not a change to the core implementation.)
As for starting/ending times, again the issue is your requirements. If all classes begin and end at hour (x:00) boundaries, you could certainly use 0..23 as the hours of the day. But then your life would be miserable as soon as you had to accommodate that 45-minute seminar. As the old commercial said, "Pay me now or pay me later."
One approach would be to define your own ClassTime concept and partition all reasoning about times to that class. It could start with a simplistic representation (integral hours 0..23, or integral minutes after midnight 0..1439) and then "grow" as needed.