BI Publisher with Complex Report

I have a report with the following metrics:
Item, Total item quantity in month, Total item quantity in previous week, Total item quantity in 2 previous weeks, Total item quantity in 3 previous weeks
I wrote this matrix as a single query, but it is very complex and takes a long time to execute.
Is there a better solution?

Grouping and filtering in XSLT takes more time and memory than filtering in SQL. Oracle recommends doing this kind of complex operation in the data model (SQL in your case) before printing the results in the report.
You should be able to tune your SQL for performance. Maybe you can ask a DBA for help, or post the SQL under the oracle dba tag here.
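If the slow query computes each period with a separate self-join or subquery, conditional aggregation may collapse it into a single pass over the data. A minimal sketch, assuming a hypothetical sales table with item_id, quantity and txn_date columns, and treating each period as a trailing window ending today (Oracle syntax):
-- All four totals in one scan; each SUM counts only the rows whose
-- txn_date falls inside that column's window.
SELECT item_id,
       SUM(CASE WHEN txn_date >= TRUNC(SYSDATE, 'MM') THEN quantity ELSE 0 END) AS qty_month,
       SUM(CASE WHEN txn_date >= TRUNC(SYSDATE) - 7  THEN quantity ELSE 0 END) AS qty_prev_week,
       SUM(CASE WHEN txn_date >= TRUNC(SYSDATE) - 14 THEN quantity ELSE 0 END) AS qty_prev_2_weeks,
       SUM(CASE WHEN txn_date >= TRUNC(SYSDATE) - 21 THEN quantity ELSE 0 END) AS qty_prev_3_weeks
  FROM sales
 WHERE txn_date >= LEAST(TRUNC(SYSDATE, 'MM'), TRUNC(SYSDATE) - 21)  -- widest window needed
 GROUP BY item_id;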


Rails: select records with maximum date

In my app users can save sales reports for given dates. What I want to do now is to query the database and select only the latest sales reports (all those reports that have the maximum date in my table).
I know how to sort all reports by date and select the one with the highest date; however, I don't know how to retrieve multiple reports that share that highest date.
How can I achieve that? I'm using Postgres.
Something like this?
SalesReport.where(date: SalesReport.maximum('date'))
EDIT: Just to bring visibility to @muistooshort's comment below, you can reduce the two queries to a single query (with a subselect), using the following form:
SalesReport.where(date: SalesReport.select('MAX(date)'))
If there is a lot of latency between your web host and your database host, this could halve execution times. It is almost always the preferred form.
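For reference, the subselect form issues a single statement along these lines (assuming the conventional sales_reports table on Postgres):
-- One round trip: MAX(date) is evaluated as a subquery of the outer SELECT.
SELECT "sales_reports".*
  FROM "sales_reports"
 WHERE "sales_reports"."date" IN (SELECT MAX(date) FROM "sales_reports");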
You can get the maximum date to search for matching reports:
max_date = Report.maximum('date')
reports = Report.where(date: max_date)

Periodic snapshot fact table with large dimensions

I have been asked to model a star schema.
I have 3 dimensions:
Date (day, month, year, week, quarter, ...)
Place (500 distinct values)
Product (80k different products)
The main question is how many items (products) are stored at the end of a day in every place.
After some study of dimensional modeling, I think I should implement a periodic snapshot fact table. However, reading through the Kimball docs, I noticed that a periodic snapshot demands an entry for every combination of the dimensions. This means I should add 40M rows every day (80k * 500).
Knowing that the products are (really) slow movers and that many places store zero products for long periods, this sounds like extreme overkill.
FYI, the transactions in the source DB amount to 150k rows after three years.
So should I really add 40M rows every day, or could I just add the non-empty stores with their products specified? Also, if for whatever reason all stores are empty one day, should I make an entry for that day (with dimensions N/A for store and product)?
You modeled it correctly. It depends on the specifications, but normally you store only the products that are present in a location (you do not store zeroes), which could yield a number substantially lower than the 80k maximum.
If you want to reduce the numbers further, you could keep the last N days in the main "hot" fact table and move older data into a "cold" table: for example, store the last 10 daily snapshots, then only monthly snapshots beyond that.
Do not exclude the possibility of calculating the snapshot on the fly in the reporting system; depending on your environment this can be easy (in MDX or DAX, for example, it is). Mixed solutions are also possible (e.g. only the last month calculated on the fly).
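As a concrete sketch of the sparse approach (every table and column name here is hypothetical):
-- One row per day/place/product, but only where stock is on hand;
-- an absent row means zero stock for that combination.
CREATE TABLE fact_stock_snapshot (
    date_key    INTEGER NOT NULL REFERENCES dim_date (date_key),
    place_key   INTEGER NOT NULL REFERENCES dim_place (place_key),
    product_key INTEGER NOT NULL REFERENCES dim_product (product_key),
    qty_on_hand INTEGER NOT NULL,
    PRIMARY KEY (date_key, place_key, product_key)
);
-- Daily load: insert only non-zero positions.
INSERT INTO fact_stock_snapshot (date_key, place_key, product_key, qty_on_hand)
SELECT :today_key, s.place_key, s.product_key, s.qty_on_hand
  FROM current_stock s
 WHERE s.qty_on_hand > 0;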

How to get the daily data of "Microsoft.VSTS.Scheduling.CompletedWork"?

We need to get daily data from the "Microsoft.VSTS.Scheduling.CompletedWork" field (which is detailed in Workload, scheduling and time tracking field references). However, when I get data from the Analysis database, I find that it only records the most recent value, so I can't get the historical data.
For example, for the task with ID 3356, "CompletedWork" was 3 hours on 2016/8/4, and I retrieved exactly that 3-hour value from the Analysis database on the following day, 2016/8/5.
Then on 2016/8/5 I updated "CompletedWork" from 3 hours to 4 hours, and I got exactly the 4-hour value from the Analysis database on the following day, 2016/8/6. However, the 3-hour value from 2016/8/4 was lost. So, how can I get the historical data of "Microsoft.VSTS.Scheduling.CompletedWork"?
First of all, it's important to understand that CompletedWork is a cumulative data field. So when one user enters 3 and another enters 4, the total number of hours worked on the field is 4, not 7.
The warehouse has a granularity of a day and keeps that data in the cube, though the relational warehouse tables store all the changes to the reportable fields on a per-revision basis. You can't easily query this data using the cube or Excel Power Pivot, and the revisions are lost in the Dim* and Fact* tables, but you can write a SQL query against Tfs_Warehouse and iterate through the tables containing the work item data (tbl_workitems[are|were|latest]). This is much slower and much harder to build, unfortunately.
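As a very rough sketch of that SQL route (table and column names vary across TFS versions, so treat every identifier below as a placeholder to check against your own schema):
-- Combine current and historical revisions, then read CompletedWork
-- per revision for one work item, oldest revision first.
SELECT wi.ID, wi.Rev, wi.ChangedDate, wi.CompletedWork
  FROM (SELECT ID, Rev, ChangedDate, CompletedWork FROM tbl_workitemsAre
        UNION ALL
        SELECT ID, Rev, ChangedDate, CompletedWork FROM tbl_workitemsWere) wi
 WHERE wi.ID = 3356
 ORDER BY wi.Rev;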
Your other alternative is to use the TFS Client Object Model and query the WorkItemStore object directly. You'll be able to query all work items of interest and iterate through them and their revisions. The work item API is relatively easy to use and is well documented.
If you're on TFS 2015 you can also use the new REST API to query work item data and revisions.

Algorithm for tracking changes in value over time

I am writing a Rails app that deals with product inventory. I would like to include the following features, but am struggling to develop an efficient algorithm:
View stock history (how many were in stock on each date)
Quantity removed from warehouse, and quantity added to warehouse over specific periods of time
Amount of time the product was out of stock in any given period
My questions are as follows:
What is the best way of tracking changes? In addition to my Products table, should I create another table called HistoricProductQuantities, and insert a new record each time there is a change in the quantity?
What number should I track? The historic stock quantity (i.e. 50 in stock on this day, 24 in stock on that day), or the CHANGE in stock quantity, i.e. -5 (5 sold) or 15 (15 added to inventory)? Or do I track both in separate tables?
Thanks for your help.
First of all, I recommend implementing date dimensions in your application, as it seems like you will be doing a lot of time-related calculations. Search Google for date dimensions, as they're beyond the scope of your question. That said, I believe it will be of great benefit to your app to implement and use date dimensions.
As far as your direct questions go:
What is the best way of tracking changes? In addition to my Products table, should I create another table called HistoricProductQuantities, and insert a new record each time there is a change in the quantity?
Yes, you could do this. I would probably call it HistoricProductSnapshot and keep track of product activity in there on a daily basis. With this information, as well as date dimensions, you could do calculations such as "how many of product X did we have 5 days ago, or a month ago?", and so on.
What number should I track? The historic stock quantity (i.e. 50 in stock on this day, 24 in stock on that day), or the CHANGE in stock quantity i.e. -5 (5 sold) or 15 (15 added to inventory)? Or do I track both in separate tables?
I do not have experience writing inventory control software, but I believe that with the snapshot table I mentioned above you would only have to keep track of quantities per day. The change in product counts could then be calculated from your snapshot table. You could, for example, have a function that outputs the product amounts in a given time range as an array (see the sketch after this answer). Example: from March 1 to March 7, the stock amounts for product Y were [45, 40, 39, 27, 22, 45, 44].
Hope that helps. As I said, I am not a product inventory guy, but I have worked with point-of-sale systems, and the approach above should give you a good enough start for what you are trying to do.
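A minimal sketch of that snapshot table and range query, in plain SQL with hypothetical names:
-- One row per product per day; changes are derived rather than stored.
CREATE TABLE historic_product_snapshots (
    product_id  INTEGER NOT NULL,
    snapshot_on DATE    NOT NULL,
    quantity    INTEGER NOT NULL,
    PRIMARY KEY (product_id, snapshot_on)
);
-- Stock amounts for product Y from March 1 to March 7, in date order.
SELECT snapshot_on, quantity
  FROM historic_product_snapshots
 WHERE product_id = :product_y_id
   AND snapshot_on BETWEEN DATE '2016-03-01' AND DATE '2016-03-07'
 ORDER BY snapshot_on;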
This gem could be useful for tracking changes in models: https://github.com/collectiveidea/audited
Keep the data raw. I would personally create a new data entry every day, recording how many items you have in stock that day. Or you can make the interval much shorter, such as every 12 hours.
For our particular use case:
We had a table called Days, which had a many-to-many relationship with products, and each such relationship had a value called quantity (to keep track of the quantity of each product per day). Additionally, each relationship had its own one-to-many relationship with transactions, holding entries for the time of each transaction and the remaining stock.
I would personally advise you to use the quantity of stock as the raw data, as it will let you derive information such as how many items were removed during a certain transaction, and when the item went out of stock and came back in stock, all from the data. When you have data you need to perform statistical calculations on, it's best to store it as raw values (the quantity of the item).
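For instance, reusing the hypothetical snapshot table sketched in the previous answer, a window function derives the daily change from the raw quantities:
-- Negative deltas are removals, positive deltas are additions;
-- quantity = 0 marks the days the product was out of stock.
SELECT snapshot_on,
       quantity,
       quantity - LAG(quantity) OVER (PARTITION BY product_id
                                      ORDER BY snapshot_on) AS daily_change
  FROM historic_product_snapshots
 WHERE product_id = :product_y_id
 ORDER BY snapshot_on;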

InfluxDB performance

In my case, I need to capture 15 performance metrics for devices and save them to InfluxDB. Each device has a unique device ID.
Metrics are written into InfluxDB in the following way; here I only show one as an example:
new Serie.Builder("perfmetric1")
    .columns("time", "value", "id", "type")
    .values(getTime(), getPerf1(), getId(), getType())
    .build()
Writing data is fast and easy. But I saw bad performance when I ran a query. I'm trying to get all 15 metric values for the last hour:
select value from perfmetric1, perfmetric2, ..., perfmetric15
where id='testdeviceid' and time > now() - 1h
For an hour, each metric has 120 data points, so in total that's 1800 data points. The query takes about 5 seconds on a c4.4xlarge EC2 instance when it's idle.
I believe InfluxDB can do better. Is this a problem with my schema design, or is it something else? Would splitting the query into 15 parallel calls be faster?
As @valentin's answer says, you need to build an index on the id column for InfluxDB to perform these queries efficiently.
In 0.8 stable you can do this "indexing" using continuous fanout queries. For example, the following continuous query will expand your perfmetric1 series into multiple series of the form perfmetric1.id:
select * from perfmetric1 into perfmetric1.[id];
Later you would do:
select value from perfmetric1.testdeviceid, perfmetric2.testdeviceid, ..., perfmetric15.testdeviceid where time > now() - 1h
This query will take much less time to complete since InfluxDB won't have to perform a full scan of the timeseries to get the points for each testdeviceid.
Build an index on the id column. It seems that the engine uses a full scan of the table to retrieve the data. By splitting your query into 15 threads, the engine will run 15 full scans and performance will be much worse.
