Delphi ReportBuilder: calculate a value across multiple pages of data, not just the first and last page

I am using ReportBuilder within an application built with Delphi. I have a report that displays a list of items, and at the bottom I have some subtotal fields as well as a total field. The subtotals and totals are defined as variables which total up the cost of the individual items.
Unfortunately, both the subtotals and totals only give me calculations for items on the first and last pages of data. Let's say there are 5 pages of data: on page 1 the totals are accurate, and on page 2 they are accurate, but page 3's totals include ONLY the totals from pages 1 and 3, page 4's total includes pages 1 and 4, and so on. I have been playing with the timing settings and moving the code that calculates the total to different events (OnGetText, OnPrint, OnCalc, etc.).
Has anybody ever run into this?

Ok, so I kept working at this and eventually found the problem.
At the report level I changed the report from TwoPass to OnePass. That ended up giving me something very close to what I wanted; I had to write a bit more code to get exactly what I wanted, but changing the number of passes did the trick.
I was trying to display a running total page by page, updating the value as the pages changed.
One pass worked.

Related

How to control infinite looping in Automation Anywhere?

Using Automation Anywhere (AA), I am extracting medicine names and prices from this link:
https://www.chemistwarehouse.com.au/shop-online/238/anti-fungal-amp-warts
It returns 3 pages. While extracting the pattern-based data from the website, the AA code loops through all 3 pages. Upon reaching the last page, i.e. page 3, it does not stop and the loop goes on indefinitely.
I have watched many YouTube videos but can't seem to find the solution.
Since I am new to AA, I am unable to debug the issue.
I have taken some shots in the dark, but all in vain, so I need your help.
I expect that AA should stop after page 'n' and write the result in CSV.
I think maybe there's some logic you're missing.
But first, it looks like the website is partly to blame: if you go to https://www.chemistwarehouse.com.au/shop-online/238/anti-fungal-amp-warts?page=4, even though there aren't 4 pages' worth of results, it still takes you to page 1's results. That's probably why it loops infinitely.
Consider something like this: object-clone the table on each page. Pages 1 and 2 have 8 rows each, but page 3 only has 2. Make a variable, initially true, that gets checked at the top of your loop; if it's false, break the loop. Set it to false whenever a page has fewer than 8 rows (see the sketch below).
This does not solve the problem if your last page also has 8 rows, but you get the idea.
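Roughly, the stop condition described above looks like this. Since AA itself is configured through its visual editor, this is only a pseudocode sketch (written in Ruby syntax to have something concrete), and extract_rows_from_page and append_to_csv are hypothetical placeholders for the object-cloning and CSV-writing commands:

# Pseudocode sketch of the loop-break logic described above; the helper
# calls are hypothetical placeholders for the corresponding AA commands.
keep_going = true
page = 1
while keep_going
  rows = extract_rows_from_page(page) # object-clone the table on this page
  append_to_csv(rows)
  # Full pages have 8 rows; a shorter page means we just processed the last one.
  keep_going = false if rows.size < 8
  page += 1
end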

I can't figure out how to filter or query in Google Sheets without returning a bunch of blank strings appended to actual data

I'm at my wit's end trying to figure out why filtering/querying in Google Sheets is so broken. I have a sheet with some data about practice exams I'm taking, and I'm attempting to pull some of that data into another sheet for calculating statistics. I've made a shareable document with the pertinent parts so you can see what I mean.
My raw data is in the TestScores sheet, and I made a TESTSTATS sheet to test different methods of pulling data from TestScores. In my example I'm only trying to pull unique dates from the range TestScores!B2:B, and I've added a few different methods to do so in TESTSTATS (I removed the equals sign from each formula so that each one can be tested on its own by adding the sign back).
The methods I've tried:
=UNIQUE(TestScores!B2:B)
=UNIQUE(FILTER(TestScores!B2:B, TestScores!B2:B<>""))
=UNIQUE(FILTER(TestScores!B2:B, TestScores!B2:B<>0))
=UNIQUE(FILTER(TestScores!B2:B, NOT(ISBLANK(TestScores!B2:B))))
=UNIQUE(QUERY(TestScores!B2:B, "select B"))
=ARRAY_CONSTRAIN(UNIQUE(QUERY(TestScores!B2:B, "select B")), ROWS(UNIQUE(TestScores!B2:B))+1,5)
You'll see that each one, when activated by adding the = in front of the formula, returns the proper data but also appends 500 rows which look empty but are in fact blank strings (""). This makes it difficult to work with, because a lot of calculations in my sheet depend on one another. I also do not want to specify an explicit end to my ranges and would prefer to keep them open-ended (B2:B instead of B2:B17) so everything updates automatically as new records are added.
What am I doing wrong? Why are the returned data appended with a bunch of empty cells, and why 500 specifically (seems arbitrary considering my source data is 29 or 30 rows depending on whether or not you include headers)?
Starting with only two rows in TESTSTATS, more rows have to be added so there is somewhere to place the output. It seems Google chooses to do so 500 rows at a time (counting from the last required cell); "why?" would be a question for Google.
If you know 14 rows are required for the output and you increase the size of TESTSTATS to 16, no more rows will be added. Since you want room for expansion, you can't stop at 16 and avoid the issue forever, but you could allow some headroom, say 30 rows, and delete the few extras. If 30 later becomes insufficient (when the sheet shoots up to, say, 540 rows), delete the rows that are not required but set the sheet size to, say, 60 rows, and so on.

Highcharts with a large amount of data (complex structure) not working

I have a demo project to show inventory trends. The inventory of each product changes frequently, so there may be hundreds of inventory points in one day. Now I need to show the inventory report for a week, a month or more, and the problem appears: I have two series, and one line disappears when the number of points reaches roughly 3,000; the chart displays nothing at all when the amount of data is larger (such as 7,000 points or more)!
The demo is here CODE:demo here. The format of the data points is the same as in the demo; the error occurs when the number of points is large, such as 4,000 or more. You can mock up a large data set in this demo to reproduce the problem.
Actually, I have seen millions of points displayed fine in other people's demos, so I tried to minimise the size of the data points, but that failed and the problem still exists. How can I solve it?
You need to increase the turboThreshold parameter (plotOptions.series.turboThreshold, which defaults to 1000 points per series), but for huge data sets we recommend using Highstock, whose dataGrouping module allows you to increase performance.

Summing up data from tables Ruby on Rails

What I have is a website where I record the collected data for every shift in a factory's production lines, such as the quantity in tonnes. Each Day contains 3 shifts (morning, late and night), which live in the Shift table and are visible in the shifts index view. I want the quantity in tonnes of a day's three shifts to be summed up and shown on another page, the Days index page, so I can see the combined total output of the day.
For example, for the quantity in tonnes I would like 7 + 10 + 12 (inputs I have already added through a form to the shifts index) to be summed up and appear automatically, without my interfering, as 29 in the quantity-in-tonnes column of the Days index page.
How is that possible to do? I can't seem to figure out how to write the code so that it loops over all the inputs and keeps giving me the summed totals.
Let me know if you need to see any part of my code, or if there is any more info I could add to help you understand.
Have a look at the groupdate gem; it allows you to group by day, week, hour of the day, etc.
Some code from your end would help, but here's an example use, getting weekly revenue for the past 90 days:
time_range = 90.days.ago..Time.zone.now # the last 90 days, up to now
total = Sales.where('status > 2').group_by_week(:date_scheduled, Time.zone, time_range).sum(:price) # hash of week start => summed price
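
For the specific case in the question (summing a day's three shifts), a plain ActiveRecord sum may already be enough. Here is a minimal sketch, assuming a Day model that has_many :shifts and a quantity_in_tonnes column on Shift; the model and column names are guesses based on the question, not your actual schema:

class Day < ApplicationRecord
  has_many :shifts

  # Total output of the day, summed from its shifts in the database.
  def total_quantity_in_tonnes
    shifts.sum(:quantity_in_tonnes)
  end
end

In the Days index view you can then display day.total_quantity_in_tonnes for each day, which for shifts of 7, 10 and 12 tonnes returns 29, and it updates automatically as new shifts are added.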

How can I get a report of all work items added to an iteration after a given date?

I need to produce a report, similar to the Unplanned Work report included with the MS Agile process template, but which lists all work items that were added to an iteration after a given date.
The work item may have been created before that date, so I can't use the created date.
Can anyone give any guidance on how I can go about this? If I can achieve it in Excel then that would be perfect...
Thanks.
OK, this took some work, but it was interesting enough to put some effort into ...
The first screenshot is a PivotTable connected to the Analysis Services cube. The leftmost column shows the ID of a work item. The second column shows the ChangeDate. In the row header I have included every iteration that I am interested in. What you see happening in the Excel sheet is items moving from one sprint to another. For example, work item 27 was created for iteration 1 on 14-3-2011. On 13-4-2011 it was moved to iteration 2, and on 12-5-2011 it was moved to iteration 3, etc.
If I narrow the filter down to a specific iteration, I can actually see items entering and leaving the iteration. If I also change the ChangeDate filter, I can focus on items entering after a specific date, as you requested. Again, you can see item 27 enter iteration 2 on 13-4 and leave on 12-5. You can juggle the columns around to get the view you want.
Finally, these are the options I used to get this view from TFS.
Hope this exceeds your expectations :-)
