TFS aggregator not working as expected

I have tried to use TFS Aggregator to simply total up a field:
<?xml version="1.0" encoding="utf-8"?>
<AggregatorItems tfsServerUrl="[server Url]">
  <AggregatorItem operationType="Numeric" operation="Sum" linkType="Self" workItemType="Task">
    <TargetItem name="Total Work"/>
    <SourceItem name="Total Work"/>
    <SourceItem name="Completed Work"/>
  </AggregatorItem>
</AggregatorItems>
What I want is for Total Work to start at zero (I have a default rule for that), and when someone logs time in Completed Work, it should simply be added to the total.
But it seems to go haywire: when I refresh the page it totals again and again.
Is it because I am using Total Work as a SourceItem as well as the TargetItem?
Every time I refresh the task the value gets bigger and bigger. I only want it to total when someone enters a value in Completed Work.

As I recall, the way I made TFS Aggregator tell the difference between a user change event and one fired by an update from the aggregator itself was to re-run the aggregation and see if the value is the same.
If the aggregation sees that a change is needed, it updates the work item.
Since your target is also a source, that check breaks.
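If the intent is simply for Total Work to track what is logged in Completed Work, one way to avoid the feedback loop, sketched from the config above, is to drop the target field from the source list:
<AggregatorItem operationType="Numeric" operation="Sum" linkType="Self" workItemType="Task">
  <TargetItem name="Total Work"/>
  <!-- only Completed Work as a source; the target no longer feeds itself -->
  <SourceItem name="Completed Work"/>
</AggregatorItem>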


Circumventing negative side effects of default request sizes

I have been using Reactor pretty extensively for a while now.
The biggest caveat, which has come up multiple times, is default request sizes / prefetch.
Take this simple code for example:
Mono.fromCallable(System::currentTimeMillis)
    .repeat()
    .delayElements(Duration.ofSeconds(1))
    .take(5)
    .doOnNext(n -> log.info(n.toString()))
    .blockLast();
To the eye of someone who has worked with other reactive libraries before, this piece of code should log the current timestamp once per second, five times.
What really happens is that the same timestamp is returned five times, because delayElements doesn't send one request upstream for every elapsed duration; it sends 32 requests upstream by default, replenishing the number of requested elements as they are consumed.
This wouldn't be a problem if the system property for overriding the default prefetch (reactor.bufferSize.x) weren't capped at a minimum of 8.
This means that if I want to write truly reactive code like the above, I have to set the prefetch to one in every transformation. That sucks.
Is there a better way?
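One way to sidestep prefetch in this particular example (a sketch for illustration, not a general fix for the request-size problem) is to compute the timestamp at emission time rather than at request time:
import java.time.Duration;
import reactor.core.publisher.Flux;

// Flux.interval ticks once per second; the timestamp is computed in map()
// at emission time, so it no longer matters how early the ticks are requested.
Flux.interval(Duration.ofSeconds(1))
    .map(tick -> System.currentTimeMillis())
    .take(5)
    .doOnNext(n -> System.out.println(n))
    .blockLast();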

Grafana: Panel with time of last result

I have an elasticsearch instance that receives logs from multiple backup routines. I'd like to query ES for these logs from Grafana and set up a panel that shows the last time for the different backups. Ideally I would also like to be able to show this in color if the time is longer than a certain threshold.
Basically the idea is to have a display that shows, for instance, green if a certain backup has been completed in the last 24 hours, and red if it hasn't.
How would I do this in Grafana with ES as the datasource?
The exact implementation depends on the panel used.
Example for the singlestat panel: write your ES query, then select Stat: Time of last point; you may need to select a suitable unit/format.
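The underlying query can be as simple as a max aggregation on the timestamp field. A sketch, assuming the backup logs carry a job field and the usual @timestamp (both field names are assumptions about your mapping):
{
  "size": 0,
  "query": { "term": { "job": "nightly-backup" } },
  "aggs": {
    "last_run": { "max": { "field": "@timestamp" } }
  }
}
An "age in seconds" variant has to be computed from this value (for example, now minus last_run).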
Unfortunately, Grafana doesn't understand thresholds in your requested time format (older than 24 hours). You will need to return the value as a metric (for example, the age of the last backup in seconds), which means writing a query for that. You will then have two stats to show (last time + age), so you won't be able to use singlestat. A table panel will probably be better: there you can apply thresholding based on the age metric.
In addition to the great answer by Jan Garaj, it looks like there is work being done to make this type of thing much easier in the future.
Check out this issue to track progress.

Total aggregate over an unbounded stream in Dataflow

A number of examples show aggregation over windows of an unbounded stream, but suppose we need to get a count-per-key of the entire stream seen up to some point in time. (Think word count that emits totals for everything seen so far rather than totals for each window.)
It seems like this could be a Combine.perKey and a trigger to emit panes at some interval. In this case the window is essentially global, and we emit panes for that same window throughout the life of the job. Is this safe/reasonable, or perhaps there is another way to compute a rolling, total aggregate?
Ryan, your solution of using a global window and a periodic trigger is the recommended approach. Just make sure you use accumulation mode on the trigger and not discarding mode. The Triggers page should have more information.
Let us know if you need additional help.
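For reference, a minimal sketch of that setup in Java, written against the Apache Beam API (the original Dataflow SDK names are essentially the same); the words collection and the one-minute firing interval are assumptions for illustration:
import org.apache.beam.sdk.transforms.Count;
import org.apache.beam.sdk.transforms.windowing.AfterProcessingTime;
import org.apache.beam.sdk.transforms.windowing.GlobalWindows;
import org.apache.beam.sdk.transforms.windowing.Repeatedly;
import org.apache.beam.sdk.transforms.windowing.Window;
import org.apache.beam.sdk.values.KV;
import org.apache.beam.sdk.values.PCollection;
import org.joda.time.Duration;

// Global window over the whole stream; fire a pane every minute and
// accumulate across panes so each firing emits running totals so far.
PCollection<KV<String, Long>> runningTotals = words
    .apply(Window.<String>into(new GlobalWindows())
        .triggering(Repeatedly.forever(
            AfterProcessingTime.pastFirstElementInPane()
                .plusDelayOf(Duration.standardMinutes(1))))
        .accumulatingFiredPanes()            // running totals, not per-pane deltas
        .withAllowedLateness(Duration.ZERO)) // required once a trigger is set
    .apply(Count.perElement());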

Automatically calculated Completed Work in TFS 2013 Agile template

How can I automatically accumulate hours in the Completed Work field? I tried to use the Aggregator plugin, but it is not working for me. I need to sum all changes in Remaining Work.
I am using the Agile template, VS 2013, and TFS 2013.4.
<!-- Sum all remaining work -->
<AggregatorItem operationType="Numeric" operation="Sum" linkType="Self" linkLevel="1" workItemType="Task">
  <TargetItem name="CompletedWork"/>
  <SourceItem name="RemainingWork"/>
</AggregatorItem>
Can you help me?
If you're using the Scrum Process Template I don't think you have enough data to calculate this. The Remaining Work field is not a good option to try to capture this. I may start the day with Remaining Work = 8, work 6 hours on the item, but at the end of the day recognize that there is still 4 hours of work left (it was bigger than originally estimated). In that case the remaining work would only decrease by 4 even though I worked 6 hours.
If you need to capture actual work completed, you should be using a separate field on the Work Items. Both the Agile and CMMI process templates have fields for this. If you stick with the Scrum template I'd suggest adding a Completed Work field in addition to the Remaining Work field.

How to fix the endless printing loop bug in Nevrona Rave

Nevrona Designs' Rave Reports is a report engine for use with Embarcadero's Delphi IDE.
This is what I call the Rave Endless Loop bug. In Rave Reports version 6.5.0 (VCL10), which comes bundled with Delphi 2006, there is a notorious bug that plagues many Rave report developers. If you have a non-empty dataset, and the data rows for this dataset fit exactly into a page (that is to say, there are zero widow rows), then upon PrintPreview, Rave will get stuck in an infinite loop generating pages.
This problem has been previously reported in this newsgroup under the following headings:
1. "error: generating infinite pages"; Hugo Hiram 20/9/2006 8:44PM
2. "Rave loop bug. Please help"; Tomas Lazar 11/07/2006 7:35PM
3. "Loop on full page of data?"; Tony Chistiansen 23/12/2004 3:41PM
4. reply to (3) by another complainant; Oliver Piche
5. "Endless lopp print bug"; Richso 9/11/2004 4:44PM
In each of these postings, there was no response from Nevrona, and no solution was reported.
Possibly, the problem has also been reported on an allied newsgroup (nevrona.public.rave.reports.general), to wit:
6. "Continuously generating report"; Jobard 20/11/2005
Although it is not clear to me whether (6) is the Rave Endless Loop bug or another problem. This posting did get a reply from Nevrona, but it was more in relation to multiple regions ("There is a problem when using multiple regions that go over a page-break.") than to the problem of zero widows.
This is more of a work-around than a true solution. I first posted this work-around on the Nevrona newsgroup (Group=nevrona.public.rave.developer.delphi.rave; Subject="Are you suffering from the Rave Endless Loop bug?: Work-around announced."; Date=13/11/2006 7:06 PM).
So here is my solution. It is more of a work-around than a good long-term solution, and I hope that Nevrona will give this issue some serious attention in the near future.
1. Given your particular report layout, count the maximum number of rows per page. Let us say that this is 40.
2. Set up a counter to count the rows within the page (as opposed to rows within the whole report). You could do this either by an event script or by a CalcTotal component.
3. Define an OnBeforePrint scripted event handler for the main data band.
4. In this event handler, set the FinishNewPage property of the main data band to True when the rows-per-page count is one or two below the max (in our example, this would be 38), and set it to False in all other cases, as sketched below. The effect of this is to give every page a non-zero number of widows (in this case 1..38), thus avoiding the condition that gives rise to the Rave Endless Loop problem.
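A rough sketch of steps 2-4 in Rave's Pascal-like event scripting; RowsOnPage and MainBand are placeholders (the exact way you reference the counter and the band depends on your report), and 38 is the example threshold from above:
{ OnBeforePrint of the main data band }
RowsOnPage := RowsOnPage + 1;       { step 2: rows printed on this page so far }
if RowsOnPage >= 38 then
begin
  MainBand.FinishNewPage := True;   { step 4: break early, leaving widow rows }
  RowsOnPage := 0;                  { reset the counter for the next page }
end
else
  MainBand.FinishNewPage := False;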
Thanks so much for this Sean - unfortunately this wouldn't work for me but I came up with another solution...
You see I have a memo at the top of the region that might expand or contract depending on how many notes the user has left in the database. This means that the number of rows that can fit on a page varies.
However, there is another solution: use the MaxHeightLeft property of a databand.
All you do is measure the height of your databand, multiply it by 2, and put this in the MaxHeightLeft property. This will force 1 or 2 records onto the next page if the page fills up that much.
Thanks a lot, this thread helped me out with my endless printing loop problem in Nevrona Rave. I set MinHeightLeft to 0,500; this setting works, but I'm not sure it will work for other result sets of my query report.
Master,
The solution is setting MinHeightLeft to 0,500. I had the WasteFit property set to True and got the loop on the second print, but when I changed MinHeightLeft to 0,500 the error disappeared.
Thanks!
Regards,
Fabiola Herrera.
Fabi_ucv#hotmail.com
