Measure to get the number of parent issues only if their subtask is in a specific status in eazyBI for Jira Cloud - jira

I am struggling with MDX and custom measures.
I am looking to create a custom measure that would return the number of issues (based on "Issues created"), but only if those issues have a subtask (of type "deployment") that is either in status "A" or "B".
I believe I need to use the Filter function somehow, but I really have no clue.
Can anyone help me?

Related

Jira JQL query to filter issues which have child issues

I am in a situation where I have a basic JQL query, i.e.
issueKey in (JIRA-1,JIRA-2,...JIRA-1000)
I would like to add an AND condition to keep only the issues that have child issues.
Expected output: if only JIRA-1 and JIRA-2 have child issues, the query should return just JIRA-1 and JIRA-2 out of those issue keys.
Why this is needed: in a long-running project backlog, I want an issue filter that does not miss the child issues that need to be prioritised.
I assume this is because your project is at too high a level, since subtasks are usually just for developers to break down their work.
It also sounds like you are not prioritizing properly, since you feel the need to look below the user stories.
If you need to focus on subtasks inside a 3-week user story, I suggest you break them out so you can prioritize them properly.
Alternatively, you can just sort on subtasks and prioritize them separately if you like.
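For completeness: plain JQL has no built-in "has subtasks" clause (apps such as ScriptRunner add one), so one workaround is to post-filter the result through the REST API. Below is a minimal sketch using the Python jira package against Jira Cloud; the server URL, credentials, and key list are placeholders.

```python
# Sketch: keep only the issues from a key list that actually have subtasks.
# Assumes the "jira" Python package and a Jira Cloud API token; the server
# URL, credentials, and issue keys below are placeholders.
from jira import JIRA

jira = JIRA(
    server="https://yourcompany.atlassian.net",
    basic_auth=("you@example.com", "YOUR_API_TOKEN"),
)

keys = ["JIRA-1", "JIRA-2", "JIRA-1000"]  # your issueKey list
jql = "issueKey in (%s)" % ", ".join(keys)

with_children = [
    issue.key
    for issue in jira.search_issues(jql, maxResults=False, fields="subtasks")
    if issue.fields.subtasks  # non-empty list means the issue has child issues
]
print(with_children)
```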

Marking a key as complete in a GroupBy | Dataflow Streaming Pipeline

To our streaming pipeline, we want to submit unique GCS files, each file containing multiple events, and each event containing a key (for example, device_id). As part of the processing, we want to shuffle by this device_id so as to achieve some form of worker-to-device_id affinity (more background on why we want to do this is in another SO question). Once all events from the same file are complete, we want to reduce (GroupBy) by their source GCS file (which we will make a property of the event itself, something like file_id) and finally write the output to GCS (possibly as multiple files).
The reason we want to do the final GroupBy is because we want to notify an external service once a specific input file has completed processing. The only problem with this approach is that since the data is shuffled by the device_id and then grouped at the end by the file_id, there is no way to guarantee that all data from a specific file_id has completed processing.
Is there something we could do about it? I understand that Dataflow provides exactly-once guarantees, which means all the events will eventually be processed, but is there a way to set a deterministic trigger to say that all data for a specific key has been grouped?
EDIT
I wanted to highlight the broader problem we are facing here. The ability to mark file-level completeness would help us checkpoint different stages of the data as seen by external consumers. For example, this would allow us to trigger per-hour or per-day completeness, which is critical for us to generate reports for that window. Given that these stages/barriers (hour/day) are clearly defined on the input (GCS files are date/hour partitioned), it is only natural to expect the same of the output. But with Dataflow's model, this seems impossible.
Similarly, although Dataflow guarantees exactly-once processing, there will be cases where the entire pipeline needs to be restarted because something went horribly wrong. In those cases, it is almost impossible to restart from the correct input marker, since there is no guarantee that what was already consumed has been completely flushed out. The DRAIN mode tries to achieve this, but as mentioned, if the entire pipeline is messed up and draining itself cannot make progress, there is no way to know which part of the source should be the starting point.
We are considering using Spark, since its micro-batch-based streaming model seems to fit better. We would still like to explore Dataflow if possible, but it seems that we won't be able to achieve this without storing these checkpoints externally from within the application. If there is an alternative way of providing these guarantees from Dataflow, that would be great. The idea behind broadening this question was to see if we are missing an alternate perspective that would solve our problem.
Thanks
This is actually tricky. Neither Beam nor Dataflow has a notion of a per-key watermark, and it would be difficult to implement that level of granularity.
One idea would be to use a stateful DoFn instead of the second shuffle. This DoFn would need to receive the number of elements expected in the file (from either a side input or some special value on the main input). Then it could count the number of elements it has processed, and only output that everything has been processed once it has seen that number of elements.
This assumes that the expected number of elements can be determined ahead of time, etc.
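A rough sketch of that idea in the Beam Python SDK, assuming the expected per-file count is attached to each element upstream rather than delivered via a side input; the element shape and the names file_id and expected_count are illustrative only.

```python
# Sketch of the stateful-DoFn approach: count elements per file_id and
# emit the file_id once the expected number has been seen.
import apache_beam as beam
from apache_beam.transforms.userstate import CombiningValueStateSpec


class EmitWhenFileComplete(beam.DoFn):
    """Emits a file_id after all of its elements have been processed."""

    COUNT_STATE = CombiningValueStateSpec('count', sum)

    def process(self, element, count=beam.DoFn.StateParam(COUNT_STATE)):
        # element is assumed to be (file_id, (expected_count, event)),
        # i.e. the per-file element count was attached upstream.
        file_id, (expected_count, _event) = element
        count.add(1)
        if count.read() == expected_count:
            yield file_id  # every element of this file has been processed


# usage (sketch):
#   keyed = events | beam.Map(lambda e: (e["file_id"], (e["expected_count"], e)))
#   done_files = keyed | beam.ParDo(EmitWhenFileComplete())
```

The emitted file_ids can then drive the notification to the external service instead of the final GroupBy.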

Dataflow and Stackdriver monitoring integration

I would like to create a Stackdriver dashboard to monitor the number of elements being read/written by my pipelines. The dataflow/job/element_count metric seems to cover this use case; unfortunately, I can't get it to work properly (see the screenshot below).
[screenshot: Stackdriver dashboard]
Did anyone have this problem before? Would you know how to filter this metric so that I only get the element count for the Read/Write PTransforms?
Thanks!
You should be able to just create a dashboard by picking 'Dataflow Job' as the Resource Type and 'element count' as the Metric Type.
As long as your source and sink simply read and write the elements, you should be able to use the element counts on their output collections. You can put them on the graph and mouse over to see the separate amounts. I am not sure of a way to show only one, though, if that is what you want to do.
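If the dashboard picker is not flexible enough, the same metric can also be pulled through the Monitoring API and narrowed down client-side. A sketch with the google-cloud-monitoring Python client follows; PROJECT_ID and the job name are placeholders, and the exact metric label names may differ in your project, so inspect what gets printed.

```python
# Sketch: pull dataflow/job/element_count for one job via the Cloud
# Monitoring (Stackdriver) API, then keep only the collections written
# by your Read/Write transforms client-side.
import time
from google.cloud import monitoring_v3

client = monitoring_v3.MetricServiceClient()
project_name = "projects/PROJECT_ID"  # placeholder

now = int(time.time())
interval = monitoring_v3.TimeInterval(
    {"end_time": {"seconds": now}, "start_time": {"seconds": now - 3600}}
)

metric_filter = (
    'metric.type = "dataflow.googleapis.com/job/element_count" '
    'AND resource.type = "dataflow_job" '
    'AND resource.labels.job_name = "my-job"'  # placeholder job name
)

for series in client.list_time_series(
    request={
        "name": project_name,
        "filter": metric_filter,
        "interval": interval,
        "view": monitoring_v3.ListTimeSeriesRequest.TimeSeriesView.FULL,
    }
):
    # Each time series corresponds to one collection; the PCollection name
    # should show up in the metric labels, so filter on those here.
    latest = series.points[-1].value.int64_value if series.points else None
    print(dict(series.metric.labels), latest)
```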

Show all failing builds in TFS/VSTS

I've created a widget for TFS/VSTS which allows you to see the number of failing builds. This number is based on the last build's result for each build definition. I've read the REST API documentation, but the only way to get this result is:
Get the list of definitions
Get the list of builds filtered by definitions=[allIds], maxBuildsPerDefinition=1, resultFilter=failed
This is actually pretty slow (two calls, lots of response data) and I thought it should be possible in a single query. One of the problems is that maxBuildsPerDefinition doesn't work without the definitions filter. Does anyone have an idea how to load this data more efficiently?
I'm afraid the answer is no. The way you use is the most efficient way for now.
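For reference, here is a sketch of that two-call approach against the Build REST API using Python requests; the organization/project URL, personal access token, and api-version are placeholders you would adjust for your TFS/VSTS instance.

```python
# Sketch of the two-call approach: list definitions, then fetch the latest
# failed build per definition. URL, PAT, and api-version are placeholders.
import requests

BASE = "https://dev.azure.com/YOUR_ORG/YOUR_PROJECT/_apis/build"
AUTH = ("", "YOUR_PERSONAL_ACCESS_TOKEN")  # basic auth with an empty username
API = {"api-version": "4.1"}

# 1) all build definition ids
definitions = requests.get(f"{BASE}/definitions", params=API, auth=AUTH).json()
ids = ",".join(str(d["id"]) for d in definitions["value"])

# 2) latest failed build for each definition
params = dict(API, definitions=ids, maxBuildsPerDefinition=1,
              resultFilter="failed")
failed = requests.get(f"{BASE}/builds", params=params, auth=AUTH).json()
print(len(failed["value"]), "definitions with a recent failed build")
```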

Using Asana events API for task monitoring

I'm trying to use the Asana events API to track changes in one of our projects, more specifically task movement between sections.
Our workflow is as follows:
We have a project divided into sections.
Each section represents a step in the process.
When one step is done, the task is moved to the section below.
When a given task reaches a specific step we want to pass it to an external system. It doesn't have to be the full info - basic things + url would be enough.
My idea was to use https://asana.com/developers/api-reference/events to implement a pull-based mechanism to obtain recent changes in tasks.
My problems are:
The events API seems to generate a lot of information, but not the useful kind. Moving one single task between sections generates 3 events (two "changed" actions and one "added" action marked as "system"). During work, many tasks will be moved between many sections, but I'm only interested in one specific section. How can I find items moved into that section? I know that there's a resource->text field, but it gives me something like "moved from X to Y (ProjectName)", which is probably a human-readable message that might change in the future.
According to the documentation, the resource key should contain task data, but the only info I see is id and name, which is not enough for my case. Is it possible to get hold of tags using the events API? Or any other data that would allow us to classify tasks in our system?
Can I listen for events for a specific section instead of tracking the whole project?
Ideas or suggestions are welcome. Thanks
In short:
Yes, answer below.
Yes, answer below.
Unfortunately not; sections are really tasks with a bit of extra functionality. Currently the API represents the relationship between sections and the tasks in them via the memberships field on a task, and not the other way around.
This should help you achieve what you are looking for, I think.
Let's say you have a project Ninja Pipeline with 2 sections, Novice & Expert. Keep in mind, sections are really just tasks whose name ends with a : character, with the extra feature that tasks can belong to them.
Events "bubble up" from children to their parents; therefore, when you the Wombat task in this project form the Novice section to Expert you get 3 events. Starting from the top level going down, they are:
The Ninja Pipeline project changed.
The Wombat task changed.
A story was added to the Wombat task.
For your use case, the most interesting event is the second one, about the task changing. What you really want to know, now that the task has changed, is the value of the memberships field on the task. If it is now a member of the section you are interested in, take action; otherwise ignore it.
By default, many resources in the API are represented in compact form which usually only includes the id & name. Use the input/output options in order to expand objects or select specific fields you need.
In this case your best bet is to include the query parameter opt_expand=resource when polling events on the project. This should expand all of the resource objects in the payload. For events of type "task", if resource.memberships[0].section.id equals <id_of_the_section>, take action; otherwise ignore the event.
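A small polling sketch along those lines using Python requests; the project/section ids and token are placeholders, and note that newer Asana payloads use gid where older ones used id as in the answer above.

```python
# Sketch: poll the Asana events endpoint for a project and react when a
# task shows up in a given section. PROJECT_GID, SECTION_GID, and the
# token are placeholders.
import time
import requests

TOKEN = "YOUR_PERSONAL_ACCESS_TOKEN"
PROJECT_GID = "1234567890"
SECTION_GID = "9876543210"
URL = "https://app.asana.com/api/1.0/events"
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

sync = None
while True:
    params = {"resource": PROJECT_GID, "opt_expand": "resource"}
    if sync:
        params["sync"] = sync
    body = requests.get(URL, headers=HEADERS, params=params).json()
    # The first call returns 412 with a fresh sync token; later calls
    # return events plus the next token.
    sync = body.get("sync", sync)
    for event in body.get("data", []):
        resource = event.get("resource") or {}
        if resource.get("resource_type") != "task":
            continue
        memberships = resource.get("memberships") or []
        in_section = any(
            (m.get("section") or {}).get("gid") == SECTION_GID
            for m in memberships
        )
        if in_section:
            print("Task reached the section:", resource.get("gid"))
    time.sleep(5)
```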
