Using Asana events API for task monitoring - asana

I'm trying to use the Asana events API to track changes in one of our projects, more specifically task movement between sections.
Our workflow is as follows:
We have a project divided into sections.
Each section represents a step in the process. When one step is done, the task is moved to the section below.
When a given task reaches a specific step we want to pass it to an external system. It doesn't have to be the full info - basic things + URL would be enough.
My idea was to use https://asana.com/developers/api-reference/events to implement a pull-based mechanism to obtain recent changes in tasks.
My problems are:
The events API seems to generate a lot of information, but not the useful kind. Moving a single task between sections generates 3 events (2 "changed" actions and one "added" action marked as "system"). During work many tasks will be moved between many sections, but I'm only interested in one specific section. How can I find items moved into that section? I know that there's a resource->text field, but it gives me something like "moved from X to Y (ProjectName)", which is probably a human-readable message that might change in the future.
According to the documentation the resource key should contain task data, but the only info I see is id and name, which is not enough for my case. Is it possible to get hold of tags using the events API? Or any other data that would allow us to classify tasks in our system?
Can I listen for events for a specific section instead of tracking the whole project?
Ideas or suggestions are welcome. Thanks

In short:
Yes, answer below.
Yes, answer below.
Unfortunately not, sections are really tasks with a bit of extra functionality. Currently the API represents the relationship between sections and the tasks in them via the memberships field on a task, and not the other way around.
This should help you achieve what you are looking for, I think.
Let's say you have a project Ninja Pipeline with 2 sections, Novice & Expert. Keep in mind that sections are really just tasks whose names end with a : character, with the extra feature that other tasks can belong to them.
Events "bubble up" from children to their parents; therefore, when you move the Wombat task in this project from the Novice section to Expert you get 3 events. Starting from the top level going down, they are:
The Ninja Pipeline project changed.
The Wombat task changed.
A story was added to the Wombat task.
For your use case, the most interesting event is the second one, about the task changing. What you really want to know is: now that the task has changed, what is the value of its memberships field? If it is now a member of the section you are interested in, take action; otherwise, ignore it.
By default, many resources in the API are represented in compact form, which usually only includes the id & name. Use the input/output options in order to expand objects or select the specific fields you need.
In this case your best bet is to include the query parameter opt_expand=resource when polling events on the project. This should expand all of the resource objects in the payload. For events whose type is "task", check whether resource.memberships[0].section.id matches the id of the section you care about; if it does, take action, otherwise ignore the event.
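A minimal polling sketch of this approach in Python, assuming the classic REST endpoints described above; the token environment variable, project id, and section id are placeholders:
import os
import time
import requests

ASANA_API = "https://app.asana.com/api/1.0"
HEADERS = {"Authorization": "Bearer " + os.environ["ASANA_TOKEN"]}  # token variable name is an assumption
PROJECT_ID = "12345"           # placeholder project id
TARGET_SECTION_ID = "67890"    # placeholder id of the section you care about

def poll_events(sync=None):
    """Poll the events endpoint once with opt_expand=resource; return (events, next_sync)."""
    params = {"resource": PROJECT_ID, "opt_expand": "resource"}
    if sync:
        params["sync"] = sync
    resp = requests.get(ASANA_API + "/events", headers=HEADERS, params=params)
    if resp.status_code == 412:                 # first call or stale token: grab a fresh sync token
        return [], resp.json()["sync"]
    resp.raise_for_status()
    body = resp.json()
    return body.get("data", []), body.get("sync", sync)

sync_token = None
while True:
    events, sync_token = poll_events(sync_token)
    for event in events:
        resource = event.get("resource") or {}
        memberships = resource.get("memberships") or []
        if (event.get("type") == "task" and memberships
                and str(memberships[0].get("section", {}).get("id")) == TARGET_SECTION_ID):
            print("Task entered target section:", resource.get("id"), resource.get("name"))
    time.sleep(5)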

How are Dataflow bundles created after GroupBy/Combine?

Setup:
read from pubsub -> window of 30s -> group by user -> combine -> write to cloud datastore
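For reference, a rough Beam (Python) sketch of that shape; the subscription path and the "user_id" field are placeholder assumptions, and Count stands in for the real combine and the Datastore write:
import json
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.transforms import window

def key_by_user(msg_bytes):
    event = json.loads(msg_bytes.decode("utf-8"))
    return event["user_id"], event   # "user_id" is an assumed field name

with beam.Pipeline(options=PipelineOptions(streaming=True)) as p:
    (p
     | "ReadPubSub" >> beam.io.ReadFromPubSub(subscription="projects/my-proj/subscriptions/my-sub")
     | "KeyByUser" >> beam.Map(key_by_user)
     | "Window30s" >> beam.WindowInto(window.FixedWindows(30))
     | "CombinePerUser" >> beam.combiners.Count.PerKey())  # stand-in for the real combine + Datastore write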
Problem:
I'm seeing DataStoreIO writer errors as objects with similar keys are present in the same transaction.
Question:
I want to understand how my pipeline combines results into bundles after a group by/combine operation. I would expect the bundle to be created for every window after the combine. But apparently, a bundle can contain more than 2 occurrences of the same user?
Can re-execution (retries) of bundles cause this behavior?
Is this bundling dependent on the runner?
Is deduplication an option? If so, how would I best approach that?
Note that I'm not looking for a replacement for the datastore writer at the end of the pipeline, I already know that we can use a different strategy. I'm merely trying to understand how the bundling happens.
There are two answers to your question. One is specific to your use case, and the other is in general about bundling / windowing in streaming.
Specific to your pipeline
I am assuming that the 'key' for Datastore is the User ID? In that case, if you have events from the same user in more than one window, your GroupByKey or Combine operations will have one separate element for every pair of user+window.
So the question is: What are you trying to insert into datastore?
An individual user's resulting aggregate over all time? In that case, you'd need to use a Global Window.
A user's resulting aggregate for every 30 seconds in time? Then you need to use the window as part of the key you use to insert to datastore. Does that help / make sense?
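A sketch of that second option with Beam's Python SDK, under the assumption that the combine output is (user_id, aggregate) pairs: a DoFn can ask for the window it is processing via beam.DoFn.WindowParam and fold the window start into the Datastore key, so each user+window pair becomes a distinct entity. The "UserAggregate" kind and project id are placeholders.
import apache_beam as beam
from apache_beam.io.gcp.datastore.v1new.types import Entity, Key

PROJECT = "my-gcp-project"   # placeholder GCP project

class ToWindowedEntity(beam.DoFn):
    """Build a Datastore entity whose key combines the user id and the window start,
    so aggregates from different 30s windows never collide in one commit."""
    def process(self, element, window=beam.DoFn.WindowParam):
        user_id, aggregate = element
        window_start = window.start.to_utc_datetime().isoformat()
        key = Key(["UserAggregate", "%s|%s" % (user_id, window_start)], project=PROJECT)
        entity = Entity(key)
        entity.set_properties({
            "user_id": user_id,
            "window_start": window_start,
            "aggregate": aggregate,
        })
        yield entity
The output of beam.ParDo(ToWindowedEntity()) can then feed the existing Datastore write step unchanged.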
Happy to help you design your pipeline to do what you want. Chat with me in the comments or via SO chat.
The larger question about bundling of data
Bundling strategies will vary by runner. In Dataflow, you should consider the following two factors:
Every worker is assigned a key range. Elements for the same key will be processed by the same worker.
Windows belong to single elements; but a bundle may contain elements from multiple windows. As an example, if the data freshness metric makes a big jump*, a number of windows may be triggered - and elements of the same key in different windows would be processed in the same bundle.
* When can data freshness jump suddenly? A stream with a single element that has a very old timestamp and is very slow to process may hold the watermark back for a long time. Once this element is processed, the watermark may jump a lot, to the next oldest element (check out this lecture on watermarks ;)).

Marking a key as complete in a GroupBy | Dataflow Streaming Pipeline

To our streaming pipeline, we want to submit unique GCS files, each file containing multiple events, and each event containing a key (for example, device_id). As part of the processing, we want to shuffle by this device_id so as to achieve some form of worker-to-device_id affinity (more background on why we want to do this is in this other SO question). Once all events from the same file are complete, we want to reduce (GroupBy) by their source GCS file (which we will make a property of the event itself, something like file_id) and finally write the output to GCS (could be multiple files).
The reason we want to do the final GroupBy is because we want to notify an external service once a specific input file has completed processing. The only problem with this approach is that since the data is shuffled by the device_id and then grouped at the end by the file_id, there is no way to guarantee that all data from a specific file_id has completed processing.
Is there something we could do about it? I understand that Dataflow provides exactly-once guarantees, which means all the events will eventually be processed, but is there a way to set a deterministic trigger to say that all data for a specific key has been grouped?
EDIT
I wanted to highlight the broader problem we are facing here. The ability to mark file-level completeness would help us checkpoint different stages of the data as seen by external consumers. For example, it would allow us to signal per-hour or per-day completeness, which is critical for us to generate reports for that window. Given that these stages/barriers (hour/day) are clearly defined on the input (GCS files are date/hour partitioned), it is only natural to expect the same of the output. But with Dataflow's model, this seems impossible.
Similarly, although Dataflow guarantees exactly-once, there will be cases where the entire pipeline needs to be restarted since something went horribly wrong - in those cases, it is almost impossible to restart from the correct input marker since there is no guarantee that what was already consumed has been completely flushed out. The DRAIN mode tries to achieve this but as mentioned, if the entire pipeline is messed up and draining itself cannot make progress, there is no way to know which part of the source should be the starting point.
We are considering using Spark since its micro-batch-based streaming model seems to fit better. We would still like to explore Dataflow if possible, but it seems that we won't be able to achieve this without storing these checkpoints externally from within the application. If there is an alternative way of providing these guarantees from Dataflow, it would be great. The idea behind broadening this question was to see if we are missing an alternate perspective which would solve our problem.
Thanks
This is actually tricky. Neither Beam nor Dataflow have a notion of a per-key watermark, and it would be difficult to implement that level of granularity.
One idea would be to use a stateful DoFn instead of the second shuffle. This DoFn would need to receive the number of elements expected in the file (from either a side-input or some special value on the main input). Then it could count the number of elements it had processed, and only output that everything has been processed once it had seen that number of elements.
This would be assuming that the expected number of elements can be determined ahead of time, etc.
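A rough sketch of that idea with Beam's Python stateful processing, assuming the input is keyed by file_id and that each element carries the expected element count for its file (how that count gets there, e.g. via a side input, is up to you):
import apache_beam as beam
from apache_beam.coders import VarIntCoder
from apache_beam.transforms.userstate import CombiningValueStateSpec

class FileCompleteDoFn(beam.DoFn):
    """Count processed elements per file_id and emit the file_id once the
    count reaches the expected total carried on each element."""
    SEEN = CombiningValueStateSpec('seen', VarIntCoder(), sum)

    def process(self, element, seen=beam.DoFn.StateParam(SEEN)):
        file_id, record = element
        expected = record["expected_count"]  # assumed field holding the file's element count
        seen.add(1)
        if seen.read() >= expected:
            yield file_id                    # signal downstream to notify the external service
State is maintained per key and window, so with the default global window this effectively counts per file.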

How do I retrieve the tasks for a project under the priority heading?

How do I retrieve the tasks for a project under the priority heading?
For example, I have a recruitment project and I want to retrieve the tasks under the "Interviewed" heading (a priority heading).
Thanks
There isn't currently a way to only get tasks in a given section, so the only way to do this at the moment is to fetch all tasks for the project and then filter on your side. Fortunately, the API will return the tasks in the appropriate order such that all the tasks in a given section appear after it.
It's clunky, and we do intend to provide better support for sections at some point in the future, but it's not on our immediate roadmap so I'd definitely recommend this workaround for now. If the response is simply too large, one hack could be to get the ID of the "Interviewed:" task, then fetch only the IDs from the project (GET /projects/.../tasks?opt_fields=id), and then iterate over the tasks by ID. I'd only recommend this approach if the project is genuinely too big to fetch at once, though.
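A small sketch of the straightforward workaround, assuming a personal access token and a placeholder project id: pull the project's tasks in order and keep the ones that appear after the "Interviewed:" heading and before the next heading.
import os
import requests

ASANA_API = "https://app.asana.com/api/1.0"
HEADERS = {"Authorization": "Bearer " + os.environ["ASANA_TOKEN"]}  # token variable name is an assumption
PROJECT_ID = "12345"                                                # placeholder project id

resp = requests.get(ASANA_API + "/projects/" + PROJECT_ID + "/tasks",
                    headers=HEADERS, params={"opt_fields": "name,completed"})
resp.raise_for_status()

in_section = False
interviewed = []
for task in resp.json()["data"]:       # tasks come back in priority order
    name = task.get("name", "")
    if name.endswith(":"):             # a section heading
        in_section = (name == "Interviewed:")
    elif in_section:
        interviewed.append(task)

print(interviewed)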

How to get the section a task is in?

Is there a way to get the section a task is/was in from the API?
I see how we could use the entire project JSON to figure it out. But once a task is completed it jumps to the top of the page. I'm wondering how to figure out what section it was in if it's complete.
In the API response we don't make tasks "jump" to the top - the position is kept in priority order, and should still be located under the section it belongs to.
Of course, this isn't a terribly elegant way to do it. We're currently considering how to handle Sections in the API in a better way, but we don't have a concrete plan for it yet and it's not yet on the actual roadmap.
But in the meantime, you should be able to get the project, as you said, and just look at which Section (i.e. a task that ends in ":") the tasks come after, and you're set!
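A sketch of that, again assuming a personal access token and a placeholder project id: walk the project's tasks in priority order, remember the last section heading seen, and you get a task-to-section mapping that also covers completed tasks.
import os
import requests

ASANA_API = "https://app.asana.com/api/1.0"
HEADERS = {"Authorization": "Bearer " + os.environ["ASANA_TOKEN"]}  # token variable name is an assumption
PROJECT_ID = "12345"                                                # placeholder project id

resp = requests.get(ASANA_API + "/projects/" + PROJECT_ID + "/tasks",
                    headers=HEADERS, params={"opt_fields": "name,completed"})
resp.raise_for_status()

section_of = {}
current_section = None
for task in resp.json()["data"]:       # each task follows its section heading
    name = task.get("name", "")
    if name.endswith(":"):
        current_section = name.rstrip(":")
    else:
        section_of[task["id"]] = current_section

print(section_of)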

Modelling indefinitely-recurring tasks in a schedule (calendar-like rails app)

This has been quite a stumbling block. Warning: the following is not a question, but rather an explanation of what I came up with. My question is: do you have a better way to do this? Is there some common technique for this that I'm not familiar with? It seems like this should be a trivial problem.
So you have a Task model. You can create tasks, complete them, destroy them. Then you have recurring tasks. A recurring task is just like a regular task, but it has a recurrence rule attached to it. However, tasks can recur indefinitely: you can go a year ahead in the schedule, and you should see the task show up.
So when a user creates a recurring task, you don't want to build thousands of tasks for a hundred years into the future and save them to the database, right? So I started thinking: how do you create them?
One way would be to create them as you view your schedule. So, when the user is moving a month ahead, any recurring tasks will be created. Of course that means that you can't simply work with database records of tasks any longer. Every SELECT operation on tasks you ever do has to be in the context of a particular date range, in order to trigger recurring tasks in that date range to persist. This is a maintenance and performance burden, but doable.
Alright, but how about the original task? Every recurring task gets associated with the recurrence rule that created it, and every recurrence rule needs to know the original task that started the recurrence. The latter is important, because you need to clone the original task into new dates as the user browses their schedule. I guess that's doable too.
But what happens if the original task is updated? It means that now, as we browse the schedule, we will be creating recurring tasks cloned off of the modified task. That's undesirable. All the implicitly persisted recurring tasks should show up the way the original task looked when the recurrence was added. So we need to store a copy of the original task separately, and clone from that, in order for recurrence to work.
However, when the user navigates the tasks in the schedule, how do we know whether at a particular point a new recurring task needs to be created? We ask the recurrence rule: "hey, should I persist a task for this day?" and it says yes or no. If there is already a task for this recurrence on this day, we don't create one. All nice, except that a user should also be able to simply delete one of the recurring tasks that has been automatically persisted. In that case, following our logic, the system will re-create the task that has been deleted. Not good. So it means we need to keep storing the task, but mark it as a deleted task for this recurrence. Meh.
As I said in the beginning, I want to know if somebody else tackled this problem and can provide architectural advice here. Does it have to be this messy? Is there anything more elegant I'm missing?
Update: Since this question is hard to answer perfectly, I will accept the answer with the most helpful insight into design/architecture, i.e. the one with the best helpfulness/trade-offs ratio for this type of problem. It does not have to encompass all the details.
I know this is an old question but I'm just starting to look into this for my own application and I found this paper by Martin Fowler illuminating: Recurring Events for Calendars
The main takeaway for me was using what he calls "temporal expressions" to figure out if a booking falls on a certain date range instead of trying to insert an infinite number of events (or in your case tasks) into the database.
Practically, for your use case, this might mean that you store the Task with a "temporal expression" property called schedule. The ice_cube recurrence gem has the ability to serialize itself into an ActiveRecord attribute like so:
class Task < ActiveRecord::Base
  include IceCube

  serialize :schedule, Hash

  def schedule=(new_schedule)
    write_attribute(:schedule, new_schedule.to_hash)
  end

  def schedule
    Schedule.from_hash(read_attribute(:schedule))
  end
end
Ice cube seems really flexible and even allows you to specify exceptions to the recurrence rules. (Say you want to delete just one occurrence of the task, but not all of them.)
The problem is that you can't really query the database for a task that falls in a specific range of dates, because you've only stored the rule for making tasks, not the tasks themselves. For my case, I'm thinking about adding a property like "next_recurrence_date" which will be used to do some basic sorting/filtering. You could even use that to throw a task on a queue to have something done on the next recurring date. (Like check if that date has passed and then regenerate it. You could even store an "archived" version of the task once its next recurring date passes.)
This fixes your issue with "what if the task is updated" since tasks aren't ever persisted until they're in the past.
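If it helps to see the same "temporal expression" idea outside of Ruby, here is a minimal sketch using python-dateutil's rrule: only the rule is stored, and occurrences for whatever range the user is viewing are computed on demand rather than persisted.
from datetime import datetime
from dateutil.rrule import rrule, WEEKLY

# The stored "temporal expression": every Monday and Thursday, starting Jan 6, 2020.
schedule = rrule(WEEKLY, byweekday=(0, 3), dtstart=datetime(2020, 1, 6))

# When the user views a month, compute that month's occurrences on the fly.
view_start = datetime(2021, 3, 1)
view_end = datetime(2021, 3, 31)
for when in schedule.between(view_start, view_end, inc=True):
    print("Task occurs on", when.date())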
Anyway, I hope that is helpful to someone trying to think this through for their own app.
Having done a calendar-like component for an internal social networking app, here's my approach to that problem.
Tiny bit of background: I needed to book boardrooms for meetings for the entire company. Every boardroom needed to be booked either as a one-off or on a recurring basis. As you've found out, it's the recurrence rules that kill you. The additional twist to my problem was that there could be conflicts, i.e. two people could try to book the same boardroom for the same date and time.
I split my models into Boardroom (obviously) and Event (which is the booking associated to a User). I think there was a join model, as well, but it's been a while. When a User would try to book a boardroom, this is the process taken:
1. Attempt to book on the first available date (done through the calendar UI by the user, similar to how Google Calendar creates events).
2. If it's a one-off, you're done.
3. If it's a recurring event, try to immediately book the next 6 events based on the rule given (weekly, bi-weekly, monthly). If any of them fail due to a conflict, book the ones you can and e-mail the conflicts to the user.
4. Book for the next year, or up to the date the recurrence ends, in a background job, following the conflict resolution rule from #3.
When resolving the conflicts, the user had the option of either resolving them on a case-by-case basis or moving the remaining bookings to the new, available date and time.
If the user updated the original booking (e.g. changed the time and date), he/she had the option of updating only that one occurrence or every following one. If the latter was selected, steps 3 and 4 were re-invoked after the deletion of the existing events.
If this sounds a lot like Google Calendar, then you've fully understood my approach. :)
Hope this helps.
I personally think that in Python (which I know well) and Ruby (which I know less well, but it's a dynamic language, so I think the concepts map 1:1) you should be using generators. How's that for a minimalistic answer? Now, when you generate your UI, you pass in a reference to the generator, and it generates the objects you need as they are requested.
As an interface, it has next-item and previous-item methods, and acts a bit like a cursor that can wade forward and backward through the various iterations. It is, in fact, a piece of code masquerading as an infinite series (array) without using infinite memory.
Why do you need to proliferate objects? What you really need are virtual data display controls (for the web or desktop), also known as "paging" in web contexts, I think. You can think of your schedule as an infinite, generated-on-demand spreadsheet with no top row and no bottom row. The only values you need to be able to calculate (calculate, not store) are the ones that appear right now, visible to the user.
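A tiny sketch of that generator idea in Python (the weekly step is hard-coded just for illustration; a real version would take the recurrence rule as input, and backward movement would need a second generator or a cached window of results):
from datetime import date, timedelta

def occurrences(start, step=timedelta(weeks=1)):
    """Lazily yield an unbounded series of occurrence dates, one step apart."""
    current = start
    while True:
        yield current
        current += step

# The UI pulls only as many rows as it needs to render right now.
gen = occurrences(date(2021, 1, 4))
print([next(gen) for _ in range(3)])   # first three visible occurrences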
