Efficiently getting all completed tasks - asana

For an application I need all the user's recently completed tasks (recent meaning the last 7 days in this case). I've got it working now, but it's extremely inefficient and in the API reference I can't see a better way to do it.
What I currently do is get the current user (https://app.asana.com/api/1.0/users/me), iterate over the user's workspaces, and call https://app.asana.com/api/1.0/tasks?workspace=workspaceId&assignee=me&completed_since=sevenDaysAgo for each workspace. This returns compact task data for the completed tasks across all projects in that workspace, but also for all uncompleted tasks in those projects. Since I only need the completed tasks, I have to filter the uncompleted ones out of that list. However, the compact task data doesn't include the "completed" property, so to determine whether a task is completed I need the full task data, which means one call to https://app.asana.com/api/1.0/tasks/taskId per task.
Say I have two workspaces that each have three projects, and each project has on average 100 completed tasks and 200 uncompleted tasks. That means 1 + 2 + (2 * 3) + (2 * 3 * 300) = a whopping 1809 API requests. Apart from this being very slow, it's also a large hit on the Asana servers.
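The request count works out as claimed; reading the terms as one call for the user, one per workspace, one per project, and one full-task fetch per task (the workspace/project/task counts are the hypothetical numbers from the example):

```python
# Hypothetical counts from the example: 2 workspaces, 3 projects each,
# 300 tasks per project (100 completed + 200 uncompleted).
workspaces = 2
projects = 3
tasks_per_project = 300

requests = (
    1                                         # GET /users/me
    + workspaces                              # one call per workspace
    + workspaces * projects                   # one call per project
    + workspaces * projects * tasks_per_project  # one full-task fetch per task
)
print(requests)  # 1809
```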
Is there a way to do this more efficiently? Only getting compact task data for completed tasks would go a long way. Getting it in one call would be even better: how does Asana itself do this in My Tasks > View: Recently Completed Tasks, for instance?

(Asana dev here) Have you considered using ?opt_fields=completed in your initial GET /tasks request? You can specifically request any fields you want; see the Input/Output Options docs, and of course the Task docs for a reference of available fields. So if you really just want, say, the name, you could request ?opt_fields=completed,name.
Sadly, you will still need to filter out the incomplete tasks on your side.
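A minimal sketch of that suggestion (the endpoint and parameters are from the question and answer above; ASANA_TOKEN and the helper names are illustrative, not part of any SDK). Requesting the completed field up front removes the per-task detail calls, leaving one request per workspace plus client-side filtering:

```python
import json
import urllib.parse
import urllib.request

API = "https://app.asana.com/api/1.0"
ASANA_TOKEN = "..."  # placeholder: your personal access token


def only_completed(tasks):
    """Keep only tasks whose 'completed' flag is true (the filtering
    the answer says you still have to do client-side)."""
    return [t for t in tasks if t["completed"]]


def completed_tasks(workspace_id, since_iso):
    """Fetch my tasks in one call, asking for the 'completed' field
    via opt_fields so no per-task follow-up requests are needed."""
    params = urllib.parse.urlencode({
        "workspace": workspace_id,
        "assignee": "me",
        "completed_since": since_iso,
        "opt_fields": "completed,name",
    })
    req = urllib.request.Request(
        f"{API}/tasks?{params}",
        headers={"Authorization": f"Bearer {ASANA_TOKEN}"},
    )
    with urllib.request.urlopen(req) as resp:
        tasks = json.load(resp)["data"]
    # The response still contains incomplete tasks; drop them here.
    return only_completed(tasks)
```

With two workspaces this is 1 + 2 = 3 requests instead of 1809.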

Related

Get all Durable Function instances over a time period

I have been trying to use the Durable Functions HTTP API Get Instances call to get a list of Completed/Failed/Terminated instances to delete over a given time period, batched in groups of 50: /runtime/webhooks/durabletask/instances?code=xxx&createdTimeFrom=2021-11-06T00:00:00.0Z&createdTimeTo=2021-11-07T00:00:00.0Z&top=50
As per the documentation, if the response contains the x-ms-continuation-token header then there are more results and I should make another call, adding the x-ms-continuation-token to the request headers, even if I get no results in the body (the first few calls always seem to return an empty body, then I start getting results for a while before dropping back to no results). My issue is that this never seems to end: there is always a continuation token, even after running for 20+ minutes and making hundreds of calls for the same date range. This doesn't happen with the Durable Functions Monitor extension for VS Code.
What am I missing from the documentation that will tell me when to stop looking for more records if the x-ms-continuation-token header is always present?
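For reference, the pagination pattern the documentation describes looks roughly like this sketch, where fetch_page is a stand-in for the actual GET /runtime/webhooks/durabletask/instances call and the loop stops when a response carries no x-ms-continuation-token header (the stop condition the question reports never being reached):

```python
def collect_instances(fetch_page):
    """Follow x-ms-continuation-token until it is absent.

    fetch_page(token) is a placeholder for the HTTP call: it sends the
    token (if any) in the request headers and returns a pair of
    (instances_in_body, continuation_token_or_None).
    """
    instances = []
    token = None
    while True:
        page, token = fetch_page(token)
        instances.extend(page)  # pages may legitimately be empty
        if token is None:       # documented stop condition
            return instances
```

If the service really does return a token on every response, this loop never terminates, which matches the behaviour described above.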

How to call multiple APIs in parallel for load testing (using Gatling)?

I'm currently trying to load test my APIs using Gatling, but I have a very specific test that I want to perform. I would like to simulate a Virtual User calling all my APIs (the 16 of them) simultaneously. I would like to repeat this multiple times so I can have an idea of the average time it takes for my APIs to respond when called all together at the same time.
The method I used was:
Creating a Scenario for each of my APIs.
Calling every one of those scenarios in my setUp().
Injecting 60 users into every scenario with a throttle of 1 request per second, held for 60 seconds.
The aim was to get 60 iterations of what I wanted.
FYI I'm using Gatling 3.1.2
//This is what all my scenarios look like
val bookmarkScn = scenario("Bookmarks").exec(
  http("Listing bookmarks")
    .get("/bookmarks")
    .check(status.is(200))
)

//My setUp (holdFor takes a duration; scala.concurrent.duration._ must be in scope)
setUp(
  bookmarkScn.inject(
    atOnceUsers(60)
  ).throttle(
    jumpToRps(1),
    holdFor(60 seconds)
  ),
  permissionScn.inject(
    atOnceUsers(60)
  ).throttle(
    jumpToRps(1),
    holdFor(60 seconds)
  ),
  //Adding all the scenarios one after the other
).protocols(httpConfig)
I got some results with this method, but they are not at all what I was expecting, and if I keep the test going too long every call eventually times out.
I expected it simply to take longer than usual (e.g. from 100 ms per API to 300 ms).
My question is: is this method correct? Can you help me achieve my goal?
What you've got should work, but there's probably an easier way to specify this injection. Instead of
bookmarkScn.inject(
  atOnceUsers(60)
).throttle(
  jumpToRps(1),
  holdFor(60 seconds)
),
you could use
bookmarkScn.inject(
  constantUsersPerSec(1) during (60 seconds)
),
In terms of your results, I'd expect the issue lies somewhere downstream of Gatling: 16 scenarios making simple GET requests is very light work for Gatling itself. You may have performance issues in your app or in the infrastructure in between.

Load testing html and objects embedded with yandex.tank

I have been using yandex.tank for a few days to perform load tests.
I have set up the URL list in different ways but have not reached my goal.
I want to simulate a real visit (like a web browser):
request
html response
request of objects embedded in the code
I can create a grouped list of the objects embedded in the code, but the results are reported per individual request. For example:
My "home" tag in "Cumulative Cases Info" shows me:
4554 28.21% / avg 171.2 ms
171.2 ms is the average time of each of the objects. I want the average time for the full request (HTML plus embedded objects).
Is it possible to perform a load test by making requests like those indicated with yandex.tank? Or with another load testing tool?
Yandex.tank (actually Phantom, its default load generator) doesn't parse responses and therefore knows nothing about embedded resources. You'd be better off trying JMeter as a load generator, since its HTTP Request sampler has an option to retrieve embedded resources - http://jmeter.apache.org/usermanual/component_reference.html#HTTP_Request

Getting Asana Task sort order via API

I am trying to get the sort order of tasks within a project. I don't see a field with that info in it. When I get the tasks, can I assume the order of the tasks in the JSON is the sort order in the project? It looks like that is the case; I'd just like confirmation.
Thanks
Randy
When we return the tasks associated with a project, we return them in "project priority" order, i.e. the order they've been put into manually. If you change the view in the app (say, to sort by hearted), they'll still be in the manual order when fetched via the API.

Asana API - how to get incomplete tasks only?

How do I get a list of tasks from a project that are incomplete? I tried to add ?completed=false and ?completed=0 at the end of the tasks URL:
https://app.asana.com/api/1.0/projects/[project id]/tasks?completed=false
... doesn't seem to work. Whether it's set to true or false, it always returns the same tasks. I've spot-checked to make sure there are completed tasks in there.
background info: I'm only trying to do this so that I don't get the entire list of tasks all the time. I need the entire list of tasks because right now as I understand it there is no way to get the section a task is in.
Under "Querying for Tasks" in the Developer Documentation you'll find the parameters you can pass to select different tasks. We don't support generalized queries (like completed=false) but we do have e.g. completed_since, which returns all "incomplete or completed since X" tasks. So, if you only want incomplete tasks, you can pass completed_since=now (since no completed tasks have been completed since the current time, it will only return incomplete tasks). It's not exactly intuitive, but it works :-)
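A minimal sketch of that trick (the URL is from the question; the helper is illustrative, not part of any SDK). Since no task can be completed later than "now", passing completed_since=now yields only incomplete tasks:

```python
import urllib.parse

API = "https://app.asana.com/api/1.0"


def incomplete_tasks_url(project_id):
    """Build the request URL for only-incomplete tasks using the
    completed_since=now trick described in the answer above."""
    query = urllib.parse.urlencode({"completed_since": "now"})
    return f"{API}/projects/{project_id}/tasks?{query}"
```

For example, incomplete_tasks_url("12345") produces https://app.asana.com/api/1.0/projects/12345/tasks?completed_since=now.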
