How does one reorder Asana tasks within assignee_status via the API?

I understand how to reorder tasks within a project, but what about when working across projects? I'd like to use the API to reorder tasks set to 'today' as if I were changing their priority within the Asana web UI's "My Tasks" view.
Can this be done somehow via the existing addProject call, or some sequence of calls?

Unfortunately this isn't supported - only ordering within projects is, as you've discovered. Dealing with things that aren't exactly projects (Assigned To Me, in particular) is something we want to tackle, but there are no concrete plans yet.
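For reference, the within-project ordering that is supported goes through addProject's insert_before / insert_after parameters. A minimal sketch in Java, assuming a personal access token and placeholder task/project ids:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class AsanaReorderSketch {
    public static void main(String[] args) throws Exception {
        String token = "ASANA_PERSONAL_ACCESS_TOKEN"; // placeholder credential
        String taskId = "12345";                      // task to move
        String projectId = "67890";                   // project it belongs to
        String anchorTaskId = "54321";                // task to place it after

        // addProject also reorders the task when insert_after / insert_before is supplied
        String body = "project=" + projectId + "&insert_after=" + anchorTaskId;

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://app.asana.com/api/1.0/tasks/" + taskId + "/addProject"))
                .header("Authorization", "Bearer " + token)
                .header("Content-Type", "application/x-www-form-urlencoded")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}
```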

Related

What is the best way to auto-generate PBIs in TFS?

I am looking for advice.
I have a requirement to generate 5 PBIs under a Feature. We need the user to be able to trigger the process, for example by populating a field.
Do I need code for this or is there a codeless method?
I can see that I can build plugins for TFS. I saw an example or two (although they were related to code check-ins). I would look to trigger on the save of a Feature and, if a specific condition has occurred, generate some child PBIs.
Ideally I would like a codeless method within TFS for this, but I am happy to investigate a plugin if necessary.
We are on-premises now but may move to the online version in six months or more.
If you move to the online version, you have several options, none of which are codeless (a rough sketch of the last option follows the list):
Use existing solutions, for example: TFS Aggregator
Create a custom web service to listen for events and perform some actions: Web Hooks
Create a custom application and run it as a scheduled task: Automation of state changing for Azure DevOps work items
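To make the non-codeless route more concrete, here is a minimal sketch of the "custom application" option against the Azure DevOps REST API: create one Product Backlog Item and link it to its parent Feature. The organization, project, token and ids are placeholders; repeat or loop the call for the five PBIs.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

public class CreateChildPbiSketch {
    public static void main(String[] args) throws Exception {
        String org = "my-org";                 // placeholder organization
        String project = "MyProject";          // placeholder team project
        String pat = "PERSONAL_ACCESS_TOKEN";  // placeholder PAT
        int featureId = 123;                   // id of the parent Feature

        String auth = Base64.getEncoder().encodeToString((":" + pat).getBytes());
        String featureUrl = "https://dev.azure.com/" + org + "/_apis/wit/workItems/" + featureId;

        // JSON Patch document: set the title and add a parent link to the Feature
        String body = "["
                + "{\"op\":\"add\",\"path\":\"/fields/System.Title\",\"value\":\"Generated PBI\"},"
                + "{\"op\":\"add\",\"path\":\"/relations/-\",\"value\":{"
                + "\"rel\":\"System.LinkTypes.Hierarchy-Reverse\",\"url\":\"" + featureUrl + "\"}}"
                + "]";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://dev.azure.com/" + org + "/" + project
                        + "/_apis/wit/workitems/$Product%20Backlog%20Item?api-version=6.0"))
                .header("Authorization", "Basic " + auth)
                .header("Content-Type", "application/json-patch+json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
    }
}
```

The same payload works from a web hook receiver: subscribe to the work item saved/updated event, check whether the triggering field was populated on the Feature, and fire five of these requests.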

Logging large volumes of actions in a production MVC/SQL application

We are happy users of the ASP.NET MVC framework and SQL Server, currently using LINQ-to-SQL. It serves our needs well with a consumer-facing application with about 1.4 million users and 2+ million active uniques per month.
We are long overdue to start logging all user actions (views of articles, searches on our site, etc.) and we're trying to scope out the right architecture to do so.
We'd like the archiving system to be its own entity, and not part of the main SQL cluster that stores the production articles and search engine. We'd like it to be its own SQL cluster, starting out with just one box initially.
To simplify the problem, let's say we just want to log the search terms that these millions of users enter into our site for the month, and we want to do so in the least cycle-intensive way possible.
My questions:
(1) Is there an asynchronous way to dump the search terms to a remote box? Does LINQ support async for this?
(2) Would you recommend building up a cache of say 1,000 (userId, searchTerm, date) logging items in a RAM cache, and then flushing those at intervals to the database? I assume this method would cut down on open/close connections.
Or am I thinking about this entirely wrong? We'd like to strike a balance between ease of implementation and robustness.
1) Sure you can; there are different solutions to achieve it. LINQ is not the instrument you need.
2) There should not be any major improvement from doing it; the logging will be triggered only when a search is performed. You will end up with two calls instead of one, which is not a big deal.
A suggestion is to use AOP.
You can create a clean and separate layer for logging using PostSharp (there are other alternatives, though). You then decorate your actions with the required logging attribute only when you need to trace what is passed to the action.
The main advantages of this approach are:
Logging logic doesn't reside inside your code (you don't need to change your methods' code) but is executed before/after your method.
Clean separation of the aspect from the target method.
You can easily switch the aspects on and off.
AOP is a common practice, especially when it comes to behavior that can be added to more than one method, like logging, authentication and so on. And yes, it can be used in an async way.
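PostSharp is a .NET tool; purely for illustration, the same cross-cutting idea looks roughly like this with AspectJ/Spring AOP annotations in Java (the pointcut and class names are invented):

```java
import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Cross-cutting logging: the search action itself stays untouched;
// the aspect runs around it and records what was passed in.
@Aspect
public class SearchLoggingAspect {

    private static final Logger log = LoggerFactory.getLogger(SearchLoggingAspect.class);

    // Hypothetical pointcut: any search(..) method on a *Controller class
    @Around("execution(* com.example.web..*Controller.search(..))")
    public Object logSearch(ProceedingJoinPoint joinPoint) throws Throwable {
        Object[] args = joinPoint.getArgs();
        long start = System.currentTimeMillis();
        try {
            return joinPoint.proceed(); // run the real action
        } finally {
            log.info("search args={} took={}ms", args, System.currentTimeMillis() - start);
        }
    }
}
```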
1) I would suggest you create an HttpModule that catches all the search terms entered by users. How and where you dump that information (you said you will use a SQL box) is another matter, outside the scope of the module, which should just capture the search terms.
Then you can create a component that contains the logic to store that information using async calls (or even a third-party component like log4net).
2) If you want to create a kind of batch insert, caching all the information you need to store and at some point dumping it to SQL, I would use MSMQ or any other technology that supports reliability: I assume you don't want to lose all that information in the case of a system crash, etc.
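The batching idea in question (2) is essentially a bounded in-memory buffer drained by a background worker. A language-agnostic sketch, written in Java here just for illustration; writeBatch is a stub standing in for the bulk INSERT to the logging box or a hand-off to a durable queue such as MSMQ:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class BufferedSearchLogger {

    record SearchLogEntry(long userId, String searchTerm, long timestampMillis) {}

    private static final int FLUSH_SIZE = 1_000;

    // Bounded so a slow log store cannot exhaust memory; offer() simply drops on overflow.
    private final BlockingQueue<SearchLogEntry> buffer = new LinkedBlockingQueue<>(100_000);
    private final ScheduledExecutorService flusher = Executors.newSingleThreadScheduledExecutor();

    public BufferedSearchLogger() {
        // Flush every 10 seconds even if the batch is not full yet.
        flusher.scheduleAtFixedRate(this::flush, 10, 10, TimeUnit.SECONDS);
    }

    /** Called from the request path: O(1) and never blocks the user. */
    public void log(long userId, String term) {
        buffer.offer(new SearchLogEntry(userId, term, System.currentTimeMillis()));
        if (buffer.size() >= FLUSH_SIZE) {
            flusher.execute(this::flush);
        }
    }

    private void flush() {
        List<SearchLogEntry> batch = new ArrayList<>(FLUSH_SIZE);
        buffer.drainTo(batch, FLUSH_SIZE);
        if (!batch.isEmpty()) {
            writeBatch(batch); // stub: one bulk insert / queue send per batch
        }
    }

    private void writeBatch(List<SearchLogEntry> batch) {
        // Placeholder for the real sink (bulk insert into the logging cluster).
        System.out.println("flushed " + batch.size() + " entries");
    }
}
```

Note the trade-off the answers point out: an in-memory buffer loses whatever has not been flushed if the process dies, which is exactly why a durable queue is suggested when that matters.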

Struts2 and multiple active wizards / workflows

I'm currently working on a Struts2 application that integrates a wizard / workflow in order to produce the desired results. To make it more clear, there is a business object that is changed on three different pages (mostly with AJAX calls). At the moment I'm using a ModelDriven action (that's extended by all the actions working with the same business object) coupled with the Scope interceptor. While this works okay if the user is handling data for only one business object at a time, if the user opens the wizard for different objects in multiple tabs (and we all do this when we want to finish things faster) everything will get messy, mostly due to the fact that I have only one business object stored in the session.
I have read a few articles about using a Conversation Scope Interceptor (main article) and about using the Scope plug-in (here). However, both approaches seem to have problems:
the Conversation Scope Interceptor doesn't auto-expire the conversations, nor does it integrate properly with Struts2;
the Scope plug-in lacks proper documentation and the last build was made in 2007 (and actually includes some of the ideas written by Mark Menard when he defines his Conversation Scope Interceptor, though it doesn't use the same code).
Spring's WebFlow plug-in seems a bit too complex to be used at the moment. I'm currently looking for something that can be implemented in a few hours time, though I don't mind if you can suggest something that works as needed, even if it requires more time than I'd currently want to spend on this now.
So, seasoned Struts2 developers, what do you suggest? How should I implement this?
Okay, this isn't a fully baked idea. But seeing as no one else has provided anything, here is what I would start with.
1) See if you can move the whole flow into a single page. I'm a big believer in the fewer-pages-is-better approach. It doesn't reduce complexity for the application at all, but the user generally finds the interface a lot more intuitive. One of the easiest ways to go about this is by using the json plugin and a lot of AJAX calls to your JSON services.
2) If you must transition between pages (or simply think it is too much client-side work to implement #1), then I'd look to the s:token tag. The very first page to kick off a flow will use this tag, which creates a unique value on each invocation. You will store a map of model objects in your session, and an action will be provided with a model by looking it up from the session.
There are a couple of challenges with #2. One: how do you keep the session from accumulating too many domain objects? a) It might not matter; if the session timeout is set to, say, six hours, you can be rather sure they will be cleared up overnight. b) Provide a self-management interface that can get/set/list objects in the session. It might be what you thought of at first, but it would let a worker do a certain amount of work, then stop and work on another. If the unit of work has some meaningful name (an invoice number or whatever), it could be quite useful.
A little more sophistication would be to move the model objects out of the session and into the service layer, setting an insertion time on each object when it is added. You would probably need a manager to hold each type of model object, and each manager would have a daemon thread that periodically scans its map of domain objects and cleans out expired ones.
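A minimal sketch of that manager idea (names are invented; this is not the Scope plug-in or the conversation interceptor): in-flight wizard models keyed by the token value, stamped on insertion and swept periodically.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

/** Holds in-flight wizard models keyed by the s:token value, expiring stale ones. */
public class WizardModelManager<T> {

    private record Entry<M>(M model, long insertedAtMillis) {}

    private final Map<String, Entry<T>> models = new ConcurrentHashMap<>();
    private final long timeToLiveMillis;

    public WizardModelManager(long timeToLiveMillis) {
        this.timeToLiveMillis = timeToLiveMillis;
        ScheduledExecutorService sweeper = Executors.newSingleThreadScheduledExecutor();
        // Periodically drop models that have outlived their TTL.
        sweeper.scheduleAtFixedRate(this::evictExpired, 1, 1, TimeUnit.MINUTES);
    }

    public void put(String token, T model) {
        models.put(token, new Entry<>(model, System.currentTimeMillis()));
    }

    public T get(String token) {
        Entry<T> entry = models.get(token);
        return entry == null ? null : entry.model();
    }

    public void remove(String token) {
        models.remove(token);
    }

    private void evictExpired() {
        long cutoff = System.currentTimeMillis() - timeToLiveMillis;
        models.entrySet().removeIf(e -> e.getValue().insertedAtMillis() < cutoff);
    }
}
```

An action would look its model up by the token posted back from the page, falling back to creating a fresh one when the entry has expired or the token is unknown.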
You can build a more complicated system by kicking a flow off with one token and then using another token on each page ("flowId" and "currentPageId" respectively); then you can graph the allowable transitions.
Mind you at this point spring web flow is starting to look pretty good.
There is now a conversation plugin for Struts2 that achieves all these goals with very little work required by the developer: http://code.google.com/p/struts2-conversation/
It has:
- nested conversations
- cleanup of dead conversations
- convention over configuration with annotations and naming conventions
- inherited conversations
- full integration with Struts2
- a conversation scope that can also be used by Spring IoC container-managed beans
Hope it helps somebody.

Sharing views between Rails 3 applications

I'm wondering what you think of the several methods there are to accomplish this:
Use symlinks for the shared files
Create a gem/plugin that provides the shared files and code
Create a web service that pulls views/partials from the required app and stores it in a cache
My objective is to reduce complexity in a large application. Let's say I want to build an online community, and I want one app to handle forums, another to handle user galleries, etc., and a central one which manages users and provides common views to the other apps.
So, the master application would have to provide a common layout and widgets to all others, and each app would need to provide some views to the master app too.
For example, say the layout has a main menu with an item for each app, and each item has an over-sized sub-menu, so I can't just have a simple list of label and URL pairs.
So perhaps the master app would ask each child app to provide its menu item and contents through a private API, build the menu, save the output in a cache, and send the full menu to each app when asked.
As you can see, I'm already leaning towards option 3, but I wanted some feedback on my approach and if maybe there's a better way.
Thanks for your input.
From what you describe it really sounds like you should be using a single Rails application. The view interdependency makes me think that you might benefit from this approach. I also imagine that testing will be more difficult because your 'application' will span three actual Rails applications.
That said, if you are set on using three applications, I would recommend against using an API. APIs are great for passing data (JSON, XML...) back and forth, but they aren't as well suited to views. My recommendation would be to create a plugin of common views that could be stored in a separate git repository and simply used within each of your applications. That way the common code is shared amongst the applications yet still locally accessible to all of them.

Roll up project-level tasks to the project collection portal in TFS2010

I have a Project Collection set up in my TFS 2010 RC deployment. I have two Projects set up under this collection with their own task lists, which are populated with data.
I fully expected the tasks from these individual projects to "roll up" and appear in the task list at the Project Collection level, but they do not; the Project Collection task list is empty. Basically, I'm looking to provide a view so a supervisor could see all tasks across projects quickly and easily. I'm sure I could write a Reporting Services report, but this seems like something so basic that it would have been included and would just need to be turned on or something.
I'm sure I'm probably missing something really simple here.
Thanks.
Well, I believe I answered my own question.
I was mixed up about the difference between SharePoint "tasks" and TFS "work items" (which have a work item type of "Task").
Once I resolved that misunderstanding, I simply added a "Query Results Web Part" in my Project Collection portal and modified the query to bring back tasks across all child projects.
