Retrieve a list of open work items for a given project in TFS 2010 programmatically - tfs

I am trying to retrieve a list of open work items for a given project programmatically. In searching the web, the only way I can see to do this is to use the WorkItemStore API and execute a query.
The major issue I am having is that retrieving the WorkItemStore takes almost 2 minutes. I cache it afterwards, but the first hit is unacceptable. Beyond that, my application needs to refresh it every x minutes in case new work items are added.
Is there any way to get a list of open work items associated with a project without using the WorkItemStore? I only need the work item number and, optionally, the title; I don't need any other information.
If not, is there something I am doing wrong, or something wrong with the TFS server (a missing index, perhaps), that makes the performance so slow? I have tried different ways of getting the store, by the way; they are all extremely slow:
WorkItemStore store = (WorkItemStore)tfs.GetService(typeof(WorkItemStore));
or
workItemStore = new WorkItemStore(tfsTeamProjectCollection);
or
workItemStore = new WorkItemStore(tfsServerName);
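For reference, once the store is finally loaded, the query itself is simple. Here is a minimal sketch of what I run; the server URL, project name, and the 'Active' state value are placeholders, since the open-state name depends on the process template:

// Minimal sketch: list open work items (ID and title only) for one project via WIQL.
// The server URL, project name, and "Active" state value are assumptions.
using System;
using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.WorkItemTracking.Client;

class OpenWorkItems
{
    static void Main()
    {
        TfsTeamProjectCollection tfs = new TfsTeamProjectCollection(
            new Uri("http://tfsserver:8080/tfs/DefaultCollection"));
        WorkItemStore store = tfs.GetService<WorkItemStore>();   // this is the slow part

        string wiql =
            "SELECT [System.Id], [System.Title] " +
            "FROM WorkItems " +
            "WHERE [System.TeamProject] = 'MyProject' " +
            "AND [System.State] = 'Active'";

        foreach (WorkItem wi in store.Query(wiql))
            Console.WriteLine("{0}: {1}", wi.Id, wi.Title);
    }
}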
Any help in this matter would be greatly appreciated.

Even with an incredibly large DB, you shouldn't experience two-minute delays.
I would load up SQL Profiler and take a look at the query generated to get work items. From there, you can probably identify what part of the query is causing the delay.
You can also consider running the query on the same box that the TFS DBs are on and see if that is the issue. As the comment above points out, remote connections can certainly cause delays.
If none of this resolves the issue, then hopefully you can provide some more information, like the size of the project (shouldn't matter), the TFS installation configuration (where your servers are and how they are set up), and what hardware it is on.
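If it would help to narrow things down first, a quick timing harness can show whether the cost is in connecting, in the metadata download that backs the WorkItemStore, or in the query itself. A sketch, with the URL and project name as placeholders:

// Sketch: time the phases separately to see where the two minutes go.
// The connection URI and project name are placeholders.
using System;
using System.Diagnostics;
using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.WorkItemTracking.Client;

class TimeIt
{
    static void Main()
    {
        Stopwatch sw = Stopwatch.StartNew();
        TfsTeamProjectCollection tfs = new TfsTeamProjectCollection(
            new Uri("http://tfsserver:8080/tfs/DefaultCollection"));
        tfs.EnsureAuthenticated();
        Console.WriteLine("Connect: {0} ms", sw.ElapsedMilliseconds);

        sw.Restart();
        WorkItemStore store = tfs.GetService<WorkItemStore>();
        Console.WriteLine("WorkItemStore (metadata download): {0} ms", sw.ElapsedMilliseconds);

        sw.Restart();
        store.Query("SELECT [System.Id] FROM WorkItems WHERE [System.TeamProject] = 'MyProject'");
        Console.WriteLine("Query: {0} ms", sw.ElapsedMilliseconds);
    }
}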

Related

TFS 2013: How to Create Alerts for Unfinished Work Items when a Sprint Ends

Is there a way to set up custom alerts on TFS? I already use the web interface to create alerts, but I need to create custom ones that are not based on work item fields only, but also on the current and past iterations. I know that Power Tools used to have an Alert Explorer in previous versions of Visual Studio, but I don't know if it would have supported what I am trying to do.
Essentially, this is what I need:
An alert that notifies users of unfinished work items assigned to them when the current iteration (sprint in my case) ends.
I know some of you might be concerned about TFS not knowing what the current sprint is, but I have used this workaround http://intellitect.com/transitioning-between-sprintsiterations-with-tfs/ so I don't believe it's an issue.
I know I could simply query for unfinished items and move them to another iteration (sprint) in Excel, but we are trying to get into the habit of getting everyone to finish their work on time, and if not, as quickly as possible, and the notifications would go a long way in helping with that.
Would there even be a way to do this via the TFS API or through the TeamFoundation PowerShell modules? I have searched extensively but I can't seem to find an answer to this question. Any help even with a work-around solution would be appreciated.
If you are trying to get people into the habit of updating their work items, then this will cause you more issues than it fixes. They are not doing it because they do not see the value.
However, you could write a TFS job that sends the emails. It would need to be a scheduled job that checks to see if there is outstanding work (a rough sketch follows the link below).
This should get you started: http://blogs.msdn.com/b/chrisid/archive/2010/02/15/introducing-the-tfs-background-job-agent-and-service.aspx
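As a rough illustration (not the background job agent itself; just a plain console app you could schedule with Windows Task Scheduler), something like this could find unfinished items in the current sprint and mail the assignees. The iteration path, state names, server URL, SMTP details, and the name-to-email lookup are all assumptions:

// Sketch: scheduled console app that emails owners of unfinished work items.
// Iteration path, state names, server URL, and addresses are placeholders.
using System;
using System.Net.Mail;
using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.WorkItemTracking.Client;

class SprintEndNotifier
{
    static void Main()
    {
        TfsTeamProjectCollection tfs = new TfsTeamProjectCollection(
            new Uri("http://tfsserver:8080/tfs/DefaultCollection"));
        WorkItemStore store = tfs.GetService<WorkItemStore>();

        // "Current" iteration per the workaround linked in the question.
        string wiql =
            "SELECT [System.Id], [System.Title], [System.AssignedTo] " +
            "FROM WorkItems " +
            "WHERE [System.TeamProject] = 'MyProject' " +
            "AND [System.IterationPath] = 'MyProject\\Current' " +
            "AND [System.State] <> 'Done' AND [System.State] <> 'Removed'";

        using (SmtpClient smtp = new SmtpClient("smtp.example.com"))
        {
            foreach (WorkItem wi in store.Query(wiql))
            {
                string owner = (string)wi.Fields["Assigned To"].Value;
                smtp.Send("tfs@example.com", LookupEmail(owner),
                          "Unfinished work item " + wi.Id,
                          wi.Title + " is still " + wi.State + " at sprint end.");
            }
        }
    }

    static string LookupEmail(string displayName)
    {
        // Placeholder: map a TFS display name to an email address (e.g. via AD).
        return displayName.Replace(" ", ".").ToLowerInvariant() + "@example.com";
    }
}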
However, what you have is a people problem that can't be solved by tooling.
What I like is getting a job running, whether a SQL Server job or a Windows service, and then manipulating the work items myself.

Nodes timing out in umbraco back-end

I'm having an issue with an umbraco site of mine: For some reason some of the nodes are timing out when I try to click on them in the back-end of the site.
The front-end works fine and there aren't any slowdown issues there, however I'm unable to edit these same nodes in the back-end as the system seems to just hang. This is making it incredibly difficult to debug as I have no idea what properties are actually causing the problems here. What's strange is I can create a node of the same document type and enter in some dummy values and that works fine, yet I can't seem to edit the existing nodes.
I've tried republishing the entire site, republishing the individual nodes, deleting the umbraco.config file and nothing has worked up to this point.
What's also interesting is that if I close down the browser the system seems to stop hanging and I can log in and try again.
Has anyone encountered this before or know where to begin?
Thanks
I have encountered something similar. The longer you work with Umbraco, the slower it becomes, and if you check the memory usage in Chrome's task manager, you can see that certain actions on nodes bump the memory usage up a little further. The answer is just to close down the tab and open a new one.
I have reported this, and Umbraco cannot replicate it. However, I do think that this is possibly due to a package installed into Umbraco, maybe uComponents. It's very difficult to pinpoint.
Update:
If you can access some nodes but not others, then this is actually slightly easier to debug. I would check what similarities the nodes that timeout have.
Are they all of the same document type?
Do they all use the same data type?
I would guess that the nodes in question are using a data type that is performing an operation when the node is loading, and that operation is timing out. For example, do you have any data types that load data from the database, like enums? Do you have any datatypes that load data from a web service?
Do you have any usercontrol data types wrapped in the UserControlWrapper data type? These would be somewhere to check.
Finally, check:
The database's [umbracoLog] table; any Umbraco-specific errors will be listed there (a query sketch follows this list).
The computer's Event Viewer; this will show any unhandled errors.
My money's on a database timeout.
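If you want to pull the log quickly, something like this works; it is a sketch against the old umbracoLog schema, and the connection string and column names are assumptions that may differ between Umbraco versions:

// Sketch: dump the most recent umbracoLog entries to look for timeouts.
// Connection string and column names are assumptions (Umbraco 4-era schema).
using System;
using System.Data.SqlClient;

class DumpUmbracoLog
{
    static void Main()
    {
        using (SqlConnection conn = new SqlConnection(
            "Server=.;Database=umbracoDb;Integrated Security=true"))
        {
            conn.Open();
            SqlCommand cmd = new SqlCommand(
                "SELECT TOP 20 Datestamp, logHeader, logComment " +
                "FROM umbracoLog ORDER BY Datestamp DESC", conn);
            using (SqlDataReader r = cmd.ExecuteReader())
                while (r.Read())
                    Console.WriteLine("{0} [{1}] {2}", r[0], r[1], r[2]);
        }
    }
}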

MemCache, Rails, pages showing different data at different times

We've got a strange problem that is very hard to troubleshoot, and we are looking for some assistance on methods that might help us troubleshoot it. We use memcache and Thinking Sphinx. Recently we moved to a new server, and suddenly elements on the pages are missing.
So for instance, our home page has news items and the latest files added. In one case I see that we are missing the last 2 news items. My developer checks and sees they're there. 10 minutes later he checks and sees all the news items missing. Checking again 15 minutes later, 3 items are missing.
We noticed that in the server move we had memcache set at 2 MB, so we moved it up to 1 GB. It looked like everything was fixed. However, now we are seeing similar inconsistencies when people are searching. Users will report problems, I will see them and send them to my developer, and he sees different results. We both refresh and see something else.
We have been able to work out that this is somehow related to memcache and/or Thinking Sphinx, because when we clear and rebuild, everything acts normal.
My only assumption is that at some point we run out of memory in memcache, but it makes no sense that only certain data would not be shown.
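For what it's worth, memcached's own counters can confirm or rule out the out-of-memory theory: the text protocol's stats command reports an evictions count, which climbs whenever items are pushed out of memory before they expire. Here is a quick sketch of polling it (host and port are placeholders):

// Sketch: poll memcached's "stats" command and watch the evictions counter.
// Host and port are placeholders; a steadily climbing "evictions" value means
// items are being pushed out of memory before they expire.
using System;
using System.IO;
using System.Net.Sockets;

class MemcachedStats
{
    static void Main()
    {
        using (TcpClient client = new TcpClient("127.0.0.1", 11211))
        using (NetworkStream ns = client.GetStream())
        using (StreamWriter w = new StreamWriter(ns))
        using (StreamReader r = new StreamReader(ns))
        {
            w.Write("stats\r\n");
            w.Flush();
            string line;
            while ((line = r.ReadLine()) != null && line != "END")
                if (line.Contains("evictions") || line.Contains("bytes") ||
                    line.Contains("limit_maxbytes"))
                    Console.WriteLine(line);
        }
    }
}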
Can anyone give any advice?
Thanks,
Will

Reconstructing sms.db

Backstory
This afternoon, I replied to a text from my girlfriend, then apparently neglected to sleep my phone before putting it back in my pocket. When I pulled it back out a few minutes later, my phone had decided to hit "Edit->Clear All" on the conversation, vaporizing two years and two phones worth of SMS history with her. While I have a backup of the phone, it's close to three weeks old at this point, and there's enough solid discussion that I'd like to reconstruct; I've already grabbed a copy of sms.db, but I think the method I used vacuumed the file, so there are no soft-deleted texts in it.
Meat of the Question
I have a three-week-old backup of my sms.db, and I have access to an up-to-date copy of her sms.db. I'd like to
export the texts she has but I don't (easy, at least to CSV)
change the "perspective" info (the address field and the sent/received/deleted/unknown field), keeping the timestamp and text
import/merge these new entries into my old sms.db backup
merge this updated backup with my current sms.db (optional/there seems to be an online utility for that)
I don't really know SQL but would be willing to learn; the problem I have is that from what I understand, the tables within sms.db have become more interdependent over the OS's lifespan, and the triggers now call C functions that don't exist outside the phone, so it's not a simple matter of calling a single trigger on multiple entries. Does anyone know of any ways to work around this complexity, or even better, any utilities that have already figured out how to import individual entries into sms.db?
Edit:
I've been examining sms.db, and from what I can tell, the relationships are pretty straightforward (a sketch of the export-and-flip step follows this list):
for message, I mostly need to make sure that the ROWID of any added message is higher than the current highest ROWID
msg_group holds the message:ROWID of the last message for each contact; I can look up the correct address within group_member; group_member:group_id corresponds with msg_group:ROWID
msg_group has a hash column; this will probably be the hardest thing to update, since I'm not immediately sure what it's hashing, or what hash to use
sqlite_sequence doesn't seem like it's quite up to date; its entries seem to all be smaller than the actual ROWIDs, but I assume this means I won't have to mess with it very much.
I'm not really sure that I'll be able to change msg_pieces at all: it's the table in charge of handling the multiple parts of an MMS message.
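To make the first two steps concrete, here is a sketch of the export-and-flip pass using System.Data.SQLite. The column names (address, date, text, flags) and the flag values (2 = received, 3 = sent) follow the old message-table layout described above and should be treated as assumptions:

// Sketch: copy her messages into my backup with the "perspective" flipped.
// Schema assumptions: message(ROWID, address, date, text, flags),
// flags 2 = received, 3 = sent (the pre-iOS layout described above).
using System;
using System.Data.SQLite;

class SmsMerge
{
    static void Main()
    {
        using (var hers = new SQLiteConnection("Data Source=hers.db"))
        using (var mine = new SQLiteConnection("Data Source=mine.db"))
        {
            hers.Open();
            mine.Open();

            var read = new SQLiteCommand(
                "SELECT date, text, flags FROM message WHERE address = @addr", hers);
            read.Parameters.AddWithValue("@addr", "+15551234567");

            var write = new SQLiteCommand(
                "INSERT INTO message (address, date, text, flags) " +
                "VALUES (@addr, @date, @text, @flags)", mine);

            using (var r = read.ExecuteReader())
            {
                while (r.Read())
                {
                    // Flip perspective: her sent (3) becomes my received (2) and
                    // vice versa; other flag values (deleted/unknown) pass through.
                    int flags = Convert.ToInt32(r["flags"]);
                    int flipped = flags == 3 ? 2 : (flags == 2 ? 3 : flags);

                    write.Parameters.Clear();
                    write.Parameters.AddWithValue("@addr", "+15551234567");
                    write.Parameters.AddWithValue("@date", r["date"]);
                    write.Parameters.AddWithValue("@text", r["text"]);
                    write.Parameters.AddWithValue("@flags", flipped);
                    // SQLite assigns new ROWIDs above the current maximum, which
                    // satisfies the first point in the list above.
                    write.ExecuteNonQuery();
                }
            }
            // Still to do by hand: fix msg_group's last-message ROWID and hash,
            // and skip rows that already exist in both databases.
        }
    }
}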
Hey, did you get this sorted out? If you haven't, I suggest taking a look at http://smsmerge.homedns.org/
I have been in a similar position to yours, but I was lucky and had a more recent backup than that.
Let me know if you need a hand with it

How do I overcome poor SSIS debugging performance?

I'm using SSIS to synchronize data between two databases. I've used SSIS and DTS in the past, but I generally write an application for things of this nature (I'm a coder and it just comes easier to me).
In my package I use a SQL Task that returns about 15,000 rows. I’ve hooked that up to a Foreach Container, and within that I assign the resultset column values to variables, and then map those variables to parameters that are fed to another SQL Task.
The problem I’m having is with debugging, and not just more complicated debugging like breakpoints and evaluating values at runtime. I simply mean that if I run this with debugging rather than without, it takes hours to complete.
I ended up rewriting the process in Delphi, and the following is what I came up with:
Full Push of Data:
This pulls 15,000 rows, updates a destination table for each row, then pulls 11,000 rows and updates a destination table for each row.
Debugging:
Delphi App: 139s
SSIS: 4 hours, 46 minutes
Not Debugging:
Delphi App: 132s
SSIS: 384s
Update of Data:
This pulls 3,000 rows, but no updates are needed or made to the destination table. It then pulls 11,000 rows but, again, no updates are needed or made to the destination table.
Debugging:
Delphi App: 42s
SSIS: 1 hour, 10 minutes
Not Debugging:
Delphi App: 34s
SSIS: 205s
The odd thing is, I get the feeling that most of the time spent debugging is just updating UI elements in Visual Studio. If I watch the progress tab, a node is added to a tree for each iteration (thousands total), and this gets slower and slower as the process goes on. Trying to stop debugging usually doesn't work, as Visual Studio seems caught in a loop updating the UI. If I check SQL Server Profiler, no actual work is being done. I'm not sure if the machine matters, but it should be more than up to the job (quad core, 4 GB of RAM, 512 MB video card).
Is this sort of behavior normal? As I’ve said I’m a coder by trade, so I have no problem writing an app for this sort of thing (in fact it takes much less time for me to code an application than “draw” it in SSIS, but I figure that margin will shrink with more work done in SSIS), but I’m trying to figure out where something like SSIS and DTS would fit into my toolbox. So far nothing about it has really impressed me. Maybe I’m misusing or abusing SSIS in some way?
Any help would be greatly appreciated, thanks in advance!
SSIS control flow and loops are not very high performance and are not designed for processing these amounts of data, especially during debugging: before and after each task execution, the debugger sends notifications to the designer process, which updates the colors of the shapes, and this can be slow.
You could get much better performance using a data flow. A data flow does not operate on single rows; it works with buffers of rows, which is much faster, and the debugger is only notified about the beginning and end of each buffer, so its impact is less noticeable.
SSIS is not designed to do a foreach like that. If you are doing something for each row coming in, you probably want to read those rows into a data flow and then, using a lookup or merge join, determine whether to do an INSERT (these happen in bulk) or use a database command object for multiple SQL UPDATE commands (a better-performing option is to batch these into a staging table and do a single UPDATE).
In another typical sync situation, you read all the data into a staging table and do a SQL Server UPDATE on the existing rows (INNER JOIN) and an INSERT on the new rows (LEFT JOIN, rhs IS NULL); the example below sketches this. There is also the possibility of using linked servers, but joins over them can be slow, since all (or a lot of) the data may have to come across the network.
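For illustration, the staging-table pattern boils down to two set-based statements, run here from client code but equally at home in an Execute SQL Task. The table and column names are made up:

// Sketch of the staging-table sync: one set-based UPDATE for existing rows,
// one INSERT for new rows. Table and column names are illustrative only.
using System.Data.SqlClient;

class StagingSync
{
    const string Sql = @"
        -- Update rows that already exist in the destination.
        UPDATE d
        SET    d.Name = s.Name, d.Amount = s.Amount
        FROM   dbo.Destination d
        INNER JOIN dbo.Staging s ON s.Id = d.Id;

        -- Insert rows that exist only in staging.
        INSERT INTO dbo.Destination (Id, Name, Amount)
        SELECT s.Id, s.Name, s.Amount
        FROM   dbo.Staging s
        LEFT JOIN dbo.Destination d ON d.Id = s.Id
        WHERE  d.Id IS NULL;";

    static void Main()
    {
        using (var conn = new SqlConnection("Server=.;Database=MyDb;Integrated Security=true"))
        {
            conn.Open();
            new SqlCommand(Sql, conn).ExecuteNonQuery();
        }
    }
}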
I have SSIS packages that regularly import 24 million rows, including handling data conversion and validation and slowly changing dimensions using the TableDifference component, and they perform relatively quickly for that large amount of data versus a separate client program.
I have noticed this behavior as well. I had an SSIS package that moved somewhere in the neighborhood of 3 million entries; it was not possible to debug, as it would run for about 3-4 days.
SSIS is still the way I did it; I just don't "debug" with SSIS. I run the packages without debugging when working with the full datasets, and if I must debug, I use very small datasets.
