We are upgrading to TFS 2015 and after almost 8 days the system is still trying to upgrade. It is stuck on a job step.
I'm not sure if I should reboot, because one of the articles (http://nokitel.im/index.php/2015/03/24/tfs-2013-upgrade-project-collection-stuck-offline-servicing-state/) says that rebooting would make the process start all over. Any suggestions?
sp_who2 shows
8 days is definitely too long. As you can see from the log, the upgrade job is waiting for full-text index population and reports its status every minute. If the last entry is from July 2nd, then most likely the upgrade job has failed.
You should first verify that the TfsJobAgent service is running on your server.
If it is not running, you should definitely start it.
If it is running, query the vw_ServicingJobDetail view in the Tfs_Configuration database to find the IDs of the upgrade jobs.
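Something along these lines should surface the most recent servicing jobs (I'm selecting all columns, and the timestamp column I sort on is an assumption; adjust it to whatever your version exposes):
-- List recent servicing jobs to pick out the upgrade job IDs.
-- StartTime is an assumed column name.
SELECT TOP 10 *
FROM vw_ServicingJobDetail
ORDER BY StartTime DESC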
You can use the following query to see the 100 latest step details for a servicing job:
SELECT TOP 100 *
FROM vw_ServicingStepDetail
WHERE JobId = 'your-job-id'
ORDER BY DetailId DESC
Are all 3 upgrade jobs stuck on the same step?
During the upgrade, there is a servicing step that checks the status of SQL full-text index population. It waits until all work item long-text field values are indexed or the crawl is idle. However, the logic doesn't handle a special status code (status code 6) returned by SQL Server, and thus keeps checking the status in a loop.
The TFS team is working on getting the problem fixed. However, there isn't a good workaround at this point except trying to identify the problem in SQL full-text index population and resolving it (so it no longer returns 6 as its status).
As a starting point, check the crawl logs in the SQL Server logs folder and see the exact error being logged there. Also, try pausing and resuming the full-text index on the WorkItemLongTexts_Dataspace table, and see if that helps.
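If you prefer to do both checks from T-SQL, here is a sketch (run against the collection database; I'm assuming the full-text index is on WorkItemLongTexts_Dataspace as described above):
-- Current full-text population status for the table; the stuck
-- servicing step reportedly loops while this keeps returning 6.
SELECT OBJECTPROPERTYEX(OBJECT_ID('dbo.WorkItemLongTexts_Dataspace'),
       'TableFulltextPopulateStatus') AS PopulateStatus

-- Pause and then resume the population to try to unstick it.
ALTER FULLTEXT INDEX ON dbo.WorkItemLongTexts_Dataspace PAUSE POPULATION
ALTER FULLTEXT INDEX ON dbo.WorkItemLongTexts_Dataspace RESUME POPULATION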
If your databases are large, then this process can take many days. I know of one instance that took over 5 days to upgrade.
If you mean that it is actually in its 8th day of upgrading, then I would suggest that you raise a support call with Microsoft.
The SQL Server full-text daemon service (the SQL Full-text Filter Daemon Launcher) needs to be running.
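You can at least confirm the feature is present from T-SQL; the daemon service itself has to be checked and started from the Windows Services console:
-- Returns 1 if Full-Text Search is installed on this SQL Server instance.
SELECT SERVERPROPERTY('IsFullTextInstalled') AS FullTextInstalled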
I want to do a Jira issue query, but I don't know if it is possible.
I am looking at how many of our bugs have ever been re-opened. So they were worked on, closed, re-opened, and then fixed and closed again. It's a measure of how well bugs are fixed.
That query uses:
AND status was Reopened
However, we have a behaviour where we close an issue, realise that the issue needs editing, re-open it (to change the resolution, for example), and then close it again.
I think the best way of doing this is to search for something like
'AND status was Reopened for more than 3 hours'
Is there anything like that? The data is there in the history; it is just a matter of whether we can query it or not.
There's no way to write a JQL query for issues which were in a status for a given amount of time. JQL only supports searching for the time an issue has been in a status relative to a date. If you are using Jira Service Desk, the usual workaround for something like this is to create an SLA of 3 hours which is triggered when the issue moves into the Reopened status, and then query for this SLA being breached.
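For reference, the date-relative form that JQL does support looks like this (the dates are just placeholders):
status WAS Reopened DURING ("2019-01-01", "2019-01-08")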
Otherwise, there are add-ons that add this functionality to JQL, or add-ons for creating automations which could set a flag that you could then query. Automation for Jira and ScriptRunner are popular plugins that could pull this off, and Automation for Jira will soon be built into Jira Cloud.
I've read all the posts with the same or a very similar headline, but I still can't find a proper solution or explanation for my problem.
I'm working with MySQL Workbench 6.3 CE. I have been able to create a database with several tables and create a connection from Python to write data to it. Still, I had a problem with a VARCHAR field that needed to be set to more than 45 characters. When I try to set it to a bigger limit, like VARCHAR(70), no matter how many times I try, or whether I set higher timeout limits, I get error 2013, saying my connection was closed during the query.
I'm using the above version of Workbench on Windows 10, and I'm trying to modify that field from Workbench. After that first attempt, I can't drop a table either, nor can I connect from Python.
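The change I'm trying to make is essentially this (the table and column names here are placeholders, not my real schema):
ALTER TABLE my_table MODIFY my_column VARCHAR(70);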
What is happening?
OK, apparently what was happening is that I had a lock, and there were a lot of queries waiting in the state "Waiting for table metadata lock".
I ran the following in the Workbench console:
SELECT CONCAT('KILL ', id, ';') FROM information_schema.PROCESSLIST WHERE user = 'root';
That generates a KILL statement for each of those processes. I copied that list into a new tab and executed it, killing all of the processes at once. After that, it worked again.
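For anyone who hits the same state: before resorting to the mass kill, it may be worth checking which transactions are actually holding things up. This only uses standard information_schema tables, nothing specific to my setup:
-- Open InnoDB transactions; a long-running one left uncommitted is
-- typically what holds the metadata lock on the table.
SELECT trx_mysql_thread_id, trx_state, trx_started
FROM information_schema.INNODB_TRX
ORDER BY trx_started;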
Can anybody explain how I got into that situation, and what precautions to take in my Python scripts so as to avoid it?
Thank you.
I have started using the TFS Integration Tools to migrate work items from one TFS 2010 project to another team project within the same collection. After some small trial runs and modifications to the field and value mappings, I started a migration on our entire product backlog. Approximately 170,000 change groups were discovered and analysis started. However, during the analysis the connection to the TFS server was lost, so the migration had to be restarted. After the restart, approximately 340,000 change groups were identified (roughly double), without any significant changes having been made to work items in the backlog.
Has anyone experienced a similar problem, or is anyone aware of settings or changes that can be made in the tool to limit this increase in change groups? The amount of time taken to analyse so many groups is causing the migration to take much longer than was initially expected.
After several runs, I found out that the count appears to be a running total, so, logically enough, when I experienced a break in the connection, all change groups had to be re-analysed, causing the "doubling" in change groups.
I have a Rails 3 app that I am running on Heroku. The app is usually really fast but sometimes I'll get cases where the app seems to hang for upwards of 2 minutes before finally returning the requested page.
I have the New Relic addon installed and nothing sticks out at me. It seems to be sporadic and not tied to a particular controller/action.
How would you suggest I go about pinpointing the cause of this problem?
http://github.com/kyledecot/skateparks-web
Always check the logs. When it happens, immediately go check your logs. Pretty sure all SQL queries are logged and timed, and you might want to add logging and timing to some of your own service calls.
If you upgrade to the Pro level of New Relic, you can get detailed traces specifically of your slow transactions. Turn up your Transaction Trace threshold to a large number (1s is pretty big), and wait for traces to show up. You'll see a detailed breakdown of the performance of an individual request, including SQL queries.
(Full disclosure: I work for New Relic.)
My TfsVersionControl database has grown to 40+ GB in size. We recently did a TFS Destroy on a folder tree that should have cleared up at least 10 GB but instead it seemed to have no effect.
When I look at the tables in TfsVersionControl, I am first shocked to see that there are no foreign keys at all in the database. Running a few queries (a sketch of the kind of check I used follows the list below), I see that there is some orphaning going on:
tbl_Content has 13.9 GB of records that don't have a related tbl_File record
tbl_File and tbl_Content have 2.4 GB that don't have a related tbl_Namespace record
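A sketch of the kind of check I ran (the join and content column names are guesses at the schema, so adjust them to what the tables actually contain):
-- Count and total size of tbl_Content rows with no matching tbl_File row.
-- FileId and Content are assumed column names.
SELECT COUNT(*) AS OrphanRows, SUM(DATALENGTH(c.Content)) AS OrphanBytes
FROM tbl_Content c
WHERE NOT EXISTS (SELECT 1 FROM tbl_File f WHERE f.FileId = c.FileId)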
The cleanup job (prc_DeleteUnusedContent) seems to be running nightly, and running it against the database manually doesn't remove any orphans. I see in the log for the cleanup job that it failed on 3/16, which is the morning after I destroyed the large amount of data. The error was due to a full transaction log.
Could that error be the reason I'm left with all this orphaned data that can't be deleted? How can I permanently destroy this unneeded content?
See this thread on the MSDN forums:
http://social.msdn.microsoft.com/Forums/en-SG/tfsversioncontrol/thread/5f3f8916-1c6d-46f7-9dae-2cdaeaee98db
As noted by Chandru from the TFS team:
This is due to a bug in TFS 2008 - where if the nightly job failed, it caused this problem. Please contact microsoft support and they can provide you a fix for it. Please do not attempt to fix this yourself.
After a long back and forth with the folks at Microsoft, it turns out this is a known bug in the failure of some cleanup processes. There's a knowledge base article here: http://support.microsoft.com/kb/974596
The hotfix described is obsolete if you've already installed TFS 2010.
In addition, the tech at Microsoft had me run a DELETE statement on tbl_Content to delete all records which didn't point to an actual tbl_File record. I'd post the SQL, but I don't want to be responsible for anyone copying and pasting it. It's pretty self-explanatory and as easy as you think.