I have deployed TFS 2019 (Azure DevOps Server 2019). My problem is that the Tfs_Configuration database grows very fast (± 1 GB per month).
Currently the table dbo.tbl_Content is about 10 GB. I do not use this TFS instance daily, so it is a mystery to me why it grows so fast,
and especially why it is the Tfs_Configuration database that grows and not the collection database.
What is stored in this table? Is there a way to extract the data from this table into a more readable format?
Has anyone had the same problem with the configuration database?
Thanks.
EDIT: it is the table dbo.tbl_Content in the Tfs_Configuration database that grows fast.
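For anyone checking the same thing, a generic size query like the one below (built-in DMVs only, nothing TFS-specific; run it in the Tfs_Configuration database) confirms which tables take the most space:
-- Per-table size report using standard DMVs; run in the Tfs_Configuration database
SELECT
    s.name + '.' + t.name AS TableName,
    SUM(CASE WHEN ps.index_id IN (0, 1) THEN ps.row_count ELSE 0 END) AS TableRows,
    SUM(ps.reserved_page_count) * 8 / 1024 AS ReservedMB
FROM sys.dm_db_partition_stats AS ps
JOIN sys.tables  AS t ON t.object_id = ps.object_id
JOIN sys.schemas AS s ON s.schema_id = t.schema_id
GROUP BY s.name, t.name
ORDER BY ReservedMB DESC;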
Related
Is there a way to increase the limit of items that can be plotted in a trend (historical) chart, such as tests or bugs? In our setup it appears to be hardcoded by Microsoft to a limit of 1000 items. Is there a file on the server where this limit can be changed?
Thanks in advance.
Unfortunately, this is by design and there is no supported way to change the limitation. There is already a UserVoice suggestion that you can vote for:
https://visualstudio.uservoice.com/forums/330519-visual-studio-team-services/suggestions/16438078-i-cannot-plot-in-a-chart-widget-a-query-with-more
Generally, we do not recommend running anything directly against the TFS databases.
However, if you insist on changing the limit, you can run the SQL query below against the Tfs_Configuration database (back up the databases first):
Open SSMS and connect to the TFS SQL server, navigate to the Tfs_Configuration DB -> Right Click -> New Query, and run the query below:
exec prc_SetRegistryValue 1, '#\Service\WorkItemTracking\Settings\MaxTrendChartTimeSliceResultSize\', 5000
Please note that changing the limit may affect performance when using TFS.
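If you want to verify the stored value afterwards, the setting ends up in the configuration database's registry table. The table and column names below (tbl_RegistryItems with ParentPath, ChildItem, RegValue) are an assumption based on the usual TFS schema, so confirm them in your own instance before relying on the query:
-- Assumed table/columns: dbo.tbl_RegistryItems(ParentPath, ChildItem, RegValue); verify first
SELECT ParentPath, ChildItem, RegValue
FROM dbo.tbl_RegistryItems
WHERE ChildItem LIKE '%MaxTrendChartTimeSliceResultSize%';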
We are hosting a new Umbraco 7 site with the database on SQL Server 2012 Express. The database seems to be growing rapidly. It is currently about 5 GB, and SQL Server 2012 Express has a maximum database size of 10 GB, so we are starting to get a little concerned. The cmsPropertyData and cmsPreviewXml tables seem to be taking up the most space, at about 2.5 GB each. Is there any general housekeeping that needs to be done to keep these tables under control? We have tried shrinking the database, but there is no unused space.
Any advice?
I don't know for sure that this is the problem in your case, but Umbraco creates a new version of a content node each time the node is saved. This can cause your database to grow rapidly. There is a handy package that automatically removes older versions, called "UnVersion": https://our.umbraco.org/projects/website-utilities/unversion/
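If you want to confirm that old versions are the culprit before installing anything, a rough count of stored versions per content node gives a quick picture. This is only a sketch against the Umbraco 7 schema as I recall it; the column names contentNodeId and versionId are assumptions, so check your own cmsPropertyData definition first:
-- Sketch: versions stored per content node (assumed columns contentNodeId, versionId)
SELECT contentNodeId,
       COUNT(DISTINCT versionId) AS VersionCount
FROM dbo.cmsPropertyData
GROUP BY contentNodeId
ORDER BY VersionCount DESC;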
Our TFS database size was growing really quickly, and I figured out that the issue was with the tbl_TestResult table. I am not sure why it is growing that fast. It seems there is a record for each test case. In our case, more than 1000 test cases are run on each check-in, and we average 20 check-ins a day, so that is around 20,000 records per day.
My question is: can I manually delete the records in that table? Will it cause any problems for TFS other than losing the test results?
UPDATE:
We have TFS 2015.
Deleting data manually or changing the schema in any way would result in your TFS instance no longer being supportable by Microsoft. It effectively invalidates your warranty.
In TFS 2015 you can change the Test Management retention settings on the Team Project admin page. The default is 30 days, but someone may have changed it.
Other than that, this is the normal metadata that is collected as part of your ALM/DevOps platform.
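If you want to confirm that the retention setting is actually trimming the table over time, a plain sp_spaceused check (a standard system procedure, nothing TFS-specific) run in the collection database every so often will show the trend:
-- Standard system procedure; run in the collection database and compare the output over time
EXEC sp_spaceused N'dbo.tbl_TestResult';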
This was "fixed" in TFS 2017 because they changed the schema for the test results https://www.visualstudio.com/en-us/news/releasenotes/tfs2017-relnotes#test. Brian Harry mentioned a 8X reduction in storage from the new schema https://blogs.msdn.microsoft.com/bharry/2016/09/26/team-foundation-server-15-rc-2-available/
I am developing with Microsoft SQL Server 2008 R2 Express.
I have a stored procedure that uses temp tables and returns some processed data, usually within 1 second.
Over a few months, my DB has gathered a lot of data, almost reaching the 10 GB limit. At that point, this particular stored procedure started taking as long as 5 minutes for the same input parameters. After I emptied some of the huge tables in the DB, it went back to normal.
After this incident, I am worried that my stored procedure needs more space in the DB than it should. How can I be sure? Any leads?
Thanks already,
Jyotsna
Follow this article.
Another old-school way is to run sp_who2 and check the SPID related to your database to see its CPU and IO usage.
To validate, run DBCC INPUTBUFFER(spid).
Also check the STATISTICS output of the SP in the original scenario, without purging data from the tables:
SET STATISTICS IO ON
EXEC [YourSPName]
See the logical reads; also refer to the article.
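Putting those pieces together, a minimal diagnostic session might look like the following. YourSPName and its parameter are placeholders for your own procedure; STATISTICS TIME is an extra I find useful alongside STATISTICS IO:
-- Capture IO and timing statistics for one execution of the procedure
SET STATISTICS IO ON;
SET STATISTICS TIME ON;

EXEC dbo.YourSPName @SomeParam = 1;   -- placeholder procedure and parameter

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;

-- From another session, identify the spid with sp_who2 while the procedure runs,
-- then inspect the statement it is executing:
-- EXEC sp_who2;
-- DBCC INPUTBUFFER(<spid>);   -- replace <spid> with the session id reported by sp_who2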
We're trying to reload the TFS 2010 SSAS cube, but when the warehouse is processing, we get an exception in the log. It is important to note that the cube does not fail completely, but loads incompletely. For example, we have data up to June 2011, but not beyond.
Microsoft.TeamFoundation.Server.WarehouseException: OLE DB error: OLE
DB or ODBC error: Snapshot isolation transaction failed in database
'Tfs_Warehouse' because the object accessed by the statement has been
modified by a DDL statement in another concurrent transaction since
the start of this transaction. It is disallowed because the metadata
is not versioned. A concurrent update to metadata can lead to
inconsistency if mixed with snapshot isolation.; 42000
This is our future production system, and contains data migrated over from a TFS 2008 system. The database size of the version control repository is close to 200GB, so we're dealing with a relatively large instance of TFS.
We could remove snapshot isolation from our warehouse, but I'm a little concerned about doing this, as I can't find anything that tells me whether snapshot isolation is required on the TFS_Warehouse database. Any insight would be appreciated.
From this source (see the TempDB and RCSI section), it would seem that removing snapshot isolation could be a big mistake.
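Before changing anything, you can confirm how the warehouse database is currently configured. sys.databases is a standard catalog view, so this check is read-only and safe:
-- Read-only check of the current snapshot isolation settings
SELECT name,
       snapshot_isolation_state_desc,
       is_read_committed_snapshot_on
FROM sys.databases
WHERE name = 'Tfs_Warehouse';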
Here are some other options, ordered from easy to difficult from an implementation standpoint:
Increase the size of TempDB to accommodate longer-running SELECTs (a sketch is included below).
Decrease the partition size of the measure groups in the cube. You may want to run a profiler trace during SSAS processing first to determine which measure groups take the longest, and chop those partitions up first.
Implement an incremental processing strategy.
Here's a link providing more information on cube partitioning...
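For the first option, pre-growing TempDB is a one-line change. The logical file name tempdev below is the default on most installations and the 20 GB figure is purely illustrative; check sys.master_files and size it to your own workload:
-- Pre-grow the TempDB data file (logical name and size are assumptions, verify in sys.master_files)
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, SIZE = 20480MB);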