We're about to deploy TFS 2012 - mainly for source control at this stage, but we hope it will ultimately provide a full workflow for us.
Can anybody point me towards a sizing guide for the database aspect?
The short answer is "how long is a piece of string?".
To qualify that short answer a bit, there is obviously an overhead to begin with. TFS is much better than SourceSafe in that only changes are stored, so you don't get a different version of the file in the database for each check-in. This is a good thing.
That said, the answer to this question really depends on how often you're going to be checking in, the amount of change between those check-ins, and the overall size of all the projects and their related files.
To give you a metric: on our TFS server, the supporting TFS databases plus our "collection" database, which have been running for 6 months now with regular daily check-ins, are hitting 800 MB.
Now, unless you head a massive project, I can't see you going over half a TB anytime soon. That said, given that TFS is SQL Server based, should you need to upgrade in the future it's not as much of a nightmare as you may think.
According to Microsoft's documentation:
Fewer than 250 users: 1 disk at 7.2k rpm (125 GB)
250 to 500 users: 1 disk at 10k rpm (300 GB)
500 to 2,200 users: 1 disk at 7.2k rpm (500 GB)
2,200 to 3,600 users: 1 disk at 7.2k rpm (500 GB)
However, as Moo-Juice said, the real-world numbers are dependent on how you actually use TFS.
Also keep in mind that you'll want to create, store, and maintain backups of your TFS databases.
We have an on-premises TFS 2018 server with Update 2 RC.
The performance of the web interface is very slow. We work with Kanban and Scrum boards and they take a few seconds to load; furthermore, moving a task from one column to another can take a few seconds.
The operating system is Windows Server 2016, with 16 GB RAM and 4 processors at 2.6 GHz. It's a virtual machine. It had been working properly until last month.
I have checked and changed:
The Elasticsearch (search) extension has been deactivated because it takes more than 5 GB of memory.
The antivirus has been disabled.
The processor is below 15% and the memory is around 50%.
The IIS worker process is taking 1.2 GB of RAM.
I have deleted the "TfsData\ApplicationTier_fileCache" folder with no success.
We are out of ideas; any help would be really appreciated.
Thanks in advance
It's difficult to know what's going on there with this information alone; an approach that would let you perform a better analysis is the typical divide-and-conquer approach. In your case:
Put the build agent on another machine.
Install TFS and SQL Server on different machines; you can try a clean install of TFS using your current SQL Server instance.
Once you have done this you will:
- Have a more stable system.
- Be able to analyze which part is creating problems.
I have a server with 48 GB of memory and a SQL Server Analysis Services (Tabular mode) instance, 2016 Standard Edition SP1 CU7, installed on it.
I can deploy a tabular model from visual studio.
I can manually run an XMLA script:
{
  "refresh": {
    "type": "full",
    "objects": [
      {
        "database": "MyCube"
      }
    ]
  }
}
But when I run that script from a SQL Agent job, I get this error:
the JSON DDL request failed with the following error: Failed to execute XMLA. Error returned: 'There's not enough memory to complete this operation. Please try again later when there may be more memory available.'.. at Microsoft.AnalysisServices.Xmla.XmlaClient.CheckForSoapFault
The memory usage before processing is about 4 GB; it increases while processing the cube, but when it hits about 18.5 GB, it fails.
Does anybody know a solution?
Analysis Services Tabular instances in SQL Server 2016 are limited to 16 GB of RAM if you are running Standard Edition, as documented here. Enterprise Edition removes that cap.
When you do a full process, the working copy of the cube is kept online while a shadow copy is processed in the background. When the shadow copy is ready, it replaces the working copy. Basically this means that at processing time you need roughly twice as much memory as the size of your cube. This can be an issue when you have the 16 GB per-instance limitation with SSAS Standard Edition.
One solution is to do a process with clearValues first, which empties the cube, and then do the full process, as sketched below. More details here: http://byobi.com/2016/12/how-much-ram-do-i-need-for-my-ssas-tabular-server/
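As a minimal sketch, reusing the "MyCube" database name from the question, the two-step approach can be run as two separate TMSL commands (for example, two SQL Agent job steps). First, clearValues, which drops the data so the subsequent full process does not have to hold two copies of the cube in memory:
{
  "refresh": {
    "type": "clearValues",
    "objects": [
      {
        "database": "MyCube"
      }
    ]
  }
}
Then the full process:
{
  "refresh": {
    "type": "full",
    "objects": [
      {
        "database": "MyCube"
      }
    ]
  }
}
Keep in mind that the model has no data available for queries between the clearValues step and the completion of the full process.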
Another option is to play with the Memory\VertiPaqPagingPolicy setting of the SSAS server. See more details here https://www.jamesserra.com/archive/2012/05/what-happens-when-a-ssas-tabular-model-exceeds-memory/ and here https://www.sqlbi.com/articles/memory-settings-in-tabular-instances-of-analysis-services/
And of course another solution is to upgrade to Enterprise Edition.
To follow up on Greg's comments: I am facing a similar issue at work, and the workaround was to do table-level refreshes instead of a database refresh. I created 2 SQL jobs. My tabular model had 40 tables, so based on the sizes of the tables I refresh some of the tables in one job and the rest in the other job. You can create more than 2 SQL jobs with fewer tables per job if you wish. This puts less load on the memory; a sketch of a table-level refresh is shown below.
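For example, one job step could run a TMSL command like the following, with the second job listing the remaining tables. This is only a sketch; the table names "Sales" and "Customers" are hypothetical and not from the original post:
{
  "refresh": {
    "type": "full",
    "objects": [
      {
        "database": "MyCube",
        "table": "Sales"
      },
      {
        "database": "MyCube",
        "table": "Customers"
      }
    ]
  }
}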
You can process small subsets of your data by partitioning your tables; this can be handled in SSMS. This article provides a nice overview of how to achieve this.
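Once a table is partitioned, a refresh can target a single partition at a time, which keeps the memory needed per processing step small. A minimal sketch, assuming a hypothetical "Sales" table with a "Sales 2017" partition:
{
  "refresh": {
    "type": "full",
    "objects": [
      {
        "database": "MyCube",
        "table": "Sales",
        "partition": "Sales 2017"
      }
    ]
  }
}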
We are hosting a new Umbraco 7 site with the database on SQL Server 2012 Express. The database seems to be growing rapidly. It's currently about 5 GB, and SQL Server 2012 Express has a maximum database size of 10 GB, so we are starting to get a little concerned. The cmsPropertyData and cmsPreviewXml tables seem to be taking up the most space, at about 2.5 GB each. Is there any general housekeeping that needs to be done to keep these tables under control? We have tried shrinking the database, but there is no unused space.
Any advice?
I don't know for sure this is the problem in your case, but Umbraco creates a new version of the content node each time the node is saved. This can cause your database to grow rapidly. There's a cool package that automatically removes older versions called "UnVersion". https://our.umbraco.org/projects/website-utilities/unversion/
We have around 32 datamarts loading 200+ tables, of which 50% are on an Oracle 11g database, 30% are on 10g, and the remaining 20% are flat files.
Lately we have been facing performance issues while loading the datamarts.
Database parameters as well as network parameters look fine, and as throughput is decreasing drastically we are now of the opinion that it is Informatica which has the problem.
Recently, when throughput had gone down and the server was at 90% utilization, the Informatica application was restarted, and the performance thereafter was a little better than before.
So my question is: should we have an Informatica restart as a scheduled activity? Does a restart actually improve the performance of the application, or are there other things that can play a role here?
What you have here is a systemic problem, but you have not established which component(s) of the system are the cause.
Are all jobs showing exactly the same degradation in performance? If not, what is the common characteristic of those that are? Not all jobs will have the same reliance on the Informatica server -- some will be dependent more on the performance of their target system(s), some on their source system(s), so I would be amazed if all showed exactly the same level of degradation.
What you have here is an exercise in data gathering, and then turning that data into useful information.
If you can isolate the problem to only certain jobs then I would take a log file from a time when the system is performing well, and from a time when it is not, and compare them directly, looking for differences in the performance of their components. You can also look at any database monitoring tools for changes in execution plan.
Rebooting servers? Maybe, but that is not necessarily the solution -- the real problem is the lack of data you have to diagnose your system.
Yes, it is good to do a restart every quarter.
It will refresh the Integration Service cache.
Delete files from the cache and storage directories before you restart.
Since you said you have recently seen some reduced performance, it might be due to various reasons.
Some tips that may help:
1. Ensure all indexes are in a valid and compiled state.
2. If you are calling a procedure via a workflow, check the EXPLAIN plan and cost to ensure it is not doing a full table scan (the cost should be low).
3. Gather stats on the source and target tables (especially those which have deletes) using DBMS_STATS - this will help with defragmentation by reclaiming the unallocated space.
It is always good to have housekeeping scheduled weekly to do the above checks on indexes, remove temp/unnecessary files, and gather stats (analyze indexes and tables).
Some best practices are covered here: performance tips
Our TFS2012 deployment is currently quite simple:
A virtual Windows Server with TFS, SharePoint, Reporting, SQL Server, and builds all on the same machine!
Is using the TFS admin console backup tool and/or backup of the entire machine enough to recover from a disaster?
There are no clear-cut criteria; you may take a look at the TFS planning and disaster recovery guidance for a more comprehensive answer.
In short, you must at least be sure that:
Backups are saved on different hardware, and possibly copied to a remote location
Along with your backups you have the recovery instructions and install packages
This guarantees that you are able to recover, but it can take a long time, depending on the disaster impact (someone deleted a record vs. the server room has burnt) and the size of your data.