I have a server with 48 GB of memory and SQL Server Analysis Services 2016 (Tabular mode), Standard edition with SP1 CU7, installed on it.
I can deploy a tabular model from Visual Studio.
I can manually run an XMLA script:
{
  "refresh": {
    "type": "full",
    "objects": [
      {
        "database": "MyCube"
      }
    ]
  }
}
But when I run that script from a SQL Agent job, I get this error:
the JSON DDL request failed with the following error: Failed to execute XMLA. Error returned: 'There's not enough memory to complete this operation. Please try again later when there may be more memory available.'.. at Microsoft.AnalysisServices.Xmla.XmlaClient.CheckForSoapFault
Memory usage before processing is about 4 GB; it increases while the cube is processing, but when it hits about 18.5 GB, it fails.
Does anybody know a solution?
Analysis Services Tabular instances in SQL Server 2016 are limited to 16 GB of RAM if you are running Standard edition, as documented here. Enterprise edition removes that cap.
When you do a Process Full, the working copy of the cube stays online while a shadow copy is processed in the background. When the shadow copy is ready, it replaces the working copy. This basically means that at processing time you need roughly twice as much memory as the size of your cube, which can be an issue with the 16 GB per-instance limit of SSAS Standard edition.
One solution is to do a process with clearValues first, which empties the cube, and then do the full process. More details here: http://byobi.com/2016/12/how-much-ram-do-i-need-for-my-ssas-tabular-server/
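As an illustration, the clearValues step can use the same TMSL shape as the script above; run it as a separate job step, followed by the full refresh:

{
  "refresh": {
    "type": "clearValues",
    "objects": [
      {
        "database": "MyCube"
      }
    ]
  }
}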
Another option is to play with the Memory \ VertiPaqPagingPolicy setting of the SSAS server. See more details here https://www.jamesserra.com/archive/2012/05/what-happens-when-a-ssas-tabular-model-exceeds-memory/ and here https://www.sqlbi.com/articles/memory-settings-in-tabular-instances-of-analysis-services/
And of course another solution is to upgrade to Enterprise Edition.
To follow up on Greg's comments, I am facing a similar issue at work, and the workaround was to do table refreshes instead of a database refresh. I created 2 SQL jobs. My tabular model had 40 tables, so based on the sizes of the tables, I refresh x tables in one job and y tables in the other job. You can create more than 2 SQL jobs and have fewer tables per job if you wish. This puts less load on the memory. A sketch of a per-table refresh is shown below.
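For example, one job step could refresh just a subset of tables like this (the table names are placeholders; the database name matches the script above):

{
  "refresh": {
    "type": "full",
    "objects": [
      {
        "database": "MyCube",
        "table": "Table1"
      },
      {
        "database": "MyCube",
        "table": "Table2"
      }
    ]
  }
}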
You can process small subsets of your data by partitioning your tables; this can be handled in SSMS. This article provides a nice overview of how to achieve this. A sketch of a partition-level refresh is shown below.
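For instance, refreshing a single partition would look something like this (the table and partition names are hypothetical):

{
  "refresh": {
    "type": "full",
    "objects": [
      {
        "database": "MyCube",
        "table": "Sales",
        "partition": "Sales 2017"
      }
    ]
  }
}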
We have an on-premises TFS 2018 server with Update 2 RC.
The performance of the web interface is very slow. We work with Kanban and Scrum boards, and they take a few seconds to load; furthermore, moving a task from one column to another can take a few seconds.
The operating system is Windows Server 2016, with 16 GB RAM and 4 processors at 2.6 GHz. It's a virtual machine. It had been working properly until last month.
I have checked and changed:
- The Elasticsearch extension has been deactivated because it was taking more than 5 GB of memory.
- The antivirus has been disabled.
- The processor is below 15% and the memory is around 50%.
- The IIS worker process is taking 1.2 GB of RAM.
- I have deleted the "TfsData\ApplicationTier_fileCache" folder, with no success.
We are out of ideas; any help would be really appreciated.
Thanks in advance
It's difficult to know what's going on there from this information alone. An approach that would let you perform a better analysis is the typical divide-and-conquer approach; in your case:
- Put the agent on another machine.
- Install TFS and SQL Server on different machines; you can try a clean install of TFS using your current SQL Server.
Once you have done this you will:
- Have a more stable system.
- Be able to analyze which part is creating problems.
For demo purposes, I am running Neo4j in a low-memory environment: a laptop with 4 GB of RAM, of which 1644 MB is used for video memory, leaving only 2452 MB available. It's also running SQL Server, our WCF services, and our clients, so there's little memory left for Neo4j.
I'm running LOAD CSV Cypher scripts via REST from a C# service. There are more than 20 scripts, and they work well in a server environment. I've written code to paginate so that they run in smaller batches. I've reduced the batch size very low (25 CSV rows), and a given script may do 300 batches, but I continue to get "Java heap space" errors at some point.
I've tried configuring Neo4j with a relatively large heap (640 MB), which is about all the available RAM, plus setting cache_type to none, and it gets much further before I get the Java heap space error. What I don't understand is, in that case, why does memory grow that much? Also, until I restart the Neo4j service, I get these Java heap space errors quickly. The batch size doesn't seem to appreciably affect how much memory is used.
However, when I run the application with these settings, query performance becomes very slow because of the cache settings.
I am running this on a Windows 7 laptop with 4 GB RAM, using Neo4j 2.2.1 Community Edition.
Thoughts?
Perhaps you can share your LOAD CSV statement and the other queries you run.
I think you just ran into this:
http://markhneedham.com/blog/2014/10/23/neo4j-cypher-avoiding-the-eager/
So PROFILE or EXPLAIN your queries and rework them so they don't build up that much intermediate state. We can help if you share your statements.
And you should use PERIODIC COMMIT 100.
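A minimal sketch of what that looks like in front of a LOAD CSV statement (the file path, label, and property names are placeholders, assuming a CSV with id and name columns):

USING PERIODIC COMMIT 100
LOAD CSV WITH HEADERS FROM "file:///C:/data/people.csv" AS row
MERGE (p:Person {id: row.id})
SET p.name = row.name;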
For the memory settings, something like:
heap=512M
dbms.pagecache.memory=200M
keep_logical_logs=false
cache_type=none
http://console.neo4j.org runs Neo4j in memory, putting up to 50 instances in a single gigabyte of memory, so it should be doable.
I am developing with Microsoft SQL Server 2008 R2 Express.
I have a stored procedure that uses temp tables and outputs some processed data usually within 1 second.
Over a few months, my DB has gathered a lot of data, almost reaching the 10 GB limit. At that point, this particular stored procedure started taking as much as 5 minutes for the same input parameters. After I emptied some of the huge tables in the DB, it went back to normal.
After this incident, I am worried that my stored procedure needs more space in the DB than necessary. How can I be sure? Any leads?
Thanks already
Jyotsna
Follow this article.
Another old-school way is to run sp_who2, check the SPID related to your database, and see the CPU and IO usage.
To validate, run DBCC INPUTBUFFER(spid).
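A minimal sketch of that check (53 is just an example SPID; use the one you find for your database):

-- List active sessions and note the SPID, CPUTime, and DiskIO for your database
EXEC sp_who2;

-- Show the last statement submitted by that session
DBCC INPUTBUFFER(53);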
Also check the STATISTICS of the SP in the original scenario, without purging data from the tables:
SET STATISTICS IO ON
EXEC [YourSPName]
Then look at the logical reads in the Messages output; also refer to the article.
We're about to deploy TFS 2012, mainly for source control at this stage, but it will hopefully ultimately provide a full workflow for us.
Can anybody point me towards a sizing guide for the database aspect?
The short answer is "how long is a piece of string?".
To qualify that short answer a bit, there is obviously an overhead to begin with. TFS is much better than SourceSafe in that only changes are stored, so you don't get a different version of the file in the database for each check-in. This is a good thing.
That said, the answer to this question really depends on how often you're going to be checking in, the amount of change between those check-ins, and the overall size of all the projects and their related files.
To give you a metric: on our TFS server, the supporting TFS databases plus our "collection" database, which has been running for 6 months now with regular daily check-ins, are hitting 800 MB.
Now, unless you head a massive project, I can't see you going over half a TB anytime soon. That said, given that TFS is SQL Server based, should you need to upgrade in the future it's not as much of a nightmare as you may think.
According to Microsoft's documentation:
- Fewer than 250 users: 1 disk at 7.2k rpm (125 GB)
- 250 to 500 users: 1 disk at 10k rpm (300 GB)
- 500 to 2,200 users: 1 disk at 7.2k rpm (500 GB)
- 2,200 to 3,600 users: 1 disk at 7.2k rpm (500 GB)
However, as Moo-Juice said, the real-world numbers are dependent on how you actually use TFS.
Also keep in mind that you'll want to also create, store, and maintain backups of your TFS databases.
We're trying to reload the TFS 2010 SSAS cube, but when the warehouse is processing, we get an exception in the log. It is important to note that the cube does not fail completely, but loads incompletely. For example, we have data up to June 2011, but not beyond.
Microsoft.TeamFoundation.Server.WarehouseException: OLE DB error: OLE DB or ODBC error: Snapshot isolation transaction failed in database 'Tfs_Warehouse' because the object accessed by the statement has been modified by a DDL statement in another concurrent transaction since the start of this transaction. It is disallowed because the metadata is not versioned. A concurrent update to metadata can lead to inconsistency if mixed with snapshot isolation.; 42000
This is our future production system, and contains data migrated over from a TFS 2008 system. The database size of the version control repository is close to 200GB, so we're dealing with a relatively large instance of TFS.
We could remove snapshot isolation from our warehouse, but I'm a little concerned about doing this, as I can't find anything that tells me whether snapshot isolation is required on the TFS_Warehouse database. Any insight would be appreciated.
From this source (see the TempDB and RCSI section), it would seem that removing snapshot isolation could be a big mistake.
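Before changing anything, you can confirm what is currently enabled on the warehouse database with a read-only query like the following (it only inspects the settings, it doesn't change them):

SELECT name,
       snapshot_isolation_state_desc,
       is_read_committed_snapshot_on
FROM sys.databases
WHERE name = 'Tfs_Warehouse';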
Here are some other options, ordered from easy to difficult from an implementation standpoint:
- Increase the size of TempDB to accommodate longer-running SELECTs (see the sketch after this list).
- Decrease the partition size of the measure groups in the cube. You may want to run a Profiler trace during SSAS processing first to determine which measure groups are taking the longest, and chop those partitions up first.
- Implement an incremental processing strategy.
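For the first option, a minimal T-SQL sketch of growing TempDB (the logical file name tempdev is the default for the primary data file; adjust the name and size to your environment):

ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, SIZE = 10GB);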
Here's a link providing more information on cube partitioning...