Memory error: Allocation failure: Excel PowerPivot add-in - excel-2010

The table I am trying to refresh using the Excel 2010 PowerPivot add-in currently has 2,670,634 rows and is approximately 473 MB when I export the query results from SQL Server to a CSV file. The operating system is 64-bit, but my installed Excel and PowerPivot add-in are both 32-bit.
I get the "Memory error: Allocation failure" when I refresh my PowerPivot workbook to retrieve the entire table. On the last successful refresh I was able to get 2,153,464 rows into PowerPivot, but today I am unable to refresh and always get the memory error. I am a bit confused by this: I thought I had yet to exceed the maximum row limit of PowerPivot 2010, which I understood to be 1,999,999,997. What can I do to make this work in 32-bit Excel?
Thank you in advance for your tips.

PowerPivot on 32-bit Excel can be a memory hog, and the add-in itself needs roughly 1 GB of memory to work with. More importantly, a 32-bit process is limited to about 2 GB of addressable memory regardless of how much RAM the machine has, so you can easily hit allocation failures long before you reach PowerPivot's theoretical row limit. If you can't use the 64-bit version, the simplest thing for starters is to keep filtering the data: reduce the number of rows and the number of columns you import. After that, you'll have to look at the calculations being done; the more rows and data you have, the more expensive they get.

Related

SSAS Tabular Table Processing Memory issue

When I try to edit an SSAS Tabular project with Visual Studio 2015 in the table properties section, I get an error like
"The operation has been cancelled because there is not enough memory available for the application. If using a 32-bit version of the product, consider upgrading to the 64-bit version or increasing the amount of memory available on the machine."
when the row count reaches almost 5 million rows.
Is there a permanent solution for this issue?
These errors can be caused by an incorrect memory setting in SSAS.
According to the MSDN Memory Properties document (https://msdn.microsoft.com/en-us/library/ms174514.aspx):
Values between 1 and 100 represent percentages of Total Physical Memory or Virtual Address Space, whichever is less. Values over 100 represent memory limits in bytes.
If an admin assumes a value above 100 is in KB or MB rather than bytes, they may enter a setting that is far too low for SSAS to operate properly, producing this "not enough memory" error even though the server still has plenty of memory available.
The solution is to correct the memory settings so the limits are expressed as intended, either as a percentage of server memory or in bytes.
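To see how easily this goes wrong, here is a small sketch of the interpretation rule quoted above (the helper function and the 32 GB server are purely illustrative; only the "1-100 means percent, over 100 means bytes" rule comes from the documentation):

import math

# Illustration of the SSAS memory-limit rule quoted above:
# values 1-100 are a percentage of total physical memory (or virtual address
# space, whichever is less - only physical memory is modeled here);
# anything over 100 is taken literally as a number of bytes.
def effective_limit_bytes(setting, total_physical_bytes):
    if 1 <= setting <= 100:
        return total_physical_bytes * setting // 100
    return int(setting)

total = 32 * 1024**3  # assume a 32 GB server

print(effective_limit_bytes(80, total))            # 80 -> ~25.6 GB (80% of RAM)
print(effective_limit_bytes(16384, total))         # meant as "16384 MB"? SSAS reads 16,384 bytes (~16 KB)
print(effective_limit_bytes(16 * 1024**3, total))  # 16 GB written out in bytes - interpreted correctly

A value like 16384, intended as megabytes, silently becomes a 16 KB limit, which produces exactly the "not enough memory" symptom described above.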

Neo4j inserting large files - huge difference in time between

I am inserting a set of files (PDFs, each about 2 MB) into my database.
Inserting 100 files at once takes about 15 seconds, while inserting 250 files at once takes 80 seconds.
I am not quite sure why this big difference is happening, but I assume the free memory fills up somewhere between those two amounts. Could this be the problem?
If there is any more detail I can provide, please let me know.
I'm not exactly sure what is happening on your side, but it really looks like what is described in the Neo4j performance guide.
It could be:
Memory issues
If you are experiencing poor write performance after writing some data (initially fast, then massive slowdown) it may be the operating system that is writing out dirty pages from the memory mapped regions of the store files. These regions do not need to be written out to maintain consistency, so to achieve the highest possible write speed that type of behavior should be avoided.
Transaction size
Are you using multiple transactions to upload your files? Many small transactions result in a lot of I/O writes to disc and should be avoided. Too big transactions can result in OutOfMemory errors, since the uncommitted transaction data is held on the Java Heap in memory.
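In practice that means driving the upload in moderately sized batches, one transaction per batch. Here is a minimal sketch using the official Python driver (5.x API); the connection details, the File label and properties, and the batch size of 50 are assumptions for illustration, not something taken from your setup:

from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

def insert_batch(tx, batch):
    # One transaction per batch: not one huge transaction (heap pressure),
    # not one transaction per file (I/O overhead).
    tx.run(
        "UNWIND $rows AS row CREATE (f:File {name: row.name, size: row.size})",
        rows=batch,
    )

# Hypothetical file metadata standing in for your 250 PDFs.
files = [{"name": f"doc_{i}.pdf", "size": 2 * 1024 * 1024} for i in range(250)]

batch_size = 50
with driver.session() as session:
    for start in range(0, len(files), batch_size):
        session.execute_write(insert_batch, files[start:start + batch_size])

driver.close()

Tune the batch size empirically: larger batches amortize the per-commit I/O, smaller batches keep the amount of uncommitted data held on the heap small.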
If you are on Linux, the guide also suggests some OS-level tuning to improve performance; you can look up the details on that page.
Also, if you are on Linux, you can check memory usage yourself during the import with this command:
$ free -m
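If you would rather log memory from inside the import script itself, a small cross-platform alternative to free is the psutil package (this is just an illustrative sketch, not something the performance guide prescribes):

import psutil

proc = psutil.Process()  # the current import process

def log_memory(label):
    rss_mb = proc.memory_info().rss / (1024 * 1024)
    free_mb = psutil.virtual_memory().available / (1024 * 1024)
    print(f"{label}: process RSS {rss_mb:.0f} MB, system available {free_mb:.0f} MB")

log_memory("before batch")
# ... run one insert batch here ...
log_memory("after batch")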
I hope this helps!

Spreadsheet Gear -- Generating large report via copy and paste seems to use a lot of memory and processor

I am attempting to generate a large workbook-based report with 3 supporting worksheets of 100, 12,000 and 12,000 rows, and a final, all-formula-based output sheet that ends up representing about 120 entities at 100 rows apiece. I generate a template range, then copy and paste it for each entity, replacing the entity ID cell after pasting each new range. It is working fine, but I noticed that memory usage in the IIS Express process is approximately 500 MB and it is using 100% of the processor as well.
Are there any guidelines for generating workbooks in this manner?
At least in terms of memory utilization, it would help to have some comparison, maybe against Excel, of how much memory is used simply to have the resulting workbook open. For instance, if you were to open the final report in both Excel and the "SpreadsheetGear 2012 for Windows" application (available in the SpreadsheetGear folder under the Start menu), what does the Task Manager report for each of these applications in terms of memory consumption? This may provide some insight as to whether the memory utilization you are seeing in the actual report-building process is unusually high (is there a lot of extra overhead in your routine?), or just typical given the size of the workbook you are generating.
In terms of CPU utilization, this one is a bit more difficult to pinpoint and is certainly dependent on your hardware as well as implementation details in your code. Running the VS Profiler against your routine would certainly be interesting to look into, if you have this tool available to you. Generally speaking, the CPU time could potentially be broken up into a couple of broad categories: CPU cycles used to "build" your workbook and CPU cycles used to "calculate" it. It could be helpful to determine which of these is dominating the CPU. One way to do this might be to, if possible, ensure that calculations don't occur until you are finished actually generating the workbook. In fact, avoiding any unnecessary calculations could potentially speed things up...it depends on the workbook, though. You could avoid calculations by setting IWorkbookSet.Calculation to Manual mode and not calling any of the IWorkbook's "Calculate" methods (Calculate/CalculateFull/CalculateFullRebuild) until you are finished with this process. If you don't have access to the Profiler, maybe set some timers and Console.WriteLines and monitor the Task Manager to see how your CPU fluctuates during different parts of your routine. With any luck you might be able to better isolate which part of the routine is taking the most time.

sybase stored procedure slow after deleting rows

We deleted table rows in order to improve performance since we had a very large database. The database shrank to 50% of its original size, but the stored procedure became even slower after the delete. It used to run within 3 minutes and now it is taking 3 hours. No changes were made to the procedure.
We ran the same procedure again on the old database (before the delete) and it worked fine. All other procedures run faster after the database size reduction. What could be the problem?
Deleting rows in the database doesn't truly free up space on its own.
Space usually isn't really freed up until you run a command that reorganizes the data stored in the table. In SAP ASE the reorg command can be run on the database with options such as reclaim_space, rebuild and forwarded_rows. Logically it's a lot like defragmenting a hard drive: the data is reorganized to use less physical space.
In SQL Anywhere the command is REORGANIZE TABLE, or can be found on the Fragmentation tab in Sybase Central. This will also help with index fragmentation.
The other thing that frequently needs to be done after large changes to the database is updating the table or index statistics. The query optimizer builds query plans based on the table statistics stored in system tables. After large transactions, or a large number of small transactions, stale statistics can lead the optimizer to make less optimal choices.
In SQL Anywhere this can be done using Sybase Central.
You may also want to check out the Monitoring and improving database performance section of the SQL Anywhere documentation. It covers these procedures, and much more.
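If you prefer to script these maintenance steps instead of running them from Sybase Central, a rough sketch for SAP ASE over ODBC could look like this (the DSN, credentials, table name and the use of pyodbc are assumptions; the reorg and update statistics commands are the ones discussed above):

import pyodbc

# Autocommit, because reorg and update statistics are maintenance commands,
# not something to wrap in a user transaction.
conn = pyodbc.connect("DSN=my_ase_server;UID=maint_user;PWD=secret", autocommit=True)
cur = conn.cursor()

table = "big_table"  # the table that had the mass delete

cur.execute(f"reorg reclaim_space {table}")    # give back the space freed by the deletes
cur.execute(f"reorg forwarded_rows {table}")   # fix rows that moved off their original page
cur.execute(f"update statistics {table}")      # refresh stats so the optimizer replans
                                               # (or "update index statistics" for all index columns)

cur.close()
conn.close()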

SQLite: On-Disk vs In-Memory Database

We are trying to integrate SQLite into our application and populate it as a cache. We plan to use it as an in-memory database, and this is our first time using it. Our application is C++ based.
Our application interacts with the master database to fetch data and performs numerous operations. These operations generally involve one table, which is quite large.
We replicated this table in SQLite, and the following are our observations:
Number of fields: 60
Number of records: 100,000
As the data population starts, the application's memory shoots up drastically from 120 MB to ~1.4 GB. At this point our application is idle and not performing any major operations, but normally, once the operations start, memory utilization climbs even higher. With SQLite as an in-memory DB and this high memory usage, we don't think we will be able to support this many records.
When I create the DB on disk, the DB file comes to ~40 MB, but the memory usage of the application still remains very high.
Q. Is there a reason for this high usage? All buffers have been cleared and, as said before, the DB is not in memory.
Any help would be deeply appreciated.
Thanks and Regards
Sachin
You can use the VACUUM command to free up space by reducing the size of the SQLite database.
If you are doing a lot of insert/update operations, the DB size may grow; running VACUUM rebuilds the database file and reclaims that space.
SQLite uses memory for things other than the data itself. It holds not only the data, but also the connections, prepared statements, query cache, query results, etc. You can read more on SQLite Memory Allocation and tweak it. Make sure you are properly destroying your objects too (sqlite3_finalize(), etc.).
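To make that concrete, here is a minimal Python sketch that applies both suggestions, capping the page cache and reclaiming space with VACUUM (the 64 MB cap and the file name are arbitrary assumptions; in your C++ code the equivalents are the same PRAGMA/VACUUM statements plus the sqlite3_finalize()/sqlite3_close() cleanup):

import sqlite3

conn = sqlite3.connect("cache.db")

# Bound the page cache so the library does not grow without limit;
# a negative value means "at most this many KiB of cache".
conn.execute("PRAGMA cache_size = -65536")  # cap the page cache at ~64 MB

# ... bulk inserts / updates happen here ...

conn.commit()

# Reclaim the space left behind by deleted or updated rows.
conn.execute("VACUUM")

# Releasing statements, cursors and connections is what actually frees
# SQLite's memory (the sqlite3_finalize()/sqlite3_close() step in C).
conn.close()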
