We run a number of SQL Server 2012 Standard and SQL Server 2014 Standard instances on Windows Server 2012, each with different storage and memory. When we build large indexes, three of the servers intermittently report
Error Code: 802; There is insufficient memory available in the buffer pool
roughly once every three or four months. The same index builds run fine on servers with the same SQL Server version but only a third of the memory and CPU.
SQL Server 2012 Standard
Server 1: 110 GB allocated to SQL out of 128 GB, version 11.0.6 – "Error Code: 802; There is insufficient memory available in the buffer pool" reported during heavy index builds.
Server 2: 78 GB allocated to SQL out of 96 GB, version 11.0.6 – same error 802 reported during heavy index builds.
Server 3: 18 GB allocated out of 24 GB, VM, version 11.0.5 – no indexing error.
Server 4: 24 GB allocated out of 32 GB, physical, version 11.0.5 – no indexing error.
SQL Server 2014 Standard
Server 1: 56 GB allocated out of 64 GB, physical, version 12.0.5 – same error 802 reported during heavy index builds.
Server 2: 18 GB allocated out of 24 GB, VM, version 12.0.5 – no indexing error.
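If the failures are buffer-pool pressure from the index sort itself, one hedged avenue is to check which memory clerks dominate when the error fires, and to move the index sort out of the buffer pool into tempdb. A sketch only; `IX_Example` and `dbo.ExampleTable` are placeholder names, and the options should be verified against your edition and workload:

```sql
-- Which memory clerks are largest right now? (SQL Server 2012+ DMV)
SELECT TOP (10) [type], SUM(pages_kb) / 1024 AS size_mb
FROM sys.dm_os_memory_clerks
GROUP BY [type]
ORDER BY size_mb DESC;

-- SORT_IN_TEMPDB moves the sort work off the buffer pool;
-- a lower MAXDOP reduces per-operation memory demand.
ALTER INDEX IX_Example ON dbo.ExampleTable
REBUILD WITH (SORT_IN_TEMPDB = ON, MAXDOP = 2);
```

Comparing the clerk sizes on a failing server against one of the smaller servers that succeeds may show where the memory is actually going.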
I was running a large delete query and got an out-of-memory error, after which the database shut down automatically. I restarted it, but it still shows as 'offline' in Neo4j Desktop.
Here are the log entries from the restart:
2021-08-01 23:47:03.506+0000 INFO Starting...
2021-08-01 23:47:06.804+0000 INFO ======== Neo4j 4.2.1 ========
Exception in thread "neo4j.Scheduler-1" java.lang.OutOfMemoryError: Java heap space
WARNING: sun.reflect.Reflection.getCallerClass is not supported. This will impact performance.
2021-08-01 23:47:22.505+0000 INFO Sending metrics to CSV file at /Users/my_user/Library/Application Support/Neo4j Desktop/Application/relate-data/dbmss/dbms-########-####-####-####-##########/metrics
2021-08-01 23:47:22.524+0000 INFO Bolt enabled on localhost:7687.
2021-08-01 23:47:23.836+0000 INFO Remote interface available at http://localhost:7474/
2021-08-01 23:47:23.837+0000 INFO Started.
Likewise, when I attempt to connect from a browser, it tells me that the Neo4j database is unavailable.
In the log I can see a Java out-of-memory error. Why would this appear? Does Neo4j queue or cache incomplete queries, and how do I go about clearing that if I can't access the server?
The data is only test data, so I don't need to save it. I do need to understand whether and how I can fix this, since I am putting the product through its paces for a new project.
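If the heap was simply too small for the transaction state of the big delete, raising it in neo4j.conf before restarting is the usual first step. A sketch with example sizes only, assuming Neo4j 4.x setting names; tune the values to your machine's RAM:

```
# neo4j.conf – example sizes, not recommendations
dbms.memory.heap.initial_size=4g
dbms.memory.heap.max_size=4g
dbms.memory.pagecache.size=2g
```

For large deletes, batching the work (for example with APOC's `apoc.periodic.iterate`) keeps the accumulated transaction state from exhausting the heap in the first place.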
While running SQL on a Greenplum cluster of 10 servers we are encountering this issue:
Detail: VM protect failed to allocate 517656 bytes from system, VM Protect 7672 MB available
Here are a few resources to read up on.
Docs:
https://gpdb.docs.pivotal.io/6-0/install_guide/prep_os.html
Knowledge base article:
https://community.pivotal.io/s/article/Out-of-memory--VM-Protect-errors
Calculator:
https://greenplum.org/calc/
Hope this helps.
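The knowledge-base article and calculator above derive `gp_vmem_protect_limit` from host RAM, swap, and the number of acting primary segments per host. A minimal Python sketch of that published formula; the constants apply to hosts with up to 256 GB RAM, so verify them against the calculator before applying anything:

```python
# Hedged sketch of the gp_vmem formula from the Greenplum docs/KB above.
# For hosts with more than 256 GB RAM the docs use a different constant.

def gp_vmem_gb(ram_gb, swap_gb):
    """Usable virtual memory for Greenplum on one host, in GB."""
    return ((swap_gb + ram_gb) - (7.5 + 0.05 * ram_gb)) / 1.7

def gp_vmem_protect_limit_mb(ram_gb, swap_gb, acting_primaries):
    """Per-segment gp_vmem_protect_limit in MB, given the number of acting
    primary segments on the host (count failover scenarios, not just the
    normal layout)."""
    return round(gp_vmem_gb(ram_gb, swap_gb) * 1024 / acting_primaries)

# Example: 64 GB RAM, 64 GB swap, 8 acting primaries per host
print(gp_vmem_protect_limit_mb(64, 64, 8))  # 8832
```

Note that the divisor should be the worst-case number of primaries a host can end up hosting after a mirror failover, which is why the KB article stresses "acting" primaries.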
We are running a three-node Neo4j causal cluster (deployed on a Kubernetes cluster), and our leader seems to have trouble replicating transactions to its followers. We see the following error/warning in debug.log:
2019-04-09 16:21:52.008+0000 WARN [o.n.c.c.t.TxPullRequestHandler] Streamed transactions [868842--868908] to /10.0.31.11:38968 Connection reset by peer
java.io.IOException: Connection reset by peer
at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47)
at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93)
at sun.nio.ch.IOUtil.write(IOUtil.java:51)
at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:471)
at io.netty.channel.socket.nio.NioSocketChannel.doWrite(NioSocketChannel.java:403)
at io.netty.channel.AbstractChannel$AbstractUnsafe.flush0(AbstractChannel.java:934)
at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.forceFlush(AbstractNioChannel.java:367)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:639)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:884)
at java.lang.Thread.run(Thread.java:748)
at org.neo4j.helpers.NamedThreadFactory$2.run(NamedThreadFactory.java:110)
In our application this error surfaces as:
Database not up to the requested version: 868969. Latest database version is 868967
The errors occur when we apply WRITE loads to the cluster using an asynchronous worker process that reads chunks of data from a queue and pushes them into the database.
We have looked into the obvious culprits:
Networking bandwidth limits are not reached
No obvious peaks in CPU / memory
No other Neo4j exceptions (specifically, no OOMs)
We have unbound/rebound the cluster and performed a validity check on the databases (they're all fine)
We tweaked causal_clustering.pull_interval to 30s, which seems to improve performance but does not alleviate the issue
We removed resource constraints on the db to rule out Kubernetes throttling bugs (without reaching actual CPU limits); this also did nothing to alleviate the issue
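For reference, the pull-interval tweak lives in neo4j.conf, and a related catch-up timeout may be worth checking when followers are disconnected mid-stream. The second setting name below is taken from the Neo4j 3.x causal-clustering documentation and should be verified against your exact version:

```
# neo4j.conf – causal clustering tuning (verify names for your version)
causal_clustering.pull_interval=30s
# Raised if followers time out while catching up on large transaction batches
causal_clustering.catch_up_client_inactivity_timeout=10m
```

Given that the error is a TCP "Connection reset by peer" under Kubernetes, it is also worth ruling out idle-connection timeouts on any Service or proxy sitting between the cluster members.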
I have an 8-core / 16 GB RAM server.
When I load-test this server, the CPU reaches 100% and the process crashes partway through.
My landing page sends 250+ HTTP requests per user.
The server is configured with nginx.
Please comment if more detail is required and I will edit this post.
An MVC4 / Mono 3.2.1 application is running on Debian with Nginx and the fastcgi-mono-server backend. It is started as:
MONOSERVER=$(which fastcgi-mono-server45)
WEBAPPS="/:/var/www/html/test/"
${MONOSERVER} /applications=${WEBAPPS} /socket=tcp:127.0.0.1:9000 &
For testing, the browser's F5 key is held down for 30 seconds.
After that there is a long delay, and the browser shows the page-loading icon.
After the delay, the message
504 Gateway Time-out
nginx/0.7.67
appears, and the top command (output below) shows the mono fastcgi server using 200% CPU (two cores) for a long time, possibly forever.
The only way to stop this is to kill the mono fastcgi server manually and restart it.
How can I make mono fastcgi return pages immediately and not use so much CPU?
If the same application is hosted with Apache and mod_mono, holding and releasing F5 in the browser returns the page immediately, and CPU usage drops to 0 as soon as F5 is released.
top - 00:40:38 up 1:43, 3 users, load average: 16.49, 15.92, 15.35
Tasks:  59 total,   1 running,  58 sleeping,   0 stopped,   0 zombie
Cpu(s): 34.5%us, 65.5%sy, 0.0%ni, 0.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Mem:  2097152k total,  744828k used, 1352324k free,      0k buffers
Swap:       0k total,       0k used,       0k free, 120120k cached
  PID VIRT  RES  SHR %CPU %MEM   TIME+  COMMAND
 4366 500m 121m  21m  198  5.9 6:24.45 /opt/mono-3.2/bin/mono /opt/mono-3.2/lib/mono/4.5/fastcgi- ....
Update
The answer to
Bad gateway 502 after small load test on fastcgi-mono-server through nginx and ServiceStack
recommends using the same number of threads in nginx and in the fastcgi server.
I'm using the default nginx and mono fastcgi server configuration, where both probably allow 1024 threads.
Does mono actually allow fewer threads than that? Could that cause the issue, given that the fastcgi mono server is very old?
Can adding /multiplex to the fastcgi mono server fix this?
Is it reasonable to decrease the number of threads for this not-very-powerful VPS?
Are there mono settings that could cause the failure?
Nothing is written to the fastcgi log file; how can I diagnose this?
Additional information is posted at https://stackoverflow.com/questions/20512978/how-to-limit-mono-197-cpu-usage-in-mono-fastcgi-server
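For comparison while diagnosing, this is roughly the shape of an nginx server block fronting the fastcgi-mono-server started above. A sketch only: the server_name is a placeholder, the timeout values are illustrative, and the fastcgi_params path may differ on your distribution. Bounding the timeouts at least makes the backend fail fast instead of queueing work behind a saturated process:

```nginx
# Hypothetical nginx config for the fastcgi-mono-server on 127.0.0.1:9000
server {
    listen 80;
    server_name example.com;           # placeholder
    root /var/www/html/test;

    location / {
        include /etc/nginx/fastcgi_params;
        fastcgi_pass 127.0.0.1:9000;
        # Fail fast rather than letting requests pile up behind a busy backend
        fastcgi_connect_timeout 5s;
        fastcgi_read_timeout 30s;
    }
}
```

If the backend still spins at 200% CPU after the queue drains, the problem is inside mono's request handling rather than in nginx, which is consistent with the Apache/mod_mono comparison above.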