Here's my situation:
From my Grails controller, I call a service, which queries a database read-only, transforms the result into JSON, and returns the result.
Specs are: JDK 1.6, Tomcat 5.5, Grails 1.3.4, DB via JNDI
Tomcat's MaxPermSize is set to 256m and Xmx to 128m.
EDIT: Increasing the memory should be the last resort.
The service method:
String queryDB(String queryString) {
    StringWriter writer = new StringWriter()
    JSonBuilder json = new JSonBuilder(writer)
    def queryResult = SomeDomain.findAllBySomePropIlike("%${queryString}%")
    json.whatever {
        results {
            queryResult.eachWithIndex { qr, i ->
                // insert domain w/ properties
            }
        }
    }
    queryResult = null
    return writer.toString()
}
Now, when queryString == 'a' the result set is huge and I end up with this:
[ERROR] 03/Nov/2010@09:46:39,604 [localhost].[/grails-app-0.1].[grails] - Servlet.service() for servlet grails threw exception
java.lang.OutOfMemoryError: GC overhead limit exceeded
at org.codehaus.groovy.util.ComplexKeyHashMap.init(ComplexKeyHashMap.java:81)
at org.codehaus.groovy.util.ComplexKeyHashMap.<init>(ComplexKeyHashMap.java:46)
at org.codehaus.groovy.util.SingleKeyHashMap.<init>(SingleKeyHashMap.java:29)
at groovy.lang.MetaClassImpl$Index.<init>(MetaClassImpl.java:3381)
at groovy.lang.MetaClassImpl$MethodIndex.<init>(MetaClassImpl.java:3364)
at groovy.lang.MetaClassImpl.<init>(MetaClassImpl.java:140)
at groovy.lang.MetaClassImpl.<init>(MetaClassImpl.java:190)
at groovy.lang.MetaClassImpl.<init>(MetaClassImpl.java:196)
at groovy.lang.ExpandoMetaClass.<init>(ExpandoMetaClass.java:298)
at groovy.lang.ExpandoMetaClass.<init>(ExpandoMetaClass.java:333)
at groovy.lang.ExpandoMetaClassCreationHandle.createNormalMetaClass(ExpandoMetaClassCreationHandle.java:46)
at groovy.lang.MetaClassRegistry$MetaClassCreationHandle.createWithCustomLookup(MetaClassRegistry.java:139)
at groovy.lang.MetaClassRegistry$MetaClassCreationHandle.create(MetaClassRegistry.java:122)
at org.codehaus.groovy.reflection.ClassInfo.getMetaClassUnderLock(ClassInfo.java:165)
at org.codehaus.groovy.reflection.ClassInfo.getMetaClass(ClassInfo.java:182)
at org.codehaus.groovy.runtime.callsite.ClassMetaClassGetPropertySite.<init>(ClassMetaClassGetPropertySite.java:35)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.createClassMetaClassGetPropertySite(AbstractCallSite.java:308)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.createGetPropertySite(AbstractCallSite.java:258)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.acceptGetProperty(AbstractCallSite.java:245)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.callGetProperty(AbstractCallSite.java:237)
at org.codehaus.groovy.grails.plugins.web.filters.FilterToHandlerAdapter.accept(FilterToHandlerAdapter.groovy:196)
at org.codehaus.groovy.grails.plugins.web.filters.FilterToHandlerAdapter$accept.callCurrent(Unknown Source)
at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCallCurrent(CallSiteArray.java:44)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.callCurrent(AbstractCallSite.java:143)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.callCurrent(AbstractCallSite.java:159)
at org.codehaus.groovy.grails.plugins.web.filters.FilterToHandlerAdapter.preHandle(FilterToHandlerAdapter.groovy:107)
at org.springframework.web.servlet.HandlerInterceptor$preHandle.call(Unknown Source)
at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:40)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:117)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:133)
at org.codehaus.groovy.grails.plugins.web.filters.CompositeInterceptor.preHandle(CompositeInterceptor.groovy:42)
at org.codehaus.groovy.grails.web.servlet.GrailsDispatcherServlet.doDispatch(GrailsDispatcherServlet.java:282)
One approach I found on the web concerns leaks in Hibernate and domain validation, explained here and in detail here. I'm just about to test it, but I don't know whether this is really the solution to my problem and (if it is) at which point it's best to clean up GORM.
Or is there another memory leak in my code?
Ideas anyone?
EDIT: As far as I can tell, the exception occurs at the point where the finder method is called. Does that mean GORM isn't able to handle the amount of data returned by the database? Sorry for asking like a greenhorn, but I have never encountered such a problem, even with very large result sets.
Sun documented this OutOfMemoryError as follows (the original link is no longer valid):
The parallel / concurrent collector will throw an OutOfMemoryError if too much time is being spent in garbage collection: if more than 98% of the total time is spent in garbage collection and less than 2% of the heap is recovered, an OutOfMemoryError will be thrown. This feature is designed to prevent applications from running for an extended period of time while making little or no progress because the heap is too small. If necessary, this feature can be disabled by adding the option -XX:-UseGCOverheadLimit to the command line.
In other words, that error is a feature: a hint to increase available memory (which, as you've mentioned, is not a preferred option in your case). Some developers consider this feature unhelpful in certain use cases, so look into turning it off.
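If you do decide to disable it, a minimal sketch for Tomcat (assuming your Tomcat version sources bin/setenv.sh at startup; otherwise add the flag wherever you already set -Xmx and MaxPermSize):

# bin/setenv.sh -- picked up by catalina.sh on startup (assumption)
export CATALINA_OPTS="$CATALINA_OPTS -XX:-UseGCOverheadLimit"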
Another option to those already suggested would be to work in pages of results: instead of pulling everything back at once, pass paging parameters to the finder (or use Criteria) and page through the results yourself. Here's a naive, pseudocode-flavored example:
def offset = 0
def max = 50
def batch = SomeDomain.findAllBySomePropIlike("%${queryString}%", [max: max, offset: offset])
while (batch) {
    appendBatchToJsonResult(batch) // stand-in for code that streams each page into the JSON output
    offset += max
    batch = SomeDomain.findAllBySomePropIlike("%${queryString}%", [max: max, offset: offset])
}
You could tweak the batch size according to your memory constraints; this avoids having to raise the heap settings.
Edit
I just re-read Fletch's answer and noticed that he mentioned this as a solution and you commented on it. I'll leave mine here since it's got an example, but if Fletch adds a paging example to his, I'll delete this answer since he mentioned it before I did.
If you don't want to increase memory, maybe you should only search for strings longer than a certain length. This looks like some kind of type-ahead/suggestion function; maybe you could start searching only once there are three characters or so (see the sketch below). Otherwise, paged results are an option.
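A minimal sketch of such a guard in a controller action (the action name, parameter name, and service wiring are assumptions):

def suggest = {
    // Hypothetical action: refuse to hit the database until the term is long enough.
    String q = params.q?.trim()
    if (!q || q.length() < 3) {
        render(contentType: 'application/json', text: '{"results":[]}')
        return
    }
    render(contentType: 'application/json', text: someService.queryDB(q))
}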
By the way, architecturally the controller is intended to handle interaction with the outside world and its formats; i.e., you would probably want your service to return the objects and your controller to do the JSON conversion. But this won't solve your current problem.
I would also suggest that you return only the properties you need for this type-ahead query, and fetch the full domain object only when the user asks for the actual data.
The JSON builder is going to create a lot of objects and consume memory. For example, in a type-ahead for users, I would return only basic name information and an id instead of the complete object.
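A hedged sketch of fetching just those columns with a criteria projection (assuming the display text lives in a name property; adjust to your actual domain):

def rows = SomeDomain.withCriteria {
    ilike('someProp', "%${queryString}%")
    projections {
        property('id')
        property('name') // assumption: 'name' is what the suggestion list shows
    }
    maxResults(50) // keep the suggestion list small
}
// Each row comes back as an Object[] of [id, name] instead of a full domain instance.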
In a Grails app using a MySQL database (installed through Homebrew), I got this same error, oddly enough, only when running the app without having started the MySQL server first. So simply running
mysql.server start
fixed the problem for me.
I'm trying to log to file / Seq an event that is an API response from a web service. I know it's not best practice, but under some circumstances I need to do so.
The JSON saved on disk is around 400 KB. To be honest, I could exclude two parts of it (images returned as base64). I think I should use a destructuring logger; is that right?
I've tried increasing the Seq limit to 1 MB, but the content is not saved even to the log file, so I think that's not the problem... I use Microsoft Logging (the ILogger interface) with Serilog.AspNetCore.
Is there a way I can handle such a scenario?
Thanks in advance
You can log a serialized value by using the @ destructuring operator on the property name. For example,
Log.Information("Created {@User} on {Created}", exampleUser, DateTime.Now);
As you've noted, it tends to be a bad idea unless you are certain that the value being serialized will always be small and simple.
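Given that, if you mainly want to drop the two base64 image parts, one option is to destructure a hand-built projection instead of the raw response. A minimal sketch (the response variable and property names are assumptions, not your actual model):

using Microsoft.Extensions.Logging;

// 'apiResponse' stands in for your deserialized web-service response.
var slim = new
{
    apiResponse.Id,
    apiResponse.Status
    // the two base64 image properties are deliberately omitted
};
logger.LogInformation("API response {@Response}", slim);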
If I expose a VIEW
CREATE VIEW myView AS
SELECT ...
FROM ...
via xsodata
service namespace "oData" {
    entity "mySchema"."myView" as "myView";
}
and GET /myView for the first time after VIEW creation, the performance is very low.
However, after performing the same request again (and every time after that), the performance is what I want it to be.
Questions:
Why?
How to avoid the first long-running request?
Already tried:
Executing the SQL profiler output (without statement preparation) in HANA Studio's SQL console always gives good performance.
Table hot loading (LOAD myTable ALL;) had no effect.
Update
We found out the "Why" part: xs-engine runs the query as a prepared statement even if there are no parameters in the request. On first execution (within the user's context) the query gets prepared, resulting in an entry in M_SQL_PLAN_CACHE (SELECT * FROM M_SQL_PLAN_CACHE WHERE USER_NAME = 'myUser'). Clearing the plan cache (ALTER SYSTEM CLEAR SQL PLAN CACHE) makes the oData request slow again, leading to the assumption that the performance gap lies in the re-preparation of the query.
We are now stuck with the second question: how to avoid that? Our approach of marking certain plan cache entries for recompilation (ALTER SYSTEM RECOMPILE SQL PLAN CACHE ENTRY 123) just invalidated the entry and did not update it automatically...
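For reference, the statements involved in this diagnosis, collected in one place (the entry id 123 is just an example):

-- inspect the prepared-statement entries for the requesting user
SELECT * FROM M_SQL_PLAN_CACHE WHERE USER_NAME = 'myUser';

-- clearing the cache makes the next oData request slow again
ALTER SYSTEM CLEAR SQL PLAN CACHE;

-- invalidates the entry but does not re-prepare it automatically
ALTER SYSTEM RECOMPILE SQL PLAN CACHE ENTRY 123;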
I'm not too sure you can remove the long first execution, but you can try changing the view to a Calculation View executed in the SQL Engine.
HANA has been heavily optimized for its Calculation Views, and the plan cache should work better with them, possibly reducing the first execution time significantly. Also, the plan cache of Calculation Views should be shared between users (since _SYS_REPO is the one who generates them).
If you use the scripted version, I believe you could reuse a lot of your current SQL, but you can also try the graphical approach.
Let us know if you had any luck. Modeling with Big Data is always a surprise.
I'm currently querying an entity using projections to avoid returning the entire object.
It works flawlessly; however, when looking at the actual response from the server, I see the same type definition repeated for every single element.
For example:
["$type":"_IB_4NdB_p8LiaC3WlWHHQ_pZzrAC_plF4[[System.Int32, mscorlib],[System.String, mscorlib],[System.String, mscorlib],[System.String, mscorlib],[System.Nullable`1[[System.Int32, mscorlib]], mscorlib],[System.Int32, mscorlib],[System.Single, mscorlib]], _IB_4NdB_p8LiaC3WlWHHQ_pZzrAC_plF4_IdeaBlade"
Now, given that every item in the result is sharing the same projection for that query, is there a way to have Breeze only define the Type Description ONCE instead of for every element?
It may not seem like a big deal, but as the result size increases, those bytes start to add up. At the moment there is little difference between returning the projected values and the entire entity itself, due to this overhead.
NOTE: As it turns out, since we use dynamic compression of JSON in our real environments, this is actually a minor issue: 200 KB responses turn into less than 20 KB of traffic after gzip compression. I will probably close this question unless someone has something to add that could be of use to others.
Update 18 September 2014
I decided to "cure" the problem of the long ugly $type names in serialized data for both dynamic types from projection queries and anonymous types created for an endpoint such as "Lookups".
There's a new Breeze Labs NuGet package, "Breeze.DynamicTypeRenaming" (search for "Breeze Dynamic Type Renaming"). This adds two files to your Web API project's "Controllers" folder. One is a CustomBreezeConfig, which replaces Breeze's default config and resets the Json.NET "Binder" setting with the new DynamicTypeRenamingSerializationBinder; this binder does the type-name magic.
Just install the nuget package in your Web API project and it should "just work". In your case, the $type value would become "_IB_4NdB_p8LiaC3WlWHHQ_pZzrAC_plF4, Dynamic".
See an example of it in the "DocCode" sample.
As always, this is a Breeze Lab product, not part of the core Breeze product. It is offered "as is" with no promise of support. I'm pretty sure it's good and has no adverse side-effects. No guarantees. I'm sure you'll let me know if there's a problem.
That IS atrocious, isn't it! That's the C#-generated anonymous type. You can get rid of it by casting into a custom DTO type (see the sketch below).
I don't know if it is actually harmful. I hate looking at it in any case.
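A minimal sketch of the DTO approach (the DTO, context, and property names are invented for illustration):

using System.Linq;

// A named DTO gives the serializer a short, stable type name
// instead of the compiler-generated anonymous-type string.
public class PersonLookupDto
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// Hypothetical query: project server-side into the DTO before serializing.
// 'dbContext.Persons' stands in for your actual entity set.
IQueryable<PersonLookupDto> lookups =
    dbContext.Persons.Select(p => new PersonLookupDto { Id = p.Id, Name = p.Name });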
Lately I've been thinking about adding a JSON.NET IContractResolver that detects such uglies and turns them into shorter uglies. It wouldn't be hard; I just haven't had the time.
Why not write it yourself and contribute it to the community? We'd be grateful! :-)
Using dynamic compression of the JSON output has turned this into a non-issue, at least for now, since all that repeated content is heavily compressed server-side.
I have a very large block of SQL that I am trying to execute inside of Delphi, against a Microsoft SQL Database. I am getting this:
Multiple-step OLE DB operation generated errors.
Check each OLE DB status value, if available. No work was done.
The script has multiple SQL IF statements followed by BEGIN...END blocks, with invocations of stored procedures, declarations of variables, and EXEC calls inside them. Finally it returns some of the variable values via SELECT @Variable1 AsName1, @Variable2 AsName2, ....
The multiple-step error above comes in as an OLEException from ADO, not from the Delphi code, and happens after all the stored-procedure executions have occurred; I therefore suspect it fires this OLE exception when it reaches the final SELECT @Variable1 AsName1, ... stage that returns a few variable values for my program to see.
I know about this retired/deprecated MS KB article, and this is unfortunately not my actual issue:
http://support.microsoft.com/kb/269495
In short, that KB article says to fix a registry key and remove "Persist Security Info" from the connection string. That's not my problem. I'm asking this question because I found the answer already, and I think someone else who gets stuck here might not want to waste several hours hunting for potential causes. Anyone who wants to add another answer with different options is welcome; I'll select yours if it's reproducible, and if necessary I'll turn this one into a Community Wiki, because there could be a dozen obscure causes for this "ADO recordset is in a bad mood and is unhappy with your T-SQL" exception.
I have found several potential causes listed in various sources of documentation. The original KB article in the question suggests removing 'Persist Security Info' from my ADO connection string; however, in a standalone test application with just a TADOConnection and a single TADOQuery, the presence or absence of Persist Security Info had no effect, nor did explicitly setting it to True or False.
What DID fix it was removing this CursorType declaration:
CursorType=ctKeyset
What I have learned is that bidirectional ADO datasets are fine for SELECT * FROM TABLE queries in ADO, but not so fine for complex SQL scripts.
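In code, the fix amounts to choosing a forward-only cursor before the query is opened (a sketch; the component name is arbitrary):

// Forward-only cursors skip the keyset bookkeeping that chokes on
// multi-statement scripts; set this before calling Open or ExecSQL.
ADOQuery1.CursorType := ctOpenForwardOnly;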
A potential source of this error is updating a char field with too large a value.
Example: a form has an edit box with its max length property set to 20 characters, and the Oracle database table has a field defined as char(10).
Updating with 10 characters (or fewer) works fine, while updating with more than 10 characters causes the 'Multiple-step...' error on ADOQuery.UpdateBatch().
You also have to know that a CHAR value is always blank-padded to its full declared length. Consider trimming the value from the edit box; CHAR behaves differently than the VARCHAR2 type.
If you have a query with parameters, check that the number of parameters in the query matches the script!
I don't really know how to describe why it happens or how to reproduce it. I have a method that downloads some data from an external site and saves it to a document. If I look for the document later (e.g., via find), it's gone.
If I'm in the console and I have the object assigned to a variable beforehand, I can access the data via that variable, but .find won't find it, and the collection count is one fewer.
Why would this happen? Any ideas?
This is hosted on Heroku using MongoLab. I was thinking maybe the database is running out of space, but the stats page seems to indicate otherwise.
Am I reading it wrong? Here is the db.stats() output:
{
    "serverUsed": "A_URL_HERE",
    "db": "DB_NAME_HERE",
    "collections": 11,
    "objects": 116295,
    "avgObjSize": 3300.993611075283,
    "dataSize": 383889052,
    "storageSize": 474427392,
    "numExtents": 49,
    "indexes": 9,
    "indexSize": 4259696,
    "fileSize": 520093696,
    "nsSizeMB": 16,
    "ok": 1
}
It could be a lot of different problems, but the most likely is that you've hit your quota. You can see an explanation of each of the different storage stats in the MongoLab UI. The fact that storageSize and fileSize are near the quota means you're close, if you're not already hitting it.
No matter the source of the problem, you should make sure you're using safe mode. It's the default when creating connections through your driver with the new MongoClient method. It will check for errors before moving on, whereas in the past that was not the default behavior. There's a good chance the server is returning an error (which it will do when you hit quota) but the driver isn't checking for it.
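For example, if you're on the Ruby driver (a sketch against the 1.x-era driver this advice dates from; the host, database, and collection names are placeholders):

require 'mongo'

# Old style: Mongo::Connection used unacknowledged writes by default,
# so errors such as 'over quota' were dropped silently.
# New style: MongoClient acknowledges writes by default (w: 1) and raises instead.
client = Mongo::MongoClient.new('HOST_HERE', 27017)
db = client.db('DB_NAME_HERE')
db['docs'].insert({'source' => 'external-site'}) # raises on a write error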
As always, you can write us directly at support@mongolab.com and we'd be more than happy to help!