If I expose a VIEW
CREATE VIEW myView AS
SELECT ...
FROM ...
via xsodata
service namespace "oData" {
entity "mySchema"."myView" as "myView";
}
and GET /myView for the first time after the view has been created, performance is very poor.
However, after performing the same request again (and every time after that), the performance is what I want it to be.
Questions:
Why?
How to avoid the first long-running request?
Already tried:
Execution of the SQL profiler output (without statement preparation) in HANA Studio's SQL console always gives good performance
Table hot loading (LOAD myTable ALL;) had no effect
Update
We found out the "Why" part: the XS engine runs the query as a prepared statement even if there are no parameters in the request. On first execution (within the user's context) the query gets prepared, resulting in an entry in M_SQL_PLAN_CACHE (SELECT * FROM M_SQL_PLAN_CACHE WHERE USER_NAME = 'myUser'). Clearing the plan cache (ALTER SYSTEM CLEAR SQL PLAN CACHE) makes the oData request slow again, leading to the assumption that the performance gap lies in the re-preparation of the query.
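For completeness, this is roughly the check we use to confirm that behaviour (PREPARATION_COUNT is an assumed column name and may differ between HANA revisions):
-- Inspect the cached plan for the view query; run before and after clearing the cache.
SELECT STATEMENT_STRING, USER_NAME, PREPARATION_COUNT   -- PREPARATION_COUNT assumed, verify on your revision
FROM M_SQL_PLAN_CACHE
WHERE USER_NAME = 'myUser'
AND STATEMENT_STRING LIKE '%myView%';
-- Clearing the cache reproduces the slow first request:
ALTER SYSTEM CLEAR SQL PLAN CACHE;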
We are now stuck with the second question: how to avoid that? Our approach of marking certain plan cache entries for recompilation (ALTER SYSTEM RECOMPILE SQL PLAN CACHE ENTRY 123) just invalidated the entry and did not update it automatically...
I'm not too sure you can remove the long first-execution time, but you can try changing the view to a Calculation View executed in the SQL Engine.
HANA has been heavily optimized for its Calculation Views, and the plan cache should work faster with them, maybe reducing the first execution time significantly. Also, the plan cache of Calculation Views should be shared between users (since _SYS_REPO is the one who generates them).
If you use the script version, I believe you could reuse a lot of your current SQL, but you can also try the graphical approach.
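For instance, a scripted calculation view body could look roughly like this (schema, table and column names are placeholders; your current SELECT would go in its place):
-- Rough sketch of a scripted calculation view (SQLScript); names are placeholders.
-- The generated column view lives under "_SYS_BIC" and can be exposed via xsodata like the plain view.
var_out = SELECT "COL1", "COL2"
          FROM "mySchema"."someTable";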
Let us know if you had any luck. Modeling with Big Data is always a surprise.
I need to refresh data in a TFDQuery which is in cached updates mode.
To simplify my problem, let's suppose my MS Access database is composed of two tables that I have to join:
LABTEST(id_test, dat_test, id_client, sample_typ)
SAMPLEType(id, SampleName)
In the Delphi application, I am using a TFDConnection and one TFDQuery (in cached updates), in which I join the two tables with this SQL:
"SELECT T.id_test, T.dat_test, T.id_client, T.sample_typ, S.SampleName
FROM LABTEST T
left JOIN SAMPLEType S ON T.sample_typ = S.id"
In my application, I also use a DBGrid to show the result of the query,
and a button to edit the field 'sample_typ', like this:
qr.Edit;
qr.FieldByName('sample_typ').AsString:=ce2.text;
qr.Post;
Editing the 'sample_typ' field works fine, but the corresponding 'SampleName' field is not updated in the grid afterwards.
In fact it is not refreshed!
The problem is here: if I refresh the query, an exception is raised: "Cannot refresh dataset. Cached updates must be committed or canceled and batch mode terminated before refreshing".
If I commit the updates, the data will be sent to the database, and I don't want that; I need to keep the data in the cache till the end of the operation.
Also, if I leave cached updates mode, the data will be refreshed in the grid but will be sent to the database after qr.Post, and I don't want that either.
I need to refresh the data in the cache. What is the solution?
Thanks in advance.
The issue comes down to the fact that you haven't told your UI that there is any dependency between the two fields - it clearly can't know how to redo the join itself without resubmitting it, so if you don't want to send the updates and reload, you will have a problem.
It's not clear exactly what you are trying to do, but these two ideas may help you.
If you are not going to edit the fields in the SAMPLEType table (S), then load the values from that table into a lookup table. You can load this into a TFDMemTable, using an adapter which loads from a query. Your UI controls can then show the value based on the values looked up in your local TFDMemTable. Depending on the UI control, this might be a 'lookup field' or some such.
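Roughly, the lookup field configuration would look like this (normally set up at design time in the Fields editor; SampleNameLookup, mtSampleTypes and the field names are placeholders, not part of your project):
// SampleNameLookup is a new lookup field on the main query qr;
// mtSampleTypes is a TFDMemTable filled from "SELECT id, SampleName FROM SAMPLEType".
SampleNameLookup.FieldKind := fkLookup;
SampleNameLookup.KeyFields := 'sample_typ';        // key field in the joined query
SampleNameLookup.LookupDataSet := mtSampleTypes;   // local in-memory lookup table
SampleNameLookup.LookupKeyFields := 'id';
SampleNameLookup.LookupResultField := 'SampleName';
The grid then shows SampleNameLookup, which is re-evaluated locally whenever sample_typ changes, without touching the database.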
You may also be able to store your main data in a TFDMemTable with an adapter - you can specify different TFDCommands to read the whole recordset, refresh a record, and update, insert, or delete a record. The TFDCommands can act on multiple tables for joined recordsets like this. That would automatically refresh the individual record for you when you post it.
I have an application that writes to my neo4j database. Immediately after this write, another application performs a query and expects the previously written item as the result.
This doesn't happen, I don't get any result from my query.
Introducing a 100ms artificial delay between the write and the query yields the expected result, but that's not feasible.
I'm writing in TypeScript using neo4j-driver. I'm awaiting every promise the API throws at me. I even promisified the session.close function and await that too (not sure if that does anything).
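To illustrate the pattern, the write side looks roughly like this (connection details, labels and the Cypher statement are placeholders, not the real code):
import neo4j from 'neo4j-driver';

// Rough sketch of the write path; everything is awaited before the other application queries.
async function writeItem(name: string): Promise<void> {
  const driver = neo4j.driver('bolt://localhost:7687', neo4j.auth.basic('neo4j', 'password'));
  const session = driver.session();
  try {
    await session.run('CREATE (i:Item { name: $name })', { name });
  } finally {
    await session.close(); // close() is awaited too (promisified where it still takes a callback)
    await driver.close();
  }
}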
Is there a cache on neo4j's side that could be at fault? Can I somehow flush it?
If I run the following command from my bitcoin console client:
bitcoind -reindex -txindex -debug=net -printtoconsole
it takes extremely long to run. Does this reindex all the previous bitcoin transactions?
Here are the details about the options you use:
-txindex: Maintain a full transaction index (default: 0)
-reindex: Rebuild blockchain index from current blk000??.dat files
-debug: Output extra debugging information. Implies all other -debug* options
It's normal that this operation takes time, because txindex represents a huge amount of data, and with -reindex you force Bitcoin Core to rebuild the blockchain index from your local blk000??.dat files each time you run it (which is, from my experience, not necessary). My suggestion is to remove -reindex and try to figure out whether you really need -txindex.
If you want to check on all the transactions related to your wallet, I think this option is more appropriate:
-rescan: Rescan the block chain for missing wallet transactions
Note: this will also be time-consuming.
Information from: http://we.lovebitco.in/bitcoin-qt/command-line-options/
Tips for faster reindexing:
Use -printtoconsole=0 (will not output anything at all to the console)
Increase dbcache from default 450 - for example, to 1000: -dbcache=1000
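Putting those tips together with the original command, a faster reindex run might look like this (same flags as above, just combined):
bitcoind -reindex -txindex -printtoconsole=0 -dbcache=1000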
I'm using a TClientDataSet as a local dataset, without the provider concept. After working with it, a method is called that should generate the corresponding SQL statements, using StatusFilter to resolve the changes (basically, generate the SQL).
This looked easy initially after reading the documentation (set StatusFilter to [usInserted] and process all the INSERT statements, set it to [usModified] and process all the UPDATEs, and the same for deletes - see the sketch after the examples below), but after a few tests it now looks far from trivial. For example:
If I add a record and then edit it: setting StatusFilter to [usInserted] displays it, but with the original data.
If I add a record, then edit it, then delete it: the record appears both with StatusFilter set to [usInserted] and with [usModified].
And other similar situations..
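For reference, the naive iteration I started from looks roughly like this (cds is the TClientDataSet; GenerateInsertSQL and the other helpers are placeholders of mine, not existing routines):
// Filter by update status and emit one statement per visible record.
cds.StatusFilter := [usInserted];
cds.First;
while not cds.Eof do
begin
  GenerateInsertSQL(cds);   // placeholder helper
  cds.Next;
end;
cds.StatusFilter := [usModified];
// ... same loop with GenerateUpdateSQL, then [usDeleted] with GenerateDeleteSQL
cds.StatusFilter := [];     // back to the unfiltered view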
1) I know that if I first process all inserts, then all updates, then all deletes, the database will end up in the correct state, but this approach looks far from right (it generates useless SQL statements).
2) I've tried to access the PRecInfo(ClientDataSet.ActiveBuffer + ClientDataSet.RecordSize).Attribute information (dsRecNew, dsRecOrg, etc.), but I still haven't managed to work out the logic.
3) I can program the logic to resolve it - for example, before processing an insert, set StatusFilter to [usDeleted] and locate the record by primary key to see whether it was deleted afterwards, and the same with edits: before inserting, check whether the record was updated later so the INSERT uses the updated version, and so on - but there should be an easier way.
Did someone already solve this in an elegant and straightforward way? Am I missing something? Thanks
Here's my situation:
From my Grails controller, I call a service, which queries a database read-only, transforms the result into JSON, and returns the result.
Specs are: JDK 1.6, Tomcat 5.5, Grails 1.3.4, DB via JNDI
Tomcat's MaxPermSize is set to 256m and Xmx to 128m.
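For reference, that corresponds to JVM options roughly like these (how they are passed to Tomcat, e.g. via CATALINA_OPTS in setenv.sh, is just one possibility):
export CATALINA_OPTS="-Xmx128m -XX:MaxPermSize=256m"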
EDIT: Increasing the memory should be the last resort
The service method:
String queryDB(String queryString) {
StringWriter writer = new StringWriter()
JSonBuilder json = new JSonBuilder(writer)
def queryResult = SomeDomain.findAllBySomePropIlike("%${queryString}%")
json.whatever {
results {
queryResult.eachWithIndex { qr, i ->
// insert domain w/ properties
}
}
}
queryResult = null
return writer.toString()
}
Now, when queryString == 'a' the result set is huge and I end up with this:
[ERROR] 03/Nov/2010#09:46:39,604 [localhost].[/grails-app-0.1].[grails] - Servlet.service() for servlet grails threw exception
java.lang.OutOfMemoryError: GC overhead limit exceeded
at org.codehaus.groovy.util.ComplexKeyHashMap.init(ComplexKeyHashMap.java:81)
at org.codehaus.groovy.util.ComplexKeyHashMap.<init>(ComplexKeyHashMap.java:46)
at org.codehaus.groovy.util.SingleKeyHashMap.<init>(SingleKeyHashMap.java:29)
at groovy.lang.MetaClassImpl$Index.<init>(MetaClassImpl.java:3381)
at groovy.lang.MetaClassImpl$MethodIndex.<init>(MetaClassImpl.java:3364)
at groovy.lang.MetaClassImpl.<init>(MetaClassImpl.java:140)
at groovy.lang.MetaClassImpl.<init>(MetaClassImpl.java:190)
at groovy.lang.MetaClassImpl.<init>(MetaClassImpl.java:196)
at groovy.lang.ExpandoMetaClass.<init>(ExpandoMetaClass.java:298)
at groovy.lang.ExpandoMetaClass.<init>(ExpandoMetaClass.java:333)
at groovy.lang.ExpandoMetaClassCreationHandle.createNormalMetaClass(ExpandoMetaClassCreationHandle.java:46)
at groovy.lang.MetaClassRegistry$MetaClassCreationHandle.createWithCustomLookup(MetaClassRegistry.java:139)
at groovy.lang.MetaClassRegistry$MetaClassCreationHandle.create(MetaClassRegistry.java:122)
at org.codehaus.groovy.reflection.ClassInfo.getMetaClassUnderLock(ClassInfo.java:165)
at org.codehaus.groovy.reflection.ClassInfo.getMetaClass(ClassInfo.java:182)
at org.codehaus.groovy.runtime.callsite.ClassMetaClassGetPropertySite.<init>(ClassMetaClassGetPropertySite.java:35)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.createClassMetaClassGetPropertySite(AbstractCallSite.java:308)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.createGetPropertySite(AbstractCallSite.java:258)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.acceptGetProperty(AbstractCallSite.java:245)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.callGetProperty(AbstractCallSite.java:237)
at org.codehaus.groovy.grails.plugins.web.filters.FilterToHandlerAdapter.accept(FilterToHandlerAdapter.groovy:196)
at org.codehaus.groovy.grails.plugins.web.filters.FilterToHandlerAdapter$accept.callCurrent(Unknown Source)
at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCallCurrent(CallSiteArray.java:44)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.callCurrent(AbstractCallSite.java:143)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.callCurrent(AbstractCallSite.java:159)
at org.codehaus.groovy.grails.plugins.web.filters.FilterToHandlerAdapter.preHandle(FilterToHandlerAdapter.groovy:107)
at org.springframework.web.servlet.HandlerInterceptor$preHandle.call(Unknown Source)
at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:40)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:117)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:133)
at org.codehaus.groovy.grails.plugins.web.filters.CompositeInterceptor.preHandle(CompositeInterceptor.groovy:42)
at org.codehaus.groovy.grails.web.servlet.GrailsDispatcherServlet.doDispatch(GrailsDispatcherServlet.java:282)
One approach I found on the web regards some leaks in Hibernate and domain validation, explained here and in detail here. I'm just about to test it, but I don't know if this is really the solution for my problem and (if it is) at which point it's best to clean up GORM.
Or is there another memory leak in my code?
Ideas anyone?
EDIT: As far as I can tell, the exception occurs at the point where the finder method is called. That means that GORM isn't able to handle the amount of data returned by the database, right? Sorry for asking like a greenhorn, but I have never encountered such a problem, even with very large result sets.
Sun (this link isn't valid anymore) had documented this OutOfMemoryError as follows:
The parallel / concurrent collector will throw an OutOfMemoryError if too much time is being spent in garbage collection: if more than 98% of the total time is spent in garbage collection and less than 2% of the heap is recovered, an OutOfMemoryError will be thrown. This feature is designed to prevent applications from running for an extended period of time while making little or no progress because the heap is too small. If necessary, this feature can be disabled by adding the option -XX:-UseGCOverheadLimit to the command line.
In other words, that error is a feature, a hint to increase available memory (which is not a preferred option in your case, as you've mentioned). Some developers consider this feature not to be useful in every use case, so check out turning it off.
Another option to those already suggested would be to work in pages of results. Instead of using the dynamic finder, use Criteria and page through the results yourself. Here's a naive, pseudocode example:
def offset = 0
def max = 50
def batch = SomeDomain.findAllBySomePropIlike("%${queryString}%", [max: max, offset: offset])
while (batch) {   // stop once a page comes back empty
    appendBatchToJsonResult(batch)   // placeholder for whatever builds the JSON
    offset += max
    batch = SomeDomain.findAllBySomePropIlike("%${queryString}%", [max: max, offset: offset])
}
You could tweak the batch size according to your memory requirements. This would avoid having to adjust the memory.
Edit
I just re-read Fletch's answer and noticed that he mentioned this as a solution and you commented on it. I'll leave mine here since it's got an example, but if Fletch adds a paging example to his, I'll delete this answer since he mentioned it before I did.
If you don't want to increase memory, maybe you should only search for strings longer than a certain length. I guess this is some kind of type-ahead/suggestion function; maybe you could start searching only once there are three characters or so. Otherwise, maybe paged results are an option?
By the way, architecturally the controller is intended to handle interaction with the outside world and its formats, i.e. you would probably want your service just to return the objects and your controller to do the JSON conversion. But this won't solve your current problem.
I would also suggest you only return the properties you need for this type-ahead query and fetch the full domain object only when the user requests the actual data.
The JSON builder is going to create a lot of objects and consume memory. For example, in a type-ahead for users, I would only return basic name information and an id instead of the complete object.
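For example, something along these lines (the User domain class and its name property are placeholders) returns only the two columns instead of full domain instances:
// Criteria query with a projection: only id and name are fetched from the database.
def suggestions = User.withCriteria {
    ilike('name', "%${queryString}%")
    projections {
        property('id')
        property('name')
    }
    maxResults(20)
}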
In a Grails app using a MySQL database (MySQL having been installed through Homebrew), I got this same problem, oddly enough, only by running the app without having started the MySQL server first. Thus simply running
mysql.server start
fixed the problem for me.