java.lang.StackOverflowError on small import - Neo4j

I am doing some work with Neo4j based around working out who knows whom and what they do. It is in the format of
Company-node
product-node
person-node
and the relationships between them as
company borders, by location
person works at company
company has product.
I have a spreadsheet that has all the information written down and a macro that takes the information and converts it into Cypher. The code comes to around 5000 lines.
When I try to import it in the web browser I get an unknown error. If I run it in the shell it goes the whole way through and then gives the error
Error occurred in server thread; nested exception is:
java.lang.StackOverflowError
My heap size is set to 3 GB.
Does anyone have any ideas on what the error is and how to fix it?

First of all, it has nothing to do with your heap size; it is related to the stack size. If you want to increase the stack size, use the -Xss parameter.
The stack is used to hold intermediate variables and function calls, and your import is somehow exceeding the stack size set in your configuration.
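To illustrate the distinction, here is a minimal, self-contained sketch (the class name and values are invented for demonstration and have nothing to do with the Neo4j import itself): the program below overflows at roughly the same recursion depth no matter how much heap you grant with -Xmx, but the depth grows when you raise -Xss.
public class StackDepthDemo {

    private static int depth = 0;

    // Every call adds a frame to the thread's stack; the heap is barely touched.
    private static void recurse() {
        depth++;
        recurse();
    }

    public static void main(String[] args) {
        try {
            recurse();
        } catch (StackOverflowError e) {
            // Compare: java -Xss512k StackDepthDemo   vs   java -Xss4m StackDepthDemo
            System.out.println("Overflowed after " + depth + " frames");
        }
    }
}
For the Neo4j server itself the flag goes into the JVM options of the server configuration (for 2.x installs that is, if I remember correctly, a wrapper.java.additional=-Xss4m line in conf/neo4j-wrapper.conf); the exact file depends on your version.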

Related

How to configure js node stack size when using Graal?

I am facing a problem: when I try to use a deeply recursive function in JS, I get an exception (RangeError: Maximum call stack size exceeded). This function works perfectly outside of Graal.
It is only reproduced when calling polyglot Context.execute(). The first call finishes without an exception, but subsequent calls throw.
I use Docker with the GraalVM image oracle/graalvm-ce:20.0.0-java11, one Engine for all threads, and a Context per thread. Can I increase the Node stack size via Graal options or something else?
From your description, I assume you are starting Graal JS from some Java code using the polyglot API.
Graal JS runs on the same threads as the rest of the JVM. You can increase the stack size of the JVM using the -Xss argument. For example:
$ <graalvm>/bin/java -Xss2m …
2m means 2 MB (I think the default on x86_64 is 1 MB). You can experiment with various sizes, but remember that the higher you set it, the fewer threads you can fit in a fixed amount of memory.
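For reference, here is a minimal sketch of that setup (one shared Engine, a Context per thread) using the org.graalvm.polyglot API; the class name and the toy script are illustrations, not code from the question.
import org.graalvm.polyglot.Context;
import org.graalvm.polyglot.Engine;

public class PerThreadContextDemo {

    public static void main(String[] args) throws Exception {
        // One Engine shared by all threads, so parsed/compiled code can be reused.
        Engine engine = Engine.create();

        Runnable task = () -> {
            // One Context per thread; a Context must not be used by two threads at once.
            try (Context ctx = Context.newBuilder("js").engine(engine).build()) {
                // The guest frames of this recursion live on the JVM thread stack,
                // so the depth it can reach is governed by -Xss, not by Node/V8 settings.
                ctx.eval("js", "function f(n) { return n === 0 ? 0 : 1 + f(n - 1); } f(1000);");
            }
        };

        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
    }
}
Running it with something like <graalvm>/bin/java -Xss4m PerThreadContextDemo lets the recursion go considerably deeper before a stack overflow is raised.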

BPEL, threads stuck on HashMap.getEntry?

I am new to SOA, and we have currently run into a problem when using BPEL to do some XML transformation.
We have 3 SOA projects that do something like:
1) Read input files (in text format) from a folder
2) Save the file content in the database and put it on AQ
3) Read the file id from AQ, load the content from the database, and transform it to our internal XML format
4) Apply some business logic and transform the content back to text format
SOA project1 does steps 1-2, project2 does step 3, and project3 does step 4.
We are doing a load test with 7000 input files.
The problem we experienced is that the memory use of the "Old Generation" keeps accumulating; although a major GC can reduce it, it keeps growing until it reaches 100%. Then no new BPEL instance can be created, and we hit transaction timeouts.
After analyzing the heap dump we get a result like the one below; it seems that BPELFactoryImpl holds a HashMap of more than 180 MB, and it keeps growing. Has anyone experienced something similar?
We use SOA version 12.1.3. This problem has blocked us for weeks; please help, thanks a lot.
Image of heap analysis
Finally we got an answer on this: it was caused by a bug, as confirmed by Oracle Support, and we are waiting for the patch. Thanks for your attention.
It's a bug. You should raise an SR, referring to the threads stuck at:
at java.util.HashMap.getEntry(HashMap.java:465)
at java.util.HashMap.get(HashMap.java:417)
at oracle.xml.parser.v2.XMLNode.setUserData(XMLNode.java:2137)
at oracle.bpel.lang.v20.model.impl.ExtensibleElementImpl.doCreateElement(ExtensibleElementImpl.java:502)
at oracle.dp.entity.impl.EmFacadeObjectImpl.getElement(EmFacadeObjectImpl.java:35)
at oracle.bpel.lang.v20.model.impl.ExtensibleElementImpl.performDOMChange(ExtensibleElementImpl.java:707)
at oracle.bpel.lang.v20.model.impl.ExtensibleElementImpl.doOnChange(ExtensibleElementImpl.java:636)
at oracle.bpel.lang.v20.model.impl.ExtensibleElementImpl$DOMUpdater.notifyChanged(ExtensibleElementImpl.java:535)
at oracle.dp.notify.impl.NotifierImpl.emNotify(NotifierImpl.java:39)
at oracle.dp.entity.impl.EmHolderImpl.doNotifyOnSet(EmHolderImpl.java:53)
at oracle.dp.entity.impl.EmHolderImpl.set(EmHolderImpl.java:47)
at oracle.bpel.lang.v20.model.impl.CopyImpl.setTo(CopyImpl.java:115)
at com.collaxa.cube.engine.ext.bpel.v2.wmp.BPEL2xCallWMP$CallArgument$1.evaluate(BPEL2xCallWMP.java:190)
at com.collaxa.cube.engine.ext.bpel.v2.wmp.BPEL2xCallWMP.invokeMethod(BPEL2xCallWMP.java:103)
at com.collaxa.cube.engine.ext.bpel.v2.wmp.BPEL2xCallWMP.__executeStatements(BPEL2xCallWMP.java:62)
at com.collaxa.cube.engine.ext.bpel.common.wmp.BaseBPELActivityWMP.perform(BaseBPELActivityWMP.java:188)
at com.collaxa.cube.engine.CubeEngine.performActivity(CubeEngine.java:2880)
....
Bug 20857627 (20867804) : Performance issue due to large number of threads stuck in HashMap.get

Failure on CSV import into Neo4j 2.2.0-RC01

I'm having some weird issues when using the batch load into Neo4j 2.2.0-RC1. I am trying to import 10 different node sets (for different labels) along with 12 relationship files. The data sets vary in size - some node types have ~200-300k records, some are small (50-100 records). For most node types I have a separate file with a header and separate file with data for each of the sets (the data is generated from the DB and I want to be able to regenerate the dump files without worrying about preparing the :ID columns, describing data types etc.)
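For context, an invocation for this kind of layout (separate header and data files per group) looks roughly like the one below; the file and label names are placeholders, since I can't share the real dataset:
bin/neo4j-import --into graph.db \
  --nodes "people-header.csv,people.csv" \
  --nodes "companies-header.csv,companies.csv" \
  --relationships "works_at-header.csv,works_at.csv" \
  --processors 1 --stacktrace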
I am re-running the import task a number of times (with the options --processors 1 --stacktrace) and I keep getting different errors (without a single change in the actual dataset), which makes me think it might be something concurrency-related. Sometimes the import simply hangs with a message like this:
Nodes
[>:36.75 MB/s------------------------|*PROPERTIES-----------------------------------------|NOD|] 0
In most cases, it crashes with an error like below, except the number of nodes that it manages to import fine differs from run to run.
[>:27.23 MB/s-------------|*PROPERTIES--------------------------|NO|v:19.62 MB/s---------------]100kImport error: Panic called, so exiting
java.lang.RuntimeException: Panic called, so exiting
at org.neo4j.unsafe.impl.batchimport.staging.StageExecution.stillExecuting(StageExecution.java:63)
at org.neo4j.unsafe.impl.batchimport.staging.ExecutionSupervisor.anyStillExecuting(ExecutionSupervisor.java:79)
at org.neo4j.unsafe.impl.batchimport.staging.ExecutionSupervisor.finishAwareSleep(ExecutionSupervisor.java:102)
at org.neo4j.unsafe.impl.batchimport.staging.ExecutionSupervisor.supervise(ExecutionSupervisor.java:64)
at org.neo4j.unsafe.impl.batchimport.staging.ExecutionSupervisors.superviseDynamicExecution(ExecutionSupervisors.java:65)
at org.neo4j.unsafe.impl.batchimport.ParallelBatchImporter.executeStages(ParallelBatchImporter.java:226)
at org.neo4j.unsafe.impl.batchimport.ParallelBatchImporter.doImport(ParallelBatchImporter.java:151)
at org.neo4j.tooling.ImportTool.main(ImportTool.java:263)
Caused by: java.lang.RuntimeException: Panic called, so exiting
at org.neo4j.unsafe.impl.batchimport.staging.AbstractStep.assertHealthy(AbstractStep.java:189)
at org.neo4j.unsafe.impl.batchimport.staging.ProducerStep.process(ProducerStep.java:77)
at org.neo4j.unsafe.impl.batchimport.staging.ProducerStep$1.run(ProducerStep.java:54)
Caused by: java.lang.IllegalStateException: Nodes for any specific group must be added in sequence before adding nodes for any other group
at org.neo4j.unsafe.impl.batchimport.cache.idmapping.string.EncodingIdMapper.put(EncodingIdMapper.java:137)
at org.neo4j.unsafe.impl.batchimport.NodeEncoderStep.process(NodeEncoderStep.java:76)
at org.neo4j.unsafe.impl.batchimport.NodeEncoderStep.process(NodeEncoderStep.java:41)
at org.neo4j.unsafe.impl.batchimport.staging.ExecutorServiceStep$2.call(ExecutorServiceStep.java:96)
at org.neo4j.unsafe.impl.batchimport.staging.ExecutorServiceStep$2.call(ExecutorServiceStep.java:87)
at org.neo4j.unsafe.impl.batchimport.executor.DynamicTaskExecutor$Processor.run(DynamicTaskExecutor.java:217)
I managed to run it successfully once, which, again, seems to imply that some sort of timing issue is at play.
Unfortunately I cannot provide the datasets as they contain confidential data.
The weirdest thing of all is that if I split the load into 2 different sets (the datasets are almost separate subgraphs, they have only 2 relationships in common) then all works fine (so not likely to be data related), but even loading just nodes doesn't work if I put them all into a single command. And because it's not possible to force a load into an existing database, loading it in 2 steps is sadly not an option.
1) Is that a known issue and if so, any ETA on a fix / issue that I could follow?
2) If not, is there any troubleshooting I can do to get to the bottom of it? The messages.log file in the target DB directory contains VERY little output, it would be nice if I could get some more details on what's going wrong.
I've spotted the problem. Thanks for reporting/asking. The next release will include this fix, along with an additional set of integration tests for the import tool. I'll provide a link to the commit once it's in.

Method code too large in Groovy & Grails?

2014-06-17 11:22:18,622 [Thread-11] ERROR compiler.GrailsProjectWatcher - Compilation Error: startup failed:
General error during class generation: Method code too large!
What is the solution? The BootStrap file size is 149k. If I hide just 4-5 lines of code and restart, it runs through successfully; when I comment out or delete those 4-5 lines, it runs without any error!
The Java Virtual Machine has a limitation that the bytecode of a method cannot be larger than 64 KB (65535 bytes). This post describes the limitation in detail.
The best way to overcome this issue is simply to split your large method into smaller ones, which is generally good practice anyway.
Also notice that the JVM JIT compiler will not compile methods larger than 8K bytecodes. You can, however, change this behavior using the -XX:-DontCompileHugeMethods option.
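As a trivial sketch of that advice (plain Java with invented names; in Grails the same applies to the BootStrap closures): instead of one huge method whose bytecode blows past the 64 KB limit, delegate to several smaller ones.
public class BootstrapData {

    // Was: one giant init() holding every statement and exceeding the 64 KB bytecode limit.
    public void init() {
        loadCountries();
        loadCurrencies();
        loadDefaultUsers();
    }

    private void loadCountries()    { /* a few hundred statements */ }
    private void loadCurrencies()   { /* a few hundred statements */ }
    private void loadDefaultUsers() { /* a few hundred statements */ }
}
Each smaller method is verified and JIT-compiled on its own, so none of them comes near the per-method limit.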
The problem: I just got the following exception in a Jenkins pipeline: org.codehaus.groovy.control.MultipleCompilationErrorsException: startup failed:
General error during class generation: Method code too large!
java.lang.RuntimeException: Method code too large!
Explanation: The root cause is the 64 kB limit on the bytecode of a single method. The Java Virtual Machine imposes implicit limitations on classes that must be respected, dictated by performance and the design of the language, such as the size of an operand stack in a frame, the length of field and method names, the number of methods that may be declared in a class, etc.; you can find this "check list" in the Oracle JVM documentation. In this scenario you hit the method size limitation.
Solution: To solve this issue, separate the class methods into a shared library or into internal/external helper classes (such as Utils.groovy, for example) and import that library in your main class. In general, code should be readable, lean, and high level; if it grows too long, extract the functionality using an object-oriented design and you will end up with readable, maintainable code as well.

Any way to find more detail about WARNING: ID3D10Buffer::SetPrivateData: Existing private data of same name with different size found!

I'm encountering this error when I'm running my DirectX10 program in debug mode:
D3D10: WARNING: ID3D10Buffer::SetPrivateData: Existing private data of same name with different size found! [ STATE_SETTING WARNING #55: SETPRIVATEDATA_CHANGINGPARAMS ]
I'm trying to make the project highly OOP as a learning exercise, so there's a chance that this may be occurring, but is there a way to get some more details?
It appears this warning is raised by D3DX10CreateSprite, which is internally called by font->DrawText.
You can ignore this warning; it seems to be a bug in the MS code :)
Direct3D11 doesn't have built-in text rendering anymore, so you won't encounter it in the future.
Since this is a D3D11 warning, you could always turn it off using ID3D11InfoQueue:
D3D11_MESSAGE_ID hide[] = {
    D3D11_MESSAGE_ID_SETPRIVATEDATA_CHANGINGPARAMS,
    // Add more message IDs here as needed
};
D3D11_INFO_QUEUE_FILTER filter;
memset(&filter, 0, sizeof(filter));
filter.DenyList.NumIDs = _countof(hide);
filter.DenyList.pIDList = hide;
d3dInfoQueue->AddStorageFilterEntries(&filter);
See this page for more. I found your question while googling for the answer and had to search a bit more to find the above snippet; hopefully this will help someone :)
What other data are you looking for or interested in?
The warning is pretty clear about what is going on, but if you want to hunt down a bit more data, there may be a few things to try.
Try calling ID3D10Buffer::GetPrivateData with the same name or do some other check to see if there is data with that name already, and if so, what the contents are. Print your results to a file, output window, or console. This may be combined with breakpoints to see where the duplicate is occurring (break when there's already data).
You may (not positive) be able to set the D3D runtimes to debug mode and to break on warnings (not sure if it can do warnings or just errors). Debug your app in VS or your preferred debugger, and when the warning is shown, it will break and you can look at the parameters.
Go through your code and track down all calls to ID3D10Buffer::SetPrivateData and look to see if there are any obvious duplicates. If there are, work up the program flow and see why and what you can do about them (this may work best after you use one of the former methods to know where to start).
How are your data names set up, and what is the buffer used for? Examining one or both may lead you to a conflict somewhere.
You may also try unicorns, they've been known to help with this kind of problem.
