How to configure the JS stack size when using Graal? - graalvm

I ran into a problem: when I try to use a deeply recursive function in JS, I get an exception (RangeError: Maximum call stack size exceeded). The same function works perfectly outside of Graal.
It is only reproduced when calling polyglot Context.execute(): the first call finishes without an exception, but subsequent calls throw.
I use Docker with the graaljdk image oracle/graalvm-ce:20.0.0-java11, one Engine for all threads, and a Context per thread. Can I increase the node stack size via Graal options or something else?

From your description, I assume you are starting Graal JS from some Java code using the polyglot API.
Graal JS runs on the same threads as the rest of the JVM, so you can increase the stack size of the JVM using the -Xss argument. For example:
$ <graalvm>/bin/java -Xss2m …
2m means 2 MB (I think the default on x86_64 is 1 MB). You can experiment with various sizes, but remember that the higher you set it, the fewer threads you can fit in a fixed amount of memory.
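To make that concrete, here is a minimal single-threaded sketch of the setup described in the question (one shared Engine, a Context built on it); the class name and the JS snippet are made up for illustration, and the -Xss flag is passed to the JVM that hosts the polyglot context:

import org.graalvm.polyglot.Context;
import org.graalvm.polyglot.Engine;

public class DeepRecursion {
    public static void main(String[] args) {
        try (Engine engine = Engine.create();
             Context context = Context.newBuilder("js").engine(engine).build()) {
            // Deeply recursive guest function; with the default JVM stack this can fail with
            // "RangeError: Maximum call stack size exceeded" surfaced as a PolyglotException.
            context.eval("js", "function f(n) { return n <= 0 ? 0 : 1 + f(n - 1); } f(100000);");
        }
    }
}

Compile it on the GraalVM JDK and run it with a larger stack, e.g.
$ <graalvm>/bin/java -Xss16m DeepRecursion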

Related

--workerCacheMB setting missing in apache beam 0.6?

In Google Cloud Dataflow 1.x, I had access to this critical pipeline option called:
workerCacheMb
I tried to set it in my Beam 0.6 pipeline, but couldn't do so (it said that no such option existed). I then scoured the options source code to see if any option had a similar name, but I still couldn't find it.
I need to set it because I think my workflow's incredible slowness is due to a side input that is 3 GB but appears to be taking well over 20 minutes to read. (I have a View.asList() and then I'm trying to do a for-loop over the list; it's taking more than 20 minutes and still going. Even at 3 GB, that's way too slow.) So I was hoping that setting workerCacheMb would help. (The only other theory I have is to switch from SerializableCoder to AvroCoder.)
Are you using the right class of options?
The following code works for me in Beam:
DataflowWorkerHarnessOptions options = PipelineOptionsFactory.fromArgs(args).withValidation().create()
.cloneAs(DataflowWorkerHarnessOptions.class);
options.setWorkerCacheMb(3000);
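For completeness, here is how that typically wires into pipeline creation. Note that the import paths (particularly for DataflowWorkerHarnessOptions in that Beam/Dataflow runner release) are my assumption, so check them against your SDK:

import org.apache.beam.runners.dataflow.options.DataflowWorkerHarnessOptions; // assumed package, verify for your version
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.options.PipelineOptionsFactory;

public class WorkerCachePipeline {
    public static void main(String[] args) {
        // Parse the usual command-line options, then clone them into the worker harness options
        DataflowWorkerHarnessOptions options = PipelineOptionsFactory.fromArgs(args)
                .withValidation()
                .create()
                .cloneAs(DataflowWorkerHarnessOptions.class);
        options.setWorkerCacheMb(3000); // side-input cache size in MB

        Pipeline pipeline = Pipeline.create(options);
        // add your transforms here, then:
        pipeline.run();
    }
}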

Using pthreads with MPICH

I am having trouble using pthreads in my MPI program. My program runs fine without pthreads, but I then decided to execute a time-consuming operation in parallel, so I create a pthread that performs MPI_Probe, MPI_Get_count, and MPI_Recv. My program fails at MPI_Probe and no error code is returned. This is how I initialize the MPI environment:
MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided_threading_support);
The provided threading support is '3' which I assume is MPI_THREAD_SERIALIZED. Any ideas on how I can solve this problem?
The provided threading support is '3' which I assume is MPI_THREAD_SERIALIZED.
The MPI standard defines thread support levels as named constants and only requires that their values are monotonic, i.e. MPI_THREAD_SINGLE < MPI_THREAD_FUNNELED < MPI_THREAD_SERIALIZED < MPI_THREAD_MULTIPLE. The actual numeric values are implementation-specific and should never be used or compared against.
MPI communication calls by default never return error codes other than MPI_SUCCESS. The reason is that MPI invokes the communicator's error handler before an MPI call returns, and all communicators are initially created with MPI_ERRORS_ARE_FATAL installed as their error handler. That error handler terminates the program and usually prints some debugging information, e.g. the reason for the failure. Both MPICH (and its countless variants) and Open MPI produce quite elaborate reports on what led to the termination.
To enable user error handling on communicator comm, you should make the following call:
MPI_Comm_set_errhandler(comm, MPI_ERRORS_RETURN);
Watch out for the error codes returned: their numerical values are also implementation-specific.
If your MPI implementation isn't willing to give you MPI_THREAD_MULTIPLE, there are three things you can do:
Get a new MPI implementation.
Protect MPI calls with a critical section.
Cut it out with the threading thing.
I would suggest #3. The whole point of MPI is parallelism -- if you find yourself creating multiple threads for a single MPI subprocess, you should consider whether those threads should have been independent subprocesses to begin with.
Particularly with MPI_THREAD_MULTIPLE. I could maybe see a use for MPI_THREAD_SERIALIZED, if your threads are sub-subprocess workers for the main subprocess thread... but MULTIPLE implies that you're tossing data around all over the place. That loses you the primary convenience offered by MPI, namely synchronization. You'll find yourself essentially reimplementing MPI on top of MPI.
Okay, now that you've read all that, the punchline: 3 is MPI_THREAD_MULTIPLE. But seriously. Reconsider your architecture.

Changing Delphi thread stack size in code

Here's my problem:
My current threads are created with the default 1024 KB of stack, when I normally need less than 50 KB.
Is there a way to set the stack size in code? I could only find a way to change it via the menu.
Thanks in advance.
It's not possible to specify the stack size using TThread. TThread's thread-creation code path leads to the CreateThread API being called with the default stack size for the executable, which for a Delphi executable is 1 MB by default (as you have noted). Although you can modify this value (*) through the linker options (maximum stack size) or through the corresponding compiler directive, that will affect all threads that use the default stack in the application (main, 3rd party, TThread, ...).
If you can do without TThread, you can use the BeginThread RTL function: the StackSize you pass to it will be used when you include STACK_SIZE_IS_A_RESERVATION in CreationFlags.
(*) This is the value that will be reserved for the thread stack; Te Waka o Pascal has an article showing the effects of using different values.

java.lang.StackOverflowError on small import

I am doing some work with Neo4j based around working out who knows who and what they do. It is in the format of:
Company node
Product node
Person node
and the relationships between them as
company borders, by location
person works at company
company has product.
I have a spreadsheet that has all the information written down and a macro that takes the information and converts it into Cypher. The code comes to around 5000 lines.
When I try to import it, I get an unknown error if I run it in the web browser. If I run it in the shell, it goes the whole way through and then gives the error:
Error occurred in server thread; nested exception is:
java.lang.StackOverflowError
My heap size is set to 3 GB.
Does anyone have any ideas on what the error is and how to fix it?
First of all, it has nothing to do with your heap size; it is related to the stack size. If you want to increase the stack size, use the -Xss parameter.
The stack is used to hold intermediate variables and function calls, and your import is somehow exceeding the stack size set in your configuration.
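For example, in a Neo4j 2.x server install the flag goes into conf/neo4j-wrapper.conf as an extra JVM option (the exact file and key depend on your Neo4j version; newer releases use dbms.jvm.additional in neo4j.conf instead, so treat this as a sketch to adapt):
wrapper.java.additional=-Xss4m
Restart the server afterwards so the new stack size takes effect. If you run the import through a JVM you launch yourself, you can pass -Xss4m on that command line directly.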

How do you increase the maximum heap size for the javac process in Borland JBuilder 2005/2006

In most modern IDEs there is a parameter that you can set to ensure javac gets enough heap memory to do its compilation. For reasons that are not worth going into here, we are tied for the time being to JBuilder 2005/2006, and it appears the amount of source code has exceeded what can be handled by javac.
Please keep the answer specific to JBuilder 2005/2006 javac (we cannot migrate away right now, and the Borland Make compiler does not correctly support Java 1.6).
I realize how and what parameters should be passed to javac; the problem is that the IDE doesn't seem to allow these to be set anywhere. A lot of configuration is hidden down in the JBuilder Install\bin\*.config files, and I feel the answer may be in there somewhere, but I have not found it.
Did you find a good solution for that problem?
I have the same problem, and the only solution I found is the following:
The environment variable JAVA_TOOL_OPTIONS can be used to provide parameters for the JVM:
http://java.sun.com/javase/6/docs/platform/jvmti/jvmti.html#tooloptions
I have created a batch file "JBuilderw.bat" with the following content:
set JAVA_TOOL_OPTIONS=-Xmx256m
JBuilderw.exe
Each time I start JBuilder using this batch file, the environment variable JAVA_TOOL_OPTIONS is set and javac.exe receives the setting.
The JVM displays the following message at the end: "Picked up JAVA_TOOL_OPTIONS: -Xmx256m".
Drawback: all virtual machines started by JBuilder will get that setting. :(
Thanks,
JB
Have a look at http://javahowto.blogspot.com/2006/06/fix-javac-java-lang-outofmemoryerror.html
The arguments that you need to pass to JBuilder's javac are -J-Xms256m -J-Xmx256m (entered without the quotes). Replace 256m with whatever is appropriate in your case.
This should work for Java 1.4, Java 1.5 and onward.
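If you want to sanity-check those flags outside the IDE first, javac itself accepts -J options and forwards them to the JVM that runs the compiler; the source file name below is just a placeholder:
javac -J-Xms256m -J-Xmx256m MyClass.java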
BR,
~A
"I realize how and what parameters should be passed to javac, the problem is the IDE doesn't seem to allow these to be set anywhere."
I realize now that you know how to pass the right arguments, just not where or how to pass them :-(
How about this: can you locate the JAVA_HOME/bin directory that Borland uses? If so, you can rename javac.exe (to, say, javacnew.exe) and add a javac.bat that in turn calls javacnew.exe and passes the required arguments.
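A minimal sketch of such a wrapper batch file, assuming the original compiler was renamed to javacnew.exe and that the extra heap flags are all you need (adjust the values to your case):
@echo off
rem javac.bat: forward every original argument to the renamed compiler, adding heap options for the compiler JVM
javacnew.exe -J-Xms256m -J-Xmx256m %*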
I don't know if this will help since I don't use Borland, but in Eclipse this is a setting that you attach to the program you're going to run: each program you run in the IDE has configuration specific to it, including arguments to the VM. Is there something like that in JBuilder?
Do you have a jdk.config file located in JBuilder2005/bin/?
You should be able to modify vm parameters in that file like:
vmparam -Xms256m
vmparam -Xmx256m
Let me know if this works; I found it on a page about editing related settings in JBuilder 2005.
Edit the jbuilder.config file.
Comment out these two lines:
vmmemmax 75%
vmmemmin 32m
as the resulting value ought to stay below 1 GB, and on a PC with more than 1 GB of RAM, 75% is too big.
