Short
I am building an Electron app with Vue, using electron-vue. I need it to run a heavy piece of work, say f(), but an out-of-memory error is thrown and the work cannot be completed. How can I increase the memory limit of an Electron renderer process?
Long
First, when I build a CLI app and run f(), i.e.:
// Start of file
function f() {
  // Do some heavy work
}

if (require.main === module) {
  f()
}
// End of file
It takes a long time, but f() completes without any error.
However, when I build an Electron app and run f() in a renderer, it takes a long time and eventually throws an out-of-memory error.
My app has the following structure:
+----------+                 +------+                 +----------+
|    UI    |--work request-->| Main |--work request-->|  Worker  |
| renderer |<-work response--| proc |<-work response--| renderer |
+----------+                 +------+                 +----------+
The main process forwards the request and the response. (You may think the Worker renderer is a bit weird, but I need to keep this structure for a while...)
My guess is that a difference in memory limits between the CLI app and the worker renderer of the Electron app causes this problem.
Is my guess correct? If so, is there any suggestion for solving this problem?
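One thing I am considering, but have not verified yet, is passing V8 flags from the main process so that every renderer gets a bigger heap. A rough, untested sketch (worker.html and the 4096 MB value are just placeholders):

// main process (untested sketch): ask Chromium to give every renderer a
// bigger V8 heap; the switch has to be appended before the 'ready' event.
const { app, BrowserWindow } = require('electron')

// 4096 MB is an arbitrary value chosen for illustration.
app.commandLine.appendSwitch('js-flags', '--max-old-space-size=4096')

app.on('ready', () => {
  // the hidden worker renderer from the diagram above
  const worker = new BrowserWindow({ show: false })
  worker.loadURL('file://' + __dirname + '/worker.html')
})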
Related
The question is mostly about step #3.
I need to implement the following loop in F#:
1. Load some input parameters from the database. If the input parameters specify that the loop should be terminated, then quit. This is straightforward, thanks to type providers.
2. Run a model generator, which generates a single F# source file, call it ModelData.fs. The generation is based on the input parameters and some random values. This may take up to several hours. I already have that.
3. Compile the project. I can hard-code a call to MSBuild, but that does not feel right.
4. Copy the executables into some temporary folder, let's say <SomeTempData>\<SomeModelName>.
5. Asynchronously run the executable from the output location of the previous step. The execution time is usually between several hours and several days. Each process runs in a single thread because this is the most efficient way to do it in this case. I tried a parallel version, but it did not beat the single-threaded one.
6. Catch when the execution finishes. This seems straightforward: https://learn.microsoft.com/en-us/dotnet/api/system.diagnostics.process.exited?view=netframework-4.7.2 . The running process is responsible for storing useful results, if any. This event can be handled by an F# MailboxProcessor (a rough sketch of what I have in mind for steps 5 and 6 is shown a little further below).
7. Repeat from the beginning. Because execution is async, monitor the number of running tasks to ensure that it does not exceed some allowed number. Again, a MailboxProcessor will do that with ease.
The whole thing will run on Windows, so there is no need to support multiple platforms. Just .NET Framework (let's say 4.7.2 as of this date) will do fine.
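For reference, this is roughly what I have in mind for steps 5 and 6; RunnerMsg, runner and startSolver are placeholder names, not my real code:

open System.Diagnostics

// Messages for the agent that tracks running solver processes.
type RunnerMsg =
    | Started
    | Finished of int   // exit code of a completed run

let runner =
    MailboxProcessor.Start(fun inbox ->
        let rec loop running = async {
            let! msg = inbox.Receive()
            match msg with
            | Started ->
                return! loop (running + 1)
            | Finished code ->
                printfn "Run finished with exit code %d (%d still running)" code (running - 1)
                return! loop (running - 1) }
        loop 0)

let startSolver (exePath: string) =
    let psi = ProcessStartInfo(exePath, UseShellExecute = false)
    let p = new Process(StartInfo = psi, EnableRaisingEvents = true)
    // Exited fires when the run completes (step 6); the handler just notifies the agent.
    p.Exited.Add(fun _ -> runner.Post(Finished p.ExitCode))
    if p.Start() then runner.Post Started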
This seems like a very straightforward CI-like exercise, and F# FAKE seemed like a proper solution. Unfortunately, none of the scarce examples provided worked (even with reasonable tweaks), and the bugs were cryptic. However, the worst part was that the compile feature did not work at all. The provided example (http://fake.build/fake-gettingstarted.html#Compiling-the-application) cannot be run at all, and even after accounting for something like https://github.com/fsharp/FAKE/issues/1579, it still silently chooses not to compile the project. I'd appreciate any advice.
Here is the code that I was trying to run. It is based on the references above:
#r #"C:\GitHub\ClmFSharp\Clm\packages\FAKE.5.8.4\tools\FakeLib.dll"
#r #"C:\GitHub\ClmFSharp\Clm\packages\FAKE.5.8.4\tools\System.Reactive.dll"
open System.IO
open Fake.DotNet
open Fake.Core
open Fake.IO
open Fake.IO.Globbing.Operators
let execContext = Fake.Core.Context.FakeExecutionContext.Create false "build.fsx" []
Fake.Core.Context.setExecutionContext (Fake.Core.Context.RuntimeContext.Fake execContext)
// Properties
let buildDir = @"C:\Temp\WTF\"
// Targets
Target.create "Clean" (fun _ ->
Shell.cleanDir buildDir
)
Target.create "BuildApp" (fun _ ->
!! #"..\SolverRunner\SolverRunner.fsproj"
|> MSBuild.runRelease id buildDir "Build"
|> Trace.logItems "AppBuild-Output: "
)
Target.create "Default" (fun _ ->
Trace.trace "Hello World from FAKE"
)
open Fake.Core.TargetOperators
"Clean"
==> "BuildApp"
==> "Default"
Target.runOrDefault "Default"
The problem is that it does not build the project at all but no error messages are produced! This is the output when running it in FSI:
run Default
Building project with version: LocalBuild
Shortened DependencyGraph for Target Default:
<== Default
<== BuildApp
<== Clean
The running order is:
Group - 1
- Clean
Group - 2
- BuildApp
Group - 3
- Default
Starting target 'Clean'
Finished (Success) 'Clean' in 00:00:00.0098793
Starting target 'BuildApp'
Finished (Success) 'BuildApp' in 00:00:00.0259223
Starting target 'Default'
Hello World from FAKE
Finished (Success) 'Default' in 00:00:00.0004329
---------------------------------------------------------------------
Build Time Report
---------------------------------------------------------------------
Target Duration
------ --------
Clean 00:00:00.0025260
BuildApp 00:00:00.0258713
Default 00:00:00.0003934
Total: 00:00:00.2985910
Status: Ok
---------------------------------------------------------------------
In Erlang, can I call some function f (BIF or not) whose job is to spawn a process, run the function argf I provide, and not "return" until argf has "returned", and can this be done without using a receive clause? (The reason is that f will be invoked in a gen_server, and I don't want to pollute the gen_server's mailbox.)
A snippet would look like this:
%% some code omitted ...
F = fun() -> blah, blah, timer:sleep(10000) end,
f(F), %% like `spawn(F), but doesn't return until 10 seconds have passed`
%% ...
The only way to communicate between processes is message passing (of course you could consider polling for a specific key in an ETS table or a file, but I don't like this).
If you use the spawn_monitor function in f/1 to start the F process and then have a receive block matching only the possible system messages from this monitor:
f(F) ->
    {_Pid, MonitorRef} = spawn_monitor(F),
    receive
        {_Tag, MonitorRef, _Type, _Object, _Info} -> ok
    end.
you will not mess up your gen_server's mailbox. The example is the minimal code; you can add a timeout (fixed or passed as a parameter), execute some code on normal or error completion, and so on.
You will not "pollute" the gen_servers mailbox if you spawn+wait for message before you return from the call or cast. A more serious problem with this maybe that you will block the gen_server while you are waiting for the other process to terminate. A way around this is to not explicitly wait but return from the call/cast and then when the completion message arrives handle it in handle_info/2 and then do what is necessary.
If the spawning is done in a handle_call and you want to return the "result" of that process, you can delay replying to the original call until the handle_info clause that handles the process termination message.
Note that, however you do it, a gen_server:call has a timeout value, either implicit or explicit, and if no reply is returned it generates an error in the calling process.
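A minimal sketch of that idea (not the code from the question), assuming the server state is a map from monitor reference to the caller's From tag:

handle_call({run, F}, From, State) ->
    {_Pid, Ref} = spawn_monitor(F),
    %% Don't reply yet; remember the caller, keyed by the monitor reference.
    {noreply, State#{Ref => From}}.

handle_info({'DOWN', Ref, process, _Pid, Reason}, State) ->
    case maps:take(Ref, State) of
        {From, NewState} ->
            %% Now answer the original gen_server:call.
            gen_server:reply(From, {done, Reason}),
            {noreply, NewState};
        error ->
            {noreply, State}
    end.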
The main way to communicate with a process in the Erlang VM is message passing with the erlang:send/2 or erlang:send/3 functions (alias !). But you can "hack" Erlang and use several other ways of communicating between processes.
You can use erlang:link/1 to communicate the state of a process; it is mainly used when your process dies, ends, or something goes wrong (an exception or a throw).
You can use erlang:monitor/2; this is similar to erlang:link/1 except that the message goes directly into the monitoring process's mailbox.
You can also hack Erlang and use some internal means (shared ETS/DETS/Mnesia tables) or external methods (a database or other things like that). This is clearly not recommended and "destroys" the Erlang philosophy... but you can do it.
It seems your problem can be solved with the supervisor behaviour. A supervisor supports several strategies to control supervised processes:
one_for_one: If one child process terminates and is to be restarted, only that child process is affected. This is the default restart strategy.
one_for_all: If one child process terminates and is to be restarted, all other child processes are terminated and then all child processes are restarted.
rest_for_one: If one child process terminates and is to be restarted, the 'rest' of the child processes (that is, the child processes after the terminated child process in the start order) are terminated. Then the terminated child process and all child processes after it are restarted.
simple_one_for_one: A simplified one_for_one supervisor, where all child processes are dynamically added instances of the same process type, that is, running the same code.
You can also modify or create your own supervisor strategy from scratch, or base it on supervisor_bridge.
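For illustration, a minimal one_for_one supervisor could look like this (the module and child names are made up):

-module(my_sup).
-behaviour(supervisor).
-export([start_link/0, init/1]).

start_link() ->
    supervisor:start_link({local, ?MODULE}, ?MODULE, []).

init([]) ->
    %% Restart only the crashed child, at most 5 times in 10 seconds.
    SupFlags = #{strategy => one_for_one, intensity => 5, period => 10},
    ChildSpec = #{id => my_worker,
                  start => {my_worker, start_link, []},
                  restart => permanent,
                  shutdown => 5000,
                  type => worker},
    {ok, {SupFlags, [ChildSpec]}}.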
So, to summarize, you need a process that waits for one or more terminating processes. This behaviour is supported natively by OTP, but you can also create your own model. To do that, you need to share the status of every started process, using a cache or a database, or by passing it when your process is spawned. Something like this:
%% The parent's pid has to be captured outside the spawned fun,
%% otherwise self() inside it would refer to the new process.
Parent = self(),
Fun = fun MyFun(ParentProcess, {result, Data})
            when is_pid(ParentProcess) ->
              ParentProcess ! {self(), Data};
          MyFun(ParentProcess, MyData)
            when is_pid(ParentProcess) ->
              % do something that eventually produces {result, ...}
              MyData2 = MyData,
              MyFun(ParentProcess, MyData2)
      end,
spawn(fun() -> Fun(Parent, InitData) end).
EDIT: I forgot to add an example without send/receive. I use an ETS table to store every result from the lambda function. This ETS table is passed in when we spawn the process. To get the result, we can select the data from this table. Note that the key of the row is the pid of the process.
spawner(Ets, Fun, Args)
  when is_integer(Ets),
       is_function(Fun) ->
    spawn(fun() -> Fun(Ets, Args) end).

Fun = fun F(Ets, {result, Data}) ->
              ets:insert(Ets, {self(), Data});
          F(Ets, Data) ->
              % do something here
              Data2 = Data,
              F(Ets, Data2)
      end.
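A hypothetical way to use the two snippets above: the table must be public so the spawned process is allowed to insert its result, and note that ets:new/2 returns a reference on recent OTP releases, so the is_integer/1 guard in spawner/3 would have to be relaxed.

%% Create a table the worker can write into, start the worker,
%% and later poll for the result keyed by the worker's pid.
Ets = ets:new(results, [set, public]),
Pid = spawner(Ets, Fun, InitData),
%% ... later:
case ets:lookup(Ets, Pid) of
    [{Pid, Result}] -> {ok, Result};
    []              -> not_ready
end.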
I have a strange problem: my Delphi application raises an EOutOfResources exception just after starting up, in an Application.CreateForm call. Is there somebody out there who has solved such a problem?
The strange things are
that this happens on a single machine only. I do not have this problem on other computers;
that the application runs properly if a teleservice session is active (we use Danware NetOp). If the teleservice is not running (NetOp waits for a guest login), the application fails.
The application was developed under D7; OS is WinXP SP3.
Thanks for your help
--- Update 1 ---
The application uses EurekaLog to catch exceptions and store error information. It says the EOutOfResources happens in Application.CreateForm (some 50 forms already created, a few other forms pending creation); the message is "Out of system resources". The exception address is 7C81EB2E.
EurekaLog also provides the call stack:
|*Exception Thread: ID=2088; Priority=0; Class=; [Main] |
|-----------------------------------------------------------------------|
|7C81EB2E|kernel32.dll| | | | |
|77D56C4F|user32.dll | | |CreateIcon | |
|7C9205D4|ntdll.dll | | |RtlAllocateHeap | |
|7C9110ED|ntdll.dll | | |RtlLeaveCriticalSection| |
|77D2058E|user32.dll | | |SystemParametersInfoA | |
|77D205A3|user32.dll | | |SystemParametersInfoA | |
|7C809AE4|kernel32.dll| | |VirtualAllocEx | |
|7C809AA2|kernel32.dll| | |VirtualAllocEx | |
|7C809A94|kernel32.dll| | |VirtualAlloc | |
|0060E359|_765013.exe |_765013.dpr| | |235[58]|
|7C91E64C|ntdll.dll | | |NtSetInformationThread | |
-------------------------------------------------------------------------
Total memory use is about 60 MB; the application has some 20 MB in use.
I do not know the number of handles in use; EurekaLog does not provide this.
--- Update 2 ---
Now we have exchanged the PC for another one of the same type. The exception was not raised again. However, we had a similar effect on another machine, this time not being able to open a file during Application.CreateForm. The file name string was empty... After a number of hard resets (power shut-downs) the problem disappeared.
We suspect that the exceptions are caused by a network problem. At this customer we have four applications running (two each from two identical projects). They share data over a company network; for that there is a NAS. Network login is done at Windows start-up, about 2 minutes before the applications are started.
The teleservice runs over the company network too.
The question is now whether Application.CreateForm is trying to connect to the network. Our OnCreate event handlers do not require an open network.
The applications' source code is on the NAS too (encrypted by TrueCrypt). After compilation we copy the EXE and all other needed files to a local hard drive and run the application from there. Normally, the TrueCrypt container is closed.
Could it happen that the EXE is searching for some files on the NAS or in the TrueCrypt container?
Maybe somebody is familiar with such issues. Thanks for your help.
I am trying to get a better performance visualization of my software using fprof. However, by default it measures wall-clock time, whereas I want to measure CPU time. Hence, I run the following command in the shell, but I get an error. I could not really find the reason why it fails, either on the Internet or in the Erlang documentation. Does anybody have any hints?
% fprof:trace([start, {cpu_time, true}]).
{error,not_supported}
fprof's cpu_time flag is translated to trace's cpu_timestamp flag, as can be seen in the fprof.erl snippet quoted further below.
According to http://erlang.org/doc/man/erlang.html#trace-3:
cpu_timestamp
A global trace flag for the Erlang node that makes all trace timestamps be in CPU time, not wallclock. It is only allowed with PidSpec==all. If the host machine operating system does not support high resolution CPU time measurements, trace/3 exits with badarg.
So, if the call erlang:trace(all, true, [cpu_timestamp]) raises a badarg exception, it means that this feature is unsupported on your platform.
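If you want to check this directly, a small helper along these lines (not part of fprof) should tell you whether the current platform supports it:

%% Returns true if CPU-time trace timestamps are supported on this node.
cpu_timestamp_supported() ->
    try erlang:trace(all, true, [cpu_timestamp]) of
        _ ->
            %% Switch the global flag off again before reporting success.
            erlang:trace(all, false, [cpu_timestamp]),
            true
    catch
        error:badarg ->
            false
    end.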
The following code is from the file fprof.erl. It shows where the error message comes from, but I don't know how to go on and find the source code of erlang:trace; it may be written in C. If trace's source code can be found, the secret can be unveiled.
trace_on(Procs, Tracer, {V, CT}) ->
    case case CT of
             cpu_time ->
                 try erlang:trace(all, true, [cpu_timestamp]) of _ -> ok
                 catch
                     error:badarg -> {error, not_supported}  %% the error message above comes from here
                 end;
             wallclock -> ok
         end
    of
        ok ->
            MatchSpec = [{'_', [], [{message, {{cp, {caller}}}}]}],
            erlang:trace_pattern(on_load, MatchSpec, [local]),
            erlang:trace_pattern({'_', '_', '_'}, MatchSpec, [local]),
            lists:foreach(
              fun (P) ->
                      erlang:trace(P, true, [{tracer, Tracer} | trace_flags(V)])
              end,
              Procs),
            ok;
        Error ->
            Error
    end.
I'm using a port to run a pipeline which uncompresses and dd's some data:
Port = open_port({spawn, "bzcat | sudo dd of=/dev/foo"},
                 [stream, use_stdio, exit_status]),
What I would like to do is produce an end-of-file situation on the port's output, which causes the pipeline to complete and eventually exit.
I would like to wait for this completion and also capture the exit_status.
When I just call port_close, it looks to me as if the pipeline is simply terminated and there is no waiting for completion. Also, I don't get any exit_status...
How can I wait for the exit before my next step (which requires the dd to have completed)?
I did some experiments and it looks like port_close at least doesn't kill the process; you just don't find out when it's done. Is this correct?
If you just need to wait for the command spawned by open_port to complete, you need to wait for the exit_status message:
1> Port = open_port({spawn, "sleep 7"}, [exit_status]).
#Port<0.497>
2> receive {Port, {exit_status, Code}} -> Code after 10000 -> timeout end.
0
Update (on how to make the port just close its output pipe): I think you can't close only the output pipe with the default spawn driver. The default driver doesn't have any control commands, and port_close, although it doesn't kill the spawned command, completely erases all of the port's state.
Possible solutions:
Write the input stream to a file first and then run the bzcat/dd sequence on that file;
Write your own driver or NIF (maybe some open-source implementations already exist?);
Use some external script and a control protocol; for example, the full (or per-chunk) length can be transferred before the actual content so the script knows when to close the connection (a rough sketch of this idea follows below).
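A rough sketch of the third option; the external script name is made up, and the {packet, 4} option makes the emulator prepend a 4-byte length header to everything sent with port_command/2, so the script can frame its input itself:

%% Hypothetical: "my_writer.sh" reads 4-byte length-prefixed chunks from
%% stdin, feeds them to "bzcat | dd ...", and exits when it sees a
%% zero-length chunk.
Port = open_port({spawn, "./my_writer.sh /dev/foo"},
                 [binary, use_stdio, exit_status, {packet, 4}]),
port_command(Port, Chunk),   %% send one chunk of your data (repeat as needed)
port_command(Port, <<>>),    %% the agreed "end of stream" marker
receive
    {Port, {exit_status, Status}} -> Status
end.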
Several rather ugly workarounds to this problem can be found here: limitations of erlang:open_port() and os:cmd()
Some even use netcat to map the problem onto a TCP connection.