Sharing Electron main process

I have multiple Electron apps. One is the main app, and there are several other feature apps. There are a few buttons on the main app that open the feature apps. The problem is that every app has its own main process, which drives up CPU usage. Is it possible to use only one main process and share the renderer processes?

That's exactly what it's designed for; take a look at this repo:
https://github.com/electron/electron-api-demos/tree/master/renderer-process/windows

It depends. A single Electron application can display multiple browser windows, each with its own renderer process. But you can't separate the main process and the renderer processes into different 'executables' and connect them.
I have one Electron application running that 'hosts' several apps; the main app is a launcher (a tray application) that lets me start them. So all the web apps are connected to the same main process.
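As a minimal sketch of that setup (the tray icon path, feature names, and HTML files are placeholders), one main process can create the tray launcher and open each feature app in its own BrowserWindow, each of which gets its own renderer process:

```ts
// main.ts - a single main process hosting a tray launcher plus feature windows.
// Icon path, feature names, and HTML files are hypothetical.
import { app, BrowserWindow, Tray, Menu } from "electron";

let tray: Tray;

function openFeature(page: string): void {
  // Each feature gets its own window (and therefore its own renderer
  // process), but they all share this one main process.
  const win = new BrowserWindow({ width: 1000, height: 700 });
  win.loadFile(page);
}

app.whenReady().then(() => {
  tray = new Tray("launcher.png");
  tray.setContextMenu(
    Menu.buildFromTemplate([
      { label: "Feature A", click: () => openFeature("feature-a.html") },
      { label: "Feature B", click: () => openFeature("feature-b.html") },
      { label: "Quit", role: "quit" },
    ])
  );
});
```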

Related

How can I debug high CPU usage in Electron?

I'm writing an Electron app, and a few builds back testers started noticing that two electron.exe processes were consuming a lot of CPU time all the time, one pegging a CPU core and the other using about 85% of a core.
I'm certain that this was not always the case, as builds from several months ago didn't do this. But I'm at a loss as to how to debug which code changes may have introduced this, because the code base has evolved dramatically over that time.
process.getIOCounters() reports that several gigabytes of IO occur every few minutes. The application is not deadlocked and everything still works; it is just chewing through CPU. It happens any time the app is open, even if it is in the background without any user input. I've only deployed this to Windows 10 x64 systems, on Electron 1.7.9 and also 1.7.5.
Based on the behavior, I'm certain that this IO is interprocess communication between the renderer and main threads, but I'm not manually performing any IPC. I think the problem is being caused by some module we've introduced that improperly resides in the renderer thread.
My question: how does one debug the Electron renderer/main thread IPC pipe? Can it be hooked to know what the contents of the gigabytes of traffic are?
Based on the past few days of attempting to debug this, I've answered the question for myself:
My question: how does one debug the Electron renderer/main thread IPC pipe?
Don't. Electron seemed like a good idea: write all your client and platform code in the same place. But there are a lot of catches, and out of the blue libraries will have strange bugs that are costly to address because they fall outside the mainstream use case. This certainly has a lot to do with me not being an Electron expert, but in the real world there are deadlines and I can't always get up to speed as much as I would like to.
I've updated my architecture to the tried and true Service/GUI model. I'll be maintaining full browser support for the client code, as well as an Electron mode with hooks for some features when Electron is detected.
This allows me to quickly identify issues that are specific to a browser, version, or platform framework. It also lets me use whichever version of Node.js I'd like for the service, which has also been an issue in my case.
I still love Electron, though; I'm just going to be more careful as I use it. If I do discover the specifics of why I had this problem, I'll check back and report those details.
Update
So this issue was not directly related to Electron as I had supposed; the IPC was not between the renderer and main threads and was a red herring. It was actually a Chrome keyframe-animation issue that was causing a 60 FPS redraw rate. I'm still not sure why this caused GBs of IPC, but whatever. See https://github.com/Microsoft/vscode/issues/22900
I was able to discover this by porting the app back to a native browser (with a Node.js service). I then ran it in Chrome, Edge, and Firefox. Only Chrome behaved this way.
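For what it's worth, the "can it be hooked?" part of the question can be approximated from the renderer side. This is a rough sketch rather than an official debugging facility: it only covers ipcRenderer.send, and the payload size is estimated via JSON serialization.

```ts
// preload.ts (or early renderer code) - log channel names and rough payload
// sizes for outgoing IPC. Traffic from other APIs (e.g. remote or
// webContents.send) would need separate hooks.
import { ipcRenderer } from "electron";

const originalSend = ipcRenderer.send.bind(ipcRenderer);

ipcRenderer.send = (channel: string, ...args: unknown[]): void => {
  let size = 0;
  try {
    size = Buffer.byteLength(JSON.stringify(args));
  } catch {
    // Non-serializable arguments: skip the size estimate.
  }
  console.log(`[ipc] ${channel}: ~${size} bytes`);
  originalSend(channel, ...args);
};
```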

Scheduled Job equivalent functionality in MVC

I have a requirement in my MVC app.
I have an export-to-Excel feature that takes about 3 minutes (the user clicks an export button and waits).
The export downloads an Excel file that has multiple worksheets, after applying certain rules to the data.
These rules are data manipulations plus applying colors to the cells of certain columns.
In order to avoid the wait, I was asked to develop code within the MVC app that can run like a scheduled job.
This job has to export the Excel file to a dedicated folder on the network at the scheduled time (once daily).
I was also asked to develop a web page within the app with links to download these files.
Questions (any help would be appreciated):
I have chosen Quartz.NET to implement this requirement. To my limited knowledge, it is an open-source library that provides the facility to schedule a job (a class developed in .NET). Is it the right choice, or would there be any implications in the future?
Is it really necessary to develop job-like code, or can some other approach address this?
I'm not very familiar with Quartz.net, but I do know that trying to run background/scheduled tasks from within the same process as the MVC application can be problematic.
Ref 1: http://haacked.com/archive/2011/10/16/the-dangers-of-implementing-recurring-background-tasks-in-asp-net.aspx/
Ref 2: http://www.hanselman.com/blog/HowToRunBackgroundTasksInASPNET.aspx
Essentially, you can't guarantee that the task will complete correctly, due to how IIS handles app pools (which is where your MVC process runs, assuming IIS hosting).
You mention running a scheduled task within your MVC app. Again, this is incorrect. Why can't you just slap a console app project into the solution and drive the code from there, then put it on the server and use the Windows Task Scheduler?
In terms of background tasks, the "correct" way to do this is to send a command from your MVC app to some sort of message queue, which can then ensure that the command doesn't get dropped. I've used RabbitMQ in the past (a middleware message broker). Perhaps this is the aim of Quartz.net.
This setup typically involves another app (for me, usually a console app run on the server) that receives the command message from the message queue and runs in its own process, entirely separate from MVC and thus from the issues inherent in IIS app pools and background tasks.
A lot of work, really... one would think it'd be easier, but that's the surefire way to do it and maintain the integrity of the task to be run.
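For illustration, the enqueue-and-consume shape described above looks roughly like this. The sketch uses Node's amqplib client; the queue name, payload, and doExport helper are made up, and the same pattern applies with RabbitMQ's .NET client.

```ts
// The web app only enqueues a command; a separate worker process picks it
// up and does the slow Excel export outside the web server's app pool.
import amqp from "amqplib";

const QUEUE = "excel-export"; // hypothetical queue name

// Producer side (called from the web app when the user requests an export):
export async function requestExport(reportDate: string): Promise<void> {
  const conn = await amqp.connect("amqp://localhost");
  const ch = await conn.createChannel();
  await ch.assertQueue(QUEUE, { durable: true });
  ch.sendToQueue(QUEUE, Buffer.from(JSON.stringify({ reportDate })), {
    persistent: true, // survive broker restarts so the command isn't dropped
  });
  await ch.close();
  await conn.close();
}

// Worker side (a separate long-running process, e.g. a console app or
// service, entirely outside IIS):
export async function runWorker(doExport: (d: string) => Promise<void>) {
  const conn = await amqp.connect("amqp://localhost");
  const ch = await conn.createChannel();
  await ch.assertQueue(QUEUE, { durable: true });
  await ch.consume(QUEUE, async (msg) => {
    if (!msg) return;
    const { reportDate } = JSON.parse(msg.content.toString());
    await doExport(reportDate); // the slow multi-worksheet export
    ch.ack(msg);                // only ack once the file is written
  });
}
```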

How many copies of V8 in NW.js and Electron?

Oversimplifying a bit, NW.js and Electron consist of parts of a browser (WebKit, Chromium, or whatever) joined with a JavaScript engine (Node.js, io.js, or whatever).
My question is: how many copies of V8 are in there? I ask because I know that there is one in Chromium and one in Node.js.
Same question for NW.js.
I'm not asking how many are actually engaged in executing the app, but how many are present in the distributed app and, potentially, in memory.
There is only one copy of the V8 code; the Node.js standard library is integrated into Chromium's V8 run loop. In memory, since Electron follows Chromium's multi-process model, each process runs its own V8 run loop (i.e. the main process, plus one process per window).
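To see that process model at runtime, here is a small sketch (assuming a recent Electron version where app.getAppMetrics() is available) that lists each OS process; every browser and renderer process hosts its own live V8 instance even though the engine code ships in the binary only once.

```ts
import { app, BrowserWindow } from "electron";

app.whenReady().then(() => {
  // Open two windows so there is something to count.
  new BrowserWindow().loadURL("about:blank");
  new BrowserWindow().loadURL("about:blank");

  setTimeout(() => {
    // One entry per OS process: the browser (main) process, one renderer
    // per window, the GPU process, etc.
    for (const m of app.getAppMetrics()) {
      console.log(`${m.type} process, pid ${m.pid}`);
    }
  }, 2000);
});
```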

Rails Engines, mountable apps etc, effect of failure of one

I assume that if you mount an app inside a main Rails app, or use a Rails engine, and any of the sub-apps fails/crashes, then the main app and all the other sub-apps fail/crash as well.
Does anyone know this for sure? I am wondering, if I am building a system, whether I should separate my architecture into multiple standalone applications and instances, or build engines/mountable apps instead of one larger app. The worry is what would happen if part of the app ecosystem goes down: how does it affect the rest of the applications?
Thanks
No, it won't crash the main app. A mounted engine runs inside the same process as the host app, so an unhandled exception in the engine is rescued per request like any other error; only something that takes down the whole process (or server) would affect everything at once.

Auto update a service

I have written several services in Delphi, but I want to add the ability to auto-update the service, either from a LAN UNC path or from an HTTP server. I have been pondering this and am interested to hear people's ideas. I can create a thread that checks for the update periodically, but how do I go about stopping the service and uninstalling and reinstalling it automatically? My initial thought was to write a console app to do this and start it using CreateProcess, then let the service stop while the console app does the work, starting the new version of the service before it exits. Is this a good strategy, or should I consider something else? Thanks in advance.
I do as you suggest. A thread checks occasionally for an update. If it is present, it downloads it and puts it into an appropriate place. It then verifies that it is wholesome (don't want it to be broken!). Finally, the thread then launches another app with parameters to tell it what to do, specifically, the name of the service, the location of the file to replace, and the file to replace it with. Then the service just waits.
When the updater app starts, it pauses a moment to make sure that the service is all stable, and then it uses the service control API to stop the service. It then monitors it until it is gone. Finally, it pauses a little to ensure that Windows has really finished with the file. Then it starts the process of renaming the old file to move it out of the way (if still in use, it retries a few times), and then copying the new file into place. And finally, it starts the service up again. Then the updater quits.
This has worked quite reliably for my services, and also standalone apps too (with different parameters for the updater app to know which mode). And if you are careful, you can update the updater using the exact same system, which is nice to watch.
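For illustration only, here is a rough sketch of that updater flow using Node/TypeScript rather than Delphi; the service name, paths, and timings are hypothetical, and it leans on the standard sc.exe commands to stop and start the service.

```ts
// updater.ts - stop the service, swap the binary, start the service again.
import { execSync } from "node:child_process";
import * as fs from "node:fs";

const SERVICE = "MyService";               // hypothetical service name
const TARGET = "C:\\svc\\MyService.exe";   // binary currently in place
const UPDATE = "C:\\svc\\MyService.new.exe"; // verified downloaded update

const sleep = (ms: number) => new Promise((r) => setTimeout(r, ms));

async function main() {
  await sleep(2000);                 // let the service settle after launching us
  execSync(`sc stop ${SERVICE}`);    // ask the service control manager to stop it

  // Wait until the service reports STOPPED.
  while (!execSync(`sc query ${SERVICE}`).toString().includes("STOPPED")) {
    await sleep(1000);
  }
  await sleep(2000);                 // give Windows time to release the file

  fs.renameSync(TARGET, TARGET + ".old"); // move the old binary aside
  fs.copyFileSync(UPDATE, TARGET);        // copy the new binary into place
  execSync(`sc start ${SERVICE}`);        // and start the service up again
}

main();
```

In a real updater you would retry the rename a few times if the file is still in use, as described above.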
I would have the service be a shell that only updates another executable or DLL file where the real code lives.
Have some communication method between the shell and the child process to force a shutdown, and then have the shell perform the upgrade and relaunch the child.
As a side note, this also makes debugging the service much easier, since you'll be able to run the child process directly without having to worry about the extra effort required to debug Windows services.
Your idea seems very good to me; however, take this into consideration as well:
- Add a module (the main core) to the service that can be unloaded, and have it load the updated module (*.dll file) when an update is available. During this time the service should put its "tasks" in a queue or something similar.
Additionally, you can use plugins and/or scripts, such as Pascal Script or DWScript.
Recent versions of Windows (I think since Windows 10) do not allow a service to start other programs, so you will need another program to run the update. It could be another service.
Windows Services cannot start additional applications because they are not running in the context of any particular user. Unlike regular Windows applications, services are now run in an isolated session and are prohibited from interacting with a user or the desktop.
