How many copies of V8 in NW.js and Electron?

Oversimplifying a bit, NW.js and Electron consist of parts of a browser (WebKit, Chromium, or whatever) joined with a JavaScript runtime (Node.js, io.js, or whatever).
My question is: How many copies of V8 are in there? I ask because I know that there is one in Chromium, and one in Node.js.
Same question for NW.js.
I'm not asking how many are actually engaged in executing the app, but how many are present in the distributed app and, potentially, in memory.

There is only one copy of the V8 code: the Node.js standard library is integrated into Chromium's V8 run loop. In memory, since Electron follows Chromium's multi-process model, each process runs its own V8 instance (i.e. the main process, plus one renderer process per window).
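To make the process model concrete, here is a minimal sketch (assuming Electron's standard entry point; the file names are hypothetical) in which the main process creates two windows, so three V8 instances end up in memory: one for the main process and one per renderer process:

    // main.js - the main process runs one V8 instance.
    const { app, BrowserWindow } = require('electron');

    app.on('ready', () => {
      // Each BrowserWindow spawns a separate renderer process,
      // each with its own copy of V8 in memory.
      new BrowserWindow().loadURL('file://' + __dirname + '/index.html');
      new BrowserWindow().loadURL('file://' + __dirname + '/index.html');
    });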

Related

How can I debug high CPU usage in Electron?

I'm writing an Electron app, and a few builds back testers started noticing that two electron.exe processes were consuming a lot of CPU time all the time: one pegging a CPU core and the other using about 85% of a core.
I'm certain that this was not always the case as builds several months ago didn't do this. But I'm at a loss to know how to debug what code changes may have introduced this as the code base has evolved dramatically over that time.
process.getIOCounters() reports that several gigabytes of IO is occurring every few minutes. The application is not deadlocked and everything still works; it is just chewing through CPU. It happens any time the app is open, even if it is in the background without any user input. I have only deployed this to Windows 10 x64 systems, on Electron 1.7.9 and also 1.7.5.
Based on the behavior I'm certain that this IO is interprocess communication between the renderer and main processes, but I'm not manually performing any IPC. I think this problem is being caused by some module we've introduced that improperly resides in the renderer process.
My question: how does one debug the Electron renderer/main process IPC pipe? Can it be hooked to know what the contents of the gigabytes of traffic are?
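For context, the kind of hook I had in mind would be something like this minimal sketch (not actual code from my app), which wraps ipcRenderer.send in the renderer so every outgoing message's channel and approximate size get logged:

    // Run in the renderer process; a hypothetical logging wrapper.
    const { ipcRenderer } = require('electron');

    const originalSend = ipcRenderer.send.bind(ipcRenderer);
    ipcRenderer.send = (channel, ...args) => {
      // Log the channel and serialized payload size before forwarding.
      console.log('ipc send:', channel, JSON.stringify(args).length, 'bytes');
      return originalSend(channel, ...args);
    };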
Based on the past few days of attempting to debug this, I've answered the question for myself:
My question: how does one debug the Electron renderer/main process IPC pipe?
Don't. Electron seemed like a good idea: writing all your client and platform code in the same place. But there are a lot of catches, and out of the blue libraries will have strange bugs that are costly to address because they are outside the mainstream use case. This certainly has a lot to do with me not being an Electron expert, but in the real world there are deadlines and timelines, and I can't always get up to speed as much as I would like to.
I've updated my architecture to the tried and true Service/GUI model. I'll be maintaining full browser support for the client code, as well as an Electron mode with hooks for some features when Electron is detected.
This allows me to quickly identify issues that are specific to a browser, version, or platform framework. It also lets me use whichever version of Node.js I would like for the service, which has also been an issue in my case.
I still love Electron, though; I'm just going to be more careful as I use it. If I do discover the specifics of why I had this problem, I'll check back and report those details.
Update
So this issue was not directly related to Electron as I had supposed; the IPC was not between the renderer and main processes and was a red herring. It was actually a Chrome keyframe animation issue which was causing a 60 FPS redraw rate. I'm still not sure why this caused GBs of IPC, but whatever. See https://github.com/Microsoft/vscode/issues/22900
I was able to discover this by porting this app back to the plain browser (with a Node.js service). I then ran it in Chrome, Edge, and Firefox. Only Chrome behaved this way.

System replacement technology

I have some questions that I am hoping someone out there will be able to answer for me.
Our situation is that we are considering a ground-up replacement for an existing system. First, I will describe the existing system that we have.
We are currently operating on a pure object stack: the environment is OO and the database is OO. We currently have 3-4 million lines of code, which was developed by 2-3 people, and we currently have a development team of 6, which continues to develop. The initial development started in 1997, and we have many clients installed. The environment is 64-bit (language and database), multilingual, and Unicode. The operating system we use is Windows (latest versions).

We have a number of modules which are delivered via a thin client (not a browser), and the bandwidth usage is very low (it operates at the 64KB WAN performance level which is still prevalent in some countries in which we operate, i.e. where the infrastructure is poor). Our biggest implementation is for one of the biggest companies in the world, and the target is to deliver the functionality for 30+ countries from one system instance (one physical DB) for that client, and to deliver the functionality via the thin client to all countries from one set of application servers (the application servers are located with the DB server and perform all of the processing; the thin client deals only with interaction with the users and the display and collection of data). The system is used by thousands of users on the thin client.

We also have mobile and portal components, which are developed in C#; they are a small segment of the overall system and connect using APIs. There are maybe 1,000 mobile application users, with a final number expected to be 5,000 mobile users. Within the system there will be 500,000-1,000,000 vendors, with each vendor expected to have at least two transactions every single day.
The DB itself is partitioned, and replicated to a number of locations in real time. The final size of the DB when implementation is complete is expected to be in the 2TB range, and the current system will deal with that, no problem. The way the replication works is that there are multiple replicated environments on hot standby, i.e. all application servers and API servers are replicated. Our largest client routinely (once per month) performs scheduled Windows updates, and when this occurs the primary environments are automatically rolled over to the secondaries, so the system remains available all of the time. In subsequent months, the system is rolled back to the primaries; this transition is very fast, i.e. real time.
At our largest client, the system was installed in 2014, and since that time it has not experienced any outage, except for planned outages because of server maintenance or whatever in that time period; i.e. it has not crashed or faulted in its first three years of operation. For the purposes of providing updates and enhanced functionality to the target organisation, or specifically to one of their subsidiaries in the countries in which they operate, we are able to make changes to the system via the loading of functional updates online. This is a very important component of my question, as for many years we have been able to update at one central location and have the new functionality immediately available to all users in all countries while they are continuously using the application. This is without change to any .EXE or .DLL or whatever files that the end user is operating. This is a huge advantage for us currently, as many of the organisations we provide services to do NOT allow any change to EXE or DLL files on end-user devices, and there is generally some approval process which takes some days and requires manual intervention by the users to make this process happen.
For further information: we have a support team of 6 providing support services to all of our clients in all of these countries, and we operate three shifts of 2 people to provide these services. So this should give you some background on the stability of the system and the level of support we provide. Our service level is described as outstanding. We do, of course, have SLA agreements in place, and we have never violated any SLA term.
So, now for my question. What technology would people choose to replace such a system, and how many people would it take to replace it? It has been recommended to me that C# and SQL Server be used to replace it, and that it would take a couple of good people a year or two to redevelop from the ground up (we have all of the functional specifications from the last 20 years to work from). However, without in-depth knowledge of this technology stack, I am quite concerned about the time period (I think it is very optimistic), I am concerned about the scalability of SQL Server, and most importantly I am deeply concerned that we will lose the advantage we have enjoyed that allows us to change the functionality of the current system via online updates without affecting logged-on users.

I am told that this sort of thing is just not possible in C#, and that if we have to provide an update to fix a bug or provide new functionality, then all users will have to replace the affected EXE and DLL files, i.e. all of them; thousands of users would have to do this each and every time we update. This would be done automatically via a process called ClickOnce, but I am assuming that if there is a company policy within our client environment that EXE changes are not allowed, then ClickOnce will not be viable. I am told that if we took a browser approach to the new development, then any updates would be server-side (which is better), but would still require an outage to apply.
Finally, more information on the online updates that are now possible. Currently, all of the systems are replicated for disaster recovery and 100% uptime during updates. When we currently update our systems (at one central location), those logical updates are automatically applied at all replicated systems as well, without user intervention. Another concern I have is that, in addition to the problem of having to apply the same update at multiple locations (which it seems is a requirement in C#, or so I am being told), we will also have all of the replicated systems to update manually as well. As you can see, our support team is small, so I am worried about a future blowout in the maintenance resources required to maintain all of this, and then the cost in terms of time spent fixing mistakes that may creep in with all these additional tasks required to perform the same exercise that we currently do only once.
Finally, a final piece of information on how we currently do updates. If the update is structural in nature, i.e. it changes the physical structure of the database, then an outage is required: a full system-down outage. When we apply the update, the structural change is made, and this is automatically replicated across all secondary (standby) environments. The users are not affected in terms of the software for the thin client or browsers; they simply log back on after the outage is complete. We currently have a window at a set time, once per month, to perform these updates; however, it is rarely required. Once per week, we have a window for functional changes to be applied, and these are applied online while the users are all online performing their daily and periodic tasks.
So, if anyone out there can give me some insight into what technologies are available for such a system replacement, or whether C# and SQL Server can provide the services and performance we actually need (I would be particularly interested to know whether C# applications can in fact be updated in real time), that would be fantastic. We are obviously in the very early stages of this process in terms of how this should be done, so any information you can provide would be greatly appreciated and will save many hours of research.
Thank you in advance.
From the basic requirements you describe, my first thought is that you should probably adopt a fully Web-based solution for your system; that way, all updates can be done centrally without too much negative effect on your client access.
But if I understand your question correctly, one aspect you require is to have executable code ready at the client side (so a pure Web solution won't work). In that case, something that can quickly and easily update itself at the client side is needed.
We've been using the Node.js and MongoDB stack for a few years now, and there are some quite interesting effects of using pure scripts for your business logic: besides being easy to develop, the scripts themselves, when designed with certain guidelines, can perform a "hot reload" on the fly to update your business logic. So this is what I'd recommend trying / looking at.
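As a minimal sketch of what I mean by hot reload (the logic.js module name is hypothetical, and this is not our production code), you can watch a script on disk and, when it changes, drop it from the require cache and load the new version without restarting the process:

    const fs = require('fs');
    const path = require('path');

    const modulePath = path.resolve(__dirname, 'logic.js'); // hypothetical module
    let logic = require(modulePath);

    // On change, evict the old version from the require cache and reload.
    fs.watch(modulePath, () => {
      delete require.cache[require.resolve(modulePath)];
      try {
        logic = require(modulePath); // callers go through this binding
        console.log('business logic reloaded');
      } catch (err) {
        console.error('reload failed, keeping the old version:', err);
      }
    });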
The efficiency of Node.js and the flexibility provided by a NoSQL DB such as MongoDB are well described in many places, if you do a simple Google search.

My Erlang project has multiple applications: How should I start them?

I'm struggling, as an OTP newbie, to understand how to structure my Erlang project. So far it has several applications under an apps directory managed by rebar:
proj_root
  apps
    app1
    app2
    appN
  rebar.config
I can start app1, say, in the shell with application:start(app1). No doubt I could repeat this through appN. But is there a preferred or better way? Can I, say, write a function that bundles all these starts? If so, where do I put it?
I have several other questions along this line, but will post separately.
Many thanks,
LRP
You can indeed start applications manually as you suggest. This can quickly become burdensome if you have many applications and there are dependencies between them.
Automating the process is quite easy with a recursive function. If you try to start an application while one or more of its dependencies are not running, application:start/1 will fail and return {error, {not_started, App}}. This function can live in any of your applications, or even in an application of its own.
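For instance, a minimal sketch of such a function (ensure_started/1 is my name for it, not a standard library function) could look like this:

    %% Start App, recursively starting any dependency that is
    %% reported as not yet running.
    ensure_started(App) ->
        case application:start(App) of
            ok ->
                ok;
            {error, {already_started, App}} ->
                ok;
            {error, {not_started, Dep}} ->
                ok = ensure_started(Dep), %% start the missing dependency first
                ensure_started(App)       %% then retry the original application
        end.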
However, this manual (or automated) way of proceeding is not the OTP way, even if it can prove useful (typically for tests…). If you follow OTP principles, you are supposed to create a release with a .rel file listing all your applications. OTP releases are composed of a set of applications (yours and the system applications they depend on), an emulator, and a boot script that will start all the applications (and handle dependencies). Starting a node with your applications can be done by passing the -boot flag to erl, pointing at the proper boot script.
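For reference, a .rel file is just an Erlang term naming the release, the ERTS version, and every application with its version; a minimal sketch (all names and version numbers here are hypothetical) looks like this:

    {release, {"proj", "1.0.0"}, {erts, "10.4"},
     [{kernel, "6.4"},
      {stdlib, "3.9"},
      {sasl, "3.4"},
      {app1, "0.1.0"},
      {app2, "0.1.0"}]}.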
This is quite complex and rebar can actually build releases. It will even produce shell scripts to start nodes with all your applications using the OTP boot mechanism.

Auto update a service

I have written several services in Delphi now, but I want to add the facility of auto-updating the service, either from a LAN UNC path or from an HTTP server. I have been pondering this and I am interested to hear people's ideas. I can create a thread that will check for the update periodically, but how do I go about stopping the service, uninstalling, and installing automatically? My initial thought was to write a console app to do this and start it using CreateProcess, then let the service stop and the console app do the work, starting the new version of the service before it exits. Is this a good strategy, or should I consider something else? Thanks in advance.
I do as you suggest. A thread checks occasionally for an update. If it is present, it downloads it and puts it into an appropriate place. It then verifies that it is wholesome (don't want it to be broken!). Finally, the thread then launches another app with parameters to tell it what to do, specifically, the name of the service, the location of the file to replace, and the file to replace it with. Then the service just waits.
When the updater app starts, it pauses a moment to make sure that the service is all stable, and then it uses the service control API to stop the service. It then monitors it until it is gone. Finally, it pauses a little to ensure that Windows has really finished with the file. Then it starts the process of renaming the old file to move it out of the way (if still in use, it retries a few times), and then copying the new file into place. And finally, it starts the service up again. Then the updater quits.
This has worked quite reliably for my services, and also standalone apps too (with different parameters for the updater app to know which mode). And if you are careful, you can update the updater using the exact same system, which is nice to watch.
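To make the updater's stop/replace/start sequence concrete, here is a minimal Delphi sketch (the procedure and parameter names are mine, and error handling is omitted) built on the Windows service control API:

    uses Windows, WinSvc, SysUtils;

    procedure ReplaceServiceBinary(const SvcName, OldExe, NewExe: string);
    var
      SCM, Svc: SC_HANDLE;
      Status: TServiceStatus;
      Args: PChar;
    begin
      SCM := OpenSCManager(nil, nil, SC_MANAGER_ALL_ACCESS);
      Svc := OpenService(SCM, PChar(SvcName), SERVICE_ALL_ACCESS);
      try
        // Ask the service to stop, then wait until it reports STOPPED.
        ControlService(Svc, SERVICE_CONTROL_STOP, Status);
        repeat
          Sleep(500);
          QueryServiceStatus(Svc, Status);
        until Status.dwCurrentState = SERVICE_STOPPED;
        Sleep(1000); // give Windows a moment to release the file

        // Move the old binary aside, then copy the new one into place.
        RenameFile(OldExe, OldExe + '.old');
        CopyFile(PChar(NewExe), PChar(OldExe), False);

        // Start the freshly installed version.
        Args := nil;
        StartService(Svc, 0, Args);
      finally
        CloseServiceHandle(Svc);
        CloseServiceHandle(SCM);
      end;
    end;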
I would have the service be a shell that only updates another executable or DLL file where the real code is at.
Have some communication method between the shell and the child process to force a shutdown and then have the shell perform the upgrade and relaunch the child.
As a side note, this makes debugging the service much easier as well, as you'll be able to run the child process directly without having to worry about the extra effort required to debug Windows services.
Your idea seems very good to me; however, take this into consideration as well: add a module (the main core) to the service that can be unloaded, and have it load the updated module (a *.dll file) when an update is available. During that time the service should put its "tasks" in a queue or something similar.
Additionally, you can use plugins and/or scripts such as Pascal Script or DWScript.
Recent versions of Windows (I think since Windows 10) do not allow a service to start other programs, so you will need another program to run the update. It could be another service.
Windows Services cannot start additional applications because they are not running in the context of any particular user. Unlike regular Windows applications, services are now run in an isolated session and are prohibited from interacting with a user or the desktop.

Serving multiple Rails applications through one Ruby interpreter instance

Given that each Ruby on Rails application needs at least about 40 MB of memory, I was wondering if there is a way of running multiple Rails application instances (different ones) on one Ruby interpreter, so that all shared libraries (RMagick etc.) are shared between the application instances, saving space.
If that were possible, then I could run 5-6 Rails applications on a single virtual server with 256 MB of RAM.
Is that possible?
I don't think this is possible without substantially changing the current code base.
But all is not lost.
If these websites are fairly low-traffic and you have a fast VPS, you should bear in mind that mod_passenger drops instances from memory if they're inactive for a while. So in theory you could run an unlimited number of applications, as long as you only have a few active at the same time. The price is a slower response on the first request that loads the instance.
Another option would be to load all the shared libraries, then fork off as many child processes as you have apps (use Process.fork) and run a different app in each child.
Pages of memory which are only read and not written will be shared between the parent and child processes.
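A minimal Ruby sketch of that forking approach (the app list and paths are hypothetical, and a real setup would also need each child to bind its own server port):

    # Load heavyweight shared libraries once, in the parent process.
    require 'rmagick' # may be 'RMagick' in older gem versions

    APPS = %w[app1 app2 app3] # hypothetical application directories

    APPS.each do |app|
      Process.fork do
        # Each child boots a different Rails app; pages that are only
        # read (the preloaded libraries) stay shared with the parent
        # via copy-on-write.
        Dir.chdir("/srv/#{app}") { load 'config/environment.rb' }
        sleep # stand-in for the app's real server loop
      end
    end

    Process.waitall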
