Where to Put Heavy Computing Functions in Electron

I hope you are doing well! I am relatively new to Electron and, after reading numerous articles, I am still confused about where I should put heavy computing functions in Electron. I plan on using Node libraries in these functions, and the articles I have read state that these functions should go in the main process. However, isn't there a chance that this would overload my main process and thus block my renderers? That is definitely not desired, and I am wondering why I couldn't just put these functions in preload.js. Wouldn't that be better for performance? Also, if I am only going to require Node modules and only connect to my API, would there still be security concerns if I were to put these functions in preload.js? Sorry for the basic questions, and please let me know!
Thanks

You can use Web Workers created in your renderer thread. They won't block the UI.
However, you mentioned planning to use Node modules, so depending on what they are, it could make more sense to run them from the main process. (But see also https://www.electronjs.org/docs/latest/tutorial/multithreading, which points out that you can set nodeIntegrationInWorker independently of nodeIntegration.)
You can use https://nodejs.org/api/worker_threads.html in Node too, or for process-level separation there is also https://nodejs.org/api/child_process.html.
Note that worker threads in the browser (and therefore in the renderer thread) cannot share memory; instead, the data has to be serialized and passed back and forth. If your heavy computation works on large data structures, bear this in mind. The Node worker threads documentation, on the other hand, says they do allow sharing memory between threads.
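For example, a minimal sketch of offloading work to a Node worker thread from the main process might look like this (the file names, the heavyCompute function, and the 'heavy' IPC channel are illustrative, not from the question):

// worker.js - the CPU-heavy work; heavyCompute stands in for your real node-library calls
const { parentPort, workerData } = require('node:worker_threads');

function heavyCompute(n) {  // illustrative placeholder computation
  let sum = 0;
  for (let i = 0; i < n; i++) sum += i;
  return sum;
}

parentPort.postMessage(heavyCompute(workerData));

// main.js (main process) - spawn the worker so the event loop never blocks
const path = require('node:path');
const { Worker } = require('node:worker_threads');

function runHeavyTask(input) {
  return new Promise((resolve, reject) => {
    const worker = new Worker(path.join(__dirname, 'worker.js'), { workerData: input });
    worker.once('message', resolve); // the result is copied back (serialized)
    worker.once('error', reject);
  });
}

// Renderers can then request work over IPC, e.g.:
// ipcMain.handle('heavy', (_event, input) => runHeavyTask(input));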

Related

Use of serialized target queues for Concurrent queues in iOS

I was going through this excellent blog post on target queues in iOS
(http://www.humancode.us/2014/08/14/target-queues.html)
and I could not help but wonder why we need such a mechanism. In the example, we specify a serial target queue for a custom concurrent queue. Could we not achieve the same thing by submitting the blocks from the concurrent queue to a serial queue instead?
What's the point of having a serial target queue for a concurrent queue?
If I understand you correctly, you're asking why someone would run serial tasks on a concurrent queue.
You need that kind of behaviour when most tasks on some resource can be performed concurrently (i.e., simultaneously), but some tasks are, by nature, unsafe to perform concurrently with others.
The most common example is the readers/writers problem. Say you are accessing some resource in the file system. It's fine to read it from different threads at once - every reader gets exactly what it needs. But then comes the need to update the contents of that file. Modifying it while someone reads it leads to unpredictable results - a reader is not guaranteed to get the right, expected data (it may see part of the old version and part of the new). Even worse, there can be two writers (if the file contents are changed both by the application user and by some central storage over the network) - the result will be a crazy mix of the two versions, and the file may even end up corrupted.
Hence the need for each writer to wait until all other tasks have finished (no one reads, no one writes), and for each reader to wait until no writing task is in progress (no one writes, no matter how many readers there are).
Wikipedia has a nice article on this. I haven't run into other practical situations where you would need this, but I believe there are more.
I hope this answers your question.
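As a concrete illustration, here is a minimal Swift sketch of the readers/writers pattern on a concurrent dispatch queue, using a barrier for the writer; the queue label and the contents variable are made up for the example:

import Dispatch

// A concurrent queue lets many readers run at the same time.
let queue = DispatchQueue(label: "com.example.file-access", attributes: .concurrent)
var contents = "initial"

// Readers run concurrently with each other.
func read() -> String {
    return queue.sync { contents }
}

// A writer submits a barrier block: it waits for in-flight readers to
// finish, runs alone, and only then lets new readers proceed.
func write(_ newValue: String) {
    queue.async(flags: .barrier) {
        contents = newValue
    }
}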

Anything else I can use to keep my program responsive besides ProcessMessages?

I have an application that can run for quite a long time scanning a database.
During this process I keep my program responsive by calling ProcessMessages.
This ProcessMessages call is triggered when my progress bar is updated and incremented.
This works fine in most cases, but when the databases get larger it takes longer for the progress bar to jump up 1%, and the program becomes unresponsive until then.
Is there another way to keep my program alive besides ProcessMessages?
Multithreading is the answer. A standard Delphi application is basically a single-threaded application that can do one thing at a time. Hence the GUI lockup: it can't remain responsive while it's doing something else.
If you want a responsive GUI and heavy lifting at the same time, you need to do the heavy lifting in a separate thread or threads. That way your main thread keeps the program responsive while the worker threads do the heavy lifting.
This works nicely for heavy database work, but also, for instance, for downloading files or situations where the answer from a remote server can take a long time.
But this answer will probably give you more questions than answers, because explaining HOW to use multithreading would be too big a topic for this question.
One other thing, though: have a long, hard look at your database code. How are you retrieving records from the database? Are there good indexes on the tables? And so on. You can get insane speed improvements by optimizing this code before you ever have to start thinking about multithreading.
I've found the following threading tutorial very useful: http://thaddy.co.uk/threads/, which you can download with pictures at http://cc.embarcadero.com/item/14809.
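For a rough idea of the shape this takes, here is a minimal Delphi sketch (assuming Delphi 2009 or later for anonymous methods; ScanDatabase and the form/progress-bar names are placeholders, not from the question):

type
  TScanThread = class(TThread)
  protected
    procedure Execute; override;
  end;

procedure TScanThread.Execute;
begin
  // The heavy database scan runs here, off the main thread.
  ScanDatabase;  // placeholder for your actual scanning code
  // UI updates must be marshalled back to the main thread:
  Synchronize(
    procedure
    begin
      Form1.ProgressBar1.Position := Form1.ProgressBar1.Position + 1;
    end);
end;

// Started from the main thread; the GUI stays responsive meanwhile:
// TScanThread.Create(False);  // False = start running immediately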
If you want to make your GUI program appear responsive, you must service the message queue in a timely fashion. There is no alternative.
When it comes to running database queries, the way to do that without freezing your UI is to move the query to a different thread.

Clone a Lua state

Recently I have encountered many difficulties developing with C++ and Lua. My situation is this: for various reasons, there can be thousands of Lua states in my C++ program, and these states should all be identical just after initialization. Of course, I can call luaL_openlibs() and luaL_loadfile() for each state, but that is pretty heavy (in fact, it takes a rather long time for me to initialize even one state). So I am wondering about the following scheme: what about keeping a single separate Lua state (the only one that has to be initialized) and cloning it into the other Lua states - is that possible?
When I started with Lua, I too wrote a program with thousands of states; I had the same problems and thoughts, until I realized I was doing it totally wrong :)
Lua has coroutines and threads, and you need to use these features to do what you want. They can be a bit tricky at first, but you should be able to understand them in a few days, and it'll be well worth your time.
Take a look at the following Lua API call; I think it is exactly what you need.
lua_State *lua_newthread (lua_State *L);
This creates a new thread, pushes it on the stack, and returns a pointer to a lua_State that represents this new thread. The new thread returned by this function shares with the original thread its global environment, but has an independent execution stack.
There is no explicit function to close or to destroy a thread. Threads are subject to garbage collection, like any Lua object.
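Assuming your states can live with a shared global environment, a minimal C sketch might look like this:

#include <lua.h>
#include <lauxlib.h>
#include <lualib.h>

int main(void) {
    /* Initialize the libraries once, in a single "mother" state. */
    lua_State *L = luaL_newstate();
    luaL_openlibs(L);

    /* Each lua_newthread shares L's global environment but has its
       own execution stack, so per-thread setup is cheap. */
    lua_State *worker = lua_newthread(L);

    /* Run a chunk on the worker's own stack. */
    luaL_loadstring(worker, "print('hello from a worker thread')");
    lua_pcall(worker, 0, 0, 0);

    /* Threads have no explicit close; popping the thread object
       lets the garbage collector reclaim it. */
    lua_pop(L, 1);
    lua_close(L);
    return 0;
}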
Unfortunately, no.
You could try Pluto to serialize the whole state. It does work pretty well, but in most cases it costs roughly the same time as normal initialization.
I think it will be hard to do exactly what you're requesting here, given that a plain copy of the state would carry over internal references, as well as potentially pointers to external data. One would need to reconstruct those internal references so the clones don't simply point back into the state they were cloned from.
You could serialize out the state after one starts up and then load that into subsequent states. If initialization is really expensive, this might be worth it.
I think the closest thing to what you want that would be relatively easy is to put the states in different processes: initialize one state and then fork, however your operating system supports it:
http://en.wikipedia.org/wiki/Fork_(operating_system)
If you want something available from within Lua, you could try something like this:
How do you construct a read-write pipe with lua?

using Kernel#fork for backgrounding processes, pros? cons?

I'd like some thoughts on whether using fork {} to 'background' a process from a Rails app is a good idea or not...
From what I gather, fork { my_method; Process.setsid } does in fact do what it's supposed to do:
1) it creates another process with a different PID
2) it doesn't interrupt the calling process (i.e. the parent continues without waiting for the fork to finish)
3) it executes the child until it finishes
...which is cool, but is it a good idea? What exactly is fork doing? Does it create a duplicate of my entire Rails mongrel/passenger instance in memory? If so, that would be very bad. Or does it somehow avoid consuming a huge swath of memory?
My ultimate goal is to do away with my background daemon/queue system in favor of forking these processes (primarily for sending emails) - but if this won't save memory, then it's definitely a step in the wrong direction.
fork does make a copy of your entire process and, depending on exactly how you are hooked up to the application server, possibly a copy of that as well. As noted in the other discussion, this is done copy-on-write, so it's tolerable; Unix is built around fork(2), after all, so it has to manage it fairly fast. Note, though, that any partially buffered I/O, open files, and lots of other state are also copied, including program state that is spring-loaded to write those buffers out - so the child may flush output the parent also flushes, which would be incorrect.
I have a few thoughts:
Are you using Action Mailer? It seems like email would be easily done with AM or with IO.popen or something. (popen does a fork, but it is immediately followed by an exec.)
Immediately get rid of all that duplicated state by calling Process.exec on another Ruby interpreter plus your functionality. If there is too much state to transfer, or you really need to use those duplicated file descriptors, you might use something like IO.popen instead so you can send the subprocess work to do. The system will automatically share the pages containing the text of the subprocess's Ruby interpreter with the parent.
In addition to the above, you might want to consider using the daemons gem. While your Rails process is already a daemon, the gem might make it easier to keep one background task running as a batch-job server, and to start it, monitor it, restart it if it bombs, and shut it down when you do...
If you do exit from a fork(2)ed subprocess, use exit! instead of exit, so the at_exit handlers inherited from the parent don't run.
having a message queue and a daemon already set up, like you do, kinda sounds like a good solution to me :-)
Be aware that it will prevent you from using JRuby on Rails, as fork() is not implemented there (yet).
The semantics of fork are to copy the entire memory space of the process into a new process, but many (most?) systems do that by just making a copy of the virtual memory tables and marking them copy-on-write. That means that (at first, at least) it doesn't use much more physical memory, just enough for the new tables and other per-process data structures.
That said, I'm not sure how well Ruby, RoR, etc. interact with copy-on-write forking. In particular, garbage collection could be problematic if it touches many memory pages (causing them to be copied).
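To make the email case concrete, here is a minimal Ruby sketch of the fork-and-detach pattern discussed above; UserMailer.welcome is a hypothetical Action Mailer-style call, and the method name is illustrative:

def deliver_welcome_email_in_background(user)
  pid = fork do
    # Child: detach from the parent's session so it is not tied to
    # the app server's terminal or lifecycle.
    Process.setsid
    UserMailer.welcome(user).deliver  # hypothetical mailer call
    exit!  # skip at_exit handlers inherited from the parent
  end
  # Parent: detach so the child is reaped and never becomes a
  # zombie; the request continues without waiting for delivery.
  Process.detach(pid)
end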

Erlang gen_server vs stateless module

I've recently finished Joe's book and quite enjoyed it.
I have since started coding a soft real-time application in Erlang, and I have to say I am a bit confused about the use of gen_server.
When should I use gen_server instead of a simple stateless module?
I define a stateless module as follows:
- A module that takes its state as a parameter (much like ETS/DETS), as opposed to keeping it internally (like gen_server)
Say, for an invoice-manager-type module, should it initialize and return state which I'd then pass back on subsequent calls?
SomeState = invoice_manager:init(),
NewState = invoice_manager:add_invoice(SomeState, AnInvoiceFoo).
Suppose I need multiple instances of the invoice manager state (say my application manages multiple companies, each with their own invoices): should each have a gen_server with internal state to manage its invoices, or would the stateless module above be a better fit?
Where is the line between the two?
(Note: the invoice manager example above is just that, an example to illustrate my question.)
I don't really think you can make that distinction between what you call a stateless module and gen_server. In both cases there is a recursive receive loop which carries the state in at least one argument. This main loop handles requests, does work depending on the request and, when necessary, sends results back to the requesters. The main loop will most likely handle a number of administrative requests as well, which may not be part of the main API/protocol.
The difference is that gen_server abstracts away the main receive loop and lets you write only the actual user code. It also handles many administrative OTP functions for you. The other main difference is that the user code lives in a separate module, which means you see the passed-through state more explicitly. Unless you actually manage to write your code in one big receive loop, without calling other functions to do the work, there is no real difference.
Which method is better depends very much on what you need. Using gen_server will simplify your code and give you added functionality "for free", but it can be more restrictive. Rolling your own gives you more power, but also more opportunities to screw things up, and it is probably a little faster as well. What do you need?
It strongly depends on your needs and application design. When you need state shared between processes, you have to use a process to hold that state; then gen_server, gen_fsm, or another gen_* behaviour is your friend. You can avoid this design when your application is not concurrent, or when it doesn't bring you other benefits. Sometimes breaking your application into processes leads to a simpler design; in other cases a single-process design using "stateless" modules is better for performance and such. A "stateless" module is the best choice for simple, stateless (purely functional) tasks. gen_server is often the best choice for things that seem naturally "process-like", and you must use it when you want to share something between processes (though using many processes can be constrained by scalability or concurrency).
Having used both models, I must say that using the provided gen_server helps me stay structured more easily. I guess this is why it is included in the OTP stack of tools: gen_server is a good way to get the repetitive boilerplate out of the way.
If you have state shared across multiple processes you should probably go with gen_server, and if the state is local to one process a stateless module will do fine.
I suppose your invoices (or whatever they stand for) should be persistent, so they would end up in an ETS/Mnesia table anyway. If that is so, you should create a stateless module in which you put your API for accessing the invoice table.
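To tie this back to the question, here is a minimal sketch of the invoice manager as a gen_server, one instance per company; the module and function names are illustrative, not from the question:

-module(invoice_manager).
-behaviour(gen_server).

-export([start_link/0, add_invoice/2, invoices/1]).
-export([init/1, handle_call/3, handle_cast/2]).

%% One server per company gives each company its own private state.
start_link() ->
    gen_server:start_link(?MODULE, [], []).

add_invoice(Pid, Invoice) ->
    gen_server:call(Pid, {add_invoice, Invoice}).

invoices(Pid) ->
    gen_server:call(Pid, invoices).

%% gen_server callbacks; the state is simply the list of invoices.
init([]) ->
    {ok, []}.

handle_call({add_invoice, Invoice}, _From, Invoices) ->
    {reply, ok, [Invoice | Invoices]};
handle_call(invoices, _From, Invoices) ->
    {reply, Invoices, Invoices}.

handle_cast(_Msg, State) ->
    {noreply, State}.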
