Let's assume that we have some expensive object that we can create and pool ahead of time. Let's further assume these objects are consumed at an unknown rate and are discarded once they are used. We would like a mechanism to tell the producer: start building more of these once you have fewer than N objects. What is the correct operator or process to achieve such a pattern?
Have you tried using the Reactor pool project?
It is a reactive-first pooling solution from Project Reactor by Simon Baslé, and it seems to cover your requirements pretty well.
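If you end up rolling your own instead, the low-water-mark refill pattern from the question can be sketched with plain java.util.concurrent primitives. This is a minimal sketch; the class and hook names are hypothetical, not the Reactor pool API:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Hypothetical sketch: a pool that signals the producer to refill
// whenever the number of pooled objects drops below a low-water mark.
public class LowWaterMarkPool<T> {
    private final BlockingQueue<T> pool;
    private final int lowWaterMark;
    private final Runnable refillSignal; // producer hook, e.g. wakes a builder thread

    public LowWaterMarkPool(int capacity, int lowWaterMark, Runnable refillSignal) {
        this.pool = new ArrayBlockingQueue<>(capacity);
        this.lowWaterMark = lowWaterMark;
        this.refillSignal = refillSignal;
    }

    // Producer adds pre-built objects.
    public void offer(T obj) {
        pool.offer(obj);
    }

    // Consumers take objects; when the pool dips below N, notify the producer.
    public T take() throws InterruptedException {
        T obj = pool.take();
        if (pool.size() < lowWaterMark) {
            refillSignal.run();
        }
        return obj;
    }

    public int size() {
        return pool.size();
    }
}
```

The refill signal could be a semaphore release or a condition-variable notify that wakes a dedicated builder thread; the pool itself never blocks the consumer on construction cost.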
I hope you are doing well! I am relatively new to Electron and, after reading numerous articles, I am still confused about where I should put heavy computing functions in Electron. I plan on using Node libraries in these functions, and I have read numerous articles stating that such functions should be put in the main process. However, isn't there a chance this would overload my main process and thus block my renderers? This is definitely not desired, and I was wondering why I could not just put these functions in preload.js. Wouldn't that be better for performance? Also, if I am only going to require Node modules and only connect to my API, would there still be security concerns if I put these functions in preload.js? Sorry for the basic questions, and please let me know!
Thanks
You can use web workers created in your renderer thread. They won't block.
However, you mentioned planning to use Node modules, so depending on what they are, it could make more sense to run them from the main process. (But see also https://www.electronjs.org/docs/latest/tutorial/multithreading which points out that you can set nodeIntegrationInWorker independently of nodeIntegration.)
You can use https://nodejs.org/api/worker_threads.html in Node too, or for process-level separation there is also https://nodejs.org/api/child_process.html.
Note that worker threads in the browser (and therefore in the renderer process) cannot share memory by default; instead, data has to be serialized to pass it back and forth. If your heavy compute process works on large data structures, bear this in mind. I notice that Node's worker threads, by contrast, do document support for sharing memory between threads.
For testing purposes, I want to send only one thing at a time, but the thing(s) that FluxSink is sending to the other side do not match the thing I literally just called the FluxSink.next method with. The thing(s) it is sending over were "nexted" a while ago. Is there any way to prevent FluxSink from doing any kind of queuing/batching, or to set the queue/batch size to 1, just like I'm setting my batch size to one for my test?
I don't fully understand what you're trying to achieve, so here is my guess based on the title:
You can use FluxSink.OverflowStrategy, but there is no strategy that blocks FluxSink.next, because reactive programming is not about blocking. If the producer in reactive programming is faster than the consumer, the framework will take care of it and will BUFFER, DROP, etc., so you have to either speed up your consumer or choose an appropriate FluxSink.OverflowStrategy.
Think about how you would implement this without reactive programming: if your producer is faster than your consumer, you would probably queue the data from the producer, or throw an error because the data is too old.
Anyway, probably the best choice in your case is Flux.create or Flux.generate (see Difference Between Flux.create and Flux.generate).
Remember: Flux.generate is designed to calculate and emit values on demand, so you could put a BlockingQueue.poll() inside it. That's not what I recommend, but it is probably what you are looking for.
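For the "no queuing, one element at a time" behaviour the question asks about, the plain-Java equivalent of a zero-capacity handoff is java.util.concurrent.SynchronousQueue, where the producer blocks until the consumer is ready. This is a sketch of the idea, not how FluxSink works internally:

```java
import java.util.concurrent.SynchronousQueue;

// A SynchronousQueue has no capacity at all: each put() blocks until
// another thread take()s, so nothing can be "nexted" and then sit in a buffer.
public class DirectHandoff {
    private final SynchronousQueue<String> queue = new SynchronousQueue<>();

    // Producer side: blocks until the consumer is ready to receive.
    public void produce(String item) throws InterruptedException {
        queue.put(item);
    }

    // Consumer side: blocks until the producer hands something over.
    public String consume() throws InterruptedException {
        return queue.take();
    }
}
```

The trade-off is exactly the one described above: the producer is slowed down to the consumer's pace, which is precisely what a reactive sink refuses to do to the `next` caller.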
Is it better to use factory classes or closures in Zend Framework 2, and why?
I know that closures cannot be serialized, but if you return them from Module#getServiceConfig(), this will not affect the caching of the rest of your configuration data, and the closures would be cached in your opcode cache anyway.
How does performance differ in constructing a factory class vs executing a closure? Does PHP wrap and instantiate closures only when you execute them, or would it do this for every closure defined in your configuration file on every request?
Has anyone compared the execution times of each method?
See also:
Dependency management in Zend Framework 2 MVC applications
Passing forms vs raw input to service layer
PHP will convert the anonymous functions in your configuration to instances of the Closure class at compile time, so it would do this on every request. This is unlike create_function, which creates the function at run time. However, since closures are handled at compile time, they should be in your opcode cache and therefore should not matter.
In terms of the performance impact of constructing a service using a factory vs closure firstly you have to remember that the service will only ever be constructed once per request regardless of how many times you ask for the service. I ran a quick benchmark of fetching a service using a closure and a factory and here is what I got (I ran a few times and all of the results were about the same value):
Closure: 0.027 ns
Factory: 0.302 ns
Those are nanoseconds, i.e. 10⁻⁹ seconds. Basically the performance difference is so small that there is no effective difference.
Also, ZF2 can't cache my module's whole configuration when it contains closures. If I use only factories, my whole configuration can be merged and cached, and a single file can be read on each request, rather than loading and merging configuration files every time. I've not measured the performance impact of this, but I'd guess it is marginal in any case.
However, I prefer factories for mainly readability and maintainability. With factories you don't end up with some massive configuration file with tons of closures all over the place.
Sure, closures are great for rapid development but if you want your code to be readable and maintainable then I'd say stick with factories.
I aim to create a browser game where players can set up buildings.
Each building will have several modules (engines, offices, production lines, ...). Each module will eventually have one or more actions running, like the creation of 200 'item X' from ingredients Y and Z.
The game server will be built with Erlang: an OTP application as the server itself, and Nitrogen as the web front end.
I need persistence of data. I was thinking about the following:
When somebody or something interacts with a building, or a timer representing some production line expires, a supervisor spawns a gen_server (if not already spawned), which loads the state of the building from a database so the gen_server can answer messages like 'add this module', 'start this action', 'store this production to the warehouse', 'die', etc.
When a building doesn't receive any messages for X seconds or minutes, it will terminate (thanks to the gen_server timeout feature) and write its current state back to the database.
So, as it will be a (soft) real-time game, the gen_server must start up very quickly. I was thinking of Membase as the database, because it's known to have very good response times.
My question is: when a gen_server is up and running, its state fills some memory, and this state is present in the memory handled by Membase too, so the state uses twice its size in memory. Is that a bad design?
Is Membase a good solution to handle persistence in my case? Would Mnesia be a better choice, or something else?
I fear Mnesia's 2 GB (or 4 GB?) table size limit, because I don't know yet the average state size of my gen_servers (buildings in this example, but also players, production lines, whatever), and I may someday have more than 1 player :)
Thank you
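The load-on-demand / passivate-on-idle lifecycle described in the question is language-agnostic. As a rough sketch in Java (all names hypothetical, with a plain map standing in for Membase/Mnesia):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of passivation: an in-memory building entity that,
// once idle longer than a timeout, writes its state back to a "database"
// (here just a map) and is removed from memory.
public class BuildingCache {
    private final Map<String, String> database = new ConcurrentHashMap<>(); // stand-in for Membase/Mnesia
    private final Map<String, String> live = new ConcurrentHashMap<>();     // "running gen_servers"
    private final Map<String, Long> lastAccess = new ConcurrentHashMap<>();
    private final long idleTimeoutMillis;

    public BuildingCache(long idleTimeoutMillis) {
        this.idleTimeoutMillis = idleTimeoutMillis;
    }

    // First message loads persisted state into memory, like the supervisor
    // spawning a gen_server on demand; subsequent messages just update it.
    public String handleMessage(String buildingId, String newState) {
        live.putIfAbsent(buildingId, database.getOrDefault(buildingId, ""));
        lastAccess.put(buildingId, System.currentTimeMillis());
        live.put(buildingId, newState);
        return newState;
    }

    // Called periodically: passivate entities idle longer than the timeout,
    // the moral equivalent of the gen_server timeout writing state back.
    public void evictIdle() {
        long now = System.currentTimeMillis();
        for (String id : live.keySet()) {
            if (now - lastAccess.getOrDefault(id, now) > idleTimeoutMillis) {
                database.put(id, live.remove(id)); // write state back, free memory
                lastAccess.remove(id);
            }
        }
    }

    public boolean isLive(String id) { return live.containsKey(id); }
    public String persisted(String id) { return database.get(id); }
}
```

Note the double-memory concern from the question shows up here too: while an entity is live, its state exists both in `live` and (stale) in `database`; the passivation step is what reconciles them.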
I agree with Hynek -Pichi- Vychodil. Riak is a great thing for key-value storage.
We use Riak almost entirely for the same thing you described. Everything works so far without any issues. In case you hit Riak's performance limits: add more nodes and you're good to go!
Another cool thing about Riak is its very low performance degradation over time. You can find more information about benchmarking Riak here: http://joyeur.com/2010/10/31/riak-smartmachine-benchmark-the-technical-details/
In case you go with it:
a driver: https://github.com/basho/riak-erlang-client
a connection pool you may need to work with it: https://github.com/dweldon/riakpool
About Membase and memory usage: I also tried Membase, but I found that it was not suitable for my tasks (Membase declares fault tolerance, but I could not set it up in a way that actually handled faults; even with help from the Membase guys I didn't succeed). So at the moment I use the following architecture: all players that are online and playing the game are represented as player processes (gen_server). All data and business logic for each player lives in its player process. From time to time each player process decides to save its state to Riak.
So far it seems to be a very fast and efficient approach.
Update: Now we are with PostgreSQL. It is awesome!
You can look at Bitcask or other Riak backends to store your data. Avoiding IPC is definitely a good idea, so keep it inside Erlang.
I've recently finished Joe's book and quite enjoyed it.
I have since started coding a soft real-time application in Erlang, and I have to say I am a bit confused about the use of gen_server.
When should I use gen_server instead of a simple stateless module?
I define a stateless module as follow:
- A module that takes its state as a parameter (much like ETS/DETS) as opposed to keeping it internally (like gen_server)
Say for an invoice manager type module, should it initialize and return state which I'd then pass subsequently to it?
SomeState = invoice_manager:init(),
NewState = invoice_manager:add_invoice(SomeState, AnInvoiceFoo).
Suppose I needed multiple instances of the invoice manager state (say my application manages multiple companies, each with their own invoices); should each have a gen_server with internal state to manage its invoices, or would the stateless module above be a better fit?
Where is the line between the two?
(Note the invoice manager example above is just that, an example to illustrate my question.)
I don't really think you can make that distinction between what you call a stateless module and gen_server. In both cases there is a recursive receive loop which carries state in at least one argument. This main loop handles requests, does work depending on the requests and, when necessary, sends results back the requesters. The main loop will most likely handle a number of administrative requests as well which may not be part of the main API/protocol.
The difference is that gen_server abstracts away the main receive loop and allows the user to write only the actual user code. It will also handle many administrative OTP functions for you. The other main difference is that the user code lives in another module, which means you see the passed-through state more easily. Unless you actually manage to write your code in one big receive loop and never call other functions to do the work, there is no real difference.
Which method is better depends very much on what you need. Using gen_server will simplify your code and give you added functionality "for free", but it can be more restrictive. Rolling your own will give you more power, but also more possibilities to screw things up. It is probably a little faster as well. What do you need?
It strongly depends on your needs and application design. When you need state shared between processes, you have to use a process to keep that state; then gen_server, gen_fsm, or another gen_* is your friend. You can avoid this design when your application is not concurrent, or when it doesn't bring you other benefits. Sometimes breaking your application into processes leads to a simpler design; in other cases you can choose a single-process design and use "stateless" modules for performance or the like. A "stateless" module is the best choice for very simple, stateless (purely functional) tasks. gen_server is often the best choice for things that naturally seem to be "processes". You must use it when you want to share something between processes (though using processes can be constrained by scalability or concurrency).
Having used both models, I must say that using the provided gen_server helps me stay structured more easily. I guess this is why it is included in the OTP stack of tools: gen_server is a good way to get the repetitive boilerplate out of the way.
If you have shared state over multiple processes you should probably go with gen_server and if the state is just local to one process a stateless module will do fine.
I suppose your invoices (or whatever they stand for) should be persistent, so they would end up in an ETS/Mnesia table anyway. If this is so, you should create a stateless module where you put your API for accessing the invoice table.