In the current v1.0 release of Akka.NET it is not possible to combine an FSM approach with persistence from the F# API. I was wondering if it is possible to use a functional approach like this
http://bartoszsypytkowski.com/blog/2014/07/05/fsharp-akka-net/
and at the same time hook into the PreStart method of actors. That would be one way of implementing some kind of "custom" persistence, if one can read from persistent storage during the execution of the PreStart method.
Any other suggestions on how to implement a custom persistence solution are more than welcome.
At the moment (Akka.Persistence.FSharp v0.8) there is no way to create an equivalent of the PreStart execution block. You can, however, read from persistent storage by defining a custom apply method - it automatically receives the most recent snapshot (within a SnapshotOffer message) as well as the events from the persistent store.
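For illustration, a rough sketch of such an apply function, assuming the v0.8 API shape where recovery is driven by a user-supplied apply function (the string-list state and event types here are hypothetical):

    open Akka.Persistence

    // During recovery this function is called first with a SnapshotOffer (if a
    // snapshot exists) and then with every stored event, so it effectively plays
    // the role of a PreStart-time read from persistent storage.
    let apply mailbox (state: string list) (event: obj) =
        match event with
        | :? SnapshotOffer as offer -> offer.Snapshot :?> string list  // restore from the latest snapshot
        | :? string as e -> e :: state                                 // replay a single stored event
        | _ -> state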
I am trying to replace the in-house-written logger solution of one of my customers. Pretty much everything is straightforward, but I need to implement one sink that sends the logs to a custom log window that I cannot change (for now). It communicates using named pipes. This pipe may be broken or busy, so the current solution actually blocks on every log call - which I want to improve.
The question is what the best practice is when using Serilog: what's the best way to tell Serilog the sink is currently broken so it is not slowing down the system? Is throwing an exception enough?
Serilog itself doesn't know (or care) whether a sink is broken or not, so I'm not sure I understand your goal.
Writing to a Serilog logger is supposed to be a safe operation by design, so any exceptions that happen in your sink will automatically be caught by Serilog to make sure the app doesn't crash. Serilog makes sure these exceptions are written to the SelfLog, which developers can use to troubleshoot sink issues. See an example here.
Therefore, if your goal is to give developers a way to see when the sink experienced problems, the recommendation is to write error messages to the SelfLog and throw your own exceptions from within your sink.
If you can detect from within your sink that the named pipe is not available without blocking, then just write to the SelfLog and return/short-circuit without attempting the pipe write. It's really up to you to implement any kind of resilience policy from within your sink.
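For instance, a minimal sketch of such a sink (LogWindowSink, IsPipeAvailable and WriteToPipe are hypothetical names standing in for your own pipe-handling code):

    using System;
    using Serilog.Core;
    using Serilog.Debugging;
    using Serilog.Events;

    public class LogWindowSink : ILogEventSink
    {
        public void Emit(LogEvent logEvent)
        {
            if (!IsPipeAvailable())
            {
                // Record the problem for developers and bail out without blocking.
                SelfLog.WriteLine("Log window pipe unavailable; dropping event: {0}",
                    logEvent.MessageTemplate);
                return;
            }

            WriteToPipe(logEvent.RenderMessage());
        }

        private bool IsPipeAvailable()
        {
            // Hypothetical non-blocking probe of the named pipe.
            return false;
        }

        private void WriteToPipe(string message)
        {
            // The actual pipe write goes here.
        }
    }

Remember that SelfLog output is only visible once it has been enabled somewhere at startup, e.g. SelfLog.Enable(Console.Error).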
If your goal is to improve the blocking calls, you might want to consider making your sink asynchronous, with the messages sent on a separate thread, without blocking the main thread of the app.
Given you're implementing your own custom sink, an easy way to do that is to turn your sink into a Periodic Batching sink and leverage the infrastructure it provides. Alternatively, you can use Serilog.Sinks.Async wrapper sink.
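For example, with the Serilog.Sinks.Async package the pipe writes happen on a background worker thread (LogWindowSink being the hypothetical sink sketched above):

    using Serilog;

    // Wrap the custom sink so Emit runs off the application threads.
    Log.Logger = new LoggerConfiguration()
        .WriteTo.Async(a => a.Sink(new LogWindowSink()))
        .CreateLogger();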
The app I'm developing requires dynamically adding/removing/rearranging components in the sound chain.
So far, I have mostly been using the .disconnectOutput() method on most components, then reconnecting everything. This works most of the time, but occasionally a node seems to end up connected at multiple points in the sound chain, and I also get crashes if the node is connected to AudioKit.output.
AudioKit provides a number of public methods such as .detach(), .disconnectInput(), and .disconnect(), and I'm not really clear on what the cleanest or safest way to modify the sound chain is. What is the best way to do this?
Also, is there some way to keep track of which nodes are connected to which?
Use the detach() method on an AKNode to remove it from the chain.
The disconnect() and disconnect(nodes: ) methods of AKNode are deprecated. Use AKNode.detach() and AudioKit.detach(nodes: ) instead.
I agree this terminology is very unclear and not explained in the existing documentation. I am still struggling with the lifecycle and runtime chain dynamics as I learn the API, so I can't convey best practices. In general, you don't want to break your object graph. I'm using AKMixer objects and dynamically attaching child nodes using the .connect(input:bus:) and .disconnectInput(bus:) methods together with internal tracking of the associated busses (sketched below), but I am still running into crashes with this approach :(
Apple's parent AVAudioEngine documentation page provides a couple rules of thumb for dynamic chaining practices: https://developer.apple.com/documentation/avfoundation/avaudioengine
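For illustration, a minimal sketch of that mixer-based approach, assuming an AudioKit 4-era API (the oscillator node and bus bookkeeping here are hypothetical):

    import AudioKit

    // Keep a long-lived mixer as the engine output and attach/detach children
    // on known busses, tracking the bus assignments yourself.
    let mixer = AKMixer()
    var busFor: [String: Int] = [:]   // our own node-to-bus bookkeeping

    AudioKit.output = mixer
    do {
        try AudioKit.start()
    } catch {
        print("AudioKit failed to start: \(error)")
    }

    let oscillator = AKOscillator()

    // Attach: connect the node to a free bus and remember the assignment.
    mixer.connect(input: oscillator, bus: 0)
    busFor["osc"] = 0
    oscillator.start()

    // Detach: disconnect the input bus first, then detach the node itself.
    if let bus = busFor.removeValue(forKey: "osc") {
        mixer.disconnectInput(bus: bus)
        oscillator.detach()
    }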
I started using Project Reactor. Does anyone know how I can pass thread-local variables from one thread to another? I saw some methods on Hooks.java but could not figure out the recommended way of doing this. Can someone point me to some documentation or share a code snippet on how to do it? Thanks.
I have a working example in this github repository based on the spring-cloud-sleuth's implementation: https://github.com/gumartinm/JavaForFun/tree/master/SpringJava/WebReactive/spring-webreactive-reactor-context-enrich
The key classes are: ContextCoreSubscriber.java, SubscriberContext.java, ThreadContextEnrichmentAutoConfiguration.java and UsernameFilter.java
ContextCoreSubscriber.java: enables you to fill the Mapped Diagnostic Context (MDC).
SubscriberContext.java: a helper class for inserting data into the Reactor Context.
ThreadContextEnrichmentAutoConfiguration.java: in charge of configuring the Reactor hook Hooks.onEachOperator.
UsernameFilter.java: an example where we want to register the username information based on some HTTP header.
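A condensed sketch of the same idea, assuming the Reactor 3.1+ Hooks.onEachOperator and Operators.lift APIs (the "username" key and class names are illustrative, not the repository's exact code):

    import org.reactivestreams.Subscription;
    import org.slf4j.MDC;
    import reactor.core.CoreSubscriber;
    import reactor.core.publisher.Hooks;
    import reactor.core.publisher.Operators;
    import reactor.util.context.Context;

    public class MdcContextLifter {

        // Lift every operator so each subscriber copies a value from the
        // Reactor Context into the thread's MDC before delivering signals.
        public static void register() {
            Hooks.onEachOperator("mdc", Operators.lift((scannable, subscriber) ->
                    new MdcSubscriber<>(subscriber)));
        }

        static final class MdcSubscriber<T> implements CoreSubscriber<T> {
            private final CoreSubscriber<? super T> actual;

            MdcSubscriber(CoreSubscriber<? super T> actual) {
                this.actual = actual;
            }

            @Override public Context currentContext() { return actual.currentContext(); }

            @Override public void onSubscribe(Subscription s) { actual.onSubscribe(s); }

            @Override public void onNext(T t) {
                // Copy the username from the Reactor Context into the MDC.
                actual.currentContext().<String>getOrEmpty("username")
                      .ifPresent(user -> MDC.put("username", user));
                actual.onNext(t);
            }

            @Override public void onError(Throwable t) { actual.onError(t); }
            @Override public void onComplete() { actual.onComplete(); }
        }
    }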
Reactor doesn't guarantee that the processing done by a Flux or Mono chain of operators will stay on a single thread. On the contrary, it performs work-stealing and lets the user switch execution contexts.
As such, using ThreadLocal is not well suited to Reactor.
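A small illustration of the problem: a ThreadLocal set on the calling thread is no longer visible once publishOn hops to a scheduler thread.

    import reactor.core.publisher.Mono;
    import reactor.core.scheduler.Schedulers;

    public class ThreadLocalLoss {
        private static final ThreadLocal<String> USER = new ThreadLocal<>();

        public static void main(String[] args) {
            USER.set("alice");
            String result = Mono.just("data")
                    .publishOn(Schedulers.parallel())    // downstream now runs on a parallel thread
                    .map(v -> v + " for " + USER.get())  // USER.get() returns null here
                    .block();
            System.out.println(result);                  // prints "data for null"
        }
    }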
There is currently some work being done in 3.1.0 towards providing an equivalent, at least for library authors that use Reactor, but nothing definite is in place yet.
Keep your eyes peeled for 3.1.0, that should be the main theme of that release (and will probably be the focus of the second upcoming milestone, M2).
I have a server application made in Erlang. In it I have an mnesia table
that stores some information on photos. In the spirit of "everything is a
process" I decided to wrap that table in a gen_server module, so that the
gen_server module is the only one that directly accesses the table. Querying
and adding information to that table is done by sending messages to that process
(which has a registered name). The idea is that there will be several client
processes querying information from that table.
This works just fine, but that gen_server module has no state. Everything it
requires is stored in the mnesia table. So, I wonder if a gen_server is perhaps
not the best model for encapsulating that table?
Should I simply not make it a process, and instead only encapsulate the table
through the functions in that module? In case of a bug in that module, that
would cause the calling process to crash, which I think might be better, because
it would only affect a single client, as opposed to now, when it would cause the
gen_server process to crash, leaving everyone without access to the table (until
the supervisor restarts it).
Any input is greatly appreciated.
I guess that, according to Occam's razor, there is no need for this gen_server to exist, especially since there is absolutely no state stored in it. Such a process could be needed in situations where you need access to the table (or any other resource) to be strictly sequential (for example, you might want to avoid aborted transactions at the cost of a bottleneck).
Encapsulating access to the table in a module is a good solution. It creates no additional complexity, while providing a proper level of abstraction and encapsulation.
I'm not sure I understand why you've decided to encapsulate a table with a process. Mnesia is designed to mediate multiple concurrent accesses to tables, both locally and distributed across a cluster.
Creating an API module that performs all the particular table access operations and updates is a good idea as the API functions will convey your intent better in the code that calls them. It will be more readable than putting the mnesia operations directly into the calling code.
An API module also gives you the option to switch from mnesia to some other storage system later if you need to. Using mnesia transactions inside your API module protects you from some programming errors, as mnesia will roll back operations that crash. The API module is always available to callers and allows any number of callers to perform operations concurrently, whereas a gen_server based API has a single point of failure, the process, which can render the API unavailable.
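As a minimal sketch, such an API module could look like this (the photo table and record are hypothetical stand-ins for your own schema):

    %% Plain functional API around the mnesia table; any number of caller
    %% processes can run these operations concurrently, and mnesia rolls
    %% back a transaction whose fun crashes.
    -module(photo_store).
    -export([add/2, lookup/1]).

    -record(photo, {id, info}).

    add(Id, Info) ->
        mnesia:activity(transaction, fun() ->
            mnesia:write(#photo{id = Id, info = Info})
        end).

    lookup(Id) ->
        mnesia:activity(transaction, fun() ->
            mnesia:read(photo, Id)
        end).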
The only thing a gen_server based API gives you over a pure-functional API is serialized access to the table - which is an unusual requirement; unless you specifically need it, it will be a performance killer.
It may be a good idea to handle an mnesia table through a single gen_server process when you want to use dirty access and avoid transactions. This approach might be faster than transactions, but as usual you need to benchmark it.
I've recently finished Joe's book and quite enjoyed it.
I've since started coding a soft real-time application in Erlang, and I have to say I am a bit confused by the use of gen_server.
When should I use gen_server instead of a simple stateless module?
I define a stateless module as follows:
- A module that takes its state as a parameter (much like ETS/DETS), as opposed to keeping it internally (like gen_server)
Say, for an invoice manager type module, should it initialize and return state which I'd then pass back on subsequent calls?
State0 = invoice_manager:init(),
State1 = invoice_manager:add_invoice(State0, AnInvoiceFoo).
Suppose I'd need multiple instances of the invoice manager state (say my application manages multiple companies each with their own invoices), should they each have a gen_server with internal state to manage their invoices or would it better fit to simply have the stateless module above?
Where is the line between the two?
(Note the invoice manage example above is just that, an example to illustrate my question)
I don't really think you can make that distinction between what you call a stateless module and gen_server. In both cases there is a recursive receive loop which carries state in at least one argument. This main loop handles requests, does work depending on the requests and, when necessary, sends results back to the requesters. The main loop will most likely handle a number of administrative requests as well, which may not be part of the main API/protocol.
The difference is that gen_server abstracts away the main receive loop and allows the user to write only the actual user code. It will also handle many administrative OTP functions for you. Because the user code lives in another module, you also see the passed-through state more easily. Unless you actually manage to write your code in one big receive loop and never call other functions to do the work, there is no real difference.
Which method is better depends very much on what you need. Using gen_server will simplify your code and give you added functionality "for free", but it can be more restrictive. Rolling your own will give you more power, but also more possibilities to screw things up; it is probably a little faster as well. What do you need?
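For comparison, a minimal sketch of the hand-rolled receive loop that gen_server abstracts away (the module and message shapes are illustrative only):

    -module(invoice_srv).
    -export([start/0, add_invoice/2, loop/1]).

    start() ->
        spawn(?MODULE, loop, [[]]).

    %% Synchronous request: send a message, then wait for the reply.
    add_invoice(Pid, Invoice) ->
        Pid ! {add_invoice, self(), Invoice},
        receive
            {reply, Pid, ok} -> ok
        after 5000 ->
            {error, timeout}
        end.

    %% The recursive receive loop: state travels as the argument.
    loop(Invoices) ->
        receive
            {add_invoice, From, Invoice} ->
                From ! {reply, self(), ok},
                loop([Invoice | Invoices])
        end.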
It strongly depends on your needs and application design. When you need state shared between processes, you have to use a process to keep that state; then gen_server, gen_fsm or another gen_* behaviour is your friend. You can avoid this design when your application is not concurrent, or when the design doesn't bring you other benefits (for example, breaking your application into processes can lead to a simpler design). In other cases you can sometimes choose a single-process design and use "stateless" modules for performance or the like. A "stateless" module is the best choice for very simple, stateless (purely functional) tasks. gen_server is often the best choice for things that naturally seem to be a "process". You must use a process when you want to share something between processes (though using many processes can be constrained by scalability or concurrency).
Having used both models, I must say that using the provided gen_server helps me stay structured more easily. I guess this is why it is included in the OTP stack of tools: gen_server is a good way to get the repetitive boilerplate out of the way.
If you have shared state over multiple processes you should probably go with gen_server and if the state is just local to one process a stateless module will do fine.
I suppose your invoices (or whatever they stand for) should be persistent, so they would end up in an ETS/Mnesia table anyway. If this is so, you should create a stateless module where you put your API for accessing the invoice table.