SingleLiveEvent post, if called multiple times, then only the last event is dispatched (but I need all events in my view) - android-livedata

I'm using SingleLiveEvent to communicate between my ViewModel and my Activity. Something like this (pseudocode):
class MyActivity : BaseActivity() {
    fun onCreate() {
        // Init ViewModel and so on
        viewModel.commands.observe(this, { command ->
            logger.debug("Command received", "------>>>>>>> " + command.javaClass.simpleName)
            processCommand(command)
        })
    }
}
And my ViewModel is something like:
class MyViewModel(application: Application) : BaseAndroidViewModel(application) {
    val commands: SingleLiveEvent<CustomCommands> = SingleLiveEvent()

    init {
        loadOneThing()
        commands.postValue(CustomCommands.MessageCommand("one thing loaded"))
        loadAnotherThing()
        commands.postValue(CustomCommands.MessageCommand("another thing loaded"))
    }
}
The problem I'm having is that the Activity receives only the last command, and that is by design. SingleLiveEvent is a child class of LiveData, and the documentation says the following for the method postValue:
* If you called this method multiple times before a main thread executed a posted task, only
* the last value would be dispatched.
Interestingly, if I set a breakpoint on the line that posts the commands, the main thread has enough time to process the first command, and the second command is delivered too. But when the app runs without breakpoints, if the work the ViewModel does between commands finishes very fast (no REST requests or the like, just some calculations), the main thread does not have enough time to dispatch the first command, and it is discarded in favor of the second.
But I really need the View to receive all events/commands that the ViewModel sends.
I suppose SingleLiveEvent is not the right tool for this use case, nor is plain LiveData, because of the problem of already-consumed events being re-delivered when the device is rotated, and so on.
Does somebody know a better approach to do this?
Thanks in advance!

I faced the same problem today. I'm also using SingleLiveEvent for commands/events. I solved the problem by using
commands.value = event instead of commands.postValue(event). Then I wondered why it behaves like that, and I found this article, which explains:
But for postValue, the value will be updated twice and the number of times the observers will receive the notification depends on the execution of the main thread. For example, if the postValue is called 4 times before the execution of the main thread, then the observer will receive the notification only once and that too with the latest updated data because the notification to be sent is scheduled to be executed on the main thread. So, if you are calling the postValue method a number of times before the execution of the main thread, then the value that is passed lastly i.e. the latest value will be dispatched to the main thread and rest of the values will be discarded.
I hope this helps someone who faces the same problem.
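For reference, here is a minimal sketch of that fix, reusing the names from the question (SingleLiveEvent, CustomCommands, loadOneThing() and loadAnotherThing() are assumed from the original post; loadEverything() and the viewModelScope/withContext hop to the main thread are just one hypothetical way to arrange it):
import androidx.lifecycle.ViewModel
import androidx.lifecycle.viewModelScope
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.launch
import kotlinx.coroutines.withContext

class MyViewModel : ViewModel() {
    val commands = SingleLiveEvent<CustomCommands>()

    // Hypothetical entry point; do the heavy work off the main thread.
    fun loadEverything() {
        viewModelScope.launch(Dispatchers.Default) {
            loadOneThing()
            withContext(Dispatchers.Main) {
                // setValue notifies active observers synchronously on the
                // main thread, so this event is delivered before the next
                // assignment can replace it.
                commands.value = CustomCommands.MessageCommand("one thing loaded")
            }
            loadAnotherThing()
            withContext(Dispatchers.Main) {
                commands.value = CustomCommands.MessageCommand("another thing loaded")
            }
        }
    }
}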

Have you tried using an EventObserver?
/**
 * Used as a wrapper for data that is exposed via a LiveData that represents an event.
 */
open class Event<out T>(private val content: T) {

    @Suppress("MemberVisibilityCanBePrivate")
    var hasBeenHandled = false
        private set // Allow external read but not write

    /**
     * Returns the content and prevents its use again.
     */
    fun getContentIfNotHandled(): T? {
        return if (hasBeenHandled) {
            null
        } else {
            hasBeenHandled = true
            content
        }
    }

    /**
     * Returns the content, even if it's already been handled.
     */
    fun peekContent(): T = content
}

/**
 * An [Observer] for [Event]s, simplifying the pattern of checking if the [Event]'s content has
 * already been handled.
 *
 * [onEventUnhandledContent] is *only* called if the [Event]'s contents has not been handled.
 */
class EventObserver<T>(private val onEventUnhandledContent: (T) -> Unit) : Observer<Event<T>> {
    override fun onChanged(event: Event<T>?) {
        event?.getContentIfNotHandled()?.let {
            onEventUnhandledContent(it)
        }
    }
}
Use it with LiveData:
val someEvent: MutableLiveData<Event<Unit>> = MutableLiveData()
When you need to send an event:
fun someEventOccurred() {
    someEvent.value = Event(Unit)
}
In your Fragment, observe the Event:
viewModel.someEvent.observe(this, EventObserver {
    // some code
})


Should we unregister Flow collection in Fragment/Activity?

In the official Android documentation, it says the following about StateFlow vs LiveData:
LiveData.observe() automatically unregisters the consumer when the view goes to the STOPPED state, whereas collecting from a StateFlow or any other flow does not.
and they recommend cancelling each flow collection like this:
// Coroutine listening for UI states
private var uiStateJob: Job? = null

override fun onStart() {
    super.onStart()
    // Start collecting when the View is visible
    uiStateJob = lifecycleScope.launch {
        latestNewsViewModel.uiState.collect { uiState -> ... }
    }
}

override fun onStop() {
    // Stop collecting when the View goes to the background
    uiStateJob?.cancel()
    super.onStop()
}
As far as I know about structured concurrency, when we cancel the parent job, all child jobs are cancelled automatically, and the flow terminal operator collect should not be an exception. In this case we are using lifecycleScope, which should be cancelled when the LifecycleOwner is destroyed. Then why do we need to manually cancel the flow collection in this case? What am I missing here?
When using lifecycleScope you do not need to cancel the jobs or scope yourself, as this scope is bound to the LifecycleOwner's lifecycle and gets cancelled when the lifecycle is destroyed. The flow is then cancelled too.
It changes as soon as you are using your own CoroutineScope:
val coroutineScope = CoroutineScope(Dispatchers.Main)

override fun onCreate() {
    // ...
    coroutineScope.launch {
        latestNewsViewModel.uiState.collect { uiState -> ... }
    }
}

override fun onDestroy() {
    // ...
    coroutineScope.coroutineContext.cancelChildren()
}
In that case you would need to take care of cancelling the scope (or its jobs) yourself.
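As an aside, note that lifecycleScope is only cancelled when the lifecycle is destroyed, which is why the documentation above still stops collecting manually in onStop(). A hedged sketch of one way to avoid that bookkeeping, using repeatOnLifecycle from lifecycle-runtime-ktx (latestNewsViewModel and uiState are the names from the question):
import android.os.Bundle
import androidx.lifecycle.Lifecycle
import androidx.lifecycle.lifecycleScope
import androidx.lifecycle.repeatOnLifecycle
import kotlinx.coroutines.launch

override fun onCreate(savedInstanceState: Bundle?) {
    super.onCreate(savedInstanceState)
    lifecycleScope.launch {
        // The block runs while the lifecycle is at least STARTED; the
        // collection is cancelled on STOP and restarted on the next START.
        repeatOnLifecycle(Lifecycle.State.STARTED) {
            latestNewsViewModel.uiState.collect { uiState -> /* render */ }
        }
    }
}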
According to @ChristianB's response, we don't need to cancel coroutines that are attached to a lifecycle; collect() runs inside a coroutine and is cancelled together with it.
Then why do we need to manually cancel the flow collection in this case?
What am I missing here?
When we want to use coroutines with a custom lifecycle (not the Fragment/Activity lifecycle). For example, I once created a helper class that needed to do some work internally: it launched a coroutine and managed (cancelled/joined) its job itself, as in the sketch below.
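For illustration, a minimal sketch of that situation (all names hypothetical): the helper owns a plain CoroutineScope with no lifecycle attached, so nothing cancels the collect automatically and the owner has to do it.
import kotlinx.coroutines.*
import kotlinx.coroutines.flow.*

class NewsTicker(private val source: Flow<String>) {
    // A scope the helper owns itself; it is not tied to any LifecycleOwner.
    private val scope = CoroutineScope(SupervisorJob() + Dispatchers.Main)
    private var job: Job? = null

    fun start(onNews: (String) -> Unit) {
        job = scope.launch {
            source.collect { onNews(it) }
        }
    }

    // The owner must call this itself (e.g. when the screen goes away);
    // otherwise the collection keeps running for as long as the process lives.
    fun stop() {
        job?.cancel()
    }
}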

Safety of using an empty reference instance across multiple threads

Background
I have a class Data that stores multiple input parameters and a single output value.
The output value is recalculated whenever one of the input parameters is mutated.
The calculation takes a non-trivial amount of time so it is performed asynchronously.
If one of the input parameters changes during recalculation, the current calculation is cancelled, and a new one is begun.
The cancellation logic is implemented via a serialized queue of calculation operations and a key (a reference instance), Data.key. Data.key is set to a new reference instance every time a new recalculation is added to the queue, and only a single recalculation can run at a time because of the queue. Any executing recalculation constantly checks whether it is still the most recently initiated one by comparing the key that was created with it when it was initiated against the currently existing key. If they are different, then a new recalculation has been queued since it began, and it will terminate.
This will trigger the next recalculation in the queue to begin, repeating the process.
The basis for my question
The reassignment of Data.key is done on the main thread.
The current calculation constantly checks to see if its key is the same as the current one. This means another thread is constantly accessing Data.key.
Question(s)
Is it safe for me to leave Data.key vulnerable to being read/written to at the same time?
Is it even possible for a property to be read and written to simultaneously?
Yes, Data.key is vulnerable to being read and written at the same time.
Here is an example where I write key from the main thread and read it from MySerialQueue.
If you run this code, it will sometimes crash.
The crash happens because of a dereference of a pointer to memory that was released during a write by the main queue.
Xcode has a feature called Thread Sanitizer that helps to catch such problems.
Discussion about race conditions:
import Foundation

class MyClass {}

func experiment() {
    var key = MyClass()
    let key2 = MyClass()

    func writer() {
        for _ in 0..<1000000 {
            key = MyClass()
        }
    }

    func reader() {
        for _ in 0..<1000000 {
            if key === key2 {}
        }
    }

    DispatchQueue(label: "MySerialQueue").async {
        print("reader begin")
        reader()
        print("reader end")
    }

    DispatchQueue.main.async {
        print("writer begin")
        writer()
        print("writer end")
    }
}
Q:
Is it safe for me to leave Data.key vulnerable to being read/written to at the same time?
A:
No.
Q:
Is it even possible for a property to be read and written to simultaneously?
A:
Yes. To make access safe, create a separate queue for Data.key and touch the property only through that queue. As long as every operation (get/set) is restricted to this queue, you can read or write from anywhere with thread safety.

Know when all SKActions are complete or there aren't any running

I have a number of SKActions running on various nodes. How can I know when they are all completed? I want to ignore touches while animations are running. If I could somehow run actions in parallel on a number of nodes, I could wait for a final action to run, but I don't see any way to coordinate actions across nodes.
I can fake this by running through all the scene's children and checking for hasActions on each child. Seems a little lame, but it does work.
The simplest way to do this is using a dispatch group. In Swift 3 this looks like:
func moveAllNodes(withCompletionHandler onComplete: @escaping () -> ()) {
    let group = DispatchGroup()

    for node in nodes {
        let moveAction = SKAction.move(to: target, duration: 0.3)
        group.enter()
        node.run(moveAction, completion: {
            ...
            group.leave()
        })
    }

    group.notify(queue: .main) {
        onComplete()
    }
}
Before running each action we call group.enter(), adding that action to the group. Then inside each action completion handler we call group.leave(), taking that action out of the group.
The group.notify() block runs after all other blocks have left the dispatch group.
To my knowledge there is no way to do this via the default framework capabilities.
However, I think you could achieve something like this by creating a class with methods that act as a wrapper for calling SKAction runAction: on a node.
In that wrapper method, you could push the node into an array, and then append a performSelector action to each action/group/sequence. So whatever method you specify gets called after completion of the action/group/sequence. When that method is called, you can just remove that node from the array.
With this implementation you would always have an array of all nodes that currently have an action running on them. If the array is empty, none are running.
Each action you run has a duration. If you keep track of the longest running action's duration you know when it'll be finished. Use that to wait until the longest running action is finished.
Alternatively, keep a global counter of running actions. Each time you run an action that pauses input increase the counter. Each action you run needs a final execute block that then decreases the counter. If the counter is zero, none of the input-ignoring actions are running.
It looks like in the two years since this question was first posted, Apple has not extended the framework to deal with this case. I was hesitant to do a bunch of graph traversals to check for running actions, so I found a solution using an instance variable in my SKScene subclass (GameScene) combined with the atomic integer protection functions found in /usr/include/libkern/OSAtomic.h.
In my GameScene class, I have an int32_t variable called runningActionCount, initialized to zero in initWithSize().
I have two GameScene methods:
-(void) IncrementUILockCount
{
    OSAtomicIncrement32Barrier(&runningActionCount);
}

-(void) DecrementUILockCount
{
    OSAtomicDecrement32Barrier(&runningActionCount);
}
Then I declare a block variable to pass as the completion block of SKNode's runAction:completion: method:
void (^SignalActionEnd)(void);
In my method to launch the actions on the various SKSpriteNodes, set that completion block to point to the safe decrement method:
SignalActionEnd = ^
{
    [self DecrementUILockCount];
};
Then before I launch an action, run the safe increment block. When the action completes, DecrementUILockCount will be called to safely decrement the counter.
[self IncrementUILockCount];
[spriteToPerformActionOn runAction:theAction completion:SignalActionEnd];
In my update method, I simply check to see if that counter is zero before re-enabling the UI.
if (0 == runningActionCount)
{
    // Do the UI enabled stuff
}
The only other thing to note here is that if you happen to delete any of the nodes that have running actions before they complete, the completion block is also deleted (without being run) and your counter will never be decremented and your UI will never re-enable. The answer is to check for running actions on the node you are about to delete, and manually run the protected decrement method if there are any actions running:
if ([spriteToDelete hasActions])
{
    // Run the post-action completion block manually.
    [self DecrementUILockCount];
}
This is working fine for me - hope it helps!
I was dealing with this issue while fiddling around with a sliding-tile type game. I wanted to both prevent keyboard input and wait for as short a time as possible to perform another action, while the tiles were actually moving.
All the tiles that I was concerned about were instances of the same SKNode subclass, so I decided to give that class the responsibility for keeping track of animations in progress, and for responding to queries about whether animations were running.
The idea I had was to use a dispatch group to "count" activity: it has a built-in mechanism to be waited on, and it can be added to at any time, so that the waiting will continue as long as tasks are added to the group.*
This is a sketch of the solution. We have a node class, which creates and owns the dispatch group. A private class method allows instances to access the group so they can enter and leave it when they are animating. The class has two public methods that allow checking the group's status without exposing the actual mechanism: +waitOnAllNodeMovement and +anyNodeMovementInProgress. The former blocks until the group is empty; the latter simply returns a BOOL immediately indicating whether the group is busy or not.
@interface WSSNode : SKSpriteNode

/** The WSSNode class tracks whether any instances are running animations,
 * in order to avoid overlapping other actions.
 * +waitOnAllNodeMovement blocks when called until all nodes have
 * completed their animations.
 */
+ (void)waitOnAllNodeMovement;

/** The WSSNode class tracks whether any instances are running animations,
 * in order to avoid overlapping other actions.
 * +anyNodeMovementInProgress returns a BOOL immediately, indicating
 * whether any animations are currently running.
 */
+ (BOOL)anyNodeMovementInProgress;

/* Sample method: make the node do something that requires waiting on. */
- (void)moveToPosition:(CGPoint)destination;

@end

@interface WSSNode ()

+ (dispatch_group_t)movementDispatchGroup;

@end

@implementation WSSNode

+ (void)waitOnAllNodeMovement
{
    dispatch_group_wait([self movementDispatchGroup],
                        DISPATCH_TIME_FOREVER);
}

+ (BOOL)anyNodeMovementInProgress
{
    // Return immediately regardless of state of group, but indicate
    // whether group is empty or timeout occurred.
    return (0 != dispatch_group_wait([self movementDispatchGroup],
                                     DISPATCH_TIME_NOW));
}

+ (dispatch_group_t)movementDispatchGroup
{
    static dispatch_group_t group;
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{
        group = dispatch_group_create();
    });
    return group;
}

- (void)moveToPosition:(CGPoint)destination
{
    // No need to actually enqueue anything; simply manually
    // tell the group that it's working.
    dispatch_group_enter([WSSNode movementDispatchGroup]);
    [self runAction:/* whatever */
         completion:^{ dispatch_group_leave([WSSNode movementDispatchGroup]); }];
}

@end
A controller class that wants to prevent keyboard input during moves then can do something simple like this:
- (void)keyDown:(NSEvent *)theEvent
{
    // Don't accept input while movement is taking place.
    if( [WSSNode anyNodeMovementInProgress] ){
        return;
    }
    // ...
}
and you can do the same thing in a scene's update: as needed. Any other actions that must happen ASAP can wait on the animation:
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    [WSSNode waitOnAllNodeMovement];
    dispatch_async(dispatch_get_main_queue(), ^{
        // Action that needs to wait for animation to finish
    });
});
This is the one tricky/messy part of this solution: because the wait... method is blocking, it obviously has to happen asynchronously to the main thread; then we come back to the main thread to do more work. But the same would be true with any other waiting procedure as well, so this seems reasonable.
*The other two possibilities that presented themselves were a queue with a barrier Block and a counting semaphore.
The barrier Block wouldn't work because I didn't know when I could actually enqueue it: once the "after" task was enqueued, no further "before" tasks could be added ahead of it.
The semaphore wouldn't work because it doesn't control ordering, just simultaneity. If the nodes incremented the semaphore when they were created, decremented when animating, and incremented again when done, the other task would only wait if all created nodes were animating, and wouldn't wait any longer than the first completion. If the nodes didn't increment the semaphore initially, then only one of them could function at a time.
The dispatch group is being used much like a semaphore, but with privileged access: the nodes themselves don't have to wait.

Launching multiple async futures in response to events

I would like to launch a fairly expensive operation in response to a user clicking on a canvas element.
import 'dart:async';
import 'dart:html';

mouseDown(MouseEvent e) {
  print("entering event handler");
  var future = new Future<int>(expensiveFunction);
  future.then((int value) => redrawCanvas(value));
  print("done event handler");
}

expensiveFunction() {
  for (int i = 0; i < 1000000000; i++) {
    // do something insane here
  }
}

redrawCanvas(int value) {
  // do stuff here
  print("redrawing canvas");
}
My understanding of M4 Dart is that this Future constructor should launch expensiveFunction asynchronously, i.e. on a different thread from the main one. And it does appear this way, as "done event handler" is immediately printed to my output window in the IDE, and then some time later "redrawing canvas" is printed. However, if I click on the element again, nothing happens until my expensiveFunction is done running from the previous click.
How do I use Futures to simply launch a compute-intensive function on a new thread, such that I can have multiple of them queued up in response to multiple clicks, even if the first Future is not complete yet?
Thanks.
As mentioned in a different answer, Futures are just a "placeholder for a value that is made available in the future". They don't necessarily imply concurrency.
Dart has a concept of isolates for concurrency. You can spawn an isolate to run some code in a parallel thread or process.
dart2js can compile isolates into Web Workers. A Web Worker can run in a separate thread.
Try something like this:
import 'dart:isolate';

expensiveOperation(SendPort replyTo) {
  var result = doExpensiveThing();
  replyTo.send(result);
}

main() async {
  var receive = new ReceivePort();
  var isolate = await Isolate.spawn(expensiveOperation, receive.sendPort);
  var result = await receive.first;
  print(result);
}
(I haven't tested the above, but something like it should work.)
Event Loop & Event Queue
You should note that Futures are not threads. They do not run concurrently, and in fact, Dart is single-threaded. All Dart code runs in an event loop.
The event loop is a loop that runs as long as the current Dart isolate is alive. When you call main() to start a Dart application, the isolate is created, and it is no longer alive after the main method is completed and all items on the event queue are completed as well.
The event queue is the set of all functions that still need to finish executing. Because Dart is single threaded, all of these functions need to run one at a time. So when one item in the event queue is completed, another one begins. The exact timing and scheduling of the event queue is something that's way more complicated than I can explain myself.
Therefore, asynchronous processing is important to prevent the single thread from being blocked by some long running execution. In a UI, a long process can cause visual jankiness and hinder your app.
Futures
Futures represent a value that will be available sometime in the Future, hence the name. When a Future is created, it is returned immediately, and execution continues.
The callback associated with that Future (in your case, expensiveFunction) is "started" by being added to the event queue. When control returns to the event loop, the callback runs, and as soon as it can, so does the code registered with then().
Streams
Because your Futures are by definition asynchronous, and you don't know when they return, you want to queue up your callbacks so that they remain in order.
A Stream is an object that emits events that can be subscribed to. When you write canvasElement.onClick.listen(...) you are asking for the onClick Stream of MouseEvents, which you then subscribe to with listen.
You can use Streams to queue up events and register a callback on those events to run the code you'd like.
What to Write
main() {
  // Used to add events to a stream.
  var controller = new StreamController<Future>();

  // Pause when we get an event so that we take one value at a time.
  StreamSubscription subscription;
  subscription = controller.stream.listen((_) => subscription.pause());

  var canvas = new CanvasElement();
  canvas.onClick.listen((MouseEvent e) {
    print("entering event handler");
    var future = new Future<int>(expensiveFunction);
    // Resume the subscription after our callback is called.
    controller.add(future.then(redrawCanvas).then((_) => subscription.resume()));
    print("done event handler");
  });
}

int expensiveFunction() {
  var result = 0;
  for (int i = 0; i < 1000000000; i++) {
    // do something insane here
    result += i;
  }
  return result;
}

redrawCanvas(int value) {
  // do stuff here
  print("redrawing canvas");
}
Here we are queuing up our redrawCanvas callbacks by pausing after each mouse click, and then resuming after redrawCanvas has been called.
More Information
See also this great answer to a similar question.
A great place to start reading about Dart's asynchrony is the first part of this article about the dart:io library and this article about the dart:async library.
For more information about Futures, see this article about Futures.
For Streams information, see this article about adding to Streams and this article about creating Streams.

how to lock an asp.net mvc action?

I've written a controller and action that I use as a service.
This service runs quite a costly action.
I'd like to limit the access to this action if there is already a currently running action.
Is there any built in way to lock an asp.net mvc action?
Thanks
Are you looking for something like this?
public class MyController : Controller
{
    private static object Lock = new object();

    public ActionResult MyAction()
    {
        lock (Lock)
        {
            // do your costly action here
        }

        return View();
    }
}
The above will prevent any other threads from executing the action if a thread is currently processing code within the lock block.
Update: here is how this works
Method code is always executed by a thread. On a heavily-loaded server, it is possible for 2 or more different threads to enter and begin executing a method in parallel. According to the question, this is what you want to prevent.
Note how the private Lock object is static. This means it is shared across all instances of your controller. So, even if there are 2 instances of this controller constructed on the heap, both of them share the same Lock object. (The object doesn't even have to be named Lock, you could name it Jerry or Samantha and it would still serve the same purpose.)
Here is what happens. Under normal circumstances, thread A can begin executing a block of code, and then thread B can begin executing the same block before A has finished. So in theory you can have 2 threads executing the same method (or any block of code) at the same time.
The lock keyword can be used to prevent this. When a thread enters a block of code wrapped in a lock section, it "picks up" the lock object (whatever is in parentheses after the lock keyword, a.k.a. Lock, Jerry, or Samantha, which should be marked as a static field). For the duration of time where the locked section is being executed, it "holds onto" the lock object. When the thread exits the locked section, it "gives up" the lock object. From the time the thread picks up the lock object until it gives up the lock object, all other threads are prevented from entering the locked section of code. In effect, they are "paused" until the currently executing thread gives up the lock object.
So thread A picks up the lock object at the beginning of your MyAction method. Before it gives up the lock object, thread B also tries to execute this method. However, it cannot pick up the lock object because it is already held by thread A. So it waits for thread A to give up the lock object. When it does, thread B then picks up the lock object and begins executing the block of code. When thread B is finished executing the block, it gives up the lock object for the next thread that is delegated to handle this method.
... but I'm not sure if this is what you are looking for...
Using this approach will not necessarily make your code run any faster. It only ensures that a block of code can only be executed by 1 thread at a time. It is usually used for concurrency reasons, not performance reasons. If you can provide more information about your specific problem in the question, there may be a better answer than this one.
Remember that the code I presented above will cause other threads to wait before executing the block. If this is not what you want, and you want the entire action to be "skipped" if it is already being executed by another thread, then use something more like Oshry's answer. You can store this info in cache, session, or any other data storage mechanism.
I prefer to use SemaphoreSlim because it supports async operations.
If you need to control read/write access, you can use ReaderWriterLockSlim.
The following code snippet uses SemaphoreSlim:
public class DemoController : Controller
{
    private static readonly SemaphoreSlim ProtectedActionSemaphore =
        new SemaphoreSlim(1);

    [HttpGet("paction")] //--or post, put, delete...
    public IActionResult ProtectedAction()
    {
        ProtectedActionSemaphore.Wait();
        try
        {
            //--call your protected action here
        }
        finally
        {
            ProtectedActionSemaphore.Release();
        }

        return Ok(); //--or any other response
    }

    [HttpGet("paction2")] //--or post, put, delete...
    public async Task<IActionResult> ProtectedActionAsync()
    {
        await ProtectedActionSemaphore.WaitAsync();
        try
        {
            //--call your protected action here
        }
        finally
        {
            ProtectedActionSemaphore.Release();
        }

        return Ok(); //--or any other response
    }
}
I hope it helps.
Having read and agreed with the above answer I wanted a slightly different solution:
If you want to detect a second call to an action, use Monitor.TryEnter:
if (!Monitor.TryEnter(Lock, new TimeSpan(0)))
{
    throw new ServiceBusyException("Locked!");
}

try
{
    ...
}
finally
{
    Monitor.Exit(Lock);
}
Use the same static Lock object as detailed by @danludwig.
You can create a custom attribute like [UseLock] as per your requirements and put it on your action.
I have two suggestions for that:
1. https://github.com/madelson/DistributedLock, a system-wide lock solution.
2. Hangfire's BackgroundJob.Enqueue with the [DisableConcurrentExecution(1000)] attribute.
Both solutions make a concurrent request wait for the running process to finish; I didn't want to throw an error when requests arrive at the same time.
The simplest way to do that would be to save to the cache a Boolean value indicating whether the action is already running the required BL:
var isRunning = System.Web.HttpContext.Current.Cache["IsProcessRunning"] as bool?;
if (isRunning != true)
{
    System.Web.HttpContext.Current.Cache["IsProcessRunning"] = true;
    // run your logic here
    System.Web.HttpContext.Current.Cache["IsProcessRunning"] = false;
}
Of course you can do this, or something similar, as an attribute as well.
