I noticed that in a comment on the post Overhead of NSNotifications, user JustSid says:
the overhead won't be noticeable if the App doesn't send out 30+ per run loop cycle
I'd like to write a little helper class that keeps track of how many NSNotifications have been sent during the current run loop cycle and alerts me if the count goes above a certain pre-configurable number. I know I can register for all notifications (pass nil for name and object), but how do I track which run loop cycle they were sent from?
It would be easy enough to have a category on NSNotificationCenter that increments some internal integer when a notification observer is added and decrements it when the observer is removed (a rough sketch of a counter along these lines appears at the end of this answer), but you have to ask yourself: how much is too much?
If you consider it to be some arbitrary integer (say, 30 for example's sake), then what happens when you test it on a device with tighter memory and processor constraints than the one you have now? What happens if you test it on a device that can easily handle 30 observers and notifications floating around (the check would be a complete waste)? While it would be possible to code general rules, it would be impossible to gauge the impact notifications have on application response time in every case.
The other possibility would be having a background process query the notification stack (or somehow gauge it internally, as above) when the number of observers brings certain system functions to a crawl. Of course, ignoring the fact that this is way too much work, you'd be designing a sub-system that probably uses as much memory and steals as much performance as the problem you set out to remedy in the first place!
TL;DR There are so many other patterns and structures you could be using instead of notifications, so why are you the one catering to NSNotification's needs?
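That said, if you still want to count notifications per run loop cycle, here is a rough Swift sketch of the idea (the NotificationCounter name and the default threshold are made up): observe every notification and reset the count with a CFRunLoopObserver once per pass of the main run loop.

    import Foundation

    // Hypothetical helper: counts notifications per run loop pass and warns past a threshold.
    final class NotificationCounter {
        private var count = 0
        private let threshold: Int
        private var observerToken: NSObjectProtocol?
        private var runLoopObserver: CFRunLoopObserver?

        init(threshold: Int = 30) {
            self.threshold = threshold

            // Passing nil for name and object observes every notification posted to this center.
            observerToken = NotificationCenter.default.addObserver(forName: nil, object: nil, queue: .main) { [weak self] note in
                guard let self = self else { return }
                self.count += 1
                if self.count > self.threshold {
                    print("High notification traffic: \(self.count) this cycle (last: \(note.name.rawValue))")
                }
            }

            // Reset the counter just before the main run loop goes to sleep, i.e. once per pass.
            runLoopObserver = CFRunLoopObserverCreateWithHandler(kCFAllocatorDefault,
                                                                 CFRunLoopActivity.beforeWaiting.rawValue,
                                                                 true, 0) { [weak self] _, _ in
                self?.count = 0
            }
            CFRunLoopAddObserver(CFRunLoopGetMain(), runLoopObserver, .commonModes)
        }

        deinit {
            if let token = observerToken { NotificationCenter.default.removeObserver(token) }
            if let observer = runLoopObserver { CFRunLoopRemoveObserver(CFRunLoopGetMain(), observer, .commonModes) }
        }
    }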
I have a general efficiency question about Dart streams.
I have a project that makes some use of them, but it has been proposed that we convert nearly everything (functions and data) to Dart streams, in order to achieve a fully reactive architecture.
I don't know how streams really work under the hood, so I don't really know if this kind of design comes with any kind of memory or computational overhead.
Thanks for your attention to this question.
There is an overhead. It's not necessarily big, but it's there.
Streams have a well-defined asynchronous behavior, and it's documented how they react to listeners being added, paused or cancelled, even if that happens while an event is being delivered (because, most often, that is when it happens).
Streams are asynchronous, which means there is a delay between adding an event to the stream (through a StreamController), and that event being received by the listener. That delay makes it necessary to store (buffer) the event, schedule a microtask, and then unbuffer the event and deliver it in that later microtask. Scheduling a microtask costs. There might be zones involved, which can cost extra.
On top of that, the stream needs to be able to react to pause and cancel requests in a timely manner, which means that each event delivery is also flanked by extra checks of whether the listener has paused or cancelled. It's not a lot of overhead, but it's there.
For single-subscription streams, that's about it.
For broadcast streams, which can have multiple listeners, there can be a little extra overhead to handle new listeners being added while delivering the event. Again, not a lot, but it's there. The state-space for a stream is actually quite complicated.
(You can create a "synchronous" StreamController which delivers events "immediately", but most of the time you shouldn't. Those are not for avoiding asynchrony; they are for avoiding adding extra asynchronous delays when propagating already asynchronous events, and they should be used very carefully to avoid breaking code that assumes it won't get events in the middle of something else. A properly implemented reactive framework will use such controllers in its implementation, but that will not get rid of the inherent delay of delivering the original asynchronous event.)
Now, performance is not absolute. Using streams everywhere might make your life easier, and if the performance is good enough for your application (it's not dominating the actual computations), then the increased development speed and maintainability might pay for itself. You should measure (and have repeatable benchmarks to measure) before making a decision about an implementation strategy based on performance alone.
What I currently want to achieve is downloading files from an array, one file at a time, in a way that keeps downloading even after the app goes into the background.
I'm using Rob's code as stated here, but he's using URLSessionConfiguration.default, whereas I want to use URLSessionConfiguration.background(withIdentifier: "uniqueID") instead.
It did work on the first try, but once the app went to the background everything became chaotic: the operations start downloading more than one file at a time, and no longer in order.
Is there any solution to this, or what should I use instead to achieve what I want? In Android we have a Service to handle this easily.
The whole idea of wrapping requests in operations is only applicable if the app is active/running. It’s great for things like constraining the degree of concurrency for foreground requests, managing dependencies, etc.
For a background session that continues to proceed after the app has been suspended, though, none of that is relevant. You create your request, hand it to the background session to manage, and monitor the delegate methods called for your background session. No operations needed/desired. Remember, these requests will be handled by the background session daemon even if your app is suspended (or if it was terminated in the course of its normal lifecycle, though not if you force-quit it). So the whole idea of operations, operation queues, etc., just doesn’t make sense if the background URLSession daemon is handling the requests and your app isn’t active.
See https://stackoverflow.com/a/44140059/1271826 for an example of a background session.
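A condensed sketch of that pattern, using the "uniqueID" identifier from your question (the linked answer has the full treatment, including resuming after relaunch):

    import Foundation

    final class BackgroundDownloader: NSObject, URLSessionDownloadDelegate {
        // The identifier must be stable across launches so the OS can reconnect the session.
        private lazy var session: URLSession = {
            let configuration = URLSessionConfiguration.background(withIdentifier: "uniqueID")
            return URLSession(configuration: configuration, delegate: self, delegateQueue: nil)
        }()

        func download(_ urls: [URL]) {
            // Hand every request to the background daemon; it schedules them even if the app is suspended.
            for url in urls {
                session.downloadTask(with: url).resume()
            }
        }

        func urlSession(_ session: URLSession, downloadTask: URLSessionDownloadTask,
                        didFinishDownloadingTo location: URL) {
            // Move the file out of its temporary location before this method returns.
            guard let requestURL = downloadTask.originalRequest?.url else { return }
            let destination = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0]
                .appendingPathComponent(requestURL.lastPathComponent)
            try? FileManager.default.moveItem(at: location, to: destination)
        }

        func urlSession(_ session: URLSession, task: URLSessionTask, didCompleteWithError error: Error?) {
            if let error = error { print("Download failed:", error) }
        }
    }

    // In a real app you would also implement the app delegate's
    // application(_:handleEventsForBackgroundURLSession:completionHandler:) so the OS can
    // relaunch your app when the downloads finish.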
By the way, true background sessions are really useful when downloading very large resources that might take a very long time. But they introduce all sorts of complexities (e.g., you often want to debug and diagnose while not connected to the Xcode debugger, which changes your app's lifecycle, so you have to resort to mechanisms like unified logging; you need to figure out how to restore the UI if the app was terminated between the time the requests were initiated and when they finished; etc.).
Because of this complexity, you might want to consider whether it is absolutely needed. Sometimes, if you need less than 30 seconds to complete some requests, it's easier to just ask the OS to keep your app running in the background for a little bit after the user leaves the app and use a standard URLSession. For more information, see Extending Your App's Background Execution Time. It's a much easier solution, bypassing many background URLSession hassles, but it only works if you need 30 seconds or less. For larger requests that might exceed that small window, a true background URLSession is needed.
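A minimal sketch of that approach (performQuickRequests is a hypothetical helper; it assumes a standard URLSession in an ordinary iOS app):

    import UIKit

    func performQuickRequests(urls: [URL]) {
        var backgroundTaskID: UIBackgroundTaskIdentifier = .invalid

        // Ask the OS for a little extra execution time in case the user leaves the app.
        backgroundTaskID = UIApplication.shared.beginBackgroundTask {
            // Called if time runs out before we finish; end the task so the app isn't killed.
            UIApplication.shared.endBackgroundTask(backgroundTaskID)
            backgroundTaskID = .invalid
        }

        let group = DispatchGroup()
        for url in urls {
            group.enter()
            URLSession.shared.dataTask(with: url) { _, _, _ in
                group.leave()
            }.resume()
        }

        group.notify(queue: .main) {
            // All requests finished; tell the OS we no longer need the background time.
            UIApplication.shared.endBackgroundTask(backgroundTaskID)
            backgroundTaskID = .invalid
        }
    }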
Below, you asked:
There are some downsides with [downloading multiple files in parallel] as I understand it.
No, it’s always better to allow downloads to progress asynchronously and in parallel. It’s much faster and more efficient. The only time you want to perform requests consecutively, one after another, is when you need to parse the response of one request in order to prepare the next one. But that is not the case here.
The exception here is with the default, foreground URLSession. In that case you have to worry about later requests timing out while waiting for earlier ones. In that scenario you might bump up the timeout interval. Or we might wrap our requests in an Operation subclass, allowing us to constrain how many requests run concurrently and to not start subsequent requests until earlier ones finish. But even then we don’t usually do it serially, but rather use a maxConcurrentOperationCount of 4 or something like that.
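For illustration, a trimmed-down sketch of that foreground pattern (a semaphore inside a plain block operation keeps the sketch short; the answer linked above wraps each request in a proper asynchronous Operation subclass instead):

    import Foundation

    // Hypothetical helper: a queue that limits how many foreground downloads run at once.
    final class ForegroundDownloadQueue {
        private let queue: OperationQueue = {
            let queue = OperationQueue()
            queue.maxConcurrentOperationCount = 4   // at most four downloads in flight at once
            return queue
        }()

        func enqueue(_ urls: [URL], session: URLSession = .shared) {
            for url in urls {
                queue.addOperation {
                    // Block this operation's (background) thread until its download finishes,
                    // so the queue's concurrency limit also limits the in-flight requests.
                    let semaphore = DispatchSemaphore(value: 0)
                    session.downloadTask(with: url) { location, _, _ in
                        if let location = location {
                            print("Downloaded \(url.lastPathComponent) to \(location)")
                        }
                        semaphore.signal()
                    }.resume()
                    semaphore.wait()
                }
            }
        }
    }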
But for background sessions, requests don’t time out just because the background daemon hasn’t gotten around to them yet. Just add your requests to the background URLSession and let the OS handle this for you. You definitely don’t want to download images one at a time, with the background daemon relaunching your app in the background when one download is done so you can initiate the next one. That would be very inefficient (both in terms of the user’s battery as well as speed).
You need to loop through an array of files and add them to the session to make them download, but they will be downloaded asynchronously, so it's hard to keep track, especially since there are a lot of files.
Sure, you can’t do a naive “add to the end of the array” if the requests are running in parallel, because you’re not guaranteed the order in which they will complete. But it’s not hard to capture these responses as they come in. Just use a dictionary, for example, perhaps keyed by the URL of the original request. Then you can easily look up in that dictionary the response associated with a particular request URL.
It’s incredibly simple. And we now can perform requests in parallel, which is much faster and more efficient.
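For example, a small sketch of that bookkeeping (DownloadResults is a made-up name):

    import Foundation

    // Collects results as downloads complete, in whatever order they arrive.
    final class DownloadResults {
        private var resultsByURL: [URL: URL] = [:]   // request URL -> local file URL
        private let lock = NSLock()                  // session delegate calls can arrive on any queue

        func record(localFile: URL, for requestURL: URL) {
            lock.lock(); defer { lock.unlock() }
            resultsByURL[requestURL] = localFile
        }

        func localFile(for requestURL: URL) -> URL? {
            lock.lock(); defer { lock.unlock() }
            return resultsByURL[requestURL]
        }
    }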
You go on to say:
[Downloading in parallel] could lead to high battery consumption with a lot of requests at the same time. That's why I tried to make it download each file one at a time.
No, you never need to perform downloads one at a time for the sake of power. If anything, downloading one at a time is slower, and will take more power.
Unrelated: if you’re downloading 800+ files, you might want to respect the user’s “Low Data Mode” setting and not perform these requests then. In iOS 13, for example, you might set allowsExpensiveNetworkAccess and allowsConstrainedNetworkAccess.
Regardless (and especially if you are supporting older iOS versions), you might also want to consider the appropriate settings for isDiscretionary and allowsCellularAccess.
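For example, a sketch of those settings on a background configuration (the values here are purely illustrative; choose what fits your app):

    import Foundation

    func makeBackgroundConfiguration() -> URLSessionConfiguration {
        let configuration = URLSessionConfiguration.background(withIdentifier: "uniqueID")

        // Let the system defer large transfers to a good moment (power, Wi-Fi, etc.).
        configuration.isDiscretionary = true
        configuration.allowsCellularAccess = false

        if #available(iOS 13.0, *) {
            // Respect Low Data Mode and "expensive" interfaces (cellular, personal hotspots).
            configuration.allowsConstrainedNetworkAccess = false
            configuration.allowsExpensiveNetworkAccess = false
        }
        return configuration
    }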
Bottom line, you want to make sure that you are respectful of a user’s limited cellular data plan, or of an expensive connection (e.g., an airplane’s pricey data plan or tethering through a local hotspot).
For more information on these considerations, see WWDC 2019 Advances in Networking, Part 1.
My question involves keeping an app that monitors user interactions running in the background, for example tracking time spent in one app or another. The issue is that you cannot have a background process run for more than 10 minutes without violating Apple's sandbox restrictions. I am relatively new to the Apple APIs and could not find a direct answer that didn't involve location services or VoIP (both interesting options, but my application can't viably use either), so I'm asking what options I have for detecting when another app opens, when it closes, when it fetches data, and when the user holds the phone in a certain orientation for a certain amount of time (i.e., at the angles people use when reading text, etc.).
The purpose of this analysis is to predict an attention span for the user, so I need to be able to run in the background, since the user will not be using my app while it predicts their attention span.
My thought is to possibly access the system log and somehow parse previous statements (which I don't think the sandbox will allow), but inevitably the iOS system will suspend my processes at some point unless I use a timer. There is also the option of having the system wake my app via opportunistic background fetch, but that won't be useful if I haven't been collecting the data.
Keep in mind this is on iOS 11, which is much more restrictive than previous iterations. I understand this may seem like a complex problem, but even a direction in which to head would be useful to me.
This solution might work (not recommended, since it drains the battery more quickly).
Just update your current location every 10 minutes. It will reset the background execution timer.
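A rough sketch of that trick, for what it's worth (the class name is made up; it needs the "location" background mode and "Always" authorization, and it will noticeably drain the battery):

    import CoreLocation

    final class LocationKeepAlive: NSObject, CLLocationManagerDelegate {
        private let manager = CLLocationManager()
        private var timer: Timer?

        func start() {
            manager.delegate = self
            manager.requestAlwaysAuthorization()
            manager.allowsBackgroundLocationUpdates = true   // requires the "location" background mode
            manager.pausesLocationUpdatesAutomatically = false
            manager.startUpdatingLocation()

            // Restart location updates every 10 minutes, per the suggestion above.
            timer = Timer.scheduledTimer(withTimeInterval: 600, repeats: true) { [weak self] _ in
                self?.manager.stopUpdatingLocation()
                self?.manager.startUpdatingLocation()
            }
        }

        func locationManager(_ manager: CLLocationManager, didUpdateLocations locations: [CLLocation]) {
            // Do your lightweight monitoring work here while the app is kept alive.
        }

        func locationManager(_ manager: CLLocationManager, didFailWithError error: Error) {
            print("Location error:", error)
        }
    }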
I need to implement a draft application for a fantasy sports website. Each user will have 1:30 to choose a player for their team, and if that time elapses a player will be selected automatically. Our planned implementation will use Juggernaut to push the turn changes to each user participating in the draft. But I'm still not sure how to handle latency.
The main issue is that if a user has higher latency than the others, he will receive the turn changes a little later and his timer won't be synchronized. Say someone receives a turn change right after choosing a player himself, while on his side he thinks he still has 2 seconds left: how can we handle that case? Is it better to try to measure each user's latency and adjust the client-side timer to minimize the issue? If so, how could we implement that?
This is a tricky issue, but there are some good solutions out there. Look into what time.gov does and how it does it; essentially, as I understand it, it uses Java to perform multiple repeated requests to the server to get an idea of the latency involved in the communication, and then generates a latency estimate that it uses to skew the returned time data. You could use the same process for your application, with even more accuracy: keeping track of what the latency is and how it varies over time lets you make some statistical inferences about how reliable your latency numbers are, and so on. It can be a bit complex, but it can definitely allow you to smooth out your performance. My understanding is that this is what most MMOs do as well to manage lag.
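To make the measuring idea concrete, here is a minimal sketch (written in Swift purely for illustration; the same arithmetic works in a JavaScript client): sample the round trip a few times, estimate the clock offset, and skew the countdown by it.

    import Foundation

    struct ClockSync {
        /// Round-trip samples: client send time, reported server time, client receive time (seconds).
        private var samples: [(sent: TimeInterval, serverTime: TimeInterval, received: TimeInterval)] = []

        mutating func addSample(sent: TimeInterval, serverTime: TimeInterval, received: TimeInterval) {
            samples.append((sent, serverTime, received))
        }

        /// Estimated offset of the server clock relative to the local clock.
        var estimatedOffset: TimeInterval {
            let offsets = samples.map { sample -> TimeInterval in
                let oneWay = (sample.received - sample.sent) / 2   // assume symmetric latency
                return sample.serverTime + oneWay - sample.received
            }
            guard !offsets.isEmpty else { return 0 }
            // The median is more robust than the mean against one unusually slow round trip.
            return offsets.sorted()[offsets.count / 2]
        }

        /// Seconds left on a deadline that the server expressed in its own clock.
        func secondsRemaining(untilServerDeadline deadline: TimeInterval, now: TimeInterval) -> TimeInterval {
            return deadline - (now + estimatedOffset)
        }
    }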
I'd like to delay the handling of some captured events in ActionScript until a certain time. Right now I stick them in an Array when they're captured and go through it when needed, but this seems inefficient. Is there a better way to do this?
Well, to me this seems like a clean and efficient way of doing that.
What do you mean by delaying? Do you mean simply processing them later, or processing them after a given time?
You can always set a timeout for the actual processing function in your event handler (using flash.utils.setTimeout) to process the event at a precise moment in time. But that can become inefficient, since you may have many timeouts dangling about that need to be handled by the runtime.
Maybe you could specify your needs a little more.
edit:
Ok, basically, the Flash player is single-threaded; that is, bytecode execution is single-threaded. Any event that is dispatched is processed immediately, i.e. dispatchEvent(someEvent) will directly call all registered handlers (and thus AS bytecode).
Now there are events which actually are generated in the background. These come either from I/O (network, user input) or from timers (TimerEvents). It may happen that some of these events occur while bytecode is being executed. This usually happens on a background thread, which passes the event (in the abstract sense of the term) to the main thread through a (de)queue.
If the main thread is busy executing bytecode, it will ignore these messages until it is done (note: nearly all bytecode execution is the implicit consequence of some event, be it enter frame, input, a timer, a load operation, or whatever). When it is idle, it will look in all the queues until it finds an available message, wrap the information into an ActionScript Event object, and dispatch it as previously described.
Thus this queueing is a very low-level mechanism that comes from thread-to-thread communication (and appears in many multi-threading scenarios), and it is not accessible to you.
But as I said before, your approach both is valid and makes sense.
Store them in a Vector instead of an Array :p
I think it's all about how you structure your program. Maybe you can assign the captured event to the related instance, so that it's natural to process the captured event with that instance instead of querying a global vector.