How will an Elmish app respond to high-frequency updates? - f#

If an Elmish app receives updates faster than it can render the view, will it queue each update and its resulting model change and render them one by one, or will it accumulate the model changes and only render once it gets a gap?

What would you do if you were implementing Elmish yourself? It depends on where the bottleneck is, but it would certainly be a good idea to hold off on rendering until the model has been folded up to the latest delta, as in the sketch below.
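Elmish itself is F#, but the coalescing idea is easy to sketch. The following TypeScript is a minimal illustration of the pattern (not Elmish's actual dispatch loop): every message is folded into the model immediately, while the view is re-rendered at most once per animation frame, so a burst of updates produces a single render of the latest model.

```typescript
// Minimal sketch of update coalescing, assuming a trivial Model/Msg pair.
type Model = { count: number };
type Msg = { kind: "increment" } | { kind: "set"; value: number };

function update(msg: Msg, model: Model): Model {
  switch (msg.kind) {
    case "increment": return { count: model.count + 1 };
    case "set": return { count: msg.value };
  }
}

let model: Model = { count: 0 };
let renderScheduled = false;

function render(m: Model): void {
  document.body.textContent = `count = ${m.count}`;
}

function dispatch(msg: Msg): void {
  model = update(msg, model);      // fold every delta into the model...
  if (!renderScheduled) {          // ...but schedule at most one render
    renderScheduled = true;
    requestAnimationFrame(() => {
      renderScheduled = false;
      render(model);               // render only the latest model
    });
  }
}
```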

Related

WebGL: is it possible to emulate an asynchronous call to gl.finish()?

WebGL is nice and asynchronous in that you can send off a long list of rendering commands without waiting for them to complete. However, if for some reason you do need to wait for the rendering to complete, you have to do it synchronously with gl.finish(). Surely it would be better if gl.finish accepted a callback and returned immediately?
Question: Is there any way to emulate this reliably?
Usage case: I am rendering a large number of vertices to a large off-screen canvas and then using drawImage to copy sections of this large canvas to small canvases on the page. I don't actually use gl.finish(), but drawImage() seems to have the same effect. In my application, re-rendering is only triggered when the user performs an action (e.g. clicking a button), and it may take several hundred milliseconds. It would be nice if, during rendering, the browser remained responsive, allowing scrolling and so on. I am looking in particular for a Chrome solution, though something that also works in Firefox and Safari would be good.
Possible (bad) answer: You could try and estimate how long rendering is going to take and then set a timeout that begins with the call to gl.finish(). However, reliably doing this estimation for all sizes of vertex buffer and all users is going to be pretty tricky and inaccurate.
Possible (non-)answer: requestAnimationFrame does what I'm looking for...it doesn't though, does it?
Possible answer in 2018: Perhaps the ImageBitmap API solves this problem - see MDN docs.
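For what it's worth, here is a hedged TypeScript sketch of that idea; bigCanvas and smallCtx are placeholder names, and whether the browser actually performs the readback without blocking the main thread is implementation-dependent:

```typescript
// Sketch only: createImageBitmap() returns a promise, so the copy can be
// awaited instead of forcing a synchronous gl.finish()-style stall.
async function copyViaImageBitmap(
  bigCanvas: HTMLCanvasElement,
  smallCtx: CanvasRenderingContext2D
): Promise<void> {
  const bitmap = await createImageBitmap(bigCanvas, 0, 0, 256, 256);
  smallCtx.drawImage(bitmap, 0, 0);
  bitmap.close(); // release the bitmap's backing memory promptly
}
```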
You've already partially hit on your answer: drawImage() does indeed have finish-like behavior, in that it forces all outstanding drawing commands to complete before it reads back the image data. The problem is that even if gl.finish() did what you wanted it to (wait for rendering to complete), you would still have the same behavior using it as you do now: the main thread would be blocked while the rendering finishes, interrupting the user's ability to interact with the page.
Ideally what you would want in this scenario is some sort of callback that indicates when a set of draw commands has been completed, without actually blocking to wait for them. Unfortunately no such callback exists (and it would be surprisingly difficult to provide one, given the way the browser's internals work!).
A decent middle ground in your case may be to make some intelligent estimates of when you feel the image may be ready. For example, once you have dispatched your draw call, spin through 3 or 4 requestAnimationFrames before you call drawImage. If you consistently observe it taking longer, spin for more frames (10?). This would allow users to continue interacting with the page normally and would either produce no delay when doing the drawImage call, because the contents have finished rendering, or much less delay, because you do the synchronous step midway through the render. Depending on the intended usage of your site, non-realtime rendering could probably even stand to spin for a full second or so before presenting.
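A rough TypeScript sketch of that heuristic might look like the following; the canvas names, copy region, and frame count are assumptions for illustration, and the right number of frames to wait is something you would tune empirically:

```typescript
// Wait a few animation frames before copying, so the GPU has a chance to
// finish and drawImage blocks for less time (or not at all).
function copyAfterSpinning(
  bigCanvas: HTMLCanvasElement,
  smallCtx: CanvasRenderingContext2D,
  framesToWait: number // e.g. 3 or 4; raise it if copies still stall
): void {
  let remaining = framesToWait;
  function tick(): void {
    if (remaining-- > 0) {
      requestAnimationFrame(tick); // page stays responsive while GPU works
    } else {
      smallCtx.drawImage(bigCanvas, 0, 0, 256, 256, 0, 0, 256, 256);
    }
  }
  requestAnimationFrame(tick);
}
```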
This certainly isn't a perfect solution, and I wish I had a better answer for you. Perhaps WebGL will gain the ability to query this type of status in the future, because it would be valuable in cases like yours, but for now this is likely the best you can do.

Monitoring NSNotifications

I noticed in a comment on this post, Overhead of NSNotifications, user JustSid says:
the overhead won't be noticeable if the App doesn't send out 30+ per run loop cycle
I'd like to write a little helper class that keeps track of how many NSNotifications have been sent on the current run loop cycle and alert me if it's above a certain pre-configurable number. I know I can register for all notifications (pass nil to name and object), but how do I track what run loop cycle they've been sent from?
It would be easy enough to have a category on NSNotificationCenter that increments some internal integer when a notification observer is added (and correspondingly decrements it when one is removed), but you have to ask yourself: how much is too much?
If you consider it to be some arbitrary integer (say, 30 for example's sake), then what happens when you test it on a device that has tighter memory and processor constraints than the one you have now? What happens if you test it on a device that can easily handle 30 observers and notifications floating around (the check would be a complete waste)? While it would be possible to code general rules, it would be impossible to gauge the impact notifications have on application response time in every case.
The other possibility would be having a background process query the notification stack (or somehow just gauge it internally like above) when the amount of observers brings certain system functions to a crawl. Of course, ignoring the fact that this is way too much work, you'd be designing a sub-system which probably utilizes as much memory and steals as much away from performance as you tried to remedy with it in the first place!
TL;DR There are so many other patterns and structures you could be using instead of notifications, so why are you the one catering to NSNotification's needs?

iOS - start receiving gyroscope updates

I am using CMMotionManager to access the gyroscope data on iOS. I see that there are two methods:
startGyroUpdates
startGyroUpdatesToQueue:withHandler:
to start receiving the gyro updates. How do these two methods differ? In what situations should each of them be called? Is there any significance of one over the other?
Any help appreciated,
A queue is used to guarantee that all events are processed, even when the update interval you set in deviceMotionUpdateInterval produces events at a faster rate than you can process in real time. If you don't mind missing events, it doesn't matter which of the two you use; just discard the events you don't need.
The relevant Apple doc is the Core Motion section of the Event Handling Guide:
For each of the data-motion types described above, the CMMotionManager class offers two approaches for obtaining motion data, a push approach and a pull approach:
Push. An application requests an update interval and implements a block (of a specific type) for handling the motion data; it then starts updates for that type of motion data, passing into Core Motion an operation queue as well as the block. Core Motion delivers each update to the block, which executes as a task in the operation queue.
Pull. An application starts updates of a type of motion data and periodically samples the most recent measurement of motion data.
The pull approach is the recommended approach for most applications, especially games; it is generally more efficient and requires less code. The push approach is appropriate for data-collection applications and similar applications that cannot miss a sample measurement.
It's not in your question, but I wonder whether you want the raw x, y, z rotation or the more useful pitch, roll, yaw. For the latter, use startDeviceMotionUpdatesToQueue:withHandler: instead of startGyroUpdatesToQueue:withHandler:.
Edit: See Tommy's comment on this answer. My assumption of the delegate pattern was wrong.
I'm not particularly familiar with CMMotionManager, but from the naming here's my guess:
startGyroUpdates
Delivers gyroscope updates by invoking delegate methods on the main thread.
startGyroUpdatesToQueue:withHandler:
Delivers gyroscope updates by invoking the handler block on the given queue.
The first would be the pre-block style using delegates, and the second would be the blockified version based on GCD.

Handling latency while synchronizing client-side timers using Juggernaut

I need to implement a draft application for a fantasy sports website. Each user will have 1 minute 30 seconds to choose a player for their team, and if that time elapses a player will be selected automatically. Our planned implementation will use Juggernaut to push the turn changes to each user participating in the draft. But I'm still not sure how to handle latency.
The main issue here is that if a user has higher latency than the others, he will receive the turn changes a little later and his timer won't be synchronized. Say someone receives a turn change just after choosing a player himself, while on his side he thinks he still has 2 seconds left; how can we handle that case? Is it better to try to measure each user's latency and adjust the client-side timer to minimize the issue? If so, how could we implement that?
This is a tricky issue, but there are some good solutions out there. Look into what time.gov does and how it does it; essentially, as I understand it, it uses Java to perform multiple repeated requests to the server to get an idea of the latency involved in the communication, then generates a measure of latency that it uses to skew the returned time data. You could use the same process for your application, with even more accuracy: keeping track of what the latency is and how it varies over time lets you make some statistical inferences about how reliable your latency numbers are. It can be a bit complex, but it can definitely smooth out your performance. My understanding is that this is what most MMOs do as well to manage lag.
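A minimal TypeScript sketch of the probing step; the /ping endpoint, sample count, and startCountdown helper are hypothetical, and assuming symmetric up/down latency is itself an approximation:

```typescript
// Probe the server a few times and take the median round trip as the
// latency estimate; half of it approximates the one-way delay.
async function estimateLatencyMs(samples = 5): Promise<number> {
  const rtts: number[] = [];
  for (let i = 0; i < samples; i++) {
    const start = performance.now();
    await fetch("/ping", { cache: "no-store" }); // hypothetical cheap endpoint
    rtts.push(performance.now() - start);
  }
  rtts.sort((a, b) => a - b);
  return rtts[Math.floor(rtts.length / 2)] / 2;
}

// Usage sketch: when a turn change arrives, skew the 90-second client
// timer by the estimated one-way latency (startCountdown is hypothetical).
// const oneWay = await estimateLatencyMs();
// startCountdown(90_000 - oneWay);
```

Re-probing periodically and tracking the variance, as described above, would tell you how much to trust the estimate at any given moment.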

Storing Data In Memory: Session vs Cache vs Static

A bit of backstory: I am working on a web application that requires quite a bit of time to prep/crunch data before giving it to the user to edit/manipulate. The data request takes ~15-20 seconds to complete and a couple of seconds to process. Once there, the user can manipulate values on the fly. Any manipulation of values will require the data to be reprocessed completely.
Update: To avoid confusion, I am only making the data call one time (the 15-second hit) and then keeping the results in memory so that I will not have to call it again until the user is 100% done working with it. So the first pull will take a while, but, using Ajax, I am going to hit the in-memory data to constantly update and keep the response time to around 2 seconds or so (I hope).
In order to make this efficient, I am moving the initial data into memory and using Ajax calls back to the server so that I can reduce the processing time needed to handle the recalculation that occurs with this user's updates.
Here is my question: with performance in mind, what would be the best way to store this data, assuming that only one user will be working with it at any given moment?
Also, the user could potentially be working in this process for a few hours. When the user is working w/ the data, I will need some kind of failsafe to save the user's current data (either in a db or in a serialized binary file) should their session be interrupted in some way. In other words, I will need a solution that has an appropriate hook to allow me to dump out the memory object's data in the case that the user gets disconnected / distracted for too long.
So far, here are my musings:
Session State - Pros: Locked to one user. Has the Session End event, which will meet my failsafe requirements. Cons: Slowest performance of my current options. It is sometimes tricky to ensure the Session End event fires properly.
Caching - Pros: Good performance. Has access to dependencies, which could be a bonus later down the line but is not really useful in the current scope. Cons: No easy failsafe step other than a write based on time intervals. Global in scope - will have to ensure that users do not collide with each other's work.
Static - Pros: Best performance. Easiest to maintain, as I can directly leverage my current class structures. Cons: No easy failsafe step other than a write based on time intervals. Global in scope - will have to ensure that users do not collide with each other's work.
Does anyone have any suggestions/comments on which option I should choose?
Thanks!
Update: Forgot to mention, I am using VB.Net, Asp.Net, and Sql Server 2005 to perform this task.
I'll vote for secret option #4: use the database for this. If you're talking about a 20+ second turnaround time on the data, you are not going to gain anything by trying to do this in-memory, given the limitations of the options you presented. You might as well set this up in the database (give it a table of its own, or even a separate database if the requirements are that large).
I'd go with the caching method for storing the data across page loads. You can key the cache entries per user to avoid conflicts.
For tracking user-made changes, I'd go with a more old-school approach: append to a text file each time the user makes a change and then sweep that file at intervals to save changes back to DB. If you name the files based on the user/account or some other session-unique indicator then there's no issue with conflict and the app (or some other support app, which might be a better idea in general) can sweep through all such files and update the DB even if the session is over.
The first part of this can be adjusted to stagger the writes out more: save changes to Session, then write that to the file at intervals, then sweep the file at larger intervals. You can tune it for performance and choose what level of possible user-change loss is acceptable.
Use the Session, but don't rely on it.
Simply, let the user "name" the dataset, and make a point of actively persisting it for the user, either automatically, or through something as simple as a "save" button.
You cannot rely on the session, simply because it is (typically) tied to the user's browser instance. If they accidentally close the browser (click the X button, their PC crashes, etc.), then they lose all of their work, which would be nasty.
Once the user has that kind of control over the "persistent" state of the data, you can rely on the Session to keep it in memory and leverage that as a cache.
I think you've pretty much just answered your question with the pros/cons. But if you are looking for some peer validation, my vote is for the Session. Although the performance is slower (do you know by how much slower?), your processing is going to take a long time regardless. Do you think the user will know the difference between 15 seconds and 17 seconds? Both are "forever" in web terms, so go with the one that seems easiest to implement.
Perhaps a bit off topic, but I'd recommend putting those long processing calls in asynchronous (not to be confused with AJAX's asynchronous) pages.
Take a look at this article and ping me back if it doesn't make sense.
http://msdn.microsoft.com/en-us/magazine/cc163725.aspx
I suggest to create a copy of the data in a new database table (let's call it EDIT) as you send the initial results to the user. If performance is an issue, do this in a background thread.
As the user edits the data, update the table (also in a background thread if performance becomes an issue). If you have to use threads, you must make sure that the first thread is finished before you start updating the rows.
This allows a user to walk away, come back, even restart the browser and commit whenever she feels satisfied with the result.
One possible alternative to what the others mentioned is to store the data on the client.
Assuming the dataset is not too large and the code that manipulates it can run client side, you could store the data as an XML data island or a JSON object. The data could then be manipulated/processed entirely client side with no round trips to the server. If you need to persist it back to the server, the end result could be posted via AJAX or a standard postback.
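As a rough TypeScript sketch of that approach (the Row shape and the /api/save endpoint are placeholders for whatever the application actually uses):

```typescript
// Hold the working set in client memory, edit it locally, and post the
// final result back in one request when the user is done.
interface Row { id: number; value: number }

let workingSet: Row[] = [];

function applyEdit(id: number, value: number): void {
  const row = workingSet.find(r => r.id === id);
  if (row) row.value = value; // no round trip to the server
}

async function persist(): Promise<void> {
  await fetch("/api/save", { // hypothetical endpoint
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(workingSet),
  });
}
```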
If this does not work with your requirements, I'd go with just storing it on the SQL Server, as the other comment suggested.
