In image processing, what is real time?

In image processing applications, what is considered real time? Is 33 fps real time? Is 20 fps? If 33 and 20 fps are considered real time, then are 1 or 2 fps also real time?
Can anyone shed some light on this?

In my experience, it's a pretty vague term. Often, what is meant is that the algorithm will run at the rate of the source (e.g. a camera) supplying the images; however, I would prefer to state this explicitly ("the algorithm can process images at the frame rate of the camera").

Real-time image processing means producing output simultaneously with the input.
The input may arrive at 25 fps, but you may choose to process only 1 of every 5 frames (that makes 5 fps of processing) and your application is still real time (see the sketch after these examples).
TV streaming software: all the frames are processed.
Security application with CCTV cameras as input: you may choose to skip some frames to fit the performance budget.
3D game or simulation: fps changes depending on the current scene.
And they are all real time.
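
A minimal sketch of that frame-skipping idea in Swift; Frame and process(_:) are hypothetical placeholders for your own pipeline, not anything from the answer above:

struct Frame { /* pixel data, timestamp, ... */ }

func process(_ frame: Frame) {
    // placeholder for the actual per-frame work
}

func handleIncomingFrames(_ frames: [Frame], skipFactor: Int = 5) {
    for (index, frame) in frames.enumerated() {
        // Process only every skipFactor-th frame; the system stays real time
        // as long as processing keeps pace with the frames it accepts.
        guard index % skipFactor == 0 else { continue }
        process(frame)
    }
}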

Strictly speaking, I would say real-time means that the application is generating images based on user input as it occurs, e.g. a mouse movement which changes the facing of an avatar.
How successful it is at this task - 1 fps, 10 fps, 100 fps, etc - is actually another question.
Real-time describes an approach, not a performance metric.
If, however, you ask what the slowest frame rate is that a human finds usable, the answer is about 15 fps, I think.

I think it depends on what the real-time application is. If the app is showing slideshows with 1 picture every 3 seconds, and the app can process each picture within those 3 seconds and show it, then it is real-time processing.
If a movie runs at 29.97 frames per second, and the app can process all 29.97 frames each second, then it is also real time.
For example, if an app can take the movie from a VCR or a cable box's analog output, compress it into 29.97-frames-per-second video, and send it all to a remote location for another person to watch, then it is real-time processing.

(Hard) real time is when an outcome has no value if it is delivered too early or too late.
Any FPS is real time, provided that the displayed frames represent what should be displayed at the very instant they are displayed.

The notion of real-time display is not really tied to a specific frame rate - it could be defined as the minimum frame rate at which movement is perceived as being continuous. So for slow moving objects in a visual frame (e.g. ships in a harbour, or stars in the night sky) a relatively slow frame rate might suffice, whereas for rapid movement (e.g. a racing car simulator) a much higher frame rate would be needed.
There is also a secondary consideration of latency. A real-time display must have sufficiently low latency in relation to other events (e.g. behaviour of a real-time simulation) that there is no perceptible lag in display updates.

That's not actually an easy question (even without taking into account differences between individuals).
Wikipedia has a good article explaining why. For what it's worth, cinema films run at 24 fps, so if you're happy with that, that's what I'd consider real time.

It depends on what exactly you are trying to do. For some purposes 1 fps, or even 2 spf (seconds per frame), could be considered real time. For others, that's way too slow.
That said, real time means that it takes as long (or less) to process x frames as it would take to just present those x frames.

It depends.
automatic aircraft cannon - 1000 fps
monitoring - 10-15 fps
authentication - 1 fps
medical devices - 1 fph (frames per hour)

I guess the term is used with different meanings in different contexts. In industrial image processing, real-time processing is usually the opposite of offline processing. In offline processing applications, you record images (many of them) and process them at a later time. In real-time processing, the system that acquires the images also processes them, at the same time, so the processing frame rate must not be lower than the acquisition frame rate.

Real-time means your implementation is fast enough to meet some deadline. The deadline is part of your system's specification. If it's an interactive UI and the users are not too picky, a 15 Hz update can be OK, although it can feel laggy. If you're using it to drive a car along the motorway, 30 Hz is about right. If it's a missile, well, maybe 100 Hz?
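
A hedged sketch of that deadline idea in Swift; the 30 Hz figure comes from the driving example above, and the function names are illustrative:

import Foundation

let requiredHz = 30.0                    // e.g. the driving example above
let frameDeadline = 1.0 / requiredHz     // roughly 33 ms per frame

// Returns true only if the per-frame work finished within its deadline;
// an implementation is real time for this spec only if every frame does.
func frameMeetsDeadline(_ work: () -> Void) -> Bool {
    let start = Date()
    work()
    return Date().timeIntervalSince(start) <= frameDeadline
}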

Related

SceneKit scenes lag when resuming app

In my app, I have several simple scenes (a single 80-segment sphere with a 500 px by 1000 px texture, rotating once a minute) displaying at once. When I open the app, everything goes smoothly. I get a constant 120 fps with less than 50 MB of memory usage and around 30% CPU usage.
However, if I minimize the app and come back to it a minute later, or just stop interacting with the app for a while, the scenes all lag terribly and get around 4 fps, despite Xcode reporting 30 fps, normal memory usage, and super low (~3%) CPU usage.
I get this behavior when testing on a real iPhone 7 running iOS 10.3.1, and I'm not sure if this behavior exists on other devices or the simulator.
Here is a sample project I pulled together to demonstrate this issue. (link here) Am I doing something wrong here? How can I make the scenes wake up and resume using as much CPU as they need to maintain good fps?
I probably won't answer the question you've asked directly, but I can give you some points to think about.
I launched your demo app on my 6th-generation iPod (64-bit), iOS 10.3.1, and it lags from the very beginning, at about 2-3 fps, for up to a minute. Then after some time it starts to spin smoothly. The same happens after going background-foreground. This can be explained by some caching of textures.
I resized one of the SCNViews so that it fits the screen, with the other views behind it, and set v4.showsStatistics = true.
And here is what I got:
As you can see, Metal flush takes about 18.3 ms per frame, and that's for only one SCNView.
According to this answer on Stack Overflow:
"So, if my interpretation is correct, that would mean that 'Metal flush' measures the time the CPU spends waiting on video memory to free up so it can push more data and request operations to the GPU."
So we might suspect that the problem is four different SCNViews working with the GPU simultaneously.
Let's check it. Compared to the second point, I deleted the 3 SCNViews behind and put the 3 planets from those views into the front one, so that one SCNView has 4 planets at once. Here is the screenshot:
As you can see, Metal flush now takes up to 5 ms, it's smooth from the beginning, and everything runs well. You may also notice that the number of triangles (top-right icon) is four times what we saw in the first screenshot.
To sum up, just try to combine all the SCNNodes in one SCNView, and you may well get a speed-up.
So, I finally figured out a partially functional solution, even though it's not what I thought it would be.
The first thing I tried was to keep all the nodes in a single global scene, as suggested by Sander's answer, and set the delegate on one of the SCNViews, as suggested in the second answer to this question. Maybe this used to work, or it worked in a different context, but it didn't work for me.
How Sander ended up helping me was through the performance statistics, which I didn't know existed. I enabled them for one of my scenes, and something stood out to me about performance:
In the first few seconds of running, before the app gets dramatic frame drops, the performance display read 240 fps. "Why was this?", I thought. Who would need 240 fps on a mobile phone with a 60 Hz display, especially when the SceneKit default is 60? Then it hit me: 60 * 4 = 240.
What I guess was happening is that each update in each scene triggered a "Metal flush", meaning the GPU was being flushed 240 times per second across the four views. I would guess that this slowly fills the GPU buffer (or memory? I have no idea), and eventually SceneKit needs to start clearing it out, and 240 flushes per second across 4 views is simply too much for it to keep up with (which explains why it initially gets good performance before dropping completely).
My solution (and this is why I said "partial solution") was to set the preferredFramesPerSecond of each SCNView to 15, for a total of 60. (I can also get away with 30 on my phone, but I'm not sure if this holds up on weaker devices.) Unfortunately 15 fps is noticeably choppy, but way better than the terrible performance I was getting originally.
Maybe in the future Apple will enable independent refreshes per SCNView.
TL;DR: set preferredFramesPerSecond so it sums to 60 over all of your SCNViews.
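
A minimal sketch of that TL;DR in Swift; sceneViews is a hypothetical array holding the app's four views, not a name from the question:

import SceneKit

// Split a 60 Hz budget evenly across however many SCNViews are on screen,
// so the combined render load stays at roughly 60 frames per second.
func capFrameRates(of sceneViews: [SCNView], totalBudget: Int = 60) {
    guard !sceneViews.isEmpty else { return }
    let perView = max(1, totalBudget / sceneViews.count)
    for view in sceneViews {
        view.preferredFramesPerSecond = perView   // 15 fps each for four views
    }
}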

How fast can I send a UIImage between an iPhone and Apple Watch, with watchOS 2?

I'm building an Apple watchOS 2 app which is continuously animated with generated images.
Because these can't be bundled with the app, they're generated in InterfaceController and then set to display on the watch like so:
self.imageGroup?.setBackgroundImage(self.image)
Until this point, I've been generating these at a rate of 1 image per second, which feels fairly safe but obviously gives a very low frame rate of 1 fps. Now I'm wondering how much this could be improved.
I measured the speed at which the UIImages themselves are generated, which is fairly fast at 0.017 seconds each. The size of these images is fairly consistent, too, at about 10,000 bytes. If there were no further delay, that would give a much more acceptable performance of about 58 fps.
My question is: is there a typical speed at which Bluetooth communicates with my phone, which I could compare to that image size to determine a realistic frame rate?
Or: I presume that calling setBackgroundImage doesn't block the main thread while that happens. Is there a way I can find out how long it takes for the image to actually be set?
Apple doesn't document this speed because so much of it depends on connection strength. And since a user doesn't need to have the watch and phone right next to each other, the further away they are (and depending on the objects between the phone and watch), the slower the transfer.
Your images are 10 KB, and you want to send 58 images per second, so 580 KB or 0.58 MB per second. The amount of data doesn't sound unrealistic (though it will be a battery drain). However, each network call between the two devices has some overhead. Do these images need to be sent in real time? If not, you would likely get better performance if you could delay for 1-2 seconds initially and then batch a group of 58 images together, which you would then animate on the watch. You would only make one network call per second, which is far more manageable for the devices than 58 calls per second.
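
A hedged sketch of that batching idea on the watch side, assuming the frames are available as UIImages; generateFrame(at:) is a hypothetical stand-in for the app's image generator:

import WatchKit
import UIKit

// Combine a batch of generated frames into one animated UIImage and set it
// with a single call, instead of one setBackgroundImage call per frame.
func pushBatchedFrames(to group: WKInterfaceGroup, frameCount: Int = 30) {
    let frames = (0..<frameCount).map { generateFrame(at: $0) }
    if let batch = UIImage.animatedImage(with: frames, duration: 1.0) {
        group.setBackgroundImage(batch)   // one call covers a second of animation
    }
}

func generateFrame(at index: Int) -> UIImage {
    // hypothetical placeholder for the app's existing image generation
    return UIImage()
}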

iOS: dynamically slow down the playback of a video, with continuous values

I have a problem with the iOS SDK: I can't find an API to slow down a video with continuous values.
I have made an app with a slider and an AVPlayer, and I would like to change the speed of the video, from 50% to 150%, according to the slider value.
So far, I have only managed to change the speed of the video with discrete values, and by recompiling the video (to do that, I used the AVMutableComposition APIs).
Do you know if it is possible to change the speed continuously, and without recompiling?
Thank you very much!
Jery
The AVPlayer's rate property allows playback speed changes if the associated AVPlayerItem is capable of it (responds YES to canPlaySlowForward or canPlayFastForward). The rate is 1.0 for normal playback, 0 for stopped, and it can be set to other values, but it will probably round to the nearest discrete value it is capable of, such as 2:1, 3:2, or 5:4 for faster speeds, and 1:2, 2:3, or 4:5 for slower speeds.
With the older MPMoviePlayerController, and its similar currentPlaybackRate property, I found that it would take any setting and report it back, but would still round it to one of the discrete values above. For example, set it to 1.05 and you would get normal speed (1:1), even though currentPlaybackRate would say 1.05 if you read it. Set it to 1.2 and it would play at 1.25x (5:4). And it was limited to 2:1 (double speed), beyond which it would hang or jump.
For some reason, the iOS API Reference doesn't mention these discrete speeds; they were found by experimentation. They make some sense: since the hardware displays video frames at a fixed rate (e.g. 30 or 60 frames per second), some multiples are easier than others. Half speed can be achieved by showing each frame twice, and double speed by dropping every other frame. Dropping 1 out of every 3 frames gives you 150% (3:2) speed. But 105% is harder, requiring 1 out of every 21 frames to be dropped. Especially if this is done in hardware, you can see why they might have limited it to certain multiples.
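
A minimal sketch, assuming an AVPlayer and a UISlider configured for the question's 50%-150% range; whether intermediate rates actually take effect depends on the item's capabilities, as described above:

import AVFoundation
import UIKit

final class PlaybackSpeedController: NSObject {
    let player: AVPlayer

    init(player: AVPlayer) {
        self.player = player
        super.init()
    }

    // Wire a UISlider (minimumValue 0.5, maximumValue 1.5) to this action.
    @objc func sliderChanged(_ slider: UISlider) {
        guard let item = player.currentItem else { return }
        let rate = slider.value
        if rate == 1.0
            || (rate < 1.0 && item.canPlaySlowForward)
            || (rate > 1.0 && item.canPlayFastForward) {
            player.rate = rate   // may be rounded to a supported ratio in practice
        }
    }
}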

iOS - how can I programmatically calculate the recording time limit for audio/video given a known file size limit

I have tried to Google a lot, but it seems like no one has done this before on iOS.
My issue: my server only allows the client to upload video/audio/image files of limited size (e.g. 30 MB for video, 1 MB for audio). With that limit, I want to figure out how much time the user is allowed to record audio/video. The calculation must account for different devices; for example, the iPad 3 has a better camera than the iPad 2, so we will have less time to record video.
I am wondering if we can programmatically calculate the time limit based on the known file size limit.
Thanks,
Luan.
When working with large amounts of data such as video and audio, compression should play a part in your calculation.
Compression results can vary greatly depending on what you are recording, and as a result it would be unrealistic to try to forecast an exact maximum duration.
I can think of two options:
Predetermine very restrictive recording times per device (I believe it is possible in iOS to tell an iPad 3 from an iPad 2).
Figure out a way to re-encode a smaller part of the video until it is within limits.
Best of luck!
Cantgetright has described perfectly why this is hard.
What you really care about is the camera's resolution in megapixels (definition), the worst-case storage size of one second of video, and how many free megabytes are left on the phone.
If you know most of these elements, time can be the constraint by which you determine the last one.
Always overestimate size to guarantee it'll work no matter what. People don't know how big 5 seconds of video is on their iDevices anyway, so you can be stingy with the allotted time.
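
A rough sketch of the arithmetic in Swift; the bitrate figure below is a made-up per-device placeholder, not a measured value:

// Convert a byte limit and an (overestimated) encoded bitrate into a
// maximum recording duration, erring on the safe side as suggested above.
func maxRecordingSeconds(limitBytes: Int, estimatedBitsPerSecond: Int) -> Double {
    return Double(limitBytes * 8) / Double(estimatedBitsPerSecond)
}

// Example: a 30 MB video limit with an assumed ~8 Mbit/s camera bitrate
// allows roughly 30 seconds of recording.
let seconds = maxRecordingSeconds(limitBytes: 30_000_000,
                                  estimatedBitsPerSecond: 8_000_000)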

Flash Player magical frame rate

A long time ago (5+ years) I read an article about optimal frame rates for the Flash Player. Through some calculations, the article reasoned that 31 frames per second was the optimal fps to run your movies at. That seemed logical to me at the time, and I have been using 31 fps ever since.
However, I have forgotten the reasoning from that article, and I was wondering whether 31 fps is still considered a good or optimal fps for your SWFs.
What fps do you prefer for your SWFs, and why?
The reason for the 31 fps was that in the days of Flash 5/6 there was an issue with the Mac version of the Flash Player where it would plateau at certain frame rates. That is, if you ran at 12-17 fps, it would rarely get past 12. However, if you set the fps to 18, it would stick to 18 just fine.
The "sweet spot" plateau was at 31 fps because it offered the smoothest animation (assuming you weren't doing frame-by-frame animation, in which case 31 was just too work-intensive) while not being nearly as CPU-intensive as the next plateau, which I believe was 61 fps.
Even though those days are behind us, it is still important to strike a balance between smooth animation and CPU use. Make sure you set some time aside at the beginning of your project (particularly if it will have any hand-done tweening!) to figure out where the sweet spot is for your goals.
I'm no Flash expert, but this sounded interesting enough to at least do some Googling. This forum thread implies that the "industry standard" of 31 fps comes from a Flash 5 bug. Since Flash 5 was a while ago, people seem to agree that you're more free to pick a frame rate these days; not everything has to be made at 31 fps.
Also, don't forget that you can set the frame rate dynamically at runtime via the Stage.frameRate property. Some people have implemented reduced frame rates when the app is not in focus to save on CPU use, or increased the rate before doing more intensive data processing.
Usually 12-16 fps for animation, and 25-30 fps for code-driven stuff.
Also, take a look at this class: http://www.gskinner.com/blog/archives/2009/05/idle_cpu_usage.html
It lets you take advantage of high frame rates without the consequence of high background CPU usage. Plus, it is easily adaptable for non-AIR projects (just comment out anything that gives you a compiler error).
31-33 fps was the magic number for AS2.
You can smoothly run at around 50-60 fps with AS3, and you'll notice a huge improvement.