Is it possible to change the rate at which the didOutputSampleBuffer delegate is called? - ios

I am trying to process the camera frames returned by the didOutputSampleBuffer callback, and I want a high frame rate to capture sudden changes in the image (like a flash going off). The rate at which the callback is called seems to be independent of the frame rate set for the connection. Even if I set the frame rate to, say, 60 via videoOut.minFrameDuration = CMTimeMake(1, 60);, the interval between two consecutive didOutputSampleBuffer calls is around 60-80 ms (which is a frame rate of around 12-17 fps). Why is that so? Is it possible to increase it?

Looks like I have found the answer to the question. Basically, the frame rate is affected by the preset you use to select the resolution of the image. At a 1 MP resolution you can get a faster frame rate than at 8 MP, and so on. So even if you try to set a minFrameDuration of 1/60 s, the hardware gives you what it can based on your resolution setting, which may be 15 fps or less.
So to increase the FPS, decrease the resolution and lower the minFrameDuration accordingly, as sketched below.
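For reference, a minimal Swift sketch of that idea on the modern AVFoundation API (assuming a default back camera; activeVideoMinFrameDuration is the current equivalent of the per-connection minFrameDuration):

import AVFoundation

// Sketch: pick a lower-resolution preset, then request a higher frame
// rate only if the active format actually supports it.
let session = AVCaptureSession()
session.sessionPreset = .vga640x480 // lower resolution leaves headroom for higher fps

if let device = AVCaptureDevice.default(for: .video) {
    do {
        try device.lockForConfiguration()
        let supports60fps = device.activeFormat.videoSupportedFrameRateRanges
            .contains { $0.maxFrameRate >= 60 }
        if supports60fps {
            device.activeVideoMinFrameDuration = CMTimeMake(value: 1, timescale: 60)
            device.activeVideoMaxFrameDuration = CMTimeMake(value: 1, timescale: 60)
        }
        device.unlockForConfiguration()
    } catch {
        print("Could not lock the device for configuration: \(error)")
    }
}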

Related

How to dynamically change the playback rate of video in iOS?

The perfect example of what I am trying to achieve can be seen in the Flow ● Slow and Fast Motion app.
One can change the playback rate of the video by dragging points on the curve up or down. The video can also be saved in this state.
I am looking for a way to dynamically speed a video up or down, so that the playback rate can be changed while the video is being played.
Video explanation
WHAT I'VE TRIED
The playback rate property of AVPlayer. But it only works with a few playback-rate values (0.50, 0.67, 0.80, 1.0, 1.25, 1.50, and 2.0), and one cannot save the video.
The scaleTimeRange(..) method of AVMutableComposition. But it doesn't work when you want to ramp the video for gradually changing slow/fast motion.
Displaying video frames on screen using CAEAGLLayer and CADisplayLink. But my many attempts at achieving slow/fast motion this way have been unsuccessful.
All this has taken me months and I'm starting to doubt whether I'll be able to accomplish this at all.
Thus any suggestion would be immensely valuable.
In iOS, the MPNowPlayingInfoCenter object contains a nowPlayingInfo dictionary whose contents describe the item being played. It is advised that you start playback at the currentPlaybackRate and then set the speed. See this thread on the developer forum.
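As a rough Swift sketch of publishing that information (the URL is a placeholder; substitute your own player item):

import AVFoundation
import MediaPlayer

// Sketch: publish the playback rate and elapsed time to the system
// now-playing info so rate changes are reflected consistently.
let player = AVPlayer(url: URL(string: "https://example.com/video.mp4")!)
player.play()

var info = MPNowPlayingInfoCenter.default().nowPlayingInfo ?? [:]
info[MPNowPlayingInfoPropertyPlaybackRate] = player.rate
info[MPNowPlayingInfoPropertyElapsedPlaybackTime] = player.currentTime().seconds
MPNowPlayingInfoCenter.default().nowPlayingInfo = info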
You might possibly end up with something like this (but this is JavaScript), where the playback rate of the video has been sped up by a factor of 4:
document.querySelector('video').playbackRate = 4.0;
document.querySelector('video').play();
video {
  width: 400px;
  height: auto;
}
<video controls preload="auto" autoplay>
  <source src="http://www.rachelgallen.com/nature.mp4" type="video/mp4">
</video>
So I'm not sure I fully understand the use case you're going for, but I think
func setRate(_ rate: Float,
             time itemTime: CMTime,
             atHostTime hostClockTime: CMTime)
[Apple Documentation Source]
is something like what you're looking for. While this may not be exactly what you need, and I'm not sure where in the docs there is exactly what you're looking for, with the above method alone you could do the following to save videos at a variable rate:
Use the above method to play the video throughout (assuming it's not too long, otherwise this will be computationally infeasible or timeout-prone on some devices) at the desired rate for each second. Design the UI to adjust this per-second rate.
Under the hood you can actually play the video at that speed "frame by frame" and capture the frames you want (in the right number, which gives you the rate you desire). Saving the right number of frames together (skipping or duplicating as needed to raise or lower the desired rate based on the "picker" UI), you've now accomplished what you desire.
To be clear, what I'm talking about here is that a video output at 60 FPS has 60 frames per second. You would literally "cut and paste" frames from the source video into the destination video based on whatever stepper values you receive from your user (using the screenshotted example in the question as my basis), and pick up that many frames. That is, if the user says seconds 2-10 of their 20-second video should be at 2x, only put in 30 frames for each of those seconds (if filmed at 60 FPS), taking alternating frames. The output will, at 60 FPS, look like 2x speed (since there are now 30 frames per 1 second of original video, which is 0.5 seconds at 60 FPS). Similarly, any value can be factored in appropriately:
(desired consistent FPS) = (source video FPS) = (destination video FPS) (e.g. 60 or 90)
(rate) = (rate picked for each time interval from the steppers/graph UI) (e.g. 1x, 2x, 0.25x)
(frames kept per second of source video) = (desired consistent FPS) / (rate)
(destination video frames) = (source video frames), resampled interval by interval at that interval's rate
The exact mechanisms for the above might actually be built into AVPlayer and I didn't find the details, but this alone should be a good start to get you going in that direction.
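As a rough illustration of that resampling arithmetic (not this answer's actual implementation; RateInterval and the sample rate curve are made up for the sketch):

import Foundation

// Sketch: map destination frames back to source frame indices for a
// piecewise-constant rate curve. RateInterval is a hypothetical model.
struct RateInterval {
    let startSecond: Double // where this interval begins in the source
    let endSecond: Double   // where it ends
    let rate: Double        // 2.0 = twice as fast, 0.5 = half speed
}

let fps = 60.0
let intervals = [
    RateInterval(startSecond: 0, endSecond: 2, rate: 1.0),
    RateInterval(startSecond: 2, endSecond: 10, rate: 2.0), // seconds 2-10 at 2x
    RateInterval(startSecond: 10, endSecond: 20, rate: 1.0),
]

// Each interval keeps (fps / rate) frames per source second: at 2x we keep
// every other frame; at 0.5x each frame index would appear twice.
var sourceFrameIndices: [Int] = []
for interval in intervals {
    let sourceFrames = (interval.endSecond - interval.startSecond) * fps
    let keptFrames = Int((sourceFrames / interval.rate).rounded())
    for i in 0..<keptFrames {
        sourceFrameIndices.append(Int(interval.startSecond * fps + Double(i) * interval.rate))
    }
}
print("Destination has \(sourceFrameIndices.count) frames") // 960 here: 16 s at 60 FPS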

iOS dynamically slow down the playback of a video, with continuous value

I have a problem with the iOS SDK. I can't find the API to slow down a video with continuous values.
I have made an app with a slider and an AVPlayer, and I would like to change the speed of the video, from 50% to 150%, according to the slider value.
As for now, I have only succeeded in changing the speed with discrete values, and by recompiling the video. (In order to do that, I used the AVMutableComposition APIs.)
Do you know if it is possible to change the speed continuously, and without recompiling?
Thank you very much!
Jery
The AVPlayer's rate property allows playback speed changes if the associated AVPlayerItem is capable of it (responds YES to canPlaySlowForward or canPlayFastForward). The rate is 1.0 for normal playback and 0 for stopped; it can be set to other values, but will probably be rounded to the nearest discrete value it is capable of, such as 2:1, 3:2, 5:4 for faster speeds, and 1:2, 2:3 and 4:5 for slower speeds.
With the older MPMoviePlayerController, and its similar currentPlaybackRate property, I found that it would take any setting and report it back, but would still round it to one of the discrete values above. For example, set it to 1.05 and you would get normal speed (1:1) even though currentPlaybackRate would say 1.05 if you read it. Set it to 1.2 and it would play at 1.25X (5:4). And it was limited to 2:1 (double speed), beyond which it would hang or jump.
For some reason, the iOS API Reference doesn't mention these discrete speeds; they were found by experimentation. They make some sense. Since the hardware displays video frames at a fixed rate (e.g. 30 or 60 frames per second), some multiples are easier than others. Half speed can be achieved by showing each frame twice, and double speed by dropping every other frame. Dropping 1 out of every 3 frames gives you 150% (3:2) speed. But 105% is harder, dropping 1 out of every 21 frames. Especially if this is done in hardware, you can see why they might have limited it to only certain multiples.
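A minimal Swift sketch of that capability check before setting a rate (the URL is a placeholder):

import AVFoundation

// Sketch: only request a non-standard rate when the item reports support.
// Note: these flags are reliable once the item reaches .readyToPlay.
let item = AVPlayerItem(url: URL(string: "https://example.com/video.mp4")!)
let player = AVPlayer(playerItem: item)

if item.canPlayFastForward {
    player.rate = 1.5 // may still be rounded to a supported ratio such as 3:2
} else {
    player.rate = 1.0
}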

OpenCV - camera resolution

I am working on a face-tracking system and have a problem with OpenCV.
If I instantiate the Capture and call QueryFrame(), I get a 640x480 image resolution. Everything is silky smooth.
_grabber = new Capture();
_grabber.QueryFrame();
However, if I try to increase the resolution, let's say to 800x600, the fps decreases drastically.
_grabber = new Capture();
_grabber.SetCaptureProperty(CAP_PROP.CV_CAP_PROP_FRAME_HEIGHT, LiveFeedSize.Height);
_grabber.SetCaptureProperty(CAP_PROP.CV_CAP_PROP_FRAME_WIDTH, LiveFeedSize.Width);
_grabber.QueryFrame();
I am setting the width/height before every QueryFrame().
Does anyone have ideas on how to increase the frames per second? Thanks
So after a long night I found the solution to the problem. The camera itself is able to work at full resolution; however, the problem was in this line of code:
MCvAvgComp[][] facesDetected = _gray.DetectHaarCascade(_face, 1.2, 10, HAAR_DETECTION_TYPE.DO_CANNY_PRUNING, new Size(150, 150));
where the last Size parameter, the minimum detected face size, was originally small (27x27), so the detection takes a significant amount of time. The solution was to increase the value to 150, as shown above, so that 150x150 is the minimum face size detected.

Corona Sdk frame number per second

I read that the frame rate is 30 or 60 in the Corona SDK. However, in my piece of code the numbers are printed 33 times by the enterFrame listener. Can you explain the reason to me, please?
local start = os.time()
local function countDown(event)
    if (os.time() - start) == 3 then
        Runtime:removeEventListener("enterFrame", countDown)
    end
    print(os.time() - start)
end
Runtime:addEventListener("enterFrame", countDown)
These two values, 30 and 60, are the maximum frame rates you can request in your app, but whether that limit is reached depends on the device hardware.
From Corona site:
Frame rate control
By default, the frame rate is 30 fps. We now allow you to optionally set the frame rate to 60 fps by adding the fps key to config.lua.
If you want a higher fps rate, you should optimize your code. Maybe you are drawing too many images.
Check this post:
http://forums.coronalabs.com/topic/32962-low-fps-on-android-hd-devices/
and this blog post (8. Conserve Texture Memory):
http://www.coronalabs.com/blog/2013/03/12/performance-optimizations/
For more info check this:
http://developer.coronalabs.com/content/configuring-projects
Here is a library to show on screen FPS and texture memory:
http://developer.coronalabs.com/code/output-fps-and-texture-memory-usage-your-app

In image processing, what is real time?

In image processing applications, what is considered real time? Is 33 fps real time? Is 20 fps real time? If 33 and 20 fps are considered real time, then are 1 or 2 fps also real time?
Can anyone shed some light on this?
In my experience, it's a pretty vague term. Often, what is meant is that the algorithm will run at the rate of the source (e.g. a camera) supplying the images; however, I would prefer to state this explicitly ("the algorithm can process images at the frame rate of the camera").
Real-time image processing = producing output simultaneously with the input.
The input may be 25 fps, but you may choose to process 1 of every 5 frames (which makes 5 fps processing) and your application is still real time.
TV streaming software: all the frames are processed.
Security application and the input is CCTV security cams: you may choose to skip some frames to fit the performance.
3d game or simulation: fps changes depending on the current scene.
And they are all real time.
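As a concrete sketch of that skip-frames approach, back in the capture context of the first question (the stride of 5 and processFrame are made up for illustration):

import AVFoundation

// Sketch: stay real time by processing only every Nth frame.
final class FrameSkipper: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    private var frameIndex = 0
    private let everyNth = 5 // 25 fps in -> 5 fps processed

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        frameIndex += 1
        guard frameIndex % everyNth == 0 else { return } // drop 4 of every 5 frames
        processFrame(sampleBuffer) // hypothetical per-frame work
    }

    private func processFrame(_ buffer: CMSampleBuffer) { /* ... */ }
}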
Strictly speaking, I would say real-time means that the application is generating images based on user input as it occurs, e.g. a mouse movement which changes the facing of an avatar.
How successful it is at this task - 1 fps, 10 fps, 100 fps, etc - is actually another question.
Real-time describes an approach, not a performance metric.
If, however, you ask what is the slowest fps that passes as usable by a human, the answer is about 15, I think.
I think it depends on what the real-time application is. If the app is showing a slideshow with 1 picture every 3 seconds, and the app can process each picture within those 3 seconds and show it, then it is real-time processing.
If the movie is 29.97 frames per second, and the app can process all 29.97 frames within the second, then it is also real time.
An example: if an app can take the movie from a VCR or a cable box's analog output, compress it into 29.97-frames-per-second video, and send all that info to a remote location for another person to watch, then it is real-time processing.
(Hard) Real time is when an outcome has no value when delivered too early or too late.
Any FPS is real time provided that displayed frames represent what should be displayed at the very instant they are displayed.
The notion of real-time display is not really tied to a specific frame rate - it could be defined as the minimum frame rate at which movement is perceived as being continuous. So for slow moving objects in a visual frame (e.g. ships in a harbour, or stars in the night sky) a relatively slow frame rate might suffice, whereas for rapid movement (e.g. a racing car simulator) a much higher frame rate would be needed.
There is also a secondary consideration of latency. A real-time display must have sufficiently low latency in relation to other events (e.g. behaviour of a real-time simulation) that there is no perceptible lag in display updates.
That's not actually an easy question (even without taking into account differences between individuals).
Wikipedia has a good article explaining why. For what it's worth, I think cinema films run at 24 fps, so if you're happy with that, that's what I'd consider real time.
It depends on what exactly you are trying to do. For some purposes 1 fps or even 2 spf (seconds per frame) could be considered real time. For others that's way too slow...
That said, real-time means that it takes as long (or less) to process x frames as it would take to just present those x frames.
It depends.
automatic aircraft cannon - 1000 fps
monitoring - 10 - 15 fps
authentication - 1 fps
medical devices - 1 fph (one frame per hour)
I guess the term is used with different meanings in different contexts. In industrial image processing, real time processing is usually the opposite of offline processing. In offline processing applications, you record images (many of them) and process them at a later time. In real time processing, the system that acquires the images also processes them, at the same time, so the processing frame rate must not be higher than the acquisition frame rate.
Real-time means your implementation is fast enough to meet some deadline. The deadline is part of your system's specification. If it's an interactive UI and the users are not too picky, a 15 Hz update can be OK, although it can feel laggy. If you're using it to drive a car along the motorway, 30 Hz is about right. If it's a missile, well, maybe 100 Hz?
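In that spirit, a tiny Swift sketch of a deadline check (the 30 Hz budget and the processFrame stub are illustrative):

import Foundation

// Sketch: verify that per-frame work fits inside a chosen real-time budget.
func processFrame() {
    // stand-in for the actual per-frame computation
}

let deadline: TimeInterval = 1.0 / 30.0 // a 30 Hz requirement
let start = Date()
processFrame()
let elapsed = Date().timeIntervalSince(start)
if elapsed > deadline {
    print(String(format: "Missed the 30 Hz deadline by %.4f s", elapsed - deadline))
}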
