Recording Video at 25 Frames per second in iOS - ios

I'm building a custom camera to film at Full HD or plain HD quality. The issue is that I set the camera to 25 frames per second with the following code:
- (void)setFrameRate:(AVCaptureDevice *)camera {
    NSError *error;
    if (![camera lockForConfiguration:&error]) {
        NSLog(@"Could not lock device %@ for configuration: %@", camera, error);
        return;
    }
    AVCaptureDeviceFormat *format = camera.activeFormat;
    double epsilon = 0.00000001;
    int desiredFrameRate = 25;
    for (AVFrameRateRange *range in format.videoSupportedFrameRateRanges) {
        if (range.minFrameRate <= (desiredFrameRate + epsilon) &&
            range.maxFrameRate >= (desiredFrameRate - epsilon)) {
            [camera setActiveVideoMaxFrameDuration:CMTimeMake(10, desiredFrameRate * 10)];
            [camera setActiveVideoMinFrameDuration:CMTimeMake(10, desiredFrameRate * 10)];
            break;
        }
    }
    [camera unlockForConfiguration];
}
It changes the video fps, but not to exactly the 25 frames per second I set in the method. It fluctuates between 23.93 and 25.50 frames per second.
Does anyone know why?

After several attempts and some debugging I found out that the frame rate not being exactly 25 fps has to do with the recording method, not with the device setup.
I was using an AVAssetWriter object to record the video, as shown in the example at the following link (https://reformatcode.com/code/ios/ios-8-film-from-both-back-and-front-camera).
But there was no way to get exactly 25 fps with it.
Changing the recording object to AVCaptureMovieFileOutput made setup and recording much easier from there, and the result is much more precise: between 25 and 25.01 fps.
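For reference, a minimal sketch of recording with AVCaptureMovieFileOutput once the frame rate has been set on the device (the session variable and the delegate are assumptions, not from the original post):
// `session` is an already-configured AVCaptureSession with a video (and audio) input,
// and setFrameRate: above has already been called on the capture device.
AVCaptureMovieFileOutput *movieOutput = [[AVCaptureMovieFileOutput alloc] init];
if ([session canAddOutput:movieOutput]) {
    [session addOutput:movieOutput];
}
[session startRunning];

// Start recording to a temporary file; `self` must conform to
// AVCaptureFileOutputRecordingDelegate.
NSURL *outputURL = [NSURL fileURLWithPath:
    [NSTemporaryDirectory() stringByAppendingPathComponent:@"capture.mov"]];
[movieOutput startRecordingToOutputFileURL:outputURL recordingDelegate:self];

// Later, when done:
// [movieOutput stopRecording];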

Related

Why can't I get the exposure duration to change automatically when exposureMode is set to AVCaptureExposureModeContinuousAutoExposure?

I'm developing a photo capture app and want the exposure duration to adjust automatically when the light is low. I looked at the AVCaptureExposureMode API, which says that when exposureMode is set to AVCaptureExposureModeAutoExposure or AVCaptureExposureModeContinuousAutoExposure the device automatically adjusts the exposure levels. But when I move my iPad (iPad Pro 10.5-inch, iOS 12.1.1) from light to dark, only the ISO changes, not the exposure duration.
Sorry for my English... : (
Here is what I tried:
set sessionPreset to AVCaptureSessionPresetPhoto
invoke setExposurePointOfInterest before setExposureMode
my iPad does not support lowLightBoost
set device.activeMaxExposureDuration to device.activeFormat.maxExposureDuration
NSError *error = nil;
if (![device lockForConfiguration:&error]) {
    if (error) {
        // handle the configuration lock error
    }
    return;
}
if ([device isExposureModeSupported:AVCaptureExposureModeContinuousAutoExposure]) {
    [device setExposureMode:AVCaptureExposureModeContinuousAutoExposure];
}
[device unlockForConfiguration];
I expect that when I move the iPad from light to dark, the exposure duration and ISO will both adjust automatically.
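A small debugging sketch (not part of the original question, and the `self.device` property name is an assumption): both exposureDuration and ISO on AVCaptureDevice are key-value observable, so you can log what continuous auto-exposure actually adjusts as the scene gets darker.
// Register once, e.g. after configuring the session.
// `self.device` is assumed to hold the AVCaptureDevice configured above.
[self.device addObserver:self forKeyPath:@"exposureDuration"
                 options:NSKeyValueObservingOptionNew context:nil];
[self.device addObserver:self forKeyPath:@"ISO"
                 options:NSKeyValueObservingOptionNew context:nil];

- (void)observeValueForKeyPath:(NSString *)keyPath ofObject:(id)object
                        change:(NSDictionary *)change context:(void *)context {
    if ([keyPath isEqualToString:@"exposureDuration"]) {
        NSLog(@"exposureDuration: %f s", CMTimeGetSeconds(self.device.exposureDuration));
    } else if ([keyPath isEqualToString:@"ISO"]) {
        NSLog(@"ISO: %f", self.device.ISO);
    }
}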

How to get the available video dimensions/quality from a video URL in iOS?

I am creating a custom video player using AVPlayer in iOS (Objective-C). I have a settings button which, when clicked, will display the available video dimensions and audio formats.
So, I want to know:
1) How do I get the available dimensions from a video URL (not a local video)?
2) Even if I am able to get the dimensions, can I switch between the available dimensions while playing in AVPlayer?
Can anyone give me a hint?
If it is not an HLS (streaming) video, you can get the resolution information with the following code.
Sample code:
// player is playing
if (_player.rate != 0 && _player.error == nil)
{
    AVAssetTrack *track = [[_player.currentItem.asset tracksWithMediaType:AVMediaTypeVideo] firstObject];
    if (track != nil)
    {
        CGSize naturalSize = [track naturalSize];
        naturalSize = CGSizeApplyAffineTransform(naturalSize, track.preferredTransform);
        // The preferred transform can produce negative dimensions for rotated
        // tracks, so take the absolute values.
        NSInteger width = (NSInteger)fabs(naturalSize.width);
        NSInteger height = (NSInteger)fabs(naturalSize.height);
        NSLog(@"Resolution : %ld x %ld", (long)width, (long)height);
    }
}
However, for HLS video, the code above does not work.
I have solved this in a different way.
While the video is playing, I grab a frame from the video output and calculate the resolution from that.
Here is the sample code:
// player is playing; _videoOutput is an AVPlayerItemVideoOutput attached to the current item
if (_player.rate != 0 && _player.error == nil)
{
    CMTime currentTime = _player.currentItem.currentTime;
    CVPixelBufferRef buffer = [_videoOutput copyPixelBufferForItemTime:currentTime itemTimeForDisplay:nil];
    if (buffer != NULL)
    {
        NSInteger width = CVPixelBufferGetWidth(buffer);
        NSInteger height = CVPixelBufferGetHeight(buffer);
        NSLog(@"Resolution : %ld x %ld", (long)width, (long)height);
        // copyPixelBufferForItemTime: returns a retained buffer, so release it.
        CVPixelBufferRelease(buffer);
    }
}
As you have mentioned that it is not a local video, you can call some web service to return the available video dimensions for that particular video. After that, switch to the URL of the other available rendition and seek to the current position.
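A rough sketch of that switch (not from the original answer; `newURL` and the player variable are placeholders): remember the current time, swap in a new AVPlayerItem for the other rendition's URL, then seek back.
// `_player` is the AVPlayer; `newURL` points to the alternative rendition
// returned by your web service (hypothetical).
CMTime resumeTime = _player.currentItem.currentTime;
AVPlayerItem *newItem = [AVPlayerItem playerItemWithURL:newURL];
[_player replaceCurrentItemWithPlayerItem:newItem];
[_player seekToTime:resumeTime
    toleranceBefore:kCMTimeZero
     toleranceAfter:kCMTimeZero
  completionHandler:^(BOOL finished) {
      if (finished) {
          [_player play];
      }
  }];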

AVCaptureSessionPresetLow on iPhone 6

I'm a n00b to AVCaptureSession. I'm using OpenTok to implement video chat. I want to preserve bandwidth and the UI is designed so the video views are only 100 x 100 presently.
This is part of the code from an OpenTok example where it sets the preset:
- (void)setCaptureSessionPreset:(NSString *)preset {
    AVCaptureSession *session = [self captureSession];
    if ([session canSetSessionPreset:preset] &&
        ![preset isEqualToString:session.sessionPreset]) {
        [_captureSession beginConfiguration];
        _captureSession.sessionPreset = preset;
        _capturePreset = preset;
        [_videoOutput setVideoSettings:
            [NSDictionary dictionaryWithObjectsAndKeys:
                [NSNumber numberWithInt:kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange],
                kCVPixelBufferPixelFormatTypeKey,
                nil]];
        [_captureSession commitConfiguration];
    }
}
When I pass in AVCaptureSessionPresetLow (on an iPhone 6) I get NO. Is there any way I can set AVCaptureSession so I can only capture video with a frame as close to 100 x 100 as possible?
Also, is this the correct strategy for trying to save bandwidth?
You cannot force the camera to a resolution it does not support.
A lower resolution frame size will lead to lower network traffic.
Lowering the FPS is another way.
A view size does not have to map to a resolution. You can always fit a frame in any size view.
Look at the Let-Build-OTPublisher sample app in the OpenTok SDK, and more specifically at the TBExampleVideoCapture.m file, to see how resolution and FPS are handled.
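To illustrate both suggestions, a sketch only (the `session` and `device` variables are assumptions, and AVCaptureSessionPreset352x288 is simply the smallest standard preset, not an exact 100 x 100 match):
// Pick a small preset if the session supports it.
if ([session canSetSessionPreset:AVCaptureSessionPreset352x288]) {
    session.sessionPreset = AVCaptureSessionPreset352x288;
}

// Cap the frame rate on the capture device, e.g. at 15 fps.
NSError *error = nil;
if ([device lockForConfiguration:&error]) {
    device.activeVideoMinFrameDuration = CMTimeMake(1, 15);
    device.activeVideoMaxFrameDuration = CMTimeMake(1, 15);
    [device unlockForConfiguration];
}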

What is the default buffer size of AVPlayer?

What is the default maximum and minimum buffer size of the iOS AVPlayer? How can I increase it for MP3 streaming? Thank you.
If your buffer size means playback buffer, then I believe this size is undocumented.
This behaviour is handled internally by AVFoundation; quoting from the Advances in AVFoundation Playback session:
And we see that playback starts but then we quickly run into a stall because we didn't have enough buffer to play to the end.
In that case, we'll go into the waiting state and re-buffer until we have enough to play through.
They only say enough, not an exact time or size; it makes sense that this would be a dynamic amount depending on the current network conditions, but who knows.
Back to the topic: if you would like to take some control over buffering, you can observe AVPlayerItem's loadedTimeRanges property.
The code snippet below waits for 5 seconds of buffered time before resuming playback:
if ([keyPath isEqualToString:@"loadedTimeRanges"]) {
    NSArray *timeRanges = (NSArray *)[change objectForKey:NSKeyValueChangeNewKey];
    if (timeRanges && [timeRanges count]) {
        CMTimeRange timerange = [[timeRanges objectAtIndex:0] CMTimeRangeValue];
        if (self.audioPlayer.rate == 0 && !pauseReasonForced) {
            CMTime bufferedTime = CMTimeAdd(timerange.start, timerange.duration);
            CMTime milestone = CMTimeAdd(self.audioPlayer.currentTime, CMTimeMakeWithSeconds(5.0f, timerange.duration.timescale));
            if (CMTIME_COMPARE_INLINE(bufferedTime, >, milestone) &&
                self.audioPlayer.currentItem.status == AVPlayerItemStatusReadyToPlay &&
                !interruptedWhilePlaying && !routeChangedWhilePlaying) {
                if (![self isPlaying]) {
                    [self.audioPlayer play];
                }
            }
        }
    }
}
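For completeness, the handler above assumes a KVO registration along these lines (a sketch; `self.audioPlayer` comes from the snippet, the rest is an assumption):
// Register for buffering updates once the player item exists.
[self.audioPlayer.currentItem addObserver:self
                               forKeyPath:@"loadedTimeRanges"
                                  options:NSKeyValueObservingOptionNew
                                  context:nil];

// Remember to remove the observer before the item goes away:
// [self.audioPlayer.currentItem removeObserver:self forKeyPath:@"loadedTimeRanges"];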

iOS video recording using UIImages arriving at random times

I'm developing an iOS app that gets UIImages at random times over an internet connection and progressively constructs a video file from them as the images come in. I got it working a little, but the fact that the images don't arrive at a constant rate is messing up the video.
How do I re-calculate CMTime when each new UIImage arrives so that it adjusts for the varying frame rate of the arriving UIImages, which can arrive anywhere from milliseconds to seconds apart?
Here is what I'm doing so far; some code is not shown, but this is the basic idea:
.
.
adaptor = [AVAssetWriterInputPixelBufferAdaptor
    assetWriterInputPixelBufferAdaptorWithAssetWriterInput:videoStream
                               sourcePixelBufferAttributes:attributes];
CMTime frameTime = CMTimeMake(1, 10); // assumed initial frame rate
.
.
- (void)addImageToMovie:(UIImage *)img {
    append_ok = FALSE;
    buffer = [self pixelBufferFromCGImage:[img CGImage] andSize:img.size];
    while (!append_ok) {
        if (adaptor.assetWriterInput.readyForMoreMediaData) {
            frameTime.value += 1;
            append_ok = [adaptor appendPixelBuffer:buffer withPresentationTime:frameTime];
            [NSThread sleepForTimeInterval:0.01];
        } else {
            [NSThread sleepForTimeInterval:0.01];
        }
    }
    if (buffer) {
        CVBufferRelease(buffer);
    }
}
It depends on the number of frames. Instead of adding 1, add 10 to frameTime.value.
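An alternative sketch of the time-stamping itself (not from the original answer): because the images arrive at irregular intervals, the presentation time can be derived from each image's wall-clock arrival time instead of a fixed increment. The `videoWriter` and `startWallTime` names are hypothetical additions to the poster's code.
#import <QuartzCore/QuartzCore.h> // for CACurrentMediaTime()

// When the writer session starts, remember the wall-clock start time (hypothetical ivar).
startWallTime = CACurrentMediaTime();
[videoWriter startSessionAtSourceTime:kCMTimeZero];

// When each UIImage arrives, stamp it with its elapsed arrival time.
- (void)addImageToMovie:(UIImage *)img {
    CFTimeInterval elapsed = CACurrentMediaTime() - startWallTime;
    CMTime presentationTime = CMTimeMakeWithSeconds(elapsed, 600); // 600 is a common video timescale
    CVPixelBufferRef pixelBuffer = [self pixelBufferFromCGImage:[img CGImage] andSize:img.size];
    if (adaptor.assetWriterInput.readyForMoreMediaData) {
        [adaptor appendPixelBuffer:pixelBuffer withPresentationTime:presentationTime];
    }
    CVPixelBufferRelease(pixelBuffer);
}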
