How to create several mp4 files with AVAssetWriter at the same time - iOS

I am trying to save four video streams as .mp4 files with AVAssetWriter on the iPhone. With three streams everything works fine, but the fourth .mp4 file is always empty.
Here is a piece of my code:
-(void)writeImagesToMovie:(CVPixelBufferRef)buffer :(int)cameraID
{
    AVAssetWriterInput *writerInput;
    AVAssetWriterInputPixelBufferAdaptor *adaptor;
    int *frameNumber;

    switch (cameraID) {
        case 1:
            writerInput = writerInput1;
            adaptor = adaptor1;
            frameNumber = &frameNumber1;
            break;
        case 2:
            writerInput = writerInput2;
            adaptor = adaptor2;
            frameNumber = &frameNumber2;
            break;
        case 3:
            writerInput = writerInput3;
            adaptor = adaptor3;
            frameNumber = &frameNumber3;
            break;
        default:
            writerInput = writerInput4;
            adaptor = adaptor4;
            frameNumber = &frameNumber4;
            break;
    }

    if (writerInput.readyForMoreMediaData) {
        CMTime frameTime = CMTimeMake(1, 30); //150, 600
        // CMTime = value and timescale.
        // Timescale = the number of ticks per second you want.
        // Value is the number of ticks.
        // Here each frame we add advances the clock by 1/30th of a second.
        // Apple recommends 600 ticks per second for video because it is a
        // multiple of the standard video rates 24, 30, 60 fps etc.
        CMTime lastTime = CMTimeMake((int64_t)*frameNumber, 30);
        CMTime presentTime = CMTimeAdd(lastTime, frameTime);
        if (*frameNumber == 0) { presentTime = CMTimeMake(0, 30); } //600
        // This ensures the first frame starts at 0.
        // Give the image to the adaptor to append to the video.
        [adaptor appendPixelBuffer:buffer withPresentationTime:presentTime];
        *frameNumber = *frameNumber + 1;
    }
}
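For context, each writer/input/adaptor set is assumed to be created along these lines before any frames are appended (a minimal sketch; the output settings, file URLs and variable names are assumptions based on the snippet above):

NSError *error = nil;
AVAssetWriter *writer1 = [AVAssetWriter assetWriterWithURL:outputURL1
                                                  fileType:AVFileTypeMPEG4
                                                     error:&error];

NSDictionary *videoSettings = @{ AVVideoCodecKey  : AVVideoCodecH264,
                                 AVVideoWidthKey  : @640,
                                 AVVideoHeightKey : @480 };
writerInput1 = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                                  outputSettings:videoSettings];
writerInput1.expectsMediaDataInRealTime = YES;

adaptor1 = [AVAssetWriterInputPixelBufferAdaptor
            assetWriterInputPixelBufferAdaptorWithAssetWriterInput:writerInput1
            sourcePixelBufferAttributes:nil];

[writer1 addInput:writerInput1];
[writer1 startWriting];
[writer1 startSessionAtSourceTime:kCMTimeZero]; // each writer needs its own startWriting/startSession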
I call this method in a loop, passing in each iteration an image that should be written to one of the four .mp4 files.
The last AVAssetWriterInput that is called does receive an image, but its file remains empty.
If I change the order of the calls, it is always the last AVAssetWriterInput called that ends up with an empty file.
Does anyone have any ideas?

Related

How to get a frame from a video on iOS

In my app I want to take frames from a video in order to filter them. I try to take a frame from the video at a given time offset. This is my code:
- (UIImage *)getVideoFrameForTime:(NSDate *)time {
    CGImageRef thumbnailImageRef = NULL;
    NSError *igError = nil;
    NSTimeInterval timeinterval = [time timeIntervalSinceDate:self.videoFilterStart];
    CMTime atTime = CMTimeMakeWithSeconds(timeinterval, 1000);

    thumbnailImageRef = [self.assetImageGenerator copyCGImageAtTime:atTime
                                                          actualTime:NULL
                                                               error:&igError];
    if (!thumbnailImageRef) {
        NSLog(@"thumbnailImageGenerationError %@", igError);
    }

    UIImage *image = thumbnailImageRef ? [[UIImage alloc] initWithCGImage:thumbnailImageRef] : nil;
    if (thumbnailImageRef) {
        CGImageRelease(thumbnailImageRef); // copyCGImageAtTime returns a +1 reference
    }
    return image;
}
Unfortunately, I only get frames located at integer seconds: 1, 2, 3, and so on, even when the time interval is non-integer (1.5, etc.).
How can I get frames at non-integer times?
Thanks to @shallowThought I found an answer in this question: Grab frames from video using Swift.
You just need to add these two lines:
assetImgGenerate.requestedTimeToleranceAfter = kCMTimeZero;
assetImgGenerate.requestedTimeToleranceBefore = kCMTimeZero;
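For context, a minimal sketch of the full generator setup with exact-time tolerances (the asset URL and variable names are assumptions):

AVAsset *asset = [AVAsset assetWithURL:videoURL]; // assumed source URL
AVAssetImageGenerator *assetImgGenerate = [[AVAssetImageGenerator alloc] initWithAsset:asset];
assetImgGenerate.appliesPreferredTrackTransform = YES;
// Without zero tolerances the generator may return the nearest keyframe,
// which is why only whole-second frames came back.
assetImgGenerate.requestedTimeToleranceAfter = kCMTimeZero;
assetImgGenerate.requestedTimeToleranceBefore = kCMTimeZero;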
Use this project to get more frame details.
The corresponding project on GitHub: iFrameExtractor.git
If I remember correctly, NSDate's accuracy only goes up to the second, which would explain why frames are only taken on integer seconds. You'll have to use a different type of input to get frames at non-integer seconds.

How to get the available video dimensions/quality from a video URL in iOS?

I am creating a custom video player using AVPlayer in iOS (Objective-C). I have a settings button which, when tapped, will display the available video dimensions and audio formats.
Below is the design:
So, I want to know:
1) How do I get the available dimensions from a video URL (not a local video)?
2) Even if I am able to get the dimensions, can I switch between them while playing in AVPlayer?
Can anyone give me a hint?
If it is not an HLS (streaming) video, you can get the resolution information with the following code.
Sample code:
// player is playing
if (_player.rate != 0 && _player.error == nil)
{
    AVAssetTrack *track = [[_player.currentItem.asset tracksWithMediaType:AVMediaTypeVideo] firstObject];
    if (track != nil)
    {
        CGSize naturalSize = [track naturalSize];
        naturalSize = CGSizeApplyAffineTransform(naturalSize, track.preferredTransform);
        NSInteger width  = (NSInteger)naturalSize.width;
        NSInteger height = (NSInteger)naturalSize.height;
        NSLog(@"Resolution : %ld x %ld", (long)width, (long)height);
    }
}
However, for HLS video, the code above does not work.
I have solved this in a different way: while the video is playing, I grab the current frame from the video output and calculate the resolution from that.
Here is the sample code:
// player is playing
if (_player.rate != 0 && _player.error == nil)
{
    AVAssetTrack *track = [[_player.currentItem.asset tracksWithMediaType:AVMediaTypeVideo] firstObject];
    CMTime currentTime = _player.currentItem.currentTime;
    CVPixelBufferRef buffer = [_videoOutput copyPixelBufferForItemTime:currentTime itemTimeForDisplay:nil];
    if (buffer != NULL)
    {
        NSInteger width  = CVPixelBufferGetWidth(buffer);
        NSInteger height = CVPixelBufferGetHeight(buffer);
        NSLog(@"Resolution : %ld x %ld", (long)width, (long)height);
        CVBufferRelease(buffer); // copyPixelBufferForItemTime returns a +1 reference
    }
}
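For this to work, an AVPlayerItemVideoOutput has to be attached to the player item before you ask it for pixel buffers; a minimal sketch of that setup (the _videoOutput ivar is the one used above):

NSDictionary *attributes = @{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };
_videoOutput = [[AVPlayerItemVideoOutput alloc] initWithPixelBufferAttributes:attributes];
[_player.currentItem addOutput:_videoOutput];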
As you mentioned that it is not a local video, you can call a web service that returns the available video dimensions for that particular video. After that, change the URL to the other available variant and seek to the current position.
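A minimal sketch of that switch (the alternate-quality URL is an assumption):

CMTime resumeTime = _player.currentItem.currentTime;
AVPlayerItem *newItem = [AVPlayerItem playerItemWithURL:otherQualityURL]; // URL returned by your web service
[_player replaceCurrentItemWithPlayerItem:newItem];
[_player seekToTime:resumeTime toleranceBefore:kCMTimeZero toleranceAfter:kCMTimeZero];
[_player play];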

Why does AVSampleBufferDisplayLayer stop showing CMSampleBuffers taken from AVCaptureVideoDataOutput's delegate?

I want to display some CMSampleBuffers with an AVSampleBufferDisplayLayer, but it freezes after showing the first sample.
I get the sample buffers from the AVCaptureVideoDataOutputSampleBufferDelegate:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    CFRetain(sampleBuffer);
    [self imageToBuffer:sampleBuffer];
    CFRelease(sampleBuffer);
}
and put them into a vector:
- (void)imageToBuffer:(CMSampleBufferRef)source {
    // buffers is defined as: std::vector<CMSampleBufferRef> buffers;
    CMSampleBufferRef newRef;
    CMSampleBufferCreateCopy(kCFAllocatorDefault, source, &newRef);
    buffers.push_back(newRef);
}
Then I try to show them via an AVSampleBufferDisplayLayer (in another view controller):
AVSampleBufferDisplayLayer *displayLayer = [[AVSampleBufferDisplayLayer alloc] init];
displayLayer.bounds = self.view.bounds;
displayLayer.position = CGPointMake(CGRectGetMidX(self.displayOnMe.bounds), CGRectGetMidY(self.displayOnMe.bounds));
displayLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
displayLayer.backgroundColor = [[UIColor greenColor] CGColor];
[self.view.layer addSublayer:displayLayer];
self.view.autoresizingMask = UIViewAutoresizingFlexibleWidth | UIViewAutoresizingFlexibleHeight;

dispatch_queue_t queue = dispatch_queue_create("My queue", DISPATCH_QUEUE_SERIAL);

[displayLayer setNeedsDisplay];

[displayLayer requestMediaDataWhenReadyOnQueue:queue
                                    usingBlock:^{
    while ([displayLayer isReadyForMoreMediaData]) {
        if (samplesKey < buffers.size()) {
            CMSampleBufferRef buf = buffers[samplesKey];
            [displayLayer enqueueSampleBuffer:buf];
            samplesKey++;
        } else {
            [displayLayer stopRequestingMediaData];
            break;
        }
    }
}];
But it shows the first sample, then freezes and does nothing.
My video data output settings are as follows:
// set up our output
self.videoDataOutput = [[AVCaptureVideoDataOutput alloc] init];
dispatch_queue_t queue = dispatch_queue_create("VideoQueue", DISPATCH_QUEUE_SERIAL);
[_videoDataOutput setSampleBufferDelegate:self queue:queue];
[_videoDataOutput setVideoSettings:[NSDictionary dictionaryWithObjectsAndKeys:
                                    [NSNumber numberWithInt:kCVPixelFormatType_32BGRA], (id)kCVPixelBufferPixelFormatTypeKey,
                                    nil]];
I came across this problem in the same context, trying to take the output from AVCaptureVideoDataOutput and display it in an AVSampleBufferDisplayLayer.
If your frames come out in display order, then the fix is very easy: just set the display-immediately attachment on the CMSampleBufferRef.
Take the sample buffer returned by the delegate and then...
CFArrayRef attachments = CMSampleBufferGetSampleAttachmentsArray(sampleBuffer, YES);
CFMutableDictionaryRef dict = (CFMutableDictionaryRef)CFArrayGetValueAtIndex(attachments, 0);
CFDictionarySetValue(dict, kCMSampleAttachmentKey_DisplayImmediately, kCFBooleanTrue);
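Applied in the capture delegate, the fix looks roughly like this (a sketch that enqueues directly rather than buffering into the vector first; the displayLayer property is an assumption):

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    // Mark the buffer so the layer shows it as soon as it arrives.
    CFArrayRef attachments = CMSampleBufferGetSampleAttachmentsArray(sampleBuffer, YES);
    CFMutableDictionaryRef dict = (CFMutableDictionaryRef)CFArrayGetValueAtIndex(attachments, 0);
    CFDictionarySetValue(dict, kCMSampleAttachmentKey_DisplayImmediately, kCFBooleanTrue);

    if (self.displayLayer.isReadyForMoreMediaData) {
        [self.displayLayer enqueueSampleBuffer:sampleBuffer];
    }
}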
If your frames come out in encoder order (not display order), then the timestamps on the CMSampleBuffer need to be zero-biased and restamped so that the first frame's timestamp is equal to time 0.
double pts = CMTimeGetSeconds(CMSampleBufferGetPresentationTimeStamp(sampleBuffer));
// ptsStart is equal to the first frames presentationTimeStamp so playback starts from time 0.
CMTime presentationTimeStamp = CMTimeMake((pts-ptsStart)*1000000,1000000);
CMSampleBufferSetOutputPresentationTimeStamp(sampleBuffer, presentationTimeStamp);
Update:
I ran into a situation where some video still wasn't playing smoothly when I used the zero bias method and I investigated further. The correct answer seems to be using the PTS from the first frame you intend to play.
My answer is here, but I will post it here, too.
Set rate at which AVSampleBufferDisplayLayer renders sample buffers
The Timebase needs to be set to the presentation time stamp (pts) of the first frame you intend to decode. I was indexing the pts of the first frame to 0 by subtracting the initial pts from all subsequent pts and setting the Timebase to 0. For whatever reason, that didn't work with certain video.
You want something like this (called before a call to decode):
CMTimebaseRef controlTimebase;
CMTimebaseCreateWithMasterClock( CFAllocatorGetDefault(), CMClockGetHostTimeClock(), &controlTimebase );
displayLayer.controlTimebase = controlTimebase;
// Set the timebase to the initial pts here
CMTimebaseSetTime(displayLayer.controlTimebase, CMTimeMake(ptsInitial, 1));
CMTimebaseSetRate(displayLayer.controlTimebase, 1.0);
Set the PTS for the CMSampleBuffer...
CMSampleBufferSetOutputPresentationTimeStamp(sampleBuffer, presentationTimeStamp);
And maybe make sure display immediately isn't set....
CFDictionarySetValue(dict, kCMSampleAttachmentKey_DisplayImmediately, kCFBooleanFalse);
This is covered very briefly in WWDC 2014 Session 513.

How to set videoMaximumDuration for AVCaptureSession in iOS

I am using AVCaptureSession for recording video, but I am not able to set a maximum video length. With UIImagePickerController there is a property for setting the maximum video duration, videoMaximumDuration, but how can I set a maximum duration with AVCaptureSession? Please help me. Thanks in advance.
You can set the maximum duration using the maxRecordedDuration property of your AVCaptureMovieFileOutput.
Here's an example.
self.movieFileOutput = [[AVCaptureMovieFileOutput alloc] init];

Float64 maximumVideoLength = 60; // whatever value you wish to set as the maximum, in seconds
int32_t preferredTimeScale = 30; // frames per second
CMTime maxDuration = CMTimeMakeWithSeconds(maximumVideoLength, preferredTimeScale);
self.movieFileOutput.maxRecordedDuration = maxDuration;
self.movieFileOutput.minFreeDiskSpaceLimit = 1024 * 1024;

if ([self.captureSession canAddOutput:self.movieFileOutput]) {
    [self.captureSession addOutput:self.movieFileOutput];
}
I hope this answers your question
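When the limit is reached, recording stops on its own and the file-output delegate is called with an error whose code is AVErrorMaximumDurationReached (the file written up to that point is still usable). A minimal sketch of handling that, assuming the class adopts AVCaptureFileOutputRecordingDelegate:

- (void)captureOutput:(AVCaptureFileOutput *)output
didFinishRecordingToOutputFileAtURL:(NSURL *)outputFileURL
      fromConnections:(NSArray *)connections
                error:(NSError *)error
{
    if (error != nil && error.code == AVErrorMaximumDurationReached) {
        // Recording hit maxRecordedDuration; treat the file as a normal result.
        NSLog(@"Recording stopped at the maximum duration: %@", outputFileURL);
    }
}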

iOS video recording using UIImages arriving at random times

I'm developing an iOS app that gets UIImages at random times from an internet connection and progressively constructs a video file from them as the images come in. I got it working a little, but the fact that the images don't always arrive at the same rate is messing up the video.
How do I re-calculate the CMTime when each new UIImage arrives so that it adjusts for the varying frame rate of the arriving UIImages, which can arrive anywhere from milliseconds to seconds apart?
Here is what I'm doing so far (some code is not shown), but this is the basic idea:
.
.
adaptor = [AVAssetWriterInputPixelBufferAdaptor
           assetWriterInputPixelBufferAdaptorWithAssetWriterInput:videoStream
           sourcePixelBufferAttributes:attributes];

CMTime frameTime = CMTimeMake(1, 10); // assumed initial frame rate
.
.
- (void)addImageToMovie:(UIImage *)img {
    append_ok = FALSE;
    buffer = [self pixelBufferFromCGImage:[img CGImage] andSize:img.size];

    while (!append_ok) {
        if (adaptor.assetWriterInput.readyForMoreMediaData) {
            frameTime.value += 1;
            append_ok = [adaptor appendPixelBuffer:buffer withPresentationTime:frameTime];
            [NSThread sleepForTimeInterval:0.01];
        } else {
            [NSThread sleepForTimeInterval:0.01];
        }
    }
    if (buffer) {
        CVBufferRelease(buffer);
    }
}
It depends on the frame timing: with the timescale of 10 used above, adding 1 to frameTime.value advances the presentation time by only 1/10 of a second. If your images arrive roughly once per second, add 10 to frameTime.value instead of 1.
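More generally, to handle the varying arrival rate the question describes, one common approach (not taken from the answer above; a sketch with assumed instance variables) is to stamp each frame with the wall-clock time elapsed since the first image arrived:

// Assumed ivars: CFTimeInterval firstFrameTime; BOOL hasFirstFrame;
// Requires: #import <QuartzCore/QuartzCore.h> for CACurrentMediaTime().
- (void)addImageToMovie:(UIImage *)img {
    CFTimeInterval now = CACurrentMediaTime();
    if (!hasFirstFrame) {
        firstFrameTime = now;
        hasFirstFrame = YES;
    }
    // Presentation time = seconds since the first frame, on a 600-tick timescale.
    CMTime presentTime = CMTimeMakeWithSeconds(now - firstFrameTime, 600);

    CVPixelBufferRef pixelBuffer = [self pixelBufferFromCGImage:[img CGImage] andSize:img.size];
    if (adaptor.assetWriterInput.readyForMoreMediaData) {
        [adaptor appendPixelBuffer:pixelBuffer withPresentationTime:presentTime];
    }
    CVBufferRelease(pixelBuffer);
}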

Resources