Can AVAssetWriter merge two video files? - iOS

Please don't tell me to use AVAssetExportSession, thank you.
I tried the following, but it failed:
for (int i = 0; i < count; i++) {
    assetWriterInput = nil;
    assetWriterInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:videoSettings];
    NSParameterAssert(assetWriterInput);
    NSParameterAssert([assetWriter canAddInput:assetWriterInput]);
    [assetWriterInput setExpectsMediaDataInRealTime:YES];
    [assetWriter addInput:assetWriterInput];
}
[assetWriter startWriting];

The sample app that shows you how to do exactly what you're talking about is AVCompositionDebugVieweriOS:
https://developer.apple.com/library/ios/samplecode/AVCompositionDebugVieweriOS/Introduction/Intro.html#//apple_ref/doc/uid/DTS40013421
I'm sure you can pare down the code to just the parts you need; but if that's not yet where you are in your understanding, let me know.
One more thing: this app not only contains the code you need, but also draws a graph of your output, showing you where you made the connection between the two clips, and any transition you may have inserted between them.
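The heart of what that sample does is building an AVMutableComposition from the two clips. A bare-bones sketch, leaving out the transitions and audio, and with made-up variable names for your two assets, looks roughly like this:

// Sketch: append two clips back to back in an AVMutableComposition.
AVMutableComposition *composition = [AVMutableComposition composition];
AVMutableCompositionTrack *videoTrack =
    [composition addMutableTrackWithMediaType:AVMediaTypeVideo
                             preferredTrackID:kCMPersistentTrackID_Invalid];

CMTime cursor = kCMTimeZero;
for (AVAsset *asset in @[firstAsset, secondAsset]) {   // firstAsset/secondAsset are your clips
    AVAssetTrack *sourceTrack = [[asset tracksWithMediaType:AVMediaTypeVideo] firstObject];
    NSError *error = nil;
    [videoTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, asset.duration)
                        ofTrack:sourceTrack
                         atTime:cursor
                          error:&error];
    cursor = CMTimeAdd(cursor, asset.duration);
}
// The composition can be played with AVPlayer, or handed to an AVAssetReader/AVAssetWriter
// pair if you want to write the merged file without AVAssetExportSession.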

Related

AVAssetWriter / AVAssetWriterInputPixelBufferAdaptor - black frames and frame rate

I'm capturing the camera feed and writing it to a movie.
The problem I'm having is that after the export the movie has a couple of black seconds in front of it (relative to the actual recording start time).
I think this is related to [self.assetWriter startSessionAtSourceTime:kCMTimeZero];
I had a half-working solution: a frameStart variable that just counted upwards in the sample buffer delegate method.
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
    CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    frameStart++;
    if (self.startRecording == YES) {
        static int64_t frameNumber = 0;
        if (self.assetWriterInput.readyForMoreMediaData) {
            [self.pixelBufferAdaptor appendPixelBuffer:pixelBuffer withPresentationTime:CMTimeMake(frameNumber, 25)];
        }
        frameNumber++;
    }
}
and then I call this when the user presses a button:
[self.assetWriter startSessionAtSourceTime:CMTimeMake(frameStart, 25)];
This works, but only once... if I want to record a second movie, the black frames are back again.
Also, when I look at the output movie, the frame rate is 25 fps as I want it to be, but the video looks sped up, as if there is not enough time between the frames, so the movie plays about twice as fast.
These are my writer input settings:
NSDictionary *outputSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                                [NSNumber numberWithInt:640], AVVideoWidthKey,
                                [NSNumber numberWithInt:480], AVVideoHeightKey,
                                AVVideoCodecH264, AVVideoCodecKey,
                                nil];
self.assetWriterInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:outputSettings];
self.assetWriterInput.expectsMediaDataInRealTime = YES;
You don't need to count frame timestamps on your own. You can get the timestamp of the current sample with
CMTime timestamp = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
However, it seems to me you are just passing the pixel buffer of the frame to the adaptor without modifications. Wouldn't it be easier to pass the sample buffer itself directly to the assetWriterInput like the following?
[self.assetWriterInput appendSampleBuffer:sampleBuffer];
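Putting both suggestions together, a minimal sketch of the delegate method might look like the following (the sessionStarted flag is something I'm adding; it is not in your code):

- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    if (!self.startRecording) {
        return;
    }
    // Start the writer session at the first captured frame's own timestamp,
    // so there is no gap (and no black frames) at the beginning.
    CMTime timestamp = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
    if (!self.sessionStarted) {   // hypothetical BOOL property
        [self.assetWriter startSessionAtSourceTime:timestamp];
        self.sessionStarted = YES;
    }
    // Append the sample buffer directly; no manual frame counting needed.
    if (self.assetWriterInput.readyForMoreMediaData) {
        [self.assetWriterInput appendSampleBuffer:sampleBuffer];
    }
}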
First of all, why are you incrementing frameNumber twice for every frame?
Increment it once and remove the other increment.
This should fix the playback speed.
Second, are you resetting frameNumber to 0 when you finish recording?
If not, then that is your problem.
If you are, I'd need more explanation about what is going on here.
Regards

camera not focusing on iPhone 4, running iOS 7.1

We are having trouble after the iOS upgrade from 7.0.6 to 7.1.0. I don't see this issue on the iPhone 4s, 5, 5c, or 5s running iOS 7.1. So much for all the non-fragmentation talk. Here is the camera initialization code:
- (void)initCapture
{
    // Setting up the AVCaptureDevice (camera)
    AVCaptureDevice *inputDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    NSError *cameraError;
    if ([inputDevice lockForConfiguration:&cameraError])
    {
        if ([inputDevice isFocusModeSupported:AVCaptureFocusModeContinuousAutoFocus])
        {
            NSLog(@"AVCaptureDevice is set to video with continuous auto focus");
            CGPoint autofocusPoint = CGPointMake(0.5f, 0.5f);
            [inputDevice setFocusPointOfInterest:autofocusPoint];
            [inputDevice setFocusMode:AVCaptureFocusModeContinuousAutoFocus];
        }
        [inputDevice unlockForConfiguration];
    }

    // Setting up the input stream
    AVCaptureDeviceInput *captureInput = [AVCaptureDeviceInput deviceInputWithDevice:inputDevice error:nil];

    // Setting up the AVCaptureVideoDataOutput
    AVCaptureVideoDataOutput *captureOutput = [[AVCaptureVideoDataOutput alloc] init];
    captureOutput.alwaysDiscardsLateVideoFrames = YES;
    [captureOutput setSampleBufferDelegate:self queue:dispatch_get_main_queue()];

    // Setting up the video settings
    NSString *key = (NSString *)kCVPixelBufferPixelFormatTypeKey;
    NSNumber *value = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA];
    NSDictionary *videoSettings = [NSDictionary dictionaryWithObject:value forKey:key];

    // Passing the settings to the AVCaptureVideoDataOutput
    [captureOutput setVideoSettings:videoSettings];

    // Setting up the AVCaptureSession
    captureSession = [[AVCaptureSession alloc] init];
    captureSession.sessionPreset = AVCaptureSessionPresetMedium;
    [captureSession addInput:captureInput];
    [captureSession addOutput:captureOutput];

    if (!prevLayer)
    {
        prevLayer = [AVCaptureVideoPreviewLayer layerWithSession:captureSession];
    }
    NSLog(@"initCapture preview layer %p %@", self.prevLayer, self.prevLayer);
    self.prevLayer.frame = self.view.bounds;
    self.prevLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
    [self.view.layer addSublayer:self.prevLayer];

    [self.captureSession startRunning];
}
Any help would be greatly appreciated...
The Apple-provided code you are using is outdated; they have since rewritten it completely. I'd try my luck and go for the new workflow.
Check it out here.
To close this thread up: we were using the camera to scan QR codes with libzxing. We decided to implement the native iOS 7.0 AVCaptureMetadataOutputObjectsDelegate instead of the older AVCaptureVideoDataOutputSampleBufferDelegate. The metadata delegate is much simpler and cleaner, and we found the example at http://nshipster.com/ios7/ very helpful.
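For anyone landing here later, the core of the metadata-output setup looks roughly like this (variable names are illustrative; session is assumed to be an already-configured AVCaptureSession):

// Sketch: QR scanning with AVCaptureMetadataOutput (iOS 7+), replacing the
// sample-buffer delegate + libzxing pipeline.
AVCaptureMetadataOutput *metadataOutput = [[AVCaptureMetadataOutput alloc] init];
[session addOutput:metadataOutput];
// The supported types are only known after the output has been added to the session.
if ([metadataOutput.availableMetadataObjectTypes containsObject:AVMetadataObjectTypeQRCode]) {
    metadataOutput.metadataObjectTypes = @[AVMetadataObjectTypeQRCode];
}
[metadataOutput setMetadataObjectsDelegate:self queue:dispatch_get_main_queue()];

// AVCaptureMetadataOutputObjectsDelegate callback:
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputMetadataObjects:(NSArray *)metadataObjects
       fromConnection:(AVCaptureConnection *)connection
{
    for (AVMetadataMachineReadableCodeObject *object in metadataObjects) {
        if ([object.type isEqualToString:AVMetadataObjectTypeQRCode]) {
            NSLog(@"QR payload: %@", object.stringValue);
        }
    }
}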
Here are some ideas to diagnose your problem:
You have no else case for if ([inputDevice lockForConfiguration:&cameraError]). Add one.
In the else case, log the error contained in cameraError.
You have no else case for if ([inputDevice isFocusModeSupported:AVCaptureFocusModeContinuousAutoFocus]). Add one; log that, or add a breakpoint there to test in your debugging.
You don't check the focusPointOfInterestSupported property before calling setFocusPointOfInterest.
Consider calling setFocusMode before setFocusPointOfInterest (not sure if it matters, but that's what I have).
In general, you may want to do all your checks before attempting to lock the configuration; a sketch with the checks in place follows.
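A sketch of the configuration block with those checks and some logging in place (same inputDevice as in the question) might look like this:

NSError *cameraError = nil;
if ([inputDevice lockForConfiguration:&cameraError]) {
    if ([inputDevice isFocusModeSupported:AVCaptureFocusModeContinuousAutoFocus]) {
        [inputDevice setFocusMode:AVCaptureFocusModeContinuousAutoFocus];
    } else {
        NSLog(@"Continuous auto focus is not supported on this device");
    }
    if ([inputDevice isFocusPointOfInterestSupported]) {
        [inputDevice setFocusPointOfInterest:CGPointMake(0.5f, 0.5f)];
    } else {
        NSLog(@"Focus point of interest is not supported on this device");
    }
    [inputDevice unlockForConfiguration];
} else {
    NSLog(@"lockForConfiguration failed: %@", cameraError);
}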
Following neuman8's comment stating that something in libzxing is preventing the refocus, I did some investigating myself.
I found the following line in the Decoder.mm file to be the culprit:
ArrayRef<char> subsetData (subsetBytesPerRow * subsetHeight);
It seems that ArrayRef is a class in the zxing/common/Array.h file that attempts to allocate an array of the specified size. It did not seem to do anything wrong, but I guessed that allocating an array of roughly 170k char elements might take long enough to stall the blocking call and prevent other threads from running.
So I tried a brute-force solution to test the hypothesis: I added a sleep just after the allocation.
[NSThread sleepForTimeInterval:0.02];
The camera started focusing again and was able to decipher the QR codes.
I am still unable to find a better way to resolve this. Can anyone figure out a more efficient allocation of the large array, or a more elegant way of yielding the thread for the camera focus? Otherwise this should solve the problem for now, even if it is ugly.
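One possibility I have not profiled, so treat it as an assumption rather than a fix: deliver the sample buffers on a dedicated serial queue instead of the main queue, so the zxing work does not compete with the capture and focus machinery on the main thread.

// Sketch: run the sample-buffer delegate on its own serial queue rather than
// dispatch_get_main_queue(); the queue label is arbitrary.
dispatch_queue_t decodeQueue = dispatch_queue_create("com.example.qr-decode", DISPATCH_QUEUE_SERIAL);
[captureOutput setSampleBufferDelegate:self queue:decodeQueue];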

How to build FFmpeg optimized for iOS, possibly using hardware decoding?

I am making an FFmpeg-based player for iOS. It works fine on the simulator, but on a real device (iPhone 4) the frame rate is low and my audio and video go out of sync. The player works fine on the iPhone 4s, so I guess it's just a problem with the device's computing power.
So, is there any way to build FFmpeg optimized for iOS devices (armv7, armv7s arch)? Or is there any way to use the iOS device's hardware to decode the video stream?
My video stream is encoded in H264/AAC.
Those streams should play just fine. I assume that, since you're using FFmpeg, you are not using a video protocol that iOS supports directly.
We use FFmpeg to do RTSP/RTMP and we get good performance with H264/AAC.
There are a number of factors that contribute to A/V sync issues; usually some type of pre-buffering of the video is required, and the network plays a big part in it as well.
As to your second question, hardware encoding is only available via AVFoundation; you can use AVAssetWriter to encode your video, but again it depends on whether or not you need real time.
see this link https://github.com/mooncatventures-group/FFPlayer-beta1/blob/master/FFAVFrames-test/ViewController.m
- (void)startRecording {
    // // create the AVComposition
    // [mutableComposition release];
    // mutableComposition = [[AVMutableComposition alloc] init];

    movieURL = [NSURL fileURLWithPath:[NSString stringWithFormat:@"%@/%llu.mov", NSTemporaryDirectory(), mach_absolute_time()]];
    NSError *movieError = nil;

    assetWriter = [[AVAssetWriter alloc] initWithURL:movieURL
                                            fileType:AVFileTypeQuickTimeMovie
                                               error:&movieError];
    NSDictionary *assetWriterInputSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                                              AVVideoCodecH264, AVVideoCodecKey,
                                              [NSNumber numberWithInt:FRAME_WIDTH], AVVideoWidthKey,
                                              [NSNumber numberWithInt:FRAME_HEIGHT], AVVideoHeightKey,
                                              nil];
    assetWriterInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                                          outputSettings:assetWriterInputSettings];
    assetWriterInput.expectsMediaDataInRealTime = YES;
    [assetWriter addInput:assetWriterInput];

    assetWriterPixelBufferAdaptor = [[AVAssetWriterInputPixelBufferAdaptor alloc]
                                     initWithAssetWriterInput:assetWriterInput
                                     sourcePixelBufferAttributes:nil];
    [assetWriter startWriting];

    firstFrameWallClockTime = CFAbsoluteTimeGetCurrent();
    [assetWriter startSessionAtSourceTime:kCMTimeZero];
    startSampleing = YES;
}
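Each captured frame is then appended with a presentation time derived from that wall-clock reference, roughly like this (a sketch; TIME_SCALE and pixelBuffer are placeholders, not from the snippet above):

// Sketch: timestamp each frame relative to firstFrameWallClockTime.
CFAbsoluteTime elapsed = CFAbsoluteTimeGetCurrent() - firstFrameWallClockTime;
CMTime presentationTime = CMTimeMake((int64_t)(elapsed * TIME_SCALE), TIME_SCALE);
if (assetWriterInput.readyForMoreMediaData) {
    [assetWriterPixelBufferAdaptor appendPixelBuffer:pixelBuffer
                                withPresentationTime:presentationTime];
}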
The one drawback right now is that a way still needs to be found to read the encoded data as it's being written; believe me, there are a few of us developers trying to figure out how to do that as I write this.

Saved video filtering on iOS

How can I filter a video that is saved in the photo library on iOS?
I got the URLs of the videos in the library using the AssetsLibrary framework, then made a preview for the video.
As the next step, I want to filter the video using CIFilter.
For the real-time case, I implemented the filtering with AVCaptureVideoDataOutputSampleBufferDelegate.
But for a saved video, I don't know how to set up the filtering process.
Do I use AVAsset? If I must use that, how can I filter it? And how do I save the result?
Thank you, as always.
I hope this will help you:
AVAsset *theAVAsset = [[AVURLAsset alloc] initWithURL:mNormalVideoURL options:nil];
NSError *error = nil;
float width = theAVAsset.naturalSize.width;
float height = theAVAsset.naturalSize.height;

AVAssetReader *mAssetReader = [[AVAssetReader alloc] initWithAsset:theAVAsset error:&error];
NSArray *videoTracks = [theAVAsset tracksWithMediaType:AVMediaTypeVideo];
AVAssetTrack *videoTrack = [videoTracks objectAtIndex:0];
mPrefferdTransform = [videoTrack preferredTransform];
[theAVAsset release];   // release only after the tracks have been read

NSDictionary *options = [NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_32BGRA]
                                                    forKey:(id)kCVPixelBufferPixelFormatTypeKey];
AVAssetReaderTrackOutput *mAssetReaderOutput = [[AVAssetReaderTrackOutput alloc] initWithTrack:videoTrack outputSettings:options];
[mAssetReader addOutput:mAssetReaderOutput];
[mAssetReaderOutput release];   // the reader retains its outputs

[mAssetReader startReading];    // without this the status stays AVAssetReaderStatusUnknown

CMSampleBufferRef buffer = NULL;
while ([mAssetReader status] == AVAssetReaderStatusReading) {
    buffer = [mAssetReaderOutput copyNextSampleBuffer];   // read the next frame
    if (buffer) {
        // ...this is where each frame can be filtered (see the note below)...
        CFRelease(buffer);
    }
}
You should have a look at CVImageBufferRef pixBuf = CMSampleBufferGetImageBuffer(buffer); that gives you the frame's pixel buffer, so you can apply your filter to pixBuf. I find that the performance is not good, though; if you have any new ideas, we can discuss them further.
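For the CIFilter step the question asks about, a per-frame sketch might look like this (the filter choice and rendering back into the same buffer are my own simplifications, not code from the answer above):

// Sketch: apply a Core Image filter to one decoded frame.
CIContext *ciContext = [CIContext contextWithOptions:nil];   // create this once, outside the read loop
CVPixelBufferRef pixBuf = CMSampleBufferGetImageBuffer(buffer);

CIImage *inputImage = [CIImage imageWithCVPixelBuffer:pixBuf];
CIFilter *filter = [CIFilter filterWithName:@"CISepiaTone"];   // example filter
[filter setValue:inputImage forKey:kCIInputImageKey];
[filter setValue:@0.8 forKey:kCIInputIntensityKey];

// Render the result back into a pixel buffer. Writing into the source buffer keeps the
// sketch short; a real pipeline would normally render into a fresh buffer taken from an
// AVAssetWriterInputPixelBufferAdaptor's pixel buffer pool and append that to save the video.
[ciContext render:[filter outputImage] toCVPixelBuffer:pixBuf];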

Can someone explain source time, movie time, presentation time, etc.?

In my iOS app, I need to save an image as a short video segment. I have this working using AVAssetWriter and AVAssetWriterInputPixelBufferAdaptor, thanks to some of the great posts on this site, but I've had to fudge the start and end session times, and the presentation times, because I don't really understand them.
The following fragment creates a 2-second video, but I've set the various times by trial and error. I'm not sure why it doesn't create a 3-second video, to be honest.
// start session
videoWriter.movieFragmentInterval = CMTimeMake(1, 600);
[videoWriter startWriting];
CMTime startTime = CMTimeMake(0, 600);
[videoWriter startSessionAtSourceTime:startTime];

while (1) {
    if (![writerInput isReadyForMoreMediaData]) {
        NSLog(@"Not ready for data");
    } else {
        [avAdaptor appendPixelBuffer:pixelBuffer
                withPresentationTime:CMTimeMake(1200, 600)];
        break;
    }
}

// Finish the session:
[writerInput markAsFinished];
CMTime endTime = CMTimeMake(1800, 600);
[videoWriter endSessionAtSourceTime:endTime];
[videoWriter finishWriting];
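If I've understood CMTimeMake correctly, the values in the fragment work out like this:

CMTimeGetSeconds(CMTimeMake(0, 600));     // 0.0 s -> session start
CMTimeGetSeconds(CMTimeMake(1200, 600));  // 2.0 s -> presentation time of the single frame
CMTimeGetSeconds(CMTimeMake(1800, 600));  // 3.0 s -> requested end-of-session time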
Can anyone explain the various time settings in this fragment, or point me to a document that will help? I've read the Apple docs until I'm cross-eyed, but they assume more knowledge than I currently have, I guess.
TIA: John
