Scenario
I am working on an application that does video processing and streaming. I already have video capture from the back camera streaming perfectly. The problem is that I also have to process the video data, but only locally. As it turns out, the API I am using for the local processing requires a different pixel format than the API I am using to stream the data to my server. It seems I need two separate sessions capturing video from the back camera simultaneously, so that one session can do the processing and the other can stream.
Problem
Every time I attempt to create a new session to use the same AVCaptureDevice (back), my streaming immediately stops. Code below:
captureSession = [[AVCaptureSession alloc] init];

AVCaptureDeviceInput *videoIn = [[AVCaptureDeviceInput alloc]
    initWithDevice:[self videoDeviceWithPosition:AVCaptureDevicePositionBack]
             error:nil];
if ([captureSession canAddInput:videoIn]) {
    [captureSession addInput:videoIn];
}

AVCaptureVideoDataOutput *videoOut = [[AVCaptureVideoDataOutput alloc] init];
[videoOut setAlwaysDiscardsLateVideoFrames:YES];
[videoOut setVideoSettings:
    @{(id)kCVPixelBufferPixelFormatTypeKey: @(kCVPixelFormatType_32BGRA)}];

dispatch_queue_t videoCaptureQueue =
    dispatch_queue_create("Video Process Queue", DISPATCH_QUEUE_SERIAL);
[videoOut setSampleBufferDelegate:self queue:videoCaptureQueue];

if ([captureSession canAddOutput:videoOut]) {
    [captureSession addOutput:videoOut];
}
I receive an interruption reason videoDeviceInUseByAnotherClient.
videoDeviceInUseByAnotherClient: An interruption caused by the video device temporarily being made unavailable (for example, when used by another capture session).
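For reference, this is roughly how I observe the interruption and read its reason; a minimal sketch assuming iOS 9+, where AVCaptureSessionInterruptionReasonKey is available, and the handler name is just a placeholder:

// Sketch: observe interruptions on the second session to read the reason (iOS 9+).
[[NSNotificationCenter defaultCenter] addObserver:self
                                         selector:@selector(sessionWasInterrupted:)
                                             name:AVCaptureSessionWasInterruptedNotification
                                           object:captureSession];

// Hypothetical handler method
- (void)sessionWasInterrupted:(NSNotification *)notification
{
    NSNumber *reason = notification.userInfo[AVCaptureSessionInterruptionReasonKey];
    if (reason.integerValue == AVCaptureSessionInterruptionReasonVideoDeviceInUseByAnotherClient) {
        NSLog(@"Video device is in use by another capture session");
    }
}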
I have also tried adding the output of the original capture session to the new session, but every time the canAddOutput: method returns NO. My guess is that this is because there is already a session associated with that output.
Question
How do I use the same AVCaptureDevice to output to two separate AVCaptureVideoDataOutputs at the same time? Or how can I achieve the same thing as the diagram below?
Related
In iOS, to preview the video, I found that I should use AVCaptureVideoPreviewLayer with an instance of AVCaptureSession. For example:
AVCaptureSession *captureSession = <#Get a capture session#>;
AVCaptureVideoPreviewLayer *previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:captureSession];
UIView *aView = <#The view in which to present the layer#>;
previewLayer.frame = aView.bounds;
[aView.layer addSublayer:previewLayer];
And an AVCaptureSession needs one or more AVCaptureDevices and AVCaptureDeviceInputs.
For example:
AVCaptureSession *captureSession = [[AVCaptureSession alloc] init];
AVCaptureDevice *audioCaptureDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeAudio];
NSError *error = nil;
AVCaptureDeviceInput *audioInput = [AVCaptureDeviceInput deviceInputWithDevice:audioCaptureDevice error:&error];
if (audioInput) {
[captureSession addInput:audioInput];
}
else {
// Handle the failure.
}
I referred to the Apple developer documentation for the above examples.
But whether the devices are audio or video, all the examples use only the built-in camera and mic of the iPhone/iPad.
My project doesn't use the built-in camera and mic but an external accessory that supports MP4 and is already MFi compliant.
I've already tested MFi authentication and identification, and the MP4 bitstream reaching iPhone devices via the External Accessory framework.
But I have no idea how I can use the bitstream from the external accessory (instead of the built-in camera and mic) to display a preview in a UIView on the iPhone.
Is there anyone with expertise in this kind of problem?
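One direction I am considering, but have not verified: since AVCaptureSession only works with built-in capture devices, the accessory's MP4 bitstream would presumably have to be demuxed and decoded separately (for example with VideoToolbox), and the resulting CMSampleBuffers rendered with AVSampleBufferDisplayLayer instead of AVCaptureVideoPreviewLayer. A rough sketch, assuming decodedSampleBuffer is a CMSampleBufferRef produced by that decode path:

// Sketch only: display already-decoded CMSampleBuffers from a non-camera source.
// `decodedSampleBuffer` is assumed to come from your own MP4/H.264 decode path.
AVSampleBufferDisplayLayer *displayLayer = [[AVSampleBufferDisplayLayer alloc] init];
displayLayer.frame = aView.bounds;
displayLayer.videoGravity = AVLayerVideoGravityResizeAspect;
[aView.layer addSublayer:displayLayer];

if (displayLayer.isReadyForMoreMediaData) {
    [displayLayer enqueueSampleBuffer:decodedSampleBuffer];
}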
didFinishRecordingToOutputFileAtURL - {
AVErrorRecordingSuccessfullyFinishedKey = 0;
NSLocalizedDescription = "Cannot Record";
NSLocalizedRecoverySuggestion = "Try recording again.";
NSUnderlyingError = "Error Domain=NSOSStatusErrorDomain Code=-16418 \"(null)\"";}
This is the output I am getting while recording a video.
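For context, that dictionary is the userInfo of the NSError passed to the recording delegate; a minimal sketch of the callback where it arrives (assuming the standard AVCaptureFileOutputRecordingDelegate method):

// Delegate callback where the error above is reported (sketch).
- (void)captureOutput:(AVCaptureFileOutput *)captureOutput
didFinishRecordingToOutputFileAtURL:(NSURL *)outputFileURL
      fromConnections:(NSArray *)connections
                error:(NSError *)error
{
    BOOL finishedOK = [error.userInfo[AVErrorRecordingSuccessfullyFinishedKey] boolValue];
    if (error && !finishedOK) {
        NSLog(@"Recording failed: %@ underlying: %@", error, error.userInfo[NSUnderlyingErrorKey]);
    }
}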
My code to create the AVCaptureMovieFileOutput is as follows:
movieFileOutput = [[AVCaptureMovieFileOutput alloc] init];
// SET MAX DURATION
CMTime maxDuration = CMTimeMakeWithSeconds(121, Preferred_Time_Scale); // length I can record is 120 seconds
movieFileOutput.maxRecordedDuration = maxDuration;
// SET MIN FREE SPACE IN BYTES FOR RECORDING TO CONTINUE ON A VOLUME
movieFileOutput.minFreeDiskSpaceLimit = 1024 * 1024; // 1MB
if ([captureSession canAddOutput:movieFileOutput])
[captureSession addOutput:movieFileOutput];
[captureSession setSessionPreset:AVCaptureSessionPresetMedium];
//----- START THE CAPTURE SESSION RUNNING -----
[captureSession commitConfiguration];
[captureSession startRunning];
I have tried searching for a description of NSOSStatusErrorDomain code -16418, but could not find it anywhere in any documentation (not even on Apple's sites).
Any help is appreciated
Thanks
Satyaranjan
I encountered the same issue and finally found out what's wrong after several hours of debugging.
I was trying to create an AVCaptureVideoPreviewLayer after I started the recording.
// somewhere after I called [aAVCaptureMovieFileOutput startRecording]
[AVCaptureVideoPreviewLayer layerWithSession:aAVCaptureSession];
So apparently you should not create a preview layer from the capture session after recording has started. The correct way is to configure the preview layer before [_captureSession startRunning], and only then call [aAVCaptureMovieFileOutput startRecording].
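In other words, the order that worked for me looks roughly like this (a sketch; outputURL is a placeholder for your own file URL):

// 1. Configure the preview layer before the session starts running.
AVCaptureVideoPreviewLayer *previewLayer =
    [AVCaptureVideoPreviewLayer layerWithSession:aAVCaptureSession];
previewLayer.frame = self.view.bounds;
[self.view.layer addSublayer:previewLayer];

// 2. Start the session.
[aAVCaptureSession startRunning];

// 3. Only then start recording.
[aAVCaptureMovieFileOutput startRecordingToOutputFileURL:outputURL
                                       recordingDelegate:self];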
You can find a useful blog post with a very nice sample project by objc.io here.
I got a solution for the above problem (not sure if it is the final one, though).
I just commented out the line that sets maxRecordedDuration, i.e. movieFileOutput.maxRecordedDuration = maxDuration;, and now I can record for as long as I wish.
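If you would rather keep the line for clarity, my understanding is that kCMTimeInvalid is the documented "no limit" default for maxRecordedDuration, so setting it explicitly should be equivalent to removing the line:

// kCMTimeInvalid is the default value, meaning no duration limit (to my understanding).
movieFileOutput.maxRecordedDuration = kCMTimeInvalid;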
I am using the OpenTok iOS SDK to stream from an iPhone to Chrome. What I would like to do is record a high-res version of the video while streaming.
Using a custom video capturer via the OTVideoCapture interface from Example 2, "Let's Build OTPublisher", I can successfully record the video sample buffers to a file. The problem is, I cannot find any reference to the audio data gathered from the microphone.
I assume an audio input (AVCaptureDeviceInput) feeding an audio output (AVCaptureAudioDataOutput) via AVCaptureAudioDataOutputSampleBufferDelegate is used somewhere.
Does anyone know how to access it from the OpenTok iOS SDK?
In captureOutput:didOutputSampleBuffer:fromConnection:, the fromConnection parameter will differentiate the audio and video connections and tells you which kind of buffer you received.
To set up the audio input/output, you can try the following in the Let-Build-OTPublisher initCapture method:
// add audio input / outputs
AVCaptureDevice *audioDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeAudio];
_audioInput = [AVCaptureDeviceInput deviceInputWithDevice:audioDevice error:nil];
if ([_captureSession canAddInput:_audioInput])
{
    NSLog(@"added audio device input");
    [_captureSession addInput:_audioInput];
}
_audioOutput = [[AVCaptureAudioDataOutput alloc] init];
if ([_captureSession canAddOutput:_audioOutput])
{
    NSLog(@"audio output added");
    [_captureSession addOutput:_audioOutput];
}
[_audioOutput setSampleBufferDelegate:self queue:_capture_queue];
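Then, in the sample buffer delegate, you can tell the two streams apart by checking which output delivered the buffer; a rough sketch (the handle... helpers are hypothetical):

// Sketch: one delegate callback receives both audio and video buffers.
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    if (captureOutput == _audioOutput) {
        // Audio sample buffer from the AVCaptureAudioDataOutput
        [self handleAudioSampleBuffer:sampleBuffer];   // hypothetical helper
    } else {
        // Video sample buffer from the video data output
        [self handleVideoSampleBuffer:sampleBuffer];   // hypothetical helper
    }
}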
We are having trouble after upgrading from iOS 7.0.6 to 7.1.0. I don't see this issue on the iPhone 4s, 5, 5c, or 5s running iOS 7.1. So much for all the non-fragmentation talk. I am posting the camera initialization code:
- (void)initCapture
{
    // Setting up the AVCaptureDevice (camera)
    AVCaptureDevice *inputDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    NSError *cameraError;
    if ([inputDevice lockForConfiguration:&cameraError])
    {
        if ([inputDevice isFocusModeSupported:AVCaptureFocusModeContinuousAutoFocus])
        {
            NSLog(@"AVCaptureDevice is set to video with continuous auto focus");
            CGPoint autofocusPoint = CGPointMake(0.5f, 0.5f);
            [inputDevice setFocusPointOfInterest:autofocusPoint];
            [inputDevice setFocusMode:AVCaptureFocusModeContinuousAutoFocus];
        }
        [inputDevice unlockForConfiguration];
    }

    // Setting up the input streams
    AVCaptureDeviceInput *captureInput = [AVCaptureDeviceInput deviceInputWithDevice:inputDevice error:nil];

    // Setting up the AVCaptureVideoDataOutput
    AVCaptureVideoDataOutput *captureOutput = [[AVCaptureVideoDataOutput alloc] init];
    captureOutput.alwaysDiscardsLateVideoFrames = YES;
    [captureOutput setSampleBufferDelegate:self queue:dispatch_get_main_queue()];

    // Setting up video settings
    NSString *key = (NSString *)kCVPixelBufferPixelFormatTypeKey;
    NSNumber *value = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA];
    NSDictionary *videoSettings = [NSDictionary dictionaryWithObject:value forKey:key];

    // Passing the settings to the AVCaptureVideoDataOutput
    [captureOutput setVideoSettings:videoSettings];

    // Setting up the AVCaptureSession
    captureSession = [[AVCaptureSession alloc] init];
    captureSession.sessionPreset = AVCaptureSessionPresetMedium;
    [captureSession addInput:captureInput];
    [captureSession addOutput:captureOutput];

    if (!prevLayer)
    {
        prevLayer = [AVCaptureVideoPreviewLayer layerWithSession:captureSession];
    }
    NSLog(@"initCapture preview Layer %p %@", self.prevLayer, self.prevLayer);
    self.prevLayer.frame = self.view.bounds;
    self.prevLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
    [self.view.layer addSublayer:self.prevLayer];

    [self.captureSession startRunning];
}
Any help would be greatly appreciated...
The Apple-provided code you are using is outdated; they have fully rewritten it now. I'd try my luck and go for the new workflow.
Check it out here.
To close this thread up: we were using the camera to scan QR codes alongside libzxing. We decided to implement the native iOS 7.0 AVCaptureMetadataOutputObjectsDelegate instead of the older AVCaptureVideoDataOutputSampleBufferDelegate. The metadata delegate is much simpler and cleaner, and we found the example at http://nshipster.com/ios7/ very helpful.
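For anyone else making the same switch, the metadata-based setup is roughly as follows (a sketch of the approach, not our exact code):

// Replace the video data output with a metadata output for QR scanning (sketch).
AVCaptureMetadataOutput *metadataOutput = [[AVCaptureMetadataOutput alloc] init];
if ([captureSession canAddOutput:metadataOutput]) {
    [captureSession addOutput:metadataOutput];
    [metadataOutput setMetadataObjectsDelegate:self queue:dispatch_get_main_queue()];
    // Available types are only populated after the output is added to the session.
    metadataOutput.metadataObjectTypes = @[AVMetadataObjectTypeQRCode];
}

// AVCaptureMetadataOutputObjectsDelegate callback
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputMetadataObjects:(NSArray *)metadataObjects
       fromConnection:(AVCaptureConnection *)connection
{
    for (AVMetadataMachineReadableCodeObject *object in metadataObjects) {
        if ([object.type isEqualToString:AVMetadataObjectTypeQRCode]) {
            NSLog(@"QR payload: %@", object.stringValue);
        }
    }
}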
Here are some ideas to diagnose your problem:
You have no else case for if ([inputDevice lockForConfiguration:&cameraError]). Add one.
In the else case, log the error contained in cameraError.
You have no else case for if ([inputDevice isFocusModeSupported:AVCaptureFocusModeContinuousAutoFocus]). Add one; log that, or add a breakpoint there to test in your debugging.
You don't check the focusPointOfInterestSupported property before calling setFocusPointOfInterest:.
Consider calling setFocusMode: before setFocusPointOfInterest: (not sure if it matters, but that's what I have).
In general, you may want to do all your checks before attempting to lock the configuration; a sketch of the checked configuration follows this list.
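Putting those checks together, the focus configuration might look something like this (a sketch only, based on the code in the question):

NSError *cameraError = nil;
if ([inputDevice lockForConfiguration:&cameraError]) {
    if ([inputDevice isFocusModeSupported:AVCaptureFocusModeContinuousAutoFocus]) {
        [inputDevice setFocusMode:AVCaptureFocusModeContinuousAutoFocus];
    }
    if ([inputDevice isFocusPointOfInterestSupported]) {
        [inputDevice setFocusPointOfInterest:CGPointMake(0.5f, 0.5f)];
    }
    [inputDevice unlockForConfiguration];
} else {
    // Log the error so a failed lock doesn't pass silently.
    NSLog(@"lockForConfiguration failed: %@", cameraError);
}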
Following neuman8's comment stating that something in libzxing is preventing the refocus, I did some investigating myself.
I found the following line in the Decoder.mm file to be the culprit.
ArrayRef<char> subsetData (subsetBytesPerRow * subsetHeight);
It seems that ArrayRef is a class in the zxing/common/Array.h file that attempts to allocate an array of the specified size. It did not seem to do anything wrong, but I guessed that allocating an array of roughly 170k char elements might take enough time to slow down the blocking call and prevent other threads from running.
So, I tried to just put in a brute force solution to test the hypothesis. I added a sleep just after the allocation.
[NSThread sleepForTimeInterval:0.02];
The camera started focusing again and was able to decipher the QR codes.
I am still unable to find a better way to resolve this. Can anyone figure out a more efficient allocation of the large array, or a more elegant way of yielding the thread for the camera focus? Otherwise, this should solve the problem for now, even if it is ugly.
If anything is playing or recording, how do we check whether the mic is available (idle) for recording? Currently using:
AVCaptureDevice *audioCaptureDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeAudio];
AVCaptureSession *captureSession = [[AVCaptureSession alloc] init];
NSError *error = nil;
AVCaptureDeviceInput *audioInput = [AVCaptureDeviceInput deviceInputWithDevice:audioCaptureDevice error:&error];
AVCaptureAudioDataOutput *audioOutput = [[AVCaptureAudioDataOutput alloc] init];
[captureSession addInput:audioInput];
[captureSession addOutput:audioOutput];
[captureSession startRunning];
We need to check before grabbing the mic / playback from something that already has it.
The mic device cannot be busy, and access to it cannot be locked: even if you call [AVCaptureDevice lockForConfiguration:] on a mic device, it will not lock it out, and it remains accessible to the foreground application.
To see if other audio is playing, you can check kAudioSessionProperty_OtherAudioIsPlaying, e.g.:
UInt32 propertySize, audioIsAlreadyPlaying=0;
propertySize = sizeof(UInt32);
AudioSessionGetProperty(kAudioSessionProperty_OtherAudioIsPlaying, &propertySize, &audioIsAlreadyPlaying);
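If you prefer to avoid the deprecated C API, AVAudioSession exposes the same information (a brief sketch; requires iOS 6 or later):

// Equivalent check via AVAudioSession (iOS 6+).
BOOL otherAudioPlaying = [AVAudioSession sharedInstance].otherAudioPlaying;
if (otherAudioPlaying) {
    NSLog(@"Another app is currently playing audio");
}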
Additionally, the Audio Session Programming Guide states: "There is no programmatic way to ensure that an audio session is never interrupted. The reason is that iOS always gives priority to the phone. iOS also gives high priority to certain alarms and alerts."