Mute Audio AVCaptureSession - iOS

I am trying to mute and unmute the audio for an AVCaptureSession. Once I start the session I can enable and disable the audio connection, but when I play back the video, all of the audio portions are packed back to back at the front of the video, leaving the end of the video with no sound. This looks like a time stamp issue, but I can't see how. Just in case, I have tried adjusting the PTS of each audio sample buffer to match the previous video buffer.
For pausing the audio:
if (self.muted) {
    [self.session beginConfiguration];
    self.audioConnection.enabled = NO;
    [self.session commitConfiguration];
} else {
    [self.session beginConfiguration];
    self.audioConnection.enabled = YES;
    [self.session commitConfiguration];
}
To adjust the time stamp, I first grab the last video timestamp:
if ( connection == self.videoConnection ) {
    // Get timestamp
    CMTime timestamp = CMSampleBufferGetPresentationTimeStamp( sampleBuffer );
    testPreviousTimeStamp = timestamp;
}
Then I adjust the timestamp on the audio sample buffer before appending it:
CMSampleBufferSetOutputPresentationTimeStamp(sampleBuffer, testPreviousTimeStamp);
if (![self.assetWriterAudioIn appendSampleBuffer:sampleBuffer]) {
    [self showError:[self.assetWriter error]];
    NSLog(@"Problem writing audio sample buffer");
}
Any ideas what the problem could be and how to fix it?

Rather than adjusting the time stamps, which I had no luck with, I just write zeros into the audio data buffer while muted. It works, and this way I don't need to disable/enable any connections.
// Write audio data to file
if (readyToRecordAudio && readyToRecordVideo) {
    if (self.muted) {
        CMBlockBufferRef buffRef = CMSampleBufferGetDataBuffer(sampleBuffer);
        char fillByte = 0;
        CMBlockBufferFillDataBytes(fillByte, buffRef, 0, CMBlockBufferGetDataLength(buffRef));
    }
    [self writeSampleBuffer:sampleBuffer ofType:AVMediaTypeAudio];
}
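For context, a minimal sketch of where that zero-fill lives, assuming the usual AVCaptureAudioDataOutputSampleBufferDelegate callback and that self.audioConnection, self.muted, and writeSampleBuffer:ofType: exist as in the snippets above:

// Hypothetical delegate method showing where the zero-fill belongs.
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    if (connection == self.audioConnection && readyToRecordAudio && readyToRecordVideo) {
        if (self.muted) {
            // Zero the PCM payload in place; timestamps are untouched,
            // so audio and video stay in sync.
            CMBlockBufferRef buffRef = CMSampleBufferGetDataBuffer(sampleBuffer);
            CMBlockBufferFillDataBytes(0, buffRef, 0, CMBlockBufferGetDataLength(buffRef));
        }
        [self writeSampleBuffer:sampleBuffer ofType:AVMediaTypeAudio];
    }
}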

Related

Objective-C: Mute/unmute audio while recording video using AVFoundation on iOS

I am using the sample provided by Apple at this link to record and save video.
I want the ability to mute and unmute audio before recording the video.
In Objective-C, I tried the code below to mute/unmute on a button click before starting the video recording, but the video still gets recorded with audio.
I also tried without calling beginConfiguration and commitConfiguration on the session object, but the issue persists.
Any idea how to handle this in Objective-C?
- (IBAction)muteAudio:(id)sender
{
    self.muteAudio = !self.muteAudio;
    NSError *error = nil;
    [self.session beginConfiguration];
    if ( self.muteAudio == NO )
    {
        // Add audio input.
        AVCaptureDevice *audioDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeAudio];
        AVCaptureDeviceInput *audioDeviceInput = [AVCaptureDeviceInput deviceInputWithDevice:audioDevice error:&error];
        if ( ! audioDeviceInput ) {
            NSLog( @"Could not create audio device input: %@", error );
        }
        if ( [self.session canAddInput:audioDeviceInput] ) {
            [self.session addInput:audioDeviceInput];
        }
        else {
            NSLog( @"Could not add audio device input to the session" );
        }
    }
    else
    {
        // Remove audio input.
        AVCaptureDevice *audioDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeAudio];
        AVCaptureDeviceInput *audioDeviceInput = [AVCaptureDeviceInput deviceInputWithDevice:audioDevice error:&error];
        if ( ! audioDeviceInput ) {
            NSLog( @"Could not create audio device input: %@", error );
        }
        [self.session removeInput:audioDeviceInput];
    }
    [self.session commitConfiguration];
}
Found the solution. Add the code below to the toggleMovieRecording method, which is called when you hit the record button.
AVCaptureConnection *audioConnection = [self.movieFileOutput connectionWithMediaType:AVMediaTypeAudio];
audioConnection.enabled = !self.muteAudio;
Here is the method after adding the logic to disable/enable audio:
- (IBAction)toggleMovieRecording:(id)sender
{
    /*
     Disable the Camera button until recording finishes, and disable
     the Record button until recording starts or finishes.
     See the AVCaptureFileOutputRecordingDelegate methods.
    */
    self.cameraButton.enabled = NO;
    self.recordButton.enabled = NO;
    self.captureModeControl.enabled = NO;

    /*
     Retrieve the video preview layer's video orientation on the main queue
     before entering the session queue. We do this to ensure UI elements are
     accessed on the main thread and session configuration is done on the session queue.
    */
    AVCaptureVideoOrientation videoPreviewLayerVideoOrientation = self.previewView.videoPreviewLayer.connection.videoOrientation;

    dispatch_async( self.sessionQueue, ^{
        if ( ! self.movieFileOutput.isRecording ) {
            if ( [UIDevice currentDevice].isMultitaskingSupported ) {
                /*
                 Set up a background task.
                 This is needed because the -[captureOutput:didFinishRecordingToOutputFileAtURL:fromConnections:error:]
                 callback is not received until AVCam returns to the foreground unless you request background execution time.
                 This also ensures that there will be time to write the file to the photo library when AVCam is backgrounded.
                 To conclude this background execution, -[endBackgroundTask:] is called in
                 -[captureOutput:didFinishRecordingToOutputFileAtURL:fromConnections:error:] after the recorded file has been saved.
                */
                self.backgroundRecordingID = [[UIApplication sharedApplication] beginBackgroundTaskWithExpirationHandler:nil];
            }

            // Update the orientation on the movie file output video connection before starting recording.
            AVCaptureConnection *movieFileOutputConnection = [self.movieFileOutput connectionWithMediaType:AVMediaTypeVideo];
            movieFileOutputConnection.videoOrientation = videoPreviewLayerVideoOrientation;

            // Code to enable and disable audio in the recorded video file.
            AVCaptureConnection *audioConnection = [self.movieFileOutput connectionWithMediaType:AVMediaTypeAudio];
            audioConnection.enabled = !self.muteAudio;

            // Start recording to a temporary file.
            NSString *outputFileName = [NSUUID UUID].UUIDString;
            NSString *outputFilePath = [NSTemporaryDirectory() stringByAppendingPathComponent:[outputFileName stringByAppendingPathExtension:@"mov"]];
            [self.movieFileOutput startRecordingToOutputFileURL:[NSURL fileURLWithPath:outputFilePath] recordingDelegate:self];
        }
        else {
            [self.movieFileOutput stopRecording];
        }
    } );
}

Record video while another video is playing

I am using UIImagePickerController to record a video and AVPlayer to play a video, adding an AVPlayerLayer to UIImagePickerController's cameraOverlayView so that I can see the video while recording.
My requirements are:
1. I need to watch a video while recording video using UIImagePickerController.
2. Using a headset, I need to listen to the audio of the playing video.
3. My voice needs to be recorded into the recording video.
4. Only my voice should be recorded, not the playing video's audio.
Everything works except 4: the audio from the playing video also mixes with my voice. How do I handle this case? My final goal is:
Output for the playing video is the headset
Input for the recording is the headset's mic
Please help me to get this done.
Your requirement is interesting. So you need to play and record at the same time, right?
To do that, you will need to initialize the audio session with the category AVAudioSessionCategoryPlayAndRecord.
[[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryPlayAndRecord error:&error];
Because you are using UIImagePickerController to record, you don't have much control over the speaker and the mic, so test and see if it works.
If you still have problems, I suggest using AVCaptureSession to record video without audio. Look at this example of how to use it: record-video-with-avcapturesession-2.
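For reference, a minimal sketch of that suggestion, a capture session with only a video input so no audio can leak into the recording (variable names here are illustrative):

// Hypothetical video-only capture setup: no audio input is ever added,
// so the playing video's sound cannot end up in the recording.
AVCaptureSession *session = [[AVCaptureSession alloc] init];
AVCaptureDevice *camera = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
NSError *error = nil;
AVCaptureDeviceInput *videoInput = [AVCaptureDeviceInput deviceInputWithDevice:camera error:&error];
if ( videoInput && [session canAddInput:videoInput] ) {
    [session addInput:videoInput];
}
AVCaptureMovieFileOutput *movieOutput = [[AVCaptureMovieFileOutput alloc] init];
if ( [session canAddOutput:movieOutput] ) {
    [session addOutput:movieOutput];
}
[session startRunning];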
UPDATE: In my VoIP application, I use an Audio Unit to record while playing back. So I think the only way is to record the video and the audio separately and then use AVComposition to compose them into a single movie. Use AVCaptureSession to record video only, and use EZAudio to record the audio. EZAudio records via an Audio Unit, so it should work. You can test it by recording audio while playing a movie and seeing if it works. I hope it helps.
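A minimal sketch of that composition step, assuming separately recorded files at videoURL and audioURL (hypothetical names) and an outputURL destination, using AVMutableComposition:

// Hypothetical merge of separately recorded video and audio tracks.
AVMutableComposition *composition = [AVMutableComposition composition];
AVAsset *videoAsset = [AVAsset assetWithURL:videoURL]; // assumed recording
AVAsset *audioAsset = [AVAsset assetWithURL:audioURL]; // assumed recording

AVMutableCompositionTrack *videoTrack =
    [composition addMutableTrackWithMediaType:AVMediaTypeVideo
                             preferredTrackID:kCMPersistentTrackID_Invalid];
[videoTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, videoAsset.duration)
                    ofTrack:[[videoAsset tracksWithMediaType:AVMediaTypeVideo] firstObject]
                     atTime:kCMTimeZero
                      error:nil];

AVMutableCompositionTrack *audioTrack =
    [composition addMutableTrackWithMediaType:AVMediaTypeAudio
                             preferredTrackID:kCMPersistentTrackID_Invalid];
[audioTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, videoAsset.duration)
                    ofTrack:[[audioAsset tracksWithMediaType:AVMediaTypeAudio] firstObject]
                     atTime:kCMTimeZero
                      error:nil];

// Export the combined movie to disk.
AVAssetExportSession *export =
    [[AVAssetExportSession alloc] initWithAsset:composition
                                     presetName:AVAssetExportPresetHighestQuality];
export.outputURL = outputURL; // assumed destination URL
export.outputFileType = AVFileTypeQuickTimeMovie;
[export exportAsynchronouslyWithCompletionHandler:^{
    NSLog(@"Export status: %ld", (long)export.status);
}];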
UPDATE: I tested, and it only works if you use headphones or select the back microphone.
Here is the tested code:
NSString *moviePath = [[NSBundle mainBundle] pathForResource:@"videoviewdemo" ofType:@"mp4"];
NSURL *url = [NSURL fileURLWithPath:moviePath];
// You may find a test stream at <http://devimages.apple.com/iphone/samples/bipbop/bipbopall.m3u8>.
AVPlayerItem *playerItem = [AVPlayerItem playerItemWithURL:url];
AVPlayer *player = [AVPlayer playerWithPlayerItem:playerItem];
AVPlayerLayer *layer = [[AVPlayerLayer alloc] init];
[layer setPlayer:player];
[layer setFrame:CGRectMake(0, 0, 100, 100)];
[self.view.layer addSublayer:layer];
[player play];

dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(1 * NSEC_PER_SEC)), dispatch_get_main_queue(), ^{
    //
    // Set up the AVAudioSession. EZMicrophone will not work properly on iOS
    // if you don't do this!
    //
    AVAudioSession *session = [AVAudioSession sharedInstance];
    NSError *error;
    [session setCategory:AVAudioSessionCategoryPlayAndRecord error:&error];
    if (error)
    {
        NSLog(@"Error setting up audio session category: %@", error.localizedDescription);
    }
    [session setActive:YES error:&error];
    if (error)
    {
        NSLog(@"Error setting up audio session active: %@", error.localizedDescription);
    }

    //
    // Customize the audio plot's look
    //
    // Background color
    self.audioPlot.backgroundColor = [UIColor colorWithRed:0.984 green:0.471 blue:0.525 alpha:1.0];
    // Waveform color
    self.audioPlot.color = [UIColor colorWithRed:1.0 green:1.0 blue:1.0 alpha:1.0];
    // Plot type
    self.audioPlot.plotType = EZPlotTypeBuffer;

    //
    // Create the microphone
    //
    self.microphone = [EZMicrophone microphoneWithDelegate:self];

    //
    // Set up the microphone input UIPickerView items to select
    // between different microphone inputs. Under the hood this
    // enumerates the available inputs provided by the AVAudioSession.
    //
    self.inputs = [EZAudioDevice inputDevices];
    self.microphoneInputPickerView.dataSource = self;
    self.microphoneInputPickerView.delegate = self;

    //
    // Start the microphone
    //
    [self.microphone startFetchingAudio];
    self.microphoneTextLabel.text = @"Microphone On";
    [[AVAudioSession sharedInstance] overrideOutputAudioPort:AVAudioSessionPortOverrideSpeaker error:nil];
});
Take a look at the PBJVision library. It allows you to record video while watching the preview, and at the end you can do whatever you want with the audio and video footage.
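A rough sketch of how PBJVision is typically wired up (method and property names from memory, so verify them against the library's README):

// Rough PBJVision usage sketch; names should be checked against the README.
PBJVision *vision = [PBJVision sharedInstance];
vision.delegate = self;                  // adopt PBJVisionDelegate
vision.cameraMode = PBJCameraModeVideo;

// Show the live preview.
vision.previewLayer.frame = self.previewView.bounds;
[self.previewView.layer addSublayer:vision.previewLayer];
[vision startPreview];

// Start and stop recording; the delegate receives the captured footage.
[vision startVideoCapture];
// ... later ...
[vision endVideoCapture];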

Upload audio clip in real time while it's recording?

How can I upload an audio clip to a server in real time while it is still recording? Basically, my requirement is to upload the audio clip in chunks/packets while it is recording.
I already did the recording part using IQAudioRecorderController (https://github.com/hackiftekhar/IQAudioRecorderController). It records the audio and saves it to the temporary directory.
I want to know how to upload in real time, without waiting for the recording to finish.
This is the recording part:
//Unique recording URL
NSString *fileName = [[NSProcessInfo processInfo] globallyUniqueString];
_recordingFilePath = [NSTemporaryDirectory() stringByAppendingPathComponent:[NSString stringWithFormat:@"%@.m4a", fileName]];
// Initiate and prepare the recorder
_audioRecorder = [[AVAudioRecorder alloc] initWithURL:[NSURL fileURLWithPath:_recordingFilePath] settings:recordSetting error:nil];
_audioRecorder.delegate = self;
_audioRecorder.meteringEnabled = YES;
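The recordSetting dictionary is not shown in the question; as a point of reference, a typical AAC configuration for an .m4a recording might look like this (values are assumptions, not taken from IQAudioRecorderController):

// Assumed example settings for an AVAudioRecorder producing AAC in .m4a.
NSDictionary *recordSetting = @{
    AVFormatIDKey            : @(kAudioFormatMPEG4AAC),
    AVSampleRateKey          : @44100.0,
    AVNumberOfChannelsKey    : @1,
    AVEncoderAudioQualityKey : @(AVAudioQualityMedium),
};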
// Recording start
- (void)recordingButtonAction:(UIBarButtonItem *)item
{
    if (_isRecording == NO)
    {
        _isRecording = YES;

        // UI update
        {
            [self showNavigationButton:NO];
            _recordButton.tintColor = _recordingTintColor;
            _playButton.enabled = NO;
            _trashButton.enabled = NO;
        }

        /*
         Create the recorder
        */
        if ([[NSFileManager defaultManager] fileExistsAtPath:_recordingFilePath])
        {
            [[NSFileManager defaultManager] removeItemAtPath:_recordingFilePath error:nil];
        }

        _oldSessionCategory = [[AVAudioSession sharedInstance] category];
        [[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryRecord error:nil];
        [_audioRecorder prepareToRecord];
        [_audioRecorder record];
    }
    else
    {
        _isRecording = NO;

        // UI update
        {
            [self showNavigationButton:YES];
            _recordButton.tintColor = _normalTintColor;
            _playButton.enabled = YES;
            _trashButton.enabled = YES;
        }

        [_audioRecorder stop];
        [[AVAudioSession sharedInstance] setCategory:_oldSessionCategory error:nil];
    }
}
// Recording done
- (void)doneAction:(UIBarButtonItem *)item
{
    if ([self.delegate respondsToSelector:@selector(audioRecorderController:didFinishWithAudioAtPath:)])
    {
        IQAudioRecorderController *controller = (IQAudioRecorderController *)[self navigationController];
        [self.delegate audioRecorderController:controller didFinishWithAudioAtPath:_recordingFilePath];
    }
    [self dismissViewControllerAnimated:YES completion:nil];
}
There are various ways of solving this; one is to create your own audio graph. The audio graph can grab samples from the microphone or from a file. You then proceed to an output unit, but install a callback to get the sampled frames, and push those to your network class, which can then upload packet by packet.
A good example that shows how to write these captured packets to disk is AVCaptureAudioDataOutput.
In that example, packets are written using ExtAudioFileWriteAsync. You have to replace this with your own logic for uploading to a server. Note that while you can do that easily, one problem is that this gives you raw audio samples. If you need them as a WAV file or similar, you may need to wait until recording is finished, since the file header needs information about the contained audio samples.
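As an illustration of that callback approach, here is a minimal sketch using an AVAudioEngine input tap (a modern stand-in for a hand-built audio graph); self.uploader and uploadChunk: are hypothetical networking pieces:

// Minimal sketch: tap the microphone and hand raw PCM chunks to a
// hypothetical uploader as they arrive. Assumes an active
// AVAudioSessionCategoryRecord (or PlayAndRecord) session.
AVAudioEngine *engine = [[AVAudioEngine alloc] init];
AVAudioInputNode *input = engine.inputNode;
AVAudioFormat *format = [input outputFormatForBus:0];

[input installTapOnBus:0
            bufferSize:4096
                format:format
                 block:^(AVAudioPCMBuffer *buffer, AVAudioTime *when) {
    // Wrap the raw float samples of channel 0 and push them to the network layer.
    NSData *chunk = [NSData dataWithBytes:buffer.floatChannelData[0]
                                   length:buffer.frameLength * sizeof(float)];
    [self.uploader uploadChunk:chunk]; // hypothetical upload call
}];

NSError *error = nil;
[engine startAndReturnError:&error];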
The code you are currently using will work if you want to upload the recorded file after recording is done, since it gives you the final recorded file.
If you want to upload a live audio recording to the server, then I think you have to go with a combination of:
AudioSession for the recording
ffmpeg for uploading the live audio to the server
You can get good help with recording audio and managing audio buffers from here.
For ffmpeg, I think you have to learn a lot. It is easy to send a static/saved audio file to a server using ffmpeg, but sending a live audio buffer to the server will be a tricky job.

iOS Objective-C low-latency audio playback like SoundPool

Hey, in my Android apps I can preload my sounds in a SoundPool and then play them with almost no latency at all. Now I am looking for the same thing on iOS/Objective-C, but I just can't find anything similar.
I followed a couple of tutorials, but eventually there was a bigger lag than I expected, and most of the tutorials advise converting your audio to an uncompressed format like WAV or CAF. But my MP3s are already 14 MB, and converting them to lossless audio leads to 81 MB of data, which is way too much for me.
The most promising thing I tried was preloading the file (just like I did with Android's SoundPool), as shown in this OAL example:
- (bool) preloadUrl:(NSURL *)url seekTime:(NSTimeInterval)seekTime
{
    if (nil == url)
    {
        OAL_LOG_ERROR(@"%@: Cannot open NULL file / url", self);
        return NO;
    }
    OPTIONALLY_SYNCHRONIZED(self)
    {
        // Bug: No longer re-using AVAudioPlayer because of bugs when using multiple players.
        // Playing two tracks, then stopping one and starting it again will cause prepareToPlay to fail.
        bool wasPlaying = playing;
        [self stopActions];
        if (playing || paused)
        {
            [player stop];
        }
        as_release(player);

        if (wasPlaying)
        {
            [[NSNotificationCenter defaultCenter] performSelectorOnMainThread:@selector(postNotification:)
                                                                   withObject:[NSNotification notificationWithName:OALAudioTrackStoppedPlayingNotification object:self]
                                                                waitUntilDone:NO];
        }

        NSError *error;
        player = [[AVAudioPlayer alloc] initWithContentsOfURL:url error:&error];
        if (nil == player)
        {
            OAL_LOG_ERROR(@"%@: Could not load URL %@: %@", self, url, [error localizedDescription]);
            return NO;
        }

        player.volume = muted ? 0 : gain;
        player.numberOfLoops = numberOfLoops;
        player.meteringEnabled = meteringEnabled;
        player.delegate = self;
        player.pan = pan;

        as_release(currentlyLoadedUrl);
        currentlyLoadedUrl = as_retain(url);

        self.currentTime = seekTime;
        playing = NO;
        paused = NO;

        BOOL allOK = [player prepareToPlay];
        if (!allOK)
        {
            OAL_LOG_ERROR(@"%@: Failed to prepareToPlay: %@", self, url);
        }
        else
        {
            [[NSNotificationCenter defaultCenter] performSelectorOnMainThread:@selector(postNotification:)
                                                                   withObject:[NSNotification notificationWithName:OALAudioTrackSourceChangedNotification object:self]
                                                                waitUntilDone:NO];
        }
        preloaded = allOK;
        return allOK;
    }
}
But this still introduces a considerable delay of about 60 ms, which is way too much for an audio app like mine. My audio files don't have any silence at the beginning, so it must have something to do with the code.
I tried all of this on an iPhone 5c.
You should be able to create several AVAudioPlayers and call prepareToPlay on them, but personally I like to use AVAssetReader to keep a buffer of LPCM audio ready to play at a moment's notice.
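A minimal sketch of that decode-ahead approach, assuming a local soundURL (hypothetical) for the compressed file; the decoded LPCM can then be fed to an Audio Unit or AVAudioEngine for near-instant playback:

// Decode a compressed file into LPCM up front so playback can start
// with minimal latency. soundURL is an assumed local file URL.
AVURLAsset *asset = [AVURLAsset URLAssetWithURL:soundURL options:nil];
NSError *error = nil;
AVAssetReader *reader = [AVAssetReader assetReaderWithAsset:asset error:&error];

NSDictionary *lpcmSettings = @{
    AVFormatIDKey               : @(kAudioFormatLinearPCM),
    AVSampleRateKey             : @44100.0,
    AVNumberOfChannelsKey       : @2,
    AVLinearPCMBitDepthKey      : @16,
    AVLinearPCMIsFloatKey       : @NO,
    AVLinearPCMIsNonInterleaved : @NO,
};
AVAssetTrack *track = [[asset tracksWithMediaType:AVMediaTypeAudio] firstObject];
AVAssetReaderTrackOutput *output =
    [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:track outputSettings:lpcmSettings];
[reader addOutput:output];
[reader startReading];

// Accumulate decoded PCM; a real app would copy this into a ring buffer
// that a render callback consumes.
NSMutableData *pcm = [NSMutableData data];
CMSampleBufferRef sampleBuffer;
while ((sampleBuffer = [output copyNextSampleBuffer]) != NULL) {
    CMBlockBufferRef block = CMSampleBufferGetDataBuffer(sampleBuffer);
    size_t length = CMBlockBufferGetDataLength(block);
    [pcm increaseLengthBy:length];
    CMBlockBufferCopyDataBytes(block, 0, length,
                               (char *)pcm.mutableBytes + pcm.length - length);
    CFRelease(sampleBuffer);
}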

Where is the audio sample buffer in the OpenTok (TokBox) iOS SDK?

I am using the OpenTok iOS SDK to stream from iPhone to Chrome. What I would like to do is record a high-res version of the video while streaming.
Using a custom video capturer via the OTVideoCapture interface from Example 2, Let's Build OTPublisher, I can successfully record the video sample buffer to a file. The problem is, I cannot find any reference to the audio data gathered from the microphone.
I assume an audio input (AVCaptureDeviceInput) feeding an audio output (AVCaptureAudioDataOutput) via AVCaptureAudioDataOutputSampleBufferDelegate is used somewhere.
Does anyone know how to access it from the OpenTok iOS SDK?
In captureOutput:didOutputSampleBuffer:fromConnection:, the fromConnection parameter differentiates the audio and video connections and provides the corresponding buffer.
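For illustration, a minimal sketch of that differentiation, assuming _videoConnection and _audioConnection were stored when the outputs were added, and with hypothetical handler methods:

// Hypothetical delegate callback: route buffers by connection.
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    if (connection == _audioConnection) {
        // Microphone audio arrives here.
        [self handleAudioSampleBuffer:sampleBuffer];  // hypothetical handler
    } else if (connection == _videoConnection) {
        // Camera frames arrive here.
        [self handleVideoSampleBuffer:sampleBuffer];  // hypothetical handler
    }
}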
To set up the audio input/output, you can try the following in Let-Build-OTPublisher's initCapture method:
// Add audio input / output
AVCaptureDevice *audioDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeAudio];
_audioInput = [AVCaptureDeviceInput deviceInputWithDevice:audioDevice error:nil];
if ([_captureSession canAddInput:_audioInput])
{
    NSLog(@"added audio device input");
    [_captureSession addInput:_audioInput];
}
_audioOutput = [[AVCaptureAudioDataOutput alloc] init];
if ([_captureSession canAddOutput:_audioOutput])
{
    NSLog(@"audio output added");
    [_captureSession addOutput:_audioOutput];
}
[_audioOutput setSampleBufferDelegate:self queue:_capture_queue];
