How do I upload an audio clip to a server in real time while it is being recorded? Basically, my requirement is to upload the audio as chunks/packets while recording is still in progress.
I have already done the recording part using IQAudioRecorderController (https://github.com/hackiftekhar/IQAudioRecorderController). It records the audio and saves it to the temporary directory.
I want to know how to upload in real time without first saving the audio clip.
This is the recording part
//Unique recording URL
NSString *fileName = [[NSProcessInfo processInfo] globallyUniqueString];
_recordingFilePath = [NSTemporaryDirectory() stringByAppendingPathComponent:[NSString stringWithFormat:@"%@.m4a", fileName]];
// Initiate and prepare the recorder
_audioRecorder = [[AVAudioRecorder alloc] initWithURL:[NSURL fileURLWithPath:_recordingFilePath] settings:recordSetting error:nil];
_audioRecorder.delegate = self;
_audioRecorder.meteringEnabled = YES;
// Recording start
- (void)recordingButtonAction:(UIBarButtonItem *)item
{
if (_isRecording == NO)
{
_isRecording = YES;
//UI Update
{
[self showNavigationButton:NO];
_recordButton.tintColor = _recordingTintColor;
_playButton.enabled = NO;
_trashButton.enabled = NO;
}
/*
Create the recorder
*/
if ([[NSFileManager defaultManager] fileExistsAtPath:_recordingFilePath])
{
[[NSFileManager defaultManager] removeItemAtPath:_recordingFilePath error:nil];
}
_oldSessionCategory = [[AVAudioSession sharedInstance] category];
[[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryRecord error:nil];
[_audioRecorder prepareToRecord];
[_audioRecorder record];
}
else
{
_isRecording = NO;
//UI Update
{
[self showNavigationButton:YES];
_recordButton.tintColor = _normalTintColor;
_playButton.enabled = YES;
_trashButton.enabled = YES;
}
[_audioRecorder stop];
[[AVAudioSession sharedInstance] setCategory:_oldSessionCategory error:nil];
}
}
// Recording done
-(void)doneAction:(UIBarButtonItem*)item
{
if ([self.delegate respondsToSelector:@selector(audioRecorderController:didFinishWithAudioAtPath:)])
{
IQAudioRecorderController *controller = (IQAudioRecorderController*)[self navigationController];
[self.delegate audioRecorderController:controller didFinishWithAudioAtPath:_recordingFilePath];
}
[self dismissViewControllerAnimated:YES completion:nil];
}
There are various ways of solving this. One way is to create your own audio graph (AUGraph). The graph can grab samples from the microphone or from a file. You then route the audio to an output unit, but install a callback to get the sampled frames. Those frames you push to your network class, which can then upload packet by packet.
A good example that shows how to write these captured packets to disk is the AVCaptureAudioDataOutput sample code.
In that example, packets are written using ExtAudioFileWriteAsync. You have to replace this with your own logic for uploading to a server. Note that while you can do that easily, one problem is that this gives you raw audio samples. If you need them as a WAV file or similar, you may need to wait until recording is finished, since the file header needs information about the contained audio samples.
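For illustration, here is a minimal sketch of that callback approach using AVCaptureAudioDataOutput; self.captureSession, self.uploader and its appendData: method are hypothetical placeholders for your own session property and network class.
#import <AVFoundation/AVFoundation.h>
#import <CoreMedia/CoreMedia.h>
// Set up a capture session that delivers raw audio buffers to a delegate callback.
- (void)startStreamingCapture
{
    self.captureSession = [[AVCaptureSession alloc] init];
    AVCaptureDevice *mic = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeAudio];
    AVCaptureDeviceInput *micInput = [AVCaptureDeviceInput deviceInputWithDevice:mic error:nil];
    [self.captureSession addInput:micInput];
    AVCaptureAudioDataOutput *audioOutput = [[AVCaptureAudioDataOutput alloc] init];
    [audioOutput setSampleBufferDelegate:self
                                   queue:dispatch_queue_create("audio.capture", DISPATCH_QUEUE_SERIAL)];
    [self.captureSession addOutput:audioOutput];
    [self.captureSession startRunning];
}
// Called for every captured audio buffer while recording is in progress.
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    CMBlockBufferRef blockBuffer = CMSampleBufferGetDataBuffer(sampleBuffer);
    size_t length = CMBlockBufferGetDataLength(blockBuffer);
    NSMutableData *chunk = [NSMutableData dataWithLength:length];
    CMBlockBufferCopyDataBytes(blockBuffer, 0, length, chunk.mutableBytes);
    // Push the raw PCM chunk to your own network layer instead of ExtAudioFileWriteAsync.
    [self.uploader appendData:chunk];
}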
The code you are currently using will work if you only want to upload the recorded file after recording is done, since it gives you the final recorded file.
If you want to upload a live audio recording to the server, then I think you have to go with a combination of:
AudioSession for the recording part
ffmpeg for uploading your live audio to the server.
You can get good help for recording audio and managing audio buffers from here.
For ffmpeg, I think you have to learn a lot. It is easy to send a static/saved audio file to a server using ffmpeg, but sending live audio buffers to a server will be a tricky job.
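If you go the Audio Queue route for the recording side, the core of it is just the input callback: every time a buffer fills, hand its bytes to your networking code and re-enqueue the buffer. A minimal sketch, where sendBufferToServer is a hypothetical function you would implement yourself:
#import <AudioToolbox/AudioToolbox.h>
// Hypothetical networking hook: push one chunk of recorded bytes to the server.
void sendBufferToServer(const void *bytes, UInt32 length);
// Audio Queue input callback: fired each time a recording buffer has been filled.
static void MyInputCallback(void *inUserData,
                            AudioQueueRef inAQ,
                            AudioQueueBufferRef inBuffer,
                            const AudioTimeStamp *inStartTime,
                            UInt32 inNumberPacketDescriptions,
                            const AudioStreamPacketDescription *inPacketDescs)
{
    sendBufferToServer(inBuffer->mAudioData, inBuffer->mAudioDataByteSize);
    // Re-enqueue the buffer so the queue keeps recording into it.
    AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, NULL);
}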
Related
I am using UIImagePickerController to record a video and AVPlayer to play a video, adding an AVPlayerLayer to UIImagePickerController's cameraOverlayView so that I can see the video while recording.
My requirements are:
I need to watch a video while recording my own video using UIImagePickerController.
Using a headset, I need to listen to the audio of the playing video.
I need to record my voice into the video being recorded.
Only my voice should be recorded, not the playing video's audio.
Everything works except number 4: the audio from the playing video also mixes with my voice. How do I handle this case? My final goal is:
Output for the playing video goes to the headset.
Input for the recording comes from the headset's mic.
Please help me to get this done.
Your requirement is interesting. So you need to play and record at the same time, right?
To do that, you will need to initialize the audio session with the category AVAudioSessionCategoryPlayAndRecord:
[[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryPlayAndRecord error:&error];
Because you are using UIImagePickerController to record, you don't have much control over the speaker and the mic. So test and see whether it works.
In case you still have a problem, I suggest using AVCaptureSession to record the video without audio. Look at this example of how to use it: record-video-with-avcapturesession-2.
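If it helps, a minimal sketch of what recording video without audio looks like: the session simply never gets an audio input, so the resulting movie has no sound track (outputURL and the recording delegate are placeholders).
AVCaptureSession *session = [[AVCaptureSession alloc] init];
// Add only the camera; with no audio input, the movie file contains no sound track.
AVCaptureDevice *camera = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
AVCaptureDeviceInput *videoInput = [AVCaptureDeviceInput deviceInputWithDevice:camera error:nil];
if ([session canAddInput:videoInput]) {
    [session addInput:videoInput];
}
AVCaptureMovieFileOutput *movieOutput = [[AVCaptureMovieFileOutput alloc] init];
if ([session canAddOutput:movieOutput]) {
    [session addOutput:movieOutput];
}
[session startRunning];
// The delegate must conform to AVCaptureFileOutputRecordingDelegate.
[movieOutput startRecordingToOutputFileURL:outputURL recordingDelegate:self];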
UPDATE: In my VoIP application, I use an audio unit to record while playing back. So I think the only way is to record the video and the audio separately and then use AVComposition to compose them into a single movie: use AVCaptureSession to record video only, and use EZAudio to record the audio. EZAudio records through an audio unit, so it should work. You can test it by recording audio while playing a movie and seeing if it works. I hope it helps.
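To merge the separately recorded video and audio afterwards, something along these lines should work; videoURL, audioURL and outputURL are placeholders for your own files.
AVMutableComposition *composition = [AVMutableComposition composition];
AVAsset *videoAsset = [AVAsset assetWithURL:videoURL];
AVAsset *audioAsset = [AVAsset assetWithURL:audioURL];
// Copy the video track from the video-only recording.
AVMutableCompositionTrack *videoTrack =
    [composition addMutableTrackWithMediaType:AVMediaTypeVideo
                             preferredTrackID:kCMPersistentTrackID_Invalid];
[videoTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, videoAsset.duration)
                    ofTrack:[[videoAsset tracksWithMediaType:AVMediaTypeVideo] firstObject]
                     atTime:kCMTimeZero error:nil];
// Copy the audio track from the voice recording.
AVMutableCompositionTrack *audioTrack =
    [composition addMutableTrackWithMediaType:AVMediaTypeAudio
                             preferredTrackID:kCMPersistentTrackID_Invalid];
[audioTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, audioAsset.duration)
                    ofTrack:[[audioAsset tracksWithMediaType:AVMediaTypeAudio] firstObject]
                     atTime:kCMTimeZero error:nil];
// Export the combined movie.
AVAssetExportSession *export =
    [[AVAssetExportSession alloc] initWithAsset:composition
                                     presetName:AVAssetExportPresetHighestQuality];
export.outputURL = outputURL;
export.outputFileType = AVFileTypeQuickTimeMovie;
[export exportAsynchronouslyWithCompletionHandler:^{ /* check export.status here */ }];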
UPDATE: I tested it, and it only works if you use headphones or select the back microphone.
Here is the tested code:
NSString *moviePath = [[NSBundle mainBundle] pathForResource:@"videoviewdemo" ofType:@"mp4"];
NSURL *url = [NSURL fileURLWithPath:moviePath];
// You may find a test stream at <http://devimages.apple.com/iphone/samples/bipbop/bipbopall.m3u8>.
AVPlayerItem *playerItem = [AVPlayerItem playerItemWithURL:url];
AVPlayer *player = [AVPlayer playerWithPlayerItem:playerItem];
AVPlayerLayer *layer = [[AVPlayerLayer alloc] init];
[layer setPlayer:player];
[layer setFrame:CGRectMake(0, 0, 100, 100)];
[self.view.layer addSublayer:layer];
[player play];
dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(1 * NSEC_PER_SEC)), dispatch_get_main_queue(), ^{
//
// Setup the AVAudioSession. EZMicrophone will not work properly on iOS
// if you don't do this!
//
AVAudioSession *session = [AVAudioSession sharedInstance];
NSError *error;
[session setCategory:AVAudioSessionCategoryPlayAndRecord error:&error];
if (error)
{
NSLog(#"Error setting up audio session category: %#", error.localizedDescription);
}
[session setActive:YES error:&error];
if (error)
{
NSLog(#"Error setting up audio session active: %#", error.localizedDescription);
}
//
// Customizing the audio plot's look
//
// Background color
self.audioPlot.backgroundColor = [UIColor colorWithRed:0.984 green:0.471 blue:0.525 alpha:1.0];
// Waveform color
self.audioPlot.color = [UIColor colorWithRed:1.0 green:1.0 blue:1.0 alpha:1.0];
// Plot type
self.audioPlot.plotType = EZPlotTypeBuffer;
//
// Create the microphone
//
self.microphone = [EZMicrophone microphoneWithDelegate:self];
//
// Set up the microphone input UIPickerView items to select
// between different microphone inputs. Here what we're doing behind the hood
// is enumerating the available inputs provided by the AVAudioSession.
//
self.inputs = [EZAudioDevice inputDevices];
self.microphoneInputPickerView.dataSource = self;
self.microphoneInputPickerView.delegate = self;
//
// Start the microphone
//
[self.microphone startFetchingAudio];
self.microphoneTextLabel.text = @"Microphone On";
[[AVAudioSession sharedInstance] overrideOutputAudioPort:AVAudioSessionPortOverrideSpeaker error:nil];
});
Take a look at the PBJVision library. It allows you to record video while you are watching the preview, and at the end you can do whatever you want with the audio and video footage.
Hey, in my Android apps I can preload my sounds in a SoundPool and then play them with almost no latency at all. Now I am looking for the same thing in iOS/Objective-C, but I just can't find anything similar.
I followed a couple of tutorials, but eventually there was a bigger lag than I expected. Most of the tutorials advise converting your audio to an uncompressed format like WAV or CAF, but my MP3s are already 14 MB, and converting them to uncompressed audio leads to 81 MB of data, which is way too much for me.
The most promising thing I tried was preloading the file (just like I did with Android's SoundPool), as shown in this OAL example:
- (bool) preloadUrl:(NSURL*) url seekTime:(NSTimeInterval)seekTime
{
if(nil == url)
{
OAL_LOG_ERROR(#"%#: Cannot open NULL file / url", self);
return NO;
}
OPTIONALLY_SYNCHRONIZED(self)
{
// Bug: No longer re-using AVAudioPlayer because of bugs when using multiple players.
// Playing two tracks, then stopping one and starting it again will cause prepareToPlay to fail.
bool wasPlaying = playing;
[self stopActions];
if(playing || paused)
{
[player stop];
}
as_release(player);
if(wasPlaying)
{
[[NSNotificationCenter defaultCenter] performSelectorOnMainThread:@selector(postNotification:) withObject:[NSNotification notificationWithName:OALAudioTrackStoppedPlayingNotification object:self] waitUntilDone:NO];
}
NSError* error;
player = [[AVAudioPlayer alloc] initWithContentsOfURL:url error:&error];
if(nil == player)
{
OAL_LOG_ERROR(#"%#: Could not load URL %#: %#", self, url, [error localizedDescription]);
return NO;
}
player.volume = muted ? 0 : gain;
player.numberOfLoops = numberOfLoops;
player.meteringEnabled = meteringEnabled;
player.delegate = self;
player.pan = pan;
as_release(currentlyLoadedUrl);
currentlyLoadedUrl = as_retain(url);
self.currentTime = seekTime;
playing = NO;
paused = NO;
BOOL allOK = [player prepareToPlay];
if(!allOK)
{
OAL_LOG_ERROR(#"%#: Failed to prepareToPlay: %#", self, url);
}
else
{
[[NSNotificationCenter defaultCenter] performSelectorOnMainThread:@selector(postNotification:) withObject:[NSNotification notificationWithName:OALAudioTrackSourceChangedNotification object:self] waitUntilDone:NO];
}
preloaded = allOK;
return allOK;
}
}
But this still introduces a considerable delay of about 60 ms, which is way too much for an audio app like mine. My audio files don't have any delay at the beginning, so it must have something to do with the code.
I tried all that stuff on an iPhone 5c.
You should be able to create several AVAudioPlayers and call prepareToPlay on them, but personally I like to use AVAssetReader to keep a buffer of LPCM audio ready to play at a moment's notice.
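A minimal sketch of the preloaded-players approach, where the players property and file names are just examples:
// Keep one prepared AVAudioPlayer per sound so playback can start immediately.
self.players = [NSMutableArray array];
for (NSString *name in @[@"sound1", @"sound2", @"sound3"]) {
    NSURL *url = [[NSBundle mainBundle] URLForResource:name withExtension:@"mp3"];
    AVAudioPlayer *player = [[AVAudioPlayer alloc] initWithContentsOfURL:url error:nil];
    [player prepareToPlay]; // pre-buffers the audio so play starts with less latency
    [self.players addObject:player];
}
// Later, when the sound should fire:
[self.players[0] play];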
I'm using Cordova 3.2 for text to speech and speech to text. Under iOS 7, AVSpeechSynthesizer is available and works very well. Here is the critical bit of the plugin:
self.synthesizer = [[AVSpeechSynthesizer alloc] init];
self.synthesizer.delegate = self;
NSString * toBeSpoken=[command.arguments objectAtIndex:0];
NSNumber * rate=[command.arguments objectAtIndex:1];
NSString * voice=[command.arguments objectAtIndex:2];
NSNumber * volume=[command.arguments objectAtIndex:3];
NSNumber * pitchMult=[command.arguments objectAtIndex:4];
AVSpeechUtterance *utt =[[AVSpeechUtterance alloc] initWithString:toBeSpoken];
utt.rate= [rate floatValue]/4;// this is to slow down the speech rate
utt.volume=[volume floatValue];
utt.pitchMultiplier=[pitchMult floatValue];
utt.voice=[AVSpeechSynthesisVoice voiceWithLanguage:voice];
[self.synthesizer speakUtterance:utt];
The problem occurs after the text is spoken. Using the Cordova Media call (backed by AVAudioRecorder) to record the response for voice-to-text conversion does something that disrupts the synthesizer output.
Some things that I've noticed during my attempts to figure this out:
Running in the simulator, there is no problem. In fact, I have to be careful to wait for the speech to end before recording; otherwise the recording will pick up the output through the microphone.
On the iPad 3 with iOS 7+, starting the recording pauses the synthesizer output until the recorded file is played back. The media reference to the file is released after a successful recording.
After recording, the synthesizer delegate receives responses:
TTSPlugin did start speaking
TTSPlugin will speak in range of speech string.
TTSPlugin will speak in range of speech string.
TTSPlugin will speak in range of speech string.
TTSPlugin did cancel speaking
Canceling the speech synthesizer clears the utterance queue.
My goal is to be able to have a conversation with the app. I'm not able to find where the interference is. Help?
EDIT
I solved the problem. The culprit was the AVAudioSession, which Cordova's Media plugin was managing. I hadn't dealt with multiple audio sources before, so this was a stumper.
I added these methods to my TTS plugin to manage the AVAudioSession and activate it as needed. Everything is fine now:
// returns whether or not audioSession is available - creates it if necessary
- (BOOL)hasAudioSession
{
BOOL bSession = YES;
if (!self.avSession) {
NSError* error = nil;
self.avSession = [AVAudioSession sharedInstance];
if (error) {
// is not fatal if can't get AVAudioSession , just log the error
NSLog(#"error creating audio session: %#", [[error userInfo] description]);
self.avSession = nil;
bSession = NO;
}
}
return bSession;
}
- (void)setAudioSession
{
if ([self hasAudioSession]) {
NSError* __autoreleasing err = nil;
NSNumber* playAudioWhenScreenIsLocked = nil;
BOOL bPlayAudioWhenScreenIsLocked = YES;
if (playAudioWhenScreenIsLocked != nil) {
bPlayAudioWhenScreenIsLocked = [playAudioWhenScreenIsLocked boolValue];
}
NSString* sessionCategory = bPlayAudioWhenScreenIsLocked ? AVAudioSessionCategoryPlayback : AVAudioSessionCategorySoloAmbient;
[self.avSession setCategory:sessionCategory error:&err];
if (![self.avSession setActive:YES error:&err]) {
// other audio with higher priority that does not allow mixing could cause this to fail
NSLog(#"Unable to play audio: %#", [err localizedFailureReason]);
}
}
}
The startRecordingAudio method in Media (CDVSound.m) sets the AVAudioSession category to AVAudioSessionCategoryRecord.
In my case, I just added the following line before speaking, and it works for me:
[[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryPlayback error:nil];
[self.synthesizer speakUtterance:utt];
I am building an alarm clock app which works in conjunction with a physical Bluetooth device. I need to send one audio file (a song selected from the iTunes Library) to the built-in speaker and a separate audio file to the Bluetooth device (which is, effectively, a Bluetooth HFP speaker) at the same time.
My first thought in completing this would be to use AVAudioSession's new AVAudioSessionCategoryMultiRoute, but iOS 7 does not detect my speaker as a possible route; it will detect either the built-in speaker or the HFP speaker, but it will not detect both simultaneously. Detecting both is required to send two files at the same time.
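For reference, the detected route can be inspected like this (simplified sketch); in my testing only one of the two outputs ever shows up.
NSError *error = nil;
AVAudioSession *session = [AVAudioSession sharedInstance];
[session setCategory:AVAudioSessionCategoryMultiRoute error:&error];
[session setActive:YES error:&error];
// List every output port iOS is currently routing audio to.
for (AVAudioSessionPortDescription *port in session.currentRoute.outputs) {
    NSLog(@"output port: %@ (%@)", port.portName, port.portType);
}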
My second thought was to use AVAudioSessionCategoryPlayAndRecord (even though the app has no need for a microphone) and to use overrideOutputAudioPort:AVAudioSessionPortOverrideSpeaker and overrideOutputAudioPort:AVAudioSessionPortOverrideNone like so:
NSString *soundFilePath = [[NSBundle mainBundle] pathForResource:@"alarm"
ofType:@"m4a"];
NSURL *soundFileURL = [NSURL fileURLWithPath:soundFilePath];
self.bluetoothAudioPlayer = [[AVAudioPlayer alloc] initWithContentsOfURL:soundFileURL
error:nil];
self.bluetoothAudioPlayer.delegate = self;
[self.bluetoothAudioPlayer play];
NSError *error = nil;
NSURL *selectedSongURL = [self.selectedSong valueForProperty:MPMediaItemPropertyAssetURL];
NSLog(#"selectedSongURL: %#", selectedSongURL);
if (selectedSongURL) {
if (![[AVAudioSession sharedInstance] overrideOutputAudioPort:AVAudioSessionPortOverrideSpeaker error:&error]) {
NSLog(#"overrideOutput error: %#", error);
}
self.internalAudioPlayer = [[AVAudioPlayer alloc] initWithContentsOfURL:selectedSongURL error:nil];
self.internalAudioPlayer.delegate = self;
[self.internalAudioPlayer play];
}
After this alarm is silenced, I run [[AVAudioSession sharedInstance] overrideOutputAudioPort:AVAudioSessionPortOverrideNone error:nil]; to reset the override call.
However, this doesn't work because iOS just puts both audio signals on the same speaker.
Is there a way to send one audio stream to a Bluetooth HFP speaker and a different audio stream to the built-in speaker?
I'm trying to make a karaoke app that records the background music from a file and the microphone.
I also want to add filter effects to the microphone input.
I can do everything stated above using The Amazing Audio Engine SDK, but I can't figure out how to add the microphone input as a channel so I can apply filters to it (and not to the background music).
Any help would be appreciated.
My current recording code:
- (void)beginRecording {
// Init recorder
self.recorder = [[AERecorder alloc] initWithAudioController:_audioController];
NSString *documentsFolder = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES)
objectAtIndex:0];
NSString *filePath = [documentsFolder stringByAppendingPathComponent:@"Recording.aiff"];
// Start the recording process
NSError *error = NULL;
if ( ![_recorder beginRecordingToFileAtPath:filePath
fileType:kAudioFileAIFFType
error:&error] ) {
// Report error
return;
}
// Receive both audio input and audio output. Note that if you're using
// AEPlaythroughChannel, mentioned above, you may not need to receive the input again.
[_audioController addInputReceiver:_recorder];
[_audioController addOutputReceiver:_recorder];
}
You can separate your background music and your mic by using different channels, and then you can apply the filter to your mic channel only.
First, declare a channel group in the header file:
AEChannelGroupRef _group;
Then simply add the player that you are using for the recorded file to this group:
[_audioController addChannels:@[_player] toChannelGroup:_group];
And then add the filter to this group only:
[_audioController addFilter:_reverb toChannelGroup:_group];
self.reverb = [[[AEAudioUnitFilter alloc] initWithComponentDescription:AEAudioComponentDescriptionMake(kAudioUnitManufacturer_Apple, kAudioUnitType_Effect, kAudioUnitSubType_Reverb2) audioController:_audioController error:NULL] autorelease];
AudioUnitSetParameter(_reverb.audioUnit, kReverb2Param_DryWetMix, kAudioUnitScope_Global, 0, 100.f, 0);
[_audioController addFilter:_reverb];
You can apply filters at the time of playing the recorded audio.
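Putting the pieces together, roughly (under ARC; _player and _reverb come from the snippets above, and createChannelGroup is AEAudioController's call for creating a new group):
// Create a group and put only the recorded-file player in it.
_group = [_audioController createChannelGroup];
[_audioController addChannels:@[_player] toChannelGroup:_group];
// Build the reverb filter.
self.reverb = [[AEAudioUnitFilter alloc]
    initWithComponentDescription:AEAudioComponentDescriptionMake(kAudioUnitManufacturer_Apple,
                                                                 kAudioUnitType_Effect,
                                                                 kAudioUnitSubType_Reverb2)
                 audioController:_audioController
                           error:NULL];
AudioUnitSetParameter(_reverb.audioUnit, kReverb2Param_DryWetMix,
                      kAudioUnitScope_Global, 0, 100.f, 0);
// Scope the filter to the group only, so the background music channel stays unfiltered.
[_audioController addFilter:_reverb toChannelGroup:_group];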