iOS: Capture video from the camera and mix with an audio file in real time

I am trying to capture video from iPhone camera and save the movie mixed with an audio file.
I can capture the video with the audio (from mic) with no problems. What I want to do is capture the video but instead of mic audio, use a music track (a .caf file).
I am capturing the video with AVAssetWriter. I've tried to set up an AVAssetReader to read the audio file, but I couldn't make it work with the AVAssetWriter (maybe because the decoding of audio happens real fast).
Also, I don't want to save the movie without audio and mix it afterwards with an AVAssetExportSession; that would be too slow for my purpose.
Any ideas? Thanks in advance.

Capture the video using AVAssetWriter, capture the audio with, say, AVAudioRecorder, then mix the audio and video using AVAssetExportSession. There are lots of posts on this topic.

If you mix afterwards (with AVAssetExportSession), you will have problems with sync and a short delay between the video and the audio. I didn't find a better solution.
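The reader-to-writer hand-off the question attempts can be throttled so decoding does not outrun the writer: `requestMediaDataWhenReady(on:using:)` only pulls audio samples while the writer input can accept them. A minimal Swift sketch under that assumption (function and error names are illustrative; timestamp alignment with the live video track is not shown):

```swift
import AVFoundation

// Feed decoded audio from an asset file into an AVAssetWriterInput,
// pulling samples only when the writer is ready for more data.
func attachAudioTrack(from audioURL: URL,
                      to writer: AVAssetWriter) throws -> AVAssetWriterInput {
    let asset = AVAsset(url: audioURL)
    guard let track = asset.tracks(withMediaType: .audio).first else {
        throw NSError(domain: "Mix", code: -1)
    }

    let reader = try AVAssetReader(asset: asset)
    let output = AVAssetReaderTrackOutput(track: track, outputSettings: nil)
    reader.add(output)

    let input = AVAssetWriterInput(mediaType: .audio, outputSettings: nil)
    input.expectsMediaDataInRealTime = false
    writer.add(input)

    reader.startReading()

    let queue = DispatchQueue(label: "audio.append")
    input.requestMediaDataWhenReady(on: queue) {
        // This block is re-invoked whenever the input can take more
        // samples, so decoding is paced by the writer instead of
        // running flat out.
        while input.isReadyForMoreMediaData {
            if let sample = output.copyNextSampleBuffer() {
                input.append(sample)
            } else {
                input.markAsFinished() // end of the audio file
                break
            }
        }
    }
    return input
}
```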

Related

iOS : How to apply audio effect on recorded video

I am developing an application which requires applying audio effects to recorded video.
I am recording video using the GPUImage library, and that part works. Now I need to apply audio effects like Chipmunk, Gorilla, Large Room, etc.
I looked into Apple's documentation and it says AVAudioEngine can't apply AVAudioUnitTimePitch to the input node (the microphone).
To work around this, I use the following mechanism:
Record video & audio at the same time.
Play the video. While it plays, run AVAudioEngine on the audio file and apply AVAudioUnitTimePitch to it.
[playerNode play]; // Start playing audio file with video preview
Merge the video with the newly effected audio file.
Problem:
The user has to preview the full video before the effected audio can be merged. This is not a good solution.
If I set the volume of playerNode to 0 (zero), then it records a mute video.
Please suggest a better way to do this. Thanks in advance.
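Since iOS 11, AVAudioEngine's offline manual rendering mode can push a file through AVAudioUnitTimePitch much faster than real time, so the user never has to sit through a full preview. A hedged Swift sketch (function name and file URLs are placeholders):

```swift
import AVFoundation

// Render an audio file through AVAudioUnitTimePitch offline (faster
// than real time), writing the effected audio to a new file.
func renderPitchShifted(input inputURL: URL, output outputURL: URL,
                        pitchCents: Float) throws {
    let sourceFile = try AVAudioFile(forReading: inputURL)
    let format = sourceFile.processingFormat

    let engine = AVAudioEngine()
    let player = AVAudioPlayerNode()
    let pitch = AVAudioUnitTimePitch()
    pitch.pitch = pitchCents // e.g. 1000 for a chipmunk-style effect

    engine.attach(player)
    engine.attach(pitch)
    engine.connect(player, to: pitch, format: format)
    engine.connect(pitch, to: engine.mainMixerNode, format: format)

    // Offline rendering: no audio hardware, no real-time constraint.
    try engine.enableManualRenderingMode(.offline, format: format,
                                         maximumFrameCount: 4096)
    try engine.start()
    player.scheduleFile(sourceFile, at: nil)
    player.play()

    let buffer = AVAudioPCMBuffer(
        pcmFormat: engine.manualRenderingFormat,
        frameCapacity: engine.manualRenderingMaximumFrameCount)!
    let outFile = try AVAudioFile(forWriting: outputURL,
                                  settings: sourceFile.fileFormat.settings)

    while engine.manualRenderingSampleTime < sourceFile.length {
        let framesLeft = sourceFile.length - engine.manualRenderingSampleTime
        let frames = min(AVAudioFrameCount(framesLeft), buffer.frameCapacity)
        if try engine.renderOffline(frames, to: buffer) == .success {
            try outFile.write(from: buffer)
        }
    }
    player.stop()
    engine.stop()
}
```

The effected file can then be merged with the video track as in the original step 3, with no preview playback needed.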

Why does audio cut out briefly when changing video input on an AVCaptureSession?

I am using the sample code from Switch cameras with avcapturesession to swap from the iPhone's front camera to its back one during a recording session. Only the video AVCaptureDeviceInput is changed; neither the audio input device nor the AVCaptureSession itself is changed. Even so, there's a clear break in the audio during the camera swap. Why is this?
And is there any workaround? For instance, would using an AVAudioRecorder instead to record the audio separately allow for continuous audio recording during a camera flip? I could then stitch it to the video later, even though that would be a pain.
When switching the video camera, the audio input also changes. When recording with the front camera, the front mic is used. Some audio packets are lost in this process.
I encountered the same problem, and using AVAudioRecorder to record the audio separately and AVMutableComposition to combine the audio and video tracks after recording worked perfectly.
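The AVMutableComposition approach from the answer above can be sketched like this in Swift (URLs, the error value, and the assumption of one audio and one video track are illustrative):

```swift
import AVFoundation

// Combine a separately recorded audio file with a video file into one
// composition, which can then be exported with AVAssetExportSession.
func combine(videoURL: URL, audioURL: URL) throws -> AVMutableComposition {
    let video = AVAsset(url: videoURL)
    let audio = AVAsset(url: audioURL)
    let composition = AVMutableComposition()

    guard
        let videoTrack = video.tracks(withMediaType: .video).first,
        let audioTrack = audio.tracks(withMediaType: .audio).first,
        let compVideo = composition.addMutableTrack(
            withMediaType: .video,
            preferredTrackID: kCMPersistentTrackID_Invalid),
        let compAudio = composition.addMutableTrack(
            withMediaType: .audio,
            preferredTrackID: kCMPersistentTrackID_Invalid)
    else { throw NSError(domain: "Combine", code: -1) }

    let range = CMTimeRange(start: .zero, duration: video.duration)
    try compVideo.insertTimeRange(range, of: videoTrack, at: .zero)
    // Using the video's duration keeps the tracks the same length; any
    // constant offset between the recordings can be corrected via `at:`.
    try compAudio.insertTimeRange(range, of: audioTrack, at: .zero)
    return composition
}
```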

Which is the simplest way to capture audio from the mic while simultaneously playing it back, like an audio amplifier?

I have briefly researched the audio APIs for iOS. There are several layers of API that can capture and play audio.
My app needs a simple audio-amplifier-like function (a delay of around 0.2 seconds is acceptable). I don't need to save the recording to a file. I am not sure which approach is simpler to implement: Core Audio or AVFoundation?
How do I record audio on iPhone with AVAudioRecorder? I am not sure whether this link applies to my case.
While playing a sound does not stop recording Avcapture: this link is about playing other audio while recording, which does not suit my case.
For buffered, near-simultaneous audio recording and playback, you will need to use either the Audio Queue API or Audio Units such as RemoteIO. Audio Units allow lower latency.
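On newer iOS versions, AVAudioEngine wraps the RemoteIO unit and makes the mic-to-speaker path very short: connecting the input node to the mixer is enough for live monitoring, with no Core Audio boilerplate. A hedged sketch (a real device needs the `.playAndRecord` session category and microphone permission, assumed granted here):

```swift
import AVFoundation

// Route microphone input straight to the output for live monitoring.
// AVAudioEngine sits on top of the RemoteIO audio unit, so latency is
// low without touching Core Audio directly.
func startAmplifier() throws -> AVAudioEngine {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord, options: [.defaultToSpeaker])
    try session.setActive(true)

    let engine = AVAudioEngine()
    let input = engine.inputNode
    let format = input.outputFormat(forBus: 0)

    // Mic -> main mixer -> output; the mixer's volume acts as the gain.
    engine.connect(input, to: engine.mainMixerNode, format: format)
    engine.mainMixerNode.outputVolume = 1.0

    try engine.start()
    return engine
}
```

With headphones this monitors in well under the 0.2 s budget; on the built-in speaker, feedback is the practical limit rather than latency.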

iOS: analysing audio while recording video to apply image filters

I'm desperate to find a solution to the following problem: I have an iPhone application that:
can record video and audio from the camera and microphone to a video file;
performs some audio-processing algorithms in real time (while the video is being recorded);
applies filters to the video (while it's recording) that are modified by those algorithms.
I've accomplished all of these tasks separately using some libraries (GPUImage for the filters, AVFoundation for basic audio processing), but I haven't been able to combine the audio analysis with the video recording: the video file records perfectly and the filters are applied correctly, but the audio processing simply STOPS when I start recording the video.
I've tried AVAudioSession and AVAudioRecorder and have looked all around Google and this site, but I couldn't find anything. I suspect it has to do with concurrent access to the audio data (the video recording process stops the audio processing because of concurrency), but either way I don't know how to fix it.
Any ideas? Anyone? Thanks in advance.

Capturing iPhone game audio

I'd like to capture the audio (music + sound effects) coming from my iPhone game. AVCaptureSession seems to have only the microphone as audio source. I'd like to capture the audio, put it into CMSampleBufferRefs and append these to an AVAssetWriterInput.
I'm currently looking into Audio Queues. Any other ideas?
There is no API to directly capture all the sound effects and music from your game.
The most common solution is for an app to generate all sound twice: once for audio output, plus a second identical copy in the form of PCM samples to feed a DSP or Audio Unit mixer. Then feed the mixer output to AVAssetWriter or another file output. This technique is much easier to implement if all the sounds produced by your app are raw PCM audio played via the Audio Queue or RemoteIO Audio Unit APIs, which may require significant rewrites to your music and game-sound code.
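If the game's audio can be routed through AVAudioEngine, the "second copy" is essentially free: a tap on the main mixer node delivers the mixed PCM output, which can be written to a file (or converted to CMSampleBuffers for an AVAssetWriterInput). A hedged sketch under that assumption:

```swift
import AVFoundation

// Capture everything the app plays through an AVAudioEngine by tapping
// the main mixer node and writing the mixed PCM buffers to a file.
// Assumes all game music and effects are played via this engine.
func recordEngineOutput(engine: AVAudioEngine,
                        to url: URL) throws -> AVAudioFile {
    let mixer = engine.mainMixerNode
    let format = mixer.outputFormat(forBus: 0)
    let file = try AVAudioFile(forWriting: url, settings: format.settings)

    mixer.installTap(onBus: 0, bufferSize: 4096, format: format) { buffer, _ in
        // The tap runs on a render-adjacent thread; keep this block light.
        try? file.write(from: buffer)
    }
    return file // call mixer.removeTap(onBus: 0) to stop capturing
}
```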
