Using AVPlayer as input for AVCaptureSession?

Is it possible to capture the output of the AVPlayer using AVCaptureSession? I believe it's possible but can't figure out how to go about using the AVPlayer as an input.

You cannot plug an AVPlayer into an AVCaptureSession, although you can get access to the player's video and audio in the form of CVPixelBuffers and AudioBufferLists.
This is achieved via two APIs: AVPlayerItemVideoOutput for video and MTAudioProcessingTap for audio.
Despite being a C API, MTAudioProcessingTap is easier to integrate because, like AVCaptureSession, it pushes samples to you via a callback, while with AVPlayerItemVideoOutput you pull frames for a given time.
For this reason, if you want an AVCaptureSession-like experience (real-time, push), you should probably let the audio tap drive your frame pulling.
There is some AVPlayerItemVideoOutput sample code in Objective-C here and in Swift here, and an example of using an MTAudioProcessingTap in Swift here.
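On the pull side, here is a minimal sketch of driving AVPlayerItemVideoOutput from a CADisplayLink; the display link is just a stand-in for whatever clock you use (as noted above, the audio tap is a better driver for a push-style pipeline), and the class and method names are illustrative:

    import AVFoundation
    import UIKit

    final class PlayerFrameGrabber {
        private let player: AVPlayer
        private let videoOutput = AVPlayerItemVideoOutput(pixelBufferAttributes: [
            kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA
        ])
        private var displayLink: CADisplayLink?

        init(item: AVPlayerItem) {
            player = AVPlayer(playerItem: item)
            item.add(videoOutput)
        }

        func start() {
            displayLink = CADisplayLink(target: self, selector: #selector(pullFrame))
            displayLink?.add(to: .main, forMode: .common)
            player.play()
        }

        @objc private func pullFrame(_ link: CADisplayLink) {
            // Ask the output whether a new pixel buffer is ready for "now".
            let itemTime = videoOutput.itemTime(forHostTime: CACurrentMediaTime())
            guard videoOutput.hasNewPixelBuffer(forItemTime: itemTime),
                  let pixelBuffer = videoOutput.copyPixelBuffer(forItemTime: itemTime,
                                                                itemTimeForDisplay: nil) else { return }
            // pixelBuffer is a CVPixelBuffer: hand it to your processing pipeline here.
            _ = pixelBuffer
        }
    }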

Related

iOS: How to apply an audio effect to recorded video

I am developing an application which requires applying audio effects to recorded video.
I am recording video using the GPUImage library, and I have that part working. Now I need to apply audio effects like Chipmunk, Gorilla, Large Room, etc.
I looked into Apple's documentation, and it says that AVAudioEngine can't apply AVAudioUnitTimePitch on an input node (such as the microphone).
To work around this, I use the following mechanism:
Record video and audio at the same time.
Play the video. While the video is playing, start an AVAudioEngine on the audio file and apply AVAudioUnitTimePitch to it.
[playerNode play]; // Start playing audio file with video preview
Merge the video and the new effected audio file.
Problem:
The user has to preview the full video for the audio effect to be merged. This is not a good solution.
If I set the volume of playerNode to 0 (zero), then it records a mute video.
Please suggest a better way to do this. Thanks in advance.
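For reference, a minimal sketch of the engine graph described above (play the recorded audio through AVAudioUnitTimePitch), assuming recordedAudioURL points at the captured audio file; the function name and pitch value are illustrative:

    import AVFoundation

    func makeEffectedPlayback(recordedAudioURL: URL) throws -> (AVAudioEngine, AVAudioPlayerNode) {
        let engine = AVAudioEngine()
        let playerNode = AVAudioPlayerNode()
        let timePitch = AVAudioUnitTimePitch()
        timePitch.pitch = 1000 // pitch shift in cents; "chipmunk"-style (illustrative value)

        engine.attach(playerNode)
        engine.attach(timePitch)

        // file -> playerNode -> timePitch -> main mixer -> output
        let file = try AVAudioFile(forReading: recordedAudioURL)
        engine.connect(playerNode, to: timePitch, format: file.processingFormat)
        engine.connect(timePitch, to: engine.mainMixerNode, format: file.processingFormat)

        try engine.start()
        playerNode.scheduleFile(file, at: nil)
        return (engine, playerNode) // call playerNode.play() alongside the video preview
    }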

Capture WKWebView audio for metering

I am currently working on an app that contains a WKWebView in which I have loaded an iframe with streaming video from YouTube.
I would like to create an audio visualizer that is shown alongside this iframe and moves in reaction to the audio from the stream.
I have been following along with this Ray Wenderlich tutorial for creating a music visualizer, but the tutorial uses the setMeteringEnabled and updateMeters functions built into AVAudioPlayer.
Is there any way to meter audio coming from a WKWebView? I just want an average volume level, not the actual audio stream itself.
I have looked at libraries like The Amazing Audio Engine, but none of them seem to let you capture the channel coming from the WKWebView at all, let alone meter it.

Which is the simplest way to capture audio from the mic while simultaneously playing it back, like an audio amplifier?

I have done some rough research on the audio APIs for iOS. There are several layers of APIs for capturing and playing audio.
My app needs a simple function like an audio amplifier (a delay of around 0.2 seconds is fine). I don't need to save the recording to a file. I am not sure which approach is simpler to implement: Core Audio or AVFoundation?
How do I record audio on iPhone with AVAudioRecorder? I am not sure whether this link applies to my case.
While playing a sound does not stop recording Avcapture This link plays other audio while recording; it does not suit my case.
For buffered near-simultaneous audio recording and playback, you will need to use either the Audio Queue API or Audio Units such as RemoteIO. Audio Units allow lower latency.
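If the ~0.2-second delay mentioned in the question is acceptable, a higher-level alternative is AVAudioEngine; here is a minimal sketch (RemoteIO remains the choice for the lowest latency; the function name is illustrative):

    import AVFoundation

    func startAmplifier() throws -> AVAudioEngine {
        // Route the mic input through a delay unit straight to the output.
        let engine = AVAudioEngine()
        let delay = AVAudioUnitDelay()
        delay.delayTime = 0.2   // the ~0.2 s delay the question mentions
        delay.feedback = 0      // no echo, just one delayed copy
        delay.wetDryMix = 100   // output only the delayed signal

        engine.attach(delay)
        let input = engine.inputNode
        let format = input.inputFormat(forBus: 0)
        engine.connect(input, to: delay, format: format)
        engine.connect(delay, to: engine.mainMixerNode, format: format)

        let session = AVAudioSession.sharedInstance()
        try session.setCategory(.playAndRecord, mode: .default, options: [.defaultToSpeaker])
        try session.setActive(true)

        try engine.start()
        return engine
    }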

iOS: analysing audio while recording video to apply image filters

I'm desperate to find a solution to the following problem. I have an iPhone application that:
can record video and audio from the camera and microphone to a video file
performs some audio processing algorithms in real time (while the video is being recorded)
applies filters to the video (while it's recording) that are modified by the latter algorithms
I've accomplished all of these tasks separately using some libraries (GPUImage for the filters, and AVFoundation for basic audio processing), but I haven't been able to combine the audio analysis and the video recording: the video file is recorded perfectly and the filters are applied correctly, but the audio processing simply stops when I start recording the video.
I've tried AVAudioSession and AVAudioRecorder and have looked all around Google and this site, but I couldn't find anything. I suspect it has to do with concurrent access to the audio data (the video recording stops the audio processing because of that concurrency), but either way I don't know how to fix it.
Any ideas? Anyone? Thanks in advance.
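One common way to avoid two APIs fighting over the microphone is to do the analysis inside the same capture pipeline that records, via an AVCaptureAudioDataOutput. A minimal sketch, assuming a single AVCaptureSession feeds both the writer and the analysis (the analysis and writer hookups are placeholders):

    import AVFoundation

    final class AudioAnalyzingRecorder: NSObject, AVCaptureAudioDataOutputSampleBufferDelegate {
        let session = AVCaptureSession()
        private let audioOutput = AVCaptureAudioDataOutput()
        private let analysisQueue = DispatchQueue(label: "audio.analysis")

        func configure() throws {
            guard let mic = AVCaptureDevice.default(for: .audio) else { return }
            let micInput = try AVCaptureDeviceInput(device: mic)
            if session.canAddInput(micInput) { session.addInput(micInput) }

            audioOutput.setSampleBufferDelegate(self, queue: analysisQueue)
            if session.canAddOutput(audioOutput) { session.addOutput(audioOutput) }
            // The video input/output (and the GPUImage filter chain) would be
            // attached to this same session, so nothing else grabs the audio.
        }

        func captureOutput(_ output: AVCaptureOutput,
                           didOutput sampleBuffer: CMSampleBuffer,
                           from connection: AVCaptureConnection) {
            // Run the real-time audio analysis here, then forward the same
            // buffer to the AVAssetWriterInput that records the audio track.
        }
    }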

iOS: Capture video from camera and mix with an audio file in real time

I am trying to capture video from the iPhone camera and save the movie mixed with an audio file.
I can capture video with audio (from the mic) with no problems. What I want is to capture the video but, instead of the mic audio, use a music track (a .caf file).
I am capturing the video with AVAssetWriter. I've tried to set up an AVAssetReader to read the audio file, but I couldn't make it work with the AVAssetWriter (maybe because the audio decodes much faster than real time).
Also, I don't want to save the movie without audio and mix it afterwards with an AVAssetExportSession. That would be too slow for my purpose.
Any ideas? Thanks in advance.
Capture the video using AVAssetWriter, capture the audio, say, with AVAudioRecorder, then mix the audio and video using AVAssetExportSession; there are lots of posts on this topic.
If you do it afterwards (with AVAssetExportSession), you will have problems with sync and a short delay (time...) between the video and the audio... I didn't find a better solution.
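For reference, a minimal sketch of the post-merge suggested above, using AVMutableComposition and AVAssetExportSession; videoURL, musicURL and outputURL are assumed inputs, and the asker's concerns about speed and sync still apply:

    import AVFoundation

    func mix(videoURL: URL, musicURL: URL, into outputURL: URL) {
        let composition = AVMutableComposition()
        let videoAsset = AVAsset(url: videoURL)
        let audioAsset = AVAsset(url: musicURL)

        guard
            let videoTrack = videoAsset.tracks(withMediaType: .video).first,
            let audioTrack = audioAsset.tracks(withMediaType: .audio).first,
            let compVideo = composition.addMutableTrack(withMediaType: .video,
                                                        preferredTrackID: kCMPersistentTrackID_Invalid),
            let compAudio = composition.addMutableTrack(withMediaType: .audio,
                                                        preferredTrackID: kCMPersistentTrackID_Invalid)
        else { return }

        // Lay the music under the full duration of the captured video.
        let range = CMTimeRange(start: .zero, duration: videoAsset.duration)
        try? compVideo.insertTimeRange(range, of: videoTrack, at: .zero)
        try? compAudio.insertTimeRange(range, of: audioTrack, at: .zero)

        let export = AVAssetExportSession(asset: composition,
                                          presetName: AVAssetExportPresetHighestQuality)
        export?.outputURL = outputURL
        export?.outputFileType = .mov
        export?.exportAsynchronously {
            // Inspect export?.status / export?.error here.
        }
    }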
