I have followed a lot of tutorials, and every tutorial about recording with AVFoundation covers only recording video, audio, or both.
I would like to record location data in the same time domain as the video/audio, on a separate track. Each location waypoint is described by only five properties (latitude, longitude, altitude, startTime, duration), and it will change no more often than every 5 seconds of the recording. The recording is for presentation purposes, and I need functionality such as streaming, playing forward, skipping, and pausing.
Does anybody have an idea how to do this with the AVFoundation framework?
Sure, this is possible.
AVFoundation is a collection of higher- and lower-level libraries with lots of options to tap into the processing pipeline at various stages. Assuming you want to capture from the camera, you are going to be using some combination of an AVCaptureSession, its delegate https://developer.apple.com/reference/avfoundation/avcapturevideodataoutputsamplebufferdelegate, and an AVAssetWriter.
The AVCaptureVideoDataOutputSampleBufferDelegate receives the vended CMSampleBuffers, which combine a frame of video data with timing information. At the point you receive one, you typically just "write out" the CMSampleBuffer to record the video, but you can also process it further, either to filter it in real time or, as you want to do, to record additional information against the same timing data (e.g. "at this point in the video, I had these coordinates").
Research how to write video from the camera to a file on iOS to get started; once you are using the delegate, you'll soon see where to hook into the code to achieve what you're after.
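To make that hook concrete, here is a rough sketch (not a complete recorder) of a delegate that writes frames through an AVAssetWriter and logs a location waypoint against the same timeline. The Waypoint type, the currentLocation property, and the writer/writerInput pair are illustrative assumptions, set up elsewhere:

```swift
import AVFoundation
import CoreLocation

// Sketch only: assumes an AVCaptureSession with an AVCaptureVideoDataOutput
// whose delegate is this class, an AVAssetWriter that has already had
// startWriting() called, and a CLLocationManager updating `currentLocation`.
final class RecordingDelegate: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {

    struct Waypoint {
        let latitude: Double
        let longitude: Double
        let altitude: Double
        let startTime: CMTime   // presentation time of the frame it was logged at
        // duration can be derived later from the next waypoint's startTime
    }

    var currentLocation: CLLocation?            // updated by CLLocationManager
    private(set) var waypoints: [Waypoint] = []
    private var lastLoggedSeconds = -Double.infinity
    private var sessionStarted = false

    let writer: AVAssetWriter
    let writerInput: AVAssetWriterInput

    init(writer: AVAssetWriter, writerInput: AVAssetWriterInput) {
        self.writer = writer
        self.writerInput = writerInput
        super.init()
    }

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        let pts = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)

        // 1. Write the frame out as usual.
        if !sessionStarted {
            writer.startSession(atSourceTime: pts)
            sessionStarted = true
        }
        if writerInput.isReadyForMoreMediaData {
            writerInput.append(sampleBuffer)
        }

        // 2. Log a waypoint at most every 5 seconds, stamped with the
        //    same timeline the video frames use.
        let seconds = CMTimeGetSeconds(pts)
        guard let location = currentLocation,
              seconds - lastLoggedSeconds >= 5 else { return }
        lastLoggedSeconds = seconds
        waypoints.append(Waypoint(latitude: location.coordinate.latitude,
                                  longitude: location.coordinate.longitude,
                                  altitude: location.altitude,
                                  startTime: pts))
    }
}
```

Because each waypoint carries the frame's presentation timestamp, the saved list can be replayed, paused, or skipped in lockstep with the recorded movie.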
Related
I created an app that uses AVFoundation captureStillImageAsynchronouslyFromConnection to take a picture every 0.2 seconds and analyse the picture. However, I didn't realize that it made the shutter sound every time it took a picture until I had already built the app.
Question: Is there a good alternative to AVFoundation captureStillImageAsynchronouslyFromConnection that doesn't make a shutter sound or is there a legit way to turn the sound off?
An alternative way to take photos is AVCapturePhotoOutput's capturePhoto(with:delegate:) method, together with an AVCapturePhotoCaptureDelegate. Check out the documentation for AVCapturePhotoOutput.
But as I can see from your question, what you really want is to mute the shutter sound when taking photos. According to Apple's documentation there is no API to mute the shutter sound when taking a photo, unless the user silences the device with the hardware mute switch.
As a workaround you can analyse the frames of a continuous video feed by using AVCaptureVideoDataOutputSampleBufferDelegate. Apple has detailed documentation on how to capture video frames from the camera as images using AVFoundation on iOS. With this method you still get the images and avoid the shutter sound as well.
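A minimal sketch of that frame-based approach, assuming the AVCaptureSession and camera input are already configured elsewhere; the class and method names here are illustrative:

```swift
import AVFoundation
import CoreImage
import UIKit

// Grab frames from an AVCaptureVideoDataOutput (no shutter sound) and convert
// them to UIImages for analysis. The 0.2 s throttle matches the original use case.
final class FrameGrabber: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    private let context = CIContext()
    private var lastCaptureTime = Date.distantPast

    func attach(to session: AVCaptureSession) {
        let output = AVCaptureVideoDataOutput()
        output.setSampleBufferDelegate(self, queue: DispatchQueue(label: "frames"))
        if session.canAddOutput(output) { session.addOutput(output) }
    }

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        // Throttle to roughly one image every 0.2 seconds.
        guard Date().timeIntervalSince(lastCaptureTime) >= 0.2,
              let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        lastCaptureTime = Date()

        // Convert the pixel buffer to a UIImage for the existing analysis code.
        let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
        guard let cgImage = context.createCGImage(ciImage, from: ciImage.extent) else { return }
        analyze(UIImage(cgImage: cgImage))
    }

    private func analyze(_ image: UIImage) {
        // Run the existing picture analysis here.
    }
}
```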
I am recording videos and playing them back using AVFoundation. Everything is perfect except for a hiss that is present throughout the video. You can hear this hiss in every video captured from any iPad; even videos captured with Apple's built-in Camera app have it.
To hear it clearly, record a video in a place as quiet as possible without saying anything. It can be detected very easily through headphones with the volume at maximum.
After researching, I found out that this hiss is produced by the device's preamplifier and cannot be avoided while recording.
The only possible solution is to remove it during post-processing of the audio. Hiss is high-frequency noise, so it can be attenuated with a low-pass filter and noise gates. There are applications such as Adobe Audition which can perform this operation; this video shows how it is achieved using Adobe Audition.
I have searched the Apple docs and found nothing that can achieve this directly. So I want to know whether there is any library, API, or open source project that can perform this operation. If not, how can I start heading in the right direction? It does look like a complex task.
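As a starting point, the low-pass idea described above can be sketched with AVFoundation's own AVAudioEngine and AVAudioUnitEQ. This is only a simple filter pass, not full noise reduction, and the 8 kHz cutoff and function name are placeholders:

```swift
import AVFoundation

// Play a recorded file through a single low-pass EQ band to attenuate
// high-frequency hiss. Sketch only; tune the cutoff by ear.
func playFiltered(fileURL: URL) throws {
    let file = try AVAudioFile(forReading: fileURL)

    let engine = AVAudioEngine()
    let player = AVAudioPlayerNode()
    let eq = AVAudioUnitEQ(numberOfBands: 1)

    // Configure the band as a low-pass filter.
    let band = eq.bands[0]
    band.filterType = .lowPass
    band.frequency = 8_000      // Hz; placeholder cutoff
    band.bypass = false

    engine.attach(player)
    engine.attach(eq)
    engine.connect(player, to: eq, format: file.processingFormat)
    engine.connect(eq, to: engine.mainMixerNode, format: file.processingFormat)

    player.scheduleFile(file, at: nil)
    try engine.start()
    player.play()
}
```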
I am interested in recording media using an AVCaptureSession in iOS while playing media back using an AVPlayer (specifically, I am playing back audio and recording video, but I'm not sure it matters).
The problem is, when I play the resulting media back together later, they are out of sync. Is it possible to synchronize them, either by ensuring that playback and recording start simultaneously, or by discovering what the offset is between them? I probably need the sync to be on the order of 10 ms. It is unreasonable to assume that I can always capture audio (since the user may use headphones), so syncing via analysis of original and recorded audio is not an option.
This question suggests that it's possible to end playback and recording simultaneously and determine the initial offset from the resulting lengths that way, but I'm unclear how to get them to end simultaneously. I have two cases: 1) the audio playback runs out, and 2), the user hits the "stop recording" button.
This question suggests priming and then applying a fixed, but possibly device-dependent delay, which is obviously a hack, but if it's good enough for audio it's obviously worth considering for video.
Is there another media layer I can use to perform the required synchronization?
Related: this question is unanswered.
If you are specifically using AVPlayer to play back audio, I would suggest using Audio Queue Services instead. It is seamless and fast, since it reads buffer by buffer, and play/pause is more responsive than with AVPlayer.
There is also the possibility that you are missing an initial [avPlayer prepareToPlay] call (note that prepareToPlay is actually an AVAudioPlayer method; with AVPlayer you would instead wait for the item's status to become readyToPlay), which might add overhead before the audio starts playing in sync.
Hope it helps you.
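Beyond that, one way to approach the original synchronization question is to start AVPlayer playback at an explicit host-clock time and relate that to the timestamps on the captured sample buffers. A rough sketch, assuming the player item is already ready to play:

```swift
import AVFoundation

// Starts playback at a known host-clock time so it can later be compared with
// capture timestamps. Sketch only; the 0.5 s lead time is arbitrary.
func startPlaybackSynchronized(player: AVPlayer) -> CMTime {
    // Needed so that setRate(_:time:atHostTime:) takes effect immediately.
    player.automaticallyWaitsToMinimizeStalling = false

    // Pick a start point slightly in the future on the host clock.
    let hostClock = CMClockGetHostTimeClock()
    let startHostTime = CMTimeAdd(CMClockGetTime(hostClock),
                                  CMTimeMakeWithSeconds(0.5, preferredTimescale: 1_000_000_000))

    // Begin playback exactly at that host time (.invalid keeps the current item time).
    player.setRate(1.0, time: .invalid, atHostTime: startHostTime)
    return startHostTime
}

// Later, in captureOutput(_:didOutput:from:), the offset of each captured frame
// relative to playback start can be estimated from its presentation timestamp,
// converting between clocks with CMSyncConvertTime if the capture session's
// synchronizationClock is not the host clock:
//
//   let pts = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
//   let offset = CMTimeSubtract(pts, startHostTime)
```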
Although I've searched SO and read the documentation on AVCaptureConnection, AVCaptureSession, AVCaptureVideoPreviewLayer, AVCaptureDevice, and AVCaptureInput/Output multiple times, I'm still confused by all this AV stuff. To me it's one big pile of abstract words that don't make much sense. I'm asking someone to shed some light on the subject for me here.
So, can anyone explain coherently, in plain English, the logic of properly setting up and using the media devices? What is AVCaptureVideoPreviewLayer? What is AVCaptureConnection? Input/output?
I want to catch the basic idea the people who made this stuff had while making it.
Thanks
I wish I had more time to write a more thorough reply. Here are some simplified basics:
In order to work with audio and video coming from the hardware, destined for the screen or for files, you need to set up an AVCaptureSession, which coordinates the sources and the destinations using AVCaptureConnections. You use the session instance to start and stop the process, and to set some output properties like bitrate and quality. You use the AVCaptureConnection instance(s) to control the connection between an AVCaptureInput's ports and an AVCaptureOutput (or an AVCaptureVideoPreviewLayer), for example to monitor audio input levels or to set the orientation of the video.
An AVCaptureInput's ports are the individual streams coming from an AVCaptureDevice, which is where your video or audio originates, such as the camera or the microphone. You will normally look through all available devices and choose the ones with the properties you are looking for, such as whether they provide audio, or whether they are the front-facing camera.
AVCaptureOutput is where the AV is sent: it might be a file, or a routine that allows you to process the data in real time, etc.
AVCaptureVideoPreviewLayer is a Core Animation layer (a CALayer subclass) optimized for very fast rendering of the output of the selected video input device (front or back camera). You typically use it to show your user what input you are working with, sort of like a camera viewfinder.
If you are going to use this stuff, then you must read Apple's AV Foundation Programming Guide
The guide includes both an overview diagram and a more detailed diagram of the capture architecture (session, inputs, connections, outputs, preview layer) that illustrate these relationships.
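To make the pieces concrete, here is a bare-bones capture setup in Swift (error handling and permission prompts omitted; the class name is illustrative):

```swift
import AVFoundation
import UIKit

// Minimal sketch: one camera input, one video data output, one preview layer.
final class CameraController {
    let session = AVCaptureSession()
    let videoOutput = AVCaptureVideoDataOutput()

    func configure(previewIn view: UIView) {
        session.beginConfiguration()
        session.sessionPreset = .high

        // Input: the back camera, wrapped in an AVCaptureDeviceInput.
        guard let camera = AVCaptureDevice.default(.builtInWideAngleCamera,
                                                   for: .video,
                                                   position: .back),
              let input = try? AVCaptureDeviceInput(device: camera),
              session.canAddInput(input) else {
            session.commitConfiguration()
            return
        }
        session.addInput(input)

        // Output: vends CMSampleBuffers to a delegate for processing or recording.
        if session.canAddOutput(videoOutput) {
            session.addOutput(videoOutput)
        }
        session.commitConfiguration()

        // Preview layer: shows the user what the camera sees, like a viewfinder.
        let previewLayer = AVCaptureVideoPreviewLayer(session: session)
        previewLayer.frame = view.bounds
        previewLayer.videoGravity = .resizeAspectFill
        view.layer.addSublayer(previewLayer)

        session.startRunning()
    }
}
```

The session creates the connections between the input's ports and the output/preview layer automatically when you add them; you only need to fetch an AVCaptureConnection explicitly when you want to adjust it (orientation, audio levels, and so on).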
I'm desperate to find a solution to the following problem: I have an iPhone application that:
records video and audio from the camera and microphone to a video file
performs some audio processing algorithms in real time (while the video is being recorded)
applies filters to the video (while it's recording) that are modified by those same algorithms
I've accomplished all of these tasks separately using some libraries (GPUImage for the filters, and AVFoundation for basic audio processing), but I haven't been able to combine the audio analysis and the video recording simultaneously; i.e. it records the video file perfectly and applies the filters correctly, but the audio processing part just STOPS when I start recording the video.
I've tried AVAudioSession and AVAudioRecorder and have looked all around Google and this site, but I couldn't find anything. I suspect that it has to do with concurrent access to the audio data (the video recording process stops the audio processing because of concurrency), but either way I don't know how to fix it.
Any ideas? Anyone? Thanks in advance.
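One hypothesis worth ruling out (not confirmed for this case): by default an AVCaptureSession reconfigures the app's shared audio session when it starts capturing audio, which can tear down a separately running audio-processing pipeline. A sketch of opting out of that and configuring AVAudioSession yourself (the function name is illustrative):

```swift
import AVFoundation

// Hypothetical starting point, not a confirmed fix: keep control of the audio
// session instead of letting the capture session reconfigure it on its own.
func configureSharedAudio(for captureSession: AVCaptureSession) throws {
    captureSession.automaticallyConfiguresApplicationAudioSession = false

    let audioSession = AVAudioSession.sharedInstance()
    try audioSession.setCategory(.playAndRecord,
                                 mode: .videoRecording,
                                 options: [.mixWithOthers, .allowBluetooth])
    try audioSession.setActive(true)
}
```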