Record Audio For Ten Hours in iOS

I'm having trouble with my application, where I'm looking to record audio for roughly ten hours at a time. I'm not looking to save a ten-hour clip, but rather to listen for short loud noises over that ten-hour period and save only them.
Currently I have a prototype built with the AVFoundation framework which records audio in the background, plays it back, and monitors the noise levels to check how loud the current audio is. My idea for getting around the problem is to monitor the noise levels and only save the recording when it exceeds a threshold, so that I only keep clips a few seconds long and save on space.
However, in order to check the noise levels the recorder has to be recording the whole time, so at the moment it still ends up recording the full ten-hour clip.
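For reference, here is roughly how my prototype meters the input. This is a simplified sketch; temporaryURL, the -20 dB threshold and the 0.25-second polling interval are just placeholder values from my testing:

import AVFoundation

func startMonitoring(temporaryURL: URL) throws {
    let recorder = try AVAudioRecorder(url: temporaryURL, settings: [
        AVFormatIDKey: kAudioFormatMPEG4AAC,
        AVSampleRateKey: 44_100,
        AVNumberOfChannelsKey: 1
    ])
    recorder.isMeteringEnabled = true
    recorder.record()

    Timer.scheduledTimer(withTimeInterval: 0.25, repeats: true) { _ in
        recorder.updateMeters()
        let level = recorder.averagePower(forChannel: 0)   // dBFS: 0 is maximum, more negative is quieter
        if level > -20 {
            // A loud event - this is where I would like to keep only a short clip,
            // but the recorder has been writing to temporaryURL the whole time.
        }
    }
}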
Could anyone offer me some guidance on the matter please?

Related

iOS: Apply a slow motion effect to a video while shooting

Is it possible to apply a slow-motion effect while recording video?
This means the recording has not finished yet and the file has not been saved, but the user sees the recording process in slow motion.
I think it is important to understand what slow motion actually means. To "slow down motion" in a movie, you need to film more images per second than usual and then play the movie back at normal speed; that is what creates the slow-motion effect.
Example: video is often shot at 30 frames per second (fps), so one second of footage consists of 30 individual images. If you want motion to appear half as fast, you need to shoot at 60 fps (60 images per second). If you then play those 60 images back at the normal 30 fps, the result is a two-second clip showing the slow-motion effect.
As you can see, you cannot record and show a slow-motion effect at the same time. You need to save the footage first and then play it back more slowly than it was recorded.
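For completeness, recording at a higher frame rate (so the footage can later be played back at normal speed) is configured roughly like the sketch below. This assumes a device that is already attached to a configured AVCaptureSession; the 120 fps target is just an example:

import AVFoundation

func configureForSlowMotion(device: AVCaptureDevice, targetFPS: Double = 120) throws {
    // Find a capture format whose supported frame-rate ranges cover the target rate.
    guard let format = device.formats.first(where: { f in
        f.videoSupportedFrameRateRanges.contains { $0.maxFrameRate >= targetFPS }
    }) else { return }

    try device.lockForConfiguration()
    device.activeFormat = format
    let frameDuration = CMTime(value: 1, timescale: CMTimeScale(targetFPS))
    device.activeVideoMinFrameDuration = frameDuration
    device.activeVideoMaxFrameDuration = frameDuration
    device.unlockForConfiguration()
}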

How to set delay in seconds to the camera capture on iOS?

I want to know whether it is possible to apply a delay of some seconds to the camera feed on iOS with Swift.
Let's say I add a 30 second delay to my camera. If I am pointing the camera at a person who is crouching down, and then that person stands up, my camera is not going to show that until those 30 seconds have passed. In other words, it will continue to show the person crouched until that time has passed.
How can I achieve this?
This sounds quite difficult. The problem is that the camera delivers (most likely) 30 frames per second, so the buffers from 30 seconds ago need to be stored somewhere, and in raw form that is far too much data to keep in memory.
You don't want to write them out as individual images either, because the compression is poor compared to video.
Perhaps the best way would be to set up your capture session, record the video in, say, 10-second segments, save each segment to a file, and then play those segments back with an AVPlayer, as sketched below. That way it's 'delayed' for the user.
Either way, from my understanding of how things work, the main challenge is what to do with those buffers while waiting to display them. You could potentially stream them to a server and back, but then you need a whole backend just to support the delay, which seems silly.
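A minimal sketch of that segmented idea follows, assuming the session already has a camera input and the movie output attached and is running; the 10-second segment length, the 30-second delay and the file naming are made up for illustration:

import AVFoundation

final class DelayedCameraFeed: NSObject, AVCaptureFileOutputRecordingDelegate {
    // Assumes `session` already has a camera input and `movieOutput` added, and is running.
    let session = AVCaptureSession()
    let movieOutput = AVCaptureMovieFileOutput()
    let player = AVQueuePlayer()
    let segmentLength: TimeInterval = 10
    let delay: TimeInterval = 30
    private var playbackScheduled = false

    func startNextSegment() {
        let url = FileManager.default.temporaryDirectory
            .appendingPathComponent(UUID().uuidString + ".mov")
        movieOutput.startRecording(to: url, recordingDelegate: self)
        // Stop this segment after `segmentLength`; the delegate callback below enqueues it.
        DispatchQueue.main.asyncAfter(deadline: .now() + segmentLength) { [weak self] in
            self?.movieOutput.stopRecording()
        }
    }

    func fileOutput(_ output: AVCaptureFileOutput,
                    didFinishRecordingTo outputFileURL: URL,
                    from connections: [AVCaptureConnection],
                    error: Error?) {
        startNextSegment()   // keep the rotation going (there is a small gap between segments)
        player.insert(AVPlayerItem(url: outputFileURL), after: nil)
        if !playbackScheduled {
            playbackScheduled = true
            // The first segment finished `segmentLength` seconds after recording started,
            // so waiting another `delay - segmentLength` gives a total delay of `delay`.
            DispatchQueue.main.asyncAfter(deadline: .now() + (delay - segmentLength)) { [weak self] in
                self?.player.play()
            }
        }
    }
}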

The Amazing Audio Engine 2 - Crossfade looping

I am using The Amazing Audio Engine 2 library for my sequencer app, and I want to implement crossfade audio looping.
Here is an explanation:
When the user presses a key on the sequencer's piano, it plays an audio file, and that file keeps playing in a loop until the user releases the key. But that loop should crossfade into itself.
I am using AEAudioFilePlayerModule for looping, but I am not sure how to crossfade the audio file with this class.
Explanation of the crossfade:
Start/End: This setting allows me to choose where in the audio file I want the app to loop, so that if the user taps and holds a note for a long time, the audio keeps sounding until they release their finger.
XFade: This (crossfade) setting allows me to choose how to fade between the end and the start of the audio loop, so that the sound loops smoothly. Here, 9999 is set. So at about 5k samples before the 200k end point, the audio for this note begins to fade out, and at the same time the audio at the loop start (50k samples) fades in over a duration of about 5k samples (half the XFade amount).
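To illustrate, the per-sample mixing I am after looks roughly like this in plain Swift (this is not The Amazing Audio Engine's API; samples, the loop points and the linear fade are placeholders, and wrapping the playback position back to the loop start is not shown):

func crossfadedSample(at position: Int, samples: [Float],
                      loopStart: Int, loopEnd: Int, xfade: Int) -> Float {
    let fadeBegin = loopEnd - xfade / 2              // ~5k samples before the end point
    guard position >= fadeBegin && position < loopEnd else {
        return samples[position]                     // outside the crossfade region
    }
    let progress = Float(position - fadeBegin) / Float(loopEnd - fadeBegin)   // 0...1
    let outgoing = samples[position]                                // loop tail, fading out
    let incoming = samples[loopStart + (position - fadeBegin)]      // loop head, fading in
    return outgoing * (1 - progress) + incoming * progress          // linear crossfade
}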
Please help.
Thank you.

In AVFoundation, how to synchronize recording and playback

I am interested in recording media using an AVCaptureSession in iOS while playing media back using an AVPlayer (specifically, I am playing back audio and recording video, but I'm not sure it matters).
The problem is, when I play the resulting media back together later, they are out of sync. Is it possible to synchronize them, either by ensuring that playback and recording start simultaneously, or by discovering what the offset is between them? I probably need the sync to be on the order of 10 ms. It is unreasonable to assume that I can always capture audio (since the user may use headphones), so syncing via analysis of original and recorded audio is not an option.
This question suggests that it's possible to end playback and recording simultaneously and determine the initial offset from the resulting lengths, but I'm unclear how to get them to end simultaneously. I have two cases: 1) the audio playback runs out, and 2) the user hits the "stop recording" button.
This question suggests priming and then applying a fixed, but possibly device-dependent, delay, which is obviously a hack, but if it's good enough for audio it's worth considering for video.
Is there another media layer I can use to perform the required synchronization?
Related: this question is unanswered.
If you are specifically using AVPlayer to play back audio, I would suggest using Audio Queue Services instead. It is seamless and fast, since it reads buffer by buffer, and play/pause is faster than with AVPlayer.
There is also the possibility that you are missing the initial call to [avPlayer prepareToPlay], which might add overhead before the audio starts playing.
Hope it helps you.
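On the "start simultaneously" part of the question, one avenue worth exploring (not covered by the answer above) is starting the AVPlayer at an explicit host time with setRate(_:time:atHostTime:) and comparing capture buffer timestamps against that anchor. A rough sketch, assuming an already-configured player and capture session:

import AVFoundation
import CoreMedia

func startSynchronizedPlayback(player: AVPlayer) -> CMTime {
    player.automaticallyWaitsToMinimizeStalling = false   // required before setRate(_:time:atHostTime:)
    // Anchor playback slightly in the future so both sides have time to spin up.
    let startHostTime = CMTimeAdd(CMClockGetTime(CMClockGetHostTimeClock()),
                                  CMTime(value: 1, timescale: 2))   // +0.5 s
    player.setRate(1.0, time: .invalid, atHostTime: startHostTime)  // .invalid keeps the current item time
    return startHostTime
}

// In the capture delegate, each frame's offset relative to the playback anchor could then
// be computed from its presentation timestamp (verify that the capture session's clock is
// the host clock, or convert with CMSyncConvertTime first):
//
//   let frameTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
//   let offset = CMTimeSubtract(frameTime, startHostTime)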

Playing an AVMutableComposition with AVPlayer audio gets out of sync

I have an AVMutableComposition with two audio tracks and one video track. I'm using the composition to string together about 40 different video clips from .mov files, putting the video content of each clip in the video track of my composition and the audio in the audio track. The second audio track I use for music.
I also have a synchronized layer for titles graphics.
When I play this composition using an AVPlayer, the audio slowly drifts out of sync. It takes about 4 minutes to become noticeable. If I only string together a handful of longer clips the problem is not as apparent; it is when there are many shorter clips (~40 in my test) that it gets really bad.
Pausing and playing does not re-sync the audio, but seeking does. In other words, if I let the video play to the end, the lip sync becomes noticeably off towards the end even if I pause and play along the way; however, if I seek to a time near the end, the audio comes back into sync.
My hacky solution for now is to seek to currentTime + 1 frame every minute or so. This creates an unpleasant jump in the video, caused by the lag of the seek operation, so it's not a good solution.
Exporting with an ExportSession doesn't present this problem, audio remains in sync in the output movie.
I'm wondering if the new masterClock property in the AVPlayer is the answer to this, and if it is, how is it used?
I had the same issue and fixed it, among many other audio and video problems, by specifying time scales in the following manner:
CMTime(seconds: my_seconds, preferredTimescale: CMTimeScale(600))
Before, my time scale was CMTimeScale(NSEC_PER_SEC). That caused jitter when composing clips at different frame rates, plus the audio drifting out of sync that Eddy mentions here.
In spite of looking like a magic number, 600 is a common multiple of 24, 30, 60 and 120, which are the usual frame rates for different purposes. Using a common multiple avoids accumulating rounding errors when composing multiple clips.
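For illustration, the same idea applied while building the composition might look like the sketch below; clipURLs is a placeholder for the ~40 .mov clip URLs, and error handling is minimal:

import AVFoundation

func makeComposition(clipURLs: [URL]) throws -> AVMutableComposition {
    let composition = AVMutableComposition()
    guard
        let videoTrack = composition.addMutableTrack(withMediaType: .video,
                                                     preferredTrackID: kCMPersistentTrackID_Invalid),
        let audioTrack = composition.addMutableTrack(withMediaType: .audio,
                                                     preferredTrackID: kCMPersistentTrackID_Invalid)
    else { return composition }

    var cursor = CMTime.zero
    for url in clipURLs {
        let asset = AVURLAsset(url: url)
        guard let srcVideo = asset.tracks(withMediaType: .video).first else { continue }
        // Express the clip duration on the 600 timescale rather than NSEC_PER_SEC.
        let duration = CMTime(seconds: asset.duration.seconds, preferredTimescale: 600)
        let range = CMTimeRange(start: .zero, duration: duration)
        try videoTrack.insertTimeRange(range, of: srcVideo, at: cursor)
        if let srcAudio = asset.tracks(withMediaType: .audio).first {
            try audioTrack.insertTimeRange(range, of: srcAudio, at: cursor)
        }
        cursor = CMTimeAdd(cursor, duration)
    }
    return composition
}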
