rapid volume changes cause artefacts in AVAudioPlayer - ios

I have some pure tones (sine waves) that I need to fade in and out and also adjust their volume arbitrarily through a thumbwheel. I generate the PCM data along with a WAV header and then encapsulate it in an AVAudioPlayer which plays back fine at constant volume.
Now, as trivial as this sounds, I'm finding that changing volume rapidly on iOS 7 causes some pretty nasty artefacts. Imagine a tone playing with a slider controlling its volume. Moving the slider around rapidly will cause the noise I'm describing. It's particularly obvious because the source signal is just a tone; if it were music, things would likely just get lost in the relative noise. Oddly, if I directly adjust the device volume via MPMusicPlayerController instead of trying to control things either through AVAudioPlayer's volume or even at the byte level, I get much smoother results and no artefacts. I suspect the device is doing something when its volume is adjusted that I am not. I know this kind of quantization noise is an issue in audio processing, and I'm wondering if anyone may have some advice.
I've also reproduced the issue using sample level playback via Novocaine. No matter what I do, I can't seem to get smooth, noise-free fade in/out characteristics. Any help would be much appreciated.

Have you checked out AVAudioMix? I've used this for ramping volume with an AVPlayer, and it works incredibly well. The setVolumeRamp(fromStartVolume:toEndVolume:timeRange:) method on AVMutableAudioMixInputParameters, which ramps volume over a specific time range, has been particularly helpful for fade in/out effects.
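For reference, a minimal sketch of this approach, assuming a local audio file URL and a 2-second fade-in (both placeholder values):

```swift
import AVFoundation

// A minimal sketch of ramping volume with AVAudioMix on an AVPlayer item.
// "audioURL" and the 2-second fade duration are placeholder assumptions.
let audioURL = URL(fileURLWithPath: "/path/to/tone.wav")
let asset = AVURLAsset(url: audioURL)
let item = AVPlayerItem(asset: asset)

if let audioTrack = asset.tracks(withMediaType: .audio).first {
    let params = AVMutableAudioMixInputParameters(track: audioTrack)
    // Fade in from silence to full volume over the first 2 seconds.
    let fadeRange = CMTimeRange(start: .zero,
                                duration: CMTime(seconds: 2, preferredTimescale: 600))
    params.setVolumeRamp(fromStartVolume: 0.0, toEndVolume: 1.0, timeRange: fadeRange)

    let audioMix = AVMutableAudioMix()
    audioMix.inputParameters = [params]
    item.audioMix = audioMix
}

let player = AVPlayer(playerItem: item)
player.play()
```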

If modifying audio volume at the sample level, make sure to change the volume smoothly, never suddenly from one sample to the next, otherwise the result will have discontinuities, which can be very noisy. E.g. gradually fade in any volume change over many (perhaps a few dozen) milliseconds' worth of samples, using either a linear ramp or an ease-in-ease-out half-cosine curve, applied per sample between level settings. Perhaps one controller is doing this automatically, but the other is not.
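As an illustration of that advice, here is a minimal sketch of a per-sample gain ramp you could apply in your own sample-level (e.g. Novocaine) render path; the function name, the ramp length, and the use of a plain Float array are assumptions for the example:

```swift
import Foundation

// Ramp the gain from oldGain to newGain across the first `rampSamples` samples
// of the buffer using a half-cosine (ease-in-ease-out) curve, then hold newGain.
func applyGainRamp(to samples: inout [Float],
                   from oldGain: Float,
                   to newGain: Float,
                   rampLength rampSamples: Int) {
    guard rampSamples > 0 else {
        for i in samples.indices { samples[i] *= newGain }
        return
    }
    for i in samples.indices {
        let gain: Float
        if i < rampSamples {
            let t = Double(i) / Double(rampSamples)        // 0...1 across the ramp
            let eased = Float(0.5 - 0.5 * cos(.pi * t))    // 0 -> 1, smooth at both ends
            gain = oldGain + (newGain - oldGain) * eased
        } else {
            gain = newGain
        }
        samples[i] *= gain
    }
}
```

At 44.1 kHz, a ramp length of roughly 2,205 samples gives the ~50 ms fade suggested above.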

Related

iOS timecode-synced downloadable animation system

As an introduction and context, I'm currently a novice iOS app developer and I want to make sure I'm not reinventing the wheel too much as I make this app (reinventing wheels can get very expensive.)
The app will allow the user to download our videos off the internet and will allow storage for offline usage. The problem with storing these videos on the device is that many of them will be too long and thus too big to be practical to store.
The videos are quite simple however, consisting of a couple short "real" video clips at the beginning and end, with the bulk of the video being still images animated around the screen. The animations would consist solely of opacity and simple transformation keyframes (translate, scale, rotate around static anchor point), and would require a variety of easing functions for each transition.
The hardest part likely would be that the "video" player will also have to be able to track with an audio player's timecode, and will have to support seeking to any arbitrary point like a normal video player.
So, now that I've described the problem, here's the solution I've come up with so far. Hopefully doing it this way will reduce the probability of XY problems. :)
The idea is to basically do a dumbed-down version of what Final Cut and other editing programs do with animations—have a bunch of clips, sometimes overlapping, and be able to animate the position, scale, rotation, and opacity of each using keyframes.
My first instinct as far as implementation goes is to use some of iOS's game-engine frameworks to do the animations (maybe SceneKit, because it seems to allow animations to use scene time as opposed to real time, despite the fact that it's primarily 3D and I'm doing 2D animations), and to manually handle syncing time with the audio player, as well as adding and removing nodes from the scene when seeking through the video and when clips begin/end.
What are some built-in systems, plugins, etc. that I can take advantage of to make this easier and faster to develop and maintain? Double points if I don't have to transcode the animations by hand to some custom format.
As I mentioned in my comment, your question is rather broad and contains multiple questions in one, so I will address what you mentioned is likely the hardest part:
https://developer.apple.com/documentation/avfoundation/avplayeritem
https://developer.apple.com/documentation/avfoundation/avasset
Instead of SceneKit, take a look at SpriteKit and its SKVideoNode.
Also, research Metal video processing. There are quite a few example projects available you could use as a starting point.
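For the timecode-syncing part specifically, here is a minimal sketch of one way to drive an animation timeline from an audio AVPlayer using a periodic time observer; the SyncedAnimationController class and the updateAnimations(to:) method are hypothetical placeholders for your own keyframe model:

```swift
import AVFoundation
import SpriteKit

// Sketch: keep a SpriteKit scene's animations locked to an audio AVPlayer's clock,
// so seeking in the player also "seeks" the animation.
final class SyncedAnimationController {
    let player: AVPlayer
    let scene: SKScene
    private var timeObserver: Any?

    init(audioURL: URL, scene: SKScene) {
        self.player = AVPlayer(url: audioURL)
        self.scene = scene
    }

    func start() {
        // Poll the player clock ~30 times per second and update the animation state.
        let interval = CMTime(value: 1, timescale: 30)
        timeObserver = player.addPeriodicTimeObserver(forInterval: interval,
                                                      queue: .main) { [weak self] time in
            self?.updateAnimations(to: time.seconds)
        }
        player.play()
    }

    private func updateAnimations(to seconds: Double) {
        // Placeholder: evaluate your keyframe model at `seconds` and apply the
        // resulting position/scale/rotation/opacity to the relevant SKNodes.
    }

    deinit {
        if let observer = timeObserver { player.removeTimeObserver(observer) }
    }
}
```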

Simplified screen capture: record video of only what appears within the layers of a UIView?

This SO answer addresses how to do a screen capture of a UIView. We need something similar, but instead of a single image, the goal is to produce a video of everything appearing within a UIView over 60 seconds -- conceptually like recording only the layers of that UIView, ignoring other layers.
Our video app superimposes layers on whatever the user is recording, and the ultimate goal is to produce a master video merging those layers with the original video. However, using AVVideoCompositionCoreAnimationTool to merge layers with the original video is very, very, very slow: exporting a 60-second video takes 10-20 seconds.
What we found is combining two videos (i.e., only using AVMutableComposition without AVVideoCompositionCoreAnimationTool) is very fast: ~ 1 second. The hope is to create an independent video of the layers and then combine that with the original video only using AVMutableComposition.
An answer in Swift is ideal but not required.
It sounds like your "fast" merge doesn't involve (re-)encoding frames, i.e. it's trivial and basically a glorified file concatenation, which is why it's getting 60x realtime. I asked about that because your "very slow" export runs at 3-6x realtime, which actually isn't that terrible (at least it wasn't on older hardware).
Encoding frames with an AVAssetWriter should give you an idea of the fastest possible non-trivial export and this may reveal that on modern hardware you could halve or quarter your export times.
This is a long way of saying that there might not be that much more performance to be had. If you think about the typical iOS video encoding use case, which would probably be recording 1920p @ 120 fps or 240 fps, your encoding at ~6x realtime @ 30 fps is in the ballpark of what your typical iOS device "needs" to be able to do.
There are optimisations available to you (like lower/variable framerates), but these may lose you the convenience of being able to capture CALayers.
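To get a feel for that AVAssetWriter baseline, here is a rough sketch that renders a CALayer into pixel buffers and appends them at a fixed frame rate; the function name, codec/size settings, and the frame loop are assumptions for illustration rather than a drop-in implementation (layer rendering should happen on the main thread, and error handling is omitted):

```swift
import AVFoundation
import UIKit

// Sketch: write a video containing only the overlay layer, one rendered frame at a time.
func writeLayerVideo(layer: CALayer, duration: Double, fps: Int32, outputURL: URL) throws {
    let size = layer.bounds.size
    let writer = try AVAssetWriter(outputURL: outputURL, fileType: .mp4)
    let settings: [String: Any] = [
        AVVideoCodecKey: AVVideoCodecType.h264,
        AVVideoWidthKey: Int(size.width),
        AVVideoHeightKey: Int(size.height)
    ]
    let input = AVAssetWriterInput(mediaType: .video, outputSettings: settings)
    let adaptor = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: input,
                                                       sourcePixelBufferAttributes: nil)
    writer.add(input)
    writer.startWriting()
    writer.startSession(atSourceTime: .zero)

    let frameCount = Int(duration * Double(fps))
    for frame in 0..<frameCount {
        var pixelBuffer: CVPixelBuffer?
        CVPixelBufferCreate(kCFAllocatorDefault, Int(size.width), Int(size.height),
                            kCVPixelFormatType_32ARGB, nil, &pixelBuffer)
        guard let buffer = pixelBuffer else { continue }

        CVPixelBufferLockBaseAddress(buffer, [])
        if let context = CGContext(data: CVPixelBufferGetBaseAddress(buffer),
                                   width: Int(size.width), height: Int(size.height),
                                   bitsPerComponent: 8,
                                   bytesPerRow: CVPixelBufferGetBytesPerRow(buffer),
                                   space: CGColorSpaceCreateDeviceRGB(),
                                   bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue) {
            layer.render(in: context)   // snapshot the layer tree for this frame
        }
        CVPixelBufferUnlockBaseAddress(buffer, [])

        while !input.isReadyForMoreMediaData { usleep(1000) }   // crude backpressure
        adaptor.append(buffer, withPresentationTime: CMTime(value: CMTimeValue(frame),
                                                            timescale: fps))
    }

    input.markAsFinished()
    writer.finishWriting { }
}
```

The resulting layer-only video could then be combined with the original recording using AVMutableComposition, as described in the question.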

Audio bars visualizer in iOS

I'm looking for a way to create an audio bars visualizer similar to this in iOS.
Every white bar will move up and down depending on the audio wave. I'm really lost because I don't have much experience dealing with audio in Objective-C.
EDIT: What I'm seeking is what the Overcast app does in its visualizer (the group of vertical orange bars on the lower part of the podcast's image).
Can anyone help?
Thanks
EDIT: Thanks to Tomer's answer I finally made it. First I did this tutorial in order to make it all clear. Then I created my own VisualizerView for my project; you can find it in this gist. Maybe it's not perfect, but it does what I needed to do.
Generally, you have a few options if you want to get an idea of what something sounds like in iOS:
Use the simple AVAudioPlayer audio player, and then use the [audioPlayer averagePowerForChannel:] method to get the average audio level for the current moment. Check out this tutorial.
Use the Audio Queue API, which lets you send whatever audio you want to the speaker: you would read audio from your source and fill the buffers with it every time. (If you're reading from a file, use AVAssetReader.) This way you always know exactly what waveform you're playing, so you can, for example, calculate its average power or process it in other ways, like an FFT. Then you'd update the bars accordingly.
EDIT: The standard way of doing such a thing is to use the Fast Fourier Transform (FFT) - it extracts frequency information from a sound. Here's a good example of using it on iOS (Apple's guide here). But, of course, to use it you have to know exactly what waveform you're playing every time, so you'd probably want to use a lower-level API such as Audio Queue.
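As a rough illustration of the first option, here is a minimal sketch that polls AVAudioPlayer's metering API from a CADisplayLink and converts the decibel reading into a 0...1 level; the BarsMeter class and the updateBars(with:) method are placeholders for your own bar views:

```swift
import AVFoundation
import UIKit

// Sketch: drive a bars visualizer from AVAudioPlayer's built-in metering.
final class BarsMeter {
    private let player: AVAudioPlayer
    private var displayLink: CADisplayLink?

    init(fileURL: URL) throws {
        player = try AVAudioPlayer(contentsOf: fileURL)
        player.isMeteringEnabled = true        // required before reading power levels
    }

    func start() {
        player.play()
        displayLink = CADisplayLink(target: self, selector: #selector(tick))
        displayLink?.add(to: .main, forMode: .common)
    }

    @objc private func tick() {
        player.updateMeters()
        // averagePower(forChannel:) returns decibels (roughly -160...0);
        // convert to a 0...1 value for the bar height.
        let db = player.averagePower(forChannel: 0)
        let level = pow(10.0, Double(db) / 20.0)
        updateBars(with: CGFloat(level))       // placeholder: drive your bar views here
    }

    private func updateBars(with level: CGFloat) {
        // e.g. set the height of each bar view based on `level`
    }
}
```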

iOS record audio and draw waveform like Voice Memos

I'm going to ask this at the risk of being too vague or asking too many things in one question, but I'm really just looking for a point in the right direction.
In my app I want to record audio, show a waveform while recording, and scroll through the waveform to record and playback from a specified time. For example, if I have 3 minutes of audio, I should be able to scroll back to 2:00 and start recording from there to fix a mistake.
In Voice Memos, this is accomplished instantaneously, without any delay or loading time. I'm trying to figure out how they did this, if anyone has a clue.
What I've tried:
EZAudio - This library is great, but doesn't do what I want. You can't scroll through the waveform. It deletes the waveform data at the beginning and begins appending it to the end once it reaches a certain length.
SCWaveformView - This waveform is nice, but it uses images. Once the waveform is too long, putting it in a scroll view causes really jittery scrolling. Also you can't build the waveform while recording, only afterward.
As far as appending, I've used this method: https://stackoverflow.com/a/11520553/1391672
But there is significant processing time, even when appending two very short clips of audio together (in my experience).
How does Voice Memos do what it does? Do you think the waveform is drawn in OpenGL or CoreGraphics? Are they using Core Audio or AVAudioRecorder? Has anyone built anything like this that can point me in the right direction?
When zoomed in, a scroll view only needs to draw the small portion of the waveform that is visible. When zoomed out, a graph view might only draw every Nth point of the audio buffer, or use some other DSP down-sampling algorithm on the data before rendering. This likely has to be done using your own custom drawing or graphics rendering code inside a UIScrollView or similar custom controller. The waveform rendering code during and after recording doesn't have to be the same.
The recording API and the drawing API you use can be completely independent, and can be almost anything, from OpenGL to Metal to Core Graphics (on newer faster devices). On the audio end, Core Audio will help provide the lowest latency, but Audio Queues and the AVAudioEngine might also be suitable.
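As a small illustration of the down-sampling idea, here is a sketch that reduces a buffer of samples to one peak value per drawing bin; the function name and the peak-per-bin choice are assumptions (you could equally average or take the RMS of each bin):

```swift
import Foundation

// Reduce a long sample buffer to one value per on-screen bin (e.g. one per point
// of the visible width), so the view only ever draws as many values as it can show.
func downsample(_ samples: [Float], toBinCount binCount: Int) -> [Float] {
    guard binCount > 0, !samples.isEmpty else { return [] }
    let samplesPerBin = max(1, samples.count / binCount)
    var peaks: [Float] = []
    peaks.reserveCapacity(binCount)

    var index = 0
    while index < samples.count && peaks.count < binCount {
        let end = min(index + samplesPerBin, samples.count)
        // Keep the loudest absolute value in this bin so short transients stay visible.
        let peak = samples[index..<end].reduce(Float(0)) { max($0, abs($1)) }
        peaks.append(peak)
        index = end
    }
    return peaks
}
```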

iOS AVPlayer: How to slow down a 30fps video to 1fps

I have a 30fps QuickTime .mov of still images I created with AVAssetWriter. (It's only about 10 frames long.) I would like the user to be able to slow it down using a UISlider to about 1fps, but when I adjust the AVPlayer .rate property from 1 down to 0, it doesn't get anywhere near 1fps, it just stops playback (because a 0 rate is effectively stopping/pausing it, which makes sense). But how can I slow the player down to about 1fps? I think I'd need to do some math to calculate the actual rate, but that's where I'm stuck. Would it end up being something like 0.000000000000001?
Thanks!
If this were a requirement of mine, I would approach it as follows (also suggested by Inafziger in the comments): use AVAssetReader and roll my own viewer for the images. This would give you precise control, using a timer as stated in your comments. Make sure you reuse some preallocated image memory (you can probably get away with space for a single image). I would probably take a pull approach like Core Audio: when you need an image, pull it from some image-buffer manager class that calls AVAssetReader's read function. This way you can have N buffers that will always be available. That may be a little overkill, though; I do believe AVAssetReader pre-decodes some amount of the movie upon initialization, which is why I say you can more than likely get away with a single buffer for reading image data into.
Regarding your comment about memory issues: I do believe there are some functions in AVAssetReader and its associated classes that follow the Create Rule.
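If it helps, here is a rough sketch of that approach under some assumptions: pull decoded frames from an AVAssetReader track output and show one per second with a Timer (an effective 1 fps). The SlowFramePlayer class, the image view, and the 1-second interval are illustrative placeholders, not a definitive implementation:

```swift
import AVFoundation
import CoreImage
import UIKit

// Sketch: step through a movie one decoded frame per second using AVAssetReader.
final class SlowFramePlayer {
    private var reader: AVAssetReader?
    private var output: AVAssetReaderTrackOutput?
    private var timer: Timer?
    private let imageView: UIImageView

    init(movieURL: URL, imageView: UIImageView) throws {
        self.imageView = imageView
        let asset = AVURLAsset(url: movieURL)
        guard let track = asset.tracks(withMediaType: .video).first else { return }
        let reader = try AVAssetReader(asset: asset)
        let settings: [String: Any] =
            [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA]
        let output = AVAssetReaderTrackOutput(track: track, outputSettings: settings)
        reader.add(output)
        reader.startReading()
        self.reader = reader
        self.output = output
    }

    func start() {
        // One frame per second, i.e. an effective 1 fps playback rate.
        timer = Timer.scheduledTimer(withTimeInterval: 1.0, repeats: true) { [weak self] _ in
            self?.showNextFrame()
        }
    }

    private func showNextFrame() {
        guard let sample = output?.copyNextSampleBuffer(),
              let pixelBuffer = CMSampleBufferGetImageBuffer(sample) else {
            timer?.invalidate()                         // end of movie
            return
        }
        let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
        imageView.image = UIImage(ciImage: ciImage)     // display the decoded frame
    }
}
```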
