Create a crossfader with AVAudioPlayer - iOS

How can I create a crossfade between two sound tracks with AVAudioPlayer?
Must I use an AVAudioMix? But... how should I use it?

AVAudioPlayer does not support using an AVAudioMix. You could try to do the fade yourself by directly setting the playback volume on each player, but getting the timing right between two AVAudioPlayers will be difficult, as AVAudioPlayer is known to have very high and unpredictable latency when starting to play.
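If you do try the manual route anyway, a rough sketch looks like this (assuming iOS 10+ for setVolume(_:fadeDuration:); the function name is illustrative, and the timing is best-effort only, given the latency caveat above):

import AVFoundation

// Manual crossfade between two already-prepared AVAudioPlayers.
func crossfade(from outgoing: AVAudioPlayer, to incoming: AVAudioPlayer,
               over duration: TimeInterval) {
    incoming.volume = 0
    incoming.play()                               // start the incoming track
    incoming.setVolume(1, fadeDuration: duration) // ramp it up...
    outgoing.setVolume(0, fadeDuration: duration) // ...while ramping the old one down
    DispatchQueue.main.asyncAfter(deadline: .now() + duration) {
        outgoing.stop()                           // stop the old player once faded out
    }
}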
One way to accomplish this is to use AVPlayer with an AVPlayerItem and an AVComposition. You can set up the composition to overlap the two audio files by your desired amount and set up an AVAudioMix to fade out the first and fade in the second. This method gives you precise control over when the audio files play relative to one another.
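For illustration, here is a minimal sketch of that approach; the file URLs, the 5-second overlap, and the function name are assumptions, and error handling is omitted:

import AVFoundation

// Build a player item in which track B starts 5 s before track A ends,
// with an audio mix that ramps A down and B up across the overlap.
func makeCrossfadedItem(urlA: URL, urlB: URL) throws -> AVPlayerItem {
    let assetA = AVURLAsset(url: urlA)
    let assetB = AVURLAsset(url: urlB)
    let overlap = CMTime(seconds: 5, preferredTimescale: 600)

    let composition = AVMutableComposition()
    let trackA = composition.addMutableTrack(withMediaType: .audio,
                                             preferredTrackID: kCMPersistentTrackID_Invalid)!
    let trackB = composition.addMutableTrack(withMediaType: .audio,
                                             preferredTrackID: kCMPersistentTrackID_Invalid)!

    // Lay out the two files so the second starts `overlap` before the first ends.
    try trackA.insertTimeRange(CMTimeRange(start: .zero, duration: assetA.duration),
                               of: assetA.tracks(withMediaType: .audio)[0], at: .zero)
    let startB = assetA.duration - overlap
    try trackB.insertTimeRange(CMTimeRange(start: .zero, duration: assetB.duration),
                               of: assetB.tracks(withMediaType: .audio)[0], at: startB)

    // Fade A out and B in across the overlapping region.
    let fade = CMTimeRange(start: startB, duration: overlap)
    let paramsA = AVMutableAudioMixInputParameters(track: trackA)
    paramsA.setVolumeRamp(fromStartVolume: 1, toEndVolume: 0, timeRange: fade)
    let paramsB = AVMutableAudioMixInputParameters(track: trackB)
    paramsB.setVolumeRamp(fromStartVolume: 0, toEndVolume: 1, timeRange: fade)

    let mix = AVMutableAudioMix()
    mix.inputParameters = [paramsA, paramsB]

    let item = AVPlayerItem(asset: composition)
    item.audioMix = mix
    return item
}

You would then play the result with AVPlayer(playerItem:).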

Related

set AVPlayer AVPlayerItem buffer size?

I'm playing video with AVPlayer and listening to the loadedTimeRanges property. When playing a video of about 10 minutes, AVPlayer always preloads the whole video, which feels very costly. Is there a way to limit the size of the preloaded area, such as preloading only half of the video at a time?
I think you're looking for AVPlayerItem's preferredForwardBufferDuration property.
Per Apple:
This property defines the preferred forward buffer duration in seconds. If set to 0, the player will choose an appropriate level of buffering for most use cases. Setting this property to a low value will increase the chance that playback will stall and re-buffer, while setting it to a high value will increase demand on system resources.
See https://developer.apple.com/reference/avfoundation/avplayeritem/1643630-preferredforwardbufferduration?language=objc
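A minimal example of setting it; the URL and the 10-second figure are just placeholders:

import AVFoundation

// Hypothetical media URL for illustration.
let videoURL = URL(string: "https://example.com/video.mp4")!
let item = AVPlayerItem(url: videoURL)
item.preferredForwardBufferDuration = 10 // seconds; 0 lets the player decide
let player = AVPlayer(playerItem: item)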
If you use an AVAssetResourceLoaderDelegate, you can control the exact amount of content that gets preloaded/downloaded prior to playback.
The code example here is a good start - AVPlayer stalling on large video files using resource loader delegate
Basically, you would have to maintain an array of pendingRequests and process them one by one, firing URLSession data tasks until you have downloaded as much content as you would like to preload.
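A rough sketch of that idea follows; it is heavily simplified (a real implementation must also fill in the contentInformationRequest, append to downloadedData from a URLSession data task, and handle cancellation and errors):

import AVFoundation

final class PreloadingLoader: NSObject, AVAssetResourceLoaderDelegate {
    private var pendingRequests: [AVAssetResourceLoadingRequest] = []
    private var downloadedData = Data() // appended to by your URLSession data task

    func resourceLoader(_ resourceLoader: AVAssetResourceLoader,
                        shouldWaitForLoadingOfRequestedResource loadingRequest: AVAssetResourceLoadingRequest) -> Bool {
        pendingRequests.append(loadingRequest)
        processPendingRequests()
        return true // we'll answer the request as data arrives
    }

    func processPendingRequests() {
        // Answer each pending request from whatever has been downloaded so far.
        pendingRequests = pendingRequests.filter { request in
            guard let dataRequest = request.dataRequest else { return true }
            let offset = Int(dataRequest.currentOffset)
            guard downloadedData.count > offset else { return true } // nothing new yet
            let stillNeeded = Int(dataRequest.requestedOffset) + dataRequest.requestedLength - offset
            let length = min(downloadedData.count - offset, stillNeeded)
            dataRequest.respond(with: downloadedData.subdata(in: offset ..< offset + length))
            if length >= stillNeeded {
                request.finishLoading()
                return false // fully satisfied, drop it from the queue
            }
            return true // keep it pending until more data arrives
        }
    }
}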
Cheers.

Swift: How to play short sounds longer

I'm working on a SpriteKit project using Swift and I want to play my 1-second-long sound for longer. For instance, the sound file is "beep" - I want it extended to something like "beeeep" or "beeeeeeeeep". I've tried several options for playing a sound file, but I can't find a way to extend it.
Currently, I'm using SKAction to play sound:
var beep = SKAction.playSoundFileNamed("beep.caf", waitForCompletion: false)
runAction(beep)
Anyone knows how to do this?
AVAudioPlayer has a rate property which lets you play at anywhere from half to double speed. If half speed works, you can use that.
If that does not work, you can use an AVComposition to create a composite track from your sound file. You can select portions of the sound file and stitch them together any way you like.
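For the rate approach, here is a small sketch; note that enableRate must be set before prepareToPlay() for rate changes to take effect, and the bundled file name is assumed:

import AVFoundation

let url = Bundle.main.url(forResource: "beep", withExtension: "caf")!
let player = try! AVAudioPlayer(contentsOf: url)
player.enableRate = true   // must be set before prepareToPlay()
player.prepareToPlay()
player.rate = 0.5          // valid range is 0.5 (half speed) to 2.0 (double)
player.play()              // the 1 s beep now takes about 2 s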

Playing Motor Sound Effects for iOS SpriteKit Game... AVAudioPlayer vs SKAction vs?

I am developing a driving game using SpriteKit and am having trouble with engine sound effects.
I want to have two different engine sounds. One for when the throttle button is being pressed and one for when the throttle button is not being pressed. One of the two sounds will be playing constantly while the game is going.
What is the best approach? Should my sound files be extremely short (0.10 seconds or less) and looped or should they be fairly long and just turned on and off? Should I use SKAction to play the sounds or AVAudioPlayer or something else? I have tried using AVAudioPlayer but every time I pause and play the player (switching the throttle on or off), the frame rate of the game momentarily drops. Any help is appreciated!
Based on the comment left by LearnCocos2D, I looked into ObjectAL. For those not familiar with it, ObjectAL is designed to be a simple and intuitive interface to OpenAL and AVAudioPlayer. You can download it and find more information about it here...
http://kstenerud.github.io/ObjectAL-for-iPhone/index.html
For my audio clip I used a 3- to 5-second-long .caf audio file of a motor sound. ObjectAL allowed me to continuously loop the audio file and vary its pitch. By varying the pitch, I could simulate the motor at different speeds.
Below is a sample of the code...
Two member variables (or you can make them properties)...
ALBuffer *_buffer;
ALSource *_source;
Method to initialize the motor sound effect...
- (void)initializeSound
{
    // We'll let OALSimpleAudio deal with the device and context.
    // Since we're not going to use it for playing effects, don't give it any sources.
    [OALSimpleAudio sharedInstance].reservedSources = 0;

    _source = [ALSource source];
    _buffer = [[OpenALManager sharedInstance] bufferFromFile:@"EngineSound.caf"];

    _source.pitch = 0.30; // Start at low pitch for engine idle.
    [_source play:_buffer loop:YES];
}
Inside my SKScene's update method I adjust the pitch according to speed.
- (void)update:(CFTimeInterval)currentTime
{
    CGFloat enginePitch;

    // Code to calculate desired enginePitch value based on vehicle speed.

    _source.pitch = enginePitch;
}
Because of an alleged bug in SpriteKit, I did have an issue with a gpus_ReturnNotPermittedKillClient EXC_BAD_ACCESS error when closing the app. I found the fix here...
https://stackoverflow.com/a/19283721/3148272
I would suggest using AVAudioPlayer. You need to keep it in a property rather than creating a new one every time you want to play something.
SKAction is limited in that you can't stop or cancel a playing sound (even by removing the action).
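For example, a minimal sketch of holding the player in a property; the class and file names are illustrative:

import AVFoundation

final class EngineSound {
    private var player: AVAudioPlayer?

    func start() {
        if player == nil {
            let url = Bundle.main.url(forResource: "EngineSound", withExtension: "caf")!
            player = try? AVAudioPlayer(contentsOf: url)
            player?.numberOfLoops = -1 // loop until stopped
            player?.prepareToPlay()    // preload buffers to cut start latency
        }
        player?.play()
    }

    func stop() {
        player?.pause() // pausing keeps the player ready for the next start()
    }
}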

In AVFoundation, how to synchronize recording and playback

I am interested in recording media using an AVCaptureSession in iOS while playing media back using an AVPlayer (specifically, I am playing back audio and recording video, but I'm not sure it matters).
The problem is, when I play the resulting media back together later, they are out of sync. Is it possible to synchronize them, either by ensuring that playback and recording start simultaneously, or by discovering what the offset is between them? I probably need the sync to be on the order of 10 ms. It is unreasonable to assume that I can always capture audio (since the user may use headphones), so syncing via analysis of original and recorded audio is not an option.
This question suggests that it's possible to end playback and recording simultaneously and determine the initial offset from the resulting lengths that way, but I'm unclear how to get them to end simultaneously. I have two cases: 1) the audio playback runs out, and 2), the user hits the "stop recording" button.
This question suggests priming and then applying a fixed, but possibly device-dependent delay, which is obviously a hack, but if it's good enough for audio it's obviously worth considering for video.
Is there another media layer I can use to perform the required synchronization?
Related: this question is unanswered.
If you are specifically using AVPlayer to play back the audio, I would suggest using Audio Queue Services for that instead. It is seamless and fast, since it reads buffer by buffer, and play/pause is faster than with AVPlayer.
There is also the possibility that you are missing an initial call to [avPlayer prepareToPlay], which might be causing extra overhead before it can sync and start playing the audio.
Hope it helps you.

Playing an AVMutableComposition with AVPlayer audio gets out of sync

I have an AVMutableComposition with 2 audio tracks and one video track. I'm using the composition to string about 40 different video clips from .mov files, putting the video content of each clip in the video track of my composition and the audio in the audio track. The second audio track I use for music.
I also have a synchronized layer for titles graphics.
When I play this composition using an AVPlayer, the audio slowly gets out of sync. It takes about 4 minutes to become noticeable. If I only string together a handful of longer clips the problem is not as apparent; it is when there are many shorter clips (~40 in my test) that it gets really bad.
Pausing and Playing doesn't re-sync the audio, however seeking does. In other words, if I let the video play to the end, towards the end the lip sync gets noticeably off even if I pause and play throughout, however, if I seek to a time towards the end the audio gets back in sync.
My hacky solution for now is to seek to the currentTime + 1 frame every minute or so. This creates an unpleasant jump in the video caused by a lag in the seek operation, so not a good solution.
Exporting with an ExportSession doesn't present this problem, audio remains in sync in the output movie.
I'm wondering if the new masterClock property in the AVPlayer is the answer to this, and if it is, how is it used?
I had the same issue and fixed it, among many other audio and video problems, by specifying timescales in the following manner:
CMTime(seconds: my_seconds, preferredTimescale: CMTimeScale(600))
Before, my timescale was CMTimeScale(NSEC_PER_SEC). That caused jitter when composing clips at different frame rates, plus the audio drifting out of sync that Eddy mentions here.
In spite of looking like a magic number, 600 is a common multiple of 24, 30, 60 and 120, which are the usual frame rates for different purposes. Using a common multiple avoids dragging rounding problems around when composing multiple clips.
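For illustration, here is a sketch of inserting a clip into a composition using the 600 timescale; the clip path is hypothetical and error handling is omitted:

import AVFoundation

let composition = AVMutableComposition()
let videoTrack = composition.addMutableTrack(withMediaType: .video,
                                             preferredTrackID: kCMPersistentTrackID_Invalid)!
let clip = AVURLAsset(url: URL(fileURLWithPath: "clip1.mov")) // illustrative path
let range = CMTimeRange(start: CMTime(seconds: 0, preferredTimescale: 600),
                        duration: CMTime(seconds: 3.5, preferredTimescale: 600))
try! videoTrack.insertTimeRange(range,
                                of: clip.tracks(withMediaType: .video)[0],
                                at: composition.duration) // append at the current end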
