I am using AudioKit 4.3 with Xcode 9.4.1 and Swift. So far I have a sequencer playing an AKSynthKick in classic house fashion: a 4-beat loop with the kick on every beat. But I am clueless about adding .wav, .caf, or .aiff files to play with the sequencer. AKMIDISampler asks for notes, which doesn't make sense to me if it's a single file...
AKMIDISampler will work for you here. AKMIDISampler is a subclass of AKAppleSampler, and it has the methods loadWav() and loadAudioFile(). The noteNumber parameter, when using an audio file, will control playback speed/pitch. Using MIDINoteNumber 60 will play back the file at its actual speed, 72 will be double speed (and an octave higher, if it's a pitched sample), 48 will be half speed (an octave lower) and so on.
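To make that concrete, here is a rough sketch of how it could slot into a setup like yours with AudioKit 4.3. The file name "clap", the tempo, and the beat positions are placeholders, and error handling is reduced to try? for brevity:

import AudioKit

let sampler = AKMIDISampler()
let sequencer = AKSequencer()

// "clap" stands in for any .wav in your bundle; loadAudioFile() covers .caf / .aiff
try? sampler.loadWav("clap")

sequencer.setTempo(124)
sequencer.setLength(AKDuration(beats: 4))
sequencer.enableLooping()

let clapTrack = sequencer.newTrack()
clapTrack?.setMIDIOutput(sampler.midiIn)

// Note 60 plays the sample at its original speed; 72 doubles it, 48 halves it
clapTrack?.add(noteNumber: 60, velocity: 100,
               position: AKDuration(beats: 1),
               duration: AKDuration(beats: 1))
clapTrack?.add(noteNumber: 60, velocity: 100,
               position: AKDuration(beats: 3),
               duration: AKDuration(beats: 1))

AudioKit.output = AKMixer(sampler)   // mix in your AKSynthKick node here as well
try? AudioKit.start()
sequencer.play()

The sequencer track just sends MIDI note-ons to the sampler's midiIn, so the same pattern you already use for the AKSynthKick track applies.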
I am using The Amazing Audio Engine to simply play an audio file, but I find that when the channel starts playing, there is some automatic fade in happening.
You can see the top waveform is the output of my iPad, and the bottom waveform is the actual raw audio file. There is definitely a 30ms microfade being done.
There is nothing doing that within The Amazing Audio Engine library itself, so it must be happening internally in Apple's mixer audio unit. Is there any way to turn off this behavior?
I suspect that the AudioFilePlayer (used by TAAE) uses Extended Audio File Services under the hood. ExtAudioFileRef will do that on the first read after a seek if there is any decoding or sample rate conversion involved. I had to use Audio File Services directly to get rid of the implicit fading.
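For illustration, here is a simplified sketch of the direct Audio File Services read path. The function name is just an example, and it assumes a constant-bitrate 16-bit stereo LPCM file so no decoder or sample rate converter is involved:

import AudioToolbox
import Foundation

// Reads `packetCount` packets starting at `startingPacket`; 4 bytes per packet
// is an assumption (16-bit stereo LPCM).
func readPackets(from url: URL, startingPacket: Int64, packetCount: UInt32) -> Data? {
    var fileID: AudioFileID?
    guard AudioFileOpenURL(url as CFURL, .readPermission, 0, &fileID) == noErr,
          let audioFile = fileID else { return nil }
    defer { AudioFileClose(audioFile) }

    var numBytes: UInt32 = packetCount * 4
    var numPackets: UInt32 = packetCount
    var buffer = Data(count: Int(numBytes))
    let status = buffer.withUnsafeMutableBytes { raw -> OSStatus in
        AudioFileReadPacketData(audioFile, false, &numBytes, nil,
                                startingPacket, &numPackets, raw.baseAddress!)
    }
    return status == noErr ? buffer.prefix(Int(numBytes)) : nil
}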
I have an app that plays a sound file every time the screen is touched. For some reason, the app will crash every once in a while with the following error:
reason: 'Resource tick.mp3 can not be loaded'
In case you need it, here is how I play the file each time the screen is tapped:
runAction(SKAction.playSoundFileNamed("tick.mp3", waitForCompletion: false))
This does not happen very often, maybe 1 in 10 runs of the app. Most of the time everything works as expected. I wish I knew what I am doing to cause the crash but I have no clue! I am just tapping away seemingly no different than the times when it doesn't crash. Then all of a sudden I get this issue...
If you play the sound via a playSound function that reuses a preloaded SKAction, it will work:

let soundFile = SKAction.playSoundFileNamed("bark.wav", waitForCompletion: false)

playSound(soundFile)

where playSound is:

func playSound(_ soundVariable: SKAction) {
    // run the already-created action rather than building a new one on every tap
    runAction(soundVariable)
}
First of all, it looks like you are using an mp3 file for (short) sound effects. MP3 audio is compressed, so in memory it will have a different, bigger size, and there is also a decoding performance penalty (decoding takes CPU time). The most important thing, and the reason why I am talking about mp3 files, can be found in the docs:
When using hardware-assisted decoding, the device can play only a single instance of one of the supported formats at a time. For example, if you are playing a stereo MP3 sound using the hardware codec, a second simultaneous MP3 sound will use software decoding. Similarly, you cannot simultaneously play an AAC and an ALAC sound using hardware. If the iPod application is playing an AAC or MP3 sound in the background, it has claimed the hardware codec; your application then plays AAC, ALAC, and MP3 audio using software decoding.
As you can see, the problem is that only one mp3 file at a time can be played using hardware. If you play more than one mp3 at a time, they will be decoded in software, and that is slow.
So, I would recommend using .wav or .caf files for sound effects; mp3 is probably fine for background music.
About the crashing issue:
try to use .wav or .caf files instead of .mp3
try to hold a strong reference to the SKAction and reuse it, as suggested by Reece Kenney (see the sketch below).
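For the second point, a minimal sketch of what that can look like in an SKScene. The file name and helper method are placeholders, and it keeps the Swift 2-era runAction spelling used in the snippets above (in Swift 3+ it is run(_:)):

import SpriteKit

class GameScene: SKScene {
    // Created once and reused; a .wav avoids the mp3 hardware-codec contention described above
    let tickSound = SKAction.playSoundFileNamed("tick.wav", waitForCompletion: false)

    func playTick() {
        runAction(tickSound)
    }
}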
We're building an iPhone game using PhoneGap.
iOS devices support many audio formats, and we are thinking about using .mp3 or .caf files for the sound effects.
Does it matter which audio format is used? What are the differences between using one versus another?
CAF for sound effects.
MP3 for soundtracks (mp3 files can't be looped seamlessly; there is a small pause between repeats).
I am trying to do slow motion for my video file along with its audio. In my case, I have to do ramped slow motion (gradually slowing down and speeding up, like a parabola), not linear slow motion.
Ref: Linear slow motion:
Ref: Ramped slow motion:
What have I done so far:
Used AVFoundation for the first three items below.
From the video file, separated the audio and video.
Did slow motion for the video using the AVFoundation API (scaleTimeRange); it's working fine (a simplified sketch of that call appears after this list).
The same is not working for audio. There seems to be a bug in Apple's API itself (Bug ID: 14616144). The relevant question is scaleTimeRange has no effect on audio type AVMutableCompositionTrack.
So I switched to Dirac, but later found that Dirac's open source edition has a limitation: it doesn't support dynamic time stretching.
Finally, I am trying to do it with OpenAL.
I've taken a sample OpenAL program from the Apple developer forum and executed it.
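The scaleTimeRange step looks roughly like this (a simplified sketch; the track setup is omitted and the times are placeholders):

import AVFoundation

// Stretching one second of the composition to two seconds gives 0.5x (linear) slow motion
let composition = AVMutableComposition()
// ... insert the source video/audio tracks into `composition` here ...
let range = CMTimeRange(start: CMTime(seconds: 1, preferredTimescale: 600),
                        duration: CMTime(seconds: 1, preferredTimescale: 600))
composition.scaleTimeRange(range, toDuration: CMTime(seconds: 2, preferredTimescale: 600))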
Here are my questions:
Can I store/save the processed audio with OpenAL? If it's not directly possible with OpenAL, can it be done with AVFoundation + OpenAL?
Very importantly, how do I do slow motion or stretch the time scale with OpenAL? If I know how to time stretch, I can apply the logic for ramped slow motion.
Is there any other way?
I can't really speak to 1 or 2, but time scaling audio can be as easy as resampling. If you have RAW/PCM audio sampled at 48 kHz and want to play it back at half speed, resample it to 96 kHz and play the result at 48 kHz. Since you have twice the number of samples, it will take twice as long to play. Generally:
scaledSampleRate = (originalSampleRate / playRate);
or
playRate = (originalSampleRate / scaledSampleRate);
This will affect the pitch of the track; however, that may be the desired effect, since that behavior is somewhat expected in "slow motion" audio. There are more advanced techniques that preserve pitch while scaling time. The open source software Audacity implements these algorithms; you could find inspiration there. There are many resources on the web that explain the tradeoffs of pitch shifting vs. time stretching.
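As a rough illustration of the resampling idea (a sketch, not a production resampler): linear interpolation over mono float PCM, where playRate < 1.0 slows the audio down (and lowers the pitch) and playRate > 1.0 speeds it up:

func resample(_ input: [Float], playRate: Double) -> [Float] {
    guard playRate > 0, !input.isEmpty else { return input }
    let outputCount = Int(Double(input.count) / playRate)
    var output = [Float](repeating: 0, count: outputCount)
    for i in 0..<outputCount {
        // Map each output sample back to a (fractional) position in the input
        let srcPos = Double(i) * playRate
        let index = Int(srcPos)
        let next = min(index + 1, input.count - 1)
        let frac = Float(srcPos - Double(index))
        output[i] = input[index] * (1 - frac) + input[next] * frac
    }
    return output
}

For the ramped case you could let playRate vary over time and accumulate srcPos incrementally instead of computing it from i.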
http://en.wikipedia.org/wiki/Audio_time-scale/pitch_modification
http://www.dspdimension.com/admin/time-pitch-overview/
Another option you may not have considered is muting the audio during slow motion. That seems to be the technique employed by most AV playback utilities. However, depending on your use case, distorted audio does indicate time is being manipulated.
I have applied slow motion to a complete video, including audio; this might help you. Check this link: How to do Slow Motion video in iOS
I am creating an iPhone application that uses audio.
I want to play a beep sound that loops indefinitely.
I found an easy way to do that using the high-level AVAudioPlayer with numberOfLoops set to -1. It works fine.
But now I want to play this audio and be able to change the rate/speed. It should work like the sound a car makes when approaching an obstacle: at the beginning the beep has a low frequency, and that frequency accelerates until it reaches a continuous biiiiiiiiiiiip...
It seems this is not feasible using the high-level AVAudioPlayer, and even looking at AudioToolbox I found no solution.
Any help?
Take a look at Dave Dribin's A440 sample application, which plays a constant 440 Hz tone on the iPhone / iPad. It uses the lower-level Audio Queue Services, but it does what you're asking (short of the realtime tone adjustment, which would just require a tweak of the existing code).
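A very rough sketch of that approach (my own illustration, not Dave Dribin's code): an Audio Queue output callback that synthesizes a sine tone whose frequency you can change at runtime. The global names and buffer sizes are arbitrary choices:

import AudioToolbox
import Foundation

// Globals, because an AudioQueueOutputCallback is a C function pointer and cannot capture context
var toneFrequency: Double = 440.0      // change this at runtime to sweep the pitch
let toneSampleRate: Double = 44100.0
var tonePhase: Double = 0.0

let toneCallback: AudioQueueOutputCallback = { _, queue, buffer in
    let frameCount = Int(buffer.pointee.mAudioDataBytesCapacity) / MemoryLayout<Int16>.size
    let samples = buffer.pointee.mAudioData.bindMemory(to: Int16.self, capacity: frameCount)
    for i in 0..<frameCount {
        samples[i] = Int16(sin(tonePhase) * 0.25 * Double(Int16.max))
        tonePhase += 2.0 * .pi * toneFrequency / toneSampleRate
    }
    buffer.pointee.mAudioDataByteSize = UInt32(frameCount * MemoryLayout<Int16>.size)
    AudioQueueEnqueueBuffer(queue, buffer, 0, nil)
}

// 16-bit mono linear PCM
var format = AudioStreamBasicDescription(
    mSampleRate: toneSampleRate, mFormatID: kAudioFormatLinearPCM,
    mFormatFlags: kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked,
    mBytesPerPacket: 2, mFramesPerPacket: 1, mBytesPerFrame: 2,
    mChannelsPerFrame: 1, mBitsPerChannel: 16, mReserved: 0)

var queue: AudioQueueRef?
AudioQueueNewOutput(&format, toneCallback, nil, nil, nil, 0, &queue)
for _ in 0..<3 {                       // prime a few buffers before starting
    var buffer: AudioQueueBufferRef?
    AudioQueueAllocateBuffer(queue!, 4096, &buffer)
    toneCallback(nil, queue!, buffer!)
}
AudioQueueStart(queue!, nil)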