Piano notes with AKKeyboardView (iOS)

I am new to AudioKit. I am able to use AKKeyboardView to play notes using AKOscillatorBank, but I want the audio to sound more like a grand piano. Loading .wav files seems to make the notes choppy, and changing the note envelope hasn't helped. How can I map grand piano notes onto the AKKeyboardView keys?

You're not easily going to get a piano sound out of an oscillator; you might want to use a soundfont instead. You can load an sf2 file (but not sf3, I believe) into an AKAppleSampler and trigger it using AKKeyboardDelegate, just as you are doing with the AKOscillatorBank. MuseScore has a list of soundfont file links, many of which use open source licenses.
First add the sf2 file to your project, then set up the AKAppleSampler:
let sampler = AKAppleSampler()
// note that if you're using a GM soundfont, 'Grand Piano' will be preset 0
try sampler.loadMelodicSoundFont("NameOfSoundFontWithoutExtension", preset: 0)
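A minimal sketch of how the pieces fit together (assuming AudioKit 4.x; "GrandPiano" and PianoViewController are placeholder names, not part of AudioKit):

import AudioKit
import UIKit

class PianoViewController: UIViewController, AKKeyboardDelegate {

    let sampler = AKAppleSampler()

    override func viewDidLoad() {
        super.viewDidLoad()
        // "GrandPiano" is a placeholder - use your sf2's filename without the extension
        try? sampler.loadMelodicSoundFont("GrandPiano", preset: 0)
        AudioKit.output = sampler
        try? AudioKit.start()

        let keyboard = AKKeyboardView(frame: view.bounds)
        keyboard.delegate = self
        view.addSubview(keyboard)
    }

    // AKKeyboardDelegate - the same callbacks you already use with AKOscillatorBank
    func noteOn(note: MIDINoteNumber) {
        try? sampler.play(noteNumber: note, velocity: 100, channel: 0)
    }

    func noteOff(note: MIDINoteNumber) {
        try? sampler.stop(noteNumber: note, channel: 0)
    }
}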

Related

Creating a MIDI file from an AKKeyboardView

Currently I am using an AKKeyboardView connected, essentially, to an AKRhodesPiano object, and I was wondering if there is an easy way to generate a MIDI file from this?
I see the AKKeyboardView has noteOn and noteOff functions, which do produce a MIDINoteNumber, but I can't find anything else in the AudioKit library to take this input and generate a MIDI file, even just a simple one.
You would need to run an AKSequencer in the background (maybe with a metronome track). Make an additional track that you will record onto. Also set the length to be as long as you will need for the recording.
When you get a noteOn message from the keyboard, you can check the sequencer's currentPosition and record this into a dictionary. When you get the matching pitch's noteOff message, again check the currentPosition. Use the difference between these two times to get the duration and add a note to your recording track on the sequencer:
myRecordingTrack.add(noteNumber: noteNumber,
                     velocity: 127,
                     position: timeAtNoteOn,
                     duration: timeAtNoteOff - timeAtNoteOn,
                     channel: 0)
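As a rough sketch of that bookkeeping (noteOnTimes, sequencer, and myRecordingTrack are assumed names; noteOn/noteOff are the AKKeyboardDelegate callbacks):

var noteOnTimes = [MIDINoteNumber: AKDuration]()

func noteOn(note: MIDINoteNumber) {
    // remember when this pitch went down
    noteOnTimes[note] = sequencer.currentPosition
}

func noteOff(note: MIDINoteNumber) {
    // pair the note-off with its matching note-on and write the note
    guard let startTime = noteOnTimes.removeValue(forKey: note) else { return }
    let endTime = sequencer.currentPosition
    myRecordingTrack.add(noteNumber: note,
                         velocity: 127,
                         position: startTime,
                         duration: AKDuration(beats: endTime.beats - startTime.beats),
                         channel: 0)
}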
Then you could easily use AKSequencer's genData() to create a MIDI file (possibly either deleting the metronome track, or copying the recorded track to a new AKSequencer instance).
Check out the SequencerDemo for setting up AKSequencer and building sequences and MIDIFileEditAndSync (both in the iOS Example folder in the AudioKit repo) for an example of writing AKSequencer to a MIDI file.

Read HLS Playlist information to dynamically change the preferredBitRate of an Item

I'm working on a video app. We are changing from regular mp4 files to HLS; one of the many reasons for the change is that we have much more control over the bandwidth usage of videos (we load lots of other stuff in our player, so we need to optimize the experience as much as possible).
So, AVFoundation introduced in iOS 10 the ability to control the bandwidth using:
AVPlayerItem *playerItem = [AVPlayerItem playerItemWithAsset:self.urlAsset];
playerItem.preferredForwardBufferDuration = 30.0;
playerItem.preferredPeakBitRate = 200000.0; // Remember this line
There's also a property introduced in iOS 11 to cap the resolution of the item, preferredMaximumResolution. We're using it, but we still need a solution for iOS 10 devices.
So now we have control over preferredPeakBitRate, which is nice, but we have a problem: not all the HLS sources are generated by us. Say we want to set a maximum resolution of 480p when the user is not connected to a Wi-Fi network; today I have no way to achieve that, because I won't always know how much bandwidth the 480p rendition of the selected HLS playlist needs.
One thing I was thinking about is reading the information inside the m3u8 file, to at least know which quality renditions my player can show and how much bandwidth each one needs.
One way to do this would be to download the m3u8 playlist as plain text and use a regex to read the file and process the data. I'm trying to avoid that; I feel this should be far less difficult.
I cannot read this information from the tracks, because (a) I can't find the information there, and (b) the tracks are replaced dynamically when the quality changes - yes, one track for every quality level.
So I don't know how I can get this information. I've searched Google and Stack Overflow and can't find it. Can anyone help me?
Here's an example of what I want to do. I have this example playlist:
#EXTM3U
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=314000,RESOLUTION=228x128,CODECS="mp4a.40.2"
test-hls-1-16a709300abeb08713a5cada91ab864e_hls_duplex_192k.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=478000,RESOLUTION=400x224,CODECS="avc1.42001e,mp4a.40.2"
test-hls-1-16a709300abeb08713a5cada91ab864e_hls_duplex_400k.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=691000,RESOLUTION=480x270,CODECS="avc1.42001e,mp4a.40.2"
test-hls-1-16a709300abeb08713a5cada91ab864e_hls_duplex_600k.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=1120000,RESOLUTION=640x360,CODECS="avc1.4d001f,mp4a.40.2"
test-hls-1-16a709300abeb08713a5cada91ab864e_hls_duplex_1000k.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=1661000,RESOLUTION=960x540,CODECS="avc1.4d001f,mp4a.40.2"
test-hls-1-16a709300abeb08713a5cada91ab864e_hls_duplex_1500k.m3u8
And I just want to have that information available in an array inside my code, something like this:
NSArray<ZZMetadata *> *metadataArray = self.urlAsset.bandwidthMetadata;
NSLog(@"Metadata info: %@", metadataArray);
And print something like this:
<__NSArrayM 0x123456789> (
    <ZZMetadata 0x234567890> {
        trackId: 1
        neededBandwidth: 314000
        resolution: 228x128
        codecs: ...
        ...
    }
    <ZZMetadata 0x345678901> {
        trackId: 2
        neededBandwidth: 478000
        resolution: 400x224
    }
    ...
)
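A rough sketch of the plain-text parsing approach mentioned above, in Swift (the StreamVariant type is hypothetical, not an AVFoundation API, and a production parser should follow the attribute-list grammar in the HLS spec):

import Foundation

// hypothetical value type holding one #EXT-X-STREAM-INF entry
struct StreamVariant {
    let bandwidth: Int
    let resolution: String
    let codecs: String
    let uri: String
}

// extracts NAME=value from an attribute list, allowing quoted values that contain commas
func attributeValue(_ name: String, in line: String) -> String? {
    let pattern = "\(name)=(\"[^\"]*\"|[^,]*)"
    guard let range = line.range(of: pattern, options: .regularExpression) else { return nil }
    let raw = String(line[range].dropFirst(name.count + 1))
    return raw.trimmingCharacters(in: CharacterSet(charactersIn: "\""))
}

func parseMasterPlaylist(_ text: String) -> [StreamVariant] {
    let lines = text.components(separatedBy: .newlines)
    var variants = [StreamVariant]()
    for (index, line) in lines.enumerated() where line.hasPrefix("#EXT-X-STREAM-INF:") {
        guard index + 1 < lines.count else { continue }
        variants.append(StreamVariant(
            bandwidth: Int(attributeValue("BANDWIDTH", in: line) ?? "") ?? 0,
            resolution: attributeValue("RESOLUTION", in: line) ?? "",
            codecs: attributeValue("CODECS", in: line) ?? "",
            uri: lines[index + 1]))  // the variant URI is the line after the tag
    }
    return variants
}

From the parsed BANDWIDTH values you could then pick the rendition closest to your 480p budget and assign it to preferredPeakBitRate.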

Playing multi-sampled Instruments using AudioKit, controlling ADSR envelope

I'm trying to play an instrument built from several .wav samples using AudioKit.
Here's what I've tried so far:
Using AKSampler (with the underlying AVAudioUnitSampler) - it worked fine, but I can't figure out how to control the ADSR envelope here - calling stop will stop the note immediately.
Another way is to use an AKSamplePlayer for each sample and play it, manually setting the rate so it plays the right note. I could then (possibly?) connect an AKAmplitudeEnvelope to each sample player. But if I want to play 5 notes of the same sample simultaneously, I would need 5 instances of AKSamplePlayer, which seems like a waste of resources.
I also tried to find a way to just push raw audio samples to the AudioKit output buffer, doing the mixing and sample interpolation myself (in C, probably?), but I didn't find out how to do it :(
What is the right way to make a multi-sampled instrument using AudioKit? I feel like it must be a fairly simple task.
Thanks to mahal tertin, it's pretty easy to use AKAUPresetBuilder!
You can create an .aupreset file somewhere in the tmp directory and then load this instrument with AKSampler.
The only thing worth noting is that by default AKAUPresetBuilder will generate samples with the trigger mode set to trigger, which will ignore note-off events, so you should set it explicitly.
For example:
let sampleC4 = AKAUPresetBuilder.generateDictionary(
    rootNote: 60,
    filename: pathToC4WavSample,
    startNote: 48,
    endNote: 65)
sampleC4["triggerMode"] = "hold"

let sampleC5 = AKAUPresetBuilder.generateDictionary(
    rootNote: 72,
    filename: pathToC5WavSample,
    startNote: 66,
    endNote: 83)
sampleC5["triggerMode"] = "hold"

AKAUPresetBuilder.createAUPreset(
    dict: [sampleC4, sampleC5],
    path: pathToAUPresetFilename,
    instrumentName: "My Instrument",
    attack: 0,
    release: 0.2)
and then create a sampler and start AudioKit:
sampler = AKSampler()
try sampler.loadInstrument(atPath: pathToAUPresetFilename)
AudioKit.output = sampler
AudioKit.start()
and then use this to start playing a note:
sampler.play(noteNumber: MIDINoteNumber(63), velocity: MIDIVelocity(120), channel: 0)
and this to stop it, respecting the release parameter:
sampler.stop(noteNumber: MIDINoteNumber(63), channel: 0)
Probably the best way would be to embed your wav files into an EXS or SoundFont format, making use of tools in that realm to accomplish the ADSR, for instance. Otherwise you'll pretty much have to have an instrument for each sample.

AVMutableMetadataItem's time & duration INVALID after reading

Recently I needed to add custom tags to recorded video - local video on the device, not streamed video. The task is to add some event-specific tags to the video, whose position could be set by pressing forward/backward buttons, as in any player.
It is not important whether the movie file is in mov or mp4 format.
I searched the forum and found several samples of how to add metadata using AVAssetExportSession, and it worked.
However, when I tried to add metadata using AVAssetWriter, I wasn't able to append the attributes to the video.
What I do not understand is that after adding an attribute, the returned time & duration properties are always invalid.
For instance, let's say I have a video with a duration of 2 seconds.
I have tried different key spaces. I am not able to write keys from the ID3 space. Is ID3 used for streamed video? (As far as I understand, ID3 is the metadata format of .mp3.) That would explain why I was not able to write it into an MPEG-4 file.
I also used QuickTimeUserData & ISOUserData but the results are the same.
Here is an example:
AVMutableMetadataItem *item2 = [AVMutableMetadataItem new];
item2.keySpace = AVMetadataKeySpaceiTunes;
item2.key = AVMetadataiTunesMetadataKeyUserComment;
item2.value = @"One two three";
item2.duration = CMTimeMakeWithSeconds(1, 1);
item2.time = CMTimeMakeWithSeconds(0, 1);
After reading I got the following:
AVMutableMetadataItem: 0xa4301f0, keySpace=itsk, key=\U00a9cmt, commonKey=(null), locale= (null), value=One two three, time={INVALID}, duration={INVALID}, extras={\n dataType = 1;\n}
I would like to use the time & duration properties of the metadata instead of writing custom data and processing it afterwards.
Ideally it would be great to append an array of items with time = t1, duration = d1, ..., (tn, dn).
Does anyone know how to accomplish that?
I ended up with a solution that adds chapters to the video file instead of using metadata.
I looked at the available libraries and took mpv4lib.
The library is currently not compiled for iOS, so I ported the source project into a static library for the iOS platform.
That library allows you to add custom "atoms" to an mp4 file, one of which is a QuickTime text track containing chapters.
I did something similar to that post.
The library is located here.
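For reference, AVFoundation does have a route for time-ranged metadata: an AVMetadataItem's time & duration are honored only when it is written as timed metadata through an AVAssetWriterInputMetadataAdaptor (iOS 8+), where each AVTimedMetadataGroup carries its own timeRange. A hedged sketch in Swift; the identifier string "mdta/com.example.eventtag" is a made-up example, and 'writer' is assumed to be your existing AVAssetWriter:

import AVFoundation
import CoreMedia

// describe the one metadata identifier this input will carry
let spec: [String: Any] = [
    kCMMetadataFormatDescriptionMetadataSpecificationKey_Identifier as String:
        "mdta/com.example.eventtag",
    kCMMetadataFormatDescriptionMetadataSpecificationKey_DataType as String:
        kCMMetadataBaseDataType_UTF8 as String
]
var formatDesc: CMMetadataFormatDescription?
CMMetadataFormatDescriptionCreateWithMetadataSpecifications(
    allocator: kCFAllocatorDefault,
    metadataType: kCMMetadataFormatType_Boxed,
    metadataSpecifications: [spec] as CFArray,
    formatDescriptionOut: &formatDesc)

// add a metadata input to the asset writer
let metadataInput = AVAssetWriterInput(mediaType: .metadata,
                                       outputSettings: nil,
                                       sourceFormatHint: formatDesc)
let adaptor = AVAssetWriterInputMetadataAdaptor(assetWriterInput: metadataInput)
writer.add(metadataInput)

// while writing, append one group per (time, duration) pair
let item = AVMutableMetadataItem()
item.identifier = AVMetadataIdentifier(rawValue: "mdta/com.example.eventtag")
item.dataType = kCMMetadataBaseDataType_UTF8 as String
item.value = "One two three" as NSString

let range = CMTimeRange(start: CMTimeMakeWithSeconds(0, preferredTimescale: 600),
                        duration: CMTimeMakeWithSeconds(1, preferredTimescale: 600))
adaptor.append(AVTimedMetadataGroup(items: [item], timeRange: range))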

iOS Audio Units: When is use of an AUGraph necessary?

I'm totally new to iOS programming (I'm more of an Android guy..) and have to build an application dealing with audio DSP. (I know it's not the easiest way to approach iOS dev ;) )
The app needs to be able to accept input both from:
1- the built-in microphone
2- the iPod library
Then filters may be applied to the input sound, and the result is to be output to:
1- the speaker
2- a recording file
My question is the following: is an AUGraph necessary in order, for example, to apply multiple filters to the input, or can these different effects be applied by processing the samples with different render callbacks?
If I go with an AUGraph, do I need: 1 Audio Unit for each input, 1 Audio Unit for the output, and 1 Audio Unit for each effect/filter?
And finally, if I don't, can I have only 1 Audio Unit and reconfigure it in order to select the source/destination?
Many thanks for your answers! I'm getting lost with this stuff...
You may indeed use render callbacks if you wish to, but the built-in Audio Units are great (and there are things coming that I can't talk about here under NDA etc. - I've said too much; if you have access to the iOS 5 SDK, I recommend you have a look).
You can implement the behavior you want without using an AUGraph; however, it is recommended that you do use one, as it takes care of a lot of things under the hood and saves you time and effort.
Using AUGraph
From the Audio Unit Hosting Guide (iOS Developer Library):
The AUGraph type adds thread safety to the audio unit story: It enables you to reconfigure a processing chain on the fly. For example, you could safely insert an equalizer, or even swap in a different render callback function for a mixer input, while audio is playing. In fact, the AUGraph type provides the only API in iOS for performing this sort of dynamic reconfiguration in an audio app.
Choosing a Design Pattern (iOS Developer Library) goes into some detail on how you would choose to implement your Audio Unit environment - from setting up the audio session and graph, to configuring/adding units, to writing callbacks.
As for which Audio Units you would want in the graph, in addition to what you already stated, you will want a MultiChannel Mixer Unit (see Using Specific Audio Units, iOS Developer Library) to mix your two audio inputs, and then hook the mixer up to the Output unit.
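As a rough sketch of that graph setup (in Swift for brevity; error handling omitted, and the bus numbers assume a single mixer output feeding the speaker side of the Remote I/O unit):

import AudioToolbox

var graph: AUGraph?
NewAUGraph(&graph)

// describe a multichannel mixer and a Remote I/O unit
var mixerDesc = AudioComponentDescription(
    componentType: kAudioUnitType_Mixer,
    componentSubType: kAudioUnitSubType_MultiChannelMixer,
    componentManufacturer: kAudioUnitManufacturer_Apple,
    componentFlags: 0, componentFlagsMask: 0)
var ioDesc = AudioComponentDescription(
    componentType: kAudioUnitType_Output,
    componentSubType: kAudioUnitSubType_RemoteIO,
    componentManufacturer: kAudioUnitManufacturer_Apple,
    componentFlags: 0, componentFlagsMask: 0)

var mixerNode = AUNode()
var ioNode = AUNode()
AUGraphAddNode(graph!, &mixerDesc, &mixerNode)
AUGraphAddNode(graph!, &ioDesc, &ioNode)
AUGraphOpen(graph!)

// mixer output bus 0 -> I/O unit input element 0 (the speaker side)
AUGraphConnectNodeInput(graph!, mixerNode, 0, ioNode, 0)
AUGraphInitialize(graph!)
AUGraphStart(graph!)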
Direct Connection
Alternatively, if you want to do it directly without using an AUGraph, the following code is a sample showing how to hook audio units together yourself (from Constructing Audio Unit Apps, iOS Developer Library):
You can, alternatively, establish and break connections between audio units directly by using the audio unit property mechanism. To do so, use the AudioUnitSetProperty function along with the kAudioUnitProperty_MakeConnection property, as shown in Listing 2-6. This approach requires that you define an AudioUnitConnection structure for each connection to serve as its property value.
/* Listing 2-6 */
AudioUnitElement mixerUnitOutputBus  = 0;
AudioUnitElement ioUnitOutputElement = 0;

AudioUnitConnection mixerOutToIoUnitIn;
mixerOutToIoUnitIn.sourceAudioUnit    = mixerUnitInstance;
mixerOutToIoUnitIn.sourceOutputNumber = mixerUnitOutputBus;
mixerOutToIoUnitIn.destInputNumber    = ioUnitOutputElement;

AudioUnitSetProperty (
    ioUnitInstance,                     // connection destination
    kAudioUnitProperty_MakeConnection,  // property key
    kAudioUnitScope_Input,              // destination scope
    ioUnitOutputElement,                // destination element
    &mixerOutToIoUnitIn,                // connection definition
    sizeof (mixerOutToIoUnitIn)
);
