In an AKSequencer track I add MIDI notes; the first is positioned at 0:
trackOne?.add(noteNumber: MIDINoteNumber(64), velocity: 100, position: AKDuration(beats: 0.0), duration: AKDuration(beats: 0.5))
The note at position 0 never plays in a single run-through of the sequence, but, weirdly, it does play when the sequence is looped.
I have the track's MIDI output going into an AKCallbackInstrument, and on the initial play it does not register the noteOn byte; it only seems to receive the noteOff byte.
Because the notes after the first one played, I tried setting the position to 0.1, and that actually worked. Maybe there is something I need to call or activate just before starting the sequence...
Has anyone ever seen anything like this before, and if so, how did you solve it? Thanks.
I stumbled across an answer to this.
I had two tracks: the first had its MIDI output assigned to the MIDI input of an AKCallbackInstrument; the problem was that the second track wasn't assigned an output.
Either removing the unassigned track or setting its MIDI output to the input of a callback instrument fixed it.
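For reference, a minimal sketch of that fix against the AudioKit 4-era AKSequencer API (the names sequencer and callback are assumptions, and error handling is omitted):

```swift
import AudioKit

// Sketch only: route EVERY track somewhere. In the situation above,
// a track with no MIDI output appeared to swallow the note at position 0.
let sequencer = AKSequencer()
let callback = AKCallbackInstrument()

let trackOne = sequencer.newTrack()
let trackTwo = sequencer.newTrack()

trackOne?.setMIDIOutput(callback.midiIn)
trackTwo?.setMIDIOutput(callback.midiIn)   // previously left unassigned

trackOne?.add(noteNumber: MIDINoteNumber(64),
              velocity: 100,
              position: AKDuration(beats: 0.0),
              duration: AKDuration(beats: 0.5))
sequencer.play()
```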
I am trying to play MIDI Notes in iOS using AudioGraph and AudioUnit. I used kAudioUnitSubType_MIDISynth to successfully create MIDISynth Unit, loaded sound font file into the unit, and used NOTE ON (0x90|0) message to start a note.
osStatus = MusicDeviceMIDIEvent(midiSynthUnit,
                                0x90|0, // Note-on, channel 0
                                60,     // Pitch
                                100,    // Velocity
                                0);     // Offset sample frame
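For completeness (this is standard Core Audio usage, not part of the question), the matching note-off is the same call with status 0x80; assuming the question's midiSynthUnit, the Swift version would look roughly like:

```swift
import AudioToolbox

// Sketch: stop the note started above. 0x80 | 0 is note-off on
// MIDI channel 0; a note-on with velocity 0 also works.
let osStatus = MusicDeviceMIDIEvent(midiSynthUnit,
                                    0x80 | 0,
                                    60,  // Pitch: must match the note-on
                                    0,   // Release velocity
                                    0)   // Offset sample frame
```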
However, I would like to pause the music player at some point. When the player resumes after the pause, the note-on time may already have passed.
The image below is an example: the vertical line is where playback has to resume; a note D3 has passed its start time but has not yet ended:
How can I play the MIDI note from the middle of its duration?
Thanks
We are using StreamingKit (https://github.com/tumtumtum/StreamingKit) to play from a list of streaming m4a audio sources that the user can move back and forth between freely.
We remember the position in each stream, and we perform a seek when the item begins playing (in the delegate method didStartPlayingQueueItemId), to return to a remembered spot in the audio for that item.
Immediately after the seek, the audio itself moves to the correct offset, but the reported time is too large, often larger than the length of the track.
I found that at line 1547 of STKAudioPlayer.m, delta is sometimes negative, which leads to the player grossly overreporting the track's progress after a seek.
I'm not sure how it gets the incorrect value, but for our purposes, wrapping those lines in an if (delta > 0) { } clause corrects the issue.
It seems to happen particularly when the queued items have recently changed and playback is buffering.
Anyone know what's happening here, and whether it represents a bug in seeking in StreamingKit, a misunderstanding on our part of how to use it, or both/neither?
I just ran into the same issue and fixed it using:
https://github.com/tumtumtum/StreamingKit/issues/219
In STKAudioPlayer.m, look for these lines:
OSSpinLockLock(&currentEntry->spinLock);
currentEntry->seekTime -= delta;
OSSpinLockUnlock(&currentEntry->spinLock);
and enclose them in an if statement that checks whether delta > 0:
if (delta > 0) {
    OSSpinLockLock(&currentEntry->spinLock);
    currentEntry->seekTime -= delta;
    OSSpinLockUnlock(&currentEntry->spinLock);
}
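To see why a negative delta overreports progress, the arithmetic can be reduced to plain Swift (hypothetical names; StreamingKit's actual fields and progress formula differ):

```swift
// Reported progress is roughly (playedTime - seekTime), so subtracting
// a NEGATIVE delta from seekTime inflates progress, sometimes past the
// track length. The fix from issue #219 only applies positive corrections.
func adjustedSeekTime(_ seekTime: Double, delta: Double) -> Double {
    return delta > 0 ? seekTime - delta : seekTime
}

adjustedSeekTime(10.0, delta: -3.0)  // unchanged: 10.0
adjustedSeekTime(10.0, delta: 2.0)   // corrected: 8.0
```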
I have an app which needs to play audio in the background...
Is this possible using Swift and SpriteKit with SKActions, or is it possible another way?
A nudge in the right direction would be very helpful.
SKAction is really easy to use with sounds, but sometimes you might want to do more.
In that case, you would want to use AVAudioPlayer instead.
In order not to write your own "player", I suggest using an existing one. Here is one I've already used (SKTAudio): https://github.com/raywenderlich/SKTUtils/blob/master/SKTUtils/SKTAudio.swift
Here is how to use it:
// For background audio (playing continuously)
SKTAudio.sharedInstance().playBackgroundMusic("yourBackgroundMusic.mp3") // Start the music
SKTAudio.sharedInstance().pauseBackgroundMusic() // Pause the music
SKTAudio.sharedInstance().resumeBackgroundMusic() // Resume the music
// For short sounds
SKTAudio.sharedInstance().playSoundEffect("sound.wav") // Play the sound once
As you can see, you'll be able both to play short sounds (as you might already have done with SKAction) and to play background music in a loop, as you're looking for.
After trying out a few things I stumbled upon a reasonable solution. For playing music in a loop you can set up an SKAudioNode with autoplayLooped = true, then run a play action when the scene loads.
Now, if you want a sound effect you can use an SKAudioNode for that as well. I found that the sound effect started repeating, so to counteract that I created a play-stop sequence: every time the effect was triggered it would run this sequence and therefore only sound once, as intended. The sound effect was cut off, however, so I inserted a dormant colour "shade" action of about one second into the sequence, making it play-shade-stop. The shade action gives the effect a one-second window to play fully before the stop fires. Depending on the sound effect, you can lengthen or shorten the shade action to compensate.
Well, here is the code. It's pretty straightforward and you don't need to create a whole AVPlayer class for it.
func playerhitfx() {
    let fx: SKAudioNode = SKAudioNode(fileNamed: "playerfx")
    let play: SKAction = SKAction.play()
    let stop: SKAction = SKAction.stop()
    // The "shade" colorize action is used purely as a one-second spacer.
    let shade: SKAction = SKAction.colorize(with: UIColor.clear, colorBlendFactor: 1, duration: 1)
    let volume: SKAction = SKAction.changeVolume(to: 3, duration: 0)
    let seq: SKAction = SKAction.sequence([play, shade, stop])
    fx.run(seq)
    fx.run(volume)
    self.addChild(fx)
}
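If the colorize trick feels fragile, SKAction.wait(forDuration:) gives the same one-second gap without touching the node's colour. This is an alternative sketch, not the answer's code, and assumes it runs inside an SKScene:

```swift
import SpriteKit

func playerHitFxWithWait(in scene: SKScene) {
    let fx = SKAudioNode(fileNamed: "playerfx")
    fx.autoplayLooped = false   // play once, under action control
    scene.addChild(fx)
    // wait(forDuration:) is the "dormant" step; tune 1.0 to the clip length.
    let seq = SKAction.sequence([.play(), .wait(forDuration: 1.0), .stop()])
    fx.run(seq)
}
```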
Furthermore, with all audio as audio nodes you can adjust the volume properties to create a good mix between the background music and the sound effects, and to bring certain effects out over others. Hope this helps.
By the way, bear in mind that you can't cast an SKAudioNode to an AVAudioNode.
I have a MIDI file that loops fine as long as it loops the entire track. The problem is that I'd like to loop from the beginning for a specified length - say, 2 beats out of four. But I want to loop from the beginning, not from the end as described in Apple's documentation (re: MusicTrackLoopInfo): "The point in a music track, measured in beats from the end of the music track, at which to begin playback during looped playback"
Any ideas on how to solve this?
Not sure if this is the answer, but maybe set the track length instead of the loop point: set the track length to your desired loop length and loop indefinitely.
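A sketch of that idea with the raw AudioToolbox API (assuming a track already obtained from the sequence, e.g. via MusicSequenceGetIndTrack; error handling omitted):

```swift
import AudioToolbox

// Sketch: truncate the track to the loop length, then loop it forever.
func loopFirstBeats(of track: MusicTrack, beats: MusicTimeStamp) {
    // Cut the track length down to the desired loop region...
    var length = beats
    MusicTrackSetProperty(track,
                          kSequenceTrackProperty_TrackLength,
                          &length,
                          UInt32(MemoryLayout<MusicTimeStamp>.size))
    // ...then loop that whole region indefinitely (numberOfLoops = 0
    // means "loop forever" for MusicTrackLoopInfo).
    var loopInfo = MusicTrackLoopInfo(loopDuration: beats, numberOfLoops: 0)
    MusicTrackSetProperty(track,
                          kSequenceTrackProperty_LoopInfo,
                          &loopInfo,
                          UInt32(MemoryLayout<MusicTrackLoopInfo>.size))
}
```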
I am trying to synchronize several CABasicAnimations with AVAudioPlayer. The issue I have is that CABasicAnimation uses CACurrentMediaTime() as a reference point when scheduling animations while AVAudioPlayer uses deviceCurrentTime. Also for the animations, CFTimeInterval is used, while for sound it's NSTimeInterval (not sure if they're "toll free bridged" like other CF and NS types). I'm finding that the reference points are different as well.
Is there a way to ensure that the sounds and animations use the same reference point?
I don't know the "official" answer, but they are both double-precision floating point numbers that measure a number of seconds from some reference time.
From the docs, it sounds like deviceCurrentTime is linked to the current audio session:
The time value, in seconds, of the audio output device. (read-only)

@property(readonly) NSTimeInterval deviceCurrentTime

Discussion
The value of this property increases monotonically while an audio player is playing or paused.
If more than one audio player is connected to the audio output device, device time continues incrementing as long as at least one of the players is playing or paused.
If the audio output device has no connected audio players that are either playing or paused, device time reverts to 0.
You should be able to start an audio output session, call CACurrentMediaTime() then get the deviceCurrentTime of your audio session in 2 sequential statements, then calculate an offset constant to convert between them. That offset would be accurate within a few nanoseconds.
The offset would only be valid while the audio output session is active. You'd have to recalculate it each time you remove all audio players from the audio session.
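A sketch of that offset calculation (assumes an active audio session, a configured AVAudioPlayer named player, and some layer to animate; these names are not from the thread):

```swift
import AVFoundation
import QuartzCore

// Capture both clocks back to back; the gap between the two calls is tiny.
let hostNow = CACurrentMediaTime()        // Core Animation clock
let deviceNow = player.deviceCurrentTime  // audio output device clock
let offset = deviceNow - hostNow          // valid while the session is active

// Schedule the sound and the animation at the "same" moment, one second out.
let startDevice = deviceNow + 1.0
player.play(atTime: startDevice)

let anim = CABasicAnimation(keyPath: "opacity")
anim.fromValue = 1.0
anim.toValue = 0.0
anim.duration = 0.5
anim.beginTime = startDevice - offset     // convert back to media time
layer.add(anim, forKey: "fade")
```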
I think the official answer just changed, though currently under NDA.
See "What's New in Camera Capture", in particular the last few slides about the CMSync* functions.
https://developer.apple.com/videos/wwdc/2012/?id=520