I have a MIDI file that loops fine as long as it loops the entire track. The problem is that I'd like to loop from the beginning to a specified length, say 2 beats out of four. But I want to loop from the beginning, not from the end as described in Apple's documentation for MusicTrackLoopInfo: "The point in a music track, measured in beats from the end of the music track, at which to begin playback during looped playback."
Any ideas on how to solve this?
Not sure if this is the answer, but maybe set the track length instead of the loop point. So set the track length to your desired loop length and loop indefinitely.
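If it helps, here is a rough sketch of that idea using the AudioToolbox C API (track is the MusicTrack loaded from your MIDI file, and the 2-beat length is just an example):

#import <AudioToolbox/AudioToolbox.h>

MusicTimeStamp loopLengthInBeats = 2.0;

// 1. Truncate the track's playable length to just the part you want to loop.
MusicTrackSetProperty(track,
                      kSequenceTrackProperty_TrackLength,
                      &loopLengthInBeats,
                      sizeof(loopLengthInBeats));

// 2. Loop the (now shorter) track indefinitely. loopDuration is still measured
//    back from the end of the track, but the end is now your 2-beat point.
MusicTrackLoopInfo loopInfo;
loopInfo.loopDuration  = loopLengthInBeats;
loopInfo.numberOfLoops = 0;                 // 0 means loop forever
MusicTrackSetProperty(track,
                      kSequenceTrackProperty_LoopInfo,
                      &loopInfo,
                      sizeof(loopInfo));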
I was wondering if it is possible to simulate the scratching of a record player with AudioKit.
So basically I'd have an input value (e.g. the position of a finger on the screen) drive the playback position of an audio file. I am not sure if AudioKit is even capable of doing something like that. If not, how would I program something like this for iOS? Any other frameworks/libraries? Will I need to write C++?
Thank you,
J
Use AKPhaseLockedVocoder. It's pretty cool, and it does exactly that.
I am using The Amazing Audio Engine 2 library for my sequencer app, and I want to implement crossfaded loop audio.
Here is the explanation:
When the user presses any key on the sequencer piano, it plays an audio file, and that audio file continues to play in a loop until the user releases the key. But that loop should crossfade into itself.
I am using AEAudioFilePlayerModule for looping, but I'm not sure how to crossfade the audio file with this class.
Explanation of crossfade:
Start/End: This setting allows me to choose where in the audio file I want the app to loop constantly, so that if the user taps and holds a note down for a long time, the audio sounds continuously until the user releases their finger.
XFade: This setting (crossfade) allows me to choose how to fade between the end and the start of the audio loop, so that the sound loops smoothly. Here, 9999 is set. So at about 5k samples before the 200k end point, the audio for this note begins to fade away, and at the same time the audio loop starting at 50k samples fades in, over a duration of about 5k samples (half the XFade amount).
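To make it concrete, this is roughly the gain math I have in mind (plain C, not AEAudioFilePlayerModule API; loopStart, loopEnd, xfadeLen and pos are placeholders, all in samples):

#include <math.h>

// Equal-power crossfade gains for a loop region ending at loopEnd, with a
// crossfade window of xfadeLen samples before the end. pos is the current
// read position of the outgoing audio.
static void crossfadeGains(long pos, long loopEnd, long xfadeLen,
                           float *outgoingGain, float *incomingGain)
{
    long fadeStart = loopEnd - xfadeLen;
    if (pos < fadeStart) {              // not in the crossfade window yet
        *outgoingGain = 1.0f;
        *incomingGain = 0.0f;
        return;
    }
    float t = (float)(pos - fadeStart) / (float)xfadeLen;   // 0 -> 1 across the window
    if (t > 1.0f) t = 1.0f;
    *outgoingGain = cosf(t * (float)M_PI_2);   // audio near loopEnd fades out
    *incomingGain = sinf(t * (float)M_PI_2);   // audio starting at loopStart fades in
}

// Mixing idea: output = outgoingGain * sample[pos]
//                     + incomingGain * sample[loopStart + (pos - fadeStart)];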
Please help.
Thank you.
We are using StreamingKit (https://github.com/tumtumtum/StreamingKit) to play from a list of streaming m4a audio sources that the user can move back and forth between freely.
We remember the position in each stream, and we perform a seek when the item begins playing (in the delegate method didStartPlayingQueueItemId), to return to a remembered spot in the audio for that item.
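In case it helps, this is roughly how we perform the seek (simplified; rememberedPositions is a stand-in for our real bookkeeping):

// STKAudioPlayerDelegate callback: seek back to the saved position (in seconds)
// as soon as the item starts playing.
- (void)audioPlayer:(STKAudioPlayer *)audioPlayer didStartPlayingQueueItemId:(NSObject *)queueItemId
{
    NSNumber *savedPosition = self.rememberedPositions[queueItemId];
    if (savedPosition != nil)
    {
        [audioPlayer seekToTime:savedPosition.doubleValue];
    }
}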
Immediately after the seek, the audio itself moves to the correct offset, but the reported time is too large, often larger than the length of the track.
I found that at line 1547 of STKAudioPlayer.m, delta is sometimes negative, which leads to the player grossly overreporting the track's progress after a seek.
I'm not sure how it ends up with the incorrect value, but for our purposes, wrapping those lines in an if (delta > 0) { ... } block corrects the issue.
It seems to particularly happen when the queued items have recently been changed, and the playback is buffering.
Anyone know what's happening here, and whether it represents a bug in seeking in StreamingKit, a misunderstanding on our part of how to use it, or both/neither?
I just ran into the same issue and fixed it using:
https://github.com/tumtumtum/StreamingKit/issues/219
In STKAudioPlayer.m, look for these lines:

OSSpinLockLock(&currentEntry->spinLock);
currentEntry->seekTime -= delta;
OSSpinLockUnlock(&currentEntry->spinLock);

and enclose them in an if statement that only applies the adjustment when delta > 0:

if (delta > 0)
{
    OSSpinLockLock(&currentEntry->spinLock);
    currentEntry->seekTime -= delta;
    OSSpinLockUnlock(&currentEntry->spinLock);
}
The app I'm working on loops a video a specified number of times by adding the same AVAssetTrack (created from the original video URL) multiple times to the same AVComposition at successive intervals. The app similarly inserts a new video clip into an existing composition by 'removing' the time range from the composition's AVMutableCompositionTrack (for AVMediaTypeVideo) and inserting the new clip's AVAssetTrack into the previously removed time range.
However, occasionally and somewhat rarely, after inserting a new clip as described above into a time range within a repeat of the original looping video, the result has blank frames that appear only at the video loop's transition points (within the composition), and only during playback - the video exports correctly, without gaps.
This leads me to believe the issue lies with the AVPlayer or AVPlayerItem and how the frames are buffered for playback, rather than with how I'm inserting/looping the clips or choosing the CMTime stamps to do so. The app is doing a bunch of things at once (loop visualization in the UI via an NSTimer, audio playback via The Amazing Audio Engine) - could my issue be a result of competition for resources?
One more note: I understand that discrepancies between audio and video in an asset can cause glitches (i.e. the underlying audio is a little bit longer than the video length), but as I'm not adding an audioEncodingTarget to the GPUImageWriter that I'm using to record and save the video, the videos have no audio components.
Any thoughts or directions you can point me in would be greatly appreciated! Many thanks in advance.
Update: the flashes coincide with the "Had to drop a video frame" error logged by the GPUImage library, which according to its creator has to do with the phone not being able to process video fast enough. Could multi-threading solve this?
Update 2: So the flashes actually don't always correspond to the "Had to drop a video frame" error. I have also disabled all of the AVRecorder/Amazing Audio Engine code and the issue still persists, so it isn't a problem of resource competition between those engines. I have been logging properties of the AVPlayerItem and noticed that isPlaybackLikelyToKeepUp is always NO and isPlaybackBufferFull is always YES.
So the problem is solved - it's sort of frustrating how brutally simple the fix is. I just used a time range one frame shorter when adding the videos to the composition, rather than the AVAssetTrack's full time range. No more flashes. Hopefully the users won't miss that 1/30th of a second :)
CMTime shortened_duration = CMTimeSubtract(originalVideoAssetTrack.timeRange.duration, CMTimeMake(1, 30)); // one frame shorter, at 30 fps
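For reference, here is a sketch of how that shortened duration is used when inserting each clip into the composition (compositionVideoTrack and insertTime are placeholders for whatever your composition-building code already has):

// Insert the asset's video track using a range one frame shorter than its own
// timeRange, so adjacent loop repeats no longer produce blank frames on playback.
CMTimeRange clipRange = CMTimeRangeMake(kCMTimeZero, shortened_duration);

NSError *error = nil;
[compositionVideoTrack insertTimeRange:clipRange
                               ofTrack:originalVideoAssetTrack
                                atTime:insertTime
                                 error:&error];
if (error != nil) {
    NSLog(@"Failed to insert clip: %@", error);
}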
I am trying to synchronize several CABasicAnimations with AVAudioPlayer. The issue I have is that CABasicAnimation uses CACurrentMediaTime() as a reference point when scheduling animations while AVAudioPlayer uses deviceCurrentTime. Also for the animations, CFTimeInterval is used, while for sound it's NSTimeInterval (not sure if they're "toll free bridged" like other CF and NS types). I'm finding that the reference points are different as well.
Is there a way to ensure that the sounds and animations use the same reference point?
I don't know the "official" answer, but they are both double precision floating point numbers that measure a number of seconds from some reference time.
From the docs, it sounds like deviceCurrentTime is linked to the current audio session:
The time value, in seconds, of the audio output device. (read-only)

@property(readonly) NSTimeInterval deviceCurrentTime

Discussion
The value of this property increases monotonically while an audio player is playing or paused.
If more than one audio player is connected to the audio output device, device time continues incrementing as long as at least one of the players is playing or paused.
If the audio output device has no connected audio players that are either playing or paused, device time reverts to 0.
You should be able to start an audio session, call CACurrentMediaTime(), then read your audio player's deviceCurrentTime in two sequential statements, and calculate an offset constant to convert between them. That offset would be accurate to within a few nanoseconds.
The offset would only be valid while the audio output session is active. You'd have to recalculate it each time you remove all audio players from the audio session.
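A rough sketch of that approach (player is an AVAudioPlayer that has already had prepareToPlay called, and someLayer is whatever layer you're animating; both are placeholders):

#import <AVFoundation/AVFoundation.h>
#import <QuartzCore/QuartzCore.h>

// Sample both clocks back to back and derive a conversion offset.
CFTimeInterval mediaNow  = CACurrentMediaTime();
NSTimeInterval deviceNow = player.deviceCurrentTime;
NSTimeInterval offset    = deviceNow - mediaNow;   // deviceTime ~= mediaTime + offset

// Schedule the sound and the animation for the same moment, e.g. 0.5 s from now.
NSTimeInterval delay = 0.5;
[player playAtTime:deviceNow + delay];             // AVAudioPlayer's clock

CABasicAnimation *fade = [CABasicAnimation animationWithKeyPath:@"opacity"];
fade.fromValue = @0.0;
fade.toValue   = @1.0;
fade.duration  = 1.0;
fade.beginTime = mediaNow + delay;                 // Core Animation's clock
fade.fillMode  = kCAFillModeBackwards;             // hold the start value until beginTime
[someLayer addAnimation:fade forKey:@"fadeIn"];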
I think the official answer just changed, though currently under NDA.
See "What's New in Camera Capture", in particular the last few slides about the CMSync* functions.
https://developer.apple.com/videos/wwdc/2012/?id=520