I am using The Amazing Audio Engine 2 library for my sequencer app, and I want to implement crossfaded loop playback.
Here is the explanation:
When the user presses any key on the sequencer piano, it plays an audio file, and that audio file continues to play in a loop until the user releases the key. That loop should crossfade into itself.
I am using AEAudioFilePlayerModule for looping but not sure how to crossfade audio file with this class.
Explanation of the crossfade:
Start/End: This setting lets me choose where in the audio file the app should loop, so that if the user taps and holds a note for a long time, the audio sounds continuously until the finger is released.
XFade: This setting (crossfade) lets me choose how the end and the start of the audio loop fade into each other, so that the sound loops smoothly. Here it is set to 9999: at about 5k samples before the 200k end point, the audio for this note begins to fade away, and at the same time the audio starting at the 50k-sample loop point fades in, over a duration of about 5k samples (half the XFade amount).
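To illustrate the effect I am after (AEAudioFilePlayerModule does not seem to expose a crossfade setting directly), here is a rough sketch of how the loop region could be pre-baked on raw samples. All names here are my own for illustration, not TAAE2 API:

import Foundation

// A minimal sketch of pre-baking a crossfaded loop, assuming the file has
// already been decoded into a plain Float sample array. loopStart, loopEnd
// and xfade mirror the Start/End/XFade settings described above.
func bakeCrossfadedLoop(samples: [Float],
                        loopStart: Int,
                        loopEnd: Int,
                        xfade: Int) -> [Float] {
    let fadeLength = xfade / 2                     // per the description: ~1/2 the XFade amount
    var loop = Array(samples[loopStart..<loopEnd])

    // Blend the tail of the loop with the material at the loop start:
    // the tail fades out while the start fades in, so jumping back to
    // index fadeLength is seamless on the next pass.
    for k in 0..<fadeLength {
        let fadeIn = Float(k) / Float(fadeLength)  // linear ramps; equal-power also works
        let fadeOut = 1 - fadeIn
        let tail = loop.count - fadeLength + k
        loop[tail] = loop[tail] * fadeOut + samples[loopStart + k] * fadeIn
    }
    // Play `loop` once from index 0, then keep looping the region
    // [fadeLength, loop.count), e.g. from a render callback.
    return loop
}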
Please help.
Thank you.
I would like to schedule a series of local audio tracks to play in sequence while my app is backgrounded or the device is locked, and after a delay. Furthermore, each track should play only a fixed-duration sample. For example:
User presses a button in my app.
User locks device.
After x minutes, tracks A, B, and C each play for y seconds in sequence.
How might I accomplish this?
My current best hope is to schedule these in an AVQueuePlayer, since if I set up the queue correctly the background audio should then 'just work'. But I don't see any way to set a duration for each AVPlayerItem. I also don't know how to set an initial delay, though I would consider looping a silent audio clip if that is the only obstacle.
If you use AudioKit, something like this will be easy to accomplish. Have a look at AKClipPlayer or AKPlayer; both classes support scheduling and start/end times.
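If you would rather stay on plain AVFoundation, here is a rough sketch using AVQueuePlayer (the class the question calls "AVPlayerQueue"). AVPlayerItem's forwardPlaybackEndTime can cut each item to a fixed duration, and the initial delay is approximated with a silent clip, as the question already suggests. The file names are placeholders, and background playback still needs the audio background mode plus an active playback AVAudioSession:

import AVFoundation

// Sketch: queue a silent "delay" item followed by tracks A, B and C,
// each cut to a fixed duration via forwardPlaybackEndTime.
func makeQueuePlayer(delay: TimeInterval, clipDuration: TimeInterval) -> AVQueuePlayer {
    // The silent file must be at least `delay` seconds long.
    let silenceURL = Bundle.main.url(forResource: "silence", withExtension: "m4a")!
    let silentItem = AVPlayerItem(url: silenceURL)
    silentItem.forwardPlaybackEndTime = CMTime(seconds: delay, preferredTimescale: 600)

    var items = [silentItem]
    for name in ["A", "B", "C"] {
        let url = Bundle.main.url(forResource: name, withExtension: "m4a")!
        let item = AVPlayerItem(url: url)
        // Stop this item after clipDuration seconds; the queue then
        // advances to the next item automatically.
        item.forwardPlaybackEndTime = CMTime(seconds: clipDuration, preferredTimescale: 600)
        items.append(item)
    }
    return AVQueuePlayer(items: items)
}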
I need to get information about the total time a video has been played using MPMoviePlayer. How do I handle the case where the user watches a 3-minute video up to 2:00, seeks back to 1:30, and then closes the video? The requirement is to accurately know the fraction of the video the user has viewed.
From the Apple docs on MPMoviePlayerController:
Movie Player Notifications
A movie player generates notifications to keep your app informed about the state of movie playback. In addition to being notified when playback finishes, your app can be notified in the following situations:
When the movie player begins playing, is paused, or begins seeking forward or backward
Using these notifications, you could set your own timers to know the total amount of time that a video has been playing. Specifically, you probably want the MPMoviePlayerPlaybackStateDidChangeNotification.
Knowing the total percentage of the video watched could be a little trickier, but I think it is still possible. You would need to use the MPMediaPlayback protocol (which MPMoviePlayerController adopts) in conjunction with the MPMoviePlayerPlaybackStateDidChangeNotification mentioned above.
One idea I had (though probably not the best or most efficient approach) would be to create an array of BOOL values, one for each second of the video. When a video plays, grab the currentPlaybackTime on the player and mark off each second as it is played. If the playback state changes (pause, skip forward, etc.), stop marking until playback resumes, then continue from the new index based on the new currentPlaybackTime. When the user is finished, calculate the percentage of indexes that have been marked.
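A minimal sketch of that bookkeeping, with illustrative names (the tick would be driven by a one-second Timer that the playback-state notification handler starts and stops):

import Foundation

// One flag per second of video; seconds get marked off as they play.
final class WatchTracker {
    private var watched: [Bool]

    init(duration: TimeInterval) {
        watched = Array(repeating: false, count: Int(duration.rounded(.up)))
    }

    // Call roughly once a second while the player reports it is playing,
    // passing the player's currentPlaybackTime.
    func tick(currentPlaybackTime: TimeInterval) {
        let index = Int(currentPlaybackTime)
        if index >= 0 && index < watched.count {
            watched[index] = true
        }
    }

    // Fraction of distinct seconds the user has actually seen.
    var fractionWatched: Double {
        Double(watched.filter { $0 }.count) / Double(watched.count)
    }
}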
MPMoviePlayerController
MPMediaPlayback Protocol
Let me know if this works for you!!
I am interested in recording media using an AVCaptureSession in iOS while playing media back using an AVPlayer (specifically, I am playing back audio and recording video, but I'm not sure it matters).
The problem is, when I play the resulting media back together later, they are out of sync. Is it possible to synchronize them, either by ensuring that playback and recording start simultaneously, or by discovering what the offset is between them? I probably need the sync to be on the order of 10 ms. It is unreasonable to assume that I can always capture audio (since the user may use headphones), so syncing via analysis of original and recorded audio is not an option.
This question suggests that it's possible to end playback and recording simultaneously and determine the initial offset from the resulting lengths that way, but I'm unclear how to get them to end simultaneously. I have two cases: 1) the audio playback runs out, and 2), the user hits the "stop recording" button.
This question suggests priming and then applying a fixed, but possibly device-dependent delay, which is obviously a hack, but if it's good enough for audio it's obviously worth considering for video.
Is there another media layer I can use to perform the required synchronization?
Related: this question is unanswered.
If you are specifically using AVPlayer to play back audio, I would suggest using Audio Queue Services instead. It is seamless and fast, since it reads buffer by buffer, and play/pause is more responsive than with AVPlayer.
There is also the possibility that you are missing an initial [avPlayer prepareToPlay] call, which might be causing extra overhead before the audio starts playing in sync.
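Another possibility, added here only as a sketch rather than a tested solution: AVPlayer can be started at a specific host-clock time with setRate(_:time:atHostTime:) (iOS 10+, and it requires automaticallyWaitsToMinimizeStalling = false), and AVCaptureSession sample buffers carry presentation timestamps on that same host clock, so the playback/recording offset can be computed instead of guessed:

import AVFoundation

// Start playback at a known host time and return that time, so it can be
// compared against CMSampleBufferGetPresentationTimeStamp() of captured
// sample buffers to align the two timelines.
func startPlaybackAnchored(player: AVPlayer) -> CMTime {
    player.automaticallyWaitsToMinimizeStalling = false  // required for setRate(_:time:atHostTime:)

    // Start half a second in the future so both sides have time to prepare.
    let start = CMTimeAdd(CMClockGetTime(CMClockGetHostTimeClock()),
                          CMTime(seconds: 0.5, preferredTimescale: 600))
    player.setRate(1.0, time: .invalid, atHostTime: start)
    return start
}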
Hope it helps you.
I have an AVMutableComposition with 2 audio tracks and one video track. I'm using the composition to string about 40 different video clips from .mov files, putting the video content of each clip in the video track of my composition and the audio in the audio track. The second audio track I use for music.
I also have a synchronized layer for titles graphics.
When I play this composition using an AVPlayer, the audio slowly gets out of sync. It takes about 4 minutes to start becoming noticeable. If I only string together a handful of longer clips the problem is not as apparent; it is when there are many shorter clips (~40 in my test) that it gets really bad.
Pausing and Playing doesn't re-sync the audio, however seeking does. In other words, if I let the video play to the end, towards the end the lip sync gets noticeably off even if I pause and play throughout, however, if I seek to a time towards the end the audio gets back in sync.
My hacky solution for now is to seek to currentTime + 1 frame every minute or so. This creates an unpleasant jump in the video, caused by the lag of the seek operation, so it is not a good solution.
Exporting with an ExportSession doesn't present this problem, audio remains in sync in the output movie.
I'm wondering if the new masterClock property in the AVPlayer is the answer to this, and if it is, how is it used?
I had the same issue and fixed it, among many other audio and video problems, by specifying times with explicit timescales in the following manner:
CMTime(seconds: my_seconds, preferredTimescale: CMTimeScale(600))
Before, my timescale was CMTimeScale(NSEC_PER_SEC). That caused jitter when composing clips at different frame rates, plus the audio drift Eddy mentions here.
In spite of looking like a magic number, 600 is a common multiple of 24, 30, 60 and 120. These are usual frame rates for different purposes. The common multiple avoids dragging around rounding problems when composing multiple clips.
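For context, here is a sketch of how that looks when stitching the clips together; the asset loading and error handling are simplified, but every time value stays in the shared 600 timescale so rounding errors do not accumulate across ~40 insertions:

import AVFoundation

// Build a composition with one video and one audio track, advancing a
// cursor expressed in the 600 timescale rather than NSEC_PER_SEC.
func buildComposition(from assets: [AVAsset]) throws -> AVMutableComposition {
    let composition = AVMutableComposition()
    let videoTrack = composition.addMutableTrack(withMediaType: .video,
                                                 preferredTrackID: kCMPersistentTrackID_Invalid)!
    let audioTrack = composition.addMutableTrack(withMediaType: .audio,
                                                 preferredTrackID: kCMPersistentTrackID_Invalid)!

    var cursor = CMTime(seconds: 0, preferredTimescale: 600)
    for asset in assets {
        let range = CMTimeRange(start: .zero, duration: asset.duration)
        if let v = asset.tracks(withMediaType: .video).first {
            try videoTrack.insertTimeRange(range, of: v, at: cursor)
        }
        if let a = asset.tracks(withMediaType: .audio).first {
            try audioTrack.insertTimeRange(range, of: a, at: cursor)
        }
        // Convert each clip's duration into the common timescale before
        // advancing, instead of accumulating nanosecond-scale times.
        cursor = CMTimeAdd(cursor, CMTimeConvertScale(asset.duration,
                                                      timescale: 600,
                                                      method: .default))
    }
    return composition
}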
I am creating an iPhone application that uses audio.
I want to play a beep sound that loops indefinitely.
I found an easy way to do that using the higher-level AVAudioPlayer with numberOfLoops set to -1, and it works fine.
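For reference, the working loop setup is just this (with "beep" standing in for my actual file):

import AVFoundation

let beepURL = Bundle.main.url(forResource: "beep", withExtension: "wav")!
let player = try! AVAudioPlayer(contentsOf: beepURL)  // try! only for brevity here
player.numberOfLoops = -1   // -1 means loop indefinitely
player.play()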
But now I want to play this audio and be able to change its rate/speed. It should work like the warning sound a car makes when approaching an obstacle: at the beginning the beep repeats at a low frequency, and this frequency accelerates until it becomes a continuous biiiiiiiiiiiip...
It seems this is not feasible with the high-level AVAudioPlayer, but even looking at AudioToolbox I found no solution.
Any help?
Take a look at Dave Dribin's A440 sample application, which plays a constant 440 Hz tone on the iPhone / iPad. It uses the lower-level Audio Queue Services, but it does what you're asking (short of the realtime tone adjustment, which would just require a tweak of the existing code).
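For anyone reading this on a newer SDK: the same idea can be sketched with AVAudioSourceNode (iOS 13+) instead of Audio Queue Services. This is only an illustrative sketch; frequency is read from the render thread without locking, which is acceptable for a demo but not production-grade:

import AVFoundation

// A continuously running sine-tone generator, analogous to A440.
// Change `frequency` while it runs to sweep the tone.
final class ToneGenerator {
    private let engine = AVAudioEngine()
    private var phase: Double = 0
    var frequency: Double = 440   // Hz, adjustable while playing

    func start() throws {
        let sampleRate = engine.outputNode.outputFormat(forBus: 0).sampleRate
        let format = AVAudioFormat(standardFormatWithSampleRate: sampleRate, channels: 1)!
        let source = AVAudioSourceNode(format: format) { [weak self] _, _, frameCount, audioBufferList in
            guard let self = self else { return noErr }
            let buffers = UnsafeMutableAudioBufferListPointer(audioBufferList)
            let out = buffers[0].mData!.assumingMemoryBound(to: Float.self)
            let increment = 2.0 * Double.pi * self.frequency / sampleRate
            for frame in 0..<Int(frameCount) {
                out[frame] = Float(sin(self.phase)) * 0.25   // keep the level comfortable
                self.phase += increment
                if self.phase > 2.0 * Double.pi { self.phase -= 2.0 * Double.pi }
            }
            return noErr
        }
        engine.attach(source)
        engine.connect(source, to: engine.mainMixerNode, format: format)
        try engine.start()
    }
}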