Get the total played duration of a video in MPMoviePlayer - iOS

I need to get the total time a video has been played using MPMoviePlayer. How do I handle the case where a user watches a 3-minute video up to the 2:00 mark, seeks backward to 1:30, and then closes the video? The requirement is to know accurately what fraction of the video the user viewed.

From the Apple docs on MPMoviePlayerController:
Movie Player Notifications
A movie player generates notifications to keep your app informed about the state of movie playback. In addition to being notified when playback finishes, your app can be notified in the following situations:
When the movie player begins playing, is paused, or begins seeking forward or backward
Using these notifications, you could set your own timers to know the total amount of time that a video has been playing. Specifically, you probably want the MPMoviePlayerPlaybackStateDidChangeNotification.
Knowing the total percentage of the video watched could be a little trickier but still possible I think. You would need to use the MPMediaPlayback protocol methods (which MPMoviePlayerController adopts) in conjunction with the PlaybackStateDidChangeNotification mentioned above.
One idea I had (though probably not the best or most efficient approach) would be to create an array of BOOL values, one for each second of the video. When a video plays, grab the currentPlaybackTime on the player and mark off each second as it is played. If the playback state changes (pause, skip forward, etc.), stop marking until playback resumes, then continue marking from the new index given by the new currentPlaybackTime. When the user is finished, calculate the percentage of indexes that have been marked.
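A rough sketch of that bookkeeping, using an NSMutableIndexSet in place of a raw BOOL array (the WatchTracker class and its names are hypothetical, not MediaPlayer API):

#import <MediaPlayer/MediaPlayer.h>

@interface WatchTracker : NSObject
@property (nonatomic, strong) MPMoviePlayerController *player;
@property (nonatomic, strong) NSMutableIndexSet *watchedSeconds;
@property (nonatomic, strong) NSTimer *timer;
@end

@implementation WatchTracker

- (void)startTracking {
    self.watchedSeconds = [NSMutableIndexSet indexSet];
    [[NSNotificationCenter defaultCenter]
        addObserver:self
           selector:@selector(playbackStateChanged:)
               name:MPMoviePlayerPlaybackStateDidChangeNotification
             object:self.player];
}

- (void)playbackStateChanged:(NSNotification *)note {
    [self.timer invalidate];
    self.timer = nil;
    if (self.player.playbackState == MPMoviePlaybackStatePlaying) {
        // Sample once per second while the movie is actually playing.
        self.timer = [NSTimer scheduledTimerWithTimeInterval:1.0
                                                      target:self
                                                    selector:@selector(tick:)
                                                    userInfo:nil
                                                     repeats:YES];
    }
}

- (void)tick:(NSTimer *)timer {
    // Mark the second we are currently inside; re-watching a second
    // hits the same index, so rewinds are not double-counted.
    [self.watchedSeconds addIndex:(NSUInteger)self.player.currentPlaybackTime];
}

- (double)fractionWatched {
    return self.player.duration > 0
        ? self.watchedSeconds.count / self.player.duration
        : 0.0;
}

@end

Read fractionWatched when you receive MPMoviePlayerPlaybackDidFinishNotification (or when the user dismisses the player); that covers the case in the question where the user watches to 2:00, rewinds to 1:30, and then closes the video.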
MPMoviePlayerController
MPMediaPlayback Protocol
Let me know if this works for you!!

Related

The Amazing Audio Engine 2 - Crossfade looping

I am using The Amazing Audio Engine 2 library for my sequencer app, and I want to implement crossfade-looped audio.
Here is an explanation:
When the user presses any key on the sequencer piano, it plays an audio file, and that audio file continues to play in a loop until the user releases the key. The loop should crossfade into itself.
I am using AEAudioFilePlayerModule for looping, but I am not sure how to crossfade the audio file with this class.
Explanation of the crossfade:
Start/End: This setting lets me choose where in the audio file the app should loop, so that if the user taps and holds a note for a long time, the audio sounds continuously until the user releases their finger.
XFade: This (crossfade) setting lets me choose how to fade between the end and the start of the audio loop, so that the sound loops smoothly. Here it is set to 9999. So at about 5k samples before the 200k end point, the audio for this note begins to fade away, and at the same time the audio at the 50k-sample loop start fades in over about 5k samples (half the XFade amount).
Please help.
Thank you.
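As far as I can tell, AEAudioFilePlayerModule has no built-in crossfade-loop setting, so one option is to bake the crossfade into the loop region yourself before handing the audio to the player; looping the processed region is then seamless on its own. A sketch of that mixing step in plain C over a mono float buffer (the sample numbers match the example above, and this assumes loopStart >= xfade so there is audio before the loop start to fade in):

#include <math.h>     // cosf, sinf, M_PI_2
#include <stddef.h>   // size_t

// Render the crossfade into the loop region once, offline.
static void PrerenderCrossfadeLoop(float *samples,
                                   size_t loopStart,   // e.g. 50000
                                   size_t loopEnd,     // e.g. 200000
                                   size_t xfade)       // e.g. 9999
{
    // Over the last `xfade` samples of the region, fade the tail out
    // while fading in the audio just before the loop start, so the
    // jump from loopEnd back to loopStart is continuous.
    for (size_t i = 0; i < xfade; i++) {
        float t = (float)i / (float)xfade;        // 0 -> 1 across the fade
        float fadeOut = cosf(t * (float)M_PI_2);  // equal-power curves
        float fadeIn  = sinf(t * (float)M_PI_2);
        size_t tail = loopEnd - xfade + i;
        size_t head = loopStart - xfade + i;
        samples[tail] = samples[tail] * fadeOut + samples[head] * fadeIn;
    }
}

Equal-power curves are used here instead of a linear ramp so the summed loudness stays roughly constant across the fade; after this step, simply looping the [loopStart, loopEnd) region plays without a click.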

Playing a sequence of audio clips in the background after a delay

I would like to schedule a series of local audio tracks to play in sequence while my app is backgrounded or the device is locked, and after a delay. Furthermore, each track should only play a fixed-duration sample. For example:
User presses a button in my app.
User locks device.
After x minutes, tracks A, B, and C each play for y seconds in sequence.
How might I accomplish this?
My current best hope is to schedule these in an AVQueuePlayer, since if I set up the queue correctly the background audio should then 'just work'. But I don't see any way to set a duration for each AVPlayerItem. I also don't know how to set an initial delay, though I would consider looping a silent audio clip if that is the only obstacle.
If you use AudioKit, something like this will be easy to accomplish. Have a look at AKClipPlayer or AKPlayer. Both classes support scheduling and start/end times.
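For what it's worth, the AVQueuePlayer route from the question can also be made to work: AVPlayerItem exposes forwardPlaybackEndTime, which ends an item early so the queue advances to the next one. A rough sketch, assuming local file URLs and a hypothetical clip length:

#import <AVFoundation/AVFoundation.h>

NSArray<NSURL *> *trackURLs = @[trackA, trackB, trackC];  // hypothetical local file URLs
NSTimeInterval clipSeconds = 10.0;                        // "y seconds" per track

AVQueuePlayer *queue = [AVQueuePlayer queuePlayerWithItems:@[]];
for (NSURL *url in trackURLs) {
    AVPlayerItem *item = [AVPlayerItem playerItemWithURL:url];
    // Stop this item after clipSeconds; the queue then moves on.
    item.forwardPlaybackEndTime = CMTimeMakeWithSeconds(clipSeconds, 600);
    [queue insertItem:item afterItem:nil];   // nil appends to the end
}
[queue play];

For the initial delay, prepending a silent AVPlayerItem of x minutes (the workaround already suggested in the question) is probably the simplest way to keep the audio session alive while the device is locked.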

iOS: Is there a way to get and set the current playback position?

The idea isn't so much providing information to the user in the lock screen (see How to set current playback duration and elapsed time on iOS 7 lockscreen? for that) but rather I want my app to read the playback position in seconds, then if that value is above a certain threshold skip back by that threshold. If it's less than that threshold and the current track is in a playlist queue, determine the length of the previous track in the queue, and go back to that track, setting playback position to:
position = last.length - threshold + last_position
Where last is the previous track and last_position is the playback position in the track which was playing when the event occurred.
I'm structuring my app along Model-View-Controller lines, so my UIEvents are handled in the controller. I just can't quite grasp how to get the current playback position. How would I capture a reference to the MPNowPlayingInfoCenter? I'm writing in Objective-C.
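For the "how do I read the position" part: as far as I know, MPNowPlayingInfoCenter only mirrors state onto the lock screen; the authoritative position comes from the player itself, through the MPMediaPlayback protocol. A sketch, assuming playback goes through the system music player and using a hypothetical threshold:

#import <MediaPlayer/MediaPlayer.h>

MPMusicPlayerController *player = [MPMusicPlayerController systemMusicPlayer];
NSTimeInterval threshold = 10.0;   // hypothetical threshold, in seconds
NSTimeInterval position  = player.currentPlaybackTime;

if (position >= threshold) {
    // Above the threshold: skip back within the current track.
    player.currentPlaybackTime = position - threshold;
} else {
    // Otherwise step back to the previous track in the queue, then
    // seek to position = last.length - threshold + last_position.
    // (skipToPreviousItem is asynchronous; in practice, wait for
    // MPMusicPlayerControllerNowPlayingItemDidChangeNotification
    // before seeking.)
    [player skipToPreviousItem];
    MPMediaItem *last = player.nowPlayingItem;
    NSTimeInterval length =
        [[last valueForProperty:MPMediaItemPropertyPlaybackDuration] doubleValue];
    player.currentPlaybackTime = length - threshold + position;
}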

In AVFoundation, how to synchronize recording and playback

I am interested in recording media using an AVCaptureSession in iOS while playing media back using an AVPlayer (specifically, I am playing back audio and recording video, but I'm not sure it matters).
The problem is, when I play the resulting media back together later, they are out of sync. Is it possible to synchronize them, either by ensuring that playback and recording start simultaneously, or by discovering what the offset is between them? I probably need the sync to be on the order of 10 ms. It is unreasonable to assume that I can always capture audio (since the user may use headphones), so syncing via analysis of original and recorded audio is not an option.
This question suggests that it's possible to end playback and recording simultaneously and determine the initial offset from the resulting lengths that way, but I'm unclear on how to get them to end simultaneously. I have two cases: 1) the audio playback runs out, and 2) the user hits the "stop recording" button.
This question suggests priming and then applying a fixed, but possibly device-dependent delay, which is obviously a hack, but if it's good enough for audio it's obviously worth considering for video.
Is there another media layer I can use to perform the required synchronization?
Related: this question is unanswered.
If you are specifically using AVPlayer to play back audio, I would suggest using Audio Queue Services instead. It is seamless and fast, since it reads buffer by buffer, and play/pause is more responsive than with AVPlayer.
It is also possible that you are missing an initial call to [avPlayer prepareToPlay], which might be adding overhead before the audio starts playing.
Hope it helps you.
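Another angle on the original question: rather than switching audio engines, you can start AVPlayer at a known host-clock time, which makes the playback/recording offset a measurable constant instead of an unknown. A sketch, assuming an AVPlayer named player that is ready to play, and that the capture session timestamps against the host clock (the default, as far as I know):

#import <AVFoundation/AVFoundation.h>

player.automaticallyWaitsToMinimizeStalling = NO;  // needed for precise scheduling

// Pick a start time slightly in the future on the host clock; the
// 0.5 s lead time is an arbitrary choice.
CMTime now   = CMClockGetTime(CMClockGetHostTimeClock());
CMTime start = CMTimeAdd(now, CMTimeMakeWithSeconds(0.5, 1000000000));

// Playback begins at `start`; kCMTimeInvalid means "from the current
// position". Capture sample buffers carry presentation timestamps on
// the same clock, so the offset between the two streams is known.
[player setRate:1.0 time:kCMTimeInvalid atHostTime:start];

If the player and the capture session end up on different clocks, CMSyncConvertTime can relate a time from one clock or timebase to another.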

Reconciling CACurrentMediaTime() and deviceCurrentTime

I am trying to synchronize several CABasicAnimations with AVAudioPlayer. The issue I have is that CABasicAnimation uses CACurrentMediaTime() as a reference point when scheduling animations, while AVAudioPlayer uses deviceCurrentTime. Also, the animations use CFTimeInterval while the sound uses NSTimeInterval (I'm not sure whether they're "toll-free bridged" like other CF and NS types). I'm finding that the reference points are different as well.
Is there a way to ensure that the sounds and animations use the same reference point?
I don't know the "official" answer, but both are double-precision floating-point numbers that measure a number of seconds from some reference time.
From the docs, it sounds like deviceCurrentTime is linked to the current audio session:
The time value, in seconds, of the audio output device. (read-only)
@property(readonly) NSTimeInterval deviceCurrentTime
Discussion
The value of this property increases monotonically while an audio player is playing or paused.
If more than one audio player is connected to the audio output device, device time continues incrementing as long as at least one of the players is playing or paused.
If the audio output device has no connected audio players that are either playing or paused, device time reverts to 0.
You should be able to start an audio output session, call CACurrentMediaTime(), then get the deviceCurrentTime of your audio session in two sequential statements, and calculate an offset constant to convert between them. That offset should be accurate to within a few nanoseconds.
The offset would only be valid while the audio output session is active. You'd have to recalculate it each time you remove all audio players from the audio session.
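A minimal sketch of that conversion, assuming an active AVAudioPlayer named audioPlayer and a CALayer named layer whose timeline has not been modified (so CACurrentMediaTime() is its absolute timebase):

#import <AVFoundation/AVFoundation.h>
#import <QuartzCore/QuartzCore.h>

// Two back-to-back reads give a constant to convert between timelines.
NSTimeInterval offset = CACurrentMediaTime() - audioPlayer.deviceCurrentTime;

// Start the sound one second from now, in deviceCurrentTime terms...
NSTimeInterval soundStart = audioPlayer.deviceCurrentTime + 1.0;
[audioPlayer playAtTime:soundStart];

// ...and start the animation at the same instant, converted into the
// Core Animation timebase with the offset.
CABasicAnimation *fade = [CABasicAnimation animationWithKeyPath:@"opacity"];
fade.fromValue = @1.0;
fade.toValue   = @0.0;
fade.duration  = 2.0;
fade.beginTime = soundStart + offset;
fade.fillMode  = kCAFillModeBackwards;  // hold the start value until beginTime
[layer addAnimation:fade forKey:@"fade"];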
I think the official answer just changed, though currently under NDA.
See "What's New in Camera Capture", in particular the last few slides about the CMSync* functions.
https://developer.apple.com/videos/wwdc/2012/?id=520
