Suppose I have multiple frames from a video and I would like to play these frames in a movie player. The frames can change at any point in time, so it should work like a callback: the player requests each frame, and the program provides the frame in response to that callback.
Is this possible in iOS?
Please point me in the right direction to achieve this.
Thanks in advance
mia.
You are not going to be able to implement that type of approach using the built-in movie player. But if you just want to loop through video frames stored as PNG files in a directory, that would not be too hard to implement. You could take a look at this code as a starting point; the source is completely free and does what you need.
PNGAnimatorDemo.zip
If you want to do more advanced stuff, take a look at the AVImageFrameDecoder class in AVAnimator (Google it to find out more).
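For the simple directory-of-PNGs case, a minimal sketch of the looping idea could look something like this; the frame names, frame count, and frame rate are assumptions, and the CADisplayLink plays the role of the per-frame callback, so you can swap out the frames array whenever the frames change:

```objc
#import <UIKit/UIKit.h>
#import <QuartzCore/QuartzCore.h>

@interface FrameLoopViewController : UIViewController
@property (nonatomic, strong) UIImageView *frameView;
@property (nonatomic, strong) NSArray<UIImage *> *frames;   // replace this array whenever the frames change
@property (nonatomic, assign) NSUInteger frameIndex;
@end

@implementation FrameLoopViewController

- (void)viewDidLoad {
    [super viewDidLoad];
    self.frameView = [[UIImageView alloc] initWithFrame:self.view.bounds];
    [self.view addSubview:self.frameView];

    // Hypothetical frames bundled as frame0.png, frame1.png, ... frame29.png.
    NSMutableArray<UIImage *> *loaded = [NSMutableArray array];
    for (NSUInteger i = 0; i < 30; i++) {
        UIImage *image = [UIImage imageNamed:[NSString stringWithFormat:@"frame%lu", (unsigned long)i]];
        if (image) [loaded addObject:image];
    }
    self.frames = loaded;

    // The display link acts as the "callback": each tick asks the program for the next frame.
    CADisplayLink *link = [CADisplayLink displayLinkWithTarget:self selector:@selector(showNextFrame:)];
    link.preferredFramesPerSecond = 24;   // iOS 10+; older systems use frameInterval instead
    [link addToRunLoop:[NSRunLoop mainRunLoop] forMode:NSRunLoopCommonModes];
}

- (void)showNextFrame:(CADisplayLink *)link {
    if (self.frames.count == 0) return;
    self.frameView.image = self.frames[self.frameIndex % self.frames.count];
    self.frameIndex++;
}

@end
```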
Is it possible to have a real-time preview of an AVMutableComposition that has layer instructions applied to its assets?
The only class I have found that connects an AVMutableComposition with an AVVideoComposition (which holds the instructions) is AVAssetExportSession. Does that mean I must export first in order to play a preview?
If so, how do apps like Final Cut Pro provide a real-time preview while I edit part of the video? Do they cut the whole video into multiple chunks, export only what has changed, and keep everything else as it was?
This sounds like a difficult problem - is there any library that would help with cutting the video into small chunks for export and keeping an eye on cache invalidation?
Cheers,
M.
I don't know if this is still relevant, but you can always extract each frame from the video, manipulate it accordingly, and then render it to the screen.
If it's coming from an AVCaptureSession, you can get CMSampleBuffers from the callbacks; if it's a file, I think AVAssetReader is your best bet. Then you can use either Core Image or Metal to manipulate the frames and render them in real time.
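In case it helps, here is a rough sketch of the file-based route, assuming a local movie URL and BGRA output; AVAssetReader hands you decoded frames that you can wrap in a CIImage (or hand to Metal) and render however you like:

```objc
#import <AVFoundation/AVFoundation.h>
#import <CoreImage/CoreImage.h>

// Reads decoded frames from a local movie file one CMSampleBuffer at a time.
void ReadFrames(NSURL *movieURL) {
    AVAsset *asset = [AVAsset assetWithURL:movieURL];
    AVAssetTrack *videoTrack = [[asset tracksWithMediaType:AVMediaTypeVideo] firstObject];

    NSError *error = nil;
    AVAssetReader *reader = [AVAssetReader assetReaderWithAsset:asset error:&error];

    // Ask for BGRA so the frames are easy to wrap in a CIImage or a Metal texture.
    NSDictionary *settings = @{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };
    AVAssetReaderTrackOutput *output =
        [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:videoTrack outputSettings:settings];
    [reader addOutput:output];
    [reader startReading];

    CMSampleBufferRef sampleBuffer = NULL;
    while ((sampleBuffer = [output copyNextSampleBuffer])) {
        CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
        if (pixelBuffer) {
            // Manipulate with Core Image (or hand the pixel buffer to Metal) here.
            CIImage *frame = [CIImage imageWithCVPixelBuffer:pixelBuffer];
            (void)frame;
        }
        CFRelease(sampleBuffer);
    }
}
```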
There is no real-time preview with AVMutableComposition. They may create a time slot for every change and manage its visibility when you move the slider below.
I'm looking for a way to create an audio bars visualizer similar to this in iOS.
Every white bar will move up and down depending on the audio waveform. I'm really lost because I don't have much experience dealing with audio in Objective-C.
EDIT: What I'm seeking is what the Overcast app does in its visualizer (the group of vertical orange bars on the lower part of the podcast's image).
Can anyone help?
Thanks
EDIT: Thanks to Tomer's answer I finally made it work. First I went through this tutorial to get everything clear, then I created my own VisualizerView for my project; you can find it in this gist. It may not be perfect, but it does what I needed.
Generally, you have a few options if you want to get an idea of what something sounds like in iOS:
Use the simple AVAudioPlayer, enable its metering, and then use the [audioPlayer averagePowerForChannel:] method to get the average audio level at the current moment (see the sketch below). Check out this tutorial.
Use the Audio Queue API, which lets you send whatever audio you want to the speaker: you read audio from your source and fill the buffers with it each time (if you're reading from a file, use AVAssetReader). This way you always know exactly what waveform you're playing, so you can, for example, calculate its average power or process it in other ways, such as with an FFT, and then update the bars accordingly.
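For the first (metering) option, a minimal sketch might look like the following; the timer interval, the dB-to-height mapping, and the updateBarsWithLevel: method are assumptions, not part of any particular tutorial:

```objc
#import <AVFoundation/AVFoundation.h>

// Lives in your view controller; assumes self.player is an AVAudioPlayer that is already playing.
- (void)startMetering {
    self.player.meteringEnabled = YES;   // levels are only valid once metering is enabled

    __weak typeof(self) weakSelf = self;
    [NSTimer scheduledTimerWithTimeInterval:1.0 / 30.0 repeats:YES block:^(NSTimer *timer) {   // block-based timer, iOS 10+
        [weakSelf.player updateMeters];
        // averagePowerForChannel: returns decibels, roughly in the range -160 ... 0.
        float db = [weakSelf.player averagePowerForChannel:0];
        // Rough assumption: convert dB to a 0..1 level for the bar height.
        float level = powf(10.0f, 0.05f * db);
        [weakSelf updateBarsWithLevel:level];   // hypothetical method that animates the bars
    }];
}
```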
EDIT: The standard way of doing such a thing is to use the Fast Fourier Transform (FFT) - it extracts frequency information from a sound. Here's a good example of using it on iOS (Apple's guide here). But, of course, to use it you have to know exactly what waveform you're playing every time, so you'd probably want to use a lower-level API such as Audio Queue.
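If you do go the FFT route, a bare-bones sketch with the Accelerate framework could look like this; it assumes the sample count is a power of two and leaves out windowing and grouping the bins into bars:

```objc
#import <Accelerate/Accelerate.h>
#include <stdlib.h>
#include <math.h>

// Fills outMagnitudes (n/2 entries) with the squared magnitudes of the spectrum of
// `samples` (n samples, n a power of two).
static void MagnitudeSpectrum(const float *samples, vDSP_Length n, float *outMagnitudes) {
    vDSP_Length log2n = (vDSP_Length)lrintf(log2f((float)n));
    FFTSetup setup = vDSP_create_fftsetup(log2n, kFFTRadix2);

    float *real = malloc(sizeof(float) * n / 2);
    float *imag = malloc(sizeof(float) * n / 2);
    DSPSplitComplex split = { .realp = real, .imagp = imag };

    // Pack the real samples into the split-complex layout expected by the real FFT.
    vDSP_ctoz((const DSPComplex *)samples, 2, &split, 1, n / 2);

    // In-place forward real-to-complex FFT.
    vDSP_fft_zrip(setup, &split, 1, log2n, FFT_FORWARD);

    // Squared magnitude of each of the n/2 frequency bins.
    vDSP_zvmags(&split, 1, outMagnitudes, 1, n / 2);

    free(real);
    free(imag);
    vDSP_destroy_fftsetup(setup);
}
```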
I'm working on an iPad app that records and plays videos using AVFoundation classes. I have all of the code for basic record/playback in place, and now I would like to add a feature that allows the user to draw and make annotations on the video, which I believe will not be too difficult. The harder part, and something I have not been able to find any examples of, will be combining the drawing and annotations into the video file itself. I suspect this part is accomplished with AVComposition, but I have no idea exactly how. Your help would be greatly appreciated.
Mark
I do not think you can actually save a drawing into a video file in iOS. You could, however, consider using a separate view to save the drawing and synchronize the overlay onto the video using a transparent view. In other words, the user circles something at 3 minutes 42 seconds in the video; then, when the video is played back, you overlay the saved drawing onto the video at the 3:42 mark. It's not exactly what you want, but I think it is as close as you can get right now.
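A minimal sketch of that playback-time synchronization using AVPlayer's periodic time observer; the annotations dictionary, the image-view overlay, and the one-second granularity are all assumptions:

```objc
#import <AVFoundation/AVFoundation.h>
#import <UIKit/UIKit.h>

// Assumes self.player is the AVPlayer showing the video, self.overlayView is a transparent
// UIImageView above the player layer, and self.annotations is a hypothetical dictionary
// mapping whole seconds (NSNumber) to saved drawings (UIImage).
- (void)startSyncingAnnotations {
    CMTime interval = CMTimeMake(1, 30);   // check roughly 30 times per second
    __weak typeof(self) weakSelf = self;
    // Keep the returned token in a property if you need to remove the observer later.
    [self.player addPeriodicTimeObserverForInterval:interval
                                               queue:dispatch_get_main_queue()
                                          usingBlock:^(CMTime time) {
        NSNumber *second = @((NSInteger)CMTimeGetSeconds(time));
        UIImage *drawing = weakSelf.annotations[second];
        weakSelf.overlayView.image = drawing;   // nil hides the overlay when nothing was drawn
    }];
}
```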
EDIT: Actually there might be a way after all. Take a look at this tutorial. I have not read the whole thing but it seems to incorporate the overlay function you need.
http://www.raywenderlich.com/30200/avfoundation-tutorial-adding-overlays-and-animations-to-videos
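If the tutorial follows the usual animationTool route, the export step boils down to something like the sketch below; the asset, the annotation layer (holding the user's drawings), and the output URL are assumptions:

```objc
#import <AVFoundation/AVFoundation.h>
#import <QuartzCore/QuartzCore.h>

// Burns `annotationLayer` (your drawings) into `asset` and writes the result to `outputURL`.
void ExportWithOverlay(AVAsset *asset, CALayer *annotationLayer, NSURL *outputURL) {
    AVMutableVideoComposition *videoComposition =
        [AVMutableVideoComposition videoCompositionWithPropertiesOfAsset:asset];

    CGSize size = videoComposition.renderSize;
    CALayer *videoLayer = [CALayer layer];
    CALayer *parentLayer = [CALayer layer];
    videoLayer.frame = CGRectMake(0, 0, size.width, size.height);
    parentLayer.frame = videoLayer.frame;
    annotationLayer.frame = videoLayer.frame;
    [parentLayer addSublayer:videoLayer];
    [parentLayer addSublayer:annotationLayer];   // drawings sit on top of the video

    videoComposition.animationTool =
        [AVVideoCompositionCoreAnimationTool videoCompositionCoreAnimationToolWithPostProcessingAsVideoLayer:videoLayer
                                                                                                      inLayer:parentLayer];

    AVAssetExportSession *export =
        [[AVAssetExportSession alloc] initWithAsset:asset presetName:AVAssetExportPresetHighestQuality];
    export.videoComposition = videoComposition;
    export.outputURL = outputURL;
    export.outputFileType = AVFileTypeQuickTimeMovie;
    [export exportAsynchronouslyWithCompletionHandler:^{
        NSLog(@"Export finished with status %ld", (long)export.status);
    }];
}
```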
I have a device that streams H.264 video in the following format: the top half of the picture contains the even lines of the video, and the bottom half contains the odd lines. The question is: how can I play this video so it looks normal, using standard players such as ffplay?
I know about ffmpeg's "tinterlace" filter with the "merge" mode, but it combines two consecutive pictures into one, whereas my task is to reconstruct a correct picture from a single frame.
Regards,
Alexey.
I recently had to deal with the exact same problem.
There are many different methods, and the optimal solution depends entirely on your situation.
The simplest and fastest method is weaving the two fields together, which is perfect for static parts of the image but creates a comb effect on moving objects.
More sophisticated methods use motion detection.
What I did was merge the two fields and then apply edge-based line averaging (ELA) to the moving segments to reduce the comb effect.
Check this link for a detailed explanation of the problem.
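For the weaving step specifically, the re-ordering itself is trivial once you have a raw plane; a sketch in plain C, assuming an 8-bit plane where the top half holds the even output lines and the bottom half the odd ones:

```c
#include <string.h>
#include <stddef.h>

// src: frame where rows [0, height/2) are the even output lines and
//      rows [height/2, height) are the odd output lines.
// dst: weaved full frame with the same dimensions. stride is bytes per row.
void weave_fields(const unsigned char *src, unsigned char *dst,
                  int width, int height, int stride) {
    int half = height / 2;
    for (int y = 0; y < half; y++) {
        // Even destination line comes from the top half...
        memcpy(dst + (size_t)(2 * y) * stride, src + (size_t)y * stride, (size_t)width);
        // ...odd destination line comes from the bottom half.
        memcpy(dst + (size_t)(2 * y + 1) * stride, src + (size_t)(half + y) * stride, (size_t)width);
    }
}
```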
It would be good if you could provide a sample video file. You describe very well what the picture looks like, but the file may contain other information that is helpful for playback.
Furthermore, the format you describe doesn't sound like a standard format, so it's unlikely you will get a regular player to play it the way you want out of the box. If you're using ffplay, you will likely have to write your own plugin to re-order the scanlines prior to displaying them.
Alternatively, you could re-encode the video into a standard format (interlaced or deinterlaced) using ffmpeg. You could then play it back in any regular player, like ffplay or VLC.
Finally, I recommend asking your question on the ffmpeg mailing list.
I have an AVMutableComposition with a video track, and I would like to add a still image into the video track, to be displayed for a given duration. The still image is simply a PNG. I can load the image as an asset, but that's about it, because the resulting asset does not have any tracks and therefore cannot simply be inserted using the insertTimeRange… methods.
Is there a way to add still images to a composition? It looks like the answer is somewhere in Core Animation, but the whole thing seems to be a bit above my head and I would appreciate a code sample or some information pointers.
OK. There's a great video called Editing Media with AV Foundation from WWDC that explains a lot. You can't insert images directly into the AVComposition timeline, at least I did not find any way to do that. But when exporting or playing an asset, you can attach an AVVideoComposition. That's maybe not a perfect name for the class, since it lets you mix between the various video tracks in the asset, very much like AVAudioMix does for audio. AVVideoComposition also has an animationTool property that lets you throw Core Animation layers (CALayer) into the mix, and CALayer has a contents property that can be assigned a CGImageRef. This does not help in my case, but it might help somebody else.
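For anyone it does help: a rough sketch of showing a PNG over the video for a given range during export via the animationTool; the asset, the image, and the time range are assumptions, and note that animations driven by the video timeline should use AVCoreAnimationBeginTimeAtZero rather than 0 and keep removedOnCompletion set to NO:

```objc
#import <AVFoundation/AVFoundation.h>
#import <UIKit/UIKit.h>
#import <QuartzCore/QuartzCore.h>

// Builds a video composition that shows `image` on top of the video from `start` for `duration` seconds.
AVMutableVideoComposition *CompositionShowingImage(AVAsset *asset, UIImage *image,
                                                   CFTimeInterval start, CFTimeInterval duration) {
    AVMutableVideoComposition *videoComposition =
        [AVMutableVideoComposition videoCompositionWithPropertiesOfAsset:asset];
    CGRect frame = CGRectMake(0, 0, videoComposition.renderSize.width, videoComposition.renderSize.height);

    CALayer *videoLayer = [CALayer layer];
    CALayer *parentLayer = [CALayer layer];
    videoLayer.frame = frame;
    parentLayer.frame = frame;

    CALayer *imageLayer = [CALayer layer];
    imageLayer.frame = frame;
    imageLayer.contents = (__bridge id)image.CGImage;
    imageLayer.opacity = 0.0f;   // hidden except during the animated range

    // Hold the image fully visible for [start, start + duration] on the video's timeline.
    CABasicAnimation *show = [CABasicAnimation animationWithKeyPath:@"opacity"];
    show.fromValue = @1.0;
    show.toValue = @1.0;
    show.beginTime = AVCoreAnimationBeginTimeAtZero + start;   // a beginTime of 0 would mean "now"
    show.duration = duration;
    show.removedOnCompletion = NO;
    [imageLayer addAnimation:show forKey:@"showImage"];

    [parentLayer addSublayer:videoLayer];
    [parentLayer addSublayer:imageLayer];
    videoComposition.animationTool =
        [AVVideoCompositionCoreAnimationTool videoCompositionCoreAnimationToolWithPostProcessingAsVideoLayer:videoLayer
                                                                                                      inLayer:parentLayer];
    return videoComposition;
}
```

You would then set this video composition on an AVAssetExportSession before exporting; for live playback Apple points to AVSynchronizedLayer instead, since the Core Animation tool is intended for offline rendering.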
I also need still images in my composition, but my line of thinking is a little different: insert black placeholder movies generated on the fly where the images should appear (possibly a single such video would suffice), and keep a dictionary that maps each inserted composition time range to the actual desired image. When the corresponding time range arrives in my custom compositor, pull out the desired image and paint it into the output pixel buffer, ignoring the incoming black frames from the composition. I think that would be another way of doing it.