I'm trying to display the contents of a video file (let's just say without the audio for now) onto a UV-mapped 3D object in OpenGL. I've done a fair bit in OpenGL but have no idea where to begin with video file handling, and most of the examples out there seem to be about getting video frames from cameras, which is not what I'm after.
At the moment I feel that if I can get individual frames of the video as CGImageRefs I'd be set, so I'm wondering how to do this? Perhaps there are even better ways to do it? Where should I start, and what's the most straightforward file format for video playback on iOS? .mov?
Apologies; typing on an iPhone so I'll be a little brief.
Create an AVURLAsset with the URL of your video - which can be a local file URL if you like. Anything QuickTime can do is fine, so MOV or M4V in H.264 is probably the best source.
Query the asset for tracks of type AVMediaTypeVideo. You should get just one unless your source video has multiple camera angles or something like that, so just taking objectAtIndex:0 should give you the AVAssetTrack you want.
Use that to create an AVAssetReaderTrackOutput. Probably you want to specify kCVPixelFormatType_32BGRA.
Create an AVAssetReader using the asset, attach the asset reader track output as an output, and call startReading.
Henceforth you can call copyNextSampleBuffer on the track output to get new CMSampleBuffers, putting you in the same position as if you were taking input from the camera. So you can lock that to get at pixel contents and push those to OpenGL via Apple's BGRA extension.
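Roughly, in Swift, the whole pipeline looks something like this (a minimal sketch of the steps above; error handling and the actual GL upload are left out):

```swift
import AVFoundation
import CoreVideo

// Minimal sketch: decode BGRA frames from a local video file with AVAssetReader.
func readFrames(from videoURL: URL) throws {
    let asset = AVURLAsset(url: videoURL)
    guard let track = asset.tracks(withMediaType: .video).first else { return }

    // Ask the reader to hand back decoded BGRA pixels.
    let output = AVAssetReaderTrackOutput(
        track: track,
        outputSettings: [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA])

    let reader = try AVAssetReader(asset: asset)
    reader.add(output)
    reader.startReading()

    // Pull sample buffers until the track is exhausted.
    while let sample = output.copyNextSampleBuffer() {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sample) else { continue }
        CVPixelBufferLockBaseAddress(pixelBuffer, .readOnly)
        // CVPixelBufferGetBaseAddress(pixelBuffer) now points at BGRA pixels you can
        // push to OpenGL, e.g. glTexImage2D(..., GL_BGRA, GL_UNSIGNED_BYTE, ...).
        CVPixelBufferUnlockBaseAddress(pixelBuffer, .readOnly)
    }
}
```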
You're probably going to have to use a player layer and flatten its contents into a bitmap context. See the documentation for AVPlayerLayer. The performance might be very poor though.
Related
This question is about iOS. On Android, it is very easy to use OpenGL ES 2.0 to render a texture to a view (for previewing) or to send it to an encoder (for file writing). I haven't been able to find any tutorial for iOS on achieving video playback (previewing a video effect from a file) or video recording (saving a video with an effect) with shader effects. Is this possible on iOS?
I've come across a shader demo called GLCameraRipple, but I have no clue how to use it more generically, e.g. with AVFoundation.
[EDIT]
I stumbled upon this tutorial about OpenGL ES, AVFoundation and video merging on iOS while searching for a snippet. That's another interesting entry point.
It's all very low-level stuff over in iOS land, with a whole bunch of pieces to connect.
The main thing you're likely to be interested in is CVOpenGLESTextureCache. As the CV prefix implies, it's part of Core Video; its primary point of interest here is CVOpenGLESTextureCacheCreateTextureFromImage, which "creates a live binding between the image buffer and the underlying texture object". The documentation further provides explicit advice on using such an image as a GL_COLOR_ATTACHMENT, i.e. the texture ID returned is usable both as a source and as a destination for OpenGL.
The image buffer you bind is a CVImageBuffer, one concrete type of which is the CVPixelBuffer. You can supply pixel buffers to an AVAssetWriterInputPixelBufferAdaptor wired to an AVAssetWriter in order to write out a video.
In the other direction, an AVAssetReaderOutput attached to an AVAssetReader will vend CMSampleBuffers, which can be queried for attached image buffers (if you've got video coming in and not just audio, there'll be some) that can then be mapped into OpenGL via a texture cache.
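To make the reading direction concrete, here's a rough Swift sketch of the texture-cache mapping. The class name is made up, and you're assumed to already have an EAGLContext and pixel buffers coming from an asset reader:

```swift
import CoreVideo
import OpenGLES

// Sketch: map decoded CVPixelBuffers straight into GL textures via the texture cache.
final class VideoTextureMapper {
    private var textureCache: CVOpenGLESTextureCache?

    init(context: EAGLContext) {
        CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, nil, context, nil, &textureCache)
    }

    func makeTexture(from pixelBuffer: CVPixelBuffer) -> CVOpenGLESTexture? {
        guard let cache = textureCache else { return nil }
        var texture: CVOpenGLESTexture?
        CVOpenGLESTextureCacheCreateTextureFromImage(
            kCFAllocatorDefault, cache, pixelBuffer, nil,
            GLenum(GL_TEXTURE_2D), GL_RGBA,
            GLsizei(CVPixelBufferGetWidth(pixelBuffer)),
            GLsizei(CVPixelBufferGetHeight(pixelBuffer)),
            GLenum(0x80E1),            // GL_BGRA from Apple's BGRA extension
            GLenum(GL_UNSIGNED_BYTE),
            0, &texture)
        return texture
    }
}

// Bind the result like any other texture:
//   glBindTexture(CVOpenGLESTextureGetTarget(tex), CVOpenGLESTextureGetName(tex))
```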
I am working on an application where video and time/GPS/accelerometer data is simultaneously recorded to separate files.
I can play the video and have my overlay appear perfectly in realtime, but I cannot simply export this.
I want to post-process the video and overlay the time and coordinates on it.
There are also other shapes to be overlaid, which change size and position on each frame.
I have tried using AVMutableComposition and adding CALayers, with limited results.
This works to an extent, but I cannot synchronise the timestamp with the video. I could use a CAKeyframeAnimation with values + keyTimes, but the number of values I would need to work with is excessive.
My current approach is to render a separate video consisting of CGImages created from the data. This works well, but I will need to use a chroma key to get transparency in the overlay, and I have read that there will likely be quality issues after doing that.
Is there a simpler approach that I should be looking at?
I understand that render speed will not be fantastic; however, I do not want to require a separate 'PC' application to render the video.
Use AVAssetReader for the recorded video. Get each CMSampleBufferRef, read its timestamp, draw the time onto the sample buffer's pixels, and write the buffer to an AVAssetWriterInputPixelBufferAdaptor. A similar approach works for video as it is being recorded.
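As a Swift sketch of that loop: `readerOutput`, `writerInput` and `adaptor` are assumed to be configured already (BGRA reader output, writer input wrapped in a pixel buffer adaptor), and the text drawing is just a placeholder for whatever overlay your data dictates:

```swift
import AVFoundation
import UIKit

// Sketch: pull each frame, stamp its timestamp onto the pixels, and feed it to the writer.
func copyAndStamp(readerOutput: AVAssetReaderTrackOutput,
                  writerInput: AVAssetWriterInput,
                  adaptor: AVAssetWriterInputPixelBufferAdaptor) {
    while let sample = readerOutput.copyNextSampleBuffer() {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sample) else { continue }
        let pts = CMSampleBufferGetPresentationTimeStamp(sample)

        CVPixelBufferLockBaseAddress(pixelBuffer, [])
        if let cg = CGContext(data: CVPixelBufferGetBaseAddress(pixelBuffer),
                              width: CVPixelBufferGetWidth(pixelBuffer),
                              height: CVPixelBufferGetHeight(pixelBuffer),
                              bitsPerComponent: 8,
                              bytesPerRow: CVPixelBufferGetBytesPerRow(pixelBuffer),
                              space: CGColorSpaceCreateDeviceRGB(),
                              bitmapInfo: CGImageAlphaInfo.premultipliedFirst.rawValue
                                        | CGBitmapInfo.byteOrder32Little.rawValue) {
            UIGraphicsPushContext(cg)
            // Draw whatever the timestamp/GPS data dictates; note the context is not
            // flipped, so flip the CTM if you want UIKit-style coordinates.
            let label = String(format: "%.2f s", CMTimeGetSeconds(pts)) as NSString
            label.draw(at: CGPoint(x: 20, y: 20),
                       withAttributes: [.font: UIFont.boldSystemFont(ofSize: 36),
                                        .foregroundColor: UIColor.white])
            UIGraphicsPopContext()
        }
        CVPixelBufferUnlockBaseAddress(pixelBuffer, [])

        while !writerInput.isReadyForMoreMediaData { usleep(1000) } // crude back-pressure
        adaptor.append(pixelBuffer, withPresentationTime: pts)
    }
    writerInput.markAsFinished()
}
```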
Use the AVVideoCompositing protocol https://developer.apple.com/library/mac/documentation/AVFoundation/Reference/AVVideoCompositing_Protocol/index.html
This will allow you to get frame-by-frame callbacks with the pixel buffers to do what you want.
With this protocol you will be able to take one frame and overlay whatever you would like. Take a look at this sample - https://developer.apple.com/library/ios/samplecode/AVCustomEdit/Introduction/Intro.html - to see how to handle frame-by-frame modifications. If you take advantage of the AVVideoCompositing protocol, you can set a custom video compositor and a video composition on your AVPlayerItem and AVAssetExportSession to render/export what you want.
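A bare-bones Swift sketch of what such a compositor might look like; the class name and the actual overlay drawing are placeholders:

```swift
import AVFoundation
import CoreVideo

// Sketch of a custom compositor; the per-frame drawing itself is up to you.
final class OverlayCompositor: NSObject, AVVideoCompositing {
    let sourcePixelBufferAttributes: [String : Any]? =
        [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA]
    let requiredPixelBufferAttributesForRenderContext: [String : Any] =
        [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA]

    func renderContextChanged(_ newRenderContext: AVVideoCompositionRenderContext) {
        // Cache the render context here if you need its size or pixel buffer pool.
    }

    func startRequest(_ request: AVAsynchronousVideoCompositionRequest) {
        // One callback per output frame: pull the source frame, draw the overlay, hand back the result.
        guard let trackID = request.sourceTrackIDs.first?.int32Value,
              let sourceFrame = request.sourceFrame(byTrackID: trackID),
              let output = request.renderContext.newPixelBuffer() else {
            request.finish(with: NSError(domain: "OverlayCompositor", code: -1, userInfo: nil))
            return
        }
        // ... copy/filter `sourceFrame` into `output` and draw the overlay for request.compositionTime ...
        _ = sourceFrame
        request.finish(withComposedVideoFrame: output)
    }
}

// Attach it via an AVMutableVideoComposition:
//   videoComposition.customVideoCompositorClass = OverlayCompositor.self
// and set that composition on your AVPlayerItem or AVAssetExportSession.
```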
I'm using AVMutableComposition and AVAssetExportSession to composite several discrete audio clips/files together into a single file, similarly to this post, but there will be no "video" track. I'd like to give the track some visual appeal using a still image, so that when the user plays the clip they don't just see a generic QuickTime icon; ideally I'd replace the image with branding or something relevant to the audio content. How would I go about doing it, and is there a way to do it without dramatically increasing file size (i.e. some way to use a really low frame rate, or just something so it's not generating 30 fps for what is non-moving art)? I'd appreciate any help on this.
AVAssetWriter will allow you to create video from a still image. This question provides a great example of how to do so.
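For illustration, here's the shape such code might take in Swift (a sketch, not the linked answer's code; `artworkCGImage`, `audioDuration` and `outputURL` are placeholders, and you would still mix this video track with your composed audio afterwards). The point is that only two frames are appended, one at time zero and one at the end, so nothing like 30 fps ever gets encoded:

```swift
import AVFoundation
import CoreGraphics
import CoreVideo

// Sketch: write a video track that shows one still image for the whole duration.
func writeStillVideo(artworkCGImage: CGImage, audioDuration: CMTime, outputURL: URL) throws {
    let width = artworkCGImage.width, height = artworkCGImage.height

    let writer = try AVAssetWriter(outputURL: outputURL, fileType: .mov)
    let settings: [String: Any] = [AVVideoCodecKey: AVVideoCodecType.h264,
                                   AVVideoWidthKey: width,
                                   AVVideoHeightKey: height]
    let input = AVAssetWriterInput(mediaType: .video, outputSettings: settings)
    let adaptor = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: input,
                                                       sourcePixelBufferAttributes: nil)
    writer.add(input)
    writer.startWriting()
    writer.startSession(atSourceTime: .zero)

    // Render the CGImage into a pixel buffer once.
    let attrs: [String: Any] = [kCVPixelBufferCGImageCompatibilityKey as String: true,
                                kCVPixelBufferCGBitmapContextCompatibilityKey as String: true]
    var pb: CVPixelBuffer?
    CVPixelBufferCreate(kCFAllocatorDefault, width, height, kCVPixelFormatType_32ARGB,
                        attrs as CFDictionary, &pb)
    guard let buffer = pb else { return }
    CVPixelBufferLockBaseAddress(buffer, [])
    if let cg = CGContext(data: CVPixelBufferGetBaseAddress(buffer),
                          width: width, height: height, bitsPerComponent: 8,
                          bytesPerRow: CVPixelBufferGetBytesPerRow(buffer),
                          space: CGColorSpaceCreateDeviceRGB(),
                          bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue) {
        cg.draw(artworkCGImage, in: CGRect(x: 0, y: 0, width: width, height: height))
    }
    CVPixelBufferUnlockBaseAddress(buffer, [])

    // Two frames: one at t = 0 and one at the end, so the track spans the audio.
    adaptor.append(buffer, withPresentationTime: .zero)
    while !input.isReadyForMoreMediaData { usleep(1000) }
    adaptor.append(buffer, withPresentationTime: audioDuration)

    input.markAsFinished()
    writer.endSession(atSourceTime: audioDuration)
    writer.finishWriting { }
}
```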
I'm looking for tips on developing an application for iPhone/iPad that can process video (let's consider only local files stored on the device, for simplicity) and play it back in real time. For example, you could choose any movie, pick an "Old movie" filter, and watch it as though it were playing on an old tube TV.
In order to make this idea real I need to implement two key features:
1) Grab frames and the audio stream from a movie file and get access to the separate frames (I'm interested in the raw pixel buffer, in BGRA or at least YUV color space).
2) Display the processed frames somehow. I know it's possible to render a processed frame to an OpenGL texture, but I would like a more capable component with playback controls. Is there any media player class that supports playing custom image and audio buffers?
The processing function is done and it's fast (it takes less than the duration of one frame).
I'm not asking for ready solution, but any tips are welcome!
Answer
Frame grabbing.
It seems the only way to grab video and audio frames is to use the AVAssetReader class. Although it's not recommended for real-time grabbing, it does the job. In my tests on an iPad 2, grabbing a single frame takes about 7-8 ms. Seeking within the video is tricky, though. Maybe someone can point to a more efficient solution?
Video playback. I've done this with a custom view and GLES, rendering a rectangular texture with a video frame inside it. As far as I know it's the fastest way to draw bitmaps.
Problems
You need to play the audio samples manually.
AVAssetReader grabbing should be synchronized with the movie's frame rate. Otherwise the movie will play too fast or too slow.
AVAssetReader allows only continuous frame access; you can't seek forward and backward. The only workaround I've found is to throw away the old reader and create a new one with a trimmed time range, as in the sketch below.
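For example, that workaround might look like this in Swift (a sketch, assuming the `asset` and `track` from the grabbing step are already in hand):

```swift
import AVFoundation

// Sketch of the "recreate the reader" workaround for seeking.
func makeReader(for asset: AVAsset, track: AVAssetTrack,
                startingAt time: CMTime) throws -> (AVAssetReader, AVAssetReaderTrackOutput) {
    let output = AVAssetReaderTrackOutput(
        track: track,
        outputSettings: [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA])
    let reader = try AVAssetReader(asset: asset)
    // The reader only decodes forward, so "seeking" means trimming everything before the target time.
    reader.timeRange = CMTimeRange(start: time, duration: .positiveInfinity)
    reader.add(output)
    reader.startReading()
    return (reader, output)
}
```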
This is how you would load a video from the camera roll..
This is a way to start processing video. Brad Larson did a great job..
How to grab video frames..
You can use AVPlayer + AVPlayerItem; they give you a chance to apply a filter to the displayed image.
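The answer doesn't spell out how; one way to get at the displayed frames with AVPlayer/AVPlayerItem is an AVPlayerItemVideoOutput polled from a CADisplayLink. A sketch of that reading, with a Core Image sepia filter standing in for whatever effect you actually want:

```swift
import AVFoundation
import CoreImage
import QuartzCore

// Sketch: pull frames from a playing AVPlayerItem and filter them on the way to the screen.
final class FilteredPlayer {
    let player: AVPlayer
    private let videoOutput: AVPlayerItemVideoOutput

    init(videoURL: URL) {
        let item = AVPlayerItem(url: videoURL)
        videoOutput = AVPlayerItemVideoOutput(pixelBufferAttributes:
            [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA])
        item.add(videoOutput)
        player = AVPlayer(playerItem: item)
        player.play()
    }

    // Call this from a CADisplayLink callback.
    func displayLinkFired() {
        let itemTime = videoOutput.itemTime(forHostTime: CACurrentMediaTime())
        guard videoOutput.hasNewPixelBuffer(forItemTime: itemTime),
              let pixelBuffer = videoOutput.copyPixelBuffer(forItemTime: itemTime,
                                                            itemTimeForDisplay: nil) else { return }
        let filtered = CIImage(cvPixelBuffer: pixelBuffer)
            .applyingFilter("CISepiaTone", parameters: [kCIInputIntensityKey: 0.8])
        // Render `filtered` with a CIContext into your view, or hand the pixel buffer to GL.
        _ = filtered
    }
}
```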
I have used the method from iOS4: how do I use video file as an OpenGL texture? to get video frames rendering in OpenGL successfully.
This method, however, seems to fall down when you want to scrub (jump to a certain point in the playback), as it only supplies video frames sequentially.
Does anyone know a way this behaviour can successfully be achieved?
One easy way to implement this is to export the video to a series of frames, store each frame as a PNG, and then "scrub" by seeking to the PNG at a specific offset. That gives you random access into the image stream, at the cost of decoding the entire video first and holding all the data on disk. It also involves decoding each frame as it is accessed, which eats up CPU, but modern iPhones and iPads can handle it as long as you are not doing too much else.
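As a Swift sketch of that dump-and-seek idea (the helper names are made up, and the frame index simply comes from the frame rate you exported at):

```swift
import AVFoundation
import CoreImage
import ImageIO
import MobileCoreServices
import UIKit

// Sketch: dump decoded frames to PNGs, then "scrub" by loading the one nearest a time offset.
let ciContext = CIContext()

// Write one decoded frame (e.g. from the AVAssetReader loop) to disk as a PNG.
func savePNG(_ pixelBuffer: CVPixelBuffer, frameIndex: Int, into directory: URL) {
    let image = CIImage(cvPixelBuffer: pixelBuffer)
    guard let cgImage = ciContext.createCGImage(image, from: image.extent) else { return }
    let url = directory.appendingPathComponent(String(format: "frame_%06d.png", frameIndex))
    guard let dest = CGImageDestinationCreateWithURL(url as CFURL, kUTTypePNG, 1, nil) else { return }
    CGImageDestinationAddImage(dest, cgImage, nil)
    CGImageDestinationFinalize(dest)
}

// "Scrubbing" is then just loading the PNG nearest the requested time.
func frame(at seconds: Double, frameRate: Double, from directory: URL) -> UIImage? {
    let index = Int(seconds * frameRate)
    let url = directory.appendingPathComponent(String(format: "frame_%06d.png", index))
    return UIImage(contentsOfFile: url.path)
}
```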