I'm trying to decode a video in real time (30 fps) and display/modify it with OpenGL. On an iPod touch, if I decode a video that I took with the camera, decoding a single frame can take over 1 s, while 30 fps leaves at most about 0.033 s per frame, so the result is not very good.
Is it possible to achieve this with AVAssetReader? For example, Instagram applies filters (GLSL shaders, I think) to a video in real time, and you can even navigate through the video. Instagram works fine on the iPod touch.
The decoding code can be found in the answer here:
Best way to access all movie frames in iOS
And more specifically here: Hardware accelerated h.264 decoding to texture, overlay or similar in iOS
Thank you in advance
Given the very limited information you provided, I have to assume that your video is compressed in a YUV format and that you configured AVAssetReader with a different output format such as kCVPixelFormatType_32BGRA, which forces iOS to convert the colour space for every frame; that conversion is what makes it feel slow. I suggest not forcing a conversion and staying close to the original pixel format.
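For illustration, here is a minimal sketch (Objective-C) of that idea: ask AVAssetReader for a biplanar YUV output, which is close to what the hardware decoder produces, instead of forcing a per-frame conversion to BGRA. The names asset and videoTrack are assumed to already exist, and error handling is omitted.

    #import <AVFoundation/AVFoundation.h>

    NSError *error = nil;
    AVAssetReader *reader = [[AVAssetReader alloc] initWithAsset:asset error:&error];

    // Requesting kCVPixelFormatType_32BGRA here would add a colour conversion per frame;
    // a 4:2:0 YUV format stays close to the decoder's native output.
    NSDictionary *settings = @{
        (id)kCVPixelBufferPixelFormatTypeKey :
            @(kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange)
    };
    AVAssetReaderTrackOutput *output =
        [[AVAssetReaderTrackOutput alloc] initWithTrack:videoTrack outputSettings:settings];
    [reader addOutput:output];
    [reader startReading];

    CMSampleBufferRef sample = NULL;
    while ((sample = [output copyNextSampleBuffer])) {
        CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sample);
        // ... upload pixelBuffer to a texture / process it ...
        CFRelease(sample);
    }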
Actually, my app was simply doing too much work on the CPU: I had another process analyzing images. Once I removed it, decoding was really fast.
I had no luck getting H.264 videos with RGBA pixels to work on iOS (tested on iOS 10.2). Is it possible? The Apple docs don't say much about it: https://developer.apple.com/library/content/documentation/Miscellaneous/Conceptual/iPhoneOSTechOverview/MediaLayer/MediaLayer.html
I don't have much interesting code to share; the problem is simply that AVPlayer doesn't display videos with RGBA pixels.
It's not possible to decode H.264 into exactly 'RGBA', however:
an AVPlayerItemVideoOutput can be set to kCVPixelFormatType_32BGRA using kCVPixelBufferPixelFormatTypeKey,
and
VTDecompressionSessionCreate also accepts kCVPixelFormatType_32BGRA.
Then, when rendering, I swizzle the pixels like this: gl_FragColor.bgra = texture2D(SamplerRGB, v_textureCoordinate);
So the answer is yes, but you have to do the rendering and the swizzle yourself.
/A
(Edit: added links to code)
Here is a really great sample project from Apple that will get you going: Real-time Video Processing Using AVPlayerItemVideoOutput.
Just change it from YUV to RGB and bypass the 'colorConversionMatrix' part of the shader.
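For reference, a rough sketch (Objective-C) of the BGRA variant described above; playerItem is assumed to be your AVPlayerItem, and error handling is omitted.

    #import <AVFoundation/AVFoundation.h>
    #import <QuartzCore/QuartzCore.h>

    NSDictionary *attributes = @{
        (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA)
    };
    AVPlayerItemVideoOutput *videoOutput =
        [[AVPlayerItemVideoOutput alloc] initWithPixelBufferAttributes:attributes];
    [playerItem addOutput:videoOutput];

    // Later, in your CADisplayLink callback:
    CMTime itemTime = [videoOutput itemTimeForHostTime:CACurrentMediaTime()];
    if ([videoOutput hasNewPixelBufferForItemTime:itemTime]) {
        CVPixelBufferRef pixelBuffer =
            [videoOutput copyPixelBufferForItemTime:itemTime itemTimeForDisplay:NULL];
        // Upload to a BGRA texture and swizzle in the fragment shader as shown above.
        if (pixelBuffer) CVBufferRelease(pixelBuffer);
    }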
Good luck!
I am using scaleTimeRange:toDuration: to produce a fast-motion effect of up to 10x the original video speed. But I noticed that videos start to stutter when played through an AVPlayer at 10x.
I also noticed that the same composition plays smoothly in QuickTime on OS X.
Another question states that the reason for this is a hardware limitation, but I want to know if there is a way around it so that the fast-motion effect plays smoothly over the length of the entire video.
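For context, a minimal sketch (Objective-C) of the speed-up I'm describing, assuming composition is an AVMutableComposition that already contains the clip:

    #import <AVFoundation/AVFoundation.h>

    CMTime originalDuration = composition.duration;
    CMTimeRange fullRange = CMTimeRangeMake(kCMTimeZero, originalDuration);

    // Compress the whole time range to 1/10th of its duration -> 10x playback speed.
    CMTime newDuration = CMTimeMultiplyByRatio(originalDuration, 1, 10);
    [composition scaleTimeRange:fullRange toDuration:newDuration];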
Video Specs
Format: H.264, 1280x544
FPS: 25
Data Size: 26 MB
Data Rate: 1.17 Mbit/s
I have a feeling that playing your videos at 10x using scaleTimeRange:toDuration: simply has the effect of multiplying your data rate by 10, bringing it up to nearly 12 Mbit/s, which OS X machines can handle but iOS devices cannot.
In other words, you're creating a video that needs to play back at 250 frames per second (25 fps x 10), which is pushing AVPlayer too hard.
If I didn't know about your other question, I would have said that the solution is to export your AVComposition using AVAssetExportSession, which should result in your high-FPS video being downsampled to an easier-to-handle 30 fps, and then to play that with AVPlayer.
If AVAssetExportSession isn't working, you could try applying the speedup effect yourself, by reading the frames from the source video using AVAssetReader and writing every tenth frame to the output file using AVAssetWriter (don't forget to set the correct presentation timestamps).
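A rough sketch (Objective-C) of that manual approach: keep every tenth frame and rewrite its presentation timestamp so the output stays at a normal frame rate. readerOutput (an AVAssetReaderTrackOutput) and writerInput (an AVAssetWriterInput) are assumed to be configured and started already; audio and error handling are omitted.

    #import <AVFoundation/AVFoundation.h>

    const int32_t kSpeedFactor = 10;
    int64_t frameIndex = 0;
    CMSampleBufferRef sample = NULL;

    while ((sample = [readerOutput copyNextSampleBuffer])) {
        if (frameIndex % kSpeedFactor == 0) {
            // Scale the timestamp down by the speed factor so kept frames stay evenly spaced.
            CMTime pts = CMSampleBufferGetPresentationTimeStamp(sample);
            CMSampleTimingInfo timing = {
                kCMTimeInvalid,                                // duration (let the writer infer it)
                CMTimeMultiplyByRatio(pts, 1, kSpeedFactor),   // presentation time stamp
                kCMTimeInvalid                                 // decode time stamp
            };
            CMSampleBufferRef retimed = NULL;
            CMSampleBufferCreateCopyWithNewTiming(kCFAllocatorDefault, sample, 1, &timing, &retimed);
            if (retimed) {
                while (!writerInput.readyForMoreMediaData) {
                    [NSThread sleepForTimeInterval:0.01]; // naive back-pressure
                }
                [writerInput appendSampleBuffer:retimed];
                CFRelease(retimed);
            }
        }
        CFRelease(sample);
        frameIndex++;
    }
    [writerInput markAsFinished];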
I have to extract all frames from a video file and then save them to files.
I tried to use AVAssetImageGenerator, but it's very slow: it takes 1-3 s per frame (for a sample 1280x720 MPEG-4 video), not counting the time to save to file.
Is there any way to make it much faster?
OpenGL, the GPU, (...)?
I would be very grateful if someone could point me in the right direction.
AVAssetImageGenerator is a random-access (seeking) interface, and seeking takes time, so one optimisation would be to use an AVAssetReader, which will quickly and sequentially vend you frames. You can also choose to work in a YUV format, which will give you smaller frames and (I think) faster decoding.
However, those raw frames are enormous: 1280 px * 720 px * 4 bytes/pixel (if in RGBA) is about 3.6 MB each. You're going to need some pretty serious compression if you want to keep them all (MPEG-4 @ 720p comes to mind :).
So what are you trying to achieve?
Are you sure you want to fill up your users' disks at a rate of 108 MB/s (at 30 fps) or 864 MB/s (at 240 fps)?
I'm looking for tips on developing an application for iPhone/iPad that can process video (let's consider only local files stored on the device, for simplicity) and play it in real time. For example, you could pick any movie, apply an "Old movie" filter, and see it as if it were playing on an old tube TV.
To make this idea real I need to implement two key features:
1) Grab the frames and audio stream from a movie file and get access to individual frames (I'm interested in raw pixel buffers in BGRA or at least a YUV color space).
2) Display the processed frames somehow. I know it's possible to render a processed frame to an OpenGL texture, but I would like a more powerful component with playback controls. Is there any media-player class that supports playing custom image and audio buffers?
The processing function is already done and it's fast (less than the duration of one frame).
I'm not asking for a ready-made solution, but any tips are welcome!
Answer
Frame grabbing.
It seems the only way to grab video and audio frames is to use the AVAssetReader class. Although it isn't recommended for real-time grabbing, it does the job; in my tests on an iPad 2, grabbing a single frame takes about 7-8 ms. Seeking across the video is tricky. Maybe someone can point to a more efficient solution?
Video playback. I've done this with a custom view and GLES, rendering a rectangular texture with the video frame inside it. As far as I know, it's the fastest way to draw bitmaps.
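In case it helps, here is a minimal sketch (Objective-C) of one way to do that upload without a CPU copy, using CVOpenGLESTextureCache; this may differ from the original approach, and context (the EAGLContext) and pixelBuffer (a BGRA frame) are only illustrative names.

    #import <CoreVideo/CoreVideo.h>
    #import <OpenGLES/ES2/gl.h>
    #import <OpenGLES/ES2/glext.h>

    CVOpenGLESTextureCacheRef textureCache = NULL;
    CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL, context, NULL, &textureCache);

    CVOpenGLESTextureRef texture = NULL;
    CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
                                                 textureCache,
                                                 pixelBuffer,
                                                 NULL,
                                                 GL_TEXTURE_2D,
                                                 GL_RGBA,
                                                 (GLsizei)CVPixelBufferGetWidth(pixelBuffer),
                                                 (GLsizei)CVPixelBufferGetHeight(pixelBuffer),
                                                 GL_BGRA,
                                                 GL_UNSIGNED_BYTE,
                                                 0,
                                                 &texture);

    glBindTexture(CVOpenGLESTextureGetTarget(texture), CVOpenGLESTextureGetName(texture));
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    // ... draw the textured quad, then CFRelease(texture) and flush the cache periodically.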
Problems
You need to play the audio samples manually.
AVAssetReader grabbing has to be synchronized with the movie's frame rate, otherwise the movie will play too fast or too slow.
AVAssetReader only allows sequential frame access; you can't seek forward and backward. The only solution I've found is to delete the old reader and create a new one with a trimmed time range (a sketch follows below).
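A sketch (Objective-C) of that "recreate the reader" trick for seeking: each seek builds a fresh AVAssetReader whose timeRange starts at the target time. self.asset, self.videoTrack and self.reader are illustrative properties; error handling is omitted.

    #import <AVFoundation/AVFoundation.h>

    - (AVAssetReaderTrackOutput *)readerOutputStartingAt:(CMTime)seekTime
    {
        NSError *error = nil;
        AVAssetReader *reader = [[AVAssetReader alloc] initWithAsset:self.asset error:&error];

        AVAssetReaderTrackOutput *output =
            [[AVAssetReaderTrackOutput alloc] initWithTrack:self.videoTrack outputSettings:nil];
        [reader addOutput:output];

        // Trim the time range so decoding starts (approximately) at the seek point.
        reader.timeRange = CMTimeRangeMake(seekTime, kCMTimePositiveInfinity);
        [reader startReading];

        self.reader = reader; // keep the reader alive while the output is in use
        return output;
    }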
This is how you would load a video from the camera roll.
This is a way to start processing video. Brad Larson did a great job.
How to grab video frames.
You can use AVPlayer + AVPlayerItem; they give you a chance to apply a filter to the displayed image.
I have used the method from the following question to get video frames rendering in OpenGL successfully: iOS4: how do I use video file as an OpenGL texture?
However, this method seems to fall down when you want to scrub (jump to a certain point in the playback), as it only supplies video frames sequentially.
Does anyone know a way this behaviour can successfully be achieved?
One easy way to implement this is to export the video to a series of frames, store each frame as a PNG, and then "scrub" by seeking to the PNG at a specific offset. That gives you random access into the image stream, at the cost of decoding the entire video up front and holding all the data on disk. It also involves decoding each PNG as it is accessed, which eats CPU, but modern iPhones and iPads can handle it as long as you aren't doing too much else.
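A rough sketch (Objective-C) of that "decode once, dump PNGs" idea, assuming output is an AVAssetReaderTrackOutput configured for kCVPixelFormatType_32BGRA on a reader that has already started reading, and framesDirectory is a writable directory path (both names are illustrative).

    #import <AVFoundation/AVFoundation.h>
    #import <CoreImage/CoreImage.h>
    #import <UIKit/UIKit.h>

    CIContext *ciContext = [CIContext contextWithOptions:nil];
    NSUInteger frameIndex = 0;
    CMSampleBufferRef sample = NULL;

    while ((sample = [output copyNextSampleBuffer])) {
        CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sample);
        CIImage *ciImage = [CIImage imageWithCVPixelBuffer:pixelBuffer];
        CGImageRef cgImage = [ciContext createCGImage:ciImage fromRect:ciImage.extent];

        NSString *path = [framesDirectory stringByAppendingPathComponent:
            [NSString stringWithFormat:@"frame_%06lu.png", (unsigned long)frameIndex]];
        NSData *png = UIImagePNGRepresentation([UIImage imageWithCGImage:cgImage]);
        [png writeToFile:path atomically:YES];

        CGImageRelease(cgImage);
        CFRelease(sample);
        frameIndex++;
    }
    // Scrubbing then becomes: load frame_<index>.png where index = targetTime * frameRate.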