Creating a Motion JPEG frame by frame with variable frame-rate - delphi

I'm analyzing a number of solutions to the problem I have at hand: I'm receiving images from a device and I need to make a video file out of them. However, the images arrive with a somewhat random delay between them, and I'm looking for the best way to encode this. I have to create the video frame by frame, and after each frame I must have a new video file containing it, replacing the old video file.
I was thinking of fixing the frame rate slightly "faster" than the minimum delay I might get and simply repeating the last frame until a new one arrives, but I suspect this solution is not optimal.
Also, this project is written in Delphi (no, I cannot change that) and I need a way to turn these frames into a video file after each frame. I was thinking about using mencoder as an external tool, but I'm reading the documentation and still haven't found an option to insert a frame into an already encoded Motion JPEG video file. As my images come in as JPEG, I thought it would be reasonable to use Motion JPEG, but even that isn't certain yet. Also, I don't know whether mencoder can be used as a library; it would help a lot if it could.
What would you suggest?

There are some media container formats that support variable frame rate, but I don't think MJPEG is a good choice because of the storage overhead. I believe the best way would be to transcode the JPEG frames to MP4 format using both I-frames and P-frames.
You can use the FFmpeg Delphi/Free Pascal header files for the transcoding.
Edit:
The most up-to-date version of the FFmpeg headers can be found in the GLScene repository on SourceForge.net. To view the files, you can use this link.

Related

Any way to have a real-time preview of AVMutableComposition? If not, how does Final Cut Pro do it?

Is it possible to have a real-time preview of an AVMutableComposition which has some layer instructions applied to its assets?
The only class I found that connects AVMutableComposition with AVVideoComposition (which holds the instructions) is AVAssetExportSession. Does that mean I must export it first to play a preview?
If so, how do apps like Final Cut Pro provide a real-time preview when I edit part of the video? Do they cut the whole video into multiple chunks, export only what has changed, and keep a cache of everything else?
This sounds like a difficult problem - is there any library that would help with cutting the video into small chunks for export and keeping an eye on cache invalidation?
Cheers,
M.
I don't know if this is still relevant, but you can always extract each frame from the video, manipulate it accordingly, then render it to the screen.
If it's coming from an AVCaptureSession you can get the CMSampleBuffers from the callbacks; if it's a file, I think AVAssetReader is your best bet. Then you can use either Core Image or Metal to manipulate the frames and render them in real time.
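A rough sketch of the file-based path in Swift, assuming a local movie URL; CISepiaTone is only a stand-in for whatever manipulation you need, and error handling is kept minimal:

```swift
import AVFoundation
import CoreImage

// Sketch: pull decoded frames from a local file with AVAssetReader and run
// each one through a Core Image filter before handing it to your renderer.
func processFrames(of url: URL) throws {
    let asset = AVAsset(url: url)
    guard let track = asset.tracks(withMediaType: .video).first else { return }

    let reader = try AVAssetReader(asset: asset)
    let output = AVAssetReaderTrackOutput(
        track: track,
        outputSettings: [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA])
    reader.add(output)
    reader.startReading()

    let context = CIContext()                    // reuse one context; creating one per frame is slow
    let filter = CIFilter(name: "CISepiaTone")!  // placeholder for your own manipulation

    while let sample = output.copyNextSampleBuffer(),
          let pixelBuffer = CMSampleBufferGetImageBuffer(sample) {
        filter.setValue(CIImage(cvPixelBuffer: pixelBuffer), forKey: kCIInputImageKey)
        if let result = filter.outputImage,
           let cgImage = context.createCGImage(result, from: result.extent) {
            // Hand `cgImage` (or the CIImage itself, e.g. to a Metal-backed view) to your renderer here.
            _ = cgImage
        }
    }
}
```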
There is no real-time preview with AVMutableComposition. They may create a time slot for every change and manage its visibility when you move the slider below.

What's the best way to composite frame-based animated stickers over recorded video?

We want to allow the user to place animated "stickers" over video that they record in the app and are considering different ways to composite these stickers.
1) Create a video in code from the frame-based animated stickers (which can be rotated and have translations applied to them) using AVAssetWriter. The problem is that AVAssetWriter only writes to a file and doesn't keep transparency, which would prevent us from overlaying it on the video using AVMutableComposition.
2) Create .mov files ahead of time for our frame-based stickers and composite them using AVMutableComposition and layer instructions with transformations. The problem with this is that there are no tools for easily converting our PNG-based frames to a .mov while maintaining an alpha channel, and we'd have to write our own.
3) Create separate CALayers for each frame in the sticker animations. This could potentially create a very large number of layers, depending on the frame rate of the video (see the sketch below).
Or any better ideas?
Thanks.
I would suggest that you take a look at my blog post on this specific subject. Basically, the example shows how RGBA video data can be loaded from a file attached to the app resources. The data is imported from a .mov that contains Animation-codec RGBA data on the desktop. A conversion step is required to get the data from the desktop into iOS, since plain H.264 cannot support an alpha channel directly (as you have discovered). Note that older hardware may have issues decoding an H.264 user-recorded video and then another one on top of it, so this approach of using the CPU instead of the H.264 hardware for the sticker is actually better.
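For completeness, option 3 from the question is usually expressed as a single sticker CALayer driven by a keyframe animation over its contents and composited at export time with AVVideoCompositionCoreAnimationTool, rather than one CALayer per video frame. A hedged sketch, assuming asset is the recorded video and stickerFrames is an array of PNG-backed CGImages:

```swift
import AVFoundation
import QuartzCore

// Sketch: composite an animated sticker layer over a recorded video at export time.
// `asset` and `stickerFrames` are assumed to be provided elsewhere.
func makeComposition(for asset: AVAsset, stickerFrames: [CGImage]) -> AVMutableVideoComposition? {
    guard let track = asset.tracks(withMediaType: .video).first else { return nil }
    let size = track.naturalSize

    // Layer tree: parent -> (video layer, sticker layer)
    let videoLayer = CALayer()
    let parentLayer = CALayer()
    videoLayer.frame = CGRect(origin: .zero, size: size)
    parentLayer.frame = videoLayer.frame
    parentLayer.addSublayer(videoLayer)

    let stickerLayer = CALayer()
    stickerLayer.frame = CGRect(x: 40, y: 40, width: 200, height: 200)

    // Frame-based animation: cycle through the PNG frames on the video's timeline.
    let animation = CAKeyframeAnimation(keyPath: "contents")
    animation.values = stickerFrames as [Any]
    animation.duration = 1.0                     // one pass through the frames per second (assumption)
    animation.repeatCount = .infinity
    animation.beginTime = AVCoreAnimationBeginTimeAtZero
    animation.isRemovedOnCompletion = false
    stickerLayer.add(animation, forKey: "sticker")
    parentLayer.addSublayer(stickerLayer)

    let videoComposition = AVMutableVideoComposition(propertiesOf: asset)
    videoComposition.animationTool = AVVideoCompositionCoreAnimationTool(
        postProcessingAsVideoLayer: videoLayer, in: parentLayer)
    return videoComposition
}
```

Note that the animation tool only takes effect during export (for example with AVAssetExportSession); for a live preview the same layer tree would be paired with AVSynchronizedLayer instead.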

How to control the number of buffered frames of VLCkit?

I have just built the VLC library for iOS at VLCKit and am using it to display a video stream. I need it to display in real time with the lowest possible latency, so I tried to find a way to reduce the number of buffered frames (or something similar) before they are displayed in a UIView.
I started looking into the MobileVLCKit module, but it seems no property allows me to control that.
I am wondering whether the change can be made in MobileVLCKit itself or in the underlying VLC library.
If so, will I need to modify the library and rebuild it? Which parameter would I need to change?
After spending a fair amount of time looking into the VLC library without success, I tried streaming with the RTSP protocol instead of RTMP, and the real-time behaviour of the video improved.
I also found a workaround: setting a timer that forces the player to move forward through the buffered frames. It may cause stuttering, but it keeps the video closer to real time.
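For completeness, if you do want to try tuning the buffering itself rather than working around it, the caching options passed to VLCMedia are the usual starting point. This is only a sketch: the option names mirror VLC's command-line flags, the 300 ms value is an assumption you would have to tune, and setting it too low will cause dropped frames.

```swift
import MobileVLCKit

// Sketch: ask VLC to buffer less before it starts rendering a stream.
let media = VLCMedia(url: URL(string: "rtsp://example.com/stream")!)  // placeholder URL
media.addOption(":network-caching=300")   // network streams (milliseconds)
media.addOption(":live-caching=300")      // live/capture inputs (milliseconds)

let player = VLCMediaPlayer()
player.media = media
player.drawable = videoView               // assumed: the UIView you already render into
player.play()
```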

Load video from iPhone library, modify frame and play it in real-time

I'm looking for tips on developing an application for iPhone/iPad that can process video (let's consider only local files stored on the device, for simplicity) and play it in real time. For example, you could choose any movie, pick an "Old movie" filter, and see it as if it were playing on an old tube TV.
In order to make this idea real I need to implement two key features:
1) Grab the frames and audio stream from a movie file and get access to individual frames (I'm interested in the raw pixel buffer in BGRA or at least a YUV color space).
2) Display the processed frames somehow. I know it's possible to render a processed frame to an OpenGL texture, but I would like a more capable component with playback controls. Is there any media player class that supports playing custom image and audio buffers?
The processing function is done and it's fast (it takes less than the duration of one frame).
I'm not asking for a ready-made solution, but any tips are welcome!
Answer
Frame grabbing.
It seems the only way to grab video and audio frames is to use the AVAssetReader class. Although it's not recommended for real-time grabbing, it does the job. In my tests on an iPad 2, grabbing a single frame takes about 7-8 ms. Seeking across the video is tricky, though. Maybe someone can point to a more efficient solution?
Video playback. I've done this with a custom view and GLES to render a rectangular texture with a video frame inside it. As far as I know it's the fastest way to draw bitmaps.
Problems
Sound samples need to be played manually.
AVAssetReader grabbing should be synchronized with the movie's frame rate, otherwise the movie will play too fast or too slow.
AVAssetReader allows only sequential frame access; you can't seek forward and backward. The only solution I've found is to delete the old reader and create a new one with a trimmed time range, as sketched below.
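On that last point, here is a sketch of the "new reader with a trimmed time range" workaround, assuming you keep the asset around so a fresh reader can be built whenever a seek is needed:

```swift
import AVFoundation

// Sketch: AVAssetReader can't seek, so to jump to `startTime` we throw the
// old reader away and build a new one whose timeRange starts there.
func makeReader(for asset: AVAsset, seekingTo startTime: CMTime) throws -> (AVAssetReader, AVAssetReaderTrackOutput)? {
    guard let track = asset.tracks(withMediaType: .video).first else { return nil }

    let reader = try AVAssetReader(asset: asset)
    let output = AVAssetReaderTrackOutput(
        track: track,
        outputSettings: [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA])
    reader.add(output)

    // Read from the seek point to the end of the movie.
    reader.timeRange = CMTimeRange(start: startTime, duration: .positiveInfinity)
    reader.startReading()
    return (reader, output)
}
```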
This is how you would load a video from the camera roll..
This is a way to start processing video. Brad Larson did a great job..
How to grab video frames..
You can use AVPlayer + AVPlayerItem; it gives you a chance to apply a filter to the displayed image.
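If that route suits you, on iOS 9 and later a Core Image filter can be attached to the player item directly so AVPlayer applies it to every frame during playback; a minimal sketch, with CISepiaTone standing in for the actual "old movie" processing:

```swift
import AVFoundation
import CoreImage

// Sketch (iOS 9+): play a local movie through AVPlayer while applying a
// Core Image filter to every frame in real time.
func filteredPlayer(for url: URL) -> AVPlayer {
    let asset = AVAsset(url: url)
    let filter = CIFilter(name: "CISepiaTone")!   // placeholder for your own look

    let composition = AVVideoComposition(asset: asset) { request in
        filter.setValue(request.sourceImage, forKey: kCIInputImageKey)
        let output = filter.outputImage ?? request.sourceImage
        request.finish(with: output, context: nil)
    }

    let item = AVPlayerItem(asset: asset)
    item.videoComposition = composition
    return AVPlayer(playerItem: item)   // present with AVPlayerViewController or an AVPlayerLayer
}
```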

iOS AVPlayer: How to slow down a 30fps video to 1fps

I have a 30 fps QuickTime .mov of still images I created with AVAssetWriter (it's only about 10 frames long). I would like the user to be able to slow it down to about 1 fps using a UISlider, but when I adjust the AVPlayer .rate property from 1 down to 0, it doesn't get anywhere near 1 fps; it just stops playback (because a 0 rate is effectively stopping/pausing it, which makes sense). But how can I slow the player down to about 1 fps? I think I'd need to do some math to calculate the actual rate, but that's where I'm stuck. Would it end up being something like 0.000000000000001?
Thanks!
If this were a requirement of mine, I would approach it as follows (also suggested by Inafziger in the comments): use AVAssetReader and roll my own viewer for the images. This would give you precise control using a timer, as stated in your comments. Make sure you reuse a preallocated image memory area (you can probably get away with space for a single image). I would probably take a pull approach like Core Audio: when you need an image, pull it from an image buffer manager class which calls AVAssetReader's read function. This way you can have N buffers that will always be available. That may be a little overkill, though; I believe AVAssetReader pre-decodes some amount of the movie upon initialization, which is why I say you can more than likely get away with a single buffer to read image data into.
Regarding your comment about memory issues: I believe there are some functions in AVAssetReader and its associated classes that follow the create rule, so the objects they return are owned by the caller and must be released.
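For the arithmetic in the original question: playing a 30 fps movie at roughly 1 fps corresponds to rate = 1/30 ≈ 0.033, not a near-zero value, though AVPlayer is not guaranteed to honour very slow rates smoothly. Below is a hedged sketch of the pull approach described above, with a Timer driving one AVAssetReader frame per tick into a UIImageView; the class and property names are placeholders:

```swift
import AVFoundation
import CoreImage
import UIKit

// Sketch: show one decoded frame per second by pulling from AVAssetReader
// on a timer, instead of fighting AVPlayer's rate property.
final class SlowFramePlayer {
    private let reader: AVAssetReader
    private let output: AVAssetReaderTrackOutput
    private let context = CIContext()
    private var timer: Timer?
    let frameView = UIImageView()          // placeholder: add this to your view hierarchy

    init?(url: URL) {
        let asset = AVAsset(url: url)
        guard let track = asset.tracks(withMediaType: .video).first,
              let reader = try? AVAssetReader(asset: asset) else { return nil }
        let output = AVAssetReaderTrackOutput(
            track: track,
            outputSettings: [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA])
        reader.add(output)
        reader.startReading()
        self.reader = reader
        self.output = output
    }

    func start(framesPerSecond: Double = 1) {
        timer = Timer.scheduledTimer(withTimeInterval: 1 / framesPerSecond, repeats: true) { [weak self] _ in
            self?.showNextFrame()
        }
    }

    private func showNextFrame() {
        guard let sample = output.copyNextSampleBuffer(),
              let pixelBuffer = CMSampleBufferGetImageBuffer(sample) else {
            timer?.invalidate()            // end of the movie
            return
        }
        let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
        if let cgImage = context.createCGImage(ciImage, from: ciImage.extent) {
            frameView.image = UIImage(cgImage: cgImage)
        }
    }
}
```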
