Recording video from camera with animated UIView overlay - ios

I currently have a video camera set up with an AVCaptureVideoDataOutput whose sample buffer delegate is implemented as such:
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    NSArray *detectedFaces = [self detectFacesFromSampleBuffer:sampleBuffer];
    [self animateViewsForFaces:detectedFaces];
}
The sample buffer is processed and, if any faces are detected, their bounds are shown as views over an AVCaptureVideoPreviewLayer that's displaying the live video output (rectangles over the faces). The views are animated so that they move smoothly between face detections. Is it possible to somehow record what's shown in the preview layer and merge it with the animated UIViews that overlay it, the end result being a video file?

Generally, you can take a low-level approach: build the video stream yourself and write it to a file. I'm not an expert on video formats and codecs, but the approach is:
— Set up a CADisplayLink to get a callback every time the screen redraws. Setting its frame interval to 2 is probably a good idea, as it reduces the target video frame rate to ~30 fps.
— Each time the screen redraws, take a snapshot of the preview layer and of the overlay.
— Process the collected images: composite the two images of each frame into one, then build a video stream from the sequence of merged frames (a sketch follows below). iOS has built-in tools to do this in a more or less straightforward way.
Of course, resolution and quality are constrained by the layers' parameters. If you need the raw video stream from the camera, you should capture that stream and then draw your overlay data directly into the video frames you captured.
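A minimal sketch of that approach in Swift, assuming a container view (previewContainerView, a placeholder name) holds both the preview layer and the animated overlay views, and using AVAssetWriter to turn the snapshots into a file. Be aware that view-snapshot APIs don't reliably capture live AVCaptureVideoPreviewLayer content; if the camera feed comes out black, fall back to drawing the overlays into the captured camera frames, as suggested above.

import UIKit
import AVFoundation

final class OverlayRecorder {
    private var displayLink: CADisplayLink?
    private var writer: AVAssetWriter?
    private var writerInput: AVAssetWriterInput?
    private var adaptor: AVAssetWriterInputPixelBufferAdaptor?
    private var startTime: CFTimeInterval = 0

    // Container holding both the AVCaptureVideoPreviewLayer and the overlay views.
    private let previewContainerView: UIView
    private let outputURL: URL

    init(previewContainerView: UIView, outputURL: URL) {
        self.previewContainerView = previewContainerView
        self.outputURL = outputURL
    }

    func start() throws {
        let size = previewContainerView.bounds.size
        let writer = try AVAssetWriter(outputURL: outputURL, fileType: .mp4)
        let input = AVAssetWriterInput(mediaType: .video, outputSettings: [
            AVVideoCodecKey: AVVideoCodecType.h264,
            AVVideoWidthKey: Int(size.width),
            AVVideoHeightKey: Int(size.height)])
        input.expectsMediaDataInRealTime = true
        let adaptor = AVAssetWriterInputPixelBufferAdaptor(
            assetWriterInput: input,
            sourcePixelBufferAttributes: [
                kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32ARGB,
                kCVPixelBufferWidthKey as String: Int(size.width),
                kCVPixelBufferHeightKey as String: Int(size.height)])
        writer.add(input)
        writer.startWriting()
        writer.startSession(atSourceTime: .zero)
        self.writer = writer; self.writerInput = input; self.adaptor = adaptor

        startTime = CACurrentMediaTime()
        let link = CADisplayLink(target: self, selector: #selector(captureFrame))
        link.preferredFramesPerSecond = 30   // the "frame interval of 2" idea, in modern API terms
        link.add(to: .main, forMode: .common)
        displayLink = link
    }

    @objc private func captureFrame() {
        guard let adaptor = adaptor, let input = writerInput,
              input.isReadyForMoreMediaData,
              let pool = adaptor.pixelBufferPool else { return }

        // Snapshot the container: renders the preview layer and the overlay views together.
        let bounds = previewContainerView.bounds
        let image = UIGraphicsImageRenderer(bounds: bounds).image { _ in
            _ = previewContainerView.drawHierarchy(in: bounds, afterScreenUpdates: false)
        }

        var pixelBuffer: CVPixelBuffer?
        CVPixelBufferPoolCreatePixelBuffer(nil, pool, &pixelBuffer)
        guard let buffer = pixelBuffer, let cgImage = image.cgImage else { return }

        // Draw the snapshot into a pixel buffer from the adaptor's pool.
        CVPixelBufferLockBaseAddress(buffer, [])
        if let context = CGContext(data: CVPixelBufferGetBaseAddress(buffer),
                                   width: CVPixelBufferGetWidth(buffer),
                                   height: CVPixelBufferGetHeight(buffer),
                                   bitsPerComponent: 8,
                                   bytesPerRow: CVPixelBufferGetBytesPerRow(buffer),
                                   space: CGColorSpaceCreateDeviceRGB(),
                                   bitmapInfo: CGImageAlphaInfo.premultipliedFirst.rawValue) {
            context.draw(cgImage, in: CGRect(x: 0, y: 0,
                                             width: CVPixelBufferGetWidth(buffer),
                                             height: CVPixelBufferGetHeight(buffer)))
        }
        CVPixelBufferUnlockBaseAddress(buffer, [])

        let time = CMTime(seconds: CACurrentMediaTime() - startTime, preferredTimescale: 600)
        adaptor.append(buffer, withPresentationTime: time)
    }

    func stop(completion: @escaping () -> Void) {
        displayLink?.invalidate()
        writerInput?.markAsFinished()
        writer?.finishWriting(completionHandler: completion)
    }
}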

Related

AVFoundation: Get CMSampleBuffer from Preview View (while recording at the same time)

I'm trying to create a camera capture session that has the following features:
Preview
Photo capture
Video capture
Realtime frame processing (AI)
While the first two are not a problem, I haven't found a way to do the last two separately.
Currently, I use a single AVCaptureVideoDataOutput and run the video recording first, then the frame processing, in the same function and on the same queue. (see code here and here)
The only problem with this is that the video capture records 4k video, and I don't really want the frame processor to receive 4k buffers, as that would be very slow and would block the video recording (frame drops).
Ideally I want to create one AVCaptureVideoDataOutput for 4k video recording, and another one that receives frames in a lower (preview?) resolution - but you cannot use two AVCaptureVideoDataOutputs in the same capture session.
I thought maybe I could "hook into" the preview layer to receive the CMSampleBuffers from there, just like in the captureOutput(...) func, since those are in preview-sized resolutions. Does anyone know if that is somehow possible?
For this kind of thing I'd recommend implementing a custom renderer flow.
You need just one AVCaptureVideoDataOutput, without the system preview layer that iOS provides:
— Set the output's pixel format to YUV (it's more compact than BGRA).
— Get the CMSampleBuffer in the AVCaptureVideoDataOutput callback.
— Convert the CMSampleBuffer to a Metal texture.
— Create a resized, low-resolution texture in Metal.
— Send the high-resolution texture to your renderer and draw it in an MTKView.
— Copy the low-resolution texture into a CVPixelBuffer; from there you can convert it to a CGImage or Data.
— Send the low-resolution image to the neural network.
I have an article on Medium: link. You can use it as an example.
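A rough sketch of the Metal side of that flow, assuming an already-configured MTLDevice. For simplicity it wraps a BGRA buffer as a single texture (a YUV buffer would need one texture per plane plus a conversion shader) and uses MPSImageLanczosScale for the downscale; the class and method names here are placeholders, not part of any framework.

import AVFoundation
import Metal
import MetalPerformanceShaders

final class CaptureTexturePipeline {
    private let device: MTLDevice
    private let commandQueue: MTLCommandQueue
    private var textureCache: CVMetalTextureCache?
    private let scaler: MPSImageLanczosScale

    init?(device: MTLDevice) {
        guard let queue = device.makeCommandQueue() else { return nil }
        self.device = device
        self.commandQueue = queue
        self.scaler = MPSImageLanczosScale(device: device)
        CVMetalTextureCacheCreate(kCFAllocatorDefault, nil, device, nil, &textureCache)
    }

    // Wraps a BGRA pixel buffer as a Metal texture (zero-copy via the texture cache).
    // A YUV (420f/420v) buffer would need one texture per plane plus a conversion shader.
    func makeTexture(from sampleBuffer: CMSampleBuffer) -> MTLTexture? {
        guard let cache = textureCache,
              let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return nil }
        let width = CVPixelBufferGetWidth(pixelBuffer)
        let height = CVPixelBufferGetHeight(pixelBuffer)
        var cvTexture: CVMetalTexture?
        CVMetalTextureCacheCreateTextureFromImage(kCFAllocatorDefault, cache, pixelBuffer, nil,
                                                  .bgra8Unorm, width, height, 0, &cvTexture)
        return cvTexture.flatMap { CVMetalTextureGetTexture($0) }
    }

    // Produces a downscaled copy for the neural network while the full-size
    // texture goes to the on-screen renderer (e.g. an MTKView draw call).
    func downscale(_ source: MTLTexture, by scale: Int = 4) -> MTLTexture? {
        let descriptor = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: source.pixelFormat,
                                                                  width: source.width / scale,
                                                                  height: source.height / scale,
                                                                  mipmapped: false)
        descriptor.usage = [.shaderRead, .shaderWrite]
        guard let destination = device.makeTexture(descriptor: descriptor),
              let commandBuffer = commandQueue.makeCommandBuffer() else { return nil }
        // With no scaleTransform set, MPSImageLanczosScale resizes the source to fill the destination.
        scaler.encode(commandBuffer: commandBuffer, sourceTexture: source, destinationTexture: destination)
        commandBuffer.commit()
        commandBuffer.waitUntilCompleted()
        return destination
    }
}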

Reduce unwanted motion blur while using GPUImage to capture

I'm writing an app in Swift, and using GPUImage to capture and manipulate the images. I'm looking for a way to decrease the exposure time to reduce motion blur. If you move too quickly in the frame, it looks very blurry. I have good lighting, so I'm not sure why the exposure isn't fast enough.
I'm currently doing this to setup GPUImage:
self.stillCamera = GPUImageStillCamera(sessionPreset: AVCaptureSessionPreset640x480, cameraPosition: .Front)
self.stillCamera!.outputImageOrientation = .Portrait
I then setup the filters I want (a crop and optionally effects).
I then start the preview:
self.stillCamera?.startCameraCapture()
And to capture a frame:
self.finalFilter?.useNextFrameForImageCapture()
var capturedImage = self.finalFilter?.imageFromCurrentFramebuffer()
The reason you're seeing such long exposure times is that you're using a GPUImageStillCamera and its preview to capture frames. A GPUImageStillCamera uses an AVCaptureStillImageOutput under the hood and enables the live preview feed from that. The photo preview feed runs at ~15 FPS or lower on the various devices, and doesn't provide as clear an image as a GPUImageVideoCamera will.
You either want to capture photos from the AVCaptureStillImageOutput by triggering an actual photo capture (via -capturePhotoProcessedUpToFilter: or the like) or use a GPUImageVideoCamera and capture individual frames like you do above.
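For reference, a sketch of the second option in the same pre-Swift-3 style as the question, swapping the still camera for a GPUImageVideoCamera (finalFilter stands in for whatever crop/effect chain is already set up):

// GPUImageVideoCamera uses an AVCaptureVideoDataOutput under the hood,
// so frames come from the full-rate video feed rather than the slow photo preview.
let videoCamera = GPUImageVideoCamera(sessionPreset: AVCaptureSessionPreset640x480,
                                      cameraPosition: .Front)
videoCamera.outputImageOrientation = .Portrait
videoCamera.addTarget(finalFilter)          // same filter chain as before
videoCamera.startCameraCapture()

// Capturing a single processed frame works the same way as with the still camera:
finalFilter.useNextFrameForImageCapture()
let capturedImage = finalFilter.imageFromCurrentFramebuffer()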

iOS postprocessing - overlay timestamp to video and export

I am working on an application where video and time/GPS/accelerometer data is simultaneously recorded to separate files.
I can play the video and have my overlay appear perfectly in realtime, but I cannot simply export this.
I want to post-process the video and overlay the time and coordinates on the video.
There are other shapes that will be overlayed which change size/position on each frame.
I have tried using AVMutableComposition and adding CALayers, with limited results.
This works to an extent, but I cannot synchronise the timestamp with the video. I could use a CAKeyframeAnimation with values + keyTimes, but the number of values I would need to work with is excessive.
My current approach is to render a separate video consisting of CGImages created from the data. This works well, but I would need to use a chroma key to get transparency in the overlay, and I have read that there will likely be quality issues after doing this.
Is there a simpler approach that I should be looking at?
I understand that render speed will not be fantastic, however I do not wish to require a separate 'PC' application to render the video.
Use an AVAssetReader for the recorded video. Get each CMSampleBufferRef, read its timestamp, draw the time onto the sample buffer, and write the buffer out through an AVAssetWriterInputPixelBufferAdaptor. A similar approach works for video while it is being recorded.
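A condensed sketch of that read-stamp-write loop, assuming H.264/MP4 output and a caller-supplied drawing closure. drawTimestamp is a placeholder; it is where you would look up the GPS/accelerometer sample for the frame's time and draw it.

import AVFoundation
import CoreGraphics
import Foundation

// Reads frames, stamps each one via drawTimestamp, and writes them back out.
func exportWithTimestamps(from sourceURL: URL, to outputURL: URL,
                          drawTimestamp: (CGContext, CMTime, CGSize) -> Void) throws {
    let asset = AVAsset(url: sourceURL)
    guard let track = asset.tracks(withMediaType: .video).first else { return }

    let reader = try AVAssetReader(asset: asset)
    let readerOutput = AVAssetReaderTrackOutput(track: track, outputSettings:
        [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA])
    reader.add(readerOutput)

    let size = track.naturalSize
    let writer = try AVAssetWriter(outputURL: outputURL, fileType: .mp4)
    let writerInput = AVAssetWriterInput(mediaType: .video, outputSettings: [
        AVVideoCodecKey: AVVideoCodecType.h264,
        AVVideoWidthKey: Int(size.width),
        AVVideoHeightKey: Int(size.height)])
    let adaptor = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: writerInput,
                                                       sourcePixelBufferAttributes: nil)
    writer.add(writerInput)

    reader.startReading()
    writer.startWriting()
    writer.startSession(atSourceTime: .zero)

    while let sample = readerOutput.copyNextSampleBuffer(),
          let pixelBuffer = CMSampleBufferGetImageBuffer(sample) {
        let time = CMSampleBufferGetPresentationTimeStamp(sample)

        // Draw directly into the decoded BGRA buffer.
        CVPixelBufferLockBaseAddress(pixelBuffer, [])
        if let context = CGContext(data: CVPixelBufferGetBaseAddress(pixelBuffer),
                                   width: CVPixelBufferGetWidth(pixelBuffer),
                                   height: CVPixelBufferGetHeight(pixelBuffer),
                                   bitsPerComponent: 8,
                                   bytesPerRow: CVPixelBufferGetBytesPerRow(pixelBuffer),
                                   space: CGColorSpaceCreateDeviceRGB(),
                                   bitmapInfo: CGImageAlphaInfo.premultipliedFirst.rawValue |
                                               CGBitmapInfo.byteOrder32Little.rawValue) {
            drawTimestamp(context, time, size)   // look up the logged data for `time` and draw it here
        }
        CVPixelBufferUnlockBaseAddress(pixelBuffer, [])

        // Crude back-pressure handling; a real exporter would use requestMediaDataWhenReady.
        while !writerInput.isReadyForMoreMediaData { Thread.sleep(forTimeInterval: 0.01) }
        adaptor.append(pixelBuffer, withPresentationTime: time)
    }

    writerInput.markAsFinished()
    writer.finishWriting { /* check writer.status / writer.error here */ }
}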
Use the AVVideoCompositing protocol https://developer.apple.com/library/mac/documentation/AVFoundation/Reference/AVVideoCompositing_Protocol/index.html
This will allow you to get frame-by-frame callbacks with the pixel buffers, so you can do what you want.
With this protocol you will be able to take each frame and overlay whatever you would like. Take a look at this sample - https://developer.apple.com/library/ios/samplecode/AVCustomEdit/Introduction/Intro.html - to see how to handle frame-by-frame modifications. If you take advantage of the AVVideoCompositing protocol, you can set a custom video compositor and a video composition on your AVPlayerItem and AVAssetExportSession to render/export what you want.
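A skeletal compositor conforming to AVVideoCompositing, just to show the shape of the frame-by-frame callback; the actual drawing of the overlay into the destination buffer is left as a placeholder comment.

import AVFoundation
import CoreVideo

final class OverlayCompositor: NSObject, AVVideoCompositing {
    let sourcePixelBufferAttributes: [String: Any]? =
        [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA]
    let requiredPixelBufferAttributesForRenderContext: [String: Any] =
        [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA]

    func renderContextChanged(_ newRenderContext: AVVideoCompositionRenderContext) {
        // Cache render size / pixel buffer pool details here if needed.
    }

    func startRequest(_ request: AVAsynchronousVideoCompositionRequest) {
        guard let trackID = request.sourceTrackIDs.first?.int32Value,
              let sourceFrame = request.sourceFrame(byTrackID: trackID),
              let destination = request.renderContext.newPixelBuffer() else {
            request.finish(with: NSError(domain: "OverlayCompositor", code: -1, userInfo: nil))
            return
        }

        let time = request.compositionTime
        // Placeholder: copy `sourceFrame` into `destination`, then draw the
        // timestamp/GPS overlay for `time` on top (Core Graphics, Core Image or Metal).
        request.finish(withComposedVideoFrame: destination)
    }
}

// Attach it to a composition, then use the same video composition for playback
// (AVPlayerItem.videoComposition) and for export (AVAssetExportSession.videoComposition):
// let videoComposition = AVMutableVideoComposition(propertiesOf: asset)
// videoComposition.customVideoCompositorClass = OverlayCompositor.self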

Load video from iPhone library, modify frame and play it in real-time

I'm looking for tips on developing an application for iPhone/iPad that can process video (let's consider only local files stored on the device, for simplicity) and play it back in real time. For example, the user could choose any movie, apply an "Old movie" filter, and watch it as if it were playing on an old tube TV.
In order to make this idea real I need to implement two key features:
1) Grab frames and the audio stream from a movie file and get access to the individual frames (I'm interested in raw pixel buffers in BGRA or at least YUV color space).
2) Display the processed frames somehow. I know it's possible to render a processed frame to an OpenGL texture, but I would like a more powerful component with playback controls. Is there any media player class that supports playing custom image and audio buffers?
The processing function is done and it's fast (less than the duration of one frame).
I'm not asking for a ready-made solution, but any tips are welcome!
Answer
Frame grabbing.
It seems the only way to grab video and audio frames is to use the AVAssetReader class. Although it's not recommended for real-time grabbing, it does the job. In my tests on an iPad 2, grabbing a single frame takes about 7-8 ms. Seeking within the video is tricky. Maybe someone can point to a more efficient solution?
Video playback. I've done this with a custom view and GLES to render a rectangular texture with the video frame inside of it. As far as I know, it's the fastest way to draw bitmaps.
Problems
You need to play the audio samples manually.
AVAssetReader grabbing should be synchronized with the movie frame rate, otherwise the movie will play too fast or too slow (see the pacing sketch below).
AVAssetReader allows only sequential frame access. You can't seek forward and backward. The only solution I found is to delete the old reader and create a new one with a trimmed time range.
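One simple way to handle the frame-rate synchronisation problem above is to pace the reader loop against each frame's presentation timestamp. A crude sketch (playbackLoop and present are placeholder names; a real player would drive presentation from its audio clock or a display link rather than sleeping a thread):

import AVFoundation
import Foundation
import QuartzCore

// Presents each decoded frame when its presentation timestamp comes due,
// measured from the moment playback started.
func playbackLoop(output: AVAssetReaderTrackOutput, present: (CMSampleBuffer) -> Void) {
    let playbackStart = CACurrentMediaTime()
    while let sample = output.copyNextSampleBuffer() {
        let pts = CMSampleBufferGetPresentationTimeStamp(sample).seconds
        let wait = (playbackStart + pts) - CACurrentMediaTime()
        if wait > 0 { Thread.sleep(forTimeInterval: wait) }
        present(sample)
    }
}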
This is how you would load a video from the camera roll.
This is a way to start processing video. Brad Larson did a great job.
How to grab video frames.
You can use AVPlayer + AVPlayerItem; they give you a chance to apply a filter to the displayed image.
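A minimal sketch of that route using AVVideoComposition's Core Image handler (an API that arrived after the question was asked); CISepiaTone stands in for an "old movie" filter chain:

import AVFoundation
import CoreImage

// Plays a library video through a Core Image filter in real time,
// keeping AVPlayer's normal playback controls and audio handling.
func makeFilteredPlayer(for url: URL) -> AVPlayer {
    let asset = AVAsset(url: url)
    let item = AVPlayerItem(asset: asset)

    let sepia = CIFilter(name: "CISepiaTone")!   // stand-in for an "old movie" filter chain

    item.videoComposition = AVVideoComposition(asset: asset) { request in
        sepia.setValue(request.sourceImage.clampedToExtent(), forKey: kCIInputImageKey)
        let output = (sepia.outputImage ?? request.sourceImage)
            .cropped(to: request.sourceImage.extent)
        request.finish(with: output, context: nil)
    }

    return AVPlayer(playerItem: item)
}

// Usage: attach the player to an AVPlayerLayer or AVPlayerViewController and call play().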

Capture video from cam + custom view into single video file

I wonder if it's possible in iOS 4 or 5 to save to a single video file not just the stream from the camera, but the camera stream WITH custom view(s) overlaid on it. The custom view will contain a few labels with transparent backgrounds. Those labels will show additional info: the current time and GPS coordinates. And every video player must be able to play back that additional info.
I think you can use AVCaptureVideoDataOutput to process each frame and AVAssetWriter to record the processed frames. You can refer to this answer:
https://stackoverflow.com/a/4944594/379941 .
You can process the CVImageBufferRef and then use AVAssetWriterInputPixelBufferAdaptor's appendPixelBuffer:withPresentationTime: method to export (a sketch follows below).
I also strongly suggest using OpenCV to process the frames; this is a nice tutorial: http://aptogo.co.uk/2011/09/opencv-framework-for-ios/. The OpenCV library is great.
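A sketch of that capture-side wiring, assuming H.264/MOV output; drawOverlay(on:at:) is a hypothetical hook where the time/GPS labels would be drawn into each buffer, using the same CGContext technique as the post-processing sketch earlier.

import AVFoundation
import CoreGraphics

// The delegate receives camera frames, a placeholder drawOverlay(on:at:) burns the
// labels into each pixel buffer, and the adaptor hands the result to an AVAssetWriter.
final class OverlayWriter: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    private let writer: AVAssetWriter
    private let input: AVAssetWriterInput
    private let adaptor: AVAssetWriterInputPixelBufferAdaptor
    private var sessionStarted = false

    init(outputURL: URL, size: CGSize) throws {
        writer = try AVAssetWriter(outputURL: outputURL, fileType: .mov)
        input = AVAssetWriterInput(mediaType: .video, outputSettings: [
            AVVideoCodecKey: AVVideoCodecType.h264,
            AVVideoWidthKey: Int(size.width),
            AVVideoHeightKey: Int(size.height)])
        input.expectsMediaDataInRealTime = true
        adaptor = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: input,
                                                       sourcePixelBufferAttributes: nil)
        writer.add(input)
        writer.startWriting()
    }

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        let time = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)

        if !sessionStarted {
            writer.startSession(atSourceTime: time)   // anchor the file's timeline to the first frame
            sessionStarted = true
        }

        drawOverlay(on: pixelBuffer, at: time)        // hypothetical: render time/GPS labels into the buffer

        if input.isReadyForMoreMediaData {
            adaptor.append(pixelBuffer, withPresentationTime: time)
        }
    }

    private func drawOverlay(on pixelBuffer: CVPixelBuffer, at time: CMTime) {
        // Lock the buffer, wrap it in a CGContext, and draw the label text here
        // (same technique as in the post-processing sketch earlier).
    }
}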
