I'm currently working on an augmented reality app, which is why I would like to add an image on top of the live video feed (GPUImageVideoCamera) AND record the composited result to an output file.
Getting the live video preview and recording it already works, but I can't manage to overlay the image on screen in a way that is also recorded to the output file.
What's the best way to achieve this (I mean a GPUImage-compliant way)?
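One common GPUImage-style approach is to blend the camera feed with a GPUImagePicture (or a GPUImageUIElement for arbitrary UIKit content) and send the blended output to both a GPUImageView and a GPUImageMovieWriter, so the preview and the recording come from the same composited frames. A minimal, untested sketch, assuming a hypothetical overlay.png asset and a filterView (GPUImageView) already in the view hierarchy:

    GPUImageVideoCamera *camera =
        [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset1280x720
                                            cameraPosition:AVCaptureDevicePositionBack];
    camera.outputImageOrientation = UIInterfaceOrientationPortrait;

    // Static overlay image (hypothetical asset name)
    GPUImagePicture *overlay = [[GPUImagePicture alloc] initWithImage:[UIImage imageNamed:@"overlay.png"]];

    GPUImageAlphaBlendFilter *blend = [[GPUImageAlphaBlendFilter alloc] init];
    blend.mix = 1.0; // show the overlay fully where it is opaque

    [camera addTarget:blend];
    [overlay addTarget:blend];

    // On-screen preview of the composited result
    [blend addTarget:filterView];

    // Record the same composited result to a file
    NSString *path = [NSTemporaryDirectory() stringByAppendingPathComponent:@"overlay_recording.m4v"];
    [[NSFileManager defaultManager] removeItemAtPath:path error:nil];
    GPUImageMovieWriter *movieWriter =
        [[GPUImageMovieWriter alloc] initWithMovieURL:[NSURL fileURLWithPath:path]
                                                 size:CGSizeMake(720.0, 1280.0)];
    [blend addTarget:movieWriter];
    camera.audioEncodingTarget = movieWriter;

    [overlay processImage];
    [camera startCameraCapture];
    [movieWriter startRecording];
    // ... later: [movieWriter finishRecording];

Because the GPUImageView and the GPUImageMovieWriter are both targets of the same blend filter, whatever appears in the preview is also what ends up in the output file.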
I am looking to make a Chrome extension that analyzes frames of a YouTube video being watched and overlays a rectangle around certain points of interest.
Is there any API to access the frames being rendered and preprocess them in real time?
If this isn't possible, my next idea would be to download the video separately (either locally or on a backend server), process it and store the needed coordinates, and then overlay the rectangles as an HTML/JS animation, ideally using some sort of buffering system so the entire video doesn't need to be preprocessed before playing.
Is either of these possible, or is there a better solution that I am missing? Thanks!
In my app I have to play an alpha channel video as an overlay over the current view (I'm planning to achieve this alpha channel video using GPUImageAlphaBlendFilter or GPUImageChromaKeyBlendFilter), so I wanted to know whether the output video, after applying these filters, can be played back using GPUImage. If it can, can I get some sample code for this?
I know AVAnimator is an option, but I want to apply filters such as brightness and saturation to these overlay videos, and those filters have to be visible while the video is playing, which is why I can't use AVAnimator. That is a later step, though; for now I want to know how to play a video using GPUImage.
Thanks in advance! :]
Well, even though I like telling people about AVAnimator, Brad Larson's GPUImage is specifically designed to be a GPU-based filtering framework for iOS apps. Applying real-time effects like the ones you describe is exactly what GPUImage was designed for. See the GPUImage chroma key filter.
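To make that concrete, here is a minimal, untested sketch of playing a chroma-keyed overlay clip on top of a background movie entirely inside GPUImage; the file names and gpuImageView (a GPUImageView already in your view hierarchy) are placeholders:

    // Hypothetical clips: the overlay has its transparent areas rendered as solid green
    NSURL *backgroundURL = [[NSBundle mainBundle] URLForResource:@"background" withExtension:@"mp4"];
    NSURL *overlayURL = [[NSBundle mainBundle] URLForResource:@"overlay_greenscreen" withExtension:@"mp4"];

    GPUImageMovie *backgroundMovie = [[GPUImageMovie alloc] initWithURL:backgroundURL];
    GPUImageMovie *overlayMovie = [[GPUImageMovie alloc] initWithURL:overlayURL];
    backgroundMovie.playAtActualSpeed = YES;
    overlayMovie.playAtActualSpeed = YES;

    GPUImageChromaKeyBlendFilter *chromaBlend = [[GPUImageChromaKeyBlendFilter alloc] init];
    [chromaBlend setColorToReplaceRed:0.0 green:1.0 blue:0.0]; // key out pure green
    chromaBlend.thresholdSensitivity = 0.4;

    // First input: the clip whose keyed color gets replaced;
    // second input: the content that shows through those regions.
    [overlayMovie addTarget:chromaBlend];
    [backgroundMovie addTarget:chromaBlend];

    [chromaBlend addTarget:gpuImageView];

    [overlayMovie startProcessing];
    [backgroundMovie startProcessing];

Swapping in GPUImageAlphaBlendFilter works the same way if the overlay source actually carries alpha, and filters such as brightness or saturation can later be chained between a movie and the blend filter.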
In my app I want to apply effects to audio and video files, like the Pheed application does. I want to mix recorded audio with predefined clips, and also give the sound different effects such as echo, surround, and normal. For video files I want to add different effects such as sepia. I don't know which class library I have to use to implement this. Can anyone suggest a library for that, or any way to do it?
Here are a pair of libraries you can try:
For audio filters and effects you can try The Amazing Audio Engine. It is built on top of Core Audio.
For video effects try GPUImage. I haven't tried it, but it is supposed to work better than Core Image when handling video.
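As a rough illustration of the video side, applying a GPUImage filter to an existing movie file and writing the result back out follows the same pattern as GPUImage's bundled SimpleVideoFileFilter example; inputURL and outputURL here are placeholders:

    GPUImageMovie *movieFile = [[GPUImageMovie alloc] initWithURL:inputURL];
    GPUImageSepiaFilter *sepiaFilter = [[GPUImageSepiaFilter alloc] init];
    [movieFile addTarget:sepiaFilter];

    GPUImageMovieWriter *movieWriter =
        [[GPUImageMovieWriter alloc] initWithMovieURL:outputURL size:CGSizeMake(640.0, 480.0)];
    [sepiaFilter addTarget:movieWriter];

    // Keep the original soundtrack
    movieWriter.shouldPassthroughAudio = YES;
    movieFile.audioEncodingTarget = movieWriter;
    [movieFile enableSynchronizedEncodingUsingMovieWriter:movieWriter];

    [movieWriter startRecording];
    [movieFile startProcessing];

    __weak GPUImageMovieWriter *weakWriter = movieWriter;
    [movieWriter setCompletionBlock:^{
        [sepiaFilter removeTarget:weakWriter];
        [weakWriter finishRecording];
    }];

Audio effects like echo would still go through something like The Amazing Audio Engine on the Core Audio side; GPUImage only handles the image pipeline.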
Currently I am working on a video processing application. I record video using GPUImage for iOS. The video is generated perfectly, but I also want to add dynamic subtitles to the video, i.e. text that changes over time. I have no idea how to add subtitles to the exported video using GPUImage, so I am having difficulty getting the required output. Kindly help me with this.
Thanks
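One possible sketch, assuming you already have a working camera (GPUImageVideoCamera) and movieWriter (GPUImageMovieWriter) chain: render a UILabel through GPUImageUIElement, blend it over the camera feed, and refresh it from the frame processing completion block so the text can change over time.

    UILabel *subtitleLabel = [[UILabel alloc] initWithFrame:CGRectMake(0.0, 0.0, 720.0, 100.0)];
    subtitleLabel.textColor = [UIColor whiteColor];
    subtitleLabel.backgroundColor = [UIColor clearColor];

    GPUImageUIElement *uiElement = [[GPUImageUIElement alloc] initWithView:subtitleLabel];

    GPUImageAlphaBlendFilter *blendFilter = [[GPUImageAlphaBlendFilter alloc] init];
    blendFilter.mix = 1.0;

    [camera addTarget:blendFilter];      // base layer: live camera frames
    [uiElement addTarget:blendFilter];   // overlay: the rendered label
    [blendFilter addTarget:movieWriter]; // the composite is what gets recorded

    // Re-render the label after every camera frame so the subtitle can change
    __weak GPUImageUIElement *weakElement = uiElement;
    [camera setFrameProcessingCompletionBlock:^(GPUImageOutput *output, CMTime time) {
        subtitleLabel.text = [NSString stringWithFormat:@"t = %.1f s", CMTimeGetSeconds(time)];
        [weakElement update];
    }];

The text set inside the block is just a placeholder; in practice you would look up the subtitle for the current CMTime from your own timed list.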
I have been trying to create a video template which uses an alpha channel video overlaid on MP4 videos and images.
This is the kind of video I need to create: http://viewptch.ptchcdn.com/rendered/52b28a9f8d4f980f3a3f99c3_cb44bf2b/52b28a9f8d4f980f3a3f99c3_lrg_main_main.mov
For overlaying alpha video on other videos I have used AVAnimator, and I succeeded in playing a preview using AVFoundation, AVSynchronizedLayer, and AVAnimator.
When rendering the video from the composition, however, frames of the alpha channel videos render very slowly.
I need to create a video with alpha channel video on top of another video.
Can anyone please suggest possible ways to render a video like http://viewptch.ptchcdn.com/rendered/52b28a9f8d4f980f3a3f99c3_cb44bf2b/52b28a9f8d4f980f3a3f99c3_lrg_main_main.mov ?
You mention that you have looked at AVAnimator; did you download the KittyBoom example project and try it out? The specifics of how it works are detailed in this post. One thing to note: when you build and run on the device, you need to turn Debug mode off, otherwise it will not execute quickly, because a number of extra checks are done in debug mode. Also, make sure to test on an actual device; the simulator is not a good measure of performance on real hardware. Performance is a key problem with video that contains an alpha channel, since iOS does not support alpha channel video by default.