I am looking to make a chrome extension that analyzes frames of a youtube video being watched and overlays a rectangle around certain points of interest.
Is there an API to access the frames being rendered and preprocess them in real time?
If this isn't possible, my next idea would be to download the video separately (either locally or on a backend server), process it and store the needed coordinates, and then overlay the rectangles as an HTML/JS animation, ideally using some sort of buffering system so the entire video doesn't need to be preprocessed before playing.
Are either of these possible, or is there a better solution that I am missing? Thanks!
My app uses the VideoCore project for live streaming to a Wowza server and storing the video. It also uses AVCaptureMovieFileOutput to record video offline.
I want to embed the capture timestamp at the top-left of the video, and it is not a static time: not just a static watermark, but a running display of the real capture time.
For the streaming case, I have no idea for now. For the offline case, I tried to use AVCaptureVideoDataOutput to get every frame and add a time-text overlay, but this causes the preview screen to freeze.
Any tips are helpful.
Thank you.
My platform is Xcode 7.3 + Swift 2.
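For reference, a minimal sketch of the offline approach described above, written against the modern Swift API rather than Swift 2 (the FrameTagger class and all of its details are illustrative):

```swift
import AVFoundation

// Hypothetical sketch of the offline approach described above: attach an
// AVCaptureVideoDataOutput so each captured frame can be stamped with the
// capture time before being written out (instead of AVCaptureMovieFileOutput).
final class FrameTagger: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    let session = AVCaptureSession()
    private let queue = DispatchQueue(label: "frame.tagging")

    func start() throws {
        guard let camera = AVCaptureDevice.default(for: .video) else { return }
        session.addInput(try AVCaptureDeviceInput(device: camera))

        let output = AVCaptureVideoDataOutput()
        output.setSampleBufferDelegate(self, queue: queue)
        session.addOutput(output)
        session.startRunning()
    }

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        let stamp = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
        // Draw `stamp` (or a wall-clock Date) onto the pixel buffer here,
        // then append the frame to an AVAssetWriter to produce the movie.
        _ = stamp
    }
}
```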
I did something similar using transcoding on Wowza: the transcoder menu enables an image overlay, and this image can be refreshed every second (or less). So if you create an image with the current timestamp every second, Wowza takes it and puts it on the stream every second. You can define where to put the image, its size, and its transparency.
To create the image I use PHP, but you could use any other tool that can generate images.
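As a rough illustration of that overlay image, here is a hypothetical Swift/UIKit version of what the PHP script produces (the function name, font, and colors are all illustrative, not part of the Wowza setup):

```swift
import UIKit

// Hypothetical helper: renders the current time into a small PNG that a
// server-side process (the answer uses PHP) could hand to Wowza's
// image-overlay transcoder setting once per second.
func makeTimestampImage() -> Data? {
    let formatter = DateFormatter()
    formatter.dateFormat = "yyyy-MM-dd HH:mm:ss"
    let text = formatter.string(from: Date()) as NSString

    let attributes: [NSAttributedString.Key: Any] = [
        .font: UIFont.monospacedDigitSystemFont(ofSize: 24, weight: .bold),
        .foregroundColor: UIColor.white
    ]
    let size = text.size(withAttributes: attributes)

    let renderer = UIGraphicsImageRenderer(size: size)
    let image = renderer.image { context in
        // Semi-transparent backing so the text stays legible over video.
        UIColor.black.withAlphaComponent(0.5).setFill()
        context.fill(CGRect(origin: .zero, size: size))
        text.draw(at: .zero, withAttributes: attributes)
    }
    return image.pngData()
}
```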
I am recording videos and playing them back using AVFoundation. Everything is perfect except for a hissing that is present throughout the video. You can hear this hissing in every video captured on any iPad; even videos captured with Apple's built-in camera app have it.
To hear it clearly, record a video in a place as quiet as possible, without speaking. It can be detected very easily through headphones with the volume at maximum.
After researching, I found out that this hissing is produced by the device's preamplifier and cannot be avoided while recording.
The only possible solution is to remove it during audio post-processing. Hiss is high-frequency noise, so it can be attenuated with a low-pass filter and noise gates. There are applications like Adobe Audition which can perform this operation; this video shows how it is achieved using Adobe Audition.
I have searched the Apple docs and found nothing that can achieve this directly. So I want to know whether there exists any library, API, or open-source project which can perform this operation. If not, how can I start going in the right direction? It does look like a complex task.
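As a starting point, a minimal sketch of a low-pass stage using AVAudioEngine and AVAudioUnitEQ (the 8 kHz cutoff and the file name are assumptions; a noise gate would still need a separate custom step, which AVAudioUnitEQ does not provide):

```swift
import AVFoundation

// Hypothetical sketch: play a recorded file back through a low-pass filter.
let engine = AVAudioEngine()
let player = AVAudioPlayerNode()

let eq = AVAudioUnitEQ(numberOfBands: 1)
let band = eq.bands[0]
band.filterType = .lowPass
band.frequency = 8_000   // cut above 8 kHz; tune by ear
band.bypass = false

engine.attach(player)
engine.attach(eq)

// "recording.m4a" is a placeholder path.
let file = try AVAudioFile(forReading: URL(fileURLWithPath: "recording.m4a"))
engine.connect(player, to: eq, format: file.processingFormat)
engine.connect(eq, to: engine.mainMixerNode, format: file.processingFormat)

try engine.start()
player.scheduleFile(file, at: nil)
player.play()
```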
I have been trying to create a video template which uses an alpha-channel video overlaid on mp4 videos and images.
This is how I need to create a video http://viewptch.ptchcdn.com/rendered/52b28a9f8d4f980f3a3f99c3_cb44bf2b/52b28a9f8d4f980f3a3f99c3_lrg_main_main.mov
For overlaying alpha video on other videos, I have used AVAnimator; I succeeded in playing a preview using AVFoundation, AVSynchronizedLayer, and AVAnimator.
When rendering the video from the composition, however, the frames of the alpha-channel video render very slowly.
I need to create a video with an alpha-channel video on top of another video.
Can anyone please suggest possible ways to render a video like http://viewptch.ptchcdn.com/rendered/52b28a9f8d4f980f3a3f99c3_cb44bf2b/52b28a9f8d4f980f3a3f99c3_lrg_main_main.mov ?
You mention that you have looked at AVAnimator; did you download the KittyBoom example project and try it out? The specifics of how it works are detailed in this post. One thing to note: when you build and run on the device, you need to turn Debug mode off, otherwise it will not execute quickly, because a number of extra checks are done in debug mode. Also, make sure to test on the actual device; the simulator is not a good measure of performance on a real device. Performance is a key problem with video that contains an alpha channel, as iOS does not support video with an alpha channel by default.
I'm looking for tips on developing an application for iPhone/iPad that will be able to process video (let's consider only local files stored on the device, for simplicity) and play it in real time. For example, the user could choose any movie, pick an "Old movie" filter, and watch it as if on an old tube TV.
In order to make this idea real I need to implement two key features:
1) Grab frames and the audio stream from a movie file and get access to individual frames (I'm interested in raw pixel buffers in BGRA or at least YUV color space).
2) Display the processed frames somehow. I know it's possible to render a processed frame to an OpenGL texture, but I would like to have a more capable component with playback controls. Is there any media player class that supports playing custom image and audio buffers?
The processing function is already done, and it's fast (it takes less than the duration of one frame).
I'm not asking for a ready-made solution, but any tips are welcome!
Answer
Frame grabbing.
It seems the only way to grab video and audio frames is to use the AVAssetReader class. Although it's not recommended for real-time grabbing, it does the job. In my tests on an iPad 2, grabbing a single frame takes about 7-8 ms. Seeking across the video is tricky, though; maybe someone can point to a more efficient solution?
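A minimal sketch of that approach (the file name movie.mp4 is a placeholder):

```swift
import AVFoundation

// Hypothetical sketch: sequentially pull BGRA frames from a local movie
// with AVAssetReader, the approach described above.
let asset = AVAsset(url: URL(fileURLWithPath: "movie.mp4"))
guard let track = asset.tracks(withMediaType: .video).first else {
    fatalError("no video track")
}

let reader = try AVAssetReader(asset: asset)
let output = AVAssetReaderTrackOutput(
    track: track,
    outputSettings: [kCVPixelBufferPixelFormatTypeKey as String:
                     kCVPixelFormatType_32BGRA]
)
reader.add(output)
reader.startReading()

// copyNextSampleBuffer returns frames strictly in order; nil means the end
// of the time range (or an error) was reached.
while let sample = output.copyNextSampleBuffer() {
    if let pixelBuffer = CMSampleBufferGetImageBuffer(sample) {
        // process(pixelBuffer) — filter each BGRA frame here
    }
}
```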
Video playback. I've done this with a custom view and GLES, rendering a rectangular texture with the video frame inside it. As far as I know it's the fastest way to draw bitmaps.
Problems
Sound samples need to be played manually.
AVAssetReader grabbing needs to be synchronized with the movie's frame rate, otherwise the movie will play too fast or too slow.
AVAssetReader allows only sequential frame access; you can't seek forward and backward. The only solution I can propose is to delete the old reader and create a new one with a trimmed time range, as in the sketch below.
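A minimal sketch of that workaround (the helper name makeReader is illustrative):

```swift
import AVFoundation
import CoreMedia

// Hypothetical sketch: "seek" by discarding the old reader and creating a
// fresh one whose timeRange starts at the target time.
func makeReader(for asset: AVAsset, from start: CMTime) throws -> AVAssetReader {
    let reader = try AVAssetReader(asset: asset)
    reader.timeRange = CMTimeRange(start: start, duration: .positiveInfinity)
    // Add an AVAssetReaderTrackOutput as before, then call startReading().
    return reader
}
```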
This is how you would load a video from the camera roll.
This is a way to start processing video. Brad Larson did a great job.
How to grab video frames.
You can use AVPlayer + AVPlayerItem; this gives you a chance to apply a filter to the displayed image.
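For example, a hedged sketch using AVPlayerItem.videoComposition to run a Core Image filter over each frame the player displays (the sepia filter and file path are placeholders):

```swift
import AVFoundation
import CoreImage

// Hypothetical sketch: filter every displayed frame via videoComposition.
let asset = AVAsset(url: URL(fileURLWithPath: "movie.mp4"))
let filter = CIFilter(name: "CISepiaTone")!

let composition = AVVideoComposition(asset: asset) { request in
    filter.setValue(request.sourceImage, forKey: kCIInputImageKey)
    // Crop back to the frame's extent and hand the result to the player.
    let output = filter.outputImage!.cropped(to: request.sourceImage.extent)
    request.finish(with: output, context: nil)
}

let item = AVPlayerItem(asset: asset)
item.videoComposition = composition
let player = AVPlayer(playerItem: item)
player.play()
```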
I am trying to make a nice pretty video.
I have an AVI video from a GoPro camera, and I have some info I want to overlay on top of the video, like time, GPS, speed, G-force, etc.
I got my raw data and coded it up in ActionScript into a Flash movie, but then worked out that I have two issues.
Flash export to AVI is pretty crap, and basically does a screen capture.
The export to AVI can't be transparent, or anything but a square/rectangle.
So, can anyone suggest a better way? Should I use something other than Flash to create my speedometer, something friendlier for overlaying on an AVI?
This is the sort of thing I am trying to create.
youtube.com/watch?v=tT-vDtQyCbo
I have a CSV of all my raw data and am trying to find a way to overlay it so it looks as professional as the link above. I can make the dials in ActionScript, but exporting them to AVI with a "screen capture" type program makes them look pretty crap. On the other hand, importing my HD video into Flash makes the quality pretty poor, and I still have the export issue at the end.
I'm not 100% clear on what you're trying to do.
If you mean that you want to put some info over a video using Flash, all you need to do is import your video onto the timeline on one layer and then place your information on a higher layer. If you want your video to play as a different shape, then you can simply apply a mask to the layer your video is sitting on.
If you throw in more detail, I'll improve this answer for you :)