I have searched all over to no avail with this query. On Android TV with VLC, I am trying to adjust the contrast via the LibVLC options, but keep drawing a blank.
Any ideas on what might work?
Try with a video filter. From https://wiki.videolan.org/VLC_command-line_help/
Image properties filter (adjust)
--contrast=<float [0.000000 .. 2.000000]>
Image contrast (0-2)
Set the image contrast, between 0 and 2. Defaults to 1.
This would go in the LibVLC constructor options, like so. Note that --contrast belongs to the adjust filter, so the filter itself also has to be enabled (e.g. with --video-filter=adjust); depending on your wrapper, the options may need to be passed as separate strings or as a list.
LibVLC = libVLCFactory.getFromOptions(ctx, "--video-filter=adjust", "--contrast=1.5")
In my app I have to play an alpha-channel video as an overlay over the current view (I'm planning to achieve this alpha-channel effect using GPUImageAlphaBlendFilter or GPUImageChromaKeyBlendFilter), so I wanted to know whether the output video, after applying these filters, can be played using GPUImage. If it can, could I get some sample code for this?
I know AVAnimator is an option, but I want to apply filters to these overlay videos (i.e. brightness, saturation, etc.) that have to be visible while the video is playing, which is why I can't use AVAnimator. That being the next step, for now I want to know how to play video using GPUImage.
Thanks in advance! :]
Well, even though I like telling people about AVAnimator, Brad Larson's GPUImage is specifically designed to be a GPU-based filtering framework for iOS apps. Applying real-time effects like the ones you describe is exactly what GPUImage was designed for. See GPUImage Chroma Key filter.
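For illustration, here is a hedged Swift sketch of that pipeline inside a view controller: the overlay movie is keyed over a still background with GPUImageChromaKeyBlendFilter and shown in a GPUImageView. movieURL and backgroundImage are placeholders, and initializer spellings may vary slightly between GPUImage versions.

import GPUImage  // Brad Larson's GPUImage (Objective-C version), used from Swift

// movieURL and backgroundImage are assumptions for this sketch
let movie = GPUImageMovie(url: movieURL)
movie?.playAtActualSpeed = true

let chromaKeyBlend = GPUImageChromaKeyBlendFilter()
chromaKeyBlend.setColorToReplaceRed(0.0, green: 1.0, blue: 0.0) // key out green
chromaKeyBlend.thresholdSensitivity = 0.4

let background = GPUImagePicture(image: backgroundImage)

// Input 0 is the background, input 1 is the keyed movie.
background?.addTarget(chromaKeyBlend)
movie?.addTarget(chromaKeyBlend)

let filteredView = GPUImageView(frame: view.bounds)
view.addSubview(filteredView)
chromaKeyBlend.addTarget(filteredView)

background?.processImage()
movie?.startProcessing()

The brightness/saturation tweaks you mention would just be additional filters (e.g. GPUImageBrightnessFilter) chained in before the view.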
I'm using the two OpenCV functions mentioned above to retrieve frames from my webcam. No additional properties are set, just running with default parameters.
While reading frames in a loop I can see that the image changes; brightness and contrast seem to be adjusted automatically. It definitely seems to be an operation of OpenCV, because the scene captured by the camera is unchanged and constantly lit.
So how can I disable this automatic correction? I could not find a property that seems to do that job.
You should try to play around with these three parameters:
CV_CAP_PROP_BRIGHTNESS Brightness of the image (only for cameras)
CV_CAP_PROP_CONTRAST Contrast of the image (only for cameras)
CV_CAP_PROP_SATURATION Saturation of the image (only for cameras)
Try to set them all to 50. Also, if that doesn't help, try changing the other camera capture parameters from the documentation.
To answer this myself: OpenCV seems buggy or outdated here.
it seems to be impossible to get images in the camera's native resolution; they're always 640x480, and forcing another value by setting the width and height properties doesn't change anything
it seems to be impossible to disable the automatic image correction; the properties mentioned above don't seem to work
the brightness/contrast properties don't seem to work either - or at least I could not find any good values for them, or the automatic image correction always overrides them
To sum it up: I wouldn't recommend using OpenCV for more advanced image capturing.
I'm looking for tips on developing an application for iPhone/iPad that can process video (let's consider only local files stored on the device, for simplicity) and play it in real time. For example, you could choose any movie, pick an "Old movie" filter, and have it look like it's playing on an old tube TV.
To make this idea real I need to implement two key features:
1) Grab frames and the audio stream from a movie file and get access to separate frames (I'm interested in a raw pixel buffer in BGRA or at least YUV color space).
2) Display the processed frames somehow. I know it's possible to render a processed frame to an OpenGL texture, but I would like a more capable component with playback controls. Is there any media player class that supports playing custom image and audio buffers?
The processing function is done and it's fast (less than the duration of one frame).
I'm not asking for ready solution, but any tips are welcome!
Answer
Frame grabbing.
It seems the only way to grab video and audio frames is to use the AVAssetReader class. Although it's not recommended for real-time grabbing, it does the job. In my tests on an iPad 2, grabbing a single frame takes about 7-8 ms. Seeking across the video is tricky. Maybe someone can point to a more efficient solution?
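In case it's useful, a minimal Swift sketch of that AVAssetReader route (the function name and error handling are my own; treat it as an outline rather than production code):

import AVFoundation

// Pull raw BGRA frames from a local movie file, sequentially.
func grabFrames(from movieURL: URL) throws {
    let asset = AVURLAsset(url: movieURL)
    let reader = try AVAssetReader(asset: asset)

    guard let videoTrack = asset.tracks(withMediaType: .video).first else { return }

    // Ask for decoded BGRA pixel buffers (a YUV format would work here too).
    let settings: [String: Any] = [
        kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA
    ]
    let output = AVAssetReaderTrackOutput(track: videoTrack, outputSettings: settings)
    reader.add(output)
    reader.startReading()

    // Frames come back strictly in order; seeking means recreating the
    // reader with a trimmed timeRange, as noted under "Problems" below.
    while let sampleBuffer = output.copyNextSampleBuffer() {
        if let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) {
            _ = pixelBuffer // process the raw pixel buffer here
        }
    }
}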
Video playback. I've done this with a custom view and GLES, rendering a rectangular texture with the video frame inside it. As far as I know, it's the fastest way to draw bitmaps.
Problems
You need to play the audio samples manually.
AVAssetReader grabbing should be synchronized with the movie frame rate. Otherwise the movie will play too fast or too slow.
AVAssetReader allows only sequential frame access. You can't seek forward and backward. The only solution I've found is to delete the old reader and create a new one with a trimmed time range.
This is how you would load a video from the camera roll.
This is a way to start processing video. Brad Larson did a great job.
How to grab video frames.
You can use AVPlayer + AVPlayerItem; it gives you a chance to apply a filter to the displayed image.
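For example (a sketch assuming iOS 9+ and a placeholder movieURL), the filter can be attached through AVVideoComposition's Core Image handler:

import AVFoundation
import CoreImage

// Apply a Core Image filter to every frame an AVPlayerItem displays.
let asset = AVURLAsset(url: movieURL) // movieURL is a placeholder
let filter = CIFilter(name: "CIColorControls")!
filter.setValue(1.2, forKey: kCIInputContrastKey)

let composition = AVVideoComposition(asset: asset) { request in
    filter.setValue(request.sourceImage, forKey: kCIInputImageKey)
    request.finish(with: filter.outputImage ?? request.sourceImage, context: nil)
}

let item = AVPlayerItem(asset: asset)
item.videoComposition = composition
let player = AVPlayer(playerItem: item)
// Attach player to an AVPlayerLayer (or AVPlayerViewController) and play.
player.play()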
I am showing a video inline (not fullscreen) using MPMoviePlayerController. I am using this class because it is the only player I got working with a remote file (progressive download) rather than a local file.
Is there any way to create a blue-screen effect? What I basically mean is: decide on a certain RGB value and set that pixel's alpha to 0. Is it possible to perform any per-frame image processing with MPMoviePlayerController?
You cannot use MPMoviePlayerController for this kind of movie processing.
Still, there are ways to accomplish what you are asking for. You may use AVAssetWriter, etc.
Check my answer on a similar question.
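As one concrete alternative (a hedged Swift sketch, not the AVAssetWriter route above): Brad Larson's GPUImage has a chroma-key filter that zeroes the alpha of pixels near a chosen color, and GPUImageMovie can wrap an AVPlayerItem, which also covers the progressive-download case. remoteURL and the key color are assumptions.

import AVFoundation
import GPUImage  // Brad Larson's GPUImage, used from Swift

// Make pixels close to the chosen RGB value transparent during playback.
let playerItem = AVPlayerItem(url: remoteURL) // remoteURL is a placeholder
let player = AVPlayer(playerItem: playerItem)

let movie = GPUImageMovie(playerItem: playerItem)

let chromaKey = GPUImageChromaKeyFilter()
chromaKey.setColorToReplaceRed(0.0, green: 0.0, blue: 1.0) // key out blue
chromaKey.thresholdSensitivity = 0.4

let movieView = GPUImageView(frame: view.bounds)
// Transparent fill so the views underneath show through keyed-out pixels.
movieView.setBackgroundColorRed(0.0, green: 0.0, blue: 0.0, alpha: 0.0)
view.addSubview(movieView)

movie?.addTarget(chromaKey)
chromaKey.addTarget(movieView)

movie?.startProcessing()
player.play()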
My application uses VMR9 Renderless mode to play a WMV file. I build a filter graph with IGraphBuilder::RenderFile and control playback with IMediaControl. Everything plays okay, but I can't figure out how to determine the source video size. Any ideas?
Note: This question was asked before in How can I adjust the video to a specified size in VMR9 renderless mode?. But the solution was to use Windowless mode instead of Renderless mode, which would require rewriting my code.
First you want the video renderer. You can find it by calling EnumFilters on the IGraphBuilder interface. Then call EnumPins on that filter to find its input pin, and call ConnectionMediaType on the pin to get the media type being fed into the filter. Depending on what formattype is set to, you can cast the pbFormat pointer to the relevant structure (e.g. VIDEOINFOHEADER) and read the video size from it. If you want the size before that (to check whether any scaling is going on), work your way back across the connection using ConnectedTo to get the upstream filter, find its input pins, and repeat the ConnectionMediaType call until you reach the pin you want.
You could use the MediaInfo project at http://mediainfo.sourceforge.net/hr/Download/Windows and, through the C# wrapper included in the VCS2010 or VCS2008 folders, get all the information you need about a video.
EDIT: Sorry, I thought you were on managed code. But MediaInfo can be used in either case, so maybe it helps.