How to find the source video size using VMR9 renderless mode - directx

My application uses VMR9 Renderless mode to play a WMV file. I build a filter graph with IGraphBuilder::RenderFile and control playback with IMediaControl. Everything plays okay, but I can't figure out how to determine the source video size. Any ideas?
Note: This question was asked before in "How can I adjust the video to a specified size in VMR9 renderless mode?", but the solution there was to use Windowless mode instead of Renderless mode, which would require rewriting my code.

First you want to find the video renderer. You can do this by using EnumFilters on the IGraphBuilder interface. Then call EnumPins on that filter to find its input pin, and call ConnectionMediaType on that pin to get the media type being fed into the renderer. Depending on what formattype is set to, you can cast the pbFormat pointer to the relevant structure (for example VIDEOINFOHEADER for FORMAT_VideoInfo) and read the video size from there. If you want the size further upstream (to see whether any scaling is going on), use ConnectedTo to get back to the output pin of the previous filter, find that filter's input pins, and repeat the ConnectionMediaType call. Repeat until you reach the filter's pin that you want.
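To make that concrete, here is a minimal sketch (not from the original answer) that reads the dimensions from a filter's input pin. It assumes you have already located the video renderer via EnumFilters and that the connection uses FORMAT_VideoInfo; a FORMAT_VideoInfo2 connection would need the analogous VIDEOINFOHEADER2 cast.

#include <dshow.h>

// Read the video dimensions from a filter's connected input pin.
// Assumes the connection uses FORMAT_VideoInfo.
HRESULT GetConnectedVideoSize(IBaseFilter *pFilter, LONG *pWidth, LONG *pHeight)
{
    IEnumPins *pEnum = NULL;
    HRESULT hr = pFilter->EnumPins(&pEnum);
    if (FAILED(hr)) return hr;

    hr = E_FAIL;
    IPin *pPin = NULL;
    while (pEnum->Next(1, &pPin, NULL) == S_OK)
    {
        PIN_DIRECTION dir;
        pPin->QueryDirection(&dir);
        if (dir == PINDIR_INPUT)
        {
            AM_MEDIA_TYPE mt;
            if (SUCCEEDED(pPin->ConnectionMediaType(&mt)))
            {
                if (mt.formattype == FORMAT_VideoInfo && mt.cbFormat >= sizeof(VIDEOINFOHEADER))
                {
                    VIDEOINFOHEADER *pVih = (VIDEOINFOHEADER *)mt.pbFormat;
                    *pWidth  = pVih->bmiHeader.biWidth;
                    *pHeight = pVih->bmiHeader.biHeight;   // may be negative for top-down formats
                    hr = S_OK;
                }
                // Free the media type contents (equivalent of FreeMediaType in the base classes)
                if (mt.cbFormat) CoTaskMemFree(mt.pbFormat);
                if (mt.pUnk) mt.pUnk->Release();
            }
            pPin->Release();
            break;
        }
        pPin->Release();
    }
    pEnum->Release();
    return hr;
}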

You could use the MediaInfo project at http://mediainfo.sourceforge.net/hr/Download/Windows and, through the C# wrapper included in the VCS2010 or VCS2008 folders, get all the information about a video that you need.
EDIT: Sorry, I thought you were on managed code. But MediaInfo can be used in either case, so maybe it helps.
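For completeness, MediaInfo also ships a native C++ wrapper (MediaInfoDLL.h) if you are not on managed code. A minimal sketch, assuming that wrapper and its "Width"/"Height" parameters, could look like this:

#include "MediaInfoDLL.h"   // MediaInfo's dynamic-loading C++ wrapper
using namespace MediaInfoDLL;

// Query the dimensions MediaInfo reports for the first video stream of a file.
void QueryVideoSize(const String &path)
{
    MediaInfo MI;
    if (MI.Open(path))
    {
        String width  = MI.Get(Stream_Video, 0, __T("Width"));
        String height = MI.Get(Stream_Video, 0, __T("Height"));
        // ... use width/height ...
        MI.Close();
    }
}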

Related

Set interlacing information in QuickTime

I'm trying to set the correct interlacing information via the QuickTime 7 API on a movie that I am creating.
I want to make my movie progressive scan, but when I visually check the output, every frame is squashed into the top half. So even though I make sure QuickTime knows my movie is kQTFieldsProgressiveScan, it still gets confused.
This is what I am doing:
myCreateNewMovie(...);
ICMCompressionSessionOptionsCreate(...);
BeginMediaEdits(media);
myCreate(ImageDescription with appropriate FieldInfoImageDescriptionExtension2);
SetMediaSampleDescription(media, ImageDescription);
and then when writing each frame I add the same description:
ICMImageDescriptionSetProperty(myFieldInfoImageDescription, ...);
AddMediaSample2(...);
From various bits and pieces on the net I got the impression that setting the sample description for the media was getting overwritten. Now I'm setting the FieldInfo data inside my ICM Encoded Frame Output callback and it seems to be satisfactory.

How to play interlaced video where the even lines are in the top half of the picture and the odd lines are in the bottom half?

I have a device which streams H.264 video in the following format: the top half of the picture contains the even lines of the video, and the bottom half contains the odd lines. So the question is: how can I play this video with normal appearance using standard players, ffplay for example?
I know about the "tinterlace=merge" filter in ffmpeg, but it combines two consecutive pictures into one. So my task is to make a correct video from a single frame.
Regards,
Alexey.
I recently had to deal with the exact same problem.
There are many different methods, and the optimum solution depends entirely on your situation.
The simplest and fastest method is weaving the two fields together, which is perfect for static parts of the picture but creates a comb effect on moving objects.
More complicated methods use motion detection.
What I did was merge the two fields and then apply Edge-Line Averaging (ELA) on the moving segments to reduce the comb effect.
Check this link for a detailed explanation of the problem.
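For illustration (not part of the original answer), here is a minimal sketch of the weaving step for the layout described in the question - a single 8-bit plane whose top half holds the even lines and whose bottom half holds the odd lines. The buffer layout and names are assumptions:

#include <cstdint>
#include <cstring>

// Reorder one plane of a "top half = even lines, bottom half = odd lines" frame
// into normal line order. width is the number of bytes per line in this plane.
void WeaveHalvesIntoFrame(const uint8_t *src, uint8_t *dst, int width, int height)
{
    const int half = height / 2;
    for (int y = 0; y < half; ++y)
    {
        // line y of the top half becomes even output line 2*y
        std::memcpy(dst + (2 * y) * width, src + y * width, width);
        // line y of the bottom half becomes odd output line 2*y + 1
        std::memcpy(dst + (2 * y + 1) * width, src + (half + y) * width, width);
    }
}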
It would be good if you could provide a sample video file. You describe very well what the picture looks like, but the file may contain other information that is helpful for playback.
Furthermore, the format you describe doesn't sound like a standard format, so it's unlikely you will get a regular player to play it the way you want, out-of-the-box. If you're using ffplay, it's likely that you will have to write your own plugin to re-order the scanlines prior to displaying them.
Alternatively, you could re-encode the video into a standard format (interlaced or deinterlaced) using ffmpeg. You could then play it back in any regular player, like ffplay or VLC.
Finally, I recommend asking your question on the ffmpeg mailing list.

iOS AVPlayer: How to slow down a 30fps video to 1fps

I have a 30fps QuickTime .mov of still images that I created with AVAssetWriter. (It's only about 10 frames long.) I would like the user to be able to slow it down to about 1fps using a UISlider, but when I adjust the AVPlayer .rate property from 1 down to 0, it doesn't get anywhere near 1fps; it just stops playback (because a rate of 0 effectively stops/pauses it, which makes sense). But how can I slow the player down to about 1fps? I think I'd need to do some math to calculate the actual rate, but that's where I'm stuck. Would it end up being something like 0.000000000000001?
Thanks!
If this were a requirement of mine, I would approach it as follows (also suggested by Inafziger in the comments): use AVAssetReader and roll my own viewer for the images. This gives you precise control using a timer, as stated in your comments. Make sure you reuse some preallocated image memory (you can probably get away with space for a single image). I would probably take a pull approach, like Core Audio: when you need an image, pull it from an image buffer manager class that calls AVAssetReader's read function. This way you can have N buffers that are always available, though that may be a little overkill. I do believe AVAssetReader pre-decodes some amount of the movie upon initialization, which is why I say you can more than likely get away with a single buffer for reading image data into.
Regarding your comment about memory issues: I do believe there are some functions in AVAssetReader and its associated classes that follow the Create Rule.

Recreating Theater Mode with DirectX

I need to simultaneously display a video that is playing in my application, full screen on a larger monitor. On some video cards this is called Theater mode and is configured using a tool that the card manufacturer supplies.
I would like to do this with only software. Can I do this with DirectX?
My idea is to take the currently active video playing using DirectShow and repaint it on a second display (as configured by the user) in full screen mode.
What technologies or methods would I use for this?
The straightforward way is to split the still-encoded video into two branches and use two video renderers, each set to present video on a different monitor. One renderer could be part of your application UI; the other could expand full screen on the large secondary monitor.
Splitting the encoded video gives you the option to still leverage hardware-assisted decoding (DXVA) where available. You might prefer to use a software-only decoder and split the already decoded video instead - that will also work.
You might additionally want to implement a filter that can temporarily disable one renderer or the other, for example by no longer passing media samples through to it.
Another option is to use bridging for even more flexible control over the renderers, including the ability to detach them from the media source.
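As a rough sketch of the presentation side only (the tee/splitting of the graph is not shown), assuming you already have the second renderer filter in the graph and have obtained the secondary monitor's bounding rectangle (for example via EnumDisplayMonitors), you could drive it full screen through its IVideoWindow interface along these lines:

#include <dshow.h>

// Show one renderer's video window borderless and full screen on a given monitor.
// pRenderer is the renderer feeding the large display; monitorRect is that
// monitor's bounding rectangle (both are assumptions of this sketch).
HRESULT ShowRendererFullScreenOn(IBaseFilter *pRenderer, const RECT &monitorRect)
{
    IVideoWindow *pVW = NULL;
    HRESULT hr = pRenderer->QueryInterface(IID_IVideoWindow, (void**)&pVW);
    if (FAILED(hr))
        return hr;                      // fails if the renderer exposes no video window

    pVW->put_Owner(NULL);               // top-level window, not parented to the app UI
    pVW->put_WindowStyle(WS_POPUP);     // borderless
    pVW->SetWindowPosition(monitorRect.left, monitorRect.top,
                           monitorRect.right - monitorRect.left,
                           monitorRect.bottom - monitorRect.top);
    pVW->put_Visible(OATRUE);

    pVW->Release();
    return S_OK;
}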

Can I create a blue-screen effect with MPMoviePlayerController?

I am showing a video inline (not fullscreen) using MPMoviePlayerController. I am using this class because it is the only player I got working using a remote file (progressive download) and not a local file.
Is there any way to create a blue-screen effect? What I basically mean is: decide on a certain RGB value and set the alpha of matching pixels to 0. Is it possible to perform any image processing per frame with MPMoviePlayerController?
You cannot use MPMoviePlayerController for that kind of movie processing.
Still, there are ways to accomplish what you are asking for. You may use AVAssetWriter, etc.
Check my answer on a similar question.
