I'm trying to set the correct interlacing information via the QuickTime 7 API on a movie that I am creating.
I want to make my movie progressive scan, but when I visually check the output, every frame is squashed into the top half. So even though I make sure QuickTime knows my movie is kQTFieldsProgressiveScan, it still gets confused.
This is what I am doing:
myCreateNewMovie(...);
ICMCompressionSessionOptionsCreate(...);
BeginMediaEdits(media);
myCreate(ImageDescription with appropriate FieldInfoImageDescriptionExtension2);
SetMediaSampleDescription(media, ImageDescription);
and then when writing each frame I add the same description:
ICMImageDescriptionSetProperty(myFieldInfoImageDescription, ...);
AddMediaSample2(...);
From various bits and pieces on the net I got the impression that the sample description I set on the media was being overwritten. I now set the FieldInfo data inside my ICM encoded-frame output callback instead, and that seems to work.
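For reference, the callback-based approach that worked looks roughly like this (pseudocode; the property and constant names are my best recollection from ImageCompression.h, so verify them against the headers):

```
// Pseudocode sketch: set the field info on the frame's image
// description inside the ICM encoded-frame output callback.
myEncodedFrameOutputCallback(..., ICMEncodedFrameRef frame, ...)
{
    ImageDescriptionHandle desc;
    ICMEncodedFrameGetImageDescription(frame, &desc);

    FieldInfoImageDescriptionExtension2 fieldInfo = {
        kQTFieldsProgressiveScan,   // fields: 1 = progressive
        kQTFieldDetailUnknown       // detail: no field ordering
    };
    ICMImageDescriptionSetProperty(desc,
        kQTPropertyClass_ImageDescription,
        kICMImageDescriptionPropertyID_FieldInfo,
        sizeof(fieldInfo), &fieldInfo);

    AddMediaSample2(...);   // the sample now carries the progressive flag
}
```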
I want to track a moving object in a video using SwisTrack (https://en.wikibooks.org/wiki/SwisTrack).
I will use a simple background subtraction algorithm for that. Therefore, I need a snapshot of the first frame of my movie.
The movies are in .avi format, and I have tried taking snapshots using GNOME player and MPlayer (on Ubuntu) and VLC (on Windows). However, I always run into the same problem: my movie has dimensions 720×576, but any screenshot I take has dimensions 768×576. This makes background subtraction impossible, and it makes SwisTrack complain.
I have no idea what is going wrong here. Is it the movie format? I uploaded a movie and a screenshot at the URL below, so you could perhaps try it and see if you get the same results.
https://perswww.kuleuven.be/~u0065551/movies_and_snapshots/
The thing is, I want to batch process my videos using e.g. MPlayer, always automatically saving a movie in its folder together with the snapshot of the first frame and the mask made from it, so I can very easily read that into SwisTrack.
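A likely explanation for the mismatch: 720×576 is the stored PAL frame size with non-square (anamorphic) pixels, while "what you see" screenshot tools first rescale to square pixels at the flagged 4:3 display aspect ratio. A quick sanity check of the arithmetic (plain Python):

```python
# PAL material is stored at 720x576 with non-square pixels.
# Screenshot tools that capture the displayed image rescale to
# square pixels at the 4:3 display aspect ratio first:
stored_width, height = 720, 576
display_width = round(height * 4 / 3)   # width of a 4:3 frame at square pixels
print(display_width, height)            # 768 576 - the screenshot size
```

If that is what is happening, telling the screenshot tool to skip aspect-ratio scaling (or scaling the snapshot back to 720×576 before the subtraction) should make the snapshot match the movie frames.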
Thanks a lot for your help!
I have been using MPMoviePlayer and its playableDuration property to check the available duration of a movie.
The playable duration always seems to be ~1 second further along than my current position, and basically I would like to increase this.
I have tried using prepareToPlay, but it seems to do nothing noticeable to playableDuration.
I have tried pre-emptively setting as many defaults as possible to try to change the value, such as the MPMovieSourceType, the media type, and the like, but all to no avail.
Just to clear a few things up first: I am using both MPMoviePlayer and AVPlayer, which play different streams simultaneously, as the video/audio I am using is split.
EDIT
Seems like I overlooked the file size affecting the stream and should have read more in the Apple resources than elsewhere. As far as I can tell the issue is: the file size is too large, and therefore a server-side media segmenter has to be implemented.
Apple Resource on Media Segmenting
I have a 30fps QuickTime .mov of still images that I created with AVAssetWriter (it's only about 10 frames long). I would like the user to be able to slow it down to about 1fps using a UISlider, but when I adjust the AVPlayer rate property from 1 down toward 0, it doesn't get anywhere near 1fps; it just stops playback (because a rate of 0 effectively stops/pauses it, which makes sense). But how can I slow the player down to about 1fps? I think I'd need to do some math to calculate the actual rate, but that's where I'm stuck. Would it end up being something like 0.000000000000001?
Thanks!
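On the math itself: AVPlayer's rate is a multiplier of the natural playback speed, not a frame rate, so the value you want is just the ratio of the two frame rates. A quick sketch (plain Python arithmetic; whether AVPlayer actually renders smoothly at such a low rate is a separate question):

```python
native_fps = 30.0   # the movie's natural frame rate
target_fps = 1.0    # what the slider should reach at its minimum
rate = target_fps / native_fps   # the value to assign to AVPlayer's rate
print(rate)         # 0.0333... - nowhere near 1e-15
```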
If this were a requirement of mine, I would approach it as follows (as also suggested by Inafziger in the comments): use AVAssetReader and roll my own viewer for the images. This gives you precise control using a timer, as stated in your comments. Make sure you reuse some preallocated image memory (you can probably get away with space for a single image). I would probably take a pull approach like Core Audio: when you need an image, pull it from an image buffer manager class which calls AVAssetReader's read function. This way you can have N buffers that will always be available. That may be a little overkill, though; I believe AVAssetReader pre-decodes some amount of the movie upon initialization, which is why I say you can more than likely get away with a single buffer for reading image data into.
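The pull model is easy to sketch outside of AV Foundation; here it is in Python, with a hypothetical decode_next_frame callable standing in for the AVAssetReader read call:

```python
from collections import deque

class FrameBufferManager:
    """Pull-based buffer manager: the viewer asks for frames, and the
    manager refills from the decoder only on demand (like Core Audio)."""

    def __init__(self, decode_next_frame, n_buffers=1):
        # decode_next_frame stands in for the AVAssetReader read; it
        # should return None when the movie is exhausted.
        self.decode_next_frame = decode_next_frame
        self.buffers = deque(maxlen=n_buffers)  # preallocated/reused in a real impl

    def pull(self):
        # Refill lazily: only decode when the viewer actually asks.
        if not self.buffers:
            frame = self.decode_next_frame()
            if frame is None:
                return None          # end of movie
            self.buffers.append(frame)
        return self.buffers.popleft()

# Usage with a fake three-frame "movie":
frames = iter(["f0", "f1", "f2"])
mgr = FrameBufferManager(lambda: next(frames, None))
print([mgr.pull() for _ in range(4)])  # ['f0', 'f1', 'f2', None]
```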
Regarding your comment about memory issues: I believe there are some functions in AVAssetReader and associated classes that follow the create rule.
My application uses VMR9 Renderless mode to play a WMV file. I build a filter graph with IGraphBuilder::RenderFile and control playback with IMediaControl. Everything plays okay, but I can't figure out how to determine the source video size. Any ideas?
Note: This question was asked before in How can I adjust the video to a specified size in VMR9 renderless mode?. But the solution was to use Windowless mode instead of Renderless mode, which would require rewriting my code.
First you want the video renderer. You can find it by using EnumFilters on the IGraphBuilder interface. Then call EnumPins on that filter to find its input pin, and call ConnectionMediaType to get the media type being fed into the filter. Depending on what formattype is set to, you can cast the pbFormat pointer to the relevant structure and find the video size there. If you want the size further upstream (to see whether some scaling is going on), you can work your way back across the connection using ConnectedTo to get the next filter back, then find its input pins and repeat the ConnectionMediaType call. Repeat until you reach the pin of the filter you want.
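In pseudocode, that walk looks roughly like this for the common FORMAT_VideoInfo case (always check formattype before casting pbFormat):

```
// Pseudocode sketch of the pin walk (error handling and Release calls omitted)
pGraph->EnumFilters(&enumFilters);
for each filter:                          // locate the video renderer
    filter->EnumPins(&enumPins);
    for each input pin:
        pin->ConnectionMediaType(&mt);
        if (mt.formattype == FORMAT_VideoInfo) {
            VIDEOINFOHEADER *vih = (VIDEOINFOHEADER *)mt.pbFormat;
            width  = vih->bmiHeader.biWidth;
            height = vih->bmiHeader.biHeight;
        }
        // to look further upstream for pre-scaling sizes:
        pin->ConnectedTo(&upstreamOutputPin);
        upstreamOutputPin->QueryPinInfo(&info);  // info.pFilter is the previous filter
```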
You could use the MediaInfo project (http://mediainfo.sourceforge.net/hr/Download/Windows) and, through the C# wrapper included in the VCS2010 or VCS2008 folders, get all the information about a video you need.
EDIT: Sorry, I thought you were using managed code. But MediaInfo can be used in either case, so maybe it helps.
I have an AVMutableComposition with a video track and I would like to add a still image into the video track, to be displayed for some given time. The still image is simply a PNG. I can load the image as an asset, but that’s about it, because the resulting asset does not have any tracks and therefore cannot be simply inserted using the insertTimeRange… methods.
Is there a way to add still images to a composition? It looks like the answer is somewhere in Core Animation, but the whole thing seems to be a bit above my head and I would appreciate a code sample or some information pointers.
OK. There's a great WWDC video called Editing Media with AV Foundation that explains a lot. You can't insert images directly into the AVComposition timeline; at least I did not find any way to do that. But when exporting or playing an asset you can refer to an AVVideoComposition. That's maybe not a perfect name for the class, since it allows you to mix between various video tracks in the asset, much like AVAudioMix does for audio. The AVVideoComposition also has an animationTool property that lets you throw Core Animation layers (CALayer) into the mix, and CALayer has a contents property that can be assigned a CGImageRef. That does not help in my case, but it might help somebody else.
I also need still images in my composition, but my line of thinking is a little different: insert on-the-fly movies of black frames in place of where the images should appear (possibly a single such video would suffice), and keep a dictionary linking composition time ranges to the bona fide desired images. When the correct time range arrives in my full-time custom compositor, I pull out the desired image and paint it into the output pixel buffer, ignoring the incoming black frames from the composition. I think that would be another way of doing it.
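The bookkeeping half of that idea (linking composition time ranges to the images) might look like this; a Python sketch for illustration, where the real lookup would live in the custom compositor and the times would be CMTimeRanges:

```python
# Hypothetical time-range -> image table for the custom-compositor idea:
# each entry maps a (start, end) range in composition time (seconds)
# to the still image that replaces the black placeholder frames.
image_for_range = {
    (0.0, 2.0): "intro.png",
    (5.0, 6.5): "logo.png",
}

def image_at(t):
    """Return the still image covering composition time t, or None when
    the compositor should pass the incoming video frame through."""
    for (start, end), image in image_for_range.items():
        if start <= t < end:
            return image
    return None

print(image_at(1.0))  # intro.png
print(image_at(3.0))  # None - outside every range, keep the video frame
```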