How does one store the "original capture date" metadata item for a QuickTime movie?

QuickTime has a rich metadata API, allowing one to store all sorts of arbitrary data on a .mov file (or one of its streams). I'm looking for the standard key name and value format for storing the shooting date of a video clip, analogous to EXIF's DateTimeOriginal.
The following discussion on the Apple site makes it seem like there may not be one defined by Apple, as they don't seem to consider it very important.
http://discussions.apple.com/message.jspa?messageID=6267622
This is related to How can I get the original capture timestamp from my home movie files: AVI and MPG4? (which deals with .mp4 and .avi).

I'm afraid there is no standard key for this kind of metadata.
You might try to use a reasonably fitting standard key like
kQTMetaDataCommonKeyInformation,
kQTMetaDataCommonKeyDescription or
kQTMetaDataCommonKeyProducer
although this wouldn't be 'standard' (i.e., it would most likely only be processed correctly by your own application).
As for which value format to use, the following Q&A article and sample code (although not an exact fit) might set you on the right track:
http://developer.apple.com/qa/qa2007/qa1515.html
http://developer.apple.com/samplecode/QTMetaData/listing1.html
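
For what it's worth, the AVFoundation metadata API that later replaced the C-based QTMetaData calls does define a fitting key, com.apple.quicktime.creationdate (it is what iPhone-recorded movies use for the capture time). A minimal Swift sketch along those lines, with placeholder URLs and a placeholder ISO 8601 date:

```swift
import AVFoundation

// Minimal sketch: re-export a movie with a QuickTime creation-date item
// attached. The source/destination URLs and the date string are placeholders.
func stampCaptureDate(source: URL, destination: URL) {
    let asset = AVURLAsset(url: source)

    // "mdta/com.apple.quicktime.creationdate", the closest QuickTime
    // analogue to EXIF's DateTimeOriginal.
    let item = AVMutableMetadataItem()
    item.identifier = .quickTimeMetadataCreationDate
    item.value = "2008-05-17T14:32:00+0200" as NSString  // ISO 8601
    item.extendedLanguageTag = "und"

    // Passthrough export: no re-encoding, just rewrites the container.
    guard let export = AVAssetExportSession(asset: asset,
                                            presetName: AVAssetExportPresetPassthrough)
    else { return }
    export.outputURL = destination
    export.outputFileType = .mov
    export.metadata = [item]
    export.exportAsynchronously {
        // Inspect export.status / export.error when done.
    }
}
```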

Related

YouTube Stats for Nerds: What does the (137/140) behind "DASH: yes" stand for?

YouTube shows some more or less informative details if you right-click on a video and choose "Stats for Nerds".
Unfortunately, I couldn't find any documentation for the fields shown.
Depending on your configuration, it may stream using MPEG-DASH, in which case the stats include a line like "DASH: yes (137/140)". But what does the "(137/140)" stand for?
PS: in case someone is wondering, the "codecs" string seems to be specified by RFC 6381.
These are itag values that refer to different video and/or audio formats. The first number corresponds to the video stream and the second to the audio stream. You can actually see these numbers change when you select a different resolution in the player's settings.
Many, but probably not all, of the itags are documented on Wikipedia.
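
For the specific pair above, the community-maintained itag lists (e.g. the Wikipedia article just mentioned) give roughly the following; this little lookup is illustrative only, since there is no official, documented mapping from YouTube:

```swift
// Illustrative lookup for a few common itags, based on community-maintained
// lists (not an official YouTube API).
let itagFormats: [Int: String] = [
    137: "MP4 video only, 1080p H.264",    // the video half of "(137/140)"
    140: "M4A audio only, AAC ~128 kbps",  // the audio half
    22:  "MP4 720p, muxed video+audio (non-DASH)",
]

func describeDashPair(video: Int, audio: Int) -> String {
    let v = itagFormats[video] ?? "undocumented itag \(video)"
    let a = itagFormats[audio] ?? "undocumented itag \(audio)"
    return "video: \(v); audio: \(a)"
}

print(describeDashPair(video: 137, audio: 140))
// video: MP4 video only, 1080p H.264; audio: M4A audio only, AAC ~128 kbps
```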

Can YouTube IDs be reissued after a video is deleted?

Suppose I have a video on YouTube that gets the URL https://www.youtube.com/watch?v=vWSyMuKkXXX (not a real video/ID, fwiw). If I delete that video, what are the chances that "vWSyMuKkXXX" will get reassigned to another video that somebody else puts up? 62^11 (is that right?) is a pretty large space from which to be assigning symbols, but YouTube must be doing some uniqueness test to avoid duplicates. The question, I guess, would then be whether they're including deleted IDs in that test (at least, given the way I'm guessing what they're doing internally).
This question is all about how much work I have to do to figure out whether the video corresponding to an ID exists and that it is the video that I think it is -- whether I can get away with using a simple call to http://www.youtube.com/oembed?..., or whether I need to get authentication and the APIs involved (which might still not resolve the question). Any thoughts? Thanks!
If YouTube were to ever reuse IDs, it would cause problems such as old links pointing to new (possibly unlisted) videos. There is no advantage in reusing IDs, only problems, including privacy problems. It would be an ugly bug.
To support unlisted videos, the IDs cannot be sequential. They must come from a large space of possible values: the 11-character IDs are drawn from a 64-character alphabet (A-Z, a-z, 0-9, - and _), so the space is 64^11 = 2^66, about 7.4 × 10^19, even larger than your 62^11 estimate.
how much work I have to do to figure out whether the video corresponding to an ID exists
You must send a query to YouTube.
and that it is the video that I think it is
How do you define "the video that I think it is"? By ID? By watching the transcoded video at your current resolution -- where the individual pixels might not match the uploaded pixels?
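
As for the practical check: the oEmbed call you mention is enough to learn whether an ID currently resolves to a public video and what its title is, though it cannot distinguish "deleted" from "never existed" or "private". A hedged Swift sketch:

```swift
import Foundation

// Sketch of the "simple oEmbed call" from the question: returns the video
// title when the ID resolves to a public video, nil otherwise. Note that
// oEmbed cannot tell "deleted" apart from "never existed" or "private".
func fetchTitle(videoID: String) async throws -> String? {
    var components = URLComponents(string: "https://www.youtube.com/oembed")!
    components.queryItems = [
        URLQueryItem(name: "url", value: "https://www.youtube.com/watch?v=\(videoID)"),
        URLQueryItem(name: "format", value: "json"),
    ]
    let (data, response) = try await URLSession.shared.data(from: components.url!)
    guard (response as? HTTPURLResponse)?.statusCode == 200 else { return nil }
    let json = try JSONSerialization.jsonObject(with: data) as? [String: Any]
    return json?["title"] as? String  // compare against the title you expect
}
```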

iOS capture audio playback into a file

I'd like to know if it's possible to record audio playback into a file on iOS without using the microphone. In other words, is it possible to capture the audio playback's "raw data" into a file?
I tried AVAudioRecorder, but there is no option to record without using the microphone. Any ideas? Thanks.
Yes!
As asked, your question is generic, and the one-word answer above covers it.
More specifically, as with most things, several approaches are possible, and the final solution will depend on your needs. For example, if you're constrained to, or prefer, "playing" via AVPlayer, you can use an MTAudioProcessingTap (see Apple's AudioTapProcessor sample) to gain access to the raw data. Similarly, you can build an AudioGraph and use an AudioUnit as the player, along with Audio File Services to both read (which gives you access to the raw data) and write the file. There are various alternatives depending on your needs and, in some cases, your preferences. Providing some code, or a better explanation of your needs, might elicit a more comprehensive answer.
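
To make one of those routes concrete, here is a minimal sketch (placeholder URLs, error handling elided) using AVAudioEngine rather than AVPlayer: tap the mixer output, and every buffer you receive is the playback's raw data; the microphone is never involved.

```swift
import AVFoundation

// Sketch: play a file through AVAudioEngine and capture the mixer output
// into a second file. No microphone, no AVAudioRecorder.
func playAndCapture(input: URL, capture: URL) throws {
    let engine = AVAudioEngine()
    let player = AVAudioPlayerNode()
    engine.attach(player)

    let file = try AVAudioFile(forReading: input)
    engine.connect(player, to: engine.mainMixerNode, format: file.processingFormat)

    // Write everything that reaches the mixer into a file.
    let format = engine.mainMixerNode.outputFormat(forBus: 0)
    let output = try AVAudioFile(forWriting: capture, settings: format.settings)
    engine.mainMixerNode.installTap(onBus: 0, bufferSize: 4096, format: format) { buffer, _ in
        try? output.write(from: buffer)
    }

    player.scheduleFile(file, at: nil)
    try engine.start()
    player.play()
    // Stop the engine and removeTap(onBus: 0) when playback finishes.
}
```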

OpenCV video frame metadata write and read

I would like to encode a date/time stamp in each frame of a video in a way that can be easily read back by a computer. On my system the frame rate is variable, so counting frames does not seem like a good solution. I have it displaying the date and time in human-readable form (text) on the frame, but reading that back into the computer doesn't appear to be as trivial as I would like. The recorded videos are large (tens of GB) and long, so writing a separate text file also seems troublesome, besides being one more file to keep track of. Is there a way to store frame-by-frame information in a video?
There are several ways you can do this.
If your compression is not very strong, you may be able to encode the timestamp in the top or bottom row of your image, which usually doesn't contain much valuable information. You can add some form of error detection or correction (e.g. a CRC) to catch any corruption introduced by the compressor; see the sketch after this answer.
A more general solution (which I have used in the past) is to have the video file, e.g. AVI, contain a separate text stream. Most container formats, not just AVI, support multiple streams, since these are used for extra audio tracks, subtitles, etc. The drawback here is that there aren't many tools that allow you to write these streams, so you'll have to implement it yourself (using the relevant APIs) for each video format you want to support. In a way this is similar to keeping a text file next to your video, only the file's content is multiplexed inside the same video file as a separate stream.
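
A hedged sketch of the first idea (Swift here for consistency with the other examples in this thread; the same bit-packing works on an OpenCV Mat row): a 64-bit millisecond timestamp plus a toy checksum is written into the top row as black/white runs of 8 pixels, and decoding averages each run so mild compression noise is tolerated. A real implementation would substitute a proper CRC.

```swift
import Foundation

// Sketch: encode a 64-bit millisecond timestamp plus a 1-byte checksum into
// the top row of a grayscale frame, one bit per run of `blockWidth` pixels
// (0 or 255). 72 bits x 8 pixels = 576 pixels of row width required.
let blockWidth = 8

func encodeTimestamp(_ millis: UInt64, into row: inout [UInt8]) {
    // Toy checksum; a real CRC (e.g. CRC-16) is the better choice.
    let checksum = UInt8(truncatingIfNeeded: millis ^ (millis >> 32))
    let payload = withUnsafeBytes(of: millis.bigEndian) { Array($0) } + [checksum]
    for bitIndex in 0..<(payload.count * 8) {
        let bit = (payload[bitIndex / 8] >> (7 - bitIndex % 8)) & 1
        let start = bitIndex * blockWidth
        for x in start..<min(start + blockWidth, row.count) {
            row[x] = bit == 1 ? 255 : 0
        }
    }
}

func decodeTimestamp(from row: [UInt8]) -> UInt64? {
    var payload = [UInt8](repeating: 0, count: 9)
    for bitIndex in 0..<(payload.count * 8) {
        let start = bitIndex * blockWidth
        guard start + blockWidth <= row.count else { return nil }
        // Average each run and threshold, tolerating mild codec noise.
        let avg = row[start..<(start + blockWidth)].reduce(0) { $0 + Int($1) } / blockWidth
        if avg > 127 { payload[bitIndex / 8] |= UInt8(0x80 >> (bitIndex % 8)) }
    }
    let millis = payload[0..<8].reduce(UInt64(0)) { ($0 << 8) | UInt64($1) }
    let expected = UInt8(truncatingIfNeeded: millis ^ (millis >> 32))
    return payload[8] == expected ? millis : nil  // nil = corrupted by the codec
}
```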

Difference between ALAssetType and UTType?

What's the distinction between these two types?
When do you use one versus the other?
Is there a way to translate between the two?
They're completely different. UTType is basically a way to describe a generic file type and give it some meaning to the system, and is used by CoreServices. It's totally generic and can be used to classify all kinds of resources to the system. For more information on UTTypes, see the Overview section of the UTType Reference documentation.
ALAssetType is just a way to describe the type of asset you want back from the asset library, and is only used by the AssetsLibrary framework on iOS. It's basically a string constant that tells the asset library whether you want to work with still images or videos (since the AssetsLibrary framework is the programmatic access to the user's photo/video library on the device). Unlike a UTType, this constant gives no information about the actual format or arrangement of the asset (for example: is it an H.264-encoded movie in an M4V container, a TIFF image, or a JPEG?); it just says that you're interested in either movies or images.
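
To make the contrast concrete, a small sketch against the (now long-deprecated) AssetsLibrary API; assume `asset` is an ALAsset you have already fetched, and note the exact Swift spelling of the bridged methods may vary by SDK:

```swift
import AssetsLibrary
import MobileCoreServices

// Sketch: the two "type" levels side by side for one asset.
func describe(asset: ALAsset) {
    // ALAssetType level: photo or video? That is all this constant says.
    let isVideo = (asset.value(forProperty: ALAssetPropertyType) as? String) == ALAssetTypeVideo

    // UTType level: the actual file format of the backing resource.
    if let uti = asset.defaultRepresentation()?.uti() {
        // e.g. "com.apple.quicktime-movie"; UTTypeConformsTo answers
        // format questions that ALAssetType cannot.
        let isMPEG4 = UTTypeConformsTo(uti as CFString, kUTTypeMPEG4)
        print("video: \(isVideo), UTI: \(uti), MPEG-4: \(isMPEG4)")
    }
}
```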
