How can I get image data from QTKit without color or gamma correction in Snow Leopard?

Since Snow Leopard, QTKit is now returning color-corrected image data from functions like QTMovie's frameImageAtTime:withAttributes:error:. Given an uncompressed AVI file, the same image data is displayed with larger pixel values in Snow Leopard vs. Leopard.
Currently I'm using frameImageAtTime: to get an NSImage, then asking for the TIFFRepresentation of that image. After doing this, pixel values are slightly higher in Snow Leopard.
For example, a file with the following pixel value in Leopard:
[0 180 0]
Now has a pixel value like:
[0 192 0]
Is there any way to ask a QTMovie for video frames that are not color corrected? Should I be asking for a CGImageRef, CIImage, or CVPixelBufferRef instead? Is there a way to disable color correction altogether prior to reading in the video files?
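For context, this is roughly how I would expect to request a CVPixelBufferRef instead of an NSImage, reusing _theMovie and frameTime from the code below; I haven't confirmed whether this path avoids the correction:
// Hypothetical sketch: ask QTKit for a CVPixelBufferRef instead of an NSImage
NSDictionary *pbAttributes = [NSDictionary dictionaryWithObjectsAndKeys:
                              QTMovieFrameImageTypeCVPixelBufferRef, QTMovieFrameImageType,
                              (id)nil];
NSError *pbError = nil;
CVPixelBufferRef pixelBuffer = (CVPixelBufferRef)[_theMovie frameImageAtTime:frameTime
                                                              withAttributes:pbAttributes
                                                                       error:&pbError];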
I've attempted to work around this issue by drawing into an NSBitmapImageRep with NSCalibratedRGBColorSpace, but that only gets me part of the way there:
// Create a movie
NSDictionary *dict = [NSDictionary dictionaryWithObjectsAndKeys:
                      nsFileName, QTMovieFileNameAttribute,
                      [NSNumber numberWithBool:NO], QTMovieOpenAsyncOKAttribute,
                      [NSNumber numberWithBool:NO], QTMovieLoopsAttribute,
                      [NSNumber numberWithBool:NO], QTMovieLoopsBackAndForthAttribute,
                      (id)nil];
_theMovie = [[QTMovie alloc] initWithAttributes:dict error:&error];
// ....
NSMutableDictionary *imageAttributes = [NSMutableDictionary dictionary];
[imageAttributes setObject:QTMovieFrameImageTypeNSImage forKey:QTMovieFrameImageType];
[imageAttributes setObject:[NSArray arrayWithObject:@"NSBitmapImageRep"] forKey:QTMovieFrameImageRepresentationsType];
[imageAttributes setObject:[NSNumber numberWithBool:YES] forKey:QTMovieFrameImageHighQuality];
NSError* err = nil;
NSImage* image = (NSImage*)[_theMovie frameImageAtTime:frameTime withAttributes:imageAttributes error:&err];
// copy NSImage into an NSBitmapImageRep (Objective-C)
NSBitmapImageRep* bitmap = [[image representations] objectAtIndex:0];
// Draw into a colorspace we know about
NSBitmapImageRep *bitmapWhoseFormatIKnow = [[NSBitmapImageRep alloc]
    initWithBitmapDataPlanes:NULL
                  pixelsWide:getWidth()
                  pixelsHigh:getHeight()
               bitsPerSample:8
             samplesPerPixel:4
                    hasAlpha:YES
                    isPlanar:NO
              colorSpaceName:NSCalibratedRGBColorSpace
                bitmapFormat:0
                 bytesPerRow:(getWidth() * 4)
                bitsPerPixel:32];
[NSGraphicsContext saveGraphicsState];
[NSGraphicsContext setCurrentContext:[NSGraphicsContext graphicsContextWithBitmapImageRep:bitmapWhoseFormatIKnow]];
[bitmap draw];
[NSGraphicsContext restoreGraphicsState];
This does convert back to a 'non color corrected' colorspace, but the color values are NOT exactly the same as what is stored in the uncompressed AVI files we are testing with. Also, this is much less efficient because it converts from RGB -> "Device RGB" -> RGB.
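For what it's worth, this is roughly how I'm checking the resulting pixel values (a sketch; x and y stand in for whichever pixel I'm inspecting):
// Read raw values back from the known-format bitmap (non-planar, 8-bit RGBA as configured above)
unsigned char *pixels = [bitmapWhoseFormatIKnow bitmapData];
NSInteger bytesPerRow = [bitmapWhoseFormatIKnow bytesPerRow];
unsigned char *p = pixels + (y * bytesPerRow) + (x * 4); // x, y are placeholder coordinates
NSLog(@"RGB = [%d %d %d]", p[0], p[1], p[2]);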
Also, I am working in a 64-bit application, so dropping down to the QuickTime C API is not an option.
Thanks for your help.

This is what you're running up against: http://support.apple.com/kb/ht3712
Also described here in reference to QT: http://developer.apple.com/library/mac/#technotes/tn2227/_index.html
There is supposed to be an opt-out flag for ColorSync (which is automatic and silent), but because ColorSync goes back to the "Carbon-iferous era" there's a lot of confusion about how to disable it from a 64-bit app. It may even exist, and just have no documentation. But even Adobe has not figured it out AFAIK (64 bit Photoshop for OS X???) [EDIT: Actually, maybe they did with CS5, or found a workaround... so if you have any friends who work there maybe they can be a resource ;) However, I think even their colors may come out different on 64 bit OS X vs Windows or 32 bit].
The first link tells how to force a gamma value of 1.8 system-wide and non-programmatically. Just pointing it out - it probably is only of limited use for your application, but good to know.
The usual way to disable ColorSync is (or was):
GraphicsImportSetFlags( gi, kGraphicsImporterDontUseColorMatching );
But I don't know if that's going to work in a 64-bit application (pretty sure it does not). In fact, a lot of things related to ColorSync are more difficult, maybe not yet possible, in 64-bit. Just search "site:apple.com colorsync 64 bit".
If I find something better I'll edit my answer. Just thought this would help you know the enemy, so to speak.
Another edit: (Re)Tagging your videos might mitigate or eliminate this problem too, since ColorSync treats untagged videos a certain way (see links) and operates on them differently depending on the tags. What is the color profile of the videos? Try sRGB maybe?

Related

What does `kCGImageDestinationOptimizeColorForSharing` do?

At the time of asking there is zero useful information on https://developer.apple.com/documentation/imageio/kcgimagedestinationoptimizecolorforsharing?language=objc
In the header it says:
/* Create an image using a colorspace, that has is compatible with older devices
* The value should be kCFBooleanTrue or kCFBooleanFalse
* Defaults to kCFBooleanFalse = don't do any color conversion
*/
but it isn't completely clear what is going on.
I note that sips also seems to support it with the incredibly unhelpful information on the manpage:
--optimizeColorForSharing
Optimize color for sharing.
Any details about what is really going on would be appreciated.

React native: Real time camera data without image save and preview

I started working on my first non-demo react-native app. I hope it will be an iOS/Android app, but for now I'm focused on iOS only.
I have one problem at the moment: how can I get data (base64, an array of pixels, ...) in real time from the camera, without saving to the camera roll?
There is this module: https://github.com/lwansbrough/react-native-camera but its base64 output is deprecated and useless for me, because I want to render a processed image to the user (e.g. change the picture's colors), not the raw picture from the camera, which is what the react-native-camera module gives me.
(I know how to communicate with Swift code, but I don't know what the options are in native code; I come from web development.)
Thanks a lot.
This may not be optimal but is what I have been using. If anyone can give a better solution, I would appreciate your help, too!
My basic idea is simply to loop (but not with a simple for-loop; see below), taking still pictures in YUV/RGB format at max resolution, which is reasonably fast (~x0ms with normal exposure duration), and processing them. Basically you set up an AVCaptureStillImageOutput that links to your camera (following the tutorials that are everywhere), then set the format to kCVPixelFormatType_420YpCbCr8BiPlanarFullRange (if you want YUV) or kCVPixelFormatType_32BGRA (if you prefer RGBA), like:
bool usingYUVFormat = true;
NSDictionary *outputFormat = [NSDictionary dictionaryWithObject:
    [NSNumber numberWithInt:usingYUVFormat ? kCVPixelFormatType_420YpCbCr8BiPlanarFullRange : kCVPixelFormatType_32BGRA]
    forKey:(id)kCVPixelBufferPixelFormatTypeKey];
[yourAVCaptureStillImageOutput setOutputSettings:outputFormat];
When you are ready, you can start calling
AVCaptureConnection *captureConnection = [yourAVCaptureStillImageOutput connectionWithMediaType:AVMediaTypeVideo];
[yourAVCaptureStillImageOutput captureStillImageAsynchronouslyFromConnection:captureConnection completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error) {
    if (imageDataSampleBuffer) {
        CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(imageDataSampleBuffer);
        CVPixelBufferLockBaseAddress(imageBuffer, 0);
        // do your magic with the data buffer imageBuffer
        // use CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0/1/2); to get each plane
        // use CVPixelBufferGetWidth/CVPixelBufferGetHeight to get dimensions
        // if you want more, please google
        CVPixelBufferUnlockBaseAddress(imageBuffer, 0); // don't forget to unlock when done
    }
}];
Additionally, use NSNotificationCenter to register your photo-taking action and post a notification after you have processed each frame (perhaps with some delay, to cap your throughput and reduce power consumption) so the loop keeps going.
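A rough sketch of that loop, using a placeholder notification name and selector (adapt to your own setup):
// Register once; -takeNextFrame is whatever method calls captureStillImageAsynchronouslyFromConnection: again
[[NSNotificationCenter defaultCenter] addObserver:self
                                         selector:@selector(takeNextFrame)
                                             name:@"TakeNextFrameNotification"
                                           object:nil];
// ...then, at the end of the completionHandler above, after processing imageBuffer:
dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(0.05 * NSEC_PER_SEC)),
               dispatch_get_main_queue(), ^{
    [[NSNotificationCenter defaultCenter] postNotificationName:@"TakeNextFrameNotification" object:nil];
});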
A quick precaution: the Android counterpart is a much worse headache. Few hardware manufacturers implement an API for max-resolution uncompressed photos; most only offer 1080p for preview/video, as I have raised in my own question. I am still looking for solutions but have mostly given up hope. JPEG images are just too slow.

CVPixelBufferRef as a GPU Texture

I have one (or possibly two) CVPixelBufferRef objects I am processing on the CPU, and then placing the results onto a final CVPixelBufferRef. I would like to do this processing on the GPU using GLSL instead, because the CPU can barely keep up (these are frames of live video). I know this is possible "directly" (i.e. writing my own OpenGL code), but from the (absolutely impenetrable) sample code I've looked at, it's an insane amount of work.
Two options seem to be:
1) GPUImage: This is an awesome library, but I'm a little unclear on whether I can do what I want easily. The first thing I tried was requesting OpenGL ES-compatible pixel buffers using this code:
@{ (NSString *)kCVPixelBufferPixelFormatTypeKey : [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA],
   (NSString *)kCVPixelBufferOpenGLESCompatibilityKey : [NSNumber numberWithBool:YES] };
Then transferring data from the CVPixelBufferRef to GPUImageRawDataInput as follows:
// setup:
_foreground = [[GPUImageRawDataInput alloc] initWithBytes:nil size:CGSizeMake(0, 0) pixelFormat:GPUPixelFormatBGRA type:GPUPixelTypeUByte];
// call for each frame:
[_foreground updateDataFromBytes:CVPixelBufferGetBaseAddress(foregroundPixelBuffer)
size:CGSizeMake(CVPixelBufferGetWidth(foregroundPixelBuffer), CVPixelBufferGetHeight(foregroundPixelBuffer))];
However, my CPU usage goes from 7% to 27% on an iPhone 5S just with that line (no processing or anything). This suggests there's some copying going on on the CPU, or something else is wrong. Am I missing something?
2) OpenFrameworks: OF is commonly used for this type of thing, and OF projects can easily be set up to use GLSL. However, two questions remain about this solution: 1. Can I use openFrameworks as a library, or do I have to rejigger my whole app just to use its OpenGL features? I don't see any tutorials or docs that show how I might do this without actually starting from scratch and creating an OF app. 2. Is it possible to use a CVPixelBufferRef as a texture?
I am targeting iOS 7+.
I was able to get this to work using the GPUImageMovie class. If you look inside this class, you'll see that there's a private method called:
- (void)processMovieFrame:(CVPixelBufferRef)movieFrame withSampleTime:(CMTime)currentSampleTime
This method takes a CVPixelBufferRef as input.
To access this method, declare a class extension that exposes it inside your class
@interface GPUImageMovie ()
- (void)processMovieFrame:(CVPixelBufferRef)movieFrame withSampleTime:(CMTime)currentSampleTime;
@end
Then initialize the class, set up the filter, and pass it your video frame:
GPUImageMovie *gpuMovie = [[GPUImageMovie alloc] initWithAsset:nil]; // <- call initWithAsset even though there's no asset
// to initialize internal data structures
// connect filters...
// Call the method we exposed
[gpuMovie processMovieFrame:myCVPixelBufferRef withSampleTime:kCMTimeZero];
One thing: you need to request your pixel buffers with kCVPixelFormatType_420YpCbCr8BiPlanarFullRange in order to match what the library expects.
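Wherever your CVPixelBufferRefs come from, the key bit is the pixel-format entry in the settings dictionary. For example, if the frames came from an AVCaptureVideoDataOutput (just a sketch under that assumption; adapt to however you produce your buffers):
// Request 420f buffers so processMovieFrame: gets the planar format GPUImageMovie expects
AVCaptureVideoDataOutput *videoOutput = [[AVCaptureVideoDataOutput alloc] init];
videoOutput.videoSettings = @{
    (NSString *)kCVPixelBufferPixelFormatTypeKey :
        @(kCVPixelFormatType_420YpCbCr8BiPlanarFullRange)
};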

What's the compression ratio of NSData from AVCaptureStillImageOutput?

Using the jpegStillImageNSDataRepresentation: method I can get NSData from AVCaptureStillImageOutput, and then I can write the data to a file.
NSData * imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageSampleBuffer];
This just turns imageSampleBuffer into NSData. Can I get the compression ratio used by this method, or is there some way to measure it?
Just like UIImageJPEGRepresentation(UIImage *image, 1.0), where the 1.0 specifies the compression quality for the image.
You can control the compression (0-1) that is used by the AVCaptureStillImageOutput.
[stillImageOutput setOutputSettings:@{AVVideoCodecKey : AVVideoCodecJPEG, AVVideoQualityKey : @1.0}];
See AVCaptureOutput.h and AVVideoSettings.h:
AVVideoQualityKey is supported on iOS 6.0 and later and may only be used when AVVideoCodecKey is set to AVVideoCodecJPEG. AVVideoQualityKey NSNumber (0.0-1.0, JPEG only)
You can quickly verify that this key works by looking at the size of the data for different values. As for the default value, I don't think it's 1.0, because when I set it to 1.0 the size bumps up a bit, and I didn't set up a tripod to attempt to take exactly the same pic....
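Something along these lines is enough to compare sizes (a rough sketch, reusing stillImageOutput and imageSampleBuffer from above):
// Capture once with quality 0.5, once with 1.0, and compare the encoded sizes
[stillImageOutput setOutputSettings:@{AVVideoCodecKey : AVVideoCodecJPEG, AVVideoQualityKey : @0.5}];
// ...capture a frame, then in the completion handler:
NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageSampleBuffer];
NSLog(@"Encoded size: %lu bytes", (unsigned long)[imageData length]);
// repeat with AVVideoQualityKey : @1.0 and compare the logged sizes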

How to use AVAssetWriter instead of AVAssetExportSession to re-encode existing video

I'm trying to re-encode videos on an iPad which were recorded on that device but with the "wrong" orientation. This is because when the file is converted to an MP4 file and uploaded to a web server for use with the "video" HTML5 tag, only Safari seems to render the video with the correct orientation.
Basically, I've managed to implement what I wanted by using an AVMutableVideoCompositionLayerInstruction, and then using AVAssetExportSession to create the resultant video with audio. However, the problem is that the file size jumps up considerably after doing this, e.g. correcting an original file of 4.1MB results in a final file size of 18.5MB! All I've done is rotate the video through 180 degrees!! Incidentally, the video instance that I'm trying to process was originally created by the UIImagePicker during "compression" using videoQuality = UIImagePickerControllerQualityType640x480, which actually results in videos of 568 x 320 on an iPad mini.
I experimented with the various presetName settings on AVAssetExportSession, but I couldn't get the desired result. The closest I got filesize-wise was 4.1MB (i.e. exactly the same as the source!) by using AVAssetExportPresetMediumQuality, BUT this also reduced the dimensions of the resultant video to 480 x 272 instead of the 568 x 320 that I had explicitly set.
So, this led me to look into other options, and hence using AVAssetWriter instead. The problem is, I can't get any code that I have found to work! I tried the code found on this SO post (Video Encoding using AVAssetWriter - CRASHES), but can't get it to work. For a start, I get a compilation error for this line:
NSDictionary *videoOptions = [NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange] forKey:(id)kCVPixelBufferPixelFormatTypeKey];
The resultant compilation error being:
Undefined symbols for architecture armv7: "_kCVPixelBufferPixelFormatTypeKey"
This aside, I tried passing in nil for the AVAssetReaderTrackOutput's outputSettings, which should be OK according to the header info:
A value of nil for outputSettings configures the output to vend samples in their original format as stored by the specified track.
However, I then get a crash happening at this line:
BOOL result = [videoWriterInput appendSampleBuffer:sampleBuffer];
In short, I've not been able to get any code to work with AVAssetWriter, so I REALLY need some help here. Are there any other ways to achieve my desired results? Incidentally, I'm using Xcode 4.6 and I'm targeting everything from iOS5 upwards, using ARC.
I have solved similar problems to the ones in your question. This might help someone who runs into them:
Assuming writerInput is your object instance of AVAssetWriterInput and assetTrack is the instance of your AVAssetTrack, then your transform problem is solved by simply:
writerInput.transform = assetTrack.preferredTransform;
You have to release sampleBuffer after appending your sample buffer, so you would have something like:
if ((sampleBuffer = [asset_reader_output copyNextSampleBuffer])) {
    BOOL result = [writerInput appendSampleBuffer:sampleBuffer];
    CFRelease(sampleBuffer); // Release sampleBuffer!
}
The compilation error was caused by me not including the CoreVideo.framework. As soon as I had included that and imported it, I could get the code to compile. Also, the code would work and generate a resultant video, but I uncovered 2 new problems:
I can't get the transform to work using the transform property on AVAssetWriterInput. This means that I'm stuck with using an AVMutableVideoCompositionInstruction and AVAssetExportSession for the transformation.
If I use AVAssetWriter just to handle compression (seeing as I don't have many options with AVAssetExportSession), I still have a bad memory leak. I've tried everything I can think of, starting with the solution in this link ( Help Fix Memory Leak release ) and also with @autoreleasepool blocks at key points. But it seems that the following line will cause a leak, no matter what I try:
CMSampleBufferRef sampleBuffer = [asset_reader_output copyNextSampleBuffer];
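For reference, this is roughly the shape of the loop I've been trying, wrapping each iteration in @autoreleasepool and releasing the buffer as suggested above (a sketch, not my exact code):
while ([videoWriterInput isReadyForMoreMediaData]) {
    @autoreleasepool {
        CMSampleBufferRef sampleBuffer = [asset_reader_output copyNextSampleBuffer];
        if (sampleBuffer) {
            [videoWriterInput appendSampleBuffer:sampleBuffer];
            CFRelease(sampleBuffer); // release the copied buffer each iteration
        } else {
            [videoWriterInput markAsFinished];
            break;
        }
    }
}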
I could really do with some help.
