I'm trying to get as good an image as possible from the camera, but can only find examples that call captureStillImageAsynchronouslyFromConnection and then go straight to:
NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageSampleBuffer];
UIImage *image = [[UIImage alloc] initWithData:imageData];
JPEG being lossy and all, is there any way to get the data as PNG, or even just raw RGBA (BGRA, what have you)? AVCaptureStillImageOutput doesn't seem to have any other NSData* methods.
Actually, looking at the CMSampleBufferRef, it seems like it's already locked in as JPEG:
formatDescription = <CMVideoFormatDescription 0xfe5e1f0 [0x3e5ac650]> {
    mediaType:'vide'
    mediaSubType:'jpeg'
    mediaSpecific: {
        codecType: 'jpeg'  dimensions: 2592 x 1936
    }
    extensions: {(null)}
}
Is there some other way to take a full-res picture and get the raw data?
You'll need to set the outputSettings with a different pixel format. If you want 32-bit BGRA, for example, you can set:
NSDictionary *outputSettings = [NSDictionary dictionaryWithObjectsAndKeys:[NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA], (id)kCVPixelBufferPixelFormatTypeKey, nil];
From https://developer.apple.com/library/mac/#documentation/AVFoundation/Reference/AVCaptureStillImageOutput_Class/Reference/Reference.html, the "recommended" pixel formats are:
kCMVideoCodecType_JPEG
kCVPixelFormatType_420YpCbCr8BiPlanarFullRange
kCVPixelFormatType_32BGRA
Of course, if you're not using JPEG output, you can't use jpegStillImageNSDataRepresentation:, but there's an example here:
how to convert a CVImageBufferRef to UIImage
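In case it helps, here is a minimal sketch of that conversion, assuming the output was configured with the kCVPixelFormatType_32BGRA setting above (variable names are illustrative):

// Sketch: convert a BGRA CVImageBufferRef from the sample buffer to a UIImage
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(imageSampleBuffer);
CVPixelBufferLockBaseAddress(imageBuffer, 0);
void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
// BGRA maps to a little-endian 32-bit context with premultiplied-first alpha
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
CGImageRef cgImage = CGBitmapContextCreateImage(context);
UIImage *image = [UIImage imageWithCGImage:cgImage];
CGImageRelease(cgImage);
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);
CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

From there, UIImagePNGRepresentation(image) gives you lossless PNG data.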
Just change the output settings of your connection:
outputSettings
The compression settings for the output.
@property(nonatomic, copy) NSDictionary *outputSettings
You can retrieve an NSArray of the supported values with
availableImageDataCodecTypes
The supported image codec formats that can be specified in outputSettings. (read-only)
@property(nonatomic, readonly) NSArray *availableImageDataCodecTypes
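For instance, a minimal sketch of wiring this up (stillImageOutput is an assumed, already-attached AVCaptureStillImageOutput):

// Sketch: ask for raw BGRA frames instead of JPEG
stillImageOutput.outputSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };
// Inspect what codecs the device actually supports before committing:
NSLog(@"Available codec types: %@", stillImageOutput.availableImageDataCodecTypes);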
Related
I am using the following code to extract the depth map (following Apple's own example):
- (nullable AVDepthData *)depthDataFromImageData:(nonnull NSData *)imageData orientation:(CGImagePropertyOrientation)orientation {
    AVDepthData *depthData = nil;
    CGImageSourceRef imageSource = CGImageSourceCreateWithData((CFDataRef)imageData, NULL);
    if (imageSource) {
        NSDictionary *auxDataDictionary = (__bridge NSDictionary *)CGImageSourceCopyAuxiliaryDataInfoAtIndex(imageSource, 0, kCGImageAuxiliaryDataTypeDisparity);
        if (auxDataDictionary) {
            depthData = [[AVDepthData depthDataFromDictionaryRepresentation:auxDataDictionary error:NULL] depthDataByApplyingExifOrientation:orientation];
        }
        CFRelease(imageSource);
    }
    return depthData;
}
And I call this from:
[[PHAssetResourceManager defaultManager] requestDataForAssetResource:[PHAssetResource assetResourcesForAsset:asset].firstObject options:nil dataReceivedHandler:^(NSData * _Nonnull data) {
    AVDepthData *depthData = [self depthDataFromImageData:data orientation:[self CGImagePropertyOrientationForUIImageOrientation:pickedUiImageOrientation]];
    CIImage *image = [CIImage imageWithDepthData:depthData];
    UIImage *uiImage = [UIImage imageWithCIImage:image];
    UIGraphicsBeginImageContext(uiImage.size);
    [uiImage drawInRect:CGRectMake(0, 0, uiImage.size.width, uiImage.size.height)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    NSData *pngData = UIImagePNGRepresentation(newImage);
    UIImage *pngImage = [UIImage imageWithData:pngData]; // rewrap
    UIImageWriteToSavedPhotosAlbum(pngImage, nil, nil, nil);
} completionHandler:^(NSError * _Nullable error) {
}];
Here is the result: a low-quality (and rotated, but let's put orientation aside for now) image:
Then I transferred the original HEIC file, opened it in Photoshop, went to Channels, and selected the depth map as below:
Here is the result:
It's a higher-resolution, higher-quality, correctly oriented depth map. Why does the code (actually Apple's own code at https://developer.apple.com/documentation/avfoundation/avdepthdata/2881221-depthdatafromdictionaryrepresent?language=objc) produce a lower-quality result?
I've found the issue. Actually, it was hiding in plain sight: the +[AVDepthData depthDataFromDictionaryRepresentation:error:] method returns disparity data. I converted it to depth using the following code:
if (depthData.depthDataType != kCVPixelFormatType_DepthFloat32) {
    depthData = [depthData depthDataByConvertingToDepthDataType:kCVPixelFormatType_DepthFloat32];
}
(I haven't tried it, but 16-bit depth, kCVPixelFormatType_DepthFloat16, should also work well.)
After converting disparity to depth, the image is exactly the same as in Photoshop. I should have caught this earlier: I was calling CGImageSourceCopyAuxiliaryDataInfoAtIndex(imageSource, 0, kCGImageAuxiliaryDataTypeDisparity) (note the "Disparity" at the end), while Photoshop was clearly saying "depth map", converting disparity to depth on the fly (or just somehow reading it as depth; I honestly don't know the physical encoding, and maybe iOS converted depth to disparity when I copied the aux data in the first place).
Side note: I also solved the orientation issue by creating the image source directly from the [PHAsset requestContentEditingInputWithOptions:completionHandler:] method and passing contentEditingInput.fullSizeImageURL into CGImageSourceCreateWithURL. That took care of the orientation.
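A minimal sketch of that side note, assuming a PHAsset named asset (variable names are illustrative, error handling omitted):

PHContentEditingInputRequestOptions *options = [PHContentEditingInputRequestOptions new];
[asset requestContentEditingInputWithOptions:options completionHandler:^(PHContentEditingInput *contentEditingInput, NSDictionary *info) {
    // the full-size image URL preserves the orientation metadata
    CGImageSourceRef imageSource = CGImageSourceCreateWithURL((__bridge CFURLRef)contentEditingInput.fullSizeImageURL, NULL);
    if (imageSource) {
        NSDictionary *auxDataDictionary = (__bridge_transfer NSDictionary *)CGImageSourceCopyAuxiliaryDataInfoAtIndex(imageSource, 0, kCGImageAuxiliaryDataTypeDisparity);
        AVDepthData *depthData = [AVDepthData depthDataFromDictionaryRepresentation:auxDataDictionary error:NULL];
        if (depthData.depthDataType != kCVPixelFormatType_DepthFloat32) {
            depthData = [depthData depthDataByConvertingToDepthDataType:kCVPixelFormatType_DepthFloat32];
        }
        // ... render depthData as above ...
        CFRelease(imageSource);
    }
}];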
let image_data = UIImageJPEGRepresentation(self.imagetoadd.image!,0.0)
The image on iOS (I'm using Swift 3 to do this) is being uploaded rotated. How can I solve this?
JPEG images usually contain an EXIF dictionary, which stores a lot of information about how the image was taken; image rotation is one of them.
UIImage instances keep this information (if the original image has it) in a specific property called imageOrientation.
As far as I remember, this information is stripped out by the method UIImageJPEGRepresentation.
To create a data instance that preserves this information you must use Core Graphics methods, or normalize the rotation before sending the image.
To normalize the image, something like this should be enough:
CGImageRef cgRef = imageToSave.CGImage;
UIImage * fixImage = [[UIImage alloc] initWithCGImage:cgRef scale:imageToSave.scale orientation:UIImageOrientationUp];
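Note that this only resets the orientation flag on the wrapper; if the pixels themselves need to be rotated upright, a common approach is to redraw the image first (a sketch, not part of the original answer):

// Sketch: bake the orientation into the pixels by redrawing
UIImage *normalizedImage = imageToSave;
if (imageToSave.imageOrientation != UIImageOrientationUp) {
    UIGraphicsBeginImageContextWithOptions(imageToSave.size, NO, imageToSave.scale);
    [imageToSave drawInRect:CGRectMake(0, 0, imageToSave.size.width, imageToSave.size.height)];
    normalizedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
}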
To keep the rotation information:
CFURLRef url = (__bridge_retained CFURLRef)[NSURL fileURLWithPath:path]; // save data path
NSDictionary *metadataDictionary = [self imageMetadataForPath:pathToOriginalImage];
CGImageDestinationRef destination = CGImageDestinationCreateWithURL(url, kUTTypeJPEG, 1, NULL);
// "image" is the CGImageRef you want to write out
CGImageDestinationAddImage(destination, image, (__bridge CFDictionaryRef)metadataDictionary);
if (!CGImageDestinationFinalize(destination)) {
    DLog(@"Failed to write image to %@", path);
}
CFRelease(destination);
CFRelease(url);
Where -imageMetadataForPath: is:
- (NSDictionary *)imageMetadataForPath:(NSString *)imagePath {
    NSURL *imageURL = [NSURL fileURLWithPath:imagePath];
    CGImageSourceRef mySourceRef = CGImageSourceCreateWithURL((__bridge CFURLRef)imageURL, NULL);
    NSDictionary *dict = (NSDictionary *)CFBridgingRelease(CGImageSourceCopyPropertiesAtIndex(mySourceRef, 0, NULL));
    CFRelease(mySourceRef);
    return dict;
}
This is copied and pasted from a project of mine, so you will probably need to do some heavy refactoring, also because it uses manual memory management in Core Foundation and you are using Swift. Of course, if you use this last set of instructions, the backend code must be prepared to deal with image orientation too.
If you want to know more about rotation, here is a link.
I want to generate a true-color animated GIF from a couple of PNG files represented as base64 strings. I found this post and did something similar. I have an array with the data URLs:
NSArray* imageDataUrls; // array with the data urls without data:image/png;base64, prefix
Here is what I did:
NSDictionary *fileProperties = @{
    (__bridge id)kCGImagePropertyGIFDictionary: @{
        (__bridge id)kCGImagePropertyGIFLoopCount: @0, // 0 means loop forever
    }
};
NSDictionary *frameProperties = @{
    (__bridge id)kCGImagePropertyGIFDictionary: @{
        (__bridge id)kCGImagePropertyGIFDelayTime: @0.4f, // a float (not double!) in seconds, rounded to centiseconds in the GIF data
    }
};
NSURL *documentsDirectoryURL = [[NSFileManager defaultManager] URLForDirectory:NSDocumentDirectory inDomain:NSUserDomainMask appropriateForURL:nil create:YES error:nil];
NSURL *fileURL = [documentsDirectoryURL URLByAppendingPathComponent:@"animated.gif"];
CFMutableDataRef destinationData = CFDataCreateMutable(kCFAllocatorDefault, 0);
CGImageDestinationRef destination = CGImageDestinationCreateWithData(destinationData, kUTTypeGIF, kFrameCount, NULL);
CGImageDestinationSetProperties(destination, (__bridge CFDictionaryRef)fileProperties);
NSData *myImageData;
for (NSUInteger i = 0; i < kFrameCount; i++) {
    @autoreleasepool {
        // dataFromBase64String: comes from an NSData base64 category
        myImageData = [NSData dataFromBase64String:[imageDataUrls objectAtIndex:i]];
        // allocate a fresh UIImage per frame; sending init to the same object twice is invalid
        UIImage *myImage = [[UIImage alloc] initWithData:myImageData];
        CGImageDestinationAddImage(destination, myImage.CGImage, (__bridge CFDictionaryRef)frameProperties);
    }
}
myImageData = nil;
// finalize before reading destinationData, otherwise the GIF data is incomplete
if (!CGImageDestinationFinalize(destination)) {
    NSLog(@"Failed to finalize the GIF image destination");
}
CFRelease(destination);
NSData *data = (__bridge_transfer NSData *)destinationData; // transfer ownership to ARC
Finally, I send the GIF image as a base64-encoded string back to the PhoneGap container.
// send back gif image
CDVPluginResult* pluginResult = [CDVPluginResult resultWithStatus:CDVCommandStatus_OK messageAsString: [data base64EncodedString]];
It works, but the quality of the resulting GIF is bad because it has only 256 colors.
Here is the original png image:
Here is a screenshot of the generated gif image:
How do I get the same quality as the PNGs I imported, i.e., how can I raise the quality level of the generated GIF? How do I generate true-color GIFs on iOS?
GIFs are not designed to store true-color data, and they are also poorly suited for animations¹. Since this is such an unusual use of GIFs, you will have to write a lot of your own code:
1. Break each frame into rectangular chunks, where each chunk contains at most 256 distinct colors. The easiest way to do this is to use 16x16 chunks.
2. Convert each chunk to an indexed image.
3. Add each chunk to the GIF. For the first chunk in a frame, use the frame delay. For the other chunks in a frame, use a delay of 0.
Done. You will have to familiarize yourself with the GIF specification, which is freely available online (GIF89a specification at W3.org, see section 23). You will also need to find an LZW compressor, which is not too hard. The animation will use an obscene amount of storage: including base64 conversion, I estimate about 43 bits/pixel, or about 1.2 Gbit/s for 720p video, which is about 400x as much storage as you would use for high-quality MPEG4 or WebM, and probably about 3x as much storage as the PNGs would require. The storage and bandwidth requirements will likely incur undesirable costs for hosts and clients, unless the animations are very short and small.
Note that this will not allow you to use alpha transparency; that is a hard limitation of the GIF format.
Opinion
The idea of putting high-quality animations in a GIF is absurd in the extreme, even though it is possible. It is especially absurd given the available alternatives:
If you are targeting modern browsers or mobile devices, MPEG4 (support matrix) and WebM (support matrix) are the obvious choices. Between the two formats, only Opera Mini supports neither.
If you are targeting older browsers or less-capable devices, or if you cannot afford MPEG4 encoding, you can encode the frames as individual JPEG or PNG images. Bundle these with a JSON payload with the timing, and use JavaScript or other client-side scripting to switch between animation frames. This works surprisingly well.
Notes
¹ From the GIF 89a specification:
"Animation - The Graphics Interchange Format is not intended as a platform for animation, even though it can be done in a limited way."
In my project I need to show images of different sizes in a zig-zag fashion, so I convert the image URLs coming from the service to NSData and then get the UIImage. My code is:
NSURL *url = [NSURL URLWithString:[[_result objectAtIndex:i] valueForKey:@"PImage"]];
NSData *data = [NSData dataWithContentsOfURL:url];
UIImage *image = [UIImage imageWithData:data];
That way I can get the image size (width and height). But my problem is that I need to create a UIView according to each image's size, and while this code works fine for me, it takes too much time (almost 25 seconds) to load 8 images. I figured that converting the URL contents to NSData is what takes the time. Is there any way to get the image size (width and height) without converting it into NSData?
Thanks for spending your time on this.
You can get image properties without actually loading the whole image data from disk using the ImageIO framework:
@import ImageIO;
...
NSURL *imageURL = … // init the URL somehow
CGImageSourceRef imgSource = CGImageSourceCreateWithURL((__bridge CFURLRef)imageURL, NULL);
NSDictionary *imageProps = (__bridge_transfer NSDictionary *)CGImageSourceCopyPropertiesAtIndex(imgSource, 0, NULL);
NSLog(@"%@", imageProps);
CFRelease(imgSource);
Image width and height will be stored in the dictionary under the PixelHeight and PixelWidth keys (tested with a PNG image; other image formats may use different keys).
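For example, the dimensions can be read back with the public ImageIO key constants (a small sketch; imageProps is the dictionary from above):

NSNumber *width = imageProps[(__bridge NSString *)kCGImagePropertyPixelWidth];
NSNumber *height = imageProps[(__bridge NSString *)kCGImagePropertyPixelHeight];
CGSize imageSize = CGSizeMake(width.doubleValue, height.doubleValue);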
Instead of converting the URL to data and then to UIImage, use EGOImageView or AsyncImageView. You can simply pass the URL to them, and then set the frame based on the size of the image.
I have an ALAsset object retrieved from ALAssetsLibrary, and I want to extract a compressed JPEG from it in order to send it to a web service.
Any suggestions on where to start?
Edit:
I've found a way to get NSData out of the ALAsset:
ALAssetRepresentation *representation = [asset defaultRepresentation];
Byte *buffer = (Byte *)malloc(representation.size);
NSUInteger buffered = [representation getBytes:buffer fromOffset:0 length:representation.size error:&err];
NSData *data = [NSData dataWithBytesNoCopy:buffer length:buffered freeWhenDone:YES]; // wrap the raw bytes
but I can't find a way to reduce the size of the image by resizing and compressing it.
My idea was to have something like:
UIImage *myImage = [UIImage imageWithData:data];
//resize image
NSData *compressedData = UIImageJPEGRepresentation(myImage, 0.5);
But, first of all, even without resizing, just using these two lines of code, compressedData ends up bigger than data.
And second, I'm not sure about the best way to resize the UIImage.
You can use the asset's thumbnail:
[theAsset thumbnail]
Or:
Compressing might result in bigger files after some point; you need to resize the image, using:
+ (UIImage *)imageWithCGImage:(CGImageRef)cgImage scale:(CGFloat)scale orientation:(UIImageOrientation)orientation
It's easy to get the CGImage from an ALAssetRepresentation:
ALAssetRepresentation *repr = [asset defaultRepresentation];
// use the asset representation's orientation and scale in order to set the UIImage
// up correctly
UIImage *image = [UIImage imageWithCGImage:[repr fullResolutionImage] scale:[repr scale] orientation:(UIImageOrientation)[repr orientation]];
// then do whatever you want with the UIImage instance
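As for resizing before compressing, here is a minimal sketch (the half-size target is purely illustrative):

// Sketch: scale the image down, then compress; resizing is what actually
// shrinks the payload, the JPEG quality factor alone may not be enough
CGSize targetSize = CGSizeMake(image.size.width / 2, image.size.height / 2);
UIGraphicsBeginImageContextWithOptions(targetSize, NO, 1.0);
[image drawInRect:CGRectMake(0, 0, targetSize.width, targetSize.height)];
UIImage *resizedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
NSData *compressedData = UIImageJPEGRepresentation(resizedImage, 0.5);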