Is there a way to do lossless compression for generated GIFs?

We are using the following code to generate a GIF file from a set of JPEG images. Even with the lossless-compression setting, it doesn't produce a smaller file at all. Are we doing the right thing here?
CGImageDestinationRef imageDestination = CGImageDestinationCreateWithURL((CFURLRef)pathUrl, CFSTR("com.compuserve.gif"), images.count, NULL);

// image/frame level properties
NSDictionary *imageProperties = [NSDictionary dictionaryWithObjectsAndKeys:
                                 [NSNumber numberWithFloat:delayTime], (NSString *)kCGImagePropertyGIFDelayTime,
                                 nil];
NSDictionary *properties = [NSDictionary dictionaryWithObjectsAndKeys:
                            imageProperties, (NSString *)kCGImagePropertyGIFDictionary,
                            nil];
for (UIImage *image in [images objectEnumerator]) {
    CGImageDestinationAddImage(imageDestination, image.CGImage, (CFDictionaryRef)properties);
}

// gif level properties
NSDictionary *gifProperties = [NSDictionary dictionaryWithObjectsAndKeys:
                               [NSNumber numberWithInt:0], (NSString *)kCGImagePropertyGIFLoopCount,
                               [NSNumber numberWithFloat:1.0f], (NSString *)kCGImageDestinationLossyCompressionQuality,
                               nil];
properties = [NSDictionary dictionaryWithObjectsAndKeys:
              gifProperties, (NSString *)kCGImagePropertyGIFDictionary,
              nil];
CGImageDestinationSetProperties(imageDestination, (CFDictionaryRef)properties);
CGImageDestinationFinalize(imageDestination);
CFRelease(imageDestination);

GIF does not support the kCGImageDestinationLossyCompressionQuality property. There is no built-in support for compressing GIFs on iOS as far as I can tell -- I haven't been able to get the color map to work.
const CFStringRef kCGImagePropertyGIFLoopCount;
const CFStringRef kCGImagePropertyGIFDelayTime;
const CFStringRef kCGImagePropertyGIFImageColorMap;
const CFStringRef kCGImagePropertyGIFHasGlobalColorMap;
const CFStringRef kCGImagePropertyGIFUnclampedDelayTime;
Reference: https://developer.apple.com/library/ios/documentation/graphicsimaging/Reference/CGImageProperties_Reference/Reference/reference.html#//apple_ref/doc/constant_group/GIF_Dictionary_Keys

The GIF image format IS lossless compression. However, you are compressing frames that came from a lossy compressed format (JPEG), so the file size may even go up.

JPEG images contain many very similar but not identical pixels, which are very hard for lossless compression schemes to compress. To get better compression you have to quantize the colors first. GIF's encoding is lossless, but only after you've taken the hit of losing information to make the image compressible.
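To make that concrete, here is a minimal sketch of the crudest possible quantization pass you could run on each frame before CGImageDestinationAddImage: redraw the frame into an RGBA bitmap and mask each color channel down to 5 bits (32 levels), which turns runs of near-identical JPEG pixels into identical, compressible ones. The function name is hypothetical, and a real encoder would build an adaptive palette (e.g. median cut) rather than this uniform cut.

// Crude uniform color quantization (sketch; assumes CoreGraphics).
static CGImageRef CreateQuantizedImage(CGImageRef source) {
    size_t width  = CGImageGetWidth(source);
    size_t height = CGImageGetHeight(source);
    CGColorSpaceRef rgb = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(NULL, width, height, 8, width * 4,
                                             rgb, (CGBitmapInfo)kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(rgb);
    if (!ctx) return NULL;
    CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), source);
    uint8_t *pixels = CGBitmapContextGetData(ctx);
    size_t byteCount = width * height * 4;
    for (size_t i = 0; i < byteCount; i += 4) {
        pixels[i]     &= 0xF8;   // R: keep the top 5 bits
        pixels[i + 1] &= 0xF8;   // G
        pixels[i + 2] &= 0xF8;   // B; leave alpha untouched
    }
    CGImageRef quantized = CGBitmapContextCreateImage(ctx);
    CGContextRelease(ctx);
    return quantized;            // caller must CGImageRelease()
}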

Related

Loading RAW image with Image I/O in Objective-C returns wrong resolution

I'm trying to load a RAW image into an iOS project for display. Currently, I'm using this piece of code to load it.
CFDictionaryRef optionsRef = (__bridge CFDictionaryRef) @{
    // (id)kCGImageSourceShouldAllowFloat : @YES,
    (id)kCGImageSourceCreateThumbnailWithTransform : @YES,
    (id)kCGImageSourceCreateThumbnailFromImageAlways : @YES,
    (id)kCGImageSourceThumbnailMaxPixelSize : @(MAX(SCREEN_HEIGHT, SCREEN_WIDTH) * ([UIScreen mainScreen].scale))
};

CGImageSourceRef imageSourceRef = CGImageSourceCreateWithURL((CFURLRef)imageUrl, NULL);
if (!imageSourceRef)
    return nil;

NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                         [NSNumber numberWithBool:NO], (NSString *)kCGImageSourceShouldCache,
                         nil];
CFDictionaryRef imageProperties = CGImageSourceCopyPropertiesAtIndex(imageSourceRef, 0, (CFDictionaryRef)options);
if (imageProperties) {
    NSNumber *width = (NSNumber *)CFDictionaryGetValue(imageProperties, kCGImagePropertyPixelWidth);
    NSNumber *height = (NSNumber *)CFDictionaryGetValue(imageProperties, kCGImagePropertyPixelHeight);
    NSLog(@"Image dimensions: %@ x %@ px", width, height);
    CFRelease(imageProperties);
}
The problem is, this code works flawlessly with some images but behaves strangely with others. For example, with one image it reports a width of 128 and a height of 96, while the correct dimensions are 4320 × 3240.
I'm not sure what the problem is, because all I did was load the image into a CGImageSourceRef. :(
By raw, do you mean camera raw (e.g. Nikon NEF files)?
As far as I know, iOS does not have a raw image parsing engine. The only thing it can do is load the embedded JPEG thumbnail. You are probably seeing the size of the thumbnail, not the size of the full raw image.
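One quick way to test that hypothesis (a sketch; imageUrl is the same file URL as in the question): a camera raw file usually embeds several images -- thumbnail, preview, full frame -- so log the size reported at every index, not just index 0.

CGImageSourceRef src = CGImageSourceCreateWithURL((CFURLRef)imageUrl, NULL);
if (src) {
    size_t count = CGImageSourceGetCount(src);   // number of embedded images
    for (size_t i = 0; i < count; i++) {
        CFDictionaryRef props = CGImageSourceCopyPropertiesAtIndex(src, i, NULL);
        if (props) {
            NSNumber *w = (NSNumber *)CFDictionaryGetValue(props, kCGImagePropertyPixelWidth);
            NSNumber *h = (NSNumber *)CFDictionaryGetValue(props, kCGImagePropertyPixelHeight);
            NSLog(@"index %zu: %@ x %@ px", i, w, h);
            CFRelease(props);
        }
    }
    CFRelease(src);
}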
Disclaimer: I've never tried to open a camera raw image in iOS, so my answer about what iOS is doing is based on imperfect memory of what I've read.
Nikon has libraries available for download, and I think they come in source form, probably C or C++, so it would take a little work to compile them and figure out how to use them from iOS, but it should be possible. You should be able to find them on the Nikon website.

Not able to find the DPI for an image in iOS

I want to find the DPI for an image that has been captured with the iPhone/iPad camera.
This is how I am trying to get the DPI:
CFDictionaryRef exifDict = CMGetAttachment(imageDataSampleBuffer,
                                           kCGImagePropertyExifDictionary,
                                           NULL);
originalExifDict = (__bridge NSMutableDictionary *)(exifDict);
[originalExifDict objectForKey:(NSString *)kCGImagePropertyDPIHeight];
[originalExifDict objectForKey:(NSString *)kCGImagePropertyDPIWidth];
However, both entries in the dictionary come back as 0.
What is the correct way to find the DPI?
Thanks in advance for the help.
CGSize size;
NSNumber *width = (NSNumber *)CFDictionaryGetValue(exifDict, kCGImagePropertyDPIWidth);
NSNumber *height = (NSNumber *)CFDictionaryGetValue(exifDict, kCGImagePropertyDPIHeight);
size.width = [width floatValue];
size.height = [height floatValue];
// Let me know whether this works.
The information isn't in the metadata that comes with your imageDataSampleBuffer. It is written (as 72 dpi) at the time the image is saved, unless you have manually set it yourself when editing the metadata before the save.
For most purposes it is meaningless. However, some software uses it to calculate the "correct size" of an image when placing it in a document: a 3000-pixel-square image at 300 dpi will appear 10 inches (c. 25.4 cm) square; at 72 dpi it will be nearly 42 inches (c. 105.8 cm) square. Also, some online image uploaders (especially those used by stock photo libraries and the like) insist on images having a high-ish dpi.
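If you do need a specific dpi recorded in the file, a minimal sketch is to write it yourself with CGImageDestination when saving. Here outputUrl and image are placeholders, 300 is just an example value, and kUTTypeJPEG comes from MobileCoreServices.

#import <ImageIO/ImageIO.h>
#import <MobileCoreServices/MobileCoreServices.h>

NSDictionary *dpiProperties = @{
    (NSString *)kCGImagePropertyDPIWidth  : @300,
    (NSString *)kCGImagePropertyDPIHeight : @300,
};
CGImageDestinationRef destination =
    CGImageDestinationCreateWithURL((CFURLRef)outputUrl, kUTTypeJPEG, 1, NULL);
if (destination) {
    CGImageDestinationAddImage(destination, image.CGImage, (CFDictionaryRef)dpiProperties);
    CGImageDestinationFinalize(destination);
    CFRelease(destination);
}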
If you are using UIImagePickerController, use the code below:
NSURL *assetURL = [info objectForKey:UIImagePickerControllerReferenceURL];
ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];
[library assetForURL:assetURL
         resultBlock:^(ALAsset *asset) {
             NSDictionary *metadata = asset.defaultRepresentation.metadata;
             NSMutableDictionary *imageMetadata = [[NSMutableDictionary alloc] initWithDictionary:metadata];
             NSLog(@"imageMetaData from AssetLibrary %@", imageMetadata);
             NSString *dpi = [imageMetadata objectForKey:@"DPIHeight"];
             NSLog(@"DPI: %@", dpi);
         }
        failureBlock:^(NSError *error) {
             NSLog(@"error %@", error);
        }];

Does Core Image load image data immediately?

Let's say I want to find out the size of an image, so that if a user tries to load a 10,000 × 10,000 pixel image in my iPad app I can present them with a dialog instead of crashing. If I use [UIImage imageNamed:] or [UIImage imageWithContentsOfFile:], that will load the potentially large image into memory immediately.
If I use Core Image instead, say like this:
CIImage *ciImage = [CIImage imageWithContentsOfURL:[NSURL fileURLWithPath:imgPath]];
Then ask my new CIImage for its size:
CGSize imgSize = ciImage.extent.size;
Will that load the entire image into memory to tell me this, or will it just look at the metadata of the file to discover the size of the image?
The imageWithContentsOfURL method does load the image into memory, yes.
Fortunately, since iOS 4 Apple has provided CGImageSource for reading image metadata without loading the actual pixel data into memory; you can read about how to use it in this blog post (which conveniently provides a code sample for getting image dimensions).
EDIT: Pasted code sample here to protect against link rot:
#import <ImageIO/ImageIO.h>

NSURL *imageFileURL = [NSURL fileURLWithPath:...];
CGImageSourceRef imageSource = CGImageSourceCreateWithURL((CFURLRef)imageFileURL, NULL);
if (imageSource == NULL) {
    // Error loading image
    ...
    return;
}

NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                         [NSNumber numberWithBool:NO], (NSString *)kCGImageSourceShouldCache,
                         nil];
CFDictionaryRef imageProperties = CGImageSourceCopyPropertiesAtIndex(imageSource, 0, (CFDictionaryRef)options);
if (imageProperties) {
    NSNumber *width = (NSNumber *)CFDictionaryGetValue(imageProperties, kCGImagePropertyPixelWidth);
    NSNumber *height = (NSNumber *)CFDictionaryGetValue(imageProperties, kCGImagePropertyPixelHeight);
    NSLog(@"Image dimensions: %@ x %@ px", width, height);
    CFRelease(imageProperties);
}
The full API reference is also available here.

Is it possible to use AVAssetReader to get back a stereo channel layout?

I'd like to be able to get back an AudioBufferList from AVAssetReader which has 2 buffers, so that I can process the left and right audio through an AudioUnit. I tried using the output settings below, but it will not read as long as I specify the stereo layout set by kAudioChannelLayoutTag_Stereo.
Is it possible for AVAssetReader to return a non-interleaved result?
If not, how would I convert it to a non-interleaved AudioBufferList? I have tried to use Audio Converter Services, but I cannot get it to accept either the input or output values for the AudioStreamBasicDescription (ASBD); a sketch of such an ASBD follows the code below. If I cannot get the data in the format I want from AVAssetReader, I would like to at least be able to convert it to the format I need.
Any tips are appreciated.
- (NSDictionary *)getOutputSettings {
    AudioChannelLayout channelLayout;
    memset(&channelLayout, 0, sizeof(AudioChannelLayout));
    channelLayout.mChannelLayoutTag = kAudioChannelLayoutTag_Stereo;

    NSDictionary *outputSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                                    [NSNumber numberWithInt:kAudioFormatLinearPCM], AVFormatIDKey,
                                    [NSNumber numberWithFloat:44100.0], AVSampleRateKey,
                                    [NSNumber numberWithInt:2], AVNumberOfChannelsKey,
                                    [NSData dataWithBytes:&channelLayout length:sizeof(AudioChannelLayout)], AVChannelLayoutKey,
                                    [NSNumber numberWithInt:16], AVLinearPCMBitDepthKey,
                                    [NSNumber numberWithBool:NO], AVLinearPCMIsNonInterleaved,
                                    [NSNumber numberWithBool:NO], AVLinearPCMIsFloatKey,
                                    [NSNumber numberWithBool:NO], AVLinearPCMIsBigEndianKey,
                                    nil];
    return outputSettings;
}
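For reference, here is a sketch of a non-interleaved stereo ASBD for Audio Converter Services (field values are assumptions matching the 16-bit stereo format above). The usual gotcha is that for non-interleaved LPCM, mBytesPerFrame and mBytesPerPacket describe a single channel, not both.

AudioStreamBasicDescription asbd;
memset(&asbd, 0, sizeof(asbd));
asbd.mSampleRate       = 44100.0;
asbd.mFormatID         = kAudioFormatLinearPCM;
asbd.mFormatFlags      = kAudioFormatFlagIsSignedInteger
                       | kAudioFormatFlagIsPacked
                       | kAudioFormatFlagIsNonInterleaved;
asbd.mChannelsPerFrame = 2;
asbd.mBitsPerChannel   = 16;
asbd.mFramesPerPacket  = 1;
asbd.mBytesPerFrame    = 2;   // one channel's 16-bit sample per frame
asbd.mBytesPerPacket   = 2;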
I think that kAudioChannelLayoutTag_Stereo is requesting interleaved samples, so I'd lose it.
It all depends on what kind of AVAssetReaderOutput you're creating with those output settings. AVAssetReaderTrackOutput does no conversion beyond decoding to LPCM, but AVAssetReaderAudioMixOutput accepts a bunch more format keys; in fact, it probably IS an AVAssetReaderTrackOutput plus an AudioConverter.
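For example, a sketch of what you might try (audioTracks is assumed to be the asset's audio tracks; whether the reader honors AVLinearPCMIsNonInterleaved = YES here is exactly what you'd be testing):

NSDictionary *settings = [NSDictionary dictionaryWithObjectsAndKeys:
                          [NSNumber numberWithInt:kAudioFormatLinearPCM], AVFormatIDKey,
                          [NSNumber numberWithFloat:44100.0], AVSampleRateKey,
                          [NSNumber numberWithInt:2], AVNumberOfChannelsKey,
                          [NSNumber numberWithInt:16], AVLinearPCMBitDepthKey,
                          [NSNumber numberWithBool:YES], AVLinearPCMIsNonInterleaved,
                          [NSNumber numberWithBool:NO], AVLinearPCMIsFloatKey,
                          [NSNumber numberWithBool:NO], AVLinearPCMIsBigEndianKey,
                          nil];
AVAssetReaderAudioMixOutput *output =
    [AVAssetReaderAudioMixOutput assetReaderAudioMixOutputWithAudioTracks:audioTracks
                                                            audioSettings:settings];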
I've learned that I can have AVAssetReader return results with the default output settings (nil), which gives me an interleaved buffer of float values, alternating left and right samples through the buffer. I am able to work with these values, which are in the range -1.0 to 1.0, but in order to play the audio I need to scale them to the range of a signed short, so I multiply by SHRT_MAX and clamp the values between SHRT_MIN and SHRT_MAX so the audio plays as expected.
Since the interleaved buffer holds the L and R values together, it is considered 2 channels on 1 buffer, which is reflected in the AudioBufferList. Previously I was able to get back 2 buffers with 1 channel per buffer, but that is not really necessary now that I understand the very simple interleaved format.
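A minimal sketch of that conversion: scale interleaved float samples (L, R, L, R, ...) in the range -1.0 to 1.0 to signed 16-bit with clamping, splitting them into separate left/right buffers. The function name and frameCount are illustrative assumptions.

static void DeinterleaveFloatToSInt16(const float *interleaved,
                                      int16_t *left, int16_t *right,
                                      size_t frameCount) {
    for (size_t i = 0; i < frameCount; i++) {
        float l = interleaved[2 * i]     * SHRT_MAX;
        float r = interleaved[2 * i + 1] * SHRT_MAX;
        // Clamp so values just outside -1.0..1.0 don't wrap around.
        left[i]  = (int16_t)fmaxf(SHRT_MIN, fminf(SHRT_MAX, l));
        right[i] = (int16_t)fmaxf(SHRT_MIN, fminf(SHRT_MAX, r));
    }
}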

How to convert WAV file to M4A?

Is there any way to convert my recorded .WAV file to a .M4A file in iOS?
I also have to convert .M4A files back to .WAV.
I tried Audio Queue Services, but I was not able to get it working.
This post, From iPod Library to PCM Samples in Far Fewer Steps Than Were Previously Necessary, describes how to load a file from the user's iPod library and write it to the file system as a linear PCM (WAV) file.
I believe the change you will need to make to load a file from the file system instead is in the NSURL that describes where the asset is:
- (IBAction)convertTapped:(id)sender {
    // set up an AVAssetReader to read from the iPod Library
    NSURL *assetURL = [[NSURL alloc] initFileURLWithPath:@"your_m4a.m4a"];
    AVURLAsset *songAsset = [AVURLAsset URLAssetWithURL:assetURL options:nil];

    NSError *assetError = nil;
    AVAssetReader *assetReader = [[AVAssetReader assetReaderWithAsset:songAsset
                                                                error:&assetError]
                                  retain];
    if (assetError) {
        NSLog(@"error: %@", assetError);
        return;
    }
If you are going in the opposite direction, you will need to change the formatting on the output end:
// declare and initialize the stereo channel layout referenced below
AudioChannelLayout channelLayout;
memset(&channelLayout, 0, sizeof(AudioChannelLayout));
channelLayout.mChannelLayoutTag = kAudioChannelLayoutTag_Stereo;

NSDictionary *outputSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                                [NSNumber numberWithInt:kAudioFormatLinearPCM], AVFormatIDKey,
                                [NSNumber numberWithFloat:44100.0], AVSampleRateKey,
                                [NSNumber numberWithInt:2], AVNumberOfChannelsKey,
                                [NSData dataWithBytes:&channelLayout length:sizeof(AudioChannelLayout)], AVChannelLayoutKey,
                                [NSNumber numberWithInt:16], AVLinearPCMBitDepthKey,
                                [NSNumber numberWithBool:NO], AVLinearPCMIsNonInterleaved,
                                [NSNumber numberWithBool:NO], AVLinearPCMIsFloatKey,
                                [NSNumber numberWithBool:NO], AVLinearPCMIsBigEndianKey,
                                nil];
I am not sure of the exact settings that would go in here for M4A, but this should get you closer.
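For what it's worth, a hedged sketch of the AAC output side (these are standard AVFoundation keys, but the exact values may need tuning):

AudioChannelLayout channelLayout;
memset(&channelLayout, 0, sizeof(AudioChannelLayout));
channelLayout.mChannelLayoutTag = kAudioChannelLayoutTag_Stereo;

NSDictionary *aacSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                             [NSNumber numberWithInt:kAudioFormatMPEG4AAC], AVFormatIDKey,
                             [NSNumber numberWithFloat:44100.0], AVSampleRateKey,
                             [NSNumber numberWithInt:2], AVNumberOfChannelsKey,
                             [NSData dataWithBytes:&channelLayout length:sizeof(AudioChannelLayout)], AVChannelLayoutKey,
                             [NSNumber numberWithInt:128000], AVEncoderBitRateKey,
                             nil];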
The other option would be to bring in the ffmpeg library and do all your conversion there, but that seems different from what you want.
TPAACAudioConverter works fine.
