iOS Image Capture from Camera with pixel density (ppi) > 300 ppi

I am developing an iOS application which takes images from the iPhone camera using AVCaptureDevice.
The images captured seem to have a pixel density (ppi) of 72 ppi.
1. I need to send these images for further processing to a backend cloud server which expects the images to have a minimum pixel density of 300 ppi.
2. I also see that the images taken with the native iPhone 5 camera have a pixel density of 72 ppi.
3. I need to know whether there are any settings in AVFoundation's capture APIs to set the pixel density of the captured images, or whether there are ways to increase the pixel density of the captured images from 72 to 300 ppi.
Any help would be appreciated.

What's the difference between a 3264 by 2448 pixel image at 72 ppi and a 3264 by 2448 pixel image at 300 ppi? There's hardly any, except for a minor difference in the metadata. And I don't understand why your backend service insists on a minimum pixel density.
The pixel density (or ppi) becomes relevant when you print or display an image at a specific size or place it in a document using a specific size.
Anyway, there is no good reason to set a specific ppi at capture time. That's probably the reason why Apple uses the default density of 72ppi. And I don't know of any means to change it at capture time.
However, you can change it at a later time by modifying the EXIF data of a JPEG file, e.g. using libexif.
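For example, on iOS you don't even need libexif; ImageIO can rewrite the density tags of an existing JPEG. The following is only a rough sketch of that idea, not the answer's own code: the file paths and the 300 value are placeholders, and error handling is omitted.
#import <ImageIO/ImageIO.h>
#import <MobileCoreServices/MobileCoreServices.h>

NSURL *inURL  = [NSURL fileURLWithPath:@"/path/to/input.jpg"];   // placeholder path
NSURL *outURL = [NSURL fileURLWithPath:@"/path/to/output.jpg"];  // placeholder path

CGImageSourceRef source = CGImageSourceCreateWithURL((__bridge CFURLRef)inURL, NULL);
NSDictionary *overrides = @{(NSString *)kCGImagePropertyTIFFDictionary :
                                @{(NSString *)kCGImagePropertyTIFFXResolution : @300,
                                  (NSString *)kCGImagePropertyTIFFYResolution : @300},
                            (NSString *)kCGImagePropertyDPIWidth  : @300,
                            (NSString *)kCGImagePropertyDPIHeight : @300};

CGImageDestinationRef destination = CGImageDestinationCreateWithURL((__bridge CFURLRef)outURL,
                                                                    kUTTypeJPEG, 1, NULL);
// Copy the image across, merging the density overrides over its existing metadata.
CGImageDestinationAddImageFromSource(destination, source, 0, (__bridge CFDictionaryRef)overrides);
CGImageDestinationFinalize(destination);
CFRelease(destination);
CFRelease(source);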

As @Codo has pointed out, the pixel density is irrelevant until the image is being output (to a display, a printer, a RIP, or whatever). It's metadata, not image data. However, if you're dealing with a third-party service that doesn't have the wit to understand this, you need to edit the image metadata after you have captured the image and before you save it.
This is how:
// stillImageOutput is your AVCaptureStillImageOutput instance.
[stillImageOutput captureStillImageAsynchronouslyFromConnection:stillImageConnection
                                               completionHandler:^(CMSampleBufferRef imageDataSampleBuffer,
                                                                   NSError *error) {
    // Copy the metadata attached to the captured sample buffer.
    CFDictionaryRef metadataDict = CMCopyDictionaryOfAttachments(kCFAllocatorDefault,
                                                                 imageDataSampleBuffer,
                                                                 kCMAttachmentMode_ShouldPropagate);
    NSMutableDictionary *metadata = [[NSMutableDictionary alloc]
                                     initWithDictionary:(__bridge NSDictionary *)metadataDict];
    CFRelease(metadataDict);

    // Override the TIFF resolution tags so the file reports 300 ppi.
    NSMutableDictionary *tiffMetadata = [[NSMutableDictionary alloc] init];
    [tiffMetadata setObject:[NSNumber numberWithInt:300]
                     forKey:(NSString *)kCGImagePropertyTIFFXResolution];
    [tiffMetadata setObject:[NSNumber numberWithInt:300]
                     forKey:(NSString *)kCGImagePropertyTIFFYResolution];
    [metadata setObject:tiffMetadata forKey:(NSString *)kCGImagePropertyTIFFDictionary];
    // ...
}];
Then feed metadata into writeImageToSavedPhotosAlbum:metadata:completionBlock:, writeImageDataToSavedPhotosAlbum:metadata:completionBlock:, or a save into your app's private folder (a sketch of the latter follows), depending on your requirements.
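For the private-folder route, one possible approach (not part of the original answer) is to hand the modified metadata dictionary to ImageIO when writing the JPEG data from the sample buffer. This assumes imageDataSampleBuffer and metadata from the block above; the capture.jpg filename is arbitrary and error handling is omitted.
#import <ImageIO/ImageIO.h>
#import <MobileCoreServices/MobileCoreServices.h>

NSData *jpegData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer];
NSString *path = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES)[0]
                  stringByAppendingPathComponent:@"capture.jpg"];

CGImageSourceRef source = CGImageSourceCreateWithData((__bridge CFDataRef)jpegData, NULL);
CGImageDestinationRef destination =
    CGImageDestinationCreateWithURL((__bridge CFURLRef)[NSURL fileURLWithPath:path], kUTTypeJPEG, 1, NULL);
// Write the captured image, merging the edited metadata dictionary over its own properties.
CGImageDestinationAddImageFromSource(destination, source, 0, (__bridge CFDictionaryRef)metadata);
CGImageDestinationFinalize(destination);
CFRelease(destination);
CFRelease(source);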

Related

Efficiently store picture in Firebase Storage?

As storing pictures is going to be one of the more expensive Firebase features my app will be using, I want to make sure I'm doing it efficiently.
The steps I'm taking are the following:
Resize the picture the user wants to upload to a width of 500 points (a point represents one pixel on non-Retina screens and two pixels on Retina screens)
Upload the data for the specified image in PNG format to Firebase storage
Here's my actual code:
let storageRef = FIRStorage.storage().reference().child("\(name).png")
if let uploadData = UIImagePNGRepresentation(profImage.resizeImage(targetSize: CGSize(width: 500, height: Int(500*(profImage.size.height/profImage.size.width))))) {
storageRef.put(uploadData, metadata: nil, completion: nil)
}
The photos are going to be a little less than the width of an iPhone screen when displayed to the user. Is the way I'm storing them efficient or is there a better way to format them?
Edit: After a bit more research I've found out that JPEGs are more efficient than PNGs, so I'll be switching to that since transparency isn't important for me. See my answer below for an example.
I've changed the image format from PNG to JPEG and found it saves a lot of space; comparing my storage usage before and after made the difference obvious.
My code went from using UIImagePNGRepresentation to UIImageJPEGRepresentation with a compression quality of 1.0. I'm sure that if I reduce the compression quality it'll save even more space.
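For illustration, the change amounts to swapping one call and passing a quality argument; shown here in Objective-C to match the rest of this page, with resizedImage and the 0.8 quality value as placeholders:
NSData *pngData   = UIImagePNGRepresentation(resizedImage);       // lossless, largest file
NSData *jpegFull  = UIImageJPEGRepresentation(resizedImage, 1.0); // already noticeably smaller
NSData *jpegLower = UIImageJPEGRepresentation(resizedImage, 0.8); // smaller still, usually with little visible loss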

iPhone SDK: get actual size of image in bytes

How to get actual size of image ?
I am using
NSInteger actualSize = CGImageGetHeight(image.CGImage) * CGImageGetBytesPerRow(image.CGImage);
or
NSData *imageData2 = UIImageJPEGRepresentation(image, 1.0);
[imageData2 length];
but I don't get the actual size of the image; it is either larger or smaller compared to the size on disk (as I am using the simulator).
Is there any way to get actual size of the image?
It depends upon what you mean by "size".
If you want the amount of memory used while the image is loaded into memory and used by the app, the bytes-per-row times height is the way to go. This captures the amount of memory used by the uncompressed pixel buffer while the image is actively used by the app.
If you want the number of bytes used in persistent storage when you save the image (generally enjoying some compression), then grab the original asset's NSData and examine its length. Note, though, if you load an image and then use UIImageJPEGRepresentation with a quality of 1, you'll generally get a size a good deal larger than the original compressed file.
Bottom line, standard JPEG and PNG files enjoy some compression, but when the image is loaded into memory it is uncompressed. You can't generally infer the original file size from a UIImage object. You have to look at the original asset.
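As a rough sketch of that distinction, assuming image is a UIImage and asset is the ALAsset it was loaded from (both names are placeholders):
// In-memory footprint of the decoded bitmap while the image is in use.
size_t memoryBytes = CGImageGetBytesPerRow(image.CGImage) * CGImageGetHeight(image.CGImage);

// On-disk size of the original, compressed asset file.
long long diskBytes = [[asset defaultRepresentation] size];

NSLog(@"in memory: %zu bytes, on disk: %lld bytes", memoryBytes, diskBytes);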
Try this (for iOS 6.0 or later and OS X 10.8):
NSLog(#"%#",[NSByteCountFormatter stringFromByteCount:imageData2.length countStyle:NSByteCountFormatterCountStyleFile]);
UPDATE:
You were asked in the comments to post the code where you initialise your image, and the solution above did not work for you. Let's try something else: you could check the image file's size on disk directly:
NSError *error;
// attributesOfItemAtPath: expects a file-system path string; mediaURL here is the image file's path.
NSDictionary *fileDictionary = [[NSFileManager defaultManager] attributesOfItemAtPath:mediaURL error:&error];
NSNumber *size = [fileDictionary objectForKey:NSFileSize];

Efficient way to resize photos in iOS

What is the most efficient way to iterate over the entire camera roll, open every single photo and resize it?
My naive attempts to iterate over the asset library and get the defaultRepresentation results took about 1 second per 4 images (iPhone 5). Is there a way to do better?
I need the resized images to do some kind of processing.
Resizing full-resolution photos is a rather expensive operation. But you can use images already resized to the screen resolution:
ALAsset *result = ...; // do not forget to initialize it
ALAssetRepresentation *rawImage = [result defaultRepresentation];
UIImage *image = [UIImage imageWithCGImage:rawImage.fullScreenImage];
If you need another resolution you can still use fullScreenImage, since it is smaller than the original photo:
(CGImageRef)fullScreenImage
Returns a CGImage of the representation that is appropriate for
displaying full screen. The dimensions of the image are dependent on
the device your application is running on; the dimensions may not,
however, exactly match the dimensions of the screen.
In iOS 5 and later, this method returns a fully cropped, rotated, and
adjusted image—exactly as a user would see in Photos or in the image
picker.
Returns a CGImage of the representation that is appropriate for
displaying full screen, or NULL if a CGImage representation could not
be generated.
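Building on that, here is a minimal enumeration sketch, not a complete solution: authorization and error handling are omitted, and the queue label is arbitrary.
ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];
dispatch_queue_t processingQueue = dispatch_queue_create("com.example.photo.resize", DISPATCH_QUEUE_SERIAL);

[library enumerateGroupsWithTypes:ALAssetsGroupSavedPhotos usingBlock:^(ALAssetsGroup *group, BOOL *stop) {
    [group enumerateAssetsUsingBlock:^(ALAsset *asset, NSUInteger index, BOOL *innerStop) {
        if (asset == nil) return;
        // fullScreenImage is already downscaled to roughly the device's screen size,
        // which is far cheaper than pulling fullResolutionImage and resizing it yourself.
        UIImage *image = [UIImage imageWithCGImage:[[asset defaultRepresentation] fullScreenImage]];
        dispatch_async(processingQueue, ^{
            // Placeholder for your own processing of the resized image.
            NSLog(@"Processing image %lu: %@", (unsigned long)index, image);
        });
    }];
} failureBlock:^(NSError *error) {
    NSLog(@"Enumeration failed: %@", error);
}];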

Preloading images in an iOS app

I have an app with 150 local images (about 500kb each). I have loaded them all into an array like this:
allPics = [[NSMutableArray alloc] init];
//NSString *imagePath;
NSArray *result = [database performQuery:@"SELECT image_path FROM validWords ORDER BY valid_word"];
for (NSArray *row in result) {
    NSString *temp = [row objectAtIndex:0];
    NSLog(@"%@", temp);
    //imagePath = temp;
    UIImage *newImage = [UIImage imageNamed:temp];
    [allPics addObject:newImage];
}
When I set my UIImageView later to one of these pics, it hangs my interface up for a second, due to lazy loading from what I have read. I tried to prerender them, but that spiked my memory usage to over 3 GB before it got a third of the way through my images. Should I be looking to use a background thread to render the image when I need it? When I reduced the image total to 4, once all 4 were rendered once, the transitions between them were seamless.
I appreciate any and all tips and solutions!
Yes, I would suggest a background thread and paging. If the user is looking at image 7, you should load images, say, 5, 6, 8 and 9. If the user then moves on to image 8, you can discard image 5 and lazy load image 10. This way the user should be able to move through your images without a significant memory or performance overhead.
You can then also add heuristics such as 'if the user is paging through the images very quickly, don't load any images until they slow down'.
Another tip is to store a very low resolution version of the image (say, a 50kb version) and store that at a different path. Then you can show the thumbnail images to the user and only lazy load in the high res image if the user stops on that image for a period of time.
Finally, be careful when you talk about image sizes. Is the 500 KB compressed or uncompressed? If it is a 500 KB compressed JPEG, the actual image on the device could be vastly bigger. A JPEG with fairly uniform colour and a reasonably high level of compression can be very small on disk, but decompressed it could be a massive image. This could be another source of the lag you experience.
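A sketch of that lazy, backgrounded approach follows; imagePathForIndex: is a hypothetical lookup into your database paths, and the 320 by 320 target size is a placeholder for whatever your UIImageView displays.
- (void)loadImageAtIndex:(NSUInteger)index completion:(void (^)(UIImage *image))completion {
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        // imageWithContentsOfFile: avoids the system-wide cache that imageNamed: keeps around.
        UIImage *fullImage = [UIImage imageWithContentsOfFile:[self imagePathForIndex:index]];

        // Drawing into a context forces JPEG/PNG decompression off the main thread and
        // yields a screen-sized bitmap instead of the full-resolution one.
        CGSize targetSize = CGSizeMake(320, 320);
        UIGraphicsBeginImageContextWithOptions(targetSize, NO, 0);
        [fullImage drawInRect:CGRectMake(0, 0, targetSize.width, targetSize.height)];
        UIImage *decoded = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();

        dispatch_async(dispatch_get_main_queue(), ^{
            completion(decoded);
        });
    });
}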

How to implement accessibility for ALAsset photos on iOS

I wrote a custom image picker based on ALAssetsLibrary. Everything works fine except VoiceOver: every photo is only announced as "Button", which I think is not good.
So I checked the Photo app that built in iOS, VoiceOver spoke following information for each photo:
Whether it's a photo, video, screenshot, etc.
Whether it's portrait or landscape.
Its creation date.
Whether it's sharp or blurry.
Whether it's bright or dark.
I think I can get the first three from ALAsset's properties, which are:
ALAssetPropertyType
ALAssetPropertyOrientation
ALAssetPropertyDate
But how about sharpness and brightness? Can I get them from image Metadata or derive them out?
Update:
In the photo's EXIF metadata:
brightness is available for photos taken directly with the camera, but photos saved from the web or captured from the screen always return a nil value.
sharpness is always nil in the EXIF data; according to the documentation, the sharpness value is "The sharpness applied to the image", so I think it is set by image-processing apps (such as Aperture).
But Photos.app always has the right brightness and sharpness values for any kind of photo. Is it possible to do this ourselves?
You can get values using EXIF metadata.
All the keys are referenced in Apple's ImageIO documentation (the CGImageProperties EXIF dictionary keys).
Here I wrote an example:
NSDictionary *allMetadata = [[asset defaultRepresentation] metadata];
NSDictionary *exif = [allMetadata objectForKey:(NSString*)kCGImagePropertyExifDictionary];
and then get sharpness and brightness:
NSNumber *sharpness = [exif objectForKey:(NSString*)kCGImagePropertyExifSharpness];
NSNumber *brightness = [exif objectForKey:(NSString*)kCGImagePropertyExifBrightnessValue];
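Once you have those values, the VoiceOver side is just a matter of composing an accessibilityLabel on whatever view represents the photo in your picker. A rough sketch, where the cell.imageView receiver is a placeholder for your own view:
NSString *type = [asset valueForProperty:ALAssetPropertyType];
NSDate *date   = [asset valueForProperty:ALAssetPropertyDate];
NSString *kind = [type isEqualToString:ALAssetTypeVideo] ? @"Video" : @"Photo";
NSString *when = [NSDateFormatter localizedStringFromDate:date
                                                dateStyle:NSDateFormatterMediumStyle
                                                timeStyle:NSDateFormatterShortStyle];

cell.imageView.isAccessibilityElement = YES;
cell.imageView.accessibilityLabel = [NSString stringWithFormat:@"%@, %@", kind, when];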
