I wrote a custom image picker based on ALAssetsLibrary. Everything works fine except VoiceOver: every photo is announced only as "Button", which is not good.
So I checked the built-in Photos app on iOS; VoiceOver speaks the following information for each photo:
Whether it's a photo, a video, a screenshot, etc.
Whether it's portrait or landscape.
Its creation date.
Whether it's sharp or blurry.
Whether it's bright or dark.
I think I can get the first three from ALAsset's properties, namely:
ALAssetPropertyType
ALAssetPropertyOrientation
ALAssetPropertyDate
But what about sharpness and brightness? Can I get them from the image metadata, or derive them myself?
Update:
In the photo's EXIF metadata:
brightness is available for photos taken directly with the camera, but for photos saved from the web or captured from the screen it is always nil.
sharpness is always nil in the EXIF; according to the documentation, the sharpness value is "The sharpness applied to the image", so I think it is meant for image-processing apps (such as Aperture).
But Photos.app always has correct brightness and sharpness values for any kind of photo. Is it possible to do this ourselves?
You can get these values from the EXIF metadata.
All the keys are referenced in Apple's docs here and here.
Here I wrote an example:
NSDictionary *allMetadata = [[asset defaultRepresentation] metadata];
NSDictionary *exif = [allMetadata objectForKey:(NSString*)kCGImagePropertyExifDictionary];
and then get the sharpness and brightness:
NSNumber *sharpness = [exif objectForKey:(NSString*)kCGImagePropertyExifSharpness];
NSNumber *brightness = [exif objectForKey:(NSString*)kCGImagePropertyExifBrightnessValue];
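To tie this back to the VoiceOver question, here is a minimal sketch of how an accessibility label could be assembled from the ALAsset properties; the phrasing, the portrait/landscape heuristic, and the helper name accessibilityLabelForAsset: are my own assumptions, not what Photos.app actually speaks:
// Sketch: build a spoken description for a photo cell from ALAsset properties.
// The wording and the portrait/landscape heuristic are assumptions.
- (NSString *)accessibilityLabelForAsset:(ALAsset *)asset
{
    NSMutableArray *parts = [NSMutableArray array];

    NSString *type = [asset valueForProperty:ALAssetPropertyType];
    [parts addObject:[type isEqualToString:ALAssetTypeVideo] ? @"Video" : @"Photo"];

    ALAssetOrientation orientation = [[asset valueForProperty:ALAssetPropertyOrientation] intValue];
    BOOL portrait = (orientation == ALAssetOrientationLeft ||
                     orientation == ALAssetOrientationRight ||
                     orientation == ALAssetOrientationLeftMirrored ||
                     orientation == ALAssetOrientationRightMirrored);
    [parts addObject:portrait ? @"portrait" : @"landscape"];

    NSDate *date = [asset valueForProperty:ALAssetPropertyDate];
    if (date) {
        [parts addObject:[NSDateFormatter localizedStringFromDate:date
                                                        dateStyle:NSDateFormatterMediumStyle
                                                        timeStyle:NSDateFormatterNoStyle]];
    }
    return [parts componentsJoinedByString:@", "];
}
The result would then be assigned to the cell's accessibilityLabel (with isAccessibilityElement set to YES) so VoiceOver speaks it instead of "Button".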
Related
I have an app that chooses an image from the Camera Roll or takes a photo using the camera. The problem is that when I run the app on an iPad and select an image, the full image is not displayed later on.
Image you select: (notice that the waterfall is pretty much centered.)
Image displayed: (the image is displayed wrong.)
The problem is only on big devices like iPads. I'm saving the image using Core Data and then recovering it. How can I get it to display the full image?
Niall
You need to set the UIImageView's contentMode to UIViewContentModeScaleAspectFit.
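For example, assuming the outlet is called imageView and chosenImage is the picked image (both names are just for illustration):
// Show the whole image within the view's bounds, preserving its aspect ratio.
self.imageView.contentMode = UIViewContentModeScaleAspectFit;
self.imageView.clipsToBounds = YES;
self.imageView.image = chosenImage;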
What is the most efficient way to iterate over the entire camera roll, open every single photo and resize it?
My naive attempts to iterate over the asset library and get the defaultRepresentation results took about 1 second per 4 images (iPhone 5). Is there a way to do better?
I need the resized images to do some kind of processing.
Resizing full-resolution photos is a rather expensive operation, but you can use images that are already resized to screen resolution:
ALAsset *result = // .. do not forget to initialize it
ALAssetRepresentation *rawImage = [result defaultRepresentation];
UIImage *image = [UIImage imageWithCGImage:rawImage.fullScreenImage];
If you need another resolution you can still start from fullScreenImage, since it is smaller than the original photo.
- (CGImageRef)fullScreenImage
Returns a CGImage of the representation that is appropriate for displaying full screen. The dimensions of the image are dependent on the device your application is running on; the dimensions may not, however, exactly match the dimensions of the screen.
In iOS 5 and later, this method returns a fully cropped, rotated, and adjusted image—exactly as a user would see in Photos or in the image picker.
Return Value: a CGImage of the representation that is appropriate for displaying full screen, or NULL if a CGImage representation could not be generated.
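If a size other than full screen is needed, one possible approach (a sketch only; the 200x200 bounding box is an arbitrary example, and rawImage is the ALAssetRepresentation from the snippet above) is to start from the full-screen image and scale it down in a graphics context:
// Sketch: downscale the already-decoded full-screen image, preserving aspect ratio.
CGSize maxSize = CGSizeMake(200.0, 200.0); // example bounding box
UIImage *screenSized = [UIImage imageWithCGImage:rawImage.fullScreenImage];

CGFloat scale = MIN(maxSize.width / screenSized.size.width,
                    maxSize.height / screenSized.size.height);
CGSize targetSize = CGSizeMake(screenSized.size.width * scale,
                               screenSized.size.height * scale);

UIGraphicsBeginImageContextWithOptions(targetSize, NO, 0.0);
[screenSized drawInRect:(CGRect){CGPointZero, targetSize}];
UIImage *resized = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
This is still much cheaper than decoding and scaling the full-resolution representation.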
I'm capturing an image from the iPhone camera and storing it in the Documents folder for further checking and use.
Before storing the image I want to check the image quality based on its RGB values, grayscale, white balance, etc.
All of that I can get from the image, but I am not able to figure out which framework to use, or how to use it, to retrieve this information.
Any help would be appreciated.
Thank you!
This app may do what you want. It is called Photo Metadata Reader. I saw it had RGB in the screenshots.
https://itunes.apple.com/us/app/photo-metadata-reader/id437865801?mt=8
If you want to retrieve metadata from a UIImage, check this:
https://github.com/foundry/UIImageMetadata
If you want to get, for example, the color balance, I think you have to read the pixel data from the image and compute it yourself.
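As a rough sketch of what "compute it yourself" could look like, the following renders a UIImage into an RGBA bitmap and averages the channels to get overall color levels; the helper name AverageRGB and the whole-image averaging are my own choices, not a standard API:
#import <UIKit/UIKit.h>

// Sketch: draw the image into an RGBA bitmap and average the R, G and B channels.
// Averaging every pixel is slow for large images; sampling a subset would be faster.
static void AverageRGB(UIImage *image, CGFloat *outR, CGFloat *outG, CGFloat *outB)
{
    CGImageRef cgImage = image.CGImage;
    size_t width  = CGImageGetWidth(cgImage);
    size_t height = CGImageGetHeight(cgImage);

    uint8_t *pixels = calloc(width * height * 4, 1);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(pixels, width, height, 8, width * 4,
                                                 colorSpace,
                                                 (CGBitmapInfo)kCGImageAlphaPremultipliedLast);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), cgImage);

    uint64_t r = 0, g = 0, b = 0;
    for (size_t i = 0; i < width * height; i++) {
        r += pixels[i * 4 + 0];
        g += pixels[i * 4 + 1];
        b += pixels[i * 4 + 2];
    }
    size_t count = width * height;
    *outR = (CGFloat)r / count;
    *outG = (CGFloat)g / count;
    *outB = (CGFloat)b / count;

    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    free(pixels);
}
From the averaged channels you could derive a rough brightness estimate, or compare channel ratios as a crude white-balance check.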
Basic task: update the EXIF orientation property in the metaData associated with a UIImage. My problem is that I don't know where the orientation property is in all the EXIF info.
Convoluted Background: I am changing the orientation of the image returned by imagePickerController:didFinishPickingMediaWithInfo: so I am thinking that I also need to update the metaData before saving the image with writeImageToSavedPhotosAlbum:(CGImageRef)imageRef metadata:(NSDictionary *)metadata.
In other words, unless I change it, the metaData will contain the old/initial orientation and thus be wrong. The reason I am changing the orientation is because it keeps tripping me up when I run the Core Image face detection routine. Taking a photo with the iPhone (device) in Portrait mode using the front camera, the orientation is UIImageOrientationRight (3). If I rewrite the image so the orientation is UIImageOrientationUp(0), I get good face detection results. For reference, the routine to rewrite the image is below.
The whole camera orientation thing I find very confusing, and I seem to be digging myself deeper into a code hole with all of this. I have looked at the posts here, here and here. And according to this post (https://stackoverflow.com/a/3781192/840992):
"The camera is actually landscape native, so you get up or down when
you take a picture in landscape and left or right when you take a
picture in portrait (depending on how you hold the device)."
...which is totally confusing. If the above is true, I would think I should be getting an orientation of UIImageOrientationLeftMirrored or UIImageOrientationRightMirrored with the front camera. And none of this would explain why the CIDetector fails on the virgin image returned by the picker.
I am approaching this ass-backwards but I can't seem to get oriented...
- (UIImage *)normalizedImage:(UIImage *)thisImage
{
    // Already upright: nothing to do.
    if (thisImage.imageOrientation == UIImageOrientationUp) return thisImage;

    // Redraw the image so the pixel data itself is rotated and the resulting
    // image's orientation is UIImageOrientationUp.
    UIGraphicsBeginImageContextWithOptions(thisImage.size, NO, thisImage.scale);
    [thisImage drawInRect:(CGRect){0, 0, thisImage.size}];
    UIImage *normalizedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return normalizedImage;
}
Take a look at my answer here:
Force UIImagePickerController to take photo in portrait orientation/dimensions iOS
and associated project on github (you won't need to run the project, just look at the readme).
It's more concerned with reading than with writing metadata, but it includes a few notes on Apple's imageOrientation and the corresponding orientation 'Exif' metadata.
This might be worth a read also
Captured photo automatically rotated during upload in IOS 6.0 or iPhone
There are two different constant numbering conventions in play to indicate image orientation.
kCGImagePropertyOrientation constants as used in TIFF/IPTC image metadata tags
UIImageOrientation constants as used by UIImage imageOrientation property.
The iPhone's native camera orientation is landscape left (with the home button to the right). Native pixel dimensions always reflect this; rotation flags are used to orient the image correctly for the display orientation.
                 Apple UIImage.imageOrientation    TIFF/IPTC kCGImagePropertyOrientation
iPhone native    UIImageOrientationUp    = 0       Landscape left  = 1
rotate 180deg    UIImageOrientationDown  = 1       Landscape right = 3
rotate 90CCW     UIImageOrientationLeft  = 2       Portrait down   = 8
rotate 90CW      UIImageOrientationRight = 3       Portrait up     = 6
UIImageOrientation 4-7 map to kCGImagePropertyOrientation 2,4,5,7 - these are the mirrored counterparts.
UIImage derives its imageOrientation property from the underlying kCGImagePropertyOrientation flags - that's why it is a read-only property. This means that as long as you get the metadata flags right, the imageOrientation will follow correctly. But if you are reading the numbers in order to apply a transform, you need to be aware which numbers you are looking at.
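If you need to translate between the two numbering schemes in code, a small lookup following the table above could look like this (a sketch; the helper name UIOrientationFromEXIF is my own):
#import <UIKit/UIKit.h>

// Sketch: map a TIFF/EXIF kCGImagePropertyOrientation value (1-8) to the
// corresponding UIImageOrientation, following the table above.
static UIImageOrientation UIOrientationFromEXIF(NSInteger exifOrientation)
{
    switch (exifOrientation) {
        case 1: return UIImageOrientationUp;
        case 2: return UIImageOrientationUpMirrored;
        case 3: return UIImageOrientationDown;
        case 4: return UIImageOrientationDownMirrored;
        case 5: return UIImageOrientationLeftMirrored;
        case 6: return UIImageOrientationRight;
        case 7: return UIImageOrientationRightMirrored;
        case 8: return UIImageOrientationLeft;
        default: return UIImageOrientationUp;
    }
}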
A few gleanings from my world o' pain in looking into this:
Background: Core Image face detection was failing, and it seemed to be related to using featuresInImage:options: with the UIImage.imageOrientation property as an argument. With an image adjusted to have no rotation and no mirroring, detection worked fine, but when passing in an image directly from the camera, detection failed.
Well...UIImage.imageOrientation is DIFFERENT from the actual orientation of the image.
In other words...
UIImage *tmpImage = [self.imageInfo objectForKey:UIImagePickerControllerOriginalImage];
printf("tmpImage.imageOrientation: %ld\n", (long)tmpImage.imageOrientation);
Reports a value of 3 or UIImageOrientationRight whereas using the metaData returned by the UIImagePickerControllerDelegate method...
NSMutableDictionary *metaData = [[tmpInfo objectForKey:@"UIImagePickerControllerMediaMetadata"] mutableCopy];
printf("metaData orientation: %ld\n", (long)[[metaData objectForKey:@"Orientation"] integerValue]);
Reports a value of 6 or UIImageOrientationLeftMirrored.
I suppose it seems obvious now that UIImage.imageOrientation is a display orientation, which appears to be determined by the source image orientation and device rotation. (I may be wrong here.) Since the display orientation is different from the actual image data, using it will cause the CIDetector to fail. Ugh.
I'm sure all of that serves very good and important purposes for liquid GUIs etc., but it is too much to deal with for me, since all the CIDetector coordinates will also be in the original image orientation, which makes CALayer drawing sick-making. So, for posterity, here is how to change the Orientation property of the metaData AND the image contained therein. This metaData can then be used to save the image to the camera roll.
Solution
// NORMALIZE IMAGE
UIImage *tmpImage = [self normalizedImage:[self.imageInfo objectForKey:UIImagePickerControllerOriginalImage]];

NSMutableDictionary *tmpInfo = [self.imageInfo mutableCopy];
NSMutableDictionary *metaData = [[tmpInfo objectForKey:@"UIImagePickerControllerMediaMetadata"] mutableCopy];

// The pixel data is now upright, so record that in the metadata.
// (Note: 0 is UIImageOrientationUp; the TIFF/EXIF convention in the table above uses 1 for "up".)
[metaData setObject:[NSNumber numberWithInt:0] forKey:@"Orientation"];

[tmpInfo setObject:tmpImage forKey:@"UIImagePickerControllerOriginalImage"];
[tmpInfo setObject:metaData forKey:@"UIImagePickerControllerMediaMetadata"];
self.imageInfo = tmpInfo;
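For completeness, writing the normalized image back to the camera roll with the updated metadata (the writeImageToSavedPhotosAlbum:metadata: route mentioned in the question) could then look roughly like this; it is only a sketch, and library is assumed to be an ALAssetsLibrary instance you already have:
// Sketch: save the normalized image together with the updated metadata.
// 'library' is an assumed ALAssetsLibrary instance, not something defined above.
[library writeImageToSavedPhotosAlbum:tmpImage.CGImage
                             metadata:metaData
                      completionBlock:^(NSURL *assetURL, NSError *error) {
                          if (error) {
                              NSLog(@"Save failed: %@", error);
                          } else {
                              NSLog(@"Saved to %@", assetURL);
                          }
                      }];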
I was having a look at the RosyWriter sample code provided by Apple as a starting point, and I'd like to find a way to crop a video.
So I have the full-resolution video from the iPhone's camera, but I just want to use a cropped part of it (and also rotate this sub-part).
I figured that in captureOutput:didOutputSampleBuffer:fromConnection: I can modify each frame by modifying the CMSampleBufferRef that gets passed in.
So my questions now are:
Is this the right place to crop my video?
Where do I specify that the final video (that gets saved to disk) has a smaller resolution than the full video captured by AVCaptureSession? Setting the AVVideoWidthKey and AVVideoHeightKey has no effect.
How can I crop the video and still have good performance?
Any help is appreciated!
Thanks a lot!
EDIT:
Maybe I just need to know how I can make a video that was shot in portrait into a landscape one, by rotating the video's frames by 90 degrees and then zooming in to fit the width again...?!?
In AVVideoSettings.h there is the AVVideoScalingModeKey. This key, combined with the defined values, controls how the video is scaled/cropped when encoding the images into the video container. For example, if you specify a value of AVVideoScalingModeFit then cropping is used. Check out the header for how the other values affect the video images.
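As a rough example of where this key goes, here are output settings for an AVAssetWriterInput; the 640x480 size is only an illustrative value, and AVVideoScalingModeResizeAspectFill is used here as the scale-and-crop variant (see the header for the other modes, including the AVVideoScalingModeFit mentioned above):
#import <AVFoundation/AVFoundation.h>

// Sketch: encode video at a smaller size, cropping rather than letterboxing.
// The 640x480 dimensions are only an example.
NSDictionary *outputSettings = @{
    AVVideoCodecKey       : AVVideoCodecH264,
    AVVideoWidthKey       : @640,
    AVVideoHeightKey      : @480,
    AVVideoScalingModeKey : AVVideoScalingModeResizeAspectFill
};

AVAssetWriterInput *videoInput =
    [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                       outputSettings:outputSettings];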