Get thumbnail from UIImagePickerController when using CAMERA (not album) - iOS

It's very easy to get a thumbnail when using the ALBUM with UIImagePickerController.
It's even easy to get a thumbnail from a VIDEO.
But when you're using the CAMERA,
self.cameraController.sourceType = UIImagePickerControllerSourceTypeCamera;
and you come back to
-(void)imagePickerController:(UIImagePickerController *)picker
didFinishPickingMediaWithInfo:(NSDictionary *)info
How do you quickly get a thumbnail?
Note, I'm not particularly trying to SAVE the image; I want to use the image from the camera (imagine then using it in a drawing screen, or for image processing, etc.).
Below is a naive category that turns the image into a 128×128 image, but it's way too slow.
Many apps move very quickly from the camera, to the next screen (say, editing the image or drawing on it). What's the technique? Cheers
- (UIImage *)squareAndSmall
{
    // This category works fine to make a 128×128 thumbnail,
    // but it is very slow.
    CGSize finalsize = CGSizeMake(128, 128);
    CGFloat scale = MAX(finalsize.width / self.size.width,
                        finalsize.height / self.size.height);
    CGFloat width = self.size.width * scale;
    CGFloat height = self.size.height * scale;
    // Uses the central area, say...
    CGRect imageRect = CGRectMake((finalsize.width - width) / 2.0f,
                                  (finalsize.height - height) / 2.0f,
                                  width, height);
    UIGraphicsBeginImageContextWithOptions(finalsize, NO, 0);
    [self drawInRect:imageRect];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
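(For comparison, one commonly faster route is to let ImageIO decode straight to thumbnail size instead of drawing the full-resolution UIImage. A minimal sketch, assuming the image's JPEG data is available; the method name is illustrative:)
#import <ImageIO/ImageIO.h>

// Sketch only: decode directly to a small size via ImageIO.
// Assumes you have the JPEG data, e.g. from UIImageJPEGRepresentation(image, 0.9).
- (UIImage *)fastThumbnailFromData:(NSData *)imageData
{
    CGImageSourceRef source = CGImageSourceCreateWithData((__bridge CFDataRef)imageData, NULL);
    if (!source) return nil;
    NSDictionary *options = @{
        (id)kCGImageSourceCreateThumbnailFromImageAlways : @YES,
        (id)kCGImageSourceThumbnailMaxPixelSize : @128,
        (id)kCGImageSourceCreateThumbnailWithTransform : @YES // honor EXIF orientation
    };
    CGImageRef thumb = CGImageSourceCreateThumbnailAtIndex(source, 0, (__bridge CFDictionaryRef)options);
    CFRelease(source);
    if (!thumb) return nil;
    UIImage *result = [UIImage imageWithCGImage:thumb];
    CGImageRelease(thumb);
    return result;
}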

Try using "AssetLibrary.framework" to write the image to your camera roll and it will automatically create a thumbnail representation for you. Moreover, as this process is asynchronous, it will certainly be a performance boost for the app.
Try some thing like the following:
#import <AssetsLibrary/AssetsLibrary.h>

// .........your code

- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info {
    // .........
    UIImage *img = info[UIImagePickerControllerOriginalImage];
    ALAssetsLibrary *library = [ALAssetsLibrary new];
    [library writeImageToSavedPhotosAlbum:[img CGImage]
                              orientation:(ALAssetOrientation)[img imageOrientation]
                          completionBlock:^(NSURL *assetURL, NSError *error) {
        if (error) {
            NSLog(@"error");
            return;
        }
        [library assetForURL:assetURL resultBlock:^(ALAsset *asset) {
            // self.imageViewer is a UIImageView used to display the thumbnail
            [self.imageViewer setImage:[UIImage imageWithCGImage:[asset aspectRatioThumbnail]]];
        } failureBlock:^(NSError *error) {
            NSLog(@"Failed to load captured image.");
        }];
    }];
    [library release]; // MRC only; omit this line under ARC
    // ................
}

// ......rest of your code
I have not tested the code, but I hope it fulfills your purpose. Please make sure to add a reference to AssetsLibrary.framework to your project before you try it, and let me know if it worked out for you.
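(Note: AssetsLibrary was deprecated in iOS 9 in favor of the Photos framework, so this approach is best suited to older deployment targets.)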

Related

Get a photo from photo library not working

I am new to Xcode. I want to select a photo from the photo library.
- (void)ChoosePhoto_from_Album
{
    UIImagePickerController *image_picker = [[UIImagePickerController alloc] init];
    image_picker.delegate = self;
    image_picker.allowsEditing = YES;
    image_picker.sourceType = UIImagePickerControllerSourceTypePhotoLibrary;
    [self presentViewController:image_picker animated:YES completion:NULL];
}

- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info
{
    UIImage *image = [info objectForKey:UIImagePickerControllerOriginalImage];
    NSLog(@"ImagePicker image size = (%.0f x %.0f)", image.size.width, image.size.height);
    // process image...
}
The photo I selected is 5MP (2592x1936). However, the returned size is 960x716.
What am I missing?
Thanks!
I found this link. You can use the Three20 framework, which contains the code for this.
Here is your solution.
UIImagePickerController does give you the full camera-based image. You need to get it from the editingInfo parameter of the imagePickerController:didFinishPickingImage:editingInfo: method.
- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingImage:(UIImage *)image editingInfo:(NSDictionary *)editingInfo {
    CGImageRef ref = image.CGImage;
    int width = (int)CGImageGetWidth(ref);
    int height = (int)CGImageGetHeight(ref);
    NSLog(@"image size = %d x %d", width, height);

    UIImage *orig = [editingInfo objectForKey:UIImagePickerControllerOriginalImage];
    ref = orig.CGImage;
    width = (int)CGImageGetWidth(ref);
    height = (int)CGImageGetHeight(ref);
    NSLog(@"orig image size = %d x %d", width, height);

    CGRect origRect;
    [[editingInfo objectForKey:UIImagePickerControllerCropRect] getValue:&origRect];
    NSLog(@"Crop rect = %f %f %f %f", origRect.origin.x, origRect.origin.y, origRect.size.width, origRect.size.height);
}

Global variable is not being updated fast enough 50% of the time

I have a photo taking app. When the user presses the button to take a photo, I set a global NSString variable called self.hasUserTakenAPhoto equal to YES. This works perfectly 100% of the time when using the rear facing camera. However, it only works about 50% of the time when using the front facing camera and I have no idea why.
Below are the important pieces of code and a quick description of what they do.
Here is my viewDidLoad:
- (void)viewDidLoad
{
    [super viewDidLoad];
    // Do any additional setup after loading the view.
    self.topHalfView.frame = CGRectMake(0, 0, self.view.bounds.size.width, self.view.bounds.size.height/2);
    self.takingPhotoView.frame = CGRectMake(0, 0, self.view.bounds.size.width, self.view.bounds.size.height);
    self.afterPhotoView.frame = CGRectMake(0, 0, self.view.bounds.size.width, self.view.bounds.size.height);
    self.bottomHalfView.frame = CGRectMake(0, 240, self.view.bounds.size.width, self.view.bounds.size.height/2);

    PFFile *imageFile = [self.message objectForKey:@"file"];
    NSURL *imageFileURL = [[NSURL alloc] initWithString:imageFile.url];
    imageFile = nil;
    self.imageData = [NSData dataWithContentsOfURL:imageFileURL];
    imageFileURL = nil;
    self.topHalfView.image = [UIImage imageWithData:self.imageData];

    // START CREATING THE SESSION
    self.session = [[AVCaptureSession alloc] init];
    [self.session setSessionPreset:AVCaptureSessionPresetPhoto];
    self.inputDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];

    NSError *error;
    self.deviceInput = [AVCaptureDeviceInput deviceInputWithDevice:self.inputDevice error:&error];
    if ([self.session canAddInput:self.deviceInput])
        [self.session addInput:self.deviceInput];

    _previewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:_session];
    self.rootLayer = [[self view] layer];
    [self.rootLayer setMasksToBounds:YES];
    [_previewLayer setFrame:CGRectMake(0, 240, self.rootLayer.bounds.size.width, self.rootLayer.bounds.size.height/2)];
    [_previewLayer setVideoGravity:AVLayerVideoGravityResizeAspectFill];
    [self.rootLayer insertSublayer:_previewLayer atIndex:0];

    self.videoOutput = [[AVCaptureVideoDataOutput alloc] init];
    self.videoOutput.videoSettings = @{ (NSString *)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };
    [self.session addOutput:self.videoOutput];

    dispatch_queue_t queue = dispatch_queue_create("MyQueue", NULL);
    [self.videoOutput setSampleBufferDelegate:self queue:queue];

    [_session startRunning];
}
The important part of viewDidLoad starts at the comment // START CREATING THE SESSION.
I basically create the session and then start running it. I have set this view controller as an AVCaptureVideoDataOutputSampleBufferDelegate, so as soon as the session starts running, the method below starts being called as well.
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    // Sample buffer data is being sent, but don't actually use it until
    // self.hasUserTakenAPhoto has been set to YES.
    NSLog(@"Has the user taken a photo?: %@", self.hasUserTakenAPhoto);
    if ([self.hasUserTakenAPhoto isEqualToString:@"YES"]) {
        // Now that self.hasUserTakenAPhoto is equal to YES, grab the current sample
        // buffer and use it for the value of self.image, aka the captured photo.
        self.image = [self imageFromSampleBuffer:sampleBuffer];
    }
}
This code is receiving the video output from the camera every second, but I don't actually do anything with it until self.hasUserTakenAPhoto is equal to YES. Once that has a string value of YES, I use the current sampleBuffer from the camera and place it inside my global variable called self.image.
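(The question does not show -imageFromSampleBuffer:; for context, a typical implementation, adapted from Apple's AVFoundation sample code and assuming the 32BGRA pixel format set in viewDidLoad, looks like this:)
- (UIImage *)imageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer
{
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer, 0);

    void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    // kCVPixelFormatType_32BGRA matches the videoSettings set in viewDidLoad.
    CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8,
        bytesPerRow, colorSpace,
        kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGImageRef quartzImage = CGBitmapContextCreateImage(context);

    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);

    UIImage *image = [UIImage imageWithCGImage:quartzImage];
    CGImageRelease(quartzImage);
    return image;
}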
So, here is when self.hasUserTakenAPhoto is actually set to YES.
Below is my IBAction code that is called when the user presses the button to capture a photo. A lot happens when this code runs, but really all that matters is the very first statement of: self.hasUserTakenAPhoto = @"YES";
- (IBAction)stillImageCapture {
    self.hasUserTakenAPhoto = @"YES";
    [self.session stopRunning];
    if (self.inputDevice.position == 2) {
        self.image = [self selfieCorrection:self.image];
    } else {
        self.image = [self rotate:UIImageOrientationRight];
    }

    CGFloat widthToHeightRatio = _previewLayer.bounds.size.width / _previewLayer.bounds.size.height;
    CGRect cropRect;
    // Set the crop rect's smaller dimension to match the image's smaller dimension, and
    // scale its other dimension according to the width:height ratio.
    if (self.image.size.width < self.image.size.height) {
        cropRect.size.width = self.image.size.width;
        cropRect.size.height = cropRect.size.width / widthToHeightRatio;
    } else {
        cropRect.size.width = self.image.size.height * widthToHeightRatio;
        cropRect.size.height = self.image.size.height;
    }

    // Center the rect in the longer dimension
    if (cropRect.size.width < cropRect.size.height) {
        cropRect.origin.x = 0;
        cropRect.origin.y = (self.image.size.height - cropRect.size.height)/2.0;
        NSLog(@"Y Math: %f", (self.image.size.height - cropRect.size.height));
    } else {
        cropRect.origin.x = (self.image.size.width - cropRect.size.width)/2.0;
        cropRect.origin.y = 0;
        float cropValueDoubled = self.image.size.height - cropRect.size.height;
        float final = cropValueDoubled/2;
        finalXValueForCrop = final;
    }

    CGRect cropRectFinal = CGRectMake(cropRect.origin.x, finalXValueForCrop, cropRect.size.width, cropRect.size.height);
    CGImageRef imageRef = CGImageCreateWithImageInRect([self.image CGImage], cropRectFinal);
    UIImage *image2 = [[UIImage alloc] initWithCGImage:imageRef];
    self.image = image2;
    CGImageRelease(imageRef);
    self.bottomHalfView.image = self.image;

    if ([self.hasUserTakenAPhoto isEqual:@"YES"]) {
        [self.takingPhotoView setHidden:YES];
        self.image = [self screenshot];
        [_afterPhotoView setHidden:NO];
    }
}
So basically the viewDidLoad method runs and the session is started, the session is sending everything the camera sees to the captureOutput method, and then as soon as the user presses the "take a photo" button we set the string value of self.hasUserTakenAPhoto to YES, the session stops, and since self.hasUserTakenAPhoto is equal to YES now, the captureOutput method places the very last camera buffer into the self.image object for me to use.
I just can't figure this out because like I said it works 100% of the time when using the rear facing camera. However, when using the front facing camera it only works 50% of the time.
I have narrowed the problem down to the fact that self.hasUserTakenAPhoto does not update to YES fast enough when using the front facing camera. I know this because, if you look at the 2nd block of code I posted, it has the statement NSLog(@"Has the user taken a photo?: %@", self.hasUserTakenAPhoto);.
When this works correctly, and the user has just pressed the button to capture a photo (which also stops the session), the very last time that NSLog runs it will print with the correct value of YES.
However, when it doesn't work correctly and doesn't update fast enough, the very last time it runs it still prints to the log with a value of null.
Any ideas on why self.hasUserTakenAPhoto does not update fast enough 50% of the time when using the front facing camera? Even if we can't figure that out, it doesn't matter. I just need help coming up with an alternate solution to this.
Thanks for the help.
I think it's a scheduling problem. At the return point of your methods
– captureOutput:didOutputSampleBuffer:fromConnection:
– captureOutput:didDropSampleBuffer:fromConnection:
add a CFRunLoopRun()
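Another possibility (an untested assumption about the root cause, not something confirmed by the question): the flag is written on the main thread but read on the background queue passed to setSampleBufferDelegate:queue:, so writing it from that same queue would remove the race. A sketch, assuming the queue is kept in a hypothetical sampleBufferQueue property:
// Hypothetical: store the delegate queue in a property when creating it.
// self.sampleBufferQueue = dispatch_queue_create("MyQueue", NULL);
// [self.videoOutput setSampleBufferDelegate:self queue:self.sampleBufferQueue];

// Then set the flag on that queue so the delegate is guaranteed to see it:
dispatch_sync(self.sampleBufferQueue, ^{
    self.hasUserTakenAPhoto = @"YES";
});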

How to set photo taken with UIImagePickerController to be viewed/modified into a custom UIViewController?

I am currently using the default UIImagePickerController, and immediately after a picture is taken, its default confirmation screen is shown.
My question is, how would I be able to use my own custom UIViewController to view the resultant image (and therefore bypass this confirmation screen)?
Please note that I am not interested in using a custom overlay for the UIImagePicker with custom camera controls or images from the photo gallery, but rather just want to skip this screen and assume that the photo taken is what the user would have liked.
Thanks!
Try this code to resize the resultant image while keeping its proportions:
First, create an IBOutlet for a UIImageView.
- (void)imagePickerController:(UIImagePickerController *)picker
didFinishPickingMediaWithInfo:(NSDictionary *)info
{
    NSString *mediaType = info[UIImagePickerControllerMediaType];
    [self dismissViewControllerAnimated:YES completion:nil];
    if ([mediaType isEqualToString:(NSString *)kUTTypeImage]) {
        OriginalImage = info[UIImagePickerControllerOriginalImage];
        image = info[UIImagePickerControllerOriginalImage];
        //----------
        imageview.image = image; // additional method
        [self resizeImage];
        //---------
        if (_newMedia)
            UIImageWriteToSavedPhotosAlbum(image, self, @selector(image:finishedSavingWithError:contextInfo:), nil);
    }
    else if ([mediaType isEqualToString:(NSString *)kUTTypeMovie])
    {
        // Code here to support video if enabled
    }
}
//---- Resize the original image ----------------------
- (void)resizeImage
{
    UIImage *resizeImage = imageview.image;
    float width = 320;
    float height = 320;
    CGSize newSize = CGSizeMake(320, 320);
    UIGraphicsBeginImageContextWithOptions(newSize, NO, 0.0);
    CGRect rect = CGRectMake(0, 0, width, height);

    float widthRatio = resizeImage.size.width / width;
    float heightRatio = resizeImage.size.height / height;
    float divisor = widthRatio > heightRatio ? widthRatio : heightRatio;
    width = resizeImage.size.width / divisor;
    height = resizeImage.size.height / divisor;
    rect.size.width = width;
    rect.size.height = height;

    // Indent in case of width or height difference
    float offset = (width - height) / 2;
    if (offset > 0) {
        rect.origin.y = offset;
    }
    else {
        rect.origin.x = -offset;
    }
    [resizeImage drawInRect:rect];

    UIImage *smallImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    imageview.image = smallImage;
    imageview.contentMode = UIViewContentModeScaleAspectFit;
}
You can customize the image picker view controller using an overlay view.
Read the documentation here.
You set up a new view, set it as the camera overlay, tell the image picker to not display its own controls and display the picker.
Inside the code of your overlay, you call takePicture on the image picker view controller to finally take the image when you are ready.
Apple has an example of this here:
https://developer.apple.com/library/ios/samplecode/PhotoPicker/Introduction/Intro.html#//apple_ref/doc/uid/DTS40010196-Intro-DontLinkElementID_2
For transforms, you can use the cameraViewTransform property to set transformations to be used when taking the image.
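A minimal sketch of that setup (view and property names are illustrative, not from the question):
UIImagePickerController *picker = [[UIImagePickerController alloc] init];
picker.sourceType = UIImagePickerControllerSourceTypeCamera;
picker.showsCameraControls = NO;          // hide the default shutter/retake UI
picker.cameraOverlayView = myOverlayView; // your own controls live here
picker.delegate = self;
[self presentViewController:picker animated:YES completion:nil];

// Later, from a button action wired up in myOverlayView:
[picker takePicture]; // fires the delegate method below, with no confirmation screen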
After taking the picture, the delegate method would be called:
- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info
In this method, you dismiss the imagePickerController. You have access to the image, so you can play around with the image, possibly display it in a UIImageView. So the code would be something like this:
- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info
{
    [self dismissViewControllerAnimated:YES completion:nil];
    if (picker.sourceType == UIImagePickerControllerSourceTypeCamera)
    {
        // Here you have access to the image
        UIImage *pickedImage = [info objectForKey:UIImagePickerControllerOriginalImage];
        // Create an instance of a view controller and show the UIImageView,
        // or do your stuff there, according to your business logic.
    }
}

Benefits of using Setter and Getter methods vs. direct manipulation

Sorry if this is a beginner question, but I was wondering what the benefits are of using setter and getter methods rather than manipulating variables directly. I'm in Objective-C, and I'm wondering if there are any benefits in terms of memory/CPU usage.
For instance, I'm cropping an image before I upload it and I crop it after it is taken/picked. I put all my code in the didFinishPickingMediaWithInfo method. So it would look like the following:
- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info {
    // Create image
    imageFile = [info objectForKey:UIImagePickerControllerOriginalImage];

    // Unhide imageView and populate
    selectedImage.hidden = false;
    selectedImage.image = imageFile;

    // Create original image for reservation
    originalImage = imageFile;

    // Crop image
    UIGraphicsBeginImageContext(selectedImage.frame.size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextRotateCTM(context, 2*M_PI);
    [selectedImage.layer renderInContext:context];
    imageFile = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // Update imageView with croppedImage
    selectedImage.image = imageFile;

    // Dismiss image picker
    [imagePickerController dismissViewControllerAnimated:YES completion:nil];
}
So let's say I do the same thing but have a method for populating the selectedImage imageView and a method for cropping the image so it would look like the following:
- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info {
    // Create image
    [self setImage:[info objectForKey:UIImagePickerControllerOriginalImage]];

    // Create local image
    UIImage *localImage = [self returnImage];

    // Unhide imageView and populate
    selectedImage.hidden = false;
    [self populateImageView:localImage];

    // Create original image for reservation
    originalImage = localImage;

    // Crop image
    localImage = [self getImageFromContext:localImage withImageView:selectedImage];

    // Update imageView with croppedImage
    [self populateImageView:localImage];

    // Dismiss image picker
    [imagePickerController dismissViewControllerAnimated:YES completion:nil];
}

// Crop image method
- (UIImage *)getImageFromContext:(UIImage *)image withImageView:(UIImageView *)imageView {
    UIGraphicsBeginImageContext(imageView.frame.size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextRotateCTM(context, 2*M_PI);
    [imageView.layer renderInContext:context];
    image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}

- (void)populateImageView:(UIImage *)image {
    selectedImage.image = image;
}

- (void)setImage:(UIImage *)image {
    imageFile = image;
}

- (UIImage *)returnImage {
    return imageFile;
}
So are there any other benefits besides readability and neatness of code? Is there any way to make this more efficient?
There is a great benchmark by Big Nerd Ranch on that topic.
Usually I use properties as a best practice. This is useful because you have:
An expected place where your property will be accessed (getter)
An expected place where your property will be set (setter)
This usually helps in debugging (you can override the setter or set a breakpoint there to check who is changing the property and when it is changing) and you can do some lazy instantiation.
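For instance, a setter overridden purely for debugging might look like this (the property name is illustrative):
- (void)setImageFile:(UIImage *)imageFile
{
    NSLog(@"imageFile is being set to %@", imageFile);
    _imageFile = imageFile; // set a breakpoint here to catch every writer
}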
Usually I do lazy instantiation with arrays or with programmatically created views. For instance:
@property (nonatomic, strong) UIView *myView;

- (UIView *)myView {
    if (!_myView) {
        // I usually prefer a function over macros
        _myView = [[UIView alloc] initWithFrame:[self myViewFrame]];
        _myView.backgroundColor = [UIColor redColor];
    }
    return _myView;
}
Another important factor, pointed out by Jacky Boy, is that with properties you get a free KVO-ready structure.
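A short sketch of what that looks like in practice (observing the myView property from the example above):
// e.g. in viewDidLoad:
[self addObserver:self forKeyPath:@"myView"
          options:NSKeyValueObservingOptionNew context:NULL];

// The observer callback; remember to call removeObserver:forKeyPath: in dealloc.
- (void)observeValueForKeyPath:(NSString *)keyPath ofObject:(id)object
                        change:(NSDictionary *)change context:(void *)context
{
    if ([keyPath isEqualToString:@"myView"]) {
        NSLog(@"myView changed to %@", change[NSKeyValueChangeNewKey]);
    }
}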

How to resize image pixel size programmatically

I am creating an app using Facebook. When I try to upload a photo to Facebook, I get the following message. Any ideas on how to solve it?
"The provided user_generated photo for an OG action must be at least 480px in both dimensions"
I use a function like the following to get an image of any size.
The original image should be bigger than the size you want (though you can also try it with a smaller image).
+ (UIImage *)thumbnailWithImageWithoutScale:(UIImage *)image size:(CGSize)wantSize
{
    UIImage *targetImage;
    if (nil == image) {
        targetImage = nil;
    } else {
        CGSize size = image.size;
        CGRect rect;
        if (wantSize.width/wantSize.height > size.width/size.height) {
            rect.size.width = wantSize.height * size.width / size.height;
            rect.size.height = wantSize.height;
            rect.origin.x = (wantSize.width - rect.size.width) / 2;
            rect.origin.y = 0;
        } else {
            rect.size.width = wantSize.width;
            rect.size.height = wantSize.width * size.height / size.width;
            rect.origin.x = 0;
            rect.origin.y = (wantSize.height - rect.size.height) / 2;
        }
        UIGraphicsBeginImageContext(wantSize);
        CGContextRef context = UIGraphicsGetCurrentContext();
        CGContextSetFillColorWithColor(context, [[UIColor clearColor] CGColor]);
        UIRectFill(CGRectMake(0, 0, wantSize.width, wantSize.height)); // clear background
        [image drawInRect:rect];
        targetImage = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
    }
    return targetImage;
}
You must provide a bigger image, with at least 480px width and height.
Your image is apparently smaller than 480px wide or tall. The problem is either that the original image is too small, or that you're retrieving it incorrectly. You could, theoretically, resize the image to make it bigger, but that would result in pixelation that is probably undesirable.
You should show us how you're retrieving the image. For example, when I want to pick a photo from my library, I'll use the code adapted from Picking an Item from the Photo Library from the Camera Programming Topics for iOS:
UIImagePickerController *mediaUI = [[UIImagePickerController alloc] init];
mediaUI.sourceType = UIImagePickerControllerSourceTypeSavedPhotosAlbum;
// Set this to YES to show controls that let the user trim the image;
// set it to NO if no cropping should be available.
mediaUI.allowsEditing = YES;
mediaUI.delegate = delegate;
And then, you obviously have to implement the didFinishPickingMediaWithInfo:
#pragma mark - UIImagePickerControllerDelegate

- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info
{
    NSString *mediaType = [info objectForKey:UIImagePickerControllerMediaType];
    UIImage *originalImage, *editedImage, *imageToUse;

    // Handle a still image picked from a photo album
    if (CFStringCompare((CFStringRef)mediaType, kUTTypeImage, 0) == kCFCompareEqualTo) {
        editedImage = (UIImage *)[info objectForKey:UIImagePickerControllerEditedImage];
        originalImage = (UIImage *)[info objectForKey:UIImagePickerControllerOriginalImage];
        if (editedImage) {
            imageToUse = editedImage;
        } else {
            imageToUse = originalImage;
        }
        NSLog(@"image size = %@", NSStringFromCGSize(imageToUse.size));
        if (imageToUse.size.width < 480 || imageToUse.size.height < 480)
        {
            [[[UIAlertView alloc] initWithTitle:nil
                                        message:@"Please select image that is at least 480 x 480"
                                       delegate:nil
                              cancelButtonTitle:@"OK"
                              otherButtonTitles:nil] show];
        }
        else
        {
            // do something with imageToUse
        }
    }
    [picker dismissViewControllerAnimated:YES completion:nil];
}
