I am creating an app that uses Facebook. When I try to upload a photo to Facebook I get the following message. Any ideas how to solve it?
"The provided user_generated photo for an OG action must be at least 480px in both dimensions"
I use a function like the following to get an image of any size. The original image should be larger than the size you want (you can also try it with a slightly smaller image).
+ (UIImage *)thumbnailWithImageWithoutScale:(UIImage *)image size:(CGSize)wantSize
{
    UIImage *targetImage;
    if (nil == image) {
        targetImage = nil;
    } else {
        CGSize size = image.size;
        CGRect rect;
        // Aspect-fit: scale the image to fit inside wantSize, keep its aspect
        // ratio, and center it on a transparent canvas of wantSize.
        if (wantSize.width / wantSize.height > size.width / size.height) {
            rect.size.width = wantSize.height * size.width / size.height;
            rect.size.height = wantSize.height;
            rect.origin.x = (wantSize.width - rect.size.width) / 2;
            rect.origin.y = 0;
        } else {
            rect.size.width = wantSize.width;
            rect.size.height = wantSize.width * size.height / size.width;
            rect.origin.x = 0;
            rect.origin.y = (wantSize.height - rect.size.height) / 2;
        }
        UIGraphicsBeginImageContext(wantSize);
        CGContextRef context = UIGraphicsGetCurrentContext();
        CGContextSetFillColorWithColor(context, [[UIColor clearColor] CGColor]);
        UIRectFill(CGRectMake(0, 0, wantSize.width, wantSize.height)); // clear background
        [image drawInRect:rect];
        targetImage = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
    }
    return targetImage;
}
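A hypothetical call site might look like this (ImageUtils is a placeholder for whatever class hosts the method, and pickedImage for the photo you want to post):
// Build a 480x480 version of the picked photo before attaching it to the OG action.
UIImage *squared = [ImageUtils thumbnailWithImageWithoutScale:pickedImage
                                                         size:CGSizeMake(480, 480)];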
You must provide a bigger image, with at least 480px width and height.
Your image is apparently smaller than 480px wide or tall. The problem is either that the original image is too small, or that you're retrieving it incorrectly. You could, theoretically, resize the image to make it bigger, but that will result in pixelation that is probably undesirable.
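If you did decide to scale a too-small image up despite the quality loss, a minimal sketch might look like this (the 480-point target comes from the error message; the helper name is mine):
// Hypothetical helper: upscale an image so both dimensions are at least minSide points.
// Note: enlarging a small photo will look pixelated, as discussed above.
UIImage *MYUpscaleToMinimumSide(UIImage *image, CGFloat minSide)
{
    CGFloat scale = MAX(minSide / image.size.width, minSide / image.size.height);
    if (scale <= 1.0) {
        return image; // already big enough
    }
    CGSize newSize = CGSizeMake(image.size.width * scale, image.size.height * scale);
    UIGraphicsBeginImageContextWithOptions(newSize, NO, 1.0);
    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return result;
}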
You should show us how you're retrieving the image. For example, when I want to pick a photo from my library, I use code adapted from "Picking an Item from the Photo Library" in the Camera Programming Topics for iOS guide:
UIImagePickerController *mediaUI = [[UIImagePickerController alloc] init];
mediaUI.sourceType = UIImagePickerControllerSourceTypeSavedPhotosAlbum;
// Set allowsEditing to YES to show controls that let the user trim/crop the image;
// set it to NO if no cropping should be offered.
mediaUI.allowsEditing = YES;
mediaUI.delegate = delegate;
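The snippet above only configures the picker; presenting it would look something like this (assuming the code runs in a view controller, and that delegate conforms to the picker's delegate protocols):
// Present the configured picker modally from the current view controller.
[self presentViewController:mediaUI animated:YES completion:nil];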
And then you obviously have to implement imagePickerController:didFinishPickingMediaWithInfo::
#pragma mark - UIImagePickerControllerDelegate

// Requires: #import <MobileCoreServices/MobileCoreServices.h> (for kUTTypeImage)

- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info
{
    NSString *mediaType = [info objectForKey:UIImagePickerControllerMediaType];
    UIImage *originalImage, *editedImage, *imageToUse;

    // Handle a still image picked from a photo album
    if (CFStringCompare((__bridge CFStringRef)mediaType, kUTTypeImage, 0) == kCFCompareEqualTo) {
        editedImage   = (UIImage *)[info objectForKey:UIImagePickerControllerEditedImage];
        originalImage = (UIImage *)[info objectForKey:UIImagePickerControllerOriginalImage];

        if (editedImage) {
            imageToUse = editedImage;
        } else {
            imageToUse = originalImage;
        }

        NSLog(@"image size = %@", NSStringFromCGSize(imageToUse.size));

        if (imageToUse.size.width < 480 || imageToUse.size.height < 480) {
            [[[UIAlertView alloc] initWithTitle:nil
                                        message:@"Please select an image that is at least 480 x 480"
                                       delegate:nil
                              cancelButtonTitle:@"OK"
                              otherButtonTitles:nil] show];
        } else {
            // do something with imageToUse
        }
    }

    [picker dismissViewControllerAnimated:YES completion:nil];
}
I am using AVCapture to capture images from the camera. Everything works fine except for this issue:
I need the final captured image to match exactly what is visible in the camera preview, but the captured image shows a larger area than what is visible. How can I get the final stillImageOutput to match the visible preview?
Any help would be highly appreciated.
Use your own view/image view in place of contentScrollview below. This renders the view hierarchy and gives you an image.
For reference: https://charangiri.wordpress.com/2014/09/18/how-to-render-screen-taking-screen-shot-programmatically/
- (UIImage *)createScreenshotOfCompleteScreen
{
    UIImage *image = nil;
    UIGraphicsBeginImageContext(contentScrollview.contentSize);
    {
        // Temporarily reset the scroll view so its full content is drawn
        CGPoint savedContentOffset = contentScrollview.contentOffset;
        CGRect savedFrame = contentScrollview.frame;
        contentScrollview.contentOffset = CGPointZero;
        contentScrollview.frame = CGRectMake(0, 0, contentScrollview.contentSize.width, contentScrollview.contentSize.height);

        // versionofiOS is a custom NSString category from the linked article,
        // not a standard API; it returns the iOS version as a string.
        if ([[NSString versionofiOS] intValue] >= 7) {
            [contentScrollview drawViewHierarchyInRect:contentScrollview.bounds afterScreenUpdates:YES];
        } else {
            [contentScrollview.layer renderInContext:UIGraphicsGetCurrentContext()];
        }

        image = UIGraphicsGetImageFromCurrentImageContext();

        // Restore the scroll view's original state
        contentScrollview.contentOffset = savedContentOffset;
        contentScrollview.frame = savedFrame;
    }
    UIGraphicsEndImageContext();
    return image;
}
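If you would rather not depend on that category, a common alternative (my assumption, not part of the linked article) is to check the system version directly:
// Alternative iOS version check without the custom NSString category.
BOOL isIOS7OrLater = [[[UIDevice currentDevice] systemVersion] floatValue] >= 7.0;
if (isIOS7OrLater) {
    [contentScrollview drawViewHierarchyInRect:contentScrollview.bounds afterScreenUpdates:YES];
} else {
    [contentScrollview.layer renderInContext:UIGraphicsGetCurrentContext()];
}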
I am new to Xcode. I want to select a photo from the photo library.
- (void)ChoosePhoto_from_Album
{
    UIImagePickerController *image_picker = [[UIImagePickerController alloc] init];
    image_picker.delegate = self;
    image_picker.allowsEditing = YES;
    image_picker.sourceType = UIImagePickerControllerSourceTypePhotoLibrary;
    [self presentViewController:image_picker animated:YES completion:NULL];
}

- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info
{
    UIImage *image = [info objectForKey:UIImagePickerControllerOriginalImage];
    NSLog(@"ImagePicker image size = (%.0f x %.0f)", image.size.width, image.size.height);
    // process image...
}
The photo I selected is 5 MP (2592x1936). However, the returned size is 960x716.
What am I missing?
Thanks!
I found a link about this. You can use the Three20 framework; it contains the code you need.
Here is your solution.
UIImagePickerController does give you the full-size camera image. You need to get it from the editingInfo parameter of the imagePickerController:didFinishPickingImage:editingInfo: method.
// Note: this delegate method is deprecated in favor of
// imagePickerController:didFinishPickingMediaWithInfo:, but the same keys apply.
- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingImage:(UIImage *)image editingInfo:(NSDictionary *)editingInfo
{
    CGImageRef ref = image.CGImage;
    int width = (int)CGImageGetWidth(ref);
    int height = (int)CGImageGetHeight(ref);
    NSLog(@"image size = %d x %d", width, height);

    UIImage *orig = [editingInfo objectForKey:UIImagePickerControllerOriginalImage];
    ref = orig.CGImage;
    width = (int)CGImageGetWidth(ref);
    height = (int)CGImageGetHeight(ref);
    NSLog(@"orig image size = %d x %d", width, height);

    CGRect origRect;
    [[editingInfo objectForKey:UIImagePickerControllerCropRect] getValue:&origRect];
    NSLog(@"Crop rect = %f %f %f %f", origRect.origin.x, origRect.origin.y, origRect.size.width, origRect.size.height);
}
I am currently using the default UIImagePickerController, and immediately after a picture is taken, the default confirmation screen is shown.
My question is: how can I use my own custom UIViewController to view the resulting image (and therefore bypass this confirmation screen)?
Please note that I am not interested in using a custom overlay for the UIImagePicker with custom camera controls or images from the photo gallery; I just want to skip this screen and assume that the photo taken is what the user wanted.
Thanks!
Try this code to keep your resulting image at a consistent size.
First, create an IBOutlet for a UIImageView.
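A declaration like the following would match the imageview references in the code below (whether you use weak or strong depends on your view hierarchy):
// Hypothetical outlet declaration matching the code below.
@property (strong, nonatomic) IBOutlet UIImageView *imageview;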
- (void)imagePickerController:(UIImagePickerController *)picker
        didFinishPickingMediaWithInfo:(NSDictionary *)info
{
    NSString *mediaType = info[UIImagePickerControllerMediaType];
    [self dismissViewControllerAnimated:YES completion:nil];

    if ([mediaType isEqualToString:(NSString *)kUTTypeImage]) {
        OriginalImage = info[UIImagePickerControllerOriginalImage];
        image = info[UIImagePickerControllerOriginalImage];
        //----------
        imageview.image = image; // additional method
        [self resizeImage];
        //---------
        if (_newMedia) {
            UIImageWriteToSavedPhotosAlbum(image, self, @selector(image:finishedSavingWithError:contextInfo:), nil);
        }
    }
    else if ([mediaType isEqualToString:(NSString *)kUTTypeMovie]) {
        // Code here to support video if enabled
    }
}
//---- Resize the original image ----------------------
- (void)resizeImage
{
    UIImage *resizeImage = imageview.image;
    float width = 320;
    float height = 320;

    CGSize newSize = CGSizeMake(320, 320);
    UIGraphicsBeginImageContextWithOptions(newSize, NO, 0.0);

    CGRect rect = CGRectMake(0, 0, width, height);
    float widthRatio = resizeImage.size.width / width;
    float heightRatio = resizeImage.size.height / height;
    float divisor = widthRatio > heightRatio ? widthRatio : heightRatio;
    width = resizeImage.size.width / divisor;
    height = resizeImage.size.height / divisor;
    rect.size.width = width;
    rect.size.height = height;

    // indent in case of width or height difference
    float offset = (width - height) / 2;
    if (offset > 0) {
        rect.origin.y = offset;
    } else {
        rect.origin.x = -offset;
    }

    [resizeImage drawInRect:rect];

    UIImage *smallImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    imageview.image = smallImage;
    imageview.contentMode = UIViewContentModeScaleAspectFit;
}
You can customize the image picker view controller using an overlay view.
Read the documentation here.
You set up a new view, set it as the camera overlay, tell the image picker to not display its own controls and display the picker.
Inside the code of your overlay, you call takePicture on the image picker view controller to finally take the image when you are ready.
Apple has an example of this here:
https://developer.apple.com/library/ios/samplecode/PhotoPicker/Introduction/Intro.html#//apple_ref/doc/uid/DTS40010196-Intro-DontLinkElementID_2
For transforms, you can use the cameraViewTransform property to set transformations to be used when taking the image.
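A minimal sketch of that setup might look like this (the overlay view and the shutter-button hookup are placeholders, not part of Apple's sample):
// Configure a camera picker with a custom overlay and no default controls.
UIImagePickerController *picker = [[UIImagePickerController alloc] init];
picker.sourceType = UIImagePickerControllerSourceTypeCamera;
picker.showsCameraControls = NO;   // hide the default shutter/retake UI
picker.delegate = self;

UIView *overlay = [[UIView alloc] initWithFrame:picker.view.bounds]; // your custom controls go here
picker.cameraOverlayView = overlay;

// Optionally transform the live preview, e.g. scale it.
picker.cameraViewTransform = CGAffineTransformMakeScale(1.0, 1.0);

[self presentViewController:picker animated:YES completion:nil];

// Later, from your overlay's shutter button action:
// [picker takePicture];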
After taking the picture, the delegate method would be called:
- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info
In this method, you dismiss the imagePickerController. You have access to the image, so you can play around with the image, possibly display it in a UIImageView. So the code would be something like this:
- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info
{
    [self dismissViewControllerAnimated:YES completion:nil];

    if (picker.sourceType == UIImagePickerControllerSourceTypeCamera) {
        // Here you have access to the image
        UIImage *pickedImage = [info objectForKey:UIImagePickerControllerOriginalImage];
        // Create an instance of a view controller and show the image in a UIImageView,
        // or do whatever your business logic requires.
    }
}
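For example, a hypothetical hand-off to your own view controller could look like this (MyPhotoPreviewViewController and its image property are placeholders for whatever you build):
// Hypothetical: present your own controller to show the just-taken photo,
// replacing the default confirmation screen. In practice you may want to do
// this from the dismissal's completion block.
MyPhotoPreviewViewController *preview = [[MyPhotoPreviewViewController alloc] init];
preview.image = pickedImage; // assumed UIImage property on the custom controller
[self presentViewController:preview animated:YES completion:nil];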
Hey guys I'm doing some image editing with UIImagePickerController. Here is some code in imagePickerController:didFinishPickingMediaWithInfo:
UIImage *editedImg = [info objectForKey:UIImagePickerControllerEditedImage];
UIImageView *imgView = [[UIImageView alloc] initWithImage:editedImg];
CGRect imgFrm = imgView.frame;
float rate = imgFrm.size.height / imgFrm.size.width;
imgFrm.size.width = size;
imgFrm.size.height = size * rate;
imgFrm.origin.x = 0;
imgFrm.origin.y = (size - imgFrm.size.height) / 2;
[imgView setFrame:imgFrm];
UIView *cropView = [[UIView alloc] initWithFrame:CGRectMake(0, 0, size, size)];
[cropView setBackgroundColor:[UIColor blackColor]];
[cropView addSubview:imgView];
UIImage *croppedImg = [MyUtil createUIImageFromUIView:cropView];
The code above places the image in a size x size view and draws an image from that view when the height of the image returned by the picker is smaller than size.
Here is the code for createUIImageFromUIView::
+ (UIImage *)createUIImageFromUIView:(UIView *)view
{
    UIGraphicsBeginImageContextWithOptions(view.frame.size, NO, 2.0);
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    [view.layer renderInContext:ctx];
    UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return viewImage;
}
My problem: while debugging, editedImg (defined on the first line) shows as nil. But the code works fine: I get cropView (which also shows as nil), obtain the cropped image, and can encode it to a base64 string for sending to the server. I just want to know why editedImg shows nil (it is returned by [info objectForKey:UIImagePickerControllerEditedImage], yet when I print info in debug mode, the output in the console is not nil).
If editedImg comes back nil, try:
UIImage *editedImg = [info objectForKey:UIImagePickerControllerOriginalImage];
Get the file size:
- (long long)fileSizeAtPath:(NSString *)filePath
{
    NSFileManager *manager = [NSFileManager defaultManager];
    if ([manager fileExistsAtPath:filePath]) {
        return [[manager attributesOfItemAtPath:filePath error:nil] fileSize];
    }
    return 0;
}
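For example (the file name below is a placeholder):
// Hypothetical usage: log the size of a file in the app's Documents directory.
NSString *docs = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) firstObject];
NSString *path = [docs stringByAppendingPathComponent:@"photo.png"]; // placeholder file name
NSLog(@"file size = %lld bytes", [self fileSizeAtPath:path]);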
Best wishes!
After some searching I accidentally found this: string value always shows nil in objective-c.
This is why I always see 'nil' in debug mode even though the code works fine.
You can get your cropped image's data like this:
UIImage *croppedImg = [MyUtil createUIImageFromUIView:cropView];
NSData *dataForImage = UIImagePNGRepresentation(croppedImg);
Now you can check the length:
if (dataForImage.length)
{
}
Sorry if this is a beginner question, but I was wondering what the benefits are of using setter and getter methods rather than manipulating ivars directly. I'm in Objective-C and I'm wondering if there are any benefits in terms of memory/CPU usage.
For instance, I'm cropping an image before I upload it, and I crop it after it is taken/picked. I put all my code in the didFinishPickingMediaWithInfo: method, so it looks like the following:
- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info
{
    //Create image
    imageFile = [info objectForKey:UIImagePickerControllerOriginalImage];

    //Unhide imageView and populate
    selectedImage.hidden = false;
    selectedImage.image = imageFile;

    //Create original image for reservation
    originalImage = imageFile;

    //Crop image
    UIGraphicsBeginImageContext(selectedImage.frame.size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextRotateCTM(context, 2*M_PI);
    [selectedImage.layer renderInContext:context];
    imageFile = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    //Update imageView with croppedImage
    selectedImage.image = imageFile;

    //Dismiss image picker
    [imagePickerController dismissViewControllerAnimated:YES completion:nil];
}
So let's say I do the same thing but have a method for populating the selectedImage imageView and a method for cropping the image so it would look like the following:
- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info
{
    //Create image
    [self setImage:[info objectForKey:UIImagePickerControllerOriginalImage]];

    //Create local image
    UIImage *localImage = [self returnImage];

    //Unhide imageView and populate
    selectedImage.hidden = false;
    [self populateImageView:localImage];

    //Create original image for reservation
    originalImage = localImage;

    //Crop image
    localImage = [self getImageFromContext:localImage withImageView:selectedImage];

    //Update imageView with croppedImage
    [self populateImageView:localImage];

    //Dismiss image picker
    [imagePickerController dismissViewControllerAnimated:YES completion:nil];
}
//Crop image method
- (UIImage *)getImageFromContext:(UIImage *)image withImageView:(UIImageView *)imageView
{
    UIGraphicsBeginImageContext(imageView.frame.size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextRotateCTM(context, 2*M_PI);
    [imageView.layer renderInContext:context];
    image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
- (void)populateImageView:(UIImage *)image
{
    selectedImage.image = image;
}

- (void)setImage:(UIImage *)image
{
    imageFile = image;
}

- (UIImage *)returnImage
{
    return imageFile;
}
So are there any benefits other than readability and neatness of code? Is there any way to make this more efficient?
You have a great benchmark made by Big Nerd Ranch on that topic.
Usually I use properties as a best practice. This is useful because you have:
An expected place where your property will be accessed (getter)
An expected place where your property will be set (setter)
This usually helps in debugging (you can override the setter or set a breakpoint there to check who is changing the property and when it is changing) and you can do some lazy instantiation.
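For instance, a minimal sketch of overriding a setter just to observe changes might look like this (the property name is hypothetical):
// Hypothetical property; overriding the setter gives one place to log or break
// whenever the value changes.
@property (nonatomic, strong) UIImage *imageFile;

- (void)setImageFile:(UIImage *)imageFile {
    NSLog(@"imageFile changing from %@ to %@", _imageFile, imageFile);
    _imageFile = imageFile; // set a breakpoint here to see every writer
}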
Usually I do lazy instantiation with arrays or with programmatically created views. For instance:
@property (nonatomic, strong) UIView *myView;

- (UIView *)myView {
    if (!_myView) {
        // I usually prefer a function over macros
        _myView = [[UIView alloc] initWithFrame:[self myViewFrame]];
        _myView.backgroundColor = [UIColor redColor];
    }
    return _myView;
}
Another important point, highlighted by Jacky Boy, is that with properties you get a KVO-ready structure for free.
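A minimal sketch of using that KVO readiness, assuming an imageFile property like the one above (the registration would typically live somewhere like viewDidLoad):
// Register to observe the property; every assignment through the setter fires the callback.
[self addObserver:self
       forKeyPath:@"imageFile"
          options:NSKeyValueObservingOptionNew
          context:NULL];

// KVO callback.
- (void)observeValueForKeyPath:(NSString *)keyPath
                      ofObject:(id)object
                        change:(NSDictionary *)change
                       context:(void *)context
{
    NSLog(@"%@ changed to %@", keyPath, change[NSKeyValueChangeNewKey]);
}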