Xcode 5, iOS 7
I am loading an image into a UIImage and then copying it into another UIImage with a mirror transform, i.e.:
self.imageB.image=[UIImage imageWithCGImage:[self.imageA.image CGImage] scale:1.0 orientation:UIImageOrientationUpMirrored];
Next, I'm trying to combine the two images into one when saving (originalImage refers to the loaded image prior to copying/transforming into imageA and imageB):
newSize=CGSizeMake(originalImage.size.width*2,originalImage.size.height);
UIGraphicsBeginImageContext(newSize);
UIImage *leftImage=self.imageA.image;
UIImage *rightImage=self.imageB.image;
[leftImage drawInRect:CGRectMake(0,0,newSize.width/2,newSize.height)];
[rightImage drawInRect:CGRectMake(newSize.width/2,0,newSize.width/2,newSize.height) blendMode:kCGBlendModeNormal alpha:1.0];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
UIImageWriteToSavedPhotosAlbum(newImage, nil, nil, nil);
This works and gives me a single saved mirrored image.
However, I'm trying to do this using only a sub-region of imageA and imageB,
and the result is that the portion from imageB loses its mirrored transformation.
I end up with both sides of the final image having the same orientation! (Note: visRatio is the fraction of the image I want to keep.)
newSize=CGSizeMake(originalImage.size.width*2,originalImage.size.height);
UIGraphicsBeginImageContext(newSize);
UIImage *leftImage=self.imageA.image;
UIImage *rightImage=self.imageB.image;
CGRect clippedRectA = CGRectMake(0,0,originalImage.size.width*visRatio,originalImage.size.height);
CGImageRef imageRefA = CGImageCreateWithImageInRect([self.imageA.image CGImage], clippedRectA);
UIImage *leftImageA = [UIImage imageWithCGImage:imageRefA];
CGImageRelease(imageRefA);
[leftImageA drawInRect:CGRectMake(0,0,newSize.width/2,newSize.height)];
CGRect clippedRectB = CGRectMake(originalImage.size.width-(originalImage.size.width*visRatio),0,originalImage.size.width*visRatio,originalImage.size.height);
CGImageRef imageRefB = CGImageCreateWithImageInRect([self.imageB.image CGImage], clippedRectB);
UIImage *rightImageB = [UIImage imageWithCGImage:imageRefB];
CGImageRelease(imageRefB);
[rightImageB drawInRect:CGRectMake(newSize.width/2,0,newSize.width/2,newSize.height) blendMode:kCGBlendModeNormal alpha:1.0];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
UIImageWriteToSavedPhotosAlbum(newImage, nil, nil, nil);
It's as though CGImageCreateWithImageInRect copies from the original image data, not the transformed data.
How can I accomplish this without losing the mirrored transformation of imageB?
I figured it out - this will use the original, full-resolution data opened into originalImage and save a clipped, transformed version (I left the mirroring out of this post for simplicity):
//capture the transformed version into a bitmap context
CGSize tmpSize=CGSizeMake(originalImage.size.width, originalImage.size.height);
UIGraphicsBeginImageContext(tmpSize);
[self.imageA.image drawInRect:CGRectMake(0,0, tmpSize.width, tmpSize.height)];
UIImage *tmpImageA=UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
visRatio=0.5;
//clip the rendered (transformed) pixels, not the original image data
clippedRectA=CGRectMake(0,0,roundf(originalImage.size.width*visRatio),originalImage.size.height);
imageRefA=CGImageCreateWithImageInRect([tmpImageA CGImage],clippedRectA);
leftImageA=[UIImage imageWithCGImage:imageRefA];
CGImageRelease(imageRefA);
savedImage=leftImageA;
UIImageWriteToSavedPhotosAlbum(savedImage, nil, nil, nil);
I believe the key is to draw the image into a context first, then crop and save from that context instead of from the main image (imageA).
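For instance, the same render-then-crop idea can be wrapped in a small helper (just a sketch; the method name and signature are mine, not from the code above):
// Sketch: render the (possibly transformed) UIImage into a bitmap context first,
// then crop the rendered pixels so any orientation/mirroring is preserved.
- (UIImage *)renderAndCropImage:(UIImage *)image toRect:(CGRect)cropRect
{
    UIGraphicsBeginImageContext(image.size);
    [image drawInRect:CGRectMake(0, 0, image.size.width, image.size.height)];
    UIImage *rendered = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    CGImageRef croppedRef = CGImageCreateWithImageInRect(rendered.CGImage, cropRect);
    UIImage *cropped = [UIImage imageWithCGImage:croppedRef];
    CGImageRelease(croppedRef);
    return cropped;
}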
If you find a more efficient way, please post it!
Otherwise, I hope this helps someone else....
I want to crop a UIImage (not a UIImageView) before doing pixel operations on it. Is there a reliable way to do this with the iOS frameworks?
Here are the related Android methods I am using:
http://developer.android.com/reference/android/graphics/Bitmap.html#createBitmap(android.graphics.Bitmap, int, int, int, int)
http://developer.android.com/reference/android/graphics/Bitmap.html#createBitmap(int, int, android.graphics.Bitmap.Config)
-(UIImage*)scaleToSize:(CGSize)size image:(UIImage*)image
{
UIGraphicsBeginImageContext(size);
// Draw the scaled image in the current context
[image drawInRect:CGRectMake(0, 0, size.width, size.height)];
// Create a new image from current context
UIImage* scaledImage = UIGraphicsGetImageFromCurrentImageContext();
// Pop the current context from the stack
UIGraphicsEndImageContext();
// Return our new scaled image
return scaledImage;
}
I would advise you to look at:
The Core Animation framework, to create an image context, draw into it, and keep the result as a UIImage in memory or save it to disk.
The Core Image framework, if you want a powerful and fully customizable image-manipulation solution (a minimal cropping sketch follows this list).
A third-party library to crop and resize images: CocoaPods is a great way to quickly integrate that kind of library… Here is a list of some interesting image-manipulation pods.
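For the Core Image route, a minimal cropping sketch, assuming a UIImage named sourceImage and a crop rectangle cropRect already exist (both names are placeholders):
// Minimal Core Image sketch: sourceImage (UIImage) and cropRect (CGRect, in pixels)
// are assumed placeholders. Note that Core Image uses a bottom-left origin, so the
// crop rect may need flipping relative to UIKit coordinates.
CIImage *ciInput = [CIImage imageWithCGImage:sourceImage.CGImage];
CIImage *ciCropped = [ciInput imageByCroppingToRect:cropRect];

// Render the cropped CIImage back into a CGImage / UIImage
CIContext *ciContext = [CIContext contextWithOptions:nil];
CGImageRef cgCropped = [ciContext createCGImage:ciCropped fromRect:[ciCropped extent]];
UIImage *croppedImage = [UIImage imageWithCGImage:cgCropped];
CGImageRelease(cgCropped);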
This worked for me:
-(UIImage *)cropCanvas:(UIImage *)input x1:(int)x1 y1:(int)y1 x2:(int)x2 y2:(int)y2{
// x1/y1 are insets from the left/top edges; x2/y2 are insets from the right/bottom edges
CGRect rect = CGRectMake(x1, y1, input.size.width-x2-x1, input.size.height-y2-y1);
CGImageRef imageref = CGImageCreateWithImageInRect([input CGImage], rect);
UIImage *img = [UIImage imageWithCGImage:imageref];
CGImageRelease(imageref);
return img;
}
I'm working on an augmented reality app for iPhone and I'm using the sample code "ImageTargets" from the Vuforia SDK. I'm using my own images as templates and my own model to augment the scene (just a few vertices in OpenGL). The next thing I want to do is save the scene to the camera roll when a button is pushed. I created the button as well as the method the button responds to. Here comes the tricky part: when I press the button the method gets called and the image is saved properly, but it is completely white, showing only the button icon (like this http://tinypic.com/r/16c2kjq/5).
- (void)saveImage {
UIGraphicsBeginImageContext(self.view.layer.frame.size);
[self.view.layer renderInContext: UIGraphicsGetCurrentContext()];
UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
UIImageWriteToSavedPhotosAlbum(viewImage, self,
@selector(image:didFinishSavingWithError:contextInfo:), nil);
}
- (void)image: (UIImage *)image didFinishSavingWithError:(NSError *)error
contextInfo: (void *) contextInfo {
NSLog(@"Image Saved");
}
I have these two methods in the ImageTargetsParentViewController class, but I also tried saving the view from ARParentViewController (and even moved the methods to that class). Has anyone found a solution to this? I'm not sure which view to save and/or whether there are any tricky parts to saving a view that contains OpenGL ES content. Thanks for any reply.
Try this code to save the photo:
- (void)saveImage {
UIGraphicsBeginImageContext(self.view.bounds.size);
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *imagee = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
CGRect rect;
rect = CGRectMake(0, 0, 320, 480);
CGImageRef imageRef = CGImageCreateWithImageInRect([imagee CGImage], rect);
UIImage *img = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);
UIImageWriteToSavedPhotosAlbum(img, nil, nil, nil);
}
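Note that renderInContext: only captures Core Animation content; it cannot see the OpenGL ES framebuffer, which is why the saved image comes out white. A rough sketch along the lines of Apple's Technical Q&A QA1704, reading the GL framebuffer directly (framebufferWidth and framebufferHeight are assumptions for your color buffer's size in pixels, and this must run while your GL context is current, after rendering but before the buffer is presented):
// Sketch only: read the currently bound OpenGL ES framebuffer into a UIImage.
// framebufferWidth/framebufferHeight are assumed placeholders for your color buffer size.
- (UIImage *)snapshotOfGLScene
{
    GLint width  = framebufferWidth;
    GLint height = framebufferHeight;
    NSInteger dataLength = width * height * 4;
    GLubyte *data = (GLubyte *)malloc(dataLength * sizeof(GLubyte));

    // Read the raw pixels out of the framebuffer
    glPixelStorei(GL_PACK_ALIGNMENT, 4);
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, data);

    // Wrap the raw pixels in a CGImage
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, data, dataLength, NULL);
    CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
    CGImageRef iref = CGImageCreate(width, height, 8, 32, width * 4, colorspace,
                                    kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast,
                                    provider, NULL, true, kCGRenderingIntentDefault);

    // GL's origin is bottom-left; drawing into a UIKit image context flips it back upright
    UIGraphicsBeginImageContext(CGSizeMake(width, height));
    CGContextRef cgcontext = UIGraphicsGetCurrentContext();
    CGContextSetBlendMode(cgcontext, kCGBlendModeCopy);
    CGContextDrawImage(cgcontext, CGRectMake(0, 0, width, height), iref);
    UIImage *snapshot = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    CGImageRelease(iref);
    CGColorSpaceRelease(colorspace);
    CGDataProviderRelease(provider);
    free(data);
    return snapshot;
}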
Yep, as the title says, I need to take a cropped snapshot of my app.
I want to cut off the top of the screenshot a little bit (20%). I already have code that takes a snapshot and sends it to Facebook, and it works, but it captures the whole screen. How can I tell my code to ignore the top 20% of the screen, maybe using the height and width? I looked at some questions on Stack Overflow and managed to slide my screenshot so I got rid of the unwanted part at the top, but then a huge white area appeared at the bottom, so that didn't solve my problem.
Here is my snapshot code
UIGraphicsBeginImageContext(self.ekran.bounds.size);
[self.ekran.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *resultingImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
A method to crop the image that accepts any rect to crop the image against:
- (UIImage *)cropImage:(UIImage *)imageToCrop toRect:(CGRect)rect
{
CGImageRef imageRef = CGImageCreateWithImageInRect([imageToCrop CGImage], rect);
UIImage *cropped = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);
return cropped;
}
Use it as follows:
UIGraphicsBeginImageContext(self.ekran.bounds.size);
[self.ekran.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *resultingImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
CGFloat imgHeight = resultingImage.size.height;
CGFloat imgWidth  = resultingImage.size.width;
// Create a frame that crops away the top 20% of the image (keeps the bottom 80%)
CGRect imageFrame = CGRectMake(0, imgHeight * 0.2, imgWidth, imgHeight * 0.8);
resultingImage = [self cropImage:resultingImage toRect:imageFrame];
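A possible refinement, sketched under the assumption that you want a Retina-resolution screenshot: if you capture at the screen's scale with UIGraphicsBeginImageContextWithOptions, convert the crop rect from points to pixels, since CGImageCreateWithImageInRect operates on the underlying pixel data.
// Optional sketch: capture at the screen's native scale, then convert the crop
// rect from points to pixels before cropping.
UIGraphicsBeginImageContextWithOptions(self.ekran.bounds.size, NO, 0.0);
[self.ekran.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *scaledShot = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

CGFloat s = scaledShot.scale;
CGRect pixelFrame = CGRectMake(imageFrame.origin.x * s, imageFrame.origin.y * s,
                               imageFrame.size.width * s, imageFrame.size.height * s);
resultingImage = [self cropImage:scaledShot toRect:pixelFrame];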
I want to create an OCR application which allows the user to choose the specific area on which to apply the processing.
As of now, I am able to capture the entire image using AVFoundation. However, my current goal is to use an overlay of some dimensions and capture only what is inside it. So rather than the entire image being captured, I want only the part inside the overlay to be captured and used.
+ (UIImage *)imageByCropping:(UIImage *)imageToCrop toRect:(CGRect)rect
{
CGImageRef imageRef = CGImageCreateWithImageInRect([imageToCrop CGImage], rect);
UIImage *croppedImage = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);
return croppedImage;
}
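To capture only the overlay region, one approach (a sketch; overlayView, previewView, and capturedImage are assumed names for your overlay, the camera preview view, and the full AVFoundation image) is to map the overlay's on-screen rect into the captured image's pixel coordinates before cropping:
// Sketch: map the overlay's rect from preview coordinates to image pixels.
// Assumes the preview fills previewView edge to edge; adjust for your videoGravity
// (letterboxing/cropping) if it does not.
CGRect overlayInPreview = [overlayView.superview convertRect:overlayView.frame
                                                      toView:previewView];

CGFloat scaleX = capturedImage.size.width  / previewView.bounds.size.width;
CGFloat scaleY = capturedImage.size.height / previewView.bounds.size.height;
CGRect cropRect = CGRectMake(overlayInPreview.origin.x * scaleX,
                             overlayInPreview.origin.y * scaleY,
                             overlayInPreview.size.width  * scaleX,
                             overlayInPreview.size.height * scaleY);

UIImage *overlayImage = [[self class] imageByCropping:capturedImage toRect:cropRect];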
OK, using Core Graphics I'm building up an image which will later be used in a CGContextClipToMask operation. It looks something like the following:
UIImage *eyes = [UIImage imageNamed:@"eyes"];
UIImage *mouth = [UIImage imageNamed:@"mouth"];
UIGraphicsBeginImageContext(CGSizeMake(150, 150));
CGContextRef context = UIGraphicsGetCurrentContext();
// Fill the whole 150x150 context with opaque black
CGRect bounds = CGRectMake(0, 0, 150, 150);
CGContextSetRGBFillColor(context, 0, 0, 0, 1);
CGContextFillRect(context, bounds);
[eyes drawInRect:bounds blendMode:kCGBlendModeMultiply alpha:1];
[mouth drawInRect:bounds blendMode:kCGBlendModeMultiply alpha:1];
// how can i now get a CGImageRef here to use in a masking operation?
UIGraphicsEndImageContext();
Now, as you can see from the comment, I'm wondering how I'm actually going to USE the image I've built up. The reason I'm using Core Graphics here, and not just building up a UIImage, is that the transparency I'm creating is very important. If I just grab a UIImage from the context, when it's used as a mask it will just apply to everything... Further to the point, will I have any problems using a partially transparent mask with this method?
CGImageRef result = CGBitmapContextCreateImage(UIGraphicsGetCurrentContext());
You can call the UIGraphicsGetImageFromCurrentImageContext function, which will return a UIImage object. You can hold onto and use the UIImage, or ask it for its CGImage.
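And a short sketch of actually applying the mask once you have the CGImageRef (targetContext and destinationRect are placeholders for wherever you are drawing):
// Grab the mask while the image context is still current, then end it
CGImageRef maskRef = CGBitmapContextCreateImage(UIGraphicsGetCurrentContext());
UIGraphicsEndImageContext();

// Clip another drawing context with the mask; targetContext and destinationRect
// are assumptions about where you are drawing.
CGContextSaveGState(targetContext);
CGContextClipToMask(targetContext, destinationRect, maskRef);
// Everything drawn here is now limited by the mask
CGContextRestoreGState(targetContext);
CGImageRelease(maskRef);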