I am trying to combine two UIImageViews and save them as one image. The top-most image is a static "frame", and the lower image is rotatable / scalable.
My issue is that the photo needs to be saved as 640 x 960, but the actual view the two images sit on is 320 x 480 (so it displays correctly on the user's screen). When the two images are combined, they are saved onto a 640 x 960 canvas, yet the two images themselves come out at 320 x 480 (as seen in the image example below).
Here is the code I am currently using, which produces the wrong result:
CGSize deviceSpec;
if (IDIOM == IPAD) { deviceSpec = CGSizeMake(768, 1024); } else { deviceSpec = CGSizeMake(640, 960); }
UIGraphicsBeginImageContext(deviceSpec);
UIView *rendered = [[UIView alloc] init];
[[rendered layer] setFrame:CGRectMake(0, 0, deviceSpec.width, deviceSpec.height)];
[[rendered layer] addSublayer:[self.view layer]];
[[rendered layer] renderInContext:UIGraphicsGetCurrentContext()];
UIImage *draft = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
AssetsLibrary
I should also mention that I am saving the image using the following:
ALAssetsLibrary * library = [[[ALAssetsLibrary alloc] init] autorelease];
[library writeImageToSavedPhotosAlbum: draft.CGImage orientation:ALAssetOrientationUp
completionBlock: ^(NSURL *assetURL, NSError *error)
{
if (error) {
NSLog(#"ALAssetLibrary error - %#", error);
} else {
NSLog(#"Image saved: %#", assetURL);
}
}];
The output
Note: the entire white area here is 640 x 960, whereas the combined photo inside it is only 320 x 480, which is the size of the original layer's frame.
To support different scales, I used the following code:
// If you specify a value of 0.0, the scale factor is set to the scale factor of the device’s main screen.
UIGraphicsBeginImageContextWithOptions(compositeView.frame.size, compositeView.opaque, scale);
[compositeView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
So you need to replace UIGraphicsBeginImageContext with UIGraphicsBeginImageContextWithOptions.
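Applied to the snippet from the question, a minimal sketch might look like this (names reused from the question; this renders self.view directly rather than re-parenting its layer, which is an assumption on my part):
// Passing 0.0 as the scale uses the main screen's scale factor (2.0 on Retina),
// so a 320 x 480 point view is rendered into a 640 x 960 pixel bitmap.
UIGraphicsBeginImageContextWithOptions(self.view.bounds.size, self.view.opaque, 0.0);
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *draft = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();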
Related
I start out with an imageView.image (a photo).
I submit (POST) the imageView.image to remote service (Microsoft face detection) for processing.
Remote service returns JSON of CGRect's for each detected face on the image.
I feed the JSON into my UIView to draw the rectangles. I initialize my UIView with a frame of {0, 0, imageView.image.size.width, imageView.image.size.height}. <-- my thinking: a frame equivalent to the size of the imageView.image
Add my UIView as a subview of self.imageView OR self.view (tried both)
End Result:
Rectangles are drawn, but they do not appear correctly on the imageView.image. That is, the CGRects generated for each of the detected faces are supposed to be relative to the image's coordinate space, as returned by the remote service, but they appear offset once I add my custom view.
I believe I may have a scaling issue of some sort: if I divide each value in the CGRects by 2 (as a test) I can get an approximation, but it is still off. The Microsoft documentation states that the detected faces are returned with rectangles indicating the location of the faces in the image, in pixels. Yet aren't they being treated as points when drawing my path?
Also, shouldn't I be initializing my view with a frame equivalent to the imageView.image's frame so that the view uses the same coordinate space as the submitted image?
Here is a screenshot example of what it looks like if I try to scale down each CGRect by dividing it by 2.
I am new to iOS and broke away from the books to work on this as a self exercise. I can provide more code as needed. Thanks in advance for your insight!
EDIT 1
I add a subview for each rectangle as I iterate over an array of face attributes (which include the rectangle for each face) via the following method, which gets called during - (void)viewDidAppear:(BOOL)animated:
- (void)buildFaceRects {
// build an array of CGRect dicts off of the JSON returned from the analyzed image
NSMutableArray *array = [self analizeImage:self.imageView.image];
// enumerate over array using block - each obj in array represents one face
[array enumerateObjectsUsingBlock:^(id obj, NSUInteger idx, BOOL *stop) {
// build dictionary of rects and attributes for the face
NSDictionary *json = [NSDictionary dictionaryWithObjectsAndKeys:obj[@"attributes"], @"attributes", obj[@"faceId"], @"faceId", obj[@"faceRectangle"], @"faceRectangle", nil];
// initiate face model object with dictionary
ZGCFace *face = [[ZGCFace alloc] initWithJSON:json];
NSLog(#"%#", face.faceId);
NSLog(#"%d", face.age);
NSLog(#"%#", face.gender);
NSLog(#"%f", face.faceRect.origin.x);
NSLog(#"%f", face.faceRect.origin.y);
NSLog(#"%f", face.faceRect.size.height);
NSLog(#"%f", face.faceRect.size.width);
// define frame for subview containing face rectangle
CGRect imageRect = CGRectMake(0, 0, self.imageView.image.size.width, self.imageView.image.size.height);
// initiate rectangle subview with face info
ZGCFaceRectView *faceRect = [[ZGCFaceRectView alloc] initWithFace:face frame:imageRect];
// add view as subview of imageview (?)
[self.imageView addSubview:faceRect];
}];
}
EDIT 2:
/* Image info */
UIImageView *iv = self.imageView;
UIImage *img = iv.image;
CGImageRef CGimg = img.CGImage;
// Bitmap dimensions [pixels]
NSUInteger imgWidth = CGImageGetWidth(CGimg);
NSUInteger imgHeight = CGImageGetHeight(CGimg);
NSLog(#"Image dimensions: %lux%lu", imgWidth, imgHeight);
// Image size pixels (size * scale)
CGSize imgSizeInPixels = CGSizeMake(img.size.width * img.scale, img.size.height * img.scale);
NSLog(#"image size in Pixels: %fx%f", imgSizeInPixels.width, imgSizeInPixels.height);
// Image size points
CGSize imgSizeInPoints = img.size;
NSLog(#"image size in Points: %fx%f", imgSizeInPoints.width, imgSizeInPoints.height);
// Calculate Image frame (within imgview) with a contentMode of UIViewContentModeScaleAspectFit
CGFloat imgScale = fminf(CGRectGetWidth(iv.bounds)/imgSizeInPoints.width, CGRectGetHeight(iv.bounds)/imgSizeInPoints.height);
CGSize scaledImgSize = CGSizeMake(imgSizeInPoints.width * imgScale, imgSizeInPoints.height * imgScale);
CGRect imgFrame = CGRectMake(roundf(0.5f*(CGRectGetWidth(iv.bounds)-scaledImgSize.width)), roundf(0.5f*(CGRectGetHeight(iv.bounds)-scaledImgSize.height)), roundf(scaledImgSize.width), roundf(scaledImgSize.height));
// initiate rectangle subview with face info
ZGCFaceRectView *faceRect = [[ZGCFaceRectView alloc] initWithFace:face frame:imgFrame];
// add view as subview of image view
[iv addSubview:faceRect];
}];
We've got several problems:
Microsoft returns pixels and iOS uses points. The difference between them is just the screen's scale factor. For instance, on an iPhone 5, 1 pt = 2 px, while on a 3GS, 1 pt = 1 px. Look at the iOS documentation for more information.
The frame of your UIImageView is not the image frame. When Microsoft returns the frame of a face, it returns it in the coordinate space of the image, not of the UIImageView. So we've got a coordinate-system problem.
Be careful about timing if you use Auto Layout. The frame of a view set by constraints is not the same when viewDidLoad is called as it is when you see it on screen.
Solution:
I'm just a read-only Objective-C developer, so I can't give you code. I could in Swift, but it's not necessary.
Convert pixels into points. That's easy: use the ratio.
Define the frame of a face using what you did, then move the coordinates you determined from the image's coordinate system into the UIImageView's coordinate system. That's less easy: it depends on the contentMode of your UIImageView, but information about it is easy to find on the Internet (see the sketch after this list).
If you use Auto Layout, add the frame of the face once Auto Layout has finished calculating the layout, i.e. when viewDidLayoutSubviews is called.
Or, better, use constraints to set your frame in the UIImageView.
I hope this is clear enough.
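For illustration only (the original answer gives no code), here is a rough Objective-C sketch of that conversion, reusing names from EDIT 2 above and assuming the image view's contentMode is UIViewContentModeScaleAspectFit:
// Microsoft's faceRectangle is in image pixels; convert to image points first.
CGRect faceRectPx = face.faceRect;
CGFloat pxToPt = 1.0f / img.scale;   // img.scale is usually 1.0 for downloaded photos
CGRect faceRectPt = CGRectMake(faceRectPx.origin.x * pxToPt,
                               faceRectPx.origin.y * pxToPt,
                               faceRectPx.size.width * pxToPt,
                               faceRectPx.size.height * pxToPt);
// imgScale and imgFrame are the aspect-fit values computed in EDIT 2.
CGRect faceRectInView = CGRectMake(imgFrame.origin.x + faceRectPt.origin.x * imgScale,
                                   imgFrame.origin.y + faceRectPt.origin.y * imgScale,
                                   faceRectPt.size.width * imgScale,
                                   faceRectPt.size.height * imgScale);
// faceRectInView is now in the UIImageView's coordinate space and can be used as the
// frame of the overlay view (or drawn inside a full-size overlay), depending on how
// ZGCFaceRectView does its drawing.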
Some links:
iOS Drawing Concepts
Displayed Image Frame In UIImageView
I am taking a photo using AV Foundation, and then I want to crop that image into a square that fits my UI. In the UI, there are two semi-transparent views that show what's being captured, and I want to crop the image to include just what's in between the bottom of the top view and the top of the bottom view:
topView
|
area I want to capture and crop
|
bottomView
The actual capturing of the full image works fine. The problem is using Core Image to crop the image successfully.
// Custom function that takes a photo asynchronously from the capture session and gives
// the photo and error back in a block. Works fine.
[self.captureSession takePhotoWithCompletionBlock:^(UIImage *photo, NSError *error) {
if (photo) {
CIImage *imageToCrop = [CIImage imageWithCGImage:photo.CGImage];
// Find, proportionately, the y-value at which I should start the
// cropping, based on my UI
CGFloat beginningYOfCrop = topView.frame.size.height * photo.size.height / self.view.frame.size.height;
CGFloat endYOfCrop = CGRectGetMinY(bottomView.frame) * photo.size.height / self.view.frame.size.height;
CGRect croppedFrame = CGRectMake(0,
beginningYOfCrop,
photo.size.width,
endYOfCrop - beginningYOfCrop);
// Attempt to transform the croppedFrame to fit Core Image's
// different coordinate system
CGAffineTransform coordinateTransform = CGAffineTransformMakeScale(1.0, -1.0);
coordinateTransform = CGAffineTransformTranslate(coordinateTransform,
0,
-photo.size.height);
CGRectApplyAffineTransform(croppedFrame, coordinateTransform);
imageToCrop = [imageToCrop imageByCroppingToRect:croppedFrame];
// Orient the image correctly
UIImage *filteredImage = [UIImage imageWithCIImage:imageToCrop
scale:1.0
orientation:UIImageOrientationRight];
}
}];
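One thing worth noting (an observation, not part of the original post): CGRectApplyAffineTransform does not modify the rect passed to it; it returns the transformed rect, so its result needs to be assigned back, roughly like this:
// CGRect is passed by value, so without the assignment the flipped coordinates are
// discarded and the crop uses the UIKit-oriented rect.
croppedFrame = CGRectApplyAffineTransform(croppedFrame, coordinateTransform);
imageToCrop = [imageToCrop imageByCroppingToRect:croppedFrame];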
After successfully using UIView’s drawViewHierarchyInRect:afterScreenUpdates: method (introduced in iOS 7) to obtain an image representation (via UIGraphicsGetImageFromCurrentImageContext()) for blurring, my app also needed to obtain just a portion of a view. I managed to get it in the following manner:
UIImage *image;
CGSize blurredImageSize = [_blurImageView frame].size;
UIGraphicsBeginImageContextWithOptions(blurredImageSize, YES, .0f);
[aView drawViewHierarchyInRect: [aView bounds] afterScreenUpdates: YES];
image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
This lets me retrieve aView’s content following _blurImageView’s frame.
Now, however, I need to obtain a portion of aView, but this time the portion is “inside” the view. Below is an image representing what I would like to achieve.
I have already tried creating a new graphics context, setting its size to the portion’s size (the red box), and asking aView to draw in the rect that represents the red box’s frame (with its superview’s frame, of course, equal to aView’s), but the image obtained is all black (empty).
After a lot of tweaking I managed to find something that does the job; however, I strongly doubt this is the way to go.
Here’s my [edited-for-Stack Overflow] code that works:
- (UIImage *) imageOfPortionOfABiggerView
{
UIView *bigViewToExtractFrom;
UIImage *image;
UIImage *wholeImage;
CGImageRef _image;
CGRect imageToExtractFrame;
CGFloat screenScale = [[UIScreen mainScreen] scale];
// have to scale the rect due to (I suppose) the screen's scale for Core Graphics.
imageToExtractFrame = CGRectApplyAffineTransform(imageToExtractFrame, CGAffineTransformMakeScale(screenScale, screenScale));
UIGraphicsBeginImageContextWithOptions([bigViewToExtractFrom bounds].size, YES, screenScale);
[bigViewToExtractFrom drawViewHierarchyInRect: [bigViewToExtractFrom bounds] afterScreenUpdates: NO];
wholeImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
    // Obtain a CGImageRef from another CGImage; this lets me specify the rect to extract.
    // However, since the image comes from a UIView (rendered at 2x on Retina), CGImage does
    // not take the screen's scale into consideration: a rect specified in points is treated
    // as pixels, so you end up with an image from the wrong rect at half the size.
_image = CGImageCreateWithImageInRect([wholeImage CGImage], imageToExtractFrame);
wholeImage = nil;
// have to specify the image's scale due to CGImage not taking the screen's scale into consideration.
image = [UIImage imageWithCGImage: _image scale: screenScale orientation: UIImageOrientationUp];
CGImageRelease(_image);
return image;
}
I hope this will help anyone who stumbles upon my issue. Feel free to improve my snippet.
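For what it's worth, a possible alternative (a sketch under my own assumptions, not from the original post) is to skip the intermediate full-size image and translate the context so only the wanted portion is drawn; portionRect here is a hypothetical rect expressed in aView's coordinate space:
// Render just portionRect of aView by shifting the context's origin.
UIGraphicsBeginImageContextWithOptions(portionRect.size, YES, 0.0f);
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGContextTranslateCTM(ctx, -portionRect.origin.x, -portionRect.origin.y);
[aView drawViewHierarchyInRect:[aView bounds] afterScreenUpdates:NO];
UIImage *portionImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();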
Thanks
I have a UIImageView *userImage whose size is full screen, and a UIImageView *imageSquare whose size is 320 x 320. The user will be able to play with userImage to make it bigger, change its position, etc. imageSquare is static and should be seen as the cropping view.
The code below can crop userImage to imageSquare.frame.size. My problem is that it crops from the top of userImage and not from imageSquare.frame.origin, meaning I need to crop it from those X and Y coordinates. It's my first time trying to do this, and everything I've tried so far can't make it crop from imageSquare.frame.origin.
How can I crop the current view (the one the user is manipulating) of userImage from imageSquare.frame.origin?
CGSize pageSize = imageSquare.frame.size;
UIGraphicsBeginImageContext(pageSize);
CGContextRef resizedContext = UIGraphicsGetCurrentContext();
CGContextTranslateCTM(resizedContext, userImage.frame.origin.x, userImage.frame.origin.y);
[userImage.layer renderInContext:resizedContext];
image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
if (image != nil) {
NSLog(#"is not nil");
NSData *imgData = UIImagePNGRepresentation(image);
imageSquare.image = [[UIImage alloc]initWithData:imgData];
}
You'll need to translate by negative x and y:
CGContextTranslateCTM(resizedContext,
-userImage.frame.origin.x,
-userImage.frame.origin.y);
[userImage.layer renderInContext:resizedContext];
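If the crop also has to start at imageSquare.frame.origin, as the question asks, here is a sketch of the idea (my own assumption that both views share a superview; note that renderInContext: ignores the layer's own transform):
// Convert imageSquare's origin into userImage's coordinate space, then shift the
// context so that point lands at (0, 0) of the crop-sized bitmap.
CGPoint cropOrigin = [imageSquare.superview convertPoint:imageSquare.frame.origin
                                                  toView:userImage];
UIGraphicsBeginImageContextWithOptions(imageSquare.frame.size, NO, 0.0);
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGContextTranslateCTM(ctx, -cropOrigin.x, -cropOrigin.y);
[userImage.layer renderInContext:ctx];
UIImage *cropped = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();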
I'm working on an iPhone app that can create pictures and post them to Facebook and Instagram.
The correct size for Facebook photos seems to be 350x350, and indeed this code creates a 350x350 image exactly how I want:
-(UIImage *)createImage {
UIImageView *v = [[UIImageView alloc] initWithFrame:CGRectMake(0, screenHeight/2-349, 349, 349)];
v.image = [UIImage imageNamed:#"backgroundForFacebook.png"]; //"backgroundForFacebook.png" is 349x349.
//This code adds some text to the image.
CGSize dimensions = CGSizeMake(screenWidth, screenHeight);
CGSize imageSize = [self.ghhaiku.text sizeWithFont:[UIFont fontWithName:@"Georgia"
size:mediumFontSize]
constrainedToSize:dimensions lineBreakMode:0];
int textHeight = imageSize.height+16;
UITextView *tv = [self createTextViewForDisplay:self.ghhaiku.text];
tv.frame = CGRectMake((screenWidth/2)-(self.textWidth/2), screenHeight/3.5,
self.textWidth/2 + screenWidth/2, textHeight*2);
[v addSubview:tv];
//End of text-adding code
CGRect newRect = CGRectMake(0, screenHeight/2-349, 349, 349);
UIGraphicsBeginImageContext(newRect.size);
[[v layer] renderInContext:UIGraphicsGetCurrentContext()];
UIImage *myImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
[v removeFromSuperview];
return myImage;
}
But when I use the same code to create an Instagram image, which needs to be 612x612, I get the text only, no background image:
-(UIImage *)createImageForInstagram {
UIImageView *v = [[UIImageView alloc] initWithFrame:CGRectMake(0, 0, 612, 612)];
v.image = [UIImage imageNamed:#"backgroundForInstagram.png"]; //"backgroundForInstagram.png" is 612x612.
//...text-adding code...
CGRect newRect = CGRectMake(0, 0, 612, 612);
UIGraphicsBeginImageContext(newRect.size);
[[v layer] renderInContext:UIGraphicsGetCurrentContext()];
UIImage *myImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
[v removeFromSuperview];
return myImage;
}
What am I doing wrong, and how do I fix it?
(While I'm at it, I'll also say that I'm very new to using graphic contexts, so if there's any awkwardness in the code I'd appreciate your pointing it out.)
EDIT: Now I've reduced the two methods to one, and this time I don't even get the text. Argh!
-(UIImage *)addTextToImage:(UIImage *)myImage withFontSize:(int)sz {
NSString *string=self.displayHaikuTextView.text;
NSString *myWatermarkText = [string stringByAppendingString:@"\n\n\t--haiku.com"];
NSDictionary *attrs = [NSDictionary dictionaryWithObjectsAndKeys:[UIFont fontWithName:@"Georgia"
size:sz],
NSFontAttributeName,
nil];
NSAttributedString *attString = [[NSAttributedString alloc] initWithString:myWatermarkText attributes:attrs];
UIGraphicsBeginImageContextWithOptions(myImage.size,NO,1.0);
[myImage drawAtPoint: CGPointZero];
NSString *longestLine = ghv.listOfLines[1];
CGSize sizeOfLongestLine = [longestLine sizeWithFont:[UIFont fontWithName:@"Georgia" size:sz]];
CGSize siz = CGSizeMake(sizeOfLongestLine.width, sizeOfLongestLine.height*5);
[attString drawAtPoint: CGPointMake(myImage.size.width/2 - siz.width/2, myImage.size.height/2-siz.height/2)];
myImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return myImage;
}
When I pass the arguments [UIImage imageNamed:@"backgroundForFacebook.png"] (an image that is 349x349) and 12, everything is fine; I get the picture. When I pass the arguments [UIImage imageNamed:@"backgroundForInstagram.png"] (an image that is 612x612) and 24, nothing doing.
Right now I'm just putting the text on the smaller image (@"backgroundForFacebook.png") and then resizing it, but that makes the text blurry, which I don't like.
EDIT: Just to cover the basics, here are images of 1) the method in which I call this method (to check the spelling) and 2) the Supporting Files and the Build Phases (to show the image is actually there). I also tried assigning longestLine a non-variable NSString. No luck. :(
FURTHER EDIT: Okay, logging the size and scale of the images as I go during addTextToImage: above, here's what I get for the smaller image, the one that's working:
2013-02-04 22:24:09.588 GayHaikuTabbed[38144:c07] 349.000000, 349.000000, 1.000000
And here's what I get for the larger image--it's a doozy.
Feb 4 22:20:36 Joels-MacBook-Air.local GayHaikuTabbed[38007] <Error>: CGContextGetFontRenderingStyle: invalid context 0x0
Feb 4 22:20:36 Joels-MacBook-Air.local GayHaikuTabbed[38007] <Error>: CGContextSetFillColorWithColor: invalid context 0x0
//About thirty more of these.
Feb 4 22:20:36 Joels-MacBook-Air.local GayHaikuTabbed[38007] <Error>: CGBitmapContextCreate: unsupported parameter combination: 0 integer bits/component; 0 bits/pixel; 0-component color space; kCGImageAlphaNoneSkipLast; 2448 bytes/row.
Feb 4 22:20:36 Joels-MacBook-Air.local GayHaikuTabbed[38007] <Error>: CGContextDrawImage: invalid context 0x0
Feb 4 22:20:36 Joels-MacBook-Air.local GayHaikuTabbed[38007] <Error>: CGBitmapContextCreateImage: invalid context 0x0
Step through the code. After you create myImage, go into the console and look at myImage.size and myImage.scale. Multiply the size numbers by the scale.
If your background image is Retina-quality, your image is actually 1224 x 1224.
From the UIImage docs:
You should avoid creating UIImage objects that are greater than 1024 x 1024 in size. Besides the large amount of memory such an image would consume, you may run into problems when using the image as a texture in OpenGL ES or when drawing the image to a view or layer. This size restriction does not apply if you are performing code-based manipulations, such as resizing an image larger than 1024 x 1024 pixels by drawing it to a bitmap-backed graphics context. In fact, you may need to resize an image in this manner (or break it into several smaller images) in order to draw it to one of your views.
If your image is actually 612 pixels (not points) but your code is rendering it as 1224 pixels, you can just change the scale property to 1.0.
If your image is actually 1224 pixels, you'll need to do something else, like
put your code in a bitmap-backed graphics context (i.e., call UIGraphicsBeginImageContext around the offending code)
display a smaller version to the user
However, if your image is for Instagram, it should not be 1224 x 1224 :-)
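Since UIImage's scale property is read-only, "changing the scale to 1.0" in practice means re-wrapping the underlying CGImage; a minimal sketch for the 612-pixel case:
// Re-create the image with an explicit 1.0 scale so 612 pixels are treated as 612 points.
UIImage *img = [UIImage imageNamed:@"backgroundForInstagram.png"];
UIImage *unitScaleImage = [UIImage imageWithCGImage:img.CGImage
                                              scale:1.0
                                        orientation:img.imageOrientation];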
Update: I noticed your app is haiku-related, so here is the answer in haiku format:
Big UIImage?
Bitmap-backed graphics context
Or shrink to 612
I always back up to the obvious questions:
Is your image actually properly named/spelled backgroundForInstagram.png?
Have you properly added it to your project?
When added, did it get copied to the device in the copy step of the build phases?
What's in ghv at the time of the call in the edited code?
What's in item [1] of ghv.lines at the time of that rendering?
These are the things I would look at when debugging this code.
Try This
-(UIImage *)imageWithImage:(UIImage *)image scaledToSize:(CGSize)newSize {
//UIGraphicsBeginImageContext(newSize);
UIGraphicsBeginImageContextWithOptions(newSize, NO, 10.0); // 10.0 means 10 times bigger
[image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return newImage;
}
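A hypothetical usage of the helper above, reusing the smaller image name from the question:
// The returned image reports 612 x 612 points, but is backed by a bitmap 10x that size
// in pixels because of the 10.0 scale passed above.
UIImage *instagramImage = [self imageWithImage:[UIImage imageNamed:@"backgroundForFacebook.png"]
                                  scaledToSize:CGSizeMake(612, 612)];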