I have an app where I take a photo, then zoom/rotate the photo. How can I crop the result to the screen's frame?
For example, say I am displaying a photo that is 1000px x 1000px. If I zoom in and rotate a bit, I want the resulting UIImage to be... say 800px x 800px (not 320x320). How can I accomplish this? If I use CGImageCreateWithImageInRect and pass in the UIImageView's rect, I get a super-zoomed, small corner of the image.
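One likely cause, as a hedged sketch: CGImageCreateWithImageInRect expects a rect in image pixels, not view points, so the view's rect has to be scaled up by the image-to-view ratio first (rotation aside; visibleRect, zoomScale, and the other names here are assumptions):
// Convert a rect from the zoomed view's point space to image pixel space.
// Assumes the unzoomed image fills the view; ignores rotation and image.scale.
CGFloat ratioX = image.size.width  / (imageView.bounds.size.width  * zoomScale);
CGFloat ratioY = image.size.height / (imageView.bounds.size.height * zoomScale);
CGRect pixelRect = CGRectMake(visibleRect.origin.x * ratioX,
                              visibleRect.origin.y * ratioY,
                              visibleRect.size.width  * ratioX,
                              visibleRect.size.height * ratioY);
CGImageRef croppedRef = CGImageCreateWithImageInRect(image.CGImage, pixelRect);
UIImage *cropped = [UIImage imageWithCGImage:croppedRef];
CGImageRelease(croppedRef);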
Related
I have an image that I'm using which is 960x1280. I have coordinates for a rectangle of x:391 y:772, x:574 y:772, x:574 y:870, x:391 y:870, which puts the rectangle in the proper spot IF the image is still 960x1280. Of course, when I run the app, the screen size is 375x667.
When I draw the rectangle with the above coordinates, it is no longer visible. If I just use the screen scale of 3 (UIScreen.main.scale), it's not accurate either.
I create a UIImageView that has constraints of 0 for all four sides and uses aspect .fit or .fill. How do I now know the proper scale of the image so the rectangle is drawn in the right spot?
Thanks
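A sketch of the aspect-fit math, using AVFoundation's AVMakeRectWithAspectRatioInsideRect (the image/imageView names here are assumptions):
#import <AVFoundation/AVFoundation.h>

// The rect the image actually occupies inside the view under aspect fit.
CGRect displayed = AVMakeRectWithAspectRatioInsideRect(image.size, imageView.bounds);
CGFloat scale = displayed.size.width / image.size.width; // same for height under aspect fit
// Map a point from image coordinates (e.g. x:391 y:772) into view coordinates:
CGPoint viewPoint = CGPointMake(391 * scale + displayed.origin.x,
                                772 * scale + displayed.origin.y);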
I have a UIView that acts as a camera preview, 320x180, and a UIImageView of the same size.
When I take a photo, it produces a UIImage of size 1080x1920, so when I show it in the image view, the photo's height is heavily compressed, because the image is very tall. It looks like this:
██████ the black rectangle is the whole photo (1080x1920)
██████
█▒▒▒▒█ the gray is what the camera shows on screen
██████ (it shows only the gray part, but stores
██████ the whole black part, 1080x1920)
I would like to store a UIImage of exactly what I see in the gray rectangle.
I'm not sure how to do this, since the photo is much bigger than the screen resolution (320 x 568), so it is hard to crop correctly (and the crop also rotates the image and brings in other bugs).
1080/6 = 180. 1920/6 = 320. So the values are in the correct aspect ratio — but they are reversed. You need to apply the correct rotation value to the image.
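A sketch of one way to handle that (the photo/upright names are assumptions): redrawing the photo bakes its EXIF orientation into the pixels, after which the reported width/height match what you see, and a plain crop of the gray region becomes straightforward:
// Redraw so imageOrientation is applied to the pixels; the result is "up".
UIGraphicsBeginImageContextWithOptions(photo.size, NO, photo.scale);
[photo drawInRect:CGRectMake(0, 0, photo.size.width, photo.size.height)];
UIImage *upright = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// upright can then be cropped to the preview's region with
// CGImageCreateWithImageInRect, scaling the preview rect up by 6x.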
My app lets users place text on top of images, Snapchat-style, and then save the result to their device. I simply add the text view on top of the image and snapshot the image view using this code:
UIGraphicsBeginImageContext(imageView.layer.bounds.size);
[imageView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage* savedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
But when I compare the text on my image to the text on a Snapchat image, it is significantly different: Snapchat's text on top of the image is significantly sharper than mine. Mine looks very pixelated. I am not compressing the image at all, just saving it as is using ALAssetLibrary.
Thank You
When you use UIGraphicsBeginImageContext, it defaults to a 1x scale (i.e. non-retina resolution). You probably want:
UIGraphicsBeginImageContextWithOptions(imageView.layer.bounds.size, YES, 0);
Which will use the same scale as the screen (probably 2x). The final parameter is the scale of the resulting image; 0 means "whatever the screen is".
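Putting it together, the snapshot code from the question becomes:
UIGraphicsBeginImageContextWithOptions(imageView.layer.bounds.size, YES, 0);
[imageView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *savedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();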
If your imageView is scaled to the size of the screen, then I think your jpeg will also be limited to that resolution. If setting the scale on UIGraphicsBeginImageContextWithOptions does not give you enough resolution, you can do your drawing in a larger offscreen image. Something like:
// imageSize is the full output resolution you want; "scale" maps the
// screen-size overlay up to it (e.g. imageSize.width / overlay width in points).
UIGraphicsBeginImageContext(imageSize);
[image drawInRect:CGRectMake(0, 0, imageSize.width, imageSize.height)];
CGContextScaleCTM(UIGraphicsGetCurrentContext(), scale, scale);
[textOverlay.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
You need to set the "scale" value to scale the textOverlay view, which is probably at screen size, to the offscreen image size.
Alternatively, and probably simpler, you can start with a larger UIImageView but put it within another UIView that scales it to fit on screen. Do the same with your text overlay view. Then your code for creating the composite should work at whatever resolution you choose for the UIImageView.
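A rough sketch of that container idea (all names here are assumptions):
// Full-resolution image view, scaled down only for display by a transform.
UIImageView *fullResView = [[UIImageView alloc] initWithImage:image]; // frame = image size
CGFloat fit = self.view.bounds.size.width / fullResView.bounds.size.width;
fullResView.transform = CGAffineTransformMakeScale(fit, fit);
fullResView.center = CGPointMake(CGRectGetMidX(self.view.bounds),
                                 CGRectGetMidY(self.view.bounds));
[self.view addSubview:fullResView];
// Put the text overlay inside fullResView, in its full-res coordinate space;
// renderInContext: on fullResView.layer then composites at full resolution,
// since the view's own transform affects display only, not its layer bounds.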
I'm stuck with something I can't figure out...
My app lets the user zoom/pan a thumbnail image, via a UIScrollView. Then it needs to take the changes the user made in the scroll view and apply them to the same image at a much higher resolution (i.e. generate a high-res UIImage that looks the same as the zoomed/panned low-res thumbnail the user touched).
I can see that the scroll view has a zoomScale and a contentOffset which I can reuse, but I really can't see how to apply these to my UIImage.
All help much appreciated, thanks!
The zoom scale, contentOffset, and frame of the UIScrollView describe a sub-rectangle of the thumbnail.
Rescale that rectangle proportionally against the higher res version of your image.
e.g.
Your scroller has bounds of 100px x 100px.
Your thumbnail is 100px x 100px and is zoomed at 4x with a content offset of (x:100, y:100). You will see a sub-rectangle of frame (x:25, y:25, w:25, h:25) from the original thumbnail inside the 100x100 window of the scroller, i.e. blurry. The width and height come from the scroller's frame.
Once you flip in a high-res image of 1000px x 1000px, you are going to want to present the same chunk of the image, except now you present (x:250, y:250, w:250, h:250) by setting the zoom to 0.4. The contentOffset remains the same.
Note that a zoom of 1x and zero offset, which would present the whole thumbnail, corresponds to a zoom of 0.1x and zero offset against the high-res image.
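In code, the mapping looks something like this (the thumbnail/hiResImage names are assumptions):
// The sub-rectangle of the thumbnail currently visible in the scroll view.
CGFloat zoom = scrollView.zoomScale;
CGPoint offset = scrollView.contentOffset;
CGRect visibleInThumb = CGRectMake(offset.x / zoom,
                                   offset.y / zoom,
                                   scrollView.bounds.size.width / zoom,
                                   scrollView.bounds.size.height / zoom);
// Rescale proportionally to the high-res image (1000/100 = 10x in this example):
CGFloat ratio = hiResImage.size.width / thumbnail.size.width;
CGRect visibleInHiRes = CGRectMake(visibleInThumb.origin.x * ratio,
                                   visibleInThumb.origin.y * ratio,
                                   visibleInThumb.size.width * ratio,
                                   visibleInThumb.size.height * ratio);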
BUT
You are overthinking the issue. Your container UIImageView does all the work for you. Once you reach your target zoom point, simply load the higher-res image into the image view (myImageView.image = hiresImage) and it will "just work", assuming your contentMode is set to Scale To Fill (UIViewContentModeScaleToFill) or Aspect Fill. The low-res image will be replaced by the high-res version in exactly the right position.
I have taken a photo, and then I'm initializing a UIImageView object with this photo. The only problem is, when I take the photo, the photo is being taken using the full iPhone screen (portrait).
The UIImageView that is being initialized with this photo is only set to take up the top 50% of the iphone's screen. So you can imagine the image looks distorted.
I have been able to make it look a lot better by using the following code:
UIImageView *halfView = [[UIImageView alloc] initWithImage:image];
[self.view addSubview:halfView];
halfView.frame = CGRectMake(0, 0, self.view.bounds.size.width, self.view.bounds.size.height / 2);
halfView.contentMode = UIViewContentModeScaleAspectFill;
The only problem is, the final UIImageView called "halfView" is still slightly distorted.
I have a feeling this is impossible to fix, because the original photo is taken with the full iPhone screen and can never perfectly scale to fit a UIImageView that only takes up the top 50% of the iPhone screen.
I was basically trying to copy the Frontback app. [Screenshot: taking the original image in their app.]
[Screenshot: my app's screen while taking the picture.]
And then, right after you take the picture, my app's screen changes to look like the Frontback screen: it takes the picture you just took, places it in the top half, and tries to scale it.
I hope that makes sense. I know it is a long question, but I just really wanted to let the user use the full screen while taking the photo and then just scale it to half the screen.
Am I going about this all wrong? Am I crazy to think I could ever properly scale the image to half the screen when it was originally captured as a "full screen" image?
Thanks for the help.
For the sake of argument, let's say your captured image is 640x1136 (twice the size of an iPhone 5 screen) and you are trying to display it in a UIImageView of size 320x284 (half the size of an iPhone 5 screen).
As you can already see from these dimensions the captured image's width is smaller than its height whereas the UIImageView's width is larger than its height - the proportions are different.
Therefore, scaling the captured image to fit the UIImageView's width (scale by 0.5) means the captured image will be of size 320x568 - its height is larger than the UIImageView's height.
Scaling the captured image to fit the UIImageView's height (scale by 0.25) means the captured image will be of size 160x284 - its width is smaller than the UIImageView's width.
The image can't scale exactly like you want it to scale. However, you can use UIViewContentModeScaleAspectFill to fill the entire UIImageView but lose some of the image (image's height is too big to fit). You can also choose to use UIViewContentModeScaleAspectFit which will show the entire image but will leave some space on the sides (image's width is too small).
Another option you have is to actually capture the image in the proportions of your UIImageView in the first place but that means you won't be able to capture a full screen image.
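The arithmetic above in code form:
CGSize imageSize = CGSizeMake(640, 1136);
CGSize viewSize  = CGSizeMake(320, 284);
CGFloat widthScale  = viewSize.width  / imageSize.width;   // 0.5  -> 320x568
CGFloat heightScale = viewSize.height / imageSize.height;  // 0.25 -> 160x284
// Aspect fill uses the larger scale (fills the view, crops the height);
// aspect fit uses the smaller one (shows everything, leaves bars at the sides).
CGFloat fillScale = MAX(widthScale, heightScale);
CGFloat fitScale  = MIN(widthScale, heightScale);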
Try this function: pass in your UIImage along with the new size, and it will return a UIImage at the size you specify.
- (UIImage *)imageWithImage:(UIImage *)image scaledToSize:(CGSize)newSize {
    // Redraws the image into a context of the requested size.
    // Note: this stretches to newSize and does not preserve aspect ratio.
    UIGraphicsBeginImageContextWithOptions(newSize, NO, 0.0);
    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
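For example (variable names assumed; remember the function stretches to exactly the given size):
UIImage *resized = [self imageWithImage:capturedImage
                           scaledToSize:CGSizeMake(self.view.bounds.size.width,
                                                   self.view.bounds.size.height / 2)];
halfView.image = resized;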
I guess this is what you want.
Hope this helps.
You mention the image takes up the full size of the screen. If you want to display it in a UIImageView taking up half the screen, you'll need to add this code to clip the frame:
halfView.clipsToBounds = YES;
Even though the image view is sized to half the screen, the actual image will draw outside the image view's bounds if its original size is bigger, given the aspect-fill content mode. clipsToBounds fixes this.
I hope this is what you're looking for. Thanks, Jim.