iOS UIImage Scale and Crop: why does scaling up actually make the image smaller?

I'm using someone else's pinch gesture code for scaling, which works perfectly: it scales my image in my photo editing app. But once the user presses Done, I need the changes to be reflected and saved; in other words, I need the image to actually be zoomed in and cropped if the user pinched to scale. I figured I could use the amount they scaled multiplied by the frame size for UIGraphicsBeginImageContext, but that strategy isn't working: when the user scales the image and hits the Done button, the image gets saved smaller, because this now very large size is getting squeezed into the view, when what I really want is to crop off any leftovers and not do any fitting.
- (IBAction)pinchGest:(UIPinchGestureRecognizer *)sender {
    if (sender.state == UIGestureRecognizerStateEnded
        || sender.state == UIGestureRecognizerStateChanged) {
        NSLog(@"sender.scale = %f", sender.scale);
        CGFloat currentScale = self.activeImageView.frame.size.width / self.activeImageView.bounds.size.width;
        CGFloat newScale = currentScale * sender.scale;
        if (newScale < .5) {
            newScale = .5;
        }
        if (newScale > 4) {
            newScale = 4;
        }
        CGAffineTransform transform = CGAffineTransformMakeScale(newScale, newScale);
        self.activeImageView.transform = transform;
        scalersOfficialChange = newScale;
        sender.scale = 1;
    }
}
- (IBAction)doneMoverViewButtonPressed:(UIButton *)sender {
    // turn off ability to move & scale
    moverViewActive = NO;
    NSLog(@"%f %f", dragOfficialChange.x, dragOfficialChange.y);
    NSLog(@"%f", rotationOfficialChange);
    NSLog(@"%f", scalersOfficialChange);
    // problem area below...
    CGSize newSize = CGSizeMake(self.activeImageView.bounds.size.width * scalersOfficialChange,
                                self.activeImageView.bounds.size.height * scalersOfficialChange);
    UIGraphicsBeginImageContext(newSize);
    [self.activeImageView.image drawInRect:CGRectMake(dragOfficialChange.x, dragOfficialChange.y, self.layerContainerView.bounds.size.width, self.layerContainerView.bounds.size.height)];
    self.activeImageView.image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    [self hideMoveViewerAnimation];
    // resets activeImageView coords
    CGRect myFrame = self.layerContainerView.bounds;
    myFrame.origin.x = 0;
    myFrame.origin.y = 0;
    self.activeImageView.frame = myFrame;
    // reset changes values
    dragOfficialChange.x = 0;
    dragOfficialChange.y = 0;
    rotationOfficialChange = 0;
    scalersOfficialChange = 0;
}

First of all, can you make your question clearer? I take it you want to draw your image into a rect without squeezing it, am I right?
Then let's try this method:
// The method drawInRect: will scale your pic and squeeze it to fit the rect area.
// So if you don't want to scale your pic, you can use the method below:
[image drawAsPatternInRect:rect];
// This method will not scale your image, and it cuts off the needless part.
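Alternatively, here is a minimal sketch of letting the context do the cropping, reusing the names from the question (activeImageView, layerContainerView, dragOfficialChange, scalersOfficialChange). The key change from the question's code is that the context keeps the visible size while the draw rect carries the scaled size, so the overflow falls outside the context and is cropped rather than fitted:
// Context stays at the visible size; anything drawn outside it is cropped.
CGSize contextSize = self.layerContainerView.bounds.size;
UIGraphicsBeginImageContextWithOptions(contextSize, NO, 0.0);
// The draw rect carries the pinch scale and drag offset, so the image is
// rendered larger than the context and the excess is simply cut off.
CGRect drawRect = CGRectMake(dragOfficialChange.x,
                             dragOfficialChange.y,
                             self.activeImageView.bounds.size.width * scalersOfficialChange,
                             self.activeImageView.bounds.size.height * scalersOfficialChange);
[self.activeImageView.image drawInRect:drawRect];
self.activeImageView.image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();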

Related

Table View UIImage rendering issue

I'm running into a rendering issue with my tableView UIImages and was wondering if anyone has encountered the same problem and knows how to fix it.
Here is my cellForRowAtIndexPath
- (UITableViewCell *)tableView:(UITableView *)tableView
         cellForRowAtIndexPath:(NSIndexPath *)indexPath
{
    // (cell dequeue was elided in the original post; a standard one shown here)
    UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:@"Cell"];
    cell.textLabel.text = exerciseDisplayName;
    cell.textLabel.numberOfLines = 0;
    cell.textLabel.lineBreakMode = NSLineBreakByWordWrapping;
    [tableView setSeparatorInset:UIEdgeInsetsZero];
    UtilityMethods *commonMethods = [[UtilityMethods alloc] init];
    UIImage *rowImage = [commonMethods imageForRow:tempPlaceholder.bodyPart];
    cell.imageView.image = rowImage;
    return cell;
}
Here is my height for row.
- (CGFloat)tableView:(UITableView *)tableView heightForRowAtIndexPath:(NSIndexPath *)indexPath
{
    return 96;
}
There are lots of lines and squiggles in the images in the table, and I was wondering if anyone knows of any UIImage properties I might need to apply to fix the problem. Increasing the row height fixes the problem, at the expense of making the rows taller: the number that seems to work is 128 in heightForRow, and at that height the squiggles are much less noticeable. So I'm pretty sure this has something to do with how iOS renders the image.
I've taken an image and resized it to 76x76 using Microsoft Paint just to see if I'd hit the same problem, and that image appears just fine, without all the squiggles. The images are in .png format; their original size is 1024x1024, and I've just resized them downwards as I've needed them. If anyone has any tips or advice on how to fix this, I'd really appreciate it.
You are going to need to resample the image to the size you need. Viewing a large image in a small space looks rather bad on iOS devices (most any really). But if you use built in functions to create a new UIImage of the proper size everything looks much better. Scaling down a UIImage when displaying will always look worse than creating a new image of the proper size and displaying that. The way to do this is as follows (taken from here):
- (UIImage *)imageByScalingAndCroppingForSize:(CGSize)targetSize
{
    UIImage *sourceImage = self;
    UIImage *newImage = nil;
    CGSize imageSize = sourceImage.size;
    CGFloat width = imageSize.width;
    CGFloat height = imageSize.height;
    CGFloat targetWidth = targetSize.width;
    CGFloat targetHeight = targetSize.height;
    CGFloat scaleFactor = 0.0;
    CGFloat scaledWidth = targetWidth;
    CGFloat scaledHeight = targetHeight;
    CGPoint thumbnailPoint = CGPointMake(0.0, 0.0);
    if (CGSizeEqualToSize(imageSize, targetSize) == NO)
    {
        CGFloat widthFactor = targetWidth / width;
        CGFloat heightFactor = targetHeight / height;
        if (widthFactor > heightFactor)
        {
            scaleFactor = widthFactor; // scale to fit height
        }
        else
        {
            scaleFactor = heightFactor; // scale to fit width
        }
        scaledWidth = width * scaleFactor;
        scaledHeight = height * scaleFactor;
        // center the image
        if (widthFactor > heightFactor)
        {
            thumbnailPoint.y = (targetHeight - scaledHeight) * 0.5;
        }
        else if (widthFactor < heightFactor)
        {
            thumbnailPoint.x = (targetWidth - scaledWidth) * 0.5;
        }
    }
    // NO = not opaque; 0 = use the device's screen scale. This will crop.
    UIGraphicsBeginImageContextWithOptions(targetSize, NO, 0);
    CGRect thumbnailRect = CGRectZero;
    thumbnailRect.origin = thumbnailPoint;
    thumbnailRect.size.width = scaledWidth;
    thumbnailRect.size.height = scaledHeight;
    [sourceImage drawInRect:thumbnailRect];
    newImage = UIGraphicsGetImageFromCurrentImageContext();
    if (newImage == nil)
    {
        NSLog(@"could not scale image");
    }
    // pop the context to get back to the default
    UIGraphicsEndImageContext();
    return newImage;
}
That function does a bit more than you are looking for, but you should be able to cut it down to only what you need.
Make sure to use the UIGraphicsBeginImageContextWithOptions function instead of UIGraphicsBeginImageContext so you handle Retina displays properly; otherwise your images will come out blurrier than they should be and you'll have a second problem to deal with.
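Since the method above refers to self, it is presumably declared in a UIImage category. Here is a minimal sketch of that declaration and a call site; the category name and the 96x96 target (chosen to match the row height) are illustrative assumptions, not from the original post:
// Hypothetical category declaration for the method above.
@interface UIImage (ScaleAndCrop)
- (UIImage *)imageByScalingAndCroppingForSize:(CGSize)targetSize;
@end

// Possible call site in cellForRowAtIndexPath:, resampling once to the
// displayed size instead of letting the cell shrink the 1024x1024 original.
UIImage *rowImage = [commonMethods imageForRow:tempPlaceholder.bodyPart];
cell.imageView.image = [rowImage imageByScalingAndCroppingForSize:CGSizeMake(96, 96)];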

Rotation changing UIImageView frame. How to avoid this?

I have two sliders: one for changing the image size and one for rotating the image. My image view is 60x60. The problem is that I rotate the image using CGAffineTransformMakeRotation, but when I then try to resize it (say, from 60x60 to 65x65 using the slider), it acts weirdly: the image view's frame has changed to something like 80x2. How can I avoid this? Here is the code for the slider that resizes the image:
- (IBAction)imageSliderAction:(UISlider *)sender
{
    NSUInteger value = sender.value;
    float oldCenterX = logoImageView.center.x;
    float oldCenterY = logoImageView.center.y;
    newWidth = value;
    newHeight = value;
    CGRect frame = [logoImageView frame];
    frame.size.width = newWidth;
    frame.size.height = newHeight;
    [logoImageView setFrame:frame];
    logoImageView.center = CGPointMake(oldCenterX, oldCenterY);
}
And here is the code for my rotating slider:
- (IBAction)rotationSliderAction:(UISlider *)sender
{
    NSUInteger angle = sender.value;
    if (sender.value >= 1)
    {
        CGAffineTransform rotate = CGAffineTransformMakeRotation(angle / 180.0 * 3.14);
        [logoImageView setTransform:rotate];
    }
    if (sender.value <= 0)
    {
        CGAffineTransform rotate = CGAffineTransformMakeRotation((360 + sender.value) / 180.0 * 3.14);
        [logoImageView setTransform:rotate];
    }
}
How can I stop the frame's width and height from changing automatically when rotating? Because after that happens, I can't resize the image correctly.
From the UIView reference on the frame property:
Warning: If the transform property is not the identity transform, the
value of this property is undefined and therefore should be ignored.
If you want to change the size of a view that has a nontrivial transform, you should do it by changing its bounds property (the view's center will remain the same, so you won't need any extra logic to maintain its position):
[logoImageView setBounds:CGRectMake(0,0,sender.value, sender.value)];
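Dropped into the question's resize slider, a minimal sketch (same logoImageView and slider as above) would be:
- (IBAction)imageSliderAction:(UISlider *)sender
{
    // Setting bounds respects the current rotation transform and keeps the
    // center fixed, so no manual re-centering is needed.
    [logoImageView setBounds:CGRectMake(0, 0, sender.value, sender.value)];
}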

Drawing on a zoomable view

I'm working on a small drawing application, which has a basic requirement of supporting zoom-in/out. I have two main issues:
1. Drawing doesn't appear crisp and clear when the view is zoomed/transformed. Is there a better approach, or a way to improve the drawing when the view is zoomed?
2. The drawing performance is slow when drawing on a 1200 x 1200 pt canvas (on iPhone). Any chance I can improve it for large canvas sizes?
Zooming Code:
- (void)scale:(UIPinchGestureRecognizer *)gestureRecognizer {
    [self adjustAnchorPointForGestureRecognizer:gestureRecognizer];
    UIView *canvas = [gestureRecognizer view];
    if ([gestureRecognizer state] == UIGestureRecognizerStateBegan ||
        [gestureRecognizer state] == UIGestureRecognizerStateChanged) {
        // Calculate the drawing view's size
        CGSize drawingViewSize = ...;
        // Calculate the minimum allowed transform size
        // Developer's note: I actually wanted to use 1/4th of the view's size,
        // but self.view.frame.size doesn't return the correct (actual) width
        // and height. It returns these values inverted, i.e. width as height
        // and vice versa, because the view appears to be transformed (90 degrees).
        // Since there's no workaround for this, for now I'm just using fixed values.
        CGSize minTranformSize = CGSizeMake(100.0f, 100.0f);
        if ((minTranformSize.width > drawingViewSize.width) && (minTranformSize.height > drawingViewSize.height)) {
            minTranformSize = drawingViewSize;
        }
        // Transform the view, provided
        // 1. It won't scale more than the original size of the background image
        // 2. It won't scale less than the minimum possible transform
        CGSize transformedSize = CGSizeMake(canvas.frame.size.width * [gestureRecognizer scale],
                                            canvas.frame.size.height * [gestureRecognizer scale]);
        if ((transformedSize.width <= drawingViewSize.width) && (transformedSize.height <= drawingViewSize.height) &&
            (transformedSize.width >= minTranformSize.width) && (transformedSize.height >= minTranformSize.height)) {
            canvas.transform = CGAffineTransformScale([canvas transform],
                                                      [gestureRecognizer scale],
                                                      [gestureRecognizer scale]);
        }
        [gestureRecognizer setScale:1.0];
    } else if ([gestureRecognizer state] == UIGestureRecognizerStateEnded) {
        // Recenter the container view, if required (piece is smaller than the view and it's not aligned)
        CGSize viewSize = self.view.bounds.size;
        if ((canvas.frame.size.width < viewSize.width) ||
            (canvas.frame.size.height < viewSize.height)) {
            canvas.center = CGPointMake(viewSize.width / 2, viewSize.height / 2);
        }
        // Adjust the x/y coordinates, if required (piece is larger than the view and it's moving outwards from the view)
        if (((canvas.frame.origin.x > 0) || (canvas.frame.origin.y > 0)) &&
            ((canvas.frame.size.width >= viewSize.width) && (canvas.frame.size.height >= viewSize.height))) {
            canvas.frame = CGRectMake(0.0,
                                      0.0,
                                      canvas.frame.size.width,
                                      canvas.frame.size.height);
        }
        canvas.frame = CGRectIntegral(canvas.frame);
    }
}
Drawing Code
- (void)draw {
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSaveGState(context);
    if (self.fillColor) {
        [self.fillColor setFill];
        [self.path fill];
    }
    if ([self.strokeColor isEqual:[UIColor clearColor]]) {
        [self.path strokeWithBlendMode:kCGBlendModeClear alpha:1.0];
    } else if (self.strokeColor) {
        [self.strokeColor setStroke];
        [self.path stroke];
    }
    CGContextRestoreGState(context);
}
This is a pretty complicated problem that I have struggled a lot with. I ended up converting the drawings to vector:
1. Draw all lines in one layer and all fills in another.
2. Convert the line drawings to vector using potrace (http://potrace.sourceforge.net/).
3. Draw the vector using SVGKit (https://github.com/SVGKit/SVGKit) and hide the layer drawn in step 1.
It works pretty well and fairly fast, but it requires a lot of work. We have an app in our company that applies this technique:
https://itunes.apple.com/us/app/ideal-paint-hd-mormor/id569450492?mt=8.
If your only problem is performance, try taking a look at CATiledLayer (also used in the app mentioned above). It will increase performance tremendously; you can find a pretty good tutorial here: http://www.cimgf.com/2011/03/01/subduing-catiledlayer/.
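To make that suggestion concrete, here is a hedged sketch of a CATiledLayer-backed canvas view; the class name, tile size, and level-of-detail counts are illustrative assumptions, not from the original answer:
#import <UIKit/UIKit.h>
#import <QuartzCore/QuartzCore.h>

// Hypothetical canvas view backed by CATiledLayer.
@interface CanvasView : UIView
@end

@implementation CanvasView

+ (Class)layerClass {
    return [CATiledLayer class]; // back the view with tiles, not one big bitmap
}

- (instancetype)initWithFrame:(CGRect)frame {
    if ((self = [super initWithFrame:frame])) {
        CATiledLayer *tiled = (CATiledLayer *)self.layer;
        tiled.tileSize = CGSizeMake(256, 256); // illustrative tile size
        tiled.levelsOfDetail = 4;              // cached zoom-out levels
        tiled.levelsOfDetailBias = 4;          // extra detail when zooming in
    }
    return self;
}

- (void)drawRect:(CGRect)rect {
    // Called once per visible tile (possibly on background threads),
    // so only the tiles that actually changed get redrawn.
}

@end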
Good luck! :)
First of all, transforming the view is not the best way to zoom; transforms are really only suitable for one-off increases or decreases in a UIView's size. For zooming, you can do this:
1) Get a snapshot of the screen using this code:
UIGraphicsBeginImageContext(self.drawingView.bounds.size);
CGContextRef context = UIGraphicsGetCurrentContext();
//[self.view.layer renderInContext:context];
[self.layerContainerView.layer renderInContext:context];
UIImage *screenShot = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext(); // balance the Begin call (missing in the original)
[scaleLabel setHidden:FALSE];
return screenShot;
2) Then put it in a UIImageView inside a scroll view and perform zooming on that image:
scrollView.userInteractionEnabled = TRUE;
self.scrollView.minimumZoomScale = 1.0f;
self.scrollView.maximumZoomScale = 30.0f;
[self centerScrollViewContents];
CGFloat newZoomScale = self.scrollView.zoomScale / 1.5f;
newZoomScale = MAX(newZoomScale, self.scrollView.minimumZoomScale);
[self.scrollView setZoomScale:newZoomScale animated:YES];
3) Then set the zoomed image back as the UIView's background.
This works perfectly for me; hopefully it works for you too.
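For step 2 to work, the scroll view needs a delegate that tells it which view to zoom. A minimal sketch, assuming the snapshot was placed in an image view exposed as a hypothetical self.imageView property:
// UIScrollViewDelegate method: the scroll view pinch-zooms the returned view.
- (UIView *)viewForZoomingInScrollView:(UIScrollView *)scrollView
{
    return self.imageView; // the image view holding the screenshot
}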

Performance issues when cropping UIImage (CoreGraphics, iOS)

The basic idea of what we are trying to do is that we have a large UIImage, and we want to slice it into several pieces. The caller of the function can pass in a number of rows and a number of columns, and the image will be cropped accordingly (i.e. 3 rows and 3 columns slices the image into 9 pieces). The problem is, we're having performance issues when trying to accomplish this with Core Graphics. The largest grid we require is 5x5, and it takes several seconds for the operation to complete (which registers as lag to the user). This is of course far from optimal.
My colleague and I have spent quite a while on this, and have searched the web for answers unsuccessfully. Neither of us are extremely experienced with Core Graphics, so I'm hoping there's some silly mistake in the code that will fix our problems. It's left to you, SO users, to please help us figure it out!
We used the tutorial at http://www.hive05.com/2008/11/crop-an-image-using-the-iphone-sdk/ to base revisions of our code on.
The function below:
- (void)getImagesFromImage:(UIImage *)image withRow:(NSInteger)rows withColumn:(NSInteger)columns
{
    CGSize imageSize = image.size;
    CGFloat xPos = 0.0;
    CGFloat yPos = 0.0;
    CGFloat width = imageSize.width / columns;
    CGFloat height = imageSize.height / rows;
    int imageCounter = 0;
    // create a context to do our clipping in
    UIGraphicsBeginImageContext(CGSizeMake(width, height));
    CGContextRef currentContext = UIGraphicsGetCurrentContext();
    CGRect clippedRect = CGRectMake(0, 0, width, height);
    CGContextClipToRect(currentContext, clippedRect);
    for (int i = 0; i < rows; i++)
    {
        xPos = 0.0;
        for (int j = 0; j < columns; j++)
        {
            // create a rect with the size we want to crop the image to;
            // the X and Y here are zero so we start at the beginning of our
            // newly created context
            CGRect rect = CGRectMake(xPos, yPos, width, height);
            // create a rect equivalent to the full size of the image,
            // offset by the X and Y we want to start the crop from
            // in order to cut off anything before them
            CGRect drawRect = CGRectMake(rect.origin.x * -1,
                                         rect.origin.y * -1,
                                         image.size.width,
                                         image.size.height);
            // draw the image to our clipped context using our offset rect
            CGContextDrawImage(currentContext, drawRect, image.CGImage);
            // pull the image from our cropped context
            UIImage *croppedImg = UIGraphicsGetImageFromCurrentImageContext();
            // PuzzlePiece is a UIView subclass
            PuzzlePiece *newPP = [[PuzzlePiece alloc] initWithImageAndFrameAndID:croppedImg :rect :imageCounter];
            [slicedImages addObject:newPP];
            imageCounter++;
            xPos += width;
        }
        yPos += height;
    }
    // pop the context to get back to the default
    UIGraphicsEndImageContext();
}
ANY advice greatly appreciated!!
originalImageView is an IBOutlet UIImageView. This image will be cropped.
#import <QuartzCore/QuartzCore.h>
QuartzCore is needed for the white border drawn around each slice, which makes the pieces easier to see.
- (UIImage *)getCropImage:(CGRect)cropRect
{
    CGImageRef image = CGImageCreateWithImageInRect([originalImageView.image CGImage], cropRect);
    UIImage *croppedImage = [UIImage imageWithCGImage:image];
    CGImageRelease(image);
    return croppedImage;
}
- (void)prepareSlices:(uint)row :(uint)col
{
    float flagX = originalImageView.image.size.width / originalImageView.frame.size.width;
    float flagY = originalImageView.image.size.height / originalImageView.frame.size.height;
    float _width = originalImageView.frame.size.width / col;
    float _height = originalImageView.frame.size.height / row;
    float _posX = 0.0;
    float _posY = 0.0;
    for (int i = 1; i <= row * col; i++) {
        UIImageView *croppedImageView = [[UIImageView alloc] initWithFrame:CGRectMake(_posX, _posY, _width, _height)];
        UIImage *img = [self getCropImage:CGRectMake(_posX * flagX, _posY * flagY, _width * flagX, _height * flagY)];
        croppedImageView.image = img;
        croppedImageView.layer.borderColor = [[UIColor whiteColor] CGColor];
        croppedImageView.layer.borderWidth = 1.0f;
        [self.view addSubview:croppedImageView];
        [croppedImageView release];
        _posX += _width;
        if (i % col == 0) {
            _posX = 0;
            _posY += _height;
        }
    }
    originalImageView.alpha = 0.0;
}
With originalImageView.alpha = 0.0; you won't see the originalImageView any more.
Call it like this:
[self prepareSlices:4 :4];
It should add 16 slices as subviews of self.view. We have a puzzle app; this is working code from there.
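If you want to keep the question's getImagesFromImage:withRow:withColumn: interface, here is a hedged sketch of the same CGImageCreateWithImageInRect idea adapted to it. PuzzlePiece and slicedImages are the question's own names; the scale handling is an assumption for images whose scale isn't 1:
- (void)getImagesFromImage:(UIImage *)image withRow:(NSInteger)rows withColumn:(NSInteger)columns
{
    // CGImageCreateWithImageInRect works in pixel coordinates of the
    // backing CGImage, so multiply by the image's scale if it isn't 1.
    CGFloat scale = image.scale;
    CGFloat width = image.size.width / columns;
    CGFloat height = image.size.height / rows;
    int imageCounter = 0;
    for (NSInteger i = 0; i < rows; i++) {
        for (NSInteger j = 0; j < columns; j++) {
            CGRect pieceRect = CGRectMake(j * width, i * height, width, height);
            CGRect pixelRect = CGRectMake(pieceRect.origin.x * scale,
                                          pieceRect.origin.y * scale,
                                          pieceRect.size.width * scale,
                                          pieceRect.size.height * scale);
            // Crop straight from the backing CGImage instead of redrawing
            // the whole image once per piece; this is the cheap operation.
            CGImageRef pieceRef = CGImageCreateWithImageInRect(image.CGImage, pixelRect);
            UIImage *croppedImg = [UIImage imageWithCGImage:pieceRef scale:scale orientation:image.imageOrientation];
            CGImageRelease(pieceRef);
            PuzzlePiece *newPP = [[PuzzlePiece alloc] initWithImageAndFrameAndID:croppedImg :pieceRect :imageCounter];
            [slicedImages addObject:newPP];
            imageCounter++;
        }
    }
}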

Resize UIImage with aspect ratio?

I'm using this code to resize an image on the iPhone:
CGRect screenRect = CGRectMake(0, 0, 320.0, 480.0);
UIGraphicsBeginImageContext(screenRect.size);
[value drawInRect:screenRect blendMode:kCGBlendModePlusDarker alpha:1];
UIImage *tmpValue = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
This works great, as long as the aspect ratio of the image matches that of the new, resized image. I'd like to modify it so that it keeps the correct aspect ratio and just puts a black background anywhere the image doesn't cover. So I would still end up with a 320x480 image, but with black on the top and bottom or on the sides, depending on the original image size.
Is there an easy way to do this similar to what I'm doing? Thanks!
After you set your screen rect, do something like the following to decide what rect to draw the image in:
float hfactor = value.size.width / screenRect.size.width;
float vfactor = value.size.height / screenRect.size.height;
float factor = fmax(hfactor, vfactor);
// Divide the size by the greater of the vertical or horizontal shrinkage factor
float newWidth = value.size.width / factor;
float newHeight = value.size.height / factor;
// Then figure out if you need to offset it to center vertically or horizontally
float leftOffset = (screenRect.size.width - newWidth) / 2;
float topOffset = (screenRect.size.height - newHeight) / 2;
CGRect newRect = CGRectMake(leftOffset, topOffset, newWidth, newHeight);
If you don't want to enlarge images smaller than the screenRect, make sure factor is greater than or equal to one (e.g. factor = fmax(factor, 1)).
To get the black background, you would probably just want to set the context color to black and call fillRect before drawing the image.
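Putting it together, a minimal sketch using the names from the question and the rect computed above (the black fill is the assumption here):
// Black letterbox plus aspect-fit draw into a 320x480 context.
UIGraphicsBeginImageContext(screenRect.size);
[[UIColor blackColor] setFill];
UIRectFill(screenRect);       // fill the background with black
[value drawInRect:newRect];   // draw the image centered, aspect preserved
UIImage *tmpValue = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();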
I know this is very old, but thanks for that post; it redirected me from attempting to use scale to drawing the image. In case it is of benefit to anyone, I made an extension class I'll throw in here. It allows you to resize an image like this:
UIImage imgNew = img.Fit(40.0f, 40.0f);
I only needed a Fit option, but it could easily be extended to support Fill as well.
using CoreGraphics;
using System;
using UIKit;

namespace SomeApp.iOS.Extensions
{
    public static class UIImageExtensions
    {
        public static CGSize Fit(this CGSize sizeImage,
                                 CGSize sizeTarget)
        {
            CGSize ret;
            float fw;
            float fh;
            float f;
            fw = (float)(sizeTarget.Width / sizeImage.Width);
            fh = (float)(sizeTarget.Height / sizeImage.Height);
            f = Math.Min(fw, fh);
            ret = new CGSize
            {
                Width = sizeImage.Width * f,
                Height = sizeImage.Height * f
            };
            return ret;
        }

        public static UIImage Fit(this UIImage image,
                                  float width,
                                  float height,
                                  bool opaque = false,
                                  float scale = 1.0f)
        {
            UIImage ret;
            ret = image.Fit(new CGSize(width, height),
                            opaque,
                            scale);
            return ret;
        }

        public static UIImage Fit(this UIImage image,
                                  CGSize sizeTarget,
                                  bool opaque = false,
                                  float scale = 1.0f)
        {
            CGSize sizeNewImage;
            CGSize size;
            UIImage ret;
            size = image.Size;
            sizeNewImage = size.Fit(sizeTarget);
            // Pass the caller's scale through (the original hardcoded 1.0f here).
            UIGraphics.BeginImageContextWithOptions(sizeNewImage,
                                                    opaque,
                                                    scale);
            using (CGContext context = UIGraphics.GetCurrentContext())
            {
                context.ScaleCTM(1, -1);
                context.TranslateCTM(0, -sizeNewImage.Height);
                context.DrawImage(new CGRect(CGPoint.Empty, sizeNewImage),
                                  image.CGImage);
                ret = UIGraphics.GetImageFromCurrentImageContext();
            }
            UIGraphics.EndImageContext();
            return ret;
        }
    }
}
As per the post above, it starts a new context for an image, figures out the aspect ratio, and then paints into the new image. If you haven't spent time in Xcode development, UIGraphics is a bit backwards compared to most systems I work with, but not bad. One issue is that bitmaps by default paint bottom to top. To get around that:
context.ScaleCTM(1, -1);
context.TranslateCTM(0, -sizeNewImage.Height);
This changes the drawing orientation to the more common top-left to bottom-right, but then you need to move the origin as well, hence the TranslateCTM.
Hopefully, it saves someone some time.
Cheers
