I face some problems when dealing with image cropping. I am aware of two possible approaches: UIGraphicsBeginImageContextWithOptions combined with drawAtPoint:blendMode:, and CGImageCreateWithImageInRect. Both work, but each has a serious disadvantage: the first takes a lot of time (roughly 7 seconds in my case) and memory (I get a memory warning) when cropping an image taken with the iPhone camera; the second produces a rotated image, so you need a bunch of extra code to work around that behavior, which I would rather avoid. What I want to know is how, for instance, Apple's built-in edit function in the Photos app works, or Aviary, or any other photo editor. Consider Apple's editor (iOS 8): you can rotate the image, change the cropping rectangle, and they even blur everything outside the cropping rect, yet applying the crop takes at most about 8 MB of memory and happens immediately. How do they do this?
The only thought I have is that they use the GPU (Aviary, for instance). So, combining all these questions into one: how can I use OpenGL to make cropping a less painful operation? I have never worked with it, so any tutorials, links, and sources are welcome. Thank you in advance.
As already mentioned, this should most likely not be done with OpenGL, but even if it were...
The problem most people have is computing the rectangle in which the image should be drawn, and the solution looks something like this:
- (CGRect)fillSizeForSource:(CGRect)source target:(CGRect)target
{
    if (source.size.width/source.size.height > target.size.width/target.size.height)
    {
        // keep target height and make the width larger
        CGSize newSize = CGSizeMake(target.size.height * (source.size.width/source.size.height), target.size.height);
        return CGRectMake((target.size.width-newSize.width)*.5f, .0f, newSize.width, newSize.height);
    }
    else
    {
        // keep target width and make the height larger
        CGSize newSize = CGSizeMake(target.size.width, target.size.width * (source.size.height/source.size.width));
        return CGRectMake(.0f, (target.size.height-newSize.height)*.5f, newSize.width, newSize.height);
    }
}
- (CGRect)fitSizeForSource:(CGRect)source target:(CGRect)target
{
    if (source.size.width/source.size.height < target.size.width/target.size.height)
    {
        // keep target height and make the width smaller
        CGSize newSize = CGSizeMake(target.size.height * (source.size.width/source.size.height), target.size.height);
        return CGRectMake((target.size.width-newSize.width)*.5f, .0f, newSize.width, newSize.height);
    }
    else
    {
        // keep target width and make the height smaller
        CGSize newSize = CGSizeMake(target.size.width, target.size.width * (source.size.height/source.size.width));
        return CGRectMake(.0f, (target.size.height-newSize.height)*.5f, newSize.width, newSize.height);
    }
}
I did not test this.
Or, since you are on iOS, simply create an image view with the desired size, assign the image to it, and then render the image view into a new image.
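For example, here is a rough Swift sketch of that image-view approach (my own, not from the original answer), assuming you already have a UIImage called image and a CGSize called targetSize:

let imageView = UIImageView(frame: CGRect(origin: .zero, size: targetSize))
imageView.contentMode = .scaleAspectFill   // fill the target, cropping the overflow
imageView.clipsToBounds = true
imageView.image = image

// Render the image view's layer into a new image of the target size
let renderer = UIGraphicsImageRenderer(size: targetSize)
let cropped = renderer.image { context in
    imageView.layer.render(in: context.cgContext)
}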
I'm trying to remove the top part of an image by cropping, but the result is unexpected.
The code used:
extension UIImage {
    class func removeStatusbarFromScreenshot(_ screenshot: UIImage) -> UIImage {
        let statusBarHeight = 44.0
        let newHeight = screenshot.size.height - statusBarHeight
        let newSize = CGSize(width: screenshot.size.width, height: newHeight)
        let newOrigin = CGPoint(x: 0, y: statusBarHeight)
        let imageRef: CGImage = screenshot.cgImage!.cropping(to: CGRect(origin: newOrigin, size: newSize))!
        let cropped: UIImage = UIImage(cgImage: imageRef)
        return cropped
    }
}
My logic is that I need to make the image smaller in height by 44 px and move the origin y down by 44 px, but it ends up only creating a much smaller image of the top-left corner.
The only way I can get it to work as expected is by multiplying the width by 2 and the height by 2.5 in newSize, but that also doubles the size of the image produced,
which doesn't make much sense anyway. Can someone help me make it work without magic values?
There are two main problems with what you're doing:
A UIImage has a scale (usually tied to the resolution of your device's screen), but a CGImage does not.
Different devices have different "status bar" heights. In general, what you want to cut off from the top is not the status bar but the safe area. The top of the safe area is where your content starts.
Because of this:
You are wrong to talk about 44 px. There are no pixels here. Pixels are physical atomic illuminations on your screen. In code, there are points. Points are independent of the scale (and the scale is the multiplier between points and pixels).
You are wrong to talk about the number 44 itself as if it were hard-coded. You should get the top of the safe area instead.
By crossing into the CGImage world without taking scale into account, you lose the scale information, because CGImage knows nothing of scale.
By crossing back into the UIImage world without taking scale into account, you end up with a UIImage whose scale is 1, which may not be the scale of the original UIImage.
The simplest solution is not to do any of what you are doing. First, get the height of the safe area; call it h. Then just draw the snapshot image into a graphics image context that is the same scale as your image (which, if you play your cards right, it will be automatically), but is h points shorter than the height of your image — and draw it with its y origin at -h, thus cutting off the safe area. Extract the resulting image and you're all set.
Example! This code runs in a view controller. First, I'll take a screenshot of my own device's current screen (this view controller's view) as my app runs:
let renderer = UIGraphicsImageRenderer(size: view.bounds.size)
let screenshot = renderer.image { context in
    view.layer.render(in: context.cgContext)
}
Now, I'll cut the safe area off the top of that screenshot:
let h = view.safeAreaInsets.top
let size = screenshot.size
let r = UIGraphicsImageRenderer(
    size: .init(width: size.width, height: size.height - h)
)
let result = r.image { _ in
    screenshot.draw(at: .init(x: 0, y: -h))
}
Experimentation will confirm that this works perfectly on every device, regardless of whether it has a bezel and regardless of its screen resolution: the top of the resulting image, result, is the top of your actual content.
I'm testing a rotation of an image in my Swift project using unit tests.
I'm getting different results from pngData() when I test on iOS 13.7 than when I test on iOS 11.2.
Also strange, and I think related (it is also my real problem):
On iOS 13.7, comparing the two images, the static image and the rotated image, shows that they have the same data size.
On iOS 11.2, my static image changed its data size by X amount and my rotated image changed its data size by Y amount; now they have different data sizes and my test fails.
The rotate function:
func cld_rotate(_ degree: Float) -> UIImage? {
    var newSize = CGRect(origin: CGPoint.zero, size: self.size).applying(CGAffineTransform(rotationAngle: cld_radians(from: Double(degree)))).size
    // Trim off the extremely small float value to prevent core graphics from rounding it up
    newSize.width = floor(newSize.width)
    newSize.height = floor(newSize.height)
    UIGraphicsBeginImageContextWithOptions(newSize, false, self.scale)
    let context = UIGraphicsGetCurrentContext()!
    // Move origin to middle
    context.translateBy(x: newSize.width/2, y: newSize.height/2)
    // Rotate around middle
    context.rotate(by: cld_radians(from: Double(degree)))
    // Draw the image at its center
    draw(in: CGRect(x: -self.size.width/2, y: -self.size.height/2, width: self.size.width, height: self.size.height))
    let newImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return newImage
}
Any idea why pngData() produces different output on the two iOS versions? And why does the rotated image change its data size by a different amount than the static one?
PNG images are compressed. Rotating the image will likely require that it be re-compressed, which may yield different output. I would expect the filesize to change slightly, since compression algorithms can yield different file sizes based on different input data. (I originally stated that PNG images used lossy compression, but I was mistaken.)
To me the question is how iOS 13.7 preserves the file size on rotation. I wonder if it is able to recognize a 90-degree rotation and transform the compressed image data somehow, where iOS 11.2 isn't able to do that? (My guess is that the image compression/decompression algorithm got smarter between iOS 11.2 and iOS 13.7, and it can now recognize a 90-degree rotation and operate on the data without having to decompress and re-compress the image.)
I'm not sure what you are saying about a static image. Are you saying that you open the PNG image into a UIImage and then export it back to a PNG without transforming it?
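If the goal of the test is only to check that the rotation produced the expected pixels, one option (my own sketch, not part of the answer above) is to compare decoded pixel data instead of the compressed pngData() output, which sidesteps encoder differences between iOS versions:

import UIKit

// Renders a UIImage into a fixed-format RGBA bitmap and returns the raw bytes,
// so two images can be compared without involving the PNG encoder at all.
func rgbaBytes(of image: UIImage) -> Data? {
    guard let cgImage = image.cgImage else { return nil }
    let width = cgImage.width
    let height = cgImage.height
    let bytesPerRow = width * 4
    var data = Data(count: bytesPerRow * height)
    let drew: Bool = data.withUnsafeMutableBytes { (buffer: UnsafeMutableRawBufferPointer) -> Bool in
        guard let context = CGContext(data: buffer.baseAddress,
                                      width: width,
                                      height: height,
                                      bitsPerComponent: 8,
                                      bytesPerRow: bytesPerRow,
                                      space: CGColorSpaceCreateDeviceRGB(),
                                      bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue) else { return false }
        context.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height))
        return true
    }
    return drew ? data : nil
}

// In a test: XCTAssertEqual(rgbaBytes(of: expectedImage), rgbaBytes(of: rotatedImage))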
I'm working with a CIImage, and while I understand it's not a linear image, it does hold some data.
My question is whether a CIImage's extent property is in pixels or points. The documentation, which says very little, calls it working-space coordinates. Does this mean there's no way to get the pixels / points from a CIImage, and that I must convert to a UIImage and use its .size property to get the points?
I have a UIImage with a certain size, and when I create a CIImage using the UIImage, the extent is shown in points. But if I run a CIImage through a CIFilter that scales it, I sometimes get the extent returned in pixel values.
I'll answer the best I can.
If your source is a UIImage, its size will be the same as the extent. But note, this isn't a UIImageView (whose size is in points). And we're just talking about the source image.
Running something through a CIFilter means you are manipulating things. If all you are doing is manipulating color, its size/extent shouldn't change (the same as creating your own CIColorKernel - it works pixel-by-pixel).
But, depending on the CIFilter, you may well be changing the size/extent. Certain filters create a mask or a tile; these may actually have an extent that is infinite! Others (blurs are a great example) sample surrounding pixels, so their extent actually increases because they use "pixels" beyond the source image's size. (The custom equivalent of these is a CIWarpKernel.)
Yes, the extent can vary quite a bit. Taking this to a bottom line:
What is the filter doing? Does it need to simply check a pixel's RGB and do something? Then the UIImage size should be the output CIImage extent.
Does the filter produce something that depends on the pixel's surrounding pixels? Then the output CIImage extent is slightly larger. How much may depend on the filter.
There are filters that produce something with no regard to an input. Most of these may have no true extent, as they can be infinite.
Points are what UIKit and CoreGraphics always work with. Pixels? At some point CoreImage does work in pixels, but it's low-level enough that (unless you want to write your own kernel) you shouldn't care. Extents can usually - but keep in mind the above - be equated to a UIImage size.
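To make that concrete, here is a quick check of my own (not from the original answer), assuming some UIImage named uiImage: a blur grows the extent, while a pure color filter leaves it alone.

import CoreImage
import UIKit

let input = CIImage(image: uiImage)!
print(input.extent)              // the input's extent, e.g. (0, 0, 1000, 800)

let blur = CIFilter(name: "CIGaussianBlur")!
blur.setValue(input, forKey: kCIInputImageKey)
blur.setValue(10.0, forKey: kCIInputRadiusKey)
print(blur.outputImage!.extent)  // larger than the input extent (samples surrounding pixels)

let mono = CIFilter(name: "CIPhotoEffectMono")!
mono.setValue(input, forKey: kCIInputImageKey)
print(mono.outputImage!.extent)  // same as the input extent (pixel-by-pixel color work)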
EDIT
Many images (particularly RAW ones) can be so large as to affect performance. I have an extension for UIImage that resizes an image to fit within a bounding square, which helps maintain consistent CoreImage performance.
extension UIImage {
    public func resizeToBoundingSquare(_ boundingSquareSideLength: CGFloat) -> UIImage {
        let imgScale = self.size.width > self.size.height ? boundingSquareSideLength / self.size.width : boundingSquareSideLength / self.size.height
        let newWidth = self.size.width * imgScale
        let newHeight = self.size.height * imgScale
        let newSize = CGSize(width: newWidth, height: newHeight)
        UIGraphicsBeginImageContext(newSize)
        self.draw(in: CGRect(x: 0, y: 0, width: newWidth, height: newHeight))
        let resizedImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return resizedImage!
    }
}
Usage:
image = image.resizeToBoundingSquare(640)
In this example, an image of size 3200x2000 would be reduced to 640x400, and an image of size 320x200 would be enlarged to 640x400. I do this to an image before rendering it and before creating a CIImage to use in a CIFilter.
I suggest you think of them as points. There is no scale and no screen (a CIImage is not something that is drawn), so there are no pixels.
A UIImage backed by a CGImage is the basis for drawing, and in addition to the CGImage it has a scale; together with the screen resolution, that gives us our translation from points to pixels.
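A tiny illustration of that relationship (my own sketch, assuming an asset named "photo" shipped at @2x):

let ui = UIImage(named: "photo")!
print(ui.size)              // points, e.g. (100.0, 50.0)
print(ui.scale)             // e.g. 2.0
print(ui.cgImage!.width)    // pixels, e.g. 200, i.e. size.width * scale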
Let's say you have an original image that is
200 high, 100 wide
Let's say you want to draw only a square of it. Let's say, just the bottom square.
Let's say you want to draw it on to a new small image that is
20 high, 20 wide
Of course, you simply do this:
CGRect imageRect = CGRectMake(0, -20, 20, 40);
.. begin graphics context ..
[originalImage drawInRect:imageRect];
With drawInRect:, you supply a rectangle with the same full shape (same proportions) as the original image, but expressed in the size of the new canvas. No problem.
BUT:
in the example, you are drawing THE WHOLE ORIGINAL IMAGE -- THE WHOLE 200 HEIGHT on to the new small square.
(Of course the "top half" misses the new canvas, and you only get the bottom half on the new canvas -- which is what you wanted.)
My impression is that iOS renders or calculates the "whole" original image and only "puts" the bottom half (in the example) onto the new canvas.
This seems very wasteful.
IS THERE A FASTER WAY TO DO THIS?
It seems like there should be a command, something like this:
drawThisPartOfTheOriginalImage: (0,100 to 100,200)
ontoThisPartOfTheNewCanvas: (0,20 to 20,20)
What's the situation? Is there a more efficient command than drawInRect: when you are only drawing a small part of the original image? Cheers
CGContextClipToRect approach...(doesn't work!)
I experimented with CGContextClipToRect as Peter suggested below.
CGContextClipToRect indeed sets the area you will draw to on your "result" canvas. I simply set it to the size of that result canvas (it would be 20x20 in the example above). To repeat, the aim here is to have iOS save time by not pointlessly drawing the, err, not-drawn part of the original.
This example is for an original image of 2000x2000 drawn onto a 500x500 result (i.e., only drawing the top-left quarter of the original onto the result).
In fact, notice that it is slightly slower when you include the CGContextClipToRect, again suggesting iOS "knows when to stop" anyway.
// no need to "overdraw"... quickener turned OFF
//CGContextRef c = UIGraphicsGetCurrentContext();
//CGContextClipToRect(c, CGRectMake(0, 0, resultSize.width,resultSize.height));
//Execution Time .................................. 0.443669
// no need to "overdraw"... quickener turned ON
CGContextRef c = UIGraphicsGetCurrentContext();
CGContextClipToRect(c, CGRectMake(0, 0, resultSize.width,resultSize.height));
//Execution Time .................................. 0.461845
As you can see it's a hair slower, actually, adding the CGContextClipToRect trick.
For the record, here is the exact routine used to crop an image:
- (UIImage *)simplishTopCrop:(UIImage *)fromImage
{
    // check for zero fromImage.size.width etc etc
    CGSize resultSize = CGSizeMake(640, 640);
    CGFloat scale = MAX(
        resultSize.width / fromImage.size.width,
        resultSize.height / fromImage.size.height);
    CGFloat width = fromImage.size.width * scale;
    CGFloat height = fromImage.size.height * scale;
    CGRect imageRect = CGRectMake(0, 0, width, height);
    UIGraphicsBeginImageContextWithOptions(resultSize, NO, 0);
    // INSERT 'CGContextClipToRect' TRICK ABOVE, RIGHT HERE
    [fromImage drawInRect:imageRect];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
This is where clipping comes in. Clip to your dirty rect, then draw the whole image into your bounds. The clipping path will keep the rest of the image at least from appearing, and hopefully from being composited or sampled at all.
If your profiling in Instruments finds that that is not efficient enough, you might try cropping the image itself, using CGImageCreateWithImageInRect, and then drawing that image into your dirty rect. You may want to keep your cropped image around and only throw it away when the rect changes. One way or the other, cropping the image may be more efficient—but don't forget to profile both before and after to prove that.
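For reference, here is a rough Swift sketch (mine, not part of the answer above) of that "crop first, then draw" idea, with fromImage and resultSize playing the same roles as in the routine above; note that CGImage cropping works in pixel coordinates, so the crop rect is scaled by the image's scale:

func simplishTopCropByCropping(_ fromImage: UIImage, to resultSize: CGSize) -> UIImage? {
    guard let cgImage = fromImage.cgImage else { return nil }
    // The region of the source (in points) that will actually land on the result canvas.
    let scale = max(resultSize.width / fromImage.size.width,
                    resultSize.height / fromImage.size.height)
    let visibleSource = CGRect(x: 0, y: 0,
                               width: resultSize.width / scale,
                               height: resultSize.height / scale)
    // CGImage.cropping(to:) works in pixels, so convert using the UIImage's scale.
    // (Assumes the image's orientation is .up; otherwise the pixel rect would need adjusting.)
    let px = fromImage.scale
    let cropRect = CGRect(x: visibleSource.minX * px,
                          y: visibleSource.minY * px,
                          width: visibleSource.width * px,
                          height: visibleSource.height * px)
    guard let croppedCG = cgImage.cropping(to: cropRect) else { return nil }
    let cropped = UIImage(cgImage: croppedCG, scale: px, orientation: fromImage.imageOrientation)
    // Now only the cropped portion is drawn (and scaled) into the result-sized context.
    let renderer = UIGraphicsImageRenderer(size: resultSize)
    return renderer.image { _ in
        cropped.draw(in: CGRect(origin: .zero, size: resultSize))
    }
}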
I'm facing the following problem: I have several UIImages (not square) and I need to resize and crop them. I have read almost every question on Stack Overflow, but the results I get are not good; I mean the images produced have poor quality (they're blurry).
This is the scenario :
1) Original image size: width 208 pixels, height variable (from 50 to 2500)
2) Result image size: width 100 pixels, height max 200 pixels
That is what I've done so far to achieve this result :
..... // missing code
CGFloat height = (100 * image.size.height) / image.size.width;
self.thumbnail = [image resizedImage:CGSizeMake(100, height)
                interpolationQuality:kCGInterpolationHigh];
..... // missing code
The method that I use to resize the image can be found here; once the image is resized, I crop it using the following code:
CGRect croppedRect = CGRectMake(0, 0, self.thumbnail.size.width, 200);
CGImageRef tmp = CGImageCreateWithImageInRect([self.thumbnail CGImage], croppedRect);
self.thumbnail = [UIImage imageWithCGImage:tmp];
CGImageRelease(tmp);
Long story short, the image is resized and cropped, but the quality is really poor considering that the original image was of really good quality.
So the question is: how do I achieve this while keeping the image quality high?
If you target iOS 4 and later you should use ImageIO to resize images.
http://www.cocoabyss.com/coding-practice/uiimage-scaling-using-imageio/
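For completeness, here is a minimal sketch of the ImageIO thumbnailing approach that article describes (my own wording of it, not copied from the link); maxPixelSize caps the longer side, and the aspect ratio is preserved:

import ImageIO
import UIKit

func downsampledImage(from data: Data, maxPixelSize: CGFloat) -> UIImage? {
    guard let source = CGImageSourceCreateWithData(data as CFData, nil) else { return nil }
    let options: [CFString: Any] = [
        kCGImageSourceCreateThumbnailFromImageAlways: true,
        kCGImageSourceCreateThumbnailWithTransform: true,   // honor EXIF orientation
        kCGImageSourceThumbnailMaxPixelSize: maxPixelSize
    ]
    guard let thumb = CGImageSourceCreateThumbnailAtIndex(source, 0, options as CFDictionary) else { return nil }
    return UIImage(cgImage: thumb)
}

// Usage: scale down to at most 200 px on the longer side, then crop as needed
// let thumbnail = downsampledImage(from: imageData, maxPixelSize: 200)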