For taking a screenshot of a view, I am using this code:
    -(UIImage *)renderImageFromView:(UIView *)view
    {
        UIGraphicsBeginImageContextWithOptions(view.bounds.size, NO, 0);
        CGContextRef context = UIGraphicsGetCurrentContext();
        [view.layer renderInContext:context];
        UIImage *renderedImage = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        return renderedImage;
    }
Now suppose my view is bigger than the screen size. Say its rect relative to the screen is {-100, -100, screenWidth+100, screenHeight+100}, and I want to take a screenshot of this view.
I am currently using this code:
    -(UIImage *)renderImageFromView:(UIView *)view withRect:(CGRect)frame
    {
        CGRect rect = {-100, -100, screenWidth+100, screenHeight+100};
        UIGraphicsBeginImageContextWithOptions(rect.size, YES, 0);
        CGContextRef context = UIGraphicsGetCurrentContext();
        [view.layer renderInContext:context];
        UIImage *renderedImage = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        return renderedImage;
    }
But the issue: the final image contains a screenshot of the view from {0, 0, screenWidth+100, screenHeight+100}, but I was expecting it to be {-100, -100, screenWidth+100, screenHeight+100}.
Any solution?
You first have to move the view on screen by setting its origin to (0, 0); before doing that, store the original frame in a temporary variable so you can assign it back after taking the screenshot.
Then account for the offset in the capture size: if your view's x is -100, set x = 0 and add 100 to the width; apply the same to y and the height.
Now create the context with UIGraphicsBeginImageContextWithOptions(rect.size, YES, 0) and call renderInContext: as you are already doing correctly. And don't forget to set the original frame back on the view.
Hope it is helpful.
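A minimal Swift sketch of the approach described above (the function name is mine, and I assume the view's frame already reflects the off-screen position from the question):

    func snapshotWholeView(_ view: UIView) -> UIImage? {
        // Remember the original frame so it can be restored afterwards
        let originalFrame = view.frame
        // Move the view so its origin sits at (0, 0)
        view.frame = CGRect(origin: .zero, size: originalFrame.size)

        UIGraphicsBeginImageContextWithOptions(view.bounds.size, true, 0)
        guard let context = UIGraphicsGetCurrentContext() else {
            view.frame = originalFrame
            return nil
        }
        view.layer.render(in: context)
        let image = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()

        // Assign the original frame back, as described above
        view.frame = originalFrame
        return image
    }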
The problem in your case is that the context takes just a size, not a frame, so setting the origin of the rect to (-100, -100) has no effect. I believe the solution is to create a context that is 200 points bigger in both dimensions, and then tell the view to render itself at point (100, 100) of that context. To set the relative origin of where to draw, I used a transform on the view's layer.
Sorry for using Swift, but I believe you can easily rewrite it to ObjC:
    func renderImageFromView(view: UIView) -> UIImage? {
        let size = CGSize(width: view.bounds.size.width + 200, height: view.bounds.size.height + 200)
        UIGraphicsBeginImageContextWithOptions(size, true, UIScreen.main.scale)
        let context = UIGraphicsGetCurrentContext()!
        // Shift the layer by (100, 100) so the whole view lands inside the context
        view.layer.transform = CATransform3DMakeAffineTransform(CGAffineTransform.identity.translatedBy(x: 100, y: 100))
        view.layer.render(in: context)
        // Reset the transform once rendering is done
        view.layer.transform = CATransform3DMakeAffineTransform(CGAffineTransform.identity)
        let image = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return image
    }
Related
I want to create a new UIImage from another one which is rotated by 45° (at its bottom-left corner, clockwise). The space around the old image would be filled with white or so. In the image I uploaded, the old image would be the blue one, and the new image would be the actual image I linked, including the white parts.
I played a little bit in a playground with Swift, and here is my solution:
    func rotateImage(image: UIImage!, var rotationDegree: CGFloat) -> UIImage {
        // 540 degrees is the same as 180 degrees, that's why we calculate the modulo
        rotationDegree = rotationDegree % 360
        // If the degree is negative, convert it to the equivalent positive angle
        if rotationDegree < 0.0 {
            rotationDegree = 360 + rotationDegree
        }
        // Get the image size
        let size = image.size
        let width = size.width
        let height = size.height
        // Get the degree which we will use for the bounding-box calculation
        var calcDegree = rotationDegree
        if calcDegree > 90 {
            calcDegree = 90 - calcDegree % 90
        }
        // Calculate the new size
        let newWidth = width * CGFloat(cosf(Float(calcDegree.degreesToRadians))) + height * CGFloat(sinf(Float(calcDegree.degreesToRadians)))
        let newHeight = width * CGFloat(sinf(Float(calcDegree.degreesToRadians))) + height * CGFloat(cosf(Float(calcDegree.degreesToRadians)))
        let newSize = CGSize(width: newWidth, height: newHeight)
        // Create a context using the new size, make it opaque, use the screen scale
        UIGraphicsBeginImageContextWithOptions(newSize, true, UIScreen.mainScreen().scale)
        // Get the context variable
        let context = UIGraphicsGetCurrentContext()
        // Set the fill color to white (or any other color)
        // If no fill color is needed, set opaque to false when initializing the context
        CGContextSetFillColorWithColor(context, UIColor.whiteColor().CGColor)
        CGContextFillRect(context, CGRect(origin: CGPointZero, size: newSize))
        // Rotate the context around its center and draw the image
        CGContextTranslateCTM(context, newSize.width * 0.5, newSize.height * 0.5)
        CGContextRotateCTM(context, rotationDegree.degreesToRadians)
        CGContextTranslateCTM(context, newSize.width * -0.5, newSize.height * -0.5)
        image.drawAtPoint(CGPoint(x: (newSize.width - size.width) / 2.0, y: (newSize.height - size.height) / 2.0))
        // Get the image from the context
        let returnImage = UIGraphicsGetImageFromCurrentImageContext()
        // End the graphics context
        UIGraphicsEndImageContext()
        return returnImage
    }
Do not forget to include this extension:
    extension CGFloat {
        var degreesToRadians: CGFloat {
            return self * CGFloat(M_PI) / 180.0
        }
    }
I would recommend going through this answer to better understand how I calculated newSize after the image is rotated.
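As an illustrative example of that calculation: rotating a 100x200 image by 45° gives newWidth = 100 * cos(45°) + 200 * sin(45°) ≈ 212.1 and newHeight = 100 * sin(45°) + 200 * cos(45°) ≈ 212.1, which is exactly the bounding box of the tilted rectangle.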
If you just want to change the way an image is displayed, transform the image view that displays it.
If you really want a new rotated image, redraw the image in a transformed graphics context.
If you just want to rotate the UIImageView used to display the image, you could do this:
    #define DegreesToRadians(x) ((x) * M_PI / 180.0) // put this at the top of your file

    imageView.transform = CGAffineTransformMakeRotation(DegreesToRadians(45));
But if you want to rotate the actual image, do something like this:
    - (UIImage *)image:(UIImage *)image rotatedByDegrees:(CGFloat)degrees
    {
        // Calculate the size of the rotated view's containing box for our drawing space
        UIView *rotatedViewBox = [[UIView alloc] initWithFrame:CGRectMake(0, 0, image.size.width, image.size.height)];
        CGAffineTransform t = CGAffineTransformMakeRotation(DegreesToRadians(degrees));
        rotatedViewBox.transform = t;
        CGSize rotatedSize = rotatedViewBox.frame.size;
        // Create the bitmap context
        UIGraphicsBeginImageContext(rotatedSize);
        CGContextRef bitmap = UIGraphicsGetCurrentContext();
        // Move the origin to the middle of the image so we will rotate and scale around the center
        CGContextTranslateCTM(bitmap, rotatedSize.width / 2, rotatedSize.height / 2);
        // Rotate the image context
        CGContextRotateCTM(bitmap, DegreesToRadians(degrees));
        // Now, draw the rotated/scaled image into the context
        CGContextScaleCTM(bitmap, 1.0, -1.0);
        CGContextDrawImage(bitmap, CGRectMake(-image.size.width / 2, -image.size.height / 2, image.size.width, image.size.height), [image CGImage]);
        UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        return newImage;
    }
The above code was adapted from this answer by The Lion: https://stackoverflow.com/a/11667808/1757960
I have an image that does not have an alpha channel (I confirmed this in Finder's Get Info panel). Yet when I put it in a UIImageView, which is within a UIScrollView, and enable Show Blended Layers, the image is shown red, which indicates iOS is trying to apply transparency; that will be a hit on performance.
How can I fix this so the view shows green, meaning iOS knows everything in it is fully opaque?
I tried the following, but it did not remove the red color:

    self.imageView.opaque = YES;
    self.scrollView.opaque = YES;
By default, UIImage instances are rendered in a graphics context that includes an alpha channel. To avoid this, you need to generate another image using a new graphics context where opaque = YES.
    - (UIImage *)optimizedImageFromImage:(UIImage *)image
    {
        CGSize imageSize = image.size;
        // Pass YES for opaque so the resulting image has no alpha channel
        UIGraphicsBeginImageContextWithOptions(imageSize, YES, [UIScreen mainScreen].scale);
        [image drawInRect:CGRectMake(0, 0, imageSize.width, imageSize.height)];
        UIImage *optimizedImage = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        return optimizedImage;
    }
Swift 3.x / Xcode 9.x:
    func optimizedImage(from image: UIImage) -> UIImage {
        let imageSize: CGSize = image.size
        UIGraphicsBeginImageContextWithOptions(imageSize, true, UIScreen.main.scale)
        image.draw(in: CGRect(x: 0, y: 0, width: imageSize.width, height: imageSize.height))
        let optimizedImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return optimizedImage ?? UIImage()
    }
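If you want to verify in code (rather than only via Show Blended Layers) that the re-rendered image really has no usable alpha channel, you can inspect its CGImage's alpha info. A small Swift sketch; the helper name is mine:

    func imageIsOpaque(_ image: UIImage) -> Bool {
        guard let alphaInfo = image.cgImage?.alphaInfo else { return false }
        switch alphaInfo {
        case .none, .noneSkipLast, .noneSkipFirst:
            return true  // no alpha channel, or the alpha bits are ignored
        default:
            return false
        }
    }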
Is there a way in iOS to add a border to an image which is not a simple rectangle?
I have successfully tinted an image using the following code:
    - (UIImage *)imageWithTintColor:(UIColor *)tintColor
    {
        UIGraphicsBeginImageContextWithOptions(self.size, NO, self.scale);
        CGContextRef context = UIGraphicsGetCurrentContext();
        CGContextTranslateCTM(context, 0, self.size.height);
        CGContextScaleCTM(context, 1.0, -1.0);
        CGContextSetBlendMode(context, kCGBlendModeNormal);
        CGRect rect = CGRectMake(0, 0, self.size.width, self.size.height);
        CGContextClipToMask(context, rect, self.CGImage);
        [tintColor setFill];
        CGContextFillRect(context, rect);
        UIImage *tintedImage = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        return tintedImage;
    }
Let's say, for example, I wanted to add a blue border to this image. (Note: this is NOT an 'A' NSString, but an example UIImage object.)
When I alter the code above to [color setStroke] and CGContextStrokeRect(context, rect), the image just disappears.
I've already learned from SO that this is possible using Core Image + edge detection, but isn't there a "simple" Core Graphics way, similar to tinting an image?
Thank you!
-- EDIT --
Please note that I want to add the border to the image itself. I don't want to create the border effect through a UIImageView!
The border should match the shape of the image before the border is applied.
In this case: a blue outline for the outside + inside of the 'A'.
This is not a very satisfying method, I would say, but it works to some extent.
You can make use of adding a shadow to the layer. For this you need to strip off the white portion in the image, leaving the character surrounded by alpha.
I used the code below.
    UIImage *image = [UIImage imageNamed:@"image_name_here.png"];
    CALayer *imageLayer = [CALayer layer];
    imageLayer.frame = CGRectMake(0, 0, 200.0f, 200.0f);
    imageLayer.contents = (id)image.CGImage;
    imageLayer.position = self.view.layer.position;
    imageLayer.shadowColor = [UIColor whiteColor].CGColor;
    imageLayer.shadowOffset = CGSizeMake(0.0f, 0.0f);
    imageLayer.shadowOpacity = 1.0f;
    imageLayer.shadowRadius = 4.0f;
    [self.view.layer addSublayer:imageLayer];
And the result would be something like this.
I realize this was asked 2 years ago and is probably no longer relevant to you; however, I'd like to submit my solution in case anyone else stumbles upon this question while looking for an answer (like I just did).
One way to generate a border around an image is to tint the image to your border color (say, black), and then overlay a smaller copy of your image onto the middle of the tinted one.
I built my solution upon your imageWithTintColor: function, as an extension to UIImage in Swift 3:
    extension UIImage {
        func imageByApplyingBorder(ofSize borderSize: CGFloat, andColor borderColor: UIColor) -> UIImage {
            /*
             Get the scale of the smaller image.
             If borderSize is 10%, the smaller image should be 90% of its original size.
             */
            let scale: CGFloat = 1.0 - borderSize
            // Generate a tinted background image of the original size
            let backgroundImage = imageWithTintColor(borderColor)
            // Generate a smaller image of the given scale
            let smallerImage = imageByResizing(by: scale)
            UIGraphicsBeginImageContext(backgroundImage.size)
            // Draw the background image first, followed by the smaller image in the middle
            backgroundImage.draw(at: CGPoint(x: 0, y: 0))
            smallerImage.draw(at: CGPoint(
                x: (backgroundImage.size.width - smallerImage.size.width) / 2,
                y: (backgroundImage.size.height - smallerImage.size.height) / 2
            ))
            let borderedImage = UIGraphicsGetImageFromCurrentImageContext()!
            UIGraphicsEndImageContext()
            return borderedImage
        }

        func imageWithTintColor(_ color: UIColor) -> UIImage {
            UIGraphicsBeginImageContext(size)
            let context = UIGraphicsGetCurrentContext()!
            // Turn upside-down (later transformations turn the image back)
            context.translateBy(x: 0, y: size.height)
            context.scaleBy(x: 1.0, y: -1.0)
            context.setBlendMode(.normal)
            // Mask to the visible part of the image (turns the image right-side-up)
            context.clip(to: CGRect(x: 0, y: 0, width: size.width, height: size.height), mask: self.cgImage!)
            // Fill with the input color
            color.setFill()
            context.fill(CGRect(x: 0, y: 0, width: size.width, height: size.height))
            let tintedImage = UIGraphicsGetImageFromCurrentImageContext()!
            UIGraphicsEndImageContext()
            return tintedImage
        }

        func imageByResizing(by scale: CGFloat) -> UIImage {
            // Determine the new width and height
            let width = scale * size.width
            let height = scale * size.height
            // Draw a scaled-down image
            UIGraphicsBeginImageContextWithOptions(CGSize(width: width, height: height), false, 0.0)
            draw(in: CGRect(x: 0, y: 0, width: width, height: height))
            let newImage = UIGraphicsGetImageFromCurrentImageContext()!
            UIGraphicsEndImageContext()
            return newImage
        }
    }
Please note that the borderSize parameter of imageByApplyingBorder(ofSize:andColor:) is given as a percentage of the original image size. If your image is 100x100 px and borderSize = 0.1, then you will get an image of size 100x100 px with a 10x10 px internal border.
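For example, a hypothetical call site (the image name is made up):

    let original = UIImage(named: "letterA")!
    let bordered = original.imageByApplyingBorder(ofSize: 0.1, andColor: .blue)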
Here is an example image generated using the above function on a 1000x1000px circular center clip of one of the stock iOS Simulator photos:
Any suggestions for optimizations or other approaches are welcome.
You can use the code below to add a border to the UIImageView:

    [self.testImage.layer setBorderColor:[UIColor blueColor].CGColor];
    [self.testImage.layer setBorderWidth:5.0];
Try this:

    #import <QuartzCore/QuartzCore.h>

    [yourUIImageView.layer setBorderColor:[UIColor blueColor].CGColor];
    [yourUIImageView.layer setBorderWidth:6.0];
If someone is looking for an outside transparent border for a UIImageView or any other view, look at my solution here or here.
My current method of combining UIImages comes from this answer on SO, as well as this popular question on resizing a UIImage with aspect ratio. My current issue is the following:
I have a UIImage called pictureImage taken with the camera, which comes out at the standard dimensions 2448x3264. I also have a UIImageView called self.annotateView with a frame of 320x568, where the user can draw and annotate the picture. When I present pictureImage in a UIImageView, I set the content mode to Aspect Fill so that it takes up the whole iPhone screen. Of course, this means parts of pictureImage are cut off on both the left and right (in fact, 304 pixels on both sides), but this is intended.
My problem is, when I combine pictureImage and annotateView.image into a new image of dimensions 320x568, the combined image alters the original aspect ratio of annotateView.image by stretching it horizontally. This is strange, since the new dimensions are exactly annotateView.image's original dimensions.
Here is what the outcome looks like:
Before combining the UIImages
After combining the images
Note that the underlying picture is not stretched. However, annotateView.image is stretched only horizontally, not vertically.
Here is my code for merging the UIImages:
    // Note: self.firstTakenImage is set to 320x568
    CGSize newSize = CGSizeMake(self.firstTakenImage.frame.size.width, self.firstTakenImage.frame.size.height);
    UIGraphicsBeginImageContext(newSize);
    [self.firstTakenImage.image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    [self.drawView.image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage *combinedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
While doing the drawInRect:, you have to redo the scaling and centering that the system did for you with Aspect Fill so it matches the original process. Something like this:
    CGSize fullSize = self.pictureView.image.size;
    CGSize newSize = self.outputView.frame.size;
    CGFloat scale = newSize.height/fullSize.height;
    CGFloat offset = (newSize.width - fullSize.width*scale)/2;
    CGRect offsetRect = CGRectMake(offset, 0, newSize.width-offset*2, newSize.height);
    NSLog(@"offset = %@", NSStringFromCGRect(offsetRect));
    UIGraphicsBeginImageContext(newSize);
    [self.pictureView.image drawInRect:offsetRect];
    [self.annotateView.image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage *combImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    self.outputView.image = combImage;
This supports both portrait and landscape images. Drawings and other subviews can be merged; in my case I'm adding a label to the drawing:
    {
        CGSize fullSize = getImageForEdit.size;
        CGSize sizeInView = AVMakeRectWithAspectRatioInsideRect(imgViewFake.image.size, imgViewFake.bounds).size;
        CGFloat orgScale = fullSize.width/sizeInView.width;
        CGSize newSize = CGSizeMake(orgScale * img.image.size.width, orgScale * img.image.size.height);
        if (newSize.width <= fullSize.width && newSize.height <= fullSize.height) {
            newSize = fullSize;
        }
        CGRect offsetRect;
        if (getImageForEdit.size.height > getImageForEdit.size.width) {
            CGFloat scale = newSize.height/fullSize.height;
            CGFloat offset = (newSize.width - fullSize.width*scale)/2;
            offsetRect = CGRectMake(offset, 0, newSize.width-offset*2, newSize.height);
        }
        else {
            CGFloat scale = newSize.width/fullSize.width;
            CGFloat offset = (newSize.height - fullSize.height*scale)/2;
            offsetRect = CGRectMake(0, offset, newSize.width, newSize.height-offset*2);
        }
        UIGraphicsBeginImageContextWithOptions(newSize, NO, getImageForEdit.scale);
        [getImageForEdit drawAtPoint:offsetRect.origin];
        // [img.image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
        CGFloat oldScale = img.contentScaleFactor;
        img.contentScaleFactor = getImageForEdit.scale;
        [img drawViewHierarchyInRect:CGRectMake(0, 0, newSize.width, newSize.height) afterScreenUpdates:YES];
        img.contentScaleFactor = oldScale;
        UIImage *combImage = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        imageData = UIImageJPEGRepresentation(combImage, 1);
    }
When you call [self.drawView.image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];, you need to modify the frame to account for the difference in aspect ratio that you described (because you are chopping some of the image off on both sides).
That means modifying the x position and the width at which you draw the annotation image.
The modification is based on the difference between the two rects when scaled to the same height. You say this is 304, so you can initially set x to 304 and the width to newSize.width - 608 to test. But really the difference should be calculated...
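A Swift sketch of how that difference could be calculated instead of hard-coded; the variable names are assumptions based on the question:

    let imageSize = pictureImage.size                      // 2448 x 3264 in the question
    let viewSize = annotateView.bounds.size                // 320 x 568 in the question

    // Aspect Fill scales the portrait photo so its height fills the view
    let scale = viewSize.height / imageSize.height
    let scaledWidth = imageSize.width * scale              // ~426 pt for these sizes
    let insetPerSide = (scaledWidth - viewSize.width) / 2  // ~53 pt on screen

    // Expressed in original image pixels, this is the ~304 px mentioned above
    let insetInImagePixels = insetPerSide / scale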
Mackworth's answer in Swift 3.x:
    let fullSize: CGSize = img.size
    let newSize: CGSize = fullSize
    let scale: CGFloat = newSize.height/fullSize.height
    let offset: CGFloat = (newSize.width - fullSize.width*scale)/2
    let offsetRect: CGRect = CGRect(x: offset, y: 0, width: newSize.width - offset*2, height: newSize.height)
    print(NSStringFromCGRect(offsetRect))
    UIGraphicsBeginImageContext(newSize)
    self.pictureView.image.draw(in: offsetRect)
    // Draw the annotation over the full output size
    self.annotateView.image.draw(in: CGRect(x: 0, y: 0, width: newSize.width, height: newSize.height))
    let combImage: UIImage = UIGraphicsGetImageFromCurrentImageContext()!
    UIGraphicsEndImageContext()
    return combImage
In iOS, how can I crop a rectangular image to a square letterbox so that it maintains the original aspect ratio and the remaining space is filled with black? E.g. the "pad" strategy that Transloadit uses to crop/resize their images:
http://transloadit.com/docs/image-resize
For anyone who stumbles onto this question (and many more like it) without a clear answer: I have written a neat little category that accomplishes this at the model level by modifying the UIImage directly, rather than just modifying the view. Simply use this method, and the returned image will be letterboxed to a square shape, regardless of which side is longer.
    - (UIImage *)letterboxedImageIfNecessary
    {
        CGFloat width = self.size.width;
        CGFloat height = self.size.height;
        // No letterboxing needed, already a square
        if (width == height)
        {
            return self;
        }
        // Find the larger side
        CGFloat squareSize = MAX(width, height);
        UIGraphicsBeginImageContext(CGSizeMake(squareSize, squareSize));
        // Draw the black background
        CGContextRef context = UIGraphicsGetCurrentContext();
        CGContextSetRGBFillColor(context, 0.0, 0.0, 0.0, 1.0);
        CGContextFillRect(context, CGRectMake(0, 0, squareSize, squareSize));
        // Draw the image in the middle
        [self drawInRect:CGRectMake((squareSize - width) / 2, (squareSize - height) / 2, width, height)];
        UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        return newImage;
    }
Just for convenience, here's a Swift rewrite of @Dima's answer:
    import UIKit

    extension UIImage
    {
        func letterboxImage() -> UIImage
        {
            let width = self.size.width
            let height = self.size.height
            // No letterboxing needed, already a square
            if (width == height)
            {
                return self
            }
            // Find the larger side
            let squareSize = max(width, height)
            UIGraphicsBeginImageContext(CGSizeMake(squareSize, squareSize))
            // Draw the black background
            let context = UIGraphicsGetCurrentContext()
            CGContextSetRGBFillColor(context, 0.0, 0.0, 0.0, 1.0)
            CGContextFillRect(context, CGRectMake(0, 0, squareSize, squareSize))
            // Draw the image in the middle
            self.drawInRect(CGRectMake((squareSize - width) / 2, (squareSize - height) / 2, width, height))
            let newImage = UIGraphicsGetImageFromCurrentImageContext()
            UIGraphicsEndImageContext()
            return newImage
        }
    }
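Usage is then a one-liner (the image name here is just an example):

    let squared = UIImage(named: "photo")!.letterboxImage()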
You have to set the contentMode of the UIImageView to UIViewContentModeScaleAspectFit. You can also find this option for the UIImageView if you use a storyboard.
Then set the backgroundColor of the UIImageView to black (or another color of your choice).
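In code, that combination would look something like this (a minimal Swift sketch; imageView is assumed to be your UIImageView):

    imageView.contentMode = .scaleAspectFit
    imageView.backgroundColor = .black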