How to create a black UIImage? - ios

I've looked around everywhere, but I can't find a way to do this. I need to create a black UIImage of a certain width and height (The width and height change, so I can't just create a black box and then load it into a UIImage). Is there some way to make a CGRect and then convert it to a UIImage? Or is there some other way to make a simple black box?

Depending on your situation, you could probably just use a UIView with its backgroundColor set to [UIColor blackColor]. Also, if the image is solidly-colored, you don't need an image that's actually the dimensions you want to display it at; you can just scale a 1x1 pixel image to fill the necessary space (e.g., by setting the contentMode of a UIImageView to UIViewContentModeScaleToFill).
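For instance, a hedged sketch of that second idea (the sizes and the image view here are illustrative, not from the answer):
import UIKit

// A tiny solid-black image stretched to any size by the image view's contentMode.
let blackPixel = UIGraphicsImageRenderer(size: CGSize(width: 1, height: 1)).image { _ in
    UIColor.black.setFill()
    UIRectFill(CGRect(x: 0, y: 0, width: 1, height: 1))
}

let imageView = UIImageView(image: blackPixel)
imageView.frame = CGRect(x: 0, y: 0, width: 200, height: 120)
imageView.contentMode = .scaleToFill // stretches the 1x1 image to fill the view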
Having said that, it may be instructive to see how to actually generate such an image:
Objective-C
CGSize imageSize = CGSizeMake(64, 64);
UIColor *fillColor = [UIColor blackColor];
UIGraphicsBeginImageContextWithOptions(imageSize, YES, 0);
CGContextRef context = UIGraphicsGetCurrentContext();
[fillColor setFill];
CGContextFillRect(context, CGRectMake(0, 0, imageSize.width, imageSize.height));
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Swift
let imageSize = CGSize(width: 420, height: 120)
let color: UIColor = .black
UIGraphicsBeginImageContextWithOptions(imageSize, true, 0)
let context = UIGraphicsGetCurrentContext()!
color.setFill()
context.fill(CGRect(x: 0, y: 0, width: imageSize.width, height: imageSize.height))
let image: UIImage = UIGraphicsGetImageFromCurrentImageContext()!
UIGraphicsEndImageContext()

UIGraphicsBeginImageContextWithOptions(CGSizeMake(w,h), NO, 0);
UIBezierPath* p =
[UIBezierPath bezierPathWithRect:CGRectMake(0,0,w,h)];
[[UIColor blackColor] setFill];
[p fill];
UIImage* im = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Now im is the image.
That code comes almost unchanged from this section of my book: http://www.apeth.com/iOSBook/ch15.html#_graphics_contexts

Swift 3:
func uiImage(from color:UIColor?, size:CGSize) -> UIImage? {
UIGraphicsBeginImageContextWithOptions(size, true, 0)
defer {
UIGraphicsEndImageContext()
}
let context = UIGraphicsGetCurrentContext()
color?.setFill()
context?.fill(CGRect.init(x: 0, y: 0, width: size.width, height: size.height))
return UIGraphicsGetImageFromCurrentImageContext()
}

Like this
let image = UIGraphicsImageRenderer(size: bounds.size).image { _ in
UIColor.black.setFill()
UIRectFill(bounds)
}
As quoted in this WWDC vid
There's an older function, UIGraphicsBeginImageContext, but please don't use it.
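For reuse, a minimal sketch of the same renderer-based approach wrapped in a helper (the function name and the 64x64 size are my own, not from the answer):
import UIKit

// Hedged sketch: a reusable solid-color image helper built on UIGraphicsImageRenderer.
func solidImage(color: UIColor, size: CGSize) -> UIImage {
    return UIGraphicsImageRenderer(size: size).image { _ in
        color.setFill()
        UIRectFill(CGRect(origin: .zero, size: size))
    }
}

// Usage:
let blackImage = solidImage(color: .black, size: CGSize(width: 64, height: 64))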

Here is an example that creates a black UIImage that is 1920x1080 by creating it from a CGImage created from a CIImage:
let frame = CGRect(origin: CGPoint(x: 0, y: 0), size: CGSize(width: 1920, height: 1080))
let cgImage = CIContext().createCGImage(CIImage(color: .black), from: frame)!
let uiImage = UIImage(cgImage: cgImage)
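If you want to reuse this approach, here is a hedged sketch of a helper (the function name is mine; note that creating a CIContext is relatively expensive, so in real code you would normally keep one around):
import UIKit
import CoreImage

// Hedged sketch: a solid-color UIImage built via Core Image.
func ciSolidImage(color: CIColor, size: CGSize, context: CIContext = CIContext()) -> UIImage? {
    let frame = CGRect(origin: .zero, size: size)
    guard let cgImage = context.createCGImage(CIImage(color: color), from: frame) else { return nil }
    return UIImage(cgImage: cgImage)
}

// Usage:
let blackHD = ciSolidImage(color: .black, size: CGSize(width: 1920, height: 1080))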

Related

Fill UIImage with UIColor

I have a UIImage that I want to fill with UIColor
I've tried this code but the app crashes on the 10th row.
Here's the code:
extension UIImage {
func imageWithColor(_ color: UIColor) -> UIImage {
UIGraphicsBeginImageContextWithOptions(size, false, scale)
let context = UIGraphicsGetCurrentContext()
context?.translateBy(x: 0.0, y: size.height)
context?.scaleBy(x: 1.0, y: -1.0)
context?.setBlendMode(CGBlendMode.normal)
let rect = CGRect(origin: CGPoint.zero, size: size)
context?.clip(to: rect, mask: context as! CGImage)// crashes
color.setFill()
context?.fill(rect)
let newImage = UIGraphicsGetImageFromCurrentImageContext()
return newImage!
}
}
The problem is probably in context?.clip(to: rect, mask: context as! CGImage) (I think I shouldn't pass the context as the mask, but what should I pass? Both CGImage() and CGImage.self don't work).
You have to do it as follows:
extension UIImage {
func tinted(with color: UIColor) -> UIImage? {
defer { UIGraphicsEndImageContext() }
UIGraphicsBeginImageContextWithOptions(size, false, scale)
color.set()
withRenderingMode(.alwaysTemplate).draw(in: CGRect(origin: .zero, size: size))
return UIGraphicsGetImageFromCurrentImageContext()
}
}
You need to end the image context when you finish drawing:
UIGraphicsEndImageContext();
Or you could add a category method for UIImage:
- (UIImage *)imageByTintColor:(UIColor *)color
{
UIGraphicsBeginImageContextWithOptions(self.size, NO, self.scale);
CGRect rect = CGRectMake(0, 0, self.size.width, self.size.height);
[color set];
UIRectFill(rect);
[self drawAtPoint:CGPointMake(0, 0) blendMode:kCGBlendModeDestinationIn alpha:1];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return newImage;
}
Use it like this:
image = [image imageByTintColor:color];
The simplest way to do this:
theImageView.image? = (theImageView.image?.imageWithRenderingMode(.AlwaysTemplate))!
theImageView.tintColor = UIColor.magentaColor()
let image = UIImage(named: "whatever.png")?.imageWithRenderingMode(.alwaysTemplate)
When you set this image later to UIButton or UIImageView - just change tint color of that control, and image will be drawn using tint color you specified.
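For example, a hedged sketch in current Swift (the asset name, colors, and controls are illustrative):
import UIKit

// Template rendering plus tintColor: each control draws the image in its own tint color.
let template = UIImage(named: "whatever.png")?.withRenderingMode(.alwaysTemplate)

let imageView = UIImageView(image: template)
imageView.tintColor = .magenta   // drawn in magenta

let button = UIButton(type: .system)
button.setImage(template, for: .normal)
button.tintColor = .red          // drawn in red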

How to create an overlay color with a gradient in iOS

Background info: I have a UIImageView. I have added an overlay colour on its image in the following way:
UIGraphicsBeginImageContext(initialImage.size);
[initialImage drawInRect:CGRectMake(0, 0, initialImage.size.width, initialImage.size.height) blendMode:kCGBlendModeNormal alpha:alphaValue];
UIBezierPath * path = [UIBezierPath bezierPathWithRect:CGRectMake(0, 0, initialImage.size.width, initialImage.size.height)];
[overlayColor setFill];
[path fillWithBlendMode:kCGBlendModeMultiply alpha:1];
finalImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
[self setImage:finalImage];
I still want to add this as an overlay colour, but I want it to have a gradient. I have been trying to figure out a way to do this but haven't really been successful. I guess the approach of adding an overlay colour with a gradient is wrong? I'm not sure how to do this. I have tried adding a CAGradientLayer as a sublayer to the UIImageView, but it doesn't work.
I thought about adding a UIView, setting its backgroundColor to the overlayColor, and then adding a CAGradientLayer as a sublayer of that UIView, which would be added as a subview of the UIImageView, but we are not supposed to add subviews to UIImageViews.
Can someone please help me with this? Maybe I should change my approach?
Pointing me in the right direction will be great as well!
I look forward to your responses and apologies if this post hasn't been entirely clear!
Thanks in advance for your help!
Edit: Code for the CAGradientLayer
CAGradientLayer *gradient = [CAGradientLayer layer];
gradient.frame = self.frame;
UIColor *colorOne = [UIColor colorFromHex:self.feedColor withAlpha:(alphaValue * 0.7)];
UIColor *colorTwo = [UIColor colorFromHex:self.feedColor withAlpha:(alphaValue * 1.0)];
gradient.colors = [NSArray arrayWithObjects:(id)colorOne.CGColor, (id)colorTwo.CGColor, nil];
[self.layer insertSublayer:gradient atIndex:0];
I would just use Core Graphics in order to take your input image, apply a gradient overlay to it, and then pass it onto the UIImageView. Something like this should achieve the desired result:
- (UIImage *)imageWithGradientOverlay:(UIImage *)sourceImage color1:(UIColor *)color1 color2:(UIColor *)color2 gradPointA:(CGPoint)pointA gradPointB:(CGPoint)pointB {
CGSize size = sourceImage.size;
// Start context
UIGraphicsBeginImageContext(size);
CGContextRef c = UIGraphicsGetCurrentContext();
// Draw source image into context
CGContextDrawImage(c, (CGRect){CGPointZero, size}, sourceImage.CGImage);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGFloat gradLocs[] = {0, 1};
NSArray *colors = @[(id)color1.CGColor, (id)color2.CGColor];
// Create a simple linear gradient with the colors provided.
CGGradientRef grad = CGGradientCreateWithColors(colorSpace, (__bridge CFArrayRef)colors, gradLocs);
CGColorSpaceRelease(colorSpace);
// Draw gradient with multiply blend mode over the source image
CGContextSetBlendMode(c, kCGBlendModeMultiply);
CGContextDrawLinearGradient(c, grad, pointA, pointB, 0);
CGGradientRelease(grad);
// Grab resulting image from context
UIImage *resultImg = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return resultImg;
}
This is where sourceImage is the input image, color1 and color2 are your gradient colors and gradPointA and gradPointB are your linear gradient end points (In Core Graphics coordinate system, bottom left is (0,0)).
This way you save having to mess about with layers. If you're frequently re-drawing with different colors, then you may want to take an approach that uses layers.
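To make the coordinate remark concrete, here is a hedged Swift sketch (the helper name is mine) for choosing the end points of a top-to-bottom gradient, remembering that this context's origin is at the bottom-left:
import UIKit

// Hedged sketch: converting a UIKit-style point (origin at top-left) into the
// Core Graphics coordinate space used here (origin at bottom-left).
func cgPoint(fromUIKitPoint p: CGPoint, imageSize: CGSize) -> CGPoint {
    return CGPoint(x: p.x, y: imageSize.height - p.y)
}

// For a 300x200 image, a gradient that runs from the top edge down to the bottom edge:
let size = CGSize(width: 300, height: 200)
let gradPointA = cgPoint(fromUIKitPoint: CGPoint(x: 150, y: 0), imageSize: size)   // (150, 200): top edge
let gradPointB = cgPoint(fromUIKitPoint: CGPoint(x: 150, y: 200), imageSize: size) // (150, 0): bottom edge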
If you're looking for a more dynamic approach, then I would subclass CALayer instead of UIImageView. Therefore you'd want something like this:
@interface gradientImageLayer : CALayer
- (instancetype)initWithFrame:(CGRect)frame;
@end
@implementation gradientImageLayer
- (instancetype)initWithFrame:(CGRect)frame {
if (self = [super init]) {
self.opaque = YES; // Best for performance, but if you want the layer to have transparency, remove this.
UIImage *i = [UIImage imageNamed:@"foo2.png"]; // Replace with your image
self.frame = frame;
self.contentsScale = [UIScreen mainScreen].nativeScale;
self.contents = (__bridge id _Nullable)(i.CGImage);
// Your code for the CAGradientLayer was indeed correct.
CAGradientLayer *gradient = [CAGradientLayer layer];
gradient.frame = frame;
// Add whatever colors you want here.
UIColor *colorOne = [UIColor colorWithRed:1 green:1 blue:0 alpha:0.1];
UIColor *colorTwo = [UIColor colorWithRed:0 green:1 blue:1 alpha:0.2];
gradient.colors = @[(id)colorOne.CGColor, (id)colorTwo.CGColor]; // Literals read far nicer than a clunky [NSArray arrayWith.... ]
[self addSublayer:gradient];
}
return self;
}
@end
The downside to this approach is you are unable to apply different blend modes. The only solutions I've seen to applying a blend mode on a CALayer is through Core Graphics, but then you'd be better off with my original answer.
I'll post the collaborative solution in this question
This category allows you to add a Color Overlay or Gradient Color Overlay using any blend mode (multiply, normal, etc) and CoreGraphics
Swift 4:
extension UIImage {
//creates a static image with a color of the requested size
static func fromColor(color: UIColor, size: CGSize) -> UIImage {
let rect = CGRect(x: 0, y: 0, width: size.width, height: size.height)
UIGraphicsBeginImageContext(rect.size)
let context = UIGraphicsGetCurrentContext()
context?.setFillColor(color.cgColor)
context?.fill(rect)
let img = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
return img!
}
func blendWithColorAndRect(blendMode: CGBlendMode, color: UIColor, rect: CGRect) -> UIImage {
let imageColor = UIImage.fromColor(color: color, size:self.size)
let rectImage = CGRect(x: 0, y: 0, width: self.size.width, height: self.size.height)
UIGraphicsBeginImageContextWithOptions(self.size, true, 0)
let context = UIGraphicsGetCurrentContext()
// fill the background with white so that translucent colors get lighter
context!.setFillColor(UIColor.white.cgColor)
context!.fill(rectImage)
self.draw(in: rectImage, blendMode: .normal, alpha: 1)
imageColor.draw(in: rect, blendMode: blendMode, alpha: 0.8)
// grab the finished image and return it
let result = UIGraphicsGetImageFromCurrentImageContext()
//self.backgroundImageView.image = result
UIGraphicsEndImageContext()
return result!
}
//creates a static image with a gradient of colors of the requested size
static func fromGradient(colors: [UIColor], locations: [CGFloat], horizontal: Bool, size: CGSize) -> UIImage {
let rect = CGRect(x: 0, y: 0, width: size.width, height: size.height)
UIGraphicsBeginImageContext(rect.size)
let context = UIGraphicsGetCurrentContext()
let colorSpace = CGColorSpaceCreateDeviceRGB()
let cgColors = colors.map {$0.cgColor} as CFArray
let grad = CGGradient(colorsSpace: colorSpace, colors: cgColors , locations: locations)
let startPoint = CGPoint(x: 0, y: 0)
let endPoint = horizontal ? CGPoint(x: size.width, y: 0) : CGPoint(x: 0, y: size.height)
context?.drawLinearGradient(grad!, start: startPoint, end: endPoint, options: .drawsAfterEndLocation)
let img = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
return img!
}
func blendWithGradientAndRect(blendMode: CGBlendMode, colors: [UIColor], locations: [CGFloat], horizontal: Bool = false, alpha: CGFloat = 1.0, rect: CGRect) -> UIImage {
let imageColor = UIImage.fromGradient(colors: colors, locations: locations, horizontal: horizontal, size: size)
let rectImage = CGRect(x: 0, y: 0, width: self.size.width, height: self.size.height)
UIGraphicsBeginImageContextWithOptions(self.size, true, 0)
let context = UIGraphicsGetCurrentContext()
// fill the background with white so that translucent colors get lighter
context!.setFillColor(UIColor.white.cgColor)
context!.fill(rectImage)
self.draw(in: rectImage, blendMode: .normal, alpha: 1)
imageColor.draw(in: rect, blendMode: blendMode, alpha: alpha)
// grab the finished image and return it
let result = UIGraphicsGetImageFromCurrentImageContext()
//self.backgroundImageView.image = result
UIGraphicsEndImageContext()
return result!
}
}
Example Gradient:
let newImage = image.blendWithGradientAndRect(blendMode: .multiply,
colors: [.red, .white],
locations: [0, 1],
horizontal: true,
alpha: 0.8,
rect: imageRect)
Example Single Color:
let newImage = image.blendWithColorAndRect(blendMode: .multiply, color: .red, rect: imageRect)

How can I change image tintColor

I'm receiving image from a server, then based on a color chosen by the user, the image color will be changed.
I tried the following :
_sketchImageView.image = [_sketchImageView.image imageWithRenderingMode:UIImageRenderingModeAlwaysTemplate];
[_sketchImageView setTintColor:color];
I got the opposite of my goal (the white color outside the UIImage is colored with the chosen color).
What is going wrong?
I need to do the same as in this question; the provided solution doesn't solve my case.
How can I change image tintColor in iOS and WatchKit
Try generating a new image yourself:
UIImage *newImage = [_sketchImageView.image imageWithRenderingMode:UIImageRenderingModeAlwaysTemplate];
UIGraphicsBeginImageContextWithOptions(newImage.size, NO, newImage.scale);
[yourTintColor set];
[newImage drawInRect:CGRectMake(0, 0, newImage.size.width, newImage.size.height)];
newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
_sketchImageView.image = newImage;
And use it.
Good luck
======= UPDATE =======
This solution simply recolors every pixel of the image.
Example: we have a book image: http://pngimg.com/upload/book_PNG2113.png
After running the above code (with a red tint color, for example), the whole non-transparent area is drawn in red.
So how the tinted image looks depends on how the original image was designed.
In Swift you can use this extension: [Based on @VietHung's Objective-C solution]
Swift 5:
extension UIImage {
func imageWithColor(color: UIColor) -> UIImage? {
var image = withRenderingMode(.alwaysTemplate)
UIGraphicsBeginImageContextWithOptions(size, false, scale)
color.set()
image.draw(in: CGRect(x: 0, y: 0, width: size.width, height: size.height))
image = UIGraphicsGetImageFromCurrentImageContext()!
UIGraphicsEndImageContext()
return image
}
}
Previous Swift version:
extension UIImage {
func imageWithColor(color: UIColor) -> UIImage? {
var image = imageWithRenderingMode(.AlwaysTemplate)
UIGraphicsBeginImageContextWithOptions(size, false, scale)
color.set()
image.drawInRect(CGRect(x: 0, y: 0, width: size.width, height: size.height))
image = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
return image
}
}
In Swift 2.0 you can use this:
let image = UIImage(named:"your image name")?.imageWithRenderingMode(.AlwaysTemplate)
yourimageView.tintColor = UIColor.redColor()
yourimageView.image = image
In Swift 3.0 you can use this:
let image = UIImage(named:"your image name")?.withRenderingMode(.alwaysTemplate)
yourimageView.tintColor = UIColor.red
yourimageView.image = image
Try something like this
UIImage *originalImage = _sketchImageView.image;
UIImage *newImage = [originalImage imageWithRenderingMode:UIImageRenderingModeAlwaysTemplate];
UIImageView *imageView = [[UIImageView alloc] initWithFrame:CGRectMake(0,0,50,50)]; // your image size
imageView.tintColor = [UIColor redColor]; // or whatever color that has been selected
imageView.image = newImage;
_sketchImageView.image = imageView.image;
Hope this helps.
In Swift 3.0 you can use this extension: [Based on @VietHung's Objective-C solution]
extension UIImage {
func imageWithColor(_ color: UIColor) -> UIImage? {
var image = imageWithRenderingMode(.alwaysTemplate)
UIGraphicsBeginImageContextWithOptions(size, false, scale)
color.set()
image.draw(in: CGRect(x: 0, y: 0, width: size.width, height: size.height))
image = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
return image
}
}
For Swift 3.0, I made a custom subclass of UIImageView called TintedUIImageView. Now the image uses whatever tint color is set in Interface Builder or code:
class TintedUIImageView: UIImageView {
override func awakeFromNib() {
super.awakeFromNib()
if let image = self.image {
self.image = image.withRenderingMode(.alwaysTemplate)
}
}
}
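A hedged usage sketch (the view controller and outlet names are illustrative): set the image view's class to TintedUIImageView in the storyboard, then change its tint color whenever you like.
import UIKit

class IconViewController: UIViewController {
    // Assumes the image view's class is set to TintedUIImageView in Interface Builder.
    @IBOutlet weak var iconView: TintedUIImageView!

    override func viewDidLoad() {
        super.viewDidLoad()
        iconView.tintColor = .red // the templated image is now drawn in red
    }
}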
You can try:
_sketchImageView.image = [self imageNamed:@"imageName" withColor:[UIColor blackColor]];
- (UIImage *)imageNamed:(NSString *)name withColor:(UIColor *)color
{
// load the image
//NSString *name = @"badge.png";
UIImage *img = [UIImage imageNamed:name];
// begin a new image context, to draw our colored image onto
UIGraphicsBeginImageContext(img.size);
// get a reference to that context we created
CGContextRef context = UIGraphicsGetCurrentContext();
// set the fill color
[color setFill];
// translate/flip the graphics context (for transforming from CG* coords to UI* coords
CGContextTranslateCTM(context, 0, img.size.height);
CGContextScaleCTM(context, 1.0, -1.0);
// set the blend mode to color burn, and the original image
CGContextSetBlendMode(context, kCGBlendModeColorBurn);
CGRect rect = CGRectMake(0, 0, img.size.width, img.size.height);
CGContextDrawImage(context, rect, img.CGImage);
// set a mask that matches the shape of the image, then draw (color burn) a colored rectangle
CGContextClipToMask(context, rect, img.CGImage);
CGContextAddRect(context, rect);
CGContextDrawPath(context,kCGPathFill);
// generate a new UIImage from the graphics context we drew onto
UIImage *coloredImg = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
//return the color-burned image
return coloredImg;
}
Try setting the tint color on the superview of the image view. E.g. [self.view setTintColor:color];
in Swift 4 you can simply make an extension like that:
import UIKit
extension UIImageView {
func tintImageColor(color: UIColor) {
guard let image = image else { return }
self.image = image.withRenderingMode(UIImageRenderingMode.alwaysTemplate)
self.tintColor = color
}
}
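A hedged usage sketch (the image name is illustrative):
import UIKit

let avatarView = UIImageView(image: UIImage(named: "avatar"))
avatarView.tintImageColor(color: .red) // re-renders the image as a template tinted red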
- SWIFT 4
extension UIImage {
func imageWithColor(_ color: UIColor) -> UIImage? {
var image: UIImage? = withRenderingMode(.alwaysTemplate)
UIGraphicsBeginImageContextWithOptions(size, false, scale)
color.set()
image?.draw(in: CGRect(x: 0, y: 0, width: size.width, height: size.height))
image = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
return image
}
}
Here's how I apply and use tints in iOS 9 with Swift.
//apply a color to an image
//ref - http://stackoverflow.com/questions/28427935/how-can-i-change-image-tintcolor
//ref - https://www.captechconsulting.com/blogs/ios-7-tutorial-series-tint-color-and-easy-app-theming
func getTintedImage() -> UIImageView {
var image : UIImage;
var imageView : UIImageView;
image = UIImage(named: "someAsset")!;
let size : CGSize = image.size;
let frame : CGRect = CGRectMake((UIScreen.mainScreen().bounds.width-86)/2, 600, size.width, size.height);
let redCover : UIView = UIView(frame: frame);
redCover.backgroundColor = UIColor.redColor();
redCover.layer.opacity = 0.75;
imageView = UIImageView();
imageView.image = image.imageWithRenderingMode(UIImageRenderingMode.Automatic);
imageView.addSubview(redCover);
return imageView;
}
One thing you can do is just add your images to the asset catalog in Xcode and set their Render As option to Template Image; then, whenever you change the tint color of the UIImageView, the image picks it up automatically.
Check this link out -> https://krakendev.io/blog/4-xcode-asset-catalog-secrets-you-need-to-know
let image = UIImage(named: "i m a g e n a m e")?.withRenderingMode(.alwaysTemplate)
imageView.tintColor = UIColor.white // Change to require color
imageView.image = image
Try this
iOS 13 and above
UIImage *image = [UIImage imageNamed:@"placeHolderIcon"];
image = [image imageWithTintColor:[UIColor whiteColor] renderingMode:UIImageRenderingModeAlwaysTemplate];
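The Swift equivalent is a one-liner; a hedged sketch (the image view is illustrative, and note the method returns a new image, so the result must be assigned):
import UIKit

// iOS 13+: bake the tint color into a new image.
let icon = UIImage(named: "placeHolderIcon")?
    .withTintColor(.white, renderingMode: .alwaysOriginal)

let imageView = UIImageView()
imageView.image = icon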

How to scale down a UIImage and make it crispy / sharp at the same time instead of blurry?

I need to scale down an image, but in a sharp way. In Photoshop for example there are the image size reduction options "Bicubic Smoother" (blurry) and "Bicubic Sharper".
Is this image downscaling algorithm open sourced or documented somewhere or does the SDK offer methods to do this?
Merely using imageWithCGImage is not sufficient. It will scale, but the result will be blurry and suboptimal whether scaling up or down.
If you want to get the aliasing right and get rid of the "jaggies" you need something like this: http://vocaro.com/trevor/blog/2009/10/12/resize-a-uiimage-the-right-way/.
My working test code looks something like this, which is Trevor's solution with one small adjustment to work with my transparent PNGs:
- (UIImage *)resizeImage:(UIImage*)image newSize:(CGSize)newSize {
CGRect newRect = CGRectIntegral(CGRectMake(0, 0, newSize.width, newSize.height));
CGImageRef imageRef = image.CGImage;
UIGraphicsBeginImageContextWithOptions(newSize, NO, 0);
CGContextRef context = UIGraphicsGetCurrentContext();
// Set the quality level to use when rescaling
CGContextSetInterpolationQuality(context, kCGInterpolationHigh);
CGAffineTransform flipVertical = CGAffineTransformMake(1, 0, 0, -1, 0, newSize.height);
CGContextConcatCTM(context, flipVertical);
// Draw into the context; this scales the image
CGContextDrawImage(context, newRect, imageRef);
// Get the resized image from the context and a UIImage
CGImageRef newImageRef = CGBitmapContextCreateImage(context);
UIImage *newImage = [UIImage imageWithCGImage:newImageRef];
CGImageRelease(newImageRef);
UIGraphicsEndImageContext();
return newImage;
}
For those using Swift here is the accepted answer in Swift:
func resizeImage(image: UIImage, newSize: CGSize) -> (UIImage) {
let newRect = CGRectIntegral(CGRectMake(0,0, newSize.width, newSize.height))
let imageRef = image.CGImage
UIGraphicsBeginImageContextWithOptions(newSize, false, 0)
let context = UIGraphicsGetCurrentContext()
// Set the quality level to use when rescaling
CGContextSetInterpolationQuality(context, kCGInterpolationHigh)
let flipVertical = CGAffineTransformMake(1, 0, 0, -1, 0, newSize.height)
CGContextConcatCTM(context, flipVertical)
// Draw into the context; this scales the image
CGContextDrawImage(context, newRect, imageRef)
let newImageRef = CGBitmapContextCreateImage(context) as CGImage
let newImage = UIImage(CGImage: newImageRef)
// Get the resized image from the context and a UIImage
UIGraphicsEndImageContext()
return newImage
}
If someone is looking for a Swift version, here is the Swift version of @Dan Rosenstark's accepted answer:
func resizeImage(image: UIImage, newHeight: CGFloat) -> UIImage {
let scale = newHeight / image.size.height
let newWidth = image.size.width * scale
UIGraphicsBeginImageContext(CGSizeMake(newWidth, newHeight))
image.drawInRect(CGRectMake(0, 0, newWidth, newHeight))
let newImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
return newImage
}
If you retain the original aspect ratio of the image while scaling, you'll always end up with a sharp image no matter how much you scale down.
You can use the following method for scaling:
+ (UIImage *)imageWithCGImage:(CGImageRef)imageRef scale:(CGFloat)scale orientation:(UIImageOrientation)orientation
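A hedged Swift sketch of an aspect-preserving resize along those lines (the helper name is mine, not from the answer):
import UIKit

// Scale to fit inside a target size while preserving the original aspect ratio.
func aspectFitImage(_ image: UIImage, within target: CGSize) -> UIImage {
    let ratio = min(target.width / image.size.width, target.height / image.size.height)
    let newSize = CGSize(width: image.size.width * ratio, height: image.size.height * ratio)
    return UIGraphicsImageRenderer(size: newSize).image { _ in
        image.draw(in: CGRect(origin: .zero, size: newSize))
    }
}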
For Swift 3
func resizeImage(image: UIImage, newSize: CGSize) -> (UIImage) {
let newRect = CGRect(x: 0, y: 0, width: newSize.width, height: newSize.height).integral
UIGraphicsBeginImageContextWithOptions(newSize, false, 0)
let context = UIGraphicsGetCurrentContext()
// Set the quality level to use when rescaling
context!.interpolationQuality = CGInterpolationQuality.default
let flipVertical = CGAffineTransform(a: 1, b: 0, c: 0, d: -1, tx: 0, ty: newSize.height)
context!.concatenate(flipVertical)
// Draw into the context; this scales the image
context?.draw(image.cgImage!, in: CGRect(x: 0.0,y: 0.0, width: newRect.width, height: newRect.height))
let newImageRef = context!.makeImage()! as CGImage
let newImage = UIImage(cgImage: newImageRef)
// Get the resized image from the context and a UIImage
UIGraphicsEndImageContext()
return newImage
}
@YAR your solution is working properly.
There is only one thing which does not fit my requirements: the whole image is resized. I wrote a method which does it like the Photos app on the iPhone.
This calculates the "longer side" and cuts off the "overlay", resulting in much better image quality.
- (UIImage *)resizeImageProportionallyIntoNewSize:(CGSize)newSize;
{
CGFloat scaleWidth = 1.0f;
CGFloat scaleHeight = 1.0f;
if (CGSizeEqualToSize(self.size, newSize) == NO) {
//calculate "the longer side"
if(self.size.width > self.size.height) {
scaleWidth = self.size.width / self.size.height;
} else {
scaleHeight = self.size.height / self.size.width;
}
}
//prepare source and target image
UIImage *sourceImage = self;
UIImage *newImage = nil;
// Now we create a context in newSize and draw the image out of the bounds of the context to get
// A proportionally scaled image by cutting of the image overlay
UIGraphicsBeginImageContext(newSize);
//Center image point so that on each egde is a little cutoff
CGRect thumbnailRect = CGRectZero;
thumbnailRect.size.width = newSize.width * scaleWidth;
thumbnailRect.size.height = newSize.height * scaleHeight;
thumbnailRect.origin.x = (int) (newSize.width - thumbnailRect.size.width) * 0.5;
thumbnailRect.origin.y = (int) (newSize.height - thumbnailRect.size.height) * 0.5;
[sourceImage drawInRect:thumbnailRect];
newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
if(newImage == nil) NSLog(@"could not scale image");
return newImage ;
}
For Swift 4.2:
extension UIImage {
func resized(By coefficient:CGFloat) -> UIImage? {
guard coefficient >= 0 && coefficient <= 1 else {
print("The coefficient must be a floating point number between 0 and 1")
return nil
}
let newWidth = size.width * coefficient
let newHeight = size.height * coefficient
UIGraphicsBeginImageContext(CGSize(width: newWidth, height: newHeight))
draw(in: CGRect(x: 0, y: 0, width: newWidth, height: newHeight))
let newImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
return newImage
}
}
This extension should scale the image while keeping original aspect ratio. The rest of the image is cropped. (Swift 3)
extension UIImage {
func thumbnail(ofSize proposedSize: CGSize) -> UIImage? {
let scale = min(size.width/proposedSize.width, size.height/proposedSize.height)
let newSize = CGSize(width: size.width/scale, height: size.height/scale)
let newOrigin = CGPoint(x: (proposedSize.width - newSize.width)/2, y: (proposedSize.height - newSize.height)/2)
let thumbRect = CGRect(origin: newOrigin, size: newSize).integral
UIGraphicsBeginImageContextWithOptions(proposedSize, false, 0)
draw(in: thumbRect)
let result = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
return result
}
}

How to create a colored 1x1 UIImage on the iPhone dynamically?

I would like to create a 1x1 UIImage dynamically based on a UIColor.
I suspect this can quickly be done with Quartz2d, and I'm poring over the documentation trying to get a grasp of the fundamentals. However, it looks like there are a lot of potential pitfalls: not identifying the numbers of bits and bytes per things correctly, not specifying the right flags, not releasing unused data, etc.
How can this be safely done with Quartz 2d (or another simpler way)?
You can use CGContextSetFillColorWithColor and CGContextFillRect for this:
Swift
extension UIImage {
class func image(with color: UIColor) -> UIImage {
let rect = CGRectMake(0.0, 0.0, 1.0, 1.0)
UIGraphicsBeginImageContext(rect.size)
let context = UIGraphicsGetCurrentContext()
CGContextSetFillColorWithColor(context, color.CGColor)
CGContextFillRect(context, rect)
let image = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
return image
}
}
Swift3
extension UIImage {
class func image(with color: UIColor) -> UIImage {
let rect = CGRect(origin: CGPoint(x: 0, y:0), size: CGSize(width: 1, height: 1))
UIGraphicsBeginImageContext(rect.size)
let context = UIGraphicsGetCurrentContext()!
context.setFillColor(color.cgColor)
context.fill(rect)
let image = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
return image!
}
}
Objective-C
+ (UIImage *)imageWithColor:(UIColor *)color {
CGRect rect = CGRectMake(0.0f, 0.0f, 1.0f, 1.0f);
UIGraphicsBeginImageContext(rect.size);
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSetFillColorWithColor(context, [color CGColor]);
CGContextFillRect(context, rect);
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return image;
}
Here's another option based on Matt Stephen's code. It creates a resizable solid-color image so that you could reuse it or change its size (e.g., use it as a background).
+ (UIImage *)prefix_resizeableImageWithColor:(UIColor *)color {
CGRect rect = CGRectMake(0.0f, 0.0f, 3.0f, 3.0f);
UIGraphicsBeginImageContext(rect.size);
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSetFillColorWithColor(context, [color CGColor]);
CGContextFillRect(context, rect);
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
image = [image resizableImageWithCapInsets:UIEdgeInsetsMake(1, 1, 1, 1)];
return image;
}
Put it in a UIImage category and change the prefix.
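The same idea expressed in Swift, as a hedged sketch (the button and sizes are illustrative): thanks to the 1-point cap insets, the image stretches cleanly, for example as a button background.
import UIKit

// A 3x3 solid image with 1pt cap insets stretches without distortion.
let base = UIGraphicsImageRenderer(size: CGSize(width: 3, height: 3)).image { _ in
    UIColor.black.setFill()
    UIRectFill(CGRect(x: 0, y: 0, width: 3, height: 3))
}
let stretchable = base.resizableImage(withCapInsets: UIEdgeInsets(top: 1, left: 1, bottom: 1, right: 1))

let button = UIButton(type: .system)
button.setBackgroundImage(stretchable, for: .normal)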
I used Matt Steven's answer many times so made a category for it:
@interface UIImage (mxcl)
+ (UIImage *)squareImageWithColor:(UIColor *)color dimension:(int)dimension;
@end
@implementation UIImage (mxcl)
+ (UIImage *)squareImageWithColor:(UIColor *)color dimension:(int)dimension {
CGRect rect = CGRectMake(0, 0, dimension, dimension);
UIGraphicsBeginImageContext(rect.size);
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSetFillColorWithColor(context, [color CGColor]);
CGContextFillRect(context, rect);
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return image;
}
@end
Using Apple's latest UIGraphicsImageRenderer the code is pretty small:
import UIKit
extension UIImage {
static func from(color: UIColor) -> UIImage {
let size = CGSize(width: 1, height: 1)
return UIGraphicsImageRenderer(size: size).image(actions: { (context) in
context.cgContext.setFillColor(color.cgColor)
context.fill(.init(origin: .zero, size: size))
})
}
}
To me, a convenience init feels neater in Swift.
extension UIImage {
convenience init?(color: UIColor, size: CGSize = CGSize(width: 1, height: 1)) {
let rect = CGRect(x: 0, y: 0, width: size.width, height: size.height)
UIGraphicsBeginImageContext(rect.size)
guard let context = UIGraphicsGetCurrentContext() else {
return nil
}
context.setFillColor(color.cgColor)
context.fill(rect)
guard let image = context.makeImage() else {
return nil
}
UIGraphicsEndImageContext()
self.init(cgImage: image)
}
}
Ok, this won't be exactly what you want, but this code will draw a line. You can adapt it to make a point. Or at least get a little info from it.
Making the image 1x1 seems a little weird. Strokes ride the line, so a stroke of width 1.0 at 0.5 should work. Just play around.
- (void)drawLine{
UIGraphicsBeginImageContext(CGSizeMake(320,300));
CGContextRef ctx = UIGraphicsGetCurrentContext();
float x = 0;
float xEnd = 320;
float y = 300;
CGContextClearRect(ctx, CGRectMake(5, 45, 320, 300));
CGContextSetGrayStrokeColor(ctx, 1.0, 1.0);
CGContextSetLineWidth(ctx, 1);
CGPoint line[2] = { CGPointMake(x,y), CGPointMake(xEnd, y) };
CGContextStrokeLineSegments(ctx, line, 2);
UIImage *theImage=UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
}
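Following the stroke-alignment remark above, a hedged Swift sketch of drawing a crisp 1-point line (the sizes are illustrative):
import UIKit

// A 1pt stroke centered on a half-point coordinate sits on pixel boundaries,
// so the line draws crisply instead of being smeared across two rows.
let size = CGSize(width: 320, height: 300)
let lineImage = UIGraphicsImageRenderer(size: size).image { ctx in
    let c = ctx.cgContext
    c.setStrokeColor(UIColor.black.cgColor)
    c.setLineWidth(1)
    c.move(to: CGPoint(x: 0, y: 150.5))
    c.addLine(to: CGPoint(x: size.width, y: 150.5))
    c.strokePath()
}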
