How to create a circular image with border (UIGraphics)?
P.S. I need to draw a picture.
Code in viewDidLoad:
NSURL *url2 = [NSURL URLWithString:@"http://images.ak.instagram.com/profiles/profile_55758514_75sq_1399309159.jpg"];
NSData *data2 = [NSData dataWithContentsOfURL:url2];
UIImage *profileImg = [UIImage imageWithData:data2];
// Create image context with the size of the background image.
UIGraphicsBeginImageContext(profileImg.size);
[profileImg drawInRect:CGRectMake(0, 0, profileImg.size.width, profileImg.size.height)];
// Get the newly created image.
UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
// Release the context.
UIGraphicsEndImageContext();
// Set the newly created image to the imageView.
self.imageView.image = result;
It sounds like you want to clip the image to a circle. Here's an example:
static UIImage *circularImageWithImage(UIImage *inputImage,
                                       UIColor *borderColor, CGFloat borderWidth)
{
    CGRect rect = (CGRect){ .origin = CGPointZero, .size = inputImage.size };
    UIGraphicsBeginImageContextWithOptions(rect.size, NO, inputImage.scale); {
        // Fill the entire circle with the border color.
        [borderColor setFill];
        [[UIBezierPath bezierPathWithOvalInRect:rect] fill];

        // Clip to the interior of the circle (inside the border).
        CGRect interiorBox = CGRectInset(rect, borderWidth, borderWidth);
        UIBezierPath *interior = [UIBezierPath bezierPathWithOvalInRect:interiorBox];
        [interior addClip];

        [inputImage drawInRect:rect];
    }
    UIImage *outputImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return outputImage;
}
Result:
Have you tried this?
self.imageView.layer.borderColor = [UIColor greenColor].CGColor;
self.imageView.layer.borderWidth = 1.f;
You'll also need
self.imageView.layer.cornerRadius = self.imageView.frame.size.width/2;
self.imageView.clipsToBounds = YES;
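In Swift, the same layer-based approach can be wrapped in a small helper (a sketch; it assumes the image view is square, so the corner radius yields a circle):
import UIKit

// Sketch: make an existing (square) image view circular with a border.
func makeCircular(_ imageView: UIImageView, borderColor: UIColor = .green, borderWidth: CGFloat = 1) {
    imageView.layer.cornerRadius = imageView.bounds.width / 2
    imageView.layer.borderColor = borderColor.cgColor
    imageView.layer.borderWidth = borderWidth
    imageView.clipsToBounds = true
}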
Swift 4 version
extension UIImage {
    func circularImageWithBorderOf(color: UIColor, diameter: CGFloat, borderWidth: CGFloat) -> UIImage {
        let aRect = CGRect(x: 0, y: 0, width: diameter, height: diameter)
        UIGraphicsBeginImageContextWithOptions(aRect.size, false, self.scale)
        // Fill the whole circle with the border color.
        color.setFill()
        UIBezierPath(ovalIn: aRect).fill()
        // Clip to the interior and draw the image inside the border.
        let anInteriorRect = CGRect(x: borderWidth, y: borderWidth, width: diameter - 2 * borderWidth, height: diameter - 2 * borderWidth)
        UIBezierPath(ovalIn: anInteriorRect).addClip()
        self.draw(in: anInteriorRect)
        let anImg = UIGraphicsGetImageFromCurrentImageContext()!
        UIGraphicsEndImageContext()
        return anImg
    }
}
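Usage might look like this (a sketch; profileImage and imageView are assumed to already exist in your code):
// Hypothetical call site: clip a downloaded avatar and display it.
let avatar = profileImage.circularImageWithBorderOf(color: .white,
                                                    diameter: imageView.bounds.width,
                                                    borderWidth: 2)
imageView.image = avatar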
Background info: I have a UIImageView. I have added an overlay colour on its image in the following way:
UIGraphicsBeginImageContext(initialImage.size);
[initialImage drawInRect:CGRectMake(0, 0, initialImage.size.width, initialImage.size.height) blendMode:kCGBlendModeNormal alpha:alphaValue];
UIBezierPath * path = [UIBezierPath bezierPathWithRect:CGRectMake(0, 0, initialImage.size.width, initialImage.size.height)];
[overlayColor setFill];
[path fillWithBlendMode:kCGBlendModeMultiply alpha:1];
finalImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
[self setImage:finalImage];
I still want to add this as an overlay colour, but I want it to have a gradient. I have been trying to figure out a way to do this but haven't really been successful. I guess the approach of adding an overlay colour with a gradient is wrong? I'm not sure how to do this. I have tried to add a CAGradientLayer as a sublayer of the UIImageView, but it doesn't work.
I thought about adding a UIView, setting its backgroundColor to the overlayColor, and then adding a CAGradientLayer as a sublayer of the UIView, which is added as a subview of the UIImageView, but we are not supposed to add subviews to UIImageViews.
Can someone please help me with this? Maybe I should change my approach?
Pointing me in the right direction will be great as well!
I look forward to your responses and apologies if this post hasn't been entirely clear!
Thanks in advance for your help!
Edit: Code for the CAGradientLayer
CAGradientLayer *gradient = [CAGradientLayer layer];
gradient.frame = self.frame;
UIColor *colorOne = [UIColor colorFromHex:self.feedColor withAlpha:(alphaValue * 0.7)];
UIColor *colorTwo = [UIColor colorFromHex:self.feedColor withAlpha:(alphaValue * 1.0)];
gradient.colors = [NSArray arrayWithObjects:(id)colorOne.CGColor, (id)colorTwo.CGColor, nil];
[self.layer insertSublayer:gradient atIndex:0];
I would just use Core Graphics in order to take your input image, apply a gradient overlay to it, and then pass it onto the UIImageView. Something like this should achieve the desired result:
- (UIImage *)imageWithGradientOverlay:(UIImage *)sourceImage color1:(UIColor *)color1 color2:(UIColor *)color2 gradPointA:(CGPoint)pointA gradPointB:(CGPoint)pointB {
    CGSize size = sourceImage.size;
    // Start context
    UIGraphicsBeginImageContext(size);
    CGContextRef c = UIGraphicsGetCurrentContext();
    // Draw source image into context
    CGContextDrawImage(c, (CGRect){CGPointZero, size}, sourceImage.CGImage);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGFloat gradLocs[] = {0, 1};
    NSArray *colors = @[(id)color1.CGColor, (id)color2.CGColor];
    // Create a simple linear gradient with the colors provided.
    CGGradientRef grad = CGGradientCreateWithColors(colorSpace, (__bridge CFArrayRef)colors, gradLocs);
    CGColorSpaceRelease(colorSpace);
    // Draw gradient with multiply blend mode over the source image
    CGContextSetBlendMode(c, kCGBlendModeMultiply);
    CGContextDrawLinearGradient(c, grad, pointA, pointB, 0);
    CGGradientRelease(grad);
    // Grab resulting image from context
    UIImage *resultImg = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return resultImg;
}
This is where sourceImage is the input image, color1 and color2 are your gradient colors, and gradPointA and gradPointB are your linear gradient end points (in the Core Graphics coordinate system, the bottom left is (0,0)).
This way you save having to mess about with layers. If you're frequently re-drawing with different colors, then you may want to take an approach that uses layers.
If you're looking for a more dynamic approach, then I would subclass CALayer instead of UIImageView. Therefore you'd want something like this:
@interface gradientImageLayer : CALayer
- (instancetype)initWithFrame:(CGRect)frame;
@end

@implementation gradientImageLayer

- (instancetype)initWithFrame:(CGRect)frame {
    if (self = [super init]) {
        self.opaque = YES; // Best for performance, but if you want the layer to have transparency, remove this.
        UIImage *i = [UIImage imageNamed:@"foo2.png"]; // Replace with your image
        self.frame = frame;
        self.contentsScale = [UIScreen mainScreen].nativeScale;
        self.contents = (__bridge id _Nullable)(i.CGImage);
        // Your code for the CAGradientLayer was indeed correct.
        CAGradientLayer *gradient = [CAGradientLayer layer];
        gradient.frame = frame;
        // Add whatever colors you want here.
        UIColor *colorOne = [UIColor colorWithRed:1 green:1 blue:0 alpha:0.1];
        UIColor *colorTwo = [UIColor colorWithRed:0 green:1 blue:1 alpha:0.2];
        gradient.colors = @[(id)colorOne.CGColor, (id)colorTwo.CGColor]; // Literals read far nicer than a clunky [NSArray arrayWith....]
        [self addSublayer:gradient];
    }
    return self;
}

@end
The downside to this approach is that you are unable to apply different blend modes. The only solution I've seen for applying a blend mode to a CALayer is through Core Graphics, but then you'd be better off with my original answer.
I'll post the collaborative solution to this question here.
This extension allows you to add a color overlay or a gradient color overlay using any blend mode (multiply, normal, etc.) and Core Graphics.
Swift 4:
extension UIImage {

    // creates a static image with a color of the requested size
    static func fromColor(color: UIColor, size: CGSize) -> UIImage {
        let rect = CGRect(x: 0, y: 0, width: size.width, height: size.height)
        UIGraphicsBeginImageContext(rect.size)
        let context = UIGraphicsGetCurrentContext()
        context?.setFillColor(color.cgColor)
        context?.fill(rect)
        let img = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return img!
    }

    func blendWithColorAndRect(blendMode: CGBlendMode, color: UIColor, rect: CGRect) -> UIImage {
        let imageColor = UIImage.fromColor(color: color, size: self.size)
        let rectImage = CGRect(x: 0, y: 0, width: self.size.width, height: self.size.height)
        UIGraphicsBeginImageContextWithOptions(self.size, true, 0)
        let context = UIGraphicsGetCurrentContext()
        // fill the background with white so that translucent colors get lighter
        context!.setFillColor(UIColor.white.cgColor)
        context!.fill(rectImage)
        self.draw(in: rectImage, blendMode: .normal, alpha: 1)
        imageColor.draw(in: rect, blendMode: blendMode, alpha: 0.8)
        // grab the finished image and return it
        let result = UIGraphicsGetImageFromCurrentImageContext()
        //self.backgroundImageView.image = result
        UIGraphicsEndImageContext()
        return result!
    }

    // creates a static image with a gradient of colors of the requested size
    static func fromGradient(colors: [UIColor], locations: [CGFloat], horizontal: Bool, size: CGSize) -> UIImage {
        let rect = CGRect(x: 0, y: 0, width: size.width, height: size.height)
        UIGraphicsBeginImageContext(rect.size)
        let context = UIGraphicsGetCurrentContext()
        let colorSpace = CGColorSpaceCreateDeviceRGB()
        let cgColors = colors.map { $0.cgColor } as CFArray
        let grad = CGGradient(colorsSpace: colorSpace, colors: cgColors, locations: locations)
        let startPoint = CGPoint(x: 0, y: 0)
        let endPoint = horizontal ? CGPoint(x: size.width, y: 0) : CGPoint(x: 0, y: size.height)
        context?.drawLinearGradient(grad!, start: startPoint, end: endPoint, options: .drawsAfterEndLocation)
        let img = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return img!
    }

    func blendWithGradientAndRect(blendMode: CGBlendMode, colors: [UIColor], locations: [CGFloat], horizontal: Bool = false, alpha: CGFloat = 1.0, rect: CGRect) -> UIImage {
        let imageColor = UIImage.fromGradient(colors: colors, locations: locations, horizontal: horizontal, size: size)
        let rectImage = CGRect(x: 0, y: 0, width: self.size.width, height: self.size.height)
        UIGraphicsBeginImageContextWithOptions(self.size, true, 0)
        let context = UIGraphicsGetCurrentContext()
        // fill the background with white so that translucent colors get lighter
        context!.setFillColor(UIColor.white.cgColor)
        context!.fill(rectImage)
        self.draw(in: rectImage, blendMode: .normal, alpha: 1)
        imageColor.draw(in: rect, blendMode: blendMode, alpha: alpha)
        // grab the finished image and return it
        let result = UIGraphicsGetImageFromCurrentImageContext()
        //self.backgroundImageView.image = result
        UIGraphicsEndImageContext()
        return result!
    }
}
Example Gradient:
let newImage = image.blendWithGradientAndRect(blendMode: .multiply,
colors: [.red, .white],
locations: [0, 1],
horizontal: true,
alpha: 0.8,
rect: imageRect)
Example Single Color:
let newImage = image.blendWithColorAndRect(blendMode: .multiply, color: .red, rect: imageRect)
I'm receiving image from a server, then based on a color chosen by the user, the image color will be changed.
I tried the following :
_sketchImageView.image = [_sketchImageView.image imageWithRenderingMode:UIImageRenderingModeAlwaysTemplate];
[_sketchImageView setTintColor:color];
I got the opposite of my goal (the white area outside the image is colored with the chosen color).
What is going wrong?
I need to do the same as in this question, but the provided solution doesn't solve my case:
How can I change image tintColor in iOS and WatchKit
Try to generate a new image yourself:
UIImage *newImage = [_sketchImageView.image imageWithRenderingMode:UIImageRenderingModeAlwaysTemplate];
UIGraphicsBeginImageContextWithOptions(newImage.size, NO, newImage.scale);
[yourTintColor set];
[newImage drawInRect:CGRectMake(0, 0, newImage.size.width, newImage.size.height)];
newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
_sketchImageView.image = newImage;
And use it.
Good luck
======= UPDATE =======
This solution will change the color of every non-transparent pixel in the image.
Example: we have a book image: http://pngimg.com/upload/book_PNG2113.png
After running the above code with, say, a red tint color, every opaque pixel of the book image becomes red.
So the result depends on how your image is designed (which pixels are opaque and which are transparent).
In Swift you can use this extension (based on @VietHung's Objective-C solution):
Swift 5:
extension UIImage {
func imageWithColor(color: UIColor) -> UIImage? {
var image = withRenderingMode(.alwaysTemplate)
UIGraphicsBeginImageContextWithOptions(size, false, scale)
color.set()
image.draw(in: CGRect(x: 0, y: 0, width: size.width, height: size.height))
image = UIGraphicsGetImageFromCurrentImageContext()!
UIGraphicsEndImageContext()
return image
}
}
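Usage, applied to the question's image view, could look like this (sketchImageView and the chosen color are assumed to exist):
// Hypothetical call site: re-tint the sketch image with the user's chosen color.
sketchImageView.image = sketchImageView.image?.imageWithColor(color: chosenColor)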
Previous Swift version:
extension UIImage {
func imageWithColor(color: UIColor) -> UIImage? {
var image = imageWithRenderingMode(.AlwaysTemplate)
UIGraphicsBeginImageContextWithOptions(size, false, scale)
color.set()
image.drawInRect(CGRect(x: 0, y: 0, width: size.width, height: size.height))
image = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
return image
}
}
In Swift 2.0 you can use this:
let image = UIImage(named:"your image name")?.imageWithRenderingMode(.AlwaysTemplate)
yourimageView.tintColor = UIColor.redColor()
yourimageView.image = image
In Swift 3.0 you can use this:
let image = UIImage(named:"your image name")?.withRenderingMode(.alwaysTemplate)
yourimageView.tintColor = UIColor.red
yourimageView.image = image
Try something like this
UIImage *originalImage = _sketchImageView.image;
UIImage *newImage = [originalImage imageWithRenderingMode:UIImageRenderingModeAlwaysTemplate];
UIImageView *imageView = [[UIImageView alloc] initWithFrame:CGRectMake(0,0,50,50)]; // your image size
imageView.tintColor = [UIColor redColor]; // or whatever color that has been selected
imageView.image = newImage;
// A template image is tinted by the view that displays it, so the target view needs the tint too.
_sketchImageView.tintColor = imageView.tintColor;
_sketchImageView.image = imageView.image;
Hope this helps.
In Swift 3.0 you can use this extension (based on @VietHung's Objective-C solution):
extension UIImage {
func imageWithColor(_ color: UIColor) -> UIImage? {
var image = imageWithRenderingMode(.alwaysTemplate)
UIGraphicsBeginImageContextWithOptions(size, false, scale)
color.set()
image.draw(in: CGRect(x: 0, y: 0, width: size.width, height: size.height))
image = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
return image
}
}
For Swift 3.0, I made a custom subclass of UIImageView called TintedUIImageView. Now the image uses whatever tint color is set in Interface Builder or code:
class TintedUIImageView: UIImageView {
override func awakeFromNib() {
if let image = self.image {
self.image = image.withRenderingMode(.alwaysTemplate)
}
}
}
You can try:
_sketchImageView.image = [self imageNamed:@"imageName" withColor:[UIColor blackColor]];
- (UIImage *)imageNamed:(NSString *)name withColor:(UIColor *)color
{
// load the image
//NSString *name = @"badge.png";
UIImage *img = [UIImage imageNamed:name];
// begin a new image context, to draw our colored image onto
UIGraphicsBeginImageContext(img.size);
// get a reference to that context we created
CGContextRef context = UIGraphicsGetCurrentContext();
// set the fill color
[color setFill];
// translate/flip the graphics context (for transforming from CG* coords to UI* coords
CGContextTranslateCTM(context, 0, img.size.height);
CGContextScaleCTM(context, 1.0, -1.0);
// set the blend mode to color burn, and the original image
CGContextSetBlendMode(context, kCGBlendModeColorBurn);
CGRect rect = CGRectMake(0, 0, img.size.width, img.size.height);
CGContextDrawImage(context, rect, img.CGImage);
// set a mask that matches the shape of the image, then draw (color burn) a colored rectangle
CGContextClipToMask(context, rect, img.CGImage);
CGContextAddRect(context, rect);
CGContextDrawPath(context,kCGPathFill);
// generate a new UIImage from the graphics context we drew onto
UIImage *coloredImg = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
//return the color-burned image
return coloredImg;
}
Try setting the tint color on the superview of the image view. E.g. [self.view setTintColor:color];
In Swift 4 you can simply make an extension like this:
import UIKit
extension UIImageView {
func tintImageColor(color: UIColor) {
guard let image = image else { return }
self.image = image.withRenderingMode(UIImageRenderingMode.alwaysTemplate)
self.tintColor = color
}
}
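Tinting is then a single call on the image view (sketchImageView is assumed to be your outlet):
// Applies template rendering and the tint in one step.
sketchImageView.tintImageColor(color: .red)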
Swift 4:
extension UIImage {
func imageWithColor(_ color: UIColor) -> UIImage? {
var image: UIImage? = withRenderingMode(.alwaysTemplate)
UIGraphicsBeginImageContextWithOptions(size, false, scale)
color.set()
image?.draw(in: CGRect(x: 0, y: 0, width: size.width, height: size.height))
image = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
return image
}
}
Here's how I apply and use tints in iOS 9 with Swift.
//apply a color to an image
//ref - http://stackoverflow.com/questions/28427935/how-can-i-change-image-tintcolor
//ref - https://www.captechconsulting.com/blogs/ios-7-tutorial-series-tint-color-and-easy-app-theming
func getTintedImage() -> UIImageView {
var image : UIImage;
var imageView : UIImageView;
image = UIImage(named: "someAsset")!;
let size : CGSize = image.size;
let frame : CGRect = CGRectMake((UIScreen.mainScreen().bounds.width-86)/2, 600, size.width, size.height);
let redCover : UIView = UIView(frame: frame);
redCover.backgroundColor = UIColor.redColor();
redCover.layer.opacity = 0.75;
imageView = UIImageView();
imageView.image = image.imageWithRenderingMode(UIImageRenderingMode.Automatic);
imageView.addSubview(redCover);
return imageView;
}
One thing you can do is just add your images to the asset catalog in Xcode and change the rendering mode to Template Image; then whenever you change the tint color of the UIImageView, the change is automatically applied to the image.
Check this link out -> https://krakendev.io/blog/4-xcode-asset-catalog-secrets-you-need-to-know
let image = UIImage(named: "i m a g e n a m e")?.withRenderingMode(.alwaysTemplate)
imageView.tintColor = UIColor.white // Change to required color
imageView.image = image
Try this
iOS 13.4 and above
UIImage *image = [UIImage imageNamed:@"placeHolderIcon"];
image = [image imageWithTintColor:[UIColor whiteColor] renderingMode:UIImageRenderingModeAlwaysTemplate];
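The Swift equivalent (also iOS 13+) is a single call; note that the returned image has to be assigned somewhere (imageView here is assumed):
// UIImage.withTintColor(_:renderingMode:) returns a new, tinted image.
let tinted = UIImage(named: "placeHolderIcon")?.withTintColor(.white, renderingMode: .alwaysTemplate)
imageView.image = tinted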
I have a game where users can create custom levels and upload them to my server for other users to play and I want to get a screenshot of the "action area" before the user tests his/her level to upload to my server as sort of a "preview image".
I know how to get a screenshot of the entire view, but I want to define it to a custom frame. Consider the following image:
I want to just take a screenshot of the area in red, the "action area." Can I achieve this?
You just need to make a rect of the area you want captured and pass that rect to the method.
Swift 3.x :
extension UIView {
func imageSnapshot() -> UIImage {
return self.imageSnapshotCroppedToFrame(frame: nil)
}
func imageSnapshotCroppedToFrame(frame: CGRect?) -> UIImage {
let scaleFactor = UIScreen.main.scale
UIGraphicsBeginImageContextWithOptions(bounds.size, false, scaleFactor)
self.drawHierarchy(in: bounds, afterScreenUpdates: true)
var image: UIImage = UIGraphicsGetImageFromCurrentImageContext()!
UIGraphicsEndImageContext()
if let frame = frame {
let scaledRect = frame.applying(CGAffineTransform(scaleX: scaleFactor, y: scaleFactor))
if let imageRef = image.cgImage!.cropping(to: scaledRect) {
image = UIImage(cgImage: imageRef)
}
}
return image
}
}
//How to call :
imgview.image = self.view.imageSnapshotCroppedToFrame(frame: CGRect.init(x: 0, y: 0, width: 320, height: 100))
Objective C :
-(UIImage *)captureScreenInRect:(CGRect)captureFrame
{
CALayer *layer;
layer = self.view.layer;
UIGraphicsBeginImageContext(self.view.bounds.size);
CGContextClipToRect (UIGraphicsGetCurrentContext(),captureFrame);
[layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *screenImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return screenImage;
}
//How to call :
imgView.image = [self captureScreenInRect:CGRectMake(0, 0, 320, 100)];
- (UIImage *) getScreenShot {
UIWindow *keyWindow = [[UIApplication sharedApplication] keyWindow];
CGRect rect = [keyWindow bounds];
UIGraphicsBeginImageContext(rect.size);
CGContextRef context = UIGraphicsGetCurrentContext();
[keyWindow.layer renderInContext:context];
UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return img;
}
I've looked around everywhere, but I can't find a way to do this. I need to create a black UIImage of a certain width and height (The width and height change, so I can't just create a black box and then load it into a UIImage). Is there some way to make a CGRect and then convert it to a UIImage? Or is there some other way to make a simple black box?
Depending on your situation, you could probably just use a UIView with its backgroundColor set to [UIColor blackColor]. Also, if the image is solidly-colored, you don't need an image that's actually the dimensions you want to display it at; you can just scale a 1x1 pixel image to fill the necessary space (e.g., by setting the contentMode of a UIImageView to UIViewContentModeScaleToFill).
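As a sketch of that 1×1-pixel route (sizes here are illustrative; UIGraphicsImageRenderer is used for brevity):
import UIKit

// Render a single black pixel, then let the image view stretch it to any size.
let pixel = UIGraphicsImageRenderer(size: CGSize(width: 1, height: 1)).image { ctx in
    UIColor.black.setFill()
    ctx.fill(CGRect(x: 0, y: 0, width: 1, height: 1))
}
let blackBox = UIImageView(frame: CGRect(x: 0, y: 0, width: 200, height: 120))
blackBox.contentMode = .scaleToFill
blackBox.image = pixel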
Having said that, it may be instructive to see how to actually generate such an image:
Objective-C
CGSize imageSize = CGSizeMake(64, 64);
UIColor *fillColor = [UIColor blackColor];
UIGraphicsBeginImageContextWithOptions(imageSize, YES, 0);
CGContextRef context = UIGraphicsGetCurrentContext();
[fillColor setFill];
CGContextFillRect(context, CGRectMake(0, 0, imageSize.width, imageSize.height));
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Swift
let imageSize = CGSize(width: 420, height: 120)
let color: UIColor = .black
UIGraphicsBeginImageContextWithOptions(imageSize, true, 0)
let context = UIGraphicsGetCurrentContext()!
color.setFill()
context.fill(CGRect(x: 0, y: 0, width: imageSize.width, height: imageSize.height))
let image: UIImage = UIGraphicsGetImageFromCurrentImageContext()!
UIGraphicsEndImageContext()
UIGraphicsBeginImageContextWithOptions(CGSizeMake(w,h), NO, 0);
UIBezierPath* p =
[UIBezierPath bezierPathWithRect:CGRectMake(0,0,w,h)];
[[UIColor blackColor] setFill];
[p fill];
UIImage* im = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Now im is the image.
That code comes almost unchanged from this section of my book: http://www.apeth.com/iOSBook/ch15.html#_graphics_contexts
Swift 3:
func uiImage(from color:UIColor?, size:CGSize) -> UIImage? {
UIGraphicsBeginImageContextWithOptions(size, true, 0)
defer {
UIGraphicsEndImageContext()
}
let context = UIGraphicsGetCurrentContext()
color?.setFill()
context?.fill(CGRect.init(x: 0, y: 0, width: size.width, height: size.height))
return UIGraphicsGetImageFromCurrentImageContext()
}
Like this
let image = UIGraphicsImageRenderer(size: bounds.size).image { _ in
UIColor.black.setFill()
UIRectFill(bounds)
}
As quoted in this WWDC video:
There's another function that's older: UIGraphicsBeginImageContext. But please, don't use that.
Here is an example that creates a black UIImage that is 1920x1080 by creating it from a CGImage created from a CIImage:
let frame = CGRect(origin: CGPoint(x: 0, y: 0), size: CGSize(width: 1920, height: 1080))
let cgImage = CIContext().createCGImage(CIImage(color: .black), from: frame)!
let uiImage = UIImage(cgImage: cgImage)
I would like to create a 1x1 UIImage dynamically based on a UIColor.
I suspect this can quickly be done with Quartz2d, and I'm poring over the documentation trying to get a grasp of the fundamentals. However, it looks like there are a lot of potential pitfalls: not identifying the numbers of bits and bytes per things correctly, not specifying the right flags, not releasing unused data, etc.
How can this be safely done with Quartz 2d (or another simpler way)?
You can use CGContextSetFillColorWithColor and CGContextFillRect for this:
Swift
extension UIImage {
class func image(with color: UIColor) -> UIImage {
let rect = CGRectMake(0.0, 0.0, 1.0, 1.0)
UIGraphicsBeginImageContext(rect.size)
let context = UIGraphicsGetCurrentContext()
CGContextSetFillColorWithColor(context, color.CGColor)
CGContextFillRect(context, rect)
let image = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
return image
}
}
Swift3
extension UIImage {
class func image(with color: UIColor) -> UIImage {
let rect = CGRect(origin: CGPoint(x: 0, y:0), size: CGSize(width: 1, height: 1))
UIGraphicsBeginImageContext(rect.size)
let context = UIGraphicsGetCurrentContext()!
context.setFillColor(color.cgColor)
context.fill(rect)
let image = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
return image!
}
}
Objective-C
+ (UIImage *)imageWithColor:(UIColor *)color {
CGRect rect = CGRectMake(0.0f, 0.0f, 1.0f, 1.0f);
UIGraphicsBeginImageContext(rect.size);
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSetFillColorWithColor(context, [color CGColor]);
CGContextFillRect(context, rect);
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return image;
}
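A typical use for such a 1×1 image is as a stretchable background, e.g. for a UIButton state (a sketch using the Swift 3 extension above; button is assumed to exist):
// The 1×1 image stretches to fill the button's background for that state.
button.setBackgroundImage(UIImage.image(with: .red), for: .highlighted)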
Here's another option based on Matt Stephen's code. It creates a resizable solid-color image, so you can reuse it or change its size (e.g. use it as a background).
+ (UIImage *)prefix_resizeableImageWithColor:(UIColor *)color {
CGRect rect = CGRectMake(0.0f, 0.0f, 3.0f, 3.0f);
UIGraphicsBeginImageContext(rect.size);
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSetFillColorWithColor(context, [color CGColor]);
CGContextFillRect(context, rect);
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
image = [image resizableImageWithCapInsets:UIEdgeInsetsMake(1, 1, 1, 1)];
return image;
}
Put it in a UIImage category and change the prefix.
I used Matt Steven's answer many times, so I made a category for it:
@interface UIImage (mxcl)
+ (UIImage *)squareImageWithColor:(UIColor *)color dimension:(int)dimension;
@end

@implementation UIImage (mxcl)

+ (UIImage *)squareImageWithColor:(UIColor *)color dimension:(int)dimension {
    CGRect rect = CGRectMake(0, 0, dimension, dimension);
    UIGraphicsBeginImageContext(rect.size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetFillColorWithColor(context, [color CGColor]);
    CGContextFillRect(context, rect);
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}

@end
Using Apple's latest UIGraphicsImageRenderer the code is pretty small:
import UIKit
extension UIImage {
static func from(color: UIColor) -> UIImage {
let size = CGSize(width: 1, height: 1)
return UIGraphicsImageRenderer(size: size).image(actions: { (context) in
context.cgContext.setFillColor(color.cgColor)
context.fill(.init(origin: .zero, size: size))
})
}
}
To me, a convenience init feels neater in Swift.
extension UIImage {
    convenience init?(color: UIColor, size: CGSize = CGSize(width: 1, height: 1)) {
        let rect = CGRect(x: 0, y: 0, width: size.width, height: size.height)
        UIGraphicsBeginImageContext(rect.size)
        // End the context on every exit path, including the early returns below.
        defer { UIGraphicsEndImageContext() }
        guard let context = UIGraphicsGetCurrentContext() else {
            return nil
        }
        context.setFillColor(color.cgColor)
        context.fill(rect)
        guard let image = context.makeImage() else {
            return nil
        }
        self.init(cgImage: image)
    }
}
Ok, this won't be exactly what you want, but this code will draw a line. You can adapt it to make a point. Or at least get a little info from it.
Making the image 1x1 seems a little weird. Strokes ride the line, so a stroke of width 1.0 at 0.5 should work. Just play around.
- (void)drawLine{
UIGraphicsBeginImageContext(CGSizeMake(320,300));
CGContextRef ctx = UIGraphicsGetCurrentContext();
float x = 0;
float xEnd = 320;
float y = 300;
CGContextClearRect(ctx, CGRectMake(5, 45, 320, 300));
CGContextSetGrayStrokeColor(ctx, 1.0, 1.0);
CGContextSetLineWidth(ctx, 1);
CGPoint line[2] = { CGPointMake(x,y), CGPointMake(xEnd, y) };
CGContextStrokeLineSegments(ctx, line, 2);
UIImage *theImage=UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
}
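Following the 0.5-offset advice above, here is a Swift sketch that adapts the idea to produce a single colored pixel (the helper name is illustrative):
import UIKit

// A 1x1 image made by stroking a width-1 line centered at y = 0.5,
// so the stroke exactly covers the single pixel.
func onePixelImage(color: UIColor = .black) -> UIImage? {
    UIGraphicsBeginImageContext(CGSize(width: 1, height: 1))
    defer { UIGraphicsEndImageContext() }
    guard let ctx = UIGraphicsGetCurrentContext() else { return nil }
    ctx.setStrokeColor(color.cgColor)
    ctx.setLineWidth(1)
    ctx.move(to: CGPoint(x: 0, y: 0.5))
    ctx.addLine(to: CGPoint(x: 1, y: 0.5))
    ctx.strokePath()
    return UIGraphicsGetImageFromCurrentImageContext()
}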