I have a game where users can create custom levels and upload them to my server for other users to play. Before the user tests his/her level, I want to capture a screenshot of the "action area" and upload it to my server as a sort of preview image.
I know how to get a screenshot of the entire view, but I want to define it to a custom frame. Consider the following image:
I just want to take a screenshot of the area in red (the "action area"). Can I achieve this?
You just need to make a CGRect for the area you want captured and pass that rect into the method.
Swift 3.x:
extension UIView {
    func imageSnapshot() -> UIImage {
        return self.imageSnapshotCroppedToFrame(frame: nil)
    }

    func imageSnapshotCroppedToFrame(frame: CGRect?) -> UIImage {
        let scaleFactor = UIScreen.main.scale
        UIGraphicsBeginImageContextWithOptions(bounds.size, false, scaleFactor)
        self.drawHierarchy(in: bounds, afterScreenUpdates: true)
        var image: UIImage = UIGraphicsGetImageFromCurrentImageContext()!
        UIGraphicsEndImageContext()

        if let frame = frame {
            let scaledRect = frame.applying(CGAffineTransform(scaleX: scaleFactor, y: scaleFactor))
            if let imageRef = image.cgImage!.cropping(to: scaledRect) {
                image = UIImage(cgImage: imageRef)
            }
        }
        return image
    }
}
// How to call:
imgview.image = self.view.imageSnapshotCroppedToFrame(frame: CGRect.init(x: 0, y: 0, width: 320, height: 100))
Objective-C:
- (UIImage *)captureScreenInRect:(CGRect)captureFrame
{
    CALayer *layer = self.view.layer;
    UIGraphicsBeginImageContext(self.view.bounds.size);
    CGContextClipToRect(UIGraphicsGetCurrentContext(), captureFrame);
    [layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *screenImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return screenImage;
}
// How to call:
imgView.image = [self captureScreenInRect:CGRectMake(0, 0, 320, 100)];
- (UIImage *)getScreenShot {
    UIWindow *keyWindow = [[UIApplication sharedApplication] keyWindow];
    CGRect rect = [keyWindow bounds];
    UIGraphicsBeginImageContext(rect.size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    [keyWindow.layer renderInContext:context];
    UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return img;
}
Related
I would like to include the navigation bar in my tableview screenshot image. The following code captures the entire tableview, and I have tried other code that captures the navigation bar but not the entire tableview. Is it possible to do both at the same time?
func screenshot() {
    UIGraphicsBeginImageContextWithOptions(CGSize(width: tableView.contentSize.width, height: tableView.contentSize.height), false, 0.0)
    let context = UIGraphicsGetCurrentContext()
    let previousFrame = tableView.frame
    tableView.frame = CGRect(x: tableView.frame.origin.x, y: tableView.frame.origin.y, width: tableView.contentSize.width, height: tableView.contentSize.height)
    tableView.layer.render(in: context!)
    tableView.frame = previousFrame
    let image = UIGraphicsGetImageFromCurrentImageContext()!
    UIGraphicsEndImageContext()
    UIImageWriteToSavedPhotosAlbum(image, nil, nil, nil)
}
Take navigation bar and tableview screenshots separately, then merge them; a sketch for capturing the two images follows the Swift version of the merge function below.
Objective-C:
- (UIImage *)merge:(UIImage *)tvImage with:(UIImage *)navImage {
    CGFloat contextHeight = tableView.contentSize.height + self.navigationController.navigationBar.frame.size.height;
    CGRect contextFrame = CGRectMake(0, 0, tableView.frame.size.width, contextHeight);
    UIGraphicsBeginImageContextWithOptions(contextFrame.size, NO, 0.0);
    CGContextRef context = UIGraphicsGetCurrentContext();
    UIGraphicsPushContext(context);

    // 1. Draw the navigation bar image into the context.
    [navImage drawInRect:self.navigationController.navigationBar.frame];

    // 2. Draw the tableview image below it.
    CGFloat y = self.navigationController.navigationBar.frame.size.height;
    CGFloat h = tableView.contentSize.height;
    CGFloat w = tableView.frame.size.width;
    [tvImage drawInRect:CGRectMake(0, y, w, h)];

    // Clean up and get the new image.
    UIGraphicsPopContext();
    UIImage *mergeImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return mergeImage;
}
Swift 3:
func merge(tvImage: UIImage, with navImage: UIImage) -> UIImage {
    let contextHeight: CGFloat = tableView.contentSize.height + self.navigationController!.navigationBar.frame.size.height
    let contextFrame = CGRect(x: 0, y: 0, width: tableView.frame.size.width, height: contextHeight)
    UIGraphicsBeginImageContextWithOptions(contextFrame.size, false, 0.0)
    let context: CGContext = UIGraphicsGetCurrentContext()!
    UIGraphicsPushContext(context)

    // 1. Draw the navigation bar image into the context.
    navImage.draw(in: self.navigationController!.navigationBar.frame)

    // 2. Draw the tableview image below it.
    let y: CGFloat = self.navigationController!.navigationBar.frame.size.height
    let h: CGFloat = tableView.contentSize.height
    let w: CGFloat = tableView.frame.size.width
    tvImage.draw(in: CGRect(x: 0, y: y, width: w, height: h))

    // Clean up and get the new image.
    UIGraphicsPopContext()
    let mergeImage: UIImage = UIGraphicsGetImageFromCurrentImageContext()!
    UIGraphicsEndImageContext()
    return mergeImage
}
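A minimal sketch (not part of the original answer) of the "take the screenshots separately" step, assuming the code lives in the same view controller that owns tableView and is embedded in a navigation controller:
func captureNavigationBarImage() -> UIImage? {
    // Render just the navigation bar into its own image.
    guard let navBar = self.navigationController?.navigationBar else { return nil }
    UIGraphicsBeginImageContextWithOptions(navBar.bounds.size, false, 0.0)
    navBar.layer.render(in: UIGraphicsGetCurrentContext()!)
    let image = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return image
}

func captureFullTableViewImage() -> UIImage? {
    // Temporarily resize the table view to its full content size, as in the question's code.
    UIGraphicsBeginImageContextWithOptions(tableView.contentSize, false, 0.0)
    let previousFrame = tableView.frame
    tableView.frame = CGRect(origin: previousFrame.origin, size: tableView.contentSize)
    tableView.layer.render(in: UIGraphicsGetCurrentContext()!)
    tableView.frame = previousFrame
    let image = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return image
}

// Then feed both into the merge function:
if let tv = captureFullTableViewImage(), let nav = captureNavigationBarImage() {
    let combined = merge(tvImage: tv, with: nav)
    UIImageWriteToSavedPhotosAlbum(combined, nil, nil, nil)
}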
Writing an extension always helps with reuse. I have created a simple UIView extension; see below.
extension UIView {
    // Render the view within the view's bounds, then capture it as an image.
    func asImage() -> UIImage {
        let renderer = UIGraphicsImageRenderer(bounds: bounds)
        return renderer.image { rendererContext in
            layer.render(in: rendererContext.cgContext)
        }
    }
}
Usage:
self.imageview.image = self.view.asImage() // If you want to capture the view
self.imageview.image = self.tabBarController?.view.asImage() // If it's a UITabBarController
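And, for the navigation bar question above, rendering the navigation controller's view captures the navigation bar together with the currently visible content in one pass (it will not include off-screen table rows):
self.imageview.image = self.navigationController?.view.asImage() // Navigation bar plus visible content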
My Swift code for capturing a UITableView as an image isn't working when the table is scrolled down. I essentially have the answer in Objective-C but can't seem to make it work in Swift. Currently this is what I have in Swift:
func snapshotOfCell(inputView: UIView) -> UIView {
    UIGraphicsBeginImageContextWithOptions(inputView.bounds.size, false, 0.0)
    inputView.layer.renderInContext(UIGraphicsGetCurrentContext()!)
    let image = UIGraphicsGetImageFromCurrentImageContext() as UIImage
    UIGraphicsEndImageContext()

    let cellSnapshot: UIView = UIImageView(image: image)
    cellSnapshot.layer.masksToBounds = false
    return cellSnapshot
}
I found this answer but it's in Objective-C:
- (UIImage *)imageWithTableView:(UITableView *)tableView {
    UIView *renderedView = tableView;
    CGPoint tableContentOffset = tableView.contentOffset;
    UIGraphicsBeginImageContextWithOptions(renderedView.bounds.size, renderedView.opaque, 0.0);
    CGContextRef contextRef = UIGraphicsGetCurrentContext();
    CGContextTranslateCTM(contextRef, 0, -tableContentOffset.y);
    [tableView.layer renderInContext:contextRef];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
It seems to fix the scroll problem by using a contentOffset. However, I've been trying to integrate it into my Swift function without success. Anyone good with both Objective-C and Swift? Thanks!
Capture the whole tableview as an image:
UIGraphicsBeginImageContextWithOptions(CGSizeMake(tableView.contentSize.width, tableView.contentSize.height),false, 0.0)
let context = UIGraphicsGetCurrentContext()
let previousFrame = tableView.frame
tableView.frame = CGRectMake(tableView.frame.origin.x, tableView.frame.origin.y, tableView.contentSize.width, tableView.contentSize.height);
tableView.layer.renderInContext(context!)
tableView.frame = previousFrame
let image = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext();
imageView.image = image;
Capture a screenshot of the tableview in a scrolled position:
let contentOffset = tableView.contentOffset
UIGraphicsBeginImageContextWithOptions(tableView.bounds.size, true, 1)
let context = UIGraphicsGetCurrentContext()
CGContextTranslateCTM(context, 0, -contentOffset.y)
tableView.layer.renderInContext(context!)
let image = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
imageView.image = image;
Swift 3.0 version for capturing the entire tableview, based on @Jeyamahesan's answer:
UIGraphicsBeginImageContextWithOptions(CGSize(width:tableView.contentSize.width, height:tableView.contentSize.height),false, 0.0)
let context = UIGraphicsGetCurrentContext()
let previousFrame = tableView.frame
tableView.frame = CGRect(x: tableView.frame.origin.x, y: tableView.frame.origin.y, width: tableView.contentSize.width, height: tableView.contentSize.height)
tableView.layer.render(in: context!)
tableView.frame = previousFrame
let image = UIGraphicsGetImageFromCurrentImageContext()!
UIGraphicsEndImageContext()
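The captured image can then be assigned or saved just like in the earlier snippets, for example:
imageView.image = image
// or: UIImageWriteToSavedPhotosAlbum(image, nil, nil, nil)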
Try this piece of code:
- (UIImage *)screenshot {
    UIImage *image = nil;
    UIGraphicsBeginImageContextWithOptions(tableView.contentSize, NO, 0.0);
    {
        CGPoint savedContentOffset = tableView.contentOffset;
        CGRect savedFrame = tableView.frame;

        tableView.contentOffset = CGPointMake(0.0, 0.0);
        tableView.frame = CGRectMake(0, 0, tableView.contentSize.width, tableView.contentSize.height);
        [tableView.layer renderInContext:UIGraphicsGetCurrentContext()];
        image = UIGraphicsGetImageFromCurrentImageContext();

        tableView.contentOffset = savedContentOffset;
        tableView.frame = savedFrame;
    }
    UIGraphicsEndImageContext();
    return image;
}
Happy Coding..!!
Please try this one; it may help you.
- (UIImage *)buildImage:(UIImage *)image
{
    UIGraphicsBeginImageContextWithOptions(image.size, NO, image.scale);
    [image drawAtPoint:CGPointZero];

    CGFloat scale = image.size.width / _workingView.frame.size.width;
    CGContextScaleCTM(UIGraphicsGetCurrentContext(), scale, scale);
    NSLog(@"%f", scale);

    [tableView.layer renderInContext:UIGraphicsGetCurrentContext()];

    UIImage *tmp = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return tmp;
}
I'm receiving image from a server, then based on a color chosen by the user, the image color will be changed.
I tried the following :
_sketchImageView.image = [_sketchImageView.image imageWithRenderingMode:UIImageRenderingModeAlwaysTemplate];
[_sketchImageView setTintColor:color];
I got the opposite of my goal (the white area outside the image content is colored with the chosen color).
What is going wrong?
I need to do the same as in this question; the provided solution doesn't solve my case:
How can I change image tintColor in iOS and WatchKit
Try generating a new image yourself:
UIImage *newImage = [_sketchImageView.image imageWithRenderingMode:UIImageRenderingModeAlwaysTemplate];
UIGraphicsBeginImageContextWithOptions(newImage.size, NO, newImage.scale);
[yourTintColor set];
[newImage drawInRect:CGRectMake(0, 0, newImage.size.width, newImage.size.height)];
newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
_sketchImageView.image = newImage;
And use it.
Good luck
======= UPDATE =======
This solution changes the color of every (non-transparent) pixel in the image.
Example: we have a book image: http://pngimg.com/upload/book_PNG2113.png
After running the above code (e.g. with a red tint color), we have:
So how your image turns out depends on how it was designed (which areas are transparent and which are opaque).
In Swift you can use this extension: [Based on @VietHung's Objective-C solution]
Swift 5:
extension UIImage {
    func imageWithColor(color: UIColor) -> UIImage? {
        var image = withRenderingMode(.alwaysTemplate)
        UIGraphicsBeginImageContextWithOptions(size, false, scale)
        color.set()
        image.draw(in: CGRect(x: 0, y: 0, width: size.width, height: size.height))
        image = UIGraphicsGetImageFromCurrentImageContext()!
        UIGraphicsEndImageContext()
        return image
    }
}
Previous Swift version:
extension UIImage {
    func imageWithColor(color: UIColor) -> UIImage? {
        var image = imageWithRenderingMode(.AlwaysTemplate)
        UIGraphicsBeginImageContextWithOptions(size, false, scale)
        color.set()
        image.drawInRect(CGRect(x: 0, y: 0, width: size.width, height: size.height))
        image = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return image
    }
}
In Swift 2.0 you can use this:
let image = UIImage(named: "your image name")?.imageWithRenderingMode(.AlwaysTemplate)
yourimageView.tintColor = UIColor.redColor()
yourimageView.image = image
In Swift 3.0 you can use this:
let image = UIImage(named: "your image name")?.withRenderingMode(.alwaysTemplate)
yourimageView.tintColor = UIColor.red
yourimageView.image = image
Try something like this
UIImage *originalImage = _sketchImageView.image;
UIImage *newImage = [originalImage imageWithRenderingMode:UIImageRenderingModeAlwaysTemplate];
UIImageView *imageView = [[UIImageView alloc] initWithFrame:CGRectMake(0,0,50,50)]; // your image size
imageView.tintColor = [UIColor redColor]; // or whatever color that has been selected
imageView.image = newImage;
_sketchImageView.image = imageView.image;
Hope this helps.
In Swift 3.0 you can use this extension: [Based on @VietHung's Objective-C solution]
extension UIImage {
    func imageWithColor(_ color: UIColor) -> UIImage? {
        var image = withRenderingMode(.alwaysTemplate)
        UIGraphicsBeginImageContextWithOptions(size, false, scale)
        color.set()
        image.draw(in: CGRect(x: 0, y: 0, width: size.width, height: size.height))
        image = UIGraphicsGetImageFromCurrentImageContext()!
        UIGraphicsEndImageContext()
        return image
    }
}
For Swift 3.0, I made a custom subclass of UIImageView called TintedUIImageView. Now the image uses whatever tint color is set in Interface Builder or code:
class TintedUIImageView: UIImageView {
    override func awakeFromNib() {
        super.awakeFromNib()
        if let image = self.image {
            self.image = image.withRenderingMode(.alwaysTemplate)
        }
    }
}
You can try:
_sketchImageView.image = [self imageNamed:@"imageName" withColor:[UIColor blackColor]];
- (UIImage *)imageNamed:(NSString *)name withColor:(UIColor *)color
{
    // load the image
    //NSString *name = @"badge.png";
    UIImage *img = [UIImage imageNamed:name];

    // begin a new image context, to draw our colored image onto
    UIGraphicsBeginImageContext(img.size);

    // get a reference to that context we created
    CGContextRef context = UIGraphicsGetCurrentContext();

    // set the fill color
    [color setFill];

    // translate/flip the graphics context (for transforming from CG* coords to UI* coords)
    CGContextTranslateCTM(context, 0, img.size.height);
    CGContextScaleCTM(context, 1.0, -1.0);

    // set the blend mode to color burn, and draw the original image
    CGContextSetBlendMode(context, kCGBlendModeColorBurn);
    CGRect rect = CGRectMake(0, 0, img.size.width, img.size.height);
    CGContextDrawImage(context, rect, img.CGImage);

    // set a mask that matches the shape of the image, then draw (color burn) a colored rectangle
    CGContextClipToMask(context, rect, img.CGImage);
    CGContextAddRect(context, rect);
    CGContextDrawPath(context, kCGPathFill);

    // generate a new UIImage from the graphics context we drew onto
    UIImage *coloredImg = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // return the color-burned image
    return coloredImg;
}
Try setting the tint color on the superview of the image view. E.g. [self.view setTintColor:color];
In Swift 4 you can simply make an extension like this:
import UIKit

extension UIImageView {
    func tintImageColor(color: UIColor) {
        guard let image = image else { return }
        self.image = image.withRenderingMode(.alwaysTemplate)
        self.tintColor = color
    }
}
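Usage, assuming an imageView outlet:
imageView.tintImageColor(color: .red)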
Swift 4:
extension UIImage {
    func imageWithColor(_ color: UIColor) -> UIImage? {
        var image: UIImage? = withRenderingMode(.alwaysTemplate)
        UIGraphicsBeginImageContextWithOptions(size, false, scale)
        color.set()
        image?.draw(in: CGRect(x: 0, y: 0, width: size.width, height: size.height))
        image = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return image
    }
}
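Usage, for example (the asset name is illustrative):
let tinted = UIImage(named: "icon")?.imageWithColor(.red)
imageView.image = tinted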
Here's how I apply and use tints in iOS 9 with Swift.
// apply a color to an image
// ref - http://stackoverflow.com/questions/28427935/how-can-i-change-image-tintcolor
// ref - https://www.captechconsulting.com/blogs/ios-7-tutorial-series-tint-color-and-easy-app-theming
func getTintedImage() -> UIImageView {
    var image: UIImage
    var imageView: UIImageView

    image = UIImage(named: "someAsset")!
    let size: CGSize = image.size
    let frame: CGRect = CGRectMake((UIScreen.mainScreen().bounds.width - 86) / 2, 600, size.width, size.height)

    let redCover: UIView = UIView(frame: frame)
    redCover.backgroundColor = UIColor.redColor()
    redCover.layer.opacity = 0.75

    imageView = UIImageView()
    imageView.image = image.imageWithRenderingMode(UIImageRenderingMode.Automatic)
    imageView.addSubview(redCover)

    return imageView
}
One thing you can do is add your images to the asset catalog in Xcode and change each image's Render As setting to Template Image; then whenever you change the tint color of the UIImageView, the change is automatically applied to the image.
Check this link out -> https://krakendev.io/blog/4-xcode-asset-catalog-secrets-you-need-to-know
let image = UIImage(named: "imageName")?.withRenderingMode(.alwaysTemplate)
imageView.tintColor = UIColor.white // Change to the required color
imageView.image = image
Try this
iOS 13.4 and above:
UIImage *image = [UIImage imageNamed:@"placeHolderIcon"];
image = [image imageWithTintColor:[UIColor whiteColor] renderingMode:UIImageRenderingModeAlwaysTemplate];
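The Swift equivalent (also iOS 13+) would be along these lines:
let image = UIImage(named: "placeHolderIcon")?
    .withTintColor(.white, renderingMode: .alwaysTemplate)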
How to create a circular image with border (UIGraphics)?
P.S. I need to draw a picture.
code in viewDidLoad:
NSURL *url2 = [NSURL URLWithString:@"http://images.ak.instagram.com/profiles/profile_55758514_75sq_1399309159.jpg"];
NSData *data2 = [NSData dataWithContentsOfURL:url2];
UIImage *profileImg = [UIImage imageWithData:data2];

// Create image context with the size of the background image.
UIGraphicsBeginImageContext(profileImg.size);
[profileImg drawInRect:CGRectMake(0, 0, profileImg.size.width, profileImg.size.height)];

// Get the newly created image.
UIImage *result = UIGraphicsGetImageFromCurrentImageContext();

// Release the context.
UIGraphicsEndImageContext();

// Set the newly created image to the imageView.
self.imageView.image = result;
It sounds like you want to clip the image to a circle. Here's an example:
static UIImage *circularImageWithImage(UIImage *inputImage,
                                       UIColor *borderColor, CGFloat borderWidth)
{
    CGRect rect = (CGRect){ .origin = CGPointZero, .size = inputImage.size };
    UIGraphicsBeginImageContextWithOptions(rect.size, NO, inputImage.scale); {
        // Fill the entire circle with the border color.
        [borderColor setFill];
        [[UIBezierPath bezierPathWithOvalInRect:rect] fill];

        // Clip to the interior of the circle (inside the border).
        CGRect interiorBox = CGRectInset(rect, borderWidth, borderWidth);
        UIBezierPath *interior = [UIBezierPath bezierPathWithOvalInRect:interiorBox];
        [interior addClip];

        [inputImage drawInRect:rect];
    }
    UIImage *outputImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return outputImage;
}
Result:
Have you tried this ?
self.imageView.layer.borderColor = [UIColor greenColor].CGColor;
self.imageView.layer.borderWidth = 1.f;
You'll also need
self.imageView.layer.cornerRadius = self.imageView.frame.size.width / 2;
self.imageView.clipsToBounds = YES;
Swift 4 version
extension UIImage {
    func circularImageWithBorderOf(color: UIColor, diameter: CGFloat, borderWidth: CGFloat) -> UIImage {
        let aRect = CGRect(x: 0, y: 0, width: diameter, height: diameter)
        UIGraphicsBeginImageContextWithOptions(aRect.size, false, self.scale)

        // Fill the whole circle with the border color.
        color.setFill()
        UIBezierPath(ovalIn: aRect).fill()

        // Clip to the interior and draw the image inside the border.
        let anInteriorRect = CGRect(x: borderWidth, y: borderWidth, width: diameter - 2 * borderWidth, height: diameter - 2 * borderWidth)
        UIBezierPath(ovalIn: anInteriorRect).addClip()
        self.draw(in: anInteriorRect)

        let anImg = UIGraphicsGetImageFromCurrentImageContext()!
        UIGraphicsEndImageContext()
        return anImg
    }
}
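Usage, e.g. for an avatar (profileImage is whatever UIImage you already have; the diameter and border width are illustrative):
imageView.image = profileImage.circularImageWithBorderOf(color: .white, diameter: 100, borderWidth: 2)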
I would like to create a 1x1 UIImage dynamically based on a UIColor.
I suspect this can quickly be done with Quartz 2D, and I'm poring over the documentation trying to get a grasp of the fundamentals. However, it looks like there are a lot of potential pitfalls: not specifying the number of bits per component or bytes per row correctly, not specifying the right flags, not releasing unused data, etc.
How can this be safely done with Quartz 2d (or another simpler way)?
You can use CGContextSetFillColorWithColor and CGContextFillRect for this:
Swift
extension UIImage {
    class func image(with color: UIColor) -> UIImage {
        let rect = CGRectMake(0.0, 0.0, 1.0, 1.0)
        UIGraphicsBeginImageContext(rect.size)
        let context = UIGraphicsGetCurrentContext()
        CGContextSetFillColorWithColor(context, color.CGColor)
        CGContextFillRect(context, rect)
        let image = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return image
    }
}
Swift 3
extension UIImage {
    class func image(with color: UIColor) -> UIImage {
        let rect = CGRect(origin: CGPoint(x: 0, y: 0), size: CGSize(width: 1, height: 1))
        UIGraphicsBeginImageContext(rect.size)
        let context = UIGraphicsGetCurrentContext()!
        context.setFillColor(color.cgColor)
        context.fill(rect)
        let image = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return image!
    }
}
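Usage, for example as a button background (button is a hypothetical outlet):
let pixel = UIImage.image(with: .red)
button.setBackgroundImage(pixel, for: .normal)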
Objective-C
+ (UIImage *)imageWithColor:(UIColor *)color {
    CGRect rect = CGRectMake(0.0f, 0.0f, 1.0f, 1.0f);
    UIGraphicsBeginImageContext(rect.size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetFillColorWithColor(context, [color CGColor]);
    CGContextFillRect(context, rect);
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
Here's another option based on Matt Stephen's code. It creates a resizable solid-color image, so you can reuse it or change its size (e.g. use it as a background).
+ (UIImage *)prefix_resizeableImageWithColor:(UIColor *)color {
    CGRect rect = CGRectMake(0.0f, 0.0f, 3.0f, 3.0f);
    UIGraphicsBeginImageContext(rect.size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetFillColorWithColor(context, [color CGColor]);
    CGContextFillRect(context, rect);
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    image = [image resizableImageWithCapInsets:UIEdgeInsetsMake(1, 1, 1, 1)];

    return image;
}
Put it in a UIImage category and change the prefix.
I used Matt Steven's answer many times so made a category for it:
@interface UIImage (mxcl)
+ (UIImage *)squareImageWithColor:(UIColor *)color dimension:(int)dimension;
@end

@implementation UIImage (mxcl)

+ (UIImage *)squareImageWithColor:(UIColor *)color dimension:(int)dimension {
    CGRect rect = CGRectMake(0, 0, dimension, dimension);
    UIGraphicsBeginImageContext(rect.size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetFillColorWithColor(context, [color CGColor]);
    CGContextFillRect(context, rect);
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}

@end
Using Apple's latest UIGraphicsImageRenderer the code is pretty small:
import UIKit

extension UIImage {
    static func from(color: UIColor) -> UIImage {
        let size = CGSize(width: 1, height: 1)
        return UIGraphicsImageRenderer(size: size).image { context in
            context.cgContext.setFillColor(color.cgColor)
            context.fill(.init(origin: .zero, size: size))
        }
    }
}
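Usage:
let redPixel = UIImage.from(color: .red)
imageView.image = redPixel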
To me, a convenience init feels neater in Swift.
extension UIImage {
    convenience init?(color: UIColor, size: CGSize = CGSize(width: 1, height: 1)) {
        let rect = CGRect(x: 0, y: 0, width: size.width, height: size.height)
        UIGraphicsBeginImageContext(rect.size)
        // Make sure the context is always cleaned up, even on the early returns below.
        defer { UIGraphicsEndImageContext() }

        guard let context = UIGraphicsGetCurrentContext() else {
            return nil
        }
        context.setFillColor(color.cgColor)
        context.fill(rect)

        guard let image = context.makeImage() else {
            return nil
        }
        self.init(cgImage: image)
    }
}
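Usage:
let swatch = UIImage(color: .blue, size: CGSize(width: 4, height: 4))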
Ok, this won't be exactly what you want, but this code will draw a line. You can adapt it to make a point. Or at least get a little info from it.
Making the image 1x1 seems a little weird. Strokes ride the line, so a stroke of width 1.0 at 0.5 should work (a sketch of that adaptation follows the code below). Just play around.
- (void)drawLine {
    UIGraphicsBeginImageContext(CGSizeMake(320, 300));
    CGContextRef ctx = UIGraphicsGetCurrentContext();

    float x = 0;
    float xEnd = 320;
    float y = 300;

    CGContextClearRect(ctx, CGRectMake(5, 45, 320, 300));
    CGContextSetGrayStrokeColor(ctx, 1.0, 1.0);
    CGContextSetLineWidth(ctx, 1);

    CGPoint line[2] = { CGPointMake(x, y), CGPointMake(xEnd, y) };
    CGContextStrokeLineSegments(ctx, line, 2);

    UIImage *theImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
}
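A hedged Swift adaptation of the same idea for the 1x1 case (not the original answer's code), stroking a single width-1 segment centered at y = 0.5 so it fills the one pixel:
func pointImage(color: UIColor) -> UIImage? {
    UIGraphicsBeginImageContextWithOptions(CGSize(width: 1, height: 1), false, 0)
    guard let ctx = UIGraphicsGetCurrentContext() else { return nil }
    ctx.setStrokeColor(color.cgColor)
    ctx.setLineWidth(1)
    // The stroke rides the line, so a width-1 stroke at y = 0.5 covers the single point.
    ctx.move(to: CGPoint(x: 0, y: 0.5))
    ctx.addLine(to: CGPoint(x: 1, y: 0.5))
    ctx.strokePath()
    let image = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return image
}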