Create a circle or a disk with antialiasing for retina display - ios

I have created a circle using CGContextFillEllipseInRect from Core Graphics.
I'm using this circle (which is actually a disk) to replace the thumbImage of a UISlider. Antialiasing is applied by default.
But the result on my iPhone 6 is clearly not good. I can clearly see the pixels, not as much as with antialiasing off, but way more than on a normal UISlider's thumb.
Maybe I'm doing something wrong. So my question is: is there a way to programmatically get the same smooth disk as the one used by default for a UISlider?
EDIT:
Here is how I create the disk:
class func drawDisk(rgbValue: UInt32, rectForDisk: CGRect = CGRectMake(0.0, 0.0, 1.0, 1.0)) -> UIImage {
    let color = uicolorFromHex(rgbValue)
    UIGraphicsBeginImageContext(rectForDisk.size)
    let context = UIGraphicsGetCurrentContext()
    CGContextSetFillColorWithColor(context, color.CGColor)
    CGContextFillEllipseInRect(context, rectForDisk)
    let rectForCircle = CGRectMake(0.5, 0.5, rectForDisk.size.width - 1, rectForDisk.size.height - 1)
    CGContextSetLineWidth(context, 1.0)
    CGContextSetStrokeColorWithColor(context, UIColor.blackColor().CGColor)
    CGContextAddEllipseInRect(context, rectForCircle)
    CGContextStrokePath(context)
    let image = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return image
}

The problem is that you are creating a non-retina graphics context when using UIGraphicsBeginImageContext, as mentioned in the documentation:
This function is equivalent to calling the UIGraphicsBeginImageContextWithOptions function with the opaque parameter set to NO and a scale factor of 1.0.
Instead you should be using UIGraphicsBeginImageContextWithOptions to create your image context. You can keep passing NO for the opaque parameter if you want an image that supports transparency (same as what you are implicitly doing now).
In most cases you can pass 0.0 for the scale factor. This sets the scale factor to that of the device's main screen. Again, as mentioned in the documentation:
If you specify a value of 0.0, the scale factor is set to the scale factor of the device’s main screen.
So, in short, you should create your image context like this:
UIGraphicsBeginImageContextWithOptions(rectForDisk.size, false, 0.0) // false for Swift
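Applied to the drawDisk method above, a minimal sketch (keeping the question's Swift 2-era Core Graphics calls and its uicolorFromHex helper) might look like this:
class func drawDisk(rgbValue: UInt32, rectForDisk: CGRect = CGRectMake(0.0, 0.0, 1.0, 1.0)) -> UIImage {
    let color = uicolorFromHex(rgbValue)
    // 0.0 as the scale means "use the main screen's scale factor",
    // so the bitmap is created at Retina resolution on Retina devices.
    UIGraphicsBeginImageContextWithOptions(rectForDisk.size, false, 0.0)
    let context = UIGraphicsGetCurrentContext()
    CGContextSetFillColorWithColor(context, color.CGColor)
    CGContextFillEllipseInRect(context, rectForDisk)
    let rectForCircle = CGRectMake(0.5, 0.5, rectForDisk.size.width - 1, rectForDisk.size.height - 1)
    CGContextSetLineWidth(context, 1.0)
    CGContextSetStrokeColorWithColor(context, UIColor.blackColor().CGColor)
    CGContextAddEllipseInRect(context, rectForCircle)
    CGContextStrokePath(context)
    let image = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return image
}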

Related

CIImage extent in pixels or points?

I'm working with a CIImage, and while I understand it's not a linear image, it does hold some data.
My question is whether a CIImage's extent property is in pixels or points. According to the documentation, which says very little, it's working space coordinates. Does this mean there's no way to get the pixels / points from a CIImage, and I must convert it to a UIImage and use its .size property to get the points?
I have a UIImage with a certain size, and when I create a CIImage using the UIImage, the extent is shown in points. But if I run a CIImage through a CIFilter that scales it, I sometimes get the extent returned in pixel values.
I'll answer as best I can.
If your source is a UIImage, its size will be the same as the extent. But bear in mind this isn't a UIImageView (whose size is in points). And we're only talking about the source image.
Running something through a CIFilter means you are manipulating things. If all you are doing is manipulating color, its size/extent shouldn't change (the same as creating your own CIColorKernel - it works pixel-by-pixel).
But, depending on the CIFilter, you may well be changing the size/extent. Certain filters create a mask, or tile. These may actually have an extent that is infinite! Others (blurs are a great example) sample surrounding pixels so their extent actually increases because they sample "pixels" beyond the source image's size. (Custom-wise these are a CIWarpKernel.)
So yes, it can vary quite a bit. Taking this to a bottom line:
What is the filter doing? Does it need to simply check a pixel's RGB and do something? Then the UIImage size should be the output CIImage extent.
Does the filter produce something that depends on the pixel's surrounding pixels? Then the output CIImage extent is slightly larger. How much may depend on the filter.
There are filters that produce something with no regard to an input. Most of these may have no true extent, as they can be infinite.
Points are what UIKit and CoreGraphics always work with. Pixels? At some point Core Image does, but it's low-level enough that (unless you want to write your own kernel) you shouldn't care. Extents can usually - but keep in mind the above - be equated to a UIImage size.
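As a quick illustration (the asset name "photo" and the blur radius are placeholders), you can compare a UIImage's point size with the extents Core Image reports before and after a blur:
import UIKit
import CoreImage

// Compare a UIImage's point size with the extent its CIImage reports,
// before and after a blur filter that samples neighbouring pixels.
let uiImage = UIImage(named: "photo")!   // placeholder asset
let ciImage = CIImage(image: uiImage)!

print("UIImage size (points): \(uiImage.size), scale: \(uiImage.scale)")
print("CIImage extent:        \(ciImage.extent)")

let blur = CIFilter(name: "CIGaussianBlur")!
blur.setValue(ciImage, forKey: kCIInputImageKey)
blur.setValue(10.0, forKey: kCIInputRadiusKey)
// The blurred extent is larger than the source extent because the blur
// samples beyond the source image's edges.
print("Blurred extent:        \(blur.outputImage!.extent)")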
EDIT
Many images (particularly RAW ones) can have so large a size as to affect performance. I have an extension for UIImage that resizes an image to a specific rectangle to help maintain consistent CI performance.
extension UIImage {
    public func resizeToBoundingSquare(_ boundingSquareSideLength: CGFloat) -> UIImage {
        let imgScale = self.size.width > self.size.height ? boundingSquareSideLength / self.size.width : boundingSquareSideLength / self.size.height
        let newWidth = self.size.width * imgScale
        let newHeight = self.size.height * imgScale
        let newSize = CGSize(width: newWidth, height: newHeight)
        UIGraphicsBeginImageContext(newSize)
        self.draw(in: CGRect(x: 0, y: 0, width: newWidth, height: newHeight))
        let resizedImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return resizedImage!
    }
}
Usage:
image = image.resizeToBoundingSquare(640)
In this example, an image size of 3200x2000 would be reduced to 640x400. Or an image size of 320x200 would be enlarged to 640x400. I do this to an image before rendering it and before creating a CIImage to use in a CIFilter.
I suggest you think of them as points. There is no scale and no screen (a CIImage is not something that is drawn), so there are no pixels.
A UIImage backed by a CGImage is the basis for drawing, and in addition to the CGImage it has a scale; together with the screen resolution, that gives us our translation from points to pixels.
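As a tiny illustration of that relationship (assuming an unrotated, CGImage-backed UIImage; the asset name is a placeholder):
let image = UIImage(named: "photo")!            // placeholder asset
if let cgImage = image.cgImage {
    let pointWidth = image.size.width           // points
    let pixelWidth = CGFloat(cgImage.width)     // pixels
    // For an unrotated, CGImage-backed UIImage: points * scale == pixels
    print(pointWidth, image.scale, pixelWidth, pointWidth * image.scale == pixelWidth)
}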

iOS: the quality of lines drawn using CGContext is much worse than SKShapeNode in SpriteKit

I'm making a game where the user can draw lines with a finger. There are tons of methods described online. I tried two: one using CGContext in a UIView (UIKit), and the other using CGPath and SKShapeNode in SpriteKit. But the latter shows much better quality; the CGContext version has ugly, rough edges.
Please see the following screenshots. I've also attached part of the code for both methods (from the touchesMoved function).
Note: var ref = CGPathCreateMutable()
CGContext in UIView
CGPathAddLineToPoint(ref, nil, currentPoint.x, currentPoint.y)
UIGraphicsBeginImageContext(self.frame.size)
let context = UIGraphicsGetCurrentContext()
tempImageView.image?.drawInRect(CGRect(x: 0, y: 0, width: self.frame.size.width, height: self.frame.size.height))
CGContextAddPath(context, ref)
// 3
CGContextSetLineCap(context, CGLineCap.Round)
CGContextSetLineWidth(context, brushWidth)
CGContextSetRGBStrokeColor(context, red, green, blue, 1.0)
CGContextSetBlendMode(context, CGBlendMode.Normal)
// 4
CGContextStrokePath(context)
// 5
UIGraphicsEndImageContext()
SKShapeNode in SpriteKit
Note: var lineDrawing = SKShapeNode()
CGPathAddLineToPoint(ref, nil, location.x, location.y)
lineDrawing.path = ref
lineDrawing.lineWidth = 3
lineDrawing.strokeColor = UIColor.blackColor()
lineDrawing.alpha = 1.0
self.addChild(lineDrawing)
How can I draw lines in a UIView with the same quality as SKShapeNode?
One obvious problem is that you're using an outdated function that doesn't handle Retina screens:
UIGraphicsBeginImageContext(self.frame.size)
You should be using this instead:
UIGraphicsBeginImageContextWithOptions(self.frame.size, false, 0)
The WithOptions version, with 0 as the final argument, creates an image context at Retina resolution if the device has a Retina screen. (The 0 argument means “use the device screen's scale factor”.)
There may be other issues, because you didn't show the code that creates the points in the path for each test case.
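As a sketch in the same Swift 2-era syntax as the question, the UIView drawing block would become something like the following (the final assignment back to tempImageView.image is an assumption about what the full touchesMoved code does with the result):
CGPathAddLineToPoint(ref, nil, currentPoint.x, currentPoint.y)

// 0 as the scale factor means "use the main screen's scale", so the offscreen
// bitmap is created at Retina resolution on Retina devices.
UIGraphicsBeginImageContextWithOptions(self.frame.size, false, 0)
let context = UIGraphicsGetCurrentContext()
tempImageView.image?.drawInRect(CGRect(x: 0, y: 0, width: self.frame.size.width, height: self.frame.size.height))
CGContextAddPath(context, ref)
CGContextSetLineCap(context, CGLineCap.Round)
CGContextSetLineWidth(context, brushWidth)
CGContextSetRGBStrokeColor(context, red, green, blue, 1.0)
CGContextSetBlendMode(context, CGBlendMode.Normal)
CGContextStrokePath(context)
// Presumably the full code captures the result before closing the context:
tempImageView.image = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()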

UIImagePicker: How to always pick square image from camera library?

I am wondering how I can always pick a square image from the camera library using UIImagePicker.
I have set imagePicker.allowsEditing = true, and when the image I pick is large enough (larger than the square crop), the picked image is square. But when the image I pick is smaller, say 748 by 466, even though the square crop encloses the image including the black bars at the top and bottom, the picked image does not include those black parts, so it returns a non-square image. How do I make it so that the black top and bottom parts are always included and the image is always square?
Thanks a lot for help!
Here is a method that uses Core Graphics to add the black areas manually; put it in your UIImagePicker delegate method:
let squareSideLength = image.size.width > image.size.height ? image.size.width : image.size.height
UIGraphicsBeginImageContextWithOptions(CGSizeMake(squareSideLength, squareSideLength), false, 1)
let context = UIGraphicsGetCurrentContext()
CGContextSetFillColorWithColor(context, UIColor.blackColor().CGColor)
CGContextFillRect(context, CGRectMake(0, 0, squareSideLength, squareSideLength))
image.drawInRect(CGRectMake((squareSideLength - image.size.width) / 2, (squareSideLength - image.size.height) / 2, image.size.width, image.size.height))
let imageYouWant = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
Then use imageYouWant.
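For context, a sketch of the delegate method this could live in (Swift 2-era syntax; the delegate signature and the info-dictionary keys are standard UIKit, the rest mirrors the snippet above):
func imagePickerController(picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [String : AnyObject]) {
    // Prefer the edited image when allowsEditing is on, fall back to the original.
    guard let image = (info[UIImagePickerControllerEditedImage] ?? info[UIImagePickerControllerOriginalImage]) as? UIImage else { return }

    let squareSideLength = image.size.width > image.size.height ? image.size.width : image.size.height
    UIGraphicsBeginImageContextWithOptions(CGSizeMake(squareSideLength, squareSideLength), false, 1)
    let context = UIGraphicsGetCurrentContext()
    CGContextSetFillColorWithColor(context, UIColor.blackColor().CGColor)
    CGContextFillRect(context, CGRectMake(0, 0, squareSideLength, squareSideLength))
    image.drawInRect(CGRectMake((squareSideLength - image.size.width) / 2,
                                (squareSideLength - image.size.height) / 2,
                                image.size.width, image.size.height))
    let imageYouWant = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()

    // ... hand imageYouWant to whatever needs the square image ...
    picker.dismissViewControllerAnimated(true, completion: nil)
}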

How to tint only one part of the UIImage with alpha channel - PNG (replacing color)?

I have this transparent image:
My goal is to change the color of the "ME!" part. Either tint only the last third of the image, or replace the blue color with the new color.
Expected result after color change:
Unfortunately neither worked for me. To change the specific color I tried this: LINK, but as the documentation says, this works only without alpha channel!
Then I tried this one: LINK, but this actually does nothing, no tint or anything.
Is there any other way to tint only one part of the image, or just replace a specific color?
I know I could slice the image in two parts, but I hope there is another way.
It turns out to be surprisingly complicated—you’d think you could do it in one pass with CoreGraphics blend modes, but from pretty extensive experimentation I haven’t found such a way that doesn’t mangle the alpha channel or the coloration. The solution I landed on is this:
1. Start with a grayscale/alpha version of your image rather than a colored one: black in the areas you don't want tinted, white in the areas you do.
2. Create an image context with your image's dimensions.
3. Fill that context with black.
4. Draw the image into the context.
5. Get a new image (let's call it "the-image-over-black") from that context.
6. Clear the context (so you can use it again).
7. Fill the context with the color you want the tinted part of your image to be.
8. Draw the-image-over-black into the context with the "multiply" blend mode.
9. Draw the original image into the context with the "destination in" blend mode.
10. Get your final image from the context.
The reason this works is the combination of blend modes. What you're doing is creating a fully opaque black-and-white image (step 5), then multiplying it by your final color (step 8), which gives you a fully opaque black-and-your-final-color image. Then you take the original image, which still has its alpha channel, and draw it with the "destination in" blend mode, which takes the color from the black-and-your-color image and the alpha channel from the original image. The result is a tinted image with the original brightness values and alpha channel.
Objective-C
- (UIImage *)createTintedImageFromImage:(UIImage *)originalImage color:(UIColor *)desiredColor {
    CGSize imageSize = originalImage.size;
    CGFloat imageScale = originalImage.scale;
    CGRect contextBounds = CGRectMake(0, 0, imageSize.width, imageSize.height);
    UIGraphicsBeginImageContextWithOptions(imageSize, NO /* not opaque */, imageScale); // 2
    [[UIColor blackColor] setFill]; // 3a
    UIRectFill(contextBounds); // 3b
    [originalImage drawAtPoint:CGPointZero]; // 4
    UIImage *imageOverBlack = UIGraphicsGetImageFromCurrentImageContext(); // 5
    CGContextClearRect(UIGraphicsGetCurrentContext(), contextBounds); // 6
    [desiredColor setFill]; // 7a
    UIRectFill(contextBounds); // 7b
    [imageOverBlack drawAtPoint:CGPointZero blendMode:kCGBlendModeMultiply alpha:1]; // 8
    [originalImage drawAtPoint:CGPointZero blendMode:kCGBlendModeDestinationIn alpha:1]; // 9
    UIImage *finalImage = UIGraphicsGetImageFromCurrentImageContext(); // 10
    UIGraphicsEndImageContext();
    return finalImage;
}
Swift 4
func createTintedImageFromImage(originalImage: UIImage, desiredColor: UIColor) -> UIImage {
    let imageSize = originalImage.size
    let imageScale = originalImage.scale
    let contextBounds = CGRect(origin: .zero, size: imageSize)
    UIGraphicsBeginImageContextWithOptions(imageSize, false /* not opaque */, imageScale) // 2
    defer { UIGraphicsEndImageContext() }
    UIColor.black.setFill() // 3a
    UIRectFill(contextBounds) // 3b
    originalImage.draw(at: .zero) // 4
    guard let imageOverBlack = UIGraphicsGetImageFromCurrentImageContext() else { return originalImage } // 5
    desiredColor.setFill() // 7a
    UIRectFill(contextBounds) // 7b
    imageOverBlack.draw(at: .zero, blendMode: .multiply, alpha: 1) // 8
    originalImage.draw(at: .zero, blendMode: .destinationIn, alpha: 1) // 9
    guard let finalImage = UIGraphicsGetImageFromCurrentImageContext() else { return originalImage } // 10
    return finalImage
}
There are lots of ways to do this.
Core Image filters come to mind as a good way to go. Since the part you want to change is a unique color, you could use the Core Image CIHueAdjust filter to shift the hue from blue to red. Only the word you want to change has any color to it, so that's all it would change.
If you had an image with various colors in it and still wanted to replace all the blue in the image, you could use CIColorCube to map the blue pixels to red without affecting other colors. There was a thread on this board last week with sample code using CIColorCube to force one color to another. Search on CIColorCube and look for the most recent post and you should be able to find it.
If you wanted to limit the change to a specific area of the screen you could probably come up with a sequence of core image filters that would limit your changes to just the target area.
You could also slice out the part you want to change, color edit it using any of a variety of techniques, and then composite it back together.
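A rough sketch of the CIHueAdjust idea mentioned above (the angle is a placeholder you would tune to rotate blue toward the red you want; this shifts every hue in the image, which is fine here because only the "ME!" part has any color):
import UIKit
import CoreImage

func hueShifted(image: UIImage, byAngleRadians angle: Float) -> UIImage? {
    guard let input = CIImage(image: image),
          let filter = CIFilter(name: "CIHueAdjust") else { return nil }
    filter.setValue(input, forKey: kCIInputImageKey)
    filter.setValue(angle, forKey: kCIInputAngleKey)

    guard let output = filter.outputImage else { return nil }
    let context = CIContext()
    guard let cgImage = context.createCGImage(output, from: output.extent) else { return nil }
    return UIImage(cgImage: cgImage, scale: image.scale, orientation: image.imageOrientation)
}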
Another way is to use the Core Image CIColorCube filter.
I made a category for myself when I had this problem. It is for NSImage, but I think it should work for UIImage after some updating:
https://github.com/braginets/NSImage-replace-color

Filling UIBezierPath results in pixelated image from image context [duplicate]

This question already has an answer here:
How come my drawing code keeps resulting in fuzzy shapes?
(1 answer)
Closed 8 years ago.
I have implemented a very simple filled-in circle using UIBezierPath, which I then convert to a UIImage so that I can set it as a UITableViewCell's imageView. This works really well, but you can see the edges are pixelated on a Retina display. Why is that, and what can be done to ensure it looks fantastic?
let colorSize = cell.frame.size.height - 20
let rect = CGRectMake(0, 0, colorSize, colorSize)
UIGraphicsBeginImageContext(rect.size)
let context = UIGraphicsGetCurrentContext()
UIBezierPath(roundedRect: rect, cornerRadius: (colorSize)/2).addClip()
CGContextSetFillColorWithColor(context, UIColor.redColor().CGColor);
CGContextFillRect(context, rect)
var colorImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
cell.imageView?.image = colorImage
Step 1. Read the UIGraphicsBeginImageContext documentation:
This function is equivalent to calling the UIGraphicsBeginImageContextWithOptions function with the opaque parameter set to false and a scale factor of 1.0.
Step 2. Follow the link to the UIGraphicsBeginImageContextWithOptions documentation, which says:
scale The scale factor to apply to the bitmap. If you specify a value of 0.0, the scale factor is set to the scale factor of the device’s main screen.
Step 3. Try using UIGraphicsBeginImageContextWithOptions with a scale of 0.0:
UIGraphicsBeginImageContextWithOptions(rect.size, false, 0.0)
Step 4. Profit…?
