I have implemented a very simple filled-in circle using UIBezierPath, which I then convert to a UIImage so that I can set it as a UITableViewCell's imageView image. This works well, but the edges look pixelated on a Retina display. Why is that, and what can be done to make it look crisp?
let colorSize = cell.frame.size.height - 20
let rect = CGRectMake(0, 0, colorSize, colorSize)
UIGraphicsBeginImageContext(rect.size)
let context = UIGraphicsGetCurrentContext()
// Clip to a circle (corner radius = half the side), then fill the clipped area with red.
UIBezierPath(roundedRect: rect, cornerRadius: colorSize / 2).addClip()
CGContextSetFillColorWithColor(context, UIColor.redColor().CGColor)
CGContextFillRect(context, rect)
let colorImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
cell.imageView?.image = colorImage
Step 1. Read the UIGraphicsBeginImageContext documentation:
This function is equivalent to calling the UIGraphicsBeginImageContextWithOptions function with the opaque parameter set to false and a scale factor of 1.0.
Step 2. Follow the link to the UIGraphicsBeginImageContextWithOptions documentation, which says:
scale The scale factor to apply to the bitmap. If you specify a value of 0.0, the scale factor is set to the scale factor of the device’s main screen.
Step 3. Try using UIGraphicsBeginImageContextWithOptions with a scale of 0.0:
UIGraphicsBeginImageContextWithOptions(rect.size, false, 0.0)
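Applied to the snippet from the question, the change might look like this sketch (Swift 2-era syntax, matching the question; only the context-creation line differs):
// Sketch: same drawing code, but with a context that uses the main screen's scale factor.
let colorSize = cell.frame.size.height - 20
let rect = CGRectMake(0, 0, colorSize, colorSize)
UIGraphicsBeginImageContextWithOptions(rect.size, false, 0.0)
let context = UIGraphicsGetCurrentContext()
UIBezierPath(roundedRect: rect, cornerRadius: colorSize / 2).addClip()
CGContextSetFillColorWithColor(context, UIColor.redColor().CGColor)
CGContextFillRect(context, rect)
let colorImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
cell.imageView?.image = colorImage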
Step 4. Profit…?
I'm working with a CIImage, and while I understand it's not a linear image, it does hold some data.
My question is whether a CIImage's extent property is measured in pixels or points. The documentation, which says very little, only calls them working-space coordinates. Does this mean there's no way to get the pixel/point dimensions from a CIImage, and that I must convert it to a UIImage and use its size property to get the points?
I have a UIImage with a certain size, and when I create a CIImage using the UIImage, the extent is shown in points. But if I run a CIImage through a CIFilter that scales it, I sometimes get the extent returned in pixel values.
I'll answer the best I can.
If your source is a UIImage, its size will be the same as the extent. But bear in mind this isn't a UIImageView (whose size is in points), and we're only talking about the source image here.
Running something through a CIFilter means you are manipulating things. If all you are doing is manipulating color, the size/extent shouldn't change (the same as writing your own CIColorKernel, which works pixel by pixel).
But depending on the CIFilter, you may well be changing the size/extent. Certain filters create a mask or tile; these may actually have an infinite extent. Others (blurs are a great example) sample surrounding pixels, so their extent actually increases because they sample "pixels" beyond the source image's size. (The custom equivalent is a CIWarpKernel.)
So yes, it can vary quite a bit. The bottom line:
What is the filter doing? Does it need to simply check a pixel's RGB and do something? Then the UIImage size should be the output CIImage extent.
Does the filter produce something that depends on a pixel's surrounding pixels? Then the output CIImage extent is slightly larger. How much larger depends on the filter.
There are filters that produce something with no regard to an input. Most of these may have no true extent, as they can be infinite.
Points are what UIKit and Core Graphics always work with. Pixels? Core Image does at some point, but it's low-level enough that (unless you want to write your own kernel) you shouldn't care. Extents can usually, with the caveats above, be equated to a UIImage's size.
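To see this concretely, here is a small sketch (the filter name is real, but the radius and the helper function are just illustrative) comparing extents before and after a blur:
import UIKit
import CoreImage

// Sketch: compare a CIImage's extent before and after a Gaussian blur.
// The blur samples neighboring pixels, so the output extent grows beyond the input.
func compareExtents(of uiImage: UIImage) {
    guard let input = CIImage(image: uiImage),
          let blur = CIFilter(name: "CIGaussianBlur") else { return }

    blur.setValue(input, forKey: kCIInputImageKey)
    blur.setValue(10.0, forKey: kCIInputRadiusKey)   // illustrative radius

    print("input extent:  \(input.extent)")                       // matches the source bitmap
    print("output extent: \(blur.outputImage?.extent ?? .zero)")  // larger than the input
}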
EDIT
Many images (particularly RAW ones) can be so large that they affect performance. I have an extension for UIImage that resizes an image to fit a bounding square, to help keep Core Image performance consistent.
extension UIImage {
    /// Scales the image (up or down) so its longer side equals boundingSquareSideLength,
    /// preserving the aspect ratio.
    public func resizeToBoundingSquare(_ boundingSquareSideLength: CGFloat) -> UIImage {
        let imgScale = self.size.width > self.size.height
            ? boundingSquareSideLength / self.size.width
            : boundingSquareSideLength / self.size.height
        let newWidth = self.size.width * imgScale
        let newHeight = self.size.height * imgScale
        let newSize = CGSize(width: newWidth, height: newHeight)
        UIGraphicsBeginImageContext(newSize)
        self.draw(in: CGRect(x: 0, y: 0, width: newWidth, height: newHeight))
        let resizedImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return resizedImage!
    }
}
Usage:
image = image.resizeToBoundingSquare(640)
In this example, an image of size 3200x2000 would be reduced to 640x400, and an image of size 320x200 would be enlarged to 640x400. I do this to an image before rendering it and before creating a CIImage to use in a CIFilter.
I suggest you think of them as points. There is no scale and no screen (a CIImage is not something that is drawn), so there are no pixels.
A UIImage backed by a CGImage is the basis for drawing, and in addition to the CGImage it has a scale; together with the screen resolution, that gives us our translation from points to pixels.
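As a small sketch of how that translation works (the function name and the 200-pixel figures are illustrative assumptions, not from the question):
import UIKit

// Sketch: the same CGImage (fixed pixel data) interpreted at two different scales.
// The pixel dimensions never change; the point size is pixels divided by scale.
func showScaleEffect(on image: UIImage) {
    guard let cgImage = image.cgImage else { return }
    let at1x = UIImage(cgImage: cgImage, scale: 1.0, orientation: .up)
    let at2x = UIImage(cgImage: cgImage, scale: 2.0, orientation: .up)
    print(at1x.size)  // e.g. 200 x 200 points for a 200 x 200-pixel bitmap
    print(at2x.size)  // 100 x 100 points for the same bitmap
}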
I used to resize an image with the following code, and the scale factor was handled correctly. Now, with Swift 3, I can't figure out why the scale factor is not taken into account: the image is resized, but the scale factor is not applied. Do you know why?
let layer = self.imageview.layer
UIGraphicsBeginImageContextWithOptions(layer.bounds.size, true, 0)
layer.render(in: UIGraphicsGetCurrentContext()!)
let scaledImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
print("SCALED IMAGE SIZE IS \(scaledImage!.size)")
print(scaledImage!.scale)
For example, if I take a screenshot on an iPhone 5, the image size will be 320x568. I used to get 640x1136 with the exact same code. What can cause the scale factor not to be applied?
When I print the scale of the image, it prints 1, 2, or 3 depending on the device resolution, but that scale is not applied to the image taken from the context.
scaledImage!.size will not return the image size in pixels; it is in points.
CGImageGetWidth and CGImageGetHeight (cgImage.width and cgImage.height in Swift 3) return the size in pixels.
That is image.size * image.scale.
If you want to test it out:
let imageSize = scaledImage!.size                     // (320.0, 568.0) in points
let imageWidthInPixel = scaledImage!.cgImage!.width    // 640
let imageHeightInPixel = scaledImage!.cgImage!.height  // 1136
I'm making a game where the user can draw lines with a finger. There are tons of methods on websites; I tried two, one using CGContext in a UIView (UIKit) and the other using CGPath and SKShapeNode in SpriteKit. The latter shows much better quality; the first one, using CGContext, has ugly rough edges.
Please see the following screenshots. I also attached part of the code for both methods here (from the touchesMoved function).
Note: var ref = CGPathCreateMutable()
CGContext in UIView
CGPathAddLineToPoint(ref, nil, currentPoint.x, currentPoint.y)
UIGraphicsBeginImageContext(self.frame.size)
let context = UIGraphicsGetCurrentContext()
// Redraw the accumulated image, then stroke the updated path on top of it.
tempImageView.image?.drawInRect(CGRect(x: 0, y: 0, width: self.frame.size.width, height: self.frame.size.height))
CGContextAddPath(context, ref)
// Configure the stroke.
CGContextSetLineCap(context, CGLineCap.Round)
CGContextSetLineWidth(context, brushWidth)
CGContextSetRGBStrokeColor(context, red, green, blue, 1.0)
CGContextSetBlendMode(context, CGBlendMode.Normal)
// Stroke the path.
CGContextStrokePath(context)
UIGraphicsEndImageContext()
SKShapeNode in SpriteKit
Note: var lineDrawing = SKShapeNode()
CGPathAddLineToPoint(ref, nil, location.x, location.y)
lineDrawing.path = ref
lineDrawing.lineWidth = 3
lineDrawing.strokeColor = UIColor.blackColor()
lineDrawing.alpha = 1.0
self.addChild(lineDrawing)
How can I draw lines in a UIView with the same quality as SKShapeNode?
One obvious problem is that you're using an outdated function that doesn't handle Retina screens:
UIGraphicsBeginImageContext(self.frame.size)
You should be using this instead:
UIGraphicsBeginImageContextWithOptions(self.frame.size, false, 0)
The WithOptions version, with 0 as the final argument, creates an image context at Retina resolution if the device has a Retina screen. (The 0 argument means “use the device screen's scale factor”.)
There may be other issues, because you didn't show the code that creates the points in the path for each test case.
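Still, as a sketch, here is the posted block with the Retina-aware context, assuming the stroked result is meant to go back into tempImageView (the posted code doesn't show where the image ends up):
// Sketch: same stroke-drawing block, with a context at the main screen's scale factor.
UIGraphicsBeginImageContextWithOptions(self.frame.size, false, 0)
let context = UIGraphicsGetCurrentContext()
tempImageView.image?.drawInRect(CGRect(x: 0, y: 0, width: self.frame.size.width, height: self.frame.size.height))
CGContextAddPath(context, ref)
CGContextSetLineCap(context, CGLineCap.Round)
CGContextSetLineWidth(context, brushWidth)
CGContextSetRGBStrokeColor(context, red, green, blue, 1.0)
CGContextSetBlendMode(context, CGBlendMode.Normal)
CGContextStrokePath(context)
// Assumption: the stroked result is what tempImageView should display next.
tempImageView.image = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()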
I am wondering how I can always pick a square image from the camera library using UIImagePickerController.
I have set imagePicker.allowsEditing = true, and when the image I pick is large enough (larger than the square crop), the picked image is square. But when the image I pick is smaller, say 748 by 466, the square crop encloses the image including the black bars at the top and bottom, yet the picked image does not include those black parts, so it returns a non-square image. How do I make it always include the top and bottom black parts so the image is always square?
Thanks a lot for your help!
Here is a method using Core Graphics to add the black areas manually; add it in the UIImagePicker delegate method:
// Use the longer side as the side length of the square canvas.
let squareSideLength = image.size.width > image.size.height ? image.size.width : image.size.height
UIGraphicsBeginImageContextWithOptions(CGSizeMake(squareSideLength, squareSideLength), false, 1)
let context = UIGraphicsGetCurrentContext()
// Fill the square with black, then draw the picked image centered on top.
CGContextSetFillColorWithColor(context, UIColor.blackColor().CGColor)
CGContextFillRect(context, CGRectMake(0, 0, squareSideLength, squareSideLength))
image.drawInRect(CGRectMake((squareSideLength - image.size.width) / 2, (squareSideLength - image.size.height) / 2, image.size.width, image.size.height))
let imageYouWant = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
Then use imageYouWant.
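For context, a sketch of where that snippet might live; the delegate signature below is the Swift 2-era one, and padToSquare and squareImageView are hypothetical names standing in for the code above and for wherever the result goes:
// Sketch: UIImagePickerController delegate callback (Swift 2-era signature).
// padToSquare is a hypothetical helper wrapping the black-padding snippet above;
// squareImageView is a hypothetical destination for the result.
func imagePickerController(picker: UIImagePickerController,
                           didFinishPickingMediaWithInfo info: [String : AnyObject]) {
    // With allowsEditing = true, the cropped result arrives under the edited-image key.
    if let picked = info[UIImagePickerControllerEditedImage] as? UIImage {
        squareImageView.image = padToSquare(picked)
    }
    picker.dismissViewControllerAnimated(true, completion: nil)
}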
I am creating a circle using CGContextFillEllipseInRect from Core Graphics.
I'm using this circle (which is actually a disk) to replace the thumbImage of a UISlider. Antialiasing is applied by default.
But the result on my iPhone 6 is clearly not good. I can clearly see the pixels: not as many as with antialiasing off, but far more than on a standard UISlider thumb.
Maybe I'm doing something wrong. So my question is: is there a way to get programmatically the same nice disk as the one used by default for a UISlider?
EDIT:
Here is how I create the disk:
class func drawDisk(rgbValue: UInt32, rectForDisk: CGRect = CGRectMake(0.0, 0.0, 1.0, 1.0)) -> UIImage {
    let color = uicolorFromHex(rgbValue)
    UIGraphicsBeginImageContext(rectForDisk.size)
    let context = UIGraphicsGetCurrentContext()
    // Fill the disk.
    CGContextSetFillColorWithColor(context, color.CGColor)
    CGContextFillEllipseInRect(context, rectForDisk)
    // Stroke a 1-point black outline just inside the disk's bounds.
    let rectForCircle = CGRectMake(0.5, 0.5, rectForDisk.size.width - 1, rectForDisk.size.height - 1)
    CGContextSetLineWidth(context, 1.0)
    CGContextSetStrokeColorWithColor(context, UIColor.blackColor().CGColor)
    CGContextAddEllipseInRect(context, rectForCircle)
    CGContextStrokePath(context)
    let image = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return image
}
The problem is that you are creating a non-Retina graphics context when using UIGraphicsBeginImageContext, as mentioned in the documentation:
This function is equivalent to calling the UIGraphicsBeginImageContextWithOptions function with the opaque parameter set to NO and a scale factor of 1.0.
Instead you should be using UIGraphicsBeginImageContextWithOptions to create your image context. You can keep passing NO for the opaque parameter if you want an image that supports transparency (same as what you are implicitly doing now).
In most cases you can pass 0.0 for the scale factor. This sets the scale factor to that of the device's main screen. Again, as mentioned in the documentation:
If you specify a value of 0.0, the scale factor is set to the scale factor of the device’s main screen.
So, in short, you should create your image context like this:
UIGraphicsBeginImageContextWithOptions(rectForDisk.size, false, 0.0) // false for Swift
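Applied to the drawDisk method from the question (uicolorFromHex is the question's own helper), only the context-creation line changes; a sketch:
class func drawDisk(rgbValue: UInt32, rectForDisk: CGRect = CGRectMake(0.0, 0.0, 1.0, 1.0)) -> UIImage {
    let color = uicolorFromHex(rgbValue)
    // Scale 0.0 picks up the main screen's scale, so the disk renders at Retina resolution.
    UIGraphicsBeginImageContextWithOptions(rectForDisk.size, false, 0.0)
    let context = UIGraphicsGetCurrentContext()
    CGContextSetFillColorWithColor(context, color.CGColor)
    CGContextFillEllipseInRect(context, rectForDisk)
    let rectForCircle = CGRectMake(0.5, 0.5, rectForDisk.size.width - 1, rectForDisk.size.height - 1)
    CGContextSetLineWidth(context, 1.0)
    CGContextSetStrokeColorWithColor(context, UIColor.blackColor().CGColor)
    CGContextAddEllipseInRect(context, rectForCircle)
    CGContextStrokePath(context)
    let image = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return image
}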