I am developing an iOS board game. I am trying to give the board a kind of "texture".
What I did was I created this very small image (really small, be sure to look carefully):
And I passed this image to the UIColor.init(patternImage:) initializer to create a UIColor that is this image. I used this UIColor to fill some square UIBezierPaths, and the result looks like this:
All copies of that image line up perfectly, forming many diagonal straight lines. So far so good.
Now on the iPad, the squares that I draw will be larger, and the borders of those squares will be larger too. I have successfully calculated what the stroke width and size of the squares should be, so that is not a problem.
However, since the squares are larger on an iPad, there will be more diagonal lines per square. I do not want that. I need to resize the very small image to a bigger one, with the size depending on the stroke width of the squares. Specifically, the width of the resized image should be twice the stroke width.
I wrote this extension to resize the image, adapted from this post:
extension UIImage {
    func resized(toWidth newWidth: CGFloat) -> UIImage {
        let scale = newWidth / size.width
        let newHeight = size.height * scale
        UIGraphicsBeginImageContextWithOptions(CGSize(width: newWidth, height: newHeight), false, 0)
        self.draw(in: CGRect(x: 0, y: 0, width: newWidth, height: newHeight))
        let newImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return newImage!
    }
}
And called it like this:
// this is the code I used to draw a single square
let path = UIBezierPath(rect: CGRect(origin: point(for: Position(x, y)), size: CGSize(width: squareLength, height: squareLength)))
UIColor.black.setStroke()
path.lineWidth = strokeWidth
// this is the line that's important!
UIColor(patternImage: #imageLiteral(resourceName: "texture").resized(toWidth: strokeWidth * 2)).setFill()
path.fill()
path.stroke()
Now the game board looks like this on an iPhone:
You might need to zoom in on the webpage a bit to see what I mean. The board now looks extremely ugly: you can see the "borders" of each copy of the image. I don't want this. On an iPad, though, the board looks fine. I suspect that this only happens when I downsize the image.
I figured that this might be due to the antialiasing that happens when I use the extension. I found this post and this post about removing antialiasing, but the former seems to be doing this in an image view while I am doing it in the draw(_:) method of my custom GameBoardView. The latter's solution seems to be exactly the same as what I am already using.
How can I resize without antialiasing? Or, at a higher level of abstraction, how can I make my board look pretty?
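For reference, one direction worth trying is to disable interpolation while redrawing the tile, so the downscale happens without smoothing. This is only a sketch of that idea, not a verified fix; the method name is made up and it mirrors the extension above:

extension UIImage {
    // Hypothetical variant of resized(toWidth:) that disables interpolation,
    // so the tiny tile is scaled with hard pixel edges instead of being blurred.
    func resizedWithoutSmoothing(toWidth newWidth: CGFloat) -> UIImage? {
        let scale = newWidth / size.width
        let newSize = CGSize(width: newWidth, height: size.height * scale)
        UIGraphicsBeginImageContextWithOptions(newSize, false, 0)
        defer { UIGraphicsEndImageContext() }
        UIGraphicsGetCurrentContext()?.interpolationQuality = .none // nearest-neighbour-style scaling
        draw(in: CGRect(origin: .zero, size: newSize))
        return UIGraphicsGetImageFromCurrentImageContext()
    }
}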
class Ruled: UIView {
    override func draw(_ rect: CGRect) {
        let T: CGFloat = 15 // desired thickness of lines
        let G: CGFloat = 30 // desired gap between lines
        let W = rect.size.width
        let H = rect.size.height

        guard let c = UIGraphicsGetCurrentContext() else { return }
        c.setStrokeColor(UIColor.orange.cgColor)
        c.setLineWidth(T)

        var p = -(W > H ? W : H) - T
        while p <= W {
            c.move(to: CGPoint(x: p - T, y: -T))
            c.addLine(to: CGPoint(x: p + T + H, y: T + H))
            c.strokePath()
            p += G + T + T
        }
    }
}
Enjoy.
Note that you would, obviously, clip that view.
If you want to have a number of them on the screen or in a pattern, just do that.
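For example, a minimal (assumed) setup from a view controller might look like this:

let ruled = Ruled(frame: CGRect(x: 40, y: 80, width: 200, height: 200))
ruled.backgroundColor = .white
ruled.clipsToBounds = true // clip the view, as noted above
view.addSubview(ruled)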
To clip to a given rectangle:
The class above simply draws the stripes at the size of the UIView.
However, you often want to draw a number of the "boxes" within the view, at different coordinates (a calendar is a good example).
Furthermore, this example explicitly draws both stripes rather than drawing one stripe over the background color:
func simpleStripes(x: CGFloat, y: CGFloat, width: CGFloat, height: CGFloat) {
    let stripeWidth: CGFloat = 20.0 // whatever you want
    let m = stripeWidth / 2.0

    guard let c = UIGraphicsGetCurrentContext() else { return }
    c.setLineWidth(stripeWidth)

    let r = CGRect(x: x, y: y, width: width, height: height)
    let longerSide = width > height ? width : height

    c.saveGState()
    c.clip(to: r)

    var p = x - longerSide
    while p <= x + width {
        c.setStrokeColor(UIColor(red: 0.75, green: 0.85, blue: 1.0, alpha: 1).cgColor) // pale blue
        c.move(to: CGPoint(x: p - m, y: y - m))
        c.addLine(to: CGPoint(x: p + m + height, y: y + m + height))
        c.strokePath()
        p += stripeWidth

        c.setStrokeColor(UIColor(white: 0.92, alpha: 1).cgColor) // pale gray
        c.move(to: CGPoint(x: p - m, y: y - m))
        c.addLine(to: CGPoint(x: p + m + height, y: y + m + height))
        c.strokePath()
        p += stripeWidth
    }

    c.restoreGState()
}
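As an illustration, simpleStripes might be called from draw(_:) of a board-style view like this; the 8×8 loop and squareLength are assumptions, not part of the original answer:

override func draw(_ rect: CGRect) {
    // Draw one striped cell per board square.
    for row in 0..<8 {
        for col in 0..<8 {
            simpleStripes(x: CGFloat(col) * squareLength,
                          y: CGFloat(row) * squareLength,
                          width: squareLength,
                          height: squareLength)
        }
    }
}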
extension UIImage {
    func ResizeImage(targetSize: CGSize) -> UIImage {
        let size = self.size
        let widthRatio = targetSize.width / self.size.width
        let heightRatio = targetSize.height / self.size.height

        // Pick the smaller ratio so the resized image fits inside the target size
        var newSize: CGSize
        if widthRatio > heightRatio {
            newSize = CGSize(width: size.width * heightRatio, height: size.height * heightRatio)
        } else {
            newSize = CGSize(width: size.width * widthRatio, height: size.height * widthRatio)
        }

        // This is the rect that we've calculated out and this is what is actually used below
        let rect = CGRect(x: 0, y: 0, width: newSize.width, height: newSize.height)

        // Actually do the resizing to the rect using the ImageContext stuff
        UIGraphicsBeginImageContextWithOptions(newSize, false, 1.0)
        self.draw(in: rect)
        let newImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return newImage!
    }
}
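Usage might look something like this; the asset name and strokeWidth are assumptions carried over from the question above:

let tile = #imageLiteral(resourceName: "texture")
    .ResizeImage(targetSize: CGSize(width: strokeWidth * 2, height: strokeWidth * 2))
UIColor(patternImage: tile).setFill()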
Related
I am relatively new to coding, and I have recently been working on a program that lets a user scan a crystal using the iPhone's rear camera and identifies what kind of crystal it is. I used CreateML to build the model and Vision to identify the crystal. What I can't seem to figure out is how to get the results into the UI I built; the results are printing to the Xcode console.
Here's a picture of the Storyboard:
I assume you want to draw a box around the detected crystal?
You should be getting a boundingBox of your crystal that looks something like this:
(0.166666666666667, 0.35, 0.66666666666667, 0.3)
These are "normalized" coordinates, which means that they are relative to the image that you send to Vision. I explain this more in detail here...
What you are used to vs. what Vision returns.
You need to convert these "normalized" coordinates to UIKit coordinates that you can use. To do that, I have this converting function:
func getConvertedRect(boundingBox: CGRect, inImage imageSize: CGSize, containedIn containerSize: CGSize) -> CGRect {
    let rectOfImage: CGRect

    let imageAspect = imageSize.width / imageSize.height
    let containerAspect = containerSize.width / containerSize.height

    if imageAspect > containerAspect { /// image extends left and right
        let newImageWidth = containerSize.height * imageAspect /// the width of the overflowing image
        let newX = -(newImageWidth - containerSize.width) / 2
        rectOfImage = CGRect(x: newX, y: 0, width: newImageWidth, height: containerSize.height)
    } else { /// image extends top and bottom
        let newImageHeight = containerSize.width * (1 / imageAspect) /// the height of the overflowing image
        let newY = -(newImageHeight - containerSize.height) / 2
        rectOfImage = CGRect(x: 0, y: newY, width: containerSize.width, height: newImageHeight)
    }

    /// Vision's origin is at the bottom left, so flip the y coordinate
    let newOriginBoundingBox = CGRect(
        x: boundingBox.origin.x,
        y: 1 - boundingBox.origin.y - boundingBox.height,
        width: boundingBox.width,
        height: boundingBox.height
    )

    var convertedRect = VNImageRectForNormalizedRect(newOriginBoundingBox, Int(rectOfImage.width), Int(rectOfImage.height))

    /// add the margins
    convertedRect.origin.x += rectOfImage.origin.x
    convertedRect.origin.y += rectOfImage.origin.y

    return convertedRect
}
You can use it like this:
let convertedRect = self.getConvertedRect(
    boundingBox: observation.boundingBox,
    inImage: image.size, /// image is the image that you feed into Vision
    containedIn: self.previewView.bounds.size /// the size of your camera feed's preview view
)
self.drawBoundingBox(rect: convertedRect)
/// draw the rectangle
func drawBoundingBox(rect: CGRect) {
    let uiView = UIView(frame: rect)
    previewView.addSubview(uiView)

    uiView.backgroundColor = UIColor.orange.withAlphaComponent(0.2)
    uiView.layer.borderColor = UIColor.orange.cgColor
    uiView.layer.borderWidth = 3
}
Result (I'm doing a VNDetectRectanglesRequest):
If you want to "track" the detected object while your phone is moving, check out my answer here
I want to implement a bar consisting of colored rectangles to visualise a distribution, similar to the storage graphic on the iPhone.
My current approach is this, which is working correctly:
override open func draw(_ rect: CGRect) {
    let context = UIGraphicsGetCurrentContext()
    drawRects(context!)
}

func drawRects(_ context: CGContext) {
    if values.count == 0 { return }
    var currentX: Float = 0
    let scale = (Float(self.frame.width)) / Float(totalSum)
    for i in 0..<values.count {
        let color = colors[i].cgColor
        let value = values[i]
        let rect = CGRect(x: CGFloat(currentX), y: 0, width: CGFloat(Float(value) * scale), height: frame.size.height)
        context.addRect(rect)
        context.setFillColor(color)
        context.fill(rect)
        currentX += ((Float(value) * scale) + (value == 0 ? 0 : barSpacing))
    }
}
The result is this:
Now I want to round the edges of the individual rectangles with UIBezierPath using this code at the end of the for-loop in the drawRects method:
let path = UIBezierPath(roundedRect: rect, cornerRadius: rect.height / 2)
UIColor.white.setStroke()
path.stroke()
This leads to this:
Somehow this doesn't just round the edges, but cuts off a bit of the height and width, which leads to the small leftovers of the rectangle in the corners. As you can see in the two images, the height of the rectangles changes when applying the path.
Setting the UIBezierPath as the current Core Graphics context's path, instead of just stroking it, leads to the same problem.
What is it that I'm doing wrong?
Disclaimer: Did this without testing.
I believe the way you are filling your context is incorrect. You are filling via context.fill(rect), which does exactly what you describe: it fills the whole rect, and the path has no effect on the corners, hence your result.
To counter this, use context.fillPath(), which uses the default fill rule, CGPathFillRule.winding, and fills only the interior of the path.
https://developer.apple.com/documentation/coregraphics/cgpathfillrule
CGPathFillRule: When filling a path, regions that a fill rule defines as interior to the path are painted. When clipping with a path, regions interior to the path remain visible after clipping.
override open func draw(_ rect: CGRect) {
    let context = UIGraphicsGetCurrentContext()
    drawRects(context!)
}

func drawRects(_ context: CGContext) {
    if values.count == 0 { return }
    var currentX: Float = 0
    let scale = (Float(self.frame.width)) / Float(totalSum)
    for i in 0..<values.count {
        let color = colors[i].cgColor
        let value = values[i]
        let rect = CGRect(x: CGFloat(currentX), y: 0, width: CGFloat(Float(value) * scale), height: frame.size.height)
        let path = UIBezierPath(roundedRect: rect, cornerRadius: rect.height / 2)
        context.addPath(path.cgPath)
        context.setFillColor(color)
        context.fillPath()
        currentX += ((Float(value) * scale) + (value == 0 ? 0 : barSpacing))
    }
}
The solution that finally worked for me is to fill the UIBezierPath, rather than the rectangle:
let rect = CGRect(x: CGFloat(currentX), y: 0, width: CGFloat(Float(value) * scale), height: frame.size.height)
let path = UIBezierPath(roundedRect: rect, cornerRadius: rect.height / 2)
context.setFillColor(color)
path.fill()
I want to get the original x and y position of a UIImage when it is set in a UIImageView with scaleAspectFill.
As we know, with scaleAspectFill some portion of the image is clipped. As per my requirement, I want to get the x and y values (they may be negative, I don't know).
Here is the original image from gallery
Now I am setting this above image to my app view.
So, in the above situation, I want to get the hidden x, y position of the image, i.e. the part that is clipped.
Can anyone tell me how to get it?
Use the following extension:
extension UIImageView {
    var imageRect: CGRect {
        guard let imageSize = self.image?.size else { return self.frame }

        let scale = UIScreen.main.scale
        let imageWidth = (imageSize.width / scale).rounded()
        let frameWidth = self.frame.width.rounded()
        let imageHeight = (imageSize.height / scale).rounded()
        let frameHeight = self.frame.height.rounded()

        let ratio = max(frameWidth / imageWidth, frameHeight / imageHeight)
        let newSize = CGSize(width: imageWidth * ratio, height: imageHeight * ratio)
        let newOrigin = CGPoint(x: self.center.x - (newSize.width / 2), y: self.center.y - (newSize.height / 2))
        return CGRect(origin: newOrigin, size: newSize)
    }
}
Usage
let rect = imageView.imageRect
print(rect)
UI Test
let testView = UIView(frame: rect)
testView.backgroundColor = UIColor.red.withAlphaComponent(0.5)
imageView.superview?.addSubview(testView)
Use the extension below to find out the accurate frame of the image in the image view.
extension UIImageView {
    var contentRect: CGRect {
        guard let image = image else { return bounds }
        guard contentMode == .scaleAspectFit else { return bounds }
        guard image.size.width > 0 && image.size.height > 0 else { return bounds }

        let scale: CGFloat
        if image.size.width > image.size.height {
            scale = bounds.width / image.size.width
        } else {
            scale = bounds.height / image.size.height
        }

        let size = CGSize(width: image.size.width * scale, height: image.size.height * scale)
        let x = (bounds.width - size.width) / 2.0
        let y = (bounds.height - size.height) / 2.0

        return CGRect(x: x, y: y, width: size.width, height: size.height)
    }
}
How to test
let rect = imgTest.contentRect
print("Image rect:", rect)
Reference: https://www.hackingwithswift.com/example-code/uikit/how-to-find-an-aspect-fit-images-size-inside-an-image-view
If you want to show the image as it shows in the gallery, then you can use the constraints
"H:|[v0]|" and "V:|[v0]|" and in the image view use .aspectFit.
And if you want the image size, you can use imageView.image!.size and calculate the amount of the image that is getting cut off. In aspectFill the width is matched to the screen width and the height gets increased accordingly, so you can work out how much of the image is getting cut off.
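If you prefer to compute it yourself, a minimal sketch of that calculation for .scaleAspectFill might look like this; the extension and property name are made up, and a negative origin is the clipped part:

extension UIImageView {
    // Rect of the image as rendered under .scaleAspectFill, in the image view's own coordinates.
    var aspectFillRect: CGRect {
        guard let image = image, image.size.width > 0, image.size.height > 0 else { return bounds }
        let scale = max(bounds.width / image.size.width, bounds.height / image.size.height)
        let size = CGSize(width: image.size.width * scale, height: image.size.height * scale)
        let origin = CGPoint(x: (bounds.width - size.width) / 2,
                             y: (bounds.height - size.height) / 2)
        return CGRect(origin: origin, size: size)
    }
}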
Try this library: ImageCoordinateSpace.
I am not sure if it works for you or not, but it has a feature to convert CGPoint from image coordinates to any view coordinates and vice versa.
I am trying to merge two UIImages of different sizes.
UIImage A is of the following size: 1287 × 1662 pixels.
And UIImage B is of the following size: 200 × 200 pixels.
I am showing A and B in the following UIImageViews:
UIImageView backgroundImageView of size 375 × 667
UIImageView foregroundImageView of size 100 × 100
The user can move foregroundImageView to any position over the backgroundImageView.
This is the merging code:
let previewImage:UIImage? = mergeImages(img: imgBackground.image!, sizeWaterMark: CGRect.init(origin: imgForeground.frame.origin, size: CGSize.init(width: 100, height: 100)), waterMarkImage: imgForeground.image!)
func mergeImages(img: UIImage, sizeWaterMark: CGRect, waterMarkImage: UIImage) -> UIImage {
    let size = self.imgBackground.frame.size
    UIGraphicsBeginImageContextWithOptions(size, false, UIScreen.main.scale)

    img.draw(in: getAspectFitFrame(sizeImgView: size, sizeImage: img.size))

    let frameAspect: CGRect = getAspectFitFrame(sizeImgView: sizeWaterMark.size, sizeImage: waterMarkImage.size)
    let frameOrig: CGRect = CGRect(x: sizeWaterMark.origin.x + frameAspect.origin.x, y: sizeWaterMark.origin.y + frameAspect.origin.y, width: frameAspect.size.width, height: frameAspect.size.height)
    waterMarkImage.draw(in: frameOrig, blendMode: .normal, alpha: 1)

    let result: UIImage = UIGraphicsGetImageFromCurrentImageContext()!
    UIGraphicsEndImageContext()
    return result
}

func getAspectFitFrame(sizeImgView: CGSize, sizeImage: CGSize) -> CGRect {
    let imageSize: CGSize = sizeImage
    let viewSize: CGSize = sizeImgView

    let hfactor: CGFloat = imageSize.width / viewSize.width
    let vfactor: CGFloat = imageSize.height / viewSize.height
    let factor: CGFloat = max(hfactor, vfactor)

    // Divide the size by the greater of the vertical or horizontal shrinkage factor
    let newWidth: CGFloat = imageSize.width / factor
    let newHeight: CGFloat = imageSize.height / factor

    var x: CGFloat = 0.0
    var y: CGFloat = 0.0
    if hfactor > vfactor {
        y = (sizeImgView.height - newHeight) / 2
    } else {
        x = (sizeImgView.width - newWidth) / 2
    }

    let newRect: CGRect = CGRect(x: x, y: y, width: newWidth, height: newHeight)
    return newRect
}
This actually merges the images and gives me what I am looking for, but it reduces the size of the merged image because of this line in the mergeImages function:
let size = self.imgBackground.frame.size
I want the size to be the original UIImage A size. But if I change it to this,
let size = self.imgBackground.image!.size
This changes the location of B over A after merging.
For testing, you can download and check the source code from here.
What should I do to keep the original size as it is while having the exact position of B over A with proper aspect ratio?
I made the utility functions static (it's even better to move them into a separate file) to be sure that they are not using ViewController instance properties and methods.
In mergeImages I removed:
let size = self.imgBackground.frame.size
and replaced size with img.size. It's the same as using self.imgBackground.image!.size, as you described in the question.
Because the source and target image sizes are the same, there is no need to adjust the aspect, so we simply replace:
img.draw(in: getAspectFitFrame(sizeImgView: size, sizeImage: img.size))
with
img.draw(in: CGRect(origin: CGPoint(x: 0, y: 0), size: img.size))
Also, I extracted the aspect factor calculation into a separate function, getFactor, to make the code more granular, and made getAspectFitFrame return not only the CGRect but also the aspect factor (it'll be useful later).
So the utility functions now look like this:
static func mergeImages(img: UIImage, sizeWaterMark: CGRect, waterMarkImage: UIImage) -> UIImage {
    UIGraphicsBeginImageContextWithOptions(img.size, false, UIScreen.main.scale)
    img.draw(in: CGRect(origin: CGPoint(x: 0, y: 0), size: img.size))

    let (frameAspect, _) = getAspectFitFrame(from: sizeWaterMark.size, to: waterMarkImage.size)
    let frameOrig = CGRect(x: sizeWaterMark.origin.x + frameAspect.origin.x, y: sizeWaterMark.origin.y + frameAspect.origin.y, width: frameAspect.size.width, height: frameAspect.size.height)
    waterMarkImage.draw(in: frameOrig, blendMode: .normal, alpha: 1)

    let result = UIGraphicsGetImageFromCurrentImageContext()!
    UIGraphicsEndImageContext()
    return result
}

static func getAspectFitFrame(from: CGSize, to: CGSize) -> (CGRect, CGFloat) {
    let (hfactor, vfactor, factor) = ViewController.getFactor(from: from, to: to)

    // Divide the size by the greater of the vertical or horizontal shrinkage factor
    let newWidth = to.width / factor
    let newHeight = to.height / factor

    var x: CGFloat = 0.0
    var y: CGFloat = 0.0
    if hfactor > vfactor {
        y = (from.height - newHeight) / 2
    } else {
        x = (from.width - newWidth) / 2
    }
    return (CGRect(x: x, y: y, width: newWidth, height: newHeight), factor)
}

static func getFactor(from: CGSize, to: CGSize) -> (CGFloat, CGFloat, CGFloat) {
    let hfactor = to.width / from.width
    let vfactor = to.height / from.height
    return (hfactor, vfactor, max(hfactor, vfactor))
}
You also need another utility function to calculate the scaled watermark origin and size:
static func getScaledFrame(from: CGSize, to: CGSize, target: CGRect) -> CGRect {
    let (aspectFitFrame, factor) = ViewController.getAspectFitFrame(from: from, to: to)
    return CGRect(
        origin: CGPoint(
            x: (target.origin.x - aspectFitFrame.origin.x) * factor,
            y: (target.origin.y - aspectFitFrame.origin.y) * factor),
        size: CGSize(width: target.width * factor, height: target.height * factor)
    )
}
Now you are ready to render the merged image:
let previewImage = ViewController.mergeImages(
    img: imgBackground.image!,
    sizeWaterMark: ViewController.getScaledFrame(from: imgBackground.frame.size, to: imgBackground.image!.size, target: imgForeground.frame),
    waterMarkImage: imgForeground.image!
)
The code has been added to GitHub so you can understand the real problem.
This is the hierarchy:
-- ViewController.View P [width: 375, height: 667]
---- UIImageView A [width: 375, height: 667] Name: imgBackground
[A is holding an image of size(1287,1662)]
---- UIImageView B [width: 100, height: 100] Name: imgForeground
[B is holding an image of size(2400,982)]
I am trying to merge A with B but the result is stretched.
This is the merge code:
func mixImagesWith(frontImage: UIImage?, backgroundImage: UIImage?, atPoint point: CGPoint, ofSize signatureSize: CGSize) -> UIImage {
    let size = self.imgBackground.frame.size
    UIGraphicsBeginImageContextWithOptions(size, false, UIScreen.main.scale)

    backgroundImage?.draw(in: CGRect(x: 0, y: 0, width: size.width, height: size.height))
    frontImage?.draw(in: CGRect(x: point.x, y: point.y, width: signatureSize.width, height: signatureSize.height))

    let newImage: UIImage = UIGraphicsGetImageFromCurrentImageContext()!
    UIGraphicsEndImageContext()
    return newImage
}
Note:
.contentMode = .scaleAspectFit
The code works, but the result is stretched.
See this line in the code: let size = self.imgBackground.frame.size. I need to change this to fix the problem, i.e. find the origin of the subview with respect to the UIImage size.
Here's the screenshot to understand the problem:
What should I do to get the proper output of merge function?
You have two bugs in your code:
1. You should also calculate the aspect-fit frame for the document (background) image to fit it into the UIImageView. In mergeImages() replace:
img.draw(in: CGRect(x: 0, y: 0, width: size.width, height: size.height))
with:
img.draw(in: getAspectFitFrame(sizeImgView: size, sizeImage: img.size))
2. When calculating the aspect-fit frame, you center the image horizontally/vertically if its width/height is less than the UIImageView's width/height. But instead of comparing newWidth and newHeight, you should compare the factors:
if hfactor > vfactor {
    y = (sizeImgView.height - newHeight) / 2
} else {
    x = (sizeImgView.width - newWidth) / 2
}
Try the code below; it works for me, hope it works for you too:
func addWaterMarkToImage(img: UIImage, sizeWaterMark: CGRect, waterMarkImage: UIImage, completion: ((UIImage) -> ())?) {
    // handler is assumed to be an instance property: var handler: ((UIImage) -> ())?
    handler = completion

    let img2: UIImage = waterMarkImage
    let rect = CGRect(x: 0, y: 0, width: img.size.width, height: img.size.height)
    UIGraphicsBeginImageContext(img.size)

    img.draw(in: rect)
    let frameAspect: CGRect = getAspectFitFrame(sizeImgView: sizeWaterMark.size, sizeImage: waterMarkImage.size)
    let frameOrig: CGRect = CGRect(x: sizeWaterMark.origin.x + frameAspect.origin.x, y: sizeWaterMark.origin.y + frameAspect.origin.y, width: frameAspect.size.width, height: frameAspect.size.height)
    img2.draw(in: frameOrig, blendMode: .normal, alpha: 1)

    let result = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()

    if handler != nil {
        handler!(result!)
    }
}
// MARK: - Get aspect-fit frame of UIImage
func getAspectFitFrame(sizeImgView: CGSize, sizeImage: CGSize) -> CGRect {
    let imageSize: CGSize = sizeImage
    let viewSize: CGSize = sizeImgView

    let hfactor: CGFloat = imageSize.width / viewSize.width
    let vfactor: CGFloat = imageSize.height / viewSize.height
    let factor: CGFloat = max(hfactor, vfactor)

    // Divide the size by the greater of the vertical or horizontal shrinkage factor
    let newWidth: CGFloat = imageSize.width / factor
    let newHeight: CGFloat = imageSize.height / factor

    // Compare the factors (not newWidth/newHeight) to decide which axis to center on
    var x: CGFloat = 0.0
    var y: CGFloat = 0.0
    if hfactor > vfactor {
        y = (sizeImgView.height - newHeight) / 2
    } else {
        x = (sizeImgView.width - newWidth) / 2
    }

    let newRect: CGRect = CGRect(x: x, y: y, width: newWidth, height: newHeight)
    return newRect
}
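A hypothetical call site, reusing the imgBackground / imgForeground names from the question, might look like this:

addWaterMarkToImage(
    img: imgBackground.image!,
    sizeWaterMark: CGRect(origin: imgForeground.frame.origin, size: imgForeground.frame.size),
    waterMarkImage: imgForeground.image!
) { merged in
    // Use the merged image, e.g. show it in a preview.
    self.imgBackground.image = merged
}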