I am trying to make a simple drawing app for the iPhone. Everything worked fine until I attempted the layout.
My image view should have a fixed aspect ratio of 3:4 to match the iPhone's picture format. I also pinned it to the top layout guide and to the sides.
The lines I draw are distorted and "flow away": the closer I get to the bottom of the view, the more the line gets pulled upwards.
I read somewhere that this could be caused by the view receiving a non-integer height from Auto Layout, which seems to be the case.
I don't want to hardcode the rect for every iOS device.
How can I "round away" this 0.5 in the height?
Here is my draw line method:
func drawLineFrom(fromPoint: CGPoint, toPoint: CGPoint) {
// 1
UIGraphicsBeginImageContext(tempImageView.frame.size)
let context = UIGraphicsGetCurrentContext()
tempImageView.image?.drawInRect(CGRect(x: 0, y: 0, width: tempImageView.frame.size.width, height: tempImageView.frame.size.height))
// 2
CGContextMoveToPoint(context, fromPoint.x, fromPoint.y)
CGContextAddLineToPoint(context, toPoint.x, toPoint.y)
// 3
CGContextSetLineCap(context, CGLineCap.Round)
CGContextSetLineWidth(context, brushWidth)
CGContextSetRGBStrokeColor(context, red, green, blue, 1.0)
CGContextSetBlendMode(context, CGBlendMode.Normal)
// 4b
CGContextStrokePath(context)
// 5
tempImageView.image = UIGraphicsGetImageFromCurrentImageContext()
tempImageView.alpha = opacity
UIGraphicsEndImageContext()
}
I already tried switching off antialiasing, but that didn't help; it just made it look horrible.
When you create a canvas with UIGraphicsBeginImageContext, its default scale is 1.0, which can make the generated image look blurry on a Retina screen.
So you need to specify the scale explicitly: either pass your desired scale, or pass 0 to let the screen decide the scale.
UIGraphicsBeginImageContextWithOptions(tempImageView.frame.size, false, 0.0)
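For reference, here is how the method from the question looks with that one change applied (a sketch; only the first context line differs, and tempImageView, brushWidth, red, green, blue, and opacity are the same properties as in the question):
func drawLineFrom(fromPoint: CGPoint, toPoint: CGPoint) {
    // 0.0 lets UIKit pick the screen's scale instead of defaulting to 1.0
    UIGraphicsBeginImageContextWithOptions(tempImageView.frame.size, false, 0.0)
    let context = UIGraphicsGetCurrentContext()
    tempImageView.image?.drawInRect(CGRect(x: 0, y: 0, width: tempImageView.frame.size.width, height: tempImageView.frame.size.height))
    CGContextMoveToPoint(context, fromPoint.x, fromPoint.y)
    CGContextAddLineToPoint(context, toPoint.x, toPoint.y)
    CGContextSetLineCap(context, CGLineCap.Round)
    CGContextSetLineWidth(context, brushWidth)
    CGContextSetRGBStrokeColor(context, red, green, blue, 1.0)
    CGContextSetBlendMode(context, CGBlendMode.Normal)
    CGContextStrokePath(context)
    tempImageView.image = UIGraphicsGetImageFromCurrentImageContext()
    tempImageView.alpha = opacity
    UIGraphicsEndImageContext()
}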
I've encountered a problem with code I'd written to cut off the corners of a UILabel (or, indeed, any UIView-derived object to which you can add sublayers). I have to thank Kurt Revis for his answer to "Use a CALayer to add a diagonal banner/badge to the corner of a UITableViewCell", which pointed me in this direction.
I don't have a problem if the corner overlays a solid color -- it's simple enough to make the cut-off corner match that color. But if the corner overlays an image, how would you let the image show through?
I've searched SO for anything similar to this problem, but most of those answers have to do with cells in tables and all I'm doing here is putting a label on a screen's view.
Here's the code I use:
-(void)returnChoppedCorners:(UIView *)viewObject
{
NSLog(@"Object Width = %f", viewObject.layer.frame.size.width);
NSLog(@"Object Height = %f", viewObject.layer.frame.size.height);
CALayer* bannerLeftTop = [CALayer layer];
bannerLeftTop.backgroundColor = [UIColor blackColor].CGColor;
// or whatever color the background is
bannerLeftTop.bounds = CGRectMake(0, 0, 25, 25);
bannerLeftTop.anchorPoint = CGPointMake(0.5, 1.0);
bannerLeftTop.position = CGPointMake(10, 10);
bannerLeftTop.affineTransform = CGAffineTransformMakeRotation(-45.0 / 180.0 * M_PI);
[viewObject.layer addSublayer:bannerLeftTop];
CALayer* bannerRightTop = [CALayer layer];
bannerRightTop.backgroundColor = [UIColor blackColor].CGColor;
bannerRightTop.bounds = CGRectMake(0, 0, 25, 25);
bannerRightTop.anchorPoint = CGPointMake(0.5, 1.0);
bannerRightTop.position = CGPointMake(viewObject.layer.frame.size.width - 10.0, 10.0);
bannerRightTop.affineTransform = CGAffineTransformMakeRotation(45.0 / 180.0 * M_PI);
[viewObject.layer addSublayer:bannerRightTop];
}
I'll be adding similar code to do the bottom-left and bottom-right corners, but right now those are the corners that overlay an image. The bannerLeftTop and bannerRightTop layers are actually squares that are rotated over the corners against a black background. Making them clear only lets the underlying UILabel background color show through, not the image. The same goes for using the zPosition property. Is masking the answer? Or should I be working with the underlying image instead?
I'm also encountering a problem with the Height and Width being passed to this method -- they don't match the constrained Height and Width of the object. But we'll save that for another question.
What you need to do, instead of drawing an opaque corner triangle over the label, is mask the label so its corners aren't drawn onto the screen.
Since iOS 8.0, UIView has a maskView property, so we don't actually need to drop to the Core Animation level to do this. We can draw an image to use as a mask, with the appropriate corners clipped. Then we'll create an image view to hold the mask image, and set it as the maskView of the label (or whatever).
The only problem is that (in my testing) UIKit won't resize the mask view automatically, either with constraints or autoresizing. We have to update the mask view's frame “manually” if the masked view is resized.
I realize your question is tagged objective-c, but I developed my answer in a Swift playground for convenience. It shouldn't be hard to translate this to Objective-C. I didn't do anything particularly “Swifty”.
So... here's a function that takes an array of corners (specified as UIViewContentMode cases, because that enum includes cases for the corners), a view, and a “depth”, which is how many points each corner triangle should measure along its square sides:
func maskCorners(corners: [UIViewContentMode], ofView view: UIView, toDepth depth: CGFloat) {
In Objective-C, for the corners argument, you could use a bitmask (e.g. (1 << UIViewContentModeTopLeft) | (1 << UIViewContentModeBottomRight)), or you could use an NSArray of NSNumbers (e.g. @[ @(UIViewContentModeTopLeft), @(UIViewContentModeBottomRight) ]).
Anyway, I'm going to create a square, 9-slice resizable image. The image will need one point in the middle for stretching, and since each corner might need to be clipped, the corners need to be depth by depth points. Thus the image will have sides of length 1 + 2 * depth points:
let s = 1 + 2 * depth
Now I'm going to create a path that outlines the mask, with the corners clipped.
let path = UIBezierPath()
So, if the top left corner is clipped, I need the path to avoid the top left point of the square (which is at 0, 0). Otherwise, the path includes the top left point of the square.
if corners.contains(.TopLeft) {
path.moveToPoint(CGPoint(x: 0, y: 0 + depth))
path.addLineToPoint(CGPoint(x: 0 + depth, y: 0))
} else {
path.moveToPoint(CGPoint(x: 0, y: 0))
}
Do the same for each corner in turn, going around the square:
if corners.contains(.TopRight) {
path.addLineToPoint(CGPoint(x: s - depth, y: 0))
path.addLineToPoint(CGPoint(x: s, y: 0 + depth))
} else {
path.addLineToPoint(CGPoint(x: s, y: 0))
}
if corners.contains(.BottomRight) {
path.addLineToPoint(CGPoint(x: s, y: s - depth))
path.addLineToPoint(CGPoint(x: s - depth, y: s))
} else {
path.addLineToPoint(CGPoint(x: s, y: s))
}
if corners.contains(.BottomLeft) {
path.addLineToPoint(CGPoint(x: 0 + depth, y: s))
path.addLineToPoint(CGPoint(x: 0, y: s - depth))
} else {
path.addLineToPoint(CGPoint(x: 0, y: s))
}
Finally, close the path so I can fill it:
path.closePath()
Now I need to create the mask image. I'll do this using an alpha-only bitmap:
let colorSpace = CGColorSpaceCreateDeviceGray()
let scale = UIScreen.mainScreen().scale
let gc = CGBitmapContextCreate(nil, Int(s * scale), Int(s * scale), 8, 0, colorSpace, CGImageAlphaInfo.Only.rawValue)!
I need to adjust the coordinate system of the context to match UIKit:
CGContextScaleCTM(gc, scale, -scale)
CGContextTranslateCTM(gc, 0, -s)
Now I can fill the path in the context. The use of white here is arbitrary; any color with an alpha of 1.0 would work:
CGContextSetFillColorWithColor(gc, UIColor.whiteColor().CGColor)
CGContextAddPath(gc, path.CGPath)
CGContextFillPath(gc)
Next I create a UIImage from the bitmap:
let image = UIImage(CGImage: CGBitmapContextCreateImage(gc)!, scale: scale, orientation: .Up)
If this were in Objective-C, you'd want to release the bitmap context at this point, with CGContextRelease(gc), but Swift takes care of it for me.
Anyway, I convert the non-resizable image to a 9-slice resizable image:
let maskImage = image.resizableImageWithCapInsets(UIEdgeInsets(top: depth, left: depth, bottom: depth, right: depth))
Finally, I set up the mask view. I might already have a mask view, because you might have clipped the view with different settings already, so I'll reuse an existing mask view if it is an image view:
let maskView = view.maskView as? UIImageView ?? UIImageView()
maskView.image = maskImage
Finally, if I had to create the mask view, I need to set it as view.maskView and set its frame:
if view.maskView != maskView {
view.maskView = maskView
maskView.frame = view.bounds
}
}
OK, how do I use this function? To demonstrate, I'll make a purple background view, and put an image on top of it:
let view = UIImageView(image: UIImage(named: "Kaz-256.jpg"))
view.autoresizingMask = [ .FlexibleWidth, .FlexibleHeight ]
let backgroundView = UIView(frame: view.frame)
backgroundView.backgroundColor = UIColor.purpleColor()
backgroundView.addSubview(view)
XCPlaygroundPage.currentPage.liveView = backgroundView
Then I'll mask some corners of the image view. Presumably you would do this in, say, viewDidLoad:
maskCorners([.TopLeft, .BottomRight], ofView: view, toDepth: 50)
Here's the result:
You can see the purple background showing through the clipped corners.
If I were to resize the view, I'd need to update the mask view's frame. For example, I might do this in my view controller:
override func viewDidLayoutSubviews() {
super.viewDidLayoutSubviews()
self.cornerClippedView.maskView?.frame = self.cornerClippedView.bounds
}
Here's a gist of all the code, so you can copy and paste it into a playground to try out. You'll have to supply your own adorable test image.
UPDATE
Here's code to create a label with a white background, and overlay it (inset by 20 points on each side) on the background image:
let backgroundView = UIImageView(image: UIImage(named: "Kaz-256.jpg"))
let label = UILabel(frame: backgroundView.bounds.insetBy(dx: 20, dy: 20))
label.backgroundColor = UIColor.whiteColor()
label.font = UIFont.systemFontOfSize(50)
label.text = "This is the label"
label.lineBreakMode = .ByWordWrapping
label.numberOfLines = 0
label.textAlignment = .Center
label.autoresizingMask = [ .FlexibleWidth, .FlexibleHeight ]
backgroundView.addSubview(label)
XCPlaygroundPage.currentPage.liveView = backgroundView
maskCorners([.TopLeft, .BottomRight], ofView: label, toDepth: 50)
Result:
I don't have much experience with iOS/Swift. I want to make a canvas for drawing an image from the gallery (which I could do by following some tutorials), but I cannot make the UIView's size match the image size. I had to scale the image down to fit it on the screen, but I want it the other way around.
Let's say I want to make a UIView with a size of 1600x1200 (my image size). What do I have to do? Some example code or ideas would be great!
Try this:
myView.frame = CGRect(origin: CGPoint(x: 0, y: 0), size: CGSize(width: 1600, height:1200))
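If you don't want to hardcode those numbers, you could take the size straight from the image instead (a small sketch; "photo.jpg" and myView are placeholder names for your own image and view):
if let image = UIImage(named: "photo.jpg") {
    myView.frame = CGRect(origin: CGPoint.zero, size: image.size)
}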
I've got the following screen design:
I want to render an MKMapView into a UIImage, then apply an elliptical UIBezierPath and clip the top part of the UIImage. How can I achieve this? Thanks in advance.
Here is a simple implementation that you can follow to get a similar effect using CAShapeLayer.
Create an ellipse path large enough to fit the height of your image view; the width can be adjusted to control the curve.
Create a rectangular path that fits the size of the image view; its width and height should match the image view's size.
Transform the ellipse so that the rectangular path sits exactly in the middle of the ellipse.
Now, if you look at the image above, the rectangle has the same size as your image view. If you manage to remove the portions of the shapes that do not intersect, you will have your desired effect.
And this is the portion of the image that you will be masking:
This can be achieved quite easily using CAShapeLayer.
Here is a simple implementation that you can use,
let image = UIImage(named: "image.jpg")
let imageSize = image!.size
let imageView = UIImageView(frame: CGRect(origin: CGPoint.zero, size: imageSize))
imageView.clipsToBounds = true
imageView.image = image
let curveRadius: CGFloat = imageSize.width * 0.005
let invertedRadius: CGFloat = 1.0 / curveRadius
// draw ellipse in rect with big width, but same height
let ellipticalPath = UIBezierPath(ovalInRect: CGRectMake(0, 0, imageSize.width + 2 * invertedRadius * imageSize.width, imageSize.height))
// transform it to center of imageView
ellipticalPath.applyTransform(CGAffineTransformMakeTranslation(-imageSize.width * invertedRadius, 0))
// create rectangle path exactly similar to imageView
let rectanglePath = UIBezierPath(rect: imageView.bounds)
// translate it by 0.5 ratio in order to create intersection between circle and rectangle
rectanglePath.applyTransform(CGAffineTransformMakeTranslation(0, -imageSize.height * CGFloat(0.5)))
// append rectangle to elliptical path
ellipticalPath.appendPath(rectanglePath)
// create mask
let maskLayer = CAShapeLayer()
maskLayer.frame = imageView.bounds
maskLayer.path = ellipticalPath.CGPath
imageView.layer.mask = maskLayer
And here is how it looks,
You can adjust the value of curveRadius to suit your need.
Note that the shape layer intersection is possible because of the fillRule property on CAShapeLayer, which has a default value of kCAFillRuleNonZero. Read more about it here: https://developer.apple.com/library/ios/documentation/GraphicsImaging/Reference/CAShapeLayer_class/#//apple_ref/occ/instp/CAShapeLayer/fillRule
I'm trying to create functionality like PicFrame or any other application for creating a photo collage in one frame.
I've created two scroll views, each with an image view inside, for scrolling and zooming the images. That works well.
Then I need to combine the two rectangular images into one square image.
var firstImage = UIImage(named: leftImagePath)
var secondImage = UIImage(named: rightImagePath)
var size = CGSize(width: 1080, height: 1080)
UIGraphicsBeginImageContext(size)
let leftImageAreaSize = CGRect(x: 0, y: 0, width: size.width / 2, height: size.height)
firstImage!.drawInRect(leftImageAreaSize)
let rightImageAreaSize = CGRect(x: size.width / 2 + 1 , y: 0, width: size.width / 2, height: size.height)
secondImage!.drawInRect(rightImageAreaSize)
var newImage:UIImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
This code works well, but I need to take the scroll and zoom values into account: I have to crop and scale the images using those values before creating the single square image.
Can anyone guide me on how to do this?
I make PicFrame, so I suppose I have some experience here. Although this isn't what I do, and I haven't tried it myself, if you just want a quick image of what you see, you could use drawViewHierarchyInRect and capture the screen area.
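A rough sketch of that snapshot approach, assuming containerView is a hypothetical view that holds both of your scroll views:
// Render whatever is currently visible inside containerView into a UIImage.
UIGraphicsBeginImageContextWithOptions(containerView.bounds.size, true, 0.0)
containerView.drawViewHierarchyInRect(containerView.bounds, afterScreenUpdates: true)
let snapshot = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()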
Otherwise, what you want to do is get the CGPoint contentOffset and the CGSize bounds.size from the UIScrollView, then divide both by the UIScrollView's zoomScale. Make sure your contentSize is the size of the image, so that a zoomScale of 1.0 corresponds to the width and height of the original image.
From this you should be able to make a CGRect with the x, y, width, and height of what is visible in your scroll view, translated into the image's coordinate space. Crop the image to this rect and then draw it into your final graphics context at your desired CGRect.
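Here is a sketch of that translation, assuming contentSize matches the image size at a zoomScale of 1.0 and ignoring the image's scale factor for simplicity; leftScrollView is a hypothetical reference to one of your scroll views, while firstImage and leftImageAreaSize come from the question's code:
// Convert the scroll view's visible area into the image's coordinate space.
func visibleRectInImage(scrollView: UIScrollView) -> CGRect {
    let scale = scrollView.zoomScale
    return CGRect(x: scrollView.contentOffset.x / scale,
                  y: scrollView.contentOffset.y / scale,
                  width: scrollView.bounds.size.width / scale,
                  height: scrollView.bounds.size.height / scale)
}
// Crop the original image to the visible rect, then draw the cropped
// image into the square context instead of the full image.
let cropRect = visibleRectInImage(leftScrollView)
if let croppedCGImage = CGImageCreateWithImageInRect(firstImage!.CGImage, cropRect) {
    UIImage(CGImage: croppedCGImage).drawInRect(leftImageAreaSize)
}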