I have three paths. I want two of those paths, path1 and path2, to be subtracted from path3. I do not want the area that overlaps between path1 and path2 to be filled. Here's a diagram I made to explain what I mean:
I already tried this question, but the accepted answer produces what is shown above in "Result With EOClip." I tried CGContextSetBlendMode(ctx, kCGBlendModeClear), but all it did was make the fill black. Any ideas?
Playing a bit with PaintCode, I landed on this. Maybe it works for your case?
let context = UIGraphicsGetCurrentContext()
// Draw inside a transparency layer so the destination-out fills only knock out
// what is drawn within this layer, not whatever is already behind the view.
CGContextBeginTransparencyLayer(context, nil)
let path3Path = UIBezierPath(rect: CGRectMake(0, 0, 40, 40))
UIColor.blueColor().setFill()
path3Path.fill()
// Destination-out erases the already-drawn pixels wherever the next fills land,
// so the overlap of path1 and path2 stays empty instead of being filled again.
CGContextSetBlendMode(context, .DestinationOut)
let path2Path = UIBezierPath(rect: CGRectMake(5, 5, 20, 20))
path2Path.fill()
let path1Path = UIBezierPath(rect: CGRectMake(15, 15, 20, 20))
path1Path.fill()
CGContextEndTransparencyLayer(context)
Related
I need to add a drop shadow on my image, not the image view. Is there any way to do that? I know I can add a shadow to the imageView like this -
imageView.layer.masksToBounds = false
imageView.layer.shadowColor = UIColor.gray.cgColor
imageView.layer.shadowOffset = CGSize(width: 0, height: 1)
imageView.layer.shadowOpacity = 1
imageView.layer.shadowRadius = 1.0
but I need to add the shadow to the image itself, not the imageView. Does anyone have any clue?
I think you can use CIFilter in Core Image. CIFilter is an image processor that produces an image by manipulating one or more input images or by generating new image data.
You can check various references here.
I think you can use CIHighlightShadowAdjust.
CIHighlightShadowAdjust exposes the properties you use to configure a highlight-shadow adjust filter.
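A minimal sketch of what applying that filter to a UIImage might look like (the helper name and the parameter values here are illustrative, not tuned):

import CoreImage
import UIKit

// Sketch only: run CIHighlightShadowAdjust over a UIImage.
// The amounts below are example values, not recommendations.
func adjustingShadows(of image: UIImage) -> UIImage? {
    guard let input = CIImage(image: image),
          let filter = CIFilter(name: "CIHighlightShadowAdjust") else { return nil }
    filter.setValue(input, forKey: kCIInputImageKey)
    filter.setValue(0.7, forKey: "inputShadowAmount")     // lift the shadows a bit
    filter.setValue(1.0, forKey: "inputHighlightAmount")  // leave highlights alone
    guard let output = filter.outputImage,
          let cgImage = CIContext().createCGImage(output, from: output.extent) else { return nil }
    return UIImage(cgImage: cgImage)
}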
Just for @dfd.
So, I went and had a look at Create new UIImage by adding shadow to existing UIImage. After scrolling down a bit, I started to find several Swift-based solutions. Intrigued, I threw them into a Playground to see what they could do.
I settled on this solution...
import UIKit
extension UIImage {
/// Returns a new image with the specified shadow properties.
/// This will increase the size of the image to fit the shadow and the original image.
func withShadow(blur: CGFloat = 6, offset: CGSize = .zero, color: UIColor = UIColor(white: 0, alpha: 0.8)) -> UIImage {
let shadowRect = CGRect(
x: offset.width - blur,
y: offset.height - blur,
width: size.width + blur * 2,
height: size.height + blur * 2
)
UIGraphicsBeginImageContextWithOptions(
CGSize(
width: max(shadowRect.maxX, size.width) - min(shadowRect.minX, 0),
height: max(shadowRect.maxY, size.height) - min(shadowRect.minY, 0)
),
false, 0
)
let context = UIGraphicsGetCurrentContext()!
context.setShadow(
offset: offset,
blur: blur,
color: color.cgColor
)
draw(
in: CGRect(
x: max(0, -shadowRect.origin.x),
y: max(0, -shadowRect.origin.y),
width: size.width,
height: size.height
)
)
let image = UIGraphicsGetImageFromCurrentImageContext()!
UIGraphicsEndImageContext()
return image
}
}
let sourceImage = UIImage(named: "LogoSmall.png")!
let shadowed = sourceImage.withShadow(blur: 6, color: .red)
But wait, that's not a drop shadow, it's an outline!
🙄 Apparently we need to hand-hold everybody nowadays...
Changing the parameters to ...
let shadowed = sourceImage.withShadow(blur: 6, offset: CGSize(width: 5, height: 5), color: .red)
Produces a drop shadow. I like the solution because it doesn't make assumptions and provides a suitable number of parameters to change the output as desired.
I liked the solution so much that I copied the extension into my personal library; you never know when it might come in handy.
Remember, in order to produce this style of image, the original image needs to be transparent.
A little bit like...
Remember, iOS has been around a LONG time, and ObjC has been around even longer. You're likely to come across many solutions which are only presented in ObjC, which means it's still important to have the skill/ability to at least read the code. If we're lucky, other members of the community will produce suitable Swift variants, but this isn't always possible.
I'm sure I don't need to go to the extent of writing a full tutorial on how to include images in Playground, there are plenty of examples about that 😉
This is the hierarchy:
-- ViewController.View P [width: 375, height: 667]
---- UIImageView A [width: 375, height: 667]
[A is holding an image of size(1287,1662)]
---- UIImageView B [width: 100, height: 100]
[B is holding an image of size(2400,982)]
Note: B is not a subview of A. In the app, B is draggable, and I am trying to merge A and B, where A is the background and B is the foreground.
I want to find the exact position of B such that, after merging, it will be at the same position where I dragged and dropped it.
I have written all the code for drag and drop.
I couldn't find a proper position of B for merging with A.
Here are the problem screenshots:
Showing the initial position of B.
This is the new position of B after drag and drop.
Wrong position of B after merging.
Any help would be appreciated.
Thanks in advance.
Attention: The attached images (a sample letter and a signature) inside the screenshots were found via Google search. I have used them for demonstration purposes only.
Because the imageView's size is different from the original image's, instead of merging the two images via the views' ABSOLUTE positions, you can use a RELATIVE position.
Pseudo-code:
let relativeOrigin = {(B's origin.x in A) / A.width , (B's origin.y in A) / A.height}
let actualOrigin = { imgAWidth * relativeOrigin.x , imgAHeight * relativeOrigin.y } //this should be the correct absolute origin.
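In Swift that might look something like this (a sketch; imageViewA, imageViewB and imageA are placeholder names for view A, view B and A's full-size UIImage):

// Sketch: convert B's origin from A's coordinate space into the
// full-size image's coordinate space using relative (0...1) positions.
let originInA = imageViewB.convert(CGPoint.zero, to: imageViewA)
let relativeOrigin = CGPoint(x: originInA.x / imageViewA.bounds.width,
                             y: originInA.y / imageViewA.bounds.height)
let actualOrigin = CGPoint(x: imageA.size.width * relativeOrigin.x,
                           y: imageA.size.height * relativeOrigin.y)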
You can consider this as a suggestion:
FullscreenPoint.frame.origin.y = (screenShot.frame.origin.y*FullImage.frame.size.height)/screenShot.frame.size.height;
FullscreenPoint.frame.origin.x = (screenShot.frame.origin.x*FullImage.frame.size.width)/screenShot.frame.size.width;
I needed to use imageView's (A) frame.size instead of A.image.size to get the right location while merging.
Here's the code:
func mixImagesWith(frontImage: UIImage?, backgroundImage: UIImage?, atPoint point: CGPoint, ofSize signatureSize: CGSize) -> UIImage {
    // Use the image view's frame size, not the image's pixel size, so the drop
    // point (which is in view coordinates) lines up in the merged image.
    let size = self.imgBackground.frame.size
    //let size = self.imgBackground.image!.size
    UIGraphicsBeginImageContextWithOptions(size, false, UIScreen.main.scale)
    backgroundImage?.draw(in: CGRect(x: 0, y: 0, width: size.width, height: size.height))
    frontImage?.draw(in: CGRect(x: point.x, y: point.y, width: signatureSize.width, height: signatureSize.height))
    let newImage: UIImage = UIGraphicsGetImageFromCurrentImageContext()!
    UIGraphicsEndImageContext()
    return newImage
}
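A hypothetical call site, assuming imgSignature is the draggable image view B sitting over imgBackground (the names are placeholders, not from the question):

// Merge using B's frame within the view, so the drop position is preserved.
let merged = mixImagesWith(frontImage: imgSignature.image,
                           backgroundImage: imgBackground.image,
                           atPoint: imgSignature.frame.origin,
                           ofSize: imgSignature.frame.size)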
I have two UIBezierPaths, one that represents the polygon of part of an image and the other is a path to be drawn on top of it.
I need to find the intersection between them so that only the points that are inside this intersection area will be colored.
Is there a method in UIBezierPath that can find intersection points - or a new path - between two paths ?
I'm not aware of a method for getting a new path that's the intersection of two paths, but you can fill or otherwise draw in the intersection by using the clipping property of each path.
In this example, there are two paths, a square and a circle:
let path1 = UIBezierPath(rect: CGRect(x: 0, y: 0, width: 100, height: 100))
let path2 = UIBezierPath(ovalIn: CGRect(x:50, y:50, width: 100, height: 100))
I make a renderer to draw these, but you could be doing this in drawRect or wherever:
let renderer = UIGraphicsImageRenderer(bounds: CGRect(x: 0, y: 0, width: 200, height: 200))
let image = renderer.image {
context in
// You wouldn't actually stroke the paths, this is just to illustrate where they are
UIColor.gray.setStroke()
path1.stroke()
path2.stroke()
// You would just do this part
path1.addClip()
path2.addClip()
UIColor.red.setFill()
context.fill(context.format.bounds)
}
The resulting image looks like this (I've stroked each path for clarity as indicated in the code comments, in actual fact you'd only do the fill part):
I wrote a UIBezierPath library that will let you cut a given closed path into sub shapes based on an intersecting path. It'll do essentially exactly what you're looking for: https://github.com/adamwulf/ClippingBezier
NSArray<UIBezierPath*>* componentShapes = [shapePath uniqueShapesCreatedFromSlicingWithUnclosedPath:scissorPath];
Alternatively, you can also just find the intersection points:
NSArray* intersections = [scissorPath findIntersectionsWithClosedPath:shapePath andBeginsInside:nil];
I am trying to make a basic game for iOS 10 using Swift 3 and SceneKit. In one part of my game's code I have a function that adds fish to the screen and gives each one a certain tag so I can find them later:
let fish = UIImageView(frame: CGRect(x: 0, y: 0, width: CGFloat(fishsize.0), height: CGFloat(fishsize.1)))
fish.contentMode = .scaleAspectFit
fish.center = CGPoint(x: CGFloat(fishsize.0) * -0.6, y: pos)
fish.animationImages = fImg(Fishes[j].size, front: Fishes[j].front)
fish.animationDuration = 0.7
fish.startAnimating()
fish.tag = j + 200
self.view.insertSubview(fish, belowSubview: big1)
What I would like is to be able to, at a certain point, recall the fish and:
1. Change the images shown, and
2. Stop the animation.
Is this possible? I've been trying it with var fish = view.viewWithTag(killer+200)! but from this I can't seem to change any image properties of the new variable fish.
Any help would be much appreciated.
Tom
Try casting the UIView to a UIImageView, like this:
if let fish = view.viewWithTag(killer+200) as? UIImageView {
//Perform your action on imageView object
}
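From there, the two things asked for above could look roughly like this (newImages is a placeholder for whatever replacement image array you want to show):

if let fish = view.viewWithTag(killer + 200) as? UIImageView {
    fish.stopAnimating()              // stop the current animation
    fish.animationImages = newImages  // swap in different images (placeholder array)
    fish.image = newImages.first      // or call startAnimating() to run the new set
}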
I'm trying to draw on top of an image in a CALayer and am having trouble with where the drawing shows up on different size displays.
func drawLayer(){
let circleLayer = CAShapeLayer()
let radius: CGFloat = 30
let x = Thermo.frame.origin.x
let y = Thermo.frame.origin.y
let XX = Thermo.frame.width
let YY = Thermo.frame.height
print("X: \(x) Y: \(y) Width: \(XX) Height: \(YY)")
circleLayer.path = UIBezierPath(roundedRect: CGRect(x: 0, y: 0, width: 2.0 * radius, height: 2.0 * radius) , cornerRadius: radius).CGPath
circleLayer.fillColor = UIColor.redColor().CGColor
circleLayer.shadowOffset = CGSizeMake(0, 3)
circleLayer.shadowRadius = 5.0
circleLayer.shadowColor = UIColor.blackColor().CGColor
circleLayer.shadowOpacity = 0.8
circleLayer.frame = CGRectMake(0, 410, 0, 192);
self.Thermo.layer.addSublayer(circleLayer)
circleLayer.setNeedsDisplay()
}
That draws a circle in the correct place ... for an iPhone 6s. But when the enclosing UIImageView component is scaled for a smaller device, it clearly doesn't. I added the print() to see what the image size and position were, and ... well, it's exactly the same on every device I run it on (X: 192.0 Y: 8.0 Width: 216.0 Height: 584.0), but clearly it's being scaled by the constraints in the Auto Layout manager.
So, my question is: how can I figure out the proper ratio and position for different screen sizes if I can't use the enclosing view's size and position, since those never seem to change?
Here is the image I am starting with, in a UIImageView, and trying to draw over.
I'm of course trying to color it in based on data from an external device. Any suggestions/sample code would be most appreciated!
CALayer and its subclasses, including CAShapeLayer, have a property:
var contentsScale: CGFloat
From the class reference:
For layers you create and manage yourself, you must set the value of this property yourself based on the resolution of the screen and the content you are providing. Core Animation uses the value you specify as a cue to determine how to render your content.
So what you need to do is set that scale on the layer; you get the scale of the device's screen from the UIScreen class:
circleLayer.contentsScale = UIScreen.mainScreen().scale