I have a UIImageView that contains a circle (the obstacle) and another UIImageView that contains another image (my character). When the circle hits my character, the game ends.
My problem is that the game ends when my character touches the rectangular box of the UIImageView that contains the circle, rather than the circle itself inside that box.
The solutions I can think of are:
Make the UIImageView rounded so it fits my circle.
Somehow detect a collision between my character's pixels and the circle's pixels rather than the UIImageViews' frames.
Help and ideas would be really appreciated. I am a beginner with Xcode.
Thanks!
Couple of different techniques:
1) UIView's layer.cornerRadius - super simple to implement; just set that one property (see the Swift sketch after this list). But it's really bad for scrolling elements like table views.
2) UIImageView as a mask - also easy to implement. Basically, make a square image with a transparent circle in the middle and slap it on top of your view with a new UIImageView. If your circle design has some reflections or styling, this might make your life easier.
3) Custom drawing - make a UIView subclass and override drawRect to draw your image and clip it to a circular path.
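For option 1, a minimal Swift sketch (obstacleImageView is an assumed name for the image view that holds the circle; this only rounds the visible shape, it does not change the view's rectangular frame):
obstacleImageView.layer.cornerRadius = obstacleImageView.bounds.width / 2  // half the width of a square view gives a circle
obstacleImageView.clipsToBounds = true                                     // clip the image to the rounded layer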
Hope that helps!
EDIT:
You can read these answers:
1) Fill an UIBezierPath with a UIImage
2) iOS: Inverse UIBezierPath (bezierPathWithOvalInRect)
3) How to mask a square image into an image with round corners in the iPhone SDK?
I have the following problem: I'm scanning a QR code with AVFoundation. This works quite well, and I can also draw a border around the code by adding a subview and setting its frame with subview.frame = qrCodeObject.bounds. The only problem is that this border is just a rectangle and ignores the perspective of the QR code.
I know that the qrCodeObject has a corners property which contains the top-right, top-left, bottom-right and bottom-left points of the detected QR code.
My question is now: how can I apply those corner points to the "border" view so that the border has the same perspective as the QR code? Or in other words: how do I "transform" the view according to the corner points?
Thanks a lot in advance!
UPDATE:
Here you can see the problem: the red box is a UIView whose frame property is set to the QR code's bounds property. This misses the perspective. I would like to transform the UIView (the red box) to follow the corners property of the QR code, which contains the top-right, top-left, bottom-right and bottom-left points (CGPoint) of the QR code. It is important to apply this to a UIView, because I later want to apply it to an image view. A mask is also not usable, as it just hides part of the view but does not stretch or transform the view's content.
I found a solution: AGGeometryKit did the trick: https://github.com/hfossli/AGGeometryKit/
Thanks everybody for helping!
You can't transform a CGRect that way, as far as I know. (At least I'm unaware of any framework that can do that kind of image processing.)
What you can do is to draw a polygon using the points of the qrCodeObject.
In drawRect of your UIView, use CGContext and CGPath to draw the path you'd like.
You want your drawing UIView to be the same size as the one showing the QR code so that you don't have to translate the points onto a second coordinate space.
This answer has directions if you need more guidance on how to do that.
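A minimal Swift sketch of that idea, assuming an overlay view laid out exactly over the camera preview and a corners array of CGPoints that has already been converted into that view's coordinate space (all names are assumptions):
import UIKit
class QROutlineView: UIView {
    var corners: [CGPoint] = [] {            // assumed: points in this view's coordinate space
        didSet { setNeedsDisplay() }
    }
    override func draw(_ rect: CGRect) {
        guard corners.count > 2, let context = UIGraphicsGetCurrentContext() else { return }
        context.setStrokeColor(UIColor.red.cgColor)
        context.setLineWidth(2)
        context.addLines(between: corners)   // moves to the first point, then lines to the rest
        context.closePath()
        context.strokePath()
    }
}
Remember to give the overlay a clear background color so the camera preview stays visible underneath it.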
Ok, the problem you are facing is that a CGRect can only represent a rectangle that is not tilted or distorted. What you are dealing with is an image that has different kinds of perspective distortion.
I haven't tried to do this, but it sounds like AVFoundation gives you 4 CGPoint objects for a reason. You need to draw those 4 CGPoints using a UIBezierPath rather than trying to draw a CGRect. Simply create a bezier path that moves to the first point, then draws lines to each subsequent point, and finally, back to the first point. That will give you a quadrilateral that takes into account the distortion of your QR code.
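A hedged Swift sketch of that, assuming the corners have already been converted into the preview view's coordinate space (for example with the preview layer's transformedMetadataObject(for:)) and arrive as an array of CGPoint; previewView is an assumed name:
let corners = qrCodeObject.corners                     // [CGPoint] when using the Swift API
let quad = UIBezierPath()
if let first = corners.first {
    quad.move(to: first)
    corners.dropFirst().forEach { quad.addLine(to: $0) }
    quad.close()                                       // back to the first point
}
let outline = CAShapeLayer()
outline.path = quad.cgPath
outline.strokeColor = UIColor.red.cgColor
outline.fillColor = UIColor.clear.cgColor
outline.lineWidth = 2
previewView.layer.addSublayer(outline)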
CATransform3DRotate could be your friend, here.
https://guides.codepath.com/ios/Using-Perspective-Transforms might be a good starting point.
I have a UIImageView displaying an image. This view's layer is masked with a CAShapeLayer in order to create a circular "hole" in the image. To create the hole I use a UIBezierPath with .usesEvenOddFillRule = true.
It works fine when static. But I need that hole to move with the user's finger. To do that, I create a new UIBezierPath with the even-odd rule each time the user moves their finger. On smaller phones with smaller images it looks OK, but on an iPhone 6 Plus it is choppy.
Any ideas on how to make it smooth are very welcome. I cannot just move the frame of the masking CAShapeLayer - it would move the hole but also hide some edges of the image. So the only way is to change its .path each time the user moves a finger, and that is slow.
EDIT: matt's answer would work in some scenarios, but not in my case: I am not displaying the whole image, only a part of it defined by a UIBezierPath. This part is most often oval (but can be a rectangle or a rounded rectangle) and it has a "hole" cut in it. While the hole is moving with the user's finger, the displayed part/shape of the image does not change - it is static.
The inefficient solution that was in place so far was:
Create a UIBezierPath with the boundary of the displayed part of the image
Set the even-odd fill rule on it
Add the UIBezierPath of the hole to it
Set it as the path of a CAShapeLayer with some opaque fill color
Use that CAShapeLayer as the mask of the UIImageView
This procedure was repeated each time the user moved their finger. I cannot simply move the whole mask layer, as that would also change the part of the image being displayed. I want it to stay static and move only the hole in it.
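For reference, a minimal Swift sketch of that (slow) rebuild; imageView, visiblePath (the displayed part) and holeRect (the circle under the finger) are assumed names:
let maskPath = UIBezierPath(cgPath: visiblePath.cgPath)   // boundary of the displayed part of the image
maskPath.append(UIBezierPath(ovalIn: holeRect))           // the hole that follows the finger
maskPath.usesEvenOddFillRule = true
let maskLayer = CAShapeLayer()
maskLayer.fillRule = .evenOdd                              // kCAFillRuleEvenOdd in older Swift
maskLayer.fillColor = UIColor.black.cgColor                // any opaque color works for a mask
maskLayer.path = maskPath.cgPath
imageView.layer.mask = maskLayer                           // rebuilt on every pan update, hence the choppiness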
it would move the hole but also hide some edges of the image
Well, I don't agree. Moving the mask is exactly the way to do this. I don't see why you think there's a problem with that. Perhaps the issue is merely that you have not made the mask layer big enough. It does not have to be the same size as the layer it is masking. In this case, it needs to be about 9 times the size of the masked layer (3 times horizontally and 3 times vertically), so that it will continue to cover the masked layer no matter how far in any direction the user slides it.
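A hedged Swift sketch of that suggestion: build an oversized mask once, then only move its position from the pan handler (all names are assumptions):
// Build once: a mask about 3x the image view in each dimension, with the hole in the middle.
let maskLayer = CAShapeLayer()
maskLayer.frame = imageView.bounds.insetBy(dx: -imageView.bounds.width,
                                           dy: -imageView.bounds.height)
let maskPath = UIBezierPath(rect: maskLayer.bounds)
maskPath.append(UIBezierPath(ovalIn: CGRect(x: maskLayer.bounds.midX - 40,
                                            y: maskLayer.bounds.midY - 40,
                                            width: 80, height: 80)))
maskLayer.fillRule = .evenOdd
maskLayer.path = maskPath.cgPath
imageView.layer.mask = maskLayer
// On every pan update: just move the mask so the hole sits under the finger.
CATransaction.begin()
CATransaction.setDisableActions(true)          // no implicit animation while dragging
maskLayer.position = fingerLocation            // the touch location in the image view's coordinates
CATransaction.commit()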
I am learning SpriteKit right now and I want to detect a collision between two images.
Just a fun picture as an example:
The image is still a rectangle. How can I make the collision shape fit the original face instead of that rectangle? I don't want the collision to happen early, when something hits the rectangle of the image; I want it to collide when it actually hits the black lines of the face.
I hope you can understand my problem.
Thanks for any help.
There are two issues here:
You probably want to set up your SKPhysicsBody to be a path around your SKSpriteNode. For example, something like the following is close:
UIBezierPath *path = [UIBezierPath bezierPathWithOvalInRect:CGRectMake(-40, -22, 80, 39)];
node.physicsBody = [SKPhysicsBody bodyWithPolygonFromPath:path.CGPath];
That's probably not quite right, so feel free to tweak those values, but hopefully it illustrates the idea: Come up with a path that defines your boundary, and then set the physicsBody accordingly.
If you want to see the physics outlines, set showsPhysics on the SKView.
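In Swift, for example, somewhere in the view controller that owns the SKView (skView is an assumed name):
skView.showsPhysics = true   // overlays the outline of every physics body on the scene
skView.showsFPS = true       // other debug flags work the same way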
You probably want to mask your image so that the corners of the image are transparent (i.e. have an alpha of zero):
So, when that's on a colored background it looks like:
By masking the corners, you don't have to worry about the white corner covering up something else on the scene.
If it's an image that has white borders all around it, you can edit it in Preview and use the Alpha tool to get rid of all of the white around it.
I think in that case the sprite's collisions will happen against the black lines.
Here is a photo explanation to better illustrate what I mean:
I have two UIImageViews, with View1 being on the bottom and View2 at the top:
What I would like to accomplish is to programmatically set an area on View2 that is completely transparent (i.e. has an alpha of 0), so that this will be the end result:
I haven't been able to find a similar problem related to making part of a UIImageView transparent in the form of a shape (specifically, a circle), and I was wondering how I should tackle this problem.
Thanks!
One road you could go down is Core Graphics (see the sketch below):
Create an image context
Set the clipping path you need (or just clear the circle for the "hole" after drawing)
Draw the original image into the context
Make a UIImage from that context
Assign the image to the top UIImageView
https://developer.apple.com/library/ios/documentation/2ddrawing/conceptual/drawingprintingios/HandlingImages/Images.html#//apple_ref/doc/uid/TP40010156-CH13-SW1
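A minimal Swift sketch of those steps, assuming the upper image view is called topImageView and holeRect is the circular area to clear (in the image's own coordinates):
import UIKit
func punchHole(in holeRect: CGRect, of image: UIImage) -> UIImage? {
    UIGraphicsBeginImageContextWithOptions(image.size, false, image.scale)  // create an image context
    defer { UIGraphicsEndImageContext() }
    image.draw(at: .zero)                                                   // draw the original image
    guard let context = UIGraphicsGetCurrentContext() else { return nil }
    context.setBlendMode(.clear)                                            // "clear" erases what was already drawn
    context.fillEllipse(in: holeRect)                                       // punch the circular hole
    return UIGraphicsGetImageFromCurrentImageContext()                      // make a UIImage from the context
}
// usage
topImageView.image = punchHole(in: CGRect(x: 60, y: 80, width: 100, height: 100),
                               of: topImageView.image!)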
I'm working on an app that lets the user stack graphics on top of each other.
The graphics are instantiated as UIImageViews and are transparent outside of the actual graphic. I'm also using pan gestures to let the user drag them around the screen.
So when you have a bunch of graphics of different sizes and shapes on top of one another, you may have the illusion that you are touching a sub-indexed view, but you're actually touching the top one, because some transparent part of it is hovering over your touch point.
I was wondering if anyone had ideas on how to listen for the pan gesture ONLY on the solid part of the image view, or anything else that would tighten up the user experience so that whatever the user touches is what gets selected. Thanks!
Create your own subclass of UIImageView. In your subclass, override the pointInside:withEvent: method to return NO if the point is in a transparent part of the image.
Of course, you need to determine if a point is in a transparent part. :)
If you happen to have a CGPath or UIBezierPath that outlines the opaque parts of your image, you can do it easily using CGPathContainsPoint or -[UIBezierPath containsPoint:].
If you don't have a handy path, you will have to examine the image's pixel data. There are many answers on stackoverflow.com already that explain how to do that. Search for get pixel CGImage or get pixel UIImage.
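A minimal Swift sketch of the path-based variant, assuming you already have a UIBezierPath (opaquePath) in the image view's own coordinate space that outlines the opaque part of the graphic:
import UIKit
class OpaqueHitImageView: UIImageView {
    var opaquePath: UIBezierPath?                 // assumed: outline of the non-transparent pixels
    override func point(inside point: CGPoint, with event: UIEvent?) -> Bool {
        guard let path = opaquePath else {
            return super.point(inside: point, with: event)   // no path yet: fall back to the full frame
        }
        return path.contains(point)               // touches (and gestures) only land on the opaque part
    }
}
The pixel-based variant has the same structure; only the containment test changes to a lookup into the image's alpha channel.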