SKShapeNode empty rectangle - ios

I started to learn SpriteKit and immediately ran into a problem. I'm trying to draw an empty rectangle with an SKShapeNode, but it does not appear on the screen. If I set a color on the fill property, the rectangle appears. What am I doing wrong?
CGRect box = CGRectMake(0, 0, self.frame.size.width/2, self.frame.size.height/2);
SKShapeNode *shapeNode = [[SKShapeNode alloc] init];
shapeNode.path = [UIBezierPath bezierPathWithRect:box].CGPath;
shapeNode.fillColor = nil;
shapeNode.strokeColor = SKColor.redColor;
shapeNode.lineWidth = 3;
[self addChild:shapeNode];

Welcome to SpriteKit. I am learning it as well and haven't had much experience with shape nodes, but here is what I would suggest:
//If you want the shape to be that of a rectangle I would suggest using a simpler allocation method such as the following:
SKShapeNode *shapeNode = [SKShapeNode shapeNodeWithRectOfSize:CGSizeMake(self.frame.size.width/2, self.frame.size.height/2)];
/*shapeNodeWithRectOfSize: is a built-in convenience constructor for SKShapeNode that handles
allocation and initialization for you, and will also create the rectangular path for you.
CGSizeMake returns a CGSize value*/
/*CGSize is a struct with two members, width and height. It is used to hold
the dimensions of objects. self.frame.size is a CGSize value*/
/*You do not need to set the fill color to nil. This is because the default is [SKColor clearColor],
which is an empty color already*/
//Make sure that you use the bracketed class method when setting the colour, as below
shapeNode.strokeColor = [SKColor redColor];
shapeNode.lineWidth = 3;
[self addChild:shapeNode];
If you would like a reference to the details of the SKShapeNode object then I would suggest looking here: Apple - SKShapeNode Reference
I haven't tested the code as I am not able to at the moment, so let me know if it does not work and I will see what I can do to help you. Once again welcome to Sprite-Kit, I hope it is a pleasant experience.
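For completeness, here is a minimal sketch of the original path-based approach with those two points applied (the bracketed class method for the stroke colour, and the fill simply left at its default of [SKColor clearColor]); it assumes it runs inside an SKScene subclass, e.g. in -didMoveToView:.
CGRect box = CGRectMake(0, 0, self.frame.size.width/2, self.frame.size.height/2);
SKShapeNode *shapeNode = [SKShapeNode node];
shapeNode.path = [UIBezierPath bezierPathWithRect:box].CGPath;
// no fillColor assignment: the default is [SKColor clearColor], so only the outline draws
shapeNode.strokeColor = [SKColor redColor];
shapeNode.lineWidth = 3;
[self addChild:shapeNode];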

Related

SpriteKit SKPhysicsBody with Inner Edges

I have created an SKSpriteNode called let's say Map that has an edge path that I have defined (some simple polygon shape).
What I am trying to figure out is how to add several other edge paths that would act as interior edges of the Map, as if the map as a whole did in fact have holes: some sort of inner boundary shapes that act together with the Map as one edge path (as shown below).
[image: map outline containing several interior hole boundaries]
I understand that there is a method that allows for creating an SKPhysicsBody with bodies (some NSArray), like so:
Map.physicsBody = [SKPhysicsBody bodyWithBodies:bodiesArray];
Does this method in fact generate what I have shown in the image? Assuming that the bodiesArray contains 3 SKSpriteNodes, each with a path defined using this method:
+ (SKPhysicsBody *)bodyWithEdgeChainFromPath:(CGPathRef)path
, creating the path like so:
SKSpriteNode *innerNode1 = [SKSpriteNode spriteNodeWithImageNamed:@"map"];
CGMutablePathRef innerNode1Path = CGPathCreateMutable();
CGPathMoveToPoint(innerNode1Path, NULL, 1110, 1110);
CGPathAddLineToPoint(innerNode1Path, NULL, <some x1>, <some y1>);
CGPathAddLineToPoint(innerNode1Path, NULL, <some x2>, <some y2>);
CGPathAddLineToPoint(innerNode1Path, NULL, <some x3>, <some y3>);
.
.
.
CGPathCloseSubpath(innerNode1Path);
innerNode1.physicsBody = [SKPhysicsBody bodyWithPolygonFromPath:innerNode1Path];
[bodiesArray addObject:innerNode1];
// Repeat for other 2 nodes
I understand that an alternative would be to create 3 separate nodes with the location and shape of the intended "holes", but I am trying to avoid creating more nodes than I need. If anyone can confirm that what I am trying to do is correct, or suggest an alternative that I am unaware of, that would be great.
NOTE: If what I am doing is correct but I am missing something, I would appreciate it if someone could show me the correct way to do what I am trying to do (even a simple example of a square with an inner smaller square would be great). Thanks!
EDIT 1:
Below is the code snippet that I am using as an attempt to create the "inner boundaries". The issue here is that while both the outer and inner rects are drawn and shown, when I add the inner rect to the Map's bodyWithBodies:, it takes full control of the collision detection, removing all contact control from the outer rect shell. When I remove the bodyWithBodies: it goes back to normal, showing both rects: the outer one has collision detection (it does not allow me to pass through), while the inner one has nothing... so close.
// 1 Create large outer shell Map
CGRect mapWithRect = CGRectMake(map.frame.origin.x + offsetX, map.frame.origin.y + offsetY, map.frame.size.width * shrinkage, map.frame.size.height * shrinkage);
self.physicsWorld.gravity = CGVectorMake(0.0, 0.0);
self.physicsWorld.contactDelegate = self;
// 2 Create smaller inner boundary
CGRect innerRect = CGRectMake(100, 100, 300, 300);
SKPhysicsBody *body = [SKPhysicsBody bodyWithEdgeLoopFromRect:innerRect];
body.categoryBitMask = wallCategory;
NSArray *bodyArray = [NSArray arrayWithObject:body];
// 3 Add bodies to main Map body
myWorld.physicsBody = [SKPhysicsBody bodyWithBodies:bodyArray];
myWorld.physicsBody.categoryBitMask = wallCategory;
if ( [[levelDict objectForKey:@"DebugBorder"] boolValue] == YES) {
// This will draw the boundaries for visual reference during testing
[self debugPath:mapWithRect];
[self debugPath:innerRect];
}
EDIT 2
This approach works, just by adding a new node with the same properties as the outer rect:
SKPhysicsBody *innerRectBody = [SKPhysicsBody bodyWithEdgeLoopFromRect:innerRect];
innerRectBody.collisionBitMask = playerCategory;
innerRectBody.categoryBitMask = wallCategory;
SKNode *innerBoundary = [SKNode node];
innerBoundary.physicsBody = innerRectBody;
[myWorld addChild: innerBoundary];
...but I would very much like a cleaner solution that does not require additional nodes. Thoughts?
You are doing nothing wrong. Here is an example where I created two edge-loop rect bodies along with two dynamic physics bodies:
//adding bodies after some time using gcd
dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(1.2 * NSEC_PER_SEC)), dispatch_get_main_queue(), ^{
[self addBodyA];
});
dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(1.2 * NSEC_PER_SEC)), dispatch_get_main_queue(), ^{
[self addBodyB];
});
-(void)addBodyB
{
SKSpriteNode *node=[SKSpriteNode spriteNodeWithColor:[SKColor redColor] size:CGSizeMake(20, 20)];
node.physicsBody=[SKPhysicsBody bodyWithRectangleOfSize:node.frame.size];
node.position=CGPointMake(550, 420);
node.physicsBody.restitution=1;
[self addChild:node];
}
-(void)addBodyA
{
SKSpriteNode *node=[SKSpriteNode spriteNodeWithColor:[SKColor redColor] size:CGSizeMake(20, 20)];
node.physicsBody=[SKPhysicsBody bodyWithRectangleOfSize:node.frame.size];
node.position=CGPointMake(400, 420);
node.physicsBody.restitution=1;
[self addChild:node];
}
-(void)addEdgesBodies
{
SKAction *r=[SKAction rotateByAngle:1.0/60 duration:1.0/60];
SKSpriteNode *rect=[SKSpriteNode spriteNodeWithColor:[SKColor clearColor] size:CGSizeMake(300,300)];
rect.physicsBody=[SKPhysicsBody bodyWithEdgeLoopFromRect:rect.frame];
rect.position=CGPointMake(500, 400);
[self addChild:rect];
//
SKSpriteNode *rect1=[SKSpriteNode spriteNodeWithColor:[SKColor clearColor] size:CGSizeMake(100,100)];
rect1.physicsBody=[SKPhysicsBody bodyWithEdgeLoopFromRect:rect1.frame];
rect1.position=CGPointMake(550, 450);
[self addChild:rect1];
[rect1 runAction:[SKAction repeatActionForever:r]];
}
[self addEdgesBodies];
Remember that edge bodies come with low CPU overhead, so don't worry about performance unless your polygon has a very large number of edges.
Your code for making a path and then using it in a physics body looks like it would work, as does your physics body from bodies. Unfortunately, I do not know if SKPhysicsBody can really support holes, because you cannot flip the normals of the body. The way I read Apple's documentation, bodyWithBodies: is meant to do things like take two circle shapes and make them into one body, rather than create a complex shape like that. The problem is that having a hole inside your bigger shape would mean ignoring collision in that area.
Here are some alternative options
One option is that you could build your stage from multiple shapes. For example, if you break your map into two pieces (with a line going through each shape), make physics bodies for those, have them overlap, and then merge them into one body, it might work out. I made a diagram showing this (pardon its terrible quality; you should hopefully still be able to understand what it is doing).
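As a rough sketch of that idea (the coordinates below are made up for illustration): two overlapping convex pieces each get their own polygon body, and the pieces are then merged into a single compound body.
// Two overlapping rectangular halves of a hypothetical map outline
CGMutablePathRef leftHalf = CGPathCreateMutable();
CGPathMoveToPoint(leftHalf, NULL, 0, 0);
CGPathAddLineToPoint(leftHalf, NULL, 320, 0);
CGPathAddLineToPoint(leftHalf, NULL, 320, 500);
CGPathAddLineToPoint(leftHalf, NULL, 0, 500);
CGPathCloseSubpath(leftHalf);
CGMutablePathRef rightHalf = CGPathCreateMutable();
CGPathMoveToPoint(rightHalf, NULL, 300, 0);
CGPathAddLineToPoint(rightHalf, NULL, 600, 0);
CGPathAddLineToPoint(rightHalf, NULL, 600, 500);
CGPathAddLineToPoint(rightHalf, NULL, 300, 500);
CGPathCloseSubpath(rightHalf);
// Each piece gets its own volume-based body, then the two are merged into one compound body
SKPhysicsBody *leftBody = [SKPhysicsBody bodyWithPolygonFromPath:leftHalf];
SKPhysicsBody *rightBody = [SKPhysicsBody bodyWithPolygonFromPath:rightHalf];
Map.physicsBody = [SKPhysicsBody bodyWithBodies:@[leftBody, rightBody]];
CGPathRelease(leftHalf);
CGPathRelease(rightHalf);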
Another option would be to make it with a texture. This can hurt performance a bit, but if you can manage it then it would probably work nicely.
Map.physicsBody = SKPhysicsBody(texture: Map.texture, size: Map.size)
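If you are working in Objective-C, the equivalent texture-based body (available starting with iOS 8, if I recall correctly) would look like:
Map.physicsBody = [SKPhysicsBody bodyWithTexture:Map.texture size:Map.size];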

How to Draw a single point line in iOS

I was wondering: what is the best way to draw a single-point line?
My goal is to draw this line in a tableViewCell to make it look just like the native cell separator.
I don't want to use the native separator because I want to make it a different color and put it in a different position (not at the bottom).
At first I was using a 1px UIView colored grey, but on Retina displays it looks like 2px.
I also tried using this method:
- (void)drawLine:(CGPoint)startPoint endPoint:(CGPoint)endPoint inColor:(UIColor *)color {
CGMutablePathRef straightLinePath = CGPathCreateMutable();
CGPathMoveToPoint(straightLinePath, NULL, startPoint.x, startPoint.y);
CGPathAddLineToPoint(straightLinePath, NULL, endPoint.x, endPoint.y);
CAShapeLayer *shapeLayer = [CAShapeLayer layer];
shapeLayer.path = straightLinePath;
UIColor *fillColor = color;
shapeLayer.fillColor = fillColor.CGColor;
UIColor *strokeColor = color;
shapeLayer.strokeColor = strokeColor.CGColor;
shapeLayer.lineWidth = 0.5f;
shapeLayer.fillRule = kCAFillRuleNonZero;
[self.layer addSublayer:shapeLayer];
}
It works maybe 60% of the time, for some reason. Is something wrong with it?
Anyway, I'd be happy to hear about a better way.
Thanks.
I did the same with a UIView category. Here are my methods:
#define SEPARATOR_HEIGHT 0.5
- (void)addSeparatorLinesWithColor:(UIColor *)color
{
[self addSeparatorLinesWithColor:color edgeInset:UIEdgeInsetsZero];
}
- (void)addSeparatorLinesWithColor:(UIColor *)color edgeInset:(UIEdgeInsets)edgeInset
{
UIView *topSeparatorView = [[UIView alloc] initWithFrame:CGRectMake(edgeInset.left, - SEPARATOR_HEIGHT, self.frame.size.width - edgeInset.left - edgeInset.right, SEPARATOR_HEIGHT)];
[topSeparatorView setBackgroundColor:color];
[self addSubview:topSeparatorView];
UIView *separatorView = [[UIView alloc] initWithFrame:CGRectMake(edgeInset.left, self.frame.size.height + SEPARATOR_HEIGHT, self.frame.size.width - edgeInset.left - edgeInset.right, SEPARATOR_HEIGHT)];
[separatorView setBackgroundColor:color];
[self addSubview:separatorView];
}
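Usage is then a one-liner, for example from -tableView:cellForRowAtIndexPath: (the cell and the inset values here are just placeholders):
[cell.contentView addSeparatorLinesWithColor:[UIColor lightGrayColor] edgeInset:UIEdgeInsetsMake(0, 15, 0, 0)];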
Just to add to Rémy's great answer, it's perhaps even simpler to do this. Make a class UILine.m
@interface UILine : UIView
@end
@implementation UILine
-(void)awakeFromNib
{
[super awakeFromNib];
// careful, contentScaleFactor does NOT WORK in storyboard during initWithCoder.
// example, float sortaPixel = 1.0/self.contentScaleFactor ... does not work.
// instead, use mainScreen scale which works perfectly:
float sortaPixel = 1.0/[UIScreen mainScreen].scale;
UIView *topSeparatorView = [[UIView alloc] initWithFrame:
CGRectMake(0, 0, self.frame.size.width, sortaPixel)];
topSeparatorView.userInteractionEnabled = NO;
[topSeparatorView setBackgroundColor:self.backgroundColor];
[self addSubview:topSeparatorView];
self.backgroundColor = [UIColor clearColor];
self.userInteractionEnabled = NO;
}
@end
In IB, drop in a UIView, click the identity inspector, and rename the class to UILine. Set the width you want in IB. Set the height to 1 or 2 pixels, simply so you can see it in IB. Set the background colour you want in IB. When you run the app it will become a 1-pixel line of that width, in that colour. (You probably should not be affected by any default autoresize settings in the storyboard/xib; I couldn't make it break.) You're done.
Note: you may think "Why not just resize the UIView in code in awakeFromNib?" Resizing views upon loading, in a storyboard app, is problematic - see the many questions here about it!
Interesting gotcha: it's likely you'll make the UIView, say, 10 or 20 pixels high on the storyboard, simply so you can see it. Of course it disappears in the app and you get the pretty one-pixel line. But be sure to remember self.userInteractionEnabled = NO, or it might sit over your other views, say, buttons!
2016 solution ! https://stackoverflow.com/a/34766567/294884
shapeLayer.lineWidth = 0.5f;
That's a common mistake and is the reason this is working only some of the time. Sometimes this will overlap pixels on the screen exactly and sometimes it won't. The way to draw a single-point line that always works is to draw a one-point-thick rectangle on integer boundaries, and fill it. That way, it will always match the pixels on the screen exactly.
To convert from points to pixels, if you want to do that, use the view's scale factor.
Thus, this will always be one pixel thick:
CGContextFillRect(con, CGRectMake(0,0,desiredLength,1.0/self.contentScaleFactor));
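Put together, a minimal sketch of that approach in a UIView subclass (the grey colour is just for illustration; the hairline is drawn along the view's top edge here):
- (void)drawRect:(CGRect)rect {
CGContextRef con = UIGraphicsGetCurrentContext();
CGContextSetFillColorWithColor(con, [UIColor lightGrayColor].CGColor);
// one point divided by the scale factor = exactly one device pixel
CGFloat onePixel = 1.0 / self.contentScaleFactor;
CGContextFillRect(con, CGRectMake(0, 0, self.bounds.size.width, onePixel));
}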
Here's a screen shot showing the line used as a separator, drawn at the top of each cell:
The table view itself has no separators (as is shown by the white space below the three existing cells). I may not be drawing the line in the position, length, and color that you want, but that's your concern, not mine.
AutoLayout method:
I use a plain old UIView and set its height constraint to 1 in Interface Builder, attached to the bottom via constraints. Interface Builder doesn't allow you to set the height constraint to 0.5, but you can do it in code.
Make an outlet for the height constraint, then call this:
// Note: This will be 0.5 on retina screens
self.dividerViewHeightConstraint.constant = 1.0/[UIScreen mainScreen].scale;
Worked for me.
FWIW I don't think we need to support non-retina screens anymore. However, I am still using the main screen scale to future proof the app.
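If the divider is built entirely in code instead of Interface Builder, the same idea looks roughly like this (the view names, colour, and superview are placeholders):
UIView *divider = [[UIView alloc] init];
divider.translatesAutoresizingMaskIntoConstraints = NO;
divider.backgroundColor = [UIColor lightGrayColor];
[self.view addSubview:divider];
CGFloat hairline = 1.0 / [UIScreen mainScreen].scale; // 0.5 on retina screens
[self.view addConstraint:[NSLayoutConstraint constraintWithItem:divider attribute:NSLayoutAttributeLeading relatedBy:NSLayoutRelationEqual toItem:self.view attribute:NSLayoutAttributeLeading multiplier:1 constant:0]];
[self.view addConstraint:[NSLayoutConstraint constraintWithItem:divider attribute:NSLayoutAttributeTrailing relatedBy:NSLayoutRelationEqual toItem:self.view attribute:NSLayoutAttributeTrailing multiplier:1 constant:0]];
[self.view addConstraint:[NSLayoutConstraint constraintWithItem:divider attribute:NSLayoutAttributeBottom relatedBy:NSLayoutRelationEqual toItem:self.view attribute:NSLayoutAttributeBottom multiplier:1 constant:0]];
[divider addConstraint:[NSLayoutConstraint constraintWithItem:divider attribute:NSLayoutAttributeHeight relatedBy:NSLayoutRelationEqual toItem:nil attribute:NSLayoutAttributeNotAnAttribute multiplier:1 constant:hairline]];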
You have to take into account the scaling due to Retina: you are specifying points, not on-screen pixels. See Core Graphics Points vs. Pixels.
Addition to Rémy Virin's answer, using Swift 3.0
Creating LineSeparator class:
import UIKit
class LineSeparator: UIView {
override func awakeFromNib() {
super.awakeFromNib()
let sortaPixel: CGFloat = 1.0/UIScreen.main.scale
let topSeparatorView = UIView()
topSeparatorView.frame = CGRect(x: 0, y: 0, width: self.frame.size.width, height: sortaPixel)
topSeparatorView.isUserInteractionEnabled = false
topSeparatorView.backgroundColor = self.backgroundColor
self.addSubview(topSeparatorView)
self.backgroundColor = UIColor.clear
self.isUserInteractionEnabled = false
}
}

Use Bezier Path as Clipping Mask

I am wondering if it is possible to clip a view to a Bezier Path. What I mean is that I want to be able to see the view only in the region within the closed Bezier Path. The reason for this is that I have the outline of an irregular shape, and I want to fill in the shape gradually with a solid color from top to bottom. If I could make it so that a certain view is only visible within the path then I could simply create a UIView of the color I want and then change the y coordinate of its frame as I please, effectively filling in the shape. If anyone has any better ideas for how to implement this that would be greatly appreciated. For the record the filling of the shape will match the y value of the users finger, so it can't be a continuous animation. Thanks.
Update (a very long time later):
I tried your answer, Rob, and it works great except for one thing. My intention was to move the view being masked while the mask remains in the same place on screen. This is so that I can give the impression of the mask being "filled up" by the view. The problem is that with the code I have written based on your answer, when I move the view the mask moves with it. I understand that that is to be expected because all I did was add it as the mask of the view so it stands to reason that it will move if the thing it's tied to moves. I tried adding the mask as a sublayer of the superview so that it stays put, but that had very weird results. Here is my code:
self.test = [[UIView alloc] initWithFrame:CGRectMake(0, 0, 200, 200)];
self.test.backgroundColor = [UIColor greenColor];
[self.view addSubview:self.test];
UIBezierPath *myClippingPath = [UIBezierPath bezierPath];
[myClippingPath moveToPoint:CGPointMake(100, 100)];
[myClippingPath addCurveToPoint:CGPointMake(200, 200) controlPoint1:CGPointMake(self.screenWidth, 0) controlPoint2:CGPointMake(self.screenWidth, 50)];
[myClippingPath closePath];
CAShapeLayer *mask = [CAShapeLayer layer];
mask.path = myClippingPath.CGPath;
self.test.layer.mask = mask;
CGRect firstFrame = self.test.frame;
firstFrame.origin.x += 100;
[UIView animateWithDuration:3 animations:^{
self.test.frame = firstFrame;
}];
Thanks for the help already.
You can do this easily by setting your view's layer mask to a CAShapeLayer.
UIBezierPath *myClippingPath = ...
CAShapeLayer *mask = [CAShapeLayer layer];
mask.path = myClippingPath.CGPath;
myView.layer.mask = mask;
You will need to add the QuartzCore framework to your target if you haven't already.
In Swift ...
let yourCarefullyDrawnPath = UIBezierPath( .. blah blah
let maskForYourPath = CAShapeLayer()
maskForYourPath.path = yourCarefullyDrawnPath.CGPath
layer.mask = maskForYourPath
Just an example of Rob's solution: here a UIWebView sits as a subview of a UIView called smoothView. smoothView uses bezierPathWithRoundedRect to make a rounded gray background (notice on the right). That works fine.
But if smoothView only does normal subview clipping, you get this:
If you do what Rob says, you get the rounded corners in smoothView and all of its subviews...
Great stuff.
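For that kind of setup the mask itself might look roughly like this (smoothView is the container from the example above; the corner radius is an arbitrary value):
UIBezierPath *rounded = [UIBezierPath bezierPathWithRoundedRect:smoothView.bounds cornerRadius:8.0];
CAShapeLayer *maskLayer = [CAShapeLayer layer];
maskLayer.path = rounded.CGPath;
smoothView.layer.mask = maskLayer; // clips smoothView and every subview, including the UIWebView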

iOS 7 CAEmitterLayer spawning particles inappropriately

Strange issue I can't seem to resolve: on iOS 7 only, CAEmitterLayer will spawn particles on the screen incorrectly when the birth rate is initially set to a nonzero value. It's as if it calculates the state the layer would be in at some point in the future.
// Create black image particle
CGRect rect = CGRectMake(0, 0, 20, 20);
UIGraphicsBeginImageContext(rect.size);
CGContextFillRect(UIGraphicsGetCurrentContext(), rect);
UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// Create cell
CAEmitterCell *cell = [CAEmitterCell emitterCell];
cell.contents = (__bridge id)img.CGImage;
cell.birthRate = 100.0;
cell.lifetime = 10.0;
cell.velocity = 100.0;
// Create emitter with particles emitting from a line on the
// bottom of the screen
CAEmitterLayer *emitter = [CAEmitterLayer layer];
emitter.emitterShape = kCAEmitterLayerLine;
emitter.emitterSize = CGSizeMake(self.view.bounds.size.width,0);
emitter.emitterPosition = CGPointMake(self.view.bounds.size.width/2,
self.view.bounds.size.height);
emitter.emitterCells = @[cell];
[self.view.layer addSublayer:emitter];
I saw one post on the DevForums where a few people mentioned they had similar problems with iOS 7 and CAEmitterLayer, but no one had any ideas how to fix it. Now that iOS 7 is no longer in beta, I figured I should ask here and see if anyone can crack it. I really hope this isn't just a bug that we have to wait for 7.0.1 or 7.1 to fix. Any ideas would be much appreciated. Thanks!
YES!
I spent hours on this problem myself.
To get back the same kind of birthRate animation we had before, we use a couple of strategies.
Firstly, if you want the layer to look like it begins emitting when added to the view you need to remember that CAEmitterLayer is a subclass of CALayer which conforms to the CAMediaTiming protocol. We have to set the whole emitter layer to begin at the current moment:
emitter.beginTime = CACurrentMediaTime();
[self.view.layer addSublayer:emitter];
It's as if it calculates the state the layer would be in at some point in the future.
You were eerily close, but actually it's that the emitter was beginning in the past.
Secondly, to animate between a birthRate of 0 and n with the effect we had before, we can manipulate the lifetime property instead:
if (shouldBeEmitting){
emitter.lifetime = 1.0;
}
else{
emitter.lifetime = 0;
}
Note that I set the lifetime on the emitter layer itself. This is because, when emitting, the emitter cell's version of this property gets multiplied by the value on the emitter layer. Setting the lifetime of the emitter layer therefore sets a multiplier on the lifetimes of all your emitter cells, allowing you to turn them all on and off with ease.
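As a rough illustration of that multiplication (the values are arbitrary):
cell.lifetime = 10.0; // base lifetime defined on the cell
emitter.lifetime = 0.5; // layer-level multiplier: particles now live about 10 * 0.5 = 5 seconds
emitter.lifetime = 0; // a multiplier of zero switches the whole emitter off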
For me, the issue with my CAEmitterLayer when moving to iOS 7 was the following:
In iOS 7, setting the CAEmitterCell's duration resulted in the particle not showing at all!
The only thing I had to change was to remove the cell.duration = XXX, and then my particles began showing up again. I am going to eat an Apple over this unexpected, unexplained hassle.

What is the best technique to render circles in iOS?

When rendering opaque non-gradient circular shapes of uniform color in iOS, there seem to be three possible techniques:
Using images like circle-icon.png and circle-icon@2x.png. Then, one may implement the following code to have iOS automagically render the appropriate size:
UIImage *image = [UIImage imageNamed:@"circle-icon"];
self.closeIcon = [[UIImageView alloc] initWithImage:image];
self.closeIcon.frame = CGRectMake(300, 16, image.size.width, image.size.height);
Rendering rounded corners and using layers, like so:
self.circleView = [[UIView alloc] initWithFrame:CGRectMake(10,20,100,100)];
self.circleView.alpha = 0.5;
self.circleView.layer.cornerRadius = 50;
self.circleView.backgroundColor = [UIColor blueColor];
Using the native drawing libraries, with something like CGContextFillEllipseInRect
What are the exact performance and maintenance tradeoffs of these 3 approaches?
You're overlooking another very logical alternative: UIBezierPath and CAShapeLayer. Create a UIBezierPath that is a circle, create a CAShapeLayer that uses that UIBezierPath, and then add that layer to your view/layer hierarchy.
First, add the QuartzCore framework to your project.
Second, import the appropriate header:
#import <QuartzCore/QuartzCore.h>
And then you can add a CAShapeLayer to your view's layer:
UIBezierPath *path = [UIBezierPath bezierPath];
[path addArcWithCenter:CGPointMake(self.view.bounds.size.width / 2.0, self.view.bounds.size.height / 2.0) radius:self.view.bounds.size.width * 0.40 startAngle:0 endAngle:M_PI * 2.0 clockwise:YES];
CAShapeLayer *layer = [[CAShapeLayer alloc] init];
layer.path = [path CGPath];
layer.fillColor = [[UIColor blueColor] CGColor];
[self.view.layer addSublayer:layer];
I think that either a Core Graphics implementation or this CAShapeLayer implementation makes more sense than PNG files or UIView objects with rounded corners.
The sharpest and best-looking result is to have a well-drawn image that is exactly the size you want the circle to be. If you are resizing the circles, you will often not get the look you want, and there is some overhead associated with that.
I think your best performance, for cleanliness and speed, would come from using Core Graphics and adding an ellipse in a square:
CGMutablePathRef roundPath = CGPathCreateMutable();
CGRect rectThatIsASquare = CGRectMake(0, 0, 40, 40);
CGPathAddEllipseInRect(roundPath, NULL, rectThatIsASquare);
CGContextAddPath(context, roundPath); // assumes a current CGContextRef named context
CGContextSetRGBFillColor(context, 0.7, 0.6, 0.5, 1.0);
CGContextFillPath(context);
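To see that fragment in context, here is a sketch of it wrapped in a UIView's -drawRect: (the size and fill colour are arbitrary):
- (void)drawRect:(CGRect)rect {
CGContextRef context = UIGraphicsGetCurrentContext();
CGMutablePathRef roundPath = CGPathCreateMutable();
CGPathAddEllipseInRect(roundPath, NULL, CGRectMake(0, 0, 40, 40));
CGContextAddPath(context, roundPath);
CGContextSetRGBFillColor(context, 0.7, 0.6, 0.5, 1.0);
CGContextFillPath(context);
CGPathRelease(roundPath);
}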
Personally, I think you are over-thinking the problem. If you're only drawing a few circles, there is going to be very very little performance/maintenance impact whichever you decide on, and even if you optimize it to hell your users aren't getting any benefits from it. Do whatever you're doing now; focus on making the app's content great, and come back to performance later on if you really need to.
With that being said, I would recommend using drawing libraries.
Rounding corners is slow and rather non-intuitive.
Using image files will be a problem if you decide to do things like change colors. Also, sometimes images don't look that great after you scale them.
