iOS Sprite Kit - SKSpriteNode's .centerRect property not working

I was going through the SpriteKit documentation by Apple and came across a really useful feature that I could use when programming my UI. The problem is I can't get it to work.
Please see this page and scroll down to "Resizing a Sprite" - Apple Docs
I have literally copied the image dimensions and used the same code in case I was doing something wrong, but I always end up with a stretched-looking image rather than the "end caps" staying at their original scale.
I am referring to this code:
SKSpriteNode *button = [SKSpriteNode spriteNodeWithImageNamed:@"stretchable_button.png"];
button.centerRect = CGRectMake(12.0/28.0, 12.0/28.0, 4.0/28.0, 4.0/28.0);
What am I doing wrong? Is there a step I have missed?
EDIT:
Here is the code I have been using. I stripped out my button class and tried it with a plain SKSpriteNode, but the problem persists. I also changed the image just to make sure it wasn't that. The image I'm using is 32x32 at normal size.
SKSpriteNode *button = [SKSpriteNode spriteNodeWithImageNamed:@"Button.png"];
[self addChild:button];
button.position = CGPointMake(200, 200);
button.size = CGSizeMake(128, 64);
button.centerRect = CGRectMake(9/32, 9/32, 14/32, 14/32);

The .centerRect property works as documented if you adjust the sprite's .scale property.
Try:
SKTexture *texture = [SKTexture textureWithImageNamed:@"Button.png"];
SKSpriteNode *button = [[SKSpriteNode alloc] initWithTexture:texture];
// Use floating-point literals here; 9/32 in integer division truncates to 0.
button.centerRect = CGRectMake(9.0/32.0, 9.0/32.0, 14.0/32.0, 14.0/32.0);
[self addChild:button];
button.xScale = 128.0/texture.size.width;
button.yScale = 64.0/texture.size.height;

9/32 is integer division, so the result passed to CGRectMake is zero. Ditto for the other three parameters. If you use floating-point literals, as in the example you cite, you will get better results.
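For instance, a sketch of the question's rect with the truncation fixed (either floating-point literals, or dividing by a CGFloat such as the texture size, avoids integer division):
button.centerRect = CGRectMake(9.0/32.0, 9.0/32.0, 14.0/32.0, 14.0/32.0); // each value is now a nonzero fraction in 0.0-1.0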

Here's a recap of how exactly this works. By the way, my image is 48 pixels wide and 52 pixels high, but that doesn't matter at all; any image can be used:
SKSpriteNode *button = [SKSpriteNode spriteNodeWithImageNamed:@"Button.png"];
// centerRect is (x, y, width, height) in unit coordinates (0.0-1.0).
// The first two values set the inset (from the image's lower-left corner) that you DON'T want stretched; those caps are only moved.
// The last two values are the fraction of the image's width & height used as stretching material (the stretchable region in the middle of the image).
button.centerRect = CGRectMake(20/button.frame.size.width, 20/button.frame.size.height, 5/button.frame.size.width, 15/button.frame.size.height);
button.position = CGPointMake(self.frame.size.width/2, self.frame.size.height/2); // Positions the sprite in the middle of the screen.
button.xScale = 4; // Resizes width (this is all I needed).
//button.yScale = 2; // Resizes height (commented out because I didn't need it; uncomment if the button needs to be taller).
[self addChild:button];
Read the section called "Resizing a Sprite" in this document: https://developer.apple.com/library/ios/documentation/GraphicsAnimation/Conceptual/SpriteKit_PG/Sprites/Sprites.html#//apple_ref/doc/uid/TP40013043-CH9-SW10
'Figure 2-4 A stretchable button texture' demonstrates how the (x, y, width, height) values work.
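As a worked example, plugging the 48x52 image from this answer into that scheme (a 20 px inset from the left and bottom, with a 5x15 px stretchable region) gives:
button.centerRect = CGRectMake(20.0/48.0, 20.0/52.0, 5.0/48.0, 15.0/52.0);
// ~= (0.417, 0.385, 0.104, 0.288) - every value stays within 0.0-1.0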

Based on rwr's answer, here is a working init method for an SKSpriteNode; I use this in my own game. Basically you make insets of 10 px all around the source image, and then call it like this:
[[HudBoxScalable alloc] initWithTexture:[atlas textureNamed:@"hud_box_9grid.png"] inset:10 size:CGSizeMake(300, 100) delegate:(id<HudBoxDelegate>)clickedObject];
-(id)initWithTexture:(SKTexture *)texture inset:(float)inset size:(CGSize)size {
    if (self = [super initWithTexture:texture]) {
        self.centerRect = CGRectMake(inset/texture.size.width,
                                     inset/texture.size.height,
                                     (texture.size.width - inset*2)/texture.size.width,
                                     (texture.size.height - inset*2)/texture.size.height);
        self.xScale = size.width/texture.size.width;
        self.yScale = size.height/texture.size.height;
    }
    return self;
}
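Note that the method sets xScale and yScale rather than size: as the first answer above points out, centerRect behaves as documented when the sprite is resized through its scale properties, so the 10 px caps keep their native size while only the center region stretches to fill the requested 300x100.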

Related

How to make a rectangle at run time to fit a dragged label of words in it (Objective-C)

I am making a word-match app in iOS in which I have words split into two string halves, and I want to make a complete word from two labels of random word halves. So far all is fine.
Now I want to make a rectangle at run time as a target for the draggable label: when I tap a label to drag it, the rectangle should become highlighted with the same size as the word label.
How can I achieve this in Objective-C?
For clarity, see the image: I want to make the green rectangle at run time so the right-side labels can be dropped into it.
The left-side labels are not movable and should always stay in the rectangle, as you see in the given image. The code I have so far creates the rectangles as UIViews in viewDidLoad:
for (int i = 0; i < 5; i++) {
    customView = [[UIView alloc] initWithFrame:CGRectMake(10, y, 50, 30)];
    customView.backgroundColor = [UIColor greenColor];
    [gameLayer addSubview:customView];
    y = y + 40;
}
But this is not what I actually want. Any help appreciated...
Finally I got the rectangle to be the same size as the tapped label. I made an array in the .h file which holds references to my target rectangles: NSMutableArray *rectangleLabels.
After that I set the size of each rectangle to the size of my draggable label using a loop in the tap-gesture method, and it works fine:
-(void)gotTapped:(id)sender {
    // Grab the frame of the tapped label once, then apply its size to every target rectangle.
    UILabel *tapLbl = (UILabel *)[sender view];
    CGRect rect = tapLbl.frame;
    for (int i = 0; i < rectangleLabels.count; i++) {
        UILabel *lblChange = (UILabel *)[rectangleLabels objectAtIndex:i];
        lblChange.hidden = !lblChange.hidden;
        lblChange.frame = CGRectMake(lblChange.frame.origin.x, lblChange.frame.origin.y, rect.size.width, rect.size.height);
    }
}

Draw rectangles on imageView.image not scaling properly - iOS

I start out with an imageView.image (a photo).
I submit (POST) the imageView.image to a remote service (Microsoft face detection) for processing.
The remote service returns JSON of CGRects for each detected face in the image.
I feed the JSON into my UIView to draw the rectangles. I initialize my UIView with a frame of {0, 0, imageView.image.size.width, imageView.image.size.height} <-- my thinking being that this frame is equivalent to the size of the imageView.image.
I add my UIView as a subview of self.imageView OR self.view (tried both).
End Result:
Rectangles are drawn, but they do not appear correctly on the imageView.image. That is, the CGRects generated for each of the faces are supposed to be relative to the image's coordinate space as returned by the remote service, but they appear off once I add my custom view.
I believe I may have a scaling issue of some sort: if I divide each value in the CGRects by 2 (as a test) I can get an approximation, but it's still off. The Microsoft documentation states the detected faces are returned with rectangles indicating the location of faces in the image in pixels. Yet aren't they being treated as points when drawing my path?
Also, shouldn't I be initializing my view with a frame equivalent to the imageView.image's frame so that the view uses an identical coordinate space to the submitted image?
Here is a screenshot example of what it looks like if I try to scale down each CGRect by dividing it by 2.
I am new to iOS and broke away from the books to work on this as a self-exercise. I can provide more code as needed. Thanks in advance for your insight!
EDIT 1
I add a subview for each rectangle as I iterate over an array of face attributes (which include the rectangle for each face) via the following method, which gets called during -(void)viewDidAppear:(BOOL)animated:
- (void)buildFaceRects {
    // build an array of CGRect dicts from the JSON returned for the analyzed image
    NSMutableArray *array = [self analizeImage:self.imageView.image];
    // enumerate over the array using a block - each obj in the array represents one face
    [array enumerateObjectsUsingBlock:^(id obj, NSUInteger idx, BOOL *stop) {
        // build a dictionary of rects and attributes for the face
        NSDictionary *json = [NSDictionary dictionaryWithObjectsAndKeys:obj[@"attributes"], @"attributes", obj[@"faceId"], @"faceId", obj[@"faceRectangle"], @"faceRectangle", nil];
        // initialize the face model object with the dictionary
        ZGCFace *face = [[ZGCFace alloc] initWithJSON:json];
        NSLog(@"%@", face.faceId);
        NSLog(@"%d", face.age);
        NSLog(@"%@", face.gender);
        NSLog(@"%f", face.faceRect.origin.x);
        NSLog(@"%f", face.faceRect.origin.y);
        NSLog(@"%f", face.faceRect.size.height);
        NSLog(@"%f", face.faceRect.size.width);
        // define the frame for the subview containing the face rectangle
        CGRect imageRect = CGRectMake(0, 0, self.imageView.image.size.width, self.imageView.image.size.height);
        // initialize the rectangle subview with the face info
        ZGCFaceRectView *faceRect = [[ZGCFaceRectView alloc] initWithFace:face frame:imageRect];
        // add the view as a subview of the image view (?)
        [self.imageView addSubview:faceRect];
    }];
}
EDIT 2:
/* Image info */
UIImageView *iv = self.imageView;
UIImage *img = iv.image;
CGImageRef CGimg = img.CGImage;
// Bitmap dimensions [pixels]
NSUInteger imgWidth = CGImageGetWidth(CGimg);
NSUInteger imgHeight = CGImageGetHeight(CGimg);
NSLog(@"Image dimensions: %lux%lu", imgWidth, imgHeight);
// Image size in pixels (size * scale)
CGSize imgSizeInPixels = CGSizeMake(img.size.width * img.scale, img.size.height * img.scale);
NSLog(@"Image size in pixels: %fx%f", imgSizeInPixels.width, imgSizeInPixels.height);
// Image size in points
CGSize imgSizeInPoints = img.size;
NSLog(@"Image size in points: %fx%f", imgSizeInPoints.width, imgSizeInPoints.height);
// Calculate the image frame (within the image view) for a contentMode of UIViewContentModeScaleAspectFit
CGFloat imgScale = fminf(CGRectGetWidth(iv.bounds)/imgSizeInPoints.width, CGRectGetHeight(iv.bounds)/imgSizeInPoints.height);
CGSize scaledImgSize = CGSizeMake(imgSizeInPoints.width * imgScale, imgSizeInPoints.height * imgScale);
CGRect imgFrame = CGRectMake(roundf(0.5f*(CGRectGetWidth(iv.bounds)-scaledImgSize.width)), roundf(0.5f*(CGRectGetHeight(iv.bounds)-scaledImgSize.height)), roundf(scaledImgSize.width), roundf(scaledImgSize.height));
// initialize the rectangle subview with the face info
ZGCFaceRectView *faceRect = [[ZGCFaceRectView alloc] initWithFace:face frame:imgFrame];
// add the view as a subview of the image view
[iv addSubview:faceRect];
}]; // closes the enumerateObjectsUsingBlock: from EDIT 1
We've got several problems:
Microsoft returns pixels and iOS uses points. The difference between them is just a matter of screen density. For instance, on an iPhone 5, 1 pt = 2 px, while on a 3GS, 1 px = 1 pt. Look at the iOS documentation for more information.
The frame of your UIImageView is not the image frame. When Microsoft returns the frame of a face, it returns it in the coordinate system of the image, not of the UIImageView. So we've got a coordinate-system problem.
Be careful about timing if you use Auto Layout. The frame of a view set by constraints is not the same when viewDidLoad: is called as when you actually see it on screen.
Solution:
I'm just a read-only Objective-C developer, so I can't give you code. I could in Swift, but it's not necessary.
Convert pixels into points. That's easy: use the ratio.
Define the frame of a face using what you did. Then you have to move the coordinates you determined from the image's coordinate system to the UIImageView's coordinate system. That's less easy: it depends on the contentMode of your UIImageView, but you can quickly find information about it on the Internet.
If you use Auto Layout, add the frame of the face once Auto Layout has finished calculating the layout, i.e. when viewDidLayoutSubviews is called.
Or, better, use constraints to set your frame in the UIImageView.
I hope this is clear enough.
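Since the answer above deliberately stops short of code, here is a minimal Objective-C sketch of steps 1 and 2, assuming a UIImageView whose contentMode is UIViewContentModeScaleAspectFit (the helper's name and placement are hypothetical):
// Hypothetical helper: maps a face rect given in image pixels into the
// coordinate space of an aspect-fit UIImageView.
- (CGRect)viewRectForFacePixelRect:(CGRect)pixelRect inImageView:(UIImageView *)iv {
    UIImage *img = iv.image;
    // Step 1: pixels -> points (img.size is in points; img.scale is pixels per point).
    CGRect r = CGRectMake(pixelRect.origin.x / img.scale,
                          pixelRect.origin.y / img.scale,
                          pixelRect.size.width / img.scale,
                          pixelRect.size.height / img.scale);
    // Step 2: image coordinates -> view coordinates. With aspect-fit, the image
    // is scaled by the smaller of the two width/height ratios and centered.
    CGFloat s = MIN(iv.bounds.size.width / img.size.width,
                    iv.bounds.size.height / img.size.height);
    CGFloat offsetX = 0.5f * (iv.bounds.size.width - img.size.width * s);
    CGFloat offsetY = 0.5f * (iv.bounds.size.height - img.size.height * s);
    return CGRectMake(r.origin.x * s + offsetX,
                      r.origin.y * s + offsetY,
                      r.size.width * s,
                      r.size.height * s);
}
The returned rect can then be used as the frame of the rectangle subview added to the UIImageView.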
Some links:
iOS Drawing Concepts
Displayed Image Frame In UIImageView

iOS: understanding frame and views

I am building an application for iOS programmatically, based on a ViewController. I am doing it programmatically because I want to understand the underlying concepts.
I have created a subclass of UIImageView and initialized it with an image. In the initialization method I also added a second UIImageView, as I would like to handle the two differently while keeping them part of the same object. Ultimately I would like to be able to scale the object (and hence the two images) according to the device's screen resolution (e.g. if the resolution is low then I will scale the two images by 50%). I want to do this because I would like to implement a zoom-in and zoom-out feature as well as support multiple resolutions and screen layouts.
Additional information:
The two images have different sizes: 500x500 pixels and 350x350 pixels.
My questions are:
How do I position the second image exactly in the center of the first? (I used the center property of the main UIImageView but I think I got it wrong. I thought the center was the exact center of the square, but either I am using it incorrectly or there is something I am missing.)
Are there any negative side effects of this approach (a UIView subclass containing an additional UIView)? E.g. is it going to create confusion when applying transformation algorithms? Does it reduce the rendering speed? Or, more simply, is it a bad design pattern?
I find it difficult to understand the positioning of the second image. See the code snippet below; this is what I use:
CGRect innerButtonFrame = CGRectMake(self.center.x/2, self.center.y/2, innerButtonSelectedImage.size.width, innerButtonSelectedImage.size.height);
Taken from:
-(id)initWithImage:(UIImage *)image
{
    if (self = [super initWithImage:image]) {
        //
        self.userInteractionEnabled = YES;
        // Initialize gesture recognizers
        UITapGestureRecognizer *tapInView = [[UITapGestureRecognizer alloc] initWithTarget:self action:@selector(tapInImageView:)];
        [self addGestureRecognizer:tapInView];
        UILongPressGestureRecognizer *longPress = [[UILongPressGestureRecognizer alloc] initWithTarget:self action:@selector(longPressInView:)];
        [self addGestureRecognizer:longPress];
        // Initialize labels
        ..
        // Inner circle image
        innerButtonView = [[UIImageView alloc] init];
        innerButtonSelectedImage = [UIImage imageNamed:@"inner circle.png"];
        CGRect innerButtonFrame = CGRectMake(self.center.x/2, self.center.y/2, innerButtonSelectedImage.size.width, innerButtonSelectedImage.size.height);
        innerButtonView.frame = innerButtonFrame;
        [innerButtonView setImage:innerButtonSelectedImage];
        // Add additional UI components to view
        [self addSubview:innerButtonView];
        ..
        [self addSubview:descriptionLabel];
    }
    return self;
}
EDIT: This is how it looks if I change the positioning code to the following:
CGRect innerButtonFrame = CGRectMake(0, 0, innerButtonSelectedImage.size.width, innerButtonSelectedImage.size.height);
innerButtonView.frame = innerButtonFrame;
I also don't understand why the image is bigger than the screen, as the blue one should be 500x500 pixels and the iPhone 6 screen is 1334x750 pixels.
How about setting the size via bounds and then centering on the parent's own bounds? (Note that a plain CGRect has no center property, and self.center is expressed in the superview's coordinate system.)
innerButtonView.bounds = CGRectMake(0, 0, innerButtonSelectedImage.size.width, innerButtonSelectedImage.size.height);
innerButtonView.center = CGPointMake(CGRectGetMidX(self.bounds), CGRectGetMidY(self.bounds));
If you need a 500x500-pixel circle, use half those values in points (on a 2x Retina screen, 1 point = 2 pixels): replace 500x500 with 250x250, and for the small circle replace 350x350 with 175x175. That should solve your problem.
I hope your problem gets solved. Enjoy!
Thanks.

cocos2d: why isn't the label appearing?

Hello, I am making a side-scrolling cocos2d game and I want a label to show how far the user has flown in the game. For some reason, with the code I wrote, the label is not appearing. Here is my GameEngine class, which calls the class method that is supposed to make the label appear:
//Set the meterDistance
meterDistance = [MeterDistance createTheMeterDistance];
[self addChild:meterDistance z:10];
Here is the code in the MeterDistance class:
meters = 1;
meterLabel = [CCLabelBMFont labelWithString:@"0" fntFile:@"green_arcade-ipad.fnt"];
meterLabel.position = ccp(200, screenHeight - 100);
[self addChild:meterLabel z:10];
meterLabel.anchorPoint = ccp(1.0, 0.5);
[self schedule:@selector(updateLabel:) interval:1.0f/20.0f];
Here is the updateLabel method:
-(void)updateLabel:(ccTime)delta {
    meters++;
    NSString *scoreString = [NSString stringWithFormat:@"%d", meters];
    [meterLabel setString:scoreString];
}
It's been a while since I last dealt with cocos2d code...
What you've written looks OK.
Take it one step at a time and see where it goes wrong.
Position your label in the middle of the screen (maybe screenHeight is off, or the anchorPoint moves the label outside the screen).
Another possible cause is the font file name not being exactly @"green_arcade-ipad.fnt".
Maybe you missed a capital letter?
Otherwise, maybe some other element of your layer is obstructing the label.
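Along those lines, a quick sanity check (a sketch, assuming cocos2d 2.x and that meterDistance is attached to a visible layer) is to pin the label to the center of the window, which rules out screenHeight and anchorPoint problems at once:
CGSize winSize = [[CCDirector sharedDirector] winSize];
meterLabel.anchorPoint = ccp(0.5f, 0.5f); // restore the default centered anchor
meterLabel.position = ccp(winSize.width / 2, winSize.height / 2); // dead center of the screen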

How to add a stretchable UIImage to CALayer.contents?

I have a CALayer to which I want to add a stretchable image. If I just do:
_layer.contents = (id)[[UIImage imageNamed:@"grayTrim.png"] resizableImageWithCapInsets:UIEdgeInsetsMake(0.0, 15.0, 0.0, 15.0)].CGImage;
it won't work, since the layer's default contentsGravity is kCAGravityResize.
I've read that this can be accomplished using contentsCenter, but I cannot seem to figure out how exactly I would use that to achieve a stretched image in my CALayer.
Any ideas are welcome!
Horatiu
Here is the answer. Let's say you have a stretchable image which stretches only in width and has a fixed height (for simplicity's sake).
The image is 31 px wide: 15 px of fixed size on each side (which doesn't stretch), and 1 px in the middle that will be stretched.
Assuming your layer is a CALayer subclass, your init method should look like this:
- (id)init
{
    self = [super init];
    if (self) {
        UIImage *stretchableImage = [UIImage imageNamed:@"stretchableImage.png"];
        self.contents = (id)stretchableImage.CGImage;
        self.contentsScale = [UIScreen mainScreen].scale; // needed for the Retina display, otherwise our image will not be scaled properly
        self.contentsCenter = CGRectMake(15.0/stretchableImage.size.width,
                                         0.0/stretchableImage.size.height,
                                         1.0/stretchableImage.size.width,
                                         0.0/stretchableImage.size.height);
    }
    return self;
}
As per the documentation, the contentsCenter rectangle must have values between 0 and 1:
"Defaults to the unit rectangle (0.0, 0.0, 1.0, 1.0), resulting in the entire image being scaled. If the rectangle extends outside the unit rectangle the result is undefined."
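For example (hypothetical usage, with MyStretchLayer standing in for the CALayer subclass above and self.view for whatever view hosts it): once contentsCenter is set, resizing the layer stretches only the 1 px middle column while the 15 px end caps keep their size:
MyStretchLayer *layer = [[MyStretchLayer alloc] init];
layer.frame = CGRectMake(0.0, 0.0, 200.0, 30.0); // the 1 px center fills the extra width; the 15 px caps stay intact
[self.view.layer addSublayer:layer];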
This is it. Hopefully someone else will find this useful and it will save some development time.
