Issues with sprite location in cocos2d - iOS

I have a sprite that I'd like to use as a background image (using cocos2d).
CCSprite *abc = [CCSprite spriteWithImageNamed:@"background.png"];
abc.position = ccp(self.contentSize.width/2, self.contentSize.height/2);
The first image is the original; the second is a screenshot from the simulator. The resolution of the image is 640x1136. What should I do to make it fill the whole screen properly? How should I position it?

The code you are using is correct.
The result you are getting is not what you expected because you probably loaded a Retina image on a non-Retina screen.
Check out Cocos2d Naming conventions here for more info.
For the background image you chose of the sun, I am assuming that you want it to be always in the top right corner. (That makes more sense to me than centering it).
Now an elegant solution to accomplish this would be to take the image for the 4-inch screen that you have already created and define a rule so that its top right corner is always at the top right corner of the screen. On 3.5-inch screens the image would simply be clipped.
Now, first you want to define an anchor point as
_background.anchorPoint = ccp(1.0f, 1.0f);
This will tell Cocos to position your background relative to the top right corner.
Now you can go on and position it so that it is always at the top corner of the screen.
_background.position = ccp(self.contentSize.width, self.contentSize.height);
This would be the standard and best way to do it. Results and benefits:
Works on 3.5 and 4 inch screens without needing specific image sizes
Simple, with no unnecessary code and in particular no UI_USER_INTERFACE_IDIOM() testing
The way most everybody does it
Another way of positioning in the top right corner
You can also check out the new positionType property of CCNode in the class reference. Setting it to CCPositionTypeNormalized lets you define a positioning rule similar to saying "position this at 100% of the width and 100% of the height of the parent container". It would look something like this:
_background.positionType = CCPositionTypeNormalized;
_background.position = ccp(1.0f, 1.0f);
and you get the same result if you prefer this syntax.

You can either scale the image to fit the device height or use a separate image for the 4-inch and 3.5-inch iPhones.
For Scaling
abc.scaleY = winSize.height / abc.contentSize.height; // winSize = [CCDirector sharedDirector].winSize
For Specific image
if (UI_USER_INTERFACE_IDIOM() == UIUserInterfaceIdiomPhone)
{
    if ([[UIScreen mainScreen] bounds].size.height == 568)
    {
        // iPhone 5: add the 4-inch image
    }
    else
    {
        // add the 3.5-inch screen image
    }
}
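For the scaling route, a minimal sketch (assuming cocos2d 2.x's winSize API, which the next answer also uses, and the abc sprite from the question):
// Sketch: stretch the existing background sprite so it exactly fills the screen.
// Scaling both axes independently may distort the aspect ratio.
CGSize winSize = [CCDirector sharedDirector].winSize;
abc.scaleX = winSize.width / abc.contentSize.width;
abc.scaleY = winSize.height / abc.contentSize.height;
abc.position = ccp(winSize.width / 2, winSize.height / 2);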

I am unsure what "self" is referencing, but I assume it is a layer. It appears you are trying to center the image on the screen, or you need to offset it based on the size of the screen. If so you should get the screen size as it will allow you to place the image properly no matter what the resolution of the screen is. I am using Cocos2d 2.1.
CGSize winSize = [CCDirector sharedDirector].winSize;
This is the winSize in points. You can also get the winSize in pixels:
CGSize winSizeInPixels = [CCDirector sharedDirector].winSizeInPixels;
Use whatever works best for you. You can then center the image with the following, for example:
abc.position = ccp(winSize.width / 2, winSize.height / 2);
Regardless of whether or not you are trying to center the image, knowing the screen size will allow you to place the image based on that screen size so that it appears properly.
Obviously the size of the image and whether or not it fills the screen must be addressed.
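One way to address that last point is a quick check against winSize and a uniform scale-up if the image doesn't cover the window (a sketch, cropping rather than distorting):
// Sketch: check whether the image already covers the window (in points).
BOOL coversScreen = abc.contentSize.width >= winSize.width &&
                    abc.contentSize.height >= winSize.height;
if (!coversScreen) {
    // Scale up uniformly so the shorter dimension still covers the screen;
    // this crops the longer dimension instead of distorting the image.
    abc.scale = MAX(winSize.width / abc.contentSize.width,
                    winSize.height / abc.contentSize.height);
}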
I hope this helps.

Related

Access iPhone Absolute Pixel Position

In the screen space of an iPhone/iPad, Apple uses points, which are typically half the actual resolution of the screen. My question is: is it possible to address the actual pixels themselves? For example, if I take a UIView and give it a frame of 0, 0, 0.5, 0.5 with a background color of red, I can't see it on the screen.
Just wondering if this is possible.
Thanks!
Sure it's possible.
The code you already have should be working (a UIView with a size of (0.5, 0.5)). I just ran it and captured this result from the simulator:
Yea. That's difficult to see. Let's zoom that in.
So yes, you can draw on-screen in smaller values than a single point.
However, to draw a single pixel, you'll want to be using a point value that is 1/scaleOfScreen (as not all devices have 2x displays). So, for example, you'll want your code to look something like this:
CGFloat scale = [UIScreen mainScreen].scale;
CGFloat pixelPointWidth = 1/scale;
UIView* v = [[UIView alloc] initWithFrame:CGRectMake(20, 20, pixelPointWidth, pixelPointWidth)];
v.backgroundColor = [UIColor redColor];
[self.view addSubview:v];
This will now create a UIView that occupies a single pixel on-screen.
Although, if you want to be doing a lot of pixel-perfect drawing, you should probably be using something lower level than a single UIView (have a look at Core Graphics).
However.
You may encounter some issues with this method when drawing on an iPhone 6 Plus. Because its screen's scale differs from its nativeScale, it will first render your content in the logical coordinate space of 3x and then downsample to the actual screen resolution (around 2.6x).
This will most probably result in some pixel bleeding, where your 'pixel' view can be rendered in neighboring pixels (although usually at a reduced brightness).
Unfortunately, there is no easy way around this problem without using an even lower level API such as OpenGL or Metal, where you can circumvent this automatic scaling and then downsampling, and draw directly into the screen's actual coordinate space.
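If you want to detect this case at runtime, you can compare UIScreen's scale and nativeScale (both are real UIKit properties, available since iOS 8; the check itself is just a sketch):
// Sketch: compare the logical scale with the native scale to see whether
// downsampling will occur on this device.
UIScreen *screen = [UIScreen mainScreen];
if (screen.scale != screen.nativeScale) {
    // e.g. iPhone 6 Plus: scale == 3.0 while nativeScale is roughly 2.6,
    // so content rendered at 3x is downsampled to the physical resolution.
    NSLog(@"Rendered at %.2fx, shown at %.2fx", screen.scale, screen.nativeScale);
}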
Have a look here for a nice little overview on how different devices render content onto their screens.
Have a look here for more info on how pixel bleeding can occur on the iPhone 6 Plus.
You could convert points to pixels by multiplying by a device-dependent coefficient (the screen scale), but you don't want to do this.
Also, in your example you did not normalize the coordinates, so you are basically trying to display a red box at the first pixel (top left) with a size of half a point, which is why you can't see it.
EDIT
To draw a red box you can use this sample code:
// Draw a red box
[[UIColor redColor] set];
UIRectFill(CGRectMake(20, 20, 100, 100)); // position (x : 20, y: 20) (still top left) and size (100*100 points)
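Note that UIRectFill needs a current graphics context, so in practice that snippet would live inside a UIView subclass's drawRect: (a sketch; the class name PixelBoxView is made up):
// Hypothetical UIView subclass just to host the drawing code above.
@interface PixelBoxView : UIView
@end

@implementation PixelBoxView
- (void)drawRect:(CGRect)rect {
    // Draw a red box at (20, 20) with a size of 100 x 100 points.
    [[UIColor redColor] set];
    UIRectFill(CGRectMake(20, 20, 100, 100));
}
@end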

Position an UIImageView over another with scaling

Is there a method on UIImageView that tells me the position of its image within its bounds? Say I have an image of a car, like this:
This image is 600x243, and, where the rear wheel should be, there's a hole which is 118,144,74,74 (x,y,w,h).
I want to let the user see different rear wheel options, and I have a few wheel images to choose from (all square, so they are easily scaled to match the hole in the car).
I wanted to place the car image in a UIImageView whose size is arbitrary based on layout, and I wanted to see the whole car at the natural aspect ratio. So I set the image view's content mode to UIViewContentModeScaleAspectFit, and that worked great.
For example, here's the car in an imageView that is 267x200:
I think doing this scaled the image from w=600 to w=267, i.e. by a factor of 267/600=0.445, and (I think) that means the height changed from 243 to 243*0.445≈108. And I think it's true that the hole was scaled by that factor, too.
But I want to add a UIImageView subview to show the wheel, this is where I get confused. I know the image size, I know the imageView size, and I know the hole frame in terms of the original image size. How do I get the hole frame after the image is scaled?
I've tried something like this:
determine the position of the car image in its UIImageView. That's something like:
CGFloat ratio = carImageView.frame.size.width / carImage.size.width; // 267/600 ≈ 0.445
CGFloat yPos = (carImageView.frame.size.height - carImage.size.height * ratio) / 2; // there should be a method for this?
determine the scaled frame of the hole:
CGFloat holeX = ratio*118;
CGFloat holeY = yPos + ratio*144;
CGFloat holeEdge = ratio*74;
CGRect holeRect = CGRectMake(holeX,holeY,holeEdge,holeEdge);
But there must be a better way. These calculations (if they are right) are only right for a car image view that is taller than the car. The code needs to be different if the image view is wider.
I think I can work out the logic for a wider view, but it still might be wrong. For example, that yPos calculation. Do the docs say that, for content mode = AspectFit, the image is centered inside the larger dimension? I don't see that any place.
Please tell me there's a better way, or, if not, is it proven that my idea here will work for arbitrary size images, image views, holes?
Thanks.
The easiest solution (by far) is to simply use the same sizes for both the car image and the wheel option images.
Just give the wheel options a transparent padding (easy to do in nearly every graphics editing program), and overlay them over the car with the same frame.
You may increase your asset sizes by a minuscule amount, but it'll save you one hell of a headache trying to work out positions and sizes, especially as you're scaling the car image.
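That said, if you do want to compute the scaled hole frame for arbitrary image and view sizes, AVFoundation's AVMakeRectWithAspectRatioInsideRect gives you the aspect-fit rect directly and avoids the taller-vs-wider case analysis. A sketch, assuming carImage and carImageView from the question and a hypothetical wheelImageView subview:
#import <AVFoundation/AVFoundation.h>

// Rect the image actually occupies inside the aspect-fit image view.
CGRect fitted = AVMakeRectWithAspectRatioInsideRect(carImage.size,
                                                    carImageView.bounds);
CGFloat ratio = fitted.size.width / carImage.size.width; // same factor as the height

// Scale and offset the hole (118, 144, 74, 74 in image coordinates) into view coordinates.
CGRect holeRect = CGRectMake(fitted.origin.x + 118 * ratio,
                             fitted.origin.y + 144 * ratio,
                             74 * ratio,
                             74 * ratio);
wheelImageView.frame = holeRect; // wheelImageView is assumed, not from the question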

Displaying Images in iOS application

I need to display a few images in my iOS application. What should I do so that the images display appropriately across all devices?
Do I have to set the size of the image manually based on the device? Please clarify.
You wouldn't use SpriteKit just to display images. You would load the images as UIImage and then create a UIImageView that you can place on the screen wherever and however you want and then you just assign the UIImageView your UIImage. UIImageView has a lot of properties you can set how images are displayed (e.g. if they are scaled and how they are scaled or if they are not scaled, how they shall be aligned within the viewable area, and so on). You can draw a UIImageView on top of a SpriteKit scene, that is no problem on iOS (on iOS everything is drawn by OpenGL ES or Metal anyway).
Of course you can also embed any image as a sprite if you like:
UIImage *img = ...;
SKTexture *tex = [SKTexture textureWithImage:img];
SKSpriteNode *sprite = [[SKSpriteNode alloc] initWithTexture:tex];
// If you don't use ARC, I'd add the following below:
// [sprite autorelease];
Now you can integrate it into the scene in whatever way you like and align it perfectly with all your other sprites. Yet if you just want to paint an image over the scene:
SKScene * scene = ...;
SKView * sceneView = scene.view;
UIImageView * imgView = [[UIImageView alloc] init];
imgView.image = img;
// Whatever content mode you prefer
imgView.contentMode = UIViewContentModeScaleAspectFit;
// Where shall it be placed and how big shall it be.
imgView.frame = CGRectMake(posX, posY, width, height);
// If you'd use this, it will cover the whole scene:
// imgView.frame = sceneView.frame;
// Add it on top of your scene
[sceneView.superview addSubview:imgView];
// If you don't use ARC, don't forget to release it:
// [imgView release];
If you load a UIImage from your application bundle with [UIImage imageNamed:@"blah"] and the image exists in different resolutions for Retina devices (blah.png, blah@2x.png, blah@3x.png), the system will automatically load the image it considers most suitable for the screen of the current device. This is nothing you have to deal with.
If you need to convert between scene coordinates and view coordinates, SKScene offers -convertPointFromView: and -convertPointToView: for this case. If the scene fills the whole screen, then these actually convert between scene and screen coordinates.
Even if devices have different resolutions, your scene can always have the same "virtual size". So you can always say that the scene is 400x300, no matter what the real screen resolution is. In that case placing a sprite of virtual dimension 200x150 at the virtual coordinates (100,75) will always center it on the screen, no matter what device or how big the screen really is (well, assuming that the SKView really covers exactly the whole screen, of course). The size of an SKScene is just the coordinate system you want to use for laying out your game; it can be whatever you want it to be, bigger or smaller than the real screen.
The scene is always drawn into an SKView. The size of the SKView is the real size of your scene in screen coordinates. So if your SKScene is 480x320 and the size of the SKView is 1440x960, then moving a sprite one pixel in your scene will in fact move it 3 pixels on the screen. Yet if your SKScene is 1136x640, but your SKView is only 568x320, then moving a sprite two pixels in your scene will only move it one pixel on screen. Your scene is always scaled up or down as required.
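To make that concrete, here is a minimal sketch of a fixed "virtual size" scene presented in an SKView of whatever size the device provides (the 480x320 size and the scale mode are just example values):
// The scene always uses a 480 x 320 coordinate system, regardless of device.
SKScene *scene = [SKScene sceneWithSize:CGSizeMake(480, 320)];
// Let SpriteKit scale the scene to fill the view; other real options are
// SKSceneScaleModeFill, SKSceneScaleModeAspectFit and SKSceneScaleModeResizeFill.
scene.scaleMode = SKSceneScaleModeAspectFill;
// skView is assumed to be the SKView that covers the screen.
[skView presentScene:scene];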
Personally I'd either stick with the same size across all devices, or maybe just define two or three device classes, but not adapt the game to every single device and every existing screen resolution.
There are a lot of things to consider when dealing with images in SpriteKit. The short answer is that you should be creating images in 1x, 2x, and 3x (background.png, background@2x.png and background@3x.png). You will get the best image quality if you do that.
As far as resizing images based on different devices that usually is done at the scene level. There are a lot of good SO questions out there that cover a lot of the questions you will have.
For example:
Dealing with different iOS device resolutions in SpriteKit
I recommend searching for "creating a universal app with SpriteKit".
Hopefully that answers your immediate question and helps get you started with the other questions you will have.

Does sprite-kit make images larger?

I'm working on a sprite kit game and I can't figure out why it won't scale my image the way I want it to. The game is in the landscape orientation. This is the code I'm having a problem with:
SKSpriteNode *Pathway = [SKSpriteNode spriteNodeWithTexture:[SKTexture textureWithImageNamed:@"path.png"] size:CGSizeMake(568, 220)];
Pathway.zPosition = -1.0;
Pathway.position = CGPointMake(CGRectGetMidX(self.frame), CGRectGetMidY(self.frame));
Pathway.name = @"Pathway";
[self addChild:Pathway];
Basically I want the texture to be like this on the screen:
But for some reason, even when I change the size of the texture (568, 220), it scales the image and doesn't fit on the screen.
I tried using [Pathway setScale:0.7]; which was close to the size I was looking for, but I need it to be exactly 568 x 220. How come it keeps distorting my image even when I'm setting its size to 568 x 220?
If more info is needed please let me know, I think this should suffice.
I'm pretty sure you are testing this on device with Retina display.
Rename path.png to path@2x.png and use [SKTexture textureWithImageNamed:@"path"] instead of [SKTexture textureWithImageNamed:@"path.png"]. This should solve your problem.
Read Apple's tutorial Supporting High-Resolution Screens In Views for more info.
Using setScale scales the content proportionally, so the sprite keeps the same aspect ratio it had before scaling.
Try setting the xScale and yScale properties separately to scale each axis independently of the aspect ratio.
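For example, to end up with an on-screen size of exactly 568 x 220 points you could derive the two scale factors from the texture's size (a sketch based on the Pathway sprite above, assuming it was created at the texture's natural size rather than with an explicit size: argument):
// Sketch: scale each axis independently so the sprite measures 568 x 220 points.
CGSize target = CGSizeMake(568, 220);
Pathway.xScale = target.width / Pathway.texture.size.width;
Pathway.yScale = target.height / Pathway.texture.size.height;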

Adaptive Positioning Based on iOS Device

I made an iPhone game a few months ago, and am now trying to port it as a universal app to both the iPad and iPhone 5 with Cocos2D. I was wondering if there was a simple-ish way to determine where an object should be placed based on the device running the game.
I could use if statements to figure out which device the game is running on, so when I get the correct sized images for the device I could have separate positions for each object, but it seems like there would be a maths formula which would allow me to use a lot less code. Obviously something like a full screen background is very simple, because it just needs to be centred with:
[background setPosition:CGPointMake(screenSize.width/2,screenSize.height/2)];
I haven't a clue how to adapt a button that would be X = 144 & Y = 330 on the old 3.5-inch, 640 by 960 resolution iPhone to an iPad or iPhone 5 resolution.
I'm willing to use a more recent version of iOS if it will make my life easier, but because I'm not using any of Apple's objects I don't know if that is possible.
Maybe this isn't even possible because the button will be different sizes for the iPhone and iPad version, but I thought I would ask.
Yeah, I usually face the same problem. If it is just static object placement, I would use relative coordinates instead of absolute ones for every object, and then use the screen size to place them correctly. So you might want to use a function like:
- (CGPoint)relativeToScreen:(CGPoint)p {
    // screenSize = [CCDirector sharedDirector].winSize
    return ccp(screenSize.width * p.x, screenSize.height * p.y);
}
where 0.0 <= p.x <= 1.0, and the same for p.y.
And don't forget about your anchorPoint, because the node's position is based on it as well.
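Usage of that function would then look something like this (the fractions come from the X = 144, Y = 330 example on a 640 by 960 screen: 144/640 = 0.225 and 330/960 ≈ 0.344):
// Sketch: place the button at the same relative spot on any screen size.
[button setPosition:[self relativeToScreen:ccp(0.225f, 0.344f)]];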
And I hope you have discovered that cocos2d already chooses the right image for you; you just have to use the right suffixes for your images: -hd, -ipad, -ipadhd.
For iPhone 5 resolution, I position HUD buttons relative to the screen dimensions, very similar to what you are doing for the background. So, for example, a pause button I want in the top left I would position like this:
[pauseButton setPosition:CGPointMake(0.0f + 30.0f, screenSize.height - 50.0f)];
For iPad it gets really tricky. The lazy way, which I have implemented, is to play around with the content scale factor, zoom everything up, and have "dead" borders to compensate for the iPad's screen ratio. Not the best, but at least you can re-use all the same assets for the iPad.
