How are co-ordinates calculated in Swift - iOS

I've been trying to get the following code to display a rectangle at the bottom of an iPhone simulator in landscape mode:
let size = CGSize(width: 100, height: 10)
let myRect = SKShapeNode(rectOf: size)
myRect.position = CGPoint(x:self.frame.midX, y:self.frame.maxY - 50)
myRect.strokeColor = SKColor(red: 0.0/255.0, green: 0.0/255.0, blue: 200.0/255.0, alpha: 1.0)
myRect.lineWidth = 4
myRect.physicsBody = SKPhysicsBody(rectangleOf: size)
myRect.physicsBody?.affectedByGravity = false
myRect.physicsBody?.isDynamic = true
The result of this is that the rectangle is not drawn inside the visible screen; I then inserted this debug statement:
print(self.frame.minY, self.frame.maxY, self.frame.minX, self.frame.maxX, self.frame.width, self.frame.height)
Which outputs this:
-667.0 667.0 -375.0 375.0 750.0 1334.0
Further, when I changed the co-ordinates to y: 100, I noticed the shape move up, not down. So, my question is: when the phone is in landscape mode, do I need to manually translate the X and Y co-ordinates, and do I need to know which way it has been rotated so that I can tell up from down?

Try to understand the difference between Scene and Screen. These are 2 independent constructs in iPhone development.
When creating a SpriteKit game in Xcode 11 (others may vary), the default Scene size is 750x1334. This is the "Standard" size now.
What this means is that on a standard-size iPhone device you will get pixel-perfect graphics; every other device will either scale or resize the scene to fit the screen.
There are 4 modes to accomplish this, set via scaleMode: .aspectFill, .aspectFit, .fill, and .resize. I am not going to go into all of these, you can look them up, but the default in the template is .aspectFill.
What .aspectFill means is: preserve the aspect ratio and scale the scene until all edges are covered, leaving no black bars. This means cropping will happen if your scene does not match the aspect ratio of your screen.
Now when dealing with orientation, you need to find some way to handle this, because the scene will stay the same size and be forced to scale or resize.
So what is happening is your scene has an aspect ratio of 9:16, but your screen has an aspect ratio of 16:9. The scene therefore gets scaled until its 9:16 box measures 16:28.44~ against the 16:9 screen, so that all screen edges are covered. This means about 68% (19.44~/28.44~) of your scene's height is getting cropped to fit the screen.
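For the skeptical, here is the same arithmetic as a tiny sketch you could paste into a playground (the sizes are the ones discussed above):
import CoreGraphics

let sceneSize = CGSize(width: 750, height: 1334)   // default 9:16 scene
let screenSize = CGSize(width: 16, height: 9)      // any 16:9 landscape screen, any units
// .aspectFill scales by whichever factor is larger; here it is the width factor.
let scale = screenSize.width / sceneSize.width
let scaledHeight = sceneSize.height * scale                              // ≈ 28.45
let croppedFraction = (scaledHeight - screenSize.height) / scaledHeight  // ≈ 0.68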
To remedy this, you have a few choices.
1) You can support only 1 orientation, and set the scene size to properly fit it.
2) You can design your scene in a square, and handle cropping on both sides.
3) You can design multiple scenes to handle different aspect ratios
4) You can manually keep changing the size and work with percentage-based positioning.
Of these choices, I recommend either 1 or 2.
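For example, here is a minimal sketch of option 1 for a landscape-only game, assuming the template's usual GameScene and GameViewController names:
import SpriteKit
import UIKit

class GameViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        guard let skView = self.view as? SKView else { return }
        // Swap the default 750x1334 portrait size for its landscape counterpart,
        // so a 16:9 landscape screen crops very little of the scene.
        let scene = GameScene(size: CGSize(width: 1334, height: 750))
        scene.scaleMode = .aspectFill   // the template default
        skView.presentScene(scene)
    }
}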

Related

iOS Vision: Drawing Detected Rectangles on Live Camera Preview Works on iPhone But Not on iPad

I'm using the iOS Vision framework to detect rectangles in real-time with the camera on an iPhone and it works well. The live preview displays a moving yellow rectangle around the detected shape.
However, when the same code is run on an iPad, the yellow rectangle tracks accurately along the X axis, but on the Y it is always slightly offset from the centre and it is not correctly scaled. The included image shows both devices tracking the same test square to better illustrate. In both cases, after I capture the image and plot the rectangle on the full camera frame (1920 x 1080), everything looks fine. It's just the live preview on the iPad that does not track properly.
I believe the issue is caused by the iPad screen's 4:3 aspect ratio. The iPhone's full-screen preview scales its 1920 x 1080 raw frame down to 414 x 718, where both the X and Y dimensions are scaled down by the same factor (about 2.6). However, the iPad scales the 1920 x 1080 frame down to 810 x 964, which warps the image and causes the error along the Y axis.
A rough solution could be to set a preview layer size smaller than the full screen and have it be scaled down uniformly in a 16:9 ratio matching 1920 x 1080, but I would prefer to use the full screen. Has anyone here come across this issue and found a transform that can properly translate and scale the rect observation onto the iPad screen?
Example test images and code snippet are below.
let rect: VNRectangleObservation
//Camera preview (live) image dimensions
let previewWidth = self.previewLayer!.bounds.width
let previewHeight = self.previewLayer!.bounds.height
//Dimensions of raw captured frames from the camera (1920 x 1080)
let frameWidth = self.frame!.width
let frameHeight = self.frame!.height
//Transform to change detected rectangle from Vision framework's coordinate system to SwiftUI
let transform = CGAffineTransform(scaleX: 1, y: -1).translatedBy(x: 0, y: -(previewHeight))
let scale = CGAffineTransform.identity.scaledBy(x: previewWidth, y: previewHeight)
//Convert the detected rectangle from normalized [0, 1] coordinates with bottom left origin to SwiftUI top left origin
//and scale the normalized rect to preview window dimensions.
var bounds: CGRect = rect.boundingBox.applying(scale).applying(transform)
//Rest of code draws the bounds CGRect in yellow onto the preview window, as shown in the image.
In case it helps anyone else: based on the info posted in Mr.SwiftOak's comment, I was able to resolve the problem by changing the preview layer's scaling to .resizeAspect rather than .resizeAspectFill, preserving the ratio of the raw frame in the preview. This means the preview no longer takes up the full iPad screen, but it makes accurate overlaying a lot simpler.
I then drew the rectangles as a .overlay to the preview window, so that the drawing coords are relative to the origin of the image (top left) rather than the view itself, which has an origin at (0, 0) top left of the entire screen.
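In case it's useful to others, here is a sketch of that mapping. AVCaptureVideoPreviewLayer.layerRectConverted(fromMetadataOutputRect:) accounts for the layer's videoGravity, so no manual scale factors are needed (previewLayer is assumed to be the layer described above):
import AVFoundation
import Vision

func previewRect(for observation: VNRectangleObservation,
                 in previewLayer: AVCaptureVideoPreviewLayer) -> CGRect {
    // Vision's boundingBox is normalized with a bottom-left origin;
    // metadata-output rects are normalized with a top-left origin, so flip Y.
    let box = observation.boundingBox
    let metadataRect = CGRect(x: box.minX, y: 1 - box.maxY,
                              width: box.width, height: box.height)
    return previewLayer.layerRectConverted(fromMetadataOutputRect: metadataRect)
}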
To clarify on how I've been drawing the rects, there are two parts:
Converting the detected rect bounding boxes into paths on CAShapeLayers:
let boxPath = CGPath(rect: bounds, transform: nil)
let boxShapeLayer = CAShapeLayer()
boxShapeLayer.path = boxPath
boxShapeLayer.fillColor = UIColor.clear.cgColor
boxShapeLayer.strokeColor = UIColor.yellow.cgColor
boxLayers.append(boxShapeLayer)
Appending the layers in the updateUIView of the preview UIViewRepresentable:
func updateUIView(_ uiView: VideoPreviewView, context: Context) {
    if let rectangles = self.viewModel.rectangleDrawings {
        for rect in rectangles {
            uiView.videoPreviewLayer.addSublayer(rect)
        }
    }
}

Spritekit scaling for universal app

I am having trouble properly setting up my app so that it displays correctly on all devices. I want my game to look best on iPhone, and I understand that setting my scene size with GameScene(size: CGSize(width: 1334, height: 750)) and using .aspectFill means that on iPads there will be less space to display things, which I'm fine with. The problem is, how do I position my nodes so that they are relative to each device's frame height and width? I use self.frame.height, self.frame.width, self.frame.midX, etc. for positioning my nodes, and when I run my game it positions things properly on my iPhone 6, but on my iPad everything seems blown up and nodes are off the screen. I'm going crazy trying to figure this out.
I solved this in my game by using scale factors: numbers which tell my app how much to enlarge each length, width, height, etc. Then I just make my game look right for one phone, and use that phone's width and height to calculate the factor by which I need to enlarge it for other devices. In this example I use the iPhone 4 as a base, but you can use any device; just change the numbers according to that device.
Portrait mode:
var widthFactor = UIScreen.main.bounds.width/320.0 //I divide it by the default iPhone 4 width
var heightFactor = UIScreen.main.bounds.height/480.0
Landscape mode:
var widthFactor = UIScreen.main.bounds.width/480.0 //I divide it by the default iPhone 4 landscape width
var heightFactor = UIScreen.main.bounds.height/320.0
Then when you make a node, a coin image for example, multiply its coordinates or width/height by the scale factors:
let coin = SKSpriteNode(imageNamed: "coin")
coin.position = CGPoint(x: 25 * widthFactor, y: self.size.height - 70 * heightFactor)
I think what you're looking for might be in this answer: https://stackoverflow.com/a/34878528/6728196
Specifically I think this part is what you're looking for (edited to fit your example):
if UIDevice.current.userInterfaceIdiom == .pad {
    // Set things only for iPad
    // Example: Adjust y positions using += or -=
    buttonNode.position.y += 100
    labelNode.position.y -= 100
}
Basically, this just adds or subtracts a certain amount from the iPhone position if the user is using an iPad. It's not too complicated, and you can increase or decrease both the x and y values of the position by a fixed amount or by a percentage of the screen (self.size.width * decimalPercentage).
Another benefit of using this way is that you're just modifying the iPhone positions, so it starts by using the default values that you set. Then if on iPad, it will make changes.
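For instance, a percentage-based variant of the same idea might look like this (buttonNode and labelNode are just placeholder names):
if UIDevice.current.userInterfaceIdiom == .pad {
    // Nudge positions by fractions of the scene size instead of fixed points,
    // so the adjustment holds across different iPad screens.
    buttonNode.position.y -= self.size.height * 0.10   // down by 10% of the height
    labelNode.position.x += self.size.width * 0.05     // in by 5% of the width
}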
If this is hard to understand, let me know so I can clear up the explanation.

Swift SpriteKit vertical background infinite image

I have 3 images:
topBg.png
midBg.png
botBg.png
I want to set topBg.png at the top of the scene with height = 200
midBg.png should scale or repeat infinitely in the vertical direction
botBg.png should be at the bottom with height = 200
I have the following code:
override func didMove(to view: SKView) {
    self.bgTopSpriteNode = self.childNode(withName: "//bgTopNode") as? SKSpriteNode
    self.bgMiddleSpriteNode = self.childNode(withName: "//bgMiddleNode") as? SKSpriteNode
    self.bgBottomSpriteNode = self.childNode(withName: "//bgBottomNode") as? SKSpriteNode

    if let bgTopSpriteNode = self.bgTopSpriteNode,
       let bgMiddleSpriteNode = self.bgMiddleSpriteNode,
       let bgBottomSpriteNode = self.bgBottomSpriteNode {
        bgTopSpriteNode.size.width = self.frame.width
        bgTopSpriteNode.size.height = 200
        bgTopSpriteNode.position.x = 0
        bgMiddleSpriteNode.size.width = self.frame.width
        bgMiddleSpriteNode.size.height = self.frame.height - 400
        bgMiddleSpriteNode.position.x = 0
        bgBottomSpriteNode.size.width = self.frame.width
        bgBottomSpriteNode.size.height = 200
        bgBottomSpriteNode.position.x = 0
    }
}
But how do I set the Y position of the images? The coordinates begin from the center of the screen, not from the top left, and I don't know how to convert them.
There are a couple of different ways to achieve what you're looking to do.
First, you can compute the y positions of the top and the bottom of the screen simply as ±size.height / 2 if you have the anchorPoint of your scene at (0.5, 0.5). (Don't use frame; use size. That way, you take into account the scaleMode of the scene.)
It sounds like you are frustrated that the origin of the scene is in the center. If you'd like to move it to the corner, you can easily do so by setting the scene's anchorPoint property, say, to (0.0, 0.0) for the lower left corner. Then, your y-values are 0 and size.height. If you are using the .sks editor, this is exposed in the interface - you can just set it there. Otherwise, you can set it programmatically.
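A sketch of that approach, reusing the node names from the question (sprites keep their default (0.5, 0.5) anchor, so a 200-point cap centred at y = 100 sits flush with the bottom edge):
// Inside didMove(to:), after unwrapping the three nodes as in the question:
self.anchorPoint = CGPoint(x: 0, y: 0)   // origin now at the lower-left corner
bgBottomSpriteNode.position = CGPoint(x: size.width / 2, y: 100)
bgMiddleSpriteNode.position = CGPoint(x: size.width / 2, y: size.height / 2)
bgTopSpriteNode.position = CGPoint(x: size.width / 2, y: size.height - 100)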
Finally, you can set the scaleMode of your scene to something like .aspectFill, set the size of the scene directly (say, to 1024x768 for an iPad), and just place the images wherever they need to go. This approach works particularly well with .sks files, if you are using them; when you load up a scene, you can set the size of the scene based on the aspect ratio of the view it's in to accommodate different aspect ratios. For instance, you could adopt a 320x480 "reference size" for your iPhone scenes. Whenever you load up the scene, you could set the size of the scene to be 320 points wide and however many points tall to match the aspect ratio of the device. Then, all your graphics would be produced at 320pt wide, and you could slide them up or down proportionally across the scene's size for layout. This is a little more complicated, but it's a lot easier than trying to deal with separate layout considerations for multiple devices.
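Sketched out, that last approach might look like this in the view controller that presents the scene (skView and GameScene are the usual template names; 320 is the reference width from the example above):
// Derive the scene height from the view's aspect ratio at load time.
let referenceWidth: CGFloat = 320
let aspect = skView.bounds.height / skView.bounds.width
let scene = GameScene(size: CGSize(width: referenceWidth,
                                   height: referenceWidth * aspect))
scene.scaleMode = .aspectFill
skView.presentScene(scene)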
I should also point out a couple of things.
You can use the anchorPoint property of a sprite to dictate where the sprite's coordinates are measured from. This is handy for cases where you want images to be flush up against something. For instance, if you want an image flush against the left side of the screen, set its position to be exactly the left side of the screen, and then set its anchorPoint.x to 0.0; this will put the left edge of the sprite against the left edge of the screen. This also works for scenes, as you encountered - moving the anchorPoint of the scene moves everything in the scene relative to its size.
You don't need three images for what you're describing. You can use a single sprite and just set its centerRect property to tell it to use the top and bottom of an image and stretch the center part vertically. You have to do a little math to set the right xScale and yScale (not width and height, IIRC), but then you can draw all of that with one sprite instead of three. This would be really handy in your case, because you could just leave the sprite at (0,0), set its scale to match the size of the entire scene, and set the centerRect property - you wouldn't have to do any positioning math at all.
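A minimal sketch of the centerRect idea, assuming a single hypothetical image "bgFull" whose top and bottom 200 pixels are the fixed caps:
// Inside the scene, with its anchorPoint at the default (0.5, 0.5):
let bg = SKSpriteNode(imageNamed: "bgFull")   // hypothetical combined asset
let texSize = bg.texture!.size()
// Keep the top and bottom 200 px unstretched; stretch only the middle band.
bg.centerRect = CGRect(x: 0,
                       y: 200 / texSize.height,
                       width: 1,
                       height: (texSize.height - 400) / texSize.height)
// centerRect responds to scale rather than size, so scale to fill the scene.
bg.xScale = size.width / texSize.width
bg.yScale = size.height / texSize.height
bg.position = .zero
addChild(bg)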

How do I change the aspect ratio of the camera in Xamarin iOS?

First of all, my app is written mostly in Xamarin Forms, but it uses a CustomRenderer for the camera. Not sure if that will affect anything.
My problem currently is that I need to change the aspect ratio of the camera from 16:9 to something custom, if this is even possible.
I have tried changing the width and height independently, but, for example, if I change the height to something significantly larger, all it does is expand the width and height at the same rate until it 'hits the side of the box' that the camera is contained in. The result is that the width is correct, but the height I specified in code is a lot higher than the actual height of the camera.
var videoPreviewLayer = new AVCaptureVideoPreviewLayer(captureSession)
{
    // 16 / 9 = 1.77777778
    Frame = new CGRect(-15, -45, size, size * 1.77777778) /*LiveCameraStream.Bounds*/,
    BackgroundColor = new CGColor(0, 255, 0) // Green
};
viewLayer.AddSublayer(videoPreviewLayer);
viewLayer.Bounds = new CGRect(4, -50, 100, 50);
viewLayer.BackgroundColor = new CGColor(255, 0, 0); // Red
// Background colours included just for dev purposes to distinguish between layers
Whenever I set the height to be width * 1.777... it results in a perfect 16:9 aspect ratio, which unfortunately is not what I need. The image below shows how it is currently looking, with ideally the camera taking up all of the red area.
So my question is: how do I change the aspect ratio of this camera, if it is even possible?
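One common AVFoundation approach (sketched in Swift here; the Xamarin bindings expose the same properties) is to give the preview layer the custom-ratio frame you actually want and let ResizeAspectFill crop the 16:9 feed to it, rather than sizing the layer to the feed. captureSession and containerView are placeholders:
import AVFoundation
import UIKit

func makePreviewLayer(session captureSession: AVCaptureSession,
                      in containerView: UIView) -> AVCaptureVideoPreviewLayer {
    let previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
    previewLayer.frame = containerView.bounds        // the custom-ratio area (the red box)
    previewLayer.videoGravity = .resizeAspectFill    // fill the frame, cropping the overflow
    containerView.layer.addSublayer(previewLayer)
    return previewLayer
}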

Getting actual view size in Swift / iOS

Even after reading several posts on frame vs view, I still cannot get this working. I open Xcode (7.3) and create a new game project for iOS. In the default scene file, right after addChild, I add the following code:
print(self.view!.bounds.width)
print(self.view!.bounds.height)
print(self.frame.size.width)
print(self.frame.size.height)
print(UIScreen.mainScreen().bounds.size.width)
print(UIScreen.mainScreen().bounds.size.height)
I get the following results when I run it on an iPhone 6s:
667.0
375.0
1024.0
768.0
667.0
375.0
I can guess that the first two numbers are the Retina pixel size at x2. I am trying to understand why the frame size reports 1024x768.
Then I add the following code to resize a simple background image to fill the screen:
self.anchorPoint = CGPointMake(0.5,0.5)
let theTexture = SKTexture(imageNamed: "intro_screen_phone")
let theSizeFromBounds = CGSizeMake(self.view!.bounds.width, self.view!.bounds.height)
let theImage = SKSpriteNode(texture: theTexture, color: SKColor.clearColor(), size: theSizeFromBounds)
I get an image smaller than the screen size. The image is displayed even smaller if I choose landscape mode.
I tried multiplying the bounds width/height by two, hoping to get the actual screen size, but then the image gets too big. I also tried the frame size, which makes the image slightly bigger than the screen.
The main reason for my confusion, besides lack of knowledge, is the fact that I've seen this exact example working perfectly in a lesson. Either I am missing something obvious or ?
The frame reports 1024x768 because that is the size of the scene itself (the template's default GameScene.sks is 1024x768 points), not the size of the screen.
If you want your scene to be the same size as your screen, then in your GameViewController, before the scene is presented with:
skView.presentScene(scene)
use this:
scene.size = self.view.frame.size
which will make the scene the exact size of the screen.
Then you could easily make an image fill the scene like so:
func addBackground() {
    let bgTexture = SKTexture(imageNamed: "NAME")
    let bgSprite = SKSpriteNode(texture: bgTexture, color: SKColor.clearColor(), size: self.size)
    bgSprite.anchorPoint = CGPoint(x: 0, y: 0)
    bgSprite.position = self.frame.origin
    self.addChild(bgSprite)
}
Also, you may want to read up on the difference between a view's bounds and its frame.
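For a quick illustration of that distinction:
import UIKit

// frame:  the view's rectangle in its superview's coordinate system.
// bounds: the view's own coordinate system, with the origin normally at (0, 0).
let v = UIView(frame: CGRect(x: 20, y: 40, width: 100, height: 50))
print(v.frame)   // (20.0, 40.0, 100.0, 50.0)
print(v.bounds)  // (0.0, 0.0, 100.0, 50.0)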
