Getting actual view size in Swift / iOS

Even after reading several posts on frame vs. view I still cannot get this working. I open Xcode (7.3) and create a new game project for iOS. In the default scene file, right after addChild, I add the following code:
print(self.view!.bounds.width)
print(self.view!.bounds.height)
print(self.frame.size.width)
print(self.frame.size.height)
print(UIScreen.mainScreen().bounds.size.width)
print(UIScreen.mainScreen().bounds.size.height)
I get the following results when I run it on an iPhone 6s:
667.0
375.0
1024.0
768.0
667.0
375.0
I can guess that the first two numbers are the Retina pixel size at 2x. I am trying to understand why the frame size reports 1024x768?
Then I add the following code to resize a simple background image to fill the screen:
self.anchorPoint = CGPointMake(0.5,0.5)
let theTexture = SKTexture(imageNamed: "intro_screen_phone")
let theSizeFromBounds = CGSizeMake(self.view!.bounds.width, self.view!.bounds.height)
let theImage = SKSpriteNode(texture: theTexture, color: SKColor.clearColor(), size: theSizeFromBounds)
I get an image smaller than the screen size. The image is displayed even smaller if I choose landscape mode.
I tried multiplying the bounds width/height by two, hoping to get the actual screen size, but then the image gets too big. I also tried the frame size, which makes the image slightly bigger than the screen.
The main reason for my confusion, besides my lack of knowledge, is that I've seen this exact example working perfectly in a lesson. Either I am missing something obvious, or...?

The frame is reported as 1024x768 because that is the scene's size (the default scene size from the game template), measured in points, not pixels; it is independent of the view's size.
If you want your scene to be the same size as your screen, then in your GameViewController, before the scene is presented with:
skView.presentScene(scene)
use this:
scene.size = self.view.frame.size
which will make the scene the exact size of the screen.
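For context, here is a minimal sketch of where that line would go in the template's GameViewController (Swift 2 era syntax to match the question; the GameScene class and file name are the template defaults and are assumptions):
// GameViewController.viewDidLoad (sketch; assumes the default Xcode 7.x SpriteKit template)
if let scene = GameScene(fileNamed: "GameScene") {
    let skView = self.view as! SKView
    scene.size = skView.frame.size   // make the scene exactly the screen size, in points
    scene.scaleMode = .AspectFill
    skView.presentScene(scene)
}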
Then you could easily make an image fill the scene like so:
func addBackground() {
    let bgTexture = SKTexture(imageNamed: "NAME")
    // Size the sprite to the scene's own size so it fills the whole scene.
    let bgSprite = SKSpriteNode(texture: bgTexture, color: SKColor.clearColor(), size: self.size)
    bgSprite.anchorPoint = CGPoint(x: 0, y: 0)
    bgSprite.position = self.frame.origin
    self.addChild(bgSprite)
}
Also, you may want to read up on the difference between a view's bounds and its frame.
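As a quick illustration of that difference (not from the original answer):
// bounds is in the view's own coordinate system; frame is in the superview's.
let v = UIView(frame: CGRect(x: 20, y: 40, width: 100, height: 50))
print(v.frame)   // (20.0, 40.0, 100.0, 50.0)
print(v.bounds)  // (0.0, 0.0, 100.0, 50.0)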

Related

Removing statusbar from screenshot on iOS

I'm trying to remove the top part of an image by cropping, but the result is unexpected.
The code used:
extension UIImage {
    class func removeStatusbarFromScreenshot(_ screenshot: UIImage) -> UIImage {
        let statusBarHeight = 44.0
        let newHeight = screenshot.size.height - statusBarHeight
        let newSize = CGSize(width: screenshot.size.width, height: newHeight)
        let newOrigin = CGPoint(x: 0, y: statusBarHeight)
        let imageRef: CGImage = screenshot.cgImage!.cropping(to: CGRect(origin: newOrigin, size: newSize))!
        let cropped: UIImage = UIImage(cgImage: imageRef)
        return cropped
    }
}
My logic is that I need to make the image smaller in height by 44px and move the origin y by 44px, but it ends up creating a much smaller image of just the top-left corner.
The only way I can get it to work as expected is by multiplying the width by 2 and the height by 2.5 in newSize, but that also doubles the size of the image produced.
That doesn't make much sense anyway. Can someone help me make it work without using magic values?
There are two main problems with what you're doing:
A UIImage has a scale (usually tied to the resolution of your device's screen), but a CGImage does not.
Different devices have different "status bar" heights. In general, what you want to cut off from the top is not the status bar but the safe area. The top of the safe area is where your content starts.
Because of this:
You are wrong to talk about 44 px. There are no pixels here. Pixels are physical atomic illuminations on your screen. In code, there are points. Points are independent of the scale (and the scale is the multiplier between points and pixels).
You are wrong to talk about the number 44 itself as if it were hard-coded. You should get the top of the safe area instead.
By crossing into the CGImage world without taking scale into account, you lose the scale information, because CGImage knows nothing of scale.
By crossing back into the UIImage world without taking scale into account, you end up with a UIImage with a resolution of 1, which may not be the resolution of the original UIImage.
The simplest solution is not to do any of what you are doing. First, get the height of the safe area; call it h. Then just draw the snapshot image into a graphics image context that is the same scale as your image (which, if you play your cards right, it will be automatically), but is h points shorter than the height of your image — and draw it with its y origin at -h, thus cutting off the safe area. Extract the resulting image and you're all set.
Example! This code comes from a view controller. First, I'll take a screenshot of my own device's current screen (this view controller's view) as my app runs:
let renderer = UIGraphicsImageRenderer(size: view.bounds.size)
let screenshot = renderer.image { context in
    view.layer.render(in: context.cgContext)
}
Now, I'll cut the safe area off the top of that screenshot:
let h = view.safeAreaInsets.top
let size = screenshot.size
let r = UIGraphicsImageRenderer(
    size: .init(width: size.width, height: size.height - h)
)
let result = r.image { _ in
    screenshot.draw(at: .init(x: 0, y: -h))
}
Experimentation will confirm that this works perfectly on every device, regardless of whether it has a bezel and regardless of its screen resolution: the top of the resulting image, result, is the top of your actual content.
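For completeness, here is a hedged sketch of how the original CGImage-based crop could be made scale-aware instead; the helper name and the topInset parameter are illustrative, not part of the answer above:
func removeTopInset(from screenshot: UIImage, topInset: CGFloat) -> UIImage? {
    let scale = screenshot.scale
    // CGImage dimensions are in pixels, so convert the point values using the scale.
    let cropRect = CGRect(x: 0,
                          y: topInset * scale,
                          width: screenshot.size.width * scale,
                          height: (screenshot.size.height - topInset) * scale)
    guard let cg = screenshot.cgImage?.cropping(to: cropRect) else { return nil }
    // Reattach the original scale so the resulting UIImage keeps its resolution.
    return UIImage(cgImage: cg, scale: scale, orientation: screenshot.imageOrientation)
}
Called with view.safeAreaInsets.top, this should behave like the renderer-based version for an unrotated screenshot.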

iOS Vision: Drawing Detected Rectangles on Live Camera Preview Works on iPhone But Not on iPad

I'm using the iOS Vision framework to detect rectangles in real-time with the camera on an iPhone and it works well. The live preview displays a moving yellow rectangle around the detected shape.
However, when the same code is run on an iPad, the yellow rectangle tracks accurately along the X axis, but on the Y it is always slightly offset from the centre and it is not correctly scaled. The included image shows both devices tracking the same test square to better illustrate. In both cases, after I capture the image and plot the rectangle on the full camera frame (1920 x 1080), everything looks fine. It's just the live preview on the iPad that does not track properly.
I believe the issue is caused by the iPad screen's 4:3 aspect ratio. The iPhone's full-screen preview scales its 1920 x 1080 raw frame down to 414 x 718, where both the X and Y dimensions are scaled down by the same factor (about 2.6). However, the iPad scales the 1920 x 1080 frame down to 810 x 964, which warps the image and causes the error along the Y axis.
A rough solution could be to set a preview layer size smaller than the full screen and have it be scaled down uniformly in a 16:9 ratio matching 1920 x 1080, but I would prefer to use the full screen. Has anyone here come across this issue and found a transform that can properly translate and scale the rect observation onto the iPad screen?
Example test images and code snippet are below.
let rect: VNRectangleObservation
//Camera preview (live) image dimensions
let previewWidth = self.previewLayer!.bounds.width
let previewHeight = self.previewLayer!.bounds.height
//Dimensions of raw captured frames from the camera (1920 x 1080)
let frameWidth = self.frame!.width
let frameHeight = self.frame!.height
//Transform to change detected rectangle from Vision framework's coordinate system to SwiftUI
let transform = CGAffineTransform(scaleX: 1, y: -1).translatedBy(x: 0, y: -(previewHeight))
let scale = CGAffineTransform.identity.scaledBy(x: previewWidth, y: previewHeight)
//Convert the detected rectangle from normalized [0, 1] coordinates with bottom left origin to SwiftUI top left origin
//and scale the normalized rect to preview window dimensions.
var bounds: CGRect = rect.boundingBox.applying(scale).applying(transform)
//Rest of code draws the bounds CGRect in yellow onto the preview window, as shown in the image.
In case it helps anyone else: based on the info posted in Mr.SwiftOak's comment, I was able to resolve the problem by changing the preview layer's scaling to .resizeAspect rather than .resizeAspectFill, which preserves the aspect ratio of the raw frame in the preview. This means the preview no longer takes up the full iPad screen, but it made accurate overlaying a lot simpler.
I then drew the rectangles as an .overlay on the preview window, so that the drawing coordinates are relative to the origin of the image (top left) rather than the view itself, which has its origin at (0, 0), the top left of the entire screen.
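A minimal sketch of the preview-layer change described above (assuming previewLayer is the AVCaptureVideoPreviewLayer used for the live feed):
// Letterbox the preview instead of filling the screen, so X and Y are scaled by the same factor.
previewLayer.videoGravity = .resizeAspect
With the preview now keeping the raw frame's 16:9 ratio, scaling the normalized bounding box by the preview's width and height maps it correctly in both dimensions.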
To clarify how I've been drawing the rects, there are two parts:
Converting the detected rect bounding boxes into paths on CAShapeLayers:
let boxPath = CGPath(rect: bounds, transform: nil)
let boxShapeLayer = CAShapeLayer()
boxShapeLayer.path = boxPath
boxShapeLayer.fillColor = UIColor.clear.cgColor
boxShapeLayer.strokeColor = UIColor.yellow.cgColor
boxLayers.append(boxShapeLayer)
Appending the layers in updateUIView of the preview UIViewRepresentable:
func updateUIView(_ uiView: VideoPreviewView, context: Context) {
    if let rectangles = self.viewModel.rectangleDrawings {
        for rect in rectangles {
            uiView.videoPreviewLayer.addSublayer(rect)
        }
    }
}

SpriteKit SKCropNode cropping wrong area

I am using SpriteKit to draw a graph (with the ability to zoom in and pan around).
When I use an SKCropNode to crop the grid of my graph, it doesn't crop the desired area. It crops less, regardless of whether I use a rectangular SKShapeNode or an SKSpriteNode (with an image) as the .maskNode.
Here is my code:
//GRID
let grid = SKCropNode()
graphViewModel.graphScene.addChild(grid)
let ratio: CGFloat = 1000 / 500
let width = (graphViewModel.sceneSize.width*0.95)
let newSize = CGSize(width: width, height: width/ratio)
let origin = CGPoint(x: -newSize.width/2.0, y: 0.0)
let rectangularMask = SKShapeNode(rect: CGRect(origin: origin, size: newSize))
rectangularMask.fillColor = UIColor.lightGray
rectangularMask.zPosition = -10.0 //So it appears behind the grid, doesn't affect the cropping
grid.maskNode = rectangularMask
graphViewModel.graphScene.addChild(rectangularMask)
Here are two screenshots to illustrate what I mean:
This is the graph with its grid not being cropped.
This is the graph with the maskNode set.
The lightGray area is the actual rectangularMask, and the grid is being cut off a lot less than it ought to be.
My scene is scaled so I can zoom in without pixelating.
When I disable zooming (setting the scene's size to the view's size), the bug disappears. Unfortunately I need zooming without any pixel artefacts.
Maybe someone has an idea of how to fix this issue. It might also be a SpriteKit bug.

How are co-ordinates calculated in Swift

I've been trying to get the following code to display a rectangle at the bottom of an iPhone simulator in landscape mode:
let size = CGSize(width: 100, height: 10)
let myRect = SKShapeNode(rectOf: size)
myRect.position = CGPoint(x:self.frame.midX, y:self.frame.maxY - 50)
myRect.strokeColor = SKColor(red: 0.0/255.0, green: 0.0/255.0, blue: 200.0/255.0, alpha: 1.0)
myRect.lineWidth = 4
myRect.physicsBody = SKPhysicsBody(rectangleOf: size)
myRect.physicsBody?.affectedByGravity = false
myRect.physicsBody?.isDynamic = true
The result of this is that the rectangle is not drawn inside the visible screen; I then inserted this debug statement:
print(self.frame.minY, self.frame.maxY, self.frame.minX, self.frame.maxX, self.frame.width, self.frame.height)
Which outputs this:
-667.0 667.0 -375.0 375.0 750.0 1334.0
Further, when I changed the coordinate to y: 100, I noticed the shape move up, not down. So, my question is: when the phone is in landscape mode, do I need to manually translate the X and Y coordinates, and do I need to know which way it has been rotated so that I can tell up from down?
Try to understand the difference between the scene and the screen. These are two independent constructs in iPhone development.
When creating a SpriteKit game in Xcode 11 (others may vary), the default scene size is 750x1334. This is the "standard" size now.
What this means is that on a standard-size iPhone you will get pixel-perfect graphics; every other device will either scale or resize the scene to fit its screen.
There are four modes to accomplish this (.aspectFill, .aspectFit, .fill, and .resizeFill), set via scaleMode. I am not going to go into all of these (you can look them up), but the default in the template is .aspectFill.
What .aspectFill means is: preserve the aspect ratio and scale the scene until all screen edges are covered, leaving no black bars. This means cropping will happen if your scene does not match the aspect ratio of your screen.
Now, when dealing with orientation, you need to find some way to handle this, because the scene will stay the same size and be forced to scale or resize.
So what is happening is: your scene has an aspect ratio of 9:16, but your landscape screen has an aspect ratio of 16:9. This means your scene is scaled to roughly 16:28.44 (in screen-width units) to make sure that all screen edges are covered, so about 68% (19.44/28.44) of your scene's height is cropped to fit the screen.
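As a rough worked example of where those numbers come from (a sketch; the sizes are the template defaults mentioned above):
let sceneSize  = CGSize(width: 750, height: 1334)    // default template scene (portrait, 9:16)
let screenSize = CGSize(width: 1334, height: 750)    // the same points in landscape (16:9)

// .aspectFill scales by the larger of the two ratios so every screen edge is covered.
let scale = max(screenSize.width / sceneSize.width,
                screenSize.height / sceneSize.height)                    // ≈ 1.78
let scaledHeight = sceneSize.height * scale                              // ≈ 2372 points
let croppedFraction = (scaledHeight - screenSize.height) / scaledHeight  // ≈ 0.68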
To remedy this, you have a few choices.
1) You can support only one orientation and set the scene size to fit it properly.
2) You can design your scene as a square and handle cropping on both sides.
3) You can design multiple scenes to handle different aspect ratios.
4) You can manually keep changing the size and work with percentage-based positioning.
Of these choices, I recommend option 1 or 2.

Why are SKSpriteNodes only partially rendering?

I've coded a basic layout of "cards" for level selection in a game using Swift and SpriteKit. It's basically just six level-selection cards side by side, each with a picture of the level that the user can select. To create them I run a for-loop, placing the first one in the center of the screen, then padding, then the second one, then padding, etc. Each card is an SKSpriteNode from a PNG image. Each card is approximately a third of the device's width and about a third of the device's height.
I create all six of them and then create an action that moves all six cards left or right to select which one the player would like. The one in the center is the one that is selected.
Everything works great on iPhone simulators (tested on iPhone 6, iPhone 6 Plus, iPhone 7 Plus and iPhone 5; all work great). On iPad simulators the first and last cards have a portion of the image that doesn't render at all. The first card has about 1/4 of it lost on the left side, and the last card has about 1/4 of it lost on the far right side. I tried running it on a physical iPad as well and it has the same issue. When I run it on an iPad Pro 12.9 it gets worse: it cuts off even more of the image.
If I choose only to display 5 of the 6 cards they all render great.
If I choose to shrink them down to about 1/4 of the device width and 1/4 of the device height and lessen the padding they render fine.
I tried playing with the Scene and View sizes and scales and didn't have any improvement.
I've tried using different images and there is no changes at all.
I've double checked all zPositions and found no improvement.
I've tried systematically removing all other objects in the scene and still have the problem.
I've put them on their own "layer", which is an SKEffectNode named cardNode (it's an SKEffectNode because I chose to blur it later when an alert screen comes up in front of it). I thought that putting them onto their own layer might help, but it didn't.
I've put physics bodies on the cards just to make sure that they are still "there", and the physics bodies appear in the correct places. If I tap on a part of a node that isn't rendered, it still behaves properly as though it were rendered in that area.
I can't figure out where to go from here to fix this. Ideally I would like to add more cards in the future, but I'm stuck on this problem.
Here is the code that I have for creating the cards.
let cardNode = SKEffectNode()
let levelCardArray: [String] = [
    "BlackBoxLevelCard.png",
    "FruitLevelCard.png",
    "SportsLevelCard.png",
    "BarnLevelCard.png",
    "SeaLevelCard.png",
    "SpaceLevelCard.png"
]
let screenWidth = UIScreen.main.bounds.width
let screenHeight = UIScreen.main.bounds.height
let w10 = screenWidth * 0.10
let w40 = screenWidth * 0.40
let w50 = screenWidth * 0.50
let w60 = screenWidth * 0.60
let h50 = screenHeight * 0.50
let cardMargin = w10
let cardSize = CGSize(width: w40, height: w60)
let startPosition = CGPoint(x: w50, y: h50)

override func didMove(to view: SKView) {
    let scene = levelSelectionScene(size: view.bounds.size)
    scene.backgroundColor = UIColor.black
    let skView = view as SKView
    skView.ignoresSiblingOrder = true
    cardNode.zPosition = 100
    self.addChild(cardNode)
    // Lay the six cards out in a row, spaced one margin apart, starting at the screen center.
    for i in 1...levelCardArray.count {
        let currentArrayValue = i - 1
        let cardSprite = SKSpriteNode(imageNamed: levelCardArray[currentArrayValue])
        cardSprite.size = cardSize
        cardSprite.position = CGPoint(x: startPosition.x + (CGFloat(currentArrayValue) * (cardSize.width + cardMargin)), y: startPosition.y)
        cardSprite.zPosition = cardNode.zPosition
        cardSprite.name = "levelCardObject"
        cardNode.addChild(cardSprite)
    }
}
Any help or insight would be greatly appreciated. Thanks guys!
I tried all day today to find a way to make this work by resizing the scenes and views, and haven't had any luck with it at all.
My solution is to check the device type, and if it's an iPad, reduce the image sizes and the buffer between images until nothing is cut off. I don't consider this a very good solution, really just a workaround until I can find a better way to do it. Thank you for your thoughts; I definitely appreciate it!
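A rough sketch of that device check, reusing the screenWidth constant from the code above (the shrink factors here are illustrative, not exact values):
// Shrink the cards and their spacing on iPad so the outer cards are not clipped.
let isPad = UIDevice.current.userInterfaceIdiom == .pad
let cardSize = CGSize(width: screenWidth * (isPad ? 0.30 : 0.40),
                      height: screenWidth * (isPad ? 0.45 : 0.60))
let cardMargin = screenWidth * (isPad ? 0.05 : 0.10)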
