Zooming In By Modifying Camera Window - WebGL

I am working on an assignment where I need to implement a zoom in button. The following is code to initialize the scene:
// select the viewport
gl.viewport(0, 0, gl.viewportWidth, gl.viewportHeight);
// reset the modelview matrix
mvMatrix.setIdentity(); // erase all prior transformations
// select the view window (projection camera)
var left=-boardW/2.0, right=boardW/2.0, bottom=-boardH/2.0, top=boardH/2.0, near=0, far=10;
pMatrix.setIdentity();
pMatrix.ortho(left,right,bottom,top,near,far);
mvMatrix.multiply(pMatrix);
// set the camera position and orientation (viewing transformation)
var eyeX=0, eyeY=0, eyeZ=10;
var centerX=0, centerY=0, centerZ=0;
var upX=0, upY=1, upZ=0;
mvMatrix.lookAt(eyeX, eyeY, eyeZ, centerX, centerY, centerZ, upX, upY, upZ);
The professor states that zoom should be done by changing the camera window, so that the image appears bigger if the window is smaller.
This is what I've done in my zoom function:
function zoomIn() {
    var eyeX=0, eyeY=0, eyeZ=(10 * (scaleFactor + 0.1));
    var centerX=0, centerY=0, centerZ=0;
    var upX=0, upY=1, upZ=0;
    mvMatrix.lookAt(eyeX, eyeY, eyeZ, centerX, centerY, centerZ, upX, upY, upZ);
}
Where scaleFactor has the initial value of 1. When I click the zoom button, however, my image disappears. Does anyone know what I am doing incorrectly? Is this not changing the camera window?
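For comparison, "changing the camera window" would mean shrinking the bounds passed to ortho rather than moving the eye. A minimal sketch of that idea, reusing the pMatrix/mvMatrix objects and the boardW/boardH and scaleFactor globals from the setup code above (the function name is hypothetical, and the scene still has to be redrawn afterwards):
function zoomInByWindow() {
    if (scaleFactor > 0.2) scaleFactor -= 0.1; // a smaller window makes the image appear bigger
    // shrink the projection window around its centre
    var left = -boardW / 2.0 * scaleFactor, right = boardW / 2.0 * scaleFactor;
    var bottom = -boardH / 2.0 * scaleFactor, top = boardH / 2.0 * scaleFactor;
    pMatrix.setIdentity();
    pMatrix.ortho(left, right, bottom, top, 0, 10);
    // rebuild the combined matrix exactly as in the setup code
    mvMatrix.setIdentity();
    mvMatrix.multiply(pMatrix);
    mvMatrix.lookAt(0, 0, 10, 0, 0, 0, 0, 1, 0);
}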

Related

iOS Vision: Drawing Detected Rectangles on Live Camera Preview Works on iPhone But Not on iPad

I'm using the iOS Vision framework to detect rectangles in real-time with the camera on an iPhone and it works well. The live preview displays a moving yellow rectangle around the detected shape.
However, when the same code is run on an iPad, the yellow rectangle tracks accurately along the X axis, but on the Y axis it is always slightly offset from the centre and is not correctly scaled. To better illustrate, the included image shows both devices tracking the same test square. In both cases, after I capture the image and plot the rectangle on the full camera frame (1920 x 1080), everything looks fine. It's just the live preview on the iPad that does not track properly.
I believe the issue is caused by the iPad screen's 4:3 aspect ratio. The iPhone's full-screen preview scales its 1920 x 1080 raw frame down to 414 x 718, where both the X and Y dimensions are scaled down by the same factor (about 2.6). However, the iPad scales the 1920 x 1080 frame down to 810 x 964, which warps the image and causes the error along the Y axis.
A rough solution could be to set a preview layer size smaller than the full screen and have it be scaled down uniformly in a 16:9 ratio matching 1920 x 1080, but I would prefer to use the full screen. Has anyone here come across this issue and found a transform that can properly translate and scale the rect observation onto the iPad screen?
Example test images and code snippet are below.
let rect: VNRectangleObservation
//Camera preview (live) image dimensions
let previewWidth = self.previewLayer!.bounds.width
let previewHeight = self.previewLayer!.bounds.height
//Dimensions of raw captured frames from the camera (1920 x 1080)
let frameWidth = self.frame!.width
let frameHeight = self.frame!.height
//Transform to change detected rectangle from Vision framework's coordinate system to SwiftUI
let transform = CGAffineTransform(scaleX: 1, y: -1).translatedBy(x: 0, y: -(previewHeight))
let scale = CGAffineTransform.identity.scaledBy(x: previewWidth, y: previewHeight)
//Convert the detected rectangle from normalized [0, 1] coordinates with bottom left origin to SwiftUI top left origin
//and scale the normalized rect to preview window dimensions.
var bounds: CGRect = rect.boundingBox.applying(scale).applying(transform)
//Rest of code draws the bounds CGRect in yellow onto the preview window, as shown in the image.
In case it helps anyone else: based on the info posted in Mr.SwiftOak's comment, I was able to resolve the problem by changing the preview layer's scaling to .resizeAspect rather than .resizeAspectFill, which preserves the aspect ratio of the raw frame in the preview. This means the preview no longer takes up the full iPad screen, but it made it a lot simpler to overlay accurately.
I then drew the rectangles as a .overlay to the preview window, so that the drawing coords are relative to the origin of the image (top left) rather than the view itself, which has an origin at (0, 0) top left of the entire screen.
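For reference, a minimal sketch of that preview-layer change (the session and view names here are assumptions, not my actual code):
import AVFoundation
import UIKit

func makePreviewLayer(session: AVCaptureSession, in view: UIView) -> AVCaptureVideoPreviewLayer {
    let previewLayer = AVCaptureVideoPreviewLayer(session: session)
    // .resizeAspect letterboxes the 16:9 frame instead of cropping it to fill the 4:3 iPad
    // screen, so X and Y share the same scale factor and the overlay stays aligned.
    previewLayer.videoGravity = .resizeAspect
    previewLayer.frame = view.bounds
    view.layer.addSublayer(previewLayer)
    return previewLayer
}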
To clarify on how I've been drawing the rects, there are two parts:
Converting the detected rect bounding boxes into paths on CAShapeLayers:
let boxPath = CGPath(rect: bounds, transform: nil)
let boxShapeLayer = CAShapeLayer()
boxShapeLayer.path = boxPath
boxShapeLayer.fillColor = UIColor.clear.cgColor
boxShapeLayer.strokeColor = UIColor.yellow.cgColor
boxLayers.append(boxShapeLayer)
Appending the layers in updateUIView of the preview UIViewRepresentable:
func updateUIView(_ uiView: VideoPreviewView, context: Context) {
    if let rectangles = self.viewModel.rectangleDrawings {
        for rect in rectangles {
            uiView.videoPreviewLayer.addSublayer(rect)
        }
    }
}

Rotate camera around itself

I am using two virtual joysticks to move my camera around the scene. The left stick controls the position and the right one controls the rotation.
When using the right stick, the camera rotates, but it seems that the camera rotates around the center point of the model.
This is my code:
fileprivate func rotateCamera(_ x: Float, _ y: Float) {
    if let cameraNode = self.cameraNode {
        let moveX = x / 50.0
        let rotated = SCNMatrix4Rotate(cameraNode.transform, moveX, 0, 1, 0)
        cameraNode.transform = rotated
    }
}
I have also tried this code:
fileprivate func rotateCamera(_ x: Float, _ y: Float) {
    if let cameraNode = self.cameraNode {
        let moveX = x / 50.0
        cameraNode.rotate(by: SCNQuaternion(moveX, 0, 1, 0), aroundTarget: cameraNode.transform)
    }
}
But the camera just jumps around. What is my error here?
There are many ways to handle rotation, and some are very well suited to giving the coder a headache.
It sounds like the model is at (0, 0, 0), meaning it's in the center of the world, and the camera is transformed to a certain location. In the first example using matrices, you basically rotate that transformation: you transform first, then rotate, which will indeed cause the camera to rotate around the origin (0, 0, 0).
What you should do instead is rotate the camera first, in its local space, and then translate it to its position in world space.
Translation x rotation matrix results in rotation in world space
Rotation x translation matrix results in rotation in local space
So a solution is to remove the translation from the camera first (moving it back to 0,0,0), then apply the rotation matrix, and then reapply the translation. This comes down to the same result as starting with an identity matrix. For example:
let rotated = SCNMatrix4Rotate(SCNMatrix4Identity, moveX, 0, 1, 0)
cameraNode.transform = SCNMatrix4Mult(rotated, cameraNode.transform)
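Folding that back into the question's rotateCamera(_:_:), a sketch that reuses the cameraNode property and the x sensitivity from the question:
fileprivate func rotateCamera(_ x: Float, _ y: Float) {
    guard let cameraNode = self.cameraNode else { return }
    let moveX = x / 50.0
    // Build the rotation from identity, then concatenate the camera's existing transform,
    // so the rotation is applied in the camera's local space instead of around the world origin.
    let rotated = SCNMatrix4Rotate(SCNMatrix4Identity, moveX, 0, 1, 0)
    cameraNode.transform = SCNMatrix4Mult(rotated, cameraNode.transform)
}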

SpriteKit does not draw a line at the correct position in portrait mode

I am trying to draw a line in SpriteKit. Before today SpriteKit was working correctly, but now, only when the app is in portrait mode, it draws the x position wrong. I try to draw at x position = -750/4, but it draws x at something like -750/2 + 750/20. The y position of the line is not correct either. How can I make it work correctly?
extension GameScene {
    func setScreenSize(width inputWidth: Double, height inputHeight: Double) {
        let coordinatesX = Double(self.frame.size.width)
        // coordinatesX = 750
        let coordinatesY = Double(self.frame.size.height)
        // coordinatesY = 1334.0
    }
    func drawShapeNode() {
        theX = CGFloat(-750/4)
        theY = CGFloat(-1334.0/4)
        thePathToDraw.move(to: CGPoint(x: theX, y: theY))
        thePathToDraw.addLine(to: CGPoint(x: 0, y: 0))
    }
}
I found that this wrong position occurs only when the game view's width is smaller than the screen width, which is why I only noticed it today and it was normal before. Making the game view's width greater than or equal to the screen width fixes it. The error is not related to a scroll view: it also occurs when the view is not inside a scroll view.
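For what it's worth, a sketch of that workaround; the scene size and scale mode are assumptions based on the 750 x 1334 coordinates mentioned above, not the original setup:
import SpriteKit
import UIKit

// Make the SKView at least as wide as the screen so scene x coordinates are not rescaled.
let screenBounds = UIScreen.main.bounds
let skView = SKView(frame: CGRect(x: 0, y: 0, width: screenBounds.width, height: screenBounds.height))
let scene = GameScene(size: CGSize(width: 750, height: 1334)) // assumed scene size
scene.scaleMode = .aspectFill                                  // assumed scale mode
skView.presentScene(scene)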

Correctly position the camera when panning

I'm having a hard time setting boundaries and positioning camera properly inside my view after panning. So here's my scenario.
I have a node that is bigger than the screen and I want to let user pan around to see the full map. My node is 1000 by 1400 when the view is 640 by 1136. Sprites inside the map node have the default anchor point.
Then I've added a camera to the map node and set its position to (0.5, 0.5).
Now I'm wondering whether I should be changing the position of the camera or of the map node when the user pans the screen? The first approach seems problematic, since I can't simply add the translation to the camera position: the position is defined as (0.5, 0.5) and the translation values are way bigger than that. So I tried multiplying/dividing it by the screen size, but that doesn't seem to work. Is the second approach better?
var map = Map(size: CGSize(width: 1000, height: 1400))

override func didMove(to view: SKView) {
    (...)
    let pan = UIPanGestureRecognizer(target: self, action: #selector(panned(sender:)))
    view.addGestureRecognizer(pan)
    self.anchorPoint = CGPoint.zero
    self.cam = SKCameraNode()
    self.cam.name = "camera"
    self.camera = cam
    self.addChild(map)
    self.map.addChild(self.cam!)
    cam.position = CGPoint(x: 0.5, y: 0.5)
}

var previousTranslateX: CGFloat = 0.0

func panned(sender: UIPanGestureRecognizer) {
    let currentTranslateX = sender.translation(in: view!).x
    // calculate translation since last measurement
    let translateX = currentTranslateX - previousTranslateX
    let xMargin = (map.nodeSize.width - self.frame.width) / 2
    var newCamPosition = CGPoint(x: cam.position.x, y: cam.position.y)
    let newPositionX = cam.position.x * self.frame.width + translateX
    // since the camera x is 320, our limits are 140 and 460 ?
    if newPositionX > self.frame.width/2 - xMargin && newPositionX < self.frame.width - xMargin {
        newCamPosition.x = newPositionX / self.frame.width
    }
    centerCameraOnPoint(point: newCamPosition)
    // (re-)set previous measurement
    if sender.state == .ended {
        previousTranslateX = 0
    } else {
        previousTranslateX = currentTranslateX
    }
}

func centerCameraOnPoint(point: CGPoint) {
    if cam != nil {
        cam.position = point
    }
}
Your camera is actually positioned 0.5 points to the right of the centre and 0.5 points up from the centre. At (0, 0) your camera is dead centre of the screen.
I think the mistake you've made is a conceptual one, thinking that anchor point of the scene (0.5, 0.5) is the same as the centre coordinates of the scene.
If you're working in pixels, which it seems you are, then a camera position of (500, 700) will be at the top right of your map, ( -500, -700 ) will be at the bottom left.
This assumes you're using the midpoint anchor that comes default with the Xcode SpriteKit template.
Which means the answer to your question is: move the camera itself around your map as you please, confident in the knowledge that its position is in literal scene points.
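In the question's pan handler, working in points directly could look something like this (a sketch reusing cam, view and previousTranslateX from the question; 1000 and 640 are the map and view widths given above):
@objc func panned(sender: UIPanGestureRecognizer) {
    let currentTranslateX = sender.translation(in: view!).x
    let translateX = currentTranslateX - previousTranslateX
    // The camera position is in scene points, so the translation can be applied directly.
    // Clamp so the 640-point-wide view never shows past the 1000-point-wide map;
    // subtracting makes the map follow the finger (flip the sign for the opposite feel).
    let maxX: CGFloat = (1000 - 640) / 2
    cam.position.x = max(-maxX, min(maxX, cam.position.x - translateX))
    previousTranslateX = (sender.state == .ended) ? 0 : currentTranslateX
}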
With one caveat...
A lot of games use constraints to stop the camera somewhat before it gets to the edge of the map, so that the map isn't half off and half on the screen. The map's edge is still shown, but the furthest the camera travels is only far enough to reveal that edge. This becomes a constraints-based effort when you have a player/character that can walk to the edge but the camera doesn't go all the way out there.
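A sketch of that constraints-based clamping, assuming the 1000 x 1400 map is centred on the scene origin and the view is 640 x 1136 (shrink the ranges further if the camera should stop before the map's edge):
// Horizontal travel: (1000 - 640) / 2 = 180 points either side of centre.
let xRange = SKRange(lowerLimit: -180, upperLimit: 180)
// Vertical travel: (1400 - 1136) / 2 = 132 points either side of centre.
let yRange = SKRange(lowerLimit: -132, upperLimit: 132)
// SpriteKit enforces the constraint every frame, so the pan handler can move the camera freely.
cam.constraints = [SKConstraint.positionX(xRange, y: yRange)]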

Swift: Positioning Children of the SKCameraNode

Context:
there is a cursor (like your mouse) SKSpriteNode
cam is an SKCameraNode and is a child of the cursor (i.e. wherever your cursor goes, the camera follows).
cam is purposely not centered on the cursor; rather, it is offset so the cursor appears at the top of the view, and there remains empty space below
A simple schematic is given below
Goal:
The goal is to add two sprites to the lower-left and lower-right corners of the camera's view. The sprites will be children of the camera, so that they always stay in view.
Question
How can I position a sprite in the corner of the camera's view, especially given that the SKCameraNode does not have an anchorPoint attribute (as an SKSpriteNode does, which is what let me offset the camera as a child of the cursor)?
Note: One can position the SKSpriteNodes in the GameScene and then call .move(toParent: SKNode), which gets you closer but also messes with the position and scale of the SKSpriteNodes.
var cam: SKCameraNode!
let cursor = SKSpriteNode(imageNamed: "cursor")
override func didMove(to view: SKView) {
    // Set up the cursor
    cursor.setScale(spriteScale)
    cursor.position = CGPoint(x: self.frame.midX, y: raisedPositioning)
    cursor.anchorPoint = CGPoint(x: 0.5, y: 0.5)
    cursor.zPosition = CGFloat(10)
    addChild(cursor)
    // Set up the camera
    cam = SKCameraNode()
    self.camera = cam
    cam.setScale(15.0)
    // Camera is child of Cursor so that the camera follows the cursor
    cam.position = CGPoint(x: cursor.size.width/2, y: -(cursor.size.height * 4))
    cursor.addChild(cam)
    // Add another sprite here and make it child to cursor
    ...
The cameraNode has no size, but you can get the current visible size from the scene's frame property:
frame.size
Then you can position your node accordingly. A camera child's position is relative to the centre of the view, so, for example, to place the centre of yournode in the lower-left corner you would set its position like this:
yournode.position.x = -frame.size.width / 2
yournode.position.y = -frame.size.height / 2
This is best solved with a "dummy node" that acts as the camera's screen-space coordinate system.
Place this dummy node at the exact centre of the view of the camera, at a zPosition you're happy with, as a child of the camera.
...from SKCameraNode docs page:
The scene is rendered so that the camera node’s origin is placed in the middle of the scene.
Attach all the HUD elements and other graphics you want to stay in place relative to the camera to this dummy object, positioned in a coordinate system that makes sense relative to the camera's "angle of view", i.e. its visible frame.
...from a little further down the SKCameraNode docs page:
The camera’s viewport is the same size as the scene’s viewport (determined by the scene’s size property) and the scene is still scaled by its scaleMode property when it is rendered into the view.
Whenever the camera moves, it moves the dummy object, and all the children of the dummy object move with the dummy object.
The biggest advantage of this approach is that you can shake or otherwise move the dummy object to create visual effects indicative of motion and explosions. It's also a neat way to remove everything from view at once.
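A sketch of that dummy-node setup, reusing cam and the scene's frame from the question (the texture names are placeholders; positions are in the camera's local space, where (0, 0) is the centre of the view):
// Dummy node that acts as the camera's screen-space coordinate system.
let hud = SKNode()
hud.zPosition = 100
cam.addChild(hud)
// Lower-left and lower-right corners, relative to the centre of the view. With the default
// (0.5, 0.5) anchor point the sprite's centre sits exactly on the corner; nudge it in by
// half the sprite's size if it should be fully on screen.
let lowerLeft = SKSpriteNode(imageNamed: "buttonLeft")   // placeholder texture name
lowerLeft.position = CGPoint(x: -frame.size.width / 2, y: -frame.size.height / 2)
hud.addChild(lowerLeft)
let lowerRight = SKSpriteNode(imageNamed: "buttonRight") // placeholder texture name
lowerRight.position = CGPoint(x: frame.size.width / 2, y: -frame.size.height / 2)
hud.addChild(lowerRight)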
