scenekit - zoom in/out to selected node of scene - ios

I have a scene in which a human body is displayed. I want to zoom in to a specific body part when a user taps on it.
I changed the camera's position to the node's position, but it doesn't point exactly at it.
I also need to keep the selected part in the center of the screen when zooming in.
How can I accomplish zoom in / out?

I solved my problem by moving the camera instead of scaling the model. I got the tap point from the gesture recognizer and then converted the view coordinates to scene coordinates:
// Tap location in view coordinates
CGPoint p = [gestureRecognize locationInView:scnView];
// Nodes under the tap (used to pick the tapped body part)
NSArray *hitResults = [scnView hitTest:p options:nil];
// Unproject the tap at the depth of the world origin to get a 3D point
SCNVector3 projectedOrigin = [scnView projectPoint:SCNVector3Zero];
SCNVector3 vector = SCNVector3Make(p.x, p.y, projectedOrigin.z);
SCNVector3 worldPoint = [scnView unprojectPoint:vector];
and then positioned the camera at that worldPoint.
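For reference, here is a rough Swift sketch of the same idea (not from the original answer): hit-test the tap, then move the camera in front of the tapped node so it stays centered on screen. The scnView and cameraNode names are placeholders for whatever your scene setup uses.

// Sketch only: assumes an SCNView called `scnView` and a camera node called
// `cameraNode` exist elsewhere in the view controller.
@objc func handleTap(_ gesture: UITapGestureRecognizer) {
    let point = gesture.location(in: scnView)
    guard let hit = scnView.hitTest(point, options: nil).first else { return }

    let tappedNode = hit.node
    let radius = Float(tappedNode.boundingSphere.radius)
    // Convert the node's local bounding-sphere center into world space.
    let worldCenter = tappedNode.convertPosition(tappedNode.boundingSphere.center, to: nil)

    // Place the camera a couple of radii in front of the node and aim at it,
    // which keeps the tapped part centered on screen.
    cameraNode.position = SCNVector3Make(worldCenter.x, worldCenter.y, worldCenter.z + 2 * radius)
    cameraNode.look(at: worldCenter)   // iOS 11+
}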

To reposition the camera along the Z-axis, multiply the node's current transform matrix by a translation matrix:
var node = childNode.transform
var translation = SCNMatrix4MakeTranslation(1.0, 1.0, adjustedZValue)
var newTrans = SCNMatrix4Mult(node, translation)
childNode.transform = newTrans
Edit: Had some names mixed up

A bit cleaned up and more "swifty":
let transform = childNode.transform
let adjustedZValue = Float32(3)
let translation = SCNMatrix4MakeTranslation(1.0, 1.0, adjustedZValue)
let newTrans = SCNMatrix4Mult(transform, translation)
childNode.transform = newTrans
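If you want the zoom to animate rather than jump, one option (my own sketch, not part of the original answers) is to wrap the same transform change in an SCNTransaction:

// Animate the Z translation instead of applying it instantly.
let adjustedZValue: Float = 3
SCNTransaction.begin()
SCNTransaction.animationDuration = 0.4
childNode.transform = SCNMatrix4Mult(childNode.transform,
                                     SCNMatrix4MakeTranslation(1.0, 1.0, adjustedZValue))
SCNTransaction.commit()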

Related

Retrieve ARVector3 from touch on cameraView

I've been playing around with the enhanced samples and read the full SDK documentation, but I can't figure out this problem. I'm trying to convert a touch on self.cameraView to an ARVector3 so I can move the 3D model to that position. At the moment I'm trying to convert the CGPoint from the tap gesture to the world ARNode, but no luck so far.
float x = [gesture locationInView:self.cameraView].x;
float y = [gesture locationInView:self.cameraView].y;
CGPoint gesturePoint = CGPointMake(x, y);
ARVector3 *newPosition = [arbiTrack.world nodeFromViewPort:gesturePoint];
ARNode *touchNode = [ARNode nodeWithName:@"touchNode"];
ARImageNode *targetImageNode = [[ARImageNode alloc] initWithImage:[UIImage imageNamed:@"drop.png"]];
[touchNode addChild:targetImageNode];
[targetImageNode scaleByUniform:1];
touchNode.position = newPosition;
[arbiTrack.world addChild:touchNode];
This results in the following situation (screenshots omitted):
Why does a touch in the upper-left corner of self.cameraView give the point that is closest by? I actually want to tap on the screen and get an X, Y, Z (ARVector3) coordinate back.

New center point for transformed UIView with new frame

I've been working on an iOS app that displays indoor blueprints. You should be able to switch between floors, and each floor image is controlled by gesture recognisers to handle pan, rotate and scale.
I have been using this example for the gesture recognisers: https://github.com/GreenvilleCocoa/UIGestures/blob/master/UIGestures/RPSimultaneousViewController.m
So now to the problem. Whenever the user switches floors I want to keep the transformation of the image as well as the corresponding center lat/lng. However, the new image can have a different rotation offset and aspect ratio.
I have been able to update the new frame of the image with the new size and update the transform with the new rotation offset, and verified it. It is when I try to calculate the new center point that I cannot get it to work. The following code is how I currently do it, and it works as long as the view is not rotated:
-(void)changeFromFloor:(int)oldFloorNr toFloor:(int)newFloorNr
{
    CGPoint centerPoint = CGPointMake(self.frame.size.width / 2, self.frame.size.height / 2);

    // This is the old, non-transformed center point.
    CGPoint oldCenterOnImage = [self.layer convertPoint:centerPoint toLayer:self.mapOverlayView.layer];

    // This point is verified to be the corresponding non-transformed center point.
    CGPoint newCenterOnImage = [self calculateNewCenterFor:oldCenterOnImage fromFloor:oldFloorNr toFloor:newFloorNr];

    // Change image: sets a new image and changes the frame of mapOverlayView.
    [self changeImageFromFloor:oldFloorNr toFloor:newFloorNr];

    // Adjust transformed rotation on the map if the new map has a different rotation.
    [self adjustRotationFromFloorNr:oldFloorNr toFloorNr:newFloorNr];

    CGPoint centerOfMapOverlay = CGPointMake((self.mapOverlayView.frame.size.width / 2),
                                             (self.mapOverlayView.frame.size.height / 2));
    CGPoint newCenterOnImageTransformed = CGPointApplyAffineTransform(newCenterOnImage, self.mapOverlayView.transform);

    CGFloat newCenterX = centerPoint.x + centerOfMapOverlay.x - newCenterOnImageTransformed.x;
    CGFloat newCenterY = centerPoint.y + centerOfMapOverlay.y - newCenterOnImageTransformed.y;

    // This only works without any rotation.
    self.mapOverlayView.center = CGPointMake(newCenterX, newCenterY);
}
Any idea where I am going wrong? I have been working on this problem for some days now and I cannot seem to figure it out.
Please let me know if I need to add something or if something is unclear.
Thanks!
Code added after help was given:
CGPoint centerOfMapOverlay = CGPointMake(
    self.mapOverlayView.bounds.size.width / 2,
    self.mapOverlayView.bounds.size.height / 2
);
centerOfMapOverlay = CGPointApplyAffineTransform(
    centerOfMapOverlay,
    self.mapOverlayView.transform
);
If you change the transform of a view, its frame property becomes undefined. You should instead use the center property to change the view's position and bounds.size to change its size.
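A Swift sketch of that advice applied to the code above, assuming (like the question's method) we are inside the container view so that bounds, mapOverlayView and newCenterOnImage are all available:

// Work from bounds (still valid under a transform) instead of frame,
// then move the view by assigning center.
let centerPoint = CGPoint(x: bounds.midX, y: bounds.midY)

var centerOfMapOverlay = CGPoint(x: mapOverlayView.bounds.midX,
                                 y: mapOverlayView.bounds.midY)
centerOfMapOverlay = centerOfMapOverlay.applying(mapOverlayView.transform)

let newCenterOnImageTransformed = newCenterOnImage.applying(mapOverlayView.transform)

mapOverlayView.center = CGPoint(
    x: centerPoint.x + centerOfMapOverlay.x - newCenterOnImageTransformed.x,
    y: centerPoint.y + centerOfMapOverlay.y - newCenterOnImageTransformed.y)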

How to set up camera to point at object

In my app I load models from different files (the format is the same) and they have different geometry: big, small, wide, etc. I have the object and camera positions hard-coded, and in some cases I don't see anything because the camera doesn't point at the object.
Maybe there is a way to normalise the model before adding it to the scene.
Update.
Using Moustach's answer I came up with the following solution:
// import object from file
SCNNode *object = [importer load:path];
object.position = SCNVector3Make(0, 0, 0);
[scene.rootNode addChildNode:object];
// create and add a camera to the scene
SCNNode *cameraNode = [SCNNode node];
cameraNode.camera = [SCNCamera camera];
// to avoid view clipping
cameraNode.camera.automaticallyAdjustsZRange = YES;
// set camera position to front of object
SCNVector3 sphereCenter;
CGFloat sphereRadius;
[object getBoundingSphereCenter:&sphereCenter radius:&sphereRadius];
cameraNode.position = SCNVector3Make(sphereCenter.x, sphereCenter.y, sphereCenter.z + 2 * sphereRadius);
[scene.rootNode addChildNode:cameraNode];
Works well for me.
You can calculate the bounding box of your mesh and scale it, based on the numbers you get, so that it ends up the same size as your other objects.
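A sketch of that normalisation in Swift (the helper name and the 1-unit target size are my own choices, not from the answer):

import SceneKit

// Scale a freshly imported node so its largest bounding-box dimension
// matches `targetSize`, so wildly different models end up comparable.
func normalize(_ node: SCNNode, toFit targetSize: Float = 1.0) {
    let (minVec, maxVec) = node.boundingBox
    let largest = max(maxVec.x - minVec.x,
                      maxVec.y - minVec.y,
                      maxVec.z - minVec.z)
    guard largest > 0 else { return }
    let factor = targetSize / largest
    node.scale = SCNVector3Make(factor, factor, factor)
}

Calling this right after the import and before adding the node to the scene keeps hard-coded camera distances usable across models.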

Sprite Kit - World not drawing sprites in correct position despite SKView with right dimensions

I am trying to draw a basic ground in the game for my sprite to run on.
But it seems that the ground is too short, although it is supposed to take up 1/3 of the height of the screen.
My GameScene.sks is already set to 568x320 (landscape, iPhone 5/5S).
This is my current code:
func initMainGround() {
    let gSize = CGSizeMake(self.size.width/4*3*2, 120);
    let ground = SKSpriteNode(color: SKColor.brownColor(), size: gSize);
    ground.name = gName; // Ground
    ground.position = CGPointMake(0, 0);
    ground.physicsBody = SKPhysicsBody(rectangleOfSize: gSize);
    ground.physicsBody.restitution = 0.0;
    ground.physicsBody.friction = 0.0;
    ground.physicsBody.angularDamping = 0.0;
    ground.physicsBody.linearDamping = 0.0;
    ground.physicsBody.allowsRotation = false;
    ground.physicsBody.usesPreciseCollisionDetection = true; // accurate collision
    ground.physicsBody.affectedByGravity = false;
    ground.physicsBody.dynamic = false;
    ground.physicsBody.categoryBitMask = gBitmask;  // 0x1 << 0
    ground.physicsBody.collisionBitMask = pBitmask; // 0x1 << 1, playerCategoryBitmask
    self.addChild(ground);
}
NSLog(String(self.size.height)) returns 320.0, which is perfectly fine.
But why is the SKSpriteNode drawn in the wrong place?
Setting the height of the ground to 320 only fills up half of the screen although the height of the screen in landscape is 320.
Like Jon said, this is a placement issue, not a size issue. The default anchor point of any given node is at its center, so you have two options here:
1) Set ground.position to CGPointMake(CGRectGetMidX(self.frame), CGRectGetMidY(self.frame))
(or even better yet, capture that as an ivar, because you'll be referring to it a whole lot when adding things to your screen, and there's no real reason to do the calculations dozens of times)
2) change the anchor point of the ground node. This is done as a CGPoint, but is interpreted as a percentage of the size of the node in question, with the default (center) being (0.5, 0.5). ground.anchorPoint = CGPointZero (which is just a shortcut for CGPointMake(0, 0)) will set the node's anchor point to its lower-left corner, at which point setting its position to (0,0) will correctly place it starting at the lower-left corner of your scene (or its parent node, in any event).
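Concretely, the two options look roughly like this inside initMainGround (pick one, not both; written in the same period Swift as the question's code):

// Option 1: keep the default (0.5, 0.5) anchor point and center the ground in the scene.
ground.position = CGPointMake(CGRectGetMidX(self.frame), CGRectGetMidY(self.frame));

// Option 2: anchor the ground at its lower-left corner, so position (0, 0)
// lines it up with the scene's lower-left corner.
ground.anchorPoint = CGPointZero;
ground.position = CGPointMake(0, 0);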

Cocos2d 2.0 - centering a sprite on a layer

I have a CCLayer subclass, and when this class inits it creates a CCSprite that should be centered, so that later, when I rotate an object created with that CCLayer class, it rotates around its center. I mean, if the sprite in that class is an image 200 pixels wide and 300 pixels high, I want the CCLayer pivot to be at 100, 150.
I have tried to set it at 0,0 and at 0.5,0.5 without success.
As far as I understand, CCLayer has no bounding box; it is a kind of node, right? So I create the class like this:
-(id) initWithImage:(UIImage*)image Name:(NSString*)name
{
    if( (self = [super init]) ) {
        self.isTouchEnabled = YES;
        self.mySprite = [CCSprite spriteWithCGImage:image.CGImage key:name];
        self.mySprite.position = CGPointZero;
        [self addChild:self.mySprite];
        self.mySprite.anchorPoint = ccp(0.0f, 0.0f);
        // have also tried 0.5f, 0.5f... no success
    }
    return self;
}
How do I do that?
Thanks!
Provide a method in your CCLayer subclass to rotate the sprite:
-(void) rotateMySpriteToAngle:(float)angle
{
    self.mySprite.rotation = angle;
}
The anchor point of the sprite should be (0.5, 0.5) to rotate it about its centre.
I feel you may be making your program more complex than it needs to be, though. Could you just use a sprite instead of a layer with a sprite as a child? Then you could rotate it directly.
It looks as though you want to make your sprite touchable. Consider using CCMenu and CCMenuItems if you are looking to implement buttons.
Edit
Try setting the anchor point of the layer to (0, 0) and the anchor point of the sprite to (0.5, 0.5), then set the position of the sprite to (0, 0).
This means the centre of the sprite is at (0, 0) on the layer, and you then rotate the layer around its origin.
Scene
=============================================
=                                           =
=                                           =
=                                           =
=                                           =
=          |                                =
=          | Layer (effective infinite size)=
=        __|__                              =
=       |  |  |                             =
=       |  +--|--------------               =
=       |_____|                             =
=         Sprite                            =
=============================================
The + is the origin of the layer and the center point of the sprite.
When you rotate the layer around its origin, you simultaneously rotate the sprite about its centre.
