Applying a 3D Transform After Pinching - iOS

I am trying to apply a 3D transform after scaling an image with a pinch gesture recognizer. When the 3D transform is applied, the image is rescaled to its default size (i.e. its size before the pinch). How can I stop the image view from reverting to its previous state (i.e. before the pinch)?
var transform = CATransform3DIdentity
transform.m34 = 1.0 / 500.0  // add perspective
transform = CATransform3DRotate(transform, CGFloat(145.0 * Double.pi / 180.0), 0, 1, 0)
viewToDelete.layer.transform = transform  // this is where the view snaps back to its unpinched size
func handlePinch(_ nizer: UIPinchGestureRecognizer) {
    // Accumulate the pinch scale into the view's affine transform
    nizer.view!.transform = nizer.view!.transform.scaledBy(x: nizer.scale, y: nizer.scale)
    nizer.scale = 1
}
In the image (not reproduced here), the leftmost view is the newly created view, the view next to it has been scaled with the pinch, and the third one below shows the result after the 3D transform.
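One likely fix, sketched under the assumption that the pinched view is viewToDelete: build the rotation on top of the layer's current transform instead of resetting to CATransform3DIdentity, so the scale accumulated by the pinch is preserved.

// Sketch: start from the layer's current transform to keep the pinch scale.
// The 145-degree angle and the 1/500 perspective come from the code above.
var transform = viewToDelete.layer.transform  // still carries the pinched scale
transform.m34 = 1.0 / 500.0
transform = CATransform3DRotate(transform, CGFloat(145.0 * Double.pi / 180.0), 0, 1, 0)
viewToDelete.layer.transform = transform

This works because the pinch handler writes its scale into the view's affine transform, which the layer mirrors in layer.transform.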

Related

How to translate X-axis correctly from VNFaceObservation boundingBox (Vision + ARKit)

I'm using both ARKit & Vision, following along with Apple's sample project, "Using Vision in Real Time with ARKit", so I am not setting up my camera myself; ARKit handles that for me.
Using Vision's VNDetectFaceRectanglesRequest, I'm able to get back a collection of VNFaceObservation objects.
Following various guides online, I'm able to transform the VNFaceObservation's boundingBox to one that I can use on my ViewController's UIView.
The Y-axis is correct when placed on my UIView in ARKit, but the X-axis is completely off & inaccurate.
// face is an instance of VNFaceObservation
// Flip the y-axis: Vision's normalized origin is bottom-left, UIKit's is top-left
let transform = CGAffineTransform(scaleX: 1, y: -1).translatedBy(x: 0, y: -view.frame.height)
// Scale the normalized bounding box up to view points
let translate = CGAffineTransform.identity.scaledBy(x: view.frame.width, y: view.frame.height)
let rect = face.boundingBox.applying(translate).applying(transform)
What is the correct way to display the boundingBox on the screen (in ARKit/UIKit) so that the X & Y axis match up correctly to the detected face rectangle? I can't use self.cameraLayer.layerRectConverted(fromMetadataOutputRect: transformedRect) since I'm not using AVCaptureSession.
Update: Digging into this further, the camera's image is 1920 x 1440. Most of it is not displayed on ARKit's screen space. The iPhone XS screen is 375 x 812 points.
After I get Vision's observation boundingBox, I've transformed it to fit the current view (375 x 812). This isn't working since the actual width seems to be 500 (the left & right sides are out of the screen view). How do I CGAffineTransform the CGRect bounding box (seems like 500x812, a total guess) from 375x812?
The key piece missing here is ARFrame's displayTransform(for:viewportSize:); see Apple's documentation for it.
This function will generate the appropriate transform for a given frame and viewport size (the CGRect of the view you're displaying the image and bounding box in).
func visionTransform(frame: ARFrame, viewport: CGRect) -> CGAffineTransform {
    let orientation = UIApplication.shared.statusBarOrientation
    // Maps normalized image coordinates into normalized viewport coordinates
    let transform = frame.displayTransform(for: orientation,
                                           viewportSize: viewport.size)
    // Scales normalized coordinates up to viewport points
    let scale = CGAffineTransform(scaleX: viewport.width,
                                  y: viewport.height)
    // Flip the axis that Vision and UIKit disagree on for this orientation
    var t = CGAffineTransform.identity
    if orientation.isPortrait {
        t = CGAffineTransform(scaleX: -1, y: 1)
        t = t.translatedBy(x: -viewport.width, y: 0)
    } else if orientation.isLandscape {
        t = CGAffineTransform(scaleX: 1, y: -1)
        t = t.translatedBy(x: 0, y: -viewport.height)
    }
    return transform.concatenating(scale).concatenating(t)
}
You can then use this like so:
let transform = visionTransform(frame: yourARFrame, viewport: yourViewport)
let rect = face.boundingBox.applying(transform)

Scale UIView upwards

I have a UIView that I want to scale to double its size. I've tried these two commands:
self.bottomView.transform = CGAffineTransform(translationX: 0, y: -125)
This one moves to the point I want, but it moves the entire view, so I get a gap at the bottom (125 is my original height).
self.bottomView.transform = CGAffineTransform(scaleX: 1, y: 2)
This one stretches the view, but it stretches both ways, up and down. I want it to stretch only upwards along the Y-axis, not in both directions.
Which one should I continue with? Is there any way to choose which way the view stretches? Furthermore, scaleX:y: stretches the subviews as well, which isn't ideal for my case.
I think you can use the API below:
CGAffineTransformScale(CGAffineTransform t, CGFloat sx, CGFloat sy)
Update only one axis, X or Y, with recognizer.scale and keep the other at 1.0 to scale in a single direction.
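If the view must grow upward only, another option (an assumption on my part, not part of the answer above) is to pin the bottom edge by moving the layer's anchor point there before scaling; a minimal sketch, assuming the bottomView from the question:

// Move the anchor point to the bottom edge, compensating the layer position
// so the view does not visually jump, then scale vertically.
let layer = bottomView.layer
let oldAnchorY = layer.anchorPoint.y
layer.anchorPoint = CGPoint(x: 0.5, y: 1.0)
layer.position.y += (1.0 - oldAnchorY) * layer.bounds.height
bottomView.transform = CGAffineTransform(scaleX: 1, y: 2)  // now grows upward only

Note this still scales the subviews; if that is a problem, animating the frame as in the answer below avoids it.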
Here is a simple solution that may help you: just animate the stretching operation on the UIView.
UIView.animate(withDuration: 0.2) {
    let height = self.bottomView.frame.size.height
    self.bottomView.frame.size.height = height * 2
    self.bottomView.frame.origin.y -= height  // move the origin up so the view grows upwards
}

How to find the X, Y and rotation values of a UIView after a CATransform3D is applied

I have a UIView on my screen. I am applying layer.transform to that view, with translation and rotation driven by the user's tap movements via tap and rotation gestures. In the end I want to retrieve the final x and y position, and the rotation, separately. I could not find any post here on getting that information from the transform. Can anyone help with this?
Here is the code I am using to apply the transform.
var transform = CATransform3DIdentity
transform = CATransform3DTranslate(transform, displacementX, displacementY, 1.0)
transform = CATransform3DRotate(transform, gesture.rotation, 0, 0, 1.0)
self.currentItem.imageView.layer.transform = transform
Please refer to the following code.
To apply the transform:
let degrees = 90.0
let radians = CGFloat(degrees * Double.pi / 180)
sampleView.layer.transform = CATransform3DMakeRotation(radians, 0.0, 0.0, 1.0)
To get the rotation angle after the transform:
let radiansFromSampleView = atan2(sampleView.transform.b, sampleView.transform.a)
let degreesFromSampleView = CGFloat(180 * Double(radiansFromSampleView) / Double.pi)
The x and y positions you can take directly from the view's frame, even after the transform.
Hope this helps.
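If you would rather read the values out of the CATransform3D itself, a sketch, assuming the translate-then-rotate order from the question: the translation lives in m41/m42 and the z-rotation in the upper-left 2x2 block.

// m41/m42 hold the x/y translation of the layer's 3D transform;
// atan2 over the rotation block recovers the z-rotation in radians.
let t = currentItem.imageView.layer.transform
let x = t.m41
let y = t.m42
let rotation = atan2(t.m12, t.m11)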

SceneKit: use transform or directly manipulate rotation/position properties if goal is to rotate camera and project node X units in front of camera?

For a voxel art app, the goal is to let users move and rotate a camera in a SceneKit scene then tap to place a block.
The code below lets a user rotate a camera by panning. After the gesture ends, we move an existing block so it is -X units on the camera's Z-axis (i.e., -X units in front of the camera).
cameraNode is the scene's point of view and is a child of userNode. When the user moves a joystick, we update the position of userNode.
Question: Other SO posts manipulate camera nodes by applying a transform instead of changing the rotation and position properties. Is one approach better than the other?
func sceneViewPannedOneFinger(sender: UIPanGestureRecognizer) {
    // Get pan distance & convert to radians
    let translation = sender.translation(in: sender.view!)
    var xRadians = GLKMathDegreesToRadians(Float(translation.x))
    var yRadians = GLKMathDegreesToRadians(Float(translation.y))
    // Dampen and add to the rotation saved from the previous gesture
    xRadians = (xRadians / 6) + curXRadians
    yRadians = (yRadians / 6) + curYRadians
    // Limit yRadians to prevent rotating 360 degrees vertically
    yRadians = max(-Float.pi / 2, min(Float.pi / 2, yRadians))
    // Set rotation values to avoid gimbal lock
    cameraNode.rotation = SCNVector4(x: 1, y: 0, z: 0, w: yRadians)
    userNode.rotation = SCNVector4(x: 0, y: 1, z: 0, w: xRadians)
    // Save values for the next rotation
    if sender.state == .ended {
        curXRadians = xRadians
        curYRadians = yRadians
    }
    // Set preview block
    setPreviewBlock()
}
@discardableResult
private func setPreviewBlock(futurePosition: SCNVector3 = SCNVector3Zero, reach: Float = 8) -> SCNVector3 {
    // Get future position (Swift 3+ removed `var` parameters, so shadow it locally)
    var futurePosition = futurePosition
    if SCNVector3EqualToVector3(futurePosition, SCNVector3Zero) {
        futurePosition = userNode.position
    }
    // Get current position after accounting for rotations
    let hAngle = Float(cameraNode.rotation.w * cameraNode.rotation.x)
    let vAngle = Float(userNode.rotation.w * userNode.rotation.y)
    var position = getSphericalCoords(hAngle, t: vAngle, r: reach)
    position += userNode.position  // `+=` and `rounded()` are SCNVector3 helpers defined elsewhere in the project
    // Snap position to grid
    position = position.rounded()
    // Ensure the preview block never dips below the floor
    position.y = max(0, position.y)
    // Return if the snapped position hasn't changed
    if SCNVector3EqualToVector3(position, previewBlock.position) {
        return position
    }
    // Otherwise, animate the preview block to its new position
    SCNTransaction.begin()
    SCNTransaction.animationDuration = AnimationTime
    previewBlock.position = position
    SCNTransaction.commit()
    return position
}
func getSphericalCoords(_ s: Float, t: Float, r: Float) -> SCNVector3 {
    return SCNVector3(-(cos(s) * sin(t) * r),
                      sin(s) * r,
                      -(cos(s) * cos(t) * r))
}
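For comparison with the transform-based approach the question asks about, a hedged sketch (not from the original post): on iOS 11+ SceneKit exposes the camera's world-space basis directly, which can replace the spherical math above.

// Position a node `reach` units in front of the camera using the
// presentation node's world-space position and forward (-Z) vector.
func previewPosition(from cameraNode: SCNNode, reach: Float) -> SCNVector3 {
    let cam = cameraNode.presentation
    let target = cam.simdWorldPosition + cam.simdWorldFront * reach
    return SCNVector3(target.x, target.y, target.z)
}

Either style works; rotation/position properties are easier to clamp (e.g. the vertical limit above), while transforms compose more cleanly as the node hierarchy gets deeper.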

Skew image in Objective-C

I want to skew an image in Objective-C, like the effect pictured below. Is this possible using CGAffineTransform? I'm not sure how to achieve such an effect; so far I've only managed to rotate or scale.
You can do it using the transform property of a layer, since a layer's transform is a 3D transform. In the code below, the anchor point of the layer is moved to the left edge, and then the 3D transform is applied. Note that self.anim3DView is just a standard UIImageView.
// Move the anchor point to the left edge once, adjusting the position so the layer stays put
if ( self.anim3DView.layer.anchorPoint.x > 0.0 )
{
    CGPoint position = self.anim3DView.layer.position;
    position.x -= self.anim3DView.layer.bounds.size.width / 2.0;
    self.anim3DView.layer.anchorPoint = CGPointMake( 0.0, 0.5 );
    self.anim3DView.layer.position = position;
}

// Apply a perspective rotation around the y-axis
CATransform3D t = CATransform3DIdentity;
t.m34 = -0.005;
t = CATransform3DRotate( t, M_PI / 6.0, 0.0, 1.0, 0.0 );
self.anim3DView.layer.transform = t;
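On the literal CGAffineTransform question: a flat shear is possible in 2D by setting the off-diagonal c component, but it produces no perspective foreshortening; the receding effect in the answer above genuinely needs the 3D m34 route. A minimal sketch (in Swift, matching the rest of the page; imageView is a stand-in for anim3DView):

// Affine shear: `c` tilts vertical edges sideways; parallel lines stay parallel.
let shear = CGAffineTransform(a: 1, b: 0, c: 0.3, d: 1, tx: 0, ty: 0)
imageView.transform = shear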
