Applying rotation to a group of nodes sometimes flips the nodes - iOS

I am trying to rotate a group of objects in the scene by applying a rotation to each node. The rotation part of the code is as simple as:
for node in sceneView.scene.rootNode.childNodes {
    node.localRotate(by: SCNVector4(0, 1, 0, Double.pi / 180))
}
The expected result is that each node rotates by 1 degree every time this is triggered. However, I have noticed that the resulting nodes sometimes have a reversed y value in their rotation (for example, the rotation axis was -1 in the previous frame but suddenly becomes +1 in the next frame), which produces something that looks like an optical illusion. It seems to happen roughly every other frame. I tried different axes, but the same thing occurs. I have tried both localRotate and rotate, and the problem still exists. Is this supposed to happen?
Here is a video link demonstrating the problem:
https://res.cloudinary.com/df7kpyhrg/video/upload/v1616634641/RPReplay_Final1616634008_ohvg1k.mp4

I am still not 100% sure, but I think the problem was that I needed to use the simd version of the rotation methods, i.e. simdRotate instead of rotate. I believe the flips in the video come from how the rotation angle is stored, which makes each change look like a reflection. To give a clearer example, suppose the current angle is 0 and it increments in steps of 0.1 radian; the stored values will be 0, -0.1, 0.1, -0.2, 0.2 and so on, so the flipped frames are those intermediate values being rendered. This Stack Overflow post explains the difference between the simd and non-simd methods nicely:
What does SIMD mean?
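For reference, here is a minimal sketch of the simd-based rotation, using simdLocalRotate(by:) (the simd counterpart of the localRotate(by:) call from the question). It assumes the same sceneView as in the question:
import SceneKit

// Rotate every top-level child node by 1 degree around the y-axis using the
// simd quaternion API instead of the SCNVector4-based axis/angle call.
let oneDegree = Float.pi / 180
let yRotation = simd_quatf(angle: oneDegree, axis: simd_float3(0, 1, 0))

for node in sceneView.scene.rootNode.childNodes {
    node.simdLocalRotate(by: yRotation)
}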

Related

How to temporarily freeze a node in front of the camera using ARKit, SceneKit in Swift

I built a complete structure as a node (with its child nodes) and the user will walk through it using ARKit.
At some point, if the user cannot continue because of a real obstacle in the real world, I added a "pause" button which should freeze whatever the user currently sees in front of the camera. The user can then move freely to some other open space, and when they release the pause button they will be able to resume where they left off (only someplace else in the real world).
A while ago I asked about it in the Apple Developer forum and an Apple Frameworks Engineer gave the following reply:
For "freezing" the scene, you could transform the anchor's position (in world coordinates) to camera coordinates, and then anchor your content to the camera. This will give you the effect that the scene is "frozen", i.e., does not move relative to the camera.
I'm currently not using an anchor because I don't necessarily need to find a flat surface. Rather, my node is placed at a certain position relative to where we start at (0,0,0).
My question is how do I exactly do what the Apple engineer told me to do?
I have the following code which I'm still stuck with. When I add the node to the camera (pointOfView, last line of the code below), it does freeze in place, but I can't get it to freeze in the same position and orientation as it was before it was frozen.
@IBAction func pauseButtonClicked(_ sender: UIButton) {
    let currentPosition = sceneView.pointOfView?.position
    let currentEulerAngles = sceneView.pointOfView?.eulerAngles

    var internalNodeTraversal = lastNodeRootPosition - currentPosition! // for now, lastNodeRootPosition is (0,0,0)
    internalNodeTraversal.y = lastNodeRootPosition.y + 20 // just so it's positioned a little higher in front of the camera

    myNode?.removeFromParentNode() // remove the node from the real-world view. Looks like this line has no effect and just adding the node as a child to the camera (pointOfView) is enough, but it feels more right to do this anyway.
    myNode?.position = internalNodeTraversal // the whole node is moved respectively in the opposite direction from the root to where I'm standing, to reposition the camera in my current position inside the node

    // myNode?.eulerAngles = (currentEulerAngles! * -1) -- this put the whole node in weird positions so I removed it
    myNode?.eulerAngles.y = currentEulerAngles!.y * -1 // opposite orientation of the node so the camera will be oriented in the same direction
    myNode?.eulerAngles.x = 0.3 // just tilting it up a little bit to have a better view, more similar to the view as before it was locked to the camera
    // I don't think I need to change the eulerAngles.z

    myNode!.convertPosition(internalNodeTraversal, to: sceneView.pointOfView) // I'm not sure I wrote this correctly. Also, this line doesn't seem to change anything

    sceneView.pointOfView?.addChildNode(myNode!) // attaching the node to the camera so it will remain stuck while the user moves around until the button is released
}
So I first calculate where in the node I'm currently standing and then I change the position of the node in the opposite direction so that the camera will now be in that position. That seems to be correct.
Now I need to change the orientation of the node so that it will point in the right direction and here things get funky. I've been trying so many things for days now.
I use the eulerAngles for the orientation. If I set the whole vector multiplied by -1, it would show weird orientations. I ended up only using the eulerAngles.y which is the left/right orientation and I hardcoded the x orientation (up/down).
Ultimately what I have in the code above is the closest that I was able to get. If I'm pointing straight, the freeze will be correct. If I turn just a little bit, the freeze will be pretty close as well. Almost the same as what the user saw before the freeze. But the more I turn, the more the frozen image is off and more slanted. At some point (say I turn 50 or 60 degrees to the side) the whole node is off the camera and cannot be seen.
Somehow I have a feeling that there must be an easier and more correct way to achieve the above.
The Apple engineer wrote to "transform the anchor's position (in world coordinates) to camera coordinates". For that reason I added the "convertPosition" function in my code, but a) I'm not sure I used it correctly and b) it doesn't seem to change anything in my code if I have that line or not.
What am I doing wrong?
Any help would be very much appreciated.
Thanks!
I found the solution!
Actually, part of the problem was something I hadn't even described, because I didn't think it was relevant. I built the AR nodes 2 meters in front of the origin (z = -2), while the center of my node was still at the origin. So when I changed the rotation or eulerAngles, the node rotated around the origin, which swept my nodes along a large curve and in fact also changed their position as a result.
The solution was to use a simdPivot. Instead of changing the position and rotation of the node itself, I created a translation matrix and a rotation matrix located at the point of the camera (where the user is standing) and then multiplied both matrices. When I then add the node as a child of the camera (pointOfView), this freezes the image and in effect shows exactly what the user was seeing before it was frozen, since the position is the same and the rotation is exactly around the user's standing position.
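As a rough sketch of that idea (not the exact code from the answer): assuming myNode and sceneView are the same objects as in the question, and with the helper name and matrix order chosen purely for illustration, it could look something like this:
func freezeNode(_ node: SCNNode, inFrontOf pointOfView: SCNNode) {
    // Translation matrix placed at the camera's (user's) current world position.
    let cameraPosition = pointOfView.simdWorldPosition
    var translation = matrix_identity_float4x4
    translation.columns.3 = simd_float4(cameraPosition.x, cameraPosition.y, cameraPosition.z, 1)

    // Rotation matrix taken from the camera's current world orientation.
    let rotation = simd_float4x4(pointOfView.simdWorldOrientation)

    // Multiply both matrices and use the result as the node's pivot,
    // so the node rotates around the user's standing position.
    node.simdPivot = translation * rotation

    // Attach the node to the camera so it stays frozen in view.
    node.removeFromParentNode()
    pointOfView.addChildNode(node)
}
It would then be called as freezeNode(myNode!, inFrontOf: sceneView.pointOfView!) from the pause button handler.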

Vary speed of object moving through SKAction.followPath (iOS)

Is it possible to vary the speed of an object moving because of SKAction.followPath? For instance, let's say we use the code below to have a ball follow a rectangular path. The code will use a constant speed throughout the path. But what if we want to vary the speed of the object along the rectangle?
let goPath = SKAction.followPath(ballPath!.CGPath, duration: 2.5)
movingBall.runAction(goPath)
Is the only option to effectively have the ball follow a rectangular path built of separate lines with different speeds (as opposed to one path)?
Thanks!
There are two ways of accomplishing this.
FIRST WAY:
You can use SKAction.speedBy, grouped with your followPath action. Example:
movingBall.runAction(SKAction.group([
    SKAction.followPath(ballPath!.CGPath, duration: 2.5),
    SKAction.speedBy(4, duration: 2.5)
]))
Now the ball will ramp up to 4 times its speed over 2.5 seconds. NOTE: we're no longer honoring the followPath duration; as time passes we're increasing the speed of the sprite, so it will reach its destination in less than 2.5 seconds.
SECOND WAY:
The other way is to use a timing function. This is probably the better way to go because we have a lot more control over how fast the ball is going to move during the animation.
example:
let goPath = SKAction.followPath(ballPath!.CGPath, duration: 2.5)
goPath.timingFunction = { t in
    return powf(t, 3)
}
movingBall.runAction(goPath)
The way a timing function works is that you provide a block that receives the animation's normalized elapsed time as a float between 0 and 1. It's your job to transform that value with some kind of computation and return the result.
I'm just cubing whatever time comes into the function and returning it. This makes it so the ball starts off very slow, and then accelerates very quickly towards the end of the animation.
You can get really creative and use sin waves to make things bounce, etc. Just note that any returned values <0 are ignored.
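For instance, a hypothetical timing function that makes the ball repeatedly speed up and slow down along the path (the 0.05 amplitude is just an illustrative value, chosen small enough that the function keeps increasing and never returns a negative value):
goPath.timingFunction = { t in
    // Oscillate around linear progress: starts at 0, ends at 1.
    return t - 0.05 * sinf(t * Float.pi * 4)
}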
With the followPath SKAction this isn't possible.
What you could do, though, is use the scene's update method to update the node's position manually.
This takes a bit more work to make it follow a path, but it gives you a lot more control than is available with SKAction.
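A rough illustration of that manual approach, written in current Swift syntax (the waypoints, speeds, and class name below are made-up placeholders rather than anything from the original answer):
import SpriteKit

class BallScene: SKScene {
    let movingBall = SKShapeNode(circleOfRadius: 10)
    // Corners of the rectangular path, visited in order.
    let waypoints: [CGPoint] = [CGPoint(x: 0, y: 0), CGPoint(x: 200, y: 0),
                                CGPoint(x: 200, y: 100), CGPoint(x: 0, y: 100)]
    // A different speed (points per second) for each side of the rectangle.
    let speeds: [CGFloat] = [120, 60, 120, 60]
    var segment = 0
    var lastUpdateTime: TimeInterval = 0

    override func didMove(to view: SKView) {
        movingBall.position = waypoints[0]
        addChild(movingBall)
    }

    override func update(_ currentTime: TimeInterval) {
        let dt = lastUpdateTime == 0 ? 0 : CGFloat(currentTime - lastUpdateTime)
        lastUpdateTime = currentTime

        let target = waypoints[(segment + 1) % waypoints.count]
        let dx = target.x - movingBall.position.x
        let dy = target.y - movingBall.position.y
        let distance = (dx * dx + dy * dy).squareRoot()
        let step = speeds[segment % speeds.count] * dt

        if distance <= step {
            // Reached a corner: snap to it and start the next side.
            movingBall.position = target
            segment = (segment + 1) % waypoints.count
        } else {
            // Otherwise move toward the corner at this side's speed.
            movingBall.position.x += dx / distance * step
            movingBall.position.y += dy / distance * step
        }
    }
}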

cvCalcOpticalFlowPyrLK not working as expected

I know this is a well documented problem but I didn't manage to find a satisfactory solution online. Here goes.
I am using cvCalcOpticalFlowPyrLK to track the motion of feature points. I find the feature points with cvGoodFeaturesToTrack and refine them with cvFindCornerSubPix. I find the feature points in my first frame (the reference frame) and use LK to track the movement of these points with respect to the reference frame. I update the points with the current frame's feature point coordinates when they are found. Here's what I observed:
1) The number of good feature points found by cvGoodFeaturesToTrack is very small. I asked for 100 points but I always get fewer than 10.
2) The number of feature points drops by about 50 percent after 5-6 frames, and by another 50 percent 5 frames later, even when the subject is not moving. The tracking is also patchy, in the sense that some of the points are tracked correctly but some are way off.
I have seen demo applications on YouTube and in iPhone apps, and they do not show the frame-to-frame drop-off in feature points that I see in my application, so I suspect the parameters I set might be wrong.
This is how I call the functions:
cvGoodFeaturesToTrack(
    image,
    eigen_image,
    temp_image,
    corners_point,
    &corner_count,
    0.01,  /* quality level */
    3,     /* min distance */
    0,
    10,    /* block size */
    0,     /* use Harris */
    0.04); /* k */
cvFindCornerSubPix(
    image,
    cornersPoint,
    corner_count,
    cvSize(WINDOW_SIZE, WINDOW_SIZE),
    cvSize(-1, -1),
    cvTermCriteria(CV_TERMCRIT_ITER | CV_TERMCRIT_EPS, 20, 0.3));
cvCalcOpticalFlowPyrLK(
    image,
    currentFrame,
    rpV->pyramid_images0,
    rpV->pyramid_images1,
    cornersPoint,
    cornersCurrent,
    corner_count,
    cvSize(WINDOW_SIZE, WINDOW_SIZE),
    10,    /* level */
    features_found,
    feature_errors,
    cvTermCriteria(CV_TERMCRIT_ITER | CV_TERMCRIT_EPS, 20, 0.3),
    0);
Another thing is that I am using a greyscale camera (an infrared camera). I don't think it matters too much, though. I am wondering if I am missing anything important here.
Any form of help is much appreciated.
Thanks,
Kelvin
There are a few issues:
Calling cvFindCornerSubPix does not help if the features you are tracking don't look like corners of a checkerboard.
Use of a pyramid is appropriate only if the apparent motion is larger than the window size from frame to frame, for a reasonable window size.
Hard to tell why you are not getting enough good features to track without seeing your imagery. Perhaps it's rather blurry?

Smooth Rotation using Bullet and Ogre3D

I've been struggling with an issue orienting characters in a game I'm implementing using Ogre3D and Bullet physics.
What I have: A Direction Vector that the character is moving in, along with its current orientation.
What I need: To set the orientation of the character to face the way it is moving.
I have a snippet of code that sort of does what I want:
btTransform src = body->getCenterOfMassTransform();
btVector3 up = BtOgre::Convert::toBullet(Ogre::Vector3::UNIT_X);
btVector3 normDirection = mDirection.normalized();
btScalar angle = acos(up.dot(normDirection));
btVector3 axis = up.cross(normDirection);
src.setRotation(btQuaternion(axis, angle));
body->setCenterOfMassTransform(src);
Where 'body' is the rigidbody I'm trying to orient.
This snippet has a couple of problems however:
1) When changing direction, it tends to 'jitter', i.e. it rapidly faces one way, then the opposite way, for a second or so before settling into the orientation it is supposed to have.
2) Most times the code runs, I get an assertion error from Bullet's btQuaternion on
assert(d != btScalar(0.0));
Can anyone help?
Thanks!
I think you shouldn't use functions like acos for such things, as they can cause inconsistencies in border cases such as the 180 vs 0 degree rotation, and you can also get high numerical error from them. (The assertion you are hitting most likely fires when normDirection is parallel to the up vector: the cross product is then the zero vector, and btQuaternion's axis/angle constructor asserts that the axis has a non-zero length.)
The second thing is that, in general, you should avoid setting explicit positions and rotations in physics engines, and instead apply forces and torques to make your body move the way you want. Your current approach may work perfectly now, but when you add another object and force your character to occupy the same space, your simulation will explode. At that stage it is very hard to fix, so it's better to do it right from the start :).
I know that finding the correct force/torque can be tricky, but it's the best way to keep your simulation consistent.

EmguCV Shape Detection affected by Image Size

I'm using the Emgu shape detection example application to detect rectangles on a given image. The dimensions of the resized image appear to impact the number of shapes detected even though the aspect ratio remains the same. Here's what I mean:
Using (400,400), actual img size == 342,400
Using (520,520), actual img size == 445,520
Why is this so? And how can the optimal value be determined?
Thanks
I replied to your post on EMGU but figured you haven't checked back, so here it is. The shape detection works on the principle of thresholding unlikely matches, which prevents a lot of false classifications. This is true for many image processing algorithms. Basically, there is no perfect setting, and a designer must select the most appropriate settings to produce the most desirable results, i.e. match the most objects without reporting more than there actually are.
You will need to adjust each variable individually to see what kind of results you get. Start off with the edge detection:
Image<Gray, Byte> cannyEdges = gray.Canny(cannyThreshold, cannyThresholdLinking);
Have a look at your smaller image and see what the difference is between the rectangles that are detected and the one that isn't. You could be missing an edge or a corner, which is why it's not classified. If you are, adjust cannyThreshold and observe the results: if they are good, keep it :), if bad :(, go back to the original value. Once satisfied, adjust cannyThresholdLinking and observe.
Keep repeating this until you get a preferred result. The advantage here is that you have three items to compare; continue until the item that's not being recognised matches the other two.
If the results are similar (likely, as it is a black and white image), you'll need to move on to the Hough lines detection:
LineSegment2D[] lines = cannyEdges.HoughLinesBinary(
    1,              //Distance resolution in pixel-related units
    Math.PI / 45.0, //Angle resolution measured in radians
    20,             //threshold
    30,             //min line width
    10              //gap between lines
    )[0];           //Get the lines from the first channel
Use the same method of adjusting one value at a time and observing the output, and you will hopefully find the settings you need. Never jump in with both feet and change all the values at once, as you will never know whether you're improving the accuracy or not. Finally, if all else fails, look at the section that inspects the Hough results for a rectangle:
if (angle < 80 || angle > 100)
{
    isRectangle = false;
    break;
}
There are fewer variables to change here, as Hough should do most of the work for you, but things could still work out at this stage.
I'm sorry that there is no straightforward answer, but I hope you keep at it and solve the problem. Otherwise, you could always resize the image each time.
Cheers
Chris
