How to move the camera in 3D space while an animation plays in Manim

I just want to move a sphere in a circle while showing it from different angles.
First from front, then from a side.
I tried this:
from manim import *

class ThreeDCameraRotation(ThreeDScene):
    def construct(self):
        self.camera.background_color = WHITE
        self.set_camera_orientation(phi=0 * DEGREES, theta=0 * DEGREES)
        axes = ThreeDAxes()
        circle = Circle(radius=1, color=RED)
        self.add(circle, axes)
        sphere = Sphere(radius=0.1, color=RED).shift(RIGHT)
        # completed the setup
        self.play(MoveAlongPath(sphere, circle), run_time=3, rate_func=linear)
        # circular motion
        self.move_camera(phi=90 * DEGREES, theta=0 * DEGREES, run_time=2)
        # camera movement
        self.wait()
        self.move_camera(phi=0 * DEGREES, theta=0 * DEGREES)
        # again camera movement
        self.wait()
But the problem is that the camera angle only changes after the circular motion has finished, whereas I want the sphere to keep moving while it is shown from different angles. How can I do this? The purpose is to show simple harmonic motion.

There are two ways you can do this:
The Easy Way:
The ThreeDScene has an effect called ambient camera rotation. You use it in three steps:
Call the function self.begin_ambient_camera_rotation(rate, about).
This starts the camera rotating at the rate specified (in radians per second). There are three options for about: “theta”, “phi”, or “gamma”.
Play whatever animations you want. The camera will keep rotating while these animations play.
Call self.stop_ambient_camera_rotation(about) and the camera will stop rotating.
In your case, you should do something like
self.begin_ambient_camera_rotation(90*DEGREES/3, about='phi')
self.play(MoveAlongPath(sphere, circle), run_time=3, rate_func=linear)
self.stop_ambient_camera_rotation(about='phi')
The Way With More Control:
If you want more control over the initial and final position of the camera, or if you don't want to figure out the rates of rotation, you can use ValueTrackers. The camera has a set of trackers that control its position, and you can access them with
self.camera.get_value_trackers()
which returns a list with trackers for phi, theta, the focal distance, gamma, and the distance to origin. You can do
self.play(tracker.animate.set_value(some_value), Some_other_animation())
to have the animation play at the same time as the camera moves to the specified position.
Try tweaking this example to see how these trackers work:
class CameraTest(ThreeDScene):
    def construct(self):
        phi, theta, focal_distance, gamma, distance_to_origin = self.camera.get_value_trackers()
        self.add(ThreeDAxes())
        self.wait()
        self.play(phi.animate.set_value(50 * DEGREES))
        self.play(theta.animate.set_value(50 * DEGREES))
        self.play(gamma.animate.set_value(1))
        self.play(distance_to_origin.animate.set_value(2))
        self.play(focal_distance.animate.set_value(25))
        self.wait()

Related

How to zoom to fit 3D points in the scene to screen?

I store my 3D points (many points) in a TGLPoints object. There is no other object in the scene than the points. When drawing the points, I would like to fit them to the screen so they do not look too far away or too close. I tried TGLCamera.ZoomAll with no success, and also the solution given here, which adjusts the camera location, depth of view and scene scale:
objSize := YourCamera.TargetObject.BoundingSphereRadius;
if objSize > 0 then begin
  if objSize < 1 then begin
    GLCamera.SceneScale := 1 / objSize;
    objSize := 1;
  end
  else GLCamera.SceneScale := 1;
  GLCamera.AdjustDistanceToTarget(objSize * 0.27);
  GLCamera.DepthOfView := 1.5 * GLCamera.DistanceToTarget + 2 * objSize;
end;
The points did not appear on the screen this time.
What should I do to fit the 3D points to screen?
For each point, build a scale factor from the length of the vector from the point's position to the camera position. Then use this scale to build the transformation matrix that you apply to the camera matrix. If the scale is large, the point is far away, and you apply a reverse translation to bring it into close proximity. To compute the translation vector, use the following formula:
translation vector = translation vector ± (abs(scale) / 2)
Choose + or - based on the scale magnitude: if the point is too far from the camera, choose - in the equation above.
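The answer above is terse, so here is a hedged sketch of a simpler, common approach to the fit-to-screen problem: compute a rough bounding sphere of the points and place the camera far enough back that the sphere fills the vertical field of view. This is plain Python geometry, not GLScene API; all names are illustrative.

```python
import math

def camera_distance_to_fit(points, fov_deg):
    """Distance from the bounding-sphere centre at which a camera with the
    given vertical field of view sees every point, assuming the camera
    looks straight at the centre."""
    # cheap bounding sphere: centroid as centre, farthest point as radius
    n = len(points)
    cx = sum(p[0] for p in points) / n
    cy = sum(p[1] for p in points) / n
    cz = sum(p[2] for p in points) / n
    radius = max(math.dist((cx, cy, cz), p) for p in points)
    # a sphere of radius r is tangent to the view cone at distance r / sin(fov/2)
    half = math.radians(fov_deg) / 2
    return (cx, cy, cz), radius / math.sin(half)
```

With the centre and distance in hand you would aim the camera at the centre and back it off by that distance; the SceneScale/DepthOfView adjustments in the Pascal snippet above serve the same purpose in GLScene terms.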

finding the depth in arkit with SCNVector3Make

The goal of the project is to create a drawing app. I want it so that when I touch the screen and move my finger, the paint follows the finger and leaves a cyan trail. I did create it, BUT there is one problem: the paint DEPTH is always placed randomly.
Here is the code; you just need to connect the sceneView with the storyboard.
https://github.com/javaplanet17/test/blob/master/drawingar
My question is: how do I make the program so that the depth is always consistent? By consistent I mean there is always the same distance between the paint and the camera.
If you run the code above you will see that I have printed out all the SCNMatrix4 values, but none of them is the DEPTH.
I have tried changing hitTransform.m43, but it only messes up the x and y.
If you want to get a point some consistent distance in front of the camera, you don’t want a hit test. A hit test finds the real world surface in front of the camera — unless your camera is pointed at a wall that’s perfectly parallel to the device screen, you’re always going to get a range of different distances.
If you want a point some distance in front of the camera, you need to get the camera’s position/orientation and apply a translation (your preferred distance) to that. Then to place SceneKit content there, use the resulting matrix to set the transform of a SceneKit node.
The easiest way to do this is to stick to SIMD vector/matrix types throughout rather than converting between those and SCN types. SceneKit adds a bunch of new accessors in iOS 11 so you can use SIMD types directly.
There’s at least a couple of ways to go about this, depending on what result you want.
Option 1
// set up z translation for 20 cm in front of whatever
// last column of a 4x4 transform matrix is translation vector
var translation = matrix_identity_float4x4
translation.columns.3.z = -0.2
// get camera transform the ARKit way
let cameraTransform = view.session.currentFrame.camera.transform
// if we wanted, we could go the SceneKit way instead; result is the same
// let cameraTransform = view.pointOfView.simdTransform
// set node transform by multiplying matrices
node.simdTransform = cameraTransform * translation
This option, using a whole transform matrix, not only puts the node a consistent distance in front of your camera, it also orients it to point the same direction as your camera.
Option 2
// distance vector for 20 cm in front of whatever
let translation = float3(x: 0, y: 0, z: -0.2)
// treat distance vector as in camera space, convert to world space
let worldTranslation = view.pointOfView.simdConvertPosition(translation, to: nil)
// set node position (not whole transform)
node.simdPosition = worldTranslation
This option sets only the position of the node, leaving its orientation unchanged. For example, if you place a bunch of cubes this way while moving the camera, they’ll all be lined up facing the same direction, whereas with option 1 they’d all be in different directions.
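The camera-space-to-world-space conversion in option 2 is just the camera's transform matrix applied to the offset vector. As a language-agnostic sanity check, here is the same multiplication in plain Python with row-major 4x4 matrices (not the simd types; the camera transform here is a made-up example):

```python
def transform_point(m, p):
    """Apply a row-major 4x4 transform to a 3D point (w = 1)."""
    x, y, z = p
    v = (x, y, z, 1.0)
    return tuple(sum(m[row][col] * v[col] for col in range(4)) for row in range(3))

# hypothetical camera transform: identity rotation, camera sitting at (1, 2, 3)
camera_transform = [
    [1.0, 0.0, 0.0, 1.0],
    [0.0, 1.0, 0.0, 2.0],
    [0.0, 0.0, 1.0, 3.0],
    [0.0, 0.0, 0.0, 1.0],
]
# 20 cm in front of the camera is (0, 0, -0.2) in camera space
world = transform_point(camera_transform, (0.0, 0.0, -0.2))
```

The result lands 20 cm down the camera's forward (-z) axis in world space, which is exactly what simdConvertPosition(_:to: nil) computes.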
Going beyond
Both of the options above are based only on the 3D transform of the camera — they don’t take the position of a 2D touch on the screen into account.
If you want to do that, too, you've got more work cut out for you: essentially what you're doing is hit testing touches not against the world, but against a virtual plane that's always parallel to the camera and a certain distance away. That plane is a cross section of the camera projection frustum, so its size depends on the fixed distance from the camera you place it at. A point on the screen projects to a point on that virtual plane, with its position on the plane scaling proportionally to the distance from the camera.
So, to map touches onto that virtual plane, there are a couple of approaches to consider. (Not giving code for these because it’s not code I can write without testing, and I’m in an Xcode-free environment right now.)
Make an invisible SCNPlane that’s a child of the view’s pointOfView node, parallel to the local xy-plane and some fixed z distance in front. Use SceneKit hitTest (not ARKit hit test!) to map touches to that plane, and use the worldCoordinates of the hit test result to position the SceneKit nodes you drop into your scene.
Use Option 1 or Option 2 above to find a point some fixed distance in front of the camera (or a whole translation matrix oriented to match the camera, translated some distance in front). Use SceneKit’s projectPoint method to find the normalized depth value Z for that point, then call unprojectPoint with your 2D touch location and that same Z value to get the 3D position of the touch location with your camera distance. (For extra code/pointers, see my similar technique in this answer.)
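The geometry of that virtual plane can be sketched independently of SceneKit. Assuming a symmetric frustum with a given vertical field of view, a normalized touch position maps onto the plane like this (a hypothetical helper, not an ARKit or SceneKit call):

```python
import math

def touch_on_virtual_plane(ndc, fov_deg, aspect, distance):
    """Map a normalized touch (x, y in [-1, 1]) to camera-space coordinates
    on a plane `distance` metres in front of the camera (looking down -z)."""
    half_h = distance * math.tan(math.radians(fov_deg) / 2)  # half plane height
    half_w = half_h * aspect                                 # half plane width
    return (ndc[0] * half_w, ndc[1] * half_h, -distance)
```

Note how doubling the distance doubles the plane-space offset for the same touch, which is the "position on the plane scaling proportional to the distance" behaviour described above.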

Rotate 3D cube so side is facing user

How do I figure out the new angle and rotation vectors for the most visible side of the cube?
Why: The user can rotate cube, but when finished I'd like the cube to snap to a side facing the user.
What: I'm currently using CoreAnimation in iOS to do the rotation with CATransform3D. I have the current angle and the rotation vectors so I can do this:
CATransform3DMakeRotation(angle, rotationVector[0], rotationVector[1], rotationVector[2]);
Additional Info: I'm currently using Bill Dudney's Trackball code to generate movement and calculate angle and rotation vector.
Your camera's lookAt vector - probably {0, 0, 1} - determines which side is closest to the user.
You need to create a normal for every side of the cube, then rotate those normals the same way as the cube. After that, calculate the dot product between every rotated normal and the camera's lookAt vector. Whichever normal has the largest dot product belongs to the side closest to the camera.
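The idea can be sketched in a few lines of Python (rotation about Y only, for brevity; a real implementation would apply the cube's full rotation to each normal):

```python
import math

FACE_NORMALS = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def rotate_y(v, angle):
    """Rotate a vector about the Y axis by angle radians."""
    x, y, z = v
    c, s = math.cos(angle), math.sin(angle)
    return (c * x + s * z, y, -s * x + c * z)

def facing_side(cube_angle_y, look_at=(0, 0, 1)):
    """Return the face normal (in the cube's un-rotated frame) whose rotated
    version points most along look_at, i.e. the side facing the user."""
    def dot(a, b):
        return sum(p * q for p, q in zip(a, b))
    return max(FACE_NORMALS, key=lambda n: dot(rotate_y(n, cube_angle_y), look_at))
```

Once you know the winning normal, the snap animation is just the rotation that carries it onto the lookAt vector.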

Rotate and zoom entire 2d scene

I'm currently working on a Final Fantasy-like game, and I'm at the point of building the transition effect from the world map to battles. I want a zoom-in-while-rotating effect. I was thinking of simply animating a transformation matrix passed to SpriteBatch.Begin, but when I rotate, the rotation origin is the top left of my entire scene, and the zoom-in isn't centered either. I saw that you can specify a rotation origin on SpriteBatch.Draw, but that sets it per sprite, and I want to rotate the entire scene.
The transform you are looking for is this:
Matrix Transform = Matrix.CreateTranslation(-Position)
* Matrix.CreateScale(scale)
* Matrix.CreateRotationZ(angle)
* Matrix.CreateTranslation(GraphicsDevice.Viewport.Bounds.Center);
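To see why this ordering zooms and rotates about the camera position, here is a plain-Python mirror of the same composition. Note the factors appear in reverse order compared to the XNA expression: XNA's row-vector convention applies the leftmost matrix first, while with column vectors (used here) the first operation goes on the right.

```python
import math

def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)] for i in range(3)]

def translation(tx, ty): return [[1, 0, tx], [0, 1, ty], [0, 0, 1]]
def scaling(s):          return [[s, 0, 0], [0, s, 0], [0, 0, 1]]
def rotation(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def apply(m, p):
    x, y = p
    return (m[0][0] * x + m[0][1] * y + m[0][2],
            m[1][0] * x + m[1][1] * y + m[1][2])

def scene_transform(pos, scale, angle, center):
    """center-translation * rotation * scale * (-pos)-translation"""
    m = translation(*center)
    m = mat_mul(m, rotation(angle))
    m = mat_mul(m, scaling(scale))
    return mat_mul(m, translation(-pos[0], -pos[1]))
```

Whatever the scale and angle, the world position `pos` always lands on the viewport centre, which is exactly the "zoom in centered while rotating" effect the question asks for.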

What is this rotation behavior in XNA?

I am just starting out in XNA and have a question about rotation. When you multiply a vector by a rotation matrix in XNA, it goes counter-clockwise. This I understand.
However, let me give you an example of what I don't get. Say I load a random art asset into the pipeline, and create a variable that increments by 2 degrees (0.034906585 radians) every frame in the update method (testRot += 0.034906585f). What confuses me is that the asset rotates clockwise in this screen space, even though a rotation matrix will rotate a vector counter-clockwise.
One other thing: when I specify my position vector and my origin, I understand that I am rotating about the origin. Am I to assume that there are perpendicular axes passing through this asset's origin as well? If so, where does rotation start? In other words, does rotation start from the top of the Y-axis or from the X-axis?
The XNA SpriteBatch works in Client Space, where "up" is Y-, not Y+ (as it is in Cartesian space, projection space, and what most people usually choose for their world space). This makes the rotation appear clockwise (not counter-clockwise as it would in Cartesian space). The actual coordinates the rotation produces are the same.
Rotations are relative, so they don't really "start" from any specified position.
If you are using maths functions like sin or cos or atan2, then absolute angles always start from the X+ axis as zero radians, and the positive rotation direction rotates towards Y+.
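A quick check of those conventions in plain Python:

```python
import math

# absolute angles start at the X+ axis and increase toward Y+
assert math.atan2(0.0, 1.0) == 0.0          # X+ direction -> zero radians
assert math.atan2(1.0, 0.0) == math.pi / 2  # Y+ direction -> quarter turn
# rotating the X+ unit vector by a positive angle carries it toward Y+
a = math.pi / 2
x, y = math.cos(a), math.sin(a)
assert abs(x) < 1e-9 and abs(y - 1.0) < 1e-9
```

On screen in Client Space the same numbers appear clockwise, simply because Y+ points down there.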
The order of operations of SpriteBatch looks something like this:
Sprite starts as a quad with the top-left corner at (0,0), its size being the same as its texture size (or SourceRectangle).
Translate the sprite back by its origin (thus placing its origin at (0,0)).
Scale the sprite
Rotate the sprite
Translate the sprite by its position
Apply the matrix from SpriteBatch.Begin
This places the sprite in Client Space.
Finally a matrix is applied to each batch to transform that Client Space into the Projection Space used by the GPU. (Projection space is from (-1,-1) at the bottom left of the viewport, to (1,1) in the top right.)
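The per-sprite steps above can be checked with a small sketch (plain Python, not XNA; sprite_transform is a hypothetical helper that applies steps 2-5 to one corner of the quad):

```python
import math

def sprite_transform(point, origin, scale, angle, position):
    """Apply the SpriteBatch steps to one point of the sprite quad.
    Client space: Y grows downward, so a positive angle looks clockwise."""
    # 2. translate back by the origin (origin lands at (0, 0))
    x, y = point[0] - origin[0], point[1] - origin[1]
    # 3. scale
    x, y = x * scale, y * scale
    # 4. rotate (standard CCW matrix; appears CW because Y is flipped on screen)
    c, s = math.cos(angle), math.sin(angle)
    x, y = x * c - y * s, x * s + y * c
    # 5. translate by the sprite position
    return (x + position[0], y + position[1])
```

For example, the origin point itself always ends up exactly at the sprite's position, regardless of scale and angle, which is why rotation pivots around the origin you pass to SpriteBatch.Draw.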
Since you are new to XNA, allow me to introduce a library that will greatly help you out while you learn. It is called XNA Debug Terminal and is an open source project that allows you to run arbitrary code during runtime. So you can see if your variables have the value you expect. All this happens in a terminal display on top of your game and without pausing your game. It can be downloaded at http://www.protohacks.net/xna_debug_terminal
It is free and very easy to setup so you really have nothing to lose.
