Please correct me if I am wrong: if we have x = 10, y = 20, and we apply a transform to these coordinates (let's say scaling x and y by 10), the new coordinates will be x = 100 and y = 200.
So, if we apply a scaling of x by -1, we get x = -10, y = 20. But why does this action cause the view to be mirrored? Shouldn't the view just be redrawn at its new coordinates?
What am I missing here?
Don't think about a single coordinate, think about a range of coordinates.
If you take the coords (x-value only here) of... 0, 1, 2, 3, 4 and scale them by 10 then they will map to 0, 10, 20, 30, 40 respectively. This will stretch out the x axis and so the view will look 10 times bigger than it did originally.
If you take those same x coords and scale them by -1 then they will map to 0, -1, -2, -3, -4 respectively.
That is, the pixel that is furthest away from the origin (4) is still furthest away from the origin but now at -4.
Each pixel is mirrored through the origin.
That's how scaling works in iOS, Android and general mathematics.
If you just want to slide the view around without changing the size of it at all then you can use a translation instead.
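On iOS, for example, you can see all three behaviours with CGAffineTransform. A minimal sketch (someView is a hypothetical view; note that UIView applies its transform about the view's center point, not the window origin):

import UIKit

let someView = UIView(frame: CGRect(x: 0, y: 0, width: 100, height: 100))
// each assignment replaces, not composes, the previous transform
someView.transform = CGAffineTransform(scaleX: 10, y: 10)         // stretched 10x along each axis
someView.transform = CGAffineTransform(scaleX: -1, y: 1)          // mirrored horizontally
someView.transform = CGAffineTransform(translationX: 90, y: 180)  // slid over, same size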
I want to rotate a node around the y-axis, but the node has an euler x value of -90 rather than (x: 0, y: 0, z: 0). How can I achieve this? I have checked other posts related to rotation, but all the solutions are for euler values of (0, 0, 0).
Don't rotate the node around (0, 1, 0); instead, rotate that vector by multiplying it with the node's transform. For euler angles of (-90, 0, 0) this results in the axis (0.0, -0.448074, 0.893997).
But I assume you actually want the euler angles set to -90° (all angles are in radians in SceneKit), which results in the axis (0, 0, 1), so you need to rotate the node around the z-axis of the rotated node.
You can also wrap your node with your euler angles set in a parent node and rotate this parent node around the y-axis and let SceneKit handle the transformations and coordinate spaces.
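A minimal sketch of that parent-node approach (the node, geometry, and scene here are hypothetical):

import SceneKit

let scene = SCNScene()
let child = SCNNode(geometry: SCNBox(width: 1, height: 1, length: 1, chamferRadius: 0))
child.eulerAngles = SCNVector3Make(-Float.pi / 2, 0, 0)  // the existing -90 degrees about x, in radians

let pivot = SCNNode()              // wrapper whose axes stay unrotated
pivot.addChildNode(child)
scene.rootNode.addChildNode(pivot)

// Spin the wrapper about its own y-axis; SceneKit composes the two transforms.
pivot.runAction(SCNAction.rotateBy(x: 0, y: CGFloat.pi, z: 0, duration: 2))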
I am applying a force and a torque to a node. This is my code:
myNode?.physicsBody?.applyForce(SCNVector3Make(0, -6, 4), atPosition: SCNVector3Make(0, 1, -1), impulse: true)
myNode?.physicsBody?.applyForce(SCNVector3Make(0, -2, 10), impulse: true)
myNode?.physicsBody?.applyTorque(SCNVector4Make(4, 2, 2.5, 1.6), impulse: true)
The object now falls down and moves from left to right afterwards. I want it to fall down and move from right to left (basically a reflection of the first movement across the y-axis). I figured out that there is very little I can do about the first two lines of code, because the force has no x-component. The last line, applyTorque, is the one I need to manipulate. How do you mirror across the y-axis if the vector has four components? I am a little rusty with math.
The fuller version of the applyTorque function looks something like this:
applyTorque(SCNVector4Make(x, y, z, w), impulse:)
So any numbers you put in the second position should be torque amounts around the y axis.
There's probably a relationship between the numbers and what they create in terms of rotational force on an object, but I've always just used trial-and-error to find what works. Sometimes it's HUGE numbers.
I am assuming that the x-axis is horizontal, the y-axis is vertical, and the z-axis points straight at you.
I found evidence that this is indeed the case in SceneKit.
If
applyTorque(SCNVector4Make(x, y, z, w), impulse: boolean)
is the correct usage, then x is the amount of counter-clockwise rotation around the x-axis, and similarly for y and z. Again, this is my best guess, and it is possible that SceneKit uses clockwise rotation. Either way, x, y, and z together determine the axis of rotation of the torsional force.
Here is a simpler way to think of it. x, y, and z create a vector in the 3D space described above. The object will rotate counter-clockwise around this vector.
w on the other hand, is the magnitude of the torque, and has nothing to do with the axis of rotation.
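For example (the magnitudes here are arbitrary), a torque purely around the y-axis would look like this:

myNode?.physicsBody?.applyTorque(SCNVector4Make(0, 1, 0, 5), impulse: true)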
Your request to "map the vector across the y-axis" is actually a reflection across the yz-plane. If you think about it, what you want is to rotate in the opposite direction around the y-axis (negate y), and the same for z.
So the answer should be:
myNode?.physicsBody?.applyTorque(SCNVector4Make(4, -2, -2.5, 1.6), impulse: true)
According to the SceneKit documentation the SCNVector4 argument specifies the direction (x, y, z vector components) and magnitude (w vector component) of the force in newton-meters. To mirror the direction of the applied torque, all you have to do is invert the magnitude. (x, y, z, -magnitude)
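Following that suggestion, the torque from the question with its direction mirrored would look something like this:

myNode?.physicsBody?.applyTorque(SCNVector4Make(4, 2, 2.5, -1.6), impulse: true)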
I am drawing a texture with 4 vertices in OpenGL ES 1.1.
It can rotate around z:
glRotatef(20, 0, 0, 1);
But when I try to rotate it around x or y like a CALayer then the texture just disappears completely. Example for rotation around x:
glRotatef(20, 1, 0, 0);
I also tried very small values and incremented them in the animation loop:
// called once per frame in the render loop
static double angle = 0;
angle += 0.005;            // glRotatef expects degrees, so this grows very slowly
glRotatef(angle, 1, 0, 0); // glRotatef multiplies the current matrix
At certain angles I see only the edge of the texture, as if OpenGL ES were clipping away anything that extends into the depth direction.
Can the problem be related to the projection mode? How would you achieve a perspective transformation of a texture, like you can with CALayer's transform property?
The problem is most likely in one of the glFrustumf or glOrthof calls. The last parameter of these two calls takes z-far, and it should be large enough for the primitive to be drawn. If the side length of the square is 1.0 and its centre is at (.0, .0, .5), then z-far should be greater than 1.0 to see the square rotated 90 degrees around the X or Y axis. Note, though, that this can depend on other matrix operations as well (translating the object or using tools like lookAt).
Making this parameter large enough should solve your problem.
To achieve a perspective transformation use glFrustumf instead of glOrthof.
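Here is a minimal Swift sketch of such a setup (the specific frustum values and the -5 translation are assumptions for illustration; the same ES 1.1 calls work identically in C):

import OpenGLES

glMatrixMode(GLenum(GL_PROJECTION))
glLoadIdentity()
// zFar = 100 comfortably contains the quad even when it rotates into depth
glFrustumf(-1, 1, -1.5, 1.5, 1, 100)
glMatrixMode(GLenum(GL_MODELVIEW))
glLoadIdentity()
glTranslatef(0, 0, -5)  // push the quad into the visible depth range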
Considering that the screen resolution of an iOS device is 320 × 480 points, what is the coordinate of the top-left pixel? Is it (0, 0) or (1, 1)?
Technically, the top-left pixel is actually a rect that stretches from {0, 0} to {1, 1}, with the center point at {0.5, 0.5}.
When expressing rects, to include the top-left pixel you want to start at {0, 0}. But, for example, if you want to draw a line that's centered on the top-left pixel, then your line needs to pass through {0.5, 0.5}.
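A minimal UIKit sketch of that half-pixel rule (the class name and values are illustrative):

import UIKit

class HairlineView: UIView {
    override func draw(_ rect: CGRect) {
        guard let ctx = UIGraphicsGetCurrentContext() else { return }
        ctx.setLineWidth(1)                  // one point wide
        ctx.move(to: CGPoint(x: 0, y: 0.5))  // centered on the top pixel row
        ctx.addLine(to: CGPoint(x: bounds.maxX, y: 0.5))
        ctx.strokePath()
    }
}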
(0, 0). In the computing world, numbering always starts at zero. That's how you can tell programmers from other people: ask them to count to ten.
I am trying to use FFTW for image convolution.
At first, just to test that the system was working properly, I performed the FFT and then the inverse FFT, and got the exact same image back.
Then, as a small step forward, I used the identity kernel (i.e., kernel[0][0] = 1 while all the other components equal 0). I took the component-wise product between the image and the kernel (both in the frequency domain), then did the inverse FFT. Theoretically I should get the identical image back, but the result I got is not even close to the original image. I suspect this has something to do with where I center my kernel before I FFT it into the frequency domain (since I put the "1" at kernel[0][0], it basically means that I centered the positive part at the top left). Could anyone enlighten me about what goes wrong here?
For each dimension, the indexes of samples should be from -n/2 ... 0 ... n/2 -1, so if the dimension is odd, center around the middle. If the dimension is even, center so that before the new 0 you have one sample more than after the new 0.
E.g. -4, -3, -2, -1, 0, 1, 2, 3 for a width/height of 8 or -3, -2, -1, 0, 1, 2, 3 for a width/height of 7.
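A small plain-Swift sketch of that index mapping, just to make the convention concrete:

func centeredIndex(_ i: Int, _ n: Int) -> Int {
    return i - n / 2  // integer division floors, so 0 maps to -floor(n/2)
}
// n = 8: memory indices 0...7 map to -4, -3, -2, -1, 0, 1, 2, 3
// n = 7: memory indices 0...6 map to -3, -2, -1, 0, 1, 2, 3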
The FFT is relative to the middle; in its scale there are negative points.
In memory the points are 0...n-1, but the FFT treats them as -floor(n/2)...ceil(n/2)-1, where memory location 0 corresponds to -floor(n/2) and n-1 to ceil(n/2)-1.
The identity matrix is a matrix of zeros with a 1 in the (0, 0) location (the center, according to the numbering above). (In the spatial domain.)
In the frequency domain the identity matrix should be a constant (all real values 1 or 1/(N*M) and all imaginary values 0).
If you do not receive this result, then the identity matrix might need to be padded differently (to the left and down instead of around all sides); this may depend on the FFT implementation.
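To see why the frequency-domain identity is constant, here is a tiny naive 1D DFT check in Swift (a sketch for illustration, not FFTW itself; tuples are (real, imaginary)):

import Foundation

func dft(_ x: [Double]) -> [(Double, Double)] {
    let n = x.count
    return (0..<n).map { k in
        var re = 0.0, im = 0.0
        for j in 0..<n {
            let phase = -2.0 * Double.pi * Double(k * j) / Double(n)
            re += x[j] * cos(phase)
            im += x[j] * sin(phase)
        }
        return (re, im)
    }
}

var delta = [Double](repeating: 0, count: 8)
delta[0] = 1       // the identity kernel: a single 1 at index 0
print(dft(delta))  // every bin is (1.0, 0.0) up to rounding error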
Center each dimension separately (this is an index centering, no change in actual memory).
You will probably need to pad the image (after centering) to a whole power of 2 in each dimension (2^n * 2^m where n doesn't have to equal m).
Pad relative to FFT's 0,0 location (to center, not corner) by copying existing pixels into a new larger image, using center-based-indexes in both source and destination images (e.g. (0,0) to (0,0), (0,1) to (0,1), (1,-2) to (1,-2))
Assuming your FFT uses regular floating-point cells and not complex cells, the complex image has to be of size 2*ceil(n/2) * 2*ceil(m/2), even if you don't need a whole power of 2 (since it has half the samples, but the samples are complex).
If your image has more than one color channel, you will first have to reshape it so that the channels are the most significant in the sub-pixel ordering, instead of the least significant. You can reshape and pad in one go to save time and space.
Don't forget the FFTSHIFT after the IFFT. (To swap the quadrants.)
The result of the IFFT is 0...n-1. You have to take pixels ceil(n/2)...n-1 and move them before 0...ceil(n/2)-1.
This is done by copying pixels to a new image: copy ceil(n/2) to memory location 0, ceil(n/2)+1 to memory location 1, ..., n-1 to memory location floor(n/2)-1, then 0 to memory location floor(n/2), 1 to memory location floor(n/2)+1, ..., ceil(n/2)-1 to memory location n-1.
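In code, that per-dimension shift looks like this (a plain-Swift sketch; apply it once per dimension):

func fftshift<T>(_ a: [T]) -> [T] {
    let split = (a.count + 1) / 2           // ceil(n/2)
    return Array(a[split...] + a[..<split]) // back half first, then front half
}
// n = 8: [0,1,2,3,4,5,6,7] -> [4,5,6,7,0,1,2,3]
// n = 7: [0,1,2,3,4,5,6]   -> [4,5,6,0,1,2,3]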
When you multiply in the frequency domain, remember that the samples are complex (one cell real then one cell imaginary) so you have to use a complex multiplication.
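A sketch of that complex multiplication over interleaved (real, imaginary) float pairs (assumes both arrays have the same even length):

func complexMultiply(_ a: [Float], _ b: [Float]) -> [Float] {
    var out = [Float](repeating: 0, count: a.count)
    for i in stride(from: 0, to: a.count, by: 2) {
        let (ar, ai) = (a[i], a[i + 1])
        let (br, bi) = (b[i], b[i + 1])
        out[i]     = ar * br - ai * bi  // real part
        out[i + 1] = ar * bi + ai * br  // imaginary part
    }
    return out
}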
The result might need dividing by N*M, where N is the size of n after padding (and likewise for M and m); with unnormalized transforms like FFTW's, a forward transform followed by an inverse one scales everything by N*M. You can tell by (a) looking at the frequency-domain values of the identity matrix, or (b) comparing the result to the input.
I think that your understanding of the identity kernel may be off. An identity kernel should have the 1 at the center of the 2D kernel, not at the (0, 0) position.
For example, for a 3 x 3 kernel, you have yours set up as follows:
1, 0, 0
0, 0, 0
0, 0, 0
It should be
0, 0, 0
0, 1, 0
0, 0, 0
Check this out also:
What is the "do-nothing" convolution kernel
Also look here, at the bottom of page 3:
http://www.fmwconcepts.com/imagemagick/digital_image_filtering.pdf
I took the component-wise product between the image and the kernel in the frequency domain, then did the inverse FFT. Theoretically I should be able to get the identical image back.
I don't think that doing a forward transform, multiplying by a kernel that hasn't itself been transformed, and then doing an inverse FFT should lead to any expectation of getting the original image back, but perhaps I'm just misunderstanding what you were trying to say there...