Detecting the closest vertex to the camera in OpenGL ES - iOS

I have a mesh that is stored as an array of Vertices with an Index array used to draw it. Four of the vertices are also redrawn with a shader to highlight the points, and the indices for these are stored in another array.
The user can rotate the model using touches, which affects the modelViewMatrix:
modelViewMatrix = GLKMatrix4Multiply(modelViewMatrix, _rotMatrix);
My problem is that I need to detect which of my four highlighted points is closest to the screen when the user makes a rotation.
I think the best method would be to calculate the distance from the near clip plane of the view frustum to the point, but how do I calculate those points in the first place?

You can do this easily from camera/eye space[1], where everything is relative to the camera (so the camera is at (0, 0, 0), looking down the negative z axis).
Use your modelViewMatrix to transform the vertex to camera space, say vertex_cs. The distance of the vertex from the camera (plane) is then simply -vertex_cs.z.
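As a concrete illustration, here is a minimal C sketch of that check using the GLKit math functions already used in the question; the array of highlighted positions and its count are hypothetical placeholders:

#include <GLKit/GLKMath.h>
#include <float.h>

// Returns the index of the highlighted vertex closest to the camera.
// modelViewMatrix is the question's own matrix; "highlighted" stands in
// for the four highlighted vertex positions.
int closestVertexIndex(GLKMatrix4 modelViewMatrix,
                       const GLKVector3 highlighted[], int count)
{
    int closest = 0;
    float minDistance = FLT_MAX;
    for (int i = 0; i < count; i++) {
        GLKVector4 v = GLKVector4MakeWithVector3(highlighted[i], 1.0f);
        GLKVector4 eye = GLKMatrix4MultiplyVector4(modelViewMatrix, v);
        float distance = -eye.z;  // camera looks down -z in eye space
        if (distance < minDistance) {
            minDistance = distance;
            closest = i;
        }
    }
    return closest;
}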
--
1. What exactly are eye space coordinates?

Related

How to zoom to fit 3D points in the scene to screen?

I store my 3D points (many points) in a TGLPoints object. There are no other objects in the scene besides the points. When drawing the points, I would like to fit them to the screen so they do not look too far away or too close. I tried TGLCamera.ZoomAll, but with no success, and also the solution given here, which adjusts the camera location, depth of view, and scene scale:
objSize := YourCamera.TargetObject.BoundingSphereRadius;
if objSize > 0 then
begin
  if objSize < 1 then
  begin
    GLCamera.SceneScale := 1 / objSize;
    objSize := 1;
  end
  else
    GLCamera.SceneScale := 1;
  GLCamera.AdjustDistanceToTarget(objSize * 0.27);
  GLCamera.DepthOfView := 1.5 * GLCamera.DistanceToTarget + 2 * objSize;
end;
The points did not appear on the screen this time.
What should I do to fit the 3D points to screen?
For each point, compute a scale factor as the length of the vector from the point's position to the camera position. Then use this scale to build the transformation matrix that you apply to the camera matrix. If the scale is large, the point is farther away, and you apply a reverse translation to bring that point into close proximity. I hope this is clear. To compute the translation vector, use the following formula:
translation vector = translation vector +/- (abs(scale)/2)
The + or - is decided by the scale magnitude: if the point is too far from the camera, choose - in the equation above.
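One possible reading of this, sketched in C; the camera position, unit view direction, point list, and target distance are all hypothetical placeholders, and the "scale" of each point is its distance from the camera as described above:

#include <math.h>

typedef struct { float x, y, z; } Vec3;

// Moves the camera along its (unit) view direction so the farthest point
// ends up at targetDistance: points that are too far pull the camera
// forward, points that are too close push it back.
Vec3 fitCameraToPoints(Vec3 camera, Vec3 viewDir,
                       const Vec3 *points, int count, float targetDistance)
{
    float maxDist = 0.0f;
    for (int i = 0; i < count; i++) {
        float dx = points[i].x - camera.x;
        float dy = points[i].y - camera.y;
        float dz = points[i].z - camera.z;
        float dist = sqrtf(dx * dx + dy * dy + dz * dz);
        if (dist > maxDist) maxDist = dist;
    }
    float excess = maxDist - targetDistance;  // signed translation amount
    camera.x += viewDir.x * excess;
    camera.y += viewDir.y * excess;
    camera.z += viewDir.z * excess;
    return camera;
}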

What are the correct cube map coordinates for OpenGL ES 2.0 on iOS after loading the cube map with GLKit?

In order to debug my shader, I am trying to display just the front face of the cube map.
The cube map is a 125x750 image with the 6 faces stacked on top of each other.
First, I load the cube map with GLKit:
_cubeTexture = [GLKTextureLoader cubeMapWithContentsOfFile:[[NSBundle mainBundle] pathForResource:@"uffizi_cube_map_ios" ofType:@"png"] options:kNilOptions error:&error];
Then I load it into the shader:
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_CUBE_MAP, self.cubeTexture.name);
glUniform1i(glGetUniformLocation(self.shaderProgram, "cube"), 0);
Then in the fragment shader:
gl_FragColor = textureCube(cube, vec3(-1.0+2.0*(gl_FragCoord.x/resolution.x),-1.0+2.0*(gl_FragCoord.y/resolution.y),1.0));
This displays a distorted image which seems to be a portion of the top of the cube map.
It shouldn't be distorted, and it should show the right face, not the top face.
I can't find any documentation that describes how the coordinates map to the cube, so what am I doing wrong?
It seems that there is a problem with cubeMapWithContentsOfFile. The cubeMapWithContentsOfFiles method (the one that takes an array of 6 images) works perfectly on the simulator. (There is a different issue with both methods on the device.)
To visualize how texture coordinates work for cube maps, picture a cube centered at the origin, with the faces at distance 1 from the origin, and with the specified cube map image on each face.
The texture coordinates can then be seen as direction vectors. Starting at the origin, the 3 components define a vector that can point in any direction. The ray defined by the vector will then intersect one of the 6 cube faces at a given point. This is the point where the corresponding cube map image is sampled during texturing.
For example, take a vector that points in a direction that is closest to the positive z axis. The ray defined by this vector intersects the top face of the cube. Therefore, the top (POSITIVE_Z) image of the cube map is sampled, at the point where the ray intersects the face.
Equivalent rules apply to all other directions. The vector component with the largest absolute value determines which face is sampled, and the intersection point determines the position within the image.
The exact rules and formula can be found in the spec document. For example in the latest spec (OpenGL 4.5), see Section 8.13 "Cube Map Texture Selection", with the matching table 8.19. But as long as you understand that the texture coordinates define a direction vector, you have the main aspect covered.
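For illustration, here is a small C sketch of just the face-selection rule (the position within the face, which the spec's table also defines, is omitted):

#include <math.h>

// Cube map faces in the usual GL order.
enum { POS_X, NEG_X, POS_Y, NEG_Y, POS_Z, NEG_Z };

// Picks the cube map face sampled by direction (x, y, z): the face along
// the axis whose component has the largest absolute value, on the side
// given by that component's sign.
int cubeMapFace(float x, float y, float z)
{
    float ax = fabsf(x), ay = fabsf(y), az = fabsf(z);
    if (ax >= ay && ax >= az) return x > 0.0f ? POS_X : NEG_X;
    if (ay >= az)             return y > 0.0f ? POS_Y : NEG_Y;
    return z > 0.0f ? POS_Z : NEG_Z;
}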
How you determine the texture coordinates really depends on what you want to achieve. Common cases include:
Using the normal vector as the cube map texture coordinates. This can, for example, be used for pre-computed lighting effects, where the cube map image contains pre-computed lighting results for each possible normal direction.
Using the reflection vector as the cube map texture coordinates. This supports the implementation of environment mapping, where the content of the cube map is a picture of the environment.
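For the reflection case, the vector is the incident view direction mirrored about the surface normal, the same formula GLSL's reflect() uses. A self-contained C sketch:

typedef struct { float x, y, z; } Vec3;

// Reflects incident direction i about unit normal n: r = i - 2*dot(n, i)*n.
// The result is used directly as the cube map texture coordinate.
Vec3 reflectVec(Vec3 i, Vec3 n)
{
    float d = 2.0f * (n.x * i.x + n.y * i.y + n.z * i.z);
    Vec3 r = { i.x - d * n.x, i.y - d * n.y, i.z - d * n.z };
    return r;
}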

Calculating position of object so it matches screen pixels

I would like to move a 3D plane in a 3D space, and have the movement match the screen's pixels so I can snap the plane to the edges of the screen.
I have played around with the focal length, camera position, and camera scale, and I have managed to get a plane to match the screen pixels in terms of size; however, when moving the plane, things are not correct anymore.
So basically my current status is that I feed the plane size with values assuming that I am working with standard 2D graphics. So if I set the plane size to 128x128, it is more or less viewed as a 2D square of exactly that size.
I am not using, and will not use, an orthographic view; I am using a perspective projection because my application needs some perspective to it.
How can this be calculated?
Does anyone have any links to resources that I can read?
You need to take the transformation matrices you use in the vertex shader and apply them to a point (or a few points) that represents the plane.
That will result in points in the range -1,-1 to 1,1 (after dividing by w), which you then need to map to the viewport.
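In GLKit terms that might look like the following C sketch; the matrices and viewport size are whatever the application already uses:

#include <GLKit/GLKMath.h>

// Projects a model-space point to window pixels: apply the model-view and
// projection matrices, divide by w, then map NDC [-1, 1] to the viewport.
// Depending on your window origin you may need to flip the y result.
GLKVector2 projectToScreen(GLKMatrix4 modelView, GLKMatrix4 projection,
                           GLKVector3 point, float viewportW, float viewportH)
{
    GLKVector4 clip = GLKMatrix4MultiplyVector4(
        GLKMatrix4Multiply(projection, modelView),
        GLKVector4MakeWithVector3(point, 1.0f));
    float ndcX = clip.x / clip.w;  // normalized device coordinates
    float ndcY = clip.y / clip.w;
    return GLKVector2Make((ndcX * 0.5f + 0.5f) * viewportW,
                          (ndcY * 0.5f + 0.5f) * viewportH);
}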

Water surface sample for iOS

I'm looking for a water surface effect sample like Pocket Pond HD. I have found some tutorials:
iPhone OpenGL demo water waves
Waves effect
However, they're sketchy.
It is very simple.
You just have to make a 2D heightmap (a 2D array of the water height at each particular place). From the heightmap you can calculate (approximate, interpolate) a normal at each place from the nearest height points.
Then you perform a "simple raytracing". You "refract each ray" depending on the normal, intersect it with the plane (the bottom), and get a color from the texture at that place.
Practically: you make a triangle mesh from the height map and render those triangles. You can send the normals in a vertex buffer or compute them in the vertex shader. The raytracing is done in the fragment shader. The direction of each ray can be (0, 0, 1). You refract it by the current normal and scale the result so the Z coordinate equals the water depth. The new X and Y coordinates are the texture coordinates.
To make an animation, just update the heightmap in time.
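A minimal C sketch of the normal-approximation step, assuming a square n-by-n heightmap stored row-major (the layout is an assumption, not something the answer specifies):

#include <math.h>

typedef struct { float x, y, z; } Vec3;

// Approximates the water surface normal at cell (i, j) from the heights of
// the four neighbors (central differences, clamped at the borders).
Vec3 heightmapNormal(const float *height, int n, int i, int j)
{
    float hl = height[j * n + (i > 0 ? i - 1 : i)];
    float hr = height[j * n + (i < n - 1 ? i + 1 : i)];
    float hd = height[(j > 0 ? j - 1 : j) * n + i];
    float hu = height[(j < n - 1 ? j + 1 : j) * n + i];
    Vec3 nrm = { hl - hr, hd - hu, 2.0f };  // proportional to (-dh/dx, -dh/dy, 1)
    float len = sqrtf(nrm.x * nrm.x + nrm.y * nrm.y + nrm.z * nrm.z);
    nrm.x /= len; nrm.y /= len; nrm.z /= len;
    return nrm;
}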

Translating screen coordinates to sprite coordinates in XNA

I have a sprite object in XNA.
It has a size, position and rotation.
How do I translate a point from screen coordinates to sprite coordinates?
Thanks,
SW
You need to calculate the transform matrix for your sprite, invert that (so the transform now goes from world space -> local space) and transform the mouse position by the inverted matrix.
Matrix transform = Matrix.CreateScale(scale) * Matrix.CreateRotationZ(rotation) * Matrix.CreateTranslation(translation);
Matrix inverseTransform = Matrix.Invert(transform);
Vector3 transformedMousePosition = Vector3.Transform(mousePosition, inverseTransform);
You might find the following XNA picking sample useful:
http://creators.xna.com/en-us/sample/picking
One solution is to hit test against the sprite's original, unrotated bounding box.
So given the 2D screen vector (x,y):
translate the 2D vector into local sprite space: (x,y) - (spritex,spritey)
apply inverse sprite rotation
perform hit testing against bounding box
The hit test can of course be made more accurate by taking into account the sprite shape.
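The same steps, written out as a language-neutral C sketch (the sprite fields and names are illustrative, not XNA API):

#include <math.h>

// Hit-tests a screen point against a rotated sprite by undoing the
// sprite's transform: translate into sprite space, apply the inverse
// rotation, then test against the unrotated half-extents (any sprite
// scale is assumed to be folded into halfW/halfH).
int spriteContains(float px, float py,        // screen point
                   float sx, float sy,        // sprite center
                   float rotation,            // sprite rotation in radians
                   float halfW, float halfH)  // unrotated half size
{
    float dx = px - sx, dy = py - sy;          // to local space
    float c = cosf(-rotation), s = sinf(-rotation);
    float lx = dx * c - dy * s;                // inverse rotation
    float ly = dx * s + dy * c;
    return fabsf(lx) <= halfW && fabsf(ly) <= halfH;
}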
I think it may be as simple as using the Contains method on Rectangle, the rectangle being the bounding box of your sprite. I've implemented drag-and-drop this way in XNA; I believe Contains tests based on x and y being screen coordinates.
