Let's say we have a texture (in this case 8x8 pixels) we want to use as a sprite sheet. One of the sub-images (sprite) is a subregion of 4x3 inside the texture, like in this image:
(Normalized texture coordinates of the four corners are shown)
Now, there are basically two ways to assign texture coordinates to a 4px x 3px quad so that it effectively becomes the sprite we are looking for. The first and most straightforward is to sample the texture at the corners of the subregion:
// Texture coordinates
GLfloat sMin = (xIndex0 ) / imageWidth;
GLfloat sMax = (xIndex0 + subregionWidth ) / imageWidth;
GLfloat tMin = (yIndex0 ) / imageHeight;
GLfloat tMax = (yIndex0 + subregionHeight) / imageHeight;
When I first implemented this method, circa 2010, I realized the sprites looked slightly 'distorted'. After a bit of searching, I came across a post in the cocos2d forums explaining that the 'right way' to sample a texture when rendering a sprite is this:
// Texture coordinates
GLfloat sMin = (xIndex0 + 0.5) / imageWidth;
GLfloat sMax = (xIndex0 + subregionWidth - 0.5) / imageWidth;
GLfloat tMin = (yIndex0 + 0.5) / imageHeight;
GLfloat tMax = (yIndex0 + subregionHeight - 0.5) / imageHeight;
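For concreteness, here is what the two variants produce side by side, as a small plain-C++ sketch. The question doesn't state where the 4x3 subregion sits in the 8x8 sheet, so (xIndex0, yIndex0) = (2, 2) is made up:
#include <cstdio>

int main() {
    // Assumed offsets; the question doesn't say where the subregion sits.
    float xIndex0 = 2.0f, yIndex0 = 2.0f;
    float subregionWidth = 4.0f, subregionHeight = 3.0f;
    float imageWidth = 8.0f, imageHeight = 8.0f;

    // Corner ("blue") method:
    std::printf("blue: s=[%g,%g] t=[%g,%g]\n",
                xIndex0 / imageWidth, (xIndex0 + subregionWidth) / imageWidth,
                yIndex0 / imageHeight, (yIndex0 + subregionHeight) / imageHeight);
    // -> blue: s=[0.25,0.75] t=[0.25,0.625]

    // Half-texel inset ("red") method:
    std::printf("red:  s=[%g,%g] t=[%g,%g]\n",
                (xIndex0 + 0.5f) / imageWidth, (xIndex0 + subregionWidth - 0.5f) / imageWidth,
                (yIndex0 + 0.5f) / imageHeight, (yIndex0 + subregionHeight - 0.5f) / imageHeight);
    // -> red:  s=[0.3125,0.6875] t=[0.3125,0.5625]
}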
...and after fixing my code, I was happy for a while. But somewhere along the way, I believe around the introduction of iOS 5, I started feeling that my sprites weren't looking good. After some testing, I switched back to the 'blue' method (second image), and now they seem to look good, but not always.
Am I going crazy, or did something change with iOS 5 related to GL ES texture mapping? Perhaps I am doing something else wrong (e.g., the vertex position coordinates are slightly off, or wrong texture setup parameters)? But my code base didn't change, so perhaps I have been doing something wrong from the beginning...?
I mean, at least with my code, it feels as if the "red" method used to be correct but now the "blue" method gives better results.
Right now, my game looks OK, but I feel there is something half-wrong that I must fix sooner or later...
Any ideas / experiences / opinions?
ADDENDUM
To render the sprite above, I would draw a quad measuring 4x3 in orthographic projection, with each vertex assigned the texture coords implied in the code mentioned before, like this:
// Top-Left Vertex
{ sMin, tMin };
// Bottom-Left Vertex
{ sMin, tMax };
// Top-Right Vertex
{ sMax, tMin };
// Bottom-right Vertex
{ sMax, tMax };
The original quad is created from (-0.5, -0.5) to (+0.5, +0.5); i.e., it is a unit square at the center of the screen, which is then scaled to the size of the subregion (in this case, 4x3) and positioned with its center at integer (x, y) coordinates. I suspect this has something to do with it too, especially when the width, the height, or both are odd?
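That suspicion is easy to check with a bit of arithmetic (plain C++, values made up):
float w = 3.0f;                  // odd sprite width
float cx = 10.0f;                // integer center position
float left  = cx - w * 0.5f;     // 8.5
float right = cx + w * 0.5f;     // 11.5
// Pixel centers sit at integer + 0.5, so both edges pass exactly through
// pixel centers: coverage of the edge pixels becomes ambiguous, and the
// interpolated texcoords land half a texel off compared with an even size.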
ADDENDUM 2
I also found this article, but I'm still trying to put it together (it's 4:00 AM here):
http://www.mindcontrol.org/~hplus/graphics/opengl-pixel-perfect.html
There's slightly more to this picture than meets the eye: the texture coordinates are not the only factor in where the texture gets sampled. In your case, I believe the blue method is probably what you want.
What you ultimately want is to sample each texel at its center. You don't want to be taking samples on the boundary between two texels, because that either combines them with linear filtering, or arbitrarily chooses one or the other with nearest filtering, depending on which way the floating-point calculations round.
Having said that, you might think that you don't want your texcoords at (0,0), (1,1), and the other corners, because those are on the texel boundary. However, an important thing to note is that OpenGL samples textures at the center of a fragment.
For a super simple example, consider a 2 by 2 pixel monitor, with a 2 by 2 pixel texture.
If you draw a quad from (0,0) to (2,2), this will cover 4 pixels. If you texture map this quad, it will need to take 4 samples from the texture.
If your texture coordinates go from 0 to 1, then OpenGL will interpolate this and sample from the center of each pixel, with the lower-left texcoord starting at the bottom-left corner of the bottom-left pixel. This will ultimately generate texcoord pairs of (0.25, 0.25), (0.75, 0.75), (0.25, 0.75), and (0.75, 0.25), which puts the samples right in the middle of each texel; this is what you want.
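To make that interpolation concrete, here is a small plain-C++ sketch of the texcoords OpenGL effectively produces for those four fragments:
#include <cstdio>

int main() {
    // Texcoord at each fragment center of the 2x2 quad, with coords running 0..1:
    for (int py = 0; py < 2; ++py)
        for (int px = 0; px < 2; ++px) {
            float s = (px + 0.5f) / 2.0f;     // 0.25 or 0.75
            float t = (py + 0.5f) / 2.0f;     // 0.25 or 0.75
            std::printf("(%g, %g)\n", s, t);  // exactly the texel centers
        }
}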
If you offset your texcoords by half a texel as in the red example, then the interpolation will be off, and you'll end up sampling the texture off-center from the texels.
So long story short, you want to make sure that your pixels line up correctly with your texels (don't draw sprites at non-integer pixel locations), and don't scale sprites by arbitrary amounts.
If the blue square is giving you bad results, can you give an example image, or describe how you're drawing it?
Picture says 1000 words:
Although this is an inappropriate use of a compute shader, I was doing some experiments to determine if I could use one to produce the general UV gradient, where one channel of the image goes linearly from 0 to 1 across the x axis and the other channel goes from 0 to 1 across the y axis. However, I became confused when I generated this image by varying the b value of a texture by the thread_position_in_grid.x value divided by the image width. I edited the pixel of the texture at the thread_position_in_grid position:
Yes, it was a gradient, but it certainly did not appear to be the 0-to-1 gradient I wanted. I dropped it into an image editor and, sure enough, it was not linear. (The part added below shows what a linear gradient from 0 to 1 would look like.)
It would appear that I do not understand what exactly the thread_position_in_grid value means. I know it has something to do with the threads per threadgroup and the thread execution width, but I don't exactly understand what. I suppose my end goal is to know whether it would be possible to generate the gradient below in a compute shader; however, I don't understand what is going on.
For reference, I was working with a 100x100 texture and the following thread settings. Really, I don't know why I use these values; this is just what I saw recommended somewhere, so I am sticking with them. I would love to be able to generalize this problem to other texture sizes as well, including rectangles.
let w = greenPipeline.threadExecutionWidth
let h = greenPipeline.maxTotalThreadsPerThreadgroup / w
let threadsPerThreadgroup = MTLSizeMake(w, h, 1)
// Round up so the grid of threadgroups covers the whole texture.
let threadgroupsPerGrid = MTLSize(width: (texture.width + w - 1) / w,
                                  height: (texture.height + h - 1) / h,
                                  depth: 1)
encoder.dispatchThreadgroups(threadgroupsPerGrid, threadsPerThreadgroup: threadsPerThreadgroup)
And my shader looks like this:
kernel void green(texture2d<float, access::write> outputTexture [[texture(0)]],
                  uint2 position [[thread_position_in_grid]])
{
    // The grid may be larger than the texture, so drop out-of-bounds threads.
    if (position.x >= outputTexture.get_width() || position.y >= outputTexture.get_height()) {
        return;
    }
    outputTexture.write(float4(position.x / 100.0, 0.0, 0.0, 0.0), position);
}
Two things about this shader confuse me because I cannot explain them:
1. I am using position as the coordinate to write to on the texture, so it bothers me that position doesn't work to generate the gradient.
2. You cannot replace the position.x / 100.0 value with position.x / outputTexture.get_width(), even though it should also be 100. Doing so produces a black image. Yet when I made a shader that colored everything with outputTexture.get_width() as its value, it did indeed shade everything to a value equivalent to 100 (or, more accurately, 101 because of rounding).
So it is OK to use position to check whether the kernel is within bounds, but not to create the UV gradient.
What is going on?
The thread_position_in_grid means whatever you want it to mean because you decide how large the grid is and what each thread in the grid does.
In your example, thread_position_in_grid is the pixel coordinate in the texture, because your grid size is equal to the number of pixels in the texture (rounded up to a whole number of threadgroups).
You can see this if you change the threadgroupsPerGrid to:
let threadgroupsPerGrid = MTLSize(width: (texture.width/2 + w - 1) / w,
                                  height: (texture.height/2 + h - 1) / h,
                                  depth: 1)
Now only the top-left quarter of your texture should be filled in, because the grid only covers half of the texture's width and height.
As to why your texture looks weird, it's probably related to the pixel format. After all, you're writing into the red color component and your texture comes out as blue.
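As for the second point of confusion (the black image when dividing by the texture width), one likely explanation, an assumption on my part rather than something stated in the question, is unsigned integer division:
// position.x and outputTexture.get_width() are both uint, so their quotient
// truncates to 0 for every x < width, and the channel stays black.
// Dividing by the literal 100.0 promotes to floating point, which is why
// that version works. Casting first should handle any texture size:
outputTexture.write(float4(float(position.x) / float(outputTexture.get_width()),
                           0.0, 0.0, 0.0), position);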
I am attempting to map a fisheye image to a 360-degree view using a sky sphere in Unity. The scene is inside the sphere. I am very close, but I am seeing some slight distortion. I am calculating UV coordinates as follows:
Vector3 v = currentVertice; // unit vector from the sphere center to this vertex; components range over (-1, -1, -1) to (1, 1, 1)
float r = Mathf.Atan2(Mathf.Sqrt(v.x * v.x + v.y * v.y), v.z) / (Mathf.PI * 2.0f);
float phi = Mathf.Atan2(v.y, v.x);
textureCoordinates.x = (r * Mathf.Cos(phi)) + 0.5f;
textureCoordinates.y = (r * Mathf.Sin(phi)) + 0.5f;
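As a quick sanity check of where these formulas send the back of the sphere, here is a plain C++ mirror of the code above (the vertex value is hypothetical):
#include <cmath>
#include <cstdio>

int main() {
    // Hypothetical vertex at the back pole of the sphere.
    float vx = 1e-6f, vy = 0.0f, vz = -1.0f;
    float r   = std::atan2(std::sqrt(vx*vx + vy*vy), vz) / (2.0f * 3.14159265f);
    float phi = std::atan2(vy, vx);
    std::printf("r=%g u=%g v=%g\n", r,
                r * std::cos(phi) + 0.5f, r * std::sin(phi) + 0.5f);
    // -> r=0.5, u=1, v=0.5: the back pole lands on the rim of the fisheye
    //    circle, the heavily compressed border region described in the answer below.
}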
Here is the distortion and triangles:
The rest of the sphere looks great; it's just at this one spot that I get the distortion.
Here is the source fisheye image:
And here is the same sphere with a UV test texture over the top, showing the same distortion area. The full UV test texture is on the right; it is a square, although stretched into a rectangle for the purposes of my screenshot.
The distortion above is using sphere mapping rather than fisheye mapping. Here is the UV texture using fisheye mapping:
Math isn't my strong point; am I doing anything wrong here, or is this kind of mapping simply not 100% possible?
The spot you are seeing is the case where r gets very close to 1. As you can see in the source image, this is the border area between the very distorted image data and the black.
This area is very distorted; however, that's not the main problem. Looking at the result, you can see that there are problems with the UV orientation.
I've added a few lines to your source image to demonstrate what I mean. Where r is small (yellow lines), you can see that the UV coordinates can be interpolated between the corners of your quad (assuming quads instead of tris). However, where r is big (red corners), interpolating the UV coordinates will make them travel through areas of your source image whose r is much smaller than 1 (red lines), causing distortions in UV space. Actually, those red lines should not be straight; they should travel along the border of your source image data.
You can improve this by having a higher polycount in the area of your skysphere where r gets close to 1, but it will never be perfect as long as your UVs are interpolated in a linear way.
I also found another problem. If you look closely at the spot, you'll find that the complete source image is present there in miniature. This is because your UV coordinates wrap around at that point. As rendering passes around the viewer, the UV coordinates travel from 0 towards 1. At the spot they are at 1, but the neighboring vertex is at 0.001 or so, causing the whole source image to be rendered in between. To fix that, you'll need two separate vertices at the seam of your sky sphere: one where the surface of the sphere starts, and one where it ends. In object space they are identical, but in UV space one is at 0 and the other at 1.
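A minimal sketch of that seam fix (plain C++ with illustrative names, not any particular engine's API):
#include <vector>

struct Vert { float px, py, pz, u, v; };

// At the wrap column of each ring, emit two vertices with identical positions:
// u = 0 on the copy that opens the ring, u = 1 on the copy that closes it,
// so no triangle ever interpolates u from ~1 back down to ~0.
void addSeamPair(std::vector<Vert>& verts, float px, float py, float pz, float v) {
    verts.push_back({ px, py, pz, 0.0f, v });   // opens the ring
    verts.push_back({ px, py, pz, 1.0f, v });   // closes the ring
}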
I have a very simple terrain map, 256x256 tiles for example. It's divided into uniform square tiles, and every tile has a height, a slope...
Something like the figure below. My default view will be an iso view. (Each tile can be divided into smaller tiles for smoothing; I call this tessellation.)
D3DXMatrixOrthoLH(&matProj, videoWidth, videoHeight, -100000, 100000);
float xPitch=0;
float yPitch=PI/3.0; //rotate yPitch 60 degree
float zPitch=PI/4.0; //rotate zPitch 45 degree
Now I need to select a unit tile on screen with the mouse (to decide where to move to or build something...). I have the mouse position (Mx, My) and need to know which tile it is. If the map were flat, this would be very easy, but with height it becomes difficult. I'm planning to keep the map quite static (it won't rotate often), only translated, so I can store all the projected vertex coordinates (x, y) on screen using D3DXVec3Project, and then search for the triangle that contains the mouse position; that gives the tile we need. However, with this approach we may need to search 5-10 or even 20 triangles. Do you know any better, more optimized or elegant way? Thanks!
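For the screen-space triangle search described above, the containment test itself is standard; a hedged plain-C++ sketch:
// Signed area of the (a, b, p) triangle; its sign says which side of ab p is on.
static float edge(float ax, float ay, float bx, float by, float px, float py) {
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax);
}

// True if (px, py) lies inside (or on the edge of) the projected triangle.
bool pointInTriangle(float px, float py,
                     float x0, float y0, float x1, float y1, float x2, float y2) {
    float d0 = edge(x0, y0, x1, y1, px, py);
    float d1 = edge(x1, y1, x2, y2, px, py);
    float d2 = edge(x2, y2, x0, y0, px, py);
    bool hasNeg = (d0 < 0) || (d1 < 0) || (d2 < 0);
    bool hasPos = (d0 > 0) || (d1 > 0) || (d2 > 0);
    return !(hasNeg && hasPos);  // all on the same side, either winding
}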
I read about something like ray casting for detection; maybe it can be used in my case. Because there is no eye position in my view setup, the view vector is constant!
3D Screenspace Raycasting/Picking DirectX9
D3DXVec3Unproject also looks quite promising!
Image : http://i1335.photobucket.com/albums/w666/greenpig83/sc4_zpsaaa61249.png
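Since D3DXVec3Unproject looks promising, here is a hedged sketch of that route; viewport, matView, and matWorld are assumed to exist elsewhere in the renderer, and matProj is the matrix set up above:
#include <d3dx9.h>

// Unproject the mouse at the near and far plane to get a world-space ray.
D3DXVECTOR3 nearPt, farPt;
D3DXVECTOR3 screenNear((float)Mx, (float)My, 0.0f);
D3DXVECTOR3 screenFar ((float)Mx, (float)My, 1.0f);
D3DXVec3Unproject(&nearPt, &screenNear, &viewport, &matProj, &matView, &matWorld);
D3DXVec3Unproject(&farPt,  &screenFar,  &viewport, &matProj, &matView, &matWorld);

D3DXVECTOR3 dir = farPt - nearPt;   // constant direction in an orthographic view
D3DXVec3Normalize(&dir, &dir);
// March the ray across the tile grid and test only the two triangles of each
// tile it passes over, instead of 5-20 arbitrary candidates.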
I have a sphere in my 3D project, and I have an Earth texture. I use the algorithm from the wiki to calculate the texture coordinates.
The code in my effect file looks like this:
float pi = 3.14159265359f;
output.uvCoords.x = 0.5 + atan2(input.normal.z, input.normal.x) / (2 * pi);
output.uvCoords.y = 0.5f - asin(input.normal.y) / pi;
The results are the pictures below:
1. Look from the left (there is a line; this is my question)
2. Look from the front
3. Look from the right
This does not pretend to be a complete answer at all, but here are some ideas:
- Try 6.28 instead of 6.18, because 3.14 * 2 = 6.28. It is always a good idea to create variables or macros instead of plain numbers, to prevent such sad mistakes in the future.
- Try to use a more precise value of pi (more digits after the decimal point).
- Try to normalize the normal vector before the calculations.
- Even better, calculate the texcoords on the CPU once and for all, instead of on each shader invocation; see the sketch after the defines below. You can use any asset library for this purpose, or just quickly move your HLSL to the main code.
#define PI 3.14159265359f
#define PImul2 6.28318530718f // pi*2
#define PIdiv2 1.57079632679f // pi/2
#define PImul3div2 4.71238898038f // 3*pi/2
#define PIrev 0.31830988618f // 1/pi
...
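And a minimal CPU-side sketch of the last suggestion (plain C++, names illustrative), assuming the normals are already unit length:
#include <cmath>

const float PI = 3.14159265359f;

// Bake the UVs once at mesh-build time, using the same formulas as the shader.
void bakeSphereUV(const float* nx, const float* ny, const float* nz,
                  float* u, float* v, int count) {
    for (int i = 0; i < count; ++i) {
        u[i] = 0.5f + std::atan2(nz[i], nx[i]) / (2.0f * PI);
        v[i] = 0.5f - std::asin(ny[i]) / PI;
    }
}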
Hope it helps.
Finally, I figured it out myself. The problem lies in the fact that I was calculating the texture coordinates in the vertex shader. One vertex ends up on the far right of the texture while the other two vertices of the triangle are on the far left, which results in almost the whole texture being visible on such a triangle; hence the line of jumbled texture coords. The solution is to send the normal to the pixel shader and calculate the texture coordinates in the pixel shader.
I'm looking for a water surface effect sample like Pocket Pond HD. I have found some tutorials:
iPhone OpenGL demo water waves
Waves effect
However, they're sketchy.
It is very simple.
You just have to make a 2D heightmap (a 2D array of the water height at each point). From the heightmap you can calculate (approximate, interpolate) a normal at each point, based on the nearest height points.
Then you perform a "simple ray tracing": you refract each ray according to the normal, intersect it with the plane (the bottom), and fetch a color from the texture at that place.
Practically: you make a triangle mesh from the heightmap and render those triangles. You can send the normals in the vertex buffer or compute them in the vertex shader. The ray tracing is done in the fragment shader. The direction of each ray can be (0, 0, 1); you refract it by the current normal and scale the result so the Z coordinate equals the water depth. The new X and Y coordinates are the texture coordinates.
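Here is a hedged CPU-side sketch of those two steps in plain C++; the names and the eta value are illustrative, and with z pointing up the straight-down ray becomes (0, 0, -1), the (0, 0, 1) above with the axis flipped:
#include <cmath>

struct V3 { float x, y, z; };

// Central-difference normal from the heightmap at (i, j); +z is up.
V3 normalAt(const float* hmap, int w, int h, int i, int j) {
    float l = hmap[j * w + (i > 0     ? i - 1 : i)];
    float r = hmap[j * w + (i < w - 1 ? i + 1 : i)];
    float d = hmap[(j > 0     ? j - 1 : j) * w + i];
    float u = hmap[(j < h - 1 ? j + 1 : j) * w + i];
    V3 n = { l - r, d - u, 2.0f };
    float len = std::sqrt(n.x*n.x + n.y*n.y + n.z*n.z);
    return { n.x / len, n.y / len, n.z / len };
}

// Refract the straight-down ray I = (0, 0, -1) against the surface normal
// (GLSL refract convention), then scale it to reach the bottom; the x/y
// offset is the texture-coordinate shift. eta ~ 1/1.33 for air-to-water.
void bottomUVOffset(V3 n, float depth, float eta, float* du, float* dv) {
    float d = -n.z;                                  // dot(N, I)
    float k = 1.0f - eta * eta * (1.0f - d * d);
    float s = eta * d + std::sqrt(k > 0.0f ? k : 0.0f);
    V3 t = { -s * n.x, -s * n.y, -eta - s * n.z };   // eta*I - s*N
    float scale = depth / -t.z;                      // stretch until z spans the depth
    *du = t.x * scale;
    *dv = t.y * scale;
}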
To make an animation, just update the heightmap in time.