How to place a texture at the pole of a sphere? - webgl

Even with high-resolution Earth textures (4k×2k), the mapping at the poles is distorted. Is it possible to place a square texture with its middle directly at a pole of a sphere with THREE.js, and rotate it accordingly?
Example map: http://rapidfire.sci.gsfc.nasa.gov/imagery/subsets/?mosaic=Arctic.2012154.terra.4km.jpg
Sorry, no code, looking for a starting point.

Maybe you should 'correct' your rectangular texture to avoid the distortion at the poles.
This link might be of help for that: http://local.wasp.uwa.edu.au/~pbourke/texture_colour/texturemap/

In Cesium, a WebGL globe and map engine, we fixed the poles by creating a screen-space quad over where each pole will be, and using a shader to render the pole correctly. The code is in this pull request. We don't yet support inlaying high-res images, but that's coming.
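A THREE.js-agnostic starting point (a sketch of the general idea, not Cesium's actual code): render a small cap mesh over each pole and give it UVs from an azimuthal mapping, so the middle of the square texture lands exactly on the pole. All names below are hypothetical; rotating the texture is just an offset on the longitude.

#include <cmath>

struct UV { float u, v; };

// Map a unit direction (x, y, z), with z pointing at the north pole, to UVs
// in a square cap texture centered on that pole. capLat is the latitude (in
// radians) at which the edge of the cap texture lies.
UV polarCapUV(float x, float y, float z, float capLat) {
    const float kHalfPi = 1.5707963f;
    float lat = std::asin(z);                         // latitude of this vertex
    float r   = (kHalfPi - lat) / (kHalfPi - capLat); // 0 at the pole, 1 at the cap edge
    float lon = std::atan2(y, x);                     // add an offset here to rotate the texture
    return { 0.5f + 0.5f * r * std::cos(lon),
             0.5f + 0.5f * r * std::sin(lon) };
}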

Related

WebGL: Draw different geometric primitives within same draw call?

I have a WebGL scene that wants to draw both point and line primitives, and am wondering: Is it possible to draw multiple WebGL primitives inside a single draw call?
My hunch is this is not possible, but WebGL is constantly surprising me with tricks one can do to accomplish strange edge cases, and searching has not let me confirm whether this is possible or not.
I'd be grateful for any insight others can offer on this question.
You can't draw WebGL lines, points, and triangles in the same draw call. But you can generate points and lines from triangles, and then issue a single draw call whose triangles happen to form points, while other triangles form lines and the rest draw everything else.
Not a good example, but for fun, here's a vertex shader that generates points and lines from triangles on the fly.
There's also this for an example of making lines from triangles.
How creative you want to get with your shaders versus doing things on the CPU is up to you, but it's common to draw lines with triangles, as the previous article points out, since WebGL lines can generally only be a single pixel thick. A CPU-side sketch of that expansion follows.
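For instance, a minimal sketch (hypothetical types and names) that turns one 2D segment into a thin quad made of two triangles, so it can be batched with everything else:

#include <cmath>
#include <vector>

struct Vec2 { float x, y; };

// Appends six vertices (two triangles) forming a quad of the given thickness
// around the segment a->b. Assumes a != b.
void appendLineQuad(Vec2 a, Vec2 b, float thickness, std::vector<Vec2>& out) {
    float dx = b.x - a.x, dy = b.y - a.y;
    float len = std::sqrt(dx * dx + dy * dy);
    // Unit normal perpendicular to the segment, scaled to half the thickness.
    Vec2 n  = { -dy / len * 0.5f * thickness, dx / len * 0.5f * thickness };
    Vec2 p0 = { a.x + n.x, a.y + n.y }, p1 = { a.x - n.x, a.y - n.y };
    Vec2 p2 = { b.x + n.x, b.y + n.y }, p3 = { b.x - n.x, b.y - n.y };
    out.insert(out.end(), { p0, p1, p2, p2, p1, p3 });
}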
It's also common to draw points with triangles, since:
- WebGL is only required to support points of size 1; drawing with triangles removes that limit.
- WebGL points are always aligned with the screen; triangle-based points are far more flexible. You can rotate them, for example, or orient them in 3D. Here's a bunch of points made from triangles.
- Triangle-based points can be scaled in 3D with no extra work. In other words, a triangle-based point in 3D space scales with distance from the camera using standard 3D math, whereas a WebGL point requires you to compute the size it should be and set gl_PointSize yourself, as in the sketch below.
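That extra work looks something like this GLSL ES vertex shader (a sketch with hypothetical uniform names; u_projScale would be roughly screenHeightInPixels / (2 * tan(fovY / 2))):

static const char* kScaledPointVS = R"(
attribute vec4 position;
uniform mat4 u_viewProjection;
uniform float u_worldSize;   // desired point diameter in world units
uniform float u_projScale;   // pixels per world unit at a distance of 1
void main() {
  gl_Position = u_viewProjection * position;
  // Dividing by w makes the point shrink with distance, as a triangle would.
  gl_PointSize = u_worldSize * u_projScale / gl_Position.w;
}
)";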
It's not common to mix points, lines, and triangles in a single draw call but it's not impossible by any means.

Directx shadow mapping

I have successfully implemented shadow maps in my engine, but the problem is that the shadow map doesn't cover the whole scene, and if I make the shadow map cover a larger area, shadow quality drops. So I'm trying to make my shadows move with the camera, which I can do if I can calculate the 8 world-space positions of the camera frustum's vertices.
So how can I calculate the world-space positions of the camera frustum's vertices? I'm working with DirectX, if that changes how they are calculated.
Thanks.
The frustum (near plane, far plane, FOV) is defined in view space, so multiplying it by the inverse view matrix moves it into world space. If you use DirectXMath (which I recommend), you can use its BoundingFrustum object. Example code might look something like this:
DirectX::BoundingFrustum frustum;
// Build the frustum from the projection matrix; at this point it is in view space.
DirectX::BoundingFrustum::CreateFromMatrix(frustum, camera.getProjectionMatrix());
// The inverse of the view matrix maps view space back to world space.
DirectX::XMMATRIX inverseViewMatrix = DirectX::XMMatrixInverse(nullptr, camera.getViewMatrix());
frustum.Transform(frustum, inverseViewMatrix);
BoundingFrustum docs: https://msdn.microsoft.com/en-us/library/windows/desktop/microsoft.directx_sdk.directxmath.boundingfrustum(v=vs.85).aspx
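To get the eight world-space vertices the question asks about, you can then read the corners straight out of the transformed frustum:

// BoundingFrustum::CORNER_COUNT is 8; GetCorners fills the array with the
// frustum's corner positions, now in world space.
DirectX::XMFLOAT3 corners[DirectX::BoundingFrustum::CORNER_COUNT];
frustum.GetCorners(corners);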
If your frustum is big and you have to view a large area (e.g. outdoor scene) then even a large moving shadow map might not be enough (or it takes a huge amount of memory). One technique to solve this is called cascaded shadow maps (CSM). In CSM more precise shadow maps are rendered close to the camera and less precise shadows are rendered in the distance where the low quality is not visible anyway. Here is a CSM tutorial in case you are interested: https://msdn.microsoft.com/en-us/library/windows/desktop/ee416307(v=vs.85).aspx
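Not part of the linked tutorial, but for reference, one common way to choose the cascade split distances is the "practical split scheme" from parallel-split shadow maps, which blends uniform and logarithmic splits:

#include <cmath>
#include <vector>

// Returns count + 1 split depths between nearZ and farZ. lambda = 0 gives
// uniform splits, lambda = 1 gives logarithmic splits.
std::vector<float> cascadeSplits(float nearZ, float farZ, int count, float lambda = 0.5f) {
    std::vector<float> splits(count + 1);
    for (int i = 0; i <= count; ++i) {
        float t   = float(i) / float(count);
        float lg  = nearZ * std::pow(farZ / nearZ, t); // logarithmic split
        float uni = nearZ + (farZ - nearZ) * t;        // uniform split
        splits[i] = lambda * lg + (1.0f - lambda) * uni;
    }
    return splits;
}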

Cocos2d - lunar eclipse effect on iPhone

I have a question about achieving an effect like a lunar eclipse. The effect should look like the first seconds of this GIF: just a black shadow that moves over the circle. Ideally there would be a function I can pass a percentage parameter to, to shadow that amount of the circle:
The problem I am facing is that my background is a gradient, so it's not possible to just move a black circle over the moon to get the effect.
I tried something with CCClippingNode, but it doesn't look nice; the clip at the edges was always a bit pixelated.
I thought about using something like a GLSL shader to achieve the effect, but I am not so familiar with GLSL and I can't find an example.
The effect is for an iPhone game developed with the cocos2d framework, version 3 (the current one).
Does somebody have an idea how to get this effect, or where I could start to search?
Thank you in advance.
The physics behind it is simple: you change the light shining on the Moon. So:
I would create a 1D gradient texture representing the lighting conditions
compute each rendered pixel of the Moon
You obviously have the 2D texture of the Moon, so you now need to obtain the position of each pixel inside the 1D lighting texture. If the Moon is fully visible, you are in sunlight; when it is partially eclipsed, you are in the penumbra region; and during a total eclipse you are in the umbra region. So just compute the position of the Moon's middle point, and for the rest use the relative position along the Moon's direction of motion.
Now just multiply the Moon surface by the lighting texture and render the output (a shader sketch follows at the end of this answer).
When that's working, you can add a curvature correction
Now you have linearly cut Moon phases, but the real phases are curved, since the lighting conditions also differ with radial distance from the motion direction through the Moon's center. To fix this you can either:
convert the lighting into a 2D texture, or
shift the texture coordinate by some curvature amount dependent on the radial distance
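A minimal fragment-shader sketch of the multiply step (hypothetical uniform and varying names; cocos2d's built-in attribute names may differ, and the mapping from texture coordinate to lighting coordinate is just one possible choice):

// GLSL ES fragment shader, held in a C(++) string for the shader loader.
static const char* kEclipseFrag = R"(
#ifdef GL_ES
precision mediump float;
#endif
varying vec2 v_texCoord;
uniform sampler2D u_moonTex;   // 2D texture of the Moon
uniform sampler2D u_lightTex;  // 1D gradient: sunlight -> penumbra -> umbra
uniform float u_phase;         // 0..1, how far the shadow has progressed
void main() {
    vec4 moon = texture2D(u_moonTex, v_texCoord);
    // Relative position along the shadow's direction of motion, shifted by phase.
    float s = clamp(v_texCoord.x - u_phase + 0.5, 0.0, 1.0);
    vec4 light = texture2D(u_lightTex, vec2(s, 0.5));
    gl_FragColor = moon * light;
}
)";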

Texture getting stretched across faces of a cuboid in Open Inventor

I am trying to write a little script to apply texture to rectangular cuboids. To accomplish this, I run through the scene graph, and wherever I find an SoIndexedFaceSet node, I insert an SoTexture2 node before it with my image file. The problem I am facing is that the texture is applied correctly to 2 of the faces (say face 1 and face 2) in the Y-Z plane, but on the other 4 faces it just stretches the texture from the boundaries of those two faces.
It looks something like this.
The front is how it should look, but as you can see, on the other two faces, it just extrapolates the corner values of the front face. Any ideas why this is happening and any way to avoid this?
Yep, assuming that you did not specify texture coordinates for your SoIndexedFaceSet, that is exactly the expected behavior.
If Open Inventor sees that you have applied a texture image to a geometry and did not specify texture coordinates, it will automatically compute some texture coordinates. Of course it's not possible to guess how you wanted the texture to be applied. So it computes the bounding box then computes texture coordinates that stretch the texture across the largest extent of the geometry (XY, YZ or XZ). If the geometry is a cuboid you can see the effect clearly as in your image. This behavior can be useful, especially as a quick approximation.
What you need to make this work the way you want is to explicitly assign texture coordinates to the geometry such that the texture is mapped separately onto each face. In Open Inventor you can actually still share the vertices between faces, because you are allowed to specify different vertex indices and texture coordinate indices (this is only a convenience for the application: OpenGL doesn't support it, so Open Inventor has to re-shuffle the data internally). If you applied the same texture to an SoCube node, you would see the texture mapped separately onto each face as expected; that's because SoCube defines texture coordinates for each face.
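A minimal sketch of that in Open Inventor (assuming a hypothetical image file "texture.png" and a single quad face; the point is the separate coordIndex and textureCoordIndex fields):

#include <Inventor/nodes/SoSeparator.h>
#include <Inventor/nodes/SoTexture2.h>
#include <Inventor/nodes/SoTextureCoordinate2.h>
#include <Inventor/nodes/SoCoordinate3.h>
#include <Inventor/nodes/SoIndexedFaceSet.h>

SoSeparator* makeTexturedFace() {
    SoSeparator* root = new SoSeparator;

    SoTexture2* tex = new SoTexture2;
    tex->filename = "texture.png";            // hypothetical image file
    root->addChild(tex);

    // Explicit texture coordinates: one full copy of the texture per face.
    SoTextureCoordinate2* tc = new SoTextureCoordinate2;
    SbVec2f uvs[4] = { SbVec2f(0, 0), SbVec2f(1, 0), SbVec2f(1, 1), SbVec2f(0, 1) };
    tc->point.setValues(0, 4, uvs);
    root->addChild(tc);

    SoCoordinate3* coords = new SoCoordinate3;
    SbVec3f pts[4] = { SbVec3f(0, 0, 0), SbVec3f(1, 0, 0), SbVec3f(1, 1, 0), SbVec3f(0, 1, 0) };
    coords->point.setValues(0, 4, pts);
    root->addChild(coords);

    SoIndexedFaceSet* face = new SoIndexedFaceSet;
    int32_t ci[] = { 0, 1, 2, 3, -1 };        // vertex indices; -1 closes the face
    face->coordIndex.setValues(0, 5, ci);
    // Vertices can be shared between faces while each face keeps its own UVs,
    // because the texture-coordinate indices are specified independently.
    int32_t ti[] = { 0, 1, 2, 3, -1 };
    face->textureCoordIndex.setValues(0, 5, ti);
    root->addChild(face);

    return root;
}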

"warping" an image on iOS

I'm trying to find a way to do something similar to this on iOS:
Does anyone know a simple way to do it?
I don't know of a one-liner to do this, but you can use OpenGL to render a textured grid of quads whose texture coordinates are equally distributed.
Example of a 2x2 grid of quads (3x3 vertices):
{0.0,1.0} {0.5,1.0} {1.0,1.0}
{0.0,0.5} {0.5,0.5} {1.0,0.5}
{0.0,0.0} {0.5,0.0} {1.0,0.0}
If you move the shared vertices of adjacent quads (like in your example) while the texture coordinates stay where they are, you get a warp effect. You need only a trivial vertex and fragment shader when using OpenGL ES, especially if you want to smooth the warp effect, which in its simple form is interpolated linearly per quad/triangle.
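A sketch of the setup (plain C++ arrays; drawing code omitted): build the 3x3 vertex grid, keep the texture coordinates regular, and displace the shared center vertex to warp the image:

// 3x3 vertices = 2x2 quads. Texture coordinates stay on the regular grid;
// positions start there too, then the shared center vertex is nudged.
float positions[9][2];
float texcoords[9][2];
for (int j = 0; j < 3; ++j) {
    for (int i = 0; i < 3; ++i) {
        int k = j * 3 + i;
        texcoords[k][0] = i * 0.5f;   // 0.0, 0.5, 1.0 spacing
        texcoords[k][1] = j * 0.5f;
        positions[k][0] = i * 0.5f;
        positions[k][1] = j * 0.5f;
    }
}
positions[4][0] += 0.15f;  // moving only the center vertex is the warp;
positions[4][1] += 0.10f;  // the texels around it stretch/compress by interpolation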
