Currently I'm drawing an object with pins in it using OpenGL ES 2.0 and displaying it on a CAEAGLLayer. I'm able to identify objects via color picking.
Now I need to calculate the screen coordinates for a pin's world coordinates in order to draw, for example, a label at the right position (I want to use Cocoa Touch components). What would be a proper way to calculate the screen coordinates (hidden objects should be ignored)?
Running through the whole image and using each pixel to perform color picking doesn't sound like the right way to go.
Thanks in advance.
Apple provides its own flavour of GL math functions:
See the GLKMathUtils documentation.
As Lukas pointed out, you can use it to project (world -> screen coordinates) or un-project (screen -> world coordinates).
So, if you're already using GLKit for your matrix transformations, you can use this:
GLKVector3 screenPoint = GLKMathProject(modelPoint, modelViewMatrix, projectionMatrix, viewport);
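For completeness, here is a minimal sketch of how the viewport argument can be built and how the projected point can be flipped into UIKit's top-left, point-based coordinate system (variable names such as view, modelPoint and modelViewMatrix are assumptions):

// Viewport in pixels: {x, y, width, height}
int viewport[4] = {0, 0,
                   (int)(view.bounds.size.width * view.contentScaleFactor),
                   (int)(view.bounds.size.height * view.contentScaleFactor)};
GLKVector3 projected = GLKMathProject(modelPoint, modelViewMatrix, projectionMatrix, viewport);
// GLKMathProject returns window coordinates with a bottom-left origin, in pixels;
// UIKit uses a top-left origin in points, so flip y and divide by the scale factor.
CGPoint labelPosition = CGPointMake(projected.x / view.contentScaleFactor,
                                    view.bounds.size.height - projected.y / view.contentScaleFactor);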
I was able to answer this question myself. Some versions of OpenGL provide gluProject(), which can be used to calculate the position of a vertex on the screen. Unfortunately this function is not available in OpenGL ES 2.0, so you have to do the calculations yourself or use a math library like OpenGL Mathematics (GLM), which provides glm::project():
#include "matrix_transform.hpp"
#include "transform.hpp"
vec3 pinScreenPosition = glm::project(vec3(0,0,0), modelMatrix, projectionMatrix, vec4(0, 0, screenDimensions.x * screenScale, screenDimensions.y * screenScale));
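To honour the "hidden objects should be ignored" requirement, one option (just a sketch, reusing the color picking you already have in place; pinPickingColor is an assumed name for the pin's picking id color) is to read back the single pixel at the projected position from the picking framebuffer and check whether it still belongs to the pin:

// With the color-picking framebuffer bound and freshly rendered:
// glReadPixels uses the same bottom-left origin as the projected coordinates.
GLubyte pixel[4];
glReadPixels((GLint)pinScreenPosition.x, (GLint)pinScreenPosition.y,
             1, 1, GL_RGBA, GL_UNSIGNED_BYTE, pixel);
BOOL pinVisible = (pixel[0] == pinPickingColor.r &&
                   pixel[1] == pinPickingColor.g &&
                   pixel[2] == pinPickingColor.b);
// Only place the UIKit label if pinVisible is YES.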
Related
For computational efficiency, I use a fragment shader to implement a simple 2D metaballs algorithm. The circle data to be rendered is specified with a top-left origin.
I have everything working, except that the origin of WebGL's coordinate system (bottom-left) is giving me a hard time: obviously, the rendered output is mirrored along the horizontal axis.
Following https://webglfundamentals.org/webgl/lessons/webgl-2d-rotation.html (and others), I tried to rotate things using a vertex shader, unfortunately without any success.
What is the most simple way of achieving the reorientation of WebGL's coordinate system?
I'd appreciate any hints and pointers, thanks! :)
Please find a working (not working ;) ) example here:
https://codesandbox.io/s/gracious-fermat-znbsw?file=/src/index.js
Since you are using gl_FragCoord in your fragment shader, you can't do it from the vertex shader, because gl_FragCoord gives you canvas coordinates, but upside down. You can easily invert it in JavaScript when you pass the values through to WebGL:
gl.uniform3fv(gl.getUniformLocation(program, `u_circles[${i}]`), [
  circles[i].x,
  canvas.height - circles[i].y - 1,
  circles[i].r
]);
If you want to do it in the shader and keep using gl_FragCoord, then you should pass the height of the canvas to the shader as a uniform and do the conversion of y there, with something like:
vec2 screenSpace = vec2(gl_FragCoord.x, canvasHeight - gl_FragCoord.y - 1);
The -1 is because the coordinates start at 0.
In OpenGL, I am using the following in my pixel shaders to get the correct pixel position, which is used to sample diffuse, normal, position gbuffer textures:
ivec2 texcoord = ivec2(textureSize(unifDiffuseTexture, 0) * (gl_FragCoord.xy / UnifAmbientPass.mScreenSize));
So far, this is what I do in HLSL:
float2 texcoord = input.mPosition.xy / gScreenSize;
Most notably, in GLSL I am using textureSize() to get an accurate pixel position. I am wondering, is there an HLSL equivalent to textureSize()?
In HLSL, you have GetDimensions.
But it may be costlier than reading the size from a constant buffer, even if it looks easier to use at first for quick tests.
As an alternative, you can use SV_Position and Load: just use the xy as a uint2, which removes the need for a user interpolator carrying a texture coordinate to index the screen.
Here is the full documentation of a TextureObject.
I have a single cloud texture that I want to displace arbitrarily along the Y ("vertical") axis of an SCNNode's spherical geometry, to give the illusion that there are many different cloud textures.
I read the docs about SCNMaterialProperty and CATransform3D rotation, but I'm completely lost. In a 3D program, you can set your texture "origin" along the X, Y and Z axes -- what is the equivalent in Scene Kit / Core Animation?
Thanks for your help!
SCNMaterialProperty has a contentsTransform property that allows you to animate texture coordinates. You can also use shader modifiers if you want more control, depending on the effect you want to achieve.
In the Bananas sample code from WWDC 2014 this technique is used to animate the smoke emitted by the volcano in the background.
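For instance, a rough sketch that offsets the cloud texture along the vertical texture axis (cloudNode and the random offset value are assumptions, and the material is assumed to wrap rather than clamp):

// Shift the diffuse texture vertically by a random amount; repeat so it tiles instead of clamping.
float offset = arc4random_uniform(100) / 100.0f;
SCNMaterial *cloudMaterial = self.cloudNode.geometry.firstMaterial;
cloudMaterial.diffuse.wrapT = SCNWrapModeRepeat;
cloudMaterial.diffuse.contentsTransform = CATransform3DMakeTranslation(0.0, offset, 0.0);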
I finally ended up with this:
self.cloudNode.rotation = SCNVector4Make(0.0,
                                         1.0,
                                         0.0,
                                         arc4random_uniform(360) * M_PI / 180.0);
I'm not a maths genius anyway.
What is the best way to draw circles with OpenGL ES 2.0?
I am working on an iPad/iPhone project using cocos2d 2.0 (currently beta) which uses OpenGL ES 2.0 and shaders instead of OpenGL ES 1.0.
In my former projects I used the handy ColoredCircleSprite class that is included in the SneakyInput package. But now, with OpenGL ES 2.0, that code no longer works, and to be honest I am a little lost when it comes to writing my own solution from scratch. I need a CCSprite subclass that draws smooth circles. (Perhaps with a little shadow shader...)
Should I build a rectangular shape in the vertex shader and then discard every pixel outside the circle radius in the fragment shader? Or should I build the circle vertices inside the vertex shader?
Are there any good tutorials about this topic on the net? As an OpenGL n00b I would appreciate any kind of help!
Use ccDrawCircle:
ccDrawCircle(CGPoint center, float radius, float angle,
NSUInteger segments, BOOL drawLineToCenter);
Increase the number of segments to make the circle smoother. Have a look at the implementation of ccDrawCircle in CCDrawingPrimitives.h if you want to learn from the code.
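A minimal usage sketch, assuming a CCNode (or CCSprite) subclass under cocos2d 2.x; the color, position, radius and segment count are just example values:

// Override draw in your CCNode subclass and call the cocos2d drawing primitives.
- (void)draw
{
    [super draw];
    ccDrawColor4F(1.0f, 0.0f, 0.0f, 1.0f);             // red outline
    ccDrawCircle(ccp(100, 100), 50.0f, 0.0f, 64, NO);  // 64 segments, no line to the center
}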
I am trying to do some image processing on the GPU, e.g. median, blur, brightness, etc. The general idea is to do something like this framework from GPU Gems 1.
I am able to write the GLSL fragment shader for processing the pixels, as I've been trying out different things in an effect designer app.
I am not sure, however, how I should do the other part of the task. That is, I'd like to work on the image in image coordinates and then output the result to a texture. I am aware of the gl_FragCoord variable.
As far as I understand it, it goes like this: I need to set up a view (an orthographic one maybe?) and a quad in such a way that the fragment shader is applied once to each pixel of the image, rendering into a texture or something. But how can I achieve that, considering there's depth that may make things somewhat awkward for me...
I'd be very grateful if anyone could help me with this rather simple task as I am really frustrated with myself.
UPDATE:
It seems I'll have to use an FBO, set up with something like glBindFramebuffer(...)
Use this tutorial; it targets OpenGL 2.0, but most of its features are also available in ES 2.0. The only thing I have doubts about is floating-point textures.
http://www.mathematik.uni-dortmund.de/~goeddeke/gpgpu/tutorial.html
Basically, you need the 4 vertex positions (as vec2) of a quad (with corners (-1,-1) and (1,1)), passed as a vertex attribute.
You don't really need a projection, because the shader will not need one.
Create an FBO, bind it and attach the target surface. Don't forget to check the completeness status.
Bind the shader, set up the input textures and draw the quad (a sketch of these application-side steps follows after the shaders below).
Your vertex shader may look like this:
#version 130
in vec2 at_pos;
out vec2 tc;
void main() {
    tc = (at_pos + vec2(1.0)) * 0.5; // texture coordinates
    gl_Position = vec4(at_pos, 0.0, 1.0); // no projection needed
}
And a fragment shader:
#version 130
in vec2 tc;
uniform sampler2D unit_in;
void main() {
    vec4 v = texture2D(unit_in, tc);
    gl_FragColor = do_something(v); // do_something() is a placeholder for your image processing
}
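Putting the application-side steps above together, here is a rough OpenGL ES 2.0 sketch (the texture size, program handle and input texture are assumptions; at_pos and unit_in match the shaders above):

// Create the render-target texture.
GLuint targetTex;
glGenTextures(1, &targetTex);
glBindTexture(GL_TEXTURE_2D, targetTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

// Create the FBO and attach the texture as its color buffer.
GLuint fbo;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, targetTex, 0);
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
    // handle the incomplete-framebuffer error
}

// Draw a full-screen quad (clip-space corners) with the processing shader.
const GLfloat quad[] = { -1, -1,  1, -1,  -1, 1,  1, 1 };
glViewport(0, 0, width, height);
glUseProgram(program);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, inputTex);
glUniform1i(glGetUniformLocation(program, "unit_in"), 0);
GLint posLoc = glGetAttribLocation(program, "at_pos");
glEnableVertexAttribArray(posLoc);
glVertexAttribPointer(posLoc, 2, GL_FLOAT, GL_FALSE, 0, quad);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);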
If you want an example, I created this project for iOS devices for processing frames of video grabbed from the camera using OpenGL ES 2.0 shaders. I explain more about it in my writeup here.
Basically, I pull in the BGRA data for a frame and create a texture from that. I then use two triangles to generate a rectangle and map the texture on that. A shader is used to directly display the image onscreen, perform some effect on the image and display it, or perform some effect on the image while in an offscreen FBO. In the last case, I can then use glReadPixels() to pull in the image for some CPU-based processing, but ideally I want to fix this so that the processed image is just passed on as a texture to the next set of shaders.
You should also check out ogles_gpgpu, which even supports Android systems. An overview about this topic is given in this publication: Parallel Computing for Digital Signal Processing on Mobile Device GPUs.
You can do more advanced GPGPU things with OpenGL ES 3.0 now. Check out this post for example. Apple now also has the Metal API, which allows even more GPU compute operations. Both OpenGL ES 3.x and Metal are only supported by newer devices with an A7 chip.