How to draw each vertex of a mesh as a circle?
You can do it using geometry shaders to create billboarded geometry from each vertex on the GPU. You can then either create the circles as geometry, or create quads and use a circle texture to draw them (I recommend the latter). But geometry shaders are not widely supported yet, even less so on iOS. If you know for sure that the hardware you'll run this on supports them, go for it.
If geometry shading isn't an option, your two best options are:
Use a Particle System, which already handles mesh creation and billboarding. To create a particle at each vertex position, use ParticleSystem.Emit. The system's simulation space should be Local. If the vertices move, use SetParticles to update them.
Create a procedural Mesh that already contains the geometry you need. If the camera and points don't move, you can get away with creating the mesh in a fixed shape. Otherwise you will need to animate the billboarding, either on the procedural mesh or in a shader.
Update: 5,000,000 points is a lot. Although Particle Systems can handle big numbers by creating lots of internal meshes, the sheer amount of processing really eats up the CPU. And even if the points are static in space, a procedural mesh with no special shaders must be updated each frame for the billboarding effect.
My advice is to create many meshes (a single mesh cannot handle that amount of geometry). The meshes will contain one quad per point (or a single triangle, if you dare, to make it faster), but all four vertices of a quad are located at the same point. You then use the texture coordinates in the vertex program to expand each quad into a billboard, as sketched below.
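Here is a minimal sketch of that vertex program in GLSL (in Unity you would write the equivalent in HLSL/Cg inside a shader; the attribute, uniform and varying names are made up for illustration):
attribute vec3 aCenter;    // the point's position, duplicated for all four quad vertices
attribute vec2 aCorner;    // texture coordinate: (0,0), (1,0), (0,1) or (1,1)
uniform mat4 uModelView;
uniform mat4 uProjection;
uniform float uPointSize;  // billboard size in view-space units
varying vec2 vTexCoord;
void main() {
    // Transform the shared center into view space, then offset along the
    // screen-aligned x/y axes so the quad always faces the camera.
    vec4 viewPos = uModelView * vec4(aCenter, 1.0);
    viewPos.xy += (aCorner - 0.5) * uPointSize;
    gl_Position = uProjection * viewPos;
    vTexCoord = aCorner;   // sample the circle texture with this in the fragment shader
}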
Assuming you are talking about a 2D mesh:
Create a circle game object (or a game object with a circle-shaped texture), save it as a prefab, and instantiate one copy at each vertex:
public GameObject circlePrefab;   // assign the circle prefab in the Inspector

void Start()
{
    var meshFilter = GetComponent<MeshFilter>();
    // Spawn one circle at every vertex (mesh vertices are in local space,
    // so convert them to world space first).
    foreach (var v in meshFilter.mesh.vertices)
    {
        Instantiate(circlePrefab, transform.TransformPoint(v), Quaternion.identity);
    }
}
The Metal Shading Language includes a lot of mathematical functions, but it seems most of the code in the official Metal documentation just uses it to map vertices from pixel space to clip space, like:
RasterizerData out;
// Start at the clip-space origin; w = 1 so the perspective divide is a no-op.
out.clipSpacePosition = vector_float4(0.0, 0.0, 0.0, 1.0);
float2 pixelSpacePosition = vertices[vertexID].position.xy;
vector_float2 viewportSize = vector_float2(*viewportSizePointer);
// Map pixel coordinates (centered on the viewport) into the [-1, 1] clip-space range.
out.clipSpacePosition.xy = pixelSpacePosition / (viewportSize / 2.0);
out.color = vertices[vertexID].color;
return out;
Apart from GPGPU work that uses kernel functions for parallel computation, what else can a vertex function do? Some examples would help. In a game, if all vertex positions were calculated by the CPU, why would the GPU still matter? What does a vertex function usually do?
Vertex shaders compute properties for vertices. That's their point. In addition to vertex positions, they also calculate lighting normals at each vertex, potentially texture coordinates, and various material properties used by lighting and shading routines. Then, in the fragment processing stage, those values are interpolated and sent to the fragment shader for each fragment.
In general, you don't modify vertices on the CPU. In a game, you'd usually load them from a file into main memory, put them into a buffer and send them to the GPU. Once they're on the GPU you pass them to the vertex shader on each frame along with the model, view, and projection matrices. A single buffer containing the vertices of, say, a tree or a car's wheel might be used multiple times; each time, all the CPU sends is the model, view, and projection matrices. The model matrix is used in the vertex shader to reposition and scale the vertices' positions in world space. The view matrix then moves and rotates the world around so that the virtual camera is at the origin and facing the appropriate way. Finally, the projection matrix modifies the vertices to put them into clip space.
There are other things a vertex shader can do, too. You can pass in vertices that are in a grid in the x-y plane, for example. Then in your vertex shader you can sample a texture and use that to generate the z-value. This gives you a way to change the geometry using a height map.
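A minimal sketch of that height-map idea, written in GLSL ES 3.00 rather than Metal for brevity (the Metal version is structurally the same; the uniform and attribute names are made up):
#version 300 es
// Displace a flat x-y grid using a height map sampled in the vertex shader.
uniform mat4 uModel, uView, uProjection;
uniform sampler2D uHeightMap;
in vec2 aGridPosition;   // vertex of a grid in the x-y plane, assumed to span [-1, 1]
void main() {
    // Explicit LOD, since the vertex stage has no derivatives to pick a mip level.
    float height = textureLod(uHeightMap, aGridPosition * 0.5 + 0.5, 0.0).r;
    // Model, view and projection applied exactly as described above.
    vec4 worldPos = uModel * vec4(aGridPosition, height, 1.0);
    gl_Position = uProjection * uView * worldPos;
}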
On older hardware (and some lower-end mobile hardware) it was expensive to do calculations on a texture coordinate before using it to sample from a texture, because you lose some cache coherency. For example, if you wanted to sample several pixels in a column, you might loop over them, adding an offset to the current texture coordinate and then sampling with the result. One trick was to do the calculation on the texture coordinates in the vertex shader, have them automatically interpolated before being sent to the fragment shader, and then do a plain look-up in the fragment shader. (I don't think this is an optimization on modern hardware, but it was a big win on some older models.)
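A rough sketch of that trick in GLSL ES 3.00 (illustrative names; three taps down a column):
#version 300 es
// Old-style optimization: compute the tap coordinates per vertex so they arrive in the
// fragment shader pre-interpolated and are used unmodified for the texture fetch.
uniform vec2 uTexelSize;   // 1.0 / texture dimensions
in vec2 aPosition;
in vec2 aTexCoord;
out vec2 vTap0, vTap1, vTap2;
void main() {
    vTap0 = aTexCoord;
    vTap1 = aTexCoord + vec2(0.0, uTexelSize.y);         // one texel down the column
    vTap2 = aTexCoord + vec2(0.0, 2.0 * uTexelSize.y);   // two texels down
    gl_Position = vec4(aPosition, 0.0, 1.0);
}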
First, I'll address this statement:
In a game, if all vertex positions were calculated by the CPU, why would the GPU still matter? What does a vertex function usually do?
I don't believe I've ever seen anyone calculate vertex positions on the CPU for meshes that will later be rendered on the GPU. It's slow, you would need to get all that data from the CPU to the GPU (which means copying it across a bus if you have a dedicated GPU), and it's just not that flexible. Much more than vertex positions is required to produce any meaningful image, and calculating all of that on the CPU is just wasteful, since the CPU doesn't need this data for the most part.
The sole purpose of the vertex shader is to provide the rasterizer with primitives in clip space. But there are some other uses, mostly tricks based on different GPU features.
For example, vertex shaders can write out data to buffers, so you can stream out transformed geometry if you don't want to transform it again in a later vertex stage, e.g. when multi-pass rendering uses the same geometry in more than one pass.
You can also use a vertex shader to output just one triangle that covers the whole screen, so that the fragment shader gets called once per pixel for the entire screen (though, honestly, you are better off using compute (kernel) shaders for this).
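For example, the classic single-triangle full-screen pass, sketched in GLSL ES 3.00 (drawn with something like glDrawArrays(GL_TRIANGLES, 0, 3); no vertex buffer needed):
#version 300 es
void main() {
    // IDs 0, 1, 2 map to corners (-1,-1), (3,-1), (-1,3): one big triangle
    // whose excess area is simply clipped away at the viewport edges.
    vec2 corner = vec2(float((gl_VertexID << 1) & 2), float(gl_VertexID & 2));
    gl_Position = vec4(corner * 2.0 - 1.0, 0.0, 1.0);
}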
You can also write out data from the vertex shader without generating any primitives at all, by emitting degenerate triangles. You can use this to compute bounding boxes: using atomic operations you update min/max positions and read them back at a later stage. This is useful for light culling, frustum culling, tile-based processing and many other things.
But, and it's a BIG BUT, you can do most of this in a compute shader without making the GPU run the whole vertex-assembly pipeline. That means you can do full-screen effects using just a compute shader (instead of a vertex and fragment shader and the many pipeline stages in between, such as rasterization, primitive culling, depth testing and output merging). You can likewise calculate bounding boxes and do light culling or frustum culling in a compute shader.
There are reasons to fire up the whole rendering pipeline instead of just running a compute shader, for example if you will actually use the triangles that come out of the vertex shader, or if you aren't sure how primitives are laid out in memory and need the vertex assembler to do the heavy lifting of assembling them. But, getting back to your point, almost all of the reasonable uses for a vertex shader involve outputting primitives in clip space. If you aren't using the resulting primitives, it's probably best to stick to compute shaders.
I have a WebGL scene that wants to draw both point and line primitives, and am wondering: Is it possible to draw multiple WebGL primitives inside a single draw call?
My hunch is this is not possible, but WebGL is constantly surprising me with tricks one can do to accomplish strange edge cases, and searching has not let me confirm whether this is possible or not.
I'd be grateful for any insight others can offer on this question.
You can't draw WebGL lines, points, and triangles in the same draw call. What you can do is build points and lines out of triangles, and then issue a single draw call of triangles that happens to contain triangles that form points, triangles that form lines, and triangles that draw everything else.
Not a good example, but for fun, here's a vertex shader that generates points and lines from triangles on the fly.
There's also this article for an example of making lines from triangles.
How creative you want to get with your shaders versus doing things on the CPU is up to you, but it's common to draw lines with triangles, as the previous article points out, since WebGL lines can generally only be a single pixel thick.
It's also common to draw points with triangles, since:
WebGL is only required to support points of size 1; by drawing with triangles that limit is removed.
WebGL points are always aligned with the screen; triangle-based points are far more flexible. You can rotate the point, for example, or orient them in 3D. Here's a bunch of points made from triangles.
Triangle-based points can be scaled in 3D with no extra work.
In other words, a triangle-based point in 3D space will scale with distance from the camera using standard 3D math. A WebGL point requires you to compute the size the point should be so you can set gl_PointSize, which is extra work if you want it to scale with the scene.
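For reference, that extra work looks roughly like this in a WebGL vertex shader (a common approximation; the uniform names are made up):
// Make gl_PointSize shrink with distance the way normal 3D geometry would.
uniform mat4 uProjection, uModelView;
uniform float uWorldSize;        // desired point diameter in world units
uniform float uViewportHeight;   // canvas height in pixels
attribute vec3 aPosition;
void main() {
    gl_Position = uProjection * uModelView * vec4(aPosition, 1.0);
    // Project the world-space size into pixels: scale by the projection's y focal
    // length and the half viewport height, then divide by the clip-space w (depth).
    gl_PointSize = uWorldSize * uProjection[1][1] * uViewportHeight / (2.0 * gl_Position.w);
}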
It's not common to mix points, lines, and triangles in a single draw call but it's not impossible by any means.
I am working on an Android application that slims or fattens faces after detecting them. Currently, I have achieved that by using the thin-plate spline algorithm:
http://ipwithopencv.blogspot.com.tr/2010/01/thin-plate-spline-example.html
The problem is that the algorithm is not fast enough for me, so I decided to switch to OpenGL. After some research, I see that a lookup table texture is the best option for this. I have a set of control points for the source image and their new positions for the warp effect.
How should I create the lookup table texture to get the warp effect?
Are you really sure you need a lookup texture?
It seems it'd be better if you had a textured rectangular mesh (or a non-rectangular one, of course, since the face-detection algorithm you have most likely returns a face-shaped mesh) and warped that according to the algorithm.
Not only would you be able to do that in a vertex shader, processing each mesh node in parallel, but it is also fewer values to process compared to generating a texture dynamically.
The most compatible way to achieve that is to give each mesh point a Y coordinate of 0 and an X coordinate holding its index, and then pass a texture (maybe even a buffer texture, if the target devices support it) to the vertex shader, where the R and G channels at that index contain the desired X and Y coordinates.
Inside the vertex shader, the coordinates are then loaded from the texture.
This approach allows dynamic warping without re-uploading geometry, as long as the data texture is kept up to date, for example by updating it in a pixel shader.
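A minimal sketch of that vertex shader in GLSL ES 3.00 (the names are illustrative, and it assumes the warped positions are stored one texel per mesh node, already in clip space):
#version 300 es
uniform sampler2D uWarpedPositions;  // R,G channels hold the warped x,y for each node
in float aVertexIndex;               // node index, stored in the vertex's x coordinate
in vec2 aTexCoord;                   // original texture coordinate of the node
out vec2 vTexCoord;
void main() {
    // Fetch this node's warped position from the data texture by index.
    vec2 warped = texelFetch(uWarpedPositions, ivec2(int(aVertexIndex), 0), 0).rg;
    gl_Position = vec4(warped, 0.0, 1.0);
    vTexCoord = aTexCoord;   // the face image is sampled with the original coordinates
}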
I am trying to write a little script to apply a texture to rectangular cuboids. To accomplish this, I run through the scene graph, and wherever I find an SoIndexedFaceSet node, I insert an SoTexture2 node before it and put my image file in that SoTexture2 node. The problem I am facing is that the texture is applied correctly to two of the faces (say face 1 and face 2), in the Y-Z plane, but on the other four faces it just stretches the texture from the boundaries of those two faces.
It looks something like this.
The front is how it should look, but as you can see, on the other two faces, it just extrapolates the corner values of the front face. Any ideas why this is happening and any way to avoid this?
Yep, assuming that you did not specify texture coordinates for your SoIndexedFaceSet, that is exactly the expected behavior.
If Open Inventor sees that you have applied a texture image to a geometry and did not specify texture coordinates, it will automatically compute some texture coordinates. Of course it's not possible to guess how you wanted the texture to be applied. So it computes the bounding box then computes texture coordinates that stretch the texture across the largest extent of the geometry (XY, YZ or XZ). If the geometry is a cuboid you can see the effect clearly as in your image. This behavior can be useful, especially as a quick approximation.
What you need to make this work the way you want, is to explicitly assign texture coordinates to the geometry such that the texture is mapped separately to each face. In Open Inventor you can actually still share the vertices between faces because you are allowed to specify different vertex indices and texture coordinate indices (of course this is only more convenient for the application because OpenGL doesn't support this and Open Inventor has to re-shuffle the data internally). If you applied the same texture to an SoCube node you would see that the texture is mapped separately to each face as expected. That's because SoCube defines texture coordinates for each face.
We have an iOS drawing app. Currently, the drawing is implemented with OpenGL ES 1.1. We use some algorithms to smooth the lines, such as Bezier curves. So, when touch events occur, we derive a set of points from the touch-event points (based on those algorithms) and draw them. We also use a brush texture for the points, for a more natural look.
I wonder if it's possible to implement these algorithms in OpenGL ES 2.0 shaders: something like calling an OpenGL function to draw lines made of the touch points and getting a smoothed, brush-textured curve rendered as output.
Points P0, P1, ..., P4 here are touch events, and the points on the red curve are the output points, with the step for T chosen so that the distance between two neighboring points on the curve is no greater than 1 pixel.
And here is the link with Bezier algorithm explanation:
Bézier curve - Wikipedia, the free encyclopedia
Any help is much appreciated.
Thanks.
You cannot generate new vertices inside the vertex shader (you can do that in the geometry shader, which ES doesn't have). The number of output vertices is always the same as the number of input vertices; you can only change their positions (and other attributes, of course).
So you would have to draw a line strip made of enough vertices to guarantee a smooth enough curve. What you can do is always submit the same line strip, storing the curve parameter values T as 1D vertex positions. In the shader you then use this input position (the parameter value) to compute the actual 2D/3D position on the curve using the De Casteljau algorithm (or whatever), with the points P0 to P4 passed into the shader as constants (uniform variables, in GLSL terms).
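A minimal sketch of that vertex shader in GLSL ES 1.00, with the De Casteljau steps unrolled for five 2D control points (the attribute and uniform names are made up):
attribute float aT;    // curve parameter in [0, 1], one per line-strip vertex
uniform vec2 uP[5];    // control points P0..P4
uniform mat4 uMvp;
void main() {
    // De Casteljau: repeatedly lerp neighbouring control points until one remains.
    vec2 a = mix(uP[0], uP[1], aT);
    vec2 b = mix(uP[1], uP[2], aT);
    vec2 c = mix(uP[2], uP[3], aT);
    vec2 d = mix(uP[3], uP[4], aT);
    vec2 e = mix(a, b, aT);
    vec2 f = mix(b, c, aT);
    vec2 g = mix(c, d, aT);
    gl_Position = uMvp * vec4(mix(mix(e, f, aT), mix(f, g, aT), aT), 0.0, 1.0);
}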
But I'm not sure if that would really buy you anything over just computing those points on the CPU and putting them into a dynamic VBO. What you save is the copying of the curve points from CPU to GPU and the computation on the CPU, but on the other hand your vertex shader becomes much more complex. It needs to be evaluated which is the better approach. If you need to compute the curve points each frame (because the control points change each frame) and the curve is fairly high-detail, it might not be that bad an idea. But otherwise I don't think it really pays off. And your shader also won't adapt that easily to a changing number of control points or a changing curve degree at runtime.
But once again, you cannot put in 5 control points and generate N curve points on the GPU. The vertex shader always works on a single vertex and results in a single vertex, just as the fragment shader always works on a single fragment (say, a pixel, though it isn't one yet) and results in a single (or no) fragment.