I've been making progress in a fan-replicated game I'm coding, but I'm stuck with this problem.
Right now I'm drawing a texture pixel by pixel along the curve path, but this cuts the frame rate from 4000 to 50 FPS on long curves.
I need to store pixel-by-pixel Vector2 + length data anyway, so I can produce constant-speed movement along the curve, and I loop through that data to draw the curve as well.
The curves I need to be able to draw are Bézier, circular, and Catmull-Rom.
Any ideas of how to make it more efficient?
Maybe I have misunderstood the question but I did this once:
Create the curve and sample x points on it. (Red dots)
Create a mesh from it by calculating the cross (perpendicular) vector at each point. (Green lines)
Build a quad between each consecutive pair of these. So basically 5 of them in my picture.
Set the U coordinate to be on the perpendicular plane and let the V coordinate follow the curve length: 0 at the start and 1 at the end of it.
You can of course scale V if you want your texture to repeat.
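Here is a minimal sketch of those steps, assuming a hypothetical `sampleCurve(t)` that evaluates whichever curve type is in use (Bézier, circular, Catmull-Rom) for t in [0, 1]; all other names are illustrative:

```typescript
type Vec2 = { x: number; y: number };
// Assumed to exist: evaluates the active curve for t in [0, 1].
declare function sampleCurve(t: number): Vec2;

function buildCurveRibbon(samples: number, halfWidth: number) {
  // Sample the curve (red dots) and accumulate arc length for V.
  const pts: Vec2[] = [];
  const lengths: number[] = [0];
  for (let i = 0; i < samples; i++) {
    pts.push(sampleCurve(i / (samples - 1)));
    if (i > 0) {
      lengths.push(lengths[i - 1] +
        Math.hypot(pts[i].x - pts[i - 1].x, pts[i].y - pts[i - 1].y));
    }
  }
  const total = lengths[samples - 1];

  const positions: number[] = [];
  const uvs: number[] = [];
  const indices: number[] = [];
  for (let i = 0; i < samples; i++) {
    // Tangent from neighboring samples; its perpendicular is the "green line".
    const a = pts[Math.max(i - 1, 0)];
    const b = pts[Math.min(i + 1, samples - 1)];
    const len = Math.hypot(b.x - a.x, b.y - a.y) || 1;
    const nx = -(b.y - a.y) / len;
    const ny = (b.x - a.x) / len;

    // Two vertices per sample: U = 0 / 1 across the ribbon,
    // V = accumulated length along the curve (0 at start, 1 at end).
    const v = lengths[i] / total;
    positions.push(pts[i].x + nx * halfWidth, pts[i].y + ny * halfWidth);
    uvs.push(0, v);
    positions.push(pts[i].x - nx * halfWidth, pts[i].y - ny * halfWidth);
    uvs.push(1, v);

    // One quad (two triangles) between each consecutive pair of samples.
    if (i > 0) {
      const k = i * 2;
      indices.push(k - 2, k - 1, k, k - 1, k + 1, k);
    }
  }
  return { positions, uvs, indices, lengths, total };
}
```

Since the accumulated lengths are returned as well, the same array doubles as the length lookup the question needs for constant-speed movement along the curve.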
Any ideas of how to make it more efficient?
Assuming the texture needs to be dynamic, draw it on the GPU side using a shader. Drawing it on the CPU side is not only slow, it also bogs down both the CPU and the GPU, because you need to send the result back to the GPU every frame.
I need to store pixel-by-pixel Vector2 + length data anyway
The shader can store additional information in the texture. E.g. even though you may allocate an RGBA texture, it doesn't have to store color information, since it is your shaders that will interpret the data.
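As a rough illustration of that last point, here's a sketch that packs the asker's per-sample Vector2 + length data into a float RGBA texture (assuming a WebGL2 context; the function name and channel layout are mine):

```typescript
// Sketch: pack per-sample curve data (x, y, accumulated length) into an
// RGBA float texture so a shader can read it; the A channel is spare.
function uploadCurveData(gl: WebGL2RenderingContext,
                         xs: number[], ys: number[], lens: number[]) {
  const n = xs.length;
  const data = new Float32Array(n * 4);
  for (let i = 0; i < n; i++) {
    data[i * 4 + 0] = xs[i];   // "R" = x position
    data[i * 4 + 1] = ys[i];   // "G" = y position
    data[i * 4 + 2] = lens[i]; // "B" = accumulated length
    data[i * 4 + 3] = 0;       // "A" = unused
  }
  const tex = gl.createTexture();
  gl.bindTexture(gl.TEXTURE_2D, tex);
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA32F, n, 1, 0, gl.RGBA, gl.FLOAT, data);
  // Float textures aren't filterable without an extension, so use NEAREST.
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
  return tex;
}
```

A shader can then fetch sample i with texelFetch and interpret the channels however it likes.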
I have a WebGL scene that wants to draw both point and line primitives, and am wondering: Is it possible to draw multiple WebGL primitives inside a single draw call?
My hunch is this is not possible, but WebGL is constantly surprising me with tricks one can do to accomplish strange edge cases, and searching has not let me confirm whether this is possible or not.
I'd be grateful for any insight others can offer on this question.
You can't draw WebGL lines, points, and triangles in the same draw call. You can, however, generate points and lines from triangles and then just draw triangles: one draw call that happens to contain triangles that form points, triangles that form lines, and triangles that draw everything else.
Not a good example, but for fun here's a vertex shader that generates points and lines from triangles on the fly.
There's also this for an example of making lines from triangles
How creative you want to get with your shaders vs. doing things on the CPU is up to you, but it's common to draw lines with triangles, as the previous article points out, since WebGL lines can generally only be a single pixel thick.
It's also common to draw points with triangles, since:
- WebGL is only required to support points of size 1; by drawing with triangles that limit is removed.
- WebGL points are always aligned with the screen, while triangle-based points are far more flexible: you can rotate them, for example, or orient them in 3D. Here's a bunch of points made from triangles.
- Triangle-based points can be scaled in 3D with no extra work. In other words, a triangle-based point in 3D space will scale with distance from the camera using standard 3D math, whereas a WebGL point requires you to compute the size it should be and set gl_PointSize, so it takes extra work to scale with the scene.
It's not common to mix points, lines, and triangles in a single draw call but it's not impossible by any means.
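For illustration, a CPU-side sketch of that idea (all names are mine): every point becomes a quad of two triangles and every line segment becomes a thin quad, so a single gl.TRIANGLES draw call covers them all:

```typescript
// Sketch: expand points and line segments into triangles so that one
// drawArrays(gl.TRIANGLES, ...) call renders all of them together.
function pointToTriangles(x: number, y: number, size: number): number[] {
  const h = size / 2;
  // Two triangles forming a quad centered on the point.
  return [
    x - h, y - h,  x + h, y - h,  x + h, y + h,
    x - h, y - h,  x + h, y + h,  x - h, y + h,
  ];
}

function lineToTriangles(x0: number, y0: number,
                         x1: number, y1: number, width: number): number[] {
  // Perpendicular of the segment direction, scaled to half the line width.
  const len = Math.hypot(x1 - x0, y1 - y0) || 1;
  const nx = (-(y1 - y0) / len) * (width / 2);
  const ny = ((x1 - x0) / len) * (width / 2);
  return [
    x0 + nx, y0 + ny,  x1 + nx, y1 + ny,  x1 - nx, y1 - ny,
    x0 + nx, y0 + ny,  x1 - nx, y1 - ny,  x0 - nx, y0 - ny,
  ];
}

// All primitives concatenated into one buffer => one draw call.
const vertices = new Float32Array([
  ...pointToTriangles(100, 100, 8),
  ...lineToTriangles(0, 0, 200, 150, 2),
]);
```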
I am working on an Android application that slims or fattens faces by detecting them. Currently, I have achieved that by using the thin-plate spline algorithm.
http://ipwithopencv.blogspot.com.tr/2010/01/thin-plate-spline-example.html
The problem is that the algorithm is not fast enough for me, so I decided to switch to OpenGL. After some research, I see that a lookup table texture is the best option for this. I have a set of control points for the source image and their new positions for the warp effect.
How should I create lookup table texture to get warp effect?
Are you really sure you need a lookup texture?
Seems that it'd be better if you had a textured rectangular mesh (or a non-rectangular mesh, of course, as the face detection algorithm you have most likely returns a face-like mesh) and warped it according to the algorithm:
Not only would you be able to do that in a vertex shader, thus processing each mesh node in parallel, but it's also fewer values to process compared to dynamic texture generation.
The most compatible method to achieve that is to give each mesh point a Y coordinate of 0 and an X coordinate storing the mesh index, and then pass a texture (maybe even a buffer texture, if the target devices support it) to the vertex shader, where at the needed index the R and G channels contain the desired X and Y coordinates.
Inside the vertex shader, the coordinates are to be loaded from the texture.
This approach allows for dynamic warping without reloading geometry, as long as the data texture is properly updated, for example inside a pixel shader.
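A sketch of what the vertex-shader side of that could look like (GLSL ES 3.0 with a plain one-row data texture rather than a buffer texture; the attribute and uniform names are my own):

```typescript
// Sketch: each mesh vertex carries only its node index; the warped position
// is fetched from a data texture that can be rewritten every frame.
const warpVertexShader = `#version 300 es
in float a_index;               // mesh node index, stored in the X attribute
uniform sampler2D u_positions;  // R,G channels hold the desired X,Y per node

void main() {
  // Look up this node's warped position at column a_index of row 0.
  vec2 pos = texelFetch(u_positions, ivec2(int(a_index), 0), 0).rg;
  gl_Position = vec4(pos, 0.0, 1.0);
}`;
```

Updating u_positions (for example by rendering new node positions into it with a pixel shader) then warps the mesh without touching the geometry buffers.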
We have an iOS drawing app. Currently, the drawing is implemented with OpenGL ES 1.1. We use some algorithms to smooth the lines such as Bezier curves. So, when touch events occur, we get some set of points out of touch event points (based on algorithms) and draw these points. We also use brush texture for points to have more natural look.
I wonder if it's possible to implement these algorithms in OpenGL ES 2.0 shaders. Something like calling an OpenGL function to draw lines made of touch points and getting a smoothed, brush-textured curve rendered as output.
Points P0, P1, ..., P4 here are the touch events, and the points on the red curve are the output points, with the step for T chosen so that the distance between two neighboring points on the curve is not greater than 1 pixel.
And here is the link with Bezier algorithm explanation:
Bézier curve - Wikipedia, the free encyclopedia
Any help is much appreciated.
Thanks.
You cannot generate new vertices inside the vertex shader (you can do that in the geometry shader, which ES doesn't have). The number of output vertices is always the same as the number of input vertices; you can only change their positions (and other attributes, of course).
So you would have to draw a line strip made of enough vertices to guarantee a smooth enough curve. What you can do is always feed in the same line strip, with the curve parameter values T as 1D vertex positions. In the shader you then use this input position (the parameter value) to compute the actual 2D/3D position on the curve using the De Casteljau algorithm (or whatever), with the points P0 to P4 put into the shader as constants (uniform variables in GLSL terms).
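A sketch of that setup (the GLSL kept in a string; names are my own). The only per-vertex input is the parameter value T, and P0..P4 arrive as a uniform array:

```typescript
// Sketch: the vertex "position" is just the curve parameter t; the shader
// evaluates the quartic Bézier through P0..P4 with De Casteljau's algorithm.
const bezierVertexShader = `
attribute float a_t;        // curve parameter in [0, 1], one per strip vertex
uniform vec2 u_points[5];   // control points P0..P4

void main() {
  vec2 p[5];
  for (int i = 0; i < 5; i++) p[i] = u_points[i];
  // De Casteljau: repeatedly lerp neighboring points until one remains.
  for (int r = 4; r > 0; r--)
    for (int i = 0; i < 4; i++)
      if (i < r) p[i] = mix(p[i], p[i + 1], a_t);
  gl_Position = vec4(p[0], 0.0, 1.0);
}`;
```

The line strip itself is then just a static buffer of increasing t values (0, 1/N, 2/N, ..., 1) that never changes.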
But I'm not sure that would really buy you anything over just computing those points on the CPU and putting them into a dynamic VBO. What you save is the copying of the curve points from CPU to GPU and the computation on the CPU, but on the other hand your vertex shader becomes much more complex. You would have to evaluate which approach is better. If you need to compute the curve points every frame (because the control points change every frame) and the curve is fairly high-detail, it might not be a bad idea. Otherwise I don't think it really pays off. And your shader won't adapt easily to a changing number of control points or a different curve degree at runtime.
But once again, you cannot put in 5 control points and generate N curve points on the GPU. The vertex shader always works on a single vertex and results in a single vertex, just as the fragment shader always works on a single fragment (say pixel, though it isn't one yet) and results in a single (or no) fragment.
I am trying to draw a series of squares in XNA. I am looking at all these articles about TriangleStrips and DynamicVertexBuffers, but I'm not sure where to begin.
Current step
I am able to draw 1 square using VertexPositionColor, TriangleList and indices. Now I want to draw a series of squares with varying colors.
End Goal
Something to keep in mind is the number of such squares that I would like to be able to draw, eventually. If we assume a 5px width, on a 1920x1080 screen, we can calculate the number of squares to be (1920 * 1080) / 25 = 82944.
Any pointers on how to accomplish this would be great!
Generally, you can draw more squares the same way you drew the first one. However, with one draw call per square there will be a significant loss in performance.
Instead, you can add all the triangles to one vertex buffer / index buffer. You are already able to draw two triangles as a triangle list, and you should be able to easily adjust this routine to draw more than two triangles. Just add the corresponding vertices and indices to the buffers and modify the draw call.
If you need vertices at the same position with different colors, you need to add two vertices to the buffer.
This way the performance loss is very small, because you draw everything with a single draw call. Although this amount of triangles should be no problem for most graphics cards, some smaller or older ones can get into trouble. If so, you should consider changing your drawing strategy; maybe it isn't even necessary to draw that many triangles. But you can think about that if the resulting performance is too low...
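A sketch of the buffer layout (written in TypeScript-style pseudocode rather than XNA's C#, but the index pattern is the same): four vertices and six indices per square, all appended to a single pair of buffers:

```typescript
// Sketch: pack many colored squares into one vertex/index buffer pair,
// four vertices and six indices (two triangles) per square.
type Square = { x: number; y: number; size: number; color: [number, number, number] };

function buildBuffers(squares: Square[]) {
  const vertices: number[] = []; // x, y, r, g, b per vertex
  const indices: number[] = [];
  squares.forEach((s, n) => {
    const base = n * 4;
    const corners = [
      [s.x, s.y], [s.x + s.size, s.y],
      [s.x + s.size, s.y + s.size], [s.x, s.y + s.size],
    ];
    // Vertices are not shared between squares, so each square
    // can carry its own color.
    for (const [cx, cy] of corners) vertices.push(cx, cy, ...s.color);
    // Two triangles per square.
    indices.push(base, base + 1, base + 2, base, base + 2, base + 3);
  });
  return { vertices, indices };
}
```

Note that 82944 squares means 331776 vertices, which is beyond the 16-bit index range, so 32-bit indices or multiple batches would be needed.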
If you don't care about 3D, just 2D, you can use SpriteBatch to draw squares/rectangles on the screen. This will handle batching and all the vertex/index buffer management for you.
I got a problem with texture coordinates. First I would like to describe what I want to do then I will ask the question.
I want a mesh that uses several textures to end up using only one big texture. The big texture merges all the textures the mesh uses. I have made a routine that merges textures, that is no problem, but I still have to modify the texture coordinates so that the mesh, which now uses one texture instead of many, still has everything placed correctly.
See the picture:
In the upper left corner there is one of the textures (let's call it A) that I merged into the big texture, which is the one on the right (B). A's top left is 0,0 and its bottom right is 1,1. For ease of use let's say that B.width = A.width * 2, and the same for the height. So on B the mini texture (M, which is the original A) should have its bottom-right at 0.5,0.5.
I had no problem understanding this so far, and I hope I got it right. But the problem is that on the original A there are texture coordinates that are above 1 or negative. What should these be on M?
Let's say A has -0.1,0; is that -0.05,0 on M inside B?
What about the numbers outside the 0..1 region? Is -3.2,0 on A -1.6 or -3.1 on B? That is, do I keep the whole part and divide only the fractional (mod 1) part by 2 (because, as stated above, the width is doubled), or should I divide the whole number by 2? As far as I understand, numbers outside this region are about wrapping (repeating or mirroring) the texture. How do I manage this so the output does not contain the orange texture from B?
If my question is not clear enough (I am not very skilled in English), please ask and I will edit/answer; just help me clear up my confusion :)
Thanks in advance:
Péter
A single texture has coordinates in the [0-1, 0-1] range.
The new texture also has coordinates in the [0-1, 0-1] range.
In your new texture, composed of four single textures, your algorithm has to translate texture coordinates this way:
The blue single square texture will have new coordinates in the [0-0.5, 0-0.5] range.
The orange single square texture will have new coordinates in the [0.5-1, 0-0.5] range.
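To also handle the out-of-range coordinates asked about above, here is a sketch of one approach, assuming the original texture was sampled with repeat wrapping (so u and u + 1 address the same texel): wrap the coordinate into [0, 1) first, then scale and offset into the sub-rectangle.

```typescript
// Sketch: remap a texture coordinate from a single texture's own space into
// that texture's sub-rectangle of the atlas. Assumes REPEAT wrapping on the
// original texture, so wrapping into [0, 1) doesn't change the sampled texel.
function remapToAtlas(u: number, v: number,
                      offsetU: number, offsetV: number,
                      scaleU: number, scaleV: number): [number, number] {
  // fract(): -0.1 -> 0.9, -3.2 -> 0.8, values already in [0, 1) are unchanged.
  const fract = (x: number) => x - Math.floor(x);
  return [offsetU + fract(u) * scaleU, offsetV + fract(v) * scaleV];
}

// Texture A occupying the top-left quarter of B (offset 0,0, scale 0.5):
remapToAtlas(-0.1, 0, 0, 0, 0.5, 0.5); // -> [0.45, 0], not [-0.05, 0]
```

The catch is that this per-vertex remap is only safe if no triangle spans a wrap seam; where one does, the mesh has to be split at the seam, or the wrapping has to be done per fragment in a shader, otherwise neighboring atlas tiles (the orange texture) will bleed in.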