I'm making a drawing application in Swift (based on GLPaint) and OpenGL. Now I would like to improve the curve so that it varies with stroke speed (e.g. thicker if drawing fast).
However, since my knowledge of OpenGL is quite limited, I need some guidance. What I want to do is to vary the size of my texture/point for each CGPoint I calculate and add to the screen. Is it possible?
func addQuadBezier(var from: CGPoint, var ctrl: CGPoint, var to: CGPoint, startTime: CGFloat, endTime: CGFloat) {
    scalePoints(from: from, ctrl: ctrl, to: to)
    let pointCount = calculatePointsNeeded(from: from, to: to, min: 16.0, max: 256.0)
    // Two GLfloats (x, y) per point
    var vertexBuffer = [GLfloat](count: Int(pointCount) * 2, repeatedValue: 0.0)
    var t: CGFloat = startTime + 0.0002
    for i in 0..<Int(pointCount) {
        let p = calculatePoint(from: from, ctrl: ctrl, to: to, t: t)
        vertexBuffer[i*2] = p.x.f
        vertexBuffer[i*2 + 1] = p.y.f
        t += CGFloat(1) / CGFloat(pointCount)
    }
    glBufferData(GL_ARRAY_BUFFER.ui, Int(pointCount) * 2 * sizeof(GLfloat), vertexBuffer, GL_STATIC_DRAW.ui)
    glDrawArrays(GL_POINTS.ui, 0, Int(pointCount).i)
}
func render() {
    context.presentRenderbuffer(GL_RENDERBUFFER.l)
}
where render() is called every 1/60 s.
Vertex shader:
attribute vec4 inVertex;
uniform mat4 MVP;
uniform float pointSize;
uniform lowp vec4 vertexColor;
varying lowp vec4 color;

void main()
{
    gl_Position = MVP * inVertex;
    gl_PointSize = pointSize;
    color = vertexColor;
}
Thanks in advance!
In your vertex shader, set gl_PointSize to the width you want. That measurement is in framebuffer pixels, so if the size of your framebuffer changes with the device's scale factor, you'll need to adjust your point size accordingly.
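On iOS, for example, a minimal hedged sketch of that adjustment could look like this (pointSizeUniform and brushSize are hypothetical names, not from the code above):

import UIKit
import OpenGLES

// Convert the brush size from points to framebuffer pixels so the stroke
// looks the same on Retina and non-Retina screens. Names are hypothetical.
let pixelSize = GLfloat(brushSize) * GLfloat(UIScreen.main.scale)
glUniform1f(pointSizeUniform, pixelSize)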
If you find a way to control the line width in the vertex shader, it would most likely be the best solution. Not only could different lines have different widths, but a single line could have a varying width, interpolated between its points. I am not sure you will be able to achieve this on your platform, though.
So if you do find a way, you would add the point size to your buffer and use it with a new attribute in the vertex shader.
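For instance, here is a hedged Swift sketch of the CPU side, assuming the vertex shader declares attribute float inPointSize and assigns it to gl_PointSize (the function and variable names are made up):

import CoreGraphics
import OpenGLES

// Interleave (x, y, pointSize) per vertex; the per-point size could be
// derived from stroke speed. Assumes an "inPointSize" attribute exists.
func uploadPoints(points: [CGPoint], sizes: [CGFloat], program: GLuint) {
    var vertexData: [GLfloat] = []
    for (p, s) in zip(points, sizes) {
        vertexData += [GLfloat(p.x), GLfloat(p.y), GLfloat(s)]
    }
    glBufferData(GLenum(GL_ARRAY_BUFFER),
                 vertexData.count * MemoryLayout<GLfloat>.size,
                 vertexData, GLenum(GL_STATIC_DRAW))
    // The size is the third float of each 3-float vertex, hence stride 3
    // and a 2-float offset. The position attribute needs the same stride.
    let stride = GLsizei(3 * MemoryLayout<GLfloat>.size)
    let sizeLoc = GLuint(glGetAttribLocation(program, "inPointSize"))
    glEnableVertexAttribArray(sizeLoc)
    glVertexAttribPointer(sizeLoc, 1, GLenum(GL_FLOAT), GLboolean(GL_FALSE),
                          stride,
                          UnsafeRawPointer(bitPattern: 2 * MemoryLayout<GLfloat>.size))
}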
If not, you will need to use triangles to draw the line, which is generally better practice anyway. To define the vertices between points A and B, you can take the segment direction W = (B-A).normalized() and its normal N = (W.y, -W.x). With k = lineWidth/2.0, the four positions are t1 = A + N*k, t2 = A - N*k, t3 = B + N*k, t4 = B - N*k. This is what you add into your buffer and draw as a triangle strip or triangles, depending on what you are looking for.
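A small Swift sketch of that construction (the function name is made up; it assumes A != B):

import CoreGraphics

// Four corners of a lineWidth-wide quad spanning a to b, in triangle-strip
// order, following the math above.
func quadCorners(a: CGPoint, b: CGPoint, lineWidth: CGFloat) -> [CGPoint] {
    let len = ((b.x - a.x) * (b.x - a.x) + (b.y - a.y) * (b.y - a.y)).squareRoot()
    let w = CGPoint(x: (b.x - a.x) / len, y: (b.y - a.y) / len)  // W = (B-A).normalized()
    let n = CGPoint(x: w.y, y: -w.x)                             // N = (W.y, -W.x)
    let k = lineWidth / 2.0
    return [CGPoint(x: a.x + n.x * k, y: a.y + n.y * k),         // t1
            CGPoint(x: a.x - n.x * k, y: a.y - n.y * k),         // t2
            CGPoint(x: b.x + n.x * k, y: b.y + n.y * k),         // t3
            CGPoint(x: b.x - n.x * k, y: b.y - n.y * k)]         // t4
}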
Related
I have a node size of 64x32 and a texture size of 192x192, and I am trying to draw the first part of this texture at the first node, the second part at the second node, and so on.
Fragment shader (attached to SKSpriteNode with texture size of 64x32)
void main() {
    float bX = 64.0 / 192.0 * (offset.x + 1.0);
    float aX = 64.0 / 192.0 * offset.x;
    float bY = 32.0 / 192.0 * (offset.y + 1.0);
    float aY = 32.0 / 192.0 * offset.y;
    float normalizedX = (bX - aX) * v_tex_coord.x + aX;
    float normalizedY = (bY - aY) * v_tex_coord.y + aY;
    gl_FragColor = texture2D(u_temp, vec2(normalizedX, normalizedY));
}
offset.x - in [0, 2]
offset.y - in [0, 5]
u_temp - the 192x192 texture
The normalizedX/normalizedY lines convert a value from [0, 1] to, for example, [0, 0.33].
But the result seems to be wrong:
[Image: SKSpriteNode with the attached texture]
[Image: SKSpriteNode without the texture (what I want to achieve with the texture)]
When a texture is in an atlas, it's no longer addressed by coordinates from (0,0) to (1,1). The atlas is really one large texture that has been assembled behind the scenes. When you use a particular named image from an atlas in a normal sprite, SpriteKit looks up that image name in information about how the atlas was assembled and then tells the GPU something like "draw this sprite with bigAtlasTexture, coordinates (0.1632,0.8814) through (0.1778,0.9143)". If you're going to write a custom shader using the same texture, you need that information about where it lives inside the atlas, which you get from textureRect:
https://developer.apple.com/documentation/spritekit/sktexture/1519707-texturerect
So your texture is not really one image; it's defined by a location, textureRect(), in a big packed-up image of many textures. I find it easiest to think in terms of (0,0) to (1,1), so when writing a shader I usually do: textureRect => subtract and scale to get to (0,0)-(1,1) => compute the desired modified coordinates => scale and add to get back to textureRect => texture2D lookup.
Since your shader will need to know about textureRect but you can't call that from the shader code, you have two choices:
1. Make an attribute or uniform to hold that information, fill it in from the outside, and then have the shader reference it.
2. If the shader is only used for a specific texture or for a few textures, you can generate shader code that's specialized for the required textureRect, i.e., it just has some constants in the code for that texture.
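For choice #1, a hedged sketch could pass the rectangle as uniforms (the uniform names are made up, and the shader would read them instead of compiled-in constants):

import SpriteKit
import simd

// Hand the atlas sub-rectangle to the shader as uniforms. `texture` and
// `shader` come from the surrounding code; the uniform names are hypothetical.
let rect = texture.textureRect()
shader.uniforms = [
    SKUniform(name: "u_rect_origin",
              vectorFloat2: vector_float2(Float(rect.origin.x), Float(rect.origin.y))),
    SKUniform(name: "u_rect_size",
              vectorFloat2: vector_float2(Float(rect.size.width), Float(rect.size.height)))
]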
Here's a part of an example using approach #2:
func myShader(forTexture texture: SKTexture) -> SKShader {
    // Be careful not to assume that the texture has v_tex_coord ranging in (0, 0) to
    // (1, 1)! If the texture is part of a texture atlas, this is not true. I could
    // make another attribute or uniform to pass in the textureRect info, but since I
    // only use this with a particular texture, I just pass in the texture and compile
    // in the required v_tex_coord transformations for that texture.
    let rect = texture.textureRect()
    let shaderSource = """
    void main() {
        // Normalize coordinates to (0,0)-(1,1)
        v_tex_coord -= vec2(\(rect.origin.x), \(rect.origin.y));
        v_tex_coord *= vec2(\(1 / rect.size.width), \(1 / rect.size.height));
        // Update the coordinates in whatever way here...
        // v_tex_coord = desired_transform(v_tex_coord)
        // And then go back to the actual coordinates for the real texture
        v_tex_coord *= vec2(\(rect.size.width), \(rect.size.height));
        v_tex_coord += vec2(\(rect.origin.x), \(rect.origin.y));
        gl_FragColor = texture2D(u_texture, v_tex_coord);
    }
    """
    let shader = SKShader(source: shaderSource)
    return shader
}
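Usage would then be something like this (the image name is hypothetical):

let texture = SKTexture(imageNamed: "spriteInAtlas")
let sprite = SKSpriteNode(texture: texture)
sprite.shader = myShader(forTexture: texture)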
That's a cut-down version of some specific examples from here:
https://github.com/bg2b/RockRats/blob/master/Asteroids/Hyperspace.swift
I have to draw something and I need two or more point sizes. I don't know how to get that; I only have one point size in my vertex shader.
<script id="myVertexShader" type="x-shader/x-vertex">#version 300 es
in vec3 VertexPosition;
in vec4 VertexColor;
out vec4 colorOut;
uniform float pointSize;

void main() {
    colorOut = VertexColor;
    gl_Position = vec4(VertexPosition, 1.0);
    gl_PointSize = 10.0;
}
</script>
Answer: You set gl_PointSize
Examples:
Using a constant
gl_PointSize = 20.0;
Using a uniform
uniform float pointSize;
gl_PointSize = pointSize;
Using some arbitrary formula
// base the size on the red and blue colors
gl_PointSize = abs(VertexColor.r * VertexColor.b) * 20.0;
Using an attribute
attribute float VertexSize;
...
gl_PointSize = VertexSize;
Any combination of the above (e.g.):
attribute float VertexSize;
uniform float baseSize;
// use a uniform, an attribute, some random formula, and a constant
gl_PointSize = baseSize + VertexSize + abs(VertexColor.r * VertexColor.b) * 10.0;
PS: the formula above is nonsense. The point is that you set gl_PointSize; how you set it is up to you.
Note there are issues with gl.POINTS:
WebGL implementations have a maximum point size. That maximum size is not required to be greater than 1.0, so if you want to draw points of any size you cannot use gl.POINTS.
WebGL doesn't guarantee whether or not large points whose center is outside the viewport will be drawn. So if you want to draw sizes larger than 1.0 and you want them to behave the same across devices, you can't use gl.POINTS.
See this
I queried the point size range with gl.getParameter(gl.ALIASED_POINT_SIZE_RANGE) and got [1, 1024]. This means that when using a single point to cover a texture (so that it triggers the fragment shader for all the pixels spanned by the pointSize), at best I cannot render images larger than 1024x1024, right?
I guess I have to submit 2 triangles (6 vertices) so they cover all of clip space, and then gl.viewport(x, y, width, height) will map this entire area to the output texture (framebuffer object or canvas)?
Is there any other way (maybe something new in WebGL2) other than using vertex attributes?
Correct, the largest area you can render with a single point is whatever is returned by gl.getParameter(gl.ALIASED_POINT_SIZE_RANGE).
The spec does not require any size larger than 1. The fact that your GPU/driver/browser returned 1024 does not mean that your users' machines will also return 1024.
note: answering based on your history of questions.
The normal thing to do in WebGL, in 99% of all cases, is to submit vertices. Want to draw a quad? Submit 4 vertices and 6 indices, or 6 vertices. Want to draw a triangle? Submit 3 vertices. Want to draw a circle? Submit the vertices for a circle. Want to draw a car? Submit the vertices for a car, or more likely submit the vertices for a wheel, draw 4 wheels with those vertices, then submit the vertices for the other parts of the car and draw each part.
You multiply those vertices by some matrices to move, scale, rotate, and project them into 2D or 3D space. All your favorite games do this. The Canvas 2D API does this via OpenGL ES internally. Chrome itself does this to render all the parts of this webpage. That's the norm. Anything else is an exception and will likely lead to limitations.
For fun, in WebGL2, there are some other things you can do. They are not the normal thing to do and they are not recommended to actually solve real world problems. They can be fun though just for the challenge.
In WebGL2 there is a built-in variable in the vertex shader called gl_VertexID, which is the index of the vertex currently being processed. You can use it with clever math to generate vertices in the vertex shader with no other data.
Here's some code that draws a quad that covers the canvas
function main() {
  const gl = document.querySelector('canvas').getContext('webgl2');

  const vs = `#version 300 es
  void main() {
    int x = gl_VertexID % 2;
    int y = (gl_VertexID / 2 + gl_VertexID / 3) % 2;
    gl_Position = vec4(ivec2(x, y) * 2 - 1, 0, 1);
  }
  `;

  const fs = `#version 300 es
  precision mediump float;
  out vec4 outColor;
  void main() {
    outColor = vec4(1, 0, 0, 1);
  }
  `;

  // compile shaders, link program
  const prg = twgl.createProgram(gl, [vs, fs]);
  gl.useProgram(prg);

  const count = 6;
  gl.drawArrays(gl.TRIANGLES, 0, count);
}
main();
<canvas></canvas>
<script src="https://twgljs.org/dist/4.x/twgl.min.js"></script>
And here is one that draws a circle:
function main() {
  const gl = document.querySelector('canvas').getContext('webgl2');

  const vs = `#version 300 es
  #define PI radians(180.0)
  void main() {
    const int TRIANGLES_AROUND_CIRCLE = 100;
    int triangleId = gl_VertexID / 3;
    int pointId = gl_VertexID % 3;
    int pointIdOffset = pointId % 2;
    float angle = float((triangleId + pointIdOffset) * 2) * PI /
                  float(TRIANGLES_AROUND_CIRCLE);
    float radius = 1. - step(1.5, float(pointId));
    float x = sin(angle) * radius;
    float y = cos(angle) * radius;
    gl_Position = vec4(x, y, 0, 1);
  }
  `;

  const fs = `#version 300 es
  precision mediump float;
  out vec4 outColor;
  void main() {
    outColor = vec4(1, 0, 0, 1);
  }
  `;

  // compile shaders, link program
  const prg = twgl.createProgram(gl, [vs, fs]);
  gl.useProgram(prg);

  const count = 300;  // 100 triangles, 3 points each
  gl.drawArrays(gl.TRIANGLES, 0, count);
}
main();
<canvas></canvas>
<script src="https://twgljs.org/dist/4.x/twgl.min.js"></script>
There is an entire website based on this idea. The site is built around the puzzle of making pretty pictures given only an id for each vertex. It's the vertex shader equivalent of shadertoy.com. On shadertoy.com the puzzle is: given only gl_FragCoord as input to a fragment shader, write a function to draw something interesting.
Both sites are toys/puzzles. Doing things this way is not recommended for solving real issues like drawing a 3D world in a game, doing image processing, rendering the contents of a browser window, etc. They are cute puzzles: given only minimal inputs, draw something interesting.
Why is this technique not advised? The most obvious reason is that it's hard-coded and inflexible, whereas the standard techniques are super flexible. For example, above, drawing a fullscreen quad required one shader and drawing a circle required a different shader, whereas standard vertex-buffer-based attributes multiplied by matrices can be used for any shape, 2D or 3D. Not just any shape: with a single matrix multiply in the shader, those shapes can be translated, rotated, scaled, and projected into 3D, and their rotation centers and scale centers can be set independently.
Note: you are free to do whatever you want. If you like these techniques, then by all means use them. The reason I'm trying to steer you away from them is that, based on your previous questions, you're new to WebGL, and I feel you'll end up making WebGL much harder for yourself if you use obscure, hard-coded techniques like these instead of the traditional, more flexible techniques that experienced devs use to get real work done. But again, it's up to you; do whatever you want.
I have an OpenGL view that displays a set of 3D points with some basic shaders:
// Fragment Shader
static const char* PointFS = STRINGIFY
(
    void main(void)
    {
        gl_FragColor = vec4(0.8, 0.8, 0.8, 1.0);
    }
);

// Vertex Shader
static const char* PointVS = STRINGIFY
(
    uniform mediump mat4 uProjectionMatrix;
    attribute mediump vec4 position;

    void main(void)
    {
        gl_Position = uProjectionMatrix * position;
        gl_PointSize = 3.0;
    }
);
And the MVP matrix is calculated as:
- (void)setMatrices
{
    // ModelView Matrix
    GLKMatrix4 modelViewMatrix = GLKMatrix4Identity;
    modelViewMatrix = GLKMatrix4Scale(modelViewMatrix, 2, 2, 2);

    // Projection Matrix
    const GLfloat aspectRatio = (GLfloat)(self.view.bounds.size.width) / (GLfloat)(self.view.bounds.size.height);
    const GLfloat fieldView = GLKMathDegreesToRadians(90.0f);
    const GLKMatrix4 projectionMatrix = GLKMatrix4MakePerspective(fieldView, aspectRatio, 0.1f, 10.0f);

    glUniformMatrix4fv(self.pointShader.uProjectionMatrix, 1, 0, GLKMatrix4Multiply(projectionMatrix, modelViewMatrix).m);
}
This works fine, but I have a set of 500 points and I see only a few.
How do I scale/translate the MVP matrix to display all of them (they are a dynamic set)? Ideally the "centroid" should be at the origin, and all of the points visible. It should be able to adapt to rotations of the view (gestures are the next step I want to implement).
Seeing how you present this, you might need quite a lot. I guess the best approach might be a "look at" setup: the point you are looking at is (0,0,0) as you stated, the camera position should probably be (0,0,Z), and up is (0,1,0). So the only unknown here is the Z component of the camera position.
If you start Z at, for instance, -.1 and then iterate through all the points, a point is visible vertically when tan(fieldView*.5) * (p.z - Z) >= p.y. So you can compute Z1 = p.z - (p.y / tan(fieldView*.5)), and if Z1 < Z then set Z = Z1. This check handles only positive Y; you also need the same for negative Y and for +-X. The equations are very similar, though when checking X you should also take the screen aspect ratio into account.
This procedure should give you the smallest field possible to see all the points (with given limitations such as looking toward (0,0,0)) but is far from the simplest. You also need to consider whether the equation still works when p.z < -Z.
Another, somewhat easier approach is to generate the smallest cube around the centre which holds all the points: iterate through the points and take the coordinate with the largest absolute value (any of X, Y, or Z). When you have it, use a frustum instead of a perspective matrix, so that all the rect parameters (top, bottom, left, and right) are generated from this value as +-largest. Then you need to compute the translation, which for a 90-degree field of view is Z = largest*.5. Z is the zNear of the frustum, and you also translate the matrix by -(Z + largest). Again, one pair of the frustum coordinates must be multiplied by the screen aspect ratio.
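A hedged Swift/GLKit sketch of that cube approach, following the 90-degree numbers above (the function name and the zFar padding are my own):

import GLKit

// Fit all points, assumed centred on the origin, with a frustum built from
// the largest absolute coordinate.
func fitAllPoints(points: [GLKVector3], aspectRatio: Float) -> GLKMatrix4 {
    var largest: Float = 0
    for p in points {
        largest = max(largest, abs(p.x), abs(p.y), abs(p.z))
    }
    let zNear = largest * 0.5                          // Z for a 90-degree field
    let zFar = zNear + 4 * largest                     // generous; tighten as needed
    let projection = GLKMatrix4MakeFrustum(-largest * aspectRatio, largest * aspectRatio,
                                           -largest, largest, zNear, zFar)
    // Translate by -(Z + largest) so the whole cube sits past the near plane.
    let modelView = GLKMatrix4MakeTranslation(0, 0, -(zNear + largest))
    return GLKMatrix4Multiply(projection, modelView)
}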
In any case, do watch what your zFar is; having it at only 10.0f might be a bit too short in your case. Until you need the depth buffer, you should not worry about that value being too large.
I have a requirement to implement an iOS UIImage filter / effect which is a copy of Photoshop's Distort Wave effect. The wave has to have multiple generators and repeat in a tight pattern within a CGRect.
Photos of steps are attached.
I'm having problems creating the GLSL code to reproduce the sine wave pattern. I'm also trying to smooth the edge of the effect so that the transition to the area outside the rect is not so abrupt.
I found some WebGL code that produces a water ripple. The waves produced before the center point look close to what I need, but I can't seem to get the math right to remove the water ripple (at the center point) and just keep the repeating sine pattern before it:
varying highp vec2 textureCoordinate;
uniform sampler2D inputImageTexture;
uniform highp float time;
uniform highp vec2 center;
uniform highp float angle;

void main() {
    highp vec2 cPos = -1.0 + 2.0 * gl_FragCoord.xy / center.xy;
    highp float cLength = length(cPos);
    highp vec2 uv = gl_FragCoord.xy / center.xy + (cPos / cLength) * cos(cLength * 12.0 - time * 4.0) * 0.03;
    highp vec3 col = texture2D(inputImageTexture, uv).xyz;
    gl_FragColor = vec4(col, 1.0);
}
I have to process two Rect areas, one at top and one at the bottom. So being able to process two Rect areas in one pass would be ideal. Plus the edge smoothing.
Thanks in advance for any help.
I've handled this in the past by generating an offset table on the CPU and uploading it as an input texture. So on the CPU, I'd do something like:
for (i = 0; i < tableSize; i++)
{
    table[i].x = amplitude * sin(i * frequency * 2.0 * M_PI / tableSize + phase);
    table[i].y = 0.0;
}
You may need to add in more sine waves if you have multiple "generators". Also, note that the above code offsets the x coordinate of each pixel; you could offset y instead, or both, depending on what you need.
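With multiple generators, each entry can simply sum several sines; here's a hedged Swift sketch (the Generator type and the two-floats-per-entry packing are my own):

import Foundation

// One "generator" = one sine wave with its own parameters.
struct Generator {
    var amplitude: Float
    var frequency: Float
    var phase: Float
}

// Build the offset table: two floats (x offset, y offset) per entry,
// mirroring the loop above. Only x is offset here.
func makeOffsetTable(size: Int, generators: [Generator]) -> [Float] {
    var table = [Float](repeating: 0, count: size * 2)
    for i in 0..<size {
        var x: Float = 0
        for g in generators {
            x += g.amplitude * sin(Float(i) * g.frequency * 2 * .pi / Float(size) + g.phase)
        }
        table[i * 2] = x
        table[i * 2 + 1] = 0
    }
    return table
}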
Then in the GLSL, I'd use that table as an offset for sampling. It would be something like this:

uniform sampler2DRect table;
uniform sampler2DRect inputImage;

//... rest of your code ...

// Get the offset from the table (only .xy holds the offset)
vec2 coord = gl_TexCoord[0].xy;
vec2 newCoord = coord + texture2DRect(table, coord).xy;

// Sample the input image at the offset coordinate
gl_FragColor = texture2DRect(inputImage, newCoord);