I have to draw something and I need to use two or more point sizes. I don't know how to do that; I only have one point size in my vertex shader.
<script id="myVertexShader" type="x-shader/x-vertex">#version 300 es
in vec3 VertexPosition;
in vec4 VertexColor;
out vec4 colorOut;
uniform float pointSize;
void main() {
  colorOut = VertexColor;
  gl_Position = vec4(VertexPosition, 1.0);
  gl_PointSize = 10.0;
}
</script>
Answer: You set gl_PointSize
Examples:
Using a constant
gl_PointSize = 20.0;
Using a uniform
uniform float pointSize;
gl_PointSize = pointSize;
Using some arbitrary formula
// base the size on the red and blue colors
gl_PointSize = abs(VertexColor.r * VertexColor.b) * 20.0;
Using an attribute
in float VertexSize;
...
gl_PointSize = VertexSize;
Any combination of the above, e.g.:
in float VertexSize;
uniform float baseSize;
// use a uniform, an attribute, some random formula, and a constant
gl_PointSize = baseSize + VertexSize + abs(VertexColor.r * VertexColor.b) * 10.0;
PS: the formula above is nonsense. The point is that you set gl_PointSize. How you set it is up to you.
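Tying that back to the shader in the question, a minimal sketch (assuming you add a per-point VertexSize attribute and feed it from a buffer; pointSize is the uniform the question already declares):
#version 300 es
in vec3 VertexPosition;
in vec4 VertexColor;
in float VertexSize;      // assumed per-point size attribute
uniform float pointSize;  // base size, set from JavaScript with gl.uniform1f
out vec4 colorOut;
void main() {
  colorOut = VertexColor;
  gl_Position = vec4(VertexPosition, 1.0);
  // uniform base size plus a per-point size
  gl_PointSize = pointSize + VertexSize;
}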
Note there are issues with gl.POINTS:
WebGL implementations have a maximum point size, and that maximum is not required to be larger than 1.0 (you can check the supported range with gl.getParameter(gl.ALIASED_POINT_SIZE_RANGE)). So if you want to draw points of arbitrary size, you cannot use gl.POINTS.
WebGL doesn't guarantee whether or not large points whose center is outside the viewport will be drawn. So if you want to draw sizes larger than 1.0 and you want them to behave the same across devices, you can't use gl.POINTS.
See this
Related
I have a working scaled, masked video texture over an image texture background. However, it is positioned in the bottom-left corner. I tried some tricks multiplying the coords, but it doesn't seem to make much difference. I'll probably have to make a lot of the values changeable uniforms, but hardcoded is OK for now.
What values can be used to change the video texture coords so it displays in the top-right or bottom-right corner?
The video is a webcam stream with bodypix data providing the mask.
The alpha in the mix comes from the bodypix data and needs to be multiplied by 255 to display properly.
Fragment example
precision mediump float;
uniform sampler2D background;
uniform sampler2D frame;
uniform sampler2D mask;
uniform float texWidth;
uniform float texHeight;
void main(void) {
  vec2 texCoord = gl_FragCoord.xy / vec2(texWidth, texHeight);
  vec2 frameuv = texCoord * vec2(texWidth, texHeight) / vec2(200.0, 200.0);
  vec4 texel0 = texture2D(background, texCoord);
  vec4 frameTex = texture2D(frame, frameuv.xy);
  vec4 maskTex = texture2D(mask, frameuv.xy);
  gl_FragColor = mix(texel0, frameTex, step(frameuv.x, 1.0) * step(frameuv.y, 1.0) * maskTex.a * 255.);
}
https://jsfiddle.net/danrossi303/82tpoy94/3/
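One way this could be approached (an untested sketch, keeping the hard-coded 200.0 x 200.0 box and the maskTex.a * 255.0 scaling from the question): since frameuv is effectively gl_FragCoord.xy / 200.0, shifting the box to another corner means subtracting the pixel position of the box's bottom-left corner before dividing, and checking both bounds of the box in the blend factor:
precision mediump float;
uniform sampler2D background;
uniform sampler2D frame;
uniform sampler2D mask;
uniform float texWidth;
uniform float texHeight;
void main(void) {
  vec2 texCoord = gl_FragCoord.xy / vec2(texWidth, texHeight);
  // bottom-left pixel of the 200x200 box; this puts it in the top-right corner
  // (gl_FragCoord has its origin in the bottom-left).
  // For the bottom-right corner use vec2(texWidth - 200.0, 0.0) instead.
  vec2 boxOrigin = vec2(texWidth - 200.0, texHeight - 200.0);
  vec2 frameuv = (gl_FragCoord.xy - boxOrigin) / vec2(200.0, 200.0);
  // only show the video inside the box: check the lower and upper bounds
  float inBox = step(0.0, frameuv.x) * step(frameuv.x, 1.0)
              * step(0.0, frameuv.y) * step(frameuv.y, 1.0);
  vec4 texel0 = texture2D(background, texCoord);
  vec4 frameTex = texture2D(frame, frameuv);
  vec4 maskTex = texture2D(mask, frameuv);
  gl_FragColor = mix(texel0, frameTex, inBox * maskTex.a * 255.0);
}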
I am currently using this fragment shader in WebGL to apply highlights/shadows adjustments to photo textures.
The shader itself was pulled directly from the excellent GPUImage library for iOS.
uniform sampler2D inputImageTexture;
varying highp vec2 textureCoordinate;
uniform lowp float shadows;
uniform lowp float highlights;
const mediump vec3 luminanceWeighting = vec3(0.3, 0.3, 0.3);
void main()
{
  lowp vec4 source = texture2D(inputImageTexture, textureCoordinate);
  mediump float luminance = dot(source.rgb, luminanceWeighting);
  mediump float shadow = clamp((pow(luminance, 1.0/(shadows+1.0)) + (-0.76)*pow(luminance, 2.0/(shadows+1.0))) - luminance, 0.0, 1.0);
  mediump float highlight = clamp((1.0 - (pow(1.0-luminance, 1.0/(2.0-highlights)) + (-0.8)*pow(1.0-luminance, 2.0/(2.0-highlights)))) - luminance, -1.0, 0.0);
  lowp vec3 result = vec3(0.0, 0.0, 0.0) + ((luminance + shadow + highlight) - 0.0) * ((source.rgb - vec3(0.0, 0.0, 0.0))/(luminance - 0.0));
  gl_FragColor = vec4(result.rgb, source.a);
}
This shader, as it stands, will only reduce highlights, on a scale of 0.0 to 1.0. However, I would like it to also brighten the highlights on a scale of 1.0 to 2.0.
The aim is a complete filter that reduces the image's highlights when the highlights uniform is less than 1.0 and increases their intensity when it is above 1.0. The same goes for darkening via the shadows uniform.
Highlights:
0.0 (duller) ---- 1.0 (default: original pixel values) ---- 2.0 (brighter)
I have tried simply changing the clamp on the highlights variable to 0.0, 2.0, and although this does indeed increase the brightness of the highlights when the uniform is above 1.0, it also seriously messes up the colors.
My understanding of image processing and constructing fragment shaders is extremely weak at best, as you may be able to tell.
I'm just hoping someone can point me in the right direction.
EDIT:
Here are some example screenshots:-
The current filter with highlights set to 1.00 (basically the source image)
The current filter with highlights set to 0.00; as you can see, the highlights get flattened/removed.
And finally, here is what happens when I change the clamp in the fragment shader to allow values above 1.00 and set the highlights value to 2.00.
I simply wish to be able to boost the highlights, making them brighter/more defined, i.e. the opposite of setting the value to 0.00.
I don't really understand the shadow and highlight equations, but I can see that they are set up to never enhance shadows and highlights, but rather to wash them out. So we need a secondary step for enhancement.
For the highlights, I think to handle brighter colors, you need to blend towards white instead of adding something, so you don't get hue-shifts. I used a basic contrast equation to pick out the highlights, and then cubed it to clip out the midtones and shadows. The whiteTarget is just pulling out the top half of the 0.0-2.0 range to use as a multiplier to determine the strength of the brightening effect.
For the shadows, we are changing our range from 0.0-1.0 (where 0 is unchanged and 1 is washed out) to 0.0-2.0 (where 1 is unchanged and 2 is washed out). Therefore, the +1.0's in the shadow equation should be removed. Then for the 0.0-1.0 range, I just copied what I did for the highlights, except blending toward black. Maybe that can be optimized to avoid a mix function (not sure).
So here is my unoptimized version of the shader, set up so both shadows and highlights are on 0.0-2.0 scales, with 1.0 being the nominal. You might want to play around with those lines where I cube the luminance, and also with the value I used for contrast (currently 1.5), but it seems pretty good to me the way it is now--I adjusted it to try to avoid any ugly overlap between shadows and highlight ranges when the input parameters are at the two extremes.
uniform sampler2D inputImageTexture;
varying highp vec2 textureCoordinate;
uniform lowp float shadows;
uniform lowp float highlights;
const mediump vec3 luminanceWeighting = vec3(0.3, 0.3, 0.3);
void main()
{
  lowp vec4 source = texture2D(inputImageTexture, textureCoordinate);
  mediump float luminance = dot(source.rgb, luminanceWeighting);
  //(shadows+1.0) changed to just shadows:
  mediump float shadow = clamp((pow(luminance, 1.0/shadows) + (-0.76)*pow(luminance, 2.0/shadows)) - luminance, 0.0, 1.0);
  mediump float highlight = clamp((1.0 - (pow(1.0-luminance, 1.0/(2.0-highlights)) + (-0.8)*pow(1.0-luminance, 2.0/(2.0-highlights)))) - luminance, -1.0, 0.0);
  lowp vec3 result = vec3(0.0, 0.0, 0.0) + ((luminance + shadow + highlight) - 0.0) * ((source.rgb - vec3(0.0, 0.0, 0.0))/(luminance - 0.0));
  // blend toward white if highlights is more than 1
  mediump float contrastedLuminance = ((luminance - 0.5) * 1.5) + 0.5;
  mediump float whiteInterp = contrastedLuminance*contrastedLuminance*contrastedLuminance;
  mediump float whiteTarget = clamp(highlights, 1.0, 2.0) - 1.0;
  result = mix(result, vec3(1.0), whiteInterp*whiteTarget);
  // blend toward black if shadows is less than 1
  mediump float invContrastedLuminance = 1.0 - contrastedLuminance;
  mediump float blackInterp = invContrastedLuminance*invContrastedLuminance*invContrastedLuminance;
  mediump float blackTarget = 1.0 - clamp(shadows, 0.0, 1.0);
  result = mix(result, vec3(0.0), blackInterp*blackTarget);
  gl_FragColor = vec4(result, source.a);
}
By the way, any idea why the original result line keeps adding 0's to everything? Seems like it could be simplified to
vec3 result = (luminance + shadow + highlight) * source.rgb / luminance;
But maybe it's a trick to cast to lowp within the calculation instead of after the calculation. Just a guess.
I have a requirement to implement an iOS UIImage filter / effect which is a copy of Photoshop's Distort Wave effect. The wave has to have multiple generators and repeat in a tight pattern within a CGRect.
Photos of steps are attached.
I'm having problems creating the glsl code to reproduce the sine wave pattern. I'm also trying to smooth the edge of the effect so that the transition to the area outside the rect is not so abrupt.
I found some WebGL code that produces a water ripple. The waves produced before the center point look close to what I need, but I can't seem to get the math right to remove the water ripple (at center point) and just keep the repeating sine pattern before it:
varying highp vec2 textureCoordinate;
uniform sampler2D inputImageTexture;
uniform highp float time;
uniform highp vec2 center;
uniform highp float angle;
void main() {
  highp vec2 cPos = -1.0 + 2.0 * gl_FragCoord.xy / center.xy;
  highp float cLength = length(cPos);
  highp vec2 uv = gl_FragCoord.xy/center.xy + (cPos/cLength)*cos(cLength*12.0 - time*4.0)*0.03;
  highp vec3 col = texture2D(inputImageTexture, uv).xyz;
  gl_FragColor = vec4(col, 1.0);
}
I have to process two Rect areas, one at the top and one at the bottom, so being able to process both in one pass would be ideal. Plus the edge smoothing.
Thanks in advance for any help.
I've handled this in the past by generating an offset table on the CPU and uploading it as an input texture. So on the CPU, I'd do something like:
for (i = 0; i < tableSize; i++)
{
  table[i].x = amplitude * sin(i * frequency * 2.0 * M_PI / tableSize + phase);
  table[i].y = 0.0;
}
You may need to add in more sine waves if you have multiple "generators" (see the sketch after the GLSL snippet below). Also, note that the above code offsets the x coordinate of each pixel. You could do Y instead, or both, depending on what you need.
Then in the glsl, I'd use that table as an offset for sampling. So it would be something like this:
uniform sampler2DRect table;
uniform sampler2DRect inputImage;
//... rest of your code ...
// Get the offset from the table
vec2 coord = gl_TexCoord[0].xy;
vec2 newCoord = coord + texture2DRect(table, coord).xy;
// Sample the input image at the offset coordinate
gl_FragColor = texture2DRect (inputImage, newCoord);
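If building the table on the CPU is inconvenient, roughly the same idea can be sketched directly in a fragment shader by summing a few sine generators per fragment (untested; the amplitude/frequency/phase uniforms are hypothetical names, and here the x offset is driven by the y coordinate to give a side-to-side wiggle):
varying highp vec2 textureCoordinate;
uniform sampler2D inputImageTexture;
// hypothetical per-generator parameters, one component per generator
uniform highp vec3 amplitude;
uniform highp vec3 frequency;
uniform highp vec3 phase;
void main() {
  highp float y = textureCoordinate.y;
  highp float twoPi = 6.28318530718;
  // sum the generators to get a horizontal offset for this row
  highp float offset = amplitude.x * sin(y * frequency.x * twoPi + phase.x)
                     + amplitude.y * sin(y * frequency.y * twoPi + phase.y)
                     + amplitude.z * sin(y * frequency.z * twoPi + phase.z);
  highp vec2 uv = vec2(textureCoordinate.x + offset, textureCoordinate.y);
  gl_FragColor = texture2D(inputImageTexture, uv);
}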
I'm trying to create a fragment shader to recolor a 2D grayscale sprite but leave white and near-white fragments intact (ie: don't recolor pure white fragments, and only slightly recolor near-white fragments). I'm not sure how to do this without using a conditional branch which results in poor performance on certain hardware.
The existing shader in the game engine just performs a simple multiplication:
#ifdef GL_ES
precision lowp float;
#endif
varying vec4 v_fragmentColor;
varying vec2 v_texCoord;
uniform sampler2D CC_Texture0;
void main()
{
  vec4 texColor = texture2D(CC_Texture0, v_texCoord);
  gl_FragColor = texColor * v_fragmentColor;
}
I think that in order to avoid the conditional, I need some sort of continuous mathematical function that recolors fragments with RGB values above, say, (0.9, 0.9, 0.9) less strongly than fragments below that threshold.
Any help would be great!
I would do something like this: Calculate the fully-recolored pixel, then mix with the original based on a function. Here's an idea:
vec4 texColor = texture2D(CC_Texture0, v_texCoord);
const vec4 kLumWeights = vec4(.2126, .7152, .0722, 0.0); // Rec. 709 luminance weights
float luminance = dot (texColor, kLumWeights);
vec4 recolored = texColor * v_fragmentColor;
const float kThreshold = 0.8;
float mixAmount = (luminance - kThreshold) / (1.0 - kThreshold); // Everything below kThreshold becomes 0, and from kThreshold to 1.0 becomes 0 to 1.0
mixAmount = clamp (mixAmount, 0.0, 1.0);
gl_FragColor = mix (recolored, texColor, mixAmount);
Let me know if that works.
I have just completed the first version of my iOS app, Corebox, and am now working on some new features.
One of the new features is a "small" tweak to the OpenGL rendering to force some objects to never be drawn smaller than a minimum size. All of the objects needing this treatment are simple two-point lines drawn with GL_LINES.
This annotated screenshot explains what I'm after. Ignore the grey lines, the only objects I'm interested in altering are the yellow wider lines.
I have googled this extensively and it seems what I need to do is alter the geometry of the lines using a vertex shader. I'm quite new to GLSL, and most shader examples I can find deal with applying lighting and other effects, e.g. the GLSL Heroku Editor and KicksJS shader editor.
My current vertex shader is extremely basic:
// GL_LINES vertex shader
uniform mat4 Projection;
uniform mat4 Modelview;
attribute vec4 Position;
attribute vec4 SourceColor;
varying vec4 DestinationColor;
void main(void) {
  DestinationColor = SourceColor;
  gl_Position = Projection * Modelview * Position;
}
As is my fragment shader:
// GL_LINES fragment shader
varying lowp vec4 DestinationColor;
void main(void) {
  gl_FragColor = DestinationColor;
}
My guess as to what is required:
Determine the distance between the viewer (camera position) and the object
Determine how big the object is on the screen, based on its size and distance from camera
If the object will be too small then adjust its vertices such that it becomes large enough to easily see on the screen.
Caveats and other notes:
But if you zoom out won't this cause the model to be just a blob of orange on the screen? Yes, this is exactly the effect I'm after.
Edit: Here is the final working version implementing suggestions by mifortin
uniform mat4 Projection;
uniform mat4 Modelview;
uniform float MinimumHeight;
attribute vec4 Position;
attribute vec4 ObjectCenter;
attribute vec4 SourceColor;
varying vec4 DestinationColor;
void main(void) {
  // screen-space position of this vertex
  vec4 screenPosition = Projection * Modelview * Position;
  // screen-space mid-point of the object this vertex belongs to
  vec4 screenObjectCenter = Projection * Modelview * ObjectCenter;
  // Z should be 0 by this time and the projective transform in w.
  // scale so w = 1 (these two should be in screen-space)
  vec2 newScreenPosition = screenPosition.xy / screenPosition.w;
  vec2 newObjectCenter = screenObjectCenter.xy / screenObjectCenter.w;
  float d = distance(newScreenPosition, newObjectCenter);
  if (d < MinimumHeight && d > 0.0) {
    // Direction of this object, this really only makes sense in the context
    // of a line (eg: GL_LINES)
    vec2 towards = normalize(newScreenPosition - newObjectCenter);
    // Shift the center point then adjust the vertex position accordingly
    // Basically this converts: *--x--* into *--------x--------*
    newObjectCenter = newObjectCenter + towards * MinimumHeight;
    screenPosition.xy = newObjectCenter.xy * screenPosition.w;
  }
  gl_Position = screenPosition;
  DestinationColor = SourceColor;
}
Note that I didn't test the code, but it should illustrate the solution.
If you want to use shaders, add in another uniform vec4 that is the center position of your line. Then you can do something similar to (note center could be precomputed on the CPU once):
uniform float MIN;    // Minimum size of blob on-screen
uniform vec4 center;  // Center of the line / blob
...
vec4 screenPos = Projection * Modelview * Position;
vec4 screenCenter = Projection * Modelview * center;
// Z should be 0 by this time and the projective transform in w.
// scale so w = 1 (these two should be in screen-space)
vec2 nScreenPos = screenPos.xy / screenPos.w;
vec2 nCenter = screenCenter.xy / screenCenter.w;
float d = distance(nScreenPos, nCenter);
if (d < MIN && d > 0.0)
{
  vec2 towards = normalize(nScreenPos - nCenter);
  nCenter = nCenter + towards * MIN;
  screenPos.xy = nCenter.xy * screenPos.w;
}
gl_Position = screenPos;
Find where on the screen the vertex would be drawn, then from the center of the blob stretch it if needed to ensure a minimum size.
This example is for round objects. For corners, you could make MIN an attribute so the distance from the center varies on a per-vertex basis.
If you just want something more box-like, check the minimum distance of the x and y coordinates separately, as sketched below.
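A possible sketch of that box-like variant (untested, reusing the names from the snippet above): instead of distance() and normalize(), push each axis out to at least MIN independently.
uniform float MIN;    // Minimum on-screen distance from the blob center, per axis
uniform vec4 center;  // Center of the line / blob
...
vec4 screenPos = Projection * Modelview * Position;
vec4 screenCenter = Projection * Modelview * center;
vec2 nScreenPos = screenPos.xy / screenPos.w;
vec2 nCenter = screenCenter.xy / screenCenter.w;
// per-axis offset of this vertex from the blob center
vec2 offset = nScreenPos - nCenter;
// push each axis out to at least MIN, keeping its sign; sign() is 0.0 at the
// center, so a vertex exactly on the center stays put (like the d > 0.0 guard)
vec2 pushed = sign(offset) * max(abs(offset), vec2(MIN));
screenPos.xy = (nCenter + pushed) * screenPos.w;
gl_Position = screenPos;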
On the CPU, you could compute the coordinates in screen-space and scale accordingly before submitting to the GPU.