How can I change the normals of my model so it will look less rounded - stage3d

My AGAL code for creating the normals is this:
"nrm ft1.xyz, v1.xyz\n" + // renormalize normal
"dp3 ft1, fc2.xyz, ft1.xyz \n" + // directional light contribution
but I get a very rounded object. Is there a way to generate the normals so that the edges look sharper?
Thanks

Found the solution:
I export the model from 3ds Max with the exporter script from http://not-so-stupid.com/, but select the Sandy 3.0 format, which enables the "export vertex normal" option, and then I copy the information from the .as file created by the script. I put the normals in a new vertex buffer and upload them:
context.setVertexBufferAt( 2, normalBuffer, 0, Context3DVertexBufferFormat.FLOAT_3);
then use them in the shaders code:
private const VERTEX_SHADER_LIGHT:String = "" +
"m44 op, va0, vc0\n" + // pos to clipspace
"mov v0, va1 \n" + // copy uv
"mov vt1.xyz, va2.xyz\n"+
"mov vt1.w, va2.w\n"+
"mov v1, vt1\n";
private const FRAGMENT_SHADER_LIGHT:String = "" +
"tex ft0, v0, fs0 <2d,linear,nomip>\n" + // read from texture
"nrm ft1.xyz, v1.xyz\n" + // renormalize normal v1 contains the normals created in the vertex code
"dp3 ft1, fc2.xyz, ft1.xyz \n" + // directional light contribution
"neg ft1, ft1 \n" + // negation because we have a vector "from" light
"max ft1, ft1, fc0 \n"+ // clamp to [0, dot]
"mul ft1, ft1, fc3 \n"+ // contribution from light
"mul ft1, ft1, ft0 \n"+ // contribution from light + texture
"add oc, ft1, fc1"; // final color as surface + ambient

Related

Edge/outline detection from texture in fragment shader

I am trying to display sharp contours from a texture in WebGL.
I pass a texture to my fragment shader, then use local derivatives to display the contours/outline; however, the result is not as smooth as I would expect.
Just printing the texture without processing works as expected:
vec2 texc = vec2(((vProjectedCoords.x / vProjectedCoords.w) + 1.0 ) / 2.0,
((vProjectedCoords.y / vProjectedCoords.w) + 1.0 ) / 2.0 );
vec4 color = texture2D(uTextureFilled, texc);
gl_FragColor = color;
With local derivatives, it misses some edges:
vec2 texc = vec2(((vProjectedCoords.x / vProjectedCoords.w) + 1.0 ) / 2.0,
((vProjectedCoords.y / vProjectedCoords.w) + 1.0 ) / 2.0 );
vec4 color = texture2D(uTextureFilled, texc);
float maxColor = length(color.rgb);
gl_FragColor.r = abs(dFdx(maxColor));
gl_FragColor.g = abs(dFdy(maxColor));
gl_FragColor.a = 1.;
In theory, your code is right.
But in practice, most GPUs compute derivatives on blocks of 2x2 pixels.
So for all 4 pixels of such a block, the dFdx and dFdy values will be the same.
This causes a kind of aliasing, and you will randomly miss some pixels of the shape's contour (it happens whenever the transition from black to the shape color falls on the border of a 2x2 block).
To fix this and get a true per-pixel derivative, you can compute it yourself instead; that would look like this:
// get tex coordinates
vec2 texc = vec2(((vProjectedCoords.x / vProjectedCoords.w) + 1.0 ) / 2.0,
((vProjectedCoords.y / vProjectedCoords.w) + 1.0 ) / 2.0 );
// compute the U & V step needed to read neighbor pixels
// for that you need to pass the texture dimensions to the shader,
// so let's say those are texWidth and texHeight
float step_u = 1.0 / texWidth;
float step_v = 1.0 / texHeight;
// read current pixel
vec4 centerPixel = texture2D(uTextureFilled, texc);
// read nearest right pixel & nearest bottom pixel
vec4 rightPixel = texture2D(uTextureFilled, texc + vec2(step_u, 0.0));
vec4 bottomPixel = texture2D(uTextureFilled, texc + vec2(0.0, step_v));
// now manually compute the derivatives
float _dFdX = length(rightPixel - centerPixel) / step_u;
float _dFdY = length(bottomPixel - centerPixel) / step_v;
// display
gl_FragColor.r = _dFdX;
gl_FragColor.g = _dFdY;
gl_FragColor.a = 1.;
A few important things:
texture should not use mipmaps
texture min & mag filtering should be set to GL_NEAREST
texture clamp mode should be set to clamp (not repeat)
There is also a ShaderToy sample demonstrating this.

GPUImage Lookup Filter - creating a color depth greater than 512² colors

GPUImage's LookupFilter uses an RGB pixel map that's 512x512. When the filter executes, it creates a comparison between a modified version of this image and the original, and extrapolates an image filter.
The filter code is pretty straightforward. Here's an extract so you can see what's going on:
void main()
{
    highp vec4 textureColor = texture2D(inputImageTexture, textureCoordinate);

    highp float blueColor = textureColor.b * 63.0;

    highp vec2 quad1;
    quad1.y = floor(floor(blueColor) / 8.0);
    quad1.x = floor(blueColor) - (quad1.y * 8.0);

    highp vec2 quad2;
    quad2.y = floor(ceil(blueColor) / 8.0);
    quad2.x = ceil(blueColor) - (quad2.y * 8.0);

    highp vec2 texPos1;
    texPos1.x = (quad1.x * 0.125) + 0.5/512.0 + ((0.125 - 1.0/512.0) * textureColor.r);
    texPos1.y = (quad1.y * 0.125) + 0.5/512.0 + ((0.125 - 1.0/512.0) * textureColor.g);

    highp vec2 texPos2;
    texPos2.x = (quad2.x * 0.125) + 0.5/512.0 + ((0.125 - 1.0/512.0) * textureColor.r);
    texPos2.y = (quad2.y * 0.125) + 0.5/512.0 + ((0.125 - 1.0/512.0) * textureColor.g);

    lowp vec4 newColor1 = texture2D(inputImageTexture2, texPos1);
    lowp vec4 newColor2 = texture2D(inputImageTexture2, texPos2);

    lowp vec4 newColor = mix(newColor1, newColor2, fract(blueColor));

    gl_FragColor = mix(textureColor, vec4(newColor.rgb, textureColor.w), intensity);
}
See where the filter map is dependent on this being a 512x512 image?
I'm looking at ways to 4x the color depth here, using a 1024x1024 source image instead, but I'm not sure how this lookup filter image would have originally been generated.
Can something like this be generated in code? If so, I realize it's a very broad question, but how would I go about doing that? If it can't be generated in code, what are my options?
Update:
Turns out the original LUT generation code was included in the header file all along. The part in question, from the header file:
Lookup texture is organised as 8x8 quads of 64x64 pixels representing all possible RGB colors:
How is this a map of all possible RGB colors? 64³ = 262,144, but that only accounts for 1/64th of the presumed 24-bit capacity of RGB, which is 256³ (16,777,216). What's going on here? Am I missing how this LUT works? How are we accounting for all possible RGB colors with only 1/64th of the data?
for (int by = 0; by < 8; by++) {
    for (int bx = 0; bx < 8; bx++) {
        // each (bx, by) tile holds one of the 64 blue values: blue index = bx + by * 8
        for (int g = 0; g < 64; g++) {
            for (int r = 0; r < 64; r++) {
                // within a tile, x encodes red and y encodes green
                image.setPixel(r + bx * 64, g + by * 64,
                               qRgb((int)(r * 255.0 / 63.0 + 0.5),
                                    (int)(g * 255.0 / 63.0 + 0.5),
                                    (int)((bx + by * 8.0) * 255.0 / 63.0 + 0.5)));
            }
        }
    }
}
I'm not quite sure what problem you are actually having. When you say you want "4x the color depth", what do you actually mean? Color depth normally means the number of bits per color channel (or per pixel), which is totally independent of the resolution of the image.
In terms of lookup table accuracy (which is resolution dependent), assuming you are using bilinear filtered texture inputs from the original texture, and filtered lookups into the transform table, then you are already linearly interpolating between samples in the lookup table. Interpolation of color channels will be at higher precision than the storage format; e.g. often fp16 equivalent, even for textures stored at 8-bit per pixel.
Unless you have a significant amount of non-linearity in your color transform (not that common) adding more samples to the lookup table is unlikely to make a significant difference to the output - the interpolation will already be doing a reasonably good job of filling in the gaps.
Lev Zelensky provided the original work for this, so I'm not as familiar with how this works internally, but you can look at the math being performed in the shader to get an idea of what's going on.
In the 512x512 lookup, you have an 8x8 grid of cells. Within those cells, you have a 64x64 image patch. The red values go from 0 to 255 (0.0 to 1.0 in normalized values) going from left to right in that patch, and the green values go from 0 to 255 going down. That means that there are 64 steps in red, and 64 steps in green.
Each cell then appears to increase the blue value as you progress down the patches, left to right, top to bottom. With 64 patches, that gives you 64 blue values to match the 64 red and green ones. That gives you equal coverage across the RGB values in all channels.
So, if you wanted to double the number of color steps, you'd have to double the patch size to 128x128 and have 128 patches. The grid would have to be a rectangle, since 128 doesn't have an integer square root. Just going to 1024x1024 might let you double the color depth in the red and green channels, but blue would then be at half their depth. Balancing the three out would be a little trickier than just doubling the image size.
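As a way to reason about re-sizing the layout, here is a hedged GLSL ES sketch that generalizes the lookup math from the shader above to an arbitrary number of blue slices arranged in a gridW x gridH grid of tiles. With steps = 64, gridW = gridH = 8 and a 512x512 texture it reproduces the stock arithmetic; the function name applyLut and the constants are illustrative only, not part of GPUImage.

// Sketch only: applyLut and these constants are illustrative, not GPUImage's API.
const float steps   = 64.0;   // number of blue slices (and of red/green steps per tile)
const float gridW   = 8.0;    // tiles per row in the LUT image
const float gridH   = 8.0;    // tiles per column in the LUT image
const float lutSize = 512.0;  // LUT texture width/height in texels

vec3 applyLut(sampler2D lut, vec3 color)
{
    float blue = color.b * (steps - 1.0);

    // the two blue slices we interpolate between
    float slice1 = floor(blue);
    float slice2 = ceil(blue);

    // convert a slice index into that tile's top-left corner in [0,1] UV space
    vec2 tile1 = vec2(mod(slice1, gridW), floor(slice1 / gridW)) / vec2(gridW, gridH);
    vec2 tile2 = vec2(mod(slice2, gridW), floor(slice2 / gridW)) / vec2(gridW, gridH);

    // position inside a tile: offset by half a texel so we sample texel centers,
    // matching the 0.5/512.0 and (0.125 - 1.0/512.0) terms in the original shader
    vec2 inTile = 0.5 / lutSize
                + (vec2(1.0 / gridW, 1.0 / gridH) - 1.0 / lutSize) * color.rg;

    vec3 c1 = texture2D(lut, tile1 + inTile).rgb;
    vec3 c2 = texture2D(lut, tile2 + inTile).rgb;
    return mix(c1, c2, fract(blue));
}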

Stage3D Error #3632: AGAL linkage: Varying 1 is read in the fragment shader but not written to by the vertex shader

I am a beginner in AGAL, I'm sure this is not complicated.
I have a vertex and a fragment shader for simply drawing a box with a texture and no lighting. Here is the code:
vertexAssembly.assemble( Context3DProgramType.VERTEX,
"m44 op, va0, vc0\n" + // pos to clipspace
"mov v0, va1" // copy uv
);
fragmentAssembly.assemble(Context3DProgramType.FRAGMENT,
"tex ft1, v0, fs0 <2d,linear,nomip>\n" +
"mov oc, ft1"
);
I also have AGAL code for a box with no texture, just color, and with a light effect. Here is the code for the shaders:
private const VERTEX_SHADER_LIGHT:String =
"mov vt0, va0\n"+
"m44 op, vt0, vc0\n"+
"nrm vt1.xyz, va0.xyz\n"+
"mov vt1.w, va0.w\n"+
"mov v1, vt1\n" +
"mov v2, va1"
private const FRAGMENT_SHADER_LIGHT:String =
"dp3 ft1, fc2, v1 \n"+
"neg ft1, ft1 \n"+
"max ft1, ft1, fc0 \n"+
"mul ft2, fc4, ft1 \n"+
"mul ft2, ft2, fc3 \n"+
"add oc, ft2, fc1";
The question is: how do I combine the two, so that I get a box model with a texture map rendered with the light effect?
I did this:
private const VERTEX_SHADER_LIGHT:String =
"m44 op, va0, vc0\n" + // pos to clipspace
"mov v0, va1" // copy uv
//"mov vt0, va0\n"+
//"m44 op, vt0, vc0\n"+
"nrm vt1.xyz, va0.xyz\n"+
"mov vt1.w, va0.w\n"+
"mov v1, vt1\n" +
"mov v2, va1"
private const FRAGMENT_SHADER_LIGHT:String =
"tex ft1, v0, fs0 <2d,linear,nomip>\n" +
"mov oc, ft1 \n" +
"dp3 ft1, fc2, v1 \n"+
"neg ft1, ft1 \n"+
"max ft1, ft1, fc0 \n"+
"mul ft2, fc4, ft1 \n"+
"mul ft2, ft2, fc3 \n"+
"add oc, ft2, fc1";
but it gives me an error:
"Error: Error #3632: AGAL linkage: Varying 1 is read in the fragment shader but not written to by the vertex shader.
at flash.display3D::Program3D/upload()
at Context3DExample/setupScene()
at Context3DExample/contextCreated()"
I'm sure someone with experience can solve this in 5 minutes.
Thanks
Looks like you forgot to concatenate a string, i.e.
"mov v0, va1" // copy uv
"nrm vt1.xyz, va0.xyz\n"
should be
"mov v0, va1\n" + // copy uv
"nrm vt1.xyz, va0.xyz\n"
Notice the extra \n and the + on the first line.
Found the answer. Here is the code (based on nikitablack's answer above):
private const VERTEX_SHADER_LIGHT:String = "" +
"m44 op, va0, vc0\n" +// pos to clipspace
"mov v0, va1\n" +// pass uv
"mov v1, va0"; // pas normal for vertex shader.
private const FRAGMENT_SHADER_LIGHT:String = "" +
"tex ft0, v0, fs0 <2d,linear,nomip>\n" + // read from texture
"nrm ft1.xyz, v1.xyz\n" + // renormalize normal
"dp3 ft1, fc2.xyz, ft1.xyz \n" + // directional light contribution
//"neg ft1, ft1 \n" + // negation because we have a vector "from" light
"max ft1, ft1, fc0 \n"+ // clamp to [0, dot]
"mul ft1, ft1, fc3 \n"+ // contribution from light
"mul ft1, ft1, ft0 \n"+ // contribution from light + texture
//"add oc, ft1, fc1"; // final color as surface + ambient
"add oc, ft1, ft0"; // final color as surface + texture
I took out the neg of ft1; there is no need to negate this vector in my code, it is fine as it is. And I didn't add the ambient color at the end, just the texture once again, so the result is bright and clear, with just a bit of shading.

Image processing: interpolation using intensity values of pixels in the input image

When we do image interpolation, I think we will use intensity values of pixels in the input image.
(A)
I am reading the cubic interpolation code from GPU Gems, Chapter 24, "High-Quality Filtering". Here is a snippet of their code:
Example 24-9. Filtering Four Texel Rows, Then Filtering the Results as a Column
float4 texRECT_bicubic(uniform samplerRECT tex,
                       uniform samplerRECT kernelTex,
                       float2 t)
{
    float2 f = frac(t);  // we want the sub-texel portion
    float4 t0 = cubicFilter(kernelTex, f.x,
                            texRECT(tex, t + float2(-1, -1)),
                            texRECT(tex, t + float2(0, -1)),
                            texRECT(tex, t + float2(1, -1)),
                            texRECT(tex, t + float2(2, -1)));
Since they get the sub-texel portion from frac(t), "t" is not exactly on pixel positions of the input image.
Then how come "t" is used directly to sample intensity values from the original image, as in "texRECT(tex, t + float2(-1, -1))"?
Personally, I think we should use
t - frac(t)
(B)
The same issue appears in an example from "Zoom An Image With Different Interpolation Types".
Their snippet of "GLSL shader code for Bi-Cubic Interpolation" is:
float a = fract( TexCoord.x * fWidth );  // get the decimal part
float b = fract( TexCoord.y * fHeight ); // get the decimal part
for( int m = -1; m <= 2; m++ )
{
    for( int n = -1; n <= 2; n++ )
    {
        vec4 vecData = texture2D(textureSampler,
                                 TexCoord + vec2(texelSizeX * float( m ),
                                                 texelSizeY * float( n )));
I think we should use:
TexCoord - vec2(a,b)
then use the offsets of m and n
(C) Now I am confused. I think we will use intensity values of "exact" pixels in the input image.
Which way should we use?

OpenGL ES shader to convert color image to black-and-white infrared?

I was able to create a fragment shader that converts a color image to greyscale:
float luminance = pixelColor.r * 0.299 + pixelColor.g * 0.587 + pixelColor.b * 0.114;
gl_FragColor = vec4(luminance, luminance, luminance, 1.0);
Now I'd like to mimic a Photoshop channel mixer effect:
How can I translate the percentage values (-70%, +200%, -30%) into r, g, b floating-point numbers (e.g. 0.299, 0.587, 0.114)?
You should know from school that 10% of a value means multiplying that value by 0.1, so just use (-0.7, 2.0, -0.3).
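In shader terms (a sketch, reusing the pixelColor variable from the greyscale example above), the channel-mixer percentages simply become the weights of a dot product:

// Photoshop channel mixer set to -70% red, +200% green, -30% blue
const vec3 mixerWeights = vec3(-0.7, 2.0, -0.3);

float mono = dot(pixelColor.rgb, mixerWeights);
gl_FragColor = vec4(mono, mono, mono, 1.0);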

Resources