I am trying to mess around in SceneKit and teach it to myself. Basically, I am creating a quad with three straight sides and one sloping side.
I want my texture on it to stretch and warp/deform across the surface.
From what I have read online, it seems I need to create an SCNProgram with custom vertex and fragment shaders to get this effect, but I can't get the texture to spread across the surface. Need help please. (I am new to graphics programming, hence trying to teach it to myself.)
My Swift code to create the geometry and texture it is as follows:
func geometryCreate() -> SCNNode {
    // Four corners of the quad; the top edge slopes from y = 5 down to y = 3.
    let verticesPosition = [
        SCNVector3Make(0.0, 0.0, 0.0),
        SCNVector3Make(5.0, 0.0, 0.0),
        SCNVector3Make(5.0, 5.0, 0.0),
        SCNVector3Make(0.0, 3.0, 0.0)
    ]

    let textureCord = [CGPoint(x: 0.0, y: 0.0), CGPoint(x: 1.0, y: 0.0), CGPoint(x: 1.0, y: 1.0), CGPoint(x: 0.0, y: 1.0)]

    // Two triangles covering the quad.
    let indices: [CInt] = [
        0, 2, 3,
        0, 1, 2
    ]

    let vertexSource = SCNGeometrySource(vertices: verticesPosition, count: 4)
    let srcTex = SCNGeometrySource(textureCoordinates: textureCord, count: 4)
    let indexData = NSData(bytes: indices, length: sizeof(CInt) * indices.count)

    let scngeometry = SCNGeometryElement(data: indexData, primitiveType: SCNGeometryPrimitiveType.Triangles, primitiveCount: 2, bytesPerIndex: sizeof(CInt))

    let geometry = SCNGeometry(sources: [vertexSource, srcTex], elements: [scngeometry])

    // Load the custom shaders from the app bundle.
    let program = SCNProgram()

    if let filepath = NSBundle.mainBundle().pathForResource("vertexshadertry", ofType: "vert") {
        do {
            let contents = try NSString(contentsOfFile: filepath, encoding: NSUTF8StringEncoding) as String
            program.vertexShader = contents
        } catch {
            print("**** happened loading vertex shader")
        }
    }

    if let fragmentShaderPath = NSBundle.mainBundle().pathForResource("fragshadertry", ofType: "frag") {
        do {
            let fragmentShaderAsAString = try NSString(contentsOfFile: fragmentShaderPath, encoding: NSUTF8StringEncoding)
            program.fragmentShader = fragmentShaderAsAString as String
        } catch {
            print("**** happened loading frag shader")
        }
    }

    // Bind the geometry sources and the transform to the shader attributes/uniforms.
    program.setSemantic(SCNGeometrySourceSemanticVertex, forSymbol: "position", options: nil)
    program.setSemantic(SCNGeometrySourceSemanticTexcoord, forSymbol: "textureCoordinate", options: nil)
    program.setSemantic(SCNModelViewProjectionTransform, forSymbol: "modelViewProjection", options: nil)

    do {
        let texture = try GLKTextureLoader.textureWithCGImage(UIImage(named: "stripes")!.CGImage!, options: nil)
        geometry.firstMaterial?.handleBindingOfSymbol("yourTexture", usingBlock: { (programId: UInt32, location: UInt32, node: SCNNode!, renderer: SCNRenderer!) -> Void in
            glTexParameterf(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_WRAP_S), Float(GL_CLAMP_TO_EDGE))
            glTexParameterf(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_WRAP_T), Float(GL_CLAMP_TO_EDGE))
            glTexParameterf(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_MAG_FILTER), Float(GL_LINEAR))
            glTexParameterf(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_MIN_FILTER), Float(GL_LINEAR))
            glBindTexture(GLenum(GL_TEXTURE_2D), texture.name)
        })
    } catch {
        print("Texture not loaded")
    }

    geometry.firstMaterial?.program = program

    let scnnode = SCNNode(geometry: geometry)
    return scnnode
}
My vertex shader is:
attribute vec4 position;
attribute vec2 textureCoordinate;
uniform mat4 modelViewProjection;

varying highp vec2 pos;
varying vec2 texCoord;

void main() {
    texCoord = vec2(textureCoordinate.s, 1.0 - textureCoordinate.t);
    gl_Position = modelViewProjection * position;
    pos = vec2(position.x, 1.0 - position.y);
}
My fragment shader is:
precision highp float;

uniform sampler2D yourTexture;

varying highp vec2 texCoord;
varying highp vec2 pos;

void main() {
    gl_FragColor = texture2D(yourTexture, vec2(pos.x, pos.y));
}
I just can't seem to get the texture at the bottom left to spread out across the surface. Could you please help?
With some manual vertex and fragment shader juggling I can get the result, but it feels very inelegant, and I am pretty sure I should not be writing geometry-specific code like this.
attribute vec4 position;
attribute vec2 textureCoordinate;
uniform mat4 modelViewProjection;

varying highp vec2 pos;
varying vec2 texCoord;

void main() {
    // Pass along to the fragment shader
    texCoord = vec2(textureCoordinate.s, 1.0 - textureCoordinate.t);
    // output the projected position
    gl_Position = modelViewProjection * position;
    pos = vec2(position.x, position.y);
}
Changes to the fragment shader (where 0.4 is the slope of the top edge of the quad):
precision highp float;

uniform sampler2D yourTexture;

varying highp vec2 texCoord;
varying highp vec2 pos;

void main() {
    gl_FragColor = texture2D(yourTexture, vec2(pos.x/5.0, 1.0 - pos.y/(3.0 + 0.4*pos.x)));
    // gl_FragColor = vec4(0.0, pos.y/5.0, 0.0, 1.0);
}
This gives me exactly what I am looking for, but it feels like a very wrong way of doing things.
EDIT: I am using the pos variable instead of texCoord because texCoord is giving me strange results that I can't really understand :(.
If I was to modify my fragment shader as:
precision highp float;

uniform sampler2D yourTexture;

varying highp vec2 texCoord;
varying highp vec2 pos;

void main() {
    // gl_FragColor = texture2D(yourTexture, vec2(pos.x/5.0, 1.0 - pos.y/(3.0+0.4*pos.x)));
    gl_FragColor = texture2D(yourTexture, texCoord);
    // gl_FragColor = vec4(0.0, pos.y/5.0, 0.0, 1.0);
}
I get something like the pic below:
This tells me there is something wrong with my texture coordinate definition, but I can't figure out what.
EDIT 2: OK, progress. Based on an answer that Lock gave on a related thread, I redefined my UVs using the method below:
let uvSource = SCNGeometrySource(data: uvData,
                                 semantic: SCNGeometrySourceSemanticTexcoord,
                                 vectorCount: textureCord.count,
                                 floatComponents: true,
                                 componentsPerVector: 3,
                                 bytesPerComponent: sizeof(Float),
                                 dataOffset: 0,
                                 dataStride: sizeof(vector_float2))
Now it gives me a result like this when I use texCoord in my frag shader:
It's not as good as the curved deformation I get in the texture above, but it's progress. Any ideas how I can get it to smooth out like pic 2 in this massive question?
Kindly help.
In the fragment shader you have to use texCoord instead of pos to sample your texture.
Also note that you don't need a program to texture an arbitrary geometry; you can use a regular material for custom geometries too. If you want to do something that's not possible with a regular material, you can also have a look at shader modifiers: they are easier to use than programs and don't require you to handle lights manually, for instance.
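As a rough illustration of that (my own sketch, not code from the thread; the "stripes" image and the geometry constant come from the question, the tint in the modifier is made up):

let material = SCNMaterial()
// A plain material: SceneKit stretches the texture across the custom quad using the
// geometry's texture-coordinate source, with no SCNProgram needed.
material.diffuse.contents = UIImage(named: "stripes")

// Optional: a shader modifier patches one stage of SceneKit's own shaders, so lighting
// and texturing keep working. This one just tints the lit surface slightly.
material.shaderModifiers = [
    SCNShaderModifierEntryPointSurface: "_surface.diffuse.rgb *= vec3(1.0, 0.9, 0.9);"
]

geometry.materials = [material]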
Related
I'm trying to follow the suggestion in Apple's OpenGL ES Programming Guide section on instanced drawing: Use Instanced Drawing to Minimize Draw Calls. I started with the example project that Xcode generates for a Game app with OpenGL and Swift and converted it to OpenGL ES 3.0, adding some instanced drawing to duplicate the cube.
This works fine when I use the gl_InstanceID technique and simply generate an offset from that. But when I try to use the 'instanced arrays' technique to pass data in via a buffer I am not seeing any results.
My updated vertex shader looks like this:
#version 300 es

in vec4 position;
in vec3 normal;
layout(location = 5) in vec2 instOffset;

out lowp vec4 colorVarying;

uniform mat4 modelViewProjectionMatrix;
uniform mat3 normalMatrix;

void main()
{
    vec3 eyeNormal = normalize(normalMatrix * normal);
    vec3 lightPosition = vec3(0.0, 0.0, 1.0);
    vec4 diffuseColor = vec4(0.4, 0.4, 1.0, 1.0);

    float nDotVP = max(0.0, dot(eyeNormal, normalize(lightPosition)));

    colorVarying = diffuseColor * nDotVP;

    // gl_Position = modelViewProjectionMatrix * position + vec4(float(gl_InstanceID)*1.5, float(gl_InstanceID)*1.5, 1.0, 1.0);
    gl_Position = modelViewProjectionMatrix * position + vec4(instOffset, 1.0, 1.0);
}
and in my setupGL() method I have added the following:
//glGenVertexArraysOES(1, &instArray) // EDIT: WRONG
//glBindVertexArrayOES(instArray) // EDIT: WRONG
let kMyInstanceDataAttrib = 5
glGenBuffers(1, &instBuffer)
glBindBuffer(GLenum(GL_ARRAY_BUFFER), instBuffer)
glBufferData(GLenum(GL_ARRAY_BUFFER), GLsizeiptr(sizeof(GLfloat) * instData.count), &instData, GLenum(GL_STATIC_DRAW))
glEnableVertexAttribArray(GLuint(kMyInstanceDataAttrib))
glVertexAttribPointer(GLuint(kMyInstanceDataAttrib), 2, GLenum(GL_FLOAT), GLboolean(GL_FALSE), 0/*or 8?*/, BUFFER_OFFSET(0))
glVertexAttribDivisor(GLuint(kMyInstanceDataAttrib), 1);
along with some simple instance offset data:
var instData: [GLfloat] = [
1.5, 1.5,
2.5, 2.5,
3.5, 3.5,
]
I am drawing the same way with the above as with the gl_InstanceID technique:
glDrawArraysInstanced(GLenum(GL_TRIANGLES), 0, 36, 3)
But it seems to have no effect. I just get the single cube and it doesn't even seem to fail if I remove the buffer setup, so I suspect my setup is missing something.
EDIT: Fixed the code by removing two bogus lines from init.
I had an unnecessary gen and bind for the attribute vertex array. The code as edited above now works.
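For reference, a condensed sketch of the setup that ended up working (same names as in the question: instBuffer, instData, attribute location 5, and the BUFFER_OFFSET helper the question's code already uses; only the comments are mine):

// Fill a buffer with one vec2 offset per instance.
let kMyInstanceDataAttrib = 5
glGenBuffers(1, &instBuffer)
glBindBuffer(GLenum(GL_ARRAY_BUFFER), instBuffer)
glBufferData(GLenum(GL_ARRAY_BUFFER), GLsizeiptr(sizeof(GLfloat) * instData.count), &instData, GLenum(GL_STATIC_DRAW))

// Point attribute 5 at that buffer and advance it once per instance, not per vertex.
glEnableVertexAttribArray(GLuint(kMyInstanceDataAttrib))
glVertexAttribPointer(GLuint(kMyInstanceDataAttrib), 2, GLenum(GL_FLOAT), GLboolean(GL_FALSE), 0, BUFFER_OFFSET(0))
glVertexAttribDivisor(GLuint(kMyInstanceDataAttrib), 1)

// Draw the 36-vertex cube three times, once per offset in instData.
glDrawArraysInstanced(GLenum(GL_TRIANGLES), 0, 36, 3)

Presumably the extra glGenVertexArraysOES/glBindVertexArrayOES calls were the problem because they recorded this attribute state in a newly created vertex array object rather than the one bound when the cube is drawn.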
I've seen different methods for shadow mapping all around the internet, but I've only seen one method for mapping the shadows cast by a point light source, i.e. cube mapping. Even though I've heard of it, I've never seen an actual explanation of it.
I started writing this code before I had heard of cube mapping. My goal with this code was to map the shadow depths from spherical coordinates to a 2D texture.
I've simplified the coloring of the fragments for now in order to better visualize what's happening.
Basically, the models are a sphere of radius 2.0 at coordinates (0.0, 0.0, -5.0) and a hyperboloid of height 1.0 at (0.0, 0.0, -2.0), with the light source at (0.0, 0.0, 8.0).
If I scale the depth values (as noted in the code) by an inverse factor of less than 9.6, both objects appear completely colored with the ambient color. Greater than 9.6 and they slowly become normally textured. I tried to make an example in JSFiddle but I couldn't get textures to work.
The method just isn't working and I'm lost.
<script id="shadow-vs" type="x-shader/x-vertex">
attribute vec3 aVertexPosition;
varying float vDepth;
uniform vec3 uLightLocation;
uniform mat4 uMMatrix;
void main(void){
const float I_PI = 0.318309886183790671537767; //Inverse pi
vec4 aPos = uMMatrix * vec4(aVertexPosition, 1.0); //The actual position of the vertex
vec3 position = aPos.xyz - uLightLocation; //The position of the vertex relative to the light source i.e. "the vector"
float len = length(position);
float theta = 2.0 * acos(position.y/len) * I_PI - 1.0; //The angle of the vector from the xz plane bound between -1 and 1
float phi = atan(position.z, position.x) * I_PI; //The angle of the vector on the xz plane bound between -1 and 1
vDepth = len; //Divided by some scale. The depth of the vertex from the light source
gl_Position = vec4(phi, theta, len, 1.0);
}
</script>
<script id="shadow-fs" type="x-shader/x-fragment">
precision mediump float;
varying float vDepth;
void main(void){
gl_FragColor = vec4(vDepth, 0.0, 0.0, 1.0); //Records the depth in the red channel of the fragment color
}
</script>
<script id="shader-vs" type="x-shader/x-vertex">
attribute vec3 aVertexPosition;
attribute vec3 aVertexNormal;
attribute vec2 aTextureCoord;
uniform mat4 uMVMatrix;
uniform mat4 uPMatrix;
uniform mat4 uMMatrix;
uniform mat3 uNMatrix;
varying vec2 vTextureCoord;
varying vec3 vTransformedNormal;
varying vec4 vPosition;
varying vec4 aPos;
void main(void) {
aPos = uMMatrix * vec4(aVertexPosition, 1.0); //The actual position of the vertex
vPosition = uMVMatrix * uMMatrix * vec4(aVertexPosition, 1.0);
gl_Position = uPMatrix * vPosition;
vTextureCoord = aTextureCoord;
vTransformedNormal = normalize(uNMatrix * mat3(uMMatrix) * aVertexNormal);
}
</script>
<script id="shader-fs" type="x-shader/x-fragment">
precision mediump float;
varying vec2 vTextureCoord;
varying vec3 vTransformedNormal;
varying vec4 vPosition;
varying vec4 aPos;
uniform sampler2D uSampler;
uniform sampler2D uShSampler;
uniform vec3 uLightLocation;
uniform vec3 uAmbientColor;
uniform vec4 uLightColor;
void main(void) {
const float I_PI = 0.318309886183790671537767;
vec3 position = aPos.xyz - uLightLocation; //The position of the vertex relative to the light source i.e. "the vector"
float len = length(position);
float theta = acos(position.y/len) * I_PI; //The angle of the vector from the xz axis bound between 0 and 1
float phi = 0.5 + 0.5 * atan(position.z, position.x) * I_PI; //The angle of the vector on the xz axis bound between 0 and 1
float posDepth = len; //Divided by some scale. The depth of the vertex from the light source
vec4 shadowMap = texture2D(uShSampler, vec2(phi, theta)); //The color at the texture coordinates of the current vertex
float shadowDepth = shadowMap.r; //The depth of the vertex closest to the light source
if (posDepth > shadowDepth){ //Check if this vertex is further away from the light source than the closest vertex
gl_FragColor = vec4(uAmbientColor, 1.0);
}
else{
gl_FragColor = texture2D(uSampler, vec2(vTextureCoord.s, vTextureCoord.t));
}
}
</script>
I'm currently trying to write a shader that should include a simple point light in OpenGL ES 2.0, but it's not quite working.
I built my own small scene graph, and each object (currently only boxes) can have its own translation/rotation/scale; rendering works fine. Each of the boxes assigns its own modelView and normal matrix, and all of them use the same projection matrix.
For each object I pass the matrices and the light position to the shader as a uniform.
If the Object does not rotate the light works fine, but as soon as the Object rotates the light seems to rotate with the object instead of staying at the same position.
Here is some Code.
First, creating the matrices:
GLKMatrix4 projectionMatrix = GLKMatrix4MakePerspective(GLKMathDegreesToRadians(65.0f), aspect, 0.1f, 100.0f);
GLKMatrix4 modelViewMatrix = GLKMatrix4MakeTranslation(0.0f, 0.0f, 0.0f);
Each node computes its own transformation matrix containing the translation/rotation/scale and multiplies it with the modelViewMatrix:
modelViewMatrix = GLKMatrix4Multiply(modelViewMatrix, transformation);
This matrix is passed to the shader and after the object has been rendered the old matrix is recovered.
The normal matrix is calculated as follows:
GLKMatrix3InvertAndTranspose(GLKMatrix4GetMatrix3(modelViewMatrix), NULL);
Vertex shader:
attribute vec4 Position;
attribute vec2 TexCoordIn;
attribute vec3 Normal;

uniform mat4 modelViewProjectionMatrix;
uniform mat4 modelViewMatrix;
uniform mat3 normalMatrix;
uniform vec3 lightPosition;

varying vec2 TexCoordOut;
varying vec3 n, PointToLight;

void main(void) {
    gl_Position = modelViewProjectionMatrix * Position;

    n = normalMatrix * Normal;
    PointToLight = ((modelViewMatrix * vec4(lightPosition, 1.0)) - (modelViewMatrix * Position)).xyz;

    // Pass texCoord
    TexCoordOut = TexCoordIn;
}
Fragment shader:
varying lowp vec2 TexCoordOut;
varying highp vec3 n, PointToLight;

uniform sampler2D Texture;

void main(void) {
    gl_FragColor = texture2D(Texture, TexCoordOut);

    highp vec3 nn = normalize(n);
    highp vec3 L = normalize(PointToLight);

    lowp float NdotL = clamp(dot(n, L), -0.8, 1.0);
    gl_FragColor *= (NdotL + 1.) / 2.;
}
I guess the PointToLight is computed wrong, but I can't figure out what's going wrong.
I finally figured out what went wrong.
Instead of multiplying the lightPosition with the modelViewMatrix, I just need to multiply it with the viewMatrix, which only contains the transformations of the camera and not the transformations for the box:
PointToLight = ((viewMatrix * vec4(lightPosition,1.0)) - (viewMatrix * modelMatrix * Position)).xyz;
Now it works fine.
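A minimal sketch of what that separation can look like on the CPU side (my own illustration using GLKit; the camera placement and the example box transform are assumptions, not the poster's code):

import GLKit

// View matrix: the camera transform only (here, a camera at z = 5 looking at the origin).
let viewMatrix = GLKMatrix4MakeLookAt(0, 0, 5, 0, 0, 0, 0, 1, 0)

// Model matrix: the per-node transform from the scene graph (example values: a translated, rotated box).
let modelMatrix = GLKMatrix4Rotate(GLKMatrix4MakeTranslation(1.0, 0.0, 0.0), 0.7, 0.0, 1.0, 0.0)

// Vertices get viewMatrix * modelMatrix, while lightPosition is transformed by
// viewMatrix alone, matching the corrected PointToLight line above.
let modelViewMatrix = GLKMatrix4Multiply(viewMatrix, modelMatrix)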
I am finding that in my fragment shader, these 2 statements give identical output:
// #1
// pos is set from gl_Position in vertex shader
highp vec2 texc = ((pos.xy / pos.w) + 1.0) / 2.0;
// #2 - equivalent?
highp vec2 texc2 = gl_FragCoord.xy/uWinDims.xy;
If this is correct, could you please explain the math? I understand #2, which is what I came up with, but saw #1 in a paper. Is this an NDC (normalized device coordinate) calculation?
The context is that I am using the texture coordinates with an FBO the same size as the viewport. It's all working, but I'd like to understand the math.
Relevant portion of vertex shader:
attribute vec4 position;

uniform mat4 modelViewProjectionMatrix;

varying lowp vec4 vColor;
// transformed position
varying highp vec4 pos;

void main()
{
    gl_Position = modelViewProjectionMatrix * position;
    // for fragment shader
    pos = gl_Position;
    vColor = aColor;
}
Relevant portion of fragment shader:
// transformed position - from vsh
varying highp vec4 pos;
// viewport dimensions
uniform highp vec2 uWinDims;

void main()
{
    highp vec2 texc = ((pos.xy / pos.w) + 1.0) / 2.0;

    // equivalent?
    highp vec2 texc2 = gl_FragCoord.xy / uWinDims.xy;

    ...
}
(pos.xy / pos.w) is the coordinate value in normalized device coordinates (NDC). This value ranges from -1 to 1 in each dimension.
(NDC + 1.0)/2.0 changes the range from (-1 to 1) to (0 to 1) (0 on the left of the screen, and 1 on the right, similar for top/bottom).
Alternatively, gl_FragCoord gives the coordinate in pixels, so it ranges from (0 to width) and (0 to height).
Dividing this value by the width and height (uWinDims) again gives the position from 0 on the left side of the screen to 1 on the right side (and likewise for bottom/top).
So yes, they appear to be equivalent.
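A quick numeric sanity check (my own example numbers, not from the question), for a fragment whose clip-space position divides out to NDC (-0.5, 0.25) on an 800x600 viewport:

// Method #1: map NDC from [-1, 1] to [0, 1].
let ndc = (x: -0.5, y: 0.25)
let texc = (x: (ndc.x + 1.0) / 2.0, y: (ndc.y + 1.0) / 2.0)   // (0.25, 0.625)

// The viewport transform puts that fragment at these pixel coordinates:
let fragCoord = (x: texc.x * 800.0, y: texc.y * 600.0)        // (200, 375)

// Method #2: divide the pixel coordinates by the viewport size.
let texc2 = (x: fragCoord.x / 800.0, y: fragCoord.y / 600.0)  // (0.25, 0.625)

Both give (0.25, 0.625): dividing gl_FragCoord by the viewport size simply undoes the viewport transform that produced it from NDC.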
I am trying to pass an array of vec2 to a fragment shader but I can't seem to work out how.
In my application I have the following array:
GLfloat myMatrix[] = { 100.0, 100.0,
200.0, 200.0 };
glUniformMatrix2fv(matrixLocation, 2, 0, myMatrix);
and in my fragment shader I am trying to access those values like so:
uniform vec2 myMatrix[2];
gl_FragColor = gl_FragCoord.xy + myMatrix[0].xy;
However, gl_FragColor does not change (which it should), as it does when I hard-code it to:
gl_FragColor = gl_FragCoord.xy + vec2( 100.0, 100.0 ).xy;
Any ideas how I can pass these vec2 values into the shader?
Thanks in advance.
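For reference, a vec2 array uniform is normally uploaded with glUniform2fv rather than glUniformMatrix2fv (which expects 2x2 matrices). A minimal sketch, assuming "program" is the linked GL program and the uniform is declared as in the question:

// Look up the uniform declared as "uniform vec2 myMatrix[2];" in the fragment shader.
let matrixLocation = glGetUniformLocation(program, "myMatrix")

var myMatrix: [GLfloat] = [100.0, 100.0,
                           200.0, 200.0]

// The count (2 here) is the number of vec2 elements, not the number of floats.
glUniform2fv(matrixLocation, 2, &myMatrix)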