Trying to flip an OpenGL texture - iOS

I'm trying to invert the Y values of a texture in OpenGL ES 2.0, and have had no luck after several days of experimentation. Here's the code in my didRender block (it's a SceneKit scene).
let textureCoordinates: [GLfloat] = [
    0.0, 0.0,
    1.0, 0.0,
    0.0, 1.0,
    1.0, 1.0]

let flipVertical: [GLfloat] = [
    0.0, 1.0,
    1.0, 1.0,
    0.0, 0.0,
    1.0, 0.0]

glEnableVertexAttribArray(0)
glEnableVertexAttribArray(1)
glVertexAttribPointer(0, 2, GLenum(GL_FLOAT), 0, 0, flipVertical)
glVertexAttribPointer(1, 2, GLenum(GL_FLOAT), 0, 0, textureCoordinates)
glDrawArrays(GLenum(GL_TRIANGLE_STRIP), 0, 4)
glBindTexture(GLenum(GL_TEXTURE_2D), 0)
glFlush()
Is there anything that sticks out to you as wrong? My understanding is that I can flip the texture without having to rewrite to a new texture. Is that true? Thanks!

You don't need a separate vertex attribute to do the flip; just replace the textureCoordinate array with the values from flipVertical (and then delete all of the code related to flipVertical - you don't need it).
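For what it's worth, here is a minimal sketch of how the draw call could look after that change. It assumes attribute 0 carries the quad's vertex positions (a hypothetical quadVertices array, not shown in the question) and attribute 1 the texture coordinates:

// Sketch only: quadVertices is a placeholder for whatever position data the
// quad already uses; the flip lives entirely in the V values below.
let flippedTextureCoordinates: [GLfloat] = [
    0.0, 1.0,
    1.0, 1.0,
    0.0, 0.0,
    1.0, 0.0]

glEnableVertexAttribArray(0)
glEnableVertexAttribArray(1)
glVertexAttribPointer(0, 2, GLenum(GL_FLOAT), GLboolean(GL_FALSE), 0, quadVertices)
glVertexAttribPointer(1, 2, GLenum(GL_FLOAT), GLboolean(GL_FALSE), 0, flippedTextureCoordinates)
glDrawArrays(GLenum(GL_TRIANGLE_STRIP), 0, 4)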

Related

Metal Tessellation on subdivided quad

I am totally new to tessellation and relatively new to the Metal API, and have been referring to this sample code: https://developer.apple.com/library/archive/samplecode/MetalBasicTessellation/Introduction/Intro.html
I realise the max tessellation factor on iOS is 16, which is very low for my use case compared to 64 on OS X.
I suppose I'll need to apply tessellation to a quad that has been subdivided into 4-by-4 smaller sections to begin with, so that after tessellation it ends up as something like a single quad with a tessellation factor of 64?
So I've changed the input control points to something like this:
static const float controlPointPositionsQuad[] = {
    -0.8,  0.8, 0.0, 1.0,   // upper-left
     0.0,  0.8, 0.0, 1.0,   // upper-mid
     0.0,  0.0, 0.0, 1.0,   // mid-mid
    -0.8,  0.0, 0.0, 1.0,   // mid-left

    -0.8,  0.0, 0.0, 1.0,   // mid-left
     0.0,  0.0, 0.0, 1.0,   // mid-mid
     0.0, -0.8, 0.0, 1.0,   // lower-mid
    -0.8, -0.8, 0.0, 1.0,   // lower-left

     0.0,  0.8, 0.0, 1.0,   // upper-mid
     0.8,  0.8, 0.0, 1.0,   // upper-right
     0.8,  0.0, 0.0, 1.0,   // mid-right
     0.0,  0.0, 0.0, 1.0,   // mid-mid

     0.0,  0.0, 0.0, 1.0,   // mid-mid
     0.8,  0.0, 0.0, 1.0,   // mid-right
     0.8, -0.8, 0.0, 1.0,   // lower-right
     0.0, -0.8, 0.0, 1.0,   // lower-mid
};
and for the drawPatches call I changed it to 16 instead of 4.
But the result is that it only shows the first 4 points (top left).
If I change the vertex layout stride to this:
vertexDescriptor.layouts[0].stride = 16*sizeof(float);
it still shows the same result.
I don't really know what I'm doing, but what I'm going for is similar to tessellating a 3D mesh, except in my case it's just a quad with subdivisions.
I am unable to find any tutorials or code samples that cover this using the Metal API.
Can someone please point me in the right direction? Thanks!
Here are a few things to check:
Ensure that your tessellation factor compute kernel is generating inside/edge factors for all patches by dispatching a compute grid of the appropriate size.
When dispatching threadgroups to execute your tessellation factor kernel, use 1D threadgroup counts and sizes (such that the total thread count is the number of patches, and the heights of both sizes passed to the dispatch method are 1).
When drawing patches, the first parameter (numberOfPatchControlPoints) should equal the number of control points in each patch (3 for triangles, 4 for quads), and the third parameter (patchCount) should be the number of patches to draw. A sketch of both calls follows below.
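To make the last two points concrete, here is a minimal Swift-side sketch. The encoder, pipeline, and buffer names (computeEncoder, renderEncoder, tessellationFactorsPipeline, tessellationFactorsBuffer) and the buffer index are placeholders rather than names from the Apple sample; the factors buffer is assumed to hold one MTLQuadTessellationFactorsHalf per patch.

// 16 control points = 4 quad patches of 4 control points each.
let patchCount = 4

// Compute pass: one thread per patch, 1D threadgroup counts and sizes.
computeEncoder.setComputePipelineState(tessellationFactorsPipeline)
computeEncoder.setBuffer(tessellationFactorsBuffer, offset: 0, index: 2)
computeEncoder.dispatchThreadgroups(MTLSize(width: patchCount, height: 1, depth: 1),
                                    threadsPerThreadgroup: MTLSize(width: 1, height: 1, depth: 1))
computeEncoder.endEncoding()

// Render pass: 4 control points per quad patch, 4 patches total.
renderEncoder.setTessellationFactorBuffer(tessellationFactorsBuffer, offset: 0, instanceStride: 0)
renderEncoder.drawPatches(numberOfPatchControlPoints: 4,
                          patchStart: 0,
                          patchCount: patchCount,
                          patchIndexBuffer: nil,
                          patchIndexBufferOffset: 0,
                          instanceCount: 1,
                          baseInstance: 0)

Note that the vertex descriptor layout stride stays at the size of a single control point (four floats here); only the patch count and the length of the control-point buffer change when you add more patches.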

ARKit nodes disappear after sceneView.session.setWorldOrigin transformation

I have some code that consists of the delegate method for obtaining a heading, and a transformation. I take the heading, convert it to radians, and use the angle to rotate around the y-axis:
Y = |  cos(ry)   0   sin(ry)   0 |
    |  0         1   0         0 |
    | -sin(ry)   0   cos(ry)   0 |
    |  0         0   0         1 |
What are the first two columns in SCNMatrix4?
code:
func locationManager(_ manager: CLLocationManager, didUpdateHeading newHeading: CLHeading) {
    print("received heading: \(String(describing: newHeading))")
    self.currentHeading = newHeading
    print("\(Float(currentHeading.trueHeading)) or for magneticHeading: \(currentHeading.magneticHeading)")

    let headingInRadians = degreesToRadians(deg: Float(currentHeading.trueHeading))
    print("\(headingInRadians) -------- headingInRadians")

    var m = matrix_float4x4()
    m.columns.3 = [0.0, 0.0, 0.0, 1.0]
    m.columns.2 = [sin(headingInRadians), 0.0, cos(headingInRadians), 0.0]
    m.columns.1 = [0.0, 1.0, 0.0, 0.0]
    m.columns.0 = [cos(headingInRadians), 0.0, -sin(headingInRadians), 0.0]

    sceneView.session.setWorldOrigin(relativeTransform: m)
}
Calling setWorldOrigin removes all the nodes. I have tried pausing the session, running the function, and then starting the session again, and I get a weird, wonky low-FPS, low-light situation. I know it is the call to setWorldOrigin, because when I remove it I see the nodes and they persist.
UPDATE
I've been working on this. I am debugging by simply trying to change the scale by 2; all that should happen is that the nodes I have placed in a grid spread out. However, I am still getting the same result: after calling setWorldOrigin the nodes are removed. Does using this function reset something? Is there a special place I am supposed to call it from (some delegate method)?
UPDATE
print("\(sceneView.scene.rootNode) --- rootNode in renderer") produces:
<SCNNode: 0x1c01f8100 | 111 children> --- rootNode in renderer
So it appears that the rootNode and its children are still somewhere, but where are they going with such a simple and small transformation?
UPDATE
print("\(sceneView.scene.rootNode.position) --- rootNode in renderer") produces:
SCNVector3(x: 0.0, y: 0.0, z: 0.0) --- rootNode in renderer
Yet I see none of the children, so the rootNode appears to be somewhere else.
UPDATE
I can confirm that the transform is not happening: the positions of the child nodes (which I can't see) are still the same as in the original state (a node every 2 grid blocks/meters), i.e.:
SCNVector3(x: 6.0, y: 0.0, z: -4.0) --- rootNode child node
SCNVector3(x: 6.0, y: 0.0, z: -2.0) --- rootNode child node
UPDATE
The narrowest view of this problem I have now is that even a simple rotation removes the nodes from view. Since there is no position change, I believe something is going on with the rendering process.
func viewDidLoad() {
    ...
    sceneView.scene = scene
    view.addSubview(sceneView)

    let angle = coordinateConverter.getUprightMKMapCameraHeading()
    print("\(angle) --- angle")

    mRotate = matrix_float4x4()
    mRotate.columns.3 = [0.0, 0.0, 0.0, 1.0]
    mRotate.columns.2 = [Float(sin(angle)), 0.0, Float(cos(angle)), 0.0]
    mRotate.columns.1 = [0.0, 0.0, 0.0, 0.0]
    mRotate.columns.0 = [Float(cos(angle)), 0.0, Float(-sin(angle)), 0.0]

    sceneView.session.setWorldOrigin(relativeTransform: mRotate)
console output:
281.689248803283 --- angle
Still, the virtual objects remain invisible, though they are placed at the origin, which should still be the same.
I also should note that the standard identity (1,1,1,1 diagonal) transform DOES work, but obviously it does nothing. So just calling the function doesn't make them disappear; they only disappear if the transform actually does something:
...
var identity = matrix_float4x4()
identity.columns.3 = [0.0, 0.0, 0.0, 1.0]
identity.columns.2 = [0.0, 0.0, 1.0, 0.0]
identity.columns.1 = [0.0, 1.0, 0.0, 0.0]
identity.columns.0 = [1.0, 0.0, 0.0, 0.0]
sceneView.session.setWorldOrigin(relativeTransform: identity)
...
The above transforms nothing, and the nodes remain in view.
If I change the matrix to this (translate by 10,10):
var identity = matrix_float4x4()
identity.columns.3 = [10.0, 0.0, 10.0, 1.0]
identity.columns.2 = [0.0, 0.0, 1.0, 0.0]
identity.columns.1 = [0.0, 1.0, 0.0, 0.0]
identity.columns.0 = [1.0, 0.0, 0.0, 0.0]
It works...
I'm just thinking: do I have to scale my world down (real-world MKMapKit coordinates per unit), maybe because the hardware can't handle a world scaled that large? Also, from the above test I think I realized that the origin moves when you use this function, but the nodes do not. So if I want the nodes to remain at my position after the transform, I need to transform them back. Still, the result is the same:
print("\(transformerFromPDFToMk.tx) -- tx")
print("\(transformerFromPDFToMk.ty) -- ty")
m = matrix_float4x4()
m.columns.3 = [Float(transformerFromPDFToMk.tx), 0.0, Float(transformerFromPDFToMk.ty), 1.0]
m.columns.2 = [0.0, 0.0, 1.0, 0.0]
m.columns.1 = [0.0, 1.0, 0.0, 0.0]
m.columns.0 = [1.0, 0.0, 0.0, 0.0]
sceneView.session.setWorldOrigin(relativeTransform: m)
for node in scene.rootNode.childNodes {
    node.position = SCNVector3Make(-Float(transformerFromPDFToMk.tx) + node.position.x,
                                   node.position.y,
                                   Float(transformerFromPDFToMk.ty) + node.position.z)
}
console output:
81145547.3824476 -- tx
99399579.5362287 -- ty
UPDATE
I SEE MY OBJECTS (kind of)! I thought I had tried this before, but it's working a little better now. The code package I used defined a scale (coordinateConverter.unitSizeInMeters), and I was converting to it improperly. But the objects are flickering in and out rapidly...
m = matrix_float4x4()
m.columns.3 = [Float(transformerFromPDFToMk.tx) / Float(coordinateConverter.unitSizeInMeters), 0.0,
               Float(transformerFromPDFToMk.ty) / Float(coordinateConverter.unitSizeInMeters), 1.0]
m.columns.2 = [0.0, 0.0, 1.0, 0.0]
m.columns.1 = [0.0, 1.0, 0.0, 0.0]
m.columns.0 = [1.0, 0.0, 0.0, 0.0]

sceneView.session.setWorldOrigin(relativeTransform: m)
sceneView.session.setWorldOrigin(relativeTransform: m)

for node in scene.rootNode.childNodes {
    node.position = SCNVector3Make(-Float(transformerFromPDFToMk.tx) / Float(coordinateConverter.unitSizeInMeters)
                                       + node.position.x / Float(coordinateConverter.unitSizeInMeters),
                                   node.position.y / Float(coordinateConverter.unitSizeInMeters),
                                   -Float(transformerFromPDFToMk.ty) / Float(coordinateConverter.unitSizeInMeters)
                                       + node.position.z / Float(coordinateConverter.unitSizeInMeters))
}
UPDATE
Is the flickering "z-fighting"? https://en.wikipedia.org/wiki/Z-fighting
After resolving the scaling issues, I reduced the size of the world (scale) and put the objects a little further apart from each other to fix the flickering/"z-fighting". Wikipedia's page was very helpful; I'm pretty sure all I really needed to do was reduce the world size.
Hey, after looking through your code, I noticed that you have the 4x4 matrix set up incorrectly for a rotation around the y-axis.
You currently have:
var m = matrix_float4x4()
m.columns.3 = [0.0, 0.0, 0.0, 1.0]
m.columns.2 = [sin(headingInRadians), 0.0, cos(headingInRadians), 0.0]
m.columns.1 = [0.0, 1.0, 0.0, 0.0]
m.columns.0 = [cos(headingInRadians), 0.0, -sin(headingInRadians), 0.0]
The correct format is below, based on this exchange:
https://math.stackexchange.com/questions/72014/given-an-angle-in-radians-how-could-i-calculate-a-4x4-rotation-matrix-about-the
var m = matrix_float4x4()
m.columns.3 = [0.0, 0.0, 0.0, 1.0]
m.columns.2 = [-sin(headingInRadians), 0.0, cos(headingInRadians), 0.0]
m.columns.1 = [0.0, 1.0, 0.0, 0.0]
m.columns.0 = [cos(headingInRadians), 0.0, sin(headingInRadians), 0.0]
This should make it work for you.
Depending on what you want to do with your scene, I recommend taking the difference between the current heading and the previous heading and using that as your angle. That way the scene should, in theory, always stay facing the same direction for you.
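If it helps, here is a rough sketch of that idea (not from the original post: previousHeading is assumed to be a stored property on the class, and degreesToRadians is the asker's existing helper):

// Sketch only: rotate the world origin by the change in heading since the
// last update, instead of by the absolute heading each time.
// Assumes `var previousHeading: CLLocationDirection = 0` on the class.
func locationManager(_ manager: CLLocationManager, didUpdateHeading newHeading: CLHeading) {
    let deltaDegrees = Float(newHeading.trueHeading - previousHeading)
    previousHeading = newHeading.trueHeading

    let angle = degreesToRadians(deg: deltaDegrees)
    var m = matrix_identity_float4x4
    m.columns.0 = [cos(angle), 0.0, sin(angle), 0.0]
    m.columns.2 = [-sin(angle), 0.0, cos(angle), 0.0]
    sceneView.session.setWorldOrigin(relativeTransform: m)
}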

How to get fragment coordinate in fragment shader in Metal?

This minimal Metal shader pair renders a simple interpolated gradient onto the screen (when provided with a vertex quad/triangle) based on the vertices' color attributes:
#include <metal_stdlib>
using namespace metal;

typedef struct {
    float4 position [[position]];
    float4 color;
} vertex_t;

vertex vertex_t vertex_function(const device vertex_t *vertices [[buffer(0)]], uint vid [[vertex_id]]) {
    return vertices[vid];
}

fragment half4 fragment_function(vertex_t interpolated [[stage_in]]) {
    return half4(interpolated.color);
}
…with the following vertices:
{
    // x,    y,   z,   w,   r,   g,   b,   a
     1.0, -1.0, 0.0, 1.0, 1.0, 0.0, 0.0, 1.0,
    -1.0, -1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0,
    -1.0,  1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0,
     1.0,  1.0, 0.0, 1.0, 1.0, 1.0, 0.0, 1.0,
     1.0, -1.0, 0.0, 1.0, 1.0, 0.0, 0.0, 1.0,
    -1.0,  1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0
}
So far so good. It renders the well-known gradient triangle/quad.
The one you find in pretty much every single GPU HelloWorld tutorial.
However, I need a fragment shader that, instead of taking the interpolated vertex color, computes a color based on the fragment's position on screen.
It receives a screen-filling quad of vertices and then uses only the fragment shader to calculate the actual colors.
From my understanding, the position of a vertex is a float4 with the first three elements being the 3D vector and the 4th element set to 1.0.
So, I thought, it should be easy to modify the above to have it simply reinterpret the vertex's position as a color in the fragment shader, right?
#include <metal_stdlib>
using namespace metal;

typedef struct {
    float4 position [[position]];
} vertex_t;

vertex vertex_t vertex_function(const device vertex_t *vertices [[buffer(0)]], uint vid [[vertex_id]]) {
    return vertices[vid];
}

fragment half4 fragment_function(vertex_t interpolated [[stage_in]]) {
    float4 color = interpolated.position;
    color += 1.0; // move from range -1..1 to 0..2
    color *= 0.5; // scale from range 0..2 to 0..1
    return half4(color);
}
…with the following vertices:
{
    // x,    y,   z,   w,
     1.0, -1.0, 0.0, 1.0,
    -1.0, -1.0, 0.0, 1.0,
    -1.0,  1.0, 0.0, 1.0,
     1.0,  1.0, 0.0, 1.0,
     1.0, -1.0, 0.0, 1.0,
    -1.0,  1.0, 0.0, 1.0,
}
I was quite surprised, however, to find a uniformly colored (yellow) screen being rendered, instead of a gradient going from red=0.0 to red=1.0 along the x-axis and green=0.0 to green=1.0 along the y-axis:
(expected render output vs. actual render output)
The interpolated.position appears to be yielding the same value for each fragment.
What am I doing wrong here?
Ps: (While this dummy fragment logic could have easily been accomplished using vertex interpolation, my actual fragment logic cannot.)
"The interpolated.position appears to be yielding the same value for each fragment."
No, the values are just very large. The variable with the [[position]] qualifier, in the fragment shader, is in pixel coordinates. Divide by the render target dimensions, and you'll see what you want, except for having to invert the green value, because Metal's convention is to define the upper-left as the origin for this, not the bottom-left.
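If the render target size isn't otherwise available to the shader, one way to get it there (a sketch from the Swift side, not part of the original answer; renderEncoder and drawable are assumed to exist in the encoding code, and buffer index 0 is illustrative) is to pass the drawable size and then divide interpolated.position.xy by it in the fragment function, flipping the resulting green value:

// Pass the pixel dimensions of the render target to the fragment shader so it
// can normalize [[position]] (which is in pixels) into the 0...1 range.
var viewportSize = SIMD2<Float>(Float(drawable.texture.width),
                                Float(drawable.texture.height))
renderEncoder.setFragmentBytes(&viewportSize,
                               length: MemoryLayout<SIMD2<Float>>.stride,
                               index: 0)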

glDrawArrays is not working

I want to make some changes to an image using OpenGL.
So after loading the image, I prepare the texture and run the following code, but the image doesn't change to a triangle.
What am I doing wrong?
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_NORMAL_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);

static const Vertex3D vertices[] = {
    {-1.0,  1.0, -0.0},
    { 1.0,  1.0, -0.0},
    { 0.0, -1.0, -0.0},
};
static const Vector3D normals[] = {
    {0.0, 0.0, 1.0},
    {0.0, 0.0, 1.0},
    {0.0, 0.0, 1.0},
};
static const GLfloat texCoords[] = {
    0.0, 1.0,
    1.0, 0.0,
    0.0, 0.0,
};

glLoadIdentity();
glTranslatef(0.0, 0.0, -3.0);

glBindTexture(GL_TEXTURE_2D, texture[0]);
glVertexPointer(3, GL_FLOAT, 3, vertices);
glNormalPointer(GL_FLOAT, 0, normals);
glTexCoordPointer(2, GL_FLOAT, 0, texCoords);

// initiate the drawing process; we want a triangle, start at index 0 and draw 3 vertices
glDrawArrays(GL_TRIANGLES, 0, 3);

glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_NORMAL_ARRAY);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
I am surprised by your glVertexPointer(3, GL_FLOAT, 3, vertices); the second 3 is the stride, i.e. the byte offset from the start of one vertex to the start of the next (0 means tightly packed). I think it should be 0 instead of 3:
glVertexPointer(3, GL_FLOAT, 0, vertices);
Actually, do you see a triangle, or nothing at all?
Good luck!
Pierre
OK, I tried your code, so I can say:
1) You should use glVertexPointer(3, GL_FLOAT, 0, vertices); with a stride of 0 instead of 3. It clearly does not work with 3 (no need to even check): there is no gap between your values.
2) It may come from your initialization of the view (a common problem): how do you set up the projection and modelview matrices? For instance, to see the triangle, I have to use:
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(45.0, 1.0, 1.0, 10.0); // field of view=45°, zNear..zFar = 1 to 10
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(0.0, 0, -3.0);
i.e. don't forget to say which matrix you wish to set before setting it, and set up the projection properly: zNear <= the nearest depth of your vertices and zFar >= the farthest (note that this is over now: with OpenGL 4 there are no more implicit matrices).
I hope you find the bug.
Cheers

CGContextShowText is rendering upside down text (mirror image)

Without my doing anything to request upside-down text, CGContextShowText is drawing it that way: a vertical mirror image of what it should be.
float font_size = 19.;
CGContextSelectFont (context, "Helvetica", font_size, kCGEncodingMacRoman);
CGContextSetTextDrawingMode (context, kCGTextFill);
CGContextSetRGBStrokeColor (context, 255, 0, 0, 0);
CGContextSetRGBFillColor (context, 255, 0, 0, 0);
CGContextSetTextPosition (context, x, y);
CGContextShowText (context, cstring, strlen(cstring));
How can I fix this? Also why is this the default drawing mode?
Thanks.
This "upsidedownness" commonly comes into play when the API renders to your cgContext from top to bottom, but somehow you're drawing from bottom to top. Anyway, the 2 line solution I use is:
CGAffineTransform trans = CGAffineTransformMakeScale(1, -1);
CGContextSetTextMatrix(tex->cgContext, trans);
Swift 3
let context = UIGraphicsGetCurrentContext()!
let textTransform = CGAffineTransform(scaleX: 1.0, y: -1.0)
context.textMatrix = textTransform
The best solution to this problem is to simply call this at the beginning of your draw routine. (Make sure to only call it once.)
CGAffineTransform transform = CGAffineTransformConcat(CGContextGetTextMatrix(ctx),
                                                      CGAffineTransformMake(1.0, 0.0, 0.0, -1.0, 0.0, 0.0));
CGContextSetTextMatrix(ctx, transform);
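The same idea in Swift, as a sketch (assuming a current UIKit graphics context):

// Flip the text matrix vertically, once, at the start of the draw routine.
if let ctx = UIGraphicsGetCurrentContext() {
    ctx.textMatrix = ctx.textMatrix.concatenating(CGAffineTransform(scaleX: 1.0, y: -1.0))
}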
