I would like to be able to dynamically repeat textures based on the scale of an object (a cube).
I have tried modifying the VerticesCube3D structure but get a crash when changing the values. My textures are set to repeat, but currently the texture stretches across the face (I need to change TEX_COORD_MAX dynamically).
Vertex VerticesCube3D[] = {
    // Front
    {{1, -1, 0}, {1, 1, 1, 1}, {TEX_COORD_MAX, 0}},
    {{1, 1, 0}, {1, 1, 1, 1}, {TEX_COORD_MAX, TEX_COORD_MAX}},
    {{-1, 1, 0}, {1, 1, 1, 1}, {0, TEX_COORD_MAX}},
    {{-1, -1, 0}, {1, 1, 1, 1}, {0, 0}},
    // Back
    {{1, 1, -2}, {1, 1, 1, 1}, {TEX_COORD_MAX, 0}},
    {{-1, -1, -2}, {1, 1, 1, 1}, {TEX_COORD_MAX, TEX_COORD_MAX}},
    {{1, -1, -2}, {1, 1, 1, 1}, {0, TEX_COORD_MAX}},
    {{-1, 1, -2}, {1, 1, 1, 1}, {0, 0}},
    // Left
    {{-1, -1, 0}, {1, 1, 1, 1}, {TEX_COORD_MAX, 0}},
    {{-1, 1, 0}, {1, 1, 1, 1}, {TEX_COORD_MAX, TEX_COORD_MAX}},
    {{-1, 1, -2}, {1, 1, 1, 1}, {0, TEX_COORD_MAX}},
    {{-1, -1, -2}, {1, 1, 1, 1}, {0, 0}},
    // Right
    {{1, -1, -2}, {1, 1, 1, 1}, {TEX_COORD_MAX, 0}},
    {{1, 1, -2}, {1, 1, 1, 1}, {TEX_COORD_MAX, TEX_COORD_MAX}},
    {{1, 1, 0}, {1, 1, 1, 1}, {0, TEX_COORD_MAX}},
    {{1, -1, 0}, {1, 1, 1, 1}, {0, 0}},
    // Top
    {{1, 1, 0}, {1, 1, 1, 1}, {TEX_COORD_MAX, 0}},
    {{1, 1, -2}, {1, 1, 1, 1}, {TEX_COORD_MAX, TEX_COORD_MAX}},
    {{-1, 1, -2}, {1, 1, 1, 1}, {0, TEX_COORD_MAX}},
    {{-1, 1, 0}, {1, 1, 1, 1}, {0, 0}},
    // Bottom
    {{1, -1, -2}, {1, 1, 1, 1}, {TEX_COORD_MAX, 0}},
    {{1, -1, 0}, {1, 1, 1, 1}, {TEX_COORD_MAX, TEX_COORD_MAX}},
    {{-1, -1, 0}, {1, 1, 1, 1}, {0, TEX_COORD_MAX}},
    {{-1, -1, -2}, {1, 1, 1, 1}, {0, 0}}
};
You don't want to change the texture coordinates in your vertex data (because then you'll have to re-upload it to the hardware). Instead, you want to transform the texture coordinates on the GPU, probably in your vertex shader.
Read about Projective Texture Mapping. This is a pretty common technique, but surprisingly I can't find much in the way of good tutorials about it. Here's a university programming assignment that discusses it a bit.
The gist of it: you define a texture matrix to transform texture coordinates -- this is a transformation just like your model, view, and projection matrices, so on iOS you can create it with GLKMatrix4 functions just like those. Then provide that matrix to your vertex shader using a uniform variable. In the vertex shader, multiply the input texture coordinate (a vertex attribute) by the texture matrix to transform it before outputting it to a varying for use in the fragment shader.
If you define texture coordinates in your vertex data such that a texture covers an object exactly once, you can then use a texture matrix to scale the texture down. If it's scaled down, and your texture wrap parameters are set to repeat, your texture will tile across the face of the object.
This works as expected and returns 1.0 for the first of the three scores (homogeneity).
from sklearn import metrics
labels_true = [0, 0, 0, 1, 1, 1]
labels_pred = [6, 6, 6, 1, 2, 2]
metrics.homogeneity_completeness_v_measure(labels_true, labels_pred)
(1.0, 0.6853314789615865, 0.8132898335036762)
But this returns roughly 0.75 for all three scores, while I expected 1.0 for one of them, like in the example above.
y = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2]
labels = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 0, 0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 0, 2, 2, 2, 2, 0, 2, 2, 2,
2, 2, 2, 0, 0, 2, 2, 2, 2, 0, 2, 0, 2, 0, 2, 2, 0, 0, 2, 2, 2, 2,
2, 0, 2, 2, 2, 2, 0, 2, 2, 2, 0, 2, 2, 2, 0, 2, 2, 0]
metrics.homogeneity_completeness_v_measure(y, labels)
(0.7514854021988339, 0.7649861514489816, 0.7581756800057786)
I expected 1.0 for one of the scores above!
Update:
As you can see, one of the predicted clusters matches one of the true groups exactly (all 50 members), so one of the scores should have been 1.0 instead of the roughly 0.75 I get for all three. This is not expected!
from collections import Counter
Counter(y)
Counter({0: 50, 1: 50, 2: 50})
Counter(labels)
Counter({1: 50, 0: 62, 2: 38})
Firstly, homogeneity, completeness and V-measure are calculated as follows: h = 1 - H(C|K) / H(C), c = 1 - H(K|C) / H(K), and v = 2 * h * c / (h + c).
C and K are two random variables. In your case, C is the true labels, while K is the predicted labels.
If h = 1, it means that H(C|K) = 0, since H(C) is always greater than 0. H(C|K) = 0 means that the random variable C is completely determined by the random variable K; see the definition of conditional entropy for details. So in your first case, why is h = 1? Because given a value of the random variable K (the predicted label), I know what the random variable C (the true label) will be: if k is 6, c is 0; if k is 1, c is 1; and so on. Now for the second case, why is h != 1 and c != 1? Even though one cluster maps perfectly to one class, there is no perfect mapping for the other classes. If k is 1, I know c is 0. But if k is 0, I can't be sure whether c is 1 or 2. Thus the homogeneity score, and by the symmetric argument the completeness score, will not be 1.
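To make the entropy argument concrete, here is a stdlib-only sketch (no sklearn required; the helper names are mine, not from the library) that computes H(C|K) for the first example and shows why h = 1 there:

```python
from collections import Counter
from math import log

def entropy(labels):
    """Shannon entropy H(X) of a labeling, in nats."""
    n = len(labels)
    return -sum((c / n) * log(c / n) for c in Counter(labels).values())

def conditional_entropy(c_labels, k_labels):
    """H(C|K): average entropy of the true labels within each predicted cluster."""
    n = len(c_labels)
    total = 0.0
    for k in set(k_labels):
        members = [c for c, kk in zip(c_labels, k_labels) if kk == k]
        total += (len(members) / n) * entropy(members)
    return total

labels_true = [0, 0, 0, 1, 1, 1]
labels_pred = [6, 6, 6, 1, 2, 2]

# Every predicted cluster (6, 1, 2) contains exactly one true class,
# so each within-cluster entropy is 0 and H(C|K) = 0.
h = 1 - conditional_entropy(labels_true, labels_pred) / entropy(labels_true)
print(h)  # 1.0
```

In the second example, cluster 0 mixes true classes 1 and 2, so its within-cluster entropy is positive, H(C|K) > 0, and h drops below 1 even though cluster 1 is pure.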
I have this ugly model of a dodecahedron that I need to rotate (live) on every axis:
local phi = 1.618
local b = 1 / phi
local c = 2 - phi
self.polys = {
    {{ c, 0, 1}, {-c, 0, 1}, {-b, b, b}, { 0, 1, c}, { b, b, b}},
    {{-c, 0, 1}, { c, 0, 1}, { b, -b, b}, { 0, -1, c}, {-b, -b, b}},
    {{ c, 0, -1}, {-c, 0, -1}, {-b, -b, -b}, { 0, -1, -c}, { b, -b, -b}},
    {{-c, 0, -1}, { c, 0, -1}, { b, b, -b}, { 0, 1, -c}, {-b, b, -b}},
    {{ 0, 1, -c}, { 0, 1, c}, { b, b, b}, { 1, c, 0}, { b, b, -b}},
    {{ 0, 1, c}, { 0, 1, -c}, {-b, b, -b}, {-1, c, 0}, {-b, b, b}},
    {{ 0, -1, -c}, { 0, -1, c}, {-b, -b, b}, {-1, -c, 0}, {-b, -b, -b}},
    {{ 0, -1, c}, { 0, -1, -c}, { b, -b, -b}, { 1, -c, 0}, { b, -b, b}},
    {{ 1, c, 0}, { 1, -c, 0}, { b, -b, b}, { c, 0, 1}, { b, b, b}},
    {{ 1, -c, 0}, { 1, c, 0}, { b, b, -b}, { c, 0, -1}, { b, -b, -b}},
    {{-1, c, 0}, {-1, -c, 0}, {-b, -b, -b}, {-c, 0, -1}, {-b, b, -b}},
    {{-1, -c, 0}, {-1, c, 0}, {-b, b, b}, {-c, 0, 1}, {-b, -b, b}}
}
The main problem is that I have absolutely no idea what I'm doing.
It's a short question for a potentially long topic. I would suggest you start here for a gentle introduction to the math behind 3D rotation.
If you are truly interested, I would look at the “3D ebook” link on this site. The book is targeted towards users of Codea, a Lua programming environment on the iPad. The first major example that it leads up to is rotating a 3D cube. Please note that the code is not completely portable to other Lua environments, as it relies on some built-in Codea functions. That said, it is a very easy intro to 3D concepts in general, and as a plus, it uses Lua. And Codea is great, by the way.
If you are looking for someone to just code it up for you, this may not be the place. Better to learn the concepts, and come to SO when you get stuck.
I have downloaded a sample project that uses OpenGL ES on iOS with Objective-C. The app creates a simple cube. I want to make the cube a rectangular prism by decreasing the distance between its front and back faces (making it slimmer). To do that I need to decrease the size of the top, bottom, left, and right faces. I am new to OpenGL and don't know which code to change in order to shrink those four faces. Here is the code:
typedef struct {
    float Position[3];
    float Color[4];
    float TexCoord[2];
} Vertex;

const Vertex Vertices[] = {
    // Front
    {{1, -1, 1}, {1, 1, 1, 1}, {1, 0}},
    {{1, 1, 1}, {1, 1, 1, 1}, {1, 1}},
    {{-1, 1, 1}, {1, 1, 1, 1}, {0, 1}},
    {{-1, -1, 1}, {1, 1, 1, 1}, {0, 0}},
    // Back
    {{1, 1, -1}, {1, 1, 1, 1}, {0, 1}},
    {{-1, -1, -1}, {1, 1, 1, 1}, {1, 0}},
    {{1, -1, -1}, {1, 1, 1, 1}, {0, 0}},
    {{-1, 1, -1}, {1, 1, 1, 1}, {1, 1}},
    // Left
    {{-1, -1, 1}, {1, 1, 1, 1}, {1, 0}},
    {{-1, 1, 1}, {1, 1, 1, 1}, {1, 1}},
    {{-1, 1, -1}, {1, 1, 1, 1}, {0, 1}},
    {{-1, -1, -1}, {1, 1, 1, 1}, {0, 0}},
    // Right
    {{1, -1, -1}, {1, 1, 1, 1}, {1, 0}},
    {{1, 1, -1}, {1, 1, 1, 1}, {1, 1}},
    {{1, 1, 1}, {1, 1, 1, 1}, {0, 1}},
    {{1, -1, 1}, {1, 1, 1, 1}, {0, 0}},
    // Top
    {{1, 1, 1}, {1, 1, 1, 1}, {1, 0}},
    {{1, 1, -1}, {1, 1, 1, 1}, {1, 1}},
    {{-1, 1, -1}, {1, 1, 1, 1}, {0, 1}},
    {{-1, 1, 1}, {1, 1, 1, 1}, {0, 0}},
    // Bottom
    {{1, -1, -1}, {1, 1, 1, 1}, {1, 0}},
    {{1, -1, 1}, {1, 1, 1, 1}, {1, 1}},
    {{-1, -1, 1}, {1, 1, 1, 1}, {0, 1}},
    {{-1, -1, -1}, {1, 1, 1, 1}, {0, 0}}
};

const GLubyte Indices[] = {
    // Front
    0, 1, 2,
    2, 3, 0,
    // Back
    4, 6, 5,
    4, 5, 7,
    // Left
    8, 9, 10,
    10, 11, 8,
    // Right
    12, 13, 14,
    14, 15, 12,
    // Top
    16, 17, 18,
    18, 19, 16,
    // Bottom
    20, 21, 22,
    22, 23, 20
};
If you think this isn't the code that determines the size of the faces, please tell me what method was probably used so I can find it in the project and post it here.
The problem was fixed thanks to Tommy. But now I have a new issue. The size of the four faces has decreased, but the front and back faces now have a gap between them and the rest of the faces; here is a screenshot.
How can I move the front face inwards towards the other faces so its attached to them?
Each entry in the Vertices array defines an instance of the Vertex struct. So the first three numbers are the Position: the first vertex listed has position {1, -1, 1}, the second has {1, 1, 1}, etc. They're all floating-point numbers in this code, so any fractional value will do.
Indices groups the vertices into triangles. So the 'front' is the triangle between the 0th, 1st and 2nd vertices plus the triangle between the 2nd, 3rd and 0th vertices.
Therefore the front face is determined by the positions of vertices 0, 1, 2 and 3, which all have z = 1. If you changed that to e.g. z = 0.5 then you'd move the front face towards the centre of the cube. Note that each face has its own copies of the shared corners, so to shrink the cube without leaving a gap you must also change the matching z = 1 coordinates in the Left, Right, Top and Bottom groups.
I downloaded a project that displays a square with some texture. The square is currently located near the bottom of the screen, and I want to move it to the middle. Here is the code.
Geometry.h
#import <GLKit/GLKit.h>
#ifndef Geometry_h
#define Geometry_h
typedef struct {
    float Position[3];
    float Color[4];
    float TexCoord[2];
    float Normal[3];
} Vertex;

typedef struct {
    int x;
    int y;
    int z;
} Position;

extern const Vertex VerticesCube[24];
extern const GLubyte IndicesTrianglesCube[36];
#endif
Here is the code in geometry.m
const Vertex VerticesCube[] = {
    // Front
    {{1, -1, 1}, {1, 0, 0, 1}, {0, 1}, {0, 0, 1}},
    {{1, 1, 1}, {0, 1, 0, 1}, {0, 2.0/3.0}, {0, 0, 1}},
    {{-1, 1, 1}, {0, 0, 1, 1}, {1.0/3.0, 2.0/3.0}, {0, 0, 1}},
    {{-1, -1, 1}, {0, 0, 0, 1}, {1.0/3.0, 1}, {0, 0, 1}},
};

const GLubyte IndicesTrianglesCube[] = {
    // Front
    0, 1, 2,
    2, 3, 0,
};
What part of this code determines the position of the rendered object on the screen?
None of the code you posted has to do with screen position.
VerticesCube specifies the cube corners in an arbitrary 3D space. Somewhere in code you haven't posted, a projection transform (and probably also view and model transforms) map each vertex to clip space, and a glViewport call (which is probably implicitly done for you if you're using GLKView) maps clip space to screen/view coordinates.
Rearranging things on screen could involve any one of those transforms, and which one to use is a choice that depends on understanding each one and how it fits into the larger context of your app design.
This is the sort of thing you'd get from the early stages of any OpenGL tutorial. Here's a decent one.
I want to draw as many pyramids as needed to fill up the screen. I can draw a single pyramid, change its color, etc. But now I want to draw a lot of pyramids, and I want to use a single set of vertices and indices.
The vertex and index data, with color information, are as follows:
const Vertex Vertices[] = {
    {{-1, -1, -1}, {1, 0, 0, 1}},
    {{1, -1, -1}, {1, 0, 0, 1}},
    {{1, -1, 1}, {1, 0, 0, 1}},
    {{-1, -1, 1}, {1, 0, 0, 1}},
    {{0, 1, 0}, {1, 0, 0, 1}}
};

const GLubyte Indices[] = {
    2, 4, 3,
    1, 4, 2,
    0, 4, 1,
    4, 0, 3
};
Can anybody help me with the code, as I know I am making some mistakes.
In OpenGL ES 2.0 the only way to do this is simply to re-render the pyramid at different positions, one draw call per pyramid. What you're describing is called 'instancing', and it's only supported in OpenGL ES 3.0, where you would have, as you said, one set of vertices and indices but would issue a single GL command to draw many instances of them; in the vertex shader, the built-in variable gl_InstanceID tells you which instance is currently being drawn, so you can position each one differently.
There are some vendor-specific extensions that allow instancing in OpenGL ES 2.0, such as NV_draw_instanced, but those only work on specific vendors' hardware.