I'm having issues with Texture2D and I'd like to understand how to use it better.
I've taken the Crashlander Texture2D class from here and a default OpenGL project in Xcode 4, forcing it to load OpenGL ES 1.1.
First, a conceptual question. The dimensions parameter of the Texture2D init method is clearly an OpenGL size, but what relation does the fontSize parameter have to the OpenGL world?
Second, debugging. The result I get from the code below is a black (or whatever colour I set with glColor) square where the text should be.
Here are the changes I've made in my code:
- (void)awakeFromNib
{
    EAGLContext *aContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES1];
    if (!aContext) {
        aContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES1];
    }

    self.labelAtTheTop = [[[Texture2D alloc] initWithString:@"Some Text" dimensions:CGSizeMake(1, 1) alignment:UITextAlignmentLeft fontName:@"Helvetica" fontSize:14.0f] autorelease];

    if (!aContext)
        NSLog(@"Failed to create ES context");
    else if (![EAGLContext setCurrentContext:aContext])
        NSLog(@"Failed to set ES context current");

    self.context = aContext;
    [aContext release];

    [(EAGLView *)self.view setContext:context];
    [(EAGLView *)self.view setFramebuffer];

    animating = FALSE;
    animationFrameInterval = 1;
    self.displayLink = nil;
}
- (void)drawFrame
{
    [(EAGLView *)self.view setFramebuffer];

    // Replace the implementation of this method to do your own custom drawing.
    glClearColor(0.5f, 0.5f, 0.5f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();

    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnable(GL_TEXTURE_2D);
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

    glColor4f(0.0f, 0.0f, 0.0f, 1.0f);
    glPushMatrix();
    glLoadIdentity();
    [self.labelAtTheTop drawAtPoint:CGPointMake(0, 0)];
    glPopMatrix();
    glDisable(GL_COLOR_MATERIAL);

    // Disable modes so they don't interfere with other parts of the program
    glDisableClientState(GL_TEXTURE_COORD_ARRAY);
    glDisableClientState(GL_VERTEX_ARRAY);
    glDisable(GL_TEXTURE_2D);
    glDisable(GL_BLEND);

    [(EAGLView *)self.view presentFramebuffer];
}
Crashlander is a really old code base, so I would suggest avoiding it. There is a perfectly good 2D engine for the iPhone called Cocos2D (http://www.cocos2d-iphone.org/). About the code: glColor4f(0, 0, 0, 1) is black, so the texture is being modulated down to black; try commenting that out. Also try commenting out glDisable(GL_COLOR_MATERIAL);. As for fontSize, I believe it is the size of the font in screen points.
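To make that concrete, here is a minimal sketch of the draw section of drawFrame with both suggestions applied (this assumes Texture2D modulates its glyph texture by the current color, which would be why black produces a black square):

glColor4f(1.0f, 1.0f, 1.0f, 1.0f); // white; (0, 0, 0, 1) multiplies the texture down to black
glPushMatrix();
glLoadIdentity();
[self.labelAtTheTop drawAtPoint:CGPointMake(0, 0)];
glPopMatrix();
// glDisable(GL_COLOR_MATERIAL); // commented out, as suggested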
[EDIT]
If you want to learn something about OpenGL ES, here is a good intro tutorial:
http://iphonedevelopment.blogspot.com/2009/05/opengl-es-from-ground-up-table-of.html
I'm trying to draw a 1-pixel-wide line on iOS using OpenGL, and I've run into a problem.
Here is how the drawn line (zoomed) looks in the simulator (screenshots for the iPhone 4S and iPhone 6 Plus omitted here).
As you can see, the line is more than 1 pixel wide, and it's also smoothed.
I think the problem is not in OpenGL but in the screen scale factor.
I want the line to be one pixel wide on all types of screens.
Here's how I draw the line:
- (void)drawView {
    [EAGLContext setCurrentContext:context];
    glBindFramebufferOES(GL_FRAMEBUFFER_OES, viewFramebuffer);
    glViewport(0, 0, backingWidth, backingHeight);

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glDisable(GL_LINE_SMOOTH);
    glDisable(GL_BLEND);
    glOrthof(-1.0f, 1.0f, -1.5f, 1.5f, -1.0f, 1.0f);
    glMatrixMode(GL_MODELVIEW);

    glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);

    GLfloat vertices[4];
    vertices[0] = -0.5;
    vertices[1] = 0;
    vertices[2] = 0.5;
    vertices[3] = 0;

    glVertexPointer(2, GL_FLOAT, 0, vertices);
    glEnableClientState(GL_VERTEX_ARRAY);
    glColor4f(1.0f, 1.0f, 0.0f, 1.0f);
    glLineWidth(1.0f); // note: glLineWidthx takes a GLfixed, not a float
    glDrawArrays(GL_LINE_STRIP, 0, 2);

    glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);
    [context presentRenderbuffer:GL_RENDERBUFFER_OES];
}
Have a look at http://oleb.net/blog/2014/11/iphone-6-plus-screen/
It shows that the iPhone 6 Plus renders at a scaled resolution, and says that you can disable the scaling by setting the contentScaleFactor of your GLKView to the value of UIScreen.nativeScale.
Hopefully that will fix your problem.
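For example, something along these lines when configuring the view (a sketch; eaglView stands in for however you reference your GL-backed view, and nativeScale requires iOS 8 or later):

// Render at the panel's physical resolution so one GL pixel maps to one
// hardware pixel; this avoids the 6 Plus downsampling blur.
eaglView.contentScaleFactor = [UIScreen mainScreen].nativeScale;
// For a manual CAEAGLLayer-backed view, recreate the renderbuffer storage
// afterwards so backingWidth/backingHeight pick up the new drawable size.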
I have a GLKViewController that has an implementation as follows...
#import "game-gl.h"
....
- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect
{
    renderFrameLine();
}
game-gl.c (I started with s = .5):
void renderFrameLine() {
    printf("Inside Render Line \n");

    if (f > 1.0f) {
        f = 0.0f;
    }
    else {
        f = f + .01;
    }
    glClearColor(f, 0.65f, f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    if (s > 2.0f) {
        s = -2.0f;
    }
    else {
        s = s + 0.10f;
    }

    GLfloat vVertices[] = { s, s, 0.0f,
                            s, -s, 0.0f,
                            -s, s, 0.0f };
    GLfloat colors[] = { 1.0f, 0.0f, 0.0f, 1.0f,
                         1.0f, 0.0f, 0.0f, 1.0f,
                         1.0f, 0.0f, 0.0f, 1.0f,
                         1.0f, 0.0f, 0.0f, 1.0f };

    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, vVertices);
    glEnableVertexAttribArray(0);
    glDrawArrays(GL_TRIANGLES, 0, 4);
}
When I run the app the background changes color but the Triangle never shows up. Any idea what I am doing wrong?
The problem appears to be with the shaders not compiling...
shader = glCreateShader(type);
if (shader == 0) {
#ifdef __APPLE__
    printf("Shader Failed %d\n", type);
#endif
    return 0;
}
This prints out that the shader failed (type = GL_VERTEX_SHADER).
I commented this check out and still get the same response; everything else seems to compile OK...
UPDATE
I also see the following warning. I don't think it is related, since the compile check passes, but...
/Users/me/Development/test3/test3/game-gl.c:97:46: Passing 'GLbyte [148]' to parameter of type 'const char *' converts between pointers to integer types with different sign
But as I said, this check seems to pass...
glGetShaderiv(shader, GL_COMPILE_STATUS, &compiled);
if (!compiled) {
UPDATE AGAIN
OK, now we are getting somewhere...
programObject = glCreateProgram();
if (programObject == 0) {
    printf("Invalid program returned \n");
}
This prints that the program is invalid?!
And Again
So I changed the code to try to grab the program in the .m file, but it is still 0.
if (_program == 0) {
    NSLog(@"Still can not create program");
}
else {
    resizeViewport(screenWidth, screenHeight);
    InitializeOpenGL(0, 0, _program);
}
Where is your shader code? Are you creating a fragment shader? Creating a program? Linking? If the vertex shader isn't compiling, that's your first problem, but you can get useful error messages from GL that will help you there.
Besides that, you should enable the attribute pointer before assigning the data.
Also, make sure you're not culling clockwise faces (use CCW instead)... if you have face culling enabled, that is.
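For reference, here is a minimal sketch of pulling the compile log out of GL after a failed compile (standard ES 2.0 calls; shader is assumed to come from glCreateShader/glCompileShader, and malloc needs <stdlib.h>):

GLint compiled = 0;
glGetShaderiv(shader, GL_COMPILE_STATUS, &compiled);
if (!compiled) {
    GLint logLength = 0;
    glGetShaderiv(shader, GL_INFO_LOG_LENGTH, &logLength);
    if (logLength > 0) {
        char *log = malloc(logLength);
        glGetShaderInfoLog(shader, logLength, NULL, log);
        printf("Shader compile log:\n%s\n", log);
        free(log);
    }
}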
For the GLKViewController you apparently have to set the context before getting the program. Adding this line...
[EAGLContext setCurrentContext:self.context];
before initializing the GL made it work.
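In other words, the order in the view controller should look something like this (a sketch of the usual GLKit template flow; names may differ in your project):

- (void)viewDidLoad
{
    [super viewDidLoad];
    self.context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
    ((GLKView *)self.view).context = self.context;
    [EAGLContext setCurrentContext:self.context]; // must happen before any glCreate* call
    // ... only now compile shaders and call glCreateProgram()
}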
I have found it really difficult to get started with OpenGL ES. I looked for tutorials all around the web. I found a free book by Philip Rideout and tried out a few chapters, but due to my lack of good C++ skills I left it in the middle. Then I tried Ray Wenderlich's tutorial and got stuck with shaders, unable to make it through a very simple tutorial. Now I am lingering around Jeff Lamarche's old blog. I know there is a quite nice object-oriented framework called Cocos2D out there that does almost everything needed for 2D games and graphics, but I thought of building a good foundation before actually trying out Cocos2D. But it seems like I will never get there. I run into one problem after another with no way to find a solution on my own, so I come to Stack Overflow again and again to clear up my misunderstandings. Your help and support always helps me clear the bugs in my code and, of course, rub out my misunderstandings.
I have an issue with a really simple triangle in OpenGL ES. This example uses OpenGL ES 1.0. The code for rendering the graphics in my view goes this way:
typedef struct {
    GLfloat x;
    GLfloat y;
    GLfloat z;
} Vertex3D;

typedef struct {
    Vertex3D v1;
    Vertex3D v2;
    Vertex3D v3;
} Triangle3D;

static inline Triangle3D Triangle3DMake(Vertex3D vertex1, Vertex3D vertex2, Vertex3D vertex3){
    Triangle3D triangle;
    triangle.v1 = vertex1;
    triangle.v2 = vertex2;
    triangle.v3 = vertex3;
    return triangle;
}

static inline Vertex3D vertex3DMake(GLfloat x, GLfloat y, GLfloat z){
    Vertex3D vertex;
    vertex.x = x;
    vertex.y = y;
    vertex.z = z;
    return vertex;
}

static inline GLfloat Vertex3DCalculateDistanceBetweenVertices(Vertex3D first, Vertex3D second){
    GLfloat deltaX = second.x - first.x;
    GLfloat deltaY = second.y - first.y;
    GLfloat deltaZ = second.z - first.z;
    return sqrtf(powf(deltaX, 2) + powf(deltaY, 2) + powf(deltaZ, 2));
}
@implementation GLView {
    GLuint renderbuffer;
    GLuint framebuffer;
    EAGLContext *_context;
    CAEAGLLayer *layer;
    GLuint depthbuffer;
}

+ (Class)layerClass {
    return [CAEAGLLayer class];
}

- (void)setUpLayer {
    layer = (CAEAGLLayer *)super.layer;
}

- (void)setUpContext {
    EAGLRenderingAPI api = kEAGLRenderingAPIOpenGLES1;
    _context = [[EAGLContext alloc] initWithAPI:api];
    if (!_context) {
        NSLog(@"Could not create context");
        abort();
    }
    if (![EAGLContext setCurrentContext:_context]) {
        NSLog(@"Could not set current context");
        abort();
    }
}
- (void)setUpRenderBuffer {
    glGenRenderbuffersOES(1, &renderbuffer);
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, renderbuffer);
    glGenRenderbuffers(1, &depthbuffer);
    glBindRenderbufferOES(GL_DEPTH_COMPONENT16_OES, depthbuffer);
    [_context renderbufferStorage:GL_RENDERBUFFER_OES fromDrawable:layer];
    [_context renderbufferStorage:GL_DEPTH_COMPONENT16_OES fromDrawable:layer];
}

- (void)setUpFrameBuffer {
    glGenFramebuffersOES(1, &framebuffer);
    glBindFramebufferOES(GL_FRAMEBUFFER_OES, framebuffer);
    glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES, GL_RENDERBUFFER_OES, renderbuffer);
    glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_DEPTH_ATTACHMENT_OES, GL_DEPTH_COMPONENT16_OES, depthbuffer);
}
- (void)render {
    Vertex3D vertex1 = vertex3DMake(0, 1, 0);
    Vertex3D vertex2 = vertex3DMake(1.0, 0.0, 0);
    Vertex3D vertex3 = vertex3DMake(-1.0, 0.0, 0.);
    Triangle3D triangle = Triangle3DMake(vertex1, vertex2, vertex3);

    glViewport(0, 0, self.bounds.size.width, self.bounds.size.height);
    glClearColor(0.7, 0.7, 0.7, 1.0);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    glEnableClientState(GL_VERTEX_ARRAY);
    glColor4f(1.0, 0.0, 0.0, 1.0);
    glVertexPointer(3, GL_FLOAT, 0, &triangle);
    glDrawArrays(GL_TRIANGLES, 0, 9);
    [_context presentRenderbuffer:GL_RENDERBUFFER_OES];
    glDisableClientState(GL_VERTEX_ARRAY);
}
- (id)initWithFrame:(CGRect)frame
{
    self = [super initWithFrame:frame];
    if (self) {
        [self setUpLayer];
        [self setUpContext];
        [self setUpRenderBuffer];
        [self setUpFrameBuffer];
        [self render];
    }
    return self;
}
The figure this code yields looks like the screenshot below (omitted here). Since I am using the 2D coordinates (0,1), (1,0), (-1,0), my assumption is that it should give me a figure like the second screenshot (also omitted). I am sure there is something very small that I am doing incorrectly. If anybody could point it out, it would be a great help to me. Thank you again.
Looks like you're just rendering too many vertices, and OpenGL is reading past the end of your vertex array into uninitialized memory. The line
glDrawArrays(GL_TRIANGLES, 0, 9);
should probably be
glDrawArrays(GL_TRIANGLES, 0, 3);
if you just want to draw one triangle. The third parameter is the total number of vertices to render.
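One way to avoid this class of bug is to derive the count from the data instead of hard-coding it (a sketch using the Triangle3D struct from the question):

GLsizei vertexCount = sizeof(triangle) / sizeof(Vertex3D); // == 3: one Triangle3D holds three vertices
glDrawArrays(GL_TRIANGLES, 0, vertexCount);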
I'm trying to load a texture that will be drawn on a simple square shape between four vertices. Unfortunately, the only way this seems to work is if I put the code for loading the image data into the function that's called every frame. My code was originally based on an ARToolKit function (hence the drawCube name). Here is what my code looks like:
- (void)drawCube
{
    glStateCacheEnableTex2D();
    glStateCacheEnableBlend();
    glStateCacheBlendFunc(GL_ONE, GL_SRC_COLOR);
    glStateCacheEnableClientStateVertexArray();
    glStateCacheEnableClientStateTexCoordArray();
    glStateCacheEnableClientStateNormalArray();

    const GLfloat cube_vertices [4][3] = {
        {-1.0f, 1.0f, -1.0f}, {1.0f, 1.0f, -1.0f}, {-1.0f, -1.0f, -1.0f}, {1.0f, -1.0f, -1.0f}
    };
    const GLfloat texCoords [4][2] = {
        {0.0, 1.0}, {1.0, 1.0}, {0.0, 0.0}, {1.0, 0.0}
    };
    const GLfloat normals [4][3] = {
        {0.0, 0.0, 1.0}, {0.0, 0.0, 1.0}, {0.0, 0.0, 1.0}, {0.0, 0.0, 1.0}
    };

    glGenTextures(1, &texture[0]);
    glBindTexture(GL_TEXTURE_2D, texture[0]);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    NSString *path = [[NSBundle mainBundle] pathForResource:@"iTunesArtwork" ofType:@"png"];
    NSData *texData = [[NSData alloc] initWithContentsOfFile:path];
    UIImage *image = [[UIImage alloc] initWithData:texData];
    if (image == nil)
        NSLog(@"Do real error checking here");

    GLuint width = CGImageGetWidth(image.CGImage);
    GLuint height = CGImageGetHeight(image.CGImage);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    void *imageData = malloc(height * width * 4);
    CGContextRef contextt = CGBitmapContextCreate(imageData, width, height, 8, 4 * width, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGContextTranslateCTM(contextt, 0, height);
    CGContextScaleCTM(contextt, 1.0, -1.0);
    CGColorSpaceRelease(colorSpace);
    CGContextClearRect(contextt, CGRectMake(0, 0, width, height));
    CGContextTranslateCTM(contextt, 0, height - height);
    CGContextDrawImage(contextt, CGRectMake(0, 0, width, height), image.CGImage);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, imageData);
    CGContextRelease(contextt);
    free(imageData);
    [image release];
    [texData release];

    glPushMatrix(); // Save world coordinate system.
    glScalef(20.0f, 20.0f, 20.0f);
    glTranslatef(0.0f, 0.0f, 1.0f); // Place base of cube on marker surface.
    glStateCacheDisableLighting();
    glBindTexture(GL_TEXTURE_2D, texture[0]);
    glStateCacheVertexPtr(3, GL_FLOAT, 0, cube_vertices);
    glStateCacheNormalPtr(GL_FLOAT, 0, normals);
    glStateCacheTexCoordPtr(2, GL_FLOAT, 0, texCoords);
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
    glStateCacheDisableClientStateNormalArray();
    glStateCacheDisableClientStateTexCoordArray();
    glStateCacheDisableClientStateVertexArray();
    glPopMatrix(); // Restore world coordinate system.
}
This code works fine; the only problem is that it gets called every frame an AR marker is visible to the camera, which is of course not OK. However, when I try to move the image-data lines into the init function, I only get a white square without the texture. The lines I moved were the top three (the Tex2D/Blend enables and the BlendFunc call), and then, after the normal coordinates, everything down to [texData release]. I've been basing my code on a tutorial I read here: http://iphonedevelopment.blogspot.com/2009/05/opengl-es-from-ground-up-part-6_25.html
Everything seems like it would be in the right place to me, but clearly that's not the case. Could anyone please shed some light on my problem?
Not seeing your init code, I can't tell for sure. Possibly you're running this texture-loading code before setting up your OpenGL context. Context creation is done in the viewDidLoad method of the view controller in Apple's OpenGL app templates for Xcode. I wasn't able to download the template from this blog series, as the server is offline; maybe your template creates the context in the same method. Try moving the texture-loading code so it runs after the context creation.
By the way, this tutorial is rather old. Since that blog entry was posted GLKit was added to the iOS SDK. GLKit includes very handy texture loading code.
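For example, the whole Core Graphics block above collapses to a few lines with GLKTextureLoader (a sketch; it assumes a current context, and the origin option takes the place of the manual flip):

NSError *error = nil;
NSString *path = [[NSBundle mainBundle] pathForResource:@"iTunesArtwork" ofType:@"png"];
GLKTextureInfo *info = [GLKTextureLoader textureWithContentsOfFile:path
                                                           options:@{GLKTextureLoaderOriginBottomLeft : @YES}
                                                             error:&error];
if (info == nil)
    NSLog(@"Texture load failed: %@", error);
else
    glBindTexture(info.target, info.name); // info.name is the GL texture object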
So I am trying to use a stencil buffer in iOS for masking/clipping purposes. Do you guys have any idea why this code may not work? This is everything I have associated with Stencils. On iOS 4 I get a black screen. On iOS 5 I get exactly what I expect. The transparent areas of the image I drew in the stencil are the only areas being drawn later.
Code is below.
This is where I set up the framebuffer, depth, and stencil. On iOS the depth and stencil are combined into a single renderbuffer.
- (void)setupDepthBuffer
{
    glGenRenderbuffers(1, &depthRenderBuffer);
    glBindRenderbuffer(GL_RENDERBUFFER, depthRenderBuffer);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH24_STENCIL8_OES, self.frame.size.width * [[UIScreen mainScreen] scale], self.frame.size.height * [[UIScreen mainScreen] scale]);
}
- (void)setupFrameBuffer
{
    glGenFramebuffers(1, &frameBuffer);
    glBindFramebuffer(GL_FRAMEBUFFER, frameBuffer);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, colorRenderBuffer);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthRenderBuffer);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_STENCIL_ATTACHMENT, GL_RENDERBUFFER, depthRenderBuffer);

    // Check the FBO.
    if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
        NSLog(@"Failure with framebuffer generation: %d", glCheckFramebufferStatus(GL_FRAMEBUFFER));
    }
}
This is how I am setting up and drawing the stencil. (Shader code below.)
glEnable(GL_STENCIL_TEST);
glDisable(GL_DEPTH_TEST);
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
glDepthMask(GL_FALSE);
glStencilFunc(GL_ALWAYS, 1, -1);
glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
glColorMask(0, 0, 0, 0);
glClear(GL_STENCIL_BUFFER_BIT);
machineForeground.shader = [StencilEffect sharedInstance];
[machineForeground draw];
machineForeground.shader = [BasicEffect sharedInstance];
glDisable(GL_STENCIL_TEST);
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
glDepthMask(GL_TRUE);
Here is where I am using the stencil.
glEnable(GL_STENCIL_TEST);
glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
glStencilFunc(GL_EQUAL, 1, -1);
...Draw Stuff here
glDisable(GL_STENCIL_TEST);
Finally here is my fragment shader.
varying lowp vec2 TexCoordOut;
uniform sampler2D Texture;

void main(void)
{
    lowp vec4 color = texture2D(Texture, TexCoordOut);
    if (color.a < 0.1)
        gl_FragColor = color;
    else
        discard;
}
I was able to solve this by addressing my shader. The code above works as intended, but my vertex data structs were asking for more data than I was providing to the shader. I'm not entirely sure what happened under the hood to allow it to work on iOS 5, but I was able to fix it.
That said,
glColorMask(0, 0, 0, 0);
didn't actually accomplish what I was going for. What I wanted was to control what the clear writes, and even then I only wanted to clear the stencil, so what I was actually looking for was
glStencilMask(1);
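In context, the stencil setup pass then starts something like this (a sketch):

glStencilMask(1);               // allow writes to the stencil bit being used
glClear(GL_STENCIL_BUFFER_BIT); // clears only the stencil plane, leaving color and depth alone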