I want to draw a simple square the size of the full screen with the glDrawArrays method in cocos2d. When retina is disabled everything draws as expected, but when it is enabled everything is half as big as it should be (it seems the coordinate system used by glDrawArrays is in pixels, not points).
The other draw functions work as expected, but since I am drawing complicated shapes I have to use glDrawArrays, as it is much faster.
Any ideas how to solve this?
-(void) draw
{
CGPoint box[4];
CGPoint boxTex[4];
CGSize winSize = [[CCDirector sharedDirector] winSize];
//float boxSize = winSize.width;
box[0] = ccp(0,winSize.height); // top left
box[1] = ccp(0,0); // bottom left
box[2] = ccp(winSize.width,winSize.height); // top right
box[3] = ccp(winSize.width,0); // bottom right
boxTex[0] = ccp(0,1);
boxTex[1] = ccp(0,0);
boxTex[2] = ccp(1,1);
boxTex[3] = ccp(1,0);
// textured background
glBindTexture(GL_TEXTURE_2D, self.sprite.texture.name);
glVertexPointer(2, GL_FLOAT, 0, box);
glTexCoordPointer(2, GL_FLOAT, 0, boxTex);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
}
Yes, the drawing is done in pixels, so in order to handle proper rendering on the retina display as well, you need to multiply your vertices by CC_CONTENT_SCALE_FACTOR():
for (int i = 0; i < 4; i++) // all four vertices, not three
box[i] = ccpMult(box[i], CC_CONTENT_SCALE_FACTOR());
CC_CONTENT_SCALE_FACTOR() returns 2 on retina devices instead of 1, so multiplying by it takes care of the scaling.
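The fix can be sketched in plain C; here `content_scale_factor` is a hypothetical stand-in for cocos2d's CC_CONTENT_SCALE_FACTOR(), hard-coded to 2 as on a retina device:

```c
#include <assert.h>

/* Stand-in for cocos2d's CC_CONTENT_SCALE_FACTOR():
   2.0f on retina devices, 1.0f otherwise. */
static float content_scale_factor(void) { return 2.0f; }

typedef struct { float x, y; } Point2;

/* Convert quad vertices from points to pixels before glDrawArrays. */
static void scale_quad_to_pixels(Point2 *box, int count) {
    float s = content_scale_factor();
    for (int i = 0; i < count; i++) { /* all vertices of the quad */
        box[i].x *= s;
        box[i].y *= s;
    }
}
```

With a 480×320-point window this maps the quad onto the full 960×640-pixel retina backbuffer.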
Related
I'm working on an augmented reality project using a Retina iPad, but the two layers - the camera feed and the OpenGL overlay - are not making use of the high-resolution screen. The camera feed is drawn to a texture, which appears to be scaled and smoothly sampled, whereas the overlay is drawn with the blocky 4-pixel scale-up:
I have looked through a bunch of questions and added the following lines to my EAGLView class.
To initWithCoder, before calling setupFrameBuffer and setupRenderBuffer:
self.contentScaleFactor = [[UIScreen mainScreen] scale];
and in setupFrameBuffer
float screenScale = [[UIScreen mainScreen] scale];
float width = self.frame.size.width;
float height = self.frame.size.height;
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16, width * screenScale, height * screenScale);
...
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width*screenScale, height*screenScale, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0);
the last two lines simply being modified to include the scale factor.
Running this code gives me the following results:
As you can see, the image now only fills the lower left quarter of the screen, but I can confirm the image is only scaled, not cropped. Can anyone help me work out why this is?
It's not actually being scaled; you are drawing the frames at a size you defined before allowing the render buffer to be 2X the size in both directions.
Most likely you defined your sizing in terms of pixels rather than the more general OpenGL coordinate space, which runs from -1 to 1 in both the x and y directions (at least when you are working in 2D, as you are).
Also, calling:
float width = self.frame.size.width;
float height = self.frame.size.height;
will return a size that is NOT the retina size. If you NSLog those values, you will see that even on a retina-based device they correspond to the non-retina screen; more generally they are in movement units (points), not pixels.
The way I have chosen to obtain the view's actual size in pixels is:
GLint myWidth = 0;
GLint myHeight = 0;
glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_WIDTH_OES, &myWidth);
glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_HEIGHT_OES, &myHeight);
In iOS, I have been using the below code as my setup:
-(void)setupView:(GLView*)theView{
const GLfloat zNear = 0.01, zFar = 1000.0, fieldOfView = 45.0; // zNear must be > 0 for a perspective frustum
GLfloat size;
glEnable(GL_DEPTH_TEST);
glMatrixMode(GL_PROJECTION);
size = zNear * tanf(DEGREES_TO_RADIANS(fieldOfView) / 2.0);
//CGRect rect = theView.bounds;
GLint width, height;
glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_WIDTH_OES, &width);
glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_HEIGHT_OES, &height);
// NSLog(@"setupView rect width = %d, height = %d", width, height);
glFrustumf(-size, size, -size / ((float)width / (float)height), size /
((float)width / (float)height), zNear, zFar);
glViewport(0, 0, width, height);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
}
The above routine is used within code I am testing on both retina and non-retina setups, and is working just fine. This setupView routine is an overridable within a viewController.
Here's the solution I found.
I found out that in my draw function I was setting the glViewport sizes based off the glview.bounds.size, which is in points rather than pixels. Switching that to the be set based off the framebufferWidth and framebufferHeight solved the problem.
glViewport(0, 0, m_glview.framebufferWidth, m_glview.framebufferHeight);
I am having trouble accomplishing something that I thought was going to be much easier. I am trying to run a method whenever a non transparent part of a picture inside a UIImage touches another non-transparent part of an image contained within a UIImage. I have included an example to help further explain my question.
As you can see in the image above, I have two triangles that are both inside a UIImage. The triangles are both PNG pictures. Only the triangle is visible because the background has been made transparent. Both of the UIImages are inside a UIImageView. I want to be able to run a method when the visible part of the triangle touches the visible part of the other triangle. Can someone please help me?
The brute force solution to this problem is to create a 2D array of bools for each image, where each array entry is true for an opaque pixel, and false for the transparent pixels. If CGRectIntersectsRect returns true (indicating a possible collision), then the code scans the two arrays (with appropriate offsets depending on relative positions) to check for an actual collision. That gets complicated, and is computationally intensive.
One alternative to the brute force method is to use OpenGLES to do all of the work. This is still a brute force solution, but it offloads the work to the GPU, which is much better at such things. I'm not an expert on OpenGLES, so I'll leave the details to someone else.
A second alternative is to place restrictions on the problem that allow it to be solved more easily. For example, given two triangles A and B, collisions can only occur if one of the vertices of A is contained within the area of B, or if one of the vertices of B is in A. This problem can be solved using the UIBezierPath class in objective-C. The UIBezierPath can be used to create a path in the shape of a triangle. Then the containsPoint: method of UIBezierPath can be used to check if the vertex of the opposing triangle is contained in the area of the target triangle.
In summary, the solution is to add a UIBezierPath property to each object. Initialize the UIBezierPath to approximate the object's shape. If CGRectIntersectsRect indicates a possible collision, then check if the vertices of one object are contained in the area of the other object using the containsPoint: method.
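The geometric test that containsPoint: performs for a triangular path can be sketched in plain C as a same-side test; this illustrates the idea only, not UIBezierPath's actual implementation:

```c
typedef struct { float x, y; } Pt;

/* z-component of the cross product (b-a) x (p-a): its sign tells
   which side of the edge a->b the point p lies on. */
static float edge_side(Pt a, Pt b, Pt p) {
    return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
}

/* Point-in-triangle: p is inside iff it lies on the same side of all
   three edges (works for either winding order; on-edge counts as inside). */
static int triangle_contains(Pt a, Pt b, Pt c, Pt p) {
    float d1 = edge_side(a, b, p);
    float d2 = edge_side(b, c, p);
    float d3 = edge_side(c, a, p);
    int has_neg = (d1 < 0) || (d2 < 0) || (d3 < 0);
    int has_pos = (d1 > 0) || (d2 > 0) || (d3 > 0);
    return !(has_neg && has_pos);
}
```

Running this for each vertex of the opposing triangle mirrors the containsPoint: calls described above.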
What I did is:
counted the amount of non alpha pixels in image A
did the same for image B
merged A + B image into one image: C
compared the resulting pixel count
If the pixel count is lower after merging, we have a hit.
if (C.count < A.count + B.count) -> we have a hit
+ (int)countPoints:(UIImage *)img
{
CGImageRef cgImage = img.CGImage;
NSUInteger width = img.size.width;
NSUInteger height = img.size.height;
size_t bitsPerComponent = 8;
size_t bytesPerPixel = 1; // alpha-only: one byte per pixel
size_t bytesPerRow = width * bytesPerPixel;
size_t dataSize = bytesPerRow * height;
unsigned char *bitmapData = malloc(dataSize);
memset(bitmapData, 0, dataSize);
// an alpha-only bitmap context takes a NULL color space
CGContextRef bitmap = CGBitmapContextCreate(bitmapData, width, height, bitsPerComponent, bytesPerRow, NULL, (CGBitmapInfo)kCGImageAlphaOnly);
CGContextTranslateCTM(bitmap, 0, img.size.height);
CGContextScaleCTM(bitmap, 1.0, -1.0);
CGContextDrawImage(bitmap, CGRectMake(0, 0, width, height), cgImage);
int p = 0;
int i = 0;
while (i < width * height) {
if (bitmapData[i] > 0) {
p++;
}
i++;
}
free(bitmapData);
bitmapData = NULL;
CGContextRelease(bitmap);
bitmap = NULL;
//NSLog(@"points: %d", p);
return p;
}
+ (UIImage *)marge:(UIImage *)imageA withImage:(UIImage *)imageB {
CGSize itemSize = CGSizeMake(imageA.size.width, imageA.size.height); // assumes both images are the same size
UIGraphicsBeginImageContext(itemSize);
CGRect rect = CGRectMake(0,
0,
itemSize.width,
itemSize.height);
[imageA drawInRect:rect];
[imageB drawInRect:rect];
UIImage *overlappedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return overlappedImage;
}
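The counting trick itself can be sketched in plain C on toy one-byte-per-pixel alpha masks (illustrative only; the real code works on CGBitmapContext output as above):

```c
/* Count "opaque" entries in a toy alpha mask (non-zero = opaque). */
static int count_opaque(const unsigned char *mask, int n) {
    int count = 0;
    for (int i = 0; i < n; i++)
        if (mask[i] > 0) count++;
    return count;
}

/* Merge two same-sized masks the way stacking drawInRect: calls does:
   a pixel is opaque in the result if it is opaque in either input. */
static void merge_masks(const unsigned char *a, const unsigned char *b,
                        unsigned char *out, int n) {
    for (int i = 0; i < n; i++)
        out[i] = (unsigned char)(a[i] | b[i]);
}

/* Collision iff overlapping opaque pixels collapsed into one:
   count(merged) < count(A) + count(B). */
static int masks_collide(const unsigned char *a, const unsigned char *b, int n) {
    unsigned char merged[64]; /* toy-sized buffer for the sketch */
    merge_masks(a, b, merged, n);
    return count_opaque(merged, n) < count_opaque(a, n) + count_opaque(b, n);
}
```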
I am using OpenGL ES 2.0 to draw a rectangle. Initially the viewport is such that I am looking from above, and I can see the rectangle as expected.
Then I start rotating this rectangle about the x-axis. When the angle of rotation reaches -90 deg (or +90 deg when rotating in the other direction), the rectangle disappears.
What I expect to see when rotating past 90 deg/-90 deg is the bottom surface of the rectangle, but instead the view disappears. It reappears when the total rotation angle reaches -270 deg (or +270 deg), when the upper surface is just about to be shown again.
How do I ensure that the rectangle is visible all along (both the upper and lower surface have to be visible while rotating)?
Here's the relevant piece of code:
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
UITouch * touch = [touches anyObject];
if ([touches count] == 1) {
CGPoint currLoc = [touch locationInView:self];
CGPoint lastLoc = [touch previousLocationInView:self];
CGPoint diff = CGPointMake(lastLoc.x - currLoc.x, lastLoc.y - currLoc.y);
rotX = -1 * GLKMathDegreesToRadians(diff.y / 2.0);
rotY = -1 * GLKMathDegreesToRadians(diff.x / 2.0);
totalRotationX += ((rotX * 180.0f)/3.141592f);
NSLog(@"rotX: %f, rotY: %f, totalRotationX: %f", rotX, rotY, totalRotationX);
//rotate around x axis
GLKVector3 xAxis = GLKMatrix4MultiplyVector3(GLKMatrix4Invert(_rotMatrix, &isInvertible), GLKVector3Make(1, 0, 0));
_rotMatrix = GLKMatrix4Rotate(_rotMatrix, rotX, xAxis.v[0], 0, 0);
}
}
-(void)update{
GLKMatrix4 modelViewMatrix = GLKMatrix4MakeTranslation(0, 0, -6.0f);
modelViewMatrix = GLKMatrix4Multiply(modelViewMatrix, _rotMatrix);
self.effect.transform.modelviewMatrix = modelViewMatrix;
float aspect = fabsf(self.bounds.size.width / self.bounds.size.height);
GLKMatrix4 projectionMatrix = GLKMatrix4MakePerspective(GLKMathDegreesToRadians(65.0f), aspect, 0, 10.0f);
self.effect.transform.projectionMatrix = projectionMatrix;
}
- (void)setupGL {
NSLog(#"setupGL");
isInvertible = YES;
totalRotationX = 0;
[EAGLContext setCurrentContext:self.context];
glEnable(GL_CULL_FACE);
self.effect = [[GLKBaseEffect alloc] init];
// New lines
glGenVertexArraysOES(1, &_vertexArray);
glBindVertexArrayOES(_vertexArray);
// Old stuff
glGenBuffers(1, &_vertexBuffer);
glBindBuffer(GL_ARRAY_BUFFER, _vertexBuffer);
glBufferData(GL_ARRAY_BUFFER, sizeof(Vertices), Vertices, GL_STATIC_DRAW);
glGenBuffers(1, &_indexBuffer);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, _indexBuffer);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(Indices), Indices, GL_STATIC_DRAW);
glViewport(0, 0, self.frame.size.width, self.frame.size.height);
// New lines (were previously in draw)
glEnableVertexAttribArray(GLKVertexAttribPosition);
glVertexAttribPointer(GLKVertexAttribPosition, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (const GLvoid *) offsetof(Vertex, Position));
glEnableVertexAttribArray(GLKVertexAttribColor);
glVertexAttribPointer(GLKVertexAttribColor, 4, GL_FLOAT, GL_FALSE, sizeof(Vertex), (const GLvoid *) offsetof(Vertex, Color));
_rotMatrix = GLKMatrix4Identity;
// New line
glBindVertexArrayOES(0);
initialized = 1;
}
I am a newbie to OpenGL and I am using GLKit along with OpenGL ES 2.0.
Thanks.
There are many causes for things not rendering in OpenGL. In this case it was back-face culling (see the comments on the question). Back-face culling is useful because it lets the GPU ignore triangles facing away from the camera and save some rasterization/fragment processing time. Since many meshes/objects are watertight and you'd never want to see the inside anyway, it's uncommon to actually want two-sided shading. This functionality starts with defining the front/back of a triangle, which is done by the order the vertices are given in (sometimes called the winding direction). glFrontFace chooses which direction, clockwise or counter-clockwise, counts as front-facing; glCullFace chooses whether to cull front or back faces (I guess some could argue there's not much point in having both); and finally you enable/disable it:
glEnable(GL_CULL_FACE); //discards triangles facing away from the camera
glDisable(GL_CULL_FACE); //default, two-sided rendering
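The winding direction itself is easy to compute: the shoelace formula gives a signed area whose sign tells you whether a polygon's vertices run counter-clockwise (positive, OpenGL's default front face under GL_CCW) or clockwise. A minimal sketch in plain C:

```c
/* Twice the signed area of a 2D polygon via the shoelace formula.
   xy holds interleaved coordinates: x0,y0, x1,y1, ...
   Positive result -> counter-clockwise winding; negative -> clockwise. */
static float signed_area2(const float *xy, int count) {
    float sum = 0.0f;
    for (int i = 0; i < count; i++) {
        int j = (i + 1) % count; /* next vertex, wrapping around */
        sum += xy[2*i] * xy[2*j+1] - xy[2*j] * xy[2*i+1];
    }
    return sum;
}
```

Flipping two vertices in your vertex/index data flips the sign, which is exactly what makes a triangle switch from front-facing to culled.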
Some other things I check when geometry is not visible include...
Is the geometry colour the same as the background? Choosing a non-black/white background can be handy here.
Is the geometry actually drawn within the viewing volume? Throw in a simple object (immediate mode helps) and maybe use identity projection/modelview matrices to rule them out.
Is the viewing volume correct? Near/far planes too far apart (causing z-fighting) or a 0.0f near plane are common issues. Also, when switching to a perspective projection matrix, anything drawn on the Z=0 plane will no longer be visible.
Is blending enabled while everything is fully transparent?
Is the depth buffer not being cleared, causing subsequent frames to be discarded?
In fixed-pipeline rendering, are glTranslate/glRotate transforms being carried over from the previous frame, causing objects to shoot off into the distance? Always keep a glLoadIdentity at the top of the display function.
Is the rendering loop structured correctly: clear/draw/swap buffers?
Of course there's heaps more - geometry shaders not outputting anything, vertex shaders transforming vertices to the same position (so they're all degenerate), fragment shaders calling discard when they shouldn't, VBO binding/indexing issues etc. Checking GL errors is a must but never catches all mistakes.
So I'm having a bit of a problem with an OpenGL 1.1 skewed drawing.
Background:
Basically the app is a painting app (some code borrowed from glPaint) in which the user can draw with various colors and point widths. When they exit the drawing screen I use glReadPixels to persist the pixel color data in RGBA format. When they come back to continue drawing I take the color data from disk, put it into a colorPointer and I generate an array of vertices like so:
typedef struct _vertexStruct {
    GLfloat position[2];
} vertexStruct;

vertexStruct vertices[VERTEX_SIZE];
And the loop
GLfloat row = 0.0f;
GLfloat col = 768.0f;
for (int i = 0; i < (768 * 1024); i++) {
if (row == 1024.0f) {
col-- ;
row = 0.0f;
}
else {
row++;
}
vertices[i].position[0] = row;
vertices[i].position[1] = [self bounds].size.height - col;
}
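As an aside, this kind of hand-rolled row/column bookkeeping is easy to get subtly wrong; the standard row-major mapping from a linear index to (x, y) avoids the reset logic entirely. A sketch (assuming a 1024-wide, 768-high image and a bottom-left OpenGL origin):

```c
/* Map a linear pixel index to an (x, y) vertex for a width-by-height
   image stored row by row, top row first. The y flip converts the
   top-left UIKit origin to OpenGL's bottom-left origin. */
static void index_to_vertex(int i, int width, int height,
                            float *x, float *y) {
    int col = i % width; /* 0 .. width-1  */
    int row = i / width; /* 0 .. height-1 */
    *x = (float)col;
    *y = (float)(height - 1 - row); /* flip vertically for OpenGL */
}
```

If the vertex positions and the glReadPixels row order disagree by even one row, the whole image lands offset, which looks exactly like a skew.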
And here are the actual drawing calls:
glVertexPointer(2, GL_FLOAT, sizeof(vertexStruct),&vertices[0].position);
glColorPointer(4, GL_UNSIGNED_BYTE, sizeof(pixelData.data), pixelData.data);
glEnableClientState(GL_COLOR_ARRAY);
glDrawArrays(GL_POINTS, 0, VERTEX_SIZE);
glDisableClientState(GL_COLOR_ARRAY);
// Display the buffer
glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);
[context presentRenderbuffer:GL_RENDERBUFFER_OES];
So the drawing succeeds, but it is skewed off to the left of where it should be. I thought I was compensating for the OpenGL (standard bottom=0, left=0 coordinate system) --> UIKit coordinate-system difference with the
vertices[i].position[1] = [self bounds].size.height - col;
call in the loop, but this may just be a naive assumption. Anyone have any clues as to what I'm doing wrong, or what I need to do in addition to make the drawing appear in the right place? Thanks in advance!
UPDATE: Solved. I just drew the saved image to a texture (an NPOT texture)! If anyone else is worried about drawing NPOT textures: it should work (it did for me at least), with the only caveat being that it's not supported on older devices...
I am trying to create a background that flows with the game, but the image isn't continuous: there is a gap between each repetition of the image. I want the image to loop continuously.
Here is the method to create the sprite
CCSprite *sprite = [CCSprite spriteWithFile:@"Image.png" rect:CGRectMake(0, 0, 960, 640)];
ccTexParams tp = {GL_NEAREST, GL_NEAREST, GL_REPEAT, GL_REPEAT};
[sprite.texture setTexParameters:&tp];
sprite.anchorPoint = ccp(1.0f/8.0f, 0);
sprite.position = ccp(screenW/8, 0);
Method to update the position of the sprite.
- (void) setOffsetX:(float)offsetX {
if (_offsetX != offsetX) {
_offsetX = offsetX;
CGSize size = _sprite.textureRect.size;
_sprite.textureRect = CGRectMake(_offsetX, 0, size.width, size.height);
}
}
Any help please
Your image width needs to be a power of two, i.e. 64, 128, 256, 512, etc., if you want it to repeat.
The gap you are seeing is where OpenGL has padded empty space onto your texture to make it a power of two.
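For reference, that padding rounds each dimension up to the next power of two, so a 960×640 image is stored as a 1024×1024 texture and the extra space is the gap you see when the texture repeats. A quick sketch of the rounding:

```c
/* Round a texture dimension up to the next power of two, as
   OpenGL-ES-1.x-era texture loaders do when padding NPOT images. */
static int next_pow2(int n) {
    int p = 1;
    while (p < n)
        p <<= 1; /* double until we reach or pass n */
    return p;
}
```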
After trying it a few times, the best way is to ensure that the sprite dimensions are a power of 2; that way you can scale the layer and all remains fine. If you don't plan on scaling the layer, you can use sprites of any size together with:
[[CCDirector sharedDirector] setProjection:CCDirectorProjection2D];
http://ak.net84.net/iphone/gap-between-sprites-when-moving-in-cocos2d/