I'm trying to draw up to 200,000 squares on the screen (or at least a lot of squares). I believe I'm simply issuing way too many draw calls, and that is crippling the performance of the app. The squares only change when I press a button, so I don't necessarily have to redraw them every frame.
Here's the code I have now:
- (void)glkViewControllerUpdate:(GLKViewController *)controller
{
    //static float transY = 0.0f;
    //float y = sinf(transY)/2.0f;
    //transY += 0.175f;
    GLKMatrix4 modelview = GLKMatrix4MakeTranslation(0, 0, -5.f);
    effect.transform.modelviewMatrix = modelview;
    //GLfloat ratio = self.view.bounds.size.width/self.view.bounds.size.height;
    GLKMatrix4 projection = GLKMatrix4MakeOrtho(0, 768, 1024, 0, 0.1f, 20.0f);
    effect.transform.projectionMatrix = projection;
    _isOpenGLViewReady = YES;
}

- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect
{
    if(_model.updateView && _isOpenGLViewReady)
    {
        glClear(GL_COLOR_BUFFER_BIT);
        [effect prepareToDraw];
        int pixelSize = _model.pixelSize;
        if(!_model.isReady)
            return;
        //NSLog(@"UPDATING: %d, %d", _model.rows, _model.columns);
        for(int i = 0; i < _model.rows; i++)
        {
            for(int ii = 0; ii < _model.columns; ii++)
            {
                ColorModel *color = [_model getColorAtRow:i andColumn:ii];
                CGRect rect = CGRectMake(ii * pixelSize, i * pixelSize, pixelSize, pixelSize);
                //[self drawRectWithRect:rect withColor:c];
                GLubyte squareColors[] = {
                    color.red, color.green, color.blue, 255,
                    color.red, color.green, color.blue, 255,
                    color.red, color.green, color.blue, 255,
                    color.red, color.green, color.blue, 255
                };
                //NSLog(@"Drawing color with red: %d", color.red);
                int xVal = rect.origin.x;
                int yVal = rect.origin.y;
                int width = rect.size.width;
                int height = rect.size.height;
                GLfloat squareVertices[] = {
                    xVal, yVal, 1,
                    xVal + width, yVal, 1,
                    xVal, yVal + height, 1,
                    xVal + width, yVal + height, 1
                };
                glEnableVertexAttribArray(GLKVertexAttribPosition);
                glEnableVertexAttribArray(GLKVertexAttribColor);
                glVertexAttribPointer(GLKVertexAttribPosition, 3, GL_FLOAT, GL_FALSE, 0, squareVertices);
                glVertexAttribPointer(GLKVertexAttribColor, 4, GL_UNSIGNED_BYTE, GL_TRUE, 0, squareColors);
                glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
                glDisableVertexAttribArray(GLKVertexAttribPosition);
                glDisableVertexAttribArray(GLKVertexAttribColor);
            }
        }
        _model.updateView = YES;
    }
}
First, do you really need to draw 200,000 squares? Your viewport only has 768 × 1024 = 786,432 pixels total. You might be able to reduce the number of drawn objects without significantly impacting the overall quality of your scene.
That said, if these are smaller squares, you could draw them as points with a pixel size large enough to cover your square's area. That would require setting gl_PointSize in your vertex shader to the appropriate pixel width. You could then generate your coordinates and send them all to be drawn at once as GL_POINTS. That should remove the overhead of the extra geometry of the triangles and the individual draw calls you are using here.
Even if you don't use points, it's still a good idea to calculate all of the triangle geometry you need first, then send all that in a single draw call. This will significantly reduce your OpenGL ES API call overhead.
One other thing you could look into would be to use vertex buffer objects (VBOs) to store this geometry. If the geometry is static, you can avoid re-sending it on every frame, or update only the part of it that has changed. Even if you swap out the data each frame, I believe using a VBO for dynamic geometry has performance advantages on modern iOS devices.
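Here is a minimal sketch of what that batching might look like, assuming the same GLKVertexAttrib setup as your code; the struct layout, buffer handling, and helper names are mine, not anything from your project. It builds one interleaved array for all squares (two triangles each, so GL_TRIANGLES works in a single call), uploads it into a VBO, and issues one glDrawArrays:
#import <GLKit/GLKit.h>   // GLKVertexAttrib enums and the GL ES headers
#include <stddef.h>       // offsetof

typedef struct {
    GLfloat x, y, z;
    GLubyte r, g, b, a;
} QuadVertex;

// Filled once per button press: rows * columns * 6 vertices (two triangles per square).
static QuadVertex *batchedVertices;
static GLsizei batchedVertexCount;
static GLuint quadVBO;   // created once elsewhere with glGenBuffers(1, &quadVBO)

static void appendQuad(QuadVertex *v, GLfloat x, GLfloat y, GLfloat size,
                       GLubyte red, GLubyte green, GLubyte blue)
{
    const GLfloat xs[6] = { x, x + size, x,        x + size, x,        x + size };
    const GLfloat ys[6] = { y, y,        y + size, y,        y + size, y + size };
    for (int i = 0; i < 6; i++) {
        v[i].x = xs[i]; v[i].y = ys[i]; v[i].z = 1.0f;
        v[i].r = red; v[i].g = green; v[i].b = blue; v[i].a = 255;
    }
}

// Called from drawInRect after [effect prepareToDraw]: one upload, one draw call.
static void drawBatchedQuads(void)
{
    glBindBuffer(GL_ARRAY_BUFFER, quadVBO);
    glBufferData(GL_ARRAY_BUFFER, batchedVertexCount * sizeof(QuadVertex),
                 batchedVertices, GL_DYNAMIC_DRAW);   // dynamic: refilled on each button press

    glEnableVertexAttribArray(GLKVertexAttribPosition);
    glEnableVertexAttribArray(GLKVertexAttribColor);
    glVertexAttribPointer(GLKVertexAttribPosition, 3, GL_FLOAT, GL_FALSE,
                          sizeof(QuadVertex), (const GLvoid *)offsetof(QuadVertex, x));
    glVertexAttribPointer(GLKVertexAttribColor, 4, GL_UNSIGNED_BYTE, GL_TRUE,
                          sizeof(QuadVertex), (const GLvoid *)offsetof(QuadVertex, r));

    glDrawArrays(GL_TRIANGLES, 0, batchedVertexCount);   // one call for every square

    glDisableVertexAttribArray(GLKVertexAttribPosition);
    glDisableVertexAttribArray(GLKVertexAttribColor);
    glBindBuffer(GL_ARRAY_BUFFER, 0);
}
Filling batchedVertices is then just your existing double loop over the model, writing into appendQuad(&batchedVertices[(i * _model.columns + ii) * 6], ...) instead of issuing GL calls per square.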
Can you not try to optimize it somehow? I'm not terribly familiar with graphics programming, but I'd imagine that if you are drawing 200,000 squares, the chances that all of them are actually visible seem low. Could you add some sort of isVisible flag to your mySquare class that determines whether the square you want to draw is actually visible? Then the obvious next step is to modify your draw function so that if the square isn't visible, you don't draw it.
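For what it's worth, a minimal sketch of that idea; the Square struct, the helper name, and the CGRect test are only illustrative, not from the question's code:
#include <CoreGraphics/CoreGraphics.h>
#include <stdbool.h>

typedef struct {
    float x, y, size;
    unsigned char r, g, b;
    bool isVisible;
} Square;

// Mark each square as visible or not before drawing; the draw loop then simply
// skips entries with isVisible == false.
static void cullSquares(Square *squares, int count, CGRect visibleRect)
{
    for (int i = 0; i < count; i++) {
        CGRect r = CGRectMake(squares[i].x, squares[i].y, squares[i].size, squares[i].size);
        squares[i].isVisible = CGRectIntersectsRect(r, visibleRect);
    }
}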
Or are you asking for someone to improve the code you currently have? If your performance is as bad as you say, I don't think small changes to the above code will solve your problem. You'll have to rethink how you're doing your drawing.
It looks like what your code is actually trying to do is take a _model.rows × _model.columns 2D image and draw it upscaled by _model.pixelSize. If -[ColorModel getColorAtRow:andColumn:] is retrieving 3 bytes at a time from an array of color values, then you may want to consider uploading that array of color values into an OpenGL texture as GL_RGB/GL_UNSIGNED_BYTE data and letting the GPU scale up all of your pixels at once.
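A rough sketch of that texture approach, assuming you can get the ColorModel's data as one tightly packed RGB byte array; the helper name and the GL_NEAREST choice are mine (GL_NEAREST keeps each model cell a crisp square when the GPU scales it up):
#include <OpenGLES/ES2/gl.h>

// rgb points at rows * columns * 3 bytes, one RGB triple per cell of the model.
static GLuint uploadColorGridTexture(const GLubyte *rgb, GLsizei columns, GLsizei rows)
{
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);   // RGB rows are not 4-byte aligned
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, columns, rows, 0,
                 GL_RGB, GL_UNSIGNED_BYTE, rgb);
    return tex;
}
You would then set effect.texture2d0.name to the returned texture, enable effect.texture2d0, and draw a single textured quad covering the view; on later updates, glTexSubImage2D can replace just the changed data.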
Alternatively, if scaling up the contents of your ColorModel is the only reason you're using OpenGL ES and GLKit, you might be better off wrapping your color values in a CGImage and letting UIKit and Core Animation do the drawing for you. How often do the color values in the ColorModel get updated?
Updated with more explanation around my confusion
(This is how a non-graphics developer imagines the rendering process!)
I specify a 2x2 square to be drawn by way of two triangles. I'm not going to talk about the triangles anymore; a square is easier to reason about. Let's say the square gets drawn in one piece.
I have not specified any units for my drawing. The only places in my code where I do something like that are the canvas size (set to 1x1 in my case) and the viewport (I always set this to the dimensions of my output texture).
Then I call draw().
What happens is this: regardless of the size of my texture (1x1 or 10000x10000), all my texels are filled with the data (color) that I returned from my frag shader. This works perfectly every time.
So now I'm trying to explain this to myself:
The GPU is only concerned with coloring the pixels.
A pixel is the smallest unit that the GPU deals with (colors).
Depending on how many pixels my 2x2 square is mapped to, I should be running into one of the following 3 cases:
The number of pixels (to be colored) and my output texture dims match one to one: In this ideal case, for each pixel, there would be one value assigned to my output texture. Very clear to me.
The number of pixels is fewer than my output texture dims. In this case, I should expect some of the output texels to have the exact same value (the color of the pixel they fall under). For instance, if the GPU ends up drawing 16x16 pixels and my texture is 64x64, then I'll have 4x4 blocks of texels that get the same value. I have not observed such a case regardless of the size of my texture, which means there is never a case where we end up with fewer pixels (really hard to imagine, but let's keep going).
The number of pixels ends up being more than the number of texels. In this case, the GPU should decide which value to assign to my texel. Would it average out the pixel colors? If the GPU is coloring 64x64 pixels and my output texture is 16x16, then I should expect each texel to get an average color of the 4x4 pixels it contains. In any case, my texture should be filled with values I didn't specifically intend (averaged out, say), yet this has not been the case either.
I didn't even talk about how many times my frag shader gets called, because it doesn't matter; the results would be deterministic anyway.
So, considering that I have never run into the 2nd or 3rd case, where the values in my texels are not what I expected, the only conclusion I can come up with is that the whole assumption of the GPU trying to render pixels is actually wrong. When I assign an output texture to it (which is supposed to stretch over my 2x2 square all the time), the GPU will happily oblige and call my frag shader once for each texel. Somewhere along the line the pixels get colored too.
But the above far-fetched explanation also fails to answer why I end up with no values, or incorrect values, in my texels if I stretch my geometry to 1x1 or 4x4 instead of 2x2.
Hopefully the above fantastic narration of the GPU coloring process has given you clues as to where I'm getting this wrong.
Original Post:
We're using WebGL for general computation. As such we create a rectangle and draw 2 triangles in it. Ultimately what we want is the data inside the texture mapped to this geometry.
What I don't understand is if I change the rectangle from (-1,-1):(1,1) to say (-0.5,-0.5):(0.5,0.5) suddenly data is dropped from the texture bound to the framebuffer.
I'd appreciate it if someone could help me understand the correlations. The only places where the real dimensions of the output texture come into play are the calls to viewport() and readPixels().
Below are relevant pieces of code for you to see what I'm doing:
... // canvas is created with size: 1x1
... // context attributes passed to canvas.getContext()
contextAttributes = {
alpha: false,
depth: false,
antialias: false,
stencil: false,
preserveDrawingBuffer: false,
premultipliedAlpha: false,
failIfMajorPerformanceCaveat: true
};
... // default geometry
// Sets of x,y,z (for rectangle) and s,t coordinates (for texture)
return new Float32Array([
-1.0, 1.0, 0.0, 0.0, 1.0, // upper left
-1.0, -1.0, 0.0, 0.0, 0.0, // lower left
1.0, 1.0, 0.0, 1.0, 1.0, // upper right
1.0, -1.0, 0.0, 1.0, 0.0 // lower right
]);
...
const geometry = this.createDefaultGeometry();
gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
gl.bufferData(gl.ARRAY_BUFFER, geometry, gl.STATIC_DRAW);
... // binding to the vertex shader attribs
gl.vertexAttribPointer(positionHandle, 3, gl.FLOAT, false, 20, 0);
gl.vertexAttribPointer(textureCoordHandle, 2, gl.FLOAT, false, 20, 12);
gl.enableVertexAttribArray(positionHandle);
gl.enableVertexAttribArray(textureCoordHandle);
... // setting up framebuffer; I set the viewport to output texture dimensions (I think this is absolutely needed but not sure)
gl.bindTexture(gl.TEXTURE_2D, texture);
gl.bindFramebuffer(gl.FRAMEBUFFER, this.framebuffer);
gl.framebufferTexture2D(
gl.FRAMEBUFFER, // The target is always a FRAMEBUFFER.
gl.COLOR_ATTACHMENT0, // We are providing the color buffer.
gl.TEXTURE_2D, // This is a 2D image texture.
texture, // The texture.
0); // 0, we aren't using MIPMAPs
gl.viewport(0, 0, width, height);
... // reading from output texture
gl.bindTexture(gl.TEXTURE_2D, texture);
gl.framebufferTexture2D(
gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, texture,
0);
gl.readPixels(0, 0, width, height, gl.RED, gl.FLOAT, buffer); // format, then type
new answer
I'm just saying the same thing yet again (3rd time?)
Copied from below
WebGL is destination based. That means it's going to iterate over the pixels of the line/point/triangle it's drawing and for each point call the fragment shader and ask "what value should I store here?"
It's destination based. It's going to draw each pixel exactly once. For that pixel it's going to ask "what color should I make this"
destination based loop
for (let i = start; i < end; ++i) {
  fragmentShaderFunction();                    // must set gl_FragColor
  destinationTextureOrCanvas[i] = gl_FragColor;
}
You can see in the loop above there is no writing to any random destination, and no part of the destination is set twice. It's just going to run from start to end and, exactly once for each pixel in the destination between start and end, ask what color it should make that pixel.
How do you set start and end? Again, to make it simple let's assume a 200x1 texture so we can ignore Y. It works like this:
vertexShaderFunction();   // must set gl_Position
const start = clipspaceToArrayspaceViaViewport(viewport, gl_Position.x);

vertexShaderFunction();   // must set gl_Position
const end = clipspaceToArrayspaceViaViewport(viewport, gl_Position.x);

for (let i = start; i < end; ++i) {
  fragmentShaderFunction();   // must set gl_FragColor
  texture[i] = gl_FragColor;
}
see below for clipspaceToArrayspaceViaViewport
What is viewport? viewport is what you set when you called gl.viewport(x, y, width, height).
So, set gl_Position.x to -1 and +1, viewport.x to 0 and viewport.width to 200 (the width of the texture), and start will be 0 and end will be 200.
Set gl_Position.x to .25 and .75 with the same viewport (viewport.x = 0, viewport.width = 200), and start will be 125 and end will be 175.
I honestly feel like this answer is leading you down the wrong path. It's not remotely this complicated. You don't have to understand any of this to use WebGL IMO.
The simple answer is
You set gl.viewport to the sub rectangle you want to affect in your destination (canvas or texture it doesn't matter)
You make a vertex shader that somehow sets gl_Position to clip space coordinates (they go from -1 to +1) across the texture
Those clip space coordinates get converted to the viewport space. It's basic math to map one range to another range, but it's mostly not important. It seems intuitive that -1 will draw to the viewport.x pixel and +1 will draw to the viewport.x + viewport.width - 1 pixel. That's what "maps from clip space to the viewport settings" means.
It's most common for the viewport settings to be (x = 0, y = 0, width = width of destination texture or canvas, height = height of destination texture or canvas)
So that just leaves what you set gl_Position to. Those values are in clip space just like it explains in this article.
You can make it simple, if you want, by converting from pixel space to clip space just like it explains in this article:
zeroToOne = someValueInPixels / destinationDimensions;
zeroToTwo = zeroToOne * 2.0;
clipspace = zeroToTwo - 1.0;
gl_Position = clipspace;
If you continue with the articles they'll also show adding a value (translation) and multiplying by a value (scale).
Using just those 2 things and a unit square (0 to 1) you can choose any rectangle on the screen. Want to affect pixels 123 to 127? That's 5 units, so scale = 5 and translation = 123. Then apply the math above to convert from pixels to clip space and you'll get the rectangle you want.
If you continue further through those articles you'll eventually get to the point where that math is done with matrices, but you can do that math however you want. It's like asking "how do I compute the value 3?". Well, 1 + 1 + 1, or 3 + 0, or 9 / 3, or (100 - 50 + 20 * 2) / 30, or (7^2 - 19) / 10, or ????
I can't tell you how to set gl_Position. I can only tell you to make up whatever math you want, set it in clip space, and then give an example of converting from pixels to clip space (see above) as just one example of some possible math.
old answer
I get that this might not be clear, but I don't know how else to help. WebGL draws lines, points, or triangles to a 2D array. That 2D array is either the canvas, a texture (as a framebuffer attachment), or a renderbuffer (as a framebuffer attachment).
The size of the area is defined by the size of the canvas, texture, or renderbuffer.
You write a vertex shader. When you call gl.drawArrays(primitiveType, offset, count) you're telling WebGL to call your vertex shader count times. Assuming primitiveType is gl.TRIANGLES then for every 3 vertices generated by your vertex shader WebGL will draw a triangle. You specify that triangle by setting gl_Position in clip space.
Assuming gl_Position.w is 1, clip space goes from -1 to +1 in X and Y across the destination canvas/texture/renderbuffer. (gl_Position.x and gl_Position.y are divided by gl_Position.w, which is not really important for your case.)
To convert back to actually pixels your X and Y are converted based on the settings of gl.viewport. Let's just do X
pixelX = ((clipspace.x / clipspace.w) * .5 + .5) * viewport.width + viewport.x
WebGL is destination based. That means it's going to iterate over the pixels of the line/point/triangle it's drawing and for each point call the fragment shader and ask "what value should I store here?"
Let's translate that to JavaScript in 1D. Let's assume you have an 1D array
const dst = new Array(100);
Let's make a function that takes a start and end and sets values between
function setRange(dst, start, end, value) {
for (let i = start; i < end; ++i) {
dst[i] = value;
}
}
You can fill the entire 100 element array with 123
const dst = new Array(100);
setRange(dst, 0, 100, 123);
To set the last half of the array to 456
const dst = new Array(100);
setRange(dst, 50, 100, 456);
Let's change that to use clip space like coordinates
function setClipspaceRange(dst, clipStart, clipEnd, value) {
const start = clipspaceToArrayspace(dst, clipStart);
const end = clipspaceToArrayspace(dst, clipEnd);
for (let i = start; i < end; ++i) {
dst[i] = value;
}
}
function clipspaceToArrayspace(array, clipspaceValue) {
// convert clipspace value (-1 to +1) to (0 to 1)
const zeroToOne = clipspaceValue * .5 + .5;
// convert zeroToOne value to array space
return Math.floor(zeroToOne * array.length);
}
This function now works just like the previous one, except it takes clip space values instead of array indices.
// fill entire array with 123
const dst = new Array(100);
setClipspaceRange(dst, -1, +1, 123);
Set the last half of the array to 456
setClipspaceRange(dst, 0, +1, 456);
Now abstract one more time. Instead of using the array's length use a setting
// viewport looks like `{ x: number, width: number} `
function setClipspaceRangeViaViewport(dst, viewport, clipStart, clipEnd, value) {
const start = clipspaceToArrayspaceViaViewport(viewport, clipStart);
const end = clipspaceToArrayspaceViaViewport(viewport, clipEnd);
for (let i = start; i < end; ++i) {
dst[i] = value;
}
}
function clipspaceToArrayspaceViaViewport(viewport, clipspaceValue) {
// convert clipspace value (-1 to +1) to (0 to 1)
const zeroToOne = clipspaceValue * .5 + .5;
// convert zeroToOne value to array space
return Math.floor(zeroToOne * viewport.width) + viewport.x;
}
Now to fill the entire array with 123
const dst = new Array(100);
const viewport = { x: 0, width: 100 };
setClipspaceRangeViaViewport(dst, viewport, -1, 1, 123);
To set the last half of the array to 456 there are now 2 ways. One way is just like before, using 0 to +1:
setClipspaceRangeViaViewport(dst, viewport, 0, 1, 456);
You can also set the viewport to start half way through the array
const halfViewport = { x: 50, width: 50 };
setClipspaceRangeViaViewport(dst, halfViewport, -1, +1, 456);
I don't know if that was helpful or not.
The only other thing to add is that instead of a fixed value, you pass in a function that gets called on every iteration to supply the value:
function setClipspaceRangeViaViewport(dst, viewport, clipStart, clipEnd, fragmentShaderFunction) {
const start = clipspaceToArrayspaceViaViewport(viewport, clipStart);
const end = clipspaceToArrayspaceViaViewport(viewport, clipEnd);
for (let i = start; i < end; ++i) {
dst[i] = fragmentShaderFunction();
}
}
Note this is the exact same thing that is said in this article and clarified somewhat in this article.
I have vertices of some surfaces that I draw on the canvas using drawArrays(gl.TRIANGLES, ...). I need to draw these surfaces for a particular camera viewpoint, so all 3D points are projected into 2D, and I download the final image using toDataURL. Here is the downloaded image:
I used gl.readPixels later to retrieve the data for every pixel.
For all the edge vertices, I have the information for the normals. Just like how I got the color for every pixel in the 2D image, I want to get the normals at every pixel of the 2D image. Since I only have the normals at the edge vertices, I decided to render the normals the same way I rendered the image above and then use gl.readPixels. This is not working. Here is the relevant code:
This is the function from which drawOverlayTrianglesNormals is called. The drawOverlayTriangles function (not shown in this post) was used to produce the image shown above.
//Saving BIM
gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
gl.vertexAttrib1f(shaderProgram.aIsDepth, 0.0);
drawOverlayTriangles();
saveBlob('element');
gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
gl.vertexAttrib1f(shaderProgram.aIsDepth, 0.0);
drawOverlayTrianglesNormals();
saveBlob('element');
var pixels = new Uint8Array(glCanvas.width*glCanvas.height*4);
gl.readPixels(0, 0, glCanvas.width, glCanvas.height, gl.RGBA, gl.UNSIGNED_BYTE,pixels);
pixels = new Float32Array(pixels.buffer);
}
This is the drawOverlayTrianglesNormals function:
function drawOverlayTrianglesNormals()
{
if (overlay.numElements <= 0)
return;
//Creating the matrix for normal transform
var normal_matrix = mat4.create();
var u_Normal_Matrix = mat4.create();
mat4.invert(normal_matrix,pMVMatrix);
mat4.transpose(u_Normal_Matrix,normal_matrix);
gl.enable(gl.DEPTH_TEST);
gl.enableVertexAttribArray(shaderProgram.aVertexPosition);
gl.enableVertexAttribArray(shaderProgram.aVertexColor);
gl.enableVertexAttribArray(shaderProgram.aNormal);
gl.vertexAttrib1f(shaderProgram.aIsNormal, 1.0);
//Matrix upload
gl.uniformMatrix4fv(shaderProgram.uMVMatrix, false, pMVMatrix);
gl.uniformMatrix4fv(shaderProgram.uPMatrix, false, perspM);
gl.uniformMatrix4fv(shaderProgram.uNMatrix, false, u_Normal_Matrix);
//Create normals buffer
normals_buffer = gl.createBuffer();
for (var i = 0; i < overlay.numElements; i++) {
// Upload overlay vertices
gl.bindBuffer(gl.ARRAY_BUFFER, overlayVertices[i]);
gl.vertexAttribPointer(shaderProgram.aVertexPosition, 3, gl.FLOAT, false, 0, 0);
// Upload overlay colors
gl.bindBuffer(gl.ARRAY_BUFFER, overlayTriangleColors[i]);
gl.vertexAttribPointer(shaderProgram.aVertexColor, 4, gl.FLOAT, false, 0, 0);
var normal_vertex = [];
//Upload Normals
var normals_element = overlay.elementNormals[i];
for( var j=0; j< overlay.elementNumVertices[i]; j++)
{
var x = normals_element[3*j+0];
var y = normals_element[3*j+1];
var z = normals_element[3*j+2];
var length = Math.sqrt(x*x + y*y + z*z);
normal_vertex[3*j+0] = x/length;
normal_vertex[3*j+1] = y/length;
normal_vertex[3*j+2] = z/length;
}
gl.bindBuffer(gl.ARRAY_BUFFER, normals_buffer);
gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(normal_vertex),gl.STATIC_DRAW);
gl.vertexAttribPointer(shaderProgram.aNormal, 3, gl.FLOAT, false, 0, 0);
// Draw overlay
gl.drawArrays(gl.TRIANGLES, 0, overlay.elementNumVertices[i]);
}
gl.disableVertexAttribArray(shaderProgram.aVertexPosition);
gl.disableVertexAttribArray(shaderProgram.aVertexColor);
gl.vertexAttrib1f(shaderProgram.aisdepth, 0.0);
}
Below is the relevant vertex shader code:
void main(void) {
gl_PointSize = aPointSize;
gl_Position = uPMatrix * uMVMatrix * vec4(aVertexPosition, 1.0);
position_1 = gl_Position;
vColor = aVertexColor;
vIsDepth = aIsDepth;
vIsNormal = aIsNormal;
vHasTexture = aHasTexture;
normals = uNMatrix*vec4(aNormal,1.0);
if (aHasTexture > 0.5)
vTextureCoord = aTextureCoord;
}
Fragment shader:
if (vIsNormal > 0.5)
{
gl_FragColor = vec4(normals.xyz*0.5+0.5,1);
}
}
Right now my output is the same image in grayscale. I am not sure what is going wrong. I felt this method makes sense, but it seems a little roundabout.
I'm not entirely sure I understand what you're trying to do, but it seems like you just want to be able to access the normals for calculating lighting effects, so let me try to answer that.
DO NOT use gl.readPixels()! That's primarily for things like click interactions, or for reading back small numbers of pixels. Using it here is extremely inefficient, since you have to draw the pixels, then read them, then redraw them after calculating their appropriate lighting. The wonderful thing about WebGL is that it allows you to do all this from the beginning: the fragment shader will interpolate the information it's given to smoothly draw effects between adjacent vertices.
Most lighting depends on comparing the surface normal to the direction of the light (as you seem to understand, judging by one of your comments). See Phong shading.
Now, you mentioned that you want the normals of all the rendered points, not just at the vertices. BUT the vertices' normals will be identical to the normals at every point on the surface, so you don't even need anything more than the vertices' normals. This is because all WebGL knows how to draw is triangles (I believe), which are flat, or planar, surfaces. And since every point on a plane has the same normal as any other point, you only really need one normal to know all of the normals!
Since it looks like all you're trying to draw are cylinders and rectangular prisms, it ought to be simple to specify the normals the objects you create. The normals for the rectangular prisms are trivial, but so are those of the cylinder: the normals are parallel to the line going from the axis of the cylinder to the surface.
And since WebGL's fragment shader interpolates any varying variables you pass it between adjacent vertices, you can tell it to interpolate these normals smoothly across vertices, to achieve the smooth lighting seen on the Phong shading page! :D
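To make that last point concrete, here is a small CPU-side sketch (in C, just to show the arithmetic) of the per-fragment math being described; in practice this dot product lives in your fragment shader, and the vec3 type and function names here are only illustrative:
#include <math.h>

typedef struct { float x, y, z; } vec3;

static float dot3(vec3 a, vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

static vec3 normalize3(vec3 v)
{
    float len = sqrtf(dot3(v, v));
    vec3 r = { v.x / len, v.y / len, v.z / len };
    return r;
}

// Lambert/Phong diffuse term for one fragment, given the normal the rasterizer
// interpolated for it and the direction toward the light (both in the same space).
static float diffuseTerm(vec3 interpolatedNormal, vec3 toLight)
{
    float d = dot3(normalize3(interpolatedNormal), normalize3(toLight));
    return d > 0.0f ? d : 0.0f;   // surfaces facing away from the light get no diffuse
}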
I am working on a drawing application and I am noticing significant differences in textures loaded on a 32-bit iPad vs. a 64-bit iPad.
Here is the texture drawn on a 32-bit iPad:
Here is the texture drawn on a 64-bit iPad:
The 64-bit is what I desire, but it seems like maybe it is losing some data?
I create a default brush texture with this code:
UIGraphicsBeginImageContext(CGSizeMake(64, 64));
CGContextRef defBrushTextureContext = UIGraphicsGetCurrentContext();
UIGraphicsPushContext(defBrushTextureContext);
size_t num_locations = 3;
CGFloat locations[3] = { 0.0, 0.8, 1.0 };
CGFloat components[12] = { 1.0,1.0,1.0, 1.0,
1.0,1.0,1.0, 1.0,
1.0,1.0,1.0, 0.0 };
CGColorSpaceRef myColorspace = CGColorSpaceCreateDeviceRGB();
CGGradientRef myGradient = CGGradientCreateWithColorComponents (myColorspace, components, locations, num_locations);
CGPoint myCentrePoint = CGPointMake(32, 32);
float myRadius = 20;
CGGradientDrawingOptions options = kCGGradientDrawsBeforeStartLocation | kCGGradientDrawsAfterEndLocation;
CGContextDrawRadialGradient (UIGraphicsGetCurrentContext(), myGradient, myCentrePoint,
0, myCentrePoint, myRadius,
options);
CFRelease(myGradient);
CFRelease(myColorspace);
UIGraphicsPopContext();
[self setBrushTexture:UIGraphicsGetImageFromCurrentImageContext()];
UIGraphicsEndImageContext();
And then actually set the brush texture like this:
-(void) setBrushTexture:(UIImage*)brushImage{
// save our current texture.
currentTexture = brushImage;
// first, delete the old texture if needed
if (brushTexture){
glDeleteTextures(1, &brushTexture);
brushTexture = 0;
}
// fetch the cgimage for us to draw into a texture
CGImageRef brushCGImage = brushImage.CGImage;
// Make sure the image exists
if(brushCGImage) {
// Get the width and height of the image
GLint width = CGImageGetWidth(brushCGImage);
GLint height = CGImageGetHeight(brushCGImage);
// Texture dimensions must be a power of 2. If you write an application that allows users to supply an image,
// you'll want to add code that checks the dimensions and takes appropriate action if they are not a power of 2.
// Allocate memory needed for the bitmap context
GLubyte* brushData = (GLubyte *) calloc(width * height * 4, sizeof(GLubyte));
// Use the bitmap creation function provided by the Core Graphics framework.
CGContextRef brushContext = CGBitmapContextCreate(brushData, width, height, 8, width * 4, CGImageGetColorSpace(brushCGImage), kCGImageAlphaPremultipliedLast);
// After you create the context, you can draw the image to the context.
CGContextDrawImage(brushContext, CGRectMake(0.0, 0.0, (CGFloat)width, (CGFloat)height), brushCGImage);
// You don't need the context at this point, so you need to release it to avoid memory leaks.
CGContextRelease(brushContext);
// Use OpenGL ES to generate a name for the texture.
glGenTextures(1, &brushTexture);
// Bind the texture name.
glBindTexture(GL_TEXTURE_2D, brushTexture);
// Set the texture parameters to use a minifying filter and a linear filter (weighted average)
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
// Specify a 2D texture image, providing the a pointer to the image data in memory
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, brushData);
// Release the image data; it's no longer needed
free(brushData);
}
}
Update:
I've updated CGFloats to be GLfloats with no success. Maybe there is an issue with this rendering code?
if(frameBuffer){
// draw the stroke element
[self prepOpenGLStateForFBO:frameBuffer];
[self prepOpenGLBlendModeForColor:element.color];
CheckGLError();
}
// find our screen scale so that we can convert from
// points to pixels
GLfloat scale = self.contentScaleFactor;
// fetch the vertex data from the element
struct Vertex* vertexBuffer = [element generatedVertexArrayWithPreviousElement:previousElement forScale:scale];
glLineWidth(2);
// if the element has any data, then draw it
if(vertexBuffer){
glVertexPointer(2, GL_FLOAT, sizeof(struct Vertex), &vertexBuffer[0].Position[0]);
glColorPointer(4, GL_FLOAT, sizeof(struct Vertex), &vertexBuffer[0].Color[0]);
glTexCoordPointer(2, GL_FLOAT, sizeof(struct Vertex), &vertexBuffer[0].Texture[0]);
glDrawArrays(GL_TRIANGLES, 0, (GLint)[element numberOfSteps] * (GLint)[element numberOfVerticesPerStep]);
CheckGLError();
}
if(frameBuffer){
[self unprepOpenGLState];
}
The vertex struct is the following:
struct Vertex{
GLfloat Position[2]; // x,y position
GLfloat Color [4]; // rgba color
GLfloat Texture[2]; // x,y texture coord
};
Update:
The issue does not actually appear to be 32-bit, 64-bit based, but rather something different about the A7 GPU and GL drivers. I found this out by running a 32-bit build and 64-bit build on the 64-bit iPad. The textures ended up looking exactly the same on both builds of the app.
I would like you to check two things.
Check your alpha blending logic (or options) in OpenGL.
Check your interpolation logic, which should be proportional to the velocity of dragging.
It seems you either don't have the second one or it's not effective, and it is required for a drawing app (see the sketch below).
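A rough sketch of that second point, with made-up names: when the finger moves quickly the touch samples are far apart, so stamp the brush at evenly spaced positions between the previous and current touch instead of only at the samples themselves:
#include <math.h>

typedef struct { float x, y; } Point2;

// Calls emitStamp once every `spacing` points along the segment from a to b,
// so a fast drag produces more stamps instead of visible gaps.
static void stampBetween(Point2 a, Point2 b, float spacing, void (*emitStamp)(Point2))
{
    float dx = b.x - a.x, dy = b.y - a.y;
    float dist = sqrtf(dx * dx + dy * dy);
    int steps = (int)(dist / spacing) + 1;      // more steps when the drag was faster
    for (int i = 0; i <= steps; i++) {
        float t = (float)i / (float)steps;
        Point2 p = { a.x + dx * t, a.y + dy * t };
        emitStamp(p);                           // e.g. append a textured quad for this stamp
    }
}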
I don't think the problem is in the texture but in the frame buffer to which you composite the line elements.
Your code fragments look like you draw segment by segment, so there are several overlapping segments drawn on top of each other. If the color depth of the frame buffer is low, there will be artifacts, especially in the lighter regions of the blended areas.
You can check the frame buffer using Xcode's OpenGL ES debugger. Activate it by running your code on the device and clicking the little "Capture OpenGL ES Frame" button.
Select a "glBindFramebuffer" command in the "Debug Navigator" and look at the frame buffer description in the console area.
The interesting part is the GL_FRAMEBUFFER_INTERNAL_FORMAT.
In my opinion, the problem is in the blending mode you use when compositing the different image passes. I assume that you upload the texture for display only and keep an in-memory image where you composite the different drawing operations, or that you read back the image content using glReadPixels?
Basically, your second image appears like a straight-alpha image drawn as if it were a premultiplied-alpha image.
To be sure that it isn't a texture problem, save the image to a file before uploading it to the texture, and check that the image is actually correct.
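A small sketch of the two standard blend setups the answers above are hinting at; which one is right depends on how the source pixels were produced, and this is not a claim about what the app currently uses:
#include <OpenGLES/ES1/gl.h>   // the rendering code above uses the ES 1.1 fixed-function pipeline
#include <stdbool.h>

// Pick the blend function that matches the source data.
static void setBlendForPremultipliedAlpha(bool premultiplied)
{
    glEnable(GL_BLEND);
    if (premultiplied) {
        // A CGBitmapContext created with kCGImageAlphaPremultipliedLast (as in
        // setBrushTexture above) stores RGB already multiplied by alpha, so the
        // source factor must be GL_ONE.
        glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
    } else {
        // Straight (non-premultiplied) alpha needs the classic "source over" factors.
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    }
}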
I'm developing an app that has to draw 320 vertical gradient lines on a portrait iPhone screen where each gradient line is either 1px or 2px wide (non-retina vs retina). Each gradient line has 1000 positions, with each position able to have a unique color. These 1000 colors (floats) sit in a C-style 2D array (an array of arrays, 320 arrays of 1000 colors)
Currently, the gradient lines are drawn in a for loop inside the drawRect method of a custom UIView. The problem I'm having is that it takes longer than one second to cycle through the for loop and draw all 320 lines. Within that one second, I have another thread that's updating the color arrays, but since it takes longer than one second to draw, I don't see every update. I only see every second or third update.
I'm using the exact same procedure in my Android code, which has no problems drawing 640 gradient lines (double the amount) multiple times in a second using a SurfaceView. My Android app never misses an update.
If you look at the Android code, it actually draws the gradient lines to TWO separate canvases. The array size is dynamic and can be up to half the landscape resolution width of an Android phone (e.g. 1280 width = 1280/2 = 640 lines). Since the Android app is fast enough, I allow landscape mode. Even with double the data of the iPhone version and drawing to two separate canvases, the Android code runs multiple times a second. The iPhone code, with half the number of lines and drawing to only a single context, cannot draw in under a second.
Is there a faster way to draw 320 vertical gradient lines (each with 1000 positions) on an iPhone?
Is there a hardware accelerated SurfaceView equivalent for iOS that can draw many gradients really fast?
//IPHONE - drawRect method
int totalNumberOfColors = 1000;
int i;
CGFloat *locations = malloc(totalNumberOfColors * sizeof locations[0]);
for (i = 0; i < totalNumberOfColors; i++) {
float division = (float)1 / (float)(totalNumberOfColors - 1);
locations[i] = i * division;
}
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
for (int k = 0; k < 320; k++) {
CGFloat * colorComponents = arrayOfFloatArrays[k];
CGGradientRef gradient = CGGradientCreateWithColorComponents(
colorSpace,
colorComponents,
locations,
(size_t)(totalNumberOfColors));
CGRect newRect;
if (currentPositionOffset >=320) {
newRect = CGRectMake(0, 0, 1, CGRectGetMaxY(rect));
} else {
newRect = CGRectMake(319 - (k * 1), 0, 1, CGRectGetMaxY(rect));
}
CGContextSaveGState(ctx);
//NO CLIPPING STATE
CGContextAddRect(ctx, newRect);
CGContextClip(ctx);
//CLIPPING STATE
CGContextDrawLinearGradient(
ctx,
gradient,
CGPointMake(0, 0),
CGPointMake(0, CGRectGetMaxY(rect)),
(CGGradientDrawingOptions)NULL);
CGContextRestoreGState(ctx);
//RESTORE TO NO CLIPPING STATE
CGGradientRelease(gradient);
}
//ANDROID - public void run() method on SurfaceView
for (i = 0; i < sonarData.arrayOfColorIntColumns.size() - currentPositionOffset; i++) {
Paint paint = new Paint();
int[] currentColors = sonarData.arrayOfColorIntColumns.get(currentPositionOffset + i);
//Log.d("currentColors.toString()",currentColors.toString());
LinearGradient linearGradient;
if (currentScaleFactor > 1.0) {
int numberOfColorsToUse = (int)(1000.0/currentScaleFactor);
int tmpTopOffset = currentTopOffset;
if (currentTopOffset + numberOfColorsToUse > 1000) {
//shift tmpTopOffset
tmpTopOffset = 1000 - numberOfColorsToUse - 1;
}
int[] subsetOfCurrentColors = new int[numberOfColorsToUse];
System.arraycopy(currentColors, tmpTopOffset, subsetOfCurrentColors, 0, numberOfColorsToUse);
linearGradient = new LinearGradient(0, tmpTopOffset, 0, getHeight(), subsetOfCurrentColors, null, Shader.TileMode.MIRROR);
//Log.d("getHeight()","" + getHeight());
//Log.d("subsetOfCurrentColors.length","" + subsetOfCurrentColors.length);
} else {
//use all colors
linearGradient = new LinearGradient(0, 0, 0, getHeight(), currentColors, null, Shader.TileMode.MIRROR);
//Log.d("getHeight()","" + getHeight());
//Log.d("currentColors.length","" + currentColors.length);
}
paint.setShader(linearGradient);
sonarData.checkAndAddPaint(paint);
numberOfColumnsToDraw = i + 1;
}
//Log.d(TAG,"numberOfColumnsToDraw " + numberOfColumnsToDraw);
currentPositionOffset = currentPositionOffset + i;
if (currentPositionOffset >= sonarData.getMaxNumberOfColumns()) {
currentPositionOffset = sonarData.getMaxNumberOfColumns() - 1;
}
if (numberOfColumnsToDraw > 0) {
Canvas canvas = surfaceHolder.lockCanvas();
if (AppInstanceData.sonarBackgroundImage != null && canvas != null) {
canvas.drawBitmap(AppInstanceData.sonarBackgroundImage, 0, getHeight()- AppInstanceData.sonarBackgroundImage.getHeight(), null);
if (cacheCanvas != null) {
cacheCanvas.drawBitmap(AppInstanceData.sonarBackgroundImage, 0, getHeight()- AppInstanceData.sonarBackgroundImage.getHeight(), null);
}
}
for (i = drawOffset; i < sizeToDraw + drawOffset; i++) {
Paint p = sonarData.paintArray.get(i - dataStartOffset);
p.setStrokeWidth(2);
//Log.d("drawGradientLines", "canvas.getHeight() " + canvas.getHeight());
canvas.drawLine(getWidth() - (i - drawOffset) * 2, 0, getWidth() - (i - drawOffset) * 2, canvas.getHeight(), p);
if (cacheCanvas != null) {
cacheCanvas.drawLine(getWidth() - (i - drawOffset) * 2, 0, getWidth() - (i - drawOffset) * 2, canvas.getHeight(), p);
}
}
surfaceHolder.unlockCanvasAndPost(canvas);
}
No comment on the CG code — it's been a while since I've drawn any gradients — but a couple of notes:
You shouldn't be doing that in drawRect because it's called a lot. Draw into an image and display it.
There's no matching free for the malloc, so you're leaking memory like crazy.
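A sketch of both notes combined, with made-up helper names: render the 320 gradient columns into a bitmap context once whenever the data changes (off the main thread if you like), free the malloc'd locations, and let drawRect or a layer simply display the cached CGImage:
#include <CoreGraphics/CoreGraphics.h>
#include <stdlib.h>

// arrayOfFloatArrays: one RGBA component array per column, as in the question.
static CGImageRef renderGradientColumns(CGFloat *const *arrayOfFloatArrays,
                                        size_t columns, size_t colorsPerColumn,
                                        size_t width, size_t height)
{
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(NULL, width, height, 8, 0, colorSpace,
                                             (CGBitmapInfo)kCGImageAlphaPremultipliedLast);

    CGFloat *locations = malloc(colorsPerColumn * sizeof locations[0]);
    for (size_t i = 0; i < colorsPerColumn; i++)
        locations[i] = (CGFloat)i / (CGFloat)(colorsPerColumn - 1);

    for (size_t k = 0; k < columns; k++) {
        CGGradientRef gradient = CGGradientCreateWithColorComponents(
            colorSpace, arrayOfFloatArrays[k], locations, colorsPerColumn);
        CGContextSaveGState(ctx);
        CGContextClipToRect(ctx, CGRectMake(width - 1 - k, 0, 1, height));
        CGContextDrawLinearGradient(ctx, gradient, CGPointMake(0, 0),
                                    CGPointMake(0, height), 0);
        CGContextRestoreGState(ctx);
        CGGradientRelease(gradient);
    }

    free(locations);                       // the free that was missing in drawRect
    CGColorSpaceRelease(colorSpace);
    CGImageRef image = CGBitmapContextCreateImage(ctx);
    CGContextRelease(ctx);
    return image;                          // caller displays it, then calls CGImageRelease
}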
It'll have a learning curve, but implement this using OpenGL ES 2.0. I previously took something that was drawing a large number of gradients as well, and reimplemented it using OpenGL ES 2.0 and custom vertex and fragment shaders. It is way faster than the equivalent drawing done using Core Graphics, so you will probably see a big speed boost as well.
If you don't know any OpenGL yet, I would suggest finding some tutorials for working with OpenGL ES 2.0 on iOS (it has to be 2.0, because that's what offers the ability to write custom shaders) to learn the basics. Once you do that, you should be able to significantly increase the performance of your drawing, way above that of the Android version, and maybe that would be an incentive to make the Android version use OpenGL as well.
I am rendering my scene with the code below:
struct vertex
{
float x, y, z, nx, ny, nz;
};
bool CShell::UpdateScene()
{
glEnable(GL_DEPTH_TEST);
glClearColor(0.3f, 0.3f, 0.4f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// Set the OpenGL projection matrix
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
const float near = 0.1f;
const float far = 1000.0f;
float top = near * tanf(fieldOfView * SIMD_PI / 180.0f);
float bottom = -top;
float left = bottom * aspectRatio;
float right = top * aspectRatio;
glFrustumf(left, right, bottom, top, near, far);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
}
bool CShell::RenderScene()
{
glEnable(GL_DEPTH_TEST);
glBindBuffer(GL_ARRAY_BUFFER, vertsVBO);
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_NORMAL_ARRAY);
glVertexPointer(3, GL_FLOAT, elementSize, 0);
glNormalPointer(GL_FLOAT, elementSize, (const GLvoid*) normalOffset);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indicesVBO);
glEnable(GL_LIGHTING);
lightPosition[0] = (-gravity.x()+0.0)*lightHeight;
lightPosition[1] = (-gravity.y()+0.0)*lightHeight;
lightPosition[2] = (-gravity.z()+0.5)*lightHeight;
glLightfv(GL_LIGHT0, GL_POSITION, lightPosition);
float worldMat[16];
/// draw donuts
for (int i=0;i<numDonuts;i++)
{
sBoxBodies[i]->getCenterOfMassTransform().getOpenGLMatrix(worldMat);
glPushMatrix();
glMultMatrixf(worldMat);
glVertexPointer(3, GL_FLOAT, elementSize, (const GLvoid*)(char*)sizeof(vertex));
glNormalPointer(GL_FLOAT, elementSize, (const GLvoid*)(char*)(sizeof(vertex)+normalOffset));
glDrawElements(GL_TRIANGLES, numberOfIndices, GL_UNSIGNED_SHORT, (const GLvoid*)(char*)sizeof(GLushort));
glPopMatrix();
}
glBindBuffer(GL_ARRAY_BUFFER,0);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
glDisable(GL_LIGHTING);
return true;
}
My project uses Oolong Engine
These are two screenshots, iPodTouch 4G (iOS 6.0)
and iPodTouch 2G (iOS 4.2.1)
What could be causing the strange artifacts that appear in the latter screenshot?
It appears as if the triangles in the back are overlapping those in the front.
It only happens some of the time, and the artifacts are jerky. It's like there is "z-fighting", but the triangles in the back have z values below those of the triangles in the front.
Here is an image of the vertices and normals z arrangement
The blue arrows are normals shared by the surrounding faces, and the triangle with red lines is a representation of what could be causing those artifacts
It's like there is "z-fighting", but the triangles in the back have z values below those of the triangles in the front.
It doesn't matter so much that one has a z value less than the other; you get z-fighting when your objects are too close together and you don't have enough z resolution.
The problem here, I guess, is that you set your projection range too large, from 0.1 to 1000. The greater the ratio between far and near, the less z resolution you get, and most of the precision ends up concentrated right in front of the near plane.
I recommend trying a near/far of 0.1/100 or 1.0/1000, as long as that works for your application. It should help your z-fighting issue.
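To see why the near value matters so much more than the far value, here is a small standalone sketch (standard perspective-projection depth math, nothing specific to your engine) that prints the depth-buffer value stored for a couple of eye-space distances:
#include <stdio.h>

// Convert an eye-space distance (in front of the camera) to the 0..1 value a
// standard perspective depth buffer stores, using the usual OpenGL projection math.
static float depthBufferValue(float zEye, float nearPlane, float farPlane)
{
    // ndcZ is -1 at the near plane and +1 at the far plane
    float ndcZ = (farPlane + nearPlane) / (farPlane - nearPlane)
               + (2.0f * farPlane * nearPlane) / ((farPlane - nearPlane) * -zEye);
    return ndcZ * 0.5f + 0.5f;
}

int main(void)
{
    // With near = 0.1 almost the whole depth range is spent on the first few units,
    // so z = 10 and z = 100 land on nearly the same stored value:
    printf("near 0.1: z=10 -> %f, z=100 -> %f\n",
           depthBufferValue(10.0f, 0.1f, 1000.0f),
           depthBufferValue(100.0f, 0.1f, 1000.0f));
    // With near = 1.0 the same two distances are separated by far more precision:
    printf("near 1.0: z=10 -> %f, z=100 -> %f\n",
           depthBufferValue(10.0f, 1.0f, 1000.0f),
           depthBufferValue(100.0f, 1.0f, 1000.0f));
    return 0;
}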