Loading a PNG file to OpenGL ES - iOS

I'm trying to apply a certain texture to an object in OpenGL ES. This is how I load it:
- (GLuint)setupTexture:(NSString *)fileName {
    // 1: Load the image from the bundle and grab its CGImage
    CGImageRef spriteImage = [UIImage imageNamed:fileName].CGImage;
    if (!spriteImage) {
        NSLog(@"Failed to load image %@", fileName);
        exit(1);
    }

    // 2: Allocate an RGBA buffer and draw the image into it with Core Graphics
    size_t width = CGImageGetWidth(spriteImage);
    size_t height = CGImageGetHeight(spriteImage);
    GLubyte *spriteData = (GLubyte *)calloc(width * height * 4, sizeof(GLubyte));
    CGContextRef spriteContext = CGBitmapContextCreate(spriteData, width, height, 8, width * 4,
        CGImageGetColorSpace(spriteImage), kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGContextDrawImage(spriteContext, CGRectMake(0, 0, width, height), spriteImage);
    CGContextRelease(spriteContext);
    ...
}
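The elided portion would then create the GL texture object and upload spriteData; a minimal sketch of those remaining steps (not from the original post; the texture setup details are assumptions):

    // Hypothetical continuation: create a texture object and upload the RGBA buffer.
    GLuint texName;
    glGenTextures(1, &texName);
    glBindTexture(GL_TEXTURE_2D, texName);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, (GLsizei)width, (GLsizei)height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, spriteData);
    free(spriteData);
    return texName;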
Now, when I use this tile_floor.png file, the image is loaded and drawn on screen. But when I use this wood.png file, all I get is a black object.
Why the difference? Do the file dimensions (width or height) matter? I did not hard-code any dimensions or parameters, so all images should load.
It's important to note that I don't get any errors in the console, and the program keeps running.

While you can use NPOT (non-power-of-two) images in many OpenGL ES implementations (all iOS devices support the GL_APPLE_texture_2D_limited_npot extension), you have to use the right edge mode:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
This clamps texture lookups to the edges of the [0, 1] texture coordinate range. Your wood texture does not look like it needs to repeat anyway.
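Note that GL_APPLE_texture_2D_limited_npot also rules out mipmapping for NPOT textures, so the minification filter must be non-mipmapped as well. A typical NPOT-safe parameter set (a sketch expanding on the lines above; textureID is the texture created in setupTexture:):

glBindTexture(GL_TEXTURE_2D, textureID);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); // no mipmapped filters with NPOT
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);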

OK, I got it: OpenGL ES requires power-of-two image sizes (both width and height).
I resized the image and it worked.
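If you'd rather catch this at load time than resize assets by hand, a quick check (a sketch; the helper name is mine):

// A size is a power of two iff exactly one bit is set.
static BOOL isPowerOfTwo(size_t n) {
    return n != 0 && (n & (n - 1)) == 0;
}

// e.g. inside setupTexture:, after reading the dimensions:
if (!isPowerOfTwo(width) || !isPowerOfTwo(height)) {
    NSLog(@"Warning: %@ is %zux%zu, which is not power-of-two", fileName, width, height);
}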

Related

Fast way to create an OpenGL texture from JPEG-2000?

I need to load large-ish (5-megapixel) JPEG images and create an OpenGL texture from each. They are non-power-of-two and cannot be pre-processed for this application. Loading is extremely slow, about one second per image on an iPad Air 2. I need to load a dozen or two such images and create a GL texture for each, as quickly as I can.
Profiling shows the bottleneck to be CGContextDrawImage. Previous answers suggest this is a common problem.
This previous answer seems most relevant and (unfortunately) does not leave me hopeful. I haven't tried libjpeg (suggested in another answer) yet; I'm trying to keep third-party code out for several reasons.
But that answer was from 2014, and things change. Does anybody know of a faster way to create textures from JPEGs? Either by changing the arguments to CGContextDrawImage (as in this answer; I've tried the suggested changes with no noticeable speed change) or by using a different approach entirely?
The current texture creation block (called asynchronously):
UIImage *image = [UIImage imageWithData:jpegImageData];
if (image) {
    GLuint textureID;
    glGenTextures(1, &textureID);
    glBindTexture(GL_TEXTURE_2D, textureID);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

    GLsizei width = (GLsizei)CGImageGetWidth(image.CGImage);
    GLsizei height = (GLsizei)CGImageGetHeight(image.CGImage);

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    void *imageData = malloc(height * width * 4);
    CGContextRef imgcontext = CGBitmapContextCreate(imageData, width, height, 8, 4 * width,
        colorSpace, kCGImageAlphaNoneSkipLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);
    CGContextDrawImage(imgcontext, CGRectMake(0, 0, width, height), image.CGImage);

    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, imageData);

    CGContextRelease(imgcontext);
    free(imageData);

    // ... store the textureID for use by the caller
    // ...
}
(edited to add)
I tried GLKTextureLoader. I kept getting a nil return value, with an NSError of domain "GLKTextureLoaderErrorDomain", code 12.
I've realized that the JPEGs I need to load are JPEG 2000, and that may be the problem. I've played with the GLKTextureLoader approach; I can get it to work with non-J2K JPEGs, but not with the J2K ones I need to load. (FWIW, the files I need to load are packed inside larger files, so I extract a data subrange from within the file, as such:
NSData *jpegImageData = [data subdataWithRange:NSMakeRange(offset, dataLength)];
GLKTextureInfo *jpegTexture;
NSError *theError;
jpegTexture = [GLKTextureLoader textureWithContentsOfData:jpegImageData options:nil error:&theError];
but, as mentioned, jpegTexture comes back as nil with the aforementioned error. This works on small JPEGs, even using the subdataWithRange approach.
Likewise,
UIImage *image = [UIImage imageWithData:jpegImageData];
jpegTexture = [GLKTextureLoader textureWithCGImage:image.CGImage options:nil error:&theError];
returns nil with the same "code 12" error.
This iOS Developer page (Table 1-1) suggests that JPEG-2000 is supported on OS X only, but when I try the
CFArrayRef mySourceTypes = CGImageSourceCopyTypeIdentifiers();
CFShow(mySourceTypes);
approach for showing supported formats, JPEG-2000 is among them (running on my iOS device):
33 : <CFString 0x19d721bf8 [0x1a1da0150]>{contents = "public.jpeg-
Any suggestions for using the faster GLKTextureLoader methods on JPEG-2000?
Did you try the GLKit Framework method?
GLKTextureInfo *spriteTexture;
NSError *theError;
NSString *filePath = [[NSBundle mainBundle] pathForResource:@"Sprite" ofType:@"jpg"]; // 1
spriteTexture = [GLKTextureLoader textureWithContentsOfFile:filePath options:nil error:&theError]; // 2
glBindTexture(spriteTexture.target, spriteTexture.name); // 3
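Since the question loads textures asynchronously, note that GLKTextureLoader also has an instance API with a completion handler; a sketch (the sharegroup wiring via eaglContext is an assumption about your setup):

GLKTextureLoader *loader = [[GLKTextureLoader alloc] initWithSharegroup:eaglContext.sharegroup];
[loader textureWithContentsOfData:jpegImageData
                          options:nil
                            queue:NULL // NULL runs the completion handler on the main queue
                completionHandler:^(GLKTextureInfo *textureInfo, NSError *outError) {
    if (textureInfo) {
        // ... store textureInfo.name for use by the caller
    } else {
        NSLog(@"Texture load failed: %@", outError);
    }
}];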

iOS UIView render to texture -- preserve quality

I've implemented a program to render a UILabel to a texture and then map that same texture to a quad in OpenGL. I then rendered the same UILabel normally by adding it to the UIView hierarchy to compare them visually.
I immediately noticed that the quality of the UILabel that I rendered to texture is lower than a normally rendered UILabel. I am having trouble figuring out why the quality is lower and would appreciate any advice.
I'm using the render-to-texture technique found here: Render contents of UIView as an OpenGL texture
Here is some of the relevant code:
// setup test UIView (label for now) to be rendered to texture
_testLabel = [[UILabel alloc] initWithFrame:CGRectMake(0, 0, 100, 50)];
[_testLabel setText:@"yo"];
_testLabel.textAlignment = NSTextAlignmentCenter;
[_testLabel setBackgroundColor:RGB(0, 255, 0)];

CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
_testLabelContext = CGBitmapContextCreate(NULL, _testLabel.bounds.size.width, _testLabel.bounds.size.height,
    8, 4 * _testLabel.bounds.size.width, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colorSpace);

// setup texture handle for view to be rendered to texture
glGenTextures(1, &_testLabelTextureHandle);
glBindTexture(GL_TEXTURE_2D, _testLabelTextureHandle);
// these must be defined for non-mipmapped NPOT textures
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, _testLabel.bounds.size.width, _testLabel.bounds.size.height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
then in my render function...
GLubyte *labelPixelData = (GLubyte*)CGBitmapContextGetData(_testLabelContext);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, _testLabel.bounds.size.width, _testLabel.bounds.size.height, GL_RGBA, GL_UNSIGNED_BYTE, labelPixelData);
Don't you need to multiply the render width and height by the @2x/@3x scale factor? OpenGL works in actual pixels.
The answer from @soprof is correct. You do need to scale the content by the main screen's scale whenever you create a view snapshot, and you need to use the same scaled dimensions later when uploading the data to the texture. Simply creating a larger context is not enough.
This actually applies to every connection between OpenGL and a UIView. You need to understand that the coordinate system behaves as if 1x scale were used, which is very useful for view layout. Apple uses the contentScaleFactor property on UIView to scale the internal content, but that scaling is not visible in the frame or bounds properties. So, for instance, to get an image from a UIView you would do something like this:
+ (UIImage *)imageFromView:(UIView *)view
{
    CGRect rect = [view bounds];
    UIGraphicsBeginImageContextWithOptions(rect.size, NO, view.contentScaleFactor);
    CGContextRef context = UIGraphicsGetCurrentContext();
    [view.layer renderInContext:context];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
Note that the context is created with options, using the view's scale. But the problem does not stop there: the actual content scale factor is set when the view is added to the view hierarchy, so simply initializing a label is not enough for the scale factor to be correct (it defaults to 1.0f). You may either set the scale manually on the view, assign it from the screen using [UIScreen mainScreen].scale, insert that value directly when creating a context, or anything in between.
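For example, for the label from the question (a sketch; the class hosting imageFromView: is hypothetical):

// The label was created in code and never added to a window, so its scale defaults to 1.0f;
// set it explicitly before snapshotting:
_testLabel.contentScaleFactor = [UIScreen mainScreen].scale;
UIImage *labelImage = [ViewSnapshotter imageFromView:_testLabel]; // hypothetical class name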
In any case, if you modify your code to include the scale, you will at some point need the scaled frame, which is always:
rect.size.width *= view.contentScaleFactor;
rect.size.height *= view.contentScaleFactor;
You need to use this rect for all subsequent code, but do not change the frame of the view you are trying to draw, since the fonts will not scale with it. What might work, though, is applying a scale transform to the input view (if I remember correctly, the frame will scale as well, so even your code should work naturally).
As I mentioned, this applies to all such connections, so if you are creating a render buffer from the layer you also need to apply the scale there. renderbufferStorage:GL_RENDERBUFFER fromDrawable: will take the UIView's layer, but you must explicitly set the scale on the layer to get a correctly sized render buffer, using layer.contentsScale = [UIScreen mainScreen].scale. If you use this and fail to set the scale, the whole scene will render at 1x quality, and even fixing the label texture will not produce high enough quality.
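Concretely, a sketch of the render buffer setup with the scale applied (variable names are assumptions about your setup code):

CAEAGLLayer *eaglLayer = (CAEAGLLayer *)self.layer;
eaglLayer.contentsScale = [UIScreen mainScreen].scale; // set BEFORE allocating storage
glBindRenderbuffer(GL_RENDERBUFFER, _colorRenderbuffer);
[context renderbufferStorage:GL_RENDERBUFFER fromDrawable:eaglLayer];
// The render buffer now has pixel (not point) dimensions:
GLint backingWidth, backingHeight;
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_WIDTH, &backingWidth);
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_HEIGHT, &backingHeight);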

How to draw cropped bitmap using the Metal API or Accelerate Framework?

I'm implementing a custom video compositor that crops video frames. Currently I use Core Graphics to do this:
-(void)renderImage:(CGImageRef)image inBuffer:(CVPixelBufferRef)destination {
    CGRect cropRect = // some rect ...
    CGImageRef currentRectImage = CGImageCreateWithImageInRect(image, cropRect);

    // The base address must be locked before the CPU touches the pixel buffer
    CVPixelBufferLockBaseAddress(destination, 0);
    size_t width = CVPixelBufferGetWidth(destination);
    size_t height = CVPixelBufferGetHeight(destination);

    CGContextRef context = CGBitmapContextCreate(CVPixelBufferGetBaseAddress(destination), // data
                                                 width,
                                                 height,
                                                 8, // bits per component
                                                 CVPixelBufferGetBytesPerRow(destination),
                                                 CGImageGetColorSpace(image),
                                                 CGImageGetBitmapInfo(image));

    CGRect frame = CGRectMake(0, 0, width, height);
    CGContextDrawImage(context, frame, currentRectImage);

    CGContextRelease(context);
    CGImageRelease(currentRectImage);
    CVPixelBufferUnlockBaseAddress(destination, 0);
}
How can I use the Metal API to do this? It should be much faster, right?
What about using the Accelerate Framework (vImage specifically)? Would that be simpler?
Ok, I don't know wether this will be useful for you, but still.
Check out the following code:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

    id<MTLTexture> textureY = nil;
    {
        size_t width = CVPixelBufferGetWidth(pixelBuffer);
        size_t height = CVPixelBufferGetHeight(pixelBuffer);
        MTLPixelFormat pixelFormat = MTLPixelFormatBGRA8Unorm;

        CVMetalTextureRef texture = NULL;
        CVReturn status = CVMetalTextureCacheCreateTextureFromImage(NULL, _textureCache, pixelBuffer, NULL, pixelFormat, width, height, 0, &texture);
        if (status == kCVReturnSuccess)
        {
            textureY = CVMetalTextureGetTexture(texture);
            if (self.delegate) {
                [self.delegate textureUpdated:textureY];
            }
            CFRelease(texture);
        }
    }
}
I use this code to convert a CVPixelBufferRef into an MTLTexture. After that you should probably create a blit command encoder and use its
func copyFromTexture(sourceTexture: MTLTexture, sourceSlice: Int, sourceLevel: Int, sourceOrigin: MTLOrigin, sourceSize: MTLSize, toTexture destinationTexture: MTLTexture, destinationSlice: Int, destinationLevel: Int, destinationOrigin: MTLOrigin)
With it, you can select the cropped rectangle and copy it to some other texture.
The next step is to convert the generated MTLTextures back into CVPixelBufferRefs and then make a video out of that; unfortunately I don't know how to do that.
Would really like to hear what you came up with. Cheers.
Because it uses bare pointers and unencapsulated data, vImage would "crop" an image by simply moving the data pointer to the new top-left corner and reducing the height and width accordingly. You then have a vImage_Buffer that refers to a region in the middle of your image. Of course, you still need to export the content again as a file or copy it to something destined to draw to the screen. See, for example, vImageCreateCGImageFromBuffer().
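A sketch of that pointer arithmetic, assuming a 4-bytes-per-pixel buffer (src and cropRect are my names, not vImage API):

#import <Accelerate/Accelerate.h>

// Crop without copying: offset the data pointer and shrink the reported size.
vImage_Buffer cropped;
cropped.data     = (uint8_t *)src.data + (size_t)cropRect.origin.y * src.rowBytes
                                       + (size_t)cropRect.origin.x * 4; // 4 bytes per pixel (e.g. ARGB8888)
cropped.height   = (vImagePixelCount)cropRect.size.height;
cropped.width    = (vImagePixelCount)cropRect.size.width;
cropped.rowBytes = src.rowBytes; // unchanged: each row still spans the full original image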
CG can do this itself with CGImageCreateWithImageInRect()
Metal would do this either as a simple compute copy kernel, an MTLBlitCommandEncoder blit, or a 3D render pass applying the texture to a collection of triangles with an appropriate coordinate offset.
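For the blit route, a sketch of the copy (the command queue, textures, and crop values are assumptions):

id<MTLCommandBuffer> commandBuffer = [commandQueue commandBuffer];
id<MTLBlitCommandEncoder> blit = [commandBuffer blitCommandEncoder];
[blit copyFromTexture:sourceTexture
          sourceSlice:0
          sourceLevel:0
         sourceOrigin:MTLOriginMake(cropX, cropY, 0)     // top-left of the crop rect
           sourceSize:MTLSizeMake(cropWidth, cropHeight, 1)
            toTexture:destinationTexture
     destinationSlice:0
     destinationLevel:0
    destinationOrigin:MTLOriginMake(0, 0, 0)];
[blit endEncoding];
[commandBuffer commit];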

GLKView snapshot method: null return val, getting an error

I can't figure out how to use the GLKView:snapshot method.
I'm using a GLKView to render some OpenGL stuff. It all works; seems like I have it all set up correctly.
But, when I try to do a snapshot, it fails: I get a null return value, and the following log message:
Error: CGImageCreate: invalid image size: 0 x 0.
Seems like this would mean the view itself is invalid for some reason, but it's not -- everything is working, aside from this.
I've looked at a few code samples, and I'm not doing anything different.
So... anyone seen this before? Ideas?
I never figured out the above problem; however, I found an excellent workaround: this chunk of code simply reads the render buffer and saves it to a UIImage. Problem solved!
- (UIImage *)snapshotRenderBuffer {
    // Bind the color renderbuffer used to render the OpenGL ES view.
    // If your application only creates a single color renderbuffer which is already bound at this point,
    // this call is redundant, but it is needed if you're dealing with multiple renderbuffers.
    // Note: replace "viewRenderbuffer" with the actual name of the renderbuffer object defined in your class.
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);

    NSInteger dataLength = backingWidth * backingHeight * 4;
    GLubyte *data = (GLubyte *)malloc(dataLength * sizeof(GLubyte));

    // Read pixel data from the framebuffer
    glPixelStorei(GL_PACK_ALIGNMENT, 4);
    glReadPixels(0, 0, backingWidth, backingHeight, GL_RGBA, GL_UNSIGNED_BYTE, data);

    // Create a CGImage with the pixel data.
    // If your OpenGL ES content is opaque, use kCGImageAlphaNoneSkipLast to ignore the alpha channel;
    // otherwise, use kCGImageAlphaPremultipliedLast.
    CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, data, dataLength, NULL);
    CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
    CGImageRef iref = CGImageCreate(
        backingWidth, backingHeight, 8, 32, backingWidth * 4, colorspace,
        kCGBitmapByteOrder32Big | kCGImageAlphaNoneSkipLast,
        ref, NULL, true, kCGRenderingIntentDefault);

    // (sayeth abd)
    // This creates a context with the device pixel dimensions -- not points.
    // To be compatible with all devices, you're meant to keep everything as points and a scale factor; but
    // this gives us a scaled-down image for purposes of saving. So, keep everything in device resolution,
    // and worry about it later...
    UIGraphicsBeginImageContextWithOptions(CGSizeMake(backingWidth, backingHeight), NO, 0.0f);
    CGContextRef cgcontext = UIGraphicsGetCurrentContext();
    CGContextSetBlendMode(cgcontext, kCGBlendModeCopy);
    CGContextDrawImage(cgcontext, CGRectMake(0.0, 0.0, backingWidth, backingHeight), iref);

    // Retrieve the UIImage from the current context
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // Clean up
    CGImageRelease(iref);
    CGDataProviderRelease(ref);
    CGColorSpaceRelease(colorspace);
    free(data);

    return image;
}
Maybe this doesn't apply in your case, but the docs for GLKView:snapshot say:
Never call this method inside your drawing function.

Render Core Graphics to OpenGL texture on iOS

Core Graphics on iOS is very easy to use, but is it possible to get the output of Core Graphics and put it into OpenGL textures?
The final goal is to use CGContextDrawPDFPage to render PDFs very efficiently and write the output into a specific texture ID with
OpenGL.glBindTexture(GL_TEXTURE_2D, TextureNativeId);
It does look like CoreGraphics is not able to render directly into a specific "native texture id".
Yes, you can: render your Core Graphics content to a bitmap context and upload that to a texture. The following is code that I use to draw a UIImage into a Core Graphics context, but you could replace the CGContextDrawImage() portion with your own drawing code:
GLubyte *imageData = (GLubyte *)calloc(1, (int)pixelSizeOfImage.width * (int)pixelSizeOfImage.height * 4);
CGColorSpaceRef genericRGBColorspace = CGColorSpaceCreateDeviceRGB();
CGContextRef imageContext = CGBitmapContextCreate(imageData, (int)pixelSizeOfImage.width, (int)pixelSizeOfImage.height,
    8, (int)pixelSizeOfImage.width * 4, genericRGBColorspace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
CGContextDrawImage(imageContext, CGRectMake(0.0, 0.0, pixelSizeOfImage.width, pixelSizeOfImage.height), [newImageSource CGImage]);
CGContextRelease(imageContext);
CGColorSpaceRelease(genericRGBColorspace);

glBindTexture(GL_TEXTURE_2D, outputTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, (int)pixelSizeOfImage.width, (int)pixelSizeOfImage.height, 0, GL_BGRA, GL_UNSIGNED_BYTE, imageData);
free(imageData);
This assumes that you've created your texture using code like the following:
glActiveTexture(GL_TEXTURE0);
glGenTextures(1, &outputTexture);
glBindTexture(GL_TEXTURE_2D, outputTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
// This is necessary for non-power-of-two textures
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glBindTexture(GL_TEXTURE_2D, 0);
For rapidly changing content, you might want to look into iOS 5.0's texture caches (CVOpenGLESTextureCacheCreateTextureFromImage() and the like), which might let you render directly to the bytes for your texture. However, I've found that the overhead for creating and rendering to a texture with a texture cache makes this slightly slower for rendering a single image, so if you don't need to continually update this the code above is probably your fastest route.
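For reference, the texture-cache route mentioned above looks roughly like this (a sketch; eaglContext, pixelBuffer, width, and height are assumptions about your setup):

CVOpenGLESTextureCacheRef textureCache = NULL;
CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL, eaglContext, NULL, &textureCache);

// Wrap a CVPixelBufferRef (which Core Graphics can render into) as a GL texture:
CVOpenGLESTextureRef cvTexture = NULL;
CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, textureCache, pixelBuffer,
                                             NULL, GL_TEXTURE_2D, GL_RGBA,
                                             (GLsizei)width, (GLsizei)height,
                                             GL_BGRA, GL_UNSIGNED_BYTE, 0, &cvTexture);
glBindTexture(CVOpenGLESTextureGetTarget(cvTexture), CVOpenGLESTextureGetName(cvTexture));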