I'm porting my app to iOS, and while it makes the same GL calls for texture binding, the quality gets very poor when zoomed out (mipmaps are enabled). Is there something wrong with this texture-loading code?
EDIT: I'm starting to think this is actually a Retina display issue.
NSString *path = [[NSBundle mainBundle] pathForResource:filename ofType:@"jpg"];
NSData *texData = [[NSData alloc] initWithContentsOfFile:path];
UIImage *image = [[UIImage alloc] initWithData:texData];
if (image == nil)
    NSLog(@"Do real error checking here");
GLuint width = CGImageGetWidth(image.CGImage);
GLuint height = CGImageGetHeight(image.CGImage);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
void *imageData = malloc( height * width * 4 );
CGContextRef context = CGBitmapContextCreate( imageData, width, height, 8, 4 * width, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big );
// Flip the Y-axis
CGContextTranslateCTM (context, 0, height);
CGContextScaleCTM (context, 1.0, -1.0);
CGColorSpaceRelease( colorSpace );
CGContextClearRect( context, CGRectMake( 0, 0, width, height ) );
CGContextDrawImage( context, CGRectMake( 0, 0, width, height ), image.CGImage );
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, imageData);
CGContextRelease(context);
free(imageData);
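For completeness, the filtering/mipmap setup is not shown in the snippet above. A minimal sketch, assuming OpenGL ES 2.0 and that the texture object is already bound (on ES 1.1 the generate call is glGenerateMipmapOES):
// Trilinear minification; requires a complete mipmap chain.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
// Build the chain from the base level uploaded by glTexImage2D above.
glGenerateMipmap(GL_TEXTURE_2D);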
It was a Retina display issue. After some googling I came across this article, which explains that the default scale factor is 1, which is why the textures looked bad on my device.
http://developer.apple.com/library/ios/#documentation/2DDrawing/Conceptual/DrawingPrintingiOS/SupportingHiResScreens/SupportingHiResScreens.html
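In practice the fix boils down to something like this (a sketch; it assumes the GL view is a UIView subclass backed by a CAEAGLLayer, and that the renderbuffer storage is reallocated afterwards):
// contentScaleFactor defaults to 1.0, so the renderbuffer is allocated
// at non-Retina resolution unless you raise it to the screen scale.
self.contentScaleFactor = [[UIScreen mainScreen] scale];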
I am unable to render an image from an OpenGL context with a transparent background in Core Graphics; the rendered image has a black background.
This is the draw code:
GLint default_frame_buffer = 0;
glGetIntegerv(GL_FRAMEBUFFER_BINDING, &default_frame_buffer);
if (default_frame_buffer == 0) {
target = createBMGLRenderTarget(width, height);
setFiltering(target, BMGL_BilinearFiltering);
glBindFramebuffer(GL_FRAMEBUFFER, target->framebuffer);
}
glViewport(0, 0, width, height);
if (background) {
CGFloat red, green, blue, alpha;
[background getRed:&red green:&green blue:&blue alpha:&alpha];
glClearColor(red, green, blue, alpha);
} else {
glClearColor(0.f, 0.f, 0.f, 0.f);
}
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
drawer.draw();
glFlush();
glFinish();
GLubyte *data = (GLubyte *)malloc(width * height * 4);
glPixelStorei(GL_PACK_ALIGNMENT, 4);
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, data);
CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, data, width * height * 4, NULL);
CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
CGImageRef imgRef = CGImageCreate(width, height, 8, 32, width * 4, colorspace, kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast, ref, NULL, NO, kCGRenderingIntentDefault);
UIGraphicsBeginImageContextWithOptions(size, YES, 0.0);
CGContextRef cgContext = UIGraphicsGetCurrentContext();
CGContextSetBlendMode(cgContext, kCGBlendModeCopy);
CGContextDrawImage(cgContext, CGRectMake(0, 0, size.width, size.height), imgRef);
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
free(data);
CGDataProviderRelease(ref);
CGColorSpaceRelease(colorspace);
CGImageRelease(imgRef);
I have tried specifically setting opaque to false, as well as different blend modes, but it still adds a black background to the originally clear image. I am able to give the GLKView a transparent background, but rendering the image and then drawing its contents into a CGImage doesn't work.
Does anyone know why this is?
I believe your problem is in how you create the UIImage from the raw RGBA data. To confirm this, check what your data contains at a pixel you know to be transparent: data[pixelIndex * 4 + 3] should be zero wherever you expect transparency. If it is not, then the issue is on the OpenGL side.
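A quick way to run that check (a sketch; the (0, 0) coordinates are placeholders for a spot you know should be transparent):
// Inspect the alpha byte of one pixel in the glReadPixels buffer.
int x = 0, y = 0;
size_t pixelIndex = (size_t)y * width + x;
GLubyte alpha = data[pixelIndex * 4 + 3];
NSLog(@"alpha at (%d, %d) = %u (expected 0)", x, y, alpha);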
Anyway, the most probable reason your image is not transparent is that you are declaring the data as premultiplied with kCGImageAlphaPremultipliedLast. Try using kCGBitmapByteOrder32Big | kCGImageAlphaLast instead.
In my OpenGL project I need to convert a UIImage into a texture; what's the way to do it?
Can you help me?
I haven't tested the following, but I will decompose the conversion into 3 steps:
Extract info for your image:
UIImage* image = [UIImage imageNamed:@"imageToApplyAsATexture.png"];
CGImageRef imageRef = [image CGImage];
int width = CGImageGetWidth(imageRef);
int height = CGImageGetHeight(imageRef);
Allocate a textureData with the above properties:
GLubyte* textureData = (GLubyte *)malloc(width * height * 4); // 4 components per pixel (RGBA)
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
NSUInteger bytesPerPixel = 4;
NSUInteger bytesPerRow = bytesPerPixel * width;
NSUInteger bitsPerComponent = 8;
CGContextRef context = CGBitmapContextCreate(textureData, width, height,
bitsPerComponent, bytesPerRow, colorSpace,
kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colorSpace);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
CGContextRelease(context);
Set-up your texture:
GLuint textureID;
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glGenTextures(1, &textureID);
glBindTexture(GL_TEXTURE_2D, textureID);
// The default minification filter expects mipmaps; without this line (or a
// glGenerateMipmap call) the texture is incomplete and samples as black.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, textureData);
free(textureData); // glTexImage2D copies the pixels, so the buffer can be freed
EDIT:
Read this tutorial; everything is explained there, from converting an image to a texture to applying the texture in an iOS environment.
Here is a Swift version of getting a texture out of a UIImage:
func setupTexture(sourceImage: UIImage) -> GLuint {
guard let textureImage = sourceImage.cgImage else {
print("Failed to load image")
return 0
}
let width = textureImage.width
let height = textureImage.height
// One byte each for red, green, blue, and alpha: 4 bytes per pixel in total.
let textureData = calloc(width * height * 4, MemoryLayout<GLubyte>.size)
let spriteContext = CGContext(data: textureData,
width: width,
height: height,
bitsPerComponent: 8,
bytesPerRow: width * 4,
space: textureImage.colorSpace!,
bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
spriteContext?.draw(textureImage, in: CGRect(x: 0, y: 0, width: width, height: height))
var textName = GLuint()
glGenTextures(1, &textName)
glBindTexture(GLenum(GL_TEXTURE_2D), textName)
glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_MIN_FILTER), GL_NEAREST)
glTexImage2D(GLenum(GL_TEXTURE_2D), 0, GL_RGBA, GLsizei(width),
             GLsizei(height), 0, GLenum(GL_RGBA), GLenum(GL_UNSIGNED_BYTE), textureData)
free(textureData) // GL keeps its own copy of the pixels
return textName
}
Note: keep in mind that Core Graphics flips images when loading them.
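One way to compensate, as a sketch, is to flip the context vertically inside setupTexture before the draw call:
// Flip the context so the texture comes out right side up
// (insert before spriteContext?.draw(...)).
spriteContext?.translateBy(x: 0, y: CGFloat(height))
spriteContext?.scaleBy(x: 1, y: -1)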
Another way of doing this using the GLKit framework:
//Path to image
NSString *path = [[NSBundle mainBundle] pathForResource:@"textureImage" ofType:@"png"];
//Set eaglContext
[EAGLContext setCurrentContext:[[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2]];
//Create texture
NSError *theError;
GLKTextureInfo *texture = [GLKTextureLoader textureWithContentsOfFile:path options:nil error:&theError];
glBindTexture(texture.target, texture.name);
texture.name is the OpenGL context's name for the texture.
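GLKTextureLoader can also take care of the vertical flip that the Core Graphics approaches above handle manually; as a sketch, pass the GLKTextureLoaderOriginBottomLeft option:
NSDictionary *options = @{ GLKTextureLoaderOriginBottomLeft : @YES };
GLKTextureInfo *texture = [GLKTextureLoader textureWithContentsOfFile:path
                                                              options:options
                                                                error:&theError];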
I'm working on an Augmented Reality app for a client. The OpenGL and EAGL part has been done in Unity 3D and implemented as a view in my application.
What I need now is a button that snaps a screenshot of the OpenGL content, which is the backmost view.
I tried writing it myself, but when I click the button with the assigned IBAction, it only saves one quarter of the screen (the lower-left corner), though it does save it to the camera roll.
So basically, how can I make it save the entire screen size instead of just one fourth?
Here's my code for the method:
-(IBAction)tagBillede:(id)sender
{
UIImage *outputImage = nil;
CGRect s = CGRectMake(0, 0, 320, 480);
uint8_t *buffer = (uint8_t *) malloc(s.size.width * s.size.height * 4);
if (!buffer) goto error;
glReadPixels(0, 0, s.size.width, s.size.height, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, buffer, s.size.width * s.size.height * 4, NULL);
if (!ref) goto error;
CGImageRef iref = CGImageCreate(s.size.width, s.size.height, 8, 32, s.size.width * 4, CGColorSpaceCreateDeviceRGB(), kCGBitmapByteOrderDefault, ref, NULL, true, kCGRenderingIntentDefault);
if (!iref) goto error;
size_t width = CGImageGetWidth(iref);
size_t height = CGImageGetHeight(iref);
size_t length = width * height * 4;
uint32_t *pixels = (uint32_t *)malloc(length);
if (!pixels) goto error;
CGContextRef context = CGBitmapContextCreate(pixels, width, height, 8, width * 4,
CGImageGetColorSpace(iref), kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrder32Big);
if (!context) goto error;
CGAffineTransform transform = CGAffineTransformIdentity;
transform = CGAffineTransformMakeTranslation(0.0f, height);
transform = CGAffineTransformScale(transform, 1.0, -1.0);
CGContextConcatCTM(context, transform);
CGContextDrawImage(context, CGRectMake(0.0f, 0.0f, width, height), iref);
CGImageRef outputRef = CGBitmapContextCreateImage(context);
if (!outputRef) goto error;
outputImage = [UIImage imageWithCGImage: outputRef];
if (!outputImage) goto error;
CGDataProviderRelease(ref);
CGImageRelease(iref);
CGContextRelease(context);
CGImageRelease(outputRef);
free(pixels);
free(buffer);
UIImageWriteToSavedPhotosAlbum(outputImage, self, @selector(image:didFinishSavingWithError:contextInfo:), nil);
return;
error:
NSLog(@"Screenshot failed"); // cleanup of partial allocations omitted, as in the original
}
I suspect you are using a device with a Retina display, which is 640x960. You need to take the screen scale into account; it is 1.0 on non-Retina displays and 2.0 on Retina displays. Try initializing s like this:
CGFloat scale = UIScreen.mainScreen.scale;
CGRect s = CGRectMake(0, 0, 320 * scale, 480 * scale);
If the device has a Retina display, you need to scale the OpenGL dimensions yourself. By capturing only half the width and half the height, you're effectively asking for just the lower-left quarter of the screen.
You need to double both your width and height for Retina screens, but realistically you should multiply by the screen's scale:
CGFloat scale = [[UIScreen mainScreen] scale];
CGRect s = CGRectMake(0, 0, 320.0f * scale, 480.0f * scale);
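Rather than hard-coding 320x480 at all, you can ask the renderbuffer for its actual size (a sketch; it assumes the color renderbuffer is currently bound and uses the ES 2.0 names; ES 1.1 has _OES variants):
GLint backingWidth = 0, backingHeight = 0;
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_WIDTH, &backingWidth);
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_HEIGHT, &backingHeight);
CGRect s = CGRectMake(0, 0, backingWidth, backingHeight);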
Thought I'd chime in and, at the same time, throw out some gratitude :)
I got it working like a charm now; here's the cleaned-up code:
UIImage *outputImage = nil;
CGFloat scale = [[UIScreen mainScreen] scale];
CGRect s = CGRectMake(0, 0, 320.0f * scale, 480.0f * scale);
uint8_t *buffer = (uint8_t *) malloc(s.size.width * s.size.height * 4);
glReadPixels(0, 0, s.size.width, s.size.height, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, buffer, s.size.width * s.size.height * 4, NULL);
CGImageRef iref = CGImageCreate(s.size.width, s.size.height, 8, 32, s.size.width * 4, CGColorSpaceCreateDeviceRGB(), kCGBitmapByteOrderDefault, ref, NULL, true, kCGRenderingIntentDefault);
size_t width = CGImageGetWidth(iref);
size_t height = CGImageGetHeight(iref);
size_t length = width * height * 4;
uint32_t *pixels = (uint32_t *)malloc(length);
CGContextRef context = CGBitmapContextCreate(pixels, width, height, 8, width * 4,
CGImageGetColorSpace(iref), kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrder32Big);
CGAffineTransform transform = CGAffineTransformIdentity;
transform = CGAffineTransformMakeTranslation(0.0f, height);
transform = CGAffineTransformScale(transform, 1.0, -1.0);
CGContextConcatCTM(context, transform);
CGContextDrawImage(context, CGRectMake(0.0f, 0.0f, width, height), iref);
CGImageRef outputRef = CGBitmapContextCreateImage(context);
outputImage = [UIImage imageWithCGImage: outputRef];
CGDataProviderRelease(ref);
CGImageRelease(iref);
CGContextRelease(context);
CGImageRelease(outputRef);
free(pixels);
free(buffer);
UIImageWriteToSavedPhotosAlbum(outputImage, nil, nil, nil);
I am trying to read a UIImage into a texture on the iOS platform. I found a code snippet on Stack Overflow that does the trick, but the problem is that when I display the texture, it comes out mirrored upside down.
int numComponents = 4;
UIImage* image = [UIImage imageNamed:@"Test.png"];
CGImageRef imageRef = [image CGImage];
int width = CGImageGetWidth(imageRef);
int height = CGImageGetHeight(imageRef);
//Allocate texture data
GLubyte* textureData = (GLubyte *)malloc(width * height * numComponents);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
NSUInteger bytesPerPixel = 4;
NSUInteger bytesPerRow = bytesPerPixel * width;
NSUInteger bitsPerComponent = 8;
CGContextRef context = CGBitmapContextCreate(textureData, width, height,
bitsPerComponent, bytesPerRow, colorSpace,
kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colorSpace);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
CGContextRelease(context);
//texture setup
GLuint textureID;
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glGenTextures(1, &textureID);
glBindTexture(GL_TEXTURE_2D, textureID);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, textureData);
I also tried to mirror the UIImage using the following line (before reading the data), but it's not working either; in fact, it has no effect whatsoever.
image = [UIImage imageWithCGImage:image.CGImage scale:image.scale orientation:UIImageOrientationUpMirrored];
Please let me know what I am doing wrong here. Thank you.
These two lines fixed the problem. (The UIImageOrientation approach has no effect because CGContextDrawImage draws the underlying CGImage directly and ignores UIKit orientation metadata.)
CGAffineTransform flipVertical = CGAffineTransformMake(1, 0, 0, -1, 0, height);
CGContextConcatCTM(context, flipVertical);
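In context, a sketch of the ordering: the flip goes after creating the bitmap context and before drawing into it.
CGContextRef context = CGBitmapContextCreate(textureData, width, height,
                                             bitsPerComponent, bytesPerRow, colorSpace,
                                             kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
// Flip the context vertically so the texture is not upside down.
CGAffineTransform flipVertical = CGAffineTransformMake(1, 0, 0, -1, 0, height);
CGContextConcatCTM(context, flipVertical);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);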
I've been challenged with the task of putting an oddly sized image (with fixed proportions, though) on a GL_QUAD (well, a GL_TRIANGLE_STRIP resem--you get the point), and that seemed fairly easy to me at first, except for the part where I need to do this on iOS (4.2+). The solution is awkwardly easy anyway: just take the image, make a texture out of it, map it to the correct vertices, and you're good to go.
As you may very well know, OpenGL ES textures are required to have width and height that are powers of 2, like 2, 4, 8, ..., 256, 512... (not sure this holds for regular OpenGL, but I think it does... anyway, it doesn't matter).
Since I have to download these images from the Intertubes (actually, from YouTube), I can't really do anything beforehand, so I have these 480x360 images (if I remember correctly) and I have to splat them onto my triangle strips. Fortunately we have texture mapping, which allows us to select portions of the texture to be mapped where we want, so the obvious solution would be to (optionally up/downsize and) pad the source image with some matte color, and live with it.
Enter iOS. I get the data from the Intertubes, I happily build the corresponding UIImage, then I make another UIImage (yes, I know, bear with me, I'll optimize it later) scaled down to the nearest power of two in width, preserving aspect ratio, so let's say 256x192. Then I make a bitmap context, paint it black (or, for that matter, any other colour, but I think you can see why I chose black in this case), draw the UIImage (a CGImage) on it, and return the UIImage built from the aforementioned bitmap context.
I am now the happy owner of a 256x256 image ready to be mapped onto my GL_TRIANGLE_STRIP. Except that it does not work. I tried with a prepared 512x512 image and it worked flawlessly. The code I'm pasting here does not include the retrieval of the image from YouTube; I just saved it locally to rule out networking problems. Also, I'm not including the GL code, as it's clearly working.
- (void)viewDidLoad {
images = [[NSMutableArray alloc] init];
//NSURL *url = [NSURL URLWithString:@"http://i.ytimg.com/vi/d2wVgzXWE9Y/0.jpg"];
NSString *path = [[NSBundle mainBundle] pathForResource:@"opengl_texture" ofType:@"jpg"];
NSData *texData = [NSData dataWithContentsOfFile:path];
UIImage *rawImage = [[UIImage alloc] initWithData:texData];
float newWidth = (float)(1 << (int)floor(log2f(rawImage.size.width)));
// Scale means the scale of the current image relative to the resulting image.
float scale = rawImage.size.width / newWidth;
UIImage *midImage = [UIImage imageWithCGImage:[rawImage CGImage] scale:scale orientation:UIImageOrientationUp];
NSLog(@"%f %f %f", midImage.size.width, midImage.size.height, scale);
[rawImage release];
UIImage *image = [self padImage:midImage withColor:[UIColor redColor]];
NSLog(@"%f %f", image.size.width, image.size.height);
[images addObject:image];
textures = malloc(sizeof(GLuint));
glGenTextures(1, textures);
glBindTexture(GL_TEXTURE_2D, textures[0]);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_LINEAR);
GLuint width = CGImageGetWidth(image.CGImage);
GLuint height = CGImageGetHeight(image.CGImage);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
void *imageData = malloc(width * height * 4);
CGContextRef context = CGBitmapContextCreate(imageData, width, height, 8, 4*width, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease( colorSpace );
CGContextClearRect( context, CGRectMake( 0, 0, width, height ) );
CGContextTranslateCTM( context, 0, height - height );
CGContextDrawImage( context, CGRectMake( 0, 0, width, height ), image.CGImage );
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, imageData);
CGContextRelease(context);
free(imageData);
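// Heads-up from the resolution below: the three releases that follow are
// the bug. midImage, image, and texData come from implicit (autoreleasing)
// constructors, so they must not be released manually.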
[midImage release];
[image release];
[texData release];
}
- (UIImage *)padImage:(UIImage *)image withColor:(UIColor *)color {
CGFloat size = round(image.size.width);
NSLog(@"%f", size);
CGContextRef bContext = [self createBitmapContextOfSize:CGSizeMake(size, size)];
CGContextSetFillColorWithColor(bContext, [color CGColor]);
CGContextFillRect(bContext, CGRectMake(0, 0, size, size));
CGContextDrawImage(bContext, CGRectMake(0, 0, size, size), [image CGImage]);
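// Side note: the CGImageRef returned by CGBitmapContextCreateImage below is
// never released, and neither is the context's backing bitmapData; both leak.
// (Not the crash discussed in the resolution, but worth knowing.)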
UIImage *result = [UIImage imageWithCGImage:CGBitmapContextCreateImage(bContext)];
CGContextRelease(bContext);
return result;
}
- (CGContextRef) createBitmapContextOfSize:(CGSize) size {
CGContextRef context = NULL;
CGColorSpaceRef colorSpace;
void * bitmapData;
int bitmapByteCount;
int bitmapBytesPerRow;
bitmapBytesPerRow = (size.width * 4);
bitmapByteCount = (bitmapBytesPerRow * size.height);
colorSpace = CGColorSpaceCreateDeviceRGB();
bitmapData = malloc( bitmapByteCount );
if (bitmapData == NULL) {
fprintf (stderr, "Memory not allocated!");
return NULL;
}
context = CGBitmapContextCreate (bitmapData,
size.width,
size.height,
8, // bits per component
bitmapBytesPerRow,
colorSpace,
kCGImageAlphaPremultipliedLast);
if (context == NULL) {
    free(bitmapData);
    fprintf(stderr, "Context not created!");
    return NULL;
}
CGContextSetAllowsAntialiasing(context, NO);
CGColorSpaceRelease( colorSpace );
return context;
}
Please don't bother mentioning obvious memory management issues unless you think they are the core of the problem. As for the "error message" or whatever: no, there's no such thing, the whole app just crashes.
OK, now you can collectively smack my face with a large trout.
The problem was actually memory management; specifically, I was releasing objects that were created with implicit methods (namely midImage and texData). Implicit (convenience) constructors return autoreleased objects and do not transfer ownership, while explicit creation (alloc+init and friends) does, along with the duty to release. How many times have I crashed into this already? Lots. Was that enough? Obviously not.
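For anyone landing here, the ownership rule in two lines (pre-ARC; path is illustrative):
// You own what you alloc/init (or copy/new/retain), so you release it:
NSData *owned = [[NSData alloc] initWithContentsOfFile:path];
[owned release];
// Convenience constructors hand back autoreleased objects: do NOT release them.
NSData *borrowed = [NSData dataWithContentsOfFile:path];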
Second question: where can I find a large post-it, like 1x1m at least?