Convert a UIImage to a texture - iOS

In my OpenGL project I need to convert a UIImage into a texture; what's the way to do it?
Can you help me?

I haven't tested the following, but I would decompose the conversion into three steps.
Extract the info from your image:
UIImage* image = [UIImage imageNamed:@"imageToApplyAsATexture.png"];
CGImageRef imageRef = [image CGImage];
int width = CGImageGetWidth(imageRef);
int height = CGImageGetHeight(imageRef);
Allocate a textureData buffer with the above properties:
GLubyte* textureData = (GLubyte *)malloc(width * height * 4); // if 4 components per pixel (RGBA)
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
NSUInteger bytesPerPixel = 4;
NSUInteger bytesPerRow = bytesPerPixel * width;
NSUInteger bitsPerComponent = 8;
CGContextRef context = CGBitmapContextCreate(textureData, width, height,
                                             bitsPerComponent, bytesPerRow, colorSpace,
                                             kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colorSpace);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
CGContextRelease(context);
Set up your texture:
GLuint textureID;
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glGenTextures(1, &textureID);
glBindTexture(GL_TEXTURE_2D, textureID);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, textureData);
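One caveat the answer leaves out: OpenGL ES's default minification filter expects mipmaps, so a texture created exactly as above can sample as black until you set a filter, and the malloc'd buffer is never released. A minimal follow-up sketch, assuming mipmaps are not needed:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); // non-mipmapped filter so the texture is "complete"
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
free(textureData); // glTexImage2D has copied the pixels, so the CPU-side buffer can go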
EDIT:
Read this tutorial; it explains everything, from converting an image to a texture to applying the texture, in an iOS environment.

Here is a Swift version of getting a texture out of a UIImage:
func setupTexture(sourceImage: UIImage) -> GLuint {
    guard let textureImage = sourceImage.cgImage else {
        print("Failed to load image")
        return 0
    }
    let width = textureImage.width
    let height = textureImage.height
    // One byte each for red, green, blue, and alpha – 4 bytes per pixel in total.
    let textureData = calloc(width * height * 4, MemoryLayout<GLubyte>.size)
    let spriteContext = CGContext(data: textureData,
                                  width: width,
                                  height: height,
                                  bitsPerComponent: 8,
                                  bytesPerRow: width * 4,
                                  space: textureImage.colorSpace!,
                                  bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
    spriteContext?.draw(textureImage, in: CGRect(x: 0, y: 0, width: width, height: height))
    var textName = GLuint()
    glGenTextures(1, &textName)
    glBindTexture(GLenum(GL_TEXTURE_2D), textName)
    glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_MIN_FILTER), GL_NEAREST)
    glTexImage2D(GLenum(GL_TEXTURE_2D), 0, GL_RGBA, GLsizei(width),
                 GLsizei(height), 0, GLenum(GL_RGBA), GLenum(GL_UNSIGNED_BYTE), textureData)
    // glTexImage2D copies the pixels, so the buffer can be freed here.
    free(textureData)
    return textName
}
Note: keep in mind that Core Graphics flips images when it loads them, so the texture can come out upside down.
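If that flip is a problem, one common workaround is to mirror the context vertically before drawing. A minimal sketch using the names from the Objective-C snippet above (Swift's CGContext has the equivalent translateBy/scaleBy methods):
CGContextTranslateCTM(context, 0, height); // flip the Y axis so the drawn image ends up right side up
CGContextScaleCTM(context, 1.0, -1.0);
// ...then call CGContextDrawImage as before.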

Another way of doing this using the GLKit framework:
//Path to image
NSString *path = [[NSBundle mainBundle] pathForResource:@"textureImage" ofType:@"png"];
//Set eaglContext
[EAGLContext setCurrentContext:[[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2]];
//Create texture
NSError *theError;
GLKTextureInfo *texture = [GLKTextureLoader textureWithContentsOfFile:path options:nil error:&theError];
glBindTexture(texture.target, texture.name);
texture.name is the OpenGL context's name for the texture.
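If the GLKit texture also comes out flipped, GLKTextureLoader can handle that for you via its options dictionary. A small variant of the call above (same path variable, error handling added for illustration):
NSDictionary *options = @{ GLKTextureLoaderOriginBottomLeft : @YES }; // flip so the origin matches OpenGL's convention
GLKTextureInfo *texture = [GLKTextureLoader textureWithContentsOfFile:path options:options error:&theError];
if (texture == nil) {
    NSLog(@"Texture load failed: %@", theError);
}
glBindTexture(texture.target, texture.name);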

Related

Create CVPixelBuffer with pixel data, but the final image is distorted

I get pixels via an OpenGL ES call (glReadPixels) or another way, then create a CVPixelBuffer (with or without a CGImage) for video recording, but the final picture is distorted. It happens on the iPhone 6; I have tested on iPhone 5c, 5s and 6.
Here is the code:
CGSize viewSize=self.glView.bounds.size;
NSInteger myDataLength = viewSize.width * viewSize.height * 4;
// allocate array and read pixels into it.
GLubyte *buffer = (GLubyte *) malloc(myDataLength);
glReadPixels(0, 0, viewSize.width, viewSize.height, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
// gl renders "upside down" so swap top to bottom into new array.
// there's gotta be a better way, but this works.
GLubyte *buffer2 = (GLubyte *) malloc(myDataLength);
for(int y = 0; y < viewSize.height; y++)
{
for(int x = 0; x < viewSize.width* 4; x++)
{
buffer2[(int)((viewSize.height-1 - y) * viewSize.width * 4 + x)] = buffer[(int)(y * 4 * viewSize.width + x)];
}
}
free(buffer);
// make data provider with data.
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer2, myDataLength, NULL);
// prep the ingredients
int bitsPerComponent = 8;
int bitsPerPixel = 32;
int bytesPerRow = 4 * viewSize.width;
CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
// make the cgimage
CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
CGImageRef imageRef = CGImageCreate(viewSize.width , viewSize.height, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);
//UIImage *photo = [UIImage imageWithCGImage:imageRef];
int width = CGImageGetWidth(imageRef);
int height = CGImageGetHeight(imageRef);
CVPixelBufferRef pixelBuffer = NULL;
CVReturn status = CVPixelBufferPoolCreatePixelBuffer(NULL, _recorder.pixelBufferAdaptor.pixelBufferPool, &pixelBuffer);
NSAssert((status == kCVReturnSuccess && pixelBuffer != NULL), @"create pixel buffer failed.");
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
void *pxdata = CVPixelBufferGetBaseAddress(pixelBuffer);
NSParameterAssert(pxdata != NULL);//CGContextRef
CGContextRef context = CGBitmapContextCreate(pxdata,
width,
height,
CGImageGetBitsPerComponent(imageRef),
CGImageGetBytesPerRow(imageRef),
colorSpaceRef,
kCGImageAlphaPremultipliedLast);
NSParameterAssert(context);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
CGColorSpaceRelease(colorSpaceRef);
CGContextRelease(context);
CGImageRelease(imageRef);
free(buffer2);
//CIImage *image = [CIImage imageWithCVPixelBuffer:pixelBuffer];
// ...
CVPixelBufferRelease(pixelBuffer);
NOTE: this answer addresses the overall problem with the image, not the specific code.
This sort of problem is usually down to the 'stride', i.e. the memory layout used to hold the image, where each row of pixels is not packed tightly against the next.
As an example, the source image may be 240 pixels wide.
The CVPixelBuffer may allocate 320 pixels for each row, where the first 240 pixels hold the image and the extra 80 pixels are padding.
In this case the width is 240 pixels and the stride is 320 pixels.
A stride usually means you have to copy each row of pixels over one at a time in a loop, as sketched below.
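A rough sketch of that row-by-row copy (hypothetical names: srcPixels is a tightly packed RGBA buffer with the same width and height as the pixel buffer):
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
uint8_t *dst = (uint8_t *)CVPixelBufferGetBaseAddress(pixelBuffer);
size_t dstBytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer); // the stride, possibly > width * 4
size_t srcBytesPerRow = width * 4;                                // tightly packed source rows
for (int row = 0; row < height; row++) {
    // Copy only the meaningful bytes of each row; the padding at the end of each destination row is left alone.
    memcpy(dst + row * dstBytesPerRow, srcPixels + row * srcBytesPerRow, srcBytesPerRow);
}
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);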
Use this size everywhere in your code:
int width_16 = (int)yourImage.size.width - (int)yourImage.size.width%16;
int height_ = (int)(yourImage.size.height/yourImage.size.width * width_16) ;
CGSize video_size_ = CGSizeMake(width_16, height_);
I had the same problem and I think the solution is the following:
Try changing CGImageGetBytesPerRow(imageRef) to CVPixelBufferGetBytesPerRow(pixelBuffer) in the CGBitmapContextCreate call. The reason is that your context is backed by the raw data of the pixel buffer you created, not by the CGImage you are drawing. A CVPixelBuffer's bytes-per-row count may be greater than bytes per pixel * pixel buffer width.
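In other words, the suggestion amounts to something like this (only the bytes-per-row argument changes relative to the question's code):
CGContextRef context = CGBitmapContextCreate(pxdata,
                                             width,
                                             height,
                                             CGImageGetBitsPerComponent(imageRef),
                                             CVPixelBufferGetBytesPerRow(pixelBuffer), // the buffer's own stride, not the CGImage's
                                             colorSpaceRef,
                                             kCGImageAlphaPremultipliedLast);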

Landscape screenshot returns portrait mode screen

I'm taking an OpenGL screenshot. If the camera is in portrait mode and I take the snapshot, I get a portrait-mode image. But if I rotate the camera from portrait into landscape mode and then take a screenshot, it still returns a portrait-mode screenshot. My camera view shows the live stream in full-screen mode, yet the screenshot is saved at 1024x768.
ImageTargetsEAGLView.mm:
- (BOOL)presentFramebuffer
{
    if (_takePhotoFlag1)
    {
        UIImage *screenshot = [self glToUIImage1];
        UIImageWriteToSavedPhotosAlbum(screenshot, nil, nil, nil);
        NSLog(@"Screenshot size: %d, %d", (int)screenshot.size.width, (int)screenshot.size.height);
        _takePhotoFlag1 = NO;
    }
// setFramebuffer must have been called before presentFramebuffer, therefore
// we know the context is valid and has been set for this (render) thread
// Bind the colour render buffer and present it
glBindRenderbuffer(GL_RENDERBUFFER, colorRenderbuffer);
return [context presentRenderbuffer:GL_RENDERBUFFER];
}
- (UIImage*) glToUIImage1
{
UIImage *outputImage = nil;
UIInterfaceOrientation orientation = [UIApplication sharedApplication].statusBarOrientation;
if (UIInterfaceOrientationIsLandscape(orientation))
{
NSLog(@"landscape screen");
CGRect screenBounds = [[UIScreen mainScreen] bounds];
// CGFloat scale = [[UIScreen mainScreen] scale];
// CGRect s = CGRectMake(0, 0, 320.0f * scale, 480.0f * scale);
CGRect s = CGRectMake(0, 0, 1024 , 768);
uint8_t *buffer = (uint8_t *) malloc(s.size.width * s.size.height * 4);
glReadPixels(0, 0, s.size.width, s.size.height, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, buffer, s.size.width * s.size.height * 4, NULL);
CGImageRef iref = CGImageCreate(s.size.width, s.size.height, 8, 32, s.size.width * 4, CGColorSpaceCreateDeviceRGB(), kCGBitmapByteOrderDefault, ref, NULL, true, kCGRenderingIntentDefault);
size_t width = CGImageGetWidth(iref);
size_t height = CGImageGetHeight(iref);
size_t length = width * height * 4;
uint32_t *pixels = (uint32_t *)malloc(length);
CGContextRef context1 = CGBitmapContextCreate(pixels, width, height, 8, width * 4,
CGImageGetColorSpace(iref), kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrder32Big);
CGAffineTransform transform = CGAffineTransformIdentity;
transform = CGAffineTransformMakeTranslation(0.0f, height);
transform = CGAffineTransformScale(transform, 1.0, -1.0);
CGContextConcatCTM(context1, transform);
CGContextDrawImage(context1, CGRectMake(0.0f, 0.0f, width, height), iref);
CGImageRef outputRef = CGBitmapContextCreateImage(context1);
outputImage = [UIImage imageWithCGImage: outputRef];
CGDataProviderRelease(ref);
CGImageRelease(iref);
CGContextRelease(context1);
CGImageRelease(outputRef);
free(pixels);
free(buffer);
}else{
NSLog(@"portrait screen");
// CGFloat scale = [[UIScreen mainScreen] scale];
// CGRect s = CGRectMake(0, 0, 320.0f * scale, 480.0f * scale);
CGRect s = CGRectMake(0, 0, 768, 1024);
uint8_t *buffer = (uint8_t *) malloc(s.size.width * s.size.height * 4);
glReadPixels(0, 0, s.size.width, s.size.height, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, buffer, s.size.width * s.size.height * 4, NULL);
CGImageRef iref = CGImageCreate(s.size.width, s.size.height, 8, 32, s.size.width * 4, CGColorSpaceCreateDeviceRGB(), kCGBitmapByteOrderDefault, ref, NULL, true, kCGRenderingIntentDefault);
size_t width = CGImageGetWidth(iref);
size_t height = CGImageGetHeight(iref);
size_t length = width * height * 4;
uint32_t *pixels = (uint32_t *)malloc(length);
CGContextRef context1 = CGBitmapContextCreate(pixels, width, height, 8, width * 4,
CGImageGetColorSpace(iref), kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrder32Big);
CGAffineTransform transform = CGAffineTransformIdentity;
transform = CGAffineTransformMakeTranslation(0.0f, height);
transform = CGAffineTransformScale(transform, 1.0, -1.0);
CGContextConcatCTM(context1, transform);
CGContextDrawImage(context1, CGRectMake(0.0f, 0.0f, width, height), iref);
CGImageRef outputRef = CGBitmapContextCreateImage(context1);
outputImage = [UIImage imageWithCGImage: outputRef];
CGDataProviderRelease(ref);
CGImageRelease(iref);
CGContextRelease(context1);
CGImageRelease(outputRef);
free(pixels);
free(buffer);
}
return outputImage;
}
I don't get the same issue with your code that you do, but I do get artifacting, etc. I would suggest using the method found here: http://www.unagames.com/blog/daniele/2011/10/opengl-es-screenshots-ios
It worked perfectly for me as a drop-in replacement for yours (if you are using a GLKViewController, you just pass it self.view as the EAGL view), and it actually pulls the correct size for the screenshot from the OpenGL ES context, so you know it's always correct.

glReadPixels only saves 1/4 screen size snapshots

I'm working on an Augmented Reality app for a client. The OpenGL and EAGL part has been done in Unity 3D and implemented into a view in my application.
What I need now is a button that snaps a screenshot of the OpenGL content, which is the backmost view.
I tried writing it myself, but when I tap the button with the assigned IBAction, it only saves 1/4 of the screen (the lower-left corner), though it does save it to the camera roll.
So basically, how can I make it save the entire screen size instead of just one fourth?
Here's my code for the method:
-(IBAction)tagBillede:(id)sender
{
UIImage *outputImage = nil;
CGRect s = CGRectMake(0, 0, 320, 480);
uint8_t *buffer = (uint8_t *) malloc(s.size.width * s.size.height * 4);
if (!buffer) goto error;
glReadPixels(0, 0, s.size.width, s.size.height, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, buffer, s.size.width * s.size.height * 4, NULL);
if (!ref) goto error;
CGImageRef iref = CGImageCreate(s.size.width, s.size.height, 8, 32, s.size.width * 4, CGColorSpaceCreateDeviceRGB(), kCGBitmapByteOrderDefault, ref, NULL, true, kCGRenderingIntentDefault);
if (!iref) goto error;
size_t width = CGImageGetWidth(iref);
size_t height = CGImageGetHeight(iref);
size_t length = width * height * 4;
uint32_t *pixels = (uint32_t *)malloc(length);
if (!pixels) goto error;
CGContextRef context = CGBitmapContextCreate(pixels, width, height, 8, width * 4,
CGImageGetColorSpace(iref), kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrder32Big);
if (!context) goto error;
CGAffineTransform transform = CGAffineTransformIdentity;
transform = CGAffineTransformMakeTranslation(0.0f, height);
transform = CGAffineTransformScale(transform, 1.0, -1.0);
CGContextConcatCTM(context, transform);
CGContextDrawImage(context, CGRectMake(0.0f, 0.0f, width, height), iref);
CGImageRef outputRef = CGBitmapContextCreateImage(context);
if (!outputRef) goto error;
outputImage = [UIImage imageWithCGImage: outputRef];
if (!outputImage) goto error;
CGDataProviderRelease(ref);
CGImageRelease(iref);
CGContextRelease(context);
CGImageRelease(outputRef);
free(pixels);
free(buffer);
UIImageWriteToSavedPhotosAlbum(outputImage, self, @selector(image:didFinishSavingWithError:contextInfo:), nil);
}
I suspect you are using a device with a Retina display, which is 640x960. You need to take the screen scale into account; it is 1.0 on non-Retina displays and 2.0 on Retina displays. Try initializing s like this:
CGFloat scale = UIScreen.mainScreen.scale;
CGRect s = CGRectMake(0, 0, 320 * scale, 480 * scale);
If the device is a Retina device, you need to scale the OpenGL dimensions yourself. By only capturing half the width and half the height, you're effectively asking for just the lower-left quarter.
You need to double both your width and height for Retina screens, but realistically you should multiply them by the screen's scale:
CGFloat scale = [[UIScreen mainScreen] scale];
CGRect s = CGRectMake(0, 0, 320.0f * scale, 480.0f * scale);
Thought I'd chime in and, at the same time, throw out some gratitude :)
I've got it working like a charm now; here's the cleaned-up code:
UIImage *outputImage = nil;
CGFloat scale = [[UIScreen mainScreen] scale];
CGRect s = CGRectMake(0, 0, 320.0f * scale, 480.0f * scale);
uint8_t *buffer = (uint8_t *) malloc(s.size.width * s.size.height * 4);
glReadPixels(0, 0, s.size.width, s.size.height, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, buffer, s.size.width * s.size.height * 4, NULL);
CGImageRef iref = CGImageCreate(s.size.width, s.size.height, 8, 32, s.size.width * 4, CGColorSpaceCreateDeviceRGB(), kCGBitmapByteOrderDefault, ref, NULL, true, kCGRenderingIntentDefault);
size_t width = CGImageGetWidth(iref);
size_t height = CGImageGetHeight(iref);
size_t length = width * height * 4;
uint32_t *pixels = (uint32_t *)malloc(length);
CGContextRef context = CGBitmapContextCreate(pixels, width, height, 8, width * 4,
CGImageGetColorSpace(iref), kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrder32Big);
CGAffineTransform transform = CGAffineTransformIdentity;
transform = CGAffineTransformMakeTranslation(0.0f, height);
transform = CGAffineTransformScale(transform, 1.0, -1.0);
CGContextConcatCTM(context, transform);
CGContextDrawImage(context, CGRectMake(0.0f, 0.0f, width, height), iref);
CGImageRef outputRef = CGBitmapContextCreateImage(context);
outputImage = [UIImage imageWithCGImage: outputRef];
CGDataProviderRelease(ref);
CGImageRelease(iref);
CGContextRelease(context);
CGImageRelease(outputRef);
free(pixels);
free(buffer);
UIImageWriteToSavedPhotosAlbum(outputImage, nil, nil, nil);

Texture quality is poor when zoomed out

I'm porting my app to iOS, and while it makes the same GL calls for texture binding, the quality gets very poor when zoomed out (mipmaps are enabled). Is there something wrong with this texture-loading code?
EDIT: I'm starting to think this is actually a Retina display issue.
NSString *path = [[NSBundle mainBundle] pathForResource:filename ofType:@"jpg"];
NSData *texData = [[NSData alloc] initWithContentsOfFile:path];
UIImage *image = [[UIImage alloc] initWithData:texData];
if (image == nil)
NSLog(@"Do real error checking here");
GLuint width = CGImageGetWidth(image.CGImage);
GLuint height = CGImageGetHeight(image.CGImage);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
void *imageData = malloc( height * width * 4 );
CGContextRef context = CGBitmapContextCreate( imageData, width, height, 8, 4 * width, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big );
// Flip the Y-axis
CGContextTranslateCTM (context, 0, height);
CGContextScaleCTM (context, 1.0, -1.0);
CGColorSpaceRelease( colorSpace );
CGContextClearRect( context, CGRectMake( 0, 0, width, height ) );
CGContextDrawImage( context, CGRectMake( 0, 0, width, height ), image.CGImage );
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, imageData);
CGContextRelease(context);
free(imageData);
It was a Retina display issue. After some googling I came to this article, which explains that the default scale factor is 1, which is why the textures looked bad on my device:
http://developer.apple.com/library/ios/#documentation/2DDrawing/Conceptual/DrawingPrintingiOS/SupportingHiResScreens/SupportingHiResScreens.html
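For reference, a minimal sketch of that fix (assuming an EAGL-backed view named glView, set before the framebuffer/renderbuffer storage is allocated):
glView.contentScaleFactor = [[UIScreen mainScreen] scale]; // render at native (Retina) resolution instead of scale factor 1
// The colour renderbuffer allocated from the view's CAEAGLLayer will then match the native pixel size.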

In UIImage-to-texture conversion, the texture is flipped vertically

I am trying to read a UIImage into a texture on iOS. I found a code snippet on Stack Overflow that does the trick, but the problem is that when I display the texture, it appears mirrored upside down.
int numComponents = 4;
UIImage* image = [UIImage imageNamed:@"Test.png"];
CGImageRef imageRef = [image CGImage];
int width = CGImageGetWidth(imageRef);
int height = CGImageGetHeight(imageRef);
//Allocate texture data
GLubyte* textureData = (GLubyte *)malloc(width * height * numComponents);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
NSUInteger bytesPerPixel = 4;
NSUInteger bytesPerRow = bytesPerPixel * width;
NSUInteger bitsPerComponent = 8;
CGContextRef context = CGBitmapContextCreate(textureData, width, height,
bitsPerComponent, bytesPerRow, colorSpace,
kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colorSpace);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
CGContextRelease(context);
//texture setup
GLuint textureID;
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glGenTextures(1, &textureID);
glBindTexture(GL_TEXTURE_2D, textureID);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, textureData);
I also tried to mirror the UIImage using the following line (before reading the data), but it's not working either; in fact, it has no effect whatsoever.
image = [UIImage imageWithCGImage:image.CGImage scale:image.scale orientation:UIImageOrientationUpMirrored];
Please let me know what I am doing wrong here. Thank you.
These two lines fixed the problem (concatenate the transform before the CGContextDrawImage call):
CGAffineTransform flipVertical = CGAffineTransformMake(1, 0, 0, -1, 0,height);
CGContextConcatCTM(context, flipVertical);
