Load .png file as a texture onto a cube - iOS

I'm trying to load a .png file as a texture for my cube.
[self loadTexture:&myTexture fromFile:@"my_png.png"];
glEnable(GL_TEXTURE_2D);
glActiveTexture(GL_TEXTURE);
This is the function that loads it, but unfortunately it's not working.
- (void)loadTexture:(GLuint *)newTextureName fromFile:(NSString *)fileName {
    // Load image from file and get reference
    UIImage *image = [[UIImage alloc] initWithContentsOfFile:[[NSBundle mainBundle] pathForResource:fileName ofType:nil]];
    CGImageRef imageRef = [image CGImage];
    if (imageRef) {
        // get width and height
        size_t imageWidth = CGImageGetWidth(imageRef);
        size_t imageHeight = CGImageGetHeight(imageRef);
        GLubyte *imageData = (GLubyte *)malloc(imageWidth * imageHeight * 4);
        memset(imageData, 0, (imageWidth * imageHeight * 4));
        CGContextRef imageContextRef = CGBitmapContextCreate(imageData, imageWidth, imageHeight, 8, imageWidth * 4, CGImageGetColorSpace(imageRef), kCGImageAlphaPremultipliedLast);
        // Make CG interpret OpenGL-style texture coordinates properly by inverting the Y axis
        CGContextTranslateCTM(imageContextRef, 0, imageHeight);
        CGContextScaleCTM(imageContextRef, 1.0, -1.0);
        CGContextDrawImage(imageContextRef, CGRectMake(0.0, 0.0, (CGFloat)imageWidth, (CGFloat)imageHeight), imageRef);
        CGContextRelease(imageContextRef);
        glGenTextures(1, newTextureName);
        glBindTexture(GL_TEXTURE_2D, *newTextureName);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, imageWidth, imageHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, imageData);
        free(imageData);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
    }
}
Do you have any idea how to fix this problem?

If you can target iOS 5 or later, GLKTextureLoader will do all of this for you:
Class loaderClass = (NSClassFromString(@"GLKTextureLoader"));
if (loaderClass != nil)
{
    NSError *error = nil;
    NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                             [NSNumber numberWithBool:mipmap_levels > 0], GLKTextureLoaderGenerateMipmaps,
                             [NSNumber numberWithBool:YES], GLKTextureLoaderApplyPremultiplication,
                             nil];
    GLKTextureInfo *info = [loaderClass textureWithContentsOfFile:path.NSStringValue() options:options error:&error];
    if (info && !error)
    {
    }
}
If info is non-nil and there is no error, info holds all the information you need.
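For example, the result can be bound directly in place of the manual loader above. A minimal sketch, assuming the PNG from the question sits in the main bundle and a GL context is already current:
NSString *path = [[NSBundle mainBundle] pathForResource:@"my_png" ofType:@"png"];
NSError *error = nil;
GLKTextureInfo *info = [GLKTextureLoader textureWithContentsOfFile:path options:nil error:&error];
if (info && !error) {
    glBindTexture(info.target, info.name); // info.target is normally GL_TEXTURE_2D
} else {
    NSLog(@"Texture load failed: %@", error);
}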

Related

OpenGL ES textures are all black

I create 8 textures through:
GLuint textures[8];
glGenTextures(8, textures);
for (int i = 0; i < num; i++) {
    NSString *fileName = [NSString stringWithFormat:@"image0%d", i + 1];
    CGImageRef spriteImage = [UIImage imageNamed:fileName].CGImage;
    if (!spriteImage) {
        NSLog(@"Failed to load image %@", fileName);
        exit(1);
    }
    size_t kwidth = CGImageGetWidth(spriteImage);
    size_t kheight = CGImageGetHeight(spriteImage);
    GLubyte *spriteData = (GLubyte *)calloc(kwidth * kheight * 4, sizeof(GLubyte));
    glBindTexture(GL_TEXTURE_2D, textures[i]);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, kwidth, kheight, 0, GL_RGBA, GL_UNSIGNED_BYTE, spriteData);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glGenerateMipmap(GL_TEXTURE_2D);
}
But I get all black textures:
How could I solve this?
GLubyte * spriteData = (GLubyte *) calloc(kwidth*kheight*4, sizeof(GLubyte));
This creates a GLubyte array filled with zeros. Since you never modify spriteData anywhere, the textures are filled with zeros, which is black.
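The missing step is to draw each CGImage into its buffer before the glTexImage2D call, e.g. with a bitmap context. A minimal sketch, reusing the variable names from the question and assuming an RGBA layout:
// Fill spriteData with the image's pixels before uploading it.
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef spriteContext = CGBitmapContextCreate(spriteData, kwidth, kheight,
                                                   8, kwidth * 4, colorSpace,
                                                   kCGImageAlphaPremultipliedLast);
CGContextDrawImage(spriteContext, CGRectMake(0, 0, kwidth, kheight), spriteImage);
CGContextRelease(spriteContext);
CGColorSpaceRelease(colorSpace);
// ... glBindTexture / glTexImage2D as in the question ...
free(spriteData); // once uploaded, the CPU-side copy can go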

Rendering OpenGL ES 2.0 to UIImage

I am interested in writing some code which processes an image using OpenGL ES 2.0 and then reads the image back out to memory (to eventually be saved). I have some other code which does more complex processing and renders the result to the display, and that works. I now want code which runs the same processing but just saves the image; it doesn't need to be rendered to the screen.
I have created a frame buffer object which outputs to a texture. I then want to use glReadPixels to get the contents of the frame buffer back into memory. The following code snippet should take a UIImage, resize it to fit onto a 512x512 canvas, and then write it out as a 512x512 UIImage. I can't seem to get anything meaningful to display, though. If I use an image smaller than 512x512 it does seem to render, but it's as if it is drawing straight from the texture storage (because I pad images so they are always a power-of-two size). The image displays, but if I change any of the drawing code, it doesn't affect the output.
I'd really appreciate it if you could give me some insight.
Thanks!
Here is my Vertex Shader:
attribute vec4 Position;
attribute vec4 SourceColour;
varying vec4 DestinationColour;
uniform vec2 ScreenSize;
attribute vec2 TexCoordIn;
varying vec2 TexCoordOut;

void main(void) {
    DestinationColour = SourceColour;
    vec4 newPosition = Position;
    newPosition.x = Position.x / ScreenSize.x;
    newPosition.y = Position.y / ScreenSize.y;
    gl_Position = Position;
    TexCoordOut = TexCoordIn;
}
Here is my fragment shader:
varying lowp vec2 TexCoordOut;
uniform sampler2D Texture;

void main(void)
{
    lowp vec4 pixel = texture2D(Texture, TexCoordOut);
    gl_FragColor = pixel;
}
Here is the main code body:
#import "SERootViewController.h"
#import <OpenGLES/ES2/gl.h>
typedef struct {
CGSize size;
CGPoint percentage;
GLuint id;
} Texture;
typedef struct {
float Position[3];
float TexCoord[2];
} Vertex;
size_t nextPowerOfTwo(size_t n)
{
size_t po2 = 2;
while(po2<n) {
po2 = po2*2;
}
return po2;
}
const GLubyte Indices[] = {
0, 1, 2,
2, 3, 0
};
@interface SERootViewController ()
{
    EAGLContext *_context;
    //OpenGL
    GLuint _vertexBufferHandle;
    GLuint _indexBufferHandle;
    GLuint _vertexShaderHandle;
    GLuint _fragmentShaderHandle;
    GLuint _programHandle;
    GLuint _positionHandle;
    GLuint _texCoordHandle;
    GLuint _textureHandle;
    GLuint _screenSizeHandle;
    GLuint _fbo;
    GLuint _fboTexture;
    Texture _imageTexture;
    GLfloat _defaultScale;
    Vertex *_vertices;
    CGSize _screenSize;
    //Cocoa
    UIImageView *_imageView;
}
@end

@implementation SERootViewController
//////////////////////////////////////////////////////////////////////////
#pragma mark -
#pragma mark Lifecycle
//////////////////////////////////////////////////////////////////////////
- (id)init
{
    if ( (self = [super init]) != nil)
    {
        _context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
        _vertices = malloc(sizeof(Vertex) * 4);
    }
    return self;
}

- (void)viewDidLoad
{
    [super viewDidLoad];
    [self __setupFrameBuffer];
    [self __setupShaders];
    CGSize size = CGSizeMake(512, 512);
    [self __setScreenSize:size];
    UIImage *image = [UIImage imageNamed:@"christmas.jpg"];
    GLubyte *bytes = [self __bytesFromImage:image];
    _imageTexture = [self __newTexture:bytes size:image.size];
    free(bytes);
    [self __setupVBOs:_imageTexture screenSize:size];
    [self __renderTexture:_imageTexture];
    [self __bindFrameBuffer];
    GLubyte *imageBytes = [self __renderImage:size];
    UIImage *newImage = [self __imageFromBytes:imageBytes size:size];
    free(imageBytes);
    _imageView = [[UIImageView alloc] initWithFrame:CGRectMake(0, 0, self.view.frame.size.width, self.view.frame.size.height)];
    _imageView.image = newImage;
    _imageView.contentMode = UIViewContentModeScaleAspectFit;
    [self.view addSubview:_imageView];
}

- (void)didReceiveMemoryWarning
{
    [super didReceiveMemoryWarning];
    // Dispose of any resources that can be recreated.
}
//////////////////////////////////////////////////////////////////////////
#pragma mark -
#pragma mark Methods
//////////////////////////////////////////////////////////////////////////
- (void)__setScreenSize:(CGSize)screenSize
{
    _screenSize = screenSize;
    glUniform2f(_screenSizeHandle, _screenSize.width, _screenSize.height);
}

- (void)__setupFrameBuffer
{
    glGenFramebuffers(1, &_fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, _fbo);
    glActiveTexture(GL_TEXTURE0);
    glGenTextures(1, &_fboTexture);
    glBindTexture(GL_TEXTURE_2D, _fboTexture);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 512, 512, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, _fboTexture, 0);
}

- (void)__bindFrameBuffer
{
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, _fboTexture);
    glBindFramebuffer(GL_FRAMEBUFFER, _fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, _fboTexture, 0);
}

- (GLuint)__compileShader:(NSString *)shaderStr type:(GLenum)type
{
    const char *str = shaderStr.UTF8String;
    int shaderStrLength = strlen(str);
    GLuint shaderHandle = glCreateShader(type);
    glShaderSource(shaderHandle, 1, &str, &shaderStrLength);
    glCompileShader(shaderHandle);
    GLint compileSuccess;
    glGetShaderiv(shaderHandle, GL_COMPILE_STATUS, &compileSuccess);
    if (compileSuccess == GL_FALSE)
    {
        GLchar messages[512];
        glGetShaderInfoLog(shaderHandle, sizeof(messages), 0, &messages[0]);
        NSLog(@"Shader Error: %s", messages);
    }
    return shaderHandle;
}
- (void)__setupVBOs:(Texture)texture screenSize:(CGSize)screenSize
{
    glDeleteBuffers(1, &_vertexBufferHandle);
    glDeleteBuffers(1, &_indexBufferHandle);
    _defaultScale = MIN( (screenSize.height / texture.size.height), (screenSize.width / texture.size.width) );
    GLfloat width = texture.size.width * _defaultScale;
    GLfloat height = texture.size.height * _defaultScale;
    Vertex vertices[] = {{{ width/2, height/2, 0}, {texture.percentage.x, texture.percentage.y}},
                         {{ width/2, 0, 0}, {texture.percentage.x, 0}},
                         {{0, 0, 0}, {0, 0}},
                         {{0, height/2, 0}, {0, texture.percentage.y}}
    };
    memcpy(_vertices, vertices, sizeof(vertices));
    glGenBuffers(1, &_vertexBufferHandle);
    glBindBuffer(GL_ARRAY_BUFFER, _vertexBufferHandle);
    glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), _vertices, GL_STATIC_DRAW);
    glGenBuffers(1, &_indexBufferHandle);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, _indexBufferHandle);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(Indices), Indices, GL_STATIC_DRAW);
}

- (void)__setupShaders
{
    NSString *vertexFilePath = [[NSBundle mainBundle] pathForResource:@"vertexShader" ofType:@"glsl"];
    NSString *vertexShader = [NSString stringWithContentsOfFile:vertexFilePath encoding:NSUTF8StringEncoding error:nil];
    NSString *fragmentFilePath = [[NSBundle mainBundle] pathForResource:@"fragmentShader" ofType:@"glsl"];
    NSString *fragmentShader = [NSString stringWithContentsOfFile:fragmentFilePath encoding:NSUTF8StringEncoding error:nil];
    _vertexShaderHandle = [self __compileShader:vertexShader type:GL_VERTEX_SHADER];
    _fragmentShaderHandle = [self __compileShader:fragmentShader type:GL_FRAGMENT_SHADER];
    _programHandle = glCreateProgram();
    glAttachShader(_programHandle, _vertexShaderHandle);
    glAttachShader(_programHandle, _fragmentShaderHandle);
    glLinkProgram(_programHandle);
    GLint linkSuccess;
    glGetProgramiv(_programHandle, GL_LINK_STATUS, &linkSuccess);
    if (linkSuccess == GL_FALSE)
    {
        GLchar messages[512];
        glGetProgramInfoLog(_programHandle, sizeof(messages), 0, &messages[0]);
        NSLog(@"GLSL ERROR: %s", messages);
    }
    glUseProgram(_programHandle);
    _positionHandle = glGetAttribLocation(_programHandle, "Position");
    _texCoordHandle = glGetAttribLocation(_programHandle, "TexCoordIn");
    _screenSizeHandle = glGetUniformLocation(_programHandle, "ScreenSize");
    _textureHandle = glGetUniformLocation(_programHandle, "Texture");
    glEnableVertexAttribArray(_positionHandle);
    glEnableVertexAttribArray(_texCoordHandle);
}
- (Texture)__newTexture:(void *)bytes size:(CGSize)size
{
    Texture texture;
    size_t wpo2 = nextPowerOfTwo(size.width);
    size_t hpo2 = nextPowerOfTwo(size.height);
    texture.size = size;
    texture.percentage = CGPointMake((float)size.width / (float)wpo2, (float)size.height / (float)hpo2);
    void *texData = (GLubyte *)malloc(wpo2 * hpo2 * 4 * sizeof(GLubyte));
    memset(texData, 1, sizeof(GLubyte) * size.width * 4);
    for (GLuint i = 0; i < size.height; i++)
    {
        memcpy(&texData[wpo2 * i * 4], &bytes[(int)size.width * i * 4], sizeof(GLubyte) * (int)size.width * 4);
    }
    glActiveTexture(GL_TEXTURE1);
    glGenTextures(1, &texture.id);
    glBindTexture(GL_TEXTURE_2D, texture.id);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, (GLsizei)wpo2, (GLsizei)hpo2, 0, GL_RGBA, GL_UNSIGNED_BYTE, texData);
    free(texData);
    return texture;
}

- (void)__renderTexture:(Texture)texture
{
    glClearColor(0.0f, 0.5, 0.0, 1.0);
    glClear(GL_COLOR_BUFFER_BIT);
    glVertexAttribPointer(_positionHandle, 3, GL_FLOAT, GL_FALSE,
                          sizeof(Vertex), 0);
    glVertexAttribPointer(_texCoordHandle, 2, GL_FLOAT, GL_FALSE,
                          sizeof(Vertex), (GLvoid *)(sizeof(float) * 3));
    glActiveTexture(GL_TEXTURE1);
    glBindTexture(GL_TEXTURE_2D, texture.id);
    glUniform1i(_textureHandle, 1);
    glDrawElements(GL_TRIANGLES, sizeof(Indices)/sizeof(Indices[0]), GL_UNSIGNED_BYTE, 0);
}

- (GLubyte *)__renderImage:(CGSize)size
{
    GLubyte *image = malloc(size.width * size.height * 4 * sizeof(GLubyte));
    glReadPixels(0, 0, size.width, size.height, GL_RGBA, GL_UNSIGNED_BYTE, image);
    return image;
}
- (GLubyte *)__bytesFromImage:(UIImage *)image
{
    GLint maxTextureSize;
    glGetIntegerv(GL_MAX_TEXTURE_SIZE, &maxTextureSize);
    CGFloat maxSize = MAX(image.size.width, image.size.height);
    CGFloat width = image.size.width;
    CGFloat height = image.size.height;
    if (maxSize > maxTextureSize)
    {
        CGFloat scale = maxTextureSize / maxSize;
        width = roundf(width * scale);
        height = roundf(height * scale);
    }
    CGImageRef spriteImage = image.CGImage;
    if (!spriteImage) {
        NSLog(@"Failed to load image");
        exit(1);
    }
    GLubyte *imgData = (GLubyte *)malloc(width * height * 4 * sizeof(GLubyte));
    CGContextRef spriteContext = CGBitmapContextCreate(imgData, width, height, 8, width * 4,
                                                       CGImageGetColorSpace(spriteImage), (CGBitmapInfo)kCGImageAlphaPremultipliedLast);
    UIGraphicsPushContext(spriteContext);
    CGContextSaveGState(spriteContext);
    [image drawInRect:CGRectMake(0, 0, width, height)];
    CGContextRestoreGState(spriteContext);
    UIGraphicsPopContext();
    CGContextRelease(spriteContext);
    return imgData;
}

- (UIImage *)__imageFromBytes:(GLubyte *)bytes size:(CGSize)size
{
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, bytes, (size.width * size.height * 4), NULL);
    // set up for CGImage creation
    int bitsPerComponent = 8;
    int bitsPerPixel = 32;
    int bytesPerRow = 4 * size.width;
    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
    CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
    CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
    CGImageRef imageRef = CGImageCreate(size.width, size.height, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);
    // make UIImage from CGImage
    UIImage *newUIImage = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    return newUIImage;
}

@end

AVFoundation: add text to the CMSampleBufferRef video frame

I'm building an app using AVFoundation.
Just before I call [assetWriterInput appendSampleBuffer:sampleBuffer] in the
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
method, I manipulate the pixels in the sample buffer (using a pixel buffer to apply an effect).
But the client also wants me to put text (a timestamp & frame counter) on the frames, and I haven't found a way to do this yet.
I tried to convert the sample buffer to an image, draw the text on the image, and convert the image back to a sample buffer, but then
CMSampleBufferDataIsReady(sampleBuffer)
fails.
Here are my UIImage category methods:
+ (UIImage *)imageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer
{
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer, 0);
    uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGImageRef newImage = CGBitmapContextCreateImage(newContext);
    CGContextRelease(newContext);
    CGColorSpaceRelease(colorSpace);
    // unlock must balance the lock taken above
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    UIImage *newUIImage = [UIImage imageWithCGImage:newImage];
    CGImageRelease(newImage);
    return newUIImage;
}
And
- (CMSampleBufferRef)cmSampleBuffer
{
    CGImageRef image = self.CGImage;
    NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
                             nil];
    CVPixelBufferRef pxbuffer = NULL;
    CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault,
                                          self.size.width,
                                          self.size.height,
                                          kCVPixelFormatType_32ARGB,
                                          (__bridge CFDictionaryRef)options,
                                          &pxbuffer);
    NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);
    CVPixelBufferLockBaseAddress(pxbuffer, 0);
    void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
    NSParameterAssert(pxdata != NULL);
    CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(pxdata, self.size.width,
                                                 self.size.height, 8, 4 * self.size.width, rgbColorSpace,
                                                 kCGImageAlphaNoneSkipFirst);
    NSParameterAssert(context);
    CGContextConcatCTM(context, CGAffineTransformMakeRotation(0));
    CGContextDrawImage(context, CGRectMake(0, 0, CGImageGetWidth(image),
                                           CGImageGetHeight(image)), image);
    CGColorSpaceRelease(rgbColorSpace);
    CGContextRelease(context);
    CVPixelBufferUnlockBaseAddress(pxbuffer, 0);
    CMVideoFormatDescriptionRef videoInfo = NULL; // note: still NULL when passed below
    CMSampleBufferRef sampleBuffer = NULL;
    CMSampleBufferCreateForImageBuffer(kCFAllocatorDefault,
                                       pxbuffer, true, NULL, NULL, videoInfo, NULL, &sampleBuffer);
    return sampleBuffer;
}
Any ideas?
EDIT:
I changed my code with Tony's answer. (Thank you!)
This code works:
CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
EAGLContext *eaglContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
CIContext *ciContext = [CIContext contextWithEAGLContext:eaglContext options:@{kCIContextWorkingColorSpace : [NSNull null]}];
UIFont *font = [UIFont fontWithName:@"Helvetica" size:40];
NSDictionary *attributes = @{NSFontAttributeName: font,
                             NSForegroundColorAttributeName: [UIColor lightTextColor]};
UIImage *img = [UIImage imageFromText:@"01 - 13/02/2014 15:18:21:654" withAttributes:attributes];
CIImage *filteredImage = [[CIImage alloc] initWithCGImage:img.CGImage];
[ciContext render:filteredImage toCVPixelBuffer:pixelBuffer bounds:[filteredImage extent] colorSpace:CGColorSpaceCreateDeviceRGB()];
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
You should refer to the CIFunHouse sample from Apple; you can use this API to draw directly into the buffer:
- (void)render:(CIImage *)image toCVPixelBuffer:(CVPixelBufferRef)buffer bounds:(CGRect)r colorSpace:(CGColorSpaceRef)cs
You can download it here: WWDC2013
Create the context:
_eaglContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
_ciContext = [CIContext contextWithEAGLContext:_eaglContext options:@{kCIContextWorkingColorSpace : [NSNull null]}];
Now render the image:
CVPixelBufferRef renderedOutputPixelBuffer = NULL;
OSStatus err = CVPixelBufferPoolCreatePixelBuffer(nil, self.pixelBufferAdaptor.pixelBufferPool, &renderedOutputPixelBuffer);
[_ciContext render:filteredImage toCVPixelBuffer:renderedOutputPixelBuffer bounds:[filteredImage extent] colorSpace:nil];
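Presumably the rendered buffer then goes back to the writer through the same adaptor. A minimal sketch (the adaptor, writer input, and sampleBuffer are assumed from the surrounding capture pipeline, not shown in the answer):
// Hand the rendered pixel buffer to the asset writer.
CMTime presentationTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
if ([self.assetWriterInput isReadyForMoreMediaData]) {
    [self.pixelBufferAdaptor appendPixelBuffer:renderedOutputPixelBuffer
                          withPresentationTime:presentationTime];
}
CVPixelBufferRelease(renderedOutputPixelBuffer); // we own the buffer created from the pool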

OpenGL ES render to UIImage, image is black

I have some problems with image processing using OpenGL ES.
I have created a texture, loaded shaders, and attached them to the program, but when I try to render my
texture to a UIImage, the image is fully black :(
Texture init:
glActiveTexture(GL_TEXTURE0);
glGenTextures(1, &_name);
glBindTexture(self.format, _name);
glTexParameteri(self.format, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(self.format, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(self.format, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(self.format, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glBindTexture(self.format, 0);
Texture load code:
self.pixelSize = CGSizeMake(self.size.width * self.scale, self.size.height * self.scale);
BOOL shouldScale = NO;
if (self.limitedSize.width < self.pixelSize.width ||
    self.limitedSize.height < self.pixelSize.height) {
    self.pixelSize = self.limitedSize;
}
if (shouldScale) {
    CGFloat normalizedWidth = ceil(log2(self.pixelSize.width));
    CGFloat normalizedHeight = ceil(log2(self.pixelSize.height));
    self.pixelSize = CGSizeMake(pow(2.0, normalizedWidth), powf(2.0, normalizedHeight));
    self.data = (GLubyte *)calloc(1, self.bytes);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGBitmapInfo bitmapInfo = kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst;
    CGContextRef context = CGBitmapContextCreate(self.data, normalizedWidth, normalizedHeight, 8, normalizedWidth * 4, colorSpace, bitmapInfo);
    CGContextDrawImage(context, CGRectMake(0, 0, normalizedWidth, normalizedHeight), image.CGImage);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
}
else {
    CFDataRef dataRef = CGDataProviderCopyData(CGImageGetDataProvider(image.CGImage));
    self.data = (GLubyte *)CFDataGetBytePtr(dataRef);
    CFRelease(dataRef); // careful: this releases the backing store that self.data points into
}
glBindTexture(self.format, self.name);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexImage2D(self.format, 0, GL_RGBA, self.pixelSize.width, self.pixelSize.height, 0, GL_BGRA, self.type, self.data);
glGenerateMipmap(GL_TEXTURE_2D);
Frame buffer:
glActiveTexture(GL_TEXTURE1);
glGenFramebuffers(1, &_buffer);
glBindFramebuffer(GL_FRAMEBUFFER, _buffer); // self.format is GL_TEXTURE_2D
glBindTexture(self.texture.format, self.texture.name);
glTexImage2D(self.texture.format, 0, GL_RGBA, self.width, self.height, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, self.texture.format, self.texture.name, 0);
GLint status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
if (status != GL_FRAMEBUFFER_COMPLETE) {
    NSLog(@"GLBuffer: Failed to make framebuffer object");
}
glBindTexture(self.texture.format, 0);
Render:
[self.buffer bind];
[self.program use];
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);
glActiveTexture(GL_TEXTURE2);
[self.texture bind];
glUniform1i([self.texture uniformForKey:@"inputImageTexture"], 2);
glUniform1f([self.texture uniformForKey:@"red"], 1.0);
glUniform1f([self.texture uniformForKey:@"green"], 0.0);
glUniform1f([self.texture uniformForKey:@"blue"], 0.0);
glVertexAttribPointer([self.texture attributeForKey:@"position"], 2, GL_FLOAT, 0, 0, [self.texture vertices]);
glEnableVertexAttribArray([self.texture attributeForKey:@"position"]);
glVertexAttribPointer([self.texture attributeForKey:@"inputTextureCoordinate"], 2, GL_FLOAT, 0, 0, [self.texture coordinates]);
glEnableVertexAttribArray([self.texture attributeForKey:@"inputTextureCoordinate"]);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
[self.buffer unbind];
[self.texture unbind];
Texture to image:
[self.buffer bind];
GLubyte *rawPixels = (GLubyte *)malloc([self.texture bytes]);
glReadPixels(0, 0, [self.texture width], [self.texture height], GL_RGBA, GL_UNSIGNED_BYTE, rawPixels);
CGDataProviderRef dataProvider = CGDataProviderCreateWithData(NULL, rawPixels, [self.texture bytes], NULL);
NSUInteger bytesPerRow = [self.texture pixelWidth] * 4;
CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault | kCGImageAlphaLast;
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGImageRef cgImage = CGImageCreate([self.texture pixelWidth],
[self.texture pixelHeight],
8,
32,
bytesPerRow,
colorSpace,
bitmapInfo,
dataProvider,
NULL,
NO,
kCGRenderingIntentDefault);
UIImage *image = [UIImage imageWithCGImage:cgImage];
CGImageRelease(cgImage);
CGDataProviderRelease(dataProvider);
CGColorSpaceRelease(colorSpace);
After I removed glClear(GL_COLOR_BUFFER_BIT) from my rendering method, the texture renders correctly on the simulator, but not on a real device. Any suggestions?
Convert texture to UIImage
#pragma mark - Convert GL image to UIImage
static void releasePixelData(void *info, const void *data, size_t size)
{
    free((void *)data); // frees buffer2 once the CGImage no longer needs it
}

- (UIImage *)glToUIImage
{
    imageWidth = 702;
    imageHeight = 962;
    NSInteger myDataLength = imageWidth * imageHeight * 4;
    // allocate array and read pixels into it.
    GLubyte *buffer = (GLubyte *)malloc(myDataLength);
    glReadPixels(0, 0, imageWidth, imageHeight, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
    // gl renders "upside down" so swap top to bottom into new array.
    // there's gotta be a better way, but this works.
    GLubyte *buffer2 = (GLubyte *)malloc(myDataLength);
    for (int y = 0; y < imageHeight; y++)
    {
        for (int x = 0; x < imageWidth * 4; x++)
        {
            buffer2[((imageHeight - 1) - y) * imageWidth * 4 + x] = buffer[y * 4 * imageWidth + x];
        }
    }
    free(buffer); // the unflipped copy is no longer needed
    // make data provider with data; its release callback frees buffer2 later.
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer2, myDataLength, releasePixelData);
    // prep the ingredients
    int bitsPerComponent = 8;
    int bitsPerPixel = 32;
    int bytesPerRow = 4 * imageWidth;
    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
    CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
    CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
    // make the cgimage
    CGImageRef imageRef = CGImageCreate(imageWidth, imageHeight, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);
    // then make the uiimage from that
    UIImage *myImage = [UIImage imageWithCGImage:imageRef];
    // release what we created; the UIImage keeps its own references
    CGImageRelease(imageRef);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpaceRef);
    return myImage;
}
Just get your image by calling this method, like this:
UIImage *yourImage = [self glToUIImage];
Source: http://www.bit-101.com/blog/?p=1861
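As for the "gotta be a better way" note about the row-swap loop: one common alternative (a sketch, not part of the original answer) is to build the CGImage straight from the glReadPixels buffer and let Core Graphics perform the vertical flip while drawing:
// Assumes imageRef was created from the *unflipped* glReadPixels buffer.
// CGContextDrawImage uses CG's bottom-left origin while the UIKit image
// context is top-left, so the draw itself flips the image vertically.
UIGraphicsBeginImageContextWithOptions(CGSizeMake(imageWidth, imageHeight), NO, 1.0f);
CGContextRef flipCtx = UIGraphicsGetCurrentContext();
CGContextDrawImage(flipCtx, CGRectMake(0, 0, imageWidth, imageHeight), imageRef);
UIImage *flippedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();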

Memory leak in CoreImage/CoreVideo

I'm building an iOS app that does some basic detection.
I get the raw frames from AVCaptureVideoDataOutput, convert the CMSampleBufferRef to a UIImage, resize the UIImage, and then convert it back to a CVPixelBufferRef.
As far as I can tell with Instruments, the leak is in the last part, where I convert the CGImage to a CVPixelBufferRef.
Here's the code I use:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    videof = [[ASMotionDetect alloc] initWithSampleImage:[self resizeSampleBuffer:sampleBuffer]];
    // ASMotionDetect is my class for detection and I use videof to calculate the movement
}

- (UIImage *)resizeSampleBuffer:(CMSampleBufferRef)sampleBuffer {
    UIImage *img;
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer, 0); // Lock the image buffer
    uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0); // Get information about the image
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGImageRef newImage = CGBitmapContextCreateImage(newContext);
    CGContextRelease(newContext);
    CGColorSpaceRelease(colorSpace);
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    /* CVBufferRelease(imageBuffer); */ // do not call this!
    img = [UIImage imageWithCGImage:newImage];
    CGImageRelease(newImage);
    newContext = nil;
    img = [self resizeImageToSquare:img];
    return img;
}

- (UIImage *)resizeImageToSquare:(UIImage *)_temp {
    UIImage *img;
    int w = _temp.size.width;
    int h = _temp.size.height;
    CGRect rect;
    if (w > h) {
        rect = CGRectMake((w - h) / 2, 0, h, h);
    } else {
        rect = CGRectMake(0, (h - w) / 2, w, w);
    }
    //
    img = [self crop:_temp inRect:rect];
    return img;
}

- (UIImage *)crop:(UIImage *)image inRect:(CGRect)rect {
    UIImage *sourceImage = image;
    CGRect selectionRect = rect;
    CGRect transformedRect = TransformCGRectForUIImageOrientation(selectionRect, sourceImage.imageOrientation, sourceImage.size);
    CGImageRef resultImageRef = CGImageCreateWithImageInRect(sourceImage.CGImage, transformedRect);
    UIImage *resultImage = [[UIImage alloc] initWithCGImage:resultImageRef scale:1.0 orientation:image.imageOrientation];
    CGImageRelease(resultImageRef);
    return resultImage;
}
And in my detection class I have:
- (id)initWithSampleImage:(UIImage *)sampleImage {
    if ((self = [super init])) {
        _frame = new CVMatOpaque();
        _histograms = new CVMatNDOpaque[kGridSize * kGridSize];
        [self extractFrameFromImage:sampleImage];
    }
    return self;
}

- (void)extractFrameFromImage:(UIImage *)sampleImage {
    CGImageRef imageRef = [sampleImage CGImage];
    CVImageBufferRef imageBuffer = [self pixelBufferFromCGImage:imageRef];
    CVPixelBufferLockBaseAddress(imageBuffer, 0);
    // Collect some information required to extract the frame.
    void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    // Extract the frame, convert it to grayscale, and shove it in _frame.
    cv::Mat frame(height, width, CV_8UC4, baseAddress, bytesPerRow);
    cv::cvtColor(frame, frame, CV_BGR2GRAY);
    _frame->matrix = frame;
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    CGImageRelease(imageRef);
}

- (CVPixelBufferRef)pixelBufferFromCGImage:(CGImageRef)image
{
    CVPixelBufferRef pxbuffer = NULL;
    int width = CGImageGetWidth(image) * 2;
    int height = CGImageGetHeight(image) * 2;
    NSMutableDictionary *attributes = [NSMutableDictionary dictionaryWithObjectsAndKeys:[NSNumber numberWithInt:kCVPixelFormatType_32ARGB], kCVPixelBufferPixelFormatTypeKey, [NSNumber numberWithInt:width], kCVPixelBufferWidthKey, [NSNumber numberWithInt:height], kCVPixelBufferHeightKey, nil];
    CVPixelBufferPoolRef pixelBufferPool;
    CVReturn theError = CVPixelBufferPoolCreate(kCFAllocatorDefault, NULL, (__bridge CFDictionaryRef)attributes, &pixelBufferPool);
    NSParameterAssert(theError == kCVReturnSuccess);
    CVReturn status = CVPixelBufferPoolCreatePixelBuffer(NULL, pixelBufferPool, &pxbuffer);
    NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);
    CVPixelBufferLockBaseAddress(pxbuffer, 0);
    void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
    NSParameterAssert(pxdata != NULL);
    CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(pxdata, width,
                                                 height, 8, width * 4, rgbColorSpace,
                                                 kCGImageAlphaNoneSkipFirst);
    NSParameterAssert(context);
    /* here is the problem: */
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), image);
    CGColorSpaceRelease(rgbColorSpace);
    CGContextRelease(context);
    CVPixelBufferUnlockBaseAddress(pxbuffer, 0);
    return pxbuffer;
}
With Instruments I found out that the problem is with the CVPixelBufferRef allocations, but I don't understand why. Can someone see the problem?
Thank you
In -pixelBufferFromCGImage:, neither pxbuffer nor pixelBufferPool is released. That makes sense for pxbuffer, as it is a return value, but not for pixelBufferPool: you create and leak one per call of the method.
A quick fix would be to:
Release pixelBufferPool in -pixelBufferFromCGImage:
Release pxbuffer (the return value of -pixelBufferFromCGImage:) in -extractFrameFromImage:
You should also rename -pixelBufferFromCGImage: to -createPixelBufferFromCGImage: to make it clear that it returns a retained object.
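Concretely, the two releases would look something like this (a sketch against the question's code, not a drop-in patch):
// 1. In what is now -createPixelBufferFromCGImage:, release the pool before returning:
CVPixelBufferUnlockBaseAddress(pxbuffer, 0);
CVPixelBufferPoolRelease(pixelBufferPool); // was leaked once per call
return pxbuffer;                           // ownership passes to the caller

// 2. In -extractFrameFromImage:, release the buffer once the frame is extracted:
CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
CVPixelBufferRelease(imageBuffer);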
