Fast way to create an OpenGL texture from JPEG-2000? - ios

I need to load large-ish (5-megapixel) JPEG images and create an OpenGL texture from each of them. They are non-power-of-two and cannot be pre-processed for this application. Loading is extremely slow, about one second per image on an iPad Air 2. I need to load a dozen or two such images and create a GL texture for each, as quickly as I can.
Profiling shows the bottleneck to be CGContextDrawImage. Previous answers suggest this is a common problem.
This previous answer seems most relevant and (unfortunately) does not leave me hopeful. I haven't tried libjpeg (suggested in another answer) yet; I'm trying to keep third-party code out for several reasons.
But that answer was from 2014, and things change. Does anybody know of a faster way to create textures from JPEGs, either by changing the arguments to CGContextDrawImage (as in this answer; I've tried the suggested changes with no noticeable speed change) or by using a different approach entirely?
The current texture creation block (called asynchronously):
UIImage *image = [UIImage imageWithData:jpegImageData];
if (image) {
    GLuint textureID;
    glGenTextures(1, &textureID);
    glBindTexture(GL_TEXTURE_2D, textureID);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

    GLsizei width  = (GLsizei)CGImageGetWidth(image.CGImage);
    GLsizei height = (GLsizei)CGImageGetHeight(image.CGImage);

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    void *imageData = malloc(height * width * 4);
    CGContextRef imgcontext = CGBitmapContextCreate(imageData, width, height, 8, 4 * width, colorSpace,
                                                    kCGImageAlphaNoneSkipLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);

    // This is the call the profiler flags as the bottleneck.
    CGContextDrawImage(imgcontext, CGRectMake(0, 0, width, height), image.CGImage);

    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, imageData);
    CGContextRelease(imgcontext);
    free(imageData);

    // ... store the textureID for use by the caller
    // ...
}
(edited to add)
I tried GLKTextureLoader. I kept getting a nil return value, with an NSError in domain GLKTextureLoaderErrorDomain, code 12.
I've realized that the JPEGs I need to load are JPEG 2000, and that may be the problem. I've played with the GLKTextureLoader approach; I can get it to work with non-J2K JPEGs, but not with the J2K ones I need to load. (FWIW, the files I need to load are packed inside larger files, so I extract a data subrange from within the file, as such:
NSData *jpegImageData = [data subdataWithRange:NSMakeRange(offset, dataLength)];
GLKTextureInfo *jpegTexture;
NSError *theError;
jpegTexture = [GLKTextureLoader textureWithContentsOfData:jpegImageData options:nil error:&theError];
but, as mentioned, jpegTexture comes back nil with the aforementioned error.) This works on small JPEGs, even using the subdataWithRange approach.
Likewise,
UIImage *image = [UIImage imageWithData:jpegImageData];
jpegTexture = [GLKTextureLoader textureWithCGImage:image.CGImage options:nil error:&theError];
returns nil with the same "code 12" error.
This iOS Developer page (Table 1-1) suggests that JPEG-2000 is supported on OS X only, but when I try the
CFArrayRef mySourceTypes = CGImageSourceCopyTypeIdentifiers();
CFShow(mySourceTypes);
approach for showing supported formats, JPEG-2000 is among them (running on my iOS device):
33 : <CFString 0x19d721bf8 [0x1a1da0150]>{contents = "public.jpeg-2000"}
Any suggestions for using the faster GLKTextureLoader methods on JPEG-2000?

Did you try the GLKit Framework method?
GLKTextureInfo *spriteTexture;
NSError *theError;
NSString *filePath = [[NSBundle mainBundle] pathForResource:@"Sprite" ofType:@"jpg"]; // 1
spriteTexture = [GLKTextureLoader textureWithContentsOfFile:filePath options:nil error:&theError]; // 2
glBindTexture(spriteTexture.target, spriteTexture.name); // 3
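Since the question loads image bytes from an in-memory NSData on a background thread, the asynchronous loader may be a closer fit; here is a minimal sketch, assuming a pre-existing EAGLContext named context (hypothetical name). Note this still won't help with the J2K decode failure described above:

// Sketch: asynchronous texture creation from in-memory JPEG data.
// `context` is an assumed pre-existing EAGLContext; error handling elided.
GLKTextureLoader *loader = [[GLKTextureLoader alloc] initWithSharegroup:context.sharegroup];
[loader textureWithContentsOfData:jpegImageData
                          options:nil
                            queue:NULL // NULL runs the completion handler on the main queue
                completionHandler:^(GLKTextureInfo *textureInfo, NSError *outError) {
                    if (textureInfo) {
                        // ... store textureInfo.name for use by the caller
                    } else {
                        NSLog(@"Texture load failed: %@", outError);
                    }
                }];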

Related

What is the proper colorspace conversion for this image on iOS?

How do we fix our code below to properly color the image produced from our incoming sampleBuffer?
We are attempting to convert an incoming sampleBuffer image to a UIImage, but the result is an inverted, off-color image.
You can see our attempt to use the kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange option in the code below, but the results were the same.
The incoming image has all the right colors, as evidenced by the fact that when we render it into a GLKView, all the colors are there.
Could this be a YUV420 conversion issue?
Here is the conversion code we are using:
- (CGImageRef)imageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer // Create a CGImageRef from sample buffer data
{
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer, 0); // Lock the image buffer

    // Get information of the image
    uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    // CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace,
                                                    kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange | kCGImageAlphaPremultipliedFirst);
    CGImageRef newImage = CGBitmapContextCreateImage(newContext);

    CGContextRelease(newContext);
    CGColorSpaceRelease(colorSpace);
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    /* CVBufferRelease(imageBuffer); */ // do not call this!

    return newImage;
}
Here is the setup code we use for the incoming CVPixelBuffer:
// Now create the CVPixelBuffer to which we will render
CVReturn retVal = CVPixelBufferCreate(kCFAllocatorDefault,
                                      self.screenWidth,
                                      self.screenHeight,
                                      kCVPixelFormatType_32BGRA,
                                      attrs,
                                      &_outputRenderTarget);
Any suggestions for what to try to restore and display all the proper colors?
There is no problem with the code shown, apart from the fact that you need to swap your commented/uncommented lines back.
I would take a step back and look at what is writing the data into your incoming buffer. For example, I use an AVAssetReaderVideoCompositionOutput object to generate the sample buffers. The initialisation of the video composition output takes video settings that match the ones you use for creating your incoming CVPixelBuffer, plus CGImage and bitmap-context compatibility keys.
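For reference, pixel buffer attributes along those lines might look like this (a sketch, assuming the 32BGRA pipeline used above):

// Sketch: 32BGRA pixel buffer attributes with the CGImage and
// bitmap-context compatibility keys described in the answer above.
NSDictionary *attrs = @{
    (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA),
    (id)kCVPixelBufferCGImageCompatibilityKey : @YES,
    (id)kCVPixelBufferCGBitmapContextCompatibilityKey : @YES
};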
It turns out we were inverting all the colors with a 1-r, 1-g, 1-b computation.
Once we removed the inversion, the colors appeared normal again.
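For reference, the inversion described amounts to complementing each color channel; a hypothetical reconstruction over a packed RGBA byte buffer (not the original code):

// Hypothetical reconstruction of the accidental inversion: every color
// channel replaced by its complement (1-r, 1-g, 1-b); alpha untouched.
uint8_t *px = (uint8_t *)baseAddress; // assumed tightly packed RGBA8 pixels
for (size_t i = 0; i < width * height; i++, px += 4) {
    px[0] = 255 - px[0]; // 1 - r
    px[1] = 255 - px[1]; // 1 - g
    px[2] = 255 - px[2]; // 1 - b
}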

OpenGL format texture

I am currently trying to display a video on screen using OpenGL ES 2 on iOS.
I will sum up what I am doing to play back and display the video on screen:
First, I have a .mov file recorded using a GPUImageMovieWriter object. When the recording is completed, I play back the video using AVPlayer. To retrieve frames from the video, I set up an AVPlayerItemVideoOutput:
NSDictionary *test = [NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_32BGRA]
                                                 forKey:(id)kCVPixelBufferPixelFormatTypeKey];
self.videoOutput = [[AVPlayerItemVideoOutput alloc] initWithPixelBufferAttributes:test];
I then call copyPixelBufferForItemTime on the AVPlayerItemVideoOutput and receive the CVImageBufferRef corresponding to the frame of the original video at a specific time.
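That retrieval step looks roughly like this (a sketch; the host-time query assumes a display-link-driven render loop, and setupTextureFromBuffer: is the function shown below):

// Sketch: pull the current frame from the AVPlayerItemVideoOutput.
CMTime itemTime = [self.videoOutput itemTimeForHostTime:CACurrentMediaTime()];
if ([self.videoOutput hasNewPixelBufferForItemTime:itemTime]) {
    CVPixelBufferRef pixelBuffer =
        [self.videoOutput copyPixelBufferForItemTime:itemTime itemTimeForDisplay:NULL];
    if (pixelBuffer) {
        [self setupTextureFromBuffer:pixelBuffer];
        CVBufferRelease(pixelBuffer); // copyPixelBufferForItemTime returns a +1 reference
    }
}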
Finally, here is the function I created to create an OpenGL texture from the buffer :
- (void)setupTextureFromBuffer:(CVImageBufferRef)imageBuffer {
    CVPixelBufferLockBaseAddress(imageBuffer, 0);

    GLsizei bufferHeight = (GLsizei)CVPixelBufferGetHeight(imageBuffer);
    GLsizei bufferWidth  = (GLsizei)CVPixelBufferGetWidth(imageBuffer);
    CVPixelBufferGetPixelFormatType(imageBuffer); // result unused

    glBindTexture(GL_TEXTURE_2D, m_videoTexture);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, bufferWidth, bufferHeight, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, CVPixelBufferGetBaseAddress(imageBuffer));

    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
}
By doing this (and also using some unrelated algorithms to do some augmented-reality things) I got a very strange result, as if the video had been cut into slices (I can't show you because I don't have enough reputation to do so).
It looks like the data is not being interpreted correctly by OpenGL (wrong format? wrong type?).
I checked whether it could be a corrupted buffer error by using this function :
- (void)saveImage:(CVPixelBufferRef)pixBuffer
{
    CIImage *ciImage = [CIImage imageWithCVPixelBuffer:pixBuffer];
    CIContext *temporaryContext = [CIContext contextWithOptions:nil];
    CGImageRef videoImage = [temporaryContext createCGImage:ciImage
                                                   fromRect:CGRectMake(0, 0,
                                                                       CVPixelBufferGetWidth(pixBuffer),
                                                                       CVPixelBufferGetHeight(pixBuffer))];
    UIImage *uiImage = [UIImage imageWithCGImage:videoImage];
    UIImageWriteToSavedPhotosAlbum(uiImage, self, @selector(image:didFinishSavingWithError:contextInfo:), nil);
}
-> The saved image appeared properly in the photo album.
It may come from the .mov file, but what can I do to check whether there's something wrong with that file?
Thanks a lot for your help; I've been stuck on this problem for hours/days!
You need to use kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange.
Then transfer them to separate chroma and luma OpenGL ES textures. There is an example at https://developer.apple.com/library/ios/samplecode/AVBasicVideoOutput/Listings/AVBasicVideoOutput_APLEAGLView_m.html
I tried using several RGB-based options but could not make them work.
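For reference, the linked sample maps the luma and chroma planes of the biplanar buffer into two GL textures through a CVOpenGLESTextureCache; a condensed sketch of that approach, assuming an ES2 context and a textureCache previously created with CVOpenGLESTextureCacheCreate:

// Sketch: per-plane textures from a kCVPixelFormatType_420YpCbCr8BiPlanar* buffer.
// Requires the GL_EXT_texture_rg formats (GL_RED_EXT / GL_RG_EXT) on ES2.
CVOpenGLESTextureRef lumaTexture = NULL, chromaTexture = NULL;
size_t w = CVPixelBufferGetWidth(pixelBuffer);
size_t h = CVPixelBufferGetHeight(pixelBuffer);

// Plane 0: full-resolution luma (Y).
CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, textureCache, pixelBuffer,
    NULL, GL_TEXTURE_2D, GL_RED_EXT, (GLsizei)w, (GLsizei)h,
    GL_RED_EXT, GL_UNSIGNED_BYTE, 0, &lumaTexture);

// Plane 1: half-resolution interleaved chroma (CbCr).
CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, textureCache, pixelBuffer,
    NULL, GL_TEXTURE_2D, GL_RG_EXT, (GLsizei)(w / 2), (GLsizei)(h / 2),
    GL_RG_EXT, GL_UNSIGNED_BYTE, 1, &chromaTexture);

// Bind each texture on its own unit and do the YUV-to-RGB conversion in the fragment shader.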

Loading a PNG file into OpenGL ES

I'm trying to apply a certain texture to an object in OpenGL ES; this is how I load it:
- (GLuint)setupTexture:(NSString *)fileName {
    // 1
    CGImageRef spriteImage = [UIImage imageNamed:fileName].CGImage;
    if (!spriteImage) {
        NSLog(@"Failed to load image %@", fileName);
        exit(1);
    }

    size_t width = CGImageGetWidth(spriteImage);
    size_t height = CGImageGetHeight(spriteImage);

    GLubyte *spriteData = (GLubyte *)calloc(width * height * 4, sizeof(GLubyte));
    CGContextRef spriteContext = CGBitmapContextCreate(spriteData, width, height, 8, width * 4,
                                                       CGImageGetColorSpace(spriteImage),
                                                       kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGContextDrawImage(spriteContext, CGRectMake(0, 0, width, height), spriteImage);
    CGContextRelease(spriteContext);
    ...
}
Now, when I use this tile_floor.png file:
the image is loaded and drawn on screen.
But when I use this wood.png file:
all I get is a black object.
Why the difference? Is there any significance to the file dimensions (width or height)? I did not hard-code any dimensions or parameters, so that all images can be loaded.
It's important to note that I don't get any errors in the console, and the program runs.
While you can use NPOT (non-power-of-two) images in many OpenGL ES implementations (all iOS devices support the extension GL_APPLE_texture_2D_limited_npot), you have to use the right edge mode:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
This will make the texture clamp to the edges of the 0-1 texture coordinate range. Your wood texture looks like it does not need to repeat anyway.
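Note that the limited NPOT support also rules out mipmapping, so the minification filter must be a non-mipmap one as well; a full NPOT-safe parameter set looks like this:

// NPOT-safe parameters under GL_APPLE_texture_2D_limited_npot:
// clamp both wrap axes and avoid GL_*_MIPMAP_* minification filters.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);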
OK, I got it: OpenGL ES requires power-of-two image sizes (both width and height).
I resized the photo and it worked.
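If resizing to power-of-two dimensions is the route taken, a small helper like this (hypothetical) computes the target size:

// Hypothetical helper: round a texture dimension up to the next power of two.
static inline size_t nextPowerOfTwo(size_t n) {
    size_t p = 1;
    while (p < n) p <<= 1;
    return p;
}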

GLKView snapshot method: null return val, getting an error

I can't figure out how to use the GLKView snapshot method.
I'm using a GLKView to render some OpenGL stuff. It all works; seems like I have it all set up correctly.
But, when I try to do a snapshot, it fails: I get a null return value, and the following log message:
Error: CGImageCreate: invalid image size: 0 x 0.
Seems like this would mean the view itself is invalid for some reason, but it's not -- everything is working, aside from this.
I've looked at a few code samples, and I'm not doing anything different.
So... anyone seen this before? Ideas?
I never figured out the above problem; however, I found an excellent workaround: this chunk of code, which just reads the render buffer and saves it to a UIImage. Problem solved!
- (UIImage *)snapshotRenderBuffer {
    // Bind the color renderbuffer used to render the OpenGL ES view.
    // If your application only creates a single color renderbuffer which is already bound at this point,
    // this call is redundant, but it is needed if you're dealing with multiple renderbuffers.
    // Note: replace "viewRenderbuffer" with the actual name of the renderbuffer object defined in your class.
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);

    NSInteger dataLength = backingWidth * backingHeight * 4;
    GLubyte *data = (GLubyte *)malloc(dataLength * sizeof(GLubyte));

    // Read pixel data from the framebuffer (glReadPixels takes integer coordinates)
    glPixelStorei(GL_PACK_ALIGNMENT, 4);
    glReadPixels(0, 0, backingWidth, backingHeight, GL_RGBA, GL_UNSIGNED_BYTE, data);

    // Create a CGImage with the pixel data.
    // If your OpenGL ES content is opaque, use kCGImageAlphaNoneSkipLast to ignore the alpha channel;
    // otherwise, use kCGImageAlphaPremultipliedLast.
    CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, data, dataLength, NULL);
    CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
    CGImageRef iref = CGImageCreate(
        backingWidth, backingHeight, 8, 32, backingWidth * 4, colorspace,
        kCGBitmapByteOrder32Big | kCGImageAlphaNoneSkipLast,
        ref, NULL, true, kCGRenderingIntentDefault);

    // (sayeth abd)
    // This creates a context with the device pixel dimensions -- not points.
    // To be compatible with all devices, you're meant to keep everything as points and a scale factor; but
    // this gives us a scaled-down image for purposes of saving. So, keep everything in device resolution,
    // and worry about it later...
    UIGraphicsBeginImageContextWithOptions(CGSizeMake(backingWidth, backingHeight), NO, 0.0f);
    CGContextRef cgcontext = UIGraphicsGetCurrentContext();
    CGContextSetBlendMode(cgcontext, kCGBlendModeCopy);
    CGContextDrawImage(cgcontext, CGRectMake(0.0, 0.0, backingWidth, backingHeight), iref);

    // Retrieve the UIImage from the current context
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // Clean up (releasing the image, provider, and colorspace avoids leaking them on every snapshot)
    CGImageRelease(iref);
    CGColorSpaceRelease(colorspace);
    CGDataProviderRelease(ref);
    free(data);

    return image;
}
Maybe this doesn't apply in your case, but the docs for the GLKView snapshot method say:
Never call this method inside your drawing function.
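In keeping with that warning, the call has to happen outside the draw path; a minimal sketch, assuming a hypothetical glkView property on the view controller:

// Sketch: take the snapshot from an action handler, never from inside
// drawRect: or glkView:drawInRect:. `self.glkView` is a hypothetical property.
- (IBAction)saveSnapshot:(id)sender {
    UIImage *snapshot = self.glkView.snapshot;
    if (snapshot) {
        UIImageWriteToSavedPhotosAlbum(snapshot, nil, NULL, NULL);
    }
}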

ios opencv cvReleaseImage - when does it release the memory?

I am new to OpenCV. I am integrating it into my iOS project.
In my project I am converting from UIImage to IplImage and vice versa.
I am also applying different image filters using the OpenCV library.
I am testing my app for leaks. Running with the memory monitor, I notice that my app grows by approximately 1 megabyte each time I run my OpenCV set of functions.
Every time I allocate a new IplImage, I later release it by calling:
cvReleaseImage(&iplimage);
I am using the simulator to force low-memory warnings and thus release image memory.
It doesn't seem to influence my app's memory footprint.
When does cvReleaseImage really free the memory? Am I leaking?
I am using iOS 5.1 with ARC turned on.
Edit:
This is the code I am using (some copy-paste I found) to convert an IplImage to a UIImage:
- (UIImage *)UIImageFromIplImage:(IplImage *)image {
    NSLog(@"IplImage (%d, %d) %d bits by %d channels, %d bytes/row %s",
          image->width, image->height, image->depth, image->nChannels, image->widthStep, image->channelSeq);

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    NSData *data = [NSData dataWithBytes:image->imageData length:image->imageSize];
    CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data);

    CGImageRef imageRef = CGImageCreate(image->width, image->height,
                                        image->depth, image->depth * image->nChannels, image->widthStep,
                                        colorSpace, kCGBitmapByteOrderDefault | kCGImageAlphaLast,
                                        provider, NULL, false, kCGRenderingIntentDefault);
    UIImage *ret = [UIImage imageWithCGImage:imageRef];

    CGImageRelease(imageRef);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpace);
    return ret;
}
cvReleaseImage definitely frees the memory the moment it is called. It calls the deallocator for the two parts of the image:
data
image header
If your app reports memory leaks, you should check the rest of the code; maybe you create two images and release only one, for example.
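As a sanity check, every cvCreateImage (or clone/copy) call needs exactly one matching cvReleaseImage; a minimal sketch of the balanced pairing:

// Sketch: balanced create/release. cvReleaseImage frees the pixel data and
// the header immediately, and NULLs the pointer that was passed in.
IplImage *img = cvCreateImage(cvSize(640, 480), IPL_DEPTH_8U, 4);
// ... apply filters, convert, etc. ...
cvReleaseImage(&img); // img is NULL after this call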
