How to solve "ImageIO: JPEG Application transferred too few scanlines" - iOS

When I save the image to the photo album, it fails with this error:
error info: ImageIO: JPEG Application transferred too few scanlines
Has anyone seen this before? Thanks.
// make data provider with data.
Float32 picSize = texture->image_size.width * texture->image_size.height * texture->bytesPerPixel;
NSLog(@"pic size:%f", picSize);
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, texture->data, texture->image_size.width * texture->image_size.height * texture->bytesPerPixel, NULL);
// prep the ingredients
int bitsPerComponent = 8;
int bitsPerPixel = 32;
int bytesPerRow = 4 * 320;
CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
// make the cgimage
CGImageRef imageRef = CGImageCreate(320, 480, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);
// then make the uiimage from that
UIImage *myImage = [UIImage imageWithCGImage:imageRef];
UIImageWriteToSavedPhotosAlbum(myImage, self, nil, nil);

I encountered a similar error message when I was feeding a greyscale image to a program that expected a TrueColor image.
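In the code from the question, the CGImage is always described as 32-bit RGB with a hard-coded 4 * 320 bytes per row, regardless of what texture->bytesPerPixel actually is; if the buffer is really single-channel, the parameters should describe that instead. A minimal sketch, assuming texture->data holds one 8-bit grey sample per pixel (i.e. texture->bytesPerPixel == 1):
size_t width = texture->image_size.width;
size_t height = texture->image_size.height;
// Describe the data as it actually is: 8 bits per pixel, one grey channel, no alpha.
CGColorSpaceRef graySpace = CGColorSpaceCreateDeviceGray();
CGDataProviderRef grayProvider = CGDataProviderCreateWithData(NULL, texture->data,
                                                              width * height, NULL);
CGImageRef grayImage = CGImageCreate(width, height,
                                     8,        // bitsPerComponent
                                     8,        // bitsPerPixel
                                     width,    // bytesPerRow
                                     graySpace,
                                     kCGBitmapByteOrderDefault | kCGImageAlphaNone,
                                     grayProvider, NULL, NO, kCGRenderingIntentDefault);
UIImage *grayUIImage = [UIImage imageWithCGImage:grayImage];
UIImageWriteToSavedPhotosAlbum(grayUIImage, nil, nil, NULL);
CGColorSpaceRelease(graySpace);
CGDataProviderRelease(grayProvider);
CGImageRelease(grayImage);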

Related

Unable to change the colour of pixel in UIImage

I already got my problem solved by using different code; I just want to know what is wrong with the following one.
I wanted to change the colour of every pixel in a UIImage using its bitmap data. My code is as follows:
- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
{
    UIImage *image = self.imageViewMain.image;
    CGImageRef imageRef = image.CGImage;
    NSData *data = (NSData *)CFBridgingRelease(CGDataProviderCopyData(CGImageGetDataProvider(imageRef)));
    char *pixels = (char *)[data bytes];
    // this is where we manipulate the individual pixels
    for (int i = 1; i < [data length]; i += 3)
    {
        int r = i;
        int g = i + 1;
        int b = i + 2;
        int a = i + 3;
        pixels[r] = 0; // eg. remove red
        pixels[g] = pixels[g];
        pixels[b] = pixels[b];
        pixels[a] = pixels[a];
    }
    // create a new image from the modified pixel data
    size_t width = CGImageGetWidth(imageRef);
    size_t height = CGImageGetHeight(imageRef);
    size_t bitsPerComponent = CGImageGetBitsPerComponent(imageRef);
    size_t bitsPerPixel = CGImageGetBitsPerPixel(imageRef);
    size_t bytesPerRow = CGImageGetBytesPerRow(imageRef);
    CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
    CGBitmapInfo bitmapInfo = CGImageGetBitmapInfo(imageRef);
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, pixels, [data length], NULL);
    CGImageRef newImageRef = CGImageCreate(width,
                                           height,
                                           bitsPerComponent,
                                           bitsPerPixel,
                                           bytesPerRow,
                                           colorspace,
                                           bitmapInfo,
                                           provider,
                                           NULL,
                                           false,
                                           kCGRenderingIntentDefault);
    // the modified image
    UIImage *newImage = [UIImage imageWithCGImage:newImageRef];
    // cleanup
    free(pixels);
    CGImageRelease(imageRef);
    CGColorSpaceRelease(colorspace);
    CGDataProviderRelease(provider);
    CGImageRelease(newImageRef);
}
But when this code runs, it crashes with EXC_BAD_ACCESS.
What is it that I'm missing or doing wrong?
Try allocating your own memory for the pixels array, as in the following code. [data bytes] returns a read-only pointer into the NSData object's internal buffer, which must not be written to or freed.
char *pixels = (char *)malloc(data.length);
memcpy(pixels, [data bytes], data.length);
When pixels is no longer needed, release the memory by calling free(pixels).
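Putting that together, a minimal sketch of the corrected pixel loop (assuming the image really is 4 bytes per pixel, RGBA; note that the original loop also steps by 3 while touching 4 components, so it walks past the end of the buffer):
// Release callback so the malloc'd copy is freed once Core Graphics is done with it.
static void releasePixels(void *info, const void *data, size_t size)
{
    free((void *)data);
}

// ... inside touchesEnded:withEvent:, as before:
NSData *data = (NSData *)CFBridgingRelease(CGDataProviderCopyData(CGImageGetDataProvider(imageRef)));
char *pixels = (char *)malloc(data.length);           // writable copy that we own
memcpy(pixels, [data bytes], data.length);
for (NSUInteger i = 0; i + 3 < data.length; i += 4)   // one 4-byte RGBA pixel per step
{
    pixels[i] = 0;                                     // e.g. remove red; green, blue and alpha stay as they are
}
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, pixels, data.length, releasePixels);
// ... then CGImageCreate(...) exactly as before, but without free(pixels)
// and without CGImageRelease(imageRef), since image.CGImage is not owned here.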

Create PNG UIImage from OpenGL drawing

I'm exporting my OpenGL drawings to a UIImage using the following method:
-(UIImage*)saveOpenGLDrawnToUIImage:(NSInteger)aWidth height:(NSInteger)aHeight {
    NSInteger myDataLength = aWidth * aHeight * 4;
    GLubyte *buffer = (GLubyte *)malloc(myDataLength);
    glReadPixels(0, 0, aWidth, aHeight, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
    GLubyte *buffer2 = (GLubyte *)malloc(myDataLength);
    for (int y = 0; y < aHeight; y++)
    {
        for (int x = 0; x < aWidth * 4; x++)
        {
            buffer2[(aHeight - 1 - y) * aWidth * 4 + x] = buffer[y * 4 * aWidth + x];
        }
    }
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer2, myDataLength, NULL);
    int bitsPerComponent = 8;
    int bitsPerPixel = 32;
    int bytesPerRow = 4 * aWidth;
    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
    CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
    CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
    CGImageRef imageRef = CGImageCreate(aWidth, aHeight, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);
    UIImage *myImage = [UIImage imageWithCGImage:imageRef];
    return myImage;
}
But the background is always black instead of transparent.
I tried using CGImageRef imageRef = CGImageCreateWithPNGDataProvider(provider, NULL, NO, kCGRenderingIntentDefault); but it always returns nil.
How can I get this to come out with a transparent background?
CGImageCreateWithPNGDataProvider returns nil because the provider holds raw pixels, not PNG-encoded data, so this is really a question of how to create an image from raw RGBA data. I think you are missing a flag in the bitmap info that says you want an alpha channel.
Try Creating UIImage from raw RGBA data; the first comment there suggests using kCGBitmapByteOrder32Big | kCGImageAlphaLast.
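For example (a sketch; only the bitmap-info line of the method above changes, and kCGImageAlphaPremultipliedLast would be the choice if the GL output is premultiplied):
CGBitmapInfo bitmapInfo = kCGBitmapByteOrder32Big | kCGImageAlphaLast;  // RGBA with a real alpha channel
CGImageRef imageRef = CGImageCreate(aWidth, aHeight, bitsPerComponent, bitsPerPixel, bytesPerRow,
                                    colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);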

Saving buffer with a 16-bit floating point texture

I cannot find a combination of parameters that works with float (I managed to save with unsigned byte):
float *rawImagePixels = (float*)malloc(width * height * sizeof(float));
glPixelStorei(GL_PACK_ALIGNMENT, 4);
glReadPixels(0, 0, width, height, GL_RED_EXT, GL_HALF_FLOAT_OES, rawImagePixels);
NSData *data = [NSData dataWithBytes:rawImagePixels length:width*height];
CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data);
CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceGray();
CGImageRef iref = CGImageCreate(width, height, 16, 16, width, colorspace, kCGImageAlphaNone|kCGBitmapByteOrder16Big, provider, NULL, NO, kCGRenderingIntentDefault);
UIImage *myImage = [UIImage imageWithCGImage:iref];
UIImageWriteToSavedPhotosAlbum(myImage, self, nil, nil);
My texture is a 1-channel half float texture. How can I save it as a UIImage?
Extracted from the question
I got semi-decent results with this code:
GLhalf *rawImagePixels = (GLhalf*)malloc(width * height * sizeof(GLhalf));
glPixelStorei(GL_PACK_ALIGNMENT, 2);
glReadPixels(0, 0, width, height, GL_RED_EXT, GL_HALF_FLOAT_OES, rawImagePixels);
NSData *data = [NSData dataWithBytes:rawImagePixels length:width * height * sizeof(GLhalf)];
CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data);
CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceGray();
int bitsPerComponent = 16, bitsPerPixel = 16, bytesPerRow = width * 2;
CGImageRef iref = CGImageCreate(width, height, bitsPerComponent, bitsPerPixel, bytesPerRow, colorspace, kCGImageAlphaNone|kCGBitmapByteOrder16Little, provider, NULL, NO, kCGRenderingIntentDefault);
UIImage *myImage = [UIImage imageWithCGImage:iref];
UIImageWriteToSavedPhotosAlbum(myImage, self, nil, nil);

Checking for duplicates by comparing UIImages

I use this code, but it is very slow. Is there any other way to do it?
I tried using indexOfObject and containsObject on an array of images, but that does not work for me.
BOOL haveDublicate = NO;
UIImage *i = [ImageManager imageFromPath:path];
NSArray *photoImages = [ImageManager imagesFromPaths:photoPaths];
for (UIImage *saved in photoImages)
{
    if ([UIImagePNGRepresentation(saved) isEqualToData:
         UIImagePNGRepresentation(i)])
    {
        haveDublicate = YES;
    }
}
I think you should check the size of the images first. If the size and scale of both images are equal, compare the pixel data directly for equality, not the images' PNG representations; this will be much faster. (The link shows how to get the pixel data. To compare it, use memcmp.)
From that post (slightly modified):
NSData *rawDataFromUIImage(UIImage *image)
{
    assert(image);
    // Get the image into the data buffer
    CGImageRef imageRef = [image CGImage];
    NSUInteger width = CGImageGetWidth(imageRef);
    NSUInteger height = CGImageGetHeight(imageRef);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    int byteSize = height * width * 4;
    unsigned char *rawData = (unsigned char *)malloc(byteSize);
    NSUInteger bytesPerPixel = 4;
    NSUInteger bytesPerRow = bytesPerPixel * width;
    NSUInteger bitsPerComponent = 8;
    CGContextRef context = CGBitmapContextCreate(rawData, width, height,
                                                 bitsPerComponent, bytesPerRow, colorSpace,
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
    CGContextRelease(context);
    // Hand ownership of rawData to the NSData object so the buffer is freed along with it
    return [NSData dataWithBytesNoCopy:rawData length:byteSize freeWhenDone:YES];
}
About why this is faster: UIImagePNGRepresentation (1) fetches the raw binary data and then (2) converts it to PNG format. Skipping the second step can only improve performance, because it is much more work than just doing step 1. And memcmp is faster than everything else in this example.
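A sketch of how the comparison could then look (the helper name imagesHaveIdenticalPixels is made up here; it just combines the size/scale check with rawDataFromUIImage and memcmp):
BOOL imagesHaveIdenticalPixels(UIImage *a, UIImage *b)
{
    // Cheap checks first: different dimensions or scales can never match.
    if (!CGSizeEqualToSize(a.size, b.size) || a.scale != b.scale)
        return NO;
    NSData *bytesA = rawDataFromUIImage(a);
    NSData *bytesB = rawDataFromUIImage(b);
    if (bytesA.length != bytesB.length)
        return NO;
    return memcmp(bytesA.bytes, bytesB.bytes, bytesA.length) == 0;
}
Calling this inside the loop from the question (and breaking on the first match) avoids re-encoding the candidate image as PNG on every iteration.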

Show camera streaming from frames by using UIImageView - memory leak crash

I am using the QCAR library, and I get the camera frame from it on every frame.
I am trying to show each frame by calling setImage on my UIImageView *mCurFrameView. This works at first, and the frames run smoothly, but after about 20 seconds it crashes.
Sometimes I get EXC_BAD_ACCESS on
int retVal = UIApplicationMain(argc, argv, nil, nil);
Sometimes it's just gdb, paused.
Sometimes, before the crash, I get
2012-02-24 15:59:15.726 QRAR_nextGen[226:707] Received memory warning.
Here's my code:
-(void)SaveCurrentFrame:(UIImage*)image
{
    mCurFrameView.image = image;
}

- (void)renderFrameQCAR
{
    cout << "I am starting" << endl;
    QCAR::State state = QCAR::Renderer::getInstance().begin();
    QCAR::setFrameFormat(QCAR::RGB888, true);
    const QCAR::Image *image = state.getFrame().getImage(1); // 0: YUV, 1: Grayscale image
    if (image)
    {
        const char *data = (const char *)image->getPixels();
        int width = image->getWidth();
        int height = image->getHeight();
        colorSpace = CGColorSpaceCreateDeviceRGB();
        bitmapInfo = kCGBitmapByteOrderDefault;
        provider = CGDataProviderCreateWithData(NULL, data, width * height * 3, NULL);
        intent = kCGRenderingIntentDefault;
        imageRef = CGImageCreate(width, height, 8, 8 * 3, width * 3, colorSpace, bitmapInfo, provider, NULL, NO, intent);
        mCurFrame = [UIImage imageWithCGImage:imageRef];
        cout << "I am waiting" << endl;
        [self performSelectorOnMainThread:@selector(SaveCurrentFrame:) withObject:mCurFrame waitUntilDone:YES];
    }
}
I've tried several things: using a CALayer to show the camera, release/retain/autorelease, defining and not defining the property and synthesizing it.
I'd appreciate it a lot if someone could help me; I am losing my mind. Thanks a lot.
You are not releasing each CGDataProvider, so you are leaking it and keeping every frame's image data in memory. Try calling CGDataProviderRelease(provider) after performSelectorOnMainThread.
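For example, the end of renderFrameQCAR could look like this (a sketch; colorSpace and imageRef are released too, since they are also created with Create calls every frame and the UIImage keeps its own references):
[self performSelectorOnMainThread:@selector(SaveCurrentFrame:) withObject:mCurFrame waitUntilDone:YES];
CGDataProviderRelease(provider);   // without this, every frame's pixel data stays alive
CGImageRelease(imageRef);
CGColorSpaceRelease(colorSpace);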
Maybe you are missing this:
QCAR::Renderer::getInstance().end();
Here's a snippet working on an iPhone 4S/5 with iOS 6:
- (void)renderFrameIntoImage {
    QCAR::State state = QCAR::Renderer::getInstance().begin();
    QCAR::setFrameFormat(QCAR::GRAYSCALE, true);
    const QCAR::Image *image = state.getFrame().getImage(0); // 0: YUV, 1: Grayscale image
    const char *data = (const char *)image->getPixels();
    int width = image->getWidth();
    int height = image->getHeight();
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
    CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, data, width * height, NULL);
    CGColorRenderingIntent intent = kCGRenderingIntentDefault;
    CGImageRef imageRef = CGImageCreate(width, height, 8, 8, width * 1, colorSpace, bitmapInfo, provider, NULL, NO, intent);
    UIImage *myImage = [UIImage imageWithCGImage:imageRef];
    QCAR::Renderer::getInstance().end();
    UIImageWriteToSavedPhotosAlbum(myImage, nil, nil, NULL);
}
You have three leaks.
You must call the matching Release function for every Core Foundation function with Create in its name:
CGColorSpaceRelease(colorSpace);
CGDataProviderRelease(provider);
CGImageRelease(imageRef);
A C++ method to do the QCAR::Image to UIImage conversion
inline UIImage *imageWithQCARCameraImage(const QCAR::Image *cameraImage)
{
    UIImage *image = nil;
    if (cameraImage) {
        CGColorSpaceRef colorSpace = NULL;
        QCAR::PIXEL_FORMAT pixelFormat = cameraImage->getFormat();
        int bitsPerPixel = QCAR::getBitsPerPixel(pixelFormat);
        switch (pixelFormat) {
            case QCAR::RGB888:
                colorSpace = CGColorSpaceCreateDeviceRGB();
                break;
            case QCAR::GRAYSCALE:
                colorSpace = CGColorSpaceCreateDeviceGray();
                break;
            case QCAR::YUV:
            case QCAR::RGB565:
            case QCAR::RGBA8888:
            case QCAR::INDEXED:
                std::cerr << "Image format conversion not implemented." << std::endl;
                break;
            case QCAR::UNKNOWN_FORMAT:
                std::cerr << "Image format unknown." << std::endl;
                break;
        }
        float width = cameraImage->getWidth();
        float height = cameraImage->getHeight();
        int bytesPerRow = cameraImage->getStride();
        const void *baseAddress = cameraImage->getPixels();
        size_t totalBytes = QCAR::getBufferSize(width, height, pixelFormat);
        if (bitsPerPixel > 0 && colorSpace != NULL) {
            CGDataProviderRef provider = CGDataProviderCreateWithData(NULL,
                                                                      baseAddress,
                                                                      totalBytes,
                                                                      NULL);
            int bitsPerComponent = 8;
            CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault | kCGImageAlphaNone;
            CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
            CGImageRef imageRef = CGImageCreate(width,
                                                height,
                                                bitsPerComponent,
                                                bitsPerPixel,
                                                bytesPerRow,
                                                colorSpace,
                                                bitmapInfo,
                                                provider,
                                                NULL,
                                                NO,
                                                renderingIntent);
            image = [UIImage imageWithCGImage:imageRef];
            CGColorSpaceRelease(colorSpace);
            CGDataProviderRelease(provider);
            CGImageRelease(imageRef);
        }
    }
    return image;
}
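Hypothetical usage from inside a render callback (the function returns an autoreleased UIImage, so nothing extra has to be released at the call site):
const QCAR::Image *cameraImage = state.getFrame().getImage(1); // 1: greyscale frame, as above
UIImage *converted = imageWithQCARCameraImage(cameraImage);
if (converted) {
    UIImageWriteToSavedPhotosAlbum(converted, nil, nil, NULL);
}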
