Convert raw RGBA to UIImage - ios

Yes, I know this sounds like a duplicate... but I've tried lots of examples, and even Apple's official code on their developer page, and the result is a white image on the iPhone 4S and iPad 3.
Works great on iPad 1 and the iPhone Simulator though.
The width and height passed to convertBitmapRGBA8ToUIImage are in actual pixels, not UIKit 'points'. That is, on iPad 1 they are 1024 x 768; on iPad 3 they are 2048 x 1536.
The raw data buffer is RGBA data read from glReadPixels and flipped manually (see the sketch below) before being passed to tweetMessage().
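For reference, the manual flip is just a row-by-row copy into a second buffer; a minimal sketch (the question doesn't show this code, so the variable names are assumptions):
// glReadPixels returns rows bottom-up, so copy them into a new buffer top-down
unsigned char *flipped = malloc(width * height * 4);
int rowBytes = width * 4;
for (int y = 0; y < height; y++) {
    memcpy(flipped + y * rowBytes, buffer + (height - 1 - y) * rowBytes, rowBytes);
}
And here is the conversion method that produces the white image: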
- (UIImage *) convertBitmapRGBA8ToUIImage:(unsigned char *) buffer
withWidth:(int) width
withHeight:(int) height {
CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, buffer, width * height * 4, NULL);
CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
CGImageRef iref = CGImageCreate(width, height, 8, 32, width * 4, colorspace, kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast, ref, NULL, true, kCGRenderingIntentDefault);
float scale = 1.0f;
if ([[UIScreen mainScreen] respondsToSelector:@selector(scale)])
{
scale = [[UIScreen mainScreen] scale];
}
UIGraphicsBeginImageContextWithOptions(CGSizeMake(width/scale, height/scale), NO, scale);
CGContextRef cgcontext = UIGraphicsGetCurrentContext();
CGContextSetBlendMode(cgcontext, kCGBlendModeCopy);
// Image needs to be flipped BACK for CG
CGContextTranslateCTM(cgcontext, 0, height/scale);
CGContextScaleCTM(cgcontext, 1, -1);
CGContextDrawImage(cgcontext, CGRectMake(0.0, 0.0, width/scale, height/scale), iref);
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
CFRelease(ref);
CFRelease(colorspace);
CGImageRelease(iref);
return image;
}
- (void)tweetMessage:(const char *)message withURL:(const char *)url withImage:(unsigned char*)rawRGBAImage withWidth:(unsigned int)imageWidth withHeight:(unsigned int)imageHeight
{
UIImage *tweetImage = nil;
if (rawRGBAImage != nil)
{
// Convert raw data to UIImage
tweetImage = [self convertBitmapRGBA8ToUIImage:rawRGBAImage withWidth:imageWidth withHeight:imageHeight];
}
}

Related

how to change rgb value of an image in iOS?

I have a UIImage and I want to decrease the RGB value of each pixel. How can I do that?
Or how can I change one color to another color in an image?
[Xcode 8, Swift 3]
This answer only applies if you want to change individual pixels.
First use UIImage.cgImage to get a CGImage. Next, create a bitmap context using CGBitmapContextCreate with the CGColorSpaceCreateDeviceRGB colour space and whatever bitmap info flags you want.
Then call CGContextDrawImage to draw the image into the context, which is backed by a pixel array provided by you. Clean up, and you now have an array of pixels.
- (uint8_t *)getPixels:(UIImage *)image {
    CGColorSpaceRef cs = CGColorSpaceCreateDeviceRGB();
    size_t width = (size_t)image.size.width;
    size_t height = (size_t)image.size.height;
    uint8_t *pixels = malloc(width * height * 4);
    CGContextRef ctx = CGBitmapContextCreate(pixels, width, height, 8, width * 4, cs, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(cs);
    // Draw the image into the buffer-backed context so `pixels` is filled with its RGBA data
    CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), image.CGImage);
    CGContextRelease(ctx);
    return pixels;
}
Modify the pixels however you want.
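For example, to decrease the RGB value of each pixel, which is what the question asks for, here is a minimal sketch assuming the interleaved RGBA buffer returned by getPixels: above:
// Subtract `amount` from R, G and B of every pixel, leaving alpha untouched
void darkenPixels(uint8_t *pixels, size_t width, size_t height, uint8_t amount) {
    for (size_t i = 0; i < width * height * 4; i += 4) {
        for (size_t c = 0; c < 3; c++) {
            pixels[i + c] = pixels[i + c] > amount ? pixels[i + c] - amount : 0;
        }
    }
}
Then recreate the image from the pixels: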
- (UIImage *)imageFromPixels:(uint8_t *)pixels width:(NSUInteger)width height:(NSUInteger)height {
    CGDataProviderRef provider = CGDataProviderCreateWithData(nil, pixels, width * height * 4, nil);
    CGColorSpaceRef cs = CGColorSpaceCreateDeviceRGB();
    CGImageRef cgImageRef = CGImageCreate(width, height, 8, 32, width * 4, cs, kCGBitmapByteOrderDefault | kCGImageAlphaPremultipliedLast, provider, nil, NO, kCGRenderingIntentDefault);
    // Draw into a separate buffer-backed context, then snapshot that context
    uint8_t *contextBuffer = malloc(width * height * 4);
    CGContextRef ctx = CGBitmapContextCreate(contextBuffer, width, height, 8, width * 4, cs, kCGBitmapByteOrderDefault | kCGImageAlphaPremultipliedLast);
    CGContextDrawImage(ctx, (CGRect) { .origin.x = 0, .origin.y = 0, .size.width = width, .size.height = height }, cgImageRef);
    CGImageRef cgImage = CGBitmapContextCreateImage(ctx);
    UIImage *image = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);
    CGContextRelease(ctx);
    CGColorSpaceRelease(cs);
    CGImageRelease(cgImageRef);
    CGDataProviderRelease(provider);
    free(contextBuffer);
    return image;
}
Another way is to use the image as a template and tint it with whatever color you want.
extension UIImageView {
func changeImageColor( color:UIColor) -> UIImage
{
image = image!.withRenderingMode(.alwaysTemplate)
tintColor = color
return image!
}
}
//Change color of logo
logoImage.image = logoImage.changeImageColor(color: .red)

iOS screenshot failed with OpenGL

This is my code:
@implementation UIView (LCExtension)
- (UIImage *)screenshotWithRect:(CGRect)rect {
UIGraphicsBeginImageContext(self.bounds.size);
[self.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
image = [UIImage imageWithCGImage:CGImageCreateWithImageInRect(image.CGImage, rect)];
UIGraphicsEndImageContext();
return image;
}
@end
I set a breakpoint here and ran. The view itself (self) shows the OpenGL-rendered content correctly on screen, but the generated picture is black.
Why is it a black picture?
When working with OpenGL ES (I'm assuming OpenGL ES, since you have a UIView category), remember to capture the screen right before presenting your renderbuffer. Once you, or a GLKViewController subclass, present the renderbuffer, it becomes the front framebuffer, and the framebuffer you read from becomes the back framebuffer, which gives you nothing except the color you set with glClearColor.
[EAGLContext presentRenderbuffer:]
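In practice that means grabbing the frame while it is still the back buffer, i.e. just before the present call; a minimal sketch, assuming an EAGLContext named context and the createImageFromFramebuffer method shown below:
// Capture first, then present; once presented, the back buffer no longer holds the frame
UIImage *snapshot = [self createImageFromFramebuffer];
[context presentRenderbuffer:GL_RENDERBUFFER];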
Here is sample code for capturing the screen:
- (UIImage *)createImageFromFramebuffer {
GLint params[10];
glGetIntegerv(GL_VIEWPORT, params);
int width = params[2];
int height = params[3];
const int renderTargetWidth = width;
const int renderTargetHeight = height;
const int renderTargetSize = renderTargetWidth*renderTargetHeight * 4;
uint8_t* imageBuffer = (uint8_t*)malloc(renderTargetSize);
glReadPixels(params[0], params[1],
renderTargetWidth, renderTargetHeight,
GL_RGBA, GL_UNSIGNED_BYTE, imageBuffer);
const int rowSize = renderTargetWidth*4;
CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, imageBuffer, renderTargetSize, NULL);
CGImageRef iref = CGImageCreate(renderTargetWidth, renderTargetHeight, 8, 32, rowSize,
CGColorSpaceCreateDeviceRGB(),
kCGImageAlphaLast | kCGBitmapByteOrderDefault, ref,
NULL, true, kCGRenderingIntentDefault);
uint8_t* contextBuffer = (uint8_t*)malloc(renderTargetSize);
memset(contextBuffer, 0, renderTargetSize);
CGContextRef context = CGBitmapContextCreate(contextBuffer, renderTargetWidth, renderTargetHeight, 8, rowSize,
CGImageGetColorSpace(iref),
kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrder32Big);
CGContextTranslateCTM(context, 0.0, renderTargetHeight);
CGContextScaleCTM(context, 1.0, -1.0);
CGContextDrawImage(context, CGRectMake(0.0, 0.0, renderTargetWidth, renderTargetHeight), iref);
CGImageRef outputRef = CGBitmapContextCreateImage(context);
UIImage* image = [[UIImage alloc] initWithCGImage:outputRef];
CGImageRelease(outputRef);
CGContextRelease(context);
CGImageRelease(iref);
CGDataProviderRelease(ref);
free(contextBuffer);
free(imageBuffer);
return image;
}

iOS 7: how to take an OpenGL ES screenshot

I'm using VLCKit to play video in my app, and I need to be able to take a screenshot of the video at certain points. This is the code I'm using:
-(NSData*)generateThumbnail{
int s = 1;
UIScreen* screen = [UIScreen mainScreen];
if ([screen respondsToSelector:@selector(scale)]) {
s = (int) [screen scale];
}
GLint viewport[4];
glGetIntegerv(GL_VIEWPORT, viewport);
int width = viewport[2];
int height = [_profile.resolution integerValue];//viewport[3];
int myDataLength = width * height * 4;
GLubyte *buffer = (GLubyte *) malloc(myDataLength);
GLubyte *buffer2 = (GLubyte *) malloc(myDataLength);
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
for(int y1 = 0; y1 < height; y1++) {
for(int x1 = 0; x1 <width * 4; x1++) {
buffer2[(height - 1 - y1) * width * 4 + x1] = buffer[y1 * 4 * width + x1];
}
}
free(buffer);
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer2, myDataLength, NULL);
int bitsPerComponent = 8;
int bitsPerPixel = 32;
int bytesPerRow = 4 * width;
CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
CGImageRef imageRef = CGImageCreate(width, height, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);
CGColorSpaceRelease(colorSpaceRef);
CGDataProviderRelease(provider);
UIImage *image = [ UIImage imageWithCGImage:imageRef scale:s orientation:UIImageOrientationUp ];
NSData *thumbAsData = UIImageJPEGRepresentation(image, 5);
return thumbAsData;
}
To be honest, I have no idea how most of this works. I copied it from somewhere a while ago (I don't remember the source). It mostly works, but parts of the image frequently seem to be missing.
Can someone point me in the right direction? Most of the other posts I see regarding OpenGL screenshots are fairly old, and don't seem to apply.
Thanks.
I wrote a class to work around this problem.
Basically you take a screenshot directly from the screen, and then, if you want, you can crop out just a part of the image and also scale it.
A screenshot taken from the screen captures everything: UIKit, OpenGL, AVFoundation, etc.
Here the class: https://github.com/matteogobbi/MGScreenshotHelper/
Below are the useful functions, but I suggest you download (and star :D) my helper class directly ;)
/* Get the screenshot of the screen (useful when you have UIKit elements and OpenGL or AVFoundation stuff */
+ (UIImage *)screenshotFromScreen
{
CGImageRef UIGetScreenImage(void);
CGImageRef screen = UIGetScreenImage();
UIImage* screenImage = [UIImage imageWithCGImage:screen];
CGImageRelease(screen);
return screenImage;
}
/* Get the screenshot of a determinate rect of the screen, and scale it to the size that you want. */
+ (UIImage *)getScreenshotFromScreenWithRect:(CGRect)captureRect andScaleToSize:(CGSize)newSize
{
UIImage *image = [[self class] screenshotFromScreen];
image = [[self class] cropImage:image withRect:captureRect];
image = [[self class] scaleImage:image toSize:newSize];
return image;
}
#pragma mark - Other methods
/* Get a portion of an image */
+ (UIImage *)cropImage:(UIImage *)image withRect:(CGRect)rect
{
    CGImageRef imageRef = CGImageCreateWithImageInRect([image CGImage], rect);
    UIImage *croppedImage = [UIImage imageWithCGImage:imageRef];
    // Release the cropped CGImage; UIImage retains what it needs
    CGImageRelease(imageRef);
    return croppedImage;
}
/* Scale an image */
+ (UIImage *)scaleImage:(UIImage *)image toSize:(CGSize)newSize
{
UIGraphicsBeginImageContextWithOptions(newSize, YES, 0.0);
[image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
UIImage *scaledImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return scaledImage;
}
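Usage is then a one-liner; a sketch, assuming the class methods above live on the MGScreenshotHelper class from the linked repo:
// Capture the top-left 100x100 points of the screen and scale the result down to 50x50
UIImage *shot = [MGScreenshotHelper getScreenshotFromScreenWithRect:CGRectMake(0, 0, 100, 100) andScaleToSize:CGSizeMake(50, 50)];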
Ok turns out this is a problem only in the Simulator. On a device it seems to work 98% of the time.

OpenGL ES: screenshot looks different between device and simulator

I've tried to create a screenshot using UIActivityViewController and save it into Photos on an iPhone/iPad device. In the Simulator everything shows up correctly, but on the device only part of it shows. Here is the screenshot:
In the Simulator (you can see there is one background, one green line and one star image)
On the real device (you can see there is only one star image, and everything else is gone)
I merged those three different UIImages into one image so that I could take a screenshot.
I first merge the background image (the bridge UIImage) with the star image.
-(UIImage*)mergeUIImageView:(UIImage*)bkgound
FrontPic:(UIImage*)fnt
FrontPicX:(CGFloat)xPos
FrontPicY:(CGFloat)yPos
FrontPicWidth:(CGFloat)picWidth
FrontPicHeight:(CGFloat)picHeight
FinalSize:(CGSize)finalSize
{
UIGraphicsBeginImageContext(CGSizeMake([UIScreen mainScreen].bounds.size.height, [UIScreen mainScreen].bounds.size.width));
// bkgound - is the bridge image
[bkgound drawInRect:CGRectMake(0, 0, [UIScreen mainScreen].bounds.size.height, [UIScreen mainScreen].bounds.size.width)];
// fnt - is the star image
[fnt drawInRect:CGRectMake(xPos, yPos, picWidth, picHeight)];
// merged image
UIImage* newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return newImage;
}
Then I merge this picture with the OpenGL-rendered picture, which is the green line.
a) I first convert the OpenGL image to a UIImage using this function:
-(UIImage *) glToUIImage {
float scaleFactor = [[UIScreen mainScreen] scale];
CGRect screen = [[UIScreen mainScreen] bounds];
CGFloat image_height = screen.size.width * scaleFactor;
CGFloat image_width = screen.size.height * scaleFactor;
NSInteger myDataLength = image_width * image_height * 4;
// allocate array and read pixels into it.
GLubyte *buffer = (GLubyte *) malloc(myDataLength);
glReadPixels(0, 0, image_width, image_height, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
// gl renders "upside down" so swap top to bottom into new array.
// there's gotta be a better way, but this works.
GLubyte *buffer2 = (GLubyte *) malloc(myDataLength);
for(int y = 0; y < image_height; y++)
{
for(int x = 0; x < image_width * 4; x++)
{
buffer2[(int)((image_height - 1 - y) * image_width * 4 + x)] = buffer[(int)(y * 4 * image_width + x)];
}
}
// make data provider with data.
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer2, myDataLength, NULL);
// prep the ingredients
int bitsPerComponent = 8;
int bitsPerPixel = 32;
int bytesPerRow = 4 * image_width;
CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
// make the cgimage
CGImageRef imageRef = CGImageCreate(image_width, image_height, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);
// then make the uiimage from that
UIImage *myImage = [UIImage imageWithCGImage:imageRef];
return myImage;
}
b) Then I merge this OpenGL image with my image from above (bridge + star):
-(UIImage*)screenshot
{
// get opengl image from above function
UIImage *image = [self glToUIImage];
CGRect pos = CGRectMake(0, 0, image.size.width, image.size.height);
UIGraphicsBeginImageContext(image.size);
[image drawInRect:pos];
[self.background.image drawInRect:pos];
UIImage* final = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return final;
}
And it works great in the Simulator (iPhone, iPad, iPhone Retina and iPad Retina, version 6.0). However, when I switch to a real device (iPhone 4/4S/5, iPad 2/mini/Retina) it only shows the star image. The Xcode version is 4.6.3, the base SDK is the latest iOS (6.1), and the iOS deployment target is 5.0. Could you tell me how to fix it? Thanks.
The problem is that iOS 6.0 does not keep the render buffer contents around; it erases them after presenting. However, when you take the screenshot you are reading data from that buffer, which is why the background comes back black. Adding a category so the device keeps the buffer contents solves this problem:
@interface CAEAGLLayer (Retained)
@end
@implementation CAEAGLLayer (Retained)
- (NSDictionary *)drawableProperties
{
    return @{kEAGLDrawablePropertyRetainedBacking : @(YES)};
}
@end
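Overriding drawableProperties for every CAEAGLLayer in the app is a fairly blunt instrument. If you control the GL view, the more targeted equivalent (a sketch, assuming your view exposes its layer as eaglLayer during setup) is to set the property on just that layer:
// Keep the color renderbuffer contents after -presentRenderbuffer: so glReadPixels can still see them
eaglLayer.drawableProperties = @{kEAGLDrawablePropertyRetainedBacking : @YES, kEAGLDrawablePropertyColorFormat : kEAGLColorFormatRGBA8};
Note that retained backing usually costs some performance, so it is typically enabled only when you actually need to read the frame back.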

glReadPixels only saves 1/4 screen size snapshots

I'm working on an Augmented Reality app for a client. The OpenGL and EAGL part has been done in Unity 3D, and implemented into a View in my application.
What I need now is a button that snaps a screenshot of the OpenGL content, which is the backmost view.
I tried writing it myself, but when I tap the button with the assigned IBAction, it only saves 1/4 of the screen (the lower-left corner), though it does save it to the camera roll.
So basically, how can I make it save the entire screen size instead of just one fourth?
Here's my code for the method:
-(IBAction)tagBillede:(id)sender
{
UIImage *outputImage = nil;
CGRect s = CGRectMake(0, 0, 320, 480);
uint8_t *buffer = (uint8_t *) malloc(s.size.width * s.size.height * 4);
if (!buffer) goto error;
glReadPixels(0, 0, s.size.width, s.size.height, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, buffer, s.size.width * s.size.height * 4, NULL);
if (!ref) goto error;
CGImageRef iref = CGImageCreate(s.size.width, s.size.height, 8, 32, s.size.width * 4, CGColorSpaceCreateDeviceRGB(), kCGBitmapByteOrderDefault, ref, NULL, true, kCGRenderingIntentDefault);
if (!iref) goto error;
size_t width = CGImageGetWidth(iref);
size_t height = CGImageGetHeight(iref);
size_t length = width * height * 4;
uint32_t *pixels = (uint32_t *)malloc(length);
if (!pixels) goto error;
CGContextRef context = CGBitmapContextCreate(pixels, width, height, 8, width * 4,
CGImageGetColorSpace(iref), kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrder32Big);
if (!context) goto error;
CGAffineTransform transform = CGAffineTransformIdentity;
transform = CGAffineTransformMakeTranslation(0.0f, height);
transform = CGAffineTransformScale(transform, 1.0, -1.0);
CGContextConcatCTM(context, transform);
CGContextDrawImage(context, CGRectMake(0.0f, 0.0f, width, height), iref);
CGImageRef outputRef = CGBitmapContextCreateImage(context);
if (!outputRef) goto error;
outputImage = [UIImage imageWithCGImage: outputRef];
if (!outputImage) goto error;
CGDataProviderRelease(ref);
CGImageRelease(iref);
CGContextRelease(context);
CGImageRelease(outputRef);
free(pixels);
free(buffer);
UIImageWriteToSavedPhotosAlbum(outputImage, self, @selector(image:didFinishSavingWithError:contextInfo:), nil);
}
I suspect you are using a device with a Retina display, which is 640x960. You need to take the screen scale into account; it is 1.0 on non-Retina displays and 2.0 on Retina displays. Try initializing s like this:
CGFloat scale = UIScreen.mainScreen.scale;
CGRect s = CGRectMake(0, 0, 320 * scale, 480 * scale);
If the device is a Retina device, you need to scale the OpenGL dimensions yourself. You're actually getting only the lower-left corner because you capture half the width and half the height.
You need to double both your width and height for Retina screens, but realistically you should multiply them by the screen's scale:
CGFloat scale = [[UIScreen mainScreen] scale];
CGRect s = CGRectMake(0, 0, 320.0f * scale, 480.0f * scale);
Thought I'd chime in and, at the same time, throw out some gratitude :)
I got it working like a charm now; here's the cleaned-up code:
UIImage *outputImage = nil;
CGFloat scale = [[UIScreen mainScreen] scale];
CGRect s = CGRectMake(0, 0, 320.0f * scale, 480.0f * scale);
uint8_t *buffer = (uint8_t *) malloc(s.size.width * s.size.height * 4);
glReadPixels(0, 0, s.size.width, s.size.height, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, buffer, s.size.width * s.size.height * 4, NULL);
CGImageRef iref = CGImageCreate(s.size.width, s.size.height, 8, 32, s.size.width * 4, CGColorSpaceCreateDeviceRGB(), kCGBitmapByteOrderDefault, ref, NULL, true, kCGRenderingIntentDefault);
size_t width = CGImageGetWidth(iref);
size_t height = CGImageGetHeight(iref);
size_t length = width * height * 4;
uint32_t *pixels = (uint32_t *)malloc(length);
CGContextRef context = CGBitmapContextCreate(pixels, width, height, 8, width * 4,
CGImageGetColorSpace(iref), kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrder32Big);
CGAffineTransform transform = CGAffineTransformIdentity;
transform = CGAffineTransformMakeTranslation(0.0f, height);
transform = CGAffineTransformScale(transform, 1.0, -1.0);
CGContextConcatCTM(context, transform);
CGContextDrawImage(context, CGRectMake(0.0f, 0.0f, width, height), iref);
CGImageRef outputRef = CGBitmapContextCreateImage(context);
outputImage = [UIImage imageWithCGImage: outputRef];
CGDataProviderRelease(ref);
CGImageRelease(iref);
CGContextRelease(context);
CGImageRelease(outputRef);
free(pixels);
free(buffer);
UIImageWriteToSavedPhotosAlbum(outputImage, nil, nil, nil);
