I want to manipulate images and shuffle their colors. I tried rotating 180 degrees by manipulating pixels, but failed. I don't want to use a UIImageView rotation, because I won't just be rotating images; I want to manipulate them however I want.
EDIT: It was the wrong operator. I don't know why I used % instead of /. Anyway, I hope this code helps someone (it works):
- (IBAction)shuffleImage:(id)sender {
    [self calculateRGBAsAndChangePixels:self.imageView.image atX:0 andY:0];
}

- (void)calculateRGBAsAndChangePixels:(UIImage *)image atX:(int)x andY:(int)y
{
    NSUInteger bytesPerPixel = 4;
    NSUInteger bytesPerRow = bytesPerPixel * image.size.width;
    NSUInteger bitsPerComponent = 8;
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef bmContext = CGBitmapContextCreate(NULL, image.size.width, image.size.height, bitsPerComponent, bytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);
    CGContextDrawImage(bmContext, (CGRect){.origin.x = 0.0f, .origin.y = 0.0f, image.size.width, image.size.height}, image.CGImage);
    UInt8 *data = (UInt8 *)CGBitmapContextGetData(bmContext);
    const size_t bitmapByteCount = bytesPerRow * image.size.height;
    NSMutableArray *reds = [[NSMutableArray alloc] init];
    NSMutableArray *greens = [[NSMutableArray alloc] init];
    NSMutableArray *blues = [[NSMutableArray alloc] init];
    for (size_t i = 0; i < bitmapByteCount; i += 4)
    {
        [reds addObject:[NSNumber numberWithInt:data[i]]];
        [greens addObject:[NSNumber numberWithInt:data[i + 1]]];
        [blues addObject:[NSNumber numberWithInt:data[i + 2]]];
    }
    for (size_t i = 0; i < bitmapByteCount; i += 4)
    {
        // i / 4 is the pixel index; reading from the far end reverses the pixels.
        data[i] = [[reds objectAtIndex:reds.count - i / 4 - 1] integerValue];
        data[i + 1] = [[greens objectAtIndex:greens.count - i / 4 - 1] integerValue];
        data[i + 2] = [[blues objectAtIndex:blues.count - i / 4 - 1] integerValue];
    }
    CGImageRef newImage = CGBitmapContextCreateImage(bmContext);
    UIImage *newUIImage = [[UIImage alloc] initWithCGImage:newImage];
    self.imageView.image = newUIImage;
    CGImageRelease(newImage);
    CGContextRelease(bmContext);
}
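The corrected index math (/ rather than %, per the EDIT above) amounts to reversing the order of the pixels, and reversing the flat pixel array is exactly a 180° rotation. A minimal C sketch of that idea, as a hypothetical standalone helper rather than part of the original code:

```c
#include <stddef.h>
#include <stdint.h>

/* Rotating an image by 180 degrees is the same as reversing its pixel order:
   the pixel at index i moves to index (count - 1 - i). Each uint32_t holds
   one RGBA pixel, matching the 4-bytes-per-pixel layout above. */
static void rotate180(uint32_t *pixels, size_t count) {
    for (size_t i = 0; i < count / 2; i++) {
        uint32_t tmp = pixels[i];
        pixels[i] = pixels[count - 1 - i];
        pixels[count - 1 - i] = tmp;
    }
}
```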
Assuming you want to turn the image upside down (rotate it 180°) rather than mirror it, I found some relevant code on another question that may help you:
static inline double radians(double degrees) { return degrees * M_PI / 180; }

UIImage *rotate(UIImage *src, UIImageOrientation orientation)
{
    UIGraphicsBeginImageContext(src.size);
    CGContextRef context = UIGraphicsGetCurrentContext();

    // Rotate around the image centre, not the origin.
    CGContextTranslateCTM(context, src.size.width / 2, src.size.height / 2);
    if (orientation == UIImageOrientationRight) {
        CGContextRotateCTM(context, radians(90));
    } else if (orientation == UIImageOrientationLeft) {
        CGContextRotateCTM(context, radians(-90));
    } else if (orientation == UIImageOrientationDown) {
        CGContextRotateCTM(context, radians(180));
    } // UIImageOrientationUp needs no rotation.

    [src drawAtPoint:CGPointMake(-src.size.width / 2, -src.size.height / 2)];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
If you're trying to mirror the image, this code example from this question may be of help:
UIImage *sourceImage = [UIImage imageNamed:@"whatever.png"];
UIImage *flippedImage = [UIImage imageWithCGImage:sourceImage.CGImage
                                            scale:sourceImage.scale
                                      orientation:UIImageOrientationUpMirrored];
So you're looking to actually manipulate the raw pixel data. Check this out, then:
Getting the pixel data from a CGImage object
It's for macOS but should be relevant for iOS as well.
We have a process that takes high resolution source PNG/JPG images and creates renditions of these images in various lower resolution formats / cropped versions.
void ResizeAndSaveSourceImageFromFile(NSString *imagePath, NSInteger width, NSInteger height, NSString *destinationFolder, NSString *fileName, BOOL shouldCrop, NSInteger rotation, NSInteger cornerRadius, BOOL removeAlpha) {
    NSString *outputFilePath = [NSString stringWithFormat:@"%@/%@", destinationFolder, fileName];
    NSImage *sourceImage = [[NSImage alloc] initWithContentsOfFile:imagePath];
    NSSize sourceSize = sourceImage.size;
    float sourceAspect = sourceSize.width / sourceSize.height;
    float desiredAspect = (float)width / (float)height; // cast to avoid integer division
    float finalWidth = width;
    float finalHeight = height;
    if (shouldCrop == true) {
        if (desiredAspect > sourceAspect) {
            width = height * sourceAspect;
        } else if (desiredAspect < sourceAspect) {
            height = width / sourceAspect;
        }
    }
    if (width < finalWidth) {
        width = finalWidth;
        height = width / sourceAspect;
    }
    if (height < finalHeight) {
        height = finalHeight;
        width = height * sourceAspect;
    }
    NSImage *resizedImage = ImageByScalingToSize(sourceImage, CGSizeMake(width, height));
    if (shouldCrop == true) {
        resizedImage = ImageByCroppingImage(resizedImage, CGSizeMake(finalWidth, finalHeight));
    }
    if (rotation != 0) {
        resizedImage = ImageRotated(resizedImage, rotation);
    }
    if (cornerRadius != 0) {
        resizedImage = ImageRounded(resizedImage, cornerRadius);
    }
    NSBitmapImageRep *imgRep = UnscaledBitmapImageRep(resizedImage, removeAlpha);
    NSBitmapImageFileType type = NSBitmapImageFileTypePNG;
    if ([fileName rangeOfString:@".jpg"].location != NSNotFound) {
        type = NSBitmapImageFileTypeJPEG;
    }
    NSData *imageData = [imgRep representationUsingType:type properties:@{}];
    [imageData writeToFile:outputFilePath atomically:NO];
    if ([outputFilePath rangeOfString:@"land-mdpi"].location != NSNotFound) {
        [imageData writeToFile:[outputFilePath stringByReplacingOccurrencesOfString:@"land-mdpi" withString:@"tvdpi"] atomically:NO];
    }
}
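The sizing logic above can be checked in isolation. One thing to watch: `width / height` on two NSInteger arguments is integer division, which silently truncates the desired aspect ratio, so a cast to float matters there. A small C sketch of the same aspect-fill step, with names that are illustrative rather than from the original code:

```c
/* Aspect-fill sizing: shrink one target dimension so the source aspect ratio
   is preserved; the result can then be center-cropped to the requested size. */
typedef struct { double width, height; } SizeD;

static SizeD aspectFill(SizeD source, double targetW, double targetH) {
    double sourceAspect = source.width / source.height;
    double desiredAspect = targetW / targetH; /* with integers this would truncate */
    SizeD out = { targetW, targetH };
    if (desiredAspect > sourceAspect)
        out.width = targetH * sourceAspect;   /* target too wide: narrow it */
    else if (desiredAspect < sourceAspect)
        out.height = targetW / sourceAspect;  /* target too tall: shorten it */
    return out;
}
```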
NSImage *ImageByScalingToSize(NSImage *sourceImage, NSSize newSize) {
    if (!sourceImage.isValid) return nil;

    NSBitmapImageRep *rep = [[NSBitmapImageRep alloc]
        initWithBitmapDataPlanes:NULL
                      pixelsWide:newSize.width
                      pixelsHigh:newSize.height
                   bitsPerSample:8
                 samplesPerPixel:4
                        hasAlpha:YES
                        isPlanar:NO
                  colorSpaceName:NSCalibratedRGBColorSpace
                     bytesPerRow:0
                    bitsPerPixel:0];
    rep.size = newSize;

    [NSGraphicsContext saveGraphicsState];
    [NSGraphicsContext setCurrentContext:[NSGraphicsContext graphicsContextWithBitmapImageRep:rep]];
    [sourceImage drawInRect:NSMakeRect(0, 0, newSize.width, newSize.height) fromRect:NSZeroRect operation:NSCompositingOperationCopy fraction:1.0];
    [NSGraphicsContext restoreGraphicsState];

    NSImage *newImage = [[NSImage alloc] initWithSize:newSize];
    [newImage addRepresentation:rep];
    return newImage;
}
NSBitmapImageRep *UnscaledBitmapImageRep(NSImage *image, BOOL removeAlpha) {
    NSBitmapImageRep *rep = [[NSBitmapImageRep alloc]
        initWithBitmapDataPlanes:NULL
                      pixelsWide:image.size.width
                      pixelsHigh:image.size.height
                   bitsPerSample:8
                 samplesPerPixel:4
                        hasAlpha:YES
                        isPlanar:NO
                  colorSpaceName:NSDeviceRGBColorSpace
                     bytesPerRow:0
                    bitsPerPixel:0];

    [NSGraphicsContext saveGraphicsState];
    [NSGraphicsContext setCurrentContext:[NSGraphicsContext graphicsContextWithBitmapImageRep:rep]];
    [image drawAtPoint:NSMakePoint(0, 0)
              fromRect:NSZeroRect
             operation:NSCompositingOperationSourceOver
              fraction:1.0];
    [NSGraphicsContext restoreGraphicsState];

    NSBitmapImageRep *imgRepFinal = rep;
    if (removeAlpha == YES) {
        NSImage *newImage = [[NSImage alloc] initWithSize:[rep size]];
        [newImage addRepresentation:rep];

        // Redraw through a 16-bit context (5 bits per colour, alpha skipped)
        // to strip the alpha channel.
        static int const kNumberOfBitsPerColour = 5;
        NSRect imageRect = NSMakeRect(0.0, 0.0, newImage.size.width, newImage.size.height);
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        CGContextRef tileGraphicsContext = CGBitmapContextCreate(NULL, imageRect.size.width, imageRect.size.height, kNumberOfBitsPerColour, 2 * imageRect.size.width, colorSpace, kCGBitmapByteOrder16Little | kCGImageAlphaNoneSkipFirst);

        NSData *imageDataTIFF = [newImage TIFFRepresentation];
        CGImageRef imageRef = [[NSBitmapImageRep imageRepWithData:imageDataTIFF] CGImage];
        CGContextDrawImage(tileGraphicsContext, imageRect, imageRef);

        // Create an NSImage from the tile graphics context
        CGImageRef newImageRef = CGBitmapContextCreateImage(tileGraphicsContext);
        NSImage *newNSImage = [[NSImage alloc] initWithCGImage:newImageRef size:imageRect.size];

        // Clean up
        CGImageRelease(newImageRef);
        CGContextRelease(tileGraphicsContext);
        CGColorSpaceRelease(colorSpace);

        CGImageRef CGImage = [newNSImage CGImageForProposedRect:nil context:nil hints:nil];
        imgRepFinal = [[NSBitmapImageRep alloc] initWithCGImage:CGImage];
    }
    return imgRepFinal;
}
NSImage *ImageByCroppingImage(NSImage *image, CGSize size) {
    NSInteger trueWidth = image.representations[0].pixelsWide;
    double refWidth = image.size.width;
    double refHeight = image.size.height;
    double scale = trueWidth / refWidth;
    double x = (refWidth - size.width) / 2.0;
    double y = (refHeight - size.height) / 2.0;
    CGRect cropRect = CGRectMake(x * scale, y * scale, size.width * scale, size.height * scale);
    CGImageSourceRef source = CGImageSourceCreateWithData((CFDataRef)[image TIFFRepresentation], NULL);
    CGImageRef maskRef = CGImageSourceCreateImageAtIndex(source, 0, NULL);
    CGImageRef imageRef = CGImageCreateWithImageInRect(maskRef, cropRect);
    NSImage *cropped = [[NSImage alloc] initWithCGImage:imageRef size:size];
    CGImageRelease(imageRef);
    CGImageRelease(maskRef); // avoid leaking the intermediate image
    CFRelease(source);       // and the image source
    return cropped;
}
This process works well and gets the results we want. We can re-run these functions on hundreds of images and get the same output every time, and we then commit these files to git repos.
HOWEVER, every time we update macOS to a new version (such as High Sierra or Monterey), running these functions produces different output for ALL of the images, with different hashes, so git treats the images as changed even though the source images are identical.
FURTHER, JPG images produce different output when run on an Intel Mac vs. an Apple M1 Mac.
We have checked the head of the output images using a command like:
od -bc banner.png | head
The head data is the same in all cases, even though the full image data doesn't match after version changes.
We've also checked CGImageSourceCopyPropertiesAtIndex, which reports, for example:
{
ColorModel = RGB;
Depth = 8;
HasAlpha = 1;
PixelHeight = 1080;
PixelWidth = 1920;
ProfileName = "Generic RGB Profile";
"{Exif}" = {
PixelXDimension = 1920;
PixelYDimension = 1080;
};
"{PNG}" = {
InterlaceType = 0;
};
}
These properties show no differences between versions of macOS or between Intel and M1.
We don't want the hashes to keep changing on us and causing extra churn in git, and we're hoping for feedback that may help us get consistent output in all cases.
Any tips are greatly appreciated.
I have the path of a PNG file (image). I would like to create a new image in the same folder, with a black 300x100 rectangle in its center, and then get the path of the newly created image.
Can someone please help me with this issue?
I was playing with this code:
- (void)grayscale:(UIImage *)image {
    CGContextRef ctx;
    CGImageRef imageRef = [image CGImage];
    NSUInteger width = CGImageGetWidth(imageRef);
    NSUInteger height = CGImageGetHeight(imageRef);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    unsigned char *rawData = malloc(height * width * 4);
    NSUInteger bytesPerPixel = 4;
    NSUInteger bytesPerRow = bytesPerPixel * width;
    NSUInteger bitsPerComponent = 8;
    CGContextRef context = CGBitmapContextCreate(rawData, width, height,
                                                 bitsPerComponent, bytesPerRow, colorSpace,
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
    CGContextRelease(context);

    // Now rawData contains the image data in the RGBA8888 pixel format.
    NSUInteger byteIndex = (bytesPerRow * 0) + 0 * bytesPerPixel;
    for (int ii = 0; ii < width * height; ++ii)
    {
        // Get color values to construct a UIColor
        CGFloat red = (rawData[byteIndex] * 1.0) / 255.0;
        CGFloat green = (rawData[byteIndex + 1] * 1.0) / 255.0;
        CGFloat blue = (rawData[byteIndex + 2] * 1.0) / 255.0;
        CGFloat alpha = (rawData[byteIndex + 3] * 1.0) / 255.0;
        rawData[byteIndex] = (char)(red);
        rawData[byteIndex + 1] = (char)(green);
        rawData[byteIndex + 2] = (char)(blue);
        byteIndex += 4;
    }

    ctx = CGBitmapContextCreate(rawData,
                                CGImageGetWidth(imageRef),
                                CGImageGetHeight(imageRef),
                                8,
                                CGImageGetBytesPerRow(imageRef),
                                CGImageGetColorSpace(imageRef),
                                kCGImageAlphaPremultipliedLast);
    imageRef = CGBitmapContextCreateImage(ctx);
    UIImage *rawImage = [UIImage imageWithCGImage:imageRef];
    CGContextRelease(ctx);
    self.workingImage = rawImage;
    [self.imageView setImage:self.workingImage];
    free(rawData);
}
But I didn't succeed.
Yes, you can draw the image with a black layer on top of it. The code below should satisfy your requirement.
Note: please add an image named "testImage.png" to your project before executing the code.
#import "ViewController.h"

@interface ViewController ()
@end

@implementation ViewController

@synthesize strTemp3;

- (void)viewDidLoad {
    [super viewDidLoad];
    [self testImageWrite];
    [self addNewImageFromPath];
}

- (UIImage *)imageToDraw
{
    UIGraphicsBeginImageContextWithOptions(CGSizeMake(300, 100), NO, [UIScreen mainScreen].scale);
    UIImage *natureImage = [UIImage imageNamed:@"testImage"];
    [natureImage drawInRect:CGRectMake(0, 0, 300, 100)];
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGRect rectangle = CGRectMake(75, 25, 150, 50); // centered: ((300-150)/2, (100-50)/2)
    CGContextSetRGBFillColor(context, 0.0, 0.0, 0.0, 1.0);
    CGContextSetRGBStrokeColor(context, 0.0, 0.0, 0.0, 1.0);
    CGContextFillRect(context, rectangle);
    UIImage *resultingImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return resultingImage;
}

- (NSString *)filePath
{
    NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
    NSString *documentsDirectory = [paths objectAtIndex:0];
    return [documentsDirectory stringByAppendingPathComponent:@"Image.png"];
}

- (void)testImageWrite
{
    NSData *imageData = UIImagePNGRepresentation([self imageToDraw]);
    NSError *writeError = nil;
    BOOL success = [imageData writeToFile:[self filePath] options:0 error:&writeError];
    if (!success || writeError != nil)
    {
        NSLog(@"Error Writing: %@", writeError.description);
    }
}

- (void)addNewImageFromPath {
    UIImageView *imgView = [[UIImageView alloc] initWithFrame:CGRectMake(10, 10, 300, 100)];
    imgView.image = [UIImage imageWithContentsOfFile:[self filePath]];
    [self.view addSubview:imgView];
}

- (void)didReceiveMemoryWarning {
    [super didReceiveMemoryWarning];
}

@end
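The hard-coded rectangle origin at (75, 25) is plain centering math: the origin that centers a w-by-h rectangle inside a W-by-H canvas is ((W - w)/2, (H - h)/2), which for the 300x100 context and 150x50 rectangle gives (75, 25). A quick C check of the formula:

```c
/* Origin that centers a w-by-h rectangle inside a W-by-H canvas. */
typedef struct { double x, y; } PointD;

static PointD centeredOrigin(double W, double H, double w, double h) {
    PointD p = { (W - w) / 2.0, (H - h) / 2.0 };
    return p;
}
```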
Output: (screenshots of the actual image and the new image with the black rectangle are omitted here)
So I have many jpg images with white backgrounds that I am loading into my app, and I would like to remove the white backgrounds programmatically. I have a function that does this, but it causes some jagged edges around each image. Is there a way that I can blend these edges to achieve smooth edges?
My current method:
- (UIImage *)changeWhiteColorTransparent:(UIImage *)image
{
    CGImageRef rawImageRef = image.CGImage;
    const CGFloat colorMasking[6] = {222, 255, 222, 255, 222, 255};
    UIGraphicsBeginImageContextWithOptions(image.size, NO, [UIScreen mainScreen].scale);
    CGImageRef maskedImageRef = CGImageCreateWithMaskingColors(rawImageRef, colorMasking);
    CGContextTranslateCTM(UIGraphicsGetCurrentContext(), 0.0, image.size.height);
    CGContextScaleCTM(UIGraphicsGetCurrentContext(), 1.0, -1.0);
    CGContextDrawImage(UIGraphicsGetCurrentContext(), CGRectMake(0, 0, image.size.width, image.size.height), maskedImageRef);
    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    CGImageRelease(maskedImageRef);
    UIGraphicsEndImageContext();
    return result;
}
I know that I can change the color masking values, but I don't think any combination will produce a smooth picture with no white background.
Here's an example:
That method also removes extra pixels within the images that are close to white:
I think the ideal method would change the alpha of white pixels according to how close to pure white they are instead of just removing them all. Any ideas would be appreciated.
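That idea, alpha proportional to closeness to pure white rather than a hard cut, can be sketched in plain C. This is an illustrative helper, not tested against the images above; the threshold plays the same role as the 222 in the masking array:

```c
#include <stdint.h>

/* Fade alpha with closeness to white: a pure-white pixel (min channel 255)
   becomes fully transparent, a pixel at or below the threshold keeps full
   alpha, and values in between get a linear ramp instead of a hard edge. */
static uint8_t alphaForWhiteness(uint8_t r, uint8_t g, uint8_t b, uint8_t threshold) {
    uint8_t m = r < g ? (r < b ? r : b) : (g < b ? g : b); /* min channel */
    if (m <= threshold)
        return 255;                         /* clearly not background */
    return (uint8_t)(255u * (255u - m) / (255u - threshold));
}
```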
#import "UIImage+FloodFill.h"
// https://github.com/Chintan-Dave/UIImageScanlineFloodfill

#define Mask8(x) ( (x) & 0xFF )
#define R(x) ( Mask8(x) )
#define G(x) ( Mask8(x >> 8 ) )
#define B(x) ( Mask8(x >> 16) )
#define A(x) ( Mask8(x >> 24) )
#define RGBAMake(r, g, b, a) ( Mask8(r) | Mask8(g) << 8 | Mask8(b) << 16 | Mask8(a) << 24 )

@interface UIImage (BackgroundRemoval)
// Simple removal
- (UIImage *)floodFillRemoveBackgroundColor;
@end

@implementation UIImage (BackgroundRemoval)
- (UIImage *)maskImageWithMask:(UIImage *)maskImage {
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGImageRef maskImageRef = [maskImage CGImage];

    // Create a bitmap graphics context the size of the image
    CGContextRef mainViewContentContext = CGBitmapContextCreate(NULL, maskImage.size.width, maskImage.size.height, 8, 0, colorSpace, (CGBitmapInfo)kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace);
    if (mainViewContentContext == NULL)
        return NULL;

    CGFloat ratio = maskImage.size.width / self.size.width;
    if (ratio * self.size.height < maskImage.size.height) {
        ratio = maskImage.size.height / self.size.height;
    }

    CGRect rect1 = { {0, 0}, {maskImage.size.width, maskImage.size.height} };
    CGRect rect2 = { {-((self.size.width * ratio) - maskImage.size.width) / 2, -((self.size.height * ratio) - maskImage.size.height) / 2}, {self.size.width * ratio, self.size.height * ratio} };
    CGContextClipToMask(mainViewContentContext, rect1, maskImageRef);
    CGContextDrawImage(mainViewContentContext, rect2, self.CGImage);

    // Create a CGImageRef of the bitmap content, then release that context
    CGImageRef newImage = CGBitmapContextCreateImage(mainViewContentContext);
    CGContextRelease(mainViewContentContext);

    // Return the image
    UIImage *theImage = [UIImage imageWithCGImage:newImage];
    CGImageRelease(newImage);
    return theImage;
}
- (UIImage *)floodFillRemove {
    // 1: flood-fill the background from the top-left corner with magenta
    UIImage *processedImage = [self floodFillFromPoint:CGPointMake(0, 0) withColor:[UIColor magentaColor] andTolerance:0];
    CGImageRef inputCGImage = processedImage.CGImage;
    UInt32 *inputPixels;
    NSUInteger inputWidth = CGImageGetWidth(inputCGImage);
    NSUInteger inputHeight = CGImageGetHeight(inputCGImage);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    NSUInteger bytesPerPixel = 4;
    NSUInteger bitsPerComponent = 8;
    NSUInteger inputBytesPerRow = bytesPerPixel * inputWidth;
    inputPixels = (UInt32 *)calloc(inputHeight * inputWidth, sizeof(UInt32));
    CGContextRef context = CGBitmapContextCreate(inputPixels, inputWidth, inputHeight, bitsPerComponent, inputBytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGContextDrawImage(context, CGRectMake(0, 0, inputWidth, inputHeight), inputCGImage);

    // 2: make the magenta (filled) pixels transparent
    for (NSUInteger j = 0; j < inputHeight; j++) {
        for (NSUInteger i = 0; i < inputWidth; i++) {
            UInt32 *currentPixel = inputPixels + (j * inputWidth) + i;
            UInt32 color = *currentPixel;
            if (R(color) == 255 && G(color) == 0 && B(color) == 255) {
                *currentPixel = RGBAMake(0, 0, 0, 0);
            } else {
                *currentPixel = RGBAMake(R(color), G(color), B(color), A(color));
            }
        }
    }
    CGImageRef newCGImage = CGBitmapContextCreateImage(context);

    // 3: use the result as a mask
    UIImage *maskImage = [UIImage imageWithCGImage:newCGImage];
    CGImageRelease(newCGImage); // avoid leaking the intermediate image
    CGColorSpaceRelease(colorSpace);
    CGContextRelease(context);
    free(inputPixels);

    // 4
    UIImage *result = [self maskImageWithMask:maskImage];
    return result;
}

@end
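The bit-packing macros at the top of this listing are plain C and easy to sanity-check: RGBAMake and the per-channel extractors are inverses for byte-sized inputs. Reproduced standalone:

```c
#include <stdint.h>

/* Same little-endian RGBA packing as the listing above: R in the low byte,
   A in the high byte of a 32-bit pixel. */
#define Mask8(x) ((x) & 0xFF)
#define R(x) (Mask8(x))
#define G(x) (Mask8((x) >> 8))
#define B(x) (Mask8((x) >> 16))
#define A(x) (Mask8((x) >> 24))
#define RGBAMake(r, g, b, a) \
    ((uint32_t)Mask8(r) | (uint32_t)Mask8(g) << 8 | \
     (uint32_t)Mask8(b) << 16 | (uint32_t)Mask8(a) << 24)
```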
What if your image has a gradient background? Use the code below for that.
- (UIImage *)complexRemoveBackground {
    // Edge-detect first so the flood fill can find the background even when
    // it is a gradient rather than a flat color.
    GPUImagePicture *stillImageSource = [[GPUImagePicture alloc] initWithImage:self];
    GPUImagePrewittEdgeDetectionFilter *filter = [[GPUImagePrewittEdgeDetectionFilter alloc] init];
    [filter setEdgeStrength:0.04];
    [stillImageSource addTarget:filter];
    [filter useNextFrameForImageCapture];
    [stillImageSource processImage];
    UIImage *resultImage = [filter imageFromCurrentFramebuffer];

    UIImage *processedImage = [resultImage floodFillFromPoint:CGPointMake(0, 0) withColor:[UIColor magentaColor] andTolerance:0];
    CGImageRef inputCGImage = processedImage.CGImage;
    UInt32 *inputPixels;
    NSUInteger inputWidth = CGImageGetWidth(inputCGImage);
    NSUInteger inputHeight = CGImageGetHeight(inputCGImage);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    NSUInteger bytesPerPixel = 4;
    NSUInteger bitsPerComponent = 8;
    NSUInteger inputBytesPerRow = bytesPerPixel * inputWidth;
    inputPixels = (UInt32 *)calloc(inputHeight * inputWidth, sizeof(UInt32));
    CGContextRef context = CGBitmapContextCreate(inputPixels, inputWidth, inputHeight, bitsPerComponent, inputBytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGContextDrawImage(context, CGRectMake(0, 0, inputWidth, inputHeight), inputCGImage);

    // Build a black-and-transparent mask: filled (magenta) pixels become
    // transparent, everything else becomes opaque black.
    for (NSUInteger j = 0; j < inputHeight; j++) {
        for (NSUInteger i = 0; i < inputWidth; i++) {
            UInt32 *currentPixel = inputPixels + (j * inputWidth) + i;
            UInt32 color = *currentPixel;
            if (R(color) == 255 && G(color) == 0 && B(color) == 255) {
                *currentPixel = RGBAMake(0, 0, 0, 0);
            } else {
                *currentPixel = RGBAMake(0, 0, 0, 255);
            }
        }
    }
    CGImageRef newCGImage = CGBitmapContextCreateImage(context);
    UIImage *maskImage = [UIImage imageWithCGImage:newCGImage];
    CGImageRelease(newCGImage); // avoid leaking the intermediate image
    CGColorSpaceRelease(colorSpace);
    CGContextRelease(context);
    free(inputPixels);

    // Blur the mask slightly so the cut-out edge is feathered, not jagged.
    GPUImagePicture *maskImageSource = [[GPUImagePicture alloc] initWithImage:maskImage];
    GPUImageGaussianBlurFilter *blurFilter = [[GPUImageGaussianBlurFilter alloc] init];
    [blurFilter setBlurRadiusInPixels:0.7];
    [maskImageSource addTarget:blurFilter];
    [blurFilter useNextFrameForImageCapture];
    [maskImageSource processImage];
    UIImage *blurMaskImage = [blurFilter imageFromCurrentFramebuffer];
    //return blurMaskImage;
    UIImage *result = [self maskImageWithMask:blurMaskImage];
    return result;
}
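Masking through a Gaussian-blurred mask works because each pixel's alpha ends up scaled by the local mask value, and the blur gives the mask a soft ramp at the silhouette instead of a one-pixel step. The per-pixel operation, as a plain C sketch (hypothetical helper, not part of the listing above):

```c
#include <stdint.h>

/* Scale an alpha value by a 0-255 mask sample: 255 keeps the pixel, 0 makes
   it fully transparent, and in-between values feather the cut-out edge. */
static uint8_t maskedAlpha(uint8_t alpha, uint8_t mask) {
    return (uint8_t)(((unsigned)alpha * mask + 127) / 255); /* rounded product */
}
```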
You can download the sample code from here sample code
Capturing a screenshot using OpenGL works fine on iOS versions before 7, but when running the app on iOS 7 it returns a black screen.
The following is the piece of code I am working with:
- (UIImage *)glToUIImage {
    CGSize size = self.view.frame.size;
    int image_height = (int)size.width;
    int image_width = (int)size.height;
    NSInteger myDataLength = image_width * image_height * 4;

    // Allocate an array and read the pixels into it.
    GLubyte *buffer = (GLubyte *)malloc(myDataLength);
    glReadPixels(0, 0, image_width, image_height, GL_RGBA, GL_UNSIGNED_BYTE, buffer);

    // glReadPixels returns rows bottom-up; flip them vertically.
    GLubyte *buffer2 = (GLubyte *)malloc(myDataLength);
    for (int y = 0; y < image_height; y++)
    {
        for (int x = 0; x < image_width * 4; x++)
        {
            buffer2[(image_height - 1 - y) * image_width * 4 + x] = buffer[y * 4 * image_width + x];
        }
    }
    free(buffer);

    // Make a data provider with the flipped data. Note buffer2 is not freed
    // here: the provider was created with a NULL release callback and still
    // references it.
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer2, myDataLength, NULL);

    // Prep the ingredients
    int bitsPerComponent = 8;
    int bitsPerPixel = 32;
    int bytesPerRow = 4 * image_width;
    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
    CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
    CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;

    // Make the CGImage
    CGImageRef imageRef = CGImageCreate(image_width, image_height, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);

    // Then make the UIImage from that
    UIImage *myImage = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    CGColorSpaceRelease(colorSpaceRef);
    CGDataProviderRelease(provider);
    return myImage;
}

// Screenshot function: combines the OpenGL image with the background image
// and saves it into Photos.
- (UIImage *)screenshot
{
    UIImage *image = [self glToUIImage];
    CGRect pos = CGRectMake(0, 0, image.size.width, image.size.height);
    UIGraphicsBeginImageContextWithOptions(image.size, NO, [[UIScreen mainScreen] scale]);
    [image drawInRect:pos];
    UIImage *final = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    final = [[UIImage alloc] initWithCGImage:final.CGImage
                                       scale:1.0
                                 orientation:UIImageOrientationRight];
    // Final picture, saved into Photos.
    UIImageWriteToSavedPhotosAlbum(final, self, @selector(image:didFinishSavingWithError:contextInfo:), nil);
    return final;
}
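The buffer2 loop above exists because glReadPixels returns the bottom row of the framebuffer first, while CGImage expects the top row first; the loop is just a row-by-row vertical flip. The same operation in standalone C, one memcpy per row:

```c
#include <stdint.h>
#include <string.h>

/* Copy source row y into destination row (height - 1 - y), flipping the
   image vertically. Rows are width * 4 bytes (RGBA). */
static void flipRows(const uint8_t *src, uint8_t *dst, int width, int height) {
    size_t rowBytes = (size_t)width * 4;
    for (int y = 0; y < height; y++)
        memcpy(dst + (size_t)(height - 1 - y) * rowBytes,
               src + (size_t)y * rowBytes,
               rowBytes);
}
```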
Has anybody captured a screenshot using OpenGL on iOS 7?
I have resolved the issue by updating the drawableProperties of the EAGLView class:
eaglLayer.drawableProperties = [NSDictionary dictionaryWithObjectsAndKeys:
                                [NSNumber numberWithBool:FALSE], kEAGLDrawablePropertyRetainedBacking,
                                kEAGLColorFormatRGBA8, kEAGLDrawablePropertyColorFormat,
                                nil];
I need help converting a 24/32-bit raw RGB image to a UIImage.
I've tried the examples here from Paul Solt and others, but nothing works. Could anybody please show an example or tutorial?
The image data is held in an NSData, and I would like to get a JPG or PNG image.
Thanks,
Thorsten
I'm using the code by Paul Solt. It does something, but the image looks like it has four times the image information in one image (I can't post an image here).
EDIT: I added the lines at the beginning of the method, between the comments; now it works :-)
+ (UIImage *)convertBitmapRGBA8ToUIImage:(unsigned char *)buffer
                               withWidth:(int)width
                              withHeight:(int)height {
    // added code: expand 3-byte RGB pixels to 4-byte RGBA
    char *rgba = (char *)malloc(width * height * 4);
    for (int i = 0; i < width * height; ++i) {
        rgba[4 * i] = buffer[3 * i];
        rgba[4 * i + 1] = buffer[3 * i + 1];
        rgba[4 * i + 2] = buffer[3 * i + 2];
        rgba[4 * i + 3] = 255;
    }
    //

    size_t bufferLength = width * height * 4;
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, rgba, bufferLength, NULL);
    size_t bitsPerComponent = 8;
    size_t bitsPerPixel = 32;
    size_t bytesPerRow = 4 * width;

    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
    if (colorSpaceRef == NULL) {
        NSLog(@"Error allocating color space");
        CGDataProviderRelease(provider);
        return nil;
    }

    CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault | kCGImageAlphaPremultipliedLast;
    CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;

    CGImageRef iref = CGImageCreate(width,
                                    height,
                                    bitsPerComponent,
                                    bitsPerPixel,
                                    bytesPerRow,
                                    colorSpaceRef,
                                    bitmapInfo,
                                    provider, // data provider
                                    NULL,     // decode
                                    YES,      // should interpolate
                                    renderingIntent);

    uint32_t *pixels = (uint32_t *)malloc(bufferLength);
    if (pixels == NULL) {
        NSLog(@"Error: Memory not allocated for bitmap");
        CGDataProviderRelease(provider);
        CGColorSpaceRelease(colorSpaceRef);
        CGImageRelease(iref);
        return nil;
    }

    CGContextRef context = CGBitmapContextCreate(pixels,
                                                 width,
                                                 height,
                                                 bitsPerComponent,
                                                 bytesPerRow,
                                                 colorSpaceRef,
                                                 bitmapInfo);
    if (context == NULL) {
        NSLog(@"Error context not created");
        free(pixels);
        pixels = NULL; // avoid the double free below
    }

    UIImage *image = nil;
    if (context) {
        CGContextDrawImage(context, CGRectMake(0.0f, 0.0f, width, height), iref);
        CGImageRef imageRef = CGBitmapContextCreateImage(context);

        // Support both iPad 3.2 and iPhone 4 Retina displays with the correct scale
        if ([UIImage respondsToSelector:@selector(imageWithCGImage:scale:orientation:)]) {
            float scale = [[UIScreen mainScreen] scale];
            image = [UIImage imageWithCGImage:imageRef scale:scale orientation:UIImageOrientationUp];
        } else {
            image = [UIImage imageWithCGImage:imageRef];
        }

        CGImageRelease(imageRef);
        CGContextRelease(context);
    }

    CGColorSpaceRelease(colorSpaceRef);
    CGImageRelease(iref);
    CGDataProviderRelease(provider);
    free(rgba); // the provider was created with a NULL release callback, so free manually

    if (pixels) {
        free(pixels);
    }
    return image;
}
Solution: my bitmap image data has only 3 bytes per pixel, but iOS wants 4 bytes; the fourth is for alpha. Inserting the following code, which adds a 4th byte, fixed the problem.
char *rgba = (char *)malloc(width * height * 4);
for (int i = 0; i < width * height; ++i) {
    rgba[4 * i] = buffer[3 * i];
    rgba[4 * i + 1] = buffer[3 * i + 1];
    rgba[4 * i + 2] = buffer[3 * i + 2];
    rgba[4 * i + 3] = 255; // opaque alpha
}
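For reference, here is the same 3-byte-to-4-byte expansion as a standalone C function. Note that it assumes tightly packed rows (no row padding in the source buffer), which matches how the raw data arrives here:

```c
#include <stdint.h>
#include <stdlib.h>

/* Expand packed 24-bit RGB into 32-bit RGBA by appending an opaque alpha
   byte per pixel. Caller owns (and must free) the returned buffer. */
static uint8_t *rgbToRGBA(const uint8_t *rgb, int width, int height) {
    uint8_t *rgba = malloc((size_t)width * height * 4);
    if (rgba == NULL)
        return NULL;
    for (int i = 0; i < width * height; i++) {
        rgba[4 * i + 0] = rgb[3 * i + 0];
        rgba[4 * i + 1] = rgb[3 * i + 1];
        rgba[4 * i + 2] = rgb[3 * i + 2];
        rgba[4 * i + 3] = 255; /* opaque */
    }
    return rgba;
}
```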
Here is a conversion of NSData to UIImage:
NSData *imageData = UIImagePNGRepresentation(image);
UIImage *newImage = [UIImage imageWithData:imageData];
I have created a PNG image using this code; I hope it works for you also.