First of all, sorry for my English.
I'm writing a paint application for iOS. I decided to process the image pixel by pixel, since that is needed to build complex "brush" tools, and I used this algorithm.
My code:
ViewController.h
#import <UIKit/UIKit.h>
@interface MVRViewController : UIViewController
@property (weak, nonatomic) IBOutlet UIImageView *imageView;
@property (weak, nonatomic) UIImage *image;
@end
ViewController.m
- (void)viewDidLoad {
[super viewDidLoad];
self.image = [UIImage imageNamed:@"grid_750x450.png"];
self.imageView.image = self.image;
}
-(void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
NSSet *allTouches = [event allTouches];
UITouch *touch = [[allTouches allObjects] objectAtIndex:0];
CGPoint imageViewPoint = [touch locationInView:self.imageView];
if ((imageViewPoint.x < self.imageView.frame.size.width) && (imageViewPoint.y < self.imageView.frame.size.height)) {
// Create image buffer
CGContextRef ctx;
CGImageRef imageRef = [self.image CGImage];
NSUInteger width = CGImageGetWidth(imageRef);
NSUInteger height = CGImageGetHeight(imageRef);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
unsigned char *rawData = malloc(height * width * 4);
NSUInteger bytesPerPixel = 4;
NSUInteger bytesPerRow = bytesPerPixel * width;
NSUInteger bitsPerComponent = 8;
CGContextRef context = CGBitmapContextCreate(rawData, width, height,
bitsPerComponent, bytesPerRow, colorSpace,
kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colorSpace);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
CGContextRelease(context);
// Do something with image buffer
int X = imageViewPoint.x;
int Y = imageViewPoint.y;
for (int X1=X-14; ((X1<X+14) && (X1<width)); X1++) {
for (int Y1=Y-14; ((Y1<Y+14) && (Y1<height)); Y1++) {
int byteIndex = (bytesPerRow * Y1) + X1 * bytesPerPixel;
rawData[byteIndex]=255; // new red
rawData[byteIndex+1]=0; // new green
rawData[byteIndex+2]=0; // new blue
}
}
// Save image buffer to UIImage
ctx = CGBitmapContextCreate(rawData,
CGImageGetWidth( imageRef ),
CGImageGetHeight( imageRef ),
8,
CGImageGetBytesPerRow( imageRef ),
CGImageGetColorSpace( imageRef ),
kCGImageAlphaPremultipliedLast );
imageRef = CGBitmapContextCreateImage (ctx);
UIImage* rawImage = [UIImage imageWithCGImage:imageRef];
CGContextRelease(ctx);
self.image = rawImage;
self.imageView.image = self.image;
CGImageRelease(imageRef);
free(rawData);
}
}
The application works. But...
My problem: the application crashes with a low-memory error (on a device).
From Instruments (the Allocations instrument) I can see that the buffer from unsigned char *rawData = malloc(height * width * 4); is apparently never released, so I guess the free(rawData) call is not doing its job. On the simulator, memory grows toward "infinity" (3 GB...).
Where am I wrong? Thank you!
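For reference, a minimal sketch of one direction that is commonly suggested for this pattern (my own sketch, not from the original post): keep the per-move pixel work inside an autorelease pool so that the intermediate UIImage/CGImage objects created on every touch event are released as the drag proceeds instead of piling up until the run loop drains.
-(void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    @autoreleasepool {
        // ... existing buffer creation, pixel loop, and image rebuild go here ...
        // Reusing one malloc'd buffer and one bitmap context across calls (allocated
        // once, e.g. in viewDidLoad) would avoid the repeated large allocations entirely.
    }
}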
I'm using VLCKit to play video in my app, and I need to be able to take a screenshot of the video at certain points. This is the code I'm using:
-(NSData*)generateThumbnail{
int s = 1;
UIScreen* screen = [UIScreen mainScreen];
if ([screen respondsToSelector:@selector(scale)]) {
s = (int) [screen scale];
}
GLint viewport[4];
glGetIntegerv(GL_VIEWPORT, viewport);
int width = viewport[2];
int height = [_profile.resolution integerValue];//viewport[3];
int myDataLength = width * height * 4;
GLubyte *buffer = (GLubyte *) malloc(myDataLength);
GLubyte *buffer2 = (GLubyte *) malloc(myDataLength);
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
for(int y1 = 0; y1 < height; y1++) {
for(int x1 = 0; x1 <width * 4; x1++) {
buffer2[(height - 1 - y1) * width * 4 + x1] = buffer[y1 * 4 * width + x1];
}
}
free(buffer);
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer2, myDataLength, NULL);
int bitsPerComponent = 8;
int bitsPerPixel = 32;
int bytesPerRow = 4 * width;
CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
CGImageRef imageRef = CGImageCreate(width, height, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);
CGColorSpaceRelease(colorSpaceRef);
CGDataProviderRelease(provider);
UIImage *image = [ UIImage imageWithCGImage:imageRef scale:s orientation:UIImageOrientationUp ];
NSData *thumbAsData = UIImageJPEGRepresentation(image, 5);
return thumbAsData;
}
To be honest, I have no idea how most of this works. I copied it from somewhere a while ago (I don't remember the source). It mostly works; however, parts of the image frequently seem to be missing.
Can someone point me in the right direction? Most of the other posts I see regarding OpenGL screenshots are fairly old, and don't seem to apply.
Thanks.
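As an aside on the code above: the data provider is created with a NULL release callback, so buffer2 is never freed (and the CGImageRef returned by CGImageCreate is never released either). A minimal sketch of handing ownership of buffer2 to the provider; the callback name is my own:
// Hypothetical release callback: frees the pixel buffer once Core Graphics is done with it.
static void releaseScreenshotBuffer(void *info, const void *data, size_t size) {
    free((void *)data);
}
// ...and then create the provider with that callback instead of NULL:
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer2, myDataLength, releaseScreenshotBuffer);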
I wrote a class to work around this problem.
Basically you take a screenshot directly from the screen, and then, if you want, you can take just a part of the image and also scale it.
By taking a screenshot of the screen you capture everything: UIKit, OpenGL, AVFoundation, etc.
Here is the class: https://github.com/matteogobbi/MGScreenshotHelper/
Below are the useful functions, but I suggest you download (and star :D) my helper class directly ;)
/* Get a screenshot of the screen (useful when you have UIKit elements plus OpenGL or AVFoundation content) */
+ (UIImage *)screenshotFromScreen
{
CGImageRef UIGetScreenImage(void);
CGImageRef screen = UIGetScreenImage();
UIImage* screenImage = [UIImage imageWithCGImage:screen];
CGImageRelease(screen);
return screenImage;
}
/* Get a screenshot of a given rect of the screen, scaled to the size that you want. */
+ (UIImage *)getScreenshotFromScreenWithRect:(CGRect)captureRect andScaleToSize:(CGSize)newSize
{
UIImage *image = [[self class] screenshotFromScreen];
image = [[self class] cropImage:image withRect:captureRect];
image = [[self class] scaleImage:image toSize:newSize];
return image;
}
#pragma mark - Other methods
/* Get a portion of an image */
+ (UIImage *)cropImage:(UIImage *)image withRect:(CGRect)rect
{
CGImageRef imageRef = CGImageCreateWithImageInRect([image CGImage], rect);
UIImage *cropedImage = [UIImage imageWithCGImage:imageRef];
return cropedImage;
}
/* Scale an image */
+ (UIImage *)scaleImage:(UIImage *)image toSize:(CGSize)newSize
{
UIGraphicsBeginImageContextWithOptions(newSize, YES, 0.0);
[image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
UIImage *scaledImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return scaledImage;
}
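For example, a minimal usage sketch (assuming the class is named MGScreenshotHelper as in the repository; the rect and target size are just placeholder values):
// Capture the top-left 200x200 points of the screen and scale the result down to 100x100.
UIImage *thumb = [MGScreenshotHelper getScreenshotFromScreenWithRect:CGRectMake(0, 0, 200, 200)
                                                      andScaleToSize:CGSizeMake(100, 100)];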
OK, it turns out this is a problem only in the Simulator. On a device it seems to work 98% of the time.
I have an animal image with a white background; the shape of the animal is a black outline. It is fixed in an image view in my .xib.
Now I would like to paint on the image, but only within a particular closed part.
Suppose the user touches the hand; then only the hand should be filled with the gradient. The rest of the image should remain the same.
- (UIImage*)imageFromRawData:(unsigned char *)rawData
{
NSUInteger bitsPerComponent = 8;
NSUInteger bytesPerPixel = 4;
NSUInteger bytesPerRow = bytesPerPixel * self.imageDoodle.image.size.width;
CGImageRef imageRef = [self.imageDoodle.image CGImage];
CGColorSpaceRef colorSpace = CGImageGetColorSpace(imageRef);
CGContextRef context = CGBitmapContextCreate(rawData,self.imageDoodle.image.size.width,
self.imageDoodle.image.size.height,bitsPerComponent,bytesPerRow,colorSpace,
kCGImageAlphaPremultipliedLast);
imageRef = CGBitmapContextCreateImage (context);
UIImage* rawImage = [UIImage imageWithCGImage:imageRef];
CGContextRelease(context);
CGImageRelease(imageRef);
return rawImage;
}
-(unsigned char*)rawDataFromImage:(UIImage *)image
{
CGImageRef imageRef = [image CGImage];
NSUInteger width = CGImageGetWidth(imageRef);
NSUInteger height = CGImageGetHeight(imageRef);
NSLog(@"w=%lu,h=%lu", (unsigned long)width, (unsigned long)height);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
unsigned char *rawData = malloc(height * width * 4);
NSUInteger bytesPerPixel = 4;
NSUInteger bytesPerRow = bytesPerPixel * width;
NSUInteger bitsPerComponent = 8;
CGContextRef context = CGBitmapContextCreate(rawData, width, height, bitsPerComponent, bytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colorSpace);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
CGContextRelease(context);
return rawData;
}
Where would I need to change my code to support this?
This should be possible with UIBezierPath, but I don't know how to implement it in this case.
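One way to restrict the fill to the closed region the user touched is a flood fill over the RGBA buffer returned by rawDataFromImage: above, stopping at the black outline. This is only a rough sketch under those assumptions; the function name, the solid fill colour (instead of a gradient), and the exact-match test against the seed colour are mine, not from the original post.
// Hypothetical iterative 4-neighbour flood fill over an RGBA8888 buffer.
// Recolours the connected area whose pixels match the colour at (startX, startY).
static void floodFillRegion(unsigned char *rawData, NSUInteger width, NSUInteger height,
                            int startX, int startY,
                            unsigned char fillR, unsigned char fillG, unsigned char fillB)
{
    const NSUInteger bytesPerPixel = 4;
    const NSUInteger bytesPerRow = bytesPerPixel * width;
    NSUInteger seedIndex = bytesPerRow * startY + bytesPerPixel * startX;
    unsigned char seedR = rawData[seedIndex], seedG = rawData[seedIndex + 1], seedB = rawData[seedIndex + 2];
    // Nothing to do (and no infinite loop) if the fill colour equals the seed colour.
    if (seedR == fillR && seedG == fillG && seedB == fillB) return;
    // Explicit stack of (x, y) pairs; each filled pixel pushes at most four neighbours.
    size_t capacity = (size_t)width * height * 4 + 4;
    int *stack = malloc(capacity * 2 * sizeof(int));
    size_t top = 0;
    stack[0] = startX; stack[1] = startY; top = 1;
    while (top > 0) {
        top--;
        int x = stack[top * 2], y = stack[top * 2 + 1];
        if (x < 0 || y < 0 || x >= (int)width || y >= (int)height) continue;
        NSUInteger i = bytesPerRow * y + bytesPerPixel * x;
        // Stop when the pixel no longer matches the seed colour (e.g. at the black outline).
        if (rawData[i] != seedR || rawData[i + 1] != seedG || rawData[i + 2] != seedB) continue;
        rawData[i] = fillR; rawData[i + 1] = fillG; rawData[i + 2] = fillB;
        stack[top * 2] = x + 1; stack[top * 2 + 1] = y;     top++;
        stack[top * 2] = x - 1; stack[top * 2 + 1] = y;     top++;
        stack[top * 2] = x;     stack[top * 2 + 1] = y + 1; top++;
        stack[top * 2] = x;     stack[top * 2 + 1] = y - 1; top++;
    }
    free(stack);
}
After the fill, the modified buffer can be turned back into a UIImage with imageFromRawData: above.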
I am developing an iPad application that has multiple UIImageViews in a view controller. Each image has some transparent parts. When the user taps an image, I want to test whether the area he tapped was NOT transparent, and if so perform some action.
After searching, I have reached the conclusion that I have to access the raw data of the image and check the alpha value of the point the user tapped.
I used the solution found here and it helped a lot. I modified the code so that if the point the user tapped is transparent (alpha < 1) it prints 0, otherwise it prints 1. However, the result is not accurate at runtime: I sometimes get 0 where the tapped point is not transparent, and vice versa. I think there is a problem with the byteIndex value; I am not sure it returns the color data at the point the user tapped.
Here is my code:
CGPoint touchPoint;
- (void)viewDidLoad
{
[super viewDidLoad];
[logo addGestureRecognizer:[[UITapGestureRecognizer alloc] initWithTarget:self action:@selector(handleSingleTap:)]];
}
-(void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
UITouch *touch = [[event allTouches] anyObject];
touchPoint = [touch locationInView:self.view];
}
- (void)handleSingleTap:(UITapGestureRecognizer *)recognizer
{
int x = touchPoint.x;
int y = touchPoint.y;
[self getRGBAsFromImage:img atX:x andY:y];
}
- (void)getRGBAsFromImage:(UIImage*)image atX:(int)xx andY:(int)yy {
// First get the image into your data buffer
CGImageRef imageRef = [image CGImage];
NSUInteger width = CGImageGetWidth(imageRef);
NSUInteger height = CGImageGetHeight(imageRef);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
unsigned char *rawData = (unsigned char*) calloc(height * width * 4, sizeof(unsigned char));
NSUInteger bytesPerPixel = 4;
NSUInteger bytesPerRow = bytesPerPixel * width;
NSUInteger bitsPerComponent = 8;
CGContextRef context = CGBitmapContextCreate(rawData, width, height,
bitsPerComponent, bytesPerRow, colorSpace,
kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colorSpace);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
CGContextRelease(context);
// Now your rawData contains the image data in the RGBA8888 pixel format.
int byteIndex = (bytesPerRow * yy) + xx * bytesPerPixel;
CGFloat alpha = (rawData[byteIndex + 3] * 1.0) / 255.0;
byteIndex += 4;
if (alpha < 1) {
NSLog(@"0");
// here I should add the action I want
}
else NSLog(@"1");
free(rawData);
}
Thank you in advance.
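One thing worth double-checking with code like this: the tap location is taken in self.view's coordinate space, while getRGBAsFromImage: indexes the bitmap in image pixel coordinates. A minimal conversion sketch, assuming the image view (logo) stretches its image to fill its bounds; the variable names are mine, not from the original post:
// Convert a point in the image view's coordinate space to pixel coordinates in the CGImage.
CGPoint viewPoint = [recognizer locationInView:logo];
CGFloat scaleX = CGImageGetWidth(img.CGImage) / logo.bounds.size.width;
CGFloat scaleY = CGImageGetHeight(img.CGImage) / logo.bounds.size.height;
int x = (int)(viewPoint.x * scaleX);
int y = (int)(viewPoint.y * scaleY);
[self getRGBAsFromImage:img atX:x andY:y];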
The method in the question is problematic: it draws the full image every time and also relies on the exact byte layout.
I propose drawing into a 1x1 context instead and reading the alpha of that single pixel.
- (void)handleSingleTap:(UITapGestureRecognizer *)recognizer
{
int x = touchPoint.x;
int y = touchPoint.y;
CGFloat alpha = [self getAlphaFromImage:img atX:x andY:y];
if(alpha<1)
…
}
- (CGFloat)getAlphaFromImage:(UIImage*)image atX:(NSInteger)xx andY:(NSInteger)yy {
// Cancel if point is outside image coordinates
if (!CGRectContainsPoint(CGRectMake(0.0f, 0.0f, image.size.width, image.size.height), CGPointMake(xx,yy))) {
return 0;
}
// Create a 1x1 pixel byte array and bitmap context to draw the pixel into.
// Reference: http://stackoverflow.com/questions/1042830/retrieving-a-pixel-alpha-value-for-a-uiimage
NSInteger pointX = xx;
NSInteger pointY = yy;
CGImageRef cgImage = image.CGImage;
NSUInteger width = image.size.width;
NSUInteger height = image.size.height;
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
int bytesPerPixel = 4;
int bytesPerRow = bytesPerPixel * 1;
NSUInteger bitsPerComponent = 8;
unsigned char pixelData[4] = { 0, 0, 0, 0 };
CGContextRef context = CGBitmapContextCreate(pixelData,
1,
1,
bitsPerComponent,
bytesPerRow,
colorSpace,
kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colorSpace);
CGContextSetBlendMode(context, kCGBlendModeCopy);
// Draw the pixel we are interested in onto the bitmap context
CGContextTranslateCTM(context, -pointX, pointY-(CGFloat)height);
CGContextDrawImage(context, CGRectMake(0.0f, 0.0f, (CGFloat)width, (CGFloat)height), cgImage);
CGContextRelease(context);
CGFloat alpha = (CGFloat)pixelData[3] / 255.0f;
return alpha;
}
I added a ground overlay to a map view, and I found these ways to change the alpha of groundOverlay.icon:
How to set the opacity/alpha of a UIImage?
But it seems to have no effect in the app; I still cannot see the map or other ground overlays behind the image.
Is there a solution for this?
+ (UIImage *) setImage:(UIImage *)image withAlpha:(CGFloat)alpha
{
// Create a pixel buffer in an easy to use format
CGImageRef imageRef = [image CGImage];
NSUInteger width = CGImageGetWidth(imageRef);
NSUInteger height = CGImageGetHeight(imageRef);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
UInt8 * m_PixelBuf = malloc(sizeof(UInt8) * height * width * 4);
NSUInteger bytesPerPixel = 4;
NSUInteger bytesPerRow = bytesPerPixel * width;
NSUInteger bitsPerComponent = 8;
CGContextRef context = CGBitmapContextCreate(m_PixelBuf, width, height,
bitsPerComponent, bytesPerRow, colorSpace,
kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
CGContextRelease(context);
//alter the alpha
int length = height * width * 4;
for (int i=0; i<length; i+=4)
{
m_PixelBuf[i+3] = 255*alpha;
}
//create a new image
CGContextRef ctx = CGBitmapContextCreate(m_PixelBuf, width, height,
bitsPerComponent, bytesPerRow, colorSpace,
kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGImageRef newImgRef = CGBitmapContextCreateImage(ctx);
CGColorSpaceRelease(colorSpace);
CGContextRelease(ctx);
free(m_PixelBuf);
UIImage *finalImage = [UIImage imageWithCGImage:newImgRef];
CGImageRelease(newImgRef);
return finalImage;
}
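For completeness, a minimal usage sketch; the class name ImageUtils and the overlay variable are placeholders, not from the original post:
// Hypothetical usage: rebuild the overlay icon at 50% opacity.
groundOverlay.icon = [ImageUtils setImage:groundOverlay.icon withAlpha:0.5];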
update 1
I have found a promising Apple doc here describing CGDataProviderCopyData. I think this does what I originally asked about: taking a drawing from a context and extracting the pixel values.
The example code uses CGImageGetDataProvider and some other features that I do not understand, so I cannot figure out how to implement their functions. How do I take information from the variable con, or from its context, and get access to the pixels?
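For what it's worth, a minimal sketch of that route (my own variable names; it assumes a CGImageRef such as num from the code below): CGImageGetDataProvider plus CGDataProviderCopyData hand you the image's backing bytes, which you can then index pixel by pixel.
// Copy the pixel bytes behind a CGImageRef and read one pixel; x and y are assumed
// to be valid coordinates, and the channel order depends on the image's bitmap info.
CFDataRef pixelData = CGDataProviderCopyData(CGImageGetDataProvider(imageRef));
const UInt8 *bytes = CFDataGetBytePtr(pixelData);
size_t bytesPerRow = CGImageGetBytesPerRow(imageRef);
size_t bytesPerPixel = CGImageGetBitsPerPixel(imageRef) / 8;
size_t offset = y * bytesPerRow + x * bytesPerPixel;
UInt8 firstComponent = bytes[offset];   // e.g. red for an RGBA image
CFRelease(pixelData);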
update 1
update 0
Maybe I am asking the wrong question here. CGContextDrawImage scales the image from 104 by 104 to 13 by 13 in my case, but then CGContextDrawImage displays the image. Maybe I need to find the part of CGContextDrawImage that just does the scaling.
I have found initWithData:scale: in the "UIImage Class Reference", but I don't know how to supply the data for that method. The scale I want is 0.25.
- (id)initWithData:(NSData *)data scale:(CGFloat)scale
Can someone tell me how to supply the (NSData *)data for my app?
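As a side note on that initializer (a minimal sketch, assuming the testCard image from the code below): the data parameter is encoded image data such as PNG or JPEG bytes, and scale sets the point-to-pixel factor of the resulting UIImage rather than resampling the bitmap.
// Encode an existing image to PNG data, then rebuild it with a scale factor of 4.0,
// so a 104x104-pixel bitmap reports a 26x26-point size; the pixels are not resampled.
NSData *pngData = UIImagePNGRepresentation(testCard);
UIImage *rescaledImage = [[UIImage alloc] initWithData:pngData scale:4.0];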
update 0
//
// BSViewController.m
#import "BSViewController.h"
@interface BSViewController ()
@end
@implementation BSViewController
- (IBAction) chooseImage:(id) sender{
[self.view addSubview:self.imageView];
UIImage* testCard = [UIImage imageNamed:@"ipad 7D.JPG"];
CGSize sz = [testCard size];
CGImageRef num = CGImageCreateWithImageInRect([testCard CGImage],CGRectMake(532, 0, 104, 104));
UIGraphicsBeginImageContext(CGSizeMake( 250,650));
CGContextRef con = UIGraphicsGetCurrentContext();
CGContextDrawImage(con, CGRectMake(0, 0, 13, 13) ,num);
UIGraphicsEndImageContext();
self.workingImage = CFBridgingRelease(num);
CGImageRelease(num);
I am working on the transition from the code above to the code below.
More specifically, I need to feed imageRef the correct input. I want to give imageRef a 13 by 13 image, but when I give imageRef num it gets a 104 by 104 image, and when I give imageRef con it gets a 0 by 0 image. (Another tentative approach is mentioned at the bottom.)
The code below is Brandon Trebitowski's:
CGImageRef imageRef = num;
NSUInteger width = CGImageGetWidth(imageRef);
NSUInteger height = CGImageGetHeight(imageRef);
NSLog(@"the width: %lu", (unsigned long)width);
NSLog(@"the height: %lu", (unsigned long)height);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
unsigned char *rawData = (unsigned char*) calloc(height * width * 4, sizeof(unsigned char));
NSUInteger bytesPerPixel = 4;
NSUInteger bytesPerRow = bytesPerPixel * width;
NSUInteger bitsPerComponent = 8;
CGContextRef context = CGBitmapContextCreate(rawData, width, height,
bitsPerComponent, bytesPerRow, colorSpace,
kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colorSpace);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
CGContextRelease(context);
NSLog(@"Stop 3");
// Now your rawData contains the image data in the RGBA8888 pixel format.
int byteIndex = (bytesPerRow * 0) + 0 * bytesPerPixel;
for (int ii = 0 ; ii < width * height ; ++ii)
{
int outputColor = (rawData[byteIndex] + rawData[byteIndex+1] + rawData[byteIndex+2]) / 3;
rawData[byteIndex] = (char) (outputColor);
rawData[byteIndex+1] = (char) (outputColor);
rawData[byteIndex+2] = (char) (outputColor);
byteIndex += 4;
}
}
- (void)viewDidLoad
{
[super viewDidLoad];
// Do any additional setup after loading the view, typically from a nib.
}
- (void)didReceiveMemoryWarning
{
[super didReceiveMemoryWarning];
// Dispose of any resources that can be recreated.
}
#end
I have also experimented with defining self.workingImage in one of the following two ways and supplying that to imageRef:
self.workingImage = num;
self.workingImage = (__bridge UIImage *)(num);
CGImageRef imageRef = [self.workingImage CGImage];
I changed two lines and added three lines and got the results I wanted.
The main change was to use UIGraphicsBeginImageContextWithOptions instead of UIGraphicsBeginImageContext, so that the rescaling is done before the image is drawn.
@implementation BSViewController
- (IBAction) chooseImage:(id) sender{
[self.view addSubview:self.imageView];
UIImage* testCard = [UIImage imageNamed:@"ipad 7D.JPG"];
CGSize sz = [testCard size];
CGImageRef num = CGImageCreateWithImageInRect([testCard CGImage],CGRectMake(532, 0, 104, 104));
UIGraphicsBeginImageContextWithOptions(CGSizeMake( 104,104), NO, 0.125); // Changed
CGContextRef con = UIGraphicsGetCurrentContext();
CGContextDrawImage(con, CGRectMake(0, 0, 104, 104) ,num); // Changed
UIImage* im = UIGraphicsGetImageFromCurrentImageContext(); // Added
UIGraphicsEndImageContext();
self.workingImage = CFBridgingRelease(num);
CGImageRelease(num);
UIImageView* iv = [[UIImageView alloc] initWithImage:im]; // Added
[self.imageView addSubview: iv]; // Added
CGImageRef imageRef = [im CGImage];
NSUInteger width = CGImageGetWidth(imageRef);
NSUInteger height = CGImageGetHeight(imageRef);
etc.