Crop UIImage to alpha - ios

I have a rather large, almost full screen image that I'm going to be displaying on an iPad. The image is about 80% transparent. I need to, on the client, determine the bounding box of the opaque pixels, and then crop to that bounding box.
Scanning other questions here on StackOverflow and reading some of the CoreGraphics docs, I think I could accomplish this by:
CGBitmapContextCreate(...) // Use this to render the image to a byte array
..
- iterate through this byte array to find the bounding box
..
CGImageCreateWithImageInRect(image, boundingRect);
That just seems very inefficient and clunky. Is there something clever I can do with CGImage masks or something which makes use of the device's graphics acceleration to do this?

Thanks to user404709 for doing all the hard work.
The code below also handles retina images and frees the CFDataRef.
- (UIImage *)trimmedImage {
CGImageRef inImage = self.CGImage;
CFDataRef m_DataRef;
m_DataRef = CGDataProviderCopyData(CGImageGetDataProvider(inImage));
UInt8 * m_PixelBuf = (UInt8 *) CFDataGetBytePtr(m_DataRef);
size_t width = CGImageGetWidth(inImage);
size_t height = CGImageGetHeight(inImage);
CGPoint top,left,right,bottom;
BOOL breakOut = NO;
for (int x = 0;breakOut==NO && x < width; x++) {
for (int y = 0; y < height; y++) {
int loc = x + (y * width);
loc *= 4;
if (m_PixelBuf[loc + 3] != 0) {
left = CGPointMake(x, y);
breakOut = YES;
break;
}
}
}
breakOut = NO;
for (int y = 0;breakOut==NO && y < height; y++) {
for (int x = 0; x < width; x++) {
int loc = x + (y * width);
loc *= 4;
if (m_PixelBuf[loc + 3] != 0) {
top = CGPointMake(x, y);
breakOut = YES;
break;
}
}
}
breakOut = NO;
for (int y = height-1;breakOut==NO && y >= 0; y--) {
for (int x = width-1; x >= 0; x--) {
int loc = x + (y * width);
loc *= 4;
if (m_PixelBuf[loc + 3] != 0) {
bottom = CGPointMake(x, y);
breakOut = YES;
break;
}
}
}
breakOut = NO;
for (int x = width-1;breakOut==NO && x >= 0; x--) {
for (int y = height-1; y >= 0; y--) {
int loc = x + (y * width);
loc *= 4;
if (m_PixelBuf[loc + 3] != 0) {
right = CGPointMake(x, y);
breakOut = YES;
break;
}
}
}
CGFloat scale = self.scale;
CGRect cropRect = CGRectMake(left.x / scale, top.y/scale, (right.x - left.x)/scale, (bottom.y - top.y) / scale);
UIGraphicsBeginImageContextWithOptions( cropRect.size,
NO,
scale);
[self drawAtPoint:CGPointMake(-cropRect.origin.x, -cropRect.origin.y)
blendMode:kCGBlendModeCopy
alpha:1.];
UIImage *croppedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
CFRelease(m_DataRef);
return croppedImage;
}

I created a category on UIImage which does this, if anyone needs it...
+ (UIImage *)cropTransparencyFromImage:(UIImage *)img {
CGImageRef inImage = img.CGImage;
CFDataRef m_DataRef;
m_DataRef = CGDataProviderCopyData(CGImageGetDataProvider(inImage));
UInt8 * m_PixelBuf = (UInt8 *) CFDataGetBytePtr(m_DataRef);
int width = img.size.width;
int height = img.size.height;
CGPoint top,left,right,bottom;
BOOL breakOut = NO;
for (int x = 0;breakOut==NO && x < width; x++) {
for (int y = 0; y < height; y++) {
int loc = x + (y * width);
loc *= 4;
if (m_PixelBuf[loc + 3] != 0) {
left = CGPointMake(x, y);
breakOut = YES;
break;
}
}
}
breakOut = NO;
for (int y = 0;breakOut==NO && y < height; y++) {
for (int x = 0; x < width; x++) {
int loc = x + (y * width);
loc *= 4;
if (m_PixelBuf[loc + 3] != 0) {
top = CGPointMake(x, y);
breakOut = YES;
break;
}
}
}
breakOut = NO;
for (int y = height-1;breakOut==NO && y >= 0; y--) {
for (int x = width-1; x >= 0; x--) {
int loc = x + (y * width);
loc *= 4;
if (m_PixelBuf[loc + 3] != 0) {
bottom = CGPointMake(x, y);
breakOut = YES;
break;
}
}
}
breakOut = NO;
for (int x = width-1;breakOut==NO && x >= 0; x--) {
for (int y = height-1; y >= 0; y--) {
int loc = x + (y * width);
loc *= 4;
if (m_PixelBuf[loc + 3] != 0) {
right = CGPointMake(x, y);
breakOut = YES;
break;
}
}
}
CGRect cropRect = CGRectMake(left.x, top.y, right.x - left.x, bottom.y - top.y);
UIGraphicsBeginImageContextWithOptions( cropRect.size,
NO,
0.);
[img drawAtPoint:CGPointMake(-cropRect.origin.x, -cropRect.origin.y)
blendMode:kCGBlendModeCopy
alpha:1.];
UIImage *croppedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return croppedImage;
}

There is no clever cheat to get around having the device do the work, but there are some ways to accelerate the task, or minimize the impact on the user interface.
First, consider the need to accelerate this task. A simple iteration through this byte array may go fast enough. There may be no need to invest in optimizing this task if the app is just calculating this once per run or in reaction to a user's choice that takes at least a few seconds between choices.
If the bounding box is not needed for some time after the image becomes available, this iteration may be launched in a separate thread. That way the calculation doesn't block the main interface thread. Grand Central Dispatch may make using a separate thread for this task easier.
If the task must be accelerated (say, for real-time processing of video frames), then parallel processing of the data may help. The Accelerate framework may help in setting up SIMD calculations on the data. Or, to really get performance with this iteration, ARM assembly language code using the NEON SIMD operations could get great results, though with significant development effort.
The last choice is to investigate a better algorithm. There's a huge body of work on detecting features in images. An edge detection algorithm may be faster than a simple iteration through the byte array. Maybe Apple will add edge detection capabilities to Core Graphics in the future which can be applied to this case. An Apple implemented image processing capability may not be an exact match for this case, but Apple's implementation should be optimized to use the SIMD or GPU capabilities of the iPad, resulting in better overall performance.
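To make the baseline concrete, the single-pass byte-array scan discussed above can be prototyped off-device in plain C. This is a minimal sketch assuming a row-major RGBA buffer with 4 bytes per pixel; the function name and signature are mine, not from any of the posted code:

```c
#include <stddef.h>
#include <stdint.h>

// Bounding box of pixels whose alpha is non-zero, in an RGBA8 buffer.
// Returns 0 if the image is fully transparent, 1 otherwise.
// (Hypothetical helper; layout assumed row-major, 4 bytes per pixel.)
int alpha_bounding_box(const uint8_t *buf, size_t width, size_t height,
                       size_t *minX, size_t *minY, size_t *maxX, size_t *maxY)
{
    int found = 0;
    for (size_t y = 0; y < height; y++) {
        for (size_t x = 0; x < width; x++) {
            uint8_t alpha = buf[(y * width + x) * 4 + 3];
            if (alpha != 0) {
                if (!found) {
                    *minX = *maxX = x;
                    *minY = *maxY = y;
                    found = 1;
                } else {
                    if (x < *minX) *minX = x;
                    if (x > *maxX) *maxX = x;
                    if (y < *minY) *minY = y;
                    if (y > *maxY) *maxY = y;
                }
            }
        }
    }
    return found;
}
```

A single pass like this touches every pixel exactly once; that linear scan is the baseline the SIMD and GPU suggestions above would try to beat.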

Related

Getting UIImage for only particular area bounds drawn - PaintView

I have already implemented paint / draw using:
- (void) touchesBegan: (NSSet *) touches withEvent: (UIEvent *) event
-(void) touchesMoved: (NSSet *) touches withEvent: (UIEvent *) event
- (void) touchesEnded: (NSSet *) touches withEvent: (UIEvent *) event
Now the issue is that, for any line drawn, I want to get an image of that particular line / paint. I don't want an image of the entire screen, only the area / bounds of the line / paint drawn.
The reason is that I want to perform pan gesture / delete functionality on that drawn line / paint.
The user can draw multiple lines, so I want a UIImage for each of these lines separately.
Any logic or code snippet would be really helpful.
Thanks in advance
Depending on your application, particularly how many times you plan on doing this in a row, you may be able to create a different image/layer for each paint line. Your final image would essentially be all the individual lines drawn on top of each other.
It may be more efficient to create a custom view to capture touch events. You could store the list of touch coordinates for each paint line and render them all at once in a custom drawRect. This way you are storing lists of coordinates for each paint line, and can still access each one, instead of a list of images. You could calculate the area/bounds from the coordinates used to render the line.
Additional context and code may be helpful; I'm not sure I completely understand what you're trying to accomplish!
I took a look at the MVPaint project. It seems you have an object:
MVPaintDrawing _drawing;
which contains an array of MVPaintTransaction. You can iterate over those MVPaintTransaction objects to draw a UIImage.
So first you can add a method to get an image from a MVPaintTransaction:
- (UIImage *) imageToDrawWithSize:(CGSize) size xScale:(CGFloat)xScale yScale:(CGFloat)yScale {
UIGraphicsBeginImageContext(size);
CGContextScaleCTM(UIGraphicsGetCurrentContext(), xScale, yScale);
// call the existing draw method
[self draw];
UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return result;
}
Then add a method to get an array of images from the array of MVPaintTransaction in the MVPaintDrawing class:
- (NSArray *) getImagesFromDrawingOnSurface: (UIImageView *) surface xScale: (CGFloat) xScale yScale: (CGFloat) yScale{
NSMutableArray *imageArray = [NSMutableArray new];
for (MVPaintTransaction * transaction in _drawing) {
UIImage *image = [transaction imageToDrawWithSize:surface.frame.size xScale:xScale yScale:yScale];
[imageArray addObject:image];
}
return imageArray;
}
In this way you will have an array of UIImage corresponding to each line you have drawn. If you want those images to have the "minimum" possible size (I mean, without the extra transparent area), you can apply this method (I added it in the MVPaintTransaction class):
- (UIImage *)trimmedImage:(UIImage *)img {
CGImageRef inImage = img.CGImage;
CFDataRef m_DataRef;
m_DataRef = CGDataProviderCopyData(CGImageGetDataProvider(inImage));
UInt8 * m_PixelBuf = (UInt8 *) CFDataGetBytePtr(m_DataRef);
size_t width = CGImageGetWidth(inImage);
size_t height = CGImageGetHeight(inImage);
CGPoint top,left,right,bottom;
BOOL breakOut = NO;
for (int x = 0;breakOut==NO && x < width; x++) {
for (int y = 0; y < height; y++) {
int loc = x + (y * width);
loc *= 4;
if (m_PixelBuf[loc + 3] != 0) {
left = CGPointMake(x, y);
breakOut = YES;
break;
}
}
}
breakOut = NO;
for (int y = 0;breakOut==NO && y < height; y++) {
for (int x = 0; x < width; x++) {
int loc = x + (y * width);
loc *= 4;
if (m_PixelBuf[loc + 3] != 0) {
top = CGPointMake(x, y);
breakOut = YES;
break;
}
}
}
breakOut = NO;
for (int y = height-1;breakOut==NO && y >= 0; y--) {
for (int x = width-1; x >= 0; x--) {
int loc = x + (y * width);
loc *= 4;
if (m_PixelBuf[loc + 3] != 0) {
bottom = CGPointMake(x, y);
breakOut = YES;
break;
}
}
}
breakOut = NO;
for (int x = width-1;breakOut==NO && x >= 0; x--) {
for (int y = height-1; y >= 0; y--) {
int loc = x + (y * width);
loc *= 4;
if (m_PixelBuf[loc + 3] != 0) {
right = CGPointMake(x, y);
breakOut = YES;
break;
}
}
}
CGFloat scale = img.scale;
CGRect cropRect = CGRectMake(left.x / scale, top.y/scale, (right.x - left.x)/scale, (bottom.y - top.y) / scale);
UIGraphicsBeginImageContextWithOptions( cropRect.size,
NO,
scale);
[img drawAtPoint:CGPointMake(-cropRect.origin.x, -cropRect.origin.y)
blendMode:kCGBlendModeCopy
alpha:1.];
UIImage *croppedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
CFRelease(m_DataRef);
return croppedImage;
}
Then simply replace in the first method:
return result;
by
return [self trimmedImage:result];

Cropping UIView

I have a view which needs to be cropped. I have 4 views displaying video, subviewed on the main view. Because of the videos' aspect ratio, I need to crop the views, making the videos squares instead of rectangles. Here is my code:
- (void)videoSize {
CGFloat size;
if ([self.videosView frame].size.height <= [self.emplacementView frame].size.width) {
size = [self.emplacementView frame].size.height;
} else {
size = [self.emplacementView frame].size.width;
}
CGFloat offsetX = 0;
CGFloat offsetY = 0;
NSArray* keys = [mediaStreams allKeys];
int count = keys.count;
if ( !count ) return;
for (int i=0; i<count; i++) {
NSString* id = keys[i];
MediaStream* ms = [ mediaStreams valueForKey:id ];
switch (i) {
case 0:
offsetX = 0;
offsetY = 0;
break;
case 1:
offsetX = size / 2;
offsetY = 0;
break;
case 2:
offsetX = 0;
offsetY = size / 2;
break;
case 3:
offsetX = size / 2;
offsetY = size / 2;
break;
default:
break;
}
CGRect frame = CGRectMake(offsetX, offsetY, size / 2, size / 2);
[ms getVideoView].getView.frame = frame;
[ms getVideoView].getView.backgroundColor = [UIColor greenColor];
}
[self.videosView addSubview:[ [ mediaStream getVideoView ] getView] ];
}
I tried different ways by adding more views to hide them, but it doesn't work at all. If you already have a solution to this problem or an idea to solve it, please share.
Set the clipsToBounds property of each of the 4 views displaying video to YES:
view1.clipsToBounds = YES;

Setting up custom CIColorCube filter in SKEffectNode

I'm trying to create an SKEffectNode that will turn any green pixel over a black background transparent. For testing purposes while I figure this stuff out, I want to make sure that the following code will not turn anything transparent within the subtree of the SKEffectNode. The following code actually prevents the child from being drawn and it spits out the following error:
CIColorCube inputCubeData is not of the expected length.
That's the method that creates the SKEffectNode
- (SKEffectNode *) newVeil
{
SKEffectNode *node = [[SKEffectNode alloc] init];
node.shouldEnableEffects = YES;
node.filter = [self createFilter];
SKSpriteNode *darkness = [SKSpriteNode spriteNodeWithColor:[UIColor blackColor] size:self.view.frame.size];
node.position = self.view.center;
[node addChild:darkness];
return node;
}
That's how I set up the filter (most, or dare I say all, of this code is from Apple's dev documents).
- (CIFilter *) createFilter
{
// Allocate memory
const unsigned int size = 64;
float *cubeData = (float *)malloc (size * size * size * sizeof (float) * 4);
float *c = cubeData;
rgb rgbInput;
hsv hsvOutput;
// Populate cube with a simple gradient going from 0 to 1
for (int z = 0; z < size; z++){
rgbInput.b = ((double)z)/(size-1); // Blue value
for (int y = 0; y < size; y++){
rgbInput.g = ((double)y)/(size-1); // Green value
for (int x = 0; x < size; x ++){
rgbInput.r = ((double)x)/(size-1); // Red value
// Convert RGB to HSV
// You can find publicly available rgbToHSV functions on the Internet
hsvOutput = rgb2hsv(rgbInput);
// Use the hue value to determine which to make transparent
// The minimum and maximum hue angle depends on
// the color you want to remove
float alpha = (hsvOutput.h > 120 && hsvOutput.h < 100) ? 0.0f: 1.0f; // NB: as written this condition is always false, so alpha is always 1.0f
// Calculate premultiplied alpha values for the cube
c[0] = rgbInput.b * alpha;
c[1] = rgbInput.g * alpha;
c[2] = rgbInput.r * alpha;
c[3] = alpha;
c += 4; // advance our pointer into memory for the next color value
}
}
}
// Create memory with the cube data
NSData *data = [NSData dataWithBytesNoCopy:cubeData
length:size
freeWhenDone:YES];
CIFilter *colorCube = [CIFilter filterWithName:@"CIColorCube"];
[colorCube setValue:@(size) forKey:@"inputCubeDimension"];
// Set data for cube
[colorCube setValue:data forKey:#"inputCubeData"];
return colorCube;
}
I just can't spot the problem. Not a whole lot of experience with CoreImage. Anyone?
Update 1
I tried exporting the whole CIFilter into its own class.
// PMColorCube.h
#import <CoreImage/CoreImage.h>
@interface PMColorCube : CIFilter {
CIImage *inputImage;
}
@property (retain, nonatomic) CIImage *inputImage;
@end
// PMColorCube.m
#import "PMColorCube.h"
typedef struct {
double r; // percent
double g; // percent
double b; // percent
} rgb;
typedef struct {
double h; // angle in degrees
double s; // percent
double v; // percent
} hsv;
static hsv rgb2hsv(rgb in);
@implementation PMColorCube
@synthesize inputImage;
hsv rgb2hsv(rgb in)
{
hsv out;
double min, max, delta;
min = in.r < in.g ? in.r : in.g;
min = min < in.b ? min : in.b;
max = in.r > in.g ? in.r : in.g;
max = max > in.b ? max : in.b;
out.v = max; // v
delta = max - min;
if( max > 0.0 ) {
out.s = (delta / max); // s
} else {
// r = g = b = 0 // s = 0, v is undefined
out.s = 0.0;
out.h = NAN; // its now undefined
return out;
}
if( in.r >= max ) // > is bogus, just keeps compiler happy
out.h = ( in.g - in.b ) / delta; // between yellow & magenta
else
if( in.g >= max )
out.h = 2.0 + ( in.b - in.r ) / delta; // between cyan & yellow
else
out.h = 4.0 + ( in.r - in.g ) / delta; // between magenta & cyan
out.h *= 60.0; // degrees
if( out.h < 0.0 )
out.h += 360.0;
return out;
}
- (CIImage *) outputImage
{
const unsigned int size = 64;
float *cubeData = (float *)malloc (size * size * size * sizeof (float) * 4);
float *c = cubeData;
rgb rgbInput;
hsv hsvOutput;
// Populate cube with a simple gradient going from 0 to 1
for (int z = 0; z < size; z++){
rgbInput.b = ((double)z)/(size-1); // Blue value
for (int y = 0; y < size; y++){
rgbInput.g = ((double)y)/(size-1); // Green value
for (int x = 0; x < size; x ++){
rgbInput.r = ((double)x)/(size-1); // Red value
// Convert RGB to HSV
// You can find publicly available rgbToHSV functions on the Internet
hsvOutput = rgb2hsv(rgbInput);
// Use the hue value to determine which to make transparent
// The minimum and maximum hue angle depends on
// the color you want to remove
float alpha = (hsvOutput.h > 120 && hsvOutput.h < 100) ? 0.0f: 1.0f; // NB: as written this condition is always false, so alpha is always 1.0f
// Calculate premultiplied alpha values for the cube
c[0] = rgbInput.b * alpha;
c[1] = rgbInput.g * alpha;
c[2] = rgbInput.r * alpha;
c[3] = alpha;
c += 4; // advance our pointer into memory for the next color value
}
}
}
// Create memory with the cube data
NSData *data = [NSData dataWithBytesNoCopy:cubeData
length:size
freeWhenDone:YES];
CIFilter *colorCube = [CIFilter filterWithName:@"CIColorCube"];
[colorCube setValue:@(size) forKey:@"inputCubeDimension"];
// Set data for cube
[colorCube setValue:data forKey:#"inputCubeData"];
[colorCube setValue:self.inputImage forKey:kCIInputImageKey];
CIImage *result = [colorCube valueForKey:kCIOutputImageKey];
return result;
}
@end
I still have the same error during run time
Embarrassing as it may sound, the size that I passed when creating the NSData didn't correspond to the real length of the cube data. Fixed it like so:
// Create memory with the cube data
NSData *data = [NSData dataWithBytesNoCopy:cubeData
length:size * size * size * sizeof (float) * 4
freeWhenDone:YES];
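The error message makes sense once you write out the arithmetic: CIColorCube expects dimension cubed entries of 4 floats (RGBA) each, so the NSData length must be dim * dim * dim * 4 * sizeof(float). A tiny helper, purely to show the math (the function name is mine):

```c
#include <stddef.h>

// Required byte length of CIColorCube's inputCubeData:
// dimension^3 lattice entries, each 4 floats (premultiplied RGBA).
size_t color_cube_byte_length(size_t dim)
{
    return dim * dim * dim * 4 * sizeof(float);
}
```

For the 64-entry cube above that comes to 4 MB, while the original code passed a length of just 64 bytes, hence the "not of the expected length" error.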

Get all X,Y coordinates between two points in objective C

How to get all X,Y coordinates between two points.
I want to move a UIButton in a diagonal pattern in objective C.
Example. To move UIButton from position 'Point A' towards position 'Point B'.
.Point B
. Point A
Thanks in advance.
You can use Bresenham's line algorithm.
Here is a slightly simplified version that I have used a bunch of times:
+(NSArray*)getAllPointsFromPoint:(CGPoint)fPoint toPoint:(CGPoint)tPoint
{
/* Simplified implementation of Bresenham's line algorithm */
NSMutableArray *ret = [NSMutableArray array];
float deltaX = fabsf(tPoint.x - fPoint.x);
float deltaY = fabsf(tPoint.y - fPoint.y);
float x = fPoint.x;
float y = fPoint.y;
float err = deltaX-deltaY;
float sx = -0.5;
float sy = -0.5;
if(fPoint.x<tPoint.x)
sx = 0.5;
if(fPoint.y<tPoint.y)
sy = 0.5;
do {
[ret addObject:[NSValue valueWithCGPoint:CGPointMake(x, y)]];
float e = 2*err;
if(e > -deltaY)
{
err -=deltaY;
x +=sx;
}
if(e < deltaX)
{
err +=deltaX;
y+=sy;
}
} while (round(x) != round(tPoint.x) && round(y) != round(tPoint.y)); // NB: exits once either coordinate reaches the target, so purely horizontal or vertical lines stop after the first point
[ret addObject:[NSValue valueWithCGPoint:tPoint]];//add final point
return ret;
}
If you simply want to animate a UIControl from one location to another, you might want to use UIAnimation:
[UIView animateWithDuration:1.0f delay:0.0f options:UIViewAnimationOptionCurveLinear animations:^{
btn.center = CGPointMake(<NEW_X>, <NEW_Y>);
} completion:^(BOOL finished) {
}];
You should really use Core Animation for this. You just need to specify the new origin for your UIButton and Core Animation does the rest:
[UIView animateWithDuration:0.3 animations:^{
CGRect frame = myButton.frame;
frame.origin = CGPointMake(..new X.., ..new Y..);
myButton.frame = frame;
}];
For Swift 3.0,
func findAllPointsBetweenTwoPoints(startPoint : CGPoint, endPoint : CGPoint) {
var allPoints :[CGPoint] = [CGPoint]()
let deltaX = fabs(endPoint.x - startPoint.x)
let deltaY = fabs(endPoint.y - startPoint.y)
var x = startPoint.x
var y = startPoint.y
var err = deltaX-deltaY
var sx = -0.5
var sy = -0.5
if(startPoint.x<endPoint.x){
sx = 0.5
}
if(startPoint.y<endPoint.y){
sy = 0.5;
}
repeat {
let pointObj = CGPoint(x: x, y: y)
allPoints.append(pointObj)
let e = 2*err
if(e > -deltaY)
{
err -= deltaY
x += CGFloat(sx)
}
if(e < deltaX)
{
err += deltaX
y += CGFloat(sy)
}
} while (round(x) != round(endPoint.x) && round(y) != round(endPoint.y));
allPoints.append(endPoint)
}
This version of Bresenham's line algorithm works well with horizontal lines:
+ (NSArray*)getAllPointsFromPoint:(CGPoint)fPoint toPoint:(CGPoint)tPoint {
/* Bresenham's line algorithm */
NSMutableArray *ret = [NSMutableArray array];
int x1 = fPoint.x;
int y1 = fPoint.y;
int x2 = tPoint.x;
int y2 = tPoint.y;
int dy = y2 - y1;
int dx = x2 - x1;
int stepx, stepy;
if (dy < 0) { dy = -dy; stepy = -1; } else { stepy = 1; }
if (dx < 0) { dx = -dx; stepx = -1; } else { stepx = 1; }
dy <<= 1; // dy is now 2*dy
dx <<= 1; // dx is now 2*dx
[ret addObject:[NSValue valueWithCGPoint:CGPointMake(x1, y1)]];
if (dx > dy)
{
int fraction = dy - (dx >> 1); // same as 2*dy - dx
while (x1 != x2)
{
if (fraction >= 0)
{
y1 += stepy;
fraction -= dx; // same as fraction -= 2*dx
}
x1 += stepx;
fraction += dy; // same as fraction += 2*dy
[ret addObject:[NSValue valueWithCGPoint:CGPointMake(x1, y1)]];
}
} else {
int fraction = dx - (dy >> 1);
while (y1 != y2) {
if (fraction >= 0) {
x1 += stepx;
fraction -= dy;
}
y1 += stepy;
fraction += dx;
[ret addObject:[NSValue valueWithCGPoint:CGPointMake(x1, y1)]];
}
}
return ret;
}
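For what it's worth, this integer variant is easy to exercise off-device. Here is a plain-C port mirroring the Objective-C above; the IPoint type and the output-array harness are mine:

```c
typedef struct { int x, y; } IPoint;

// Plain-C port of the integer Bresenham variant above. Appends each
// visited point to out[] (caller must size it) and returns the count.
int bresenham_line(int x1, int y1, int x2, int y2, IPoint *out)
{
    int dy = y2 - y1, dx = x2 - x1;
    int stepx = 1, stepy = 1, n = 0;
    if (dy < 0) { dy = -dy; stepy = -1; }
    if (dx < 0) { dx = -dx; stepx = -1; }
    dy <<= 1;  // dy is now 2*dy
    dx <<= 1;  // dx is now 2*dx
    out[n++] = (IPoint){x1, y1};
    if (dx > dy) {
        int fraction = dy - (dx >> 1);  // 2*dy - dx
        while (x1 != x2) {
            if (fraction >= 0) { y1 += stepy; fraction -= dx; }
            x1 += stepx;
            fraction += dy;
            out[n++] = (IPoint){x1, y1};
        }
    } else {
        int fraction = dx - (dy >> 1);
        while (y1 != y2) {
            if (fraction >= 0) { x1 += stepx; fraction -= dy; }
            y1 += stepy;
            fraction += dx;
            out[n++] = (IPoint){x1, y1};
        }
    }
    return n;
}
```

Because the outer loop keys off the dominant axis (dx > dy or not), horizontal and vertical segments are handled by the same code path, which is what fixes the early-exit problem of the floating-point version.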
UPDATE: this is actually simple math: finding points on the line defined by two points. Here is my algorithm:
+ (NSArray*)getNumberOfPoints:(int)num fromPoint:(CGPoint)p toPoint:(CGPoint)q {
NSMutableArray *ret = [NSMutableArray arrayWithCapacity:num];
float epsilon = 1.0f / (float)num;
int count = 1;
for (float t=0; t < 1+epsilon && count <= num ; t += epsilon) {
float x = (1-t)*p.x + t*q.x;
float y = (1-t)*p.y + t*q.y;
[ret addObject:[NSValue valueWithCGPoint:CGPointMake(x, y)]];
count++;
}
// DDLogInfo(@"Vector: points made: %d",(int)[ret count]);
// DDLogInfo(@"Vector: epsilon: %f",epsilon);
// DDLogInfo(@"Vector: points: %@",ret);
return [ret copy];
}
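The interpolation in the update is just point(t) = (1 - t) * p + t * q sampled at t = 0, 1/num, 2/num, and so on. In plain C, with hypothetical types mirroring the method above:

```c
typedef struct { float x, y; } FPoint;

// num evenly spaced points along the segment p -> q, via linear
// interpolation: point(t) = (1-t)*p + t*q, t stepping by 1/num.
// Writes into out[] (caller must size it) and returns the count.
int points_on_segment(FPoint p, FPoint q, int num, FPoint *out)
{
    int count = 0;
    float epsilon = 1.0f / (float)num;
    for (float t = 0.0f; t < 1.0f + epsilon && count < num; t += epsilon) {
        out[count].x = (1.0f - t) * p.x + t * q.x;
        out[count].y = (1.0f - t) * p.y + t * q.y;
        count++;
    }
    return count;
}
```

Unlike Bresenham, this yields a fixed number of (possibly fractional) points regardless of the segment's length, which is usually what you want for animation waypoints rather than pixel rasterization.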

Trim UIImage border

Here's an example of an image I would like to trim. I want to get rid of the borders around the image (in this case the top and bottom black bars).
I found a library on Github: CKImageAdditions, however it doesn't seem to work. When I pass in a UIColor (with RGB colours) it just returns the same image.
I can find a lot of examples and category classes that would trim a UIImage with any transparent pixels as the border, but in this case I need to trim the black colour. I have sampled the colour in my images and the colour value is indeed 255, but it doesn't seem to match what the above library is looking for.
Does anyone have a library they have used or any insight? I've searched and searched and CKImageAdditions has been the only thing I can find that advertises to trim with a colour (although, unfortunately doesn't work in my case).
I ended up customizing a method from a function in CKImageAdditions that supposedly had this functionality, but I couldn't get it to work. It just wouldn't trim the colour, so instead I check the pixel's RGB values against \0 (black). CKImageAdditions just couldn't find the black pixels, unfortunately.
Since the images I wanted to trim sometimes didn't have super black bars (sometimes they'd have a stray pixel with a lighter dark colour or something) I added GPUImage functionality to the method, which basically just creates a black and white version of the image with a strong filter on it so any dark colours become black and any light colours become white, making the black bar borders much more prominent and ensuring better results when I look for them in the method. And of course I crop the original image at the end based on the results from the black/white image.
Here's my code:
typedef struct Pixel { uint8_t r, g, b, a; } Pixel;
+(UIImage*)trimBlack:(UIImage*)originalImage {
GPUImagePicture *stillImageSource = [[GPUImagePicture alloc] initWithImage:originalImage];
GPUImageLuminanceThresholdFilter *stillImageFilter = [[GPUImageLuminanceThresholdFilter alloc] init];
stillImageFilter.threshold = 0.1;
[stillImageSource addTarget:stillImageFilter];
[stillImageSource processImage];
UIImage *imageToProcess = [stillImageFilter imageFromCurrentlyProcessedOutput];
RMImageTrimmingSides sides = RMImageTrimmingSidesAll;
CGImageRef image = imageToProcess.CGImage;
void * bitmapData = NULL;
CGContextRef context = CKBitmapContextAndDataCreateWithImage(image, &bitmapData);
Pixel *data = bitmapData;
size_t width = CGBitmapContextGetWidth(context);
size_t height = CGBitmapContextGetHeight(context);
size_t top = 0;
size_t bottom = height;
size_t left = 0;
size_t right = width;
// Scan the left
if (sides & RMImageTrimmingSidesLeft) {
for (size_t x = 0; x < width; x++) {
for (size_t y = 0; y < height; y++) {
Pixel pixel = data[y * width + x];
if (pixel.r != '\0' && pixel.g != '\0' && pixel.b != '\0') {
left = x;
goto SCAN_TOP;
}
}
}
}
// Scan the top
SCAN_TOP:
if (sides & RMImageTrimmingSidesTop) {
for (size_t y = 0; y < height; y++) {
for (size_t x = 0; x < width; x++) {
Pixel pixel = data[y * width + x];
if (pixel.r != '\0' && pixel.g != '\0' && pixel.b != '\0') {
top = y;
goto SCAN_RIGHT;
}
}
}
}
// Scan the right
SCAN_RIGHT:
if (sides & RMImageTrimmingSidesRight) {
for (size_t x = width-1; x >= left; x--) {
for (size_t y = 0; y < height; y++) {
Pixel pixel = data[y * width + x];
if (pixel.r != '\0' && pixel.g != '\0' && pixel.b != '\0') {
right = x;
goto SCAN_BOTTOM;
}
}
}
}
// Scan the bottom
SCAN_BOTTOM:
if (sides & RMImageTrimmingSidesBottom) {
for (size_t y = height-1; y >= top; y--) {
for (size_t x = 0; x < width; x++) {
Pixel pixel = data[y * width + x];
if (pixel.r != '\0' && pixel.g != '\0' && pixel.b != '\0') {
bottom = y;
goto FINISH;
}
}
}
}
FINISH:
CGContextRelease(context);
free(bitmapData);
CGRect rect = CGRectMake(left, top, right - left, bottom - top);
return [originalImage imageCroppedToRect:rect];
}
Thanks to all the hard work from the developers of the libraries used in the code above and of course all credit goes to them!
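The two-stage idea in the answer above (harden the image to pure black/white with a luminance threshold, then scan for the first and last rows containing any non-black pixel) can be sketched in plain C over a grayscale buffer. The function names and the single-channel layout here are mine, not CKImageAdditions or GPUImage API:

```c
#include <stddef.h>
#include <stdint.h>

// Stage 1: threshold a grayscale buffer in place, a simplified stand-in
// for the GPUImage luminance-threshold filter step.
void threshold_inplace(uint8_t *gray, size_t n, uint8_t cutoff)
{
    for (size_t i = 0; i < n; i++)
        gray[i] = (gray[i] > cutoff) ? 255 : 0;
}

// Stage 2: find the first and last rows containing any white pixel,
// i.e. the vertical extent of content between the black letterbox bars.
// Returns 0 if the whole image thresholded to black, 1 otherwise.
int content_rows(const uint8_t *gray, size_t width, size_t height,
                 size_t *top, size_t *bottom)
{
    int found = 0;
    for (size_t y = 0; y < height; y++) {
        for (size_t x = 0; x < width; x++) {
            if (gray[y * width + x] != 0) {
                if (!found) { *top = y; found = 1; }
                *bottom = y;
                break;  // one white pixel is enough for this row
            }
        }
    }
    return found;
}
```

Thresholding first is what makes the scan robust against the "stray lighter pixel" problem the answer mentions: after stage 1 every pixel is exactly 0 or 255, so the row scan never has to judge borderline shades.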
I tweaked your solution now that GPUImage's imageFromCurrentlyProcessedOutput method has been removed and replaced with another method that doesn't work at all.
Also tweaked how the image was cropped and removed some stuff that was just broken. Seems to work.
typedef struct Pixel { uint8_t r, g, b, a; } Pixel;
-(UIImage*)trimBlack:(UIImage*)originalImage {
GPUImagePicture *stillImageSource = [[GPUImagePicture alloc] initWithImage:originalImage];
GPUImageLuminanceThresholdFilter *stillImageFilter = [[GPUImageLuminanceThresholdFilter alloc] init];
stillImageFilter.threshold = 0.1;
[stillImageSource addTarget:stillImageFilter];
[stillImageSource processImage];
[stillImageSource useNextFrameForImageCapture];
//UIImage *imageToProcess = [stillImageFilter imageFromCurrentFramebufferWithOrientation:UIImageOrientationUp];
//UIImage *imageToProcess = [UIImage imageWithCGImage:[stillImageFilter newCGImageFromCurrentlyProcessedOutput]];
UIImage *imageToProcess = originalImage;
//RMImageTrimmingSides sides = RMImageTrimmingSidesAll;
CGImageRef image = imageToProcess.CGImage;
void * bitmapData = NULL;
CGContextRef context = CKBitmapContextAndDataCreateWithImage(image, &bitmapData);
Pixel *data = bitmapData;
size_t width = CGBitmapContextGetWidth(context);
size_t height = CGBitmapContextGetHeight(context);
size_t top = 0;
size_t bottom = height;
size_t left = 0;
size_t right = width;
// Scan the left
//if (sides & RMImageTrimmingSidesLeft) {
for (size_t x = 0; x < width; x++) {
for (size_t y = 0; y < height; y++) {
Pixel pixel = data[y * width + x];
if (pixel.r != '\0' && pixel.g != '\0' && pixel.b != '\0') {
left = x;
goto SCAN_TOP;
}
}
}
//}
// Scan the top
SCAN_TOP:
//if (sides & RMImageTrimmingSidesTop) {
for (size_t y = 0; y < height; y++) {
for (size_t x = 0; x < width; x++) {
Pixel pixel = data[y * width + x];
if (pixel.r != '\0' && pixel.g != '\0' && pixel.b != '\0') {
top = y;
goto SCAN_RIGHT;
}
}
}
//}
// Scan the right
SCAN_RIGHT:
//if (sides & RMImageTrimmingSidesRight) {
for (size_t x = width-1; x >= left; x--) {
for (size_t y = 0; y < height; y++) {
Pixel pixel = data[y * width + x];
if (pixel.r != '\0' && pixel.g != '\0' && pixel.b != '\0') {
right = x;
goto SCAN_BOTTOM;
}
}
}
//}
// Scan the bottom
SCAN_BOTTOM:
//if (sides & RMImageTrimmingSidesBottom) {
for (size_t y = height-1; y >= top; y--) {
for (size_t x = 0; x < width; x++) {
Pixel pixel = data[y * width + x];
if (pixel.r != '\0' && pixel.g != '\0' && pixel.b != '\0') {
bottom = y;
goto FINISH;
}
}
}
//}
FINISH:
CGContextRelease(context);
free(bitmapData);
CGRect rect = CGRectMake(left, top, right - left, bottom - top);
CGImageRef imageRef = CGImageCreateWithImageInRect([originalImage CGImage], rect);
UIImage *croppedImage = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);
return croppedImage;
}