How to get all X,Y coordinates between two points.
I want to move a UIButton in a diagonal pattern in Objective-C.
For example: move the UIButton from position 'Point A' towards position 'Point B'.
.Point B
. Point A
Thanks in advance.
You can use Bresenham's line algorithm.
Here is a slightly simplified version that I have used a bunch of times:
+ (NSArray*)getAllPointsFromPoint:(CGPoint)fPoint toPoint:(CGPoint)tPoint
{
    /* Simplified implementation of Bresenham's line algorithm */
    NSMutableArray *ret = [NSMutableArray array];

    float deltaX = fabsf(tPoint.x - fPoint.x);
    float deltaY = fabsf(tPoint.y - fPoint.y);
    float x = fPoint.x;
    float y = fPoint.y;
    float err = deltaX - deltaY;
    float sx = -0.5;
    float sy = -0.5;
    if (fPoint.x < tPoint.x)
        sx = 0.5;
    if (fPoint.y < tPoint.y)
        sy = 0.5;

    do {
        [ret addObject:[NSValue valueWithCGPoint:CGPointMake(x, y)]];
        float e = 2 * err;
        if (e > -deltaY)
        {
            err -= deltaY;
            x += sx;
        }
        if (e < deltaX)
        {
            err += deltaX;
            y += sy;
        }
    } while (round(x) != round(tPoint.x) && round(y) != round(tPoint.y));

    [ret addObject:[NSValue valueWithCGPoint:tPoint]]; // add final point
    return ret;
}
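If the goal is just to walk the UIButton through those points, one option is to step through the returned array on a timer. This is only a sketch: the property names pathPoints, pointIndex and button, and the MyLineHelper class hosting the method above, are assumptions rather than anything from the question.

// Sketch only: advance the button one point per tick until point B is reached.
- (void)moveButtonFrom:(CGPoint)pointA to:(CGPoint)pointB
{
    self.pathPoints = [MyLineHelper getAllPointsFromPoint:pointA toPoint:pointB];
    self.pointIndex = 0;
    [NSTimer scheduledTimerWithTimeInterval:1.0 / 60.0
                                     target:self
                                   selector:@selector(stepAlongPath:)
                                   userInfo:nil
                                    repeats:YES];
}

- (void)stepAlongPath:(NSTimer *)timer
{
    if (self.pointIndex >= self.pathPoints.count) {
        [timer invalidate]; // arrived at point B
        return;
    }
    self.button.center = [self.pathPoints[self.pointIndex] CGPointValue];
    self.pointIndex++;
}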
If you simply want to animate a UIControl from one location to another, you might want to use a UIView animation:
[UIView animateWithDuration:1.0f delay:0.0f options:UIViewAnimationOptionCurveLinear animations:^{
    btn.center = CGPointMake(<NEW_X>, <NEW_Y>);
} completion:^(BOOL finished) {
}];
You should really use Core Animation for this. You just need to specify the new origin for your UIButton and Core Animation does the rest:
[UIView animateWithDuration:0.3 animations:^{
    CGRect frame = myButton.frame;
    frame.origin = CGPointMake(..new X.., ..new Y..);
    myButton.frame = frame;
}];
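If the intermediate points are only needed so the move follows the exact diagonal, a keyframe animation along a straight path avoids computing them at all. A hedged sketch (QuartzCore must be imported, and pointB stands in for your target position):

UIBezierPath *path = [UIBezierPath bezierPath];
[path moveToPoint:myButton.center];
[path addLineToPoint:pointB];

CAKeyframeAnimation *move = [CAKeyframeAnimation animationWithKeyPath:@"position"];
move.path = path.CGPath;
move.duration = 1.0;
[myButton.layer addAnimation:move forKey:@"diagonalMove"];
myButton.center = pointB; // keep the model value in sync with where the animation ends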
For Swift 3.0:
func findAllPointsBetweenTwoPoints(startPoint: CGPoint, endPoint: CGPoint) -> [CGPoint] {
    var allPoints: [CGPoint] = [CGPoint]()

    let deltaX = fabs(endPoint.x - startPoint.x)
    let deltaY = fabs(endPoint.y - startPoint.y)
    var x = startPoint.x
    var y = startPoint.y
    var err = deltaX - deltaY

    var sx = -0.5
    var sy = -0.5
    if startPoint.x < endPoint.x {
        sx = 0.5
    }
    if startPoint.y < endPoint.y {
        sy = 0.5
    }

    repeat {
        let pointObj = CGPoint(x: x, y: y)
        allPoints.append(pointObj)

        let e = 2 * err
        if e > -deltaY {
            err -= deltaY
            x += CGFloat(sx)
        }
        if e < deltaX {
            err += deltaX
            y += CGFloat(sy)
        }
    } while round(x) != round(endPoint.x) && round(y) != round(endPoint.y)

    allPoints.append(endPoint)
    return allPoints
}
This version of Bresenham's line algorithm also works well with horizontal lines:
+ (NSArray*)getAllPointsFromPoint:(CGPoint)fPoint toPoint:(CGPoint)tPoint {
    /* Bresenham's line algorithm */
    NSMutableArray *ret = [NSMutableArray array];

    int x1 = fPoint.x;
    int y1 = fPoint.y;
    int x2 = tPoint.x;
    int y2 = tPoint.y;

    int dy = y2 - y1;
    int dx = x2 - x1;
    int stepx, stepy;

    if (dy < 0) { dy = -dy; stepy = -1; } else { stepy = 1; }
    if (dx < 0) { dx = -dx; stepx = -1; } else { stepx = 1; }
    dy <<= 1; // dy is now 2*dy
    dx <<= 1; // dx is now 2*dx

    [ret addObject:[NSValue valueWithCGPoint:CGPointMake(x1, y1)]];

    if (dx > dy)
    {
        int fraction = dy - (dx >> 1); // same as 2*dy - dx
        while (x1 != x2)
        {
            if (fraction >= 0)
            {
                y1 += stepy;
                fraction -= dx; // same as fraction -= 2*dx
            }
            x1 += stepx;
            fraction += dy; // same as fraction += 2*dy
            [ret addObject:[NSValue valueWithCGPoint:CGPointMake(x1, y1)]];
        }
    } else {
        int fraction = dx - (dy >> 1);
        while (y1 != y2) {
            if (fraction >= 0) {
                x1 += stepx;
                fraction -= dy;
            }
            y1 += stepy;
            fraction += dx;
            [ret addObject:[NSValue valueWithCGPoint:CGPointMake(x1, y1)]];
        }
    }
    return ret;
}
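For example, a purely horizontal segment, for which the simplified version earlier returns only the two end points, comes out fully populated here (MyLineHelper is just a placeholder for whatever class the method lives on):

NSArray *pts = [MyLineHelper getAllPointsFromPoint:CGPointMake(0, 10)
                                           toPoint:CGPointMake(5, 10)];
// pts contains (0,10) (1,10) (2,10) (3,10) (4,10) (5,10)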
UPDATE: this is actually simple math: finding points on the line defined by two points. Here is my algorithm:
+ (NSArray*)getNumberOfPoints:(int)num fromPoint:(CGPoint)p toPoint:(CGPoint)q {
    NSMutableArray *ret = [NSMutableArray arrayWithCapacity:num];
    float epsilon = 1.0f / (float)num;
    int count = 1;
    for (float t = 0; t < 1 + epsilon && count <= num; t += epsilon) {
        float x = (1 - t) * p.x + t * q.x;
        float y = (1 - t) * p.y + t * q.y;
        [ret addObject:[NSValue valueWithCGPoint:CGPointMake(x, y)]];
        count++;
    }
    // DDLogInfo(@"Vector: points made: %d", (int)[ret count]);
    // DDLogInfo(@"Vector: epsilon: %f", epsilon);
    // DDLogInfo(@"Vector: points: %@", ret);
    return [ret copy];
}
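As a quick worked example of the math (the class name is again assumed), asking for four points from (0,0) toward (90,30) gives epsilon = 0.25, so t takes the values 0, 0.25, 0.5 and 0.75:

NSArray *pts = [MyLineHelper getNumberOfPoints:4
                                     fromPoint:CGPointMake(0, 0)
                                       toPoint:CGPointMake(90, 30)];
// pts contains (0,0) (22.5,7.5) (45,15) (67.5,22.5)
// (the end point itself corresponds to t = 1, which the count limit excludes here)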
Related
I'm developing an app in which, when the user opens it, there is an image, controlled by CMMotionManager, that moves according to the direction in which the user tilts the device.
....
This is my code to start the device motion.
motionManager = [[CMMotionManager alloc] init];
motionManager.showsDeviceMovementDisplay = YES;
motionManager.deviceMotionUpdateInterval = 1.0 / 60.0;
[motionManager startDeviceMotionUpdatesUsingReferenceFrame:CMAttitudeReferenceFrameXArbitraryCorrectedZVertical];
This is the code I use to control my image in reference to the motion of the device.
- (void)drawRect:(CGRect)rect
{
    if (placesOfInterestCoordinates == nil) {
        return;
    }

    mat4f_t projectionCameraTransform;
    multiplyMatrixAndMatrix(projectionCameraTransform, projectionTransform, cameraTransform);

    int i = 0;
    for (MovingImage *poi in [placesOfInterest objectEnumerator]) {
        vec4f_t v;
        multiplyMatrixAndVector(v, projectionCameraTransform, placesOfInterestCoordinates[i]);

        float x = (v[0] / v[3] + 1.0f) * 0.5f;
        float y = (v[1] / v[3] + 1.0f) * 0.5f;

        if (v[2] < 0.0f) {
            CGPoint movingTo = CGPointMake(x * self.bounds.size.width, self.bounds.size.height - y * self.bounds.size.height);
            if (movingTo.x < -118) {
                movingTo.x = -118;
            }
            if (movingTo.x > 542) {
                movingTo.x = 542;
            }
            if (movingTo.y < 215) {
                movingTo.y = 215;
            }
            if (movingTo.y > 390) {
                movingTo.y = 390;
            }
            poi.view.center = movingTo;
            poi.view.hidden = NO;
        } else {
            poi.view.hidden = YES;
        }
        i++;
    }
}
The image is rarely in the middle of the screen when the user opens the app; it usually starts 90 degrees to the right or left of the expected position, or else it sits exactly in the middle every time.
I assume that CMAttitudeReferenceFrameXArbitraryCorrectedZVertical is the issue, but I have also tried CMAttitudeReferenceFrameXArbitraryZVertical, which doesn't work either.
If useful, my project is here for anyone interested. I used Apple's pARk sample code as well.
- (void)drawRect:(CGRect)rect
{
    if (placesOfInterestCoordinates == nil) {
        return;
    }

    mat4f_t projectionCameraTransform;
    multiplyMatrixAndMatrix(projectionCameraTransform, projectionTransform, cameraTransform);

    int i = 0;
    for (MovingImage *poi in [placesOfInterest objectEnumerator]) {
        vec4f_t v;
        multiplyMatrixAndVector(v, projectionCameraTransform, placesOfInterestCoordinates[i]);

        float x = (v[0] / v[3] + 1.0f) * 0.5f;
        float y = (v[1] / v[3] + 1.0f) * 0.5f;

        if (v[2] < 1.0f) { // change here
            CGPoint movingTo = CGPointMake(x * self.bounds.size.width, self.bounds.size.height - y * self.bounds.size.height);
            if (movingTo.x < -118) {
                movingTo.x = -118;
            }
            if (movingTo.x > 542) {
                movingTo.x = 542;
            }
            if (movingTo.y < 215) {
                movingTo.y = 215;
            }
            if (movingTo.y > 390) {
                movingTo.y = 390;
            }
            poi.view.center = movingTo;
            poi.view.hidden = NO;
        } else {
            poi.view.hidden = YES;
        }
        i++;
    }
}
Try changing (v[2] < 0.0f) to (v[2] < 1.0f).
The viewing angle is different because the check is for when the image comes into view, rather than for your device's perspective angle.
I am making an application just for testing some code. There is a ship that is shooting enemies. These enemies come down from the top of the screen, and they are added with this code (the reason there are 4 different enemies is that they all have different behaviors):
-(void) addEnemy {
    int randomX = arc4random() % screenWidth;

    Enemy* anEnemy;
    int diceRoll = arc4random() % 4;
    if (diceRoll == 0) {
        anEnemy = [Enemy createEnemy:@"enemy1" framesToAnimate:24];
    } else if (diceRoll == 1) {
        anEnemy = [Enemy createEnemy:@"enemy2" framesToAnimate:17];
    } else if (diceRoll == 2) {
        anEnemy = [Enemy createEnemy:@"enemy3" framesToAnimate:14];
    } else if (diceRoll == 3) {
        anEnemy = [Enemy createEnemy:@"enemy4" framesToAnimate:24];
    }

    [self addChild:anEnemy z:depthLevelBelowBullet];
    anEnemy.position = ccp(randomX, screenHeight + 200);
    numberOfEnemiesOnstage++;
}
The enemies are added with a random x-value, which means an enemy sometimes ends up half outside the screen, like this:
How do I limit the x-value on both sides of the screen so this won't happen?
You're not considering the image width when you're calculating the random x position. You have to move the randomX declaration and initialization after you've created the enemy objects so the textureRect property will be properly set.
int offset = (int)[Enemy textureRect].size.width / 2;
int randomX = (arc4random() % (screenWidth - (2 * offset))) + offset;
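As a quick sanity check with made-up numbers, a 64 pt wide enemy on a 480 pt wide screen gives:

int offset  = 64 / 2;                                 // 32
int randomX = (arc4random() % (480 - (2 * 32))) + 32; // 32 ... 447
// the sprite's centre always stays at least half its width from either edge,
// so it can no longer be clipped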
The above answer is correct, but if you are using CCSprite it will be a little different. Try the following code:
-(void) addEnemy {
    int randomX = arc4random() % screenWidth;

    Enemy* anEnemy;
    int diceRoll = arc4random() % 4;
    int halfWidth;
    if (diceRoll == 0) {
        anEnemy = [Enemy createEnemy:@"enemy1" framesToAnimate:24];
        halfWidth = anEnemy.contentSize.width / 2;
    } else if (diceRoll == 1) {
        anEnemy = [Enemy createEnemy:@"enemy2" framesToAnimate:17];
        halfWidth = anEnemy.contentSize.width / 2;
    } else if (diceRoll == 2) {
        anEnemy = [Enemy createEnemy:@"enemy3" framesToAnimate:14];
        halfWidth = anEnemy.contentSize.width / 2;
    } else if (diceRoll == 3) {
        anEnemy = [Enemy createEnemy:@"enemy4" framesToAnimate:24];
        halfWidth = anEnemy.contentSize.width / 2;
    }

    if (randomX < halfWidth) {
        randomX = halfWidth;
    } else {
        randomX = randomX - halfWidth;
    }

    [self addChild:anEnemy z:depthLevelBelowBullet];
    anEnemy.position = ccp(randomX, screenHeight + 200);
    numberOfEnemiesOnstage++;
}
Using the code below, I am drawing lines from triangle strips of varying sizes. At the final triangle I would like to add a GL_POINTS primitive so it looks like the line has a rounded end. How would I calculate the correct diameter for the point based upon the size of the final triangle? Please see the attached image demonstrating what I have at the moment (the point is much too large).
- (void)pan:(UIPanGestureRecognizer *)p {
CGPoint v = [p velocityInView:self.view];
CGPoint l = [p locationInView:self.view];
float distance = sqrtf((l.x - previousPoint.x) * (l.x - previousPoint.x) + (l.y - previousPoint.y) * (l.y - previousPoint.y));
float velocityMagnitude = sqrtf(v.x*v.x + v.y*v.y);
float clampedVelocityMagnitude = clamp(VELOCITY_CLAMP_MIN, VELOCITY_CLAMP_MAX, velocityMagnitude);
float normalizedVelocity = (clampedVelocityMagnitude - VELOCITY_CLAMP_MIN) / (VELOCITY_CLAMP_MAX - VELOCITY_CLAMP_MIN);
float lowPassFilterAlpha = STROKE_WIDTH_SMOOTHING;
float newThickness = (STROKE_WIDTH_MAX - STROKE_WIDTH_MIN) * normalizedVelocity + STROKE_WIDTH_MIN;
penThickness = penThickness * lowPassFilterAlpha + newThickness * (1 - lowPassFilterAlpha);
glBindVertexArrayOES(vertexArrayTriangles);
glBindBuffer(GL_ARRAY_BUFFER, vertexBufferTriangles);
if ([p state] == UIGestureRecognizerStateBegan)
{
previousPoint = l;
previousMidPoint = l;
NISignaturePoint startPoint = {
{ (l.x / self.view.bounds.size.width * 2. - 1), ((l.y / self.view.bounds.size.height) * 2.0 - 1) * -1, 0}, {0,0,0}
};
previousVertex = startPoint;
previousThickness = penThickness;
addVertexTriangles(&lengthTriangles, startPoint);
addVertexTriangles(&lengthTriangles, previousVertex);
} else if ([p state] == UIGestureRecognizerStateChanged) {
CGPoint mid = CGPointMake((l.x + previousPoint.x) / 2.0, (l.y + previousPoint.y) / 2.0);
if (distance > QUADRATIC_DISTANCE_TOLERANCE) {
// Plot quadratic bezier instead of line
unsigned int i;
int segments = (int) distance / 1.5;
float startPenThickness = previousThickness;
float endPenThickness = penThickness;
previousThickness = penThickness;
for (i = 0; i < segments; i++)
{
penThickness = startPenThickness + ((endPenThickness - startPenThickness) / segments) * i;
double t = (double)i / (double)segments;
double a = pow((1.0 - t), 2.0);
double b = 2.0 * t * (1.0 - t);
double c = pow(t, 2.0);
double x = a * previousMidPoint.x + b * previousPoint.x + c * mid.x;
double y = a * previousMidPoint.y + b * previousPoint.y + c * mid.y;
NISignaturePoint v = {
{
(x / self.view.bounds.size.width * 2. - 1),
((y / self.view.bounds.size.height) * 2.0 - 1) * -1,
0
},
strokeColor
};
[self addTriangleStripPointsForPrevious:previousVertex next:v];
previousVertex = v;
}
}
previousPoint = l;
previousMidPoint = mid;
}
else if (p.state == UIGestureRecognizerStateEnded || p.state == UIGestureRecognizerStateCancelled)
{
addVertexTriangles(&lengthTriangles, previousVertex);
addVertexTriangles(&lengthTriangles, previousVertex);
glBindVertexArrayOES(vertexArrayPoints);
glBindBuffer(GL_ARRAY_BUFFER, vertexBufferPoints);
NISignaturePoint startPoint = {
{previousVertex.vertex.x, previousVertex.vertex.y, 0}, strokeColor, ???
};
addVertexPoints(&lengthPoints, startPoint);
penThickness = STROKE_WIDTH_MIN;
previousThickness = penThickness;
}
}
- (void)addTriangleStripPointsForPrevious:(NISignaturePoint)previous next:(NISignaturePoint)next {
float toTravel = penThickness / 2.0;
//NSLog(@"add tri");
for (int i = 0; i < 2; i++) {
GLKVector3 p = perpendicular(previous, next);
GLKVector3 p1 = next.vertex;
GLKVector3 ref = GLKVector3Add(p1, p);
float distance = GLKVector3Distance(p1, ref);
float difX = p1.x - ref.x;
float difY = p1.y - ref.y;
float ratio = -1.0 * (toTravel / distance);
difX = difX * ratio;
difY = difY * ratio;
NISignaturePoint stripPoint = {
{ p1.x + difX, p1.y + difY, 0.0 },
strokeColor
};
addVertexTriangles(&lengthTriangles, stripPoint);
toTravel *= -1;
}
}
You might handle UIGestureRecognizerStateChanged, UIGestureRecognizerStateEnded, and UIGestureRecognizerStateCancelled as one case to draw the line, but then detect the latter two cases to add the ending point with a diameter equal to 0.5 + endPenThickness:
- (void)pan:(UIPanGestureRecognizer *)p {
CGPoint v = [p velocityInView:self.view];
CGPoint l = [p locationInView:self.view];
float distance = sqrtf((l.x - previousPoint.x) * (l.x - previousPoint.x) + (l.y - previousPoint.y) * (l.y - previousPoint.y));
float velocityMagnitude = sqrtf(v.x*v.x + v.y*v.y);
float clampedVelocityMagnitude = clamp(VELOCITY_CLAMP_MIN, VELOCITY_CLAMP_MAX, velocityMagnitude);
float normalizedVelocity = (clampedVelocityMagnitude - VELOCITY_CLAMP_MIN) / (VELOCITY_CLAMP_MAX - VELOCITY_CLAMP_MIN);
float lowPassFilterAlpha = STROKE_WIDTH_SMOOTHING;
float newThickness = (STROKE_WIDTH_MAX - STROKE_WIDTH_MIN) * normalizedVelocity + STROKE_WIDTH_MIN;
penThickness = penThickness * lowPassFilterAlpha + newThickness * (1 - lowPassFilterAlpha);
glBindVertexArrayOES(vertexArrayTriangles);
glBindBuffer(GL_ARRAY_BUFFER, vertexBufferTriangles);
if ([p state] == UIGestureRecognizerStateBegan)
{
previousPoint = l;
previousMidPoint = l;
NISignaturePoint startPoint = {
{ (l.x / self.view.bounds.size.width * 2. - 1), ((l.y / self.view.bounds.size.height) * 2.0 - 1) * -1, 0}, {0,0,0}
};
previousVertex = startPoint;
previousThickness = penThickness;
addVertexTriangles(&lengthTriangles, startPoint);
addVertexTriangles(&lengthTriangles, previousVertex);
} else {
// UIGestureRecognizerStateChanged, UIGestureRecognizerStateEnded, and UIGestureRecognizerStateCancelled
CGPoint mid = CGPointMake((l.x + previousPoint.x) / 2.0, (l.y + previousPoint.y) / 2.0);
if (distance > QUADRATIC_DISTANCE_TOLERANCE) {
// Plot quadratic bezier instead of line
unsigned int i;
int segments = (int) distance / 1.5;
float startPenThickness = previousThickness;
float endPenThickness = penThickness;
previousThickness = penThickness;
for (i = 0; i < segments; i++)
{
penThickness = startPenThickness + ((endPenThickness - startPenThickness) / segments) * i;
double t = (double)i / (double)segments;
double a = pow((1.0 - t), 2.0);
double b = 2.0 * t * (1.0 - t);
double c = pow(t, 2.0);
double x = a * previousMidPoint.x + b * previousPoint.x + c * mid.x;
double y = a * previousMidPoint.y + b * previousPoint.y + c * mid.y;
NISignaturePoint v = {
{
(x / self.view.bounds.size.width * 2. - 1),
((y / self.view.bounds.size.height) * 2.0 - 1) * -1,
0
},
strokeColor
};
[self addTriangleStripPointsForPrevious:previousVertex next:v];
previousVertex = v;
}
}
previousPoint = l;
previousMidPoint = mid;
if (p.state == UIGestureRecognizerStateEnded || p.state == UIGestureRecognizerStateCancelled)
{
glBindVertexArrayOES(vertexArrayPoints);
glBindBuffer(GL_ARRAY_BUFFER, vertexBufferPoints);
NISignaturePoint startPoint = {
{previousVertex.vertex.x, previousVertex.vertex.y, 0}, strokeColor, endPenThickness / 2.
};
addVertexPoints(&lengthPoints, startPoint);
penThickness = STROKE_WIDTH_MIN;
previousThickness = penThickness;
}
}
}
I've been using the following method to set the point of focus since iOS 4:
- (void)focusAtPoint:(CGPoint)point
{
    AVCaptureDevice *device = [[self captureInput] device];
    NSError *error;
    if ([device isFocusModeSupported:AVCaptureFocusModeAutoFocus] &&
        [device isFocusPointOfInterestSupported])
    {
        if ([device lockForConfiguration:&error]) {
            [device setFocusPointOfInterest:point];
            [device setFocusMode:AVCaptureFocusModeAutoFocus];
            [device unlockForConfiguration];
        } else {
            NSLog(@"Error: %@", error);
        }
    }
}
On iOS 4 devices this works without any problems. But on iOS 5 the live camera feed freezes and after a few seconds goes completely black. No exception or error is thrown.
The error doesn't occur if I comment out either setFocusPointOfInterest or setFocusMode, so it's the combination of the two that leads to this behavior.
The point you're passing to the setFocusPointOfInterest: function is incorrect. That's the reason why it's crashing.
Add this method to your program and use the value returned by this function:
- (CGPoint)convertToPointOfInterestFromViewCoordinates:(CGPoint)viewCoordinates
{
CGPoint pointOfInterest = CGPointMake(.5f, .5f);
CGSize frameSize = [[self videoPreviewView] frame].size;
AVCaptureVideoPreviewLayer *videoPreviewLayer = [self prevLayer];
if ([[self prevLayer] isMirrored]) {
viewCoordinates.x = frameSize.width - viewCoordinates.x;
}
if ( [[videoPreviewLayer videoGravity] isEqualToString:AVLayerVideoGravityResize] ) {
pointOfInterest = CGPointMake(viewCoordinates.y / frameSize.height, 1.f - (viewCoordinates.x / frameSize.width));
} else {
CGRect cleanAperture;
for (AVCaptureInputPort *port in [[[[self captureSession] inputs] lastObject] ports]) {
if ([port mediaType] == AVMediaTypeVideo) {
cleanAperture = CMVideoFormatDescriptionGetCleanAperture([port formatDescription], YES);
CGSize apertureSize = cleanAperture.size;
CGPoint point = viewCoordinates;
CGFloat apertureRatio = apertureSize.height / apertureSize.width;
CGFloat viewRatio = frameSize.width / frameSize.height;
CGFloat xc = .5f;
CGFloat yc = .5f;
if ( [[videoPreviewLayer videoGravity] isEqualToString:AVLayerVideoGravityResizeAspect] ) {
if (viewRatio > apertureRatio) {
CGFloat y2 = frameSize.height;
CGFloat x2 = frameSize.height * apertureRatio;
CGFloat x1 = frameSize.width;
CGFloat blackBar = (x1 - x2) / 2;
if (point.x >= blackBar && point.x <= blackBar + x2) {
xc = point.y / y2;
yc = 1.f - ((point.x - blackBar) / x2);
}
} else {
CGFloat y2 = frameSize.width / apertureRatio;
CGFloat y1 = frameSize.height;
CGFloat x2 = frameSize.width;
CGFloat blackBar = (y1 - y2) / 2;
if (point.y >= blackBar && point.y <= blackBar + y2) {
xc = ((point.y - blackBar) / y2);
yc = 1.f - (point.x / x2);
}
}
} else if ([[videoPreviewLayer videoGravity] isEqualToString:AVLayerVideoGravityResizeAspectFill]) {
if (viewRatio > apertureRatio) {
CGFloat y2 = apertureSize.width * (frameSize.width / apertureSize.height);
xc = (point.y + ((y2 - frameSize.height) / 2.f)) / y2;
yc = (frameSize.width - point.x) / frameSize.width;
} else {
CGFloat x2 = apertureSize.height * (frameSize.height / apertureSize.width);
yc = 1.f - ((point.x + ((x2 - frameSize.width) / 2)) / x2);
xc = point.y / frameSize.height;
}
}
pointOfInterest = CGPointMake(xc, yc);
break;
}
}
}
return pointOfInterest;
}
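A hedged usage sketch, assuming a tap gesture recognizer is attached to the preview view (handleTap: and the outlet names are illustrative, not from the original answer):

- (void)handleTap:(UITapGestureRecognizer *)gesture
{
    CGPoint viewPoint = [gesture locationInView:[self videoPreviewView]];
    CGPoint devicePoint = [self convertToPointOfInterestFromViewCoordinates:viewPoint];
    [self focusAtPoint:devicePoint]; // the method from the question, now fed a valid point
}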
I want to give some additional info to @Louis's answer.
According to Apple's documentation (please pay attention to the bold part):
In addition, a device may support a focus point of interest. You test for support using focusPointOfInterestSupported. If it’s supported, you set the focal point using focusPointOfInterest. You pass a CGPoint where {0,0} represents the top left of the picture area, and {1,1} represents the bottom right in landscape mode with the home button on the right—this applies even if the device is in portrait mode.
We should take the device orientation into account when calculating the focusPointOfInterest.
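For the common case of a portrait, full-screen preview, the mapping reduces to swapping the axes and flipping one of them. This mirrors the AVLayerVideoGravityResize branch of the method above; previewView and gesture are assumed names:

CGPoint tap = [gesture locationInView:previewView];
CGSize size = previewView.bounds.size;
// the sensor's {0,0}-{1,1} space is laid out for landscape with the home button on the right,
// so a portrait view's y axis maps to the device x axis, and the view's x axis is flipped
CGPoint poi = CGPointMake(tap.y / size.height,
                          1.0f - (tap.x / size.width));
[self focusAtPoint:poi];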
I have a rather large, almost full screen image that I'm going to be displaying on an iPad. The image is about 80% transparent. I need to, on the client, determine the bounding box of the opaque pixels, and then crop to that bounding box.
Scanning other questions here on StackOverflow and reading some of the CoreGraphics docs, I think I could accomplish this by:
CGBitmapContextCreate(...) // Use this to render the image to a byte array
..
- iterate through this byte array to find the bounding box
..
CGImageCreateWithImageInRect(image, boundingRect);
That just seems very inefficient and clunky. Is there something clever I can do with CGImage masks or something which makes use of the device's graphics acceleration to do this?
Thanks to user404709 for doing all the hard work.
The code below also handles retina images and frees the CFDataRef.
- (UIImage *)trimmedImage {
CGImageRef inImage = self.CGImage;
CFDataRef m_DataRef;
m_DataRef = CGDataProviderCopyData(CGImageGetDataProvider(inImage));
UInt8 * m_PixelBuf = (UInt8 *) CFDataGetBytePtr(m_DataRef);
size_t width = CGImageGetWidth(inImage);
size_t height = CGImageGetHeight(inImage);
CGPoint top,left,right,bottom;
BOOL breakOut = NO;
for (int x = 0;breakOut==NO && x < width; x++) {
for (int y = 0; y < height; y++) {
int loc = x + (y * width);
loc *= 4;
if (m_PixelBuf[loc + 3] != 0) {
left = CGPointMake(x, y);
breakOut = YES;
break;
}
}
}
breakOut = NO;
for (int y = 0;breakOut==NO && y < height; y++) {
for (int x = 0; x < width; x++) {
int loc = x + (y * width);
loc *= 4;
if (m_PixelBuf[loc + 3] != 0) {
top = CGPointMake(x, y);
breakOut = YES;
break;
}
}
}
breakOut = NO;
for (int y = height-1;breakOut==NO && y >= 0; y--) {
for (int x = width-1; x >= 0; x--) {
int loc = x + (y * width);
loc *= 4;
if (m_PixelBuf[loc + 3] != 0) {
bottom = CGPointMake(x, y);
breakOut = YES;
break;
}
}
}
breakOut = NO;
for (int x = width-1;breakOut==NO && x >= 0; x--) {
for (int y = height-1; y >= 0; y--) {
int loc = x + (y * width);
loc *= 4;
if (m_PixelBuf[loc + 3] != 0) {
right = CGPointMake(x, y);
breakOut = YES;
break;
}
}
}
CGFloat scale = self.scale;
CGRect cropRect = CGRectMake(left.x / scale, top.y/scale, (right.x - left.x)/scale, (bottom.y - top.y) / scale);
UIGraphicsBeginImageContextWithOptions( cropRect.size,
NO,
scale);
[self drawAtPoint:CGPointMake(-cropRect.origin.x, -cropRect.origin.y)
blendMode:kCGBlendModeCopy
alpha:1.];
UIImage *croppedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
CFRelease(m_DataRef);
return croppedImage;
}
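Hypothetical usage, assuming the method above is exposed through a UIImage category header:

UIImage *original = [UIImage imageNamed:@"mostlyTransparentOverlay"]; // illustrative asset name
UIImage *cropped  = [original trimmedImage];
// cropped now contains only the opaque bounding box, at the original image's scale
self.imageView.image = cropped;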
I created a category on UIImage which does this, if anyone needs it...
+ (UIImage *)cropTransparencyFromImage:(UIImage *)img {
CGImageRef inImage = img.CGImage;
CFDataRef m_DataRef;
m_DataRef = CGDataProviderCopyData(CGImageGetDataProvider(inImage));
UInt8 * m_PixelBuf = (UInt8 *) CFDataGetBytePtr(m_DataRef);
int width = img.size.width;
int height = img.size.height;
CGPoint top,left,right,bottom;
BOOL breakOut = NO;
for (int x = 0;breakOut==NO && x < width; x++) {
for (int y = 0; y < height; y++) {
int loc = x + (y * width);
loc *= 4;
if (m_PixelBuf[loc + 3] != 0) {
left = CGPointMake(x, y);
breakOut = YES;
break;
}
}
}
breakOut = NO;
for (int y = 0;breakOut==NO && y < height; y++) {
for (int x = 0; x < width; x++) {
int loc = x + (y * width);
loc *= 4;
if (m_PixelBuf[loc + 3] != 0) {
top = CGPointMake(x, y);
breakOut = YES;
break;
}
}
}
breakOut = NO;
for (int y = height-1;breakOut==NO && y >= 0; y--) {
for (int x = width-1; x >= 0; x--) {
int loc = x + (y * width);
loc *= 4;
if (m_PixelBuf[loc + 3] != 0) {
bottom = CGPointMake(x, y);
breakOut = YES;
break;
}
}
}
breakOut = NO;
for (int x = width-1;breakOut==NO && x >= 0; x--) {
for (int y = height-1; y >= 0; y--) {
int loc = x + (y * width);
loc *= 4;
if (m_PixelBuf[loc + 3] != 0) {
right = CGPointMake(x, y);
breakOut = YES;
break;
}
}
}
CGRect cropRect = CGRectMake(left.x, top.y, right.x - left.x, bottom.y - top.y);
UIGraphicsBeginImageContextWithOptions( cropRect.size,
NO,
0.);
[img drawAtPoint:CGPointMake(-cropRect.origin.x, -cropRect.origin.y)
blendMode:kCGBlendModeCopy
alpha:1.];
UIImage *croppedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return croppedImage;
}
There is no clever cheat to get around having the device do the work, but there are some ways to accelerate the task, or minimize the impact on the user interface.
First, consider the need to accelerate this task. A simple iteration through this byte array may go fast enough. There may be no need to invest in optimizing this task if the app is just calculating this once per run or in reaction to a user's choice that takes at least a few seconds between choices.
If the bounding box is not needed for some time after the image becomes available, this iteration may be launched in a separate thread. That way the calculation doesn't block the main interface thread. Grand Central Dispatch may make using a separate thread for this task easier.
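A minimal sketch of that, assuming boundingBoxOfOpaquePixels: is whatever routine performs the byte-array scan and applyCropRect: is the UI update (both names are placeholders, as is image):

dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    CGRect box = [self boundingBoxOfOpaquePixels:image]; // the expensive scan, off the main thread
    dispatch_async(dispatch_get_main_queue(), ^{
        [self applyCropRect:box]; // touch UIKit only on the main thread
    });
});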
If the task must be accelerated, say for real-time processing of video images, then parallel processing of the data may help. The Accelerate framework may help in setting up SIMD calculations on the data. Or, to really get performance out of this iteration, ARM assembly language code using the NEON SIMD operations could get great results, with significant development effort.
The last choice is to investigate a better algorithm. There's a huge body of work on detecting features in images. An edge detection algorithm may be faster than a simple iteration through the byte array. Maybe Apple will add edge detection capabilities to Core Graphics in the future which can be applied to this case. An Apple implemented image processing capability may not be an exact match for this case, but Apple's implementation should be optimized to use the SIMD or GPU capabilities of the iPad, resulting in better overall performance.