How can I release memory of UIImages no longer used - ios

I am attempting to merge a number of smaller images into a larger one. The app crashes because it runs out of memory, and I cannot figure out how to release the memory after it is used, so usage keeps building up until the app crashes.
The addImageToImage and resizeImage routines appear to be causing the crash, since I cannot free their memory once it is no longer needed. I am using Automatic Reference Counting (ARC) in this project. I have tried setting the images to nil, but that does not stop the crashing.
testImages is in one class that is called from the main ViewController, while addImageToImage and resizeImage are in another class called ImageUtils.
Can someone look at this code and explain how to properly release the memory allocated by these two routines? I cannot call release on the images since the project uses ARC, and setting them to nil has no effect.
+ (void)testImages
{
    const int IMAGE_WIDTH = 394;
    const int IMAGE_HEIGHT = 150;
    const int PAGE_WIDTH = 1275;
    const int PAGE_HEIGHT = 1650;
    const int COLUMN_WIDTH = 30;
    const int ROW_OFFSET = 75;

    CGSize imageSize = CGSizeMake(PAGE_WIDTH, PAGE_HEIGHT);
    UIGraphicsBeginImageContextWithOptions(imageSize, YES, 0);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextFillRect(context, CGRectMake(0, 0, imageSize.width, imageSize.height));
    UIImage *psheet = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    CGSize collageSize = CGSizeMake(IMAGE_WIDTH, IMAGE_HEIGHT);
    UIGraphicsBeginImageContextWithOptions(collageSize, YES, 0);
    CGContextRef pcontext = UIGraphicsGetCurrentContext();
    CGContextFillRect(pcontext, CGRectMake(0, 0, collageSize.width, collageSize.height));
    UIImage *collage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    float row = 1;
    float column = 1;
    int index = 1;
    int group = 1;
    for (int i = 0; i < 64; i++)
    {
        NSLog(@"processing group %i - file %i ", group, index++);
        psheet = [ImageUtils addImageToImage:psheet withImage2:collage andRect:CGRectMake((IMAGE_WIDTH*(column-1)) + (COLUMN_WIDTH * column), (IMAGE_HEIGHT * (row-1)) + ROW_OFFSET, IMAGE_WIDTH, IMAGE_HEIGHT) withImageWidth:PAGE_WIDTH withImageHeight:PAGE_HEIGHT];
        column++;
        if (column > 3) {
            column = 1;
            row++;
        }
        if (index == 15)
        {
            group++;
            index = 1;
            row = 1;
            column = 1;
            UIImage *editedImage = [ImageUtils resizeImage:psheet withWidth:PAGE_WIDTH * 2 withHeight:PAGE_HEIGHT * 2];
            editedImage = nil;
        }
    }
}
ImageUtils methods
+ (UIImage *)addImageToImage:(UIImage *)sheet withImage2:(UIImage *)label andRect:(CGRect)cropRect withImageWidth:(int)width withImageHeight:(int)height
{
    CGSize size = CGSizeMake(width, height);
    UIGraphicsBeginImageContext(size);
    CGPoint pointImg1 = CGPointMake(0, 0);
    [sheet drawAtPoint:pointImg1];
    CGPoint pointImg2 = cropRect.origin;
    [label drawAtPoint:pointImg2];
    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return result;
}
+ (UIImage *)resizeImage:(UIImage *)image withWidth:(CGFloat)width withHeight:(CGFloat)height
{
    CGSize newSize = CGSizeMake(width, height);
    CGFloat widthRatio = newSize.width / image.size.width;
    CGFloat heightRatio = newSize.height / image.size.height;
    if (widthRatio > heightRatio)
    {
        newSize = CGSizeMake(image.size.width * heightRatio, image.size.height * heightRatio);
    }
    else
    {
        newSize = CGSizeMake(image.size.width * widthRatio, image.size.height * widthRatio);
    }
    UIGraphicsBeginImageContextWithOptions(newSize, NO, 0.0);
    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}

Maybe your images are not being deallocated but are instead moved to the autorelease pool.
Many programs create temporary objects that are autoreleased. These objects add to the program's memory footprint until the end of the block. In many situations, allowing temporary objects to accumulate until the end of the current event-loop iteration does not result in excessive overhead; in some situations, however, you may create a large number of temporary objects that add substantially to memory footprint and that you want to dispose of more quickly. In these latter cases, you can create your own autorelease pool block. At the end of the block, the temporary objects are released, which typically results in their deallocation, thereby reducing the program's memory footprint.
Try wrapping the code inside the loop in an @autoreleasepool {} block:
for (int i = 0; i < 64; i++)
{
    @autoreleasepool {
        NSLog(@"processing group %i - file %i ", group, index++);
        psheet = [ImageUtils addImageToImage:psheet withImage2:collage andRect:CGRectMake((IMAGE_WIDTH*(column-1)) + (COLUMN_WIDTH * column), (IMAGE_HEIGHT * (row-1)) + ROW_OFFSET, IMAGE_WIDTH, IMAGE_HEIGHT) withImageWidth:PAGE_WIDTH withImageHeight:PAGE_HEIGHT];
        column++;
        if (column > 3) {
            column = 1;
            row++;
        }
        if (index == 15)
        {
            group++;
            index = 1;
            row = 1;
            column = 1;
            UIImage *editedImage = [ImageUtils resizeImage:psheet withWidth:PAGE_WIDTH * 2 withHeight:PAGE_HEIGHT * 2];
            editedImage = nil;
        }
    }
}

Related

The height of the weex <text> component is not accurate?

I am using WeexSDK version 0.18.0. The text shows a similar effect on the simulator and on a real device: it cannot be fully displayed, as shown below.
I've also added the source code that calculates the height of the text component. suggestSize.height is always the height I want, but that code path only executes on iOS 10. On iOS versions other than 10, the value of totalHeight is always wrong when I have more text lines. How do I solve this?
The effect is as follows:
The source code:
- (CGSize)calculateTextHeightWithWidth:(CGFloat)aWidth
{
    CGFloat totalHeight = 0;
    CGSize suggestSize = CGSizeZero;
    NSAttributedString *attributedStringCpy = [self ctAttributedString];
    if (!attributedStringCpy) {
        return CGSizeZero;
    }
    if (isnan(aWidth)) {
        aWidth = CGFLOAT_MAX;
    }
    aWidth = [attributedStringCpy boundingRectWithSize:CGSizeMake(aWidth, CGFLOAT_MAX) options:NSStringDrawingUsesLineFragmentOrigin|NSStringDrawingUsesFontLeading context:nil].size.width;
    /* Must take the ceil of aWidth, or Core Text may not return correct bounds.
       Maybe aWidth without ceiling triggered some critical conditions. */
    aWidth = ceil(aWidth);
    CTFramesetterRef ctframesetterRef = CTFramesetterCreateWithAttributedString((__bridge CFAttributedStringRef)(attributedStringCpy));
    suggestSize = CTFramesetterSuggestFrameSizeWithConstraints(ctframesetterRef, CFRangeMake(0, 0), NULL, CGSizeMake(aWidth, MAXFLOAT), NULL);
    if (_lines == 0) {
        // If there is no line limit, use suggestSize directly.
        CFRelease(ctframesetterRef);
        return CGSizeMake(aWidth, suggestSize.height);
    }
    CGMutablePathRef path = NULL;
    path = CGPathCreateMutable();
    // sufficient height to draw the text
    CGPathAddRect(path, NULL, CGRectMake(0, 0, aWidth, suggestSize.height * 10));
    CTFrameRef frameRef = NULL;
    frameRef = CTFramesetterCreateFrame(ctframesetterRef, CFRangeMake(0, attributedStringCpy.length), path, NULL);
    CGPathRelease(path);
    CFRelease(ctframesetterRef);
    if (NULL == frameRef) {
        // try to protect against an unexpected crash
        return suggestSize;
    }
    CFArrayRef lines = CTFrameGetLines(frameRef);
    CFIndex lineCount = CFArrayGetCount(lines);
    CGFloat ascent = 0;
    CGFloat descent = 0;
    CGFloat leading = 0;
    // height = ascent + descent + lineCount * leading
    // ignores line spacing
    NSUInteger actualLineCount = 0;
    for (CFIndex lineIndex = 0; (!_lines || lineIndex < _lines) && lineIndex < lineCount; lineIndex++)
    {
        CTLineRef lineRef = NULL;
        lineRef = (CTLineRef)CFArrayGetValueAtIndex(lines, lineIndex);
        CTLineGetTypographicBounds(lineRef, &ascent, &descent, &leading);
        totalHeight += ascent + descent;
        actualLineCount++;
    }
    totalHeight = totalHeight + actualLineCount * leading;
    CFRelease(frameRef);
    if (WX_SYS_VERSION_LESS_THAN(@"10.0")) {
        // There is something wrong with the Core Text drawing height; trying to fix it in a more efficient way.
        if (actualLineCount && actualLineCount < lineCount) {
            suggestSize.height = suggestSize.height * actualLineCount / lineCount;
        }
        return CGSizeMake(aWidth, suggestSize.height);
    }
    return CGSizeMake(aWidth, totalHeight);
}

Core Graphics, odd behaviour with CGContextSetShadowWithColor blur height

I've been adding shadows to my shapes using CGContextSetShadowWithColor, and I'm trying to use the same blur offset height everywhere.
However, the height of the shadow I'm seeing on different shapes is different, and I've no idea why this is occurring.
Here's some typical code.
CGSize offset = CGSizeMake(0.1, self.l_shadHeight);
CGContextSetShadowWithColor(context, offset, mauveBlurRadius, mauve.CGColor);
EDIT - FURTHER CODE
+ (NSInteger)setShadowHeight
{
    NSInteger retVal = 10;
    if (IS_IPHONE_6P)
    {
        retVal = 7;
    }
    else if (IS_IPHONE_6)
    {
        retVal = 7;
    }
    else if (IS_IPHONE_5 || TARGET_INTERFACE_BUILDER)
    {
        retVal = 6;
    }
    else if (IS_IPHONE_4_AND_OLDER || IS_IPHONE)
    {
        retVal = 6;
    }
    else if (isiPadPro)
    {
        retVal = 16;
    }
    else if (IS_IPAD)
    {
        retVal = 11;
    }
    return retVal;
}
//
- (void)setSizingClassValues
{
    self.l_shadHeight = [DrawingConstants setShadowHeight];
}
// Only override drawRect: if you perform custom drawing.
// An empty implementation adversely affects performance during animation.
- (void)drawRect:(CGRect)rect {
    [self setSizingClassValues];
    //// General Declarations
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = UIGraphicsGetCurrentContext();
    //// Shadow Declarations
    //UIColor* circleShadow = shadowColor2;
    UIColor *circleShadow = [shadowColor2 colorWithAlphaComponent:[DrawingConstants SHADOW_ALPHA]];
    CGSize circleShadowOffset = CGSizeMake(0.1, self.l_shadHeight);
    //CGFloat circleShadowBlurRadius = 5;
    CGFloat circleShadowBlurRadius = [DrawingConstants setShadowBlurRadius];
    //// Frames
    CGRect frame = self.bounds;
    CGFloat xPadding = 18;
    CGFloat YPadding = 19;
    CGFloat wPadding = 39;
    CGFloat hPadding = 54;
    if (isiPadPro || IS_IPAD)
    {
        if (self.size == sizeSmallerPerformance)
        {
            // so we don't have to rework autolayout on other screens!
            CGFloat fac = 2;
            xPadding = xPadding / fac;
            YPadding = YPadding / fac;
            wPadding = wPadding / fac;
            hPadding = hPadding / fac;
        }
    }
    //// Subframes
    CGRect group = CGRectMake(CGRectGetMinX(frame) + xPadding,
                              CGRectGetMinY(frame) + YPadding,
                              CGRectGetWidth(frame) - wPadding,
                              CGRectGetHeight(frame) - hPadding);
    //// Abstracted Attributes
    CGFloat circleSurroundStrokeWidth = self.l_borderWidth;
    CGFloat tongueLeftStrokeWidth = self.l_strokeWidth;
    CGFloat tongueStrokeWidth = self.l_strokeWidth;
    //// Group
    {
        //// CircleSurround Drawing
        UIBezierPath *circleSurroundPath = [UIBezierPath bezierPathWithOvalInRect:CGRectMake(CGRectGetMinX(group) + floor(CGRectGetWidth(group) * 0.00164) + 0.5, CGRectGetMinY(group) + floor(CGRectGetHeight(group) * 0.00161) + 0.5, floor(CGRectGetWidth(group) * 0.99836) - floor(CGRectGetWidth(group) * 0.00164), floor(CGRectGetHeight(group) * 0.99839) - floor(CGRectGetHeight(group) * 0.00161))];
        CGContextSaveGState(context);
        CGContextSetShadowWithColor(context, circleShadowOffset, circleShadowBlurRadius, circleShadow.CGColor);
        [circleFillColour setFill];
        [circleSurroundPath fill];
        CGContextRestoreGState(context);
        [whiteColour setStroke];
        circleSurroundPath.lineWidth = circleSurroundStrokeWidth;
        [circleSurroundPath stroke];
    }
    //// Cleanup
    CGGradientRelease(innerFaceGradient);
    CGColorSpaceRelease(colorSpace);
}
@end
I've found the issue: I had my shadow on the wrong element. Instead I added it to my outermost group.
Thanks @kurtrevis

Getting UIImage for only particular area bounds drawn - PaintView

I have already implemented paint / draw using:
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
Now the issue is that for any line drawn, I want to get an image of that particular line / paint stroke. I don't want an image of the entire screen, only the area / bounds of the line / paint drawn.
The reason is that I want to perform pan-gesture / delete functionality on that drawn line / paint.
The user can draw multiple lines, so I want a separate UIImage for each of them.
Any logic or code snippet will be really helpful.
Thanks in advance.
Depending on your application, particularly how many times you plan on doing this in a row, you may be able to create a different image/layer for each paint line. Your final image would essentially be all the individual lines drawn on top of each other.
It may be more efficient to create a custom view to capture touch events. You could store the list of touch coordinates for each paint line and render them all at once in a custom drawRect. This way you are storing lists of coordinates for each paint line, and can still access each one, instead of a list of images. You could calculate the area/bounds from the coordinates used to render the line.
Additional context and code may be helpful, I'm not sure I completely understand what you're trying to accomplish!
I took a look at the MVPaint project. It seems you have an object:
MVPaintDrawing _drawing;
which contains an array of MVPaintTransaction. You can iterate over those MVPaintTransaction objects to draw a UIImage.
So first you can add a method to get an image from a MVPaintTransaction:
- (UIImage *)imageToDrawWithSize:(CGSize)size xScale:(CGFloat)xScale yScale:(CGFloat)yScale {
    UIGraphicsBeginImageContext(size);
    CGContextScaleCTM(UIGraphicsGetCurrentContext(), xScale, yScale);
    // call the existing draw method
    [self draw];
    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return result;
}
Then add a method to the MVPaintDrawing class to get an array of images from the array of MVPaintTransaction:
- (NSArray *)getImagesFromDrawingOnSurface:(UIImageView *)surface xScale:(CGFloat)xScale yScale:(CGFloat)yScale {
    NSMutableArray *imageArray = [NSMutableArray new];
    for (MVPaintTransaction *transaction in _drawing) {
        UIImage *image = [transaction imageToDrawWithSize:surface.frame.size xScale:xScale yScale:yScale];
        [imageArray addObject:image];
    }
    return imageArray;
}
In this way you will have an array of UIImage objects corresponding to each line you have drawn. If you want those images to have the minimum possible size (i.e. without the extra transparent area), you can apply this method (I added it in the MVPaintTransaction class):
- (UIImage *)trimmedImage:(UIImage *)img {
    CGImageRef inImage = img.CGImage;
    CFDataRef m_DataRef;
    m_DataRef = CGDataProviderCopyData(CGImageGetDataProvider(inImage));
    UInt8 *m_PixelBuf = (UInt8 *)CFDataGetBytePtr(m_DataRef);
    size_t width = CGImageGetWidth(inImage);
    size_t height = CGImageGetHeight(inImage);
    CGPoint top, left, right, bottom;
    BOOL breakOut = NO;
    for (int x = 0; breakOut == NO && x < width; x++) {
        for (int y = 0; y < height; y++) {
            int loc = x + (y * width);
            loc *= 4;
            if (m_PixelBuf[loc + 3] != 0) {
                left = CGPointMake(x, y);
                breakOut = YES;
                break;
            }
        }
    }
    breakOut = NO;
    for (int y = 0; breakOut == NO && y < height; y++) {
        for (int x = 0; x < width; x++) {
            int loc = x + (y * width);
            loc *= 4;
            if (m_PixelBuf[loc + 3] != 0) {
                top = CGPointMake(x, y);
                breakOut = YES;
                break;
            }
        }
    }
    breakOut = NO;
    for (int y = height - 1; breakOut == NO && y >= 0; y--) {
        for (int x = width - 1; x >= 0; x--) {
            int loc = x + (y * width);
            loc *= 4;
            if (m_PixelBuf[loc + 3] != 0) {
                bottom = CGPointMake(x, y);
                breakOut = YES;
                break;
            }
        }
    }
    breakOut = NO;
    for (int x = width - 1; breakOut == NO && x >= 0; x--) {
        for (int y = height - 1; y >= 0; y--) {
            int loc = x + (y * width);
            loc *= 4;
            if (m_PixelBuf[loc + 3] != 0) {
                right = CGPointMake(x, y);
                breakOut = YES;
                break;
            }
        }
    }
    CGFloat scale = img.scale;
    CGRect cropRect = CGRectMake(left.x / scale, top.y / scale, (right.x - left.x) / scale, (bottom.y - top.y) / scale);
    UIGraphicsBeginImageContextWithOptions(cropRect.size, NO, scale);
    [img drawAtPoint:CGPointMake(-cropRect.origin.x, -cropRect.origin.y)
           blendMode:kCGBlendModeCopy
               alpha:1.];
    UIImage *croppedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    CFRelease(m_DataRef);
    return croppedImage;
}
Then simply replace, in the first method,
return result;
with
return [self trimmedImage:result];

Unable to record/play AudioQueue and animate simultaneously

In my app, I am trying to record on one device, transmit the audio, and play it back on another device. I want to animate the sound levels on both the recording device and the playback device.
Most of the code I have used for recording, playing back and animation is from the SpeakHere sample App.
I have two differences though:
I have the additional overhead of transmitting over the network.
I am trying to do the horizontal animation with an image (shown below) instead of the colored bands as in SpeakHere.
I am recording and transmitting on a separate dispatch queue(GCD) and the same for receiving and playing back.
My first problem: when I introduce the animation, receiving and playback become very slow and the AudioQueue goes silent.
My next problem is with setting a clear background on the LevelMeter view. If I do that, the animation is garbled. I'm not sure how the animation is connected to the background color.
Below is my version of (not so elegantly modified) drawRect from the SpeakHere example.
Please advise on any mistakes I may have made.
My understanding is that I cannot use another thread here for the animation because of UIKit; if that is wrong, please correct me.
- (void)drawRect:(CGRect)rect
{
    CGContextRef cxt = NULL;
    cxt = UIGraphicsGetCurrentContext();
    CGColorSpaceRef cs = NULL;
    cs = CGColorSpaceCreateDeviceRGB();
    CGRect bds;
    if (_vertical)
    {
        CGContextTranslateCTM(cxt, 0., [self bounds].size.height);
        CGContextScaleCTM(cxt, 1., -1.);
        bds = [self bounds];
    } else {
        CGContextTranslateCTM(cxt, 0., [self bounds].size.height);
        CGContextRotateCTM(cxt, -M_PI_2);
        bds = CGRectMake(0., 0., [self bounds].size.height, [self bounds].size.width);
    }
    CGContextSetFillColorSpace(cxt, cs);
    CGContextSetStrokeColorSpace(cxt, cs);
    if (_numLights == 0)
    {
        int i;
        CGFloat currentTop = 0.;
        if (_bgColor)
        {
            [_bgColor set];
            CGContextFillRect(cxt, bds);
        }
        for (i = 0; i < _numColorThresholds; i++)
        {
            LevelMeterColorThreshold thisThresh = _colorThresholds[i];
            CGFloat val = MIN(thisThresh.maxValue, _level);
            CGRect rect = CGRectMake(
                0,
                (bds.size.height) * currentTop,
                bds.size.width,
                (bds.size.height) * (val - currentTop)
            );
            [thisThresh.color set];
            CGContextFillRect(cxt, rect);
            if (_level < thisThresh.maxValue) break;
            currentTop = val;
        }
        if (_borderColor)
        {
            [_borderColor set];
            CGContextStrokeRect(cxt, CGRectInset(bds, .5, .5));
        }
    }
    else
    {
        int light_i;
        CGFloat lightMinVal = 0.;
        CGFloat insetAmount, lightVSpace;
        lightVSpace = bds.size.height / (CGFloat)_numLights;
        if (lightVSpace < 4.) insetAmount = 0.;
        else if (lightVSpace < 8.) insetAmount = 0.5;
        else insetAmount = 1.;
        int peakLight = -1;
        if (_peakLevel > 0.)
        {
            peakLight = _peakLevel * _numLights;
            if (peakLight >= _numLights) peakLight = _numLights - 1;
        }
        for (light_i = 0; light_i < _numLights; light_i++)
        {
            CGFloat lightMaxVal = (CGFloat)(light_i + 1) / (CGFloat)_numLights;
            CGFloat lightIntensity;
            CGRect lightRect;
            UIColor *lightColor;
            if (light_i == peakLight)
            {
                lightIntensity = 1.;
            }
            else
            {
                lightIntensity = (_level - lightMinVal) / (lightMaxVal - lightMinVal);
                lightIntensity = LEVELMETER_CLAMP(0., lightIntensity, 1.);
                if ((!_variableLightIntensity) && (lightIntensity > 0.)) lightIntensity = 1.;
            }
            lightColor = _colorThresholds[0].color;
            int color_i;
            for (color_i = 0; color_i < (_numColorThresholds - 1); color_i++)
            {
                LevelMeterColorThreshold thisThresh = _colorThresholds[color_i];
                LevelMeterColorThreshold nextThresh = _colorThresholds[color_i + 1];
                if (thisThresh.maxValue <= lightMaxVal) lightColor = nextThresh.color;
            }
            lightRect = CGRectMake(
                0.,
                bds.size.height * ((CGFloat)(light_i) / (CGFloat)_numLights),
                bds.size.width,
                bds.size.height * (1. / (CGFloat)_numLights)
            );
            lightRect = CGRectInset(lightRect, insetAmount, insetAmount);
            if (_bgColor)
            {
                [_bgColor set];
                CGContextFillRect(cxt, lightRect);
            }
            UIImage *image = [UIImage imageNamed:@"Pearl.png"];
            CGImageRef imageRef = image.CGImage;
            if (lightIntensity == 1.)
            {
                CGContextDrawImage(cxt, lightRect, imageRef);
                [lightColor set];
                //CGContextFillRect(cxt, lightRect);
            } else if (lightIntensity > 0.) {
                CGColorRef clr = CGColorCreateCopyWithAlpha([lightColor CGColor], lightIntensity);
                CGContextSetFillColorWithColor(cxt, clr);
                //CGContextFillRect(cxt, lightRect);
                CGContextDrawImage(cxt, lightRect, imageRef);
                CGColorRelease(clr);
            }
            if (_borderColor)
            {
                [_borderColor set];
                CGContextStrokeRect(cxt, CGRectInset(lightRect, 0.5, 0.5));
            }
            lightMinVal = lightMaxVal;
        }
    }
    CGColorSpaceRelease(cs);
}
Your drawing may be too slow, which can interfere with the real-time response requirements of the audio queue callbacks.
But since you are drawing mostly a bunch of static images, you might want to consider putting the dots each in their own separate image views. Then you can control the number of dots displayed just by hiding and unhiding subviews, without any drawRect code at all.

Performance issues when cropping UIImage (CoreGraphics, iOS)

The basic idea of what we are trying to do is that we have a large UIImage, and we want to slice it into several pieces. The caller of the function can pass in a number of rows and a number of columns, and the image will be cropped accordingly (e.g. 3 rows and 3 columns slices the image into 9 pieces). The problem is, we're having performance issues when trying to accomplish this with Core Graphics. The largest grid we require is 5x5, and it takes several seconds for the operation to complete (which the user perceives as lag). This is of course far from optimal.
My colleague and I have spent quite a while on this and have searched the web for answers unsuccessfully. Neither of us is very experienced with Core Graphics, so I'm hoping there's some silly mistake in the code that will fix our problems. It's left to you, SO users, to please help us figure it out!
We used the tutorial at http://www.hive05.com/2008/11/crop-an-image-using-the-iphone-sdk/ to base revisions of our code on.
The function below:
- (void)getImagesFromImage:(UIImage *)image withRow:(NSInteger)rows withColumn:(NSInteger)columns
{
    CGSize imageSize = image.size;
    CGFloat xPos = 0.0;
    CGFloat yPos = 0.0;
    CGFloat width = imageSize.width / columns;
    CGFloat height = imageSize.height / rows;
    int imageCounter = 0;
    // create a context to do our clipping in
    UIGraphicsBeginImageContext(CGSizeMake(width, height));
    CGContextRef currentContext = UIGraphicsGetCurrentContext();
    CGRect clippedRect = CGRectMake(0, 0, width, height);
    CGContextClipToRect(currentContext, clippedRect);
    for (int i = 0; i < rows; i++)
    {
        xPos = 0.0;
        for (int j = 0; j < columns; j++)
        {
            // create a rect with the size we want to crop the image to;
            // the X and Y here are zero so we start at the beginning of our
            // newly created context
            CGRect rect = CGRectMake(xPos, yPos, width, height);
            // create a rect equivalent to the full size of the image,
            // offset by the X and Y we want to start the crop from
            // in order to cut off anything before them
            CGRect drawRect = CGRectMake(rect.origin.x * -1,
                                         rect.origin.y * -1,
                                         image.size.width,
                                         image.size.height);
            // draw the image into our clipped context using our offset rect
            CGContextDrawImage(currentContext, drawRect, image.CGImage);
            // pull the image from our cropped context
            UIImage *croppedImg = UIGraphicsGetImageFromCurrentImageContext();
            // PuzzlePiece is a UIView subclass
            PuzzlePiece *newPP = [[PuzzlePiece alloc] initWithImageAndFrameAndID:croppedImg :rect :imageCounter];
            [slicedImages addObject:newPP];
            imageCounter++;
            xPos += width;
        }
        yPos += height;
    }
    // pop the context to get back to the default
    UIGraphicsEndImageContext();
}
ANY advice greatly appreciated!!
originalImageView is a UIImageView IBOutlet; this is the image that will be cropped.
#import <QuartzCore/QuartzCore.h>
QuartzCore is needed for the white border drawn around each slice, which makes the pieces easier to distinguish.
- (UIImage *)getCropImage:(CGRect)cropRect
{
    CGImageRef image = CGImageCreateWithImageInRect([originalImageView.image CGImage], cropRect);
    UIImage *croppedImage = [UIImage imageWithCGImage:image];
    CGImageRelease(image);
    return croppedImage;
}
- (void)prepareSlices:(uint)row :(uint)col
{
    float flagX = originalImageView.image.size.width / originalImageView.frame.size.width;
    float flagY = originalImageView.image.size.height / originalImageView.frame.size.height;
    float _width = originalImageView.frame.size.width / col;
    float _height = originalImageView.frame.size.height / row;
    float _posX = 0.0;
    float _posY = 0.0;
    for (int i = 1; i <= row * col; i++) {
        UIImageView *croppedImageView = [[UIImageView alloc] initWithFrame:CGRectMake(_posX, _posY, _width, _height)];
        UIImage *img = [self getCropImage:CGRectMake(_posX * flagX, _posY * flagY, _width * flagX, _height * flagY)];
        croppedImageView.image = img;
        croppedImageView.layer.borderColor = [[UIColor whiteColor] CGColor];
        croppedImageView.layer.borderWidth = 1.0f;
        [self.view addSubview:croppedImageView];
        [croppedImageView release];
        _posX += _width;
        if (i % col == 0) {
            _posX = 0;
            _posY += _height;
        }
    }
    originalImageView.alpha = 0.0;
}
With originalImageView.alpha = 0.0; you won't see the originalImageView any more.
Call it like this:
[self prepareSlices:4 :4];
It should add 16 slices as subviews of self.view. We have a puzzle app; this is working code from there.
