Unable to record/play AudioQueue and animate simultaneously - iOS

In my app, I am trying to record on one device, transmit the audio, and play it back on another device. I want to animate the sound levels on both the recording device and the playback device.
Most of the code I have used for recording, playing back and animation is from the SpeakHere sample App.
I have two differences though:
I have the additional overhead of transmitting over the network.
I am trying to do the horizontal animation with an image (shown below) instead of the colored bands used in SpeakHere.
I am recording and transmitting on a separate dispatch queue (GCD), and the same for receiving and playing back.
My first problem is: when I introduce the animation, the receiving and playback become very slow and the AudioQueue goes silent.
My next problem is with setting a clear background on the LevelMeter view. If I do that, the animation is garbled. I am not sure how the animation is connected to the background color.
Below is my (not so elegantly modified) version of drawRect from the SpeakHere example.
Please advise on any mistakes I may have made.
My understanding is that I cannot use another thread here for the animation because of UIKit. If not, please correct me.
- (void)drawRect:(CGRect)rect
{
CGContextRef cxt = NULL;
cxt = UIGraphicsGetCurrentContext();
CGColorSpaceRef cs = NULL;
cs = CGColorSpaceCreateDeviceRGB();
CGRect bds;
if (_vertical)
{
CGContextTranslateCTM(cxt, 0., [self bounds].size.height);
CGContextScaleCTM(cxt, 1., -1.);
bds = [self bounds];
} else {
CGContextTranslateCTM(cxt, 0., [self bounds].size.height);
CGContextRotateCTM(cxt, -M_PI_2);
bds = CGRectMake(0., 0., [self bounds].size.height, [self bounds].size.width);
}
CGContextSetFillColorSpace(cxt, cs);
CGContextSetStrokeColorSpace(cxt, cs);
if (_numLights == 0)
{
int i;
CGFloat currentTop = 0.;
if (_bgColor)
{
[_bgColor set];
CGContextFillRect(cxt, bds);
}
for (i=0; i<_numColorThresholds; i++)
{
LevelMeterColorThreshold thisThresh = _colorThresholds[i];
CGFloat val = MIN(thisThresh.maxValue, _level);
CGRect rect = CGRectMake(
0,
(bds.size.height) * currentTop,
bds.size.width,
(bds.size.height) * (val - currentTop)
);
[thisThresh.color set];
CGContextFillRect(cxt, rect);
if (_level < thisThresh.maxValue) break;
currentTop = val;
}
if (_borderColor)
{
[_borderColor set];
CGContextStrokeRect(cxt, CGRectInset(bds, .5, .5));
}
}
else
{
int light_i;
CGFloat lightMinVal = 0.;
CGFloat insetAmount, lightVSpace;
lightVSpace = bds.size.height / (CGFloat)_numLights;
if (lightVSpace < 4.) insetAmount = 0.;
else if (lightVSpace < 8.) insetAmount = 0.5;
else insetAmount = 1.;
int peakLight = -1;
if (_peakLevel > 0.)
{
peakLight = _peakLevel * _numLights;
if (peakLight >= _numLights) peakLight = _numLights - 1;
}
for (light_i=0; light_i<_numLights; light_i++)
{
CGFloat lightMaxVal = (CGFloat)(light_i + 1) / (CGFloat)_numLights;
CGFloat lightIntensity;
CGRect lightRect;
UIColor *lightColor;
if (light_i == peakLight)
{
lightIntensity = 1.;
}
else
{
lightIntensity = (_level - lightMinVal) / (lightMaxVal - lightMinVal);
lightIntensity = LEVELMETER_CLAMP(0., lightIntensity, 1.);
if ((!_variableLightIntensity) && (lightIntensity > 0.)) lightIntensity = 1.;
}
lightColor = _colorThresholds[0].color;
int color_i;
for (color_i=0; color_i<(_numColorThresholds-1); color_i++)
{
LevelMeterColorThreshold thisThresh = _colorThresholds[color_i];
LevelMeterColorThreshold nextThresh = _colorThresholds[color_i + 1];
if (thisThresh.maxValue <= lightMaxVal) lightColor = nextThresh.color;
}
lightRect = CGRectMake(
0.,
bds.size.height * ((CGFloat)(light_i) / (CGFloat)_numLights),
bds.size.width,
bds.size.height * (1. / (CGFloat)_numLights)
);
lightRect = CGRectInset(lightRect, insetAmount, insetAmount);
if (_bgColor)
{
[_bgColor set];
CGContextFillRect(cxt, lightRect);
}
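// NOTE: looking the image up on every pass through the light loop, inside drawRect, adds avoidable work; caching it in an ivar would be cheaper.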
UIImage* image = [UIImage imageNamed:@"Pearl.png"];
CGImageRef imageRef = image.CGImage;
if (lightIntensity == 1.)
{
CGContextDrawImage(cxt, lightRect, imageRef);
[lightColor set];
//CGContextFillRect(cxt, lightRect);
} else if (lightIntensity > 0.) {
CGColorRef clr = CGColorCreateCopyWithAlpha([lightColor CGColor], lightIntensity);
CGContextSetFillColorWithColor(cxt, clr);
//CGContextFillRect(cxt, lightRect);
CGContextDrawImage(cxt, lightRect, imageRef);
CGColorRelease(clr);
}
if (_borderColor)
{
[_borderColor set];
CGContextStrokeRect(cxt, CGRectInset(lightRect, 0.5, 0.5));
}
lightMinVal = lightMaxVal;
}
}
CGColorSpaceRelease(cs);
}

Your drawing may be too slow, which can interfere with the real-time response requirements of the audio queue callbacks.
But since you are mostly drawing a bunch of static images, you might want to consider putting each dot in its own separate image view. Then you can control the number of dots displayed just by hiding and unhiding subviews, without any drawRect code at all.
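A minimal sketch of that approach, assuming a fixed number of lights, a "Pearl.png" image in the bundle, and a level value pushed to the view on the main thread (the class and property names here are only illustrative, not SpeakHere code):

// Illustrative sketch: ImageLevelMeterView, numLights, and Pearl.png are assumptions.
@interface ImageLevelMeterView : UIView
@property (nonatomic) CGFloat level; // 0.0 .. 1.0
@end

@implementation ImageLevelMeterView
{
    NSMutableArray *_lightViews;
}

- (void)layoutSubviews
{
    [super layoutSubviews];
    const NSUInteger numLights = 10;
    if (_lightViews == nil) {
        _lightViews = [NSMutableArray arrayWithCapacity:numLights];
        UIImage *pearl = [UIImage imageNamed:@"Pearl.png"]; // loaded once, not in drawRect
        for (NSUInteger i = 0; i < numLights; i++) {
            UIImageView *iv = [[UIImageView alloc] initWithImage:pearl];
            iv.hidden = YES;
            [self addSubview:iv];
            [_lightViews addObject:iv];
        }
    }
    // Lay the lights out horizontally across the meter.
    CGFloat lightWidth = self.bounds.size.width / numLights;
    for (NSUInteger i = 0; i < _lightViews.count; i++) {
        UIImageView *iv = _lightViews[i];
        iv.frame = CGRectMake(i * lightWidth, 0., lightWidth, self.bounds.size.height);
    }
}

- (void)setLevel:(CGFloat)level
{
    _level = level;
    NSUInteger lit = (NSUInteger)(level * _lightViews.count);
    for (NSUInteger i = 0; i < _lightViews.count; i++) {
        ((UIImageView *)_lightViews[i]).hidden = (i >= lit); // show/hide instead of redrawing
    }
}
@end

From the recording and playback dispatch queues, the only UIKit work is then a hop to the main queue, e.g. dispatch_async(dispatch_get_main_queue(), ^{ meterView.level = newLevel; });, which keeps all animation on the main thread as UIKit requires.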

Related

Core Graphics, odd behaviour with CGContextSetShadowWithColor blur height

I've been adding shadows to my shapes using CGContextSetShadowWithColor, and I'm using the same blur radius and offset height for each shape.
However, the height of the shadow I'm seeing on different shapes is different, and I have no idea why this is occurring.
Here's some typical code.
CGSize offset = CGSizeMake(0.1, self.l_shadHeight);
CGContextSetShadowWithColor(context, offset, mauveBlurRadius, mauve.CGColor);
EDIT - FURTHER CODE
+ (NSInteger)setShadowHeight
{
NSInteger retVal = 10;
if (IS_IPHONE_6P)
{
retVal = 7;
}
else if (IS_IPHONE_6)
{
retVal = 7;
}
else if (IS_IPHONE_5 || TARGET_INTERFACE_BUILDER)
{
retVal = 6;
}
else if (IS_IPHONE_4_AND_OLDER || IS_IPHONE)
{
retVal = 6;
}
else if (isiPadPro)
{
retVal = 16;
}
else if (IS_IPAD)
{
retVal = 11;
}
return retVal;
}
//
-(void)setSizingClassValues
{
self.l_shadHeight = [DrawingConstants setShadowHeight];
}
// Only override drawRect: if you perform custom drawing.
// An empty implementation adversely affects performance during animation.
- (void)drawRect:(CGRect)rect {
[self setSizingClassValues];
//// General Declarations
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = UIGraphicsGetCurrentContext();
//// Shadow Declarations
//UIColor* circleShadow = shadowColor2;
UIColor* circleShadow = [shadowColor2 colorWithAlphaComponent: [DrawingConstants SHADOW_ALPHA]];
CGSize circleShadowOffset = CGSizeMake(0.1, self.l_shadHeight);
//CGFloat circleShadowBlurRadius = 5;
CGFloat circleShadowBlurRadius = [DrawingConstants setShadowBlurRadius];
//
//// Frames
CGRect frame = self.bounds;
CGFloat xPadding = 18;
CGFloat YPadding = 19;
CGFloat wPadding = 39;
CGFloat hPadding = 54;
if (isiPadPro || IS_IPAD)
{
if (self.size == sizeSmallerPerformance)
{
//so we don't have to rework autolayout on other screens!
CGFloat fac = 2;
xPadding = xPadding / fac;
YPadding = YPadding / fac;
wPadding = wPadding / fac;
hPadding = hPadding / fac;
}
}
//// Subframes
CGRect group = CGRectMake(CGRectGetMinX(frame) + xPadding,
CGRectGetMinY(frame) + YPadding,
CGRectGetWidth(frame) - wPadding,
CGRectGetHeight(frame) - hPadding);
//
//// Abstracted Attributes
CGFloat circleSurroundStrokeWidth = self.l_borderWidth;
CGFloat tongueLeftStrokeWidth = self.l_strokeWidth;
CGFloat tongueStrokeWidth = self.l_strokeWidth;
//// Group
{
//// CircleSurround Drawing
UIBezierPath* circleSurroundPath = [UIBezierPath bezierPathWithOvalInRect: CGRectMake(CGRectGetMinX(group) + floor(CGRectGetWidth(group) * 0.00164) + 0.5, CGRectGetMinY(group) + floor(CGRectGetHeight(group) * 0.00161) + 0.5, floor(CGRectGetWidth(group) * 0.99836) - floor(CGRectGetWidth(group) * 0.00164), floor(CGRectGetHeight(group) * 0.99839) - floor(CGRectGetHeight(group) * 0.00161))];
CGContextSaveGState(context);
CGContextSetShadowWithColor(context, circleShadowOffset, circleShadowBlurRadius, circleShadow.CGColor);
[circleFillColour setFill];
[circleSurroundPath fill];
CGContextRestoreGState(context);
[whiteColour setStroke];
circleSurroundPath.lineWidth = circleSurroundStrokeWidth;
[circleSurroundPath stroke];
}
//// Cleanup
CGGradientRelease(innerFaceGradient);
CGColorSpaceRelease(colorSpace);
}
@end
I've found the issue: I had my shadow applied to the wrong thing. Instead, I added it to my outermost group.
Thanks @kurtrevis
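For anyone hitting the same thing, a minimal sketch of what shadowing the outermost group can look like, reusing the names from the snippet above; the transparency layer is an assumption of mine about how to group the shapes, not necessarily the exact original fix:

// Sketch only: group the shapes so they cast one shadow together.
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSaveGState(context);
CGContextSetShadowWithColor(context,
                            CGSizeMake(0.1, self.l_shadHeight),
                            circleShadowBlurRadius,
                            circleShadow.CGColor);

// Everything drawn inside the transparency layer is composited first and
// then shadowed as one unit, so every shape gets the same shadow height.
CGContextBeginTransparencyLayer(context, NULL);
[circleFillColour setFill];
[circleSurroundPath fill];
// ...draw the rest of the group here...
CGContextEndTransparencyLayer(context);

CGContextRestoreGState(context);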

Core Graphics Drawing, Bad Performance Degradation

I am using the following core graphics code to draw a waveform of audio data as I am recording. I apologize for tons of code, but here it is (I found it here):
//
// WaveformView.m
//
// Created by Edward Majcher on 7/17/14.
//
#import "WaveformView.h"
//Gain applied to incoming samples
static CGFloat kGain = 10.;
//Number of samples displayed
static int kMaxWaveforms = 80.;
@interface WaveformView ()
@property (nonatomic) BOOL addToBuffer;
//Holds kMaxWaveforms number of incoming samples,
//80 is based on half the width of iPhone, adding a 1 pixel line between samples
@property (strong, nonatomic) NSMutableArray* bufferArray;
+ (float)RMS:(float *)buffer length:(int)bufferSize;
@end
@implementation WaveformView
- (void)awakeFromNib
{
[super awakeFromNib];
self.bufferArray = [NSMutableArray array];
}
-(void)updateBuffer:(float *)buffer withBufferSize:(UInt32)bufferSize
{
if (!self.addToBuffer) {
self.addToBuffer = YES;
return;
} else {
self.addToBuffer = NO;
}
float rms = [WaveformView RMS:buffer length:bufferSize];
if ([self.bufferArray count] == kMaxWaveforms) {
//##################################################
// [self.bufferArray removeObjectAtIndex:0];
}
[self.bufferArray addObject:@(rms * kGain)];
[self setNeedsDisplay];
}
+ (float)RMS:(float *)buffer length:(int)bufferSize {
float sum = 0.0;
for(int i = 0; i < bufferSize; i++) {
sum += buffer[i] * buffer[i];
}
return sqrtf( sum / bufferSize );
}
// *****************************************************
- (void)drawRect:(CGRect)rect
{
CGFloat midX = CGRectGetMidX(rect);
CGFloat maxX = CGRectGetMaxX(rect);
CGFloat midY = CGRectGetMidY(rect);
CGContextRef context = UIGraphicsGetCurrentContext();
// Draw out center line
CGContextSetStrokeColorWithColor(context, [UIColor whiteColor].CGColor);
CGContextSetLineWidth(context, 1.);
CGContextMoveToPoint(context, 0., midY);
CGContextAddLineToPoint(context, maxX, midY);
CGContextStrokePath(context);
CGFloat x = 0.;
for (NSNumber* n in self.bufferArray) {
CGFloat height = 20 * [n floatValue];
CGContextMoveToPoint(context, x, midY - height);
CGContextAddLineToPoint(context, x, midY + height);
CGContextStrokePath(context);
x += 2;
}
if ([self.bufferArray count] >= kMaxWaveforms) {
[self addMarkerInContext:context forX:midX forRect:rect];
} else {
[self addMarkerInContext:context forX:x forRect:rect];
}
}
- (void)addMarkerInContext:(CGContextRef)context forX:(CGFloat)x forRect:(CGRect)rect
{
CGFloat maxY = CGRectGetMaxY(rect);
CGContextSetStrokeColorWithColor(context, [UIColor greenColor].CGColor);
CGContextSetFillColorWithColor(context, [UIColor greenColor].CGColor);
CGContextFillEllipseInRect(context, CGRectMake(x - 1.5, 0, 3, 3));
CGContextMoveToPoint(context, x, 0 + 3);
CGContextAddLineToPoint(context, x, maxY - 3);
CGContextStrokePath(context);
CGContextFillEllipseInRect(context, CGRectMake(x - 1.5, maxY - 3, 3, 3));
}
@end
So as I am recording audio, the waveform drawing gets more and more jittery, kind of like a game that has bad frame rates. I tried contacting the owner of this piece of code, but no luck. I have never used core graphics, so I'm trying to figure out why performance is so bad. Performance starts to degrade at around 2-3 seconds worth of audio (the waveform doesn't even fill the screen).
My first question is: is this redrawing the entire audio history every time drawRect is called? If you look in the drawRect function (marked by asterisks), there is a variable x (a CGFloat). This seems to affect the position at which the waveform is being drawn (if you set it to 60 instead of 0, it starts at x = 60 pixels instead of x = 0 pixels).
From my viewController, I pass in the audioData which gets stored in the self.bufferArray property. So when that loop goes through to draw the data, it seems like it's starting at zero and working its way up every time drawRect is getting called, which means that for every new piece of audio data added, drawRect gets called, and it redraws the entire waveform plus the new piece of audio data.
If that is the problem, does anyone know how I can optimize this piece of code? I tried emptying the bufferArray after the loop so that it contained only new data, but that didn't work.
If this is not the problem, are there any core graphics experts that can figure out what the problem is?
I should also mention that I commented out a piece of code (marked with a row of # signs) because I need the entire waveform; I don't want it to remove pieces of the waveform at the beginning. The iOS Voice Memos app can hold a waveform of audio without performance degradation.
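For what it's worth, one thing that stands out in that drawRect is that the loop strokes the context once per sample. A minimal sketch of batching every segment into a single path and stroking once, assuming the same bufferArray of NSNumber values as above (this reduces Core Graphics calls per frame, though the whole history is still redrawn):

// Sketch: one path, one stroke, instead of one stroke per sample.
- (void)drawRect:(CGRect)rect
{
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGFloat midY = CGRectGetMidY(rect);

    CGContextSetStrokeColorWithColor(context, [UIColor whiteColor].CGColor);
    CGContextSetLineWidth(context, 1.);

    // Collect the center line and every sample segment in one path.
    CGMutablePathRef path = CGPathCreateMutable();
    CGPathMoveToPoint(path, NULL, 0., midY);
    CGPathAddLineToPoint(path, NULL, CGRectGetMaxX(rect), midY);

    CGFloat x = 0.;
    for (NSNumber *n in self.bufferArray) {
        CGFloat height = 20 * [n floatValue];
        CGPathMoveToPoint(path, NULL, x, midY - height);
        CGPathAddLineToPoint(path, NULL, x, midY + height);
        x += 2;
    }

    CGContextAddPath(context, path);
    CGContextStrokePath(context); // one stroke for the whole waveform
    CGPathRelease(path);
}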

How can I release memory of UIImages no longer used

I am attempting to merge a number of smaller images into a larger one. The app runs out of memory and crashes, and I cannot figure out how to release the memory after it is used, so it keeps building up until the crash.
The addImageToImage and resizeImage routines appear to be causing the crash, since I cannot free up their memory after it is no longer needed. I am using Automatic Reference Counting in this project. I have tried setting the image to nil, but that does not stop the crashing.
testImages is in one class that is called from the main ViewController, while addImageToImage and resizeImage are in another class called ImageUtils.
Can someone look at this code and explain to me how to properly release the memory allocated by these two routines? I cannot call release on the images since the project uses ARC, and setting them to nil has no effect.
+ (void)testImages
{
const int IMAGE_WIDTH = 394;
const int IMAGE_HEIGHT = 150;
const int PAGE_WIDTH = 1275;
const int PAGE_HEIGHT = 1650;
const int COLUMN_WIDTH = 30;
const int ROW_OFFSET = 75;
CGSize imageSize = CGSizeMake(PAGE_WIDTH, PAGE_HEIGHT);
UIGraphicsBeginImageContextWithOptions(imageSize, YES, 0);
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextFillRect(context, CGRectMake(0, 0, imageSize.width, imageSize.height));
UIImage *psheet = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
CGSize collageSize = CGSizeMake(IMAGE_WIDTH, IMAGE_HEIGHT);
UIGraphicsBeginImageContextWithOptions(collageSize, YES, 0);
CGContextRef pcontext = UIGraphicsGetCurrentContext();
CGContextFillRect(pcontext, CGRectMake(0, 0, collageSize.width, collageSize.height));
UIImage *collage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
float row = 1;
float column = 1;
int index = 1;
int group = 1;
for (int i = 0; i < 64; i++)
{
NSLog(#"processing group %i - file %i ", group, index++);
psheet = [ImageUtils addImageToImage:psheet withImage2:collage andRect:CGRectMake((IMAGE_WIDTH*(column-1)) + (COLUMN_WIDTH * column), (IMAGE_HEIGHT * (row-1)) + ROW_OFFSET, IMAGE_WIDTH, IMAGE_HEIGHT) withImageWidth:PAGE_WIDTH withImageHeight:PAGE_HEIGHT];
column++;
if (column > 3) {
column = 1;
row++;
}
if (index == 15)
{
group++;
index = 1;
row = 1;
column = 1;
UIImage *editedImage = [ImageUtils resizeImage:psheet withWidth:PAGE_WIDTH * 2 withHeight:PAGE_HEIGHT * 2];
editedImage = nil;
}
}
}
ImageUtils methods
+(UIImage *) addImageToImage:(UIImage *)sheet withImage2:(UIImage *)label andRect:(CGRect)cropRect withImageWidth:(int) width withImageHeight:(int) height
{
CGSize size = CGSizeMake(width,height);
UIGraphicsBeginImageContext(size);
CGPoint pointImg1 = CGPointMake(0,0);
[sheet drawAtPoint:pointImg1];
CGPoint pointImg2 = cropRect.origin;
[label drawAtPoint: pointImg2];
UIImage* result = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return result;
}
+ (UIImage*)resizeImage:(UIImage*)image withWidth:(CGFloat)width withHeight:(CGFloat)height
{
CGSize newSize = CGSizeMake(width, height);
CGFloat widthRatio = newSize.width/image.size.width;
CGFloat heightRatio = newSize.height/image.size.height;
if(widthRatio > heightRatio)
{
newSize=CGSizeMake(image.size.width*heightRatio,image.size.height*heightRatio);
}
else
{
newSize=CGSizeMake(image.size.width*widthRatio,image.size.height*widthRatio);
}
UIGraphicsBeginImageContextWithOptions(newSize, NO, 0.0);
[image drawInRect:CGRectMake(0,0,newSize.width,newSize.height)];
UIImage* newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return newImage;
}
Maybe your images are not deallocated but are sitting in an autorelease pool.
Many programs create temporary objects that are autoreleased. These objects add to the program's memory footprint until the end of the block. In many situations, allowing temporary objects to accumulate until the end of the current event-loop iteration does not result in excessive overhead; in some situations, however, you may create a large number of temporary objects that add substantially to memory footprint and that you want to dispose of more quickly. In these latter cases, you can create your own autorelease pool block. At the end of the block, the temporary objects are released, which typically results in their deallocation, thereby reducing the program's memory footprint.
Try wrapping the code inside the loop with @autoreleasepool {}:
for (int i = 0; i < 64; i++)
{
@autoreleasepool {
NSLog(@"processing group %i - file %i ", group, index++);
psheet = [ImageUtils addImageToImage:psheet withImage2:collage andRect:CGRectMake((IMAGE_WIDTH*(column-1)) + (COLUMN_WIDTH * column), (IMAGE_HEIGHT * (row-1)) + ROW_OFFSET, IMAGE_WIDTH, IMAGE_HEIGHT) withImageWidth:PAGE_WIDTH withImageHeight:PAGE_HEIGHT];
column++;
if (column > 3) {
column = 1;
row++;
}
if (index == 15)
{
group++;
index = 1;
row = 1;
column = 1;
UIImage *editedImage = [ImageUtils resizeImage:psheet withWidth:PAGE_WIDTH * 2 withHeight:PAGE_HEIGHT * 2];
editedImage = nil;
}
}
}

Working in iOS 5 but crashing in iOS 6

I'm adding share functionality to my iOS app (I have to support down to iOS 5.0).
The following code strikes out the text of a label. It works properly in iOS 5 but crashes in iOS 6.
Crashing at CGContextStrokePath(c);
CFIndex lineIndex = 0;
for (id line in lines) {
CGFloat ascent = 0.0f, descent = 0.0f, leading = 0.0f;
CGFloat width = CTLineGetTypographicBounds((__bridge CTLineRef)line, &ascent, &descent, &leading) ;
CGRect lineBounds = CGRectMake(0.0f, 0.0f, width, ascent + descent + leading) ;
lineBounds.origin.x = origins[lineIndex].x;
lineBounds.origin.y = origins[lineIndex].y;
for (id glyphRun in (__bridge NSArray *)CTLineGetGlyphRuns((__bridge CTLineRef)line)) {
NSDictionary *attributes = (__bridge NSDictionary *)CTRunGetAttributes((__bridge CTRunRef) glyphRun);
BOOL strikeOut = [[attributes objectForKey:kTTTStrikeOutAttributeName] boolValue];
NSInteger superscriptStyle = [[attributes objectForKey:(id)kCTSuperscriptAttributeName] integerValue];
if (strikeOut) {
CGRect runBounds = CGRectZero;
CGFloat runAscent = 0.0f;
CGFloat runDescent = 0.0f;
runBounds.size.width = CTRunGetTypographicBounds((__bridge CTRunRef)glyphRun, CFRangeMake(0, 0), &runAscent, &runDescent, NULL);
runBounds.size.height = runAscent + runDescent;
CGFloat xOffset = CTLineGetOffsetForStringIndex((__bridge CTLineRef)line, CTRunGetStringRange((__bridge CTRunRef)glyphRun).location, NULL);
runBounds.origin.x = origins[lineIndex].x + rect.origin.x + xOffset;
runBounds.origin.y = origins[lineIndex].y + rect.origin.y;
runBounds.origin.y -= runDescent;
// Don't draw strikeout too far to the right
if (CGRectGetWidth(runBounds) > CGRectGetWidth(lineBounds)) {
runBounds.size.width = CGRectGetWidth(lineBounds);
}
switch (superscriptStyle) {
case 1:
runBounds.origin.y -= runAscent * 0.47f;
break;
case -1:
runBounds.origin.y += runAscent * 0.25f;
break;
default:
break;
}
// Use text color, or default to black
id color = [attributes objectForKey:(id)kCTForegroundColorAttributeName];
if (color) {
CGContextSetStrokeColorWithColor(c, (__bridge CGColorRef)color);
} else {
CGContextSetGrayStrokeColor(c, 0.0f, 1.0);
}
CTFontRef font = CTFontCreateWithName((__bridge CFStringRef)self.font.fontName, self.font.pointSize, NULL);
CGContextSetLineWidth(c, CTFontGetUnderlineThickness(font));
CGFloat y = roundf(runBounds.origin.y + runBounds.size.height / 2.0f);
CGContextMoveToPoint(c, runBounds.origin.x, y);
CGContextAddLineToPoint(c, runBounds.origin.x + runBounds.size.width, y);
CGContextStrokePath(c);
}
}
lineIndex++;
}
}
It's a bug in TTTAttributedLabel; check out this issue.
Hopefully someone will resolve it and you will get a solution. Keep an eye on that issue until you do.

Performance issues when cropping UIImage (CoreGraphics, iOS)

The basic idea of what we are trying to do is that we have a large UIImage, and we want to slice it into several pieces. The user of the function can pass in a number of rows and a number of columns, and the image will be cropped accordingly (i.e. 3 rows and 3 columns slices the image into 9 pieces). The problem is, we're having performance issues when trying to accomplish this with Core Graphics. The largest grid we require is 5x5, and it takes several seconds for the operation to complete (which registers as lag time to the user). This is of course far from optimal.
My colleague and I have spent quite a while on this, and have searched the web for answers unsuccessfully. Neither of us is extremely experienced with Core Graphics, so I'm hoping there's some silly mistake in the code that will fix our problems. It's left to you, SO users, to please help us figure it out!
We used the tutorial at http://www.hive05.com/2008/11/crop-an-image-using-the-iphone-sdk/ to base revisions of our code on.
The function below:
-(void) getImagesFromImage:(UIImage*)image withRow:(NSInteger)rows withColumn:(NSInteger)columns
{
CGSize imageSize = image.size;
CGFloat xPos = 0.0;
CGFloat yPos = 0.0;
CGFloat width = imageSize.width / columns;
CGFloat height = imageSize.height / rows;
int imageCounter = 0;
//create a context to do our clipping in
UIGraphicsBeginImageContext(CGSizeMake(width, height));
CGContextRef currentContext = UIGraphicsGetCurrentContext();
CGRect clippedRect = CGRectMake(0, 0, width, height);
CGContextClipToRect(currentContext, clippedRect);
for(int i = 0; i < rows; i++)
{
xPos = 0.0;
for(int j = 0; j < columns; j++)
{
//create a rect with the size we want to crop the image to
//the X and Y here are zero so we start at the beginning of our
//newly created context
CGRect rect = CGRectMake(xPos, yPos, width, height);
//create a rect equivalent to the full size of the image
//offset the rect by the X and Y we want to start the crop
//from in order to cut off anything before them
CGRect drawRect = CGRectMake(rect.origin.x * -1,
rect.origin.y * -1,
image.size.width,
image.size.height);
//draw the image to our clipped context using our offset rect
CGContextDrawImage(currentContext, drawRect, image.CGImage);
//pull the image from our cropped context
UIImage* croppedImg = UIGraphicsGetImageFromCurrentImageContext();
//PuzzlePiece is a UIView subclass
PuzzlePiece* newPP = [[PuzzlePiece alloc] initWithImageAndFrameAndID:croppedImg :rect :imageCounter];
[slicedImages addObject:newPP];
imageCounter++;
xPos += (width);
}
yPos += (height);
}
//pop the context to get back to the default
UIGraphicsEndImageContext();
}
ANY advice greatly appreciated!!
originalImageView is an IBOutlet UIImageView; its image will be cropped.
#import <QuartzCore/QuartzCore.h>
QuartzCore is needed for the white border drawn around each slice, which makes the individual slices easier to see.
-(UIImage*)getCropImage:(CGRect)cropRect
{
CGImageRef image = CGImageCreateWithImageInRect([originalImageView.image CGImage],cropRect);
UIImage *cropedImage = [UIImage imageWithCGImage:image];
CGImageRelease(image);
return cropedImage;
}
-(void)prepareSlices:(uint)row:(uint)col
{
float flagX = originalImageView.image.size.width / originalImageView.frame.size.width;
float flagY = originalImageView.image.size.height / originalImageView.frame.size.height;
float _width = originalImageView.frame.size.width / col;
float _height = originalImageView.frame.size.height / row;
float _posX = 0.0;
float _posY = 0.0;
for (int i = 1; i <= row * col; i++) {
UIImageView *croppedImageVeiw = [[UIImageView alloc] initWithFrame:CGRectMake(_posX, _posY, _width, _height)];
UIImage *img = [self getCropImage:CGRectMake(_posX * flagX,_posY * flagY, _width * flagX, _height * flagY)];
croppedImageVeiw.image = img;
croppedImageVeiw.layer.borderColor = [[UIColor whiteColor] CGColor];
croppedImageVeiw.layer.borderWidth = 1.0f;
[self.view addSubview:croppedImageVeiw];
[croppedImageVeiw release];
_posX += _width;
if (i % col == 0) {
_posX = 0;
_posY += _height;
}
}
originalImageView.alpha = 0.0;
}
With originalImageView.alpha = 0.0; the originalImageView will no longer be visible.
Call it like this:
[self prepareSlices:4 :4];
It should add 16 slices as subviews of self.view. We have a puzzle app; this is working code from it. The main speedup comes from CGImageCreateWithImageInRect, which references a sub-rectangle of the existing bitmap instead of re-rendering the whole image into a new context for every slice.
