Performance issues when cropping UIImage (CoreGraphics, iOS)

The basic idea of what we are trying to do is that we have a large UIImage, and we want to slice it into several pieces. The user of the function can pass in a number of rows and number of columns, and the image will be cropped accordingly (i.e. 3 rows and 3 columns slice the image into 9 pieces). The problem is, we're having performance issues when trying to accomplish this with CoreGraphics. The largest grid we require is 5x5, and it takes several seconds for the operation to complete (which registers as lag time to the user). This is of course far from optimal.
My colleague and I have spent quite a while on this, and have searched the web for answers unsuccessfully. Neither of us is extremely experienced with Core Graphics, so I'm hoping there's some silly mistake in the code that will fix our problems. It's left to you, SO users, to please help us figure it out!
We used the tutorial at http://www.hive05.com/2008/11/crop-an-image-using-the-iphone-sdk/ to base revisions of our code on.
The function below:
-(void) getImagesFromImage:(UIImage*)image withRow:(NSInteger)rows withColumn:(NSInteger)columns
{
    CGSize imageSize = image.size;
    CGFloat xPos = 0.0;
    CGFloat yPos = 0.0;
    CGFloat width = imageSize.width / columns;
    CGFloat height = imageSize.height / rows;
    int imageCounter = 0;
    //create a context to do our clipping in
    UIGraphicsBeginImageContext(CGSizeMake(width, height));
    CGContextRef currentContext = UIGraphicsGetCurrentContext();
    CGRect clippedRect = CGRectMake(0, 0, width, height);
    CGContextClipToRect(currentContext, clippedRect);
    for(int i = 0; i < rows; i++)
    {
        xPos = 0.0;
        for(int j = 0; j < columns; j++)
        {
            //create a rect with the size we want to crop the image to
            //the X and Y here are zero so we start at the beginning of our
            //newly created context
            CGRect rect = CGRectMake(xPos, yPos, width, height);
            //create a rect equivalent to the full size of the image
            //offset the rect by the X and Y we want to start the crop
            //from in order to cut off anything before them
            CGRect drawRect = CGRectMake(rect.origin.x * -1,
                                         rect.origin.y * -1,
                                         image.size.width,
                                         image.size.height);
            //draw the image to our clipped context using our offset rect
            CGContextDrawImage(currentContext, drawRect, image.CGImage);
            //pull the image from our cropped context
            UIImage* croppedImg = UIGraphicsGetImageFromCurrentImageContext();
            //PuzzlePiece is a UIView subclass
            PuzzlePiece* newPP = [[PuzzlePiece alloc] initWithImageAndFrameAndID:croppedImg :rect :imageCounter];
            [slicedImages addObject:newPP];
            [newPP release]; //balance the alloc above (manual reference counting)
            imageCounter++;
            xPos += (width);
        }
        yPos += (height);
    }
    //pop the context to get back to the default
    UIGraphicsEndImageContext();
}
ANY advice greatly appreciated!!

originalImageView is an IBOutlet UIImageView. Its image will be cropped.
#import <QuartzCore/QuartzCore.h>
QuartzCore is needed for the white border drawn around each slice, which makes the result easier to see.
-(UIImage*)getCropImage:(CGRect)cropRect
{
    CGImageRef image = CGImageCreateWithImageInRect([originalImageView.image CGImage], cropRect);
    UIImage *croppedImage = [UIImage imageWithCGImage:image];
    CGImageRelease(image);
    return croppedImage;
}
-(void)prepareSlices:(uint)row:(uint)col
{
    float flagX = originalImageView.image.size.width / originalImageView.frame.size.width;
    float flagY = originalImageView.image.size.height / originalImageView.frame.size.height;
    float _width = originalImageView.frame.size.width / col;
    float _height = originalImageView.frame.size.height / row;
    float _posX = 0.0;
    float _posY = 0.0;
    for (int i = 1; i <= row * col; i++) {
        UIImageView *croppedImageView = [[UIImageView alloc] initWithFrame:CGRectMake(_posX, _posY, _width, _height)];
        UIImage *img = [self getCropImage:CGRectMake(_posX * flagX, _posY * flagY, _width * flagX, _height * flagY)];
        croppedImageView.image = img;
        croppedImageView.layer.borderColor = [[UIColor whiteColor] CGColor];
        croppedImageView.layer.borderWidth = 1.0f;
        [self.view addSubview:croppedImageView];
        [croppedImageView release];
        _posX += _width;
        if (i % col == 0) {
            _posX = 0;
            _posY += _height;
        }
    }
    originalImageView.alpha = 0.0;
}
With originalImageView.alpha = 0.0, you won't see the originalImageView any more.
Call it like this:
[self prepareSlices:4 :4];
It should create 16 slices and add them as subviews of self.view. We have a puzzle app; this is working code from there.
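To address the performance question directly: the slow version redraws the full-resolution image once per tile with CGContextDrawImage, so a 5x5 grid decodes and blits the large bitmap 25 times. CGImageCreateWithImageInRect copies only the pixels inside each crop rect, which is why the approach above is fast. As a minimal sketch (assuming the slicedImages array and PuzzlePiece initializer from the question), the original method could be rewritten like this:

-(void)getImagesFromImage:(UIImage *)image withRow:(NSInteger)rows withColumn:(NSInteger)columns
{
    CGFloat width = image.size.width / columns;
    CGFloat height = image.size.height / rows;
    int imageCounter = 0;
    for (int i = 0; i < rows; i++) {
        for (int j = 0; j < columns; j++) {
            CGRect rect = CGRectMake(j * width, i * height, width, height);
            //copy just this tile's pixels instead of redrawing the whole image
            CGImageRef tileRef = CGImageCreateWithImageInRect(image.CGImage, rect);
            UIImage *croppedImg = [UIImage imageWithCGImage:tileRef];
            CGImageRelease(tileRef);
            PuzzlePiece *newPP = [[PuzzlePiece alloc] initWithImageAndFrameAndID:croppedImg :rect :imageCounter];
            [slicedImages addObject:newPP];
            [newPP release];
            imageCounter++;
        }
    }
}

One caveat: CGImageCreateWithImageInRect works in pixel rather than point coordinates, so for retina images the rect may need to be multiplied by image.scale.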

Related

Unable to scale shapes to match the dimensions of UIView

I have created a custom view that draws shapes with a varying number of sides. The view is added to the main view as a subview, as shown below. The shapes end up with different dimensions.
My source code is given below
-(instancetype) initWithSides:(NSUInteger)sides andFrame:(CGRect)frame {
    self = [super initWithFrame:frame];
    if (self) {
        [self setBackgroundColor:[UIColor clearColor]];
        self.sides = sides;
        self.radius = CGRectGetWidth(self.frame) * 1.5 / sides;
    }
    return self;
}
-(void) drawRect:(CGRect)frame {
    // Sides is passed as a constructor argument. 4 sides means a quadrilateral etc.
    // Get a CGPathRef instance
    CGFloat angle = DEGREES_TO_RADIANS(360 / ((CGFloat) self.sides));
    int count = 0;
    CGFloat xCoord = CGRectGetMidX(self.frame);
    CGFloat yCoord = CGRectGetMidY(self.frame);
    while (count < self.sides) {
        CGFloat xPosition = xCoord + self.radius * cos(angle * ((CGFloat) count));
        CGFloat yPosition = yCoord + self.radius * sin(angle * ((CGFloat) count));
        [self.points addObject:[NSValue valueWithCGPoint:CGPointMake(xPosition, yPosition)]];
        count++;
    }
    NSValue *v = [self.points firstObject];
    CGPoint first = v.CGPointValue;
    CGMutablePathRef path = CGPathCreateMutable();
    CGPathMoveToPoint(path, NULL, first.x, first.y);
    for (int ix = 1; ix < [self.points count]; ix++) {
        NSValue *pValue = [self.points objectAtIndex:ix];
        CGPoint p = pValue.CGPointValue;
        CGPathAddLineToPoint(path, NULL, p.x, p.y);
    }
    CGPathCloseSubpath(path);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextAddPath(context, path);
    [self colourView:context withPath:path];
    CGPathRelease(path); //the context holds its own copy, so release ours
}
-(void) colourView:(CGContextRef)context withPath:(CGPathRef)ref {
    NSUInteger num = arc4random_uniform(8) + 1;
    UIColor *color = nil;
    switch (num) {
        case 1:
            color = [UIColor redColor];
            break;
        case 2:
            color = [UIColor greenColor];
            break;
        case 3:
            color = [UIColor yellowColor];
            break;
        case 4:
            color = [UIColor blueColor];
            break;
        case 5:
            color = [UIColor orangeColor];
            break;
        case 6:
            color = [UIColor brownColor];
            break;
        case 7:
            color = [UIColor purpleColor];
            break;
        case 8:
            color = [UIColor blackColor];
            break;
        default:
            break;
    }
    CGContextSetFillColorWithColor(context, color.CGColor);
    CGContextFillPath(context);
}
This constitutes one single shape. This is how I am drawing the rest of the views.
-(void) initDrawView {
    NSUInteger width = CGRectGetWidth(self.view.bounds) / 8;
    NSUInteger height = CGRectGetHeight(self.view.frame) / 8;
    UITapGestureRecognizer *singleFingerTap =
        [[UITapGestureRecognizer alloc] initWithTarget:self
                                                action:@selector(handleSingleTap:)];
    //attach the recognizer once, outside the loop
    [self.view addGestureRecognizer:singleFingerTap];
    for (int i = 0; i < 8; i++) {
        CGFloat yCoord = i * height + i * 15;
        for (int j = 0; j < 8; j++) {
            int side = 3 + arc4random() % 8;
            CGFloat xCoord = j * width;
            SimpleRegularPolygonView *view =
                [[SimpleRegularPolygonView alloc] initWithSides:side andFrame:CGRectMake(xCoord, yCoord, width, height)];
            [view sizeToFit];
            view.viewEffectsDelegate = self;
            [view setTag:(8 * i + j)];
            [self.view addSubview:view];
        }
    }
}
1) I don't know how to make them have the same dimensions. How can I do that? (First image)
2) The images don't scale up to the size of the UIView (Second image).
Your current code makes the radius inversely proportional to the number of sides:
self.radius = CGRectGetWidth(self.frame) * 1.5 / sides;
so the more sides, the smaller the image. A quick fix would be to just make the radius half the frame width:
self.radius = CGRectGetWidth(self.frame) /2;
This will mean shapes with an even number of sides fill the frame width. But those with an odd number of sides will appear to have space to the left. If you want to adjust for that you will need more detailed calculations for the width, and you will also need to move the "centre" of the shape. For an odd number sides, the radius would need to be:
self.radius = CGRectGetWidth(self.frame) /(1 + cos(angle / 2));
and xCoord would need to be:
CGFloat xCoord = CGRectGetMinX(self.frame) + self.radius * cos(angle/2);
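Putting both adjustments together, here is a minimal sketch (using self.bounds rather than self.frame, since drawRect: works in the view's own coordinate space; DEGREES_TO_RADIANS and self.sides are from the question):

CGFloat angle = DEGREES_TO_RADIANS(360.0 / (CGFloat)self.sides);
CGFloat radius, xCoord;
CGFloat yCoord = CGRectGetMidY(self.bounds);
if (self.sides % 2 == 0) {
    //even-sided shapes span the full width when centred
    radius = CGRectGetWidth(self.bounds) / 2;
    xCoord = CGRectGetMidX(self.bounds);
} else {
    //odd-sided shapes need a larger radius and a shifted centre
    //to fill the frame width
    radius = CGRectGetWidth(self.bounds) / (1 + cos(angle / 2));
    xCoord = CGRectGetMinX(self.bounds) + radius * cos(angle / 2);
}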

curved text starting point

I'm using this to generate a curved text:
- (UIImage*)createCircularText:(NSString*)text withSize:(CGSize)size andCenter:(CGPoint)center
{
    UIFont *font = [UIFont fontWithName:@"HelveticaNeue-Light" size:15];
    // Start drawing
    UIGraphicsBeginImageContext(size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    // Retrieve the center and set a radius
    CGFloat r = center.x / 3;
    // Start by adjusting the context origin
    // This affects all subsequent operations
    CGContextTranslateCTM(context, center.x, center.y);
    // Calculate the full extent
    CGFloat fullSize = 0;
    for (int i = 0; i < [text length]; i++)
    {
        NSString *letter = [text substringWithRange:NSMakeRange(i, 1)];
        CGSize letterSize = [letter sizeWithAttributes:@{NSFontAttributeName:font}];
        fullSize += letterSize.width;
    }
    // Initialize the consumed space
    CGFloat consumedSize = 0.0f;
    // Iterate through the alphabet
    for (int i = 0; i < [text length]; i++)
    {
        // Retrieve the letter and measure its display size
        NSString *letter = [text substringWithRange:NSMakeRange(i, 1)];
        CGSize letterSize = [letter sizeWithAttributes:@{NSFontAttributeName:font}];
        // Calculate the current angular offset
        //CGFloat theta = i * (2 * M_PI / ((float)[text length] * 3));
        // Move the pointer forward, calculating the new percentage of travel along the path
        consumedSize += letterSize.width / 2.0f;
        CGFloat percent = consumedSize / fullSize;
        CGFloat theta = (percent * 2 * M_PI) / ((float)[text length] / 4);
        consumedSize += letterSize.width / 2.0f;
        // Encapsulate each stage of the drawing
        CGContextSaveGState(context);
        // Rotate the context
        CGContextRotateCTM(context, theta);
        // Translate up to the edge of the radius and move left by
        // half the letter width. The height translation is negative
        // as this drawing sequence uses the UIKit coordinate system.
        // Transformations that move up go to lower y values.
        CGContextTranslateCTM(context, -letterSize.width / 2, -r);
        // Draw the letter and pop the transform state
        [letter drawAtPoint:CGPointMake(0, 0) withAttributes:@{NSFontAttributeName:font}];
        CGContextRestoreGState(context);
    }
    // Retrieve and return the image
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
and I get this back:
The problem is that the text starts at 0° but I actually want it to begin more at the left, so that the center of the string is at 0°. How to accomplish this?
Two options should work:
After drawing all of the text, rotate the context by half of the used angle, and only then get the image from the context.
Make two passes. The first simply calculates the required angle to draw the text. The second pass does what you do now, except that it subtracts half of the required total angle from each letter's angle. A sketch of this second option follows.
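A minimal sketch of option 2, assuming the loop from the question: with the formula used above, percent reaches 1.0 at the last letter, so the total angle is known up front and half of it can be subtracted from every theta.

//first pass: the total angle follows from the question's formula at percent == 1
CGFloat totalAngle = (2 * M_PI) / ((float)[text length] / 4);
//second pass: inside the drawing loop, offset each letter's angle
CGFloat theta = (percent * 2 * M_PI) / ((float)[text length] / 4) - totalAngle / 2.0f;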

How can I release memory of UIImages no longer used

I am attempting to merge a number of smaller images into a larger one. The app crashes because it runs out of memory, but I cannot figure out how to release the memory after it is used, so it keeps building up until the app crashes.
The addImageToImage and resizeImage routines appear to be causing the crash, since I cannot free up their memory after it is no longer needed. I am using Automatic Reference Counting in this project. I have tried setting the images to nil, but that does not stop the crashing.
testImages is in one class that is called from the main ViewController, while addImageToImage and resizeImage are in another class called ImageUtils.
Can someone look at this code and explain how to properly release the memory allocated by these two routines? I cannot call release on the images since the project uses ARC, and setting them to nil has no effect.
+ (void)testImages
{
    const int IMAGE_WIDTH = 394;
    const int IMAGE_HEIGHT = 150;
    const int PAGE_WIDTH = 1275;
    const int PAGE_HEIGHT = 1650;
    const int COLUMN_WIDTH = 30;
    const int ROW_OFFSET = 75;
    CGSize imageSize = CGSizeMake(PAGE_WIDTH, PAGE_HEIGHT);
    UIGraphicsBeginImageContextWithOptions(imageSize, YES, 0);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextFillRect(context, CGRectMake(0, 0, imageSize.width, imageSize.height));
    UIImage *psheet = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    CGSize collageSize = CGSizeMake(IMAGE_WIDTH, IMAGE_HEIGHT);
    UIGraphicsBeginImageContextWithOptions(collageSize, YES, 0);
    CGContextRef pcontext = UIGraphicsGetCurrentContext();
    CGContextFillRect(pcontext, CGRectMake(0, 0, collageSize.width, collageSize.height));
    UIImage *collage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    float row = 1;
    float column = 1;
    int index = 1;
    int group = 1;
    for (int i = 0; i < 64; i++)
    {
        NSLog(@"processing group %i - file %i ", group, index++);
        psheet = [ImageUtils addImageToImage:psheet withImage2:collage andRect:CGRectMake((IMAGE_WIDTH * (column - 1)) + (COLUMN_WIDTH * column), (IMAGE_HEIGHT * (row - 1)) + ROW_OFFSET, IMAGE_WIDTH, IMAGE_HEIGHT) withImageWidth:PAGE_WIDTH withImageHeight:PAGE_HEIGHT];
        column++;
        if (column > 3) {
            column = 1;
            row++;
        }
        if (index == 15)
        {
            group++;
            index = 1;
            row = 1;
            column = 1;
            UIImage *editedImage = [ImageUtils resizeImage:psheet withWidth:PAGE_WIDTH * 2 withHeight:PAGE_HEIGHT * 2];
            editedImage = nil;
        }
    }
}
ImageUtils methods
+(UIImage *) addImageToImage:(UIImage *)sheet withImage2:(UIImage *)label andRect:(CGRect)cropRect withImageWidth:(int)width withImageHeight:(int)height
{
    CGSize size = CGSizeMake(width, height);
    UIGraphicsBeginImageContext(size);
    CGPoint pointImg1 = CGPointMake(0, 0);
    [sheet drawAtPoint:pointImg1];
    CGPoint pointImg2 = cropRect.origin;
    [label drawAtPoint:pointImg2];
    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return result;
}
+ (UIImage*)resizeImage:(UIImage*)image withWidth:(CGFloat)width withHeight:(CGFloat)height
{
    CGSize newSize = CGSizeMake(width, height);
    CGFloat widthRatio = newSize.width / image.size.width;
    CGFloat heightRatio = newSize.height / image.size.height;
    if (widthRatio > heightRatio)
    {
        newSize = CGSizeMake(image.size.width * heightRatio, image.size.height * heightRatio);
    }
    else
    {
        newSize = CGSizeMake(image.size.width * widthRatio, image.size.height * widthRatio);
    }
    UIGraphicsBeginImageContextWithOptions(newSize, NO, 0.0);
    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
Maybe your images are not deallocated immediately but instead moved to an autorelease pool.
Many programs create temporary objects that are autoreleased. These objects add to the program's memory footprint until the end of the block. In many situations, allowing temporary objects to accumulate until the end of the current event-loop iteration does not result in excessive overhead; in some situations, however, you may create a large number of temporary objects that add substantially to memory footprint and that you want to dispose of more quickly. In these latter cases, you can create your own autorelease pool block. At the end of the block, the temporary objects are released, which typically results in their deallocation, thereby reducing the program's memory footprint.
Try to wrap the code inside the loop with @autoreleasepool { }:
for (int i = 0; i < 64; i++)
{
    @autoreleasepool {
        NSLog(@"processing group %i - file %i ", group, index++);
        psheet = [ImageUtils addImageToImage:psheet withImage2:collage andRect:CGRectMake((IMAGE_WIDTH * (column - 1)) + (COLUMN_WIDTH * column), (IMAGE_HEIGHT * (row - 1)) + ROW_OFFSET, IMAGE_WIDTH, IMAGE_HEIGHT) withImageWidth:PAGE_WIDTH withImageHeight:PAGE_HEIGHT];
        column++;
        if (column > 3) {
            column = 1;
            row++;
        }
        if (index == 15)
        {
            group++;
            index = 1;
            row = 1;
            column = 1;
            UIImage *editedImage = [ImageUtils resizeImage:psheet withWidth:PAGE_WIDTH * 2 withHeight:PAGE_HEIGHT * 2];
            editedImage = nil;
        }
    }
}

Table View UIImage rendering issue

I'm running into a rendering issue with my tableView UIImages and was wondering if anyone has encountered the same problem and knows how to fix it.
Here is my cellForRowAtIndexPath
-(UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath
{
    //the dequeue was omitted in the original post; a reusable cell with
    //identifier "Cell" is assumed here
    UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:@"Cell" forIndexPath:indexPath];
    cell.textLabel.text = exerciseDisplayName;
    cell.textLabel.numberOfLines = 0;
    cell.textLabel.lineBreakMode = NSLineBreakByWordWrapping;
    [tableView setSeparatorInset:UIEdgeInsetsZero];
    UtilityMethods *commonMethods = [[UtilityMethods alloc] init];
    UIImage *rowImage = [commonMethods imageForRow:tempPlaceholder.bodyPart];
    cell.imageView.image = rowImage;
    return cell;
}
Here is my heightForRowAtIndexPath:
-(CGFloat)tableView:(UITableView *)tableView heightForRowAtIndexPath:(NSIndexPath *)indexPath
{
    return 96;
}
There are lots of lines and squiggles in the images in the table, and I was wondering if anyone knows of any UIImage properties I might need to apply to fix the problem. Increasing the row height fixes the problem at the expense of taller rows: with 128 in heightForRow, the squiggles are much less noticeable. I'm pretty sure this has something to do with how iOS is rendering the image. I've taken the image and resized it to 76x76 using Microsoft Paint just to see if I would get the same problem, and those images appear just fine without all the squiggles. The images are .png format, originally 1024x1024; I've just resized them downwards as I've needed them. If anyone has any tips or advice on how to fix this I'd really appreciate it.
You are going to need to resample the image to the size you need. Viewing a large image in a small space looks rather bad on iOS devices (on most devices, really). But if you use built-in functions to create a new UIImage of the proper size, everything looks much better. Scaling down a UIImage at display time will always look worse than creating a new image of the proper size and displaying that. The way to do this is as follows (taken from here):
- (UIImage*)imageByScalingAndCroppingForSize:(CGSize)targetSize
{
    UIImage *sourceImage = self;
    UIImage *newImage = nil;
    CGSize imageSize = sourceImage.size;
    CGFloat width = imageSize.width;
    CGFloat height = imageSize.height;
    CGFloat targetWidth = targetSize.width;
    CGFloat targetHeight = targetSize.height;
    CGFloat scaleFactor = 0.0;
    CGFloat scaledWidth = targetWidth;
    CGFloat scaledHeight = targetHeight;
    CGPoint thumbnailPoint = CGPointMake(0.0, 0.0);
    if (CGSizeEqualToSize(imageSize, targetSize) == NO)
    {
        CGFloat widthFactor = targetWidth / width;
        CGFloat heightFactor = targetHeight / height;
        if (widthFactor > heightFactor)
        {
            scaleFactor = widthFactor; // scale to fit height
        }
        else
        {
            scaleFactor = heightFactor; // scale to fit width
        }
        scaledWidth = width * scaleFactor;
        scaledHeight = height * scaleFactor;
        // center the image
        if (widthFactor > heightFactor)
        {
            thumbnailPoint.y = (targetHeight - scaledHeight) * 0.5;
        }
        else if (widthFactor < heightFactor)
        {
            thumbnailPoint.x = (targetWidth - scaledWidth) * 0.5;
        }
    }
    // this will crop; scale 0 means the device's screen scale
    UIGraphicsBeginImageContextWithOptions(targetSize, NO, 0.0);
    CGRect thumbnailRect = CGRectZero;
    thumbnailRect.origin = thumbnailPoint;
    thumbnailRect.size.width = scaledWidth;
    thumbnailRect.size.height = scaledHeight;
    [sourceImage drawInRect:thumbnailRect];
    newImage = UIGraphicsGetImageFromCurrentImageContext();
    if (newImage == nil)
    {
        NSLog(@"could not scale image");
    }
    //pop the context to get back to the default
    UIGraphicsEndImageContext();
    return newImage;
}
That function does a bit more than you are looking for, but you should be able to cut it down to only what you need.
Make sure to use the UIGraphicsBeginImageContextWithOptions function instead of UIGraphicsBeginImageContext so you deal with retina displays properly; otherwise the result will be blurrier than it should be and you will have a second problem to deal with.
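For example, the table view code from the question could call it like this (a sketch; it assumes the method above has been added to a UIImage category and uses the 96-point row height from heightForRowAtIndexPath:):

UIImage *rowImage = [commonMethods imageForRow:tempPlaceholder.bodyPart];
//resample to the size the cell will actually display (96x96 points here)
//instead of letting the cell scale down the 1024x1024 original
cell.imageView.image = [rowImage imageByScalingAndCroppingForSize:CGSizeMake(96, 96)];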

Resize UIImage with aspect ratio?

I'm using this code to resize an image on the iPhone:
CGRect screenRect = CGRectMake(0, 0, 320.0, 480.0);
UIGraphicsBeginImageContext(screenRect.size);
[value drawInRect:screenRect blendMode:kCGBlendModePlusDarker alpha:1];
UIImage *tmpValue = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Which is working great, as long as the aspect ratio of the image matches that of the new resized image. I'd like to modify this so that it keeps the correct aspect ratio and just puts a black background anywhere the image doesn't show up. So I would still end up with a 320x480 image but with black on the top and bottom or sides, depending on the original image size.
Is there an easy way to do this similar to what I'm doing? Thanks!
After you set your screen rect, do something like the following to decide what rect to draw the image in:
float hfactor = value.size.width / screenRect.size.width;
float vfactor = value.size.height / screenRect.size.height;
float factor = fmax(hfactor, vfactor);
// Divide the size by the greater of the vertical or horizontal shrinkage factor
float newWidth = value.size.width / factor;
float newHeight = value.size.height / factor;
// Then figure out if you need to offset it to center vertically or horizontally
float leftOffset = (screenRect.size.width - newWidth) / 2;
float topOffset = (screenRect.size.height - newHeight) / 2;
CGRect newRect = CGRectMake(leftOffset, topOffset, newWidth, newHeight);
If you don't want to enlarge images smaller than the screenRect, make sure factor is greater than or equal to one (e.g. factor = fmax(factor, 1)).
To get the black background, you would probably just want to set the context fill color to black and fill the rect before drawing the image, as sketched below.
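Putting the pieces together, a minimal sketch (it assumes value is the UIImage from the question and newRect comes from the snippet above):

UIGraphicsBeginImageContextWithOptions(screenRect.size, YES, 0.0);
CGContextRef context = UIGraphicsGetCurrentContext();
//fill the whole canvas with black, then draw the aspect-fitted image on top
CGContextSetFillColorWithColor(context, [UIColor blackColor].CGColor);
CGContextFillRect(context, screenRect);
[value drawInRect:newRect];
UIImage *tmpValue = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();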
I know this is very old, but thanks for that post; it redirected me from trying to scale the image to drawing it instead. In case it is of benefit to anyone, I made an extension class I'll throw in here. It allows you to resize an image like this:
UIImage imgNew = img.Fit(40.0f, 40.0f);
I don't need a fit option, but it could easily be extended to support Fill as well.
using CoreGraphics;
using System;
using UIKit;

namespace SomeApp.iOS.Extensions
{
    public static class UIImageExtensions
    {
        public static CGSize Fit(this CGSize sizeImage,
                                 CGSize sizeTarget)
        {
            CGSize ret;
            float fw;
            float fh;
            float f;
            fw = (float)(sizeTarget.Width / sizeImage.Width);
            fh = (float)(sizeTarget.Height / sizeImage.Height);
            f = Math.Min(fw, fh);
            ret = new CGSize
            {
                Width = sizeImage.Width * f,
                Height = sizeImage.Height * f
            };
            return ret;
        }

        public static UIImage Fit(this UIImage image,
                                  float width,
                                  float height,
                                  bool opaque = false,
                                  float scale = 1.0f)
        {
            UIImage ret;
            ret = image.Fit(new CGSize(width, height),
                            opaque,
                            scale);
            return ret;
        }

        public static UIImage Fit(this UIImage image,
                                  CGSize sizeTarget,
                                  bool opaque = false,
                                  float scale = 1.0f)
        {
            CGSize sizeNewImage;
            CGSize size;
            UIImage ret;
            size = image.Size;
            sizeNewImage = size.Fit(sizeTarget);
            // pass the caller's scale through instead of hard-coding 1.0f
            UIGraphics.BeginImageContextWithOptions(sizeNewImage,
                                                    opaque,
                                                    scale);
            using (CGContext context = UIGraphics.GetCurrentContext())
            {
                context.ScaleCTM(1, -1);
                context.TranslateCTM(0, -sizeNewImage.Height);
                context.DrawImage(new CGRect(CGPoint.Empty, sizeNewImage),
                                  image.CGImage);
                ret = UIGraphics.GetImageFromCurrentImageContext();
            }
            UIGraphics.EndImageContext();
            return ret;
        }
    }
}
As per the post above, it starts a new context for the image, figures out the aspect ratio, and then paints into the image. If you haven't spent any time in Xcode development, UIGraphics is a bit backwards compared to most systems I work with, but not bad. One issue is that bitmaps by default paint bottom to top. To get around that,
context.ScaleCTM(1, -1);
context.TranslateCTM(0, -sizeNewImage.Height);
changes the orientation of drawing to the more common top-left to bottom-right; but then you need to move the origin as well, hence the TranslateCTM.
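For readers following along in Objective-C rather than C#, the same flip looks like this (a sketch, assuming an open UIKit image context, a source image, and a target size newSize):

CGContextRef context = UIGraphicsGetCurrentContext();
//flip the context so CGContextDrawImage draws top-left to bottom-right,
//then shift the origin back into view
CGContextScaleCTM(context, 1, -1);
CGContextTranslateCTM(context, 0, -newSize.height);
CGContextDrawImage(context, CGRectMake(0, 0, newSize.width, newSize.height), image.CGImage);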
Hopefully, it saves someone some time.
Cheers
