Drawing getting pixelated in iOS

I am working on a drawing application where I draw into CGLayers. I create a CGLayer this way:
- (void)drawRect:(CGRect)rect
{
    CGContextRef context = UIGraphicsGetCurrentContext();
    if (self.DrawingLayer == nil)
    {
        CGFloat scale = self.contentScaleFactor;
        CGRect bounds = CGRectMake(0, 0, self.bounds.size.width * scale, self.bounds.size.height * scale);
        CGLayerRef layer = CGLayerCreateWithContext(context, bounds.size, NULL);
        CGContextRef layerContext = CGLayerGetContext(layer);
        CGContextScaleCTM(layerContext, scale, scale);
        self.DrawingLayer = layer;
    }
    CGContextDrawLayerInRect(context, rectSize, self.newDrawingLayer);
    CGContextDrawLayerInRect(context, self.bounds, self.DrawingLayer);
}
Now, as my drawing canvas can be dynamically increased/decreased, whenever the user performs one of those actions I create one more layer and store the previous drawing onto the new CGLayer, this way:
- (void)canvasIncreased
{
    rectSize = self.bounds;
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGFloat scale = self.contentScaleFactor;
    CGRect bounds = CGRectMake(0, 0, self.bounds.size.width * scale, self.bounds.size.height * scale);
    CGLayerRef layer = CGLayerCreateWithContext(context, bounds.size, NULL);
    CGContextRef layerContext = CGLayerGetContext(layer);
    CGContextScaleCTM(layerContext, scale, scale);
    self.newDrawingLayer = layer;
    CGContextDrawLayerInRect(layerContext, self.bounds, self.DrawingLayer);
    self.DrawingLayer = nil;
}
Everything works fine, but whenever I draw something, then increase the canvas size, then draw again, then increase the canvas size again, and so on, my previous drawing gets pixelated. Here is the image:

Related

Crop rotated, panned and zoomed image [duplicate]

I am developing an application in which I process an image using its pixels, but this image processing takes a lot of time. Therefore I want to crop the UIImage (only the middle part of the image, i.e. removing/cropping the bordered part). The code I have developed so far is:
- (NSInteger)processImage1:(UIImage *)image
{
    CGFloat width = image.size.width;
    CGFloat height = image.size.height;
    struct pixel *pixels = (struct pixel *)calloc(1, image.size.width * image.size.height * sizeof(struct pixel));
    if (pixels != NULL)
    {
        // Create a new bitmap
        CGContextRef context = CGBitmapContextCreate(
            (void *)pixels,
            image.size.width,
            image.size.height,
            8,
            image.size.width * 4,
            CGImageGetColorSpace(image.CGImage),
            kCGImageAlphaPremultipliedLast
        );
        if (context != NULL)
        {
            // Draw the image in the bitmap
            CGContextDrawImage(context, CGRectMake(0.0f, 0.0f, image.size.width, image.size.height), image.CGImage);
            NSUInteger numberOfPixels = image.size.width * image.size.height;
            NSMutableArray *numberOfPixelsArray = [[[NSMutableArray alloc] initWithCapacity:numberOfPixels] autorelease];
        }
How do I take (cropping off the outside border) the middle part of the UIImage?
Try something like this:
CGImageRef imageRef = CGImageCreateWithImageInRect([largeImage CGImage], cropRect);
image = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);
Note: cropRect is a smaller rectangle containing the middle part of the image...
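One caveat: CGImageCreateWithImageInRect works in the CGImage's pixel coordinates, while UIImage reports its size in points. A minimal sketch of building a centered cropRect in pixel space, assuming you want the middle half of the image (the insetFraction value is just an illustrative choice):
// Hypothetical: keep the middle half of the image, accounting for the scale factor
CGFloat insetFraction = 0.25;
CGFloat pixelWidth = largeImage.size.width * largeImage.scale;
CGFloat pixelHeight = largeImage.size.height * largeImage.scale;
CGRect cropRect = CGRectMake(pixelWidth * insetFraction,
                             pixelHeight * insetFraction,
                             pixelWidth * (1.0 - 2.0 * insetFraction),
                             pixelHeight * (1.0 - 2.0 * insetFraction));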
I was looking for a way to get an arbitrary rectangular crop (i.e., a sub-image) of a UIImage.
Most of the solutions I tried do not work if the orientation of the image is anything but UIImageOrientationUp.
For example:
http://www.hive05.com/2008/11/crop-an-image-using-the-iphone-sdk/
Typically if you use your iPhone camera, you will have other orientations like UIImageOrientationLeft, and you will not get a correct crop with the above. This is because CGImageRef/CGContextDrawImage use a coordinate system that differs from UIImage's.
The code below uses UI* methods (no CGImageRef), and I have tested this with up/down/left/right oriented images, and it seems to work great.
// get sub image
- (UIImage *)getSubImageFrom:(UIImage *)img WithRect:(CGRect)rect
{
    UIGraphicsBeginImageContext(rect.size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    // translated rectangle for drawing sub image
    CGRect drawRect = CGRectMake(-rect.origin.x, -rect.origin.y, img.size.width, img.size.height);
    // clip to the bounds of the image context
    // not strictly necessary as it will get clipped anyway?
    CGContextClipToRect(context, CGRectMake(0, 0, rect.size.width, rect.size.height));
    // draw image
    [img drawInRect:drawRect];
    // grab image
    UIImage *subImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return subImage;
}
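A quick usage sketch (the image name and rect values are illustrative). One hedge: UIGraphicsBeginImageContext creates a 1.0-scale context, so on Retina screens you may prefer UIGraphicsBeginImageContextWithOptions(rect.size, NO, 0) to preserve resolution:
UIImage *photo = [UIImage imageNamed:@"photo"]; // hypothetical image name
UIImage *subImage = [self getSubImageFrom:photo WithRect:CGRectMake(40, 40, 120, 120)];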
Because I needed it just now, here is M-V's code in Swift 4:
func imageWithImage(image: UIImage, croppedTo rect: CGRect) -> UIImage {
    UIGraphicsBeginImageContext(rect.size)
    let context = UIGraphicsGetCurrentContext()
    let drawRect = CGRect(x: -rect.origin.x, y: -rect.origin.y,
                          width: image.size.width, height: image.size.height)
    context?.clip(to: CGRect(x: 0, y: 0,
                             width: rect.size.width, height: rect.size.height))
    image.draw(in: drawRect)
    let subImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return subImage!
}
It would ultimately be faster, with a lot less image creation from sprite atlases, if you could set not only the image of a UIImageView but also the top-left offset to display within that UIImage. Maybe this is possible; it would certainly eliminate a lot of effort!
Meanwhile, I created these useful functions in a utility class that I use in my apps. It creates a UIImage from part of another UIImage, with options to rotate, scale, and flip using standard UIImageOrientation values to specify. The pixel scaling is preserved from the original image.
My app creates a lot of UIImages during initialization, and this necessarily takes time. But some images aren't needed until a certain tab is selected. To give the appearance of quicker load I could create them in a separate thread spawned at startup, then just wait till it's done when that tab is selected.
This code is also posted at Most efficient way to draw part of an image in iOS
+ (UIImage *)imageByCropping:(UIImage *)imageToCrop toRect:(CGRect)aperture
{
    return [ChordCalcController imageByCropping:imageToCrop toRect:aperture withOrientation:UIImageOrientationUp];
}

// Draw a full image into a crop-sized area and offset to produce a cropped, rotated image
+ (UIImage *)imageByCropping:(UIImage *)imageToCrop toRect:(CGRect)aperture withOrientation:(UIImageOrientation)orientation
{
    // convert y coordinate to origin bottom-left
    CGFloat orgY = aperture.origin.y + aperture.size.height - imageToCrop.size.height,
            orgX = -aperture.origin.x,
            scaleX = 1.0,
            scaleY = 1.0,
            rot = 0.0;
    CGSize size;
    switch (orientation) {
        case UIImageOrientationRight:
        case UIImageOrientationRightMirrored:
        case UIImageOrientationLeft:
        case UIImageOrientationLeftMirrored:
            size = CGSizeMake(aperture.size.height, aperture.size.width);
            break;
        case UIImageOrientationDown:
        case UIImageOrientationDownMirrored:
        case UIImageOrientationUp:
        case UIImageOrientationUpMirrored:
            size = aperture.size;
            break;
        default:
            assert(NO);
            return nil;
    }
    switch (orientation) {
        case UIImageOrientationRight:
            rot = 1.0 * M_PI / 2.0;
            orgY -= aperture.size.height;
            break;
        case UIImageOrientationRightMirrored:
            rot = 1.0 * M_PI / 2.0;
            scaleY = -1.0;
            break;
        case UIImageOrientationDown:
            scaleX = scaleY = -1.0;
            orgX -= aperture.size.width;
            orgY -= aperture.size.height;
            break;
        case UIImageOrientationDownMirrored:
            orgY -= aperture.size.height;
            scaleY = -1.0;
            break;
        case UIImageOrientationLeft:
            rot = 3.0 * M_PI / 2.0;
            orgX -= aperture.size.height;
            break;
        case UIImageOrientationLeftMirrored:
            rot = 3.0 * M_PI / 2.0;
            orgY -= aperture.size.height;
            orgX -= aperture.size.width;
            scaleY = -1.0;
            break;
        case UIImageOrientationUp:
            break;
        case UIImageOrientationUpMirrored:
            orgX -= aperture.size.width;
            scaleX = -1.0;
            break;
    }
    // set the draw rect to pan the image to the right spot
    CGRect drawRect = CGRectMake(orgX, orgY, imageToCrop.size.width, imageToCrop.size.height);
    // create a context for the new image
    UIGraphicsBeginImageContextWithOptions(size, NO, imageToCrop.scale);
    CGContextRef gc = UIGraphicsGetCurrentContext();
    // apply rotation and scaling
    CGContextRotateCTM(gc, rot);
    CGContextScaleCTM(gc, scaleX, scaleY);
    // draw the image to our clipped context using the offset rect
    CGContextDrawImage(gc, drawRect, imageToCrop.CGImage);
    // pull the image from our cropped context
    UIImage *cropped = UIGraphicsGetImageFromCurrentImageContext();
    // pop the context to get back to the default
    UIGraphicsEndImageContext();
    // Note: this is autoreleased
    return cropped;
}
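A minimal usage sketch, assuming the class really is named ChordCalcController as in the snippet above (the image name and crop rect are illustrative):
UIImage *sourceImage = [UIImage imageNamed:@"photo"]; // hypothetical image name
CGRect aperture = CGRectMake(50, 50, 200, 200);
UIImage *cropped = [ChordCalcController imageByCropping:sourceImage
                                                 toRect:aperture
                                        withOrientation:sourceImage.imageOrientation];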
Very small/simple Swift 5 version:
You shouldn't mix UI and CG objects; they sometimes have very different coordinate spaces. This can make you sad.
Note 👉 : self.draw(at:)
private prefix func - (right: CGPoint) -> CGPoint
{
    return CGPoint(x: -right.x, y: -right.y)
}

extension UIImage
{
    public func cropped(to cropRect: CGRect) -> UIImage?
    {
        let renderer = UIGraphicsImageRenderer(size: cropRect.size)
        return renderer.image { _ in
            self.draw(at: -cropRect.origin)
        }
    }
}
Using the function
CGContextClipToRect(context, CGRectMake(0, 0, size.width, size.height));
Here's some example code; it was used for a different purpose, but it clips OK.
- (UIImage *)aspectFillToSize:(CGSize)size
{
    CGFloat imgAspect = self.size.width / self.size.height;
    CGFloat sizeAspect = size.width / size.height;
    CGSize scaledSize;
    if (sizeAspect > imgAspect) { // increase width, crop height
        scaledSize = CGSizeMake(size.width, size.width / imgAspect);
    } else { // increase height, crop width
        scaledSize = CGSizeMake(size.height * imgAspect, size.height);
    }
    UIGraphicsBeginImageContextWithOptions(size, NO, 0.0f);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextClipToRect(context, CGRectMake(0, 0, size.width, size.height));
    [self drawInRect:CGRectMake(0.0f, 0.0f, scaledSize.width, scaledSize.height)];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
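Since the method above uses self.size and drawInRect:, it reads as a UIImage category method; a minimal usage sketch under that assumption (the image name and target size are illustrative):
UIImage *photo = [UIImage imageNamed:@"photo"]; // hypothetical image name
UIImage *thumbnail = [photo aspectFillToSize:CGSizeMake(100, 100)]; // center-cropped to a square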
If you want a portrait crop down the center of every photo, use M-V's solution and replace cropRect:
CGFloat height = imageTaken.size.height;
CGFloat width = imageTaken.size.width;
CGFloat newWidth = height * 9 / 16;
CGFloat newX = fabs(width - newWidth) / 2;
CGRect cropRect = CGRectMake(newX, 0, newWidth, height);
I wanted to be able to crop from a region based on an aspect ratio, and scale to a size based on an outer bounding extent. Here is my variation:
import AVFoundation
import ImageIO

class Image {
    class func crop(image: UIImage, source: CGRect, aspect: CGSize, outputExtent: CGSize) -> UIImage {
        let sourceRect = AVMakeRectWithAspectRatioInsideRect(aspect, source)
        let targetRect = AVMakeRectWithAspectRatioInsideRect(aspect, CGRect(origin: CGPointZero, size: outputExtent))
        let opaque = true, deviceScale: CGFloat = 0.0 // use scale of device's main screen
        UIGraphicsBeginImageContextWithOptions(targetRect.size, opaque, deviceScale)
        let scale = max(
            targetRect.size.width / sourceRect.size.width,
            targetRect.size.height / sourceRect.size.height)
        // Note: the unary minus and * operators on CGPoint/CGSize here are custom
        // helper operators, not part of the standard library.
        let drawRect = CGRect(origin: -sourceRect.origin * scale, size: image.size * scale)
        image.drawInRect(drawRect)
        let scaledImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return scaledImage
    }
}
There are a couple of things that I found confusing: the separate concerns of cropping and resizing. Cropping is handled with the origin of the rect that you pass to drawInRect, and scaling is handled by the size portion. In my case, I needed to relate the size of the cropping rect on the source to my output rect of the same aspect ratio. The scale factor is then output / input, and this needs to be applied to the drawRect (passed to drawInRect).
One caveat is that this approach effectively assumes that the image you are drawing is larger than the image context. I have not tested this, but I think you can use this code to handle cropping/zooming by explicitly setting the context's scale parameter to the aforementioned scale factor. By default, UIKit applies a multiplier based on the screen resolution.
Finally, it should be noted that this UIKit approach is higher level than CoreGraphics / Quartz and Core Image approaches, and seems to handle image orientation issues. It is also worth mentioning that it is pretty fast, second to ImageIO, according to this post here: http://nshipster.com/image-resizing/

Why does a resized PNG image show very well in UIImageView but with poor quality when I draw it with CGContextDrawImage?

I have a PNG (52x52) image file. If I show it in a UIImageView (16x16), it is shown well.
But if I try to use CGContextDrawImage to draw it, the quality is very bad.
Please see the detailed code below:
resizeImage is used to resize the image (copied from Apple's site).
drawStonesPng draws the image and is called from CALayer's drawInContext:.
- (UIImage *)resizeImage:(UIImage *)image toWidth:(NSInteger)width height:(NSInteger)height
{
    // Create a graphics context with the target size.
    // On iOS 4 and later, use UIGraphicsBeginImageContextWithOptions to take the scale into consideration.
    // On iOS prior to 4, fall back to UIGraphicsBeginImageContext.
    CGSize size = CGSizeMake(width, height);
    if (NULL != UIGraphicsBeginImageContextWithOptions)
        UIGraphicsBeginImageContextWithOptions(size, NO, 0);
    else
        UIGraphicsBeginImageContext(size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    // Flip the context because the UIKit coordinate system is upside down relative to the Quartz coordinate system
    CGContextTranslateCTM(context, 0.0, height);
    CGContextScaleCTM(context, 1.0, -1.0);
    // Draw the original image into the context
    CGContextSetBlendMode(context, kCGBlendModeCopy);
    CGContextDrawImage(context, CGRectMake(0.0, 0.0, width, height), image.CGImage);
    // Retrieve the UIImage from the current context
    UIImage *imageOut = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return imageOut;
}
- (void)drawStonesPng:(CGContextRef)ctx
{
    _cellWidth = 16;
    float x1, y1;
    float f1 = 0.48;
    // self.contentsScale = [UIScreen mainScreen].scale;
    UIImage *resizedImageBlack = [self resizeImage:[UIImage imageNamed:@"blackstone52"] toWidth:_cellWidth*f1*2 height:_cellWidth*f1*2];
    UIImage *resizedImageBlackShadow = [self resizeImage:[UIImage imageNamed:@"blackshadow"] toWidth:_cellWidth*f1*2 height:_cellWidth*f1*2];
    CGImageRef imgBlack = [resizedImageBlack CGImage];
    CGImageRef imgBlackShadow = [resizedImageBlackShadow CGImage];
    CGFloat width = CGImageGetWidth(imgBlack), height = CGImageGetHeight(imgBlack);
    // CGContextSetInterpolationQuality(ctx, kCGInterpolationHigh);
    // CGContextSetShouldAntialias(ctx, true);
    for (int y = 0; y < _boardSize; y++) {
        for (int x = 0; x < _boardSize; x++) {
            STONE_T stone = [MyGoController getStoneType:x y:y];
            if (STONE_INVALID == stone) {
                CGContextClosePath(ctx);
                return;
            }
            if (stone == STONE_BLACK) {
                x1 = (x+1)*_cellWidth - _cellWidth*f1;
                y1 = (y+1)*_cellWidth - _cellWidth*f1;
                // CGFloat scale = _cellWidth*0.9/width;
                // NSLog(@"Scale: %f\nWidth: %f\nHeight: %f", scale, width, height);
                // CGContextTranslateCTM(ctx, 0, height / scale);
                // CGContextScaleCTM(ctx, 1.0, -1.0);
                CGFloat scale = [[UIScreen mainScreen] scale];
                self.contentsScale = [[UIScreen mainScreen] scale];
                NSLog(@"Scale: %f\nWidth: %f\nHeight: %f", scale, width, height);
                CGContextTranslateCTM(ctx, 0, width / scale);
                CGContextScaleCTM(ctx, 1.0, -1.0);
                CGContextDrawImage(ctx, CGRectMake(x1+1, y1+1, width/scale, width/scale), imgBlackShadow);
                CGContextDrawImage(ctx, CGRectMake(x1, y1, width/scale, width/scale), imgBlack);
            }
        }
    }
    resizedImageBlack = [self resizeImage:[UIImage imageNamed:@"whitestone52"] toWidth:_cellWidth*f1*2 height:_cellWidth*f1*2];
    resizedImageBlackShadow = [self resizeImage:[UIImage imageNamed:@"whiteshadow"] toWidth:_cellWidth*f1*2 height:_cellWidth*f1*2];
    imgBlack = [resizedImageBlack CGImage];
    imgBlackShadow = [resizedImageBlackShadow CGImage];
    // draw white stones
    for (int y = 0; y < _boardSize; y++) {
        for (int x = 0; x < _boardSize; x++) {
            STONE_T stone = [MyGoController getStoneType:x y:y];
            if (stone == STONE_WHITE) {
                x1 = (x+1)*_cellWidth - _cellWidth*f1;
                y1 = (y+1)*_cellWidth - _cellWidth*f1;
                // CGFloat scale = _cellWidth*0.9/width;
                // NSLog(@"Scale: %f\nWidth: %f\nHeight: %f", scale, width, height);
                // CGContextTranslateCTM(ctx, 0, height / scale);
                // CGContextScaleCTM(ctx, 1.0, -1.0);
                CGContextDrawImage(ctx, CGRectMake(x1+1, y1+1, _cellWidth*f1*2, _cellWidth*f1*2), imgBlackShadow);
                CGContextDrawImage(ctx, CGRectMake(x1, y1, _cellWidth*f1*2, _cellWidth*f1*2), imgBlack);
            }
        }
    }
}
You need to set the content scale of the layer you're using:
CALayer *theLayer = ....;
theLayer.contentsScale = [UIScreen mainScreen].scale;
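For context, a minimal sketch of where that assignment typically goes, assuming a hypothetical custom CALayer subclass named BoardLayer that draws in drawInContext:
BoardLayer *boardLayer = [BoardLayer layer]; // hypothetical CALayer subclass
boardLayer.frame = self.view.bounds;
// match the layer's backing store to the screen scale before it is displayed,
// so drawInContext: renders at Retina resolution instead of being upscaled
boardLayer.contentsScale = [UIScreen mainScreen].scale;
[self.view.layer addSublayer:boardLayer];
[boardLayer setNeedsDisplay];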

Memory issues with CGLayer

I am working with CGLayers for drawing. I have implemented the drawing part; the drawingView (canvas) where the user draws is dynamic. What I mean by this is that the user can increase/decrease the height of the drawingView (canvas).
For example: by default the size is 500*200; when the user clicks on the expand button it becomes 500*300.
So here is my function for when the user expands the canvas:
- (void)IncreaseCanavasSize
{
    CGContextRef layerContext1 = CGLayerGetContext(permanentDrawingLayer);
    CGContextDrawLayerInRect(layerContext1, rectSize, newDrawingLayer);
    rectSize = self.bounds;
    CGFloat scale = self.contentScaleFactor;
    CGRect bounds = CGRectMake(0, 0, self.bounds.size.width * scale, self.bounds.size.height * scale);
    CGLayerRef layer = CGLayerCreateWithContext(layerContext1, bounds.size, NULL);
    CGContextRef layerContext = CGLayerGetContext(layer);
    CGContextScaleCTM(layerContext, scale, scale);
    [self setNewDrawingLayer:layer];
    CGLayerRelease(layer);
    CGContextDrawLayerInRect(layerContext, self.bounds, permanentDrawingLayer);
    permanentDrawingLayer = nil;
}
So let me explain what I am doing in the above code: I create a new layer with the size from before the increase, transfer the previous drawing from permanentDrawingLayer to this newDrawingLayer, and make permanentDrawingLayer nil.
So whenever I draw, I draw into permanentDrawingLayer. Here is my drawRect method:
- (void)drawRect:(CGRect)rect
{
    CGContextRef context = UIGraphicsGetCurrentContext();
    if (permanentDrawingLayer == nil)
    {
        CGFloat scale = self.contentScaleFactor;
        CGRect bounds = CGRectMake(0, 0, self.bounds.size.width * scale, self.bounds.size.height * scale);
        CGLayerRef layer = CGLayerCreateWithContext(context, bounds.size, NULL);
        CGContextRef layerContext = CGLayerGetContext(layer);
        CGContextScaleCTM(layerContext, scale, scale);
        [self setPermanentDrawingLayer:layer];
        CGLayerRelease(layer);
    }
    CGContextRef layerContext = CGLayerGetContext(permanentDrawingLayer);
    CGContextBeginPath(layerContext);
    CGContextAddPath(layerContext, mutablePath);
    CGContextSetLineWidth(layerContext, self.lineWidth);
    CGContextSetLineCap(layerContext, kCGLineCapRound);
    CGContextSetLineJoin(layerContext, kCGLineJoinRound);
    CGContextSetAllowsAntialiasing(layerContext, YES);
    CGContextSetShouldAntialias(layerContext, YES);
    CGContextSetStrokeColorWithColor(layerContext, self.lineColor.CGColor);
    CGContextSetFillColorWithColor(layerContext, self.lineColor.CGColor);
    CGContextSetBlendMode(layerContext, kCGBlendModeNormal);
    CGContextStrokePath(layerContext);
    CGContextDrawLayerInRect(context, rectSize, newDrawingLayer);
    CGContextDrawLayerInRect(context, self.bounds, permanentDrawingLayer);
}
Here you can see I draw newDrawingLayer with rectSize, and permanentDrawingLayer with the new size. So whenever I draw on the canvas after the user has increased the size, newDrawingLayer shows the old drawing, and whatever new drawing the user does goes into permanentDrawingLayer. Hope it is clear.
Now here are my problems:
1) Memory spikes to 10MB when I draw something and then increase the canvas size, so if I keep performing this action you can imagine how fast my app will be terminated due to memory pressure.
2) What I saw was that if I comment out the line permanentDrawingLayer = nil in -IncreaseCanavasSize, memory doesn't spike up; but if I don't do that, then the next time I draw, a layer with the new size will not be created and I will get duplicate drawings.
So I need all your help.
Your permanentDrawingLayer is a CGLayerRef, so setting it to NULL doesn't release it.
You need to call CGLayerRelease(permanentDrawingLayer) before setting it to NULL.
- (void)IncreaseCanavasSize
{
    CGContextRef layerContext1 = CGLayerGetContext(permanentDrawingLayer);
    CGContextDrawLayerInRect(layerContext1, rectSize, newDrawingLayer);
    rectSize = self.bounds;
    CGFloat scale = self.contentScaleFactor;
    CGRect bounds = CGRectMake(0, 0, self.bounds.size.width * scale, self.bounds.size.height * scale);
    CGLayerRef layer = CGLayerCreateWithContext(layerContext1, bounds.size, NULL);
    CGContextRef layerContext = CGLayerGetContext(layer);
    CGContextScaleCTM(layerContext, scale, scale);
    [self setNewDrawingLayer:layer];
    CGLayerRelease(layer);
    CGContextDrawLayerInRect(layerContext, self.bounds, permanentDrawingLayer);
    CGLayerRelease(permanentDrawingLayer);
    permanentDrawingLayer = NULL;
}
Also, if your method -setPermanentDrawingLayer: looks like the method -setCurrentDrawingLayer: from your previous question, then in -IncreaseCanavasSize you can simply replace the line permanentDrawingLayer = nil; with [self setPermanentDrawingLayer:NULL];, which will release the CGLayerRef and set it to NULL.
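For reference, such a setter might look like the following; this is only a sketch, assuming a plain CGLayerRef instance variable backs the property (the original -setCurrentDrawingLayer: is not shown here):
- (void)setPermanentDrawingLayer:(CGLayerRef)layer
{
    if (layer != permanentDrawingLayer) {
        // CGLayerRef is not managed by ARC, so retain/release by hand
        if (layer) {
            CGLayerRetain(layer);
        }
        if (permanentDrawingLayer) {
            CGLayerRelease(permanentDrawingLayer);
        }
        permanentDrawingLayer = layer;
    }
}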

Drawing a pattern along a path

My goal is to take a pattern like this
and draw it repeatedly along a circular path to produce something similar to this image:
I found several code examples in other questions and a full demo project here, but the result is this:
I think the difference between the two images is obvious, but I find it hard to describe (pardon my lack of graphics vocabulary). The result seems to be tiling without the desired rotation/deformation of the pattern. I think I can live with the lack of deformation, but the rotation is key. I think that perhaps the draw callback could/should be modified to include a rotation, but I can't figure out how to retrieve/determine the angle at the point of the callback.
I considered an approach where I manually deformed/rotated the image and drew it several times around a centerpoint to achieve the effect I want, but I believe that CoreGraphics could do it with more efficiency and with less code.
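For what it's worth, a minimal sketch of that manual approach, assuming a square stamp image drawn into the current context (the count and radius are illustrative values):
// Hypothetical helper: stamp an image repeatedly around a center point,
// rotating the context so each stamp faces outward along the circle.
void DrawStampsAroundCenter(CGContextRef ctx, CGImageRef stampImage,
                            CGPoint center, CGFloat radius, int count)
{
    CGFloat stampSize = 2.0 * M_PI * radius / count; // arc length per stamp
    for (int i = 0; i < count; i++) {
        CGContextSaveGState(ctx);
        CGContextTranslateCTM(ctx, center.x, center.y);
        CGContextRotateCTM(ctx, i * 2.0 * M_PI / count);
        // draw the stamp sitting on the circle, centered on its arc segment
        CGContextDrawImage(ctx, CGRectMake(-stampSize / 2.0, radius - stampSize / 2.0, stampSize, stampSize), stampImage);
        CGContextRestoreGState(ctx);
    }
}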
Any suggestions about how to achieve the result I want would be appreciated.
Here is the relevant code from the ChalkCircle project:
const float kPatternWidth = 8;
const float kPatternHeight = 8;

void DrawPatternCellCallback(void *info, CGContextRef cgContext)
{
    UIImage *patternImage = [UIImage imageNamed:@"chalk_brush.png"];
    CGContextDrawImage(cgContext, CGRectMake(0, 0, kPatternWidth, kPatternHeight), patternImage.CGImage);
}

- (void)drawRect:(CGRect)rect {
    float startDeg = 0;  // where to start drawing
    float endDeg = 360;  // where to stop drawing
    int x = self.center.x;
    int y = self.center.y;
    int radius = (self.bounds.size.width > self.bounds.size.height ? self.bounds.size.height : self.bounds.size.width) / 2 * 0.8;
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    const CGRect patternBounds = CGRectMake(0, 0, kPatternWidth, kPatternHeight);
    const CGPatternCallbacks kPatternCallbacks = {0, DrawPatternCellCallback, NULL};
    CGAffineTransform patternTransform = CGAffineTransformIdentity;
    CGPatternRef strokePattern = CGPatternCreate(
        NULL,
        patternBounds,
        patternTransform,
        kPatternWidth,  // horizontal spacing
        kPatternHeight, // vertical spacing
        kCGPatternTilingNoDistortion,
        true,
        &kPatternCallbacks);
    CGFloat color1[] = {1.0, 1.0, 1.0, 1.0};
    CGColorSpaceRef patternSpace = CGColorSpaceCreatePattern(NULL);
    CGContextSetStrokeColorSpace(ctx, patternSpace);
    CGContextSetStrokePattern(ctx, strokePattern, color1);
    CGContextSetLineWidth(ctx, 4.0);
    CGContextMoveToPoint(ctx, x, y - radius);
    CGContextAddArc(ctx, x, y, radius, (startDeg - 90) * M_PI / 180.0, (endDeg - 90) * M_PI / 180.0, 0);
    CGContextClosePath(ctx);
    CGContextDrawPath(ctx, kCGPathStroke);
    CGPatternRelease(strokePattern);
    strokePattern = NULL;
    CGColorSpaceRelease(patternSpace);
    patternSpace = NULL;
}
SOLUTION FROM SAM
I modified sam's solution to handle non-square patterns, center the result, and remove hard-coded numbers by calculating them from the passed-in image:
#define MAX_CIRCLE_DIAMETER 290.0f
#define OVERLAP 1.5f

- (void)drawInCircle:(UIImage *)patternImage
{
    int numberOfImages = 12;
    float diameter = (MAX_CIRCLE_DIAMETER * numberOfImages * patternImage.size.width) / ((2.0 * M_PI * patternImage.size.height) + (numberOfImages * patternImage.size.width));
    //get the radius, circumference and image size
    CGRect replicatorFrame = CGRectMake((320 - diameter) / 2.0f, 60.0f, diameter, diameter);
    float radius = diameter / 2;
    float circumference = M_PI * diameter;
    float imageWidth = circumference / numberOfImages;
    float imageHeight = imageWidth * patternImage.size.height / patternImage.size.width;
    //create a replicator layer and add it to our view
    CAReplicatorLayer *replicator = [CAReplicatorLayer layer];
    replicator.frame = replicatorFrame;
    [self.view.layer addSublayer:replicator];
    //configure the replicator
    replicator.instanceCount = numberOfImages;
    //apply a rotation transform for each instance
    CATransform3D transform = CATransform3DIdentity;
    transform = CATransform3DRotate(transform, M_PI / (numberOfImages / 2), 0, 0, 1);
    replicator.instanceTransform = transform;
    //create a sublayer and place it inside the replicator
    CALayer *layer = [CALayer layer];
    //the frame places the layer in the middle of the replicator layer and on the outside of
    //the replicator layer so that the size is accurate relative to the circumference
    layer.frame = CGRectMake(radius - (imageWidth / 2.0) - (OVERLAP / 2.0), -imageHeight / 2.0, imageWidth + OVERLAP, imageHeight);
    layer.anchorPoint = CGPointMake(0.5, 1);
    [replicator addSublayer:layer];
    //apply a perspective transform to the layer
    CATransform3D perspectiveTransform = CATransform3DIdentity;
    perspectiveTransform.m34 = 1.0f / -radius;
    perspectiveTransform = CATransform3DRotate(perspectiveTransform, M_PI_4, -1, 0, 0);
    layer.transform = perspectiveTransform;
    //set the image as the layer's contents
    layer.contents = (__bridge id)patternImage.CGImage;
}
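A minimal usage sketch, assuming this method lives in a view controller and the pattern image is in the bundle (the image name is a placeholder):
[self drawInCircle:[UIImage imageNamed:@"chalk_brush.png"]];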
Using Core Animation's replicator layer, I managed to create this result:
I think it's close to what you're looking for. In this example all the images are square, with a 3D X rotation applied to each of them.
#import <QuartzCore/QuartzCore.h>

//set the number of images and the diameter (width) of the circle
int numberOfImages = 30;
float diameter = 450.0f;
//get the radius, circumference and image size
float radius = diameter / 2;
float circumference = M_PI * diameter;
float imageSize = circumference / numberOfImages;
//create a replicator layer and add it to our view
CAReplicatorLayer *replicator = [CAReplicatorLayer layer];
replicator.frame = CGRectMake(100.0f, 100.0f, diameter, diameter);
[self.view.layer addSublayer:replicator];
//configure the replicator
replicator.instanceCount = numberOfImages;
//apply a rotation transform for each instance
CATransform3D transform = CATransform3DIdentity;
transform = CATransform3DRotate(transform, M_PI / (numberOfImages / 2), 0, 0, 1);
replicator.instanceTransform = transform;
//create a sublayer and place it inside the replicator
CALayer *layer = [CALayer layer];
//the frame places the layer in the middle of the replicator layer and on the outside of
//the replicator layer so that the size is accurate relative to the circumference
layer.frame = CGRectMake(radius - (imageSize / 2), -imageSize / 2, imageSize, imageSize);
layer.anchorPoint = CGPointMake(0.5, 1);
[replicator addSublayer:layer];
//apply a perspective transform to the layer
CATransform3D perspectiveTransform = CATransform3DIdentity;
perspectiveTransform.m34 = 1.0f / -radius;
perspectiveTransform = CATransform3DRotate(perspectiveTransform, M_PI_4, -1, 0, 0);
layer.transform = perspectiveTransform;
//set the image as the layer's contents
layer.contents = (__bridge id)[UIImage imageNamed:@"WCR3Q"].CGImage;

Getting blurred strokes while drawing in CGLayer

I am working with a drawing app where I draw inside CGLayers and then draw onto the graphics context, but my drawing is getting blurred.
Here is my code
- (void)drawRect:(CGRect)rect
{
    CGContextRef context = UIGraphicsGetCurrentContext(); // get a reference to the current context (the context to draw into)
    NSLog(@"Size1 %@", NSStringFromCGRect(self.bounds));
    if (self.currentDrawingLayer == nil)
    {
        self.currentDrawingLayer = CGLayerCreateWithContext(context, self.bounds.size, NULL);
    }
    CGPoint mid1 = midPoint(m_previousPoint1, m_previousPoint2);
    CGPoint mid2 = midPoint(m_currentPoint, m_previousPoint1);
    CGContextRef layerContext = CGLayerGetContext(self.currentDrawingLayer);
    // UIGraphicsBeginImageContextWithOptions(self.bounds.size, YES, 0.0);
    CGContextSetLineCap(layerContext, kCGLineCapRound);
    CGContextSetBlendMode(layerContext, kCGBlendModeNormal);
    CGContextSetLineJoin(layerContext, kCGLineJoinRound);
    CGContextSetLineWidth(layerContext, self.lineWidth);
    CGContextSetStrokeColorWithColor(layerContext, self.lineColor.CGColor);
    CGContextSetShouldAntialias(layerContext, YES);
    CGContextSetAllowsAntialiasing(layerContext, YES);
    CGContextSetAlpha(layerContext, self.lineAlpha);
    CGContextSetFlatness(layerContext, 1.0f);
    CGContextBeginPath(layerContext);
    CGContextMoveToPoint(layerContext, mid1.x, mid1.y); // position the current point
    CGContextAddQuadCurveToPoint(layerContext, m_previousPoint1.x, m_previousPoint1.y, mid2.x, mid2.y);
    CGContextStrokePath(layerContext); // paints (strokes) the line along the current path
    CGContextDrawLayerInRect(context, self.bounds, self.currentDrawingLayer);
}
I get blurry strokes with this code. According to the docs, when we create a CGLayer with a graphics context it is supposed to match the resolution of the device, so why am I getting blurry lines?
Here is the image of the blurry lines.
Regards,
Ranjit
Substitute this in your code where you create your CGLayer:
if (self.currentDrawingLayer == nil)
{
    CGFloat scale = self.contentScaleFactor;
    CGRect bounds = CGRectMake(0, 0, self.bounds.size.width * scale, self.bounds.size.height * scale);
    CGLayerRef layer = CGLayerCreateWithContext(context, bounds.size, NULL);
    CGContextRef layerContext = CGLayerGetContext(layer);
    CGContextScaleCTM(layerContext, scale, scale);
    self.currentDrawingLayer = layer;
}
