I have an app with two UIImageViews laid over each other: backgroundImg and frontImg, where frontImg can be rotated, moved, and scaled via UIGestureRecognizers.
When I want to save my UIImages, I merge them, but they are saved as if they were never touched.
Does anyone know how to fix this?
This is my saving method:
UIGraphicsBeginImageContext(self.backgroundImg.image.size);
CGRect rect = CGRectMake(0, 0, self.backgroundImg.image.size.width, self.backgroundImg.image.size.height);
self.frontImg.contentMode = UIViewContentModeScaleAspectFit;
[self.backgroundImg.image drawInRect:rect];
[self.frontImg.image drawInRect:rect];
UIImage *resultingImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
//[imageView3 setImage:resultingImage];
UIGraphicsBeginImageContextWithOptions(self.backgroundImg.bounds.size, NO,0.0);
[self.backgroundImg.image drawInRect:CGRectMake(0, 0, self.backgroundImg.frame.size.width, self.backgroundImg.frame.size.height)];
//UIImage *SaveImage = UIGraphicsGetImageFromCurrentImageContext();
//UIImage *resultingImage = [self mergeImage:self.backgroundImg.image withImage:self.frontImg.image];
UIGraphicsEndImageContext();
UIImageWriteToSavedPhotosAlbum(resultingImage, self, @selector(image:didFinishSavingWithError:contextInfo:), nil);
These are my gesture functions:
-(IBAction)handlePan:(UIPanGestureRecognizer *)recognizer{
CGPoint translation = [recognizer translationInView:self.view];
recognizer.view.center = CGPointMake(recognizer.view.center.x + translation.x,
recognizer.view.center.y + translation.y);
[recognizer setTranslation:CGPointMake(0, 0) inView:self.view];
}
-(IBAction)handlePinch:(UIPinchGestureRecognizer *)recognizer
{
recognizer.view.transform = CGAffineTransformScale(recognizer.view.transform, recognizer.scale, recognizer.scale);
recognizer.scale =1;
}
-(BOOL)gestureRecognizer:(UIGestureRecognizer *)gestureRecognizer shouldRecognizeSimultaneouslyWithGestureRecognizer:(UIGestureRecognizer *)otherGestureRecognizer{
return ![gestureRecognizer isKindOfClass:[UIPanGestureRecognizer class]];
}
-(IBAction)handleRotate:(UIRotationGestureRecognizer *)recognizer
{
recognizer.view.transform = CGAffineTransformRotate(recognizer.view.transform, recognizer.rotation);
self.frontImg.transform = CGAffineTransformRotate(recognizer.view.transform, recognizer.rotation);
recognizer.rotation = 0;
}
Your front image isn't being drawn with the coordinates, scale, etc. that it has in the view. You are drawing it with the same rect as the background (0, 0, backgroundWidth, backgroundHeight). You need to make sure that when you draw it into the graphics context you use the proper location, size, and scale.
If both of the images are contained within the same view, you can just screenshot that view. This post should help you out: http://ios-dev-blog.com/how-to-create-uiimage-from-uiview/
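For instance, a minimal sketch of that screenshot approach, assuming both image views share a common container view (canvasView is a name introduced here for illustration):
UIGraphicsBeginImageContextWithOptions(canvasView.bounds.size, NO, 0.0);
// renderInContext: bakes in each subview's current transform (rotation, scale, position)
[canvasView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *merged = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
UIImageWriteToSavedPhotosAlbum(merged, self, @selector(image:didFinishSavingWithError:contextInfo:), nil);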
I am making an app similar to a drawing app, and I want to draw an image at the place the user touches. I can draw the image at the location just fine with this code:
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGRect imageRect = CGRectMake(self.locationOfTouch.x, self.locationOfTouch.y, 50, 50);
CGFloat centerX = self.locationOfTouch.x - (imageRect.size.width/2);
CGFloat centerY = self.locationOfTouch.y - (imageRect.size.height/2);
// To center image on touch loc
imageRect.origin.x = centerX;
imageRect.origin.y = centerY;
UIImage * imageImage = [UIImage imageNamed:@"image.png"];
CGImageRef imageRef = imageImage.CGImage;
CGContextBeginPath(ctx);
CGContextDrawImage(ctx, imageRect, imageRef);
But, whenever I tap again, the image moves to the new spot.
I would like it to "duplicate" every time it was tapped.
How can I do this?
You can try this.
- (void)drawLayer:(CALayer*)layer inContext:(CGContextRef)ctx
{
CGPoint locationPoint = [self.touch locationInView:self];
CGRect imageRect = CGRectMake(locationPoint.x, locationPoint.y, 50, 50);
CGFloat centerX = locationPoint.x - (imageRect.size.width/2);
CGFloat centerY = locationPoint.y - (imageRect.size.height/2);
imageRect.origin.x = centerX;
imageRect.origin.y = centerY;
UIImage * imageImage = [UIImage imageNamed:@"add.png"];
UIImageView *imageView = [[UIImageView alloc ] initWithFrame:imageRect];
imageView.image = imageImage;
[layer addSublayer:imageView.layer];
}
It should work.
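An alternative sketch (not from the original answer): composite each stamp into the image view's image so every earlier tap persists. canvasView is an assumed name; imageRect is the centered rect computed in the question:
UIGraphicsBeginImageContextWithOptions(canvasView.bounds.size, NO, 0.0);
[canvasView.image drawInRect:canvasView.bounds];          // redraw everything stamped so far
[[UIImage imageNamed:@"image.png"] drawInRect:imageRect]; // add the new stamp at the touch point
canvasView.image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();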
After going through this link, the issue with my code is that the output image cannot get proper x and y values: the cropped image behaves as if the origin were (0, 0) no matter where I zoom (or what the scroll offset is calculated to be). Here's what I tried.
- (IBAction)crop:(id)sender
{
float zoomScale = 1.0f / [self.scroll zoomScale];
CGRect rect;
NSLog(#"contentOffset is :%f,%f",[self.scroll contentOffset].x,[self.scroll contentOffset].y);
rect.origin.x = self.scroll.contentOffset.x * zoomScale;
rect.origin.y = self.scroll.contentOffset.y * zoomScale;
rect.size.width = self.scroll.bounds.size.width * zoomScale;
rect.size.height = self.scroll.bounds.size.height * zoomScale;
UIGraphicsBeginImageContextWithOptions( CGSizeMake(rect.size.width, rect.size.height),
NO,
0.);
NSLog(#"rect offset is :%f,%f",rect.origin.x,rect.origin.y);
CGPoint point = CGPointMake(-rect.origin.x, -rect.origin.y); **//even though above NSLog have some values, but output image is unable to set proper x and y values as cropped image seems to have 0 and 0 in the resultant image.**
[[self.imagV image] drawAtPoint:point
blendMode:kCGBlendModeCopy
alpha:1];
self.croppedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
DTImageViewController *imageViewController = [[DTImageViewController alloc] initWithNibName:@"DTImageViewController" bundle:nil];
imageViewController.image = self.croppedImage;
[self.navigationController pushViewController:imageViewController animated:YES];
}
Similar code to what has already been posted, but taking the whole UIScrollView bounds instead of passing a CGRect:
-(void)takeScreenShotOfScrollView
{
UIGraphicsBeginImageContextWithOptions(scrollView.bounds.size, YES, [UIScreen mainScreen].scale);
CGPoint offset = scrollView.contentOffset;
CGContextTranslateCTM(UIGraphicsGetCurrentContext(), -offset.x, -offset.y);
[scrollView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
img = [SDImageHelper imageWithImage:img scaledToSize:CGSizeMake(769, 495)];
}
BONUS:
The scaling method:
+(UIImage*)imageWithImage:(UIImage*)image scaledToSize:(CGSize)newSize
{
//UIGraphicsBeginImageContext(newSize);
// In next line, pass 0.0 to use the current device's pixel scaling factor (and thus account for Retina resolution).
// Pass 1.0 to force exact pixel size.
UIGraphicsBeginImageContextWithOptions(newSize, NO, 0.0);
[image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return newImage;
}
I'm using the following code for this, and it does exactly what's expected:
- (UIImage *) croppedImageOfView:(UIView *) view withFrame:(CGRect) rect
{
UIGraphicsBeginImageContextWithOptions(rect.size,NO,0.0);
CGContextTranslateCTM(UIGraphicsGetCurrentContext(), -rect.origin.x, -rect.origin.y);
[view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *visibleScrollViewImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return visibleScrollViewImage;
}
- (void) crop
{
CGRect neededRect = CGRectMake(50,50,100,100);
UIImage *image = [self croppedImageOfView:_scrollView withFrame:neededRect];
}
This works well even if the content is zoomed; the only thing that must be calculated carefully is the needed-area CGRect.
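For the common case of capturing exactly what is on screen, that rect can be derived from the scroll view's content offset (a sketch using the _scrollView ivar above; renderInContext: draws the content at its current zoomed size, so no extra scaling is needed):
CGRect visibleRect = CGRectMake(_scrollView.contentOffset.x,
                                _scrollView.contentOffset.y,
                                _scrollView.bounds.size.width,
                                _scrollView.bounds.size.height);
UIImage *visibleImage = [self croppedImageOfView:_scrollView withFrame:visibleRect];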
I am developing a simple drawing app using UIKit, based on the idea shared in Ray Wenderlich's tutorial. The difference is that I need to implement a feature where I can zoom/scale into my drawing and draw finer lines. I am able to zoom in using CGAffineTransformScale (with, of course, UIPinchGestureRecognizer) and move around the UIImage using CGAffineTransform. The problem is that once I am zoomed in, there is a huge offset between the UITouch points detected and the actual touch points, and this offset gets bigger as I keep scaling the image.
In the code:
drawingImage - the image view the user interacts with
savingImage - the image view where drawn lines are saved
transform_translate - a CGAffineTransform
lastScale - a CGFloat to save the last zoom scale value
lastPoint - a CGPoint to save the last point of touch
lastPointForPinch - a CGPoint to save the last pinch point
Pinch gesture is initialized in viewDidLoad as -
pinchGestureRecognizer = [[UIPinchGestureRecognizer alloc] initWithTarget:self action:@selector(pinchGestureDetected:)];
[self.drawingImage addGestureRecognizer:pinchGestureRecognizer];
The method for UIPinchGesture detection is -
- (void)pinchGestureDetected:(UIPinchGestureRecognizer *)recognizer
{
if([recognizer state] == UIGestureRecognizerStateBegan) {
// Reset the last scale, necessary if there are multiple objects with different scales
lastScale = [recognizer scale];
lastPointForPinch = [recognizer locationInView:self.drawingImage];
}
if ([recognizer state] == UIGestureRecognizerStateBegan ||
[recognizer state] == UIGestureRecognizerStateChanged) {
CGFloat currentScale = [[[recognizer view].layer valueForKeyPath:@"transform.scale"] floatValue];
// Constants to adjust the max/min values of zoom
const CGFloat kMaxScale = 2.0;
const CGFloat kMinScale = 1.0;
CGFloat newScale = 1 - (lastScale - [recognizer scale]);
newScale = MIN(newScale, kMaxScale / currentScale);
newScale = MAX(newScale, kMinScale / currentScale);
CGAffineTransform transform = CGAffineTransformScale([[recognizer view] transform], newScale, newScale);
self.savingImage.transform = transform;
self.drawingImage.transform=transform;
lastScale = [recognizer scale]; // Store the previous scale factor for the next pinch gesture call
CGPoint point = [recognizer locationInView:self.drawingImage];
transform_translate = CGAffineTransformTranslate([[recognizer view] transform], point.x - lastPointForPinch.x, point.y - lastPointForPinch.y);
self.savingImage.transform = transform_translate;
self.drawingImage.transform=transform_translate;
lastPointForPinch = [recognizer locationInView:self.drawingImage];
}
}
The method for drawing lines (FYI, this is a fairly standard procedure taken from the above-mentioned tutorial; I'm putting it here so that if I made a mistake in it, it can be caught) -
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
mouseSwiped = NO;
UITouch *touch = [touches anyObject];
lastPoint = [touch locationInView:self.drawingImage];
UIGraphicsBeginImageContext(self.savingImage.frame.size);
}
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
mouseSwiped = YES;
UITouch *touch = [touches anyObject];
CGPoint currentPoint = [touch locationInView:self.drawingImage];
CGContextRef ctxt = UIGraphicsGetCurrentContext();
CGContextMoveToPoint(ctxt, lastPoint.x, lastPoint.y);
CGContextAddLineToPoint(ctxt, currentPoint.x, currentPoint.y);
CGContextSetLineCap(ctxt, kCGLineCapRound);
CGContextSetLineWidth(ctxt, brush );
CGContextSetRGBStrokeColor(ctxt, red, green, blue, opacity);
CGContextSetBlendMode(ctxt,kCGBlendModeNormal);
CGContextSetShouldAntialias(ctxt,YES);
CGContextSetAllowsAntialiasing(ctxt, YES);
CGContextStrokePath(ctxt);
self.drawingImage.image = UIGraphicsGetImageFromCurrentImageContext();
lastPoint = currentPoint;
}
- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
if(!mouseSwiped) {
UIGraphicsEndImageContext();
UITouch *touch = [touches anyObject];
CGPoint currentPoint = [touch locationInView:self.drawingImage];
UIGraphicsBeginImageContext(self.drawingImage.frame.size);
CGContextSetLineCap(UIGraphicsGetCurrentContext(), kCGLineCapRound);
CGContextSetLineWidth(UIGraphicsGetCurrentContext(), brush);
CGContextSetRGBStrokeColor(UIGraphicsGetCurrentContext(), red, green, blue, opacity);
CGContextMoveToPoint(UIGraphicsGetCurrentContext(), lastPoint.x, lastPoint.y);
CGContextAddLineToPoint(UIGraphicsGetCurrentContext(), currentPoint.x, currentPoint.y);
CGContextStrokePath(UIGraphicsGetCurrentContext());
self.drawingImage.image = UIGraphicsGetImageFromCurrentImageContext();
[self.drawingImage.image drawInRect:CGRectMake(0, 0, self.drawingImage.frame.size.width, self.drawingImage.frame.size.height) blendMode:kCGBlendModeNormal alpha:1.0];
}
UIGraphicsEndImageContext();
UIGraphicsBeginImageContext(self.savingImage.frame.size);
[self.savingImage.image drawInRect:CGRectMake(0, 0, self.savingImage.frame.size.width, self.savingImage.frame.size.height) blendMode:kCGBlendModeNormal alpha:1.0];
[self.drawingImage.image drawInRect:CGRectMake(0, 0, self.drawingImage.frame.size.width, self.drawingImage.frame.size.height) blendMode:kCGBlendModeNormal alpha:1.0];
self.savingImage.image = UIGraphicsGetImageFromCurrentImageContext();
self.drawingImage.image=nil;
UIGraphicsEndImageContext();
}
I have tried doing CGPointApplyAffineTransform(point, transform_translate), but the huge offset still remains.
I hope my question is explained clearly and that someone can help me; I have been struggling to make progress on this. Thanks in advance.
I finally found the solution... one silly mistake made again and again.
locationInView: needed to take self.view, not the image.
@davidkonard, thanks for the suggestion. Actually, I did not realize that (in the context of a drawing app) the user touches the screen with the intent that the drawing happens exactly at that point, so even if the UIImageView is moved, the user still wants to draw a point/line/whatever under his finger. So locationInView: is supposed to take self.view (and self.view in my case was never transformed).
Hope this explains why I was making the mistake and how I came up with the solution.
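In code, the fix is just a change of coordinate space in the touch handlers (a sketch; only the changed lines are shown):
// In touchesBegan:/touchesMoved:/touchesEnded:, sample the touch in the
// untransformed root view instead of the scaled/translated image view:
lastPoint = [touch locationInView:self.view];            // was: self.drawingImage
CGPoint currentPoint = [touch locationInView:self.view]; // was: self.drawingImage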
I created a RotatedView subclass of UIView, added a UIImageView as a subview of RotatedView, and also drew an image on RotatedView in its drawRect: method, matching the imageView's image. I applied pinch and rotation gestures to the imageView. When I pinch the imageView, the drawn image changes as well, but when I rotate the imageView, the drawn image does not change. I used the following code:
- (id)initWithFrame:(CGRect)frame
{
self = [super initWithFrame:frame];
if (self) {
// Initialization code
self.backgroundColor = [UIColor redColor];
imageView = [[UIImageView alloc] initWithFrame:CGRectMake(80, 150, 100, 150)];
imageView.image = [UIImage imageNamed:@"Images_6.jpg"];
imageView.userInteractionEnabled = YES;
[self addSubview:imageView];
UIRotationGestureRecognizer *rotationGesture = [[UIRotationGestureRecognizer alloc] initWithTarget:self action:@selector(rotationMethod:)];
rotationGesture.delegate = self;
[imageView addGestureRecognizer:rotationGesture];
}
return self;
}
- (void)drawRect:(CGRect)rect
{
// Drawing code
CGContextRef context =UIGraphicsGetCurrentContext();
CGRect rectFrame = CGRectMake(0, 0, imageView.frame.size.width, imageView.frame.size.height);
CGContextSetStrokeColorWithColor(context, [UIColor blueColor].CGColor);
CGContextSetLineWidth(context, 10.0);
CGContextBeginPath(context);
CGContextMoveToPoint(context, 0, 0);
CGContextAddLineToPoint(context, 90, 0);
CGContextAddLineToPoint(context, 90, 90);
CGContextAddLineToPoint(context, 0, 90);
CGContextAddLineToPoint(context, 0, 0);
CGContextClip(context);
CGContextStrokePath(context);
CGContextDrawImage(context,rectFrame, imageView.image.CGImage);
}
-(void)rotationMethod:(UIRotationGestureRecognizer *)gestureRecognizer{
NSLog(#"rotationMethod");
if ([gestureRecognizer state]==UIGestureRecognizerStateBegan || [gestureRecognizer state]==UIGestureRecognizerStateChanged) {
CGAffineTransform transform = CGAffineTransformRotate(gestureRecognizer.view.transform, gestureRecognizer.rotation);
gestureRecognizer.view.transform = transform;
[gestureRecognizer setRotation:0];
}
[self setNeedsDisplay];
}
How do I get the image drawn in the UIView to rotate when the imageView is rotated?
Edited:
I got a solution using the first method. Now I am using a second method; I think it is simpler, but I am not sure which one is best. In the second method the image rotates, but not around its center point. I am struggling to solve this problem. Please help me.
The modified methods are:
- (void)drawRect:(CGRect)rect
{
NSLog(#"drawRect");
// Drawing code
CGContextRef context =UIGraphicsGetCurrentContext();
CGContextSetStrokeColorWithColor(context, [UIColor blueColor].CGColor);
CGContextSetLineWidth(context, 10.0);
CGContextBeginPath(context);
CGContextMoveToPoint(context, imageRect.origin.x, imageRect.origin.y);
CGContextAddLineToPoint(context, imageRect.size.width, imageRect.origin.y);
CGContextAddLineToPoint(context, imageRect.size.width, imageRect.size.height);
CGContextAddLineToPoint(context, imageRect.origin.x, imageRect.size.height);
CGContextAddLineToPoint(context, imageRect.origin.x, imageRect.origin.y);
CGContextClip(context);
CGContextStrokePath(context);
CGContextDrawImage(context, imageRect, [self rotateImage:imageView.image].CGImage);
}
-(void)rotationMethod:(UIRotationGestureRecognizer *)gestureRecognizer{
NSLog(#"rotationMethod");
if ([gestureRecognizer state]==UIGestureRecognizerStateBegan || [gestureRecognizer state]==UIGestureRecognizerStateChanged) {
CGAffineTransform transform = CGAffineTransformRotate(gestureRecognizer.view.transform, gestureRecognizer.rotation);
gestureRecognizer.view.transform = transform;
lastRotation = gestureRecognizer.rotation;
[gestureRecognizer setRotation:0];
}
[self setNeedsDisplay];
}
- (UIImage *)rotateImage:(UIImage *) img
{
NSLog(#"rotateImage");
CGAffineTransform transform = imageView.transform;
transform = CGAffineTransformTranslate(transform, img.size.width, img.size.height);
transform = CGAffineTransformRotate(transform, lastRotation);
transform = CGAffineTransformScale(transform, -1, -1);
CGContextRef ctx = CGBitmapContextCreate(NULL, img.size.width, img.size.height,
CGImageGetBitsPerComponent(img.CGImage), 0,
CGImageGetColorSpace(img.CGImage),
CGImageGetBitmapInfo(img.CGImage));
CGContextConcatCTM(ctx, transform);
CGContextDrawImage(ctx, CGRectMake(20,20,img.size.width,img.size.height), img.CGImage);
CGImageRef cgimg = CGBitmapContextCreateImage(ctx);
UIImage *newImg = [UIImage imageWithCGImage:cgimg];
CGContextRelease(ctx);
CGImageRelease(cgimg);
return newImg;
}
You need to use CGContextScaleCTM and CGContextRotateCTM (and possibly further CTM transforms) so that your CGContextRef matches your UIImageView's transform.
Take a look at the Quartz 2D Programming Guide
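For example, a minimal sketch of that idea inside the question's first drawRect: (totalRotation is an assumed ivar accumulating the gesture's rotation; rectFrame is the rect from the question):
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSaveGState(context);
// Rotate the context about the center of the image rect, mirroring the image view's transform:
CGContextTranslateCTM(context, CGRectGetMidX(rectFrame), CGRectGetMidY(rectFrame));
CGContextRotateCTM(context, totalRotation);
CGContextTranslateCTM(context, -CGRectGetMidX(rectFrame), -CGRectGetMidY(rectFrame));
CGContextDrawImage(context, rectFrame, imageView.image.CGImage);
CGContextRestoreGState(context);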
OK, I finally got it using ANImageBitmapRep. I declared lastRotation, angle, and rotateMe in the RotatedView.h class.
-(void)rotationMethod:(UIRotationGestureRecognizer *)gestureRecognizer{
if ([gestureRecognizer state]==UIGestureRecognizerStateBegan) {
if (!rotateMe) {
rotateMe = [[ANImageBitmapRep alloc] initWithImage:displayImageView.image];
}else{
rotateMe =[ANImageBitmapRep imageBitmapRepWithImage:displayImageView.image];
}
}
CGFloat rotation = 0.0 - (lastRotation - [gestureRecognizer rotation]);
CGAffineTransform currentTransform = [gestureRecognizer view].transform;
CGAffineTransform newTransform = CGAffineTransformRotate(currentTransform,rotation);
[[gestureRecognizer view] setTransform:newTransform];
lastRotation = [gestureRecognizer rotation];
angle =lastRotation * (180 / M_PI);
[self setNeedsDisplay];
}
- (void)drawRect:(CGRect)rect
{
CGRect imageRect = CGRectMake(0, 0, 90, 90);
// Drawing code
CGContextRef context =UIGraphicsGetCurrentContext();
CGContextSetStrokeColorWithColor(context, [UIColor blueColor].CGColor);
CGContextSetLineWidth(context, 10.0);
CGContextBeginPath(context);
CGContextMoveToPoint(context, imageRect.origin.x, imageRect.origin.y);
CGContextAddLineToPoint(context, imageRect.size.width, imageRect.origin.y);
CGContextAddLineToPoint(context, imageRect.size.width, imageRect.size.height);
CGContextAddLineToPoint(context, imageRect.origin.x, imageRect.size.height);
CGContextAddLineToPoint(context, imageRect.origin.x, imageRect.origin.y);
CGContextClip(context);
CGContextStrokePath(context);
UIImage *rotatedImage1=[UIImage imageWithCGImage:[rotateMe imageByRotating:angle]];
CGContextDrawImage(context, imageRect, rotatedImage1.CGImage);
}
I found this solution. If anyone finds a better solution, please give me suggestions for this problem.
I have a UIImageView displaying a picture.
I registered a pinch gesture to zoom in/out and a pan gesture to draw on it (I have a good reason to use the pan gesture for that instead of touchesMoved).
Here is the pinch target:
- (void)pinchGestureActionOnSubject:(UIPinchGestureRecognizer *)pinch
{
if([(UIPinchGestureRecognizer*)pinch state] == UIGestureRecognizerStateBegan)
{
_lastScale = 1.0;
}
CGFloat scale = 1.0 - (_lastScale - [(UIPinchGestureRecognizer*)pinch scale]);
CGAffineTransform currentTransform = self.imageViewSubject.transform;
CGAffineTransform newTransform = CGAffineTransformScale(currentTransform, scale, scale);
[self.imageViewSubject setTransform:newTransform];
_lastScale = [(UIPinchGestureRecognizer*)pinch scale];
}
and here is the pan target:
-(void) panOnSubjectForDrawing:(id)sender
{
if([sender numberOfTouches] > 0)
{
CGPoint currentPoint = [sender locationOfTouch:0 inView:self.imageViewSubject];
if(lastPoint.x == 0 || lastPoint.y == 0)
{
lastPoint.x = currentPoint.x;
lastPoint.y = currentPoint.y;
}
UIGraphicsBeginImageContext(self.imageViewSubject.bounds.size);
CGContextRef context = UIGraphicsGetCurrentContext();
[self.imageViewSubject.image drawInRect:CGRectMake(0, 0, self.imageViewSubject.bounds.size.width, self.imageViewSubject.bounds.size.height) blendMode:kCGBlendModeNormal alpha:1.0];
CGContextSetLineCap(context, kCGLineCapRound);
CGContextSetLineWidth(context, 10); // stroke width
CGContextSetBlendMode(UIGraphicsGetCurrentContext(), kCGBlendModeClear); // Transparent
CGContextMoveToPoint(context, lastPoint.x, lastPoint.y);
CGContextAddLineToPoint(context, currentPoint.x, currentPoint.y);
CGContextStrokePath(context);
self.imageViewSubject.image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
lastPoint = currentPoint;
}
}
The problem is that if I draw after I zoom in, the picture gets blurry, but if I draw without zooming, the image stays intact.
I heard that this can happen because my frame has to be integral, so I added:
self.imageViewSubject.frame = CGRectIntegral(self.imageViewSubject.frame);
before I draw, but it is worse: much, much more blurry.
Any idea?
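One thing worth checking (an assumption on my part, not something confirmed in this thread): UIGraphicsBeginImageContext creates a context at scale 1.0, so every draw-and-read-back cycle resamples the picture at non-Retina resolution, and zooming then magnifies that loss. The WithOptions variant preserves the device scale:
// Hypothetical replacement for UIGraphicsBeginImageContext(...) in panOnSubjectForDrawing:.
// Passing 0.0 as the scale uses the screen's scale factor, so repeated redraws stay sharp.
UIGraphicsBeginImageContextWithOptions(self.imageViewSubject.bounds.size, NO, 0.0);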