UIImageView image crop based on UIView mask - iOS

So I have a canvas (a UIView) and a UIImageView; the canvas acts as a mask over the image view.
I am using UIGestureRecognizers to zoom and rotate the UIImageView, which is under the canvas.
I want to convert the final image (as shown in the canvas) to a UIImage. One solution is to render the canvas into an image, like below:
UIGraphicsBeginImageContext(self.canvas.bounds.size);
[self.canvas.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *newCombinedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Now this works fine, but the problem with this solution is that the image is cropped to the dimensions of the canvas, so the resolution is very low.
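(As an aside, one small mitigation, my own suggestion rather than part of the original post, is to snapshot with UIGraphicsBeginImageContextWithOptions and a scale of 0, which renders at the device's screen scale instead of 1 pixel per point; the output is still limited to the canvas geometry, though.)
// Hedged sketch: scale = 0.0 means "use the screen scale of the device"
UIGraphicsBeginImageContextWithOptions(self.canvas.bounds.size, NO, 0.0);
[self.canvas.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *newCombinedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();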
Another option I explored was to use some custom UIImage categories to rotate and scale:
[[[self.photoImage image] imageRotatedByDegrees:rotation_angle]
imageAtRect:CGRectMake(x, y, width, height)]
I need to provide a rotation angle (the rotation value reported by the gesture recognizer is in radians, so it needs converting to degrees for this category). Then there are x, y, width and height, which I imagine need to be calculated based on some scale (I do get scale values from the gesture delegate, but they do not seem to be correct for this function).
There are a number of solutions here that guide you through cropping an image given a rect, but in my case the rect is not at the same scale as the image, and there is also rotation involved.
Any help will be appreciated.

I have managed to solve this. Here is my solution; it most definitely isn't the cleanest, but it works.
I needed to handle three things: pan, zoom and rotate.
Firstly, I used the UIGestureRecognizer delegate methods to keep cumulative values for all three, updating the running totals at UIGestureRecognizerStateEnded.
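For illustration, the accumulation looks something like this (a minimal sketch; the handler names and the total_rotation/total_scale ivars are my own, hypothetical):
- (void)handleRotation:(UIRotationGestureRecognizer *)gesture
{
    if (gesture.state == UIGestureRecognizerStateEnded) {
        // UIRotationGestureRecognizer reports rotation in radians
        total_rotation += gesture.rotation;
    }
}
- (void)handlePinch:(UIPinchGestureRecognizer *)gesture
{
    if (gesture.state == UIGestureRecognizerStateEnded) {
        // Pinch scale is multiplicative, so multiply into the running total
        total_scale *= gesture.scale;
    }
}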
Then for rotation I just used the UIImage category discussed here:
self.imagetoEdit = [self.imagetoEdit imageRotatedByRadians:total_rotation];
For zooming (scale) I used GPUImage (I am using it throughout the app):
GPUImageTransformFilter *scaleFilter = [[GPUImageTransformFilter alloc] init];
[scaleFilter setAffineTransform:CGAffineTransformMakeScale(total_scale, total_scale)];
[scaleFilter prepareForImageCapture];
self.imagetoEdit = [scaleFilter imageByFilteringImage:self.imagetoEdit];
For panning, I'm doing this (not the cleanest code :S), also using the above-mentioned UIImage categories:
// Map the pan translation from canvas coordinates into image coordinates
CGFloat x_ = (translation_point.x / canvas.frame.size.width) * self.imagetoEdit.size.width;
CGFloat y_ = (translation_point.y / canvas.frame.size.height) * self.imagetoEdit.size.height;
CGFloat xx = 0;
CGFloat yy = 0;
CGFloat ww = self.imagetoEdit.size.width - x_;
CGFloat hh = self.imagetoEdit.size.height - y_;
// For a negative translation (panning left/up), shift the crop origin instead
if (translation_point.x < 0) {
    xx = x_ * -1;
    ww = self.imagetoEdit.size.width + xx;
}
if (translation_point.y < 0) {
    yy = y_ * -1;
    hh = self.imagetoEdit.size.height + yy;
}
CGRect cgrect = CGRectMake(xx, yy, ww, hh);
self.imagetoEdit = [self.imagetoEdit imageAtRect:cgrect];
Everything seems to work.

This might be helpful...
Resizing a UIImage the right way
It might need some updating for ARC etc., though I think there are people who have done that and posted it on GitHub.

Related

UIImageView height and width changes after rotation

Hey guys, I've been trying to use two UISliders to manipulate one UIImageView: one slider to scale the image and one to rotate it. However, when I switch from one slider to the other it resets the previous transformation. (So if I scale the image to, say, 125x125 when it's normally 100x100 and then go to use the other slider to rotate the image, it rotates it but changes its size back to 100x100, and vice versa.) So anyway, I've tried using CGAffineTransformConcat to combine the two transformations, I've tried setting the frame of the image view after every transformation, I've tried deleting and re-adding the image view to _myArray to "save" it, and I briefly tried using anchor points, but all with no luck. So my question is, what am I doing wrong? I feel like my code should work and each action shouldn't reset my UIImageView, but I have no idea why it won't.
- (IBAction)scaleImage:(UISlider *)sender {
    NSLog(@"ScaleImage Called");
    UIImageView *selectedImage = [_myArray objectAtIndex:0];
    CGAffineTransform transform = CGAffineTransformScale(CGAffineTransformIdentity, sender.value, sender.value);
    selectedImage.transform = transform;
    _scaleValue = sender.value;
    //_width = selectedImage.frame.size.width
    //_height = selectedImage.frame.size.height
    _sizeLabel.text = [NSString stringWithFormat:@"%.2f", _scaleValue];
}
- (IBAction)rotateImage:(UISlider *)sender {
    UIImageView *selectedImage = [_myArray objectAtIndex:0];
    //selectedImage.frame = CGRectMake(selectedImage.frame.origin.x, selectedImage.frame.origin.y, _width, _height);
    selectedImage.transform = CGAffineTransformMakeRotation(sender.value * 2*M_PI / sender.maximumValue);
    _rotationValue = sender.value;
    _rotationLabel.text = [NSString stringWithFormat:@"%.2f", _rotationValue];
}
Does anyone know why each method resets the other's previous transformation?
If you look at your code, you will notice that you are explicitly setting the transform to a new value built from the identity transform:
CGAffineTransform transform = CGAffineTransformScale(CGAffineTransformIdentity, sender.value, sender.value);
What you will want to do is apply the two together: you could save a rotation value and a scale value as properties, have the IBActions update those, and then set the transform as a concatenation of the two.
So, the scale would look like:
self.scaleValue = sender.value;
And the rotation would look like:
self.rotateValue = sender.value * 2*M_PI / sender.maximumValue;
And you would have another method called transform
-(void)transform
{
    CGAffineTransform fullTransform = CGAffineTransformMakeRotation(self.rotateValue);
    // CGAffineTransformScale returns a new transform; the result must be kept
    fullTransform = CGAffineTransformScale(fullTransform, self.scaleValue, self.scaleValue);
    UIImageView *selectedImage = [_myArray objectAtIndex:0];
    selectedImage.transform = fullTransform;
}
You would then call that transform method at the end of each IBAction instead of trying to set the transforms separately.
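For example, the two actions would reduce to something like this (a minimal sketch of the suggestion above):
- (IBAction)scaleImage:(UISlider *)sender {
    self.scaleValue = sender.value;
    [self transform];
}
- (IBAction)rotateImage:(UISlider *)sender {
    self.rotateValue = sender.value * 2*M_PI / sender.maximumValue;
    [self transform];
}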

Rotating a view using another view's center as the anchor point

I have two image views. The first is the blueish arrow, and the second is the white circle, with a black dot drawn to represent the center of the circle.
I'm trying to rotate the arrow so that its anchor point is the black dot in the picture, like this:
Right now I'm setting the anchor point of the arrow's layer to a point calculated like this:
CGFloat y = _userImageViewContainer.center.y - CGRectGetMinY(_directionArrowView.frame);
CGFloat x = _userImageViewContainer.center.x - CGRectGetMinX(_directionArrowView.frame);
CGFloat yOff = y / CGRectGetHeight(_directionArrowView.frame);
CGFloat xOff = x / CGRectGetWidth(_directionArrowView.frame);
_directionArrowView.center = _userImageViewContainer.center;
CGPoint anchor = CGPointMake(xOff, yOff);
NSLog(#"anchor: %#", NSStringFromCGPoint(anchor));
_directionArrowView.layer.anchorPoint = anchor;
Since the anchor point is set as a fraction of the view, i.e. the coords for the center are (.5, .5), I'm calculating the fraction of the arrow's frame at which the black dot falls. But my math, even after working it out by hand, keeps resulting in .5, which isn't right, because the dot is further than halfway down when the arrow is in its original position (vertical, with the point up).
I'm rotating based on the user's compass heading:
CLHeading *heading = [notif object];
// update direction of arrow
CGFloat degrees = [self p_calculateAngleBetween:[PULAccount currentUser].location.coordinate
and:_user.location.coordinate];
_directionArrowView.transform = CGAffineTransformMakeRotation((degrees - heading.trueHeading) * M_PI / 180);
The rotation is correct; it's just the anchor point that's not working right. Any ideas on how to accomplish this?
I've always found the anchor point stuff flaky, especially with rotation. I'd try something like this:
CGPoint convertedCenter = [_directionArrowView convertPoint:_userImageViewContainer.center fromView:_userImageViewContainer ];
CGSize offset = CGSizeMake(_directionArrowView.center.x - convertedCenter.x, _directionArrowView.center.y - convertedCenter.y);
// I may have that backwards, try the one below if it offsets the rotation in the wrong direction..
// CGSize offset = CGSizeMake(convertedCenter.x -_directionArrowView.center.x , convertedCenter.y - _directionArrowView.center.y);
CGFloat rotation = 0; //get your angle (radians)
CGAffineTransform tr = CGAffineTransformMakeTranslation(-offset.width, -offset.height);
tr = CGAffineTransformConcat(tr, CGAffineTransformMakeRotation(rotation) );
tr = CGAffineTransformConcat(tr, CGAffineTransformMakeTranslation(offset.width, offset.height) );
[_directionArrowView setTransform:tr];
NB: the transform property on UIView is animatable, so you could put that last line in an animation block if desired.
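For instance (just illustrating that note):
[UIView animateWithDuration:0.25 animations:^{
    [_directionArrowView setTransform:tr];
}];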
Maybe a much easier solution would be to make the arrow image bigger, and square, so that the black point sits at the center of the image.
Please compare the attached images and you will see what I'm talking about:
New image with black dot in center
Old image with shifted dot
Now you can easily use the standard anchor point (0.5, 0.5) to rotate the edited image.
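In code that reduces to the default behaviour (a two-line sketch; angle stands in for your computed rotation):
_directionArrowView.layer.anchorPoint = CGPointMake(0.5, 0.5); // the default
_directionArrowView.transform = CGAffineTransformMakeRotation(angle);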

Parenting views in Xcode

Currently I am trying to implement the following: while making a compass, I would like to draw arrows (circles) at set locations and rotate them around my view to display on the compass.
I can use the storyboard to create image views and the like, and parent them with one another.
I am now trying to do this in code, so that when a new location is received the program can display the new point on the compass.
I have already worked out the code to rotate around.
Ideally my flow of code should be as follows:
For i = 1 to 5:
    Draw empty square view[i]
    Draw a circle and position it within square[i] at coordinate (x, y) (pretty much at the north point of the compass)
    Parent the circle to square[i]
    Rotate square[i] to x degrees
Next i
My question is: how do I programmatically draw these views, and how do I parent them so that I can rotate one with the other around a fixed point?
Thanks.
This is not an exact answer, but it may help you. Just play with the value of i (the loop index).
-(void)rotateOn360Degree
{
    double radius = 30;
    for (int i = 1; i <= 360; i++)
    {
        // Point on a circle of the given radius at angle i (degrees converted to radians)
        double x = radius * cos((i * M_PI) / 180);
        double y = radius * sin((i * M_PI) / 180);
        UIView *tmpView = [[UIView alloc] init];
        [tmpView setBackgroundColor:[UIColor greenColor]];
        [tmpView.layer setCornerRadius:5];
        tmpView.frame = CGRectMake(x, y, 10, 10);
        [self.view addSubview:tmpView];
    }
}
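To address the parenting part of the question, here is a minimal sketch of the square-plus-circle approach the question describes (the compassView property and the 72-degree spacing are my own assumptions, not from the original):
- (void)addCompassPoints
{
    for (int i = 0; i < 5; i++) {
        // The square container, covering the whole compass
        UIView *square = [[UIView alloc] initWithFrame:self.compassView.bounds];
        square.backgroundColor = [UIColor clearColor];

        // The circle, parented to the square at its "north" edge
        UIView *circle = [[UIView alloc] initWithFrame:CGRectMake(0, 0, 10, 10)];
        circle.layer.cornerRadius = 5;
        circle.backgroundColor = [UIColor greenColor];
        circle.center = CGPointMake(CGRectGetMidX(square.bounds), 5);
        [square addSubview:circle];

        // Rotating the parent square carries the child circle with it,
        // pivoting around the square's center (the compass center)
        square.transform = CGAffineTransformMakeRotation(i * 72.0 * M_PI / 180.0);
        [self.compassView addSubview:square];
    }
}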

CGAffineTransform MakeScale resulting in black screen?

Trying to flip a UIImageView by scaling it with -1.0, but it results in a black screen. Scaling with 1.0, 1.0 shows results as expected.
Here's my code:
UIImageView *vImg = [[UIImageView alloc] initWithImage:finalImg];
CGAffineTransform transform = CGAffineTransformMakeScale(-1.0, 1.0);
[vImg setTransform:transform];
What am I missing?
A quick all-in-one solution exists in UIImage:
- (id)initWithCGImage:(CGImageRef)imageRef scale:(CGFloat)scale orientation:(UIImageOrientation)orientation;
with one of the UIImageOrientation*Mirrored orientations.
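For example (a sketch based on the question's finalImg; UIImageOrientationUpMirrored flips horizontally):
// Build a mirrored UIImage instead of transforming the view
UIImage *flipped = [[UIImage alloc] initWithCGImage:finalImg.CGImage
                                              scale:finalImg.scale
                                        orientation:UIImageOrientationUpMirrored];
UIImageView *vImg = [[UIImageView alloc] initWithImage:flipped];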
EDIT: IIRC the prepended translation is only required for low-level drawing (e.g. in drawRect:) because of the flipped coordinate system, so the scale transform alone should be fine. Are you sure the bug doesn't lie elsewhere?

AVFoundation photo size and rotation

I'm having a nightmare of a time trying to correct a photo taken with AVFoundation's captureStillImageAsynchronouslyFromConnection: so that its size and orientation match exactly what is shown on screen.
I show the AVCaptureVideoPreviewLayer with this code to make sure it displays the correct way up at all rotations:
previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:self.captureSession];
[previewLayer setVideoGravity:AVLayerVideoGravityResizeAspectFill];
previewLayer.frame = CGRectMake(0, 0, self.view.bounds.size.width, self.view.bounds.size.height);
if ([[previewLayer connection] isVideoOrientationSupported])
{
    [[previewLayer connection] setVideoOrientation:(AVCaptureVideoOrientation)[UIApplication sharedApplication].statusBarOrientation];
}
[self.view.layer insertSublayer:previewLayer atIndex:0];
Now when I have the returned image it needs cropping, as it's much bigger than what was displayed.
I know there are loads of UIImage cropping examples, but the first hurdle I have is finding the correct CGRect to use. When I simply crop to self.view.frame, the image is cropped at the wrong location.
The preview is using AVLayerVideoGravityResizeAspectFill, and I have my UIImageView also set to aspect fill.
So how can I get the correct frame that AVFoundation is displaying on screen from the preview layer?
EDIT ----
Here's an example of the problem I'm facing. Using the front camera of an iPad Mini, the camera captures at a resolution of 720x1280, but the display is 768x1024. The view displays this (see the dado rail at the top of the image):
Then when I take the image and display it, it looks like this:
Obviously the camera feed was centred in the view, but the cropped image is taken from the top (unseen) section of the photo.
I'm working on a similar project right now and thought I might be able to help, if you haven't already figured this out.
the first hurdle I seem to have is finding the correct CGRect to use. When I simply crop to self.view.frame the image is cropped at the wrong location.
Let's say your image is 720x1280 and you want your image to be cropped to the rectangle of your display, which is a CGRect of size 768x1024. You can't just pass a rectangle of size 768x1024. First, your image isn't 768 pixels wide. Second, you need to specify the placement of that rectangle with respect to the image (i.e. by specifying the rectangle's origin point). In your example, self.view.frame is a CGRect that has an origin of (0, 0). That's why it's always cropping from the top of your image rather than from the center.
Calculating the cropping rectangle is a bit tricky because you have a few different coordinate systems.
You've got your view controller's view, which has...
...a video preview layer as a sublayer, which is displaying an aspect-filled image, but...
...the AVCaptureOutput returns a UIImage that not only has a different width/height than the video preview, but it also has a different aspect ratio.
So because your preview layer is displaying a centered and cropped preview image (i.e. aspect fill), what you basically want to find is the CGRect that:
Has the same aspect ratio as self.view.bounds
Has the same smaller dimension as the smaller dimension of the UIImage (i.e. aspect fit)
Is centered in the UIImage
So something like this:
// Determine the width:height ratio of the crop rect, based on self.view.bounds
CGFloat widthToHeightRatio = self.view.bounds.size.width / self.view.bounds.size.height;
CGRect cropRect;
// Set the crop rect's smaller dimension to match the image's smaller dimension, and
// scale its other dimension according to the width:height ratio.
if (image.size.width < image.size.height) {
    cropRect.size.width = image.size.width;
    cropRect.size.height = cropRect.size.width / widthToHeightRatio;
} else {
    cropRect.size.width = image.size.height * widthToHeightRatio;
    cropRect.size.height = image.size.height;
}
// Center the rect in the longer dimension
if (cropRect.size.width < cropRect.size.height) {
    cropRect.origin.x = 0;
    cropRect.origin.y = (image.size.height - cropRect.size.height) / 2.0;
} else {
    cropRect.origin.x = (image.size.width - cropRect.size.width) / 2.0;
    cropRect.origin.y = 0;
}
So finally, to go back to your original example where the image is 720x1280 and you want it cropped to the rectangle of your 768x1024 display, you will end up with a CGRect of size 720x960, with an origin of x = 0, y = (1280 - 960) / 2 = 160.
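Once you have cropRect, the actual crop can be done with Core Graphics (a minimal sketch that assumes the image's orientation is up; a production version would also account for UIImageOrientation):
// Crop by extracting only the cropRect region of the backing CGImage
CGImageRef croppedRef = CGImageCreateWithImageInRect(image.CGImage, cropRect);
UIImage *croppedImage = [UIImage imageWithCGImage:croppedRef
                                            scale:image.scale
                                      orientation:image.imageOrientation];
CGImageRelease(croppedRef);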
