Premise: I'm building a cropping tool that handles two-finger arbitrary rotation of an image as well as arbitrary cropping.
Sometimes the image ends up rotated in a way that empty space is inserted to fill a gap between the rotated image and the crop rect (see the examples below).
I need to ensure that the image view, when rotated, fits entirely into the cropping rectangle. If it doesn't, I then need to re-transform the image (zoom it) so that it fits into the crop bounds.
Using this answer, I've implemented a check for whether a rotated UIImageView intersects the cropping CGRect, but unfortunately that doesn't tell me whether the crop rect is entirely contained in the rotated image view. I'm hoping some easy modifications to that answer will get me there.
A visual example of OK:
and not OK, that which I need to detect and deal with:
Update: here is the method that isn't working:
- (BOOL)rotatedView:(UIView *)rotatedView containsViewCompletely:(UIView *)containedView {
    CGRect rotatedBounds = rotatedView.bounds;
    CGPoint polyContainedView[4];

    // Note the bug: these lines convert rotatedView's own bounds corners
    // (treated as points in containedView's coordinate system) instead of
    // converting containedView's corners, so the wrong quadrilateral is tested.
    polyContainedView[0] = [containedView convertPoint:rotatedBounds.origin toView:rotatedView];
    polyContainedView[1] = [containedView convertPoint:CGPointMake(rotatedBounds.origin.x + rotatedBounds.size.width, rotatedBounds.origin.y) toView:rotatedView];
    polyContainedView[2] = [containedView convertPoint:CGPointMake(rotatedBounds.origin.x + rotatedBounds.size.width, rotatedBounds.origin.y + rotatedBounds.size.height) toView:rotatedView];
    polyContainedView[3] = [containedView convertPoint:CGPointMake(rotatedBounds.origin.x, rotatedBounds.origin.y + rotatedBounds.size.height) toView:rotatedView];

    if (CGRectContainsPoint(rotatedView.bounds, polyContainedView[0]) &&
        CGRectContainsPoint(rotatedView.bounds, polyContainedView[1]) &&
        CGRectContainsPoint(rotatedView.bounds, polyContainedView[2]) &&
        CGRectContainsPoint(rotatedView.bounds, polyContainedView[3]))
        return YES;
    else
        return NO;
}
That should be easier than checking for intersection (as in the referenced thread). The (rotated) image view is a convex quadrilateral, so it suffices to check that all 4 corner points of the crop rectangle lie within the rotated image view.
Use [cropView convertPoint:point toView:imageView] to convert the corner points of the crop rectangle to the coordinate system of the (rotated) image view.
Then use CGRectContainsPoint() to check that the 4 converted corner points are within the bounds rectangle of the image view.
Sample code:
- (BOOL)rotatedView:(UIView *)rotatedView containsCompletely:(UIView *)cropView {
    CGPoint cropRotated[4];
    CGRect rotatedBounds = rotatedView.bounds;
    CGRect cropBounds = cropView.bounds;

    // Convert corner points of cropView to the coordinate system of rotatedView:
    cropRotated[0] = [cropView convertPoint:cropBounds.origin toView:rotatedView];
    cropRotated[1] = [cropView convertPoint:CGPointMake(cropBounds.origin.x + cropBounds.size.width, cropBounds.origin.y) toView:rotatedView];
    cropRotated[2] = [cropView convertPoint:CGPointMake(cropBounds.origin.x + cropBounds.size.width, cropBounds.origin.y + cropBounds.size.height) toView:rotatedView];
    cropRotated[3] = [cropView convertPoint:CGPointMake(cropBounds.origin.x, cropBounds.origin.y + cropBounds.size.height) toView:rotatedView];

    // Check if all converted points are within the bounds of rotatedView:
    return (CGRectContainsPoint(rotatedBounds, cropRotated[0]) &&
            CGRectContainsPoint(rotatedBounds, cropRotated[1]) &&
            CGRectContainsPoint(rotatedBounds, cropRotated[2]) &&
            CGRectContainsPoint(rotatedBounds, cropRotated[3]));
}
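A minimal usage sketch, assuming hypothetical imageView and cropView properties and a zoom helper of your own:
// Hypothetical usage: re-run the containment check whenever the
// transform changes; if it fails, zoom the image back into compliance.
if (![self rotatedView:self.imageView containsCompletely:self.cropView]) {
    // The crop rect pokes outside the rotated image, so scale the image
    // up until the crop rect is covered again (placeholder helper).
    [self zoomImageViewToCoverCropRect];
}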
I'm doing an operation where I need to replace a bigger rectangle in an image with a smaller one.
Most answers suggested smallerRectMat.copyTo(biggerRectMat), but it didn't give me the required output: the submat is changed, but the original image stays as it is.
And when I inspect the two submats afterwards, both have become identical, at the smaller rectangle's size.
Mat rectNose = testBuffer.submat(rectA.y,rectA.y+rectA.height,rectA.x,rectC.x+rectC.width);
Rect biggerRect = getHeadContour(testBuffer);
Mat rectHead = testBuffer.submat(biggerRect.y+1,biggerRect.y+biggerRect.height,biggerRect.x+1,biggerRect.x+biggerRect.width);
rectNose.copyTo(rectHead);
Imgcodecs.imwrite("/Users/test.jpg",rectHead);
Imgcodecs.imwrite("/Users/test1.jpg",rectNose);
Imgcodecs.imwrite("/Users/test1.jpg",testBuffer);
Basically I want to copy the rectangle near the nose region to the rectangle with blue boundary at forehead.
You can try ROI (region of interest) scaling. For the record, copyTo didn't change the original because copying into a submat of a different size makes OpenCV reallocate the destination Mat, detaching it from testBuffer; that's also why both submats ended up identical. Resize the source region to the destination's size first, then assign it (Python shown; Imgproc.resize does the same in Java):
smallRect = img[rectA.y:rectA.y+rectA.height, rectA.x:rectC.x+rectC.width]
upscaledRegion = cv2.resize(smallRect, (biggerRect.width, biggerRect.height), interpolation=cv2.INTER_AREA)
img[biggerRect.y:biggerRect.y+biggerRect.height, biggerRect.x:biggerRect.x+biggerRect.width] = upscaledRegion
I currently have a large map that goes off the screen; because of this, its coordinate system is very different from my other nodes'. This has led me to a problem: I need to generate a random CGPoint within the bounds of this map, and then, if that point is on-screen, place a visible node there. However, the check for whether or not the node is on screen continuously fails.
I'm checking if the node is in frame with the following code: CGRectContainsPoint(self.frame, values) (with values being the random CGPoint I generated). This is where my problem comes in: the coordinate system of the frame is completely different from the coordinate system of the map.
For example, in the picture below, the ball with the arrows pointing to it is at (479, 402) in the scene's coordinates, but at (9691, 9753) in the map's coordinates.
I determined the coordinates using the touchesBegan event, for those who are wondering. So basically, how do I convert the map's coordinate system to one that works for the frame?
As seen below, the dot is obviously in the frame, yet CGRectContainsPoint always fails. I've tried scene.convertPoint(position, fromNode: map), but it didn't work.
Edit: (to clarify some things)
My view hierarchy looks something like this:
The map node goes off screen and is about 10,000x10,000 in size (I have it as a scrolling-type map). Its origin (0,0) is in the bottom-left corner, where the map starts, meaning the origin is off-screen. In the picture above, I'm near the top-right part of the map. I'm generating a random CGPoint with the following code (passing it the map's frame) as an extension to CGPoint:
static func randPoint(within: CGRect) -> CGPoint {
    var point = within.origin
    point.x += CGFloat(arc4random() % UInt32(within.size.width))
    point.y += CGFloat(arc4random() % UInt32(within.size.height))
    return point
}
I then have the following code, called in didMoveToView (note that I'm applying this to nodes I'm generating; I left that code out), where values is the random position:
let values = CGPoint.randPoint(map.totalFrame)
if !CGRectContainsPoint(self.frame, convertPointToView(scene!.convertPoint(values, fromNode: map))) {
    color = UIColor.clearColor()
}
This is meant to make nodes that are off screen invisible (since the user can scroll the map background). But the condition is always true, so all nodes become invisible, even though some are indeed within the frame (as seen in the picture above, where I commented out the clear-color code).
If I understand your question correctly, you have an SKScene that contains an SKSpriteNode that is larger than the scene's view, and that you are randomly generating coordinates within that sprite's coordinate system that you want to map to the view.
You're on the right track with SKNode's convertPoint(_:fromNode:) (where your scene is the SKNode and your map is the fromNode). That should get you from the generated map coordinate to the scene coordinate. Next, convert that coordinate to the view's coordinate system using your scene's convertPointToView(_:). The point will be out of bounds if it is not in view.
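Put together, a minimal sketch of that chain (Objective-C; self is the SKScene, and the map property name is illustrative). Note that the containment test uses the view's bounds, not the scene's frame:
CGPoint scenePoint = [self convertPoint:randomMapPoint fromNode:self.map]; // map coords -> scene coords
CGPoint viewPoint = [self convertPointToView:scenePoint];                  // scene coords -> view coords
BOOL onScreen = CGRectContainsPoint(self.view.bounds, viewPoint);          // test against the view, not the scene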
Using a worldNode that contains a playerNode, with the camera centered on the player node, you can check whether objects are on or off screen with this code:
float left = player.position.x - 700;
float right = player.position.x + 700;
float up = player.position.y + 450;
float down = player.position.y - 450;

if ((object.position.x > left) && (object.position.x < right) && (object.position.y > down) && (object.position.y < up)) {
    if ((object.parent == nil) && (object.dead == false)) {
        [worldNode addChild:object];
    }
} else {
    if (object.parent != nil) {
        [object removeFromParent];
    }
}
The numbers I used above are static. You can also make them dynamic:
CGRect screenRect = [[UIScreen mainScreen] bounds];
CGFloat screenWidth = screenRect.size.width;
CGFloat screenHeight = screenRect.size.height;
Divide the screenWidth by 2 for left and right; do the same with the screenHeight for up and down.
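A sketch of that dynamic variant, reusing the player from above:
// Replace the hard-coded 700/450 margins with half the screen size:
float left = player.position.x - screenWidth / 2;
float right = player.position.x + screenWidth / 2;
float up = player.position.y + screenHeight / 2;
float down = player.position.y - screenHeight / 2;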
Update: partially working implementation below.
I've asked a couple of questions on this previously, here and here.
The first works great to determine whether the "image" rect is sufficiently contained inside the "crop" rect.
The second works a little bit, but something's off in my implementation, so it doesn't really work.
I'm now looking at the problem a little differently, and would like to change the behavior:
When the user begins to rotate the image, I'll run the check method (below) to determine if it needs fixing or not.
If it does need fixing, rather than waiting until the user has finished rotating it, I'd like to resize the image simultaneously to fit the bounds. Is there a simpler (or more reliable) way to implement this behavior?
I'm going to block rotation greater than 35° in either direction so that we don't have to worry about severe enlargements.
Assumptions/Constraints:
I'm using AutoLayout
Point of rotation will be the center of the crop rect, which may or may not be the center of the image rect.
This demonstrates it working with a square crop, but the user can resize it to whatever, so I imagine it's going to bite me in the ass even more so when it's not square.
Code:
- (BOOL)rotatedView:(UIView *)rotatedView containsViewCompletely:(UIView *)cropView {
    // If this method returns YES, good! If NO, bad!
    CGPoint cropRotated[4];
    CGRect rotatedBounds = rotatedView.bounds;
    CGRect cropBounds = cropView.bounds;

    // Convert corner points of cropView to the coordinate system of rotatedView:
    cropRotated[0] = [cropView convertPoint:cropBounds.origin toView:rotatedView];
    cropRotated[1] = [cropView convertPoint:CGPointMake(cropBounds.origin.x + cropBounds.size.width, cropBounds.origin.y) toView:rotatedView];
    cropRotated[2] = [cropView convertPoint:CGPointMake(cropBounds.origin.x + cropBounds.size.width, cropBounds.origin.y + cropBounds.size.height) toView:rotatedView];
    cropRotated[3] = [cropView convertPoint:CGPointMake(cropBounds.origin.x, cropBounds.origin.y + cropBounds.size.height) toView:rotatedView];

    // Check if all converted points are within the bounds of rotatedView:
    return (CGRectContainsPoint(rotatedBounds, cropRotated[0]) &&
            CGRectContainsPoint(rotatedBounds, cropRotated[1]) &&
            CGRectContainsPoint(rotatedBounds, cropRotated[2]) &&
            CGRectContainsPoint(rotatedBounds, cropRotated[3]));
}
Taking yet another spin on this, I'm getting there. But as you can see in the .gif, the calculations eventually get out of whack, because the rotation delta really isn't what I should be using to calculate the new size. How can I implement this with the correct geometry to ensure the image always resizes correctly? To save time, I've put this into an Xcode project so you don't have to fiddle around building your own: https://github.com/r3mus/RotationCGRectFix.git
- (IBAction)gestureRecognized:(UIRotationGestureRecognizer *)gesture {
    CGFloat maxRotation = 40;
    CGFloat rotation = gesture.rotation;
    CGFloat currentRotation = atan2f(_imageView.transform.b, _imageView.transform.a);
    NSLog(@"%0.4f", RADIANS_TO_DEGREES(rotation));

    if ((currentRotation > DEGREES_TO_RADIANS(maxRotation) && rotation > 0) || (currentRotation < DEGREES_TO_RADIANS(-maxRotation) && rotation < 0)) {
        return;
    }

    CGAffineTransform rotationTransform = CGAffineTransformRotate(self.imageView.transform, rotation);
    gesture.rotation = 0.0f;

    if (gesture.state == UIGestureRecognizerStateChanged) {
        CGFloat scale = sqrt(_imageView.transform.a * _imageView.transform.a + _imageView.transform.c * _imageView.transform.c);
        if ((currentRotation > 0 && rotation > 0) || (currentRotation < 0 && rotation < 0))
            scale = 1 + fabs(rotation);
        else if (currentRotation == 0)
            scale = 1;
        else
            scale = 1 - fabs(rotation);
        CGAffineTransform sizeTransform = CGAffineTransformMakeScale(scale, scale);
        CGPoint center = _imageView.center;
        _imageView.transform = CGAffineTransformConcat(rotationTransform, sizeTransform);
        _imageView.center = center;
    }
}
Gif:
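For what it's worth, the scale this geometry calls for can be computed directly from the crop rect's bounding box in the image's frame, instead of accumulating per-gesture rotation deltas. A hedged sketch, assuming the crop center coincides with the image center (when it doesn't, the required scale is larger):
// Minimum uniform scale at which an image of size imageSize, rotated by
// angle radians about a center shared with the crop rect, still covers cropSize.
CGFloat MinScaleToCover(CGSize imageSize, CGSize cropSize, CGFloat angle) {
    // Axis-aligned bounding box of the rotated crop rect in the image's frame:
    CGFloat requiredWidth = cropSize.width * fabs(cos(angle)) + cropSize.height * fabs(sin(angle));
    CGFloat requiredHeight = cropSize.width * fabs(sin(angle)) + cropSize.height * fabs(cos(angle));
    return MAX(requiredWidth / imageSize.width, requiredHeight / imageSize.height);
}
Applying this absolute scale on each gesture change, rather than incrementing by fabs(rotation), should avoid the drift visible in the gif.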
So I know about image.center, but when I do something like this:
image.frame = CGRectMake(image.center.x, image.center.y, image.frame.size.width, image.frame.size.height);
The image moves down and right. I believe this is happening because it is getting the x and y coordinates of the center of the image, but is there a way to get the top left coordinates so that the above code doesn't move the image?
If you want to set the stick's frame to be the frame of image, the easiest way is the following:
stick.frame = image.frame;
Just for your information, what you were originally looking for is frame.origin, which is a CGPoint containing the x and y origin values:
stick.frame = CGRectMake(image.frame.origin.x, image.frame.origin.y, image.frame.size.width, image.frame.size.height);
I have a ZBarReaderView embedded in a viewController and I would like to limit the area of the scan to a square in the middle of the view. I have set the resolution of the camera to 1280x720. Assuming the device is in portrait mode, I calculate the normalized coordinates using only the cameraViewBounds (the readerView's bounds) and the overlayViewFrame (the orange box's frame), as seen in this screenshot: http://i.imgur.com/xzUDHIh.png. Here is what I have so far:
self.cameraView.session.sessionPreset = AVCaptureSessionPreset1280x720;
//Set CameraView's Frame to fit the SideController's Width
CGPoint origin = self.cameraView.frame.origin;
float cameraWidth = self.view.frame.size.width;
self.cameraView.frame = CGRectMake(origin.x, origin.y, cameraWidth, cameraWidth);
//Set CameraView's Cropped Scanning Rect
CGFloat x,y,width,height;
CGRect cameraViewBounds = self.cameraView.bounds;
CGRect overlayViewFrame = self.overlay.frame;
y = overlayViewFrame.origin.y / cameraViewBounds.size.height;
x = overlayViewFrame.origin.x / cameraViewBounds.size.width;
height = overlayViewFrame.size.height / cameraViewBounds.size.height;
width = overlayViewFrame.size.width / cameraViewBounds.size.width;
self.cameraView.scanCrop = CGRectMake(x, y, width, height);
NSLog(@"\n\nScan Crop:\n%@", NSStringFromCGRect(self.cameraView.scanCrop));
As you can see in the screenshot, the blue box is the scanCrop rect, what I want to be able to do is have that blue box match the orange box. Do I need to factor in the resolution of the image or the image size when calculating the normalized values for the scanCrop?
I cannot figure out what spadix is explaining in this comment on sourceforge:
"So, assuming your sample rectangle represents screen points in portrait orientation and using the default 640x480 camera resolution, your scanCrop rectangle would be {75/w, 38/320., 128/w, 244/320.}, where w=480/(320*640.) is the major dimension of the image in screen points." and here:
"assuming your coordinates refer to points in portrait orientation, using the default camera image size, located at the default location and taking into account that the camera image is rotated:
scanCrop = CGRectMake(50/426., 1-(20+250)/320., 150/426., 250/320.)"
He uses the values 426 and 320, which I assume have to do with the image size, but in one of his comments he mentions that the resolution is 640x480. How do I factor in the image size to calculate the correct rect for scanCrop?
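For what it's worth, 426 is most likely the camera image's major dimension expressed in screen points for the default 640x480 resolution: 640 x (320/480) ≈ 426.7, where 320 is the portrait screen width in points. Below is a hedged sketch of how the axis swap might look for the square preview in the question (same names as above; it ignores any aspect-fill cropping of the 16:9 frame inside the square view, which would need an extra offset):
// scanCrop is normalized to the camera image, which is rotated 90 degrees
// relative to the portrait view: view-Y maps to image-X, and view-X maps to
// image-Y with a flip. (An assumption based on spadix's comments above.)
CGRect overlayFrame = self.overlay.frame;
CGRect cameraBounds = self.cameraView.bounds;
CGFloat imgX = overlayFrame.origin.y / cameraBounds.size.height;
CGFloat imgY = 1.0f - (overlayFrame.origin.x + overlayFrame.size.width) / cameraBounds.size.width;
CGFloat imgW = overlayFrame.size.height / cameraBounds.size.height;
CGFloat imgH = overlayFrame.size.width / cameraBounds.size.width;
self.cameraView.scanCrop = CGRectMake(imgX, imgY, imgW, imgH);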