Get Position of UIImageView - iOS

So I know about image.center, but when I do something like this:
image.frame = CGRectMake(image.center.x, image.center.y, image.frame.size.width, image.frame.size.height);
The image moves down and right. I believe this is happening because it is getting the x and y coordinates of the center of the image, but is there a way to get the top left coordinates so that the above code doesn't move the image?

If you want to set the stick's frame to be the frame of image, the easiest way is the following:
stick.frame = image.frame;
Just for your information, what you were originally looking for is frame.origin, a CGPoint containing the x and y origin values:
stick.frame = CGRectMake(image.frame.origin.x, image.frame.origin.y, image.frame.size.width, image.frame.size.height);
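The relationship is simple arithmetic: the origin is the center minus half the size. A quick sketch of that relationship (plain Python for illustration, not UIKit code):

```python
def origin_from_center(cx, cy, w, h):
    """Top-left corner (frame.origin) of a rect given its center and size."""
    return (cx - w / 2, cy - h / 2)

# A 100x50 view centered at (60, 40) has its top-left corner at (10, 15).
print(origin_from_center(60, 40, 100, 50))  # (10.0, 15.0)
```

Passing center.x and center.y as the first two arguments of CGRectMake sets the origin to the center, which shifts the view down and right by half its size, which is exactly the movement the question describes.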


Swift: How to convert CGPoints from SpriteKit to CGRect?

In SpriteKit, I can use touch locations to record "hits" on a target, where the center of the target, the bull's eye, has the coordinates (0,0). After plenty of shooting, I fetch all hits as an array of CGPoints. Since the target is 500 x 500 points (SKScene, .sks file), each hit can have an x position from -250 to +250, and likewise for the y position.
In the attached photo, the hits are registered as points at around (150, 150).
The problem arises when I will use the famous LFHeatMap https://github.com/gpolak/LFHeatMap.
+ (UIImage *)heatMapWithRect:(CGRect)rect
boost:(float)boost
points:(NSArray *)points
weights:(NSArray *)weights;
LFHeatMap generates a UIImage based on the array, which I add to a UIImageView. The problem is that a UIView arranges its x and y values differently from an SKScene.
func setHeatMap() {
    let points = getPointsFromCoreData()
    let weights = getWeightsFromCoreData()
    var rect = CGRectMake(0, 0, 500, 500)
    rect.origin = CGPointMake(-250, -250)
    let image = LFHeatMap.heatMapWithRect(rect, boost: 1, points: points, weights: weights)
    heatMapView.contentMode = UIViewContentMode.ScaleAspectFit
    heatMapView.image = image
}
Shots that hit lower on the target show up higher in the heat map.
How can I solve this? Either all the points have to be converted to another coordinate system, or the coordinates of the CGRect backing the heat map must be changed. How can this be done?
This was embarrassingly easy once the solution occurred to me.
Run a loop through the points array and multiply each point.y by -1...
Then all the values on the y-axis are correct.
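The fix (negating each y) can be sketched like this (plain Python, not SpriteKit; the helper name is made up):

```python
def spritekit_to_uikit(points):
    """Flip the y-axis: SpriteKit's y grows upward, UIKit's grows downward."""
    return [(x, -y) for x, y in points]

hits = [(150, 150), (-30, 80)]
print(spritekit_to_uikit(hits))  # [(150, -150), (-30, -80)]
```

The x values are untouched because both coordinate systems agree on the horizontal axis; only the vertical direction differs.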

Evenly cropping an image from both sides using ImageMagick .NET

I am using Magick.NET (ImageMagick for .NET) to crop and resize images. The problem is that the library only crops from the bottom of the image. Isn't there any way to crop it evenly from both top and bottom, or left and right?
Edited question:
MagickGeometry size = new MagickGeometry(width, height);
size.IgnoreAspectRatio = maintainAspectRatio;
imgStream.Crop(size);
Crop will always use the specified width and height in Magick.NET/ImageMagick so there is no need to set size.IgnoreAspectRatio. If you want to cut out a specific area in the center of your image you should use another overload of Crop that also has a Gravity as an argument:
imgStream.Crop(width, height, Gravity.Center);
If the size variable is an instance of MagickGeometry, then there should be X & Y offset properties. I'm not familiar with .NET, but I would imagine it would be something like...
MagickGeometry size = new MagickGeometry(width, height);
size.IgnoreAspectRatio = maintainAspectRatio;
// Adjust geometry offset to center of image (same as `-gravity Center`)
size.Y = imgStream.Height / 2 - height / 2;
size.X = imgStream.Width / 2 - width / 2;
imgStream.Crop(size);
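The offset arithmetic in that answer is the same thing `-gravity Center` computes; a quick sketch with concrete numbers (Python, hypothetical helper name):

```python
def center_crop_offsets(img_w, img_h, crop_w, crop_h):
    """X/Y offsets that center a crop_w x crop_h window in an img_w x img_h image."""
    return (img_w // 2 - crop_w // 2, img_h // 2 - crop_h // 2)

# Cropping a 100x80 image to 40x20 starts the window 30px from the left
# and 30px from the top, trimming both sides evenly.
print(center_crop_offsets(100, 80, 40, 20))  # (30, 30)
```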

Get x,y from Top, Left, Width and Height for Corona Objects

Good day
I just started using Corona and I'm kind of confused by the x and y properties. Is it possible to get the x and y values from the Top, Left, Width and Height properties if these are provided? For example, I want an object to be at Left=10, Top=0, Width=40 and Height=40. Can someone please advise how I can do this? This could be for images, text, text fields, etc.
Of course. There are several methods to do this.
Example 1:
local myImage = display.newImageRect( "homeBg.png", 40, 40)
myImage.anchorX = 0; myImage.anchorY = 0
myImage.x = 10 -- Left gap
myImage.y = 0 -- Top gap
localGroup:insert(myImage)
Here, setting the anchor points to (0,0) moves your image's reference point from its geometric center to its top-left corner.
Example 2:
local myImage = display.newImageRect( "homeBg.png", 40, 40)
myImage.x = (myImage.contentWidth/2) + 10
myImage.y = (myImage.contentHeight/2)
localGroup:insert(myImage)
Here, the center-X position of your image is calculated by adding the left gap to half the image's width, and the center-Y position by adding the top gap to half the image's height.
You can position objects with any of these methods. If you are a beginner in Corona, the following topics will help you learn more about displaying objects with a specific size, position, etc.
Corona SDK : display.newImageRect()
Tutorial: Anchor Points in Graphics 2.0
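Both examples boil down to the same conversion between top-left and center coordinates; a language-neutral sketch (Python, not Corona API):

```python
def center_from_top_left(left, top, width, height):
    """Center x/y for an object given its top-left corner and size."""
    return (left + width / 2, top + height / 2)

# Left=10, Top=0, Width=40, Height=40 from the question:
print(center_from_top_left(10, 0, 40, 40))  # (30.0, 20.0)
```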
Corona uses something like a Cartesian coordinate system, but (0,0) is at the TOP LEFT; you can read more here:
https://docs.coronalabs.com/guide/graphics/group.html#coordinates
BUT: you can position an image at the screen edges, based on its width and height, using the code below.
NOTE: replace the file name and dimensions with your own image's.
local image = display.newImageRect("images/yourImage.png", width, height)
-- Common Corona screen metrics (define these if you haven't already):
local screenW, screenH = display.contentWidth, display.contentHeight
local screenOffsetW = display.contentWidth - display.viewableContentWidth
--TOP:
image.y = math.floor(display.screenOriginY + image.height*0.5)
--BOTTOM:
image.y = math.floor(screenH - display.screenOriginY) - image.height*.5
--LEFT:
image.x = (screenOffsetW*.5) + image.width*.5
--RIGHT:
image.x = math.floor(screenW - screenOffsetW*.5) - image.width*.5
Corona SDK display objects have attributes that can be read or set:
X = myObject.x -- gets the current center (by default) of myObject
width = myObject.width
You can set these values too....
myObject.x = 100 -- positions the center of the object 100px from the left of the content area.
By default, Corona SDK display objects are positioned by their center, unless you change the anchor point:
myObject.anchorX = 0
myObject.anchorY = 0
myObject.x = 100
myObject.y = 100
By setting the anchors to 0, .x and .y refer to the top left of the object.

Determine if crop rect is entirely contained within rotated UIView

Premise: I'm building a cropping tool that handles two-finger arbitrary rotation of an image as well as arbitrary cropping.
Sometimes the image ends up rotated in a way that empty space is inserted to fill a gap between the rotated image and the crop rect (see the examples below).
I need to ensure that the image view, when rotated, fits entirely into the cropping rectangle. If it doesn't, I then need to re-transform the image (zoom it) so that it fits into the crop bounds.
Using this answer, I've implemented the ability to check whether a rotated UIImageView intersects with the cropping CGRect, but unfortunately that doesn't tell me if the crop rect is entirely contained in the rotated imageview. Hoping that I can make some easy modifications to this answer?
A visual example of OK:
and not OK, that which I need to detect and deal with:
Update: non-working method
- (BOOL)rotatedView:(UIView*)rotatedView containsViewCompletely:(UIView*)containedView {
CGRect rotatedBounds = rotatedView.bounds;
CGPoint polyContainedView[4];
polyContainedView[0] = [containedView convertPoint:rotatedBounds.origin toView:rotatedView];
polyContainedView[1] = [containedView convertPoint:CGPointMake(rotatedBounds.origin.x + rotatedBounds.size.width, rotatedBounds.origin.y) toView:rotatedView];
polyContainedView[2] = [containedView convertPoint:CGPointMake(rotatedBounds.origin.x + rotatedBounds.size.width, rotatedBounds.origin.y + rotatedBounds.size.height) toView:rotatedView];
polyContainedView[3] = [containedView convertPoint:CGPointMake(rotatedBounds.origin.x, rotatedBounds.origin.y + rotatedBounds.size.height) toView:rotatedView];
if (CGRectContainsPoint(rotatedView.bounds, polyContainedView[0]) &&
CGRectContainsPoint(rotatedView.bounds, polyContainedView[1]) &&
CGRectContainsPoint(rotatedView.bounds, polyContainedView[2]) &&
CGRectContainsPoint(rotatedView.bounds, polyContainedView[3]))
return YES;
else
return NO;
}
That should be easier than checking for intersection (as in the referenced thread).
The (rotated) image view is a convex quadrilateral. Therefore it suffices to check
that all 4 corner points of the crop rectangle are within the rotated image view.
Use [cropView convertPoint:point toView:imageView] to convert the corner points of the crop rectangle to the coordinate system of the
(rotated) image view.
Use CGRectContainsPoint() to check that the 4 converted corner points are within the bounds rectangle of the image view.
Sample code:
- (BOOL)rotatedView:(UIView *)rotatedView containsCompletely:(UIView *)cropView {
CGPoint cropRotated[4];
CGRect rotatedBounds = rotatedView.bounds;
CGRect cropBounds = cropView.bounds;
// Convert corner points of cropView to the coordinate system of rotatedView:
cropRotated[0] = [cropView convertPoint:cropBounds.origin toView:rotatedView];
cropRotated[1] = [cropView convertPoint:CGPointMake(cropBounds.origin.x + cropBounds.size.width, cropBounds.origin.y) toView:rotatedView];
cropRotated[2] = [cropView convertPoint:CGPointMake(cropBounds.origin.x + cropBounds.size.width, cropBounds.origin.y + cropBounds.size.height) toView:rotatedView];
cropRotated[3] = [cropView convertPoint:CGPointMake(cropBounds.origin.x, cropBounds.origin.y + cropBounds.size.height) toView:rotatedView];
// Check if all converted points are within the bounds of rotatedView:
return (CGRectContainsPoint(rotatedBounds, cropRotated[0]) &&
CGRectContainsPoint(rotatedBounds, cropRotated[1]) &&
CGRectContainsPoint(rotatedBounds, cropRotated[2]) &&
CGRectContainsPoint(rotatedBounds, cropRotated[3]));
}
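The same check can be expressed framework-free: rotate each crop corner into the image view's own coordinate system and test it against the axis-aligned bounds. A sketch (Python; the function names and the explicit-angle parameterization are mine, whereas convertPoint:toView: performs this transform for you):

```python
import math

def rotated_rect_contains(cx, cy, w, h, theta, px, py):
    """True if (px, py) lies inside the w x h rect centered at (cx, cy)
    and rotated by theta radians."""
    # Undo the rotation: map the point into the rect's own coordinates.
    dx, dy = px - cx, py - cy
    c, s = math.cos(-theta), math.sin(-theta)
    rx, ry = dx * c - dy * s, dx * s + dy * c
    return abs(rx) <= w / 2 and abs(ry) <= h / 2

def rotated_view_contains_rect(cx, cy, w, h, theta, crop):
    """All four corners of crop = (x, y, width, height) must be inside,
    because the rotated view is convex."""
    x, y, cw, ch = crop
    corners = [(x, y), (x + cw, y), (x + cw, y + ch), (x, y + ch)]
    return all(rotated_rect_contains(cx, cy, w, h, theta, px, py)
               for px, py in corners)

# A 200x200 image view centered at (100, 100), rotated 45 degrees, still
# contains a centered 100x100 crop rect, but not the full 200x200 one.
print(rotated_view_contains_rect(100, 100, 200, 200, math.pi / 4, (50, 50, 100, 100)))  # True
print(rotated_view_contains_rect(100, 100, 200, 200, math.pi / 4, (0, 0, 200, 200)))    # False
```

When the second call returns False, the cropper would need to zoom the image until all four corners pass the test again.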

Positioning a view after transform rotation

I'm creating a custom popover background, so I subclassed the UIPopoverBackgroundView abstract class. While writing the layout function I came across a problem with placing and rotating the arrow for the background.
The first picture shows the arrow at the desired position. In the following code I calculated the origin I wanted, but the rotation seems to have translated the new position of the image off to the side by about 11 points. As you can see, I created a hack solution where I shift the arrow over 11 points. But that still doesn't cover up the fact that I have a gaping hole in my math skills. If someone would be so kind as to explain what's going on here, I'd be eternally grateful. What would also be nice is a solution that doesn't involve magic numbers, so that I could apply it to the other arrow cases (up, down, and left).
#define ARROW_BASE_WIDTH 42.0
#define ARROW_HEIGHT 22.0
case UIPopoverArrowDirectionRight:
{
width -= ARROW_HEIGHT;
float arrowCenterY = self.frame.size.height/2 - ARROW_HEIGHT/2 + self.arrowOffset;
_arrowView.frame = CGRectMake(width,
arrowCenterY,
ARROW_BASE_WIDTH,
ARROW_HEIGHT);
rotation = CGAffineTransformMakeRotation(M_PI_2);
//rotation = CGAffineTransformTranslate(rotation, 0, 11);
_borderImageView.frame = CGRectMake(left, top, width, height);
[_arrowView setTransform:rotation];
}
break;
Well, if the rotation is applied about the center of the arrow view (as it is), that leaves a gap of (ARROW_BASE_WIDTH - ARROW_HEIGHT) / 2 to the post-rotation left of the arrow, which is what you have to compensate for, it seems. By offsetting the center of the arrow view by this much, it should come back into alignment.
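Plugging in the constants from the question confirms that formula (a quick numeric check, not iOS code):

```python
ARROW_BASE_WIDTH = 42.0
ARROW_HEIGHT = 22.0

# Rotation about the view's center keeps the frame's center fixed, so the
# post-rotation misalignment on each side is half the dimension difference.
offset = (ARROW_BASE_WIDTH - ARROW_HEIGHT) / 2
print(offset)  # 10.0 -- close to the ~11-point shift found by trial and error
```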
