I have 2 UIImageViews, both with the same aspect ratio. The first UIImageView is always smaller than or equal to the second.
On the first UIImageView I have a UIView that the user can place anywhere on it.
I want to find the x,y of a second UIView on the second UIImageView so that it sits in the same proportional position as the first UIView.
Please look at this image for clarity. Note that Rectangle 1 is the first UIImageView and Rectangle 2 is the second UIImageView.
What I've tried:
CGFloat xMultiplier = imageView2.frame.size.width / imageView1.frame.size.width;
CGFloat yMultiplier = imageView2.frame.size.height / imageView1.frame.size.height;
CGFloat view2x = view1.frame.origin.x * xMultiplier;
CGFloat view2y = view1.frame.origin.y * yMultiplier;
My application lets users choose stickers (UIViews) and place them anywhere on their photo. The sticker view on the second image does not end up in the same place as the sticker view on the first image.
The resulting x,y from my code is close to where it should be, but it's not exactly right.
What am I missing?
Thanks!
First you should get the factor (which you are already doing correctly), but then you have to apply it proportionally to a point in view1. For example:
CGFloat yFactor = view1.frame.size.height / view2.frame.size.height;
CGFloat xFactor = view1.frame.size.width / view2.frame.size.width;
CGPoint pointInView2 = CGPointMake(pointInView1.x / xFactor,pointInView1.y / yFactor);
pointInView1 is the origin of the sticker in view1.
If sticker2 is a subview of view2, then its origin will be pointInView2.
If sticker2 is not a subview of view2 and they share the same superview, the origin will be
CGPointMake(pointInView2.x + view2.frame.origin.x, pointInView2.y + view2.frame.origin.y);
If there is any other hierarchy, you can use UIView's convertPoint: methods, e.g.
[view2 convertPoint:pointInView2 toView:sticker2.superview];
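For reference, here is a minimal Swift sketch of the same idea, assuming a hypothetical sticker1 placed on imageView1 and a second sticker that will be a sibling of imageView2 (the names are placeholders, not from the question):

func mappedOrigin(of sticker1: UIView,
                  from imageView1: UIImageView,
                  to imageView2: UIImageView) -> CGPoint {
    // Scale factors between the two image views (view2 / view1).
    let xMultiplier = imageView2.bounds.width / imageView1.bounds.width
    let yMultiplier = imageView2.bounds.height / imageView1.bounds.height
    // Scale the sticker's origin, which is expressed in imageView1's coordinate space.
    let pointInView2 = CGPoint(x: sticker1.frame.origin.x * xMultiplier,
                               y: sticker1.frame.origin.y * yMultiplier)
    // Convert into the coordinate space of imageView2's superview, which is
    // what you need when the second sticker is a sibling of imageView2.
    return imageView2.convert(pointInView2, to: imageView2.superview)
}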
Assuming that rectangle 2 is merely a rescaled version of rectangle 1, the position of view 2 is:
xpos_view2 = xpos_view1 / width_Rect1 * width_Rect2
ypos_view2 = ypos_view1 / height_Rect1 * height_Rect2
view2.frame.origin = CGPoint(x: xpos_view2, y: ypos_view2)
Calculate the ratio from the smallest iPhone screen size (320×480 points):
func ratioW(_ ofSize: CGFloat) -> CGFloat {
    // Scale a width designed for a 320-point-wide screen to the current screen width.
    let windowWidth = UIScreen.main.bounds.size.width
    return windowWidth / (320 / ofSize)
}

func ratioH(_ ofSize: CGFloat) -> CGFloat {
    // Scale a height designed for a 480-point-tall screen to the current screen height.
    let windowHeight = UIScreen.main.bounds.size.height
    return windowHeight / (480 / ofSize)
}
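A hedged usage sketch (someButton and all the numeric design values below are made up): sizing a control that was laid out for a 320×480 design so it scales with the actual device:

let someButton = UIButton(type: .system)
// 16/40 offsets and 100×44 size are hypothetical values from a 320×480 layout.
someButton.frame = CGRect(x: ratioW(16), y: ratioH(40),
                          width: ratioW(100), height: ratioH(44))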
I have an imageView whose frame is (47.0, 415.66666666666674, 320.0, 320.0); its y origin is 415.
lazy var imageView: UIImageView = {
    let imgView = UIImageView()
    // ...
    imgView.contentMode = .scaleAspectFit
    return imgView
}()
To get the rect of the image inside that imageView I use:
view.layoutIfNeeded()
let imageRect = AVMakeRect(aspectRatio: imageView.image!.size, insideRect: imageView.bounds)
print(imageRect) // (-47.0, -358.2753623188407, 320.0, 205.21739130434787)
When I try to convert the imageRect to find out where it is inside the window's coordinate system, I get:
let frameInWindow = imageView.convert(imageRect, from: nil)
print(frameInWindow) // (-47.0, -358.2753623188407, 320.0, 205.21739130434787)
I have 2 questions:
1- Why are the x and y values coming up as negative (-47 and -358)?
To get a positive y value I use let yPosition = abs(frameInWindow.origin.y), which gives me a yPosition of 358; however, that is above the imageView's y position.
2- Why is the yPosition not something like 425, or wherever it actually is? Considering the center of the image is set at the center of the imageView, and the image is smaller than the imageView, its origin.y should be below 415 (the imageView's y value inside the window).
When you say imageView.convert(imageRect, from: nil) you are converting the imageRect from the window's coordinates to the image view's coordinates. But the imageRect was not in the window's coordinates, so that makes no sense.
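If the goal is the image rect in window coordinates, the conversion should go the other way: to: instead of from:. A minimal sketch using the question's own variables:

// imageRect is in imageView's coordinate space, so convert *to* the
// window; passing nil targets the window's coordinate space.
let frameInWindow = imageView.convert(imageRect, to: nil)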
I need the actual pixel position, not the position with respect to the UIImageView's frame: the actual pixel position on the UIImage.
A UIPanGestureRecognizer gives the location in the UIImageView, so it is of no use here.
I can multiply the x and y by the scale, but the UIImage's scale is always 0.
I need to crop a circular area from a UIImage, blur it, and place it back at exactly the same position.
Flow:
1. Crop a circular area from the UIImage using: CGImageCreateWithImageInRect
2. Round-rect the image using: [[UIBezierPath bezierPathWithRoundedRect:
3. Blur the round-rect image using CIGaussianBlur
4. Place the round-rect blurred image at the x,y position
In the first step I need the actual pixel position where the user tapped.
It depends on the image view content mode.
For the scale-to-fill mode you simply multiply the coordinates by the image-to-view ratio:
CGPoint pointOnImage = CGPointMake(pointOfTouch.x*(imageSize.width/frameSize.width), pointOfTouch.y*(imageSize.height/frameSize.height));
For all other modes you need to compute the actual image frame inside the view, which requires a different procedure for each mode.
Adding aspect fit mode from comments:
For aspect fit you need to compute the actual image frame, which can be smaller than the image view frame in one of its dimensions and is centered:
CGSize imageSize; // the original image size
CGSize imageViewSize; // the image view size
CGFloat imageRatio = imageSize.width/imageSize.height;
CGFloat viewRatio = imageViewSize.width/imageViewSize.height;
CGRect imageFrame = CGRectMake(.0f, .0f, imageViewSize.width, imageViewSize.height);
if (imageRatio > viewRatio) {
    // image has room on top and bottom but fits perfectly on left and right
    CGSize displayedImageSize = CGSizeMake(imageViewSize.width, imageViewSize.width / imageRatio);
    imageFrame = CGRectMake(.0f, (imageViewSize.height - displayedImageSize.height) * .5f, displayedImageSize.width, displayedImageSize.height);
}
else if (imageRatio < viewRatio) {
    // image has room on left and right but fits perfectly on top and bottom
    CGSize displayedImageSize = CGSizeMake(imageViewSize.height * imageRatio, imageViewSize.height);
    imageFrame = CGRectMake((imageViewSize.width - displayedImageSize.width) * .5f, .0f, displayedImageSize.width, displayedImageSize.height);
}
// transform the coordinate
CGPoint locationInImageView; // received from touch
CGPoint locationOnImage = CGPointMake(locationInImageView.x, locationInImageView.y); // copy the original point
locationOnImage = CGPointMake(locationOnImage.x - imageFrame.origin.x, locationOnImage.y - imageFrame.origin.y); // translate to fix the origin
locationOnImage = CGPointMake(locationOnImage.x/imageFrame.size.width, locationOnImage.y/imageFrame.size.height); // transform to relative coordinates
locationOnImage = CGPointMake(locationOnImage.x*imageSize.width, locationOnImage.y*imageSize.height); // scale to original image coordinates
Just a note: if you want to transfer this to aspect fill, all you need to do is swap < and > in both of the if statements.
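As a side note, if you can link AVFoundation, the aspect-fit frame computation above can also be done with AVMakeRect (the same call used in an earlier question here). A Swift sketch, under the assumption that the image view uses .scaleAspectFit:

import UIKit
import AVFoundation

// Convert a touch point in an aspect-fit image view to coordinates on the
// original image. Returns nil when the view has no image.
func pointOnImage(for touchPoint: CGPoint, in imageView: UIImageView) -> CGPoint? {
    guard let image = imageView.image else { return nil }
    // AVMakeRect computes the centered aspect-fit frame for us.
    let imageFrame = AVMakeRect(aspectRatio: image.size, insideRect: imageView.bounds)
    // Translate to the image frame's origin, normalize, then scale back up.
    let relativeX = (touchPoint.x - imageFrame.minX) / imageFrame.width
    let relativeY = (touchPoint.y - imageFrame.minY) / imageFrame.height
    return CGPoint(x: relativeX * image.size.width,
                   y: relativeY * image.size.height)
}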
Thanks for reading.
I have a background image and a foreground image. The foreground image is in a UIScrollView, so it can be resized and repositioned over the background image. The background image is set to Aspect Fit. I have a function that combines the two UIImages into a new UIImage. That works fine, but what I can't get right is the x,y coordinates of one view over the other.
Here's some code:
CGFloat bgImageScale = self.backgroundImageView.bounds.size.height / self.bgImage.size.height; // Gives me the AspectFit scale.
CGFloat bgOffsetX = (self.backgroundImageView.bounds.size.width - self.bgImage.size.width * bgImageScale) / 2.0;
CGFloat bgOffsetY = 0.0;
CGFloat fgImageScale = self.fgImageScrollView.zoomScale;
CGFloat fgOffsetX = -self.fgImageScrollView.contentOffset.x;
CGFloat fgOffsetY = -self.fgImageScrollView.contentOffset.y;
CGPoint imageOffset = CGPointMake((fgOffsetX - bgOffsetX) * bgImageScale, (fgOffsetY - bgOffsetY) * bgImageScale);
[self.delegate completedOverlayImage:
    [self mergeImage:self.fgImage
           withImage:self.bgImage
          usingAlpha:0.5f
          withOffset:imageOffset
            andScale:fgImageScale / bgImageScale]];
In brief, the completedOverlayImage code does the following relevant bit:
[bottomImage drawInRect:CGRectMake(0,0,newSize.width,newSize.height)];
[topImage drawInRect:CGRectMake(imageOffset.x, imageOffset.y, newSize.width*imageScale, newSize.height*imageScale) blendMode:kCGBlendModeNormal alpha:alpha];
So I just can't get the imageOffset stuff right to get the new image overlaid the same as it appeared on-screen.
By the way, this app is iOS 7 and up only.
Can anyone help? Thanks.
Take a look at the UIView convertRect:toView: method.
Assuming that your view hierarchy looks like this
SomeViewController
View // the view controller's main view
BgView // the background view (UIImageView)
ScrollView
FgView // the foreground view (UIImageView)
If the foreground image is a UIImageView that's a child of the scroll view, then you can convert the FgView frame coordinates to the main view coordinate system with a line of code like this
CGRect foregroundFrame = [self.foregroundImageView convertRect:self.foregroundImageView.bounds toView:self.view];
Since the BgView's frame is already in mainView coordinates, this will give you both frames in the same coordinate system.
OK, I solved it myself. So for anyone else attempting the same thing, here's the code:
CGFloat bgImageScale = self.backgroundImageView.bounds.size.height / self.bgImage.size.height;
CGFloat bgOffsetX = (self.backgroundImageView.bounds.size.width - self.bgImage.size.width * bgImageScale) / 2.0;
CGFloat bgOffsetY = 0.0;
CGFloat fgImageScale = self.fgImageScrollView.zoomScale;
CGFloat fgRelativeZoom = fgImageScale / bgImageScale; // How much is fg zoomed compared to bg?
CGFloat fgOffsetX = -self.fgImageScrollView.contentOffset.x; // We want the offset of the (0,0), not the offset of the viewport. Hence, negative.
CGFloat fgOffsetY = -self.fgImageScrollView.contentOffset.y;
CGPoint imageOffset = CGPointMake(fgOffsetX / bgImageScale - bgOffsetX, fgOffsetY / bgImageScale - bgOffsetY);
[self.delegate completedOverlayImage:
    [self mergeImage:self.fgImage
           withImage:self.bgImage
          usingAlpha:0.5f
          withOffset:imageOffset
            andScale:fgRelativeZoom]];
and the function to combine the images (assuming iOS 7, which lets you pass zero as the scale in the UIGraphicsBeginImageContextWithOptions() call):
- (UIImage *)mergeImage:(UIImage *)topImage withImage:(UIImage *)bottomImage usingAlpha:(CGFloat)alpha withOffset:(CGPoint)imageOffset andScale:(CGFloat)imageScale {
    int width = bottomImage.size.width;
    int height = bottomImage.size.height;
    CGSize newSize = CGSizeMake(width, height);

    UIGraphicsBeginImageContextWithOptions(newSize, NO, 0.0);
    [bottomImage drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    [topImage drawInRect:CGRectMake(imageOffset.x, imageOffset.y, newSize.width * imageScale, newSize.height * imageScale) blendMode:kCGBlendModeNormal alpha:alpha];

    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
Hope that helps someone else out there.
I am creating a UIImageView and adding it to my view in a loop. I set the initial frame to (0, 0, 1, 47), and on each pass of the loop I change the center of the image view to space them out.
I am always using 0 as the origin.y.
The problem is that the reference point is in the centre of the image view; if we were in Interface Builder, this would be equivalent to the image below.
How can I change the reference point in code?
After reading these answers and your comments, I'm not really sure what your point is.
With UIView you can set the position in 2 ways:
center – It definitely says it is the center.
frame.origin – The top left corner; it can't be set directly.
If you want the bottom left corner to be at x=300, y=300 you can just do this:
UIView *view = ...
CGRect frame = view.frame;
frame.origin.x = 300 - frame.size.width;
frame.origin.y = 300 - frame.size.height;
view.frame = frame;
But if you go one level deeper, into the magical world of CALayers (don't forget to import QuartzCore), you become more powerful.
CALayer has these:
position – You see, it doesn't explicitly say 'center', so it may not be the center!
anchorPoint – A CGPoint with values in the range 0..1 (inclusive) that specifies a point inside the view. The default is x=0.5, y=0.5, which means 'center' (and -[UIView center] assumes this value). You may set it to any other value, and the position property will be applied to that point.
Example time:
You have a view with size 100x100
view.layer.anchorPoint = CGPointMake(1, 1);
view.layer.position = CGPointMake(300, 300);
Top left corner of the view is at x=200, y=200 and its bottom right corner is at x=300, y=300.
Note: When you rotate the layer/view it will be rotated around the anchorPoint, that is the center by default.
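To illustrate that note, a small Swift sketch (view is a placeholder name): rotating a view around its bottom-right corner instead of its center:

import QuartzCore

// Changing anchorPoint moves the view on screen (position stays fixed),
// so re-set position to the old corner to keep the view in place.
let corner = CGPoint(x: view.frame.maxX, y: view.frame.maxY)
view.layer.anchorPoint = CGPoint(x: 1, y: 1)
view.layer.position = corner
// Now the rotation pivots around the bottom-right corner.
view.layer.transform = CATransform3DMakeRotation(.pi / 4, 0, 0, 1)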
But since you only ask HOW to do a specific thing and not WHAT you want to achieve, I can't help you any further for now.
The object's frame includes its position in its superview. You can change it with something like:
CGRect frame = self.imageView.frame;
frame.origin.y = 0.0f;
self.imageView.frame = frame;
If I am understanding you correctly, you need to set the frame of the image view you are interested in moving. This can be done in the simple case like this:
_theImageView.frame = CGRectMake(x, y, width, height);
Obviously you need to set x, y, width, and height yourself. Please also be aware that a view's frame is in reference to its parent view. So, if you have a view that is in the top left corner (x = 0, y = 0), and is 320 points wide and 400 points tall, and you set the frame of the image view to be (10, 50, 100, 50) and then add it as a subview of the previous view, it will sit at x = 10, y = 50 of the parent view's coordinate space, even though the bounds of the image view are x = 0, y = 0. Bounds are in reference to the view itself, frame is in reference to the parent.
So, in your scenario, your code might look something like the following:
CGRect currentFrame = _theImageView.frame;
currentFrame.origin.x = 0;
currentFrame.origin.y = 0;
_theImageView.frame = currentFrame;
[_parentView addSubview:_theImageView];
Alternatively, you can say:
CGRect currentFrame = _theImageView.frame;
_theImageView.frame = CGRectMake(0, 0, currentFrame.size.width, currentFrame.size.height);
[_parentView addSubview:_theImageView];
Either approach will set the image view to the top left of the parent you add it to.
I thought I would take a cut at this in Swift.
If one would like to set a view's position on the screen by specifying the coordinates of an origin point in X and Y for that view, then with a little math we can figure out where the center of the view needs to be in order for the origin of the frame to be located as desired.
This extension uses the frame of the view to get the width and height.
The equation to calculate the new center is almost trivial. See the extension below:
extension CGRect {
    // Created 12/16/2020 by Michael Kucinski for anyone to reuse as desired
    func getCenterWhichPlacesFrameOriginAtSpecified_X_and_Y_Coordinates(x_Position: CGFloat, y_Position: CGFloat) -> CGPoint {
        // self is the CGRect
        let widthDividedBy2 = self.width / 2
        let heightDividedBy2 = self.height / 2

        // Calculate where the center needs to be to place the origin at the specified x and y position
        let desiredCenter_X = x_Position + widthDividedBy2
        let desiredCenter_Y = y_Position + heightDividedBy2
        let calculatedCenter = CGPoint(x: desiredCenter_X, y: desiredCenter_Y)

        return calculatedCenter // Using this point as the center will place the origin at the specified X and Y coordinates
    }
}
Usage as shown below, to place the origin in the upper left corner area, 25 points in:
// Set the origin for this object at the values specified
maskChoosingSlider.center = maskChoosingSlider.frame.getCenterWhichPlacesFrameOriginAtSpecified_X_and_Y_Coordinates(x_Position: 25, y_Position: 25)
If you want to pass a CGPoint into the extension instead of X and Y coordinates, that's an easy change you can make on your own.
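For completeness, a sketch of that CGPoint variant (the method name is shortened here and is not part of the original answer):

extension CGRect {
    // Center point that places this rect's origin at the given point.
    func center(forOrigin origin: CGPoint) -> CGPoint {
        return CGPoint(x: origin.x + width / 2,
                       y: origin.y + height / 2)
    }
}

// Hypothetical usage, mirroring the slider example above:
// maskChoosingSlider.center = maskChoosingSlider.frame.center(forOrigin: CGPoint(x: 25, y: 25))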
I've successfully implemented the custom map annotation callout code from the Asynchrony blog post.
(When the user taps a map pin, I show a customized image instead of the standard callout view.)
The only remaining problem is that the callout occupies the entire width of the view, and the app would look much better if the width corresponded to the image I'm using.
I have subclassed MKAnnotationView, and when I set its contentWidth to the width of the image, the triangle does not always point back to the pin, or the image is not even inside its wrapper view.
Any help or suggestions would be great.
Thanks.
I ran into a similar problem when implementing the CalloutMapAnnotationView for the iPad. Basically I didn't want the iPad version to take the full width of the mapView.
In the prepareFrameSize method set your width:
- (void)prepareFrameSize {
    // ...
    // changing frame x/y origins here does nothing
    frame.size = CGSizeMake(320.0f, height);
    self.frame = frame;
}
Next you'll have to calculate the xOffset based off the parentAnnotationView:
- (void)prepareOffset {
    // Base x calculations from the center of the parent view
    CGPoint parentOrigin = [self.mapView convertPoint:self.parentAnnotationView.center
                                             fromView:self.parentAnnotationView.superview];

    CGFloat xOffset = 0;
    CGFloat mapWidth = self.mapView.bounds.size.width;
    CGFloat halfWidth = mapWidth / 2;
    CGFloat x = parentOrigin.x + (320.0f / 2);

    if (parentOrigin.x < halfWidth && x < 0) // left half of map
        xOffset = -x;
    else if (parentOrigin.x > halfWidth && x > mapWidth) // right half of map
        xOffset = -(x - mapWidth);

    // yOffset calculation ...
}
Now in drawRect:(CGRect)rect before the callout bubble is drawn:
- (void)drawRect:(CGRect)rect {
    // ...
    // Calculate the caret location in the frame
    if (self.centerOffset.x == 0.0f)
        parentX = 320.0f / 2;
    else if (self.centerOffset.x < 0.0f)
        parentX = (320.0f / 2) + -self.centerOffset.x;
    // ...
}
Hope this helps put you on the right track.