I have a UIView that needs to be placed over a UIImage inside a UIImageView at specific coordinates. The coordinates for the frame are referenced from the top left corner, with a width and height referenced from the original image.
So, to make the frame, I am first getting the CGRect of the image using a category from the following post: UIImage size in UIImageView
I then get a scale factor to shrink the size of the frame by taking the original height, dividing it by the scaled height, and then dividing all of my values by that.
Lastly, I take the image CGRect and add the scaled position values of the frame to get my final CGRect for the view. However, the frame is always up and to the right of the desired location. Can anyone see what I'm doing wrong?
Here's the code (new is just a custom object with the correct frame parameters):
CGRect imageBounds = [self.imageView displayedImageBounds]; // rect the image actually occupies inside the image view
float scaleFactor = AppDelegate.usedImage.size.height / imageBounds.size.height; // original-to-displayed ratio
// scale the frame values from original-image coordinates down to displayed-image coordinates
new.height /= scaleFactor;
new.width /= scaleFactor;
new.positionX /= scaleFactor;
new.positionY /= scaleFactor;
UIView *faceRectView = [[UIView alloc] init];
faceRectView.tag = idx;
faceRectView.backgroundColor = [UIColor whiteColor];
// offset the scaled position by the displayed image's origin
faceRectView.frame = CGRectMake(imageBounds.origin.x + new.positionX,
                                imageBounds.origin.y + new.positionY,
                                new.width, new.height);
[self.view addSubview:faceRectView];
CGPoint is a C structure that defines a point in a coordinate system. The origin of this coordinate system is at the top left on iOS and at the bottom left on OS X. In other words, the orientation of its vertical axis differs on iOS and OS X.
CGSize is another simple C structure that defines a width and a height value, and CGRect has an origin field, a CGPoint, and a size field, a CGSize. Together the origin and size fields define the position and size of a rectangle.
On iOS and OS X, an application has multiple coordinate systems. On iOS, for example, the application's window is positioned in the screen's coordinate system and every subview of the window is positioned in the window's coordinate system. In other words, the subviews of a view are always positioned in the view's coordinate system.
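A minimal sketch of the three types together (the values here are arbitrary):
CGPoint origin = CGPointMake(20.0, 40.0); // 20 points from the left, 40 points from the top on iOS
CGSize size = CGSizeMake(100.0, 50.0);    // width and height in points
CGRect rect = { origin, size };           // same as CGRectMake(20.0, 40.0, 100.0, 50.0)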
Take an example of a frame and notice how it differs from the concept of bounds (the original post illustrates the two with diagrams).
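A quick way to see the difference is to log both for the same view; a sketch, where someSuperview stands in for any existing view:
UIView *box = [[UIView alloc] initWithFrame:CGRectMake(50.0, 80.0, 100.0, 100.0)];
[someSuperview addSubview:box];
NSLog(@"frame: %@", NSStringFromCGRect(box.frame));   // {{50, 80}, {100, 100}} -- the superview's coordinate system
NSLog(@"bounds: %@", NSStringFromCGRect(box.bounds)); // {{0, 0}, {100, 100}} -- the view's own coordinate system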
CGGeometry Reference is a collection of structures, constants, and functions that make it easier to work with coordinates and rectangles. You may have run into code snippets similar to this:
CGPoint point = CGPointMake(self.view.frame.origin.x + self.view.frame.size.width, self.view.frame.origin.y + self.view.frame.size.height);
Not only is this snippet hard to read, it's also quite verbose. We can rewrite this code snippet using two convenient functions defined in the CGGeometry Reference.
CGRect frame = self.view.frame;
CGPoint point = CGPointMake(CGRectGetMaxX(frame), CGRectGetMaxY(frame));
To simplify the above code snippet, we store the view's frame in a variable named frame and use CGRectGetMaxX and CGRectGetMaxY. The names of the functions are self-explanatory.
The CGGeometry Reference defines functions to return the smallest and largest values for the x- and y-coordinates of a rectangle as well as the x- and y-coordinates that lie at the rectangle's center. Two other convenient getter functions are CGRectGetWidth and CGRectGetHeight.
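For example, a few of these getters in use (a small sketch):
CGRect frame = self.view.frame;
CGPoint center = CGPointMake(CGRectGetMidX(frame), CGRectGetMidY(frame)); // center of the rectangle
CGPoint topLeft = CGPointMake(CGRectGetMinX(frame), CGRectGetMinY(frame)); // smallest x and y
CGFloat width = CGRectGetWidth(frame);   // same as frame.size.width
CGFloat height = CGRectGetHeight(frame); // same as frame.size.height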
Finally, to conclude, check out the implementation of CGRectMake:
CGRect CGRectMake(CGFloat x, CGFloat y, CGFloat width, CGFloat height)
{
    CGRect rect;
    rect.origin.x = x;
    rect.origin.y = y;
    rect.size.width = width;
    rect.size.height = height;
    return rect;
}
Can you add it like this?
faceRectView.frame = CGRectMake(0.0, 0.0, new.width, new.height);
I need the actual pixel position, not the position with respect to the UIImageView frame, but the actual pixel position on the UIImage.
UIPanGestureRecognizer gives the location in the UIImageView, so it is of no use.
I can multiply the x and y by the scale, but the UIImage scale is always 0.
I need to crop a circular area from a UIImage, blur it, and place it back at exactly the same position.
Flow:
Crop a circular area from a UIImage using CGImageCreateWithImageInRect
Round-rect the cropped image using [UIBezierPath bezierPathWithRoundedRect:…]
Blur the round-rect image using CIGaussianBlur
Place the round-rect blurred image at the x,y position
In the first step I need the actual pixel position where the user tapped (a rough sketch of the crop follows below).
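For reference, a sketch of the crop in step 1, where sourceImage, pointOnImage, and cropDiameter are assumed names and pointOnImage is already in the image's pixel coordinates:
CGRect cropRect = CGRectMake(pointOnImage.x - cropDiameter / 2.0,
                             pointOnImage.y - cropDiameter / 2.0,
                             cropDiameter, cropDiameter);
CGImageRef croppedRef = CGImageCreateWithImageInRect(sourceImage.CGImage, cropRect); // crop in pixel coordinates
UIImage *croppedImage = [UIImage imageWithCGImage:croppedRef];
CGImageRelease(croppedRef); // the UIImage keeps its own reference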
It depends on the image view content mode.
For scale-to-fill mode you simply need to multiply the coordinates by the image-to-view ratio:
CGPoint pointOnImage = CGPointMake(pointOfTouch.x*(imageSize.width/frameSize.width), pointOfTouch.y*(imageSize.height/frameSize.height));
For all other modes you need to compute the actual image frame inside the view, which requires a different procedure per mode.
Adding aspect fit mode from comments:
For aspect fit you need to compute the actual image frame, which can be smaller than the image view frame in one of the dimensions and is placed in the center:
CGSize imageSize;     // the original image size
CGSize imageViewSize; // the image view size
CGFloat imageRatio = imageSize.width / imageSize.height;
CGFloat viewRatio = imageViewSize.width / imageViewSize.height;
CGRect imageFrame = CGRectMake(.0f, .0f, imageViewSize.width, imageViewSize.height);
if(imageRatio > viewRatio) {
    // image has room on top and bottom but fits perfectly on left and right
    CGSize displayedImageSize = CGSizeMake(imageViewSize.width, imageViewSize.width / imageRatio);
    imageFrame = CGRectMake(.0f, (imageViewSize.height - displayedImageSize.height) * .5f, displayedImageSize.width, displayedImageSize.height);
}
else if(imageRatio < viewRatio) {
    // image has room on left and right but fits perfectly on top and bottom
    CGSize displayedImageSize = CGSizeMake(imageViewSize.height * imageRatio, imageViewSize.height);
    imageFrame = CGRectMake((imageViewSize.width - displayedImageSize.width) * .5f, .0f, displayedImageSize.width, displayedImageSize.height);
}
// transform the coordinate
CGPoint locationInImageView; // received from touch
CGPoint locationOnImage = CGPointMake(locationInImageView.x, locationInImageView.y); // copy the original point
locationOnImage = CGPointMake(locationOnImage.x - imageFrame.origin.x, locationOnImage.y - imageFrame.origin.y); // translate to fix the origin
locationOnImage = CGPointMake(locationOnImage.x/imageFrame.size.width, locationOnImage.y/imageFrame.size.height); // transform to relative coordinates
locationOnImage = CGPointMake(locationOnImage.x*imageSize.width, locationOnImage.y*imageSize.height); // scale to original image coordinates
Just a note: if you want to switch to aspect fill, all you need to do is swap < and > in both of the if statements.
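For reference, here is what the two branches would look like for aspect fill under the same variable names (a sketch; the displayed image then overflows the view, so imageFrame gets a negative origin in one dimension):
if(imageRatio < viewRatio) {
    // image overflows on top and bottom but fits perfectly on left and right
    CGSize displayedImageSize = CGSizeMake(imageViewSize.width, imageViewSize.width / imageRatio);
    imageFrame = CGRectMake(.0f, (imageViewSize.height - displayedImageSize.height) * .5f, displayedImageSize.width, displayedImageSize.height);
}
else if(imageRatio > viewRatio) {
    // image overflows on left and right but fits perfectly on top and bottom
    CGSize displayedImageSize = CGSizeMake(imageViewSize.height * imageRatio, imageViewSize.height);
    imageFrame = CGRectMake((imageViewSize.width - displayedImageSize.width) * .5f, .0f, displayedImageSize.width, displayedImageSize.height);
}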
I am using the AVMetaData API to extract the bounds of an AVMetadataFaceObject. When printed to the console, this CGRect has the following values: bounds={0.2,0.3 0.4x0.5}. I'm having a fair amount of trouble mapping this to a UIView that displays over the face. I can hard-code in some conversion values for my specific screen to get it to crudely be in the right spot, but I would like a solution that displays a UIView over the face shown in my previewView on any screen size.
Does anyone know how to map these to the frame of an on-screen UIView based upon the size of a previewView?
You should be able to take the size of the capture area, let's call that "captureSize" and then do this:
CGRect viewRect;
viewRect.origin.x = bounds.origin.x * captureSize.width;
viewRect.origin.y = bounds.origin.y * captureSize.height;
viewRect.size.width = bounds.size.width * captureSize.width;
viewRect.size.height = bounds.size.height * captureSize.height;
Now, this all depends on how your previewView is set up and whether or not it has any content scaling, etc., but it should give you a sense of the conversion.
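A minimal sketch of that conversion, where captureSize (the preview area's size), faceObject, and faceView are assumed names:
CGRect bounds = faceObject.bounds; // normalized, e.g. {{0.2, 0.3}, {0.4, 0.5}}
CGRect viewRect = CGRectMake(bounds.origin.x * captureSize.width,
                             bounds.origin.y * captureSize.height,
                             bounds.size.width * captureSize.width,
                             bounds.size.height * captureSize.height);
faceView.frame = viewRect; // overlay view positioned over the face
If the preview is driven by an AVCaptureVideoPreviewLayer, its transformedMetadataObjectForMetadataObject: method should perform this mapping for you, including the layer's video gravity scaling.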
I am writing an Objective-C app and I want to create a UIView with my custom values, and there is a problem that I cannot fix.
This code works fine:
CGRect rect;
rect.origin.x = 10; rect.origin.y = 20;
rect.size.width = 30; rect.size.height = 40;
NSLog(@"%@", NSStringFromCGRect(rect));
But this one's NSLog shows the origin point as (0,0), while the width and height are fine:
UIView* aux = [[UIView alloc] initWithFrame:CGRectMake(10,20,30,40)];
NSLog(@"%@", NSStringFromCGRect(aux.bounds));
Is there any problem with initializing a UIView with CGRectMake?
Thanks
You are printing view.bounds, but not view.frame.
Just change it to:
NSLog(@"%@", NSStringFromCGRect(aux.frame));
First, a recap on the question: frame, bounds, and center, and their relationships.
Frame: A view's frame (CGRect) is the position of its rectangle in the superview's coordinate system. By default it starts at the top left.
Bounds: A view's bounds (CGRect) expresses the view's rectangle in its own coordinate system.
Center: A center is a CGPoint expressed in terms of the superview's coordinate system; it determines the position of the exact center point of the view.
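A small sketch making those relationships concrete (assuming no transform is applied):
UIView *v = [[UIView alloc] initWithFrame:CGRectMake(10.0, 20.0, 100.0, 50.0)];
// v.frame is  {{10, 20}, {100, 50}} -- position in the superview's coordinate system
// v.bounds is {{0, 0}, {100, 50}}   -- the view's own coordinate system
// v.center is {60, 45}              -- frame origin plus half the size, in the superview's coordinates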
You need to use:
NSLog(@"%@", NSStringFromCGRect(aux.frame));
The origin of a view's bounds is (0, 0) by default. You have to change your code to
NSLog(@"%@", NSStringFromCGRect(aux.frame));
See Apple's documentation for an explanation here
The bounds of a UIView is the rectangle, expressed as a location (x,y) and size (width,height), relative to its own coordinate system, whose origin is (0,0).
The frame of a UIView is the rectangle, expressed as a location (x,y) and size (width,height), relative to the superview it is contained within.
So you need to use view.frame in your log statement:
NSLog(@"%@", NSStringFromCGRect(aux.frame));
I have a unique requirement to set the coordinate origin of a UIView to be the center of the view. To clarify, the origin needs to be centered vertically and horizontally so that moving to the right is a positive X value, moving left is negative. For Y, moving above the center mark is positive, below is negative. This is essentially the same as geographic coordinates, using the prime meridian's intersection with the equator as the origin.
I am not even sure where to start with this. Can anyone offer up a hint? Thanks, V
Set the origin of the view's bounds property to be the middle of the view, something like:
CGRect bounds = view.bounds;
bounds.origin.x = bounds.size.width / -2.0;
bounds.origin.y = bounds.size.height / -2.0;
view.bounds = bounds;
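With the bounds shifted like this, the point (0, 0) in the view's coordinate system is its visual center, so a subview centered on (0, 0) lands in the middle; a small sketch:
UIView *marker = [[UIView alloc] initWithFrame:CGRectMake(-5.0, -5.0, 10.0, 10.0)];
[view addSubview:marker]; // appears dead center, since (0, 0) is now the middle of view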
You need functions to transform the normal coordinates into the strange ones and vice versa.
For example, a method in your UIView subclass:
- (CGPoint)normalToStrange:(CGPoint)normal
{
    // shift the origin to the center and flip the vertical axis
    return CGPointMake(normal.x - CGRectGetWidth(self.bounds) / 2,
                       -normal.y + CGRectGetHeight(self.bounds) / 2);
}
I am creating a UIImageView and adding it in a loop to my view. I set the initial frame to (0, 0, 1, 47), and on each pass of the loop I change the center of the image view to space them out.
I am always using 0 as the origin.y
The problem is that the origin reference is in the centre of the image view (the original question illustrated this with an Interface Builder screenshot).
How can I change the reference point in code ?
After reading these answers and your comments, I'm not really sure what your point is.
With UIView you can set the position in two ways:
center – It definitely says it is the center.
frame.origin – Top left corner, can't be set directly.
If you want the bottom right corner to be at x=300, y=300 you can just do this:
UIView *view = ...
CGRect frame = view.frame;
frame.origin.x = 300 - frame.size.width;
frame.origin.y = 300 - frame.size.height;
view.frame = frame;
But if you go one level deeper into the magical world of CALayers (don't forget to import QuartzCore), you become more powerful.
CALayer has these:
position – You see, it doesn't explicitly say 'center', so it may not be the center!
anchorPoint – A CGPoint with values in the range 0..1 (inclusive) that specifies a point inside the view. The default is x=0.5, y=0.5, which means 'center' (and -[UIView center] assumes this value). You may set it to any other value, and the position property will be applied to that point.
Example time:
You have a view with size 100x100
view.layer.anchorPoint = CGPointMake(1, 1);
view.layer.position = CGPointMake(300, 300);
Top left corner of the view is at x=200, y=200 and its bottom right corner is at x=300, y=300.
Note: When you rotate the layer/view, it will be rotated around the anchorPoint, which is the center by default.
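For instance, continuing the example above (a sketch):
view.transform = CGAffineTransformMakeRotation(M_PI_4); // pivots 45 degrees around the bottom-right corner, not the center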
But since you only ask HOW to do a specific thing and not WHAT you want to achieve, I can't help you any further for now.
The object's frame includes its position in its superview. You can change it with something like:
CGRect frame = self.imageView.frame;
frame.origin.y = 0.0f;
self.imageView.frame = frame;
If I am understanding you correctly, you need to set the frame of the image view you are interested in moving. This can be done in the simple case like this:
_theImageView.frame = CGRectMake(x, y, width, height);
Obviously you need to set x, y, width, and height yourself. Please also be aware that a view's frame is in reference to its parent view.
So, if you have a view that is in the top left corner (x = 0, y = 0), and is 320 points wide and 400 points tall, and you set the frame of the image view to be (10, 50, 100, 50) and then add it as a subview of the previous view, it will sit at x = 10, y = 50 of the parent view's coordinate space, even though the bounds of the image view are x = 0, y = 0. Bounds are in reference to the view itself; frame is in reference to the parent.
So, in your scenario, your code might look something like the following:
CGRect currentFrame = _theImageView.frame;
currentFrame.origin.x = 0;
currentFrame.origin.y = 0;
_theImageView.frame = currentFrame;
[_parentView addSubview:_theImageView];
Alternatively, you can say:
CGRect currentFrame = _theImageView.frame;
_theImageView.frame = CGRectMake(0, 0, currentFrame.size.width, currentFrame.size.height);
[_parentView addSubview:_theImageView];
Either approach will set the image view to the top left of the parent you add it to.
I thought I would take a cut at this in Swift.
If one would like to set a view's position on screen by specifying the coordinates of an origin point in X and Y, then with a little math we can figure out where the center of the view needs to be in order for the origin of the frame to land where desired.
This extension uses the frame of the view to get the width and height.
The equation to calculate the new center is almost trivial. See the extension below:
extension CGRect {
    // Created 12/16/2020 by Michael Kucinski for anyone to reuse as desired
    func getCenterWhichPlacesFrameOriginAtSpecified_X_and_Y_Coordinates(x_Position: CGFloat, y_Position: CGFloat) -> CGPoint
    {
        // self is the CGRect
        let widthDividedBy2 = self.width / 2
        let heightDividedBy2 = self.height / 2

        // Calculate where the center needs to be to place the origin at the specified x and y position
        let desiredCenter_X = x_Position + widthDividedBy2
        let desiredCenter_Y = y_Position + heightDividedBy2
        let calculatedCenter: CGPoint = CGPoint(x: desiredCenter_X, y: desiredCenter_Y)

        return calculatedCenter // Using this point as the center will place the origin at the specified X and Y coordinates
    }
}
Usage as shown below places the origin in the upper left corner area, 25 points in:
// Set the origin for this object at the values specified
maskChoosingSlider.center = maskChoosingSlider.frame.getCenterWhichPlacesFrameOriginAtSpecified_X_and_Y_Coordinates(x_Position: 25, y_Position: 25)
If you want to pass a CGPoint into the extension instead of X and Y coordinates, that's an easy change you can make on your own.
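For reference, one possible CGPoint-based variant (a sketch; the method name here is made up):
extension CGRect {
    func getCenterWhichPlacesFrameOriginAt(_ desiredOrigin: CGPoint) -> CGPoint {
        // Shift the desired origin by half the size to find the matching center
        return CGPoint(x: desiredOrigin.x + self.width / 2,
                       y: desiredOrigin.y + self.height / 2)
    }
}

// Usage:
maskChoosingSlider.center = maskChoosingSlider.frame.getCenterWhichPlacesFrameOriginAt(CGPoint(x: 25, y: 25))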