I am currently trying to use a CALayer to show a mask and then crop the picture according to that mask, but I can't find a way to get the correct position and size of the mask within my image.
When I draw my mask, I use kCAGravityResizeAspectFill to keep the ratio of my image. In this case the layer uses the height to fill my screen height and computes the proper width and (x, y) to keep the ratio.
CGRect screenViewRect = [self.viewForBaselineLayout bounds];
CGFloat screenViewWidth = screenViewRect.size.width;
CGFloat screenViewHeight = screenViewRect.size.height;
masqueLayer.frame = CGRectMake(screenViewWidth * 0.45, screenViewHeight * 0.05, screenViewWidth * 0.10, screenViewHeight * 0.92);
masqueLayer.contents = (__bridge id)([UIImage imageNamed:masqueJustif].CGImage);
masqueLayer.contentsGravity = kCAGravityResizeAspectFill;
[self.layer insertSublayer:masqueLayer atIndex:2];
With the mask on my screen I can easily see that the screenViewWidth * 0.10 is not respected as I wanted, but my trouble comes from the fact that when I read the frame of my layer, the width isn't updated, so I can't get the real position of my layer on the screen.
Is there a method to get the real position of my layer on my screen?
I am currently trying to get the crop rectangle with the code below (my ratio is 21/29.7, as it is an A4 mask). This code actually works on iPad but not on iPhone, as the ratio is different:
CGRect outputRect = [masqueLayer convertRect:masqueLayer.bounds toLayer:self.layer];
outputRect.origin.y *= 2;
outputRect.size.height *= 2;
outputRect.size.width = outputRect.size.height * (21/29.7);
I also tried using my mask percentages (relative to the screen bounds from the first snippet):
CGRect outputRect = masqueLayer.frame;
outputRect.origin.y = screenViewRect.size.height * 0.05;
outputRect.origin.x = screenViewRect.size.width * 0.45 * masqueLayer.anchorPoint.x;
outputRect.size.height = screenViewRect.size.height * 0.92;
outputRect.size.width = screenViewRect.size.height * 0.92 * (21/29.7);
Here is a screenshot of my mask on another layer. I want to extract the image bounded by the blue corners (which are the border of my layer).
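For reference, a minimal sketch of how one might compute by hand where an aspect-filled contents image actually lands relative to a layer's frame (the helper name is made up; it assumes the image size and the layer's frame are known):

static CGRect DisplayedRectForAspectFill(CGSize imageSize, CGRect layerFrame) {
    // With kCAGravityResizeAspectFill the image fills the layer and
    // overflows it in exactly one dimension, centered.
    CGFloat imageRatio = imageSize.width / imageSize.height;
    CGFloat layerRatio = layerFrame.size.width / layerFrame.size.height;
    CGRect r = layerFrame;
    if (imageRatio > layerRatio) {
        // image is relatively wider: height fits, width spills over both sides
        r.size.width = layerFrame.size.height * imageRatio;
        r.origin.x -= (r.size.width - layerFrame.size.width) / 2.0;
    } else {
        // image is relatively taller: width fits, height spills over top and bottom
        r.size.height = layerFrame.size.width / imageRatio;
        r.origin.y -= (r.size.height - layerFrame.size.height) / 2.0;
    }
    return r;
}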
Thanks.
Introduction
I have a CGPath that I create from an SVG file using the PocketSVG API. It all works fine.
The Problem
The problem is that the shape stretches for some reason. Please take a look at this picture (the blue color is just to make it more visible to you; please ignore it, it should be clearColor):
The Target
What do I want to achieve? I want a shape that spans the entire screen's width (I don't care about the height; it should adjust itself according to the width) and sticks to the bottom of the screen. Please take a look at this picture as well (and ignore the circular button):
The Code
The important part ;)
I have a subclass of UIView that draws this shape from the SVG file, called CategoriesBarView. Then, in my MainViewController (a subclass of UIViewController), I create an object of CategoriesBarView and add it programmatically as a subview.
CategoriesBarView:
class CategoriesBarView: UIView {
    override func layoutSubviews() {
        let myPath = PocketSVG.pathFromSVGFileNamed("CategoriesBar").takeUnretainedValue()
        var transform = CGAffineTransformMakeScale(self.frame.size.width / 750.0, self.frame.size.height / 1334.0)
        let transformedPath = CGPathCreateCopyByTransformingPath(myPath, &transform)
        let myShapeLayer = CAShapeLayer()
        myShapeLayer.path = transformedPath
        let blur = UIBlurEffect(style: .Light)
        let effectView = UIVisualEffectView(effect: blur)
        effectView.frame.size = self.frame.size
        effectView.frame.origin = CGPointMake(0, 0)
        effectView.layer.mask = myShapeLayer
        self.addSubview(effectView)
    }
}
MainViewController:
override func viewDidLoad() {
    super.viewDidLoad()
    let testHeight = UIScreen.mainScreen().bounds.size.height / 6 // 1/6 of the screen's height, approximately the height in the target picture
    let categoriesBarView = CategoriesBarView(frame: CGRect(x: 0, y: UIScreen.mainScreen().bounds.size.height - testHeight, width: UIScreen.mainScreen().bounds.size.width, height: testHeight))
    categoriesBarView.backgroundColor = UIColor.blueColor() // As I said, it should be clearColor
    self.view.addSubview(categoriesBarView)
}
Does anyone know what the problem is here and why the shape is stretching like that? I'd really appreciate it if someone could help me.
Thank you very much :)
Consider the following code, which draws a square of 100x100. I have taken 100x100 as the base dimension (because it's easy to calculate the respective ratio or scaled dimension), and I have defined scaleWidth and scaleHeight variables which represent the current scale for the path. At a scale of 1.0 the code draws a 100x100 square; if you change the values to 0.50 and 0.75 respectively, as below, it will draw a 50x75 rectangle. Refer to the images, which clearly depict the difference between scales of 1.0 and of 0.50/0.75.
CGFloat scaleWidth = 0.50f;
CGFloat scaleHeight = 0.75f;
//// Square Drawing
UIBezierPath* bezierSquarePath = [UIBezierPath bezierPath];
// ***Starting point of path ***
[bezierSquarePath moveToPoint: CGPointMake(1, 1)];
// *** move to x and y position to draw lines, calculate respective x & y position using scaleWidth & scaleHeight ***
[bezierSquarePath addLineToPoint: CGPointMake(100*scaleWidth, 1)];
[bezierSquarePath addLineToPoint: CGPointMake(100*scaleWidth, 100*scaleHeight)];
[bezierSquarePath addLineToPoint: CGPointMake(1, 100*scaleHeight)];
// *** end your path ***
[bezierSquarePath closePath];
[UIColor.blackColor setStroke];
bezierSquarePath.lineWidth = 1;
[bezierSquarePath stroke];
Image 1: a 100x100 square drawn with scaleWidth = 1.0 and scaleHeight = 1.0
Image 2: a 50x75 rectangle drawn with scaleWidth = 0.50 and scaleHeight = 0.75
Note: in the given images, all the drawing is done in UIView's - (void)drawRect:(CGRect)rect method, as only a UIView can draw. I have placed a UIView, highlighted in gray, in the images.
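For clarity, a minimal sketch of the kind of hosting view meant here (the class name is made up):

@interface ScaledSquareView : UIView
@end

@implementation ScaledSquareView
- (void)drawRect:(CGRect)rect {
    // The square-drawing code above belongs here; drawRect: is called with
    // a current graphics context that -[UIBezierPath stroke] draws into.
}
@end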
I believe this gives you some perspective on scaling a path to solve your problem; you cannot use this code as-is, but you can build your own using the same approach, as sketched below.
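Applied to the original question, a sketch of the key idea in Objective-C: use the same scale factor on both axes so the path keeps its aspect ratio (designWidth is an assumption based on the 750-wide art size in the question's code):

CGFloat designWidth = 750.0f; // assumed authoring width of the SVG
CGFloat scale = self.bounds.size.width / designWidth;
// Same factor for x and y, so the shape scales without stretching;
// the height then follows from the path's own aspect ratio.
CGAffineTransform uniform = CGAffineTransformMakeScale(scale, scale);
CGPathRef scaledPath = CGPathCreateCopyByTransformingPath(svgPath, &uniform);
// ...assign scaledPath to a CAShapeLayer, then CGPathRelease(scaledPath) when done.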
Helpful tool: if you are not an expert in graphics coding, I recommend the PaintCode app, which generates Objective-C drawing code from a visual UI. There may be other tools you can opt for as well.
Happy coding :)
I have a UIView that needs to be placed over a UIImage inside of a UIImageView at specific coordinates. The coordinates for the frame are referenced from the top left corner and have a specified width and height referenced from the original image.
So, to make the frame, I am first getting the CGRect of the image using a category from the following post: UIImage size in UIImageView
I then get a scale factor to shrink the size of the frame by taking the original height, dividing it by the scaled height, and then dividing all of my values by that.
Lastly, I take the image CGRect and add the scaled position values of the frame to get my final CGRect for the view. However, the frame is always up and to the right of the desired location. Can anyone see what I'm doing wrong?
Here's the code (new is just a custom object with the correct frame parameters):
CGRect imageBounds = [self.imageView displayedImageBounds];
float scaleFactor = AppDelegate.usedImage.size.height / imageBounds.size.height;
new.height /= scaleFactor;
new.width /= scaleFactor;
new.positionX /= scaleFactor;
new.positionY /= scaleFactor;
UIView *faceRectView = [[UIView alloc] init];
faceRectView.tag = idx;
faceRectView.backgroundColor = [UIColor whiteColor];
faceRectView.frame = CGRectMake((imageBounds.origin.x + new.positionX), (imageBounds.origin.y + new.positionY), new.width, new.height);
[self.view addSubview:faceRectView];
CGPoint is a C structure that defines a point in a coordinate system. The origin of this coordinate system is at the top left on iOS and at the bottom left on OS X. In other words, the orientation of its vertical axis differs on iOS and OS X.
CGSize is another simple C structure that defines a width and a height value, and CGRect has an origin field, a CGPoint, and a size field, a CGSize. Together the origin and size fields define the position and size of a rectangle.
On iOS and OS X, an application has multiple coordinate systems. On iOS, for example, the application's window is positioned in the screen's coordinate system and every subview of the window is positioned in the window's coordinate system. In other words, the subviews of a view are always positioned in the view's coordinate system.
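A small sketch of moving a point between those coordinate systems, using UIView's standard conversion methods (someSubview is an assumed view in the hierarchy):

CGPoint pointInSubview = CGPointMake(10.0, 10.0);
// Convert from the subview's coordinate system to the window's, and back.
CGPoint pointInWindow = [someSubview convertPoint:pointInSubview toView:nil]; // nil means the window
CGPoint roundTripped = [someSubview convertPoint:pointInWindow fromView:nil];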
Take this example of a frame, and notice how it differs from the concept of bounds.
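A small sketch of the difference, with illustrative values:

UIView *view = [[UIView alloc] initWithFrame:CGRectMake(20.0, 40.0, 100.0, 100.0)];
NSLog(@"frame:  %@", NSStringFromCGRect(view.frame));  // {{20, 40}, {100, 100}} -- position in the superview's coordinates
NSLog(@"bounds: %@", NSStringFromCGRect(view.bounds)); // {{0, 0}, {100, 100}}   -- the view's own coordinate space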
CGGeometry Reference is a collection of structures, constants, and functions that make it easier to work with coordinates and rectangles. You may have run into code snippets similar to this:
CGPoint point = CGPointMake(self.view.frame.origin.x + self.view.frame.size.width, self.view.frame.origin.y + self.view.frame.size.height);
Not only is this snippet hard to read, it's also quite verbose. We can rewrite this code snippet using two convenient functions defined in the CGGeometry Reference.
CGRect frame = self.view.frame;
CGPoint point = CGPointMake(CGRectGetMaxX(frame), CGRectGetMaxY(frame));
To simplify the above code snippet, we store the view's frame in a variable named frame and use CGRectGetMaxX and CGRectGetMaxY. The names of the functions are self-explanatory.
The CGGeometry Reference defines functions to return the smallest and largest values for the x- and y-coordinates of a rectangle as well as the x- and y-coordinates that lie at the rectangle's center. Two other convenient getter functions are CGRectGetWidth and CGRectGetHeight.
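For instance, with an illustrative rectangle:

CGRect rect = CGRectMake(20.0, 40.0, 100.0, 60.0);
CGFloat minX = CGRectGetMinX(rect);     // 20
CGFloat midX = CGRectGetMidX(rect);     // 70, the x-coordinate of the center
CGFloat maxY = CGRectGetMaxY(rect);     // 100
CGFloat width = CGRectGetWidth(rect);   // 100
CGFloat height = CGRectGetHeight(rect); // 60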
Finally, check out the implementation of CGRectMake:
CGRect CGRectMake(CGFloat x, CGFloat y, CGFloat width, CGFloat height)
{
    CGRect rect;
    rect.origin.x = x; rect.origin.y = y;
    rect.size.width = width; rect.size.height = height;
    return rect;
}
Can you try it like this?
faceRectView.frame = CGRectMake((0.0), (0.0), new.width, new.height);
I need the actual pixel position: not the position with respect to the UIImageView frame, but the actual pixel position on the UIImage.
A UIPanGestureRecognizer gives the location in the UIImageView, so it is of no use.
I can multiply the x and y by the scale, but the UIImage scale is always 0.
I need to crop a circular area from a UIImage, blur it, and place it at exactly the same position.
Flow:
Crop a circular area from the UIImage using CGImageCreateWithImageInRect
Round-rect the cropped image using [UIBezierPath bezierPathWithRoundedRect:...]
Blur the round-rect image using CIGaussianBlur
Place the round-rect blurred image at the same x, y position
For the first step I need the actual pixel position where the user tapped.
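For reference, a rough sketch of steps 1 and 3 of that flow once the tapped pixel position is known (sourceImage, radius, and pointOnImage are assumed names; the round-rect step and Retina scale handling are omitted):

CGFloat radius = 50.0f; // assumed circle radius in pixels
CGRect cropRect = CGRectMake(pointOnImage.x - radius, pointOnImage.y - radius, radius * 2.0f, radius * 2.0f);

// 1. Crop the square region around the tap.
CGImageRef croppedRef = CGImageCreateWithImageInRect(sourceImage.CGImage, cropRect);

// 3. Blur it with CIGaussianBlur, cropping the output back to the input extent.
CIImage *input = [CIImage imageWithCGImage:croppedRef];
CIFilter *blurFilter = [CIFilter filterWithName:@"CIGaussianBlur"];
[blurFilter setValue:input forKey:kCIInputImageKey];
[blurFilter setValue:@8 forKey:kCIInputRadiusKey];
CIContext *context = [CIContext contextWithOptions:nil];
CGImageRef blurredRef = [context createCGImage:blurFilter.outputImage fromRect:input.extent];
UIImage *blurred = [UIImage imageWithCGImage:blurredRef];
CGImageRelease(croppedRef);
CGImageRelease(blurredRef);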
It depends on the image view content mode.
For the scale-to-fill mode you simply multiply the coordinates by the image-to-view ratio:
CGPoint pointOnImage = CGPointMake(pointOfTouch.x*(imageSize.width/frameSize.width), pointOfTouch.y*(imageSize.height/frameSize.height));
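A usage sketch feeding that formula from a gesture recognizer (scale-to-fill case; self.imageView is an assumed outlet):

- (void)handlePan:(UIPanGestureRecognizer *)gesture {
    CGPoint pointOfTouch = [gesture locationInView:self.imageView];
    CGSize frameSize = self.imageView.bounds.size;
    CGSize imageSize = self.imageView.image.size; // in points; multiply by image.scale if you need device pixels
    CGPoint pointOnImage = CGPointMake(pointOfTouch.x * (imageSize.width / frameSize.width),
                                       pointOfTouch.y * (imageSize.height / frameSize.height));
    NSLog(@"point on image: %@", NSStringFromCGPoint(pointOnImage));
}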
For all other modes you need to compute the actual image frame inside the view, which requires a different procedure for each mode.
Adding the aspect fit mode from the comments:
For aspect fit you need to compute the actual image frame, which can be smaller than the image view frame in one of the dimensions and is centered:
CGSize imageSize; // the original image size
CGSize imageViewSize; // the image view size
CGFloat imageRatio = imageSize.width/imageSize.height;
CGFloat viewRatio = imageViewSize.width/imageViewSize.height;
CGRect imageFrame = CGRectMake(.0f, .0f, imageViewSize.width, imageViewSize.height);
if(imageRatio > viewRatio) {
    // image has room on top and bottom but fits perfectly on left and right
    CGSize displayedImageSize = CGSizeMake(imageViewSize.width, imageViewSize.width / imageRatio);
    imageFrame = CGRectMake(.0f, (imageViewSize.height-displayedImageSize.height)*.5f, displayedImageSize.width, displayedImageSize.height);
}
else if(imageRatio < viewRatio) {
    // image has room on left and right but fits perfectly on top and bottom
    CGSize displayedImageSize = CGSizeMake(imageViewSize.height * imageRatio, imageViewSize.height);
    imageFrame = CGRectMake((imageViewSize.width-displayedImageSize.width)*.5f, .0f, displayedImageSize.width, displayedImageSize.height);
}
// transform the coordinate
CGPoint locationInImageView; // received from touch
CGPoint locationOnImage = CGPointMake(locationInImageView.x, locationInImageView.y); // copy the original point
locationOnImage = CGPointMake(locationOnImage.x - imageFrame.origin.x, locationOnImage.y - imageFrame.origin.y); // translate to fix the origin
locationOnImage = CGPointMake(locationOnImage.x/imageFrame.size.width, locationOnImage.y/imageFrame.size.height); // transform to relative coordinates
locationOnImage = CGPointMake(locationOnImage.x*imageSize.width, locationOnImage.y*imageSize.height); // scale to original image coordinates
Just a note: if you want to transfer this to aspect fill, all you need to do is swap < and > in both of the if statements.
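One extra guard that may be worth adding before the relative-coordinate conversion above: in aspect fit, touches can land on the empty bars outside imageFrame, so it is safer to reject those first.

// Ignore touches outside the displayed image (the letterbox area in aspect fit).
if (!CGRectContainsPoint(imageFrame, locationInImageView)) {
    return; // the touch did not hit the image itself
}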
I'm making a full screen camera for the iPhone 5 and have the following code to scale the 4:3 camera to fill the entire screen, which is a 2:3 ratio. The left and right sides bleed off the screen.
I have to move the cameraView down 71 points in order for it to center with the screen. Otherwise, there's a black bar at the bottom. I'm not quite sure why. Because I don't know why this is happening, I can't figure out how to dynamically code the adjustment to accommodate the iPhone 6 and 6 Plus.
Any help is appreciated.
// get the screen size
CGSize screenSize = [[UIScreen mainScreen] bounds].size;
// establish the height to width ratio of the camera
float heightRatio = 4.0f / 3.0f;
// calculate the height of the camera based on the screen width
float cameraHeight = screenSize.width * heightRatio;
// calculate the ratio that the camera height needs to be scaled by
float ratio = screenSize.height / cameraHeight;
//This slots the preview exactly in the middle of the screen by moving it down 71 points (for iphone 5)
CGAffineTransform translate = CGAffineTransformMakeTranslation(0.0, 71.0);
self.camera.cameraViewTransform = translate;
CGAffineTransform scale = CGAffineTransformScale(translate, ratio, ratio);
self.camera.cameraViewTransform = scale;
This finally clicked in my head. Since we know that the camera preview will always be the same width as the screen:
//getting the camera height by the 4:3 ratio
int cameraViewHeight = SCREEN_WIDTH * 1.333;
int adjustedYPosition = (SCREEN_HEIGHT - cameraViewHeight) / 2;
CGAffineTransform translate = CGAffineTransformMakeTranslation(0, adjustedYPosition);
self.imagePicker.cameraViewTransform = translate;
I'm having a nightmare time trying to correct a photo taken with AVFoundation captureStillImageAsynchronouslyFromConnection to size and orient to exactly what is shown on the screen.
I show the AVCaptureVideoPreviewLayer with this code to make sure it displays the correct way up at all rotations:
previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:self.captureSession];
[previewLayer setVideoGravity:AVLayerVideoGravityResizeAspectFill];
previewLayer.frame = CGRectMake(0, 0, self.view.bounds.size.width, self.view.bounds.size.height);
if ([[previewLayer connection] isVideoOrientationSupported])
{
    [[previewLayer connection] setVideoOrientation:(AVCaptureVideoOrientation)[UIApplication sharedApplication].statusBarOrientation];
}
[self.view.layer insertSublayer:previewLayer atIndex:0];
Now when I have a returned image it needs cropping as it's much bigger than what was displayed.
I know there are loads of UIImage cropping examples, but the first hurdle I seem to have is finding the correct CGRect to use. When I simply crop to self.view.frame the image is cropped at the wrong location.
The preview is using AVLayerVideoGravityResizeAspectFill, and I have my UIImageView also set to aspect fill.
So how can I get the correct frame that AVFoundation is displaying on screen from the preview layer?
EDIT ----
Here's an example of the problem I'm facing. Using the front camera of an iPad Mini, the camera uses the resolution 720x1280, but the display is 768x1024. The view displays this (see the dado rail at the top of the image):
Then when I take the image and display it, it looks like this:
Obviously the camera display was centred in the view, but the cropped image is taken from the top (not seen) section of the photo.
I'm working on a similar project right now and thought I might be able to help, if you haven't already figured this out.
the first hurdle I seem to have is finding the correct CGRect to use. When I simply crop to self.view.frame the image is cropped at the wrong location.
Let's say your image is 720x1280 and you want your image to be cropped to the rectangle of your display, which is a CGRect of size 768x1024. You can't just pass a rectangle of size 768x1024. First, your image isn't 768 pixels wide. Second, you need to specify the placement of that rectangle with respect to the image (i.e. by specifying the rectangle's origin point). In your example, self.view.frame is a CGRect that has an origin of (0, 0). That's why it's always cropping from the top of your image rather than from the center.
Calculating the cropping rectangle is a bit tricky because you have a few different coordinate systems.
You've got your view controller's view, which has...
...a video preview layer as a sublayer, which is displaying an aspect-filled image, but...
...the AVCaptureOutput returns a UIImage that not only has a different width/height than the video preview, but it also has a different aspect ratio.
So because your preview layer is displaying a centered and cropped preview image (i.e. aspect fill), what you basically want to find is the CGRect that:
Has the same display ratio as self.view.bounds
Has the same smaller dimension size as the smaller dimension of the UIImage (i.e. aspect fit)
Is centered in the UIImage
So something like this:
// Determine the width:height ratio of the crop rect, based on self.bounds
CGFloat widthToHeightRatio = self.bounds.size.width / self.bounds.size.height;
CGRect cropRect;
// Set the crop rect's smaller dimension to match the image's smaller dimension, and
// scale its other dimension according to the width:height ratio.
if (image.size.width < image.size.height) {
    cropRect.size.width = image.size.width;
    cropRect.size.height = cropRect.size.width / widthToHeightRatio;
} else {
    cropRect.size.width = image.size.height * widthToHeightRatio;
    cropRect.size.height = image.size.height;
}

// Center the rect in the longer dimension
if (cropRect.size.width < cropRect.size.height) {
    cropRect.origin.x = 0;
    cropRect.origin.y = (image.size.height - cropRect.size.height) / 2.0;
} else {
    cropRect.origin.x = (image.size.width - cropRect.size.width) / 2.0;
    cropRect.origin.y = 0;
}
So finally, to go back to your original example where the image is 720x1280 and you want it cropped to your 768x1024 display rectangle, you will end up with a CGRect of size 720x960, with an origin of x = 0, y = (1280 - 960) / 2 = 160.
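From there, a minimal sketch of applying the computed rect (this ignores the UIImage's orientation flag, which rotated captures may need handled separately):

// CGImageCreateWithImageInRect crops in the bitmap's own coordinates.
CGImageRef croppedRef = CGImageCreateWithImageInRect(image.CGImage, cropRect);
UIImage *croppedImage = [UIImage imageWithCGImage:croppedRef
                                            scale:image.scale
                                      orientation:image.imageOrientation];
CGImageRelease(croppedRef);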