swift get image in aspect fill from original image [duplicate] - ios

This question already has answers here:
How to crop a UIImageView to a new UIImage in 'aspect fill' mode?
(2 answers)
Closed 4 years ago.
The problem I am facing is that the image taken from the camera is larger than the one shown in the live view. I have the camera view set up as Aspect Fill.
So the image I get from the camera is about 4000x3000, while the view that shows the live feed is 375x800 (fullscreen iPhone X size). How do I transform/cut out the part of the camera image that matches what is shown in the live view, so that I can further manipulate the image (draw over it)?
As far as I understand, the Aspect Fill property clips the part of the image that cannot be shown in the view. But that clipping does not happen at x = 0 and y = 0; it happens somewhere in the middle of the image. So how do I get that X and Y on the original image, so that I can crop out exactly that part?
I hope I explained well enough.
EDIT:
To give more context, here are some code snippets to make the issue easier to understand.
Setting up my camera with the .resizeAspectFill gravity.
cameraPreviewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
cameraPreviewLayer?.videoGravity = AVLayerVideoGravity.resizeAspectFill
cameraPreviewLayer?.connection?.videoOrientation = AVCaptureVideoOrientation.portrait
cameraPreviewLayer?.frame = self.captureView.frame
self.captureView.layer.addSublayer(cameraPreviewLayer!)
which is displayed in the live view (captureView) that has the size of
375x818 (width: 375 and height: 818).
Then I get the image from that camera on button click and the size of that image is:
3024x4032 (width: 3024 and height: 4032)
So what I want to do is crop the image from the camera to match what is visible in the live view (captureView), which is set to Aspect Fill.

As you already state, the Aspect Fill content mode tries to fill up the live view, and you are also right that it crops a rectangle from the center (cropping top-bottom or left-right depending on the image size and the image view size).
For a generic solution there are two possible cases:
The image needs to be cropped along its height to fit the image view (the proportionally scaled height overflows the view).
The image needs to be cropped along its width to fit the image view (the proportionally scaled width overflows the view).
Considering your size notation is 4000x3000 (height = 4000, width = 3000, a portrait image) and your drawing canvas is 375x800 (height = 375, width = 800), your cropping will be height-wise with the content mode set to Aspect Fill.
So the crop starts at X = 0, but Y will be some positive value. So let's calculate that Y:
// Height of the image once its width is scaled to the 800pt canvas width
// (use floating-point literals, otherwise Swift performs integer division)
let proportionalHeight = 4000.0 / 3000.0 * 800.0
// Height actually visible in the canvas
let allowedHeight = 375.0
// Total clipped amount, split equally between top and bottom
let topBottomCroppedHeight = proportionalHeight - allowedHeight
let croppedYPosition = topBottomCroppedHeight / 2
So there you have your Y value; the height would be the height of the canvas / live view where you are rendering. Please replace these literal values with your own variables.
If you are interested in how all the content modes work, you can dive in here; all the contentMode options supported by UIImageView are simulated there.
Happy coding.
UPDATE
One thing I forgot to mention: this calculated croppedYPosition is for the scaled-down (proportional) image. If you want to use this value on the original 4000x3000 image, you have to scale it up to the original size as follows:
let originalYPosition = (croppedYPosition / 375) * 4000
Use originalYPosition to crop from the original image of size 4000x3000.
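For a fully generic version, here is a small sketch (the function and its names are mine, not from the answer above) that computes the visible-crop rect directly in the original image's pixel space and covers both the height-wise and width-wise cases:
import UIKit

// Sketch: the portion of `imageSize` that an aspect-fill rendering into
// `viewSize` actually shows, expressed in image pixel coordinates.
func aspectFillCropRect(imageSize: CGSize, viewSize: CGSize) -> CGRect {
    // Aspect fill scales by whichever axis needs the larger factor.
    let scale = max(viewSize.width / imageSize.width,
                    viewSize.height / imageSize.height)
    // The visible region, measured in image pixels.
    let visibleSize = CGSize(width: viewSize.width / scale,
                             height: viewSize.height / scale)
    // Aspect fill centers the image, so offset by half of what is clipped.
    return CGRect(x: (imageSize.width - visibleSize.width) / 2,
                  y: (imageSize.height - visibleSize.height) / 2,
                  width: visibleSize.width,
                  height: visibleSize.height)
}

// With the sizes from the question (3024x4032 photo, 375x818 view);
// the result can be passed to CGImage.cropping(to:) on the original photo.
let crop = aspectFillCropRect(imageSize: CGSize(width: 3024, height: 4032),
                              viewSize: CGSize(width: 375, height: 818))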

Related

iOS Vision: Drawing Detected Rectangles on Live Camera Preview Works on iPhone But Not on iPad

I'm using the iOS Vision framework to detect rectangles in real-time with the camera on an iPhone and it works well. The live preview displays a moving yellow rectangle around the detected shape.
However, when the same code is run on an iPad, the yellow rectangle tracks accurately along the X axis, but on the Y it is always slightly offset from the centre and it is not correctly scaled. The included image shows both devices tracking the same test square to better illustrate. In both cases, after I capture the image and plot the rectangle on the full camera frame (1920 x 1080), everything looks fine. It's just the live preview on the iPad that does not track properly.
I believe the issue is caused by how the iPad screen has a 4:3 aspect ratio. The iPhone's full screen preview scales its 1920 x 1080 raw frame down to 414 x 718, where both X and Y dims are scaled down by the same factor (about 2.6). However, the iPad scales the 1920 x 1080 frame down to 810 x 964, which warps the image and causes the error along the Y axis.
A rough solution could be to set a preview layer size smaller than the full screen and have it be scaled down uniformly in a 16:9 ratio matching 1920 x 1080, but I would prefer to use the full screen. Has anyone here come across this issue and found a transform that can properly translate and scale the rect observation onto the iPad screen?
Example test images and code snippet are below.
let rect: VNRectangleObservation
//Camera preview (live) image dimensions
let previewWidth = self.previewLayer!.bounds.width
let previewHeight = self.previewLayer!.bounds.height
//Dimensions of raw captured frames from the camera (1920 x 1080)
let frameWidth = self.frame!.width
let frameHeight = self.frame!.height
//Transform to change detected rectangle from Vision framework's coordinate system to SwiftUI
let transform = CGAffineTransform(scaleX: 1, y: -1).translatedBy(x: 0, y: -(previewHeight))
let scale = CGAffineTransform.identity.scaledBy(x: previewWidth, y: previewHeight)
//Convert the detected rectangle from normalized [0, 1] coordinates with bottom left origin to SwiftUI top left origin
//and scale the normalized rect to preview window dimensions.
var bounds: CGRect = rect.boundingBox.applying(scale).applying(transform)
//Rest of code draws the bounds CGRect in yellow onto the preview window, as shown in the image.
In case it helps anyone else: based on the info posted in Mr.SwiftOak's comment, I was able to resolve the problem by changing the preview layer to scale as .resizeAspect rather than .resizeAspectFill, preserving the ratio of the raw frame in the preview. This meant the preview no longer took up the full iPad screen, but it made accurate overlaying a lot simpler.
I then drew the rectangles as an .overlay on the preview window, so that the drawing coordinates are relative to the origin of the image (top left) rather than the view itself, which has its origin at the top left of the entire screen.
To clarify how I've been drawing the rects, there are two parts:
Converting the detected rect bounding boxes into paths on CAShapeLayers:
let boxPath = CGPath(rect: bounds, transform: nil)
let boxShapeLayer = CAShapeLayer()
boxShapeLayer.path = boxPath
boxShapeLayer.fillColor = UIColor.clear.cgColor
boxShapeLayer.strokeColor = UIColor.yellow.cgColor
boxLayers.append(boxShapeLayer)
Appending the layers in the updateUIView of the preview UIViewRepresentable:
func updateUIView(_ uiView: VideoPreviewView, context: Context) {
    if let rectangles = self.viewModel.rectangleDrawings {
        for rect in rectangles {
            uiView.videoPreviewLayer.addSublayer(rect)
        }
    }
}
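As an aside (not part of the original answer): if you would rather keep .resizeAspectFill, AVCaptureVideoPreviewLayer can do the gravity-aware mapping for you via layerRectConverted(fromMetadataOutputRect:). A sketch, assuming previewLayer is the live preview layer; Vision's boundingBox is normalized with a bottom-left origin, so flip Y first:
import AVFoundation
import Vision

// Sketch: map a Vision observation into preview-layer coordinates.
// layerRectConverted(fromMetadataOutputRect:) honors the layer's
// videoGravity, so the result stays correct on the iPad's 4:3 screen.
func previewRect(for observation: VNRectangleObservation,
                 in previewLayer: AVCaptureVideoPreviewLayer) -> CGRect {
    let box = observation.boundingBox
    // Flip from Vision's bottom-left origin to the metadata space's top-left.
    let metadataRect = CGRect(x: box.minX, y: 1 - box.maxY,
                              width: box.width, height: box.height)
    return previewLayer.layerRectConverted(fromMetadataOutputRect: metadataRect)
}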

How do I change the aspect ratio of the camera in Xamarin iOS?

First of all, my app is written mostly in Xamarin.Forms, but it uses a custom renderer for the camera. Not sure if that will affect anything.
My problem currently is that I need to change the aspect ratio of the camera from 16:9 to something custom, if this is even possible.
I have tried changing the width and height independently, but, for example, if I change the height to something significantly larger, all it does is expand the width and height at the same rate until it 'hits the side of the box' the camera is contained in. So the width ends up correct, but the height I specified in code is a lot larger than the actual height of the camera.
var videoPreviewLayer = new AVCaptureVideoPreviewLayer(captureSession)
{
// 16 / 9 = 1.77777778
Frame = new CGRect(-15, -45, size, size * 1.77777778) /*LiveCameraStream.Bounds*/,
BackgroundColor = new CGColor(0, 255, 0) //Green
};
viewLayer.AddSublayer(videoPreviewLayer);
viewLayer.Bounds = new CGRect(4, -50, 100, 50);
viewLayer.BackgroundColor = new CGColor(255, 0, 0); //Red
//Background colours included just for dev purposes to distinguish between layers
Whenever I set the height to width * 1.777..., it results in a perfect 16:9 aspect ratio, which unfortunately is not what I need. The image below shows how it currently looks; ideally the camera would take up all of the red area.
So my question is: how do I change the aspect ratio of this camera, if it is even possible?
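For context, the behavior described above (the preview expanding uniformly until it hits the container) matches the preview layer's default aspect-preserving video gravity. A sketch in Swift, for consistency with the rest of this page (the Xamarin.iOS bindings mirror these AVFoundation names; `captureSession` is assumed to be an existing session):
import AVFoundation

// Sketch: video gravity options for a non-16:9 frame.
// .resize stretches the feed to fill the frame exactly (distorting it);
// .resizeAspectFill would instead fill the frame by cropping.
let previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
previewLayer.videoGravity = .resize
previewLayer.frame = CGRect(x: 4, y: -50, width: 100, height: 50)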

Get origin of image after aspect fit [duplicate]

This question already has answers here:
How to know the image size after applying aspect fit for the image in an UIImageView
(20 answers)
Closed 6 years ago.
I want to get the CGRect of the image after it has been aspect-fitted onto the screen. All the solutions I found online only give the CGSize. However, I want the origin as well, so that I can draw a canvas on top of the image and perform drawing only on top of the image rather than the whole imageView.
Thanks,
// Requires `import AVFoundation`
let rect = AVMakeRect(aspectRatio: imageView.image!.size, insideRect: imageView.bounds).integral
Try executing the code below:
let img = UIImageView(frame: CGRect(x: 50, y: 50, width: 100, height: 100))
let imageOrigin = img.frame.origin // this gives you the view's x and y position
print(img.frame.origin) // this will print (50.0, 50.0)
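Putting the two answers together, a minimal sketch for the original goal of drawing only over the image (assuming the image view uses .scaleAspectFit; `imageView` and `canvasView` are illustrative names):
import UIKit
import AVFoundation

// Sketch: the exact rect (origin included) that the image occupies inside
// an aspect-fitted UIImageView, used to place a drawing canvas over it.
let fitted = AVMakeRect(aspectRatio: imageView.image!.size,
                        insideRect: imageView.bounds).integral
let canvasView = UIView(frame: fitted)
canvasView.backgroundColor = .clear
imageView.isUserInteractionEnabled = true
imageView.addSubview(canvasView)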

Rotated Image gets distorted and blurry?

I use an image view:
#IBOutlet weak var imageView: UIImageView!
to paint an image and also another image which has been rotated. It turns out that the rotated image has very bad quality. In the following image the glasses in the yellow box are not rotated. The glasses in the red box are rotated by 4.39 degrees.
Here is the code I use to draw the glasses:
UIGraphicsBeginImageContext(imageView.image!.size)
imageView.image!.draw(in: CGRect(x: 0, y: 0, width: imageView.image!.size.width, height: imageView.image!.size.height))
let drawCtxt = UIGraphicsGetCurrentContext()!
let glassImage = UIImage(named: "glasses.png")
let yellowRect = CGRect(...)
drawCtxt.setStrokeColor(UIColor.yellow.cgColor)
drawCtxt.stroke(yellowRect)
drawCtxt.draw(glassImage!.cgImage!, in: yellowRect)
// paint the rotated glasses in the red square
drawCtxt.saveGState()
drawCtxt.translateBy(x: centerX, y: centerY)
drawCtxt.rotate(by: 4.398 * .pi / 180)
var newRect = yellowRect
newRect.origin.x = -newRect.size.width / 2
newRect.origin.y = -newRect.size.height / 2
drawCtxt.setStrokeColor(UIColor.red.cgColor)
drawCtxt.setLineWidth(1)
// draw the red rect
drawCtxt.stroke(newRect)
// draw the image
drawCtxt.draw(glassImage!.cgImage!, in: newRect)
drawCtxt.restoreGState()
How can I rotate and paint the glasses without losing quality or getting a distorted image?
You should use UIGraphicsBeginImageContextWithOptions(CGSize size, BOOL opaque, CGFloat scale) to create the initial context. Passing in 0.0 as the scale will default to the scale of the current screen (e.g., 2.0 on an iPhone 6 and 3.0 on an iPhone 6 Plus).
See this note on UIGraphicsBeginImageContext():
This function is equivalent to calling the UIGraphicsBeginImageContextWithOptions function with the opaque parameter set to NO and a scale factor of 1.0.
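A minimal sketch of that fix, applied to the drawing code from the question (only the context setup changes):
// Create the context at the device's screen scale so retina pixels
// survive; passing 0.0 for scale uses the main screen's scale.
UIGraphicsBeginImageContextWithOptions(imageView.image!.size, false, 0.0)
defer { UIGraphicsEndImageContext() }
// ... same drawing code as in the question ...
let result = UIGraphicsGetImageFromCurrentImageContext()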
As others have pointed out, you need to set up your context to allow for retina displays.
Aside from that, you might want to use a source image that is larger than the target display size and scale it down. (2X the pixel dimensions of the target image would be a good place to start.)
Rotating to odd angles is destructive. The graphics engine has to map a grid of source pixels onto a different grid where they don't line up. Perfectly straight lines in the source image are no longer straight in the destination image, etc. The graphics engine has to do some interpolation, and a source pixel might be spread over several pixels, or less than a full pixel, in the destination image.
By providing a larger source image you give the graphics engine more information to work with. It can better slice and dice those source pixels into the destination grid of pixels.
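A tiny sketch of that idea, under the same assumptions as the question's code (the asset name is illustrative); raising the context's interpolation quality helps the resampling as well:
// Draw from a source asset with roughly 2x the pixel dimensions of the
// target rect, so the rotation downsamples instead of stretching.
drawCtxt.interpolationQuality = .high
// "glasses-large" stands for a higher-resolution version of the asset.
let bigGlasses = UIImage(named: "glasses-large")
drawCtxt.draw(bigGlasses!.cgImage!, in: newRect)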

How to use scanCrop property on a custom sized ZBarReaderView with an overlay

I have a ZBarReaderView embedded in a view controller and I would like to limit the area of the scan to a square in the middle of the view. I have set the resolution of the camera to 1280x720. Assuming the device is in portrait mode, I calculate the normalized coordinates using only cameraViewBounds (the readerView's bounds) and overlayViewFrame (the orange box's frame), as seen in this screenshot - http://i.imgur.com/xzUDHIh.png . Here is what I have so far:
self.cameraView.session.sessionPreset = AVCaptureSessionPreset1280x720;
//Set CameraView's Frame to fit the SideController's Width
CGPoint origin = self.cameraView.frame.origin;
float cameraWidth = self.view.frame.size.width;
self.cameraView.frame = CGRectMake(origin.x, origin.y, cameraWidth, cameraWidth);
//Set CameraView's Cropped Scanning Rect
CGFloat x,y,width,height;
CGRect cameraViewBounds = self.cameraView.bounds;
CGRect overlayViewFrame = self.overlay.frame;
y = overlayViewFrame.origin.y / cameraViewBounds.size.height;
x = overlayViewFrame.origin.x / cameraViewBounds.size.width;
height = overlayViewFrame.size.height / cameraViewBounds.size.height;
width = overlayViewFrame.size.width / cameraViewBounds.size.width;
self.cameraView.scanCrop = CGRectMake(x, y, width, height);
NSLog(@"\n\nScan Crop:\n%@", NSStringFromCGRect(self.cameraView.scanCrop));
As you can see in the screenshot, the blue box is the scanCrop rect, what I want to be able to do is have that blue box match the orange box. Do I need to factor in the resolution of the image or the image size when calculating the normalized values for the scanCrop?
I cannot figure out what spadix is explaining in this comment on sourceforge:
"So, assuming your sample rectangle represents screen points in portrait orientation and using the default 640x480 camera resolution, your scanCrop rectangle would be {75/w, 38/320., 128/w, 244/320.}, where w=480/(320*640.) is the major dimension of the image in screen points." and here:
"assuming your coordinates refer to points in portrait orientation, using the default camera image size, located at the default location and taking into account that the camera image is rotated:
scanCrop = CGRectMake(50/426., 1-(20+250)/320., 150/426., 250/320.)"
He uses the values 426 and 320, which I am assuming have to do with the image size, but in one of his comments he mentions that the resolution is 640x480. How do I factor the image size into calculating the correct rect for scanCrop?
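For what it's worth, here is my reading of spadix's math, sketched in Swift for consistency with the rest of this page (the helper and its names are mine). The camera image is rotated 90 degrees relative to the portrait screen, so the overlay's vertical extent maps onto the image's long axis; 426 is that long axis expressed in screen points (640 * 320 / 480), and 320 is the screen width:
import UIKit

// Sketch: normalize a portrait-screen overlay rect into ZBar's rotated,
// normalized scanCrop space.
func zbarScanCrop(overlay: CGRect, screenWidth: CGFloat,
                  imageSize: CGSize) -> CGRect {
    // The image's long side (e.g. 640), expressed in screen points.
    let longSideInPoints = imageSize.width * screenWidth / imageSize.height
    return CGRect(x: overlay.minY / longSideInPoints,
                  y: 1 - overlay.maxX / screenWidth,
                  width: overlay.height / longSideInPoints,
                  height: overlay.width / screenWidth)
}

// spadix's example: a 250x150pt box at (20, 50) on a 320pt-wide screen
// with the default 640x480 camera image reproduces his
// {50/426., 1-270/320., 150/426., 250/320.} result.
let crop = zbarScanCrop(overlay: CGRect(x: 20, y: 50, width: 250, height: 150),
                        screenWidth: 320,
                        imageSize: CGSize(width: 640, height: 480))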
