I have a nested video like this:
[Screenshot: live camera feed]
When the user takes a photo, the image is offset along the y axis:
[Screenshot: captured still image]
I do want to capture the WHOLE image and let the user scroll up and down. They can do this currently, but I want the starting scroll position of the image to be centered to match the camera feed preview, so that if they take a picture, the image lines up with the frame that the video feed was showing.
The problem is that, because the preview layer's video gravity is set to AVLayerVideoGravityResizeAspectFill, it does some 'cropping' to fit the image into the live preview. Since the height is much bigger than the width, there are top and bottom parts captured in the image that are NOT showing up in the live feed (naturally).
What I don't know, however, is how much is being cropped from the top, so that I can offset the previewed image to match.
So my question is: Do you know how to calculate how much is being cropped from the top of a camera with its aspect ratio set to AVLayerVideoGravityResizeAspectFill? (Objective-C and Swift answers welcome!)
The solution I came up with is this:
func getVerticalOffsetAdjustment() -> CGFloat {
    // Returns the visible portion of the preview as a normalized rect (0...1),
    // so its origin tells you how much is being cropped off.
    let cropRect = _videoPreviewLayer.metadataOutputRectConverted(fromLayerRect: _videoPreviewLayer.bounds)

    // Because the camera is rotated by 90 degrees in portrait mode,
    // use .x and .width where you conceptually want y and height.
    return cropRect.origin.x / cropRect.width * frame.height
}
It's confusing, I admit, but because the camera is rotated 90 degrees when in portrait mode, you need to use the width and x values. The cropRect will return a value like (0.125, 0, 0.75, 1.0) (your exact values will be different).
What this tells me is that my shifted y value (where the live video feed starts) is shifted down 12.5% of the total height, and that the height of the video feed is only 75% of the total height.
So I take 12.5% and divide it by 75% to get the value normalized to my UIWindow, and then apply that amount to the scroll view's offset, as sketched below.
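For example, a usage sketch (scrollView here is an assumed name for the scroll view that displays the captured still):

scrollView.contentOffset = CGPoint(x: 0, y: getVerticalOffsetAdjustment())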
WHEW!!!
I'm using an ARSKView which blends 2D SpriteKit with 3D ARKit. When it displays the camera AR experience, I notice that the field of view of the camera is a bit narrow (in portrait mode). It's equivalent to 1.5x zoom in the built-in camera app.
I would like to zoom out, or widen the field of view a bit... even if it's just to the same 1x resolution that the built-in camera app allows.
Is there any way to do that?
If the video appears zoomed in in an ARSKView, it's most likely because its frame size is too big in one dimension, so it's essentially doing an aspect-fill, which causes one of the dimensions to zoom in.
Make sure the scene.scaleMode is .resizeFill [1]. Next, make sure that the ARSKView's width / height ratio is exactly the same as that of configuration.videoFormat.imageResolution (swap width and height if in portrait mode).
In my case I was using the phone in portrait mode with the back camera. configuration.videoFormat.imageResolution was 1920 x 1440. Since it's in portrait, I jotted down the size as 1440 x 1920 (i.e. I reversed it). Next I calculated the ratio: 1440 / 1920 = 0.75. Thus, on an iPhone 11, which is 414 points wide, I needed to ensure that the height of the ARSKView was 552, since 414 / 552 = 0.75.
If my ARSKView height is too small (e.g. 207), the ratio becomes too big (414 / 207 = 2). In this case, the entire width of the video is shown properly, but the top and bottom of the video are cropped out of frame.
If my ARSKView height is too big (e.g. 828), the ratio becomes too small (414 / 828 = 0.5). In this case, the entire vertical extent of the video is visible, but the horizontal portion is zoomed in to maintain the aspect ratio.
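A minimal sizing sketch of the calculation above, assuming portrait orientation (skView and configuration are placeholder names for your own ARSKView and ARConfiguration):

import ARKit
import SpriteKit

func sizeToMatchCamera(_ skView: ARSKView,
                       configuration: ARWorldTrackingConfiguration,
                       screenWidth: CGFloat) {
    // The camera reports its resolution in landscape, e.g. 1920 x 1440.
    let resolution = configuration.videoFormat.imageResolution

    // Swap width and height for portrait: 1440 / 1920 = 0.75.
    let portraitRatio = resolution.height / resolution.width

    // e.g. on an iPhone 11: 414 / 0.75 = 552.
    let viewHeight = screenWidth / portraitRatio

    skView.frame = CGRect(x: 0, y: 0, width: screenWidth, height: viewHeight)
    skView.scene?.scaleMode = .resizeFill
}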
[1] The other scale modes like .fill and .aspectFill might work for your use case as well, but the one you likely want to avoid at all costs is .aspectFit, which behaves very oddly and never shows you the full video resolution no matter the size of the view. As you grow the ARSKView's height from 0, you'll notice that it crops both the vertical and horizontal parts of the video; once you reach 552 it stops revealing more (never having reached the full video resolution) and instead starts zooming, followed by a weird black bar being added that further covers the vertical dimension. It also shows black bars in the vertical and horizontal dimensions nearly the entire time, leading to a poor user experience.
I have a UIView that works as a camera preview, sized 320x180, and a UIImageView of the same size.
When I take a photo, it generates a UIImage of size 1080x1920, so when I show it in the imageView the photo ends up very compressed along its height, because it is very tall. It looks like this:
██████ the black rectangle is the whole photo (1080x1920)
██████
█▒▒▒▒█ the gray is what the camera shows on screen
██████ (it shows only the gray part but it stores
██████ the whole black part, 1080x1920)
I would like to store it as a UIImage showing exactly what I see in the gray rectangle.
I'm not sure how to do this, since the size of the photo is much bigger than the screen resolution (which is 320 x 568), so it's hard to crop correctly (and the crop is also rotating the image and introducing other bugs).
1080/6 = 180. 1920/6 = 320. So the values are in the correct aspect ratio — but they are reversed. You need to apply the correct rotation value to the image.
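A minimal sketch of one way to apply that rotation, assuming the captured UIImage carries an imageOrientation that simply hasn't been applied to its pixel data yet (the function name is made up):

import UIKit

func normalizedImage(from image: UIImage) -> UIImage {
    // UIImage.draw(in:) honors imageOrientation, so redrawing produces an
    // upright bitmap whose width/height match what you see on screen.
    let renderer = UIGraphicsImageRenderer(size: image.size)
    return renderer.image { _ in
        image.draw(in: CGRect(origin: .zero, size: image.size))
    }
}

Once the orientation is applied, you can crop the upright image down to the region the gray rectangle was showing.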
I want to use PDF vector images in my app, but I don't totally understand how it works. I understand that a PDF file can be resized to any size and it will retain quality. I have a very large PDF image (a cartoon/sticker for a chat app) and it looks perfectly smooth at a medium size on screen. If I start to go smaller though, say thumbnail size, the black outline starts to look jagged. Why does this happen? I thought the images could be resized without quality loss. Any help would be appreciated.
Thanks
I had a similar issue when programmatically changing the UIImageView's center.
This can lead to pixel misalignment of your view, i.e. the x or y of the frame's origin (or the width or height of the frame's size) may lie on a non-integral value, such as x = 10.5, whereas it would display correctly at x = 10.
Rendering views positioned a fraction of a pixel off the grid results in jagged lines; I think it's related to aliasing.
Therefore round your frame's origin and size values to integers, e.g. using CGRect's integral property (CGRectIntegral() in Objective-C).
Example (Swift):
imageView?.frame = CGRect(x: 10, y: 10, width: 100, height: 100).integral
See the Apple documentation https://developer.apple.com/library/mac/documentation/GraphicsImaging/Reference/CGGeometry/#//apple_ref/c/func/CGRectIntegral
I'm stuck with something I can't figure out...
My app lets the user zoom/pan a thumbnail image via a UIScrollView. Then it needs to take the changes the user made in the scroll view and apply them to the same image at a much higher resolution (i.e. generate a high-res UIImage that looks the same as the zoomed/panned low-res thumbnail the user touched).
I can see that the scroll view has a zoomScale and a contentOffset which I can reuse, but I really can't see how to apply these to my UIImage.
All help much appreciated, thanks!
The zoom scale, contentOffset and frame of the UIScrollView together define a sub-rectangle of the thumbnail that is being presented.
Rescale that rectangle proportionally against the higher res version of your image.
e.g.
Your scroller has bounds of 100px x 100px
Your thumbnail is 100px x 100px and is zoomed at 4x with a content offset of (x: 100, y: 100). You will see a sub-rectangle of frame (x: 25, y: 25, w: 25, h: 25) of the original thumbnail inside the 100x100 window of the scroller, i.e. blurry. The width and height come from the scroller's frame divided by the zoom scale.
Once you swap in a high-res image of 1000px x 1000px, you will want to present the same chunk of the image, except now you present (x: 250, y: 250, w: 250, h: 250) by setting the zoom to 0.4. The contentOffset remains the same.
Note that a zoom of 1x and zero offset, which would present the whole thumbnail image, corresponds to a zoom of 0.1x and zero offset against the higher-res image.
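If you instead want to bake the visible region into a new image, here is a minimal sketch of the rescaling described above (scrollView, thumbnail and hiRes are assumed names; the thumbnail is assumed to fill the scroll view's content at zoomScale 1, and hiRes is assumed to have a scale factor of 1):

func cropHiRes(matching scrollView: UIScrollView, thumbnail: UIImage, hiRes: UIImage) -> UIImage? {
    // Visible sub-rectangle in thumbnail coordinates (e.g. (25, 25, 25, 25) above).
    let zoom = scrollView.zoomScale
    let visibleInThumb = CGRect(x: scrollView.contentOffset.x / zoom,
                                y: scrollView.contentOffset.y / zoom,
                                width: scrollView.bounds.width / zoom,
                                height: scrollView.bounds.height / zoom)

    // Rescale proportionally to the high-res image (e.g. (250, 250, 250, 250) above).
    let scale = hiRes.size.width / thumbnail.size.width
    let visibleInHiRes = CGRect(x: visibleInThumb.minX * scale,
                                y: visibleInThumb.minY * scale,
                                width: visibleInThumb.width * scale,
                                height: visibleInThumb.height * scale)

    // Crop the high-res image to that rectangle.
    guard let cropped = hiRes.cgImage?.cropping(to: visibleInHiRes) else { return nil }
    return UIImage(cgImage: cropped)
}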
BUT
You are overthinking the issue. Your container UIImageView does all the work for you. Once you reach your target zoom point, simply load the higher-res image into the imageView (myImageView.image = hiresImage) and it will "just work", assuming your contentMode is set to Scale To Fill (UIViewContentModeScaleToFill) or Aspect Fill. The low-res image will be replaced by the high-res version in exactly the right position.
In my iOS app, I am putting several UIImages on one background UIImage and then saving the overall background image, with all the subimages added onto it, by taking a screenshot programmatically.
Now I want to change the position of the subview UIImages within that saved image, so I want to know how to detect the subview images' positions, given that I captured the whole image as a screenshot.
Record their frames converted to window coordinates. The pixel coordinates in the image will be the same as the frame origin (for non-retina screens) or double it (for retina). The screenshot is of the whole screen, so its dimensions are equivalent to the window frame. UIView has convenience methods to convert arbitrary view frames to another view's (or the window's) coordinates, as in the one-liner below.
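For example (subImageView is an assumed name for one of the overlaid image views):

let frameInWindow = subImageView.convert(subImageView.bounds, to: nil) // nil converts to window coordinates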
EDIT: to deal with an aspect-fit content mode, you have to do the math yourself. You know the frame of the imageView, and you can ask the image for its size. Comparing the aspect ratios tells you in which dimension the image completely fills the view, and then you can compute the other dimension (which will be smaller than the imageView frame). Divide the difference between the view dimension and the image dimension by two, and that gives you the offset of the image inside the view. Now you can save the frame of the image as it's displayed in the view.
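A minimal sketch of that math, assuming the view's contentMode is .scaleAspectFit (the function name is made up):

func displayedImageFrame(in imageView: UIImageView) -> CGRect? {
    guard let image = imageView.image else { return nil }
    let viewSize = imageView.bounds.size
    let imageSize = image.size

    // Scale so the image fits entirely inside the view.
    let scale = min(viewSize.width / imageSize.width, viewSize.height / imageSize.height)
    let fittedSize = CGSize(width: imageSize.width * scale, height: imageSize.height * scale)

    // Half the leftover space on each side centers the image in the view.
    let origin = CGPoint(x: (viewSize.width - fittedSize.width) / 2,
                         y: (viewSize.height - fittedSize.height) / 2)
    let frameInView = CGRect(origin: origin, size: fittedSize)

    // Convert to window coordinates so it can be matched against the screenshot.
    return imageView.convert(frameInView, to: nil)
}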