I'm using an ARSKView which blends 2D SpriteKit with 3D ARKit. When it displays the camera AR experience, I notice that the field of view of the camera is a bit narrow (in portrait mode). It's equivalent to 1.5x zoom in the built-in camera app.
I would like to zoom out, or widen the field of view a bit... even if it's just to the same 1x resolution that the built-in camera app allows.
Is there any way to do that?
If the video appears zoomed in in an ARSKView, it's most likely because its frame is too big in one dimension, so the view is essentially doing an aspect fill, which causes one of the dimensions to zoom in.
Make sure scene.scaleMode is .resizeFill¹. Next, make sure the ARSKView's width / height ratio exactly matches the aspect ratio of configuration.videoFormat.imageResolution (swap width and height if in portrait mode).
In my case I was using the phone in portrait mode with the back camera. configuration.videoFormat.imageResolution was 1920 x 1440. Since it's in portrait, I jotted the size down as 1440 x 1920 (i.e. I reversed it). Next I calculated the ratio: 1440 / 1920 = 0.75. Thus, on an iPhone 11, which is 414 points wide, I needed the ARSKView's height to be 552, since 414 / 552 = 0.75.
If my ARSKView height is too small (e.g. 207) this makes the ratio too big (e.g. 414 / 207 = 2). In this case, the entire width of the video will be seen properly, but the top and bottom of the video will be cropped out of frame.
If my ARSKView height is too big (e.g. 828), this makes the ratio too small (e.g. 414 / 828 = 0.5). In this case, the entire vertical portion of the video is visible, but the horizontal portion is zoomed in to maintain the aspect ratio.
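Here's a minimal sketch of that calculation in code (assuming portrait orientation, a back-camera ARWorldTrackingConfiguration, and placeholder names arView / configuration; scene.scaleMode is still set to .resizeFill separately):

import ARKit

func preferredARSKViewHeight(for arView: ARSKView,
                             configuration: ARWorldTrackingConfiguration) -> CGFloat {
    // The camera reports its resolution in landscape (e.g. 1920 x 1440),
    // so swap width and height when working in portrait.
    let resolution = configuration.videoFormat.imageResolution
    let portraitRatio = resolution.height / resolution.width   // 1440 / 1920 = 0.75

    // We want view width / view height to equal that ratio,
    // so height = width / ratio (414 / 0.75 = 552 on an iPhone 11).
    return arView.bounds.width / portraitRatio
}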
¹ The other fill scale modes like .fill and .aspectFill might work for your use case as well, but the one you likely want to avoid at all costs is .aspectFit, which behaves very oddly and never shows you the full video resolution no matter the size of the view. As you grow the ARSKView's height from 0, you'll notice it crops both the vertical and horizontal parts of the video, and once you reach 552 it stops revealing more (never having reached the full video resolution) and instead starts zooming, followed by a weird black bar being added to further cover the vertical dimension. It also shows black bars in the vertical and horizontal dimensions nearly the entire time, leading to a poor user experience.
I have a Photoshop file with 8 concentric 'rings' (although some aren't rings and are more irregular), the largest at the 'back' and decreasing in size down to the 8th, which is very small, in the centre.
The 'rings' are drawn in such a way that each smaller one is 'internal' to its 'outer' or next larger ring. Each 'ring' has transparency on its outside, but also on its inside (where the smaller rings 'sit').
I need to support all iOS devices (Universal App).
The largest image has a default size of 2048 x 2048 pixels, and all 8 layers share a common 'centre' point around which they need to rotate and around which they need to stay fixed.
So, basically, all 8 have to be layered, one on top of the other, such that their centres are all perfectly aligned.
BUT the size of the artwork is larger than any iOS device, and the auto-layout has to allow for every device size and orientation, with the largest (rear) layer having an 8 point inset from the screen edges.
For those that can't picture this, here is a crude representation, where the dark background is 'transparent' and represents the smaller of the width or height of the iOS device (depending on orientation):
Note: The placement of each smaller UIImageView is precise. They all share a common centre (the centre of the screen), but each ring sits 'inside' the larger ring behind it; i.e. the centres of the green, hot pink and baby pink circles are empty / transparent, and no matter the screen size or orientation, they have to nest together perfectly, as they do in the Photoshop art assets.
I've spent hours in auto-layout trying to sort this out, and when I've got it working on one device and both orientations, it's not working on any others.
No code to show because I'm trying to do this in IB so I can preview on all devices.
Is this even possible using IB / Auto-Layout or do I have to switch to manually working out the scales by which to resize their UIImageView based on screen width / height at runtime and the relationship between each 'ring'?
Edit:
And unless I'm doing it wrong, embedding each UIImageView in a transparent UIView to fake 'insets' doesn't work either, because those numbers are hard-coded. When it's perfect on a 12.9" iPad Pro, on an iPhone SE each 'inset' UIImageView is much more compressed and doesn't sit 'inside' its next larger ring, but looks like a tiny letter O with lots of surrounding blank space, because the 'insets' don't scale. What is 100 pts on an iPad is a tiny amount of space, but 100 pts on an iPhone SE is a third of the screen.
You can draw the circles using CAShapeLayer and UIBezierPath. Since you are trying to fit this in a square, I'd make the container's size either the width or height of the parent container, whichever is smaller; this allows for rotation and different screen sizes. As for the centre, you can always find it from the square container's dimensions (container.bounds.size.width / 2). To rotate your layers/sublayers you can use this answer: https://stackoverflow.com/a/3929703/4577610
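A rough sketch of that idea, assuming a placeholder ringContainer view and plain stroked circles instead of your Photoshop artwork (the point is just the sizing math, which scales on any screen):

import UIKit

class RingsViewController: UIViewController {

    let ringContainer = UIView()   // square container, side = smaller screen dimension
    let ringCount = 8

    override func viewDidLayoutSubviews() {
        super.viewDidLayoutSubviews()

        // Square container sized from the smaller dimension, minus the 8 pt inset on each side.
        let side = min(view.bounds.width, view.bounds.height) - 16
        ringContainer.frame = CGRect(x: 0, y: 0, width: side, height: side)
        ringContainer.center = CGPoint(x: view.bounds.midX, y: view.bounds.midY)
        if ringContainer.superview == nil { view.addSubview(ringContainer) }

        // Rebuild the ring layers so they rescale on rotation / size changes.
        ringContainer.layer.sublayers?.forEach { $0.removeFromSuperlayer() }
        let center = CGPoint(x: side / 2, y: side / 2)
        for i in 0..<ringCount {
            // Each ring's radius is a fixed fraction of the container,
            // so the rings nest identically on every device.
            let radius = (side / 2) * CGFloat(ringCount - i) / CGFloat(ringCount)
            let ring = CAShapeLayer()
            ring.path = UIBezierPath(arcCenter: center, radius: radius,
                                     startAngle: 0, endAngle: .pi * 2, clockwise: true).cgPath
            ring.fillColor = UIColor.clear.cgColor
            ring.strokeColor = UIColor.blue.cgColor
            ring.lineWidth = 4
            ringContainer.layer.addSublayer(ring)
        }
    }
}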
I have a question about using images for universal apps on iOS.
I've created a universal app that works on all iPhones and iPads, with all the content placed via the storyboard. On a view I have two buttons: one in the top area that is 40 points high and as wide as the view, one in the bottom area also 40 points high and as wide as the view, and a square image (A x A) in the middle of the view. The image is constrained horizontally and vertically so it always stays centered in the view.
(I understand that if I use an image that is 100 by 100 points, I need to provide it at 100 x 100 pixels for 1x, plus two other versions at 200 x 200 pixels for 2x and 300 x 300 pixels for 3x.)
1) So if I constrain the UIImageView's height and width to 100 by 100, it will be 100 by 100 points on all devices. But I want it to use as much space as possible. With a fixed size it would always be 100 by 100 from the iPhone 5 to the 6s+, looking smaller as the screen grows. It would also mean I'd need to make it bigger for iPad (switching the storyboard to the regular/regular size class to change the UIImageView's size, e.g. increasing it to 300 x 300). But once I make it bigger, the image asset I have will be too small for that size and will therefore turn blurry or pixelated.
Right?
2) To use as much space as possible, I thought of the following method: constraining the UIImageView to equal widths with the view but with a multiplier of 0.9 or 0.8 (making it slightly narrower than the view) and adding a 1:1 aspect ratio constraint (to keep it square). That way it takes advantage of most of the free space in the view, and on every device it appears to fill the same proportion of the screen. However, the problem is that the image would then end up at different sizes (e.g. iPhone 5 = 150 x 150, iPhone 6 = 250 x 250, iPhone 6+ = 320 x 320 and iPad = 600 x 600).
So if I make an image at 2x that is 150 x 150, when it's used on the iPhone 6 it would be distorted or pixelated, and the same goes for the rest.
So can someone help me understand what I should do, or link a tutorial?
Please help!!
I've found that the best way to solve this problem is to make a much larger image and let the constraints resize it down. That way you've covered your bases for the current range of sizes and for any new resolutions that come along. A much larger image won't look bad when it's compressed into a smaller space, though you might lose some detail.
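If it helps, here's a hedged sketch of how that combines with the constraint approach from the question (the asset name bigSquareImage and the 0.9 multiplier are illustrative, not prescriptive): one oversized asset, scaled down by a proportional width constraint plus a 1:1 aspect ratio.

import UIKit

class SquareImageViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()

        // One large asset (say 1200 x 1200 px) that Auto Layout scales down on every device.
        let imageView = UIImageView(image: UIImage(named: "bigSquareImage"))
        imageView.translatesAutoresizingMaskIntoConstraints = false
        imageView.contentMode = .scaleAspectFit
        view.addSubview(imageView)

        NSLayoutConstraint.activate([
            imageView.centerXAnchor.constraint(equalTo: view.centerXAnchor),
            imageView.centerYAnchor.constraint(equalTo: view.centerYAnchor),
            // 90% of the view's width on every device...
            imageView.widthAnchor.constraint(equalTo: view.widthAnchor, multiplier: 0.9),
            // ...and a 1:1 ratio so it stays square.
            imageView.heightAnchor.constraint(equalTo: imageView.widthAnchor)
        ])
    }
}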
I'm working on an app using Xcode 6 and I'm trying to frame an image. I have one image that changes dynamically depending on the selection on the prior screen. Behind it I have another image that is literally a picture of a frame. The idea is to have the actual image look like it is surrounded by the frame. Here's the trick: I want the actual image to fill most of the horizontal size of the screen (say 85% if you want a number), resizing to that width based on the screen width (iPhone 4 vs iPhone 6, for example), with the height following from the width to maintain the original aspect ratio. The frame should be about 10 pixels wider and 10 pixels taller, leaving 5 pixels around each edge, and the two pictures should be centered at the same point.
I've seen a few programmatic fixes for resizing things based on the original aspect ratio of the image. However, I've been primarily using just Storyboard and was hoping to get an answer along those lines. Thanks so much!
Use Auto Layout and add constraints from the left and right of the frame to the superview. Then add constraints so the image inside the frame is inset from the frame's top, bottom, left, and right edges (5 points per side, per your spec). The frame's left and right constraints will keep everything centered on the screen. Do you have any experience with Auto Layout? It can be a bit of a difficult learning curve, but for your purposes it's a perfect solution.
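In case it helps to see equivalent constraints in code, here's a rough sketch following the question's spec of 85% width and a 5-point inset per edge (frameImageView, photoImageView and the asset names are placeholders; this isn't necessarily how you'd wire it up in the storyboard, but the constraint relationships are the same):

import UIKit

class FramedImageViewController: UIViewController {
    let frameImageView = UIImageView(image: UIImage(named: "frame"))   // hypothetical assets
    let photoImageView = UIImageView(image: UIImage(named: "photo"))

    override func viewDidLoad() {
        super.viewDidLoad()

        // The frame is added first so it sits behind the photo.
        for imageView in [frameImageView, photoImageView] {
            imageView.translatesAutoresizingMaskIntoConstraints = false
            view.addSubview(imageView)
        }

        // Preserve the photo's own aspect ratio (falls back to square if the image is missing).
        let aspect = photoImageView.image.map { $0.size.height / $0.size.width } ?? 1

        NSLayoutConstraint.activate([
            // Photo: centered, 85% of the screen width, height follows its aspect ratio.
            photoImageView.centerXAnchor.constraint(equalTo: view.centerXAnchor),
            photoImageView.centerYAnchor.constraint(equalTo: view.centerYAnchor),
            photoImageView.widthAnchor.constraint(equalTo: view.widthAnchor, multiplier: 0.85),
            photoImageView.heightAnchor.constraint(equalTo: photoImageView.widthAnchor, multiplier: aspect),

            // Frame: extends 5 points beyond the photo on every edge, so both share the same center.
            frameImageView.topAnchor.constraint(equalTo: photoImageView.topAnchor, constant: -5),
            frameImageView.bottomAnchor.constraint(equalTo: photoImageView.bottomAnchor, constant: 5),
            frameImageView.leadingAnchor.constraint(equalTo: photoImageView.leadingAnchor, constant: -5),
            frameImageView.trailingAnchor.constraint(equalTo: photoImageView.trailingAnchor, constant: 5)
        ])
    }
}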
I have a nested video preview like this:
(screenshot: live camera feed)
When the user takes a photo, the image is offset along the y axis:
(screenshot: captured still image)
I do want to capture the WHOLE image and let the user scroll up and down. They can do this currently but I want the starting scroll of the image to be centered to match the camera feed preview. So if they take a picture, the image matches the frame that the video feed was showing.
The problem is that, because the preview layer's video gravity is set to AVLayerVideoGravityResizeAspectFill, it's doing some 'cropping' to fit the image into the live preview. Since the height is much bigger than the width, there are top and bottom parts captured in the image that do NOT show up in the live feed (naturally).
What I don't know, however, is how much the top is being cropped so I can offset the previewed image to match this.
So my question is: do you know how to calculate how much is being cropped from the top when the camera preview's video gravity is set to AVLayerVideoGravityResizeAspectFill? (Objective-C and Swift answers welcome!)
The solution I came up with is this:
func getVerticalOffsetAdjustment() -> CGFloat
{
    // Returns the visible crop box (normalized, in metadata-output coordinates),
    // so its origin tells you how far the preview is offset into the full image.
    let cropRect: CGRect = _videoPreviewLayer.metadataOutputRectOfInterestForRect(_videoPreviewLayer.bounds)

    // Because the camera is rotated by 90 degrees, use .x and .width
    // for what is visually the vertical axis when in portrait mode.
    return cropRect.origin.x / cropRect.width * frame.height
}
It's confusing, I admit, but because the camera is rotated 90 degrees when in portrait mode, you need to use the width and x values. The cropRect will come back as something like (0.125, 0, 0.75, 1.0) (your exact values will be different).
What this tells me is that the portion the live video feed is showing me is shifted down by 12.5% of the image's total height, and that the video feed's height covers only 75% of the total height.
So I take 12.5% and divide by 75% to get the normalized (to my UIWindow) value and then apply that amount to the scrollview offset.
WHEW!!!
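For what it's worth, applying the adjustment is then just a matter of offsetting the scroll view once the still image is in place (photoScrollView is a placeholder name for your scroll view):

// Start the scroll view pushed down by the computed adjustment so the visible
// region of the still matches what the live preview was showing.
let verticalOffset = getVerticalOffsetAdjustment()
photoScrollView.setContentOffset(CGPoint(x: 0, y: verticalOffset), animated: false)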
The photo taken using UIImagePickerController has a 4:3 aspect ratio. However, the full-screen aspect ratio is 3:2, so the gallery app is doing some magic to show the photo at 3:2. When you zoom out in the full-screen view, the photo appears at its 4:3 aspect ratio. Can anyone shed light on how this could be done? I've been racking my brain over this for the past two weeks.
Really appreciate the help!!
To fit a 4:3 image into a 3:2 space you can either match the height or match the width.
If you were to match the height then you'd turn the 3 in 4:3 into the 2 in 3:2. So you'd scale the entire image by 2/3. Since you'd be scaling width and height by the same amount, the effective width after scaling would be the 4 from 4:3 scaled by 2/3, to give 8/3, a bit less than three. You'd therefore not quite fill the screen.
Conversely, if you were to match the width then you'd turn the 4 in 4:3 into the 3 in 3:2. So you'd scale the entire image by 3/4. Since you'd be scaling width and height by the same amount, the effective height at the end would be the 3 from 4:3 scaled by 3/4, to give 9/4 — a bit more than two. You'll therefore slightly more than fill the screen.
So what the Photos app does is display pictures with an initial zoom that fits the width of the stored image to the width of the display. If the stored image is 3264x2448 (which I think it is on the iPhone 4S and the 5), then on an iPhone 4S, using points rather than pixels, it's scaled by a ratio of 480/3264. If you work that out, it gives the image a final height of 360pt, 40pt taller than the screen.
In terms of UIKit, that probably means putting a UIImageView inside a UIScrollView and setting the initial value of zoomScale to 480/3264 (i.e. approximately 0.15). The scroll view can help you with zooming in and out, though there's still some manual work to be done; see e.g. this tutorial. By setting a minimumZoomScale of 320/2448 (i.e. approximately 0.13) you'll automatically get the behaviour where zooming out as far as you can go ends up showing the entire 4:3 image on screen.
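A minimal sketch of that setup, using the 3264 x 2448 image and 480 x 320 pt screen from the example above (scrollView, imageView and the asset name are placeholders; adjust the numbers for your actual image and screen):

import UIKit

class PhotoZoomViewController: UIViewController, UIScrollViewDelegate {
    let scrollView = UIScrollView()
    let imageView = UIImageView(image: UIImage(named: "photo"))   // 3264 x 2448 in this example

    override func viewDidLoad() {
        super.viewDidLoad()

        scrollView.frame = view.bounds                 // 480 x 320 pt in landscape on an iPhone 4S
        scrollView.delegate = self
        view.addSubview(scrollView)

        imageView.frame = CGRect(origin: .zero, size: imageView.image?.size ?? .zero)
        scrollView.addSubview(imageView)
        scrollView.contentSize = imageView.bounds.size

        scrollView.minimumZoomScale = 320.0 / 2448.0   // fully zoomed out: whole 4:3 image visible
        scrollView.maximumZoomScale = 1.0
        scrollView.zoomScale = 480.0 / 3264.0          // initial zoom: image width fills the screen width
    }

    // Tell the scroll view which subview to zoom.
    func viewForZooming(in scrollView: UIScrollView) -> UIView? {
        return imageView
    }
}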
Not sure how you obtain your image, but you might have gotten one of the representations of the image. One of those representations is specifically for getting a quick fullScreen CGImage, another will return the full resolution. The fullScreen one will be whatever is needed for the device (640x960 on an iPhone 4); the full-resolution one would be the 8 MP picture.