How to take UIScrollView's zoomScale & contentOffset, and apply to larger image? - ios

I'm stuck with something I can't figure out...
My app lets the user zoom/pan a thumbnail image via a UIScrollView. Then it needs to take the changes the user made in the scroll view and apply them to the same image at a much higher resolution (i.e. generate a high-res UIImage that looks the same as the zoomed/panned low-res thumbnail the user touched).
I can see that the scroll view has a zoomScale and contentOffset which I can reuse, but I really can't see how to apply these to my UIImage.
All help much appreciated, thanks!

The zoomScale, contentOffset, and frame of the UIScrollView together define a sub-rectangle of the thumbnail.
Rescale that rectangle proportionally against the higher res version of your image.
e.g.:
Your scroller has bounds of 100px × 100px.
Your thumbnail is 100px × 100px and is zoomed at 4x with a content offset of (x:100, y:100). You will see a sub-rectangle of frame (x:25, y:25, w:25, h:25) against the original thumbnail inside the 100×100 window of the scroller, i.e. blurry. The width and height come from the scroller's frame divided by the zoom scale.
Once you flip in a high-res image of 1000px × 1000px, you are going to want to present the same chunk of the image, except now you present (x:250, y:250, w:250, h:250) by setting the zoom to 0.4. The contentOffset remains the same.
Note that a zoom of 1x and zero offset, which would present the whole thumbnail, corresponds to a zoom of 0.1x and zero offset against the higher-res image.
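The arithmetic above can be sketched as a couple of small helpers. This is just a sketch of the math in the example; `Rect` is a stand-in for CGRect so the code is self-contained:

```swift
// Minimal stand-in for CGRect so the math is self-contained.
struct Rect { var x, y, w, h: Double }

// Visible sub-rectangle of the content, in the content's own coordinates.
func visibleRect(boundsW: Double, boundsH: Double,
                 offsetX: Double, offsetY: Double,
                 zoom: Double) -> Rect {
    Rect(x: offsetX / zoom, y: offsetY / zoom,
         w: boundsW / zoom, h: boundsH / zoom)
}

// Rescale a rect taken against the thumbnail to the hi-res image.
func scaled(_ r: Rect, by factor: Double) -> Rect {
    Rect(x: r.x * factor, y: r.y * factor,
         w: r.w * factor, h: r.h * factor)
}

// The example from the text: 100x100 scroller, 4x zoom, offset (100,100).
let thumbRect = visibleRect(boundsW: 100, boundsH: 100,
                            offsetX: 100, offsetY: 100, zoom: 4)
// thumbRect is (25, 25, 25, 25)

let hiResRect = scaled(thumbRect, by: 10)   // 1000px / 100px = 10
// hiResRect is (250, 250, 250, 250); the zoom needed to show that
// chunk in a 100pt window is 100 / 250 = 0.4, matching the text.
```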
BUT
You are overthinking the issue. Your container UIImageView does all the work for you. Once you reach your target zoom point, simply load the higher-res image into the image view (myImageView.image = hiresImage) and it will "just work", assuming your contentMode is set to Scale To Fill (UIViewContentModeScaleToFill) or Aspect Fill. The low-res image will be replaced by the high-res version in exactly the right position.
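A minimal sketch of that swap, assuming the image view is the scroll view's zooming view and a hypothetical `hiresImage` property (names are illustrative, not from the original):

```swift
import UIKit

class ZoomViewController: UIViewController, UIScrollViewDelegate {
    let imageView = UIImageView()
    var hiresImage: UIImage?   // hypothetical hi-res version of the thumbnail

    func viewForZooming(in scrollView: UIScrollView) -> UIView? {
        return imageView
    }

    // Once zooming ends, swap in the hi-res image; with contentMode
    // set to .scaleToFill (or .scaleAspectFill) it lands in exactly
    // the same position as the thumbnail it replaces.
    func scrollViewDidEndZooming(_ scrollView: UIScrollView,
                                 with view: UIView?, atScale scale: CGFloat) {
        if let hires = hiresImage {
            imageView.image = hires
        }
    }
}
```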

Related

Maintain image quality in iOS Application

I am trying to create a photo application but I am having a tough time formatting my photos so that they show clearly.
I have an image view sized 320 × 500, and an image sized 3648 × 2736 px (which of course I can scale down).
imageView.contentMode=UIViewContentModeScaleAspectFit;
With imageView.contentMode = UIViewContentModeScaleAspectFit; I scaled the image down to 700 × 525 px (IMGA) and to 500 × 325 px (IMGB).
In this mode
IMGA fills the entire image view but is somehow a little distorted/not crisp.
IMGB does not fill the image view at the top and bottom, but the width is perfect and the image is crisp.
With UIViewContentModeScaleAspectFill
the image is made to fill the UIImageView but is again distorted, even though the image is scaled down rather than up.
I see many apps with crisp large images, and I am hoping that someone can help me with the sizing/contentMode needed to get my images looking better, or correct my resizing.
P.S. I have been looking at this link for help but I'm still missing my goal.
Difference between UIViewContentModeScaleAspectFit and UIViewContentModeScaleToFill?
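One way to see what is happening to the two images is to compute what aspect-fit actually does: it scales by min(viewW/imageW, viewH/imageH), so an image that doesn't match the view's pixel size gets resampled again at display time, which costs sharpness. A sketch of that calculation using the sizes from the question (plain Double math, no UIKit):

```swift
// Displayed size under UIViewContentModeScaleAspectFit.
func aspectFitSize(imageW: Double, imageH: Double,
                   viewW: Double, viewH: Double) -> (w: Double, h: Double) {
    // Aspect-fit scales by the smaller ratio so the whole image fits.
    let scale = min(viewW / imageW, viewH / imageH)
    return (imageW * scale, imageH * scale)
}

// 700x525 in a 320x500 view is width-limited: it is resampled down to
// roughly 320x240 -- a second rescale, hence the softness.
let a = aspectFitSize(imageW: 700, imageH: 525, viewW: 320, viewH: 500)

// 500x325 also fits the width, at scale 320/500 = 0.64, giving 320x208 --
// it fills the width but leaves gaps at the top and bottom.
let b = aspectFitSize(imageW: 500, imageH: 325, viewW: 320, viewH: 500)
```

For the crispest result, supply the image at exactly the displayed point size multiplied by the screen scale (e.g. 640 × 480 for a 320 × 240 slot on a 2x screen), so no resampling happens at display time.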

Xcode 6 UIImageView will not scale correctly

Please see the screenshot of the flower setup below. The flower image has been correctly loaded from an asset catalog, and when the app is run on various simulators the correct pixel resolution is assigned to each device. My problem is how to get the flower image to be scaled (equally sized to fit) the same on each device.
I have learnt how to position the image using constraints and frames, but the image never scales correctly - please see the first pic.
The following image is a mock-up of what I want to be able to do (the flower image scaled correctly on each device).
Judging by your mockups, it looks like you want the image to fill half the width, and keep its square aspect ratio to determine its height. One way to approach this is to use Auto Layout to make a left UIImageView and a right placeholder (blank) view. Pin the left view to the left edge of the parent and the right view to the right edge, then set them to be 0 points from each other. Then set an equal-widths constraint on them. Finally, control-drag the image view to itself and you can select Aspect Ratio -- assuming that in IB the width and height are the same, it will keep it square. Adding an equal-heights constraint will give the other view the height it needs to be equal, in case you need that.
This gives you a left image view that is 50% and with your mode set to Aspect Fit or Aspect Fill, it should give you the results in your mockups. In case you have an image that isn't square, make sure to check Clip Subviews for your UIImageView to prevent showing the overflow.
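The same constraints can be sketched in code rather than Interface Builder; this is a minimal sketch with illustrative names (the views and parent are hypothetical, not from the original question):

```swift
import UIKit

// Lay out a half-width, square image view on the left, with a blank
// placeholder filling the right half.
func layoutHalfWidthSquare(imageView: UIImageView,
                           placeholder: UIView,
                           in parent: UIView) {
    imageView.translatesAutoresizingMaskIntoConstraints = false
    placeholder.translatesAutoresizingMaskIntoConstraints = false
    NSLayoutConstraint.activate([
        // Pin the image view left, the placeholder right, 0 points apart.
        imageView.leadingAnchor.constraint(equalTo: parent.leadingAnchor),
        placeholder.trailingAnchor.constraint(equalTo: parent.trailingAnchor),
        placeholder.leadingAnchor.constraint(equalTo: imageView.trailingAnchor),
        // Equal widths: each view takes half the parent's width.
        placeholder.widthAnchor.constraint(equalTo: imageView.widthAnchor),
        // Keep the image view square (1:1 aspect ratio).
        imageView.heightAnchor.constraint(equalTo: imageView.widthAnchor),
        imageView.topAnchor.constraint(equalTo: parent.topAnchor),
    ])
    imageView.contentMode = .scaleAspectFit
}
```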
The real problem is that your goals are not well defined. Scaled with respect to what? The screen has a height and a width. You can't scale with respect to both, because that would distort the image (because different devices have different overall aspect ratios). Thus you have to pick one dimension that you will scale to.
Judging from your mockup, I'm going to guess that what you want is to scale with respect to height. If so, then give the image view a height constraint that is a fixed fraction of its superview's height. It looks to be about 0.25 in your mockups but you will have to eyeball what you think is a good fraction.
Now, if the content mode of the image view is Aspect Fit (or Aspect Fill), the image will change its height along with the image view without distorting the aspect ratio.
However, it would be best if you would make the other dimension (here, the width) of the image view a fixed fraction of the height, such as to make the aspect ratio of the image view the same as the aspect ratio of the image. The reason is that otherwise the image might end up centered in the image view in such a way that it doesn't touch the top or left any more, even though the image view itself does.
// Make the image view a square spanning half the screen width
CGFloat screenWidth = [[UIScreen mainScreen] bounds].size.width;
yourImageView.frame = CGRectMake(0, 0, screenWidth / 2, screenWidth / 2);

Compensate For AVLayerVideoGravityResizeAspectFill Height Difference

I have a nested video like this:
Live camera feed
When the user takes a photo, the image is offset along the y axis
Captured Still image
I do want to capture the WHOLE image and let the user scroll up and down. They can do this currently, but I want the initial scroll position of the image to be centered so it matches the camera feed preview. So if they take a picture, the image matches the frame that the video feed was showing.
The problem is that because the camera's gravity is set to AVLayerVideoGravityResizeAspectFill, it does some 'cropping' to fit the image into the live preview. Since the height is much bigger than the width, there are top and bottom parts captured in the image that (naturally) do not show up in the live feed.
What I don't know, however, is how much is being cropped from the top, so that I can offset the previewed image to match.
So my question is: Do you know how to calculate how much is being cropped from the top of a camera with its aspect ratio set to AVLayerVideoGravityResizeAspectFill? (Objective-C and Swift answers welcome!)
The solution I came up with is this:
func getVerticalOffsetAdjustment() -> CGFloat
{
    // Returns the visible region as a normalized (0-1) rect in the
    // capture output's coordinate space, so you can use its offset.
    let cropRect: CGRect = _videoPreviewLayer.metadataOutputRectOfInterestForRect(_videoPreviewLayer.bounds)
    // Because the camera is rotated by 90 degrees, you need to use .x
    // for the actual y value when in portrait mode.
    return cropRect.origin.x / cropRect.width * frame.height
}
It's confusing, I admit, but because the camera is rotated 90 degrees when in portrait mode, you need to use the width and x values. The cropRect will return a value like (0.125, 0, 0.75, 1.0) (your exact values will be different).
What this tells me is that the y position the video live feed shows me is shifted down by 12.5% of the total height, and that the height of the video feed is only 75% of the total height.
So I take 12.5%, divide by 75% to normalize it (to my UIWindow), and then apply that amount to the scroll view offset.
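The same crop can also be derived directly from the video and layer aspect ratios: under resize-aspect-fill, the video is scaled until it covers the layer, and whatever overflows is clipped equally on both sides. A self-contained sketch of that arithmetic (plain Double math; the metadata-rect call above is what queries AVFoundation for you, and the example sizes are assumptions chosen to reproduce the 12.5% figure):

```swift
// Fraction of the video's height cropped from the top (and, symmetrically,
// the bottom) when displayed with aspect-fill in a layer of the given size.
func topCropFraction(videoW: Double, videoH: Double,
                     layerW: Double, layerH: Double) -> Double {
    // Aspect-fill scales by the larger of the two ratios so the video
    // covers the layer completely.
    let scale = max(layerW / videoW, layerH / videoH)
    // Fraction of the video's height that actually fits in the layer.
    let shownH = layerH / (videoH * scale)
    // The overflow is clipped equally top and bottom.
    return (1 - shownH) / 2
}

// e.g. a 1080x1440 (3:4) video in a 1080x1080 layer: the sides fit
// exactly, so 25% of the height overflows, 12.5% at each end --
// matching the 0.125 x-origin from the metadata rect above.
let f = topCropFraction(videoW: 1080, videoH: 1440,
                        layerW: 1080, layerH: 1080)
```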
WHEW!!!

iOS - Get framing of Visible part of UIImage from UIImageView

I am trying to make a transition like APP Tinder.
Detail:
In Screen One there is a vertical rectangular UIImageView with contentMode = Aspect Fill, so it hides some portion of the image to preserve the aspect ratio.
In Screen Two (the detail screen) the same image has to be passed after the transition, but the image view in the second screen is a square one.
I want to make a morphing kind of transition, in which the user should think that the same image view from Screen One becomes the square one in Screen Two, without stretching the image. So what should I do?
Currently I am trying to get the frame of the UIImage that is in the visible area of the UIImageView, so that I can do some logic to achieve this. Can anyone help me get the frame of the visible portion of the UIImage?
EDIT
Please see the attached image for clarification.
I think there's a little ambiguity in the question: a frame must be specified in a coordinate system. But I think you're looking for a rect relative to the original, unclipped image.
If that's right, then the rect can be computed as follows. Say the image is called image, and the image view is imageView. The size of the rect is the size of the image view:
imageView.bounds.size
And, since aspect fill will center the oversized dimension, its origin is:
CGPointMake((image.size.width - imageView.bounds.size.width) / 2.0, 0.0);
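Note that the formulas above assume the image is displayed at 1:1 in the fitting dimension (image height equal to view height). A more general sketch computes the aspect-fill scale first and then maps the view's bounds back into image coordinates; this is an assumption-laden generalization, not the original answer's code, and a plain tuple stands in for CGRect:

```swift
// Visible portion of an image shown with aspect-fill, expressed in the
// image's own pixel coordinates.
func visibleImageRect(imageW: Double, imageH: Double,
                      viewW: Double, viewH: Double)
    -> (x: Double, y: Double, w: Double, h: Double) {
    // Aspect-fill scales by the larger ratio so the image covers the view.
    let scale = max(viewW / imageW, viewH / imageH)
    // Map the view's bounds back into image pixels.
    let w = viewW / scale, h = viewH / scale
    // The overflowing dimension is centered, so offset by half the excess.
    return ((imageW - w) / 2, (imageH - h) / 2, w, h)
}

// e.g. a 1200x800 image in a 300x400 view: scale = 0.5, so the view
// shows a 600x800 slice of the image, centered horizontally at x = 300.
let r = visibleImageRect(imageW: 1200, imageH: 800, viewW: 300, viewH: 400)
```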

Again edit of edited UIImage in iOS app

In my iOS app, I am putting several UIImages on one back UIImage and then saving the overall back image with all subimages added on it by taking screenshot programmatically.
Now I want to change the positions of the subview UIImages in that saved image, so I want to know how to detect each subview image's position, given that I captured the whole thing as a screenshot.
Record their frames as converted to window coordinates. The pixels of the image will match the frame origin at 1x, or double it on Retina. The screenshot is of the whole screen, so its dimensions are equivalent to the window frame. UIView has some convenience methods to convert arbitrary view frames to another view's (or the window's) coordinates.
EDIT: to deal with aspect fit, you have to do the math yourself. You know the frame of the image view, and you can ask the image for its size. Knowing the aspect ratio of each will let you determine in which dimension the image completely fits, and then you can compute the other dimension (which will be a value less than the image view's). Divide the difference of the view dimension minus the image dimension by two, and that gives you the offset to the image inside the view. Now you can save the frame of the image as it is displayed in the view.
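The math in the edit can be sketched like this: compute the aspect-fit scale, derive the fitted size, and halve the leftover to get the image's offset inside the view. This is a sketch with plain Double math standing in for CGRect:

```swift
// Frame of an aspect-fit image inside its image view, in the view's
// own coordinate space.
func fittedImageFrame(imageW: Double, imageH: Double,
                      viewW: Double, viewH: Double)
    -> (x: Double, y: Double, w: Double, h: Double) {
    // Aspect-fit scales by the smaller ratio so the whole image fits.
    let scale = min(viewW / imageW, viewH / imageH)
    let w = imageW * scale, h = imageH * scale
    // Half the leftover space on each side centers the image in the view.
    return ((viewW - w) / 2, (viewH - h) / 2, w, h)
}

// e.g. a 400x200 image in a 100x100 view fits at scale 0.25, so the
// displayed frame is (0, 25, 100, 50); add the view's window-converted
// origin to express it in window coordinates.
let frame = fittedImageFrame(imageW: 400, imageH: 200, viewW: 100, viewH: 100)
```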
