Editing an already-edited UIImage again in an iOS app

In my iOS app, I place several UIImages on top of one background UIImage and then save the composite image, with all sub-images added to it, by taking a screenshot programmatically.
Now I want to change the positions of the sub-image UIImages in that saved image. So I want to know how to detect the sub-images' positions, given that I captured the whole thing as a single screenshot.

Record their frames converted to window coordinates. The pixel coordinates in the image should match the frame origin at normal (1x) scale, or double it for retina (2x). The screenshot is of the whole screen, so its dimensions are equivalent to the window frame. UIView has some convenience methods to convert arbitrary view frames to another view's (or the window's) coordinates.
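For example, a minimal sketch of recording one sub-image's frame in window coordinates and mapping it to screenshot pixels (subImageView is an illustrative name):
// Convert the subview's frame into window coordinates (toView:nil means the window).
CGRect frameInWindow = [subImageView.superview convertRect:subImageView.frame toView:nil];
// On retina screens the screenshot has more pixels than points, so scale up.
CGFloat screenScale = [UIScreen mainScreen].scale; // 1.0 normal, 2.0 retina
CGRect frameInPixels = CGRectMake(frameInWindow.origin.x * screenScale,
                                  frameInWindow.origin.y * screenScale,
                                  frameInWindow.size.width * screenScale,
                                  frameInWindow.size.height * screenScale);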
EDIT: to deal with aspect-fit content, you have to do the math yourself. You know the frame of the imageView, and you can ask the image for its size. Comparing the two aspect ratios tells you in which dimension the image completely fits; you can then compute the scaled size of the other dimension (which will be smaller than the imageView frame). Divide the difference between the view dimension and the scaled image dimension by two, and that gives you the offset of the image inside the view. Now you can save the frame of the image as it is displayed in the view.
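A sketch of that math (assuming the view's contentMode is UIViewContentModeScaleAspectFit; imageView is an illustrative name):
CGSize viewSize = imageView.bounds.size;
CGSize imageSize = imageView.image.size;
// Aspect fit scales by the smaller ratio so the whole image fits.
CGFloat scale = MIN(viewSize.width / imageSize.width, viewSize.height / imageSize.height);
CGSize displayedSize = CGSizeMake(imageSize.width * scale, imageSize.height * scale);
// The leftover space in the other dimension is split evenly, centering the image.
CGRect displayedFrame = CGRectMake((viewSize.width - displayedSize.width) / 2.0,
                                   (viewSize.height - displayedSize.height) / 2.0,
                                   displayedSize.width,
                                   displayedSize.height);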

Stick two image views' dimensions together on all iOS devices

I want to thank everyone who contributes to this community, and I wish to find a solution to the issue presented below.
This is a capture of my Xcode project's Main storyboard:
It contains a background image view with a grid picture, and a small image view over it with a circle picture. The content mode for both views is Aspect Fit.
What I am trying to achieve here is to get the same combination of the two images, as in this picture, on all devices.
So, basically, I want to stick the circle image to the background (grid) image, so that if the background image's dimensions change on another device, the circle image's dimensions change the same way, keeping the same view as in this picture.
I grabbed your image and clipped out the circle.
The "grid" actual pixels are 312 x 324. The "circle" is 30 x 30.
I set the grid imageView to fill, with a width-to-height ratio of 312:324 and a width equal to 0.75 of the superview's width.
I set the circle imageView to fill, with a width-to-height ratio of 1:1 and a width constraint of 30:312 against the grid imageView's width.
Here is the result:
You would need to calculate the run-time ratio for placement, but that's pretty straightforward.
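In code, those constraints might look something like this (a sketch using layout anchors; the view names are illustrative):
gridImageView.translatesAutoresizingMaskIntoConstraints = NO;
circleImageView.translatesAutoresizingMaskIntoConstraints = NO;
[NSLayoutConstraint activateConstraints:@[
    // grid: 75% of the superview's width, 312:324 aspect ratio, centered
    [gridImageView.widthAnchor constraintEqualToAnchor:self.view.widthAnchor multiplier:0.75],
    [gridImageView.heightAnchor constraintEqualToAnchor:gridImageView.widthAnchor multiplier:324.0/312.0],
    [gridImageView.centerXAnchor constraintEqualToAnchor:self.view.centerXAnchor],
    [gridImageView.centerYAnchor constraintEqualToAnchor:self.view.centerYAnchor],
    // circle: width is 30/312 of the grid's width, 1:1 aspect ratio
    [circleImageView.widthAnchor constraintEqualToAnchor:gridImageView.widthAnchor multiplier:30.0/312.0],
    [circleImageView.heightAnchor constraintEqualToAnchor:circleImageView.widthAnchor],
]];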
Edit:
I whipped up a simple example - has buttons to move the circle based on the intersection points: https://github.com/DonMag/GridScale

Position a UIImageView over another with scaling

Is there a method on UIImageView that tells me the position of its image within its bounds? Say I have an image of a car, like this:
This image is 600x243, and, where the rear wheel should be, there's a hole which is 118,144,74,74 (x,y,w,h).
I want to let the user see different rear wheel options, and I have a few wheel images to choose from (all square, so they are easily scaled to match the hole in the car).
I wanted to place the car image in a UIImageView whose size is arbitrary based on layout, and I wanted to see the whole car at the natural aspect ratio. So I set the image view's content mode to UIViewContentModeScaleAspectFit, and that worked great.
For example, here's the car in an imageView that is 267x200:
I think doing this scaled the image from w=600 to w=267, i.e. by a factor of 267/600 = 0.445, and (I think) that means the displayed height changed from 243 to 243 * 0.445 ≈ 108. And I think it's true that the hole was scaled by that factor, too.
But I want to add a UIImageView subview to show the wheel, and this is where I get confused. I know the image size, I know the imageView size, and I know the hole frame in terms of the original image size. How do I get the hole frame after the image is scaled?
I've tried something like this:
determine the position of the car image in its UIImageView. That's something like:
CGFloat ratio = carImageView.frame.size.width / carImage.size.width; // 267/600 ≈ 0.445
CGFloat yPos = (carImageView.frame.size.height - carImage.size.height * ratio) / 2.0; // center the scaled image vertically; there should be a method for this?
determine the scaled frame of the hole:
CGFloat holeX = ratio * 118;        // hole x scales with the image
CGFloat holeY = yPos + ratio * 144; // hole y scales, plus the centering offset
CGFloat holeEdge = ratio * 74;
CGRect holeRect = CGRectMake(holeX, holeY, holeEdge, holeEdge);
But there must be a better way. These calculations (if they are right) only work for a car image view that is proportionally taller than the car; the code needs to be different if the view is proportionally wider.
I think I can work out the logic for a wider view, but it still might be wrong. For example, that yPos calculation: do the docs say that, for contentMode = AspectFit, the image is centered along the larger dimension? I don't see that anywhere.
Please tell me there's a better way, or, if not, is it proven that my idea here will work for arbitrary size images, image views, holes?
Thanks.
The easiest solution (by far) is to simply use the same sizes for both the car image and the wheel option images.
Just give the wheel options a transparent padding (easy to do in nearly every graphics editing program), and overlay them over the car with the same frame.
You may increase your asset sizes by a minuscule amount, but it'll save you one hell of a headache trying to work out positions and sizes, especially as you're scaling the car image.
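That said, if you do need the general math, AVFoundation has a helper that computes the aspect-fit rect for you. A sketch (the function name here is my own):
#import <AVFoundation/AVFoundation.h>

// Returns the hole's frame in the image view's coordinate space, for an
// aspect-fit image. holeInImage is in the original image's coordinates.
static CGRect scaledHoleRect(UIImage *image, UIImageView *imageView, CGRect holeInImage) {
    // The rect the image actually occupies inside the view under aspect fit.
    CGRect fitted = AVMakeRectWithAspectRatioInsideRect(image.size, imageView.bounds);
    CGFloat scale = fitted.size.width / image.size.width; // same factor for height
    return CGRectMake(fitted.origin.x + holeInImage.origin.x * scale,
                      fitted.origin.y + holeInImage.origin.y * scale,
                      holeInImage.size.width * scale,
                      holeInImage.size.height * scale);
}
For the 600x243 car in a 267x200 view, scaledHoleRect(carImage, carImageView, CGRectMake(118, 144, 74, 74)) handles both the taller and wider cases without special-casing either one.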

iOS Image Resizing / Dealing With Blank Space

I simply dragged a UIImageView into the storyboard and made it square. I added a pink background to show the effect of the leftover space in the image view. In each case I added either a taller image (1st image) or a wider image (2nd image), as well as a text label. Here are my results.
So the obvious question is: how can I get rid of this extra (pink) space and keep the integrity of the photo (that is, without having to stretch or lose part of the image)? If I wanted to be able to scroll through photos, it would be nice to have them all the same width edge-to-edge so they look neat and orderly (if they were portrait); and if I wanted text under each, I'd want the text to be closer to the photo, rather than have all the blank (pink) space in between (if it were landscape). And obviously, different-sized images will leave different amounts of blank space.
So I'm thinking that what I could do, before displaying the image, is get its size, pick a designated distance from either the label or the edge of the screen depending on the picture's orientation, and then create or resize the UIImageView with a bit of math on the image dimensions before inserting the picture into it. Is this possible? Is there another method I can't quite figure out?
Just look at any decent photo app: the photos are neatly organized and displayed despite being different sizes, orientations, etc., and I'm wondering how to pull this off. I obviously haven't gotten too deep into using images beyond simply showing them in a pre-determined image view.
Thanks for the help/suggestions!
Try this: set your UIImageView to AspectFit (not AspectFill, since that will lose some of the image) and, using constraints, do the following:
centre the UIImageView in the container both horizontally and vertically
set the UILabel to float below the UIImageView by whatever distance you desire ("standard" is usually good)
set the left, right, and top constraints on the UIImageView to be >= whatever distance you desire
set the bottom constraint on the UILabel to be (once again) >= whatever distance you desire
The effect of this should be that the UIImageView will properly resize itself to its intrinsic size and the constraints should properly position it and the label.
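A sketch of those constraints in code (container, imageView, and label are illustrative names; the 8-point spacing stands in for whatever distance you pick):
imageView.contentMode = UIViewContentModeScaleAspectFit;
imageView.translatesAutoresizingMaskIntoConstraints = NO;
label.translatesAutoresizingMaskIntoConstraints = NO;
[NSLayoutConstraint activateConstraints:@[
    // centre the image view in the container
    [imageView.centerXAnchor constraintEqualToAnchor:container.centerXAnchor],
    [imageView.centerYAnchor constraintEqualToAnchor:container.centerYAnchor],
    // float the label below the image view
    [label.topAnchor constraintEqualToAnchor:imageView.bottomAnchor constant:8.0],
    [label.centerXAnchor constraintEqualToAnchor:container.centerXAnchor],
    // keep the image view at least 8pt from the container's edges
    [imageView.leadingAnchor constraintGreaterThanOrEqualToAnchor:container.leadingAnchor constant:8.0],
    [container.trailingAnchor constraintGreaterThanOrEqualToAnchor:imageView.trailingAnchor constant:8.0],
    [imageView.topAnchor constraintGreaterThanOrEqualToAnchor:container.topAnchor constant:8.0],
    // and the label at least 8pt from the container's bottom
    [container.bottomAnchor constraintGreaterThanOrEqualToAnchor:label.bottomAnchor constant:8.0],
]];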

How to take UIScrollView's zoomScale & contentOffset, and apply to larger image?

I'm stuck with something I can't figure out...
My app lets the user zoom/pan a thumbnail image via a UIScrollView. Then it needs to take the changes the user made in the scroll view and apply them to the same image at a much higher resolution (i.e. generate a high-res UIImage that looks the same as the zoomed/panned low-res thumbnail the user touched).
I can see that the scroll view has a zoomScale and contentOffset which I can reuse, but I really can't see how to apply them to my UIImage.
All help much appreciated, thanks!
The zoom scale, contentOffset, and frame of the UIScrollView define a sub-rectangle of the thumbnail.
Rescale that rectangle proportionally against the higher-res version of your image.
e.g.
Your scroller has bounds of 100px x 100px.
Your thumbnail is 100px x 100px and is zoomed at 4x with a content offset of (x:100, y:100). You will see a sub-rectangle of frame (x:25, y:25, w:25, h:25) of the original thumbnail inside the 100x100 window of the scroller, i.e. blurry. The width and height come from the scroller's frame divided by the zoom scale.
Once you flip in a high-res image of 1000px x 1000px, you will want to present the same chunk of the image, which is now (x:250, y:250, w:250, h:250); do that by setting the zoom to 0.4. The contentOffset remains the same.
Note that the zoom of 1x and zero offset, which would present the whole thumbnail, corresponds to a zoom of 0.1x and zero offset against the higher-res image.
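That arithmetic as a sketch (thumb and hiRes are assumed UIImage variables; the numbers in the comments match the example above):
CGFloat z = scrollView.zoomScale; // 4.0 in the example
// The visible sub-rectangle, in the thumbnail's own coordinates.
CGRect visibleInThumb = CGRectMake(scrollView.contentOffset.x / z,   // 100/4 = 25
                                   scrollView.contentOffset.y / z,
                                   scrollView.bounds.size.width / z, // 100/4 = 25
                                   scrollView.bounds.size.height / z);
// Scale that rect up proportionally for the high-res version.
CGFloat k = hiRes.size.width / thumb.size.width; // 1000/100 = 10
CGRect visibleInHiRes = CGRectMake(visibleInThumb.origin.x * k,
                                   visibleInThumb.origin.y * k,
                                   visibleInThumb.size.width * k,
                                   visibleInThumb.size.height * k);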
BUT
You are overthinking the issue. Your container UIImageView does all the work for you. Once you reach your target zoom point, simply load the higher-res image into the image view (myImageView.image = hiResImage) and it will "just work", assuming your contentMode is set to Scale To Fill (UIViewContentModeScaleToFill) or Aspect Fill. The low-res image will be replaced by the high-res version in exactly the right position.
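For instance, a minimal sketch of the swap in the scroll view's zoom delegate (hiResImage is an assumed property):
// UIScrollViewDelegate: once the user finishes zooming, swap in the
// high-res image; the image view's frame and contentMode are unchanged,
// so the picture lands in exactly the same position.
- (void)scrollViewDidEndZooming:(UIScrollView *)scrollView
                       withView:(UIView *)view
                        atScale:(CGFloat)scale {
    self.imageView.image = self.hiResImage;
}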

iOS - Get framing of Visible part of UIImage from UIImageView

I am trying to make a transition like the one in the Tinder app.
Detail:
In screen one there is a vertical, rectangular UIImageView with contentMode = Aspect Fill, so it hides some portion of the image to preserve the aspect ratio.
In screen two (the detail screen) the same image has to be passed after the transition, but the image view on the second screen is a square one.
I want to make a morphing kind of transition, in which the user should feel that the same image view from screen one becomes the square one on the second screen, without stretching the image. So what should I do?
Currently I am trying to get the frame of the UIImage that lies in the visible area of the UIImageView, so that I can do some logical stuff to achieve this. Can anyone help me get the frame of the visible portion of the UIImage?
EDIT
Please see the attached image for clarification.
I think there's a little ambiguity in the question: a frame must be specified in some coordinate system. But I think you're looking for a rect relative to the original, unclipped image.
If that's right, then the rect can be computed as follows. Say the image is called image, and the image view is imageView. If the image happens to be displayed at 1:1 scale (i.e. its height already equals the view's height), the size of the rect is the size of the image view:
imageView.bounds.size
And, since aspect fill centers the oversized dimension, its origin is:
CGPointMake((image.size.width - imageView.bounds.size.width) / 2.0, 0.0);
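In the general case the image is scaled by the aspect-fill factor, so the view size has to be mapped back into image coordinates first. A sketch of that computation (the helper name is my own):
// Visible portion of the image, in the image's own coordinate space,
// for contentMode = UIViewContentModeScaleAspectFill.
static CGRect visibleImageRect(UIImage *image, UIImageView *imageView) {
    CGSize viewSize = imageView.bounds.size;
    CGSize imageSize = image.size;
    // Aspect fill scales by the larger ratio so the image covers the view.
    CGFloat scale = MAX(viewSize.width / imageSize.width, viewSize.height / imageSize.height);
    CGSize visibleSize = CGSizeMake(viewSize.width / scale, viewSize.height / scale);
    // The overflow in each dimension is clipped equally on both sides.
    return CGRectMake((imageSize.width - visibleSize.width) / 2.0,
                      (imageSize.height - visibleSize.height) / 2.0,
                      visibleSize.width,
                      visibleSize.height);
}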
