I have a UIImageView with an image. The view is set to aspect fit, so when I load new images the size of the displayed image changes.
On top of that UIImageView is another UIImageView where I am allowing the user to draw a line around a person in the image.
This all works perfectly; however, I am able to draw outside the image (but still inside the UIImageView). Is it possible to restrict the drawing boundaries to the size of the image rather than the view?
I have worked out the width and height of the image in the view, but when I pass those values as parameters to drawInRect it no longer draws on screen.
This works perfectly for drawing in the complete view:
[self.drawImage.image drawInRect:CGRectMake(0, 0, self.view.frame.size.width, self.view.frame.size.height)];
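One possible approach (a sketch, not the confirmed fix for the code above) is to compute the rect the photo actually occupies inside its aspect-fit image view and clip the drawing context to it. Here self.photoView is a placeholder name for the image view that holds the photo, and the function used comes from AVFoundation, so the file needs that framework imported:

#import <AVFoundation/AVFoundation.h>   // at the top of the file, for AVMakeRectWithAspectRatioInsideRect

// Rect the photo actually occupies inside its aspect-fit image view.
CGRect imageRect = AVMakeRectWithAspectRatioInsideRect(self.photoView.image.size, self.photoView.bounds);

CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSaveGState(context);
CGContextClipToRect(context, imageRect);   // anything drawn outside the photo is discarded

[self.drawImage.image drawInRect:CGRectMake(0, 0, self.view.frame.size.width, self.view.frame.size.height)];
// ... stroke the user's new line segment here ...

CGContextRestoreGState(context);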
Related
I have added an icon to my assets to be used in an image view. However, when I put it in an image view, the view's outer edges seem to be larger than the image itself. Is there any way I can make the image fill the view completely? Even when I try the scale-to-fill, aspect-fill, or aspect-fit options, a margin remains in between.
In other words, I want my circular image to be tangent to the image view's rectangle.
An example of what I am trying to do is attached.
I have an imageView, and say its size is the screen size. It displays an image of a larger size, and the imageView's content mode is set to scaleAspectFill. Then I draw some lines on the imageView using UIBezierPath.
Later I would like to generate a new image that includes the lines I drew, using drawViewHierarchyInRect. The problem is that the new image's size is the imageView's size, since drawViewHierarchyInRect effectively just takes a snapshot. How can I combine the original image with the lines I drew while keeping the original image's size?
You want to use the method UIGraphicsBeginImageContextWithOptions to create an off-screen context of the desired size. (In your case, the size of the image.)
Then draw your image into the context, draw the lines on top, and extract your composite image from the context. Finally, dispose of the context.
There is plenty of sample code online showing how to use UIGraphicsBeginImageContextWithOptions. It's quite easy.
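A minimal sketch of that flow in Objective-C (originalImage and linePath are placeholder names; scaling the path from view coordinates into image coordinates is assumed to have been done already):

UIImage *originalImage = self.imageView.image;   // the full-resolution source image

UIGraphicsBeginImageContextWithOptions(originalImage.size, NO, originalImage.scale);

// 1. Draw the original image at its natural size.
[originalImage drawInRect:CGRectMake(0, 0, originalImage.size.width, originalImage.size.height)];

// 2. Draw the lines on top; the path must be expressed in image coordinates.
[[UIColor redColor] setStroke];
[linePath stroke];

// 3. Extract the composite and dispose of the context.
UIImage *composite = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();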
I have a problem with rotating a UIImageView. As you can see in the attached screenshot, the image is rotated, but the frame of the UIImageView has some unwanted empty area. The blue line around the image is the frame of the UIImageView.
How can I remove that empty area and make the frame only as large as the image requires?
I am trying to make a transition like the Tinder app.
Detail:
In Screen One there is a vertical rectangular UIImageView with contentMode = Aspect Fill, so it hides some portion of the image to preserve the aspect ratio.
In Screen Two (the detail screen) the same image has to be passed after the transition, but the image view in the second screen is a square one.
I want to make a morphing kind of transition in which the user feels that the same image view from Screen One becomes the square one in the second screen, without stretching the image. So what should I do?
Currently I am trying to get the frame of the portion of the UIImage that is in the visible area of the UIImageView, so that I can do the necessary calculations. Can anyone help me get the frame of the visible portion of the UIImage?
EDIT
Please see the attached image for clarification.
I think there's a little ambiguity in the question: a frame must be specified in a coordinate system. But I think you're looking for a rect relative to the original, unclipped image.
If that's right, then the rect can be computed as follows. Say the image is called image and the image view is imageView. Assuming the image is displayed at 1:1 scale, the size of the rect is the size of the image view:
imageView.bounds.size
And, since aspect fill centers the oversized dimension, its origin is (assuming here that the width is the dimension that overflows; swap the roles if the height does):
CGPointMake((image.size.width - imageView.bounds.size.width) / 2.0, 0.0);
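For reference, here is a sketch that generalizes this to any aspect-fill scale factor, in case the image is not displayed at 1:1 (same image and imageView names as above):

CGSize viewSize = imageView.bounds.size;
CGSize imageSize = image.size;

// Aspect fill uses the larger of the two scale factors.
CGFloat scale = MAX(viewSize.width / imageSize.width, viewSize.height / imageSize.height);

// Size of the visible region, expressed in the image's own coordinates.
CGSize visibleSize = CGSizeMake(viewSize.width / scale, viewSize.height / scale);

// Aspect fill centers the image, so the clipped dimension is offset by half the overflow.
CGRect visibleRect = CGRectMake((imageSize.width - visibleSize.width) / 2.0,
                                (imageSize.height - visibleSize.height) / 2.0,
                                visibleSize.width,
                                visibleSize.height);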
In my iOS app, I am putting several UIImages on one background UIImage and then saving the overall background image, with all the subimages added on it, by taking a screenshot programmatically.
Now I want to change the positions of the subview UIImages in that saved image, so I need to know how to detect the subview images' positions, since I captured the whole thing as a screenshot.
Record their frames converted to window coordinates. The pixel coordinates in the screenshot will match the frame origins at normal resolution, or be doubled on a retina screen. The screenshot is of the whole screen, so its dimensions are equivalent to the window frame. UIView has convenience methods for converting arbitrary view frames to another view's (or the window's) coordinates.
EDIT: to deal with aspect fit, you have to do the math yourself. You know the frame of the imageView, and you can ask the image for its size. Comparing the two aspect ratios tells you in which dimension the image completely fills the view, and then you can compute the other dimension (which will be smaller than the corresponding imageView dimension). Divide the difference between the view dimension and the scaled image dimension by two, and that gives you the offset of the image inside the view. Now you can record the frame of the image as it is displayed in the view.
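A sketch of that math plus the window-coordinate conversion, assuming an aspect-fit image view named imageView (the identifiers are illustrative, not taken from the question's code):

UIImage *image = imageView.image;
CGSize viewSize = imageView.bounds.size;

// Aspect fit uses the smaller of the two scale factors.
CGFloat scale = MIN(viewSize.width / image.size.width, viewSize.height / image.size.height);
CGSize fittedSize = CGSizeMake(image.size.width * scale, image.size.height * scale);

// The image is centered, so split the leftover space evenly on each side.
CGRect imageFrameInView = CGRectMake((viewSize.width - fittedSize.width) / 2.0,
                                     (viewSize.height - fittedSize.height) / 2.0,
                                     fittedSize.width,
                                     fittedSize.height);

// Convert that frame into window coordinates so it lines up with the screenshot.
CGRect imageFrameInWindow = [imageView convertRect:imageFrameInView toView:nil];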