Load an image in UIImageView and resize the container - swift - ios

I have an S3 link from which I need to load an image into a UIImageView. I don't know the image's dimensions in advance. I have defined mainImageView (with leading and trailing constraints) in the storyboard.
In the code, I am loading the image using:
mainImageView.setImageWith(URL(string: ("https:" + (content?.imagePath)!)), placeholderImage: nil)
Is there any way to resize the container once the image loads? I want the container to take on the dimensions (height and width) of the image. I know Android has something called wrap_content to achieve this, but I am unable to find an equivalent in iOS.

If you are using Auto Layout, you don't actually have to do much, because UIImageView has an intrinsic content size that makes it take the width and height of its image. In your .xib or .storyboard you only need to position the image view so that it can resolve its horizontal and vertical position. For the size, you can provide a default image (otherwise Auto Layout will report an error).
When you change the image at runtime, the image view will take on the size of the new image.
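A minimal Swift sketch of that idea, assuming position-only constraints in the storyboard (the imageURL and mainImageView names are placeholders, and error handling is omitted):
// Download the image, set it, and let Auto Layout pick up the new intrinsic size.
URLSession.shared.dataTask(with: imageURL) { data, _, _ in
    guard let data = data, let image = UIImage(data: data) else { return }
    DispatchQueue.main.async {
        self.mainImageView.image = image               // intrinsic content size becomes image.size
        self.mainImageView.invalidateIntrinsicContentSize()
        self.view.layoutIfNeeded()                     // the image view (and its container) resizes to the image
    }
}.resume()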

You can achieve it using:
imgView.frame = [self frameForImage:self.image inImageViewAspectFit:imgView];
The method implementation:
-(CGRect)frameForImage:(UIImage*)image inImageViewAspectFit:(UIImageView*)imageView
{
    float imageRatio = image.size.width / image.size.height;
    float viewRatio = imageView.frame.size.width / imageView.frame.size.height;
    if (imageRatio < viewRatio) {
        float scale = imageView.frame.size.height / image.size.height;
        float width = scale * image.size.width;
        float topLeftX = (imageView.frame.size.width - width) * 0.5;
        return CGRectMake(topLeftX, 0, width, imageView.frame.size.height);
    } else {
        float scale = imageView.frame.size.width / image.size.width;
        float height = scale * image.size.height;
        float topLeftY = (imageView.frame.size.height - height) * 0.5;
        return CGRectMake(0, topLeftY, imageView.frame.size.width, height);
    }
}
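Since the question is in Swift, a rough (untested) translation of the helper above:
// Aspect-fit frame of an image inside an image view's current frame.
func frame(forImage image: UIImage, inImageViewAspectFit imageView: UIImageView) -> CGRect {
    let imageRatio = image.size.width / image.size.height
    let viewRatio = imageView.frame.width / imageView.frame.height
    if imageRatio < viewRatio {
        // Image is relatively taller than the view: it fills the view's height.
        let scale = imageView.frame.height / image.size.height
        let width = scale * image.size.width
        let topLeftX = (imageView.frame.width - width) * 0.5
        return CGRect(x: topLeftX, y: 0, width: width, height: imageView.frame.height)
    } else {
        // Image is relatively wider than the view: it fills the view's width.
        let scale = imageView.frame.width / image.size.width
        let height = scale * image.size.height
        let topLeftY = (imageView.frame.height - height) * 0.5
        return CGRect(x: 0, y: topLeftY, width: imageView.frame.width, height: height)
    }
}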

You can resize the image view by setting its frame to the image's size:
if let image = mainImageView.image {
    mainImageView.frame = CGRect(origin: mainImageView.frame.origin, size: image.size)
}
If you are using Auto Layout, you need to modify the height and width constraints accordingly, or let the layout do its job.
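For the Auto Layout case, a sketch of updating the constraints once the image is available (imageWidthConstraint and imageHeightConstraint are hypothetical outlets, not part of the original question):
// Hypothetical width/height constraint outlets for mainImageView.
if let image = mainImageView.image {
    imageWidthConstraint.constant = image.size.width
    imageHeightConstraint.constant = image.size.height
    view.layoutIfNeeded()
}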

Related

How to align an image inside a imageView

I wanted to know how to align an image to the right while keeping the aspect fill. So this is how my image view looks right now.
I would like to move the image to the left, so that the image looks like this.
Now I tried aligning it to the right, but the image is so big that it only shows her gun. So I was wondering how you would be able to do this. Would I have to use a scroll view? I would appreciate the help, thanks.
I am not sure whether this will work for you or not:
Try the different contentMode values (.scaleToFill, .scaleAspectFit, .scaleAspectFill, .center, .top, .bottom, .left, .right, and so on).
Use it like:
imgView.contentMode = .scaleAspectFit // or any of the other content modes
You can also set this in the storyboard (Content Mode in the image view's Attributes inspector).
Try the different types and pick the one that suits your use case.
Hope this helps.
Set the content mode that works best for your image view.
You could resize the image view according to the image size; the code below is untested:
CGSize kMaxImageViewSize = {.width = 100, .height = 100};
CGSize imageSize = image.size;
CGFloat aspectRatio = imageSize.width / imageSize.height;
CGRect frame = imageView.frame;
if (kMaxImageViewSize.width / aspectRatio <= kMaxImageViewSize.height)
{
    frame.size.width = kMaxImageViewSize.width;
    frame.size.height = frame.size.width / aspectRatio;
}
else
{
    frame.size.height = kMaxImageViewSize.height;
    frame.size.width = frame.size.height * aspectRatio;
}
imageView.frame = frame;

How to resize and layout image/attachment in TextKit?

I'm building a magazine app with TextKit; here is a TextKit demo (check out the developer branch). It loads an NSAttributedString from an RTFD file as the text storage object, all pages have the same size via a custom NSTextContainer object, and the pagination feature is done.
When I tried to add an image to the source RTFD file, the image attachment showed up in the UITextView directly without any additional code, which is great! However, large images are clipped by the text view's frame by default. I tried all kinds of delegate and override methods to resize and re-lay it out, but failed in the end.
- (void)setAttachmentSize:(CGSize)attachmentSize forGlyphRange:(NSRange)glyphRange
- (CGSize)attachmentSizeForGlyphAtIndex:(NSUInteger)glyphIndex;
Judging from the call stack, the setter is called during glyph layout and the getter is called during glyph drawing.
- (BOOL)layoutManager:(NSLayoutManager *)layoutManager shouldSetLineFragmentRect:(inout CGRect *)lineFragmentRect lineFragmentUsedRect:(inout CGRect *)lineFragmentUsedRect baselineOffset:(inout CGFloat *)baselineOffset inTextContainer:(NSTextContainer *)textContainer forGlyphRange:(NSRange)glyphRange
{
    NSTextAttachment *attachment = ...;
    NSUInteger characterIndex = [layoutManager characterIndexForGlyphAtIndex:glyphRange.location];
    UIImage *image = [attachment imageForBounds:*lineFragmentRect textContainer:textContainer characterIndex:characterIndex];
    CGSize imageSize = GetScaledToFitSize(image.size, self.textContainerSize);
    CGFloat ratio = imageSize.width / imageSize.height;
    CGRect rect = *lineFragmentRect, usedRect = *lineFragmentUsedRect;
    CGFloat dy = *baselineOffset - imageSize.height;
    if (dy > 0) {
        *baselineOffset -= dy;
        usedRect.size.height -= dy;
        usedRect.size.width = ratio * usedRect.size.height;
    }
    if (!CGRectContainsRect(usedRect, rect)) {
        if (rect.size.height > usedRect.size.height) {
            *baselineOffset -= rect.size.height - usedRect.size.height;
            rect.size.height = usedRect.size.height;
            rect.size.width = ratio * usedRect.size.height;
        }
        if (rect.size.width > usedRect.size.width) {
            //...
        }
    }
    *lineFragmentRect = rect;
    *lineFragmentUsedRect = usedRect;
    return YES;
}
This delegate method can resize the layout, but it does not affect the final width or the image scale. I tried several solutions with no luck. There don't seem to be many threads about images in TextKit on SO, nor much Apple example code.
I have done similar work for auto-resizing image attachments. How about handling it just after you get the attributed string?
That is, enumerate the original string for NSAttachmentAttributeName and replace each attachment with a subclass of NSTextAttachment that implements the NSTextAttachmentContainer protocol.
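In Swift, that replacement step might look roughly like this (a sketch; original is the attributed string you loaded, and FitWidthAttachment is a hypothetical NSTextAttachment subclass that overrides the bounds method shown in Objective-C below):
// Replace each attachment with a hypothetical subclass that fits the line width.
let mutable = NSMutableAttributedString(attributedString: original)
mutable.enumerateAttribute(.attachment,
                           in: NSRange(location: 0, length: mutable.length),
                           options: []) { value, range, _ in
    guard let old = value as? NSTextAttachment else { return }
    let replacement = FitWidthAttachment(data: old.fileWrapper?.regularFileContents,
                                         ofType: old.fileType)
    mutable.addAttribute(.attachment, value: replacement, range: range)
}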
- (CGRect)attachmentBoundsForTextContainer:(nullable NSTextContainer *)textContainer proposedLineFragment:(CGRect)lineFrag glyphPosition:(CGPoint)position characterIndex:(NSUInteger)charIndex {
    CGFloat lineWidth = CGRectGetWidth(lineFrag);
    CGSize size = self.bounds.size;
    size.height *= (size.width > 0) ? (lineWidth / size.width) : 0;
    size.width = lineWidth;
    return CGRectMake(0, 0, size.width, size.height);
}
The code above resizes the attachment to fit the line width. You don't need to resize the image itself, since it is automatically scaled to those bounds when drawing.
Hope this is helpful.

Determining the center of a zoomed and panned image in UIScrollview

I am building a photo app for iPhone which allows the user to take a photo with the camera or grab one from the Camera Roll, then pan and zoom this image as needed in a UIScrollView. The user then taps a button to save the image. I am having trouble with the key method that returns the exact center of the visible area of an imageview embedded in the scrollview. I need this method to allow for variations in the dimensions of the device screen (i.e., iPhone 4 vs 5), as well as for variations in the size, aspect ratio and zoom scale of the source image.
As an example, I need to get this to work for the following:
iPhone 5 with screen dimensions that are 320 X 568
Scrollview frame size of 320 X 568
An image with dimensions 380 width and 284 height
Zoom scale of 3.0
Alternatively, I also need for it to work for this:
iPhone 4 with screen dimensions of 320 X 480
Scrollview frame size of 320 X 480
An image with dimensions 640 width and 1048 height
Zoom scale of 1.3
The following is my current code, which tries to account for variations in the screen's dimensions and the image's dimensions, but it does not work for both iPhone 4 and 5, or for all types of images, such as portrait or landscape. Is there a simpler way to get the center point of the visible portion of a scroll view? I need a clearer understanding of how to interpret and manipulate a view's properties, such as bounds.origin and bounds.size, and how to use the scroll view's contentSize and zoomScale.
I have looked at various questions that are similar but none of these seem to account adequately for all variations in device or image aspect ratio or size. Any help would be greatly appreciated!
Similar questions
How can I determine the area currently visible in a scrollview and determine the center?
Getting the right coordinates of the visible part of a UIImage inside of UIScrollView
- (CGPoint)centerOfVisibleFrame:(UIImage *)image inScrollView:(UIScrollView *)scrollView
{
    CGPoint frameCenter;
    CGFloat zoomScale = scrollView.zoomScale;
    // First determine the dimensions of the device
    CGSize deviceFrameSize = [UIScreen mainScreen].bounds.size;
    // Need to determine the scale factor for adjusting the image to full width/full height
    CGFloat imageDeviceAspectFitScale;
    // Compare the aspect ratio (height / width) of the device to the aspect ratio of the image
    CGFloat deviceAspectRatio = deviceFrameSize.height / deviceFrameSize.width;
    CGFloat imageAspectRatio = image.size.height / image.size.width;
    // If the device's aspect ratio is greater than the image aspect ratio
    if (deviceAspectRatio > imageAspectRatio)
    {
        // Set the imageDeviceAspectFitScale for full width
        imageDeviceAspectFitScale = image.size.width / deviceFrameSize.width;
    }
    // Otherwise the image's aspect ratio is greater than the device aspect ratio
    else
    {
        // Set the imageDeviceAspectFitScale for full height
        imageDeviceAspectFitScale = image.size.height / deviceFrameSize.height;
    }
    // Create the frame for the image at full width or full height
    CGSize imageAspectFitSize;
    imageAspectFitSize.width = image.size.width / imageDeviceAspectFitScale;
    imageAspectFitSize.height = image.size.height / imageDeviceAspectFitScale;
    // Calculate the vertical and horizontal offset to adjust the coordinates of the
    // image center to account for greater device height or device width
    CGFloat verticalOffset = deviceFrameSize.height - imageAspectFitSize.height;
    CGFloat horizontalOffset = deviceFrameSize.width - imageAspectFitSize.width;
    if (self.debug) NSLog(@"verticalOffset = %f horizontalOffset = %f", verticalOffset, horizontalOffset);
    if (self.debug) NSLog(@"image.size.width: %f image.size.height: %f", image.size.width, image.size.height);
    if (self.debug) NSLog(@"scrollView.frame.size w: %f h: %f", scrollView.frame.size.width, scrollView.frame.size.height);
    if (self.debug) NSLog(@"scrollView.bounds.size.width: %f scrollView.bounds.size.height: %f",
                          scrollView.bounds.size.width, scrollView.bounds.size.height);
    if (self.debug) NSLog(@"scrollView.contentSize w=%f h=%f", scrollView.contentSize.width, scrollView.contentSize.height);
    // imageRect represents the coordinate space for the image, adjusted for zoom scale
    CGRect imageRect;
    // First use the visible frame's origin to determine the top left corner of the visible rectangle
    imageRect.origin.x = scrollView.contentOffset.x;
    imageRect.origin.y = scrollView.contentOffset.y;
    if (self.debug) NSLog(@"imageRect.origin x = %f y = %f", imageRect.origin.x, imageRect.origin.y);
    // Adjust the image rect for zoom - Multiply by zoom scale
    imageRect.size.width = image.size.width * zoomScale;
    imageRect.size.height = image.size.height * zoomScale;
    if (self.debug) NSLog(@"Zoomed imageRect.size width = %f height = %f", imageRect.size.width, imageRect.size.height);
    // Then scale the image down to fit into the device frame
    // Divide by the image device aspect fit scale
    imageRect.size.width = imageRect.size.width / imageDeviceAspectFitScale;
    imageRect.size.height = imageRect.size.height / imageDeviceAspectFitScale;
    if (self.debug) NSLog(@"CVF Aspect fit imageRect.size width = %f height = %f", imageRect.size.width, imageRect.size.height);
    // Then calculate the frame center by using the x and y dimensions of the DEVICE frame
    frameCenter.x = imageRect.origin.x + (deviceFrameSize.width / 2);
    frameCenter.y = imageRect.origin.y + (deviceFrameSize.height / 2);
    // Scale back to original image dimensions from zoom
    frameCenter.x = frameCenter.x / zoomScale;
    frameCenter.y = frameCenter.y / zoomScale;
    if (self.debug) NSLog(@"frameCenter.x = %f frameCenter.y = %f", frameCenter.x, frameCenter.y);
    // Scale back up for the aspect fit scale
    frameCenter.x = frameCenter.x * imageDeviceAspectFitScale;
    frameCenter.y = frameCenter.y * imageDeviceAspectFitScale;
    // Correct the coordinates for horizontal and vertical offset
    frameCenter.x = frameCenter.x - (horizontalOffset);
    frameCenter.y = frameCenter.y - (verticalOffset);
    if (self.debug) NSLog(@"CVF frameCenter.x = %f frameCenter.y = %f", frameCenter.x, frameCenter.y);
    return frameCenter;
}
Basically, this gives you the visible rect of your scroll view:
CGRect visibleRect = [yourScrollView convertRect:yourScrollView.bounds toView:yourImageview];
or
CGRect visibleRect;
visibleRect.origin = scrollView.contentOffset;
visibleRect.size = scrollView.frame.size;
Please have a look at this question and its answers for more details: Getting the visible rect of an UIScrollView's content.
As long as you know that rect, you can easily calculate its center point.
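In Swift, the center calculation is then one line (a sketch; scrollView and imageView are assumed to be your outlets):
// Visible portion of the scroll view, expressed in the image view's coordinate space.
let visibleRect = scrollView.convert(scrollView.bounds, to: imageView)
let visibleCenter = CGPoint(x: visibleRect.midX, y: visibleRect.midY)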

UIImageView pinch zoom and reposition to fit overlay

I am using a UIImagePickerController to grab photos from the camera or camera roll. When the user picks an image, I insert the image into a UIImageView, which is nested in a UIScrollView to allow pinch/pan. I have an overlay above the image view which represents the area to which the image will be cropped (just like when UIImagePickerController's allowsEditing property is YES).
The Apple-provided allowsEditing capability has the same problem I'm seeing with my code (which is why I tried to write it myself in the first place, and I need custom shapes in the overlay). The problem is that I can't seem to find a good way to let the user pan over ALL of the image. There are always portions of the image which can't be placed in the crop window; it's always content around the edges (maybe the outer 10%) of the image which cannot be panned into the crop window.
In the above photo, the brown area at the top and bottom are the scroll view's background color. The image view is sized to the image.
- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info {
    UIImage *image = [info objectForKey:UIImagePickerControllerOriginalImage];
    // Calculate what height/width we'll need for our UIImageView.
    CGSize screenSize = [UIScreen mainScreen].bounds.size;
    float width = 0.0f, height = 0.0f;
    if (image.size.width > image.size.height) {
        width = screenSize.width;
        height = image.size.height / (image.size.width/screenSize.width);
    } else {
        height = screenSize.height;
        width = image.size.width / (image.size.height/screenSize.height);
    }
    if (width > screenSize.width) {
        height /= (width/screenSize.width);
        width = screenSize.width;
    }
    if (height > screenSize.height) {
        width /= (height/screenSize.height);
        height = screenSize.height;
    }
    // Update the image view to the size of the image and center it.
    // Image view is a subview of the scroll view.
    imageView.frame = CGRectMake((screenSize.width - width) / 2, (screenSize.height - height) / 2, width, height);
    imageView.image = image;
    // Setup our scrollview so we can scroll and pinch zoom the image!
    imageScrollView.contentSize = CGSizeMake(screenSize.width, screenSize.height);
    // Close the picker.
    [[picker presentingViewController] dismissViewControllerAnimated:YES completion:NULL];
}
I've considered monitoring the scroll position and zoom level of the scroll view and disallowing a side of the image from passing into the crop "sweet spot". That seems like over-engineering, however.
Does anyone know of a way to accomplish this?
I'm a moron. What a difference a good night's sleep can make ;-) Hopefully, it will help someone in the future.
Setting the correct scroll view contentSize and contentInset did the trick. The working code is below.
- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info {
    UIImage *image = [info objectForKey:UIImagePickerControllerOriginalImage];
    // Change the frame of the image view so that it fits the image!
    CGSize screenSize = [UIScreen mainScreen].bounds.size;
    float width = 0.0f, height = 0.0f;
    if (image.size.width > image.size.height) {
        width = screenSize.width;
        height = image.size.height / (image.size.width/screenSize.width);
    } else {
        height = screenSize.height;
        width = image.size.width / (image.size.height/screenSize.height);
    }
    // Make sure the new height and width aren't bigger than the screen
    if (width > screenSize.width) {
        height /= (width/screenSize.width);
        width = screenSize.width;
    }
    if (height > screenSize.height) {
        width /= (height/screenSize.height);
        height = screenSize.height;
    }
    CGRect overlayRect = cropOverlay.windowRect;
    imageView.frame = CGRectMake((screenSize.width - width) / 2, (screenSize.height - height) / 2, width, height);
    imageView.image = image;
    // Setup our scrollview so we can scroll and pinch zoom the image!
    imageScrollView.contentSize = imageView.frame.size;
    imageScrollView.contentInset = UIEdgeInsetsMake(overlayRect.origin.y - imageView.frame.origin.y,
                                                    overlayRect.origin.x,
                                                    overlayRect.origin.y + imageView.frame.origin.y,
                                                    screenSize.width - (overlayRect.origin.x + overlayRect.size.width));
    // Dismiss the camera's VC
    [[picker presentingViewController] dismissViewControllerAnimated:YES completion:NULL];
}
The scrollview and image view are set up like this:
imageScrollView = [[UIScrollView alloc] initWithFrame:self.view.bounds];
imageScrollView.showsHorizontalScrollIndicator = NO;
imageScrollView.showsVerticalScrollIndicator = NO;
imageScrollView.backgroundColor = [UIColor blackColor];
imageScrollView.userInteractionEnabled = YES;
imageScrollView.delegate = self;
imageScrollView.minimumZoomScale = MINIMUM_SCALE;
imageScrollView.maximumZoomScale = MAXIMUM_SCALE;
[self.view addSubview:imageScrollView];
imageView = [[UIImageView alloc] initWithFrame:self.view.bounds];
imageView.contentMode = UIViewContentModeScaleAspectFit;
imageView.backgroundColor = [UIColor grayColor]; // I had this set to gray so I could see if/when it didn't align properly in the scroll view. You'll likely want to change it to black
[imageScrollView addSubview:imageView];
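Note that pinch zoom only works if the scroll view's delegate also hands back the view to zoom; the answer's Objective-C code doesn't show that method, but a Swift sketch of it would be:
// UIScrollViewDelegate: tell the scroll view which subview to scale when pinching.
func viewForZooming(in scrollView: UIScrollView) -> UIView? {
    return imageView
}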
Edit 3-21-14: A newer, fancier, better implementation of the method that calculates where to place the image on the screen and in the scroll view. What's better? This version checks whether the image being set into the scroll view is SMALLER in width or height than the overlay, and adjusts the frame of the image view so that it expands to be at least as wide and tall as the overlay rect, so you never have to worry about your user selecting an image that isn't optimal for your overlay. Yay!
- (void)imagePickerController:(UIImagePickerController *)pickerUsed didFinishPickingMediaWithInfo:(NSDictionary *)info {
    UIImage *image = [info objectForKey:UIImagePickerControllerOriginalImage];
    // Change the frame of the image view so that it fits the image!
    CGSize screenSize = [UIScreen mainScreen].bounds.size;
    float width = 0.0f, height = 0.0f;
    if (image.size.width > image.size.height) {
        width = screenSize.width;
        height = image.size.height / (image.size.width/screenSize.width);
    } else {
        height = screenSize.height;
        width = image.size.width / (image.size.height/screenSize.height);
    }
    CGRect overlayRect = cropOverlay.windowRect;
    // We should check that the width and height are at least as big as our overlay window
    if (width < overlayRect.size.width) {
        float ratio = overlayRect.size.width / width;
        width *= ratio;
        height *= ratio;
    }
    if (height < overlayRect.size.height) {
        float ratio = overlayRect.size.height / height;
        height *= ratio;
        width *= ratio;
    }
    CGRect imageViewFrame = CGRectMake((screenSize.width - width) / 2, (screenSize.height - height) / 2, width, height);
    imageView.frame = imageViewFrame;
    imageView.image = image;
    // Setup our scrollview so we can scroll and pinch zoom the image!
    imageScrollView.contentSize = imageView.frame.size;
    imageScrollView.contentInset = UIEdgeInsetsMake(overlayRect.origin.y - imageView.frame.origin.y,
                                                    (imageViewFrame.origin.x * -1) + overlayRect.origin.x,
                                                    overlayRect.origin.y + imageView.frame.origin.y,
                                                    imageViewFrame.origin.x + (screenSize.width - (overlayRect.origin.x + overlayRect.size.width)));
    // Calculate the REAL minimum zoom scale!
    float minZoomScale = 1 - MIN(fabsf(fabsf(imageView.frame.size.width) - fabsf(overlayRect.size.width)) / imageView.frame.size.width,
                                 fabsf(fabsf(imageView.frame.size.height) - fabsf(overlayRect.size.height)) / imageView.frame.size.height);
    imageScrollView.minimumZoomScale = minZoomScale;
    // Dismiss the camera's VC
    [[pickerUsed presentingViewController] dismissViewControllerAnimated:YES completion:NULL];
}

Resize UIImage with aspect ratio?

I'm using this code to resize an image on the iPhone:
CGRect screenRect = CGRectMake(0, 0, 320.0, 480.0);
UIGraphicsBeginImageContext(screenRect.size);
[value drawInRect:screenRect blendMode:kCGBlendModePlusDarker alpha:1];
UIImage *tmpValue = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Which is working great, as long as the aspect ratio of the image matches that of the new resized image. I'd like to modify this so that it keeps the correct aspect ratio and just puts a black background anywhere the image doesn't show up. So I would still end up with a 320x480 image but with black on the top and bottom or sides, depending on the original image size.
Is there an easy way to do this similar to what I'm doing? Thanks!
After you set your screen rect, do something like the following to decide what rect to draw the image in:
float hfactor = value.size.width / screenRect.size.width;
float vfactor = value.size.height / screenRect.size.height;
float factor = fmax(hfactor, vfactor);
// Divide the size by the greater of the vertical or horizontal shrinkage factor
float newWidth = value.size.width / factor;
float newHeight = value.size.height / factor;
// Then figure out if you need to offset it to center vertically or horizontally
float leftOffset = (screenRect.size.width - newWidth) / 2;
float topOffset = (screenRect.size.height - newHeight) / 2;
CGRect newRect = CGRectMake(leftOffset, topOffset, newWidth, newHeight);
If you don't want to enlarge images smaller than the screenRect, make sure factor is greater than or equal to one (e.g. factor = fmax(factor, 1)).
To get the black background, you would probably just want to set the context color to black and call fillRect before drawing the image.
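If it helps, here is a Swift sketch that puts those two steps together with UIGraphicsImageRenderer (the 320x480 target size and the function name are just assumptions for illustration):
// Sketch: aspect-fit an image into a fixed-size canvas, filling the rest with black.
func aspectFitOnBlack(_ image: UIImage, into targetSize: CGSize = CGSize(width: 320, height: 480)) -> UIImage {
    let hFactor = image.size.width / targetSize.width
    let vFactor = image.size.height / targetSize.height
    let factor = max(hFactor, vFactor)                        // greater shrinkage factor
    let newSize = CGSize(width: image.size.width / factor,
                         height: image.size.height / factor)
    let origin = CGPoint(x: (targetSize.width - newSize.width) / 2,
                         y: (targetSize.height - newSize.height) / 2)
    let renderer = UIGraphicsImageRenderer(size: targetSize)
    return renderer.image { context in
        UIColor.black.setFill()
        context.fill(CGRect(origin: .zero, size: targetSize))   // black background
        image.draw(in: CGRect(origin: origin, size: newSize))   // letterboxed image
    }
}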
I know this is very old, but thanks for that post -- it redirected me from attempting to use scale to drawing the image. In case it is of benefit to anyone, I made an extension class I'll throw in here. It allows you to resize an image like this:
UIImage imgNew = img.Fit(40.0f, 40.0f);
I don't need a Fill option myself, but the class could easily be extended to support one as well.
using CoreGraphics;
using System;
using UIKit;

namespace SomeApp.iOS.Extensions
{
    public static class UIImageExtensions
    {
        public static CGSize Fit(this CGSize sizeImage,
                                 CGSize sizeTarget)
        {
            CGSize ret;
            float fw;
            float fh;
            float f;
            fw = (float) (sizeTarget.Width / sizeImage.Width);
            fh = (float) (sizeTarget.Height / sizeImage.Height);
            f = Math.Min(fw, fh);
            ret = new CGSize
            {
                Width = sizeImage.Width * f,
                Height = sizeImage.Height * f
            };
            return ret;
        }

        public static UIImage Fit(this UIImage image,
                                  float width,
                                  float height,
                                  bool opaque = false,
                                  float scale = 1.0f)
        {
            UIImage ret;
            ret = image.Fit(new CGSize(width, height),
                            opaque,
                            scale);
            return ret;
        }

        public static UIImage Fit(this UIImage image,
                                  CGSize sizeTarget,
                                  bool opaque = false,
                                  float scale = 1.0f)
        {
            CGSize sizeNewImage;
            CGSize size;
            UIImage ret;
            size = image.Size;
            sizeNewImage = size.Fit(sizeTarget);
            UIGraphics.BeginImageContextWithOptions(sizeNewImage,
                                                    opaque,
                                                    scale);
            using (CGContext context = UIGraphics.GetCurrentContext())
            {
                context.ScaleCTM(1, -1);
                context.TranslateCTM(0, -sizeNewImage.Height);
                context.DrawImage(new CGRect(CGPoint.Empty, sizeNewImage),
                                  image.CGImage);
                ret = UIGraphics.GetImageFromCurrentImageContext();
            }
            UIGraphics.EndImageContext();
            return ret;
        }
    }
}
As per the post above, it starts a new image context, figures out the aspect-fit size, and then paints the image into it. If you haven't spent any time in Xcode development, UIGraphics is a bit backwards compared to most systems I work with, but not bad. One issue is that bitmaps, by default, paint bottom to top. To get around that,
context.ScaleCTM(1, -1);
context.TranslateCTM(0, -sizeNewImage.Height);
changes the drawing orientation to the more common top-left to bottom-right, but then you need to move the origin as well, hence the TranslateCTM.
Hopefully, it saves someone some time.
Cheers
