Getting the size of an image in a UIImageView - iOS

I am having trouble getting the size of an image after it has been assigned to a UIImageView programmatically.
The code below runs in a loop; a function (getNoteImage) downloads an image from a URL (photoUrl) and assigns it to the created UIImageView. I need the height of the image so that I can calculate the spacing for the following images and labels.
var myImage :UIImageView
myImage = UIImageView(frame: CGRectMake(0, 0, UIScreen.mainScreen().bounds.width, UIScreen.mainScreen().bounds.height))
myImage.center = CGPointMake(UIScreen.mainScreen().bounds.size.width/2, imageY)
myImage.getNoteImage(photoUrl)
self.detailsView.addSubview(myImage)
myImage.contentMode = UIViewContentMode.ScaleAspectFit
imageHeight = myImage.bounds.size.height
I have tried using
imageHeight = myImage.bounds.size.height
which I read as a solution in another post, but this just returns the screen size for the device (667 on the iPhone 6 simulator).
Can anyone guide me on how to get the correct image size?
Thanks

Since your image view takes up the whole screen and the image is sized using aspect fit, to get the height the image is displayed at you will need to take the original image size from myImage.image!.size and then scale it based on myImage.bounds.size, something like:
let imageViewHeight = myImage.bounds.height
let imageViewWidth = myImage.bounds.width
let imageSize = myImage.image!.size
let scaledImageHeight = min(imageSize.height * (imageViewWidth / imageSize.width), imageViewHeight)
That should give the actual height of the image. Note that image.size gives the "logical dimensions of the image", i.e. its natural size, not the size it is drawn at.
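Factored into a small, self-contained helper (the function name is mine, not from the original post), the same min(...) computation looks like this:

```swift
import Foundation

// Height at which an image is actually drawn inside an aspect-fit view.
// Same computation as above: take the smaller of the two axis scales,
// then apply it to the image's natural height.
func aspectFitHeight(imageSize: CGSize, viewSize: CGSize) -> CGFloat {
    let scale = min(viewSize.width / imageSize.width,
                    viewSize.height / imageSize.height)
    return imageSize.height * scale
}
```

For example, a 400x300 image shown aspect-fit in a 200x200 view is drawn 150 points tall.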

Per the official UIImage documentation, the size property gives you the image's dimensions:
if let image = myImage.image {
    let size = image.size
    print("image size is \(size)")
} else {
    print("There is no image here..")
}
I assume from your question that your code loads the image synchronously; if not, I recommend using AlamofireImage.
With AlamofireImage you can do:
self.myImage.af_setImageWithURL(
    NSURL(string: photoUrl)!,
    placeholderImage: nil,
    filter: nil,
    imageTransition: .CrossDissolve(0.5),
    completion: { response in
        print(response.result.value) // UIImage
        if let image = response.result.value {
            let size = image.size
            print("image size is \(size)")
        }
        print(response.result.error) // NSError
    }
)

Change your code as below and try again. Only if you did not force-set the frame of the UIImageView before giving it an image can you use "imageHeight = myImage.bounds.size.height" to get the real size of the image.
var myImage :UIImageView
//remove your initial frame parameters
myImage = UIImageView()
myImage.center = CGPointMake(UIScreen.mainScreen().bounds.size.width/2, imageY)

Try using myImage.image.size.height, and also make sure the image has actually been downloaded from the URL: do this in a completion block and only check the size once the image has been downloaded.
See the documentation for more details.

Related

How to convert VNRectangleObservation item to UIImage in SwiftUI

I was able to identify squares in an image using VNDetectRectanglesRequest. Now I want to store those rectangles as separate images (UIImage or CGImage). Below is what I tried.
let rectanglesDetection = VNDetectRectanglesRequest { request, error in
    rectangles = request.results as! [VNRectangleObservation]
    rectangles.sort { $0.boundingBox.origin.y > $1.boundingBox.origin.y }
    for rectangle in rectangles {
        let rect = rectangle.boundingBox
        let imageRef = cgImage.cropping(to: rect)
        let image = UIImage(cgImage: imageRef!, scale: image!.scale, orientation: image!.imageOrientation)
        checkBoxImages.append(image)
    }
}
Can anybody point out what's wrong or what should be the best approach?
Update 1
At this stage, I'm testing with an image that I added to the assets.
With this image I get 7 rectangles as observations: one for each cell and one for the table margin.
My task is to identify the text inside each rectangle, and my approach is to send a VNRecognizeTextRequest for each rectangle that has been identified. My real scenario is a little more complicated than this, but I want to at least achieve this before going further.
Update 2
for rectangle in rectangles {
    let trueX = rectangle.boundingBox.minX * image!.size.width
    let trueY = rectangle.boundingBox.minY * image!.size.height
    let width = rectangle.boundingBox.width * image!.size.width
    let height = rectangle.boundingBox.height * image!.size.height
    print("x = ", trueX, " y = ", trueY, " width = ", width, " height = ", height)
    let cropZone = CGRect(x: trueX, y: trueY, width: width, height: height)
    guard let cutImageRef: CGImage = image?.cgImage?.cropping(to: cropZone)
    else {
        return
    }
    let croppedImage: UIImage = UIImage(cgImage: cutImageRef)
    croppedImages.append(croppedImage)
}
My image width and height is
width = 406.0 height = 368.0
I've included my debug interface so you can get a proper understanding.
As @Lasse mentioned, this is my actual issue, with screenshots.
This is just a guess since you didn't state what the actual problem is, but probably you're getting a zero-sized image for each VNRectangleObservation.
The reason is: Vision uses a normalized coordinate space from 0.0 to 1.0 with lower left origin.
So in order to get the correct rectangle of your original image, you need to convert the rect from normalized space to image space. Luckily there is VNImageRectForNormalizedRect(_:_:_:) to do just that.
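As a plain-math sketch of that conversion (the helper name is mine; in practice VNImageRectForNormalizedRect(_:_:_:) does the scaling for you, and you still need a Y flip before CGImage.cropping(to:), since CGImage uses a top-left origin while Vision uses a bottom-left one):

```swift
import Foundation

// Convert a Vision boundingBox (normalized 0...1, bottom-left origin)
// into a pixel rect with a top-left origin, ready for CGImage.cropping(to:).
func pixelRect(fromNormalized r: CGRect, imageWidth w: CGFloat, imageHeight h: CGFloat) -> CGRect {
    return CGRect(x: r.minX * w,
                  y: (1 - r.maxY) * h, // flip Y: Vision's origin is bottom-left
                  width: r.width * w,
                  height: r.height * h)
}
```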

iOS Swift Kingfisher Resize processor results in blurry image

I'm developing an app that uses the Kingfisher library to load images from a URL and display them in a CollectionViewCell. I'm trying to resize the image to fit the content view of the CollectionViewCell.
I have tried ResizingImageProcessor and scaleFactor from the library, but the resulting image seems blurry. My implementation is below (the function is called in cellForRowAtIndexPath):
let url = photo.flickrImageURL("m")
let size = contentView.frame.size
print("size is \(size.debugDescription)")
let resizeProcessor = ResizingImageProcessor(referenceSize: size, mode: .aspectFit)
self.flickrPhoto.kf.setImage(with: url, options: [.backgroundDecode,.processor(resizeProcessor), .scaleFactor(UIScreen.main.scale),.cacheOriginalImage])
Is there anything I did wrong? The images look very blurry.
You should set referenceSize to the size of the imageView instead of the contentView, since the size of your contentView will be bigger than your imageView.
Change your code with below code:
let size = self.flickrPhoto.frame.size
UPDATED
Just found that ResizingImageProcessor works in pixels instead of points, so you need to multiply the screen scale into the image size, like below:
let resizingProcessor = ResizingImageProcessor(referenceSize: CGSize(width: self.flickrPhoto.frame.size.width * UIScreen.main.scale, height: self.flickrPhoto.frame.size.height * UIScreen.main.scale))
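That points-to-pixels conversion can be factored into a tiny helper (the name is mine, a sketch rather than Kingfisher API):

```swift
import Foundation

// Convert a size in points to pixels for a given screen scale,
// e.g. to build ResizingImageProcessor's referenceSize.
func pixelSize(forPointSize size: CGSize, screenScale: CGFloat) -> CGSize {
    return CGSize(width: size.width * screenScale,
                  height: size.height * screenScale)
}
```

On a 2x device, a 100x50 point frame corresponds to a 200x100 pixel reference size.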

iOS: I get many `UIImage` from network, how can I know their `aspect ratio`?

How can I get the aspect ratio for the image which comes from the server?
EDIT: If I get many images over the network, how can I fetch them with an async function and get their aspect ratio in the completion block?
Once you download the image, create a UIImage instance from the downloaded data; then, through its size property, you will be able to determine the height and width of the actual image downloaded:
let image = UIImage(data: <imageDownloadeddata>)!
let imgWidth = image.size.width
let imgHeight = image.size.height
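For the async part of the question, a minimal sketch using URLSession (the function name and completion shape are mine, not a library API):

```swift
import UIKit

// Download an image and report its aspect ratio (width / height)
// in the completion block; nil if the download or decoding fails.
func fetchAspectRatio(from url: URL, completion: @escaping (CGFloat?) -> Void) {
    URLSession.shared.dataTask(with: url) { data, _, _ in
        guard let data = data, let image = UIImage(data: data) else {
            completion(nil)
            return
        }
        completion(image.size.width / image.size.height)
    }.resume()
}
```

Call this once per URL; each completion fires when that download finishes, so the ratio is only read after the image actually exists.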

Make a large image fit into an image view of small size

I am trying to make a large image fit into a UIImageView of small size. How can I reduce the size of the image or scale it appropriately so that the full image fits into the UIImageView?
Try the different content modes, like Scale to Fill, Aspect Fit, Aspect Fill, etc.
I think that for your requirement Scale to Fill is good.
Aspect Fit will keep the original shape of your image and fit it according to its aspect ratio, so it is possible that some portion of your image view remains blank, i.e. not covered by the image.
Scale to Fill scales the image to match the image view exactly; this is the default mode.
Aspect Fill will fill the whole image view while keeping the aspect ratio.
So choose whatever is best for you. You can easily change the mode in the storyboard from the Attributes inspector.
Hope this will help :)
Set the mode of the imageView to Aspect Fit.
Set this property on your image view:
imageView.contentMode = UIViewContentModeScaleAspectFit;
Set the desired frame of your imageView and set the content mode of the image View. For example :
imgView.contentMode = UIViewContentMode.ScaleAspectFit
//OR
imgView.contentMode = UIViewContentMode.ScaleAspectFill
//OR
imgView.contentMode = UIViewContentMode.ScaleToFill
Use whichever works for you.
Looking at the JSON, I'm updating the answer:
As I had suspected, it's a URL that you are getting in the JSON. So do the following:
Parse the JSON and get the URL from it (the videothumbnail key's value).
The URL contains backslashes as escape characters; remove them to get the URL as http://content.m-tutor.com/images/20150916113347_5-cont.png
Download the image using the code below:
func startDownloadImageWithURL(urlInString: String?) {
    let session = NSURLSession.sharedSession()
    if let urlInString = urlInString {
        let urlToDownload = NSURL(string: urlInString)
        session.dataTaskWithURL(urlToDownload!, completionHandler: { (data: NSData?, response: NSURLResponse?, error: NSError?) -> Void in
            dispatch_async(dispatch_get_main_queue(), {
                if let error = error {
                    print("Error while downloading \(error.localizedDescription)")
                }
                else {
                    //Assuming yourImageView is already initialized and positioned
                    //yourImageView.image = UIImage(data: data!)
                }
            })
        }).resume()
    }
}
I have given just an example of how to download an image (get the data), create the image from the downloaded data, and then set it on the image view; you can adapt it to your needs.
Hope it helps :]

UIScrollView crop to take Device scale and zoom scale into effect?

I am trying to create a simple crop feature that takes device screen density and zoom scale into account.
I basically modeled it after the code in this tutorial:
https://www.youtube.com/watch?v=hz9pMw4Y2Lk
func cropImage(sender: AnyObject!) { //triggered by a button
    let myScale = UIScreen.mainScreen().scale
    let height = self.scrollView.bounds.height
    let width = self.scrollView.bounds.width
    UIGraphicsBeginImageContextWithOptions(CGSizeMake(width, height), true, myScale)
    let offset = scrollView.contentOffset
    CGContextTranslateCTM(UIGraphicsGetCurrentContext(), -offset.x, -offset.y)
    scrollView.layer.renderInContext(UIGraphicsGetCurrentContext()!)
    let image = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    //I would like to check here if the target image is > 300x300 px
    if image.size.width > 300 && image.size.height > 300 {
        println("image correct")
        println(image.size)
    } else {
        println("nope")
        println(image.size)
    }
}
So far I always end up with an image that is bounds.height x bounds.width, which means that on a 320pt device, including an 8px leading/trailing gap, the user might never be able to create a "correct" image.
I understand why it happens, but I do not understand where I should be multiplying by the device scale factor and/or the zoom factor of the UIScrollView.
For example, having a camera picture imported at UIScrollView zoom scale 0.0, I want to keep it at ~8MP'ish.
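One place those factors could enter is when mapping the scroll view's visible region back onto the full-resolution image: divide by zoomScale to undo the zoom, then multiply by the image's pixels-per-point scale. A pure-math sketch of that mapping (the names are mine, a sketch rather than the poster's code):

```swift
import Foundation

// Map the scroll view's visible region into the full-resolution image's
// pixel space: divide by zoomScale to undo zooming, then multiply by the
// image's pixels-per-point scale. Crop the original image with the result
// instead of snapshotting the layer, so no resolution is lost.
func cropRect(contentOffset: CGPoint, viewSize: CGSize,
              zoomScale: CGFloat, imageScale: CGFloat) -> CGRect {
    let factor = imageScale / zoomScale
    return CGRect(x: contentOffset.x * factor,
                  y: contentOffset.y * factor,
                  width: viewSize.width * factor,
                  height: viewSize.height * factor)
}
```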