ScaleAspectFit blank spaces, imageView.image nil - iOS

I have a UIImageView whose image is set from a given URL. Then I set the content mode to Scale Aspect Fit. This works fine, but there is a ton of blank space above and below the image, when the image is supposed to sit directly at the top of the screen.
What I would like to do is rescale the UIImage size (maybe frame?) to match the new size created when Aspect Fit is applied (this seems to be the suggestion most people received).
The problem is, whenever I test previous solutions, I get a nil error. Specifically:
import UIKit
import AVFoundation

class OneItemViewController: UIViewController {

    @IBOutlet weak var itemImage: UIImageView!
    @IBOutlet weak var menuButton: UIBarButtonItem!
    @IBOutlet weak var titleText: UILabel!

    override func viewDidLoad() {
        super.viewDidLoad()

        let imageURL: NSURL? = NSURL(string: "https://upload.wikimedia.org/wikipedia/commons/d/d5/Pic_de_neige_cordier_Face_E.jpg")
        if imageURL != nil {
            itemImage.sd_setImageWithURL(imageURL)
            itemImage.contentMode = UIViewContentMode.ScaleAspectFit
            AVMakeRectWithAspectRatioInsideRect(itemImage.image!.size, itemImage.bounds)
            /**
            let imageSize: CGSize = onScreenPointSizeOfImageInImageView(itemImage)
            var imageViewRect: CGRect = itemImage.frame
            imageViewRect.size = imageSize
            itemImage.frame = imageViewRect
            **/
        }

        if self.revealViewController() != nil {
            menuButton.target = self.revealViewController()
            menuButton.action = "revealToggle:"
            self.view.addGestureRecognizer(self.revealViewController().panGestureRecognizer())
        }

        self.titleText.text = "Title: " + "Earl and Countess of Derby with Edward, their Infant Son, and Chaplain"
        // Do any additional setup after loading the view.
    }

    /**
    func onScreenPointSizeOfImageInImageView(imageV: UIImageView) -> CGSize {
        var scale: CGFloat
        if (imageV.frame.size.width > imageV.frame.size.height) {
            if (imageV.image!.size.width > imageV.image!.size.height) {
                scale = imageV.image!.size.height / imageV.frame.size.height
            } else {
                scale = imageV.image!.size.width / imageV.frame.size.width
            }
        } else {
            if (imageV.image!.size.width > imageV.image!.size.height) {
                scale = imageV.image!.size.width / imageV.frame.size.width
            } else {
                scale = imageV.image!.size.height / imageV.frame.size.height
            }
        }
        return CGSizeMake(imageV.image!.size.width / scale, imageV.image!.size.height / scale)
    }
    **/
}
I tried two things here to get rid of the blank space.
The first attempt is the call to AVMakeRectWithAspectRatioInsideRect.
The second attempt is the two chunks of code in the /** **/ comments (the onScreenPointSizeOfImageInImageView function and the calls to it in viewDidLoad).
But I can't tell whether either works, because itemImage.image!.size is causing an error.
So two questions:
1) Why is itemImage.image!.size giving me a nil while unwrapping?
2) Has anyone found a faster way to remove the blank space caused by Aspect Fit?

imageView.widthAnchor.constraint(equalTo: imageView.heightAnchor, multiplier: image.size.width / image.size.height).isActive = true

This answer is for programmatic UIKit with Swift 5.
As mentioned by @Ignelio, using an NSLayoutConstraint will do the work for the UIImageView.
The reasoning is that you want to maintain the aspect ratio. Using
// `imageView` is whatever you named your UIImageView
imageView.contentMode = .scaleAspectFit
makes the UIImage inside the UIImageView fit within the view at its original aspect ratio, given the width. However, as mentioned in Apple's documentation, that leaves the remaining area as transparent spacing. Hence, what you want to tackle is the UIImageView's own size/frame.
--
With this method, you give the UIImageView a width constraint equal to its height multiplied by the UIImage's aspect ratio, so it scales back perfectly with respect to its parent's width constraint (whatever that may be).
// `imageView` is your UIImageView, `image` is its UIImage
imageView.widthAnchor.constraint(equalTo: imageView.heightAnchor, multiplier: image.size.width / image.size.height).isActive = true
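One caveat worth adding (my own note, not part of the answer): the nil unwrap in the original question most likely happens because sd_setImageWithURL downloads the image asynchronously, so itemImage.image is still nil on the very next line. A minimal sketch, with placeholder names, of applying the same constraint only once an image is actually present:
func applyAspectRatio(to imageView: UIImageView) {
    // nothing to do until the image has finished loading
    guard let image = imageView.image else { return }
    imageView.contentMode = .scaleAspectFit
    imageView.widthAnchor.constraint(
        equalTo: imageView.heightAnchor,
        multiplier: image.size.width / image.size.height
    ).isActive = true
}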

Related

Dynamic UIImageView inside UITableViewCell based on aspect ratio of image coming from URL

I'm implementing a UITableViewCell for a social media post which includes username, user image, text, media (such as an image posted by the user), and like/comment buttons. The media is optional; if an image was posted, I unhide the UIView that contains the image view and adjust the UIImageView's height based on the aspect ratio of the image coming from the API response.
Here is my code for the UITableViewCell class:
class PostsTableViewCell: UITableViewCell {

    @IBOutlet weak var ivProfilePic: UIImageView!
    @IBOutlet weak var ivPost: UIImageView!
    @IBOutlet weak var lblName: UILabel!
    @IBOutlet weak var lblPostContent: UILabel!
    @IBOutlet weak var viewPost: UIView!
    @IBOutlet weak var heighIvPost: NSLayoutConstraint!

    var postImage: UIImage? {
        didSet {
            if let image = postImage {
                configureCellWhenPostImageIsAvailable(image: image)
            }
            viewPost.isHidden = postImage == nil
            layoutIfNeeded()
        }
    }

    override func awakeFromNib() {
        super.awakeFromNib()
        // Initialization code
        viewPost.isHidden = true
    }

    override func setSelected(_ selected: Bool, animated: Bool) {
        super.setSelected(selected, animated: animated)
        // Configure the view for the selected state
    }

    override func prepareForReuse() {
        super.prepareForReuse()
        viewPost.isHidden = true
        heighIvPost.constant = 162 // default height of UIImageView
        ivPost.image = nil
    }

    // To calculate aspect ratio & set heighIvPost constraint value
    func configureCellWhenPostImageIsAvailable(image: UIImage) {
        let hRatio = image.size.height / image.size.width
        let newImageHeight = hRatio * viewPost.bounds.width
        heighIvPost.constant = newImageHeight
        ivPost.image = image
        ivPost.layoutIfNeeded()
    }
}
This is my cellForRowAt function in the main UIViewController:
func tableView(_ tableView: UITableView, cellForRowAt indexPath: IndexPath) -> UITableViewCell {
    let cell = tableView.dequeueReusableCell(withIdentifier: "PostsTableViewCell", for: indexPath) as! PostsTableViewCell
    let data = posts[indexPath.row]

    if let userImage = data.memberProfilePic {
        cell.ivProfilePic.kf.setImage(with: userImage)
    }
    cell.lblName.text = data.memberName

    if let postText = data.postContent {
        cell.lblPostContent.isHidden = false
        cell.lblPostContent.text = postText
    }

    if let postImage = data.postImage { // data.postImage contains the URL for the image; if not nil, unhide viewPost and set the image
        cell.viewPost.isHidden = false
        cell.ivPost.kf.setImage(with: postImage)
        if let image = cell.ivPost.image {
            cell.configureCellWhenPostImageIsAvailable(image: image)
            cell.layoutIfNeeded()
        }
    }
    return cell
}
Here is my data model just in case:
class PostEntity: NSObject {
    var postContent: String?
    var postImage: URL?
    var memberName: String?
    var memberProfilePic: URL?

    override init() {
    }

    init(jsonData: JSON) {
        postContent = jsonData["postContent"].stringValue
        postImage = jsonData["postImages"].url
        memberName = jsonData["member_name"].stringValue
        memberProfilePic = jsonData["member_profile"].url
    }
}
When I run this code, my requirement is that if there is any image in the post (i.e. data.postImage != nil), it should be displayed with the correct aspect ratio. However, what I get is:
When the UITableView is loaded, the cells that are loaded show images with the correct aspect ratio.
When I scroll down, the UIImageView does not show images with the correct aspect ratio, only the default one.
When I scroll back up, I think because of prepareForReuse, it again displays images with the correct aspect ratio.
The only problem I face is when I scroll down and new cells are created: they won't show the correct aspect ratio even if data.postImage != nil.
Here is the video link for further clarification:
https://youtube.com/shorts/vcRb4u_KAVM?feature=share
In the video above you can see that at the start all images have a perfect aspect ratio, but when I scroll down and reach the robot and car images, they are the default size (i.e. 162); when I scroll away and come back up to them, they get resized to the desired result.
I want to remove that behaviour and always have the correct aspect ratio based on the image size.
The problem is...
When a table view calls cellForRowAt, the row height gets set -- usually by constraints on the content of the cell. If you change the height after the cell has been displayed, it is up to you to inform the table view that it needs to re-calculate the height of the row.
So, you can add a closure to your cell class like this:
class PostsTableViewCell: UITableViewCell {

    // closure
    var layoutChange: ((PostsTableViewCell) -> ())?

    // the rest of your cell code...

    func configureCellWhenPostImageIsAvailable(image: UIImage) {
        let hRatio = image.size.height / image.size.width
        let newImageHeight = hRatio * viewPost.bounds.width
        heighIvPost.constant = newImageHeight
        ivPost.image = image

        // not needed
        //ivPost.layoutIfNeeded()

        // use the closure to inform the table view the row height has changed
        self.layoutChange?(self)
    }
}
then, in your controller's cellForRowAt:
cell.layoutChange = { [weak self] theCell in
    guard let self = self,
          let cellIndexPath = tableView.indexPath(for: theCell)
    else { return }

    // you probably want to update something in your data
    // maybe:
    var data = self.posts[cellIndexPath.row]
    data.profilePicDownloaded = true
    self.posts[cellIndexPath.row] = data

    // tell the table view to re-calculate the row heights
    self.tableView.performBatchUpdates(nil, completion: nil)
}
return cell
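A further hedged suggestion of my own, not part of the answer above: Kingfisher sets the image asynchronously, so cell.ivPost.image is usually still nil right after kf.setImage(with:) returns, which is why freshly created cells keep the default height. Assuming a recent Kingfisher version whose completion handler passes a Result, the cell can be configured from the completion instead:
cell.ivPost.kf.setImage(with: postImage) { result in
    // the completion runs on the main queue once the download (or cache lookup) finishes
    if case .success(let value) = result {
        // with the modification above, this also fires the layoutChange closure
        cell.configureCellWhenPostImageIsAvailable(image: value.image)
    }
}
In a real app you would also want to check that the cell has not been reused for a different row before the completion runs.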

UIProgressView setting to an image? UIprogressView entire contentView, downwards?

Hi there,
I've been trying to solve the problem of how to create two progress views in my view controller for the past week with little to no success. Please see the diagram I have attached for an idea of what I am asking.
I am creating a game with a timer which elapses with each question. As the time elapses, I would like a UIProgressView to cascade down the entire screen, transitioning from blue to white as it goes (please see number 1 in the diagram; this indicates the white trackTintColor).
Number 2 in the diagram (the cyan part) represents the progressTintColor. The diagram should hopefully make clear that I am hoping to customise the progress view so that it tracks downwards, which is one of the main issues at the moment. (I can only seem to find walkthroughs for small customisable progress views which move sideways, not up and down.)
The hardest part of what I am trying to achieve (number 3 in the diagram) is customising a UIImageView so that it slowly drains downward with the inverse colours to the background (so the dog will be white, with cyan flooding downwards to coincide with the progress view elapsing behind it).
These are the options I have tried so far to solve this issue, to no avail:
I have tried using a progressView for the background colour, but I cannot find any examples anywhere of anyone else doing this (downwards), so I'm not sure if it's even possible.
For the image I have tried drawing a CAShapeLayer, but it appears the dog is too difficult a shape for a novice like myself to draw effectively. And upon realising that I would not be able to put a layer of a different colour behind the dog to move downwards (since the screen will also be changing colour), I abandoned all hope of using this option.
I have tried a UIView transition for the dog image; however, the only option I could find that was anywhere close was transitionCrossDissolve, which did not give the downward effect I was hoping for, but instead just faded from a white dog to a cyan dog, which was not appropriate. Should I somehow be using progressImage? If so, is there anywhere I can find help with the syntax for that? I can't seem to find any.
I currently have 55 images in my assets folder, each with slightly more cyan in it than the last, progressively moving downwards (and I animate through the array of images). Although this works, it is not exactly seamless and does look a little like the user is waiting for an image to load on dial-up.
If anyone has any ideas or could spare the time to walk me through how I would go about doing this, I would very much appreciate it. I am very much still a beginner, so the more detail the better! Oh yes, to make matters more difficult, so far I have managed to do the app programmatically, so answers in that form would be great.
Thanks in advance!
I hope you have done number 1 and number 2 already. I have tried number 3.
I tried it with two UIViews and it's working fine. I hope it gives you some idea of how to achieve yours.
I have two images.
With the help of a Timer, I tried a sample for this progress view.
Initially, cyanDogView's height is zero. Once the Timer starts, the height is increased by 2px on each tick. Once cyanDogView's height is greater than blackDogView's, the Timer stops.
Coding
@IBOutlet weak var blackDogView: UIView!
@IBOutlet weak var blackDogImgVw: UIImageView!
@IBOutlet weak var cyanDogView: UIView!
@IBOutlet weak var cyanDogImgVw: UIImageView!

var getHeight: CGFloat = 0.0
var progressTime = Timer()

override func viewDidAppear(_ animated: Bool) {
    cyanDogView.frame.size.height = 0
    getHeight = blackDogView.frame.height
}

@IBAction func startAnimateButAcn(_ sender: UIButton) {
    progressTime = Timer.scheduledTimer(timeInterval: 0.2, target: self, selector: #selector(self.update), userInfo: nil, repeats: true)
}

@objc func update() {
    cyanDogView.frame.size.height = cyanDogView.frame.size.height + 2
    if cyanDogView.frame.size.height >= getHeight {
        progressTime.invalidate()
        cyanDogView.frame.size.height = 0
    }
}
Story Board
Output
I'm going to give you a bit of a Frankenstein answer here, part Obj-C, part Swift. I hope it helps.
First, you could mask your image with the mask image you're using as a template:
- (UIImage *)createImageFromImage:(UIImage *)image
                    withMaskImage:(UIImage *)mask {
    CGImageRef imageRef = image.CGImage;
    CGImageRef maskRef = mask.CGImage;

    CGImageRef imageMask = CGImageMaskCreate(CGImageGetWidth(maskRef),
                                             CGImageGetHeight(maskRef),
                                             CGImageGetBitsPerComponent(maskRef),
                                             CGImageGetBitsPerPixel(maskRef),
                                             CGImageGetBytesPerRow(maskRef),
                                             CGImageGetDataProvider(maskRef),
                                             NULL,
                                             YES);

    CGImageRef maskedReference = CGImageCreateWithMask(imageRef, imageMask);
    CGImageRelease(imageMask);

    UIImage *maskedImage = [UIImage imageWithCGImage:maskedReference];
    CGImageRelease(maskedReference);
    return maskedImage;
}

UIImage *image = [UIImage imageNamed:@"Photo.png"];
UIImage *mask = [UIImage imageNamed:@"Mask.png"];

self.imageView.image = [self createImageFromImage:image withMaskImage:mask];
Credit to Keenle
Next you can create a custom progress view based on a path :
func drawProgressLayer() {
    let bezierPath = UIBezierPath(roundedRect: viewProg.bounds, cornerRadius: viewCornerRadius)
    bezierPath.closePath()
    borderLayer.path = bezierPath.CGPath
    borderLayer.fillColor = UIColor.blackColor().CGColor
    borderLayer.strokeEnd = 0
    viewProg.layer.addSublayer(borderLayer)
}

// Make sure the value you pass to `rectProgress` (which defines the width of your
// progress bar) is in the range 0 <--> viewProg.bounds.width - 10, so the layer stays
// inside the view with a little border left spare.
// If you receive your progress values in the 0.00 -- 1.00 range, just multiply them by
// viewProg.bounds.width - 10 and pass the result as the *incremented* parameter of this func.
func rectProgress(incremented: CGFloat) {
    print(incremented)
    if incremented <= viewProg.bounds.width - 10 {
        progressLayer.removeFromSuperlayer()
        let bezierPathProg = UIBezierPath(roundedRect: CGRectMake(5, 5, incremented, viewProg.bounds.height - 10), cornerRadius: viewCornerRadius)
        bezierPathProg.closePath()
        progressLayer.path = bezierPathProg.CGPath
        progressLayer.fillColor = UIColor.whiteColor().CGColor
        borderLayer.addSublayer(progressLayer)
    }
}
Credit to Dravidian
Please click the blue links and explore their answers in full to get a grasp of what is possible.
OK, so using McDonal_11's answer I have managed to get the progress view working. However, I am still experiencing some problems: I cannot add anything on top of the progressView (it just blanket-covers everything underneath), and before the dog begins its animation into a cyan dog there is a brief flash of the entire cyan dog image.
Code below
private let contentView = UIView(frame: .zero)
private let backgroundImageView = UIImageView(frame: .zero)
private let progressView = ProgressView(frame: .zero)
private let clearViewOverProgress = UIView(frame: .zero)
private let blackDogView = UIView(frame: .zero)
private let blackDogViewImage = UIImageView(frame: .zero)
private let cyanDogView = UIView(frame: .zero)
private let cyanDogViewImage = UIImageView()

var timer = Timer()
var startHeight: CGFloat = 0.0

override func viewDidAppear(_ animated: Bool) {
    super.viewDidAppear(animated)
    self.progressView.setProgress(10.0, animated: true)
    startHeight = cyanDogViewImage.frame.height
    self.cyanDogViewImage.frame.size.height = 0
    self.timer = Timer.scheduledTimer(timeInterval: 0.01, target: self, selector: #selector(self.updateImage), userInfo: nil, repeats: true)
}

override func viewDidLoad() {
    super.viewDidLoad()
    setupViews()
}

func setupViews() {
    self.view.addSubview(backgroundImageView)
    self.backgroundImageView.addSubview(progressView)
    self.progressView.addSubview(clearViewOverProgress)
    self.clearViewOverProgress.addSubview(blackDogView)
    self.blackDogView.addSubview(blackDogViewImage)
    self.blackDogViewImage.addSubview(cyanDogView)
    self.cyanDogView.addSubview(cyanDogViewImage)

    // Setting up constraints of both labels (has been omitted for brevity)

    self.blackDogViewImage.image = UIImage(named: "BlackDogImage")
    self.cyanDogViewImage.contentMode = UIViewContentMode.top
    self.cyanDogViewImage.clipsToBounds = true
    self.cyanDogViewImage.image = UIImage(named: "CyanDogImage")
}

func updateImage() {
    cyanDogViewImage.frame.size.height = cyanDogViewImage.frame.size.height + 0.07
    if cyanDogViewImage.frame.size.height >= blackDogViewImage.frame.size.height {
        timer.invalidate()
        cyanDogViewImage.frame.size.height = blackDogViewImage.frame.size.height
    }
}

func outOfTime() {
    timer.invalidate()
}
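A hedged alternative of my own to stepping the height with a 0.01-second Timer, assuming the same programmatic, frame-based layout as above: drive the reveal with a single UIView animation over the question's duration, which avoids the stepped look.
// sketch only; cyanDogViewImage and blackDogViewImage are the views from the code above
func startDrainAnimation(over duration: TimeInterval) {
    cyanDogViewImage.frame.size.height = 0
    UIView.animate(withDuration: duration, delay: 0, options: [.curveLinear], animations: {
        // the image view clips to bounds and uses .top content mode,
        // so growing its height reveals the cyan image from the top down
        self.cyanDogViewImage.frame.size.height = self.blackDogViewImage.frame.size.height
    }, completion: nil)
}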

Animate background UIImage

I have this asset as the background of a view, and it's assigned to the background using the code described below.
The animation should make the diagonal rows move from left to right while loading is happening.
Any pointers on how to get this done?
var view = UIImageView()
view.translatesAutoresizingMaskIntoConstraints = false
view.image = UIImage(assetIdentifier: "background-view")
view.layer.cornerRadius = 8.0
view.layer.masksToBounds = true
view.contentMode = .ScaleAspectFill
view.clipsToBounds = true
view.backgroundColor = UIColor.whiteColor()
return view
"background-view" is here
I guess the best approach would be to have all the images needed (all the frames) to create the animated image you want, and then put these images in the UIImageView's animationImages property.
For instance, if you have a loading bar gif loading_bar.gif, you can extract all the individual frames from that gif (c.f. this tutorial among others: http://www.macobserver.com/tmo/article/preview-extracting-frames-from-animated-gifs).
Load all the images in your code (from the assets folder, for instance) and then do something like:
func getAnimatedImages() -> Array<UIImage>
{
    var animatedImages = Array<UIImage>()
    var allImagesLoaded = false
    var i = 0
    while !allImagesLoaded
    {
        if let image = UIImage(named: "background_" + String(i))
        {
            animatedImages.append(image)
            i++
        }
        else
        {
            allImagesLoaded = true
        }
    }
    return animatedImages
}
( if you called your images background_0, background_1, etc... )
and then
self.yourBackgroundImageView.animationImages = self.getAnimatedImages()
self.yourBackgroundImageView.startAnimating()
I would use the class method animatedImageNamed(_ name: String, duration: NSTimeInterval) -> UIImage?
From the Apple doc:
this method would attempt to load images from files with the names ‘image0’, ‘image1’ and so on all the way up to ‘image1024’. All images included in the animated image should share the same size and scale.
If you create an animated image you can assign it to your UIImageView and it will animate automatically.
As for the image creation, @Randy had a pretty good idea :)
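A minimal sketch of that approach (the asset names are my assumption; animatedImageNamed appends 0, 1, 2, ... to the base name for you):
// assumes frames named "background_0", "background_1", ... in the asset catalog,
// all sharing the same size and scale
if let animatedBackground = UIImage.animatedImageNamed("background_", duration: 1.0) {
    yourBackgroundImageView.image = animatedBackground // animates automatically
}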

CS193P Cassini code runs on simulator but crashes on device? "Message from debugger: Terminated due to Memory Error"

I'm going through Stanford's CS193P online course on iOS dev.
Lecture 9 deals with UIScrollView / delegation via a simple URL UIImage fetch app. Said app works perfectly fine in the simulator, but on a live device (iPhone 5) it launches and then crashes after trying to fetch an image, with the following:
Message from debugger: Terminated due to Memory Error
I went back into my code, reread about delegation, and searched SO (I found a similar thread, and I made sure my project scheme does NOT have zombies enabled). I updated my device and my compiler/OS, and am kinda bummed about what might be preventing this from running on the device...
The class example can be downloaded from Stanford at https://web.stanford.edu/class/cs193p/cgi-bin/drupal/system/files/sample_code/Cassini.zip but this code behaves the same way! It was originally written for iOS 8.1 and we're at 8.4; are there any known issues?
code for the imageview controller:
import UIKit

class ImageViewController: UIViewController, UIScrollViewDelegate
{
    // our Model
    // publicly settable
    // when it changes (but only if we are on screen)
    // we'll fetch the image from the imageURL
    // if we're off screen when this happens (view.window == nil)
    // viewWillAppear will get it for us later
    var imageURL: NSURL? {
        didSet {
            image = nil
            if view.window != nil {
                fetchImage()
            }
        }
    }

    // fetches the image at imageURL
    // does so off the main thread
    // then puts a closure back on the main queue
    // to handle putting the image in the UI
    // (since we aren't allowed to do UI anywhere but main queue)
    private func fetchImage()
    {
        if let url = imageURL {
            spinner?.startAnimating()
            let qos = Int(QOS_CLASS_USER_INITIATED.value)
            dispatch_async(dispatch_get_global_queue(qos, 0)) { () -> Void in
                let imageData = NSData(contentsOfURL: url) // this blocks the thread it is on
                dispatch_async(dispatch_get_main_queue()) {
                    // only do something with this image
                    // if the url we fetched is the current imageURL we want
                    // (that might have changed while we were off fetching this one)
                    if url == self.imageURL { // the variable "url" is captured from above
                        if imageData != nil {
                            // this might be a waste of time if our MVC is out of action now
                            // which it might be if someone hit the Back button
                            // or otherwise removed us from split view or navigation controller
                            // while we were off fetching the image
                            self.image = UIImage(data: imageData!)
                        } else {
                            self.image = nil
                        }
                    }
                }
            }
        }
    }

    @IBOutlet private weak var spinner: UIActivityIndicatorView!

    @IBOutlet private weak var scrollView: UIScrollView! {
        didSet {
            scrollView.contentSize = imageView.frame.size // critical to set this!
            scrollView.delegate = self                    // required for zooming
            scrollView.minimumZoomScale = 0.03            // required for zooming
            scrollView.maximumZoomScale = 1.0             // required for zooming
        }
    }

    // UIScrollViewDelegate method
    // required for zooming
    func viewForZoomingInScrollView(scrollView: UIScrollView) -> UIView? {
        return imageView
    }

    private var imageView = UIImageView()

    // convenience computed property
    // lets us get involved every time we set an image in imageView
    // we can do things like resize the imageView,
    // set the scroll view's contentSize,
    // and stop the spinner
    private var image: UIImage? {
        get { return imageView.image }
        set {
            imageView.image = newValue
            imageView.sizeToFit()
            scrollView?.contentSize = imageView.frame.size
            spinner?.stopAnimating()
        }
    }

    // put our imageView into the view hierarchy
    // as a subview of the scrollView
    // (will install it into the content area of the scroll view)
    override func viewDidLoad() {
        super.viewDidLoad()
        scrollView.addSubview(imageView)
    }

    // for efficiency, we will only actually fetch the image
    // when we know we are going to be on screen
    override func viewWillAppear(animated: Bool) {
        super.viewWillAppear(animated)
        if image == nil {
            fetchImage()
        }
    }
}
The source of the issue is that decompressing an image from data (the file-format representation of the image) for display can 'eat' a lot of memory.
Here is a very good article about iOS image decompression -> Avoiding Image Decompression Sickness
Since all images in the Cassini application are VERY large (wave_earth_mosaic_3.jpg (9999×9999), pia03883-full.jpg (14400×9600)), the image decompression process 'eats' all the phone's memory. This leads to the application crash.
To fix the Cassini issue I modified the code and added a small function to halve the image resolution.
Here is a code example (code fixed for Swift 2.0):
...
if imageData != nil {
    // this might be a waste of time if our MVC is out of action now
    // which it might be if someone hit the Back button
    // or otherwise removed us from split view or navigation controller
    // while we were off fetching the image
    if let imageSource = UIImage(data: imageData!) {
        self.image = self.imageResize(imageSource)
    }
} else {
    self.image = nil
}
...
func imageResize(imageOriginal: UIImage) -> UIImage {
    let image = imageOriginal.CGImage
    let width = CGImageGetWidth(image) / 2
    let height = CGImageGetHeight(image) / 2
    let bitsPerComponent = CGImageGetBitsPerComponent(image)
    let bytesPerRow = CGImageGetBytesPerRow(image)
    let colorSpace = CGImageGetColorSpace(image)
    let bitmapInfo = CGImageGetBitmapInfo(image)

    let context = CGBitmapContextCreate(nil, width, height, bitsPerComponent, bytesPerRow, colorSpace, bitmapInfo.rawValue)
    CGContextSetInterpolationQuality(context, CGInterpolationQuality.High)
    CGContextDrawImage(context, CGRect(origin: CGPointZero, size: CGSize(width: CGFloat(width), height: CGFloat(height))), image)

    let scaledImage = UIImage(CGImage: CGBitmapContextCreateImage(context)!)
    return scaledImage
}
So now the application loads all the images without crashing.
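For reference, a more general way to get the same effect in current Swift (my own sketch, not part of the course code; names are placeholders) is to let ImageIO downsample while decoding, so the full-resolution bitmap never has to exist in memory:
import ImageIO
import UIKit

// decode `data` straight to an image no larger than `maxPixelSize` on its longest side
func downsampledImage(from data: Data, maxPixelSize: Int) -> UIImage? {
    let sourceOptions: [CFString: Any] = [kCGImageSourceShouldCache: false]
    guard let source = CGImageSourceCreateWithData(data as CFData, sourceOptions as CFDictionary) else {
        return nil
    }
    let thumbnailOptions: [CFString: Any] = [
        kCGImageSourceCreateThumbnailFromImageAlways: true,
        kCGImageSourceShouldCacheImmediately: true,
        kCGImageSourceCreateThumbnailWithTransform: true,
        kCGImageSourceThumbnailMaxPixelSize: maxPixelSize
    ]
    guard let cgImage = CGImageSourceCreateThumbnailAtIndex(source, 0, thumbnailOptions as CFDictionary) else {
        return nil
    }
    return UIImage(cgImage: cgImage)
}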
SWIFT 2.0 fix:
add this to Info.plist to allow HTTP loading
<key>NSAppTransportSecurity</key>
<dict>
    <!--Include to allow all connections (DANGER)-->
    <key>NSAllowsArbitraryLoads</key>
    <true/>
</dict>

Large UIImage not showing in UIImageView

I'm capturing a full page screenshot of a UIWebView and passing the image through a segue:
// WebViewController.swift
override func prepareForSegue(segue: UIStoryboardSegue, sender: AnyObject?) {
    if segue.identifier == "captureSegue" {
        var captureViewcontroller: CaptureViewController = segue.destinationViewController as! CaptureViewController
        captureViewcontroller.tempCaptureImage = image
    }
}
The UIImage is then being saved to Documents/Images in the application directory just fine. If I open it from the file system it shows just fine in Preview. However, if the screenshot is really large, the images won't show in a UIImageView. The UIImageView renders blank. Adding a breakpoint shows that the image data is there, just not rendering.
// CaptureViewController.swift
var tempCaptureImage: UIImage?

@IBOutlet weak var imageScrollView: UIScrollView!
@IBOutlet weak var captureImageView: UIImageView!
@IBOutlet weak var titleField: UITextField!

override func viewDidLoad() {
    super.viewDidLoad()
    self.automaticallyAdjustsScrollViewInsets = false
}

override func viewDidAppear(animated: Bool) {
    super.viewDidAppear(animated)
    if let image = tempCaptureImage {
        captureImageView.frame = CGRect(origin: CGPoint(x: 0, y: 0), size: image.size)
        captureImageView.image = image

        imageScrollView.contentSize = image.size
        let scrollViewFrame = imageScrollView.bounds
        let scaleWidth = scrollViewFrame.size.width / imageScrollView.contentSize.width
        let scaleHeight = scrollViewFrame.size.height / imageScrollView.contentSize.height
        let minScale = min(scaleWidth, scaleHeight)

        imageScrollView.minimumZoomScale = minScale
        imageScrollView.maximumZoomScale = 1.0
        imageScrollView.zoomScale = minScale
    }
}
Could this be because the image is large? If the pages that I capture are smaller, then the images show just fine. Are there any recommended techniques for viewing large images on iDevices?
EDIT:
Here is the output of the Variables View with a breakpoint. Maybe you see something I'm missing.
For people who have a similar problem: maybe try testing on a real device. For me, an image of size 1000 × 7000 isn't displayed in the Simulator or Interface Builder, but it can be displayed on an iPhone 6s Plus.
Also, Interface Builder will complain if image size is larger than 10000 x 10000.
You do not give any info on how large the image actually is, so it's impossible to say what limit you might be hitting. Remember, though, that a triple-scale image multiplies the underlying bitmap memory size by 9 in comparison to a single-scale image. That's an order of magnitude! I can well believe that you would quickly exceed the memory capacity of the app and even crash if you tried to access an image that's too large. So step one would surely be to load the image as a single-scale image; see the ImageIO framework to learn how to do that. See also the docs on CATiledLayer if you want to know how to display a very large image.
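As one concrete illustration of keeping the displayed bitmap small (a sketch of mine, not from the answer above; UIGraphicsImageRenderer requires iOS 10+), the capture can be redrawn at a display-sized resolution before it is handed to the image view:
// `capture` is the large screenshot; `targetSize` would be something close to the
// scroll view's bounds in points rather than the full page size
func scaledDown(_ capture: UIImage, to targetSize: CGSize) -> UIImage {
    let renderer = UIGraphicsImageRenderer(size: targetSize)
    return renderer.image { _ in
        capture.draw(in: CGRect(origin: .zero, size: targetSize))
    }
}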
