Activity Indicator for Swift only covering part of the screen - iOS

I'm trying to show an activity indicator with a background that covers the entire screen. My problem is that when I run the app, it only covers a portion of the screen. I am not sure what I am doing wrong here.
func imagePickerController(picker: UIImagePickerController, didFinishPickingImage image: UIImage, editingInfo: [String : AnyObject]?) {
    self.dismissViewControllerAnimated(true) { () -> Void in
        self.activityIndicator = UIActivityIndicatorView(frame: self.view.frame)
        self.activityIndicator.backgroundColor = UIColor(white: 1.0, alpha: 0.5)
        self.activityIndicator.center = self.view.center
        self.activityIndicator.hidesWhenStopped = true
        self.activityIndicator.activityIndicatorViewStyle = UIActivityIndicatorViewStyle.Gray
        self.view.addSubview(self.activityIndicator)
        self.activityIndicator.startAnimating()
        UIApplication.sharedApplication().beginIgnoringInteractionEvents()
        print("save the data on to Core Data and onto Parse, then segue to the new input controller")
        let file = PFFile(data: UIImageJPEGRepresentation(image, 1.0)!)
        let insurance = PFObject(className: "Insurance")
        insurance["userId"] = PFUser.currentUser()?.objectId
        insurance["imageFile"] = file
        insurance.saveInBackgroundWithBlock({ (success, error) -> Void in
            self.activityIndicator.stopAnimating()
            UIApplication.sharedApplication().endIgnoringInteractionEvents()
            if success {
                self.performSegueWithIdentifier("segueToDetails", sender: self)
            } else {
                self.displayAlert("Problem Saving", message: "There was an error with saving the data")
            }
        })
    }
}
Are there any suggestions?

Change this line
self.activityIndicator = UIActivityIndicatorView(frame: self.view.frame)
to
self.activityIndicator = UIActivityIndicatorView(frame: self.view.bounds)
Judging from your screenshot, the frame of self.view is something like (0, 64, width, height): its y origin sits 64 points down the screen, below the status bar and navigation bar. You copy this frame and give the activity indicator the same frame. That would work if you were adding the indicator to a view whose own origin is at 0, but the view you are adding it to already starts 64 points down the screen, so the indicator's real y position in screen terms becomes 64 + 64 = 128 points (the view's y origin plus the indicator's y origin). Using bounds instead of frame gives a rectangle whose origin is (0, 0) in the view's own coordinate space, which is what a subview's frame should be expressed in.
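If the view can rotate or otherwise resize, it also helps to let the indicator track its superview. A minimal sketch (the autoresizing line is an addition, not from the original answer):

self.activityIndicator = UIActivityIndicatorView(frame: self.view.bounds)
// Keep covering the superview if its size changes (e.g. on rotation).
self.activityIndicator.autoresizingMask = [.FlexibleWidth, .FlexibleHeight]
self.view.addSubview(self.activityIndicator)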

What I did to solve this problem is to put the loading indicator on the topmost view controller:
private var topMostController: UIViewController? {
    var presentedVC = UIApplication.sharedApplication().keyWindow?.rootViewController
    while let pVC = presentedVC?.presentedViewController {
        presentedVC = pVC
    }
    return presentedVC
}
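Usage might then look like this (a sketch that condenses the indicator setup from the question):

if let topVC = topMostController {
    let indicator = UIActivityIndicatorView(frame: topVC.view.bounds)
    indicator.backgroundColor = UIColor(white: 1.0, alpha: 0.5)
    indicator.activityIndicatorViewStyle = .Gray
    // Adding to the topmost controller's view keeps the indicator above any presented UI.
    topVC.view.addSubview(indicator)
    indicator.startAnimating()
}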
You might want to try this, a project of mine: https://github.com/goktugyil/EZLoadingActivity

Related

How to fullscreen image and dismiss with swipe similar to Apple Photos

How can I implement the fullscreen image and dismiss functionality of Apple Photos? Additionally, how can I fit an image to a fullscreen view while keeping its aspect ratio? Are they sending a UIImage to a new view controller?
My current method of fullscreening an image simply sets a UIImageView's frame equal to the superview's frame while turning the alphas of the UINavigationBar and UITabBar to 0. To dismiss, I add a tap gesture recognizer that reverses the alphas and removes the UIImageView from its superview.
Here's my fullscreen and dismiss code:
func fullscreen(forImage image: UIImage) {
    let imageView = UIImageView(image: image)
    imageView.frame = self.view.frame
    imageView.backgroundColor = .black
    imageView.contentMode = .scaleAspectFill
    imageView.clipsToBounds = true
    imageView.isUserInteractionEnabled = true
    self.navigationController?.navigationBar.alpha = 0
    self.tabBarController?.tabBar.alpha = 0
    let dismissTap = UITapGestureRecognizer(target: self, action: #selector(dismissFullscreenImage))
    imageView.addGestureRecognizer(dismissTap)
    self.view.addSubview(imageView)
}

@objc func dismissFullscreenImage(_ sender: UITapGestureRecognizer) {
    sender.view?.removeFromSuperview()
    self.navigationController?.navigationBar.alpha = 1
    self.tabBarController?.tabBar.alpha = 1
}
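Side note on the aspect-ratio part of the question: AVFoundation has a helper, AVMakeRect(aspectRatio:insideRect:), that computes an aspect-fit rectangle. A minimal sketch (not from the original post):

import AVFoundation

// Fit the image inside the view's bounds while preserving its aspect ratio.
let fitRect = AVMakeRect(aspectRatio: image.size, insideRect: view.bounds)
imageView.frame = fitRect
imageView.contentMode = .scaleAspectFit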
This could easily be achieved using Apple's QuickLook framework, which handles many file types and collections of images well.
Edit: Most of the functionality you want is built into QLPreviewController:
let previewController = QLPreviewController()
previewController.dataSource = self
self.present(previewController, animated: true, completion: nil)
The data source is whatever class conforms to the QLPreviewControllerDataSource protocol.
Here is a video guide from Apple on how to achieve this easily.
Edit: This part goes in the previewItemAt function
guard let url = Bundle.main.url(forResource: "imageName", withExtension: "jpg") else {
    fatalError("Could not load imageName.jpg")
}
return url as QLPreviewItem
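Put together, a minimal data source conformance might look like the following sketch (MyViewController and the imageName.jpg resource are placeholders):

import QuickLook

extension MyViewController: QLPreviewControllerDataSource {
    // How many items the preview controller should page through.
    func numberOfPreviewItems(in controller: QLPreviewController) -> Int {
        return 1
    }

    // The item to preview at each index; here, a bundled image URL.
    func previewController(_ controller: QLPreviewController, previewItemAt index: Int) -> QLPreviewItem {
        guard let url = Bundle.main.url(forResource: "imageName", withExtension: "jpg") else {
            fatalError("Could not load imageName.jpg")
        }
        return url as QLPreviewItem
    }
}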

UIImagePickerController cropping image rect is not correct

I have a UIViewController that holds the image picker:
let picker = UIImagePickerController()
and I call the image picker like this:
private func showCamera() {
    picker.allowsEditing = true
    picker.sourceType = .camera
    picker.cameraCaptureMode = .photo
    picker.modalPresentationStyle = .fullScreen
    present(picker, animated: true, completion: nil)
}
When I'm done, I get a delegate callback like this:
func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [String : Any]) {
    DispatchQueue.main.async {
        if let croppedImage = info[UIImagePickerControllerEditedImage] as? UIImage {
            self.imageView.contentMode = .scaleAspectFill
            self.imageView.image = croppedImage
            self.dismiss(animated: true, completion: nil)
        }
    }
}
I get the cropping UI after taking the image, and the video shows the behaviour:
https://youtu.be/OaJnsjrlwF8
As you can see, I cannot scroll the zoomed rect to the bottom or top. This behaviour is reproducible on iOS 10/11 on multiple devices.
Is there any way to get this right with UIImagePickerController?
No, this component has been buggy for quite some time now. It is not only the positioning; the cropped rect is usually incorrect as well (off by some 20 px vertically).
It seems Apple has no interest in fixing it, so you should create your own. It is not too much work. Start by creating a screen that accepts and displays an image in a scroll view. Then ensure zoom and pan are working (maybe even rotation), which should all be done pretty quickly.
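That zoom-and-pan setup is short. A minimal sketch, assuming the image view is the scroll view's only content:

class CropViewController: UIViewController, UIScrollViewDelegate {
    let scrollView = UIScrollView()
    let imageView = UIImageView()

    override func viewDidLoad() {
        super.viewDidLoad()
        scrollView.frame = view.bounds
        scrollView.delegate = self
        scrollView.minimumZoomScale = 1.0
        scrollView.maximumZoomScale = 4.0
        view.addSubview(scrollView)

        // imageView.image is assumed to be set by the presenting code.
        imageView.sizeToFit()
        scrollView.addSubview(imageView)
        // The content size makes panning work once the image is zoomed.
        scrollView.contentSize = imageView.frame.size
    }

    // Tells the scroll view which subview to scale while pinching.
    func viewForZooming(in scrollView: UIScrollView) -> UIView? {
        return imageView
    }
}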
Then comes the cropping part, which is actually done quickest by using a view snapshot.
The following will create an image from a view:
func snapshotImageFor(view: UIView) -> UIImage? {
    UIGraphicsBeginImageContextWithOptions(view.bounds.size, false, 0.0)
    guard let context = UIGraphicsGetCurrentContext() else {
        return nil
    }
    view.layer.render(in: context)
    let image = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return image
}
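On iOS 10 and later, UIGraphicsImageRenderer does the same job without the paired begin/end calls; a sketch:

func snapshotImage(for view: UIView) -> UIImage {
    let renderer = UIGraphicsImageRenderer(bounds: view.bounds)
    return renderer.image { context in
        // Render the view's layer into the renderer's CGContext.
        view.layer.render(in: context.cgContext)
    }
}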
Then in your view controller you can do this little trick:
func createSnapshot(inFrame rect: CGRect) -> UIImage? {
    let temporaryView = UIView(frame: rect) // This is the view from which the snapshot will occur
    temporaryView.clipsToBounds = true
    view.addSubview(temporaryView) // We want to put it into the hierarchy
    // We use the superview of the scroll view because snapshotting the scroll view directly may have some issues.
    guard let viewToSnap = scrollViewContainer else { return nil }
    let originalImageViewFrame = viewToSnap.frame // Preserve previous frame
    guard let originalImageViewSuperview = viewToSnap.superview else { return nil } // Preserve previous superview
    guard let index = originalImageViewSuperview.subviews.index(of: viewToSnap) else { return nil } // Preserve view hierarchy index

    // Now change the frame and put it on the new view
    viewToSnap.frame = originalImageViewSuperview.convert(originalImageViewFrame, to: temporaryView)
    temporaryView.addSubview(viewToSnap)

    // Create snapshot
    let croppedImage = snapshotImageFor(view: temporaryView)

    // Put everything back the way it was
    viewToSnap.frame = originalImageViewFrame // Reset frame
    originalImageViewSuperview.insertSubview(viewToSnap, at: index) // Reset superview
    temporaryView.removeFromSuperview() // Remove the temporary view

    self.croppedImage = croppedImage
    return croppedImage
}
There are some downsides to this procedure, like doing everything on the main thread, but for your specific use case that should not be a problem at all.
You might at some point want some control over the output image size. The easiest way is to modify the snapshot function to accept a custom scale:
func snapshotImageFor(view: UIView, scale: CGFloat = 0.0) -> UIImage? {
    UIGraphicsBeginImageContextWithOptions(view.bounds.size, false, scale)
Then you would for instance call snapshotImageFor(view: view, scale: expectedWidth/view.bounds.width).

Swift: Overlay rectangle with text over view

If our app hits a network connectivity error, we would like to overlay a colored, transparent rectangle at the top of the screen with some text like "Network not available". The rectangle should cover the full width of the screen, and its height should be just enough to show the text. We would use a timer to show the rectangle only briefly. How can you do this?
The actual view may be a UITableViewController, a UIViewController, or something else...
You can do this:
let deadlineTime = DispatchTime.now() + .seconds(2)
let window = UIApplication.shared.keyWindow!
let rectangleView = UIView(frame: CGRect(x: 0, y: 0, width: self.view.frame.size.width, height: 20))
rectangleView.backgroundColor = UIColor.red
let label = UILabel(frame: CGRect(x: 0, y: 0, width: self.view.frame.size.width, height: 20))
label.text = "Network not available"
rectangleView.addSubview(label)
window.addSubview(rectangleView)
DispatchQueue.main.asyncAfter(deadline: deadlineTime) {
    rectangleView.removeFromSuperview()
}
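A variant using Auto Layout sizes the banner to its text and, on iOS 11 and later, respects the notch and status bar. A sketch, with the color and two-second delay chosen arbitrarily:

let banner = UILabel()
banner.text = "Network not available"
banner.textAlignment = .center
banner.backgroundColor = UIColor.red.withAlphaComponent(0.8)
banner.translatesAutoresizingMaskIntoConstraints = false
window.addSubview(banner)
NSLayoutConstraint.activate([
    banner.leadingAnchor.constraint(equalTo: window.leadingAnchor),
    banner.trailingAnchor.constraint(equalTo: window.trailingAnchor),
    // No height constraint: the label's intrinsic content size supplies it.
    banner.topAnchor.constraint(equalTo: window.safeAreaLayoutGuide.topAnchor)
])
DispatchQueue.main.asyncAfter(deadline: .now() + 2) {
    banner.removeFromSuperview()
}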
The way I have done overlays is to drag a new UIViewController into a storyboard and drag a UIView into it. While laying out the UI, it can be helpful to make the background color of the UIViewController black. When you're done laying out your elements inside the UIView, change the background color of the UIViewController to transparent.
Here's an example of a profile overlay:
In this case I've actually made the UIViewController background a gray color with an alpha of about 50%. When I present this view controller using a fade transition, it appears over top of the current context:
func showOverlay() {
    guard let vc = UIStoryboard(name: "MyStoryboard", bundle: nil).instantiateViewController(withIdentifier: "myOverlay") as? UIViewController else {
        print("failed to get myOverlay from MyStoryboard")
        return
    }
    vc.modalPresentationStyle = .overCurrentContext
    vc.modalTransitionStyle = .crossDissolve
    self.present(vc, animated: true, completion: {
        // after 3 seconds, dismiss the overlay
        dispatchAfterSeconds(3) {
            vc.dismiss(animated: true)
        }
    })
}
This uses a handy function, dispatchAfterSeconds:
// execute function after delay using GCD
func dispatchAfterSeconds(_ seconds: Double, completion: @escaping (() -> Void)) {
    let triggerTime = Int64(Double(NSEC_PER_SEC) * seconds)
    DispatchQueue.main.asyncAfter(deadline: DispatchTime.now() + Double(triggerTime) / Double(NSEC_PER_SEC), execute: { () -> Void in
        completion()
    })
}
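Incidentally, the round trip through nanoseconds isn't needed; DispatchTime supports adding seconds directly, so an equivalent sketch is:

func dispatchAfterSeconds(_ seconds: Double, completion: @escaping () -> Void) {
    DispatchQueue.main.asyncAfter(deadline: .now() + seconds, execute: completion)
}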
Note that when I talk about changing the background color of the UIViewController, what I actually mean by that is the background color of the view created by default inside of a UIViewController that has been created in a storyboard.

CGImageCreateWithImageInRect Holding Onto Image Data - Leaking?

I am trying to take an image snapshot, crop it, and save it to a UIImageView.
I have tried this from a few dozen different directions, but here is the general setup.
First, I am running this under ARC, Xcode 7.2, testing on a 6 Plus phone running iOS 9.2.
Here is how the delegate is set up:
- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info
{
    NSLog(@"CameraViewController : imagePickerController");

    // Get the image data
    NSData *getDataImage = UIImageJPEGRepresentation([info objectForKey:@"UIImagePickerControllerOriginalImage"], 0.9);

    // Turn it into a UIImage
    UIImage *getCapturedImage = [[UIImage alloc] initWithData:getDataImage];

    // Figure out the size and build the rectangle we are going to put the image into
    CGSize imageSize = getCapturedImage.size;
    CGFloat imageScale = getCapturedImage.scale;
    int yCoord = (imageSize.height - ((imageSize.width*2)/3))/2;
    CGRect getRect = CGRectMake(0, yCoord, imageSize.width, ((imageSize.width*2)/3));
    CGRect rect = CGRectMake(getRect.origin.x*imageScale,
                             getRect.origin.y*imageScale,
                             getRect.size.width*imageScale,
                             getRect.size.height*imageScale);

    // Crop the image
    CGImageRef imageRef = CGImageCreateWithImageInRect([getCapturedImage CGImage], rect);

    // Stick the resulting image into an image variable
    UIImage *cropped = [UIImage imageWithCGImage:imageRef];

    // Release that reference
    CGImageRelease(imageRef);

    // Save the newly cropped image to a UIImageView property
    _imageView.image = cropped;
    _saveBtn.hidden = NO;

    [picker dismissViewControllerAnimated:YES completion:^{
        // After we are finished dismissing the picker, close out the camera tool
        [self dismissCameraViewFromImageSelect];
    }];
}
When I run the above, I get the below image.
At this point I am viewing the image in the previously set _imageView.image, and the image data has gobbled up 30 MB. But when I back out of this view, the image data is still retained.
If I try to go through the process of capturing a new image, this is what I get.
And when I bypass cropping the image and assign it directly to the image view, no 30 MB is gobbled.
I have looked at all the advice on this, and nothing suggested makes a dent, but let's go over what I tried that didn't work.
Did not work:
Putting it in an @autoreleasepool block. This never seems to work. Maybe I am not doing it right, but having tried it a few different ways, nothing released the memory.
CGImageRelease(imageRef); I am doing that, but I have tried it a number of different ways. Still no luck.
CFRelease(imageRef); Also doesn't work.
Setting imageRef = nil; Still retains. Even the combination of that and CGImageRelease didn't work for me.
I have tried separating the cropping step into its own function and returning the result, but still no luck.
I haven't found anything particularly helpful online and all references to similar issues have advice (as mentioned above) that doesn't seem to work.
Thanks for your advice in advance.
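For what it's worth, a commonly cited explanation is that CGImageCreateWithImageInRect shares the original image's backing store, so a small crop can keep the full-resolution data alive for as long as the crop is retained (by _imageView here). Redrawing the crop into a fresh context produces an independent copy. A Swift sketch of that idea, which is not the solution the poster ultimately settled on below:

func detachedCopy(of image: UIImage) -> UIImage? {
    // Drawing into a new context allocates a fresh backing store,
    // so the result no longer references the source image's data.
    UIGraphicsBeginImageContextWithOptions(image.size, false, image.scale)
    defer { UIGraphicsEndImageContext() }
    image.draw(in: CGRect(origin: .zero, size: image.size))
    return UIGraphicsGetImageFromCurrentImageContext()
}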
Alright, after much time thinking on this, I decided to just start from scratch, and since most of my recent work has been in Swift, I put together a Swift class that can be called, controls the camera, and passes the image up to the caller through a delegate.
The end result is that I no longer have the memory leak where some variable holds onto the memory of the previous image, and I can use it in my current project by bridging the Swift class file to my Obj-C view controllers.
Here is the code for the class that does the fetching:
//
//  CameraOverlay.swift
//  CameraTesting
//
//  Created by Chris Cantley on 3/3/16.
//  Copyright © 2016 Chris Cantley. All rights reserved.
//
import Foundation
import UIKit
import AVFoundation

// We want to pass an image up to the parent class once the image has been taken, so the easiest way to send it up
// and trigger the placing of the image is through a delegate.
protocol CameraOverlayDelegate: class {
    func cameraOverlayImage(image: UIImage)
}

class CameraOverlay: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {

    //MARK: Internal Variables

    // Setting up the delegate reference to be used later on.
    internal var delegate: CameraOverlayDelegate?

    // Variables for setting the camera view
    internal var returnImage: UIImage!
    internal var previewView: UIView!
    internal var boxView: UIView!
    internal let myButton: UIButton = UIButton()

    // Setting up camera-capture required properties
    internal var previewLayer: AVCaptureVideoPreviewLayer!
    internal var captureDevice: AVCaptureDevice!
    internal let session = AVCaptureSession()
    internal var stillImageOutput: AVCaptureStillImageOutput!

    // When we put up the camera preview and the button we have to reference a parent view, so this will hold the
    // parent view passed into the class so that other methods can work with it.
    internal var view: UIView!

    // When this class is instantiated, we want to require that the calling class passes us
    // some view that we can tie the camera previewer and button to.

    //MARK: - Instantiation Methods

    init(parentView: UIView) {
        // Instantiate the reference to the passed-in UIView
        self.view = parentView

        // We are doing the following here because this only needs to be set up once per instantiation.
        // Create the output container with settings to specify that we are getting a still image, and that it is a JPEG.
        stillImageOutput = AVCaptureStillImageOutput()
        stillImageOutput.outputSettings = [AVVideoCodecKey: AVVideoCodecJPEG]

        // Now we are sticking the image into the above formatted container
        session.addOutput(stillImageOutput)
    }

    //MARK: - Public Functions

    func showCameraView() {
        // This handles showing the camera previewer and button
        self.setupCameraView()

        // This sets up the parameters for the camera and begins the camera session.
        self.setupAVCapture()
    }

    //MARK: - Internal Functions

    // When the user clicks the button, this gets the image, sends it up to the delegate, and shuts down all the camera-related views.
    internal func didPressTakePhoto(sender: UIButton) {
        // Create a media connection...
        if let videoConnection = stillImageOutput!.connectionWithMediaType(AVMediaTypeVideo) {
            // Set up the orientation to be locked to portrait
            videoConnection.videoOrientation = AVCaptureVideoOrientation.Portrait

            // Capture the still image from the camera
            stillImageOutput?.captureStillImageAsynchronouslyFromConnection(videoConnection, completionHandler: { (sampleBuffer, error) in
                if (sampleBuffer != nil) {
                    // Get the image data
                    let imageData = AVCaptureStillImageOutput.jpegStillImageNSDataRepresentation(sampleBuffer)
                    let dataProvider = CGDataProviderCreateWithCFData(imageData)
                    let cgImageRef = CGImageCreateWithJPEGDataProvider(dataProvider, nil, true, CGColorRenderingIntent.RenderingIntentDefault)

                    // The 2.0 scale halves the scale of the image, whereas 1.0 gives you the full size.
                    let image = UIImage(CGImage: cgImageRef!, scale: 2.0, orientation: UIImageOrientation.Up)

                    // Figure out the size and build the rectangle we are going to crop to.
                    let imageSize = image.size
                    let imageScale = image.scale
                    let yCoord = (imageSize.height - ((imageSize.width*2)/3))/2
                    let getRect = CGRectMake(0, yCoord, imageSize.width, ((imageSize.width*2)/3))
                    let rect = CGRectMake(getRect.origin.x*imageScale, getRect.origin.y*imageScale, getRect.size.width*imageScale, getRect.size.height*imageScale)
                    let imageRef = CGImageCreateWithImageInRect(image.CGImage, rect)

                    // This app forces the user to use landscape to take pictures, so this simply rotates the image so that it looks correct.
                    let newImage: UIImage = UIImage(CGImage: imageRef!, scale: image.scale, orientation: UIImageOrientation.Down)

                    // Pass the image up to the delegate.
                    self.delegate?.cameraOverlayImage(newImage)

                    // Stop the session
                    self.session.stopRunning()

                    // Remove the views.
                    self.previewView.removeFromSuperview()
                    self.boxView.removeFromSuperview()
                    self.myButton.removeFromSuperview()

                    // By this point the image has been handed off to the caller through the delegate and memory has been cleaned up.
                }
            })
        }
    }

    internal func setupCameraView() {
        // Add a view as big as the frame that acts as a background.
        self.boxView = UIView(frame: self.view.frame)
        self.boxView.backgroundColor = UIColor.whiteColor()
        self.view.addSubview(self.boxView)

        // Add the camera preview view.
        // This sets up the previewView with a 3:2 aspect ratio.
        let newHeight = UIScreen.mainScreen().bounds.size.width / 2 * 3
        self.previewView = UIView(frame: CGRectMake(0, 0, UIScreen.mainScreen().bounds.size.width, newHeight))
        self.previewView.backgroundColor = UIColor.cyanColor()
        self.previewView.contentMode = UIViewContentMode.ScaleToFill
        self.view.addSubview(previewView)

        // Add the button.
        myButton.frame = CGRectMake(0, 0, 200, 40)
        myButton.backgroundColor = UIColor.redColor()
        myButton.layer.masksToBounds = true
        myButton.setTitle("press me", forState: UIControlState.Normal)
        myButton.setTitleColor(UIColor.whiteColor(), forState: UIControlState.Normal)
        myButton.layer.cornerRadius = 20.0
        myButton.layer.position = CGPoint(x: self.view.frame.width/2, y: (self.view.frame.height - myButton.frame.height))
        myButton.addTarget(self, action: "didPressTakePhoto:", forControlEvents: .TouchUpInside)
        self.view.addSubview(myButton)
    }

    internal func setupAVCapture() {
        session.sessionPreset = AVCaptureSessionPresetPhoto

        let devices = AVCaptureDevice.devices()
        // Loop through all the capture devices on this phone
        for device in devices {
            // Make sure this particular device supports video
            if (device.hasMediaType(AVMediaTypeVideo)) {
                // Finally check the position and confirm we've got the back camera
                if (device.position == AVCaptureDevicePosition.Back) {
                    captureDevice = device as? AVCaptureDevice
                    if captureDevice != nil {
                        // Now that we have the back camera, start a session.
                        beginSession()
                        break
                    }
                }
            }
        }
    }

    // Sets up the session
    internal func beginSession() {
        var err: NSError? = nil
        var deviceInput: AVCaptureDeviceInput?

        // See if we can get input from the capture device as defined in setupAVCapture()
        do {
            deviceInput = try AVCaptureDeviceInput(device: captureDevice)
        } catch let error as NSError {
            err = error
            deviceInput = nil
        }
        if err != nil {
            print("error: \(err?.localizedDescription)")
        }

        // If we can add input into the AVCaptureSession() then do so.
        if self.session.canAddInput(deviceInput) {
            self.session.addInput(deviceInput)
        }

        // Now show layers that were set up in the previewView, and mask it to the boundary of the previewView layer.
        let rootLayer: CALayer = self.previewView.layer
        rootLayer.masksToBounds = true

        // Put up a live video capture layer based on the current session.
        self.previewLayer = AVCaptureVideoPreviewLayer(session: self.session)

        // Determine how to fill the previewLayer. In this case, I want to fill out the space of the previewLayer.
        self.previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill
        self.previewLayer.frame = rootLayer.bounds

        // Put the sublayer into the previewLayer
        rootLayer.addSublayer(self.previewLayer)

        session.startRunning()
    }
}
Here is how I am using this class in a view controller.
//
//  ViewController.swift
//  CameraTesting
//
//  Created by Chris Cantley on 2/26/16.
//  Copyright © 2016 Chris Cantley. All rights reserved.
//
import UIKit
import AVFoundation

class ViewController: UIViewController, CameraOverlayDelegate {

    // Setting up the class reference.
    var cameraOverlay: CameraOverlay!

    // Connected to the UIViewController main view.
    @IBOutlet var getView: UIView!

    // Connected to an ImageView that will display the image when it is passed back to the delegate.
    @IBOutlet weak var imgShowImage: UIImageView!

    // Connected to the button that is pressed to bring up the camera view.
    @IBAction func btnPictureTouch(sender: AnyObject) {
        // Remove the image from the UIImageView and take another picture.
        self.imgShowImage.image = nil
        self.cameraOverlay.showCameraView()
    }

    override func viewDidLoad() {
        super.viewDidLoad()

        // Pass in the target UIView, which in this case is the main view
        self.cameraOverlay = CameraOverlay(parentView: getView)

        // Make this class the delegate for the instantiated class.
        // That way it knows to receive the image when the user takes a picture.
        self.cameraOverlay.delegate = self
    }

    override func didReceiveMemoryWarning() {
        super.didReceiveMemoryWarning()
        // Nothing here, but if you run out of memory you might want to do something here.
    }

    override func shouldAutorotate() -> Bool {
        if (UIDevice.currentDevice().orientation == UIDeviceOrientation.LandscapeLeft ||
            UIDevice.currentDevice().orientation == UIDeviceOrientation.LandscapeRight ||
            UIDevice.currentDevice().orientation == UIDeviceOrientation.Unknown) {
            return false
        } else {
            return true
        }
    }

    // This implements the delegate method from CameraOverlayDelegate.
    func cameraOverlayImage(image: UIImage) {
        // Put the image passed up from the CameraOverlay class into the UIImageView
        self.imgShowImage.image = image
    }
}
Here is a link to the project where I put that together.
GitHub - Boiler plate get image from camera

CS193P Cassini code runs on simulator but crashes on device? "Message from debugger: Terminated due to Memory Error"

I'm going through Stanford's CS193P online course on iOS development.
Lecture 9 deals with UIScrollView / delegation via a simple URL UIImage fetch app. Said app works perfectly fine in the simulator, but launches and then crashes on a live device (iPhone 5) after trying to fetch an image, with the following:
Message from debugger: Terminated due to Memory Error
I went back into my code, reread about delegation, and searched SO (I found a similar thread, and I made sure my project scheme does NOT have zombies enabled). I updated my device and my compiler/OS, and am kinda bummed about what might be preventing this from running on the device...
The class example can be downloaded from Stanford at https://web.stanford.edu/class/cs193p/cgi-bin/drupal/system/files/sample_code/Cassini.zip but that code behaves the same way! It was originally written for iOS 8.1 and we're at 8.4; are there any known issues?
Code for the image view controller:
import UIKit

class ImageViewController: UIViewController, UIScrollViewDelegate
{
    // our Model
    // publicly settable
    // when it changes (but only if we are on screen)
    // we'll fetch the image from the imageURL
    // if we're off screen when this happens (view.window == nil)
    // viewWillAppear will get it for us later
    var imageURL: NSURL? {
        didSet {
            image = nil
            if view.window != nil {
                fetchImage()
            }
        }
    }

    // fetches the image at imageURL
    // does so off the main thread
    // then puts a closure back on the main queue
    // to handle putting the image in the UI
    // (since we aren't allowed to do UI anywhere but main queue)
    private func fetchImage()
    {
        if let url = imageURL {
            spinner?.startAnimating()
            let qos = Int(QOS_CLASS_USER_INITIATED.value)
            dispatch_async(dispatch_get_global_queue(qos, 0)) { () -> Void in
                let imageData = NSData(contentsOfURL: url) // this blocks the thread it is on
                dispatch_async(dispatch_get_main_queue()) {
                    // only do something with this image
                    // if the url we fetched is the current imageURL we want
                    // (that might have changed while we were off fetching this one)
                    if url == self.imageURL { // the variable "url" is captured from above
                        if imageData != nil {
                            // this might be a waste of time if our MVC is out of action now
                            // which it might be if someone hit the Back button
                            // or otherwise removed us from split view or navigation controller
                            // while we were off fetching the image
                            self.image = UIImage(data: imageData!)
                        } else {
                            self.image = nil
                        }
                    }
                }
            }
        }
    }

    @IBOutlet private weak var spinner: UIActivityIndicatorView!

    @IBOutlet private weak var scrollView: UIScrollView! {
        didSet {
            scrollView.contentSize = imageView.frame.size // critical to set this!
            scrollView.delegate = self                    // required for zooming
            scrollView.minimumZoomScale = 0.03            // required for zooming
            scrollView.maximumZoomScale = 1.0             // required for zooming
        }
    }

    // UIScrollViewDelegate method
    // required for zooming
    func viewForZoomingInScrollView(scrollView: UIScrollView) -> UIView? {
        return imageView
    }

    private var imageView = UIImageView()

    // convenience computed property
    // lets us get involved every time we set an image in imageView
    // we can do things like resize the imageView,
    // set the scroll view's contentSize,
    // and stop the spinner
    private var image: UIImage? {
        get { return imageView.image }
        set {
            imageView.image = newValue
            imageView.sizeToFit()
            scrollView?.contentSize = imageView.frame.size
            spinner?.stopAnimating()
        }
    }

    // put our imageView into the view hierarchy
    // as a subview of the scrollView
    // (will install it into the content area of the scroll view)
    override func viewDidLoad() {
        super.viewDidLoad()
        scrollView.addSubview(imageView)
    }

    // for efficiency, we will only actually fetch the image
    // when we know we are going to be on screen
    override func viewWillAppear(animated: Bool) {
        super.viewWillAppear(animated)
        if image == nil {
            fetchImage()
        }
    }
}
The source of the issue is that decompressing an image from its data (the file-format representation of the image) for display can 'eat' a lot of memory.
Here is a very good article about iOS image decompression: Avoiding Image Decompression Sickness
Since all images in the Cassini application are VERY large (wave_earth_mosaic_3.jpg is 9999×9999, pia03883-full.jpg is 14400×9600), the image decompression process 'eats' all the phone's memory. This leads to the application crash.
To fix the Cassini issue, I modified the code and added a small function to halve the image resolution.
Here is a code example (fixed for Swift 2.0):
...
if imageData != nil {
    // this might be a waste of time if our MVC is out of action now
    // which it might be if someone hit the Back button
    // or otherwise removed us from split view or navigation controller
    // while we were off fetching the image
    if let imageSource = UIImage(data: imageData!) {
        self.image = self.imageResize(imageSource)
    }
} else {
    self.image = nil
}
...
func imageResize(imageOriginal: UIImage) -> UIImage {
    let image = imageOriginal.CGImage
    let width = CGImageGetWidth(image) / 2
    let height = CGImageGetHeight(image) / 2
    let bitsPerComponent = CGImageGetBitsPerComponent(image)
    let bytesPerRow = CGImageGetBytesPerRow(image)
    let colorSpace = CGImageGetColorSpace(image)
    let bitmapInfo = CGImageGetBitmapInfo(image)
    let context = CGBitmapContextCreate(nil, width, height, bitsPerComponent, bytesPerRow, colorSpace, bitmapInfo.rawValue)
    CGContextSetInterpolationQuality(context, CGInterpolationQuality.High)
    CGContextDrawImage(context, CGRect(origin: CGPointZero, size: CGSize(width: CGFloat(width), height: CGFloat(height))), image)
    let scaledImage = UIImage(CGImage: CGBitmapContextCreateImage(context)!)
    return scaledImage
}
Now the application loads all images without crashing.
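As an aside (an addition, not part of the original answer): ImageIO can downsample while it decodes, so the full-resolution bitmap never has to exist in memory at all. A sketch in current Swift:

import UIKit
import ImageIO

func downsampledImage(from data: Data, maxPixelSize: Int) -> UIImage? {
    guard let source = CGImageSourceCreateWithData(data as CFData, nil) else { return nil }
    let options = [kCGImageSourceCreateThumbnailFromImageAlways: true, // always build the scaled image
                   kCGImageSourceCreateThumbnailWithTransform: true,   // respect EXIF orientation
                   kCGImageSourceThumbnailMaxPixelSize: maxPixelSize] as CFDictionary // cap the longest side
    guard let cgImage = CGImageSourceCreateThumbnailAtIndex(source, 0, options) else { return nil }
    return UIImage(cgImage: cgImage)
}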
SWIFT 2.0 fix:
Add this to Info.plist to allow HTTP loading:
<key>NSAppTransportSecurity</key>
<dict>
    <!-- Include to allow all connections (DANGER) -->
    <key>NSAllowsArbitraryLoads</key>
    <true/>
</dict>
