AVFoundation camera image quality degraded upon processing - iOS

I made an AVFoundation camera to crop square images based on @fsaint's answer: Cropping AVCaptureVideoPreviewLayer output to a square. The sizing of the photo is great; that works perfectly. However, the image quality is noticeably degraded (see below: the first image is the preview layer showing good resolution, the second is the degraded captured image; screenshots omitted). It definitely has to do with what happens in processImage:, as the image resolution is fine without it, just not the right aspect ratio. The documentation on image processing is pretty bare, so any insights are GREATLY appreciated!!
Setting up camera:
func setUpCamera() {
    captureSession = AVCaptureSession()
    captureSession!.sessionPreset = AVCaptureSessionPresetPhoto
    let backCamera = AVCaptureDevice.defaultDeviceWithMediaType(AVMediaTypeVideo)
    if ((backCamera?.hasFlash) != nil) {
        do {
            try backCamera.lockForConfiguration()
            backCamera.flashMode = AVCaptureFlashMode.Auto
            backCamera.unlockForConfiguration()
        } catch {
            // error handling
        }
    }
    var error: NSError?
    var input: AVCaptureDeviceInput!
    do {
        input = try AVCaptureDeviceInput(device: backCamera)
    } catch let error1 as NSError {
        error = error1
        input = nil
    }
    if error == nil && captureSession!.canAddInput(input) {
        captureSession!.addInput(input)
        stillImageOutput = AVCaptureStillImageOutput()
        stillImageOutput!.outputSettings = [AVVideoCodecKey: AVVideoCodecJPEG]
        if captureSession!.canAddOutput(stillImageOutput) {
            captureSession!.sessionPreset = AVCaptureSessionPresetHigh
            captureSession!.addOutput(stillImageOutput)
            previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
            previewLayer!.videoGravity = AVLayerVideoGravityResizeAspectFill
            previewLayer!.connection?.videoOrientation = AVCaptureVideoOrientation.Portrait
            previewVideoView.layer.addSublayer(previewLayer!)
            captureSession!.startRunning()
        }
    }
}
Snapping photo:
@IBAction func onSnapPhotoButtonPressed(sender: UIButton) {
    if let videoConnection = self.stillImageOutput!.connectionWithMediaType(AVMediaTypeVideo) {
        videoConnection.videoOrientation = AVCaptureVideoOrientation.Portrait
        self.stillImageOutput?.captureStillImageAsynchronouslyFromConnection(videoConnection, completionHandler: { (sampleBuffer, error) in
            if (sampleBuffer != nil) {
                let imageData = AVCaptureStillImageOutput.jpegStillImageNSDataRepresentation(sampleBuffer)
                let dataProvider = CGDataProviderCreateWithCFData(imageData)
                let cgImageRef = CGImageCreateWithJPEGDataProvider(dataProvider, nil, true, CGColorRenderingIntent.RenderingIntentDefault)
                let image = UIImage(CGImage: cgImageRef!, scale: 1.0, orientation: UIImageOrientation.Right)
                self.processImage(image)
                self.clearPhotoButton.hidden = false
                self.nextButton.hidden = false
                self.view.bringSubviewToFront(self.imageView)
            }
        })
    }
}
Process image to square:
func processImage(image: UIImage) {
    let deviceScreen = previewLayer?.bounds
    let width: CGFloat = (deviceScreen?.size.width)!
    UIGraphicsBeginImageContext(CGSizeMake(width, width))
    let aspectRatio: CGFloat = image.size.height * width / image.size.width
    image.drawInRect(CGRectMake(0, -(aspectRatio - width) / 2.0, width, aspectRatio))
    let smallImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    let cropRect = CGRectMake(0, 0, width, width)
    let imageRef: CGImageRef = CGImageCreateWithImageInRect(smallImage.CGImage, cropRect)!
    imageView.image = UIImage(CGImage: imageRef)
}

There are a few things wrong with your processImage() function.
First of all, you're creating a new graphics context with UIGraphicsBeginImageContext().
According to the Apple docs on this function:
This function is equivalent to calling the UIGraphicsBeginImageContextWithOptions function with the opaque parameter set to NO and a scale factor of 1.0.
Because the scale factor is 1.0, it is going to look pixelated when displayed on-screen, as the screen's resolution is (most likely) higher.
You want to be using the UIGraphicsBeginImageContextWithOptions() function, and pass 0.0 for the scale factor. According to the docs on this function, for the scale argument:
If you specify a value of 0.0, the scale factor is set to the scale factor of the device’s main screen.
For example:
UIGraphicsBeginImageContextWithOptions(CGSizeMake(width, width), false, 0.0)
Your output should now look nice and crisp, as it is being rendered with the same scale as the screen.
Second of all, there's a problem with the width you're passing in.
let width:CGFloat = (deviceScreen?.size.width)!
UIGraphicsBeginImageContext(CGSizeMake(width, width))
You shouldn't be passing in the width of the screen here, it should be the width of the image. For example:
let width:CGFloat = image.size.width
You will then have to change the aspectRatio variable to take the screen width, such as:
let aspectRatio:CGFloat = image.size.height * (deviceScreen?.size.width)! / image.size.width
Third of all, you can simplify your cropping function significantly.
func processImage(image: UIImage) {
    let screenWidth = UIScreen.mainScreen().bounds.size.width
    let width: CGFloat = image.size.width
    let height: CGFloat = image.size.height
    let aspectRatio = screenWidth / width
    UIGraphicsBeginImageContextWithOptions(CGSizeMake(screenWidth, screenWidth), false, 0.0) // create context
    let ctx = UIGraphicsGetCurrentContext()
    CGContextTranslateCTM(ctx, 0, (screenWidth - (aspectRatio * height)) * 0.5) // shift context up, to create a squared 'frame' for your image to be drawn in
    image.drawInRect(CGRect(origin: CGPointZero, size: CGSize(width: screenWidth, height: height * aspectRatio))) // draw image
    let img = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    imageView.image = img
}
There's no need to draw the image twice; you only need to translate the context up and then draw the image once.
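For reference, the same square crop can be written with UIGraphicsImageRenderer (iOS 10+), which manages the context, scale, and cleanup for you. This is a minimal sketch in modern Swift, not part of the original answer, assuming the same imageView outlet:
func processImage(image: UIImage) {
    let side = UIScreen.main.bounds.width
    let scaledHeight = image.size.height * side / image.size.width
    // The renderer defaults to the main screen's scale, so no explicit
    // scale handling is needed.
    let renderer = UIGraphicsImageRenderer(size: CGSize(width: side, height: side))
    let squared = renderer.image { _ in
        // Draw scaled to the square's width, shifted up so the vertical
        // overflow is cropped equally from top and bottom.
        image.draw(in: CGRect(x: 0,
                              y: (side - scaledHeight) / 2,
                              width: side,
                              height: scaledHeight))
    }
    imageView.image = squared
}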

Related

CMSampleBuffer automatically rotates every time

Here I try to select a video from the photo library and read it frame by frame as sample buffers so that I can crop or rotate it later. But the problem is that the CMSampleBuffer is rotated by default. The variables I use for initialization are:
var asset:AVAsset! //load asset from url
var assetReader:AVAssetReader!
var assetVideoOutput: AVAssetReaderTrackOutput! // add assetreader for video
var assetAudioOutput: AVAssetReaderTrackOutput! // add assetreader for audio
var readQueue: DispatchQueue!
The setup of these variables looks like this:
func resetRendering() {
    do {
        assetReader = try AVAssetReader(asset: asset)
    } catch {
        print(error)
    }
    var tracks = asset.tracks(withMediaType: .video)
    var track: AVAssetTrack!
    if tracks.count > 0 {
        track = tracks[0]
    }
    let decompressionvideoSettings: [String: Any] = [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA]
    assetVideoOutput = AVAssetReaderTrackOutput(track: track, outputSettings: decompressionvideoSettings)
    tracks = asset.tracks(withMediaType: .audio)
    if tracks.count > 0 {
        track = tracks[0]
    }
    let audioReadSettings = [AVFormatIDKey: kAudioFormatLinearPCM]
    assetAudioOutput = AVAssetReaderTrackOutput(track: track, outputSettings: audioReadSettings)
    if assetAudioOutput.responds(to: #selector(getter: AVAssetReaderOutput.alwaysCopiesSampleData)) {
        assetAudioOutput.alwaysCopiesSampleData = false
    }
    if assetReader.canAdd(assetAudioOutput) {
        assetReader.add(assetAudioOutput)
    }
    if assetVideoOutput.responds(to: #selector(getter: AVAssetReaderOutput.alwaysCopiesSampleData)) {
        assetVideoOutput.alwaysCopiesSampleData = false
    }
    if assetReader.canAdd(assetVideoOutput) {
        assetReader.add(assetVideoOutput)
    }
}
Now when I try to read the video frame by frame and convert each frame into an image, the frame is automatically rotated by 90 degrees. The conversion extension from CMSampleBuffer to UIImage looks like this:
extension CMSampleBuffer {
    var uiImage: UIImage? {
        guard let imageBuffer = CMSampleBufferGetImageBuffer(self) else { return nil }
        CVPixelBufferLockBaseAddress(imageBuffer, CVPixelBufferLockFlags(rawValue: 0))
        let baseAddress = CVPixelBufferGetBaseAddress(imageBuffer)
        let bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer)
        let width = CVPixelBufferGetWidth(imageBuffer)
        let height = CVPixelBufferGetHeight(imageBuffer)
        let colorSpace = CGColorSpaceCreateDeviceRGB()
        let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.noneSkipFirst.rawValue | CGBitmapInfo.byteOrder32Little.rawValue)
        guard let context = CGContext(data: baseAddress,
                                      width: width,
                                      height: height,
                                      bitsPerComponent: 8,
                                      bytesPerRow: bytesPerRow,
                                      space: colorSpace,
                                      bitmapInfo: bitmapInfo.rawValue) else { return nil }
        guard let cgImage = context.makeImage() else { return nil }
        CVPixelBufferUnlockBaseAddress(imageBuffer, CVPixelBufferLockFlags(rawValue: 0))
        return UIImage(cgImage: cgImage)
    }
}
(Screenshots comparing the rotated result with the original frame omitted.)
Update
CGImage doesn't hold an orientation property, so you have to set one manually when you convert to UIImage. We get the orientation from the video track and apply it to the UIImage.
The right/left and up/down cases in the answer below might be reversed on your device; please try it and check.
// Get the orientation outside of per-frame image processing to avoid redundant work
let videoTrack = videoAsset.tracks(withMediaType: AVMediaType.video).first!
let videoSize = videoTrack.naturalSize
let transform = videoTrack.preferredTransform
var orientation = UIImage.Orientation.right
switch (transform.tx, transform.ty) {
case (0, 0): orientation = .up
case (videoSize.width, videoSize.height): orientation = .right
case (0, videoSize.width): orientation = .left
default: orientation = .down
}
// Then set the orientation during image processing, e.g. in captureOutput:
let uiImage = UIImage(cgImage: cgImage, scale: 1.0, orientation: orientation)
Hope this helps.
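If the tx/ty matching turns out to be reversed on some assets (as the answer itself warns), one hedged alternative, not part of the original answer, is to match on the rotation components of preferredTransform instead of the translation:
import AVFoundation
import UIKit

func orientation(for track: AVAssetTrack) -> UIImage.Orientation {
    let t = track.preferredTransform
    switch (t.a, t.b, t.c, t.d) {
    case (0, 1, -1, 0):  return .right // rotated 90 degrees clockwise (typical portrait capture)
    case (0, -1, 1, 0):  return .left  // rotated 90 degrees counterclockwise
    case (-1, 0, 0, -1): return .down  // rotated 180 degrees
    default:             return .up    // identity, no rotation
    }
}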

Resize a CVPixelBuffer

I'm trying to resize a CVPixelBuffer to a size of 128x128. I'm working with one that is 750x750. I'm currently using the CVPixelBuffer to create a new CGImage, which I resize then convert back into a CVPixelBuffer. Here is my code:
func getImageFromSampleBuffer(buffer: CMSampleBuffer) -> UIImage? {
    if let pixelBuffer = CMSampleBufferGetImageBuffer(buffer) {
        let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
        let context = CIContext()
        let imageRect = CGRect(x: 0, y: 0, width: 128, height: 128)
        if let image = context.createCGImage(ciImage, from: imageRect) {
            let t = CIImage(cgImage: image)
            let new = t.applying(transformation)
            context.render(new, to: pixelBuffer)
            return UIImage(cgImage: image, scale: UIScreen.main.scale, orientation: .right)
        }
    }
    return nil
}
I've also tried scaling the CIImage then converting it:
let t = CIImage(cgImage: image)
let transformation = CGAffineTransform(scaleX: 1, y: 2)
let new = t.applying(transformation)
context.render(new, to: pixelBuffer)
But that didn't work either.
Any help is appreciated. Thanks!
There's no need for pixel buffer rendering and the like. Just transform the original CIImage and crop it to size. Cropping is necessary if your source and destination dimensions aren't proportional.
func getImageFromSampleBuffer(buffer: CMSampleBuffer) -> UIImage? {
    if let pixelBuffer = CMSampleBufferGetImageBuffer(buffer) {
        let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
        let srcWidth = CGFloat(ciImage.extent.width)
        let srcHeight = CGFloat(ciImage.extent.height)
        let dstWidth: CGFloat = 128
        let dstHeight: CGFloat = 128
        let scaleX = dstWidth / srcWidth
        let scaleY = dstHeight / srcHeight
        let scale = min(scaleX, scaleY)
        let transform = CGAffineTransform(scaleX: scale, y: scale)
        let output = ciImage.transformed(by: transform).cropped(to: CGRect(x: 0, y: 0, width: dstWidth, height: dstHeight))
        return UIImage(ciImage: output)
    }
    return nil
}
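If a CVPixelBuffer is what you actually need back (for example as Core ML model input) rather than a UIImage, a hedged sketch is to render the scaled CIImage into a freshly allocated buffer. makeResizedPixelBuffer is a hypothetical helper name, not from the answer above:
import CoreImage
import CoreVideo

func makeResizedPixelBuffer(from ciImage: CIImage,
                            width: Int = 128,
                            height: Int = 128,
                            context: CIContext = CIContext()) -> CVPixelBuffer? {
    // Allocate a BGRA buffer of the destination size.
    let attrs: [CFString: Any] = [kCVPixelBufferCGImageCompatibilityKey: true,
                                  kCVPixelBufferCGBitmapContextCompatibilityKey: true]
    var buffer: CVPixelBuffer?
    let status = CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                                     kCVPixelFormatType_32BGRA, attrs as CFDictionary, &buffer)
    guard status == kCVReturnSuccess, let output = buffer else { return nil }
    // Rasterize the (already scaled) CIImage into the new buffer.
    context.render(ciImage, to: output)
    return output
}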
Try this:
func getImageFromSampleBuffer(buffer: CMSampleBuffer) -> UIImage? {
    if let pixelBuffer = CMSampleBufferGetImageBuffer(buffer) {
        let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
        let resizedCIImage = ciImage.applying(CGAffineTransform(scaleX: 128.0 / 750.0, y: 128.0 / 750.0))
        let context = CIContext()
        if let image = context.createCGImage(resizedCIImage, from: resizedCIImage.extent) {
            return UIImage(cgImage: image)
        }
    }
    return nil
}
Here I assume that the pixel buffer is square and its size is 750x750; you can adapt it to work with other aspect ratios and sizes.
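As a hedged sketch of that generalization (written with the newer transformed(by:) spelling of the same CIImage API), the scale factors can be read from the buffer's extent instead of being hard-coded; note that per-axis scaling will stretch non-square inputs unless you combine it with the min-scale-and-crop approach from the answer above:
func getImageFromSampleBuffer(buffer: CMSampleBuffer, side: CGFloat = 128) -> UIImage? {
    guard let pixelBuffer = CMSampleBufferGetImageBuffer(buffer) else { return nil }
    let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
    // Compute per-axis scale from the buffer's actual size.
    let transform = CGAffineTransform(scaleX: side / ciImage.extent.width,
                                      y: side / ciImage.extent.height)
    let resized = ciImage.transformed(by: transform)
    let context = CIContext()
    guard let cgImage = context.createCGImage(resized, from: resized.extent) else { return nil }
    return UIImage(cgImage: cgImage)
}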

CIDetector , detected face image is not showing?

I am using CIDetector to detect a face in a UIImage. I am getting the face rect correctly, but when I crop the image to the detected face rect, it is not showing in my image view.
I have already checked that my image is not nil.
Here is my code:
@IBAction func detectFaceOnImageView(_: UIButton) {
    let image = myImageView.getFaceImage()
    myImageView.image = image
}
extension UIView {
    func getFaceImage() -> UIImage? {
        let faceDetectorOptions: [String: AnyObject] = [CIDetectorAccuracy: CIDetectorAccuracyHigh as AnyObject]
        let faceDetector: CIDetector = CIDetector(ofType: CIDetectorTypeFace, context: nil, options: faceDetectorOptions)!
        let viewScreenShotImage = generateScreenShot(scaleTo: 1.0)
        if viewScreenShotImage.cgImage != nil {
            let sourceImage = CIImage(cgImage: viewScreenShotImage.cgImage!)
            let features = faceDetector.features(in: sourceImage)
            if features.count > 0 {
                var faceBounds = CGRect.zero
                var faceImage: UIImage?
                for feature in features as! [CIFaceFeature] {
                    faceBounds = feature.bounds
                    let faceCroped: CIImage = sourceImage.cropping(to: faceBounds)
                    faceImage = UIImage(ciImage: faceCroped)
                }
                return faceImage
            } else {
                return nil
            }
        } else {
            return nil
        }
    }

    func generateScreenShot(scaleTo: CGFloat = 3.0) -> UIImage {
        let rect = self.bounds
        UIGraphicsBeginImageContextWithOptions(rect.size, false, 0.0)
        let context = UIGraphicsGetCurrentContext()
        self.layer.render(in: context!)
        let screenShotImage = UIGraphicsGetImageFromCurrentImageContext()!
        UIGraphicsEndImageContext()
        let aspectRatio = screenShotImage.size.width / screenShotImage.size.height
        let resizedScreenShotImage = screenShotImage.scaleImage(toSize: CGSize(width: self.bounds.size.height * aspectRatio * scaleTo, height: self.bounds.size.height * scaleTo))
        return resizedScreenShotImage!
    }
}
(Screenshots of the relevant values omitted.)
Try this:
let faceCroped: CIImage = sourceImage.cropping(to: faceBounds)
//faceImage = UIImage(ciImage: faceCroped)
let cgImage: CGImage = {
    let context = CIContext(options: nil)
    return context.createCGImage(faceCroped, from: faceCroped.extent)!
}()
faceImage = UIImage(cgImage: cgImage)
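For what it's worth, the likely reason the original code showed nothing: a UIImage created with init(ciImage:) wraps an unrendered CIImage with no bitmap backing, and UIImageView does not reliably display such images. Rendering through a CIContext produces a real CGImage that the view can draw.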

Downsized Image gets blurry after being copied to Pasteboard - Swift 3.0

I am capturing an image which is then placed in a small imageView. The picture is not blurry in the small imageView, but when I copy it to the clipboard I resize it to the same size as the imageView, and it is blurry when I paste.
Here is the code:
import UIKit
import AVFoundation

class ViewController: UIViewController, UINavigationControllerDelegate, UIImagePickerControllerDelegate {

    @IBOutlet weak var cameraView: UIView!
    @IBOutlet weak var imageView: UIImageView!

    var session: AVCaptureSession?
    var stillImageOutput: AVCaptureStillImageOutput?
    var videoPreviewLayer: AVCaptureVideoPreviewLayer?
    var captureDevice: AVCaptureDevice?
    var imagePicker: UIImagePickerController!

    override func viewDidLoad() {
        super.viewDidLoad()
        alignment()
        tapToCopy()
    }

    override func viewWillAppear(_ animated: Bool) {
        session = AVCaptureSession()
        session!.sessionPreset = AVCaptureSessionPresetPhoto
        let videoDevices = AVCaptureDevice.devices(withMediaType: AVMediaTypeVideo)
        for device in videoDevices! {
            let device = device as! AVCaptureDevice
            if device.position == AVCaptureDevicePosition.front {
                captureDevice = device
            }
        }
        // We will make a new AVCaptureDeviceInput and attempt to associate it with our backCamera input device.
        // There is a chance that the input device might not be available, so we will set up a try catch to handle any potential errors we might encounter.
        var error: NSError?
        var input: AVCaptureDeviceInput!
        do {
            input = try AVCaptureDeviceInput(device: captureDevice)
        } catch let error1 as NSError {
            error = error1
            input = nil
            print(error!.localizedDescription)
        }
        if error == nil && session!.canAddInput(input) {
            session!.addInput(input)
            // The remainder of the session setup goes here...
            stillImageOutput = AVCaptureStillImageOutput()
            stillImageOutput?.outputSettings = [AVVideoCodecKey: AVVideoCodecJPEG]
            if session!.canAddOutput(stillImageOutput) {
                session!.addOutput(stillImageOutput)
                // Configure the live preview here...
                videoPreviewLayer = AVCaptureVideoPreviewLayer(session: session)
                videoPreviewLayer!.videoGravity = AVLayerVideoGravityResizeAspect
                videoPreviewLayer!.connection?.videoOrientation = AVCaptureVideoOrientation.portrait
                cameraView.layer.addSublayer(videoPreviewLayer!)
                session!.startRunning()
            }
        }
    }

    func alignment() {
        let height = view.bounds.size.height
        let width = view.bounds.size.width
        cameraView.bounds.size.height = height / 10
        cameraView.bounds.size.width = height / 10
        cameraView.layer.cornerRadius = height / 20
        imageView.bounds.size.height = height / 10
        imageView.bounds.size.width = height / 10
        imageView.layer.cornerRadius = height / 20
        imageView.clipsToBounds = true
    }

    override func viewDidAppear(_ animated: Bool) {
        super.viewDidAppear(animated)
        videoPreviewLayer!.frame = cameraView.bounds
    }

    @IBAction func takePic(_ sender: Any) {
        if (stillImageOutput!.connection(withMediaType: AVMediaTypeVideo)) != nil {
            let videoConnection = stillImageOutput!.connection(withMediaType: AVMediaTypeVideo)
            // Code for photo capture goes here...
            stillImageOutput?.captureStillImageAsynchronously(from: videoConnection, completionHandler: { (sampleBuffer, error) -> Void in
                // Process the image data (sampleBuffer) here to get an image file we can put in our captureImageView
                if sampleBuffer != nil {
                    let imageData = AVCaptureStillImageOutput.jpegStillImageNSDataRepresentation(sampleBuffer)
                    let dataProvider = CGDataProvider(data: imageData as! CFData)
                    let cgImageRef = CGImage(jpegDataProviderSource: dataProvider!, decode: nil, shouldInterpolate: true, intent: CGColorRenderingIntent.defaultIntent)
                    let image = UIImage(cgImage: cgImageRef!, scale: 1.0, orientation: UIImageOrientation.right)
                    // Add the image to captureImageView here...
                    self.imageView.image = self.resizeImage(image: image, newHeight: self.view.bounds.size.height / 10)
                }
            })
        }
    }

    func tapToCopy() {
        let imageTap = UITapGestureRecognizer(target: self, action: #selector(self.copyToClipboard(recognizer:)))
        imageTap.numberOfTapsRequired = 1
        imageView.isUserInteractionEnabled = true
        imageView.addGestureRecognizer(imageTap)
    }

    func copyToClipboard(recognizer: UITapGestureRecognizer) {
        UIPasteboard.general.image = self.resizeImage(image: imageView.image!, newHeight: self.view.bounds.size.height / 10)
    }

    func resizeImage(image: UIImage, newHeight: CGFloat) -> UIImage {
        let scale = newHeight / image.size.height
        let newWidth = image.size.width * scale
        UIGraphicsBeginImageContext(CGSize(width: newWidth, height: newHeight))
        image.draw(in: CGRect(x: 0, y: 0, width: newWidth, height: newHeight))
        let newImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return newImage!
    }
}
First of all, you are saying:
UIGraphicsBeginImageContext(CGSize(width: newWidth, height: newHeight))
Never, never, never call UIGraphicsBeginImageContext. Just pretend it doesn't exist. Always call UIGraphicsBeginImageContextWithOptions instead. It takes two extra parameters, which should just about always be false and 0. Things will be a lot better when you make that change, because the image will contain scale information that you are currently stripping out.
Another problem is that you are resizing the same image twice in succession — once to display the image in the image view, and then again resizing it some more when you pull the image from the image view and put it on the pasteboard. Don't do that! Instead, store the original image, without resizing. Later, you can put that on the pasteboard — or resize the original image so that you are only resizing it once, totally separate from the image in the image view.
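Putting both suggestions together, a minimal sketch under the question's own names (originalImage is a hypothetical stored property added to keep the un-resized capture around):
var originalImage: UIImage?

func resizeImage(image: UIImage, newHeight: CGFloat) -> UIImage {
    let scale = newHeight / image.size.height
    let newWidth = image.size.width * scale
    // false = not opaque; 0 = use the main screen's scale factor, so the
    // result keeps scale information instead of being forced to 1.0.
    UIGraphicsBeginImageContextWithOptions(CGSize(width: newWidth, height: newHeight), false, 0)
    image.draw(in: CGRect(x: 0, y: 0, width: newWidth, height: newHeight))
    let newImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return newImage ?? image
}

func copyToClipboard(recognizer: UITapGestureRecognizer) {
    // Resize from the stored original once, instead of re-resizing the
    // already-resized image sitting in the image view.
    if let original = originalImage {
        UIPasteboard.general.image = resizeImage(image: original, newHeight: view.bounds.size.height / 10)
    }
}
In takePic, you would store self.originalImage = image before assigning the resized copy to the image view.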

App is crashing silently during custom camera (Swift)

The app is crashing at random points in this function. I believe I need to scale the image down, but I am not sure. The only requirements I have for the image are that it remains square and stays decently sized, because it needs to be big enough to span the entire screen's width.
Here is an error that sometimes comes along with the crash:
warning: could not load any Objective-C class information. This will significantly reduce the quality of type information available.
@IBAction func didPressTakePhoto(sender: UIButton) {
    self.previewLayer?.connection.enabled = false
    if let videoConnection = stillImageOutput!.connectionWithMediaType(AVMediaTypeVideo) {
        videoConnection.videoOrientation = AVCaptureVideoOrientation.Portrait
        stillImageOutput?.captureStillImageAsynchronouslyFromConnection(videoConnection, completionHandler: { (sampleBuffer, error) in
            if (sampleBuffer != nil) {
                let imageData = AVCaptureStillImageOutput.jpegStillImageNSDataRepresentation(sampleBuffer)
                let dataProvider = CGDataProviderCreateWithCFData(imageData)
                let cgImageRef = CGImageCreateWithJPEGDataProvider(dataProvider, nil, true, CGColorRenderingIntent.RenderingIntentDefault)
                var image = UIImage()
                if UIDevice.currentDevice().orientation == .Portrait {
                    image = UIImage(CGImage: cgImageRef!, scale: 1.0, orientation: UIImageOrientation.Right)
                } else if UIDevice.currentDevice().orientation == .LandscapeLeft {
                    image = UIImage(CGImage: cgImageRef!, scale: 1.0, orientation: UIImageOrientation.Up)
                } else if UIDevice.currentDevice().orientation == .LandscapeRight {
                    image = UIImage(CGImage: cgImageRef!, scale: 1.0, orientation: UIImageOrientation.Down)
                }
                // Crop the image to a square
                let imageSize: CGSize = image.size
                let width: CGFloat = imageSize.width
                let height: CGFloat = imageSize.height
                if width != height {
                    let newDimension: CGFloat = min(width, height)
                    let widthOffset: CGFloat = (width - newDimension) / 2
                    let heightOffset: CGFloat = (height - newDimension) / 2
                    UIGraphicsBeginImageContextWithOptions(CGSizeMake(newDimension, newDimension), false, 0.0)
                    image.drawAtPoint(CGPointMake(-widthOffset, -heightOffset), blendMode: .Copy, alpha: 1.0)
                    image = UIGraphicsGetImageFromCurrentImageContext()
                    let imageData: NSData = UIImageJPEGRepresentation(image, 0.1)!
                    UIGraphicsEndImageContext()
                    self.captImage = UIImage(data: imageData)!
                }
            }
            self.performSegueWithIdentifier("fromCustomCamera", sender: self)
        })
    }
}
This code is running in my viewDidAppear and stillImageOutput is returning nil when I take a photo.
if self.isRunning == false {
    captureSession = AVCaptureSession()
    captureSession!.sessionPreset = AVCaptureSessionPresetPhoto
    let backCamera = AVCaptureDevice.defaultDeviceWithMediaType(AVMediaTypeVideo)
    var error: NSError?
    do {
        input = try AVCaptureDeviceInput(device: backCamera)
    } catch let error1 as NSError {
        error = error1
        print(error)
        input = nil
    }
    if error == nil && captureSession!.canAddInput(input) {
        captureSession!.addInput(input)
        stillImageOutput = AVCaptureStillImageOutput()
        stillImageOutput!.outputSettings = [AVVideoCodecKey: AVVideoCodecJPEG]
        if captureSession!.canAddOutput(stillImageOutput) {
            captureSession!.addOutput(stillImageOutput)
            previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
            previewLayer!.videoGravity = AVLayerVideoGravityResizeAspectFill
            previewLayer!.connection?.videoOrientation = AVCaptureVideoOrientation.Portrait
            previewView.layer.addSublayer(previewLayer!)
            captureSession!.startRunning()
            self.isRunning = true
        }
    }
}
Fixed it. The reason it was crashing was that my images were way too big. I had to compress them.
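For reference, a hedged sketch of the kind of compression described, using the same Swift 2-era API the question already calls (0.5 is an assumed quality factor):
// Re-encode the cropped square as JPEG to shrink its memory footprint
// before handing it to the segue.
if let jpegData = UIImageJPEGRepresentation(image, 0.5) {
    self.captImage = UIImage(data: jpegData)!
}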
