Save image as-is in photo album using Swift - iOS

I've written a steganography application in Swift v2. The workflow is simple: I open an image, type in a message to hide, flip the least significant bit of each pixel to encode it, and then save the result to the photo album.
The problem is that iOS appears to be compressing my image on save, and some of the bits change.
How can I save my image directly to the photo album without iOS changing any of my bits? (I can post the code here, but there is a lot of it.)
(This is a small snippet of the overall code:)
// Swift 2: pull the composed image out of the bitmap context and save it
let imageRef = CGBitmapContextCreateImage(context)
let newImage = UIImage(CGImage: imageRef!)
UIImageWriteToSavedPhotosAlbum(newImage, nil, nil, nil)

It seems that I just needed to convert my newImage to PNG data with UIImagePNGRepresentation before saving. PNG is lossless, so the least significant bits survive:
let imageRef = CGBitmapContextCreateImage(context)
let newImage = UIImage(CGImage: imageRef!)
// Round-trip through PNG data so no lossy encoder touches the pixels
let newImagePNG = UIImagePNGRepresentation(newImage)
let saveableImage = UIImage(data: newImagePNG!)
saveImage(saveableImage!)  // saveImage is the asker's own helper
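A more direct route (on iOS 9 and later, written here in modern Swift rather than the question's Swift 2) is to hand the PNG bytes to the Photos framework yourself, so nothing can re-encode them on the way into the library. A minimal sketch, assuming pngData holds the UIImagePNGRepresentation output and photo-library add permission has been granted:

import Photos

// Write the PNG bytes into the photo library as-is, instead of going
// through UIImageWriteToSavedPhotosAlbum
func savePNGDataToPhotos(_ pngData: Data, completion: @escaping (Bool, Error?) -> Void) {
    PHPhotoLibrary.shared().performChanges({
        let request = PHAssetCreationRequest.forAsset()
        request.addResource(with: .photo, data: pngData, options: nil)
    }, completionHandler: completion)
}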

Related

Need some help converting CVPixelBuffer data to a JPEG/PNG in iOS

So I'm trying to get a JPEG/PNG representation of the grayscale depth maps that are typically used in iOS image-depth examples. The depth data is stored in each JPEG as auxiliary data. I've followed some tutorials and I have no problem rendering this grayscale data to the screen, but I can find no way to actually save it as a JPEG/PNG representation. I'm pretty much using this code: https://www.raywenderlich.com/168312/image-depth-maps-tutorial-ios-getting-started
The depth data is put into a CVPixelBuffer and manipulated accordingly. I believe it's in the format kCVPixelFormatType_DisparityFloat32.
While I'm able to see this data rendered on screen, I'm unable to use UIImagePNGRepresentation or UIImageJPEGRepresentation on it. Sure, I could manually capture a screenshot, but that's not really ideal.
I suspect the CVPixelBuffer's pixel format is not compatible with these UIImage functions, and that's why I can't get them to spit out an image.
Does anyone have any suggestions?
// CVPixelBuffer -> CIImage -> CGImage -> UIImage
let ciImageDepth = CIImage(cvPixelBuffer: cvPixelBufferDepth)
let contextDepth = CIContext(options: nil)
let cgImageDepth = contextDepth.createCGImage(ciImageDepth, from: ciImageDepth.extent)!
let uiImageDepth = UIImage(cgImage: cgImageDepth, scale: 1, orientation: .up)
// Save UIImage to Photos album
UIImageWriteToSavedPhotosAlbum(uiImageDepth, nil, nil, nil)
I figured it out. I had to convert to a CGImage first; the chain is CVPixelBuffer to CIImage to CGImage to UIImage.
Posting a Swift code sample in case anyone wants to use it:
let ciImageDepth = CIImage(cvPixelBuffer: depthBuffer)  // depth CVPixelBuffer
let cgImageDepth = CIContext(options: nil).createCGImage(ciImageDepth, from: ciImageDepth.extent)!
let depthUIImage = UIImage(cgImage: cgImageDepth)  // CGImage-backed, so UIImagePNGRepresentation works
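Wrapped up as one function, a minimal sketch under the same assumptions (the buffer has already been converted to a displayable format; a raw kCVPixelFormatType_DisparityFloat32 buffer may first need normalizing, e.g. via AVDepthData):

import CoreImage
import UIKit

// Depth/disparity CVPixelBuffer -> PNG bytes
func pngData(fromDepthBuffer depthBuffer: CVPixelBuffer) -> Data? {
    let ciImage = CIImage(cvPixelBuffer: depthBuffer)
    let context = CIContext(options: nil)
    guard let cgImage = context.createCGImage(ciImage, from: ciImage.extent) else {
        return nil
    }
    // Must be CGImage-backed; a purely CIImage-backed UIImage returns nil here
    return UIImagePNGRepresentation(UIImage(cgImage: cgImage))
}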

GPUImage doubles image size - iOS/Swift

I am trying to convert an image into a grayscale one using GPUImage, and I wrote an extension to get the work done. The grayscale conversion itself is okay, but the output image comes out at double the size. In my case I need the image to stay exactly the same size. Can someone please help me with this? Any help would be highly appreciated.
This is the extension I wrote:
import UIKit
import GPUImage

extension UIImage {
    public func grayscale() -> UIImage? {
        var processedImage = self
        print("1: \(processedImage.size)")

        // Run the image through GPUImage's grayscale filter
        let image = GPUImagePicture(image: processedImage)
        let grayFilter = GPUImageGrayscaleFilter()
        image?.addTarget(grayFilter)
        grayFilter.useNextFrameForImageCapture()
        image?.processImage()

        processedImage = grayFilter.imageFromCurrentFramebuffer()
        print("2: \(processedImage.size)")
        return processedImage
    }
}
This is the output in the console (screenshot omitted): the size printed at "2:" is double the size printed at "1:".
Edit: I know the image can be resized later on, but I need to know why this is happening and whether there is anything I can do to keep the image at its original size while using GPUImage.
Try recreating the image with the original scale afterwards. GPUImage hands back the framebuffer contents as a UIImage with scale 1.0, so on a 2x Retina device the point size doubles even though the pixel dimensions are unchanged; reapplying the source image's scale restores the size:
if let cgImage = processedImage.cgImage {
    // The scale value 2.0 here should be replaced by the original image's scale.
    let scaledImage = UIImage(cgImage: cgImage, scale: 2.0, orientation: processedImage.imageOrientation)
    return scaledImage
}
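Folded back into the extension, a sketch that reuses the receiver's own scale rather than hard-coding 2.0 (untested, assuming GPUImage's Swift interface as used in the question):

extension UIImage {
    public func grayscale() -> UIImage? {
        let picture = GPUImagePicture(image: self)
        let grayFilter = GPUImageGrayscaleFilter()
        picture?.addTarget(grayFilter)
        grayFilter.useNextFrameForImageCapture()
        picture?.processImage()

        guard let filtered = grayFilter.imageFromCurrentFramebuffer(),
              let cgImage = filtered.cgImage else { return nil }
        // Rebuild with the original scale so the point size matches the input
        return UIImage(cgImage: cgImage, scale: self.scale, orientation: self.imageOrientation)
    }
}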

If a filter is applied to a PNG where height > width, it rotates the image 90 degrees. How can I efficiently prevent this?

I'm making a simple filter app. I've found that if you load an image from the camera roll that is a PNG (PNGs carry no orientation flag) and its height is greater than its width, then applying certain distortion filters to that image rotates it and presents it as if it were a landscape image.
I found the technique below somewhere in the many tabs I had open, and it seems to do exactly what I want: it reuses the original scale and orientation of the image from when it was first loaded.
let newImage = UIImage(CIImage:(output), scale: 1.0, orientation: self.origImage.imageOrientation)
but this is the error I get when I try to use it:
Ambiguous use of 'init(CIImage:scale:orientation:)'
Here's the entire thing I'm trying to get working:
// global variables
var image: UIImage!
var origImage: UIImage!

func setFilter(action: UIAlertAction) {
    origImage = image
    // make sure we have a valid image before continuing!
    guard let image = self.imageView.image?.cgImage else { return }
    let openGLContext = EAGLContext(api: .openGLES3)
    let context = CIContext(eaglContext: openGLContext!)
    let ciImage = CIImage(cgImage: image)
    let currentFilter = CIFilter(name: "CIBumpDistortion")
    currentFilter?.setValue(ciImage, forKey: kCIInputImageKey)
    if let output = currentFilter?.value(forKey: kCIOutputImageKey) as? CIImage {
        // the line below is the one giving me errors, which I thought would work
        let newImage = UIImage(CIImage: output, scale: 1.0, orientation: self.image.imageOrientation)
        self.imageView.image = UIImage(cgImage: context.createCGImage(newImage, from: output.extent)!)
    }
}
The filters all work; unfortunately they rotate images like the ones described above by 90 degrees, for the reasons I suspect.
I've tried some other methods, like using an extension that checks the orientation of UIImages: converting the CIImage to a UIImage, applying the extension, then converting it back to a CIImage, or just loading the UIImage into the imageView for output. I ran into snag after snag with that process, and it started to seem really convoluted just to get certain images into their default orientation.
Any advice would be greatly appreciated!
EDIT: here's where I got the method I was trying: When applying a filter to a UIImage the result is upside down
I found the answer. My biggest issue was the "Ambiguous use of 'init(CIImage:scale:orientation:)'" error.
It turned out that Xcode was auto-completing the code as 'CIImage:scale:orientation:' when it should have been 'ciImage:scale:orientation:'. The very vague error left a new dev like me scratching my head for three days. (The same was true for the CGImage and UIImage inits as well, but my original error was with CIImage, so I used that one to explain.)
With that knowledge I was able to formulate the code below for my new output:
if let output = currentFilter?.value(forKey: kCIOutputImageKey) as? CIImage {
    let outputImage = UIImage(cgImage: context.createCGImage(output, from: output.extent)!)
    // Rebuild the result with the original image's orientation
    let imageTurned = UIImage(cgImage: outputImage.cgImage!, scale: CGFloat(1.0), orientation: origImage.imageOrientation)
    centerScrollViewContents()
    self.imageView.image = imageTurned
}
This code replaces the "if let output" block in the original post.
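Putting the pieces together, a sketch of the whole setFilter(action:) with the fix applied (labels lowercased for Swift 3+, original orientation reapplied; the names image, origImage, and imageView come from the question):

func setFilter(action: UIAlertAction) {
    origImage = image
    guard let cgInput = self.imageView.image?.cgImage else { return }
    let context = CIContext(eaglContext: EAGLContext(api: .openGLES3)!)

    let currentFilter = CIFilter(name: "CIBumpDistortion")
    currentFilter?.setValue(CIImage(cgImage: cgInput), forKey: kCIInputImageKey)

    if let output = currentFilter?.outputImage,
       let cgOutput = context.createCGImage(output, from: output.extent) {
        // Rebuild the UIImage with the source orientation so portrait PNGs stay portrait
        self.imageView.image = UIImage(cgImage: cgOutput,
                                       scale: 1.0,
                                       orientation: origImage.imageOrientation)
    }
}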

Cannot force unwrap value of non-optional type 'UIImage'

My project contains the following line of code, but I always get this error. I am using Xcode 7.2 and iOS 9.
let image: UIImage = UIImage(CGImage:imageRef, scale:originalImage.scale, orientation:originalImage.imageOrientation)!
Remove the !
The result of that method isn't optional - you don't need to unwrap it.
NB You don't need the ": UIImage" in the variable declaration either - Swift will infer its type for you.
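So the corrected line (assuming imageRef itself is non-optional) would be:
let image = UIImage(CGImage: imageRef, scale: originalImage.scale, orientation: originalImage.imageOrientation)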
EDIT: What if imageRef is optional (from @chewie's comment)?
You have a few options.
1. Use if let:
if let imageRef = imageRef {
    let image = UIImage(CGImage: imageRef, scale: originalImage.scale, orientation: originalImage.imageOrientation)
    // Do something with image here
}
2. Use guard:
guard let imageRef = imageRef else {
    print("Oops, no imageRef - aborting")
    return
}
let image = UIImage(CGImage: imageRef, scale: originalImage.scale, orientation: originalImage.imageOrientation)
// Do something with image here
3. Use map:
let image = imageRef.map {
    UIImage(CGImage: $0, scale: originalImage.scale, orientation: originalImage.imageOrientation)
}
// Do something with image here, remembering that it's
// optional this time :)
The choice of which to use is yours, but here's my rule of thumb.
If what you need to do requires an image, use guard and abort early if you don't have one. This generally makes your code easier to read and understand.
If what you need to do can be done without an image, use if let or map. if let is useful if you just want to maybe do something and then carry on. map is useful if you need to pass your UIImage? around and use it later.

How To Properly Compress UIImages At Runtime

I need to load 4 images for simultaneous editing. When I load them from the user's library, memory use exceeds 500 MB and the app crashes.
Here is a log from a raw allocations dump before I did any compression attempts:
Code:
var pickedImage = UIImage(data: imageData)
Instruments (allocations screenshot omitted).
I have read several posts on compressing UIImages, and I tried reducing the UIImage's scale:
New Code:
var pickedImage = UIImage(data: imageData, scale:0.1)
Instruments (allocations screenshot omitted).
Reducing the scale of the UIImage had NO EFFECT?! Very odd.
So then I tried creating a JPEG-compressed copy from the full UIImage:
New code:
var pickedImage = UIImage(data: imageData)
var compressedData: NSData = UIImageJPEGRepresentation(pickedImage!, 0)!
var compressedImage: UIImage = UIImage(data: compressedData)!  // this is now used to display
Instruments (allocations screenshot omitted).
Now, I suspect that because I am converting the image, the full-size version is still being loaded. Since this is all occurring inside a callback from PHImageManager, I need a way to create a compressed UIImage straight from the NSData, but setting the scale to 0.1 did nothing.
So any suggestions as to how I can compress this UIImage right from the NSData would be life-saving!!!
Thanks
I ended up hard-coding a size reduction before processing the image. Here is the code:
PHImageManager.defaultManager().requestImageForAsset(asset, targetSize: CGSizeMake(CGFloat(asset.pixelWidth), CGFloat(asset.pixelHeight)), contentMode: .AspectFill, options: options) {
    result, info in
    var minRatio: CGFloat = 1
    // Reduce file size: cap the image at half the screen size
    if CGFloat(asset.pixelWidth) > UIScreen.mainScreen().bounds.width/2 || CGFloat(asset.pixelHeight) > UIScreen.mainScreen().bounds.height/2 {
        minRatio = min((UIScreen.mainScreen().bounds.width/2)/CGFloat(asset.pixelWidth), (UIScreen.mainScreen().bounds.height/2)/CGFloat(asset.pixelHeight))
    }
    let size = CGSizeMake(CGFloat(asset.pixelWidth)*minRatio, CGFloat(asset.pixelHeight)*minRatio)
    UIGraphicsBeginImageContextWithOptions(size, false, 0.0)
    result.drawInRect(CGRectMake(0, 0, size.width, size.height))
    let final = UIGraphicsGetImageFromCurrentImageContext()
    // (note: no UIGraphicsEndImageContext() here -- see the answer below)
    var image = iImage(uiimage: final)  // iImage is the asker's own wrapper type
}
The reason you're having crashes and seeing such high memory usage is that you are missing the call to UIGraphicsEndImageContext(), so you are leaking image contexts like crazy.
For every call to UIGraphicsBeginImageContextWithOptions, make sure you have a call to UIGraphicsEndImageContext (after UIGraphicsGetImage*).
Also, you should wrap the work in an autorelease pool (autoreleasepool { } in Swift; I'm presuming you're using ARC), otherwise you'll still get out-of-memory crashes if you are rapidly processing images.
Do it like this:
autoreleasepool {
    UIGraphicsBeginImageContextWithOptions(...)
    // ..
    something = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
}
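Applying both fixes to the snippet above, a sketch in the same Swift 2-era style (iImage is the asker's own type, so this returns a plain UIImage instead):

func downscaledImage(result: UIImage, asset: PHAsset) -> UIImage? {
    var final: UIImage?
    autoreleasepool {
        let halfWidth = UIScreen.mainScreen().bounds.width / 2
        let halfHeight = UIScreen.mainScreen().bounds.height / 2
        var minRatio: CGFloat = 1
        if CGFloat(asset.pixelWidth) > halfWidth || CGFloat(asset.pixelHeight) > halfHeight {
            minRatio = min(halfWidth / CGFloat(asset.pixelWidth), halfHeight / CGFloat(asset.pixelHeight))
        }
        let size = CGSizeMake(CGFloat(asset.pixelWidth) * minRatio, CGFloat(asset.pixelHeight) * minRatio)
        UIGraphicsBeginImageContextWithOptions(size, false, 0.0)
        result.drawInRect(CGRectMake(0, 0, size.width, size.height))
        final = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()  // the missing call that caused the leak
    }
    return final
}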
