I'm desperately trying to create a QR code in Swift and convert the image to an (NS)Data string. It's supposed to act as an image in an HTML file later.
Although the QR code is created perfectly, the conversion to a data string nevertheless produces nil. Does anyone have an idea what's wrong with my code?
let dataString = "some text or code or whatever"
let data = dataString.data(using: .utf8)

if let filter = CIFilter(name: "CIQRCodeGenerator") {
    filter.setValue(data, forKey: "inputMessage")
    filter.setValue("Q", forKey: "inputCorrectionLevel")
    let transform = CGAffineTransform(scaleX: 3, y: 3)

    if let output = filter.outputImage?.transformed(by: transform) {
        let bild = UIImage(ciImage: output) // <-- works quite well, image is shown in ImageView
        let bildData = UIImageJPEGRepresentation(bild, 1.0) // <-- produces NIL
        let bildString = bildData?.base64EncodedString(options: .lineLength64Characters)
        // also tried: let bildString: String = String(data: bildData, encoding: .utf8)!
        print("QRCODE-String: \(bildString)") // NIL
    }
}
I also tried UIImagePNGRepresentation() with the same result.
CIImage is filter instructions. CGImage is bitmap data.
UIImage is a wrapper. A UIImage wrapping a CIImage merely contains filter instructions. A UIImage wrapping a CGImage contains bitmap data.
So the problem you are having has nothing to do with NSData. It has to do with UIImage. You are saying:
let bild = UIImage(ciImage: output)
let bildData = UIImageJPEGRepresentation(bild, 1.0)
bild is not a "real" image; it is merely a wrapper for a CIImage. There is no bitmap data in it, only the instructions for the CIImage filter. You can't see anything until you render the image into a bitmap. A UIImageView might be able to do that for you, but UIImageJPEGRepresentation cannot.
If you want to save the image as data, first draw the image into an image graphics context to get the bitmap. Extract the resulting image, and you now have a real UIImage backed by a CGImage. You can now save its data, because it actually has data.
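A minimal sketch of that approach, reusing output, bild, and the base64 step from the question:

let bild = UIImage(ciImage: output)

// Render the CIImage-backed UIImage into a bitmap graphics context first.
UIGraphicsBeginImageContextWithOptions(bild.size, false, bild.scale)
bild.draw(in: CGRect(origin: .zero, size: bild.size))
let rendered = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()

// rendered is backed by a CGImage, so JPEG encoding no longer returns nil.
if let rendered = rendered, let bildData = UIImageJPEGRepresentation(rendered, 1.0) {
    let bildString = bildData.base64EncodedString(options: .lineLength64Characters)
    print("QRCODE-String: \(bildString)")
}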
So I'm trying to get a JPEG/PNG representation of the grayscale depth maps that are typically used in iOS image-depth examples. The depth data is stored in each JPEG as auxiliary data. I've followed some tutorials and I have no problem rendering this grayscale data to the screen, but I can find no way to actually save it as a JPEG/PNG representation. I'm pretty much using this code: https://www.raywenderlich.com/168312/image-depth-maps-tutorial-ios-getting-started
The depth data is put into a CVPixelBuffer and manipulated accordingly. I believe it's in the format kCVPixelFormatType_DisparityFloat32.
While I'm able to see this data represented accordingly on the screen, I'm unable to use UIImagePNGRepresentation or UIImageJPEGRepresentation. Sure, I could manually capture a screenshot, but that's not really ideal.
I have a suspicion that the CVPixelBuffer data format is not compatible with these UIImage functions, and that's why I can't get them to spit out an image.
Does anyone have any suggestions?
// CVPixelBuffer to UIImage
let ciImageDepth = CIImage(cvPixelBuffer: cvPixelBufferDepth)
let contextDepth = CIContext(options: nil)
let cgImageDepth = contextDepth.createCGImage(ciImageDepth, from: ciImageDepth.extent)!
let uiImageDepth = UIImage(cgImage: cgImageDepth, scale: 1, orientation: .up)

// Save UIImage to Photos Album
UIImageWriteToSavedPhotosAlbum(uiImageDepth, nil, nil, nil)
I figured it out. I had to convert to a CGImage first; I ended up going CVPixelBuffer to CIImage to CGImage to UIImage, as in the snippet above.
Posting a Swift code sample in case anyone wants to use it. For comparison, this was the code used to render the depth data to the screen, which is not enough for saving, since it never creates a CGImage:
let ciimage = CIImage(cvPixelBuffer: depthBuffer) // depth cvPixelBuffer
let depthUIImage = UIImage(ciImage: ciimage)
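Since uiImageDepth in the working snippet above is backed by a CGImage, the encoders the question mentions should work on it as well; a minimal sketch:

// uiImageDepth is CGImage-backed, so PNG/JPEG encoding now succeeds
// (on Swift 4.2+, use uiImageDepth.pngData() / jpegData(compressionQuality:) instead).
let pngDataDepth = UIImagePNGRepresentation(uiImageDepth)
let jpegDataDepth = UIImageJPEGRepresentation(uiImageDepth, 0.9)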
I am trying to convert an image into a grayscale one using GPUImage. I wrote an extension to get my work done. The grayscale part is okay, but the output image is doubled in size. In my case I need the image to stay exactly the same size. Can someone please help me with this? Any help would be highly appreciated.
This is the extension I wrote
import UIKit
import GPUImage

extension UIImage {
    public func grayscale() -> UIImage? {
        var processedImage = self
        print("1: " + "\(processedImage.size)")
        let image = GPUImagePicture(image: processedImage)
        let grayFilter = GPUImageGrayscaleFilter()
        image?.addTarget(grayFilter)
        grayFilter.useNextFrameForImageCapture()
        image?.processImage()
        processedImage = grayFilter.imageFromCurrentFramebuffer()
        print("2: " + "\(processedImage.size)")
        return processedImage
    }
}
The console output shows that the second printed size is double the first.
Edit: I know the image can be resized later on, but I need to know why this is happening and whether there is anything I can do to keep the image at its original size while using GPUImage.
The size doubles because GPUImage hands the result back at a scale of 1.0, while the input was presumably a @2x (Retina) image: the pixel dimensions are unchanged, but the point size doubles. Re-wrap the bitmap with the original image's scale afterwards:
if let cgImage = processedImage.cgImage {
    // Re-wrap with the original image's scale (self.scale inside the extension);
    // a hardcoded 2.0 would only be correct for a @2x input.
    let scaledImage = UIImage(cgImage: cgImage, scale: self.scale, orientation: processedImage.imageOrientation)
    return scaledImage
}
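Folded into the extension from the question, a sketch might look like this (assuming the same GPUImage pipeline as above; untested against any specific GPUImage version):

extension UIImage {
    public func grayscale() -> UIImage? {
        guard let picture = GPUImagePicture(image: self) else { return nil }
        let grayFilter = GPUImageGrayscaleFilter()
        picture.addTarget(grayFilter)
        grayFilter.useNextFrameForImageCapture()
        picture.processImage()
        guard let filtered = grayFilter.imageFromCurrentFramebuffer(),
              let cgImage = filtered.cgImage else { return nil }
        // Re-wrap the bitmap with the original scale so the point size matches the input.
        return UIImage(cgImage: cgImage, scale: self.scale, orientation: self.imageOrientation)
    }
}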
After applying a CIFilter to a photo captured with the camera, the image shrinks and repositions itself.
I was thinking that if I could get the original image's size and orientation, it would scale accordingly and pin the image view to the corners of the screen. However, nothing changes with this approach, and I'm not aware of a way to properly get the image to scale to the full size of the screen.
func applyBloom() -> UIImage {
    let ciImage = CIImage(image: image) // image is from UIImageView
    let filteredImage = ciImage?.applyingFilter("CIBloom",
                                                withInputParameters: [kCIInputRadiusKey: 8,
                                                                      kCIInputIntensityKey: 1.00])
    let originalScale = image.scale
    let originalOrientation = image.imageOrientation
    if let image = filteredImage {
        let image = UIImage(ciImage: image, scale: originalScale, orientation: originalOrientation)
        return image
    }
    return self.image
}
Picture description: the captured photo and a screenshot of the displayed image, where the empty spacing is the result of the image shrinking.
Try something like this, replacing your applyBloom() with:
func applyBloom() -> UIImage {
    // image is from UIImageView
    guard let ciInputImage = CIImage(image: image) else { return self.image }
    let ciOutputImage = ciInputImage.applyingFilter("CIBloom",
                                                    withInputParameters: [kCIInputRadiusKey: 8, kCIInputIntensityKey: 1.00])
    let context = CIContext()
    // Render the filtered CIImage into a CGImage the size of the *input* extent.
    guard let cgOutputImage = context.createCGImage(ciOutputImage, from: ciInputImage.extent) else { return self.image }
    return UIImage(cgImage: cgOutputImage)
}
I renamed various variables to help explain what's happening.
Obviously, depending on your code, some tweaking of optionals and unwrapping may be needed.
What's happening is this: take the filtered/output CIImage and, using a CIContext, create a CGImage the size of the input CIImage.
Be aware that creating a CIContext is expensive. If you already have one created, you should probably use it.
Pretty much, a UIImage size is the same as a CIImage extent. (I say pretty much because some generated CIImages can have infinite extents.)
Depending on your specific needs (and your UIImageView), you may want to use the output CIImage extent instead. Usually, though, they are the same.
Last, a suggestion. If you are trying to use a CIFilter to show "near real-time" changes to an image (like a photo editor), consider the major performance improvements you'll get using CIImages and a GLKView over UIImages and a UIImageView. The former uses a device's GPU instead of the CPU.
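For illustration, a rough sketch of that GLKView route (the names here are illustrative assumptions, not code from this question; assumes import GLKit inside a view controller):

// Draw the filtered CIImage straight into a GLKView via a GPU-backed CIContext.
let eaglContext = EAGLContext(api: .openGLES2)!
let glkView = GLKView(frame: view.bounds, context: eaglContext)
let ciContext = CIContext(eaglContext: eaglContext)

glkView.bindDrawable()
ciContext.draw(filteredCIImage,
               in: CGRect(x: 0, y: 0, width: glkView.drawableWidth, height: glkView.drawableHeight),
               from: filteredCIImage.extent)
glkView.display()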
The shrinking could also happen if a CIFilter outputs an image with dimensions different from the input image (e.g. with CIPixellate).
In that case, simply tell the CIContext to render the image in a smaller rectangle:
let cgOutputImage = context.createCGImage(ciOutputImage, from: ciInputImage.extent.insetBy(dx: 20, dy: 20))
I'm making a simple filter app. I've found that if you load an image from the camera roll that is a PNG (PNGs have no orientation data flag) and the height is greater than the width, applying certain distortion filters to that image will rotate it and present it as if it were a landscape image.
I found the technique below somewhere in the many tabs I had open, and it seems to do exactly what I want. It uses the original scale and orientation of the image from when it was first loaded.
let newImage = UIImage(CIImage:(output), scale: 1.0, orientation: self.origImage.imageOrientation)
but this is the warning I get when I try to use it:
Ambiguous use of 'init(CIImage:scale:orientation:)'
Here's the entire thing I'm trying to get working:
// global variables
var image: UIImage!
var origImage: UIImage!

func setFilter(action: UIAlertAction) {
    origImage = image
    // make sure we have a valid image before continuing!
    guard let image = self.imageView.image?.cgImage else { return }
    let openGLContext = EAGLContext(api: .openGLES3)
    let context = CIContext(eaglContext: openGLContext!)
    let ciImage = CIImage(cgImage: image)
    let currentFilter = CIFilter(name: "CIBumpDistortion")
    currentFilter?.setValue(ciImage, forKey: kCIInputImageKey)
    if let output = currentFilter?.value(forKey: kCIOutputImageKey) as? CIImage {
        // the line below is the one giving me errors, which I thought would work
        let newImage = UIImage(CIImage:(output), scale: 1.0, orientation: self.image.imageOrientation)
        self.imageView.image = UIImage(cgImage: context.createCGImage(newImage, from: output.extent)!)
    }
}
The filters all work; unfortunately, they rotate the images described above by 90 degrees, for the reasons I suspect.
I've tried some other methods, like using an extension that checks the orientation of UIImages: converting the CIImage to a UIImage, applying the extension, then trying to convert it back to a CIImage, or just loading the UIImage into the imageView for output. I ran into snag after snag with that process, and it started to seem really convoluted just to get certain images into their default orientation.
Any advice would be greatly appreciated!
EDIT: here's where I got the method I was trying: When applying a filter to a UIImage the result is upside down
I found the answer. My biggest issue was the "Ambiguous use of 'init(CIImage:scale:orientation:)'" warning.
It turned out that Xcode was auto-completing the code as 'CIImage:scale:orientation:' when it should have been 'ciImage:scale:orientation:'. The very vague error left a new dev like me scratching my head for 3 days. (This was true for the CGImage and UIImage inits as well, but my original error was with CIImage, so I used that to explain.)
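In other words (a minimal illustration, with output standing in for the filter result):

// Triggers "Ambiguous use of 'init(CIImage:scale:orientation:)'":
// let wrong = UIImage(CIImage: output, scale: 1.0, orientation: .up)

// Compiles: Swift 3+ spells the argument label in lowercase.
let right = UIImage(ciImage: output, scale: 1.0, orientation: .up)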
With that knowledge I was able to formulate the code below for my new output:
if let output = currentFilter?.value(forKey: kCIOutputImageKey) as? CIImage {
    let outputImage = UIImage(cgImage: context.createCGImage(output, from: output.extent)!)
    let imageTurned = UIImage(cgImage: outputImage.cgImage!, scale: CGFloat(1.0), orientation: origImage.imageOrientation)
    centerScrollViewContents()
    self.imageView.image = imageTurned
}
This code replaces the if let output in the OP.
I have a database with images stored as blobs and would like to display them in a UIImage. I am able to get the data into my app as a JSON feed, and I am able to grab the image data (via print) as follows:
[["image": <UIImage: 0x14ea397b0> size {750, 750} orientation 0 scale 1.000000]]
I have no idea how to translate this data back into the UIImage I have on my storyboard.
Suppose you have the blob data as a string from the DB. You can convert it to an image as below:
let imageBytes = "" // blob data

DispatchQueue.main.async {
    if let imgData = imageBytes.data(using: .utf8) {
        self.imageView.image = UIImage(data: imgData)
    }
}
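Note that raw image bytes usually don't survive a round trip through a UTF-8 string; if your JSON feed delivers the blob base64-encoded (an assumption about your feed), decode it like this instead:

DispatchQueue.main.async {
    // imageBytes here holds the base64 string from the JSON feed.
    if let imgData = Data(base64Encoded: imageBytes) {
        self.imageView.image = UIImage(data: imgData)
    }
}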