I have a database with images stored as blobs and would like to display them in a UIImage. I am able to get the data into my app as a JSON feed, and I am able to grab the image data (via print) as follows:
[["image": <UIImage: 0x14ea397b0> size {750, 750} orientation 0 scale 1.000000]]
I have no idea how to translate this data back into the UIImage I have on my storyboard.
Suppose you have the blob data as a string from the db. You can convert it to an image as below.
let imageBytes = "" // blob data
DispatchQueue.main.async {
    if let imgData = imageBytes.data(using: .utf8) {
        self.imageView.image = UIImage(data: imgData)
    }
}
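Note that a blob delivered inside a JSON feed is normally base64-encoded rather than raw UTF-8 text, so decoding with Data(base64Encoded:) is usually what you want. A minimal sketch, assuming a hypothetical imageView outlet and a base64 string pulled from the feed:

```swift
import UIKit

// Hypothetical base64 blob string taken from the JSON feed.
let blobString = getBlobStringFromFeed() // assumed helper, not part of the original code

DispatchQueue.main.async {
    // Base64-decode the blob back into binary image data,
    // then let UIImage parse the bytes (PNG, JPEG, etc.).
    if let imageData = Data(base64Encoded: blobString),
       let image = UIImage(data: imageData) {
        self.imageView.image = image
    }
}
```

If the decode returns nil, check whether the server wraps the base64 in quotes or adds line breaks; `Data(base64Encoded:options:)` with `.ignoreUnknownCharacters` tolerates the latter.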
I'm desperately trying to create a QR code in Swift and convert the image to an (NS)Data string. It's supposed to act as an image in an HTML file later.
Although the QR code is created perfectly, the conversion to a data string produces nil. Does anyone have an idea what's wrong with my code?
let dataString = "some text or code or whatever"
let data = dataString.data(using: .utf8)

if let filter = CIFilter(name: "CIQRCodeGenerator") {
    filter.setValue(data, forKey: "inputMessage")
    filter.setValue("Q", forKey: "inputCorrectionLevel")
    let transform = CGAffineTransform(scaleX: 3, y: 3)
    if let output = filter.outputImage?.transformed(by: transform) {
        let bild = UIImage(ciImage: output) // <-- works quite well, image is shown in ImageView
        let bildData = UIImageJPEGRepresentation(bild, 1.0) // <-- produces NIL
        let bildString = bildData?.base64EncodedString(options: .lineLength64Characters)
        // also tried: let bildString: String = String(data: bildData, encoding: .utf8)!
        print("QRCODE-String: \(bildString)") // NIL
    }
}
I also tried UIImagePNGRepresentation() with the same result.
CIImage is filter instructions. CGImage is bitmap data.
UIImage is a wrapper. A UIImage wrapping a CIImage merely contains filter instructions. A UIImage wrapping a CGImage contains bitmap data.
So the problem you are having has nothing to do with NSData. It has to do with UIImage. You are saying:
let bild = UIImage(ciImage: output)
let bildData = UIImageJPEGRepresentation(bild, 1.0)
bild is not a "real" image; it is merely a kind of wrapper for a CIImage. There is no data in the image — all there is is the instructions for the CIImage filter. You can't see anything until you render the image into a bitmap. UIImageView might be able to do that for you, but UIImageJPEGRepresentation cannot.
If you want to save the image as data, first draw the image into an image graphics context, to get the bitmap. Extract the resulting image, and now you have a real UIImage backed by CGImage. You can now save its data, because now it has data.
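A minimal sketch of that render step, assuming `bild` is the CIImage-backed UIImage from the question (UIGraphicsImageRenderer is available from iOS 10):

```swift
import UIKit

// Draw the CIImage-backed UIImage into a graphics context,
// which forces the filter instructions to be rendered to a bitmap.
let renderer = UIGraphicsImageRenderer(size: bild.size)
let bitmapImage = renderer.image { _ in
    bild.draw(in: CGRect(origin: .zero, size: bild.size))
}

// bitmapImage is now backed by a CGImage, so this no longer returns nil.
let bildData = UIImageJPEGRepresentation(bitmapImage, 1.0)
let bildString = bildData?.base64EncodedString(options: .lineLength64Characters)
```

On earlier deployment targets the same effect can be had with UIGraphicsBeginImageContextWithOptions / UIGraphicsGetImageFromCurrentImageContext / UIGraphicsEndImageContext.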
I am doing a simple image file size reduction task but ran into this issue. When using the no-loss option for JPEGs, the file size tripled compared to the original NSData for the image (same resolution). Any suggestions?
Here is the simple code:
let data = someImageData
print(data.length)

let image = UIImage(data: data)!

let newImageData = UIImageJPEGRepresentation(image, 1)
print(newImageData!.length)

let newImageData2 = UIImageJPEGRepresentation(image, 0.8)
print(newImageData2!.length)
and output:
2604768 //original length
8112955 //new length if compression = 1
2588870 //new length if compression = 0.8
It seems I have to take the 0.8 quality loss to get the same data length. Did I miss something? Please help.
Edit: I did some more testing by converting the data to a UIImage and then to UIImageJPEGRepresentation(image, 1); the length of the new data increases with every cycle. But if I use UIImageJPEGRepresentation(image, 0.8), the length of the new data decreases a little (very little) each cycle; however, the compound quality loss should be a concern.
What your code is doing is decompressing/extracting the image into memory with the line let image = UIImage(data: data)!, and then recompressing it as JPEG with let newImageData = UIImageJPEGRepresentation(image, 1) at the highest quality setting (q = 1.0). That's why the image is suddenly so big.
So the moment you get your image as a UIImage from NSData, the decompression/conversion changes the size of the image. The new file's size really will be 8112955 bytes.
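In other words, if the image itself is not being edited, the original compressed bytes should be kept and passed along untouched; re-encoding is only needed after an actual change. A sketch, with `someImageData` and `upload` as hypothetical stand-ins:

```swift
import UIKit

let originalData = someImageData // the JPEG bytes exactly as received

// Displaying is fine: decoding to UIImage only costs memory,
// it does not change the stored file.
let image = UIImage(data: originalData as Data)

// Storing/uploading: send the original bytes as-is.
// Re-encoding with UIImageJPEGRepresentation(image, 1.0) would
// write a brand-new, typically larger JPEG from the bitmap.
upload(originalData) // assumed helper, not part of the original code
```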
I have an image on disk and load it like so:
guard let image = UIImage(contentsOfFile: url.path!) else { return }
self.testImageView.image = image
When loading an image like this from file, will the image be drawn on screen with the correct scale factor?
I am asking because when I do this:
guard let imageData = NSData(contentsOfFile: url.path!) else { return }
guard let image = UIImage(data: imageData, scale: UIScreen.mainScreen().scale) else { return }
self.testImageView.image = image
the image looks way sharper.
As stated in the Apple documentation on Supporting High-Resolution Screens in Views, which can be found in the Drawing and Printing Guide for iOS:
On devices with high-resolution screens, the imageNamed:, imageWithContentsOfFile:, and initWithContentsOfFile: methods automatically looks for a version of the requested image with the #2x modifier in its name. If it finds one, it loads that image instead. If you do not provide a high-resolution version of a given image, the image object still loads a standard-resolution image (if one exists) and scales it during drawing.
Hi, I am trying to compress a UIImage; my code is like so:
var data = UIImageJPEGRepresentation(image, 1.0)
print("Old image size is \(data?.length) bytes")
data = UIImageJPEGRepresentation(image, 0.7)
print("New image size is \(data?.length) bytes")
let optimizedImage = UIImage(data: data!, scale: 1.0)
The print out is:
Old image size is Optional(5951798) bytes
New image size is Optional(1416792) bytes
Later on in the app I upload the UIImage to a web server (it's converted to NSData first). When I check the size, it's the original 5951798 bytes. What could I be doing wrong here?
A UIImage provides a bitmap (uncompressed) version of the image which can be used for rendering the image to screen. The JPEG data of an image can be compressed by reducing the quality (the colour variance in effect) of the image, but this is only in the JPEG data. Once the compressed JPEG is unpacked into a UIImage it again requires the full bitmap size (which is based on the colour format and the image dimensions).
Basically, keep the best quality image you can for display and compress just before upload.
Send this data to the server: data = UIImageJPEGRepresentation(image, 0.7), instead of sending optimizedImage to the server.
You already compressed the image into that data. Converting it back to a UIImage and re-encoding it at upload time just repeats the process (resetting it to the original size).
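The two answers above boil down to: hold the full-quality UIImage for display, and produce the compressed JPEG data once, right before the request goes out. A sketch, where `sendToServer` is a hypothetical network helper:

```swift
import UIKit

// Compress once, immediately before upload, and send the bytes —
// not a UIImage that the upload path will re-encode.
func uploadImage(_ image: UIImage) {
    guard let jpegData = UIImageJPEGRepresentation(image, 0.7) else { return }
    sendToServer(jpegData) // assumed helper, not part of the original code
}
```

This way the on-screen image keeps its quality, and the server receives the ~1.4 MB compressed payload rather than the 5.9 MB full-quality re-encode.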
This is in the context of a swift app trying to save an image to Parse.com backend
In the project, I crop an original image into a square one:
@IBOutlet weak var imageContainer: UIImageView! //Original image container
@IBOutlet weak var dest: UIImageView! //Cropped image container
The original image goes through this:
//Crop image
let croppedImage = imageContainer.image
let ciImage = CIImage(image: croppedImage)
let ciImageCropped = ciImage.imageByCroppingToRect(grabberRectangleCoordinates)
let finalImageToUIImage = UIImage(CIImage: ciImageCropped!)
dest.image = finalImageToUIImage as UIImage!
When I compare both images with println() I get very similar results:
//Cropped
Optional(<UIImage: 0x79e7a0b0> size {850, 850} orientation 0 scale 1.000000)
//Original
Optional(<UIImage: 0x7992d390> size {1280, 850} orientation 0 scale 1.000000)
So you see that both images exist and are similar. I see them both on screen, each in its own container.
Still, when I try to save to parse with:
let user = PFUser.currentUser()
let imageData = UIImagePNGRepresentation(dest.image!)
let imageFile = PFFile(name:"image.png", data:imageData)
var userEntry = PFObject(className:"Photos")
userEntry["imageFile"] = imageFile
//save code below but doesn't matter
It violently crash with error message:
Terminating app due to uncaught exception 'NSInternalInconsistencyException', reason: 'Could not save file data
for image.png : (null)'
What is strange, and blocks me here, is that when in the save code I replace the cropped container dest.image! with the original image container imageContainer.image!, it works fine; yet we saw with println that both images exist and are similar.
My question is: is there something during the crop process that prevents the saving to happen properly ?
(please answer in swift)
Got the solution.
Turns out that converting the image to a CIImage causes the problem, so it must be avoided. Instead, we convert to a CGImage and use another method to crop from the rectangle coordinates. Not only does it not cause a problem when converting back to UIImage (no more nil), it also makes the code simpler:
//Crop image
let croppedImage = imageContainer.image //Original image
//CGImageCreateWithImageInRect is the function to create the cropped image. Must be a CGImage.
let finalCroppedImage : CGImageRef = CGImageCreateWithImageInRect(croppedImage?.CGImage, grabberRectangleCoordinates)
dest.image = UIImage(CGImage: finalCroppedImage)
You can now save dest.image into Parse.com and it won't return nil.
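For reference, the same CGImage-based crop in current Swift syntax looks like the sketch below. One caveat worth hedging: cropping(to:) works in pixel coordinates, so on a @2x/@3x image the rect should be multiplied by the image's scale. The crop rect here is a made-up example value:

```swift
import UIKit

// Hypothetical crop rect in the image's pixel coordinate space.
let cropRect = CGRect(x: 0, y: 0, width: 850, height: 850)

if let sourceImage = imageContainer.image,
   let cgImage = sourceImage.cgImage,
   let croppedCGImage = cgImage.cropping(to: cropRect) {
    // The resulting UIImage is backed by bitmap data,
    // so UIImagePNGRepresentation no longer returns nil.
    dest.image = UIImage(cgImage: croppedCGImage)
}
```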