How to get a thumbnail and the original image from UIImagePickerController? - ios

After capturing a photo from the camera, I was compressing the image (to 400 KB and 1 MB targets); it took almost 3 seconds on an iPhone 6 and less than a second on an iPhone 6s.
Is there any way to get a thumbnail and the original image without doing manual compression?
Code used for image compression:
Extension on UIImage:
extension UIImage {
    // MARK: - UIImage+Resize
    func compressTo(_ expectedSizeInMb: Int) -> Data? {
        let sizeInBytes = expectedSizeInMb * 1024 * 1024
        var needCompress = true
        var imgData: Data?
        var compressingValue: CGFloat = 1.0
        while needCompress && compressingValue > 0.0 {
            if let data = jpegData(compressionQuality: compressingValue) {
                if data.count < sizeInBytes {
                    needCompress = false
                    imgData = data
                } else {
                    compressingValue -= 0.1
                }
            }
        }
        if let data = imgData, data.count < sizeInBytes {
            return data
        }
        return nil
    }
}
Usage:
if let imageData = image.compressTo(1) {
    print(imageData)
}
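If the linear 0.1 steps are what costs the 3 seconds, one alternative (a sketch, not tested on device) is to binary-search the JPEG quality, since the encoded size grows roughly monotonically with quality; this caps the number of jpegData passes at about six:
extension UIImage {
    func compressToFast(_ expectedSizeInMb: Int) -> Data? {
        let sizeInBytes = expectedSizeInMb * 1024 * 1024
        var low: CGFloat = 0.0
        var high: CGFloat = 1.0
        var best: Data?
        for _ in 0..<6 {
            let mid = (low + high) / 2
            guard let data = jpegData(compressionQuality: mid) else { break }
            if data.count <= sizeInBytes {
                best = data   // fits; try a higher quality
                low = mid
            } else {
                high = mid    // too big; lower the quality
            }
        }
        return best
    }
}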

For images saved in the Photos Library:
Try:
let phAsset = info[UIImagePickerController.InfoKey.phAsset] as! PHAsset
let options = PHImageRequestOptions()
options.deliveryMode = .fastFormat
options.isSynchronous = false
// You can change the target size to any CGSize(width:height:) you want.
PHImageManager.default().requestImage(for: phAsset, targetSize: PHImageManagerMaximumSize, contentMode: .default, options: options, resultHandler: { image, _ in
    let thumbnail = image
    // use your thumbnail
})
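To also get the full-quality original (rather than the fast, possibly degraded delivery), a second request with .highQualityFormat should work; a sketch:
let fullOptions = PHImageRequestOptions()
fullOptions.deliveryMode = .highQualityFormat
fullOptions.isSynchronous = false
PHImageManager.default().requestImage(for: phAsset, targetSize: PHImageManagerMaximumSize, contentMode: .default, options: fullOptions, resultHandler: { original, _ in
    // use the original-resolution image
})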
For images captured from the camera, you can check the pixel count without recalculating the data size:
let image = info[UIImagePickerController.InfoKey.originalImage] as! UIImage
// Pixel dimensions are the same for a given device's camera.
let widthPixels = image.size.width * image.scale
let heightPixels = image.size.height * image.scale
let sizeInBytes = 1024 * 1024
var thumbnail: UIImage! = nil
if Int(widthPixels * heightPixels) > sizeInBytes {
    // Assign whatever width and height you need.
    let rect = CGRect(x: 0.0, y: 0.0, width: 100, height: 100)
    UIGraphicsBeginImageContextWithOptions(rect.size, false, 1)
    let context = UIGraphicsGetCurrentContext()
    context?.interpolationQuality = .low
    image.draw(in: rect)
    let resizedImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    thumbnail = resizedImage
} else {
    thumbnail = image
}
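On iOS 10 and later, UIGraphicsImageRenderer does the same downscale with less boilerplate; a sketch (the 100 x 100 target size is just an example):
let targetSize = CGSize(width: 100, height: 100)
let format = UIGraphicsImageRendererFormat()
format.scale = 1 // match the scale-1 context used above
let renderer = UIGraphicsImageRenderer(size: targetSize, format: format)
let quickThumbnail = renderer.image { _ in
    image.draw(in: CGRect(origin: .zero, size: targetSize))
}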

Related

How to set the width and height of an image converted from a base64 string?

I have a question about converting a UIImage to a base64 String.
I resize my image to 768 x 1024 (width x height) and then convert it to a base64 String.
But when I convert that base64 String back to a UIImage, its width and height are not what I resized to before (they are 2304.0 x 3072.0).
How do I make the base64 String encode the image at the correct size?
guard let image = image else { return }
print("image => \(image)")
//image => <UIImage:0x283cce7f0 anonymous {768, 1024}>
guard let base64ImageString = image.toBase64(format: .jpeg(0.2)) else { return }
let dataDecoded: Data = Data(base64Encoded: base64ImageString, options: .ignoreUnknownCharacters)!
let decodedimage = UIImage(data: dataDecoded)
let height = decodedimage?.size.height
let width = decodedimage?.size.width
print("====) \(String(describing: width)) / \(String(describing: height))")
//====) Optional(2304.0) / Optional(3072.0)
public enum ImageFormat {
    case png
    case jpeg(CGFloat)
}
extension UIImage {
    public func toBase64(format: ImageFormat) -> String? {
        var imageData: Data?
        switch format {
        case .png:
            imageData = self.pngData()
        case .jpeg(let compression):
            imageData = self.jpegData(compressionQuality: compression)
        }
        return imageData?.base64EncodedString()
    }
}
Try this way: the code below resizes your image and should run before you convert it to base64. The underlying issue is that your image's 768 x 1024 size is in points at screen scale 3, and jpegData encodes the pixel data (2304 x 3072); decoding produces a scale-1 image, which is why the decoded size reports the pixel dimensions. Redrawing at scale 1.0 makes points and pixels match.
guard let image = image else { return }
print("image => \(image)")
// image => <UIImage:0x283cce7f0 anonymous {768, 1024}>
let desiredSize = CGSize(width: 768, height: 1024)
// Scale 1.0 so that one point equals one pixel in the redrawn image.
UIGraphicsBeginImageContextWithOptions(desiredSize, false, 1.0)
image.draw(in: CGRect(origin: .zero, size: desiredSize))
let resizedImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
// The context can return nil, so unwrap before using it.
guard let resized = resizedImage else { return }
guard let base64ImageString = resized.toBase64(format: .jpeg(0.2)) else { return }
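A quick check (sketch) that the fix worked: decode the string back and confirm the size.
if let data = Data(base64Encoded: base64ImageString),
   let decoded = UIImage(data: data) {
    print(decoded.size) // should now print (768.0, 1024.0)
}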

iOS 13: PHImageManager.default().requestImage and Fusuma

I am using the Fusuma pod in my project, but on iOS 13 there is a bug with images selected from the Gallery.
Specifically, if the selected image comes from the Gallery (on an iPhone running iOS 13, not the simulator), the image dimensions come back as width: 39 and height: 39. The functions below are Fusuma's, located in FusumaViewController.
private func requestImage(with asset: PHAsset, cropRect: CGRect, completion: @escaping (PHAsset, UIImage) -> Void) {
    DispatchQueue.global(qos: .default).async(execute: {
        let options = PHImageRequestOptions()
        options.deliveryMode = .highQualityFormat
        options.isNetworkAccessAllowed = true
        options.normalizedCropRect = cropRect
        options.resizeMode = .exact
        let targetWidth = floor(CGFloat(asset.pixelWidth) * cropRect.width)
        let targetHeight = floor(CGFloat(asset.pixelHeight) * cropRect.height)
        let dimensionW = max(min(targetHeight, targetWidth), 1024 * UIScreen.main.scale)
        let dimensionH = dimensionW * self.getCropHeightRatio()
        let targetSize = CGSize(width: dimensionW, height: dimensionH)
        PHImageManager.default().requestImage(for: asset, targetSize: targetSize, contentMode: .aspectFill, options: options) { result, info in
            guard let result = result else { return }
            DispatchQueue.main.async(execute: {
                completion(asset, result)
            })
        }
    })
}
private func fusumaDidFinishInMultipleMode() {
    guard let view = albumView.imageCropView else { return }
    let normalizedX = view.contentOffset.x / view.contentSize.width
    let normalizedY = view.contentOffset.y / view.contentSize.height
    let normalizedWidth = view.frame.width / view.contentSize.width
    let normalizedHeight = view.frame.height / view.contentSize.height
    let cropRect = CGRect(x: normalizedX,
                          y: normalizedY,
                          width: normalizedWidth,
                          height: normalizedHeight)
    var images = [UIImage]()
    var metaData = [ImageMetadata]()
    for asset in albumView.selectedAssets {
        requestImage(with: asset, cropRect: cropRect) { asset, result in
            images.append(result)
            metaData.append(self.getMetaData(asset: asset))
            if asset == self.albumView.selectedAssets.last {
                self.doDismiss {
                    self.delegate?.fusumaMultipleImageSelected(images, source: self.mode, metaData: metaData)
                }
            }
        }
    }
}
The requestImage(...) function returns an image with an incorrect size.
The documentation says PHImageManager may call the requestImage completion handler multiple times, and only the final call is guaranteed to deliver an image at the requested targetSize.
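If the 39 x 39 results are those intermediate, degraded deliveries, one workaround (an untested sketch, not Fusuma's own fix) is to skip them in the result handler by checking PHImageResultIsDegradedKey:
PHImageManager.default().requestImage(for: asset, targetSize: targetSize, contentMode: .aspectFill, options: options) { result, info in
    // Ignore the low-resolution pass; wait for the final, full-quality callback.
    if let degraded = info?[PHImageResultIsDegradedKey] as? Bool, degraded { return }
    guard let result = result else { return }
    DispatchQueue.main.async {
        completion(asset, result)
    }
}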

Converting Image Data to Varbinary

I'm trying to upload an image file from an iOS device to a remote SQL Server VARBINARY(MAX) column, but I'm having difficulties.
Basically I need to convert the UIImage into whatever I need to insert with:
INSERT INTO table (ImageColumn) VALUES ('byteString here')
The column type is VARBINARY(MAX), and I can print the string via:
let imageData = UIImagePNGRepresentation(value)
let base64String = imageData?.base64EncodedString()
imageBase64 = base64String!
print(imageBase64) // this prints all the string
This even writes 8 to 9 MB of data to the column, but it's always corrupted. I don't know if my SQL query string is wrong or not.
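One likely culprit: a quoted string literal stores the base64 text's character bytes (if the implicit conversion is accepted at all), not the decoded image bytes, so the column contents read back as a corrupted image. If you must inline the value, SQL Server treats a hex literal (0x...) as binary; a sketch (a parameterized query would be the safer choice):
func sqlBinaryLiteral(_ data: Data) -> String {
    // 0x followed by two hex digits per byte, e.g. 0x89504E47... for PNG data.
    return "0x" + data.map { String(format: "%02X", $0) }.joined()
}
// Hypothetical usage:
// let query = "INSERT INTO table (ImageColumn) VALUES (\(sqlBinaryLiteral(imageData)))"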
EDIT: quick fix
let value_resized = value.resizedTo1MB()
let imageData = UIImageJPEGRepresentation(value_resized!, 0)
let strBase64 = imageData?.base64EncodedString(options: .lineLength64Characters)
imageBase64 = strBase64!
// i can insert imageBase64 to sql as a string
extension UIImage {
    func resized(withPercentage percentage: CGFloat) -> UIImage? {
        let canvasSize = CGSize(width: size.width * percentage, height: size.height * percentage)
        UIGraphicsBeginImageContextWithOptions(canvasSize, false, scale)
        defer { UIGraphicsEndImageContext() }
        draw(in: CGRect(origin: .zero, size: canvasSize))
        return UIGraphicsGetImageFromCurrentImageContext()
    }
    func resizedTo1MB() -> UIImage? {
        guard let imageData = UIImagePNGRepresentation(self) else { return nil }
        var resizingImage = self
        var imageSizeKB = Double(imageData.count) / 1000.0
        while imageSizeKB > 1000 {
            guard let resizedImage = resizingImage.resized(withPercentage: 0.9),
                  let imageData = UIImagePNGRepresentation(resizedImage)
            else { return nil }
            resizingImage = resizedImage
            imageSizeKB = Double(imageData.count) / 1000.0
        }
        return resizingImage
    }
}
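Note that resizedTo1MB() measures PNG size while the quick fix uploads JPEG at quality 0, so the loop usually shrinks further than necessary. A variant (a sketch, assuming it lives in the same extension) that measures the JPEG payload you actually upload:
func resizedToJPEG1MB(quality: CGFloat = 0.7) -> UIImage? {
    var resizingImage = self
    // Keep shrinking by 10% until the JPEG representation fits in ~1 MB.
    while let data = UIImageJPEGRepresentation(resizingImage, quality),
          data.count > 1_000_000 {
        guard let smaller = resizingImage.resized(withPercentage: 0.9) else { return nil }
        resizingImage = smaller
    }
    return resizingImage
}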

CIDetector: detected face image is not showing?

I am using CIDetector to detect a face in a UIImage. I get the face rect correctly, but when I crop the image to the detected face rect, nothing shows in my image view.
I have already checked that my image is not nil.
Here is my code:
@IBAction func detectFaceOnImageView(_: UIButton) {
    let image = myImageView.getFaceImage()
    myImageView.image = image
}
extension UIView {
    func getFaceImage() -> UIImage? {
        let faceDetectorOptions: [String: AnyObject] = [CIDetectorAccuracy: CIDetectorAccuracyHigh as AnyObject]
        let faceDetector: CIDetector = CIDetector(ofType: CIDetectorTypeFace, context: nil, options: faceDetectorOptions)!
        let viewScreenShotImage = generateScreenShot(scaleTo: 1.0)
        if viewScreenShotImage.cgImage != nil {
            let sourceImage = CIImage(cgImage: viewScreenShotImage.cgImage!)
            let features = faceDetector.features(in: sourceImage)
            if features.count > 0 {
                var faceBounds = CGRect.zero
                var faceImage: UIImage?
                for feature in features as! [CIFaceFeature] {
                    faceBounds = feature.bounds
                    let faceCroped: CIImage = sourceImage.cropping(to: faceBounds)
                    faceImage = UIImage(ciImage: faceCroped)
                }
                return faceImage
            } else {
                return nil
            }
        } else {
            return nil
        }
    }
    func generateScreenShot(scaleTo: CGFloat = 3.0) -> UIImage {
        let rect = self.bounds
        UIGraphicsBeginImageContextWithOptions(rect.size, false, 0.0)
        let context = UIGraphicsGetCurrentContext()
        self.layer.render(in: context!)
        let screenShotImage = UIGraphicsGetImageFromCurrentImageContext()!
        UIGraphicsEndImageContext()
        let aspectRatio = screenShotImage.size.width / screenShotImage.size.height
        let resizedScreenShotImage = screenShotImage.scaleImage(toSize: CGSize(width: self.bounds.size.height * aspectRatio * scaleTo, height: self.bounds.size.height * scaleTo))
        return resizedScreenShotImage!
    }
}
For more information, I attached screenshots of the values.
Try this:
let faceCroped: CIImage = sourceImage.cropping(to: faceBounds)
// faceImage = UIImage(ciImage: faceCroped)
let cgImage: CGImage = {
    let context = CIContext(options: nil)
    return context.createCGImage(faceCroped, from: faceCroped.extent)!
}()
faceImage = UIImage(cgImage: cgImage)
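This works because UIImage(ciImage:) is not backed by a bitmap: a CIImage is a recipe that still has to be rendered, and UIImageView often displays nothing for it. Rendering through a CIContext produces an actual CGImage that UIImageView can draw.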

OSX UIGraphicsBeginImageContext

I have an iOS app that I am now building for Mac OS X. The code below resizes the image to a width of 1024 and works out the height from the image's aspect ratio. This works on iOS but obviously not on OS X. I am not sure how to create a PNG representation of an NSImage, or what I should use instead of UIGraphicsBeginImageContext. Any suggestions?
Thanks.
var image = myImageView.image
let imageWidth = image?.size.width
let calculationNumber: CGFloat = imageWidth! / 1024.0
let imageHeight = image?.size.height
let newImageHeight = imageHeight! / calculationNumber
UIGraphicsBeginImageContext(CGSizeMake(1024.0, newImageHeight))
image?.drawInRect(CGRectMake(0, 0, 1024.0, newImageHeight))
var resizedImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
let theImageData: NSData = UIImagePNGRepresentation(resizedImage)
imageFile = PFFile(data: theImageData)
You can use:
let image = NSImage(size: newSize)
image.lockFocus()
// draw your stuff here
image.unlockFocus()
if let data = image.TIFFRepresentation {
    let imageRep = NSBitmapImageRep(data: data)
    let imageData = imageRep?.representationUsingType(NSBitmapImageFileType.NSPNGFileType, properties: [:])
    // do something with your PNG data here
}
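Putting that together for the resize in the question, a sketch in current AppKit naming (the function name and default width are assumptions, not the poster's code):
import AppKit

func pngDataResized(_ source: NSImage, toWidth width: CGFloat = 1024.0) -> Data? {
    // Keep the aspect ratio: scale the height by the same factor as the width.
    let ratio = width / source.size.width
    let newSize = NSSize(width: width, height: source.size.height * ratio)
    let resized = NSImage(size: newSize)
    resized.lockFocus()
    source.draw(in: NSRect(origin: .zero, size: newSize))
    resized.unlockFocus()
    guard let tiff = resized.tiffRepresentation,
          let rep = NSBitmapImageRep(data: tiff) else { return nil }
    return rep.representation(using: .png, properties: [:])
}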
My two cents... a quick extension to draw an image with a partially overlapping image:
extension NSImage {
    func mergeWith(anotherImage: NSImage) -> NSImage {
        self.lockFocus()
        // Draw the base image full-size, then the overlay at one-third size near the corner.
        self.draw(in: CGRect(origin: .zero, size: size))
        let frame2 = CGRect(x: 4, y: 4, width: size.width / 3, height: size.height / 3)
        anotherImage.draw(in: frame2)
        self.unlockFocus()
        return self
    }
}
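Usage might look like this (the image names are placeholders):
let base = NSImage(named: "photo")!
let badge = NSImage(named: "badge")!
let combined = base.mergeWith(anotherImage: badge)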
