Issue with saving images to the gallery and displaying them - iOS

Please consider this a question from someone who is not so good at Swift :). I have a button; clicking it opens the image picker, and I am able to select images. In didFinishPickingMediaWithInfo I'm adding the image to an array like so:
var imageArray = [UIImage]()

func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [String : Any]) {
    if let image = info[UIImagePickerControllerOriginalImage] as? UIImage {
        UIImageWriteToSavedPhotosAlbum(image, self, #selector(image(_:didFinishSavingWithError:contextInfo:)), nil)
        imageArray.append(image)
        for i in 0..<imageArray.count {
            imageView.image = imageArray[i]
            imageView.contentMode = .scaleAspectFit
            let xPosition = self.view.frame.width * CGFloat(i)
            imageView.frame = CGRect(x: xPosition, y: 0, width: self.imageScrollView.frame.width, height: self.imageScrollView.frame.height)
            imageScrollView.contentSize.width = imageScrollView.frame.width * CGFloat(i + 1)
            imageScrollView.addSubview(imageView)
        }
    }
    self.dismiss(animated: true, completion: nil)
}
I also have these helper functions:
func saveImage(image: UIImage, path: String) -> Bool {
    let jpgImageData = UIImageJPEGRepresentation(image, 1.0)
    do {
        try jpgImageData?.write(to: URL(fileURLWithPath: path), options: .atomic)
    } catch {
        print(error)
    }
    return (jpgImageData != nil)
}

func getDocumentsURL() -> NSURL {
    let documentsURL = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0]
    return documentsURL as NSURL
}

func fileInDocumentsDirectory(filename: String) -> String {
    let fileURL = getDocumentsURL().appendingPathComponent(filename)
    return fileURL!.path
}
But my issue is this: I don't want to show just the one image picked from the gallery. I want to pick multiple images from the gallery (one at a time), store them in an array, and then display them all in a horizontally scrolling format. For this purpose, I'm setting up a scroll view to hold the images (as shown in didFinishPickingMediaWithInfo).
Maybe I also have to read the images back, but I'm not able to figure out how that can be done. Please help!

Please see this loop, which I have corrected.
You are creating only one UIImageView and adding it to the scroll view repeatedly.
Initialize a new UIImageView on every iteration:
for i in 0..<imageArray.count {
    let imageView = UIImageView() // *** Add this line to your code
    imageView.image = imageArray[i]
    imageView.contentMode = .scaleAspectFit
    let xPosition = self.view.frame.width * CGFloat(i)
    imageView.frame = CGRect(x: xPosition, y: 0, width: self.imageScrollView.frame.width, height: self.imageScrollView.frame.height)
    imageScrollView.contentSize.width = imageScrollView.frame.width * CGFloat(i + 1)
    imageScrollView.addSubview(imageView)
}
Whenever you update your scroll view with new images, don't forget to remove the old ones first.
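A minimal sketch of that cleanup, assuming the image views are the scroll view's only subviews:

```swift
// Remove the previously added image views before laying out the new set.
// If the scroll view holds other subviews too, filter for UIImageView instead.
imageScrollView.subviews.forEach { $0.removeFromSuperview() }
```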

Use this to save the image:
var imageArr: [UIImage] = []

func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [String : Any]) {
    let chosenImage = info[UIImagePickerControllerOriginalImage] as! UIImage
    UIImageWriteToSavedPhotosAlbum(chosenImage, self, #selector(image(_:didFinishSavingWithError:contextInfo:)), nil)
    imageArr.append(chosenImage)
    for i in 0..<imageArr.count {
        let imageView = UIImageView() // create a new image view for each image
        imageView.image = imageArr[i]
        imageView.contentMode = .scaleAspectFit
        let xPosition = self.view.frame.width * CGFloat(i)
        imageView.frame = CGRect(x: xPosition, y: 0, width: self.imageScrollView.frame.width, height: self.imageScrollView.frame.height)
        imageScrollView.contentSize.width = imageScrollView.frame.width * CGFloat(i + 1)
        imageScrollView.addSubview(imageView)
    }
}
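The question also asks how to read a saved image back. Here's a hedged sketch using the question's own fileInDocumentsDirectory(filename:) helper (loadImage is an illustrative name, not part of the original code):

```swift
// Load a previously saved JPEG back from the Documents directory.
// Returns nil if the file does not exist or cannot be decoded.
func loadImage(filename: String) -> UIImage? {
    let path = fileInDocumentsDirectory(filename: filename)
    return UIImage(contentsOfFile: path)
}
```

Usage: save each picked image with saveImage(image:path:) under a known filename, then on the next launch call loadImage(filename:) for each stored filename and append the results to imageArray before laying out the scroll view.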

Related

Strange Issue With Transparent PNG Files From iOS Photo Library

I'm having a very strange issue with transparent PNG files, sourced from the Photos app.
The issue is that I am writing an app that allows the user to bring up an instance of UIImagePickerController, where they select an image, and that image is then added to a UIImageView via its image property.
Pretty straightforward, eh? The issue is when the image in the library is a transparent PNG.
For whatever reason, whenever I try to render the image, it always has the background white.
As far as I can tell, the image is stored in the library as a transparent PNG. When I drag it out, and examine it with an image editor, it's fine. Just what I expect.
But when I extract it programmatically, it has a white background. I can't seem to get it to be transparent.
Here's the code that I use to extract the image (It's a picker callback):
func imagePickerController(_ inPicker: UIImagePickerController, didFinishPickingMediaWithInfo inInfo: [UIImagePickerController.InfoKey: Any]) {
    let info = Dictionary(uniqueKeysWithValues: inInfo.map { key, value in (key.rawValue, value) })
    guard let image = (info[UIImagePickerController.InfoKey.editedImage.rawValue] as? UIImage ?? info[UIImagePickerController.InfoKey.originalImage.rawValue] as? UIImage)?.resizeThisImage(toNewWidth: Self.maximumImageWidthAndHeightInPixels) else { return }
    organization?.icon = image
    inPicker.dismiss(animated: true) {
        DispatchQueue.main.async { [weak self] in
            self?.imageButton?.image = image
            self?.imageButton?.alpha = 1.0
            self?.imageButton?.tintColor = self?.view.tintColor
            self?.updateUI()
        }
    }
}
It's not actually a UIButton. It's a UIImageView, with an attached tap recognizer.
The resizeThisImage() method is in an extension that I wrote for UIImage. It works fine. I've been using it forever:
func resizeThisImage(toNewWidth inNewWidth: CGFloat? = nil, toNewHeight inNewHeight: CGFloat? = nil) -> UIImage? {
    guard nil == inNewWidth,
          nil == inNewHeight else {
        var scaleX: CGFloat = (inNewWidth ?? size.width) / size.width
        var scaleY: CGFloat = (inNewHeight ?? size.height) / size.height
        scaleX = nil == inNewWidth ? scaleY : scaleX
        scaleY = nil == inNewHeight ? scaleX : scaleY
        let destinationSize = CGSize(width: size.width * scaleX, height: size.height * scaleY)
        let destinationRect = CGRect(origin: .zero, size: destinationSize)
        UIGraphicsBeginImageContextWithOptions(destinationSize, false, 0)
        defer { UIGraphicsEndImageContext() } // This makes sure that we get rid of the offscreen context.
        draw(in: destinationRect, blendMode: .normal, alpha: 1)
        return UIGraphicsGetImageFromCurrentImageContext()
    }
    return nil
}
In any case, it happens whether or not I use the resizeThisImage() method. That's not the issue.
Does anyone have any ideas what may be causing the issue?
UPDATE: I implemented @DonMag's example, and here's what I got:
Note that the generated "A" is surrounded by white.
I should note that I'm using a classic storyboard UIKit app (no scene stuff). I don't think that should be an issue, but I'm happy to provide my little sample app. I don't think it's worth creating a GH repo for.
There doesn't seem to be anything wrong with your code, so I have to wonder if your images really, truly have transparency?
Here's a simple example to check. It looks like this when run:
The code creates Red and Blue image views, with .contentMode = .center.
Tapping the "Create" button will generate a UIImage using SF Symbol -- green with transparent background, the size of the Red image view -- and save it to Photos in PNG format with transparency.
Tapping the "Load" button will bring up the image picker. Selecting an image (such as the one just created and saved) will load the image and - using your extension - resize it to 80 x 80 and assign it to the .image property of the Blue image view.
As you can see, the image loaded from the Photo Picker still has its transparency.
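As a quick diagnostic (a sketch, not part of the demo; imageHasAlpha is an illustrative helper), you can check whether a loaded UIImage actually carries an alpha channel:

```swift
// Inspect the backing CGImage's alpha info. The .none / .noneSkipFirst /
// .noneSkipLast cases mean the pixel data has no usable transparency.
func imageHasAlpha(_ image: UIImage) -> Bool {
    guard let alphaInfo = image.cgImage?.alphaInfo else { return false }
    switch alphaInfo {
    case .none, .noneSkipFirst, .noneSkipLast:
        return false
    default:
        return true
    }
}
```

If this returns false for the picked image, the transparency was lost when the image was saved (for example, by saving it as JPEG) rather than when it was loaded.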
Your UIImage extension for resizing
extension UIImage {
    func resizeThisImage(toNewWidth inNewWidth: CGFloat? = nil, toNewHeight inNewHeight: CGFloat? = nil) -> UIImage? {
        guard nil == inNewWidth,
              nil == inNewHeight else {
            var scaleX: CGFloat = (inNewWidth ?? size.width) / size.width
            var scaleY: CGFloat = (inNewHeight ?? size.height) / size.height
            scaleX = nil == inNewWidth ? scaleY : scaleX
            scaleY = nil == inNewHeight ? scaleX : scaleY
            let destinationSize = CGSize(width: size.width * scaleX, height: size.height * scaleY)
            let destinationRect = CGRect(origin: .zero, size: destinationSize)
            UIGraphicsBeginImageContextWithOptions(destinationSize, false, 0)
            defer { UIGraphicsEndImageContext() } // This makes sure that we get rid of the offscreen context.
            draw(in: destinationRect, blendMode: .normal, alpha: 1)
            return UIGraphicsGetImageFromCurrentImageContext()
        }
        return nil
    }
}
UIImage extension to save to Photos in PNG format with transparency
extension UIImage {
    // save to Photos in PNG format with transparency (requires `import Photos`)
    func saveToPhotos(completion: @escaping (_ success: Bool) -> ()) {
        if let pngData = self.pngData() {
            PHPhotoLibrary.shared().performChanges({ () -> Void in
                let creationRequest = PHAssetCreationRequest.forAsset()
                let options = PHAssetResourceCreationOptions()
                creationRequest.addResource(with: PHAssetResourceType.photo, data: pngData, options: options)
            }, completionHandler: { (success, error) -> Void in
                if success == false {
                    if let errorString = error?.localizedDescription {
                        print("Photo could not be saved: \(errorString)")
                    }
                    completion(false)
                } else {
                    print("Photo saved!")
                    completion(true)
                }
            })
        } else {
            completion(false)
        }
    }
}
An example view controller that uses (essentially) your imagePickerController func for loading a photo:
class TestImageViewController: UIViewController, UIImagePickerControllerDelegate, UINavigationControllerDelegate {

    var imgViewA: UIImageView = UIImageView()
    var imgViewB: UIImageView = UIImageView()

    override func viewDidLoad() {
        super.viewDidLoad()

        let vStack = UIStackView()
        vStack.axis = .vertical
        vStack.spacing = 20

        let btnStack = UIStackView()
        btnStack.axis = .horizontal
        btnStack.distribution = .fillEqually
        btnStack.spacing = 20

        let btnCreate = UIButton()
        let btnLoad = UIButton()
        btnCreate.setTitle("Create", for: [])
        btnLoad.setTitle("Load", for: [])

        [btnCreate, btnLoad].forEach { b in
            b.setTitleColor(.white, for: .normal)
            b.setTitleColor(.lightGray, for: .highlighted)
            b.backgroundColor = UIColor(red: 0.0, green: 0.5, blue: 0.75, alpha: 1.0)
            btnStack.addArrangedSubview(b)
        }

        vStack.translatesAutoresizingMaskIntoConstraints = false
        view.addSubview(vStack)

        [btnStack, imgViewA, imgViewB].forEach { v in
            vStack.addArrangedSubview(v)
        }

        [imgViewA, imgViewB].forEach { v in
            v.contentMode = .center
        }

        let g = view.safeAreaLayoutGuide
        NSLayoutConstraint.activate([
            vStack.centerXAnchor.constraint(equalTo: g.centerXAnchor),
            vStack.centerYAnchor.constraint(equalTo: g.centerYAnchor),
            vStack.widthAnchor.constraint(equalToConstant: 200.0),
            imgViewA.heightAnchor.constraint(equalTo: imgViewA.widthAnchor),
            imgViewB.heightAnchor.constraint(equalTo: imgViewB.widthAnchor),
        ])

        imgViewA.backgroundColor = .red
        imgViewB.backgroundColor = .blue

        btnCreate.addTarget(self, action: #selector(self.createAndSave(_:)), for: .touchUpInside)
        btnLoad.addTarget(self, action: #selector(importPicture(_:)), for: .touchUpInside)
    }

    @objc func createAndSave(_ sender: Any) {
        let w = imgViewA.frame.width
        // create a Green image with transparent background
        if let img = drawSystemImage("a.circle.fill", at: 80, centeredIn: CGSize(width: w, height: w)) {
            imgViewA.image = img
            // save it to Photos in PNG format with transparency
            img.saveToPhotos { (success) in
                if success {
                    // image saved to photos
                    print("saved")
                } else {
                    // image not saved
                    fatalError("save failed")
                }
            }
        }
    }

    // create UIImage from SF Symbol system image
    // at Point Size
    // centered in CGSize
    // will draw symbol in Green on transparent background
    private func drawSystemImage(_ sysName: String, at pointSize: CGFloat, centeredIn size: CGSize) -> UIImage? {
        let cfg = UIImage.SymbolConfiguration(pointSize: pointSize)
        guard let img = UIImage(systemName: sysName, withConfiguration: cfg)?.withTintColor(.green, renderingMode: .alwaysOriginal) else { return nil }
        let x = (size.width - img.size.width) * 0.5
        let y = (size.height - img.size.height) * 0.5
        let renderer = UIGraphicsImageRenderer(size: size)
        return renderer.image { context in
            img.draw(in: CGRect(origin: CGPoint(x: x, y: y), size: img.size))
        }
    }

    @objc func importPicture(_ sender: Any) {
        let picker = UIImagePickerController()
        picker.allowsEditing = true
        picker.delegate = self
        present(picker, animated: true)
    }

    func imagePickerController(_ inPicker: UIImagePickerController, didFinishPickingMediaWithInfo inInfo: [UIImagePickerController.InfoKey: Any]) {
        let info = Dictionary(uniqueKeysWithValues: inInfo.map { key, value in (key.rawValue, value) })
        guard let image = (info[UIImagePickerController.InfoKey.editedImage.rawValue] as? UIImage ?? info[UIImagePickerController.InfoKey.originalImage.rawValue] as? UIImage)?.resizeThisImage(toNewWidth: 80) else { return }
        // organization?.icon = image
        inPicker.dismiss(animated: true) {
            DispatchQueue.main.async { [weak self] in
                self?.imgViewB.image = image
                //self?.imageButton?.image = image
                //self?.imageButton?.alpha = 1.0
                //self?.imageButton?.tintColor = self?.view.tintColor
                //self?.updateUI()
            }
        }
    }
}

Using Vision to scan images from photo library

Is there a way that I can use the Vision framework to scan an existing image from the user's photo library? As in, not taking a new picture using the camera, but just choosing an image that the user already has?
Yes, you can. Adding on to @Zulqarnayn's answer, here's a working example that detects and draws a bounding box around rectangles.
1. Set up the image view where the image will be displayed
@IBOutlet weak var imageView: UIImageView!

@IBAction func pickImage(_ sender: Any) {
    let picker = UIImagePickerController()
    picker.delegate = self
    self.present(picker, animated: true)
}

override func viewDidLoad() {
    super.viewDidLoad()
    imageView.layer.borderWidth = 4
    imageView.layer.borderColor = UIColor.blue.cgColor
    imageView.contentMode = .scaleAspectFill
    imageView.backgroundColor = UIColor.green.withAlphaComponent(0.3)
    imageView.layer.masksToBounds = false /// allow image to overflow, for testing purposes
}
2. Get the image from the image picker
extension ViewController: UIImagePickerControllerDelegate, UINavigationControllerDelegate {
    func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey : Any]) {
        guard let image = info[.originalImage] as? UIImage else { return }
        /// set the imageView's image
        imageView.image = image
        /// start the request & request handler
        detectCard()
        /// dismiss the picker
        dismiss(animated: true)
    }
}
3. Start the vision request
func detectCard() {
    guard let cgImage = imageView.image?.cgImage else { return }
    /// perform on background thread, so the main screen is not frozen
    DispatchQueue.global(qos: .userInitiated).async {
        let request = VNDetectRectanglesRequest { request, error in
            /// this function will be called when the Vision request finishes
            self.handleDetectedRectangle(request: request, error: error)
        }
        request.minimumAspectRatio = 0.0
        request.maximumAspectRatio = 1.0
        request.maximumObservations = 1 /// only look for 1 rectangle
        let imageRequestHandler = VNImageRequestHandler(cgImage: cgImage, orientation: .up)
        do {
            try imageRequestHandler.perform([request])
        } catch let error {
            print("Error: \(error)")
        }
    }
}
4. Get the result from the Vision request
func handleDetectedRectangle(request: VNRequest?, error: Error?) {
    if let results = request?.results {
        if let observation = results.first as? VNRectangleObservation {
            /// get back to the main thread
            DispatchQueue.main.async {
                guard let image = self.imageView.image else { return }
                let convertedRect = self.getConvertedRect(
                    boundingBox: observation.boundingBox,
                    inImage: image.size,
                    containedIn: self.imageView.bounds.size
                )
                self.drawBoundingBox(rect: convertedRect)
            }
        }
    }
}
5. Convert observation.boundingBox to the UIKit coordinates of the image view, then draw a border around the detected rectangle
I explain this more in detail in this answer.
func getConvertedRect(boundingBox: CGRect, inImage imageSize: CGSize, containedIn containerSize: CGSize) -> CGRect {
    let rectOfImage: CGRect
    let imageAspect = imageSize.width / imageSize.height
    let containerAspect = containerSize.width / containerSize.height
    if imageAspect > containerAspect { /// image extends left and right
        let newImageWidth = containerSize.height * imageAspect /// the width of the overflowing image
        let newX = -(newImageWidth - containerSize.width) / 2
        rectOfImage = CGRect(x: newX, y: 0, width: newImageWidth, height: containerSize.height)
    } else { /// image extends top and bottom
        let newImageHeight = containerSize.width * (1 / imageAspect) /// the height of the overflowing image
        let newY = -(newImageHeight - containerSize.height) / 2
        rectOfImage = CGRect(x: 0, y: newY, width: containerSize.width, height: newImageHeight)
    }
    let newOriginBoundingBox = CGRect(
        x: boundingBox.origin.x,
        y: 1 - boundingBox.origin.y - boundingBox.height,
        width: boundingBox.width,
        height: boundingBox.height
    )
    var convertedRect = VNImageRectForNormalizedRect(newOriginBoundingBox, Int(rectOfImage.width), Int(rectOfImage.height))
    /// add the margins
    convertedRect.origin.x += rectOfImage.origin.x
    convertedRect.origin.y += rectOfImage.origin.y
    return convertedRect
}
/// draw an orange frame around the detected rectangle, on top of the image view
func drawBoundingBox(rect: CGRect) {
    let uiView = UIView(frame: rect)
    imageView.addSubview(uiView)
    uiView.backgroundColor = UIColor.clear
    uiView.layer.borderColor = UIColor.orange.cgColor
    uiView.layer.borderWidth = 3
}
Result | Demo repo
Input image
Result
Yes, you can. First, create an instance of UIImagePickerController and present it:
let picker = UIImagePickerController()
picker.delegate = self
picker.sourceType = .photoLibrary
present(picker, animated: true, completion: nil)
Then implement the delegate method to take the desired image:
extension YourViewController: UIImagePickerControllerDelegate, UINavigationControllerDelegate {
    func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey : Any]) {
        if let pickedImage = info[.originalImage] as? UIImage {
            // here start your request & request handler
        }
        picker.dismiss(animated: true, completion: nil)
    }
}

How to crop an area from the camera

I have drawn a rectangle within the native camera view. I'm trying to use it as a guide or crop area to capture only the business card image, but I'm unable to crop the image from the camera's native view to the drawn rectangle.
extension UIScreen {
    func fullScreenSquare() -> CGRect {
        var hw: CGFloat = 0
        var isLandscape = false
        if UIScreen.main.bounds.size.width < UIScreen.main.bounds.size.height {
            hw = UIScreen.main.bounds.size.width
        } else {
            isLandscape = true
            hw = UIScreen.main.bounds.size.height
        }
        var x: CGFloat = 0
        var y: CGFloat = 0
        if isLandscape {
            x = (UIScreen.main.bounds.size.width / 2) - (hw / 2)
        } else {
            y = (UIScreen.main.bounds.size.height / 2) - (hw / 2)
        }
        return CGRect(x: x, y: y, width: hw, height: hw / 3 * 2)
    }

    func isLandscape() -> Bool {
        return UIScreen.main.bounds.size.width > UIScreen.main.bounds.size.height
    }
}
func guideForCameraOverlay() -> UIView {
    let guide = UIView(frame: UIScreen.main.fullScreenSquare())
    guide.backgroundColor = UIColor.clear
    guide.layer.borderWidth = 4
    guide.layer.borderColor = UIColor.orange.cgColor
    guide.isUserInteractionEnabled = false
    return guide
}
func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey : Any]) {
    if setPhoto == 1 {
        if let image = info[UIImagePickerController.InfoKey.editedImage] as? UIImage {
            let size = CGSize(width: 600, height: 400)
            //let imageCroped = image.cgImage?.cropping(to: size)
            let imageCroped = image.crop(to: size)
            frontPhotoImageView.image = UIImage(cgImage: imageCroped as! CGImage)
            setPhoto = 0
            frontPhotoImage.setTitle("", for: UIControl.State.normal)
        } else {
            // Error message
        }
        self.dismiss(animated: true, completion: nil)
    }
    if setPhoto == 2 {
        if let image = info[UIImagePickerController.InfoKey.editedImage] as? UIImage {
            backPhotoImageView.image = image
            setPhoto = 0
            backPhotoImage.setTitle("", for: UIControl.State.normal)
        } else {
            // Error message
        }
        self.dismiss(animated: true, completion: nil)
    }
}
I expect to get the image from within the drawn rectangle, but that does not happen.
I expect to crop the image to the area inside the orange rectangle in this image.
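No answer appears in this excerpt. One common approach (a sketch under assumptions: a full-screen, aspect-filling camera preview, and the guide rect from guideForCameraOverlay(); cropImage is an illustrative helper) is to map the on-screen guide rect into the captured image's pixel space before cropping:

```swift
// Map a guide rect given in screen points into the image's pixel space,
// then crop. Assumes the captured image fills the screen bounds exactly.
func cropImage(_ image: UIImage, toGuide guide: CGRect, screenBounds: CGRect) -> UIImage? {
    let scaleX = image.size.width * image.scale / screenBounds.width
    let scaleY = image.size.height * image.scale / screenBounds.height
    let cropRect = CGRect(x: guide.origin.x * scaleX,
                          y: guide.origin.y * scaleY,
                          width: guide.width * scaleX,
                          height: guide.height * scaleY)
    guard let cgCropped = image.cgImage?.cropping(to: cropRect) else { return nil }
    return UIImage(cgImage: cgCropped, scale: image.scale, orientation: image.imageOrientation)
}
```

In practice the camera preview overflows the screen (aspect fill) and the captured image carries an EXIF orientation, so both need extra handling before this mapping is exact.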

iOS Swift 3: using a property observer to update an image from UIImagePicker

Hi, I'm using iOS Swift 3 to let the user pick images from the library or an album. I have a UIImage variable. How can I use a property observer to update the UIImage when the user finishes picking an image?
Something like:
var image: UIImage = {
    didSet ...
}
Currently I'm doing this
func show(image: UIImage) {
    imageView.image = image
    imageView.isHidden = false
    imageView.frame = CGRect(x: 10, y: 10, width: 260, height: 260)
    addPhotoLabel.isHidden = true
}

func imagePickerController(_ picker: UIImagePickerController,
                           didFinishPickingMediaWithInfo info: [String : Any]) {
    image = info[UIImagePickerControllerEditedImage] as? UIImage
    if let theImage = image {
        show(image: theImage)
    }
    dismiss(animated: true, completion: nil)
}
I'm thinking of using a property observer to improve this approach. Any help is much appreciated. Thanks!
If you really want to update the image view any time the image property is set, then simply put all of the code in your show method in the didSet block for the image property.
var image: UIImage? {
    didSet {
        imageView.image = image
        imageView.isHidden = false
        imageView.frame = CGRect(x: 10, y: 10, width: 260, height: 260)
        addPhotoLabel.isHidden = true
    }
}

func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [String : Any]) {
    if let theImage = info[UIImagePickerControllerEditedImage] as? UIImage {
        image = theImage
    }
    picker.dismiss(animated: true, completion: nil)
}

crop UIImage to the shape of a rectangular overlay - Swift

I am pretty new to Swift.
I intend to draw a rectangle and capture the image in the rectangular overlay.
I drew a transparent image over the camera.
let imagePicker = UIImagePickerController()
imagePicker.delegate = self
imagePicker.sourceType = UIImagePickerControllerSourceType.Camera
imagePicker.mediaTypes = [kUTTypeImage as String]
imagePicker.allowsEditing = true

var overlayedImageView: UIImageView = UIImageView(image: UIImage(named: "transparent.png"))
var cgRect: CGRect = CGRect(x: 200, y: 50, width: 100, height: 400)
overlayedImageView.frame = cgRect
imagePicker.cameraOverlayView = overlayedImageView
self.presentViewController(imagePicker, animated: true, completion: nil)
This is what I do after I capture the image:
func imagePickerController(picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [String : AnyObject]) {
    let mediaType = info[UIImagePickerControllerMediaType] as! String
    self.dismissViewControllerAnimated(true, completion: nil)
    if mediaType == (kUTTypeImage as String) {
        let image = info[UIImagePickerControllerOriginalImage] as! UIImage
        imageView.image = cropToBoundsNew(image)
        if (newMedia == true) {
            UIImageWriteToSavedPhotosAlbum(image, self,
                "image:didFinishSavingWithError:contextInfo:", nil)
        } else if mediaType == (kUTTypeMovie as String) {
            // Code to support video here
        }
    }
}
My problem is that the quality of the image diminishes:
func cropToBoundsNew(image: UIImage) -> UIImage {
    let contextImage: UIImage = UIImage(CGImage: image.CGImage!)
    let contextSize: CGSize = contextImage.size
    let rect: CGRect = CGRectMake(200, 50, 100, 400)
    let imageRef: CGImageRef = CGImageCreateWithImageInRect(contextImage.CGImage, rect)!
    let image: UIImage = UIImage(CGImage: imageRef, scale: image.scale, orientation: image.imageOrientation)
    return image
}
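One likely cause (an assumption, since the full setup isn't shown): CGImageCreateWithImageInRect works in pixel coordinates, while the rect here is given in screen points, so on a Retina device the crop covers only part of the intended area and the small result looks degraded when scaled up for display. A sketch of a scale-aware version in modern Swift (cropScaleAware is an illustrative name):

```swift
// Crop in pixel space by multiplying the point-based rect by the image's scale,
// carrying the scale and orientation through to the result.
func cropScaleAware(_ image: UIImage, rectInPoints: CGRect) -> UIImage? {
    let pixelRect = CGRect(x: rectInPoints.origin.x * image.scale,
                           y: rectInPoints.origin.y * image.scale,
                           width: rectInPoints.width * image.scale,
                           height: rectInPoints.height * image.scale)
    guard let cgCropped = image.cgImage?.cropping(to: pixelRect) else { return nil }
    return UIImage(cgImage: cgCropped, scale: image.scale, orientation: image.imageOrientation)
}
```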
