Is there a way that I can use the Vision framework to scan an existing image from the user's photo library? As in, not taking a new picture using the camera, but just choosing an image that the user already has?
Yes, you can. Adding on to @Zulqarnayn's answer, here's a working example that detects a rectangle and draws a bounding box around it.
1. Set up the image view where the image will be displayed
@IBOutlet weak var imageView: UIImageView!
@IBAction func pickImage(_ sender: Any) {
let picker = UIImagePickerController()
picker.delegate = self
self.present(picker, animated: true)
}
override func viewDidLoad() {
super.viewDidLoad()
imageView.layer.borderWidth = 4
imageView.layer.borderColor = UIColor.blue.cgColor
imageView.contentMode = .scaleAspectFill
imageView.backgroundColor = UIColor.green.withAlphaComponent(0.3)
imageView.layer.masksToBounds = false /// allow image to overflow, for testing purposes
}
2. Get the image from the image picker
extension ViewController: UIImagePickerControllerDelegate, UINavigationControllerDelegate {
func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey : Any]) {
guard let image = info[.originalImage] as? UIImage else { return }
/// set the imageView's image
imageView.image = image
/// start the request & request handler
detectCard()
/// dismiss the picker
dismiss(animated: true)
}
}
3. Start the vision request
func detectCard() {
guard let cgImage = imageView.image?.cgImage else { return }
/// perform on background thread, so the main screen is not frozen
DispatchQueue.global(qos: .userInitiated).async {
let request = VNDetectRectanglesRequest { request, error in
/// this function will be called when the Vision request finishes
self.handleDetectedRectangle(request: request, error: error)
}
request.minimumAspectRatio = 0.0
request.maximumAspectRatio = 1.0
request.maximumObservations = 1 /// only look for 1 rectangle
let imageRequestHandler = VNImageRequestHandler(cgImage: cgImage, orientation: .up)
do {
try imageRequestHandler.perform([request])
} catch let error {
print("Error: \(error)")
}
}
}
4. Get the result from the Vision request
func handleDetectedRectangle(request: VNRequest?, error: Error?) {
if let results = request?.results {
if let observation = results.first as? VNRectangleObservation {
/// get back to the main thread
DispatchQueue.main.async {
guard let image = self.imageView.image else { return }
let convertedRect = self.getConvertedRect(
boundingBox: observation.boundingBox,
inImage: image.size,
containedIn: self.imageView.bounds.size
)
self.drawBoundingBox(rect: convertedRect)
}
}
}
}
5. Convert observation.boundingBox to the UIKit coordinates of the image view, then draw a border around the detected rectangle
I explain this more in detail in this answer.
func getConvertedRect(boundingBox: CGRect, inImage imageSize: CGSize, containedIn containerSize: CGSize) -> CGRect {
let rectOfImage: CGRect
let imageAspect = imageSize.width / imageSize.height
let containerAspect = containerSize.width / containerSize.height
if imageAspect > containerAspect { /// image extends left and right
let newImageWidth = containerSize.height * imageAspect /// the width of the overflowing image
let newX = -(newImageWidth - containerSize.width) / 2
rectOfImage = CGRect(x: newX, y: 0, width: newImageWidth, height: containerSize.height)
} else { /// image extends top and bottom
let newImageHeight = containerSize.width * (1 / imageAspect) /// the height of the overflowing image
let newY = -(newImageHeight - containerSize.height) / 2
rectOfImage = CGRect(x: 0, y: newY, width: containerSize.width, height: newImageHeight)
}
let newOriginBoundingBox = CGRect(
x: boundingBox.origin.x,
y: 1 - boundingBox.origin.y - boundingBox.height,
width: boundingBox.width,
height: boundingBox.height
)
var convertedRect = VNImageRectForNormalizedRect(newOriginBoundingBox, Int(rectOfImage.width), Int(rectOfImage.height))
/// add the margins
convertedRect.origin.x += rectOfImage.origin.x
convertedRect.origin.y += rectOfImage.origin.y
return convertedRect
}
/// draw an orange frame around the detected rectangle, on top of the image view
func drawBoundingBox(rect: CGRect) {
let uiView = UIView(frame: rect)
imageView.addSubview(uiView)
uiView.backgroundColor = UIColor.clear
uiView.layer.borderColor = UIColor.orange.cgColor
uiView.layer.borderWidth = 3
}
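The aspect-fill geometry computed in `getConvertedRect` can be sanity-checked in isolation. Here is a pure-math sketch of the same computation (hypothetical names, plain Doubles instead of CGRect/CGSize, so it runs without UIKit):

```swift
/// Minimal stand-in for CGRect so the math is checkable anywhere.
struct Frame: Equatable {
    var x: Double, y: Double, w: Double, h: Double
}

/// Frame of an image rendered with .scaleAspectFill inside a container.
/// The overflowing dimension is centered, so x or y goes negative.
func aspectFillFrame(imageW: Double, imageH: Double,
                     containerW: Double, containerH: Double) -> Frame {
    let imageAspect = imageW / imageH
    let containerAspect = containerW / containerH
    if imageAspect > containerAspect {
        // image is wider than the container: it overflows left and right
        let newW = containerH * imageAspect
        return Frame(x: -(newW - containerW) / 2, y: 0, w: newW, h: containerH)
    } else {
        // image is taller than the container: it overflows top and bottom
        let newH = containerW / imageAspect
        return Frame(x: 0, y: -(newH - containerH) / 2, w: containerW, h: newH)
    }
}
```

For example, a 200×100 image in a 100×100 container yields a frame of width 200 starting at x = -50, which matches the `newX` offset computed in the answer above.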
Result | Demo repo
Input image
Result
Yes, you can. First, create an instance of UIImagePickerController and present it:
let picker = UIImagePickerController()
picker.delegate = self
picker.sourceType = .photoLibrary
present(picker, animated: true, completion: nil)
Then implement the delegate method to take the desired image:
extension YourViewController: UIImagePickerControllerDelegate, UINavigationControllerDelegate {
func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey : Any]) {
if let pickedImage = info[.originalImage] as? UIImage {
/// start your request & request handler here
}
picker.dismiss(animated: true, completion: nil)
}
}
Related
I am using the gyroscope to determine whether my iPad is perpendicular (attitude of 88 to 92 degrees) or not.
If it is, the user can take a picture.
I have something like a traffic light (red or green) to show whether taking a picture is permitted, but I cannot disable the capture button when the light is red.
Any help would be appreciated.
Here is my code:
@IBAction func camera1(_ sender: Any) {
var imageView : UIImageView
imageView = UIImageView(frame:CGRect(x:10, y:10, width:50, height:50));
let imagePicker = UIImagePickerController()
imagePicker.delegate = self
imagePicker.allowsEditing = true
imagePicker.sourceType = .camera
imagePicker.cameraCaptureMode = .photo
imagePicker.cameraOverlayView = imageView
imagePicker.cameraViewTransform = imagePicker.cameraViewTransform.scaledBy(x: 3, y: 3);
//Gyroscop
func myGyroscope() {
motion.deviceMotionUpdateInterval = 0.2
motion.startDeviceMotionUpdates(to: OperationQueue()) { (motion, error) -> Void in
if let attitude = motion?.attitude {
// print(attitude.roll * 180 / Double.pi)
DispatchQueue.main.async{
if (((attitude.roll * 180 / Double.pi) * -1) > 88 && ((attitude.roll * 180 / Double.pi) * -1) < 92 ){
imageView.image = #imageLiteral(resourceName: "GREEN_Light")//Take picture is permitted
} else{
imageView.image = #imageLiteral(resourceName: "Red_Light")//Take picture is not permitted
}
}
}
}
}
myGyroscope()
present(imagePicker, animated: true, completion: nil)
}
func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey : Any]) {
if let pickedImage = info[UIImagePickerController.InfoKey.originalImage] as? UIImage {
frontpic.contentMode = .scaleAspectFit
if (picker.sourceType.rawValue == 1){//if camera
frontpic.image = pickedImage.cropedToRatio(ratio: 0.33)
} else{//if album
frontpic.image = pickedImage
}
}
dismiss(animated: true, completion: nil)
}
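Not part of the original question, but the roll check inside the motion handler can be factored into a pure predicate (hypothetical name), which makes the 88 to 92 degree window easy to test and to reuse for enabling or disabling a custom capture button:

```swift
/// True when the device roll (in radians, as reported by CMAttitude) falls
/// inside the 88-92 degree window used in the question. The question negates
/// the roll before comparing, so this does the same.
func captureAllowed(rollRadians: Double) -> Bool {
    let degrees = -(rollRadians * 180 / Double.pi)
    return degrees > 88 && degrees < 92
}
```

In the `startDeviceMotionUpdates` handler you could then drive both the traffic-light image and, with a custom overlay button, its `isEnabled` flag from this one predicate.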
You cannot interfere with the built-in camera controls. If you don't like the way they behave, remove them and substitute your own interface as part of the cameraOverlayView.
I'm having a very strange issue with transparent PNG files, sourced from the Photos app.
I am writing an app that allows the user to bring up an instance of UIImagePickerController, select an image, and have that image added to a UIImageView via its image property.
Pretty straightforward, eh? The issue is when the image in the library is a transparent PNG.
For whatever reason, whenever I try to render the image, it always has a white background.
As far as I can tell, the image is stored in the library as a transparent PNG. When I drag it out, and examine it with an image editor, it's fine. Just what I expect.
But when I extract it programmatically, it has a white background. I can't seem to get it to be transparent.
Here's the code that I use to extract the image (It's a picker callback):
func imagePickerController(_ inPicker: UIImagePickerController, didFinishPickingMediaWithInfo inInfo: [UIImagePickerController.InfoKey: Any]) {
let info = Dictionary(uniqueKeysWithValues: inInfo.map { key, value in (key.rawValue, value) })
guard let image = (info[UIImagePickerController.InfoKey.editedImage.rawValue] as? UIImage ?? info[UIImagePickerController.InfoKey.originalImage.rawValue] as? UIImage)?.resizeThisImage(toNewWidth: Self.maximumImageWidthAndHeightInPixels) else { return }
organization?.icon = image
inPicker.dismiss(animated: true) { DispatchQueue.main.async { [weak self] in
self?.imageButton?.image = image
self?.imageButton?.alpha = 1.0
self?.imageButton?.tintColor = self?.view.tintColor
self?.updateUI()
}
}
}
It's not actually a UIButton. It's a UIImageView, with an attached tap recognizer.
The resizeThisImage() method is in an extension that I wrote for UIImage. It works fine. I've been using it forever:
func resizeThisImage(toNewWidth inNewWidth: CGFloat? = nil, toNewHeight inNewHeight: CGFloat? = nil) -> UIImage? {
guard nil == inNewWidth,
nil == inNewHeight else {
var scaleX: CGFloat = (inNewWidth ?? size.width) / size.width
var scaleY: CGFloat = (inNewHeight ?? size.height) / size.height
scaleX = nil == inNewWidth ? scaleY : scaleX
scaleY = nil == inNewHeight ? scaleX : scaleY
let destinationSize = CGSize(width: size.width * scaleX, height: size.height * scaleY)
let destinationRect = CGRect(origin: .zero, size: destinationSize)
UIGraphicsBeginImageContextWithOptions(destinationSize, false, 0)
defer { UIGraphicsEndImageContext() } // This makes sure that we get rid of the offscreen context.
draw(in: destinationRect, blendMode: .normal, alpha: 1)
return UIGraphicsGetImageFromCurrentImageContext()
}
return nil
}
In any case, it happens whether or not I use the resizeThisImage() method. That's not the issue.
Does anyone have any ideas what may be causing the issue?
UPDATE: I implemented @DonMag's example, and here's what I got:
Note that the generated "A" is surrounded by white.
I should note that I'm using a classic storyboard UIKit app (no scene stuff). I don't think that should be an issue, but I'm happy to provide my little sample app. I don't think it's worth creating a GH repo for.
There doesn't seem to be anything wrong with your code, so I have to wonder if your images really, truly have transparency?
Here's a simple example to check. It looks like this when run:
The code creates Red and Blue image views, with .contentMode = .center.
Tapping the "Create" button will generate a UIImage using SF Symbol -- green with transparent background, the size of the Red image view -- and save it to Photos in PNG format with transparency.
Tapping the "Load" button will bring up the image picker. Selecting an image (such as the one just created and saved) will load the image and - using your extension - resize it to 80 x 80 and assign it to the .image property of the Blue image view.
As you can see, the image loaded from the Photo Picker still has its transparency.
Your UIImage extension for resizing
extension UIImage {
func resizeThisImage(toNewWidth inNewWidth: CGFloat? = nil, toNewHeight inNewHeight: CGFloat? = nil) -> UIImage? {
guard nil == inNewWidth,
nil == inNewHeight else {
var scaleX: CGFloat = (inNewWidth ?? size.width) / size.width
var scaleY: CGFloat = (inNewHeight ?? size.height) / size.height
scaleX = nil == inNewWidth ? scaleY : scaleX
scaleY = nil == inNewHeight ? scaleX : scaleY
let destinationSize = CGSize(width: size.width * scaleX, height: size.height * scaleY)
let destinationRect = CGRect(origin: .zero, size: destinationSize)
UIGraphicsBeginImageContextWithOptions(destinationSize, false, 0)
defer { UIGraphicsEndImageContext() } // This makes sure that we get rid of the offscreen context.
draw(in: destinationRect, blendMode: .normal, alpha: 1)
return UIGraphicsGetImageFromCurrentImageContext()
}
return nil
}
}
UIImage extension to save to Photos in PNG format with transparency
extension UIImage {
// save to Photos in PNG format with transparency
func saveToPhotos(completion: @escaping (_ success: Bool) -> ()) {
if let pngData = self.pngData() {
PHPhotoLibrary.shared().performChanges({ () -> Void in
let creationRequest = PHAssetCreationRequest.forAsset()
let options = PHAssetResourceCreationOptions()
creationRequest.addResource(with: PHAssetResourceType.photo, data: pngData, options: options)
}, completionHandler: { (success, error) -> Void in
if success == false {
if let errorString = error?.localizedDescription {
print("Photo could not be saved: \(errorString))")
}
completion(false)
} else {
print("Photo saved!")
completion(true)
}
})
} else {
completion(false)
}
}
}
Example view controller uses (essentially) your func imagePickerController for loading a photo
class TestImageViewController: UIViewController, UIImagePickerControllerDelegate, UINavigationControllerDelegate {
var imgViewA: UIImageView = UIImageView()
var imgViewB: UIImageView = UIImageView()
override func viewDidLoad() {
super.viewDidLoad()
let vStack = UIStackView()
vStack.axis = .vertical
vStack.spacing = 20
let btnStack = UIStackView()
btnStack.axis = .horizontal
btnStack.distribution = .fillEqually
btnStack.spacing = 20
let btnCreate = UIButton()
let btnLoad = UIButton()
btnCreate.setTitle("Create", for: [])
btnLoad.setTitle("Load", for: [])
[btnCreate, btnLoad].forEach { b in
b.setTitleColor(.white, for: .normal)
b.setTitleColor(.lightGray, for: .highlighted)
b.backgroundColor = UIColor(red: 0.0, green: 0.5, blue: 0.75, alpha: 1.0)
btnStack.addArrangedSubview(b)
}
vStack.translatesAutoresizingMaskIntoConstraints = false
view.addSubview(vStack)
[btnStack, imgViewA, imgViewB].forEach { v in
vStack.addArrangedSubview(v)
}
[imgViewA, imgViewB].forEach { v in
v.contentMode = .center
}
let g = view.safeAreaLayoutGuide
NSLayoutConstraint.activate([
vStack.centerXAnchor.constraint(equalTo: g.centerXAnchor),
vStack.centerYAnchor.constraint(equalTo: g.centerYAnchor),
vStack.widthAnchor.constraint(equalToConstant: 200.0),
imgViewA.heightAnchor.constraint(equalTo: imgViewA.widthAnchor),
imgViewB.heightAnchor.constraint(equalTo: imgViewB.widthAnchor),
])
imgViewA.backgroundColor = .red
imgViewB.backgroundColor = .blue
btnCreate.addTarget(self, action: #selector(self.createAndSave(_:)), for: .touchUpInside)
btnLoad.addTarget(self, action: #selector(importPicture(_:)), for: .touchUpInside)
}
@objc func createAndSave(_ sender: Any) {
let w = imgViewA.frame.width
// create a Green image with transparent background
if let img = drawSystemImage("a.circle.fill", at: 80, centeredIn: CGSize(width: w, height: w)) {
imgViewA.image = img
// save it to Photos in PNG format with transparency
img.saveToPhotos { (success) in
if success {
// image saved to photos
print("saved")
}
else {
// image not saved
fatalError("save failed")
}
}
}
}
// create UIImage from SF Symbol system image
// at Point Size
// centered in CGSize
// will draw symbol in Green on transparent background
private func drawSystemImage(_ sysName: String, at pointSize: CGFloat, centeredIn size: CGSize) -> UIImage? {
let cfg = UIImage.SymbolConfiguration(pointSize: pointSize)
guard let img = UIImage(systemName: sysName, withConfiguration: cfg)?.withTintColor(.green, renderingMode: .alwaysOriginal) else { return nil }
let x = (size.width - img.size.width) * 0.5
let y = (size.height - img.size.height) * 0.5
let renderer = UIGraphicsImageRenderer(size: size)
return renderer.image { context in
img.draw(in: CGRect(origin: CGPoint(x: x, y: y), size: img.size))
}
}
@objc func importPicture(_ sender: Any) {
let picker = UIImagePickerController()
picker.allowsEditing = true
picker.delegate = self
present(picker, animated: true)
}
func imagePickerController(_ inPicker: UIImagePickerController, didFinishPickingMediaWithInfo inInfo: [UIImagePickerController.InfoKey: Any]) {
let info = Dictionary(uniqueKeysWithValues: inInfo.map { key, value in (key.rawValue, value) })
guard let image = (info[UIImagePickerController.InfoKey.editedImage.rawValue] as? UIImage ?? info[UIImagePickerController.InfoKey.originalImage.rawValue] as? UIImage)?.resizeThisImage(toNewWidth: 80) else { return }
// organization?.icon = image
inPicker.dismiss(animated: true) {
DispatchQueue.main.async { [weak self] in
self?.imgViewB.image = image
//self?.imageButton?.image = image
//self?.imageButton?.alpha = 1.0
//self?.imageButton?.tintColor = self?.view.tintColor
//self?.updateUI()
}
}
}
}
I have drawn a rectangle within the native camera view, and I'm trying to use it as a guide or crop area to capture only the business card image. However, I'm unable to crop the image from the camera's native view to the drawn rectangle.
extension UIScreen {
func fullScreenSquare() -> CGRect {
var hw:CGFloat = 0
var isLandscape = false
if UIScreen.main.bounds.size.width < UIScreen.main.bounds.size.height {
hw = UIScreen.main.bounds.size.width
}
else {
isLandscape = true
hw = UIScreen.main.bounds.size.height
}
var x:CGFloat = 0
var y:CGFloat = 0
if isLandscape {
x = (UIScreen.main.bounds.size.width / 2) - (hw / 2)
}
else {
y = (UIScreen.main.bounds.size.height / 2) - (hw / 2)
}
return CGRect(x: x, y: y, width: hw, height: hw/3*2)
}
func isLandscape() -> Bool {
return UIScreen.main.bounds.size.width > UIScreen.main.bounds.size.height
}
}
func guideForCameraOverlay() -> UIView {
let guide = UIView(frame: UIScreen.main.fullScreenSquare())
guide.backgroundColor = UIColor.clear
guide.layer.borderWidth = 4
guide.layer.borderColor = UIColor.orange.cgColor
guide.isUserInteractionEnabled = false
return guide
}
func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey : Any]) {
if setPhoto == 1 {
if let image = info[UIImagePickerController.InfoKey.editedImage] as? UIImage{
let size = CGSize(width: 600, height: 400)
//let imageCroped = image.cgImage?.cropping(to: size)
let imageCroped = image.crop(to: size)
frontPhotoImageView.image = UIImage(cgImage: imageCroped as! CGImage)
setPhoto = 0
frontPhotoImage.setTitle("", for: UIControl.State.normal)
}
else {
// Error message
}
self.dismiss(animated: true, completion: nil)
}
if setPhoto == 2 {
if let image = info[UIImagePickerController.InfoKey.editedImage] as? UIImage{
backPhotoImageView.image = image
setPhoto = 0
backPhotoImage.setTitle("", for: UIControl.State.normal)
}
else {
// Error message
}
self.dismiss(animated: true, completion: nil)
}
}
I expect to get the image from within the drawn rectangle, but that does not happen.
I want to crop the image to the area inside the orange rectangle shown in this image.
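This question has no fix in the thread, but the missing step is mapping the overlay guide (given in screen points) into the captured photo's pixel space before cropping. Here is a minimal sketch of that mapping, assuming the preview fills the screen at the photo's aspect ratio (a hypothetical helper with plain Doubles so the math is checkable; on iOS you would build a CGRect from the result and pass it to `image.cgImage?.cropping(to:)`):

```swift
/// Scales a guide rectangle given in screen points into image pixel
/// coordinates. Assumes the camera preview and the captured photo share
/// the same aspect ratio and origin.
func cropRectInImage(guideX: Double, guideY: Double,
                     guideWidth: Double, guideHeight: Double,
                     screenWidth: Double, screenHeight: Double,
                     imageWidth: Double, imageHeight: Double)
    -> (x: Double, y: Double, width: Double, height: Double) {
    let scaleX = imageWidth / screenWidth
    let scaleY = imageHeight / screenHeight
    return (guideX * scaleX, guideY * scaleY,
            guideWidth * scaleX, guideHeight * scaleY)
}
```

In practice the camera preview is aspect-filled, so the photo may extend past the screen edges; the offsets from that overflow would have to be added before scaling, along the lines of the `getConvertedRect` logic in the first answer above.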
I have an AR session which adds SCNText and 3D objects. Now I want to add a UIImage from the image picker, and I don't know how to do this. Are there any solutions?
SOLUTION
func insertImage(image: UIImage, width: CGFloat = 0.3, height: CGFloat = 0.3) -> SCNNode {
let plane = SCNPlane(width: width, height: height)
plane.firstMaterial!.diffuse.contents = image
let node = SCNNode(geometry: plane)
node.constraints = [SCNBillboardConstraint()]
return node
}
let image = insertImage(image: addedImage)
node.addChildNode(image)
As I am sure you are aware, an SCNGeometry has a materials property, which is simply:
A container for the color or texture of one of a material's visual properties.
As such, you could render a UIImage onto an SCNGeometry using, for example, the diffuse property.
Here is a fully working and tested example, which loads a UIImagePickerController after 4 seconds and then creates an SCNNode with an SCNPlane geometry whose contents are set to the selected UIImage.
The code is fully commented so it should be easy enough to understand:
//-------------------------------------
//MARK: UIImagePickerControllerDelegate
//-------------------------------------
extension ViewController: UIImagePickerControllerDelegate{
func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [String : Any]) {
//1. Check We Have A Valid Image
if let selectedImage = info[UIImagePickerControllerOriginalImage] as? UIImage {
//2. We Havent Created Our PlaneNode So Create It
if planeNode == nil{
//d. Dismiss The Picker
picker.dismiss(animated: true) {
//a. Create An SCNPlane Geometry
let planeGeometry = SCNPlane(width: 0.5, height: 0.5)
//b. Set's It's Contents To The Picked Image
planeGeometry.firstMaterial?.diffuse.contents = self.correctlyOrientated(selectedImage)
//c. Set The Geometry & Add It To The Scene
self.planeNode = SCNNode()
self.planeNode?.geometry = planeGeometry
self.augmentedRealityView.scene.rootNode.addChildNode(self.planeNode!)
self.planeNode?.position = SCNVector3(0, 0, -1.5)
}
}
}
picker.dismiss(animated: true, completion: nil)
}
func imagePickerControllerDidCancel(_ picker: UIImagePickerController) { picker.dismiss(animated: true, completion: nil) }
}
class ViewController: UIViewController, UINavigationControllerDelegate {
//1. Create A Reference To Our ARSCNView In Our Storyboard Which Displays The Camera Feed
#IBOutlet weak var augmentedRealityView: ARSCNView!
//2. Create Our ARWorld Tracking Configuration & Session
let configuration = ARWorldTrackingConfiguration()
let augmentedRealitySession = ARSession()
//3. Create A Reference To Our PlaneNode
var planeNode: SCNNode?
var planeGeomeryImage: UIImage?
//---------------
//MARK: LifeCycle
//---------------
override func viewDidLoad() {
super.viewDidLoad()
//1. Setup The Session
setupARSession()
//2. Show The UIImagePicker After 4 Seconds
DispatchQueue.main.asyncAfter(deadline: .now() + 4) {
self.selectPhotoFromGallery()
}
}
override func didReceiveMemoryWarning() { super.didReceiveMemoryWarning() }
//-------------
//MARK: ARSetup
//-------------
func setupARSession(){
//1. Run Our Session
augmentedRealityView.session = augmentedRealitySession
augmentedRealitySession.run(configuration, options: [.resetTracking, .removeExistingAnchors])
}
//---------------------
//MARK: Image Selection
//---------------------
/// Loads The UIImagePicker & Allows Us To Select An Image
func selectPhotoFromGallery(){
if UIImagePickerController.isSourceTypeAvailable(UIImagePickerControllerSourceType.photoLibrary){
let imagePicker = UIImagePickerController()
imagePicker.delegate = self
imagePicker.allowsEditing = true
imagePicker.sourceType = UIImagePickerControllerSourceType.photoLibrary
self.present(imagePicker, animated: true, completion: nil)
}
}
/// Correctly Orientates A UIImage
///
/// - Parameter image: UIImage
/// - Returns: UIImage?
func correctlyOrientated(_ image: UIImage) -> UIImage {
if (image.imageOrientation == .up) { return image }
UIGraphicsBeginImageContextWithOptions(image.size, false, image.scale)
let rect = CGRect(x: 0, y: 0, width: image.size.width, height: image.size.height)
image.draw(in: rect)
let normalizedImage = UIGraphicsGetImageFromCurrentImageContext()!
UIGraphicsEndImageContext()
return normalizedImage
}
}
Don't forget to add the NSPhotoLibraryUsageDescription key to your Info.plist:
<key>NSPhotoLibraryUsageDescription</key>
<string>For ARkit</string>
This should be more than enough to get you started...
Please consider this a question from someone who is not so good at Swift :). I have a button on the click of which the image picker is opened, and I am able to select images. In didFinishPickingMediaWithInfo I'm adding the image to an array like so...
var imageArray = [UIImage]()
func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [String : Any]) {
if let image = info[UIImagePickerControllerOriginalImage] as? UIImage {
UIImageWriteToSavedPhotosAlbum(image, self, #selector(image(_:didFinishSavingWithError:contextInfo:)), nil)
imageArray.append(image)
for i in 0..<imageArray.count {
imageView.image = imageArray[i]
imageView.contentMode = .scaleAspectFit
let xPosition = self.view.frame.width * CGFloat(i)
imageView.frame = CGRect(x: xPosition, y: 0, width: self.imageScrollView.frame.width, height: self.imageScrollView.frame.height)
imageScrollView.contentSize.width = imageScrollView.frame.width * (CGFloat(i + 1))
imageScrollView.addSubview(imageView)
}
}
self.dismiss(animated: true, completion: nil)
}
I'm also having these functions:
func saveImage(image: UIImage, path: String) -> Bool {
let jpgImageData = UIImageJPEGRepresentation(image, 1.0)
do {
try jpgImageData?.write(to: URL(fileURLWithPath: path), options: .atomic)
} catch {
print(error)
}
return (jpgImageData != nil)
}
func getDocumentsURL() -> NSURL {
let documentsURL = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0]
return documentsURL as NSURL
}
func fileInDocumentsDirectory(filename: String) -> String {
let fileURL = getDocumentsURL().appendingPathComponent(filename)
return fileURL!.path
}
But my issue is this: I don't want to show just the one image that was picked from the gallery. I want to pick multiple images from the gallery (one at a time), store them in an array, and then display them all in a horizontal scrolling format. For this purpose, I'm setting up a scroll view to hold the images (as shown in didFinishPickingMediaWithInfo).
Maybe I also have to read the images back, but I can't figure out how to do that. Please help!
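On reading the images back: the path helpers in the question already give the file location, so a load is the mirror image of `saveImage`. Here is a minimal round-trip sketch with plain Data (an assumption, not from the thread; on iOS you would wrap the loaded data with `UIImage(data:)`, or use `UIImage(contentsOfFile:)` directly):

```swift
import Foundation

/// Writes data to a path, mirroring the question's saveImage helper.
func saveData(_ data: Data, path: String) -> Bool {
    do {
        try data.write(to: URL(fileURLWithPath: path), options: .atomic)
        return true
    } catch {
        print(error)
        return false
    }
}

/// Reads the file back; on iOS, UIImage(data:) would turn this into an image.
func loadData(path: String) -> Data? {
    return FileManager.default.contents(atPath: path)
}
```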
Please see this loop, which I have corrected.
You are only creating one UIImageView and adding it to the scroll view over and over.
Initialize a new UIImageView on every iteration:
for i in 0..<imageArray.count {
var imageView = UIImageView() //*** Add this line to your code
imageView.image = imageArray[i]
imageView.contentMode = .scaleAspectFit
let xPosition = self.view.frame.width * CGFloat(i)
imageView.frame = CGRect(x: xPosition, y: 0, width: self.imageScrollView.frame.width, height: self.imageScrollView.frame.height)
imageScrollView.contentSize.width = imageScrollView.frame.width * (CGFloat(i + 1))
imageScrollView.addSubview(imageView)
}
Whenever you update your scroll view with new images, don't forget to remove the old ones.
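The frame and contentSize arithmetic from the loop can also be pulled into a pure function (hypothetical name), which makes the paging layout easy to verify: page i starts at x = pageWidth * i, and the total content width is pageWidth * pageCount.

```swift
/// Layout math for horizontally paged image views in a scroll view.
func pagedLayout(pageWidth: Double, pageCount: Int)
    -> (xPositions: [Double], contentWidth: Double) {
    let xs = (0..<pageCount).map { Double($0) * pageWidth }
    return (xs, Double(pageCount) * pageWidth)
}
```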
Use this to save the image:
var imageArr:[UIImage] = []
func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [String : Any]) {
let chosenImage = info[UIImagePickerControllerOriginalImage] as! UIImage
UIImageWriteToSavedPhotosAlbum(chosenImage, self, #selector(image(_:didFinishSavingWithError:contextInfo:)), nil)
imageArr.append(chosenImage)
for i in 0..<imageArr.count
{
let imageView = UIImageView() // create a new image view on each iteration
imageView.image = imageArr[i]
imageView.contentMode = .scaleAspectFit
let xPosition = self.view.frame.width * CGFloat(i)
imageView.frame = CGRect(x: xPosition, y: 0, width: self.imageScrollView.frame.width, height: self.imageScrollView.frame.height)
imageScrollView.contentSize.width = imageScrollView.frame.width * (CGFloat(i + 1))
imageScrollView.addSubview(imageView)
}
}