Custom annotation on the map - iOS

I have an app where the user presses a button and takes a photo. The photo is then passed to an ImageAnnotation class and should be added to the map, but I get this error: "Unexpectedly found nil while implicitly unwrapping an Optional value".
This error is the only thing preventing the app from working correctly; if you can help me I would appreciate it highly.
So here is the code:
extension MapViewController: UIImagePickerControllerDelegate, UINavigationControllerDelegate {
    func imagePickerControllerDidCancel(_ picker: UIImagePickerController) {
        picker.dismiss(animated: true, completion: nil)
    }
    func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey : Any]) {
        picker.dismiss(animated: true, completion: nil)
        guard let ecoImage = info[UIImagePickerController.InfoKey.originalImage] as? UIImage else { return }
        guard let currentLocation = locationManager.location else { return }
        let pin = ImageAnnotation(coordinate: CLLocationCoordinate2D(latitude: currentLocation.coordinate.latitude, longitude: currentLocation.coordinate.longitude), image: ecoImage, color: UIColor.systemGreen)
    }
}
Here is the class that should create the custom annotation for the map:
class ImageAnnotation: NSObject, MKAnnotation {
    var coordinate: CLLocationCoordinate2D
    var imageEco: UIImage?
    var color: UIColor?
    var imageView: UIImageView!

    init(coordinate: CLLocationCoordinate2D, image: UIImage, color: UIColor) {
        self.coordinate = coordinate
        imageEco = image
        self.color = color
        imageView.image = image
        imageView.frame = CGRect(x: 0, y: 0, width: 20, height: 20)
        self.imageView = UIImageView(frame: CGRect(x: 0, y: 0, width: 20, height: 20))
        imageView.addSubview(self.imageView)
        self.imageView.layer.cornerRadius = 25
        self.imageView.layer.masksToBounds = true
    }
}
(A screenshot of the desired final result was attached to the original question.)
You are saying imageView.image = image. But imageView is nil. So you crash. No big surprise.
Taking a larger perspective, it appears that you have failed to appreciate the difference between an annotation and an annotation view. An annotation is just a message carrier: it can say what the coordinate is and perhaps what image should be displayed, but that's all. It is the job of your map view delegate to create the corresponding annotation view on demand, and that is where you are dealing with a view and can arrange for it to display your circular image.
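To make that concrete, here is a minimal sketch of the delegate side. It assumes ImageAnnotation keeps only coordinate, imageEco, and color (drop the UIImageView property entirely); the 50-point size, corner radius, and border are illustrative values, not part of the asker's code:

```swift
import MapKit
import UIKit

extension MapViewController: MKMapViewDelegate {
    func mapView(_ mapView: MKMapView, viewFor annotation: MKAnnotation) -> MKAnnotationView? {
        // Leave the user-location dot (and any other annotation types) alone
        guard let imageAnnotation = annotation as? ImageAnnotation else { return nil }

        let identifier = "ImageAnnotation"
        let view = mapView.dequeueReusableAnnotationView(withIdentifier: identifier)
            ?? MKAnnotationView(annotation: imageAnnotation, reuseIdentifier: identifier)
        view.annotation = imageAnnotation

        // The annotation *view* owns the circular image, not the annotation
        view.frame = CGRect(x: 0, y: 0, width: 50, height: 50)
        view.layer.contents = imageAnnotation.imageEco?.cgImage
        view.layer.cornerRadius = 25
        view.layer.masksToBounds = true
        view.layer.borderColor = imageAnnotation.color?.cgColor
        view.layer.borderWidth = 2
        return view
    }
}
```

Remember to set mapView.delegate = self (for example in viewDidLoad), or this method will never be called.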

Related

Using Vision to scan images from photo library

Is there a way that I can use the Vision framework to scan an existing image from the user's photo library? As in, not taking a new picture using the camera, but just choosing an image that the user already has?
Yes, you can. Adding on to @Zulqarnayn's answer, here's a working example that detects a rectangle and draws a bounding box around it.
1. Set up the image view where the image will be displayed
@IBOutlet weak var imageView: UIImageView!

@IBAction func pickImage(_ sender: Any) {
    let picker = UIImagePickerController()
    picker.delegate = self
    self.present(picker, animated: true)
}

override func viewDidLoad() {
    super.viewDidLoad()
    imageView.layer.borderWidth = 4
    imageView.layer.borderColor = UIColor.blue.cgColor
    imageView.contentMode = .scaleAspectFill
    imageView.backgroundColor = UIColor.green.withAlphaComponent(0.3)
    imageView.layer.masksToBounds = false /// allow image to overflow, for testing purposes
}
2. Get the image from the image picker
extension ViewController: UIImagePickerControllerDelegate, UINavigationControllerDelegate {
    func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey : Any]) {
        guard let image = info[.originalImage] as? UIImage else { return }
        /// set the imageView's image
        imageView.image = image
        /// start the request & request handler
        detectCard()
        /// dismiss the picker
        dismiss(animated: true)
    }
}
3. Start the vision request
func detectCard() {
    guard let cgImage = imageView.image?.cgImage else { return }

    /// perform on a background thread, so the main screen is not frozen
    DispatchQueue.global(qos: .userInitiated).async {
        let request = VNDetectRectanglesRequest { request, error in
            /// this function will be called when the Vision request finishes
            self.handleDetectedRectangle(request: request, error: error)
        }
        request.minimumAspectRatio = 0.0
        request.maximumAspectRatio = 1.0
        request.maximumObservations = 1 /// only look for 1 rectangle

        let imageRequestHandler = VNImageRequestHandler(cgImage: cgImage, orientation: .up)
        do {
            try imageRequestHandler.perform([request])
        } catch let error {
            print("Error: \(error)")
        }
    }
}
4. Get the result from the Vision request
func handleDetectedRectangle(request: VNRequest?, error: Error?) {
    if let results = request?.results {
        if let observation = results.first as? VNRectangleObservation {
            /// get back to the main thread
            DispatchQueue.main.async {
                guard let image = self.imageView.image else { return }
                let convertedRect = self.getConvertedRect(
                    boundingBox: observation.boundingBox,
                    inImage: image.size,
                    containedIn: self.imageView.bounds.size
                )
                self.drawBoundingBox(rect: convertedRect)
            }
        }
    }
}
5. Convert observation.boundingBox to the UIKit coordinates of the image view, then draw a border around the detected rectangle
I explain this more in detail in this answer.
func getConvertedRect(boundingBox: CGRect, inImage imageSize: CGSize, containedIn containerSize: CGSize) -> CGRect {
    let rectOfImage: CGRect
    let imageAspect = imageSize.width / imageSize.height
    let containerAspect = containerSize.width / containerSize.height
    if imageAspect > containerAspect { /// image extends left and right
        let newImageWidth = containerSize.height * imageAspect /// the width of the overflowing image
        let newX = -(newImageWidth - containerSize.width) / 2
        rectOfImage = CGRect(x: newX, y: 0, width: newImageWidth, height: containerSize.height)
    } else { /// image extends top and bottom
        let newImageHeight = containerSize.width * (1 / imageAspect) /// the height of the overflowing image
        let newY = -(newImageHeight - containerSize.height) / 2
        rectOfImage = CGRect(x: 0, y: newY, width: containerSize.width, height: newImageHeight)
    }

    /// flip the y-origin from Vision's bottom-left to UIKit's top-left
    let newOriginBoundingBox = CGRect(
        x: boundingBox.origin.x,
        y: 1 - boundingBox.origin.y - boundingBox.height,
        width: boundingBox.width,
        height: boundingBox.height
    )

    var convertedRect = VNImageRectForNormalizedRect(newOriginBoundingBox, Int(rectOfImage.width), Int(rectOfImage.height))

    /// add the margins
    convertedRect.origin.x += rectOfImage.origin.x
    convertedRect.origin.y += rectOfImage.origin.y

    return convertedRect
}

/// draw an orange frame around the detected rectangle, on top of the image view
func drawBoundingBox(rect: CGRect) {
    let uiView = UIView(frame: rect)
    imageView.addSubview(uiView)
    uiView.backgroundColor = UIColor.clear
    uiView.layer.borderColor = UIColor.orange.cgColor
    uiView.layer.borderWidth = 3
}
(Input image and result screenshots omitted; a demo repo accompanied the original answer.)
Yes, you can. First, create an instance of UIImagePickerController and present it:
let picker = UIImagePickerController()
picker.delegate = self
picker.sourceType = .photoLibrary
present(picker, animated: true, completion: nil)
Then implement the delegate method to grab the desired image:
extension YourViewController: UIImagePickerControllerDelegate {
    func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey : Any]) {
        if let pickedImage = info[.originalImage] as? UIImage {
            /// here start your request & request handler
        }
        picker.dismiss(animated: true, completion: nil)
    }
}

Why is my Object Detection app not returning anything?

I have a simple app that contains a button, a UIImageView, and a label. Once you click the button, you can take a photo using the camera. The model then has to predict what the object in the photo is, and finally the label has to display the output (the predicted object).
Everything works fine and there are no errors, but after I take a picture the label doesn't change and the model doesn't return anything. Why is that?
NOTE: The model is working fine and has been tested using another app, but I think I am missing something in this code.
Here is my code:
import UIKit
import CoreML

class secondViewController: UIViewController, UINavigationControllerDelegate {
    @IBOutlet weak var imageView: UIImageView!
    @IBOutlet weak var classifier: UILabel!
    var model: VGG16!
    let cameraPicker = UIImagePickerController()

    override func viewDidLoad() {
        super.viewDidLoad()
        // Do any additional setup after loading the view, typically from a nib.
    }

    override func viewWillAppear(_ animated: Bool) {
        model = VGG16()
    }

    override func didReceiveMemoryWarning() {
        super.didReceiveMemoryWarning()
        // Dispose of any resources that can be recreated.
    }

    @IBAction func camera(_ sender: Any) {
        if !UIImagePickerController.isSourceTypeAvailable(.camera) {
            return
        }
        cameraPicker.delegate = self
        cameraPicker.sourceType = .camera
        cameraPicker.allowsEditing = false
        present(cameraPicker, animated: true)
    }
}
extension secondViewController: UIImagePickerControllerDelegate {
    func imagePickerControllerDidCancel(_ picker: UIImagePickerController) {
        dismiss(animated: true, completion: nil)
    }

    // private func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [String : Any]) {
    private func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey: Any]) {
        picker.dismiss(animated: true, completion: nil)
        let image = info[UIImagePickerController.InfoKey.originalImage]! as! UIImage
        picker.dismiss(animated: true)
        classifier.text = "Analyzing Image..."

        UIGraphicsBeginImageContextWithOptions(CGSize(width: 299, height: 299), true, 2.0)
        image.draw(in: CGRect(x: 0, y: 0, width: 299, height: 299))
        let newImage = UIGraphicsGetImageFromCurrentImageContext()!
        UIGraphicsEndImageContext()

        let attrs = [kCVPixelBufferCGImageCompatibilityKey: kCFBooleanTrue, kCVPixelBufferCGBitmapContextCompatibilityKey: kCFBooleanTrue] as CFDictionary
        var pixelBuffer: CVPixelBuffer?
        let status = CVPixelBufferCreate(kCFAllocatorDefault, Int(newImage.size.width), Int(newImage.size.height), kCVPixelFormatType_32ARGB, attrs, &pixelBuffer)
        guard status == kCVReturnSuccess else {
            return
        }

        CVPixelBufferLockBaseAddress(pixelBuffer!, CVPixelBufferLockFlags(rawValue: 0))
        let pixelData = CVPixelBufferGetBaseAddress(pixelBuffer!)
        let rgbColorSpace = CGColorSpaceCreateDeviceRGB()
        let context = CGContext(data: pixelData, width: Int(newImage.size.width), height: Int(newImage.size.height), bitsPerComponent: 8, bytesPerRow: CVPixelBufferGetBytesPerRow(pixelBuffer!), space: rgbColorSpace, bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue)
        context?.translateBy(x: 0, y: newImage.size.height)
        context?.scaleBy(x: 1.0, y: -1.0)
        UIGraphicsPushContext(context!)
        newImage.draw(in: CGRect(x: 0, y: 0, width: newImage.size.width, height: newImage.size.height))
        UIGraphicsPopContext()
        CVPixelBufferUnlockBaseAddress(pixelBuffer!, CVPixelBufferLockFlags(rawValue: 0))
        imageView.image = newImage

        // Core ML
        guard let prediction = try? model.prediction(image: pixelBuffer!) else {
            return
        }
        classifier.text = "I think this is a \(prediction.classLabel)."
    }
}
You need to hold a strong reference to
let cameraPicker = UIImagePickerController()
by making it an instance variable (rather than a local variable inside viewDidLoad()) so that the delegate methods are called.
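A minimal sketch of what that looks like (names match the question's code; the Core ML part is elided). One more thing worth checking, visible in the question's own commented-out code: the delegate method should not be marked private, or UIKit cannot find and call it:

```swift
import UIKit

class SecondViewController: UIViewController, UINavigationControllerDelegate {
    // Stored property: the controller keeps the picker (and its delegate wiring) alive
    let cameraPicker = UIImagePickerController()

    @IBAction func camera(_ sender: Any) {
        guard UIImagePickerController.isSourceTypeAvailable(.camera) else { return }
        cameraPicker.delegate = self
        cameraPicker.sourceType = .camera
        present(cameraPicker, animated: true)
    }
}

extension SecondViewController: UIImagePickerControllerDelegate {
    // Not `private`: the picker must be able to call this through the delegate protocol
    func imagePickerController(_ picker: UIImagePickerController,
                               didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey: Any]) {
        picker.dismiss(animated: true)
        // ... resize the image, build the pixel buffer, and run the prediction here ...
    }
}
```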

How to add UIImage to a Scene in AR Kit

I have an AR session that adds SCNText and 3D objects as well. Now I want to add a UIImage from the image picker and don't know how to do this. Are there any solutions?
SOLUTION
func insertImage(image: UIImage, width: CGFloat = 0.3, height: CGFloat = 0.3) -> SCNNode {
    let plane = SCNPlane(width: width, height: height)
    plane.firstMaterial!.diffuse.contents = image
    let node = SCNNode(geometry: plane)
    node.constraints = [SCNBillboardConstraint()]
    return node
}

let image = insertImage(image: addedImage)
node.addChildNode(image)
As I am sure you are aware, an SCNGeometry has a materials property, which is simply:
A container for the color or texture of one of a material's visual properties.
As such you could render a UIImage onto an SCNGeometry using, for example, the diffuse property.
Here is a fully working and tested example, which loads a UIImagePickerController after 4 seconds and then creates an SCNNode with an SCNPlane geometry whose contents are set to the selected UIImage.
The code is fully commented so it should be easy enough to understand:
//-------------------------------------
//MARK: UIImagePickerControllerDelegate
//-------------------------------------
extension ViewController: UIImagePickerControllerDelegate {
    func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [String : Any]) {
        //1. Check We Have A Valid Image
        if let selectedImage = info[UIImagePickerControllerOriginalImage] as? UIImage {
            //2. We Haven't Created Our PlaneNode So Create It
            if planeNode == nil {
                //d. Dismiss The Picker
                picker.dismiss(animated: true) {
                    //a. Create An SCNPlane Geometry
                    let planeGeometry = SCNPlane(width: 0.5, height: 0.5)
                    //b. Set Its Contents To The Picked Image
                    planeGeometry.firstMaterial?.diffuse.contents = self.correctlyOrientated(selectedImage)
                    //c. Set The Geometry & Add It To The Scene
                    self.planeNode = SCNNode()
                    self.planeNode?.geometry = planeGeometry
                    self.augmentedRealityView.scene.rootNode.addChildNode(self.planeNode!)
                    self.planeNode?.position = SCNVector3(0, 0, -1.5)
                }
            }
        }
        picker.dismiss(animated: true, completion: nil)
    }

    func imagePickerControllerDidCancel(_ picker: UIImagePickerController) { picker.dismiss(animated: true, completion: nil) }
}
class ViewController: UIViewController, UINavigationControllerDelegate {
    //1. Create A Reference To Our ARSCNView In Our Storyboard Which Displays The Camera Feed
    @IBOutlet weak var augmentedRealityView: ARSCNView!

    //2. Create Our ARWorld Tracking Configuration & Session
    let configuration = ARWorldTrackingConfiguration()
    let augmentedRealitySession = ARSession()

    //3. Create A Reference To Our PlaneNode
    var planeNode: SCNNode?
    var planeGeomeryImage: UIImage?

    //---------------
    //MARK: LifeCycle
    //---------------

    override func viewDidLoad() {
        super.viewDidLoad()
        //1. Setup The Session
        setupARSession()
        //2. Show The UIImagePicker After 4 Seconds
        DispatchQueue.main.asyncAfter(deadline: .now() + 4) {
            self.selectPhotoFromGallery()
        }
    }

    override func didReceiveMemoryWarning() { super.didReceiveMemoryWarning() }

    //-------------
    //MARK: ARSetup
    //-------------

    func setupARSession() {
        //1. Run Our Session
        augmentedRealityView.session = augmentedRealitySession
        augmentedRealitySession.run(configuration, options: [.resetTracking, .removeExistingAnchors])
    }

    //---------------------
    //MARK: Image Selection
    //---------------------

    /// Loads The UIImagePicker & Allows Us To Select An Image
    func selectPhotoFromGallery() {
        if UIImagePickerController.isSourceTypeAvailable(UIImagePickerControllerSourceType.photoLibrary) {
            let imagePicker = UIImagePickerController()
            imagePicker.delegate = self
            imagePicker.allowsEditing = true
            imagePicker.sourceType = UIImagePickerControllerSourceType.photoLibrary
            self.present(imagePicker, animated: true, completion: nil)
        }
    }

    /// Correctly Orientates A UIImage
    ///
    /// - Parameter image: UIImage
    /// - Returns: UIImage
    func correctlyOrientated(_ image: UIImage) -> UIImage {
        if (image.imageOrientation == .up) { return image }
        UIGraphicsBeginImageContextWithOptions(image.size, false, image.scale)
        let rect = CGRect(x: 0, y: 0, width: image.size.width, height: image.size.height)
        image.draw(in: rect)
        let normalizedImage = UIGraphicsGetImageFromCurrentImageContext()!
        UIGraphicsEndImageContext()
        return normalizedImage
    }
}
Don't forget to add the NSPhotoLibraryUsageDescription to your info.plist:
<key>NSPhotoLibraryUsageDescription</key>
<string>For ARkit</string>
This should be more than enough to get you started...

iOS Swift 3: using a property observer to update an image from UIImagePicker

Hi, I'm using iOS Swift 3 to let the user pick images from the library or an album. I have a UIImage variable. How can I use a property observer to update the UIImage when the user has finished picking an image?
Something like:
var image: UIImage {
    didSet { ... }
}
Currently I'm doing this
func show(image: UIImage) {
    imageView.image = image
    imageView.isHidden = false
    imageView.frame = CGRect(x: 10, y: 10, width: 260, height: 260)
    addPhotoLabel.isHidden = true
}

func imagePickerController(_ picker: UIImagePickerController,
                           didFinishPickingMediaWithInfo info: [String : Any]) {
    image = info[UIImagePickerControllerEditedImage] as? UIImage
    if let theImage = image {
        show(image: theImage)
    }
    dismiss(animated: true, completion: nil)
}
I'm thinking of using a property observer to improve this approach. Any help is much appreciated. Thanks!
If you really want to update the image view any time the image property is set, then simply put all of the code in your show method in the didSet block for the image property.
var image: UIImage? {
    didSet {
        imageView.image = image
        imageView.isHidden = false
        imageView.frame = CGRect(x: 10, y: 10, width: 260, height: 260)
        addPhotoLabel.isHidden = true
    }
}
func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [String : Any]) {
    if let theImage = info[UIImagePickerControllerEditedImage] as? UIImage {
        image = theImage
    }
    picker.dismiss(animated: true, completion: nil)
}
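Property observers themselves are plain Swift and easy to experiment with outside UIKit. A toy example (the type and property names are made up for illustration):

```swift
// A didSet observer fires every time the property is assigned,
// after the new value has been stored (but not during initialization)
class PhotoModel {
    var changeCount = 0
    var image: String? {
        didSet {
            changeCount += 1
            print("image changed from \(oldValue ?? "nil") to \(image ?? "nil")")
        }
    }
}

let model = PhotoModel()
model.image = "beach.jpg"   // didSet fires
model.image = "city.jpg"    // didSet fires again
print(model.changeCount)    // 2
```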

How to overlay an image over an image in Xcode?

I am trying to make an app where the user can select an image from their photo library to use as a background, followed by a logo that they can also choose from their (photo) library.
So far I haven't found a tutorial that can help me, mostly because the user must be able to set their own images instead of a default image.
Also, does anyone know how I can have multiple UIImagePickerControllers? My current code affects both images at the same time instead of one per picker.
What I use/have:
I use Swift
I use Xcode 7.0.1
This is the storyboard I currently use (screenshot omitted).
My Swift file:
class ViewController: UIViewController, UIImagePickerControllerDelegate, UINavigationControllerDelegate {
    @IBOutlet weak var logo: UIImageView!
    @IBOutlet weak var bgimg: UIImageView!

    override func viewDidLoad() {
        super.viewDidLoad()
    }

    @IBAction func selectbg(sender: AnyObject) {
        let bgPicker = UIImagePickerController()
        bgPicker.delegate = self
        bgPicker.sourceType = .PhotoLibrary
        self.presentViewController(bgPicker, animated: true, completion: nil)
    }

    @IBAction func selectlogo(sender: AnyObject) {
        let logoPicker = UIImagePickerController()
        logoPicker.delegate = self
        logoPicker.sourceType = .PhotoLibrary
        self.presentViewController(logoPicker, animated: true, completion: nil)
    }

    func imagePickerController(picker: UIImagePickerController, didFinishPickingImage image: UIImage, editingInfo: [String : AnyObject]?) {
        logo.image = image
        bgimg.image = image
        self.dismissViewControllerAnimated(false, completion: nil)
    }

    @IBAction func addImage(sender: AnyObject) {
        let ActionAlert = UIAlertController(title: nil, message: "What image do you want to add/edit?", preferredStyle: .ActionSheet)
        let bgimage = UIAlertAction(title: "Background", style: .Default, handler: { (Alert: UIAlertAction!) -> Void in
            print("Edit background image")
        })
        let logo = UIAlertAction(title: "Logo", style: .Default) { (Alert: UIAlertAction!) -> Void in
            print("Edit logo")
        }
        let cancel = UIAlertAction(title: "Cancel", style: .Cancel) { (Alert: UIAlertAction!) -> Void in
            print("remove menu")
        }
        ActionAlert.addAction(bgimage)
        ActionAlert.addAction(logo)
        ActionAlert.addAction(cancel)
        self.presentViewController(ActionAlert, animated: true, completion: nil)
    }
}
If I wasn't clear about my question, feel free to ask for more info.
Create two UIImageViews for each image and add the foreground one as a subview to the parent.
Something like this:
let bgimg = UIImage(named: "image-name") // The image used as a background
let bgimgview = UIImageView(image: bgimg) // Create the view holding the image
bgimgview.frame = CGRect(x: 0, y: 0, width: 500, height: 500) // The size of the background image
let frontimg = UIImage(named: "front-image") // The image in the foreground
let frontimgview = UIImageView(image: frontimg) // Create the view holding the image
frontimgview.frame = CGRect(x: 150, y: 300, width: 50, height: 50) // The size and position of the front image
bgimgview.addSubview(frontimgview) // Add the front image on top of the background
Here's sample code for merging a footer image into the original image:
extension UIImage {
    func merge(mergewith: UIImage) -> UIImage {
        UIGraphicsBeginImageContextWithOptions(size, false, 0.0)
        let actualArea = CGRect(x: 0, y: 0, width: size.width, height: size.height)
        let mergeArea = CGRect(x: 0, y: size.height - mergewith.size.height, width: size.width, height: mergewith.size.height)
        self.draw(in: actualArea)
        mergewith.draw(in: mergeArea)
        let merged = UIGraphicsGetImageFromCurrentImageContext()!
        UIGraphicsEndImageContext()
        return merged
    }
}
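Usage is then a single call (the variable names below are illustrative; both images are assumed to be loaded already, e.g. from the two pickers):

```swift
// Draws logoImage along the bottom edge of backgroundImage,
// at the background's full width and the logo's own height
let combined = backgroundImage.merge(mergewith: logoImage)
bgimg.image = combined
```

Note that the logo is drawn at the background's pixel size, so for a small corner logo you would adjust mergeArea accordingly.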
