How to add a UIImage to a Scene in ARKit - iOS

I have an AR session which adds SCNText and 3D objects as well. Now I want to add a UIImage from the image picker, but I don't know how to do this. Are there any solutions?
SOLUTION
func insertImage(image: UIImage, width: CGFloat = 0.3, height: CGFloat = 0.3) -> SCNNode {
    // Create a plane and use the image as the contents of its diffuse material
    let plane = SCNPlane(width: width, height: height)
    plane.firstMaterial!.diffuse.contents = image
    let node = SCNNode(geometry: plane)
    // Keep the plane facing the camera at all times
    node.constraints = [SCNBillboardConstraint()]
    return node
}

let imageNode = insertImage(image: addedImage)
node.addChildNode(imageNode)
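Here, node is whatever parent node you are attaching content to. If you just want the picked image floating in front of you, a minimal sketch (assuming your ARSCNView outlet is named sceneView) could be:

imageNode.position = SCNVector3(0, 0, -0.5) // 0.5 m in front of the world origin
sceneView.scene.rootNode.addChildNode(imageNode)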

As I am sure you are aware, an SCNGeometry has a materials property, which is simply:

A container for the color or texture of one of a material’s visual properties.

As such, you can render a UIImage onto an SCNGeometry using, for example, the diffuse property.
Here is a fully working and tested example, which loads a UIImagePickerController after 4 seconds and then creates an SCNNode with an SCNPlane geometry whose contents are set to the selected UIImage.
The code is fully commented so it should be easy enough to understand:
//-------------------------------------
//MARK: UIImagePickerControllerDelegate
//-------------------------------------

extension ViewController: UIImagePickerControllerDelegate{

    func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [String : Any]) {

        //1. Check We Have A Valid Image
        if let selectedImage = info[UIImagePickerControllerOriginalImage] as? UIImage {

            //2. We Haven't Created Our PlaneNode So Create It
            if planeNode == nil{

                //a. Dismiss The Picker
                picker.dismiss(animated: true) {

                    //b. Create An SCNPlane Geometry
                    let planeGeometry = SCNPlane(width: 0.5, height: 0.5)

                    //c. Set Its Contents To The Picked Image
                    planeGeometry.firstMaterial?.diffuse.contents = self.correctlyOrientated(selectedImage)

                    //d. Set The Geometry & Add It To The Scene
                    self.planeNode = SCNNode()
                    self.planeNode?.geometry = planeGeometry
                    self.augmentedRealityView.scene.rootNode.addChildNode(self.planeNode!)
                    self.planeNode?.position = SCNVector3(0, 0, -1.5)
                }
            }
        }

        picker.dismiss(animated: true, completion: nil)
    }

    func imagePickerControllerDidCancel(_ picker: UIImagePickerController) { picker.dismiss(animated: true, completion: nil) }
}
class ViewController: UIViewController, UINavigationControllerDelegate {

    //1. Create A Reference To Our ARSCNView In Our Storyboard Which Displays The Camera Feed
    @IBOutlet weak var augmentedRealityView: ARSCNView!

    //2. Create Our ARWorld Tracking Configuration & Session
    let configuration = ARWorldTrackingConfiguration()
    let augmentedRealitySession = ARSession()

    //3. Create A Reference To Our PlaneNode
    var planeNode: SCNNode?
    var planeGeometryImage: UIImage?

    //---------------
    //MARK: LifeCycle
    //---------------

    override func viewDidLoad() {
        super.viewDidLoad()

        //1. Setup The Session
        setupARSession()

        //2. Show The UIImagePicker After 4 Seconds
        DispatchQueue.main.asyncAfter(deadline: .now() + 4) {
            self.selectPhotoFromGallery()
        }
    }

    override func didReceiveMemoryWarning() { super.didReceiveMemoryWarning() }

    //-------------
    //MARK: ARSetup
    //-------------

    func setupARSession(){

        //1. Run Our Session
        augmentedRealityView.session = augmentedRealitySession
        augmentedRealitySession.run(configuration, options: [.resetTracking, .removeExistingAnchors])
    }

    //---------------------
    //MARK: Image Selection
    //---------------------

    /// Loads The UIImagePicker & Allows Us To Select An Image
    func selectPhotoFromGallery(){

        if UIImagePickerController.isSourceTypeAvailable(UIImagePickerControllerSourceType.photoLibrary){

            let imagePicker = UIImagePickerController()
            imagePicker.delegate = self
            imagePicker.allowsEditing = true
            imagePicker.sourceType = UIImagePickerControllerSourceType.photoLibrary
            self.present(imagePicker, animated: true, completion: nil)
        }
    }

    /// Correctly Orientates A UIImage
    ///
    /// - Parameter image: UIImage
    /// - Returns: UIImage?
    func correctlyOrientated(_ image: UIImage) -> UIImage {

        if (image.imageOrientation == .up) { return image }

        UIGraphicsBeginImageContextWithOptions(image.size, false, image.scale)
        let rect = CGRect(x: 0, y: 0, width: image.size.width, height: image.size.height)
        image.draw(in: rect)
        let normalizedImage = UIGraphicsGetImageFromCurrentImageContext()!
        UIGraphicsEndImageContext()

        return normalizedImage
    }
}
Don't forget to add the NSPhotoLibraryUsageDescription to your Info.plist:
<key>NSPhotoLibraryUsageDescription</key>
<string>For ARkit</string>
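Since ARKit needs camera access, you will also need a camera usage description, or the app will crash when the session starts:
<key>NSCameraUsageDescription</key>
<string>For ARKit</string>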
This should be more than enough to get you started...

Related

Custom annotation on the map

I have an app where the user presses a button and takes a photo; the photo then goes to the ImageAnnotation class and should be added to the map, but I get this error: "Unexpectedly found nil while implicitly unwrapping an Optional value".
This error is the only thing preventing the app from working correctly; if you can help me I would appreciate it highly.
So here is the code:
extension MapViewController: UIImagePickerControllerDelegate, UINavigationControllerDelegate {

    func imagePickerControllerDidCancel(_ picker: UIImagePickerController) {
        picker.dismiss(animated: true, completion: nil)
    }

    func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey : Any]) {
        picker.dismiss(animated: true, completion: nil)

        guard let ecoImage = info[UIImagePickerController.InfoKey.originalImage] as? UIImage else { return }
        guard let currentLocation = locationManager.location else { return }

        let pin = ImageAnnotation(coordinate: CLLocationCoordinate2D(latitude: currentLocation.coordinate.latitude, longitude: currentLocation.coordinate.longitude), image: ecoImage, color: UIColor.systemGreen)
    }
}
Here is the class that must create a custom annotation for the map
class ImageAnnotation : NSObject, MKAnnotation {
    var coordinate: CLLocationCoordinate2D
    var imageEco: UIImage?
    var color: UIColor?
    var imageView: UIImageView!

    init(coordinate: CLLocationCoordinate2D, image: UIImage, color: UIColor) {
        self.coordinate = coordinate
        imageEco = image
        self.color = color
        imageView.image = image
        imageView.frame = CGRect(x: 0, y: 0, width: 20, height: 20)
        self.imageView = UIImageView(frame: CGRect(x: 0, y: 0, width: 20, height: 20))
        imageView.addSubview(self.imageView)
        self.imageView.layer.cornerRadius = 25
        self.imageView.layer.masksToBounds = true
    }
}
Here is the final result that must be implemented (screenshot omitted).
You are saying imageView.image = image. But imageView is nil. So you crash. No big surprise.
Taking a larger perspective, it appears that you have failed to appreciate the difference between an annotation and an annotation view. An annotation is just a message carrier: it can say what the coordinate is and perhaps what image should be displayed, but that's all. It is the job of your map view delegate to create the corresponding annotation view on demand, and that is where you are dealing with a view and can arrange for it to display your circular image.
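A sketch of that division of labor (the reuse identifier and sizes are assumptions): the annotation only carries data, and the map view delegate builds the circular image view on demand. Note that the annotation also still needs to be added with mapView.addAnnotation(pin) after it is created.

import MapKit
import UIKit

// The annotation is only a data carrier
class ImageAnnotation: NSObject, MKAnnotation {
    let coordinate: CLLocationCoordinate2D
    let imageEco: UIImage?
    let color: UIColor?

    init(coordinate: CLLocationCoordinate2D, image: UIImage?, color: UIColor?) {
        self.coordinate = coordinate
        self.imageEco = image
        self.color = color
    }
}

// The delegate creates the corresponding view on demand
extension MapViewController: MKMapViewDelegate {
    func mapView(_ mapView: MKMapView, viewFor annotation: MKAnnotation) -> MKAnnotationView? {
        guard let imageAnnotation = annotation as? ImageAnnotation else { return nil }
        let identifier = "ImageAnnotation" // assumed reuse identifier
        let view = mapView.dequeueReusableAnnotationView(withIdentifier: identifier)
            ?? MKAnnotationView(annotation: imageAnnotation, reuseIdentifier: identifier)
        view.annotation = imageAnnotation
        view.frame = CGRect(x: 0, y: 0, width: 50, height: 50)

        // Display the circular image (a real implementation would avoid
        // re-adding subviews when a view is reused)
        let imageView = UIImageView(frame: view.bounds)
        imageView.image = imageAnnotation.imageEco
        imageView.layer.cornerRadius = 25
        imageView.layer.masksToBounds = true
        view.addSubview(imageView)
        return view
    }
}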

Using Vision to scan images from photo library

Is there a way that I can use the Vision framework to scan an existing image from the user's photo library? As in, not taking a new picture using the camera, but just choosing an image that the user already has?
Yes, you can. Adding on to @Zulqarnayn's answer, here's a working example to detect and draw a bounding box on rectangles.
1. Set up the image view where the image will be displayed
@IBOutlet weak var imageView: UIImageView!

@IBAction func pickImage(_ sender: Any) {
    let picker = UIImagePickerController()
    picker.delegate = self
    self.present(picker, animated: true)
}

override func viewDidLoad() {
    super.viewDidLoad()
    imageView.layer.borderWidth = 4
    imageView.layer.borderColor = UIColor.blue.cgColor
    imageView.contentMode = .scaleAspectFill
    imageView.backgroundColor = UIColor.green.withAlphaComponent(0.3)
    imageView.layer.masksToBounds = false /// allow image to overflow, for testing purposes
}
2. Get the image from the image picker
extension ViewController: UIImagePickerControllerDelegate, UINavigationControllerDelegate {
    func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey : Any]) {
        guard let image = info[.originalImage] as? UIImage else { return }

        /// set the imageView's image
        imageView.image = image

        /// start the request & request handler
        detectCard()

        /// dismiss the picker
        dismiss(animated: true)
    }
}
3. Start the vision request
func detectCard() {
    guard let cgImage = imageView.image?.cgImage else { return }

    /// perform on background thread, so the main screen is not frozen
    DispatchQueue.global(qos: .userInitiated).async {
        let request = VNDetectRectanglesRequest { request, error in
            /// this function will be called when the Vision request finishes
            self.handleDetectedRectangle(request: request, error: error)
        }
        request.minimumAspectRatio = 0.0
        request.maximumAspectRatio = 1.0
        request.maximumObservations = 1 /// only look for 1 rectangle

        let imageRequestHandler = VNImageRequestHandler(cgImage: cgImage, orientation: .up)
        do {
            try imageRequestHandler.perform([request])
        } catch let error {
            print("Error: \(error)")
        }
    }
}
4. Get the result from the Vision request
func handleDetectedRectangle(request: VNRequest?, error: Error?) {
    if let results = request?.results {
        if let observation = results.first as? VNRectangleObservation {
            /// get back to the main thread
            DispatchQueue.main.async {
                guard let image = self.imageView.image else { return }
                let convertedRect = self.getConvertedRect(
                    boundingBox: observation.boundingBox,
                    inImage: image.size,
                    containedIn: self.imageView.bounds.size
                )
                self.drawBoundingBox(rect: convertedRect)
            }
        }
    }
}
5. Convert observation.boundingBox to the UIKit coordinates of the image view, then draw a border around the detected rectangle
I explain this in more detail in this answer.
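The key subtlety is that Vision's normalized coordinates have their origin at the bottom-left, while UIKit's origin is at the top-left, so the y origin must be flipped before scaling up. A small worked example (the numbers are hypothetical):

// A Vision box with origin (0.1, 0.2) and size (0.3, 0.4), measured from
// the bottom-left, displayed in a 100 x 100 point view (top-left origin):
//   x      = 0.1 * 100             = 10
//   y      = (1 - 0.2 - 0.4) * 100 = 40
//   width  = 0.3 * 100             = 30
//   height = 0.4 * 100             = 40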
func getConvertedRect(boundingBox: CGRect, inImage imageSize: CGSize, containedIn containerSize: CGSize) -> CGRect {

    let rectOfImage: CGRect

    let imageAspect = imageSize.width / imageSize.height
    let containerAspect = containerSize.width / containerSize.height

    if imageAspect > containerAspect { /// image extends left and right
        let newImageWidth = containerSize.height * imageAspect /// the width of the overflowing image
        let newX = -(newImageWidth - containerSize.width) / 2
        rectOfImage = CGRect(x: newX, y: 0, width: newImageWidth, height: containerSize.height)
    } else { /// image extends top and bottom
        let newImageHeight = containerSize.width * (1 / imageAspect) /// the height of the overflowing image
        let newY = -(newImageHeight - containerSize.height) / 2
        rectOfImage = CGRect(x: 0, y: newY, width: containerSize.width, height: newImageHeight)
    }

    /// flip the y origin, since Vision uses a bottom-left origin
    let newOriginBoundingBox = CGRect(
        x: boundingBox.origin.x,
        y: 1 - boundingBox.origin.y - boundingBox.height,
        width: boundingBox.width,
        height: boundingBox.height
    )

    var convertedRect = VNImageRectForNormalizedRect(newOriginBoundingBox, Int(rectOfImage.width), Int(rectOfImage.height))

    /// add the margins
    convertedRect.origin.x += rectOfImage.origin.x
    convertedRect.origin.y += rectOfImage.origin.y

    return convertedRect
}
/// draw an orange frame around the detected rectangle, on top of the image view
func drawBoundingBox(rect: CGRect) {
    let uiView = UIView(frame: rect)
    imageView.addSubview(uiView)
    uiView.backgroundColor = UIColor.clear
    uiView.layer.borderColor = UIColor.orange.cgColor
    uiView.layer.borderWidth = 3
}
Input image, result screenshot, and demo repo: linked in the original answer.
Yes, you can. First, create an instance of UIImagePickerController and present it:
let picker = UIImagePickerController()
picker.delegate = self
picker.sourceType = .photoLibrary
present(picker, animated: true, completion: nil)
Then implement the delegate method to take the desired image:
extension YourViewController: UIImagePickerControllerDelegate, UINavigationControllerDelegate {
    func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey : Any]) {
        if let pickedImage = info[.originalImage] as? UIImage {
            // here start your request & request handler
        }
        picker.dismiss(animated: true, completion: nil)
    }
}

Why my Object Detection app is not returning anything?

I have a simple app that contains a button, a UIImageView, and a label. Once you click on the button, you can take a photo using the camera. The model then has to predict what the object in the photo is, and finally the label has to display the output (the predicted object).
Everything is working fine and there are no errors, but after I take a picture the label does not change and the model returns nothing. Why is that?
NOTE: The model itself works fine and has been tested in another app, but I think I am missing something in this code.
Here is my code:
import UIKit
import CoreML

class secondViewController: UIViewController, UINavigationControllerDelegate {

    @IBOutlet weak var imageView: UIImageView!
    @IBOutlet weak var classifier: UILabel!

    var model: VGG16!
    let cameraPicker = UIImagePickerController()

    override func viewDidLoad() {
        super.viewDidLoad()
        // Do any additional setup after loading the view, typically from a nib.
    }

    override func viewWillAppear(_ animated: Bool) {
        model = VGG16()
    }

    override func didReceiveMemoryWarning() {
        super.didReceiveMemoryWarning()
        // Dispose of any resources that can be recreated.
    }

    @IBAction func camera(_ sender: Any) {
        if !UIImagePickerController.isSourceTypeAvailable(.camera) {
            return
        }
        cameraPicker.delegate = self
        cameraPicker.sourceType = .camera
        cameraPicker.allowsEditing = false
        present(cameraPicker, animated: true)
    }
}
extension secondViewController: UIImagePickerControllerDelegate {

    func imagePickerControllerDidCancel(_ picker: UIImagePickerController) {
        dismiss(animated: true, completion: nil)
    }

    private func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey: Any]) {
        picker.dismiss(animated: true, completion: nil)
        let image = info[UIImagePickerController.InfoKey.originalImage]! as! UIImage
        picker.dismiss(animated: true)
        classifier.text = "Analyzing Image..."

        // Redraw the image at the model's expected input size (299 x 299)
        UIGraphicsBeginImageContextWithOptions(CGSize(width: 299, height: 299), true, 2.0)
        image.draw(in: CGRect(x: 0, y: 0, width: 299, height: 299))
        let newImage = UIGraphicsGetImageFromCurrentImageContext()!
        UIGraphicsEndImageContext()

        // Convert the UIImage into a CVPixelBuffer for Core ML
        let attrs = [kCVPixelBufferCGImageCompatibilityKey: kCFBooleanTrue, kCVPixelBufferCGBitmapContextCompatibilityKey: kCFBooleanTrue] as CFDictionary
        var pixelBuffer : CVPixelBuffer?
        let status = CVPixelBufferCreate(kCFAllocatorDefault, Int(newImage.size.width), Int(newImage.size.height), kCVPixelFormatType_32ARGB, attrs, &pixelBuffer)
        guard (status == kCVReturnSuccess) else {
            return
        }

        CVPixelBufferLockBaseAddress(pixelBuffer!, CVPixelBufferLockFlags(rawValue: 0))
        let pixelData = CVPixelBufferGetBaseAddress(pixelBuffer!)
        let rgbColorSpace = CGColorSpaceCreateDeviceRGB()
        let context = CGContext(data: pixelData, width: Int(newImage.size.width), height: Int(newImage.size.height), bitsPerComponent: 8, bytesPerRow: CVPixelBufferGetBytesPerRow(pixelBuffer!), space: rgbColorSpace, bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue)
        context?.translateBy(x: 0, y: newImage.size.height)
        context?.scaleBy(x: 1.0, y: -1.0)

        UIGraphicsPushContext(context!)
        newImage.draw(in: CGRect(x: 0, y: 0, width: newImage.size.width, height: newImage.size.height))
        UIGraphicsPopContext()
        CVPixelBufferUnlockBaseAddress(pixelBuffer!, CVPixelBufferLockFlags(rawValue: 0))
        imageView.image = newImage

        // Core ML
        guard let prediction = try? model.prediction(image: pixelBuffer!) else {
            return
        }
        classifier.text = "I think this is a \(prediction.classLabel)."
    }
}
You need to hold a strong reference to the picker:

let cameraPicker = UIImagePickerController()

by making it an instance variable, rather than a local created inside a method such as viewDidLoad, for the delegate methods to be called.
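As an aside (not part of the original answer), the manual pixel-buffer conversion above can be avoided by wrapping the model in the Vision framework, which handles resizing and color conversion for you. A minimal sketch, assuming the same generated VGG16 class and that this method lives inside secondViewController:

import Vision

func classify(_ image: UIImage) {
    guard let cgImage = image.cgImage,
          let visionModel = try? VNCoreMLModel(for: VGG16().model) else { return }

    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        // Take the top classification, if any
        guard let result = (request.results as? [VNClassificationObservation])?.first else { return }
        DispatchQueue.main.async {
            self.classifier.text = "I think this is a \(result.identifier)."
        }
    }

    let handler = VNImageRequestHandler(cgImage: cgImage, orientation: .up)
    DispatchQueue.global(qos: .userInitiated).async {
        try? handler.perform([request])
    }
}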

No preview with the downloaded image in ARKIT

I have been working on an ARKit app and I achieved my goal of detecting pictures in the scene and playing back video on them.
The problem occurs when I try to fetch the image from the internet.
* The image gets detected and playback starts (I can hear the audio), but no video ever shows in the scene. (I reverted the code below to where I started.)
* What I actually want is to update the reference images and the playback videos on the fly once my app is in the App Store.
Kindly tell me the best solution... thanks!
Below is my complete code:
import UIKit
import SceneKit
import SpriteKit
import ARKit
import Alamofire
import AlamofireImage

class ViewController: UIViewController, ARSCNViewDelegate {

    @IBOutlet var sceneView: ARSCNView!

    var imageServer = [UIImage]()
    var trackedImages = Set<ARReferenceImage>()
    let configuration = ARImageTrackingConfiguration()
    let videoNode = SKVideoNode(url: URL(fileURLWithPath: "https://example.com/video1.mp4"))

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.delegate = self
        sceneView.showsStatistics = true
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        fetchImage {
            self.configuration.trackingImages = self.trackedImages
            self.configuration.maximumNumberOfTrackedImages = 1
            self.sceneView.session.run(self.configuration)
        }
    }

    override func viewWillDisappear(_ animated: Bool) {
        super.viewWillDisappear(animated)
        sceneView.session.pause()
    }

    func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode? {
        let node = SCNNode()
        if let imageAnchor = anchor as? ARImageAnchor {
            videoNode.play()
            let videoScene = SKScene(size: CGSize(width: 480, height: 360))
            videoNode.position = CGPoint(x: videoScene.size.width / 2, y: videoScene.size.height / 2)
            videoNode.yScale = -1.0
            videoScene.addChild(videoNode)
            let plane = SCNPlane(width: imageAnchor.referenceImage.physicalSize.width, height: imageAnchor.referenceImage.physicalSize.height)
            plane.firstMaterial?.diffuse.contents = videoScene
            let planeNode = SCNNode(geometry: plane)
            planeNode.eulerAngles.x = -.pi / 2
            node.addChildNode(planeNode)
        }
        return node
    }

    func fetchImage(completion: @escaping () -> ()) {
        Alamofire.request("https://example.com/four.png").responseImage { response in
            debugPrint(response)
            print(response.request as Any)
            print(response.response as Any)
            debugPrint(response.result)
            if let image = response.result.value {
                print("image downloaded: \(image)")
                self.imageServer.append(image)
                print("ImageServer append Successful")
                print("The new number of images = \(self.imageServer.count)")
            }
            completion()
        }
    }
}
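For the "update the reference images on the go" part, downloaded images can be converted into ARReferenceImages at runtime. A minimal sketch, assuming the real-world width of each printed image is known (the 0.1 m below is a placeholder assumption):

// Sketch: turn downloaded UIImages into ARReferenceImages.
// physicalWidth is the real-world width of the printed image in meters;
// 0.1 is a placeholder you must replace with the actual measurement.
func referenceImages(from images: [UIImage]) -> Set<ARReferenceImage> {
    var references = Set<ARReferenceImage>()
    for image in images {
        guard let cgImage = image.cgImage else { continue }
        references.insert(ARReferenceImage(cgImage, orientation: .up, physicalWidth: 0.1))
    }
    return references
}

// e.g. inside fetchImage's completion, before running the session:
// self.trackedImages = self.referenceImages(from: self.imageServer)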

Check if imageView is empty

I have a camera feature in my app, and when you take a picture it places that picture into an imageView. I have a button that I've hidden, and I want the button to be unhidden once the image is placed in the imageView.
@IBOutlet weak var imageView: UIImageView!
@IBOutlet weak var toGoFurther: UIButton!

override func viewDidLoad() {
    super.viewDidLoad()

    toGoFurther.hidden = true
    if (self.imageView.image != nil){
        toGoFurther.hidden = false
    }

    let testObject = PFObject(className: "TestObject")
    testObject["foo"] = "bar"
    testObject.saveInBackgroundWithBlock { (success: Bool, error: NSError?) -> Void in
        print("Object has been saved.")
    }
}

override func didReceiveMemoryWarning() {
    super.didReceiveMemoryWarning()
    // Dispose of any resources that can be recreated.
}

@IBAction func continueNextPage(sender: AnyObject) {
}

@IBAction func takePhoto(sender: AnyObject) {
    if !UIImagePickerController.isSourceTypeAvailable(UIImagePickerControllerSourceType.Camera){
        return
    }

    let imagePicker = UIImagePickerController()
    imagePicker.delegate = self
    imagePicker.sourceType = UIImagePickerControllerSourceType.Camera;

    //Create camera overlay
    let pickerFrame = CGRectMake(0, UIApplication.sharedApplication().statusBarFrame.size.height, imagePicker.view.bounds.width, imagePicker.view.bounds.height - imagePicker.navigationBar.bounds.size.height - imagePicker.toolbar.bounds.size.height)
    let squareFrame = CGRectMake(pickerFrame.width/2 - 400/2, pickerFrame.height/2 - 400/2, 640, 640)
    UIGraphicsBeginImageContext(pickerFrame.size)

    let context = UIGraphicsGetCurrentContext()
    CGContextSaveGState(context)
    CGContextAddRect(context, CGContextGetClipBoundingBox(context))
    CGContextMoveToPoint(context, squareFrame.origin.x, squareFrame.origin.y)
    CGContextAddLineToPoint(context, squareFrame.origin.x + squareFrame.width, squareFrame.origin.y)
    CGContextAddLineToPoint(context, squareFrame.origin.x + squareFrame.width, squareFrame.origin.y + squareFrame.size.height)
    CGContextAddLineToPoint(context, squareFrame.origin.x, squareFrame.origin.y + squareFrame.size.height)
    CGContextAddLineToPoint(context, squareFrame.origin.x, squareFrame.origin.y)
    CGContextEOClip(context)
    CGContextMoveToPoint(context, pickerFrame.origin.x, pickerFrame.origin.y)
    CGContextSetRGBFillColor(context, 0, 0, 0, 1)
    CGContextFillRect(context, pickerFrame)
    CGContextRestoreGState(context)
    let overlayImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext();

    let overlayView = UIImageView(frame: pickerFrame)
    overlayView.image = overlayImage
    imagePicker.cameraOverlayView = overlayView
    self.presentViewController(imagePicker, animated: true, completion: nil)
}

func imagePickerController(picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [String : AnyObject]) {
    imageView.image = info[UIImagePickerControllerOriginalImage] as? UIImage
    dismissViewControllerAnimated(true, completion: nil)
}
Try putting this in the viewDidAppear method. That should do it.
override func viewDidAppear(animated: Bool) {
    super.viewDidAppear(animated)
    toGoFurther.hidden = self.imageView.image == nil
}
So, you want to unhide your button when the image is captured and set on the view. You need to do this in your didFinishPickingMediaWithInfo function, like this:
func imagePickerController(picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [String : AnyObject]) {
    imageView.image = info[UIImagePickerControllerOriginalImage] as? UIImage
    // dismiss the image picker controller window, then unhide the button
    self.dismissViewControllerAnimated(true, completion: {
        self.toGoFurther.hidden = false
    })
}
