run a function when another ends, iOS, Swift

I'm trying to call a function when another one ends, but I keep getting an error telling me the image is not above 0 pixels high and wide, so I presume the image isn't there yet when the function is called. When I add a breakpoint at the call to the OCR function, the app hasn't shown the image on screen at that point, which is why I came to that conclusion.
Here is the error I get from the OCR:
NSAssert(widthOfImage > 0 && heightOfImage > 0, @"Passed image must not be empty - it should be at least 1px tall and wide");
Below is my console readout, where I placed prints to see the flow:
Tony 1 Requested....
Tony 3 run OCR....
Tony 2 Handle Rectangle....
Tony: Corected image here......
(lldb)
Below is my code. Should I add a completion handler that makes sure the function is not called until the image is in place?
func startOCR() {
swiftOCRInstance.recognize(correctedImageView.image!) {recognizedString in
print(recognizedString)
self.classificationLabel.text = recognizedString
}
}
lazy var rectanglesRequest: VNDetectRectanglesRequest = {
print("Tony 1 Requested....")
return VNDetectRectanglesRequest(completionHandler: self.handleRectangles)
}()
func handleRectangles(request: VNRequest, error: Error?) {
guard let observations = request.results as? [VNRectangleObservation]
else { fatalError("unexpected result type from VNDetectRectanglesRequest") }
guard let detectedRectangle = observations.first else {
DispatchQueue.main.async {
self.classificationLabel.text = "No rectangles detected."
}
return
}
let imageSize = inputImage.extent.size
// Verify detected rectangle is valid.
let boundingBox = detectedRectangle.boundingBox.scaled(to: imageSize)
guard inputImage.extent.contains(boundingBox)
else { print("invalid detected rectangle"); return }
// Rectify the detected image and reduce it to inverted grayscale for applying model.
let topLeft = detectedRectangle.topLeft.scaled(to: imageSize)
let topRight = detectedRectangle.topRight.scaled(to: imageSize)
let bottomLeft = detectedRectangle.bottomLeft.scaled(to: imageSize)
let bottomRight = detectedRectangle.bottomRight.scaled(to: imageSize)
let correctedImage = inputImage
.cropped(to: boundingBox)
.applyingFilter("CIPerspectiveCorrection", parameters: [
"inputTopLeft": CIVector(cgPoint: topLeft),
"inputTopRight": CIVector(cgPoint: topRight),
"inputBottomLeft": CIVector(cgPoint: bottomLeft),
"inputBottomRight": CIVector(cgPoint: bottomRight)
])
// .applyingFilter("CIColorControls", parameters: [
// kCIInputSaturationKey: 0,
// kCIInputContrastKey: 32
// ])
// Show the pre-processed image
DispatchQueue.main.async {
self.correctedImageView.image = UIImage(ciImage: correctedImage)
if self.correctedImageView.image != nil {
print("Tony 2 Handle Rectangle....")
print("Tony: Corected image here......")
}else {
print("Tony: No corected image......")
}
}
print("Tony 3 run OCR....")
self.startOCR()
}
I also get a purple runtime warning that says UIImage should be used on the main thread, shown in the pic below...

You need to use the main thread whenever you're working with UIImageView, or any other UIKit class (unless otherwise noted, such as when constructing UIImages on background threads).
You can use GCD to do this on the main thread.
DispatchQueue.main.async {
//Handle UIKit actions here
}
Source: Apple Documentation
Threading Considerations:
Manipulations to your application’s user
interface must occur on the main thread. Thus, you should always call
the methods of the UIView class from code running in the main thread
of your application. The only time this may not be strictly necessary
is when creating the view object itself, but all other manipulations
should occur on the main thread.
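In your case, that means startOCR() currently runs before the main-queue block that assigns correctedImageView.image has executed. A minimal sketch (reusing only the code from the question, nothing new) is to move the call inside that block, so the OCR starts only once the image view actually has an image:
DispatchQueue.main.async {
    self.correctedImageView.image = UIImage(ciImage: correctedImage)
    if self.correctedImageView.image != nil {
        print("Tony 2 Handle Rectangle....")
        print("Tony: Corected image here......")
        // Start the OCR only after the image has been assigned.
        print("Tony 3 run OCR....")
        self.startOCR()
    } else {
        print("Tony: No corected image......")
    }
}
Whether SwiftOCR is happy with a CIImage-backed UIImage is a separate question; if it still reports an empty image, render the CIImage into a CGImage-backed UIImage (e.g. via CIContext.createCGImage) before passing it on.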

Related

How to combine MTLTextures into the currentDrawable

I am new to using Metal, but I have been following the tutorial here that takes the camera output and renders it onto the screen using Metal.
Now I want to take an image, turn it into an MTLTexture, and position and render that texture on top of the camera output.
My current rendering code is as follows:
private func render(texture: MTLTexture, withCommandBuffer commandBuffer: MTLCommandBuffer, device: MTLDevice) {
guard
let currentRenderPassDescriptor = metalView.currentRenderPassDescriptor,
let currentDrawable = metalView.currentDrawable,
let renderPipelineState = renderPipelineState,
let encoder = commandBuffer.makeRenderCommandEncoder(descriptor: currentRenderPassDescriptor)
else {
semaphore.signal()
return
}
encoder.pushDebugGroup("RenderFrame")
encoder.setRenderPipelineState(renderPipelineState)
encoder.setFragmentTexture(texture, index: 0)
encoder.drawPrimitives(type: .triangleStrip, vertexStart: 0, vertexCount: 4, instanceCount: 1)
encoder.popDebugGroup()
encoder.endEncoding()
commandBuffer.addScheduledHandler { [weak self] (buffer) in
guard let unwrappedSelf = self else { return }
unwrappedSelf.didRenderTexture(texture, withCommandBuffer: buffer, device: device)
unwrappedSelf.semaphore.signal()
}
commandBuffer.present(currentDrawable)
commandBuffer.commit()
}
I know that I can convert a UIImage to a MTLTexture using the following code:
let textureLoader = MTKTextureLoader(device: device)
let cgImage = UIImage(named: "myImage")!.cgImage!
let imageTexture = try! textureLoader.newTexture(cgImage: cgImage, options: nil)
So now I have two MTLTextures. Is there a simple function that allows me to combine them? I've been trying to search online and someone mentioned a function called over, but I haven't actually been able to find that one. Any help would be greatly appreciated.
You can simply do this inside the shader by adding or multiplying color values. I guess that's what shaders are for.
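For example (a sketch only, not a drop-in solution): inside the render(texture:withCommandBuffer:device:) function from the question you could bind the camera texture and your image texture to different fragment texture indices, and let the fragment shader sample both and add, multiply or mix the colors. The imageTexture name below is the texture created with MTKTextureLoader above and is assumed to be stored somewhere reachable from the render function:
// Bind both textures for the same full-screen quad draw call.
encoder.setFragmentTexture(texture, index: 0)        // camera frame, as before
encoder.setFragmentTexture(imageTexture, index: 1)   // the texture made from the UIImage
encoder.drawPrimitives(type: .triangleStrip, vertexStart: 0, vertexCount: 4, instanceCount: 1)
The fragment shader then declares a second texture2d parameter at [[texture(1)]] and combines the two sampled colors however you like.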

CIImage extent crash

I got this method that grabs the camera buffer image to do some image processing every few seconds.
The method below runs fine in all my testing, yet it is present in crash reports with a significant number of crashes.
final func cameraBufferProcessing () {
DispatchQueue.global(qos: .background).sync { [unowned self] in
if let bufferImage = self.cameraBufferImage?.oriented(.downMirrored) {
let heightPropotion : CGFloat = bufferImage.extent.height * 0.5 //Crashes on this line
if let cgImg = self.context.createCGImage(bufferImage.clampedToExtent(), from: CGRect(x: 0, y: heightPropotion, width: bufferImage.extent.width, height: heightPropotion))
{
DispatchQueue.main.async {
// use with cgImg to do image processing
}
}
} else {
// do something else
}
}
}
Crash reports point to the third line above:
let heightPropotion : CGFloat = bufferImage.extent.height * 0.5
Crash seems to be related to extent.
What could be the cause here?

ARKIT: Move Object with PanGesture (the right way)

I've been reading plenty of StackOverflow answers on how to move an object by dragging it across the screen. Some use hit tests against .featurePoints, some use the gesture translation or just keep track of the lastPosition of the object. But honestly, none of them work the way everyone expects them to.
Hit testing against .featurePoints just makes the object jump all around, because you don't always hit a feature point when dragging your finger. I don't understand why everyone keeps suggesting this.
Solutions like this one work: Dragging SCNNode in ARKit Using SceneKit
But the object doesn't really follow your finger, and the moment you take a few steps or change the angle of the object or the camera and then try to move the object, the x and z are all inverted, which makes total sense given how it's done.
I really want to move objects as well as the Apple demo does, but when I look at the code from Apple it is insanely weird and overcomplicated; I can't understand a bit of it. Their technique for moving the object so beautifully is not even close to what everyone proposes online.
https://developer.apple.com/documentation/arkit/handling_3d_interaction_and_ui_controls_in_augmented_reality
There's gotta be a simpler way to do it.
Short answer:
To get the nice and fluid dragging effect from the Apple demo project, you will have to do it the way the Apple demo project does (Handling 3D Interaction). On the other hand, I agree with you that the code can be confusing the first time you look at it. It is not easy at all to calculate the correct movement for an object placed on a floor plane, consistently and from every location or viewing angle. It's a complex piece of code that produces this superb dragging effect. Apple did a great job achieving this, but didn't make it too easy for us.
Full Answer:
Stripping down the AR Interaction template to your needs results in a nightmare - but it should work too if you invest enough time. If you prefer to begin from scratch, basically start with the common Swift ARKit/SceneKit Xcode template (the one containing the spaceship).
You will also require the entire AR Interaction Template Project from Apple. (The link is included in the SO question)
In the end you should be able to drag something called a VirtualObject, which is in fact a special SCNNode. In addition you will have a nice focus square that can be useful for whatever purpose - like initially placing objects or adding a floor or a wall. (Some of the code for the dragging effect and the focus square usage is merged or linked together - doing it without the focus square would actually be more complicated.)
Get started:
Copy the following files from the AR Interaction template to your empty project:
Utilities.swift (usually I name this file Extensions.swift, it contains some basic extensions that are required)
FocusSquare.swift
FocusSquareSegment.swift
ThresholdPanGesture.swift
VirtualObject.swift
VirtualObjectLoader.swift
VirtualObjectARView.swift
Add the UIGestureRecognizerDelegate to the ViewController class definition like so:
class ViewController: UIViewController, ARSCNViewDelegate, UIGestureRecognizerDelegate {
Add this code to your ViewController.swift, in the definitions section, right before viewDidLoad:
// MARK: for the Focus Square
// SUPER IMPORTANT: the screenCenter must be defined this way
var focusSquare = FocusSquare()
var screenCenter: CGPoint {
let bounds = sceneView.bounds
return CGPoint(x: bounds.midX, y: bounds.midY)
}
var isFocusSquareEnabled : Bool = true
// *** FOR OBJECT DRAGGING PAN GESTURE - APPLE ***
/// The tracked screen position used to update the `trackedObject`'s position in `updateObjectToCurrentTrackingPosition()`.
private var currentTrackingPosition: CGPoint?
/**
The object that has been most recently interacted with.
The `selectedObject` can be moved at any time with the tap gesture.
*/
var selectedObject: VirtualObject?
/// The object that is tracked for use by the pan and rotation gestures.
private var trackedObject: VirtualObject? {
didSet {
guard trackedObject != nil else { return }
selectedObject = trackedObject
}
}
/// Developer setting to translate assuming the detected plane extends infinitely.
let translateAssumingInfinitePlane = true
// *** FOR OBJECT DRAGGING PAN GESTURE - APPLE ***
In viewDidLoad, before you setup the scene add this code:
// *** FOR OBJECT DRAGGING PAN GESTURE - APPLE ***
let panGesture = ThresholdPanGesture(target: self, action: #selector(didPan(_:)))
panGesture.delegate = self
// Add gestures to the `sceneView`.
sceneView.addGestureRecognizer(panGesture)
// *** FOR OBJECT DRAGGING PAN GESTURE - APPLE ***
At the very end of your ViewController.swift add this code:
// MARK: - Pan Gesture Block
// *** FOR OBJECT DRAGGING PAN GESTURE - APPLE ***
@objc
func didPan(_ gesture: ThresholdPanGesture) {
switch gesture.state {
case .began:
// Check for interaction with a new object.
if let object = objectInteracting(with: gesture, in: sceneView) {
trackedObject = object // as? VirtualObject
}
case .changed where gesture.isThresholdExceeded:
guard let object = trackedObject else { return }
let translation = gesture.translation(in: sceneView)
let currentPosition = currentTrackingPosition ?? CGPoint(sceneView.projectPoint(object.position))
// The `currentTrackingPosition` is used to update the `selectedObject` in `updateObjectToCurrentTrackingPosition()`.
currentTrackingPosition = CGPoint(x: currentPosition.x + translation.x, y: currentPosition.y + translation.y)
gesture.setTranslation(.zero, in: sceneView)
case .changed:
// Ignore changes to the pan gesture until the threshold for displacement has been exceeded.
break
case .ended:
// Update the object's anchor when the gesture ended.
guard let existingTrackedObject = trackedObject else { break }
addOrUpdateAnchor(for: existingTrackedObject)
fallthrough
default:
// Clear the current position tracking.
currentTrackingPosition = nil
trackedObject = nil
}
}
// - MARK: Object anchors
/// - Tag: AddOrUpdateAnchor
func addOrUpdateAnchor(for object: VirtualObject) {
// If the anchor is not nil, remove it from the session.
if let anchor = object.anchor {
sceneView.session.remove(anchor: anchor)
}
// Create a new anchor with the object's current transform and add it to the session
let newAnchor = ARAnchor(transform: object.simdWorldTransform)
object.anchor = newAnchor
sceneView.session.add(anchor: newAnchor)
}
private func objectInteracting(with gesture: UIGestureRecognizer, in view: ARSCNView) -> VirtualObject? {
for index in 0..<gesture.numberOfTouches {
let touchLocation = gesture.location(ofTouch: index, in: view)
// Look for an object directly under the `touchLocation`.
if let object = virtualObject(at: touchLocation) {
return object
}
}
// As a last resort look for an object under the center of the touches.
// return virtualObject(at: gesture.center(in: view))
return virtualObject(at: (gesture.view?.center)!)
}
/// Hit tests against the `sceneView` to find an object at the provided point.
func virtualObject(at point: CGPoint) -> VirtualObject? {
// let hitTestOptions: [SCNHitTestOption: Any] = [.boundingBoxOnly: true]
let hitTestResults = sceneView.hitTest(point, options: [SCNHitTestOption.categoryBitMask: 0b00000010, SCNHitTestOption.searchMode: SCNHitTestSearchMode.any.rawValue as NSNumber])
// let hitTestOptions: [SCNHitTestOption: Any] = [.boundingBoxOnly: true]
// let hitTestResults = sceneView.hitTest(point, options: hitTestOptions)
return hitTestResults.lazy.compactMap { result in
return VirtualObject.existingObjectContainingNode(result.node)
}.first
}
/**
If a drag gesture is in progress, update the tracked object's position by
converting the 2D touch location on screen (`currentTrackingPosition`) to
3D world space.
This method is called per frame (via `SCNSceneRendererDelegate` callbacks),
allowing drag gestures to move virtual objects regardless of whether one
drags a finger across the screen or moves the device through space.
- Tag: updateObjectToCurrentTrackingPosition
*/
@objc
func updateObjectToCurrentTrackingPosition() {
guard let object = trackedObject, let position = currentTrackingPosition else { return }
translate(object, basedOn: position, infinitePlane: translateAssumingInfinitePlane, allowAnimation: true)
}
/// - Tag: DragVirtualObject
func translate(_ object: VirtualObject, basedOn screenPos: CGPoint, infinitePlane: Bool, allowAnimation: Bool) {
guard let cameraTransform = sceneView.session.currentFrame?.camera.transform,
let result = smartHitTest(screenPos,
infinitePlane: infinitePlane,
objectPosition: object.simdWorldPosition,
allowedAlignments: [ARPlaneAnchor.Alignment.horizontal]) else { return }
let planeAlignment: ARPlaneAnchor.Alignment
if let planeAnchor = result.anchor as? ARPlaneAnchor {
planeAlignment = planeAnchor.alignment
} else if result.type == .estimatedHorizontalPlane {
planeAlignment = .horizontal
} else if result.type == .estimatedVerticalPlane {
planeAlignment = .vertical
} else {
return
}
/*
Plane hit test results are generally smooth. If we did *not* hit a plane,
smooth the movement to prevent large jumps.
*/
let transform = result.worldTransform
let isOnPlane = result.anchor is ARPlaneAnchor
object.setTransform(transform,
relativeTo: cameraTransform,
smoothMovement: !isOnPlane,
alignment: planeAlignment,
allowAnimation: allowAnimation)
}
// *** FOR OBJECT DRAGGING PAN GESTURE - APPLE ***
Add some Focus Square Code
// MARK: - Focus Square (code by Apple, some by me)
func updateFocusSquare(isObjectVisible: Bool) {
if isObjectVisible {
focusSquare.hide()
} else {
focusSquare.unhide()
}
// Perform hit testing only when ARKit tracking is in a good state.
if let camera = sceneView.session.currentFrame?.camera, case .normal = camera.trackingState,
let result = smartHitTest(screenCenter) {
DispatchQueue.main.async {
self.sceneView.scene.rootNode.addChildNode(self.focusSquare)
self.focusSquare.state = .detecting(hitTestResult: result, camera: camera)
}
} else {
DispatchQueue.main.async {
self.focusSquare.state = .initializing
self.sceneView.pointOfView?.addChildNode(self.focusSquare)
}
}
}
And add some control Functions:
func hideFocusSquare() { DispatchQueue.main.async { self.updateFocusSquare(isObjectVisible: true) } } // to hide the focus square
func showFocusSquare() { DispatchQueue.main.async { self.updateFocusSquare(isObjectVisible: false) } } // to show the focus square
From VirtualObjectARView.swift, COPY the entire smartHitTest function into ViewController.swift (so it exists twice):
func smartHitTest(_ point: CGPoint,
infinitePlane: Bool = false,
objectPosition: float3? = nil,
allowedAlignments: [ARPlaneAnchor.Alignment] = [.horizontal, .vertical]) -> ARHitTestResult? {
// Perform the hit test.
let results = sceneView.hitTest(point, types: [.existingPlaneUsingGeometry, .estimatedVerticalPlane, .estimatedHorizontalPlane])
// 1. Check for a result on an existing plane using geometry.
if let existingPlaneUsingGeometryResult = results.first(where: { $0.type == .existingPlaneUsingGeometry }),
let planeAnchor = existingPlaneUsingGeometryResult.anchor as? ARPlaneAnchor, allowedAlignments.contains(planeAnchor.alignment) {
return existingPlaneUsingGeometryResult
}
if infinitePlane {
// 2. Check for a result on an existing plane, assuming its dimensions are infinite.
// Loop through all hits against infinite existing planes and either return the
// nearest one (vertical planes) or return the nearest one which is within 5 cm
// of the object's position.
let infinitePlaneResults = sceneView.hitTest(point, types: .existingPlane)
for infinitePlaneResult in infinitePlaneResults {
if let planeAnchor = infinitePlaneResult.anchor as? ARPlaneAnchor, allowedAlignments.contains(planeAnchor.alignment) {
if planeAnchor.alignment == .vertical {
// Return the first vertical plane hit test result.
return infinitePlaneResult
} else {
// For horizontal planes we only want to return a hit test result
// if it is close to the current object's position.
if let objectY = objectPosition?.y {
let planeY = infinitePlaneResult.worldTransform.translation.y
if objectY > planeY - 0.05 && objectY < planeY + 0.05 {
return infinitePlaneResult
}
} else {
return infinitePlaneResult
}
}
}
}
}
// 3. As a final fallback, check for a result on estimated planes.
let vResult = results.first(where: { $0.type == .estimatedVerticalPlane })
let hResult = results.first(where: { $0.type == .estimatedHorizontalPlane })
switch (allowedAlignments.contains(.horizontal), allowedAlignments.contains(.vertical)) {
case (true, false):
return hResult
case (false, true):
// Allow fallback to horizontal because we assume that objects meant for vertical placement
// (like a picture) can always be placed on a horizontal surface, too.
return vResult ?? hResult
case (true, true):
if hResult != nil && vResult != nil {
return hResult!.distance < vResult!.distance ? hResult! : vResult!
} else {
return hResult ?? vResult
}
default:
return nil
}
}
You might see some errors in the copied function regarding the hitTest. Just correct it like so:
hitTest... // which gives an Error
sceneView.hitTest... // this should correct it
Implement the renderer updateAtTime function and add these lines:
func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
// For the Focus Square
if isFocusSquareEnabled { showFocusSquare() }
self.updateObjectToCurrentTrackingPosition() // *** FOR OBJECT DRAGGING PAN GESTURE - APPLE ***
}
At this point you might still see about a dozen errors and warnings in the imported files; this can happen when you do this in Swift 5 and have some Swift 4 files. Just let Xcode correct the errors. (It's all about renaming some code statements - Xcode knows best.)
Go in VirtualObject.swift and search for this code block:
if smoothMovement {
let hitTestResultDistance = simd_length(positionOffsetFromCamera)
// Add the latest position and keep up to 10 recent distances to smooth with.
recentVirtualObjectDistances.append(hitTestResultDistance)
recentVirtualObjectDistances = Array(recentVirtualObjectDistances.suffix(10))
let averageDistance = recentVirtualObjectDistances.average!
let averagedDistancePosition = simd_normalize(positionOffsetFromCamera) * averageDistance
simdPosition = cameraWorldPosition + averagedDistancePosition
} else {
simdPosition = cameraWorldPosition + positionOffsetFromCamera
}
Comment out or replace this entire block with this single line of code:
simdPosition = cameraWorldPosition + positionOffsetFromCamera
At this point you should be able to compile the project and run it on a device. You should see the Spaceship and a yellow focus square that should already work.
To start placing an object that you can drag, you need some function to create a so-called VirtualObject, as I said in the beginning.
Use this example function to test (add it somewhere in the view controller):
override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent?) {
if focusSquare.state != .initializing {
let position = SCNVector3(focusSquare.lastPosition!)
// *** FOR OBJECT DRAGGING PAN GESTURE - APPLE ***
let testObject = VirtualObject() // give it some name, when you dont have anything to load
testObject.geometry = SCNCone(topRadius: 0.0, bottomRadius: 0.2, height: 0.5)
testObject.geometry?.firstMaterial?.diffuse.contents = UIColor.red
testObject.categoryBitMask = 0b00000010
testObject.name = "test"
testObject.castsShadow = true
testObject.position = position
sceneView.scene.rootNode.addChildNode(testObject)
}
}
Note: everything you want to drag on a plane must be set up as a VirtualObject() instead of an SCNNode(). Everything else about a VirtualObject behaves the same as an SCNNode.
(You can also add some common SCNNode extensions, like one that loads a scene by name - useful when referencing imported models; see the sketch below.)
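For reference, a minimal sketch of such an extension could look like this (the art.scnassets path and the loadNode name are placeholders, not part of Apple's template):
import SceneKit

extension SCNNode {
    /// Loads the first node with the given name from a .scn file inside art.scnassets.
    /// Returns nil if the scene file or the node cannot be found.
    static func loadNode(named nodeName: String, inSceneNamed sceneName: String) -> SCNNode? {
        guard let scene = SCNScene(named: "art.scnassets/\(sceneName).scn") else { return nil }
        return scene.rootNode.childNode(withName: nodeName, recursively: true)
    }
}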
Have fun!
I added some of my ideas to Claessons's answer. I noticed some lag when dragging the node around and found that the node could not keep up with the finger's movement.
To make the node move more smoothly, I added a variable that keeps track of the node that is currently being moved, and set the position to the location of the touch.
var selectedNode: SCNNode?
Also, I set a .categoryBitMask value to specify the category of nodes that I want to edit (move). The default bit mask value is 1.
The reason we set the category bit mask is to distinguish between different kinds of nodes and specify those that you wish to select (to move around, etc.).
enum CategoryBitMask: Int {
case categoryToSelect = 2 // 010
case otherCategoryToSelect = 4 // 100
// you can add more bit masks below . . .
}
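For a node to be picked up by the hit test below, it needs that bit mask assigned when it is created. A minimal sketch with a hypothetical boxNode (any SCNNode you want to be movable is set up the same way):
let boxNode = SCNNode(geometry: SCNBox(width: 0.1, height: 0.1, length: 0.1, chamferRadius: 0))
boxNode.categoryBitMask = CategoryBitMask.categoryToSelect.rawValue
boxNode.position = SCNVector3(0, 0, -0.5)
sceneView.scene.rootNode.addChildNode(boxNode)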
Then, I added a UILongPressGestureRecognizer in viewDidLoad().
let longPressRecognizer = UILongPressGestureRecognizer(target: self, action: #selector(longPressed))
self.sceneView.addGestureRecognizer(longPressRecognizer)
The following is the UILongPressGestureRecognizer I used to detect a long press, which initiates the dragging of the node.
First, obtain the touch location from the recognizerView
@objc func longPressed(recognizer: UILongPressGestureRecognizer) {
guard let recognizerView = recognizer.view as? ARSCNView else { return }
let touch = recognizer.location(in: recognizerView)
The following code runs once when a long press is detected.
Here, we perform a hitTest to select the node that has been touched. Note that here, we specify a .categoryBitMask option to select only nodes of the following category: CategoryBitMask.categoryToSelect
// Runs once when long press is detected.
if recognizer.state == .began {
// perform a hitTest
let hitTestResult = self.sceneView.hitTest(touch, options: [SCNHitTestOption.categoryBitMask: CategoryBitMask.categoryToSelect])
guard let hitNode = hitTestResult.first?.node else { return }
// Set hitNode as selected
self.selectedNode = hitNode
The following code will run periodically until the user releases the finger.
Here we perform another hitTest to obtain the plane you want the node to move along.
// Runs periodically after .began
} else if recognizer.state == .changed {
// make sure a node has been selected from .began
guard let hitNode = self.selectedNode else { return }
// perform a hitTest to obtain the plane
let hitTestPlane = self.sceneView.hitTest(touch, types: .existingPlane)
guard let hitPlane = hitTestPlane.first else { return }
hitNode.position = SCNVector3(hitPlane.worldTransform.columns.3.x,
hitNode.position.y,
hitPlane.worldTransform.columns.3.z)
Make sure you deselect the node when the finger is removed from the screen.
// Runs when finger is removed from screen. Only once.
} else if recognizer.state == .ended || recognizer.state == .cancelled || recognizer.state == .failed{
guard let hitNode = self.selectedNode else { return }
// Undo selection
self.selectedNode = nil
}
}
Kind of late answer but I know I had some problems solving this as well. Eventually I figured out a way to do it by performing two separate hit tests whenever my gesture recognizer is called.
First, I perform a hit test for my 3D object to detect whether I'm currently pressing an object or not (you would otherwise get results for pressing feature points, planes, etc. if you don't specify any options). I do this by using the .categoryBitMask value of SCNHitTestOption.
Keep in mind you have to assign the correct .categoryBitMask value to your object node and all its child nodes beforehand in order for the hit test to work. I declare an enum I can use for that:
enum BodyType : Int {
case ObjectModel = 2;
}
As becomes apparent from the answer to my question about .categoryBitMask values that I posted here, it is important to consider what values you assign to your bit mask.
Below is the code I use in conjunction with a UILongPressGestureRecognizer in order to select the object I'm currently pressing:
guard let recognizerView = recognizer.view as? ARSCNView else { return }
let touch = recognizer.location(in: recognizerView)
let hitTestResult = self.sceneView.hitTest(touch, options: [SCNHitTestOption.categoryBitMask: BodyType.ObjectModel.rawValue])
guard let modelNodeHit = hitTestResult.first?.node else { return }
After that I perform a 2nd hit test in order to find a plane I'm pressing on.
You can use the type .existingPlaneUsingExtent if you don't want to move your object further than the edge of a plane, or .existingPlane if you want to move your object indefinitely along a detected plane surface.
var planeHit : ARHitTestResult!
if recognizer.state == .changed {
let hitTestPlane = self.sceneView.hitTest(touch, types: .existingPlane)
guard hitTestPlane.first != nil else { return }
planeHit = hitTestPlane.first!
modelNodeHit.position = SCNVector3(planeHit.worldTransform.columns.3.x,modelNodeHit.position.y,planeHit.worldTransform.columns.3.z)
}else if recognizer.state == .ended || recognizer.state == .cancelled || recognizer.state == .failed{
modelNodeHit.position = SCNVector3(planeHit.worldTransform.columns.3.x,modelNodeHit.position.y,planeHit.worldTransform.columns.3.z)
}
I made a GitHub repo when I tried this out while also experimenting with ARAnchors. You can check it out if you want to see my method in practice, but I did not make it with the intention of anyone else using it so it's quite unfinished. Also, the development branch should support some functionality for an object with more childNodes.
EDIT: ==================================
For clarification: if you want to use a .scn object instead of a regular geometry, you need to iterate through all the child nodes of the object when creating it, setting the bit mask of each child like this:
let objectModelScene = SCNScene(named:
"art.scnassets/object/object.scn")!
let objectNode = objectModelScene.rootNode.childNode(
withName: "theNameOfTheParentNodeOfTheObject", recursively: true)!
objectNode.categoryBitMask = BodyType.ObjectModel.rawValue
objectNode.enumerateChildNodes { (node, _) in
node.categoryBitMask = BodyType.ObjectModel.rawValue
}
Then, in the gesture recognizer after you get a hitTestResult
let hitTestResult = self.sceneView.hitTest(touch, options: [SCNHitTestOption.categoryBitMask: BodyType.ObjectModel.rawValue])
you need to find the parent node since otherwise you might be moving the individual child node you just pressed. Do this by searching recursively upwards through the node tree of the node you just found.
guard let objectNode = getParentNodeOf(hitTestResult.first?.node) else { return }
where you declare the getParentNode-method as follows
func getParentNodeOf(_ nodeFound: SCNNode?) -> SCNNode? {
if let node = nodeFound {
if node.name == "theNameOfTheParentNodeOfTheObject" {
return node
} else if let parent = node.parent {
return getParentNodeOf(parent)
}
}
return nil
}
Then you are free to perform any operation on the objectNode, as it will be the parent node of your .scn object, meaning that any transformation applied to it will also be applied to the child nodes.
As @ZAY pointed out, Apple made it quite confusing, and in addition they used ARRaycastQuery, which only works on iOS 13 and above. Therefore, I arrived at a solution that works by using the current camera orientation to calculate the translation on a plane in world coordinates.
First, by using this snippet we are able to get the current orientation the user is facing, using quaternions.
private func getOrientationYRadians()-> Float {
guard let cameraNode = arSceneView.pointOfView else { return 0 }
//Get camera orientation expressed as a quaternion
let q = cameraNode.orientation
//Calculate rotation around y-axis (heading) from quaternion and convert angle so that
//0 is along -z-axis (forward in SceneKit) and positive angle is clockwise rotation.
let alpha = Float.pi - atan2f( (2*q.y*q.w)-(2*q.x*q.z), 1-(2*pow(q.y,2))-(2*pow(q.z,2)) )
// here I convert the angle to be 0 when the user is facing +z-axis
return alpha <= Float.pi ? abs(alpha - (Float.pi)) : (3*Float.pi) - alpha
}
Handle Pan Method
private var lastPanLocation2d: CGPoint!
@objc func handlePan(panGesture: UIPanGestureRecognizer) {
let state = panGesture.state
guard state != .failed && state != .cancelled else {
return
}
let touchLocation = panGesture.location(in: self)
if (state == .began) {
lastPanLocation2d = touchLocation
}
// 200 here is a random value that controls the smoothness of the dragging effect
let deltaX = Float(touchLocation.x - lastPanLocation2d!.x)/200
let deltaY = Float(touchLocation.y - lastPanLocation2d!.y)/200
let currentYOrientationRadians = getOrientationYRadians()
// convert delta in the 2D dimensions to the 3d world space using the current rotation
let deltaX3D = (deltaY*sin(currentYOrientationRadians))+(deltaX*cos(currentYOrientationRadians))
let deltaY3D = (deltaY*cos(currentYOrientationRadians))+(-deltaX*sin(currentYOrientationRadians))
// assuming that the node is currently positioned on a plane so the y-translation will be zero
let translation = SCNVector3Make(deltaX3D, 0.0, deltaY3D)
nodeToDrag.localTranslate(by: translation)
lastPanLocation2d = touchLocation
}
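For completeness, the pan gesture still has to be registered somewhere during setup. A minimal sketch (arSceneView is the scene view used above; if your handlePan lives on a view controller rather than a view, adjust the location(in:) call accordingly):
// Register the gesture, e.g. in viewDidLoad() or the view's initializer.
let panRecognizer = UIPanGestureRecognizer(target: self, action: #selector(handlePan(panGesture:)))
arSceneView.addGestureRecognizer(panRecognizer)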

How to add black and white filter on arkit (swift4)

All I want to do is take the basic ARKit view and turn it into a black and white view. Right now the basic view is just normal and I have no idea how to add the filter. Ideally, when taking a screenshot, the black and white filter would be applied to the screenshot as well.
import UIKit
import SceneKit
import ARKit
class ViewController: UIViewController, ARSCNViewDelegate {
@IBOutlet var sceneView: ARSCNView!
override func viewDidLoad() {
super.viewDidLoad()
sceneView.delegate = self
sceneView.showsStatistics = true
}
override func viewWillAppear(_ animated: Bool) {
super.viewWillAppear(animated)
let configuration = ARWorldTrackingConfiguration()
sceneView.session.run(configuration)
}
override func viewWillDisappear(_ animated: Bool) {
super.viewWillDisappear(animated)
sceneView.session.pause()
}
@IBAction func changeTextColour(){
let snapShot = self.augmentedRealityView.snapshot()
UIImageWriteToSavedPhotosAlbum(snapShot, self, #selector(image(_:didFinishSavingWithError:contextInfo:)), nil)
}
}
If you want to apply the filter in real time, the best way to achieve that is to use SCNTechnique. Techniques are used for postprocessing and allow us to render SCNView content in several passes – exactly what we need (first render the scene, then apply an effect to it).
Here's the example project.
Plist setup
First, we need to describe a technique in a .plist file.
Here's a screenshot of a plist that I've come up with (for better visualization):
And here is its source:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>sequence</key>
<array>
<string>apply_filter</string>
</array>
<key>passes</key>
<dict>
<key>apply_filter</key>
<dict>
<key>metalVertexShader</key>
<string>scene_filter_vertex</string>
<key>metalFragmentShader</key>
<string>scene_filter_fragment</string>
<key>draw</key>
<string>DRAW_QUAD</string>
<key>inputs</key>
<dict>
<key>scene</key>
<string>COLOR</string>
</dict>
<key>outputs</key>
<dict>
<key>color</key>
<string>COLOR</string>
</dict>
</dict>
</dict>
</dict>
</plist>
The topic of SCNTechniques is quite broad and I will only quickly cover the things we need for the case at hand. To get a real understanding of what they are capable of, I recommend reading Apple's comprehensive documentation on techniques.
Technique description
passes is a dictionary containing descriptions of the passes that you want an SCNTechnique to perform.
sequence is an array that specifies the order in which these passes are performed, using their keys.
You do not specify the main render pass here (meaning whatever is rendered without applying SCNTechniques) – it is implied, and its resulting color can be accessed using the COLOR constant (more on it in a bit).
So the only "extra" pass (besides the main one) that we are going to add is apply_filter, which converts colors into black and white (it can be named whatever you want, just make sure it has the same key in passes and sequence).
Now to the description of the apply_filter pass itself.
Render pass description
metalVertexShader and metalFragmentShader – names of Metal shader functions that are going to be used for drawing.
draw defines what the pass is going to render. DRAW_QUAD stands for:
Render only a rectangle covering the entire bounds of the view. Use
this option for drawing passes that process image buffers output by
earlier passes.
which means, roughly speaking, that we are going to be rendering a plain "image" with our render pass.
inputs specifies the input resources that we will be able to use in shaders. As I said previously, COLOR refers to the color data provided by the main render pass.
outputs specifies outputs. It can be color, depth or stencil, but we only need a color output. The COLOR value means that we are, simply put, going to render "directly" to the screen (as opposed to rendering into intermediate targets, for example).
Metal shader
Create a .metal file with following contents:
#include <metal_stdlib>
using namespace metal;
#include <SceneKit/scn_metal>
struct VertexInput {
float4 position [[ attribute(SCNVertexSemanticPosition) ]];
float2 texcoord [[ attribute(SCNVertexSemanticTexcoord0) ]];
};
struct VertexOut {
float4 position [[position]];
float2 texcoord;
};
// metalVertexShader
vertex VertexOut scene_filter_vertex(VertexInput in [[stage_in]])
{
VertexOut out;
out.position = in.position;
out.texcoord = float2((in.position.x + 1.0) * 0.5 , (in.position.y + 1.0) * -0.5);
return out;
}
// metalFragmentShader
fragment half4 scene_filter_fragment(VertexOut vert [[stage_in]],
texture2d<half, access::sample> scene [[texture(0)]])
{
constexpr sampler samp = sampler(coord::normalized, address::repeat, filter::nearest);
constexpr half3 weights = half3(0.2126, 0.7152, 0.0722);
half4 color = scene.sample(samp, vert.texcoord);
color.rgb = half3(dot(color.rgb, weights));
return color;
}
Notice that the function names for the fragment and vertex shaders must be the same names specified in the plist file in the pass descriptor.
To get a better understanding of what VertexInput and VertexOut structures mean, refer to the SCNProgram documentation.
The given vertex function can be used pretty much in any DRAW_QUAD render pass. It basically gives us normalized coordinates of the screen space (that are accessed with vert.texcoord in the fragment shader).
The fragment function is where all the "magic" happens. There, you can manipulate the texture that you've got from the main pass. Using this setup you can potentially implement a ton of filters/effects and more.
In our case, I used a basic desaturation (zero saturation) formula to get the black and white colors.
Swift setup
Now, we can finally use all of this in the ARKit/SceneKit.
let plistName = "SceneFilterTechnique" // the name of the plist you've created
guard let url = Bundle.main.url(forResource: plistName, withExtension: "plist") else {
fatalError("\(plistName).plist does not exist in the main bundle")
}
guard let dictionary = NSDictionary(contentsOf: url) as? [String: Any] else {
fatalError("Failed to parse \(plistName).plist as a dictionary")
}
guard let technique = SCNTechnique(dictionary: dictionary) else {
fatalError("Failed to initialize a technique using \(plistName).plist")
}
and just set it as technique of the ARSCNView.
sceneView.technique = technique
That's it. Now the whole scene is going to be rendered in grayscale including when taking snapshots.
Filter ARSCNView Snapshot: If you want to create a black and white screenshot of your ARSCNView, you can do something like this, which returns a UIImage in grayscale, where augmentedRealityView refers to an ARSCNView:
/// Converts A UIImage To A High Contrast GrayScaleImage
///
/// - Returns: UIImage
func highContrastBlackAndWhiteFilter() -> UIImage?
{
//1. Convert It To A CIImage
guard let convertedImage = CIImage(image: self) else { return nil }
//2. Set The Filter Parameters
let filterParameters = [kCIInputBrightnessKey: 0.0,
kCIInputContrastKey: 1.1,
kCIInputSaturationKey: 0.0]
//3. Apply The Basic Filter To The Image
let imageToFilter = convertedImage.applyingFilter("CIColorControls", parameters: filterParameters)
//4. Set The Exposure
let exposure = [kCIInputEVKey: NSNumber(value: 0.7)]
//5. Process The Image With The Exposure Setting
let processedImage = imageToFilter.applyingFilter("CIExposureAdjust", parameters: exposure)
//6. Create A CG GrayScale Image
guard let grayScaleImage = CIContext().createCGImage(processedImage, from: processedImage.extent) else { return nil }
return UIImage(cgImage: grayScaleImage, scale: self.scale, orientation: self.imageOrientation)
}
An example of using this therefore could be like so:
override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
//1. Create A UIImageView Dynamically
let imageViewResult = UIImageView(frame: CGRect(x: 0, y: 0, width: self.view.bounds.width, height: self.view.bounds.height))
self.view.addSubview(imageViewResult)
//2. Create The Snapshot & Get The Black & White Image
guard let snapShotImage = self.augmentedRealityView.snapshot().highContrastBlackAndWhiteFilter() else { return }
imageViewResult.image = snapShotImage
//3. Remove The ImageView After A Delay Of 5 Seconds
DispatchQueue.main.asyncAfter(deadline: .now() + 5) {
imageViewResult.removeFromSuperview()
}
}
Which will yield a result something like this:
In order to make your code reusable you could also create an extension of UIImage:
//------------------------
//MARK: UIImage Extensions
//------------------------
extension UIImage
{
/// Converts A UIImage To A High Contrast GrayScaleImage
///
/// - Returns: UIImage
func highContrastBlackAndWhiteFilter() -> UIImage?
{
//1. Convert It To A CIImage
guard let convertedImage = CIImage(image: self) else { return nil }
//2. Set The Filter Parameters
let filterParameters = [kCIInputBrightnessKey: 0.0,
kCIInputContrastKey: 1.1,
kCIInputSaturationKey: 0.0]
//3. Apply The Basic Filter To The Image
let imageToFilter = convertedImage.applyingFilter("CIColorControls", parameters: filterParameters)
//4. Set The Exposure
let exposure = [kCIInputEVKey: NSNumber(value: 0.7)]
//5. Process The Image With The Exposure Setting
let processedImage = imageToFilter.applyingFilter("CIExposureAdjust", parameters: exposure)
//6. Create A CG GrayScale Image
guard let grayScaleImage = CIContext().createCGImage(processedImage, from: processedImage.extent) else { return nil }
return UIImage(cgImage: grayScaleImage, scale: self.scale, orientation: self.imageOrientation)
}
}
Which you can then use easily like so:
guard let snapShotImage = self.augmentedRealityView.snapshot().highContrastBlackAndWhiteFilter() else { return }
Remember that you should place your extension above your class declaration, e.g.:
extension UIImage{
}
class ViewController: UIViewController, ARSCNViewDelegate {
}
So based on the code provided in your question you would have something like this:
/// Creates A Black & White ScreenShot & Saves It To The Photo Album
@IBAction func changeTextColour(){
//1. Create A Snapshot
guard let snapShotImage = self.augmentedRealityView.snapshot().highContrastBlackAndWhiteFilter() else { return }
//2. Save It The Photos Album
UIImageWriteToSavedPhotosAlbum(snapShotImage, self, #selector(image(_:didFinishSavingWithError:contextInfo:)), nil)
}
///Callback To Check Whether The Image Has Been Saved
@objc func image(_ image: UIImage, didFinishSavingWithError error: Error?, contextInfo: UnsafeRawPointer) {
if let error = error {
print("Error Saving ARKit Scene \(error)")
} else {
print("ARKit Scene Successfully Saved")
}
}
Live Rendering In Black & White:
Using this brilliant answer here by diviaki I was also able to get the entire camera feed to render in Black and White using the following methods:
1st. Register for the ARSessionDelegate like so:
augmentedRealitySession.delegate = self
2nd. Then in the following delegate callback add the following:
//-----------------------
//MARK: ARSessionDelegate
//-----------------------
extension ViewController: ARSessionDelegate{
func session(_ session: ARSession, didUpdate frame: ARFrame) {
/*
Full Credit To https://stackoverflow.com/questions/45919745/reliable-access-and-modify-captured-camera-frames-under-scenekit
*/
//1. Convert The Current Frame To Black & White
guard let currentBackgroundFrameImage = augmentedRealityView.session.currentFrame?.capturedImage,
let pixelBufferAddressOfPlane = CVPixelBufferGetBaseAddressOfPlane(currentBackgroundFrameImage, 1) else { return }
let x: size_t = CVPixelBufferGetWidthOfPlane(currentBackgroundFrameImage, 1)
let y: size_t = CVPixelBufferGetHeightOfPlane(currentBackgroundFrameImage, 1)
// Plane 1 of the captured YCbCr pixel buffer is the chroma (CbCr) plane; filling it with 128 (neutral chroma) leaves only the luma, i.e. a grayscale image.
memset(pixelBufferAddressOfPlane, 128, Int(x * y) * 2)
}
}
Which successfully renders the camera feed Black & White:
Filtering Elements Of An SCNScene In Black & White:
As @Confused rightly said, if you decide that you want the camera feed to be in colour but the contents of your AR experience to be in black & white, you can apply a filter directly to an SCNNode using its filters property, which is simply:
An array of Core Image filters to be applied to the rendered contents
of the node.
Let's say, for example, that we dynamically create 3 SCNNodes with a sphere geometry; we can apply a Core Image filter to these directly like so:
/// Creates 3 Objects And Adds Them To The Scene (Rendering Them In GrayScale)
func createObjects(){
//1. Create An Array Of UIColors To Set As The Geometry Colours
let colours = [UIColor.red, UIColor.green, UIColor.yellow]
//2. Create An Array Of The X Positions Of The Nodes
let xPositions: [CGFloat] = [-0.3, 0, 0.3]
//3. Create The Nodes & Add Them To The Scene
for i in 0 ..< 3{
let sphereNode = SCNNode()
let sphereGeometry = SCNSphere(radius: 0.1)
sphereGeometry.firstMaterial?.diffuse.contents = colours[i]
sphereNode.geometry = sphereGeometry
sphereNode.position = SCNVector3( xPositions[i], 0, -1.5)
augmentedRealityView.scene.rootNode.addChildNode(sphereNode)
//a. Create A Black & White Filter
guard let blackAndWhiteFilter = CIFilter(name: "CIColorControls", withInputParameters: [kCIInputSaturationKey:0.0]) else { return }
blackAndWhiteFilter.name = "bw"
sphereNode.filters = [blackAndWhiteFilter]
sphereNode.setValue(CIFilter(), forKeyPath: "bw")
}
}
Which will yield a result something like the following:
For a full list of these filters you can refer to the following: CoreImage Filter Reference
Example Project: Here is a complete Example Project which you can download and explore for yourself.
Hope it helps...
The snapshot object should be a UIImage. Apply filters to this UIImage by importing the CoreImage framework and then applying Core Image filters to it. You should adjust the exposure and color-control values on the image. For more implementation details check this answer. From iOS 6 you can also use the CIColorMonochrome filter to achieve the same effect.
Here is the Apple documentation for all the available filters. Click on each filter to see the visual effect it has when applied to an image.
Here is the Swift 4 code:
func imageBlackAndWhite() -> UIImage?
{
if let beginImage = CoreImage.CIImage(image: self)
{
let paramsColor: [String : Double] = [kCIInputBrightnessKey: 0.0,
kCIInputContrastKey: 1.1,
kCIInputSaturationKey: 0.0]
let blackAndWhite = beginImage.applyingFilter("CIColorControls", parameters: paramsColor)
let paramsExposure: [String : AnyObject] = [kCIInputEVKey: NSNumber(value: 0.7)]
let output = blackAndWhite.applyingFilter("CIExposureAdjust", parameters: paramsExposure)
guard let processedCGImage = CIContext().createCGImage(output, from: output.extent) else {
return nil
}
return UIImage(cgImage: processedCGImage, scale: self.scale, orientation: self.imageOrientation)
}
return nil
}
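For reference, the CIColorMonochrome variant mentioned above could look like this in the same kind of UIImage extension - a sketch only; the white input color and the intensity of 1.0 are just example values:
func imageMonochrome() -> UIImage?
{
    guard let beginImage = CIImage(image: self) else { return nil }
    // CIColorMonochrome remaps the image to shades of the given color.
    let params: [String: Any] = [kCIInputColorKey: CIColor(color: .white),
                                 kCIInputIntensityKey: NSNumber(value: 1.0)]
    let output = beginImage.applyingFilter("CIColorMonochrome", parameters: params)
    guard let processedCGImage = CIContext().createCGImage(output, from: output.extent) else {
        return nil
    }
    return UIImage(cgImage: processedCGImage, scale: self.scale, orientation: self.imageOrientation)
}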
This might be the easiest and fastest way to do this:
Apply a CoreImage Filter to the Scene:
https://developer.apple.com/documentation/scenekit/scnnode/1407949-filters
This filter gives a very good impression of a black and white photograph, with good transitions through grays: https://developer.apple.com/library/content/documentation/GraphicsImaging/Reference/CoreImageFilterReference/index.html#//apple_ref/doc/filter/ci/CIPhotoEffectMono
You could also use this one, and get results easy to shift in hue, too:
https://developer.apple.com/library/content/documentation/GraphicsImaging/Reference/CoreImageFilterReference/index.html#//apple_ref/doc/filter/ci/CIColorMonochrome
And here, in Japanese, is proof of filters and SceneKit/ARKit working together: http://appleengine.hatenablog.com/entry/advent20171215
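A minimal sketch of that approach (assuming sceneView is your ARSCNView; filters set on the root node affect the rendered SceneKit content, not the camera feed behind it):
// Apply a Core Image mono filter to everything rendered under the root node.
if let monoFilter = CIFilter(name: "CIPhotoEffectMono") {
    sceneView.scene.rootNode.filters = [monoFilter]
}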

How to remove the border/drop shadow from an UIImageView?

I've been generating QR Codes using the CIQRCodeGenerator CIFilter and it works very well:
But when I resize the UIImageView and generate again
@IBAction func sizeSliderValueChanged(_ sender: UISlider) {
qrImageView.transform = CGAffineTransform(scaleX: CGFloat(sender.value), y: CGFloat(sender.value))
}
I get a weird Border/DropShadow around the image sometimes:
How can I prevent it from appearing at all times or remove it altogether?
I have no idea what it is exactly, a border, a dropShadow or a Mask, as I'm new to Swift/iOS.
Thanks in advance!
PS. I didn't post any of the QR-Code generating code as it's pretty boilerplate and can be found in many tutorials out there, but let me know if you need it
EDIT:
Code to generate the QR code image:
private func generateQRCode(from string: String) -> UIImage? {
let data = string.data(using: String.Encoding.ascii)
guard let filter = CIFilter(name: "CIQRCodeGenerator") else {
return nil
}
filter.setValue(data, forKey: "inputMessage")
guard let qrEncodedImage = filter.outputImage else {
return nil
}
let scaleX = qrImageView.frame.size.width / qrEncodedImage.extent.size.width
let scaleY = qrImageView.frame.size.height / qrEncodedImage.extent.size.height
let transform = CGAffineTransform(scaleX: scaleX, y: scaleY )
if let outputImage = filter.outputImage?.applying(transform) {
return UIImage(ciImage: outputImage)
}
return nil
}
Code for button pressed
@IBAction func generateCodeButtonPressed(_ sender: CustomButton) {
if codeTextField.text == "" {
return
}
let newEncodedMessage = codeTextField.text!
let encodedImage: UIImage = generateQRCode(from: newEncodedMessage)!
qrImageView.image = encodedImage
qrImageView.transform = CGAffineTransform(scaleX: CGFloat(sizeSlider.value), y: CGFloat(sizeSlider.value))
qrImageView.layer.minificationFilter = kCAFilterNearest
qrImageView.layer.magnificationFilter = kCAFilterNearest
}
It’s a little hard to be sure without the code you’re using to generate the image for the image view, but that looks like a resizing artifact—the CIImage may be black or transparent outside the edges of the QR code, and when the image view size doesn’t match the image’s intended size, the edges get fuzzy and either the image-outside-its-boundaries or the image view’s background color start bleeding in. Might be able to fix it by setting the image view layer’s minification/magnification filters to “nearest neighbor”, like so:
imageView.layer.minificationFilter = kCAFilterNearest
imageView.layer.magnificationFilter = kCAFilterNearest
Update from seeing the code you added—you’re currently resizing the image twice, first with the call to applying(transform) and then by setting a transform on the image view itself. I suspect the first resize is adding the blurriness, which the minification / magnification filter I suggested earlier then can’t fix. Try shortening generateQRCode to this:
private func generateQRCode(from string: String) -> UIImage? {
let data = string.data(using: String.Encoding.ascii)
guard let filter = CIFilter(name: "CIQRCodeGenerator") else {
return nil
}
filter.setValue(data, forKey: "inputMessage")
if let qrEncodedImage = filter.outputImage {
return UIImage(ciImage: qrEncodedImage)
}
return nil
}
I think the problem here is that you try to resize it to a "non-square" (as your scaleX isn't always the same as scaleY), while the QR code is always square, so both sides should use the same scale factor to get a non-blurred image.
Something like:
let scaleX = qrImageView.frame.size.width / qrEncodedImage.extent.size.width
let scaleY = qrImageView.frame.size.height / qrEncodedImage.extent.size.height
let scale = max(scaleX, scaleY)
let transform = CGAffineTransform(scaleX: scale, y: scale)
will make sure you have a "non-bordered/non-blurred/square" UIImage.
I guess the issue is with the image (PNG) file, not with your UIImageView. Try using another image and I hope it will work!
