I have an odd problem implementing corner detection using GPUImage.
I'm using the filter template from Brad's download as a starting point. I can generate a composite view of the image and the corner points (as single white pixels), but when I try to add in the crosshair generator, the callback function is never called.
In my simplified example, I have an output view configured as a RenderView and videoCamera defined as a Camera?:
do {
    videoCamera = try Camera(sessionPreset: AVCaptureSessionPreset640x480, location: .backFacing)
    videoCamera!.runBenchmark = true
} catch {
    videoCamera = nil
    print("Couldn't initialize camera with error: \(error)")
}
let filter = HarrisCornerDetector()
let crosshairGenerator = CrosshairGenerator(size: Size(width: 480, height: 640))
filter.cornersDetectedCallback = { corners in
    crosshairGenerator.renderCrosshairs(corners)
}
videoCamera! --> filter
let blendFilter = AlphaBlend()
videoCamera! --> blendFilter --> renderView
//crosshairGenerator --> blendFilter // INPUT 1: if I add this line, the callback never happens
filter --> blendFilter // INPUT 2: with this input to blendFilter, the callback is good
As shown, the callback function is called as expected, and I see little white dots in the output.
If I remove comments to enable INPUT 1, and comment out INPUT 2, the display shows the camera only, and the corner detection callback is never called.
The sample project that Brad provides works on my device, so I know there's no hardware or iOS issue!
Any thoughts?
You forgot to set your crosshair width:
crosshairGenerator.crosshairWidth = 15.0
and you haven't set a detection threshold on the filter. Also, make the Harris detector object an instance property so it isn't deallocated when your setup code returns.
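Putting those three suggestions together, the setup might look roughly like this (a sketch; the property names follow GPUImage2 as I recall them, and the threshold value is just an illustrative assumption to tune for your scene):

// Instance properties, so they aren't deallocated when setup returns:
let filter = HarrisCornerDetector()
let crosshairGenerator = CrosshairGenerator(size: Size(width: 480, height: 640))

func configureDetection() {
    crosshairGenerator.crosshairWidth = 15.0   // without a width, the crosshairs have nothing to draw
    filter.threshold = 0.2                     // illustrative value, not taken from the original project

    filter.cornersDetectedCallback = { [weak self] corners in
        self?.crosshairGenerator.renderCrosshairs(corners)
    }

    videoCamera! --> filter
    let blendFilter = AlphaBlend()
    videoCamera! --> blendFilter --> renderView
    crosshairGenerator --> blendFilter
}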
Related
How can I get my VNCoreMLRequest to detect objects appearing anywhere within the fullscreen view?
I am currently using the Apple sample project for object recognition in breakfast foods, BreakfastFinder. The model and recognition work well, and generally give the correct (visual) bounding box for the objects being detected.
The issue arises when changing the orientation of this detection.
In portrait mode, the default orientation for this project, the model identifies objects well in the full bounds of the view. Naturally, given the properties of the SDK objects, rotating the camera degrades performance and visual identification.
In landscape mode, the model behaves strangely. The window/area in which the model detects objects is not the full view. Instead, it is (what seems like) the same aspect ratio as the phone itself, but centered and in portrait mode. I have a screenshot below showing approximately where the model stops detecting objects when in landscape:
The blue box with the red outline is approximately where the detection stops. It behaves strangely, but consistently does not find any objects outside this approximate view, near the left or right edge. However, the top and bottom edges near the center detect without any issue.
regionOfInterest
I have adjusted this to be the maximum: x: 0, y: 0, width: 1, height: 1. This made no difference.
imageCropAndScaleOption
Changing this is the only setting that allows detection across the full screen; however, the performance became noticeably worse, and that's not an acceptable trade-off.
Is there a scale / size setting somewhere in this process that I have not set properly? Or perhaps a mode I am not using. Any help would be most appreciated. Below is my detection controller:
ViewController.swift
// All unchanged from the download in Apple's folder
" "
session.sessionPreset = .hd1920x1080 // Model image size is smaller.
...
previewLayer.connection?.videoOrientation = .landscapeRight
" "
VisionObjectRecognitionViewController
@discardableResult
func setupVision() -> NSError? {
    // Setup Vision parts
    let error: NSError! = nil
    guard let modelURL = Bundle.main.url(forResource: "ObjectDetector", withExtension: "mlmodelc") else {
        return NSError(domain: "VisionObjectRecognitionViewController", code: -1, userInfo: [NSLocalizedDescriptionKey: "Model file is missing"])
    }
    do {
        let visionModel = try VNCoreMLModel(for: MLModel(contentsOf: modelURL))
        let objectRecognition = VNCoreMLRequest(model: visionModel, completionHandler: { (request, error) in
            DispatchQueue.main.async(execute: {
                // perform all the UI updates on the main queue
                if let results = request.results {
                    self.drawVisionRequestResults(results)
                }
            })
        })
        // These are the only properties that impact the detection area
        objectRecognition.regionOfInterest = CGRect(x: 0, y: 0, width: 1, height: 1)
        objectRecognition.imageCropAndScaleOption = VNImageCropAndScaleOption.scaleFit
        self.requests = [objectRecognition]
    } catch let error as NSError {
        print("Model loading went wrong: \(error)")
    }
    return error
}
EDIT:
When running the project in portrait mode only (locked by selecting only Portrait in Targets -> General), then rotating the device to landscape, the detection occurs perfectly across the entire screen.
The issue seemed to reside in the rotation of the physical device.
Telling Vision that the device is “not rotated”, while passing the current orientation to everything else, allowed the detection bounds to remain the full screen (as if in portrait) while the controller was in fact in landscape.
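A sketch of what that looks like in the sample's captureOutput (the fixed .up orientation is my assumption of what “not rotated” translates to; the important part is that Vision receives a constant orientation while the preview layer keeps the real device orientation):

func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else {
        return
    }

    // Tell Vision the buffer is "not rotated" instead of deriving it from UIDevice orientation each frame
    let exifOrientation = CGImagePropertyOrientation.up

    let imageRequestHandler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer,
                                                    orientation: exifOrientation,
                                                    options: [:])
    do {
        try imageRequestHandler.perform(self.requests)
    } catch {
        print(error)
    }
}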
The bounding boxes are normalized rects that you get from the Core ML bounding box observations; you have to convert them using the ratio of the screen (view) size to generate the boxes over the image (in my case, for recognized words).
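For example, Vision ships a helper for exactly that conversion (a sketch; viewWidth and viewHeight here stand for whatever size you are drawing the boxes into):

// observation.boundingBox is a normalized CGRect (origin at the lower-left, values 0...1)
let normalizedRect = observation.boundingBox

// Scale it up to the drawing surface; note Vision's y-axis is flipped relative to UIKit,
// so you may also need to flip y before drawing.
let boxRect = VNImageRectForNormalizedRect(normalizedRect, Int(viewWidth), Int(viewHeight))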
I'm using Metal to draw some lines. My drawing canvas has a texture attached through an MTLRenderPassDescriptor, and when I draw into it, blending is enabled on the MTLRenderPipelineDescriptor with alphaBlendOperation = .max:
renderPassDescriptor = MTLRenderPassDescriptor()
let attachment = renderPassDescriptor?.colorAttachments[0]
attachment?.texture = self.texture
attachment?.loadAction = .load
attachment?.storeAction = .store
let rpd = MTLRenderPipelineDescriptor()
rpd.colorAttachments[0].pixelFormat = .rgba8Unorm
let attachment = rpd.colorAttachments[0]!
attachment.isBlendingEnabled = true
attachment.rgbBlendOperation = .max
attachment.alphaBlendOperation = .max
I can change the brush properties (size, opacity, hardness/"blur"). The first two brushes work really well, as in the image below.
But I get one weird behavior when I use a blurred brush with faded edges: where lines connect, the faded areas do not blend as expected and a small empty line is created at the connection. The image below shows this issue; look at the single line and the single point, then at the connections, and you can see the behavior very clearly.
The render pass should end up with one of the two alphas, either the one already in the underlying texture or the brush's alpha, but when I tap the second and third points it produces an empty line instead of picking either of them. It's as if the alpha becomes zero in those areas.
This is my faded brush. You can see there is a gradient of color, but I don't know if there is a problem with it.
Please share any ideas you have to solve this.
For some time now, I have been trying to add a realistic ground shadow to an object in RealityKit. For my use case, I will not be using Reality Composer, nor (per this question) will I be using an anchor entity from a horizontal plane (my user will tap to place an object and that tap could align with either a horizontal plane or an ARMeshAnchor, as we support LiDAR in our app).
When I test my USDZ model via QuickLook on iOS, I see that iOS adds a shadow beneath my model, and while not wholly realistic, it appears a bit more "placed" on a surface, as compared to no shadow.
In trying to add my model, I am taking the following steps:
self.model = Entity.load(named: "model.usdz")
When a user taps on the screen, I perform a raycast and add the model to the anchor built from it:
func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
    for anchor in anchors {
        if anchor.name == "tapped" {
            let anchorEntity = AnchorEntity(anchor: anchor)
            anchorEntity.addChild(self.model!)
            arView.scene.addAnchor(anchorEntity)
        }
    }
}
When the model is added at the tapped point, there are no ground shadows. As a test, I went down the path of adding a directional light, believing that its placement might cast light on the object and therefore create shadows. I create the light like so:
class Lighting: Entity, HasDirectionalLight {
    required init() {
        super.init()
        self.light = DirectionalLightComponent(color: .white, intensity: 5000, isRealWorldProxy: true)
    }
}
I've added a global var lightAnchor = AnchorEntity(). Then, in my viewDidLoad method, I am attempting to set up the light like so:
let spotLight = Lighting().light
let shadow = Lighting().shadow
lightAnchor.components.set(shadow!)
lightAnchor.components.set(spotLight)
arView.scene.anchors.append(lightAnchor)
self.model = Entity.load(named: "model.usdz")
While I can see that there is a light shining on the object, it does not seem to cause any shadows to be cast.
If your app supports LiDAR, you can use:
arView.environment.sceneUnderstanding.options.insert(.receivesLighting)
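In context, that might look something like this (supportsSceneReconstruction is the standard ARKit availability check; where exactly you call this in your view controller is up to you):

// e.g. in viewDidLoad, after configuring the ARView session:
if ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) {
    // Let the reconstructed LiDAR mesh receive lighting, which is what grounds the shadows
    arView.environment.sceneUnderstanding.options.insert(.receivesLighting)
}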
I am trying to get the corner points from a still image using GPUImageHarrisCornerDetectionFilter.
I have looked at the example code from the project, I have looked at the documentation, and I have looked at this post that is about the same thing:
GPUImage Harris Corner Detection on an existing UIImage gives a black screen output
But I can't make it work - and I have a hard time understanding how this is supposed to work with still images.
What I have at this point is this:
func harrisCorners() -> [CGPoint] {
    var points = [CGPoint]()
    let stillImageSource: GPUImagePicture = GPUImagePicture(image: self.image)
    let filter = GPUImageHarrisCornerDetectionFilter()

    filter.cornersDetectedBlock = { (cornerArray: UnsafeMutablePointer<GLfloat>, cornersDetected: UInt, frameTime: CMTime) in
        for index in 0..<Int(cornersDetected) {
            points.append(CGPoint(x: CGFloat(cornerArray[index * 2]), y: CGFloat(cornerArray[(index * 2) + 1])))
        }
    }

    filter.forceProcessingAtSize(self.image.size)
    stillImageSource.addTarget(filter)
    stillImageSource.processImage()
    return points
}
This function always returns [] so it's obviously not working.
An interesting detail - I compiled the FilterShowcaseSwift project from GPUImage examples, and the filter fails to find very clear corners, like on a sheet of paper on a black background.
filter.cornersDetectedBlock = { (cornerArray: UnsafeMutablePointer<GLfloat>, cornersDetected: UInt, frameTime: CMTime) in
    for index in 0..<Int(cornersDetected) {
        points.append(CGPoint(x: CGFloat(cornerArray[index * 2]), y: CGFloat(cornerArray[(index * 2) + 1])))
    }
}
This code you have here sets a block that gets called every frame.
This is an asynchronous process, so by the time your function returns, the block has not been called yet and your array will always be empty. The block is only called after the frame has finished processing.
To verify this, set a breakpoint inside that block and see if it gets called.
Warning from Brad Larson (creator of GPUImage) in the comments:
The GPUImagePicture you create here (stillImageSource) will be deallocated after this function exits and may cause crashes in this case.
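A hedged sketch of how you might restructure this, keeping the source and filter alive as properties (per the warning above) and handing the corners back asynchronously, sticking to the same Swift/GPUImage bridging the question uses:

// Properties, so they aren't deallocated while processing is still running:
var stillImageSource: GPUImagePicture!
let filter = GPUImageHarrisCornerDetectionFilter()

func findHarrisCorners(completion: ([CGPoint]) -> Void) {
    var points = [CGPoint]()
    stillImageSource = GPUImagePicture(image: self.image)

    filter.cornersDetectedBlock = { (cornerArray: UnsafeMutablePointer<GLfloat>, cornersDetected: UInt, frameTime: CMTime) in
        for index in 0..<Int(cornersDetected) {
            points.append(CGPoint(x: CGFloat(cornerArray[index * 2]), y: CGFloat(cornerArray[(index * 2) + 1])))
        }
        // The block most likely runs on GPUImage's processing queue, so hop to the
        // main queue before touching UI with these points.
        completion(points)
    }

    filter.forceProcessingAtSize(self.image.size)
    stillImageSource.addTarget(filter)
    stillImageSource.processImage()
}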
I am trying to blur multiple SKNode objects. I do this by having a parent SKEffectNode with a CIFilter set to @"CIGaussianBlur". Like so:
- (SKEffectNode *)createBlurNode
{
    SKEffectNode *blurNode = [[SKEffectNode alloc] init];
    blurNode.shouldRasterize = YES;
    [blurNode setShouldEnableEffects:NO];
    [blurNode setFilter:[CIFilter filterWithName:@"CIGaussianBlur"
                                   keysAndValues:@"inputRadius", @10.0f, nil]];
    return blurNode;
}
This works fine for a bunch of nodes currently onscreen. But when I space these nodes far away from each other (about 3000 pixels), the blurring no longer happens and I get a big black box. This happens regardless of whether the SKNodes I'm blurring are SKShapeNodes or SKSpriteNodes. Here's a sample project with this issue: Sample Project. (By the way, thanks to BobMoff for the initial version found here.)
Here's happy blur (when nodes are less than 3000 pixels away from each other):
Sad blur (when nodes are more than 3000 pixels away from each other):
UPDATE
This behavior occurs whenever an SKEffectNode is the parent. It doesn't matter if it's enabling effects, blurring, etc. If the parent node is a plain SKNode, it's fine. That is, even if the parent blur node is created like it is below, you will get the blackness:
- (SKEffectNode *)createBlurNode
{
    SKEffectNode *blurNode = [[SKEffectNode alloc] init];
    // blurNode.shouldRasterize = YES;
    // [blurNode setShouldEnableEffects:NO];
    // [blurNode setFilter:[CIFilter filterWithName:@"CIGaussianBlur"
    //                                keysAndValues:@"inputRadius", @10.0f, nil]];
    return blurNode;
}
I had a similar problem, with a very wide, panning scene that I wanted to blur.
To get the blur effect to work, I removed any nodes that were sticking out too far past the edges of the scene:
// Property declarations, elsewhere in the class:
var blurNode: SKEffectNode
var mainScene: SKScene
var exParents: [SKNode : SKNode] = [:]

/**
 * Remove outlying nodes from the scene and activate the SKEffectNode
 */
func blurScene() {
    let FILTER_MARGIN: CGFloat = 100
    let widthMax: CGFloat = mainScene.size.width + FILTER_MARGIN
    let heightMax: CGFloat = mainScene.size.height + FILTER_MARGIN

    // Recursively iterate through all blurNode's children
    blurNode.enumerateChildNodesWithName(".//*", usingBlock: {
        [unowned self]
        node, stop in

        if node.parent != nil && node.scene != nil { // Ignore nodes we already removed
            if let sprite = node as? SKSpriteNode {
                // Calculate sprite node position in scene coordinates
                let sceneOrig = sprite.scene!.convertPoint(sprite.position, fromNode: sprite.parent!)

                // Find left, right, bottom and top edges of sprite
                let l = sceneOrig.x - sprite.size.width*sprite.anchorPoint.x
                let r = l + sprite.size.width
                let b = sceneOrig.y - sprite.size.height*sprite.anchorPoint.y
                let t = b + sprite.size.height

                if l < -FILTER_MARGIN || r > widthMax || b < -FILTER_MARGIN || t > heightMax {
                    self.exParents[sprite] = sprite.parent!
                    sprite.removeFromParent()
                }
            }
        }
    })
    blurNode.shouldEnableEffects = true
}

/**
 * Disable blur and reparent nodes we removed earlier
 */
func removeBlur() {
    self.blurNode.shouldEnableEffects = false
    for (kid, parent) in exParents {
        parent.addChild(kid)
    }
    exParents = [:]
}
NOTES:
This does remove content from your effect node, so extremely wide nodes won't show up in the final result:
You can see the mountain highlighted in red stuck out too far and was removed from the resulting blur.
This code only considers SKSpriteNodes. Empty SKNodes don't seem to break the effect node, but if you're using other visible nodes like SKShapeNodes or SKLabelNodes, you'll have to modify this code to include them.
If you have ignoreSiblingOrder = false, this code might mess up your z-ordering since you can't guarantee what order the nodes are added back to the scene.
Stuff I tried that didn't work
Simply saying node.hidden = true instead of using removeFromParent() doesn't work. That would be WAY too easy ;)
Using an SKCropNode to crop out outlying content didn't work for me. I tried having the SKEffectNode parent the SKCropNode and the other way around, but the black square appeared no matter how small I made the cropped area. This might still be worth looking into if you're desperate for a cleaner solution.
As noted here, SKScenes are secretly SKEffectNodes and you can set their filter just like our blurNode above. SKScenes don't show a black screen when their content is too big. Unfortunately, they seem to just silently disable the filter instead. Again, I might have missed something, so you could explore this option further if you're trying to apply an effect across the entire scene.
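For reference, that scene-level attempt is only a couple of lines, since SKScene inherits from SKEffectNode (the radius here is an arbitrary example value):

// Setting the filter directly on the scene; this was silently ignored for oversized content:
let sceneBlur = CIFilter(name: "CIGaussianBlur")
sceneBlur?.setValue(10.0, forKey: kCIInputRadiusKey)
scene.filter = sceneBlur
scene.shouldEnableEffects = true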
Alternate Solutions
You can capture an image of the whole screen and apply a filter to that, as suggested here. I ended up going with an even simpler solution; I took a generic screenshot of the stuff I wanted to blur, then applied a very heavy blur so you can't see the precise details. I used that as the blurred background and you can hardly tell it's not the real thing ;) This also saves a healthy chunk of memory and avoids a small UI hiccup.
Musings
This is a pretty nasty bug, and I hope Apple comes up with a solution soon. You can click this cute picture of a camera to get a GPU trace and some insight on what's happening:
The device seems to be discarding the framebuffer for the effect node because it takes up too much memory. This is affirmed by the fact that when there's more memory pressure on the device, it's easier to get the 'black square' on smaller content in the SKEffectNode.
I used a method that worked for my game but it requires the blurred area to be static without movement.
On iOS 10 using Swift 3, I used SKSpriteNode, SKView, SKEffectNode, and CIFilter. I created a sprite from a texture returned by the SKView method texture(from:), passing the current scene as the parameter because it inherits from SKNode. So essentially I was taking a "screenshot" of the scene and creating a sprite from it. I then put it in an SKEffectNode with a blur filter (setting shouldRasterize to true for better performance, as I only needed to blur once). Finally, I added the new sprite to the scene. From there you could add sprites to the scene and place them above the new blurred node.
let blurFilter = CIFilter(name: "CIGaussianBlur")!
let blurAmount = 15.0
blurFilter.setValue(blurAmount, forKey: kCIInputRadiusKey)
let blurEffect = SKEffectNode()
blurEffect.shouldRasterize = true
let screenshotNode = SKSpriteNode(texture: gameScene.view!.texture(from: gameScene))
blurEffect.addChild(screenshotNode)
blurEffect.filter = blurFilter
gameScene.addChild(blurEffect)
Possible workaround for the bug:
Use a camera and zoom WAY out so you can see most or all of your background, then take a screenshot-style rendering of this image. Crop it to your needs, and then blur it. Then rasterize this.
Then scale this image back up, and slice it up if needs be, and place accordingly.
SKEffectNode renders into a texture. In most iOS systems the maximum size for a texture is 2048x2048. If an SKEffectNode is trying to render content larger than that, it will just use a 2048x2048 texture and anything outside of it will just not appear in the texture. It won't give you any error or warning about this happening; it simply does it silently.
And no, there is no way to tell SKEffectNode to use a texture of a specific size and pan-and-clamp the content into it. It always uses a texture that will cover all the child nodes, and if that texture would be too large, it just silently uses the 2048x2048 texture.
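If you want to defend against this at runtime, you can at least check the effect node's accumulated frame before enabling effects (a sketch, using the blurNode naming from the earlier answer; the 2048 figure is the limit quoted above and differs across devices):

let maxEffectTextureSize: CGFloat = 2048  // limit quoted above; newer GPUs allow more
let accumulated = blurNode.calculateAccumulatedFrame()

if max(accumulated.width, accumulated.height) <= maxEffectTextureSize {
    blurNode.shouldEnableEffects = true
} else {
    // Content is too large to fit into the effect texture, so it would be silently clipped
    blurNode.shouldEnableEffects = false
}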