How to achieve motion blur effect in SceneKit? - ios

How to achieve "motion effect" in SceneKit? Motion effect is that blur that gets created if you shoot (with a camera) fast moving objects. I am running an action on a node and would like a little blur in the direction of moving when the node is moving, to emphasise that the node is moving fast. Can this be done in SceneKit?
This image has the motion effect - blur applied to the whole scene. You can tell that the camera is moving inwards by the direction of the blur lines. I only want to apply motion blur to a single object, not the whole scene.

In recent versions of SceneKit, motion blur is built in — you can just set the motionBlurIntensity on your scene’s camera.
In iOS 10, motion blur is for camera motion only — moving objects won’t blur. (You have to set the movabilityHint for nodes that you want to not blur when the camera moves fast.)
In iOS 11 and later, moving objects can also blur, so you can just set motionBlurIntensity on the camera and everything “just works”.
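A minimal sketch of the newer API, assuming you control the camera node that renders your scene (the node names here are illustrative):

    import SceneKit

    // Built-in motion blur is configured on the camera.
    let cameraNode = SCNNode()
    cameraNode.camera = SCNCamera()
    cameraNode.camera?.motionBlurIntensity = 0.5   // 0.0 = off (default), up to 1.0

    // iOS 10: blur is driven by camera motion only. Mark nodes that should stay
    // sharp while the camera moves (for example, a ship the camera follows) as movable.
    let shipNode = SCNNode()
    shipNode.movabilityHint = .movable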
The rest of this answer predates iOS 10, and is still relevant if you’re (for some reason) supporting iOS 9.x or older.
To get a really good motion blur effect you'd have to write your own shaders and maybe even replace some of the SceneKit CPU-side pipeline -- not for the faint of heart.
For an easier approximation that might still give you some bang for your buck, take a look at the node.filters property and Core Image filters. By selectively applying a linear or zoom blur filter to certain nodes, and carefully setting (or even animating) the filter parameters, you might get a convincing fake motion blur.
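For example, a rough sketch of that approach; CIMotionBlur and its input keys are real Core Image identifiers, but the radius/angle values and how you drive them over time are up to you:

    import SceneKit
    import CoreImage

    // Fake per-object motion blur by attaching a Core Image motion blur to one node.
    func applyFakeMotionBlur(to node: SCNNode, radius: Double, angle: Double) {
        guard let blur = CIFilter(name: "CIMotionBlur") else { return }
        blur.setValue(radius, forKey: kCIInputRadiusKey)    // blur length, in pixels
        blur.setValue(angle, forKey: kCIInputAngleKey)      // blur direction, in radians
        node.filters = [blur]
    }

    // Call it (and adjust radius/angle, or clear node.filters) as the node's
    // speed and heading change while the action runs, e.g.:
    // applyFakeMotionBlur(to: movingNode, radius: 12, angle: .pi / 2)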

You'll want to look into writing a motion blur fragment shader, in either GLSL or Metal Shading Language.

iOS 10 introduced camera.motionBlurIntensity for SCNCamera. Values are between 0.0 and 1.0, with the default at 0.0.
https://developer.apple.com/documentation/scenekit/scncamera/1644099-motionblurintensity

Related

Why do we use fixed focus for AR tracking in ARCore?

I am using ARCore to track an image. Based on the following reference, the FOCUSMODE of the camera should be set to FIXED for better AR tracking performance.
Since we can get the camera's focal length (an intrinsic parameter) for each frame, why do we need to use a fixed focus?
With fixed camera focus, ARCore can better calculate parallax (no near or distant real-world objects should be out of focus), so your camera tracking will be reliable and accurate. At the tracking stage, your device must be able to clearly distinguish all the textures of surrounding objects and the feature points in order to build a correct 3D scene.
The scene understanding stage requires fixed focus as well (to correctly detect planes, estimate lighting intensity and direction, etc.). That's what you expect from ARCore, isn't it?
Fixed focus also guarantees that your "in-focus" rendered 3D model will be placed in the scene beside real-world objects that are "in-focus" too. However, if we're using the Depth API, we can defocus both real-world and virtual objects.
P.S.
In the future ARCore engineers may change the aforementioned behaviour of camera focus.

How to get the lens position on ARKit 1.5

Before ARKit 1.5, we had no way to adjust the focus of the camera, and getting the lens position would always return the same value. With ARKit 1.5, however, we can now use autofocus by setting ARWorldTrackingConfiguration.isAutoFocusEnabled. My question is: is there any way to get the current lens position from ARKit, so that I can apply an out-of-focus effect to my virtual objects? I had a look at some classes where this information might be stored, like ARFrame or ARSession, but they don't seem to have such a field.
I've stumbled upon this thread where the OP says that he was able to set the lens position by using some private APIs, but this was before the release of ARKit 1.5 and a sure way to get your app rejected by the App Store.
Are there any legal ways to get the lens position from ARKit?
My guess is: probably not, but there are things you might try.
The intrinsics matrix vended by ARCamera is defined to express focal length in pixel units. But I’m not sure if that’s a measurement you could (together with others like aperture) define a depth blur effect with. Nor whether it changes during autofocus (that part you can test, at least).
The AVCapture APIs underlying ARKit offer a lensPosition indicator, but it's a generic floating-point value: zero is the minimum focus distance and one is the maximum, and with no real-world measurement that those values correspond to, you wouldn't know how much blur to apply (or which physically based camera settings in SceneKit, or which Unity settings, to use) for each possible lens position.
Even if you could put lensPosition to use, there’s no API for getting the capture device used by an ARSession. You can probably safely assume it’s the back (wide) camera, though.
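To illustrate the two values discussed above, here's a small Swift sketch. The API names are real (ARCamera.intrinsics, AVCaptureDevice.lensPosition), but note that querying the back wide-angle camera is only a guess at which device ARKit uses:

    import ARKit
    import AVFoundation
    import simd

    // 1. Focal length in pixels, read from the ARCamera intrinsics matrix.
    func focalLengthInPixels(from frame: ARFrame) -> simd_float2 {
        let K = frame.camera.intrinsics           // 3x3 intrinsics matrix
        return simd_float2(K[0][0], K[1][1])      // fx, fy
    }

    // 2. The generic 0...1 lens position from AVFoundation. ARKit doesn't expose
    //    its capture device, so which device to query is an assumption.
    func currentLensPosition() -> Float? {
        let device = AVCaptureDevice.default(.builtInWideAngleCamera,
                                             for: .video, position: .back)
        return device?.lensPosition
    }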

Convert ARKit SCNNode's bounding extent

I have an ARKit app that uses plane detection and successfully places objects on those planes. I want to use some information about what's sitting below the object when shading it, somewhat like the WWDC demo where the chameleon blended in with the color of the table. I want to grab the rectangular region of the screen around the footprint of the object (or, in this case, the bounding volume of the whole node would work just as well) so I can take the camera capture data for that region of interest and use it in image processing, e.g. for a metal sphere that reflects the ground it's sitting on. I'm just not sure what combination of transforms to apply: I've tried various combinations of convertPoint and projectPoint, and I occasionally get the origin, height, or width right, but never all three. Is there an easy helper method I'm missing? I assume what I'm basically looking for is a way of going from an SCNNode to its screen-space extent.
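Not a verified answer, but one combination to try is sketched below: convert the eight corners of the node's bounding box to world space, project them with projectPoint, and take their enclosing rect. That rect is in view coordinates; mapping it onto the capturedImage pixels would still need the frame's displayTransform.

    import ARKit
    import SceneKit

    // Sketch: view-space rect enclosing a node's bounding box.
    func screenRect(for node: SCNNode, in sceneView: ARSCNView) -> CGRect {
        let (minB, maxB) = node.boundingBox
        var points: [CGPoint] = []
        for x in [minB.x, maxB.x] {
            for y in [minB.y, maxB.y] {
                for z in [minB.z, maxB.z] {
                    // local corner -> world space -> screen space
                    let world = node.convertPosition(SCNVector3(x, y, z), to: nil)
                    let p = sceneView.projectPoint(world)
                    points.append(CGPoint(x: CGFloat(p.x), y: CGFloat(p.y)))
                }
            }
        }
        let xs = points.map { $0.x }, ys = points.map { $0.y }
        return CGRect(x: xs.min()!, y: ys.min()!,
                      width: xs.max()! - xs.min()!, height: ys.max()! - ys.min()!)
    }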

Can ARKit detect specific surfaces as planes?

Using iOS 11 and iOS 12 and ARKit, we are currently able to detect planes on horizontal surfaces, and we may also visualize that plane on the surface.
I am wondering if we can declare, through some sort of image file, specific surfaces on which we want to detect planes (possibly ignoring all other planes that ARKit detects on other surfaces)?
If that is not possible, could we perhaps capture the detected plane (as an image) and process it through a Core ML model that identifies that specific surface?
ARKit has no support for such a thing at the moment. You can indeed capture the detected plane as an image, and if you're able to match it through Core ML in real time, I'm sure a lot of people would be interested!
You should:
get the 3D position of the corners of the plane
find their 2D position in the frame, using sceneView.projectPoint
extract the frame from the currentFrame.capturedImage
do an affine transform on the image so you're left with just your plane, reprojected to a rectangle
do some ML / image processing to detect a match
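A rough Swift sketch of the first two steps, assuming sceneView is your ARSCNView and planeAnchor is a detected ARPlaneAnchor; note that projectPoint yields view coordinates, so pulling the region out of currentFrame.capturedImage still needs a view-to-image coordinate conversion (e.g. via the frame's displayTransform) and a perspective warp such as CIPerspectiveCorrection:

    import ARKit
    import SceneKit
    import simd

    // Project the four corners of a detected plane into view coordinates.
    func planeCornersOnScreen(_ planeAnchor: ARPlaneAnchor,
                              in sceneView: ARSCNView) -> [CGPoint] {
        let c = planeAnchor.center                    // plane center, in anchor space
        let e = planeAnchor.extent                    // plane width (x) and length (z)
        // Four corners in the anchor's local space (the plane lies in y = 0).
        let localCorners = [
            simd_float4(c.x - e.x / 2, 0, c.z - e.z / 2, 1),
            simd_float4(c.x + e.x / 2, 0, c.z - e.z / 2, 1),
            simd_float4(c.x + e.x / 2, 0, c.z + e.z / 2, 1),
            simd_float4(c.x - e.x / 2, 0, c.z + e.z / 2, 1),
        ]
        return localCorners.map { corner in
            let world = planeAnchor.transform * corner      // anchor -> world space
            let p = sceneView.projectPoint(SCNVector3(world.x, world.y, world.z))
            return CGPoint(x: CGFloat(p.x), y: CGFloat(p.y))
        }
    }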
Keep in mind that ARKit's plane detection is often not well aligned, and may cover only part of the full plane.
Finally, and unfortunately, the feature points that ARKit exposes are not useful, since they don't contain any characteristics usable for matching feature points across frames, and Apple has not said what algorithm it uses to compute them.
Here is a small demo of horizontal surface detection, in Swift 5, on GitHub.

iOS Camera Color Recognition in Real Time: Tracking a Ball

I have been looking around for a bit and know that people are able to track faces with Core Image and OpenGL. However, I am not sure where to start the process of tracking a colored ball with the iOS camera.
Once I have a lead on being able to track the ball, I hope to create something that detects when the ball changes direction.
Sorry I don't have source code, but I am unsure where to even start.
The key point is image preprocessing and filtering. You can use the camera APIs to get the video stream from the camera and take a snapshot from it, then:
apply a Gaussian blur (spatial smoothing);
apply a luminance average threshold filter to get a black-and-white image;
apply morphological preprocessing (opening and closing operators) to remove small noise;
run an edge-detection algorithm (for example a Prewitt operator), after which only the edges remain and, if the recording environment is ideal, your ball appears as a circle;
use a Hough transform to find the center of the ball.
Record the ball's position, and in the next frame only the small part of the picture around it needs to be processed.
Another keyword to look for: blob detection.
A fast library for image processing (on the GPU, with OpenGL) is Brad Larson's GPUImage library: https://github.com/BradLarson/GPUImage
It implements all the needed filters (except the Hough transform).
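As a rough illustration of the first two preprocessing steps (blur, then threshold) using Core Image rather than GPUImage; CIColorThreshold requires iOS 14 or later, and the morphology, edge-detection, and Hough steps are not covered here:

    import CoreImage
    import CoreImage.CIFilterBuiltins

    // Blur + threshold a single camera frame (a stand-in for the pipeline above).
    func preprocess(_ pixelBuffer: CVPixelBuffer, context: CIContext) -> CGImage? {
        let input = CIImage(cvPixelBuffer: pixelBuffer)

        let blur = CIFilter.gaussianBlur()
        blur.inputImage = input
        blur.radius = 3                              // spatial smoothing

        let threshold = CIFilter.colorThreshold()    // iOS 14+, black/white output
        threshold.inputImage = blur.outputImage
        threshold.threshold = 0.5                    // fixed threshold, not adaptive

        guard let output = threshold.outputImage else { return nil }
        return context.createCGImage(output, from: input.extent)
    }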
The tracking process can be defined as follows:
Start from the initial coordinates and dimensions of an object with given visual characteristics (image features).
In the next video frame, find the same visual characteristics near the coordinate of the last frame.
Near means considering basic transformations relative to the last frame:
translation in each direction;
scale;
rotation;
The variation of these transformations is closely tied to the frame rate: the higher the frame rate, the closer the position will be in the next frame.
Marvin Framework provides plug-ins and examples to perform this task. It's not compatible with iOS yet; however, it is open source and I think you could port the source code fairly easily.
This video demonstrates some tracking features, starting at 1:10.
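To make the "find the same features near the last coordinate" idea concrete, here is a deliberately naive Swift sketch that handles translation only (no scale or rotation), matching a small grayscale template by sum of absolute differences; a real implementation would use vImage, Metal, or a vision library instead:

    // Naive translation-only tracking: slide the template around the last known
    // position and keep the offset with the smallest sum of absolute differences.
    struct GrayFrame {
        let width: Int, height: Int
        let pixels: [UInt8]                              // row-major, 1 byte per pixel
        func pixel(_ x: Int, _ y: Int) -> Int { Int(pixels[y * width + x]) }
    }

    func track(template: GrayFrame, in frame: GrayFrame,
               lastPosition: (x: Int, y: Int), searchRadius: Int) -> (x: Int, y: Int) {
        var best = lastPosition
        var bestScore = Int.max
        for dy in -searchRadius...searchRadius {
            for dx in -searchRadius...searchRadius {
                let ox = lastPosition.x + dx, oy = lastPosition.y + dy
                guard ox >= 0, oy >= 0,
                      ox + template.width <= frame.width,
                      oy + template.height <= frame.height else { continue }
                var score = 0
                for ty in 0..<template.height {
                    for tx in 0..<template.width {
                        score += abs(frame.pixel(ox + tx, oy + ty) - template.pixel(tx, ty))
                    }
                }
                if score < bestScore { bestScore = score; best = (ox, oy) }
            }
        }
        return best
    }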
