How to smooth geometry with morpher? - ios

How do you smooth geometry that has a morpher attached, in SceneKit or Unity 3D? I heard that averaging the normals of the vertices will work, but I have no idea how to do it.
For example, I have a rough sphere like this. How can I smooth it by changing the normals of its vertices? subdivisionLevel doesn't work because there are morpher animations on it.
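For reference, the normal-averaging idea looks roughly like this in SceneKit — a minimal sketch, assuming you have access to the geometry's raw positions and triangle indices (the helper name is illustrative, and each morph target's normals would need the same treatment so the blend stays smooth):

```swift
import SceneKit
import simd

// Hypothetical helper: rebuilds smooth vertex normals for indexed triangle
// data by averaging the (area-weighted) face normals at each shared vertex.
// For this to smooth across faces, vertices must be shared between
// triangles, not duplicated per face.
func smoothedGeometry(positions: [SIMD3<Float>], indices: [Int32]) -> SCNGeometry {
    var normals = [SIMD3<Float>](repeating: .zero, count: positions.count)

    // Accumulate each triangle's face normal into its three vertices.
    // Leaving the cross product un-normalized weights larger faces more.
    for i in stride(from: 0, to: indices.count, by: 3) {
        let (a, b, c) = (Int(indices[i]), Int(indices[i + 1]), Int(indices[i + 2]))
        let faceNormal = cross(positions[b] - positions[a],
                               positions[c] - positions[a])
        normals[a] += faceNormal
        normals[b] += faceNormal
        normals[c] += faceNormal
    }

    let vertexSource = SCNGeometrySource(vertices: positions.map {
        SCNVector3($0.x, $0.y, $0.z)
    })
    let normalSource = SCNGeometrySource(normals: normals.map {
        let n = simd_normalize($0)
        return SCNVector3(n.x, n.y, n.z)
    })
    let element = SCNGeometryElement(indices: indices, primitiveType: .triangles)
    return SCNGeometry(sources: [vertexSource, normalSource], elements: [element])
}
```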

Related

Convert ARKit SCNNode's bounding extent

I have an ARKit app that uses plane detection, and successfully places objects on those planes. I want to use some of the information on what's sitting below the object in my approach to shading it - something a bit similar to the WWDC demo where the chameleon blended in with the color of the table.
I want to grab the rectangular region of the screen around the footprint of the object (or in this case, the bounding volume of the whole node would work just as well) so I can take the camera capture data for the region of interest and use it in the image processing, like a metal sphere that reflects the ground it's sitting on.
I'm just not sure what combination of transforms to apply - I've tried various combinations of convertPoint and projectPoint, and I occasionally get the origin, height, or width right, but never all 3. Is there an easy helper method I'm missing? I assume basically what I'm looking for is a way of going from SCNNode -> extent.
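One approach that tends to work (a sketch, with illustrative names): take the eight corners of the node's boundingBox, convert each to world space, project them with the renderer's projectPoint, and take the min/max in screen space:

```swift
import SceneKit

// Sketch of one way to get a node's screen-space bounding rect.
// `renderer` is whatever renders your scene (e.g. an ARSCNView).
func screenRect(of node: SCNNode, in renderer: SCNSceneRenderer) -> CGRect {
    let (minB, maxB) = node.boundingBox
    // All 8 corners of the node's local-space bounding box.
    let corners = [
        SCNVector3(minB.x, minB.y, minB.z), SCNVector3(maxB.x, minB.y, minB.z),
        SCNVector3(minB.x, maxB.y, minB.z), SCNVector3(maxB.x, maxB.y, minB.z),
        SCNVector3(minB.x, minB.y, maxB.z), SCNVector3(maxB.x, minB.y, maxB.z),
        SCNVector3(minB.x, maxB.y, maxB.z), SCNVector3(maxB.x, maxB.y, maxB.z),
    ]
    var minX = CGFloat.greatestFiniteMagnitude, minY = CGFloat.greatestFiniteMagnitude
    var maxX = -CGFloat.greatestFiniteMagnitude, maxY = -CGFloat.greatestFiniteMagnitude
    for corner in corners {
        // Local -> world, then world -> screen. projectPoint expects world
        // space, which is a common cause of getting only some dimensions right.
        let world = node.convertPosition(corner, to: nil)
        let screen = renderer.projectPoint(world)
        minX = min(minX, CGFloat(screen.x)); maxX = max(maxX, CGFloat(screen.x))
        minY = min(minY, CGFloat(screen.y)); maxY = max(maxY, CGFloat(screen.y))
    }
    return CGRect(x: minX, y: minY, width: maxX - minX, height: maxY - minY)
}
```

Depending on the renderer, you may still need to map the resulting rect into your view's coordinate convention before cropping the camera capture.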

How to remove "Fish Eye" effect on SceneKit camera?

I'm using SceneKit. I have created and assigned my own camera to the scene and I have adjusted its xFov and yFov. When I set a value higher than 50, there starts to be some distortion. Everything near the edges of the screen is stretched – almost like the camera suddenly becomes a "Fish Eye."
I need the xFov and yFov to be above 50 (I actually need it to be 100), but I can't have that distortion. What do I do?
What you're asking isn't theoretically impossible per se, but it is theoretically interesting, at least.
What happens to a physical camera when you increase the field of view? The wider it gets, the more "fisheye" it looks. The projection matrix and perspective divide of a 3D graphics pipeline like SceneKit's work in a similar way. It looks a little different because it's a rectilinear transformation rather than the effect of a spherical lens, but it's the same general idea — it maps a volume (called a frustum) of 3D space "seen" by the camera onto the viewing plane. This is a general aspect of 3D graphics, not something specific to SceneKit, so you can find plenty of good tutorials that cover the underlying math well.
That frustum projection fixes a certain relationship between the amount of viewing angle something takes up and its width on the viewing plane. You can't really change that relationship and still have a linear (well, rational, but mostly linear) transformation that 3D hardware can apply with a single matrix multiplication (and perspective divide).
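To make that concrete: a rectilinear projection places a point at angle θ off the camera axis at a distance proportional to tan(θ) from the image center, so a fixed angular size covers more and more of the image toward the edges of a wide view. A quick back-of-the-envelope check:

```swift
import Foundation

// Screen width covered by a 10° slice of the view, for a rectilinear
// projection where distance from center = tan(angle off axis).
func sliceWidth(from a: Double, to b: Double) -> Double {
    tan(b * .pi / 180) - tan(a * .pi / 180)
}

let center = sliceWidth(from: 0, to: 10)    // slice in the middle of the view
let edge50 = sliceWidth(from: 15, to: 25)   // outermost slice of a 50° FOV
let edge100 = sliceWidth(from: 40, to: 50)  // outermost slice of a 100° FOV

print(edge50 / center)   // ~1.12 — barely stretched at 50°
print(edge100 / center)  // ~2.0  — twice as wide at 100°: the "fisheye" look
```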
You could, in theory, define a different relationship — say, one where a large angular size corresponds to a much larger part of the viewing plane near the center of the view, but to a much smaller part farther away from the center. But you can't do that in the camera transform... You'd have to do such calculations pixel by pixel in some kind of post-processing shader. (In fact, this is generally how rendering for the lenses of a VR headset works.)

How to achieve motion blur effect in SceneKit?

How to achieve "motion effect" in SceneKit? Motion effect is that blur that gets created if you shoot (with a camera) fast moving objects. I am running an action on a node and would like a little blur in the direction of moving when the node is moving, to emphasise that the node is moving fast. Can this be done in SceneKit?
This image has motion effect - blur applied to the whole scene. you can tell that the camera is moving inwards by the direction of blur lines. I only want to apply motion blur to a single object and not while scene.
In recent versions of SceneKit, motion blur is built in — you can just set the motionBlurIntensity on your scene’s camera.
In iOS 10, motion blur applies to camera motion only — moving objects won't blur. (You have to set the movabilityHint to .movable on nodes that you don't want blurred when the camera moves fast.)
In iOS 11 and later, moving objects can also blur, so you can just set motionBlurIntensity on the camera and everything “just works”.
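In code, that looks something like this (a minimal sketch; on iOS 11 and later the motionBlurIntensity line alone is enough):

```swift
import SceneKit

let cameraNode = SCNNode()
cameraNode.camera = SCNCamera()

// iOS 10+: enable motion blur on the camera you render from.
// Intensity runs from 0.0 (off, the default) to 1.0 (maximum blur).
cameraNode.camera?.motionBlurIntensity = 0.5

// iOS 10: mark fast-moving nodes as movable so camera-motion blur
// doesn't smear them; on iOS 11+ movable nodes blur with their own motion.
let fastMovingNode = SCNNode()
fastMovingNode.movabilityHint = .movable
```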
The rest of this answer predates iOS 10, and is still relevant if you’re (for some reason) supporting iOS 9.x or older.
To get a really good motion blur effect you'd have to write your own shaders and maybe even replace some of the SceneKit CPU-side pipeline -- not for the faint of heart.
For an easier approximation that might still give you some bang for your buck, take a look at the node.filters property and Core Image filters. By selectively applying a linear or zoom blur filter to certain nodes, and carefully setting (or even animating) the filter parameters, you might get a convincing fake motion blur.
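For instance, CIMotionBlur (a standard Core Image filter) applied through node.filters — a sketch, with the radius and angle values as placeholders to tune or animate:

```swift
import SceneKit
import CoreImage

let movingNode = SCNNode(geometry: SCNSphere(radius: 1))

// Fake motion blur on just this node. Note that node.filters renders the
// node into an offscreen buffer first, which has a real performance cost.
if let blur = CIFilter(name: "CIMotionBlur") {
    blur.setValue(12.0, forKey: kCIInputRadiusKey)         // blur length
    blur.setValue(Double.pi / 2, forKey: kCIInputAngleKey) // blur direction
    blur.name = "motionBlur"                               // key for animation
    movingNode.filters = [blur]
}

// Filter parameters are animatable through a key path, so you can ramp the
// blur up and down as the node speeds up and slows down:
movingNode.setValue(20.0, forKeyPath: "filters.motionBlur.inputRadius")
```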
You'll want to look into writing a motion blur fragment shader, in either GLSL or Metal Shading Language.
iOS 10 introduced camera.motionBlurIntensity for SCNCamera. Values are between 0.0 and 1.0, with the default at 0.0.
https://developer.apple.com/documentation/scenekit/scncamera/1644099-motionblurintensity

DirectX: How to draw smooth 2D water (particle based water)

I recently got a water simulation using particles (1000-1500) working (using the Stokes equations), but my problem is that I use an IDXSprite which just draws the particles as blue texture quads (7x7), which doesn't look very smooth.
Is there any way or known technique for drawing such systems so that the surface looks smooth (and the water doesn't show the edges of the individual textures)?

XNA Adding Craters (via GPU) with a "Burn" Effect

I am currently working on a 2D "Worms" clone in XNA, and one of the features is "deformable" terrain (e.g. when a rocket hits the terrain, there is an explosion and a chunk of the terrain disappears).
How I am currently doing this is by using a texture that has a progressively higher red value as it approaches the center. I cycle through every pixel of that "Deform" texture, and if the current pixel overlaps a terrain pixel and has a high enough red value, I set the corresponding pixel in the color array representing the terrain to transparent. If the current pixel does NOT have a high enough red value, I blacken the terrain color (it gets blacker the closer the red value is to the threshold). At the end of this operation I use SetData to update my terrain texture.
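For reference, that per-pixel pass boils down to something like the following sketch (written here in Swift rather than C#; in XNA it would be a loop over Color arrays from GetData followed by SetData, and all names are illustrative):

```swift
// Sketch of the CPU crater pass described above: `terrain` and `deform` are
// RGBA pixel buffers, `threshold` separates the carved hole from the burnt
// rim. Bounds checks are omitted for brevity.
struct RGBA { var r, g, b, a: UInt8 }

func applyCrater(terrain: inout [RGBA], terrainWidth: Int,
                 deform: [RGBA], deformWidth: Int, deformHeight: Int,
                 atX ox: Int, atY oy: Int, threshold: UInt8) {
    for y in 0..<deformHeight {
        for x in 0..<deformWidth {
            let t = (oy + y) * terrainWidth + (ox + x)
            guard terrain[t].a > 0 else { continue }   // no terrain here
            let red = deform[y * deformWidth + x].r
            if red >= threshold {
                terrain[t].a = 0                       // carve the hole
            } else {
                // Burn: darken toward black as red approaches the threshold.
                let scale = 1.0 - Double(red) / Double(threshold)
                terrain[t].r = UInt8(Double(terrain[t].r) * scale)
                terrain[t].g = UInt8(Double(terrain[t].g) * scale)
                terrain[t].b = UInt8(Double(terrain[t].b) * scale)
            }
        }
    }
}
```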
I realize this is not a good way to do it, not only because I have read about pipeline stalls and such, but also because it can become quite laggy if lots of craters are being added at the same time. I want to remake my crater generation on the GPU instead, using render targets "ping-ponging" between being the target and the texture to modify. That isn't the problem, I know how to do that. The problem is that I don't know how to keep my burn effect using this method.
Here's how the burn effect looks right now:
Does anybody have an idea how I would create a similar burn effect (darkening the edges around the formed crater)? I am completely unfamiliar with shaders, but if it requires them I would be really thankful if someone walked me through how to do it. If there are any other ways, that'd be great too.
Sounds like you're headed in the right direction. But you're doing a lot of things by hand that can also be done by just drawing sprites and applying the right formulas.
For example:
Suppose your terrain is saved in the alpha channel of a giant texture: 1 is terrain, 0 is nothing.
An explosion happens and the terrain has to be deformed. Update your texture easily by just drawing a black transparent sphere (or explosion area) onto it. The terrain is gone, because the alpha value of the black sphere is 0. Your texture is now up to date; everything was done by the SpriteBatch, and nothing had to be checked.
I don't know if you wanted a solution for this as well, but now you have one.
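The same stamping idea, sketched with Core Graphics instead of XNA's SpriteBatch (in XNA you'd draw into a RenderTarget2D with a blend state that overwrites the alpha channel):

```swift
import CoreGraphics

// Carve a circular crater out of a terrain image by erasing its alpha.
// `context` is a CGContext holding the terrain texture; .destinationOut
// zeroes the destination's alpha wherever we fill, so the terrain vanishes.
func carveCrater(in context: CGContext, at center: CGPoint, radius: CGFloat) {
    context.saveGState()
    context.setBlendMode(.destinationOut)   // "subtract" from the alpha channel
    context.setFillColor(gray: 0, alpha: 1)
    context.fillEllipse(in: CGRect(x: center.x - radius, y: center.y - radius,
                                   width: radius * 2, height: radius * 2))
    context.restoreGState()
}
```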
For the burn effect
Now that we have our terrain in a texture, we can do a post effect on the drawing by using a shader (just like you said). The shader obtains the texture's alpha channel and can now do different things to get our burn effect.
The first option is to do edge detection. Check a few pixels in all 4 directions and see whether the pixel is at the edge. If so, burn it by, for example, multiplying it by the distance to the edge (or any other falloff function you like).
Another way is quite similar to the first, but takes two steps. First you do the same kind of edge detection, but you save the edges to a separate texture. Then, when you draw your terrain, you overlay the edges on it, so the result is much the same as drawing the ground in one pass.
The main difference with the second option is that you can also choose to just draw your normal ground, without adjusting the pixels of the ground texture at render time.
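To make the first option concrete, here's the edge-detect-and-burn logic as a CPU-style sketch (a real version would be an HLSL pixel shader sampling the alpha channel at small offsets; the names and the falloff function are illustrative):

```swift
// Sketch of the edge-detection burn (option one). `alpha` is the terrain
// texture's alpha channel as floats in [0, 1]; `burnWidth` is how many
// pixels of rim to darken. Bounds checks are omitted for brevity.
func burnFactor(alpha: [[Float]], x: Int, y: Int, burnWidth: Int) -> Float {
    guard alpha[y][x] > 0 else { return 1 }   // empty space: nothing to burn
    // Distance (in pixels) to the nearest empty neighbor in the 4 directions.
    var distance = burnWidth
    for step in 1...burnWidth {
        if alpha[y][x - step] == 0 || alpha[y][x + step] == 0 ||
           alpha[y - step][x] == 0 || alpha[y + step][x] == 0 {
            distance = step - 1
            break
        }
    }
    // 0 at the very edge (fully burnt) up to 1 in the interior (untouched);
    // multiply the terrain color by this factor when drawing.
    return Float(distance) / Float(burnWidth)
}
```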
I know this is a long story, but it is a nice technique. Have a look at toon shaders; they do edge detection as well, even though they work in 3D.
Keywords: Toon shading, HLSL, Post effects, edge detection, image processing.
Recommended reading: http://rbwhitaker.wikidot.com/xna-tutorials
