SceneKit shader modifier is not modifying the geometry's position - ios

I am trying to apply a simple shader modifier to move the position of a cube. The cube's regular diffuse color is light blue, but with this modifier it does turn red, so I know the shader modifier is working (somewhat). However, the cube remains in the center of the screen at position (0, 0, 0), so its position is not being modified by the shader modifier. Any ideas?
Here is the code:
let modifier = """
_surface.diffuse = float4(1,0,0,1);
_surface.position = float3(10.0,0.0,0.0);
"""
cube.geometry?.shaderModifiers = [SCNShaderModifierEntryPoint.surface : modifier]

The magenta tint is SceneKit's way to indicate that a shader has failed to compile.
_surface.position = float3(10.0,0.0,0.0);
Looking at SCNShadable.h, we see that SCNShaderSurface's position is a float4, not a float3, so that assignment fails to compile.
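For reference, if the goal is to actually displace the vertices, the geometry entry point is the one intended for deforming geometry, and there _geometry.position is a float4. A minimal sketch, assuming the same cube as above:

let geometryModifier = """
// Offset every vertex by 10 units along X in model space.
_geometry.position.x += 10.0;
"""
cube.geometry?.shaderModifiers = [SCNShaderModifierEntryPoint.geometry : geometryModifier]

Both entry points can live in the same shaderModifiers dictionary if the red tint from the surface modifier is still wanted.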

Related

Face texture from ARKit

I am running a face tracking configuration in ARKit with SceneKit. In each frame I can access the camera feed via the snapshot property or the capturedImage buffer, and I have also been able to map each face vertex to the image coordinate space and add some UIView helpers (1-point squares) to display all the face vertices on screen in real time, like this:
func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
    guard let faceGeometry = node.geometry as? ARSCNFaceGeometry,
          let anchorFace = anchor as? ARFaceAnchor,
          anchorFace.isTracked
    else { return }

    let vertices = anchorFace.geometry.vertices
    for (index, vertex) in vertices.enumerated() {
        let projected = sceneView.projectPoint(node.convertPosition(SCNVector3(vertex), to: nil))
        let xVertex = CGFloat(projected.x)
        let yVertex = CGFloat(projected.y)
        let newPosition = CGPoint(x: xVertex, y: yVertex)
        // Here I update the position of each UIView on screen with the calculated vertex
        // position; I have an array of views matching the vertex count, which is consistent
        // across sessions.
    }
}
Since the UV coordinates are also constant across sessions, I am trying to draw each pixel that lies over the face mesh at its corresponding position in the UV texture, so that after a few iterations I can write a person's face texture to a file.
I have come up with some theoretical solutions, like creating a CGPath for each triangle and asking, for each pixel, whether it is contained in that triangle. If it is, I would create a triangular image by cropping a rectangle and applying a triangle mask obtained from the triangle's vertices projected into image coordinates. That triangular image would then have to be mapped onto the underlying triangle's transform in the UV texture (skewed into place) and added as a UIImageView subview of a 1024x1024 UIView, with each triangle handled the same way, before finally encoding that UIView as a PNG. This sounds like a lot of work, especially the part of matching each cropped triangle to its corresponding triangle in the UV texture.
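For the containment-test part of that idea, a small sketch could look like the following; the helper names are mine, and the three points are assumed to already be projected into image coordinates:

import UIKit

// Hypothetical helper: builds a CGPath for one projected face triangle.
func trianglePath(_ a: CGPoint, _ b: CGPoint, _ c: CGPoint) -> CGPath {
    let path = CGMutablePath()
    path.move(to: a)
    path.addLine(to: b)
    path.addLine(to: c)
    path.closeSubpath()
    return path
}

// Hypothetical helper: tests whether a pixel center falls inside that triangle.
func pixelIsInside(_ pixel: CGPoint, _ a: CGPoint, _ b: CGPoint, _ c: CGPoint) -> Bool {
    return trianglePath(a, b, c).contains(pixel)
}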
In the Apple demo project there is an image that shows what that UV texture looks like; if you edit this image and add some colors, they show up on the face. But I need the other way around: from what I am seeing in the camera feed, create a texture of your face. In the same demo project there is an example that does exactly what I need, but with a shader, and with no clues on how to extract the texture to a file. The shader code looks like this:
/*
<samplecode>
<abstract>
SceneKit shader (geometry) modifier for texture mapping ARKit camera video onto the face.
</abstract>
</samplecode>
*/
#pragma arguments
float4x4 displayTransform // from ARFrame.displayTransform(for:viewportSize:)
#pragma body
// Transform the vertex to the camera coordinate system.
float4 vertexCamera = scn_node.modelViewTransform * _geometry.position;
// Camera projection and perspective divide to get normalized viewport coordinates (clip space).
float4 vertexClipSpace = scn_frame.projectionTransform * vertexCamera;
vertexClipSpace /= vertexClipSpace.w;
// XY in clip space is [-1,1]x[-1,1], so adjust to UV texture coordinates: [0,1]x[0,1].
// Image coordinates are Y-flipped (upper-left origin).
float4 vertexImageSpace = float4(vertexClipSpace.xy * 0.5 + 0.5, 0.0, 1.0);
vertexImageSpace.y = 1.0 - vertexImageSpace.y;
// Apply ARKit's display transform (device orientation * front-facing camera flip).
float4 transformedVertex = displayTransform * vertexImageSpace;
// Output as texture coordinates for use in later rendering stages.
_geometry.texcoords[0] = transformedVertex.xy;
/**
* MARK: Post-process special effects
*/
Honestly I do not have much experience with shaders, so any help would be appreciated in translating the shader into more Cocoa Touch / Swift code. I am not thinking about performance yet, so doing it on the CPU, in a background thread or offline, is fine. In any case I will have to choose the right frames to avoid skewed samples, preferring triangles with good information over ones stretched to barely a few pixels (for example, only sampling a triangle if its normal points toward the camera), or add other UI helpers to make the user turn their face so the whole face gets sampled correctly.
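For what it's worth, the same math can be reproduced per vertex on the CPU with simd. The sketch below is only my rough translation, under the assumptions that the SceneKit camera mirrors the ARCamera (which ARSCNView sets up for you) and that you want normalized coordinates into the displayed camera image; the function and parameter names are mine, and depending on whether you sample the raw capturedImage or a view-sized snapshot you may need the inverse of the display transform instead.

import ARKit
import SceneKit
import UIKit
import simd

// Rough CPU-side equivalent of the geometry modifier above, evaluated per face vertex.
func cameraTextureCoordinate(for vertex: simd_float3,
                             faceNode: SCNNode,
                             frame: ARFrame,
                             orientation: UIInterfaceOrientation,
                             viewportSize: CGSize) -> CGPoint {
    // scn_node.modelViewTransform * _geometry.position
    let model = faceNode.simdWorldTransform
    let view = frame.camera.viewMatrix(for: orientation)
    let projection = frame.camera.projectionMatrix(for: orientation,
                                                   viewportSize: viewportSize,
                                                   zNear: 0.001,
                                                   zFar: 1000)
    var clip = projection * view * model * simd_float4(vertex, 1)
    clip /= clip.w // perspective divide, XY now in [-1, 1]

    // Clip space [-1, 1] -> [0, 1], with a Y flip (image coordinates have an upper-left origin).
    var point = CGPoint(x: CGFloat(clip.x) * 0.5 + 0.5,
                        y: 1.0 - (CGFloat(clip.y) * 0.5 + 0.5))

    // Same role as the displayTransform uniform in the shader.
    point = point.applying(frame.displayTransform(for: orientation,
                                                  viewportSize: viewportSize))
    return point
}

You would call something like this from the renderer(_:didUpdate:for:) callback above, with frame coming from sceneView.session.currentFrame.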
I have already checked this post and this post but cannot get it to work.
This app does exactly what I need, but it does not seem to use ARKit.
Thanks.

Fragment shader output interferes with conditional statement

Context: I'm doing all of the following using OpenGL ES 2 on iOS 11.
While implementing different blend modes used to blend two textures together I came across a weird issue that I managed to reduce to the following:
I'm trying to blend the following two textures together, only using the fragment shader and not the OpenGL blend functions or equations. GL_BLEND is disabled.
Bottom - dst:
Top - src:
(The bottom image is the same as the top image but rotated and blended onto an opaque white background using "normal" (as in Photoshop 'normal') blending)
In order to do the blending I use the
#extension GL_EXT_shader_framebuffer_fetch
extension, so that in my fragment shader I can write:
void main()
{
    highp vec4 dstColor = gl_LastFragData[0];
    highp vec4 srcColor = texture2D(textureUnit, textureCoordinateInterpolated);

    gl_FragColor = blend(srcColor, dstColor);
}
The blend function doesn't perform any blending itself. It only chooses the correct blend function to apply based on a uniform blendMode integer value. In this case the first texture gets drawn with an already-tested normal blending function, and then the second texture gets drawn on top with the following blendTest function:
Now here's where the problem comes in:
highp vec4 blendTest(highp vec4 srcColor, highp vec4 dstColor) {
    highp float threshold = 0.7; // arbitrary
    highp float g = 0.0;
    if (dstColor.r > threshold && srcColor.r > threshold) {
        g = 1.0;
    }
    //return vec4(0.6, g, 0.0, 1.0); // no yellow lines (Case 1)
    return vec4(0.8, g, 0.0, 1.0); // shows yellow lines (Case 2)
}
This is the output I would expect (made in Photoshop):
So red everywhere and green/yellow in the areas where both textures contain an amount of red that is larger than the arbitrary threshold.
However, the results I get are for some reason dependent on the output value I choose for the red component (0.6 or 0.8) and none of these outputs matches the expected one.
Here's what I see (The grey border is just the background):
Case 1:
Case 2:
So to summarize: if I return a red value that is larger than the threshold, e.g.
return vec4(0.8, g, 0.0, 1.0);
I see vertical yellow lines, whereas if the red component is less than the threshold there will be no yellow/green in the result whatsoever.
Question:
Why does the output of my fragment shader determine whether or not the conditional statement is executed? And even then, why do I end up with green vertical lines instead of green boxes (which indicates that the dstColor is not being read properly)?
Does it have to do with the extension that I'm using?
I also want to point out that both textures are being loaded and bound properly; I can see them just fine if I simply return the individual texture data without blending. Even with a normal blending function that I've implemented, everything works as expected.
I found out what the problem was (and I realize that it's not something anyone could have known from just reading the question):
There is an additional fully transparent texture being drawn between the two textures you can see above, which I had forgotten about.
Instead of accounting for that and simply returning the dstColor when the srcColor alpha is 0, the transparent texture's color information (which is (0.0, 0.0, 0.0, 0.0)) was being used in the blend, thereby altering the framebuffer contents.
Both the transparent texture and the final texture were drawn with the blendTest function, so the output of the first function call was then being read in when blending the final texture.

Can we render shadow on a transparent Plane in SceneKit

I have used shader modifiers for the plane, but it's not working. Can anyone suggest how to solve this?
let myShaderfragment = "#pragma transparent;\n" + "_output.color.a = 0.0;"
let myShaderSurface = "#pragma transparent;\n" + "_surface.diffuse.a = 0.0;"
material.shaderModifiers = [SCNShaderModifierEntryPoint.fragment : myShaderfragment, SCNShaderModifierEntryPoint.surface : myShaderSurface]
The SceneKit: What's New session from WWDC 2017 explains how to do that.
For the plane, use a material with constant as its lightingModel. It's the cheapest one.
This material will have writesToDepthBuffer set to true and colorBufferWriteMask set to [] (an empty option set). That way the plane will write to the depth buffer, but won't draw anything on screen.
Set the light's shadowMode to deferred so that shadows are not applied when rendering the objects themselves, but as a final post-process.
There's now a dedicated lighting model (SCNLightingModelShadowOnly) to render only shadows.
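Putting those steps together, a minimal sketch could look like this; it is my own assembly of the answer's steps with placeholder names and sizes, not code from the WWDC session:

import SceneKit
import UIKit

// Invisible plane: writes depth (so deferred shadows can land on it) but draws no color.
let shadowPlaneMaterial = SCNMaterial()
shadowPlaneMaterial.lightingModel = .constant    // cheapest lighting model
shadowPlaneMaterial.writesToDepthBuffer = true
shadowPlaneMaterial.colorBufferWriteMask = []    // empty option set: no color output

let planeNode = SCNNode(geometry: SCNPlane(width: 10, height: 10))
planeNode.geometry?.firstMaterial = shadowPlaneMaterial
planeNode.eulerAngles.x = -.pi / 2               // lay the plane flat

// Apply shadows as a final post-process instead of during object rendering.
let light = SCNLight()
light.type = .directional
light.castsShadow = true
light.shadowMode = .deferred
light.shadowColor = UIColor(white: 0, alpha: 0.5)

// Alternative mentioned above: use the dedicated shadow-only lighting model instead
// of the colorBufferWriteMask setup.
// shadowPlaneMaterial.lightingModel = .shadowOnly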

How to draw a spritebatch without Color?

I'm drawing a Texture2D like this
// background_texture is white in color
spritebatch.Draw(content.Load<Texture2D>("background_texture"),
                 new Rectangle(10, 10, 100, 100),
                 Color.Red);
The texture is white; however, on screen it's displayed as red.
Why does the Draw method require a Color?
How does one simply draw the texture, and only the texture, without a Color.something tint distorting the graphic?
Take a look at the documentation here:
http://msdn.microsoft.com/en-us/library/ff433986.aspx
You want to try Color.White. That additional Color parameter is a tint, and a white "tint" displays the sprite without any tinting.
Color.White does not change the color of your image. Use
spritebatch.Draw(content.Load<Texture2D>("background_texture"),
                 new Rectangle(10, 10, 100, 100),
                 Color.White);
Instead of Color.Red, which applies a tint.
Note: Be careful. IntelliSense will want to make this Color.Wheat, so be sure to type the first three letters before you hit space.
Color.White effectively does nothing, because the default sprite shader looks like this:
PixelShader....
{
    ....
    return Texture * Color;
}
Here Color is the color passed from the vertex shader, which is the Color you pass to spritebatch.Draw. If it were black (or missing entirely), it would produce invisible sprites. The whole point is that you are setting the vertex color of each vertex, and that color is used as a multiplier on the texture you set for the sprite.

Applying color to an OpenGL ES 2.0 Point Sprite texture in Fragment shader?

I am creating a particle emitter with a texture that is a white circle with alpha. I am unable to color the sprites using the color passed to the fragment shader.
I tried the following:
gl_FragColor = texture2D(Sampler, gl_PointCoord) * colorVarying;
This seems to be doing some kind of additive coloring.
What I am attempting is porting this:
http://maniacdev.com/2009/07/source-code-particle-based-explosions-in-iphone-opengl-es/
from ES 1.1 to ES 2.0
With your code, consider the following example:
texture2D = (1, 0, 0, 1) = red, fully opaque
colorVarying = (0, 1, 0, 0.5) = green, half transparent
Then gl_FragColor would be (0, 0, 0, 0.5): black, half transparent.
Generally, you can use mix to interpolate values, but if I understood your problem correctly, it's even easier.
Basically, you only want the alpha channel from your texture and to apply it to another color, right? Then you could do this:
gl_FragColor = vec4(colorVarying.rgb, texture2D(Sampler, gl_PointCoord).a);
