Using shaders from ShaderToy in Interface Builder (Xcode) - iOS

I'm attempting to see what shaders look like in Interface Builder using SpriteKit, and would like to use some of the shaders from ShaderToy. To do this, I created a "shader.fsh" file and a scene file, added a color sprite to the scene, and gave it a custom shader (shader.fsh).
While very basic shaders seem to work:
void main() {
    gl_FragColor = vec4(0.0, 1.0, 0.0, 1.0);
}
Any attempt I make to convert shaders from ShaderToy causes Xcode to freeze up (spinning color ball) as soon as it attempts to render them.
The shader I am working with for example, is this one:
#define M_PI 3.1415926535897932384626433832795

float rand(vec2 co)
{
    return fract(sin(dot(co.xy, vec2(12.9898, 78.233))) * 43758.5453);
}
void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    float size = 30.0;
    float prob = 0.95;
    vec2 pos = floor(1.0 / size * fragCoord.xy);
    float color = 0.0;
    float starValue = rand(pos);
    if (starValue > prob)
    {
        vec2 center = size * pos + vec2(size, size) * 0.5;
        float t = 0.9 + 0.2 * sin(iGlobalTime + (starValue - prob) / (1.0 - prob) * 45.0);
        color = 1.0 - distance(fragCoord.xy, center) / (0.5 * size);
        color = color * t / (abs(fragCoord.y - center.y)) * t / (abs(fragCoord.x - center.x));
    }
    else if (rand(fragCoord.xy / iResolution.xy) > 0.996)
    {
        float r = rand(fragCoord.xy);
        color = r * (0.25 * sin(iGlobalTime * (r * 5.0) + 720.0 * r) + 0.75);
    }
    fragColor = vec4(vec3(color), 1.0);
}
I've tried:
Replacing mainImage() with main(void) (so that it will be called)
Replacing the iXxxxx variables (iGlobalTime, iResolution) and fragCoord variables with their related variables (based on the suggestions here)
Replacing some of the variables (iGlobalTime)...
While changing mainImage to main() and swapping out the variables got it to work without error in the TinyShading realtime tester app, the outcome in Xcode is always the same (spinning ball, freeze). Any advice would be helpful, as there is surprisingly little information currently available on the topic.

I managed to get this working in SpriteKit using SKShader. I've been able to render every shader from ShaderToy that I've attempted so far. The only caveat is that you must remove any code using iMouse, since there is no mouse on iOS. I did the following...
1) Change the mainImage function declaration in the ShaderToy shader to...
void main(void) {
    ...
}
The ShaderToy mainImage function has an input named fragCoord. In iOS, this is globally available as gl_FragCoord, so your main function no longer needs any inputs.
2) Do a replace all to change the following from their ShaderToy names to their iOS names...
fragCoord becomes gl_FragCoord
fragColor becomes gl_FragColor
iGlobalTime becomes u_time
Note: There are more that I haven't encountered yet. I'll update as I do
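To make the renames concrete, here is a minimal ShaderToy-style shader and the same shader after conversion (a sketch; the gradient effect itself is just a placeholder):
// ShaderToy version
void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    vec2 uv = fragCoord.xy / iResolution.xy;
    fragColor = vec4(uv, abs(sin(iGlobalTime)), 1.0);
}

// SpriteKit version after the renames
void main(void)
{
    vec2 uv = gl_FragCoord.xy / iResolution.xy; // iResolution is injected as a uniform (see step 3)
    gl_FragColor = vec4(uv, abs(sin(u_time)), 1.0);
}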
3) Providing iResolution is slightly more involved...
iResolution is the viewport size (in pixels), which translates to the sprite size in SpriteKit. This used to be available as u_sprite_size in iOS, but has been removed. Luckily, Apple provides a nice example of how to inject it into your shader using uniforms in their SKShader documentation.
However, as stated in the Shader Inputs section of ShaderToy, the type of iResolution is vec3 (x, y, and z) as opposed to u_sprite_size, which is vec2 (x and y). I have yet to see a single ShaderToy shader that uses the z value of iResolution, so we can simply use a z value of zero. I modified the example in the Apple documentation to provide my shader an iResolution of type vec3, like so...
let uniformBasedShader = SKShader(fileNamed: "YourShader.fsh")
let sprite = SKSpriteNode()
sprite.shader = uniformBasedShader
let spriteSize = vector_float3(
    Float(sprite.frame.size.width),  // x
    Float(sprite.frame.size.height), // y
    Float(0.0)                       // z - never used
)
uniformBasedShader.uniforms = [
    SKUniform(name: "iResolution", vectorFloat3: spriteSize)
]
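Note that SpriteKit takes care of declaring the uniforms you attach through the uniforms array, so the shader can reference iResolution directly without its own uniform declaration. A minimal sketch of using it on the shader side:
void main(void)
{
    // normalized coordinates, equivalent to fragCoord.xy / iResolution.xy on ShaderToy
    vec2 uv = gl_FragCoord.xy / iResolution.xy;
    gl_FragColor = vec4(uv, 0.0, 1.0);
}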
That's it :)

Here is the modified shader, which works when loaded as an SKShader from Swift:
#define M_PI 3.1415926535897932384626433832795

float rand(vec2 co);

float rand(vec2 co)
{
    return fract(sin(dot(co.xy, vec2(12.9898, 78.233))) * 43758.5453);
}

void main()
{
    float size = 50.0; // Item 1:
    float prob = 0.95; // Item 2:
    vec2 pos = floor(1.0 / size * gl_FragCoord.xy);
    float color = 0.0;
    float starValue = rand(pos);
    if (starValue > prob)
    {
        vec2 center = size * pos + vec2(size, size) * 0.5;
        float t = 0.9 + 0.2 * sin(u_time + (starValue - prob) / (1.0 - prob) * 45.0); // Item 3:
        color = 1.0 - distance(gl_FragCoord.xy, center) / (0.9 * size);
        color = color * t / (abs(gl_FragCoord.y - center.y)) * t / (abs(gl_FragCoord.x - center.x));
    }
    else if (rand(v_tex_coord) > 0.996)
    {
        float r = rand(gl_FragCoord.xy);
        color = r * (0.25 * sin(u_time * (r * 5.0) + 720.0 * r) + 0.75);
    }
    gl_FragColor = vec4(vec3(color), 1.0);
}
Play with Item 1 to change the number of stars in the sky: the smaller the number, the more stars. I like the number to be around 50; not too dense.
Item 2 changes the randomness, or how close together the stars will appear: 1 = none, 0.1 = side by side. Around 0.75 gives a nice feel.
Item 3 is where most of the magic happens: this is the size and pulse of the stars.
float t = 0.9
Changing 0.9 will scale the initial star size up or down; a nice value is 1.4, not too big, not too small.
float t = 0.9 + 0.2
Changing the second value in this equation, 0.2, will widen the pulse effect of the stars proportionally to the original size; with 1.4 I like a value of 1.2.
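For example, with the values suggested above, the line for Item 3 becomes:
float t = 1.4 + 1.2 * sin(u_time + (starValue - prob) / (1.0 - prob) * 45.0);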
To add the shader to your Swift project, add a sprite the size of the screen to the scene, then attach the shader like this:
let backgroundImage = SKSpriteNode()
backgroundImage.texture = textureAtlas.textureNamed("any") // any texture will do; the shader generates the pixels
backgroundImage.size = screenSize
let shader = SKShader(fileNamed: "nightSky.fsh")
backgroundImage.shader = shader

Related

GLSL hue shader producing odd results, IOS only?

I have a cross-platform LibGDX app. This particular GLSL shader code is used to shift the hue of a particular texture.
It works great on Android and when debugging on Desktop, but on an iPad this is the result (excuse photos of screen, easiest way to get data from this device).
Code:
const mat3 rgb2yiq = mat3(0.299, 0.595716, 0.211456, 0.587, -0.274453, -0.522591, 0.114, -0.321263, 0.311135);
const mat3 yiq2rgb = mat3(1.0, 1.0, 1.0, 0.9563, -0.2721, -1.1070, 0.6210, -0.6474, 1.7046);

vec4 outColor = texture2D(u_texture, v_texCoord) * v_color;
float alpha = outColor.a;

// Hue shift
if (u_hueAdjust > 0.0 && u_hueAdjust < 1.0 && alpha > 0.0)
{
    vec3 unmultipliedRGB = outColor.rgb / alpha;
    vec3 yColor = rgb2yiq * unmultipliedRGB;
    float originalHue = atan(yColor.b, yColor.g);
    float finalHue = originalHue + u_hueAdjust * 6.28318; // convert 0-1 to radians
    float chroma = sqrt(yColor.b * yColor.b + yColor.g * yColor.g);
    vec3 yFinalColor = vec3(yColor.r, chroma * cos(finalHue), chroma * sin(finalHue));
    outColor.rgb = (yiq2rgb * yFinalColor) * alpha;
}
Obviously there are some really weird artifacts that seem to affect certain areas, in particular black/white colors. But a subtle overall change in color is also noticeable that isn't attributable to the desired hue-change effect.
Overall this shader is wonky on iOS (but working fine on Android/Desktop), and after playing with it for a while I'm completely out of ideas. Can anyone lead me in the right direction?
The documentation for atan says "The result is undefined if x = 0."
Is it possible that yColor.g is zero for greyscale colors?
The issue is discussed here: Robust atan(y,x) on GLSL for converting XY coordinate to angle
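A minimal sketch of that workaround: branch on whichever argument is larger, so atan is never called with a zero second argument (the result can differ from true atan2 by 2π in one quadrant, which is harmless here since the hue only feeds cos and sin):
#define M_PI 3.1415926535897932384626433832795

// Safe replacement for atan(y, x) that never evaluates atan(y, 0.0).
float atan2safe(float y, float x)
{
    if (x == 0.0 && y == 0.0) return 0.0;   // grey pixel: no chroma, hue is irrelevant
    if (abs(x) > abs(y)) return atan(y, x); // here |x| > |y| >= 0, so x != 0
    return M_PI / 2.0 - atan(x, y);         // here |y| >= |x| and y != 0
}

// usage in the hue shift:
// float originalHue = atan2safe(yColor.b, yColor.g);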

Applying Spotlights Over Dark Ambient Light - HLSL - Monogame

I wrote an HLSL shader for my Monogame project that uses ambient lighting to create a day/night cycle.
#if OPENGL
    #define SV_POSITION POSITION
    #define VS_SHADERMODEL vs_3_0
    #define PS_SHADERMODEL ps_3_0
#else
    #define VS_SHADERMODEL vs_4_0_level_9_1
    #define PS_SHADERMODEL ps_4_0_level_9_1
#endif

sampler s0;

struct VertexShaderOutput
{
    float4 Position : SV_POSITION;
    float4 Color : COLOR0;
    float2 TextureCoordinates : TEXCOORD0;
};

float ambient = 1.0f;
float percentThroughDay = 0.0f;
float4 MainPS(VertexShaderOutput input) : COLOR
{
    float4 pixelColor = tex2D(s0, input.TextureCoordinates);
    float4 outputColor = pixelColor;

    // lighting intensity is a gradient of pixel position
    float Intensity = 1 + (1 - input.TextureCoordinates.y) * 1.3;
    outputColor.r = outputColor.r / ambient * Intensity;
    outputColor.g = outputColor.g / ambient * Intensity;
    outputColor.b = outputColor.b / ambient * Intensity;

    // sun set/rise blending
    float exposeRed = (1 + (.39 - input.TextureCoordinates.y) * 8);   // overexpose red
    float exposeGreen = (1 + (.39 - input.TextureCoordinates.y) * 2); // some extra green for the blue pixels
    float exposeBlue = (1 + (.39 - input.TextureCoordinates.y) * 6);  // some extra blue

    // happens over full screen
    if (input.TextureCoordinates.y < 1.0f) {
        // be at full exposure at 25% of day gone
        float redAdder = max(1, (exposeRed * (percentThroughDay / 0.25f)));
        float greenAdder = max(1, (exposeGreen * (percentThroughDay / 0.25f)));
        float blueAdder = max(1, (exposeBlue * (percentThroughDay / 0.25f)));

        // begin reducing adders
        if (percentThroughDay >= 0.25f && percentThroughDay < 0.50f) {
            redAdder = max(1, (exposeRed * (1 - (percentThroughDay - 0.25f) / 0.25f)));
            greenAdder = max(1, (exposeGreen * (1 - (percentThroughDay - 0.25f) / 0.25f)));
            blueAdder = max(1, (exposeBlue * (1 - (percentThroughDay - 0.25f) / 0.25f)));
        }
        // mid day
        else if (percentThroughDay >= 0.50f && percentThroughDay < 0.75f) {
            redAdder = 1;
            greenAdder = 1;
            blueAdder = 1;
        }
        // add adders back for sunset
        else if (percentThroughDay >= 0.75f && percentThroughDay < 0.85f) {
            redAdder = max(1, (exposeRed * ((percentThroughDay - 0.75f) / 0.10f)));
            greenAdder = max(1, (exposeGreen * ((percentThroughDay - 0.75f) / 0.10f)));
            blueAdder = max(1, (exposeBlue * ((percentThroughDay - 0.75f) / 0.10f)));
        }
        // begin reducing adders
        else if (percentThroughDay >= 0.85f) {
            redAdder = max(1, (exposeRed * (1 - (percentThroughDay - 0.85f) / 0.15f)));
            greenAdder = max(1, (exposeGreen * (1 - (percentThroughDay - 0.85f) / 0.15f)));
            blueAdder = max(1, (exposeBlue * (1 - (percentThroughDay - 0.85f) / 0.15f)));
        }

        outputColor.r = outputColor.r * redAdder;
        outputColor.g = outputColor.g * greenAdder;
        outputColor.b = outputColor.b * blueAdder;
    }
    return outputColor;
}
technique ambientLightDayNight
{
    pass P0
    {
        PixelShader = compile ps_2_0 MainPS();
    }
};
This works how I want it to for the most part (it could definitely use some calculation optimization though).
However, I am now looking at adding spotlights in my game for the player to use. I followed along with this method which I got working independently of the ambientLight shader. It is a pretty simple shader that uses a lightMask.
sampler s0;
texture lightMask;
sampler lightSampler = sampler_state { Texture = lightMask; };

float4 PixelShaderLight(float2 coords : TEXCOORD0) : COLOR0
{
    float4 color = tex2D(s0, coords);
    float4 lightColor = tex2D(lightSampler, coords);
    return color * lightColor;
}

technique Technique1
{
    pass Pass1
    {
        PixelShader = compile ps_2_0 PixelShaderLight();
    }
}
My problem is now using both of these shaders together. My current method is to draw my game scene to a render target, apply the ambient light shader, and then finish by drawing the game scene (now with the ambient light) to the client screen while applying the spotlight shader.
This brings up multiple issues:
Applying the spotlight shader after the ambient light completely blacks out anything around the light, when in reality the area surrounding the light should be the ambient light.
The light intensity (how bright the light is) calculated in the spotlight shader is too dull when it is "night" because it is calculating the light color based on the ambient light shader's output.
I've tried to apply the ambient light shader after the spotlight shader instead, but this just renders most of everything black because the ambient light calculates against a mostly black background.
I've tried adding some code to the spotlight shader to color black pixels white in order to reveal the ambient-lit background; however, the light intensity is still calculated against the darker ambient light, resulting in a very dull light.
Another thought was to modify my ambient light shader to take the lightMask as a parameter and simply not apply the ambient light to areas marked as lights on the light mask. Then I could use the spotlight shader to apply the gradient of the light and modify the color. But I was unsure whether I should be cramming these two seemingly separate light effects into one pixel shader, and when I tried this, my shader didn't compile because there were too many arithmetic ops.
So my questions for everyone are:
Should I avoid cramming multiple effects into one pixel shader?
Generally, how would I apply spot lighting over an ambient light effect that can be "dark"?
EDIT
My solution: I did not end up using the spotlight shader, but I still draw the light mask with the texture given in the article, then pass that light mask to this ambient light shader and offset the texture gradient.
float4 MainPS(VertexShaderOutput input) : COLOR
{
    float4 constant = 1.5f;
    float4 pixelColor = tex2D(s0, input.TextureCoordinates);
    float4 outputColor = pixelColor;

    // lighting intensity is a gradient of pixel position
    float Intensity = 1 + (1 - input.TextureCoordinates.y) * 1.05;
    outputColor.r = outputColor.r / ambient * Intensity;
    outputColor.g = outputColor.g / ambient * Intensity;
    outputColor.b = outputColor.b / ambient * Intensity;

    // sun set/rise blending
    float gval = (1 - input.TextureCoordinates.y); // replace 1 with .39 to lock to 39 percent of screen (this is how it was before)
    float exposeRed = (1 + gval * 8);   // overexpose red
    float exposeGreen = (1 + gval * 2); // some extra green
    float exposeBlue = (1 + gval * 4);  // some extra blue

    // be at full exposure at 25% of day gone
    float quarterDayPercent = (percentThroughDay / 0.25f);
    float redAdder = max(1, (exposeRed * quarterDayPercent));
    float greenAdder = max(1, (exposeGreen * quarterDayPercent));
    float blueAdder = max(1, (exposeBlue * quarterDayPercent));

    // begin reducing adders
    if (percentThroughDay >= 0.25f) {
        float gradientVal1 = (1 - (percentThroughDay - 0.25f) / 0.25f);
        redAdder = max(1, (exposeRed * gradientVal1));
        greenAdder = max(1, (exposeGreen * gradientVal1));
        blueAdder = max(1, (exposeBlue * gradientVal1));
    }
    // mid day
    if (percentThroughDay >= 0.50f) {
        redAdder = 1;
        greenAdder = 1;
        blueAdder = 1;
    }
    // add adders back for sunset
    if (percentThroughDay >= 0.75f) {
        float gradientVal2 = ((percentThroughDay - 0.75f) / 0.10f);
        redAdder = max(1, (exposeRed * gradientVal2));
        greenAdder = max(1, (exposeGreen * gradientVal2));
        blueAdder = max(1, (exposeBlue * gradientVal2));
    }
    // begin reducing adders
    if (percentThroughDay >= 0.85f) {
        float gradientVal3 = (1 - (percentThroughDay - 0.85f) / 0.15f);
        redAdder = max(1, (exposeRed * gradientVal3));
        greenAdder = max(1, (exposeGreen * gradientVal3));
        blueAdder = max(1, (exposeBlue * gradientVal3));
    }

    outputColor.r = outputColor.r * redAdder;
    outputColor.g = outputColor.g * greenAdder;
    outputColor.b = outputColor.b * blueAdder;

    // first check if we are in a lightMask light
    float4 lightMaskColor = tex2D(lightSampler, input.TextureCoordinates);
    if (lightMaskColor.r != 0.0f || lightMaskColor.g != 0.0f || lightMaskColor.b != 0.0f)
    {
        // we are in the light, so don't apply ambient light; have to offset by
        // outputColor here because the lightMask is pure black
        return pixelColor * (lightMaskColor + outputColor) * constant;
    }
    // must multiply by pixelColor here to offset the lightMask bounds
    // TODO: could try to restore original color by removing this multiplication and factoring in more of an offset on ln 91
    return outputColor * pixelColor * constant;
}
To chain lights the way you want, you need a different approach. As you've already encountered, chaining lights solely on the color won't work: once a pixel has become black, it can't be highlighted anymore. To deal with multiple lights there are two typical approaches, forward shading and deferred shading. Each has its advantages and disadvantages, so you need to see which fits your situation better.
Forward Shading
This approach is the one you tested, stuffing all lighting computations into a single shading pass: you add all light intensities together into a final light intensity and then multiply it with the color.
Pros are the performance and simplicity; cons are the limit on the number of lights and more complex shader code.
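As a rough GLSL-style sketch of the forward idea (hypothetical uniform names; the same structure ports directly to HLSL), note that the ambient term and the spotlight mask are summed before the color multiply, so a spotlight can lift a pixel out of the ambient darkness:
uniform sampler2D u_scene;      // unlit scene color (albedo)
uniform sampler2D u_lightMask;  // spotlight mask; black where no spotlight falls
uniform float u_ambient;        // ambient intensity: low at night, 1.0 at midday
varying vec2 v_texCoord;

void main()
{
    vec4 albedo = texture2D(u_scene, v_texCoord);
    // accumulate all light intensities first...
    vec3 light = vec3(u_ambient) + texture2D(u_lightMask, v_texCoord).rgb;
    // ...then multiply the color by the summed intensity exactly once
    gl_FragColor = vec4(albedo.rgb * min(light, vec3(1.0)), albedo.a);
}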
Deferred Shading
This approach decouples individual lights from each other and can be used to draw scenes with very many lights. Each light needs the original scene color (albedo) to compute its part of the final image. Therefore you first render your scene without any lighting onto a texture (usually called the color buffer or albedo buffer). Then you render each light separately, multiplying it with the albedo and adding the result to the final image. So even in the dark parts, the original color comes back wherever a light touches it.
Pros are the cleaner structure and the ability to use a lot of lights, even with different shapes; cons are the extra buffers and draw calls that have to be made.
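And a sketch of a single deferred light pass (again with hypothetical names; the render state would be set to additive blending, e.g. ONE/ONE, so each light pass adds its contribution to the final image):
uniform sampler2D u_albedo;     // the unlit scene rendered in the first pass
uniform sampler2D u_lightMask;  // this one light's footprint/intensity
varying vec2 v_texCoord;

void main()
{
    vec4 albedo = texture2D(u_albedo, v_texCoord);
    vec3 light = texture2D(u_lightMask, v_texCoord).rgb;
    // each light pass multiplies the original color by its own intensity;
    // additive blending sums the passes into the final image
    gl_FragColor = vec4(albedo.rgb * light, 1.0);
}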

SpriteKit SKShader flipping after SKTransition completes

I'm creating a space-themed game in SpriteKit, and to simulate a starfield I'm using a GLSL fragment shader from ShaderToy, converted to work in SpriteKit.
To use it I simply init a clear SKSpriteNode of the same size as my scene (2048 x 1536) and apply the shader to this node.
let starField = SKSpriteNode(color: UIColor.clear, size: CGSize(width: 2048, height: 1536))
let shader = SKShader(fileNamed: "Starfield.fsh")
starField.shader = shader
The shader renders just fine and shows a nice starfield. So far, so good.
The problem occurs when I transition from a different scene using SKTransition. During the transition, the shader appears to be rasterising, and as soon as the transition is complete the whole thing flips upside down.
My transition code (doesn't appear to matter what transition or duration I use):
let gameScene = GameScene(level: level)
gameScene.scaleMode = self.scaleMode
let transition = SKTransition.fade(withDuration: 3.0)
transition.pausesIncomingScene = true
self.view?.presentScene(gameScene, transition:transition)
I've tried with a different shader and the same occurs - the starfield is one way up during the transition, and instantly 'flips' as soon as the previous scene has been cleared up. Has anyone experienced the same and knows what is going on?
I have a video of the problem, which you can see occurring at around 7 seconds:
https://youtu.be/l1lLv6MwKYU
The shader code is as follows:
#define M_PI 3.1415926535897932384626433832795

float rand(vec2 co);

float rand(vec2 co)
{
    return fract(sin(dot(co.xy, vec2(12.9898, 78.233))) * 43758.5453);
}

void main()
{
    float size = 50.0;
    float prob = 0.95;
    vec2 pos = floor(1.0 / size * gl_FragCoord.xy);
    float color = 0.0;
    float starValue = rand(pos);
    if (starValue > prob)
    {
        vec2 center = size * pos + vec2(size, size) * 0.5;
        float t = 0.9 + 0.2 * sin(u_time + (starValue - prob) / (1.0 - prob) * 45.0);
        color = 1.0 - distance(gl_FragCoord.xy, center) / (0.9 * size);
        color = color * t / (abs(gl_FragCoord.y - center.y)) * t / (abs(gl_FragCoord.x - center.x));
    }
    else if (rand(v_tex_coord) > 0.996)
    {
        float r = rand(gl_FragCoord.xy);
        color = r * (0.25 * sin(u_time * (r * 5.0) + 720.0 * r) + 0.75);
    }
    gl_FragColor = vec4(vec3(color), 1.0);
}

Edge/outline detection from texture in fragment shader

I am trying to display sharp contours from a texture in WebGL.
I pass a texture to my fragment shader, then I use local derivatives to display the contours/outline; however, it is not as smooth as I would expect it to be.
Just printing the texture without processing works as expected:
vec2 texc = vec2(((vProjectedCoords.x / vProjectedCoords.w) + 1.0) / 2.0,
                 ((vProjectedCoords.y / vProjectedCoords.w) + 1.0) / 2.0);
vec4 color = texture2D(uTextureFilled, texc);
gl_FragColor = color;
With local derivatives, it misses some edges:
vec2 texc = vec2(((vProjectedCoords.x / vProjectedCoords.w) + 1.0) / 2.0,
                 ((vProjectedCoords.y / vProjectedCoords.w) + 1.0) / 2.0);
vec4 color = texture2D(uTextureFilled, texc);
float maxColor = length(color.rgb);
gl_FragColor.r = abs(dFdx(maxColor));
gl_FragColor.g = abs(dFdy(maxColor));
gl_FragColor.a = 1.;
In theory, your code is right.
But in practice, most GPUs compute derivatives on blocks of 2x2 pixels, so for all 4 pixels of such a block the dFdx and dFdy values will be the same.
(detailed explanation here)
This will cause some kind of aliasing and you will miss some pixels for the contour of the shape randomly (this happens when the transition from black to the shape color occurs at the border of a 2x2 block).
To fix this and get the real per-pixel derivative, you can instead compute it yourself. It would look like this:
// get tex coordinates
vec2 texc = vec2(((vProjectedCoords.x / vProjectedCoords.w) + 1.0) / 2.0,
                 ((vProjectedCoords.y / vProjectedCoords.w) + 1.0) / 2.0);

// compute the U & V step needed to read neighbor pixels;
// for that you need to pass the texture dimensions to the shader,
// so let's say those are texWidth and texHeight
float step_u = 1.0 / texWidth;
float step_v = 1.0 / texHeight;

// read current pixel
vec4 centerPixel = texture2D(uTextureFilled, texc);

// read nearest right pixel & nearest bottom pixel
vec4 rightPixel = texture2D(uTextureFilled, texc + vec2(step_u, 0.0));
vec4 bottomPixel = texture2D(uTextureFilled, texc + vec2(0.0, step_v));

// now manually compute the derivatives
float _dFdX = length(rightPixel - centerPixel) / step_u;
float _dFdY = length(bottomPixel - centerPixel) / step_v;

// display
gl_FragColor.r = _dFdX;
gl_FragColor.g = _dFdY;
gl_FragColor.a = 1.;
A few important things:
texture should not use mipmaps
texture min & mag filtering should be set to GL_NEAREST
texture clamp mode should be set to clamp (not repeat)
And here is a ShaderToy sample demonstrating this:

How to call glPointSize() (or the SceneKit equivalent) when making custom geometries using SceneKit and SCNGeometryPrimitiveTypePoint

I'm writing an iOS app that renders a pointcloud in SceneKit using a custom geometry. This post was super helpful in getting me there (though I translated this to Objective-C), as was David Rönnqvist's book 3D Graphics with SceneKit (see chapter on custom geometries). The code works fine, but I'd like to make the points render at a larger point size - at the moment the points are super tiny.
According to the OpenGL docs, you can do this by calling glPointSize(). From what I understand, SceneKit is built on top of OpenGL so I'm hoping there is a way to access this function or do the equivalent using SceneKit. Any suggestions would be much appreciated!
My code is below. I've also posted a small example app on bitbucket accessible here.
// set the number of points
NSUInteger numPoints = 10000;

// set the max distance points
int randomPosUL = 2;
int scaleFactor = 10000; // because I want decimal points
                         // but am getting random values using arc4random_uniform

PointcloudVertex pointcloudVertices[numPoints];
for (NSUInteger i = 0; i < numPoints; i++) {
    PointcloudVertex vertex;
    float x = (float)(arc4random_uniform(randomPosUL * 2 * scaleFactor));
    float y = (float)(arc4random_uniform(randomPosUL * 2 * scaleFactor));
    float z = (float)(arc4random_uniform(randomPosUL * 2 * scaleFactor));
    vertex.x = (x - randomPosUL * scaleFactor) / scaleFactor;
    vertex.y = (y - randomPosUL * scaleFactor) / scaleFactor;
    vertex.z = (z - randomPosUL * scaleFactor) / scaleFactor;
    vertex.r = arc4random_uniform(255) / 255.0;
    vertex.g = arc4random_uniform(255) / 255.0;
    vertex.b = arc4random_uniform(255) / 255.0;
    pointcloudVertices[i] = vertex;
//    NSLog(@"adding vertex #%lu with position - x: %.3f y: %.3f z: %.3f | color - r: %.3f g: %.3f b: %.3f",
//          (long unsigned)i,
//          vertex.x, vertex.y, vertex.z,
//          vertex.r, vertex.g, vertex.b);
}

// convert array to point cloud data (position and color)
NSData *pointcloudData = [NSData dataWithBytes:&pointcloudVertices length:sizeof(pointcloudVertices)];

// create vertex source
SCNGeometrySource *vertexSource = [SCNGeometrySource geometrySourceWithData:pointcloudData
                                                                   semantic:SCNGeometrySourceSemanticVertex
                                                                vectorCount:numPoints
                                                            floatComponents:YES
                                                        componentsPerVector:3
                                                          bytesPerComponent:sizeof(float)
                                                                 dataOffset:0
                                                                 dataStride:sizeof(PointcloudVertex)];

// create color source
SCNGeometrySource *colorSource = [SCNGeometrySource geometrySourceWithData:pointcloudData
                                                                  semantic:SCNGeometrySourceSemanticColor
                                                               vectorCount:numPoints
                                                           floatComponents:YES
                                                       componentsPerVector:3
                                                         bytesPerComponent:sizeof(float)
                                                                dataOffset:sizeof(float) * 3
                                                                dataStride:sizeof(PointcloudVertex)];

// create element
SCNGeometryElement *element = [SCNGeometryElement geometryElementWithData:nil
                                                            primitiveType:SCNGeometryPrimitiveTypePoint
                                                           primitiveCount:numPoints
                                                            bytesPerIndex:sizeof(int)];

// create geometry
SCNGeometry *pointcloudGeometry = [SCNGeometry geometryWithSources:@[ vertexSource, colorSource ] elements:@[ element ]];

// add pointcloud to scene
SCNNode *pointcloudNode = [SCNNode nodeWithGeometry:pointcloudGeometry];
[self.myView.scene.rootNode addChildNode:pointcloudNode];
I was looking into rendering point clouds in iOS myself and found a solution on Twitter, by "vade", and figured I'd post it here for others:
ProTip: SceneKit shader modifiers are useful:
mat.shaderModifiers = @{SCNShaderModifierEntryPointGeometry : @"gl_PointSize = 16.0;"};
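As a variation (a sketch, assuming the documented u_modelViewTransform geometry symbol is available inside the modifier), you can scale the point size with distance from the camera instead of hard-coding it. The GLSL body for the SCNShaderModifierEntryPointGeometry entry point would be:
// transform the vertex into view space to get its depth
vec4 viewPos = u_modelViewTransform * _geometry.position;
// points grow as they approach the camera, clamped to a sane range
gl_PointSize = clamp(200.0 / max(1.0, -viewPos.z), 2.0, 64.0);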
