Example of goal:
I see three.js has this example.
It's simply a 3D Cube with many Spheres on its surface.
How can I do something like this using SceneKit?
You could use an array of points on planes and place spheres at those locations.
Divide each plane by 10 in both directions (X and Y), then make six of these planes and rotate them into the cube face positions.
I think performance is probably going to suck, though. That's a lot of polygons for each of these spheres. Let's imagine each sphere has 200 tris: that's 100 x 6 x 200 = 120,000 triangles.
Probably better to use circular textures on quads, placed facing the camera, at each of these 600 points. Then it's only 1200 triangles.
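A rough SceneKit sketch of that billboard-quad version (the sizes, grid spacing, and function name below are my own guesses, not from the original post): build a 10 x 10 grid of positions on one face, rotate copies of that face into the six cube faces, and attach a small SCNPlane with a billboard constraint at each point.

```swift
import SceneKit
import UIKit

// Sketch: 10x10 camera-facing quads per cube face, 6 faces = 600 quads.
// The cube is 2x2x2 units centred at the origin; all sizes are arbitrary.
func makeDottedCube() -> SCNNode {
    let cube = SCNNode()
    let dotGeometry = SCNPlane(width: 0.12, height: 0.12)
    dotGeometry.firstMaterial?.diffuse.contents = UIColor.white   // or a circular texture

    // One face of points in the plane z = +1, then rotate copies into the other faces.
    let faceRotations: [SCNVector4] = [
        SCNVector4(0, 1, 0, 0),                 // +z (identity)
        SCNVector4(0, 1, 0, Float.pi),          // -z
        SCNVector4(0, 1, 0,  Float.pi / 2),     // +x
        SCNVector4(0, 1, 0, -Float.pi / 2),     // -x
        SCNVector4(1, 0, 0, -Float.pi / 2),     // +y
        SCNVector4(1, 0, 0,  Float.pi / 2),     // -y
    ]

    for rotation in faceRotations {
        let face = SCNNode()
        face.rotation = rotation
        for ix in 0..<10 {
            for iy in 0..<10 {
                let dot = SCNNode(geometry: dotGeometry)
                // Spread 10 points across [-0.9, 0.9] in both directions.
                dot.position = SCNVector3(Float(ix) * 0.2 - 0.9,
                                          Float(iy) * 0.2 - 0.9,
                                          1.0)
                dot.constraints = [SCNBillboardConstraint()]   // always face the camera
                face.addChildNode(dot)
            }
        }
        cube.addChildNode(face)
    }
    return cube
}
```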
A cheat's way to do this:
Create an SCNBox with the desired number of vertices along the x, y and z axes.
Then use it as a particle emitter shape, and assign emittance to each vertex at a rate that makes them always appear at these locations, using a small circle texture, and the "look at camera" mode of placard presentation.
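In Swift, that particle setup might look roughly like the sketch below; the rates, sizes, and the "circle.png" asset are placeholders to tune by eye, not values from the original post.

```swift
import SceneKit
import UIKit

// Sketch: emit tiny billboard particles from the vertices of a segmented SCNBox,
// so dots appear pinned to a grid on every cube face.
func makeParticleCube() -> SCNNode {
    let box = SCNBox(width: 2, height: 2, length: 2, chamferRadius: 0)
    box.widthSegmentCount = 9      // 9 segments -> 10 vertices per edge
    box.heightSegmentCount = 9
    box.lengthSegmentCount = 9

    let dots = SCNParticleSystem()
    dots.emitterShape = box
    dots.birthLocation = .vertex           // spawn exactly on the box's vertices
    dots.birthRate = 3000                  // high enough that every vertex stays covered
    dots.particleLifeSpan = 1
    dots.particleVelocity = 0              // don't move: particles just sit on the vertices
    dots.particleSize = 0.08
    dots.particleImage = UIImage(named: "circle.png")    // small circle texture (hypothetical asset)
    dots.orientationMode = .billboardScreenAligned       // the "look at camera" placard mode

    let node = SCNNode()
    node.addParticleSystem(dots)
    return node
}
```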
Here is that cheat, done with particles:
I have a WebGL scene that wants to draw both point and line primitives, and am wondering: Is it possible to draw multiple WebGL primitives inside a single draw call?
My hunch is this is not possible, but WebGL is constantly surprising me with tricks one can do to accomplish strange edge cases, and searching has not let me confirm whether this is possible or not.
I'd be grateful for any insight others can offer on this question.
You can't draw WebGL lines, points, and triangles in the same draw call. You can, however, generate points and lines from triangles and then draw everything as triangles in one call: some of those triangles form points, some form lines, and some form everything else, all in one draw call.
Not a good example, but for fun, here's a vertex shader that generates points and lines from triangles on the fly.
There's also this for an example of making lines from triangles.
How creative you want to get with your shaders versus doing things on the CPU is up to you, but it's common to draw lines with triangles, as the previous article points out, since WebGL lines can generally only be a single pixel thick.
It's also common to draw points with triangles, for a few reasons:
WebGL is only required to support points of size 1; by drawing with triangles that limit is removed.
WebGL points are always aligned with the screen, while triangle-based points are far more flexible: you can rotate them, for example, and/or orient them in 3D. Here's a bunch of points made from triangles.
Triangle-based points can also be scaled in 3D with no extra work. In other words, a triangle-based point in 3D space will scale with distance from the camera using standard 3D math, whereas a WebGL point requires you to compute the size it should be so you can set gl_PointSize, and so requires extra work if you want it to scale with the scene.
It's not common to mix points, lines, and triangles in a single draw call but it's not impossible by any means.
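To make the "points from triangles" part concrete, here is a small sketch of the CPU-side expansion, written in Swift purely for readability (in WebGL you would build the same array in JavaScript and upload it to a single buffer, or do the expansion in the vertex shader as in the linked example); the function name and parameters are made up.

```swift
import simd

// Expand each point into two triangles forming a camera-facing quad,
// so points can be drawn with the same TRIANGLES draw call as everything else.
func expandPointsToTriangles(points: [SIMD3<Float>],
                             cameraRight: SIMD3<Float>,
                             cameraUp: SIMD3<Float>,
                             size: Float) -> [SIMD3<Float>] {
    let r = cameraRight * (size / 2)
    let u = cameraUp * (size / 2)
    var vertices: [SIMD3<Float>] = []
    vertices.reserveCapacity(points.count * 6)
    for p in points {
        let a = p - r - u, b = p + r - u, c = p + r + u, d = p - r + u
        vertices += [a, b, c]   // first triangle of the quad
        vertices += [a, c, d]   // second triangle of the quad
    }
    return vertices
}
```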
I'm having trouble generating a decent-looking mesh using an image.
Here is an example of an image:
In my project I convert each pixel to a 3D point, with its height determined by how far away it is from the center of the line.
Here is what it looks like when I have created a 3d mesh from the image:
The problem with the mesh is that there are a lot of triangles (and vertices) and it looks really blocky. I triangulate the points by just going through the 2D image and joining pixel neighbours into triangles.
Are there any algorithms that could be used to generate something better looking (fewer triangles/vertices, smoother transitions)?
Why don't you just sample both the midline and the boundaries of the white region, and triangulate with a constraint that contiguous vertices of the midline be edges? To preserve shape, the sampling should include all places where midline and boundaries "bend", i.e. all curvature changes.
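A rough sketch of that sampling step in Swift (the constrained triangulation itself would come from an existing library; the function name and angle threshold below are arbitrary choices of mine): keep a boundary or midline point only where the polyline actually bends, so straight stretches collapse to a single edge.

```swift
import Foundation
import simd

// Keep the endpoints plus every point where the polyline turns by more than
// `minTurn` radians, so straight runs are represented by just two vertices.
func sampleWhereCurved(_ polyline: [SIMD2<Float>], minTurn: Float = 0.1) -> [SIMD2<Float>] {
    guard polyline.count > 2 else { return polyline }
    var kept: [SIMD2<Float>] = [polyline[0]]
    for i in 1..<(polyline.count - 1) {
        let a = simd_normalize(polyline[i] - polyline[i - 1])
        let b = simd_normalize(polyline[i + 1] - polyline[i])
        // Angle between successive segments.
        let turn = acos(max(-1, min(1, simd_dot(a, b))))
        if turn > minTurn { kept.append(polyline[i]) }
    }
    kept.append(polyline[polyline.count - 1])
    return kept
}
```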
I have a question about achieving an effect like a lunar eclipse. The effect should look like the first seconds of this gif, so just a black shadow that moves over the circle. The ideal situation would be a function to which I can pass a percentage to get that amount of shadow on the circle:
The problem I am facing is that my background is a gradient, so it's not possible to have a black circle that moves over the moon to get the effect.
I tried something with CCClippingNode but it doesn't look nice. Furthermore, the clip at the edges was always a bit pixelated.
I thought about using something like a GLSL shader to achieve the effect, but I am not very familiar with GLSL and I can't find an example.
The effect is for an iPhone game. I use the cocos2d framework in version 3 (the current one).
Does anybody have an idea how to get this effect, or where I could start searching?
Thank you in advance.
The physics behind it is simple: you change the light shining on the Moon. So:
I would create a 1D gradient texture representing the lighting conditions
compute each rendered pixel of the Moon
You obviously have the 2D texture of the Moon, so you now need to obtain the position of each pixel inside the 1D lighting texture. If the Moon is fully visible you are in sunlight; when partially eclipsed you are in the penumbra region; and finally, during total eclipse you are in the umbra region. So just compute the position of the Moon's middle point, and for the rest use the relative position along the Moon's direction of motion.
So now just multiply the Moon surface with the lighting texture and render the output.
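As a rough illustration of the per-pixel math (in cocos2d this would live in a GLSL fragment shader; the function, the 0..1 `phase` parameter, and the `lightGradient` lookup below are my own stand-ins, not cocos2d API):

```swift
// A CPU-side model of the per-pixel shading. `lightGradient` stands in for the 1D
// lighting texture sampled at 0...1 (0 = fully shadowed, 1 = fully lit). `u` is the
// pixel's position across the Moon disc along the direction the shadow moves
// (0 = edge the shadow enters first, 1 = far edge), and `phase` is the eclipse
// amount you pass in (0 = no shadow, 1 = fully covered).
func shadePixel(moonColor: SIMD3<Float>,
                u: Float,
                phase: Float,
                lightGradient: (Float) -> SIMD3<Float>) -> SIMD3<Float> {
    // Slide the gradient across the disc: at phase 0 every pixel samples the lit end,
    // at phase 1 every pixel samples the dark end.
    let t = max(0, min(1, u + 1 - 2 * phase))
    return moonColor * lightGradient(t)       // multiply the Moon surface by the lighting
}
```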
Once that works, you can add a curvature correction.
Right now you get linearly cut Moon phases, but the real phases are curved, since the lighting conditions also differ with radial distance from the direction of motion and the Moon's center. To fix this you can:
convert the lighting to a 2D texture,
or shift the texture coordinate by some curvature dependent on the radial distance.
I've been making progress in a fan-replicated game I'm coding, but I'm stuck with this problem.
Right now I'm drawing a texture pixel by pixel on the curve path, but this cuts down frames per second from 4000 to 50 on curves with long lengths.
I need to store pixel by pixel Vector2 + length data anyway, so I can produce static speed movement along it, looping through it to draw the curve as well.
The curves I need to be able to draw are Bezier, circular, and Catmull-Rom.
Any ideas of how to make it more efficient?
Maybe I have misunderstood the question but I did this once:
Create the curve and sample x points on it. (Red dots)
Create a mesh from it by calculating the cross (perpendicular) vector at each point. (Green lines)
Build a quad between each consecutive pair of these; so basically 5 of them in my picture.
Set the U coordinate across the perpendicular direction, and let the V coordinate follow the curve length, so 0 at the start and 1 at the end of it.
You can of course scale V if you want your texture to repeat.
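A rough sketch of those steps (the question isn't Swift, so read this as pseudocode for whatever vector and mesh types your framework uses; all names below are mine):

```swift
import simd

// Build a textured ribbon along a sampled curve: one quad (two triangles)
// between each pair of consecutive samples.
struct Vertex {
    var position: SIMD2<Float>
    var uv: SIMD2<Float>
}

func buildCurveMesh(samples: [SIMD2<Float>], halfWidth: Float) -> [Vertex] {
    guard samples.count > 1 else { return [] }

    // Cumulative length, so V can run 0...1 along the curve.
    var lengths: [Float] = [0]
    for i in 1..<samples.count {
        lengths.append(lengths[i - 1] + simd_distance(samples[i - 1], samples[i]))
    }
    let total = lengths.last!

    // Left/right edge vertices at each sample, offset along the perpendicular.
    var edges: [(Vertex, Vertex)] = []
    for i in 0..<samples.count {
        let dir = simd_normalize(samples[min(i + 1, samples.count - 1)] -
                                 samples[max(i - 1, 0)])
        let normal = SIMD2<Float>(-dir.y, dir.x)        // the "cross vector" in 2D
        let v = lengths[i] / total
        edges.append((Vertex(position: samples[i] + normal * halfWidth, uv: [0, v]),
                      Vertex(position: samples[i] - normal * halfWidth, uv: [1, v])))
    }

    // Two triangles per quad.
    var mesh: [Vertex] = []
    for i in 0..<(samples.count - 1) {
        let (a, b) = edges[i], (c, d) = edges[i + 1]
        mesh += [a, b, c, b, d, c]
    }
    return mesh
}
```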
Any ideas of how to make it more efficient?
Assuming the texture needs to be dynamic, draw the texture on the GPU-side using a shader. Drawing it on the CPU-side is not only slow, but bogs down both the CPU and GPU when you need to send it back to the GPU every frame. Much better to draw it GPU-side.
I need to store pixel by pixel Vector2 + length data anyway
The shader can store additional information in the texture. For example, even though you may allocate an RGBA texture, it doesn't have to store color information, since it is your shaders that will interpret the data.
I'm looking for a water surface effect sample like Pocket Pond HD. I have found some tutorials:
iPhone OpenGL demo water waves
Waves effect
However, it's sketchy.
It is very simple.
You just have to make a 2D heightmap (a 2D array of the water height at each place). With the heightmap, you can calculate (approximate, interpolate) a normal at each place from the nearest height points.
Then you perform a "simple raytracing": you "refract each ray" depending on the normal, intersect it with the bottom plane, and get a color from the texture at that place.
Practically: you make a triangle mesh from the heightmap and render those triangles. You can send normals in the vertex buffer or compute them in the vertex shader. The raytracing is done in the fragment shader. The direction of each ray can be (0, 0, 1). You refract it by the current normal and scale the result so the Z coordinate equals the water depth. The new X and Y coordinates are the texture coordinates.
To make an animation, just update the heightmap over time.
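A small Swift sketch of the per-pixel part, assuming a square grid heightmap and a flat bottom texture; the function name, parameters, and the air-to-water ratio are my own choices, not from the tutorials above.

```swift
import simd

// Approximate the surface normal at cell (x, y) from the neighbouring height samples,
// then refract a straight-down ray at the surface and return the bottom-texture
// coordinate where that ray lands.
func bottomTexCoord(heightmap: [[Float]],        // water surface height per grid cell
                    x: Int, y: Int,
                    waterDepth: Float,
                    texelSize: Float) -> SIMD2<Float> {
    let w = heightmap[0].count, h = heightmap.count
    let hl = heightmap[y][max(x - 1, 0)]
    let hr = heightmap[y][min(x + 1, w - 1)]
    let hd = heightmap[max(y - 1, 0)][x]
    let hu = heightmap[min(y + 1, h - 1)][x]

    // Central differences give the slope; the normal points roughly up (+Z).
    let normal = simd_normalize(SIMD3<Float>(hl - hr, hd - hu, 2 * texelSize))

    // The text's (0, 0, 1) view ray, written pointing downward here so that it runs
    // against the normal as simd_refract expects.
    let incident = SIMD3<Float>(0, 0, -1)
    let eta: Float = 1.0 / 1.33                  // air-to-water index ratio
    let refracted = simd_refract(incident, normal, eta)

    // Scale the refracted ray so its Z span equals the water depth; its X/Y parts
    // become the offset of the texture coordinate on the bottom.
    let scale = waterDepth / abs(refracted.z)
    return SIMD2<Float>(Float(x) + refracted.x * scale,
                        Float(y) + refracted.y * scale)
}
```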