WebGL Constructive Solid Geometry to Static Vertices

I'm quite new to 3D graphics (I've been trying to learn raw WebGL instead of using a framework) and I'm now at the constructive solid geometry stage. I know sites like tinkercad.com use CSG with WebGL, but they recalculate your design every time you load the page instead of doing the subtraction, addition, and intersection of primitive objects once and then storing the resulting vertices for later use. I'm curious whether anybody knows why they do it that way (maybe just to conserve resources on the server?), and whether there's some straightforward way of extracting those vertices right before the draw call. Maybe a built-in function of WebGL? I haven't found anything so far; when I log the data passed to gl.bufferData() I get multiple Float32Arrays (one for each object that was unioned together) instead of one complete set of vertices.
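For reference, gl.bufferData() just uploads whatever typed array it is given, and as far as I know WebGL has no built-in call that hands a merged result back. One way to get "one complete set of vertices" is to concatenate the per-object arrays yourself once and cache the result. A minimal sketch in TypeScript; objectBuffers and its contents are hypothetical stand-ins for the arrays mentioned above:

```typescript
// Hypothetical per-object vertex data, standing in for the separate
// Float32Arrays observed going into gl.bufferData().
const objectBuffers: Float32Array[] = [
  new Float32Array([0, 0, 0, 1, 0, 0, 0, 1, 0]),
  new Float32Array([1, 1, 0, 2, 1, 0, 1, 2, 0]),
];

// Merge the per-object arrays into one contiguous vertex buffer.
function mergeVertexBuffers(buffers: Float32Array[]): Float32Array {
  const total = buffers.reduce((sum, b) => sum + b.length, 0);
  const merged = new Float32Array(total);
  let offset = 0;
  for (const b of buffers) {
    merged.set(b, offset);
    offset += b.length;
  }
  return merged;
}

// One-time "bake": merge once, serialize, and store the result.
const baked = mergeVertexBuffers(objectBuffers);
const serialized = JSON.stringify(Array.from(baked));

// On later page loads, skip the CSG step entirely:
const restored = new Float32Array(JSON.parse(serialized));
// gl.bufferData(gl.ARRAY_BUFFER, restored, gl.STATIC_DRAW);
```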
By the way, the only GitHub project I've found with CSG for WebGL is https://github.com/evanw/csg.js/ and it's pretty straightforward; however, it uses a framework, and I was curious whether you know of any other WebGL CSG code out there that doesn't rely on one. I'd like to write it myself either way, but being able to see what others have done would be nice.

Related

How to create a non-spherical 3D GKAgent or GKObstacle?

The title is my direct question, but I will elaborate to provide context, and to detail what is needed for a helpful workaround if the answer is "you can't; file a Radar with Apple".
The use case is simple: have a behavior-driven GKAgent3D avoid a plane in my ARKit/SceneKit/GameplayKit app.
This requires that I add a 'toAvoid' GKGoal as a behavior of the agent. Apple currently provides two kinds of things to avoid: other GKAgents or GKObstacles. The problem I am having is that I see no way to create a GKAgent3D or GKObstacle for use in SceneKit that is not a sphere. GKAgent3D only has a .radius property to define its "occupied space", and GKObstacle only has one 3D concrete subclass (GKSphereObstacle), while the obstacles(from:) functions use SpriteKit objects.
I have many agents that all have complex behaviors, and there are many planes I'd like them to avoid (ARKit detected). I would rather not resort to manual collision detection, since the goal is to have the agents alter their behavior-driven path as a result of the object being in the way. It is not enough to just know that the agent is going to hit the object; I need that fact to influence its movement alongside all the other goals in its behavior.
I am hoping I am missing something and there is a way to do this, or that someone has a clever workaround. The only workaround I have thought of (but hate for performance reasons) is creating a massive number of small sphere obstacles in a regular array to approximate the surface of the plane.
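For reference, the geometry of that sphere-grid workaround looks roughly like the sketch below. TypeScript is used for illustration only (GameplayKit itself is Swift/Objective-C); each returned position would back one GKSphereObstacle(radius:) in the actual app, and all names here are hypothetical.

```typescript
// Given a plane's center, extents, and a sphere radius, produce a
// regular grid of sphere centers covering the plane's surface.
interface Vec3 { x: number; y: number; z: number; }

function sphereGridForPlane(
  center: Vec3,      // plane center (e.g. from an ARKit plane anchor)
  width: number,     // plane extent along local X
  depth: number,     // plane extent along local Z
  radius: number     // radius of each approximating sphere
): Vec3[] {
  const positions: Vec3[] = [];
  const step = 2 * radius; // spheres just touching; use < 2r for overlap
  for (let x = -width / 2; x <= width / 2; x += step) {
    for (let z = -depth / 2; z <= depth / 2; z += step) {
      positions.push({ x: center.x + x, y: center.y, z: center.z + z });
    }
  }
  return positions;
}

// A 2m x 2m plane with 0.25m spheres already needs 25 obstacles,
// which is exactly the performance concern raised above.
const grid = sphereGridForPlane({ x: 0, y: 0, z: 0 }, 2, 2, 0.25);
console.log(grid.length);
```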

Where is the actual code for the XNA "game loop"?

I am a beginner trying to learn C# and XNA. I am trying to get a deeper understanding of the XNA game loop.
Although there are plenty of articles explaining what the game loop is, I can't seem to find the actual loop implementation anywhere.
The closest I think I have gone to finding the loop is this member of the game class in the MSDN documentation:
```csharp
public void Tick ()
```
If this is the correct one, where can I find the inner implementation of this method to see how it calls the Update and Draw methods, or is that not possible?
MonoGame is an open-source replica of XNA built on modern rendering pipelines, and SharpDX.Toolkit implements a very XNA-like interface for DX11 (MonoGame actually uses SharpDX under the hood for DirectX integration)... you can probably find the game loop in the source code of either of those projects, and it will likely be close to, if not identical to, what Microsoft's XNA actually uses.
That being said, the game loop actually doesn't do much for simple demo applications (they tend to pile everything into a single method to update/render a simple scene), though in full games with component-based scene graphs it can get a little complicated. For single-threaded engines, the general idea is to (a code sketch follows the list):
1. update the state of any inputs,
2. process any physics simulation if needed,
3. call update on all the updatable objects in your scene graph (the engine I'm working on uses interfaces to avoid making wasteful calls on empty stub methods),
4. clear the working viewport/depth buffers,
5. call render on each renderable object to actually get the GPU drawing on the back buffer,
6. swap the render buffers, putting everything just drawn to the back buffer up on screen and making the old viewport buffer the new back buffer to be cleared and drawn on the next rendering pass,
7. start over at step 1 :)
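A bare-bones illustration of those steps in TypeScript follows. Since XNA is closed-source, this is not Microsoft's actual Tick() body, just a sketch of the published behavior; all type and method names are invented.

```typescript
// Minimal single-threaded game loop following the seven steps above.
interface Updatable { update(dtSeconds: number): void; }
interface Renderable { render(): void; }

class GameLoop {
  constructor(
    private updatables: Updatable[],
    private renderables: Renderable[]
  ) {}

  private pollInput(): void { /* 1. read keyboard/gamepad state */ }
  private stepPhysics(dt: number): void { /* 2. advance the simulation */ }
  private clearBuffers(): void { /* 4. clear color + depth buffers */ }
  private presentFrame(): void { /* 6. swap front/back buffers */ }

  tick(dtSeconds: number): void {
    this.pollInput();                                      // step 1
    this.stepPhysics(dtSeconds);                           // step 2
    for (const u of this.updatables) u.update(dtSeconds);  // step 3
    this.clearBuffers();                                   // step 4
    for (const r of this.renderables) r.render();          // step 5
    this.presentFrame();                                   // step 6
  }

  run(): void {
    let last = performance.now();
    const frame = (now: number) => {
      this.tick((now - last) / 1000);                      // step 7: repeat
      last = now;
      requestAnimationFrame(frame);
    };
    requestAnimationFrame(frame);
  }
}
```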
That pretty much covers the basics and should generally apply no matter what underlying API or engine you are working with.
Since XNA is closed-source, it is not possible to see the code for the game loop as defined in the Game class. The Tick() you have referenced is not exposed or referenced in any XNA documentation (although it may be used behind the scenes; I do not know).
As @Ascendion has pointed out, MonoGame has an equivalent class named Game (see here). While this may not reflect the XNA code exactly, it is the best compatible source that we, the public, have available (in all tests performed between the two, they return the same values).
The main caveat with the MonoGame implementation is its platform-independent code, which may be hard to follow since some of the implementation is located in additional files. Still, it is not hard to trace the sources back to the expected platform-specific code.
This post is to serve as a clarification to all who may stumble upon this later.

Hardware/Software rasterizer vs Ray-tracing

I saw the presentation "High-Performance Software Rasterization on GPUs" at High-Performance Graphics, and I was very impressed by the work/analysis/comparison:
http://www.highperformancegraphics.org/previous/www_2011/media/Papers/HPG2011_Papers_Laine.pdf
http://research.nvidia.com/sites/default/files/publications/laine2011hpg_paper.pdf
My background is CUDA; I started learning OpenGL two years ago to develop the 3D interface of EMM-Check, a field-of-view analysis program that checks whether a vehicle fulfills a specific standard or not. Essentially, you load a vehicle (or different parts of one), then you can move it completely or piece by piece, add mirrors/cameras, analyze the point of view and the shadows from the driver's point of view, etc.
We are dealing with some transparent elements (mainly the fields of view, but the vehicles themselves might be transparent too), so I wrote a rough algorithm to sort the elements to be rendered on the fly (at the primitive level, a kind of painter's algorithm). Of course there are cases where it easily fails, although for most cases it is enough.
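For reference, a primitive-level painter's sort of the kind described above usually boils down to something like this sketch (TypeScript; all names invented): order transparent triangles back-to-front by centroid distance and draw them in that order. Intersecting or cyclically overlapping primitives are exactly where it fails, which is what motivates the techniques below.

```typescript
// Sort transparent triangles back-to-front by the squared distance
// of their centroid to the camera position.
type Vec3 = [number, number, number];
interface Triangle { a: Vec3; b: Vec3; c: Vec3; }

function sortBackToFront(tris: Triangle[], camera: Vec3): Triangle[] {
  const distSq = (t: Triangle): number => {
    // vector from the camera to the triangle's centroid
    const cx = (t.a[0] + t.b[0] + t.c[0]) / 3 - camera[0];
    const cy = (t.a[1] + t.b[1] + t.c[1]) / 3 - camera[1];
    const cz = (t.a[2] + t.b[2] + t.c[2]) / 3 - camera[2];
    return cx * cx + cy * cy + cz * cz;
  };
  // farthest first, so nearer triangles blend over the ones behind
  return [...tris].sort((t1, t2) => distSq(t2) - distSq(t1));
}
```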
For this reason I started googling and found many techniques, like (dual) depth peeling, A/R/K/F-buffers, etc.
But it looks like all of them suffer at high resolutions and/or with large numbers of triangles.
Since we also deal with millions of triangles (up to 10 million, more or less), I was looking for something else and ended up at software renderers: compared to the hardware ones they offer free programmability, but they are slower.
So I wonder if it might be possible to implement something hybrid, that is, using the hardware renderer for the opaque elements and the software one (CUDA/OpenCL) for the transparent elements, and then combining the two results.
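For what it's worth, the "combining" step of such a hybrid is conceptually a depth-aware "over" composite. A per-pixel sketch follows (TypeScript for illustration; in practice this would live in a fullscreen shader or a CUDA kernel, and all names here are hypothetical):

```typescript
// Composite one pixel of the software-rendered transparent layer over
// the hardware-rendered opaque layer, respecting depth.
interface Rgba { r: number; g: number; b: number; a: number; }

function compositePixel(
  opaque: Rgba, opaqueDepth: number,
  transp: Rgba, transpDepth: number
): Rgba {
  // transparent fragment hidden behind opaque geometry: keep opaque
  if (transpDepth >= opaqueDepth) return opaque;
  // standard "over" operator: transparent layer sits in front
  const a = transp.a;
  return {
    r: transp.r * a + opaque.r * (1 - a),
    g: transp.g * a + opaque.g * (1 - a),
    b: transp.b * a + opaque.b * (1 - a),
    a: 1,
  };
}
```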
Or maybe a simple ray-tracing algorithm in CUDA/OpenCL (no complex visual effects required, just position, color, simple lighting, and proper transparency) might be much simpler from this point of view and also give us a lot of freedom/flexibility in the future?
I did not find anything on the net regarding this... is there maybe some particular obstacle?
I would like to hear every single thought/tip/idea/suggestion you have regarding this.
P.S.: I also found "Single Pass Depth Peeling via CUDA Rasterizer" by Liu, but the solution from the first paper seems far faster:
http://webstaff.itn.liu.se/~jonun/web/teaching/2009-TNCG13/Siggraph09/content/talks/062-liu.pdf
I might suggest that you look at OpenRL, which will give you hardware-accelerated ray tracing.

What is a sensible strategy to evaluate WebGL-based APIs?

Since there are a lot of high-level APIs, libraries, and frameworks available for WebGL for developing 3D web applications, I want to select the best one (sorry, this is a bit blunt) to implement a particular model (which isn't game-oriented) on the web. I'm confused about how to approach this work. The criteria I want to use for evaluation are:
pickable objects, easily defined geometry and corresponding textures, multi-camera rendering, the possibility to incorporate GLSL implementations, and the types of buffers available.
I can't experiment with and judge every framework by developing a demo application in each one, due to time constraints. Is there a particular way to read the documentation for the available APIs that covers all of these criteria? Moreover, the problem is that every framework claims to be good at some part; how do I get past that to justify a single framework among all those available out there?
A suggestion would be enough for my research...
If you have Maya at hand, then www.inka3d.com is easy in terms of defining geometry and textures (because you do it with Maya and your favorite image editor), and you get pickable objects. For shaders you can't use GLSL but have to use Maya's node-based shader editor.

3D library recommendations for interactive spatial data visualisation?

Our software produces a lot of data that is georeferenced and recorded over time. We are considering ways to improve the visualisation, and showing the (processed) data in a 3D view, given it's georeferenced, seems a good idea.
I am looking for SO's recommendations for which 3D libraries are best to use as a base when building this kind of visualisation in a Delphi- / C++Builder-based Windows application. I'll add a bounty when I can.
The data
Is recorded over time (hours to days) and is GPS-tagged. So, we have a lot of data following a path over time.
Is spatial: it represents real 3D elements of the earth, such as the land, or 3D elements of objects around the earth.
Is high volume: we could have a point cloud, say, of hundreds of thousands to millions of points. Processed data may display as surfaces created from these point clouds.
From that, you can see that an interactive, spatially-based 3D visualisation seems a good approach. I'm envisaging something where you can easily and quickly navigate around in space, and data will load or be generated on the fly depending on what you're looking at. I would prefer we don't try to write our own 3D library from scratch - for something like this, there have to be good existing libraries we can work from.
So, I'm hoping for a library which supports:
good navigation (is the library based on Euler rotations only, for example? Can you 'pick' objects to rotate around or move with easily?);
modern GPUs (shader-only rendering is ok; being able to hook into the pipeline to write shaders that map values to colours and change dynamically would be great - think data values given a colour through a colour lookup table);
dynamic data / objects (data can be added as it's recorded; and if the data volume is too high, we should be able to page things in and out or recreate them, and only show a sensible subset so that whatever the user's viewport is looking at is there onscreen, but other data can be loaded/regenerated, preferably asynchronously, or at least quickly as the user navigates. Obviously data creation is dependent on us, but a library that has hooks for this kind of thing would be great.)
and technologically, works with Delphi / C++Builder and the VCL.
Libraries
There are two main libraries I've considered so far - I'm looking for knowledgeable opinions about these, or for other libraries I haven't considered.
1. FireMonkey
This is Embarcadero's new UI library, which is only available in XE2 and above. Our app is based on the VCL and we'd want to host this in a VCL window; that seems to be officially unsupported but unofficially works fine, or is available through third-parties.
The mix of UI framework and 3D framework with shaders etc. sounds great. But I don't know how complex the library is, what support it has for data that's not a simple object like a cube or sphere, or how well-designed it is. That last link has major criticisms of the 3D side of the library - severe enough that, at the time of writing, I am not sure it's worthwhile in its current state for a non-trivial 3D app.
Is it worth trying to write a new visualisation window in our VCL app using FireMonkey?
2. GLScene
GLScene is a well-known 3D OpenGL framework for Delphi. I have never used it myself so have no experience about how it works or is designed. However, I believe it integrates well into VCL windows and supports shaders and modern GPUs. I do not know how its scene graph or navigation work or how well dynamic data can be implemented.
Its feature list specifically mentions some things I'm interested in, such as easy rotation/movement, procedural objects (implying dynamic data is easy to implement), and helper functions for picking. It seems shaders are Cg only (not GLSL or another non-vendor-specific language.) It also supports "polymorphic image support for texturing (allows many formats as well as procedural textures), easily extendable" - that may just mean many image formats, or it may indicate something where the texture can be dynamically changed, such as for dynamic colour mapping.
Where to from here?
These are the only two major 3D libraries I know of for Delphi or C++Builder. Have I missed any? Are there pros and cons I'm not aware of? Do you have any experience using either of these for this kind of purpose, and what pitfalls should we be wary of or features should we know about and use?
We currently use Embarcadero RAD Studio 2010 and most of our software is written in C++. We have small amounts of Delphi and may consider upgrading IDEs, but we are most likely to wait until the 64-bit C++ compiler is released. For that reason, a library that works in RS2010 might be best.
Thanks for your input :) I'm after high-quality answers, so I'll add a bounty when I can!
I have used GLScene in my 3D geomapping software, and although it's not used to the extent you're looking for, I can vouch that it seems the most appropriate for what you're trying to do.
GLScene supports terrain rendering and adding customizable objects to the scene. Objects can be interacted with and you can create complex 3D models of objects using the various building blocks of GLScene.
Unfortunately I cannot say how it will work with millions of points, but I do know that it is quite optimized and performs well on minimal hardware - that being said, the target PC I worked with required a dedicated graphics card capable of using OpenGL 2.1 extensions or higher (I found small issues with integrated graphics cards).
The other library I looked at was DXScene, which appears quite similar to GLScene, albeit using DirectX instead of OpenGL. From memory this was a commercial product, whereas GLScene was licensed under the GPL. (EDIT - the page seems to be down at the moment: http://www.ksdev.com/index.html)
GLScene is still in active development and provides a fairly comprehensive library of functions, base objects, texturing, etc. Things like rotation, translation, pitch, roll, turn, and ray casting - to name a few - are all provided for you. Visibility culling is provided for each base object, as well as viewing cameras, lighting, and meshes. Base objects include cubes, spheres, pipes, tetrahedrons, cones, terrain, grids, 3D text, and arrows.
Objects can be picked with the mouse and moved along 1, 2, or 3 axes. Helper functions are included to automatically determine the top-most object under the mouse. Complex 3D shapes can be built by attaching base objects to other base objects in a hierarchical manner. So, for example, a car could be built using a rectangle as the base object with four cylinders attached to it for the wheels - then you can manipulate the 'car' as a whole, since the four cylinders are attached to the base rectangle.
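To illustrate that attachment idea generically (this is not GLScene's actual API, which is Delphi; all names here are invented): children store their transforms relative to the parent, so manipulating the root carries the whole assembly along. A translation-only sketch:

```typescript
// Minimal scene-graph node: world position is composed from the
// parent's world position plus the node's local offset. (A real
// scene graph composes full matrices; translation keeps this short.)
interface Vec3 { x: number; y: number; z: number; }

class SceneNode {
  children: SceneNode[] = [];
  constructor(public name: string, public localPosition: Vec3) {}

  attach(child: SceneNode): void { this.children.push(child); }

  worldPosition(parentWorld: Vec3 = { x: 0, y: 0, z: 0 }): Vec3 {
    return {
      x: parentWorld.x + this.localPosition.x,
      y: parentWorld.y + this.localPosition.y,
      z: parentWorld.z + this.localPosition.z,
    };
  }
}

// The 'car' example: move the body, and the wheels follow along.
const car = new SceneNode("body", { x: 0, y: 0, z: 0 });
car.attach(new SceneNode("wheel", { x: 1, y: 0, z: 1 }));
car.localPosition.x += 5; // every wheel's world position shifts too
```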
The only downside I would bring to your attention is the sometimes limited help/support available to you. Yes, there is a reference manual and a number of demo applications to show you how to do things such as selecting objects and moving them around; however, the reference manual is not complete and there is potential to get 'stuck' on how to accomplish a certain task. Forum support is somewhat limited/sparse. If you have a sound knowledge of 3D basics and concepts, I'm sure you could nut it out.
As for FireMonkey - I have had no experience with it, so I can't comment. I believe it is more targeted at mobile applications with lower hardware requirements, so you may have issues with larger data sets.
Here are some other links that you may consider - I have no experience with them:
http://www.truevision3d.com/
http://www.3impact.com/
Game Development in Delphi
The last one is targeted at game development - but may provide useful information.
Have you tried glData? http://gldata.sourceforge.net/
It is old (~2004, Delphi 7), and I have not personally used the library, but some of the output is amazing.
You can use GLScene or OpenGL; they are good for 3D rendering and very easy to use.
Since you are already using georeferenced data, maybe you should consider embedding Google Earth in your Delphi application like this? Then you can add data to it as points, paths, or objects.