SceneKit: Performance of SCNNode octrees/quadtrees vs. all in root level - iOS

To represent terrain in a SceneKit game, I have around 20k SCNNodes structured in a hierarchy, similar to an octree or quadtree. This tree isn't balanced - some branches have far more great(*n)-grandchildren than others.
How much extra time is SceneKit spending to get at individual SCNNodes for physics, rendering, adding/deleting nodes etc. compared to if they were all flat at the root level? Does it have to do lots of extra work to traverse the entire height of the tree just to iterate or perform a random access, or is it just not a significant overhead? (Maybe it's clever enough to have structured the nodes itself in advance?)
I'm not asking how a graphics engine might theoretically handle this. I'm asking what SceneKit actually does.
Edit: just in case it's putting people off answering this... I don't need exact numbers of how much time SceneKit takes, obviously that's device-dependent anyway. I just want to know if it's a significant proportion of time. Has anyone had experience of trying both approaches and comparing, or switching from one to the other and noticing whether there was a significant difference?
Thanks for your help.

I am also looking into this as my scene is starting to pack a serious amount of nodes, but here's my take:
When you call node.childNode(withName:recursively:) with recursively: true, the implementation presumably walks some form of tree, balanced or not - take your pick between a binary tree, an a-b tree, or a B+ tree (in the case where one node has more children than a certain threshold). You will mostly end up with O(log n) complexity, depending on whether you want to insert into, delete from, or search the tree structure. A self-balancing tree such as an AVL tree keeps its height logarithmic, so lookups stay cheap.
Just looking at quadtree or R-tree structures, they also probably have O(log n) complexity, since searching means descending iteratively into sub-quadrants of the root quadrant.
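To make the complexity argument concrete, here is a small illustration in plain Python (this is not SceneKit code and says nothing about its actual internals; the Node class and function names are hypothetical). It contrasts a recursive name search over a hierarchy, whose cost grows with the number of nodes visited, with a flat index built once up front:

class Node:
    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []

def find_recursive(node, name):
    # Cost grows with the number of nodes visited before a match is found.
    if node.name == name:
        return node
    for child in node.children:
        found = find_recursive(child, name)
        if found is not None:
            return found
    return None

def build_index(root):
    # One O(n) pass; afterwards each lookup is O(1) on average.
    index, stack = {}, [root]
    while stack:
        node = stack.pop()
        index[node.name] = node
        stack.extend(node.children)
    return index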
It would be good to know what your actual child-node structure looks like, to see what the best strategy would be.
On a side note, what prompts your terrain to have 20k nodes? Are you attaching a node per Bézier point or vertex on your terrain to do some morphing?

Related

DBSCAN: How to Cluster Large Dataset with One Huge Cluster

I am trying to perform DBSCAN on 18 million data points, so far just 2D but hoping to go up to 6D. I have not been able to find a way to run DBSCAN on that many points. The closest I got was 1 million with ELKI and that took an hour. I have used Spark before but unfortunately it does not have DBSCAN available.
Therefore, my first question is if anyone can recommend a way of running DBSCAN on this much data, likely in a distributed way?
Next, the nature of my data is that ~85% of it lies in one huge cluster (anomaly detection). The only technique I have been able to come up with to allow me to process more data is to replace a big chunk of that huge cluster with one data point in a way that it can still reach all its neighbours (the deleted chunk is smaller than epsilon).
Can anyone provide any tips on whether I'm doing this right, or whether there is a better way to reduce the complexity of DBSCAN when you know that most of the data is in one cluster centered around (0.0, 0.0)?
Have you added an index to ELKI, and tried the parallel version? Except for the git version, ELKI will not automatically add an index; and even then, fine-tuning the index for the problem can help.
DBSCAN is not a good approach for anomaly detection - noise is not the same as anomalies. I'd rather use a density-based anomaly detection method. There are variants that try to skip over "clear inliers" more efficiently if you know you are only interested in the top 10%.
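If you go the anomaly-detection route, a minimal sketch with scikit-learn's Local Outlier Factor could look like the following (scikit-learn, the parameter values, and the placeholder array are my assumptions, not something from the question):

import numpy as np
from sklearn.neighbors import LocalOutlierFactor

points = np.random.randn(100_000, 2)        # placeholder for the real 18M x 2D data
lof = LocalOutlierFactor(n_neighbors=50, contamination=0.1)
labels = lof.fit_predict(points)            # -1 = anomaly, 1 = inlier
scores = -lof.negative_outlier_factor_      # larger score = more anomalous
top_1000 = points[np.argsort(scores)[-1000:]]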
If you already know that most of your data is in one huge cluster, why don't you directly model that big cluster, and remove it / replace it with a smaller approximation?
Subsample. There usually is next to no benefit to using the entire data. Even (or in particular) if you are interested in the "noise" objects, there is the trivial strategy of randomly splitting your data in, e.g., 32 subsets, then cluster each of these subsets, and join the results back. These 32 parts can be trivially processed in parallel on separate cores or computers; but because the underlying problem is quadratic in nature, the speedup will be anywhere between 32 and 32*32=1024.
This in particular holds for DBSCAN: larger data usually means you also want to use much larger minPts. But then the results will not differ much from a subsample with smaller minPts.
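A rough sketch of the split / cluster / rejoin strategy described above, using scikit-learn's DBSCAN (the library choice, eps, min_samples, and the number of parts are placeholders to adapt; merging clusters that span several parts is left as a post-step):

import numpy as np
from sklearn.cluster import DBSCAN

def cluster_in_parts(points, n_parts=32, eps=0.05, min_samples=10):
    rng = np.random.default_rng(0)
    order = rng.permutation(len(points))
    labels = np.full(len(points), -1, dtype=int)
    next_label = 0
    for part in np.array_split(order, n_parts):
        part_labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points[part])
        # offset labels so clusters from different parts do not collide
        mask = part_labels >= 0
        labels[part[mask]] = part_labels[mask] + next_label
        if mask.any():
            next_label += part_labels.max() + 1
    return labels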
But in any case: before scaling to larger data, make sure your approach solves your problem, and is the smartest way of solving it. Clustering for anomaly detection is like trying to smash a screw into the wall with a hammer. It works, but maybe using a nail instead of a screw is the better approach.
Even if you have "big" data and are proud of doing "big data", always begin with a subsample. Unless you can show that result quality increases with data set size, don't bother scaling to big data; the overhead is too high unless you can prove that value.

How is Growing Neural Gas used for clustering?

I know how the algorithm works, but I'm not sure how it determines the clusters. Based on images, I guess that it treats all the neurons that are connected by edges as one cluster. So you might have two clusters if there are two groups of neurons that are each connected internally but not to each other. But is that really it?
I also wonder... is GNG really a neural network? It doesn't have a propagation function, an activation function, or weighted edges... isn't it just a graph? I guess that depends a bit on personal opinion, but I would like to hear some.
UPDATE:
This thesis www.booru.net/download/MasterThesisProj.pdf deals with GNG clustering, and on page 11 you can see an example of what looks like clusters of connected neurons. But then I'm also confused by the number of iterations. Let's say I have 500 data points to cluster. Once I put them all in, do I remove them and add them again to adapt the existing network? And how often do I do that?
I mean... I have to re-add them at some point. When adding a new neuron r between two old neurons u and v, some data points formerly belonging to u should now belong to r because it is closer. But the algorithm does not include reassigning these data points. And even if I remove them after one iteration and add them all again, the stale assignment of those points for the rest of that first iteration still changes how the network is processed, doesn't it?
NG and GNG are a form of self-organizing maps (SOM), which are also referred to as "Kohonen neural networks".
These are based on an older, much wider view of neural networks, from when they were still inspired by nature rather than driven by GPU capabilities for matrix operations. Back then, when you did not yet have massive SIMD architectures, there was nothing wrong with having neurons self-organize rather than being pre-organized in strict layers.
I would not call this clustering, although that term is commonly (ab)used in related work, because I don't see any strong property of these "clusters".
SOMs are literally maps as in geography. A SOM is a set of nodes ("neurons"), usually arranged in a 2d rectangular or hexagonal grid (= the map). The positions in the input space are then optimized iteratively to fit the data. Because they influence their neighbors, they cannot move freely. Think of wrapping a net around a tree; the knots of the net are your neurons. NG and GNG appear to be pretty much the same thing, but with a more flexible structure of nodes. But actually a nice property of SOMs is the 2d map that you can get.
The only approach I remember for clustering was to project the input data onto the discrete 2d space of the SOM grid, then run k-means on this projection. It will probably work okayish (as in: it will perform similarly to k-means), but I'm not convinced that it's theoretically well supported.
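For what it's worth, a minimal sketch of that idea in Python; som_weights is assumed to be the trained (rows, cols, dim) grid of neuron positions from whatever SOM implementation you use, and scikit-learn's KMeans is my choice here, not something prescribed by the answer:

import numpy as np
from sklearn.cluster import KMeans

def bmu_coordinates(som_weights, data):
    # Map each sample to the (row, col) of its best matching unit on the grid.
    rows, cols, dim = som_weights.shape
    flat = som_weights.reshape(-1, dim)
    coords = []
    for x in data:
        bmu = np.argmin(np.linalg.norm(flat - x, axis=1))
        coords.append(divmod(bmu, cols))
    return np.array(coords, dtype=float)

def cluster_on_map(som_weights, data, k=2):
    # Run k-means on the discrete 2d projection rather than the input space.
    grid_coords = bmu_coordinates(som_weights, data)
    return KMeans(n_clusters=k, n_init=10).fit_predict(grid_coords)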

R-Tree vs R+-Tree vs R*-Tree

What would be major reasons to prefer an R+-tree over an R-tree for spatial indexing? As far as I know, the R+-tree avoids node overlap, which leads to more complex code, more complex division algorithms, and so on. The R*-tree is very similar to the R-tree, but minimizes node overlap and requires much less code than the R+-tree. So, what would be a reason to choose the R+-tree over the R*-tree, except the case when each node lookup requires expensive I/O?
If your objects overlap badly, the R+-tree partitioning may be beneficial, as you have to look at fewer leaves and paths through your tree when searching for a particular location.

Number of hidden layers in a neural network model

Would someone be able to explain to me or point me to some resources of why (or situations where) more than one hidden layer would be necessary or useful in a neural network?
Basically more layers allow more functions to be represented. The standard book for AI courses, "Artificial Intelligence, A Modern Approach" by Russell and Norvig, goes into some detail of why multiple layers matter in Chapter 20.
One important point is that with a sufficiently large single hidden layer, you can represent every continuous function, but you will need at least 2 layers to be able to represent every discontinuous function.
In practice, though, a single layer is enough at least 99% of the time.
That's more similar to the way the brain works (which might not necessarily be a computational advantage, but a lot of people are researching NNs to gain insight into the way the mind works, rather than to solve real-world problems).
It's easier to achieve some kinds of invariance using more layers. For example, an image classifier that works regardless of where in the image the object is found, or regardless of the object's size. See Bouvrie, J., L. Rosasco, and T. Poggio, "On Invariance in Hierarchical Models", Advances in Neural Information Processing Systems (NIPS) 22, 2009.
Each layer effectively raises the potential "complexity" of adaptation in an exponential fashion (as opposed to the multiplicative effect of adding more nodes to a single layer).
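As a small, hedged illustration of the fact that the number of hidden layers is just a hyperparameter you can compare empirically, here is a sketch with scikit-learn's MLPClassifier on a toy dataset (the dataset, layer sizes, and iteration count are arbitrary choices of mine):

from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_moons(n_samples=2000, noise=0.2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for layers in [(32,), (32, 32)]:                 # one vs. two hidden layers
    clf = MLPClassifier(hidden_layer_sizes=layers, max_iter=2000, random_state=0)
    clf.fit(X_train, y_train)
    print(layers, clf.score(X_test, y_test))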

How to interpolate between data points?

I am currently developing a piece of software using OpenCV and Qt that plots data points. I need to be able to fill in an image from incomplete data. I want to interpolate between the points I have. Can anyone recommend a library or function that could help me? I thought maybe the OpenCV remap method would work, but I can't seem to get it to.
The data is a 2-d matrix of intensity values. I want to create an image of some sort. It's a school project.
Interpolation is a complex subject. There are infinitely many ways to interpolate a set of points, and this is assuming that you truly do wish to do interpolation, and not smoothing of any sort. (An interpolant reproduces the original data points exactly.) And of course, the 2-d nature of this problem makes things more difficult.
There are several common schemes for interpolation of scattered data in 2-d. Actually, for those who have access to it, a very nice paper is available (Richard Franke, "Scattered data interpolation: Tests of some methods", Mathematics of Computation, 1982.)
Perhaps the most common method used is based on a triangulation of your data. Merely build a triangulation of the domain from your data points. Then any point inside the convex hull of the data must lie inside exactly one of the triangles, or it will be on a shared edge. This allows you to interpolate linearly inside the triangle. (If you are using MATLAB, the function griddata is available for this express purpose.)
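For non-MATLAB users, a minimal sketch of the same triangulation-based linear interpolation with SciPy's griddata; the sample arrays here are placeholders standing in for your known points:

import numpy as np
from scipy.interpolate import griddata

xs = np.random.uniform(0, 100, 500)          # known sample locations
ys = np.random.uniform(0, 100, 500)
values = np.sin(xs / 10) * np.cos(ys / 10)   # known intensities

grid_y, grid_x = np.mgrid[0:100, 0:100]      # full image grid to fill
image = griddata((xs, ys), values, (grid_x, grid_y), method='linear')
# pixels outside the convex hull come back as NaN; method='nearest' can fill those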
The problem when trying to populate a complete rectangular image from scattered points is that very likely the data does not extend to the 4 corners of the array. In that event, a triangulation-based scheme will fail, since the corners of the array do not lie inside the convex hull of the scattered points. An alternative then is to use "radial basis functions" (often abbreviated RBF). There are many such schemes to be found, including Kriging, as it is known in the geostatistics community.
http://en.wikipedia.org/wiki/Kriging
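A minimal RBF sketch with SciPy (RBFInterpolator requires a reasonably recent SciPy; the kernel choice and toy data are my assumptions). Unlike the triangulation approach, it will also produce values outside the convex hull:

import numpy as np
from scipy.interpolate import RBFInterpolator

pts = np.random.uniform(0, 100, (500, 2))             # known (x, y) locations
vals = np.sin(pts[:, 0] / 10) * np.cos(pts[:, 1] / 10)

rbf = RBFInterpolator(pts, vals, kernel='thin_plate_spline')
grid_y, grid_x = np.mgrid[0:100, 0:100]
image = rbf(np.column_stack([grid_x.ravel(), grid_y.ravel()])).reshape(100, 100)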
Finally, inpainting is the name for a scheme of interpolation on an array where some elements are known but others are missing. The name obviously refers to the work done by an art conservator who needs to repair a tear or rip in a valuable piece of artwork.
http://en.wikipedia.org/wiki/Inpainting
The idea behind inpainting is typically to formulate a boundary value problem. That is, define a partial differential equation on the region where there is a hole. Using the known boundary values, fill in the hole by solving the PDE for the unknown elements. This can be computationally intensive if there are a huge number of unknown elements, since it typically requires the solution of at least a massive sparse system of linear equations. If the PDE is a nonlinear one, then it becomes a more intensive problem yet. A simple, reasonably good choice for the PDE is the Laplacian, which results in a linear system that extrapolates well. Again, I can offer a solution for a MATLAB user.
http://www.mathworks.com/matlabcentral/fileexchange/4551
Better choices for the PDE may come from nonlinear PDEs. One such is the Navier-Stokes equation. It is well suited to modeling the types of surfaces typically seen, but it is also more difficult to deal with. As in many facets of life, you get what you pay for.
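Since the question already uses OpenCV, it is worth noting that OpenCV ships an inpainting routine; a minimal sketch follows (the toy image and mask are placeholders; INPAINT_NS is the Navier-Stokes based variant, INPAINT_TELEA is an alternative):

import cv2
import numpy as np

img = np.random.randint(0, 256, (100, 100), dtype=np.uint8)   # known intensities (8-bit)
mask = np.zeros((100, 100), dtype=np.uint8)
mask[40:60, 40:60] = 255                                       # non-zero marks missing pixels

filled = cv2.inpaint(img, mask, 3, cv2.INPAINT_NS)             # radius 3, Navier-Stokes variant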
Phew! Big subject.
The "right" answer depends a lot on your problem domain and various details of what you're doing.
Interpolating in more than 1 dimension requires making some choices. I'll assume that you are plotting on a regular grid, but that some of your grid points have no data. Big question: are the missing points sparse, or do they make big blobs?
You can't add information, so you're just trying to establish something that will look OK.
Conceptually simple suggestion (but the implementation may be some work):
For each region of missing data, identify all the edge points. That is, find the x's in this figure:
oooxxooo
oox..xoo
oox...xo
ox..xxoo
oox.xooo
oooxoooo
where the .'s are the points missing data, and the x's and o's have data (for a single missing point, this will be the four nearest neighbors). Fill in each missing data point with an average over the edge points around its blob. To make it smooth, weight each edge point by 1/d, where d is the taxicab distance (delta x + delta y) between the two points.
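A rough Python sketch of that scheme, assuming the data sits in a 2-D NumPy array with NaN marking the missing points (SciPy's ndimage is used to find the blobs; the function name and array layout are hypothetical):

import numpy as np
from scipy import ndimage

def fill_blobs(grid):
    # grid: 2-D float array, np.nan marks missing points.
    filled = grid.copy()
    missing = np.isnan(grid)
    labels, n_blobs = ndimage.label(missing)            # group missing pixels into blobs
    for blob in range(1, n_blobs + 1):
        blob_mask = labels == blob
        # edge points: known pixels directly adjacent to this blob
        edge_mask = ndimage.binary_dilation(blob_mask) & ~missing
        ey, ex = np.nonzero(edge_mask)
        for y, x in zip(*np.nonzero(blob_mask)):
            d = np.abs(ey - y) + np.abs(ex - x)          # taxicab distance to each edge point
            w = 1.0 / np.maximum(d, 1)
            filled[y, x] = np.sum(w * grid[ey, ex]) / np.sum(w)
    return filled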
From before we had any details:
In the absence of that kind of information, have you tried straight ahead linear interpolation? If your data is reasonably dense this might do it for you, and it is simple enough to code in-line when you need it.
Next step is usually a cubic spline, but for that you'll probably want to grab an existing implementation.
When I need something more powerful than a quick linear interpolation, I usually use ROOT (and pick one of the TSpline classes), but this may be more overhead than you need.
As noted in the comments, ROOT is big, and while it is fast, it does try to force you to do things the ROOT way, so it can have a big effect on your program.
A linear interpolation between (or indeed extrapolation from) two points (x1, y1) and (x2, y2) gives you
y_i = y1 + (x_i - x1)*(y2 - y1)/(x2 - x1)
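As a tiny sketch, the same formula as a function (plain Python, no dependencies):

def lerp(x1, y1, x2, y2, x_i):
    # Linear interpolation (or extrapolation) through (x1, y1) and (x2, y2).
    return y1 + (x_i - x1) * (y2 - y1) / (x2 - x1)

lerp(0.0, 0.0, 2.0, 4.0, 1.0)   # -> 2.0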
Considering this is a simple school project, probably the easiest interpolation technique to implement is "nearest neighbors".
For each missing data point you find the nearest "filled" data point and use that as the value.
If you want to improve the results a little more, you can, let's say, find the K nearest data points and use their weighted average as the value of your missing data point.
The weights could be inversely proportional to the distance of each point from the missing data point, as in the sketch below.
There are a zillion other techniques, but nearest neighbor is probably the easiest to implement.
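A short sketch of the weighted k-nearest-neighbor fill using SciPy's cKDTree (the NaN-for-missing convention, the value of k, and the function name are my assumptions):

import numpy as np
from scipy.spatial import cKDTree

def knn_fill(grid, k=4):
    # grid: 2-D float array with np.nan marking missing points.
    filled = grid.copy()
    known = ~np.isnan(grid)
    known_coords = np.argwhere(known)
    tree = cKDTree(known_coords)
    missing_coords = np.argwhere(~known)
    dists, idx = tree.query(missing_coords, k=k)
    weights = 1.0 / np.maximum(dists, 1e-9)           # weight inversely by distance
    values = grid[known_coords[idx][..., 0], known_coords[idx][..., 1]]
    filled[~known] = np.sum(weights * values, axis=1) / np.sum(weights, axis=1)
    return filled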
If I understand correctly, your need is as follows:
I think you have a subset of (x, y, intensity) samples over a region of dimensions L by W, and you want to fill in intensities for all X ranging from 0 to L and Y ranging from 0 to W.
If this is your question, then one solution is to estimate the other intensities by using filters.
I think a Bayer filter or a Gaussian filter would do the job for you.
You can google these filters and you will find ways to implement them.
Best of luck.
