Is there a reason to create multiple instances of Ogre::RaySceneQuery for the same scene?

I wonder if there are any benefits to using multiple instances of Ogre::RaySceneQuery for the same scene, or if there is anything that requires its own dedicated instance?

Usually you only perform one query at a time, so there's no need to use a new RSQ instance each time.
Even if you're doing multiple types of queries, you have to give the RSQ a new ray each time anyway, so there won't be any difference whether you do this with one instance or several.
Usually the code looks something like this:
Ogre::SceneManager* mSceneMgr; // assumed to point to a valid scene manager
Ogre::Ray ray; // whatever ray you want to query with
Ogre::RaySceneQuery* mRaySceneQuery = mSceneMgr->createRayQuery(Ogre::Ray());
mRaySceneQuery->setRay(ray);
Ogre::RaySceneQueryResult& result = mRaySceneQuery->execute();
// loop through the result here
mSceneMgr->destroyQuery(mRaySceneQuery); // destroy the query when you are done with it
For each new ray you want to test, you have to give the RSQ that ray. Usually the rays depend on some coordinates (the player, the camera, ...), so you have to update them each iteration.
If you have a static ray (one which doesn't depend on anything and will be the same vector each and every iteration), you may save the call to setRay(Ogre::Ray) by using another RSQ instance for this specific ray only, but I doubt that this will be a noticeable performance boost, as you still have to execute the query.
There's another point you should consider: query masks.
Every entity can carry a binary mask which determines whether it can be hit by a ray query.
Let's suppose you have this structure
enum QueryFlags {
    FRIENDS = 1 << 0,
    FOES = 1 << 1
};
and every frame you want to test whether the ray hit a friend and/or a foe. There are a few possibilities:
You may check for both at once with mRaySceneQuery->setQueryMask(FRIENDS | FOES) and inspect every retrieved result to see whether it is a friend or a foe.
You may execute two queries, one for FRIENDS and a second for FOES.
You may use two RSQs, one for the friends and a second for the foes. This way you save a call to setQueryMask each time. As above, I doubt this will give a significant performance gain, but I prefer this last option, sketched below.
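A minimal sketch of that last option, assuming mSceneMgr is valid and the entities' flags have already been set via setQueryFlags:
Ogre::RaySceneQuery* friendQuery = mSceneMgr->createRayQuery(Ogre::Ray());
friendQuery->setQueryMask(FRIENDS); // set once, reused every frame
Ogre::RaySceneQuery* foeQuery = mSceneMgr->createRayQuery(Ogre::Ray());
foeQuery->setQueryMask(FOES);
// each frame: update only the ray, the masks stay as they are
friendQuery->setRay(ray);
Ogre::RaySceneQueryResult& friendHits = friendQuery->execute();
foeQuery->setRay(ray);
Ogre::RaySceneQueryResult& foeHits = foeQuery->execute();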

Related

update property in all edges, fast?

I want to update a property in every "edge" every n-cycles/seconds/minutes.
As you may suspect, this is time-consuming and probably won't work well.
One possible approach is to do it in chunks.
The question is how best to do that.
Here is what a full sweep looks like:
MATCH (n1)-[x:q]-(n2)
SET x.decay = x.decay * exp(-rate)
So the idea is to decay the edges and remove them when they hit a specific value.
If I do it in chunks, how do I keep track of which edges I have already decayed so that I can skip them, and do it faster and cheaper?
Sounds like you need a better approach.
For example, store the calculated expiration time (as a timestamp) in every relationship. Any query that wants to use such a relationship can then test that it has not expired. This way, there is no need to update any relationship properties, and all queries will get the correct behavior (down to the millisecond).
Here is a sample snippet:
...
MATCH (foo)-[rel:REL]->(bar)
WHERE timestamp() < rel.expiration
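To go with it, a hedged sketch of setting that property when the relationship is created (the node labels, the id properties, and the one-hour lifetime are just assumptions):
MATCH (foo:Foo {id: 1}), (bar:Bar {id: 2})
CREATE (foo)-[rel:REL {expiration: timestamp() + 3600000}]->(bar) // now + 1 hour in ms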
You can also periodically remove expired relationships to clean up the DB and improve query performance.
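A periodic cleanup, using the same assumed names, could be as simple as:
MATCH ()-[rel:REL]->()
WHERE rel.expiration <= timestamp()
DELETE rel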

Neo4j Structure for GPS coordinates log

I'm using neo4j for a, let's call it, social network where users will have the ability to log their position during workouts (think Runkeeper and Strava).
I'm thinking about how I want to save the coordinates.
Is it a good idea to model it like node(user)-has->node(workouts)<-is a-node(workout)-start->node(coord)-next->node(coord)-next->..., i.e. a linked list of coordinates for every workout?
I will never query the db for individual points, the workout will always be retrieved as a whole.
Is there a better way to solve this?
I can imagine that a graph DB isn't the ideal place to store this type of data, but I don't want to add the complexity of another DB right now.
Can someone give me any insight on this?
I would suggest you store it as:
user --has--> workout --positionedAt--> coord
This design feels more natural to me, as the linked-list design you mentioned in your question just produces a really deep traversal which might be annoying to query. This way you can easily find all the coordinates for a particular workout by simply iterating the edges on the workout vertex. I would also recommend storing a datetime stamp on the positionedAt edge so that you can sort your coordinates easily.
The downside is that, depending on how many coord vertices you intend to have, you might end up with some fat workout vertices, but that may not really affect your use case. I can't think of a workout that would generate something like 100,000 coordinates (and hence 100,000 edges), but perhaps you can. If so, I suppose I could amend my answer a bit.
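In Cypher, a minimal sketch of that layout might look like this (all labels, relationship types, and property names here are just illustrative):
CREATE (u:User {name: 'alice'})-[:HAS]->(w:Workout {started: 1433160000000})
CREATE (w)-[:POSITIONED_AT {at: 1433160000000}]->(:Coord {lat: 59.33, lon: 18.06})
CREATE (w)-[:POSITIONED_AT {at: 1433160005000}]->(:Coord {lat: 59.34, lon: 18.07})
Retrieving a whole workout in order is then a single shallow traversal:
MATCH (u:User {name: 'alice'})-[:HAS]->(w:Workout)-[p:POSITIONED_AT]->(c:Coord)
RETURN c.lat, c.lon
ORDER BY p.at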

Order by relationship properties neo4j

Using Neo4j 1.9.3 -
I want to create a music program listing. On a given program there may be three pieces being performed. Each piece has a composer associated with it, and may appear on many different programs, so I can't put sequence numbers on the piece nodes.
I assume I can create the program, with relationships to each piece like so:
(program1)-[:PROGRAM_PIECE {program_seq: 1}]->(piece1)
(program1)-[:PROGRAM_PIECE {program_seq: 2}]->(piece2)
(program1)-[:PROGRAM_PIECE {program_seq: 3}]->(piece3)
My question is, how do I query the graph so that the pieces are in order of the relationship property program_seq? I'm fine using ORDER BY with node properties, but have not been successful with relationship properties (story of my life...).
If you like it, lock it down: that is, bind the relationship to a variable. Then you can use ORDER BY the same way you would with node properties. If you have retrieved your program as (program1), you can do something like:
MATCH (program1)-[r:PROGRAM_PIECE]->(piece1)
RETURN program1, r, piece1
ORDER BY r.program_seq
I have done the same thing recently to keep track of chess moves in a particular game. It works just the same as with node properties:
START program = node(*) // or better yet, use a real index query to find the program
MATCH (program)-[program_piece:PROGRAM_PIECE]->(piece)
RETURN program, piece
ORDER BY program_piece.program_seq

Tinkerpop Blueprints Vertex Query

I've been researching the Tinkerpop stack for quite a while, and I think I have a good idea of what it can do and which databases it works well with. I'm considering a couple of different databases right now but haven't settled on one, so I've decided to write my code purely against the interfaces and not take any particular implementation into account. All the databases I'm looking at implement TransactionalGraph and KeyIndexableGraph. I think that's good enough for what I need, but I have just one question.
I have different 'classes' of vertices. Using Blueprints, I believe that's best represented by having a field in each vertex containing the class name. Doing that, I can call graph.getVertices("classname", "User") and it will give me all of the user vertices. And since the getVertices function specifies that an implementation should make use of indexes, I'm guaranteed a fast lookup (if I index that field).
But let's say that I wanted to retrieve a vertex based on two properties. The vertex must have className=Users and username=admin. What's the best way to go about finding that single vertex? And is it possible to index over both of those properties, even though not all vertices will have a username field?
FYI - The databases I'm currently thinking of are OrientDB, Neo4j and Titan, but I haven't decided for sure yet. I'm also currently planning to use Gremlin if that helps at all.
Using a "class" or a "type" for vertices is a good way to segment them. Doing:
graph.createKeyIndex("classname",Vertex.class);
graph.getVertices("classname", "User");
is a pretty common pattern and should generally yield a fast lookup, though iterating an index of tens of millions of users might not be so great (if you intend to grow a particular classname to a very big size). I think that leads to the second part of your question, regarding the two-property lookup.
Taking your example at face value, the two-property lookup would be something like this (using Gremlin):
g.V('classname','User').has('username','admin')
So, you narrow the vertices to just "User" vertices with a key index and then filter those for "admin". But, I'd model this differently. It would be even less expensive to simply do:
graph.createKeyIndex("username",Vertex.class);
graph.getVertices("username", "admin");
or in Gremlin:
g.V('username','admin')
If you know the username you want, there's no better/faster way to model this. You really only need the classname if you want to iterate over all "User" vertices. If you just want to find one (or a set of vertices with that username) then key indexing on that property is the better way.
Even if I don't create a key index on it, I still include a type or classname property on all vertices. I find it helpful in global operations where I may or may not care about speed, but just need an answer.
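As a rough sketch against the Blueprints interfaces alone, assuming the "username" key index from above and treating "classname" as an optional extra check:
// Iterate the (indexed) username matches and, if usernames are only
// unique per class, filter on the classname property as well.
for (Vertex v : graph.getVertices("username", "admin")) {
    if ("User".equals(v.getProperty("classname"))) {
        // this is the admin User vertex
    }
}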
graph.getVertices() will iterate through all vertices and look for ones with that property if you do not have auto-indexing turned on in your graph implementation. If you already have data and cannot just turn on the auto-indexer, you should use index = indexableGraph.getIndex(...) and then index.get('classname', 'User').
It's possible to perform a query over multiple properties, but without specifics it's hard to say. Neo4j uses Lucene, which means that query() will take a Lucene query, such as className:Users AND username:admin, but I cannot speak for the others.
Any of those DBs is good for playing with; I personally found Neo4j to be the easiest, and as long as you understand its licensing structure, you shouldn't have any problems using it.

If I have two models and need a calculation on each attribute, should I calculate on the fly or create a 3rd model?

I have two models - Score & Weight.
Each of these models have about 5 attributes.
I need to be able to create a weighted_score for my User, which is basically the per-attribute product: Score.attribute_A * Weight.attribute_A, Score.attribute_B * Weight.attribute_B, etc.
Am I better off creating a third model, say Weighted_Score, where I store the product for each attribute in a row with the user_id and then query that table whenever I need a particular weighted score (e.g. my_user.weighted_score.attribute_A)? Or am I better off just doing the calculations on the fly every time?
I am asking from an efficiency stand-point.
Thanks.
I think the answer is very situation-dependent. Creating a third table may be a good idea if the calculation is very expensive, you don't want to bog down the rest of the system, and it's OK to respond to the user right away with a message saying the calculation will happen later. In that case, you can offload the processing to a background worker and create an instance of the third model asynchronously. Additionally, you should denormalize the table so that you can access it directly without having to look up the Weight/Score records.
Some other ideas:
Focus optimizations on the model that has many records. If Weight, for instance, will only ever have 100 records but Score could grow without bound, then load Weight into memory and focus all your effort on optimizing the Score queries.
Use memoization on the calculation methods (see the sketch after this list).
Use caching on the most expensive actions/methods. If you don't care too much about how frequently the values update, you can explicitly sweep the cache nightly or so.
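A minimal Ruby sketch of the memoized on-the-fly approach (the associations and attribute names are hypothetical):
# Sketch only: Score, Weight, and the attribute names are assumptions.
class User < ActiveRecord::Base
  has_one :score
  has_one :weight

  # Per-attribute products, computed once per object lifetime.
  def weighted_score
    @weighted_score ||= {
      attribute_a: score.attribute_a * weight.attribute_a,
      attribute_b: score.attribute_b * weight.attribute_b
    }
  end
end
With this, repeated calls to my_user.weighted_score within the same request recompute nothing.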
Unless there is a need to store the calculated score (let's say it changes and you want to preserve the changes to it), I don't see any benefit in adding the complexity of storing it in a separate table.
