How to assign two models to one marker in ARToolKit - augmented-reality

I want to make two models appear over one marker in ARToolKit. The key is they must come from two separate WRL files. Is this possible?

I know this is an old question, but maybe it is useful for someone else.
The answer is yes, you can.
The way you do it is to have an empty object that is the parent of the other two objects and make that empty object trackable.
The other two objects can each define an offset in any direction relative to the parent's position, so they behave as a scene of sorts, keeping their positions relative to each other.
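A minimal sketch of that parent/child idea in plain C# with System.Numerics, independent of any particular ARToolKit binding (Node, the offsets, and the poses here are made up for illustration):

using System.Collections.Generic;
using System.Numerics;

class Node
{
    // Offset of this node relative to its parent.
    public Matrix4x4 LocalTransform = Matrix4x4.Identity;
    public List<Node> Children = new List<Node>();
}

class Program
{
    static void Main()
    {
        // Empty, marker-tracked parent; the two models hang off it.
        var anchor = new Node();
        var modelA = new Node { LocalTransform = Matrix4x4.CreateTranslation(-0.5f, 0f, 0f) };
        var modelB = new Node { LocalTransform = Matrix4x4.CreateTranslation( 0.5f, 0f, 0f) };
        anchor.Children.Add(modelA);
        anchor.Children.Add(modelB);

        // Each frame the tracker writes the marker pose into the anchor only;
        // the children follow automatically, because their world transform is
        // their local offset composed with the parent's pose.
        Matrix4x4 markerPose = Matrix4x4.CreateTranslation(0f, 0f, -2f); // stand-in pose
        foreach (var child in anchor.Children)
        {
            Matrix4x4 world = child.LocalTransform * markerPose;
            // render this child's model (loaded from its own WRL file) with `world`
        }
    }
}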

Related

Removing sections from CGMutablePath

For this project, I'm using the addCurve method of CGMutablePath to draw a curve shape on a view.
What I'm not understanding at all is how (or whether it's even possible) to remove that exact same curve from the mutable path. Judging by the API, there are lots of methods for adding various shapes at different points, but no methods for removing them...
It is not possible directly. You can either hold on to the elements of the original path so you can construct a new one with just the elements you want, or you can use CGPath.applyWithBlock to enumerate the existing elements and build a new path from the ones you want to keep.
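A minimal sketch of that rebuild pattern in C# (PathElement here is a stand-in for illustration, not the CoreGraphics element type):

using System;
using System.Collections.Generic;
using System.Linq;

// Stand-in element: a segment kind plus its control points.
record PathElement(string Kind, (float X, float Y)[] Points);

class Program
{
    static void Main()
    {
        var original = new List<PathElement>
        {
            new PathElement("move",  new[] { (0f, 0f) }),
            new PathElement("curve", new[] { (1f, 2f), (3f, 4f), (5f, 0f) }),
            new PathElement("line",  new[] { (6f, 6f) }),
        };

        // "Removing" the curve really means building a new path that
        // contains every element except the one you want gone.
        var rebuilt = original.Where(e => e.Kind != "curve").ToList();

        Console.WriteLine(string.Join(", ", rebuilt.Select(e => e.Kind))); // move, line
    }
}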

Do I need a random geometry, material, or something else? - SceneKit, Swift

In my game, I add a new tempLeftBox (an SCNNode with an SCNBox geometry and an SCNMaterial) every 5 seconds or so along a straight path. Each tempLeftBox has its own geometry and its own material. For each tempLeftBox that is added, the color of the box should be random, and this random color should ONLY affect that ONE box that was just added, not all of the boxes that are going to be added or have been added already. How should I go about doing this? Swift, SceneKit
I assume you're instantiating instances of a tempLeftBox class...
Each instance of your tempLeftBox that you create is unique. It's a reference type, meaning you can have many references to it, but each instance is utterly unique. Each time you make a box, create a reference to it and set its materials to the qualities you want, and it will be unique, i.e. it won't affect your other instances.
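A minimal sketch of that point in plain C# (Box, Material, and Spawner are stand-ins for illustration, not SceneKit types):

using System;

class Material
{
    public (byte R, byte G, byte B) Color;
}

class Box
{
    // Every Box owns its own Material instance, so changing this
    // material's color cannot affect any other box.
    public Material Material = new Material();
}

class Spawner
{
    static readonly Random Rng = new Random();

    public static Box SpawnBox()
    {
        var box = new Box();                        // fresh instance every call
        box.Material.Color = ((byte)Rng.Next(256),  // fresh material with a
                              (byte)Rng.Next(256),  // random color that applies
                              (byte)Rng.Next(256)); // to this box only
        return box;
    }
}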

Corona SDK physics object anchor points

Can you tell me how to do this, and if not, help me solve this problem? Is it possible to set new anchor points outside of the object? I am trying to get an object to rotate around another object. They are physics objects, so I cannot use a transparency trick. Any ideas? Thanks!
You could put the two objects in a display group and set the group's anchor point to be at the center of your main object. The code at http://coronalabs.com/blog/2013/10/15/tutorial-anchor-points-in-graphics-2-0/ should be sufficient; see if that will do the trick for you. Note, though, that you will have to apply a negative rotation to the main object, because you don't want it turning too. If several objects are "in orbit" around the main object at different speeds, then this trick won't work.
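The transform trick underneath, sketched in C# with System.Numerics rather than Corona/Lua (the pivot and offsets are arbitrary numbers for illustration):

using System;
using System.Numerics;

class Program
{
    static void Main()
    {
        Vector2 pivot = new Vector2(100f, 100f);      // center of the main object
        Vector2 satellite = new Vector2(160f, 100f);  // starts 60px to its right

        for (float angle = 0f; angle < MathF.PI * 2f; angle += MathF.PI / 4f)
        {
            // Rotating the whole group about the pivot carries the satellite
            // around the main object...
            Matrix3x2 spin = Matrix3x2.CreateRotation(angle, pivot);
            Vector2 orbitPos = Vector2.Transform(satellite, spin);

            // ...while the main object is given -angle locally, cancelling the
            // group rotation so it does not appear to turn itself.
            float mainObjectLocalRotation = -angle;

            Console.WriteLine($"angle={angle:F2} satellite={orbitPos} mainLocal={mainObjectLocalRotation:F2}");
        }
    }
}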

OpenCV: Searching for pixels along single-pixel branches

I'm currently trying to find a neat way of storing separate "branches" in a binary image. This little animation explains it:
As I go along the branches I need to collect the pixel indices that make up a single-pixel-wide branch. When I hit a junction point, the walk should split up and store the new branches.
One way of going about it might be to create a 3x3 subregion, find out whether there are white pixels inside it, move it accordingly, and create a junction point if there are more than two. Always storing the previous subregion would make sure we don't move back into regions we have already scanned.
It's a bit tricky to figure out how I would go about it, though.
I basically need to reorder the pixels based on a "line/curve" hierarchy. Another part of the application will then redraw the figures; internally it works by creating lines between points, hence the need to have them ordered.
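For what it's worth, here is a rough sketch of that 3x3 neighborhood walk in C# over a plain bool grid (no OpenCV types; BranchTracer, Trace, and the junction handling are all made up for illustration, and assume `start` is an endpoint of the skeleton):

using System.Collections.Generic;

class BranchTracer
{
    // The eight neighbors of a pixel, i.e. the 3x3 region minus the center.
    static readonly (int dx, int dy)[] Neighbors =
        { (-1,-1), (0,-1), (1,-1), (-1,0), (1,0), (-1,1), (0,1), (1,1) };

    // Returns one ordered pixel list per branch.
    public static List<List<(int x, int y)>> Trace(bool[,] img, (int x, int y) start)
    {
        var branches = new List<List<(int x, int y)>>();
        var visited = new bool[img.GetLength(0), img.GetLength(1)];
        var pending = new Stack<(int x, int y)>();
        pending.Push(start);

        while (pending.Count > 0)
        {
            var p = pending.Pop();
            if (visited[p.x, p.y]) continue;

            var branch = new List<(int x, int y)>();
            while (true)
            {
                visited[p.x, p.y] = true;
                branch.Add(p);

                // Collect unvisited white neighbors of the current pixel.
                var next = new List<(int x, int y)>();
                foreach (var (dx, dy) in Neighbors)
                {
                    int nx = p.x + dx, ny = p.y + dy;
                    if (nx >= 0 && ny >= 0 &&
                        nx < img.GetLength(0) && ny < img.GetLength(1) &&
                        img[nx, ny] && !visited[nx, ny])
                        next.Add((nx, ny));
                }

                if (next.Count == 1)
                {
                    p = next[0];            // single continuation: keep walking
                }
                else
                {
                    foreach (var n in next) // junction (or dead end): close this
                        pending.Push(n);    // branch and queue the new ones
                    break;
                }
            }
            branches.Add(branch);
        }
        return branches;
    }
}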
I don't know whether you can apply it in your case, but you should take a look at cv::findContours.
It will give you an ordered vector of points for each contour.
http://docs.opencv.org/doc/tutorials/imgproc/shapedescriptors/find_contours/find_contours.html

XNA project - who is in charge of drawing?

I am just playing around with XNA, and I have several different models I need to draw in each frame.
At the moment, the Game object holds references to all my models and draws them one after the other, each with its own way of being drawn: one has two separate textures, another might be mirrored to the other side, etc.
I was wondering if it is acceptable to just add a
public void Draw(SpriteBatch spriteBatch)
method to all my models (inherited from BaseModel, of course) and have each class be in charge of drawing itself, or whether I should stick to letting the classes set their data according to events (KeyboardState) in the Update method and keep all graphics logic in the Game class.
Is there a preferred way to do this?
Generally, I have a base class that contains a BaseModel, texture data, rotation and scale data, etc. For each type of actor in the game, I create a derived class. The base class provides a Draw method that, by default, draws the model with the texture, rotation, and scale data given in the class. Derived classes can override it to draw the actor however they like.
Then, I have a DrawableGameComponent that acts as my scene graph. It contains a list of all active actor objects. In the component's Draw and Update methods, I iterate through the list of actors and call their Draw and Update methods.
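In code, that setup looks roughly like the following sketch (XNA types; Actor and SceneGraph are illustrative names, not part of the framework):

using System.Collections.Generic;
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

public abstract class Actor
{
    public Model Model;
    public Texture2D Texture;
    public Vector3 Position;
    public float Rotation;
    public float Scale;

    public virtual void Update(GameTime gameTime) { }

    // Default rendering using the data above; derived actors
    // override this to draw themselves differently.
    public virtual void Draw(GameTime gameTime)
    {
        // draw Model at Position with Texture, Rotation, and Scale
    }
}

public class SceneGraph : DrawableGameComponent
{
    private readonly List<Actor> actors = new List<Actor>();

    public SceneGraph(Game game) : base(game) { }

    public void Add(Actor actor) { actors.Add(actor); }

    public override void Update(GameTime gameTime)
    {
        foreach (var actor in actors)
            actor.Update(gameTime);
        base.Update(gameTime);
    }

    public override void Draw(GameTime gameTime)
    {
        foreach (var actor in actors)
            actor.Draw(gameTime);
        base.Draw(gameTime);
    }
}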
That's one way of approaching it ... for the sake of completeness in this post, I'll highlight the other approach. Basically, the opposing view states that no one entity should need (or have) custom knowledge of how to render itself. An entity is merely a collection of state ... and the renderer can simply look at that state, and draw it in the correct way.
An example: say you have a number of ships. Some go fast, some shoot rockets, and some have a satellite orbiting around them that also shoots. Your Entity class could have the following properties:
Model VisualRepresentation
Matrix Position
Entity[] AttachedEntities
Your renderer can then iterate over your generic List<Entity> and:
Draw the visual representation (i.e. the Model) of the entity using its position.
Loop over the AttachedEntities and draw them (recursively).
It's obviously a simplified example, but this way the drawing logic is completely contained in the rendering code, which only needs to concern itself with as little information as possible, while the ship class can focus on the game logic itself (i.e. how fast do I fly, what weapon am I using, how much energy do I have in my shields, etc.).
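A stripped-down sketch of that entity/renderer split, using the properties listed above (XNA types; Renderer and DrawAll are illustrative, and the actual draw call is elided):

using System.Collections.Generic;
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

public class Entity
{
    public Model VisualRepresentation;
    public Matrix Position;
    public Entity[] AttachedEntities;
}

public class Renderer
{
    public void DrawAll(List<Entity> entities)
    {
        foreach (var entity in entities)
            DrawEntity(entity, Matrix.Identity);
    }

    private void DrawEntity(Entity entity, Matrix parentTransform)
    {
        // The entity only carries state; all drawing knowledge lives here.
        Matrix world = entity.Position * parentTransform;
        // draw entity.VisualRepresentation with `world` here

        if (entity.AttachedEntities != null)
            foreach (var child in entity.AttachedEntities)
                DrawEntity(child, world); // attached entities follow their parent
    }
}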
As for which one is preferred, the answer really lies in your project's requirements and what you feel comfortable with. Don't try to make a game engine before making a game ... just do whatever it takes to make your game, and then maybe you can extract the components that worked after you ship the game :-P
