Model is not on top of marker (AR.js)

I'm just starting out with AR.js myself, and I've run into an issue I'm not sure how to solve. My glTF model won't show unless it's at 5x scale, and even then only part of it appears, as if I'm looking at the top-left corner of the model. My assumption is that the position is wrong, but I'm not sure how to approach this.
I already saw the solution in the https://github.com/jeromeetienne/AR.js/issues/299 thread, but it didn't work.

You didn't share your code, so it's hard to know what the problem is. From what you're describing, I would try scaling your object down rather than up. If you only see small parts of it at 5x scale, my guess is that you were inside the object this entire time. Try scaling it down to ~0.1 and see if that works. Also, make sure your model is positioned at 0 0 0 (or just don't specify a position at all, as this is the default). Another thing you could try is our platform echoAR: you can upload your models and easily get an AR.js experience. Just follow the docs.
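For reference, a minimal marker scene with the model scaled down might look something like this (the model path here is a placeholder for your own file):

    <a-scene embedded arjs>
      <a-marker preset="hiro">
        <!-- start small; omit position so it defaults to 0 0 0 -->
        <a-entity gltf-model="url(models/my-model.gltf)" scale="0.1 0.1 0.1"></a-entity>
      </a-marker>
      <a-entity camera></a-entity>
    </a-scene>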

Related

Creating text / number objects in ARKit

I want to create some objects (boxes, cylinders, pyramids, it doesn't really matter) that display text or a number on one side, or on all of their sides. Short of making individual materials with the numbers drawn on them by hand, is there a simple way to achieve this?
I am using Swift 4 in Xcode.
First thing, please try not to be discouraged. Thank you for reaching out to the ARKit community on stack :-)
We are here to help each other.
(I do feel your pain… which is why I am trying to help.)
Here is an interesting Stack Overflow page that has helped me with placing items on the sides of objects (like boxes, cylinders, and pyramids).
I hope it can help you or others.
SCNBox different colour or texture on each face
Rickster pointed out some other possibilities.
We all learn by sharing what we know.
Smartdog
Depends on what you mean by "by hand". If you want the text displayed on the surface of the geometry, like a texture map, then texture-mapping it is the way to go. If you draw your text into a UIImage, you can set that as the material contents, which is a bit more dynamic than, say, creating a bunch of PNGs that each have a different number on them. Just make sure to choose an image size/resolution that looks good at the size your objects are displayed at.
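A minimal sketch of that approach (the sizes, font, and drawing offsets are arbitrary; Swift 4.0 spells NSAttributedString.Key as NSAttributedStringKey):

    import UIKit
    import SceneKit

    // Render the number into an image (UIGraphicsImageRenderer is iOS 10+)
    let side: CGFloat = 256
    let image = UIGraphicsImageRenderer(size: CGSize(width: side, height: side)).image { ctx in
        UIColor.white.setFill()
        ctx.fill(CGRect(x: 0, y: 0, width: side, height: side))
        let attributes: [NSAttributedString.Key: Any] = [
            .font: UIFont.boldSystemFont(ofSize: 120),
            .foregroundColor: UIColor.black
        ]
        ("7" as NSString).draw(at: CGPoint(x: 95, y: 60), withAttributes: attributes)
    }

    // Use it as the material contents. SCNBox also accepts an array of six
    // materials (front, right, back, left, top, bottom) for per-face images.
    let box = SCNBox(width: 0.1, height: 0.1, length: 0.1, chamferRadius: 0)
    box.firstMaterial?.diffuse.contents = image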
For anyone lost on the internet trying to find an answer to this: it's stupidly simple. Use SCNText and set it as a node's geometry. I just wasted 7 hours of my life trying to make number .dae models position themselves next to each other, because there is no mention of this feature anywhere.
I hope I saved you as much pain as I just endured discovering this.
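A minimal sketch of that answer (sceneView stands in for your own SCNView/ARSCNView; the scale factor is arbitrary):

    import UIKit
    import SceneKit

    let text = SCNText(string: "42", extrusionDepth: 1)
    text.font = UIFont.systemFont(ofSize: 10)
    let textNode = SCNNode(geometry: text)
    textNode.scale = SCNVector3(0.01, 0.01, 0.01) // SCNText comes out huge in scene units
    sceneView.scene.rootNode.addChildNode(textNode)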

Placing Virtual Object Behind the Real World Object

In ARKit for iOS, if you display a virtual item, it always appears in front of any real item. This means that even if I stand in front of the virtual item, I would still see the virtual item. How can I fix this scenario?
The bottle should be visible in front, but it is cut off.
You cannot achieve this with ARKit alone. It offers no off-the-shelf solution for occlusion, which is a hard problem.
Ideally you'd know the depth of each pixel projected onto the camera, and would use that to determine what is in front and what is behind. I would not try anything with the feature points ARKit exposes, since (1) their positions are inaccurate, and (2) there's no way to know which feature point in frame A corresponds to which feature point in frame B. The data is way too noisy to do anything good with.
You might be able to achieve something with third-party options that process the captured image and estimate depth, or distinct depth levels, in the scene, but I don't know of any good solution. There are SLAM techniques that yield dense depth maps, like DTAM (https://www.kudan.eu/kudan-news/different-types-visual-slam-systems/), but that would mean redoing most of what ARKit is doing. There might be other approaches that I'm not aware of. Apps like Snapchat do this in their own way, so it is possible!
So basically your question is about mapping the virtual item's coordinates onto the real-world coordinate system. In short, you want the virtual item to be blocked by the real item, so that you only see the virtual item once you move past the real item.
If so, you need to know the physical relationship of each object in the environment, and then you need to know exactly where you are in order to decide whether the virtual item is blocked.
It's not an intuitive way to fix this; however, it's the only way I can think of.
Cheers.
What you are trying to achieve is not easy.
You need to detect the parts of the real world that "should be visible" using some kind of image processing, or perhaps using the ARKit feature points that carry depth information. Based on that, you add an "invisible virtual object" that cuts off the drawing of anything behind it. This object represents your "real object" inside the "virtual world", so the background (camera feed) remains visible wherever the invisible virtual object is present.
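In SceneKit, a minimal sketch of that invisible occluder (the box size and position are placeholders; you would place it to match the measured real object, and sceneView stands in for your ARSCNView):

    import ARKit

    // A box standing in for the real object
    let occluder = SCNNode(geometry: SCNBox(width: 0.3, height: 0.6, length: 0.3, chamferRadius: 0))
    occluder.geometry?.firstMaterial?.colorBufferWriteMask = [] // write depth only, no colour (iOS 11+)
    occluder.renderingOrder = -1 // draw before the visible virtual content
    occluder.position = SCNVector3(0, -0.3, -1) // placeholder: where the real object is
    sceneView.scene.rootNode.addChildNode(occluder)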

Plot user location onto line map

OK, I've done some reading around the subject and have an idea of how I'd tackle my problem, but I want to find out if this is the most efficient way, or if I'm missing something simple.
I have a line diagram of a section of railway that I'd like to plot the users location onto (the user being someone on a train moving up/down the railway).
Now, I initially went down the route of geo-referencing, but quickly realised this probably wasn't the way to go, as my image is not a true reflection of the area, and I want the line diagram to be what the user sees.
OK, my thought process of how I will tackle it:
1. Since I know the physical area, extract the coordinates along the railway every x metres (my line diagram has a resolution of around 5 m) and stick them into an array. Can anyone suggest a tool to do this?!
2. Allocate my line diagram a start and an end, then match the image coordinates with the physical coordinates for the entire line.
3. Read in the user's position and update where to draw the marker based on the closest match in the array.
Does this sound doable, and would it give me decent results?
If you have more sophisticated answers, please do share.
It sounds reasonable in general. Since the user is supposed to be on a train, a simpler option may work: just keep track of the physical distance moved and use that as a percentage of the distance along the line. This is a lot simpler to manage, and it could be backed up with some coordinate checkpoints to make sure you don't accumulate drift error. I'd aim for the simpler implementation if you can.
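A sketch of that percentage idea, assuming the diagram's line has been traced once into an array of evenly spaced pixel points (loadTracedLine and the route length are made up for illustration):

    import CoreGraphics

    // Pixel points along the drawn railway line, sampled at an even real-world
    // spacing (e.g. every 5 m), in order from the start of the section.
    let linePixels: [CGPoint] = loadTracedLine() // hypothetical loader for your traced array
    let routeLengthMetres = 12_000.0 // illustrative length of this section

    // Map metres travelled from the start of the section to a pixel on the diagram.
    func markerPosition(distanceTravelled: Double) -> CGPoint {
        let fraction = max(0.0, min(1.0, distanceTravelled / routeLengthMetres))
        let index = Int(fraction * Double(linePixels.count - 1))
        return linePixels[index]
    }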

Change Size of Game in XNA

I don't know a better way to say this, but I'm not looking to change the size of the window. I'm creating a maze whose size can be changed via scripting. As such, the maze may become bigger than what the window shows (even in full screen). Is there a way to shrink/enlarge the actual game inside the window?
Well, what you're asking for is generally just a bad user experience, because the scale of the game will change whenever the maze changes size.
That being said, what you're asking is technically possible. The way to do it would be to pass a matrix as SpriteBatch.Begin's last parameter.
This matrix would look something like

    Matrix.CreateScale((float)windowWidth / gameContentTotalWidth, (float)windowHeight / gameContentTotalHeight, 1f);

This will scale your game content so it is always drawn within the screen. Note the float casts: if those fields are integers, the divisions would otherwise truncate. However, this means that if you make a large maze, you're likely to end up unable to navigate it, because you'd have trouble seeing where you're going.
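Putting it together inside Draw, the call might look roughly like this (spriteBatch, windowWidth, and the gameContent sizes stand in for your own fields; the Begin overload is the XNA 4.0 one):

    // Scale everything in this batch so the whole maze fits the window.
    Matrix worldScale = Matrix.CreateScale(
        (float)windowWidth / gameContentTotalWidth,
        (float)windowHeight / gameContentTotalHeight,
        1f);

    spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.AlphaBlend,
        null, null, null, null, worldScale);
    // ... your normal Draw calls, in unscaled game coordinates ...
    spriteBatch.End();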

Find corner of field

I am working on a project in C#/Emgu CV, but an answer in any language with OpenCV should be OK.
I have following image: http://i42.tinypic.com/2z89h5g.jpg
Or it might look like this: http://i43.tinypic.com/122iwsk.jpg
I am trying to do automatic calibration, and I would like to know how to find the corners of the field. They are marked by LEDs, but I would prefer to find them by the colour tags. If needed, I can replace all the tags with tags of the same colour. (Note that the light in the room changes, so the colours might be a bit different next time.)
Edge detection might be OK too, but I am afraid I would not find the corners correctly.
Please help.
Thank you.
Edit:
Thanks aardvarkk for the advice, but I think I need to give you a little more info.
I am already able to detect and identify the robots on the field and get their position and rotation. But for that I have to set the corners of the field manually first. So I was looking for an automatic way, but I was worried I would not be able to distinguish the colour tags from the background, because the light in the room changes quite often.
As for the camera angle: the point is that the camera can be at a different (reasonable) angle each time.
I would start by searching for the colours. The LEDs won't be much help to you, as they're not much brighter than anything else in the scene. I would look for the rectangular pieces of coloured tape and try segmenting the image based on colour. That may allow you to retrieve the corner tape pieces directly, without needing to know their exact colour in advance. After that, you can look for pairs of same-coloured blobs that are close to each other; where two pieces of tape match, that defines a corner. Knowing what kinds of camera angles you will have to handle is also very important: if you need this to work when viewing from the side, then this approach certainly won't work, but if it's almost top-down, it would probably be a good start. Nobody will be able to provide you with a start-to-finish solution, but this might be a good base to begin with.
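As a rough starting point for the colour segmentation, with Emgu CV's 2.x-style API (the HSV bounds are placeholders you would tune, or derive at runtime, since the lighting changes):

    using System;
    using System.Drawing;
    using Emgu.CV;
    using Emgu.CV.Structure;

    // Work in HSV, where hue is more robust to lighting changes than RGB
    Image<Bgr, byte> frame = new Image<Bgr, byte>("field.jpg");
    Image<Hsv, byte> hsv = frame.Convert<Hsv, byte>();

    // Keep only pixels near the tag colour; these bounds are illustrative
    Image<Gray, byte> mask = hsv.InRange(new Hsv(100, 80, 80), new Hsv(130, 255, 255));

    // Each sufficiently large blob centroid is a corner-tag candidate
    for (Contour<Point> c = mask.FindContours(); c != null; c = c.HNext)
    {
        if (c.Area > 50) // ignore specks
        {
            MCvMoments m = c.GetMoments();
            Console.WriteLine("tag candidate at ({0:0}, {1:0})",
                m.GravityCenter.x, m.GravityCenter.y);
        }
    }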
