AR.js image tracking is difficult for vertically placed images - does AR even make sense?

We have a large mural on a big wall. The requirement is that, when viewing this mural through a handheld device such as a smartphone camera, image overlays are placed at specific positions within the mural (the mural has parts left out, and the respective cutouts should be displayed on top of those spots).
Now, I followed the AR.js tutorial on image tracking and it kind of works, but I get the feeling that this is designed almost solely for horizontal, small-scale placements, like putting a car on your desk. The objects I managed to place on top of the mural are impossible to position, even when you add an orientation changer or rotate the objects.
This is what I have so far, tested with different sizes, rotations, positions:
<!DOCTYPE html>
<html>
  <head>
    <script src="https://cdn.jsdelivr.net/gh/aframevr/aframe@1c2407b26c61958baa93967b5412487cd94b290b/dist/aframe-master.min.js"></script>
    <script src="https://raw.githack.com/AR-js-org/AR.js/master/aframe/build/aframe-ar-nft.js"></script>
    <title></title>
  </head>
  <body style="margin: 0px; overflow: hidden;">
    <a-scene
      vr-mode-ui="enabled: false;"
      renderer="logarithmicDepthBuffer: true;"
      embedded
      arjs="trackingMethod: best; sourceType: webcam; debugUIEnabled: false;"
    >
      <a-nft
        type="nft"
        url="url"
        smooth="true"
        smoothCount="10"
        smoothTolerance=".01"
        smoothThreshold="5"
        size="1,2"
      >
        <a-plane color="#FFF" height="10" width="10" rotation="45 45 45" position="0 0 0"></a-plane>
      </a-nft>
      <a-entity camera></a-entity>
    </a-scene>
  </body>
</html>
It would be interesting to know how the sizing, widths, and heights really work together (for instance, the documentation says size is the NFT size in meters, but is that really important? And what about the children?).
So I wondered: do I even need AR? Actually, it would be enough to detect image A in the mural (i.e. in the camera stream) and place another image B on top of it (or replace it), respecting the perspective.

The below is based on my experience.
The idea of creating the AR environment is to mimic the real-world surroundings as best you can. It's never perfect because of the approximations, but there are ways to help the algorithms. One of them is the size of the marker. When using something like a camera that captures 2D images of the real world, extracting the X and Y coordinates is "simple", but the depth must be deduced from the camera movement and the relative change in the object's position in the 2D image. The marker size is a hint of how far away that particular object should be, so I would say that the size of the marker is indeed important - if you decide to specify it.
Take a look at the example below:
This is a great simplification, but try to imagine these two images are potential candidates for the marker position. With a specified size - let's say you set it smaller than the real object - the camera would settle on the closer one.
Solution?
As far as I know, you don't need to specify the size of the marker - that way everything is left for the AR app to calculate.
But you can also take measurements and enter the correct size for better tracking.
Also, just a side note, please correct me if I'm wrong: usually in A-Frame, attribute values are separated with whitespace, not commas. That would mean the size should be size="1 2" and not size="1,2". But don't take my word for it, this would need to be verified.
What about the children?
The a-nft entity is placed where the marker was detected. It behaves like every other element, so its children inherit its placement as their local space. That means every transformation done in the local space is applied on top of the parent's transformation. For example, in A-Frame, position="X Y Z" is applied in local space.
Regarding overlaying the images
If you are working with a rectangular image that you want to project onto a rectangular wall, then I would say your idea is good enough. I think the most straightforward way would be to detect the 4 corners of the wall and warp the image so the corners fit (a four-corner image warp). That would cover the perspective transformation if you only use rectangular elements. But you still have to detect the mural somehow.
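If you ever go the native-app route instead of the browser, the warp itself is a one-filter job. Here is a minimal sketch using Core Image on iOS; the function name, the detected corner points, and the overlay image are all hypothetical, and you would still need to detect the corners yourself:

import UIKit
import CoreImage

// Sketch: warp overlay image B so its corners land on four detected corner points.
// Note that Core Image uses a bottom-left origin, so convert your points accordingly.
func warpedOverlay(_ image: UIImage,
                   topLeft: CGPoint, topRight: CGPoint,
                   bottomRight: CGPoint, bottomLeft: CGPoint) -> UIImage? {
    guard let input = CIImage(image: image),
          let filter = CIFilter(name: "CIPerspectiveTransform") else { return nil }
    filter.setValue(input, forKey: kCIInputImageKey)
    filter.setValue(CIVector(cgPoint: topLeft), forKey: "inputTopLeft")
    filter.setValue(CIVector(cgPoint: topRight), forKey: "inputTopRight")
    filter.setValue(CIVector(cgPoint: bottomRight), forKey: "inputBottomRight")
    filter.setValue(CIVector(cgPoint: bottomLeft), forKey: "inputBottomLeft")
    guard let output = filter.outputImage else { return nil }
    let context = CIContext()
    guard let cg = context.createCGImage(output, from: output.extent) else { return nil }
    return UIImage(cgImage: cg)
}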
But you may also want to think ahead: if one day you would like to enhance the experience and add some depth or 3D to the scene, then you would need AR.

Related

How can I get boxes to snap into position when I drag and drop them into certain regions of the screen?

I am currently building a game in Swift, using Storyboards. The game revolves around generating income from fishing lobsters. Users have lobster pots, which they can place into either inshore or offshore regions of the water. With no prior experience, I have minimal knowledge of how to code in Swift.
My problem at the moment is understanding collision detection. There are three regions of the screen the users can drag their pots into. The first region is the starting position of the lobster pots, from which the player must drag the pots into either the inshore or offshore locations. Currently, I have managed to code the action of dragging and dropping the pots, so they can be placed at any point on the screen. What I hope to do is have the pots snap into position when they are dropped within the regions of either the inshore or offshore boxes. Furthermore, when the pots are dropped into place, I would like them to be organized in a row, equally spaced, dropping into a row below as the box fills up.
I think I should also mention that the background is an image view, taken as a screenshot of the view when the game is running. I did this to avoid layering, as some pots would sometimes move behind the boxes when dragging them.
Thanks in advance.
Here are some ideas:
You already have the code to move the pots, that's good. All you need now is some math.
Although your background is an image, you also need a data model to keep track of where your stuff is (or which area a pot belongs to). It is important to know whether a pot is in "My Pots", "Inshore", or "Offshore". This information has to be kept in some objects, like "myPots", "inshore", and so on.
So dragging doesn't only move the pots on screen, it also changes which area a pot belongs to.
Hint: an area (myPots, ...) can be represented with invisible rectangles. Invisible, because you already have the background. An invisible rectangle gives you the ability to resize the UI without complicated re-calculations.
I would divide the area like this:
The coordinates are just examples for better understanding.
Most game engines work with coordinate (0,0) at top left.
So when you drag and release a pot, you have to calculate the end point of the drag and compare it with your areas. No complicated collision detection is necessary, because you only test whether a point is in an area. But if you want collision detection, search for AABB collision detection (like here: https://studiofreya.com/3d-math-and-physics/simple-aabb-vs-aabb-collision-detection/).
In your case it would be enough to have the decision:
if draggedPot.endCoordinate.y > 100 {
    // inshore or offshore
    if draggedPot.endCoordinate.x > 300 {
        // offshore
    } else {
        // inshore
    }
} else {
    // still in myPots
}
I hope you get the idea :)
Arranging the pots in a row is also just some math. Loop over the pots in an area and place them one by one: each new x is the previous x plus the width of a pot plus some spacing. If that exceeds the width of the area, increase y by the height of a pot plus some spacing and start x at the beginning again.
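A minimal sketch of that loop, assuming the pots are UIViews and that the area frame, pot size, and spacing are your own values (all names here are made up):

import UIKit

// Sketch: arrange pots in rows inside an area, wrapping to the next row when a row is full.
func arrange(pots: [UIView], in areaFrame: CGRect, potSize: CGSize, spacing: CGFloat) {
    var x = areaFrame.minX + spacing
    var y = areaFrame.minY + spacing
    for pot in pots {
        // Wrap to the next row when the pot would not fit horizontally.
        if x + potSize.width > areaFrame.maxX {
            x = areaFrame.minX + spacing
            y += potSize.height + spacing
        }
        pot.frame = CGRect(origin: CGPoint(x: x, y: y), size: potSize)
        x += potSize.width + spacing
    }
}

You would call this once for each area after a pot has been dropped into it, so the whole row re-flows.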

SKTileMapNode: Create an infinitely sized tile map

Over in the Apple documentation it claims you can make an infinitely sized SKTileMap:
Generating procedural game-world maps resembling natural terrain. You can create game world of infinite size by using procedural noise as its underlying representation, and manage storage and memory efficiently by creating noise maps (and their visual representations) only for the area around a player’s current position. (See the SKTileMap class.)
I can generate realistic terrain with GKNoise like the Apple documentation claims you can.
I cannot, however, make one giant, infinitely sized SKTileMapNode; it would be too intensive to run on a device.
The Apple documentation says to make an SKTileMapNode only around the player's current position (like chunks in Minecraft).
How can I achieve this in Swift? My RPG needs to be infinitely sized to achieve everything I want to do with this game.
I need the "chunks" to be SKTileMapNodes because I need trees, stone, water, etc. to be added to the map so the player can interact with it.
The solution to your problem begins with making the GKNoise tileable.
You are probably using GKNoiseMap to generate them.
When you use the initializer:
GKNoiseMap(_ noise: GKNoise, size: vector_double2, origin: vector_double2,
           sampleCount: vector_int2, seamless: Bool)
Important: don't forget to set the "seamless" parameter to true.
That way you get a tileable map.
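For illustration, a call with made-up numbers might look like this (a Perlin noise source and a 128x128 sample count are just assumptions):

import GameplayKit

// Sketch: a seamless (tileable) noise map, sampled at 128x128 points.
let noise = GKNoise(GKPerlinNoiseSource())
let noiseMap = GKNoiseMap(noise,
                          size: vector_double2(1.0, 1.0),
                          origin: vector_double2(0.0, 0.0),
                          sampleCount: vector_int2(128, 128),
                          seamless: true)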
It looks better when you make them larger than the screen. Let's say that one (tileable) part of the map, which is your realistic terrain, is going to be 2048 by 2048 pixels.
One map may cover 128x128 tiles, for example. In this case each tile would be 16x16 pixels.
You make an SKTileMapNode with 128x128 tiles.
Now the SKTileMapNode needs a background image or tile definitions (in this case, that is the GKNoiseMap you generated).
Now you can just use the same GKNoiseMap and place it next to the first map, in any direction, as another tile map of 128x128 tiles.
Your map is now 256x128 tiles. When the user scrolls, they can't tell where one image ends and another begins, so the whole map can be as large as you want, by repeating the same exercise.
It works well when you generate a GKNoiseMap bigger than the screen, so that you have to scroll a couple of times before the next GKNoiseMap starts. That way it doesn't get visually repetitive.
The area around the player's position can be one map, and then when you scroll, the map can repeat itself, saving you from loading anything else besides the map you already generated. That answers the "manage storage and memory efficiently by creating noise maps" part of your question.
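To illustrate the chunk idea, here is a rough sketch. It assumes noiseMap is the seamless map from the snippet above and that your tile set has groups named "Water" and "Grass"; the names, the threshold, and the 16x16 tile size are all assumptions:

import SpriteKit
import GameplayKit

// Sketch: build one 128x128 "chunk" around the player from the noise map.
func makeChunk(from noiseMap: GKNoiseMap, tileSet: SKTileSet) -> SKTileMapNode {
    let columns = 128
    let rows = 128
    let chunk = SKTileMapNode(tileSet: tileSet,
                              columns: columns,
                              rows: rows,
                              tileSize: CGSize(width: 16, height: 16))
    let water = tileSet.tileGroups.first { $0.name == "Water" }
    let grass = tileSet.tileGroups.first { $0.name == "Grass" }
    for column in 0..<columns {
        for row in 0..<rows {
            // Sample the noise and pick a tile group; 0 is a made-up threshold.
            let value = noiseMap.value(at: vector_int2(Int32(column), Int32(row)))
            chunk.setTileGroup(value < 0 ? water : grass, forColumn: column, row: row)
        }
    }
    return chunk
}

You would add and remove such chunks as the player moves, keeping only the ones near the current position in the scene.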
You should also be careful with data storage. If every tile needs to store variables beyond what the GKNoiseMap gives you, infinite maps can be expensive.

SpriteKit sktilemapnode vertical line glitch

I am making a 2D platformer and I decided to use multiple SKTileMapNodes as my backgrounds. Even with 1 tile map, I get these vertical or horizontal lines that appear and disappear when I'm moving the player around the screen. See image below:
My tiles are 256x256 and I'm storing them in a tile set .sks file. I'm not exactly sure why I'm getting this or how to get rid of it, and it is quite annoying. Wondering if others experience this as well.
I'm considering not using the tile maps, but I would prefer to use them if I can.
Thanks for any help with this!!!
I had the same issue and was able to solve it by "extruding" the tiled image a couple of pixels. This provides a little cushion of pixels to use when the floating point issue occurs, instead of displaying nothing (hence the gap). This video sums it up pretty well:
Unity: extruding tile map images
If you're using TexturePacker to generate your sprite atlases, there is an option to add this automatically without having to do it to your tile images yourself.
Hope that helps!
Sort of like the "extruding" suggested by @cheaze, I simply make the tile size in the drawing code a tiny amount larger than the required tile size. This means the assets themselves do not have to be changed.
E.g. if your assets are sized 256 x 256 and all of your calculations are based on that, draw the textures as 256.02 x 256.02 pixels in size:
[SKSpriteNode spriteNodeWithTexture:texture size:CGSizeMake(256.02, 256.02)];
Adding only 0.02 pixels per side will overlap your tiles automatically and remove the line glitches, depending on your camera speed and frame rate.
If the problem is really bad, you can even go so far as to add half a pixel (+0.5) or an entire pixel to remove the glitches, and the user will not be able to see the difference (since a one-pixel difference on a retina screen is hard to distinguish).
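For reference, the same trick written in Swift would look roughly like this (the texture name is a placeholder; the 256.02 size is taken from the example above):

import SpriteKit

// Sketch: draw a 256x256 tile a hair larger so neighbouring tiles overlap slightly.
let texture = SKTexture(imageNamed: "tile")   // placeholder asset name
let tile = SKSpriteNode(texture: texture, size: CGSize(width: 256.02, height: 256.02))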

Interact with complex figure in iOS

I need to be able to interact with a representation of a cylinder that has many different parts in it. When the user taps on one of the small rectangles, I need to display a popover related to that specific piece (form).
The next image demonstrates a realistic 3D approach. But, I repeat, I just need to solve the problem; 3D is NOT required (it would be really cool though). A representation that fulfills the functional needs will suffice.
The info about the parts needed to make the drawing comes from an API (size, position, etc.).
I don't really need it to be realistic. The simplest approximation would be to show the cylinder in a 2D representation, like a rectangle made out of small interactable rectangles.
So, as I mentioned, I think there are (as I see it) two opposite approaches: realistic or simplified.
Is there a way to achieve a nice solution in the middle? What libraries, components, or frameworks should I look into?
My research has led me to SceneKit, but I still don't know if I will be able to interact with it. Interaction is a very important part, as I need to display a popover when the user taps on any small rectangle on the cylinder.
Thanks
You don't need any special frameworks to achieve an interaction like this. This effect can be achieved with standard UIKit and UIView and a little trigonometry. You can actually draw exactly your example image using 2D math and drawing. My answer is not an exact formula, but involves thinking about how the shapes are defined and breaking the problem down into manageable steps.
A cylinder can be defined by two offset circles representing the end pieces, connected at their radii. I will use an orthographic projection, meaning the cylinder doesn't appear smaller as the depth extends into the background (but you could adapt this to perspective if needed). You could draw this with Core Graphics in a UIView's drawRect.
A square slice represents an angular piece of the circle, offset by an amount smaller than the length of the cylinder, but in the same direction, as in the following diagram (sorry for the imprecise drawing).
This square slice you are interested in is the area outlined in solid red, outside the radius of the first circle, and inside the radius of the imaginary second circle (which is just offset from the first circle by whatever length you want the slice).
To draw this area you simply need to draw a path of the outline of each arc and connect the endpoints.
To check if a touch is inside one of these square slices:
Check if the touch point's angle from the circle's origin is between the two edge angles of the slice.
Check if the touch point is outside the radius of the inside circle.
Check if the touch point is inside the radius of the outside circle. (Note what this means if the circles are more than a radius apart.)
To find a point at which to display the popover, you could average the end points of the slice, or find the middle angle between the two edges and offset by half the distance.
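A minimal sketch of those three checks in Swift: it treats the slices as segments of a single ring around a center point and ignores the second circle's offset for simplicity; all names and parameters are hypothetical.

import UIKit

// Sketch: return the index of the tapped slice, or nil if the touch is outside the ring.
func sliceIndex(for touch: CGPoint, center: CGPoint,
                innerRadius: CGFloat, outerRadius: CGFloat, sliceCount: Int) -> Int? {
    let dx = touch.x - center.x
    let dy = touch.y - center.y
    let distance = hypot(dx, dy)
    // Checks 2 and 3: outside the inner circle, inside the outer circle.
    guard distance >= innerRadius, distance <= outerRadius else { return nil }
    // Check 1: which angular segment the touch falls into.
    var angle = atan2(dy, dx)                 // -pi ... pi
    if angle < 0 { angle += 2 * .pi }         // 0 ... 2*pi
    let sliceAngle = 2 * CGFloat.pi / CGFloat(sliceCount)
    return Int(angle / sliceAngle)
}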
Theoretically, doing this in SceneKit, with either SpriteKit or UIKit popovers, is ideal.
However, SceneKit (and SpriteKit) seem to be in a state of flux, wherein nobody from Apple is communicating with users about the raft of issues folks are currently having with both. Going from the relatively stable and performant SpriteKit in iOS 8.4, a lot of performance seems to have been lost in iOS 9. SceneKit simply doesn't seem finished, and the documentation and community are both nearly non-existent as a result.
That being said... the theory is this:
Material IDs are what's used in traditional 3D apps to define areas of an object that have different materials. Somehow these Material IDs are called "elements" in SceneKit. I haven't been able to find much more about this.
It should be possible to detect the "element" that's underneath a touch on an object, and respond accordingly. You should even be able to change the state/nature of the material on that element to indicate it's the currently selected one.
If you want a smooth, well-rounded cylinder as per your example, start with a cylinder made of only enough segments to describe/define the material IDs you need for your "rectangular" sections to be touched.
Later you can add a smoothing operation to the cylinder to make it round, and all the extra smoothing geometry in each quadrant of a unique material ID should remain responsive, regardless of how you add this extra detail to smooth the presentation of the cylinder.
Idea for the "Simplified" version:
If this representation is okay for you, you can use a UICollectionView.
Each cell can have a defined size thanks to collectionView:layout:sizeForItemAtIndexPath:.
Each cell of the collection could then be a small rectangle representing a touchable part of the cylinder.
And you can use
collectionView:(UICollectionView *)collectionView didSelectItemAtIndexPath:(NSIndexPath *)indexPath
to get the touch.
This will help you to display the popover at the right place:
CGRect rect = [collectionView layoutAttributesForItemAtIndexPath:indexPath].frame;
Finally, you can choose the appropriate popover (if the app has to work on iPhone) here:
https://www.cocoacontrols.com/search?q=popover
Not perfect, but I think this is efficient!
Yes, SceneKit.
When the user performs a touch event, you know the 2D coordinate on screen, so your only decision is whether to pop over a view or not, even if no 3D model existed.
First, we can logically split the requirement into two pieces: determining the touched segment, and showing the right "color" on each segment.
I think the purpose of the 3D model is to determine which piece of data to show, if I understand you correctly. In that case, SCNView's hit test method will do most of the work for you. What you should do is perform a hit test, take the hit node and the hit's local 3D coordinate on that node; you can then calculate which segment was hit by this touch and make the decision.
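A rough sketch of that flow, assuming scnView is your SCNView, the cylinder's long axis is Y, and segmentCount is the number of rectangles wrapping around it (all of these names and numbers are assumptions):

import SceneKit
import UIKit

class CylinderViewController: UIViewController {
    @IBOutlet var scnView: SCNView!        // hypothetical outlet to the SceneKit view
    let segmentCount = 12                   // hypothetical number of rectangles around the cylinder

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let touch = touches.first else { return }
        let point = touch.location(in: scnView)
        guard let hit = scnView.hitTest(point, options: nil).first else { return }
        // Local coordinates of the hit, in the tapped node's own space.
        let local = hit.localCoordinates
        // The angle around the cylinder's Y axis picks the vertical strip,
        // local.y picks the row; together they identify the tapped rectangle.
        let angle = atan2(local.z, local.x) + .pi                        // 0 ... 2*pi
        let segment = Int(angle / (2 * .pi / Float(segmentCount))) % segmentCount
        print("tapped segment \(segment) at height \(local.y)")         // present your popover here
    }
}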
Now, how to draw the surface of the cylinder would be the only question left, right? There are various ways to do it: for example, simply paint each image you need programmatically and attach it to the cylinder's material, or have your image files on disk and use them as the material for the cylinder ...
I think the problem would be basically solved.

Why do gaps between tiles appear in an orthogonal tilemap cocos2d game when running on iPhone?

I'm trying to make a tilemap-based game using cocos2d 2.1 and Tiled 0.9.1. The game runs perfectly on the simulator, but I have gaps (artifact lines) between the tiles when running on the device.
Please see the screenshot.
The diff is the difference (made in Photoshop) between the original tile (taken straight from the PNG of the tileset) and the tile as rendered by cocos2d. As you can see, in the simulator they are 100% identical. However, on the device it seems that cocos2d shrinks the tile texture vertically by just a little bit. The 1-pixel stripe is actually the texture above the troublesome tile in the tileset.
Any idea what caused this and how to fix it?
While following this answer, in my case enabling CC_FIX_ARTIFACTS_BY_STRECHING_TEXEL was not enough.
I also added the following code to the AppDelegate::applicationDidFinishLaunching() function and rounded the values passed to the setPosition(x, y) function to the nearest int.
Director::getInstance()->setProjection(Director::Projection::_2D);
I use cocos2d-x 3.4.
I'm not certain why this happens on devices only, but you should read ccConfig.h for the parameter CC_FIX_ARTIFACTS_BY_STRECHING_TEXEL. It is in itself a bad kludge, but it gives you a hint as to where to look.
Basically, you should make certain that all your positions are on an exact pixel boundary, i.e. on non-retina devices cast them to int, and on retina devices round to the nearest exact multiple of 0.5. The best way to ensure that is to make all your texture widths and heights even numbers; the onus is on the artist for anything that will not move. If you move things and the final position is calculated (for example in a ccTouchesMoved/ccTouchesEnded handler), make certain you do this rounding there. Beware of batch nodes: the node itself and all its children should be on a pixel boundary.
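The rounding itself is tiny; expressed in Swift here for brevity, as a sketch (contentScale would be 1 on non-retina and 2 on retina devices):

import CoreGraphics

// Sketch: snap a coordinate to the nearest pixel boundary for the given content scale.
func snappedToPixel(_ value: CGFloat, contentScale: CGFloat) -> CGFloat {
    return (value * contentScale).rounded() / contentScale
}

// e.g. snappedToPixel(10.37, contentScale: 2) == 10.5, snappedToPixel(10.37, contentScale: 1) == 10.0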
