I'm trying to use Image Tracking with my own image. When I run the app I can see that the NFT descriptors are loaded (console: [info] Loading of NFT data complete.), but when I scan the image nothing happens.
<!DOCTYPE html>
<html>
<script src="https://aframe.io/releases/1.0.4/aframe.min.js"></script>
<script src="https://raw.githack.com/AR-js-org/AR.js/master/aframe/build/aframe-ar-nft.js"></script>
<body style="margin : 0px; overflow: hidden;">
<a-scene
  vr-mode-ui="enabled: false;"
  renderer="logarithmicDepthBuffer: true;"
  embedded
  arjs>
  <a-nft
    type="nft"
    url="nft_img/sImg"
    smooth="true"
    smoothCount="10"
    smoothTolerance=".01"
    smoothThreshold="5">
    <a-box
      color="blue"
      scale="0.07 0.07 0.07"
      position="0 0 0">
    </a-box>
  <a-entity camera></a-entity>
</a-scene>
</body>
</html>
The NFT files are saved in the folder nft_img. The image that I'm using for the NFT files is a very simple one.
Tested image
I suggest restarting with the same code but using the descriptors (NFT marker files) of the pinball.jpg image instead (it is included both in AR.js and in jsartoolkit5). If the code is OK, that is, if you can track the pinball image and the box appears, it means that the image you want to track is not adequate. Looking at the image you provided, I can say almost certainly that it is not suitable. Even if you can create the NFT marker with NFT-Marker-Creator, that does not mean it will have enough descriptors for detection and tracking.
For more information on this subject, read the NFT-Marker-Creator wiki carefully, especially the Creating good markers section.
In my tests I also had to increase the scale, because the size of the content seems to depend on the size of the image to recognize (which is usually much larger than the box you are going to draw).
In any case, feature extraction for the test image you provided returns a confidence value of 0, which is obviously too low.
I suggest using a more complex image (with more features) and increasing the scale factor of the box.
PS: pay attention to properly close the a-nft tag after closing the a-box one.
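For reference, a minimal sketch of what the corrected markup could look like, assuming the pinball descriptor files (the .iset/.fset/.fset3 set) have been copied into a local folder named nft_pinball; that folder name and the box scale are only example values:

<!-- "url" points to the descriptor files without extension (pinball.iset/.fset/.fset3 in an assumed nft_pinball folder) -->
<a-nft
  type="nft"
  url="nft_pinball/pinball"
  smooth="true"
  smoothCount="10"
  smoothTolerance="0.01"
  smoothThreshold="5">
  <!-- scale increased compared to the original 0.07; 0.5 is only an example value -->
  <a-box color="blue" scale="0.5 0.5 0.5" position="0 0 0"></a-box>
</a-nft>
<!-- the a-nft tag is now closed before the camera entity -->
<a-entity camera></a-entity>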
The following is the code from the example posted on the A-Frame blog. The result shows the camera image only: no box and no model.
I'm not sure what to expect. The model is there and the JS libraries are there.
Should I see the box and the model somewhere in my camera image?
<!DOCTYPE html>
<html>
<head>
<!-- include A-Frame obviously -->
<script src="https://aframe.io/releases/0.6.0/aframe.min.js"></script>
<script src="js/ColladaLoader.js"></script>
<!-- include ar.js for A-Frame -->
<script src="https://jeromeetienne.github.io/AR.js/aframe/build/aframe-ar.js"></script>
</head>
<body style='margin : 0px; overflow: hidden;'>
  <a-scene embedded arjs='sourceType: webcam;'>
    <!-- create your content here. just a box for now -->
    <a-box position='0 0.5 0' material='opacity: 1 scale = "2 2 2" color="red";'>
    </a-box>
    <!-- define a camera which will move according to the marker position -->
    <a-entity camera></a-entity>
    <a-marker preset='hiro'>
    </a-marker>
    <a-collada-model
      id="titlexx1b"
      src="models/man_default.dae"
      position="1 1 1"
      scale="1 1 1">
    </a-collada-model>
  </a-scene>
</body>
</html>
You should add the box and the model as children of the marker you want to put them on, in this case the hiro marker. Note that it's usually not recommended to put more than one object on a single marker.
Another thing: I'm not sure that DAE (Collada) files are supported by A-Frame / AR.js. Consider using a supported format, for example glTF.
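As a rough sketch of that structure (assuming a glTF export of the model is available; the path models/man_default.gltf below is only a placeholder), the scene could look like this:

<a-scene embedded arjs="sourceType: webcam;">
  <!-- everything that should follow the marker goes inside it -->
  <a-marker preset="hiro">
    <a-box position="0 0.5 0" scale="2 2 2" material="opacity: 1; color: red;"></a-box>
    <!-- placeholder model path; if you prefer one object per marker, move this to a second marker -->
    <a-entity gltf-model="url(models/man_default.gltf)" position="1 1 1" scale="1 1 1"></a-entity>
  </a-marker>
  <!-- the camera stays a direct child of the scene -->
  <a-entity camera></a-entity>
</a-scene>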
I have started to look at A-Frame to create a web-based AR project, but I'm not sure what I'm doing. My goal is to spawn a 3D model in AR and make the model clickable (tap to open a hyperlink to a website). Currently my code looks like this:
<html>
<head>
<script src="https://aframe.io/releases/1.0.3/aframe.min.js"></script>
<script src="https://jeromeetienne.github.io/AR.js/aframe/build/aframe-ar.js"></script>
</head>
<body style='margin : 0px; overflow: hidden;'>
  <a-scene embedded arjs cursor="rayorigin: mouse" raycaster="objects: #engine">
    <!-- Grab models from here: https://github.com/KhronosGroup/glTF-Sample-Models -->
    <a-asset-item id="engine" src="https://cdn.rawgit.com/KhronosGroup/glTF-Sample-Models/master/2.0/2CylinderEngine/glTF/2CylinderEngine.gltf"></a-asset-item>
    <!-- add the model -->
    <a-entity gltf-model="#engine" position="0 0 0" scale="0.001 0.001 0.001" startEvents: "clicked"></a-entity>
    <a-marker-camera preset='hiro'></a-marker-camera>
  </a-scene>
  <script>
    var toggleEl = document.querySelector('#engine')
    toggleEl.addEventListener('click', function (evt){
      toggleEl.emit("clicked");
    });
  </script>
</body>
</html>
For now it is just a simple "point the camera at a marker, spawn a 3D model from a GitHub repo, and click/tap on the model to open a hyperlink to a website" project, but I am unable to get it to work.
What am I doing wrong? Thanks for any help you can give.
This isn't possible at present: there is no hit detection within WebXR yet, and regular user DOM events are disabled.
https://aframe.io/blog/webxr-ar-module/
I am trying to detect different markers. One is a pattern named and1painting.patt and the other is the preset 'hiro'.
When I show the hiro pattern, it is detected by the and1painting.patt marker. For example, in the following code it always shows the blue box rather than the red one when I show the hiro marker. Any thoughts on why? I also tried this with the sample1.patt that is already in the repo, but that didn't work either.
<!doctype HTML>
<html>
<script src="https://aframe.io/releases/0.6.1/aframe.min.js"></script>
<script src="https://cdn.rawgit.com/jeromeetienne/AR.js/1.5.0/aframe/build/aframe-ar.js"> </script>
<body style='margin : 0px; overflow: hidden;'>
  <a-scene embedded arjs='sourceType: webcam;'>
    <a-marker type='pattern' patternUrl='Data/and1painting.patt'>
      <a-box position='0 0.5 0' material='opacity: 0.5; side:double; color:blue;'>
      </a-box>
    </a-marker>
    <!-- handle marker with hiro preset -->
    <a-marker preset='hiro'>
      <a-box position='0 0.5 0' material='opacity: 0.5; side:double; color:red;'>
      </a-box>
    </a-marker>
    <a-entity camera></a-entity>
  </a-scene>
</body>
</html>
Unfortunately, AR.js is still seriously broken at the moment:
https://github.com/jeromeetienne/AR.js/pull/236
You can hack your way to a working solution if you follow the comments in that pull request.
Actually, AR.js is not that broken at all. There is a syntax error in the code above, and that is the reason it doesn't work. The correct syntax for referencing the pattern file is simply url= instead of patternUrl=.
Try this:
a-marker type='pattern' url='Data/and1painting.patt'
I know it works, because it took me forever to figure it out.
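Applied to the markup from the question, the first marker would then become something like this (assuming the AR.js 1.5.0 build loaded above; the rest of the scene stays the same):

<a-marker type='pattern' url='Data/and1painting.patt'>
  <a-box position='0 0.5 0' material='opacity: 0.5; side:double; color:blue;'>
  </a-box>
</a-marker>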
I read these two doc pages: summary and summary-card-with-large-image, but I don't really see the difference.
Example:
<meta name="twitter:card" content="summary" /> <!-- or summary_large_image -->
<meta name="twitter:title" content="Small Island Developing States Photo Submission" />
<meta name="twitter:image" content="https://farm6.staticflickr.com/5510/14338202952_93595258ff_z.jpg" />
What is the actual difference between the two in the end? The rendering looks identical:
You're quite right, and I'll make a note to update these documentation pages. A summary card usually has a small image to the left of the text in the timeline; this variance in the docs may be the result of using embedded Tweets. Both examples in the documentation show what I'd expect a summary card with large image to render as.
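To make the difference concrete, here is a sketch of the two variants side by side; only the twitter:card value changes (the title and image URL are taken from the question, everything else is the same page metadata):

<!-- summary: small thumbnail to the left of the text -->
<meta name="twitter:card" content="summary" />
<meta name="twitter:title" content="Small Island Developing States Photo Submission" />
<meta name="twitter:image" content="https://farm6.staticflickr.com/5510/14338202952_93595258ff_z.jpg" />

<!-- summary_large_image: large image above the text -->
<meta name="twitter:card" content="summary_large_image" />
<meta name="twitter:title" content="Small Island Developing States Photo Submission" />
<meta name="twitter:image" content="https://farm6.staticflickr.com/5510/14338202952_93595258ff_z.jpg" />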
As suggested by @AndyPiper, this is probably a documentation problem. Here is the result of content="summary", a screenshot from Twitter in the Chrome browser (desktop):
As you know, you can combine several CSS files (or JS files) into one. I was wondering if it's possible to combine several SVGs into one external file, so the server makes just one request.
Basically, SVG files are just XML text, so it's theoretically possible; however, there is a catch in how to render several of those images in different places.
I'm just wondering.
Check the answer to this question, which describes how to configure the filetypes the Asset Pipeline will manage: Using fonts with Rails asset pipeline (despite the title, it applies to more than fonts).
Thanks for the answer, but in the end I did it like this:
In my case I wanted to use my SVGs as background images of a div tag, so I didn't need to precompile the SVGs; I just put the encoded data directly into background-image: url('here').
So, for multiple SVG backgrounds:
width: 600px;
height: 400px;
background-image: url('data:image/svg+xml ...first svg '), url('data:image/svg+xml;base64,PD94bWwgdmVyc2lvbj0iMS4wIj8+Cjxzdmcgd2lkdGg9IjY0MCIgaGVpZ2h0PSI0ODAiIHhtbG5zPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyI+CiA8IS0tIENyZWF0ZWQgd2l0aCBTVkctZWRpdCAtIGh0dHA6Ly9zdmctZWRpdC5nb29nbGVjb2RlLmNvbS8gLS0+CgogPGc+CiAgPHRpdGxlPkxheWVyIDE8L3RpdGxlPgogIDxwb2x5bGluZSBpZD0ic3ZnXzMiIHBvaW50cz0iNTM4LjUsMzMzLjU2OTU2NDgxOTMzNTk0IDMyMi4yNSwyNzguMjg0NzgyNDA5NjY3OTcgMTA2LDIyMyAiIHN0cm9rZT0iIzAwMDAwMCIgc3Ryb2tlLXdpZHRoPSI1IiBmaWxsPSJub25lIi8+CiAgPGVsbGlwc2UgZmlsbD0iI0ZGMDAwMCIgc3Ryb2tlPSIjMDAwMDAwIiBzdHJva2Utd2lkdGg9IjUiIGN4PSIzNDMiIGN5PSIxNzAiIGlkPSJzdmdfMSIgcng9IjE0MCIgcnk9IjEwNSIvPgogPC9nPgo8L3N2Zz4=')
Check it here: http://codepen.io/equivalent/full/ymefJ
Raw SVG format:
<svg width="640" height="480" xmlns="http://www.w3.org/2000/svg">
<!-- Created with SVG-edit - http://svg-edit.googlecode.com/ -->
<g>
<title>Layer 1</title>
<polyline fill="none" stroke-width="5" stroke="#000000" points="538.5,333.56956481933594 322.25,278.28478240966797 106,223 " id="svg_3"/>
<ellipse transform="translate(-402 -120)" ry="105" rx="140" id="svg_1" cy="290" cx="745" stroke-width="5" stroke="#000000" fill="#FF0000"/>
</g>
</svg>
editor: http://svg-edit.googlecode.com/svn/trunk/editor/svg-editor.html
I believe the link in Tom Harrison's answer is on to something, and from what I saw it might work, but I didn't try it myself. So if you want to truly precompile SVGs with the asset pipeline, I encourage you to use that tactic.