Using X3Dom MovieTexture on a sphere doesn't show whole movie

I'm trying to use a movie as a texture on a sphere using X3Dom's MovieTexture. It is in equirectangular projection which would allow the user to look around (similar to Google StreetView).
The movie is mp4 or ogv and plays fine on e.g. a box shape, using the example code from the x3dom docs.
However, on the sphere only about 20 percent of the surface is covered by the movie, while the texture is stretched over the rest of the surface.
The relevant code looks like this:
<x3d width='500px' height='400px'>
  <scene>
    <shape>
      <appearance>
        <MovieTexture repeatS="false" repeatT="false" loop='true' url='bigBuckBunny.ogv'></MovieTexture>
      </appearance>
      <sphere></sphere>
    </shape>
  </scene>
</x3d>

It looks like this is supposed to work, but there is currently a bug in x3dom that is triggered when repeatS="false" is set on the texture.
The problem also occurs with a generic <texture> element that contains a <canvas> or <video> element.
The workaround that worked for me is to use a <canvas> with power-of-two dimensions, which avoids having to set repeatS="false" at all.
An alternative would be to scale the original video to power-of-two dimensions.
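A minimal sketch of that canvas workaround, assuming a hidden video element and a 2048x1024 (power-of-two) canvas; the element ids and the copy loop are illustrative, not part of the x3dom API:

<!-- hidden source video; its frames are copied onto the canvas below -->
<video id="srcVideo" src="bigBuckBunny.ogv" autoplay loop muted style="display:none"></video>
<x3d width='500px' height='400px'>
  <scene>
    <shape>
      <appearance>
        <texture>
          <canvas id="potCanvas" width="2048" height="1024"></canvas>
        </texture>
      </appearance>
      <sphere></sphere>
    </shape>
  </scene>
</x3d>
<script>
// stretch each video frame onto the power-of-two canvas
var video = document.getElementById('srcVideo');
var ctx = document.getElementById('potCanvas').getContext('2d');
(function draw() {
  ctx.drawImage(video, 0, 0, 2048, 1024);
  requestAnimationFrame(draw);
})();
</script>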

Related

AR.js is difficult for vertically placed image tracking - does AR even make sense?

We have a big mural on a big wall. It is requested that, when viewing this mural on your handheld device, like a smartphone's camera, image overlays should be placed at specific positions within that mural (that mural has left out parts and the respective cutouts should be displayed on top).
Now, I followed the ar.js tutorial on image tracking and it kind of works, but I have the feeling that it is almost solely designed for small, horizontal placements, like putting a car on your desk. The objects I managed to place on top of the mural are impossible to position, even when you add an orientation changer or rotate the objects.
This is what I have so far, tested with different sizes, rotations, positions:
<!DOCTYPE html>
<html>
  <head>
    <script src="https://cdn.jsdelivr.net/gh/aframevr/aframe@1c2407b26c61958baa93967b5412487cd94b290b/dist/aframe-master.min.js"></script>
    <script src="https://raw.githack.com/AR-js-org/AR.js/master/aframe/build/aframe-ar-nft.js"></script>
    <title></title>
  </head>
  <body style="margin: 0px; overflow: hidden;">
    <a-scene
      vr-mode-ui="enabled: false;"
      renderer="logarithmicDepthBuffer: true;"
      embedded
      arjs="trackingMethod: best; sourceType: webcam; debugUIEnabled: false;"
    >
      <a-nft
        type="nft"
        url="url"
        smooth="true"
        smoothCount="10"
        smoothTolerance=".01"
        smoothThreshold="5"
        size="1,2"
      >
        <a-plane color="#FFF" height="10" width="10" rotation="45 45 45" position="0 0 0"></a-plane>
      </a-nft>
      <a-entity camera></a-entity>
    </a-scene>
  </body>
</html>
It would be interesting to know how the sizing and the widths and heights really work altogether (for instance, the documentation says size is the NFT size in meters, but is that really important? And what about the children then?)
So I wondered, do I even need AR? Actually, it would be enough to detect image A in the mural (i.e. in the camera stream) and place another image B on top of it (or replace it), respecting the perspective.
The below is based on my experience.
The idea of creating an AR environment is to mimic the real-world surroundings as well as you can. It is never perfect because of the approximations, but there are ways to help the algorithms. One of them is the size of the marker. When using something like a camera that captures 2D images of the real world, extracting the X and Y coordinates is "simple", but the depth must be deduced from the camera movement and the relative change of the object's position in the 2D image. The marker size is a hint of how far away that particular object should be, so I would say that the size of the marker is indeed important - if you decide to specify it.
Take a look at the example below:
This is a great simplification, but try to imagine that these two images are potential candidates for the marker position. With a specified size - let's say you set it smaller than the real object - the camera would settle on the closer one.
Solution?
As far as I know, you don't need to specify the size of the marker - that way everything is left for the AR app to calculate.
But you can also take measurements and enter the correct size for better tracking.
Also, just a side note - please correct me if I'm wrong. Usually in A-Frame, values inside attributes are separated with whitespace, not commas. That would mean the size should be size="1 2" and not size="1,2". But don't take my word for it; this would need to be verified.
What about the children?
The a-nft entity is placed where the marker was detected. It behaves like every other element, so its children inherit its placement as their local space. That means every transformation done in the local space is applied on top of the parent's transformation. For example, in A-Frame, position="X Y Z" is applied in local space, as in the sketch below.
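For illustration, a hypothetical child placed relative to the detected marker (the url and all values are made up):

<a-nft type="nft" url="url" size="1 2">
  <!-- position/rotation are relative to the marker, not to the scene -->
  <a-plane position="0 0 0.5" rotation="0 0 0" width="1" height="1" color="#FFF"></a-plane>
</a-nft>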
Regarding overlapping the images
If you are working with a rectangular image that you want to project onto a rectangular wall, then I would say your idea is good enough. I think the most straightforward way would be to detect the 4 corners of the wall and warp the image so that the corners fit (a four-corner image warp); a sketch of that step follows below. That would cover the perspective transformation if you only use rectangular elements. But you still have to detect the mural somehow.
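A minimal sketch of that four-corner warp, assuming OpenCV.js is loaded and the corners of image A have already been detected; the element ids and coordinates are made up:

// warp overlay image B so its corners land on the detected corners of image A
const src = cv.imread('imageB'); // <img id="imageB"> holding the overlay
const srcQuad = cv.matFromArray(4, 1, cv.CV_32FC2,
  [0, 0, src.cols, 0, src.cols, src.rows, 0, src.rows]);
// corners of image A as detected in the camera frame (placeholder values)
const dstQuad = cv.matFromArray(4, 1, cv.CV_32FC2,
  [120, 80, 520, 95, 540, 400, 100, 380]);
const H = cv.getPerspectiveTransform(srcQuad, dstQuad);
const warped = new cv.Mat();
cv.warpPerspective(src, warped, H, new cv.Size(640, 480));
cv.imshow('outputCanvas', warped); // <canvas id="outputCanvas">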
But you may also want to think ahead: if one day you would like to enhance the experience and add some depth or 3D to the scene, then you would need AR.

Vector tiles buffer

I have an issue setting up an OpenLayers map with vector tiles served from GeoServer. The lines get screwed up along the edges of the tiles. It looks like the lines are first clipped and then styled instead of the opposite. This makes wide lines look ugly.
Changing the renderBuffer in the OL client doesn't make any difference.
I have similar issues with labels, and perhaps the solution is pretty much the same in both cases.
EDIT: The GeoJSON output in QGIS shows that there is a buffer around the tiles:
geojson output in QGIS
EDIT 2: I have been doing some more research on the phenomenon, and I think both GeoServer and OpenLayers are to blame for the artifacts. As seen below, GeoServer does render with a buffer (the pink polygons (1 and 2) contain the black-bordered tile extent) but does not include features that lie outside of the tile but inside of the buffer, like the green line (3) in the circle.
That makes the tile render with a notch, as in the left circle below.
However, even when a line in a tile runs close to the tile border and is styled with quite a thick stroke, OpenLayers won't render enough of the line outside the tile to style it without a notch, like in the right circle below. I have more obvious examples of that behaviour. This could probably be fixed easily by setting a higher buffer/tolerance value for the tile rendering.
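For reference, a minimal sketch of the OpenLayers side with an increased renderBuffer; the GeoServer URL and layer name are placeholders:

import VectorTileLayer from 'ol/layer/VectorTile';
import VectorTileSource from 'ol/source/VectorTile';
import MVT from 'ol/format/MVT';
import {Stroke, Style} from 'ol/style';

// renderBuffer is the pixel distance outside the tile extent in which
// OpenLayers still renders features (default 100); thick strokes near
// tile edges need a larger value
const lines = new VectorTileLayer({
  renderBuffer: 256,
  source: new VectorTileSource({
    format: new MVT(),
    url: 'https://example.com/geoserver/gwc/service/tms/1.0.0/' +
         'workspace:layer@EPSG:900913@pbf/{z}/{x}/{-y}.pbf', // placeholder
  }),
  style: new Style({stroke: new Stroke({color: '#3399CC', width: 8})}),
});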

Three.js plane geometry border around texture

I am building a 3D image viewer which has Three.JS plane geometries as placeholders with the images as their textures.
Now I want to add a black border around the image. The only way I have found so far is to add a new black plane geometry behind the image to be displayed, but that would require wholesale changes to my framework, which I want to avoid.
WebGL's texture loading function gl.texImage2D has a border parameter, but I couldn't find it exposed anywhere through Three.js, and I doubt it even works the way I think it does (in WebGL that parameter must be 0 anyway).
Is there an easier way to add borders around textures?
You can use a temporary regular 2D canvas to render your image and apply any kind of editing/effects there, like painting borders and such. Then use that canvas image as a texture. It might be a bit of work, but you will gain a lot of flexibility in styling your borders and other stuff.
I'm not near my dev machine and won't be for a couple of days, so I can't look up an example of my own. This issue contains some code to get you started: https://github.com/mrdoob/three.js/issues/868
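A minimal sketch of that approach, assuming current three.js (THREE.CanvasTexture) and an existing plane mesh; the image path and the 'plane' variable are placeholders:

// draw the image onto a slightly larger canvas, so the extra margin
// becomes a black border, then use the canvas as the texture
const image = new Image();
image.src = 'photo.jpg'; // placeholder path
image.onload = () => {
  const border = 16; // border width in pixels
  const canvas = document.createElement('canvas');
  canvas.width = image.width + 2 * border;
  canvas.height = image.height + 2 * border;
  const ctx = canvas.getContext('2d');
  ctx.fillStyle = '#000';
  ctx.fillRect(0, 0, canvas.width, canvas.height); // black background
  ctx.drawImage(image, border, border);            // image inset by the border
  plane.material.map = new THREE.CanvasTexture(canvas);
  plane.material.needsUpdate = true;
};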

Animated characters on an overlay of camera capture

I was wondering how the characters in this app are animated on screen. Is it possible to have a video with a transparent background to put as the overlay of a camera capture? Is this just a set of UIImages animated together? These characters seem more animated than simple GIFs.
That is most likely an OpenGL animation you are seeing overlaid on the camera display. See one of the often-cited answers by Brad Larson on how to do that - it includes a linked example project (that dude rocks).
To achieve that effect, you take the input of the camera, put it on a planar object as a texture, and render your stuff (highly animated characters or even naked, dancing robot women) on top of it - presto.
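The answer concerns native OpenGL ES, but the same structure in the browser with three.js looks roughly like this (a sketch of the idea, not the app's actual code):

// camera feed as a texture on a background plane; animated content
// is simply rendered in front of it
const video = document.createElement('video');
navigator.mediaDevices.getUserMedia({ video: true }).then((stream) => {
  video.srcObject = stream;
  video.play();
});
const backdrop = new THREE.Mesh(
  new THREE.PlaneGeometry(16, 9),
  new THREE.MeshBasicMaterial({ map: new THREE.VideoTexture(video) })
);
backdrop.position.z = -10; // push the camera feed to the back
scene.add(backdrop);
// ...render the animated characters in front of the backdrop...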

Three.js custom textured meshes

Is it possible to add a textured material to an object with a custom mesh in Three.js?
Whenever I try exporting an object from Blender to Three.js with a texture on it, the object disappears. Looking through the three.js examples, it seems like they've carefully avoided putting textures on anything other than the built-in geometries, and forcing a texture onto such a mesh causes it to disappear again.
For example, if I edit scene_test.js, a scene file called from webgl_scene_test.html, and apply the "textured_bg" texture to the "walt" head, it disappears.
It looks like the missing piece of the puzzle was that you have to apply a set of UV coordinates to the mesh of the object in question.
First, select your texture, and under "Mapping" make sure that the "Coordinates" dropdown is set to "UV".
Then, click on the "Object Data" button, and in the "UV Texture" list, click the plus icon. This seems to automatically add the UV data to the mesh.
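On the three.js side, once the exported mesh actually has UVs, applying a texture is the usual pattern. A sketch assuming a glTF export and a build where GLTFLoader is attached to the THREE namespace; the file names are placeholders:

const texture = new THREE.TextureLoader().load('textured_bg.jpg'); // placeholder
new THREE.GLTFLoader().load('walt.glb', (gltf) => {
  gltf.scene.traverse((child) => {
    if (child.isMesh) {
      // without UV coordinates on the geometry, the map cannot display correctly
      child.material = new THREE.MeshBasicMaterial({ map: texture });
    }
  });
  scene.add(gltf.scene);
});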
