Is it possible to insert an animated SVG element into an A-Frame canvas, or to render it as a shader texture onto a primitive mesh?
I've tried using the aframe-htmlembed-component, but it renders the SVG as a still image.
I've also tried the aframe-gif-shader, which worked just fine; I just need the same thing for SVG animations.
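One workaround worth sketching: rasterize the SVG into a 2D canvas every frame from inside a custom component and use that canvas as the mesh's texture. This is an untested sketch; the component name svg-texture and the 512-pixel canvas are made up here, and beware that drawImage() rasterizes a static snapshot of the SVG, so SMIL/CSS animations inside the file may not advance in every browser.

AFRAME.registerComponent('svg-texture', {
  schema: { src: { type: 'string' } },
  init: function () {
    this.canvas = document.createElement('canvas');
    this.canvas.width = this.canvas.height = 512;
    this.ctx = this.canvas.getContext('2d');
    this.image = new Image();
    this.image.src = this.data.src;
    this.texture = new THREE.CanvasTexture(this.canvas);
    // Attach once the entity's mesh exists (it may not exist during init).
    this.el.addEventListener('object3dset', () => {
      const mesh = this.el.getObject3D('mesh');
      mesh.material.map = this.texture;
      mesh.material.needsUpdate = true;
    });
  },
  tick: function () {
    if (!this.image.complete) { return; }
    // Redraw the SVG and flag the texture for re-upload every frame.
    this.ctx.clearRect(0, 0, 512, 512);
    this.ctx.drawImage(this.image, 0, 0, 512, 512);
    this.texture.needsUpdate = true;
  }
});

It would be used as e.g. <a-plane svg-texture="src: my-animation.svg"></a-plane>.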
I'm trying to render to a texture using a library called regl. I manage to render an effect using two render targets, and I can see the result in one of them.
Capturing the frame after I've finished rendering to the target looks like this, and it represents a screen blit (a full-screen quad drawn with this texture). This is how I would like it to work.
Once I pass this texture to some other regl commands, in some future frame, the texture attachment seems to get nuked. I'm rendering the same object with the same resource, but the data is gone. I have tried detaching the texture from the FBO, but it doesn't seem to help. What could make the texture behave like this?
This ended up being a problem with regl and WebViz. I was calling React.useState to store the resource that regl uses for the texture. For some reason this setter seems to have been invoked again, which "reset" the texture to an empty 1x1.
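As a hedged sketch of the fix (the hook name and dimensions are illustrative): create the texture once and keep it in a ref, so React re-renders never replace the live GPU resource.

import { useRef } from 'react';

// Allocate the regl texture exactly once and reuse it across re-renders,
// instead of keeping it in useState, whose setter was wiping it out.
function useTargetTexture(regl, width, height) {
  const textureRef = useRef(null);
  if (textureRef.current === null) {
    textureRef.current = regl.texture({ width: width, height: height });
  }
  return textureRef.current;
}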
I'm trying to use a movie as a texture on a sphere using X3DOM's MovieTexture. It is in equirectangular projection, which would allow the user to look around (similar to Google Street View).
The movie is MP4 or OGV and plays fine on, e.g., a box shape from the example code in the x3dom docs.
However, on the sphere only about 20 percent of the surface is covered by the movie texture, while the rest is stretched over the surface.
The relevant code looks like this:
<x3d width='500px' height='400px'>
  <scene>
    <shape>
      <appearance>
        <MovieTexture repeatS="false" repeatT="false" loop='true' url='bigBuckBunny.ogv'></MovieTexture>
      </appearance>
      <sphere></sphere>
    </shape>
  </scene>
</x3d>
It looks like it is supposed to work, but there is currently a bug in x3dom when repeatS="false" is set on the texture.
The problem also occurs with a generic <texture> element that contains a <canvas> or <video> element.
The workaround that worked for me is to use a <canvas> with a power-of-two size, which avoids having to set repeatS="false" at all.
An alternative would be to scale the original video.
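A rough sketch of the canvas workaround, assuming the page has a hidden <video id="movie"> element and a power-of-two <canvas id="tex"> (say 1024x512) that is referenced from an x3dom <texture> node:

var video = document.getElementById('movie');
var canvas = document.getElementById('tex');
var ctx = canvas.getContext('2d');

// Stretch each video frame over the whole power-of-two canvas, so the
// texture never needs repeatS="false".
function copyFrame() {
  ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
  requestAnimationFrame(copyFrame);
}
video.addEventListener('playing', function () {
  requestAnimationFrame(copyFrame);
});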
I am trying to overlay graphics on top of my OpenGL render scene.
I have managed to get it up and running, but the drop in FPS is too severe.
I am currently using GLScene in combination with Graphics32.
What I do is render the GLScene rendering context to a bitmap, apply that bitmap to a TImageView32, and do some final UI touches inside the TImage32.
The code I am using to render to a bitmap, which is also what reduces the FPS, is the following:
procedure RenderToBitmap;
var
  b: TBitmap;
begin
  // CreateSnapShotBitmap already returns a freshly created TBitmap, so
  // constructing one first (as the original code did) only leaked memory.
  b := GLSceneViewer.Buffer.CreateSnapShotBitmap; // TGLSceneViewer
  ImgVwr32.Bitmap.Assign(b);                      // TImageViewer32
  b.Free;
end;
I have tried some other code (see below), which gives me realtime rendering, but I cannot modify the Bitmap property of the ImageViewer32. In other words: the GLScene rendering context is rendered, but none of my own graphics are.
The code:
// The following line is placed inside the FormCreate handler
GLSceneViewer.Buffer.CreateRC(GetDC(ImgVwr32.Handle), false);
How can I properly overlay graphics on top of the rendering context, or copy the rendering context output, without losing FPS?
Well, by avoiding the whole GPU→CPU→GPU copy roundtrip. Upload your overlay into an OpenGL texture and draw it over the whole scene as a large textured quad.
OpenGL is not a scene graph; it's just a somewhat more sophisticated drawing API. You can change the viewport and transformation parameters at any time without altering the pixels drawn so far, so it's easy to switch into a screen-space coordinate system and draw the overlay in that.
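The question is about GLScene/Delphi, but the same idea looks roughly like this in WebGL2 terms (shader program and full-screen quad setup omitted; all names here are illustrative):

function drawOverlay(gl, overlayTexture, quadProgram, quadVAO) {
  gl.disable(gl.DEPTH_TEST);   // the overlay ignores scene depth
  gl.enable(gl.BLEND);         // respect the overlay's alpha channel
  gl.blendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA);
  gl.useProgram(quadProgram);  // trivial pass-through shader
  gl.bindVertexArray(quadVAO); // full-screen quad in clip space
  gl.bindTexture(gl.TEXTURE_2D, overlayTexture); // sampler on unit 0
  gl.drawArrays(gl.TRIANGLE_STRIP, 0, 4);
}

The point is that the overlay never leaves the GPU: you update the overlay texture only when its contents change, instead of reading the whole scene back into a bitmap every frame.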
I am building a 3D image viewer which uses Three.js plane geometries as placeholders with the images as their textures.
Now I want to add a black border around each image. The only way I have found so far is to add a new black plane geometry behind the image being displayed, but this required wholesale changes to my framework, which I want to avoid.
WebGL's texture-loading function gl.texImage2D has a border parameter, but I couldn't find it exposed anywhere through Three.js, and I doubt it even works the way I think it does.
Is there an easier way to add borders around textures?
You can use a temporary regular 2D canvas to render your image and apply any kind of editing/effects there, like painting borders and such, and then use that canvas image as a texture. It might be a bit of work, but you gain a lot of flexibility in styling your borders and other things.
I'm not near my dev machine and won't be for a couple of days, so I can't look up an example of my own, but this issue contains some code to get you started: https://github.com/mrdoob/three.js/issues/868
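Something along these lines should do it (an untested sketch; the function name and border handling are illustrative):

function makeBorderedTexture(image, border) {
  var canvas = document.createElement('canvas');
  canvas.width = image.width + border * 2;
  canvas.height = image.height + border * 2;
  var ctx = canvas.getContext('2d');
  ctx.fillStyle = 'black';                       // border color
  ctx.fillRect(0, 0, canvas.width, canvas.height);
  ctx.drawImage(image, border, border);          // image inset by the border
  var texture = new THREE.Texture(canvas);
  texture.needsUpdate = true;                    // upload the canvas pixels
  return texture;
}

The returned texture can then be assigned to the plane's material.map just like an image texture.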
I'm using a canvas to load a background image, and then, using jQuery UI, I call the droppable() function on the canvas and draggable() on a bunch of PNG images on the screen (which are not in the canvas). After the user drags one or more images onto the canvas, I let them save the contents to a file using the following function:
function saveCanvas() {
  var data = canvas.toDataURL("image/png");
  if (!window.open(data)) {
    document.location.href = data;
  }
}
This successfully opens another window with an image, which sadly contains only the original background image, not the dragged images. I'd like to save an image representing the final state of the canvas. What am I missing here?
Thanks for your time.
You've got to draw the images to the canvas.
Here is a live example:
http://jsfiddle.net/6YV88/244/
To try it out, drag the kitten and drop it somewhere over the canvas (which is the square above the kitten). Then move the kitten again to see that it's been drawn into the canvas.
The example just shows how an image can be drawn into the canvas. In your app, you wouldn't use the draggable stop method; rather, at save time you would iterate through the PNGs, drawing each one onto your canvas. Note that the jQuery offset() method is used to determine the positions of the canvas and the images relative to the document.
You are saving the final state of the canvas, but the images sit atop the canvas, and the canvas has zero knowledge of them.
The only way to save the composite you see is to call ctx.drawImage with each of the images before you call toDataURL, actually drawing them onto the canvas.
Getting the right coordinates for the drawImage calls might be tricky, but it's not impossible. You'll probably have to use the pageX and pageY coordinates of each image and then draw it onto the canvas relative to the canvas' own pageX and pageY.
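Putting both answers together, a save function along these lines should work (the img.draggable selector is just a placeholder for however you find the dragged images):

function saveCanvas() {
  var ctx = canvas.getContext('2d');
  var canvasOffset = $(canvas).offset(); // canvas position in the document
  $('img.draggable').each(function () {
    var imgOffset = $(this).offset();
    // Draw each image at its on-page position, relative to the canvas.
    ctx.drawImage(this,
      imgOffset.left - canvasOffset.left,
      imgOffset.top - canvasOffset.top);
  });
  var data = canvas.toDataURL("image/png");
  if (!window.open(data)) {
    document.location.href = data;
  }
}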