I'm using some tile layers in OpenLayers 3 (3.5.0), of different kinds (XYZ and plain TileImage). I want to make the tiles fade in as they are loaded.
I have found that the tile sources I'm using emit a tileloadend event (also tileloadstart and tileloaderror) and I can successfully catch this. Its event object gives me access to the corresponding ImageTile object, which has methods to get its coordinates and to get an image element.
The image element is a DOM image element in my case, but it is not actually attached to the DOM. It's not on the page -- as far as I can tell, it's just the source that then gets drawn into the canvas element. So I'm not sure this actually helps me fade it in.
Is there any way to do this?
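One approach (a sketch, not a confirmed OL3 recipe): record the time each tile finishes loading via the `tileloadend` event you're already catching, and compute a per-frame opacity from the elapsed time while repeatedly re-rendering the map. The fade math itself is a pure function; the OL3 wiring below is an assumption kept in comments (the `tile` property on the load event and `getTileCoord()` are from the question, but driving per-tile alpha through layer render events may need adapting to your setup).

```javascript
// Pure helper: opacity for a tile that finished loading at `loadedAt`,
// evaluated at time `now`, fading in linearly over `duration` ms.
function tileOpacity(loadedAt, now, duration) {
  var t = (now - loadedAt) / duration;
  return Math.min(1, Math.max(0, t));
}

// Hypothetical wiring (untested sketch against the OL3 API):
// var loadTimes = {};
// source.on('tileloadend', function (e) {
//   loadTimes[e.tile.getTileCoord().join('/')] = Date.now();
//   map.render();  // keep requesting frames until the fade completes
// });
// A render-event handler would then set context.globalAlpha from
// tileOpacity(loadTimes[key], Date.now(), 250) before tiles are drawn.
```

If per-tile alpha proves too awkward, a cheaper fallback is animating the whole layer's opacity with `layer.setOpacity()` once its first batch of tiles has loaded.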
I'm trying to make an editor using Konva.js.
In the editor I show a smaller draw area which becomes the final image. For this I'm using a group with a clipFunc. This gives a better UX, since the transform controls of the transformer can be used "outside" of the canvas (the part visible to the user), and it allows the user to zoom and drag the draw area around (imagine a frame in Figma).
I want to implement object snapping based on this: https://konvajs.org/docs/sandbox/Objects_Snapping.html (just edges and centers for now). However, I want it to work when multiple elements are selected in my Transformer.
The strategy I'm using is basically to calculate the snapping based on the .back element created by the transformer. Once I know how much to snap, I apply it to every node within the transformer.
However, when I do this, the items start jittering when the cursor moves close to the snapping lines.
My previous implementation had the draw area fill the entire Stage, and I did manage to get that working with the same strategy (no jitter).
I don't really know what the issue is and I hope some of you guys can point me in the right direction.
I created a sandbox to illustrate my issue: https://codesandbox.io/s/konva-transformer-snapping-1vwjc2?file=/src/index.ts
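A common cause of this kind of jitter (offered as a guess, since the exact cause depends on the sandbox code): the snap delta is computed from the .back rect's *already-snapped* position, so each mousemove snaps relative to a baseline that the previous frame moved, and the result oscillates around the guide line. The usual fix is to keep snapping a pure function of the raw, unsnapped position, so repeated calls with the same pointer position give the same answer. A minimal sketch with hypothetical names:

```javascript
// Pure snap: given a raw (unsnapped) edge position and candidate guide
// lines, return the snapped value. Because the input is always the raw
// position, calling this every mousemove is stable -- no feedback loop.
function snapValue(raw, guides, threshold) {
  var best = raw;
  var bestDist = threshold + 1;
  for (var i = 0; i < guides.length; i++) {
    var d = Math.abs(guides[i] - raw);
    if (d <= threshold && d < bestDist) {
      best = guides[i];
      bestDist = d;
    }
  }
  return best;
}

// Hypothetical drag handler: derive the raw box from the pointer delta,
// snap it, then apply (snapped - raw) to every node in the transformer.
// var raw = dragStartX + (pointerX - pointerStartX);
// var snapped = snapValue(raw, guides, 5);
// nodes.forEach(function (n) { n.x(n.x() + (snapped - raw)); });
```

The key design point is that `raw` comes from the drag start position plus the pointer delta, never from reading back the node or .back rect position after a snap has been applied.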
tile based
I am wondering what 'tile based' means here in the context of Figma.
Has anyone got any papers or ideas?
Thanks a lot!
Divide your canvas up into a grid. Each square on that grid is a tile. The renderer renders each of those tiles individually, then saves the resulting image somewhere. Each of these images is then drawn onto the grid at the spot where it is supposed to go.
This allows the renderer to cache each tile, so it only needs to render tiles that have just come into view. And if something in one tile changes, you only need to re-render that one tile, not the entire screen.
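The caching idea above can be sketched in a few lines. This assumes square tiles and a pixel-space viewport; all names are illustrative, not Figma's actual internals:

```javascript
// Which tile indices (tx, ty) intersect a viewport, for square tiles.
function visibleTiles(viewX, viewY, viewW, viewH, tileSize) {
  var tiles = [];
  var x0 = Math.floor(viewX / tileSize);
  var y0 = Math.floor(viewY / tileSize);
  var x1 = Math.floor((viewX + viewW - 1) / tileSize);
  var y1 = Math.floor((viewY + viewH - 1) / tileSize);
  for (var ty = y0; ty <= y1; ty++) {
    for (var tx = x0; tx <= x1; tx++) {
      tiles.push([tx, ty]);
    }
  }
  return tiles;
}

// A cache keyed by tile coordinates lets the renderer redraw only tiles
// that changed or newly scrolled into view:
// var cache = new Map(); // "tx,ty" -> offscreen canvas / rendered image
// visibleTiles(...).forEach(function (t) {
//   var key = t.join(',');
//   if (!cache.has(key)) cache.set(key, renderTile(t)); // hypothetical
//   blit(cache.get(key), t);                            // hypothetical
// });
```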
I drew a lot of points in my program with WebGL. Now I want to pick any point and move it to a new position. The problem is that I don't know how to select a point. Am I supposed to add an action listener to each point?
WebGL is a rasterization library. It has no concept of movable, clickable position or points. It just draws pixels where you ask it to.
If you want to move things it's up to you to make your own data, use that data to decide if the mouse was clicked on something, update the data to reflect how the mouse changed it, and finally use WebGL to re-render something based on the data.
Notice none of those steps except the last one involve WebGL. WebGL has no concept of an actionlistener since WebGL has no actions you could listen to. It just draws pixels based on what you ask it to do. That's it. Everything else is up to you and outside the scope of WebGL.
Maybe you're using a library like three.js, X3D, or Unity3D, but in that case your question would be about that specific library, as all input/mouse/object-position issues are specific to that library (because, again, WebGL just draws pixels).
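The "make your own data and hit-test it yourself" step can be sketched like this. Everything here is app-side code; only the final redraw would touch WebGL (names like `redraw` are hypothetical):

```javascript
// App-side data. WebGL only draws these; hit-testing is our job.
var points = [{ x: 10, y: 10 }, { x: 50, y: 60 }];

// Return the index of the nearest point within `radius` of the click,
// or -1 if nothing is close enough. Coordinates must be in the same
// space (e.g. both converted to canvas pixels).
function pickPoint(points, x, y, radius) {
  var best = -1;
  var bestD2 = radius * radius;
  points.forEach(function (p, i) {
    var dx = p.x - x, dy = p.y - y;
    var d2 = dx * dx + dy * dy;
    if (d2 <= bestD2) { best = i; bestD2 = d2; }
  });
  return best;
}

// Hypothetical click handler: one listener on the canvas, not per point.
// canvas.addEventListener('mousedown', function (e) {
//   var i = pickPoint(points, e.offsetX, e.offsetY, 8);
//   if (i >= 0) { /* drag: update points[i], then redraw with WebGL */ }
// });
```

Note there is a single listener on the canvas, not one per point; the "which point?" question is answered by the data, which is exactly the division of labor the answer describes.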
This is a demo slideshow: http://www.pixedelic.com/plugins/camera/. In the transition between the last two images, it breaks the picture into a grid and animates the cells one by one. How can this be achieved?
My thought is that when the transition starts, it creates many div elements; every div uses the same background image but a different background-position, so each div shows a different area of that image, and it looks like the image has been taken apart into a grid. Then jQuery's .animate(), or some CSS3 effect like rotate or scale, generates the slide effect.
Is what I'm thinking correct? Does anyone know the mechanism behind that effect?
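That guess matches how such grid transitions are commonly built: each tile div is sized to one cell and given a *negative* background-position so only its cell of the shared image shows through. The offset math is the whole trick; here it is as a small sketch (names are illustrative):

```javascript
// For cell (row, col) in a rows x cols grid over an image of
// imgW x imgH pixels, compute the CSS that makes a tile <div>
// display only its own cell of the shared background image.
function tileCss(row, col, rows, cols, imgW, imgH) {
  var w = imgW / cols;
  var h = imgH / rows;
  return {
    width: w + 'px',
    height: h + 'px',
    // Negative offsets shift the image so the cell lines up
    // with the div's top-left corner.
    backgroundPosition: (-col * w) + 'px ' + (-row * h) + 'px'
  };
}
```

Each div would additionally get the shared `background-image`, absolute positioning at `(col * w, row * h)`, and then a staggered jQuery `.animate()` (or a CSS transition with an incremental delay) to produce the one-by-one effect.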
I'm using a canvas to load a background image, and then, using jQuery UI, I call the droppable() function on the canvas and draggable() on a bunch of PNG images on the screen (that are not in the canvas). After the user drags one or more images onto the canvas, I let them save the contents to a file using the following function:
function saveCanvas() {
    var data = canvas.toDataURL("image/png");
    if (!window.open(data)) {
        document.location.href = data;
    }
}
This successfully opens another window with an image, which sadly contains only the original background image, not the dragged images. I'd like to save an image showing the final state of the canvas. What am I missing here?
Thanks for your time.
You've got to draw the images to the canvas.
Here is a live example:
http://jsfiddle.net/6YV88/244/
To try it out, drag the kitten and drop it somewhere over the canvas (which is the square above the kitten). Then move the kitten again to see that it's been drawn into the canvas.
The example is just to show how an image would be drawn into the canvas. In your app, you wouldn't use the draggable stop method. Rather, at save time you would iterate through the PNGs, drawing them onto your canvas. Note that the jQuery offset() method is used to determine the positions of the canvas and images relative to the document.
You are saving the final state of the canvas, but the dragged images sit on top of the canvas in the DOM; the canvas itself has zero knowledge of them.
The only way to save the result you see is to call ctx.drawImage for each of the images, actually drawing them onto the canvas, before you call toDataURL.
Getting the right coordinates for the drawImage calls may be tricky, but it's not impossible. You'll probably have to use each image's page coordinates and draw it onto the canvas relative to the canvas' own page position.
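The coordinate math reduces to a simple subtraction of page offsets. A sketch of the save flow, under the assumption that the dropped images share a class like `.dropped` (that class name and the variable names are hypothetical):

```javascript
// Convert an element's page coordinates to coordinates relative to
// the canvas, which is what drawImage needs. In a real page, jQuery's
// offset() supplies the page coordinates for both elements.
function relativeToCanvas(imgPageX, imgPageY, canvasPageX, canvasPageY) {
  return { x: imgPageX - canvasPageX, y: imgPageY - canvasPageY };
}

// Hypothetical save flow, run just before toDataURL:
// var ctx = canvas.getContext('2d');
// var c = $(canvas).offset();
// $('.dropped').each(function () {
//   var o = $(this).offset();
//   var p = relativeToCanvas(o.left, o.top, c.left, c.top);
//   ctx.drawImage(this, p.x, p.y);  // <img> elements are valid sources
// });
// var data = canvas.toDataURL('image/png');
```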