I'm using a canvas to load a background image, and then, using jQuery UI, I call the droppable() function on the canvas and draggable() on a bunch of PNG images elsewhere on the screen (not in the canvas). After the user drags one or more images onto the canvas, I let them save the contents to a file using the following function:
function saveCanvas() {
    var data = canvas.toDataURL("image/png");
    if (!window.open(data)) {
        document.location.href = data;
    }
}
This successfully opens another window with an image that, sadly, contains only the original background image, not the dragged images. I'd like to save an image showing the final state of the canvas. What am I missing here?
Thanks for your time.
You've got to draw the images to the canvas.
Here is a live example:
http://jsfiddle.net/6YV88/244/
To try it out, drag the kitten and drop it somewhere over the canvas (which is the square above the kitten). Then move the kitten again to see that it's been drawn into the canvas.
The example is just to show how an image would be drawn into the canvas. In your app, you wouldn't use the draggable stop method; rather, at save time you would iterate through the PNGs, drawing them onto your canvas. Note that the jQuery offset() method is used to determine the positions of the canvas and images relative to the document.
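That save-time loop might look something like this (a sketch, not your exact code: it assumes the dragged images share a hypothetical .draggable-png class, and that canvas and ctx refer to your canvas element and its 2D context):
// Composite the dragged images into the canvas before exporting.
function saveCanvas() {
    var canvasPos = $(canvas).offset();           // canvas position in the document
    $(".draggable-png").each(function () {
        var imgPos = $(this).offset();            // image position in the document
        // Draw each image at its location relative to the canvas.
        ctx.drawImage(this,
                      imgPos.left - canvasPos.left,
                      imgPos.top - canvasPos.top);
    });
    var data = canvas.toDataURL("image/png");     // now includes the dragged images
    if (!window.open(data)) {
        document.location.href = data;
    }
}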
You are saving the final state of the canvas. You have images atop the canvas and the canvas has zero knowledge of them.
The only way to save the composite you see is to call ctx.drawImage with each of the images, actually drawing them onto the canvas, before you call toDataURL.
Getting the right coordinates for the call to drawImage might get tricky, but it's not impossible. You'll probably have to use the page coordinates of each image and then draw it onto the canvas relative to the canvas's own page position.
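In code, that translation is just a subtraction (a sketch using jQuery's offset(), which returns document-relative coordinates; img here stands for one of the dragged images):
var canvasPos = $(canvas).offset();
var imgPos = $(img).offset();
// Convert the image's document position into canvas-local coordinates.
ctx.drawImage(img, imgPos.left - canvasPos.left, imgPos.top - canvasPos.top);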
Related
tile based
I am wondering what 'tile based' means here in the context of Figma.
Has anyone got any papers or ideas?
Thanks a lot!
Divide your canvas up into a grid. Each square on that grid is a tile. The renderer renders each of those tiles individually, then saves the resulting image somewhere. Each of those images is then drawn onto the grid at the spot where it is supposed to go.
This allows the renderer to cache each tile, so it only needs to render tiles that have just come into view. And if something in one tile changes, you only need to re-render that one tile, not the entire screen.
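A sketch of the idea on an HTML canvas, with offscreen canvases as the tile cache (the names and the renderScene(ctx, originX, originY) helper are illustrative, not taken from Figma):
var TILE_SIZE = 256;
var tileCache = {};                       // "col,row" -> offscreen canvas

// Return the cached tile, rendering it only if it isn't cached yet.
function getTile(col, row) {
    var key = col + "," + row;
    if (!tileCache[key]) {
        var tile = document.createElement("canvas");
        tile.width = tile.height = TILE_SIZE;
        // Assumed helper: draws the scene with the given world-space origin.
        renderScene(tile.getContext("2d"), col * TILE_SIZE, row * TILE_SIZE);
        tileCache[key] = tile;
    }
    return tileCache[key];
}

// When something inside one tile changes, drop just that tile.
function invalidateTile(col, row) {
    delete tileCache[col + "," + row];
}

// Drawing the view is then just blitting the visible tiles into place.
function drawView(ctx, viewX, viewY, viewW, viewH) {
    var c0 = Math.floor(viewX / TILE_SIZE), c1 = Math.floor((viewX + viewW) / TILE_SIZE);
    var r0 = Math.floor(viewY / TILE_SIZE), r1 = Math.floor((viewY + viewH) / TILE_SIZE);
    for (var r = r0; r <= r1; r++) {
        for (var c = c0; c <= c1; c++) {
            ctx.drawImage(getTile(c, r), c * TILE_SIZE - viewX, r * TILE_SIZE - viewY);
        }
    }
}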
I'm using some tile layers of different kinds (XYZ and plain TileImage) in OpenLayers 3 (3.5.0). I want to make the tiles fade in as they are loaded.
I have found that the tile sources I'm using emit a tileloadend event (also tileloadstart and tileloaderror), and I can successfully catch it. Its event object gives me access to the corresponding ImageTile object, which has methods to get its coordinates and its image element.
The image element is a DOM image element in my case, but it is not actually attached to the DOM. It's not on the page; as far as I can tell, it's just the source that then gets drawn into the canvas element. So I'm not sure it actually helps me fade the tile in.
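For reference, here is roughly how I'm catching the event (a sketch against the 3.5.0 API; source is one of my tile sources):
source.on('tileloadend', function (evt) {
    var tile = evt.tile;               // the ol.ImageTile
    var coord = tile.getTileCoord();
    var img = tile.getImage();         // a detached image element, not on the page
    // Fading img here seems to have no visible effect, since only the
    // map's canvas is actually displayed.
});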
Is there any way to do this?
I want to create a simple 2D racing game in FM. I have a track (racing map) bitmap, and the Image component must show only the part of that bitmap that corresponds to the car's position. Is there any way to define the coordinates of the point in the bitmap from which the Image will show it? If Image doesn't support that, what component does? Thanks.
Use a TSubImage. This component is similar to a TImage, but lets you specify a sub-section of the image to be displayed.
I am working on an OCR recognition app, and I want to give the user the option to manually select the area (during camera capture) on which to perform the OCR. The issue I face is that I draw a rectangle on the camera screen by simply overriding the - (void)drawRect:(CGRect)rect method; however, despite the rectangle being there, the capture covers the entire screen rather than just the area within the rectangle.
In other words, I do not want the entire picture to be sent for processing, but only the part of the captured image inside the rectangle. I have managed to draw the rectangle, but so far it has no functionality.
I hope this makes sense, since I have tried my best to explain it.
Thanks, and let me know.
Stream the camera's image to a UIScrollView using an AVCaptureOutput, then allow the user to pinch/pull/pan the camera image into the proper place. Then use a UIGraphics image context to take a "screenshot" of that area and send the resulting UIImage.CGImage in for processing.
I would like to capture the entire contents of a window, not just the visible portion, as a bitmap. What I've been able to do only captures what is currently visible:
var v:UIComponent = ...
// BitmapData sized to the component's *visible* dimensions.
var bd:BitmapData = new BitmapData(v.width, v.height);
bd.draw(v);                              // captures only what is on screen
var pixels:ByteArray = bd.getPixels(bd.rect);
I realize that using v.width and v.height will only get the visible part, but I need the entire graphic extent (including the scrollable area).
Any help would be appreciated.
You could try copying not the window's pixels but its content's: if the content is scrollable, chances are the window uses a mask or a similar method to hide everything outside its own dimensions.
By drawing the content directly (if you have a v.content property or anything similar), you should be able to get its real size and thus draw it in its entirety.
Keep us posted on whether that works.