I'm creating a drawing app and I need the end result to be saved as a png image. But I then need to be able to edit the image with further drawing.
Is a framebuffer object the way to go here? Rendering into an offscreen texture?
It depends how you want to edit the image afterwards. There are two parts to your question:
1) Saving the image as a png
2) Editing the image after drawing to it
1) It is straightforward to save a frame buffer drawing as a png. There is a similar question here for OpenGL ES 1.x (http://stackoverflow.com/questions/5062978/how-can-i-dump-opengl-renderbuffer-to-png-or-jpg-image) that should be a good base to work off of.
2) It depends how soon you want to edit the image. If you are editing the image continuously throughout the program, then keep everything in memory in the frame buffer and only write to a png when you are done editing. If you need to draw on top of the image at a later time (for instance, when you reopen the program) you can save as a png and then load the png as a texture for a new frame buffer when you want to edit the image again. When you draw to this new frame buffer, you will be drawing on top of the texture (which was your previous image).
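For part 2, here is a minimal sketch of that save-and-reload round trip in WebGL terms (the same idea carries over to OpenGL ES); the gl context, the names, and the 512x512 size are all assumptions of mine:

// A texture that holds the drawing, attached to a framebuffer so that
// all stroke rendering lands in the texture rather than on the screen.
var SIZE = 512;
var drawingTex = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, drawingTex);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, SIZE, SIZE, 0,
              gl.RGBA, gl.UNSIGNED_BYTE, null);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);

var fbo = gl.createFramebuffer();
gl.bindFramebuffer(gl.FRAMEBUFFER, fbo);
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0,
                        gl.TEXTURE_2D, drawingTex, 0);

// ... draw strokes into the framebuffer, then read the pixels back
// and hand them to whatever PNG-encoding path you use ...
var pixels = new Uint8Array(SIZE * SIZE * 4);
gl.readPixels(0, 0, SIZE, SIZE, gl.RGBA, gl.UNSIGNED_BYTE, pixels);

// Later session: load the saved PNG back into the same texture, and the
// user keeps drawing on top of the previous image.
var img = new Image();
img.onload = function () {
  gl.bindTexture(gl.TEXTURE_2D, drawingTex);
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, img);
};
img.src = 'drawing.png'; // hypothetical saved file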
I am continually updating some of the scene's textures with new images.
The problem is that uploading is synchronous, and texImage2D takes ~100ms. It takes that long even if the texture is not used while rendering the next frame, or if rendering is switched off entirely.
I am wondering, is there any way to upload texture data asynchronously?
Additional conditions:
I should mention that there is an old texture which can stay active until uploading of the new one to the GPU is finished.
The solution is to use texSubImage2D and upload the image to the GPU in small portions. Once uploading is finished, activate your new texture and delete the old one.
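A rough sketch of that chunked upload, assuming a WebGL context gl, a decoded image, and an existing oldTex that keeps being rendered until the swap (the chunk size is arbitrary and worth profiling):

// Allocate the new texture up front without uploading any pixel data.
var newTex = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, newTex);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, image.width, image.height, 0,
              gl.RGBA, gl.UNSIGNED_BYTE, null);

// Scratch canvas used to slice the image into horizontal strips, since
// WebGL 1.0 texSubImage2D can't take a sub-rectangle of an image element.
var scratch = document.createElement('canvas');
scratch.width = image.width;
var scratchCtx = scratch.getContext('2d');

var ROWS_PER_CHUNK = 64;
var y = 0;

function uploadNextChunk() {
  var rows = Math.min(ROWS_PER_CHUNK, image.height - y);
  scratch.height = rows; // resizing also clears the scratch canvas
  scratchCtx.drawImage(image, 0, y, image.width, rows, 0, 0, image.width, rows);
  gl.bindTexture(gl.TEXTURE_2D, newTex);
  gl.texSubImage2D(gl.TEXTURE_2D, 0, 0, y, gl.RGBA, gl.UNSIGNED_BYTE, scratch);
  y += rows;
  if (y < image.height) {
    requestAnimationFrame(uploadNextChunk); // spread the cost over frames
  } else {
    // Upload finished: render with newTex from now on and free the old one.
    gl.deleteTexture(oldTex);
  }
}
requestAnimationFrame(uploadNextChunk);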
is there any way to upload texture data asynchronously?
No, not in WebGL 1.0. There might be in WebGL 2.0, but that's not out yet.
Some things you might try:
make it smaller
What are you uploading? Video? Can you make it smaller?
Have you tried different formats?
WebGL converts from whatever format the image is stored in to the format you request. So for example if you load a .JPG the browser might make an RGB image. If you then upload it with gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE it has to convert the image to RGBA before uploading (more time).
Do you have UNPACK_FLIP_Y_WEBGL set to true?
If so, WebGL has to flip your image before uploading it.
Do you have UNPACK_COLORSPACE_CONVERSION_WEBGL set to BROWSER_DEFAULT_WEBGL?
If not, WebGL may have to re-decompress your image.
Do you have UNPACK_PREMULTIPLY_ALPHA_WEBGL set to false or true?
Depending on how the browser normally stores images, it might have to convert the image to the format you're requesting.
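All of those flags live on gl.pixelStorei. Which combination is cheapest depends on how the browser happens to store the image, so treat this as a starting point to profile rather than a recipe (jpegImage stands in for your decoded image element):

// Avoid the row flip before upload.
gl.pixelStorei(gl.UNPACK_FLIP_Y_WEBGL, false);
// Avoid an extra premultiply pass (may or may not help, per the above).
gl.pixelStorei(gl.UNPACK_PREMULTIPLY_ALPHA_WEBGL, false);
// Let the browser keep its default colorspace handling.
gl.pixelStorei(gl.UNPACK_COLORSPACE_CONVERSION_WEBGL, gl.BROWSER_DEFAULT_WEBGL);

// Request a format that matches the source: a JPEG has no alpha channel,
// so asking for RGB can skip an RGB-to-RGBA conversion.
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGB, gl.RGB, gl.UNSIGNED_BYTE, jpegImage);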
Images have to be decompressed
Are you sure your time is spent in "uploading" vs "decompressing"? If you switch to uploading a TypedArray of the same dimensions, does it speed up?
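One way to check, assuming the image dimensions are known: time an upload of a raw TypedArray of the same size against the image element. GL calls can complete asynchronously, so a gl.finish() is included to make the comparison fair, at the cost of a pipeline stall:

var tex = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, tex);

// Raw bytes: no decoding or format conversion should be needed.
var raw = new Uint8Array(image.width * image.height * 4);
var t0 = performance.now();
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, image.width, image.height, 0,
              gl.RGBA, gl.UNSIGNED_BYTE, raw);
gl.finish();
console.log('typed array upload: ' + (performance.now() - t0) + 'ms');

// Image element: any conversion work the browser does shows up here.
t0 = performance.now();
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, image);
gl.finish();
console.log('image element upload: ' + (performance.now() - t0) + 'ms');

// If the image path is much slower, the time is going to decoding and
// conversion, not to the upload itself.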
I have a series of images that I would like to loop through using iOS's [UIImageView startAnimating]. My trouble is that, when I exported the images, they all came out at a standard 240x160 size, although only 50x50 of that contains the actual image; the rest is transparent padding that just takes up space.
When I set the frame of the image automatically using image.size.width and image.size.height, iOS uses the images' original 240x160 size, so I am unable to get a frame that conforms to the visible part of the image. I was wondering if there is a way, using Illustrator or Photoshop or any other graphics editing software, to export the images based on their natural dimensions rather than a fixed dimension. Thanks!
I am a fan of vector graphics and think everything in the world should be vector ;-) so here is what you do in Illustrator: File - Document Setup - Edit Artboards. Then click on the image, and the artboard should adjust to its exact size. You can of course have multiple artboards, or simply operate with one artboard and however many images.
I've seen many questions asking how to draw transparent images, but my case is quite the opposite. I have a TPicture where I load any file type, including PNG. I then read TPicture.Graphic and call Draw directly in a TBitmap's canvas. However, when the image is drawn, it carries over the transparency of the original PNG image.
The current code is very simple, just...
MyPicture.LoadFromFile(SomeFilename);
MyBitmap.Canvas.StretchDraw(SomeRect, MyPicture.Graphic);
Now the issue is that the canvas which I'm drawing to already has an image, and this PNG is being drawn over a portion of it. When the PNG has a transparent background, normally it appears white. However, since it's directly drawing a transparent graphic to the canvas, it keeps those areas transparent.
How can I draw a PNG Graphic directly to a canvas without its original transparency while using only the canvas drawing methods? I don't want to create too many graphic objects and draw too many times, hence the reason I only have 2 lines of code above. I'm hoping there's a way I can do something like BitBlt with some special mechanism for this purpose.
The only pre-built method in Delphi XE2 has a defect and doesn't work properly. Instead, draw a white background, or whatever background you desire, to a blank canvas first, then draw the transparent image on top.
In case you aren't drawing onto a blank canvas, you can call the FillRect method of the bitmap's canvas for the region where you're planning to draw the PNG.
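For example, a sketch assuming MyBitmap already holds your base image and SomeRect is the target area from the question:

// Paint the target region opaque white first, then draw the PNG over it,
// so its transparent pixels end up white instead of see-through.
MyBitmap.Canvas.Brush.Color := clWhite;
MyBitmap.Canvas.Brush.Style := bsSolid;
MyBitmap.Canvas.FillRect(SomeRect);
MyBitmap.Canvas.StretchDraw(SomeRect, MyPicture.Graphic);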
I have been designing an app with oCanvas.js. It's a really nice canvas library that makes it much easier to create an app that can create and manipulate images, but I ran into a snag when I was trying to implement image filters:
I need transparent backgrounds so that I can have multiple layers, each of which is represented by its own display object, rendered separately (meaning one at a time) on a hidden "staging" canvas. Immediately after being rendered, a layer is then drawn on top of the previous layers on the visible canvas, so that different image filters can be applied to each layer independently during render.
The issue I am having is that, when attempting to extract the image from an oCanvas object's canvasElement, the resulting images never have a transparent background. For example: Imagine I have a 50x50 canvas that has been oCanvas.create() processed, but has display: none; (this is used as the rendering canvas) and another canvas (same dimensions) without an oCanvas instance. I am trying to do something like this (Pseudocode):
visibleCanvas.getContext("2d").drawImage(MyOcanvasCore.canvasElement, 0, 0);
I have also tried using URL = MyOcanvasCore.canvasElement.toDataURL() and then having my visibleCanvas do a drawImage with src=url.
The images always transfer, but they have a white background, even though I specify background: "transparent" during canvas.create(). As such, they completely overwrite all previous layers.
Do you have any tips for me? Am I doing it wrong? I tried transferring content from one canvas to another using the classic drawRect, drawImage, etc. methods, and transparency was retained. That's why I believe the problem is either in the library or in my code.
I guess you are using an image format other than PNG. You should keep your image in PNG format, which preserves all the details of every pixel, including transparency, rather than a compressed format. So just keep your images in PNG format after editing them in an image editor, and save the result as .png.
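For the toDataURL path specifically, it's worth passing the type explicitly; PNG (the default) keeps the alpha channel, while JPEG flattens it onto an opaque background:

// Keeps transparency:
var url = MyOcanvasCore.canvasElement.toDataURL('image/png');
// Would flatten transparent pixels onto an opaque background:
// var jpegUrl = MyOcanvasCore.canvasElement.toDataURL('image/jpeg');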
I wonder how I could figure out whether an image has a transparency effect applied. Is there any way in JavaScript or HTML5? I have a Base64-encoded image. Is there a way to read out the transparency information (alpha channel)? For example, if I load a PNG image, then convert it to Base64, then drop it onto an HTML5 canvas, how can I know whether it has transparency?
When you say 'drop it onto an HTML5 canvas', I assume you mean using an image element with the 'data:' URI scheme. Also, let's take it as given that you don't want to write JavaScript code to parse the image files yourself.
You could do something like this pseudo-code:
create 2 off-screen canvases
color one opaque white and the other opaque black
draw the image on both of them
call getImageData on each canvas, using the image bounds
compare the image data
If the image has any transparent or partially-transparent pixels, then presumably the two canvases will end up at least a little different. One exception would be if the image has the transparency feature enabled but is entirely opaque anyway. Another would be if the non-opaque pixels are only very slightly transparent - not enough to alter a white or black background. But this technique would catch images where transparency is noticeable.
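That pseudo-code translates fairly directly into JavaScript; a sketch, with the helper name being mine:

function hasVisibleTransparency(img) {
  // Draw the image over an opaque background of the given color and
  // return the resulting pixel data.
  function drawOnBackground(color) {
    var canvas = document.createElement('canvas');
    canvas.width = img.width;
    canvas.height = img.height;
    var ctx = canvas.getContext('2d');
    ctx.fillStyle = color;
    ctx.fillRect(0, 0, canvas.width, canvas.height);
    ctx.drawImage(img, 0, 0);
    return ctx.getImageData(0, 0, canvas.width, canvas.height).data;
  }
  var onWhite = drawOnBackground('#ffffff');
  var onBlack = drawOnBackground('#000000');
  // Any difference means a background showed through somewhere.
  for (var i = 0; i < onWhite.length; i++) {
    if (onWhite[i] !== onBlack[i]) return true;
  }
  return false;
}

Note that images loaded from 'data:' URIs count as same-origin, so getImageData won't throw a security error here.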