I'm using Graphics32 for image processing. Looking at its capabilities, it strikes me that I've yet to see a proper implementation of a clipping mask. I do see the term "clipping" pop up here and there, but it seems to refer to something else.
Simply put, I need one layer to act as a "peephole" onto another: layer A should be projected onto layer B, but only where layer B is visible. (I see no further need to redefine what a clipping mask is.)
If it were just the bitmap of that other layer that I'd like to present, it wouldn't be so hard to do; then I could use this trick. What complicates things is that the bitmap of a layer says little about what the layer actually displays; the layer can be:
(partially) invisible (when out of the view)
moved/stretched + optionally resampled
rotated
with no effects on its bitmap.
Is it actually so that there is no ready implementation for this? Any suggestions for doing this myself?
Progress
I found some useful elements in the source of Graphics32. For example, using this declaration:
type
TLayerAccess = class(TBitmapLayer);
to gain access to protected methods, I can call TLayerAccess(ABitmapLayer).Paint(ABitmap32) to have just this layer painted to a bitmap, exactly as it would to the screen.
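As a minimal sketch of how that cast can be used (RenderLayerToBitmap and its parameters are my own illustrative names, not Graphics32 API; GR32 and GR32_Layers are assumed to be in the uses clause):

procedure RenderLayerToBitmap(ALayer: TBitmapLayer; ADest: TBitmap32);
begin
  ADest.Clear(0);                     // start from a fully transparent buffer
  TLayerAccess(ALayer).Paint(ADest);  // paint just this layer, exactly as it would render to screen
end;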
Have a look at TByteMap and its WriteTo method.
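If memory serves, TByteMap lives in GR32_OrdinalMaps and its WriteTo overload takes a conversion type selecting which channel to write into. A hedged sketch of using it as an alpha mask (the helper name is mine):

procedure ApplyByteMapAsAlpha(Mask: TByteMap; Target: TBitmap32);
var
  X, Y: Integer;
begin
  Mask.SetSize(Target.Width, Target.Height);
  for Y := 0 to Mask.Height - 1 do
    for X := 0 to Mask.Width - 1 do
      Mask.Value[X, Y] := 255;        // 255 = keep the pixel, 0 = mask it away
  // ... carve the actual clipping shape into Mask here ...
  Mask.WriteTo(Target, ctAlpha);      // push the byte map into Target's alpha channel
end;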
I looked into this myself a year ago and quickly resorted to using a black layer with transparent parts. It suited my needs at the time. But what you want is possible.
You want to couple one TBitmapLayer to another and treat it as its mask. I want to avoid such cross-references, however (along with their potential problems and the Graphics32 rework they imply), and see this only as a last resort.
There is a way to do without a dedicated TBitmapLayer, using your own pixel combiner. But a combiner doesn't know about TBitmapLayers or their XY coordinates.
To properly occlude (or leave out) parts of a TBitmapLayer while drawing to screen, you can create a method of type TPixelCombineEvent, assign it to the layer bitmap's OnPixelCombine, and set its DrawMode to dmCustom.
Inside that TPixelCombineEvent method you decide what pixel results, given the background B, the foreground F and the current master alpha M.
procedure TMyObj.MyPixCombine(F: TColor32; var B: TColor32; M: TColor32);
begin
  if not PseudoThisPixelShouldBeMasked then B := F; // ugly and aliased
end;
The problem here is that the pseudocode PseudoThisPixelShouldBeMasked doesn't really know which pixel this concerns or whether it lies inside a mask. So you'd have to derive that decision from a component of F, such as its alpha value.
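For instance, a sketch of one possible decision rule, taking the mask from the foreground pixel's own alpha byte ("BitmapLayer" stands in for your TBitmapLayer instance; the 50% threshold is arbitrary):

// Setup, e.g. in FormCreate:
//   BitmapLayer.Bitmap.DrawMode := dmCustom;
//   BitmapLayer.Bitmap.OnPixelCombine := MyPixCombine;

procedure TMyObj.MyPixCombine(F: TColor32; var B: TColor32; M: TColor32);
begin
  if (F shr 24) > $7F then  // alpha over 50%: this pixel is not masked away
    B := F;                 // still aliased: no blending along the mask edge
end;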
Back then I opted for the quite fast B := ColorMin(F, B), where F is black or white. That layer is always on top, but it results in black rather than transparent masks.
This is because any rendering to the TBitmapLayer would destroy the mask data, forcing me to reapply it. Using a TByteMap, however, as suggested by iamjoosy, might be interesting; perhaps the performance penalty turns out to be negligible.
Related
The question relates to: Drawing on a paintbox - How to keep up with mouse movements without delay?
At some point I was going to ask how to repaint only part of a paintbox without invalidating the whole thing, which is slow when there is a lot of drawing going on, or in my case when there are a lot of tiles drawn on screen.
From the link above Peter Kostov did touch on the subject briefly in one of his comments:
you can partly BitBlt the offscreen bitmap (only regions where it is changed). This will improve the performance dramatically.
I have limited graphic skills and have never really used BitBlt before but I will be reading more about it after I post this question.
With that said, I wanted to know how exactly you could determine whether regions of a bitmap have changed. Is this something simple, or is there more magic, so to speak, involved in determining which regions have changed?
Right now I am still painting directly to the paintbox, but once I draw to the offscreen buffer bitmap my next step is to optimise the painting. The above comment sounds exactly like what I need; only the part about determining which regions have changed has confused me slightly.
Of course if there are other ways of doing this please feel free to comment.
Thanks.
You don't have to use BitBlt() directly if you draw to an offscreen TBitmap; you can use TCanvas.CopyRect() instead (which uses StretchBlt() internally). Either way, when you need to invalidate just a portion of the TPaintBox (the portion corresponding to the section of the offscreen bitmap you drew on), you can call InvalidateRect() directly with the appropriate rectangle of the TPaintBox, instead of calling TControl.Invalidate() (which calls InvalidateRect() with the lpRect parameter set to NULL). Whenever the TPaintBox.OnPaint event is triggered, InvalidateRect() will have established a clipping rectangle on the canvas; any drawing you do outside of that rectangle is ignored. If you want to manually optimize your drawing beyond that, you can use the TCanvas.ClipRect property to determine the rectangle of the TPaintBox that needs to be drawn, and copy just that portion from your offscreen bitmap.
The only gotcha is that TPaintBox is a TGraphicControl descendant, so it does not have its own HWND to pass to InvalidateRect(); you would have to use its Parent.Handle HWND instead, which means translating TPaintBox-relative coordinates into Parent-relative coordinates (and vice versa) when needed.
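A minimal sketch of how those two pieces fit together, assuming a TPaintBox named PaintBox1 sitting on a windowed parent and an offscreen bitmap FBuffer (both names are mine):

procedure TForm1.InvalidatePaintBoxRect(const R: TRect);
var
  ParentRect: TRect;
begin
  // translate the dirty rect from PaintBox coordinates to Parent coordinates
  ParentRect := R;
  OffsetRect(ParentRect, PaintBox1.Left, PaintBox1.Top);
  InvalidateRect(PaintBox1.Parent.Handle, @ParentRect, False);
end;

procedure TForm1.PaintBox1Paint(Sender: TObject);
var
  R: TRect;
begin
  // repaint only the clipped portion, copied from the offscreen buffer
  R := PaintBox1.Canvas.ClipRect;
  PaintBox1.Canvas.CopyRect(R, FBuffer.Canvas, R);
end;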
You need to take charge of the painting in order to do this:
Call InvalidateRect to invalidate portions of a window.
When handling WM_PAINT you call BeginPaint which yields a paint struct containing the rect to be painted.
All of this requires a window, and unfortunately for you, TPaintBox is not windowed. So you could use the parent control's window handle, but frankly it would be cleaner to use a windowed control.
You could use my windowed paint control from this question as a starting point: How could I fade in/out a TImage? Use the ClipRect of the control's canvas when painting to determine the part of the canvas that needs re-painting.
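For illustration, a stripped-down windowed control of that kind could look like the sketch below (TBufferedPaintBox and FBuffer are my own names, not the control from the linked answer):

type
  TBufferedPaintBox = class(TCustomControl)
  private
    FBuffer: TBitmap;          // offscreen bitmap that all drawing goes to
  protected
    procedure Paint; override; // TCustomControl wraps WM_PAINT/BeginPaint for us
  public
    constructor Create(AOwner: TComponent); override;
    destructor Destroy; override;
    property Buffer: TBitmap read FBuffer;
  end;

constructor TBufferedPaintBox.Create(AOwner: TComponent);
begin
  inherited;
  FBuffer := TBitmap.Create;
end;

destructor TBufferedPaintBox.Destroy;
begin
  FBuffer.Free;
  inherited;
end;

procedure TBufferedPaintBox.Paint;
var
  R: TRect;
begin
  R := Canvas.ClipRect;                   // only the invalidated portion
  Canvas.CopyRect(R, FBuffer.Canvas, R);  // copy that slice from the buffer
end;

Because the control is windowed, InvalidateRect(Handle, @R, False) can then be called with control-relative coordinates; no parent translation is needed.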
I am creating a custom button that needs to be able to glow to a varying degree.
How would I use these pictures to make a button that 'glows' the diamond when it is pressed, and have this glow gradually fade back to inert state?
I want to churn out several different colours of diamond as well... I am hoping to generate all different coloured diamonds from the same stock images presented here.
I would like to get my head around the basic methods available, in enough detail that I can see each one through and make a decision which path to take...
My tangled efforts so far... ( I will delete all of this, or move it into possibly several answers as a solution unfolds... )
I can see 3 potential solution paths:
GL
It looks as though GL has everything it takes to get complete fine-grained control over the process. The functions exposed by Core Graphics come tantalisingly close, though, and using them would save several hundred lines of code spread over a bunch of source files, which seems a bit ridiculous for such a basic task.
Core Graphics, with Core Animation to accomplish the blending
The documentation goes on to say:
Anything underneath the unpainted samples, such as the current fill color or other drawing, shows through.
So I can chroma-key mask the left image, setting {0,0,0}, i.e. black, as the key.
This at least secures a transparent background; now I have to work on making it yellow instead of grey.
So maybe I could instead have started by setting a yellow background colour for my image context, then used some CGContextSetBlendMode(...) to imprint the diamond on the yellow, and THEN used chroma-key masking to get a transparent background.
OK, this covers at least getting the basic unlit image on-screen.
Now I could overlay the sparkly image using some blend mode; maybe I could keep it in its current greyscale state, which would just boost the colours of the original.
The only problem with this is that it means a lot of heavy real-time blending.
So maybe I could pre-calculate every image in the animation... this is looking increasingly mucky...
Cocos2D
If this allows me to set the blend mode to additive blending, then I could just composite the glowing image over the original image with an appropriate alpha setting.
After digging through a lot of documentation, the optimal solution seems to be to use core graphics functions to get the source images into a single 2-component GL texture, and then use GL to blend between them.
I will need to pass a uniform value glow_factor into the shader
The obvious solution might seem to be simply:
out_rgb = in_rgb * ((1 - glow_factor) * inertPixel + glow_factor * shinyPixel)
(where inertPixel is the appropriate pixel of the inert diamond, etc.)...
It looks like I would also do well to manufacture my own sparkles and add them over the top; a gem should sparkle white irrespective of its characteristic colour.
After looking at this problem a little more, I can see several solutions:
Solution A -- store the transition from glow=0 to glow=1 as 60 frames in memory, then load the appropriate frame into a GL texture every time it is required.
This has the obvious benefit that a graphic designer could construct the entire sequence, and I could load it in as a bunch of PNG files.
Another advantage is that these frames wouldn't need to be played in sequence... the appropriate frame can be chosen on the fly.
However, it has the potential drawback of a lot of data being sent RAM->VRAM.
This can be optimised by using glTexSubImage2D; several frames, or maybe even the entire sequence, can be sent at once and then unpacked from within GL. If so, it would make sense to use PVRT texture compression.
iOS: playing a frame-by-frame greyscale animation in a custom colour
Solution B -- load glow=0 and glow=1 images as GL textures, and manually write shader code that takes in the glow factor as a uniform and performs the blend
This has the advantage that it is close to the wire and can be tweaked in all sorts of ways. It is also going to be very efficient. The disadvantage is that it is a big extra slice of code to maintain.
Solution C -- set glBlendMode to perform additive blending.
Then draw the glow=0 image, setting e.g. alpha=0.2 on each vertex.
Then draw the glow=1 image, setting e.g. alpha=0.8 on each vertex.
This has the advantage that it can be achieved with a more generic code structure, i.e. a very general 'draw textured quad / sprite' class.
The disadvantage is that without some sort of wrapper it is a bit messy... In my game I have a couple of dozen diamonds, and at any one time maybe 2 or 3 are likely to be glowing. So on the first pass I would render EVERYTHING (setting alpha appropriately for anything that is glowing), and then on a second pass I could draw the glowing sprite again, with the appropriate alpha, for everything that IS glowing.
It is worth noting that if I pursue solution A, this would involve creating some sort of real-time movie player object, which could be a very useful reusable code component.
I'm drawing some cars. They're Bitmaps, loaded from PNGs in the library. I need to be able to color the cars: red ones and green ones and blue ones, whatever. However, when you paint the car green, the tires should stay black, and the windows stay window-color.
I know of two ways to handle this, neither one of which makes me happy. First, I could have two bitmaps for each car; one underneath for the body color, and one on top for detail bits. The underneath bitmap gets its transform.colorTransform set to turn the white car-body into whatever color I need. Not great, because I end up with twice as many Bitmaps running around on screen at runtime.
Second, I could programmatically search-and-replace "white" with "car-body" color when I load the bitmap for each car. Not great either, because the amount of memory I take up multiplies by however many colors I need.
What I would LIKE would be a way to say "draw this Bitmap with JUST THE WHITE PARTS turned into this other color" at runtime. Is there anything like this available? I will be less than surprised if the answer is "no," but I figure it's worth asking.
You might have answered the question yourself.
I think your first approach would need only two transparent images: one with the pixels of the parts that need to change colour, one with the rest of the image. You would use colorTransform or a ColorMatrix filter as appropriate. It might even work to cover the pixels that need the colour change with a Sprite filled with a flat colour and its blend mode set to overlay?
The downside would be that you will need to create a 'colour map'/set of pixels to replace for each different item that will need colour replacement.
For the second approach:
You might isolate the areas using something like threshold().
For speed, you might want to store the indices of the pixels you need to replace in a Vector.<int> object that could be used in conjunction with BitmapData's getVector() method. (You would loop once to fetch the pixel indices that need to be replaced.)
Since you will use the same image (same dimensions) to fill the same content with a different colour, you'll always loop through the same pixels. Also keep in mind that you will gain a bit of speed by calling lock() before your setPixel() loop and unlock() after it.
Alternatively you could use Pixel Bender and try some green-screen/background-subtraction techniques. It should be fast and wouldn't delay the execution of the rest of your AS3 code, as Pixel Bender code runs in its own thread.
Also check out Lee's Pixel Bender subtraction technique too.
Although it's a bit old now, you can pick up some knowledge from Quasimondo's article too.
HTH
I'm a little confused about where you see the difference between your second approach and the one you would like to have. You can go over your loaded bitmap pixel by pixel and read out the color; if it turns out to be white, replace it with another color. I don't see where the multiplied memory consumption would come in.
You might want to try my selective color transform: http://www.quasimondo.com/archives/000614.php - it's from 2006, so some parts of it could probably be replaced by a pixel bender filter now.
Why not just load the pieces separately, perform the color transform on the one you want to change, then do a BitmapData.copyPixels() with the result? The blit routine runs in machine code, so is wicked fast. Doing it pixel by pixel in ActionScript would be glacially slow in comparison.
http://help.adobe.com/en_US/FlashPlatform/reference/actionscript/3/flash/display/BitmapData.html#copyPixels()
Can anyone share a sample code to draw a non-rectangular part of a picture in delphi canvas?
You're looking for GDI paths. Start here, which explains what paths are in this context, and provides links on the left to explain the functionality available with them.
Google can turn up lots of examples of using paths in Delphi. If you can't find them, post a comment back here and I'll see what I can turn up for you.
Your question is pretty vague. But I suspect what you are looking for is clipping regions. Read up on them. Set the clipping region on the target device to the shape you want, and then draw the image onto the device. Only the part of the image that would be within the clipping region will be drawn.
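As an illustration of the clipping-region approach (the helper name and the elliptical shape are only examples):

uses
  Windows, Graphics;

procedure DrawGraphicClippedToEllipse(Target: TCanvas; Source: TGraphic; const R: TRect);
var
  Rgn: HRGN;
begin
  Rgn := CreateEllipticRgn(R.Left, R.Top, R.Right, R.Bottom);
  try
    SelectClipRgn(Target.Handle, Rgn);  // everything outside the ellipse is now clipped
    Target.StretchDraw(R, Source);      // only the elliptical part of the image gets drawn
  finally
    SelectClipRgn(Target.Handle, 0);    // remove the clipping region again
    DeleteObject(Rgn);
  end;
end;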
Canvas.Ellipse(0, 0, 10, 20); // not a rectangle
I use so-called runlists for this (generalized shapes and blitting them); I've also seen them called warplists. A shape is encoded as a runlist by defining it as a set of horizontal lines, each line being two integer values (skip n pixels, copy n pixels).
This means you can draw entire lines, leaving you with only "height" draw operations.
So a rectangle is defined like this: the first "skip" value moves from the top-left of the bitmap to the rectangle's top-left corner (xorg, yorg). The rectangle is width_rect pixels wide, and adding width_pixels moves one scanline further down; width_pixels can be wider than the width of the picture (alignment bytes):
(yorg*width_pixels+xorg , width_rect),
(width_pixels-width_rect , width_rect),
(width_pixels-width_rect , width_rect),
(width_pixels-width_rect , width_rect),
..
..
This way you can make your drawing routines pretty generic, and for simple, regular shapes (rects, circles) it takes only minor math to precalculate these lists. It simplified my shape handling enormously.
However, I draw directly to bitmaps, not to canvases, so I can't help with that part. A primitive that efficiently draws a row, and a way to extract a row from a graphic, should be enough.
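To make the idea concrete, here is a rough Delphi sketch; the types and the blit routine are my own illustration of the technique, not the code referred to above:

type
  TRun = record
    Skip: Integer;   // destination pixels to leave untouched
    Copy: Integer;   // pixels to copy from the source
  end;
  TRunList = array of TRun;
  TPixelBuffer = array of Cardinal;  // flat 32-bit pixel buffer, width_pixels per row

// Walk the runlist once; Src and Dst are assumed to share the same layout
// (same width_pixels), so a single running offset serves both buffers.
procedure BlitRunList(const Runs: TRunList; const Src: TPixelBuffer; var Dst: TPixelBuffer);
var
  i, n, Offset: Integer;
begin
  Offset := 0;
  for i := 0 to High(Runs) do
  begin
    Inc(Offset, Runs[i].Skip);        // skip: the background shows through
    for n := 1 to Runs[i].Copy do     // copy: take the source pixels
    begin
      Dst[Offset] := Src[Offset];
      Inc(Offset);
    end;
  end;
end;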
WHAT I AM TRYING TO DO
I am trying to draw multiple graphics to a TImage. These graphics consist of ordered layers with flood fills and lines.
I use multiple buffers to ensure ordering and double buffering.
WHAT I AM DOING
procedure DrawScene();
var
  ObjLength, LineLength, Filllength, Obj, lin, angle, i: integer;
  Npoints: array[0..1] of TPoint;
  Objmap: TBitmap;
  wholemap: TBitmap;
begin
  wholemap := TBitmap.Create;
  wholemap.Width := area;
  wholemap.Height := area;
  ObjLength := length(Objects);
  for Obj := 0 to (ObjLength - 1) do
    if objects[Obj].Visible then
    begin
      // create object bitmap
      if Objects[obj].Tag = 'FBOX' then
      begin
        Objmap := TBitmap.Create;
        Objmap.Width := area;
        Objmap.Height := area;
        Objmap.Transparent := true;
        Objmap.Canvas.Rectangle(
          (objects[obj].Boundleft - 4) + objects[obj].Position.x,
          area - ((objects[obj].boundtop + 4) + objects[obj].Position.y),
          (objects[obj].boundright + 4) + objects[obj].Position.x,
          area - ((objects[obj].boundbottom - 4) + objects[obj].Position.y));
      end;
      // draw object
      LineLength := length(objects[Obj].Lines) - 1;
      angle := objects[Obj].Rotation;
      for lin := 0 to LineLength do
      begin
        for i := 0 to 1 do
        begin
          Npoints[i] := PointAddition(RotatePoint(objects[obj].Lines[lin].Point[i], angle),
            objects[obj].Position, false);
        end;
        Objmap := DrawLine(Npoints[0].x, Npoints[0].y, Npoints[1].x, Npoints[1].y,
          objects[obj].Lines[lin].Color, Objmap);
      end;
      Filllength := length(objects[Obj].Fills) - 1;
      for i := 0 to Filllength do
      begin
        Npoints[0] := PointAddition(RotatePoint(objects[Obj].Fills[i].Point, objects[Obj].Rotation),
          objects[Obj].Position, false);
        Objmap := fillpoint(Npoints[0].x, Npoints[0].y, objects[Obj].Fills[i].color, Objmap);
      end;
      // write object to step frame
      wholemap.Canvas.Draw(0, 0, Objmap);
      Objmap.Free;
    end;
  // write step frame to visible canvas
  mainwindow.bufferim.Canvas.Draw(0, 0, wholemap);
  mainwindow.RobotArea.Picture.Graphic := mainwindow.bufferim.Picture.Graphic;
  wholemap.Free;
end;
WHAT I EXPECT
I expect to see each image object layered on top of one another with each image layer being the complete image for that layer.
In my example it is a robot with a flag behind it.
The flag is drawn first and then the robot.
WHAT I GET(on a pc)
On a PC I get what I expect and all appears to be correct.
WHAT I GET(on a laptop)
On nearly every laptop, and on some PCs, I only see the robot.
I put in some statements to see whether it is drawing the flag, and it does; the game can even interact with the flag in the correct manner.
Further investigation showed me that only the last image drawn by DrawScene was being shown, and that when images were drawn directly to the whole canvas everything appeared (which cannot be done here, because of overlapping fill layers).
WHAT I THINK IS HAPPENING
So what I deduced is that the TImage's Transparent property is not working, or is being computed differently, on some machines.
I did a test to prove this: I made a canvas red, then drew onto it at 0,0 a TImage with Transparent=true containing just one dot in the middle. The result was a white canvas with a dot in the middle.
I am assuming, and my findings indicate, that machines with very basic graphics drivers treat null or transparent as white, whereas more powerful machines treat null or transparent as transparent.
This seems to be a bit of a failure, given that the TImage.Transparent property was true.
EDIT:
UPDATE!!!
It would appear that on ATI graphics cards, if a colour is "null" then it is interpreted as pf24bit, and therefore there is no alpha channel and no transparency.
On NVIDIA cards it is taken as pf32bit, and null is treated as transparent.
The obvious way to get around that would be to set the bitmap's PixelFormat to pf32bit, but that still does not work.
I then assumed that maybe that is not enough and that I should make sure the background is explicitly SET to transparent rather than being left as null, but there are no tools (that I can see) inside TImage to set a colour with alpha. All canvas drawing functions take a TColor, which is 24-bit RGB, and only a TColorRef with 32-bit RGBA would do...
Is there a way of drawing with alpha 0?
WHAT I THINK I NEED TO KNOW
How to force the Transparent property to work on all machines
or a way to make laptops not paste in transparent as white
Or a way to achieve the same result.
Anyone have any solutions or ideas?
I have been using a graphics library (AggPas) to help with drawing, and one of the things I've noticed is that I always need a line like Bitmap.PixelFormat := pf32bit to get it to draw transparencies.
Having said that, I use TransformImage from the AggPas library to copy the image with a transparent background onto another one, and AggPas only accepts pf24bit or pf32bit as pixel formats (otherwise it doesn't attach to the bitmap).
I've seen similar behaviour on different machines in the past (a few years back).
They were caused by different video cards and drivers.
Either the NVIDIA or the ATI ones were getting the wrong results, but I forget which.
It could be reproduced on both laptops and regular PCs.
So: what video cards and drivers do you use?
--jeroen
I would explicitly set the pixel format to pf32bit for each bitmap you create, to ensure the problem isn't colors being converted from 32 to 16 bit (if that's the native video resolution the bitmaps get created at), which might interfere with how the transparency works. Also, I've had better luck explicitly setting the transparent color in the past.
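Applied to the DrawScene code from the question, that advice might look roughly like this (clFuchsia is just an example key colour):

Objmap := TBitmap.Create;
Objmap.PixelFormat := pf32bit;           // don't let the driver pick 16/24 bit
Objmap.Width := area;
Objmap.Height := area;
Objmap.Transparent := True;
Objmap.TransparentColor := clFuchsia;    // an explicit key colour the artwork never uses
Objmap.Canvas.Brush.Color := clFuchsia;
Objmap.Canvas.FillRect(Rect(0, 0, area, area)); // fill with the key colour instead of leaving it "null"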
Another option, perhaps a better one in the long run, is to instead set the alpha channel on the images (i.e., use PNG files) and use the GDI function AlphaBlend() to draw the graphics.
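A hedged sketch of that route; AlphaBlend() is declared in the Windows unit in recent Delphi versions, and the source bitmap is assumed to be 32-bit with premultiplied alpha, which is what AlphaBlend() expects:

procedure DrawWithAlpha(Dest: TCanvas; X, Y: Integer; Src: TBitmap);
var
  BF: TBlendFunction;
begin
  BF.BlendOp := AC_SRC_OVER;
  BF.BlendFlags := 0;
  BF.SourceConstantAlpha := 255;   // rely on per-pixel alpha only
  BF.AlphaFormat := AC_SRC_ALPHA;  // the source carries an alpha channel
  Windows.AlphaBlend(Dest.Handle, X, Y, Src.Width, Src.Height,
    Src.Canvas.Handle, 0, 0, Src.Width, Src.Height, BF);
end;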