I use Delphi 7 and DelphiX so I can use DirectDraw and create a surface to play with pixels.
So in a button click handler I use this:
for i := 1 to 100 do
  for j := 1 to 100 do
    dxdraw1.Surface.Pixels[i, j] := 250;
where dxdraw1 is a DirectDraw surface.
The problem is that it works, but the result only shows after I hover another window above my program, as if it is not updating the rectangle area with the pixels.
(I press the button, the CPU usage goes up briefly, and the rectangle remains black until I hover another window over it.)
Also, it's slow. I read somewhere this:
"...
DXDraw.Surface.Canvas.Pixels[X,Y]:=clBlue;
DXDraw.Surface.Canvas.Release;
..."
and after this:
"...
Just remember that this function is extremely slow.
It's locking and unlocking the surface on every pixel set... not very usable.
PixelDX and turboPixel don't; you manually lock the surface, do all of your
pixel operations and then unlock it... 1000 times faster.
..."
How do I use these functions? I can't find them (and I have no idea, since I am a beginner at this).
How do I lock first and then unlock?
UPDATE:
OK, I used unDelphiX, but I can't see the result in the rectangle; it still remains black:
procedure TForm1.BitBtn1Click(Sender: TObject);
var
  i, j: Integer;
begin
  dxdraw1.Surface.Lock;
  for i := 1 to 150 do
    for j := 1 to 150 do
      dxdraw1.Surface.Pixel[i, j] := 100;
  dxdraw1.Surface.Unlock;
end;
UPDATE 2:
It works, but this bizarre thing happens: to see the result I have to "hide" the app window under another window or program, and then minimize that window to see the result. If I move the app then the result is gone, and it stays blank. Any ideas?
I'm not up to date on DelphiX, but I have used the raw DirectX libs a lot in my Life32 program.
Here's how it works behind the scenes:
1. You get a lock on the screen.
2. This returns a pointer to memory that you can write to; whatever you write into this memory block will appear on the screen.
3. The entire system will be frozen (!) until you call unlock.
The memory pointer in step 2 can be regarded as a pointer directly into video memory.
It is up to you to know:
- bits per pixel;
- number of pixels in the x and y direction;
- padding bytes at the end of each scan line;
- if the bits per pixel are less than 24, you will not be working with RGB values, but with a palette;
- if you write past the end of the screen, unpredictable stuff will happen (!).
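To make those points concrete, here is a minimal sketch in DelphiX terms, assuming a 32-bit surface and a Lock overload that fills a TDDSurfaceDesc (field names as in the DirectDraw headers; check your DelphiX version for the exact signature):
procedure TForm1.DrawTestBlock;
type
  PPixel32 = ^LongWord; // helper type for one 32-bit pixel
var
  Desc: TDDSurfaceDesc;
  Row: PPixel32;
  x, y: Integer;
begin
  if DXDraw1.Surface.Lock(Desc) then
  try
    for y := 0 to 99 do
    begin
      // lPitch is the true byte length of a scan line, padding included;
      // never assume it equals Width * BytesPerPixel
      Row := PPixel32(Integer(Desc.lpSurface) + y * Desc.lPitch);
      for x := 0 to 99 do
      begin
        Row^ := $00FF0000; // raw XRGB value; only valid on a 32-bit surface
        Inc(Row);          // advances by 4 bytes, one pixel
      end;
    end;
  finally
    DXDraw1.Surface.Unlock;
  end;
end;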
DelphiX is obsolete
I recommend you use unDelphiX: http://www.micrel.cz/Dx/
It is much more advanced and has many more features.
You can simply use:
DXDraw1.Surface.Lock; //special version of Lock without any parameters
try
  for xy := 0 to Min(DXDraw1.Height, DXDraw1.Width) - 1 do
  begin
    //silly example, unDelphiX has line drawing routines
    DXDraw1.Surface.Pixel[xy, xy] := TheColor;
  end; {for xy}
finally
  DXDraw1.Surface.Unlock;
end;
Because you've called Lock prior to accessing the pixel array, it will not bracket every pixel access in a lock/unlock pair.
Warning
Always spend the absolute minimum amount of time possible in between lock and unlock calls, because your entire system will be frozen while the display is locked.
Also make sure that Delphi does not break on exceptions when debugging the code inside the lock block, because your system will be frozen with no way to revive it.
Note
There is still some overhead in the call to Pixel, because the color value gets translated into a format dictated by the BytesPerPixel of your display, and the [x,y] coordinates get translated into a memory address.
In Life32 I resorted to writing 8 different drawing routines, one for every possible pixel depth, and to doing my own drawing of 16x16 blocks, thereby avoiding address translation per individual pixel.
Don't draw until you have to
Because DirectX is so fast, it's easy to draw much faster than the eye can see; use a high-resolution timer (included in unDelphiX) to limit drawing to x frames per second.
Do not use Windows messages like WM_PAINT to do your main drawing, but do redraw when you receive an incoming WM_PAINT message.
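For example, a sketch of timer-driven drawing, assuming the stock TDXTimer component (DXTimer1) alongside the TDXDraw (DXDraw1) on the form:
procedure TForm1.FormCreate(Sender: TObject);
begin
  DXTimer1.Interval := 16; // milliseconds, roughly 60 frames per second
  DXTimer1.Enabled := True;
end;

procedure TForm1.DXTimer1Timer(Sender: TObject; LagCount: Integer);
begin
  if not DXDraw1.CanDraw then Exit;
  // ...draw the whole frame to DXDraw1.Surface here...
  DXDraw1.Flip; // present the finished frame on screen
end;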
Related
I have an odd bug in FMX that has all the hallmarks of a pointer overrun or a hardware-specific bug, which I haven't been able to track down. Small apps to reproduce it don't (yet). I have code snippets below, but since I haven't managed to reproduce this in a small application, they are only snippets.
The app loads some PNG files, and then creates in-memory bitmaps based on colour-coding in the PNG files - that is, it will create an in-memory bitmap which is big enough to bound all red areas of the original, and is initially blank. There is a byte array the same size as the in-memory bitmap which is a mask (zero, non-zero) indicating if that pixel corresponds to a red (say) area in the original or not. The user can paint in the app, painting on a temporary display bitmap, and then when they let go the mouse button that bitmap is scanned against the byte array to write on the in-memory bitmap.
With me so far? Basically: colour-coded subsets of an original, masks created for the colour areas, and painting on the subset is masked.
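A hypothetical sketch of that masked write-back step, assuming names Painted (the temporary display bitmap), Target (the in-memory bitmap) and Mask (the byte array), and using the GetBitmapColor/SetBitmapColor helpers quoted further down (note that the TMapAccess member names changed across XE versions, maRead versus Read):
procedure ApplyMaskedPaint(const Painted, Target: TBitmap; const Mask: TBytes);
var
  SrcData, DstData: TBitmapData;
  X, Y: Integer;
begin
  if Painted.Map(TMapAccess.Read, SrcData) then
  try
    if Target.Map(TMapAccess.ReadWrite, DstData) then
    try
      for Y := 0 to Target.Height - 1 do
        for X := 0 to Target.Width - 1 do
          // a non-zero mask byte means this pixel belongs to the colour-coded area
          if Mask[Y * Target.Width + X] <> 0 then
            SetBitmapColor(@DstData, X, Y, GetBitmapColor(@SrcData, X, Y));
    finally
      Target.Unmap(DstData);
    end;
  finally
    Painted.Unmap(SrcData);
  end;
end;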
This scheme works fine on XP (GDI+ canvas), on Win7 (D2D canvas), on Win8.1 and on most Win10 machines. However, on two Win10 machines, one with an Intel HD 4600 and another with an Intel Iris 5000, a client gets odd artifacts on the bitmap after the masking-and-pixel-setting step.
The artifacts are rectangles scattered apparently randomly over the bitmap, either large (100x20px, say) or small (10x8, say). I've also seen screenshots where parts of the rest of the UI, like button glyphs, are also present on the bitmap. Two examples:
In one screenshot the user has already painted the dark red area and painted the lighter colour somewhere else; small flecked rectangles appear.
In another the user has painted the broad stripe down the middle; the dark red and lighter red rectangles are examples of the artifacts after masking and drawing to the bitmap.
To me that sounds like I'm writing pixels at the wrong stride or something. I've rigorously code-inspected for off-by-one errors in array or bitmap data access, and I'm using the following code to get or set pixels in bitmap data:
function GetBitmapColor(const M: PBitmapData; const X, Y: Integer): TAlphaColor;
begin
  Result := PixelToAlphaColor(
    @PAlphaColorArray(M.Data)[Y * (M.Pitch div PixelFormatBytes[M.PixelFormat]) + X],
    M.PixelFormat);
end;

procedure SetBitmapColor(const M: PBitmapData; const X, Y: Integer; const C: TAlphaColor);
begin
  AlphaColorToPixel(C,
    @PAlphaColorArray(M.Data)[Y * (M.Pitch div PixelFormatBytes[M.PixelFormat]) + X],
    M.PixelFormat);
end;
Those helpers are based on the example code for accessing bitmap data in the Seattle documentation. Assertions that X and Y are in a valid range (within the bitmap size) are all OK. While this happens on the OS and hardware given above, it does not happen for the same OS (Win10) with other cards (another Intel HD, an Nvidia, etc.), on a Win10 tablet, on a Win10 VM I tried, or on my Win7 development machine. As far as buffer overruns are concerned, compiling with range checks on does not give any errors, nor do any asserts of my own checking valid X/Y coordinates against the bitmap size.
The error occurs with both XE6 and Seattle on the same machines.
I can't reproduce this. They're in the process of setting up remote access to this machine but it's still going to be tricky to figure out what's going on, so I wonder if anyone has seen something like this before, or has concrete suggestions of specific ways to check for bitmap pointer overruns or similar?
I am trying to make a 2D game in Delphi XE5/XE6 with FireMonkey, and I need to keep it multi-platform: I need Windows and Android builds. For now, the game is just drawn with lines, circles, rectangles and other polygons, and I use a Canvas (a TPaintBox covering the whole screen, painting in its OnPaint event). The problem is that performance seems extremely low, even though FireMonkey should use GPU acceleration. I made this performance test, which draws 10,000 rectangles at random XY positions:
procedure Paint(Canvas: TCanvas);
var
  I: Integer;
  X, Y: Single;
begin
  Canvas.BeginScene;
  Canvas.Clear($ffffffff);
  Canvas.Fill.Color := $ffff0000;
  for I := 0 to 10000 do
  begin
    X := Random(500);
    Y := Random(400);
    Canvas.FillRect(TRectF.Create(X, Y, X + 20, Y + 20), 0, 0, [], 1);
  end;
  Canvas.EndScene;
end;
I measured the time spent in this function and got these results:
Windows PC - Core2Duo 3.2 GHz, GeForce 660 Ti (pretty strong for such a simple task) - it took 20 milliseconds.
Android phone - HTC One X (4-core 1 GHz Tegra 3 CPU/GPU) - it took 50-60 milliseconds (not as big a difference from the PC as I thought).
However, I think both devices should be able to draw not 10,000 but 100,000 or even millions of rectangles (my PC with the GTX 660 Ti for sure) in one frame (assuming at least 25 FPS, so in about 40 milliseconds), but I can't reach that with FireMonkey, even though its creators brag that it is lightning fast and GPU accelerated. And if I replace the rectangle with a circle (the FillArc method), drawing 100 circles takes about the same time as 10,000 rectangles. The same goes for outputting letters (one-character texts). What am I doing wrong? Is there a mistake I can't see? Is it just a problem with FireMonkey? Or is this normal?
What should I choose instead of Canvas for fast drawing while keeping compatibility with Windows and Android?
The FireMonkey canvas on Windows is probably not using the GPU. If you are using XE6 you can set the global variable FMX.Types.GlobalUseGPUCanvas to True in the initialization section (see the documentation).
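A minimal sketch of that initialization (XE6; the project and unit names are hypothetical), since the flag must be set before the first canvas is created:
program MyGame;

uses
  FMX.Forms,
  FMX.Types,
  MainFormUnit in 'MainFormUnit.pas' {MainForm};

{$R *.res}

begin
  GlobalUseGPUCanvas := True; // must run before any form/canvas exists
  Application.Initialize;
  Application.CreateForm(TMainForm, MainForm);
  Application.Run;
end.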
Otherwise, in XE5, stick a TViewPort3D on your form, stick a TLayer3D in the TViewPort3D and change its Projection property to pjScreen, then stick your TPaintBox on the TLayer3D.
Another alternative is an OpenGL canvas unit.
You could also process your loop in parallel, but that will only make your test faster, not necessarily your real-world game (see: Parallel loop in Delphi).
When you draw a circle on the canvas (i.e. the GPU canvas), you in fact draw around 50 small triangles; that is how the GPU canvas works. It's even worse with, for example, a rectangle with rounded corners. I also found that Canvas.BeginScene and Canvas.EndScene are very slow operations. You can try setting Form.Quality to HighPerformance to avoid antialiasing, but I didn't see it really change the speed.
I am using a Texture2D object as a tile sheet. It's one-dimensional: one tile tall and however many tiles long.
I'm just curious whether there's any disadvantage (aside from being more difficult to edit, I guess) to having them laid out like this as opposed to a more compact way, e.g. instead of 64x1 tiles, make it 16x16. It wouldn't be very hard to change, but I figure why bother if there's no harm in having a long image!
The only real disadvantage is that you'll hit the maximum width (2048 on the Reach profile, 4096 on HiDef) of the texture with fewer sprites.
This isn't really a problem because, when it happens, it's so trivial to add support for more rows.
Seeing as you've asked the question, there is one obscure, performance-related thing that is at least interesting to be aware of, even though you almost never need to worry about it.
A texture's pixel data is stored in CPU memory in row-by-row order. So if your tiles are stored horizontally and you try to read or write the pixels of a single tile (i.e. GetData and SetData), you will be accessing many small, spread-out sections of memory. If you stored your tiles vertically, each tile would occupy a single large section of memory; that has better cache coherency and can be copied with fewer operations (i.e. it's faster).
This is not a problem on the GPU, where textures are stored with a layout such that all nearby pixels (not just the ones to the left and right) are stored nearby in memory.
Having a long image can make it easier to 'index' sprites in the sheet. For example, in your case, the image at index i is located at coordinates (w * i, 0), where w is the width of one sprite. Having it in a square would mean that you need slightly more complex math to find the right sprite: it would be at (w * (i % 16), h * (i / 16)), using integer division. (By the way, those coordinates are in pixels.)
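Spelled out in code (Delphi syntax to match the rest of this page; W and H are the sprite size in pixels and Columns the sheet width in sprites, 16 in the example above):
X := W * (I mod Columns); // column within the sheet
Y := H * (I div Columns); // row within the sheet (integer division)
// for a one-row sheet this reduces to X := W * I; Y := 0;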
So use the long image! It'll keep your code cleaner.
I'm using Graphics32 for image processing. Looking at its capabilities, it strikes me that I've yet to see a proper implementation of a clipping mask. I do see the term "clipping" pop up here and there, but it seems to refer to something else.
Simply put, I need one layer to function as a "peeking hole" to another; layer A should be projected onto layer B, but only where layer B is visible. (I see no further need to redefine what a clipping mask is.)
If it were just the bitmap of that other layer that I'd like to present, it wouldn't be so hard to do (then I could use this trick), but what complicates things is that the bitmap of a layer does not tell much about what the layer would display; the layer can be:
(partially) invisible (when out of the view)
moved/stretched + optionally resampled
rotated
all with no effect on its bitmap.
Is it actually so that there is no ready implementation for this? Any suggestions for doing this myself?
Progress
I found some useful elements in the source of Graphics32. For example, using this declaration:
type
  TLayerAccess = class(TBitmapLayer);
to gain access to protected methods, I can call TLayerAccess(ABitmapLayer).Paint(ABitmap32) to have just this layer painted to a bitmap, exactly as it would be to the screen.
Have a look at TByteMap and its WriteTo method.
I looked into this myself a year ago and quickly resorted to using a black layer with transparent parts; it suited my needs at the time. But what you want is possible.
You want to couple one TBitmapLayer to another and consider it its mask. I want to avoid such references, however (and their potential problems and Graphics32 rework), and see this only as a last resort.
There is a way to do it without a dedicated TBitmapLayer, using your own pixel combiner, though it doesn't know about TBitmapLayers and their XY pixels.
To properly occlude (or leave out) parts of a TBitmapLayer while drawing to screen, you could create a method of type TPixelCombineEvent, assign it to the layer bitmap's OnPixelCombine, and set its DrawMode to dmCustom.
Inside that TPixelCombineEvent method you decide what pixel results, given the background B, the foreground F and the current master alpha M.
procedure TMyObj.MyPixCombine(F: TColor32; var B: TColor32; M: TColor32);
begin
  if not PseudoThisPixelShouldBeMasked then B := F; // ugly and aliased
end;
The problem here is that (the pseudocode) PseudoThisPixelShouldBeMasked doesn't really know which pixel this concerns and whether it is inside a mask, so you'd have to extract that information from a component of F, such as its alpha value.
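A sketch of such an alpha-keyed combiner (an assumption of mine, not tested code; it relies on TColor32 being $AARRGGBB in Graphics32, so the mask can ride in the foreground's alpha byte):
procedure TMyObj.MaskPixCombine(F: TColor32; var B: TColor32; M: TColor32);
begin
  if (F shr 24) > 0 then // any non-zero alpha: the pixel is inside the mask
    B := F;              // hard cut-out; blend on the alpha byte for smoother edges
end;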
Back then I opted for the quite fast B := ColorMin(F, B), where F is black or white. This layer is always on top, but it results in black instead of transparent masks.
That is because any rendering to the TBitmapLayer would destroy the mask data, so I need to reapply it. Using a TByteMap, however, as suggested by iamjoosy (who downvoted that?), might be interesting; perhaps the performance penalty turns out to be negligible.
WHAT I AM TRYING TO DO
I am trying to draw multiple graphics to a TImage. These graphics consist of ordered layers with flood fills and lines.
I use multiple buffers to ensure ordering and double buffering.
WHAT I AM DOING
procedure DrawScene();
var
  ObjLength, LineLength, FillLength, Obj, Lin, Angle, i: Integer;
  NPoints: array[0..1] of TPoint;
  Objmap: TBitmap;
  Wholemap: TBitmap;
begin
  Wholemap := TBitmap.Create;
  Wholemap.Width := Area;
  Wholemap.Height := Area;
  ObjLength := Length(Objects);
  for Obj := 0 to ObjLength - 1 do
    if Objects[Obj].Visible then
    begin
      // create object bitmap (note: Objmap is only created for 'FBOX' objects)
      if Objects[Obj].Tag = 'FBOX' then
      begin
        Objmap := TBitmap.Create;
        Objmap.Width := Area;
        Objmap.Height := Area;
        Objmap.Transparent := True;
        Objmap.Canvas.Rectangle(
          (Objects[Obj].Boundleft - 4) + Objects[Obj].Position.X,
          Area - ((Objects[Obj].Boundtop + 4) + Objects[Obj].Position.Y),
          (Objects[Obj].Boundright + 4) + Objects[Obj].Position.X,
          Area - ((Objects[Obj].Boundbottom - 4) + Objects[Obj].Position.Y));
      end;
      // draw object
      LineLength := Length(Objects[Obj].Lines) - 1;
      Angle := Objects[Obj].Rotation;
      for Lin := 0 to LineLength do
      begin
        for i := 0 to 1 do
          NPoints[i] := PointAddition(RotatePoint(Objects[Obj].Lines[Lin].Point[i], Angle),
            Objects[Obj].Position, False);
        Objmap := DrawLine(NPoints[0].X, NPoints[0].Y, NPoints[1].X, NPoints[1].Y,
          Objects[Obj].Lines[Lin].Color, Objmap);
      end;
      FillLength := Length(Objects[Obj].Fills) - 1;
      for i := 0 to FillLength do
      begin
        NPoints[0] := PointAddition(RotatePoint(Objects[Obj].Fills[i].Point,
          Objects[Obj].Rotation), Objects[Obj].Position, False);
        Objmap := FillPoint(NPoints[0].X, NPoints[0].Y, Objects[Obj].Fills[i].Color, Objmap);
      end;
      // write object to step frame
      Wholemap.Canvas.Draw(0, 0, Objmap);
      Objmap.Free;
    end;
  // write step frame to visible canvas
  MainWindow.BufferIm.Canvas.Draw(0, 0, Wholemap);
  MainWindow.RobotArea.Picture.Graphic := MainWindow.BufferIm.Picture.Graphic;
  Wholemap.Free;
end;
WHAT I EXPECT
I expect to see each image object layered on top of one another, with each image layer being the complete image for that layer.
In my example it is a robot with a flag behind it;
the flag is drawn first and then the robot.
WHAT I GET (ON A PC)
On a PC I get what I expect, and all appears to be correct.
WHAT I GET (ON A LAPTOP)
On nearly every laptop and some PCs I only see the robot.
I put in some statements to check whether it is drawing the flag, and it is; the game can even interact with the flag in the correct manner.
Further investigation showed me that it was only showing the last image drawn by DrawScene, and that when images were drawn directly to the whole canvas everything appeared (this cannot be done, because of overlapping fill layers).
WHAT I THINK IS HAPPENING
So what I deduced is that the TImage.Transparent property is not working, or is being computed differently on some machines.
I did a test to prove this: I made a canvas red, then at 0,0 I drew onto that red canvas a TImage with Transparent = True containing just one dot in the middle. The result was a white canvas with a dot in the middle.
I am assuming, and my findings indicate, that machines with very basic graphics drivers seem to treat null/transparent as white, whereas more powerful machines treat null/transparent as transparent.
This seems to be a bit of a failure, given that the TImage.Transparent property was True.
EDIT / UPDATE:
It would appear that on ATI graphics cards, if a colour is "null" it is interpreted as pf24bit, and therefore there is no alpha channel and no transparency.
On Nvidia cards it is taken as pf32bit, and null is treated as transparent.
The obvious way to get around that would be to set the bitmap type to pf32bit, but that still does not work.
I then assumed that maybe that is not enough and I should make sure that the background is SET to transparent rather than being left as null, but there are no tools (that I can see) inside TImage to set a colour with alpha: all canvas drawing functions take a TColor, which is 24-bit RGB, and only a 32-bit RGBA value would do...
Is there a way of drawing with alpha 0?
WHAT I THINK I NEED TO KNOW
How to force the Transparent property to work on all machines,
or a way to stop laptops painting transparent as white,
or a way to achieve the same result.
Anyone have any solutions or ideas?
I have been using a graphics library (AggPas) to help with drawing, and one of the things I've noticed is that I always need a line Bitmap.PixelFormat := pf32bit to get it to draw transparencies.
Having said that, I use TransformImage from the AggPas library to copy the image with a transparent background to another one, and AggPas only accepts pf24bit or pf32bit as pixel formats (otherwise it doesn't attach to the bitmap).
I've seen similar behaviour on different machines in the past (a few years back).
It was caused by different video cards and drivers.
Either the Nvidia or the ATI ones were getting the wrong results, but I forget which.
It could be reproduced on both laptops and regular PCs.
So: what video cards and drivers do you use?
--jeroen
I would explicitly set the bitmap format to pf32bit for each bitmap you create, to ensure the problem isn't a conversion of colors from 32 to 16 bit (if that's the native video mode the bitmaps are created for), which might interfere with how the transparency works. Also, I've had better luck specifically setting the transparency color in the past.
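A sketch of both suggestions applied to the question's Objmap (clFuchsia is just an example key colour; any colour the artwork never uses will do):
Objmap := TBitmap.Create;
Objmap.PixelFormat := pf32bit;        // don't rely on the driver's default
Objmap.Width := Area;
Objmap.Height := Area;
Objmap.TransparentColor := clFuchsia; // an explicit key colour...
Objmap.Transparent := True;
Objmap.Canvas.Brush.Color := clFuchsia;
Objmap.Canvas.FillRect(Rect(0, 0, Area, Area)); // ...actually painted in, not left "null"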
Another option, perhaps a better one in the long run, is to instead set the alpha channel on the images (i.e. use PNG files) and use the GDI function AlphaBlend() to draw the graphics.
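A minimal sketch of that AlphaBlend route, reusing the question's names (Wholemap, Objmap, Area); the source must be a pf32bit bitmap whose pixels carry premultiplied alpha, and in older Delphi versions AlphaBlend may need to be imported from msimg32.dll rather than taken from Windows.pas:
var
  BF: TBlendFunction;
begin
  BF.BlendOp := AC_SRC_OVER;
  BF.BlendFlags := 0;
  BF.SourceConstantAlpha := 255;  // rely on per-pixel alpha only
  BF.AlphaFormat := AC_SRC_ALPHA; // the source carries an alpha channel
  AlphaBlend(Wholemap.Canvas.Handle, 0, 0, Area, Area,
    Objmap.Canvas.Handle, 0, 0, Area, Area, BF);
end;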