Drawing SharpDX effects on a bitmap is too slow; what am I doing wrong?

So basically, I need performance. Currently at my job we use GDI+ graphics to draw bitmaps. GDI+ Graphics has a method called DrawImage(Bitmap, Point[]); the array contains 3 points, and the image is rendered with a skew effect.
Here is an image of what a skew effect looks like:
[image: Skew effect]
At work, we need to render between 5000 and 6000 different images every single frame, which takes ~80 ms.
Now I thought of using SharpDX, since it provides GPU acceleration. I use Direct2D since all I need is in 2 dimensions. However, the only way I found to reproduce the skew effect is to use SharpDX effects and calculate a matrix to draw the initial bitmap with the skew applied (I will provide the code below). The rendered image is exactly the same as with GDI+, and it is what I want. The only problem is that it takes 600-700 ms to render the 5000-6000 images.
Here is my SharpDX code.
To initialize the device:
private void InitializeSharpDX()
{
    swapchaindesc = new SwapChainDescription()
    {
        BufferCount = 2,
        ModeDescription = new ModeDescription(this.Width, this.Height, new Rational(60, 1), Format.B8G8R8A8_UNorm),
        IsWindowed = true,
        OutputHandle = this.Handle,
        SampleDescription = new SampleDescription(1, 0),
        SwapEffect = SwapEffect.Discard,
        Usage = Usage.RenderTargetOutput,
        Flags = SwapChainFlags.None
    };

    SharpDX.Direct3D11.Device.CreateWithSwapChain(DriverType.Hardware, DeviceCreationFlags.BgraSupport | DeviceCreationFlags.Debug, swapchaindesc, out device, out swapchain);
    SharpDX.DXGI.Device dxgiDevice = device.QueryInterface<SharpDX.DXGI.Device>();
    surface = swapchain.GetBackBuffer<Surface>(0);

    factory = new SharpDX.Direct2D1.Factory1(FactoryType.SingleThreaded, DebugLevel.Information);
    d2device = new SharpDX.Direct2D1.Device(factory, dxgiDevice);
    d2deviceContext = new SharpDX.Direct2D1.DeviceContext(d2device, SharpDX.Direct2D1.DeviceContextOptions.EnableMultithreadedOptimizations);
    bmpproperties = new BitmapProperties(new SharpDX.Direct2D1.PixelFormat(SharpDX.DXGI.Format.B8G8R8A8_UNorm, SharpDX.Direct2D1.AlphaMode.Premultiplied), 96, 96);

    d2deviceContext.AntialiasMode = AntialiasMode.Aliased;
    bmp = new SharpDX.Direct2D1.Bitmap(d2deviceContext, surface, bmpproperties);
    d2deviceContext.Target = bmp;
}
And here is the code I use to recalculate every image's position each frame (each time I zoom in or out with the mouse, I ask for a redraw). You can see in the code two loops over 5945 images where I ask to draw the image. With no effects it takes 60 ms; with effects it takes up to 700 ms, as I mentioned before:
private void DrawSkew()
{
    d2deviceContext.BeginDraw();
    d2deviceContext.Clear(SharpDX.Color.Blue);

    // draw skew effect to 5945 images using SharpDX (370ms)
    for (int i = 0; i < 5945; i++)
    {
        AffineTransform2D effect = new AffineTransform2D(d2deviceContext);
        PointF[] points = new PointF[3];
        points[0] = new PointF(50, 50);
        points[1] = new PointF(400, 40);
        points[2] = new PointF(40, 400);
        effect.SetInput(0, actualBmp, true);

        float xAngle = (float)Math.Atan(((points[1].Y - points[0].Y) / (points[1].X - points[0].X)));
        float yAngle = (float)Math.Atan(((points[2].X - points[0].X) / (points[2].Y - points[0].Y)));

        Matrix3x2 Matrix = Matrix3x2.Identity;
        Matrix3x2.Skew(xAngle, yAngle, out Matrix);
        Matrix.M11 = Matrix.M11 * (((points[1].X - points[0].X) + (points[2].X - points[0].X)) / actualBmp.Size.Width);
        Matrix.M22 = Matrix.M22 * (((points[1].Y - points[0].Y) + (points[2].Y - points[0].Y)) / actualBmp.Size.Height);
        effect.TransformMatrix = Matrix;

        d2deviceContext.DrawImage(effect, new SharpDX.Vector2(points[0].X, points[0].Y));
        effect.Dispose();
    }

    // draw no effects, only actual bitmap 5945 times using SharpDX (60ms)
    for (int i = 0; i < 5945; i++)
    {
        d2deviceContext.DrawBitmap(actualBmp, 1.0f, BitmapInterpolationMode.NearestNeighbor);
    }

    d2deviceContext.EndDraw();
    swapchain.Present(1, PresentFlags.None);
}
After a lot of benchmarking, I realized the line that makes it really slow is:
d2deviceContext.DrawImage(effect, new SharpDX.Vector2(points[0].X, points[0].Y));
My guess is that my code or my setup does not use SharpDX's GPU acceleration the way it should, and that is why it is so slow. I would expect SharpDX to perform at least better than GDI+ for this kind of work.
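For reference, one cheaper path worth sketching (an assumption, not code from the original post): Direct2D applies the render target's 3x2 transform to DrawBitmap, so the same skew can be drawn without allocating an AffineTransform2D effect per image. A minimal sketch reusing the question's matrix math; the method name is illustrative:

private void DrawOneSkewed(SharpDX.Direct2D1.DeviceContext dc, SharpDX.Direct2D1.Bitmap bitmap, PointF[] points)
{
    // Hedged sketch: apply the skew via the context's transform instead of an effect.
    float xAngle = (float)Math.Atan((points[1].Y - points[0].Y) / (points[1].X - points[0].X));
    float yAngle = (float)Math.Atan((points[2].X - points[0].X) / (points[2].Y - points[0].Y));

    Matrix3x2 m;
    Matrix3x2.Skew(xAngle, yAngle, out m);
    m.M11 *= ((points[1].X - points[0].X) + (points[2].X - points[0].X)) / bitmap.Size.Width;
    m.M22 *= ((points[1].Y - points[0].Y) + (points[2].Y - points[0].Y)) / bitmap.Size.Height;

    // The translation replaces DrawImage's target-offset parameter.
    dc.Transform = m * Matrix3x2.Translation(points[0].X, points[0].Y);
    dc.DrawBitmap(bitmap, 1.0f, BitmapInterpolationMode.NearestNeighbor);
    dc.Transform = Matrix3x2.Identity; // restore for subsequent draws
}

Since the plain DrawBitmap loop already runs in ~60 ms, drawing through the transform should stay much closer to that than the 600-700 ms effect path, though that is an expectation, not a measurement.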

Related

(DX12 Shadow Mapping) Depth buffer is always filled with 1

I'm really new to graphics programming in general, so please bear with me. I am trying to add shadow mapping from a distant light (orthographic projection) to my scene, but when I follow the (very incomplete) steps from Frank Luna's DX12 book, I find that my SRV for the shadow map is just filled with depths of 1.
If it helps, here is my SRV definition:
D3D12_TEX2D_SRV texDesc = {
    0,
    -1,
    0,
    0.0f
};
D3D12_SHADER_RESOURCE_VIEW_DESC srvDesc = {
    DXGI_FORMAT_R32_TYPELESS,
    D3D12_SRV_DIMENSION_TEXTURE2D,
    D3D12_DEFAULT_SHADER_4_COMPONENT_MAPPING,
};
srvDesc.Texture2D = texDesc;
m_device->CreateShaderResourceView(m_lightDepthTexture.Get(), &srvDesc, m_cbvHeap->GetCPUDescriptorHandleForHeapStart());
and here are my DSV heap and descriptor definitions:
D3D12_DESCRIPTOR_HEAP_DESC dsvHeapDesc = {};
dsvHeapDesc.NumDescriptors = 2;
dsvHeapDesc.Type = D3D12_DESCRIPTOR_HEAP_TYPE_DSV;
dsvHeapDesc.Flags = D3D12_DESCRIPTOR_HEAP_FLAG_NONE;
ThrowIfFailed(m_device->CreateDescriptorHeap(&dsvHeapDesc, IID_PPV_ARGS(&m_dsvHeap)));
D3D12_DEPTH_STENCIL_VIEW_DESC depthStencilDesc = {};
depthStencilDesc.Format = DXGI_FORMAT_D32_FLOAT;
depthStencilDesc.ViewDimension = D3D12_DSV_DIMENSION_TEXTURE2D;
depthStencilDesc.Flags = D3D12_DSV_FLAG_NONE;
CD3DX12_HEAP_PROPERTIES heapProps = CD3DX12_HEAP_PROPERTIES(D3D12_HEAP_TYPE_DEFAULT);
CD3DX12_RESOURCE_DESC resourceDesc = CD3DX12_RESOURCE_DESC::Tex2D(DXGI_FORMAT_R32_TYPELESS, m_width, m_height, 1, 0, 1, 0, D3D12_RESOURCE_FLAG_ALLOW_DEPTH_STENCIL);
D3D12_CLEAR_VALUE depthOptimizedClearValue = {};
depthOptimizedClearValue.Format = DXGI_FORMAT_D32_FLOAT;
depthOptimizedClearValue.DepthStencil.Depth = 1.0f;
depthOptimizedClearValue.DepthStencil.Stencil = 0;
ThrowIfFailed(m_device->CreateCommittedResource(
    &heapProps,
    D3D12_HEAP_FLAG_NONE,
    &resourceDesc,
    D3D12_RESOURCE_STATE_DEPTH_WRITE,
    &depthOptimizedClearValue,
    IID_PPV_ARGS(&m_dsvBuffer)
));
D3D12_RESOURCE_DESC texDesc;
ZeroMemory(&texDesc, sizeof(D3D12_RESOURCE_DESC));
texDesc.Dimension = D3D12_RESOURCE_DIMENSION_TEXTURE2D;
texDesc.Alignment = 0;
texDesc.Width = m_width;
texDesc.Height = m_height;
texDesc.DepthOrArraySize = 1;
texDesc.MipLevels = 1;
texDesc.Format = DXGI_FORMAT_R32_TYPELESS;
texDesc.SampleDesc.Count = 1;
texDesc.SampleDesc.Quality = 0;
texDesc.Layout = D3D12_TEXTURE_LAYOUT_UNKNOWN;
texDesc.Flags = D3D12_RESOURCE_FLAG_ALLOW_DEPTH_STENCIL;
ThrowIfFailed(m_device->CreateCommittedResource(
    &heapProps,
    D3D12_HEAP_FLAG_NONE,
    &texDesc,
    D3D12_RESOURCE_STATE_GENERIC_READ,
    &depthOptimizedClearValue,
    IID_PPV_ARGS(&m_lightDepthTexture)
));
CD3DX12_CPU_DESCRIPTOR_HANDLE dsv(m_dsvHeap->GetCPUDescriptorHandleForHeapStart());
m_device->CreateDepthStencilView(m_dsvBuffer.Get(), &depthStencilDesc, dsv);
dsv.Offset(1, m_device->GetDescriptorHandleIncrementSize(D3D12_DESCRIPTOR_HEAP_TYPE_DSV));
m_device->CreateDepthStencilView(m_lightDepthTexture.Get(), &depthStencilDesc, dsv);
I then created a basic vertex shader that just transforms the vertices with my map (from Frank Luna's book, pages 648 and 650). Since I bound m_lightDepthTexture via ID3D12GraphicsCommandList::OMSetRenderTargets, I assumed the depth values would be written to m_lightDepthTexture. But simply sampling this texture in my main pass shows that the values are all 1.0f, so nothing actually happened in my shadow pass!
I really have no idea what to ask, but if anyone has a sample DX12 shadow map I could look at (Google turns up DX11 or older, or much too complicated samples), or if there's a good source to learn about this, please let me know!
EDIT: I should say that I changed the format from DXGI_FORMAT_D24_UNORM_S8_UINT, as I think the extra 8 bits for stencil are irrelevant to my case. I changed back to the book's format and nothing changed, so I think this format should be fine.
If you remove the unnecessary return ret; from your shadow vertex shader, the problem then seems to be the winding order of your sphere's vertices. You can easily verify this by setting the cull mode to D3D12_CULL_MODE_NONE for your shadow PSO.
You can easily correct your sphere's winding order by swapping any two vertices of every triangle, so wherever you have p1,p2,p3 you just write it, for example, as p1,p3,p2.
You will also need to check the matrix multiplication order in your vertex shaders. I didn't check it in detail, but it's inconsistent, and I believe it's the cause of the sphere appearing black once you fix the above issue. You also seem to be missing the division by w for your light coordinates in the lighting vertex shader.

Canvas zoomIn/zoomOut: how to avoid loss of image quality?

I have this code that I use for scaling images. To zoom in and zoom out I use scalePicture(1.10, drawingContext) and scalePicture(0.90, drawingContext). I perform the operations on an off-screen canvas and then copy the image back to the original canvas.
I make use of off-screen processing since the browser optimizes image operations by double buffering. I am still having the issue that when I zoom in by around 400% and then zoom out back to the original size, there is a significant loss of image quality.
I cannot rely on the original image alone, because the user can perform many operations such as clip, crop, rotate, and annotate, and I need to stack all those operations on the original image.
Can anyone offer advice/suggestions on how to preserve the image quality without sacrificing performance?
scalePicture : function(scalePercent, operatingCanvasContext) {
    var w = operatingCanvasContext.canvas.width,
        h = operatingCanvasContext.canvas.height,
        sw = w * scalePercent,
        sh = h * scalePercent,
        operatingCanvas = operatingCanvasContext.canvas;
    var canvasPic = new Image();
    operatingCanvasContext.save();
    canvasPic.src = operatingCanvas.toDataURL();
    operatingCanvasContext.clearRect(0, 0, operatingCanvas.width, operatingCanvas.height);
    operatingCanvasContext.translate(operatingCanvas.width/2, operatingCanvas.height/2);
    canvasPic.onload = function () {
        operatingCanvasContext.drawImage(canvasPic, -sw/2, -sh/2, sw, sh);
        operatingCanvasContext.translate(-operatingCanvas.width/2, -operatingCanvas.height/2);
        operatingCanvasContext.restore();
    };
}
Canvas is draw-and-forget. There is no way to preserve the original quality without referencing the original source.
I would suggest reconstructing the recorded stack, but using a transformation matrix for the changes in scale, rotation, etc. Then apply the accumulated matrix to the original image. This preserves optimal quality and also provides some gain in performance (as you only draw the last and current state).
Do similarly for clipping: calculate and merge the clipping regions using the same matrix, and apply the clip before drawing the original image in the final step. Likewise with text, etc.
It's a bit too broad to show an example that does all these steps, but here is an example showing how to use accumulated matrix transforms on the original image while preserving optimal quality. You can zoom in and out and rotate, and the image will in each instance render at optimal quality.
Example of Concept
var ctx = c.getContext("2d"), img = new Image; // these lines just for demo init.
img.onload = demo;
ctx.fillText("Loading image...", 20, 20);
ctx.globalCompositeOperation = "copy";
img.src = "http://i.imgur.com/sPrSId0.jpg";

function demo() {
    render();
    zin.onclick = zoomIn;   // these handlers accumulate the transform,
    zout.onclick = zoomOut; // but render based on the original image
    zrot.onclick = rotate;  // using the current transformation matrix
}

function render() { ctx.drawImage(img, 0, 0) } // render original image

function zoomIn() {
    ctx.translate(c.width * 0.5, c.height * 0.5); // pivot = center
    ctx.scale(1.05, 1.05);
    ctx.translate(-c.width * 0.5, -c.height * 0.5);
    render();
}

function zoomOut() {
    ctx.translate(c.width * 0.5, c.height * 0.5);
    ctx.scale(1/1.05, 1/1.05);
    ctx.translate(-c.width * 0.5, -c.height * 0.5);
    render();
}

function rotate() {
    ctx.translate(c.width * 0.5, c.height * 0.5);
    ctx.rotate(0.3);
    ctx.translate(-c.width * 0.5, -c.height * 0.5);
    render();
}
<button id=zin>Zoom in</button>
<button id=zout>Zoom out</button>
<button id=zrot>Rotate</button><br>
<canvas id=c width=640 height=378></canvas>

Eliminating blob inside another blob

I'm currently working on a program for character recognition using C# and AForge.NET, and now I'm struggling with the processing of blobs.
This is how I created the blobs:
BlobCounter bcb = new BlobCounter();
bcb.FilterBlobs = true;
bcb.MinHeight = 30;
bcb.MinWidth = 5;
bcb.ObjectsOrder = ObjectsOrder.XY;
bcb.ProcessImage(image);
I also marked them with rectangles:
Rectangle[] rects = bcb.GetObjectsRectangles();
Pen pen = new Pen(Color.Red, 1);
Graphics g = Graphics.FromImage(image);
foreach (Rectangle rect in rects)
{
    g.DrawRectangle(pen, rect);
}
After execution, my reference image looks like this:
[image: BlobImage]
As you can see, almost all characters are recognized. Unfortunately, some characters contain blobs inside another blob, e.g. "g", "o" or "d".
I would like to eliminate the blobs that lie inside another blob.
I tried to adjust the drawing of the rectangles to achieve my objective:
foreach (Rectangle rect in rects)
{
    for (int i = 0; i < (rects.Length - 1); i++)
    {
        if (rects[i].Contains(rects[i + 1]))
            rects[i] = Rectangle.Union(rects[i], rects[i + 1]);
    }
    g.DrawRectangle(pen, rect);
}
...but it wasn't successful at all.
Maybe some of you can help me?
You can try to detect rectangles within rectangles by checking their corner coordinates. I have some MATLAB code for this, which I wrote for a similar kind of problem. Here is a snippet of the code:
function varargout = isBoxMerg(ReferenceBox, TestBox, isNewBox)
X = ReferenceBox; Y = TestBox;
X1 = X(1); Y1 = X(2); W1 = X(3); H1 = X(4);
X2 = Y(1); Y2 = Y(2); W2 = Y(3); H2 = Y(4);
if ((X1+W1)>=X2 && (Y2+H2)>=Y1 && (Y1+H1)>=Y2 && (X1+W1)>=X2 && (X2+W2)>=X1)
    Intersection = true;
else
    Intersection = false;
end
Here X and Y are the upper-left corner coordinates of the bounding rectangles, and W and H are the width and height, respectively.
In the above, if the Intersection variable becomes true, the boxes intersect. You can use this code for further customization.
Thank You
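Translated back to the question's C#, a minimal sketch of the simpler containment variant (assumed code, using System.Drawing's Rectangle.Contains and System.Collections.Generic; rects, g and pen come from the question):

var outerRects = new List<Rectangle>();
foreach (Rectangle candidate in rects)
{
    bool containedElsewhere = false;
    foreach (Rectangle other in rects)
    {
        // Contains(Rectangle) is true when 'candidate' lies entirely within 'other'.
        if (other != candidate && other.Contains(candidate))
        {
            containedElsewhere = true;
            break;
        }
    }
    if (!containedElsewhere)
        outerRects.Add(candidate);
}
foreach (Rectangle rect in outerRects)
    g.DrawRectangle(pen, rect);

Unlike the loop in the question, this compares every pair of rectangles rather than only neighbours in the XY-sorted array, which is why the original attempt missed most nested blobs.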

Starling + Box2d - Collision not precise

I create stage walls and a box inside them in my mobile app using Starling + AS3.
Now when I test the app, the box falls, but it does not line up with the walls, as if there were an offset:
https://www.dropbox.com/s/hd4ehnfthh0ucfm/box.png
Here is how I created the boxes (the walls and the box). It seems like there is a hidden offset somewhere. What do you think?
public function createBox(x:Number, y:Number, width:Number, height:Number, rotation:Number = 0, bodyType:uint = 0):void {
    // Vars used to create bodies
    var body:b2Body;
    var boxShape:b2PolygonShape;
    var circleShape:b2CircleShape;
    var fixtureDef:b2FixtureDef = new b2FixtureDef();
    fixtureDef.shape = boxShape;
    fixtureDef.friction = 0.3;
    // static bodies require zero density
    fixtureDef.density = 0;
    var quad:Quad;
    bodyDef = new b2BodyDef();
    bodyDef.type = bodyType;
    bodyDef.position.x = x / WORLD_SCALE;
    bodyDef.position.y = y / WORLD_SCALE;
    // Box
    boxShape = new b2PolygonShape();
    boxShape.SetAsBox(width / WORLD_SCALE, height / WORLD_SCALE);
    fixtureDef.shape = boxShape;
    fixtureDef.density = 0;
    fixtureDef.friction = 0.5;
    fixtureDef.restitution = 0.2;
    // create the quads
    quad = new Quad(width, height, Math.random() * 0xFFFFFF);
    quad.pivotX = 0;
    quad.pivotY = 0;
    // this is the key line, we pass the starling.display.Quad as userData
    bodyDef.userData = quad;
    body = m_world.CreateBody(bodyDef);
    body.CreateFixture(fixtureDef);
    body.SetAngle(rotation * (Math.PI / 180));
    _clipPhysique.addChild(bodyDef.userData);
}
The SetAsBox method takes half-width and half-height as its parameters. I'm guessing your graphics don't match your Box2D bodies, so either make your graphics twice as big or multiply your SetAsBox params by 0.5. Also, the body's pivot is at its center, so offset your movie clip accordingly depending on its pivot position.
Note that Box2D has a debug renderer which can outline your bodies so you can see what's going on.

SlimDX Camera setup

Please tell me what I'm doing wrong. This is my Camera class:
public class Camera
{
    public Matrix view;
    public Matrix world;
    public Matrix projection;
    public Vector3 position;
    public Vector3 target;
    public float fov;

    public Camera(Vector3 pos, Vector3 tar)
    {
        this.position = pos;
        this.target = tar;
        view = Matrix.LookAtLH(position, target, Vector3.UnitY);
        projection = Matrix.PerspectiveFovLH(fov, 1.6f, 0.001f, 100.0f);
        world = Matrix.Identity;
    }
}
This is my constant buffer struct:
struct ConstantBuffer
{
    internal Matrix mWorld;
    internal Matrix mView;
    internal Matrix mProjection;
};
And here I draw the triangle and set up the camera:
x += 0.01f;
camera.position = new Vector3(x, 0.0f, 0.0f);
camera.view = Matrix.LookAtLH(camera.position, camera.target, Vector3.UnitY);
camera.projection = Matrix.PerspectiveFovLH(camera.fov, 1.6f, 0.0f, 100.0f);

var buffer = new Buffer(device, new BufferDescription
{
    Usage = ResourceUsage.Default,
    SizeInBytes = sizeof(ConstantBuffer),
    BindFlags = BindFlags.ConstantBuffer
});

// camera setup
ConstantBuffer cb;
cb.mProjection = Matrix.Transpose(camera.projection);
cb.mView = Matrix.Transpose(camera.view);
cb.mWorld = Matrix.Transpose(camera.world);

var data = new DataStream(sizeof(ConstantBuffer), true, true);
data.Write(cb);
data.Position = 0;
context.UpdateSubresource(new DataBox(0, 0, data), buffer, 0);

// set the shaders
context.VertexShader.Set(vertexShader);
context.PixelShader.Set(pixelShader);

// draw the triangle
context.Draw(4, 0);
swapChain.Present(0, PresentFlags.None);
Please, if you can see what's wrong, tell me! :) I have already spent two days writing this.
Attempt the second:
@paiden I have now initialized fov (thanks very much :) ) but there is still no effect (now it's fov = 1.5707963267f;). And @Nico Schertler, thank you too; I put it to use with
context.VertexShader.SetConstantBuffer(buffer, 0);
context.PixelShader.SetConstantBuffer(buffer, 0);
but still no effect... Probably my .fx file is wrong? For what purpose do I need this:
cbuffer ConstantBuffer : register( b0 ) { matrix World; matrix View; matrix Projection; }
Attempt the third:
@MHGameWork Thank you very much too, but still no effect ;)
If anyone has 5 minutes of time, I can just drop the source code to his/her e-mail and then we will publish the answer... I guess it will help some newbies like me :)
unsafe
{
    x += 0.01f;
    camera.position = new Vector3(x, 0.0f, 0.0f);
    camera.view = Matrix.LookAtLH(camera.position, camera.target, Vector3.UnitY);
    camera.projection = Matrix.PerspectiveFovLH(camera.fov, 1.6f, 0.01f, 100.0f);
    var buffer = new Buffer(device, new BufferDescription
    {
        Usage = ResourceUsage.Default,
        SizeInBytes = sizeof(ConstantBuffer),
        BindFlags = BindFlags.ConstantBuffer
    });
THE PROBLEM NOW - I SEE MY TRIANGLE BUT THE CAMERA DOESN'T MOVE
You have set your camera's near plane to 0. This makes the values in your matrix divide by zero, so you get a matrix filled with NaNs.
Use a near-plane value of about 0.01 in your case; it will solve the problem.
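In code, that change is just the projection setup (a sketch reusing the other values from the question):

camera.projection = Matrix.PerspectiveFovLH(
    camera.fov, // vertical field of view; must also be non-zero, see the earlier edits
    1.6f,       // aspect ratio, as in the question
    0.01f,      // near plane: must be > 0, otherwise the matrix fills with NaNs
    100.0f);    // far plane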
I hope you still need help. Here is my camera class, which you can use and easily move around the scene with mouse/keyboard:
http://pastebin.com/JtiUSiHZ
Call the TakeALook() method each frame (or whenever you move the camera).
You can move the camera with the CameraMove method. It takes a Vector3 for where you want to move the camera (don't give it huge values; I use 0.001f for each frame).
And with CameraRotate() you can turn it around; it takes a Vector2 as a left-right and up-down rotation.
It's pretty easy. I use event handlers to call these two functions, but feel free to edit as you wish.
