Direct3D line thickness, with a slightly different take - directx

I realise that Direct3D doesn't properly support line thickness, and that in fact, on most graphics hardware, lines are actually just collapsed rectangles.
At least I thought that was the case, until I tried to implement line thickness by rendering rectangles instead of lines and found that they lost detail and eventually became invisible as I zoomed out; whereas line primitive types seem to be guaranteed to always be 1 pixel wide regardless of scale.
I'm creating an AutoCAD viewer, in which lines are a staple entity; they need to support a thickness, but regardless of zoom level they must always be at least one pixel wide.
Can anyone suggest a strategy for achieving this? Ideally a render-settings adjustment, as opposed to logic that works out whether to render lines instead of rectangles.
[Edit] Should have mentioned: it's Direct3D 9 in .NET via SlimDX.

The simplest approach I can think of would be to render the lines as simple quads in 2D, and have the pixel shader write an oDepth value containing the correct 3D perspective depth.
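A minimal CPU-side sketch of the "quads in 2D" part, assuming the endpoints have already been projected to pixel coordinates (written in C++ for brevity; the helper name and layout are illustrative, and the oDepth output is not handled here):

```cpp
#include <algorithm>
#include <cmath>

struct Vec2 { float x, y; };

// Expand a line segment, already projected to screen space (in pixels), into a
// quad whose width never drops below one pixel. The four corners come out in
// triangle-strip order.
void LineToQuad(const Vec2& a, const Vec2& b, float thicknessPixels, Vec2 outQuad[4])
{
    float dx = b.x - a.x;
    float dy = b.y - a.y;
    float len = std::sqrt(dx * dx + dy * dy);
    if (len < 1e-6f) len = 1e-6f;                    // guard against degenerate segments

    // Unit normal to the segment, scaled by half the clamped thickness.
    float half = std::max(thicknessPixels, 1.0f) * 0.5f;
    float nx = -dy / len * half;
    float ny =  dx / len * half;

    outQuad[0] = { a.x + nx, a.y + ny };
    outQuad[1] = { a.x - nx, a.y - ny };
    outQuad[2] = { b.x + nx, b.y + ny };
    outQuad[3] = { b.x - nx, b.y - ny };
}
```

Each segment's quad can then be drawn with the pixel shader writing the perspective depth, as suggested above.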


How do I clear only part of a depth/stencil view?

If I want to clear an entire depth/stencil view in Direct3D 11, I can easily call ID3D11DeviceContext::ClearDepthStencilView.
Direct3D 11.1 adds support for clearing rectangular portions of render target views using ID3D11DeviceContext1::ClearView.
But I see no way to clear only a portion of a depth/stencil view in Direct3D 11, short of rendering a quad over the desired area. This seems like an odd regression from Direct3D 9, where this was trivially easy. Am I missing something, or is this really not supported?
There is no function that can clear only a part of a depth/stencil view.
This is my way to solve the problem:
Make a texture. Set the alpha of the part you want to clear to 1, and the rest to 0.
Enable alpha test, so that only pixels whose alpha is 1 pass.
Enable alpha blending; set the BlendOp to Add, the SrcBlend factor to 0, and the DestBlend factor to 1 (so the colour buffer is left untouched).
Set the stencil test and depth test to Always, and set StencilRef to the value you want to clear to.
Use an orthographic projection matrix.
Draw a rectangle that just covers the screen (its z-coordinate, divided by (ZFar - ZNear), becomes the depth value), and paste the texture on it.
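For illustration only (not code from the answer above): the blend and depth/stencil parts of this recipe map onto Direct3D 11 state objects roughly as below. Alpha test no longer exists in D3D11, so the "only pixels whose alpha is 1" step would become a clip() against the mask texture in the pixel shader.

```cpp
#include <d3d11.h>

// Create the blend and depth/stencil states for the masked "clear" pass.
// Error handling is minimal; the caller binds them with OMSetBlendState /
// OMSetDepthStencilState (passing the desired clear value as the stencil ref).
HRESULT CreatePartialClearStates(ID3D11Device* device,
                                 ID3D11BlendState** blendStateOut,
                                 ID3D11DepthStencilState** depthStencilStateOut)
{
    // Colour output: src * 0 + dest * 1, i.e. leave the colour buffer untouched.
    D3D11_BLEND_DESC bd = {};
    bd.RenderTarget[0].BlendEnable = TRUE;
    bd.RenderTarget[0].SrcBlend = D3D11_BLEND_ZERO;
    bd.RenderTarget[0].DestBlend = D3D11_BLEND_ONE;
    bd.RenderTarget[0].BlendOp = D3D11_BLEND_OP_ADD;
    bd.RenderTarget[0].SrcBlendAlpha = D3D11_BLEND_ZERO;
    bd.RenderTarget[0].DestBlendAlpha = D3D11_BLEND_ONE;
    bd.RenderTarget[0].BlendOpAlpha = D3D11_BLEND_OP_ADD;
    bd.RenderTarget[0].RenderTargetWriteMask = D3D11_COLOR_WRITE_ENABLE_ALL;
    HRESULT hr = device->CreateBlendState(&bd, blendStateOut);
    if (FAILED(hr)) return hr;

    // Depth and stencil always pass; depth is written, stencil is replaced
    // with the reference value given to OMSetDepthStencilState.
    D3D11_DEPTH_STENCIL_DESC dsd = {};
    dsd.DepthEnable = TRUE;
    dsd.DepthWriteMask = D3D11_DEPTH_WRITE_MASK_ALL;
    dsd.DepthFunc = D3D11_COMPARISON_ALWAYS;
    dsd.StencilEnable = TRUE;
    dsd.StencilReadMask = 0xFF;
    dsd.StencilWriteMask = 0xFF;
    dsd.FrontFace.StencilFunc = D3D11_COMPARISON_ALWAYS;
    dsd.FrontFace.StencilPassOp = D3D11_STENCIL_OP_REPLACE;
    dsd.FrontFace.StencilFailOp = D3D11_STENCIL_OP_KEEP;
    dsd.FrontFace.StencilDepthFailOp = D3D11_STENCIL_OP_KEEP;
    dsd.BackFace = dsd.FrontFace;
    return device->CreateDepthStencilState(&dsd, depthStencilStateOut);
}
```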
There is an excellent reason for removing partial clears from the API. First, it is always possible to emulate them by drawing quads with the proper render states, and second, all GPUs have fast clear and resolve hardware. Using them as intended greatly improves rendering performance.
With the DX11 clear API, it is possible to use the fast clear and the associated GPU optimisations. A depth buffer fast clear also prepares for early depth testing (prior to pixel shading; otherwise the real depth test happens after pixel shading), and for bandwidth optimisations on depth buffer accesses while rendering. If you clear with a quad, you lose all of that and the draw cost will rise.

How to create a sprite surface like in "Cham Cham"

My question may be a bit too broad, but I am going for the concept. How can I create a surface like they did in the "Cham Cham" app
https://itunes.apple.com/il/app/cham-cham/id760567889?mt=8.
I have got most of the app done, but the surface that changes with the user's touch is quite different: you can change its altitude and it grows and shrinks. How can this be done using SpriteKit? What is the concept behind it? Can anyone explain it a bit?
Thanks
Here comes the answer from Cham Cham developers :)
Let me split the explanation into different parts:
Note: As the project started quite a while ago, it is implemented using pure OpenGL. The SpriteKit implementation might differ, but you just need to map the idea over to it.
Defining the ground
The ground is represented by a set of points, which are interpolated using a Hermite spline. Basically, the game uses a bunch of control points defining the surface, plus a set of interpolated points between each pair of control points, like below:
The red dots are control points, and everything in between is computed using the mentioned Hermite interpolation. The green points in the middle have nothing to do with it, but make the whole thing look like boobs :)
You can choose an arbitrary number of steps to make the curve look as smooth as possible, but this is more to do with performance.
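For illustration, a minimal sketch of that interpolation with Catmull-Rom style tangents (the names and the tangent choice are assumptions; the game's actual code may differ):

```cpp
#include <cstddef>
#include <vector>

struct Vec2 { float x, y; };

// Interpolate a set of control points with a cubic Hermite spline,
// generating "stepsPerSegment" points between each pair of control points.
std::vector<Vec2> InterpolateGround(const std::vector<Vec2>& control, int stepsPerSegment)
{
    std::vector<Vec2> out;
    for (std::size_t i = 0; i + 1 < control.size(); ++i) {
        const Vec2& p0 = control[i];
        const Vec2& p1 = control[i + 1];
        // Tangents from neighbouring control points (clamped at the ends).
        const Vec2& prev = control[i == 0 ? 0 : i - 1];
        const Vec2& next = control[i + 2 < control.size() ? i + 2 : i + 1];
        Vec2 m0 = { (p1.x - prev.x) * 0.5f, (p1.y - prev.y) * 0.5f };
        Vec2 m1 = { (next.x - p0.x) * 0.5f, (next.y - p0.y) * 0.5f };

        for (int s = 0; s < stepsPerSegment; ++s) {
            float t = s / float(stepsPerSegment);
            float t2 = t * t, t3 = t2 * t;
            // Cubic Hermite basis functions.
            float h00 =  2 * t3 - 3 * t2 + 1;
            float h10 =      t3 - 2 * t2 + t;
            float h01 = -2 * t3 + 3 * t2;
            float h11 =      t3 -     t2;
            out.push_back({ h00 * p0.x + h10 * m0.x + h01 * p1.x + h11 * m1.x,
                            h00 * p0.y + h10 * m0.y + h01 * p1.y + h11 * m1.y });
        }
    }
    if (!control.empty()) out.push_back(control.back());
    return out;
}
```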
Controlling the shape
All you need to do is allow the user to move the control points (or some of them, as in Cham Cham; you can define the range each point can move in, etc.). Recomputing the interpolated values will yield a changed shape, which remains smooth at all times (given you have picked enough intermediate points).
Texturing the thing
Again, it is up to you how you apply the texture. In Cham Cham, we use one big texture to hold the background image and recompute the texture coordinates at every shape change. You could try a more sophisticated algorithm, like squeezing the texture or whatever you find appropriate.
As for the surface texture (the one that covers the ground – grass, ice, sand etc.) – you can just use triangle strips, with "bottom" vertices sitting at every interpolated point of the surface and "top" vertices raised above them (by offsetting them from the "bottom" ones in the direction of the normal at that point).
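A rough sketch of building that strip (the names, UV scheme, and constant height are assumptions, not the game's actual code):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

struct Vec2 { float x, y; };
struct SurfaceVertex { Vec2 pos; float u, v; };

// For every interpolated ground point emit a "bottom" vertex on the surface and
// a "top" vertex pushed out along the local normal. Rendered as a triangle strip,
// this gives a band of constant height hugging the ground.
std::vector<SurfaceVertex> BuildSurfaceStrip(const std::vector<Vec2>& ground, float height)
{
    std::vector<SurfaceVertex> strip;
    if (ground.size() < 2) return strip;

    for (std::size_t i = 0; i < ground.size(); ++i) {
        // Approximate the tangent from the neighbouring points, then rotate it 90 degrees.
        const Vec2& prev = ground[i == 0 ? 0 : i - 1];
        const Vec2& next = ground[i + 1 < ground.size() ? i + 1 : i];
        float dx = next.x - prev.x, dy = next.y - prev.y;
        float len = std::sqrt(dx * dx + dy * dy);
        if (len < 1e-6f) len = 1e-6f;
        Vec2 n = { -dy / len, dx / len };          // normal pointing "up" from the ground

        float u = i / float(ground.size() - 1);    // simple horizontal texture coordinate
        strip.push_back({ ground[i], u, 0.0f });                                               // bottom
        strip.push_back({ { ground[i].x + n.x * height, ground[i].y + n.y * height }, u, 1.0f }); // top
    }
    return strip;
}
```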
Rendering it
The easiest way is to use a tessellation library, like libtess. What it will do is convert your boundary line (composed of interpolated points) into a set of triangles. It will preserve texture coordinates, so you can just feed these triangles to the renderer.
SpriteKit note
Unfortunately, I am not really familiar with the SpriteKit engine, so I cannot guarantee you will be able to copy the idea over one-to-one, but please feel free to comment on the challenging aspects of the implementation and I will try to help.

How can I blend edges with shaders

Let me introduce my problem.
This is a triangle rendered with WebGL. Well, it is a little enlarged...
And this is the triangle which I want to have:
So I'm looking for a shader that will be able to blend the edges of a primitive triangle. I have an idea of how to realise one, but I'm probably not good enough to write it yet.
My idea is something like:
Based on the positions of the 3 vertices, calculate for each fragment how much of the pixel the primitive covers, and then set the transparency of that pixel based on the calculated coverage...
I can get 2D coordinates from the vertex shader and use them in the fragment shader. Now I probably want to use gl_FragCoord.xy or gl_PointCoord.xy and calculate the percentage of the pixel that is covered, but I am not able to compare these values (it seems that the units are different, as if I were comparing miles with millimetres, and the origin is in a different place for each of these vectors), so I can't calculate the final transparency value.
Can anyone help me, please? Just point me in the right direction.
There are lots of ways to achieve this.
You can render at a higher resolution. Make your canvas larger than the size it is displayed at, and the browser will almost certainly bilinearly interpolate the result. Example:
<canvas width="400" height="400" style="width: 200px; height: 200px" />
declares a canvas with a 400x400 backing store that is scaled to 200x200 when displayed.
Here's a fiddle.
Another technique would be to compute an alpha value in the shader such that you get the blending you want along the edge of the polygon.
I'm sure there are others. Most Canvas2D implementations are GPU accelerated and anti-aliased even when the GPU does not support anti-aliasing, so you could try digging through one of those.
The problem with your plan is that OpenGL applies its own test to decide which pixels to draw in the first place: if the centre of a fragment lies inside the geometry boundary then it is rasterised; if it lies outside then it is not; and if it lies exactly on the boundary, rasterisation depends on whether it is at the start or end of a horizontal or vertical run. The boundary rule ensures that where two triangles exactly meet, they never both contain the same fragments.
So if you compute coverage per fragment you're almost never going to get a number less than 50% (corners and other very thin bits of geometry being the exception). You're not going to get the complete anti-aliasing you desire. You'll get the antialiased version clipped by the aliased version.
Hardware achieves this by sampling multiple fragments per output pixel. You can simulate that by rendering to texture at a multiple of your output size, then scaling down. The mip map generation will filter the input image.
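As a sketch of that idea in desktop OpenGL terms (WebGL 2 exposes equivalent framebuffer and blit calls through its JavaScript API; here a linear blit does the downscaling rather than mip map generation, and the names are just illustrative):

```cpp
#include <GL/glew.h>   // or any other OpenGL loader

// Create an FBO whose colour attachment is "scale" times the output size.
// No depth attachment is made; add one if the scene needs depth testing.
GLuint CreateSupersampleFBO(int width, int height, int scale, GLuint* colorTexOut)
{
    GLuint fbo, tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width * scale, height * scale, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, tex, 0);
    glBindFramebuffer(GL_FRAMEBUFFER, 0);

    *colorTexOut = tex;
    return fbo;
}

// After rendering the scene into the FBO, filter it down to the default framebuffer.
void ResolveToScreen(GLuint fbo, int width, int height, int scale)
{
    glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
    glBlitFramebuffer(0, 0, width * scale, height * scale,
                      0, 0, width, height,
                      GL_COLOR_BUFFER_BIT, GL_LINEAR);
}
```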
That all being said, have you tried just passing antialias as true when calling canvas.getContext? That will use the hardware capabilities, subject to hardware and browser support.

How can I render a square bitmap to an arbitrary four-sided polygon using GDI?

I need to paint a square image, mapped or transformed to an unknown-at-compile-time four-sided polygon. How can I do this?
Longer explanation
The specific problem is rendering a map tile with a non-rectangular map projection. Suppose I have the following tile:
and I know the four corner points need to be here:
Given that, I would like to get the following output:
The square tile may be:
Rotated; and/or
Narrower at one end than at the other.
I think the second item means this requires a non-affine transformation.
Random extra notes
Four-sided? It is plausible that, to be completely correct, the tile should be mapped to a polygon with more than four points, but for our purposes and at the scale it is drawn, a square -> other four-cornered-polygon transformation should be enough.
Why preferably GDI only? All rendering so far is done using GDI, and I want to keep the code (a) fast and (b) requiring as few extra libraries as possible. I am aware of some support for transformations in GDI and have been experimenting with them today, but I'm not sure they are flexible enough for this purpose. If they are, I haven't managed to figure it out, and so I'd really appreciate some sample code. GDI+ is also OK since we use it elsewhere, but I know it can be slow, and speed is important here.
One other alternative is anything Delphi- / C++Builder-specific; this program is written mostly in C++ using the VCL, and the graphics in question are currently painted to a TCanvas with a mix of TCanvas methods and raw WinAPI/GDI calls.
Overlaying images: One final caveat is that one colour in the tile may be used for colour-key transparency: that is, all the white (say) squares in the above tile should be transparent when drawn over whatever is underneath. Currently, tiles are drawn to square or axis-aligned rectangular targets using TransparentBlt.
I'm sorry for all the extra caveats that make this question more complicated than 'what algorithm should I use?' But I will happily accept answers with only algorithmic information too.
You might also want to have a look at Graphics32.
The screenshot below shows what the transform demo in GR32 looks like:
Take a look at 3D Lab Vector graphics (especially the "Football field" in the demo).
Another cool resource is AggPas with full source included (download)
AggPas is an open-source, free-of-charge 2D vector graphics library. It is an Object Pascal native port of the Anti-Grain Geometry library (AGG), originally written by Maxim Shemanarev in C++. AggPas doesn't depend on any graphics API or technology. Basically, you can think of AggPas as a rendering engine that produces pixel images in memory from vectorial data.
Here is what the perspective demo looks like:
After transformation:
The general technique is described in George Wolberg's "Digital Image Warping". It looks like this abstract contains the relevant math, as does this paper. You need to create a perspective matrix that maps from one quad to another. The above links show how to create the matrix. Once you have the matrix, you can scan your output buffer, perform the transformation (or possibly the inverse - depending on which they give you), and that will give you points in the original image that you can copy from.
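For reference, a sketch of the classic square-to-quad construction (Heckbert's formulation, which the references above describe); the corner ordering and names here are assumptions, and for the actual pixel copy you would invert the matrix and scan destination pixels back into the source tile, which avoids the holes a forward copy would leave:

```cpp
// Projective mapping from the unit square to an arbitrary quad, with corners
// assumed in the order (0,0)->(x0,y0), (1,0)->(x1,y1), (1,1)->(x2,y2), (0,1)->(x3,y3).
struct SquareToQuad {
    double a, b, c, d, e, f, g, h;   // 3x3 matrix with the bottom-right element fixed to 1
};

SquareToQuad MakeSquareToQuad(const double x[4], const double y[4])
{
    SquareToQuad m;
    double sx = x[0] - x[1] + x[2] - x[3];
    double sy = y[0] - y[1] + y[2] - y[3];
    if (sx == 0.0 && sy == 0.0) {                // affine case
        m.a = x[1] - x[0]; m.b = x[2] - x[1]; m.c = x[0];
        m.d = y[1] - y[0]; m.e = y[2] - y[1]; m.f = y[0];
        m.g = 0.0; m.h = 0.0;
    } else {                                     // projective case
        double dx1 = x[1] - x[2], dx2 = x[3] - x[2];
        double dy1 = y[1] - y[2], dy2 = y[3] - y[2];
        double den = dx1 * dy2 - dx2 * dy1;
        m.g = (sx * dy2 - dx2 * sy) / den;
        m.h = (dx1 * sy - sx * dy1) / den;
        m.a = x[1] - x[0] + m.g * x[1];
        m.b = x[3] - x[0] + m.h * x[3];
        m.c = x[0];
        m.d = y[1] - y[0] + m.g * y[1];
        m.e = y[3] - y[0] + m.h * y[3];
        m.f = y[0];
    }
    return m;
}

// Map (u,v) in [0,1]x[0,1] to the quad. For the image copy you would invert the
// 3x3 matrix and map destination pixels back to (u,v) in the source tile instead.
void MapPoint(const SquareToQuad& m, double u, double v, double* xOut, double* yOut)
{
    double w = m.g * u + m.h * v + 1.0;
    *xOut = (m.a * u + m.b * v + m.c) / w;
    *yOut = (m.d * u + m.e * v + m.f) / w;
}
```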
It might be easier to use OpenGL to draw a textured quad between the 4 points, but that doesn't use GDI like you wanted.

Rendering a Photoshop-style brush in OpenGL

I have lines that are programmatically defined by my program. What I want to do is render a brush stroke along them.
The way I think the type of brush I want works is: it simply has a texture, mostly transparent, and you render this texture centered on EVERY PIXEL in the path, so the stamps blend together to create the stroke.
Now, assuming this even works, I'm going to bet that it will be WAY too expensive (I'm targeting the iPad and other mobile chips, which HATE fill rate and alpha blending).
So, what other options are there?
If it could be done in real time (that is, with the path spline updating every frame) that would be ideal, but if not, within a fraction of a second on the iPad would be good too (the splines connect nodes, and the user can drag nodes around, thus transforming the spline; it would be acceptable to revert to a simpler fill for the spline while it is being moved around, then recalculate the brush once they release it).
For those wondering, I'm trying to get the thick lines to look like they have been made with a pencil. It should look as close to real life as possible.
I considered just rendering the brushed spline to a texture, but as the spline can be any length, in any direction, dedicating a WHOLE rectangular texture to encompass the whole spline would be way too costly...
The spline is inevitably broken up into quads for rendering, so I thought of initially rendering the brush to a texture, then generating an optimized texture with each of the quads separated and packed as neatly as possible into it.
But two render-to-texture passes, an algorithm to create the optimized texture, and making the quads still blend seamlessly with each other... it sounds like a nightmare, and that's not even making it real-time.
So yeah, any ideas on how to draw thick, pencil-like lines that follow a spline in real time on the iPad in OpenGL?
From my point of view, what you want is to render a line that:
is textured
has the edges fade off (i.e. no sharp edge to it)
follows a spline
To achieve these goals I would first of all break the spline up into a series of line segments that closely approximate the curve (you can make it more or less fine-grained depending on how accurate you want it to be versus how fast you want it to render).
Once you have these, you will need to make each segment into 3 quads: one that goes over the middle of the line segment and serves as the fully opaque part of the line, and one on each edge of the line that fades out to be totally transparent.
You will need to use a little bit of math to make sure that you extrude the quads along a vector that bisects two segments equally (i.e. so that the angle between each segment and the extrusion vector is equal). This will ensure that you don't have gaps on the obtuse side of a join or overlaps on the acute side.
After all of this, you just need to use the vertex positions as the UV co-ordinates (probably scaled though) and allow the texture to wrap around.
Using this method, you should end up with a mesh that has a solid thick line running through the middle of your spline with "fins" that taper off into complete transparency. This should approximate the effect you want quite closely, while only rendering to relevant pixels (i.e. no giant areas of completely transparent pixels) and with very little memory overhead.
I've been a little vague here as it's kind of hard to explain with text alone without writing an in-depth tutorial. If you need more info, just comment on what you're stuck on and I'll elaborate further.
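To make the geometry part a little more concrete, here is a rough sketch of the per-point extrusion described above (names and widths are assumptions; miter limiting, texture coordinates and end caps are left out):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

struct Vec2 { float x, y; };
struct StrokeVertex { Vec2 pos; float alpha; };   // alpha 1 in the core, 0 on the fin edges

// For each point of the flattened spline, push four vertices across the stroke
// (fin, core, core, fin) along the averaged segment normal. Stitching consecutive
// columns together gives the solid centre quad plus the two fading fins per segment.
std::vector<StrokeVertex> BuildStrokeVertices(const std::vector<Vec2>& pts,
                                              float coreHalfWidth, float finWidth)
{
    std::vector<StrokeVertex> out;
    for (std::size_t i = 0; i < pts.size(); ++i) {
        // Average the directions of the segments before and after this point.
        const Vec2& prev = pts[i == 0 ? 0 : i - 1];
        const Vec2& next = pts[i + 1 < pts.size() ? i + 1 : i];
        float dx = next.x - prev.x, dy = next.y - prev.y;
        float len = std::sqrt(dx * dx + dy * dy);
        if (len < 1e-6f) len = 1e-6f;
        Vec2 n = { -dy / len, dx / len };          // extrusion direction at this point

        float outer = coreHalfWidth + finWidth;
        out.push_back({ { pts[i].x + n.x * outer,         pts[i].y + n.y * outer         }, 0.0f });
        out.push_back({ { pts[i].x + n.x * coreHalfWidth, pts[i].y + n.y * coreHalfWidth }, 1.0f });
        out.push_back({ { pts[i].x - n.x * coreHalfWidth, pts[i].y - n.y * coreHalfWidth }, 1.0f });
        out.push_back({ { pts[i].x - n.x * outer,         pts[i].y - n.y * outer         }, 0.0f });
    }
    return out;
}
```

Each consecutive pair of columns then forms the three quads per segment described in the answer.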
