I want to use my own Unit"system".
Something like 1 Pixel is equals to 0.01 Units.
Now when I want to draw something with my own Unitsystem, I always have to multiply/divide the value by 100.
I've found some answers that mentioned to use matrix in SpriteBatch.Begin, but I dont know how.
Could someone help me^^?
SpriteBatch.Begin()'s last parameter can be a transformation matrix.
Matrix transformMatrix = Matrix.CreateScale(0.01f);
spriteBatch.Begin(SpriteSortMode.Immediate, null, null, null, null, null, transformMatrix);
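As a minimal sketch (XNA 4.0, assuming a Texture2D field called myTexture): note which way the scale goes. With 1 pixel = 0.01 units (so 1 unit = 100 pixels), positions you pass to Draw in your own units need a scale of 100 to land on the right pixels; a scale of 0.01 would map the other way, from pixels down to units.
// Assumed: positions handed to Draw are in your own units, 1 unit = 100 pixels.
Matrix worldToScreen = Matrix.CreateScale(100f);
spriteBatch.Begin(SpriteSortMode.Deferred, null, null, null, null, null, worldToScreen);
// (1.5, 2.0) is in your own units; it ends up at pixel (150, 200) on screen.
spriteBatch.Draw(myTexture, new Vector2(1.5f, 2.0f), Color.White);
spriteBatch.End();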
Farseer Physics provides a ConvertUnits class for this kind of thing. From memory the methods you're interested in are ToSimUnits and ToDisplayUnits. The Farseer documentation describes it like this:
// 1 meter = 64 pixels
ConvertUnits.SetDisplayUnitToSimUnitRatio(64f);
// If your object is 512 pixels wide and 128 pixels high:
float width = ConvertUnits.ToSimUnits(512f);  // 8 meters
float height = ConvertUnits.ToSimUnits(128f); // 2 meters
So the rules are:
Whenever you need to input pixels to the engine, you need to convert the number to meters first.
Whenever you take a value from the engine, and draw it on the screen, you need to convert it to pixels first.
If you follow those simple rules, you will have a stable physics simulation.
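For example, a minimal sketch of those two rules (assuming the ConvertUnits helper from the Farseer samples and a Farseer Body field called playerBody):
// Drawing: take the position from the engine (meters) and convert it to pixels.
Vector2 screenPosition = ConvertUnits.ToDisplayUnits(playerBody.Position);
// Input: take a pixel coordinate (e.g. a mouse click) and convert it to meters for the engine.
Vector2 simPosition = ConvertUnits.ToSimUnits(new Vector2(400f, 300f));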
On a related topic, you should look into resolution independence.
I need to know the relationship between points and pixels and how it differs across BB 7.0 and lower-version devices.
I have a project which parses the width and height of components to be displayed, in points, and I have converted them into pixels and shown them on different devices using the following formula:
fldwidth = fldwidth*Display.getWidth()/100
fldheight = fldheight*Display.getHeight()/100
where fldwidth and fldheight initially hold point values as decimals.
Am I correct?
A point is, by definition, 1/72 of an inch - see Wikipedia Point_(typography)
The size of a pixel depends on the screen resolution of the device. Just to be clear, resolution here means dot density, normally stated in dots per inch (dpi). This is not the common usage of the term, where 'resolution' is taken to mean the pixel height and width of the screen; people use the word that way incorrectly. Resolution is the density of dots on the screen, not the number of pixels on the screen.
The point here is that there is NO relationship between the number of pixels displayed on the screen and the number of pixels required for a point. You cannot use the conversion that you are attempting.
To determine the number of pixels that match 1 point, you must get the resolution of the screen. BB provides two methods for this:
Display.getHorizontalResolution();
Display.getVerticalResolution();
Fortunately, these will give you the same value on all BBOS (Java) devices, as all BBOS devices have the same vertical and horizontal resolution.
The value supplied is the number of pixels in one metre. So all you need to do is determine how many 1/72s of an inch there are in 1 metre, divide one of these values by that number, and then you have the number of pixels in a point.
Because of integer arithmetic, when doing this calculation, I would multiply by the point size you are trying to achieve before doing the division. For example:
int pixelSizeReqd = pointSizeReq * Display.getHorizontalResolution() / pointsInOneMetre;
And by the way, just call Display.getHorizontalResolution() once and reuse the returned value. I am not sure about getHorizontalResolution(), but I do know that some Display methods, for example getHeight() and getWidth(), are 'expensive' and so should be avoided if possible. The value is not going to change anyway!
Update following this comment:
Can you explain with an example? Suppose I have an 8520 device (320x240) and a size of, say, 57pt - what would its corresponding pixel value be as per your formula: int pixelSizeReqd = pointSizeReq * Display.getHorizontalResolution() / pointsInOneMetre
Answer:
Note that the 8520 has a screen size of 320 x 240. That is not its screen resolution for the purposes of this discussion. Got that?
You want a size of 57 points. So the calculation is:
int pixelSizeReqd = 57 * Display.getHorizontalResolution() / pointsInOneMetre;
You shouldn't replace Display.getHorizontalResolution() with a figure - it will be different on different devices and there is no need for you to try to fix this value for yourself.
How many points are there in 1 metre? You can do the math, convert a 1/72 inch into metres and then divide 1 metre by this. Or you can type into Google "how many points in a meter" and get the answer 2,834.64567. We don't need the accuracy, so we just use integer arithmetic to give us this:
int pixelSizeReqd = 57 * Display.getHorizontalResolution() / 2834;
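To put an assumed figure in: if the 8520 reported, say, about 6400 pixels per metre (roughly 163 dpi - a value assumed here purely for illustration), 57 points would work out as 57 * 6400 / 2834, which is about 128 pixels.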
Job done - that wasn't too hard was it?
I'm trying to use the DepthBias property of the rasterizer state in DirectX 11 (D3D11_RASTERIZER_DESC) to help with the z-fighting that occurs when I render in wireframe mode over solid polygons (wireframe overlay), but setting it to any value doesn't seem to change the result at all. I also noticed something strange... the value is defined as an INT rather than a FLOAT. That doesn't make sense to me, and it still doesn't work as expected. How do we properly set that value if it is an INT that needs to be interpreted as a UNORM in the shader pipeline?
Here's what I do:
Render all geometry
Set the rasterizer to render in wireframe
Render all geometry again
I can clearly see the wireframe overlay, but the z-fighting is horrible. I tried setting the DepthBias to a lot of different values, such as 0.000001, 0.1, 1, 10, 1000 and all the negative equivalents, still with no results... obviously, I'm aware that when casting a float to an integer all the decimals get cut off... meh?
D3D11_RASTERIZER_DESC RasterizerDesc;
ZeroMemory(&RasterizerDesc, sizeof(RasterizerDesc));
RasterizerDesc.FillMode = D3D11_FILL_WIREFRAME;
RasterizerDesc.CullMode = D3D11_CULL_BACK;
RasterizerDesc.FrontCounterClockwise = FALSE;
RasterizerDesc.DepthBias = ???
RasterizerDesc.SlopeScaledDepthBias = 0.0f;
RasterizerDesc.DepthBiasClamp = 0.0f;
RasterizerDesc.DepthClipEnable = TRUE;
RasterizerDesc.ScissorEnable = FALSE;
RasterizerDesc.MultisampleEnable = FALSE;
RasterizerDesc.AntialiasedLineEnable = FALSE;
Has anyone figured out how to set the DepthBias properly? Or perhaps it is a bug in DirectX (which I doubt), or maybe there's a better way to achieve this than using DepthBias?
Thank you!
http://msdn.microsoft.com/en-us/library/windows/desktop/cc308048(v=vs.85).aspx
The meaning of the number depends on whether your depth buffer is UNORM or floating point. In most cases you're just looking for the smallest possible value that gets rid of your z-fighting rather than any specific value. Small values are a small bias, large values are a large bias, but how that equates to a numerical shift depends on the format of your depth buffer.
As for the values you've tried, anything less than 1 would have rounded to zero and had no effect. 1, 10, 1000 may simply not have been enough to fix the issue. In the case of a D24 UNORM depth buffer, the formula would suggest a depth bias of 1000 would offset depth by: 1000 * (1 / 2^24), which equals 0.0000596, a not very significant shift in z-buffering terms.
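Working the formula the other way round: to offset depth by, say, 0.001 on a D24 UNORM buffer, you would need a DepthBias of roughly 0.001 / (1 / 2^24), i.e. about 16,777 (a purely illustrative figure).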
Does a large value of 100,000 or 1,000,000 fix the z-fighting?
If anyone cares, I made myself a macro to make it easier. Note that this macro will only work if you are using a 32bit float depth buffer format. A different macro might be needed if you are using a different depth buffer format.
#define DEPTH_BIAS_D32_FLOAT(d) (d/(1/pow(2,23)))
That way you can simply set your depth bias using standard values, such as:
RasterizerDesc.DepthBias = DEPTH_BIAS_D32_FLOAT(-0.00001);
I'm quite new to XNA so excuse me if I ask a 'silly' question but I couldn't find an answer.
I have a problem with terrain rendered from a heightmap: the terrain I get is too small. I need something larger for my game, but I'd like to keep the height data updated so I can check for collisions later (the height data being a 2-dimensional array which holds the height of each point - in my program it's called 'dateInaltime').
The problem is that if I modify the scale of the terrain, the collision checker will still use the old values (from the original/small terrain), so I'll get wrong collision points.
My terrain class looks like this.
How can I make the terrain larger but also extend the height data array?
Change this part:
vertex[x + y * lungime].Position = new Vector3(x, dateInaltime[x, y], -y);
to:
vertex[x + y * lungime].Position = new Vector3(x, dateInaltime[x, y], -y) * new Vector3(10);
It should separate the vertices by a scale of 10 (or whatever number you choose).
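If you also need the collision checks to line up, one option (just a sketch, assuming a uniform scale factor and your existing dateInaltime array) is to divide incoming world coordinates by the same scale before indexing the height data, instead of resizing the array:
const float terrainScale = 10f; // the same factor used when scaling the vertices
// Hypothetical helper: terrain height at a world-space (X, Z) position.
float GetHeightAt(float worldX, float worldZ)
{
    int x = (int)(worldX / terrainScale);
    int y = (int)(-worldZ / terrainScale); // the vertices use -y for the Z axis
    return dateInaltime[x, y] * terrainScale; // the heights were scaled up too
}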
How do I make a 2D world of fixed size which repeats itself when you reach any side of the map?
When you reach a side of the map you see the opposite side of the map, merged together with this one. The idea is that if you didn't have a minimap you would not even notice the transition as the map repeats itself.
I have a few ideas how to make it:
1) Keep a total of 3x3 copies of the world at all times, all exactly the same and updated the same way; the player just exists in only one of them.
2) Another way would be to separate the map into smaller pieces and add them in the required place when needed.
Either way it can be complicated to implement. I remember that more than 10 years ago I played a game like that, with soldiers following each other in a repeating world, shooting other AI soldiers.
Mostly I wanted to hear your thoughts about the idea and how it could be achieved. I'm coding in XNA (C#).
Another alternative is to generate noise using the libnoise libraries. The beauty of this is that you can generate noise over a theoretically infinite amount of space.
Take a look at the following:
http://libnoise.sourceforge.net/tutorials/tutorial3.html#tile
There is also an XNA port of the above at: http://bigblackblock.com/tools/libnoisexna
If you end up using the XNA port, you can do something like this:
Perlin perlin = new Perlin();
perlin.Frequency = 0.5f; //frequency of the first octave
perlin.Lacunarity = 2f; //frequency increase between octaves
perlin.OctaveCount = 5; //number of passes
perlin.Persistence = 0.45f; //amplitude decrease between octaves
perlin.Quality = QualityMode.High;
perlin.Seed = 8;
//Create our 2d map
Noise2D _map = new Noise2D(CHUNKSIZE_WIDTH, CHUNKSIZE_HEIGHT, perlin);
//Get a section
_map.GeneratePlanar(left, right, top, down);
GeneratePlanar is the function to call to get the sections in each direction that will connect seamlessly with the rest of your world.
If the game is tile based I think what you should do is:
Keep only one array for the game area.
Determine the visible area using modulo arithmetic over the size of the game area (mod w and mod h, where these are the width and height of the table).
E.g. if the table is 80x100 (top-left at (0,0), width 80, height 100) and the viewport rect is at (70,90) with a width of 40 and a height of 20, you index with [70-79] then [0-29] for the x coordinate and [90-99] then [0-9] for the y. This can be achieved by calculating the index with the following formula:
idx = (n+i)%80 (or %100), where n is the top-left coordinate (x or y) of the rect and i is in the range of the width/height of the viewport.
This assumes that one step of movement moves the camera by whole, non-fractional coordinates.
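As a small sketch of that wrapped lookup (assuming a mapWidth x mapHeight array called tiles and whole-number camera coordinates camX/camY):
for (int i = 0; i < viewportWidth; i++)
{
    for (int j = 0; j < viewportHeight; j++)
    {
        // wrap back to the start of the array when the viewport runs off either edge
        int idxX = (camX + i) % mapWidth;
        int idxY = (camY + j) % mapHeight;
        var visibleTile = tiles[idxX, idxY];
        // draw visibleTile at screen cell (i, j)...
    }
}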
So this is your second alternative in a little more detail. If you only want to repeat the terrain, you should separate out the contents of the tiles; in that case the contents will most likely be generated on the fly, since you don't store them.
Hope this helped.
I have a tex2D sampler from which I want to return only precisely those colours that are present in my texture. I am using Shader Model 3, so I cannot use Load.
In the event of a texel overlapping multiple colours, I want it to pick one and have the whole texel be that colour.
I think to do this I want to disable mipmapping, or at least trilinear filtering of mips.
sampler2D gColourmapSampler : register(s0) = sampler_state {
Texture = <gColourmapTexture>; //Defined above
MinFilter = None; //Controls sampling. None, Linear, or Point.
MagFilter = None; //Controls sampling. None, Linear, or Point.
MipFilter = None; //Controls how the mips are generated. None, Linear, or Point.
//...
};
My problem is I don't really understand Min/Mag/Mip filtering, so am not sure what combination I need to set these in, or if this is even what I am after.
What a portion of my source texture looks like;
Screenshot of what the relevant area looks like after the full texture is mapped to my sphere;
The anti-aliasing/blending/filtering artefacts are clearly visible; I don't want these.
MSDN has this to say;
D3DSAMP_MAGFILTER: Magnification filter of type D3DTEXTUREFILTERTYPE
D3DSAMP_MINFILTER: Minification filter of type D3DTEXTUREFILTERTYPE.
D3DSAMP_MIPFILTER: Mipmap filter to use during minification. See D3DTEXTUREFILTERTYPE.
D3DTEXF_NONE: When used with D3DSAMP_MIPFILTER, disables mipmapping.
Another good link on understanding hlsl intrinsics.
RESOLVED
Not an HLSL issue at all! Sorry all. I seem to ask a lot of questions that are impossible to answer. Ogre was overriding the above settings. This was fixed with:
Ogre::MaterialManager::getSingleton().setDefaultTextureFiltering(Ogre::FO_NONE, Ogre::FO_NONE, Ogre::FO_NONE);
What it looks like to me is that you're getting values from a lower-level (unfiltered) mipmap rather than from the highest-detail level you're showing.
MipFilter = None
should prevent that, unless something in the code overrides it. So look for calls to SetSamplerState.
What you have done should turn off filtering. There are 2 potential issues that I can think of, though:
1) The driver just ignores you and filters anyway (If this is happening there is nothing you can do)
2) You have some form of edge anti-aliasing enabled.
Looking at your resulting image, that doesn't look much like bilinear filtering to me, so I'd think you are suffering from having antialiasing turned on somewhere. Have you set the antialiasing flag when you create the device/render-texture?
If you really want just one texel, use load instead of sample. load takes (as far as I know) an int2 as an argument that specifies the actual array coordinates in the texture. load then looks up the entry in your texture at the given array coordinates.
So, just scale your float2, e.g. by using floor(float2(texCoord.x*textureWidth, texCoord.y*textureHeight)).
MSDN for load: http://msdn.microsoft.com/en-us/library/bb509694(VS.85).aspx
When using just Shader Model 3, you could use a little hack to achieve this. Again, let's assume that you know textureWidth and textureHeight.
// compute floating point stride of one texel
float step_x = 1.0 / textureWidth;
float step_y = 1.0 / textureHeight;
// compute integer texel coordinates (the int cast truncates, acting as floor for positive values)
int target_x = texCoord.x * textureWidth;
int target_y = texCoord.y * textureHeight;
// snap the texture coordinate to the centre of that texel so the sampler can't blend with a neighbour
float2 texCoordNew;
texCoordNew.x = (target_x + 0.5) * step_x;
texCoordNew.y = (target_y + 0.5) * step_y;
I did not test it, but I think it could work.