I would like to know: does using a transform matrix in SpriteBatch.Begin() render the whole world, or just the region defined by the viewport? If it renders everything, what would be the best way to render a huge world without severe performance issues?
Thanks in advance!
You can work out the bounding rectangle of this viewport and simply not draw anything outside the rectangle. This is made even easier if your world is grid based.
For a grid-based world, you'll want two nested for loops, one over columns and one over rows, to draw the visible portion of the grid.
Each loop needs a start index and an end index.
To get the start index for each loop: divide the viewport's left (or top) edge coordinate by the width (or height) of the cells in the grid, then 'floor' this number.
To get the end index for each loop: divide the viewport's right (or bottom) edge coordinate by the width (or height) of the cells in the grid, then 'ceiling' this number.
From there, you can simply draw the grid normally, using the start and end points that were calculated previously.
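As a minimal sketch of those start/end calculations (plain Python; the function and parameter names are made up for illustration, and the viewport is assumed to be given by its top-left corner in world coordinates):

```python
import math

def visible_tile_range(cam_x, cam_y, view_w, view_h, cell_size):
    """Return (start_col, start_row, end_col, end_row) for the tiles
    overlapped by a viewport whose top-left corner is (cam_x, cam_y).
    Start indices are floored, end indices are ceiled, as described above."""
    start_col = math.floor(cam_x / cell_size)
    start_row = math.floor(cam_y / cell_size)
    end_col = math.ceil((cam_x + view_w) / cell_size)
    end_row = math.ceil((cam_y + view_h) / cell_size)
    return start_col, start_row, end_col, end_row

def draw_visible(world, cam_x, cam_y, view_w, view_h, cell_size, draw_tile):
    """Draw only the tiles inside the viewport, clamped to the world bounds."""
    sc, sr, ec, er = visible_tile_range(cam_x, cam_y, view_w, view_h, cell_size)
    for row in range(max(sr, 0), min(er, len(world))):
        for col in range(max(sc, 0), min(ec, len(world[0]))):
            draw_tile(world[row][col], col * cell_size, row * cell_size)
```

The same two divisions (floor for the start, ceiling for the end) translate directly into whatever language your game uses.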
If your world isn't grid-based, you can either use a simple 'is point in rectangle' test per object or a spatial partition such as a quadtree (the 2D counterpart of an octree).
So first, some background. I'm developing a really simple 2D game in Delphi 10.3 with FMX, which at the bottom of the screen draws a random terrain for each level of the game.
Anyway, the terrain is just some random numbers which are used in a TPathData, and then I use FillPath to draw this 2D "terrain".
I want to check when a "falling" object, a TRect for example, intersects with this terrain.
My idea was to get all the points of the TPathData, i.e. the Y position at every X position across the screen width. This way I could easily check when an object intersects with the terrain.
I just cannot figure out the way to do it, or if anyone has any other solution. I'd really appreciate any help. Thanks
This is not really a Delphi problem but a math problem.
You should have a mathematical representation of your terrain: the polygon representing the boundary of the terrain. Then you need to use the math to know whether a point is inside that polygon. See the "point in polygon" article on Wikipedia.
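The standard point-in-polygon method is the ray-casting (even-odd) test. A minimal sketch in Python (illustrative names; the polygon is a list of (x, y) vertices in order):

```python
def point_in_polygon(px, py, polygon):
    """Ray-casting test: cast a horizontal ray from (px, py) and count
    how many polygon edges it crosses; an odd count means inside."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Does this edge straddle the ray's height?
        if (y1 > py) != (y2 > py):
            # x coordinate where the edge crosses the horizontal line y = py
            x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
            if px < x_cross:
                inside = not inside
    return inside
```

The same loop translates directly to Delphi over an array of TPointF.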
You may also implement it purely graphically, using a black-and-white bitmap at the same resolution as the screen. Set the entire bitmap to black and draw the terrain at the bottom in white. Then, by checking the color of a pixel in that bitmap, you'll know whether it is outside the terrain (black) or inside the terrain (white).
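Since the question suggests the terrain reduces to one top Y per screen column, the bitmap idea can be sketched even more simply as a boolean mask (plain Python, hypothetical names; Y grows downward as on screen):

```python
def build_terrain_mask(heights, width, height):
    """heights[x] is the terrain's top y at column x (y grows downward).
    Returns mask[y][x] == True for pixels inside the terrain."""
    return [[y >= heights[x] for x in range(width)] for y in range(height)]

def hits_terrain(mask, x, y):
    """Collision query: is pixel (x, y) inside the terrain?
    Out-of-range coordinates count as a miss."""
    return 0 <= y < len(mask) and 0 <= x < len(mask[0]) and mask[y][x]
```

Testing the corners of the falling TRect against this mask each frame gives the intersection check the question asks for.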
I am really having a problem with this.
I have a polygon (a quad) which can be any shape. When my mouse is inside the polygon I need to find the x,y values of where my mouse is (inside the quad) as though the polygon were a perfect square. Further explanation: I have a 32x32 texture applied to the polygon and I need to know the x,y of the texture pixel that the mouse is over.
I have some code that works for most shapes but which breaks if, for instance, TR.y is less than TL.y.
I have some pretty simple code that tests whether the cursor is inside the polygon (via two triangle tests), but I cannot figure out how to use this to generate an x,y on a virtual square projection.
This problem is killing me. What is the name of the operation I am trying to perform? Does anyone know of an explanation where the equations are presented in code form (any kind of code) rather than just mathematical notation? Any kind of help would be so appreciated.
I am on the verge of doing a 2nd render with a specially formatted texture (each pixel having a unique value) so that I can just color-test to get an approximate x,y match (precision is something that can be compromised here without causing too much trouble), but then I will have to work around the DX lib's attempt to blend and smooth the special texture as it is warped to fill the quad.
Edit: Code that works for many quad shapes
It depends on the method, i.e. how the texture is drawn onto this quad.
If it uses a perspective transform Square => Quad, you have to use the matrix of the inverse transform Quad => Square. Short article
For the linear interpolation approach, see this page
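For the interpolation case, the operation being asked about is usually called inverse bilinear interpolation. A minimal numeric sketch (plain Python, Newton iteration; corner order TL, TR, BR, BL is an assumption, and the names are illustrative):

```python
def inverse_bilinear(p, a, b, c, d, iterations=20):
    """Numerically invert the bilinear map
        P(u,v) = (1-u)(1-v)*a + u(1-v)*b + u*v*c + (1-u)*v*d
    with corners a=TL, b=TR, c=BR, d=BL.  Returns (u, v) in [0,1]
    for points inside a convex quad."""
    u, v = 0.5, 0.5
    for _ in range(iterations):
        # Residual: current mapped point minus the target point
        fx = (1-u)*(1-v)*a[0] + u*(1-v)*b[0] + u*v*c[0] + (1-u)*v*d[0] - p[0]
        fy = (1-u)*(1-v)*a[1] + u*(1-v)*b[1] + u*v*c[1] + (1-u)*v*d[1] - p[1]
        # Jacobian of the map with respect to (u, v)
        dxdu = (1-v)*(b[0]-a[0]) + v*(c[0]-d[0])
        dydu = (1-v)*(b[1]-a[1]) + v*(c[1]-d[1])
        dxdv = (1-u)*(d[0]-a[0]) + u*(c[0]-b[0])
        dydv = (1-u)*(d[1]-a[1]) + u*(c[1]-b[1])
        det = dxdu*dydv - dxdv*dydu
        if abs(det) < 1e-12:
            break
        # Newton step: (u, v) -= J^-1 * residual
        u -= ( fx*dydv - fy*dxdv) / det
        v -= (-fx*dydu + fy*dxdu) / det
    return u, v
```

Multiplying the resulting (u, v) by the texture size (32 here) gives the texel under the mouse. For a perspective-mapped quad you would invert the homography instead, as noted above.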
I want to find all pixels in an image (in Cartesian coordinates) which lie within a certain polar range: r_min, r_max, theta_min and theta_max. In other words, I have an annular section defined by the parameters mentioned above and I want to find the integer x,y coordinates of the pixels which lie within it. The brute force solution comes to mind, of course (going through all the pixels of the image and checking whether each is within the section), but I am wondering if there is some more efficient solution.
Thanks
In the brute force solution, you can first determine the tight bounding box of the area by computing the four vertices and including the four cardinal extreme points as needed. Then for every pixel you will have to evaluate two circles (quadratic expressions) and two straight lines (linear expressions). By doing the computation incrementally (X => X+1), the number of operations drops to almost nothing.
Inside a circle:
f(X,Y) = X² + Y² - 2·X·Xc - 2·Y·Yc + Xc² + Yc² - R² <= 0
Incrementally,
f(X+1,Y) = f(X,Y) + 2·X + 1 - 2·Xc <= 0
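The incremental circle test can be sketched like this (plain Python, illustrative names): the quadratic is evaluated in full once per row, then updated with one addition per pixel.

```python
def inside_circle_row(y, x0, x1, xc, yc, r):
    """Evaluate f(X, y) = (X-xc)^2 + (y-yc)^2 - r^2 incrementally over
    X = x0..x1 and return the X values with f <= 0 (inside the circle)."""
    f = (x0 - xc)**2 + (y - yc)**2 - r*r   # full evaluation, once per row
    inside = []
    for x in range(x0, x1 + 1):
        if f <= 0:
            inside.append(x)
        f += 2*x + 1 - 2*xc                # f(X+1,y) = f(X,y) + 2X + 1 - 2Xc
    return inside
```

The two straight lines bounding the angular range admit the same trick with a constant increment per step.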
If you really want to avoid that overhead, you can resort to scanline conversion techniques. First, think of filling a slanted rectangle: by drawing two horizontal lines through the intermediate vertices, you decompose the rectangle into two triangles and a parallelogram. Then, for any scanline that crosses one of these shapes, you know beforehand which pair of sides it will intersect. From there, you know which portion of the scanline you need to fill.
You can generalize this to any shape, in particular your circle segment. Be prepared for a relatively subtle case analysis, but finding the intersections themselves isn't so hard. It may help to split the domain with a vertical line through the center so that any horizontal always meets the outline twice, never four times.
We'll assume the center of the section is at 0,0 for simplicity. If not, it's easy to change by offsetting all the coordinates.
For each possible y coordinate from r_max down to -r_max, find the x coordinates of the circles of both radii: -sqrt(r*r - y*y) and sqrt(r*r - y*y). Every point that is inside the r_max circle and outside the r_min circle might be part of the section and will need further testing.
Now do the same x coordinate calculations, but this time with the line segments described by the angles. You'll need some conditional logic to determine which side of the line is inside and which is outside, and whether it affects the upper or lower part of the section.
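The per-scanline circle intersections can be sketched as follows (plain Python; the angular clipping against theta_min/theta_max is left out, since that is the conditional part the answer describes):

```python
import math

def annulus_row_spans(y, r_min, r_max):
    """For scanline y (center at the origin), return the x-intervals covered
    by the annulus r_min <= r <= r_max, before any angular clipping."""
    if abs(y) > r_max:
        return []                          # row misses the annulus entirely
    xo = math.sqrt(r_max*r_max - y*y)      # outer circle half-width
    if abs(y) >= r_min:
        return [(-xo, xo)]                 # single span across the row
    xi = math.sqrt(r_min*r_min - y*y)      # inner circle half-width
    return [(-xo, -xi), (xi, xo)]          # the inner hole splits the span
```

Intersecting each span with the half-planes of the two angle lines then yields the final pixel runs for that row.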
I am trying to do normal mapping on a flat surface but I can't get any noticeable result :(
My shader
http://pastebin.com/raEvBY92
To my eye, the shader looks fine, but it doesn't render the desired result ( https://dl.dropbox.com/u/47585151/sss/final.png).
All values are passed. Normals, tangents and binormals are computed correctly when I create the grid; I have checked that!
Here are screens of the ambient, diffuse, specular and bump maps:
https://dl.dropbox.com/u/47585151/sss/ambient.png
https://dl.dropbox.com/u/47585151/sss/bumpMap.png
https://dl.dropbox.com/u/47585151/sss/diffuse.png
https://dl.dropbox.com/u/47585151/sss/specular.png
They seem to be legit...
The bump map, which is the result of bump = normalize(mul(bump, input.WorldToTangentSpace)), definitely looks correct, but doesn't have any impact on the end result.
Maybe I don't understand the idea of the different spaces, or I changed the order of matrix multiplication. By the world matrix I understand the position and orientation of the grid, which never changes and is the identity matrix. Only the view matrix changes; it represents the camera's position and orientation.
Where is my mistake?
First of all, if you're having a problem, it's a good idea to comment out everything that doesn't belong to it. The whole light computation with ambient, specular or even the diffuse texture isn't interesting at this moment. With
output.color = float4(diffuse, 1);
you can focus on your problem and see clearly what changes when you change something in your code.
If your quad lies in the xy-plane with z=0, you should change your light vector; it won't work as it is. Generally, for testing purposes I use a diagonal vector (like normalize(1,-1,1)) to prevent a direction parallel to my object.
When I look over your code, it seems to me that you didn't quite get the idea of the different spaces ;) The basic idea of normal mapping is to give additional information about the surface with additional normals. They are saved in a normal map, encoded to RGB, where b is usually the up vector. Now you must fit them into your 3D world, because they aren't in world space but in tangent space (tangent space = surface space of the triangle). Because this transformation is more complex, the normal computation goes the other way round: using the tangent, normal and binormal as a matrix, you transform your light vector and view vector from world space into tangent space (you are mapping the world-space axes xyz to tnb, tangent/normal/binormal; the order can be wrong, I usually swap them until it works ;) ). With your line
bump = normalize(mul(bump, input.WorldToTangentSpace));
you try to transform your normal, which is already in tangent space, into tangent space. Change this: transform the view vector and the light vector into tangent space in the vertex shader and pass the transformed vectors to the pixel shader. There you can do the light computation in tangent space. Maybe read an additional tutorial on normal mapping, then you will get this working! :)
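The transform described here can be sketched outside HLSL. A minimal Python sketch (hypothetical helper names, assuming an orthonormal tangent basis) of expressing a world-space vector in tangent space:

```python
def to_tangent_space(v, tangent, binormal, normal):
    """Express world-space vector v in the (tangent, binormal, normal)
    basis by taking dot products; assumes the basis is orthonormal.
    This is what mul(v, WorldToTangentSpace) does for the light/view
    vectors in the vertex shader."""
    dot = lambda a, b: sum(x*y for x, y in zip(a, b))
    return (dot(v, tangent), dot(v, binormal), dot(v, normal))

def decode_normal(rgb):
    """Map a normal-map texel from the [0,1] color range to a [-1,1]
    vector; the sampled normal then stays in tangent space."""
    return tuple(2.0*c - 1.0 for c in rgb)
```

With the light and view vectors brought into tangent space this way, the sampled normal can be used directly, with no further transform in the pixel shader.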
PS: Once you're finished with the basic lighting, your specular computation seems to have some errors, too.
float3 reflect = normalize(2*diffuse*bump - LightDirection);
This line seems intended to compute the halfway vector, but for that you need the view vector, and you shouldn't use a lighting strength like diffuse. A tutorial can explain this in more detail than I can here.
I have a set of points to define a shape. These points are in order and essentially are my "selection".
I want to be able to contract this selection by an arbitrary amount to get a smaller version of my original shape.
In a basic example with a triangle, the points are simply moved along their normals, each normal being defined by the points to the left and right of the point in question.
Eventually all 3 points will meet and form one point but until that point they will make a smaller and smaller triangle.
For more complex shapes, when moving the individual points inward, they may pass through the outer edge of the shape resulting in weird artifacts. Obviously I'll need to cull these points and remove them from the array.
Any help in exactly how I can do that would be greatly appreciated.
Thanks!
This is just an idea, but couldn't you find the center of mass of the object, create a vector from the center to each point, and move each point along this vector?
Finding the center of mass would, of course, involve averaging the x and y coordinates. Getting a vector is as simple as subtracting the point in question from the center point. Normalizing and scaling are common vector operations that can be found with Google.
EDIT
Another way to interpret what you're asking is that you want to erode your collection of points, as in morphological erosion. This is typically applied to binary images, but you can slightly modify the concept to work with a collection of points. Essentially, you need to write a function that, given a point, returns true (black) or false (white) depending on whether that point is inside or outside the shape defined by your points. You'd have to look up how to do that for shapes that aren't convex (it's harder but not impossible).
Now, obviously, every single one of your actual points will return false because they're all on the border (by definition). However, you now have a matrix of points around your point of interest that define where is "inside" and where is "outside". Average all of the "inside" points and move your actual point along the vector from itself toward this average. You could play with different erosion kernels to see what works best.
You could even work with a kernel with floating-point weights instead of either/or values, which would affect your average calculation proportionally to the weights. With this, you could approximate a circular kernel with a low number of points. Try the simpler method first.
1. Find the selection center (as suggested by colithium).
2. Map the selection points to a coordinate system with the selection center at (0,0). For example, if the selection center is at (150,150) and a given selection point is at (125,75), the mapped position of the point becomes (-25,-75).
3. Scale the mapped points (multiply X and Y by something in the range 0.0..1.0).
4. Remap the points back to the original coordinate system.
Only simple maths required, no need to muck about normalizing vectors.
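Those steps can be sketched in a few lines (plain Python, hypothetical function name). Note this is the centroid-scaling approach, not true erosion, so it won't remove self-intersecting points on concave shapes:

```python
def contract_selection(points, factor):
    """Shrink a closed selection toward its vertex centroid.
    factor in (0, 1): 1.0 keeps the shape, 0.0 collapses it to the center."""
    n = len(points)
    cx = sum(p[0] for p in points) / n     # selection center
    cy = sum(p[1] for p in points) / n
    # Map to center-origin coordinates, scale, and map back.
    return [(cx + (x - cx) * factor, cy + (y - cy) * factor)
            for x, y in points]
```

Since subtracting the center, scaling, and adding it back is one expression per point, there is indeed no vector normalization involved.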