The definition on the MSDN website is:
layerDepth
Type: Single
The depth of a layer. By default, 0 represents the front layer and 1 represents a back layer. Use SpriteSortMode if you want sprites to be sorted during drawing.
Can someone explain what this means please? Thanks.
It's equivalent to the z-order of normal windows; it's not related to the depth buffer or the z coordinate in the DirectX coordinate system.
If you draw two sprites at the same x/y position, the one with the higher "layer depth" (closer to 1) will be behind the one with the lower "layer depth" (closer to 0), following the quoted convention of 0 = front and 1 = back.
If you use sorting (SpriteSortMode.BackToFront), the layers at the back are drawn first and nearer sprites are painted over them, which is generally what you want.
Refer to this answer in response to a similar question, which has more details.
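For illustration, here is a minimal Swift sketch, not actual XNA code, of what back-to-front sorting effectively does with these depth values; the `Sprite` type and `draw` function are hypothetical:

```swift
struct Sprite {
    var name: String
    var layerDepth: Float   // 0.0 = front layer, 1.0 = back layer
}

// Back-to-front: higher depths (further back) are drawn first, so
// lower-depth sprites end up painted on top.
func draw(_ sprites: [Sprite]) {
    for sprite in sprites.sorted(by: { $0.layerDepth > $1.layerDepth }) {
        print("drawing \(sprite.name) at depth \(sprite.layerDepth)")
    }
}

draw([Sprite(name: "player", layerDepth: 0.2),
      Sprite(name: "background", layerDepth: 1.0)])
// draws "background" first, then "player" on top of it
```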
I'm struggling with a texture-baking process in 3ds Max. I have a white 3D mesh with 2 image textures. I'm trying to get a diffuse map (see target_diffuse_map.jpg). To do this, I execute the following steps:
1) Assign image-texture1 and image-texture2 to face1 and face2 of the object.
2) Clone the object to get the white colors when baking the texture.
3) Unwrap UVW.
4) Render To Texture to obtain the diffuse map.
5) Project the texture + white colors onto the cloned object.
Please find these steps in this small video I made: https://drive.google.com/file/d/1h4v2CrL8OCLwdeVtLmpQwD250cawgJpi/view
I obtain a badly sampled and weird diffuse map (please see obtained_diffuse_map.jpg). What I want is target_diffuse_map.jpg.
Am I forgetting some steps?
Thank you for your help.
You need to either:
Add a small amount of "Push" in the Projection Modifier
Uncheck "Use Cage" in the Projection Options dialog, while setting a very small value for the offset
Projection mapping works by casting rays from points on the cage towards the corresponding points on your mesh. You did not push the cage out at all, so the rays are not well defined: each ray is cast from a point toward a direction defined by that exact same point. This causes numerical errors and z-fighting. There needs to be some small amount of push so that the "from" and "to" points of each ray are different, giving the rays a well-defined direction to travel.
The second option, instead of using the cage defined in the Projection modifier, is to use the offset method (you probably still need to apply the Projection modifier, though). This method defines each ray as starting from a point obtained by taking the corresponding point on the mesh and moving outward by a fixed offset in the direction of the normal. The advantage is that for curved objects with large polygons it produces less distortion, because the system uses the smoothed shading normal at each point. The disadvantage is that you can't have different cage distances at different points of the model for finer control. Use this method for round wooden barrels and other simple objects with large, smooth curves.
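To visualize why the push/offset matters, here is a small sketch (plain Swift with simd, hypothetical names, not 3ds Max internals) of how a baking ray is formed and why it degenerates with zero push:

```swift
import simd

struct ProjectionRay {
    var origin: SIMD3<Float>
    var direction: SIMD3<Float>
}

/// Build a baking ray for one point on the low-poly mesh. The "from"
/// point is pushed out along the shading normal; the ray is cast back
/// toward the surface point itself.
func bakingRay(point: SIMD3<Float>, normal: SIMD3<Float>,
               push: Float) -> ProjectionRay? {
    let from = point + normal * push        // cage/offset point
    let travel = point - from               // "from" toward "to"
    guard simd_length(travel) > 1e-6 else {
        // push == 0: "from" and "to" coincide, so the direction is
        // undefined; this is the numerical-error / z-fighting case above.
        return nil
    }
    return ProjectionRay(origin: from, direction: simd_normalize(travel))
}
```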
Also, your situation is made difficult by having different parts of the model very close to each other (touching) and embedded within each other: namely, the mouth of the bottle is inside the cap, and the cap is touching the base. For this case, it might make sense to break the objects apart after you have the overall UV mapping, run projection mapping on each one separately, and then combine the maps back together in an image editor.
I have implemented a real-time ray tracer with the Metal framework for iOS, for optical prisms such as the dodecahedron, icosahedron, octahedron, cube, etc. All my figures are composed of triangles; for example, a cube has 12 triangles and an octahedron 8 triangles. I trace rays and search for an intersection with the figure, then work out how the ray moves inside the prism. Then the ray leaves the figure and I search for an intersection with the skybox. The problem is with complicated figures. When I test the cube the fps is 60, but when I test the dodecahedron the fps is 6. In my algorithm, intersecting the figure means intersecting its triangles, so when I check a ray against the figure I have to check it against all the triangles. I need some idea of how to avoid checking intersections against every triangle. Thanks.
Let's say you have a world bounded by some bounding box.
Create a grid (dividing this box into cubes or whatever).
Each voxel/cell
is a list of triangles that intersect it or lie inside it, so before rendering, process all triangles and, for each cell, store the indices of all triangles inside or crossing it.
Rewrite the ray tracer to trace through this voxel map.
Just step the ray through neighboring voxels; it is the same as line rasterization on pixels. This way you get a partial Z-sort for free: take the first voxel hit by the ray and test only the triangles contained in it. If any hit was found in a voxel, stop (no need to test other voxels, because they are farther away). A sketch of this traversal follows after the notes.
Further optimizations
You can add a flag marking whether a triangle has already been tested for the current ray, and test only those not yet tested; otherwise many triangles will be tested multiple times, since a triangle can cross several voxels.
[notes]
The number of voxels per axis greatly affects performance, so you need to play with it a bit to achieve the best performance. If you have dynamic objects, then the voxel map lists must be recomputed once in a while, or even every frame. For a static scene it is sufficient to do this just once.
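Here is a minimal Swift sketch of the idea (hypothetical types; the asker's Metal kernels would look different, and a production version would use an exact 3D-DDA such as Amanatides & Woo instead of the crude fixed stepping used here for brevity):

```swift
import simd

struct VoxelGrid {
    let resolution: Int                      // voxels per axis
    let boundsMin, boundsMax: SIMD3<Float>   // world bounding box
    var cells: [[Int]]                       // triangle indices per voxel

    init(resolution: Int, boundsMin: SIMD3<Float>, boundsMax: SIMD3<Float>) {
        self.resolution = resolution
        self.boundsMin = boundsMin
        self.boundsMax = boundsMax
        cells = Array(repeating: [], count: resolution * resolution * resolution)
    }

    // Which cell a world-space point falls into (nil when outside the box).
    func cellIndex(of p: SIMD3<Float>) -> Int? {
        let t = (p - boundsMin) / (boundsMax - boundsMin)
        let i = SIMD3<Int>(t * Float(resolution), rounding: .down)
        guard i.min() >= 0, i.max() < resolution else { return nil }
        return (i.z * resolution + i.y) * resolution + i.x
    }
}

// Step the ray cell by cell and test only the local triangles.
// `hitTriangle` is an assumed closure testing one triangle index against
// the current ray and returning the hit distance, if any.
func trace(origin: SIMD3<Float>, dir: SIMD3<Float>, grid: VoxelGrid,
           maxDistance: Float,
           hitTriangle: (Int) -> Float?) -> (tri: Int, t: Float)? {
    let cellSize = (grid.boundsMax - grid.boundsMin) / Float(grid.resolution)
    var t: Float = 0
    var lastCell = -1
    while t < maxDistance {
        if let cell = grid.cellIndex(of: origin + dir * t), cell != lastCell {
            lastCell = cell
            var best: (tri: Int, t: Float)? = nil
            for tri in grid.cells[cell] {
                if let hit = hitTriangle(tri), hit < (best?.t ?? .infinity) {
                    best = (tri, hit)
                }
            }
            // First voxel with a hit wins: farther voxels cannot be nearer.
            if let best = best { return best }
        }
        t += cellSize.min() * 0.5
    }
    return nil
}
```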
To trace efficiently you'll need to use an acceleration structure, for example a KD-tree or a bounding volume hierarchy (BVH). This is similar to using a binary search tree to find a matching element.
I would suggest using a BVH because it is easier to construct and traverse than a KD-tree. I would also suggest against using a uniform voxel grid structure: a voxel grid can easily have very poor performance when triangles are unevenly distributed through the scene or heavily concentrated in a few voxels.
The BVH is just a tree of bounding volumes, such as axis-aligned bounding boxes (AABBs), where each volume encompasses the primitives within it. This way, if a ray misses a bounding volume, you know that it does not hit any of the primitives contained within it.
To construct a BVH:
Put all the triangles in one bounding volume. This will be the root of the tree.
Divide the triangles into two sets such that the bounding volume of each set is minimized. More properly, you'd want to follow the surface area heuristic (SAH): choose the split that minimizes the sum of (surface area of the set's bounding volume) × (number of triangles in the set) over both sets of triangles.
Repeat step 2 for each node recursively until the number of triangles left hits some threshold (4 is a good number to try).
To traverse:
Check if the ray hits the root bounding box. If it does, proceed to step 2; otherwise there is no hit.
Check if the ray hits the child bounding boxes. For each child box that is hit, repeat this step for its children. Otherwise there is no hit.
When you get to a bounding box which contains only triangles, you'll need to test each triangle to see if it is hit, just like normal.
This is the basic idea of a BVH. There is much more detail that I haven't gone into, which you'll have to search for, since there are so many variations in the details.
In short: implement a bounding volume hierarchy to trace; a minimal sketch follows.
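For concreteness, here is a small Swift sketch of both phases. All types and names are hypothetical, the build splits at the median of the longest axis instead of the full SAH, and the traversal has no ordered early-out; the asker's Metal version would differ:

```swift
import simd

struct AABB {
    var lo = SIMD3<Float>(repeating: .infinity)
    var hi = SIMD3<Float>(repeating: -.infinity)
    mutating func grow(_ p: SIMD3<Float>) { lo = simd_min(lo, p); hi = simd_max(hi, p) }

    // Slab test: does the ray (origin, 1/direction) hit this box at all?
    // Simplified: zero direction components are not handled specially.
    func hit(origin: SIMD3<Float>, invDir: SIMD3<Float>) -> Bool {
        let t0 = (lo - origin) * invDir, t1 = (hi - origin) * invDir
        let tNear = simd_min(t0, t1).max()
        let tFar  = simd_max(t0, t1).min()
        return tFar >= max(tNear, 0)
    }
}

indirect enum BVH {
    case leaf(box: AABB, triangles: [Int])
    case node(box: AABB, left: BVH, right: BVH)
}

// Build by median split along the longest axis. `vertices` is an assumed
// closure returning the corner points of a triangle index.
func build(_ tris: [Int], vertices: (Int) -> [SIMD3<Float>]) -> BVH {
    var box = AABB()
    for t in tris { for v in vertices(t) { box.grow(v) } }
    if tris.count <= 4 { return .leaf(box: box, triangles: tris) }  // leaf threshold
    func centroid(_ t: Int) -> SIMD3<Float> {
        let vs = vertices(t)
        return vs.reduce(SIMD3<Float>(), +) / Float(vs.count)
    }
    let ext = box.hi - box.lo
    let axis = ext.x >= ext.y ? (ext.x >= ext.z ? 0 : 2) : (ext.y >= ext.z ? 1 : 2)
    let sorted = tris.sorted { centroid($0)[axis] < centroid($1)[axis] }
    let mid = sorted.count / 2
    return .node(box: box,
                 left: build(Array(sorted[..<mid]), vertices: vertices),
                 right: build(Array(sorted[mid...]), vertices: vertices))
}

// Steps 1-3 of the traversal above: cull by box, recurse, test leaves.
func traverse(_ bvh: BVH, origin: SIMD3<Float>, invDir: SIMD3<Float>,
              testTriangle: (Int) -> Void) {
    switch bvh {
    case let .leaf(box, tris) where box.hit(origin: origin, invDir: invDir):
        tris.forEach(testTriangle)
    case let .node(box, left, right) where box.hit(origin: origin, invDir: invDir):
        traverse(left, origin: origin, invDir: invDir, testTriangle: testTriangle)
        traverse(right, origin: origin, invDir: invDir, testTriangle: testTriangle)
    default:
        break   // ray missed the bounding volume: nothing inside can be hit
    }
}
```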
My question may be a bit too broad, but I am going for the concept. How can I create a surface like they did in the "Cham Cham" app?
https://itunes.apple.com/il/app/cham-cham/id760567889?mt=8
I have got most of the stuff done in the app, but the surface changing with user touch is quite different: you can change its altitude, and it grows and shrinks. How can this be done using SpriteKit? What is the concept behind it? Can anyone explain it a bit?
Thanks
Here comes the answer from the Cham Cham developers :)
Let me split the explanation into different parts:
Note: As the project started quite a while ago, it is implemented in pure OpenGL. The SpriteKit implementation might differ, but you just need to map the idea over to it.
Defining the ground
The ground is represented by a set of points, which are interpolated using a Hermite spline. Basically, the game uses a bunch of control points defining the surface, and a set of interpolated points between each pair of control points, like below:
The red dots are control points, and everything in between is computed using the mentioned Hermite interpolation. The green points in the middle have nothing to do with it, but make the whole thing look like boobs :)
You can choose an arbitrary number of interpolation steps to make your boobs look as smooth as possible, but this is more of a performance trade-off.
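As an illustration, here is a small Swift sketch of that interpolation step. It assumes Catmull-Rom tangents, one common way to pick Hermite tangents through control points; the names are mine, not Cham Cham's code:

```swift
import CoreGraphics

// Cubic Hermite interpolation between p0 and p1 with tangents m0, m1.
func hermite(_ p0: CGPoint, _ p1: CGPoint, _ m0: CGPoint, _ m1: CGPoint,
             _ t: CGFloat) -> CGPoint {
    let t2 = t * t, t3 = t2 * t
    let h00 = 2*t3 - 3*t2 + 1   // Hermite basis functions
    let h10 = t3 - 2*t2 + t
    let h01 = -2*t3 + 3*t2
    let h11 = t3 - t2
    return CGPoint(x: h00*p0.x + h10*m0.x + h01*p1.x + h11*m1.x,
                   y: h00*p0.y + h10*m0.y + h01*p1.y + h11*m1.y)
}

/// Sample `steps` intermediate points between each pair of control points.
func interpolateGround(_ control: [CGPoint], steps: Int) -> [CGPoint] {
    guard control.count > 1 else { return control }
    var result: [CGPoint] = []
    for i in 0..<(control.count - 1) {
        let p0 = control[i], p1 = control[i + 1]
        // Catmull-Rom tangents from the neighbouring control points
        let prev = control[max(i - 1, 0)]
        let next = control[min(i + 2, control.count - 1)]
        let m0 = CGPoint(x: (p1.x - prev.x) / 2, y: (p1.y - prev.y) / 2)
        let m1 = CGPoint(x: (next.x - p0.x) / 2, y: (next.y - p0.y) / 2)
        for s in 0..<steps {
            result.append(hermite(p0, p1, m0, m1, CGFloat(s) / CGFloat(steps)))
        }
    }
    result.append(control[control.count - 1])
    return result
}
```

Moving a control point and re-running `interpolateGround` is all the "growing and shrinking" amounts to.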
Controlling the shape
All you need to do is allow the user to move the control points (or some of them, like in Cham Cham; you can define the range each point may move in, etc.). Recomputing the interpolated values will yield a changed shape, which remains smooth at all times (given you have picked enough intermediate points).
Texturing the thing
Again, it is up to you how you apply the texture. In Cham Cham, we use one big texture to hold the background image and recompute the texture coordinates at every shape change. You could try a more sophisticated algorithm, like squeezing the texture or whatever you find appropriate.
As for the surface texture (the one that covers the ground – grass, ice, sand etc.) – you can just use the thing called triangle strips, with "bottom" vertices sitting at every interpolated point of the surface and "top" vertices raised above them (by offsetting them from the "bottom" ones in the direction of the normal at that point); see the sketch below.
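A possible sketch of that vertex layout in Swift (my names, not the original code):

```swift
import CoreGraphics

// Build the vertex list for a triangle strip covering the surface.
// "Bottom" vertices sit on each interpolated point; "top" vertices are
// offset along the local 2D normal. Vertices alternate bottom/top, which
// is exactly the order a triangle strip consumes.
func surfaceStrip(_ points: [CGPoint], thickness: CGFloat) -> [CGPoint] {
    var strip: [CGPoint] = []
    for i in 0..<points.count {
        let p = points[i]
        // tangent approximated from the neighbours; normal is its perpendicular
        let a = points[max(i - 1, 0)], b = points[min(i + 1, points.count - 1)]
        let (dx, dy) = (b.x - a.x, b.y - a.y)
        let len = max((dx * dx + dy * dy).squareRoot(), 0.0001)
        strip.append(p)                                          // "bottom"
        strip.append(CGPoint(x: p.x - dy / len * thickness,
                             y: p.y + dx / len * thickness))     // "top"
    }
    return strip
}
```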
Rendering it
The easiest way is to utilize some tessellation library, like libtess. What it will do is convert your boundary line (composed of interpolated points) into a set of triangles. It will preserve texture coordinates, so you can just feed these triangles to the renderer.
SpriteKit note
Unfortunately, I am not really familiar with the SpriteKit engine, so I cannot guarantee you will be able to copy the idea over one-to-one, but please feel free to comment on the challenging aspects of the implementation and I will try to help.
I want to be able to tell when 2 images collide (not just their frames). But here is the catch: the images are rotating.
So I know how to find whether a pixel in an image is transparent or not, but that won't help in this scenario, because it will only find the location in the frame relative to a non-rotated image.
Also, I have gone as far as trying hit boxes, but even those won't work, because I can't find a way to detect the collision of UIViews that are contained in different subviews.
Is what I am trying to do even possible?
Thanks in advance
I don't know how you would go about checking for pixel collision on a rotated image. That would be hard. I think you would have to render the rotated image into a context, then fetch pixels from the context to check for transparency. That would be dreadfully slow.
I would suggest a different approach. Come up with a path that maps the bounds of your irregular image. You could then use CGPathContainsPoint to check whether a set of points is contained in the path (that method takes a transform, which you would use to describe the rotation of your image's path).
Even then, though, you're going to have performance problems, since you would have to call that method for a large number of points from the other image to determine whether the two intersect.
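A minimal Swift sketch of the idea, assuming `outline` was traced in the image's unrotated local space and `transform` is the view's current rotation (both names are mine):

```swift
import CoreGraphics

// Map each candidate point back into the path's local space with the
// inverted transform, then ask the path about containment.
func anyPointInside(outline: CGPath, transform: CGAffineTransform,
                    points: [CGPoint]) -> Bool {
    let inverse = transform.inverted()
    return points.contains { outline.contains($0.applying(inverse)) }
}
```

As noted above, sampling enough points from the other image to make this reliable is exactly what makes the approach expensive.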
I propose a simple strategy to solve this, based on looking for rectangle intersections.
The key is to create a simplified representation of your images with a set of rectangles laid out properly as bounding boxes of the different parts of your image (like you would build your image with Legos). For better performance, use a small set of rectangles (a few big Legos); for better precision, use a bigger number of rectangles that precisely follow the image outline.
Your problem then becomes equivalent to finding an intersection between rectangles; or, to be more precise, finding whether at least one vertex of the rectangles of object A is inside at least one rectangle of object B (CGRectContainsPoint), or whether the rects intersect (CGRectIntersectsRect).
If you prefer the point lookup, you should define your rectangles by their 4 vertices; then, when you rotate your image, it is easy to apply the same affine transform to your rectangle vertices (use CGPointApplyAffineTransform) to get the coordinates of your points after rotation. But of course you can also look for frame intersections and represent your rectangles using the standard CGRect structure.
You could also use a CGPath (as explained in another answer) instead of a set of rectangles and look for any vertex inside the other path using CGPathContainsPoint. That would give the same result, actually, but the rectangle approach is probably faster in many cases.
The only trick is to take one of the objects as a reference axis. Imagine you are on object A and you only see B moving around you. Then, if you have to rotate A, you need to make an axis transform so that B's transform is always expressed relative to A, and not to the screen or any other reference. If your transforms are only rotations around the object centre, then rotating A by n radians is equivalent to rotating B by -n radians.
Then just loop through the vertices defining object A and find whether one is inside a rectangle of object B; a small sketch follows.
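A hedged Swift sketch of that lookup (hypothetical names; each object is a set of "Lego" rects in its own space, and A is the reference axis):

```swift
import CoreGraphics

// Map A's vertices into B's space (inverse of B's transform relative to A),
// so each of B's rects stays axis-aligned for the CGRect containment test.
func collides(rectsA: [CGRect], rectsB: [CGRect],
              bRelativeToA: CGAffineTransform) -> Bool {
    let aIntoB = bRelativeToA.inverted()
    for ra in rectsA {
        let corners = [CGPoint(x: ra.minX, y: ra.minY),
                       CGPoint(x: ra.maxX, y: ra.minY),
                       CGPoint(x: ra.maxX, y: ra.maxY),
                       CGPoint(x: ra.minX, y: ra.maxY)]
            .map { $0.applying(aIntoB) }   // CGPointApplyAffineTransform in Swift
        for rb in rectsB where corners.contains(where: { rb.contains($0) }) {
            return true
        }
    }
    return false
}
```

For robustness you would also run the test in the opposite direction (B's vertices against A's rects), since two rects can cross without either containing a vertex of the other.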
You will probably need to investigate a bit to get this working, but at least you have some clues on how to solve it.
Given an image that can contain any variety of solid-color shapes, what is the best method for examining the image at a given point and then determining the slope (or vector, if you prefer) of that area?
Being new to XNA development, I feel there must be an established method for doing this sort of thing, but I have been Googling this issue for a while now.
By way of example, I have mocked up a quick image to demonstrate what I am trying to do. The white portion of the image (where the labels are shown) would be transparent pixels. The "ground" would be a RenderTarget2D or Texture2D object that will provide the Color array of pixels.
Example
What you are looking for is the tangent, which is at 90 degrees to the normal (the term more commonly used). These two terms should assist you in your searching.
This is trivial if you've got the polygon outline data. If all you have is an image, then you have to come up with a way to convert it into a polygon.
It may not be entirely suitable for your problem, but the first place I would go is the Farseer Physics Engine, which has a "texture to polygon" feature you could possibly reuse.
If you are using the terrain as some kind of "ground", you can possibly cheat a bit by looking at the adjacent column of pixels and using that to determine the ground slope at that exact point. Kind of like what Lemmings and Worms do.
If you make that determination at the boundary between each pair of pixels, you get rise:run gradients between two pixels horizontally. Usually you just break the result into categories: flat (0:1), 45 degrees (1:1) or too steep (more than 2:1). With a more complicated algorithm that looks outwards to more columns, you can get better resolution. A sketch of the basic version follows.
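A small Swift sketch of the adjacent-column idea. The `heightAt` closure is assumed, e.g. the row of the first opaque pixel scanning down each column of the terrain texture, and the sign convention depends on whether your y axis points up or down:

```swift
// Estimate the ground gradient at column x from per-column surface heights.
func slope(atColumn x: Int, heightAt: (Int) -> Int, lookAhead run: Int = 1) -> Float {
    let rise = heightAt(x + run) - heightAt(x)
    return Float(rise) / Float(run)          // rise:run gradient
}

// Classify like the answer suggests: flat, roughly 45 degrees, or too steep.
func category(gradient g: Float) -> String {
    switch abs(g) {
    case ..<0.5:  return "flat"
    case ..<1.5:  return "about 45 degrees"
    default:      return "too steep"
    }
}
```

Looking outwards to more columns simply means calling `slope` with a larger `run`, which averages out single-pixel noise.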