My question may be a bit too broad, but I am going for the concept. How can I create a surface like they did in the "Cham Cham" app?
https://itunes.apple.com/il/app/cham-cham/id760567889?mt=8.
I got most of the stuff done in the app, but the way the surface changes with user touch is quite different: you can change its altitude, and it grows and shrinks. How can this be done using SpriteKit? What is the concept behind it? Can anyone explain it a bit?
Thanks
Here comes the answer from the Cham Cham developers :)
Let me split the explanation into different parts:
Note: As the project started quite a while ago, it is implemented using pure OpenGL. The SpriteKit implementation might differ, but you just need to map the idea over to it.
Defining the ground
The ground is represented by a set of points, which are interpolated using a Hermite spline. Basically, the game uses a bunch of control points defining the surface, plus a set of interpolated points between each pair of control points, like below:
The red dots are control points, and everything in between is computed using the mentioned Hermite interpolation. The green points in the middle have nothing to do with it, but make the whole thing look like boobs :)
You can choose an arbitrary number of steps to make your boobs look as smooth as possible, but this is more a matter of performance.
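To make this concrete, here is a minimal C# sketch of that interpolation (I'm using Catmull-Rom tangents, one common way of picking the Hermite tangents, and XNA's Vector2 purely for illustration; the original is OpenGL and the names are mine):

using System;
using System.Collections.Generic;
using Microsoft.Xna.Framework;

// Interpolate `steps` points between every pair of control points.
static List<Vector2> InterpolateSurface(IList<Vector2> control, int steps)
{
    var result = new List<Vector2>();
    for (int i = 0; i + 1 < control.Count; i++)
    {
        Vector2 p0 = control[i];
        Vector2 p1 = control[i + 1];
        // central-difference (Catmull-Rom) tangents, clamped at the ends
        Vector2 m0 = (control[i + 1] - control[Math.Max(i - 1, 0)]) * 0.5f;
        Vector2 m1 = (control[Math.Min(i + 2, control.Count - 1)] - control[i]) * 0.5f;

        for (int s = 0; s < steps; s++)
        {
            float t = s / (float)steps, t2 = t * t, t3 = t2 * t;
            // standard cubic Hermite basis functions
            result.Add((2 * t3 - 3 * t2 + 1) * p0 + (t3 - 2 * t2 + t) * m0
                     + (-2 * t3 + 3 * t2) * p1 + (t3 - t2) * m1);
        }
    }
    result.Add(control[control.Count - 1]);
    return result;
}

More steps give a smoother curve at the cost of more vertices to render.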
Controlling the shape
All you need to do is allow the user to move the control points (or some of them, as in Cham Cham; you can define the range each point may move in, etc.). Recomputing the interpolated values will yield a changed shape, which remains smooth at all times (given you have picked enough intermediate points).
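As a sketch (reusing the InterpolateSurface helper above; the per-point limit arrays and fields are hypothetical names for whatever constraints you define):

// Drag one control point, constrained to its allowed vertical range.
void OnTouchMoved(int pointIndex, Vector2 touch)
{
    Vector2 p = controlPoints[pointIndex];
    p.Y = MathHelper.Clamp(touch.Y, minY[pointIndex], maxY[pointIndex]);
    controlPoints[pointIndex] = p;
    // recompute the intermediate points; the shape stays smooth
    surfacePoints = InterpolateSurface(controlPoints, steps);
}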
Texturing the thing
Again, it is up to you how you apply the texture. In Cham Cham, we use one big texture to hold the background image and recompute the texture coordinates at every shape change. You could try a more sophisticated algorithm, like squeezing the texture or whatever you find appropriate.
As for the surface texture (the one that covers the ground - grass, ice, sand, etc.) - you can just use triangle strips, with "bottom" vertices sitting at every interpolated point of the surface and "top" vertices raised above them (by offsetting them from the "bottom" ones in the direction of the normal at that point).
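A sketch of building such a strip, again in C#/XNA terms (two vertices per interpolated point; the names are illustrative):

using System;
using System.Collections.Generic;
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

static VertexPositionTexture[] BuildSurfaceStrip(IList<Vector2> surface, float thickness)
{
    var verts = new VertexPositionTexture[surface.Count * 2];
    for (int i = 0; i < surface.Count; i++)
    {
        // approximate the tangent from the neighbours; the normal is its perpendicular
        Vector2 prev = surface[Math.Max(i - 1, 0)];
        Vector2 next = surface[Math.Min(i + 1, surface.Count - 1)];
        Vector2 tangent = Vector2.Normalize(next - prev);
        Vector2 normal = new Vector2(-tangent.Y, tangent.X);

        float u = i / (float)(surface.Count - 1);  // stretch the texture along the strip
        verts[i * 2]     = new VertexPositionTexture(new Vector3(surface[i], 0), new Vector2(u, 1));
        verts[i * 2 + 1] = new VertexPositionTexture(new Vector3(surface[i] + normal * thickness, 0), new Vector2(u, 0));
    }
    return verts;  // draw with PrimitiveType.TriangleStrip
}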
Rendering it
The easiest way is to utilize some tessellation library, like libtess. What it will do is convert your boundary line (composed of interpolated points) into a set of triangles. It will preserve texture coordinates, so you can just feed these triangles to the renderer.
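In C#, the LibTessDotNet port of libtess2 exposes this quite directly; a rough sketch, assuming that library (the original C libtess uses a callback-based API instead):

var tess = new LibTessDotNet.Tess();
var contour = new LibTessDotNet.ContourVertex[boundary.Count];
for (int i = 0; i < boundary.Count; i++)
    contour[i].Position = new LibTessDotNet.Vec3 { X = boundary[i].X, Y = boundary[i].Y, Z = 0 };
tess.AddContour(contour, LibTessDotNet.ContourOrientation.Clockwise);
// polySize 3 = output plain triangles
tess.Tessellate(LibTessDotNet.WindingRule.EvenOdd, LibTessDotNet.ElementType.Polygons, 3);
for (int t = 0; t < tess.ElementCount; t++)
{
    var p0 = tess.Vertices[tess.Elements[t * 3 + 0]].Position;
    var p1 = tess.Vertices[tess.Elements[t * 3 + 1]].Position;
    var p2 = tess.Vertices[tess.Elements[t * 3 + 2]].Position;
    // feed these triangles (plus your recomputed texture coordinates) to the renderer
}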
SpriteKit note
Unfortunately, I am not really familiar with the SpriteKit engine, so I cannot guarantee you will be able to copy the idea over one-to-one, but please feel free to comment on the challenging aspects of the implementation and I will try to help.
Related
I'm struggling with a texture-baking process in 3ds Max. I have a white 3D mesh with 2 image textures. I'm trying to get a diffuse map (see target_diffuse_map.jpg). To do this, I execute the following steps:
1) Assign image-texture1 and image-texture2 to face1 and face2 of the object.
2) Clone the object to get the white colors when baking texture.
3) Unwrap UVW.
4) Render To Texture to obtain the diffuse map.
5) Projection of the texture + white colors on the cloned object.
Please, find these steps on this small video I made: https://drive.google.com/file/d/1h4v2CrL8OCLwdeVtLmpQwD250cawgJpi/view
I obtain a badly sampled and weird diffuse map (please see obtained_diffuse_map.jpg). What I want is target_diffuse_map.jpg.
Am I forgetting some steps?
Thank you for your help.
You need to either:
Add a small amount of "Push" in the Projection Modifier
Uncheck "Use Cage" in the Projection Options dialog, while setting a very small value for the offset
Projection mapping works by casting rays from points on the cage towards the corresponding points on your mesh. You did not push the cage out at all, so the rays are not well defined: each ray is cast from a point toward that exact same point, which causes numerical errors and z-fighting. There needs to be some small amount of push so that the "from" and "to" points of each ray are different, giving them a well-defined direction to travel.
The second option, instead of using the cage defined in the projection modifier, is to use the offset method (you probably still need to apply the projection modifier, though). This method defines each ray as starting from a point obtained by taking the corresponding point on the mesh and moving outward by a fixed offset in the direction of the normal. The advantage is that for curved objects with large polygons it produces less distortion, because the system uses the smoothed shading normal at each point. The disadvantage is that you can't have different cage distances at different points of the model for finer control. Use this method for round wooden barrels and other simple objects with large, smooth curves.
Also, your situation is made difficult by having different parts of the model very close to each other (touching) and embedded within each other - namely, how the mouth of the bottle is inside the cap and the cap is touching the base. In this case, it might make sense to break the objects apart after you have the overall UV mapping, run projection mapping on each one separately, and then combine the maps back together in an image editor.
I have a question and hope you can help me with it. I've been busy making a game with XNA and have recently started getting into shaders (HLSL). There's this shader that I like, use, and would like to improve.
The shader creates an outline by drawing the back faces of a model (in black) and translating each vertex along its normal. Now, for a smooth-shaded model this is fine. I, however, am using flat-shaded models (I'm posting this from my phone, but if anyone is confused about what that means, I can upload a picture later). This means that the vertices are translated along the normals of their corresponding faces, resulting in visible gaps between the faces.
Now, the question is: is there a way to calculate (either in the shader or in XNA) how I should translate each vertex, or is the only viable option just making a copy of the 3D model, but with smooth shading?
Thanks in advance, hope you can educate me!
EDIT: Alternatively, I could load only a smooth-shaded model and try to flat-shade it in the shader. That would, however, mean that I have to be able to find all vertices of the corresponding face, add their normals, and normalize the result. Is there a way to do this?
EDIT2: So far, I've found a few options that don't seem to work in the end: setting "shademode" in HLSL is now deprecated. Setting the fill mode to "wireframe" would be neat (while culling front faces), if only I were able to set the line thickness.
I'm working on a new idea here. I could maybe iterate through vertices, find their position on the screen, and draw 2d lines between those points using something like the RoundLine library. I'm going to try this, and see if it works.
Ok, I've been working on this problem for a while and found something that works quite nicely.
Instead of doing a lot of complex math to draw 2D lines at the right depth, I simply did the following:
Set a rasterizer state that culls front faces, draws in wireframe, and has a slightly negative depth bias.
Now draw the model in all black (I modified my shader for this).
Set a rasterizer state that culls back faces, draws in FillMode.Solid, and has a 0 depth bias.
Now draw the model normally.
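For reference, a minimal XNA 4 sketch of those two passes (the DrawModel calls and effects are hypothetical placeholders, and the bias value needs tuning for your projection):

// create the states once, not per frame
static readonly RasterizerState OutlineState = new RasterizerState
{
    CullMode = CullMode.CullClockwiseFace,  // cull front faces (XNA treats clockwise winding as front-facing by default)
    FillMode = FillMode.WireFrame,
    DepthBias = -0.0001f                    // slightly negative depth bias
};
static readonly RasterizerState NormalState = new RasterizerState
{
    CullMode = CullMode.CullCounterClockwiseFace,
    FillMode = FillMode.Solid,
    DepthBias = 0f
};

GraphicsDevice.RasterizerState = OutlineState;
DrawModel(blackEffect);   // pass 1: the model in all black
GraphicsDevice.RasterizerState = NormalState;
DrawModel(normalEffect);  // pass 2: the model drawn normally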
Since we can't change the thickness of wireframe lines, we're left with a very slim outline. For my purposes, this was actually not too bad.
I hope this information is useful to somebody later on.
In 3D terrain that consists of thousands of cubes (e.g. Minecraft), what is a way to handle each block in terms of location and rendering? More specifically, I know that drawing a primitive of a cube and world-transforming it everywhere in DirectX 9 is probably a ridiculous way to accomplish this, since there are so many performance issues, so I was wondering what a more reasonable method would be.
Should each cube be a mesh that's copied many times, or is there a way to create the appropriate meshes from the data in your vertex buffer?
I found this article that walks through some of the theory behind implementing what I want to implement, but I've never used octrees before, so I wasn't able to take too much from the source code. If octrees are indeed the way to go, where is a good starting point to learn about them? Most of my Google searches only turned up blog posts about theory with little or no implementation examples.
It seems like using voxels would be useful in doing this, but like with octrees, I'm coming from no experience here, so I don't really know what to study first.
Anyway, thanks for any advice/resources/book names you can spare. I'm sure it's obvious, but I'm still very new to 3D programming, so I appreciate your help.
First off, if you're using Minecraft as your reference, think of its use of chunks and relate it to oct-trees. Minecraft divides its world into smaller chunks to handle the massive amount of information that needs to be stored, so use oct-trees to organize that data. Goz has a very accurate description of how oct-trees and quad-trees work, so use his information as a reference.
Another thing to consider is that you don't actually want to draw every cube to the screen, as this will eat up your framerate. Use object culling to only draw visible cubes to the screen. Again, think of Minecraft: have you ever encountered a glitch where you can see through the blocks and under the world? This is because Minecraft only draws the top layer of blocks. With this many objects on screen, it would be a worthwhile investment to look into object culling using both the camera frustum and occlusion queries.
For information on using DirectX I would recommend any book by Frank Luna. I own this book myself and it never leaves my side when programming in DirectX. http://www.amazon.com/Introduction-Game-Programming-Direct-9-0c/dp/1598220160/ref=sr_1_3?ie=UTF8&qid=1332478780&sr=8-3
I highly recommend this book as I've learned almost everything I know about DirectX from it.
Upon a Google search I found this link that discusses occlusion culling, because Luna doesn't cover occlusion culling, only frustum culling. I hear the Programming Gems series mentioned a lot, but I can't attest to it personally. http://http.developer.nvidia.com/GPUGems/gpugems_ch29.html
Hope this helps.
Oct-trees are fairly simple, especially axis-aligned ones like those in Minecraft.
It is basically just a 3D extension of the quad-tree. You may find it easier to learn about Quad-trees first.
To give you a quick overview of a quad-tree: basically, you start off with a square. Now imagine placing a much smaller square inside that square. If you wish to build a quad-tree representing it, you first divide the original square into 4 equal-sized squares.
Next you check each quadrant, and if the smaller square is in that quadrant, you split that quadrant into 4 smaller squares. Then you check those 4 quadrants, choose the quadrant that contains the smaller square, and subdivide again. Eventually your smaller square will be wholly contained in one or more quadrants inside quadrants inside quadrants (etc). You have now built your quad-tree.
Now if you imagine you are searching for a specific square inside the larger square you can quickly see the bonus of a quad-tree. Instead of searching every possible square in the quad tree (equivalent to searching every pixel in a texture) you can now check the first 4 quadrants to see if they contain it. If one does you can check its 4 sub quadrants and so on until you find the smallest quadrant wholly containing your square (or pixel). This way you end up doing many fewer tests to find your object.
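To make that concrete, here is a minimal C# sketch of such a quad-tree (XNA's Rectangle stands in for the squares; all names are illustrative):

using System.Collections.Generic;
using Microsoft.Xna.Framework;

class QuadTree
{
    const int MaxDepth = 6;
    readonly Rectangle bounds;
    readonly int depth;
    QuadTree[] children;  // null until this node is split
    readonly List<Rectangle> items = new List<Rectangle>();

    public QuadTree(Rectangle bounds, int depth = 0) { this.bounds = bounds; this.depth = depth; }

    public void Insert(Rectangle item)
    {
        if (children == null && depth < MaxDepth) Split();
        if (children != null)
            foreach (var child in children)
                if (child.bounds.Contains(item)) { child.Insert(item); return; }  // wholly contained: push it down
        items.Add(item);  // straddles a boundary (or max depth reached): keep it here
    }

    void Split()
    {
        int w = bounds.Width / 2, h = bounds.Height / 2;
        children = new[]
        {
            new QuadTree(new Rectangle(bounds.X,     bounds.Y,     w, h), depth + 1),
            new QuadTree(new Rectangle(bounds.X + w, bounds.Y,     w, h), depth + 1),
            new QuadTree(new Rectangle(bounds.X,     bounds.Y + h, w, h), depth + 1),
            new QuadTree(new Rectangle(bounds.X + w, bounds.Y + h, w, h), depth + 1),
        };
    }

    // Skip whole subtrees that cannot contain the query - the "fewer tests" described above.
    public void Query(Rectangle region, List<Rectangle> results)
    {
        if (!bounds.Intersects(region)) return;
        foreach (var item in items)
            if (item.Intersects(region)) results.Add(item);
        if (children != null)
            foreach (var child in children)
                child.Query(region, results);
    }
}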
Now an oct-tree is basically the same thing but instead of encoding squares in squares you now encode cubes in cubes. Every cube can be split into 8 smaller octants (and hence the name oct-tree).
Oct-trees have the advantage that by knowing which octant you are starting in, you can easily cast rays through the oct-tree to find collisions (as an octant is either full, partially full, or empty). If an octant is empty then you pass right through it and check the octant on the other side. If it is partially full you check its sub-octants, and so on, until you either find a full octant (i.e. you've hit a solid cube and you render it) or you pass through the octant entirely, and hence there is no cube to render. This is how Minecraft works (I'm guessing, anyway ;)). This is also a good way of quickly rendering voxel data, which more people are looking into these days as a possible future rendering mechanism.
Hope that's some help! :)
Oct-trees and quad-trees are useful for culling sections of your geometry to render. Minecraft uses 16x16x16 render blocks to break up the terrain into manageable pieces.
Another technique to consider is instancing. Instancing is where you tell the GPU to render an object multiple times in different locations. It's used for crowd rendering, trees, anything where the geometry is the same, but you have lots of them.
http://msdn.microsoft.com/en-us/library/windows/desktop/bb173349(v=vs.85).aspx
http://http.developer.nvidia.com/GPUGems2/gpugems2_chapter03.html
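In XNA 4 (HiDef profile), the draw side of hardware instancing looks roughly like this; the buffers, effect, and counts are hypothetical, and the effect must read the per-instance world transform from stream 1:

// stream 0: one cube's geometry; stream 1: one world transform per instance
GraphicsDevice.SetVertexBuffers(
    new VertexBufferBinding(cubeVertexBuffer, 0, 0),
    new VertexBufferBinding(instanceTransformBuffer, 0, 1));  // trailing 1 = advance once per instance
GraphicsDevice.Indices = cubeIndexBuffer;

foreach (EffectPass pass in instancingEffect.CurrentTechnique.Passes)
{
    pass.Apply();
    // one draw call renders every cube: 24 vertices / 12 triangles each
    GraphicsDevice.DrawInstancedPrimitives(PrimitiveType.TriangleList, 0, 0, 24, 0, 12, instanceCount);
}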
Here is an article where the writer duplicates the Minecraft renderer in OpenGL 4. While the code won't apply to your case, the techniques (culling cubes that are surrounded, etc.) can be applied to a DirectX renderer.
http://codeflow.org/entries/2010/dec/09/minecraft-like-rendering-experiments-in-opengl-4/
Don't be fooled by the blocky graphics and the low quality textures. Minecraft is an extremely complex renderer and you'll need to come up with ways to handle the sheer number of items involved. For example even a "small" part of the world, say 100x100x100 blocks is 1 million blocks. To push each block to the GPU as a separate mesh would kill your GPU. The Minecraft renderer is far more complex than most first person shooters when you get down to the technology.
Given an image that can contain any variety of solid color images, what is the best method for parsing the image at a given point and then determining the slope (or Vector if you prefer) of that area?
Being new to XNA development, I feel there must be an established method for doing this sort of thing, but I have been Googling this issue for a while now.
By way of example, I have mocked up a quick image to demonstrate what I am trying to do. The white portion of the image (where the labels are shown) would be transparent pixels. The "ground" would be a RenderTarget2D or Texture2D object that will provide the Color array of pixels.
Example
What you are looking for is the tangent, which is 90 degrees to the normal (which is more commonly used). These two terms should assist you in your searching.
This is trivial if you've got the polygon outline data. If all you have is an image, then you have to come up with a way to convert it into a polygon.
It may not be entirely suitable for your problem, but the first place I would go is the Farseer Physics Engine, which has a "texture to polygon" feature you could possibly reuse.
If you are using the terrain as some kind of "ground", you can possibly cheat a bit by looking at the adjacent column of pixels and using that to determine the ground slope at that exact point. Kind of like what Lemmings and Worms do.
If you make that determination at the boundary between each pixel, you can get gradients of rise:run between two pixels horizontally. Usually you just break it into categories: so flat (1:1), 45 degrees (2:1) or too steep (>3:1). With a more complicated algorithm, that looks outwards to more columns, you can get better resolution.
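A rough C# sketch of that adjacent-column idea (assuming the pixels array came from texture.GetData<Color>(pixels) and transparent pixels mean empty space):

using Microsoft.Xna.Framework;

// y of the first solid pixel from the top of column x, or height if the column is empty
static int TopSolidY(Color[] pixels, int width, int height, int x)
{
    for (int y = 0; y < height; y++)
        if (pixels[y * width + x].A > 0)
            return y;
    return height;
}

// rise between columns x and x+1 (run = 1 pixel; y grows downward on screen)
static int RiseAt(Color[] pixels, int width, int height, int x)
{
    return TopSolidY(pixels, width, height, x) - TopSolidY(pixels, width, height, x + 1);
}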
I'm working on a 2D Platform game, and I was wondering what's the best (performance-wise) way to implement Surface (Collision) Detection.
So far I'm thinking of constructing a list of level objects, each built from a list of lines, and drawing tiles along the lines.
http://img375.imageshack.us/img375/1704/lines.png
I'm thinking every object holds the ID of the surface it walks on, in order to easily manipulate its Y position while walking uphill/downhill.
Something like this:
// Player/MovableObject class
void MoveLeft()
{
    this.Position.Y = Helper.GetSurfaceById(this.SurfaceId).GetYWhenXIs(this.Position.X);
}
So the logic I use to detect "dropping/walking on a surface" is a simple point (player's lower legs)-touches-line (surface) check (with some safety approximation - let's say 1-2 pixels over the line).
Is this approach OK?
I've been having difficulty trying to find reading material for this problem, so feel free to drop links/advice.
Having worked with polygon-based 2D platformers for a long time, let me give you some advice:
Make a tile-based platformer.
Now, to directly answer your question about collision-detection:
You need to make your world geometry "solid" (you can get away with making your player object a point, but making it solid is better). By "solid" I mean - you need to detect if the player object is intersecting your world geometry.
I've tried "does the player cross the edge of this world geometry", and in practice it doesn't work (even though it might seem to work on paper - floating-point precision issues will not be your only problem).
There are lots of instructions online on how to do intersection tests between various shapes. If you're just starting out I recommend using Axis-Aligned Bounding Boxes (AABBs).
It is much, much, much, much, much easier to make a tile-based platformer than one with arbitrary geometry. So start with tiles, detect intersections with AABBs, and then once you get that working you can add other shapes (such as slopes).
Once you detect an intersection, you have to perform collision response. Again, a tile-based platformer is easiest - just move the player to just outside the tile that was collided with (do you move above it, or to the side? It will depend on the collision - I will leave how to do this as an exercise).
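For example, a bare-bones intersection test and push-out using XNA's Rectangle (deciding when to prefer one axis over the other is the exercise mentioned above):

using System;
using Microsoft.Xna.Framework;

// Returns the smallest movement that separates `player` from `tile` (zero if no overlap).
static Vector2 Resolve(Rectangle player, Rectangle tile)
{
    if (!player.Intersects(tile)) return Vector2.Zero;
    float pushRight = tile.Right - player.Left;
    float pushLeft  = player.Right - tile.Left;
    float pushDown  = tile.Bottom - player.Top;
    float pushUp    = player.Bottom - tile.Top;
    float dx = pushRight < pushLeft ? pushRight : -pushLeft;
    float dy = pushDown < pushUp ? pushDown : -pushUp;
    // resolve along the axis of least penetration
    return Math.Abs(dx) < Math.Abs(dy) ? new Vector2(dx, 0) : new Vector2(0, dy);
}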
(PS: you can get terrific results with just square tiles - look at Knytt Stories, for example.)
Check out how it is done in XNA's Platformer Starter Kit project. Basically, the tiles have an enum that determines whether the tile is passable, impassable, etc.; then, in your level, you call GetBounds on the tiles, check for intersections with the player, and determine what to do.
I've had wonderful fun times dealing with 2D collision detection. What seems like a simple problem can easily become a nightmare if you do not plan it out in advance.
The best way to do this in an OO sense would be to make a generic object, e.g. class MapObject. This has a position coordinate and a slope. From this, you can extend it to include other shapes, etc.
From that, let's work with collisions with a solid object. Assuming just a block, say 32x32, you can hit it from the left, right, top, and bottom. Or, depending on how you code it, hit it from the top and from the left at the same time. So how do you determine which way the character should go? For instance, if the character hits the block from the top in order to stand on it, incorrect code might inadvertently push the character off to the side instead.
So, what should you do? Here's what I did for my 2D game: I looked at the character's prior position before deciding how to react to the collision. If the character's Y position + height is above the block and it is moving west, then I would check for the top collision first and then the left collision. However, if the character's Y position + height is below the top of the block, I would check the left collision.
Now let's say you have a block that has incline. The block is 32 pixels wide, 32 pixels tall at x=32, 0 pixels tall at x=0. With this, you MUST assume that the character can only hit and collide with this block from the top to stand on. With this block, you can return a FALSE collision if it is a left/right/bottom collision, but if it is a collision from the top, you can state that if the character is at X=0, return collision point Y=0. If X=16, Y=16 etc.
Of course, this is all relative. You'll be checking against multiple blocks, so what you should do is store all of the possible changes to the character's position in a temporary variable. So, if the character overlaps a block by 5 in the X direction, subtract 5 from that variable. Accumulate all of the possible changes in the X and Y directions, apply them to the character's current position, and reset them to 0 for the next frame.
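As a sketch of that incline block in C# (hypothetical names; y grows downward in screen coordinates):

// 32 wide; the surface is 0 pixels tall at local x = 0 and 32 pixels tall at x = 32.
static bool SlopeCollision(Rectangle block, Rectangle character, bool movingDown, out int surfaceY)
{
    int localX = Math.Max(0, Math.Min(32, character.Center.X - block.X));
    surfaceY = block.Bottom - localX;  // at X=0 the collision point is Y=0; at X=16, Y=16
    // report FALSE for left/right/bottom contacts, as described above
    return movingDown && character.Bottom >= surfaceY;
}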
Good luck. I could provide more samples later, but I'm on my Mac (my code is on a Windows PC). This is the same type of collision detection used in the classic Mega Man games, IIRC. Here's a video of this in action too: http://www.youtube.com/watch?v=uKQM8vCNUTM
You can try using one of the physics engines, like Box2D or Chipmunk. They have their own advanced collision detection systems and a lot of different bonuses. Of course they won't speed your game up, but they are suitable for most games on any modern device.
It is not that easy to create your own collision detection algorithm. One easy example of a difficulty is: what if your character is moving at a high enough velocity that between two frames it will travel from one side of a line to the other? Then your algorithm won't have had time to run in between, and a collision will never be detected.
I would agree with Tiendil: use a library!
I'd recommend Farseer Physics. It's a great and powerful physics engine that should be able to take care of anything you need!
I would do it this way:
Strictly no lines for collision. Only solid shapes (boxes and triangles, maybe spheres)
2D BSP or 2D space partitioning to store all level shapes, OR a "sweep and prune" algorithm. Each of those is very powerful. Sweep and prune, combined with insertion sort, can easily handle thousands of potentially colliding objects (if not hundreds of thousands), and 2D space partitioning will let you quickly get all nearby potentially colliding shapes on demand.
The easiest way to make objects walk on surfaces is to make them fall down a few pixels every frame, then get the list of surfaces the object collides with and move the object in the direction of the surface normal (in 2D, the perpendicular). This approach will cause objects to slide down non-horizontal surfaces, but you can fix that by altering the normal slightly.
Also, you'll have to run the collision detection and "push objects away" routine several times per frame, not just once. This handles situations where objects are in a heap or contact multiple surfaces.
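A sketch of that per-frame routine (every name here is a placeholder for your own broad phase and shape tests):

const int ResolveIterations = 4;  // several passes to settle heaps and multiple contacts

void UpdateObject(MovableObject obj)
{
    obj.Position += new Vector2(0, FallPixelsPerFrame);  // fall down a few pixels
    for (int i = 0; i < ResolveIterations; i++)
    {
        foreach (Surface s in CollidingSurfaces(obj))  // from your sweep-and-prune / partitioning
        {
            // in 2D the normal is the perpendicular of the surface direction
            obj.Position += s.Normal * s.PenetrationDepth(obj);  // push the object out
        }
    }
}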
I have used a limited collision detection approach that worked on a very different basis, so I'll throw it out here in case it helps:
Keep a secondary image that's black and white, where impassable pixels are white. Construct a mask of the character that is simply the set of pixels it currently covers. To evaluate a prospective move, read the pixels of that mask from the secondary image and see if a white one comes back.
To detect collisions with other objects, use the same sort of approach, but instead of booleans use enough bit depth to cover all possible objects. Draw each object to the secondary image entirely in the "color" of its object number. When you read through the mask and get a non-zero pixel, the "color" is the object number you hit.
This resolves all possible collisions in O(n) time rather than the O(n^2) of calculating interactions.
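A sketch of the mask lookup (bounds checks omitted; `world` holds one byte per pixel, 0 for empty, otherwise the object number that drew it):

using System.Collections.Generic;
using Microsoft.Xna.Framework;

static int FirstHit(byte[] world, int worldWidth, IEnumerable<Point> characterMask, Point move)
{
    foreach (Point p in characterMask)  // every set pixel of the character's mask
    {
        byte hit = world[(p.Y + move.Y) * worldWidth + (p.X + move.X)];
        if (hit != 0)
            return hit;  // the "color" is the object number we hit
    }
    return 0;  // the prospective move is clear
}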