I need to draw a large sphere composed of small quads. I am very new to Stage3D, so I need some advice. How can I create a sphere with hundreds (maybe thousands) of quads?
I haven't worked with Stage3D, but I do know that moosefetcher is wrong: it is entirely possible to create spheres out of quads. Here is a modelling tips video; in it you can see a sphere made entirely out of quads.
video
Because I haven't worked with Stage3D I can't give exact steps, but I'm sure you could look at tutorials for creating quad spheres in 3DSMAX, Maya, and Cinema 4D. The steps should be roughly similar.
However, if they are not similar, I'm pretty sure you can download a model of a quad sphere, load it up in Stage3D, and then examine it and try to replicate it. Here's a link to download one:
download
It's not available in a Stage3D format, so I did a bit of research, and there are a few programs that allow you to convert .blend files and .obj files to Stage3D files.
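Since the question is really about generating the quads procedurally, here is a minimal sketch of the usual cube-to-sphere approach (written in Python only to show the math; the resulting vertex and index data is what you would upload to a Stage3D VertexBuffer3D/IndexBuffer3D): subdivide each face of a cube into an n×n grid and normalize every vertex onto the sphere, so the whole surface is made of quads.

```python
import math

def quad_sphere(n=16, radius=1.0):
    """Sphere built from quads by subdividing a cube and projecting the
    vertices onto the sphere. Returns (vertices, quads), where each quad
    is a 4-tuple of vertex indices. Vertices along shared cube edges are
    duplicated for simplicity, and winding is not made consistent here."""
    vertices, quads = [], []
    # Six cube faces: an origin corner plus two edge vectors spanning the face.
    faces = [
        ((-1, -1, -1), (2, 0, 0), (0, 2, 0)),   # -Z face
        ((-1, -1,  1), (0, 2, 0), (2, 0, 0)),   # +Z face
        ((-1, -1, -1), (0, 0, 2), (2, 0, 0)),   # -Y face
        ((-1,  1, -1), (2, 0, 0), (0, 0, 2)),   # +Y face
        ((-1, -1, -1), (0, 2, 0), (0, 0, 2)),   # -X face
        (( 1, -1, -1), (0, 0, 2), (0, 2, 0)),   # +X face
    ]
    for origin, u, v in faces:
        base = len(vertices)
        for i in range(n + 1):
            for j in range(n + 1):
                # Point on the cube face, then pushed out onto the sphere.
                x = origin[0] + u[0] * i / n + v[0] * j / n
                y = origin[1] + u[1] * i / n + v[1] * j / n
                z = origin[2] + u[2] * i / n + v[2] * j / n
                length = math.sqrt(x * x + y * y + z * z)
                vertices.append((radius * x / length,
                                 radius * y / length,
                                 radius * z / length))
        for i in range(n):
            for j in range(n):
                a = base + i * (n + 1) + j
                quads.append((a, a + 1, a + n + 2, a + n + 1))
    return vertices, quads

verts, quads = quad_sphere(n=16)   # 6 * 16 * 16 = 1536 quads
print(len(verts), "vertices,", len(quads), "quads")
```

Note that Stage3D itself only draws triangles (Context3D.drawTriangles), so each quad would be split into two triangles in the index buffer, but the mesh stays logically all-quads, which is what matters for modelling and texturing.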
You cannot cover the entire surface of a sphere using just quads. At least SOME triangles will be required.
I have a problem in my game: I have to use SpriteSortMode.Texture because I have a lot of objects with few textures, so I cannot afford to use SpriteSortMode.BackToFront.
The thing is this means I cannot draw by layers, unless I do SpriteBatch.Begin with the exact same settings, which is what I'm currently doing.
I only have 3 draw layers I need - a Tileset surface, Objects like rocks or characters on the surface, and UI.
Other solutions I've found are using textured quads (which supposedly also improve tileset drawing performance) and going 3D with an orthographic view, which I haven't researched yet.
I'm hoping there's a better way to make this work.
Why would having a lot of objects with few textures mean you have to use SpriteSortMode.Texture?
"This can improve performance when drawing non-overlapping sprites of uniform depth." says the MSDN page, and this is clearly not what you are doing.
Just use the default SpriteSortMode.Deferred and draw things back to front in order.
I am creating a model of Saturn and I'm having problems when creating the rings. I found this asset
but when I try to set it as a diffuse, it projects like this
How can I control the way a texture projects over a geometry?
I found the solution. By replacing the cylinder with a torus and rotating the image 90 degrees, Xcode did the mapping itself.
But there must be a better way.
This isn't specifically a SceneKit or iOS issue; the same would apply in any 3D package.
You can control the way a texture projects over a geometry by using UV mapping. In practice that means you map the vertices and faces of the model onto the texture in software such as Blender. The texture you are using now is meant to be tiled, but because the lines on the texture are perfectly straight it will never look optimal.
To save yourself some trouble, use a texture that shows the entire ring from the top/above.
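To illustrate what such a top-down mapping looks like in practice, here is a rough sketch (plain Python, not the SceneKit API; the positions/UVs/indices could be fed into an SCNGeometry or any other mesh) that builds a flat ring and assigns each vertex the UV it would have under a straight planar projection of a square, top-down ring texture:

```python
import math

def ring_mesh(inner=1.0, outer=2.0, segments=64):
    """Flat ring (annulus) in the XZ plane, with UVs that sample a square
    top-down texture of the rings (texture centre = planet centre)."""
    positions, uvs, indices = [], [], []
    for i in range(segments + 1):
        angle = 2.0 * math.pi * i / segments
        ca, sa = math.cos(angle), math.sin(angle)
        for r in (inner, outer):
            positions.append((r * ca, 0.0, r * sa))
            # Planar projection from above: x/z in [-outer, outer] -> u/v in [0, 1].
            uvs.append((0.5 + 0.5 * r * ca / outer,
                        0.5 + 0.5 * r * sa / outer))
    for i in range(segments):
        a, b = 2 * i, 2 * i + 1          # inner/outer vertex of this segment
        c, d = 2 * i + 2, 2 * i + 3      # inner/outer vertex of the next segment
        indices += [a, b, c,  c, b, d]   # two triangles per ring quad
    return positions, uvs, indices
```

Because the UVs are derived from the same top-down view the texture was painted in, the ring detail follows the geometry radially instead of being tiled around it.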
I think the best way is to use SCNTube.
I need to turn a 2D human face into a 3D face.
I used this link to load an ".obj" file and map the textures. This example is only for a cube and a pyramid. I loaded a human face ".obj" file.
This loads the .obj file and displays the human face properly, as shown below.
But my problem is that I need to display different human faces without changing the ".obj" file, just by texture mapping.
However, the texture is not getting mapped properly, as the .obj file is of a different model. I tried changing just the ".png" file used as the texture, and the result, shown below, is that the texture is mapped but not exactly what I expected.
Below are my questions:
1) I need to load different texture images onto the same model (the same .obj file). Is this possible in OpenGL ES?
2) If the solution to the above problem is "shape matching", how can I do it with OpenGL ES?
3) And finally, a basic question: I need to display the image in a larger area. How do I make the display area bigger?
mtl2opengl is actually my project, so thanks for using it!
1) The only way you can achieve perfect texture swapping without distortion is if both textures are mapped onto the UV vertices in exactly the same way. Have a look at the images below:
Model A: Blonde Girl
Model B: Ashley Head
As you can see, textures are made to fit the model. So any swapping to a different geometry target will result in distortion. Simplified, human heads/faces have two components: Interior (Bone/Geometry) and Exterior (Skin/Texture). The interior aspect obviously defines the exterior, so perfect texture swapping on the same .obj file will not work unless you change the geometry of the model with the swap.
2) This is possible with a technique called displacement mapping that can be implemented in OpenGL ES, although with anticipated difficulty for multiple heads/faces. This would require your target .obj geometry to start with a pretty generic model, like a mannequin, and then use each texture to shift the position of the model vertices (see the sketch at the end of this answer). I think you need to be very comfortable with modeling, graphics, shaders, and math to pull this one off!
Via Wikipedia
3) I will add more transform options (scale & translate) in the next update. The Xcode project was actually made to show off the Perl script, not as a primer for OpenGL ES on iOS. For now, find the modelViewMatrix and fiddle with this little bit:
_modelViewMatrix = GLKMatrix4Scale(_modelViewMatrix, 0.30, 0.33, 0.30); // GLKMatrix4Scale returns the scaled matrix, so assign it back
Hope that answers all your questions!
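For completeness, here is a very stripped-down sketch of the displacement idea from point 2, in plain Python/NumPy rather than an OpenGL ES shader, and assuming you already have a grayscale displacement map for the face (deriving that map from a photo is the genuinely hard part):

```python
import numpy as np

def displace(vertices, normals, uvs, height_map, scale=0.05):
    """Move each vertex of a generic head mesh along its normal by an amount
    read from a displacement map.
    vertices, normals: (N, 3) arrays; uvs: (N, 2) array with values in [0, 1];
    height_map: (H, W) array of grayscale values in [0, 1]."""
    h, w = height_map.shape
    out = vertices.copy()
    for i, (u, v) in enumerate(uvs):
        # Sample the displacement map at this vertex's UV (nearest neighbour).
        px = min(int(u * (w - 1)), w - 1)
        py = min(int((1.0 - v) * (h - 1)), h - 1)
        out[i] += scale * height_map[py, px] * normals[i]
    return out
```

On hardware that supports vertex texture fetch, the same operation in a GLSL ES vertex shader is essentially one texture lookup followed by `position += scale * height * normal;`.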
I want to make a 2D shooter game in XNA. The terrain shall consist of a bitmap image which should be used as a collision map. I tried to do some character movement, but I failed with side collisions and walking up slopes. Do you have any ideas for that?
There's an excellent tutorial on pixel-perfect collision available on the MSDN App Hub.
Basically what you end up doing is pulling all the information from the texture (via GetData()) as an array, and looping through the overlapping pixels in each texture to see if they're both opaque, black, or whatever else you want to use to determine solidity. It gets a bit more complicated if you need scalable/rotated images, but the tutorial above contains instructions for that as well.
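The core of that per-pixel test, stripped of the XNA-specific types, looks roughly like this (Python sketch; in XNA the alpha arrays would come from Texture2D.GetData and the rectangles from each sprite's position and size):

```python
def pixels_collide(rect_a, alpha_a, rect_b, alpha_b, threshold=0):
    """rect_* = (x, y, width, height) in screen space (unrotated, unscaled);
    alpha_* = 2D list of alpha values, alpha[row][col], same size as its rect.
    Returns True if any overlapping pixel is opaque in both sprites."""
    ax, ay, aw, ah = rect_a
    bx, by, bw, bh = rect_b
    # Intersection rectangle of the two bounding boxes.
    left   = max(ax, bx)
    right  = min(ax + aw, bx + bw)
    top    = max(ay, by)
    bottom = min(ay + ah, by + bh)
    for y in range(top, bottom):
        for x in range(left, right):
            if (alpha_a[y - ay][x - ax] > threshold and
                    alpha_b[y - by][x - bx] > threshold):
                return True   # both sprites are solid at this screen pixel
    return False
```

For terrain collision against a single bitmap you don't even need two sprites: sample the map pixels just ahead of and below the character, and for slopes, if the pixel ahead is solid, try the same move one pixel higher before giving up.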
How do Google Maps do their panoramas in Street View?
Yeah, I know it's Flash, but how do they skew bitmaps with correct texture mapping?
Are they doing it at the pixel level like most Flash 3D engines, or just applying some tricky transformation to the bitmaps in the MovieClips?
Flash Panorama Player can help achieve a similar result!
It uses 6 equirectangular images (cube faces) stitched together seamlessly with some 'magic' ActionScript.
Also see these parts of flashpanos.com for plugins and tutorials, with (possibly) documentation.
A quick guide to shooting panoramas so you can view them with FPP (Flash Panorama Player).
Cubic projection cube faces are actually 90x90 degree rectilinear images like the ones you get from a normal camera lens. ~ What is VR Photography?
Check out http://www.panoguide.com/. They have howtos, links to software etc.
Basically there are two components in the process: the stitching software, which creates a single panoramic photo from many separate image sources, and the panoramic viewer, which distorts the image as you change your POV to simulate what your eyes would see if you were actually there.
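The viewer side boils down to a per-pixel projection: for each screen pixel, compute the 3D direction it looks along, convert that direction to longitude/latitude, and sample the equirectangular panorama there. A rough sketch of that lookup (Python, ignoring camera yaw/pitch and any filtering for brevity):

```python
import math

def equirect_lookup(px, py, view_w, view_h, pano_w, pano_h, fov=math.radians(90)):
    """Map a pixel of a perspective view (camera looking down +Z) to the
    (x, y) pixel of an equirectangular panorama to sample."""
    # Direction through this pixel on a virtual image plane at z = 1.
    plane_half = math.tan(fov / 2)
    dx = (2.0 * px / view_w - 1.0) * plane_half
    dy = (1.0 - 2.0 * py / view_h) * plane_half * view_h / view_w
    dz = 1.0
    length = math.sqrt(dx * dx + dy * dy + dz * dz)
    dx, dy, dz = dx / length, dy / length, dz / length
    # Direction -> longitude/latitude -> panorama pixel coordinates.
    lon = math.atan2(dx, dz)          # -pi .. pi around the vertical axis
    lat = math.asin(dy)               # -pi/2 .. pi/2 up/down
    u = (lon / (2 * math.pi) + 0.5) * (pano_w - 1)
    v = (0.5 - lat / math.pi) * (pano_h - 1)
    return int(u), int(v)
```

Doing this per pixel in software is expensive, which is why Flash viewers approximate it with textured geometry or piecewise transforms instead.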
My company uses the Papervision3D flash render engine, and maps a panoramic image (still image or video) onto a 3D sphere. We found that using a spherical object with about 25 divisions along both axes gives a much better visual result than mapping the same image on the six faces of a cube. Check it for yourself at http://www.panocast.com.
Actually, you could of course distort your image in advance, so that when it is mapped on the faces of a cube, its perspective is just right, but this requires the complete rerendering of your imagery.
With some additional "magic", we can also load still images incrementally, as needed, depending on where the user is looking and at what zoom level (not unlike Google Street View does).
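A sketch of that kind of sphere mapping (a generic latitude/longitude mesh with equirectangular UVs, written in Python rather than Papervision3D's ActionScript API):

```python
import math

def panorama_sphere(divisions=25, radius=10.0):
    """Lat/long sphere whose UVs wrap an equirectangular panorama exactly
    once around the inside of the sphere."""
    positions, uvs, indices = [], [], []
    for row in range(divisions + 1):
        lat = math.pi * row / divisions - math.pi / 2        # -90 .. 90 degrees
        for col in range(divisions + 1):
            lon = 2 * math.pi * col / divisions              # 0 .. 360 degrees
            positions.append((radius * math.cos(lat) * math.sin(lon),
                              radius * math.sin(lat),
                              radius * math.cos(lat) * math.cos(lon)))
            uvs.append((col / divisions, 1.0 - row / divisions))
    stride = divisions + 1
    for row in range(divisions):
        for col in range(divisions):
            a = row * stride + col
            indices += [a, a + 1, a + stride,
                        a + 1, a + stride + 1, a + stride]
    return positions, uvs, indices
```

The camera sits at the centre of the sphere, so the texture has to be visible from the inside: either flip the triangle winding used here or adjust back-face culling accordingly.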
In terms of what Google actually does, Bork had this right. I'm not sure of the exact details (and not sure I could release the details even if I did), but Google stores individual 360 degree streetview scenes in an equirectangular representation for serving. The flash player then uses a series of affine transformations to display the image in perspective. The affine transformations are approximate, but good enough to aggregate to a decent image overall.
The calculation of the served images is very involved, since there are many stages of image processing that have to be done, to remove faces, account for bloom, etc. etc. In terms of actually stitching the panoramas, there are many algorithms for this (wikipedia article). Just one interesting thing I'd like to point out though, as food for thought, in the 360 degree panoramas on street view, you can see the road at the bottom of the image, where there was no camera on the cars. Now that's stitching.
An expensive camera makes a 360 degree video.
It is pretty impressive to watch a video that allows panning in every direction... which is what Street View is, without the bandwidth to support the full video.
For those wondering how the Google VR photographers and editors add the ground to their equirectangular panoramas, check out the feature called Viewpoint Correction, as seen in software like PTGui:
ptgui.com/excamples/vptutorial.html
(Note that this is NOT the software used by Google)
If you take a closer look at the ground in Street View, you will see that the stitching seems stretched, and sometimes it even overlaps with information from the viewpoint next to the current one. (By that I mean that you can see something in one place, and suddenly that same feature is shown as the ground in the next place, revealing the technique used for the ground stitching.)