Load .obj file out of Blender - iOS

I've been searching for quite a long time now for something that works for importing
Blender 3D .obj files into Xcode, to use in an iPhone application.
I can't find a description of how to implement something like that anywhere!
I don't want to use any engine; I just want to know the steps I have to
follow and the basic things to do.
There is really nothing on the web: you can find articles from 2005-2008,
but they're all out of date and nothing works.
So, does anybody know how to do this?

Take a look at the source in libgdx, and in particular at its obj loader. The author built the lib to be able to load .obj files, and it is supposed to work across multiple platforms (including iOS). From reading the source you will see that creating an object loader for just the vertices is really simple, but it becomes more complicated when you start caring about the normals and texture coords. Here is a simple algorithm for scraping an .obj file (I will leave parsing the associated .mtl material file for the reader's own research):
Read in the lines of the file, throwing out all lines that start with # (comments).
Vertices look like: v 1.000 1.000 1.000, so if a line starts with 'v ', split the line on the spaces and store the 3 floats (converted from text) as a vertex.
Normals look like: vn 1.000 1.000 1.000, so if a line starts with 'vn ', do the same as number 2, but store the result as a normal.
Texture coords look like vt 1.000 1.000, with a possible 3rd [w] value; split and store the line in the same way.
Now it gets tricky: there are the face descriptions, which look like f 1/1/1 2/2/2 3/3/3. These describe the vertex/texcoord/normal indices (in that order) for each corner of a shape (normally a triangle). The hardest part is that the obj file format uses three indices per corner instead of one like OpenGL or Direct3D, so you will have to shuffle around the order of your vertices/coords/normals so that you can use indexed drawing on the data.
E.g. basically you have to get f 1/300/30 40/22/400 20/30/10 to become more like f 1/1/1 2/2/2 3/3/3 through reshuffling the order; a sketch of this reindexing follows below.
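Here is a minimal sketch of that scraping-plus-reindexing pass in C++ (libgdx itself is Java, but the algorithm is the same; the file name is hypothetical and error handling, normal and texcoord interleaving are omitted for brevity):

#include <cstdio>
#include <cstdint>
#include <fstream>
#include <map>
#include <sstream>
#include <string>
#include <vector>

struct Float3 { float x, y, z; };

int main() {
    std::ifstream file("model.obj");              // hypothetical input file
    std::vector<Float3> positions, normals, texcoords;
    std::vector<float> vertexData;                // interleaved output vertices
    std::vector<uint16_t> indices;                // one index per vertex, GL-style
    std::map<std::string, uint16_t> unique;       // "v/vt/vn" triple -> new index

    std::string line;
    while (std::getline(file, line)) {
        std::istringstream ss(line);
        std::string tag;
        ss >> tag;
        if (tag == "#") continue;                               // step 1: skip comments
        Float3 f = {0, 0, 0};
        if (tag == "v")       { ss >> f.x >> f.y >> f.z; positions.push_back(f); } // step 2
        else if (tag == "vn") { ss >> f.x >> f.y >> f.z; normals.push_back(f); }   // step 3
        else if (tag == "vt") { ss >> f.x >> f.y;        texcoords.push_back(f); } // step 4
        else if (tag == "f") {                                  // step 5: reindex
            std::string corner;                                 // e.g. "1/300/30"
            while (ss >> corner) {
                auto it = unique.find(corner);
                if (it == unique.end()) {
                    int v = 0, vt = 0, vn = 0;                  // 1-based; 0 = absent
                    std::sscanf(corner.c_str(), "%d/%d/%d", &v, &vt, &vn);
                    // NOTE: a "v//vn" corner (no texcoord, see the edit below)
                    // needs a second sscanf pattern; omitted here.
                    Float3 p = positions[v - 1];
                    vertexData.insert(vertexData.end(), {p.x, p.y, p.z});
                    // append the matching normal/texcoord here the same way
                    it = unique.emplace(corner, (uint16_t)unique.size()).first;
                }
                indices.push_back(it->second);                  // shared corners reuse an index
            }
        }
    }
    return 0;
}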
This site gives you an idea of this same algorithm and shows an example of how to go about it as a high-level algorithm (go about midway down the page), and the source code he references can be found here.
Anyways, let me know if you need any more assistance. :)
Edit:
By the way, if you see something like this: f 1//4 2//5 3//7, don't be alarmed; this is a valid file as intended and just means (in this instance) that there are no texture coords.

I used the GLEssentials sample app as a starting point. It is really bare bones, but it's kind of what you want to start with, so you can really understand the format for when you decide to add to it later.
https://developer.apple.com/library/mac/#samplecode/GLEssentials/Introduction/Intro.html

Related

Creating a sub-texture, from an existing texture, using D3D9

I'm working on an older project that uses D3D9 for rendering 3D environments.
I have a texture file loaded into memory, that I'm applying onto a simple 3D model for rendering. I'm loading this file using the D3DXCreateTextureFromFileInMemory function (MS Docs function link: https://learn.microsoft.com/en-us/windows/win32/direct3d9/d3dxcreatetexturefromfileinmemory), and everything works okay.
However, instead of simply reading & loading the entire texture file, I want to only be able to read & load a square portion of it (a sub-texture of sorts). I have a pair of UV coordinates of the supposed square portion of the sub-texture (one UV coordinate for top-left corner of square, one for the bottom-right), relative to the main texture file, but I can't find a D3D9 function that does such a thing (I believe the correct wording for this would be a "Texture Atlas", but I've only heard it a couple of times and I'm not sure).
Looking over the MS Docs for the D3D9 texture functions, there is also D3DXCreateTextureFromFileInMemoryEx (MS Docs link: https://learn.microsoft.com/en-us/windows/win32/direct3d9/d3dxcreatetexturefromfileinmemoryex), a supposed upgrade of the previous D3DXCreateTextureFromFileInMemory function; however, it only accepts "height" and "width" parameters, not any sort of positional parameter pair. There are also alternative functions that use "Resources" instead of files in memory, but they do not appear to accept any sort of positional parameters either (such as D3DXCreateTextureFromResourceEx, MS Docs link: https://learn.microsoft.com/en-us/windows/win32/direct3d9/d3dxcreatetextureresourceex).
There are also several functions for a "UV Atlas" present in the MS Docs archives (https://learn.microsoft.com/en-us/windows/win32/direct3d9/dx9-graphics-reference-d3dx-functions-uvatlas), however I do not think those would be helpful to me.
Is what I'm trying to achieve here even possible using D3D9? Are there any functions that I may be missing that could help me achieve this goal?
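For what it's worth, one approach that might work is to load the whole file exactly as the question already does, then copy the wanted rectangle into a second texture with D3DXLoadSurfaceFromSurface, converting the UV pair into a pixel RECT. A sketch (untested against this project; names like fileData are placeholders):

// Sketch: crop a sub-rectangle out of an already-loaded D3D9 texture.
// device, fileData/fileSize, and the UV pair (u0,v0)-(u1,v1) are assumed
// to exist in the surrounding code; none of these names come from the question.
#include <d3d9.h>
#include <d3dx9.h>

IDirect3DTexture9* CreateSubTexture(IDirect3DDevice9* device,
                                    const void* fileData, UINT fileSize,
                                    float u0, float v0, float u1, float v1)
{
    // 1. Load the full texture from memory, exactly as the question does.
    IDirect3DTexture9* fullTex = NULL;
    if (FAILED(D3DXCreateTextureFromFileInMemory(device, fileData, fileSize, &fullTex)))
        return NULL;

    // 2. Convert the UV corners into a pixel RECT on the top mip level.
    D3DSURFACE_DESC desc;
    fullTex->GetLevelDesc(0, &desc);
    RECT src;
    src.left   = (LONG)(u0 * desc.Width);
    src.top    = (LONG)(v0 * desc.Height);
    src.right  = (LONG)(u1 * desc.Width);
    src.bottom = (LONG)(v1 * desc.Height);

    // 3. Create the destination texture and copy the rectangle across.
    IDirect3DTexture9* subTex = NULL;
    device->CreateTexture(src.right - src.left, src.bottom - src.top, 1, 0,
                          desc.Format, D3DPOOL_MANAGED, &subTex, NULL);
    IDirect3DSurface9 *srcSurf = NULL, *dstSurf = NULL;
    fullTex->GetSurfaceLevel(0, &srcSurf);
    subTex->GetSurfaceLevel(0, &dstSurf);
    D3DXLoadSurfaceFromSurface(dstSurf, NULL, NULL,   // whole destination
                               srcSurf, NULL, &src,   // source sub-rect
                               D3DX_FILTER_NONE, 0);
    dstSurf->Release();
    srcSurf->Release();
    fullTex->Release();
    return subTex;
}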

How to hit the texel cache in WebGL?

What I'm doing is GPGPU on WebGL, and I don't know whether the access pattern I'll be describing applies to general graphics and gaming programs. In our code we frequently come across data which needs to be summarized or reduced per output texel. A very simple example is matrix multiplication, during which, for every output texel, you return a value which is the dot product of a row of one input and a column of the other input.
This has been the sore point of our performance, not so much because of the computation but because of the repeated data access. So I've been trying to find a pattern of reads or data layouts which would expedite this operation, and I have been completely unsuccessful.
I will describe some assumptions and some schemes below. The sample code for all of these is under https://github.com/jeffsaremi/webgl-experiments
Unfortunately, due to size, I wasn't able to use the 'snippet' feature of StackOverflow. NOTE: all examples write to the console, not the HTML page.
Base matmul implementation: Example: [2,3]x[3,4]->[2,4]. In simplistic form, this produces 2 textures of (w:3,h:2) and (w:4,h:3). For each output texel I will be reading along the X axis of the left texture but going along the Y axis of the right texture. (see webgl-matmul.html)
Assuming that GPU accesses data similar to CPU -- that is block by block -- if I read along the width of the texture I should be hitting the cache pretty often.
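For comparison, the CPU half of that assumption is easy to demonstrate with a small native C++ sketch (not part of the linked repo) that times reading along rows versus along columns:

// Sketch: CPU analogue of the caching assumption above. Summing a matrix
// along rows (contiguous memory) vs. along columns (strided memory) shows
// the cache effect this question is probing for on the GPU.
#include <chrono>
#include <cstdio>
#include <vector>

int main() {
    const int N = 4096;
    std::vector<float> m(N * N, 1.0f);

    auto time = [&](bool alongX) {
        float sum = 0.0f;
        auto t0 = std::chrono::steady_clock::now();
        for (int i = 0; i < N; ++i)
            for (int j = 0; j < N; ++j)
                sum += alongX ? m[i * N + j]   // walk along a row: cache hits
                              : m[j * N + i];  // walk down a column: cache misses
        auto t1 = std::chrono::steady_clock::now();
        long long ms = std::chrono::duration_cast<std::chrono::milliseconds>(t1 - t0).count();
        std::printf("sum=%f took %lld ms\n", sum, ms);
    };

    time(true);   // along X: fast on a CPU
    time(false);  // along Y: slow on a CPU
    return 0;
}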
Based on that assumption, I'd lay out both textures in a way that I'd be doing dot products of corresponding rows (along texture width) only. Example: [2,3]x[4,3]->[2,4]. Note that the data for the right texture is now transposed, so that for each output texel I'd be doing a dot product of one row from the left and one row from the right. (see webgl-matmul-shared-alongX.html)
To ensure that the above assumption is indeed at work, I created a negative test as well. In this test I read along the Y axis of both the left and right textures, which should have the worst possible performance. Data is pre-transposed so that the results make sense. Example: [3,2]x[3,4]->[2,4]. (see webgl-matmul-shared-alongY.html)
So I ran these -- and I hope you can run them as well to see -- and I found no evidence to support the existence or non-existence of such caching behavior. You need to run each example a few times to get consistent results for comparison.
Then I came across this paper http://fileadmin.cs.lth.se/cs/Personal/Michael_Doggett/pubs/doggett12-tc.pdf which, in short, claims that the GPU caches data in blocks (or tiles, as I call them).
Based on this promising lead, I created a version of matmul (or dot product) which uses blocks of 2x2 to do its calculation. Prior to using this, of course, I had to rearrange my inputs into such a layout; the rearrangement I mean is sketched below. The cost of that rearrangement is not included in my comparison; let's say I could do it once and run my matmul many times after. Even this scheme did not contribute anything to the performance, and if anything took something away. (see webgl-dotprod-tiled.html)
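For reference, that 2x2 repacking looks like this in C++ (a sketch of the layout idea only; the repo's actual rearrangement code may differ):

// Sketch: repack a row-major matrix into 2x2 tiles laid out linearly,
// so the four texels of a tile are adjacent in memory.
#include <vector>

// rows and cols are assumed even for brevity.
std::vector<float> toTiled2x2(const std::vector<float>& src, int rows, int cols) {
    std::vector<float> dst(src.size());
    int out = 0;
    for (int ty = 0; ty < rows; ty += 2)          // walk tiles top to bottom
        for (int tx = 0; tx < cols; tx += 2)      // and left to right
            for (int dy = 0; dy < 2; ++dy)        // copy each 2x2 tile
                for (int dx = 0; dx < 2; ++dx)
                    dst[out++] = src[(ty + dy) * cols + (tx + dx)];
    return dst;
}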
At this point I am completely out of ideas and any hints would be appreciated.
Thanks

iOS Import .obj file to Model I/O without duplicating vertices

I'm trying to import a .obj file to use in SceneKit via the Model I/O framework. I initially used the simple MDLAsset initWithURL: function, but after transferring the mesh to an SCNGeometry, I realized this function was triangulating the mesh, such that each face had 3 unique vertices, and there were separate vertices at the same location for bordering faces. This was causing some major problems with my other functions, so I tried to fix it by instead using the MDLAsset initWithURL:vertexDescriptor:bufferAllocator:preserveTopology function with preserveTopology set to YES and the descriptor/allocator left as the default with nil. Preserving topology fixed my problem of duplicated vertices, so the faces/edges were all good, but in the process I lost the normals data.
By "lost the normals" I don't mean multiple indexing; I mean that after setting preserveTopology to YES, the buffer did not contain any normal values at all. Whereas before it was v1/n1/v2/n2... with a stride of 24 bytes (3 dimensions * 4 bytes/float * 2 attributes), now the first half of the buffer is v1/v2/... with a stride of 12, and the entire 2nd half of the buffer is just 0.0 floats.
Also, something weird here: when you look at the SCNGeometrySources of the geometry, there are 2 sources, one with semantic kGeometrySourceSemanticVertex and one with semantic kGeometrySourceSemanticNormal. You would think that the vertex source would contain the position data and the normal source would contain the normal data; however, that is not the case. No matter how you set preserveTopology, they are both buffers sized to contain both position and normal data, with identical values. So when I said before that there was no normal data, I mean that both of these buffers, semantic vertex AND semantic normal, went from being v1/n1/v2/n2... to v1/v2/.../(0.0, 0.0, 0.0)/(0.0, 0.0, 0.0)/... I went into the MDLMesh's buffer (before the transfer to SceneKit) and found the same problem, so the problem must be with initWithURL, not with the Model I/O to SceneKit bridge.
So I figured there must be something wrong with the default vertex descriptor and buffer allocator (since I was using nil) and went about trying to create my own that matched these 2 possible data formats. Alas after much trying I was unable to get something that worked.
Any ideas on how I should do this? How do I give MDLAsset the proper vertexDescriptor and bufferAllocator (I feel like nil should be OK here) for importing a .obj file? Thanks
An obj file with vertices and normals has vertices, indicated by v lines, normals, indicated by vn lines, and faces, indicated by f lines.
The v and vn lines will just be the floating point values you expect, and the f lines will be of the form:
f v0//n0 v1//n1 etc
Since OpenGL and Metal don't allow multiple indexing, you'll see the first effect of vertices being duplicated. For example,
f 0//0 1//2 2//0
can't work as a vertex buffer because it would require different indices per vertex. So typical OBJ parsers have to create new vertices that allow the face to become
f 0//0 1//1 2//2
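That uniquing step looks something like the following sketch (plain C++ for illustration; this is not Model I/O's actual code, which is not public):

// Sketch of the re-indexing a typical OBJ importer performs: every distinct
// (position index, normal index) pair becomes one output vertex, so the GPU
// can use a single index buffer.
#include <cstdint>
#include <map>
#include <utility>
#include <vector>

struct Vec3 { float x, y, z; };
struct Vertex { Vec3 position, normal; };
struct ObjFaceCorner { int v, n; };  // indices from one "f v//n" entry

void uniqueIndex(const std::vector<Vec3>& positions,
                 const std::vector<Vec3>& normals,
                 const std::vector<ObjFaceCorner>& corners,
                 std::vector<Vertex>& outVertices,
                 std::vector<uint32_t>& outIndices) {
    std::map<std::pair<int, int>, uint32_t> seen;
    for (const ObjFaceCorner& c : corners) {
        auto key = std::make_pair(c.v, c.n);
        auto it = seen.find(key);
        if (it == seen.end()) {
            // First time this v/n combination appears: emit a new vertex.
            it = seen.emplace(key, (uint32_t)outVertices.size()).first;
            outVertices.push_back({positions[c.v], normals[c.n]});
        }
        outIndices.push_back(it->second);  // repeated combinations reuse an index
    }
}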
The preserve topology option doesn't help you. It preserves the connectivity and shape of the mesh (no triangulation occurs, shared edges remain shared) but it still enforces a single index per vertex component.
One solution would be to make sure that your tool that is outputting the OBJ files uses single indexing during export, if that is an option.
Another option, though it won't solve the problem immediately, would be to file a request that multiple indexing be supported at the Model I/O level. SceneKit would still have to uniquely index, because it has to be able to render.
Another option would be to use a format like PLY that doesn't have multiple indexing.

Matrix Concatenation using Actionscript Matrix3D

I want to get the properly rendered projection result from a Stage3D framework that presents something of a 'gray box' interface via its API. It is gray rather than black because I can see this critical snippet of source code:
matrix3D.copyFrom (renderable.getRenderSceneTransform (camera));
matrix3D.append (viewProjection);
The projection rendering technique that perfectly suits my needs comes from a helpful tutorial that works directly with AGAL rather than any particular framework. Its comparable rendering logic snippet looks like this:
cube.mat.copyToMatrix3D (drawMatrix);
drawMatrix.prepend (worldToClip);
So, I believe the correct, general summary of what is going on here is that both pieces of code are setting up the proper combined matrix to be sent to the Vertex Shader where that matrix will be a parameter to the m44 AGAL operation. The general description is that the combined matrix will take us from Object Local Space through Camera View Space to Screen or Clipping Space.
My problem can be summarized as arising from my ignorance of proper matrix operations. I believe my failed attempt to merge the two environments arises precisely because the semantics of prepending one matrix to another is not, and is never intended to be, equivalent to appending that matrix to the other. My request, then, can be summarized in this way. Because I have no control over the calling sequence that the framework will issue, e.g., I must live with an append operation, I can only try to fix things on the side where I prepare the matrix which is to be appended. That code is not black-boxed, but it is too complex for me to know how to change it so that it would meet the interface requirements posed by the framework.
Is there some sequence of inversions, transpositions or other maneuvers which would let me modify a viewProjection matrix that was designed to be prepended, so that it will turn out right when it is, instead, appended to the Object's World Space coordinates?
I am providing an answer more out of desperation than sure understanding, and still hope I will receive a better answer from those more knowledgeable. From Dunn and Parberry's "3D Math Primer" I learned that "transposing the product of two matrices is the same as taking the product of their transposes in reverse order."
Without being able to understand how to enter text involving superscripts, I am not sure if I can reduce my approach to a helpful mathematical formulation, so I will invent a syntax using functional notation. The equivalency noted by Dunn and Parberry would be something like:
transpose (AB) = transpose (B) x transpose (A)
That comes close to solving my problem, which problem, to restate, is really just a problem arising out of the fact that I cannot control the behavior of the internal matrix operations in the framework package. I can, however, perform appropriate matrix operations on either side of the workflow from local object coordinates to those required by the GPU Vertex Shader.
I have not completed the test of my solution, which requires the final step to be taken in the AGAL shader, but I have been able to confirm in AS3 that the last 'un-transform' does yield exactly the same combined raw data as the example from the author of the camera with the desired lens properties, whose implementation involves prepending rather than appending.
BA = transpose (transpose (A) x transpose (B))
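That identity is easy to sanity-check numerically; here is a small C++ sketch (separate from the AS3 framework itself) using plain 2x2 matrices:

// Sketch: numeric check of BA = transpose(transpose(A) x transpose(B)).
#include <cstdio>

typedef float Mat2[2][2];

void mul(const Mat2 a, const Mat2 b, Mat2 out) {        // out = a x b
    for (int i = 0; i < 2; ++i)
        for (int j = 0; j < 2; ++j)
            out[i][j] = a[i][0] * b[0][j] + a[i][1] * b[1][j];
}

void transpose(const Mat2 m, Mat2 out) {
    for (int i = 0; i < 2; ++i)
        for (int j = 0; j < 2; ++j)
            out[i][j] = m[j][i];
}

int main() {
    Mat2 A = {{1, 2}, {3, 4}}, B = {{5, 6}, {7, 8}};
    Mat2 BA, At, Bt, AtBt, result;
    mul(B, A, BA);                 // left-hand side: BA
    transpose(A, At);
    transpose(B, Bt);
    mul(At, Bt, AtBt);
    transpose(AtBt, result);       // right-hand side: transpose(A^T x B^T)
    for (int i = 0; i < 2; ++i)
        for (int j = 0; j < 2; ++j)
            std::printf("BA=%g  transposed product=%g\n", BA[i][j], result[i][j]);
    return 0;                      // both columns print identical values
}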
I have also not yet tested to see if these extra calculations are so processing intensive as to reduce my application frame rate beyond what is acceptable, but am pleased at least to be able to confirm that the computations yield the same result.

.VTX File Format?

I've recently taken the plunge into DirectX and have been messing around a little with Anim8or, and have discovered several text-based file types that models can be exported to. I've particularly taken to VTX files. I've learned how to parse some basics out of them, but I'm obviously missing a few things.
It starts with a .Faceset, which is immediately followed (on the same line) by the number of meshes in the file.
For each mesh, there is one .Vertex section and one .Index section in that order and the first pair of .Vertex/.Index sections are the first mesh, the second set are the second mesh and so on as you'd expect.
In a .Vertex section of the file, there are 8 numbers per line and an undefined number of lines (unless you want to trust the comments Anim8or has put just before the section, but that doesn't seem to be part of the specs of the file, just Anim8or being kind). The first 3 numbers correspond to the X, Y, and Z coordinates of a point that'll later be used as a vertex; the other 5 I have no idea about. A majority of the time the last 2 numbers are both 0, but I've noticed that's not ALWAYS true, just usually true.
Next comes the matching .Index section. Each line of this section has 4 numbers. The first 3 are references to the vertices previously stated, and the 3 points mark a triangle in the model: 0 meaning the first mentioned vertex, 1 meaning the next one, and so on, like a zero-based array. The 4th number appears to always be -1; I can't figure out what importance it has, and I can't promise it's ALWAYS -1. In case you can't tell, I'm not too certain about anything in this file type.
There's also other information in the file that I'm choosing to ignore right now because I'm new and don't want to overcomplicate things too much. Such as after every .Index section is:
.Brdf
// Ambient color
0.431 0.431 0.431
// Diffuse color
0.431 0.431 0.431
// Specular color and exponent
1 1 1 2
// Kspecular = 0.5
// end of .Brdf
It appears to me this is about the surface of the mesh just described. But it's not needed for placement of meshes so I moved past it for now.
Moving on to the real problem... I can load a VTX file when there's only one mesh in it (meaning the .Faceset is 1). I can almost successfully load a VTX file that has multiple meshes: each mesh is successfully structured, but not properly placed in relation to the other meshes. I downloaded an AT-AT model from an Anim8or thread in a forum, and it's made up of 344 meshes. When I load the file using just the specs I've mentioned so far, it looks like the AT-AT is exploded out, as if it were a diagram of how to assemble it (when loaded in Anim8or, all pieces are close together and resemble a fully assembled AT-AT). All the pieces are oriented correctly and have the same up direction, but there's plenty of extra space between the pieces.
Does somebody know how to properly read a VTX file? Or know of a website that'll explain what those other numbers mean?
Edit:
The file extension .VTX is used for a lot of different things and has a lot of different structures depending on what the expected use is. Valve, Visio, Anim8or, and several others use VTX, I'm only interested in the VTX file that Anim8or exports and the structure that it uses.
I have been working on a 3D Modeling program myself and wanted a simple format to be able to bring objects in to the editor to be able to test the speed of my drawing routines with large sets of vertices and faces. I was looking for an easy one where I could get models quickly and found the .vtx format. I googled it and found your question. When I was unable to find the format on the internet, I played around and compared .OBJ exports with .vtx ones. (Maybe it was created just for Anim8or?) Here is what I found:
1) Yes, the vertices have eight numbers on each line. The first three are, as you guessed, the x, y, and z coordinates. The next three are the vertex normals, nx, ny, and nz. You may notice that each vertex appears multiple times with different normals for each face that contains it. The last two numbers are texture coordinates.
2) As for the faces, I reached the same conclusions as you did. The first three numbers are indices into the vertex list above. The last number does appear to always be -1. I am going to assume that it has something to do with the facing of the face. (e.g. facing in or out.) Since most models are created with the faces all facing appropriately, it stands to reason that this would be the same number for all of them.
3) One additional note: When comparing the .obj with the .vtx, I did notice that the positions of the vertices changed. This was also true when comparing with the .an8 file. This should not be a "HUGE" problem as long as they are all offset by the same amount in each vertex and every file. At least then it could be compensated for.
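Putting 1) and 2) together, a minimal reader for the vertex and index data could look like this C++ sketch (based only on the observations above, not on any official Anim8or spec; splitting per mesh on each new .Vertex section is left out):

// Sketch of a reader for the format as reverse engineered above:
// 8 floats per .Vertex line (position, normal, texture coords) and
// 4 ints per .Index line (three zero-based vertex indices plus the
// trailing -1 of unknown meaning).
#include <fstream>
#include <sstream>
#include <string>
#include <vector>

struct VtxVertex { float x, y, z, nx, ny, nz, u, v; };
struct VtxFace { int a, b, c, flag; };  // flag is the mysterious -1

void readVtx(std::istream& in,
             std::vector<VtxVertex>& verts,
             std::vector<VtxFace>& faces) {
    std::string line;
    bool inVertex = false, inIndex = false;
    while (std::getline(in, line)) {
        if (line.find("//") == 0) continue;              // skip comment lines
        if (line.find(".Vertex") != std::string::npos) { inVertex = true;  inIndex = false; continue; }
        if (line.find(".Index")  != std::string::npos) { inVertex = false; inIndex = true;  continue; }
        if (line.find(".Brdf")   != std::string::npos) { inVertex = inIndex = false; continue; }
        std::istringstream ss(line);
        if (inVertex) {
            VtxVertex v;
            if (ss >> v.x >> v.y >> v.z >> v.nx >> v.ny >> v.nz >> v.u >> v.v)
                verts.push_back(v);
        } else if (inIndex) {
            VtxFace f;
            if (ss >> f.a >> f.b >> f.c >> f.flag)       // indices are per-mesh
                faces.push_back(f);
        }
    }
}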
Have you considered using the .obj file format? It is text-based and is not extremely difficult to parse or understand. There is quite a bit of information about it online.
I am going to add that, after a few hours' inspection, the vtx export in Anim8or seems to be broken. I experienced the same problem you did: the pieces were not located properly. My assumption would be that Anim8or exports these objects using the local coordinates of each mesh, without accounting for the transformations that have been applied. I also note that it will not IMPORT the vtx file...
Based on some googling, it seems you're at the wrong end of the pipeline. As I understand it: A VTX file is a Valve Proprietary File Format that is the result of a set of steps.
The final output of Studiomdl for each Half-Life model is a group of files in the gamedirectory/models folder, ready to be used by the Game Engine:
an .MDL file, which defines the structure of the model along with animation, bounding box, hit box, material, mesh and LOD information;
a .VVD file, which stores position-independent flat data for the bone weights, normals, vertices, tangents and texture coordinates used by the MDL;
currently three separate types of VTX file: .sw.vtx (Software), .dx80.vtx (DirectX 8.0) and .dx90.vtx (DirectX 9.0), which store hardware-optimized material, skinning and triangle strip/fan information for each LOD of each mesh in the MDL;
often a .PHY file containing a rigid or jointed (ragdoll) collision model; and
sometimes a .ANI file, for [To do: something to do with model animations].
-- Valve
Now, the Valve Source SDK may have some utilities in it to read VTXs (it seems to have the ability to make them, anyway). Some people may have made 3rd-party tools or have code to read them, but it's likely not to work on all files just because it's a 3rd-party format. I also found this post, which might help if you haven't seen it before.
