Changing the UVs of a model imported in XNA

Is it possible to change the UVs (texture coordinates) of a model (.fbx) in XNA? I know I can break a model into meshes and a mesh into mesh parts, but can I also break a mesh part into vertices? Please explain.
Thank you.

Short answer: it is possible.
Longer answer:
What are you trying to do?
Just want to shift the texture a bit: You can do this by writing a shader that has a parameter UVOffset of type float2 that you add to the existing TexCoord.
Want to redo it completely: I am not sure this is the kind of thing you would want to do in XNA. You might want to consider editing the model in 3D modelling software instead.
I want to anyway: the Model has a property called Meshes. Each ModelMesh has a MeshParts property. Each MeshPart has a VertexBuffer. Each VertexBuffer has a GetData method.
In short, you can fetch the vertices from the VertexBuffer of a MeshPart, modify them as you wish, and then use the VertexBuffer.SetData method to apply your changes.
If you tell us what you are trying to achieve, we might be able to give more specific help :)
Edit:
Example based on the HLSL found in this thread: http://xboxforums.create.msdn.com/forums/p/1407/72515.aspx
insert after 'texture Texture;':
float2 UVMultiplier;
replace 'output.TexCoord = input.TexCoord;' with:
output.TexCoord = input.TexCoord * UVMultiplier;

Related

Metal and Model I/O - Add Texture Coordinates to Mesh

I'm working on a student project for which I want to texture a mesh that I scanned using an iPad equipped with the new LiDAR sensor.
To texture a mesh, however, I need to add texture coordinates. My current plan is to convert the scanned mesh to an MDLMesh and add all submeshes to an MDLAsset container. Afterwards, I iterate over the MDLMeshes using a foreach loop. In each iteration I'm calling the function "MDLMesh.addUnwrappedTextureCoordinates" on the current mesh. Unfortunately, it always results in a crash. Sometimes I can loop through 2 meshes before I get an error, sometimes it does not even add UVs to a single mesh.
I'm not an expert at Swift or Model I/O, but it seems strange to me that this operation crashes while I can add normals just fine.
The error I'm getting looks like this:
Can't choose for edge creation
libc++abi.dylib: terminating with uncaught exception of type std::out_of_range: unordered_map::at: key not found
The code I'm using looks like this:
private func unwrapTextureCoordinates(asset: MDLAsset) -> MDLAsset {
    let objects = asset.childObjects(of: MDLMesh.self)
    for object in objects {
        if let mesh = object as? MDLMesh {
            mesh.addNormals(withAttributeNamed: MDLVertexAttributeNormal, creaseThreshold: 0.5)
            mesh.addAttribute(withName: MDLVertexAttributeTextureCoordinate, format: .float2)
            mesh.addUnwrappedTextureCoordinates(forAttributeNamed: MDLVertexAttributeTextureCoordinate)
        }
    }
    return asset
}
Hopefully someone can tell me what's wrong or point me in the right direction.
After I could not figure out what was causing the issue, I resorted to Unity and its ARFoundation wrapper to see whether I was able to calculate any UVs there. I found that Unity's equivalent to Model I/O's "addUnwrappedTextureCoordinates", namely Unwrapping.GeneratePerTriangleUV, calculates 3 UVs for each triangle.
Now, when I run this function in Unity, I also get an out-of-range-exception for my mesh, just like in Swift. The error description says that the number of UV coordinates cannot exceed the number of vertices in the mesh - which makes sense since I get three times as many UV coordinates as I have vertices in my mesh. Therefore, I highly suspect that the out-of-range-exception in Swift using Model I/O has the same cause.
Surely, there are many workarounds for this, but I resorted to a different solution since the "Unwrapping" class is part of the "UnityEngine.Editor" namespace anyway, and therefore I would not be able to use it in a finished build (which is what I want).
Instead, I came across the function in this thread to calculate a single set of UVs for my mesh. I utilized it, and it worked exactly as I wanted it to. The code is written in C# and therefore I decided to continue my project using the Unity Engine. However, I don't think it will be a lot of trouble to translate the function into Swift.
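For reference, the general idea behind that kind of function (one UV per vertex via planar projection of the positions onto the two largest bounding-box axes) can be sketched like this. The names and the C++ form are mine, not the code from the linked thread:

#include <array>
#include <vector>
#include <algorithm>

struct Vec3 { float x, y, z; };
struct Vec2 { float u, v; };

// Project each vertex onto the plane spanned by the two largest bounding-box
// extents and normalize the result to [0, 1].
std::vector<Vec2> planarUVs(const std::vector<Vec3>& positions)
{
    if (positions.empty()) return {};

    Vec3 lo = positions[0], hi = positions[0];
    for (const Vec3& p : positions) {
        lo = { std::min(lo.x, p.x), std::min(lo.y, p.y), std::min(lo.z, p.z) };
        hi = { std::max(hi.x, p.x), std::max(hi.y, p.y), std::max(hi.z, p.z) };
    }

    // Drop the axis with the smallest extent; project onto the other two.
    std::array<float, 3> extent = { hi.x - lo.x, hi.y - lo.y, hi.z - lo.z };
    int drop = int(std::min_element(extent.begin(), extent.end()) - extent.begin());
    int a = (drop + 1) % 3, b = (drop + 2) % 3;

    auto axis = [](const Vec3& p, int i) { return i == 0 ? p.x : (i == 1 ? p.y : p.z); };

    std::vector<Vec2> uvs;
    uvs.reserve(positions.size());
    for (const Vec3& p : positions) {
        uvs.push_back({ (axis(p, a) - axis(lo, a)) / std::max(extent[a], 1e-6f),
                        (axis(p, b) - axis(lo, b)) / std::max(extent[b], 1e-6f) });
    }
    return uvs;
}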

Vulkan texture rendering on multiple meshes

I am in the middle of rendering different textures on multiple meshes of a model, but I do not have many clues about the procedure. Someone suggested that for each mesh I create its own descriptor set and call vkCmdBindDescriptorSets() and vkCmdDrawIndexed() for rendering, like this:
// Pipeline with descriptor set layout that matches the shared descriptor sets
vkCmdBindPipeline(...pipelines.mesh...);
...
// Mesh A
vkCmdBindDescriptorSets(...&meshA.descriptorSet... );
vkCmdDrawIndexed(...);
// Mesh B
vkCmdBindDescriptorSets(...&meshB.descriptorSet... );
vkCmdDrawIndexed(...);
However, the above approach is quite different from the chopper sample and Vulkan's samples, which leaves me with no idea where to start making the change. I would really appreciate any help guiding me in the right direction.
Cheers
You have a conceptual object which is made of multiple meshes which have different texturing needs. The general ways to deal with this are:
1. Change descriptor sets between parts of the object. Painful, but it works on all Vulkan-capable hardware.
2. Employ array textures. Each individual mesh fetches its data from a particular layer in the array texture. Of course, this restricts you to having each sub-mesh use textures of the same size, but it works on all Vulkan-capable hardware (at least 128 array layers are guaranteed). The array layer for a particular mesh can be provided as a push constant, or as a base instance if that's available (see the sketch after this list).
Note that if you manage to do it by base instance, then you can render the entire object with a single multi-draw indirect command. Though it's not clear that a short multi-draw indirect would be faster than just baking a short sequence of drawing commands into a command buffer.
3. Employ sampler arrays, as Sascha Willems suggests. Presumably, the array index for the sub-mesh is provided as a push constant or a multi-draw's draw index. The problem is that, regardless of how that array index is provided, it will have to be a dynamically uniform expression, and Vulkan implementations are not required to allow you to index a sampler array with a dynamically uniform expression. The base requirement is only a constant expression.
This limits you to hardware that supports the shaderSampledImageArrayDynamicIndexing feature. So you have to ask for that, and if it's not available, you've got to work around that with #1 or #2, or just not run on that hardware. But the last option means that you can't run on most mobile hardware, since most of it doesn't support this feature yet.
Note that I am not saying you shouldn't use this method. I just want you to be aware that there are costs. There's a lot of hardware out there that can't do this. So you need to plan for that.
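For illustration, here is a minimal C++ sketch of option 2, recording the per-mesh array layer as a push constant. The SubMesh struct, the field names and recordObject are hypothetical, and it assumes the pipeline layout declares a fragment-stage push-constant range that the fragment shader reads as the layer index of a sampler2DArray:

#include <vulkan/vulkan.h>
#include <cstdint>
#include <vector>

// Hypothetical per-mesh record; names are illustrative only.
struct SubMesh {
    uint32_t indexCount;
    uint32_t firstIndex;
    int32_t  vertexOffset;
    uint32_t arrayLayer;   // which layer of the array texture this sub-mesh samples
};

void recordObject(VkCommandBuffer cmd,
                  VkPipeline pipeline,
                  VkPipelineLayout pipelineLayout,
                  VkDescriptorSet sharedSet,
                  const std::vector<SubMesh>& meshes)
{
    vkCmdBindPipeline(cmd, VK_PIPELINE_BIND_POINT_GRAPHICS, pipeline);
    vkCmdBindDescriptorSets(cmd, VK_PIPELINE_BIND_POINT_GRAPHICS, pipelineLayout,
                            0, 1, &sharedSet, 0, nullptr);

    for (const SubMesh& mesh : meshes) {
        // Tell the fragment shader which array layer to sample for this sub-mesh.
        vkCmdPushConstants(cmd, pipelineLayout, VK_SHADER_STAGE_FRAGMENT_BIT,
                           0, sizeof(mesh.arrayLayer), &mesh.arrayLayer);
        vkCmdDrawIndexed(cmd, mesh.indexCount, 1, mesh.firstIndex, mesh.vertexOffset, 0);
    }
}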
The person who suggested the above code fragment was me, I guess ;)
This is only one way of doing it. You don't necessarily have to create one descriptor set per mesh or per texture. If your mesh e.g. uses 4 different textures, you could bind all of them at once to different binding points and select them in the shader.
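As a rough sketch of that approach (hypothetical function name; four combined image samplers in a single set at bindings 0 to 3, which the fragment shader then selects between per sub-mesh):

#include <vulkan/vulkan.h>
#include <array>

// Sketch only: one descriptor set layout with four combined image samplers.
VkDescriptorSetLayout createFourTextureLayout(VkDevice device)
{
    std::array<VkDescriptorSetLayoutBinding, 4> bindings{};
    for (uint32_t i = 0; i < bindings.size(); ++i) {
        bindings[i].binding         = i;
        bindings[i].descriptorType  = VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER;
        bindings[i].descriptorCount = 1;
        bindings[i].stageFlags      = VK_SHADER_STAGE_FRAGMENT_BIT;
    }

    VkDescriptorSetLayoutCreateInfo info{};
    info.sType        = VK_STRUCTURE_TYPE_DESCRIPTOR_SET_LAYOUT_CREATE_INFO;
    info.bindingCount = static_cast<uint32_t>(bindings.size());
    info.pBindings    = bindings.data();

    VkDescriptorSetLayout layout = VK_NULL_HANDLE;
    vkCreateDescriptorSetLayout(device, &info, nullptr, &layout);
    return layout;
}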
And if you take a look at NVIDIA's chopper sample, they do it pretty much the same way, only with some more abstraction.
The example also sets up descriptor sets for the textures used:
VkDescriptorSet *textureDescriptors = m_renderer->getTextureDescriptorSets();
binds them a few lines later:
VkDescriptorSet sets[3] = { sceneDescriptor, textureDescriptors[0], m_transform_descriptor_set };
vkCmdBindDescriptorSets(m_draw_command[inCommandIndex], VK_PIPELINE_BIND_POINT_GRAPHICS, layout, 0, 3, sets, 0, NULL);
and then renders the mesh with the bound descriptor sets:
vkCmdDrawIndexedIndirect(m_draw_command[inCommandIndex], sceneIndirectBuffer, 0, inCount, sizeof(VkDrawIndexedIndirectCommand));
vkCmdDraw(m_draw_command[inCommandIndex], 1, 1, 0, 0);
If you take a look at initDescriptorSets you can see that they also create separate descriptor sets for the cubemap, the terrain, etc.
The LunarG examples should work similarly, though if I'm not mistaken they never use more than one texture?

What should I do for multiple histograms?

I'm working with OpenCV and I'm a newbie in this field. I'm researching CamShift, and I want to extend this method by using multiple histograms. That is, when the tracked object has more than one appearance (for example, a Rubik's cube with six appearances), CamShift will most likely fail if we use only one histogram.
I know the calcHist function in OpenCV (http://docs.opencv.org/modules/imgproc/doc/histograms.html#calchist) has an "accumulate" parameter, but I don't know how or when to use it (applied to camshiftdemo.cpp in the OpenCV samples folder). Can this function help me solve the problem, or do I have to use a different solution?
I have an idea: create an array of histograms for the object; for every appearance that strongly varies in color, we pre-compute a histogram and store it in this array. But when do we compute a new histogram? In other words, what is the pre-condition for starting to compute a new histogram?
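For illustration, the pre-computed array of histograms could look roughly like this in C++ (a sketch only; the helper name is mine, and the parameters follow the hue-histogram setup used in camshiftdemo.cpp):

#include <opencv2/opencv.hpp>
#include <vector>

// Pre-compute one hue histogram per stored appearance of the object.
std::vector<cv::Mat> buildAppearanceHistograms(const std::vector<cv::Mat>& appearanceRois)
{
    const int histSize = 16;                     // number of hue bins
    float hueRange[] = { 0.f, 180.f };
    const float* ranges[] = { hueRange };
    const int channels[] = { 0 };

    std::vector<cv::Mat> histograms;
    for (const cv::Mat& roiBgr : appearanceRois) {
        cv::Mat hsv, hue, hist;
        cv::cvtColor(roiBgr, hsv, cv::COLOR_BGR2HSV);

        // Ignore very dark / unsaturated pixels, as the CamShift demo does.
        cv::Mat mask;
        cv::inRange(hsv, cv::Scalar(0, 60, 32), cv::Scalar(180, 255, 255), mask);

        // Extract the hue channel.
        hue.create(hsv.size(), CV_8UC1);
        int fromTo[] = { 0, 0 };
        cv::mixChannels(&hsv, 1, &hue, 1, fromTo, 1);

        // accumulate = false (default): each appearance gets its own histogram.
        cv::calcHist(&hue, 1, channels, mask, hist, 1, &histSize, ranges);
        cv::normalize(hist, hist, 0, 255, cv::NORM_MINMAX);
        histograms.push_back(hist);
    }
    return histograms;
}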
And what happens if I have to track multiple objects that have the same color?
Please help me. Thank you so much!

HLSL shaders and accessing current location the GPU is at in the index buffer

My question is very basic.
Is there an HLSL shader instruction, or any way within shader code, to access the current location that the GPU is up to within the array of the index buffer? I am using XNA Game Studio 4.0, which via the GraphicsDevice.DrawIndexedPrimitives() method passes VertexBuffers and IndexBuffers to the graphics pipeline per mesh. I know the GPU iterates through the IndexBuffer internally, but I was hoping it was possible to ascertain which entry within the IndexBuffer the GPU is currently at.
To express what I mean in pseudocode:
IndexBuffer[i], where I want to know the value of i that the GPU is currently processing.
The reason for this is I want to know which polygon is currently being rendered and it's not possible to tell simply by knowing the vertex because a single vertex is always shared between multiple polygons.
Is my question clear and can anyone help me?
Thanks in advance.
Simple answer: don't share vertices, and include polygon IDs as a vertex attribute.
Fancier answers might be possible if you know special facts about the tessellation (say, it's a sphere, or the UVs are laid out in a grid, etc -- so the placement of polys is predictable).
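A minimal sketch of the un-sharing step in generic C++ (the Vertex layout and function name are hypothetical, not XNA API): expand the index buffer so every triangle gets its own three vertices, each stamped with that triangle's ID, which the vertex shader can then pass through.

#include <cstdint>
#include <vector>

// Hypothetical vertex layout: position + UV plus a per-polygon ID the shader can read.
struct Vertex {
    float    position[3];
    float    uv[2];
    uint32_t polygonId;   // same value for all three vertices of a triangle
};

// Expand an indexed triangle list into an unindexed one so vertices are no
// longer shared, then write each triangle's number into its three vertices.
std::vector<Vertex> unshareWithPolygonIds(const std::vector<Vertex>& vertices,
                                          const std::vector<uint32_t>& indices)
{
    std::vector<Vertex> expanded;
    expanded.reserve(indices.size());
    for (size_t i = 0; i + 2 < indices.size(); i += 3) {
        for (size_t j = 0; j < 3; ++j) {
            Vertex v = vertices[indices[i + j]];
            v.polygonId = static_cast<uint32_t>(i / 3);   // triangle number
            expanded.push_back(v);
        }
    }
    return expanded;
}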

How to design a simple GLSL wrapper for shader use

UPDATE: Because I needed something right away, I've created a simple shader wrapper that does the sort of thing I need. You can find it here: ShaderManager on GitHub. Note that it's designed for Objective-C / iOS, so may not be useful to everyone. If you have any suggestions for design improvements, please let me know!
Original Problem:
I'm new to using GLSL shaders. I'm familiar enough with the GLSL language and the OpenGL interface, but I'm having trouble designing a simple API through which to use shaders.
OpenGL's C interface to interact with shaders seems cumbersome. I can't seem to find any tutorials on the net that cover the API design of such things.
My question is this: does any one have a good, simple, API design or pattern to wrap the OpenGL shader program API?
Take the following simple example. Say I have one vertex shader that just emulates fixed functionality, and two fragment shaders - one for drawing smooth rectangles and one for drawing smooth circles. I have the following files:
Shader.vsh : Simple vertex shader, with the following inputs/outputs:
-- Uniforms: mat4 Model, mat4 View, mat4 Projection
-- Attributes: vec4 Vertex, vec2 TexCoord, vec4 Color
-- Varying: vec4 vColor, vec2 vTexCoord
Square.fsh : Fragment shader for drawing squares based on tex coord / color
Circle.fsh : Fragment shader for drawing circles based on tex coord / color
Basic Linking
Now what is the standard way to use these? Do I link the above shaders into two OpenGL shader programs? That is:
Shader.vsh + Square.fsh = SquareProgram
Shader.vsh + Circle.fsh = CircleProgram
Or do I instead create one big program where the fragment shaders check some conditional uniform variables and call out to a shader function to generate their result. E.g:
Shader.vsh + Square.fsh + Circle.fsh + Main.fsh = ShaderProgram
//Main.fsh here would simply check whether to call out to square or circle
With two individual programs I would presumably need to call
glUseProgram(CircleProgram); or glUseProgram(SquareProgram);
Before each type of element I want to draw. I would then need to set the uniforms (Model / View / Projection) and attributes of each program before I use it. This seems so unwieldy.
With the single ShaderProgram option I would still need to set some sort of boolean switch (circle or square) in the fragment shader that would be checked before drawing each pixel. This also seems complicated.
As a side note, am I allowed to link two fragment shaders, each with a main() function, into one shader program? How would OpenGL know which one to call?
Setting Variables
The calls:
glUniform*
glVertexAttribPointer
are used to set uniforms and attribute pointer locations on the current program.
Different classes and structures may need to access and set variables on the current shader (or change the current shader) from different places in the code. I can't think of a nice way to do this that decouples the shader code from the code that wants to use it.
That is, each shape I want to draw will need to set vertex and texture coordinate attributes - requiring the handles to those attributes generated by OpenGL.
The camera will need to set its projection matrix as a uniform in the vertex shader, while the class managing the model matrix stack will need to set its own uniform in the vertex shader.
Changing shaders part-way through drawing a scene would mean that all these classes will need to set their uniforms and attributes again.
How do most people design around this?
A global dictionary of shaders accessed by handle or name, with getters and setters for their parameters?
An OO design with shader objects that each have parameters?
I've looked at the following wrappers:
Jon's Teapot: GLSL Shader Manager - This wraps shaders in C++ classes. It seems like little more than a wrapper that enforces OO principles on a C API, resulting in a C++ API that is much the same.
I am after any sort of design that simplifies the use of shader programs, and I am not concerned about the particular paradigm used (OO, procedural, and so on).
I see this is tagged with iOS, so if you're partial to Objective-C, I'd take a good look at Jeff LaMarche's GLProgram wrapper class, which he describes here and has source available here. I've used it within my own applications to simplify some of the shader program setup, and to make the code a little cleaner.
For example, you can set up a shader and its attributes and uniforms using code like the following:
sphereDepthProgram = [[GLProgram alloc] initWithVertexShaderFilename:@"SphereDepth" fragmentShaderFilename:@"SphereDepth"];
[sphereDepthProgram addAttribute:@"position"];
[sphereDepthProgram addAttribute:@"inputImpostorSpaceCoordinate"];
if (![sphereDepthProgram link])
{
    NSLog(@"Depth shader link failed");
    NSString *progLog = [sphereDepthProgram programLog];
    NSLog(@"Program Log: %@", progLog);
    NSString *fragLog = [sphereDepthProgram fragmentShaderLog];
    NSLog(@"Frag Log: %@", fragLog);
    NSString *vertLog = [sphereDepthProgram vertexShaderLog];
    NSLog(@"Vert Log: %@", vertLog);
    [sphereDepthProgram release];
    sphereDepthProgram = nil;
}
sphereDepthPositionAttribute = [sphereDepthProgram attributeIndex:@"position"];
sphereDepthImpostorSpaceAttribute = [sphereDepthProgram attributeIndex:@"inputImpostorSpaceCoordinate"];
sphereDepthModelViewMatrix = [sphereDepthProgram uniformIndex:@"modelViewProjMatrix"];
sphereDepthRadius = [sphereDepthProgram uniformIndex:@"sphereRadius"];
When you need to use the shader program, you then do something like the following:
[sphereDepthProgram use];
This doesn't address the issues of branching vs. individual shaders that you bring up above, but Jeff's implementation does provide a nice encapsulation of some of the OpenGL ES boilerplate shader setup code.
Basic Linking:
There is no standard way here. There are at least 2 general approaches:
Monolithic - one shader covers many cases, using uniform boolean switches. These branches don't hurt performance because the condition result is constant for any fragment group (actually, for all of the fragments).
Multi-object program compositing - the main shader declares a set of entry points (like 'get_diffuse', 'get_specular', etc.), which are implemented in separate shader objects that are attached alongside it. This implies an individual shader for each object, but any kind of caching helps.
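A small C++ sketch of the compositing approach (hypothetical shader sources; note that it relies on attaching several fragment shader objects to one program, which desktop OpenGL allows but OpenGL ES 2.0 does not):

#include <GL/glew.h>

static GLuint compile(GLenum type, const char* src)
{
    GLuint shader = glCreateShader(type);
    glShaderSource(shader, 1, &src, nullptr);
    glCompileShader(shader);
    return shader;
}

// The "main" fragment shader only declares the entry point it needs...
static const char* mainFrag =
    "vec4 get_color();\n"
    "void main() { gl_FragColor = get_color(); }\n";

// ...and each variant supplies its own implementation in a separate shader object.
static const char* circleFrag =
    "vec4 get_color() { return vec4(1.0, 0.0, 0.0, 1.0); }\n";

GLuint buildProgram(GLuint sharedVertexShader)
{
    GLuint program = glCreateProgram();
    glAttachShader(program, sharedVertexShader);
    glAttachShader(program, compile(GL_FRAGMENT_SHADER, mainFrag));
    glAttachShader(program, compile(GL_FRAGMENT_SHADER, circleFrag));
    glLinkProgram(program);   // the linker resolves get_color() across the objects
    return program;
}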
Setting Variables: Uniforms
I will just describe the approach I developed.
Each shader program has a list of uniform dictionaries. It is used to fill the uniform source list upon program (re-)linking. When the program is activated, it goes through the uniform list, fetches values from their sources and uploads them to GL. As a result, the data is not directly connected with the user's shader program, and whatever manages it does not care about the program using it.
One of these dictionaries can be, for example, a core one containing the model and view transformations, the camera projection, and maybe something else.
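A rough C++ sketch of that idea, with hypothetical names: each dictionary maps uniform names to upload callbacks, the program resolves locations after linking, and activation pulls every value from its source.

#include <GL/glew.h>
#include <functional>
#include <string>
#include <unordered_map>
#include <utility>
#include <vector>

using UniformSource = std::function<void(GLint location)>;
using UniformDictionary = std::unordered_map<std::string, UniformSource>;

class ShaderProgram {
public:
    explicit ShaderProgram(GLuint program) : program_(program) {}

    void addDictionary(const UniformDictionary* dict) { dictionaries_.push_back(dict); }

    // Called after (re-)linking: resolve which uniforms this program actually uses.
    void resolveUniforms()
    {
        sources_.clear();
        for (const UniformDictionary* dict : dictionaries_)
            for (const auto& [name, source] : *dict) {
                GLint loc = glGetUniformLocation(program_, name.c_str());
                if (loc >= 0) sources_.push_back({ loc, &source });
            }
    }

    // Called on activation: upload every resolved uniform from its source.
    void use() const
    {
        glUseProgram(program_);
        for (const auto& [loc, source] : sources_) (*source)(loc);
    }

private:
    GLuint program_;
    std::vector<const UniformDictionary*> dictionaries_;
    std::vector<std::pair<GLint, const UniformSource*>> sources_;
};

A camera object, for instance, would register its projection callback in a shared dictionary once and never touch any particular program directly.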
Setting Variables: Attributes
First of all, the shader program is an attribute consumer, so it is what has to extract these attributes from a mesh (or any other data storage) and upload them to GL in the way it needs. It should also make sure that the types of the provided attributes match the requested types.
When using the monolithic shader approach, there is a possible unpleasant situation where one of the disabled branches requires a vertex attribute that is not provided. I would advise using another attribute's data to supply the missing one, because we don't care about the actual values in this case.
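A bare-bones version of that attribute hookup might look like this (the MeshVertex layout and function name are hypothetical; the attribute names follow the Shader.vsh description earlier in the question):

#include <GL/glew.h>
#include <cstddef>

// Hypothetical interleaved vertex layout the mesh exposes.
struct MeshVertex {
    float position[3];
    float texCoord[2];
};

// The program pulls the attributes it wants from the mesh's VBO by name,
// checking that each attribute actually exists in the linked program.
void bindMeshAttributes(GLuint program, GLuint vbo)
{
    glBindBuffer(GL_ARRAY_BUFFER, vbo);

    GLint pos = glGetAttribLocation(program, "Vertex");
    if (pos >= 0) {
        glEnableVertexAttribArray(pos);
        glVertexAttribPointer(pos, 3, GL_FLOAT, GL_FALSE, sizeof(MeshVertex),
                              reinterpret_cast<void*>(offsetof(MeshVertex, position)));
    }

    GLint uv = glGetAttribLocation(program, "TexCoord");
    if (uv >= 0) {
        glEnableVertexAttribArray(uv);
        glVertexAttribPointer(uv, 2, GL_FLOAT, GL_FALSE, sizeof(MeshVertex),
                              reinterpret_cast<void*>(offsetof(MeshVertex, texCoord)));
    }
}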
P.S.
You can find an actual implementation of these ideas here: http://code.google.com/p/kri/
