I'm using SWIG with Lua and have something like the following structure wrapped, which is used for generic vector calculations:
typedef struct
{
    %mutable;
    float x, y, z;
    %extend
    {
        void Set(float x, float y, float z)
        {
            Vector3Set(x, y, z);
        }
    };
} Vector3;
In the structure below I'm reusing Vector3 inside another structure and setting it to %immutable:
typedef struct
{
    %immutable;
    Vector3 gravity;
} World;
In Lua, the following behaves as expected, and I get an error that gravity is immutable:
world.gravity=Vector3:Set(1,2,3)
But if I do this:
world.gravity.x=-10
No error is generated and world.gravity.x is equal to -10.
How can I fix this? I obviously do not want to mark x, y and z in Vector3 itself as %immutable.
Remove the %mutable from Vector3 so that SWIG can propagate the mutability flag to inner data members.
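A minimal sketch of that fix, keeping everything else from the question as-is. Without the explicit %mutable override, the members pick up the mutability of the context they are wrapped in, so World's %immutable also covers gravity.x, gravity.y and gravity.z:
typedef struct
{
    /* no %mutable override here, so the enclosing %immutable
       in World applies to x, y, z when accessed via world.gravity */
    float x, y, z;
    %extend
    {
        void Set(float x, float y, float z)
        {
            Vector3Set(x, y, z);
        }
    };
} Vector3;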
So it appears that Apple Metal did away with preprocessor directives, opting to use function constants for optionally enabling/disabling parts of shader code.
I can conditionally enable certain struct fields, like so:
constant bool is_cubemap_render [[function_constant(0)]];

struct FragmentOut {
    float depth [[depth(any), function_constant(is_cubemap_render)]];
};
However, I am having a hard time understanding the best way to omit a field when is_cubemap_render is false.
struct FragmentOut {
    float depth [[depth(any), !function_constant(is_cubemap_render)]]; // does not compile
};

struct FragmentOut {
    float depth [[depth(any), function_constant(!is_cubemap_render)]]; // does not compile either
};
What I resorted to doing is:
constant bool is_cubemap_render [[function_constant(0)]];
constant bool is_not_cubemap_render = !is_cubemap_render;

struct FragmentOut {
    float depth [[depth(any), function_constant(is_cubemap_render)]];
    float4 color [[color(0), function_constant(is_not_cubemap_render)]];
};
I don't really like it as it pollutes the start of my files and I have to manually declare new variables. Is there a better way to go about it?
I'm using WebCamTexture to get input from the camera (iOS & Android). However, since this is raw input, the rotation is wrong when rendered to a texture. I read around a lot and found this (look at the bottom): WebCamTexture rotated and flipped on iPhone
His code (but with test-values):
Quaternion rotation = Quaternion.Euler(45f, 30f, 90f);
Matrix4x4 rotationMatrix = Matrix4x4.TRS(Vector3.zero, rotation, new Vector3(1, 1, 1));
material.SetMatrix("_Rotation", rotationMatrix);
But whatever value I use, nothing happens (neither in the editor nor on devices)...
Thanks!
Edit
After some intense testing, I found that material.SetMatrix, SetFloat, SetWhatever has NO effect (the value is not set) unless the property is declared inside the "Properties" block. Looking at Unity's own example, this shouldn't have to (and can't) be done for a matrix (it can't be declared inside Properties, only inside the CGPROGRAM). So... how do you set a matrix then? Or what else am I doing wrong?
You should be using WebCamTexture.videoRotationAngle; it's designed to solve exactly this problem. Read more about it here.
Example code:
using UnityEngine;
using System.Collections;
public class ExampleClass : MonoBehaviour {
    public WebCamTexture webcamTexture;
    public Quaternion baseRotation;

    void Start() {
        webcamTexture = new WebCamTexture();
        // 'renderer.material' works on older Unity versions; on Unity 5+ use GetComponent
        GetComponent<Renderer>().material.mainTexture = webcamTexture;
        baseRotation = transform.rotation;
        webcamTexture.Play();
    }

    void Update() {
        // Compensate for the hardware camera's orientation every frame
        transform.rotation = baseRotation * Quaternion.AngleAxis(webcamTexture.videoRotationAngle, Vector3.up);
    }
}
Just rotate the camera 90 degrees along the z axis (the camera which is rendering the WebCamTexture game object).
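A minimal sketch of that suggestion, assuming this script is attached to the camera that renders the WebCamTexture game object (the script name is illustrative):
using UnityEngine;

public class RotateWebcamCamera : MonoBehaviour {
    void Start() {
        // Tilt the rendering camera 90 degrees around its z axis
        // to compensate for the raw feed's orientation.
        transform.Rotate(0f, 0f, 90f);
    }
}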
I want to copy BasicEffect's fog method to use in my own shader so I don't have to declare both a BasicEffect shader and my own. The HLSL code of BasicEffect was released with one of the downloadable samples on XNA Creators Club a while ago, and I thought the method I need would be found within that HLSL file. However, all I can see is a function being called, but no actual definition for that function. The function called is:
ApplyFog(color, pin.PositionWS.w);
Does anybody know where the definition is and whether it's freely accessible? Otherwise, any help on how to replicate its effect would be great.
I downloaded the sample from here.
Thanks.
Edit: Still having problems. I think it's to do with getting the depth:
VertexToPixel InstancedCelShadeVSNmVc(VSInputNmVc VSInput, in VSInstanceVc VSInstance)
{
    VertexToPixel Output = (VertexToPixel)0;
    Output.Position = mul(mul(mul(mul(VSInput.Position, transpose(VSInstance.World)), xWorld), xView), xProjection);
    Output.ViewSpaceZ = -VSInput.Position.z / xCameraClipFar;
    // ...
}
Is that right? Camera clip far is passed in as a constant.
Here's an example of how to achieve a similar effect.
In your vertex shader function, you pass the view-space Z position divided by the distance of your far plane; that gives you a nice 0..1 mapping for your depth values.
Then, in your pixel shader, you use the lerp function to blend between your original color value and the fog color. Here's some (pseudo)code:
cbuffer Input // I'm used to DX10+; remove the cbuffer wrapper for DX9
{
    float FarPlane;
    float4 FogColor;
};

struct VS_Output
{
    //...Whatever else you need
    float ViewSpaceZ : TEXCOORD0; // or whatever semantic you'd like to use
};

VS_Output MainVS(/* Your Input Here */)
{
    VS_Output output;
    //...Transform to view space
    output.ViewSpaceZ = -vsPosition.z / FarPlane;
    return output;
}

float4 MainPS(VS_Output input) : SV_Target0 // or COLOR0, depending on DX version
{
    const float FOG_MIN = 0.9;
    const float FOG_MAX = 0.99;
    //...Calculate Color
    // Fog factor ramps from 0 at FOG_MIN to 1 at FOG_MAX of the 0..1 depth range
    return lerp(yourCalculatedColor, FogColor, smoothstep(FOG_MIN, FOG_MAX, input.ViewSpaceZ));
}
I've written this off the top of my head; hope it helps.
The constants I've chosen will give you a pretty "steep" fog; choose a smaller value for FOG_MIN to get a smoother fog.
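On the XNA side, a minimal sketch of feeding these values in, assuming the parameter names match the snippet above and effect is your compiled Effect instance (farPlaneDistance and fogColor are placeholders):
// Hypothetical parameter names matching the shader sketch above
effect.Parameters["FarPlane"].SetValue(farPlaneDistance);
effect.Parameters["FogColor"].SetValue(fogColor.ToVector4());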
I have a problem passing a float array to a vertex shader (HLSL) through a constant buffer. I know that each "float" in the array below gets a 16-byte slot all by itself (space equivalent to a float4) due to the HLSL packing rules:
// C++ struct
struct ForegroundConstants
{
    DirectX::XMMATRIX transform;
    float bounceCpp[64];
};

// Vertex shader constant buffer
cbuffer ForegroundConstantBuffer : register(b0)
{
    matrix transform;
    float bounceHlsl[64];
};
(Unfortunately, the simple solution here does not work; nothing is drawn after I made that change.)
While the C++ data gets passed, due to the packing rules each "float" in the bounceCpp C++ array ends up in a 16-byte slot all by itself in the bounceHlsl array. This resulted in a warning similar to the following:
ID3D11DeviceContext::DrawIndexed: The size of the Constant Buffer at slot 0 of the Vertex Shader unit is too small (320 bytes provided, 1088 bytes, at least, expected). This is OK, as out-of-bounds reads are defined to return 0. It is also possible the developer knows the missing data will not be used anyway. This is only a problem if the developer actually intended to bind a sufficiently large Constant Buffer for what the shader expects.
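(For reference, the arithmetic behind those numbers: the C++ struct is 64 bytes of XMMATRIX plus 64 × 4 = 256 bytes of floats, i.e. the 320 bytes provided. On the HLSL side, each of the 64 array elements starts on its own 16-byte register, so the buffer needs 64 + 63 × 16 + 4 = 1076 bytes, which rounds up to the next 16-byte boundary: the 1088 bytes the debug layer expects.)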
The recommendation, as pointed out here and here, is to rewrite the HLSL constant buffer this way:
cbuffer ForegroundConstantBuffer : register(b0)
{
    matrix transform;
    float4 bounceHlsl[16]; // equivalent to 64 floats
};

static float temp[64] = (float[64]) bounceHlsl;

float4 main(float4 pos : POSITION) : SV_POSITION
{
    int index = someValueRangeFrom0to63;
    float y = temp[index];
    // Bla bla bla...
}
But that didn't work (i.e. ID3D11Device1::CreateVertexShader never returns). I'm compiling against Shader Model 4 Level 9_1. Can you spot anything that I have done wrong here?
Thanks in advance! :)
Regards,
Ben
One solution, albeit non-optimal, is to just declare your float array as
float4 bounceHlsl[16];
then process the index like
float x = ((float[4])(bounceHlsl[i/4]))[i%4];
where i is the index you require.
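A minimal sketch of that trick in context (someValueRangeFrom0to63 stands in for however the index is computed, and the return expression is only illustrative):
cbuffer ForegroundConstantBuffer : register(b0)
{
    matrix transform;
    float4 bounceHlsl[16]; // 64 floats, packed four per register
};

float4 main(float4 pos : POSITION) : SV_POSITION
{
    int i = someValueRangeFrom0to63;
    // Pick the float4 register, then the component within it
    float y = ((float[4])(bounceHlsl[i / 4]))[i % 4];
    return mul(pos + float4(0, y, 0, 0), transform);
}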
I came across a weird behavior of HLSL. I am trying to use an array that is contained within a struct, like this (Pixel Shader code):
struct VSOUT {
    float4 projected : SV_POSITION;
    float3 pos : POSITION;
    float3 normal : NORMAL;
};

struct Something {
    float a[17];
};
float4 shMain (VSOUT input) : SV_Target {
    float4 col = float4(1, 1, 1, 1); // placeholder; 'col' was not declared in the original snippet
    Something s;
    for (int i = 0; i < (int)(input.pos.x * 800); ++i)
        s.a[(int)input.pos.x] = input.pos.x;
    return col * s.a[(int)input.pos.x];
}
The code makes no sense logically; it's just a sample. The problem is that when I try to compile it, I get the following error (line 25 is the for-loop line):
(25,7): error X3511: Forced to unroll loop, but unrolling failed.
However, when I put the array outside the struct (just declare float a[17] in shMain), everything works as expected.
My question is, why is DirectX trying to unroll the (non-unrollable) for-loop when the struct is used? Is this documented behavior? Is there any available workaround other than putting the array outside the struct?
I am using shader model 4.0, DirectX 10 SDK from June 2010.
EDIT:
For clarification, I am adding the working code; the only change is replacing the struct Something with a plain array:
struct VSOUT {
    float4 projected : SV_POSITION;
    float3 pos : POSITION;
    float3 normal : NORMAL;
};
float4 shMain (VSOUT input) : SV_Target {
    float4 col = float4(1, 1, 1, 1); // placeholder, as above
    float a[17]; // Direct declaration of the array
    for (int i = 0; i < (int)(input.pos.x * 800); ++i)
        a[(int)input.pos.x] = input.pos.x;
    return col * a[(int)input.pos.x];
}
This code compiles and works as expected. It works even if I add the [loop] attribute in front of the for-loop, which means the loop is not unrolled (which is the correct behavior).
I'm not sure, but what I know is that the hardware schedules and processes fragments in blocks of 2x2 (for computing derivatives). This could be a reason why fxc tries to unroll the for-loop, so that the shader program is executed in lockstep mode.
Also, did you try the [loop] attribute to generate code that uses flow control?
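For reference, this is what that attribute looks like on the loop from the question; whether the compiler accepts it with the struct-contained array is exactly what's at issue here:
[loop] // ask the compiler to emit real flow control instead of unrolling
for (int i = 0; i < (int)(input.pos.x * 800); ++i)
    s.a[(int)input.pos.x] = input.pos.x;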