"Incorrect parameter" error in DrawUserPrimitives call - f#

I'm getting an unhandled exception with the message "HRESULT: [0x80070057], Module: [General], ApiCode: [E_INVALIDARG/Invalid Arguments], Message: The parameter is incorrect." at the call to DrawUserPrimitives in the code below:
namespace Game1

open Microsoft.Xna.Framework
open Microsoft.Xna.Framework.Graphics
open System.IO
open System.Reflection

type Game1() as this =
    inherit Game()
    let graphics = new GraphicsDeviceManager(this)
    [<DefaultValue>] val mutable effect : Effect
    [<DefaultValue>] val mutable vertices : VertexPositionColor[]
    do base.Content.RootDirectory <- "Content"

    override this.Initialize() =
        base.Initialize()
        let device = base.GraphicsDevice
        let s = Assembly.GetExecutingAssembly().GetManifestResourceStream("effects.mgfxo")
        let reader = new BinaryReader(s)
        this.effect <- new Effect(device, reader.ReadBytes((int)reader.BaseStream.Length))
        ()

    override this.LoadContent() =
        this.vertices <-
            [|
                VertexPositionColor(Vector3(-0.5f, -0.5f, 0.0f), Color.Red);
                VertexPositionColor(Vector3(0.0f, 0.5f, 0.0f), Color.Green);
                VertexPositionColor(Vector3(0.5f, -0.5f, 0.0f), Color.Yellow)
            |]

    override this.Draw(gameTime) =
        let device = base.GraphicsDevice
        do device.Clear(Color.CornflowerBlue)
        this.effect.CurrentTechnique <- this.effect.Techniques.["Pretransformed"]
        this.effect.CurrentTechnique.Passes |> Seq.iter
            (fun pass ->
                pass.Apply()
                device.DrawUserPrimitives<VertexPositionColor>(PrimitiveType.TriangleList, this.vertices, 0, 1)
            )
        do base.Draw(gameTime)
My effect code is as follows (taken from the excellent Riemer's tutorials) and is as simple as can be. It's being converted as in this answer, and that seems to be working because I can see the effect name if I put a breakpoint in before the draw call.
struct VertexToPixel
{
    float4 Position : POSITION;
    float4 Color : COLOR0;
    float LightingFactor : TEXCOORD0;
    float2 TextureCoords : TEXCOORD1;
};

struct PixelToFrame
{
    float4 Color : COLOR0;
};

VertexToPixel PretransformedVS(float4 inPos : POSITION, float4 inColor : COLOR)
{
    VertexToPixel Output = (VertexToPixel)0;
    Output.Position = inPos;
    Output.Color = inColor;
    return Output;
}

PixelToFrame PretransformedPS(VertexToPixel PSIn)
{
    PixelToFrame Output = (PixelToFrame)0;
    Output.Color = PSIn.Color;
    return Output;
}

technique Pretransformed
{
    pass Pass0
    {
        VertexShader = compile vs_4_0 PretransformedVS();
        PixelShader = compile ps_4_0 PretransformedPS();
    }
}
It works fine if I replace the custom effect with a BasicEffect as per this example.
I'm using Monogame 3.2 and Visual Studio 2013.

I eventually worked out (with help from this forum thread) that I just needed to replace POSITION with SV_POSITION in the effect file. This is a consequence of MonoGame being built on DirectX 10/11 rather than XNA's DirectX 9.
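The same pitfall applies to any DX9-era effect ported to MonoGame: the system-value output semantics were renamed in DirectX 10/11. As a purely illustrative sketch (this helper and its mapping table are my own, not part of MonoGame or the D3D SDK), the renaming amounts to:

```python
# Illustrative sketch: mapping DirectX 9 output semantics to the
# system-value names DirectX 10/11 (and therefore MonoGame) expects.
# Only the output semantics relevant to this kind of effect are listed.
DX9_TO_DX11_OUTPUT_SEMANTICS = {
    "POSITION": "SV_POSITION",  # vertex shader output position
    "COLOR": "SV_Target",       # pixel shader output color
    "DEPTH": "SV_Depth",        # pixel shader output depth
}

def upgrade_semantic(semantic: str) -> str:
    """Return the DX10/11 name for a DX9 output semantic, if it changed."""
    base = semantic.rstrip("0123456789")  # strip a trailing index like COLOR0
    index = semantic[len(base):]
    return DX9_TO_DX11_OUTPUT_SEMANTICS.get(base, base) + index

print(upgrade_semantic("POSITION"))   # SV_POSITION
print(upgrade_semantic("TEXCOORD1"))  # unchanged: TEXCOORD1
```

Input semantics on the vertex shader (POSITION, COLOR0 as inputs) can stay as they are; it is the stage outputs consumed by the rasterizer that need the SV_ prefix.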

Related

Problems with applying a Color Cube to an image using the Metal core in swift iOS

Hello developer friends!
I've run into a puzzling problem with the behavior of a custom Core Image filter written in the Metal language.
The task is not very difficult: take a LUT file in PNG format and overlay it on an image with a given intensity. iOS has built-in filters, but they don't support LUT sizes larger than 64 or control over the overlay intensity, and they are not very good in terms of performance. I also need to use LUTs of size 64 or larger.
Finding the coordinate of the point in the filter kernel seemed straightforward. But for some reason some colors are distorted in the output image, and it is noticeably darker.
Thank you for any help.
Here is the filter core code:
#include <metal_stdlib>
using namespace metal;
#include <CoreImage/CoreImage.h>

extern "C" {
    namespace coreimage {
        float4 commitLUT64(sampler image, sampler lut, float intensity) {
            float4 color = image.sample(image.coord());
            color = clamp(color, float4(0.0f), float4(1.0f));
            float red = color.r * 63.0f;
            float green = color.g * 63.0f;
            float blue = color.b * 63.0f;
            float x = red / 512.0f;
            float y = green / 512.0f;
            float blueY = floor(blue / 8.0f) * 0.125f;
            float blueX = 0.125f * ceil(blue - 8.0f * floor(blue / 8.0f)) / 512.0f;
            float4 newColor = lut.sample(float2(x + blueX, y + blueY));
            return mix(color, float4(newColor.r, newColor.g, newColor.b, color.a), intensity);
        }
    }
}
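To make the kernel's indexing easier to reason about, here is one common way to compute the lookup coordinates for this kind of LUT in plain Python. This is a hypothetical sketch, not taken from the kernel above: it assumes a 64-point LUT stored as an 8×8 grid of 64×64-pixel tiles in a 512×512 PNG (the layout the kernel's constants imply) and adds half-texel centering; comparing it term by term against the kernel's x, blueX, and blueY may help isolate the distortion.

```python
import math

def lut_uv(r, g, b, lut_size=64, tiles_per_row=8, atlas_px=512):
    """Map an RGB color in [0, 1] to normalized (u, v) coordinates in a
    tiled LUT atlas: blue selects the tile, red/green index within it.
    Assumed layout: tiles_per_row x tiles_per_row tiles, each
    lut_size x lut_size pixels, in an atlas_px x atlas_px image."""
    tile_px = atlas_px // tiles_per_row           # 64 pixels per tile
    blue = b * (lut_size - 1)                     # blue in [0, 63]
    tile = int(math.floor(blue))                  # which blue slice
    col, row = tile % tiles_per_row, tile // tiles_per_row
    # +0.5 centers the sample on the texel rather than its corner
    u = (col * tile_px + r * (lut_size - 1) + 0.5) / atlas_px
    v = (row * tile_px + g * (lut_size - 1) + 0.5) / atlas_px
    return u, v

u, v = lut_uv(1.0, 0.0, 0.0)
print(round(u, 4), round(v, 4))  # pure red: near the right edge of the first tile
```

A full implementation would also interpolate between the two adjacent blue tiles rather than flooring, but the coordinate arithmetic is the part worth checking first.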
Here is the Color LookUp class inherited from CIFilter:
class ColorLookUp: CIFilter {
    var inputImage: CIImage?
    var inputLUT: CIImage?
    var inputIntensity: CGFloat = 1.0

    static var kernel: CIKernel = {
        guard let url = Bundle.main.url(forResource: "ColorCube.ci", withExtension: "metallib"),
              let data = try? Data(contentsOf: url)
        else { fatalError("Unable to load metallib") }
        guard let kernel = try? CIKernel(functionName: "commitLUT64", fromMetalLibraryData: data)
        else { fatalError("Unable to create color kernel") }
        return kernel
    }()

    override var outputImage: CIImage? {
        guard let image = inputImage, let lut = inputLUT else { return inputImage }
        return ColorLookUp.kernel.apply(
            extent: image.extent,
            roiCallback: { (index, dest) -> CGRect in if index == 0 { return dest } else { return lut.extent } },
            arguments: [image, lut, inputIntensity])
    }
}
Here is the project repository with additional materials: https://github.com/VKostin8311/MetalKernelsTestApp.git

how to use MTLSamplerState instead of declaring a sampler in my fragment shader code?

I have the shader below where I define a sampler (constexpr sampler textureSampler (mag_filter::linear,min_filter::linear);).
using namespace metal;

struct ProjectedVertex {
    float4 position [[position]];
    float2 textureCoord;
};

fragment float4 fragmentShader(const ProjectedVertex in [[stage_in]],
                               const texture2d<float> colorTexture [[texture(0)]],
                               constant float4 &opacity [[buffer(1)]]) {
    constexpr sampler textureSampler(mag_filter::linear, min_filter::linear);
    const float4 colorSample = colorTexture.sample(textureSampler, in.textureCoord);
    return colorSample * opacity[0];
}
Now I would like to avoid hard-coding this sampler inside my shader. I found MTLSamplerState, but I don't know how to use it.
To create a sampler, first create a MTLSamplerDescriptor object and configure the descriptor’s properties. Then call the newSamplerStateWithDescriptor: method on the MTLDevice object that will use this sampler. After you create the sampler, you can release the descriptor or reconfigure its properties to create other samplers.
// Create default sampler state
MTLSamplerDescriptor *samplerDesc = [MTLSamplerDescriptor new];
samplerDesc.rAddressMode = MTLSamplerAddressModeRepeat;
samplerDesc.sAddressMode = MTLSamplerAddressModeRepeat;
samplerDesc.tAddressMode = MTLSamplerAddressModeRepeat;
samplerDesc.minFilter = MTLSamplerMinMagFilterLinear;
samplerDesc.magFilter = MTLSamplerMinMagFilterLinear;
samplerDesc.mipFilter = MTLSamplerMipFilterNotMipmapped;
id<MTLSamplerState> ss = [device newSamplerStateWithDescriptor:samplerDesc];
Set the sampler state for the fragment function:
id<MTLRenderCommandEncoder> encoder = [commandBuffer renderCommandEncoderWithDescriptor: passDescriptor];
...
[encoder setFragmentSamplerState: ss atIndex:0];
Accessing from the shader:
fragment float4 albedoMainFragment(ImageColor in [[stage_in]],
                                   texture2d<float> diffuseTexture [[texture(0)]],
                                   sampler smp [[sampler(0)]]) {
    float4 color = diffuseTexture.sample(smp, in.texCoord);
    return color;
}
How to create a SamplerState
First, declare an MTLSamplerDescriptor and configure properties such as the address modes, magFilter, and minFilter.
Second, call the makeSamplerState method on an MTLDevice; in most cases this is the system default device.
You can use the code below. I hope it helps.
private static func buildSamplerState() -> MTLSamplerState? {
    let descriptor = MTLSamplerDescriptor()
    descriptor.sAddressMode = .repeat // .clampToEdge, .mirrorRepeat, .clampToZero
    descriptor.tAddressMode = .repeat // .clampToEdge, .mirrorRepeat, .clampToZero
    descriptor.magFilter = .linear // .nearest
    descriptor.minFilter = .linear // .nearest
    let samplerState = MTLCreateSystemDefaultDevice()?.makeSamplerState(descriptor: descriptor)
    return samplerState
}
How to use it
...
let samplerState = buildSamplerState()
...
// call `makeRenderCommandEncoder` to create commandEncoder from commandBuffer
let commandEncoder = commandBuffer.makeRenderCommandEncoder(descriptor: renderPassDescriptor)
commandEncoder.setFragmentSamplerState(samplerState, index: 0)
In your fragment shader:
fragment float4 exampleShader(VertexIO inputFragment [[stage_in]],
                              sampler textureSampler [[sampler(0)]],
                              texture2d<float> inputTexture [[texture(0)]])
{
    float2 position = inputFragment.textureCoord;
    // ...
    return inputTexture.sample(textureSampler, position);
}

getAttribLocation return always the same index

attribute vec2 test;
attribute vec2 position;

void main() {
    vTexCoord = position;
    vec2 gg = test;
    .....
}
What does getAttribLocation return? I used to believe it was the index of the attribute in the code, but no: it always returns 0 for position and 1 for test.
getAttribLocation gets the location of the attribute. Just because your GPU/driver returns 0 for position and 1 for test doesn't mean that all drivers will.
Also, while debugging it's common to comment out parts of a shader. If an attribute is not used, the driver may optimize it away. In your case, if you were to comment out whatever lines use position, it's likely test would get location 0. If you weren't looking up the location and assumed test was always at location 1, your code would fail.
On the other hand you can set the location before you call linkProgram by calling bindAttribLocation. For example
gl.bindAttribLocation(program, 10, "position");
gl.bindAttribLocation(program, 5, "test");
gl.linkProgram(program);
In which case you don't have to look up the locations.
var vs = `
attribute float position;
attribute float test;

void main() {
  gl_Position = vec4(position, test, 0, 1);
}
`;
var fs = `
void main() {
  gl_FragColor = vec4(0, 1, 0, 1);
}
`;

function createShader(gl, type, source) {
  var s = gl.createShader(type);
  gl.shaderSource(s, source);
  gl.compileShader(s);
  if (!gl.getShaderParameter(s, gl.COMPILE_STATUS)) {
    console.log(gl.getShaderInfoLog(s));
  }
  return s;
}

var gl = document.createElement("canvas").getContext("webgl");
var prg = gl.createProgram();
gl.attachShader(prg, createShader(gl, gl.VERTEX_SHADER, vs));
gl.attachShader(prg, createShader(gl, gl.FRAGMENT_SHADER, fs));
gl.bindAttribLocation(prg, 5, "position");
gl.bindAttribLocation(prg, 10, "test");
gl.linkProgram(prg);
if (!gl.getProgramParameter(prg, gl.LINK_STATUS)) {
  console.log(gl.getProgramInfoLog(prg));
}
console.log("test location:", gl.getAttribLocation(prg, "test"));
console.log("position location:", gl.getAttribLocation(prg, "position"));

Draw/Remove geometrical shapes on render target using DirectX 11

I am trying to draw and remove geometrical shapes on a render target which displays an image.
During each frame render, a new image is rendered to the render target by updating the resource via texture mapping, which works perfectly as expected.
Now I'm trying to draw a new geometrical shape filled with a solid color on top of the render target, which is only done while rendering every 10th frame.
However, I'm currently stuck as to how I should approach this.
I'm using DirectX 11 on a Windows 7 PC with C# (SlimDX or SharpDX for DirectX).
Any suggestion would be great.
Thanks.
Code:
During the render loop I added the code below to draw the overlay, which is a triangle in my case.
var device = this.Device;
var context = device.ImmediateContext;
var effectsFileResource = Properties.Resources.ShapeEffect;
ShaderBytecode shaderByteCode = ShaderBytecode.Compile(effectsFileResource, "fx_5_0", ShaderFlags.EnableStrictness | ShaderFlags.Debug, EffectFlags.None);
var effect = new Effect(device, shaderByteCode);

// Create triangle vertex data, making sure to rewind the stream afterward.
var verticesTriangle = new DataStream(VertexPositionColor.SizeInBytes * 3, true, true);
verticesTriangle.Write(new VertexPositionColor(new Vector3(0.0f, 0.5f, 0.5f), new Color4(1.0f, 0.0f, 0.0f, 1.0f)));
verticesTriangle.Write(new VertexPositionColor(new Vector3(0.5f, -0.5f, 0.5f), new Color4(0.0f, 1.0f, 0.0f, 1.0f)));
verticesTriangle.Write(new VertexPositionColor(new Vector3(-0.5f, -0.5f, 0.5f), new Color4(0.0f, 0.0f, 1.0f, 1.0f)));
verticesTriangle.Position = 0;

// Create the triangle vertex layout and buffer.
var layoutColor = new InputLayout(device, effect.GetTechniqueByName("Color").GetPassByIndex(0).Description.Signature, VertexPositionColor.inputElements);
var vertexBufferColor = new SharpDX.Direct3D11.Buffer(device, verticesTriangle, (int)verticesTriangle.Length, ResourceUsage.Default, BindFlags.VertexBuffer, CpuAccessFlags.None, ResourceOptionFlags.None, 0);
verticesTriangle.Close();

var srv = new ShaderResourceView(device, this.RenderTarget);
effect.GetVariableByName("g_Overlay").AsShaderResource().SetResource(srv);

// Think of the shared textureD3D10 as an overlay.
// The overlay needs to show the 2D content but let the underlying triangle (or whatever)
// show through, which is accomplished by blending.
var bsd = new BlendStateDescription();
bsd.RenderTarget[0].IsBlendEnabled = true;
bsd.RenderTarget[0].SourceBlend = BlendOption.SourceColor;
bsd.RenderTarget[0].DestinationBlend = BlendOption.BlendFactor;
bsd.RenderTarget[0].BlendOperation = BlendOperation.Add;
bsd.RenderTarget[0].SourceAlphaBlend = BlendOption.One;
bsd.RenderTarget[0].DestinationAlphaBlend = BlendOption.Zero;
bsd.RenderTarget[0].AlphaBlendOperation = BlendOperation.Add;
bsd.RenderTarget[0].RenderTargetWriteMask = ColorWriteMaskFlags.All;
var blendStateTransparent = new BlendState(device, bsd);

context.InputAssembler.InputLayout = layoutColor;
context.InputAssembler.PrimitiveTopology = PrimitiveTopology.TriangleStrip;
context.InputAssembler.SetVertexBuffers(0, new VertexBufferBinding(vertexBufferColor, VertexPositionColor.SizeInBytes, 0));
context.OutputMerger.BlendState = blendStateTransparent;

var currentTechnique = effect.GetTechniqueByName("Color");
for (var pass = 0; pass < currentTechnique.Description.PassCount; ++pass)
{
    using (var effectPass = currentTechnique.GetPassByIndex(pass))
    {
        System.Diagnostics.Debug.Assert(effectPass.IsValid, "Invalid EffectPass");
        effectPass.Apply(context);
    }
    context.Draw(3, 0);
}
srv.Dispose();
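The blend state above configures the output merger to compute, per color channel, result = src × SourceColor + dst × BlendFactor, and for alpha result = src × 1 + dst × 0. A minimal Python sketch of that fixed-function arithmetic may help when tuning the overlay (the blend-factor value is whatever you later set on the device context; the one used below is made up):

```python
def blend(src, dst, blend_factor):
    """Sketch of the blend configured above:
    color: SourceBlend = SourceColor, DestinationBlend = BlendFactor
    alpha: SourceAlphaBlend = One, DestinationAlphaBlend = Zero
    src, dst, blend_factor are (r, g, b, a) tuples in [0, 1]."""
    r, g, b = (min(1.0, s * s + d * f)  # src factor is the src color itself
               for s, d, f in zip(src[:3], dst[:3], blend_factor[:3]))
    a = min(1.0, src[3] * 1.0 + dst[3] * 0.0)  # alpha comes from src only
    return (r, g, b, a)

# Mid-gray overlay over a red background with a 0.5 blend factor
print(blend((0.5, 0.5, 0.5, 1.0), (1.0, 0.0, 0.0, 1.0), (0.5, 0.5, 0.5, 1.0)))
# (0.75, 0.25, 0.25, 1.0)
```

Note that because SourceBlend is SourceColor (not SourceAlpha), the overlay's own color, not its alpha channel, controls how strongly it covers the scene; if you want conventional transparency you would use SourceAlpha / InverseSourceAlpha instead.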
Also below is the shader file for the effect:
Texture2D g_Overlay;
SamplerState g_samLinear
{
    Filter = MIN_MAG_MIP_LINEAR;
    AddressU = CLAMP;
    AddressV = CLAMP;
};

// ------------------------------------------------------
// A shader that accepts Position and Color
// ------------------------------------------------------
struct ColorVS_IN
{
    float4 pos : POSITION;
    float4 col : COLOR;
};

struct ColorPS_IN
{
    float4 pos : SV_POSITION;
    float4 col : COLOR;
};

ColorPS_IN ColorVS(ColorVS_IN input)
{
    ColorPS_IN output = (ColorPS_IN)0;
    output.pos = input.pos;
    output.col = input.col;
    return output;
}

float4 ColorPS(ColorPS_IN input) : SV_Target
{
    return input.col;
}

// ------------------------------------------------------
// A shader that accepts Position and Texture
// Used as an overlay
// ------------------------------------------------------
struct OverlayVS_IN
{
    float4 pos : POSITION;
    float2 tex : TEXCOORD0;
};

struct OverlayPS_IN
{
    float4 pos : SV_POSITION;
    float2 tex : TEXCOORD0;
};

OverlayPS_IN OverlayVS(OverlayVS_IN input)
{
    OverlayPS_IN output = (OverlayPS_IN)0;
    output.pos = input.pos;
    output.tex = input.tex;
    return output;
}

float4 OverlayPS(OverlayPS_IN input) : SV_Target
{
    float4 color = g_Overlay.Sample(g_samLinear, input.tex);
    return color;
}

// ------------------------------------------------------
// Techniques
// ------------------------------------------------------
technique11 Color
{
    pass P0
    {
        SetGeometryShader(0);
        SetVertexShader(CompileShader(vs_4_0, ColorVS()));
        SetPixelShader(CompileShader(ps_4_0, ColorPS()));
    }
}

technique11 Overlay
{
    pass P0
    {
        SetGeometryShader(0);
        SetVertexShader(CompileShader(vs_4_0, OverlayVS()));
        SetPixelShader(CompileShader(ps_4_0, OverlayPS()));
    }
}
The above code has been taken from: SharedResources using SharpDX

Loading .fbx models into directX 10

I'm trying to load in meshes into DirectX 10. I've created a bunch of classes that handle it and allow me to call in a mesh with only a single line of code in my main game class.
However, when I run the program this is what renders:
In the debug output window the following errors keep appearing:
D3D10: ERROR: ID3D10Device::DrawIndexed: Input Assembler - Vertex Shader linkage error: Signatures between stages are incompatible. The reason is that Semantic 'TEXCOORD' is defined for mismatched hardware registers between the output stage and input stage. [ EXECUTION ERROR #343: DEVICE_SHADER_LINKAGE_REGISTERINDEX ]
D3D10: ERROR: ID3D10Device::DrawIndexed: Input Assembler - Vertex Shader linkage error: Signatures between stages are incompatible. The reason is that the input stage requires Semantic/Index (POSITION,0) as input, but it is not provided by the output stage. [ EXECUTION ERROR #342: DEVICE_SHADER_LINKAGE_SEMANTICNAME_NOT_FOUND ]
The thing is, I've no idea how to fix this. The code I'm using does work: I've simply brought all of that code into a new project of mine. There are no build errors, and this only appears while the game is running.
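Both errors mean the same thing: the vertex data the input assembler feeds into the pipeline no longer matches the vertex shader's input signature, semantic by semantic and register by register. As an illustrative sketch of the kind of check the runtime performs (not actual D3D code; the triples and messages here are invented for illustration):

```python
def check_linkage(layout, shader_inputs):
    """Each attribute is a (semantic_name, semantic_index, register)
    triple. Every shader input must be provided by the layout, and on
    the register the shader expects. Illustrative only."""
    provided = {(name, idx): reg for name, idx, reg in layout}
    errors = []
    for name, idx, reg in shader_inputs:
        if (name, idx) not in provided:
            errors.append(f"({name},{idx}) required by shader but not provided")
        elif provided[(name, idx)] != reg:
            errors.append(f"({name},{idx}) bound to mismatched registers")
    return errors

# A layout that lacks POSITION and puts TEXCOORD on the wrong register,
# mirroring the two errors in the debug output above
layout = [("TEXCOORD", 0, 1), ("NORMAL", 0, 2)]
shader = [("POSITION", 0, 0), ("TEXCOORD", 0, 0)]
for e in check_linkage(layout, shader):
    print(e)
```

In practice this means comparing TexturedLitVertex::layout against the VS_INPUT struct in the .fx file: the model loader's vertex format and the shader must agree on every semantic and its order.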
The .fx file is as follows:
float4x4 matWorld;
float4x4 matView;
float4x4 matProjection;

struct VS_INPUT
{
    float4 Pos : POSITION;
    float2 TexCoord : TEXCOORD;
};

struct PS_INPUT
{
    float4 Pos : SV_POSITION;
    float2 TexCoord : TEXCOORD;
};

Texture2D diffuseTexture;
SamplerState diffuseSampler
{
    Filter = MIN_MAG_MIP_POINT;
    AddressU = WRAP;
    AddressV = WRAP;
};

//
// Vertex Shader
//
PS_INPUT VS( VS_INPUT input )
{
    PS_INPUT output = (PS_INPUT)0;
    float4x4 viewProjection = mul(matView, matProjection);
    float4x4 worldViewProjection = mul(matWorld, viewProjection);
    output.Pos = mul(input.Pos, worldViewProjection);
    output.TexCoord = input.TexCoord;
    return output;
}

//
// Pixel Shader
//
float4 PS(PS_INPUT input ) : SV_Target
{
    return diffuseTexture.Sample(diffuseSampler, input.TexCoord);
    //return float4(1.0f, 1.0f, 1.0f, 1.0f);
}

RasterizerState NoCulling
{
    FILLMODE = SOLID;
    CULLMODE = NONE;
};

technique10 Render
{
    pass P0
    {
        SetVertexShader( CompileShader( vs_4_0, VS() ) );
        SetGeometryShader( NULL );
        SetPixelShader( CompileShader( ps_4_0, PS() ) );
        SetRasterizerState(NoCulling);
    }
}
In my game, the .fx file and model are called and set as follows:
Loading the shader file:
//Set the shader flags - BMD
DWORD dwShaderFlags = D3D10_SHADER_ENABLE_STRICTNESS;
#if defined( DEBUG ) || defined( _DEBUG )
dwShaderFlags |= D3D10_SHADER_DEBUG;
#endif
ID3D10Blob * pErrorBuffer = NULL;
if( FAILED( D3DX10CreateEffectFromFile( TEXT("TransformedTexture.fx"), NULL, NULL, "fx_4_0", dwShaderFlags, 0, md3dDevice, NULL, NULL, &m_pEffect, &pErrorBuffer, NULL ) ) )
{
    char * pErrorStr = ( char* )pErrorBuffer->GetBufferPointer();
    //If the creation of the Effect fails then a message box will be shown
    MessageBoxA( NULL, pErrorStr, "Error", MB_OK );
    return false;
}
//Get the technique called Render from the effect, we need this for rendering later on
m_pTechnique = m_pEffect->GetTechniqueByName("Render");
//Number of elements in the layout
UINT numElements = TexturedLitVertex::layoutSize;
//Get the Pass description, we need this to bind the vertex to the pipeline
D3D10_PASS_DESC PassDesc;
m_pTechnique->GetPassByIndex( 0 )->GetDesc( &PassDesc );
//Create Input layout to describe the incoming buffer to the input assembler
if (FAILED(md3dDevice->CreateInputLayout( TexturedLitVertex::layout, numElements, PassDesc.pIAInputSignature, PassDesc.IAInputSignatureSize, &m_pVertexLayout ) ) )
{
    return false;
}
Model loading:
m_pTestRenderable = new CRenderable();
//m_pTestRenderable->create<TexturedVertex>(md3dDevice,8,6,vertices,indices);
m_pModelLoader = new CModelLoader();
m_pTestRenderable = m_pModelLoader->loadModelFromFile( md3dDevice, "armoredrecon.fbx" );
m_pGameObjectTest = new CGameObject();
m_pGameObjectTest->setRenderable( m_pTestRenderable );
// Set primitive topology, how are we going to interpret the vertices in the vertex buffer
md3dDevice->IASetPrimitiveTopology( D3D10_PRIMITIVE_TOPOLOGY_TRIANGLELIST );
if ( FAILED( D3DX10CreateShaderResourceViewFromFile( md3dDevice, TEXT( "armoredrecon_diff.png" ), NULL, NULL, &m_pTextureShaderResource, NULL ) ) )
{
    MessageBox( NULL, TEXT( "Can't load Texture" ), TEXT( "Error" ), MB_OK );
    return false;
}
m_pDiffuseTextureVariable = m_pEffect->GetVariableByName( "diffuseTexture" )->AsShaderResource();
m_pDiffuseTextureVariable->SetResource( m_pTextureShaderResource );
Finally, the draw function code:
//All drawing will occur between the clear and present
m_pViewMatrixVariable->SetMatrix( ( float* )m_matView );
m_pWorldMatrixVariable->SetMatrix( ( float* )m_pGameObjectTest->getWorld() );
//Get the stride (size) of a vertex, we need this to tell the pipeline the size of one vertex
UINT stride = m_pTestRenderable->getStride();
//The offset from start of the buffer to where our vertices are located
UINT offset = m_pTestRenderable->getOffset();
ID3D10Buffer * pVB = m_pTestRenderable->getVB();
//Bind the vertex buffer to the input assembler stage
md3dDevice->IASetVertexBuffers( 0, 1, &pVB, &stride, &offset );
md3dDevice->IASetIndexBuffer( m_pTestRenderable->getIB(), DXGI_FORMAT_R32_UINT, 0 );
//Get the Description of the technique, we need this in order to loop through each pass in the technique
D3D10_TECHNIQUE_DESC techDesc;
m_pTechnique->GetDesc( &techDesc );
//Loop through the passes in the technique
for( UINT p = 0; p < techDesc.Passes; ++p )
{
    //Get a pass at current index and apply it
    m_pTechnique->GetPassByIndex( p )->Apply( 0 );
    //Draw call
    md3dDevice->DrawIndexed( m_pTestRenderable->getNumOfIndices(), 0, 0 );
    //m_pD3D10Device->Draw(m_pTestRenderable->getNumOfVerts(),0);
}
Is there anything I've clearly done wrong, or am I missing something? I've spent two weeks trying to work out what on earth I've done wrong, to no avail.
Any insight a fresh pair of eyes could give on this would be great.
