I want to do some custom work in a custom shader, but I get an internal error when I run it. The shader seems to compile, but at run time I get this error message:
error building shaders : Error Domain=AGXMetalG4P Code=1 "Compiler encountered an internal error" UserInfo={NSLocalizedDescription=Compiler encountered an internal error}
[SCNKit ERROR] display link thread seems stuck
SCNProgram *program = [[SCNProgram alloc] init];
program.fragmentFunctionName = @"myVertex";
program.vertexFunctionName = @"myFragment";
mySceneNode.geometry.program = program;
and the shader:
#include <metal_stdlib>
#include <SceneKit/scn_metal>

using namespace metal;

struct MyNodeBuffer {
    float4x4 modelTransform;
    float4x4 modelViewTransform;
    float4x4 normalTransform;
    float4x4 modelViewProjectionTransform;
};

typedef struct {
    float3 position [[attribute(SCNVertexSemanticPosition)]];
} MyVertexInput;

struct SimpleVertex
{
    float4 position [[position]];
};

vertex SimpleVertex myVertex(MyVertexInput in [[stage_in]],
                             constant SCNSceneBuffer& scn_frame [[buffer(0)]],
                             constant MyNodeBuffer& scn_node [[buffer(1)]])
{
    SimpleVertex vert;
    vert.position = scn_node.modelViewProjectionTransform * float4(in.position, 1.0);
    vert.position = float4(in.position, 0.0);
    return vert;
}

fragment half4 myFragment(SimpleVertex in [[stage_in]])
{
    half4 color;
    color = half4(0.0, 1.0, 0.0, 1.0);
    return color;
}
Related
I have the shader below, where I define a sampler (constexpr sampler textureSampler(mag_filter::linear, min_filter::linear);):
using namespace metal;

struct ProjectedVertex {
    float4 position [[position]];
    float2 textureCoord;
};

fragment float4 fragmentShader(const ProjectedVertex in [[stage_in]],
                               const texture2d<float> colorTexture [[texture(0)]],
                               constant float4 &opacity [[buffer(1)]]) {
    constexpr sampler textureSampler(mag_filter::linear, min_filter::linear);
    const float4 colorSample = colorTexture.sample(textureSampler, in.textureCoord);
    return colorSample * opacity[0];
}
Now I would like to avoid hard-coding this sampler inside my shader code. I found MTLSamplerState, but I don't know how to use it.
To create a sampler, first create a MTLSamplerDescriptor object and configure the descriptor’s properties. Then call the newSamplerStateWithDescriptor: method on the MTLDevice object that will use this sampler. After you create the sampler, you can release the descriptor or reconfigure its properties to create other samplers.
// Create default sampler state
MTLSamplerDescriptor *samplerDesc = [MTLSamplerDescriptor new];
samplerDesc.rAddressMode = MTLSamplerAddressModeRepeat;
samplerDesc.sAddressMode = MTLSamplerAddressModeRepeat;
samplerDesc.tAddressMode = MTLSamplerAddressModeRepeat;
samplerDesc.minFilter = MTLSamplerMinMagFilterLinear;
samplerDesc.magFilter = MTLSamplerMinMagFilterLinear;
samplerDesc.mipFilter = MTLSamplerMipFilterNotMipmapped;
id<MTLSamplerState> ss = [device newSamplerStateWithDescriptor:samplerDesc];
Set the sampler state for the fragment function:
id<MTLRenderCommandEncoder> encoder = [commandBuffer renderCommandEncoderWithDescriptor: passDescriptor];
...
[encoder setFragmentSamplerState: ss atIndex:0];
Accessing from the shader:
fragment float4 albedoMainFragment(ImageColor in [[stage_in]],
                                   texture2d<float> diffuseTexture [[texture(0)]],
                                   sampler smp [[sampler(0)]]) {
    float4 color = diffuseTexture.sample(smp, in.texCoord);
    return color;
}
How to create a sampler state
First, declare an MTLSamplerDescriptor and configure properties such as the address modes, magFilter, and minFilter.
Second, call the makeSamplerState method on an MTLDevice, in most cases the default device.
You can use the code below. I hope it helps.
private static func buildSamplerState() -> MTLSamplerState? {
    let descriptor = MTLSamplerDescriptor()
    descriptor.sAddressMode = .repeat // or .clampToEdge, .mirrorRepeat, .clampToZero
    descriptor.tAddressMode = .repeat // or .clampToEdge, .mirrorRepeat, .clampToZero
    descriptor.magFilter = .linear   // or .nearest
    descriptor.minFilter = .linear   // or .nearest
    let samplerState = MTLCreateSystemDefaultDevice()?.makeSamplerState(descriptor: descriptor)
    return samplerState
}
How to use it
...
let samplerState = buildSamplerState()
...
// call `makeRenderCommandEncoder` to create commandEncoder from commandBuffer
let commandEncoder = commandBuffer.makeRenderCommandEncoder(descriptor: renderPassDescriptor)
commandEncoder.setFragmentSamplerState(samplerState, index: 0)
In your fragment shader:
fragment float4 exampleShader(VertexIO inputFragment [[stage_in]],
                              sampler textureSampler [[sampler(0)]],
                              texture2d<float> inputTexture [[texture(0)]])
{
    float2 position = inputFragment.textureCoord;
    // ...
    return inputTexture.sample(textureSampler, position);
}
I have this Metal shader:
struct InVertex {
    packed_float3 pos;
    packed_uchar4 color;
};

vertex ProjectedVertex vertexShader(const device InVertex *vertexArray [[buffer(0)]],
                                    const unsigned int vertexId [[vertex_id]]) {
    InVertex in = vertexArray[vertexId];
    ....
}
However, I would like to make the buffer declaration "dynamic", i.e. I would like my shader to be able to handle buffer types like, for example:
struct InVertex1 {
    packed_float3 pos;
    packed_uchar4 color;
};

struct InVertex2 {
    float4 pos;
    float4 color;
};

struct InVertex3 {
    float4 pos;
    float4 tangent;
    float4 color;
};
etc..
So I would like something like:
vertex ProjectedVertex vertexShader(const device ???? *vertexArray [[buffer(0)]],
                                    const unsigned int vertexId [[vertex_id]],
                                    const device int vertexType [[buffer(1)]]) {
    if (vertexType == InVertex1Type) {
        ... handle the InVertex1 type ...
    }
    else if (vertexType == InVertex2Type) {
        ... handle the InVertex2 type ...
    }
    else if (vertexType == InVertex3Type) {
        ... handle the InVertex3 type ...
    }
}
The Metal shading language is a C++14-based specification with extensions and restrictions. Taking this into account, you can do the following.
First, create a header file called ShaderTypes.h:
// Header containing types and enum constants shared between Metal shaders and Swift/ObjC source
#ifndef ShaderTypes_h
#define ShaderTypes_h

#ifndef __METAL_VERSION__
#include <simd/simd.h> // provides vector_float4 on the CPU side

/// 96-bit 3-component float vector type
typedef struct __attribute__ ((packed)) packed_float3 {
    float x;
    float y;
    float z;
} packed_float3;
#endif

typedef struct
{
    packed_float3 pos;
    packed_uchar4 color;
} InVertex1;

typedef struct
{
    vector_float4 pos;
    vector_float4 color;
} InVertex2;

typedef struct
{
    vector_float4 pos;
    vector_float4 tangent;
    vector_float4 color;
} InVertex3;

enum VertexType { InVertex1Type = 0, InVertex2Type = 1, InVertex3Type = 2 };

typedef struct
{
    InVertex1 InVertex1;
    InVertex2 InVertex2;
    InVertex3 InVertex3;
    enum VertexType vertexType;
} dynamicStruct;

#endif /* ShaderTypes_h */
In your render class add the following:
// Include header shared between C code here, which executes Metal API commands, and .metal files
#import "ShaderTypes.h"
id<MTLBuffer> _dynamicBuffer;

// Create your dynamic buffer.
void InitBuffer(id<MTLDevice> device)
{
    _dynamicBuffer = [device newBufferWithLength:sizeof(dynamicStruct) options:MTLResourceStorageModeShared];
}

// Update your dynamic buffer.
void UpdateBuffer()
{
    dynamicStruct* ds = (dynamicStruct*)_dynamicBuffer.contents;
    ds->InVertex1.color = {0, 0, 0, 0};
    ds->InVertex2.pos = {0, 1, 1, 1};
    ds->InVertex3.tangent = {1, 1, 1, 1};
    // Select a specific struct
    ds->vertexType = VertexType::InVertex2Type;
}

- (void)drawInMTKView:(nonnull MTKView *)view
{
    ...
    // Pass your dynamic buffer to the shader.
    [renderEncoder setVertexBuffer:_dynamicBuffer offset:0 atIndex:0];
    ...
}
And finally in your shader file (.metal):
// Including header shared between this Metal shader code and Swift/C code executing Metal API commands
#import "ShaderTypes.h"
vertex ProjectedVertex vertexShader(constant dynamicStruct& dynamicStruct [[buffer(0)]],
                                    const unsigned int vertexId [[vertex_id]])
{
    InVertex1 v1;
    InVertex2 v2;
    InVertex3 v3;

    if (dynamicStruct.vertexType == VertexType::InVertex1Type)
    {
        v1 = dynamicStruct.InVertex1;
    }
    else if (dynamicStruct.vertexType == VertexType::InVertex2Type)
    {
        v2 = dynamicStruct.InVertex2;
    }
    else if (dynamicStruct.vertexType == VertexType::InVertex3Type)
    {
        v3 = dynamicStruct.InVertex3;
    }
    ....
}
I am trying to draw and remove geometric shapes on a render target that displays an image.
During each frame, a new image is rendered to the render target by updating the resource via texture mapping, which works perfectly as expected.
Now I'm trying to draw a new geometric shape, filled with a solid color, on top of the render target, which is only done on every 10th frame.
However, I'm currently stuck as to how I should approach this.
I'm using DirectX 11 on a Windows 7 PC with C# (SlimDX or SharpDX for DirectX).
Any suggestion would be great.
Thanks.
Code:
During the rendering loop I added the code below to draw the overlay, which is a triangle in my case.
var device = this.Device;
var context = device.ImmediateContext;
var effectsFileResource = Properties.Resources.ShapeEffect;
ShaderBytecode shaderByteCode = ShaderBytecode.Compile(effectsFileResource, "fx_5_0", ShaderFlags.EnableStrictness | ShaderFlags.Debug, EffectFlags.None);
var effect = new Effect(device, shaderByteCode);
// create triangle vertex data, making sure to rewind the stream afterward
var verticesTriangle = new DataStream(VertexPositionColor.SizeInBytes * 3, true, true);
verticesTriangle.Write(new VertexPositionColor(new Vector3(0.0f, 0.5f, 0.5f), new Color4(1.0f, 0.0f, 0.0f, 1.0f)));
verticesTriangle.Write(new VertexPositionColor(new Vector3(0.5f, -0.5f, 0.5f), new Color4(0.0f, 1.0f, 0.0f, 1.0f)));
verticesTriangle.Write(new VertexPositionColor(new Vector3(-0.5f, -0.5f, 0.5f), new Color4(0.0f, 0.0f, 1.0f, 1.0f)));
verticesTriangle.Position = 0;
// create the triangle vertex layout and buffer
var layoutColor = new InputLayout(device, effect.GetTechniqueByName("Color").GetPassByIndex(0).Description.Signature, VertexPositionColor.inputElements);
var vertexBufferColor = new SharpDX.Direct3D11.Buffer(device, verticesTriangle, (int)verticesTriangle.Length, ResourceUsage.Default, BindFlags.VertexBuffer, CpuAccessFlags.None, ResourceOptionFlags.None, 0);
verticesTriangle.Close();
var srv = new ShaderResourceView(device, this.RenderTarget);
effect.GetVariableByName("g_Overlay").AsShaderResource().SetResource(srv);
// Think of the shared textureD3D10 as an overlay.
// The overlay needs to show the 2d content but let the underlying triangle (or whatever)
// show thru, which is accomplished by blending.
var bsd = new BlendStateDescription();
bsd.RenderTarget[0].IsBlendEnabled = true;
bsd.RenderTarget[0].SourceBlend = BlendOption.SourceColor;
bsd.RenderTarget[0].DestinationBlend = BlendOption.BlendFactor;
bsd.RenderTarget[0].BlendOperation = BlendOperation.Add;
bsd.RenderTarget[0].SourceAlphaBlend = BlendOption.One;
bsd.RenderTarget[0].DestinationAlphaBlend = BlendOption.Zero;
bsd.RenderTarget[0].AlphaBlendOperation = BlendOperation.Add;
bsd.RenderTarget[0].RenderTargetWriteMask = ColorWriteMaskFlags.All;
var blendStateTransparent = new BlendState(device, bsd);
context.InputAssembler.InputLayout = layoutColor;
context.InputAssembler.PrimitiveTopology = PrimitiveTopology.TriangleStrip;
context.InputAssembler.SetVertexBuffers(0, new VertexBufferBinding(vertexBufferColor, VertexPositionColor.SizeInBytes, 0));
context.OutputMerger.BlendState = blendStateTransparent;
var currentTechnique = effect.GetTechniqueByName("Color");
for (var pass = 0; pass < currentTechnique.Description.PassCount; ++pass)
{
    using (var effectPass = currentTechnique.GetPassByIndex(pass))
    {
        System.Diagnostics.Debug.Assert(effectPass.IsValid, "Invalid EffectPass");
        effectPass.Apply(context);
    }
    context.Draw(3, 0);
}
srv.Dispose();
Also below is the shader file for the effect:
Texture2D g_Overlay;

SamplerState g_samLinear
{
    Filter = MIN_MAG_MIP_LINEAR;
    AddressU = CLAMP;
    AddressV = CLAMP;
};

// ------------------------------------------------------
// A shader that accepts Position and Color
// ------------------------------------------------------
struct ColorVS_IN
{
    float4 pos : POSITION;
    float4 col : COLOR;
};

struct ColorPS_IN
{
    float4 pos : SV_POSITION;
    float4 col : COLOR;
};

ColorPS_IN ColorVS(ColorVS_IN input)
{
    ColorPS_IN output = (ColorPS_IN)0;
    output.pos = input.pos;
    output.col = input.col;
    return output;
}

float4 ColorPS(ColorPS_IN input) : SV_Target
{
    return input.col;
}

// ------------------------------------------------------
// A shader that accepts Position and Texture
// Used as an overlay
// ------------------------------------------------------
struct OverlayVS_IN
{
    float4 pos : POSITION;
    float2 tex : TEXCOORD0;
};

struct OverlayPS_IN
{
    float4 pos : SV_POSITION;
    float2 tex : TEXCOORD0;
};

OverlayPS_IN OverlayVS(OverlayVS_IN input)
{
    OverlayPS_IN output = (OverlayPS_IN)0;
    output.pos = input.pos;
    output.tex = input.tex;
    return output;
}

float4 OverlayPS(OverlayPS_IN input) : SV_Target
{
    float4 color = g_Overlay.Sample(g_samLinear, input.tex);
    return color;
}

// ------------------------------------------------------
// Techniques
// ------------------------------------------------------
technique11 Color
{
    pass P0
    {
        SetGeometryShader(0);
        SetVertexShader(CompileShader(vs_4_0, ColorVS()));
        SetPixelShader(CompileShader(ps_4_0, ColorPS()));
    }
}

technique11 Overlay
{
    pass P0
    {
        SetGeometryShader(0);
        SetVertexShader(CompileShader(vs_4_0, OverlayVS()));
        SetPixelShader(CompileShader(ps_4_0, OverlayPS()));
    }
}
The above code has been taken from : SharedResources using SharpDX
I've got a question about constant buffers in Metal.
Let's assume, that I've got something like:
...list of includes goes here...
using namespace metal;

struct ConstantBuffer {
    float ANY_VALUE;
};

struct VS_INPUTS {
    float4 i_pos_ms [[attribute(0)]];
};

struct V2P_STRUCT {
    float4 v_pos_out [[position]];
};

float3 CalcSomething() {
    return float3(ANY_VALUE, ANY_VALUE, ANY_VALUE); // !!!!!!!!
}

vertex V2P_STRUCT VertexFunc(VS_INPUTS vs_inputs [[stage_in]],
                             constant ConstantBuffer& cb [[buffer(1)]])
{
    V2P_STRUCT vs_outputs;
    vs_outputs.v_pos_out.xyz = CalcSomething();
    vs_outputs.v_pos_out.w = cb.ANY_VALUE; // that's OK
    return vs_outputs;
}
Is it possible to call CalcSomething() without passing ANY_VALUE as an input argument?
For example, in DX11 or OpenGL you create a constant buffer that can be accessed from anywhere in the shader code.
I've thought about copying the contents of "cb" to a temporary global object, but I have no idea how to do that (because of the constant address space).
Another idea is to somehow declare "cb" at global scope (but unfortunately [[buffer]] is designed only for arguments). Is there any trick for that?
Solution to my issue:
#include <metal_stdlib>
#include <metal_graphics>
#include <metal_texture>
#include <metal_matrix>
#include <metal_math>
#include <metal_geometric>
#include <metal_common>

using namespace metal;

constant float MyVariable = 4;

struct ConstantBuffer
{
    float ANY_VALUE;
};

struct VS_INPUTS {
    float4 i_pos_ms [[attribute(0)]];
};

struct V2P_STRUCT {
    float4 v_pos_out [[position]];
};

struct VertexShader
{
    thread VS_INPUTS& vs_inputs;
    thread texture2d<float> img;
    constant ConstantBuffer& cb;

    VertexShader(thread VS_INPUTS& inputs, constant ConstantBuffer& b, thread texture2d<float>& texture)
        : cb(b)
        , vs_inputs(inputs)
        , img(texture)
    {}

    float3 CalcSomething() {
        return float3(cb.ANY_VALUE, cb.ANY_VALUE, cb.ANY_VALUE); // !!!!!!!!
    }

    V2P_STRUCT majn()
    {
        V2P_STRUCT vs_outputs;
        vs_outputs.v_pos_out.xyz = CalcSomething();
        vs_outputs.v_pos_out.w = cb.ANY_VALUE * vs_inputs.i_pos_ms.x * MyVariable; // that's OK
        return vs_outputs;
    }
};

vertex V2P_STRUCT VertexFunc(VS_INPUTS vs_inputs [[stage_in]],
                             constant ConstantBuffer& cb [[buffer(1)]],
                             texture2d<float> img [[texture(0)]])
{
    VertexShader vs(vs_inputs, cb, img);
    return vs.majn();
}
I create one struct that contains my whole original shader. The arguments are passed as references to the constructor, so any function can read from the constant buffer without receiving tons of arguments.
To fix the problem with ANY_VALUE, which is now part of cb, I use a macro:
#define ANY_VALUE cb.ANY_VALUE
There are many questions here. I think it would be best if you provided us with a problem to solve, instead of trying to shoehorn concepts from other platforms into Metal. For now, here are some ideas.
Is it possible to call CalcSomething() without passing ANY_VALUE as input argument?
struct ConstantBuffer {
    const float ANY_VALUE;
};

constant const ConstantBuffer constantBuffer = {1};

static float3 CalcSomething() {
    return float3(constantBuffer.ANY_VALUE);
}
Are you sure CalcSomething shouldn't be a method?
struct ConstantBuffer {
    ConstantBuffer(const float value): value(value) {}
    float3 calculateSomething() const {
        return float3(value);
    }
    const float value;
};

vertex V2P_STRUCT VertexFunc(
    constant const ConstantBuffer& _constantBuffer [[buffer(1)]]
) {
    // Metal can't currently deal with methods without this.
    const auto constantBuffer = _constantBuffer;
Another idea is to somehow declare "cb" in global scope (but unfortunately [[buffer]] is designed only for arguments). Is there any trick for that?
The "trick", in my mind, is to create the buffer in Swift, not the Metal shading language.
I'm getting an unhandled exception with the message "HRESULT: [0x80070057], Module: [General], ApiCode: [E_INVALIDARG/Invalid Arguments], Message: The parameter is incorrect." at the call to DrawUserPrimitives in the code below:
namespace Game1

open Microsoft.Xna.Framework
open Microsoft.Xna.Framework.Graphics
open System.IO
open System.Reflection

type Game1() as this =
    inherit Game()

    let graphics = new GraphicsDeviceManager(this)

    [<DefaultValue>] val mutable effect : Effect
    [<DefaultValue>] val mutable vertices : VertexPositionColor[]

    do base.Content.RootDirectory <- "Content"

    override this.Initialize() =
        base.Initialize()
        let device = base.GraphicsDevice
        let s = Assembly.GetExecutingAssembly().GetManifestResourceStream("effects.mgfxo")
        let reader = new BinaryReader(s)
        this.effect <- new Effect(device, reader.ReadBytes((int)reader.BaseStream.Length))
        ()

    override this.LoadContent() =
        this.vertices <-
            [|
                VertexPositionColor(Vector3(-0.5f, -0.5f, 0.0f), Color.Red);
                VertexPositionColor(Vector3(0.0f, 0.5f, 0.0f), Color.Green);
                VertexPositionColor(Vector3(0.5f, -0.5f, 0.0f), Color.Yellow)
            |]

    override this.Draw(gameTime) =
        let device = base.GraphicsDevice
        do device.Clear(Color.CornflowerBlue)
        this.effect.CurrentTechnique <- this.effect.Techniques.["Pretransformed"]
        this.effect.CurrentTechnique.Passes |> Seq.iter
            (fun pass ->
                pass.Apply()
                device.DrawUserPrimitives<VertexPositionColor>(PrimitiveType.TriangleList, this.vertices, 0, 1))
        do base.Draw(gameTime)
My effect code is as follows (taken from the excellent Riemer's tutorials) and is as simple as can be. It's being converted as in this answer, and that seems to be working because I can see the effect name if I put a breakpoint in before the draw call.
struct VertexToPixel
{
    float4 Position : POSITION;
    float4 Color : COLOR0;
    float LightingFactor : TEXCOORD0;
    float2 TextureCoords : TEXCOORD1;
};

struct PixelToFrame
{
    float4 Color : COLOR0;
};

VertexToPixel PretransformedVS(float4 inPos : POSITION, float4 inColor : COLOR)
{
    VertexToPixel Output = (VertexToPixel)0;
    Output.Position = inPos;
    Output.Color = inColor;
    return Output;
}

PixelToFrame PretransformedPS(VertexToPixel PSIn)
{
    PixelToFrame Output = (PixelToFrame)0;
    Output.Color = PSIn.Color;
    return Output;
}

technique Pretransformed
{
    pass Pass0
    {
        VertexShader = compile vs_4_0 PretransformedVS();
        PixelShader = compile ps_4_0 PretransformedPS();
    }
}
It works fine if I replace the custom effect with a BasicEffect as per this example.
I'm using Monogame 3.2 and Visual Studio 2013.
I eventually worked out (with help from this forum thread) that I just needed to replace POSITION with SV_POSITION in the effect file. This is a consequence of MonoGame being built on DirectX 10/11 rather than XNA's DirectX 9.