I want to implement instance normalization - iOS

I am writing Metal CNN code.
Metal provides MPSCNNLocalContrastNormalization, but since the concept of instance normalization is slightly different, I intend to implement it as a kernel function.
However, the problem is that the mean and variance have to be computed separately for each feature channel (R, G, B) of the input texture inside the kernel function.
I want to get some hints on how to implement this.
kernel void instance_normalization_2darray(texture2d_array<float, access::sample> src [[ texture(0) ]],
                                           texture2d_array<float, access::write> dst [[ texture(1) ]],
                                           uint3 tid [[thread_position_in_grid]]) {
}
kernel void calculate_avgA(texture2d_array<float, access::read> texture_in [[texture(0)]],
                           texture2d_array<float, access::write> texture_out [[texture(1)]],
                           uint3 tid [[thread_position_in_grid]])
{
    int width = texture_in.get_width();
    int height = texture_in.get_height();
    int depth = texture_in.get_array_size();
    float4 outColor;
    uint3 kernelIndex(0, 0, 0);
    uint3 textureIndex(0, 0, 0);
    for (int k = 0; k < depth; k++) {
        outColor = float4(0.0, 0.0, 0.0, 0.0);
        for (int i = 0; i < width; i++)
        {
            for (int j = 0; j < height; j++)
            {
                kernelIndex = uint3(i, j, k);
                textureIndex = uint3(tid.x + i, tid.y + j, tid.z + k);
                float4 color = texture_in.read(textureIndex.xy, textureIndex.z).rgba;
                outColor += color;
            }
        }
        outColor = outColor / (width * height);
        texture_out.write(float4(outColor.rgba), tid.xy, textureIndex.z);
    }
}

Mr.Bista
I had the same problem. Apple doesn't provide a fast built-in function for this.
I just use MPSCNNPoolingAverage to calculate the mean before running my kernels.
It may only be a temporary workaround.
In my tests, other approaches, such as a reduction-sum algorithm, were not faster than this.
So I will keep looking for a better implementation.
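As a rough illustration of that workaround (a sketch, not Mr.Bista's actual code), one can encode an MPSCNNPoolingAverage whose pooling window covers the whole feature map, so the destination holds one average per channel. The names srcImage and meanImage are assumed placeholders here:
import MetalPerformanceShaders

// Sketch: compute the per-channel mean of a feature map with MPSCNNPoolingAverage.
// Assumes `srcImage` is a W x H MPSImage and `meanImage` is a 1 x 1 MPSImage with the
// same number of feature channels; the names are placeholders, not from the post.
func encodeChannelMean(device: MTLDevice,
                       commandBuffer: MTLCommandBuffer,
                       srcImage: MPSImage,
                       meanImage: MPSImage) {
    // A pooling window as large as the image collapses it to a single average per channel.
    let pool = MPSCNNPoolingAverage(device: device,
                                    kernelWidth: srcImage.width,
                                    kernelHeight: srcImage.height,
                                    strideInPixelsX: srcImage.width,
                                    strideInPixelsY: srcImage.height)
    // Center the single pooling window on the image.
    pool.offset = MPSOffset(x: srcImage.width / 2, y: srcImage.height / 2, z: 0)
    pool.edgeMode = .clamp
    pool.encode(commandBuffer: commandBuffer, sourceImage: srcImage, destinationImage: meanImage)
}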

Related

How can I calculate the mean and variance of an image with 16 channels using the Metal Shading Language

How can I calculate the mean and variance of an image with 16 channels using Metal?
I want to calculate the mean and variance of each channel separately!
For example:
kernel void meanandvariance(texture2d_array<float, access::read> in [[texture(0)]],
                            texture2d_array<float, access::write> out [[texture(1)]],
                            ushort3 gid [[thread_position_in_grid]],
                            ushort tid [[thread_index_in_threadgroup]],
                            ushort3 tg_size [[threads_per_threadgroup]]) {
}
There's probably a way to do this by creating a sequence of texture views on the input texture array and output texture array, encoding a MPSImageStatisticsMeanAndVariance kernel invocation for each slice.
But let's take a look at how to do it ourselves. There are many different possible approaches, so I chose one that was simple and used some interesting results from statistics.
Essentially, we'll do the following:
Write a kernel that can produce a subset mean and variance for a single row of the image.
Write a kernel that can produce an overall mean and variance from the partial results from step 1.
Here are the kernels:
kernel void compute_row_mean_variance_array(texture2d_array<float, access::read> inTexture [[texture(0)]],
                                            texture2d_array<float, access::write> outTexture [[texture(1)]],
                                            uint3 tpig [[thread_position_in_grid]])
{
    uint row = tpig.x;
    uint slice = tpig.y;
    uint width = inTexture.get_width();
    if (row >= inTexture.get_height() || slice >= inTexture.get_array_size()) { return; }

    float4 mean(0.0f);
    float4 var(0.0f);
    for (uint col = 0; col < width; ++col) {
        float4 rgba = inTexture.read(ushort2(col, row), slice);
        // http://datagenetics.com/blog/november22017/index.html
        float weight = 1.0f / (col + 1);
        float4 oldMean = mean;
        mean = mean + (rgba - mean) * weight;
        var = var + (rgba - oldMean) * (rgba - mean);
    }
    var = var / width;

    outTexture.write(mean, ushort2(row, 0), slice);
    outTexture.write(var, ushort2(row, 1), slice);
}

kernel void reduce_mean_variance_array(texture2d_array<float, access::read> inTexture [[texture(0)]],
                                       texture2d_array<float, access::write> outTexture [[texture(1)]],
                                       uint3 tpig [[thread_position_in_grid]])
{
    uint width = inTexture.get_width();
    uint slice = tpig.x;

    // https://arxiv.org/pdf/1007.1012.pdf
    float4 mean(0.0f);
    float4 meanOfVar(0.0f);
    float4 varOfMean(0.0f);
    for (uint col = 0; col < width; ++col) {
        float weight = 1.0f / (col + 1);
        float4 oldMean = mean;
        float4 submean = inTexture.read(ushort2(col, 0), slice);
        mean = mean + (submean - mean) * weight;
        float4 subvar = inTexture.read(ushort2(col, 1), slice);
        meanOfVar = meanOfVar + (subvar - meanOfVar) * weight;
        varOfMean = varOfMean + (submean - oldMean) * (submean - mean);
    }
    float4 var = meanOfVar + varOfMean / width;

    outTexture.write(mean, ushort2(0, 0), slice);
    outTexture.write(var, ushort2(1, 0), slice);
}
In summary, to achieve step 1, we use an "online" (incremental) algorithm to calculate the partial mean/variance of the row in a way that's more numerically-stable than just adding all the pixel values and dividing by the width. My reference for writing this kernel was this post. Each thread in the grid writes its row's statistics to the appropriate column and slice of an intermediate texture array.
To achieve step 2, we need to find a statistically-sound way of computing the overall statistics from the partial results. This is quite simple in the case of finding the mean: the mean of the population is the mean of the means of the subsets (this holds when the sample size of each subset is the same; in the general case, the overall mean is a weighted sum of the subset means). The variance is trickier, but it turns out that the variance of the population is the sum of the mean of the variances of the subsets and the variance of the means of the subsets (the same caveat about equally-sized subsets applies here). This is a convenient fact that we can combine with our incremental approach above to produce the final mean and variance of each slice, which is written to the corresponding slice of the output texture.
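To make that variance-combination fact concrete, here is a tiny Swift check (not part of the original answer) that splits a small array into two equally-sized halves and recombines their statistics:
// Sketch: for equally-sized subsets, population variance = mean of subset variances
//         + variance of subset means, and the mean is the mean of the subset means.
func stats(_ xs: [Float]) -> (mean: Float, variance: Float) {
    let mean = xs.reduce(0, +) / Float(xs.count)
    let variance = xs.map { ($0 - mean) * ($0 - mean) }.reduce(0, +) / Float(xs.count)
    return (mean, variance)
}

let data: [Float] = [1, 2, 3, 4, 5, 6]
let whole = stats(data)                                   // mean 3.5, variance ~2.9167
let a = stats(Array(data[0..<3]))                         // mean 2.0, variance ~0.6667
let b = stats(Array(data[3..<6]))                         // mean 5.0, variance ~0.6667
let combinedMean = (a.mean + b.mean) / 2                  // 3.5, same as whole.mean
let combinedVariance = (a.variance + b.variance) / 2 + stats([a.mean, b.mean]).variance
// combinedVariance == whole.variance (up to floating-point rounding)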
For completeness, here's the Swift code I used to drive these kernels:
let library = device.makeDefaultLibrary()!
let meanVarKernelFunction = library.makeFunction(name: "compute_row_mean_variance_array")!
let meanVarComputePipelineState = try! device.makeComputePipelineState(function: meanVarKernelFunction)
let reduceKernelFunction = library.makeFunction(name: "reduce_mean_variance_array")!
let reduceComputePipelineState = try! device.makeComputePipelineState(function: reduceKernelFunction)

let width = sourceTexture.width
let height = sourceTexture.height
let arrayLength = sourceTexture.arrayLength

// Intermediate texture: one column per source row, two rows (mean, variance), one slice per channel group.
let textureDescriptor = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .rgba32Float, width: width, height: height, mipmapped: false)
textureDescriptor.textureType = .type2DArray
textureDescriptor.arrayLength = arrayLength
textureDescriptor.width = height
textureDescriptor.height = 2
textureDescriptor.usage = [.shaderRead, .shaderWrite]
let partialResultsTexture = device.makeTexture(descriptor: textureDescriptor)!

// Final texture: mean at (0, 0) and variance at (1, 0) of each slice.
textureDescriptor.width = 2
textureDescriptor.height = 1
textureDescriptor.usage = .shaderWrite
let destTexture = device.makeTexture(descriptor: textureDescriptor)!

let commandBuffer = commandQueue.makeCommandBuffer()!
let computeCommandEncoder = commandBuffer.makeComputeCommandEncoder()!

// Pass 1: per-row partial statistics (one thread per row per slice).
computeCommandEncoder.setComputePipelineState(meanVarComputePipelineState)
computeCommandEncoder.setTexture(sourceTexture, index: 0)
computeCommandEncoder.setTexture(partialResultsTexture, index: 1)
let meanVarGridSize = MTLSize(width: sourceTexture.height, height: sourceTexture.arrayLength, depth: 1)
let meanVarThreadgroupSize = MTLSizeMake(meanVarComputePipelineState.threadExecutionWidth, 1, 1)
let meanVarThreadgroupCount = MTLSizeMake((meanVarGridSize.width + meanVarThreadgroupSize.width - 1) / meanVarThreadgroupSize.width,
                                          (meanVarGridSize.height + meanVarThreadgroupSize.height - 1) / meanVarThreadgroupSize.height,
                                          1)
computeCommandEncoder.dispatchThreadgroups(meanVarThreadgroupCount, threadsPerThreadgroup: meanVarThreadgroupSize)

// Pass 2: reduce the partial results to one mean/variance pair per slice.
computeCommandEncoder.setComputePipelineState(reduceComputePipelineState)
computeCommandEncoder.setTexture(partialResultsTexture, index: 0)
computeCommandEncoder.setTexture(destTexture, index: 1)
let reduceThreadgroupSize = MTLSizeMake(1, 1, 1)
let reduceThreadgroupCount = MTLSizeMake(arrayLength, 1, 1)
computeCommandEncoder.dispatchThreadgroups(reduceThreadgroupCount, threadsPerThreadgroup: reduceThreadgroupSize)
computeCommandEncoder.endEncoding()

// Reference result from MPSImageStatisticsMeanAndVariance, used for the comparison below.
// (meanVarKernel and sourceTexture2D are created elsewhere and not shown in this snippet.)
let destTexture2DDesc = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .rgba32Float, width: 2, height: 1, mipmapped: false)
destTexture2DDesc.usage = .shaderWrite
let destTexture2D = device.makeTexture(descriptor: destTexture2DDesc)!
meanVarKernel.encode(commandBuffer: commandBuffer, sourceTexture: sourceTexture2D, destinationTexture: destTexture2D)

#if os(macOS)
let blitCommandEncoder = commandBuffer.makeBlitCommandEncoder()!
blitCommandEncoder.synchronize(resource: destTexture)
blitCommandEncoder.synchronize(resource: destTexture2D)
blitCommandEncoder.endEncoding()
#endif

commandBuffer.commit()
commandBuffer.waitUntilCompleted()
In my experiments, this program produced the same results as MPSImageStatisticsMeanAndVariance, give or take some differences on the order of 1e-7. It was also 2.5x slower than MPS on my Mac, probably due in part to failure to exploit latency hiding with granular parallelism.
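To inspect the final values on the CPU (for example, to perform that comparison), the 2x1 result slices can be read back with getBytes. This is a small sketch, not part of the original answer, reusing destTexture and arrayLength from the code above:
// Sketch: read back the per-slice mean (x == 0) and variance (x == 1) from destTexture.
let region = MTLRegionMake2D(0, 0, 2, 1)
var rowData = [SIMD4<Float>](repeating: .zero, count: 2)
for slice in 0..<arrayLength {
    rowData.withUnsafeMutableBytes { ptr in
        destTexture.getBytes(ptr.baseAddress!,
                             bytesPerRow: 2 * MemoryLayout<SIMD4<Float>>.stride,
                             bytesPerImage: 0,   // a single 2 x 1 image per call
                             from: region,
                             mipmapLevel: 0,
                             slice: slice)
    }
    print("slice \(slice): mean \(rowData[0]), variance \(rowData[1])")
}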
#include <metal_stdlib>
using namespace metal;

kernel void instance_norm(constant float4* scale [[buffer(0)]],
                          constant float4* shift [[buffer(1)]],
                          texture2d_array<float, access::read> in [[texture(0)]],
                          texture2d_array<float, access::write> out [[texture(1)]],
                          ushort3 gid [[thread_position_in_grid]],
                          ushort tid [[thread_index_in_threadgroup]],
                          ushort3 tg_size [[threads_per_threadgroup]]) {
    ushort width = in.get_width();
    ushort height = in.get_height();
    const ushort thread_count = tg_size.x * tg_size.y;
    threadgroup float4 shared_mem[256];

    float4 sum = 0;
    for (ushort xIndex = gid.x; xIndex < width; xIndex += tg_size.x) {
        for (ushort yIndex = gid.y; yIndex < height; yIndex += tg_size.y) {
            sum += in.read(ushort2(xIndex, yIndex), gid.z);
        }
    }
    shared_mem[tid] = sum;
    threadgroup_barrier(mem_flags::mem_threadgroup);

    // Reduce to 32 values
    sum = 0;
    if (tid < 32) {
        for (ushort i = tid + 32; i < thread_count; i += 32) {
            sum += shared_mem[i];
        }
    }
    shared_mem[tid] += sum;
    threadgroup_barrier(mem_flags::mem_threadgroup);

    // Calculate mean
    sum = 0;
    if (tid == 0) {
        ushort top = min(ushort(32), thread_count);
        for (ushort i = 0; i < top; i += 1) {
            sum += shared_mem[i];
        }
        shared_mem[0] = sum / (width * height);
    }
    threadgroup_barrier(mem_flags::mem_threadgroup);
    const float4 mean = shared_mem[0];
    threadgroup_barrier(mem_flags::mem_threadgroup);

    // Variance
    sum = 0;
    for (ushort xIndex = gid.x; xIndex < width; xIndex += tg_size.x) {
        for (ushort yIndex = gid.y; yIndex < height; yIndex += tg_size.y) {
            sum += pow(in.read(ushort2(xIndex, yIndex), gid.z) - mean, 2);
        }
    }
    shared_mem[tid] = sum;
    threadgroup_barrier(mem_flags::mem_threadgroup);

    // Reduce to 32 values
    sum = 0;
    if (tid < 32) {
        for (ushort i = tid + 32; i < thread_count; i += 32) {
            sum += shared_mem[i];
        }
    }
    shared_mem[tid] += sum;
    threadgroup_barrier(mem_flags::mem_threadgroup);

    // Calculate variance
    sum = 0;
    if (tid == 0) {
        ushort top = min(ushort(32), thread_count);
        for (ushort i = 0; i < top; i += 1) {
            sum += shared_mem[i];
        }
        shared_mem[0] = sum / (width * height);
    }
    threadgroup_barrier(mem_flags::mem_threadgroup);

    const float4 sigma = sqrt(shared_mem[0] + float4(1e-4));
    float4 multiplier = scale[gid.z] / sigma;
    for (ushort xIndex = gid.x; xIndex < width; xIndex += tg_size.x) {
        for (ushort yIndex = gid.y; yIndex < height; yIndex += tg_size.y) {
            float4 val = in.read(ushort2(xIndex, yIndex), gid.z);
            out.write(clamp((val - mean) * multiplier + shift[gid.z], -10.0, 10.0), ushort2(xIndex, yIndex), gid.z);
        }
    }
}
This is how Bender implements it, but I don't think it is correct. Can anybody verify it?
https://github.com/xmartlabs/Bender/blob/master/Sources/Metal/instanceNorm.metal
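For reference, a kernel like this is typically dispatched with one threadgroup per array slice, sized so the thread count matches the 256-entry threadgroup array. A rough Swift sketch with assumed names (pipeline, buffers, and textures are placeholders, not Bender's code):
// Sketch: dispatch instance_norm with one 16 x 16 threadgroup (256 threads) per slice.
let encoder = commandBuffer.makeComputeCommandEncoder()!
encoder.setComputePipelineState(pipeline)            // pipeline built from the "instance_norm" function
encoder.setBuffer(scaleBuffer, offset: 0, index: 0)  // one float4 per slice
encoder.setBuffer(shiftBuffer, offset: 0, index: 1)
encoder.setTexture(inTexture, index: 0)
encoder.setTexture(outTexture, index: 1)
let threadsPerGroup = MTLSize(width: 16, height: 16, depth: 1)   // 256 == shared_mem capacity
let threadgroups = MTLSize(width: 1, height: 1, depth: inTexture.arrayLength)
encoder.dispatchThreadgroups(threadgroups, threadsPerThreadgroup: threadsPerGroup)
encoder.endEncoding()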

Is there an iOS Metal value for bt601?

I have sample Metal code that I'm trying to convert to iOS. Is there an iOS-compatible value that I can use for bt601?
#include <metal_stdlib>
#include "utilities.h" // error not found
using namespace metal;

kernel void laplace(texture2d<half, access::read> inTexture [[ texture(0) ]],
                    texture2d<half, access::read_write> outTexture [[ texture(1) ]],
                    uint2 gid [[ thread_position_in_grid ]]) {
    constexpr int kernel_size = 3;
    constexpr int radius = kernel_size / 2;
    half3x3 laplace_kernel = half3x3(0, 1, 0,
                                     1, -4, 1,
                                     0, 1, 0);
    half4 acc_color(0, 0, 0, 0);
    for (int j = 0; j <= kernel_size - 1; j++) {
        for (int i = 0; i <= kernel_size - 1; i++) {
            uint2 textureIndex(gid.x + (i - radius), gid.y + (j - radius));
            acc_color += laplace_kernel[i][j] * inTexture.read(textureIndex).rgba;
        }
    }
    half value = dot(acc_color.rgb, bt601); // bt601 not defined
    half4 gray_color(value, value, value, 1.0);
    outTexture.write(gray_color, gid);
}
It seems that the intention here is simply to derive a single "luminance" value from the RGB output of the kernel. In that case, bt601 would be a three-element vector whose components are the desired weights of the respective channels, summing to 1.0.
Borrowing values from Rec. 601, we might define it like this:
float3 bt601(0.299f, 0.587f, 0.114f);
This is certainly a common choice. Another popular choice uses coefficients found in the Rec. 709 standard. That would look like this:
float3 bt709(0.212671f, 0.715160f, 0.072169f);
Both of these vectors will give you a single gray value that approximates the brightness of a linear sRGB color. Whether either of them is "correct" depends on the provenance of your data and how you process it further down the pipeline.
For whatever it's worth, the MetalPerformanceShaders MPSImageThresholdBinary kernel seems to favor the BT.601 values.
I'd recommend taking a look at this answer for more detail on the issues, and conditions under which the use of these values is appropriate.

Edge detection overlay with lines

I need to overlay the edges detected in live video preview with a color of my choice (as is done in Lightroom CC app when you adjust focus). What's the easiest way to draw those lines in real time using Metal or CoreImage? I can use Sobel edge detection to detect the edges using Metal Performance Shader but not sure how to overlay the edges with a color of my choice.
Here is an edge detection shader for Metal:
kernel void edge_detect(texture2d<half, access::read> inTexture [[ texture(0) ]],
                        texture2d<half, access::write> outTexture [[ texture(1) ]],
                        uint2 gid [[ thread_position_in_grid ]]) {
    constexpr int kernel_size = 3;
    constexpr int radius = kernel_size / 2;
    half3x3 horizontal_kernel = half3x3(-1./8., -1./8., -1./8.,
                                        -1./8., 1., -1./8.,
                                        -1./8., -1./8., -1./8.);
    half3x3 vertical_kernel = half3x3(-1./8., -1./8., -1./8.,
                                      -1./8., 1., -1./8.,
                                      -1./8., -1./8., -1./8.);
    half3 result_horizontal(0, 0, 0);
    half3 result_vertical(0, 0, 0);
    for (int j = 0; j <= kernel_size - 1; j++) {
        for (int i = 0; i <= kernel_size - 1; i++) {
            uint2 texture_index(gid.x + (i - radius), gid.y + (j - radius));
            result_horizontal += horizontal_kernel[i][j] * inTexture.read(texture_index).rgb;
            result_vertical += vertical_kernel[i][j] * inTexture.read(texture_index).rgb;
        }
    }
    half3 bt601 = half3(0.299, 0.587, 0.114);
    half gray_horizontal = dot(result_horizontal.rgb, bt601);
    half gray_vertical = dot(result_vertical.rgb, bt601);
    half magnitude = length(half2(gray_horizontal, gray_vertical));
    outTexture.write(half4(half3(magnitude), 1), gid);
}
I know this is late, but in case anyone still needs it, I figured it out. It's very easy to find an edge detection shader, but not easy to figure out how to change the color of the detected edges, especially if you are new to this. Here is my kernel:
typedef struct {
    simd_float3 rgb;
} AppliedColor;

kernel void edgeEffect(texture2d<half, access::read> inputTexture [[ texture(0) ]],
                       texture2d<half, access::read_write> outputTexture [[ texture(1) ]],
                       constant float &edgeStrength [[ buffer(0) ]],
                       constant AppliedColor &newColor [[ buffer(1) ]],
                       uint2 gid [[thread_position_in_grid]]) {
    constexpr int kernelSize = 3;
    constexpr int radius = kernelSize / 2;
    half3x3 horizontalKernel = half3x3(-1, -2, -1,
                                       0, 0, 0,
                                       1, 2, 1);
    half3x3 verticalKernel = half3x3(1, 0, -1,
                                     2, 0, -2,
                                     1, 0, -1);
    half3 horizontalResult(0, 0, 0);
    half3 verticalResult(0, 0, 0);
    for (int j = 0; j <= kernelSize - 1; j++) {
        for (int i = 0; i <= kernelSize - 1; i++) {
            uint2 textureIndex(gid.x + (i - radius), gid.y + (j - radius));
            horizontalResult += horizontalKernel[i][j] * inputTexture.read(textureIndex).rgb;
            verticalResult += verticalKernel[i][j] * inputTexture.read(textureIndex).rgb;
        }
    }
    half horizontalWhite = dot(horizontalResult.rgb, half3(1.0));
    half verticalWhite = dot(verticalResult.rgb, half3(1.0));
    half magnitude = length(half2(horizontalWhite, verticalWhite)) * edgeStrength;
    outputTexture.write(half4(half3(newColor.rgb * magnitude), 1), gid);
} // edgeEffect
This is using Sobel kernels to calculate the derivatives.
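On the host side, the two constant parameters can be supplied with setBytes before dispatching. A small sketch assuming the compute encoder, pipeline, and textures are already set up; the values here are placeholders:
import simd

// Sketch: bind the edgeStrength and overlay color that edgeEffect expects.
var edgeStrength: Float = 1.0
var newColor = SIMD3<Float>(1.0, 0.0, 0.0)   // red edges; matches the shader's AppliedColor { simd_float3 rgb; }
encoder.setBytes(&edgeStrength, length: MemoryLayout<Float>.stride, index: 0)
encoder.setBytes(&newColor, length: MemoryLayout<SIMD3<Float>>.stride, index: 1)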

write method of texture2d<int, access::write> does not work in Metal shader function

As mentioned in Apple's documentation, a texture2d in the shading language can have an integer component type. I have tried to use a texture2d of int type as a shader parameter, but its write method fails to work.
kernel void dummy(texture2d<int, access::write> outTexture [[ texture(0) ]],
                  uint2 gid [[ thread_position_in_grid ]])
{
    outTexture.write(int4(2, 4, 6, 8), gid);
}
However, if I replace the int with float, it worked.
kernel void dummy(texture2d<float, access::write> outTexture [[ texture(0) ]],
                  uint2 gid [[ thread_position_in_grid ]])
{
    outTexture.write(float4(1.0, 0, 0, 1.0), gid);
}
Could other types of texture2d, such as texture2d of int, texture2d of short, and so on, be used as shader function parameters, and if so, how? Thanks for reviewing my question.
The related host code:
MTLTextureDescriptor *desc = [MTLTextureDescriptor texture2DDescriptorWithPixelFormat:MTLPixelFormatRGBA8Unorm
                                                                                width:w
                                                                               height:h
                                                                            mipmapped:NO];
desc.usage = MTLTextureUsageShaderWrite;
id<MTLTexture> texture = [device newTextureWithDescriptor:desc];
[commandEncoder setTexture:texture atIndex:0];
Here is the code that shows the output computed by the GPU; w and h represent the width and height of the texture, respectively.
uint8_t* imageBytes = malloc(w*h*4);
memset(imageBytes, 0, w*h*4);
MTLRegion region = MTLRegionMake2D(0, 0, [texture width], [texture height]);
[texture getBytes:imageBytes bytesPerRow:[texture width]*4 fromRegion:region mipmapLevel:0];
for (int j = 0; j < h; j++)
{
    printf("%3d: ", j);
    for (int i = 0; i < w*pixel_size; i++)
    {
        printf(" %3d", imageBytes[j*w*pixel_size+i]);
    }
    printf("\n");
}
The problem is that the pixel format you used to create this texture (MTLPixelFormatRGBA8Unorm) is normalized, meaning that the expected pixel value range is 0.0-1.0. For normalized pixel types, the required data type for reading or writing to this texture within a Metal kernel is float or half-float.
In order to write to a texture with integers, you must select an integer pixel format. Here are all of the available formats:
https://developer.apple.com/documentation/metal/mtlpixelformat
The Metal Shading Language Guide states that:
Note: If T is int or short, the data associated with the texture must use a signed integer format. If T is uint or ushort, the data associated with the texture must use an unsigned integer format.
All you have to do is make sure the texture you write to in the API (host code) matches what you have in the kernel function. Alternatively, you can also cast the int values into float before writing to the outTexture.
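For example, a signed-integer pixel format on the host side pairs with texture2d<int, access::write> in the kernel. A minimal sketch, in Swift for brevity (the Objective-C equivalent is analogous); w, h, and device are assumed to exist as in the question:
// Sketch: create a texture whose pixel format matches texture2d<int, access::write>.
let desc = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .rgba32Sint,   // signed 32-bit integer per channel
                                                    width: w,
                                                    height: h,
                                                    mipmapped: false)
desc.usage = .shaderWrite
let intTexture = device.makeTexture(descriptor: desc)!
// With this format, outTexture.write(int4(2, 4, 6, 8), gid) stores the integers unmodified,
// and each pixel read back on the CPU is four 32-bit signed integers.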

DX11 HLSL Secondary Texture Coordinates Lost

Been banging my head against the wall with this for a while. Despite the fact that I THINK I have a proper vertex format defined with D3D11_INPUT_ELEMENT_DESC, no matter what I do, I can't seem to read my TEXCOORD1 values in this shader. To test it, I put random values into my second set of UV coordinates just to see if they were reaching the shader, but to my dismay, I haven't been able to find those random values anywhere. I have also watched the data go into the mapped memory directly, and I am pretty sure the random values were there when they were mapped.
Here is the Shader code:
sampler ImageSampler: register(s0);
Texture2D <float4> ImageTexture: register(t0);
Texture2D <float4> ReflectionTexture: register(t1);
//Texture2D <float4> ReflectionMap: register(t0);

struct PS_IN
{
    float4 InPos: SV_POSITION;
    float2 InTex: TEXCOORD;
    float2 InRef: TEXCOORD1;
    float4 InCol: COLOR0;
};

float4 main(PS_IN input): SV_TARGET
{
    float4 res;
    float4 mul;
    float2 tcRef;
    float4 res1 = ImageTexture.Sample(ImageSampler, input.InTex) * input.InCol;
    float4 res2 = ReflectionTexture.Sample(ImageSampler, input.InRef + input.InTex);
    mul.r = 0.5;
    mul.g = 0.5;
    mul.b = 0.5;
    mul.a = 0.5;
    res = res1 + res2;
    res = res * mul;
    res.a = res1.a;
    res.r = input.InRef.x; //<-----should be filled with random stuff... not working
    res.b = input.InRef.y; //<-----should be filled with random stuff... not working
    return res;
}
Here is my D3D11_INPUT_ELEMENT_DESC... (sorry, it is in Pascal, but I like Pascal):
const
  CanvasVertexLayout: array[0..3] of D3D11_INPUT_ELEMENT_DESC =
    ((SemanticName: 'POSITION';
      SemanticIndex: 0;
      Format: DXGI_FORMAT_R32G32_FLOAT;
      InputSlot: 0;
      AlignedByteOffset: 0;
      InputSlotClass: D3D11_INPUT_PER_VERTEX_DATA;
      InstanceDataStepRate: 0),
     (SemanticName: 'TEXCOORD';
      SemanticIndex: 0;
      Format: DXGI_FORMAT_R32G32_FLOAT;
      InputSlot: 0;
      AlignedByteOffset: 8;
      InputSlotClass: D3D11_INPUT_PER_VERTEX_DATA;
      InstanceDataStepRate: 0),
     (SemanticName: 'TEXCOORD';
      SemanticIndex: 1;
      Format: DXGI_FORMAT_R32G32_FLOAT;
      InputSlot: 0;
      AlignedByteOffset: 16;
      InputSlotClass: D3D11_INPUT_PER_VERTEX_DATA;
      InstanceDataStepRate: 0),
     (SemanticName: 'COLOR';
      SemanticIndex: 0;
      Format: DXGI_FORMAT_R8G8B8A8_UNORM;
      InputSlot: 0;
      AlignedByteOffset: 24;
      InputSlotClass: D3D11_INPUT_PER_VERTEX_DATA;
      InstanceDataStepRate: 0)
    );
And here's the vertex struct:
TVertexEntry = packed record
  X, Y: Single;
  U, V: Single;
  u2, v2: Single;
  Color: LongWord;
end;
Since the COLOR semantic follows the TEXCOORD semantics, my best guess is that the problem is with the shader and not the Pascal code... but since I'm new to this kind of stuff, I'm obviously lost.
Any insight is appreciated.
Answering my own question. Since I'm new to Shaders in general, maybe this will help some other newbs.
I was assuming that all I needed to do was add a second set of UV coordinates to the vertex format and add a D3D11_INPUT_ELEMENT_DESC entry for it. However, there is also a vertex shader involved, more or less a passthrough, and that vertex shader needs to be aware of the new UV coordinates and pass them through. I was just making a 2D engine, so I didn't think I'd even have to mess with vertex shaders... go figure. So I modified the vertex shader, and this was the result:
void main(
    float2 InPos: POSITION0,
    float2 InTex: TEXCOORD0,
    float2 InTex2: TEXCOORD1, //<--added
    float4 InCol: COLOR0,
    out float4 OutPos: SV_POSITION,
    out float2 OutTex: TEXCOORD2,
    out float2 OutTex2: TEXCOORD3, //<--added
    out float4 OutCol: COLOR0)
{
    OutPos = float4(InPos, 0.0, 1.0);
    OutTex = InTex;
    OutCol = InCol;
    OutTex2 = InTex2; //<--added
}
