How to get the texture size in HLSL? - xna

For an HLSL shader I'm working on (for practice), I'm trying to execute a part of the code only if the texture coordinates (on a model) are above half the respective size (that is, x > width / 2 or y > height / 2). I'm familiar with C/C++ and know the basics of HLSL (the very basics). If no other solution is possible, I will set the texture size manually from XNA (in which I'm using the shader, as a matter of fact). Is there a better solution? I'm trying to remain within Shader Model 2.0 if possible.

The default texture coordinate space is normalized to 0..1 so x > width / 2 should simply be texcoord.x > 0.5.

Be careful here. tex2D() and other texture-sampling calls should NOT be inside if()/else clauses: Shader Model 2.0 has no real dynamic branching, so the compiler flattens the conditional, and texture fetches inside divergent flow control can fail to compile or give wrong results. So if you have a pixel shader input "IN.UV" and you're aiming at "OUT.color", you need to do it this way:
float4 aboveCol = tex2D(mySampler, some_texcoords);
float4 belowCol = tex2D(mySampler, some_other_texcoords);
if (IN.UV.x >= 0.5) {
    OUT.color = /* some function of... */ aboveCol;
} else {
    OUT.color = /* some function of... */ belowCol;
}
rather than putting the tex2D() calls inside the if() blocks.

Related

iOS Metal: Jaggies Anti-aliasing

I was trying to draw a half circle with renderEncoder's drawIndexedPrimitives
[renderEncoder setVertexBuffer:self.vertexBuffer offset:0 atIndex:0];
[renderEncoder drawIndexedPrimitives:MTLPrimitiveTypeTriangleStrip
indexCount:self.indexCount
indexType:MTLIndexTypeUInt16
indexBuffer:self.indicesBuffer
indexBufferOffset:0];
where the vertexBuffer and indicesBuffer for the circle were created by calculation
int segments = 10;
float vertices02[(segments + 1) * (3 + 4)];
vertices02[0] = centerX;
vertices02[1] = centerY;
vertices02[2] = 0;
// 3, 4, 5, 6 are RGBA
vertices02[3] = 1.0;
vertices02[4] = 0;
vertices02[5] = 0.0;
vertices02[6] = 1.0;
uint16_t indices[(segments - 1) * 3];
for (int i = 1; i <= segments; i++) {
    float degree = (i - 1) * (endDegree - startDegree) / (segments - 1) + startDegree;
    vertices02[i*7]     = (centerX + cos([self degreesToRadians:degree]) * radius);
    vertices02[i*7 + 1] = (centerY + sin([self degreesToRadians:degree]) * radius);
    vertices02[i*7 + 2] = 0;
    vertices02[i*7 + 3] = 1.0;
    vertices02[i*7 + 4] = 0;
    vertices02[i*7 + 5] = 0.0;
    vertices02[i*7 + 6] = 1.0;
    if (i < segments) {
        indices[(i-1)*3 + 0] = 0;
        indices[(i-1)*3 + 1] = i;
        indices[(i-1)*3 + 2] = i + 1;
    }
}
So I am combining nine triangles to form a 180-degree half circle.
Then create vertexBuffer and indicesBuffer
self.vertexBuffer = [device newBufferWithBytes:vertexArrayPtr
length:vertexDataSize
options:MTLResourceOptionCPUCacheModeDefault];
self.indicesBuffer = [device newBufferWithBytes:indexArrayPtr
length:indicesDataSize
options:MTLResourceOptionCPUCacheModeDefault];
The result is like this:
I believe this is an anti-aliasing problem with Metal on iOS. I used to create a half circle in OpenGL using the same technique, but the edges were much smoother.
Any suggestions to tackle the problem?
As suggested by warrenm, I set the CAMetalLayer's drawableSize equal to the screen size multiplied by the screen scale. There were improvements:
Another suggestion by warrenm: using MTKView and setting sampleCount = 4 solved the problem:
There are a couple of things to consider here. First, you need to ensure that (when possible) the size of the grid you're rasterizing to matches the resolution of the display it will be viewed on. Second, you might need to use subpixel techniques to eke out additional smoothness, since raster techniques tend to undersample continuous functions.
In Metal, the way we match the rendered image size to the display is by ensuring that the drawable size of the Metal layer matches the pixel dimensions it will occupy on the screen. When using CAMetalLayer directly, the default behavior is for the drawable size of the layer to be the size of the layer's bounds multiplied by the layer's contentsScale property. Setting the latter to the scale of the UIScreen onto which the layer is composited will match the layer's dimensions to the screen's pixels (ignoring other transformations that might be applied to the layer or its view hierarchy).
When using MTKView, the autoResizeDrawable property determines whether the view automatically manages its layer's drawable size. This is the default behavior, but if you set this property to NO, you can manually set the drawable size to something else (e.g., use adaptive resolution rendering when fragment-bound).
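As a minimal Swift sketch of both cases (the layer and view names here are placeholders, not taken from the question):
import UIKit
import MetalKit

func matchDrawableToScreen(of metalLayer: CAMetalLayer, in view: UIView) {
    // CAMetalLayer path: scale the layer's bounds by the screen scale so the
    // drawable gets one texel per physical pixel.
    let scale = view.window?.screen.scale ?? UIScreen.main.scale
    metalLayer.contentsScale = scale
    metalLayer.drawableSize = CGSize(width: view.bounds.width * scale,
                                     height: view.bounds.height * scale)
}

func configureDrawableSizing(of mtkView: MTKView) {
    // MTKView path: leave autoResizeDrawable at its default (true) and the view
    // keeps drawableSize matched to its bounds at native scale; set it to false
    // only if you want to drive the resolution yourself (e.g. adaptive
    // resolution when fragment-bound).
    mtkView.autoResizeDrawable = true
}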
In order to sample more finely, we have our choice among any number of antialiasing techniques, but perhaps the easiest of these is multisampled antialiasing (MSAA), a hardware feature that—as the name suggests—takes multiple samples for each pixel along the edges of primitives, in order to reduce the jagged effects of aliasing.
In Metal, using MSAA requires setting multisampling state (i.e., the sample count) on both the render pipeline state and the textures used for rendering. MSAA is a two-step process, where a render target that can hold the data for multiple fragments per pixel is rendered to, then a resolve step combines these samples into the final color for each pixel. When using CAMetalLayer (or drawing off-screen), you must create a texture of type MTLTextureType2DMultisample for each active color/depth attachment. These textures are configured as the texture property of their respective color/depth attachments, and the resolveTexture property is set to a texture of type MTLTextureType2D, into which the MSAA targets are resolved.
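A hedged Swift sketch of that manual setup follows; the device, drawable, and 4x sample count are assumed inputs, not taken from the original post:
import Metal
import QuartzCore

func makeMSAARenderPassDescriptor(device: MTLDevice,
                                  drawable: CAMetalDrawable,
                                  sampleCount: Int = 4) -> MTLRenderPassDescriptor? {
    // Multisampled color target with the same size and format as the drawable.
    let desc = MTLTextureDescriptor()
    desc.textureType = .type2DMultisample
    desc.pixelFormat = drawable.texture.pixelFormat
    desc.width = drawable.texture.width
    desc.height = drawable.texture.height
    desc.sampleCount = sampleCount
    desc.usage = .renderTarget
    desc.storageMode = .private
    guard let msaaTexture = device.makeTexture(descriptor: desc) else { return nil }

    // Render into the MSAA texture, then resolve into the drawable's texture.
    let pass = MTLRenderPassDescriptor()
    pass.colorAttachments[0].texture = msaaTexture
    pass.colorAttachments[0].resolveTexture = drawable.texture
    pass.colorAttachments[0].loadAction = .clear
    pass.colorAttachments[0].storeAction = .multisampleResolve
    return pass
}
The render pipeline must be built with the same sample count (e.g. pipelineDescriptor.sampleCount = 4), or the draw will fail validation.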
When using MTKView, simply setting the sampleCount on the view to match the sampleCount of the render pipeline descriptor is sufficient to get MetalKit to create and manage the appropriate resources. By default, the render pass descriptors you receive from a view will have an internally-managed MSAA color target set as the primary color attachment, and the current drawable's texture set as the resolve texture of that attachment. In this way, enabling MSAA with MetalKit only requires a couple of lines of code.
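With MetalKit that boils down to something like this (a sketch; the view and pipeline descriptor are whatever you already have):
import MetalKit

func enableMSAA(on view: MTKView, pipelineDescriptor: MTLRenderPipelineDescriptor) {
    // One sample count, used consistently by the view and the pipeline.
    view.sampleCount = 4
    pipelineDescriptor.sampleCount = view.sampleCount
    // view.currentRenderPassDescriptor will then carry an internally managed
    // MSAA color attachment whose resolveTexture is the current drawable.
}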

Rendering MTLTexture on MTKView is not keeping aspect ratio

I have a texture that's 1080x1920 pixels, and I'm trying to render it on an MTKView that doesn't have the same aspect ratio (i.e., iPad/iPhone X full screen).
This is how I'm rendering the texture for the MTKView:
private func render(_ texture: MTLTexture, withCommandBuffer commandBuffer: MTLCommandBuffer, device: MTLDevice) {
    guard let currentRenderPassDescriptor = metalView?.currentRenderPassDescriptor,
        let currentDrawable = metalView?.currentDrawable,
        let renderPipelineState = renderPipelineState,
        let encoder = commandBuffer.makeRenderCommandEncoder(descriptor: currentRenderPassDescriptor) else {
            semaphore.signal()
            return
    }

    encoder.pushDebugGroup("RenderFrame")
    encoder.setRenderPipelineState(renderPipelineState)
    encoder.setFragmentTexture(texture, index: 0)
    encoder.drawPrimitives(type: .triangleStrip, vertexStart: 0, vertexCount: 4, instanceCount: 1)
    encoder.popDebugGroup()
    encoder.endEncoding()

    // Called after the command buffer is scheduled
    commandBuffer.addScheduledHandler { [weak self] _ in
        guard let strongSelf = self else {
            return
        }
        strongSelf.didRender(texture: texture)
        strongSelf.semaphore.signal()
    }

    commandBuffer.present(currentDrawable)
    commandBuffer.commit()
}
I want the texture to be rendered like .scaleAspectFill on a UIView. I'm trying to learn Metal, so I'm not sure where I should be looking for this (the .metal file, the pipeline, the view itself, the encoder, etc.).
Thanks!
Edit: Here is the shader code:
#include <metal_stdlib>
using namespace metal;

typedef struct {
    float4 renderedCoordinate [[position]];
    float2 textureCoordinate;
} TextureMappingVertex;

vertex TextureMappingVertex mapTexture(unsigned int vertex_id [[ vertex_id ]]) {
    float4x4 renderedCoordinates = float4x4(float4(-1.0, -1.0, 0.0, 1.0),
                                            float4( 1.0, -1.0, 0.0, 1.0),
                                            float4(-1.0,  1.0, 0.0, 1.0),
                                            float4( 1.0,  1.0, 0.0, 1.0));
    float4x2 textureCoordinates = float4x2(float2(0.0, 1.0),
                                           float2(1.0, 1.0),
                                           float2(0.0, 0.0),
                                           float2(1.0, 0.0));
    TextureMappingVertex outVertex;
    outVertex.renderedCoordinate = renderedCoordinates[vertex_id];
    outVertex.textureCoordinate = textureCoordinates[vertex_id];
    return outVertex;
}

fragment half4 displayTexture(TextureMappingVertex mappingVertex [[ stage_in ]],
                              texture2d<float, access::sample> texture [[ texture(0) ]]) {
    constexpr sampler s(address::clamp_to_edge, filter::linear);
    return half4(texture.sample(s, mappingVertex.textureCoordinate));
}
A few general things to start with when dealing with Metal textures or Metal in general:
You should take into account the difference between points and pixels; refer to the documentation here. The frame property of a UIView subclass (and MTKView is one) always gives you the width and the height of the view in points.
The mapping from points to actual pixels is controlled through the contentScaleFactor property. MTKView automatically selects a drawable size that matches the actual pixels of your device. For example, the underlying texture of an MTKView on the iPhone X has a resolution of 1125 x 2436 (the actual display size in pixels). This is documented here: "The MTKView class automatically supports native screen scale. By default, the size of the view’s current drawable is always guaranteed to match the size of the view itself."
As documented here, the .scaleAspectFill option "scale[s] the content to fill the size of the view. Some portion of the content may be clipped to fill the view’s bounds". You want to simulate this behavior.
Rendering with Metal is nothing more than "drawing" to the resolve texture, which is automatically set up by the MTKView. However, you still have full control and could do it on your own by manually creating textures and setting them in your renderPassDescriptor, but you don't need to care about this right now. The single thing you should care about is what part of the 1080x1920 texture you want to render, and where, in your resolve texture (which might have a different aspect ratio). We want to fully fill ("scaleAspectFill") the resolve texture, so we leave the renderedCoordinates in your vertex shader as they are: they define a rectangle over the whole resolve texture, which means the fragment shader is called for every single pixel of the resolve texture. We will simply change the texture coordinates.
Let's define the aspect ratio as ratio = width / height, the resolve texture as r_tex and the texture you want to render as tex.
So assuming your resolve texture does not have the same aspect ratio, there are two possible scenarios:
The aspect ratio of the texture you want to render is larger than the aspect ratio of your resolve texture (the texture Metal renders to); that means the texture you want to render is relatively wider than the resolve texture. In this case we leave the y values of the texture coordinates as they are. The x values of the texture coordinates will be changed:
x_left = 0 + ((tex.width - r_tex.width) / 2.0)
x_right = tex.width - ((tex.width - r_tex.width) / 2.0)
These values must be normalized because the texture sampler needs coordinates in the range from 0 to 1:
x_left = x_left / tex.width
x_right = x_right / tex.width
We have our new texture coordinates:
topLeft = float2(x_left,0)
topRight = float2(x_right,0)
bottomLeft = float2(x_left,1)
bottomRight = float2(x_right,1)
This will have the effect that nothing of the top or the bottom of your texture will be cut off, but some outer parts at the left and right side will be clipped, i.e. not visible.
The aspect ratio of the texture you want to render is smaller than the aspect ratio of your resolve texture. The procedure is the same as in the first scenario, but this time we change the y coordinates (a small code sketch of the calculation follows below).
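Here is a Swift sketch of the same idea, written in terms of aspect ratios so it covers both scenarios (untested, same caveat as the rest of this answer; the function and parameter names are mine, not from the question). It returns the sub-rectangle of the texture, in normalized 0...1 coordinates, that should be mapped onto the full-screen quad:
import CoreGraphics

func aspectFillTexCoordRect(textureSize: CGSize, drawableSize: CGSize) -> CGRect {
    let texAspect = textureSize.width / textureSize.height
    let viewAspect = drawableSize.width / drawableSize.height
    var rect = CGRect(x: 0, y: 0, width: 1, height: 1)   // whole texture by default
    if texAspect > viewAspect {
        // Texture is relatively wider: keep full height, crop left and right.
        let visibleWidth = viewAspect / texAspect
        rect.origin.x = (1 - visibleWidth) / 2
        rect.size.width = visibleWidth
    } else {
        // Texture is relatively taller: keep full width, crop top and bottom.
        let visibleHeight = texAspect / viewAspect
        rect.origin.y = (1 - visibleHeight) / 2
        rect.size.height = visibleHeight
    }
    return rect
}
The quad's texture coordinates are then the corners of the returned rect (minX/minY, maxX/minY, minX/maxY, maxX/maxY), passed to the vertex shader through a buffer instead of the hard-coded float4x2.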
This should render your texture so that the resolve texture is completely filled and the aspect ratio of your texture is maintained along the x-axis; maintaining it along the y-axis works similarly. Additionally, you have to check which side of the texture is larger or smaller and incorporate this into your calculation. This will clip parts of your texture, just as scaleAspectFill does. Be aware that the above solution is untested, but I hope it is helpful. Be sure to visit the Metal Best Practices documentation from time to time; it's very helpful for getting the basic concepts right. Have fun with Metal!
So your vertex shader pretty directly dictates that the source texture be stretched to the dimensions of the viewport. You are rendering a quad that fills the viewport, because its coordinates are at the extremes ([-1, 1]) of the Normalized Device Coordinate system in the horizontal and vertical directions.
And you are mapping the source texture corner-to-corner over that same range. That's because you specify the extremes of texture coordinate space ([0, 1]) for the texture coordinates.
There are various approaches to achieve what you want. You could pass the vertex coordinates in to the shader via a buffer, instead of hard-coding them. That way, you can compute the appropriate values in app code. You'd compute the desired destination coordinates in the render target, expressed in NDC. So, conceptually, something like left_ndc = (left_pixel / target_width) * 2 - 1, etc.
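As a rough Swift sketch of that pixel-to-NDC conversion (the rect and target size are hypothetical inputs and the function name is mine; note that NDC y is flipped relative to UIKit):
import CoreGraphics

func ndcRect(forPixelRect r: CGRect, targetSize: CGSize) -> (left: Float, right: Float,
                                                             top: Float, bottom: Float) {
    let left   = Float(r.minX / targetSize.width)  * 2 - 1
    let right  = Float(r.maxX / targetSize.width)  * 2 - 1
    let top    = 1 - Float(r.minY / targetSize.height) * 2
    let bottom = 1 - Float(r.maxY / targetSize.height) * 2
    return (left, right, top, bottom)
}
These four values would replace the hard-coded ±1.0 corners in the vertex shader, supplied through a vertex buffer instead of a constant matrix.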
Alternatively, and probably easier, you can leave the shader as-is and change the viewport for the draw operation to target only the appropriate portion of the render target.
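For the viewport route, a small sketch (destinationRect is a hypothetical pixel-space rectangle computed from the aspect-ratio math above; for aspect-fill it may extend past the drawable, in which case fragments outside the render target are simply not drawn):
import Metal
import CoreGraphics

func setDestination(encoder: MTLRenderCommandEncoder, destinationRect: CGRect) {
    // Leave the shader's full-screen quad alone and aim it at the desired
    // region of the render target instead.
    encoder.setViewport(MTLViewport(originX: Double(destinationRect.minX),
                                    originY: Double(destinationRect.minY),
                                    width: Double(destinationRect.width),
                                    height: Double(destinationRect.height),
                                    znear: 0, zfar: 1))
}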

WebGL shader save multiple 32 bit values

I need to save up to 8 32-bit values in each WebGL fragment shader invocation (including when no OES_texture_float or OES_texture_half_float extensions are available). It seems I can only store a single 32-bit value by packing it into a 4x8-bit RGBA gl_FragColor.
Is there a way to store 8 values ?
The only way to draw more than one vec4 worth of data per call in the fragment shader is to use WEBGL_draw_buffers which lets you bind multiple color attachments to a framebuffer and then render to all of them in a single fragment shader call using
gl_FragData[constAttachmentIndex] = result;
If WEBGL_draw_buffers is not available, the only workarounds I can think of are:
Rendering in multiple draw calls: call gl.drawArrays to render the first vec4, then again with different parameters or a different shader to render the second vec4.
Rendering based on gl_FragCoord, where you change the output for each pixel. In other words, the 1st pixel gets the first vec4, the second pixel gets the second vec4, etc. For example
float mode = mod(floor(gl_FragCoord.x), 2.0);
gl_FragColor = mix(result1, result2, mode);
In this way the results are stored like this
1212121212
1212121212
1212121212
into one texture. For more vec4s you could do this
float mode = mod(floor(gl_FragCoord.x), 4.0); // 4 vec4s
if (mode < 0.5) {
    gl_FragColor = result1;
} else if (mode < 1.5) {
    gl_FragColor = result2;
} else if (mode < 2.5) {
    gl_FragColor = result3;
} else {
    gl_FragColor = result4;
}
This may or may not be faster than method #1. Your shader is more complicated because it's potentially doing calculations for both result1 and result2 for every pixel but depending on the GPU and pipelining you might get some of that for free.
As for getting 32-bit values out even if there's no OES_texture_float, you're basically going to have to write out even more 8-bit values using one of the 3 techniques above.
In WebGL2, draw buffers is a required feature, whereas it's optional in WebGL1. In WebGL2 there's also transform feedback, which writes the outputs of a vertex shader (the varyings) into buffers.

How to convert TangoXyzIjData into a matrix of z-values

I am currently using a Project Tango tablet for robotic obstacle avoidance. I want to create a matrix of z-values as they would appear on the Tango screen, so that I can use OpenCV to process the matrix. When I say z-values, I mean the distance each point is from the Tango. However, I don't know how to extract the z-values from the TangoXyzIjData and organize the values into a matrix. This is the code I have so far:
public void action(TangoPoseData poseData, TangoXyzIjData depthData) {
    byte[] buffer = new byte[depthData.xyzCount * 3 * 4];
    try {
        FileInputStream fileStream = new FileInputStream(
                depthData.xyzParcelFileDescriptor.getFileDescriptor());
        fileStream.read(buffer, depthData.xyzParcelFileDescriptorOffset, buffer.length);
        fileStream.close();
    } catch (IOException e) {
        e.printStackTrace();
    }
    Mat m = new Mat(depthData.ijRows, depthData.ijCols, CvType.CV_8UC1);
    m.put(0, 0, buffer);
}
Does anyone know how to do this? I would really appreciate help.
The short answer is it can't be done, at least not simply. The XYZij struct in the Tango API does not work completely yet. There is no "ij" data. Your retrieval of buffer will work as you have it coded. The contents are a set of X, Y, Z values for measured depth points, roughly 10000+ each callback. Each X, Y, and Z value is of type float, so not CV_8UC1. The problem is that the points are not ordered in any way, so they do not correspond to an "image" or xy raster. They are a random list of depth points. There are ways to get them into some xy order, but it is not straightforward. I have done both of these:
render them to an image, with the depth encoded as color, and pull out the image as pixels
use the model/view/perspective from OpenGL and multiply out the locations of each point and then figure out their screen space location (like OpenGL would during rendering). Sort the points by their xy screen space. Instead of the calculated screen-space depth just keep the Z value from the original buffer.
or
wait until (if) the XYZij struct is fixed so that it returns ij values.
I too wish to use Tango for object avoidance for robotics. I've had some success by simplifying the use case to be only interested in the distance of any object located at the center view of the Tango device.
In Java:
private Double centerCoordinateMax = 0.020;
private TangoXyzIjData xyzIjData;

final FloatBuffer xyz = xyzIjData.xyz;
double cumulativeZ = 0.0;
int numberOfPoints = 0;
// Each point is three consecutive floats (x, y, z), xyzCount points in total.
for (int i = 0; i < xyzIjData.xyzCount * 3; i += 3) {
    float x = xyz.get(i);
    float y = xyz.get(i + 1);
    if (Math.abs(x) < centerCoordinateMax &&
            Math.abs(y) < centerCoordinateMax) {
        float z = xyz.get(i + 2);
        cumulativeZ += z;
        numberOfPoints++;
    }
}
Double distanceInMeters;
if (numberOfPoints > 0) {
    distanceInMeters = cumulativeZ / numberOfPoints;
} else {
    distanceInMeters = null;
}
Said simply, this code takes the average distance of the points inside a small square centered at the origin of the x and y axes.
centerCoordinateMax = 0.020 was determined to work based on observation and testing. The square typically contains 50 points in ideal conditions and fewer when held close to the floor.
I've tested this using version 2 of my tango-caminada application, and the depth measuring seems quite accurate. Standing 1/2 meter from a doorway, I slid towards the open door and the distance changed from 0.5 meters to 2.5 meters, which is the wall at the end of the hallway.
Simulating a robot being navigated I moved the device towards a trash can in the path until 0.5 meters separation and then rotated left until the distance was more than 0.5 meters and proceeded forward. An oversimplified simulation, but the basis for object avoidance using Tango depth perception.
You can do this by using camera intrinsics to convert XY coordinates to normalized values -- see this post - Google Tango: Aligning Depth and Color Frames - it's talking about texture coordinates but it's exactly the same problem
Once normalized, map to screen space (e.g., 1280x720), and then the Z coordinate can be used to generate a pixel value for OpenCV to chew on. You'll need to decide on your own how to color pixels that don't correspond to depth points, and advisedly, before you use the depth information to further colorize pixels.
The main thing to remember is that the raw coordinates returned already use the basis vectors you want, i.e. you do not need to apply the pose attitude or location.

Problem with HLSL looping/sampling

I have a piece of HLSL code which looks like this:
float4 GetIndirection(float2 TexCoord)
{
    float4 indirection = tex2D(IndirectionSampler, TexCoord);
    for (half mip = indirection.b * 255; mip > 1 && indirection.a < 128; mip--)
    {
        indirection = tex2Dlod(IndirectionSampler, float4(TexCoord, 0, mip));
    }
    return indirection;
}
The results I am getting are consistent with that loop only executing once. I checked the shader in PIX and things got even more weird, the yellow arrow indicating position in the code gets to the loop, goes through it once, and jumps back to the start, at that point the yellow arrow never moves again but the cursor moves through the code and returns a result (a bug in PIX, or am I just using it wrong?)
I have a suspicion this may be a problem to do with texture reads getting moved outside the loop by the compiler, however I thought that didn't happen with tex2Dlod since I'm setting the LOD manually :/
So:
1) What's the problem?
2) Any suggested solutions?
The problem was solved; it was a simple coding mistake: I needed to increase the mip level on each iteration, not decrease it.
float4 GetIndirection(float2 TexCoord)
{
    float4 indirection = tex2D(IndirectionSampler, TexCoord);
    for (half mip = indirection.b * 255; mip > 1 && indirection.a < 128; mip++)
    {
        indirection = tex2Dlod(IndirectionSampler, float4(TexCoord, 0, mip));
    }
    return indirection;
}
