DirectX 11: write data to a buffer in a pixel shader? (like SSBO in OpenGL)

I'm trying to write data to a buffer in an HLSL shader. I know that in OpenGL you need an SSBO, but is there a corresponding buffer type in DirectX 11? (I'm new to it.)
P.S. I'm using MonoGame, so the newest shader model available is 3.0.
Thanks!

Shader Model 3 corresponds to the DirectX 9 architecture. This architecture looks as follows:
(Source: https://msdn.microsoft.com/en-us/library/windows/desktop/bb219679(v=vs.85).aspx)
As you can see, there is only one output from the pixel shader. This output can be a color or a depth value and is written to the render target. So there is no way to do this in DX9.
In DX10 (SM 4), the pipeline looks as follows:
Source: https://msdn.microsoft.com/en-us/library/windows/desktop/bb205123(v=vs.85).aspx
Again, the only outputs of the pixel shader are color and depth, so there is no way to do this in DX10 either.
Finally, DirectX 11 (SM 5):
Source: https://msdn.microsoft.com/en-us/library/windows/desktop/ff476882(v=vs.85).aspx
Now there is a way from the Pixel Shader to Memory Resources. The buffer type that you would need is the RWBuffer.
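For reference, here is a minimal Shader Model 5.0 sketch of the idea (not possible under MonoGame's SM 3.0 limit); the UAV slot u1, the buffer name, and the 1920-pixel-wide render target are assumptions for illustration. On the CPU side the UAV would be bound with ID3D11DeviceContext::OMSetRenderTargetsAndUnorderedAccessViews.

// HLSL, SM 5.0: a read/write buffer bound as a UAV (slot u0 is usually the render target)
RWBuffer<float4> outputBuffer : register(u1);

float4 PS(float4 pos : SV_Position) : SV_Target
{
    // illustrative linear index, assuming a 1920-pixel-wide render target
    uint index = (uint)pos.y * 1920 + (uint)pos.x;
    outputBuffer[index] = float4(1.0, 0.0, 0.0, 1.0); // write arbitrary data from the pixel shader
    return float4(0.0, 0.0, 0.0, 1.0);
}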

Related

Write pixel data to certain mipmap level of texture2d

As you might know, the Metal Shading Language offers a few ways to read pixel data from a texture2d in a kernel function: either a simple read(short2 coord) or sample(float2 coord, [various additional parameters]). But I noticed that when it comes to writing something into a texture, there's only the write method.
The problem is that the sample method can sample from a specific mipmap level, which is very convenient: the developer just needs to create a sampler with a mipFilter and use normalized coordinates.
But what if I want to write into a specific mipmap level of the texture? The write method doesn't have a mipmap parameter the way sample does, and I can't find any alternative.
I'm pretty sure there must be a way to choose the mipmap level when writing data to a texture, because the Metal Performance Shaders framework includes kernels that populate texture mipmaps.
Thanks in advance!
You can do this with texture views.
The purpose of texture views is to reinterpret the contents of a base texture by selecting a subset of its levels and slices and potentially reading/writing its pixels in a different (but compatible) pixel format.
The -newTextureViewWithPixelFormat:textureType:levels:slices: method on the MTLTexture protocol returns a new instance of id<MTLTexture> that has the first level specified in the levels range as its base mip level. By creating one view per mip level you wish to write to, you can "target" each level in the original texture.
For example, to create a texture view on the second mip level of a 2D texture, you might call the method like this:
id<MTLTexture> viewTexture =
    [baseTexture newTextureViewWithPixelFormat:baseTexture.pixelFormat
                                   textureType:baseTexture.textureType
                                        levels:NSMakeRange(1, 1)
                                        slices:NSMakeRange(0, 1)];
When binding this new texture as an argument, its mip level 0 will correspond to mip level 1 of its base texture. You can therefore use the ordinary texture write function in a shader to write to the selected mip level:
myShaderTexture.write(color, coords);
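For completeness, a sketch of a compute kernel that writes through such a view; the kernel name and the fill colour are made up for illustration. Because the view's level 0 aliases level 1 of the base texture, these writes land in that mip level.

#include <metal_stdlib>
using namespace metal;

kernel void fillMipLevel(texture2d<float, access::write> myShaderTexture [[texture(0)]],
                         uint2 coords [[thread_position_in_grid]])
{
    // guard against threads that fall outside the (smaller) mip level
    if (coords.x >= myShaderTexture.get_width() || coords.y >= myShaderTexture.get_height())
        return;
    float4 color = float4(1.0, 0.0, 0.0, 1.0);
    myShaderTexture.write(color, coords);
}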

HLSL - how does vertex shader's output POSITION0 affect pixel shader's texture mapping uv?

It seems that POSITION/POSITION0's w component divides everything in the output struct, which is what lets the pixel shader do correct perspective mapping. And it can't be removed; otherwise the pixel shader won't output anything.
I didn't see any configuration for this in the program code. Is it a fixed default setting for all devices, or can I customize it?
You can disable perspective correction in HLSL on any interpolator, as described here.
The modifier you want is noperspective.
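A small illustrative sketch (the struct and semantics are just an example):

struct VSOutput
{
    float4 pos : SV_Position;
    noperspective float2 uvScreen : TEXCOORD0; // interpolated linearly in screen space
    float2 uv : TEXCOORD1;                     // default: perspective-correct interpolation
};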

Pass Texture to Uniform with CVOpenGLESTextureCache in OpenGL ES

I'm trying to apply a video file as a texture in OpenGL ES on iOS 5.0+ using CVOpenGLESTextureCache.
I've found Apple's RosyWriter sample code, and have been reading through it.
The question I have is:
How are the textures finally being delivered to the uniforms in the fragment shader?
In the RosyWriterPreviewView class, I follow it all the way up to
glBindTexture(CVOpenGLESTextureGetTarget(texture),
CVOpenGLESTextureGetName(texture))
after which some texture parameters are specified.
However, I don't see the texture uniform (sampler2D videoframe) ever being explicitly referenced by the sample code. The texture-sending code I've become used to would look something like:
GLint uniform = glGetUniformLocation(program, "u_uniformName");
with a subsequent call to actually send the texture to the uniform:
glUniform1i(GLint location, GLint x);
So I know that SOMEhow RosyWriter is delivering the texture to the uniform in its fragment shader, but I can't see how and where it's happening.
In fact, the sample code includes the comment where it builds up the OpenGL program:
// we don't need to get uniform locations in this example
Any help on why this is & how the texture is getting sent over would be great.
In the RosyWriter example, I think the reason they're able to get away without ever calling glUniform1i() for the videoframe uniform is that they bind the input texture to texture unit 0.
When specifying a uniform value for a texture, the value you use is the texture unit that the texture is bound to. If you don't set a value for a uniform, I believe it defaults to 0, so by always binding the texture to unit 0 they never have to set a value for the videoframe uniform; it will still pull in the texture attached to unit 0.
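If you want to make the connection explicit instead of relying on that default, the equivalent calls would look something like this (the program handle and uniform name follow the shader quoted above; this is a sketch, not RosyWriter's actual code):

glUseProgram(program);
glActiveTexture(GL_TEXTURE0); // select texture unit 0
glBindTexture(CVOpenGLESTextureGetTarget(texture),
              CVOpenGLESTextureGetName(texture)); // bind the video frame to unit 0
GLint videoframeLoc = glGetUniformLocation(program, "videoframe");
glUniform1i(videoframeLoc, 0); // the sampler2D videoframe reads from unit 0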

GPGPU programming with OpenGL ES 2.0

I am trying to do some image processing on the GPU, e.g. median, blur, brightness, etc. The general idea is to do something like this framework from GPU Gems 1.
I am able to write the GLSL fragment shader for processing the pixels as I've been trying out different things in an effect designer app.
However, I am not sure how to do the other part of the task: I'd like to work on the image in image coordinates and then output the result to a texture. I am aware of the gl_FragCoord variable.
As far as I understand it, it goes like this: I need to set up a view (an orthographic one, maybe?) and a quad in such a way that the pixel shader is applied once to each pixel in the image, rendering the result to a texture. But how can I achieve that, considering the depth buffer may make things somewhat awkward for me...
I'd be very grateful if anyone could help me with this rather simple task as I am really frustrated with myself.
UPDATE:
It seems I'll have to use an FBO, binding one like this: glBindFramebuffer(...)
Use this tutorial. It's targeted at OpenGL 2.0, but most of what it covers is available in ES 2.0; the only thing I have doubts about is floating-point textures.
http://www.mathematik.uni-dortmund.de/~goeddeke/gpgpu/tutorial.html
Basically, you need 4 vertex positions (as vec2) of a quad (with corners (-1,-1) and (1,1)) passed as a vertex attribute.
You don't really need a projection, because the shader will not need any.
Create an FBO, bind it and attach the target surface. Don't forget to check the completeness status.
Bind the shader, set up input textures and draw the quad.
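A rough sketch of that setup in OpenGL ES 2.0 (the texture size and variable names are placeholders):

GLuint fbo, targetTex;

// allocate the target surface
glGenTextures(1, &targetTex);
glBindTexture(GL_TEXTURE_2D, targetTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

// create the FBO and attach the texture as its colour target
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, targetTex, 0);

// don't forget the completeness check
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
    // handle the error
}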
Your vertex shader may look like this:
#version 130
in vec2 at_pos;
out vec2 tc;
void main() {
    tc = (at_pos + vec2(1.0)) * 0.5; // texture coordinates
    gl_Position = vec4(at_pos, 0.0, 1.0); // no projection needed
}
And a fragment shader:
#version 130
in vec2 tc;
uniform sampler2D unit_in;
void main() {
    vec4 v = texture2D(unit_in, tc);
    gl_FragColor = do_something(v); // do_something() is a placeholder for your per-pixel processing
}
If you want an example, I created this project for iOS devices for processing frames of video grabbed from the camera using OpenGL ES 2.0 shaders. I explain more about it in my writeup here.
Basically, I pull in the BGRA data for a frame and create a texture from that. I then use two triangles to generate a rectangle and map the texture on that. A shader is used to directly display the image onscreen, perform some effect on the image and display it, or perform some effect on the image while in an offscreen FBO. In the last case, I can then use glReadPixels() to pull in the image for some CPU-based processing, but ideally I want to fix this so that the processed image is just passed on as a texture to the next set of shaders.
You should also check out ogles_gpgpu, which even supports Android systems. An overview about this topic is given in this publication: Parallel Computing for Digital Signal Processing on Mobile Device GPUs.
You can do more advanced GPGPU things with OpenGL ES 3.0 now. Check out this post for an example. Apple also has the Metal API, which allows even more GPU compute operations. Both OpenGL ES 3.x and Metal are only supported by newer devices with an A7 chip.

Writing texture data onto depth buffer

I'm trying to implement the technique described at: Compositing Images with Depth.
The idea is to use an existing texture (loaded from an image) as a depth mask, to basically fake 3D.
The problem I face is that glDrawPixels is not available in OpenglES. Is there a way to accomplish the same thing on the iPhone?
The depth buffer is more obscured than you think in OpenGL ES; not only is glDrawPixels absent but gl_FragDepth has been removed from GLSL. So you can't write a custom fragment shader to spool values to the depth buffer as you might push colours.
The most obvious solution is to pack your depth information into a texture and to use a custom fragment shader that does a depth comparison between the fragment it generates and one looked up from a texture you supply. Only if the generated fragment is closer is it allowed to proceed. The normal depth buffer will catch other cases of occlusion and — in principle — you could use a framebuffer object to create the depth texture in the first place, giving you a complete on-GPU round trip, though it isn't directly relevant to your problem.
Disadvantages are that drawing will cost you an extra texture unit and textures use integer components.
EDIT: for the purposes of keeping the example simple, suppose you were packing all of your depth information into the red channel of a texture. That'd give you a really low precision depth buffer, but just to keep things clear, you could write a quick fragment shader like:
void main()
{
    // write a value to the depth map
    gl_FragColor = vec4(gl_FragCoord.w, 0.0, 0.0, 1.0);
}
That stores depth in the red channel, so you've partially recreated the old depth texture extension: you'll have an image with a brighter red in pixels that are closer and a darker red in pixels that are further away. I think that in your question, you'd actually load this image from disk.
To then use the texture in a future fragment shader, you'd do something like:
uniform sampler2D depthMap;
void main()
{
    // read a value from the depth map
    lowp vec3 colourFromDepthMap = texture2D(depthMap, gl_FragCoord.xy).rgb;
    // discard the current fragment if it is less close than the stored value
    if (colourFromDepthMap.r > gl_FragCoord.w) discard;
    ... set gl_FragColor appropriately otherwise ...
}
EDIT2: you can see a much smarter mapping from depth to an RGBA value here. To tie in directly to that document, OES_depth_texture definitely isn't supported on the iPad or on the third generation iPhone. I've not run a complete test elsewhere.
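The gist of that smarter mapping is to spread the depth value across all four 8-bit channels rather than just red; a commonly used pack/unpack pair looks roughly like this (a sketch of the general technique, not the exact code from that document):

vec4 packDepth(float depth) {
    // split the [0,1) value across successively finer 1/255 steps
    vec4 enc = fract(depth * vec4(1.0, 255.0, 65025.0, 16581375.0));
    enc -= enc.yzww * vec4(1.0/255.0, 1.0/255.0, 1.0/255.0, 0.0);
    return enc;
}

float unpackDepth(vec4 rgba) {
    return dot(rgba, vec4(1.0, 1.0/255.0, 1.0/65025.0, 1.0/16581375.0));
}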
