I am drawing a texture in a quad with sample count 4, and then I am drawing a triangle, also with sample count 4. I feel there is no need to draw the texture in the quad with a sample count of 4; it affects performance. Is it possible to use different sample counts in a single program?
It's not possible to use different MSAA sample counts with a single render pipeline state or within a single render pass (render command encoder), because each of these objects is immutably configured with the sample count. In order to achieve MSAA, the render pass has one or more attachments which must be resolved to produce a final image. If you need different sample counts for different draw calls (i.e., you want to draw some MSAA passes and some non-MSAA passes), you should first perform your multisample passes, then load the resolveTextures of the final MSAA pass as the textures of the corresponding attachments in subsequent passes, using a loadAction of .load, then perform your non-MSAA drawing.
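A minimal sketch of that ordering in Swift (the texture, pipeline, and command buffer names here are assumptions, not from the original code):
// Pass 1: MSAA pass; msaaTex has textureType .type2DMultisample and sampleCount 4
let msaaPass = MTLRenderPassDescriptor()
msaaPass.colorAttachments[0].texture = msaaTex
msaaPass.colorAttachments[0].resolveTexture = resolveTex // ordinary 2D texture
msaaPass.colorAttachments[0].loadAction = .clear
msaaPass.colorAttachments[0].storeAction = .multisampleResolve

let msaaEncoder = commandBuffer.makeRenderCommandEncoder(descriptor: msaaPass)!
msaaEncoder.setRenderPipelineState(msaaPipeline) // pipeline built with sampleCount = 4
// ... MSAA draw calls ...
msaaEncoder.endEncoding()

// Pass 2: non-MSAA pass that continues drawing on top of the resolved image
let plainPass = MTLRenderPassDescriptor()
plainPass.colorAttachments[0].texture = resolveTex
plainPass.colorAttachments[0].loadAction = .load // keep the resolved contents
plainPass.colorAttachments[0].storeAction = .store

let plainEncoder = commandBuffer.makeRenderCommandEncoder(descriptor: plainPass)!
plainEncoder.setRenderPipelineState(plainPipeline) // pipeline built with sampleCount = 1
// ... non-MSAA draw calls ...
plainEncoder.endEncoding()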
I have a kernel function (compute shader) that reads nearby pixels of a pixel from a texture and based on the old nearby-pixel values updates the value of the current pixel (it's not a simple convolution).
I've tried creating a copy of the texture using a BlitCommandEncoder and feeding the kernel function two textures: one read-only and another write-only. Unfortunately, this approach is time-consuming on the GPU.
What is the most efficient (GPU- and memory-wise) way of reading old values from a texture while updating its content?
(Bit late but oh well)
There is no way you can make this work with only one texture, because the GPU is a highly parallel processor: the kernel you wrote for a single pixel gets called in parallel on all pixels, and you can't tell which one runs first.
So you definitely need 2 textures. The way you probably should do it is by using 2 textures where one is the "old" one and the other the "new" one. Between passes, you switch the roles of the textures: now old is new and new is old. Here is some pseudoswift:
var currentText: MTLTexture = ... // "old" values, read by the kernel
var nextText: MTLTexture = ...    // "new" values, written by the kernel
let semaphore = DispatchSemaphore(value: 1)

func update() {
    semaphore.wait() // wait for the previous update to finish
    let commands = commandQueue.makeCommandBuffer()!
    let encoder = commands.makeComputeCommandEncoder()!
    // (compute pipeline state setup omitted)
    encoder.setTexture(currentText, index: 0) // read-only "old" texture
    encoder.setTexture(nextText, index: 1)    // write-only "new" texture
    encoder.dispatchThreadgroups(...)
    encoder.endEncoding()
    // When the GPU has finished, swap the textures and signal completion
    commands.addCompletedHandler { _ in
        swap(&currentText, &nextText)
        semaphore.signal()
    }
    commands.commit()
}
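The matching kernel might look like this in Metal Shading Language (a sketch; the averaging rule is a placeholder for your real neighbourhood update):
#include <metal_stdlib>
using namespace metal;

kernel void updatePixels(texture2d<float, access::read>  oldTex [[texture(0)]],
                         texture2d<float, access::write> newTex [[texture(1)]],
                         uint2 gid [[thread_position_in_grid]])
{
    // Skip the border so the neighbour reads stay in bounds
    if (gid.x == 0 || gid.y == 0 ||
        gid.x >= oldTex.get_width() - 1 || gid.y >= oldTex.get_height() - 1)
        return;

    // Read old neighbour values, write the updated value for this pixel
    float4 left  = oldTex.read(uint2(gid.x - 1, gid.y));
    float4 right = oldTex.read(uint2(gid.x + 1, gid.y));
    newTex.write((left + right) * 0.5f, gid);
}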
I have written plenty of iOS Metal code that samples (or reads) from the same texture it is rendering into. I am using the render pipeline, setting my texture as the render target attachment, and also loading it as a source texture. It works just fine.
To be clear, a more efficient approach is to use the [[color(0)]] attribute in your fragment shader, but that is only suitable if all you need is the value of the current fragment, not of any nearby positions. If you need to read from other positions in the render target, I would just load the render target as a source texture into the fragment shader.
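For instance, a sketch of that setup in Swift (all names are hypothetical):
// Use sceneTex both as the color attachment and as a fragment source texture
let pass = MTLRenderPassDescriptor()
pass.colorAttachments[0].texture = sceneTex
pass.colorAttachments[0].loadAction = .load
pass.colorAttachments[0].storeAction = .store

let encoder = commandBuffer.makeRenderCommandEncoder(descriptor: pass)!
encoder.setRenderPipelineState(pipeline)
encoder.setFragmentTexture(sceneTex, index: 0) // same texture as the attachment
// ... draw calls whose fragment shader reads sceneTex ...
encoder.endEncoding()
Note that reading positions other than the current fragment from a live render target is not formally guaranteed by the API, which is worth keeping in mind even where it works in practice.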
I have a texture which I use as a sprite map. It's a 2048-by-2048 texture divided into squares of 256 pixels each, so I have 64 "slots". This map can be empty, partly filled, or full. On screen I am drawing simple squares, each textured with one slot of the sprite map.
The problem is that I have to update this map from time to time as the asset for a slot becomes available. These assets are downloaded from the internet, but the initial information arrives in advance, so I can tell how many slots I will use and check local storage to see which ones are already available to be drawn at the start.
For example: my info says there will be 10 squares, and 5 of these are available locally, so when the sprite map is initialized those 5 slots are already filled and ready to be drawn. On screen I will show 10 squares; 5 of them will use the images stored in the texture map for their slots, and the remaining 5 are drawn with a temporary image. As a new asset for a slot is downloaded, I want to update my sprite map (which is bound and used for drawing) with the corresponding new texture; after the draw is finished and the sprite map has been updated, I set a flag which tells OpenGL that it should start drawing with that slot instead of the temporary image.
From what I have read, there are 3 ways to update a sprite map.
1) Upload a new one with glTexImage2D: I am currently using this approach. I create another updated texture and then simply swap it in. But I frequently run into memory warnings.
2) Modify the texture with glTexSubImage2D: I can't get this to work; I keep getting memory access errors or black textures. I believe it's either because the thread is not the same or because I am accessing a texture that is in use.
3) Use Frame Buffer Objects: I could try this, but I am not certain whether I can draw to my texture buffer while it is already being used.
What is the correct way of solving this?
This is meant to be used on an iPhone, so resources are limited.
Edit: I found a post which talks about something related here.
Unfortunately, I don't think it's focused on modifying a texture that is currently being used.
the thread is not the same
The OpenGL ES API is not thread-safe: a context can be current on only one thread at a time. Update your texture from the thread that owns your GL context (usually the main thread).
Because your texture data must be uploaded to the GPU anyway, glTexSubImage2D is the fastest and simplest path. Keep going in this direction :)
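For example, uploading a freshly downloaded 256×256 asset into one slot of the 2048×2048 atlas could look like this (a sketch; atlasTexture, slotX, slotY, and pixelData are assumptions):
glBindTexture(GL_TEXTURE_2D, atlasTexture);
glTexSubImage2D(GL_TEXTURE_2D,
                0,            /* mip level */
                slotX * 256,  /* x offset of the slot in the atlas */
                slotY * 256,  /* y offset of the slot in the atlas */
                256, 256,     /* width, height of the slot */
                GL_RGBA, GL_UNSIGNED_BYTE,
                pixelData);   /* 256 * 256 * 4 bytes of decoded image */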
Rendering to a framebuffer object (attached to your texture) is very fast for rendering data that is already on the GPU, which is not your case. And yes, you can draw to a framebuffer bound to a texture (i.e., a framebuffer that uses the texture as its color attachment).
Just one constraint: you can't read and write the same texture in one draw call (the texture attached to the current framebuffer can't also be bound to a texture unit).
I'm very new to shaders and am very confused about the whole thing, even after following several tutorials (in fact this is my second question about shaders today).
I'm trying to make a shader with two passes:
technique Technique1
{
    pass Pass1
    {
        PixelShader = compile ps_2_0 HorizontalBlur();
    }
    pass Pass2
    {
        PixelShader = compile ps_2_0 VerticalBlur();
    }
}
However, this only applies VerticalBlur(). If I remove Pass2, it falls back to the HorizontalBlur() in Pass1. Am I missing something? Maybe it's simply not passing the result of the first pass to the second pass, in which case how would I do that?
Also, in most of the tutorials I've read, I'm told to put effect.CurrentTechnique.Passes[0].Apply(); after I start my spritebatch with the effect. However, this doesn't seem to change anything; I can set it to Passes[1] or even remove it entirely, and I still get only Pass2. (I do get an error when I try to set it to Passes[2], however.) What's the use of that line then? Has the need for it been removed in recent versions?
Thanks a bunch!
To render multiple passes:
For the first pass, render your scene onto a texture.
For the second, third, fourth, etc. passes:
Draw a quad that uses the texture from the previous pass. If there are more passes to follow, render this pass to another texture; if this is the last pass, render it to the back buffer.
In your example, say you are rendering a car.
First you render the car to a texture.
Then you draw a big rectangle, the size of the screen in pixels, placed at a z depth of 0.5, with identity world, view, and projection matrices, and with your car scene as the texture, and apply the horizontal blur pass. This is rendered to a new texture that now contains a horizontally blurred car.
Finally, render the same rectangle but with the "horizontally blurred car" texture, and apply the vertical blur pass. Render this to the back buffer. You have now drawn a blurred car scene.
The reason for the following
effect.CurrentTechnique.Passes[0].Apply();
is that many effects only have a single pass.
To run multiple passes, I think you have to do this instead:
foreach (EffectPass pass in effect.CurrentTechnique.Passes)
{
    pass.Apply();
    // Your draw code here
}
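Putting that together for the blur example, the two passes could be driven like this (a sketch; sceneTarget, blurTarget, and blurEffect are assumed names, and SpriteSortMode.Immediate is used so each Apply takes effect before the Draw that follows):
// Pass 1: horizontal blur from the scene texture into blurTarget
GraphicsDevice.SetRenderTarget(blurTarget);
spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.Opaque);
blurEffect.CurrentTechnique.Passes[0].Apply(); // HorizontalBlur
spriteBatch.Draw(sceneTarget, Vector2.Zero, Color.White);
spriteBatch.End();

// Pass 2: vertical blur from blurTarget onto the back buffer
GraphicsDevice.SetRenderTarget(null);
spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.Opaque);
blurEffect.CurrentTechnique.Passes[1].Apply(); // VerticalBlur
spriteBatch.Draw(blurTarget, Vector2.Zero, Color.White);
spriteBatch.End();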
I'd like to change my RenderTargets between SpriteBatch.Begin and SpriteBatch.End. I already know this works:
GraphicsDevice.SetRenderTarget(target1);
SpriteBatch.Begin()
SpriteBatch.Draw(...)
SpriteBatch.End()
GraphicsDevice.SetRenderTarget(target2);
SpriteBatch.Begin()
SpriteBatch.Draw(...)
SpriteBatch.End()
But I'd really like to make this work:
SpriteBatch.Begin()
GraphicsDevice.SetRenderTarget(target1);
SpriteBatch.Draw(...)
GraphicsDevice.SetRenderTarget(target2);
SpriteBatch.Draw(...)
SpriteBatch.End()
I've never seen anybody do this, but I didn't find any reason why not.
EDIT: a little more about why I want to do this:
In my project, I use SpriteSortMode.Immediate (to be able to change the BlendState when I want), and I simply iterate through a sorted list of sprites and draw them all.
But now I want to apply a multi-pass shader on some sprites, but not all! I'm quite new to shaders, but from what I understand, I have to draw my sprite onto an intermediate target using the first pass, and then draw that intermediate result onto the final render target using the second pass. (I'm using a Gaussian blur pixel shader.)
That's why I'd like to draw on the target I want, using the desired shader, without having to make a new begin/end.
The question is: Why do you want to change the render target there?
You won't get any performance improvement, since the batch has to be split anyway when the render target (or any other render state) changes.
SpriteBatch tries to group the sprites by common attributes, for example by texture when SpriteSortMode.Texture is used. That means sprites sharing a texture will be drawn in the same draw call (batch). Having fewer batches can improve performance. But you can't change the GPU state during a draw call, so when you change the render target you are bound to use two draw calls anyway.
Ergo, even if the second example worked, the number of batches would be the same.
I'm in the process of writing my first few shaders, usually writing a shader to implement a feature whenever I realize that the main XNA library doesn't support it.
The trouble I'm running into is that not all of my models in a particular scene have texture data in them, and I can't figure out how to handle that. The main XNA libraries seem to handle it by using a wrapper class for BasicEffect, loading it through the content manager and selectively enabling or disabling texture processing accordingly.
How difficult is it to accomplish this in a custom shader? What I'm writing is a generic "hue shift" effect; that is, I want whatever gets drawn with this technique to have its texture colors (if any) and its vertex color hue-shifted by a certain degree. Do I need to write separate shaders, one with textures and one without? If so, when I'm looping through my MeshParts, is there any way to detect whether a given part has texture coordinates so that I can apply the correct effect?
Yes, you will need separate shaders, or rather different "techniques" - it can still be the same effect and use much of the same code. You can see how BasicEffect (at least the pre-XNA 4.0 version) does it by reading the source code.
To detect whether or not a model mesh part has texture coordinates, try this:
// Note: this allocates an array, so do it at load-time
var elements = meshPart.VertexBuffer.VertexDeclaration.GetVertexElements();
bool result = elements.Any(e =>
    e.VertexElementUsage == VertexElementUsage.TextureCoordinate);
The way the content pipeline sets up its BasicEffect is via BasicMaterialContent. The BasicEffect.TextureEnabled property is simply turned on if Texture is set.
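Putting the pieces together, selecting a technique per mesh part might look like this (a sketch; the technique names are assumptions for your custom hue-shift effect):
// Requires: using System.Linq;
foreach (ModelMeshPart part in mesh.MeshParts)
{
    bool hasTexCoords = part.VertexBuffer.VertexDeclaration
        .GetVertexElements()
        .Any(e => e.VertexElementUsage == VertexElementUsage.TextureCoordinate);

    hueShiftEffect.CurrentTechnique = hueShiftEffect.Techniques[
        hasTexCoords ? "HueShiftTextured" : "HueShiftVertexColor"];
    // ... draw the part with hueShiftEffect ...
}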