I have a kernel function (compute shader) that reads the pixels surrounding a pixel in a texture and, based on their old values, updates the value of the current pixel (it's not a simple convolution).
I've tried creating a copy of the texture using a BlitCommandEncoder and feeding the kernel function two textures - one read-only and the other write-only. Unfortunately, this approach is time-consuming on the GPU.
What is the most efficient (GPU- and memory-wise) way of reading old values from a texture while updating its content?
(Bit late but oh well)
There is no way you could make it work with only one texture, because the GPU is a highly parallel processor: the kernel you wrote for a single pixel gets called in parallel for all pixels, and you can't tell which one runs first.
So you definitely need 2 textures. The way you probably should do it is by using 2 textures where one is the "old" one and the other the "new" one. Between passes, you switch the role of the textures, now old is new and new is old. Here is some pseudoswift:
var currentText: MTLTexture!   // the "old" texture the kernel reads from
var nextText: MTLTexture!      // the "new" texture the kernel writes to
let semaphore = DispatchSemaphore(value: 1)

func update() {
    semaphore.wait() // Wait for the previous update to finish

    let commands = commandQueue.makeCommandBuffer()!
    let encoder = commands.makeComputeCommandEncoder()!
    encoder.setTexture(currentText, index: 0)   // read the old values from here
    encoder.setTexture(nextText, index: 1)      // write the new values here
    encoder.dispatchThreadgroups(...)           // dispatch over the whole texture
    encoder.endEncoding()

    // When updating is done, swap the textures and signal that it's done updating
    commands.addCompletedHandler { _ in
        swap(&currentText, &nextText)
        semaphore.signal()
    }
    commands.commit()
}
I have written plenty of iOS Metal code that samples (or reads) from the same texture it is rendering into. I am using the render pipeline, setting my texture as the render target attachment, and also loading it as a source texture. It works just fine.
To be clear, a more efficient approach is to use the color() attribute in your fragment shader, but that is only suitable if all you need is the value of the current fragment, not any other nearby positions. If you need to read from other positions in the render target, I would just load the render target as a source texture into the fragment shader.
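For illustration, here is a minimal Swift sketch of that setup - binding one texture as both the color attachment and a fragment-shader source (the names pipeline, drawTexture and commandBuffer are placeholders, not from the original answer):

let passDescriptor = MTLRenderPassDescriptor()
passDescriptor.colorAttachments[0].texture = drawTexture   // render target
passDescriptor.colorAttachments[0].loadAction = .load      // keep the existing contents so they can be read
passDescriptor.colorAttachments[0].storeAction = .store

let encoder = commandBuffer.makeRenderCommandEncoder(descriptor: passDescriptor)!
encoder.setRenderPipelineState(pipeline)
encoder.setFragmentTexture(drawTexture, index: 0)          // the same texture, readable in the fragment shader
encoder.drawPrimitives(type: .triangleStrip, vertexStart: 0, vertexCount: 4)
encoder.endEncoding()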
I have two questions:
First, is there any more direct, sane way to go from a texture atlas image to a texture array in WebGL than what I'm doing below? I've not tried it, but doing the conversion entirely in WebGL seems possible, though it would be four times the work and I'd still have to make two round trips to the GPU to do it.
And am I right that, because buffer data for texImage3D() must come from PIXEL_UNPACK_BUFFER, this data must come directly from the CPU side? I.e. there is no way to copy from one block of GPU memory to a PIXEL_UNPACK_BUFFER without copying it to the CPU first. I'm pretty sure the answer to this is a hard "no".
In case my questions themselves are stupid (and they may be), my ultimate goal here is simply to convert a texture atlas PNG to a texture array. From what I've tried, the fastest way to do this by far is via PIXEL_UNPACK_BUFFER, rather than extracting each sub-image and sending them in one at a time, which for large atlases is extremely slow.
This is basically how I'm currently getting my pixel data.
const imageToBinary = async (image: HTMLImageElement) => {
  // Draw the image into an offscreen canvas just to get access to its raw RGBA bytes.
  const canvas = document.createElement('canvas');
  canvas.width = image.width;
  canvas.height = image.height;
  const context = canvas.getContext('2d');
  if (!context) throw new Error('Could not create a 2D context');
  context.drawImage(image, 0, 0);
  const imageData = context.getImageData(0, 0, image.width, image.height);
  return imageData.data;
};
So, I'm creating an HTMLImageElement object, which contains the uncompressed pixel data I want, but has no methods to get at it directly. Then I'm creating a 2D context version containing the same pixel data a second time. Then I'm repopulating the GPU with the same pixel data a third time. Seems bonkers to me, but I don't see a way around it.
It seems that with iOS 10, when pulling textures from a texture atlas, I am now getting new textures instead of the same texture being reused across different sprites. On iOS 9, this works as expected. Is anyone else experiencing this issue? Perhaps there is a step I missed that is now a part of iOS 10.
Notes: I created a sample project and a new atlas, then just dragged the spaceship in at @1x. I have also tried preloading, and that did nothing as well.
Code:
let atlas = SKTextureAtlas(named: "Sprites")
var texture = atlas.textureNamed("Spaceship")
print("\(Unmanaged.passUnretained(texture)),\(Unmanaged.passUnretained(texture).toOpaque())")
texture = atlas.textureNamed("Spaceship")
print("\(Unmanaged.passUnretained(texture)),\(Unmanaged.passUnretained(texture).toOpaque())")
Edit: To get around the comparison issue, I use the description property to check whether 2 textures are equal. For this to work, though, you can't be using 2 atlases that each contain a texture with the exact same name and size. I will never hit this situation, but for anybody out there looking for help, keep this in mind.
I've run the same test and get the same results.
I'm not 100% sure, but it seems that during the development of Swift 3 there was a proposal to change Unmanaged to use UnsafePointer.
But if you try something like:
import Foundation  // for NSString formatting

func address<T: AnyObject>(o: T) -> String {
    let addr = unsafeBitCast(o, to: Int.self)
    return NSString(format: "%p", addr) as String
}
Usage:
print(address(o: texture))
in iOS 9 you get the expected values, but in iOS 10 you get wrong results.
I think you're right; we are facing a bug (another one...).
Is having a different physical address for a texture referencing the "same texture" really a problem?
I've run the default sample game project, but set up for Obj-C. I have a texture atlas that would be something like the image below. Note, however, that I ran it through TexturePacker, so the atlas that Xcode actually generates is different.
I did as you said and created 2 textures with the same name.
self.myTextureAtlas = [SKTextureAtlas atlasNamed:@"MyTexture"];
self.tex0 = [self.myTextureAtlas textureNamed:@"tex0"];
self.tex1 = [self.myTextureAtlas textureNamed:@"tex0"];
As you said, the pointers for tex0 and tex1 are different. So at least there is consistency between Swift and Obj-C.
However, I don't think this is a problem/bug. What I suspect is that they changed the implementation so that the returned SKTexture is a new "instance", while the underlying texture is still the same.
I'll talk OpenGL, since that is what I write my engines in. Metal will still have similarities. A basic sub-texture really has only 2 important properties: a texture name (this is the OpenGL texture name) and UVs. If you were thinking about what would be considered the "equality" for conforming to Equatable, it would most likely be testing for equality against those 2 items. The texture name is the atlas texture name, and the UVs are the UVs within the atlas which represent the area of the particular sub-texture.
To test this hypothesis, I ran a GPU frame capture on this. With Xcode 8 this seems pretty buggy: using Metal, it crashed 100% of the time. I forced it to use OpenGL and managed to get a frame capture. As expected, when I looked at all texture resources, I saw only one texture for my atlas.
Texture #3 is MyTexture.
If I dump the textureRect, which appears to hold the UVs, I can see they are the same:
Tex0 Rect 0.001618 0.793765 0.139159 0.203837
Tex1 Rect 0.001618 0.793765 0.139159 0.203837
Based on this, it would seem that both self.tex0 and self.tex1, although having different physical addresses, still both point to the same sub-texture.
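As a rough Swift sketch of that equality idea (textureRect() and size() are public SKTexture APIs; treating them as a stand-in for the private atlas texture name is my own assumption):

func referencesSameSubTexture(_ a: SKTexture, _ b: SKTexture) -> Bool {
    // Same region within the atlas and same size => same sub-texture,
    // even if the two SKTexture wrapper objects have different addresses.
    return a.textureRect() == b.textureRect() && a.size() == b.size()
}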
Note that I no longer use SpriteKit. My current renderer uses handles for textures, however, when retrieved, you can get handle objects with different physical addresses. They still all dereference to the true texture since they still reference the same underlying texture instance.
I guess I don't really see getting different pointers as a problem, provided they still reference the same underlying texture (i.e. no more texture memory is allocated).
To get around this issue, I had to come up with a way to cache the textures so that they don't get duplicated:
import SpriteKit

// File-scope cache so repeated lookups return the same SKTexture instance
private var textureCache = [String: SKTexture]()

extension SKTextureAtlas
{
    func texturesWithNames(_ names: [String]) -> [SKTexture]
    {
        return names.map { textureNamed($0) }
    }

    func cachedTextureWithName(_ name: String) -> SKTexture
    {
        if textureCache[name] == nil
        {
            textureCache[name] = textureNamed(name)
        }
        return textureCache[name]!
    }

    func cachedTexturesWithNames(_ names: [String]) -> [SKTexture]
    {
        return names.map { cachedTextureWithName($0) }
    }

    func clearCache()
    {
        textureCache = [String: SKTexture]()
    }
}

extension SKTexture
{
    // Pulls the texture name out of the description string.
    // slice(start:to:) is a custom String helper, not part of the standard library.
    var name: String
    {
        return self.description.slice(start: "'", to: "'")!
    }
}
I'm developing an OpenGL ES application for iOS.
I'm trying to blend two textures in my shader, but I always end up with only one active texture unit.
I have generated two textures and linked them to two sampler2D uniforms in the fragment shader.
I set them to units 0 and 1 using glUniform1f().
And I have bound the textures using a loop:
for (int i = 0; i < 2; i++)
{
    glActiveTexture(GL_TEXTURE0 + i);
    glBindTexture(GL_TEXTURE_2D, textures[i]);
}
But when I draw the OpenGL frame, only one unit is active, as in the picture below.
So, what am I doing wrong?
The way I read the output of that tool (I have not used it), the left pane shows the currently active texture unit. There is always exactly one active texture unit, corresponding to your last call of glActiveTexture(). This means that after you call:
glActiveTexture(GL_TEXTURE0 + i);
the value in the left circled field will be the value of i.
The right pane shows the textures bound to each texture unit. Since you bound textures to unit 0 and 1 with the loop shown in your question, it shows a texture (with id 201) bound to texture unit 0, and a texture (with id 202) bound to texture unit 1.
So as far as I can tell, the state shown in the screenshot represents exactly what you set based on your description and code fragment.
Based on the wording in your question, you might be under the impression that glActiveTexture() enables texture units. That is not the case. glActiveTexture() only specifies which texture unit subsequent glBindTexture() calls operate on.
Which textures are used is then determined by the values you set for the sampler uniforms of your shader program, and by the textures you bound to the corresponding texture units. The value of the currently active texture unit has no influence on the draw call, only on texture binding.
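As a hedged sketch of how those pieces fit together on iOS (Swift with the OpenGLES module; program, textures and the uniform names uTexture0/uTexture1 are assumptions, not taken from the question):

glUseProgram(program)
// Sampler uniforms hold texture *unit* indices and are integers, so they are set with glUniform1i
glUniform1i(glGetUniformLocation(program, "uTexture0"), 0)
glUniform1i(glGetUniformLocation(program, "uTexture1"), 1)

// Bind one texture to each unit; glActiveTexture only selects which unit glBindTexture affects
glActiveTexture(GLenum(GL_TEXTURE0))
glBindTexture(GLenum(GL_TEXTURE_2D), textures[0])
glActiveTexture(GLenum(GL_TEXTURE1))
glBindTexture(GLenum(GL_TEXTURE_2D), textures[1])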
I have a texture which I use as a texture map. It's a 2048 by 2048 texture divided into squares of 256 pixels each, so I have 64 "slots". This map can be empty, partly filled or full. On screen I am drawing simple squares, each using one slot of the sprite map.
The problem is that I have to update this map from time to time, when the asset for a slot becomes available. These assets are downloaded from the internet, but the initial information arrives in advance, so I can tell how many slots I will use and check local storage to see which ones are already available to be drawn at the start.
For example: my info says there will be 10 squares, and 5 of these are available locally, so when the sprite map is initialized those slots are already filled and ready to be drawn. On the screen I will show 10 squares; 5 of them will have the image stored in the texture map for those slots, and the remaining 5 are drawn with a temporary image. As a new asset for a slot is downloaded, I want to update my sprite map (which is bound and used for drawing) with the corresponding new texture. After the draw is finished and the sprite map has been updated, I set a flag which tells OpenGL that it should start drawing with that slot instead of the temporary image.
From what I have read, there are 3 ways to update a sprite map.
1) Upload a new one with glTexImage2D: I am currently using this approach. I create another, updated texture and then simply swap it in. But I frequently run into memory warnings.
2) Modify the texture with glTexSubImage2D: I can't get this to work; I keep getting memory access errors or black textures. I believe it's either because the thread is not the same or because I am accessing a texture that is in use.
3) Use Frame Buffer Objects: I could try this, but I am not certain whether I can draw to my texture buffer while it is already being used.
What is the correct way of solving this?
This is meant to be used on an iPhone so resources are limited.
Edit: I found this post which talks about something related here.
Unfortunately I don't think it's focused on modifying a texture that is currently being used.
the thread is not the same
The OpenGL ES API is absolutely not multi-threaded. Update your texture from the main thread.
Because your texture must be uploaded to the GPU, glTexSubImage2D is the fastest and simplest path. Keep going in this direction :)
Rendering to a framebuffer (with your texture attached) is very fast for rendering data that is already on the GPU (not your case). And yes, you can draw to a framebuffer bound to a texture (i.e. a framebuffer which uses the texture as its color attachment).
Just one constraint: you can't read and write the same texture in one draw call (the texture attached to the current framebuffer can't be bound to a texture unit).
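For example, a minimal Swift sketch of updating one 256x256 slot of the 2048x2048 atlas on the GL context's thread (atlasTexture, slot and newPixels - the RGBA bytes for that slot - are assumptions, not from the question):

let x = GLint((slot % 8) * 256)   // column of the slot in the 8x8 grid
let y = GLint((slot / 8) * 256)   // row of the slot

glBindTexture(GLenum(GL_TEXTURE_2D), atlasTexture)
newPixels.withUnsafeBytes { buffer in
    glTexSubImage2D(GLenum(GL_TEXTURE_2D), 0, x, y, 256, 256,
                    GLenum(GL_RGBA), GLenum(GL_UNSIGNED_BYTE), buffer.baseAddress)
}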
I'm in the process of writing my first few shaders, usually writing a shader to accomplish a feature as I realize that the main XNA library doesn't support it.
The trouble I'm running into is that not all of my models in a particular scene have texture data in them, and I can't figure out how to handle that. The main XNA libraries seem to handle it by using a wrapper class for BasicEffect, loading it through the content manager and selectively enabling or disabling texture processing accordingly.
How difficult is it to accomplish this for a custom shader? What I'm writing is a generic "hue shift" effect; that is, I want whatever gets drawn with this technique to have its texture colors (if any) and its vertex colors hue-shifted by a certain degree. Do I need to write separate shaders, one with textures and one without? If so, when I'm looping through my MeshParts, is there any way to detect whether a given part has texture coordinates so that I can apply the correct effect?
Yes, you will need separate shaders, or rather different "techniques" - it can still be the same effect and use much of the same code. You can see how BasicEffect (at least the pre-XNA 4.0 version) does it by reading the source code.
To detect whether or not a model mesh part has texture coordinates, try this:
// Note: this allocates an array, so do it at load-time
var elements = meshPart.VertexBuffer.VertexDeclaration.GetVertexElements();
bool result = elements.Any(e =>
e.VertexElementUsage == VertexElementUsage.TextureCoordinate);
The way the content pipeline sets up its BasicEffect is via BasicMaterialContent. The BasicEffect.TextureEnabled property is simply turned on if Texture is set.