In Direct3D 9, I'm trying to modify a surface thus:
Given a rectangle, for each pixel of the surface within the rectangle's bounds, each channel (R, G, B, A) would be multiplied by a certain float value to either dim or brighten it.
How would I go about doing this? I'd prefer to avoid using LockRect (especially as it doesn't seem to work with the default pool).
If you want to update a surface's pixels directly, you can use "Device.UpdateTexture". This updates a texture created in Pool.SystemMemory to a texture created in Pool.Default.
But this doesn't sound like what you want to be doing. Use an Effect to hardware-accelerate this instead. If you would like to know how, I can show you.
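For illustration only, here is a minimal GLSL-style sketch of the per-pixel operation (the HLSL you would put inside a D3D9 Effect is structurally the same); the uniform names u_scene, u_rect and u_factor are made up for this example:
// Fragment shader sketch: multiply RGBA by u_factor inside u_rect, pass through elsewhere.
uniform sampler2D u_scene;   // the surface being modified, bound as a texture
uniform vec4      u_rect;    // target rectangle as (xMin, yMin, xMax, yMax) in [0,1] texture space
uniform float     u_factor;  // < 1.0 dims, > 1.0 brightens
varying vec2 v_texCoord;

void main()
{
    vec4 color = texture2D(u_scene, v_texCoord);
    bool inside = all(greaterThanEqual(v_texCoord, u_rect.xy)) &&
                  all(lessThanEqual(v_texCoord, u_rect.zw));
    gl_FragColor = inside ? color * u_factor : color;
}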
I'm working with 3D meshes (mostly triangle meshes, though occasionally quad- or general polygonal meshes) for which I compute a value for each edge. This value I'd like to visualise using a colour map, i.e. render each edge in a colour corresponding to its associated value.
Is there a way to assign values to edges in WebGL that is more efficient than using a typical drawArrays approach? That is, looping over the edges and storing the vertices pair-wise in a buffer (resulting in a lot of duplicated x, y, z coordinate data) and introducing an additional vertex attribute storing the edge value (the same value for both vertices)?
In OpenGL, I'd use a drawElements approach, store the edge values in a texture (or buffer texture), and then use gl_PrimitiveID in the fragment shader to look up the relevant edge value for the edge currently being processed. Unfortunately, WebGL doesn't know about gl_PrimitiveID, and I don't see a way to emulate it. I briefly thought about instanced rendering (using gl_InstanceID), but that would complicate things and probably end up not being much more efficient...
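For reference, a minimal GLSL sketch of the "duplicate everything" drawArrays fallback described above, with attribute and uniform names invented for the example; the same a_edgeValue is stored for both endpoints of each edge:
// Vertex shader: both endpoints of an edge carry the same a_edgeValue.
attribute vec3 a_position;
attribute float a_edgeValue;
uniform mat4 u_mvp;
varying float v_edgeValue;

void main() {
    v_edgeValue = a_edgeValue;
    gl_Position = u_mvp * vec4(a_position, 1.0);
}

// Fragment shader (separate file): look the value up in a 1xN colour-map texture.
precision mediump float;
uniform sampler2D u_colorMap;
varying float v_edgeValue;

void main() {
    gl_FragColor = texture2D(u_colorMap, vec2(v_edgeValue, 0.5));
}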
I'm using a Metal shader to draw many particles onto the screen. Each particle has its own position (which can change), and often two particles have the same position. How can I check whether a pixel has already been drawn at a certain position in the texture2d I write into? (I want to make sure that I only draw a particle at a position where no particle has been drawn yet, because I get ugly flickering when many particles are drawn at the same position.)
I've tried outTexture.read(particlePosition), but this obviously doesn't work, because of the texture access qualifier, which is access::write.
Is there a way I can have read and write access to a texture2d at the same time? (If there isn't, how could I still solve my problem?)
There are several approaches that could work here. In concurrent systems programming, what you're talking about is termed first-write wins.
1) If the particles only need to preclude other particles from being drawn (and aren't potentially obscured by other elements in the scene in the same render pass), you can write a special value to the depth buffer to signify that a fragment has already been written to a particular coordinate. For example, you'd turn on the depth test (using the depth compare function Less), clear the depth buffer to some distant value (like 1.0), and then write a value of 0.0 to the depth buffer from the fragment function. Any subsequent fragment at that pixel will fail the depth test and will not be drawn.
2) Use framebuffer read-back. On iOS, Metal allows you to read from the currently-bound primary renderbuffer by attributing a parameter to your fragment function with [[color(0)]]. This parameter will contain the current color value in the renderbuffer, which you can test against to determine whether it has been written to. This does require you to clear the texture to a predetermined color that will never otherwise be produced by your fragment function, so it is more limited than the above approach, and possibly less performant.
All of the above applies whether you're rendering to a drawable's texture for direct presentation to the screen, or to some offscreen texture.
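For what it's worth, a minimal GLSL-style sketch of option 1 (a Metal fragment function would return the depth via a [[depth(any)]] output instead; u_particleColor is a made-up placeholder for your real particle shading):
// Host setup (in whatever API you use): clear depth to 1.0, enable depth testing
// with compare function Less, and leave depth writes on.
uniform vec4 u_particleColor;

void main()
{
    gl_FragColor = u_particleColor;
    // Force every particle fragment to depth 0.0: the first fragment at a pixel
    // passes (0.0 < 1.0) and writes 0.0; every later one fails (0.0 < 0.0 is false),
    // so the first write wins.
    gl_FragDepth = 0.0;
}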
To answer the read-and-write part: you can specify read/write access for the output texture like so:
texture2d<float, access::read_write> outTexture [[texture(1)]],
Also, your texture descriptor must specify usage:
textureDescriptor?.usage = [.shaderRead, .shaderWrite]
I am trying to write a little script to apply a texture to rectangular cuboids. To accomplish this, I run through the scene graph, and wherever I find an SoIndexedFaceSet node, I insert an SoTexture2 node before it and put my image file in the SoTexture2 node. The problem I am facing is that the texture is applied correctly to 2 of the faces (say face1 and face2) in the Y-Z plane, but for the other 4 faces it just stretches the texture at the boundaries of the two faces (1 and 2).
It looks something like this.
The front is how it should look, but as you can see, on the other two faces, it just extrapolates the corner values of the front face. Any ideas why this is happening and any way to avoid this?
Yep, assuming that you did not specify texture coordinates for your SoIndexedFaceSet, that is exactly the expected behavior.
If Open Inventor sees that you have applied a texture image to a geometry and did not specify texture coordinates, it will automatically compute some texture coordinates. Of course it's not possible to guess how you wanted the texture to be applied. So it computes the bounding box then computes texture coordinates that stretch the texture across the largest extent of the geometry (XY, YZ or XZ). If the geometry is a cuboid you can see the effect clearly as in your image. This behavior can be useful, especially as a quick approximation.
What you need to make this work the way you want is to explicitly assign texture coordinates to the geometry so that the texture is mapped separately to each face. In Open Inventor you can still share the vertices between faces, because you are allowed to specify different vertex indices and texture coordinate indices (this is only a convenience for the application; OpenGL doesn't support it, so Open Inventor has to re-shuffle the data internally). If you applied the same texture to an SoCube node, you would see the texture mapped separately to each face as expected. That's because SoCube defines texture coordinates for each face.
What is the theory behind the Light Glow effect of "After Effects"?
I want to use GLSL to implement it, but even just getting closer to the theory behind it would help me replicate it.
I've recently been implementing something similar. My render pipeline looks something like this:
Render Scene to texture (full screen)
Filter scene ("bright pass") to isolate the high luminance, shiny bits
Down-sample (2) to a smaller texture (for performance) and do a horizontal Gaussian blur
Perform a vertical Gaussian blur on (3).
Blend the output from (4) with the output from (1)
Display to screen.
With some parameter tweaking, you can get it looking pretty nice. Google things like "bright pass" (a luminance threshold filter), Gaussian blur, FBO (Frame Buffer Objects) and so on. Effects like "bloom" and "HDR" also have a wealth of information about different ways of doing each of these things. I tried out about 4 different ways of doing Gaussian blur before settling on my current one.
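A rough GLSL sketch of the bright pass (2) and the separable Gaussian blur (3)/(4), assuming full-screen passes; the uniform names are my own and the 5-tap Gaussian weights are just one common choice:
// Bright pass (step 2): keep only pixels above a luminance threshold.
uniform sampler2D u_scene;
uniform float u_threshold;   // e.g. 0.8
varying vec2 v_uv;

void main()
{
    vec3 c = texture2D(u_scene, v_uv).rgb;
    float luma = dot(c, vec3(0.299, 0.587, 0.114));
    gl_FragColor = vec4(luma > u_threshold ? c : vec3(0.0), 1.0);
}

// Separable Gaussian blur (separate shader, steps 3 and 4): run once with
// u_direction = vec2(1.0/width, 0.0) for the horizontal pass on the down-sampled
// texture, then again with vec2(0.0, 1.0/height) for the vertical pass.
uniform sampler2D u_source;
uniform vec2 u_direction;
varying vec2 v_uv;

void main()
{
    float w[5];
    w[0] = 0.227027; w[1] = 0.194594; w[2] = 0.121621; w[3] = 0.054054; w[4] = 0.016216;
    vec3 sum = texture2D(u_source, v_uv).rgb * w[0];
    for (int i = 1; i < 5; ++i) {
        sum += texture2D(u_source, v_uv + float(i) * u_direction).rgb * w[i];
        sum += texture2D(u_source, v_uv - float(i) * u_direction).rgb * w[i];
    }
    gl_FragColor = vec4(sum, 1.0);
}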
Look at how to make shadow volumes, and instead of stenciling out a shadow, you could run a multi-pass blur on the volume, set its material to a very emissive, additive blended shader, and I imagine you'll get a similar effect.
Alternatively, you could do the GPUGems implementation:
I will answer my own question in case someone else gets to the same point. With more precision (actually 100% precision) I arrived at the exact After Effects glow. The way it works is:
Apply a Gaussian blur to the original image.
Extract the luma of this blurred image.
As in After Effects, you have two colors (A and B). The secret is to make a gradient map between these colors, according to the desired "Color Looping". If you don't know, a gradient map is an interpolation between colors (A and B in this case). Following After Effects' vocabulary, you need to loop X times over the "Color Looping" you chose... meaning that if you are using a Color Looping like A->B->A, it counts as one loop over your image (you can try this in Photoshop).
Take the luma you extracted in step 2 and use it as the parameter of your gradient map... in other words: luma = (0%, 50%, 100%) maps to color (A, B, A) respectively... the mid points are interpolated.
Blend your image with the original image according to the desired "Glow Operation" (Add, Multiply, etc.).
This procedure works like After Effects for every single pixel. The other details of the Glow can easily be added on top of this basic procedure... things like "Glow Intensity" and "Glow Threshold" need to be calibrated in order to get the same results with the same parameters.
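A minimal GLSL sketch of the luma, gradient-map and blend steps above (the Gaussian blur is assumed to have been done in an earlier pass); u_original, u_blurred, u_colorA, u_colorB and u_loops are names invented for this example, and Add is used as the "Glow Operation":
// Composite pass: luma of the blurred image -> A->B->A gradient map -> blend.
uniform sampler2D u_original;   // the unmodified source image
uniform sampler2D u_blurred;    // result of the Gaussian blur (step 1)
uniform vec3 u_colorA;          // glow colour A
uniform vec3 u_colorB;          // glow colour B
uniform float u_loops;          // "Color Looping" count, e.g. 1.0 for A->B->A
varying vec2 v_uv;

void main()
{
    // Step 2: luma of the blurred image.
    float luma = dot(texture2D(u_blurred, v_uv).rgb, vec3(0.299, 0.587, 0.114));

    // Step 3: gradient map A->B->A, repeated u_loops times over [0,1],
    // so luma = 0%, 50%, 100% maps to A, B, A for a single loop.
    float t = fract(luma * u_loops);        // position inside the current loop
    float ramp = 1.0 - abs(2.0 * t - 1.0);  // 0 at A, 1 at B, back to 0 at A
    vec3 glow = mix(u_colorA, u_colorB, ramp);

    // Step 5: "Glow Operation" = Add, as one example; Multiply etc. work the same way.
    vec3 base = texture2D(u_original, v_uv).rgb;
    gl_FragColor = vec4(base + glow, 1.0);
}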
If I have the vertex normals of a normal scene showing up as colours in a texture in world space, is there a way to calculate edges efficiently, or is it mathematically impossible? I know it's possible to calculate edges if you have the normals in view space, but I'm not sure whether it's possible when the normals are in world space (I've been trying to figure out a way for the past hour...).
I'm using DirectX with HLSL.
if ( dot( normalA, normalB ) < cos( maxAngleDiff ) )
then you have an edge. It won't be perfect but it will definitely find edges that other methods won't.
Or am I misunderstanding the problem?
Edit: how about simply high-pass filtering the image?
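A hedged GLSL sketch of that idea applied directly to the normal texture, assuming the world-space normals are packed into [0,1] colours; the texture and uniform names are illustrative and the HLSL version is a near-direct transliteration:
// Edge detection on a world-space normal texture: flag an edge wherever the
// normal changes sharply between neighbouring texels.
uniform sampler2D u_normals;     // world-space normals packed as colours
uniform vec2 u_texelSize;        // 1.0 / texture resolution
uniform float u_maxAngleDiff;    // radians; larger = fewer edges
varying vec2 v_uv;

vec3 fetchNormal(vec2 uv) {
    return normalize(texture2D(u_normals, uv).xyz * 2.0 - 1.0);  // unpack [0,1] -> [-1,1]
}

void main()
{
    vec3 n  = fetchNormal(v_uv);
    vec3 nx = fetchNormal(v_uv + vec2(u_texelSize.x, 0.0));
    vec3 ny = fetchNormal(v_uv + vec2(0.0, u_texelSize.y));
    float threshold = cos(u_maxAngleDiff);
    bool edge = dot(n, nx) < threshold || dot(n, ny) < threshold;
    gl_FragColor = vec4(vec3(edge ? 0.0 : 1.0), 1.0);  // black edges on white
}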
I assume you are trying to make cartoon-style edges for a cel shader?
If so, simply take the dot product of the world-space normal with the world-space pixel position minus the camera position. As long as your operands are all in the same space you should be OK.
float edgy = dot(world_space_normal, normalize(pixel_world_pos - camera_world_pos));
If edgy is near 0, it's an edge.
If you want a screen-space-sized edge, you will need to render additional object ID information to another surface and post-process the differences onto the color surface.
It will depend on how many colors your image contains and how they merge: sharp edges, dithered, blended, and so on.
Since you say you have the vertex normals, I am assuming that you can access the color information on a single plane.
I have used two techniques with varying success:
I searched the image for local areas of the same color (RGB) and then used the highest of R, G, or B to find the 'edge', that is, where the selected R, G, or B is no longer the highest value;
the second method I used was to reduce the image to 16 colors internally, which makes it easy to find the outlines.
Constructing vectors would then depend on how fine you want the granularity of your 'wireframe' image to be.