There's a great page with some C code for the blend modes in Photoshop. What I want is to use some of these modes in my XNA application, in particular Overlay, Hue, and Saturation. Do you think this is possible with just the XNA blend functions and Blend enum, or will I need to create a shader for these effects?
Here's the link for the Photoshop blend mode math: http://www.nathanm.com/photoshop-blending-math/
First of all, here is a question that covers much of the same territory.
The problem is that the blend stage in a modern GPU is still very limited and fixed-function. You have only a few equations to choose from (add, subtract, min, max) plus a handful of blend factors as multipliers.
I'm pretty sure the blend modes you want cannot be implemented within this system. Overlay requires a conditional, which probably cannot be worked around, and Hue and Saturation require an HSV conversion, which cannot be done at all.
So the answer is, as you say, to create a shader that takes two textures as inputs and combines them using your custom blending mode. If you want to apply the effect on top of an entire scene, you will want to use render targets to render the scene to a texture that can be fed to your shader.
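For the Hue mode in particular, here is a rough GLSL-flavoured sketch of such a two-texture pass (for XNA you would write the equivalent HLSL in an Effect; the texture names are made up, and this uses an HSV swap, whereas Photoshop's Hue mode is actually defined in terms of an HSL/luminance model, so treat it as an approximation):

uniform sampler2D baseTexture;   // the backdrop, e.g. your scene rendered to a render target
uniform sampler2D blendTexture;  // the layer supplying the hue
varying vec2 vTexCoord;

// standard RGB<->HSV helper functions
vec3 rgb2hsv(vec3 c) {
    vec4 K = vec4(0.0, -1.0 / 3.0, 2.0 / 3.0, -1.0);
    vec4 p = mix(vec4(c.bg, K.wz), vec4(c.gb, K.xy), step(c.b, c.g));
    vec4 q = mix(vec4(p.xyw, c.r), vec4(c.r, p.yzx), step(p.x, c.r));
    float d = q.x - min(q.w, q.y);
    float e = 1.0e-10;
    return vec3(abs(q.z + (q.w - q.y) / (6.0 * d + e)), d / (q.x + e), q.x);
}

vec3 hsv2rgb(vec3 c) {
    vec4 K = vec4(1.0, 2.0 / 3.0, 1.0 / 3.0, 3.0);
    vec3 p = abs(fract(c.xxx + K.xyz) * 6.0 - K.www);
    return c.z * mix(K.xxx, clamp(p - K.xxx, 0.0, 1.0), c.y);
}

void main() {
    vec3 base  = rgb2hsv(texture2D(baseTexture,  vTexCoord).rgb);
    vec3 blend = rgb2hsv(texture2D(blendTexture, vTexCoord).rgb);
    // hue from the blend layer, saturation and value from the base layer
    gl_FragColor = vec4(hsv2rgb(vec3(blend.x, base.y, base.z)), 1.0);
}

Overlay is simpler still: per channel it is 2 * base * blend where the base is below 0.5, and 1 - 2 * (1 - base) * (1 - blend) otherwise, so it needs nothing beyond a conditional (or a step/mix) in the same kind of shader.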
The task is to show the glasses' shadow on the user's face. Right now there is no shadow under the glasses. AnchorEntity(.face) is being used as the main anchor for the glasses.
(Screenshots: how it works now vs. how it should work.)
Limited Raytracing options for glass
In RealityKit 2.0 there are very limited ray-tracing options for transparent and semi-transparent objects (like glasses, vases, or windows), and there are no properties that control how ray tracing should work. Remember, RealityKit's renderer isn't the same as Arnold in Autodesk Maya, for example, so there are no robust semi-transparent shadows behind glasses in RealityKit. Only the frames cast opaque shadows, and these shadows are insignificant, barely noticeable.
Solution I
The first solution for this situation is to use baked shadows (fake shadows) on the texture of a canonical face mesh. But, of course, with this approach you can't "cast" shadows on the real user's eyes to get a robust shading experience.
Solution II
To shade the real user's eyes and the areas around them in an AR app, you need to create two alpha-channel masks that apply a lower intensity to the eyes and the areas around them. To change the intensity of certain areas of the background video, you need to use the compositing methods (CI filters) available in the Core Image framework.
I would like to do something like this:
Have the camera on, tap on the screen to get the color of that area, and then replace that color with a texture. I have done something similar by replacing the color on the screen with another color (which is still not working quite right), but replacing it with a texture is more complex, I think.
So please, can somebody give me a hint on how I can do this?
Also, how do I create the texture?
Thank you,
Alex
Basically, you will want to do this with a boolean operation in the fragment shader.
You'll need to feed two textures to the shader: one is the camera image, the other is the replacement image. Then you need a function that determines whether the per-fragment color from the camera texture is within a certain color range (which you choose) and, depending on that, shows either the camera texture or the other texture.
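For example, a rough GLSL ES sketch of that decision might look like the following (all of the names here are made up, and a plain RGB distance is the crudest possible test; keying in YUV or HSV usually works better):

precision mediump float;

uniform sampler2D cameraTexture;       // the live camera frame
uniform sampler2D replacementTexture;  // the texture to show where the picked color matches
uniform vec3 keyColor;                 // the color picked by the tap, in 0..1 RGB
uniform float threshold;               // how close a pixel must be to count as a match
varying vec2 vTexCoord;

void main() {
    vec4 camera = texture2D(cameraTexture, vTexCoord);
    float dist = distance(camera.rgb, keyColor);
    vec4 replacement = texture2D(replacementTexture, vTexCoord);
    gl_FragColor = (dist < threshold) ? replacement : camera;
}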
Your question is a bit vague, so you should try to break it down into smaller problems. The tricky part, if you haven't done this before, is getting the OpenGL boilerplate code right.
You need to know:
how to write, compile and use basic GLSL shaders
how to load images into OpenGL textures and use them in your shaders (search for sampler2D)
A good first step is to do the following:
Figure out how to show a texture as a flat fullscreen image using 2D geometry. You'll need to render two triangles and map the texture coordinates (UV) onto the triangle vertices.
If you follow this tutorial, you'll be able to do what you want:
http://www.raywenderlich.com/70208/opengl-es-pixel-shaders-tutorial
I'm trying to implement a particle system (using OpenGL ES 2.0), where each particle is made up of a quad with a simple texture:
The red pixels are transparent. Each particle will have a random alpha value from 50% to 100%.
Now the tricky part is that I'd like each particle to have a blend mode much like Photoshop's "Overlay". I've tried many different combinations with glBlendFunc(), but without luck.
I don't understand how I could implement this in a fragment shader, since I need information about the current color of the fragment, so that I can calculate a new color based on the current color and the texture color.
I also thought about using a framebuffer object, but I guess I would need to re-render my framebuffer object into a texture for each particle, since I need the calculated fragment color when particles overlap each other.
I've found the math and other information regarding the Overlay calculation, but I have a hard time figuring out which direction to take to implement it.
http://www.pegtop.net/delphi/articles/blendmodes/
Photoshop blending mode to OpenGL ES without shaders
I'm hoping to have an effect like this:
You can get information about the current fragment color in the framebuffer on an iOS device. Programmable blending has been available through the EXT_shader_framebuffer_fetch extension since iOS 6.0 (on every device supported by that release). Just declare that extension in your fragment shader (by putting the directive #extension GL_EXT_shader_framebuffer_fetch : require at the top) and you'll get current fragment data in gl_LastFragData[0].
And then, yes, you can use that in the fragment shader to implement any blending mode you like, including all the Photoshop-style ones. Here's an example of a Difference blend:
// compute srcColor earlier in shader or get from varying
gl_FragColor = abs(srcColor - gl_LastFragData[0]);
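And since the original goal here was Overlay, a rough sketch of that mode on top of framebuffer fetch could look like this (the per-channel select is done with step/mix instead of a branch; treat it as an untested outline):

// dst is whatever is already in the framebuffer, src is this fragment's own color
vec3 dst = gl_LastFragData[0].rgb;
vec3 src = srcColor.rgb;
vec3 multiplied = 2.0 * src * dst;
vec3 screened   = 1.0 - 2.0 * (1.0 - src) * (1.0 - dst);
// multiply where the backdrop is dark, screen where it is bright
gl_FragColor = vec4(mix(multiplied, screened, step(0.5, dst)), srcColor.a);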
You can also use this extension for effects that don't blend two colors. For example, you can convert an entire scene to grayscale -- render it normally, then draw a quad with a shader that reads the last fragment data and processes it:
mediump float luminance = dot(gl_LastFragData[0], vec4(0.30,0.59,0.11,0.0));
gl_FragColor = vec4(luminance, luminance, luminance, 1.0);
You can do all sorts of blending modes in GLSL without framebuffer fetch, but that requires rendering to multiple textures, then drawing a quad with a shader that blends the textures. Compared to framebuffer fetch, that's an extra draw call and a lot of schlepping pixels back and forth between shared and tile memory -- this method is a lot faster.
On top of that, there's no saying that framebuffer data has to be color... if you're using multiple render targets in OpenGL ES 3.0, you can read data from one and use it to compute data that you write to another. (Note that the extension works differently in GLSL 3.0, though. The above examples are GLSL 1.0, which you can still use in an ES3 context. See the spec for how to use framebuffer fetch in a #version 300 es shader.)
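For what it's worth, my reading of the extension spec is that in a #version 300 es shader you drop gl_LastFragData and instead qualify the fragment output itself as inout, roughly like this (a hedged, untested sketch):

#version 300 es
#extension GL_EXT_shader_framebuffer_fetch : require
precision mediump float;

uniform sampler2D srcTexture;
in vec2 vTexCoord;
inout vec4 fragColor;   // reading this gives the value currently in the framebuffer

void main() {
    vec4 src = texture(srcTexture, vTexCoord);
    fragColor = abs(src - fragColor);   // the same Difference blend as above
}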
I suspect you want this configuration:
Source: GL_SRC_ALPHA
Destination: GL_ONE
Equation: GL_ADD
If not, it might be helpful if you could explain the math of the filter you're hoping to get.
[EDIT: the answer below is true for OpenGL and OpenGL ES pretty much everywhere except iOS since 6.0. See rickster's answer for information about EXT_shader_framebuffer_fetch which, in ES 3.0 terms, allows a target buffer to be flagged as inout, and introduces a corresponding built-in variable under ES 2.0. iOS 6.0 is over a year old at the time of writing so there's no particular excuse for my ignorance; I've decided not to delete the answer because it's potentially valid to those finding this question based on its opengl-es, opengl-es-2.0 and shader tags.]
To confirm briefly:
the OpenGL blend modes are implemented in hardware and occur after the fragment shader has concluded;
you can't programmatically specify a blend mode;
you're right that the only workaround is to ping pong, swapping the target buffer and a source texture for each piece of geometry (so you draw from the first to the second, then back from the second to the first, etc).
Per Wikipedia and the link you provided, Photoshop's Overlay mode is defined so that, for a background value a and a foreground value b, the output pixel f(a, b) is 2ab if a < 0.5, and 1 - 2(1 - a)(1 - b) otherwise.
So the blend mode changes per pixel depending on the colour already in the colour buffer. And each successive draw's decision depends on the state the colour buffer was left in by the previous.
So there's no way you can avoid writing that as a ping pong.
The closest you're going to get without all that expensive buffer swapping is probably, as Sorin suggests, to try to produce something similar using purely additive blending. You could juice that a little by adding a final ping-pong stage that converts all values from their linear scale to the S-curve that you'd see if you overlaid the same colour onto itself. That should give you the big variation where multiple circles overlap.
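For that last suggestion, "overlaying a colour onto itself" is just f(a, a) from the formula above, so the final fullscreen pass could look roughly like this (a sketch; sceneTexture is an assumed name for the additively blended result):

precision mediump float;

uniform sampler2D sceneTexture;  // the additively accumulated particles/circles
varying vec2 vTexCoord;

void main() {
    vec3 a = texture2D(sceneTexture, vTexCoord).rgb;
    // f(a, a): 2a^2 below 0.5, 1 - 2(1 - a)^2 above, an S-curve through (0.5, 0.5)
    vec3 low  = 2.0 * a * a;
    vec3 high = 1.0 - 2.0 * (1.0 - a) * (1.0 - a);
    gl_FragColor = vec4(mix(low, high, step(0.5, a)), 1.0);
}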
I'm trying to render 2 (light) circles in OpenGL ES in 2D. The middle is white, the border is black. It works fine, as long as they don't overlap:
But as soon as they do, I get this artifact:
I'm using glBlendFunc(GL_ONE, GL_ONE) with blending enabled of course.
What could be causing this? Is there a way to fix it?
I'd like them to blend more like this:
Thanks!
Are your circles currently linear gradients? You might get less of an artifact if you use a different curve.
Based on your example, though, it looks like you want the maximum intensity of the two circles, not the sum of the intensities. It appears that Apple's OpenGL ES 2.0 implementation supports the EXT_blend_minmax extension, which lets you specify that the resulting fragment values should be the maximum of the incoming and existing values. Maybe try that?
The result you're seeing is exactly what should come out for linear gradients. Hint: open up Photoshop or GIMP, draw two radial gradients into two layers, and set them to "Addition" blending mode. It will look exactly like your picture.
An effect like the one you desire can be achieved with squared gradients. If your gradient is in the range 0…1, take the square of the value and draw that. You may apply a sqrt in a later pass if you want to linearize the individual gradients again.
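A hedged sketch of what that looks like per fragment, assuming the gradient is computed from the distance to the circle's centre:

precision mediump float;

varying vec2 vLocalPos;  // position within the quad, circle centre at (0, 0), radius 1

void main() {
    float g = clamp(1.0 - length(vLocalPos), 0.0, 1.0);  // linear falloff, 1 at the centre
    float sq = g * g;                                     // the squared gradient suggested above
    gl_FragColor = vec4(vec3(sq), 1.0);                   // still blended additively by the fixed-function stage
}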
Note that this is not something easily done using the blending stage; it can be done using multiple passes, but then it's actually more straightforward to use a shader to combine passes from two FBOs.
Hi, I'm using FireMonkey because of its cross-platform capabilities. I want to render a particle system. Right now I'm using a TMesh, which works well enough to display the particles quickly. Each particle is represented in the mesh by two textured triangles. Using different texture coordinates, I can show many different particle types with the same mesh. The problem is that every particle can have its own transparency/opacity, and with my current approach I cannot set the transparency individually for each triangle (or even vertex). What can I do?
I realized that there are some other properties in TMesh.Data.VertexBuffer, like Diffuse or other sets of texture coordinates (TexCoord1-3), but these properties are not used (not even initialized) in TMesh. It also seems that it isn't easy to change this behavior by inheriting from TMesh; it seems one has to inherit from a lower-level control to initialize the VertexBuffer with more properties. Before I try that, I'd like to ask whether it would be possible to control the transparency of a triangle that way. E.g. can I set a transparent color (Diffuse) or use a transparent texture (TexCoord1)? Or is there a better way to draw the particles in FireMonkey?
I admit that I don't know much about that particular framework, but you shouldn't normally be able to change transparency via the vertex points of a 3D model; the points are usually just x, y, z coordinates. The vertex data would, however, have an effect on how the sprites are lit if you are using a lighting system, and you can also use the vertex information to apply different transparency effects.
Now, there's probably a dozen different ways to do this. Usually you have a texture with different degrees of alpha values that can be set at runtime. Graphics APIs usually have some filtering function that can quickly apply values to sprites/textures, and a good one will use your graphics chip if available.
If you can use an effect, it's usually better since the nuclear way is to make a bunch of different copies of a sprite and then apply effects to them individually. If you are using Gouraud Shading, then it gets easier since Gouraud uses code to fill in texture information.
Now, are you using light particles? Some graphics APIs actually have code that makes light particles.
Edit: I just remembered vertex shaders, which could do this.
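I don't know FireMonkey's shader plumbing, so purely as a generic GLSL illustration of the idea (a per-vertex colour, i.e. what the Diffuse channel could carry, whose alpha modulates the particle texture):

// vertex shader
attribute vec4 aPosition;
attribute vec2 aTexCoord;
attribute vec4 aColor;        // per-vertex diffuse colour; alpha carries the particle opacity
uniform mat4 uMVP;
varying vec2 vTexCoord;
varying vec4 vColor;

void main() {
    vTexCoord = aTexCoord;
    vColor = aColor;
    gl_Position = uMVP * aPosition;
}

// fragment shader
precision mediump float;
uniform sampler2D uTexture;
varying vec2 vTexCoord;
varying vec4 vColor;

void main() {
    vec4 texel = texture2D(uTexture, vTexCoord);
    gl_FragColor = vec4(texel.rgb, texel.a * vColor.a);  // per-particle opacity
}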