I'm working on an iPad app with OpenFrameworks and OpenGL ES 1.1. I need to display a video with an alpha channel. To simulate it, I have an RGB video (without any alpha channel) and a second video containing only the alpha channel (replicated on every RGB channel, so the white parts correspond to the visible parts and the black to the invisible). Each video is an OpenGL texture.
In OpenGL ES 1.1 there are no shaders, so I found this solution (here: OpenGL - mask with multiple textures):
glEnable(GL_BLEND);
// Use a simple blendfunc for drawing the background
glBlendFunc(GL_ONE, GL_ZERO);
// Draw entire background without masking
drawQuad(backgroundTexture);
// Next, we want a blendfunc that doesn't change the color of any pixels,
// but rather replaces the framebuffer alpha values with values based
// on the whiteness of the mask. In other words, if a pixel is white in the mask,
// then the corresponding framebuffer pixel's alpha will be set to 1.
glBlendFuncSeparate(GL_ZERO, GL_ONE, GL_SRC_COLOR, GL_ZERO);
// Now "draw" the mask (again, this doesn't produce a visible result, it just
// changes the alpha values in the framebuffer)
drawQuad(maskTexture);
// Finally, we want a blendfunc that makes the foreground visible only in
// areas with high alpha.
glBlendFunc(GL_DST_ALPHA, GL_ONE_MINUS_DST_ALPHA);
drawQuad(foregroundTexture);
It's exactly what I want to do, but glBlendFuncSeparate() doesn't exist in OpenGL ES 1.1 (or on iOS). I'm trying to do it with glColorMask, and I found this: Can't get masking to work correctly with OpenGL
But it doesn't work either, I guess because his mask texture file contains a 'real' alpha channel, while mine doesn't.
I highly suggest you compute a single RGBA texture instead.
This will be both easier and faster (because you're currently sending two RGBA textures each frame - yes, your RGB texture is in fact stored as RGBA by the hardware, with the A ignored).
glColorMask won't help you, because it simply turns writes to a channel on or off completely.
glBlendFuncSeparate could help you if you had it, but again, it's not a good solution: you'd be wasting your (very limited) iPhone bandwidth by sending twice as much data as needed.
UPDATE:
Since you're using OpenFrameworks, and according to its source code ( https://github.com/openframeworks/openFrameworks/blob/master/libs/openFrameworks/gl/ofTexture.cpp and https://github.com/openframeworks/openFrameworks/blob/master/libs/openFrameworks/video/ofVideoPlayer.cpp ) :
Use ofVideoPlayer::setUseTexture(false) so that ofVideoPlayer::update won't upload the data to video memory;
Get the video data with ofVideoPlayer::getPixels;
Interleave the results into an RGBA buffer (you can use a GL_RGBA ofTexture and ofTexture::loadData);
Draw using ofTexture::draw (this is what ofVideoPlayer does anyway).
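A sketch of steps 2-4, assuming colorVideo, maskVideo and rgbaTexture are hypothetical names, both videos have identical dimensions, and getPixels() returns tightly packed RGB data (three bytes per pixel):
unsigned char* rgb  = colorVideo.getPixels();   // RGB color frame
unsigned char* mask = maskVideo.getPixels();    // grayscale mask stored as RGB
int w = colorVideo.getWidth();
int h = colorVideo.getHeight();
vector<unsigned char> rgba(w * h * 4);
for (int i = 0; i < w * h; i++) {
    rgba[i * 4 + 0] = rgb[i * 3 + 0];
    rgba[i * 4 + 1] = rgb[i * 3 + 1];
    rgba[i * 4 + 2] = rgb[i * 3 + 2];
    rgba[i * 4 + 3] = mask[i * 3];              // white in the mask = opaque
}
rgbaTexture.loadData(&rgba[0], w, h, GL_RGBA);  // a GL_RGBA ofTexture
rgbaTexture.draw(0, 0);  // with glEnable(GL_BLEND) and glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA)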
Related
I encountered weird behaviour when trying to create a dissolve shader for iOS SpriteKit. I have this basic shader that, for now, only changes the alpha of the texture depending on the black value of a noise texture:
let shader = SKShader(source: """
void main() {
    vec4 colour = texture2D(u_texture, v_tex_coord);
    float noise = texture2D(noise_tex, v_tex_coord).r;
    gl_FragColor = colour * noise;
}
""", uniforms: [
    SKUniform(name: "noise_tex", texture: spriteSheet.textureNamed("dissolve_noise"))
])
Note that this code is called in the spriteSheet preload callback.
On the Simulator this consistently gives the expected result, i.e. a texture with different alpha values all over the place. On an actual iOS 14.5.1 device it varies:
1. Applied directly to an SKSpriteNode - it makes the whole texture semi-transparent, with a single value everywhere
2. Applied to an SKEffectNode with the SKSpriteNode as its child - I see a miniaturized part of the whole spritesheet
3. Same as 2, but the noise texture is created from an image outside the spritesheet - it works as on the Simulator (and as expected)
Why does it behave like this? Considering this needs to work on iOS 9 devices, I'm worried that case 3 won't work everywhere. So I'd like to understand why it happens, and ideally find a sure way to force case 1, or at least case 2, to work on all devices.
After some more testing I finally figured out what is happening. On devices, the textures in the shader are the whole spritesheet instead of separate textures, so the coordinates go all over the place (which actually makes more sense than the Simulator behaviour, now that I think about it).
So depending on whether I want case 1 or case 2, I need to apply different maths. Case 2 is easier, since the displayed texture is first rendered onto a buffer, so v_tex_coord covers the full [0.0, 1.0] range; all I need is the noise texture's rect to do the appropriate transform. For case 1 I additionally need to provide the sprite's texture rect, first remap v_tex_coord into [0.0, 1.0] myself, and then apply that to the noise coordinates (see the sketch below).
This will work whether the shader is given spritesheets or separate images; in the latter case it just does some unnecessary calculations.
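A sketch of the case 1 remap, with hypothetical uniforms: u_tex_rect holds the sprite's sub-rect in the atlas as (origin.x, origin.y, width, height) in normalized coordinates, and u_noise_rect holds the same for the noise texture:
void main() {
    // remap atlas coordinates into the sprite's local [0.0, 1.0] range
    vec2 local = (v_tex_coord - u_tex_rect.xy) / u_tex_rect.zw;
    // then map into the noise texture's own sub-rect
    vec2 noise_uv = u_noise_rect.xy + local * u_noise_rect.zw;
    vec4 colour = texture2D(u_texture, v_tex_coord);
    float noise = texture2D(noise_tex, noise_uv).r;
    gl_FragColor = colour * noise;
}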
I'm having trouble setting up blending in Metal. Even when starting with the Hello Triangle example provided by Apple, using the following code
pipelineStateDescriptor.colorAttachments[0].blendingEnabled = YES;
pipelineStateDescriptor.colorAttachments[0].sourceAlphaBlendFactor = MTLBlendFactorZero;
pipelineStateDescriptor.colorAttachments[0].destinationAlphaBlendFactor = MTLBlendFactorZero;
and the fragment function
fragment float4 fragmentShader(RasterizerData in [[stage_in]]) {
return float4(in.color.rgb, 0);
}
the triangle still draws completely opaque. What I want to achieve in the end is blending between two shapes by using different blending factors, but I thought I would start with a simple example to understand what is going on. What am I missing?
sourceAlphaBlendFactor and destinationAlphaBlendFactor have to do with constructing a blend for the alpha channel, i.e. they control the alpha that will be written into your destination buffer, which will not really be visible to you. You are probably more interested in the RGB that is written into the framebuffer.
Try setting values for sourceRGBBlendFactor and destinationRGBBlendFactor instead. For traditional alpha blending, set sourceRGBBlendFactor to MTLBlendFactorSourceAlpha and destinationRGBBlendFactor to MTLBlendFactorOneMinusSourceAlpha.
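For example, extending the pipeline descriptor setup from the question:
pipelineStateDescriptor.colorAttachments[0].blendingEnabled = YES;
pipelineStateDescriptor.colorAttachments[0].sourceRGBBlendFactor = MTLBlendFactorSourceAlpha;
pipelineStateDescriptor.colorAttachments[0].destinationRGBBlendFactor = MTLBlendFactorOneMinusSourceAlpha;
With these factors, a fragment returning alpha 0 comes out fully transparent and one returning alpha 1 fully opaque.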
I'm using GLKit for an iPad app. With this code I set up blending:
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
It works fine, but when I try to get a screenshot the blend mode seems wrong. It doesn't matter if I use GLKit's snapshot or glReadPixels.
This is what I get when working with the app:
And this is the screenshot:
Do I have to change the Blend Mode or something before I make the screenshot? And if so, to what?
The problem you are having most likely lies in how the image is generated from the RGBA data. To solve it, you will need to either skip the alpha channel when creating the CGImage (with kCGImageAlphaNoneSkipLast) or have the correct alpha values in the buffer in the first place.
To explain what is going on: your GL buffer consists of RGBA values, but only the RGB part is used to present it on screen; when you create the image, the alpha channel is used as well, hence the difference. How it comes to this is very simple; let's take a single pixel somewhere in the middle of the screen and go through its events:
You clear the pixel to any color you want
You overwrite the pixel (all 4 RGBA channels) with a solid color received from a texture, for instance (.8, .8, .8, 1.0)
You draw a color over that pixel with a smaller alpha value and try to blend it, for instance (.4, .4, .4, .25). Your blend function says to multiply the source color by the source alpha and the destination by 1 - source alpha. That results in (.4, .4, .4, .25)*.25 + (.8, .8, .8, 1.0)*.75 = (.7, .7, .7, .81)
Now the result (.7, .7, .7, .81) is displayed nicely, because your buffer only presents the RGB part, so you see (.7, .7, .7, 1.0); but when you use all 4 components to create the image, you also use the .81 alpha value, which is then used to blend the image itself. Therefore you need to skip the alpha part at some point.
There is another way: as you can see, in your case there is really no need to store the alpha value in the render buffer at all, since you never use it - your blend function only uses the source alpha. So you may just disable alpha writes with glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_FALSE); this also means you need to clear the alpha value to 1.0 (glClearColor(x, x, x, 1.0)).
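A minimal sketch of that second approach:
// Clear with alpha = 1.0 so the stored alpha is opaque everywhere
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);
// Stop writing to the alpha channel; RGB blending still works, because
// GL_SRC_ALPHA reads the incoming fragment's alpha, not the stored one
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_FALSE);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
// ... draw the scene; snapshot / glReadPixels now returns alpha = 1.0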
I'm just starting game development and I thought a game like Tank wars or Worms would be nice.
The hardest part I can think of so far is making the terrain destructible and I want to know how it's done before doing the easy parts.
I thought that an explosion could have a mask texture, which could be scaled for different weapons. Then, using that mask, I should make the underlying terrain transparent (and optionally draw a dark border).
(source: mikakolari.fi)
How do I achieve that?
Do I have to change the alpha value pixel by pixel or can I use some kind of masking technique? Drawing a blue circle on top of the terrain isn't an option.
I have versions 3.1 and 4.0 of XNA.
This tutorial is what you are searching for:
http://www.riemers.net/eng/Tutorials/XNA/Csharp/series2d.php
Chapter 20: Adding explosion craters
In short:
You have two textures: one color texture (visible) and one collision texture (invisible).
You subtract the explosion image from your collision texture.
To get the dark border, expand the explosion texture and darken the color in this area.
Now you generate a new color texture (old color - collision = new color).
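A minimal sketch of the subtraction step, assuming craterMask is a hypothetical Color[] array the same size as the terrain texture, white where the explosion removes ground:
Color[] terrain = new Color[terrainTexture.Width * terrainTexture.Height];
terrainTexture.GetData(terrain);
for (int i = 0; i < terrain.Length; i++)
{
    if (craterMask[i].R > 0)            // white in the mask = blown away
        terrain[i] = Color.Transparent; // old color - collision = nothing
}
terrainTexture.SetData(terrain);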
This is a difficult question to answer, because there are many ways you could do it, each with its own pros and cons. I'll just give an overview:
As an overall design, you need to keep track of: the original texture, the "darkness" applied, and the "transparency" applied. One thing I can say almost for sure is that you want to "accumulate" the results of the explosions somewhere - what you don't want to be doing is maintaining a list of all explosions that have ever happened.
So you have surfaces for texture, darkness and transparency. You could probably merge darkness and transparency into a single surface with a single channel that stores "normal", "dark" (or a level of darkness) and "transparent".
Because you probably don't want the dark rings to get progressively darker where they intersect, apply each explosion to your darkness layer with the max function (Math.Max in C#), as sketched below.
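A minimal sketch of that accumulation, assuming darkness and explosionDarkness are hypothetical same-sized byte arrays covering the terrain:
for (int i = 0; i < darkness.Length; i++)
    darkness[i] = Math.Max(darkness[i], explosionDarkness[i]); // overlaps don't stack darker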
To produce your final texture you could just write from the darkness/transparency texture to your original texture or a copy of it (you only need to update the area that each explosion touches).
Or you could use a pixel shader to combine them - the details of which are beyond the scope of this question. (Also a pixel shader won't work on XNA 4.0 on Windows Phone 7.)
You should make a new Texture2D with the alpha of the desired pixels set to 0.
Color[] bits = new Color[texture.Width * texture.Height];
texture.GetData(bits);
// Make every pixel covered by the explosion fully transparent
foreach (Vector2 pixel in overlappedArea)
{
    int x = (int)pixel.X;
    int y = (int)pixel.Y;
    bits[x + y * texture.Width] = Color.FromNonPremultiplied(0, 0, 0, 0);
}
Texture2D newTexture = new Texture2D(texture.GraphicsDevice, texture.Width, texture.Height);
newTexture.SetData(bits);
Now replace the last texture with the new Texture2D and you're good to go!
For more code about collision, or about changing texture pixel colors, see this page:
http://www.codeproject.com/Articles/328894/XNA-Sprite-Class-with-useful-methods
How can I alpha-blend only certain parts of a texture in DX9?
For example, like layers in Photoshop (or any other photo editing program that supports layers):
you can draw something in a layer (background filled with alpha), then place the layer over the original image (draw the texture on the screen), which results in the original image + ONLY the things I drew in the layer.
Yes, I know my English is not very "shiny".
Thank you very much, in advance!
P.S. The background of my texture IS filled with alpha.
So you have set up the alpha on the texture you wish to overlay such that 0 is transparent (i.e. shows what's underneath) and 1 is opaque (i.e. shows the overlay texture)?
If so, then you just need to set up a simple blend mode:
pDevice->SetRenderState( D3DRS_SRCBLEND, D3DBLEND_SRCALPHA );
pDevice->SetRenderState( D3DRS_DESTBLEND, D3DBLEND_INVSRCALPHA );
pDevice->SetRenderState( D3DRS_ALPHABLENDENABLE, TRUE );
Make sure you draw the background first! Also note that alpha values between 0 and 1 produce a linear interpolation between the background and the overlay texture.
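A minimal sketch of the draw order, assuming a hypothetical drawQuad() helper that binds a texture and renders a screen-aligned quad:
pDevice->SetRenderState( D3DRS_ALPHABLENDENABLE, FALSE );
drawQuad( backgroundTexture );  // opaque background first
pDevice->SetRenderState( D3DRS_SRCBLEND, D3DBLEND_SRCALPHA );
pDevice->SetRenderState( D3DRS_DESTBLEND, D3DBLEND_INVSRCALPHA );
pDevice->SetRenderState( D3DRS_ALPHABLENDENABLE, TRUE );
drawQuad( overlayTexture );     // blended by its per-pixel alpha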