I am building a photo editing application with Metal on iOS. I have a texture containing an image. I want a tool where, when the user taps the texture, I take a square area around the tapped point, read the colors in that area, and convert them to grayscale.
I know we can read a texture's pixel data in a kernel function. Is it possible to read the pixel data in a fragment shader and implement the above scenario?
What you are describing is the HelloCompute Metal example provided by Apple. Just download it and take a look at how a texture is rendered and how a shader can be used to convert color pixels to grayscale. The BasicTexturing example also shows how to do a plain texture render on its own.
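For a rough idea of the fragment-shader approach, here is a sketch written in GLSL ES (the language used by the other answers on this page); a Metal fragment function would be structurally similar, and uTapPoint and uHalfSize are hypothetical uniforms you'd set from the tap gesture:
precision mediump float;
uniform sampler2D uImage;     // the image texture
uniform vec2 uTapPoint;       // tap location in 0..1 texture coordinates
uniform float uHalfSize;      // half the side length of the square, in UV units
varying vec2 vTexCoord;
void main() {
    vec4 color = texture2D(uImage, vTexCoord);
    // Inside the square around the tap, replace the color with its luminance.
    if (all(lessThan(abs(vTexCoord - uTapPoint), vec2(uHalfSize)))) {
        float gray = dot(color.rgb, vec3(0.30, 0.59, 0.11));
        color = vec4(gray, gray, gray, color.a);
    }
    gl_FragColor = color;
}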
Related
I have loaded textures into memory and I want to draw them in one draw call. I can put all the texture coordinates into a buffer, but how do I create one texture from small texture parts? Is that possible?
Or must I download the images, combine them, and then create a texture from the combined big picture?
In general, combining images into a texture atlas is something you'd do offline, either manually in an image editing program or with custom or specialized tools. That's the most common and recommended approach.
If you have to do it at runtime for some reason, then the easiest way to combine images into a single texture is to first load all your images, then use the canvas 2D API to draw them into a 2D canvas, and then use that canvas as the source for texImage2D in WebGL. The only issue with using a 2D canvas is that, if you need data other than images, the 2D canvas only supports pre-multiplied alpha.
Otherwise, doing it in WebGL is just a matter of rendering your smaller textures into a larger texture. Rendering to a texture requires creating the texture, attaching it to a framebuffer, and then rendering like you would anything else. See this for rendering to a texture and this for rendering any part of an image to any place in the canvas or another texture.
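The shader pair used for each such blit can be very simple. Here is a minimal sketch, where uOffset and uScale are hypothetical uniforms giving the destination rectangle in clip space:
// Vertex shader: places a unit quad at the destination rectangle in the atlas.
attribute vec2 aPosition;   // unit-quad corner in 0..1
uniform vec2 uOffset;       // destination origin in clip space (-1..1)
uniform vec2 uScale;        // destination size in clip-space units
varying vec2 vTexCoord;
void main() {
    vTexCoord = aPosition;
    gl_Position = vec4(uOffset + aPosition * uScale, 0.0, 1.0);
}
// Fragment shader: copies the small texture into the attached target texture.
precision mediump float;
uniform sampler2D uSmallTexture;
varying vec2 vTexCoord;
void main() {
    gl_FragColor = texture2D(uSmallTexture, vTexCoord);
}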
I'm trying to use GPUImage to implement a histogram in my app. The example project on the GPUImage GitHub called FilterShowcase comes with a good histogram generator, but due to the UI design of the app I'm making, I'll need to write my own custom graph to display the histogram values. Does anyone know how I can get the RGB values from the GPUImageHistogramFilter so I can pop them into my own graph?
The GPUImageHistogramFilter produces a 256x3 image where the center 256x1 line contains the red, green, and blue values for the histogram, packed into the RGB channels. iOS doesn't support a 1-pixel-high render framebuffer, so I have to pad it out to three pixels high.
The GPUImageHistogramGenerator creates the visible histogram overlay you see in the sample applications, and it does that by taking in the 256x3 image and rendering an output image using a custom shader that colors in bars whose height depends on the input color value. It's a quick, on-GPU implementation.
If you want to do something more custom that doesn't use a shader, you can extract the histogram values by using a GPUImageRawDataOutput and pulling out the RGB components of the center 256x1 line. From there, you could draw the rest of your interface overlay, although anything done using Core Graphics may chew a lot of processing power if it has to update on every frame.
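For illustration, the bar-drawing idea described above could look roughly like the following GLSL fragment shader. This is only a sketch, not GPUImage's actual shader; uHistogram stands for the 256x3 histogram image, and any scaling or normalization of the counts is omitted:
precision mediump float;
uniform sampler2D uHistogram;   // the 256x3 histogram image
varying vec2 vTexCoord;         // x selects the bin, y is the height position
void main() {
    // Sample the center line of the histogram image at this bin.
    vec3 counts = texture2D(uHistogram, vec2(vTexCoord.x, 0.5)).rgb;
    // Light up each channel's bar where this fragment lies below its value.
    vec3 bar = step(vec3(vTexCoord.y), counts);
    gl_FragColor = vec4(bar, 1.0);
}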
I would like to do something like this:
Have the camera on and tap the screen to get the color of that area, then replace that color with a texture. I have done something similar by replacing the color on the screen with another color (though that is still not working quite right), but replacing it with a texture is more complex, I think.
So please, can somebody give me a hint on how I can do this?
Also, how do I create the texture?
Thank you,
Alex
Basically, you will want to do this with a conditional (boolean) test in the fragment shader.
You'll need to feed two textures to the shader: one is the camera image, the other is the replacement image. Then you need a function that determines whether the per-fragment color from the camera texture is within a certain color range (which you choose), and depending on that either shows the camera texture or the other texture (there's a sketch of such a test after the tutorial link below).
Your question is a bit vague, so try to break it down into smaller problems. The tricky part, if you haven't done this before, is getting the OpenGL boilerplate code right.
You need to know:
how to write, compile, and use basic GLSL shaders
how to load images into OpenGL textures and use them in your shaders (search for sampler2D)
A good first step is to do the following:
figure out how to show a texture as a flat fullscreen image using 2D geometry. You'll need to render two triangles and map the texture's coordinates (UV) onto the triangle corners.
If you follow this tutorial you'll be able to do the thing you want:
http://www.raywenderlich.com/70208/opengl-es-pixel-shaders-tutorial
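As a starting point, the color-range test described above might look roughly like this in GLSL, where uTargetColor and uThreshold are hypothetical uniforms you'd set from the tapped pixel:
precision mediump float;
uniform sampler2D uCamera;       // the live camera frame
uniform sampler2D uReplacement;  // the replacement texture
uniform vec3 uTargetColor;       // the color picked by the tap
uniform float uThreshold;        // how close a color must be to get replaced
varying vec2 vTexCoord;
void main() {
    vec4 cam = texture2D(uCamera, vTexCoord);
    vec4 rep = texture2D(uReplacement, vTexCoord);
    // Replace the camera color wherever it is close enough to the target.
    float isMatch = step(distance(cam.rgb, uTargetColor), uThreshold);
    gl_FragColor = mix(cam, rep, isMatch);
}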
I'm trying to implement a particle system (using OpenGL ES 2.0), where each particle is made up of a quad with a simple texture.
The red pixels are transparent. Each particle will have a random alpha value from 50% to 100%.
Now the tricky part is that I'd like each particle to have a blend mode much like Photoshop's "overlay". I tried many different combinations with glBlendFunc(), but without luck.
I don't understand how I could implement this in a fragment shader, since I need information about the current color of the fragment, so that I can calculate a new color based on the current and texture colors.
I also thought about using a framebuffer object, but I guess I would need to re-render my framebuffer object into a texture for each particle, every frame, since I need the already-blended fragment color where particles overlap each other.
I've found the math and other information regarding the overlay calculation, but I have a hard time figuring out which direction to go to implement this.
http://www.pegtop.net/delphi/articles/blendmodes/
Photoshop blending mode to OpenGL ES without shaders
I'm hoping to achieve an effect like this:
You can get information about the current fragment color in the framebuffer on an iOS device. Programmable blending has been available through the EXT_shader_framebuffer_fetch extension since iOS 6.0 (on every device supported by that release). Just declare that extension in your fragment shader (by putting the directive #extension GL_EXT_shader_framebuffer_fetch : require at the top) and you'll get current fragment data in gl_LastFragData[0].
And then, yes, you can use that in the fragment shader to implement any blending mode you like, including all the Photoshop-style ones. Here's an example of a Difference blend:
// compute srcColor earlier in shader or get from varying
gl_FragColor = abs(srcColor - gl_LastFragData[0]);
You can also use this extension for effects that don't blend two colors. For example, you can convert an entire scene to grayscale -- render it normally, then draw a quad with a shader that reads the last fragment data and processes it:
mediump float luminance = dot(gl_LastFragData[0], vec4(0.30,0.59,0.11,0.0));
gl_FragColor = vec4(luminance, luminance, luminance, 1.0);
You can do all sorts of blending modes in GLSL without framebuffer fetch, but that requires rendering to multiple textures, then drawing a quad with a shader that blends the textures. Compared to framebuffer fetch, that's an extra draw call and a lot of schlepping pixels back and forth between shared and tile memory -- this method is a lot faster.
On top of that, there's no saying that framebuffer data has to be color... if you're using multiple render targets in OpenGL ES 3.0, you can read data from one and use it to compute data that you write to another. (Note that the extension works differently in GLSL 3.0, though. The above examples are GLSL 1.0, which you can still use in an ES3 context. See the spec for how to use framebuffer fetch in a #version 300 es shader.)
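Since the question is about Photoshop's Overlay mode specifically, here's a rough sketch of that blend written against gl_LastFragData; as above, srcColor stands for the particle color computed earlier in the shader, and folding the particle's alpha in with a mix is just one reasonable choice:
// Overlay: 2ab where the background is below 0.5, else 1 - 2(1 - a)(1 - b).
vec3 dst = gl_LastFragData[0].rgb;
vec3 low  = 2.0 * dst * srcColor.rgb;
vec3 high = 1.0 - 2.0 * (1.0 - dst) * (1.0 - srcColor.rgb);
vec3 overlay = mix(high, low, step(dst, vec3(0.5)));
// Fade the effect by the particle's alpha, leaving the background elsewhere.
gl_FragColor = vec4(mix(dst, overlay, srcColor.a), 1.0);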
I suspect you want this configuration:
Source: GL_SRC_ALPHA
Destination: GL_ONE.
Equation: GL_ADD
If not, it might be helpful if you could explain the math of the filter you're hoping to get.
[EDIT: the answer below is true for OpenGL and OpenGL ES pretty much everywhere except iOS since 6.0. See rickster's answer for information about EXT_shader_framebuffer_fetch which, in ES 3.0 terms, allows a target buffer to be flagged as inout, and introduces a corresponding built-in variable under ES 2.0. iOS 6.0 is over a year old at the time of writing so there's no particular excuse for my ignorance; I've decided not to delete the answer because it's potentially valid to those finding this question based on its opengl-es, opengl-es-2.0 and shader tags.]
To confirm briefly:
the OpenGL blend modes are implemented in hardware and occur after the fragment shader has concluded;
you can't programmatically specify a blend mode;
you're right that the only workaround is to ping pong, swapping the target buffer and a source texture for each piece of geometry (so you draw from the first to the second, then back from the second to the first, etc).
Per Wikipedia and the link you provided, Photoshop's overlay mode is defined so that the output for a background value a and a foreground value b, f(a, b), is 2ab if a < 0.5 and 1 - 2(1 - a)(1 - b) otherwise.
So the blend mode changes per pixel depending on the colour already in the colour buffer. And each successive draw's decision depends on the state the colour buffer was left in by the previous.
So there's no way you can avoid writing that as a ping pong.
The closest you're going to get without all that expensive buffer swapping is probably, as Sorin suggests, to try to produce something similar using purely additive blending. You could juice that a little by adding a final ping-pong stage that converts all values from their linear scale to the S-curve that you'd see if you overlaid the same colour onto itself. That should give you the big variation where multiple circles overlap.
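If you do take the ping-pong route, the per-pixel decision above maps directly onto a fragment shader that samples the previous target as a texture. A minimal sketch, with uBackground and uForeground as hypothetical sampler names:
precision mediump float;
uniform sampler2D uBackground;  // the buffer you last rendered into
uniform sampler2D uForeground;  // the layer being composited on top
varying vec2 vTexCoord;
void main() {
    vec3 a = texture2D(uBackground, vTexCoord).rgb;
    vec3 b = texture2D(uForeground, vTexCoord).rgb;
    // f(a, b) = 2ab where a < 0.5, otherwise 1 - 2(1 - a)(1 - b).
    vec3 result = mix(1.0 - 2.0 * (1.0 - a) * (1.0 - b),
                      2.0 * a * b,
                      step(a, vec3(0.5)));
    gl_FragColor = vec4(result, 1.0);
}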
I'm relatively new to WebGL, and OpenGL too for that matter, but in recent days I've filled up most my time writing a little game for it. However, when I wanted to implement something like heat waves, or any sort of distortion, I was left stuck.
Now, I can make a texture ripple using the fragment shader, but I feel like I'm missing something when it comes to distorting the content behind an object. Is there any way to grab the color of a pixel that's already been rendered within the fragment shader?
I've tried rendering to a texture and then using that as the object's texture, but it appears that if you choose to render your scene to a texture, you cannot also render it to the screen. And beyond that, if you want to render to a texture, that texture must have power-of-two dimensions (which many screen resolutions don't match).
Any help would be appreciated.
You're going to have to render to a texture and draw that texture onto the screen while distorting it. Also, there's no requirement that framebuffer objects must be of a power-of-two size in OpenGL ES 2.0 (which is the graphics API WebGL uses). But non-power-of-two textures can't have mipmapping or texture-wrapping.
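For the distortion pass itself, the fragment shader just perturbs the coordinates it uses to sample the scene texture. A minimal sketch, where uScene is assumed to be the texture the scene was rendered into and uTime a hypothetical animation uniform:
precision mediump float;
uniform sampler2D uScene;   // the scene, previously rendered to this texture
uniform float uTime;        // animation time driving the ripple
varying vec2 vTexCoord;
void main() {
    // Offset the lookup coordinates with a small sine wave to fake heat haze.
    vec2 offset = vec2(sin(vTexCoord.y * 40.0 + uTime) * 0.005, 0.0);
    gl_FragColor = texture2D(uScene, vTexCoord + offset);
}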
I believe you can modify individual canvas pixels directly. That might be a good way to ripple a small area, but it might not be GPU-accelerated.