I saw the following in the glReadPixels man page:
"format specifies the format of the returned pixel values; accepted values are:
GL_ALPHA
GL_RGB
GL_RGBA
RGBA color components are read from the color buffer. Each color component is converted to floating point such that zero intensity maps to 0.0 and full intensity maps to 1.0.
Unneeded data is then discarded. For example, GL_ALPHA discards the red, green, and blue components, while GL_RGB discards only the alpha component. GL_LUMINANCE computes a single-component value as the sum of the red, green, and blue components, and GL_LUMINANCE_ALPHA does the same, while keeping alpha as a second value. The final values are clamped to the range [0, 1]."
(from https://www.khronos.org/opengles/sdk/1.1/docs/man/glReadPixels.xml)
It works well if I use GL_RGBA. But if I change
glReadPixelsPBOJNI(0, 0, width, height, GLES30.GL_RGBA, GLES30.GL_UNSIGNED_BYTE, 0);
to
glReadPixelsPBOJNI(0, 0, width, height, GLES30.GL_RGB, GLES30.GL_UNSIGNED_BYTE, 0);
I get error 0x502 (GL_INVALID_OPERATION). What's wrong with this?
My code is here: https://stackoverflow.com/questions/34347835/how-can-i-implement-pbopixel-buffer-object-in-android-grafika-project
GL_RGB is not generally supported as a format for glReadPixels() in OpenGL ES. From the ES 3.0 spec:
Only two combinations of format and type are accepted in most cases. The first varies depending on the format of the currently bound rendering surface. For normalized fixed-point rendering surfaces, the combination format RGBA and type UNSIGNED_BYTE is accepted. [..]
The second is an implementation-chosen format from among those defined in table 3.2, excluding formats DEPTH_COMPONENT and DEPTH_STENCIL. The values of format and type for this format may be determined by calling GetIntegerv with the symbolic constants IMPLEMENTATION_COLOR_READ_FORMAT and IMPLEMENTATION_COLOR_READ_TYPE, respectively.
So unless you query the implementation-chosen format, and it comes back as GL_RGB, you cannot use GL_RGB. The only format you can use everywhere is GL_RGBA.
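As a concrete illustration, here is a minimal sketch (plain C on the native/JNI side, assuming a current ES 3.0 context; the variable names are mine) of what that query could look like. The same query is available from Java via GLES30.glGetIntegerv.

#include <GLES3/gl3.h>

GLint readFormat = 0, readType = 0;
glGetIntegerv(GL_IMPLEMENTATION_COLOR_READ_FORMAT, &readFormat);
glGetIntegerv(GL_IMPLEMENTATION_COLOR_READ_TYPE, &readType);
/* Only GL_RGBA/GL_UNSIGNED_BYTE and this implementation-chosen pair are
   guaranteed to be accepted by glReadPixels(). Use GL_RGB only if
   readFormat comes back as GL_RGB and readType matches your type. */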
The quote from the man page you found looks like a simple mistake. If you look at the top where the arguments are listed, it does not list GL_RGB as valid under format. So the information on that page is clearly inconsistent. Errors in the man pages are common. In case of doubt, you have to check the spec documents for more conclusive information.
I've made a program that creates images using OpenCL. In the OpenCL code I have to access the underlying data of the OpenCV image and modify it directly, but I don't know how the data is arranged internally.
I'm currently using CV_8U because the representation is really simple: 0 is black, 255 is white, and everything in between is a shade of gray. But I want to add color and I don't know what format to use.
This is how I currently modify the image: A[y*width + x] = 255;
Since your A[y*width + x] = 255; works fine, the underlying image data A must be a 1D pixel array of size width * height, where each element is a CV_8U (8-bit unsigned int).
The color values of a pixel, in the case of OpenCV, will be arranged B G R in memory. RGB order would be more common but OpenCV likes them BGR.
Your data ought to be CV_8UC3, which is the case if you use imread or VideoCapture. If it isn't, the following information needs to be interpreted accordingly.
Your array index math needs to expand to account for the data's layout:
[(y*width + x)*3 + channel]
The 3 is because there are 3 channels; channel is 0..2, and x and y are as you expect.
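For illustration, a small sketch (assuming A is the same 8-bit buffer as before, now holding CV_8UC3/BGR data) of what that indexing looks like:

/* channel 0 = blue, 1 = green, 2 = red in OpenCV's BGR layout */
A[(y * width + x) * 3 + 0] = 255;  /* blue  */
A[(y * width + x) * 3 + 1] = 0;    /* green */
A[(y * width + x) * 3 + 2] = 0;    /* red -> this pixel becomes pure blue */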
As mentioned in other answers, you'd need to convert this single-channel image to a 3-channel image to have color. The 3 channels are Blue, Green, Red (BGR).
OpenCV has a function that does just this: cv2.cvtColor(). It takes an input image (in this case, the single-channel image that you have) and a conversion code (see here for more).
So the code would be like the following:
color_image = cv2.cvtColor(source_image, cv2.COLOR_GRAY2BGR)
Then you can modify the color by accessing each of the color channels, e.g.
color_image[y, x, 0] = 255 # this changes the first channel (Blue)
If you create a color and then run [color isEqual:[NSKeyedUnarchiver unarchiveObjectWithData:[NSKeyedArchiver archivedDataWithRootObject:color]]], you'll find the answer may be NO, at least in my testing with values between 0 and 1 exclusive. The new color is very, very close, but it's not exactly the same as the old color, at least in terms of its internal representation. Perhaps it has the same actual output color though, considering it's using 64 bits to represent what only needs 8 bits.
Any ideas?
I think this has to do with the representation of the color when it is serialized. It might be represented as a float, and if so this could happen. I think you could avoid this if you implement your own serialization, using a string or something else.
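For example, a rough, untested sketch (Objective-C, assuming an RGBA-compatible UIColor) of the string-based approach; %.17g keeps enough digits to round-trip a double exactly:

CGFloat r, g, b, a;
[color getRed:&r green:&g blue:&b alpha:&a];
NSString *serialized = [NSString stringWithFormat:@"%.17g %.17g %.17g %.17g", r, g, b, a];

// Later, to restore the color:
NSArray *parts = [serialized componentsSeparatedByString:@" "];
UIColor *restored = [UIColor colorWithRed:[parts[0] doubleValue]
                                    green:[parts[1] doubleValue]
                                     blue:[parts[2] doubleValue]
                                    alpha:[parts[3] doubleValue]];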
I've searched a lot about this today and found some things, but I'm still confused.
For example, I have the following filter:
The result needs to be:
How can I apply it to my image?
I know how to apply effects such as CIPhotoEffectNoir or CIPhotoEffectChrome, but how can I apply this matrix (or whatever it should be called) to my UIImage?
Can anyone help me with a little example?
This will be just a hint of an answer for now; I'll come back with more details as I have time.
Your first image is a color lookup table (aka CLUT), sometimes also called a color cube. It's a representation of a three-dimensional array where the x, y, and z coordinates are the r, g, and b components of an input color, and the value at a given xyz coordinate is the output color for that particular rgb input. (Because it's being stored in a 2D image, the 3D table is split into slices.)
You can use a CLUT for filtering in Core Image with the CIColorCube filter. The trick to it is in converting your CLUT image to the right format to pass as a parameter to that filter.
You can find some examples of constructing (rather than converting an image to) a color cube in Apple's docs and elsewhere on SO.
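To give a flavor of the filter itself, here is a rough sketch (Objective-C, untested) along the lines of Apple's example: it builds a small identity cube programmatically, so the output equals the input; a real CLUT image would fill cubeData with its own values. inputCIImage stands in for your source CIImage.

const unsigned int size = 16;                      // cube dimension (size x size x size entries)
const unsigned int count = size * size * size * 4; // RGBA floats
float *cubeData = (float *)malloc(count * sizeof(float));
unsigned int i = 0;
for (unsigned int b = 0; b < size; b++) {
    for (unsigned int g = 0; g < size; g++) {
        for (unsigned int r = 0; r < size; r++) {
            cubeData[i++] = (float)r / (size - 1);  // red
            cubeData[i++] = (float)g / (size - 1);  // green
            cubeData[i++] = (float)b / (size - 1);  // blue
            cubeData[i++] = 1.0f;                   // alpha
        }
    }
}
NSData *cube = [NSData dataWithBytesNoCopy:cubeData
                                    length:count * sizeof(float)
                              freeWhenDone:YES];
CIFilter *colorCube = [CIFilter filterWithName:@"CIColorCube"];
[colorCube setValue:@(size) forKey:@"inputCubeDimension"];
[colorCube setValue:cube forKey:@"inputCubeData"];
[colorCube setValue:inputCIImage forKey:kCIInputImageKey];
CIImage *filtered = colorCube.outputImage;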
I'm trying to use the DepthBias property on the rasterizer state in DirectX 11 (D3D11_RASTERIZER_DESC) to help with the z-fighting that occurs when I render in wireframe mode over solid polygons (wireframe overlay), but setting it to any value doesn't seem to change the result at all. I also noticed something strange: the value is defined as an INT rather than a FLOAT. That doesn't make sense to me, and in any case it doesn't work as expected. How do we properly set that value if it is an INT that needs to be interpreted as a UNORM in the shader pipeline?
Here's what I do:
Render all geometry
Set the rasterizer to render in wireframe
Render all geometry again
I can clearly see the wireframe overlay, but the z-fighting is horrible. I tried setting DepthBias to a lot of different values, such as 0.000001, 0.1, 1, 10, 1000 and their negative equivalents, still with no results... Obviously, I'm aware that when a float is cast to an integer the decimals get truncated... meh?
D3D11_RASTERIZER_DESC RasterizerDesc;
ZeroMemory(&RasterizerDesc, sizeof(RasterizerDesc));
RasterizerDesc.FillMode = D3D11_FILL_WIREFRAME;
RasterizerDesc.CullMode = D3D11_CULL_BACK;
RasterizerDesc.FrontCounterClockwise = FALSE;
RasterizerDesc.DepthBias = ???
RasterizerDesc.SlopeScaledDepthBias = 0.0f;
RasterizerDesc.DepthBiasClamp = 0.0f;
RasterizerDesc.DepthClipEnable = TRUE;
RasterizerDesc.ScissorEnable = FALSE;
RasterizerDesc.MultisampleEnable = FALSE;
RasterizerDesc.AntialiasedLineEnable = FALSE;
Has anyone figured out how to set DepthBias properly? Or perhaps it is a bug in DirectX (which I doubt), or maybe there's a better way to achieve this than using DepthBias?
Thank you!
http://msdn.microsoft.com/en-us/library/windows/desktop/cc308048(v=vs.85).aspx
The meaning of the number depends on whether your depth buffer is UNORM or floating point. In most cases you're just looking for the smallest possible value that gets rid of your z-fighting rather than any specific value. Small values are a small bias, large values are a large bias, but how that equates to a shift numerically depends on the format of your depth buffer.
As for the values you've tried, anything less than 1 would have rounded to zero and had no effect. 1, 10, 1000 may simply not have been enough to fix the issue. In the case of a D24 UNORM depth buffer, the formula would suggest a depth bias of 1000 would offset depth by: 1000 * (1 / 2^24), which equals 0.0000596, a not very significant shift in z-buffering terms.
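In code form, a hedged sketch of that D24 UNORM case (the macro name is mine, not from the API):

/* DepthBias is expressed in multiples of the smallest representable depth value,
   roughly 1/2^24 for a 24-bit UNORM buffer, so to bias by d depth units: */
#define DEPTH_BIAS_D24_UNORM(d) ((int)((d) * (1 << 24)))

RasterizerDesc.DepthBias = DEPTH_BIAS_D24_UNORM(0.00006);  /* roughly the 1000 example above */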
Does a large value of 100,000 or 1,000,000 fix the z-fighting?
If anyone cares, I made myself a macro to make it easier. Note that this macro only works if you are using a 32-bit float depth buffer format; a different macro would be needed for other depth buffer formats.
#define DEPTH_BIAS_D32_FLOAT(d) ((d) / (1 / pow(2, 23)))
That way you can simply set your depth bias using standard values, such as:
RasterizerDesc.DepthBias = DEPTH_BIAS_D32_FLOAT(-0.00001);
I am trying to change the white point/white balance programmatically. This is what I want to accomplish:
- Choose a (random) pixel from the image
- Get color of that pixel
- Transform the image so that all pixels of that color will be transformed to white and all other colors shifted to match
I have accomplished the first two steps but the third step is not really working out.
At first I thought that, as per Apple's documentation, CIWhitePointAdjust should accomplish exactly that, but although it does change the image, it is not doing what I would like/expect it to do.
Then it seemed that CIColorMatrix should be something that would help me shift the colors, but I was (and still am) at a loss as to what to feed it in those pesky vectors.
I have tried almost everything: the same RGB values on all vectors; the corresponding value (R for R, etc.) on each vector; and variations of those values (1 - x, 1 + x, 1 / x).
I have also come across CITemperatureAndTint, which, as per Apple's documentation, should also help, but I have not yet figured out how to convert from RGB to temperature and tint. I have seen algorithms and formulas for converting from RGB to temperature, but nothing regarding tint. I will continue experimenting with this a little, though.
Any help much appreciated!
After a lot of experimenting and mathematics I finally got my app to work almost the way I want.
If anyone else finds themselves facing a similar problem, here is what I did.
I ended up using the CITemperatureAndTint filter, supplying a temperature in Kelvins calculated from the selected pixel's RGB value and a user-supplied tint value.
To get to Kelvins I:
- first converted RGB to XYZ using the D65 illuminant (i.e. daylight).
- then converted from XYZ to Yxy. Both of these conversions were done using the algorithms found on EasyRGB.
- then calculated Kelvins from Yxy using McCamy's formula, which I found in a paper here (sketched below).
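For that last step, a sketch (untested, plain C) of McCamy's approximation from the chromaticity (x, y) part of the Yxy value to a correlated color temperature in Kelvins:

/* McCamy's approximation, reasonable for typical daylight-ish chromaticities */
static double colorTemperatureKelvin(double x, double y) {
    double n = (x - 0.3320) / (0.1858 - y);
    return 449.0 * n * n * n + 3525.0 * n * n + 6823.3 * n + 5520.33;
}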
These steps got the image in the ballpark but not quite there, so I added a UISlider for the user to supply the tint value ranging from -100 to 100.
By selecting a point that should be white and choosing values from the positive side of the tint scale (the images on my phone tend to be more yellow), an image can now be converted to (more) neutral colors. Yay!
I supplied the calculated temperature and the user-chosen tint as the inputNeutral vector values, and 6500 (D65 daylight) and 0 as the inputTargetNeutral vector values to the CITemperatureAndTint filter.
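A rough sketch (Objective-C, untested) of that filter setup; kelvin is the temperature computed above, tint is the slider value, and inputCIImage is the source CIImage:

CIFilter *filter = [CIFilter filterWithName:@"CITemperatureAndTint"];
[filter setValue:inputCIImage forKey:kCIInputImageKey];
[filter setValue:[CIVector vectorWithX:kelvin Y:tint] forKey:@"inputNeutral"];
[filter setValue:[CIVector vectorWithX:6500 Y:0] forKey:@"inputTargetNeutral"];
CIImage *result = filter.outputImage;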