Removing R from RGB Color

Is there a way to remove the red channel of an RGB pixel in a way such that in the resulting picture the red color goes to white not to black? I need to distinguish between red color and blue/black color, but in different light the RGB value varies. If I simply remove the R channel, darker red colors become black and I want the opposite result.
Thanks!

If I understand you correctly, you need to normalize the red channel value and then use it as a mixing value:
mix = R / 255
Then mix white into the original color minus the red channel, using the mix factor (the left-hand term is the red-removed original, the right-hand term is white):
R' = 0 + 255 * mix
G' = G * (1 - mix) + 255 * mix
B' = B * (1 - mix) + 255 * mix
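For example, a dark red pixel (200, 40, 40) gives mix = 200/255 ≈ 0.78, so R' ≈ 200 and G' ≈ B' ≈ 40 * 0.22 + 255 * 0.78 ≈ 208: the redder the input, the closer the result is to white.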
Just note that this will wash out yellow and magenta colors as well, since those of course also use the red channel, and the more red, the more white is mixed in.
You should be able to get around this by using the CMYK color model, or a combination of both, so you can separate out all the main components; then override the mix with e.g. the yellow/magenta components from CMYK.
The mixing process should be the same as described though.
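To make that concrete, here is a minimal C++ sketch of the idea (my own illustration, not code from this answer; the max(G, B) damping is a crude stand-in for a full CMYK separation):

struct Pixel { unsigned char r, g, b; };

// Red goes to white, but pixels that are also strong in green or blue
// (yellows, magentas, near-whites) keep more of their original color.
Pixel redToWhite(Pixel p) {
    float mix = p.r / 255.0f;                        // base mix from the red channel
    mix *= 1.0f - (p.g > p.b ? p.g : p.b) / 255.0f;  // damp the mix for yellow/magenta
    auto toWhite = [&](float c) {
        return (unsigned char)(c * (1 - mix) + 255 * mix);
    };
    return Pixel{ toWhite(0), toWhite(p.g), toWhite(p.b) }; // R' starts from 0
}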
Conceptual demo
var ctx = c.getContext("2d");
var img = new Image;
img.onload = function() {
  c.width = img.width;
  c.height = img.height;
  ctx.drawImage(this, 0, 0);

  var idata = ctx.getImageData(0, 0, c.width, c.height),
      data = idata.data, len = data.length, i, mix;

  /* mix = R / 255
     R = 0 + 255 * mix
     G = G * (1 - mix) + 255 * mix
     B = B * (1 - mix) + 255 * mix */
  for (i = 0; i < len; i += 4) {
    mix = data[i] / 255;                                // mix using red
    data[i]     = 255 * mix;                            // red channel
    data[i + 1] = data[i + 1] * (1 - mix) + 255 * mix;  // green channel
    data[i + 2] = data[i + 2] * (1 - mix) + 255 * mix;  // blue channel
  }
  ctx.putImageData(idata, 0, 0);
};
img.crossOrigin = "";
img.src = "//i.imgur.com/ptOPQZx.png";
document.body.appendChild(img);
<h4>Red removed + to white</h4><canvas id=c></canvas><h4>Original:</h4>

Related

Failing to properly initialize a 2D texture from memory in Direct3D 11

I am trying to produce a simple array in system memory that represents an R8G8B8A8 texture, and then transfer that texture to GPU memory.
First, I allocate an array and fill it with the desired green color data:
frame.width = 3;
frame.height = 1;
auto components = 4;
auto length = components * frame.width * frame.height;
frame.data = new uint8_t[length];
frame.data[0 + 0 * frame.width] = 0; frame.data[1 + 0 * frame.width] = 255; frame.data[2 + 0 * frame.width] = 0; frame.data[3 + 0 * frame.width] = 255;
frame.data[0 + 1 * frame.width] = 0; frame.data[1 + 1 * frame.width] = 255; frame.data[2 + 1 * frame.width] = 0; frame.data[3 + 1 * frame.width] = 255;
frame.data[0 + 2 * frame.width] = 0; frame.data[1 + 2 * frame.width] = 255; frame.data[2 + 2 * frame.width] = 0; frame.data[3 + 2 * frame.width] = 255;
Then, I create the texture object and set it as the pixel shader resource:
D3D11_TEXTURE2D_DESC textureDescription;
textureDescription.Width = frame.width;
textureDescription.Height = frame.height;
textureDescription.MipLevels = textureDescription.ArraySize = 1;
textureDescription.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
textureDescription.SampleDesc.Count = 1;
textureDescription.SampleDesc.Quality = 0;
textureDescription.Usage = D3D11_USAGE_DYNAMIC;
textureDescription.BindFlags = D3D11_BIND_SHADER_RESOURCE;
textureDescription.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;
textureDescription.MiscFlags = 0;
D3D11_SUBRESOURCE_DATA initialTextureData;
initialTextureData.pSysMem = frame.data;
initialTextureData.SysMemPitch = frame.width * components;
initialTextureData.SysMemSlicePitch = 0;
DX_CHECK(m_device->CreateTexture2D(&textureDescription, &initialTextureData, &m_texture));
DX_CHECK(m_device->CreateShaderResourceView(m_texture, NULL, &m_textureView));
m_context->PSSetShaderResources(0, 1, &m_textureView);
My expectation is that GPU memory will contain a 3x1 green texture and that each texel will have 1.0f in the alpha channel. However, this is not the case, as can be seen by examining the loaded texture object in the Visual Studio Graphics Debugger.
Could someone explain what is happening? How can I fix this?
Let's take a look at your array addressing scheme (indices evaluated with the dimensions you provided):
frame.data[0] = 0; frame.data[1] = 255; frame.data[2] = 0; frame.data[3] = 255;
frame.data[3] = 0; frame.data[4] = 255; frame.data[5] = 0; frame.data[6] = 255;
frame.data[6] = 0; frame.data[7] = 255; frame.data[8] = 0; frame.data[9] = 255;
Re-ordering, we get
data[ 0] = 0                  B  pixel 1
data[ 1] = 255                G  pixel 1
data[ 2] = 0                  R  pixel 1
data[ 3] = 0   (overwritten)  A  pixel 1
data[ 4] = 255                   pixel 2
data[ 5] = 0
data[ 6] = 0
data[ 7] = 255
data[ 8] = 0                     pixel 3
data[ 9] = 255
data[10] = undefined
data[11] = undefined
As you see, this is exactly the data that your debugger shows you.
So you just need to modify your addressing scheme. The correct formula would be:
index = component + x * components + y * pitch,
where you defined a dense packing with
pitch = width * components
In order to derive this formula, think about how many entries you have to skip when you increase one of the variables. When you increase the current component, you just step one entry further, because all components sit right next to each other. If you increase the y-coordinate, on the other hand, you have to skip as many entries as there are in a row; this count is called the pitch, which for a dense packing is the width of the image multiplied by the number of components. A corrected fill loop is sketched below.
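A minimal sketch of the corrected fill loop (C++; frame and components are the names from the question):

auto pitch = frame.width * components;            // bytes per row (dense packing)
for (int y = 0; y < frame.height; ++y) {
    for (int x = 0; x < frame.width; ++x) {
        auto base = x * components + y * pitch;   // first byte of this texel
        frame.data[base + 0] = 0;                 // B (the format is B8G8R8A8)
        frame.data[base + 1] = 255;               // G
        frame.data[base + 2] = 0;                 // R
        frame.data[base + 3] = 255;               // A
    }
}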

How to randomly change color? Dart and StageXL

for (int i = 1; i < 10; i++) {
  var shape = new Shape();
  shape.graphics.beginPath();
  shape.graphics.moveTo(r1.nextInt(1500), r2.nextInt(1500));
  shape.graphics.lineTo(r3.nextInt(1500), r4.nextInt(1500));
  shape.graphics.strokeColor(Color.Green);
  shape.graphics.closePath();
  stage.addChild(shape);
}
How can I randomly change the color of the line?
In StageXL, colors are actually just integers. So, as @bp74 says, you can do:
var a = 255; // Assuming you want full opacity every time.
var r = random.nextInt(256); // red = 0..255
var g = random.nextInt(256); // green = 0..255
var b = random.nextInt(256); // blue = 0..255
var color = (a << 24) + (r << 16) + (g << 8) + b;
shape.graphics.strokeColor(color);
This is assuming you have random defined somewhere above:
var random = new Random();
Note that you most probably don't need several Random instances (like you have with r1, r2, r3 and r4, I assume).
The color in StageXL (and Flash) is a 32-bit ARGB value: 4 x 8 bits for the alpha, red, green and blue color channels. The following example shows the color 0xFFFFFFFF, which is white:
var a = 255; // alpha = 0..255
var r = 255; // red = 0..255
var g = 255; // green = 0..255
var b = 255; // blue = 0..255
var color = (a << 24) + (r << 16) + (g << 8) + b;

HSV values using openCV or javaCV

I want to track a color within an image. I use the following code (javaCV):
// Load the initial image.
iplRGB = cvLoadImage(imageFile, CV_LOAD_IMAGE_UNCHANGED);
// Prepare for HSV.
iplHSV = cvCreateImage(iplRGB.cvSize(), iplRGB.depth(), iplRGB.nChannels());
// Transform RGB to HSV.
cvCvtColor(iplRGB, iplHSV, CV_BGR2HSV);
// Define a region of interest.
//minRow = 0; maxRow = iplHSV.height();
//minCol = 0; maxCol = iplHSV.width();
minRow = 197; minCol = 0; maxRow = 210; maxCol = 70;
// Print the HSV values for each pixel of the region.
for (int y = minRow; y < maxRow; y++) {
  for (int x = minCol; x < maxCol; x++) {
    CvScalar pixelHsv = cvGet2D(iplHSV, y, x);
    double h = pixelHsv.val(0);
    double s = pixelHsv.val(1);
    double v = pixelHsv.val(2);
    String line = y + "," + x + "," + h + "," + s + "," + v;
    System.out.println(line);
  }
}
I can easily find out the minimum and maximum for hue and saturation from the output. Let's call them minHue, minSat, maxHue and maxSat (not fancy, eh!). Then I execute this code:
iplMask = cvCreateImage(iplHSV.cvSize(), iplHSV.depth(), 1);
CvScalar min = cvScalar(minHue, minSat, 0, 0);
CvScalar max = cvScalar(maxHue, maxSat, 255 ,0);
cvInRangeS(iplHSV, min, max, iplMask);
When I show iplMask, shouldn't I see the region of interest entirely white? I don't: the contour is white, but the inside of the rectangle is black. I must be messing something up, but I do not understand what.
I know that hue is in [0..179] with OpenCV and that saturation and value are in [0..255], but since I use the values displayed by OpenCV, I would think I do not have to rescale...
Anyway, I am lost. Can somebody explain? Thanks.

How to deal with RGB to YUV conversion

The formula says:
Y = 0.299 * R + 0.587 * G + 0.114 * B;
U = -0.14713 * R - 0.28886 * G + 0.436 * B;
V = 0.615 * R - 0.51499 * G - 0.10001 * B;
What if, for example, the U value becomes negative?
U = -0.14713 * R - 0.28886 * G + 0.436 * B
Assume maximum values for R and G (both 1) and B = 0.
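Worked through: with R = G = 1 and B = 0, U = -0.14713 - 0.28886 = -0.43599; with only B = 1, U = +0.436. So U spans roughly [-0.436, +0.436], not a non-negative range.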
I am interested in implementing this conversion function in OpenCV, so how should I deal with the negative values? By using a float image? Please explain; maybe I am missing something.
Y, U and V are all allowed to be negative when represented as decimals, according to the YUV color plane.
You can convert RGB <-> YUV in OpenCV with cvtColor, using the code CV_YCrCb2RGB for YUV -> RGB and CV_RGB2YCrCb for RGB -> YUV.
void cvCvtColor(const CvArr* src, CvArr* dst, int code)
Converts an image from one color space to another.
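As a minimal sketch of that call with the C++ API (the file name is illustrative; note that for 8-bit images OpenCV offsets the Cr/Cb channels by 128, so no negative values appear in the result):

#include <opencv2/opencv.hpp>

int main() {
    cv::Mat bgr = cv::imread("input.png");          // OpenCV loads images as BGR
    cv::Mat ycrcb, back;
    cv::cvtColor(bgr, ycrcb, cv::COLOR_BGR2YCrCb);  // 8-bit: chroma stored as value + 128
    cv::cvtColor(ycrcb, back, cv::COLOR_YCrCb2BGR); // round-trip back to BGR
    return 0;
}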
For planar formats, OpenCV is not the right tool for the job; instead you are better off using ffmpeg (libswscale). For example, the following
static void yuvToRgb(uint8_t* src, uint8_t* dst, int width, int height)
{
    // Source: the three YUV420P planes; Y is width*height bytes,
    // U and V are a quarter of that each, so V starts at width*height*5/4.
    uint8_t* src_planes[3] = { src, src + width * height, src + width * height * 5 / 4 };
    int src_stride[3] = { width, width / 2, width / 2 };
    // Destination: a single packed RGB32 plane.
    uint8_t* dst_planes[3] = { dst, NULL, NULL };
    int dst_stride[3] = { width * 4, 0, 0 };
    struct SwsContext* img_convert_ctx = sws_getContext(
        width, height, PIX_FMT_YUV420P,
        width, height, PIX_FMT_RGB32,
        SWS_POINT, NULL, NULL, NULL);
    sws_scale(img_convert_ctx, src_planes, src_stride, 0, height,
              dst_planes, dst_stride);
    sws_freeContext(img_convert_ctx);
}
will convert a YUV420 image to RGB32

How is a sepia tone created?

What are the basic operations needed to create a sepia tone? My reference point is the Perl ImageMagick library, so I can easily use any basic operation. I've tried quantizing (making it grayscale), colorizing, and then enhancing the image, but it's still a bit blurry.
Sample code of a sepia converter in C# is available in my answer here: What is wrong with this sepia tone conversion algorithm?
The algorithm comes from this page, each input pixel color is transformed in the following way:
outputRed = (inputRed * .393) + (inputGreen *.769) + (inputBlue * .189)
outputGreen = (inputRed * .349) + (inputGreen *.686) + (inputBlue * .168)
outputBlue = (inputRed * .272) + (inputGreen *.534) + (inputBlue * .131)
If any of these output values is greater than 255, you simply set it to 255. These specific values are the values for sepia tone that are recommended by Microsoft.
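As a quick sanity check of those weights: a mid-gray input pixel (128, 128, 128) becomes roughly (128 * 1.351, 128 * 1.203, 128 * 0.937) = (173, 154, 120), the warm brown you expect from sepia.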
This is in C#; however, the basic concepts are the same, and you should be able to convert it into Perl.
private void SepiaBitmap(Bitmap bmp)
{
    Rectangle rect = new Rectangle(0, 0, bmp.Width, bmp.Height);
    System.Drawing.Imaging.BitmapData bmpData = bmp.LockBits(rect, System.Drawing.Imaging.ImageLockMode.ReadWrite,
        System.Drawing.Imaging.PixelFormat.Format32bppRgb);
    IntPtr ptr = bmpData.Scan0;
    int numPixels = bmpData.Width * bmpData.Height;
    int numBytes = numPixels * 4;
    byte[] rgbValues = new byte[numBytes];
    System.Runtime.InteropServices.Marshal.Copy(ptr, rgbValues, 0, numBytes);
    for (int i = 0; i < rgbValues.Length; i += 4)
    {
        // Read the original channels first (the byte order is B, G, R, A).
        byte blue = rgbValues[i + 0];
        byte green = rgbValues[i + 1];
        byte red = rgbValues[i + 2];
        // Math.Min already clamps each result to 255, so no further check is needed.
        rgbValues[i + 2] = (byte)Math.Min((.393 * red) + (.769 * green) + (.189 * blue), 255.0); // red
        rgbValues[i + 1] = (byte)Math.Min((.349 * red) + (.686 * green) + (.168 * blue), 255.0); // green
        rgbValues[i + 0] = (byte)Math.Min((.272 * red) + (.534 * green) + (.131 * blue), 255.0); // blue
    }
    System.Runtime.InteropServices.Marshal.Copy(rgbValues, 0, ptr, numBytes);
    bmp.UnlockBits(bmpData);
    this.Invalidate();
}
It's easy if you use the ImageMagick command line:
http://www.imagemagick.org/script/convert.php
Use the -sepia-tone threshold option when converting, e.g. convert input.jpg -sepia-tone 80% output.jpg.
Strangely enough, the PerlMagick API doesn't seem to include a method for doing this directly:
http://www.imagemagick.org/script/perl-magick.php
...and there is no reference to any Sepia method.
Take a look at how it's implemented in the AForge.NET library; the C# code is here.
The basics seem to be:
- transform the image to the YIQ color space
- modify it
- transform back to RGB
The full algorithm is in the source code, and the RGB -> YIQ and YIQ -> RGB transformations are explained there.
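For orientation, here is a rough C++ sketch of that idea (the fixed I and Q values are my own illustrative choices in the spirit of AForge's filter, not the exact constants from its source):

#include <algorithm>
#include <cstdint>

// Keep luminance (Y), replace chrominance (I, Q) with a fixed sepia tint,
// then convert back to RGB with the standard inverse NTSC matrix.
void sepiaPixel(uint8_t& r, uint8_t& g, uint8_t& b) {
    double y = 0.299 * r + 0.587 * g + 0.114 * b; // RGB -> Y
    double i = 51.0;                              // warm orange chrominance (assumed)
    double q = 0.0;                               // no purple/green component
    auto clamp = [](double v) {
        return (uint8_t)std::min(std::max(v, 0.0), 255.0);
    };
    r = clamp(y + 0.956 * i + 0.621 * q);
    g = clamp(y - 0.272 * i - 0.647 * q);
    b = clamp(y - 1.106 * i + 1.703 * q);
}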
