Suppose we have the following color:
const Scalar TRANSPARENT2 = Scalar(255, 0, 255, 0);
which is magenta but fully transparent: alpha = 0 (fully opaque would be 255).
Now I made the following test based on:
http://blogs.msdn.com/b/lucian/archive/2015/12/04/opencv-first-version-up-on-nuget.aspx
WriteableBitmap^ Grabcut::TestTransparent()
{
    // Fill a 4-channel (BGRA) Mat with the transparent magenta color
    Mat res(400, 400, CV_8UC4);
    res.setTo(TRANSPARENT2);

    // Get raw access to the WriteableBitmap's pixel buffer
    WriteableBitmap^ wbmp = ref new WriteableBitmap(res.cols, res.rows);
    IBuffer^ buffer = wbmp->PixelBuffer;
    unsigned char* dstPixels;
    ComPtr<IBufferByteAccess> pBufferByteAccess;
    ComPtr<IInspectable> pBuffer((IInspectable*)buffer);
    pBuffer.As(&pBufferByteAccess);
    pBufferByteAccess->Buffer(&dstPixels);

    // Copy the Mat's data into the bitmap (element size * cols * rows bytes)
    memcpy(dstPixels, res.data, res.step.buf[1] * res.cols * res.rows);
    return wbmp;
}
The issue I have is that the image created is not fully transparent; it still has some alpha:
I understand there is a flaw in the memcpy step, but I am not really sure how to solve it. Any idea how to get it to alpha 0?
More details:
To verify, I wanted to save the image so I could read it back and test whether it works. I saw that the imwrite documentation contains a snippet about transparency, like in the image, but imwrite is not implemented yet in this build. In any case, the transparency approach is not working either.
Can anyone shed some light on this snippet?
Thanks.
Finally I did the conversion in the C# code; the first step is to avoid calling CreateAlphaMat.
Then I used a BitmapEncoder to convert the data:
WriteableBitmap wb = new WriteableBitmap(bitmap.PixelWidth, bitmap.PixelHeight);

using (IRandomAccessStream stream = new InMemoryRandomAccessStream())
{
    // Encode the raw BGRA pixels as PNG, declaring the pixel data as premultiplied
    BitmapEncoder encoder = await BitmapEncoder.CreateAsync(BitmapEncoder.PngEncoderId, stream);
    Stream pixelStream = bitmap.PixelBuffer.AsStream();
    byte[] pixels = new byte[pixelStream.Length];
    await pixelStream.ReadAsync(pixels, 0, pixels.Length);
    encoder.SetPixelData(BitmapPixelFormat.Bgra8, BitmapAlphaMode.Premultiplied,
        (uint)bitmap.PixelWidth, (uint)bitmap.PixelHeight, 96.0, 96.0, pixels);
    await encoder.FlushAsync();

    wb.SetSource(stream);
}
this.MainImage.Source = wb;
where bitmap is the WriteableBitmap from the OpenCV result. And now the image is fully transparent.
NOTE: Do not use a MemoryStream wrapped with .AsRandomAccessStream(), because FlushAsync will not work on it.
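For what it's worth, my guess at the underlying cause: WriteableBitmap's pixel buffer is composited as premultiplied BGRA, while the OpenCV Mat holds straight (non-premultiplied) alpha. With Scalar(255, 0, 255, 0) the color channels stay at 255 even though alpha is 0, so some magenta still shows through. A minimal sketch of premultiplying the buffer by hand before the copy (assuming pixels holds BGRA8 data with straight alpha):

// Hypothetical helper: convert straight-alpha BGRA8 to premultiplied alpha in place
static void PremultiplyBgra(byte[] pixels)
{
    for (int i = 0; i < pixels.Length; i += 4)
    {
        byte a = pixels[i + 3];
        pixels[i]     = (byte)(pixels[i]     * a / 255); // B
        pixels[i + 1] = (byte)(pixels[i + 1] * a / 255); // G
        pixels[i + 2] = (byte)(pixels[i + 2] * a / 255); // R
    }
}

With alpha = 0 this zeroes the color channels, which is presumably why declaring BitmapAlphaMode.Premultiplied in the encoder above fixes the problem.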
I need to allow the user to choose a color on iOS.
I use the following code to fire up the color picker:
var picker = new UIColorPickerViewController();
picker.SupportsAlpha = true;
picker.Delegate = this;
picker.SelectedColor = color.ToUIColor();
PresentViewController(picker, true, null);
When the color picker displays, the color is always slightly off. For example:
input RGBA: (220, 235, 92, 255)
the initial color in the color picker might be:
selected color: (225, 234, 131, 255)
(these are real values from tests). Not a long way off... but enough to notice if you are looking for it.
I was wondering if the color picker grid was forcing the color to the
nearest color entry - but if that were true, you would expect certain colors to
stay fixed (i.e. if the input color exactly matches one of the grid colors,
it should stay unchanged). That does not happen.
P.S. I store colors in a cross-platform fashion using simple RGBA values.
ToUIColor converts to a local UIColor using:
new UIColor((nfloat)rgb.r, (nfloat)rgb.g, (nfloat)rgb.b, (nfloat)rgb.a);
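// (Assumption: the rgb components are already normalized to 0-1, which is what the UIColor constructor expects.)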
From the hints in comments by @DonMag, I've got some way towards an answer, and also a set of resources that can help if you are struggling with this.
The key challenge is that macOS and iOS use Display P3 as the display color space, but most code uses the default {UI,NS,CG}Color objects, which use the sRGB color space (technically they are Extended sRGB, so they can cover the wider gamut of Display P3). If you want to know the difference between these three, there are resources below.
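To make "Extended" concrete: components outside [0, 1] are allowed. A minimal sketch of Display P3's pure red expressed in Extended sRGB, with component values that are my own rough approximations (derived from the sRGB-to-P3 matrix mentioned below):

// Display P3 (1, 0, 0) expressed in Extended sRGB: note the out-of-range components
var p3Red = new CGColor(
    CGColorSpace.CreateWithName("kCGColorSpaceExtendedSRGB"),
    new nfloat[] { 1.093f, -0.227f, -0.150f, 1.0f });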
When you use the UIColorPickerViewController, it allows the user to choose colors in DisplayP3 color space (I show an image of the picker below, and you can see the "Display P3 Hex Colour" at the bottom).
If you give it a color in sRGB, I think it gets converted to DisplayP3. When you read the color, you need to convert back to sRGB, which is the step I missed.
However, I found that using CGColor.CreateByMatchingToColorSpace to convert from Display P3 to sRGB never quite worked. In the code below I convert to Display P3 and back, and should have gotten my original color back, but I never did. I tried removing gamma by converting to a linear space on the way, but that didn't help.
cg = new CGColor(...values...); // defaults to sRGB

// sRGB -> DisplayP3
tmp = CGColor.CreateByMatchingToColorSpace(
    CGColorSpace.CreateWithName("kCGColorSpaceDisplayP3"),
    CGColorRenderingIntent.Default, cg, null);

// DisplayP3 -> sRGB
cg2 = CGColor.CreateByMatchingToColorSpace(
    CGColorSpace.CreateWithName("kCGColorSpaceExtendedSRGB"),
    CGColorRenderingIntent.Default, tmp, null);
Then I found an excellent resource, http://endavid.com/index.php?entry=79, which includes a set of matrices that can perform the conversions. And that seems to work.
So now I have extended CGColor as follows:
// Uses MathNet.Numerics for the Matrix/Vector types
public static CGColor FromExtendedsRGBToDisplayP3(this CGColor c)
{
    if (c.ColorSpace.Name != "kCGColorSpaceExtendedSRGB")
        throw new Exception("Bad color space");

    // sRGB -> Display P3 conversion matrix (from the endavid.com post)
    var mat = LinearAlgebra.Matrix<float>.Build.Dense(3, 3, new float[] {
        0.8225f, 0.1774f, 0f,
        0.0332f, 0.9669f, 0f,
        0.0171f, 0.0724f, 0.9108f });
    var vect = LinearAlgebra.Vector<float>.Build.Dense(new float[] {
        (float)c.Components[0], (float)c.Components[1], (float)c.Components[2] });
    vect = vect * mat;

    return new CGColor(CGColorSpace.CreateWithName("kCGColorSpaceDisplayP3"),
        new nfloat[] { vect[0], vect[1], vect[2], c.Components[3] });
}

public static CGColor FromP3ToExtendedsRGB(this CGColor c)
{
    if (c.ColorSpace.Name != "kCGColorSpaceDisplayP3")
        throw new Exception("Bad color space");

    // Display P3 -> sRGB conversion matrix (inverse of the above)
    var mat = LinearAlgebra.Matrix<float>.Build.Dense(3, 3, new float[] {
        1.2249f, -0.2247f, 0f,
        -0.0420f, 1.0419f, 0f,
        -0.0197f, -0.0786f, 1.0979f });
    var vect = LinearAlgebra.Vector<float>.Build.Dense(new float[] {
        (float)c.Components[0], (float)c.Components[1], (float)c.Components[2] });
    vect = vect * mat;

    return new CGColor(CGColorSpace.CreateWithName("kCGColorSpaceExtendedSRGB"),
        new nfloat[] { vect[0], vect[1], vect[2], c.Components[3] });
}
Note: there are a lot of assumptions baked into the matrices w.r.t. white point and gamma. But it works for me. Let me know if there are better approaches out there, or if you can tell me why my use of CGColor.CreateByMatchingToColorSpace didn't quite work.
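For completeness, a minimal usage sketch of these extensions with the picker (the 0-1 normalization of my stored rgb values is an assumption, as above):

// Give the picker a Display P3 color converted from our stored sRGB values
var srgb = new CGColor(CGColorSpace.CreateWithName("kCGColorSpaceExtendedSRGB"),
    new nfloat[] { 220f / 255f, 235f / 255f, 92f / 255f, 1.0f });
picker.SelectedColor = UIColor.FromCGColor(srgb.FromExtendedsRGBToDisplayP3());

// ... and convert back to sRGB when reading the user's selection
CGColor picked = picker.SelectedColor.CGColor.FromP3ToExtendedsRGB();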
Reading Resources:
Reading this: https://stackoverflow.com/a/49040628/6257435
then this: https://bjango.com/articles/colourmanagementgamut/
are essential starting points.
(Image of the iOS color picker omitted; it shows the "Display P3 Hex Colour" value at the bottom.)
I have two images (CV_8UC3) and a mask (CV_8UC1), all of the same size, and I would like to apply the mask to one of the images and put it on top of the other:
const cv = require('opencv4nodejs');
//Loading the background image and adding an alpha channel
const bg = cv.imread('./bg.jpg').cvtColor(cv.COLOR_RGB2RGBA);
//Loading the foreground image in RGB
const fg = cv.imread('./fg.jpg');
//Generating the mask with only one channel
const mask = cv.imread('./mask.jpg').cvtColor(cv.COLOR_RGB2GRAY);
//Using the mask as the foreground's alpha channel
const fgChannels = fg.split();
fgChannels.push(mask);
const maskedFg = new cv.Mat(fgChannels);
//Blending the two images and dropping the alpha channel for JPEG output
const output = cv.addWeighted(bg, 1, maskedFg, 1, 0).cvtColor(cv.COLOR_RGBA2RGB);
cv.imwrite('./output.jpg', output);
And here is how it works, with bg.jpg, fg.jpg, and mask.jpg as inputs and output.jpg as the result (images omitted):
My problem with the output is that I was not expecting to see any part of the background image except underneath the tunnel's opening. Can someone please help me find the solution?
Apparently, OpenCV does not have a function to do this directly. Instead, you have to use a combination of functions:
const cv = require('opencv4nodejs');
//Working in floating point, normalized to the [0, 1] range
const bg = cv.imread('./bg.jpg').convertTo(cv.CV_32FC3, 1.0 / 255);
const fg = cv.imread('./fg.jpg').convertTo(cv.CV_32FC3, 1.0 / 255);
const mask = cv.imread('./mask.jpg').convertTo(cv.CV_32FC3, 1.0 / 255);
//Inverting the mask: 1 - mask
const allOnes = new cv.Mat(mask.rows, mask.cols, cv.CV_32FC3, [1.0, 1.0, 1.0]);
const invMask = allOnes.sub(mask);
//Per-element blend: mask * fg + (1 - mask) * bg
const output = mask.hMul(fg).add(invMask.hMul(bg));
//Converting back to 8-bit, since imwrite cannot save 32-bit float JPEGs
cv.imwrite('./output.jpg', output.convertTo(cv.CV_8UC3, 255));
What exactly are you trying to achieve? Only add the masked part from bg to fg, or remove the masked part of fg before that?
I don't think you are applying the mask correctly.
The following approach should work to apply the mask (see the sketch after this list):
If the mask is binary: use bitwise_and with bg and mask
If the mask is grayscale: use element-wise multiplication
The resulting masked_bg will only contain the masked part of the image.
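To illustrate, here is a minimal sketch of both options in C# with the OpenCvSharp binding (the file names are placeholders, and the equivalent calls exist in opencv4nodejs as well):

using OpenCvSharp;

// Option 1: binary mask - keep only the pixels where the mask is nonzero
Mat bg = Cv2.ImRead("bg.jpg");
Mat mask = Cv2.ImRead("mask.jpg", ImreadModes.Grayscale);
Mat maskedBg = new Mat();
Cv2.BitwiseAnd(bg, bg, maskedBg, mask);

// Option 2: grayscale mask - element-wise multiply with the normalized mask
Mat bgF = new Mat();
bg.ConvertTo(bgF, MatType.CV_32FC3, 1.0 / 255);
Mat maskBgr = new Mat();
Cv2.CvtColor(mask, maskBgr, ColorConversionCodes.GRAY2BGR);
Mat maskF = new Mat();
maskBgr.ConvertTo(maskF, MatType.CV_32FC3, 1.0 / 255);
Mat maskedBgF = bgF.Mul(maskF).ToMat();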
Also note that output.jpg is too bright because you are simply adding the two images on top of each other. You could change the weights to 0.5 each or make sure the colored parts of both images never overlap.
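In the same C# sketch terms as above, the halved-weight blend would look like:

// Blend with 0.5 weight each so the sum stays within the 8-bit range
Mat blended = new Mat();
Cv2.AddWeighted(bg, 0.5, maskedBg, 0.5, 0, blended);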
Is there an easy way to convert a grayscale Halcon/MVTec HImage object to a C# Bitmap? Sample code exists here (MVTec documentation) for a color image:
HTuple type, width, height;
HImage patras = new HImage("patras");
HImage interleaved = patras.InterleaveChannels("argb", "match", 255);
IntPtr ptr = interleaved.GetImagePointer1(out type, out width, out height);
Image img = new Bitmap(width/4, height, width,
PixelFormat.Format32bppPArgb, ptr);
pictureBox.Image = img;
But from this sample, it is not clear how I can work with grayscale images.
I have researched your problem, and this link, https://multipix.com/supportblog/halcon-bitmap-himage-conversion/, explains how to create a bitmap object for both RGB and single-channel images, which is what you are looking for.
It states:
The creation of a bitmap from a HALCON image can be done through the constructors of the bitmap class. With single channel images this is straight forward by using the pointer from the operator get_image_pointer1 and the dimensions of the image.
I believe this means that it is the exact same format as the sample code you have given, but you just remove the line HImage interleaved = patras.InterleaveChannels("argb", "match", 255);
Your code will probably look like this if patras is a gray scale image:
HTuple type, width, height;
HImage patras = new HImage("patras");
IntPtr ptr = patras.GetImagePointer1(out type, out width, out height);
// For 8-bit gray data the stride is the image width itself, not width/4
Image img = new Bitmap(width, height, width, PixelFormat.Format8bppIndexed, ptr);
pictureBox.Image = img;
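One caveat (my addition, not from the linked post): a Format8bppIndexed bitmap starts with a default palette, so for a proper grayscale display you have to fill it with gray entries yourself, along these lines:

// Replace the default 8bpp palette with a linear grayscale ramp
ColorPalette pal = img.Palette;
for (int i = 0; i < 256; i++)
    pal.Entries[i] = Color.FromArgb(i, i, i);
img.Palette = pal;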
Since you cannot directly create an 8-bit grayscale bitmap, the quickest way would be to convert the gray image into RGB:
HImage hiImageNew = hiImage.Compose3(hiImage, hiImage);
hiImageNew = hiImageNew.InterleaveChannels("argb", "match", 255);
HTuple htType, htWidth, htHeight;
IntPtr ptr = hiImageNew.GetImagePointer1(out htType, out htWidth, out htHeight);
System.Drawing.Image bImage = new Bitmap(htWidth / 4, htHeight, htWidth, System.Drawing.Imaging.PixelFormat.Format32bppPArgb, ptr);
Check this:
https://github.com/Joncash/HanboAOMClassLibrary/blob/master/Hanbo.Helper/ImageConventer.cs
In this class you can find a function that lets you choose whether you have a grayscale or RGB image:
public static Bitmap ConvertHalconImageToBitmap(HObject halconImage, bool isColor)
I have just started with processing.js.
The goal of my program is to apply an OpenCV image filter to video frames.
So my plan was (however, I found out it does not work this way :<):
Get the video stream from a Capture object (in the processing.video package).
Store the current frame (I hope it can be stored as a PImage object).
Apply the OpenCV image filter.
Call the image method with the filtered PImage object.
I found out how to get the video stream from the camera, but I do not know how to store a frame.
import processing.video.*;
import gab.opencv.*;

Capture cap;
OpenCV opencv;

public void setup() {
  //size(800, 600);
  size(640, 480);
  colorMode(RGB, 255, 255, 255, 100);

  cap = new Capture(this, width, height);
  opencv = new OpenCV(this, cap);
  cap.start();
  background(0);
}

public void draw() {
  if (cap.available()) {
    //return void
    cap.read();
  }
  image(cap, 0, 0);
}
This code gets the video stream and shows what it receives. However, I cannot store a single frame, since Capture.read() returns void.
After storing the current frame, I would like to transform the PImage with OpenCV, like this:
PImage gray = opencv.getSnapshot();

opencv.threshold(80);
PImage thresh = opencv.getSnapshot();

opencv.loadImage(gray);
opencv.blur(12);
PImage blur = opencv.getSnapshot();

opencv.loadImage(gray);
opencv.adaptiveThreshold(591, 1);
PImage adaptive = opencv.getSnapshot();
Is there a decent way to store and transform the current frame? (I suspect my approach of showing each frame only after saving and transforming it uses a lot of resources, depending on the frame rate.)
Thanks for any answer :D
Not sure what you want to do, and I'm sure you solved it already, but this could be useful for someone anyway...
It seems you can just use the Capture object directly where a PImage is expected, since Capture extends PImage:
cap = new Capture(this, width, height);
//Code starting and reading capture in here
PImage snapshot = cap;
//Then you can do whatever you wanted to do with the PImage
snapshot.save("snapshot.jpg");
//Actually this seems to work fine too
cap.save("snapshot.jpg");
Use opencv.loadImage(cap). For example:
if (cap.available() == true) {
  cap.read();
}
opencv.loadImage(cap);
opencv.blur(15);
image(opencv.getSnapshot(), 0, 0);
Hope this helps!
I'm creating a watermarking application using OpenCV, and I'm not able to set the background of the image as transparent.
I'm using this code: Scalar colorScalar = new Scalar(255, 255, 255, 0);
Can anybody help me make the background transparent? I'm using a PNG image.
targetMat = new Mat(targetSize, scaledImage.type(), colorScalar);
Mat waterSubmat = targetMat.submat((int)offsetY, scaledImage.height(), (int)offsetX, scaledImage.width());
scaledImage.copyTo(waterSubmat);

// Rotate around the pivot, filling the uncovered border with the transparent color
center = new org.opencv.core.Point(pivotX, pivotY);
Mat rotImage = Imgproc.getRotationMatrix2D(center, degreevaluechange, 1);
Mat resultMat = new Mat(2, 3, CvType.CV_32FC1);
colorScalar = new Scalar(255, 255, 255, 0);
Imgproc.warpAffine(targetMat, resultMat, rotImage, targetSize, Imgproc.INTER_AREA, Imgproc.BORDER_CONSTANT, colorScalar);
scaledImage = resultMat.clone();
If you want to load your PNG image with the alpha channel, and therefore load your image with transparency, you have to use this code:
imread("image.png", -1) // -1 is IMREAD_UNCHANGED: keep the alpha channel
You can find more informations in the opencv documentation here:
http://docs.opencv.org/modules/highgui/doc/reading_and_writing_images_and_video.html?highlight=imread#imread
As you can see in the documentation provided by Maximus, you need to create a 4-channel Mat:
// Create the Mat by value (imwrite takes a Mat, not a Mat*)
Mat targetMat(targetSize, CV_8UC4, colorScalar);

vector<int> compression_params;
compression_params.push_back(CV_IMWRITE_PNG_COMPRESSION);
compression_params.push_back(9);

try {
    imwrite("alpha.png", targetMat, compression_params);
}
catch (runtime_error& ex) {
    fprintf(stderr, "Exception converting image to PNG format: %s\n", ex.what());
    return 1;
}
Then add the parameters and write. (This code is from the documentation)