Gaussian Noise in emgucv

How can I add Gaussian noise (with a particular mean and variance) to an image using EmguCV?

I'm not quite sure what you're asking, as a Gaussian filter is usually used to remove noise. To use a custom kernel you can use the following code. If you wish to add noise with a set mean and variance, you may have to resort to looping through the my_image.Data property and adding it that way (see the sketch after the code below). Here is the code for using a custom kernel; if it's not quite what you're after, let me know and I'll try to find something more appropriate. A link to example images may be useful in this case.
// Convolve the image with a custom 3x3 kernel (the * operator runs the convolution).
Image<Bgr, Byte> my_image = new Image<Bgr, byte>(open.FileName);
float[,] k = { {     0, 0,  0 },
               {     0, 0, -0 },
               { 0.33F, 0, -0 } };
ConvolutionKernelF kernel = new ConvolutionKernelF(k);
Image<Bgr, float> convolutedImage = my_image * kernel;
pictureBox1.Image = convolutedImage.ToBitmap();
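For the looping approach mentioned above, a rough, untested sketch would be to draw one normally distributed sample per pixel and channel (via a Box-Muller transform) straight into the Data array and clamp back to the byte range; mean and stdDev are just example values here (variance = stdDev * stdDev):
Random rng = new Random();
double mean = 0, stdDev = 10;
for (int y = 0; y < my_image.Height; y++)
{
    for (int x = 0; x < my_image.Width; x++)
    {
        for (int c = 0; c < 3; c++)
        {
            // Box-Muller transform: two uniform samples -> one standard normal sample
            double u1 = 1.0 - rng.NextDouble();
            double u2 = rng.NextDouble();
            double gauss = Math.Sqrt(-2.0 * Math.Log(u1)) * Math.Sin(2.0 * Math.PI * u2);
            // Data is indexed [row, column, channel] for an Image<Bgr, Byte>
            double value = my_image.Data[y, x, c] + mean + stdDev * gauss;
            my_image.Data[y, x, c] = (byte)Math.Min(255, Math.Max(0, value));
        }
    }
}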
I hope this helps,
Chris

Bitmap image = //your bitmap image
Image<Hsv, Byte> templateImageInHsv = new Image<Hsv, Byte>(image);
templateImageInHsv = templateImageInHsv.SmoothGaussian(3);
// Summary:
// Perform Gaussian Smoothing in the current image and return the result
//
// Parameters:
// kernelSize:
// The size of the Gaussian kernel (kernelSize x kernelSize)
//
// Returns:
// The smoothed image
public Image<TColor, TDepth> SmoothGaussian(int kernelSize);

public static Image<Gray, byte> AddGaussianNoise(Image<Gray, byte> originalImage, int mean = 0, int sigma = 1)
{
    var rnd = new Random();
    var noiseGenerator = new GaussianRandom(rnd);
    var noiseImage = new Image<Gray, float>(originalImage.Width, originalImage.Height);
    var tempImage = new Image<Gray, float>(originalImage.Bitmap);
    // One N(mean, sigma^2) sample per pixel (the mean is added here, so it must not be added again).
    var noiseArray = Enumerable.Range(0, tempImage.Width * tempImage.Height)
                               .Select(o => (float)noiseGenerator.NextGaussian(0, sigma) + mean)
                               .ToArray();
    var arrayAsBytes = GetByteArray(noiseArray);
    noiseImage.Bytes = arrayAsBytes;
    tempImage += noiseImage;
    return tempImage.Convert<Gray, byte>();
}
For the GaussianRandom I am using https://stackoverflow.com/a/4594881
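In case that link goes away, a minimal stand-in with the same shape (a plain Box-Muller sampler; the constructor and the NextGaussian(mu, sigma) signature match the usage above) could look roughly like this:
public class GaussianRandom
{
    private readonly Random _random;

    public GaussianRandom(Random random)
    {
        _random = random;
    }

    // Returns one sample from N(mu, sigma^2) via the Box-Muller transform.
    public double NextGaussian(double mu = 0, double sigma = 1)
    {
        double u1 = 1.0 - _random.NextDouble();
        double u2 = _random.NextDouble();
        double standardNormal = Math.Sqrt(-2.0 * Math.Log(u1)) * Math.Sin(2.0 * Math.PI * u2);
        return mu + sigma * standardNormal;
    }
}
A call then looks like: var noisy = AddGaussianNoise(originalImage, mean: 0, sigma: 10);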

Related

How to find corner vectors with opencv.js

This is part of my code:
export default class Comp extends Component {
  onChange = async evt => {
    const { files } = evt.target
    const reader = new FileReader()
    reader.onload = (e) => this.refs.preview.src = e.target.result
    reader.readAsDataURL(files[0])
    this.refs.preview.onload = () => {
      let src = cv.imread(this.refs.preview)
      let dst = new cv.Mat()
      cv.cvtColor(src, src, cv.COLOR_RGB2GRAY, 0)
      cv.threshold(src, src, 180, 300, cv.THRESH_BINARY)
      let largest_area = 0
      let largest_contour_index = 0
      let dst1 = cv.Mat.zeros(src.rows, src.cols, cv.CV_8UC3)
      let contours = new cv.MatVector()
      let hierarchy = new cv.Mat()
      cv.findContours(src, contours, hierarchy, cv.RETR_CCOMP, cv.CHAIN_APPROX_SIMPLE)
      for (let i = 0; i < contours.size(); ++i) {
        let color = new cv.Scalar(255, 0, 0)
        let color2 = new cv.Scalar(0, 255, 0)
        let area = cv.contourArea(contours.get(i))
        if (area > largest_area) {
          largest_area = area
          largest_contour_index = i
          cv.drawContours(dst1, contours, i, color, 5, cv.LINE_8)
        }
      }
      cv.imshow('processed', dst1)
      src.delete()
      dst.delete()
      dst1.delete()
    }
  }
  render() {
    return (
      <div>
        <form action="">
          <input type="file" onChange={this.onChange} />
        </form>
        <div className='flex'>
          <div className='flex-1'>
            <img ref='preview' id='preview' style={{ width: '100%' }} />
          </div>
          <div className='flex-1'>
            <canvas id='processed' />
            <img ref='processed' style={{ width: '100%' }} />
          </div>
        </div>
      </div>
    )
  }
}
The result looks like this (screenshot).
I want to get that contour data, do a perspective transform, and save the result to a DB.
I'm assuming you are using Java. I'm not very good at it; my core language is C++, and Python is also OK. I'll try to help anyway.
vector<vector<Point> > contours;
This is the type of contours in C++: the outer index is the id of the contour, and the inner vector is that contour's array of point coordinates.
For example, if you want the first contour, you can get it with
vector<Point> firstcontourcoord = contours.at(0);
The points inside it are the points you want, so go ahead and find the id of the largest contour.
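As a rough, untested sketch of that largest-contour and corner search in EmguCV/C# (the library used elsewhere on this page; opencv.js exposes the same calls as cv.findContours, cv.contourArea, cv.arcLength and cv.approxPolyDP, so it translates almost line for line), where binary stands for your already thresholded single-channel image:
using (VectorOfVectorOfPoint contours = new VectorOfVectorOfPoint())
using (Mat hierarchy = new Mat())
{
    CvInvoke.FindContours(binary, contours, hierarchy, RetrType.Ccomp, ChainApproxMethod.ChainApproxSimple);

    // pick the contour with the largest area
    int largestIndex = -1;
    double largestArea = 0;
    for (int i = 0; i < contours.Size; i++)
    {
        double area = CvInvoke.ContourArea(contours[i]);
        if (area > largestArea) { largestArea = area; largestIndex = i; }
    }

    if (largestIndex >= 0)
    {
        using (VectorOfPoint corners = new VectorOfPoint())
        {
            // approximate the outline; for a sheet of paper you expect 4 corner points
            double epsilon = 0.02 * CvInvoke.ArcLength(contours[largestIndex], true);
            CvInvoke.ApproxPolyDP(contours[largestIndex], corners, epsilon, true);
            // corners now holds the points to feed into the transform below
        }
    }
}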
After that you want to do the transform, which needs a few more things.
First, declare the target corner point positions you want to map to, e.g.
var vv = cv.matFromArray( 4, 3, cv.CV_32FC1, [   // solvePnP wants floating-point points
  0,   0,   0,
  0,   210, 0,
  290, 210, 0,
  290, 0,   0,
])
// sample A4 size (in mm), Z = 0
// 2D image coords of the 4 corners taken from the largest contour. Example:
var imageP = cv.matFromArray( 4, 2, cv.CV_32FC1, [
  292, 272,
  72,  379,
  487, 530,
  701, 470,
])
Then you can choose to use a 3D or a 2D transform. For 3D visualization/AR it is:
var cm = new cv.Mat(3, 3, cv.CV_32FC1, new cv.Scalar())
// placeholder only - find your real camera matrix and undistort the image first
var rvec = new cv.Mat()
var tvec = new cv.Mat()
cv.solvePnP( vv, imageP, cm, new cv.Mat(), rvec, tvec, false, cv.SOLVEPNP_P3P )
Then you have the matrix and can apply the transform.
But I think you are probably just doing paperwork/document processing, in which case you only need a 2D transformation, and then it is:
Mat H = Calib3d.findHomography( objMat, sceneMat, Calib3d.RANSAC, ransacReprojThreshold );
//-- Get the corners from the image_1 ( the object to be "detected" )
Mat objCorners = new Mat(4, 1, CvType.CV_32FC2), sceneCorners = new Mat();
float[] objCornersData = new float[(int) (objCorners.total() * objCorners.channels())];
objCorners.get(0, 0, objCornersData);
objCornersData[0] = 0;
objCornersData[1] = 0;
objCornersData[2] = imgObject.cols();
objCornersData[3] = 0;
objCornersData[4] = imgObject.cols();
objCornersData[5] = imgObject.rows();
objCornersData[6] = 0;
objCornersData[7] = imgObject.rows();
objCorners.put(0, 0, objCornersData);
Core.perspectiveTransform(objCorners, sceneCorners, H);
float[] sceneCornersData = new float[(int) (sceneCorners.total() * sceneCorners.channels())];
sceneCorners.get(0, 0, sceneCornersData);
I'm not very familiar with the DB side; I'm only comfortable with the older C#/.NET-based approach and have never touched the others. But once you have the image, just push it into the DB; that shouldn't be much of an issue.
Edit:
You can follow the links below for your implementation (select the Java version):
https://docs.opencv.org/3.4/df/d0d/tutorial_find_contours.html
https://docs.opencv.org/3.4/d7/dff/tutorial_feature_homography.html
You just need to copy the code from these two links and you are good to go. Java is the most tedious way to go; personally, C++ or Python is a better choice.

Transform a point from map A to map B

I'm trying to transform a point from one map to another. I've tried some OpenCV sample code for getAffineTransform(), getPerspectiveTransform(), warpAffine() and findHomography(), but there are always some kind of gaps in my transformation mesh. The feature points are usually detected at very different positions, so I think I need a good interpolation method.
About the maps:
Both maps are images containing human body parts and human skin. I'm using the OpenCV feature detection/matching algorithms to get a set of matching points in both maps. The tricky thing is that they contain arms and feet, too; feature points on arms/feet can have much bigger offsets than the points on the torso.
The goal:
I want to transform any point on map A as accurately as possible to the equivalent position on map B.
My current approach is to find the three closest feature points to my original point on map A and construct a triangle. Afterwards I transform this triangle to the same three feature points on map B. That works nicely if I have a lot of feature points surrounding my original point, but in larger areas without feature points I get problems with the interpolation.
Is this a good way to do so? Or is there a much better solution?
My favorite would be the construction of a complete transformation map for both images, but I'm not sure how to do this. Is it possible at all?
Thanks a lot for any advice!
Simple sketch of the transformation (I'm trying to find the points X1 to X3 from the left image in the right image):
Sketch of a sample transformation
Sample for homography (OpenCVSharp):
Mat imgA = new Mat(@"d:\Mesh\Left2.jpg", ImreadModes.Color);
Mat imgB = new Mat(@"d:\Mesh\Right2.jpg", ImreadModes.Color);
Cv2.Resize(imgA, imgA, new Size(512, 341));
Cv2.Resize(imgB, imgB, new Size(512, 341));
SURF detector = SURF.Create(500.0);
KeyPoint[] keypointsA = detector.Detect(imgA);
KeyPoint[] keypointsB = detector.Detect(imgB);
SIFT extractor = SIFT.Create();
Mat descriptorsA = new Mat();
Mat descriptorsB = new Mat();
extractor.Compute(imgA, ref keypointsA, descriptorsA);
extractor.Compute(imgB, ref keypointsB, descriptorsB);
BFMatcher matcher = new BFMatcher(NormTypes.L2, true);
DMatch[] matches = matcher.Match(descriptorsA, descriptorsB);
double minDistance = 10000.0;
double maxDistance = 0.0;
for (int i = 0; i < matches.Length; ++i)
{
    double distance = matches[i].Distance;
    if (distance < minDistance)
    {
        minDistance = distance;
    }
    if (distance > maxDistance)
    {
        maxDistance = distance;
    }
}
List<DMatch> goodMatches = new List<DMatch>();
for (int i = 0; i < matches.Length; ++i)
{
    if (matches[i].Distance <= 3.0 * minDistance &&
        Math.Abs(keypointsA[matches[i].QueryIdx].Pt.Y - keypointsB[matches[i].TrainIdx].Pt.Y) < 30)
    {
        goodMatches.Add(matches[i]);
    }
}
Mat output = new Mat();
Cv2.DrawMatches(imgA, keypointsA, imgB, keypointsB, goodMatches.ToArray(), output);
List<Point2f> goodA = new List<Point2f>();
List<Point2f> goodB = new List<Point2f>();
for (int i = 0; i < goodMatches.Count; i++)
{
    goodA.Add(keypointsA[goodMatches[i].QueryIdx].Pt);
    goodB.Add(keypointsB[goodMatches[i].TrainIdx].Pt);
}
InputArray goodInputA = InputArray.Create<Point2f>(goodA);
InputArray goodInputB = InputArray.Create<Point2f>(goodB);
Mat h = Cv2.FindHomography(goodInputA, goodInputB);
Point2f centerA = new Point2f(imgA.Cols / 2.0f, imgA.Rows / 2.0f);
output.DrawMarker((int)centerA.X, (int)centerA.Y, Scalar.Red, MarkerStyle.Cross, 50, LineTypes.Link8, 5);
Point2f[] transformedPoints = Cv2.PerspectiveTransform(new Point2f[] { centerA }, h);
output.DrawMarker((int)transformedPoints[0].X + imgA.Cols, (int)transformedPoints[0].Y, Scalar.Red, MarkerStyle.Cross, 50, LineTypes.Link8, 5);
Code snippet for perspective transform (different approach, OpenCVSharp):
pointsA[0] = new Point(trisA[i].Item0, trisA[i].Item1);
pointsA[1] = new Point(trisA[i].Item2, trisA[i].Item3);
pointsA[2] = new Point(trisA[i].Item4, trisA[i].Item5);
pointsB[0] = new Point(trisB[i].Item0, trisB[i].Item1);
pointsB[1] = new Point(trisB[i].Item2, trisB[i].Item3);
pointsB[2] = new Point(trisB[i].Item4, trisB[i].Item5);
Mat transformation = Cv2.GetAffineTransform(pointsA, pointsB);
InputArray inputSource = InputArray.Create<Point2f>(new Point2f[] { new Point2f(10f, 50f) });
Mat outputMat = new Mat();
Cv2.PerspectiveTransform(inputSource, outputMat, transformation);
Mat.Indexer<Point2f> indexer = outputMat.GetGenericIndexer<Point2f>();
var target = indexer[0, 0];
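Side note on the snippet above: GetAffineTransform returns a 2x3 matrix, while Cv2.PerspectiveTransform expects a full 3x3 homography, so feeding it the affine result will not do what you want. For an affine matrix the matching call is Cv2.Transform; a sketch reusing the names from the snippet:
// Apply the 2x3 affine matrix from GetAffineTransform to a single point
InputArray affineSource = InputArray.Create<Point2f>(new Point2f[] { new Point2f(10f, 50f) });
Mat affineResult = new Mat();
Cv2.Transform(affineSource, affineResult, transformation);
Point2f mapped = affineResult.GetGenericIndexer<Point2f>()[0, 0];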

AForge Imaging finding Rectangles in Image

I'm able to find the rectangles if the image was created in Paint, but not in a snapshot or screenshot that was taken; it doesn't recognize rectangles in snapshot images. Please help.
string path = "D:\\testc.png";
Bitmap image = (Bitmap)Bitmap.FromFile(path);
BlobCounter blobCounter = new BlobCounter();
blobCounter.FilterBlobs = true;
blobCounter.MinHeight = 1;
blobCounter.MinWidth = 1;
blobCounter.ProcessImage(image);
Blob[] blobs = blobCounter.GetObjectsInformation();
var retcs = blobCounter.GetObjects(image,true);
SimpleShapeChecker shapeChecker = new SimpleShapeChecker();
foreach (var blob in blobs)
{
    List<IntPoint> edgePoints = blobCounter.GetBlobsEdgePoints(blob);
    List<IntPoint> cornerPoints;
    // use the shape checker to extract the corner points
    if (shapeChecker.IsQuadrilateral(edgePoints, out cornerPoints))
    {
        // only do things if the corners form a rectangle
        if (shapeChecker.CheckPolygonSubType(cornerPoints) == PolygonSubType.Rectangle)
        {
            // here i use the graphics class to draw an overlay, but you
            // could also just use the cornerPoints list to calculate your
            // x, y, width, height values.
            List<Point> Points = new List<Point>();
            foreach (var point in cornerPoints)
            {
                Points.Add(new Point(point.X, point.Y));
            }
            Graphics g = Graphics.FromImage(image);
            g.DrawPolygon(new Pen(Color.Red, 5.0f), Points.ToArray());
            image.Save("D:\\result.png");
        }
    }
}

Standard Hough Lines in EMGU CV

I need to use the standard Hough Transform (instead of using the HoughLinesBinary method, which implements the Probabilistic Hough Transform) and have attempted to do so by creating a custom version of the HoughLinesBinary method:
using (MemStorage stor = new MemStorage())
{
    IntPtr lines = CvInvoke.cvHoughLines2(canny.Ptr, stor.Ptr, Emgu.CV.CvEnum.HOUGH_TYPE.CV_HOUGH_STANDARD, rhoResolution, (thetaResolution * Math.PI) / 180, threshold, 0, 0);
    Seq<MCvMat> segments = new Seq<MCvMat>(lines, stor);
    List<MCvMat> lineslist = segments.ToList();
    foreach (MCvMat line in lineslist)
    {
        // Process lines: (rho, theta)
    }
}
My problem is that I am unsure what type the returned sequence is. I believe it should be MCvMat, since the OpenCV documentation says CvMat* is used, and that for STANDARD "the matrix must be (the created sequence will be) of CV_32FC2 type".
I am unclear as to what I would need to do to retrieve and process the correct output data from the STANDARD Hough lines (i.e. the 2x1 vector for each line giving the rho and theta information).
Any help would be greatly appreciated. Thank you
-Sal
I had the same problem myself a couple of days ago. This is how I solved it using marshalling. Please let me know if you find a simpler solution.
using (MemStorage stor = new MemStorage())
{
    IntPtr lines = CvInvoke.cvHoughLines2(canny.Ptr, stor.Ptr, Emgu.CV.CvEnum.HOUGH_TYPE.CV_HOUGH_STANDARD, rhoResolution, (thetaResolution * Math.PI) / 180, threshold, 0, 0);
    int maxLines = 100;
    for (int i = 0; i < maxLines; i++)
    {
        IntPtr line = CvInvoke.cvGetSeqElem(lines, i);
        if (line == IntPtr.Zero)
        {
            // No more lines
            break;
        }
        PolarCoordinates coords = (PolarCoordinates)System.Runtime.InteropServices.Marshal.PtrToStructure(line, typeof(PolarCoordinates));
        // Do something with your Hough lines
    }
}
with a struct defined as follows:
public struct PolarCoordinates
{
    public float Rho;
    public float Theta;
}
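For the "Do something with your Hough lines" step in the loop above, each (rho, theta) pair can be turned into two drawable endpoints in the usual way; a sketch, assuming an Image<Bgr, byte> called img to draw on (img is not part of the code above):
// (Rho, Theta) describes the line x*cos(Theta) + y*sin(Theta) = Rho.
// Take the point on the line closest to the origin and walk 1000 px both ways along it.
double a = Math.Cos(coords.Theta);
double b = Math.Sin(coords.Theta);
double x0 = a * coords.Rho;
double y0 = b * coords.Rho;
Point pt1 = new Point((int)(x0 + 1000 * (-b)), (int)(y0 + 1000 * a));
Point pt2 = new Point((int)(x0 - 1000 * (-b)), (int)(y0 - 1000 * a));
img.Draw(new LineSegment2D(pt1, pt2), new Bgr(0, 0, 255), 2);  // red, 2 px thick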

How to use OpenCV SimpleBlobDetector

Instead of any additional blob detection library, how do I use the cv::SimpleBlobDetector class and its function detectblobs()?
Python: Reads image blob.jpg and performs blob detection with different parameters.
#!/usr/bin/python
# Standard imports
import cv2
import numpy as np;
# Read image
im = cv2.imread("blob.jpg")
# Setup SimpleBlobDetector parameters.
params = cv2.SimpleBlobDetector_Params()
# Change thresholds
params.minThreshold = 10
params.maxThreshold = 200
# Filter by Area.
params.filterByArea = True
params.minArea = 1500
# Filter by Circularity
params.filterByCircularity = True
params.minCircularity = 0.1
# Filter by Convexity
params.filterByConvexity = True
params.minConvexity = 0.87
# Filter by Inertia
params.filterByInertia = True
params.minInertiaRatio = 0.01
# Create a detector with the parameters
# OLD: detector = cv2.SimpleBlobDetector(params)
detector = cv2.SimpleBlobDetector_create(params)
# Detect blobs.
keypoints = detector.detect(im)
# Draw detected blobs as red circles.
# cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS ensures
# the size of the circle corresponds to the size of blob
im_with_keypoints = cv2.drawKeypoints(im, keypoints, np.array([]), (0,0,255), cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
# Show blobs
cv2.imshow("Keypoints", im_with_keypoints)
cv2.waitKey(0)
C++: Reads image blob.jpg and performs blob detection with different parameters.
#include "opencv2/opencv.hpp"
using namespace cv;
using namespace std;
int main(int argc, char** argv)
{
    // Read image
#if CV_MAJOR_VERSION < 3 // If you are using OpenCV 2
    Mat im = imread("blob.jpg", CV_LOAD_IMAGE_GRAYSCALE);
#else
    Mat im = imread("blob.jpg", IMREAD_GRAYSCALE);
#endif
    // Setup SimpleBlobDetector parameters.
    SimpleBlobDetector::Params params;
    // Change thresholds
    params.minThreshold = 10;
    params.maxThreshold = 200;
    // Filter by Area.
    params.filterByArea = true;
    params.minArea = 1500;
    // Filter by Circularity
    params.filterByCircularity = true;
    params.minCircularity = 0.1;
    // Filter by Convexity
    params.filterByConvexity = true;
    params.minConvexity = 0.87;
    // Filter by Inertia
    params.filterByInertia = true;
    params.minInertiaRatio = 0.01;
    // Storage for blobs
    std::vector<KeyPoint> keypoints;
#if CV_MAJOR_VERSION < 3 // If you are using OpenCV 2
    // Set up detector with params
    SimpleBlobDetector detector(params);
    // Detect blobs
    detector.detect(im, keypoints);
#else
    // Set up detector with params
    Ptr<SimpleBlobDetector> detector = SimpleBlobDetector::create(params);
    // Detect blobs
    detector->detect(im, keypoints);
#endif
    // Draw detected blobs as red circles.
    // DrawMatchesFlags::DRAW_RICH_KEYPOINTS flag ensures
    // the size of the circle corresponds to the size of blob
    Mat im_with_keypoints;
    drawKeypoints(im, keypoints, im_with_keypoints, Scalar(0, 0, 255), DrawMatchesFlags::DRAW_RICH_KEYPOINTS);
    // Show blobs
    imshow("keypoints", im_with_keypoints);
    waitKey(0);
}
The answer has been copied from this tutorial I wrote at LearnOpenCV.com explaining various parameters of SimpleBlobDetector. You can find additional details about the parameters in the tutorial.
You may store the parameters for the blob detector in a file, but this is not necessary. Example:
// set up the parameters (check the defaults in opencv's code in blobdetector.cpp)
cv::SimpleBlobDetector::Params params;
params.minDistBetweenBlobs = 50.0f;
params.filterByInertia = false;
params.filterByConvexity = false;
params.filterByColor = false;
params.filterByCircularity = false;
params.filterByArea = true;
params.minArea = 20.0f;
params.maxArea = 500.0f;
// ... any other params you don't want default value
// set up and create the detector using the parameters
cv::SimpleBlobDetector blob_detector(params);
// or cv::Ptr<cv::SimpleBlobDetector> detector = cv::SimpleBlobDetector::create(params)
// detect!
vector<cv::KeyPoint> keypoints;
blob_detector.detect(image, keypoints);
// extract the x y coordinates of the keypoints:
for (int i = 0; i < keypoints.size(); i++) {
    float X = keypoints[i].pt.x;
    float Y = keypoints[i].pt.y;
}
Note: all the examples here are using the OpenCV 2.X API.
In OpenCV 3.X, you need to use:
Ptr<SimpleBlobDetector> d = SimpleBlobDetector::create(params);
See also: the transition guide: http://docs.opencv.org/master/db/dfa/tutorial_transition_guide.html#tutorial_transition_hints_headers
// creation
cv::SimpleBlobDetector * blob_detector;
blob_detector = new SimpleBlobDetector();
blob_detector->create("SimpleBlobDetector");
// change params - first move it to public!!
blob_detector->params.filterByArea = true;
blob_detector->params.minArea = 1;
blob_detector->params.maxArea = 32000;
// or read / write them with file
FileStorage fs("test_fs.yml", FileStorage::WRITE);
FileNode fn = fs["features"];
//blob_detector->read(fn);
// detect
vector<KeyPoint> keypoints;
blob_detector->detect(img_text, keypoints);
fs.release();
I do not know why, but params are protected. So I moved them in features2d.hpp to be public:
virtual void read( const FileNode& fn );
virtual void write( FileStorage& fs ) const;
public:
Params params;
protected:
struct CV_EXPORTS Center
{
Point2d loc
If you do not do this, the only way to change the params is to create a file (FileStorage fs("test_fs.yml", FileStorage::WRITE);), then open it in Notepad and edit it. Or maybe there is another way, but I'm not aware of it.
