How to find corner vectors with OpenCV.js

This is part of my code:
export default class Comp extends Component {
  onChange = async evt => {
    const { files } = evt.target
    const reader = new FileReader()
    reader.onload = (e) => this.refs.preview.src = e.target.result
    reader.readAsDataURL(files[0])
    this.refs.preview.onload = () => {
      let src = cv.imread(this.refs.preview)
      // cv.imread returns an RGBA image, so convert from RGBA here
      cv.cvtColor(src, src, cv.COLOR_RGBA2GRAY, 0)
      // maxval is 255 for an 8-bit image
      cv.threshold(src, src, 180, 255, cv.THRESH_BINARY)
      let largest_area = 0
      let largest_contour_index = 0
      let dst1 = cv.Mat.zeros(src.rows, src.cols, cv.CV_8UC3)
      let contours = new cv.MatVector()
      let hierarchy = new cv.Mat()
      cv.findContours(src, contours, hierarchy, cv.RETR_CCOMP, cv.CHAIN_APPROX_SIMPLE)
      for (let i = 0; i < contours.size(); ++i) {
        let color = new cv.Scalar(255, 0, 0)
        let area = cv.contourArea(contours.get(i))
        if (area > largest_area) {
          largest_area = area
          largest_contour_index = i
          cv.drawContours(dst1, contours, i, color, 5, cv.LINE_8)
        }
      }
      cv.imshow('processed', dst1)
      src.delete()
      dst1.delete()
      contours.delete()
      hierarchy.delete()
    }
  }
  render() {
    return (
      <div>
        <form action="">
          <input type="file" onChange={this.onChange} />
        </form>
        <div className='flex'>
          <div className='flex-1'>
            <img ref='preview' id='preview' style={{ width: '100%' }} />
          </div>
          <div className='flex-1'>
            <canvas id='processed' />
            <img ref='processed' style={{ width: '100%' }} />
          </div>
        </div>
      </div>
    )
  }
}
The result looks like this: [screenshot of the detected contours]
I want to get the contour data, apply a perspective transform, and save the result to a database.

I'm assuming you are using Java. I'm not very good at it; my core language is C++, and Python is also fine.
I'll try to help anyway.
vector<vector<Point> > contours;
This is the prototype for contours in C++: the outer vector is indexed by contour id, and each inner vector is that contour's array of point coordinates.
For example, to get the first contour you can do:
vector<Point> firstcontourcoord = contours.at(0);
The points inside are the ones you want, so go ahead and find the id of the largest contour.
After that you want to transform, which needs a few more things.
First you should declare the target corner positions you want the contour mapped to, e.g.
var vv = cv.matFromArray(4, 3, cv.CV_32FC1, [
  0, 0, 0,
  0, 210, 0,
  297, 210, 0,
  297, 0, 0,
])
// sample A4 size (210 x 297 mm); solvePnP needs floating-point mats,
// so use CV_32FC1 (CV_8S would also saturate at 127)
// 2D image coords from the largest bounding contour, for example:
var imageP = cv.matFromArray(4, 2, cv.CV_32FC1, [
  292, 272,
  72, 379,
  487, 530,
  701, 470,
])
Then you can choose to use a 3D or a 2D transform.
For 3D visualization/AR it's:
var cm = new cv.Mat(3, 3, cv.CV_32FC1, new cv.Scalar())
// just a sample -- find your real camera matrix and undistort first
var rvec = new cv.Mat()
var tvec = new cv.Mat()
cv.solvePnP(vv, imageP, cm, new cv.Mat(), rvec, tvec, false, cv.SOLVEPNP_P3P)
Then you have the pose, and you can apply the transform.
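Applying it usually starts by converting rvec into a rotation matrix. A minimal sketch, in OpenCVSharp for brevity since this thread already mixes languages (opencv.js exposes the same call as cv.Rodrigues):
using OpenCvSharp;

static Mat ToRotationMatrix(Mat rvec)
{
    // rvec is the 3x1 rotation vector produced by solvePnP above
    Mat r = new Mat();
    Cv2.Rodrigues(rvec, r); // rotation vector -> 3x3 rotation matrix
    return r;               // [r | tvec] is the model-to-camera pose
}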
But I think you might just be doing paperwork processing, in which case you only need a 2D transformation; then it is:
Mat H = Calib3d.findHomography( objMat, sceneMat, Calib3d.RANSAC, ransacReprojThreshold );
//-- Get the corners from the image_1 ( the object to be "detected" )
Mat objCorners = new Mat(4, 1, CvType.CV_32FC2), sceneCorners = new Mat();
float[] objCornersData = new float[(int) (objCorners.total() * objCorners.channels())];
objCorners.get(0, 0, objCornersData);
objCornersData[0] = 0;
objCornersData[1] = 0;
objCornersData[2] = imgObject.cols();
objCornersData[3] = 0;
objCornersData[4] = imgObject.cols();
objCornersData[5] = imgObject.rows();
objCornersData[6] = 0;
objCornersData[7] = imgObject.rows();
objCorners.put(0, 0, objCornersData);
Core.perspectiveTransform(objCorners, sceneCorners, H);
float[] sceneCornersData = new float[(int) (sceneCorners.total() * sceneCorners.channels())];
sceneCorners.get(0, 0, sceneCornersData);
As for the DB, I'm not very familiar with that side; I've only used older C# .NET-based methods and haven't touched the others. But once you have the image, just push it into your DB; that shouldn't be much of an issue.
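As a rough illustration of that older .NET route (the table and column names here are hypothetical):
using System.Data.SqlClient;

static void SaveImage(byte[] imageBytes, string connectionString)
{
    // hypothetical table ScannedDocs with a VARBINARY(MAX) column named Data
    using (var conn = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand("INSERT INTO ScannedDocs (Data) VALUES (@data)", conn))
    {
        cmd.Parameters.AddWithValue("@data", imageBytes); // the image as a blob
        conn.Open();
        cmd.ExecuteNonQuery();
    }
}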
Edit:
You can follow the links below for your implementation (select the Java version):
https://docs.opencv.org/3.4/df/d0d/tutorial_find_contours.html
https://docs.opencv.org/3.4/d7/dff/tutorial_feature_homography.html
You just need to copy the code from these two links and you are good to go. Java is the most tedious route here; personally I think C++ or Python is a better choice.
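For reference, here is a minimal sketch of the whole 2D flow in one place, shown in OpenCVSharp for brevity (the opencv.js equivalents are cv.approxPolyDP, cv.getPerspectiveTransform and cv.warpPerspective). The corner ordering and the output size are assumptions you will need to adapt:
using System;
using OpenCvSharp;

static Mat WarpLargestQuad(Mat binary, Mat source)
{
    // find the largest contour that approximates to 4 corners
    Cv2.FindContours(binary, out Point[][] contours, out _,
        RetrievalModes.External, ContourApproximationModes.ApproxSimple);
    Point[] quad = null;
    double largestArea = 0;
    foreach (Point[] c in contours)
    {
        Point[] approx = Cv2.ApproxPolyDP(c, 0.02 * Cv2.ArcLength(c, true), true);
        double area = Cv2.ContourArea(approx);
        if (approx.Length == 4 && area > largestArea)
        {
            largestArea = area;
            quad = approx;
        }
    }
    if (quad == null) return null;
    // NOTE: assumes the corners come out in the same order as the
    // destination points below; a robust version sorts them first
    Point2f[] src = Array.ConvertAll(quad, p => new Point2f(p.X, p.Y));
    Point2f[] dst = {
        new Point2f(0, 0), new Point2f(210, 0),
        new Point2f(210, 297), new Point2f(0, 297),
    };
    Mat h = Cv2.GetPerspectiveTransform(src, dst);
    Mat warped = new Mat();
    Cv2.WarpPerspective(source, warped, h, new Size(210, 297));
    return warped;
}
The warped Mat can then be encoded with Cv2.ImEncode and pushed to the DB as in the snippet above.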

Related

Transform a point from map A to map B

I'm trying to transform a point from one map to another. I've tried some OpenCV sample code for getAffineTransform(), getPerspectiveTransform(), warpAffine() and findHomography(), but there are always gaps in my transformation mesh. The feature points are usually detected at quite different positions, so I think I need a good interpolation method.
About the maps:
Both maps are images containing human body parts and human skin. I'm using the OpenCV feature detection/matching algorithms to get a set of matching points in both maps. The tricky thing is that they contain arms and feet too; feature points on arms/feet can have much bigger offsets than points on the torso.
The goal:
I want to transform any point on map A as accurately as possible to the equivalent position on map B.
My current approach is to find the three closest feature points to my original point on map A and construct a triangle, then transform this triangle onto the same three feature points on map B. That works nicely when a lot of feature points surround my original point, but in larger areas without feature points the interpolation gives me problems.
Is this a good way to do it, or is there a much better solution?
My favourite would be the construction of a complete transformation map for both images, but I'm not sure how to do this. Is it possible at all? (A rough sketch of that idea follows after the code samples below.)
Thanks a lot for any advice!
Simple sketch of the transformation (I'm trying to find the points X1 to X3 from the left image in the right image):
[sketch of a sample transformation]
Sample for homography (OpenCVSharp):
Mat imgA = new Mat(@"d:\Mesh\Left2.jpg", ImreadModes.Color);
Mat imgB = new Mat(@"d:\Mesh\Right2.jpg", ImreadModes.Color);
Cv2.Resize(imgA, imgA, new Size(512, 341));
Cv2.Resize(imgB, imgB, new Size(512, 341));
SURF detector = SURF.Create(500.0);
KeyPoint[] keypointsA = detector.Detect(imgA);
KeyPoint[] keypointsB = detector.Detect(imgB);
SIFT extractor = SIFT.Create();
Mat descriptorsA = new Mat();
Mat descriptorsB = new Mat();
extractor.Compute(imgA, ref keypointsA, descriptorsA);
extractor.Compute(imgB, ref keypointsB, descriptorsB);
BFMatcher matcher = new BFMatcher(NormTypes.L2, true);
DMatch[] matches = matcher.Match(descriptorsA, descriptorsB);
double minDistance = 10000.0;
double maxDistance = 0.0;
for (int i = 0; i < matches.Length; ++i)
{
    double distance = matches[i].Distance;
    if (distance < minDistance)
    {
        minDistance = distance;
    }
    if (distance > maxDistance)
    {
        maxDistance = distance;
    }
}
List<DMatch> goodMatches = new List<DMatch>();
for (int i = 0; i < matches.Length; ++i)
{
    if (matches[i].Distance <= 3.0 * minDistance &&
        Math.Abs(keypointsA[matches[i].QueryIdx].Pt.Y - keypointsB[matches[i].TrainIdx].Pt.Y) < 30)
    {
        goodMatches.Add(matches[i]);
    }
}
Mat output = new Mat();
Cv2.DrawMatches(imgA, keypointsA, imgB, keypointsB, goodMatches.ToArray(), output);
List<Point2f> goodA = new List<Point2f>();
List<Point2f> goodB = new List<Point2f>();
for (int i = 0; i < goodMatches.Count; i++)
{
    goodA.Add(keypointsA[goodMatches[i].QueryIdx].Pt);
    goodB.Add(keypointsB[goodMatches[i].TrainIdx].Pt);
}
InputArray goodInputA = InputArray.Create<Point2f>(goodA);
InputArray goodInputB = InputArray.Create<Point2f>(goodB);
Mat h = Cv2.FindHomography(goodInputA, goodInputB);
Point2f centerA = new Point2f(imgA.Cols / 2.0f, imgA.Rows / 2.0f);
output.DrawMarker((int)centerA.X, (int)centerA.Y, Scalar.Red, MarkerStyle.Cross, 50, LineTypes.Link8, 5);
Point2f[] transformedPoints = Cv2.PerspectiveTransform(new Point2f[] { centerA }, h);
output.DrawMarker((int)transformedPoints[0].X + imgA.Cols, (int)transformedPoints[0].Y, Scalar.Red, MarkerStyle.Cross, 50, LineTypes.Link8, 5);
Code snippet for perspective transform (different approach, OpenCVSharp):
pointsA[0] = new Point(trisA[i].Item0, trisA[i].Item1);
pointsA[1] = new Point(trisA[i].Item2, trisA[i].Item3);
pointsA[2] = new Point(trisA[i].Item4, trisA[i].Item5);
pointsB[0] = new Point(trisB[i].Item0, trisB[i].Item1);
pointsB[1] = new Point(trisB[i].Item2, trisB[i].Item3);
pointsB[2] = new Point(trisB[i].Item4, trisB[i].Item5);
Mat transformation = Cv2.GetAffineTransform(pointsA, pointsB);
InputArray inputSource = InputArray.Create<Point2f>(new Point2f[] { new Point2f(10f, 50f) });
Mat outputMat = new Mat();
// GetAffineTransform returns a 2x3 matrix; PerspectiveTransform expects
// a 3x3 matrix and will throw, so use Cv2.Transform for an affine warp
Cv2.Transform(inputSource, outputMat, transformation);
Mat.Indexer<Point2f> indexer = outputMat.GetGenericIndexer<Point2f>();
var target = indexer[0, 0];
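For the record, here is a rough sketch of the "complete transformation map" idea from the question: triangulate the matched points with Subdiv2D and apply a per-triangle affine transform. The method name and the brute-force triangle search are mine, not a library API:
using System;
using System.Collections.Generic;
using System.Linq;
using OpenCvSharp;

static Point2f? MapPoint(Point2f p, List<Point2f> goodA, List<Point2f> goodB, Size sizeA)
{
    // Delaunay triangulation of the matched points in image A
    var subdiv = new Subdiv2D(new Rect(0, 0, sizeA.Width, sizeA.Height));
    foreach (Point2f q in goodA) subdiv.Insert(q);
    foreach (Vec6f t in subdiv.GetTriangleList())
    {
        Point2f[] triA = {
            new Point2f(t.Item0, t.Item1),
            new Point2f(t.Item2, t.Item3),
            new Point2f(t.Item4, t.Item5),
        };
        // skip triangles that touch Subdiv2D's virtual outer vertices
        if (triA.Any(v => v.X < 0 || v.Y < 0 || v.X >= sizeA.Width || v.Y >= sizeA.Height))
            continue;
        // brute-force point-in-triangle test
        if (Cv2.PointPolygonTest(InputArray.Create<Point2f>(triA), p, false) < 0)
            continue;
        // look up the corresponding vertices in B (exact-match on coordinates)
        Point2f[] triB = triA
            .Select(v => goodB[goodA.FindIndex(g => g.X == v.X && g.Y == v.Y)])
            .ToArray();
        // this triangle's affine transform, applied manually (2x3, CV_64F)
        using (Mat m = Cv2.GetAffineTransform(triA, triB))
        {
            double x = m.At<double>(0, 0) * p.X + m.At<double>(0, 1) * p.Y + m.At<double>(0, 2);
            double y = m.At<double>(1, 0) * p.X + m.At<double>(1, 1) * p.Y + m.At<double>(1, 2);
            return new Point2f((float)x, (float)y);
        }
    }
    return null; // p lies outside the triangulation
}
Points outside the triangulation still need separate handling (the nearest triangle, or a global homography as a fallback).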

OpenCV - iterate over each blob in a binary image and use it as a mask

I have a binary image and a color image of the same size. I need to iterate over each blob (block of white pixels) in the binary image, use it as a mask, and find the mean color of that blob's region in the color image.
I have tried:
HierarchyIndex[] hierarchy;
Point[][] contours;
binaryImage.FindContours(out contours, out hierarchy, RetrievalModes.List, ContourApproximationModes.ApproxNone);

using (Mat mask = Mat.Zeros(matColor.Size(), MatType.CV_8UC1))
    foreach (var bl in contours)
        if (Cv2.ContourArea(bl) > 5)
        {
            mask.DrawContour(bl, Scalar.White, -1);
            Rect rect = Cv2.BoundingRect(bl);
            Scalar mean = Cv2.Mean(colorImage[rect], mask[rect]);
            mask.DrawContour(bl, Scalar.Black, -1);
        }
This works for blobs without holes. However, in my case many blob regions have large holes that affect the mean calculation.
I couldn't figure out how to solve this using the hierarchy info, or with another approach.
(My code is for OpenCVSharp, but an answer in any other wrapper or language is welcome.)
Edit: I've added an example image. The traffic-sign parts are the problem.
Actually, I think I have solved this problem with the following method:
using PLine = List<Point>;
using Shape = List<List<Point>>;

internal static IEnumerable<Tuple<PLine, Shape>> FindContoursWithHoles(this Mat mat)
{
    Point[][] contours;
    HierarchyIndex[] hierarchy;
    mat.FindContours(out contours, out hierarchy, RetrievalModes.Tree, ContourApproximationModes.ApproxNone);

    // mark top-level contours as parents, then propagate the flag through
    // the hierarchy until every contour has been classified
    Dictionary<int, bool> dic = new Dictionary<int, bool>();
    for (int i = 0; i < contours.Length; i++)
        if (hierarchy[i].Parent < 0)
            dic[i] = true;
    bool ok = false;
    while (!ok)
    {
        ok = true;
        for (int i = 0; i < contours.Length; i++)
            if (dic.ContainsKey(i))
            {
                bool isParent = dic[i];
                var hi = hierarchy[i];
                if (hi.Parent >= 0) dic[hi.Parent] = (!isParent);
                if (hi.Child >= 0) dic[hi.Child] = (!isParent);
                while (hi.Next >= 0)
                {
                    dic[hi.Next] = isParent;
                    hi = hierarchy[hi.Next];
                    if (hi.Parent >= 0) dic[hi.Parent] = (!isParent);
                    if (hi.Child >= 0) dic[hi.Child] = (!isParent);
                }
                hi = hierarchy[i];
                while (hi.Previous >= 0)
                {
                    dic[hi.Previous] = isParent;
                    hi = hierarchy[hi.Previous];
                    if (hi.Parent >= 0) dic[hi.Parent] = (!isParent);
                    if (hi.Child >= 0) dic[hi.Child] = (!isParent);
                }
            }
            else
                ok = false;
    }

    // yield each parent contour together with its immediate children (the holes)
    foreach (int i in dic.Keys.Where(a => dic[a]))
    {
        PLine pl = contours[i].ToList();
        Shape childs = new Shape();
        var hiParent = hierarchy[i];
        if (hiParent.Child >= 0)
        {
            childs.Add(contours[hiParent.Child].ToList());
            var hi = hierarchy[hiParent.Child];
            while (hi.Next >= 0)
            {
                childs.Add(contours[hi.Next].ToList());
                hi = hierarchy[hi.Next];
            }
            hi = hierarchy[hiParent.Child];
            while (hi.Previous >= 0)
            {
                childs.Add(contours[hi.Previous].ToList());
                hi = hierarchy[hi.Previous];
            }
        }
        yield return Tuple.Create(pl, childs);
    }
}
By drawing the holes in black, we can use each blob as a single mask:
var blobContours = blobs.FindContoursWithHoles().ToList();
using (Mat mask = Mat.Zeros(mat0.Size(), MatType.CV_8UC1))
    for (int i = 0; i < blobContours.Count; i++)
    {
        var tu = blobContours[i];
        var bl = tu.Item1;
        if (Cv2.ContourArea(bl) > 100)
        {
            mask.DrawContour(bl, Scalar.White, -1);
            foreach (var child in tu.Item2)
                mask.DrawContour(child, Scalar.Black, -1);
            Rect rect = Cv2.BoundingRect(bl);
            Scalar mean = Cv2.Mean(mat0[rect], mask[rect]);
        }
    }
I think there should be an easier way.
And there is another problem. In some cases, an individual red part of a sign (which is a separate white blob) is not found as a parent outer circle with a child inner circle, but as one large parent contour with the two circles as children (i.e. a hole inside another hole forms a separate blob that is not reported as a parent). That is hierarchically correct, but it does not help me. I hope I have made myself clear; sorry for my English.
@Miki, thank you very much. I was able to achieve what I want using ConnectedComponents. It's simple and fast:
var cc = Cv2.ConnectedComponentsEx(binaryImage, PixelConnectivity.Connectivity8);
// note: cc.Blobs includes the background component (label 0),
// which you will probably want to skip
foreach (var bl in cc.Blobs)
    using (Mat mask = new Mat())
    {
        cc.FilterByBlob(binaryImage, mask, bl);
        Rect rect = bl.Rect;
        Scalar mean = Cv2.Mean(colorImage[rect], mask[rect]);
    }

AForge Imaging: finding rectangles in an image

I'm able to find the rectangles if the image was created in Paint, but not if it is a snapshot or screenshot; rectangles in snapshot images are not recognized.
Please help me.
string path = "D:\\testc.png";
Bitmap image = (Bitmap)Bitmap.FromFile(path);
BlobCounter blobCounter = new BlobCounter();
blobCounter.FilterBlobs = true;
blobCounter.MinHeight = 1;
blobCounter.MinWidth = 1;
blobCounter.ProcessImage(image);
Blob[] blobs = blobCounter.GetObjectsInformation();
var retcs = blobCounter.GetObjects(image, true);
SimpleShapeChecker shapeChecker = new SimpleShapeChecker();
foreach (var blob in blobs)
{
    List<IntPoint> edgePoints = blobCounter.GetBlobsEdgePoints(blob);
    List<IntPoint> cornerPoints;
    // use the shape checker to extract the corner points
    if (shapeChecker.IsQuadrilateral(edgePoints, out cornerPoints))
    {
        // only do things if the corners form a rectangle
        if (shapeChecker.CheckPolygonSubType(cornerPoints) == PolygonSubType.Rectangle)
        {
            // here I use the Graphics class to draw an overlay, but you
            // could also just use the cornerPoints list to calculate your
            // x, y, width, height values
            List<Point> Points = new List<Point>();
            foreach (var point in cornerPoints)
            {
                Points.Add(new Point(point.X, point.Y));
            }
            Graphics g = Graphics.FromImage(image);
            g.DrawPolygon(new Pen(Color.Red, 5.0f), Points.ToArray());
            image.Save("D:\\result.png");
        }
    }
}

How can I detect the number of lit faces on a mesh?

I have a number of objects arranged in a THREE.Scene, and I want to calculate or retrieve a relative value indicating how much light each object is receiving from a single PointLight source. Simplified example:
With the light positioned at the camera, Block 1's value might be 0.50, since 3 of its 6 faces are completely exposed, while Block 2 is ~0.33 and Block 3 is ~0.67.
I could probably do this the hard way by casting a ray from the light toward the center of each face and checking the intersections, but I'm assuming it's possible to retrieve the light level of each face directly.
This code takes the object's global matrix into consideration.
var amount = 0;
var rotationMatrix = new THREE.Matrix4();
var vector = new THREE.Vector3();
var centroid = new THREE.Vector3();
var normal = new THREE.Vector3();
for ( var i = 0; i < objects.length; i ++ ) {
    var object = objects[ i ];
    rotationMatrix.extractRotation( object.matrixWorld );
    for ( var j = 0; j < object.geometry.faces.length; j ++ ) {
        var face = object.geometry.faces[ j ];
        centroid.copy( face.centroid );
        object.matrixWorld.multiplyVector3( centroid );
        normal.copy( face.normal );
        rotationMatrix.multiplyVector3( normal );
        vector.sub( light.position, centroid ).normalize();
        if ( normal.dot( vector ) > 0 ) amount ++;
    }
}
I think something like this should do the trick.
var amount = 0;
var faces = mesh.geometry.faces;
for ( var i = 0; i < faces.length; i ++ ) {
    if ( faces[ i ].normal.dot( light.position ) > 0 ) amount ++;
}
(Warning: brute-force method!)
I'm including this for reference since it's what I'm currently using to meet all of the requirements described in the question. This function considers a face unlit if its center is not directly visible from the light's position.
I have no rotation matrix to consider in my application.
function getLightLevel(obj) {
    /* Return percentage of obj.geometry faces exposed to light */
    var litCount = 0;
    var faces = obj.geometry.faces;
    var faceCount = faces.length;
    var direction = new THREE.Vector3();
    var centroid = new THREE.Vector3();
    for (var i = 0; i < faceCount; i++) {
        // Test only light-facing faces (from mrdoob's first answer).
        if (faces[i].normal.dot(light.position) > 0) {
            centroid.add(obj.position, faces[i].centroid);
            direction.sub(centroid, light.position).normalize();
            // Exclude face if centroid is obscured by another object.
            var ray = new THREE.Ray(light.position, direction);
            var intersects = ray.intersectObjects(objects);
            if (intersects.length > 0 && intersects[0].face === faces[i]) {
                litCount++;
            }
        }
    }
    return litCount / faceCount;
}

XNA vertex buffer draws incorrectly

I'm new to using vertex buffers in XNA. In my project I construct one from a very large number of vertices. Until now I have been drawing the primitives with DrawUserIndexedPrimitives(), but I have outgrown that and need more efficiency, so I am trying to learn the basics of buffers.
I think I've successfully implemented a buffer (and have been very satisfied with the performance improvements), except that all the faces look wrong. Here's how the primitives looked without the buffer: http://i.imgur.com/ygsnB.jpg (the intended look), and here's how they look with it: http://i.imgur.com/rQN1p.jpg
Here's my code for loading a vertex buffer and index buffer:
private void RefreshVertexBuffer()
{
    List<VertexPositionNormalTexture> vertices = new List<VertexPositionNormalTexture>();
    List<int> indices = new List<int>();
    // rebuild the vertex list and index list
    foreach (var entry in blocks)
    {
        Block block = entry.Value;
        for (int q = 0; q < block.Quads.Count; q++)
        {
            // capture the offset before appending this quad's corners,
            // so the triangle indices point at the right vertices
            int offset = vertices.Count;
            vertices.AddRange(block.Quads[q].Corners);
            foreach (Triangle tri in block.Quads[q].Triangles)
            {
                indices.Add(tri.Indices[0] + offset);
                indices.Add(tri.Indices[1] + offset);
                indices.Add(tri.Indices[2] + offset);
            }
        }
    }
    vertexBuffer = new DynamicVertexBuffer(graphics, typeof(VertexPositionNormalTexture), vertices.Count, BufferUsage.None);
    indexBuffer = new DynamicIndexBuffer(graphics, IndexElementSize.ThirtyTwoBits, indices.Count, BufferUsage.None);
    vertexBuffer.SetData<VertexPositionNormalTexture>(vertices.ToArray(), 0, vertices.Count);
    indexBuffer.SetData<int>(indices.ToArray(), 0, indices.Count);
}
and here is the draw call for plain rendering:
public void Render(Planet planet, Camera camera)
{
    effect.View = camera.View;
    effect.Projection = camera.Projection;
    effect.World = planet.World;
    foreach (KeyValuePair<Vector3, Block> entry in planet.Geometry.Blocks)
    {
        int blockID = planet.BlockMap.Blocks[entry.Value.U][entry.Value.V][entry.Value.W];
        if (blockID == 1)
            effect.Texture = dirt;
        else if (blockID == 2)
            effect.Texture = rock;
        effect.CurrentTechnique.Passes[0].Apply();
        foreach (Quad quad in entry.Value.Quads)
        {
            foreach (Triangle tri in quad.Triangles)
            {
                graphics.DrawUserIndexedPrimitives<VertexPositionNormalTexture>(PrimitiveType.TriangleList,
                    quad.Corners, 0, quad.Corners.Length, tri.Indices, 0, 1);
            }
        }
    }
}
...and for the vertex buffer rendering:
public void RenderFromBuffer(Planet planet, Camera camera)
{
    effect.View = camera.View;
    effect.Projection = camera.Projection;
    effect.World = planet.World;
    graphics.SetVertexBuffer(planet.Geometry.VertexBuffer);
    graphics.Indices = planet.Geometry.IndexBuffer;
    effect.CurrentTechnique.Passes[0].Apply();
    graphics.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0, planet.Geometry.VertexBuffer.VertexCount, 0, planet.Geometry.IndexBuffer.IndexCount / 6);
}
Are my indices off? Or is this due to some quirk in how the graphics device behaves with a buffer versus user-indexed primitives?
Edit: It may also have something to do with the way I stitch the triangle indices together. When I built this project with DrawUserIndexedPrimitives, I ran into similar issues where triangles facing a certain direction would draw on the wrong 'side' (that is, the wrong face would be culled). So I came up with this solution:
Triangle[] tris;
if (faceDirection == ADJACENT_FACE_NAMES.BOTTOM | faceDirection == ADJACENT_FACE_NAMES.OUTER | faceDirection == ADJACENT_FACE_NAMES.LEFT)
{
    // create the triangles for the quad
    tris = new Triangle[2] {
        new Triangle( // the bottom triangle
            new int[3] {
                0, // the bottom left corner
                1, // the bottom right corner
                2  // the top left corner
            }),
        new Triangle( // the top triangle
            new int[3] {
                1, // the bottom right corner
                3, // the top right corner
                2  // the top left corner
            })
    };
}
else
{
    tris = new Triangle[2] {
        new Triangle(
            new int[3] {
                2, // the top left corner
                1,
                0
            }),
        new Triangle(
            new int[3] {
                2,
                3,
                1
            })
    };
}
The problem is that the triangles are being culled, so you have two options to fix this.
You can change the order of the triangle indices:
foreach (Triangle tri in block.Quads[q].Triangles)
{
    indices.Add(tri.Indices[1] + offset);
    indices.Add(tri.Indices[0] + offset);
    indices.Add(tri.Indices[2] + offset);
}
Or you can change your RasterizerState. The default rasterizer state is RasterizerState.CullCounterClockwise; you can change it to RasterizerState.CullClockwise, or to RasterizerState.CullNone to disable culling altogether.
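For example, to quickly test the culling hypothesis, set this before the draw call (assuming graphics is your GraphicsDevice):

// if the missing faces appear with culling disabled, the winding
// order of your indices is the culprit
graphics.RasterizerState = RasterizerState.CullNone;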
EDIT:
This line has a bug:
graphics.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0, planet.Geometry.VertexBuffer.VertexCount, 0, planet.Geometry.IndexBuffer.IndexCount / 6)
It should be:
graphics.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0, planet.Geometry.VertexBuffer.VertexCount, 0, planet.Geometry.IndexBuffer.IndexCount / 3)
since a triangle list consumes three indices per primitive.
