OpenCV detect and compute image features

I recently upgraded from OpenCV 3.4.5 to OpenCV 4.2.0.
Previously I followed this stitching example: https://github.com/opencv/opencv/blob/5131619a1a4d1d3a860b5da431742cc6be945332/samples/cpp/stitching_detailed.cpp (particularly line 480). After upgrading, I altered the code to align more closely with this newer example: https://github.com/opencv/opencv/blob/master/samples/cpp/stitching_detailed.cpp (note line 481).
The problem is that with the new computeImageFeatures function I am getting fewer detected features. The older code with the same images gave me 1400+ features, but computeImageFeatures gives exactly 500 features per image. Any ideas how to "fix" this? I believe it also causes the Bundle Adjuster to fail later.

According to the documentation of cv::ORB::create, the default value of the nfeatures argument is 500:
The first argument is nfeatures; you may set it to a greater number, such as 2000.
Here are the constructor arguments:
static Ptr<ORB> cv::ORB::create(
    int nfeatures = 500,
    float scaleFactor = 1.2f,
    int nlevels = 8,
    int edgeThreshold = 31,
    int firstLevel = 0,
    int WTA_K = 2,
    int scoreType = ORB::HARRIS_SCORE,
    int patchSize = 31,
    int fastThreshold = 20
)
Try modifying:
if (features_type == "orb")
{
    finder = ORB::create();
}

to

if (features_type == "orb")
{
    finder = ORB::create(2000);
}
In case you are not using ORB but another feature type, read the documentation of that type's create function; I assume all of them have a similar argument limiting the number of features.
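For reference, here is a minimal sketch of the idea, assuming you build the finder the way stitching_detailed.cpp does (the makeFinder helper and the 2000 cap are just illustrative, not part of the sample):

#include <string>
#include <opencv2/features2d.hpp>
#include <opencv2/stitching/detail/matchers.hpp>

// Hypothetical helper mirroring the finder selection in stitching_detailed.cpp.
// The cap of 2000 is an example value; tune it for your images.
static cv::Ptr<cv::Feature2D> makeFinder(const std::string& features_type)
{
    if (features_type == "orb")
        return cv::ORB::create(2000);   // nfeatures = 2000 instead of the default 500
    if (features_type == "akaze")
        return cv::AKAZE::create();     // AKAZE is threshold-based, not capped by a count
    return cv::ORB::create(2000);
}

// Usage inside the stitching loop, as in the newer sample:
//   cv::detail::ImageFeatures features;
//   cv::detail::computeImageFeatures(makeFinder("orb"), img, features);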

Related

Visualizing a line in drake visualizer with C++

This question is related to "Is there a way of visualising a line in drake visualizer", where I asked how to visualize a line in the drake visualizer (about 3 years ago; that approach worked fine with v0.10.0). I am trying to achieve the same with the new API and was wondering if there is any example/documentation that can guide me on how to publish a line to the visualizer. My previous method for publishing a line looked like this:
void publishLine(const std::vector<std::vector<double>>& pts,
                 const std::vector<std::string>& path, lcm::DrakeLcm& lcm,
                 std::vector<double> color) {
  long long int now = getUnixTime() * 1000 * 1000;
  nlohmann::json j = {{"timestamp", now},
                      {
                          "setgeometry",
                          {{{"path", path},
                            {"geometry",
                             {
                                 {"type", "line"},
                                 {"points", pts},
                                 {"color", color},
                                 {"radius", 0.1},
                             }}}},
                      },
                      {"settransform", nlohmann::json({})},
                      {"delete", nlohmann::json({})}};
  auto msg = robotlocomotion::viewer2_comms_t();
  msg.utime = now;
  msg.format = "treeviewer_json";
  msg.format_version_major = 1;
  msg.format_version_minor = 0;
  msg.data.clear();
  for (auto& c : j.dump()) msg.data.push_back(c);
  msg.num_bytes = j.dump().size();
  // Use channel 0 for remote viewer communications.
  lcm.get_lcm_instance()->publish("DIRECTOR_TREE_VIEWER_REQUEST_<0>", &msg);
}
You can use Meshcat::SetLine or Meshcat::SetLineSegments: https://drake.mit.edu/doxygen_cxx/classdrake_1_1geometry_1_1_meshcat.html#aa5b082d79e267c040cbd066a11cdcb54
One caveat is that many browsers/WebGL implementations do not support the linewidth property in three.js. For thick lines, consider adding a cylinder using SetObject.
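For example, a minimal sketch of publishing a polyline with Meshcat::SetLine (check the exact signature against the Doxygen link above for your Drake version; the path name and the values here are arbitrary):

#include <chrono>
#include <thread>

#include <Eigen/Dense>

#include "drake/geometry/meshcat.h"
#include "drake/geometry/rgba.h"

int main() {
  drake::geometry::Meshcat meshcat;

  // Vertices of the polyline as a 3xN matrix of world coordinates.
  Eigen::Matrix3Xd points(3, 3);
  points.col(0) << 0.0, 0.0, 0.0;
  points.col(1) << 1.0, 0.0, 0.0;
  points.col(2) << 1.0, 1.0, 0.0;

  // line_width may be ignored by some browsers (the caveat above).
  meshcat.SetLine("/my_line", points, /* line_width = */ 2.0,
                  drake::geometry::Rgba(1.0, 0.0, 0.0, 1.0));

  // Keep the process alive so the line can be inspected in the browser.
  std::this_thread::sleep_for(std::chrono::seconds(30));
  return 0;
}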

Where and how is pool difficulty (pdiff) set in the Bitcoin source code?

I am working with the Bitcoin source code and want to set the initial difficulty to 1 (I changed bdiff, the nBits field), so I need to change pdiff as well. According to:
difficulty = difficulty_1_target / current_target (target is a 256 bit number)
difficulty_1_target can be different for various ways to measure difficulty. Traditionally, it represents a hash where the leading 32 bits are zero and the rest are one (this is known as "pool difficulty" or "pdiff"). The Bitcoin protocol represents targets as a custom floating point type with limited precision; as a result, Bitcoin clients often approximate difficulty based on this (this is known as "bdiff").
Does anyone know where pdiff is stored? Is it hard-coded?
I found the solution! There is not exactly a pdiff field in the code, but there is a function in blockchain.cpp:
double GetDifficulty(const CBlockIndex* blockindex)
{
    if (blockindex == nullptr)
    {
        return 1.0;
    }

    int nShift = (blockindex->nBits >> 24) & 0xff;
    double dDiff =
        (double)0x0000ffff / (double)(blockindex->nBits & 0x00ffffff);

    while (nShift < 29)
    {
        dDiff *= 256.0;
        nShift++;
    }
    while (nShift > 29)
    {
        dDiff /= 256.0;
        nShift--;
    }

    return dDiff;
}
For Bitcoin the initial nBits is 0x1d00ffff, so dDiff above becomes 1 and nShift equals 0x1d. For my private version I set nBits to 0x1f0fffff, so dDiff should be calculated as
double dDiff = (double)0x000fffff / (double)(blockindex->nBits & 0x00ffffff);
and nShift for me is 0x1f, so I changed the while conditions to while (nShift < 31) and while (nShift > 31). Running the command bitcoin-cli getdifficulty then gives 1 as the initial difficulty.
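For a quick sanity check, here is a small standalone sketch of the same calculation, independent of the Bitcoin codebase (the function name and the mantissa/pivot parameters are just for illustration; they encode what your chain treats as "difficulty 1"):

#include <cstdint>
#include <cstdio>

// Standalone re-implementation of GetDifficulty for experimentation.
double DifficultyFromBits(uint32_t nBits, double one_mantissa, int pivot)
{
    int nShift = (nBits >> 24) & 0xff;
    double dDiff = one_mantissa / (double)(nBits & 0x00ffffff);
    while (nShift < pivot) { dDiff *= 256.0; nShift++; }
    while (nShift > pivot) { dDiff /= 256.0; nShift--; }
    return dDiff;
}

int main()
{
    // Mainnet genesis: nBits = 0x1d00ffff, difficulty-1 mantissa 0x0000ffff, pivot 29.
    std::printf("%f\n", DifficultyFromBits(0x1d00ffff, (double)0x0000ffff, 29)); // prints 1.000000
    // Modified chain: nBits = 0x1f0fffff, difficulty-1 mantissa 0x000fffff, pivot 31.
    std::printf("%f\n", DifficultyFromBits(0x1f0fffff, (double)0x000fffff, 31)); // prints 1.000000
    return 0;
}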

Error when subtracting two same-size matrices in OpenCV, maybe an error in conversion

I am comparing the difference between two similar grey images using the Euclidean distance. The images are in grey (single-channel) format.
int dis = 0;
for (int i = 0; i < mat1.rows; i++)
    for (int j = 0; j < mat1.cols; j++)
    {
        cout << mat1.at<unsigned char>(i, j) << endl;
        int a = (mat1.at<unsigned char>(i, j) - mat2.at<unsigned char>(i, j));
        dis += (a * a);
    }
dis = sqrt(dis);
But the program gives an error, and it doesn't say what exactly the error is. I think the error is due to the conversion in int a = (mat1.at<unsigned char>(i,j) - mat2.at<unsigned char>(i,j));
I have tried variations of that subtraction line, but it still doesn't work.
mat2[i] looks weird; what's the purpose of the index there?
Also, you might just use the built-in norm function, which already does what you're trying to do.
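For example, a minimal sketch of computing the Euclidean (L2) distance between two single-channel images with cv::norm (the matrices must have the same size and type; the test images here are synthetic):

#include <iostream>
#include <opencv2/core.hpp>

int main()
{
    // Two same-size, single-channel (CV_8UC1) test images.
    cv::Mat mat1(100, 100, CV_8UC1, cv::Scalar(10));
    cv::Mat mat2(100, 100, CV_8UC1, cv::Scalar(13));

    // NORM_L2 with two inputs computes sqrt(sum((mat1 - mat2)^2)),
    // without the unsigned char overflow issues of a hand-rolled loop.
    double dis = cv::norm(mat1, mat2, cv::NORM_L2);
    std::cout << "Euclidean distance: " << dis << std::endl;  // sqrt(100*100*9) = 300
    return 0;
}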

Standard Hough Lines in EMGU CV

I need to use the standard Hough transform (instead of the HoughLinesBinary method, which implements the probabilistic Hough transform) and have attempted to do so by creating a custom version of the HoughLinesBinary method:
using (MemStorage stor = new MemStorage())
{
    IntPtr lines = CvInvoke.cvHoughLines2(canny.Ptr, stor.Ptr, Emgu.CV.CvEnum.HOUGH_TYPE.CV_HOUGH_STANDARD, rhoResolution, (thetaResolution * Math.PI) / 180, threshold, 0, 0);
    Seq<MCvMat> segments = new Seq<MCvMat>(lines, stor);
    List<MCvMat> lineslist = segments.ToList();
    foreach (MCvMat line in lineslist)
    {
        // Process lines: (rho, theta)
    }
}
My problem is that I am unsure what type the returned sequence holds. I believe it should be MCvMat, since the documentation says CvMat* is used in OpenCV and also states that for STANDARD "the matrix must be (the created sequence will be) of CV_32FC2 type".
I am unclear as to what I would need to do to retrieve and process the correct output data from the STANDARD Hough lines (i.e. the 2x1 vector for each line giving the rho and theta information).
Any help would be greatly appreciated. Thank you.
-Sal
I had the same problem myself a couple of days ago. This is how I solved it using marshalling. Please let me know if you find a simpler solution.
using (MemStorage stor = new MemStorage())
{
    IntPtr lines = CvInvoke.cvHoughLines2(canny.Ptr, stor.Ptr, Emgu.CV.CvEnum.HOUGH_TYPE.CV_HOUGH_STANDARD, rhoResolution, (thetaResolution * Math.PI) / 180, threshold, 0, 0);
    int maxLines = 100;
    for (int i = 0; i < maxLines; i++)
    {
        IntPtr line = CvInvoke.cvGetSeqElem(lines, i);
        if (line == IntPtr.Zero)
        {
            // No more lines
            break;
        }
        PolarCoordinates coords = (PolarCoordinates)System.Runtime.InteropServices.Marshal.PtrToStructure(line, typeof(PolarCoordinates));
        // Do something with your Hough lines
    }
}
with a struct defined as follows:
public struct PolarCoordinates
{
    public float Rho;
    public float Theta;
}
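For comparison, this is what the same standard Hough transform looks like through the underlying OpenCV C++ API (cv::HoughLines), including the usual conversion of each (rho, theta) pair into two drawable endpoints; the resolution and threshold values are placeholders:

#include <cmath>
#include <vector>
#include <opencv2/imgproc.hpp>

void drawStandardHoughLines(const cv::Mat& canny, cv::Mat& colorOut)
{
    // Each detected line is returned as (rho, theta) in a Vec2f.
    std::vector<cv::Vec2f> lines;
    cv::HoughLines(canny, lines, /* rho = */ 1, /* theta = */ CV_PI / 180, /* threshold = */ 100);

    for (const cv::Vec2f& l : lines)
    {
        float rho = l[0], theta = l[1];
        // Convert (rho, theta) to two far-apart points on the line for drawing.
        double a = std::cos(theta), b = std::sin(theta);
        double x0 = a * rho, y0 = b * rho;
        cv::Point pt1(cvRound(x0 + 1000 * (-b)), cvRound(y0 + 1000 * a));
        cv::Point pt2(cvRound(x0 - 1000 * (-b)), cvRound(y0 - 1000 * a));
        cv::line(colorOut, pt1, pt2, cv::Scalar(0, 0, 255), 2);
    }
}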

cvExtractSURF doesn't work when useProvidedKeypoints = true

So, I'm trying to extract some SURF keypoints, but I want to impose my own key points. So I set the last parameter, "useProvidedKeypoints", to "true".
Also, when I create my KeyPoint, I use the default constructor (so some default values there). I only change the point "pt" and the octave, which I set to 3.
I'm using the C++ interface with SURF, but I know the problem is right at cvExtractSURF, because I copied that part of its code into mine to help me debug.
When I call that function with the last parameter set to true, I get this error:
OpenCV Error: Bad argument (Unknown array type) in cvarrToMat, file /home/widgg/opencv/trunk/modules/core/src/matrix.cpp, line 651
terminate called after throwing an instance of 'cv::Exception'
what(): /home/widgg/opencv/trunk/modules/core/src/matrix.cpp:651: error: (-5) Unknown array type in function cvarrToMat
I really don't know what I'm doing wrong!
EDIT:
Here's some code. First, how I create the keypoints (I left out some information, like the layer_id stuff, but you get the main idea):
for (json_pt_info_vector::iterator b_beg = beg->points.begin(); b_beg != b_end; ++b_beg)
{
    int layer_id = b_beg->layer_id;
    json_point_info_coord &jpic = b_beg->coord;
    jpic.feature_id = features[layer_id].keypoints.size();
    KeyPoint kp;
    kp.octave = 3;
    kp.pt.x = jpic.x;
    kp.pt.y = jpic.y;
    features[layer_id].keypoints.push_back(kp);
}
Here's the call to SURF:
SURF surf(300, 3, 4);
for (int i = 0; i < nb_img; ++i)
{
    debug_msg("extract_features #4.1");
    cv::detail::ImageFeatures &cdif = features[i];
    Mat gray_image = imread(param.layer_images[i], 0); // 0 = force to gray scale!
    debug_msg("extract_features #4.2");
    vector<float> descriptors;
    debug_msg("extract_features #4.3");
    surf(gray_image, Mat(), cdif.keypoints, descriptors, true); // MUST BE TRUE TO FORCE THE PROVIDED KEYPOINTS
    debug_msg("extract_features #4.4");
    cdif.descriptors = Mat(descriptors, true).reshape(1, (int)cdif.keypoints.size());
    debug_msg("extract_features #4.5");
    gray_image.release();
    debug_msg("extract_features #4.6");
    images[i] = imread(param.layer_images[i]); // keep the image open
}
It crashes after #4.3 in the debug messages!
Hope that helps!
EDIT 2:
I replaced part of the code with cv::SurfDescriptorExtractor. I replaced everything from 4.3 to 4.5 with the following line:
extractor.compute(gray_image, cdif.keypoints, cdif.descriptors);
So now there's still a bug, but it's located somewhere else and not necessarily related to this question!
I'm surprised that the call to surf(gray_image, Mat(), cdif.keypoints, descriptors, true) even compiles. The descriptors argument should be a cv::Mat, not a vector.
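For reference, a minimal sketch of the approach in EDIT 2 through the modern Feature2D interface: building keypoints by hand and computing descriptors only at those points (this assumes the opencv_contrib xfeatures2d module is available; the image path, keypoint coordinates, and the size of 20 are placeholder values):

#include <vector>
#include <opencv2/core.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/xfeatures2d.hpp>  // SURF lives in opencv_contrib

int main()
{
    cv::Mat gray = cv::imread("layer.png", cv::IMREAD_GRAYSCALE);
    if (gray.empty()) return 1;

    // Build the keypoints ourselves. The size field should be set to something
    // sensible; leaving it at its default while only filling pt and octave can
    // give SURF a degenerate patch to describe.
    std::vector<cv::KeyPoint> keypoints;
    keypoints.emplace_back(cv::Point2f(120.f, 80.f), /* size = */ 20.f);
    keypoints.emplace_back(cv::Point2f(200.f, 150.f), /* size = */ 20.f);

    // Compute descriptors only at the provided keypoints.
    cv::Ptr<cv::Feature2D> surf = cv::xfeatures2d::SURF::create(300);
    cv::Mat descriptors;
    surf->compute(gray, keypoints, descriptors);  // descriptors: N x 64, CV_32F

    return 0;
}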
