How to export a profile of a calibrated image with an ImageJ macro - imagej

I wrote a little macro that locates spheres on a spatially calibrated (DICOM) image. Once found, it draws a lot of lines and saves the brightness profiles along these lines to CSV files. So far, this works nicely and fast. This is the piece of code that extracts the profiles and saves them:
for (i = 0; i < 360; i++) {
    run("Clear Results");
    angle = i*2*PI/360;
    makeLine(xm, ym, (xm + (length*sin(angle))), (ym - (length*cos(angle))));
    profile = getProfile();
    for (j = 0; j < profile.length; j++) {
        setResult("Value", j, profile[j]);
    }
    updateResults();
    saveAs("Results", path + "\\angle_" + i + ".csv");
}
My problem is that the actual scale is not exported. I get something like this:
1,3070.070
2,3069.000
3,3069.986
4,3053.646
but I want
0.4395 3070
0.8789 3070
1.3184 3070
1.7578 3069.8994
And so on. I tried to modify this line a little:
setResult("Value", j*xscale, profile[j]);
but this does not work. I also tried to plot the profiles and then read and save them.
for (i = 0; i < 360; i++) {
    run("Clear Results");
    angle = i*2*PI/360;
    makeLine(xm, ym, (xm + (length*sin(angle))), (ym - (length*cos(angle))));
    run("Plot Profile");
    Plot.getValues(xplot, yplot);
    for (j = 0; j < xplot.length; j++) {
        print(xplot[j], yplot[j]);
    }
    selectWindow("Log");
    saveAs("Text", path + "\\angle_" + i + ".csv");
    print("\\Clear");
    selectWindow("04");
}
(Sorry, the window switching is still experimental and the profile plots are not being closed.)
This works in principle, but it is of course painfully slow. So my question is: how do I export not just line numbers but the correct scale of a profile?
Thank you all very much!

The first column is just a non-editable row number. The code must be changed like this (dx is the pixel width taken from the image calibration):
getPixelSize(unit, dx, dy);  // pixel width/height from the spatial calibration
for (i = 0; i < 360; i++) {
    run("Clear Results");
    angle = i*2*PI/360;
    makeLine(xm, ym, (xm + (length*sin(angle))), (ym - (length*cos(angle))));
    profile = getProfile();
    for (j = 0; j < profile.length; j++) {
        setResult("xvalues", j, j*dx);
        setResult("yvalues", j, profile[j]);
    }
    updateResults();
    saveAs("Results", path + "\\angle_" + i + ".csv");
}

Related

How to recreate a boundary from an array of junctions in ImageJ - fiji?

I am detecting the boundary of a sandpile, storing it in an array and then saving it as a text file for later use. The way I stored the boundary as a text file is by using the wand tool, then getting the properties of the selection, which gives me a table. I then convert the table to an array and finally store it as a text file.
After doing so I noticed that the above-mentioned table only has the (X,Y) coordinates of the "junctions" on the boundary and not every pixel of the boundary.
By later use I mean that I want to smooth out the boundary with various methods and redraw it, but I am stuck on how to go from the junctions to the complete boundary. Below is my attempt to do it, but I am seeing heavy RAM usage just after the output is printed.
Thanks for any help and your time.
X = newArray(1, 5, 5, 10);
Y = newArray(3, 3, 8, 8);
c = newArray("c1", "c2");
// answer should be g = (1,2,3,4,5,5,5,5,5,5,6,7,8,9,10,  -- x part
//                       3,3,3,3,3,4,5,6,7,8,8,8,8,8,8)   -- y part
//f = slide(a, 2, c); Array.print(f);
g = cmpltarray(X, Y);
Array.print(g);

function cmpltarray(Tx, Ty) {
    for (i = 0; i < Tx.length-1; i++) {
        if (Tx[i] == Tx[i+1] && Ty[i] != Ty[i+1]) {
            l = abs(Ty[i]-Ty[i+1]) - 1;
            tempy = newArray(l);
            tempx = newArray(l);
            for (j = 0; j < l; j++) {
                tempy[j] = Ty[i]+j+1;
                tempx[j] = Tx[i];
            }
            Tx = slide(Tx, (i+1), tempx);
            Ty = slide(Ty, (i+1), tempy);
            i = i+l;
        }
        if (Ty[i] == Ty[i+1] && Tx[i] != Tx[i+1]) {
            l = abs(Tx[i]-Tx[i+1]) - 1;
            tempy = newArray(l);
            tempx = newArray(l);
            for (j = 0; j < l; j++) {
                tempx[j] = Tx[i]+j+1;
                tempy[j] = Ty[i];
            }
            Tx = slide(Tx, (i+1), tempx);
            Ty = slide(Ty, (i+1), tempy);
            i = i+l;
        }
    }
    return Array.concat(Tx, Ty);
}

function slide(array, n, data) {
    array = Array.concat(Array.slice(array, 0, n), data, Array.slice(array, n, array.length));
    return array;
}

Getting the vector element which holds a certain point

I have a vector of points from a grey section of an image, built like this:
std::vector<Point> vectorg;
for (i = 0; i <= hei - 1; i++) {
    for (j = 0; j <= wid - 1; j++) {
        if (mask(i, j) == 128) {
            vectorg.push_back(Point(j, i));
        }
    }
}
Knowing which coordinates are stored in a certain cell is possible with:
cout << vectorg[0].x;
cout << vectorg[0].y;
The question is now the other way around: is it possible to know which cell holds certain coordinates?
Thanks a lot, I'm new here and also new to OpenCV programming.
Just do the following:
#include <algorithm>
// ...
Point p(searchedX, searchedY);
std::vector<Point>::iterator element = std::find(vectorg.begin(), vectorg.end(), p);
if (element != vectorg.end()) {
    cout << (*element).x << endl;
    cout << (*element).y << endl;
} else {
    cout << "The point is not in the vector" << endl;
}
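If what is needed is the cell number (the index into the vector) rather than the iterator itself, a minimal sketch building on the snippet above would use std::distance:
#include <iterator>
// ...
if (element != vectorg.end()) {
    // position of the cell that holds the searched point
    size_t cell = std::distance(vectorg.begin(), element);
    cout << "found at index " << cell << endl;
}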
It may be overkill, but a way to do it (without doing a greedy exhaustive search) would be to build a FLANN index that will store the position of your points.
The feature matrix is made of the coordinates of your points.
Since OpenCV knows how to convert a vector to a matrix, you should be able to use your current vector as is.
Then, if you want only one point, just ask for the 1 nearest neighbour in the query (k parameter).
The bonus is that if you decide later that you also need the closest points in the neighborhood, you can just raise the value of k.
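A minimal sketch of that idea (assuming the OpenCV 2.x cv::flann wrapper; searchedX and searchedY are the coordinates being looked up, as in the answer above):
#include <opencv2/core/core.hpp>
#include <opencv2/flann/miniflann.hpp>

// Feature matrix: one row per point, columns are (x, y), type CV_32F.
cv::Mat features((int)vectorg.size(), 2, CV_32F);
for (int i = 0; i < (int)vectorg.size(); i++) {
    features.at<float>(i, 0) = (float)vectorg[i].x;
    features.at<float>(i, 1) = (float)vectorg[i].y;
}
cv::flann::Index index(features, cv::flann::KDTreeIndexParams(4));

// Ask for the single nearest neighbour (k = 1) of the searched coordinates.
cv::Mat query = (cv::Mat_<float>(1, 2) << (float)searchedX, (float)searchedY);
cv::Mat indices, dists;
index.knnSearch(query, indices, dists, 1, cv::flann::SearchParams(32));
int cell = indices.at<int>(0, 0);  // position of the closest stored point in vectorg
Raising the knn argument of knnSearch from 1 to k then returns the k closest points, as mentioned above.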
Sorry for the late response, and thanks for the answers, they inspired me indirectly.
I found an easy way to do it: make another Mat which holds, for each stored pixel, the index at which its coordinates were saved.
std::vector<Point> vectorg;
cv::Mat_<int> Index = Mat_<int>::zeros(hei, wid);
int count = 0;
for (i = 0; i <= hei - 1; i++) {
    for (j = 0; j <= wid - 1; j++) {
        if (mask(i, j) == 128) {
            vectorg.push_back(Point(j, i));
            Index(vectorg[count]) = count;  // remember which cell holds this pixel
            count++;
        }
    }
}
This way I can know which cell holds certain coordinates by simply:
cout<<Index(36,362); //as example
Thanks a lot, I'll be in your care next time.

Compile error in CV_MAT_ELEM

As a result of a call to estimateRigidTransform() I get a cv::Mat object named "trans". To retrieve its contained matrix I try to access its elements this way:
for (i = 0; i < 2; i++)
    for (j = 0; j < 3; j++)
    {
        mtx[j][i] = CV_MAT_ELEM(trans, double, i, j);
    }
Unfortunately with VS2010 I get a compiler error
error C2228: left of '.ptr' must have class/struct/union
for the line with CV_MAT_ELEM. When I unwrap this macro I find something like
((mat).data.ptr + (size_t)(mat).step*(row) + (pix_size)*(col))
When I remove the ".ptr" behind (mat).data it compiles. But I can't imagine that's the solution (or can't imagine that's a bug and I'm the only one who noticed it). So what could be wrong here really?
Thanks!
You don't access the Mat elements like this. For a traversal, see my other answer here with sample code:
color matrix traversal
or see the OpenCV reference manual for a grayscale Mat:
Mat M; // should be grayscale
int cols = M.cols, rows = M.rows;
for (int i = 0; i < rows; i++)
{
    const double* Mi = M.ptr<double>(i);
    for (int j = 0; j < cols; j++)
    {
        Mi[j]; // is the matrix element.
    }
}
Just an addendum from my side: meanwhile I found out that CV_MAT_ELEM expects a CvMat structure (the OpenCV C interface), not a cv::Mat (the C++ interface). That's why I get this funny error. Conversion from cv::Mat to CvMat can be done simply by casting to CvMat. Funny confusion between the C and C++ interfaces in OpenCV...
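For illustration, a minimal sketch of that cast (assuming OpenCV 2.x, where cv::Mat still converts implicitly to the C-style CvMat header; estimateRigidTransform() returns a CV_64F matrix, so double is the matching element type):
#include <opencv2/core/core.hpp>
#include <opencv2/core/core_c.h>  // CvMat, CV_MAT_ELEM

double mtx[3][2];
CvMat ctrans = trans;  // shares the data of the cv::Mat, no copy is made
for (int i = 0; i < 2; i++)
    for (int j = 0; j < 3; j++)
        mtx[j][i] = CV_MAT_ELEM(ctrans, double, i, j);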

Bug in OpenCV2.3 cv::split() function. Identical values in all 3 channels

After spending a couple of days trying to figure out why the OpenCV DFT would give 100% identical results for all three channels, I ended up concluding that there might be a bug in the split() function that OpenCV provides for splitting an input image into 3 single-channel images.
std::vector<cv::Mat> rgbChannels(3,cv::Mat(inputImage.size(),CV_64FC1));
cv::split(inputImage, rgbChannels);
After saving the image values to disk and using a file differencing tool, I found out that all values in the split channels were identical.
Have I done something wrong?
My workaround was the following function. But that also gave me exactly identical values, giving me a hint that somehow the vectors were not being handled correctly by OpenCV.
std::vector<cv::Mat> SplitImage(cv::Mat inputImage)
{
    //copy original in BGR order
    std::vector<cv::Mat> splittedImage(3, cv::Mat(inputImage.size(), CV_64FC1));
    cv::Mat tempImage(inputImage.size(), CV_64FC1);
    for (int row = 0; row < inputImage.size().height; row++)
    {
        for (int col = 0; col < inputImage.size().width; col++)
        {
            splittedImage[0].at<double>(row, col) = inputImage.at<cv::Vec3d>(row, col)[0];
            splittedImage[1].at<double>(row, col) = inputImage.at<cv::Vec3d>(row, col)[1];
            splittedImage[2].at<double>(row, col) = inputImage.at<cv::Vec3d>(row, col)[2];
        }
    }
    return splittedImage;
}
And I finally wrote the following to solve the problem:
std::vector<cv::Mat> SplitImage(cv::Mat inputImage)
{
    //copy original in BGR order
    std::vector<cv::Mat> splittedImage(3, cv::Mat(inputImage.size(), CV_64FC1));
    std::vector<cv::Mat>::iterator it;
    it = splittedImage.begin();
    for (int channelNo = 0; channelNo < inputImage.channels(); channelNo++)
    {
        cv::Mat tempImage(inputImage.size(), CV_64FC1);
        for (int row = 0; row < inputImage.size().height; row++)
        {
            for (int col = 0; col < inputImage.size().width; col++)
            {
                tempImage.at<double>(row, col) = inputImage.at<cv::Vec3d>(row, col)[channelNo];
            }
        }
        it = splittedImage.insert(it, tempImage);
        it++;
    }
    return splittedImage;
}
Has anyone had a problem with the split() function, or have I done something wrong?
It is not a bug in OpenCV but there is a problem with your code.
The following line does not create a vector of 3 different Mats:
std::vector<cv::Mat> rgbChannels(3,cv::Mat(inputImage.size(),CV_64FC1));
Instead, this line produces a vector of 3 Mat headers sharing the same memory. It works this way because the Mat copy constructor does not make a deep copy - it just increments an internal reference counter.
Just change your code to the following to solve your problem:
std::vector<cv::Mat> rgbChannels(3);
cv::split(inputImage, rgbChannels);
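A minimal sketch that makes the difference visible (the image contents are made up purely for illustration):
#include <opencv2/core/core.hpp>
#include <iostream>
#include <vector>

int main()
{
    // Filling a vector from a single Mat: all 3 entries share one pixel buffer.
    std::vector<cv::Mat> shared(3, cv::Mat::zeros(2, 2, CV_64FC1));
    shared[0].at<double>(0, 0) = 1.0;
    std::cout << shared[2].at<double>(0, 0) << std::endl;    // prints 1, not 0

    // Default-constructed headers: cv::split() allocates 3 independent Mats.
    cv::Mat img(2, 2, CV_64FC3, cv::Scalar(1, 2, 3));
    std::vector<cv::Mat> channels(3);
    cv::split(img, channels);
    std::cout << channels[0].at<double>(0, 0) << " "
              << channels[2].at<double>(0, 0) << std::endl;  // prints 1 3
    return 0;
}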

Histogram Smoothing

I have a probably pretty simple question, but I am still not sure!
Actually, I only want to smooth a histogram, and I am not sure which of the following two methods is correct. Would I do it like this:
vector<double> mask(3);
mask[0] = 0.25; mask[1] = 0.5; mask[2] = 0.25;

vector<double> tmpVect(histogram->size());
for (unsigned int i = 0; i < histogram->size(); i++)
    tmpVect[i] = (*histogram)[i];

for (int bin = 1; bin < histogram->size()-1; bin++) {
    double smoothedValue = 0;
    for (int i = 0; i < mask.size(); i++) {
        smoothedValue += tmpVect[bin-1+i]*mask[i];
    }
    (*histogram)[bin] = smoothedValue;
}
Or would you usually do it like this?:
vector<double> mask(3);
mask[0] = 0.25; mask[1] = 0.5; mask[2] = 0.25;

for (int bin = 1; bin < histogram->size()-1; bin++) {
    double smoothedValue = 0;
    for (int i = 0; i < mask.size(); i++) {
        smoothedValue += (*histogram)[bin-1+i]*mask[i];
    }
    (*histogram)[bin] = smoothedValue;
}
My question is: is it reasonable to copy the histogram into an extra vector first, so that when I smooth at bin i I can use the original i-1 value, or would I simply do smoothedValue += (*histogram)[bin-1+i]*mask[i];, so that the already smoothed i-1 value is used instead of the original one?
Regards & Thanks for a reply.
Your intuition is right: you need a temporary vector. Otherwise, you will end up using partly old values, and partly new values, and the result will not be correct. Try it yourself on paper with a simple example.
There are two ways you can write this algorithm:
Copy the data to a temporary vector first; then read from that one, and write to histogram. This is what you did in your first code fragment.
Read from histogram and write to a temporary vector; then copy from the temporary vector back to histogram.
To prevent needless copying of data, you can use vector::swap. This is an extremely fast operation that swaps the contents of two vectors. Using strategy 2 above, this would result in:
vector<double> mask(3);
mask[0] = 0.25; mask[1] = 0.5; mask[2] = 0.25;

vector<double> newHistogram(histogram->size());
// keep the boundary bins, which the smoothing loop below does not touch
newHistogram[0] = (*histogram)[0];
newHistogram[histogram->size()-1] = (*histogram)[histogram->size()-1];

for (int bin = 1; bin < histogram->size()-1; bin++) {
    double smoothedValue = 0;
    for (int i = 0; i < mask.size(); i++) {
        smoothedValue += (*histogram)[bin-1+i]*mask[i];
    }
    newHistogram[bin] = smoothedValue;
}

histogram->swap(newHistogram);
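As a quick sanity check (a minimal sketch with made-up numbers), smoothing a single spike with the [0.25, 0.5, 0.25] mask should spread it over the neighbouring bins:
#include <iostream>
#include <vector>
using namespace std;

int main()
{
    vector<double> h(5, 0.0);
    h[2] = 4.0;                      // a single spike
    vector<double> mask(3);
    mask[0] = 0.25; mask[1] = 0.5; mask[2] = 0.25;

    vector<double> smoothed(h);      // the copy keeps the boundary bins
    for (size_t bin = 1; bin + 1 < h.size(); bin++) {
        double value = 0;
        for (size_t i = 0; i < mask.size(); i++)
            value += h[bin - 1 + i] * mask[i];
        smoothed[bin] = value;
    }
    h.swap(smoothed);
    for (size_t i = 0; i < h.size(); i++)
        cout << h[i] << " ";         // prints 0 1 2 1 0
    cout << endl;
    return 0;
}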
