Creating an image of differences of adjacent pixels with a DigitalMicrograph (DM) script - image-processing

The following DigitalMicrograph function tries to create an image by taking the difference of neighboring pixels within each sub-row of a row of the image. The first pixel is replaced with the mean of the difference results for the sub-row thus created.
E.g. if the input image is 8 pixels wide and 1 row tall and the sub-row size is 4 -
In_img = {8,9,2,4,9,8,7,5}
Then the output image will be -
Out_img = {mean(8,9,2,4)=5.75, 9-8=1, 2-9=-7, 4-2=2, mean(9,8,7,5)=7.25, 8-9=-1, 7-8=-1, 5-7=-2}
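In other words, the transform I am after is the following (a minimal C++ sketch just to pin down the arithmetic; the function name rowDiff is mine):
#include <cassert>
#include <cstddef>
#include <vector>

// For each sub-row of `subsize` input pixels: first output pixel = mean of
// the sub-row, remaining pixels = difference to the previous input pixel.
std::vector<double> rowDiff(const std::vector<double>& in, int subsize)
{
    std::vector<double> out(in.size());
    for (std::size_t s = 0; s + subsize <= in.size(); s += subsize) {
        double sum = 0;
        for (int i = 0; i < subsize; ++i) sum += in[s + i];
        out[s] = sum / subsize;                      // mean of the sub-row
        for (int i = 1; i < subsize; ++i)
            out[s + i] = in[s + i] - in[s + i - 1];  // adjacent difference
    }
    return out;
}

int main()
{
    // the example from above: sub-row size 4, one row of 8 pixels
    std::vector<double> r = rowDiff({8, 9, 2, 4, 9, 8, 7, 5}, 4);
    assert((r == std::vector<double>{5.75, 1, -7, 2, 7.25, -1, -1, -2}));
}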
When I run this script, the first pixel of the first row is correct but the rest of the pixels are incorrect. When I set the loop limits to only one sub-row and one row, i.e. x=1 and y=1, the script works correctly.
Any ideas as to what may be happening or what may be wrong with the script?
The test image is here and the result is here.
// Function to compute the standard deviation (sigma n-1) of an image, or
// a set of values passed in as pixel values in an image. The
// number of data points (n), the mean, and the sum are also returned.
// version:20080229
// D. R. G. Mitchell, adminnospam#dmscripting.com (remove the nospam to make this email address work)
// v1.0, February 2008
void StandardDeviation(image arrayimg, number &stddev, number &n, number &mean, number &sum)
{
mean=mean(arrayimg)
number xsize, ysize
getsize(arrayimg,xsize, ysize)
n=xsize*ysize
sum=sum(arrayimg)
image imgsquared=arrayimg*arrayimg
number sumofvalssqrd=sum(imgsquared)
stddev=sqrt(((n*sumofvalssqrd)-(sum*sum))/(n*(n-1)))
}
image getVectorImage(image refImage, number rowsize)
{
number fh, fv, fhx
getsize(refImage, fh, fv)
fhx=trunc(fh/rowsize)
//result("ByteSize of refimage = "+refImage.ImageGetDataElementByteSize()+"\n")
//create image to save std of each row of the ref image.
//The std values are saved as pixels of one row. The row size is the same as the number of rows.
//use fhx*rowsize for the new image size as fhx is a truncated value.
image retImage:=RealImage("",4,fhx*rowsize,fv)
image workImage=slice1(refImage,rowsize+1,0,0,0,rowsize-1,1)
number stddev,nopix,mean,sum
for ( number y=0;y<fv;y++)
{
for (number x=0;x<fhx;x++)
{
//result ("x,y="+x+","+y+"; fhx="+fhx+"; rowsize="+rowsize+"\n")
workImage=slice1(refImage,x*rowsize+1,y,0,0,rowsize-1,1)-slice1(refImage,x*rowsize,y,0,0,rowsize-1,1)
showimage(workImage)
StandardDeviation(workImage,stddev,nopix,mean,sum )
retImage[y,x*rowsize+1,y+1,x*rowsize+rowsize]=workImage
retImage[y,x]=mean
result("mean # row "+y+" = "+mean+"\n")
}
}
return retImage
}
number rowsize = 4  // sub-row size, as in the example above
showimage(getVectorImage(getfrontimage(),rowsize))

After your edit, I understood that you want to do something like the worked example above (sub-row means in the first pixel of each sub-row, adjacent differences elsewhere),
and that this should be performed for each line of the image individually.
The following script does this. (Explanations below.)
image Modify( image in, number subsize )
{
// Some checking
number sx,sy
in.GetSize(sx,sy)
if ( 0 != sx%subsize )
Throw( "The image width is not an integer multiple of the subsize." )
// Do the means...
number nTile = sx/subsize
image meanImg := RealImage( "Means", 4, nTile , sy )
meanImg = 0
for ( number i=0; i<subsize; i++ )
meanImg += in.Slice2( i,0,0, 0,nTile,subsize, 1,sy,1 )
meanImg *= 1/subsize
// Do the shifted difference
image dif := RealImage( "Diff", 4, sx-1, sy )
dif = in.slice2( 1,0,0, 0,sx-1,1, 1,sy,1) - in.slice2( 0,0,0, 0,sx-1,1, 1,sy,1)
// Compile the result
image out := in.ImageClone()
out.SetName( in.getName() + "mod" )
out.slice2( 1,0,0, 0,sx-1,1, 1,sy,1 ) = dif
out.slice2( 0,0,0, 0,nTile,subsize, 1,sy,1 ) = meanImg
return out
}
number sx = 8, sy = 4
image img := RealImage( "test", 4, sx, sy )
img = icol*10 + trunc( Random()*10 )
img.ShowImage()
Modify(img,4).ShowImage()
Some explanations:
You want to do two different things to the image, so you have to be careful not to overwrite data in pixels you will subsequently use for computation! Image expressions are processed pixel by pixel, so if you first compute the mean and write it into the first pixel, the evaluation of the second pixel will be the difference of "9" and the just-stored mean value (not the original "8"). So you have to split the computation and use "buffer" copies.
The slice2 command is extremely convenient because it allows you to define a step size when sampling. You can use it to address every subsize-th pixel directly.
Be aware of the difference between := and = in image expressions. The first is a memory assignment:
A := B means that A is now the same memory location as B. A is basically another name for B.
A = B means A gets the values of B (copied). A and B are two different memory locations and only the values are copied over.
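A rough C++ analogy, if it helps (DM images behave more like shared references, but the alias-vs-copy distinction is the same):
#include <cassert>

int main()
{
    int B = 5;
    int& A_ref = B;   // like A := B : A_ref is another name for B
    int  A_val = B;   // like A  = B : the value is copied to new memory
    B = 7;
    assert(A_ref == 7);  // aliases B, sees the change
    assert(A_val == 5);  // separate memory, keeps the copied value
}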

I have some observations on your script:
Instead of the defined method for getting mean/sum/stdev/n of an image, you can just as easily get those numbers from any image img using the following:
mean: number m = mean( img )
sum: number s = sum( img )
stdev: number sd = sqrt( variance( img ) )
pixels: number n = sum( 0 * img + 1 )
If you want to get the difference of an image with an image "shifted by one", you don't have to loop over the lines/columns; you can directly use the slice2() command, the [ ] notation with icol and irow, or the offset() command. Personally, I prefer the slice2() command.
If I want a script which gives me the standard deviation of the difference of each row with its successor row, i.e. stdDev( row_(y) - row_(y+1) ) for all y < sizeY, my script would be:
Image img := GetFrontImage()
number sx,sy
img.GetSize(sx,sy)
number dy = 1
Image dif = img.Slice2(0,0,0, 0,sx,1, 1,sy-1,1 ) - img.Slice2(0,dy,0, 0,sx,1, 1,sy-1,1)
Image sDevs := RealImage( "Row's stDev", 4, sy-1 )
for ( number y=0; y<sy-1; y++ )
sDevs[y,0] = SQRT( Variance( dif.Slice1(0,y,0, 0,sx,1) ) )
sDevs.ShowImage()
Is this what you are trying to achieve? If not, please edit your question for some clarification.

Related

How to assign 1D spectra to an image stack

Any idea why this does not work?
DM::Image im2D = DM::RealImage("2D", 4, 2048);
DM::Image im3D= DM::RealImage("3D", 4, 2048, 9, 9);
PlugIn::ImageDataLocker im2D_LLl(im2D, PlugIn::ImageDataLocker::lock_data_CONTIGUOUS);
float *im2D_data = (float*)(im2D_LLl.get_image_data().get_data());
for (int i = 0; i < 2048; i++) *im2D_data++ = i;
Imaging::DataSlice planeSlice;
long xi=0, yi=0;
planeSlice = Imaging::DataSlice(Imaging::DataIndex(xi, yi, 0), Imaging::DataSlice::Slice1(2, 2048, 1));
DM::SliceImage(im3D, planeSlice) = im2D;
im3D is not changed, giving only zeros. On the DM scripting side this would be:
slice1(im3D, 0,0,0,2,2048,1) = im2D
which works fine.
I'm somewhat confused by your example code.
It seems you create a 3D image of XYZ = 2048 x 9 x 9,
but then slice it along dim=2 (z) for 2048 channels (it has only 9!).
The same is true for your script code: slice1 creates a 1D image along dimension 2.
I think you've meant to use
slice2( img3D, 0,0,0, 0,9,1, 1,9,1 ) = img2d
Or, if you really meant to do spectrum insertion (as your title suggests), you want some better-named variables for sure.
Script example of creating a stack and filling it plane-wise:
image stack := realimage("Stack of 2048 2D images 9x9",4,9,9,2048)
for ( number i=0; i<2048; i++ ){
image plane2D := Realimage("2D plane 9x9",4,9,9)
plane2D = iradius + random()
stack.Slice2(0,0,i, 0,9,1, 1,9,1 ) = plane2D
}
stack.ShowImage()
Script example of creating a stack and filling it spectrum-wise:
image stack := realimage("Stack of 2048 2D images 9x9",4,9,9,2048)
for ( number i=0; i<9; i++ ){
for ( number j=0; j<9; j++ ){
image spec1D:= Realimage("1D spectrum 2048",4,2048)
spec1D = iradius + random()
stack.Slice1(i,j,0, 2,2048,1 ) = spec1D
}
}
stack.ShowImage()
As for the SDK code: when you create an image locker to change the data, make sure you call
im2D.DataChanged();
to finalize and update the image object.
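For reference, a hedged sketch of the corrected C++ side, reusing only the calls from your question plus DataChanged(), and assuming the same plugin headers and environment as your snippet. The key change is giving the 3D image its 2048 channels along the last dimension (9 x 9 x 2048), so that slicing dim=2 for 2048 channels is valid:
DM::Image im2D = DM::RealImage("2D", 4, 2048);
DM::Image im3D = DM::RealImage("3D", 4, 9, 9, 2048); // XYZ = 9 x 9 x 2048
{
    // locker scope: fill the 1D spectrum with test data
    PlugIn::ImageDataLocker im2D_lock(im2D, PlugIn::ImageDataLocker::lock_data_CONTIGUOUS);
    float *im2D_data = (float *)(im2D_lock.get_image_data().get_data());
    for (int i = 0; i < 2048; i++) *im2D_data++ = (float)i;
}
im2D.DataChanged(); // finalize the edit made through the locker

// insert the spectrum at plane position (xi, yi), along dim 2 (now 2048 long)
long xi = 0, yi = 0;
Imaging::DataSlice specSlice(Imaging::DataIndex(xi, yi, 0),
                             Imaging::DataSlice::Slice1(2, 2048, 1));
DM::SliceImage(im3D, specSlice) = im2D;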

Vectorize getting every nth element (but nth element is variable)

How can I vectorize getting every nth element if the nth element is variable?
I know about:
A = randi( 10, 10, 2 );
B = A(2:2:end, :); % make another matrix (B) that contains every 2nd element
But my nth variable changes.
Here's a working FOR-loop code based on the golden angle:
1) It converts the wanted (golden-angle) shift in degrees into a cell position in the array.
2) It shifts the array by that given amount.
3) It places the first cell of the shifted array into a new array.
signal_used_L1 = [1:9](:).';
total_samples = numel( signal_used_L1 );
for hh = 1 : length( signal_used_L1 )
% PHI
deg_to_shift = 137.5077 * hh;
% convert degrees wanted into cell bits
shift_sig_in_bits_L1 = total_samples * deg_to_shift / 360;
% shift signal by given amount of cell bits
shift_sig_L1 = circshift( signal_used_L1(:).' , ...
[0, round(shift_sig_in_bits_L1)] );
% create array with shifted cell bits
sig_bit_built(1, hh) = shift_sig_L1(1, 1);
end
PS: I'm using Octave 4.2.2
Not sure what you're trying to do exactly, but I'd vectorise your code as follows:
signal_used_L1 = [1:9](:).';
total_samples = numel( signal_used_L1 );
% PHI
deg_to_shift = 137.5077 * [1:length( signal_used_L1 )];
% convert degrees wanted into cell bits
shift_sig_in_bits_L1 = total_samples * deg_to_shift / 360;
% obtain "wrap-around" indices given above cell bits
indices = mod( -round( shift_sig_in_bits_L1 ), total_samples ) + 1;
% create array with shifted cell bits
signal_used_L1( indices )
Incidentally, I think you meant to do circshift with a negative shift (i.e. move "n" places to the right), in which case the vectorised code above would use mod( round(...) ) rather than mod( -round(...) ).
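To sanity-check the indexing, here is a small C++ sketch (hypothetical, mirroring the Octave one-liner above): for hh = 1 the shift is round(9 * 137.5077 / 360) = 3, and the formula picks index mod(-3, 9) + 1 = 7, which is exactly the first element of [1:9] circularly shifted right by 3. In C++ the % operator can return negative values, so ((a % n) + n) % n stands in for Octave's mod():
#include <cmath>
#include <cstdio>

int main()
{
    const int n = 9;                          // total_samples
    const int signal[n] = {1, 2, 3, 4, 5, 6, 7, 8, 9};
    for (int hh = 1; hh <= n; ++hh) {
        double bits  = n * 137.5077 * hh / 360.0;  // shift in samples
        int    shift = (int)std::lround(bits);
        int    idx   = ((-shift % n) + n) % n;     // 0-based wrap-around index
        std::printf("%d ", signal[idx]);  // equals first element after circshift
    }
    std::printf("\n");
    return 0;
}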

OpenCV Tone Curve programmatically

I want to implement something like a tone curve.
I have a predefined set of curves that I should apply to the image.
For instance:
As I understand it, this chart shows how each current tone value maps to a new one. For example:
the first dot on the left means every R, G and B value equal to 0 will be converted to 64,
and every value above 224 will be converted to 0, etc.
So I tried to change every pixel of the image to its new value.
For test purposes I've simplified the curve,
and here is the code I have:
//init original image
cv::Mat originalMat = [self cvMatFromUIImage:inputImage];
//out image the same size
cv::Mat outMat = [self cvMatFromUIImage:inputImage];
//loop through every row of the image
for( int y = 0; y < originalMat.rows; y++ ){
//loop through every column of the image
for( int x = 0; x < originalMat.cols; x++ ){
//loop through every color channel of the image (R,G,B)
for( int c = 0; c < 3; c++ ){
if( originalMat.at<cv::Vec3b>(y,x)[c] <= 64 )
outMat.at<cv::Vec3b>(y,x)[c] = 64 + originalMat.at<cv::Vec3b>(y,x)[c] - originalMat.at<cv::Vec3b>(y,x)[c] * 2;
if( (originalMat.at<cv::Vec3b>(y,x)[c] > 64) && (originalMat.at<cv::Vec3b>(y,x)[c] <= 128) )
outMat.at<cv::Vec3b>(y,x)[c] = ( originalMat.at<cv::Vec3b>(y,x)[c] - 64 ) * 4;
if( originalMat.at<cv::Vec3b>(y,x)[c] > 128 )
outMat.at<cv::Vec3b>(y,x)[c] = originalMat.at<cv::Vec3b>(y,x)[c] + 128 - ( originalMat.at<cv::Vec3b>(y,x)[c] - 128 ) * 3;
} //end of r,g,b loop
} //end of column loop
} //end of row loop
//send to output
return [self UIImageFromCVMat:outMat];
but here is the result I get:
for some reason only about 3/4 of the image was processed,
and it doesn't match the result I expected:
Update 0
Thanks to @ACCurrent's comment I found errors in the calculation (code and image updated), but I still don't understand why only 3/4 of the image is processed.
I'm not sure I understand why the 'noise' appears; I hope it is because the curve is not smooth.
I'm also looking for a way to avoid the .at operation.
Update 1
original image:
You need to access the image with Vec4b.
originalMat.type() equals 24, i.e. CV_8UC4. This means that the image has 4 channels, but you're accessing it with Vec3b as if it had only 3. Each .at<cv::Vec3b>(y,x) then advances only 3 bytes per pixel over rows that actually hold 4 bytes per pixel, so only about the first 3/4 of each row is reached. This explains why about 1/4 of the image is not modified.
So, simply replace every Vec3b in your code with Vec4b.
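As an aside, here is a hedged sketch of a loop-free alternative (plain OpenCV C++, same piecewise curve as above): since the curve depends only on the input value, you can bake it into a 256-entry lookup table once and let cv::LUT apply it, which also avoids the per-pixel .at<> calls mentioned in the update. Note that, unlike the c < 3 loop, this maps the alpha channel too (use split/merge if that matters).
#include <opencv2/opencv.hpp>

// apply the piecewise test curve via a lookup table
cv::Mat applyCurve(const cv::Mat& src)   // src is CV_8UC4 here
{
    cv::Mat lut(1, 256, CV_8U);
    for (int v = 0; v < 256; ++v) {
        int out;
        if (v <= 64)       out = 64 - v;              // same as 64 + v - v*2
        else if (v <= 128) out = (v - 64) * 4;
        else               out = v + 128 - (v - 128) * 3;
        // saturate_cast clamps the (v - 64) * 4 overflow at 255 instead of wrapping
        lut.at<uchar>(v) = cv::saturate_cast<uchar>(out);
    }
    cv::Mat dst;
    cv::LUT(src, lut, dst);  // the same table is applied to all 4 channels
    return dst;
}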

background extraction and update from video using matlab

I have a video of a traffic scene. Now, I want to calculate the percentage of vehicle area on the road area (or the percentage of foreground area on the background area). The first step for this is background extraction. I have read many documents and scientific articles about it; one of them recommends using a mean filter, where the background is the per-pixel mean of the first J frames:
B(x,y) = (1/J) * sum_{k=1..J} I_k(x,y)
This is the link to that article. The results are very good; it is exactly what I want.
I followed this formula and tried to write my own code, but it didn't work! Can anyone help me and give me some advice?
This is my code:
clc; % Clear the command window.
close all; % Close all figures (except those of imtool.)
imtool close all; % Close all imtool figures.
clear; % Erase all existing variables.
workspace; % Make sure the workspace panel is showing.
fontSize = 14;
%input video;
step = 10;
vob = VideoReader('NKKN.avi');
frame = vob.read(inf);
vidHeight = vob.Height;
vidWidth = vob.Width;
nFrames = vob.NumberOfFrames;
%%% First-iteration background frame
background_frame = double(frame*0);
redbackground_frame = background_frame(:,:,1);
greenbackground_frame = background_frame(:,:,2);
bluebackground_frame = background_frame(:,:,3);
%calculate background
i = 0;
for k = 1:10 %get background from 10 frame (J=10)
thisframe = double(read(vob, k));
%background_frame = background_frame + thisframe;
redbackground_frame = redbackground_frame + thisframe(:,:,1);
greenbackground_frame = greenbackground_frame + thisframe(:,:,2);
bluebackground_frame = bluebackground_frame + thisframe(:,:,3);
i=i+1;
disp(i);
end
A = redbackground_frame/i;
B = greenbackground_frame/i;
C = bluebackground_frame/i;
background = cat(3,A,B,C);
imshow(background);
You can maintain a buffer of B frames for a dynamic estimation of the background:
buff = NaN( [vidHeight, vidWidth, 3, B] ); % allocate room for buffer
% process the video
for fi = 1:nFrames
% read current frame
thisframe = double(read(vob, fi)) / 255; % convert to [0..1] range
% update background model
buff(:, :, :, mod( fi, B ) + 1 ) = thisframe;
background_L1 = nanmedian( buff, 4 ); % I think this is better than `mean` - try it!
background_L2 = nanmean( buff, 4 );
% do whatever processing you need with fi-th frame
% and the current background mode...
% ...
end
Note that if fi < B (i.e., you have processed fewer than B frames) the background model is not yet stable. I am using NaNs as default values for the buffer, and these values are ignored when the background model is estimated; this is the reason why I use nanmedian and nanmean instead of simply median and mean.
vob = VideoReader('NKKN.avi');
frame = vob.read(inf);
vidHeight = vob.Height;
vidWidth = vob.Width;
nFrames = vob.NumberOfFrames;
%allocate room for buffer of 20 frames
buff = NaN( [vidHeight, vidWidth, 3, 20] ); % allocate room for buffer
for fi = 1:20:nFrames
disp(fi);
% read current frame
thisframe = double(read(vob, fi)) / 255; % convert to [0..1] range
% update background model
buff(:, :, :, mod((fi-1)/20, 20) + 1) = thisframe; % fi = 1, 21, 41, ... so (fi-1)/20 cycles through all 20 buffer slots
background_L1 = nanmedian( buff, 4 );
background_L2 = nanmean( buff, 4 );
hImage = subplot(2, 2, 1);
image(thisframe);
caption = sprintf('thisframe');
title(caption, 'FontSize', fontSize);
drawnow; % Force it to refresh the window.
subplot(2,2,2);
imshow(background_L2);
title('background-L2');
subplot(2,2,3);
imshow(background_L1);
title('background-L1');
end
Extracting the background of this video
https://www.youtube.com/watch?v=URJxS1giCA4&ab_channel=AliShahzil
clear all
close all
reader = VideoReader('C:\Users\Ali Sahzil\Desktop\Media.wmv'); % your video file location
vid = {};
while hasFrame(reader)
vid{end+1} = im2single(readFrame(reader));
end
bg = mean( cat(4, vid{:}), 4);
x = bg;
imshow( bg );
Here is a very simple solution you can build upon. First you will need a sample background image of the scene with no traffic. We will call this 'bg'.
Here is a simple approach in pseudo-code:
load in background image 'bg'
set threshold upper value
set threshold lower value
loop until done for each frame
{
subtract 'bg' image from your first frame
if pixel value of foreground is greater than the upper threshold value
{
set foreground pixel value to 'nan'
}
if pixel value of foreground is less than the lower threshold value
{
set foreground pixel value to 'nan'
}
if pixel value of foreground == 0
{
set foreground pixel value to 'nan'
}
}
This will bracket your foreground images to only show the parts of the scene you are interested in. Note: this process can be greatly enhanced by using a stereoscopic camera to give you depth perception. However, you should be able to build upon this code to remove unwanted parts of your image.
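A minimal sketch of that pseudo-code in OpenCV C++ ('bg.png', 'traffic.avi' and both thresholds are placeholders): instead of writing 'nan' into pixels it builds a mask of the kept foreground pixels, uses absdiff rather than a signed subtraction, and also reports the foreground percentage the question asks for.
#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat bg = cv::imread("bg.png", cv::IMREAD_GRAYSCALE); // empty road, same size as frames
    double lower = 15.0, upper = 200.0;                      // tune for your footage

    cv::VideoCapture cap("traffic.avi");
    cv::Mat frame, gray, diff, mask;
    while (cap.read(frame)) {
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
        cv::absdiff(gray, bg, diff);             // |frame - bg|, also removes the == 0 case
        mask = (diff > lower) & (diff < upper);  // keep pixels between the thresholds
        double pct = 100.0 * cv::countNonZero(mask) / mask.total();
        // ... further per-frame processing with `mask` and `pct` ...
    }
    return 0;
}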
This is actually based on Shai's and user3725204's answers. I didn't use read and NumberOfFrames, which are not recommended. I also adopted user3725204's suggestion, since there's no need to read all frames.
function backGrnd = getBackGrnd(filename, nTest, method)
tic
if nargin < 2, nTest = 20; end
if nargin < 3, method = 'median'; end
v = VideoReader(filename);
nChannel = size(readFrame(v), 3);
tTest = linspace(0, v.Duration-1/v.FrameRate , nTest);
%allocate room for buffer
buff = NaN([v.Height, v.Width, nChannel, nTest]);
for fi = 1:nTest
v.CurrentTime =tTest(fi);
% read current frame and update model
buff(:, :, :, fi) = readFrame(v); % fi runs 1..nTest, so each test frame fills its own slot
end
switch lower(method)
case 'median'
backGrnd = nanmedian(buff, 4);
case 'mean'
backGrnd = nanmean(buff, 4);
end
toc
end
And the result is like this:
subplot(221); imshow(uint8(TrafficVisionLab.getBackGrnd('traffic.avi', 10, 'mean')));
subplot(222); imshow(uint8(TrafficVisionLab.getBackGrnd('traffic.avi', 10, 'median')));
subplot(223); imshow(uint8(TrafficVisionLab.getBackGrnd('traffic.avi', 50, 'mean')));
subplot(224); imshow(uint8(TrafficVisionLab.getBackGrnd('traffic.avi', 50, 'median')));

Find distance between two lines (OpenCV)

I have the below image after some conversions.
How can I find a distance between these two lines?
A simple way to do this would be
- Scan across a row until you find a pixel above a threshold.
- Keep scanning until you find a pixel below the threshold.
- Count the pixels until the next pixel above the threshold.
- Take the average across a number of rows sampled from the image (or all rows)
- You'll need to know the image resolution (e.g. dots per inch) to convert the count to an actual distance
An efficient method to scan across rows can be found in the OpenCV documentation
A more complicated approach would use HoughLines to extract lines. It will give you two points on each line (hopefully you only have two). From those it is possible to work out a distance formula, assuming the lines are parallel.
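If you go the Hough route, a hedged sketch (plain OpenCV C++; the Hough parameters are guesses you would tune, and in practice you should verify that the first two detected segments really belong to the two distinct lines):
#include <cmath>
#include <vector>
#include <opencv2/opencv.hpp>

// returns the perpendicular distance (in pixels) between the first two
// detected segments, assuming they lie on the two parallel lines
double lineGap(const cv::Mat& binary)
{
    std::vector<cv::Vec4i> lines;
    cv::HoughLinesP(binary, lines, 1, CV_PI / 180, 80, 30, 10);
    if (lines.size() < 2) return -1.0;     // need both lines

    cv::Vec4i a = lines[0], b = lines[1];  // caution: may be parts of the same line
    double dx = a[2] - a[0], dy = a[3] - a[1];
    double len = std::sqrt(dx * dx + dy * dy);
    // distance from endpoint (b[0], b[1]) to the infinite line through a
    return std::abs(dy * (b[0] - a[0]) - dx * (b[1] - a[1])) / len;
}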
A skeleton code for the row-scanning approach (not efficient, just readable so that you know how to do it) would be:
cv::Mat source = cv::imread("source.jpg", CV_LOAD_IMAGE_GRAYSCALE);
std::vector<double> output;
int threshold = 35; // change in accordance with data
double DPI = 30.0;  // dots per inch
for (int i = 0; i < source.rows; ++i)  // scan across each row
{
    for (int j = 0; j < source.cols; ++j)
    {
        if (source.at<unsigned char>(i,j) > threshold)  // first line found
        {
            while (j < source.cols && source.at<unsigned char>(i,j) > threshold)
                ++j;                                    // skip to its far edge
            int gap_start = j;
            while (j < source.cols && source.at<unsigned char>(i,j) <= threshold)
                ++j;                                    // count the gap
            if (j < source.cols)                        // second line found
                output.push_back( (j - gap_start) / DPI ); // result in inches
            break;                                      // move on to the next row
        }
    }
}
Afterwards, you could take an average of all the elements in output, etc.
HTH
Assumptions:
You have only two continuous lines without any break in between.
No other pixels (noise) apart from the lines
My proposed solution: Almost same as given above
Mark leftmost line as line 1. Mark rightmost line as line 2.
Scan the image (Mat in OpenCV) from the leftmost column and make a list of points matching the pixel value of line 1
Scan the image (Mat in OpenCV) from the rightmost column and make a list of points matching the pixel value of line 2
Calculate the distance between corresponding points from those lists using the code below.
public double euclideanDistance(Point a, Point b){
double distance = 0.0;
try{
if(a != null && b != null){
double xDiff = a.x - b.x;
double yDiff = a.y - b.y;
distance = Math.sqrt(Math.pow(xDiff,2) + Math.pow(yDiff, 2));
}
}catch(Exception e){
System.err.println("Something went wrong in euclideanDistance function in "+Utility.class+" "+e.getMessage());
}
return distance;
}
