I have two images. The first contains background noise plus content, and the second contains just the background noise. I would like to subtract the second image from the first to remove the noise from the content. Both images are greyscale.
I'm confused about the various ways to handle this, as well as about how Mathematica handles greyscale values.
1) Use ImageSubtract[imageOne, imageTwo].
2) Use ImageDifference[imageOne, imageTwo]. This avoids negative pixel values, but the result is artificial wherever ImageSubtract would have produced negative pixels.
3) Obtain the values of each pixel with ImageData, subtract the corresponding values, and display the result with Image.
Each of these methods yields different results.
For images with real data types, pixel values can be negative, and these three operations are equivalent:
real1 = Image[RandomReal[1, {10, 10}]];
real2 = Image[RandomReal[1, {10, 10}]];
ImageData[ImageDifference[real1, real2]] ==
Abs@ImageData[ImageSubtract[real1, real2]] ==
Abs[ImageData[real1] - ImageData[real2]]
Out[4]= True
But this is not the case with images of integer data types, because such images can store only non-negative values; negative results of the subtraction are clipped to zero in the output image:
int1 = Image[RandomInteger[255, {10, 10}], "Byte"];
int2 = Image[RandomInteger[255, {10, 10}], "Byte"];
This is still True:
ImageData[ImageDifference[int1, int2]]
== Abs[ImageData[int1] - ImageData[int2]]
But these two are different because of clipping:
ImageData[ImageDifference[int1, int2]]
== Abs@ImageData[ImageSubtract[int1, int2]]
Results are less puzzling if you first convert both input images to the "Real" or "Real32" data type.
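For intuition, here is a minimal NumPy sketch (not Mathematica; integer arrays with explicit clipping stand in for "Byte" images) showing why the clipped subtraction and the absolute difference disagree:
import numpy as np

rng = np.random.default_rng(0)
a = rng.integers(0, 256, (10, 10))  # stand-ins for int1 and int2
b = rng.integers(0, 256, (10, 10))

clipped = np.clip(a - b, 0, 255)  # like ImageSubtract on "Byte" images: negatives become 0
absdiff = np.abs(a - b)           # like ImageDifference: order-independent
print(np.array_equal(clipped, absdiff))  # False wherever b > a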
Related
I have Z-stacks of fluorescently labelled cells.
The samples have an artefact that causes very bright regions inside the cells which are not based on my signal of interest.
Since the intensity (brightness) of these artefacts is far above the intensity of my signal of interest, I want to simply zero all pixels that are above some arbitrary value I will choose.
So I want a macro that logically does something like:
For each slice:
For each pixel:
if pixel intensity>150 then set pixel=0
I am coding in the ImageJ macro language. I want to avoid using ROIs for this part, because I already have ROIs representing each cell and am looping through them in my script.
I think this should be really simple, but right now my attempted solution is super cumbersome: going through thresholding, Analyze Particles, generating ROIs, selecting each ROI, and subtracting the value (e.g. 150) from each ROI.
Any idea how to do this in a simple way?
The problem can be resolved using a selection and thresholding:
HotPix = 150;
Stack.getStatistics(voxelCount, mean, min, StackMax, stdDev);
setThreshold(HotPix, StackMax); // your thresholds here
for (i = 1; i <= nSlices; i++) {
    setSlice(i);
    run("Create Selection");
    if (selectionType() != -1) {
        run("Set...", "value=0");
    }
    run("Select None");
}
resetThreshold;
The solution comes from @antonis on the ImageJ forum: https://forum.image.sc/t/how-to-delete-all-pixels-or-set-to-zero-in-a-roi-which-are-above-a-certain-value/51173/5
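For comparison, if the stack is available as a plain array outside ImageJ, the same per-voxel rule is a one-liner in NumPy; a sketch, with a random array standing in for real data:
import numpy as np

HOT_PIX = 150
stack = np.random.randint(0, 256, (5, 64, 64), dtype=np.uint8)  # stand-in Z-stack (slices, height, width)
stack[stack > HOT_PIX] = 0  # zero every voxel above the threshold, all slices at once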
1. Introduction:
I want to develop a special filter method for UIImages. The idea is to turn all the colors in a picture black except one specific color, which should keep its appearance.
Images are always nice, so look at this image to get what I'd like to achieve:
2. Explanation:
I'd like to apply a filter (algorithm) that is able to find specific colors in an image. The algorithm must replace all colors that do not match the reference colors with, e.g., black.
I've developed a simple piece of code that can replace specific colors (color ranges with a threshold) in any image.
But to be honest, this solution doesn't seem fast or efficient at all!
func colorFilter(image: UIImage, findcolor: String, threshold: Int) -> UIImage {
    let img: CGImage = image.cgImage!
    let context = CGContext(data: nil, width: img.width, height: img.height,
                            bitsPerComponent: 8, bytesPerRow: 4 * img.width,
                            space: CGColorSpaceCreateDeviceRGB(),
                            bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)!
    context.draw(img, in: CGRect(x: 0, y: 0, width: img.width, height: img.height))
    let binaryData = context.data!.assumingMemoryBound(to: UInt8.self),
        referenceColor = HEXtoHSL(findcolor) // [h, s, l] integer array
    for i in 0..<img.height {
        for j in 0..<img.width {
            let pixel = 4 * (i * img.width + j)
            let pixelColor = RGBtoHSL([Int(binaryData[pixel]), Int(binaryData[pixel+1]), Int(binaryData[pixel+2])]) // [h, s, l] integer array
            let distance = calculateHSLDistance(pixelColor, referenceColor) // value between 0 and 100
            if (distance > threshold) {
                let setValue: UInt8 = 255
                binaryData[pixel] = setValue; binaryData[pixel+1] = setValue; binaryData[pixel+2] = setValue; binaryData[pixel+3] = 255
            }
        }
    }
    let outputImg = context.makeImage()!
    return UIImage(cgImage: outputImg, scale: image.scale, orientation: image.imageOrientation)
}
3. Code Information
The code above works fine, but it is very inefficient. Because of all the computation (especially the color conversions), it takes far too long; have a look at this screenshot:
My question
I'm pretty sure there is a far simpler way of filtering a specific color (with a given threshold; #c6456f is similar to #C6476f, ...) than looping through every single pixel and comparing its color.
So what I was thinking about was something like a filter (a CIFilter method) as an alternative to the code above.
Some Notes
Please do not post replies that suggest using the OpenCV library; I would like to develop this "algorithm" exclusively in Swift.
The image from which the timing screenshot was taken had a resolution of 500 * 800 px.
That's all.
Did you really read this far? Congratulations! Any help on how to speed up my code would be very much appreciated. (Maybe there's a better way to get the pixel colors than looping through every pixel.) Thanks a million in advance :)
First thing to do: profile (measure the time consumed by the different parts of your function). Profiling often shows that time is spent in some unexpected place, and it always suggests where to direct your optimization effort. That doesn't mean you have to focus on the most time-consuming part, but it will show you where the time goes. Unfortunately I'm not familiar with Swift, so I cannot recommend a specific tool.
Regarding iterating through all pixels: whether you can avoid it depends on the image structure and your assumptions about the input data. I see two cases where you can:
When there is some optimized data structure built over your image (e.g. statistics of its areas). That usually makes sense when you process the same image with the same (or a similar) algorithm several times with different parameters. If you process each image only once, it will likely not help you.
When you know that the green pixels always occur in a group, so there cannot be an isolated single pixel. In that case you can skip one or more pixels at a time, and when you find a green pixel, analyze its neighbourhood.
I do not code on your platform but...
Well, I assume your masked areas (with the specific color) are continuous and large enough, i.e. you have groups of pixels covering big enough areas (not stuff just a few pixels thick). With this assumption you can create a density map for your color. What I mean is: if the minimum detail size of your specific-color regions is 10 pixels, then you can inspect every 8th pixel along each axis, speeding up the initial scan ~64 times, and then use the full scan only for regions containing your color. Here is what you have to do:
determine properties
You need to set the step for each axis (how many pixels you can skip without missing a colored zone). Let's call these dx,dy.
create density map
Simply create a 2D array that holds whether the center pixel of each region has your specific color. So if your image has resolution xs,ys, your map will be:
int mx=xs/dx;
int my=ys/dy;
int map[mx][my],x,y,xx,yy;
for (yy=0,y=dy>>1;y<ys;y+=dy,yy++)
for (xx=0,x=dx>>1;x<xs;x+=dx,xx++)
map[xx][yy]=compare(pixel(x,y) , specific_color)<threshold;
enlarge map set areas
Now you should enlarge the set areas in map[][] to the neighboring cells, because step #2 could miss the edges of your color regions.
process all set regions
for (yy=0;yy<my;yy++)
for (xx=0;xx<mx;xx++)
if (map[xx][yy])
for (y=yy*dy;y<(yy+1)*dy;y++)
for (x=xx*dx;x<(xx+1)*dx;x++)
if (compare(pixel(x,y) , specific_color)>=threshold) pixel(x,y)=0x00000000;
If you want to speed this up even more, you need to detect the set map[][] cells that lie on an edge (have at least one zero neighbor). You can distinguish the cells like this:
0 - no specific color is present
1 - inside of color area
2 - edge of color area
That can be done with a simple pass in O(mx*my). After that you need to check the color only in the edge regions, so:
for (yy=0;yy<my;yy++)
for (xx=0;xx<mx;xx++)
if (map[xx][yy]==2)
{
for (y=yy*dy;y<(yy+1)*dy;y++)
for (x=xx*dx;x<(xx+1)*dx;x++)
if (compare(pixel(x,y) , specific_color)>=threshold) pixel(x,y)=0x00000000;
} else if (map[xx][yy]==0)
{
for (y=yy*dy;y<(yy+1)*dy;y++)
for (x=xx*dx;x<(xx+1)*dx;x++)
pixel(x,y)=0x00000000;
}
This should be even faster. In case your image resolution xs,ys is not a multiple of the cell size dx,dy, you should handle the outer edge of the image either by zero padding or by special loops for the missing part of the image...
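Since the question is not tied to C, here is a rough NumPy sketch of the same density-map idea (scipy's binary_dilation stands in for the map-enlarging step; the function name and default cell size are my own choices, and non-matching pixels are simply blacked out):
import numpy as np
from scipy.ndimage import binary_dilation

def density_map_filter(img, color, threshold, dx=8, dy=8):
    # img: (H, W, 3) uint8 array; color: length-3 reference color
    h, w = img.shape[:2]
    out = np.zeros_like(img)  # everything black unless proven matching
    # 1) coarse map: test only the center pixel of every dx-by-dy cell
    cy = np.arange(dy // 2, h, dy)
    cx = np.arange(dx // 2, w, dx)
    centers = img[np.ix_(cy, cx)].astype(np.int32)
    dmap = np.linalg.norm(centers - np.asarray(color), axis=2) < threshold
    # 2) enlarge set cells so edges of color regions are not missed
    dmap = binary_dilation(dmap)
    # 3) full-resolution test, but only inside flagged cells
    for yy, xx in zip(*np.nonzero(dmap)):
        y0, x0 = yy * dy, xx * dx
        block = img[y0:y0 + dy, x0:x0 + dx].astype(np.int32)
        keep = np.linalg.norm(block - np.asarray(color), axis=2) < threshold
        out[y0:y0 + dy, x0:x0 + dx][keep] = img[y0:y0 + dy, x0:x0 + dx][keep]
    return out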
By the way, how long does it take to read and set your whole image?
for (y=0;y<ys;y++)
for (x=0;x<xs;x++)
pixel(x,y)=pixel(x,y)^0x00FFFFFF;
If this alone is slow, it means your pixel access is too slow and you should use a different API for it. That is a very common mistake on the Windows GDI platform, where people usually use Pixels[][], which is slower than a crawling snail. There are other ways, like bit locking/blitting, ScanLine, etc., so in that case you need to look for something fast on your platform. If you cannot speed up even this, then there is nothing else you can do... By the way, what hardware is this running on?
I have an image which is the result of k-means segmentation. The code to obtain it is here:
% Read the image and convert to L*a*b* color space
I = imread('Crop.jpg');
% h = ginput(2);
% Diameter = sqrt((h(2)-h(1))^2+(h(4)-h(3))^2);
% MeanArea = 3.14*(Diameter^2)/4;
Ilab = rgb2lab(I);
% Extract a* and b* channels and reshape
ab = double(Ilab(:,:,2:3));
nrows = size(ab,1);
ncols = size(ab,2);
ab = reshape(ab,nrows*ncols,2);
% Segmentation using k-means
nColors = 4;
[cluster_idx, cluster_center] = kmeans(ab,nColors,...
'distance', 'sqEuclidean', ...
'Replicates', 3);
% Show the result
pixel_labels = reshape(cluster_idx,nrows,ncols);
figure(1);
imshow(pixel_labels,[]), title('image labeled by cluster index');
Resulting in this picture:
Now, as you can see, most of the elements are connected, so I want to count all of the blobs (besides the background one) and then filter them using MeanArea, the area of an element's incircle. If a blob is smaller than MeanArea, I do not count it; if a blob is larger than MeanArea, I want to divide its area by MeanArea to obtain the number of elements it contains. All of this is to get a measure such that #blobs = #elements. I know it has something to do with 'bwlabel' and 'regionprops', but I don't know how to code this since I'm a beginner; any coding help is appreciated. Thanks.
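In pseudocode terms, the counting rule is roughly the following; a Python sketch with scipy.ndimage.label standing in for bwlabel (mean_area corresponds to MeanArea, and mask is assumed to be a boolean image of the blobs of interest):
import numpy as np
from scipy.ndimage import label  # stand-in for MATLAB's bwlabel

def count_elements(mask, mean_area):
    labeled, n_blobs = label(mask)  # background (False) gets label 0
    total = 0
    for i in range(1, n_blobs + 1):
        area = np.sum(labeled == i)  # blob area, like regionprops 'Area'
        if area >= mean_area:
            # a large blob is treated as several touching elements
            total += int(round(area / mean_area))
    return total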
EDIT: Using the 'trees' approach linked in the comments, I got very bad results, so I don't think it's the right method. I don't have objects with the same color as in the tree example; I just have the same shapes.
I'm following this other approach. Color segmentation by k-means
I obtained the labeled image above, but how can I save it into a variable so that I can erode it and count the number of blobs? That's my question.
EDIT2: The original picture is this one. I'm trying to detect the number of red green and blue objects.
I am reading an image as a tensor object, which is meant to be a mask.
Now I want to replace the values which are close to white (almost 1.0) with 0,
and the values which are gray with 1.
Then the mask would be correct for my machine-learning task.
I have tried it with:
tf.where(mask >= 1.0)
and the next function also returns the indices:
greater = tf.greater_equal(mask, 0.95)
but how to update/assign 0? scatter_nd_add does not work for me.
mask = tf.scatter_nd_add(mask, greater, 0)
Edit:
I tried it differently:
v_mask = tf.Variable(tf.shape(mask))
ind = tf.to_float(mask >= 0.0)
v_mask.assign(ind)
but if I run the session, it stops there and does not go on.
What I really wanna do:
I have a gray image with dimensions (m x n x 1, tensor, float32), and the values are rescaled from [0,255] to [0,1].
I want to replace all values which are white (1) with 0 and all values which are gray (0.45 - 0.55) with 1; the rest can remain undefined.
To threshold your image, you can use:
thim = tf.to_float(im >= 0.95) # or cast to whichever type you use
To reassign the result to im, assuming it is a variable:
im_update = im.assign(thim)
This gives you an update op that you need to call for the update to happen.
If im is not a variable, then you cannot reassign values to it. Generally though, cases where you really need to reassign values to a node are scarce.
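Putting the two together for the mapping described in the question (white to 0, gray to 1), something like the following should work; a sketch in the TF1-style API used above, with a placeholder standing in for the real mask tensor:
import tensorflow as tf  # TF1-style API, matching tf.to_float above

mask = tf.placeholder(tf.float32, shape=[None, None, 1])  # grayscale mask in [0, 1]
# Gray pixels (0.45 - 0.55) become 1; everything else, including white, becomes 0.
gray = tf.logical_and(mask >= 0.45, mask <= 0.55)
new_mask = tf.to_float(gray)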
One workaround I found is to use the numpy() bridge: do the operations on the NumPy array, and the changes are reflected in the tensor's values. This works because the NumPy array and the PyTorch tensor share the same underlying memory.
Memory sharing is mentioned in the PyTorch introductory tutorial here.
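A minimal illustration of the bridge (the values are just for demonstration):
import torch

t = torch.zeros(3)
a = t.numpy()  # shares memory with t (CPU tensors only)
a[:] = 1.0     # edit the NumPy array in place...
print(t)       # ...and the tensor reflects it: tensor([1., 1., 1.])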
I am currently trying to detect two particular colors in an image, i.e. to filter the image to display only pixels in certain ranges. I know that to find one color, you supply a lower and an upper bound like so:
COLOR_MIN = np.array([0, 0, 130], np.uint8)
COLOR_MAX = np.array([90, 145,255], np.uint8)
dst1 = cv2.inRange(img, COLOR_MIN, COLOR_MAX)
And I simply apply dst1 to the image, and everything works just as it should: an image is displayed containing only the pixels in that range. However, I would like to search for two specific ranges of colors. Should I apply the two color ranges separately to get two different images and then blend them together, or is there a more efficient way of displaying an image whose pixels fit in two different color ranges?
Aha! Found it. You can make a similar filter for your second color and then simply combine the two filters dst1 and dst2 with the bitwise-or operator |.
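For example (the second range below is made up for illustration; substitute your own bounds, and 'input.png' is a hypothetical file name):
import numpy as np
import cv2

img = cv2.imread('input.png')

COLOR_MIN = np.array([0, 0, 130], np.uint8)    # first range, from the question
COLOR_MAX = np.array([90, 145, 255], np.uint8)
COLOR2_MIN = np.array([130, 0, 0], np.uint8)   # second range, illustrative only
COLOR2_MAX = np.array([255, 90, 90], np.uint8)

dst1 = cv2.inRange(img, COLOR_MIN, COLOR_MAX)
dst2 = cv2.inRange(img, COLOR2_MIN, COLOR2_MAX)
combined = dst1 | dst2  # white where a pixel falls in either range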