Combining two image labels into one label - image-processing

I have medical images of the prostate where each image has two anatomical labels: label 0 = background, label 1 = peripheral zone, label 2 = transitional zone.
I would like to combine the two labels so that they become one label forming the whole prostate structure. Can you help me with this?
Attached is a picture to clarify my question:

You can clip the values of your segmentation mask to the range [0, 1] (instead of [0, 2]): after clipping, all pixels with label > 0 (non-background) will have label 1.
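A minimal NumPy sketch of that idea (the small mask array here is a made-up stand-in for your loaded segmentation):
import numpy as np

# stand-in for a segmentation mask with values {0, 1, 2}
mask = np.array([[0, 1, 2],
                 [2, 1, 0]])

# clip to [0, 1]: labels 1 and 2 both become 1, the background stays 0
whole_prostate = np.clip(mask, 0, 1)
# equivalently: whole_prostate = (mask > 0).astype(mask.dtype)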

Related

How to divide an image into two parts without crossing any object using OpenCV?

I am using an object detection machine learning model (with only one object class). It works well when there are a few objects in the image, but if my image has more than 300 objects, it can't recognize anything. So I want to divide the image into two or four parts without crossing any object.
I used Otsu thresholding and got this thresholded image. Actually, I want to divide my image along this line (expected image). I think my model will work well if it makes predictions on each part of the image.
I tried using findContours to find a contour whose contourArea is bigger than half the image area, draw it into a new image, then take the remaining part and draw it into another image. But most contour areas don't even reach 1/10 of the image area, so it is not a good solution.
I also thought about detecting a line that touches both boundaries (top and bottom); how could I do that?
Any suggestion is appreciated. Thanks so much.
Since your regions of interest are already separated, you can use connectedComponents to get the bounding boxes of these regions. My approach is below.
import cv2
import numpy as np
import matplotlib.pyplot as plt

img = cv2.imread('circles.png', 0)
img = img[20:, 20:]  # remove the connecting lines on the top and the left sides
_, img = cv2.threshold(img, 0, 1, cv2.THRESH_BINARY)
labels, stats = cv2.connectedComponentsWithStats(img, connectivity=8)[1:3]
plt.imshow(labels, 'tab10')
plt.show()
As you can see, the two regions of interest have different labels. All we need to do is get the bounding boxes of these regions. But first, we have to get the indices of the regions. For this, we can use the sizes of the areas, because after the background (blue), they have the largest areas.
areas = stats[1:, cv2.CC_STAT_AREA]  # index 0 is always the background, so drop it
roi_indices = np.flip(np.argsort(areas))[0:2]  # indices of the two largest remaining labels, i.e. your ROIs
# Coordinates of bounding boxes
left = stats[1:, cv2.CC_STAT_LEFT]
top = stats[1:, cv2.CC_STAT_TOP]
width = stats[1:, cv2.CC_STAT_WIDTH]
height = stats[1:, cv2.CC_STAT_HEIGHT]

for i in range(2):
    roi_ind = roi_indices[i]
    roi = labels == roi_ind + 1  # +1 because the background row was dropped from stats
    roi_top = top[roi_ind]
    roi_bottom = roi_top + height[roi_ind]
    roi_left = left[roi_ind]
    roi_right = roi_left + width[roi_ind]
    roi = roi[roi_top:roi_bottom, roi_left:roi_right]  # crop to the bounding box
    plt.imshow(roi, 'gray')
    plt.show()
For your information, my method is only valid for 2 regions. In order to split into 4 regions, you would need some other approach.

return_sequences in LSTM

In Keras, with model.add(LSTM(units=xx, return_sequences=yy, input_shape=zz)): when return_sequences is set to True, does it mean to have/enable the arrow circled in blue and not have/disable the arrow circled in red? And vice versa when return_sequences is set to False?
Note: the picture comes from this page: https://www.analyticsvidhya.com/blog/2017/12/fundamentals-of-deep-learning-introduction-to-lstm/ under '4. Architecture of LSTMs'.
First, the arrow circled in blue is the essence of an LSTM and it will never be disabled, or you would not have an RNN/LSTM at all. That arrow means that whatever value you get from the last RNN/LSTM cell is passed to the next RNN/LSTM cell and processed together with the next input. The only difference between an RNN and an LSTM is that a simple RNN does not have that blue-circled arrow, only the black arrow below, while an LSTM has that arrow as a gate for short/long-term memory.
Second, return_sequences is typically used for stacked RNNs/LSTMs, meaning that you stack one RNN/LSTM layer on top of another VERTICALLY, not horizontally. Horizontal RNN/LSTM cells represent processing across time, while vertical RNN/LSTM cells mean stacking one layer on top of another.
When you set it to False, only the last cell (horizontally) will have that red-circled arrow, while all other cells in the same layer will have it disabled, so you pass only one piece of information from the whole horizontal layer (the information produced by the last cell in that layer).
Conversely, when you set it to True, all cells in that horizontal layer will have the red-circled arrow enabled and will pass information to the layer stacked on top. This means that if you want to stack one RNN/LSTM layer on top of another, you need to set it to True.
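As a minimal Keras sketch of that rule (the layer sizes and input shape are made up for illustration), the lower LSTM must return the full sequence so the stacked LSTM receives one input per time step, while the top LSTM returns only its final state:
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

model = Sequential([
    # return_sequences=True: every time step passes its output upward,
    # which is required when another LSTM layer is stacked on top
    LSTM(32, return_sequences=True, input_shape=(10, 8)),  # output: (None, 10, 32)
    # return_sequences=False (the default): only the last time step's
    # output is passed on, collapsing the sequence to a single vector
    LSTM(16, return_sequences=False),                      # output: (None, 16)
    Dense(1),
])
model.summary()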
Lastly, for more information you can refer to this book, which has a great explanation of the return_sequences option: https://www.oreilly.com/library/view/hands-on-machine-learning/9781492032632/

Is there any way to automatically calculate the axis labels' step size in Highcharts?

I have ~1000 data labels on the xAxis and am looking for a way to automatically calculate the step based on the number of data labels and the size of the chart.
Based on the documentation, if I do not set the step property of xAxis.labels, it should be calculated automatically, but in my case it is not.
Current output:
Expected output:
Currently the labels look as shown in the image. I'd expect the labels to be dropped automatically rather than shown with ellipses. Any help would be much appreciated. Thanks.
I'm using Highcharts 5.0.8.

Tweaking display of quality histogram, exporting the colormap

I have a couple of questions, which all tie back to a simple need: I want to use the quality histogram as a colorbar in my publication. To export it along with its labels, I tried just taking a snapshot with the appropriate tool, but if I use an alpha or solid white background, the text/colorbar is not visible; if I use the solid black or MeshLab background, the text is white and cannot be used directly in a publication.
My questions are as follows:
I know how to change the text color in the MeshLab window. Is there a similar function to change the text font size?
As a more demanding question, is there a way I can import the quality map file into MATLAB or some other software and plot a custom colorbar? I will append my .qmap file here, but it seems that the color field is empty, and I cannot reproduce the colors without it.
%%%%%QMAP FILE TO FOLLOW%%%%%
// COLOR BAND FILE STRUCTURE - first row: RED CHANNEL DATA - second row GREEN CHANNEL DATA - third row: BLUE CHANNEL DATA
// CHANNEL DATA STRUCTURE - the channel structure is grouped in many triples. The items of each triple represent respectively: X VALUE, Y_LOWER VALUE, Y_UPPER VALUE of each node-key of the transfer function
0;0.5;0.125;1;0.375;1;0.625;0;0.875;0;1;0;
0;0;0.125;0;0.375;1;0.625;1;0.875;0;1;0;
0;0;0.125;0;0.375;0;0.625;1;0.875;1;1;0.5;
//THE FOLLOWING 4 VALUES REPRESENT EQUALIZER SETTINGS - the first and the third values represent respectively the minimum and the maximum quality values used in the histogram, the second one represents the position (in percentage) of the middle quality, and the last one represents the level of brightness as a floating point number (0 completely dark, 1 original brightness, 2 completely white)
-0.001;0.714286;0.0004;1;
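For what it's worth, the node structure described in the file header is close to the segment-data format of matplotlib's LinearSegmentedColormap, so a sketch along the following lines could rebuild the colorbar in Python. One assumption to flag: the quoted rows hold an even number of values, so they are read here as (X, Y) pairs with Y_LOWER equal to Y_UPPER, not full triples.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import LinearSegmentedColormap

# the three channel rows quoted in the .qmap file above
rows = [
    "0;0.5;0.125;1;0.375;1;0.625;0;0.875;0;1;0;",  # red
    "0;0;0.125;0;0.375;1;0.625;1;0.875;0;1;0;",    # green
    "0;0;0.125;0;0.375;0;0.625;1;0.875;1;1;0.5;",  # blue
]

def parse_channel(row):
    vals = [float(v) for v in row.rstrip(';').split(';')]
    # duplicate each y into an (x, y_below, y_above) triple, as matplotlib expects
    return [(vals[i], vals[i + 1], vals[i + 1]) for i in range(0, len(vals), 2)]

segmentdata = dict(zip(("red", "green", "blue"), map(parse_channel, rows)))
cmap = LinearSegmentedColormap("qmap", segmentdata)

# render the reconstructed colormap as a horizontal colorbar
gradient = np.linspace(0, 1, 256).reshape(1, -1)
plt.imshow(gradient, aspect="auto", cmap=cmap)
plt.axis("off")
plt.show()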

How to apply two color thresholds to an image in OpenCV

I am currently trying to detect two particular colors in an image, filtering it to display only the pixels in certain ranges. I know that to find one color, you input a lower and upper bound like so:
COLOR_MIN = np.array([0, 0, 130], np.uint8)
COLOR_MAX = np.array([90, 145, 255], np.uint8)
dst1 = cv2.inRange(img, COLOR_MIN, COLOR_MAX)
And I simply apply dst1 to the image and everything works just as it should: an image is displayed with only the pixels in that range. However, I would like to search for two specific ranges of colors. Should I apply the two color ranges to the image separately to get two different images and then blend them together? Or is there a more efficient way of displaying an image whose pixels fit in two different color ranges?
Aha! Found it. You can make a similar filter for your second color and then simply use the bitwise OR operator | to combine the two filters dst1 and dst2.
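A minimal sketch of that idea (the bounds for the second color range below are made-up values for illustration):
# hypothetical bounds for the second color range
COLOR2_MIN = np.array([100, 50, 0], np.uint8)
COLOR2_MAX = np.array([140, 255, 255], np.uint8)

dst1 = cv2.inRange(img, COLOR_MIN, COLOR_MAX)
dst2 = cv2.inRange(img, COLOR2_MIN, COLOR2_MAX)
combined = dst1 | dst2  # keep pixels that match either range
# equivalently: combined = cv2.bitwise_or(dst1, dst2)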
