Image blend with plotly

I'm trying to blend two plotly images together with a user-defined opacity parameter alpha.
It is super easy to do with the Pillow library:
Image.blend(image1, image2, alpha=.2)
But I could not figure out how to do it using plotly.
My end goal is to use it in a Dash Plotly app, where users can control the weight of each image in the final figure.
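One way to get the same effect in a Plotly figure is to do the blend in NumPy first and hand the result to plotly.express.imshow (a sketch, assuming equal-sized RGB images and hypothetical filenames; in a Dash app, alpha would come from a slider callback):

import numpy as np
import plotly.express as px
from PIL import Image

# Blend the pixel arrays directly; same weighting as PIL's Image.blend
# ('image1.png' and 'image2.png' are hypothetical filenames)
im1 = np.asarray(Image.open('image1.png').convert('RGB'), dtype=np.float64)
im2 = np.asarray(Image.open('image2.png').convert('RGB'), dtype=np.float64)

alpha = 0.2  # weight of im2; 0.0 shows only im1, 1.0 only im2
blended = ((1 - alpha) * im1 + alpha * im2).astype(np.uint8)

fig = px.imshow(blended)
fig.show()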

Related

Separating superimposed images based on color

I have a stereo camera which combines both the left and right views into this single image, although each camera is a different colour: left is green, and right is magenta. How can I separate this image into two separate images using Python, OpenCV, NumPy, etc.?
Stereo camera combines both left and right images:
You only need to extract the colour components:
Red + Blue channels = right image
Green channel = left image
You can do it with any photo editor like Photoshop or GIMP.
See this thread to see how to do it with OpenCV.
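A minimal OpenCV sketch of that extraction ('combined.png' is a hypothetical filename):

import cv2

# Split the combined frame into its colour components (BGR order in OpenCV)
im = cv2.imread('combined.png')
b, g, r = cv2.split(im)

left = g                                    # left view was captured in green
right = cv2.addWeighted(r, 0.5, b, 0.5, 0)  # right view is magenta: red + blue

cv2.imwrite('left.png', left)
cv2.imwrite('right.png', right)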

Labeling Medical Image

I want to extract ground truth from my medical data and I'm looking for a program that can help with this. What I want to do is as follows.
I want to select a specific area and make it white, and I want the rest to be black, so I would have the ground truth in hand. There are examples in the pictures. Note: I don't have ground truth, only the original images without it. I need to draw and extract this area from the original image...
Thank you for your help in advance.
Let's split your image into its two constituent parts, first image.png:
and second, mask.png:
Now you can just use ImageMagick in the Terminal without writing any code. You have a couple of choices. You can either:
make the black parts of the mask transparent in the result, or
make the black parts of the mask black in the result.
Let's make them transparent first, so we are effectively copying the mask into the image and treating it as an alpha/transparency layer:
magick image.png mask.png -compose copyalpha -composite result.png
And now let's make them black, by choosing the darker of the original image and the mask at each pixel location - hence the darken blend mode:
magick image.png mask.png -compose darken -composite result.png
Note that if you use the first technique, the original information that appears transparent is still in the image and can be retrieved - so do not use this technique to hide confidential information.
If you want to use the transparency method from Python with PIL, you can do:
from PIL import Image
# Read image and mask as PIL Images
im = Image.open('image.png').convert('RGB')
ma = Image.open('mask.png').convert('L')
# Merge in mask as alpha channel and save
im.putalpha(ma)
im.save('result.png')
Or, transparency method with OpenCV and Numpy:
import cv2
import numpy as np
# Open image and mask as NumPy arrays
im = cv2.imread('image.png')
ma = cv2.imread('mask.png', cv2.IMREAD_GRAYSCALE)
# Merge mask in as alpha channel and save
res = np.dstack((im,ma))
cv2.imwrite('result.png', res)
If you want to use the blacken method with Python and PIL/Pillow, use:
from PIL import Image, ImageChops
# Read image and mask as PIL Images
im = Image.open('image.png').convert('RGB')
ma = Image.open('mask.png').convert('RGB')
# Choose darker image at each pixel location and save
res = ImageChops.darker(im, ma)
res.save('result.png')
If you want to use the blacken method with OpenCV and Numpy, use the code above but replace the np.dstack() line with:
res = np.minimum(im, ma[...,np.newaxis])
I can highly recommend ITK-SNAP for this task. You can manually label your input images with certain labels (1 for foreground, 0 for background in your example) and export the ground truth very comfortably.

Find intuitive image orientation with opencv

I have a bunch of images of hand tools and I would like to reorient the pictures along the "intuitive" axis. So what I am trying to do is find the magenta line:
Eventually I was able to feed fitLine with the main contour of this hammer, but the middle line does not pass through the center of the hammer:
Here the sample image for the scissors:
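For reference, a minimal sketch of the fitLine-on-main-contour approach described above (hypothetical filename; OpenCV 4.x findContours signature):

import cv2
import numpy as np

# Threshold the tool against a light background and take the largest contour
im = cv2.imread('hammer.png', cv2.IMREAD_GRAYSCALE)
_, thresh = cv2.threshold(im, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnt = max(contours, key=cv2.contourArea)

# Fit a line through the contour points
vx, vy, x0, y0 = cv2.fitLine(cnt, cv2.DIST_L2, 0, 0.01, 0.01)
angle = np.degrees(np.arctan2(vy, vx)).item()
print(f'Main axis angle: {angle:.1f} degrees')

Because fitLine weights the contour points rather than the filled area, PCA on the foreground pixel coordinates (e.g. cv2.PCACompute) may give an axis closer to the perceived centre line.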

Chromatic filter openCV

I'm looking for a filter in the OpenCV library that changes the image chromatically. For example, blur does not change the colours of the image; I need one that does.
I have a colour image and I need to apply a filter that distorts the colours. For example, if the image contains a lot of blue, the filter would make that blue more or less intense.
My images are in the L*a*b* colour space and I need to work in it.
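One way to do that in the L*a*b* space is to scale the a* and b* chroma channels about their neutral point (a sketch; the filename and gain are hypothetical, images assumed 8-bit):

import cv2
import numpy as np

im = cv2.imread('input.png')
lab = cv2.cvtColor(im, cv2.COLOR_BGR2LAB)
L, a, b = cv2.split(lab)

# In 8-bit Lab, 128 is the neutral (grey) point of a* and b*;
# gain > 1 intensifies colours, gain < 1 mutes them
gain = 1.5
a = np.clip((a.astype(np.float32) - 128) * gain + 128, 0, 255).astype(np.uint8)
b = np.clip((b.astype(np.float32) - 128) * gain + 128, 0, 255).astype(np.uint8)

out = cv2.cvtColor(cv2.merge((L, a, b)), cv2.COLOR_LAB2BGR)
cv2.imwrite('output.png', out)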

How to blend 80x60 thermal and 640x480 RGB image?

How do I blend two images - thermal (80x60) and RGB (640x480) - efficiently?
If I scale the thermal image up to 640x480 it doesn't scale evenly, or doesn't have enough quality to do any processing on. Any ideas would be really helpful.
RGB image - http://postimg.org/image/66f9hnaj1/
Thermal image - http://postimg.org/image/6g1oxbm5n/
If you scale the resolution of the thermal image up by a factor of 8 and use bilinear interpolation, you should get a smoother, less blocky result.
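In OpenCV that is a single resize call (a sketch; 'thermal.png' is a hypothetical filename):

import cv2

# 8x bilinear upscale: 80x60 -> 640x480
thermal = cv2.imread('thermal.png')
big = cv2.resize(thermal, (640, 480), interpolation=cv2.INTER_LINEAR)
cv2.imwrite('thermal_big.png', big)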
When combining satellite images of different resolution (I talk about satellite imagery because that is my speciality), you would normally use the highest-resolution imagery as the Lightness, or L, channel to give you apparent resolution and detail in the shapes, because the human eye is good at detecting contrast. You would then use the lower-resolution imagery to fill in the Hue and Saturation, or a and b, channels to give you the colour graduations you are hoping to see.
So, in concrete terms, I would consider converting the RGB to Lab or HSL colourspace and retaining the L channel. Then take the thermal image, up-res it by 8 using bilinear interpolation, and use the result as the a or b (or H or S) channel, maybe filling in the remaining channel with the one from the RGB that has the most variance. Then convert the result back to RGB for a false-colour image. It is hard to tell without seeing the images or knowing what you are hoping to find in them, but in general terms that would be my approach. HTH.
Note: Given that a of Lab colourspace controls the red/green relationship, I would probably try putting the thermal data in that channel so it tends to show more red the "hotter" the thermal channel is.
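A rough OpenCV sketch of that channel swap (hypothetical filenames; the thermal data goes into the a channel, as suggested in the note above):

import cv2

rgb = cv2.imread('rgb.png')
thermal = cv2.imread('thermal.png', cv2.IMREAD_GRAYSCALE)

# Keep L (detail) from the RGB image, drive a (red/green) with the thermal data
lab = cv2.cvtColor(rgb, cv2.COLOR_BGR2LAB)
L, a, b = cv2.split(lab)
a = cv2.resize(thermal, (rgb.shape[1], rgb.shape[0]),
               interpolation=cv2.INTER_LINEAR)

false_colour = cv2.cvtColor(cv2.merge((L, a, b)), cv2.COLOR_LAB2BGR)
cv2.imwrite('false_colour.png', false_colour)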
Updated Answer
Ok, now I can see your images and you have a couple more problems... firstly, the images are not aligned, or registered, with each other, which is not going to help - try using a tripod ;-) Secondly, your RGB image is very poorly exposed, so it is not really going to contribute much detail - especially in the shadows - to the combined image.
So, firstly, I used ImageMagick at the commandline to up-size the thermal image like this:
convert thermal.png -resize 640x480 thermal.png
Then, I used Photoshop to do a crude alignment/registration. If you want to try this, the easiest way is to put the two images into separate layers of the same document and set the Blending mode of the upper layer to Difference. Then use the Move Tool (shortcut v) to move the upper image around till the screen goes black which means that the details are on top of each other and when subtracted they come to zero, i.e. black. Then crop so the images are aligned and turn off one layer and save, then turn that layer back on and the other layer off and save again.
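If you would rather script the registration than nudge layers by hand, ECC alignment in OpenCV is one alternative (a sketch of a different technique, not the Photoshop step above; hypothetical filenames, translation-only motion assumed):

import cv2
import numpy as np

rgb = cv2.imread('rgbaligned.png', cv2.IMREAD_GRAYSCALE)
thermal = cv2.imread('bigthermal.png', cv2.IMREAD_GRAYSCALE)

# Estimate the translation that best aligns the thermal image to the RGB one
warp = np.eye(2, 3, dtype=np.float32)
criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 200, 1e-6)
_, warp = cv2.findTransformECC(rgb, thermal, warp, cv2.MOTION_TRANSLATION,
                               criteria, None, 5)

aligned = cv2.warpAffine(thermal, warp, (rgb.shape[1], rgb.shape[0]),
                         flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)
cv2.imwrite('thermalaligned.png', aligned)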
Now, I used ImageMagick again to separate the two images into Lab layers:
convert bigthermalaligned.png -colorspace Lab -separate thermal.png
convert rgbaligned.png -colorspace Lab -separate rgb.png
which gives me
thermal-0.png => L channel
thermal-1.png => a channel
thermal-2.png => b channel
rgb-0.png => L channel
rgb-1.png => a channel
rgb-2.png => b channel
Now I can take the L channel of the RGB image and the a and b channels of the thermal image and put them together:
convert rgb-0.png thermal-1.png thermal-2.png -normalize -set colorspace Lab -combine result.png
And you get this monstrosity! Obviously you can play around with the channels and colourspaces and a tripod and proper exposures, but you should be able to see that some of the details of the RGB image - especially the curtains on the left, the lights, the camera on the cellphone and the label on the water bottle - have come through into the final image.
Assuming that the images were not captured using a single camera, you need to note that the two cameras may have different parameters. Also, if it's two cameras, they are probably not located in the same world position (offset).
In order to resolve this, you need to get the intrinsic calibration matrix of each of the cameras, and find the offset between them.
Then, you can find a transformation between a pixel in one camera and the other. Unfortunately, if you don't have any depth information about the scene, the most you can do with the calibration matrix is get a ray direction from the camera position to the world.
The easy approach would be to ignore the offset (assuming the scene is not too close to the camera), and just transform the pixel.
p2 = K2 * (K1^-1 * p1)
Using this you can construct a new image that is a composite of both.
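A minimal NumPy sketch of that pixel transform (the intrinsic matrices here are made-up example values):

import numpy as np

# Hypothetical 3x3 intrinsic matrices for the two cameras
K1 = np.array([[600.0, 0.0, 320.0],
               [0.0, 600.0, 240.0],
               [0.0, 0.0, 1.0]])
K2 = np.array([[80.0, 0.0, 40.0],
               [0.0, 80.0, 30.0],
               [0.0, 0.0, 1.0]])

p1 = np.array([400.0, 250.0, 1.0])   # pixel in camera 1, homogeneous coords
p2 = K2 @ (np.linalg.inv(K1) @ p1)   # p2 = K2 * (K1^-1 * p1)
p2 /= p2[2]                          # normalise back to pixel coordinates
print(p2[:2])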
The more difficult approach would be to reconstruct the 3D structure of the scene by finding features that you can match between both images, and then triangulate the point with both rays.
