I want to do some image processing using .dng files. I am using rawpy to convert the file into a numpy array, then imageio to save the image. The code is very simple, taken from the rawpy webpage.
The resulting .png image is not as I expect it to be. The result is very red, while the raw image isn't. It is like a red overlay on the original image.
The code I am using is:
import rawpy
import imageio
path = r'img\CGraw.dng'
with rawpy.imread(path) as raw:
    rgb = raw.postprocess()
imageio.imsave(r"img\spyraw.png",rgb)
Original dng image: https://drive.google.com/file/d/1Ip4U7KvI4-Lit4u7AuaHfoHIkXvGNRFk/view?usp=sharing
Resulting png image: https://drive.google.com/file/d/1CyrqJS2Osj3u-jEzLSxSy5eKMUP4csAV/view?usp=sharing
Simply add the argument use_camera_wb=True.
According to the documentation:
use_camera_wb (bool) – whether to use the as-shot white balance values
That means using the white balance coefficients that the camera found automatically during the shot (the values are stored as Exif data of the DNG file).
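If you want to check what the camera actually recorded, rawpy exposes those coefficients as attributes. A minimal sketch, reusing the path from the question:
import rawpy
path = r'img\CGraw.dng'
with rawpy.imread(path) as raw:
    print(raw.camera_whitebalance)    # as-shot WB coefficients (what use_camera_wb=True applies)
    print(raw.daylight_whitebalance)  # daylight WB coefficients from the camera profile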
I don't know why the default is use_camera_wb=False (it could be the result of a poor job of wrapping LibRaw in Python).
In my opinion the defaults should be use_camera_wb=True and use_auto_wb=False.
We may also select the output color space: output_color=rawpy.ColorSpace.sRGB.
The colors look right, but there may be other pitfalls that I am not aware of... (I don't have enough experience with rawpy and LibRaw).
import rawpy
import imageio
path = r'img\CGraw.dng'
with rawpy.imread(path) as raw:
    rgb = raw.postprocess(use_camera_wb=True, use_auto_wb=False, output_color=rawpy.ColorSpace.sRGB)
imageio.imsave(r"img\spyraw.png",rgb)
Output (downscaled):
Here is my snippet for both of them:
from google.colab.patches import cv2_imshow
import cv2
pt = '/content/content/DATA/testing_data/1/126056495_AO_BIZ-0000320943-Process_IP_Cheque_page-0001.jpg' ##param
img = cv2.imread(pt)
cv2_imshow(img)
And here is the other one:
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
pt = '/content/content/DATA/testing_data/1/126056495_AO_BIZ-0000320943-Process_IP_Cheque_page-0001.jpg'
image = mpimg.imread(pt)
plt.imshow(image)
Now, the image in the second case is inverted,
while the image on my system is upright.
What I am mostly afraid of is that if my ML model is consuming an inverted image, that is probably hurting my accuracy. What could be the reason for this, and how do I fix it?
(PS: I cannot share the pictures unfortunately, as they are confidential.)
(Run on Google Colab.)
Any help is appreciated.
Your picture is upside-down when you use one method for reading, and upright when you use the other method?
You use two different methods to read the image file:
OpenCV cv.imread()
Matplotlib mpimg.imread()
They behave differently. OpenCV's imread() respects the file's EXIF orientation metadata and rotates the image as instructed. Matplotlib's function does not.
Solution: Stick to OpenCV's imread(). Don't use Matplotlib's function for reading the files.
The issue is not with matplotlib. When plt.imshow() is called, it presents the image with an origin in the top left corner, i.e. the Y-axis grows downward. That corresponds to how cv.imshow() behaves.
If your plot does have a Y-axis growing upward, causing the image to appear upside-down, then you must have set this plot up in specific ways that aren't presented in your question.
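If you still want to display with matplotlib, one option (a minimal sketch reusing the path from the question) is to read with OpenCV, so the EXIF rotation is honored, and only convert the channel order for display:
import cv2
import matplotlib.pyplot as plt

pt = '/content/content/DATA/testing_data/1/126056495_AO_BIZ-0000320943-Process_IP_Cheque_page-0001.jpg'
img = cv2.imread(pt)                            # applies the EXIF orientation
img_rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)  # matplotlib expects RGB, OpenCV returns BGR
plt.imshow(img_rgb)
plt.show()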
I am trying to save an image after applying a filter, but after the save operation, the file on my disk contains img, not dst_1.
import cv2
import numpy as np

img = cv2.imread(images[14], 1)
kernel = np.ones((5, 5), np.float32) / 25  # 5x5 averaging (box blur) kernel
dst_1 = cv2.filter2D(img, -1, kernel)
cv2.imwrite("path/f.jpg", dst_1)
When I use imshow after applying the filter, I see what I expected to see: the filtered image.
Does anybody know the reason for this imwrite behavior, or where my mistake is?
I believe the problem is not with the way you are saving the file, but rather that your filter is not doing what you think it is doing, so the result looks no different from the image you are saving.
OpenCV opens the image in BGR format, and the second image you showed in your comment is the same image but in RGB format. OpenCV has a function that can do this conversion for you, so please try the code below:
import cv2
import numpy as np
img = cv2.imread(images[14], 1)
dst_1 = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
cv2.imwrite("path/f.jpg", dst_1)
This should achieve the output you are looking for, hope it helps!
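To verify this diagnosis, you can compare the filter output with the input before writing anything to disk; a small sketch using the same variables as the question (images[14] is the questioner's path list):
import cv2
import numpy as np

img = cv2.imread(images[14], 1)
kernel = np.ones((5, 5), np.float32) / 25
dst_1 = cv2.filter2D(img, -1, kernel)

# If this prints True, the filter changed nothing and the saved file
# will naturally look identical to the input image.
print("identical:", np.array_equal(img, dst_1))
print("max abs difference:", int(cv2.absdiff(img, dst_1).max()))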
I have an image I am reading from a PDF file and converting to JPG. It works fine until I apply resize_to_fit, which results in a black rectangle (of the specified size).
file = file + "[0]"
jpg_file = file + ".jpg"
pdf = Magick::Image.read(file) do
  self.quality = 80
  self.density = '300'
  self.colorspace = Magick::RGBColorspace
  self.interlace = Magick::NoInterlace
end
pdf.first.resize_to_fit!("600")
pdf.first.write(jpg_file)
Substituting:
pdf.first.change_geometry!('600x600') { |cols, rows, img|
  img.resize!(cols, rows)
}
... for the resize makes no difference, nor does changing the quality or the density, nor omitting the colorspace and interlace settings.
Since I have a good image at full size (a mostly white image), I don't see why "resize" or "change_geometry" would output pure black.
Ideas?
Dozens of random experiments later, I found that the only size conversion which does not output a black rectangle is:
pdf.first.sample!(0.25)
The limitation, of course, is that I must have a consistent input size for this to work, as the other argument set (x and y) would change the aspect ratio.
Also, the quality produced by sample is horrible, no matter the settings applied on the input or output side.
I need a way to get resize_to_fit to work properly. I am following the docs and examples, so the result makes no sense to me. I hope someone who uses RMagick often, and knows which parts of it are not broken or what I am doing wrong, can help. Thanks.
The answer from @bumpy was the solution. I am now doing it a different way using CarrierWave, but I went back to this code and did an A/B test; the line
pdf.first.alpha(Magick::DeactivateAlphaChannel)
... works. Note that CarrierWave does the conversion correctly, with decent quality results (identical to this solution), without any special settings. I would guess this is built into its defaults for conversions to JPG.
It's possible your PDF file has a transparent background, which is causing the problem. Try removing the alpha channel before the resize using
pdf.first.alpha(Magick::DeactivateAlphaChannel)
pdf.first.resize_to_fit!("600")
I am facing a problem while writing an image from a CvMat.
This is what I have done:
IplImage* low_threshold_mask = cvCreateImage(cvSize(width, height), IPL_DEPTH_8U, 1);
CvMat* labelMat = cvCreateMat(low_threshold_mask->height,low_threshold_mask->width,CV_32F);
/* I populate elements of labelMat inside a function. Its done like this: cvmSet(labelMat,r,c,label); // where label is of type long */
To check the values, I dump each pixel value into a text file and also dump the image:
IplImage* labelImg;
IplImage imageHeader;
labelImg = cvGetImage(labelMat, &imageHeader);
Now when I cross-check the pixel intensities against the corresponding values in the dumped text file, I find mismatches. I believe the text file has the correct values but the image has wrong ones.
Can anyone help in figuring out the mistake?
---------------------New addition-------------------
I am still facing the problem. I have uploaded my programs; I will explain exactly where I am facing the error.
Libraries used: along with OpenCV, I am using disjoint_sets from the Boost library.
Basically I am doing connected-component labeling.
For debugging purposes, for the 20th frame, I have dumped the label of each pixel both in a) a text file and b) an image whose intensity levels equal the final label of the pixel. So I am expecting the same values in the text file and the image, but that's not happening, and I am unable to figure out why. The text files show the correct values, but the image does not. I am checking the pixel values of the image in MATLAB (I took care of the fact that MATLAB indices start at 1, not 0).
My text files:
a) frame20final.txt gets populated in GrimsonGMM.cpp / ConCompLabeling().
b) frame20image.txt gets populated in main.cpp.
My dumped image (frame-ccs.jpg) gets populated in main.cpp.
Both text files get the same values, so there must be some mistake in writing the image from the CvMat.
Test Video: person15_walking_d1_uncomp.avi
You can try with any other video also.
Thanks in advance,
Kaushik
I understood why I was getting the error: I was dumping to a .jpg image, which applies lossy compression. This was resolved when I used .png instead.
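To illustrate why the JPEG was the culprit, here is a small Python/OpenCV sketch (not the original C code) that writes a synthetic label image to both formats and reads it back; the PNG round-trips exactly while the JPEG typically does not:
import cv2
import numpy as np

# Synthetic label image with a few flat regions of distinct label values.
labels = np.zeros((100, 100), np.uint8)
labels[10:40, 10:40] = 1
labels[50:90, 50:90] = 2

cv2.imwrite("labels.png", labels)   # lossless
cv2.imwrite("labels.jpg", labels)   # lossy

print(np.array_equal(cv2.imread("labels.png", 0), labels))  # True
print(np.array_equal(cv2.imread("labels.jpg", 0), labels))  # usually False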
Your question is quite simple.
You want to work with CvMat, and after doing operations on the CvMat you want to display it as if it were an image.
You must create an image header, something like this:
CvMat* mat = cvCreateMatHeader(rows, cols, type);
mat->step = 4 * (mat->cols * CV_ELEM_SIZE1(mat->type) * CV_MAT_CN(mat->type) / 4 + 1);//critical
cvCreateData(mat);
In the OpenCV 2.0 C++ interface it's not really necessary to convert from Mat to IplImage.
You can display your image using cvShowImage, and if you want to convert it into an IplImage,
just do an easy cast: IplImage *img = labelMat;
My usual method of 100% contrast and some brightness adjustment to tweak the cutoff point usually works reasonably well to clean up photos of small sub-circuits or equations for posting on E&R.SE; however, sometimes it's not quite that great, as with this image:
What other methods besides contrast (or instead of) can I use to give me a more consistent output?
I'm expecting a fairly general answer, but I'll probably implement it in a script (that I can just dump files into) using ImageMagick and/or PIL (Python) so if you have anything specific to them it would be welcome.
Ideally a better source image would be nice, but I occasionally use this on other folk's images to add some polish.
The first step is to equalize the illumination differences in the image while taking into account the white balance issues. The theory here is that the brightest part of the image within a limited area represents white. By blurring the image beforehand we eliminate the influence of noise in the image.
from PIL import Image
from PIL import ImageFilter
im = Image.open(r'c:\temp\temp.png')
white = im.filter(ImageFilter.BLUR).filter(ImageFilter.MaxFilter(15))
The next step is to create a grey-scale image from the RGB input. By scaling to the white point we correct for white balance issues. By taking the max of R,G,B we de-emphasize any color that isn't a pure grey such as the blue lines of the grid. The first line of code presented here is a dummy, to create an image of the correct size and format.
grey = im.convert('L')
width,height = im.size
impix = im.load()
whitepix = white.load()
greypix = grey.load()
for y in range(height):
    for x in range(width):
        # int() keeps the result integral under Python 3's true division
        greypix[x,y] = min(255, int(max(255 * impix[x,y][0] / whitepix[x,y][0],
                                        255 * impix[x,y][1] / whitepix[x,y][1],
                                        255 * impix[x,y][2] / whitepix[x,y][2])))
The result of these operations is an image that has mostly consistent values and can be converted to black and white via a simple threshold.
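For completeness, the final threshold could be as simple as the following sketch; the cutoff of 128 and the output path are placeholders to tune:
bw = grey.point(lambda p: 255 if p > 128 else 0).convert('1')
bw.save(r'c:\temp\cleaned.png')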
Edit: It's nice to see a little competition. nikie has proposed a very similar approach, using subtraction instead of scaling to remove the variations in the white level. My method increases the contrast in the regions with poor lighting, and nikie's method does not - which method you prefer will depend on whether there is information in the poorly lighted areas which you wish to retain.
My attempt to recreate this approach resulted in this:
for y in range(height):
    for x in range(width):
        greypix[x,y] = min(255, max(255 + impix[x,y][0] - whitepix[x,y][0],
                                    255 + impix[x,y][1] - whitepix[x,y][1],
                                    255 + impix[x,y][2] - whitepix[x,y][2]))
I'm working on a combination of techniques to deliver an even better result, but it's not quite ready yet.
One common way to remove the varying background illumination is to estimate a "white image" from the photo with a morphological filter (here a dilation followed by an erosion, which removes the dark lines).
In this sample Octave code, I've used the blue channel of the image, because the lines in the background are least prominent in this channel (EDITED: using a circular structuring element produces fewer visual artifacts than a simple box):
src = imread('lines.png');
blue = src(:,:,3);
mask = fspecial("disk",10);
opened = imerode(imdilate(blue,mask),mask);
Result:
Then subtract the source image from this background estimate:
background_subtracted = opened-blue;
(contrast enhanced version)
Finally, I'd just binarize the image with a fixed threshold:
binary = background_subtracted < 35;
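Since the question mentions implementing this in Python, here is a rough OpenCV translation of the same idea; the structuring-element size and the threshold of 35 are carried over as guesses, not tuned values:
import cv2
import numpy as np

src = cv2.imread('lines.png')
blue = src[:, :, 0]  # OpenCV loads images as BGR, so channel 0 is blue

# Estimate the background by removing the dark lines with a circular
# dilate-then-erode filter, mirroring the Octave code above.
se = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (21, 21))
background = cv2.erode(cv2.dilate(blue, se), se)

background_subtracted = cv2.subtract(background, blue)
binary = (background_subtracted < 35).astype(np.uint8) * 255
cv2.imwrite('binary.png', binary)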
How about detecting edges? That should pick up the line drawings.
Here's the result of Sobel edge detection on your image:
If you then threshold the image (using either an empirically determined threshold or Otsu's method), you can clean up the image using morphological operations (e.g. dilation and erosion). That will help you get rid of broken/double lines.
As Lambert pointed out, you can pre-process the image using the blue channel to get rid of the grid lines if you don't want them in your result.
You will also get better results if you light the page evenly before you image it (or just use a scanner), because then you don't have to worry about global vs. local thresholding as much.
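A rough sketch of that edge-detection pipeline in Python/OpenCV; the file name, kernel size and iteration count are placeholders to experiment with:
import cv2
import numpy as np

img = cv2.imread('lines.png', cv2.IMREAD_GRAYSCALE)

# Sobel gradient magnitude picks up the pencil strokes.
gx = cv2.Sobel(img, cv2.CV_32F, 1, 0, ksize=3)
gy = cv2.Sobel(img, cv2.CV_32F, 0, 1, ksize=3)
mag = cv2.convertScaleAbs(cv2.magnitude(gx, gy))

# Otsu picks a threshold automatically; closing then eroding helps
# merge broken/double lines into single strokes.
_, edges = cv2.threshold(mag, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
kernel = np.ones((3, 3), np.uint8)
edges = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, kernel)
edges = cv2.erode(edges, kernel, iterations=1)
cv2.imwrite('edges.png', edges)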