I am trying to save an image after applying a filter, but the file saved to disk looks like the original img, not the filtered dst_1.
img = cv2.imread(images[14], 1)
kernel = np.ones((5,5), np.float32)/25
dst_1 = cv2.filter2D(img, -1, kernel)
cv2.imwrite("path/f.jpg", dst_1)
When I use imshow after applying the filter, I see what I expected to see: the filtered image.
Does anybody know the reason for this imwrite behavior, or where my mistake is?
I believe the problem is not with the way you are saving the file; rather, your filter is not doing what you think it is doing, so the result looks no different from the image you are saving.
OpenCV opens images in BGR format, and the second image you showed in your comment is the same image in RGB format. OpenCV has a function that can do this conversion for you, so please try the code below:
import cv2
import numpy as np
img = cv2.imread(images[14], 1)
dst_1 = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
cv2.imwrite("path/f.jpg", dst_1)
This should achieve the output you are looking for. Hope it helps!
I want to do some image processing using .dng files. I am using rawpy to convert the file into a numpy array, then imageio to save the image. It is very simple code, given on the rawpy webpage.
The resulting .png image is not what I expect it to be. The result is very red, while the raw image isn't. It looks like a red overlay on the original image.
The code I am using is:
import rawpy
import imageio
path = r'img\CGraw.dng'
with rawpy.imread(path) as raw:
    rgb = raw.postprocess()
imageio.imsave(r"img\spyraw.png", rgb)
Original dng image: https://drive.google.com/file/d/1Ip4U7KvI4-Lit4u7AuaHfoHIkXvGNRFk/view?usp=sharing
Resulting png image: https://drive.google.com/file/d/1CyrqJS2Osj3u-jEzLSxSy5eKMUP4csAV/view?usp=sharing
Simply add the argument use_camera_wb=True.
According to the documentation:
use_camera_wb (bool) – whether to use the as-shot white balance values
That means using the white balance coefficients that the camera found automatically during the shot (the values are stored as Exif data of the DNG file).
I don't know why the default is use_camera_wb=False (it could be a side effect of how LibRaw is wrapped for Python).
In my opinion the defaults should be use_camera_wb=True and use_auto_wb=False.
We may also select the output Color Space: output_color=rawpy.ColorSpace.sRGB.
The colors look right, but there may be other pitfalls that I am not aware of (I don't have much experience with rawpy and LibRaw).
import rawpy
import imageio
path = r'img\CGraw.dng'
with rawpy.imread(path) as raw:
    rgb = raw.postprocess(use_camera_wb=True, use_auto_wb=False, output_color=rawpy.ColorSpace.sRGB)
imageio.imsave(r"img\spyraw.png", rgb)
Output (downscaled):
I'm currently working with EmguCV and I need an empty Mat. But when I create it, the new Mat sometimes contains random values that I do not want.
I'm creating it like that:
Mat mask = new Mat(mainImg.Size, Emgu.CV.CvEnum.DepthType.Cv8U, 1);
And when I display the 'mask' it looks like this:
It should be completely black, but as you can see there is some garbage that causes me trouble when reading the Mat.
Does anyone know why this happens? Is there a clever way to clear the Mat?
Thanks in advance!
To create an empty Mat just use the code below.
Mat img = new Mat();
If you want to make it a specific size, use the following code. In your question above, you chose a depth type of 8U, which might be contributing to the low quality of the image. Here I choose a depth type of 32F, which should increase the quality of the new mask image. I also use 3 channels instead of 1, so you have full access to the Bgr color space.
Mat mask = new Mat(500, 500, DepthType.Cv32F, 3);
Mat objects are great because you don't need to specify the size or depth of the image beforehand. Similarly, if you want to use an Image instead, you can use the code below.
Image<Bgr, byte> img = new Image<Bgr, byte>(500, 500);
You will need to add some dependencies, but this is the easiest and my preferred way of doing it.
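For comparison, in OpenCV's Python bindings a Mat maps to a NumPy array, and the usual way to get a guaranteed-black mask is np.zeros, which zero-initializes the buffer rather than leaving whatever happened to be in memory. This is a hedged sketch of the same idea, not EmguCV code; the 500x500 size is just an example:

```python
import numpy as np

# A freshly allocated, guaranteed-black single-channel 8-bit mask.
# Unlike an uninitialized allocation, np.zeros fills the buffer with zeros.
mask = np.zeros((500, 500), np.uint8)

print(mask.max())  # 0: every pixel starts black, no leftover garbage
```

The EmguCV analogue of "clearing" an existing Mat is setting all its pixels to zero after allocation.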
I have one picture.
There are many broken places in the image.
Please refer to the picture.
Does anyone know how to repair the broken strokes using OpenCV 3.0?
I used the dilate operation in OpenCV and got the picture below:
It looks ugly compared to the original image.
I am late to the party but I hope this helps someone.
Since you have not provided the original image, I cannot say the following solution will work 100%. I am not sure how you are thresholding the image, but adaptive thresholding might give you better results. OpenCV (Python) code:
import cv2

gauss_win_size = 5
gauss_sigma = 3
th_window_size = 15
th_offset = 2

# 'image' is your grayscale input image
img_blur = cv2.GaussianBlur(image, (gauss_win_size, gauss_win_size), gauss_sigma)
th = cv2.adaptiveThreshold(img_blur, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                           cv2.THRESH_BINARY_INV, th_window_size, th_offset)
Tinker with the parameter values to see what works best. It's usually a good idea to blur your image first, and that alone may take care of broken binary characters. Note that blurring may produce slightly thicker characters in the binary image. If this still leaves a few broken characters, you can use morphological closing:
selem_shape = cv2.MORPH_RECT
selem_size = (3, 3)
selem = cv2.getStructuringElement(selem_shape, selem_size)
th = cv2.morphologyEx(th, cv2.MORPH_CLOSE, selem)  # close gaps in the thresholded image
Again, tinker around with structuring element size and shape that works best with your images.
I am cropping an opencv Mat:
cv::Size size = img.size();
cv::Rect roi(size.width/4, size.height/4, size.width/2, size.height/2);
img= img(roi);
I then use img.data pointer to create a vtkImageData (via vtkImageImport):
vtkSmartPointer<vtkImageImport> importer = vtkSmartPointer<vtkImageImport>::New();
importer->SetImportVoidPointer(img.data);
...
importer->Update();
vtkImageData* vtkImg = importer->GetOutput();
I don't get the expected result when I display the vtkImg. I've dug into OpenCV's code, and the problem is that when creating the cropped data, OpenCV does not allocate a new buffer that is 4 times smaller; instead it keeps the same already-allocated block, advances the data pointer, and flags the new img as not continuous. Therefore my VTK image still imports data from the original uncropped Mat. I know I could import the full image into vtkImageData and then do the cropping with a VTK filter, but I would prefer not to.
Is there a way with opencv to obtain a cropped image that is "physically" cropped (with a newly allocated data pointer)?
Thank you
I believe you are looking for cv::Mat::clone(). It makes a deep copy of the underlying image data and returns a cv::Mat object which contains said data.
You would then change the line
img= img(roi);
to
img = img(roi).clone();
After which img contains only the cropped data.
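The same view-versus-copy distinction is easy to demonstrate in OpenCV's Python bindings, where a Mat is a NumPy array and .copy() plays the role of clone() (a hedged sketch with toy data, not the asker's code):

```python
import numpy as np

# Stand-in for an image buffer.
img = np.arange(16, dtype=np.uint8).reshape(4, 4)

# Slicing, like cv::Mat::operator()(roi), returns a view that shares the
# original buffer and is not contiguous in memory.
roi = img[1:3, 1:3]
print(roi.flags['C_CONTIGUOUS'])  # False

# .copy(), like cv::Mat::clone(), allocates a fresh contiguous buffer
# containing only the cropped data.
cropped = roi.copy()
print(cropped.flags['C_CONTIGUOUS'])  # True
print(cropped.base is None)           # True: owns its own memory
```

A pointer taken from the copy is therefore safe to hand to an importer that expects a tightly packed buffer.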
I am facing a problem while writing image from cvMat.
This is what I have done.
IplImage* low_threshold_mask = cvCreateImage(cvSize(width, height), IPL_DEPTH_8U, 1);
CvMat* labelMat = cvCreateMat(low_threshold_mask->height,low_threshold_mask->width,CV_32F);
/* I populate elements of labelMat inside a function. Its done like this: cvmSet(labelMat,r,c,label); // where label is of type long */
To check the values I dump each pixel value in a text file and also dump the image.
IplImage* labelImg;
IplImage imageHeader;
labelImg = cvGetImage(labelMat, &imageHeader);
Now when I cross-check a pixel intensity against the corresponding value in the dumped text file, I find a mismatch. I believe the text file has the correct values but the image has wrong ones.
Can anyone help in figuring out the mistake?
---------------------New addition-------------------
I am still facing the problem. I have uploaded my programs. I will explain where exactly I am facing the error.
Libraries used: Along with openCV, I am using disjoint_sets of boost library.
Basically I am doing connected component labeling.
For debugging, for the 20th frame, I dumped the label of each pixel both to (a) a text file and (b) an image whose intensity levels equal the final label of each pixel. So I expect the same values in both the text file and the image, but that is not happening, and I am unable to figure out why. The text files show the correct values but the image does not. I am checking the pixel values of the image in Matlab (I took care that Matlab indices start at 1, not 0).
My text files
a) (frame20final.txt) gets populated in GrimsonGMM.cpp/ConCompLabeling().
b) (frame20image.txt) gets populated in main.cpp
My dumped image(frame-ccs.jpg) gets populated in main.cpp.
Both text files contain the same values, so the mistake must be in writing the image from the CvMat.
Test Video: person15_walking_d1_uncomp.avi
You can try with any other video also.
Thanks in advance,
Kaushik
I understood why I was getting the error: I was dumping to a .jpg image, which applies lossy compression. The problem was resolved when I used .png instead.
Your question is straightforward:
you want to work with a CvMat and, after doing operations on it, display it as if it were an image.
You must create an image header, something like this:
CvMat* mat = cvCreateMatHeader(rows, cols, type);
mat->step = 4 * (mat->cols * CV_ELEM_SIZE1(mat->type) * CV_MAT_CN(mat->type) / 4 + 1);//critical
cvCreateData(mat);
With the C++ interface in OpenCV 2.0 and later, it's not really necessary to convert from Mat to IplImage.
You can display your image using cvShowImage, and if you want to convert into IplImage,
just do an easy cast: IplImage *img = labelMat;