Lena grayscale image processing values - image-processing

I am trying to get the grayscale image values (0-255) of the 512x512 Lena image. Some have suggested using Matlab, but I do not have Matlab. Has anyone used GIMP for this?

Just use ImageMagick. It is installed on most Linux distros and available for OSX and Windows:
convert lena.jpg -colorspace gray -depth 8 txt:-
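If you would rather stay in Python, here is a minimal alternative sketch using Pillow and NumPy (those packages and the file names are my own choice, not part of the answer above):
from PIL import Image
import numpy as np

img = Image.open('lena512.jpg').convert('L')   # 'L' = 8-bit grayscale
values = np.asarray(img)                       # 512x512 array of values 0-255
np.savetxt('lena_gray.txt', values, fmt='%d')  # one row of pixels per output line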

The Octave solution is to read the image with
im = imread("lena512.jpg");
The image im can then be displayed with imshow(im).
Conversion to grayscale can be performed with
lenagy = 0.3*im(:,:,1) + 0.6*im(:,:,2) + 0.1*im(:,:,3);
(the weights are rounded versions of the standard luma coefficients 0.299, 0.587 and 0.114). lenagy is then a 2-D array, which can be saved to a file with, for example,
save lenagy.org lenagy
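For reference, a sketch of the same weighted conversion in Python/NumPy (the packages, the assumption of a 3-channel RGB file, and the output file name are mine, not part of the answer):
import numpy as np
from PIL import Image

im = np.asarray(Image.open('lena512.jpg'), dtype=float)       # H x W x 3 RGB array
lenagy = 0.3*im[:, :, 0] + 0.6*im[:, :, 1] + 0.1*im[:, :, 2]  # same weights as above
np.savetxt('lenagy.txt', lenagy, fmt='%.1f')                  # plain-text dump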

Related

How do you output an indexed (black and white) TIFF image in OpenCV

If I imwrite a binarized image, it only creates a grayscale image file, not an indexed=2 file. What options do I need on imwrite to accomplish this? I would also like LZW compression if possible.
import cv2

orig = cv2.imread('rgb.tiff')
work = cv2.cvtColor(orig, cv2.COLOR_BGR2GRAY)
work = cv2.ximgproc.niBlackThreshold(work, 255, cv2.THRESH_BINARY, 41, -0.2,
                                     binarizationMethod=cv2.ximgproc.BINARIZATION_NICK)
cv2.imwrite('bw.tiff', work)
If you really, really want a bi-level LZW-compressed TIFF, you can write one with wand like this:
#!/usr/bin/env python3
from wand.image import Image

# Open image and save as bi-level LZW-compressed version
with Image(filename='image.tif') as img:
    img.compression = 'lzw'
    img.type = 'bilevel'
    img.save(filename='result.tif')
Note that you can save OpenCV images like this, but you must first convert from OpenCV's BGR channel order to conventional RGB order. You can use:
RGB = cv2.cvtColor(BGRimage, cv2.COLOR_BGR2RGB)
or pure Numpy:
RGB = BGRimage[...,::-1]
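Putting the two together, a minimal sketch (assuming Wand 0.5.3 or later, which provides Image.from_array; the file names are placeholders) of pushing an OpenCV image through wand:
import cv2
from wand.image import Image

bgr = cv2.imread('bw.tiff')                 # OpenCV loads images in BGR order
rgb = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB)  # convert to conventional RGB first
with Image.from_array(rgb) as img:
    img.type = 'bilevel'                    # bi-level (1-bit) output
    img.compression = 'lzw'                 # LZW compression
    img.save(filename='result.tif')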
The OpenCV documentation for imwrite notes:
Only 8-bit (or 16-bit unsigned (CV_16U) in case of PNG, JPEG 2000, and TIFF) single-channel or 3-channel (with ‘BGR’ channel order) images can be saved using this function.

Lower noise in picture to enable OCR with tesseract

I'm trying to do OCR on this kind of image:
Unfortunately, Tesseract is unable to retrieve the number because of the noisy points around the characters.
I tried playing with ImageMagick to enhance the quality of the image, but with no luck.
Examples:
convert input.tif -level 0%,150% output.tif
convert input.tif -colorspace CMYK -separate output_%d.tif
Is there any way to efficiently retrieve the characters in this kind of image?
A simple closing operation (dilation followed by erosion) will give you the desired output. Below is a Python implementation:
import cv2
import numpy as np

# Read the image as grayscale and close small gaps/noise with a 3x3 kernel
img = cv2.imread(r'D:\Image\noiseOCR.png', 0)
kernel = np.ones((3, 3), np.uint8)
closing = cv2.morphologyEx(img, cv2.MORPH_CLOSE, kernel)
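If the goal is to feed the cleaned image straight to Tesseract, a possible follow-up to the snippet above (pytesseract is my assumption, not part of the answer) is:
import pytesseract

# Treat the image as a single text line and whitelist digits only
text = pytesseract.image_to_string(
    closing, config='--psm 7 -c tessedit_char_whitelist=0123456789')
print(text)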
The digits in this image are the largest connected components, so another approach is to run a connected-component analysis, as sketched below.
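A minimal sketch of that idea (the threshold polarity and the minimum area are assumptions, not values from the answer):
import cv2
import numpy as np

img = cv2.imread(r'D:\Image\noiseOCR.png', 0)
# Otsu threshold; INV assumes dark characters on a light background
_, bw = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# Label connected components and keep only those large enough to be a digit
n, labels, stats, _ = cv2.connectedComponentsWithStats(bw, connectivity=8)
clean = np.zeros_like(bw)
for i in range(1, n):                    # label 0 is the background
    if stats[i, cv2.CC_STAT_AREA] > 50:  # assumed minimum digit area
        clean[labels == i] = 255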

DNG to PNG conversion using ImageMagick

I have captured a burst of 5 DNG images from a Nexus 6P for scientific imaging. The pixel intensities from the images will be mapped to my measurement values. For further processing, the 5 DNG images are averaged to reduce noise and converted to PNG. I am using the code below to achieve this:
convert dng:*.dng -average out.png
I would like to know whether any processing is done on the DNG images that changes the pixel intensity values during conversion, as it would affect my final calibration.
Version: ImageMagick 7.0.3-4, Windows 10

Bayer to HSV using openCV

Does somebody know if there is a function available (based on OpenCV) to convert Bayer images to the HSV colour space?
Or do I have to go via RGB?
Read about cvtColor and CV_BayerBG2BGR in the OpenCV documentation. There is no direct Bayer-to-HSV conversion, so you go via BGR, as sketched below.
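A minimal sketch of that two-step conversion (the file name and the COLOR_BayerBG2BGR pattern are assumptions; use the constant that matches your sensor's Bayer layout):
import cv2

bayer = cv2.imread('bayer.png', cv2.IMREAD_GRAYSCALE)  # single-channel Bayer mosaic
bgr = cv2.cvtColor(bayer, cv2.COLOR_BayerBG2BGR)       # demosaic to BGR
hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)             # then BGR to HSV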

OpenCV image conversion from RGB to Grayscale using imread giving poor results

I'm loading a 24-bit RGB image from a PNG file into my OpenCV application.
However, loading the image as grayscale directly using imread gives a very poor result.
Mat src1 = imread(inputImageFilename1.c_str(), 0);
Loading the image as RGB and then converting it to grayscale gives a much better-looking result.
Mat src1 = imread(inputImageFilename1.c_str(), 1);
cvtColor(src1, src1Gray, CV_RGB2GRAY);
I'm wondering if I'm using imread for my image type correctly. Has anyone experienced similar behavior?
The image converted to grayscale using imread is shown here:
The image converted to grayscale using cvtColor is shown here:
I was having the same issue today. Ultimately, I compared three methods:
//method 1
cv::Mat gs = cv::imread(filename, CV_LOAD_IMAGE_GRAYSCALE);
//method 2
cv::Mat color = cv::imread(filename, 1); //loads color if it is available
cv::Mat gs_rgb(color.size(), CV_8UC1);
cv::cvtColor(color, gs_rgb, CV_RGB2GRAY);
//method 3
cv::Mat gs_bgr(color.size(), CV_8UC1);
cv::cvtColor(color, gs_bgr, CV_BGR2GRAY);
Methods 1 (loading grayscale) and 3 (CV_BGR2GRAY) produce identical results, while method 2 produces a different result. For my own ends, I've started using CV_BGR2GRAY.
My input files are jpgs, so there might be issues related to your particular image format.
The simple answer is that OpenCV functions use the BGR format. If you read in an image with imread or VideoCapture, it will always be BGR. If you then apply RGB2GRAY, you interchange the red channel with the blue one. The formula for the brightness is
y = 0.299*red + 0.587*green + 0.114*blue
so if you swap red and blue, this causes a large error in the computed brightness.
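A small check of that claim (the input file name is a placeholder): gray values computed with COLOR_BGR2GRAY and COLOR_RGB2GRAY on the same BGR image differ wherever red and blue differ.
import cv2
import numpy as np

bgr = cv2.imread('input.png')                      # OpenCV loads BGR
gray_ok = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)    # correct: data treated as BGR
gray_bad = cv2.cvtColor(bgr, cv2.COLOR_RGB2GRAY)   # wrong: red/blue weights swapped
print(np.abs(gray_ok.astype(int) - gray_bad.astype(int)).max())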
I had a similar problem once, working with OpenGL shaders. It seems that the first container that OpenCV reads your image into does not support the full colour range, hence the image you see is a poor grayscale conversion. However, once you convert the original image to grayscale using cvtColor, the container is different and supports the full range. In my opinion the first path uses fewer than 8 bits for the grayscale data, or a poor conversion method, while the second one gives a smooth image because more bits are used for the gray channel.
