Big issue reading a large 16-bit grayscale PNG using Python - opencv

I have a big issue trying to convert a large (7 MB) 16-bit scientific PNG image to JPG in Python, in order to compress it and check for any compression artifacts.
The original image can be found at:
https://postimg.cc/p5PQG8ry
Reading other answers here, I have tried Pillow and OpenCV without any success; the only thing I obtain is a white image. What am I doing wrong?
The commented line was an attempt based on Read 16-bit PNG image file using Python, but it does not seem to work for me and generates a data type error.
import numpy as np
from PIL import Image
import cv2
image = cv2.imread('terzafoto.png', -cv2.IMREAD_ANYDEPTH)
cv2.imwrite('terza.jpg', image)
im = Image.open('terzafoto.png').convert('RGB')
im.save('terzafoto.jpg', format='JPEG', quality=100)
#im = Image.fromarray(np.array(Image.open('terzafoto.jpg')).astype("uint16")).convert('RGB')

Thanks to Dan Masek I was able to find the error in my code: I was not correctly converting the data from 16 bit to 8 bit.
Here is the updated code with the solution for OpenCV and Pillow.
import numpy as np
from PIL import Image
import cv2
im = Image.fromarray((np.array(Image.open('terzafoto.png'))//256).astype("uint8")).convert('RGB')
im.save('PIL100.jpg', format='JPEG', quality=100)
img = cv2.imread('terzafoto.png', cv2.IMREAD_ANYDEPTH)
cv2.imwrite('100.jpeg', np.uint8(img // 256), [int(cv2.IMWRITE_JPEG_QUALITY), 100])
The JPEG quality factor can be set according to your needs; 100 gives the best quality, although JPEG compression is still lossy.
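As a side note, if the 16-bit data does not span the full 0-65535 range, dividing by 256 can still produce a very dark JPEG. A minimal sketch of stretching the actual value range instead (the output file name and quality value are arbitrary):
import cv2
import numpy as np
img = cv2.imread('terzafoto.png', cv2.IMREAD_ANYDEPTH)
# stretch the actual min/max of the 16-bit data to the full 8-bit range
img8 = cv2.normalize(img, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
cv2.imwrite('stretched.jpg', img8, [int(cv2.IMWRITE_JPEG_QUALITY), 95])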

Related

How do I read an image given its link using OpenCV and then plot it using plt.imshow()?

I've tried this
from PIL import Image
import requests
im = Image.open(requests.get('http://images.cocodataset.org/train2017/000000000086.jpg', stream=True).raw)
img = cv.imread('im.jpg')
print(img)
This works, but print(img) prints None. Also, shouldn't it be cv.imread(im)? (Doing that gives an error.)
When I do
plt.imshow(img)
it gives an error:
Image data of dtype object cannot be converted to float
Not sure I would introduce PIL as an additional dependency as well as OpenCV, nor would I write the image to disk.
I think it would be more direct to do:
import cv2
import requests
import numpy as np
# Fetch JPEG data
d = requests.get('http://images.cocodataset.org/train2017/000000000086.jpg')
# Decode in-memory, compressed JPEG into Numpy array
im = cv2.imdecode(np.frombuffer(d.content, np.uint8), cv2.IMREAD_COLOR)
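Note that imdecode returns the image in OpenCV's BGR channel order, so to plot it with plt.imshow you would convert it first; a minimal sketch continuing from the snippet above:
import matplotlib.pyplot as plt
# OpenCV stores images as BGR, while matplotlib expects RGB
plt.imshow(cv2.cvtColor(im, cv2.COLOR_BGR2RGB))
plt.show()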
I think you forgot to save the downloaded image file to disk before reading it.
from PIL import Image
import requests
import cv2 as cv
im = Image.open(requests.get('http://images.cocodataset.org/train2017/000000000086.jpg', stream=True).raw)
im.save('im.jpg') # write image file to disk
img = cv.imread('im.jpg') # read image file from disk
print(img)
If you want to show the image directly, without saving it to disk, use this approach from another answer:
import numpy as np
im = Image.open(...)
img = np.array(im)  # PIL already gives an RGB array, which is what plt.imshow expects
plt.imshow(img)
plt.show()
If you want to pass the array to OpenCV functions instead, convert it first with cv2.cvtColor(img, cv2.COLOR_RGB2BGR), since OpenCV expects BGR.

Find coordinates of high frequency centers after plotting fftshifted image with Python

I am trying to write code to detect Moire patterns in images. I am quite new to image processing in Python, so please excuse me if the solution is trivial.
My approach is to use scipy's fftshift function to distinguish between Moire and non-Moire images (see below). Moire images have several high-frequency centers, whereas normal images have only one center.
I would like to get the coordinates of these "centers", but I don't know how to do that exactly.
I am happy about any suggestion!
Code:
import numpy as np
from scipy.fftpack import fft2, fftshift
import imageio
import matplotlib.pyplot as plt
from matplotlib.colors import LogNorm
moire = imageio.imread(imgpath1)
nonMoire = imageio.imread(imgpath2)
moire = np.dot(moire[...,:3], [0.2989, 0.5870, 0.1140])
nonMoire = np.dot(nonMoire[...,:3], [0.2989, 0.5870, 0.1140])
noisy_gray_fft = fft2(moire)
orig_gray_fft = fft2(nonMoire)
orig_gray_fft_shift = fftshift(orig_gray_fft)
noisy_gray_fft_shift = fftshift(noisy_gray_fft)
plt.subplot(121)
plt.imshow(np.abs(noisy_gray_fft_shift), cmap='gray', norm=LogNorm(vmin=5))
plt.subplot(122)
plt.imshow(np.abs(orig_gray_fft_shift), cmap='gray', norm=LogNorm(vmin=5))
plt.show()
[Images: non-Moire original, Moire original, and both magnitude spectra after FFT]
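A rough sketch of one way to locate such bright peaks in the shifted magnitude spectrum, using the arrays computed above (the neighbourhood size and threshold factor are arbitrary assumptions that would need tuning):
from scipy import ndimage
mag = np.abs(noisy_gray_fft_shift)
# a pixel is a candidate peak if it equals the maximum of its local neighbourhood
is_local_max = (mag == ndimage.maximum_filter(mag, size=20))
# keep only peaks that are much brighter than the typical spectrum value
peaks = np.argwhere(is_local_max & (mag > 50 * np.median(mag)))
print(peaks)  # (row, col) coordinates of the high-frequency centers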

Different results when opening an image into a numpy array using cv2.imread and PIL.Image.open

I am trying to open an image and turn it into a numpy array.
I have tried:
1) cv2.imread, which gives you a numpy array directly, and
2) PIL.Image.open followed by numpy.asarray to convert the image object.
Then I realized that the resulting arrays from the same picture are different; please see the attached screenshots.
cv2.imread
PIL.Image.open
I would expect the color channels to always be in the same order, no matter the package, but I cannot find any documentation for Pillow regarding this.
Or am I just being silly? Thanks in advance for any suggestion!
I don't know anything about PIL but, contrary to just about every other system in the world, OpenCV stores images in BGR order, not RGB. That catches every OpenCV beginner by surprise and it looks like that's the case with your example.
OpenCV
image = cv2.imread(image_path, 1)
image_cv = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
Pillow
image_from_pil = Image.open(image_path).convert("RGB")
image_pillow = numpy.array(image_from_pil)
image_pillow equals image_cv
Note: when reading a JPEG image, image_pillow and image_cv may be slightly different, because the libjpeg version used by OpenCV may differ from the one used by Pillow.
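For example, a quick check that both decoders produced the same pixels (assuming the two snippets above were run on the same image_path):
import numpy as np
# True if the two arrays are byte-identical
print(np.array_equal(image_cv, image_pillow))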
As SSteve and Xin Yang correctly say, the main problem is likely that cv2 returns the pixel data in BGR channel order instead of the usual RGB. You need to convert the output (reverse the channel axis or use cv2.cvtColor).
Even after the color plane conversion, the output might not be the same. Both PIL and cv2 use libjpeg under the hood, but the outputs of libjpeg do differ for different versions. Read this research paper for reference. Based on my experiments I can say that the libjpeg version used by PIL is unpredictable (differs even on two identical MacBook Pro 2020 M1 using brew and the same Python and PIL version).
If it does matter and you want to have control over which libjpeg/libjpeg-turbo/mozjpeg version is used for compression and decompression, use jpeglib. It is still in beta, but the production release is coming.

Convert coloured images in a folder to grayscale and store them in a different folder using OpenCV

Can anyone help me convert the coloured images in a folder to grayscale and store them in a different folder using OpenCV?
I am expecting code.
Thanks
You must realize that no one on Stack Overflow will write code for you. Here's a short example to do this in Python:
import os
import cv2
color_img_dir = 'color_images'  # input folder (placeholder name)
gray_img_dir = 'gray_images'    # output folder (placeholder name)
os.makedirs(gray_img_dir, exist_ok=True)
color_imgs = [x for x in os.listdir(color_img_dir) if x.lower().endswith('.jpg')]
for name in color_imgs:
    img = cv2.imread(os.path.join(color_img_dir, name))      # read the colour image
    gray_img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)          # convert to grayscale
    cv2.imwrite(os.path.join(gray_img_dir, name), gray_img)   # write to the other folder

Out of core resampling

I have a large single-band image file that does not fit in my RAM.
I want to read it as a numpy array (data) and plot it using matplotlib, possibly with imshow(data). I know how to do this for a small image, but how can I do it for a large file? Of course, it is okay to resample it (possibly with scipy zoom) before plotting. But how can I resample it before reading it as a numpy array, when reading the whole file into memory is not possible?
Maybe it is better to display the TIFF with an external viewer: https://superuser.com/questions/254677/what-software-works-well-for-viewing-massive-tiff-images-on-windows-7
Otherwise, you could try converting the TIFF to an HDF5 file first (ftp://ftp.hdfgroup.org/HDF/contrib/salem/tiffutils.c) and then load only the part of the matrix you want to display.
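Once the data is in HDF5, a decimated read can be done without loading the full array; a minimal sketch with h5py, assuming the file is named large_image.h5 and the dataset is called 'data':
import h5py
import matplotlib.pyplot as plt
with h5py.File('large_image.h5', 'r') as f:
    # slice on disk: only every 10th pixel in each direction is read into memory
    data = f['data'][::10, ::10]
plt.imshow(data, cmap='gray')
plt.show()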
