DNG to PNG conversion using ImageMagick

I have captured a burst of 5 DNG images from a Nexus 6P for scientific imaging. The pixel intensities from the images will be mapped to my measurement values. For further processing, the 5 DNG images are averaged to reduce noise and converted to PNG. I am using the command below to achieve this:
convert dng:*.dng -average out.png
I would like to know whether any processing is applied to the DNG images during conversion that changes the pixel intensity values, as this would affect my final calibration.
Version: ImageMagick 7.0.3-4, Windows 10
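Worth checking before calibrating: ImageMagick does not decode DNG itself; it typically hands the file to a raw-development delegate (dcraw, ufraw or libraw, depending on the build), which usually applies demosaicing, white balance and a gamma curve, so the values in out.png are generally not linear sensor counts. If raw counts matter, one alternative (a minimal sketch, assuming the rawpy package, a LibRaw wrapper, is installed; file names are placeholders) is to average the unprocessed sensor data directly:
# Sketch: average the raw Bayer data from each DNG, bypassing any
# raw-development processing by the ImageMagick delegate.
import glob
import numpy as np
import rawpy
import imageio

frames = []
for path in sorted(glob.glob('*.dng')):
    with rawpy.imread(path) as raw:
        frames.append(raw.raw_image.copy().astype(np.float64))  # unprocessed sensor counts

avg = np.mean(frames, axis=0)
imageio.imwrite('out.png', np.round(avg).astype(np.uint16))  # 16-bit PNG keeps precision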

Related

Converting normalised CIELAB image tensor to RGB image

I trained an image-to-image translation model in PyTorch, and the input and output images are in the CIELAB color space. How do I convert the output to an RGB image? Simply converting the image causes some sort of clipping and produces white patches.
out=model.forward(x)
out=torch.squeeze(out)
out=out.permute(1,2,0)
out=torch.from_numpy(out.data.numpy())
plt.imshow(out)
This doesn't produce white patches; however, I can't use OpenCV to convert it to RGB, as the values are in the range 0-1.
Now, if I convert the tensor to a PIL image and then convert to RGB (0-255), some sort of clipping occurs and produces white patches, which are visible even before converting to RGB:
out=model.forward(x)
out=torch.squeeze(out)
out=np.asarray(transforms.ToPILImage()(out))
plt.imshow(out)
The white patches remain after using out=cv2.cvtColor(out, cv2.COLOR_LAB2RGB) to convert.
How can I properly convert the CIELAB image to RGB?
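For what it's worth, OpenCV's floating-point Lab conversion expects L in [0, 100] and a, b in roughly [-127, 127], so a [0, 1]-normalised tensor has to be denormalised before cv2.cvtColor; otherwise values clip and show up as white patches. A minimal sketch, assuming each channel was normalised to [0, 1] during training (the exact denormalisation must mirror your training pipeline):
import cv2
import numpy as np
import torch
import matplotlib.pyplot as plt

out = model.forward(x)                        # model and x as in the question
out = torch.squeeze(out).permute(1, 2, 0)     # HWC, channels L, a, b
lab = out.detach().cpu().numpy().astype(np.float32)
lab[..., 0] *= 100.0                          # L: [0,1] -> [0,100]
lab[..., 1] = lab[..., 1] * 255.0 - 128.0     # a: [0,1] -> [-128,127]
lab[..., 2] = lab[..., 2] * 255.0 - 128.0     # b: [0,1] -> [-128,127]
rgb = np.clip(cv2.cvtColor(lab, cv2.COLOR_Lab2RGB), 0.0, 1.0)  # float RGB in [0,1]
plt.imshow(rgb)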

Is 90 degree image rotation with graphicsmagick or imagemagick lossless

Is the 90 degree image rotation with graphicsmagick or imagemagick always lossless?
E.g. when doing
gm convert -rotate 90 img.img rot90.img.img
gm convert -rotate -90 rot90.img.img back.img
will img.img and back.img be equal?
The answer to this depends more on the particular image format you're using, rather than the internals of Image/GraphicsMagick (assuming they're competently written).
With a raw format (e.g. BMP), there should be no reason for this not to be completely identical.
With a lossless format, it's possible there may be some subtle variations due to numerical precision.
With a lossy format (e.g. JPEG), it's almost certain there will be differences. In the case of JPEG for example, the compression of each 8x8 block is affected by the block to its left - if you rotate the image then that spatial relationship will change.
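The claim is easy to verify empirically; here is a minimal sketch using Pillow (file names are placeholders), whose ROTATE_90/ROTATE_270 transposes are exact pixel permutations:
import numpy as np
from PIL import Image

orig = Image.open('img.png').convert('RGB')

# Round-trip through a lossless format (PNG): expected to be identical.
orig.transpose(Image.Transpose.ROTATE_90).save('rot90.png')
back = Image.open('rot90.png').transpose(Image.Transpose.ROTATE_270)
print(np.array_equal(np.asarray(orig), np.asarray(back)))   # True

# Round-trip through a lossy format (JPEG): expected to differ.
orig.transpose(Image.Transpose.ROTATE_90).save('rot90.jpg')
back = Image.open('rot90.jpg').transpose(Image.Transpose.ROTATE_270)
print(np.array_equal(np.asarray(orig), np.asarray(back)))   # almost certainly False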

Which YUV format should I select for computing PSNR of video sequence?

For example, I can choose one of the following YUV formats:
400
411
420
422
444
Each format I select produces a different PSNR value for the video sequence.
Alternatively, is there a way to determine the YUV data format of my input YUV video sequence?
According to Wikipedia, PSNR is reported against each channel of the color space:
Alternately, for color images the image is converted to a different color space and PSNR is reported against each channel of that color space, e.g., YCbCr or HSL.
See: https://en.wikipedia.org/wiki/Peak_signal-to-noise_ratio
For computing PSNR of video, you must have the source video, and the same video after some kind of processing stage.
PSNR is most commonly used to measure the quality of reconstruction of lossy compression codecs.
If color sub-sampling (e.g. converting YUV 444 to YUV 420) is part of the lossy compression pipeline, it is recommended to include the sub-sampling in the PSNR computation.
Note: there is no strict answer; it depends on what you need to measure.
Example:
Assume the input video is YUV 444, an H.264 codec is used for lossy compression, and the pre-processing stage converts YUV 444 to YUV 420.
Video Compression: YUV444 --> YUV420 --> H264 Encoder.
You need to reverse the process, and then compute PSNR.
Video Reconstruction: H264 Decoder --> YUV420 --> YUV444.
Now you have the input video in YUV 444 format and the reconstructed video in YUV 444 format; compute the PSNR of the two videos.
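Once both sequences are back in the same format, the per-frame computation itself is simple; a minimal sketch for two same-sized 8-bit frames (or planes):
import numpy as np

def psnr(ref, test, peak=255.0):
    # Mean squared error in double precision, then 10*log10(peak^2 / MSE).
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)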
Determine YUV video data format of input YUV video:
I recommend using ffprobe tool.
You can download it from here: https://ffmpeg.org/download.html (select "Static Linking").
I found the solution here: https://trac.ffmpeg.org/wiki/FFprobeTips.
You can use the following example:
ffprobe -v error -show_entries stream=pix_fmt -of default=noprint_wrappers=1:nokey=1 input.mp4
Y-PSNR: you can simply extract the Y component of the original and the reconstructed images, and calculate the PSNR value for each image/video frame.
For video: you then calculate the mean of all the per-frame PSNR values.
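Putting the two together, a sketch of mean Y-PSNR over raw planar YUV 420 files, reusing the psnr() helper above (file names, width and height are assumptions about your input):
import numpy as np

def y_frames(path, w, h):
    frame_size = w * h * 3 // 2                  # Y plane plus subsampled U and V
    with open(path, 'rb') as f:
        while True:
            buf = f.read(frame_size)
            if len(buf) < frame_size:
                break
            yield np.frombuffer(buf, np.uint8)[:w * h].reshape(h, w)  # Y plane only

scores = [psnr(a, b) for a, b in zip(y_frames('ref.yuv', 1920, 1080),
                                     y_frames('dist.yuv', 1920, 1080))]
print(sum(scores) / len(scores))                 # mean Y-PSNR over the sequence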

Grayscale conversion algorithm of OpenCV's imread()

What grayscale conversion algorithm does OpenCV's
cv::imread("image.jpg", cv::IMREAD_GRAYSCALE);
use?
In OpenCV 3.0:
cv::IMREAD_COLOR: the image is decompressed by cv::JpegDecoder as JCS_RGB (a three-channel image), and then the icvCvt_RGB2BGR_8u_C3R() function swaps the red and blue channels to get BGR format.
cv::IMREAD_GRAYSCALE: the image is decompressed by cv::JpegDecoder as JCS_GRAYSCALE (a one-channel image); all details of color conversion and other pre-/post-processing are handled by libjpeg. Finally, the decompressed data are copied into the internal buffer of the given cv::Mat.
Ergo, no cv::cvtColor() is called after reading the image with cv::IMREAD_GRAYSCALE.
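The practical consequence is that the two decode paths can disagree slightly, since libjpeg's grayscale decode and OpenCV's cv::cvtColor() differ in rounding and chroma handling. A quick check (Python sketch, 'image.jpg' as a placeholder):
import cv2
import numpy as np

gray_direct = cv2.imread('image.jpg', cv2.IMREAD_GRAYSCALE)    # libjpeg path
gray_via_color = cv2.cvtColor(cv2.imread('image.jpg', cv2.IMREAD_COLOR),
                              cv2.COLOR_BGR2GRAY)              # OpenCV path
print(np.abs(gray_direct.astype(int) - gray_via_color.astype(int)).max())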

Convert 8-bit single-channel (YUV) image to 24-bit RGB (3-channel) image

How do I convert a one-channel YUV image (only the first channel, Y, is used) to a 24-bit RGB image? I ask because I must display it in a GTK+ interface, and GTK supports only 24-bit RGB images.
I'm not sure what you are actually starting from: a single-channel grayscale image, or a three-channel YUV image whose second and third channels are full of zeros. If you have a single-channel 8-bit image, you can use cvtColor(source_mat, destination_mat, CV_GRAY2RGB) to convert it to 24-bit RGB. If you are starting from a 3-channel 24-bit YUV image with two channels full of zeros, you can use the split() function to extract the Y channel, then convert that as described above.
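In code, both cases from the answer look like this (a sketch with OpenCV's Python bindings; file names are placeholders):
import cv2

# Case 1: single-channel 8-bit image -> 24-bit RGB (three identical channels).
gray = cv2.imread('y_plane.png', cv2.IMREAD_GRAYSCALE)
rgb = cv2.cvtColor(gray, cv2.COLOR_GRAY2RGB)

# Case 2: 3-channel YUV image with zeroed U/V -> take Y, then convert as above.
yuv = cv2.imread('yuv_image.png', cv2.IMREAD_COLOR)
y, u, v = cv2.split(yuv)
rgb_from_y = cv2.cvtColor(y, cv2.COLOR_GRAY2RGB)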
