Check in which image in a dataset I get the warning "libpng warning: iCCP: known incorrect sRGB profile" while using OpenCV

I have a dataset of 2500 images. To access each of them and apply normalization, I loop over the images:
import cv2

for path_to_image in dataset:
    image = cv2.imread(path_to_image)
    # rest of the code follows
While running the code I get this warning: libpng warning: iCCP: known incorrect sRGB profile.
I tried some tricks to narrow down the iteration at which the image is not being read properly, but they did not work.
Is there any way I can obtain the value of path_to_image at the point where I get this warning?
I was thinking there must be a variable that stores some information about the warning.
Would be grateful for any help.
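One approach (a sketch, not from the original question): libpng emits this warning on stderr from native code, so Python never sees an exception it could catch. Printing each path, flushed, right before the read makes the offending file show up next to its warning in the console:
import cv2

for path_to_image in dataset:  # dataset holds the image paths, as above
    # Flush so the path is on screen before libpng writes its warning to stderr.
    print(path_to_image, flush=True)
    image = cv2.imread(path_to_image)
    # rest of the code follows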

Related

OpenCV won't accept large image (over 1 GB). Attempted changing the path variable and it still won't accept the image

So I am trying to read some very large files into OpenCV using cv2.imread(image_path). However, OpenCV returns the error "error: (-215:Assertion failed) pixels <= CV_IO_MAX_IMAGE_PIXELS in function 'cv::validateInputImageSize'".
I have tried using os.environ["OPENCV_IO_MAX_IMAGE_PIXELS"] = pow(2,30).__str__() as suggested in other threads, but I still get the same error. I am at a loss as to how to adjust the maximum pixel limit; any help would be very appreciated.
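A commonly reported gotcha, offered here as an assumption rather than a confirmed fix: the variable must be set before cv2 is imported, and 2^30 pixels may still be too small for a 1 GB image. A minimal sketch:
import os
# Set the limit before importing cv2; OpenCV is widely reported to pick this
# variable up when the module loads. 2**40 is just an arbitrarily large cap.
os.environ["OPENCV_IO_MAX_IMAGE_PIXELS"] = str(2**40)
import cv2

image = cv2.imread(image_path)  # image_path as in the question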

Is there any way (command-line tools) to calculate the MD5 hash for .NEF (also .CR2, .TIFF) files regardless of any metadata?

Is there any way (command-line tools) to calculate the MD5 hash for .NEF (also .CR2, .TIFF) files regardless of any metadata, e.g. EXIF, IPTC, XMP and so on?
The MD5 hash should stay the same even after we update any metadata inside the image file.
I searched for a while, the closest solution is:
exiftool test.nef -all= -o - -m | md5
but 'exiftool -all=' still keeps a set of EXIF tags in the output file. The MD5 hash changes if I update the remaining tags.
ImageMagick has a method for doing exactly this. It is installed on most Linux distros and is available for OSX (ideally via homebrew) and also Windows. There is an escape for the image signature which includes only pixel data and not metadata - you use it like this:
identify -format %# _DSC2007.NEF
feb37d5e9cd16879ee361e7987be7cf018a70dd466d938772dd29bdbb9d16610
I know it does what you want and that the calculated checksum does not change when you modify the metadata on PNG files, for example, and I know it does calculate the checksum correctly for CR2 and NEF files. However, I am not in the habit of modifying RAW files such as you have, and I have not tested that it does the right thing in that case - though I would be startled if it didn't! So please test before use.
The reason that there is still some Exif data left is that the image data for a NEF file (and similar TIFF-based filetypes) is located within that Exif block. Remove that and you have removed the image data. See ExifTool FAQ 7, which has an example shortcut tag that may help you out.
I assume your intention is to verify the actual image data has not been tampered with.
An alternative to stripping the metadata is to convert the image to a format that has no metadata.
ImageMagick is a well-known open source tool (Apache 2 license) for image manipulation and conversion. It provides libraries with various language bindings as well as command line tools for various operating systems.
You could try:
convert test.nef bmp:- | md5
This converts test.nef to bmp on stdout and pipes it to md5.
AFAIR, BMP has no support for metadata, and I'm not sure whether ImageMagick even preserves metadata across conversions.
This will only work with single-image files (i.e. not multi-image TIFFs or GIF animations). There is also a slight possibility that some changes to the image result in the same converted output because of color-space conversions, but such changes would not be visible.
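If you need this for many files, the same pipeline is easy to script. Here is a minimal sketch (mine, not from the answer) that shells out to ImageMagick's convert and hashes the decoded BMP bytes in Python:
import hashlib
import subprocess

def pixel_md5(path):
    # Equivalent of 'convert test.nef bmp:- | md5': decode to BMP on stdout
    # and hash the bytes. Assumes ImageMagick's convert is on PATH.
    bmp_bytes = subprocess.run(["convert", path, "bmp:-"],
                               check=True, capture_output=True).stdout
    return hashlib.md5(bmp_bytes).hexdigest()

print(pixel_md5("test.nef"))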

error: imread: invalid image file: Magick++ exception: Magick:

I have created the images using GIMP. The image is successfully loaded by Octave's imread. But when I use the convert command to resize the image and then try to load it in Octave, the following errors occur:
warning: your version of GraphicsMagick limits images to 16 bits per pixel
error: imread: invalid image file: Magick++ exception: Magick: Must specify image size (/home/tensor/Documents/Projects/ML/datasets/NepaliChar/KA/resize/makeMat.m) reported by coders/gray.c:128 (ReadGRAYImage)
I am using Arch Linux with Octave version 3.8.0 and ImageMagick 6.8.8-4
Reading a .GRAY image requires you to specify the dimensions of the image (e.g. 800x600). You have to do this because you are reading raw pixels with no header.
According to the documentation, Octave's imread gives you no way to specify the dimensions of your .GRAY image before reading it, so it cannot read such files directly.
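To make the underlying point concrete, here is a sketch (mine, not from the answer) of reading header-less raw gray data when you do know the size, using NumPy and hypothetical 800x600 dimensions:
import numpy as np

# A raw .gray file is a bare stream of pixel bytes with no header, so the
# reader must be told the dimensions up front (hypothetical values here).
width, height = 800, 600
pixels = np.fromfile("image.gray", dtype=np.uint8)
image = pixels.reshape((height, width))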

Unsupported format or combination of formats when using cv::reduce method in OpenCV

I am using OpenCV 2.4.2 and I am trying to take projections of two matrices (tmpl (32x44) and subj (32x44)) along rows and columns. I have initialised a result matrix as rowProjectionSubj(subj.rows, 1, CV_8UC1), and then I call cv::reduce(subj, rowProjectionSubj, 1, CV_REDUCE_SUM, -1);
Why is this complaining about a type mismatch? I have kept the types the same (by keeping dtype=-1 in cv::reduce). I get the tmpl and subj objects by doing cv::imread("image_path", 0), i.e. reading the images in as grayscale.
I might not be right, but after I saw this:
http://answers.opencv.org/question/3698/cvreduce-gives-unsupported-format-exception/?answer=3701#post-id-3701
and a little experiment with an old friend called "register math", I realised that when you add two 8-bit numbers, you need a 9-bit register to store the sum because of the potential carry output, and summing a whole row needs more still. So the result of reduce should have more space than the source: if the source is 8-bit unsigned, the destination should be at least 16-bit (unsigned or signed), and might as well be 32-bit if it is going to be used for further product calculations and such.
NOTE: The destination type must be EXPLICITLY stated in the cv::reduce call. Please follow the OpenCV link above for further information.
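In modern OpenCV-Python terms (a sketch of the same fix; the question itself uses the C++ API and the old CV_REDUCE_SUM constant), passing an explicitly wider dtype avoids the overflow:
import cv2

subj = cv2.imread("image_path", 0)  # 8-bit grayscale, as in the question
# Request a 32-bit signed destination so the row sums of 8-bit pixels cannot
# overflow; dtype=-1 would keep the 8-bit source depth and fail.
row_projection_subj = cv2.reduce(subj, 1, cv2.REDUCE_SUM, dtype=cv2.CV_32S)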

Sampling BC5_SNORM texture yields incorrect value range

I'm working with Direct3D 11, and I've come across something strange. I have taken a normal map and encoded it to a DDS file twice: once with R8G8B8A8_SNORM encoding, and once with BC5_SNORM.
Next I load each texture using D3DX11CreateShaderResourceViewFromFile in conjunction with D3DX11GetImageInfoFromFile. When I sample these textures in my pixel shader I find that the R8G8B8A8_SNORM texture returns values in the range [-1,1], which is what I would expect for a SNORM texture. However, the BC5_SNORM texture returns values in the range [0,1], which doesn't make any sense to me.
I double and triple checked with my debugger and PIX. The format of the texture is correct (BC5_SNORM), so I am at a loss as to why it's not returning signed values.
I managed to reproduce the same issue, and I also got the same behaviour when converting from an R8G8B8A8_SNORM texture (with -1 to +1 values) to BC5_SNORM (producing only 0 to 1 values) through D3DX11LoadTextureFromTexture. There does appear to be a fault in D3DX11, at least regarding BC5_SNORM: regardless of the input format, the BC5_SNORM output is always in the 0 to 1 range.
As suggested by @chuckwalbourn, I can confirm that the DirectXTex utilities, which supersede the now-deprecated D3DX11, do respect and correctly handle signed values for BC5_SNORM outputs.
You can either have your program write out a temporary .dds (using D3DX11SaveTextureToFile with an R8G8B8A8_SNORM texture) and then invoke the standalone DirectXTex texconv.exe utility to convert to BC5_SNORM, or wrangle the DirectXTex library into your program and use its Convert(...) function appropriately.
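For the first route, the texconv step might look like this (wrapped in Python for consistency with the rest of this page; the -f and -o flags are texconv's documented format and output-directory options, and the file names are hypothetical):
import subprocess

# Re-encode the temporary R8G8B8A8_SNORM .dds as BC5_SNORM using DirectXTex's
# standalone texconv.exe (assumed to be on PATH); output lands in out_dir.
subprocess.run(["texconv.exe", "-f", "BC5_SNORM", "-o", "out_dir",
                "normalmap_temp.dds"], check=True)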
