I have a TIFF image whose pixels are integers. I want to import it into Julia and process it further.
I used in IJulia:
using FileIO
using Images
using ImageView
path_seed = joinpath(@__DIR__, "seed.tif")
seed = load(path_seed);
When I type seed and press Enter, I get an image displayed, but I want the matrix of elements.
If I use:
mat = convert(Array{Float32}, seed)
I get a matrix, but there are two problems:
1- Its entries are all floats, not integers.
2- The float values do not correspond to the integer values I expect. For example, my image contains the values 0,1,2,3,4 (it is a mask, and each connected component has one of the values 0,1,2,3,4), but I get the floats 0.0, 0.011764707, 0.015686275, 0.007843138, 0.003921569.
How can I import the image as a matrix of integers? Here is the sample image:
http://s000.tinyupload.com/index.php?file_id=21432720633236092551
When you load that file, you're seeing the effects of two of the key abstractions of JuliaImages:
every pixel is a single entry in an array (not, e.g., 3 entries if it's an RGB image)
numbers mean what they say they are. In particular, 255 ≠ 1.0.
When you load your seed.tif image, you'll note that the returned values are of type Gray{N0f8}. The Gray part means it has been interpreted as a grayscale image---had it been a color image, they might have been elements like RGB{N0f8}(1.0, 0.8, 0.4). In either case, accessing img[i,j] returns all the information about that whole pixel.
The part you're probably most concerned about is the N0f8. In most image-processing frameworks, the meaning of a number depends on its representation (e.g., https://scikit-image.org/docs/stable/user_guide/data_types.html). "White" is 255 if your numbers are encoded as UInt8, but white is 1.0 if your numbers are encoded as Float32. When you want to change the representation, you have to remember to use special conversion functions that also change the values of the pixels. In no other field of mathematics am I aware of the equality 255 == 1.0.
To stop encouraging bad mathematics, JuliaImages has gone to the trouble to define new number types that harmonize these notions. In JuliaImages, white is always 1. But to support 8-bit images, we define a new number type, N0f8, with 8 bits whose maximum value is 1. These are internally represented just like UInt8, they are just interpreted as if they have been divided by 255. Similarly, there are N0f16 for 16-bit images, and even special types like N4f12 that are useful, e.g., if you're collecting images with a 12-bit camera. This means it's possible to detect image saturation simply by looking for pixels with value 1.
Of course, sometimes you might want to look at things differently. JuliaImages supports several "views" that provide an alternative interpretation of the same bitwise data. In your case,
rawview(channelview(seed))
would return an array of UInt8 values which might be what you're expecting.
Note, however, that if you want to save an array of integers that shouldn't really be interpreted as an image, there are possibly better formats such as HDF5. Image formats are sometimes subject to compression that can corrupt the values you save. TIFF is often called lossless, but in fact it's possible to use lossy compression (https://en.wikipedia.org/wiki/TIFF).
Related
Does anyone know which method needs to be invoked to access the RGB values of a picture of type 8UC4 (8 bits per component, 4 channels: color channels + alpha)?
You may use mat.at(i, j) to access the pixel located at row i and column j, but at() must be used with the data type of the Mat, which is passed as a template parameter. OpenCV won't throw any error if you use the wrong data type; instead it will return garbage, so you need to take care of that. Single-channel matrix pixels can be accessed as mat.at<uchar>(i, j) or mat.at<float>(i, j).
For multi-channel matrices, you need to use cv::Vec3b, cv::Vec3f, cv::Vec4b, etc.
In your case, since it is a 4-channel uchar matrix, its pixel values can be accessed as:
cv::Vec4b pixVal = mat.at<cv::Vec4b>(0, 0);
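Putting it together, a minimal sketch of per-pixel access on a CV_8UC4 matrix might look like the following (the file name and pixel coordinates are placeholders, and the B, G, R, A ordering assumes OpenCV's default channel order):
#include <cstdio>
#include <opencv2/opencv.hpp>

int main()
{
    // Load with flag -1 so the alpha channel is kept (hypothetical file name)
    cv::Mat mat = cv::imread("image_with_alpha.png", -1);
    if (mat.empty() || mat.type() != CV_8UC4)
        return 1;

    // Access the pixel at row i, column j as a 4-element uchar vector
    int i = 0, j = 0;
    cv::Vec4b pixVal = mat.at<cv::Vec4b>(i, j);
    std::printf("B=%d G=%d R=%d A=%d\n",
                pixVal[0], pixVal[1], pixVal[2], pixVal[3]);
    return 0;
}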
I tried to apply cv::bitwise_not to a cv::Mat of double values. I applied it like this:
cv::bitwise_not(img, imgtemp);
img is CV_64F data containing only 0s and 1s, but imgtemp is filled with nonsense data.
I expect 0 in img to become 1 in imgtemp, and 1 in img to become 0 in imgtemp. How do I apply bitwise_not to a double Mat?
Thanks
I cannot see the sense of doing a bitwise NOT of a double (floating point) value: you will be doing bitwise operations on the exponent as well (see here). All bits will be inverted, from 0 to 1 and vice versa.
There is also a note on this aspect in the function documentation.
In case of a floating-point input array, its machine-specific bit
representation (usually IEEE754-compliant) is used for the operation.
If you want zeros to become ones and vice versa, as you suggested, you could do:
cv::threshold(warpmask, warpmaskTemp,0.5,1.0,THRESH_BINARY_INV)
(see the documentation; and yes, you can use the same matrix for input and destination).
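As a self-contained sketch of that inversion (the matrix here is made up; if your OpenCV version's threshold rejects CV_64F input, convert the mask to CV_32F with convertTo first):
#include <opencv2/opencv.hpp>

int main()
{
    // A CV_64F mask containing only 0.0 and 1.0
    cv::Mat img = cv::Mat::zeros(4, 4, CV_64F);
    img.at<double>(1, 1) = 1.0;

    // THRESH_BINARY_INV: values <= 0.5 become 1.0, values > 0.5 become 0.0,
    // i.e. the 0/1 mask is inverted without touching bit patterns
    cv::Mat imgtemp;
    cv::threshold(img, imgtemp, 0.5, 1.0, cv::THRESH_BINARY_INV);
    return 0;
}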
I think you are either getting the method signature wrong or wrongly named the parameters for the bitwise_not method.
According to the OpenCV 2.4.6 documentation on the bitwise_not() method (http://docs.opencv.org/modules/core/doc/operations_on_arrays.html#void bitwise_not(InputArray src, OutputArray dst, InputArray mask)):
void bitwise_not(InputArray src, OutputArray dst, InputArray mask=noArray())
If you are going to use a mask, it needs to be the last argument, since the mask is optional for the bitwise_not method.
Additionally, all the data types need to be the same in order to avoid confusion. What I am trying to say is that your source and destination formats, and any interim ones such as the method parameters, must all be in the same format. You cannot have one in CV_64F and the others in something different. If I am not losing my marbles here, bitwise operations would require you to have all the data in unsigned or signed integer format for the sake of simplicity. Nevertheless, you should keep all the types the same.
About the garbage that you got: it is a general and good programming practice to initialise your variables with reasonable values. This helps when you are debugging step by step and need to ascertain where things failed.
Give it a try.
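A small sketch of the call with and without the optional mask, on 8-bit data where inverting the raw bits is meaningful (the values are arbitrary):
#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat src(2, 2, CV_8UC1, cv::Scalar(0)), dst;
    src.at<uchar>(0, 0) = 255;

    // No mask: every element is inverted (0 -> 255, 255 -> 0)
    cv::bitwise_not(src, dst);

    // With a mask: the mask is the last, optional argument, and elements
    // where the mask is zero are left untouched in dst
    cv::Mat mask = cv::Mat::ones(2, 2, CV_8UC1);
    mask.at<uchar>(1, 1) = 0;
    cv::bitwise_not(src, dst, mask);
    return 0;
}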
To follow on from Antonio's answer, you should use the right tool for the job. double is not an appropriate storage medium for boolean data.
In OpenCV you can represent a boolean as an unsigned char (8 bits). Although you can pick any non-zero value as your "true", in OpenCV it is more natural to use 0/255, which fits with OpenCV's bitwise operations and comparison operators. For example, a bitwise NOT can be achieved with result = (input == 0), which works for any input type. threshold in Antonio's answer keeps the same type (useful in some circumstances). For bitwise_not you should have the data in this boolean format first.
Unfortunately OpenCV makes it quite awkward to work with black-and-white bitwise data.
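For example, a sketch of both approaches on a 0/1 double mask (the matrix is made up):
#include <opencv2/opencv.hpp>

int main()
{
    // CV_64F mask holding only 0.0 and 1.0
    cv::Mat img = cv::Mat::zeros(3, 3, CV_64F);
    img.at<double>(0, 0) = 1.0;

    // Comparison against zero yields a CV_8U matrix of 0/255 values,
    // which is already the inverted mask (0.0 -> 255, 1.0 -> 0)
    cv::Mat inverted = (img == 0);

    // Alternatively, move to the 0/255 convention first and then use
    // OpenCV's bitwise operations on the 8-bit data
    cv::Mat mask8u, mask8uInv;
    img.convertTo(mask8u, CV_8U, 255.0);   // 0.0 -> 0, 1.0 -> 255
    cv::bitwise_not(mask8u, mask8uInv);
    return 0;
}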
I have code for computing the histogram of HSV and YUV images. As I am trying to obtain values corresponding to brightness alone, I want the 'V' channel from the HSV image and the luma ('Y') channel from the YUV image. This is the code I have used:
int channels[] = {0};
calcHist(&src_yuv,1,channels,Mat(),hist,1,histSize,ranges,true,false);
This sample code is for YUV. I just change {0} to {2} to obtain the 'V' channel values from HSV. I am getting results, but I am not sure whether I am choosing the right channels. Could you please help me confirm that those numbers select the exact channels I want? Thanks in advance.
To be absolutely sure that channel number X corresponds to the channel you are after, consult the channelSeq attribute of the IplImage structure. If channelSeq[X] gives the name (a character) of the channel you are after, then you have found it.
But, given how this attribute is documented (along with other interesting ones), even if you were always using IplImage, there is no guarantee that the information contained there would be accurate. Thus, to be absolutely sure about the channel sequence in your image, you have to trust the conversion specification and remember it yourself. So, if you start with an image in BGR and convert using BGR2YUV, then you trust that the Y channel is the first one, and so on. If OpenCV ever changes BGR2YUV to mean that Y goes to the last channel, then too bad for you.
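Assuming you start from a BGR image and use the standard BGR2YUV and BGR2HSV conversions, a sketch of picking the two brightness channels would be (the file name and histogram parameters are placeholders):
#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat src_bgr = cv::imread("input.png");            // hypothetical input, BGR order
    cv::Mat src_yuv, src_hsv;
    cv::cvtColor(src_bgr, src_yuv, cv::COLOR_BGR2YUV);    // Y (luma) is channel 0
    cv::cvtColor(src_bgr, src_hsv, cv::COLOR_BGR2HSV);    // V (value) is channel 2

    int histSize[] = {256};
    float range[] = {0, 256};
    const float* ranges[] = {range};

    cv::Mat histY, histV;
    int chY[] = {0};                                       // luma from the YUV image
    cv::calcHist(&src_yuv, 1, chY, cv::Mat(), histY, 1, histSize, ranges, true, false);

    int chV[] = {2};                                       // value from the HSV image
    cv::calcHist(&src_hsv, 1, chV, cv::Mat(), histV, 1, histSize, ranges, true, false);
    return 0;
}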
I am using OpenCV 2.4.2 and I am trying to take projections of two matrices (tmpl (32x44) and subj (32x44)) along rows and columns. I have initialised a result matrix as rowProjectionSubj(subj.rows, 1, CV_8UC1), and then I call cv::reduce(subj, rowProjectionSubj, 1, CV_REDUCE_SUM, -1);
Why is this complaining about a type mismatch? I have kept the types the same (by keeping dtype=-1 in cv::reduce). I get the tmpl and subj objects by doing cv::imread("image_path", 0), i.e. reading grayscale images in.
I might not be right, but after I saw this:
http://answers.opencv.org/question/3698/cvreduce-gives-unsupported-format-exception/?answer=3701#post-id-3701
and with a little experimenting and an old friend called "register math", I realised that when you add two 8-bit numbers you need a wider register to store the sum because of the potential carry, and summing a whole row needs even more. So the result of reduce should have more room than the source: if the source is 8-bit unsigned, the destination should be at least 16-bit (unsigned or signed), and might as well be 32-bit if it is going to be used for further calculations.
NOTE: The destination type must be stated EXPLICITLY in the cv::reduce call. Please follow the OpenCV link above for further information.
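A minimal sketch with the destination depth stated explicitly (the matrix contents are arbitrary; in OpenCV 3+ the constant is spelled cv::REDUCE_SUM):
#include <opencv2/opencv.hpp>

int main()
{
    // Stand-in for the 32x44 CV_8UC1 image read with cv::imread(..., 0)
    cv::Mat subj(32, 44, CV_8UC1, cv::Scalar(200));

    // Summing 44 8-bit values per row easily exceeds 255, so ask for a
    // wider destination type instead of passing dtype = -1
    cv::Mat rowProjectionSubj;
    cv::reduce(subj, rowProjectionSubj, 1, CV_REDUCE_SUM, CV_32S);
    return 0;
}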
I'm working with Direct3d 11, and I've come across something strange. I have taken a normal map and encoded it to a DDS file twice. Once with R8G8B8A8_SNORM encoding, and once with BC5_SNORM.
Next I load each texture using D3DX11CreateShaderResourceViewFromFile in conjunction with D3DX11GetImageInfoFromFile. When I sample these textures in my pixel shader I find that the R8G8B8A8_SNORM texture is returning values in the range [-1,1], which is what I would expect for a SNORM texture. However, the BC5_SNORM texture is returning values in the range [0,1], which doesn't make any sense to me.
I double and triple checked with my debugger and PIX. The format of the texture is correct (BC5_SNORM), so I am at a loss as to why it is not returning signed values.
I managed to reproduce the same issue, and I also got the same behaviour (a R8G8B8A8_SNORM texture with -1 to +1 values producing only 0 to 1 values after conversion to BC5_SNORM) when doing the conversion through D3DX11LoadTextureFromTexture. There does appear to be a fault in D3DX11, at least regarding BC5_SNORM, in that, regardless of the input format, the BC5_SNORM output is always in the 0 to 1 range.
As suggested by @chuckwalbourn, I can confirm that the DirectXTex utilities, which supersede the now-deprecated D3DX11, respect and correctly handle signed values for BC5_SNORM outputs.
You can either have your program write out a temporary .dds (using D3DX11SaveTextureToFile with a R8G8B8A8_SNORM texture) and then invoke the standalone DirectXTex 'texconv.exe' utility to convert to BC5_SNORM, or wrangle the DirectXTex library into your program and use the 'Convert(...)' function appropriately.
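As a sketch only (file names are placeholders, and note that in the DirectXTex versions I am aware of, block-compressed targets such as BC5_SNORM are produced with Compress(...) rather than Convert(...), so check the calls against the DirectXTex headers):
#include <DirectXTex.h>

HRESULT CompressToBC5(const wchar_t* inFile, const wchar_t* outFile)
{
    using namespace DirectX;

    // Load the temporary R8G8B8A8_SNORM .dds written by the application
    TexMetadata meta;
    ScratchImage src;
    HRESULT hr = LoadFromDDSFile(inFile, DDS_FLAGS_NONE, &meta, src);
    if (FAILED(hr)) return hr;

    // Block-compress to BC5_SNORM; the signed range is preserved here
    ScratchImage bc5;
    hr = Compress(src.GetImages(), src.GetImageCount(), meta,
                  DXGI_FORMAT_BC5_SNORM, TEX_COMPRESS_DEFAULT,
                  TEX_THRESHOLD_DEFAULT, bc5);
    if (FAILED(hr)) return hr;

    return SaveToDDSFile(bc5.GetImages(), bc5.GetImageCount(), bc5.GetMetadata(),
                         DDS_FLAGS_NONE, outFile);
}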