How to randomize the DC coefficient? - image-processing

I want to scramble an image using the DCT in MATLAB, working on a grayscale image. I want to randomize the DC coefficient of each block: how do I set the DC coefficient of each block to a random value in 0-255 and leave all the other coefficients unchanged? These are the steps of the experiment:
Divide a grayscale image into 8x8 blocks;
Perform the DCT on each block;
Set the DC coefficient of each block to a random value in 0-255 and leave all the others unchanged;
Perform the inverse DCT and restore the image;
Compare the restored image with the original one by SSIM.
Thank you.

The question is "how to set the DC coefficient of each block to a (given) value...". So the procedure you mention (DCT, set DC coefficient, then iDCT) should work. You would use Matlab's dct2 and idct2 functions.
However, from the DCT definition, the DC coefficient is proportional to the sum of the pixel values in each of your blocks (with an orthonormal 8x8 DCT, as dct2 computes, it equals the sum divided by 8); setting it to a random value and taking the inverse transform will produce a block that differs from the original one only by a constant. That's no surprise, because you are just changing the DC level. So you could skip the DCT/iDCT and directly add or subtract a random value to all pixels in each block.
But you can see that each block would look like the original one, except for a different luminosity; also, the boundaries between blocks would be quite visible, so the scrambling method could be easily reversed.
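If it helps, here is a minimal MATLAB sketch of the experiment (it assumes the Image Processing Toolbox for dct2, idct2, blockproc and ssim; 'cameraman.tif' and the local function name scrambleBlock are just placeholders):

% Randomize the DC coefficient of every 8x8 DCT block, reconstruct, and
% compare the result with the original by SSIM.
I = double(imread('cameraman.tif'));      % grayscale image, values 0-255
J = blockproc(I, [8 8], @scrambleBlock);  % process each 8x8 block
J = uint8(J);                             % uint8 saturates to the 0-255 range
fprintf('SSIM between restored and original: %.4f\n', ssim(J, uint8(I)));

function out = scrambleBlock(blk)
    C = dct2(blk.data);        % forward DCT of the 8x8 block
    C(1,1) = randi([0 255]);   % random DC value in 0-255, all AC terms untouched
    out = idct2(C);            % inverse DCT restores the block
end

As noted above, each restored block is just the original block shifted by a constant, so the visible effect is a per-block change in brightness.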

Related

Is there a known algorithm to find groups of adjacent pixels with similar color?

I'd like to know if this is a known algorithm with a name.
I've never done any image processing, but I'm picturing an image as a 2-d matrix of 3-d vectors (ignore transparency).
The only input parameter is a distance. Every pixel is tested against its neighbors. If they are closer than the parameter, they join a group and their values are averaged. As groups grow by gaining new pixels, all pixels get the average value of the group.
For your typical selfie the result might resemble quantizing or posterizing, but unlike quantizing or posterizing, there is no fixed count of output colors. If absolutely no pixels are close enough to their neighbors, the result is a 1:1 mapping of every pixel to its own group.
Is there a name for this?
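For reference, a rough MATLAB sketch of the procedure described above (the function name groupSimilarPixels and the threshold dist are just placeholder names): it grows a group from every unvisited pixel, absorbs 4-connected neighbours whose colour lies within dist of the group's running average, and finally paints the whole group with that average.

function out = groupSimilarPixels(img, dist)
    % img: H x W x 3 colour image; dist: colour-distance threshold
    img = im2double(img);
    [H, W, ~] = size(img);
    visited = false(H, W);
    out = img;
    for seed = 1:H*W
        if visited(seed), continue, end
        visited(seed) = true;
        members = seed;                       % linear indices of this group
        queue = seed;
        [r, c] = ind2sub([H W], seed);
        avg = reshape(img(r, c, :), 1, 3);    % running mean colour of the group
        while ~isempty(queue)
            [qr, qc] = ind2sub([H W], queue(1));
            queue(1) = [];
            for d = [0 1; 0 -1; 1 0; -1 0]'   % 4-connected neighbours
                nr = qr + d(1);  nc = qc + d(2);
                if nr < 1 || nr > H || nc < 1 || nc > W || visited(nr, nc)
                    continue
                end
                colour = reshape(img(nr, nc, :), 1, 3);
                if norm(colour - avg) < dist  % "closer than the parameter"
                    visited(nr, nc) = true;
                    idx = sub2ind([H W], nr, nc);
                    members(end+1) = idx;     %#ok<AGROW>
                    queue(end+1) = idx;       %#ok<AGROW>
                    avg = avg + (colour - avg) / numel(members);
                end
            end
        end
        for ch = 1:3                          % paint the group with its mean colour
            plane = out(:, :, ch);
            plane(members) = avg(ch);
            out(:, :, ch) = plane;
        end
    end
end

In spirit this is very close to classic region growing with a colour-similarity merging criterion, which is probably the closest named technique.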

How to use the discrete cosine transform (DCT) in OpenCV

The DCT doesn't do the conversion properly in OpenCV.
import cv2
import numpy as np
imf = np.float32(block)  # block: the 8x8 array listed below
dct = cv2.dct(imf)
[[154,123,123,123,123,123,123,136],
[192,180,136,154,154,154,136,110],
[254,198,154,154,180,154,123,123],
[239,180,136,180,180,166,123,123],
[180,154,136,167,166,149,136,136],
[128,136,123,136,154,180,198,154],
[123,105,110,149,136,136,180,166],
[110,136,123,123,123,136,154,136]]
This is a block from an image. When converting it with the code shown above, this should be the result (for the sample block, see https://www.math.cuhk.edu.hk/~lmlui/dct.pdf):
[162.3 ,40.6, 20.0...
[30.5 ,108.4...
but this is the result I actually get:
[1186.3 , 40.6, 20.0...
[30.5, 108.4 ....
The DCT is working fine. The difference between what you got and what you expect is that the particular example you cite actually performs the DCT on M rather than on the original image, I, where, as the paper shows, M = I - 128. The only difference in your case is that you don't subtract off that piece, so the values are all larger. In a cosine or Fourier transform, the first coefficient (the "DC offset", as it is sometimes called) has a higher value simply because your image values are greater. That is also why all the other coefficients are the same: if you add to or subtract from an entire image equally, the transform coefficients stay the same, except for the very first one.
From the standard definition of the DCT:
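X_k = \sum_{n=0}^{N-1} x_n \cos\left[\frac{\pi}{N}\left(n + \frac{1}{2}\right)k\right], \qquad k = 0, 1, \ldots, N-1

(the 1-D DCT-II of a length-N signal x_0, ..., x_{N-1})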
You can see that for the first coefficient, k = 0, the argument of the cosine is 0, and cos(0) = 1. Thus, X_0 as written above is just the sum of all the x_n values. Generally this value may be scaled by something related to N so that it becomes something like an average. When that is done, it ties back to the X_0 term being a "DC offset", which you'll see described as the "mean value of the signal", or in other words how far the signal is from 0. This is very useful to have as one of the cosine/Fourier transform coefficients, because the full set of coefficients can then completely describe a signal; all the other coefficients describe the frequency content and so say nothing about how far the values are from 0, but the first coefficient, the DC offset, does tell you that shift!
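A quick numerical check of this (sketched with MATLAB's dct2; cv2.dct uses the same orthonormal DCT-II scaling, which is why the numbers below line up with the question):

B  = double(magic(8));            % any 8x8 block of pixel values
D1 = dct2(B);                     % DCT of the block as-is
D2 = dct2(B - 128);               % DCT after the level shift M = I - 128
max(abs(D1(2:end) - D2(2:end)))   % ~0: every other coefficient is unchanged
D1(1,1) - D2(1,1)                 % = 1024 (= 128*64/8): only the DC term moves

That 1024 is exactly the gap between the 1186.3 you obtained and the 162.3 given in the paper, while all the remaining coefficients agree.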

How to store the DCT values in an image?

My initial task is to take a grayscale image in Java, perform a DCT over it (using a block size of 8x8) and write the DCT image. Then take the DCT image and perform the inverse DCT over it to get back the original image.
The problem I am facing is that when each block undergoes the DCT, the operation results in many negative values and values outside the grayscale range (e.g., 260).
How can I write a grayscale image with negative and out-of-range values? Is there any other operation that needs to be performed before writing the image so that all the values fall within the 0-255 range?

Rotating 1d FFT to get 2D FFT?

I have a blurry image with a sharp edge and I want to use the profile of that sharp edge to estimate the point spread function (PSF) of the imaging system (assuming that it is symmetric). The profile of the edge gives me the "edge spread function" (ESF) and the derivative of that gives me the "line spread function" (LSF). I am trying to follow these directions that I found in an old paper on how to convert from the LSF to the PSF:
"If we form the one-dimensional Fourier transform of the LSF and rotate the resulting curve about its vertical axis, the surface thus generated proves to be the two-dimensional fourier transform of the PSF. Hence it is merely necessary to take a two-dimensional inverse Fourier transform to obtain the PSF"
I can't seem to get this to work. The 2D FFT of a PSF-like function (for example a 2D Gaussian) has lots of alternating positive and negative values, but if I rotate a 1D FFT, I get concentric rings of positive or negative values, and the inverse transform looks nothing like a point spread function. Am I missing a step or misunderstanding something? Any help would be appreciated! Thanks!
Edit: Here is some code showing my attempt to follow the procedure described
;generate x array
x=findgen(1000)/999*50-25
;generate gaussian test function in 1D
;P[0] = peak value
;P[1] = centroid
;P[2] = sigma
;P[3] = base level
P=[1.0,0.0,4.0,0.0]
test1d=gaussian_1d(x,P)
;Take the FFT of the test function
fft1d=fft(test1d)
;create an array with the frequency values for the FFT array, following the conventions used by IDL
;This piece of code to find freq is straight from IDL documentation: http://www.exelisvis.com/docs/FFT.html
N=n_elements(fft1d)
T=x[1]-x[0] ;T = sampling interval
fftx=(findgen((N-1)/2)+1)
is_N_even=(N MOD 2) EQ 0
if (is_N_even) then $
freq=[0.0,fftx,N/2,-N/2+fftx]/(N*T) $
else $
freq=[0.0,fftx,-(N/2+1)+fftx]/(N*T)
;Create a 1000x1000 array where each element holds the distance from the center
dim=1000
center=[(dim-1)/2.0,(dim-1)/2.0]
xarray=cmreplicate(findgen(dim),dim)
yarray=transpose(cmreplicate(findgen(dim),dim))
rarray=sqrt((xarray-center[0])^2+(yarray-center[1])^2)
rarray=rarray/max(rarray)*max(freq) ;scale rarray so max value is equal to highest freq in 1D FFT
;rotate the 1d FFT about zero to get a 2d array by interpolating the 1D function to the frequency values in the 2d array
fft2d=rarray*0.0
fft2d(findgen(n_elements(rarray)))=interpol(fft1d,freq,rarray(findgen(n_elements(rarray))))
;Take the inverse fourier transform of the 2d array
psf=fft(fft2d,/inverse)
;shift the PSF to be centered in the image
psf=shift(psf,500,500)
window,0,xsize=1000,ysize=1000
tvscl,abs(psf) ;visualize the absolute value of the result from the inverse 2d FFT
I don't know IDL, but I think your problem here is that you're taking the FFT of signals that are centered, where by default the function expects 0-frequency components at the beginning of the array.
A quick search for the proper way to do this in IDL indicates the CENTER keyword is what you're looking for.
CENTER
Set this keyword to shift the zero-frequency component to the center of the spectrum. In the forward direction, the resulting Fourier transform has the zero-frequency component shifted to the center of the array. In the reverse direction, the input is assumed to be a centered Fourier transform, and the coefficients are shifted back before performing the inverse transform.
Without letting the FFT routine know where the center of your signal is, the signal will appear shifted by N/2. In the transform domain this corresponds to a strong phase shift that makes the values appear to alternate between positive and negative.
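The alternating-sign effect itself is easy to reproduce; for example, a small MATLAB illustration (ifftshift removes the half-array shift before transforming, i.e. it tells the FFT where the centre of the signal is):

N = 100;
x = exp(-((0:N-1) - N/2).^2 / 50);  % Gaussian centred in the middle of the array
bad  = real(fft(x));                % alternates between positive and negative values
good = real(fft(ifftshift(x)));     % smooth, Gaussian-shaped spectrum
plot(1:N, bad, 1:N, good)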
Ok, looks like I have solved the problem. The main issue seems to be that I needed to use the absolute value of the FFT results, rather than the complex array that is returned by default. Using the /CENTER keyword also helped make the indexing of the FFT result much simpler than IDL's default. Here is the working version of the code:
;generate x array
x=findgen(1000)/999*50-25
;generate lorentzian test function in 1D
;P[0] = peak value
;P[1] = centroid
;P[2] = fwhm
;P[3] = base level
P=[1.0,0.0,2,0.0]
test1d=lorentzian_1d(x,P)
;Take the FFT of the test function
fft1d=abs(fft(test1d,/center))
;Create an array of frequencies corresponding to the FFT result
N=n_elements(fft1d)
T=x[1]-x[0] ;T = sampling interval
freq=findgen(N)/(N*T)-N/(2*N*T)
;Create an array where each element holds the distance from the center
dim=1000
center=[(dim-1)/2.0,(dim-1)/2.0]
xarray=cmreplicate(findgen(dim),dim)
yarray=transpose(cmreplicate(findgen(dim),dim))
rarray=sqrt((xarray-center[0])^2+(yarray-center[1])^2)
rarray=rarray/max(rarray)*max(freq) ;scale rarray so max value is equal to highest freq in 1D FFT
;rotate the 1d FFT about zero to get a 2d array by interpolating the 1D function to the frequency values in the 2d array
fft2d=rarray*0.0
fft2d(findgen(n_elements(rarray)))=interpol(fft1d,freq,rarray(findgen(n_elements(rarray))))
;Take the inverse fourier transform of the 2d array
psf=abs(fft(fft2d,/inverse,/center))
;shift the PSF to be centered in the image
psf=shift(psf,dim/2.0,dim/2.0)
psf=psf/max(psf)
window,0,xsize=1000,ysize=1000
tvscl,real_part(psf) ;visualize the resulting PSF
;Test the performance by integrating the PSF in one dimension to recover the LSF
psftotal=total(psf,1)
plot,x*sqrt(2),psftotal/max(psftotal),thick=2,linestyle=2
oplot,x,test1d

Matlab Camera Calibration - Correct lens distortion

In the Computer Vision System Toolbox for Matlab there are three interpolation methods available for correcting lens distortion.
Interpolation method for the function to use on the input image. The interp input interpolation method can be the string, 'nearest', 'linear', or 'cubic'.
My question is: what is the difference between 'nearest', 'linear', and 'cubic'? And which one is implemented in the "Zhang" and "Heikkila, J, and O. Silven" methods?
I can't access the page at the link you wrote in your question (it asks for a username and password), so I assume your linked page has the same contents as http://www.mathworks.it/it/help/vision/ref/undistortimage.html, which I quote here:
J = undistortImage(I,cameraParameters,interp) removes lens distortion from the input image, I and specifies the
interpolation method for the function to use on the input image.
Input Arguments
I — Input image
cameraParameters — Object for storing camera parameters
interp — Interpolation method
'linear' (default) | 'nearest' | 'cubic'
Interpolation method for the function to use on
the input image. The interp input interpolation method can be the
string, 'nearest', 'linear', or 'cubic'.
Furthermore, I assume you are referring to these papers:
ZHANG, Zhengyou. A flexible new technique for camera calibration. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 2000, 22.11: 1330-1334.
HEIKKILA, Janne; SILVEN, Olli. A four-step camera calibration procedure with implicit image correction. In: Computer Vision and Pattern Recognition, 1997. Proceedings., 1997 IEEE Computer Society Conference on. IEEE, 1997. p. 1106-1112.
I have searched for the word "interpolation" in the two PDF documents (Zhang; Heikkila and Silven) and I did not find any direct statement about the interpolation method they used.
To my knowledge, in general, a camera calibration method is concerned with how to estimate the intrinsic, extrinsic and lens distortion parameters (all these parameters are inside the input argument cameraParameters of Matlab's undistortImage function); the interpolation method is part of a different problem, i.e. the problem of "Geometric Image Transformations".
I quote from OpenCV's page Geometric Image Transformations (I have slightly modified the original, omitting some details and adding some definitions; I assume you are working with a grey level image):
The functions in this section perform various geometrical
transformations of 2D images. They do not change the image content but
deform the pixel grid and map this deformed grid to the destination
image. In fact, to avoid sampling artifacts, the mapping is done in
the reverse order, from destination to the source. That is, for each
pixel (x, y) of the destination image, the functions compute
coordinates of the corresponding “donor” pixel in the source image and
copy the pixel value:
dst(x,y) = src(f_x(x,y), f_y(x,y))
where
dst(x,y) is the grey value of the pixel located at row x and column y in the destination image
src(x,y) is the grey value of the pixel located at row x and column y in the source image
f_x is a function that maps the row x and the column y to a new row, it just uses coordinates and not the grey level.
f_y is a function that maps the row x and the column y to a new column, it just uses coordinates and not the grey level.
The actual implementations of the geometrical transformations, from
the most generic remap() and to the simplest and the fastest resize()
, need to solve two main problems with the above formula:
• Extrapolation of non-existing pixels. Similarly to the filtering
functions described in the previous section, for some (x,y) , either
one of f_x(x,y) , or f_y(x,y) , or both of them may fall outside of
the image. In this case, an extrapolation method needs to be used.
OpenCV provides the same selection of extrapolation methods as in the
filtering functions. In addition, it provides the method
BORDER_TRANSPARENT . This means that the corresponding pixels in the
destination image will not be modified at all.
• Interpolation of pixel
values. Usually f_x(x,y) and f_y(x,y) are floating-point numbers. This
means that <f_x, f_y> can be either an affine or
perspective transformation, or radial lens distortion correction, and
so on. So, a pixel value at fractional coordinates needs to be
retrieved. In the simplest case, the coordinates can be just rounded
to the nearest integer coordinates and the corresponding pixel can be
used. This is called a nearest-neighbor interpolation. However, a
better result can be achieved by using more sophisticated
interpolation methods, where a polynomial function is fit into some
neighborhood of the computed pixel (f_x(x,y), f_y(x,y)), and then the
value of the polynomial at (f_x(x,y), f_y(x,y)) is taken as the
interpolated pixel value. In OpenCV, you can choose between several
interpolation methods. See resize() for details.
For a "soft" introduction see also for example Cambridge in colour - DIGITAL IMAGE INTERPOLATION.
So let's say you need the grey level of the pixel at x=20.2, y=14.7; since x and y are numbers with a fractional part different from zero, you will need to "invent" (compute) the grey level in some way. In the simplest case ('nearest' interpolation) you just say that the grey level at (20.2,14.7) is the grey level you retrieve at (20,15); it is called "nearest" because 20 is the nearest integer value to 20.2 and 15 is the nearest integer value to 14.7.
In the (bi)'linear' interpolation you will compute the value at (20.2,14.7) with a combination of the grey levels of the four pixels at (20,14), (20,15), (21,14), (21,15); for the details on how to compute the combination see the Wikipedia page which has a numeric example.
The (bi)'cubic' interpolation considers the combination of sixteen pixels in order to compute the value at (20.2,14.7), see the Wikipedia page.
I suggest you try all three methods, with the same input image, and see the differences in the output image.
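As a concrete illustration, here is a small MATLAB sketch using interp2 ('cameraman.tif' is just a placeholder grey level image; the query point is the one from the example above, row 20.2 and column 14.7):

I = double(imread('cameraman.tif'));         % any grey level image
col = 14.7;  row = 20.2;                     % fractional position to sample
gNearest = interp2(I, col, row, 'nearest');  % value of the nearest pixel, I(20,15)
gLinear  = interp2(I, col, row, 'linear');   % weighted mix of the 4 surrounding pixels
gCubic   = interp2(I, col, row, 'cubic');    % bicubic, uses a 4x4 neighbourhood
[gNearest, gLinear, gCubic]                  % compare the three interpolated values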
Interpolation method is actually independent of the camera calibration. Any time you apply a geometric transformation to an image, such as rotation, re-sizing, or distortion compensation, the pixels in the new image will correspond to points between the pixels of the old image. So you have to interpolate their values somehow.
'nearest' means you simply use the value of the nearest pixel.
'linear' means you use bi-linear interpolation. The new pixel's value is a weighted sum of the values of the neighboring pixels in the input image, with closer pixels given larger weights.
'cubic' means you use a bi-cubic interpolation, which is more complicated than bi-linear, but may give you a smoother image.
A good description of these interpolation methods is given in the documentation for the interp2 function.
And finally, just to clarify, the undistortImage function is in the Computer Vision System Toolbox.
