My input is a single-channel image (default depth IPL_DEPTH_8U).
As part of my algorithm, I multiply each pixel of the input image by a scalar floating-point number such as 2.8085.
This requires me to increase the depth and change the image type to IPL_DEPTH_64F.
But whenever I try to change the image depth to IPL_DEPTH_64F and use a double* to access each pixel, my program execution stops abruptly, complaining:
"file.exe has stopped working. A problem caused the program to stop working."
Does this mean my processor cannot handle the double pointer arithmetic?
You have to create a new image. I'd recommend making one of depth IPL_DEPTH_64F and setting each pixel to the appropriate value (2.8085 * value).
Also, can you post the code you used?
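For what it's worth, here is a minimal sketch of that approach using the legacy C API, assuming "src" is your single-channel IPL_DEPTH_8U input. cvConvertScale does the depth change and the multiplication in one step; the second function shows equivalent manual per-pixel access, and note that widthStep is in bytes, which is a common cause of this kind of crash when stepping through the image with a raw double*:

#include <opencv2/core/core_c.h>

/* Convert a single-channel 8U image to 64F while scaling by 2.8085. */
IplImage *scaleTo64F(const IplImage *src)
{
    IplImage *dst = cvCreateImage(cvGetSize(src), IPL_DEPTH_64F, 1);
    cvConvertScale(src, dst, 2.8085, 0);   /* depth change + multiply in one call */
    return dst;
}

/* Equivalent manual per-pixel access. The row pointer must be rebuilt from
   imageData + y * widthStep (widthStep is in bytes), not advanced as a
   double* across rows. */
void scaleTo64FManual(const IplImage *src, IplImage *dst)
{
    for (int y = 0; y < dst->height; ++y) {
        double *drow = (double *)(dst->imageData + y * dst->widthStep);
        const uchar *srow = (const uchar *)(src->imageData + y * src->widthStep);
        for (int x = 0; x < dst->width; ++x)
            drow[x] = 2.8085 * srow[x];
    }
}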
How to use the free scaling parameter (alpha) when dealing with getOptimalNewCameraMatrix and stereoRectify: should one use the same value?
A few things that led me to this question are worth listing:
In getOptimalNewCameraMatrix, OpenCV doc says "alpha Free scaling parameter between 0 (when all the pixels in the undistorted image are valid) and 1 (when all the source image pixels are retained in the undistorted image)" [sounds to me like 1 = retain source pixels = minimize loss]
In stereoRectify, OpenCV doc says "alpha Free scaling parameter.... alpha=0 means that ... (no black areas after rectification). alpha=1 means that ... (no source image pixels are lost)."
So in the end, alpha seems to be a parameter that "acts" the same way in both functions? (1 = no source pixels lost; that's how it sounds, but I'm not sure.)
As far as I understand it, after calibrateCamera one may want to call getOptimalNewCameraMatrix (computing a new camera matrix as output) and then stereoRectify (using the newly computed matrices as inputs): does one want to use the same alpha?
Are these 2 alphas the same? Or does one want to use 2 different alphas?
The alphas are the same.
The choice of value depends entirely on the application. Ask yourself:
Does the application need to see all the input pixels to do its job (because, for example, it must use all the "extra" FOV near the image edges, or because you know that the scene's subject that's of interest to the application may be near the edges and you can't lose even a pixel of it)?
Yes: choose alpha=1
No: choose a value of alpha that keeps the "interesting" portion of the image predictably inside the undistorted image.
In the latter case (again, depending on the application) you may need to compute the boundary of the undistorted image within the input one. This is just a poly-curve that can be approximated by a polygon to any level of accuracy you need, down to the pixel. Or you can use a mask.
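As a rough illustration in code (the matrices below are placeholders for your calibrateCamera / stereoCalibrate outputs; only the shared alpha matters), the same value is simply passed to both functions:

#include <opencv2/calib3d.hpp>

// K, distCoeffs, K1, dist1, K2, dist2, R, T are assumed to come from the
// calibration step; the point is that one alpha is used in both calls.
void rectifyWithAlpha(const cv::Mat &K, const cv::Mat &distCoeffs,
                      const cv::Mat &K1, const cv::Mat &dist1,
                      const cv::Mat &K2, const cv::Mat &dist2,
                      const cv::Mat &R, const cv::Mat &T, cv::Size imageSize)
{
    double alpha = 1.0;  // 1 = keep every source pixel, 0 = crop to valid pixels only

    // Monocular undistortion: new camera matrix computed with the chosen alpha.
    cv::Mat newK = cv::getOptimalNewCameraMatrix(K, distCoeffs, imageSize, alpha);

    // Stereo rectification: the very same free scaling parameter goes to stereoRectify.
    cv::Mat R1, R2, P1, P2, Q;
    cv::stereoRectify(K1, dist1, K2, dist2, imageSize, R, T,
                      R1, R2, P1, P2, Q,
                      cv::CALIB_ZERO_DISPARITY, alpha, imageSize);
}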
I'm learning the Vulkan API, and I came across a little "problem":
Currently my program is able to draw, using the Projection-View-Model matrix transformation, a cube at the axis origin:
I'm using 3 images/imageViews/framebuffers, so for each transformation matrix I have a vector of size 3 that holds them, and everything works perfectly (no errors from the validation layers, etc.). The problem is:
I now want to draw another object near my cube, so I thought I just had to update the model matrix twice every frame: once to position the cube, and once for the other object. But this cannot work, because the cube isn't drawn when the command buffer is recorded but when it is submitted, so in the end the command buffer would simply use the second update of the model matrix for both the cube and the other object.
How to handle this situation?
Thanks.
Make the uniform buffer bigger, put the second matrix after the first, and point the second draw to the correct offset in the uniform buffer.
You can use either separate descriptors or dynamic offsets.
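With the dynamic-offset route, recording the two draws could look roughly like the sketch below. It assumes the descriptor was created as VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER_DYNAMIC, that both model matrices were written into the same uniform buffer for this frame's image, and that alignedSize is the size of one matrix rounded up to minUniformBufferOffsetAlignment:

#include <vulkan/vulkan.h>

// Record two draws that read different offsets of the same uniform buffer.
void recordTwoObjects(VkCommandBuffer cmd, VkPipelineLayout layout,
                      VkDescriptorSet set, uint32_t alignedSize,
                      uint32_t cubeIndexCount, uint32_t otherIndexCount)
{
    uint32_t offsets[2] = { 0, alignedSize };  // matrix 0: cube, matrix 1: second object

    // First draw reads the cube's model matrix at offset 0.
    vkCmdBindDescriptorSets(cmd, VK_PIPELINE_BIND_POINT_GRAPHICS, layout,
                            0, 1, &set, 1, &offsets[0]);
    vkCmdDrawIndexed(cmd, cubeIndexCount, 1, 0, 0, 0);

    // Second draw reads the other object's matrix, stored right after the first.
    vkCmdBindDescriptorSets(cmd, VK_PIPELINE_BIND_POINT_GRAPHICS, layout,
                            0, 1, &set, 1, &offsets[1]);
    vkCmdDrawIndexed(cmd, otherIndexCount, 1, 0, 0, 0);
}

Because both matrices live in the buffer at once and only the offset changes between the two draws, the recorded command buffer no longer depends on which matrix update happened last.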
Consider the really simple difference kernel
kernel vec4 diffKernel(__sample image1, __sample image2)
{
    return vec4(image1.rgb - image2.rgb, 1.0);
}
When used as a CIColorKernel, this produces the difference between two images. However, any values for which image1.rgb < image2.rgb (pointwise) will be forced to zero due to the "clamping" nature of the outputs of kernels in CIKernel.
For many image processing algorithms, such as those involving image pyramids (see my other question on how this can be achieved in Core Image), it is important to preserve these negative values for later use (reconstructing the pyramid, for example). If 0s are used in their place, you will actually get an incorrect output.
I've seen that one way is to store abs(image1.rgb - image2.rgb), make a second image whose RGB values store 0 or 1 depending on whether a negative sign is attached to that value, and then do a multiply blend weighted by -1 in the correct places.
What are some other ways to store the sign of a pixel value? Perhaps we could use the alpha channel if it is otherwise unused?
I actually ended up figuring this out: you can pass an option when creating the CIContext so that it works in the kCIFormatAf pixel format. This means that any calculations done in that context are carried out in floating-point precision, so values outside [0, 1] are preserved from one filter to the next!
I'm using SharpDX and I want to do antialiasing in the depth buffer. I need to store the depth buffer as a texture to use it later. So, is it a good idea for this texture to be a Texture2DMS, or should I take another approach?
What I really want to achieve is:
1) Depth buffer scaling
2) Depth test supersampling
(terms I found in section 3.2 of this paper: http://gfx.cs.princeton.edu/pubs/Cole_2010_TFM/cole_tfm_preprint.pdf)
The paper calls for a depth pre-pass. Since this pass requires no color, you should leave the render target unbound, and use an "empty" pixel shader. For depth, you should create a Texture2D (not MS) at 2x or 4x (or some other 2Nx) the width and height of the final render target that you're going to use. This isn't really "supersampling" (since the pre-pass is an independent phase with no actual pixel output) but it's similar.
For the second phase, the paper calls for doing multiple samples of the high-resolution depth buffer from the pre-pass. If you followed the sizing above, every pixel will correspond to some (2N)^2 depth values. You'll need to read these values and average them. Fortunately, there's a hardware-accelerated way to do this (called PCF) using SampleCmp with a COMPARISON sampler type. This samples a 2x2 stamp, compares each value to a specified value (pass in the second-phase calculated depth here, and don't forget to add some epsilon value (e.g. 1e-5)), and returns the averaged result. Do 2x2 stamps to cover the entire area of the first-phase depth buffer associated with this pixel, and average the results. The final result represents how much of the current line's spine corresponds to the foremost depth of the pre-pass. Because of the PCF's smooth filtering behavior, as lines become visible, they will slowly fade in, as opposed to the aliased "dotted" line effect described in the paper.
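In case it helps, the "COMPARISON sampler" mentioned above is just a sampler state created with a comparison filter; in native Direct3D 11 terms (SharpDX mirrors these enums and structures) it would look something like this:

#include <d3d11.h>

// Comparison sampler for PCF-style SampleCmp reads of the depth texture;
// "device" is assumed to be your existing ID3D11Device.
ID3D11SamplerState *CreatePcfSampler(ID3D11Device *device)
{
    D3D11_SAMPLER_DESC desc = {};
    desc.Filter = D3D11_FILTER_COMPARISON_MIN_MAG_MIP_LINEAR;  // hardware 2x2 PCF average
    desc.AddressU = D3D11_TEXTURE_ADDRESS_CLAMP;
    desc.AddressV = D3D11_TEXTURE_ADDRESS_CLAMP;
    desc.AddressW = D3D11_TEXTURE_ADDRESS_CLAMP;
    desc.ComparisonFunc = D3D11_COMPARISON_LESS_EQUAL;         // "is this sample at least as close?"
    desc.MaxLOD = D3D11_FLOAT32_MAX;
    ID3D11SamplerState *sampler = nullptr;
    device->CreateSamplerState(&desc, &sampler);
    return sampler;
}

Each SampleCmp call against this sampler returns the bilinearly weighted fraction of the 2x2 footprint that passes the comparison, which is what produces the smooth fade-in described above.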
I want to scramble an image using the DCT in MATLAB. I am using a grayscale image, and I want to set the DC coefficient of each block to a random value. How do I set the DC coefficient of each block to a random value in 0-255 and leave all the others unchanged? These are the steps of the experiment:
Divide a gray image into 8x8 blocks;
Perform the DCT on each block;
Set the DC coefficient of each block to a random value in 0-255 and leave all others unchanged;
Perform the inverse DCT and restore the image;
Compare the restored image with the original one by SSIM.
Thank you.
The question is "how to set the DC coefficient of each block to a (given) value...". So the procedure you mention (DCT, set the DC coefficient, then iDCT) should work. You would use MATLAB's dct2 and idct2 functions.
However, from the DCT definition, the DC coefficient is proportional to the sum of the pixel values in each of your blocks; setting it to a random value and taking the inverse transform will produce a block that differs from the original one only by a constant. That's no surprise, because you are just changing the DC level. So you could skip the DCT/iDCT and directly add or subtract a random value to all pixels in each block.
But you can see that each block would look like the original one, except for a different luminosity; also, the boundaries between blocks would be quite visible, so the scrambling method could be easily reversed.
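If it helps to see those steps spelled out, below is a rough sketch of the same procedure written against OpenCV's cv::dct in C++ (a MATLAB version with dct2/idct2 would have the same structure); the file names and the fixed random seed are only placeholders:

#include <opencv2/opencv.hpp>
#include <random>

int main()
{
    // cv::dct needs a floating-point input, so convert the grayscale image first.
    cv::Mat gray = cv::imread("input.png", cv::IMREAD_GRAYSCALE);
    cv::Mat img;
    gray.convertTo(img, CV_32F);

    std::mt19937 rng(42);
    std::uniform_real_distribution<float> randomDc(0.0f, 255.0f);

    const int B = 8;  // 8x8 blocks
    for (int y = 0; y + B <= img.rows; y += B) {
        for (int x = 0; x + B <= img.cols; x += B) {
            cv::Mat block = img(cv::Rect(x, y, B, B)).clone();
            cv::Mat coeffs, restored;
            cv::dct(block, coeffs);                    // forward DCT of the block
            coeffs.at<float>(0, 0) = randomDc(rng);    // replace only the DC coefficient
            cv::dct(coeffs, restored, cv::DCT_INVERSE);
            restored.copyTo(img(cv::Rect(x, y, B, B)));
        }
    }

    img.convertTo(gray, CV_8U);  // values outside 0-255 are clamped here
    cv::imwrite("scrambled.png", gray);
    return 0;
}

Comparing scrambled.png against the original with SSIM is then a separate step.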