I am looking to set a certain object's alpha to fade in relation to its distance.
These values change over time.
The range for distance is 0 to 51.
The range for alpha is 0 to 255.
I start by using the map function:
alpha = map(d,0,51,0,255);
Now, if the value of d is, for example '16', the alpha value is '80'.
At the extremes, a distance of 0 gives an alpha of 0, and a distance of 51 gives an alpha of 255.
What I'm looking to achieve is to invert the relationship, whereby a distance value of 51 results in an output alpha of 0 instead.
I have tried using the standard y=k/x formula but something's messing with my head and I cannot get it to work alongside the mapping.
Can't you just subtract it from 255?
alpha = 255 - map(d,0,51,0,255);
Now if the original value was 255, the new value is 0. If the original value was 0, the new value is 255.
You could also subtract the distance from 51.
If this doesn't do exactly what you want, then I suggest making a chart of the old values vs the new values you want. Do you notice a pattern you can apply in the code?
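If it helps to see the arithmetic, here is a plain-Python sketch of the same logic; map_range is a hypothetical helper mirroring the Processing/Arduino map() function:

```python
def map_range(value, in_min, in_max, out_min, out_max):
    """Linear remap, equivalent to the Processing/Arduino map() function."""
    return (value - in_min) * (out_max - out_min) / (in_max - in_min) + out_min

d = 16
alpha = map_range(d, 0, 51, 0, 255)            # direct mapping: 16 -> 80.0
inverted = 255 - map_range(d, 0, 51, 0, 255)   # subtract from 255: 16 -> 175.0
# Equivalently, just swap the output bounds:
also_inverted = map_range(d, 0, 51, 255, 0)    # 16 -> 175.0
```

Either form gives alpha 0 at distance 51 and alpha 255 at distance 0.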
I have a 3-dimensional ndarray and cannot understand the Axis and Shape fields in the Variable Explorer of Spyder.
Below is my 3-dimensional array, and I would appreciate it if someone could explain the axes and shapes for me:
t = [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24]
t = np.reshape(t,(4,3,2))
For example, when I set the Axis to 0, a 3×2 frame shows up; when I set it to 1, a 4×2 frame shows up; and when I set the Axis to 2, a 4×3 frame shows up. I am having trouble visualizing these in three dimensions.
PS: I know it sounds completely unprofessional ...
Axis 0 lists values in the y-z plane (a 3×2 frame for this array).
Axis 1 lists values in the x-z plane (a 4×2 frame).
Axis 2 lists values in the x-y plane (a 4×3 frame).
The Index selector then picks the position along the remaining third axis, which is held fixed for the frame being displayed.
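To make this concrete, here is a plain-Python sketch (no numpy required) of the same (4, 3, 2) reshape: fixing an index on one axis leaves a 2-D frame over the other two axes, which is exactly what Spyder displays.

```python
# The 24 values reshaped to (4, 3, 2), built as nested lists
# in row-major order, matching np.reshape:
t = list(range(1, 25))
arr = [[[t[i * 6 + j * 2 + k] for k in range(2)]  # axis 2 (size 2)
        for j in range(3)]                        # axis 1 (size 3)
       for i in range(4)]                         # axis 0 (size 4)

# Fixing an index on one axis leaves a 2-D frame over the other two:
axis0_slice = arr[0]                                        # 3x2 frame (axes 1, 2)
axis1_slice = [block[1] for block in arr]                   # 4x2 frame (axes 0, 2)
axis2_slice = [[row[0] for row in block] for block in arr]  # 4x3 frame (axes 0, 1)
```

The three slice shapes (3×2, 4×2, 4×3) are precisely the frames Spyder shows for Axis 0, 1, and 2.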
import numpy as np

a = np.ones(90)
a = a.reshape(9, 5, 2)
a[0] = 2 * a[0]
a[:, 1, :] = 3 * a[:, 1, :]
a[:, :, 1] = 7 * a[:, :, 1]
a.shape
Try this code and see the results with different values. If any dimension passed to reshape is -1, its size is inferred from the array's total size and the other dimensions.
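As a quick sanity check on the inferred dimension: with 90 elements and known dimensions (5, 2), reshape(-1, 5, 2) would have to produce 9 along the first axis.

```python
# The inferred dimension is the total size divided by the
# product of the known dimensions.
total_size = 90
known_dims = 5 * 2
inferred = total_size // known_dims  # 9, so reshape(-1, 5, 2) yields (9, 5, 2)
```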
I have an 8-bit image and I want to filter it with a matrix for edge detection. My kernel matrix is
0 1 0
1 -4 1
0 1 0
For some indices it gives me a negative value. What am I supposed to do with them?
Your kernel is a Laplace filter. Applying it to an image yields a finite difference approximation to the Laplacian operator. The Laplace operator is not an edge detector by itself.
But you can use it as a building block for an edge detector: you need to detect the zero crossings to find edges (this is the Marr-Hildreth edge detector). To find zero crossings, you need to have negative values.
You can also use the Laplace filtered image to sharpen your image. If you subtract it from the original image, the result will be an image with sharper edges and a much crisper feel. For this, negative values are important too.
For both these applications, clamping the result of the operation, as suggested in the other answer, is wrong. That clamping sets all negative values to 0. This means there are no more zero crossings to find, so you can't find edges, and for the sharpening it means that one side of each edge will not be sharpened.
So the best thing to do with the result of the Laplace filter is to preserve the values as they are. Use a signed 16-bit integer type to store your results (I actually prefer floating-point types; they simplify a lot of things).
On the other hand, if you want to display the result of the Laplace filter on a screen, you will have to do something sensible with the pixel values. A common approach is to add 128 to each pixel. This shifts zero to a mid-grey value, shows negative values as darker, and positive values as lighter. After adding 128, values above 255 and below 0 can be clipped. You can also stretch the values further to avoid clipping, for example laplace / 2 + 128.
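To make the zero-crossing and display points concrete, here is a minimal plain-Python sketch of the kernel from the question applied to a step edge (border pixels are left at zero for brevity; a real implementation would use a library and proper border handling):

```python
def laplace(img):
    """Apply the 3x3 Laplace kernel [[0,1,0],[1,-4,1],[0,1,0]] to the
    interior pixels of a grayscale image (list of lists).
    The result can be negative, so a signed type is required."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = (img[y - 1][x] + img[y + 1][x]
                         + img[y][x - 1] + img[y][x + 1]
                         - 4 * img[y][x])
    return out

# A vertical step edge: dark on the left, bright on the right.
img = [[0, 0, 200, 200] for _ in range(4)]
lap = laplace(img)
# Interior row: positive response on the dark side, negative on the
# bright side -- the zero crossing between them is the edge.
# lap[1] == [0, 200, -200, 0]

# For display: shift zero to mid-grey and clip to 0..255.
display = [[max(0, min(255, v + 128)) for v in row] for row in lap]
```

Note that clamping negatives to 0 before this step would destroy the sign change that marks the edge.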
Out of range values are extremely common in JPEG. One handles them by clamping.
If X < 0 then X := 0 ;
If X > 255 then X := 255 ;
I have some data that do not sum to 1 that I would like to have sum to 1.
0.0232
0.05454
0.2154
0.5
0.005426
0.024354
0.00000456
sum: 0.82292456
I could just multiply each value by 1.0/0.82292456 and then have the sum be 1.00, but then all values would receive a factor adjustment of the same proportion (i.e. 0.17707544).
I'd like to increase each value based on the size of the value itself. In other words, 0.5 would get a proportionally larger adjustment than 0.00000456 would.
I am not sure how to determine these adjustments that could potentially be additive, multiplicative, or both.
Any hints or suggestions would be helpful!
Thanks!
I could just multiply each value by 1.0/0.82292456 and then have the sum be 1.00, but then all values would receive a factor adjustment of the same proportion (i.e. 0.17707544).
OK, that's what I'd do. Why is "a factor adjustment of the same proportion" a problem?
I'd like to increase each value based on the size of the value itself.
In that case, multiplying each value by 1.0/0.82292456 already does exactly that: multiplication increases each value in proportion to the value itself.
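To see why multiplication already does what the question asks for, a quick sketch with the data from the question:

```python
values = [0.0232, 0.05454, 0.2154, 0.5, 0.005426, 0.024354, 0.00000456]
total = sum(values)                    # 0.82292456
scaled = [v / total for v in values]   # now sums to 1.0

# The absolute increase each value receives is proportional to the
# value itself: 0.5 grows far more than 0.00000456 does.
increase = [s - v for s, v in zip(scaled, values)]
```

Every value's relative increase (increase[i] / values[i]) is the same constant, but the absolute adjustment is largest for the largest values, which is the stated goal.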
I tried to display an image of CV_32F type using the imshow function, but it showed a WHITE image. The documentation says that floating-point images will be mapped to 0-255 and displayed, but it just showed a white image. I tried to convert it to CV_8U using:
Mat A=Mat::ones(300,300,CV_32FC1)*1000;
// do some processing - assigning float values to pixels in A
// ......
Mat B;
A.convertTo(B,CV_8U);
When I imshow B, I get a black-and-white image; there are no shades of gray. Are the float-valued pixels in A properly mapped to 0-255? Am I doing anything wrong?
A few values in A are 1000 as initialized, and the rest are floating-point numbers assigned during processing.
In OpenCV, if the image is of a floating-point type, imshow can only properly visualize pixel values in the range 0.0 to 1.0: any value greater than 1.0 is shown as a white pixel, and any value less than 0.0 as a black pixel.
To visualize a floating point image, scale its values to the range 0.0 - 1.0.
As for the conversion part: when used with default arguments, the cv::Mat::convertTo function just creates a matrix of the specified type, copies the values from the source matrix, and rounds them to the nearest possible value of the destination data type.
If the value is out of range, it is clamped to the minimum or maximum values.
In the documentation of imshow, it is written that:
If the image is 32-bit floating-point, the pixel values are multiplied
by 255. That is, the value range [0,1] is mapped to [0,255].
It means that only the values in the range 0.0 to 1.0 will be mapped to 0 to 255. If a value is greater than 1.0, and multiplied by 255, it will become greater than 255. Then it will be clamped to the range of CV_8U and eventually it will also become 255.
In your example, all the values which are 1000 will become 255 in the destination matrix, as the destination type is CV_8U and its maximum possible value is 255. The other floating-point values are simply rounded and clamped; no automatic mapping is done.
To appropriately map the values to the range of CV_8U use the 3rd and 4th parameters of the function cv::Mat::convertTo, so that the values are scaled before the conversion is done.
Suppose the matrix A has minimum and maximum values Min and Max, where Min!=Max.
To properly scale the values from 0 to 255, you can do the following:
if (Min != Max) {
    A -= Min;
    A.convertTo(B, CV_8U, 255.0 / (Max - Min));
}
You can also do this directly like this:
if (Min != Max)
    A.convertTo(B, CV_8U, 255.0 / (Max - Min), -255.0 * Min / (Max - Min));
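The same min-max scaling can be sketched in plain Python (scale_to_8bit is a hypothetical helper for illustration, not an OpenCV function):

```python
def scale_to_8bit(values, lo=0, hi=255):
    """Min-max scale a flat list of floats into [lo, hi] and round,
    mirroring convertTo(B, CV_8U, alpha, beta) with
    alpha = 255/(Max-Min) and beta = -255*Min/(Max-Min)."""
    vmin, vmax = min(values), max(values)
    if vmin == vmax:
        # Degenerate case: a constant image carries no contrast to stretch.
        return [lo for _ in values]
    a = (hi - lo) / (vmax - vmin)
    b = lo - a * vmin
    return [max(lo, min(hi, round(a * v + b))) for v in values]
```

With this scaling the smallest value maps to 0, the largest to 255, and everything in between is spread linearly, so the displayed image has proper shades of gray instead of only black and white.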
In the context of image processing for edge detection or in my case a basic SIFT implementation:
When taking the difference of two Gaussian-blurred images, you are bound to get pixels whose difference is negative (the originals are between 0 and 255, so the difference can fall anywhere between -255 and 255). What is the normal approach to 'fixing' this? I don't see taking the absolute value as very correct in this situation.
There are two different approaches depending on what you want to do with the output.
The first is to offset the output by 128, so that a range of -128 to 127 maps to 0 to 255; since the difference of two 8-bit images can span -255 to 255, you can also halve it first (diff / 2 + 128) so the full range fits without clipping.
The second is to clamp negative values so that they all equal zero.
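A small sketch of both options in plain Python (the function name is hypothetical); the offset variant halves first so the full -255 to 255 range survives, while clamping discards the negative half:

```python
def fix_dog_values(diff, mode="offset"):
    """Map DoG differences (range -255..255) into displayable 0..255.
    'offset': halve, then shift zero to mid-grey 128 (keeps sign info).
    'clamp':  zero out negatives (loses one side of each edge)."""
    if mode == "offset":
        return [max(0, min(255, d // 2 + 128)) for d in diff]
    return [max(0, min(255, d)) for d in diff]

diff = [-255, -40, 0, 40, 255]
```

For SIFT-style extrema detection you would keep the raw signed values and only apply a mapping like this for display.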