In the OpenCV frontal face cascade .xml file, what do the tree node threshold, left value, and right value represent? For example:
3 7 14 4 -1. rect
3 9 14 2 2. rect
4.0141958743333817e-003 threshold
0.0337941907346249 left value
0.8378106951713562 right value
If you know what these values represent, please explain.
I assume the following (also sketched in code below); please correct me if I am wrong:
1) There are 21 stages, and each stage contains weak classifiers.
2) 3 7 14 4 -1 represents a rectangle, where 3 and 7 are the coordinates of its top-left corner (the bright area whose pixels we sum), 14 is the width and 4 is the height, and that sum is multiplied by the weight -1.
3) 3 9 14 2 2 is a similar rectangle for the dark area; its pixel sum is calculated and multiplied by its weight (2).
4) Now subtract the values from steps (2) and (3); if the difference is less than the threshold, add the left value, otherwise add the right value.
Similarly for all features in that stage: add up the values from all features, and if the total is less than the stage threshold go on to the next stage, else reject that area for detection.
5) Up to how many stages do I need to pass to confirm that it is a face? All of them?
Also, what does the threshold 4.0141958743333817e-003 represent: the complete value in exponential notation, or a threshold range between 4.0141958743333817 and 3?
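For concreteness, here is a rough Python sketch of how I assume one weak classifier and one stage are evaluated (the names and the integral-image helper are my own illustration, not OpenCV's actual code):

def rect_sum(ii, x, y, w, h):
    """Sum of pixels inside the rectangle (x, y, w, h), using an integral image ii."""
    return ii[y + h][x + w] - ii[y][x + w] - ii[y + h][x] + ii[y][x]

def weak_classifier(ii, rects, node_threshold, left_val, right_val):
    """rects: list of (x, y, w, h, weight) tuples, e.g. (3, 7, 14, 4, -1) and (3, 9, 14, 2, 2)."""
    feature = sum(weight * rect_sum(ii, x, y, w, h) for (x, y, w, h, weight) in rects)
    return left_val if feature < node_threshold else right_val

def stage_total(ii, stage_nodes):
    """stage_nodes: list of (rects, node_threshold, left_val, right_val) tuples.
    The accumulated total is compared against the stage threshold to decide whether
    the window continues to the next stage or is rejected."""
    return sum(weak_classifier(ii, *node) for node in stage_nodes)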
Please correct me if I am wrong.
Thanks in advance.
The homography I have is non-degenerate (det != 0) and generated from a valid planar pose. When I use it to warp the four corners of the image, it returns something like the following:
1 0
3 2
instead of something like
0 1
3 2
where 0 represents the top-left corner, 1 the top-right, 2 the bottom-right, and 3 the bottom-left. The corners no longer follow clockwise order, and the quadrangle is twisted.
The weird thing is, if I apply it to a local patch within the image, i.e. where the plane is, the returned result is valid.
How can this happen? Shouldn't it always return a valid quadrangle?
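For reference, this is roughly how I warp the corners and check their orientation (a simplified sketch with a placeholder image size and homography, not my actual code):

import cv2
import numpy as np

h, w = 480, 640                          # assumed image size
H = np.eye(3, dtype=np.float64)          # placeholder; replace with the real homography

corners = np.float32([[0, 0], [w, 0], [w, h], [0, h]]).reshape(-1, 1, 2)
warped = cv2.perspectiveTransform(corners, H).reshape(-1, 2)

# Shoelace sum over the warped corners: it flips sign if the corner order is
# reversed, and the two lobes of a twisted (bow-tie) quadrangle largely cancel.
x, y = warped[:, 0], warped[:, 1]
signed_area = 0.5 * np.sum(x * np.roll(y, -1) - np.roll(x, -1) * y)
print(warped)
print("orientation preserved:", signed_area > 0)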
I posted this question in another forum and here is the answer I like: https://dsp.stackexchange.com/a/73553/55971
I am trying to visualize a time series data set on one plot as a pseudo-3D figure. However, I am having some trouble getting the filledcurves capability working properly. It seems to be adding an unwanted border at the "bottom" of my functions and I do not know how to fix this.
This is my current setup: I have nb_of_frames different files that I want to plot on one figure. Without the filledcurves option, I can do something like this:
plot for [i=1:nb_of_frames] filename(i) u ($1):(50.0 * $2 + (11.0 - (i-1)*time_step)) w l linewidth 1.2 lt rgb "black" notitle
which produces a figure like this:
[figure: no fill options]
Instead of doing this, I want to use the filledcurves option to bring my plots "forward" and highlight the functions that are more "forward", which I try to do with:
plot for [i=1:nb_of_frames] filename(i) u ($1):(50. * $2 + (11. - (i-1)*time_step)) w filledcurves fc "white" fs solid 1.0 border lc "black" notitle
This produces a figure as follows:
This is very close to what I want, but it seems that the border option adds a line underneath the function which I do not want. I have tried several variants of with filledcurves y1=0.0 with different values of y1, but nothing seems to work.
Any help would be appreciated. Thank you for your time.
Here is another workaround for gnuplot 5.2.
Apparently, gnuplot closes the filled area from the last point back to the first point. Hence, if you specify border, this closing line will also get a border, which is undesired here (at least until gnuplot 5.4rc2, as @Ethan says).
A straightforward solution would be to plot the data with filledcurves without border and then again with lines. However, since this is a series of shifted data, the two styles have to be plotted alternately. Unfortunately, gnuplot cannot switch plotting styles within a for loop (at least I don't know how). As a workaround, you have to build your plot command in a preceding loop and use it via a macro @ (check help macros) in the plot command. I hope you can adapt the example below to your needs.
Code:
### filledcurves without bottom border
reset session
set colorsequence classic
$Data <<EOD
1 0
2 1
3 2
4 1
5 4
6 5
7 2
8 1
9 0
EOD
myData(i) = sprintf('$Data u ($1-0.1*%d):($2+%d/5.)',i,i)
myFill = ' w filledcurves fc "0xffdddd" fs solid 1 notitle'
myLine = ' w l lc rgb "black" notitle'
myPlotCmd = ''
do for [i=11:1:-1] {
myPlotCmd = myPlotCmd.myData(i).myFill.", ".myData(i).myLine.", "
}
plot @myPlotCmd
### end of code
Result:
I can reproduce this in gnuplot 5.2.8 but not in the output from the release candidate for version 5.4. So I think that some bug fix or change was applied during the past year or so. I realize that doesn't help while you are using version 5.2, but if you can download and build from source for the 5.4 release candidate, that would take care of it.
Update
I thought of a work-around, although it may be too complicated to be worth it.
You can treat this as a 2D projection of a 3D fence plot constructed using the plotting style with zerrorfill. In this projection the y coordinate is the visual depth; X is X. Three quantities are needed on z: the bounding line, the bottom, and the top, i.e. 5 fields in the using clause: x depth zline zbase ztop.
unset key
set view 90, 180
set xyplane at 0
unset ytics
set title "3D projection into the xz plane\nplot with zerrorfill" offset 0,-2
set xlabel "X axis" offset 0,-1
set zlabel "Z"
splot for [i=1:25] 'foo.dat' using ($1+i):(i/100.):($2-i):(-i):($2-i) \
with zerrorfill fc "light-cyan" lc "black" lw 2
I have the following issue:
I'm creating a uniform gray color video (for testing) using OpenCV VideoWriter. The output video will reproduce a constant image where all the pixels must have the same value x (25, 51, 76,... and so on).
When I generate the video using the MJPG encoder:
vw = cv2.VideoWriter('./videos/input/gray1.mp4',
                     cv2.VideoWriter_fourcc(*'MJPG'),
                     fps, (resolution[1], resolution[0]))
and read the output using the VideoCapture class, everything works just fine. I get a frame array with all pixel values set to 25, 51, 76, and so on.
However, when I generate the video using HEV1 (H.265) or H264:
vw = cv2.VideoWriter('./videos/input/gray1.mp4',
                     cv2.VideoWriter_fourcc(*'HEV1'),
                     fps, (resolution[1], resolution[0]))
I run into the following issue. The frames I get back in BGR format follow this pattern:
The blue channel value is the expected value (x) minus 4 (25-4=21, 51-4=47, 76-4=72, and so on).
The green channel is the expected value (x) minus 1 (25-1=24, 51-1=50, 76-1=75).
The red channel is the expected value (x) minus 3 (25-3=22, 51-3=48, 76-3=73).
Notice that each channel is reduced by a constant amount (4, 1, and 3 respectively), independently of the pixel value (so the effect is constant).
A pixel-value-dependent effect would have been easier to explain than a fixed offset.
What is worse, if I generate a video whose frames consist of the pure colors (pixel values [255 0 0], [0 255 0] and [0 0 255]), I get the corresponding output values [251 0 0], [0 254 0] and [0 0 252].
I thought this was related to the grayscale Y value, where:
Y = 76/256 * RED + 150/256 * GREEN + 29/256 * BLUE
But these coefficients are not related to the output obtained. Maybe the problem is in the reading with VideoCapture?
EDIT:
In case I want the same output value in all channels (e.g. [10, 10, 10]), experimentally I have to create an image where the red and blue channels have the green channel value plus 2:
value = 10
img = np.zeros((resolution[0],resolution[1],3),dtype=np.uint8)+value
img[:,:,2]=img[:,:,2]+2
img[:,:,1]=img[:,:,1]+0
img[:,:,0]=img[:,:,0]+2
Has anyone experienced this issue? Is it related to the encoding process, or does OpenCV treat the image differently prior to encoding, depending on the fourcc parameter value?
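For reference, this is roughly the test I am running (a simplified sketch with assumed resolution, fps and frame count, not my exact script); swapping the fourcc between 'MJPG' and 'HEV1'/'H264' is what changes the decoded values:

import cv2
import numpy as np

resolution = (480, 640)          # (rows, cols), assumed for this sketch
fps, n_frames, value = 25, 30, 76

vw = cv2.VideoWriter('gray_test.mp4',
                     cv2.VideoWriter_fourcc(*'MJPG'),
                     fps, (resolution[1], resolution[0]))
frame = np.full((resolution[0], resolution[1], 3), value, dtype=np.uint8)
for _ in range(n_frames):
    vw.write(frame)
vw.release()

cap = cv2.VideoCapture('gray_test.mp4')
ok, decoded = cap.read()
cap.release()
if ok:
    # Per-channel mean (B, G, R order) exposes any constant offset the codec's
    # RGB -> YUV -> RGB round trip introduces.
    print(decoded.reshape(-1, 3).mean(axis=0))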
Quoting from the HDF5 hyperslab documentation:
The block array determines the size of the element block selected from
the dataspace.
The example shows a 2-D dataset with the parameters set to the following:
start offset is specified as [1,1], stride is [4,4], count is [3,7], and block is [2,2]
will result in 21 2x2 blocks, with selections at (1,1), (5,1), (9,1), (1,5), (5,5), and so on. I can understand that: because the starting point is (1,1) the selection starts at that point; since the stride is (4,4) it moves 4 in each dimension; and since the count is (3,7) it increments by 4 three times in the X direction and seven times in the Y direction, i.e. in the corresponding dimension.
But what I don't understand is what the block size is doing. Does it mean that I will get 21 blocks of dimension 2x2? That means each block contains 4 elements, but the count is already set to 3 in one dimension, so how is that possible?
A hyperslab selection created through H5Sselect_hyperslab() lets you create a region defined by a repeating block of elements.
This is described in section 7.4.2.2 of the HDF5 users guide found here (scroll down a bit to 7.4.2.2). The H5Sselect_hyperslab() reference manual entry might also be helpful.
Here is a diagram from the UG:
And here are the values used in that figure:
offset = (0,1)
stride = (4,3)
count = (2,4)
block = (3,2)
Notice how the repeating unit is a 3x2 element block. So yes, you will get 21 2x2 blocks in your case. There will be a grid of three blocks in one dimension and seven in the other, each spaced 4 elements apart in each direction. The first block will be offset by 1,1.
The most confusing thing about this API call is that three of the parameters have elements as their units, while count has blocks as its unit.
Edit: Perhaps this will make how block and count are used more obvious...
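For a concrete picture, here is a small plain-NumPy sketch (an illustration of the selection logic only, not the HDF5 API) that marks the elements selected by the parameters from your question:

import numpy as np

start, stride, count, block = (1, 1), (4, 4), (3, 7), (2, 2)

# Smallest dataspace that contains the whole selection.
shape = (start[0] + stride[0] * (count[0] - 1) + block[0],
         start[1] + stride[1] * (count[1] - 1) + block[1])
mask = np.zeros(shape, dtype=bool)

for i in range(count[0]):                 # count = how many blocks per dimension
    for j in range(count[1]):
        r = start[0] + i * stride[0]      # stride = spacing between block origins
        c = start[1] + j * stride[1]
        mask[r:r + block[0], c:c + block[1]] = True   # block = size of each block

print(mask.sum())   # 3 * 7 = 21 blocks of 2x2 elements -> 84 selected elements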
HDFS's default block size is 64 MB, which can be increased according to requirements. One mapper processes one block at a time.
I am trying to change the white point/white balance programmatically. This is what I want to accomplish:
- Choose a (random) pixel from the image
- Get color of that pixel
- Transform the image so that all pixels of that color will be transformed to white and all other colors shifted to match
I have accomplished the first two steps but the third step is not really working out.
At first I thought that, as per Apple's documentation, CIWhitePointAdjust should accomplish exactly that, but although it does change the image, it is not doing what I would like/expect it to do.
Then it seemed that CIColorMatrix should be something that would help me shift the colors, but I was (and still am) at a loss as to what to feed into those pesky vectors.
I have tried almost everything: the same RGB values on all vectors; the corresponding value (R for R, etc.) on each vector; and variations of those values such as 1 - x, 1 + x and 1 / x.
I have also come across CITemperatureAndTint which, as per Apple's documentation, should also help, but I have not yet figured out how to convert from RGB to temperature and tint. I have seen algorithms and formulas for converting from RGB to temperature, but nothing regarding tint. I will continue experimenting with this a little, though.
Any help much appreciated!
After a lot of experimenting and mathematics I finally got my app to work almost the way I want.
If anyone else will find themselves facing a similar problem then here is what I did.
I ended up using the CITemperatureAndTint filter, supplying a color temperature in kelvins calculated from the selected pixel's RGB value and a user-suppliable tint value.
To get to kelvins I:
- firstly converted RGB to XYZ using the D65 illuminant (i.e. daylight),
- then converted from XYZ to Yxy; both of these conversions were made using the algorithms found on EasyRGB,
- and then calculated kelvins from Yxy using McCamy's formula, which I found in a paper here.
These steps got the image in the ballpark but not quite there, so I added a UISlider for the user to supply the tint value ranging from -100 to 100.
By selecting a point that should be white and choosing values from the positive side of the tint scale (the images on my phone all tend to be more yellow), an image can now be converted to (more) neutral colors. Yay!
I supplied the calculated temperature and the user-chosen tint as the inputNeutral vector values, and 6500 (D65 daylight) and 0 as the inputTargetNeutral vector values to the CITemperatureAndTint filter.
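For reference, the RGB -> XYZ -> xy -> CCT calculation looks roughly like this (a Python sketch for clarity, assuming sRGB input with the D65 matrix and McCamy's approximation; not the actual app code):

def srgb_to_linear(c):
    """Undo the sRGB gamma for one 0-255 channel value."""
    c /= 255.0
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def rgb_to_cct(r, g, b):
    rl, gl, bl = (srgb_to_linear(v) for v in (r, g, b))
    # sRGB (D65) to XYZ
    X = 0.4124 * rl + 0.3576 * gl + 0.1805 * bl
    Y = 0.2126 * rl + 0.7152 * gl + 0.0722 * bl
    Z = 0.0193 * rl + 0.1192 * gl + 0.9505 * bl
    # XYZ to chromaticity coordinates x, y
    x = X / (X + Y + Z)
    y = Y / (X + Y + Z)
    # McCamy's approximation for correlated color temperature
    n = (x - 0.3320) / (0.1858 - y)
    return 449.0 * n ** 3 + 3525.0 * n ** 2 + 6823.3 * n + 5520.33

print(rgb_to_cct(200, 180, 160))   # a warm pixel gives a CCT below 6500 K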