ZedGraph v5.15, multi Y-axis alignment

The issue arises when I use two Y-axes (Y1 and Y2), where Y1's range is (min, max) = (zero, positive) and Y2's range is (negative, positive). In that case the zero marking of Y1 coincides with the minimum (most negative) value of the Y2 axis along the x-axis. That is the problem: I want the zero points of both Y-axes to be flush with each other.
If I knew the min and max values of both Y-axes, this would be easy to fix, but I only know whether each range starts from a positive or a negative value, not the values themselves.
Note that the problem does not occur when both Y-axes have all their data points above zero. In that case they align automatically, so that both zero points pass through the x-axis.

I managed to fix it by matching the max/min proportion between the axes (the zero mark sits at the same fractional height on both axes exactly when Max/Min is equal on both):
public void SetY1Y2CommonZero()
{
    AxisChange();
    ZedGraph.Scale source, dest;
    if (GraphPane.YAxis.Scale.Min != 0)
    {
        source = GraphPane.YAxis.Scale;
        dest = GraphPane.Y2Axis.Scale;
    }
    else if (GraphPane.Y2Axis.Scale.Min != 0)
    {
        source = GraphPane.Y2Axis.Scale;
        dest = GraphPane.YAxis.Scale;
    }
    else
    {
        // do nothing - both axes have 0 as their min
        return;
    }
    double proportion = source.Max / source.Min;
    // we want to ENLARGE the other axis accordingly:
    if (proportion * dest.Min > dest.Max)
        dest.Max = proportion * dest.Min;
    else
        dest.Min = dest.Max / proportion;
}
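
For context, a minimal usage sketch. It assumes SetY1Y2CommonZero lives in a class with access to the control's GraphPane and AxisChange() (as the snippet above implies); the control name zedGraph, the curve labels and the data arrays are illustrative, not from the original post:

// Hypothetical setup: one curve on Y1 (all values >= 0), one on Y2 (mixed signs)
var pane = zedGraph.GraphPane;
LineItem c1 = pane.AddCurve("Y1 data", x1, y1, Color.Blue, SymbolType.None);
LineItem c2 = pane.AddCurve("Y2 data", x2, y2, Color.Red, SymbolType.None);
c2.IsY2Axis = true;            // attach the second curve to the Y2 axis
pane.Y2Axis.IsVisible = true;
SetY1Y2CommonZero();           // align the zeros after AxisChange()
zedGraph.Invalidate();         // redraw with the adjusted scales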

Related

Polar coordinate point generation function upper bound is not 2Pi for theta?

So I wrote the following function to take a frame and a polar-coordinate function, and to graph it by generating the Cartesian coordinates within that frame. Here's the code.
func cartesianCoordsForPolarFunc(frame: CGRect, thetaCoefficient: Double, cosScalar: Double, iPrecision: Double, largestScalar: Double) -> Array<CGPoint> {
    // Frame: The frame in which to fit this curve.
    // thetaCoefficient: The number to scale theta by inside the cos.
    // cosScalar: The number to multiply the cos by.
    // largestScalar: Largest cosScalar used in this frame so that scaling is relative.
    // iPrecision: The step for continuity. 0 < iPrecision <= 2*pi. Defaults to 0.1.

    // Clean inputs
    var precision: Double = 0.1 // Default precision
    if iPrecision != 0 { // Can't be 0.
        precision = iPrecision
    }

    // This is the polar function:
    // var theta: Double = 0 // 0 <= theta <= 2pi
    // let r = cosScalar * cos(thetaCoefficient * theta)

    var points: Array<CGPoint> = [] // We store the points here
    for theta in stride(from: 0, to: Double.pi * 2, by: precision) { // TODO: Try to recreate continuity. WHY IS IT NOT 2PI?
        let x = cosScalar * cos(thetaCoefficient * theta) * cos(theta) // Convert to cartesian
        let y = cosScalar * cos(thetaCoefficient * theta) * sin(theta) // Convert to cartesian
        // newvalue = (max'-min')/(max-min)*(value-max)+max'
        let scaled_x = (Double(frame.width) - 0) / (largestScalar * 2) * (x - largestScalar) + Double(frame.width) // Scale to the frame
        let scaled_y = (Double(frame.height) - 0) / (largestScalar * 2) * (y - largestScalar) + Double(frame.height) // Scale to the frame
        points.append(CGPoint(x: scaled_x, y: scaled_y)) // Add the result
    }
    print("Done points")
    return points
}
The polar function I'm passing is r = 100*cos(9/4*theta), which looks like this.
I'm wondering why my function returns the following when theta goes from 0 to 2π. (Please note that in these images I'm drawing flowers of different sizes, hence the repetition of the pattern.)
As you can see, it's wrong. The weird thing is that when theta goes from 0 to 2π*100 it works and I get this (it also works for other values such as 2π*4 and 2π*20, but not 2π*2 or 2π*10).
Why is this? Is the domain not 0 to 2π? I noticed that when going to 2π*100 it redraws some petals, so there is a limit, but what is it?
PS: Precision here is 0.01 (enough to act like it's continuous). In my images I'm drawing the function in different sizes, overlapped (the last image has 2 inner flowers).
No, the domain is not going to be 2π. Set up your code to draw slowly, taking 2 seconds for each 2π, and watch. It makes a whole series of full circles, and each time the local maxima and minima land at different points. Those are your petals. It looks like your formula repeats after 8π.
It looks like the period is the denominator of the theta coefficient times 2π. Your theta coefficient is 9/4, the denominator is 4, so the period is 4*2π, or 8π.
(That is based on playing in Wolfram Alpha and observing the results. I may be wrong.)
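One way to check the 8π claim with arithmetic: the curve returns to its starting point when θ has completed whole turns and the cosine has completed whole periods at the same time, i.e.

θ = 2πn and (9/4)·2πn = 2πm  ⇒  9n = 4m,

and since gcd(9, 4) = 1, the smallest integer solution is n = 4, giving θ = 8π. This also matches the observations in the question: 2π*100, 2π*4 and 2π*20 all close the curve because 100, 4 and 20 are multiples of 4, while 2 and 10 are not. In general, for r = cos((p/q)·θ) with p/q in lowest terms, the curve closes after at most θ = 2πq.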

Logarithmic axis not showing appropriate ticks for small numbers

To preface, I am using the extended logarithm functions for negative and small numbers here:
/**
 * Custom Axis extension to allow emulation of negative values on a
 * logarithmic Y axis. Note that the scale is not mathematically correct,
 * as a true logarithmic axis never reaches or crosses zero.
 */
(function (H) {
    // Pass error messages
    H.Axis.prototype.allowNegativeLog = true;

    // Override conversions
    H.Axis.prototype.log2lin = function (num) {
        var isNegative = num < 0,
            adjustedNum = Math.abs(num),
            result;
        if (adjustedNum < 10) {
            console.log('adjustedNum: ', adjustedNum);
            adjustedNum += (10 - adjustedNum) / 10;
        }
        result = Math.log(adjustedNum) / Math.LN10;
        if (adjustedNum < 10) console.log('result: ', result);
        return isNegative ? -result : result;
    };

    H.Axis.prototype.lin2log = function (num) {
        var isNegative = num < 0,
            absNum = Math.abs(num),
            result = Math.pow(10, absNum);
        if (result < 10) {
            result = (10 * (result - 1)) / (10 - 1);
        }
        return isNegative ? -result : result;
    };
}(Highcharts));
So if I have a y-axis with a dataMin of 0.22 and a dataMax of 2.34 using a log scale, the only ticks I get back are [0, 1], designating 0 and 10 (the bottom and top of the chart), with no tick marks in between.
1) Is there a way I can specify how many ticks I want on a log chart (tickAmount: 5 does not work)?
2) Is there a way I can tighten the range over which ticks fall? (In this case it sets up the axis for data values between 0 and 10, even though my data series only falls between 0.22 and 2.34.)
Thanks!
1. The tickPositions and tickPositioner options allow you to control exactly where the ticks fall.
2. You can try adjusting the tickInterval option: http://jsfiddle.net/BlackLabel/8v3xuyot/
yAxis: {
    type: 'logarithmic',
    tickInterval: 0.2
}
API references:
https://api.highcharts.com/highcharts/yAxis.tickInterval
https://api.highcharts.com/highcharts/yAxis.tickPositioner
https://api.highcharts.com/highcharts/yAxis.tickPositions
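
If you go the tickPositioner route, a sketch along these lines gives a fixed tick count. The 5-tick choice is an assumption, and note that on a logarithmic axis this.min and this.max arrive log-transformed, so the returned positions are log values too (consistent with the [0, 1] ticks you observed):

yAxis: {
    type: 'logarithmic',
    tickPositioner: function () {
        // this.min / this.max are in log10 units on a logarithmic axis
        var tickCount = 5,
            step = (this.max - this.min) / (tickCount - 1),
            positions = [];
        for (var i = 0; i < tickCount; i++) {
            positions.push(this.min + i * step);
        }
        return positions;
    }
}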

standard deviation of a UIImage/CGImage

I need to calculate the standard deviation of an image I have inside a UIImage object.
I already know how to access all the pixels of an image, one at a time, so somehow I can do it.
I'm wondering if there is somewhere in the framework a function to perform this in a better and more efficient way... I can't find it, so maybe it doesn't exist.
Does anyone know how to do this?
bye
To further expand on my comment above: I would definitely look into using the Accelerate framework, especially depending on the size of your image. If your image is a few hundred pixels by a few hundred, you will have a ton of data to process, and Accelerate along with vDSP will make all of that math a lot faster, since it processes everything with vectorized (SIMD) instructions on the CPU. I will look into this a little more, and possibly put some code up in a few minutes.
UPDATE
I will post some code to do standard deviation in a single dimension using vDSP, but this could definitely be extended to 2-D:
float *imageR = ...; // vector of values, e.g. [0.1, 0.2, 0.3, 0.4, ...]
int numValues = 100; // number of values in imageR

float mean = 0; // placeholder for mean
vDSP_meanv(imageR, 1, &mean, numValues); // find the mean of the vector
mean = -1 * mean; // invert mean so that adding it is actually subtraction

float *subMeanVec = (float *)calloc(numValues, sizeof(float)); // placeholder vector
vDSP_vsadd(imageR, 1, &mean, subMeanVec, 1, numValues); // subtract mean from vector
free(imageR); // free memory

float *squared = (float *)calloc(numValues, sizeof(float)); // placeholder for squared vector
vDSP_vsq(subMeanVec, 1, squared, 1, numValues); // square vector element by element
free(subMeanVec); // free some memory

float sum = 0; // placeholder for sum
vDSP_sve(squared, 1, &sum, numValues); // sum entire vector
free(squared); // free squared vector

float stdDev = sqrt(sum / numValues); // calculated std deviation
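
As an aside, if memory serves, vDSP can also produce the mean and standard deviation in a single call via vDSP_normalize, passing NULL for the output vector to skip the normalization step itself. A sketch, worth checking against the current docs:

float mean = 0, stdDev = 0;
// With a NULL output vector, vDSP_normalize only computes the
// mean and standard deviation of the input.
vDSP_normalize(imageR, 1, NULL, 1, &mean, &stdDev, numValues);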
Please explain your query so that I can come up with a specific reply.
If I am reading you right, you want to calculate the standard deviation of the RGB values of a pixel, or of the HSV color values. Since hue is a circular quantity, you can frame your own standard deviation method for circular quantities in the HSV case.
We can do this by wrapping the values.
For example: the naive average of [358, 2] degrees is (358+2)/2 = 180 degrees.
But this is not correct, because the mean should be 0 degrees.
So we wrap 358 into -2.
Now the answer is 0.
So you have to apply wrapping, and then you can calculate the standard deviation (see the code below).
UPDATE:
Convert RGB to HSV
// r,g,b values are from 0 to 1
// h = [0,360], s = [0,1], v = [0,1]
// if s == 0, then h = -1 (undefined)
void RGBtoHSV( float r, float g, float b, float *h, float *s, float *v )
{
    float min, max, delta;
    min = MIN( r, MIN( g, b ));
    max = MAX( r, MAX( g, b ));
    *v = max;
    delta = max - min;
    if( max != 0 )
        *s = delta / max;
    else {
        // r = g = b = 0
        *s = 0;
        *h = -1;
        return;
    }
    if( r == max )
        *h = ( g - b ) / delta;
    else if( g == max )
        *h = 2 + ( b - r ) / delta;
    else
        *h = 4 + ( r - g ) / delta;
    *h *= 60;
    if( *h < 0 )
        *h += 360;
}
and then calculate the circular standard deviation of the hue values like this (shown here in Java):
double calcStddev(ArrayList<Double> angles){
    double sin = 0;
    double cos = 0;
    for(int i = 0; i < angles.size(); i++){
        sin += Math.sin(angles.get(i) * (Math.PI/180.0));
        cos += Math.cos(angles.get(i) * (Math.PI/180.0));
    }
    sin /= angles.size();
    cos /= angles.size();
    double stddev = Math.sqrt(-Math.log(sin*sin + cos*cos));
    return stddev;
}
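
A quick usage sketch (hue angles in degrees; the result comes out in radians, and tightly clustered angles across the 0/360 boundary give a value near 0, which is the point of the wrapping):

import java.util.ArrayList;
import java.util.Arrays;

ArrayList<Double> hues = new ArrayList<>(Arrays.asList(358.0, 2.0, 0.0));
double sd = calcStddev(hues); // ~0.03 rad: the wrap-around near 0/360 is handled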

How to check if obtained homography matrix is good?

This question has already been asked, but I still don't get it. I obtain a homography matrix by calling cv::findHomography from a set of points. I need to check whether it's relevant or not. The proposed method is to calculate the maximum reprojection error for inliers and compare it with a threshold. But even after such filtering I keep getting insane transformations, with the object bounding box transformed into almost a straight line or some strange non-convex quadrangle with self-intersections, etc. What constraints can be used to check whether the homography matrix itself is adequate?
Your question is mathematical: given a 3x3 matrix, decide whether it represents a reasonable transformation.
It is hard to define what is "good", but here are some clues that can help you:
1. A homography should preserve the winding direction of polygonal points. Design a simple test: the points (0,0), (width,0), (width,height), (0,height) represent a quadrilateral with clockwise-arranged points. Apply the homography to those points and see if they are still arranged clockwise. If they become counter-clockwise, your homography is flipping (mirroring) the image, which is sometimes still OK. But if the points come out in a scrambled order, then you have a "bad homography".
2. The homography shouldn't change the scale of the object too much. For example, if you expect it to shrink or enlarge the image by a factor of up to X, just check this rule: transform the 4 points (0,0), (width,0), (width,height), (0,height) with the homography and calculate the area of the resulting quadrilateral (OpenCV has a method for calculating the area of a polygon). If the ratio of the areas is too big (or too small), you probably have an error. A sketch of this check appears below.
3. A good homography usually has small perspective terms. Typically, if the size of the image is ~1000x1000 pixels, those values should be ~0.005-0.001. High perspectivity will cause enormous distortions, which are probably an error. If you don't know where those values are located, read my post "trying to understand the Affine Transform". It explains the affine-transform math, and the other 2 values are the perspective parameters.
I think that if you check the above 3 conditions (condition 2 is the most important) you will be able to detect most of the problems.
Good luck
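
A minimal sketch of condition 2 in OpenCV's C++ API (the function name areaRatioOK and the maxRatio threshold are illustrative, not from the answer):

#include <opencv2/opencv.hpp>
#include <vector>

// Reject homographies that scale the image area by more than maxRatio
// in either direction (condition 2 above).
bool areaRatioOK(const cv::Mat& H, const cv::Size& imgSize, double maxRatio)
{
    std::vector<cv::Point2f> corners = {
        {0.0f, 0.0f},
        {(float)imgSize.width, 0.0f},
        {(float)imgSize.width, (float)imgSize.height},
        {0.0f, (float)imgSize.height}
    };
    std::vector<cv::Point2f> warped;
    cv::perspectiveTransform(corners, warped, H);

    double ratio = cv::contourArea(warped) / cv::contourArea(corners);
    return ratio < maxRatio && ratio > 1.0 / maxRatio;
}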
Edit: This answer is irrelevant to the question, but the discussion may be helpful for someone who tries to use the matching results for recognition like I did!
This might help someone:
Point2f[] objCorners = { new Point2f(0, 0),
                         new Point2f(img1.Cols, 0),
                         new Point2f(img1.Cols, img1.Rows),
                         new Point2f(0, img1.Rows) };
Point2d[] sceneCorners = MyPerspectiveTransform3(objCorners, homography);

double marginH = img2.Width * 0.1d;
double marginV = img2.Height * 0.1d;
bool homographyOK = isInside(-marginH, -marginV, img2.Width + marginH, img2.Height + marginV, sceneCorners);
if (homographyOK)
    for (int i = 1; i < sceneCorners.Length; i++)
        if (sceneCorners[i - 1].DistanceTo(sceneCorners[i]) < 1)
        {
            homographyOK = false;
            break;
        }
if (homographyOK)
    homographyOK = isConvex(sceneCorners);
if (homographyOK)
    homographyOK = minAngleCheck(sceneCorners, 20d);

private static bool isInside(dynamic minX, dynamic minY, dynamic maxX, dynamic maxY, dynamic coors)
{
    foreach (var c in coors)
        if ((c.X < minX) || (c.Y < minY) || (c.X > maxX) || (c.Y > maxY))
            return false;
    return true;
}

private static bool isLeft(dynamic a, dynamic b, dynamic c)
{
    return ((b.X - a.X) * (c.Y - a.Y) - (b.Y - a.Y) * (c.X - a.X)) > 0;
}

private static bool isConvex<T>(IEnumerable<T> points)
{
    var lst = points.ToList();
    if (lst.Count > 2)
    {
        bool left = isLeft(lst[0], lst[1], lst[2]);
        lst.Add(lst.First());
        for (int i = 3; i < lst.Count; i++)
            if (isLeft(lst[i - 2], lst[i - 1], lst[i]) != left)
                return false;
        return true;
    }
    else
        return false;
}

private static bool minAngleCheck<T>(IEnumerable<T> points, double angle_InDegrees)
{
    //20d * Math.PI / 180d
    var lst = points.ToList();
    if (lst.Count > 2)
    {
        lst.Add(lst.First());
        for (int i = 2; i < lst.Count; i++)
        {
            double a1 = angleInDegrees(lst[i - 2], lst[i - 1]);
            double a2 = angleInDegrees(lst[i], lst[i - 1]);
            double d = Math.Abs(a1 - a2) % 180d;
            if ((d < angle_InDegrees) || ((180d - d) < angle_InDegrees))
                return false;
        }
        return true;
    }
    else
        return false;
}

private static double angleInDegrees(dynamic v1, dynamic v2)
{
    return (radianToDegree(Math.Atan2(v1.Y - v2.Y, v1.X - v2.X))) % 360d;
}

private static double radianToDegree(double radian)
{
    var degree = radian * (180d / Math.PI);
    if (degree < 0d)
        degree = 360d + degree;
    return degree;
}

static Point2d[] MyPerspectiveTransform3(Point2f[] yourData, Mat transformationMatrix)
{
    Point2f[] ret = Cv2.PerspectiveTransform(yourData, transformationMatrix);
    return ret.Select(point2fToPoint2d).ToArray();
}

Binary Image Orientation

I'm trying to find the orientation of a binary image (where orientation is defined as the axis of least moment of inertia, i.e. the axis of least second moment of area). I'm using Dr. Horn's book (MIT) on Robot Vision, which can be found here, as a reference.
Using OpenCV, here is my function, where a, b, and c are the second moments of area as found on page 15 of the PDF above (page 60 of the text):
Point3d findCenterAndOrientation(const Mat& src)
{
    Moments m = cv::moments(src, true);
    double cen_x = m.m10/m.m00; // Centers are right
    double cen_y = m.m01/m.m00;

    double a = m.m20-m.m00*cen_x*cen_x;
    double b = 2*m.m11-m.m00*(cen_x*cen_x+cen_y*cen_y);
    double c = m.m02-m.m00*cen_y*cen_y;

    double theta = a==c?0:atan2(b, a-c)/2.0;
    return Point3d(cen_x, cen_y, theta);
}
OpenCV calculates the second moments around the origin (0,0), so I have to use the parallel axis theorem (subtracting the m*r^2 term) to move the axes to the center of the shape.
The center looks right when I call
Point3d p = findCenterAndOrientation(src);
rectangle(src, Point(p.x-1,p.y-1), Point(p.x+1, p.y+1), Scalar(0.25), 1);
But when I try to draw the axis with the lowest moment of inertia, using this call, it looks completely wrong:
line(src, (Point(p.x,p.y)-Point(100*cos(p.z), 100*sin(p.z))), (Point(p.x, p.y)+Point(100*cos(p.z), 100*sin(p.z))), Scalar(0.5), 1);
Here are some examples of input and output:
(I'd expect it to be vertical)
(I'd expect it to be horizontal)
I worked with orientation some time back and coded the following. It returns the exact orientation of the object; largest_contour is the shape that was detected.
CvMoments moments1, cenmoments1;
double M00, M01, M10;

cvMoments(largest_contour, &moments1);
M00 = cvGetSpatialMoment(&moments1, 0, 0);
M10 = cvGetSpatialMoment(&moments1, 1, 0);
M01 = cvGetSpatialMoment(&moments1, 0, 1);
posX_Yellow = (int)(M10/M00);
posY_Yellow = (int)(M01/M00);

double theta = 0.5 * atan(
    (2 * cvGetCentralMoment(&moments1, 1, 1)) /
    (cvGetCentralMoment(&moments1, 2, 0) - cvGetCentralMoment(&moments1, 0, 2)));
theta = (theta / PI) * 180;

// fit an ellipse (and draw it)
if (largest_contour->total >= 6) // can only do an ellipse fit if we have > 6 points
{
    CvBox2D box = cvFitEllipse2(largest_contour);
    if ((box.size.width < imgYellowThresh->width) && (box.size.height < imgYellowThresh->height))
    {
        cvEllipseBox(imgYellowThresh, box, CV_RGB(255, 255, 255), 3, 8, 0);
    }
}
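
For what it's worth, the cross term in the question's function looks suspicious: by the parallel axis theorem the central cross moment is mu11 = m11 - m00*cx*cy, so b should arguably be 2*(m.m11 - m.m00*cen_x*cen_y) rather than 2*m.m11 - m.m00*(cen_x*cen_x + cen_y*cen_y), which would explain the wrong axis. A sketch of the corrected function (OpenCV also exposes the central moments directly as m.mu20, m.mu11, m.mu02):

Point3d findCenterAndOrientation(const Mat& src)
{
    Moments m = cv::moments(src, true);
    double cen_x = m.m10 / m.m00;
    double cen_y = m.m01 / m.m00;

    // Central second moments; equivalently m.mu20, 2*m.mu11, m.mu02
    double a = m.m20 - m.m00 * cen_x * cen_x;
    double b = 2 * (m.m11 - m.m00 * cen_x * cen_y);
    double c = m.m02 - m.m00 * cen_y * cen_y;

    // Axis of least second moment: tan(2*theta) = b / (a - c)
    double theta = (a == c && b == 0) ? 0 : atan2(b, a - c) / 2.0;
    return Point3d(cen_x, cen_y, theta);
}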
