Insert an area with decimal values in a CPTXYPlotSpace

Hi everyone!
I'm new to Core Plot and I need to insert an area with decimal values in a CPTXYPlotSpace; these values should lie between 3 parallel straight lines drawn as CPTScatterPlots in the plot space.
In the delegate method plotSpace:shouldHandlePointingDeviceDraggedEvent:atPoint: I can get the "doublePrecisionPlotPoint" and "point" values, as shown below:
double dataPoint[2];
[space doublePrecisionPlotPoint:dataPoint numberOfCoordinates:2 forEvent:event];
NSLog(@"Data Point (X): %f, Data Point (Y): %f", dataPoint[0], dataPoint[1]);
NSLog(@"Point (X): %f, Point (Y): %f", point.x, point.y);
Now what I need is to get a new median value between these parallel lines in this event, when the user hovers the cursor between them.
Has anyone already come across this?

Compute the distance between your point and each line and determine which one is closest. See Wikipedia for details of the math.
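
A minimal sketch of that test in the drag handler from the question, assuming each of the three lines is defined by two plot-space points (DistanceToLine, linesStart, and linesEnd are illustrative names, not Core Plot API):

#include <math.h>

// Distance from point p to the infinite line through a and b:
// |cross(b - a, p - a)| / |b - a|.
static double DistanceToLine(CGPoint p, CGPoint a, CGPoint b)
{
    double dx = b.x - a.x;
    double dy = b.y - a.y;
    return fabs(dy * p.x - dx * p.y + b.x * a.y - b.y * a.x) / hypot(dx, dy);
}

// Inside the drag handler, pick whichever line is closest to the dragged point:
CGPoint dragPoint = CGPointMake(dataPoint[0], dataPoint[1]);
NSUInteger closestLine = 0;
double bestDistance = INFINITY;
for (NSUInteger i = 0; i < 3; i++) {
    double d = DistanceToLine(dragPoint, linesStart[i], linesEnd[i]);
    if (d < bestDistance) { bestDistance = d; closestLine = i; }
}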

Related

How to convert TangoXyzIjData into a matrix of z-values

I am currently using a Project Tango tablet for robotic obstacle avoidance. I want to create a matrix of z-values as they would appear on the Tango screen, so that I can use OpenCV to process the matrix. When I say z-values, I mean the distance each point is from the Tango. However, I don't know how to extract the z-values from the TangoXyzIjData and organize the values into a matrix. This is the code I have so far:
public void action(TangoPoseData poseData, TangoXyzIjData depthData) {
    byte[] buffer = new byte[depthData.xyzCount * 3 * 4];
    FileInputStream fileStream = new FileInputStream(
            depthData.xyzParcelFileDescriptor.getFileDescriptor());
    try {
        fileStream.read(buffer, depthData.xyzParcelFileDescriptorOffset, buffer.length);
        fileStream.close();
    } catch (IOException e) {
        e.printStackTrace();
    }
    Mat m = new Mat(depthData.ijRows, depthData.ijCols, CvType.CV_8UC1);
    m.put(0, 0, buffer);
}
Does anyone know how to do this? I would really appreciate help.
The short answer is that it can't be done, at least not simply. The XYZij struct in the Tango API does not work completely yet: there is no "ij" data. Your retrieval of the buffer will work as you have it coded. The contents are a set of X, Y, Z values for measured depth points, roughly 10,000+ per callback. Each X, Y, and Z value is of type float, so not CV_8UC1. The problem is that the points are not ordered in any way, so they do not correspond to an "image" or xy raster; they are an unordered list of depth points. There are ways to get them into some xy order, but it is not straightforward. I have done both of these:

- render them to an image, with the depth encoded as color, and pull the image back out as pixels;
- take the model/view/perspective matrices from OpenGL, multiply out the location of each point, and work out its screen-space position (as OpenGL would during rendering), then sort the points by their xy screen position, keeping the Z value from the original buffer rather than the calculated screen-space depth.

Or wait until (if) the XYZij struct is fixed so that it returns ij values.
I too wish to use Tango for robotic object avoidance. I've had some success by simplifying the use case to be only interested in the distance of any object located at the center view of the Tango device.
In Java:
private Double centerCoordinateMax = 0.020;
private TangoXyzIjData xyzIjData;

final FloatBuffer xyz = xyzIjData.xyz;
double cumulativeZ = 0.0;
int numberOfPoints = 0;
// xyzCount is the number of points; the buffer holds 3 floats (x, y, z) per point,
// so iterate over xyzCount * 3 floats in steps of 3.
for (int i = 0; i < xyzIjData.xyzCount * 3; i += 3) {
    float x = xyz.get(i);
    float y = xyz.get(i + 1);
    if (Math.abs(x) < centerCoordinateMax &&
            Math.abs(y) < centerCoordinateMax) {
        float z = xyz.get(i + 2);
        cumulativeZ += z;
        numberOfPoints++;
    }
}
Double distanceInMeters;
if (numberOfPoints > 0) {
    distanceInMeters = cumulativeZ / numberOfPoints;
} else {
    distanceInMeters = null;
}
Put simply, this code takes the average distance over a small square located at the origin of the x and y axes.
centerCoordinateMax = 0.020 was determined to work based on observation and testing. The square typically contains 50 points in ideal conditions, and fewer when the device is held close to the floor.
I've tested this using version 2 of my tango-caminada application and the depth measuring seems quite accurate. Standing 1/2 meter from a doorway, I slid towards the open door and the distance changed from 0.5 meters to 2.5 meters, which is the wall at the end of the hallway.
Simulating a robot being navigated, I moved the device towards a trash can in the path until there was 0.5 meters of separation, then rotated left until the distance was more than 0.5 meters, and proceeded forward. An oversimplified simulation, but the basis for object avoidance using Tango depth perception.
You can do this by using the camera intrinsics to convert XY coordinates to normalized values; see this post, Google Tango: Aligning Depth and Color Frames, which talks about texture coordinates but it's exactly the same problem.
Once normalized, move to screen space (e.g. 1280x720), and then the Z coordinate can be used to generate a pixel value for OpenCV to chew on. You'll need to decide on your own how to color the pixels that don't correspond to depth points, and advisedly before you use the depth information to further colorize pixels.
The main thing to remember is that the raw coordinates returned already use the basis vectors you want, i.e. you do not want to apply the pose attitude or location.

Calculating point coordinates from user tap with constraints

I am trying to calculate the coordinates along a circle corresponding to the tap location. The coordinates should be on the border of the circle nearest to the tap location (i.e. the point on the border closest to the tap). To facilitate this, I only detect taps that are at least 80% of the radius away from the circle's center.
Input:
P (CGPoint) - center of the circle
P1 (CGPoint) - current position of the displayed image
r (float) - radius of the circle
P3 (CGPoint) - user tap coordinate
Desired output:
P2 (CGPoint) - the new coordinates for the image, corresponding to P3 but on the circle. Sorry for the bad explanation; let me try to explain it in other words: once the user taps on the screen, I would like to move the image to P2. P2 should be derived by moving P3 to the border of the circle, which should be possible using the radius information.
The idea is to create from the P3 coordinates a new coordinate called P2, as described above. The key is that P2's distance from the center should equal the radius exactly, and its angle should be the same as the tap point's.
Would anyone be able to suggest a formula to calculate this coordinate given a tap? I simply need to calculate P2 using the input I have.
Code so far:
-(void)tapInImageView:(UITapGestureRecognizer *)tap
{
    CGPoint tapPoint = [tap locationInView:tap.view];
    if ([self isInOuternCircle:tapPoint]) {
        // create a new coordinate P2 from tapPoint as described above - but I have no idea how;
        // the key is that P2's distance from the centre should equal the radius exactly,
        // and the angle should be the same as tapPoint's.
    }
}

-(BOOL)isInOuternCircle:(CGPoint)point
{
    double distanceToCenter = hypot(point.x - _timerView.center.x,
                                    point.y - _timerView.center.y);
    if (distanceToCenter < _innerCircleRadius) {
        return NO;
    }
    return YES;
}
I've done this once before, but the math usually depends on how you've set up your coordinate system, so I'll just outline what I did. You'll need a bit of geometry and a few formulae to determine the new coordinate along the circle.
Calculate the equation of the line passing through the center (P) and your tap point (P3): http://en.wikipedia.org/wiki/Linear_equation#Two-point_form
Determine the equation of your circle: http://en.wikipedia.org/wiki/Circle#Equations
Together these give you a system of one linear and one quadratic equation: http://www.mathsisfun.com/algebra/systems-linear-quadratic-equations.html
Once you have that system, solve it. The result will yield two possible points (the line intersects the circle in two places); the point you are looking for is the one closer to the tap point. Just compare the distances to P3 between the two solutions, and the shorter distance gives your required solution, P2.
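
Solving that system simplifies nicely here, because the line passes through the circle's center: the nearer intersection is just the vector from P to P3 rescaled to length r. A minimal sketch, reusing _timerView.center from the question's code and assuming an illustrative circleRadius variable for r:

CGPoint center = _timerView.center;
double dx = tapPoint.x - center.x;
double dy = tapPoint.y - center.y;
double distance = hypot(dx, dy); // |P3 - P|
if (distance > 0) {
    // Scale the center-to-tap vector so its length equals the radius.
    CGPoint p2 = CGPointMake(center.x + dx / distance * circleRadius,
                             center.y + dy / distance * circleRadius);
    // move the image to p2 here
}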

MKMapView: how to convert from longitude/latitude back to cm?

1) I am using MKMapView to display a custom image (for example width = 350 cm, height = 230 cm) in an MKOverlayView.
2) The center of the image is at longitude = 0 and latitude = 0, and it covers the whole world.
3) I place an MKPointAnnotation at longitude = 60.749995 and latitude = 56.091651.
Now I want to convert this point (longitude, latitude) back to x, y in cm, so that I can create a JPG on the server with the annotation on top of the image.
So how do I calculate the x, y values?
Thanks, Craig
So something like:

CLLocationCoordinate2D coordinateOrigin = CLLocationCoordinate2DMake(90, -180); // top-left of the world
CLLocationCoordinate2D coordinateMax = CLLocationCoordinate2DMake(-90, 180);    // bottom-right
MKMapPoint minMap = MKMapPointForCoordinate(coordinateOrigin);
MKMapPoint maxMap = MKMapPointForCoordinate(coordinateMax);
double width = maxMap.x - minMap.x;
double height = maxMap.y - minMap.y;
MKMapPoint p = MKMapPointForCoordinate(wanted_coord); // wanted_coord is the annotation's coordinate
// Normalized (0..1) position of the annotation within the world:
double fraction_x = (p.x - minMap.x) / width;
double fraction_y = (p.y - minMap.y) / height;
// Scale by the image size (350 x 230 in this example) to get image coordinates:
double image_x = fraction_x * 350.0;
double image_y = fraction_y * 230.0;
1) You're not really dealing with cm, you're dealing with pixels. An image has a certain number of pixels in each direction; the physical measurement in cm depends on how big your screen's or printer's pixels are.
2) To convert from lat/long to pixels, use MKMapPoints via the MKMapPointForCoordinate function. That will give you an x/y coordinate, and you'll need to scale those values to fit your custom image, so you need to work out which MKMapPoints the image covers. For example, if your image covered the entire world, you could find the minimum MKMapPoint values by using MKMapPointForCoordinate with (latitude 90, longitude -180) and the maximum values with (latitude -90, longitude 180). Now you have the max/min for MKMapPoint's x and y, and you know the max/min for your image, so it's trivial to scale from one to the other, as the code above shows.

Best practice for using lat/long within a UIView (not MKMapView)

Basically I have a list of POIs (name, lat, long) and I want to draw them in a UIView, relative to my current lat/long. I'm looking for some best practice for mapping these POIs (lat/long) to a UIView.
I don't want to use MKMapView (no need to display map data).
I was reading:
http://developer.apple.com/library/ios/#documentation/general/conceptual/Devpedia-CocoaApp/CoordinateSystem.html
But I'm clueless how to get from a CLLocation to an (x, y) on my UIView. I only want to draw the POIs around my current location. So, for example, if my screen represented a 20 by 30 km region, how do I map my POIs to their corresponding (x, y) coordinates?
Thanks.
What you're doing is a little strange, but you can convert latitude/longitude to a CGPoint-like struct called MKMapPoint. An MKMapPoint has x and y values which correspond to points on a map. Imagine laying out a flat map of the world with 0,0 at the top left; an MKMapPoint is a point on that map using that coordinate system.
Use the function MKMapPointForCoordinate() to convert a CLLocationCoordinate2D to an MKMapPoint:
MKMapPoint myMapPoint = MKMapPointForCoordinate(myLocationCoordinate);
When you get the list of points, you'll have to do something like finding the max and min x and y values, then fitting all the points into your view using those values; otherwise you'll end up with a load of very close points in one spot in your view.
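
A minimal sketch of that fitting step, assuming poiCoords is a C array of CLLocationCoordinate2D values of length poiCount and view is the UIView to draw into (all three names are illustrative):

#import <MapKit/MapKit.h>
#include <float.h>

// Convert every POI to an MKMapPoint and track the bounding box.
MKMapPoint points[poiCount];
double minX = DBL_MAX, minY = DBL_MAX, maxX = -DBL_MAX, maxY = -DBL_MAX;
for (NSUInteger i = 0; i < poiCount; i++) {
    points[i] = MKMapPointForCoordinate(poiCoords[i]);
    minX = MIN(minX, points[i].x); maxX = MAX(maxX, points[i].x);
    minY = MIN(minY, points[i].y); maxY = MAX(maxY, points[i].y);
}

// Normalize each point into the view's bounds.
for (NSUInteger i = 0; i < poiCount; i++) {
    CGFloat x = (points[i].x - minX) / (maxX - minX) * view.bounds.size.width;
    CGFloat y = (points[i].y - minY) / (maxY - minY) * view.bounds.size.height;
    // draw or place a subview at (x, y) here
}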
My guess is that, for a 20 km by 30 km region, you can consider the earth to be flat and therefore linearly interpolate the coordinates. For scale, one degree of latitude is about 111 km, so a difference of 0.00001 in latitude is roughly one meter; for longitude, multiply by the cosine of the latitude.
So if you have 20 km to be represented on the X axis, you can put your current coordinate at the center of the screen, make the left-most X coordinate the point 10 km away to the west, and so on. A sketch of this mapping follows.
I am not trying to provide a full answer here, just thinking out loud, because I wanted to do something similar as well and did it as an Internet-based app (without display though), where, given two coordinates, I had to find the distance between them.
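
A minimal sketch of that flat-earth mapping, assuming the view spans regionWidthMeters by regionHeightMeters centered on the current location (the function name and all parameter names are illustrative):

#import <CoreLocation/CoreLocation.h>
#import <UIKit/UIKit.h>

static const double kMetersPerDegreeLatitude = 111000.0; // approximate

// Map a POI into view coordinates, with the current location at the view's center.
static CGPoint ViewPointForPOI(CLLocationCoordinate2D poi,
                               CLLocationCoordinate2D center,
                               CGSize viewSize,
                               double regionWidthMeters,
                               double regionHeightMeters)
{
    // Meters east and south of the center (flat-earth approximation).
    double metersPerDegreeLongitude = kMetersPerDegreeLatitude * cos(center.latitude * M_PI / 180.0);
    double dxMeters = (poi.longitude - center.longitude) * metersPerDegreeLongitude;
    double dyMeters = (center.latitude - poi.latitude) * kMetersPerDegreeLatitude; // view y grows downward

    return CGPointMake(viewSize.width / 2.0 + dxMeters / regionWidthMeters * viewSize.width,
                       viewSize.height / 2.0 + dyMeters / regionHeightMeters * viewSize.height);
}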
There are many (many) different approaches to modelling the planet and translating 3D coordinates onto a 2D surface, and the errors introduced by the various methods vary depending on where on the globe you are. This question seems to cover most of what you are after though:
Converting Longitude & Latitude to X Y on a map with Calibration points
I think this is the best way (it works correctly for a Mercator-projection map):
import UIKit
import MapKit

extension UIView
{
    func addLocation(coordinate: CLLocationCoordinate2D)
    {
        // max MKMapPoint values (the Mercator world square)
        let maxY = Double(267995781)
        let maxX = Double(268435456)

        let mapPoint = MKMapPointForCoordinate(coordinate)
        let normalizedPointX = CGFloat(mapPoint.x / maxX)
        let normalizedPointY = CGFloat(mapPoint.y / maxY)

        let pointView = UIView(frame: CGRectMake(0, 0, 5, 5))
        pointView.center = CGPointMake(normalizedPointX * frame.width, normalizedPointY * frame.height)
        pointView.backgroundColor = UIColor.blueColor()
        addSubview(pointView)
    }
}
My simple project for adding coordinates to a UIView: https://github.com/Glechik/MapCoordinateDrawer

How to select a label in Core Plot

I want to get an axis label's value by clicking on it in a line graph. How can I achieve this? Is there any way to select each label value? I have tried the plot space delegate method -(BOOL)plotSpace:(CPTPlotSpace *)space shouldHandlePointingDeviceDownEvent:(id)event atPoint:(CGPoint)point. With this I am only able to get the bound values. What would be the best solution?
Thanks in advance.
Convert the point from the coordinate system of the graph layer to the plot area:
CGPoint pointInPlotArea = [space.graph convertPoint:interactionPoint
                                            toLayer:space.graph.plotAreaFrame.plotArea];
Convert the point to data coordinates:
NSDecimal plotPoint[2];
[space plotPoint:plotPoint forPlotAreaViewPoint:pointInPlotArea];
or
double plotPoint[2];
[space doublePrecisionPlotPoint:plotPoint forPlotAreaViewPoint:pointInPlotArea];
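
Putting the two conversions together inside the delegate method from the question (findLabelNearX: is a hypothetical helper you would implement to match the tapped x-value against your known axis label locations):

-(BOOL)plotSpace:(CPTPlotSpace *)space shouldHandlePointingDeviceDownEvent:(id)event atPoint:(CGPoint)point
{
    CGPoint pointInPlotArea = [space.graph convertPoint:point
                                                toLayer:space.graph.plotAreaFrame.plotArea];

    double plotPoint[2];
    [space doublePrecisionPlotPoint:plotPoint forPlotAreaViewPoint:pointInPlotArea];

    // The tap location in data coordinates; compare it against the
    // known label locations with a small tolerance.
    double tappedX = plotPoint[CPTCoordinateX];
    [self findLabelNearX:tappedX]; // hypothetical helper

    return NO; // the plot space doesn't need to handle the event further
}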
