How can I get the y-axis value from a CGPoint? - Core Plot

I am Chinese and my English is poor; this is my first question.
Here is my code:
- (BOOL)plotSpace:(CPTPlotSpace *)space shouldHandlePointingDeviceDownEvent:(UIEvent *)event atPoint:(CGPoint)point
{
// I want to get the x-axis or y-axis value from this point
}
Is there an API to get it, or do I have to calculate it myself?
Thank you very much.

Use the plot space:
NSDecimal dataPoint[2];
[space plotPoint:dataPoint forEvent:event];
or
double dataPoint[2];
[space doublePrecisionPlotPoint:dataPoint forEvent:event];
After this method call, dataPoint[CPTCoordinateX] will hold the x-value and dataPoint[CPTCoordinateY] will hold the y-value.

Related

How to highlight missing data

I have an Objective-C iOS app that uses Core Plot to draw a chart of computed angles: the x-axis is the angle domain and the y-axis is time. The chart scrolls from top to bottom over time to show the degrees, and the y-axis is centred on the mean value of the angle array. The data source is an NSMutableArray that contains the degrees.
An external module analyses raw data every second to generate a new computed angle to show in the chart. The chart is updated every second and the y-axis scrolls down to show the flow of time (something like the Core Plot demo project with a timer for continuously scrolling data, but vertical rather than horizontal).
Sometimes this operation cannot generate a new angle (the angle is the same as the last one computed), but I still have to update the chart to show the flow of time. The questions are:
1) Can I change the colour of these points in order to highlight the missing data for this range of time?
2) Can I highlight this area to show where the data is not present? Something like this (but in a vertical style):
https://keystrokecountdown.com/articles/corePlot/index.html
This is my current implementation:
1) Configuring plot area (some graphics setup)
- (void)initPlot {
[self configureHost];
[self configureGraph];
[self configurePlots];
[self configureAxes];
}
2) Data source. Select data from data source (array) to show in chart. x=array value, y=array index
#pragma mark - Core Plot Data Source
- (NSUInteger)numberOfRecordsForPlot:(CPTPlot *)plot {
if ([((NSString *)plot.identifier) isEqualToString:@"MY_ID"]) {
return [degreesArray count];
}
return 0;
}
- (NSNumber *)numberForPlot:(CPTPlot *)plot field:(NSUInteger)fieldEnum recordIndex:(NSUInteger)index {
NSNumber *num;
if ([((NSString *)plot.identifier) isEqualToString:@"MY_ID"]) {
if (fieldEnum == CPTScatterPlotFieldX) {
num = [degreesArray objectAtIndex:index];
} else {
num = [NSNumber numberWithInt:(int)index + currentIndex - (int)degreesArray.count];
}
}
return num;
}
3) Updating chart with new angle
- (void)newSetData:(int)newData {
// Check the correct plot
CPTGraph *leftGraph = self.leftPlot.hostedGraph;
CPTPlot *leftPlot = [leftGraph plotWithIdentifier:@"MY_ID"];
if (leftPlot) {
if (degreesArray.count > kDataOnGraph) {
NSRange range;
range.location = 0;
range.length = degreesArray.count-kDataOnGraph;
[degreesArray removeObjectsInRange:range];
[leftPlot deleteDataInIndexRange:range];
}
// Updating y axis range
[self updateSetYAxis];
currentIndex++;
newAngle = ...;
// Saving new SET data
[degreesArray addObject:[NSNumber numberWithInt:(int)newAngle]];
lastAngle_ = [NSNumber numberWithInt:newAngle];
// Adding new SET data to chart
[leftPlot insertDataAtIndex:degreesArray.count - 1 numberOfRecords:1];
// Updating axis
[self updateSetXAxis];
[self updateSetLabels];
}
}
In your <CPTScatterPlotDataSource>, implement the -symbolForScatterPlot:recordIndex: method to provide a symbol for each data point: use one for points with new data and another for points without. You will get better drawing performance if you reuse the symbol object for points with the same appearance rather than creating a new symbol for every plot point.
Use background limit bands on the y-axis to highlight the missing data ranges.

caretRectForPosition returns wrong coordinate after text insert

I am trying to get the UITextView caret position. For this, I use the caretRectForPosition: method. It works fine while typing text manually, but if I insert text into the text view, the method returns a nonsensical negative coordinate.
Here is subject part of my code:
- (BOOL) textView:(UITextView *)textView shouldChangeTextInRange:(NSRange)range replacementText:(NSString *)text {
// Truncated part of the code: text preparation, objects declaration and so on.
// Paste the calculated text into the textView
textView.text = newTextViewText;
// Calculate the cursor position to avoid it jumping to the end of the string. This part works fine.
NSInteger cursorPosition = range.location + allowedText.length;
textView.selectedRange = NSMakeRange(cursorPosition, 0);
// Try to get the caret coordinates. This doesn't work properly when text is pasted
cursorCoordinates = [textView caretRectForPosition:textView.selectedTextRange.end].origin;
return NO; // we replaced the text ourselves, so block the default insertion
}
I suppose there is some delay after the text insert and the string is still being processed when I try to get the cursor coordinates, but I have no idea where to look for the source of this time gap. Any ideas?
Update: I found out that this occurs when the inserted text spans two or more lines. I still don't know how to fix it.
I've got the same problem, and yes, it seems to be a timing problem.
My solution is:
A: Detect the invalid result from caretRectForPosition:. In my case, the invalid coordinates always seem to be either large negative values (-1.0 seems to be OK!) or 'infinite' for origin.y.
B: Re-ask the text view for the caret position after a short delay. I checked a few values for the delay; 0.05 seems to be more than enough.
The code:
- (void)textViewDidChange:(UITextView *)pTextView {
UITextPosition* endPos = pTextView.selectedTextRange.end;
CGRect caretRectInTextView = [pTextView caretRectForPosition:endPos];
if ((-1.0 > CGRectGetMinY(caretRectInTextView)) ||
(INFINITY == CGRectGetMinY(caretRectInTextView))) {
NSLog(@"Invalid caretRectInTextView detected!");
dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(0.05 * NSEC_PER_SEC)),
dispatch_get_main_queue(),
^{
// Recall
[self textViewDidChange:pTextView];
});
return;
}
... your code ...
}
So, you are right.
You have to use:
- (void)textViewDidChangeSelection:(UITextView *)textView
I think it will be called last; just after that you will have the current caret position.

Unpinch custom gesture recognizer with three fingers in iOS

I want to make a custom three-finger gesture recognizer, similar to an unpinch gesture recognizer.
All I need is an idea of how to recognize it.
My gesture needs to recognize three fingers moving in three directions. For example:
I hope the images make sense. I need it to be flexible for any three opposite directions. Thanks in advance; any help would be appreciated.
I am aware of the subclassing methods and I've already created custom single-finger gestures such as a semicircle and a full circle. What I need is an idea of how to handle this in code.
You need to create a UIGestureRecognizer subclass of your own (let's call it DRThreeFingerPinchGestureRecognizer) and in it to implement:
- touchesBegan:withEvent:
- touchesMoved:withEvent:
- touchesEnded:withEvent:
- touchesCancelled:withEvent:
These methods are called when touches are accepted by the system, possibly before they are sent to the view itself (depending on how you set up the gesture recognizer). Each of these methods gives you a set of touches, for which you can check the current and previous location in your view. Since a pinch gesture is relatively simple, this information is enough to test whether the user is performing a pinch and, if not, to fail the recognizer (UIGestureRecognizerStateFailed). If the state has not been set to failed by -touchesEnded:withEvent:, you can recognize the gesture.
I say pinch gestures are simple because you can easily track each touch and see how it moves compared to the other touches and to itself. If an angle threshold is broken, you fail the test; otherwise you allow it to continue. If the touches do not move at separate angles to each other, you fail the test. You will have to experiment with which vector angles are acceptable, because 120 degrees is not optimal for the three most common fingers (thumb + index + middle). You may just want to check that the vectors are not colliding.
Make sure to read the UIGestureRecognizer documentation for an in-depth look at the various methods, as well as subclassing notes.
Quick note for future readers: the way you do an unpinch/pinch with three fingers is to add the distances ab, bc, and ac.
However, if your graphics package happens to have "area of a triangle" on hand, simply use that. ("It saves one whole line of code!")
Hope it helps.
All you need to do is track:
the distance between the three fingers!
Simply add up "every" permutation
(Well, there are three: ab, ac and cb. Just add those; that's all there is to it!)
When that value, say, triples from the start value, that's an "outwards triple unpinch".
... amazingly it's that simple.
Angles are irrelevant.
Footnote if you want to be a smartass: this applies to any pinch/unpinch gesture, 2, 3 fingers, whatever:
track the derivative of the sum-distance (that is, the velocity) rather than the distance itself. (Bizarrely, this is often easier to do, because it is stateless: you need only look at the previous frame!)
So, in other words, the gesture is triggered when the expansion/contraction velocity of the fingers reaches a certain value, rather than a multiple of the start value.
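The velocity trigger in plain C might look like this (a sketch with illustrative names; the span is the pairwise distance sum from above, and the screen-width normalisation is one possible choice of units):

```c
// Stateless velocity trigger: only the previous frame's span is needed.
// `screenWidth` normalises the speed to screen-widths-per-second, so the
// same threshold behaves similarly across devices.
static int unpinchByVelocity(double prevSpan, double currSpan,
                             double dt, double screenWidth,
                             double thresholdSwps) {
    double velocity = (currSpan - prevSpan) / dt / screenWidth;
    return velocity >= thresholdSwps;
}
```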
More interesting footnote!
However there is a subtle problem here: whenever you do anything like this (any platform) you have to be careful to measure "on the glass".
If you are just doing distance (i.e., my first solution above), of course everything cancels out and you can just say "if it doubles" (in pixels, points, whatever). BUT if you are using velocity as part of the calculation in any gesture, then, somewhat surprisingly, you literally have to find the velocity in metres per second in the real world, which sounds weird at first! Of course you can't do this exactly (particularly on Android), because glass sizes vary somewhat, but you have to get close to it. Here is a long post discussing this problem: http://answers.unity3d.com/questions/292333/how-to-calculate-swipe-speed-on-ios.html In practice you usually have to make do with "screen-widths-per-second", which is pretty good. (But this may be vastly different on phones, large tablets, and these days "Surface"-type things. On a whole iMac screen, 0.1 screen-widths-per-second may be fast, but on an iPhone that is nothing, not a gesture.)
Final footnote! I simply don't know if Apple use "distance multiple" or "glass velocity" in their gesture recognition, or also likely is some subtle mix. I've never read an article from them commenting on it.
Another footnote! -- if for whatever reason you do want to find the "center" of the triangle (I mean the center of the three fingers). This is a well-travelled problem for game programmers because, after all, all 3D mesh is triangles.
Fortunately it's trivial to find the center of three points, just add the three vectors and divide by three! (Confusingly this even works in higher dimensions!!)
You can see endless posts on this issue...
http://answers.unity3d.com/questions/445442/calculate-uv-at-center-of-triangle.html
http://answers.unity3d.com/questions/424950/mid-point-of-a-triangle.html
Conceivably, if you were incredibly anal, you would want the "barycenter" which is more like the center of mass, just google if you want that.
I think tracking angles is leading you down the wrong path. It's likely a more flexible and intuitive gesture if you don't constrain it based on the angles between the fingers. It will be less error-prone if you just treat it as a three-fingered pinch regardless of how the fingers move relative to each other. This is what I'd do:
if(presses != 3) {
state = UIGestureRecognizerStateCancelled;
return;
}
// After three fingers are detected, begin tracking the gesture.
state = UIGestureRecognizerStateBegan;
central_point_x = (point1.x + point2.x + point3.x) / 3;
central_point_y = (point1.y + point2.y + point3.y) / 3;
// Record the central point and the average finger distance from it.
central_point = make_point(central_point_x, central_point_y);
initial_pinch_amount = (distance_between(point1, central_point) + distance_between(point2, central_point) + distance_between(point3, central_point)) / 3;
Then on each update for touches moved:
if(presses != 3) {
state = UIGestureRecognizerStateEnded;
return;
}
// Get the new central point
central_point_x = (point1.x + point2.x + point3.x) / 3;
central_point_y = (point1.y + point2.y + point3.y) / 3;
central_point = make_point(central_point_x, central_point_y);
// Find the new average distance
pinch_amount = (distance_between(point1, central_point) + distance_between(point2, central_point) + distance_between(point3, central_point)) / 3;
// Determine the multiplicative factor between them.
difference_factor = pinch_amount / initial_pinch_amount
Then you can do whatever you want with the difference_factor. If it's greater than 1, then the pinch has moved away from the center. If it's less than one, it's moved towards the center. This will also give the user the ability to hold two fingers stationary and only move a third to perform your gesture. This will address certain accessibility issues that your users may encounter.
Also, you could always track the incremental change between touch move events, but they won't be equally spaced in time and I suspect you'll have more troubles dealing with it.
I also apologize for the pseudo-code. If something isn't clear I can look at doing up a real example.
Simple subclass of UIGestureRecognizer. It calculates the relative triangular center of the three points and then the average distance from that center; the angle is not important. You then check the average distance in your gesture handler.
.h
#import <UIKit/UIKit.h>
#import <UIKit/UIGestureRecognizerSubclass.h>
@interface UnPinchGestureRecognizer : UIGestureRecognizer
@property CGFloat averageDistanceFromCenter;
@end
.m
#import "UnPinchGestureRecognizer.h"
@implementation UnPinchGestureRecognizer
-(CGPoint)centerOf:(CGPoint)pnt1 pnt2:(CGPoint)pnt2 pnt3:(CGPoint)pnt3
{
CGPoint center;
center.x = (pnt1.x + pnt2.x + pnt3.x) / 3;
center.y = (pnt1.y + pnt2.y + pnt3.y) / 3;
return center;
}
-(CGFloat)averageDistanceFromCenter:(CGPoint)center pnt1:(CGPoint)pnt1 pnt2:(CGPoint)pnt2 pnt3:(CGPoint)pnt3
{
CGFloat distance;
distance = (hypot(pnt1.x - center.x, pnt1.y - center.y) +
hypot(pnt2.x - center.x, pnt2.y - center.y) +
hypot(pnt3.x - center.x, pnt3.y - center.y)) / 3;
return distance;
}
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
if ([touches count] == 3) {
[super touchesBegan:touches withEvent:event];
NSArray *touchObjects = [touches allObjects];
CGPoint pnt1 = [[touchObjects objectAtIndex:0] locationInView:self.view];
CGPoint pnt2 = [[touchObjects objectAtIndex:1] locationInView:self.view];
CGPoint pnt3 = [[touchObjects objectAtIndex:2] locationInView:self.view];
CGPoint center = [self centerOf:pnt1 pnt2:pnt2 pnt3:pnt3];
self.averageDistanceFromCenter = [self averageDistanceFromCenter:center pnt1:pnt1 pnt2:pnt2 pnt3:pnt3];
self.state = UIGestureRecognizerStateBegan;
}
}
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
if ([touches count] == 3)
{
NSArray *touchObjects = [touches allObjects];
CGPoint pnt1 = [[touchObjects objectAtIndex:0] locationInView:self.view];
CGPoint pnt2 = [[touchObjects objectAtIndex:1] locationInView:self.view];
CGPoint pnt3 = [[touchObjects objectAtIndex:2] locationInView:self.view];
CGPoint center = [self centerOf:pnt1 pnt2:pnt2 pnt3:pnt3];
self.averageDistanceFromCenter = [self averageDistanceFromCenter:center pnt1:pnt1 pnt2:pnt2 pnt3:pnt3];
self.state = UIGestureRecognizerStateChanged;
return;
}
}
- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
[super touchesEnded:touches withEvent:event];
self.state = UIGestureRecognizerStateEnded;
}
- (void)touchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event {
[super touchesCancelled:touches withEvent:event];
self.state = UIGestureRecognizerStateFailed;
}
@end
Implementation of the gesture handler. I set a maximum average distance for the start and a minimum for the end; you can also check during the changed state:
-(IBAction)handleUnPinch:(UnPinchGestureRecognizer *)sender
{
switch (sender.state) {
case UIGestureRecognizerStateBegan:
//If you want a maximum starting distance
self.validPinch = (sender.averageDistanceFromCenter<75);
break;
case UIGestureRecognizerStateEnded:
//Minimum distance from relative center
if (self.validPinch && sender.averageDistanceFromCenter >=150) {
NSLog(@"successful unpinch");
}
break;
default:
break;
}
}

How to find if two Google Maps routes intersect in an iOS project

I want to achieve the following, and I don't know whether it is possible or not. I have two points on a road (imagine something like a finishing line: they are the two edges of the pavements and lie on a straight line), and I want to check whether a user's route has passed between these points.
So I thought, I can do something like this:
get the route between these two points (they are quite near; the road is 20 m wide at most)
get the user's route
check whether these routes intersect, i.e. whether there is any crossing between them.
If it were pure geometry, it would be easy: I have two lines and I want to know whether there is any intersection between them.
I want to use this in an iOS project, if it makes any difference. For example, I thought there might be a programmatic way of drawing the MKPolylines and checking whether they intersect. I do not want to see it visually; I just need it programmatically.
Is it possible? Can you suggest anything else?
There is no direct method to check for that intersection.
You can turn the problem into a geometry problem by converting the lat/long positions into map positions (applying the transform to flatten onto a normalised grid). This can be done with MKMapPointForCoordinate.
One thing to consider is the inaccuracy of the GPS reported positions. In a number of cases you will find that the reported track from the GPS isn't actually on the road but running beside the road. Particularly when turning (tight) corners you will often get a large curve in the track. As such you may want to extend the width of the 'finishing line' to compensate for this.
If you just care about whether the user is within a certain distance of a set point then you can create a CLRegion to represent the target location and then call containsCoordinate: with the current user location. This removes any projection and uses the lat/long directly. You can also get the system to monitor this for you and give you a callback when the user enters or exits the region with startMonitoringForRegion:desiredAccuracy:. Again, you need to consider GPS accuracy when setting the radius.
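Once both tracks are flattened into x/y points (e.g. via MKMapPointForCoordinate), the crossing check itself is the classic 2D segment-intersection test. A sketch in plain C using signed-area orientations (names are illustrative; collinear/touching edge cases are deliberately ignored):

```c
typedef struct { double x, y; } Pt;

// Twice the signed area of triangle (a, b, c):
// > 0 counter-clockwise, < 0 clockwise, == 0 collinear.
static double cross3(Pt a, Pt b, Pt c) {
    return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
}

// Proper intersection of segments p1-p2 and q1-q2: each segment's
// endpoints lie on opposite sides of the other segment's line.
static int segmentsIntersect(Pt p1, Pt p2, Pt q1, Pt q2) {
    double d1 = cross3(q1, q2, p1);
    double d2 = cross3(q1, q2, p2);
    double d3 = cross3(p1, p2, q1);
    double d4 = cross3(p1, p2, q2);
    return ((d1 > 0) != (d2 > 0)) && ((d3 > 0) != (d4 > 0));
}
```

Run this for the finishing-line segment against each consecutive pair of points in the user's track; given GPS inaccuracy, the widened finishing line suggested above matters more than exact edge-case handling.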
I would try to solve this problem in three steps:
Step 1. Convert each coordinate of the user's track to a CGPoint and save it in an array.
// in viewDidLoad
locManager = [[CLLocationManager alloc] init];
[locManager setDelegate:self];
[locManager setDesiredAccuracy:kCLLocationAccuracyBest];
[locManager startUpdatingLocation];
self.userCoordinatePoints = [[NSMutableArray alloc] init];
- (void)locationManager:(CLLocationManager *)manager didUpdateToLocation:(CLLocation *)newLocation fromLocation:(CLLocation *)oldLocation {
CLLocationCoordinate2D loc = [newLocation coordinate];
CGPoint currentPoint = [self.mapView convertCoordinate:loc toPointToView:self.mapView];
// CGPoint is a struct, so wrap it in an NSValue before adding it to the array
[self.userCoordinatePoints addObject:[NSValue valueWithCGPoint:currentPoint]];
}
Step 2. Convert MKPolylineView to CGPathRef
Create a class variable of type CGPathRef
{
CGPathRef path;
}
You must already have implemented this method to create the route between the two points:
- (MKOverlayView*)mapView:(MKMapView*)theMapView
viewForOverlay:(id <MKOverlay>)overlay
{
MKPolylineView *overlayView = [[MKPolylineView alloc]
initWithOverlay:overlay];
overlayView.lineWidth = 3;
overlayView.strokeColor = [[UIColor blueColor]colorWithAlphaComponent:0.5f];
// overlayView.fillColor = [[UIColor purpleColor] colorWithAlphaComponent:0.1f];
path = overlayView.path;
return overlayView;
}
Step 3: Create a custom method to check whether any of the points lie within the CGPath:
- (BOOL)userRouteIntersectsGoogleRoute
{
// Loop through all the saved points, unwrapping each NSValue
for (NSValue *value in self.userCoordinatePoints)
{
CGPoint point = [value CGPointValue];
if (CGPathContainsPoint(path, NULL, point))
{
return YES;
}
}
return NO;
}

Small issue with cocos2d sketch board

I'm trying to add a "whiteboard" so that people can draw lines on it.
The only problem is that if I draw very fast, the sprites are spaced pretty far apart, so it's barely legible if someone tries to draw letters or numbers: there's a ton of space between the individual sprites.
Here's the method where, I think, most of the drawing happens.
-(void) update:(ccTime)delta
{
CCDirector* director = [CCDirector sharedDirector];
CCRenderTexture* rtx = (CCRenderTexture*)[self getChildByTag:1];
// explicitly don't clear the rendertexture
[rtx begin];
for (UITouch* touch in touches)
{
CGPoint touchLocation = [director convertToGL:[touch locationInView:director.openGLView]];
touchLocation = [rtx.sprite convertToNodeSpace:touchLocation];
// because the rendertexture sprite is flipped along its Y axis the Y coordinate must be flipped:
touchLocation.y = rtx.sprite.contentSize.height - touchLocation.y;
CCSprite* sprite = [[CCSprite alloc] initWithFile:@"Cube_Ones.png"];
sprite.position = touchLocation;
sprite.scale = 0.1f;
[self addChild:sprite];
[placedSprites addObject:sprite];
}
[rtx end];
}
Maybe this is the cause?
[self scheduleUpdate];
I'm not entirely sure how to decrease the time between updates, though.
Thanks in advance
The problem is simply that the user can move the touch location (i.e. his/her finger) a great distance between two touch events. You may receive one event at 100x100 and by the next the finger is already at 300x300. There's nothing you can do about that.
You can, however, assume that the change between two touch locations is a linear move. That means you can take any two touches that are farther apart than, say, 10 pixels and split the gap into 10-pixel intervals, so you generate the in-between touch locations yourself.
If you do that, it's a good idea to also enforce a minimum distance between two touches; otherwise the user could draw lots and lots of sprites in a very small area, which is not what you want. So you would only draw a new sprite if the new touch location is, say, at least 5 pixels away from the previous one.
