First, I should say MPAndroidChart is awesome; you have brought in a lot of functionality and customization, which makes it really cool.
I'm looking to add user interaction to MPAndroidChart.
My requirement is:
In a combined chart (line chart and bubble chart), I want the user to be able to drag a data point and move it in the x,y coordinate space of the chart.
How can I achieve this, and which classes should be subclassed to do so?
To add interactivity to the graph data:
- you need a view with a pan gesture
- you need to subclass CombinedChartView, LineChartRenderer, and BubbleChartRenderer
Create the renderer objects. These need to replace the renderers created by CombinedChartView, which is why we subclass it and swap in our own renderer subclasses.
In our CombinedChartView subclass, override the getter/setter of the data variable. While the data is being assigned, we create one UIView per data point with a pan gesture attached, and keep a reference to its corresponding ChartDataEntry in the UIView. In the UIView's pan handler we change this DataEntry's xIndex (etc.) and call setNeedsDisplay. (The UIView's frame position is not known yet, so keep them all at the origin.)
In our renderer subclass (of LineChartRenderer or BubbleChartRenderer), override the drawDataSet function. This is where the actual drawing happens, so it is the place where we can get the exact x,y coordinates for our UIViews; here we finally set the frame positions for the views created in CombinedChartView.
That's it: when we pan the UIView, the view moves and in turn updates the xIndex of the ChartDataEntry.
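A rough sketch of the CombinedChartView and handle-view side of this, assuming the iOS Charts port; exact property and renderer API names differ between library versions (e.g. xIndex vs. entry.x), and the renderer subclass that sets the final handle frames is left out:

import UIKit
import Charts  // the iOS/Swift Charts port

// Sketch only: treat the Charts-specific calls here as assumptions.
class DraggableCombinedChartView: CombinedChartView {

    private(set) var handleViews: [DragHandleView] = []

    override var data: ChartData? {
        didSet { rebuildHandles() }
    }

    private func rebuildHandles() {
        handleViews.forEach { $0.removeFromSuperview() }
        handleViews.removeAll()
        guard let data = data else { return }

        for set in data.dataSets {
            for i in 0..<set.entryCount {
                guard let entry = set.entryForIndex(i) else { continue }
                // Frames are set later, once the renderer knows the screen coordinates.
                let handle = DragHandleView(entry: entry)
                handle.frame = CGRect(x: 0, y: 0, width: 24, height: 24)
                addSubview(handle)
                handleViews.append(handle)
            }
        }
    }
}

// One small view per data point; it keeps a reference to "its" entry and
// is moved while panning.
class DragHandleView: UIView {
    let entry: ChartDataEntry

    init(entry: ChartDataEntry) {
        self.entry = entry
        super.init(frame: .zero)
        backgroundColor = UIColor.systemBlue.withAlphaComponent(0.3)
        addGestureRecognizer(UIPanGestureRecognizer(target: self,
                                                    action: #selector(handlePan(_:))))
    }

    required init?(coder: NSCoder) { fatalError("not supported") }

    @objc private func handlePan(_ gesture: UIPanGestureRecognizer) {
        guard let chart = superview as? CombinedChartView else { return }
        let translation = gesture.translation(in: chart)
        center = CGPoint(x: center.x + translation.x, y: center.y + translation.y)
        gesture.setTranslation(.zero, in: chart)

        // Convert the new screen position back into chart values here
        // (via the chart's transformer) and update the entry's value
        // (x/y, or xIndex on older versions), then redraw.
        chart.setNeedsDisplay()
    }
}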
The Question
When the user taps on a view, which of these two functions is called first: touchesBegan(_:with:) or point(inside:with:)?
The Context
I want to subclass PKCanvasView (which inherits from UIScrollView) to allow interaction to pass through the view (i.e. allow interaction with the view below and disable interaction with the PKCanvasView) when the touch point is outside the UIBezierPath of any stroke on the canvas.
This is easy enough by overriding point(inside:with:). My issue is that I only want to allow interaction with the view below if the touch event's UITouch.TouchType is not .pencil (so that the user can draw with the Apple Pencil and interact with the view below using a finger).
The only way I think I can get this information is by also overriding touchesBegan(_:with:). There I can access the event and its touch type, and I would then somehow pass this along to be read inside point(inside:with:).
However, that all relies on extracting the UITouch.TouchType information before I check if the touch point is overlapping with any PKStroke paths.
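For reference, here is a stripped-down sketch of what I have in mind. It only works if touchesBegan(_:with:) really is called before point(inside:with:), which is exactly what I'm unsure about; the stroke test uses renderBounds as a rough stand-in and ignores canvas scrolling/zooming:

import UIKit
import PencilKit

class PassThroughCanvasView: PKCanvasView {

    // Set in touchesBegan(_:with:) in the hope that point(inside:with:) can read it later.
    private var lastTouchWasPencil = false

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        lastTouchWasPencil = touches.first?.type == .pencil
        super.touchesBegan(touches, with: event)
    }

    override func point(inside point: CGPoint, with event: UIEvent?) -> Bool {
        // Pencil touches should always stay with the canvas so drawing keeps working.
        if lastTouchWasPencil { return true }
        // Finger touches only hit the canvas when they land on an existing stroke;
        // renderBounds is a coarse approximation of a proper path test.
        return drawing.strokes.contains { $0.renderBounds.contains(point) }
    }
}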
So: Is touchesBegan(_:with:) called before point(inside:with:)?
There is a similar SO Question that exists for this problem but unfortunately there were no suitable answers provided.
I have a google maps view (GMSMapView) that is entirely covered by a transparent sibling view that acts as a container for thumbnail images. The thumbnails are child views of the container view, not the map view. These child views are randomly scattered about the map and therefore partially hide portions of the map's surface.
Tapping on one of these thumbnails triggers a segue to a different VC that shows a zoomed view of the image.
The problem:
Given that these thumbnails lie on top of the map, they prevent the normal map gestures from occurring if the gesture intersects one of the thumbnails. For example, if a user wishes to pinch-zoom, rotate or pan the map and one of his/her fingers begins on top of a thumbnail, the touches are intercepted by the thumbnail.
Non-starters:
Obviously, I can't set userInteractionEnabled to false on a thumbnail because I need to detect tap gestures to trigger the segue.
I don't think I can customize the responder chain using UIView's hitTest:withEvent: and pointInside:withEvent: methods on the thumbnail view, because the thumbnails are not in the same branch of the view hierarchy as the map view AND the dispatching logic depends on the type of gesture (which I don't think is available at that point; touchesBegan etc. are only called once the appropriate view has been chosen to receive the event). Please correct me if I'm wrong...
Attempted Solution:
Given the above, the strategy I'm attempting is to overlay all other views in the view controller with a transparent "touch interceptor view". This view's only purpose is to receive all touch messages (by overriding touchesBegan(), touchesMoved(), and touchesEnded()) and dispatch them to other views as appropriate.
In other words, depending on the type of gesture recognized (tap vs. anything else), I could call the appropriate target view's touchesBegan(), touchesMoved(), and touchesEnded() methods directly (on either one of the thumbnails or the map) by forwarding the touches and event parameters.
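Here is a simplified sketch of the interceptor I have in mind (the tap-vs-other decision is reduced to "a single touch over a thumbnail goes to the thumbnail, everything else goes to the map"):

import UIKit

class TouchInterceptorView: UIView {

    weak var mapView: UIView?          // the GMSMapView underneath
    var thumbnailViews: [UIView] = []  // thumbnails in the container view

    private weak var currentTarget: UIView?

    private func target(for touches: Set<UITouch>) -> UIView? {
        guard let location = touches.first?.location(in: self) else { return mapView }
        if touches.count == 1 {
            for thumb in thumbnailViews {
                // Test the touch in the thumbnail's own coordinate space.
                if thumb.bounds.contains(convert(location, to: thumb)) {
                    return thumb
                }
            }
        }
        return mapView
    }

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        // Latch the target at the start of the gesture and keep forwarding to it.
        currentTarget = target(for: touches)
        currentTarget?.touchesBegan(touches, with: event)
    }

    override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
        currentTarget?.touchesMoved(touches, with: event)
    }

    override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent?) {
        currentTarget?.touchesEnded(touches, with: event)
        currentTarget = nil
    }
}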
Unfortunately, while this works when the target view is a simple UIView, it seems most UIView subclasses (including GMSMapView) don't allow touch events to be forwarded in this manner, as described in the following article (see the section "A Tempting Non-Solution").
Any ideas would be greatly appreciated.
I've been looking at this thread as I'm trying to implement the same thing. However, I see that the Canvas class is implemented as a subclass of UIImageView. I'm trying to do the same thing except in a UIView. How will using a UIView rather than UIImageView affect the implementation of this solution? I see self.image used a couple times, but I don't know how I'd change that since I don't think that is available in a generic UIView.
Yes, you can implement this as a UIView subclass. Your model should hold the locations of the touch events (or the paths constructed from those locations), and then the view's drawRect can render those paths. Alternatively, you can create CAShapeLayer objects associated with those paths. Both approaches work fine.
Note, there is some merit to the approach of making snapshots (saved as UIImage objects) that you either show in a UIImageView or draw manually in your UIView subclass's drawRect. As your drawings get more and more complicated, you'll start to suffer performance issues if your drawRect has to redraw all of the path segments on every touch (it can grow to thousands of points surprisingly quickly, because a single screen gesture generates a lot of touches).
IMHO, the other answer you reference goes too far by making a new snapshot on every touchesMoved. For a full-resolution Retina iPad or iPhone 6 Plus screen, that's a large image snapshot to create on every touch event. I personally adopt a hybrid approach: my drawRect or CAShapeLayer renders the current path associated with the current gesture (i.e. the collection of touchesMoved events between touchesBegan and touchesEnded), and only when the gesture finishes does it create a new snapshot.
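To illustrate the hybrid idea, here is a minimal sketch in a plain UIView subclass (the names and the snapshotting details are mine, not from the referenced answer):

import UIKit

class DrawingView: UIView {

    private var snapshot: UIImage?            // everything from finished gestures
    private var paths: [UIBezierPath] = []    // paths still being drawn live
    private var currentPath: UIBezierPath?

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let point = touches.first?.location(in: self) else { return }
        let path = UIBezierPath()
        path.lineWidth = 3
        path.move(to: point)
        currentPath = path
        paths.append(path)
    }

    override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let point = touches.first?.location(in: self) else { return }
        currentPath?.addLine(to: point)
        setNeedsDisplay()
    }

    override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent?) {
        // Flatten everything drawn so far into one image so draw(_:) never
        // has to replay thousands of path segments.
        let drawBounds = bounds
        let previous = snapshot
        let finishedPaths = paths
        snapshot = UIGraphicsImageRenderer(bounds: drawBounds).image { _ in
            previous?.draw(in: drawBounds)
            UIColor.black.setStroke()
            finishedPaths.forEach { $0.stroke() }
        }
        paths.removeAll()
        currentPath = nil
        setNeedsDisplay()
    }

    override func draw(_ rect: CGRect) {
        snapshot?.draw(in: bounds)
        UIColor.black.setStroke()
        for path in paths {
            path.stroke()
        }
    }
}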
In the answer to that question, self.image is drawn into the drawing context first, then drawing is applied on top, then finally the image is updated to be the old image with new content drawn on top.
Since you just want to add a UIBezierPath, I'd just create a CAShapeLayer into which you place your bezier path, and place it on top of your view's backing layer (self.view.layer). There's no need to do anything with drawRect.
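For example, something along these lines (the oval path is just a placeholder for your own bezier path):

import UIKit

// Adds a bezier path on top of the given view's backing layer.
func addBezierOverlay(to view: UIView) {
    // Placeholder path; substitute your own UIBezierPath.
    let bezierPath = UIBezierPath(ovalIn: CGRect(x: 40, y: 40, width: 120, height: 80))

    let shapeLayer = CAShapeLayer()
    shapeLayer.path = bezierPath.cgPath
    shapeLayer.strokeColor = UIColor.black.cgColor
    shapeLayer.fillColor = nil
    shapeLayer.lineWidth = 2

    view.layer.addSublayer(shapeLayer)
}

Call it with self.view from your view controller (or whatever view you are drawing over).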
I'm facing a delicate problem handling touch events. This is probably not a usual thing to do, but I think it is possible. I just don't know how...
I have a main view containing subview (A) and subview (B), each with a lot of sub-subviews 1, 2, 3, ...:
MainView
    SubView(A)
        1
        2
        3
    SubView(B)
        1
        2
        3
Some of these sub-subviews (1, 2, 4) are scroll views.
I want to switch between A and B with a two-finger pan.
I have tried attaching a UIPanGestureRecognizer to MainView, but the scroll views cancel the touches and it only works sometimes.
I need a consistent way to first capture the touches, detect whether it is a two-finger pan, and only then decide whether to pass the touches down (or up... I'm not sure) the responder chain.
I tried to create a top-level view to handle that, but I can't get the touches to pass through that view.
I have found a lot of people with similar problems, but couldn't work out a solution to my problem from theirs.
If anyone could shed some light on this, that would be great, as I'm already getting desperate.
You can create a top-level view to capture the touches and their coordinates, and then check whether the touch coordinates lie inside any of the subviews. You can do that using the
BOOL CGRectContainsPoint(CGRect rect, CGPoint point)
function, where rect is the frame of the view and point is the location of the touch.
Please note that frames and touch locations are relative to their superviews, so you need to convert them to the coordinate system of the app window.
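A rough Swift sketch of that idea (in Swift, CGRectContainsPoint is simply rect.contains(point); candidateViews stands in for whichever subviews you care about):

import UIKit

class TouchCaptureView: UIView {

    var candidateViews: [UIView] = []

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let touch = touches.first, let window = window else { return }
        let pointInWindow = touch.location(in: window)

        for candidate in candidateViews {
            // Convert the candidate's frame into window coordinates before testing.
            let frameInWindow = candidate.superview?.convert(candidate.frame, to: window) ?? candidate.frame
            if frameInWindow.contains(pointInWindow) {
                // `candidate` is under the touch; decide here whether to forward it.
            }
        }
    }
}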
Or maybe this will be more helpful:
Receiving touch events on more than one UIView simultaneously
I am making an app where one feature is the ability to place pre-made doodles on the page. Users will be able to zoom the doodles in and out and place them where they wish. This is all for the iPad. Do I need to use OpenGL for this, or is there a better/easier way? (I'm new to iOS programming.)
You should be able to achieve this using CALayers. You will need to add the QuartzCore framework for this to work. The idea is to represent each doodle as a single CALayer. If your doodles are images, you can use the contents property to assign the doodle to the layer. It expects a CGImageRef, which you can easily retrieve via the CGImage property of a UIImage object.
You will need a view that serves as your drawing board. Since you want to be able to move and resize the doodles, you will have to attach a UIPanGestureRecognizer for moving the layers and a UIPinchGestureRecognizer for zooming the doodles in and out. Since recognizers can only be attached to a view and not to layers, the non-trivial part when the gesture handlers are called is identifying which sublayer of the view they are manipulating. You can get the touch locations using locationInView: for the pan gesture and locationOfTouch:inView: for the pinch gesture, with the view argument being the view the gesture is performed on (retrievable via gesture.view). Once you identify the layer in focus, you can use translationInView: of the pan gesture to move the layer and the scale property of the pinch gesture to transform it.
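Here is a rough sketch of that setup (names are illustrative, and simultaneous pan/pinch handling is omitted):

import UIKit
import QuartzCore

class DoodleBoardView: UIView {

    private weak var activeLayer: CALayer?

    override init(frame: CGRect) {
        super.init(frame: frame)
        configureGestures()
    }

    required init?(coder: NSCoder) {
        super.init(coder: coder)
        configureGestures()
    }

    private func configureGestures() {
        addGestureRecognizer(UIPanGestureRecognizer(target: self, action: #selector(handlePan(_:))))
        addGestureRecognizer(UIPinchGestureRecognizer(target: self, action: #selector(handlePinch(_:))))
    }

    // One CALayer per doodle, with the image assigned to `contents`.
    func addDoodle(_ image: UIImage, at position: CGPoint) {
        let doodle = CALayer()
        doodle.contents = image.cgImage
        doodle.bounds = CGRect(origin: .zero, size: image.size)
        doodle.position = position
        layer.addSublayer(doodle)
    }

    // The non-trivial part: figure out which sublayer the gesture is touching.
    private func doodleLayer(at point: CGPoint) -> CALayer? {
        return layer.sublayers?.last(where: { $0.frame.contains(point) })
    }

    @objc private func handlePan(_ gesture: UIPanGestureRecognizer) {
        if gesture.state == .began {
            activeLayer = doodleLayer(at: gesture.location(in: self))
        }
        guard let target = activeLayer else { return }
        let translation = gesture.translation(in: self)
        CATransaction.begin()
        CATransaction.setDisableActions(true)  // no implicit animation while dragging
        target.position = CGPoint(x: target.position.x + translation.x,
                                  y: target.position.y + translation.y)
        CATransaction.commit()
        gesture.setTranslation(.zero, in: self)
    }

    @objc private func handlePinch(_ gesture: UIPinchGestureRecognizer) {
        if gesture.state == .began {
            activeLayer = doodleLayer(at: gesture.location(in: self))
        }
        guard let target = activeLayer else { return }
        target.setAffineTransform(target.affineTransform().scaledBy(x: gesture.scale, y: gesture.scale))
        gesture.scale = 1
    }
}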
While CALayer objects are lightweight, you could face problems when there are too many of them, so stress-test your application. Another roadblock is that images are usually memory hogs, so you might not be able to fit in a lot of doodles.