Quartz2D good way to Draw point or line on touch? - ios

I'm trying to develop an app that requires drawing based on user touch. I'm using Quartz2D and CoreGraphics for drawing, and now I'm wondering: what's a good way to manage the points I'm drawing? Currently I add each touchesMoved point to an array and call setNeedsDisplay on every move. This bogs the system down very quickly, so I'm wondering if anyone knows a good way to draw smoothly from user touch over a longer period of time? Thanks!

Touch events are fired very frequently, and since Quartz2D is slow, your system will saturate. Several options:
1) Switch to OpenGL (but that is probably overkill).
2) Don't draw on every single event. Put your touch handling "to sleep" (actually that's an Android solution, so I'm not sure it's good for iPhone) and only draw 1 out of every x lines.
3) Store the coordinates of your touches somewhere, and when your app is ready to refresh the UI, get the currently stored values and do your drawing.
4) Another solution I had put in place is to test whether the new position has actually moved more than a certain amount from the last draw (let's say 1-3 px); that way I avoid refreshing and redrawing when the update is too small. A sketch of this approach follows below.
These are just pointers; there might be a better option for your case.
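As a rough illustration of option 4, here is a minimal Swift sketch, assuming a plain UIView subclass that collects touch points into an array; the property names and the 3px threshold are my own choices, not from the original answer.

```swift
import UIKit

class SketchView: UIView {
    private var points: [CGPoint] = []
    private var lastRecordedPoint: CGPoint?
    private let minimumDistance: CGFloat = 3.0   // assumed threshold (the 1-3 px mentioned above)

    override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let point = touches.first?.location(in: self) else { return }

        // Only record and redraw if the finger moved far enough since the last recorded point.
        if let last = lastRecordedPoint {
            let dx = point.x - last.x, dy = point.y - last.y
            guard (dx * dx + dy * dy).squareRoot() >= minimumDistance else { return }
        }

        points.append(point)
        lastRecordedPoint = point
        setNeedsDisplay()   // in practice, prefer setNeedsDisplay(_:) with a small dirty rect
    }

    override func draw(_ rect: CGRect) {
        guard let context = UIGraphicsGetCurrentContext(), points.count > 1 else { return }
        context.setLineWidth(2)
        context.setStrokeColor(UIColor.black.cgColor)
        context.addLines(between: points)
        context.strokePath()
    }
}
```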

Related

handling finger detection on small objects

The application I am working on requires a 4px bar height with a full screen size width. I need to be able to select this 4px bar and move it around. I also cannot change the size of this bar; it has to be 4px in height.
This wouldn't be that big of an issue if I wasn't using OpenGL to create the object. OpenGL obviously does not have its own selection features so I am needing to program my own.
Initially, after research, I built a color selector to identify the object. How my color selector works is: whatever x and y my finger touch returns from touchesBegan: is the pixel I grab from a screenshot of the OpenGL view. The issue with this is that finger location is not precise at all. If I use the mouse it works perfectly...
I decided to maybe loop through a buffer zone around the selected x and y, but unfortunately a screenshot of the OpenGL view has antialiasing applied to the image when it's stored in memory, and the buffer returns several shades of my object's color. I could possibly do a comparative color lookup to see if it's in the range of colors, but that seems overly complicated given how much I have already had to do. Plus, cycling through the buffer zone isn't quick.
I also have thought maybe just remembering the location of my line on the screen and if my finger is close to that location just know that that's the one I want to select and move it around.
The future of this application can have up to 4 lines just like this, so I want something more reliable than just remembering where each line is located.
What better way is there out there of handling selection of small objects?
How about maintaining an array of frames for the four objects, but expanding the heights to something more manageable (8px or bigger)? Then, a touch within the larger region could be compared against the array (using CGRectContainsPoint). If you get a hit, then "snap to" the center point of the smaller (4px) rectangle before beginning the drag.
I do something like this by maintaining a list of "drop targets" for drag & drop, where it snaps to the drop target when it gets pretty close. Don't know if I'm conveying the idea very well, but it ought to work.
If the four 4px rectangles are going to be contiguous or very close together, you'll have to be able to make the selected one stand out or the user won't be able to tell which they're dragging -- but you could do that by making it bigger (maybe 6-8 px) then bringing it to the front so it overlays its adjacent neighbors.
More of an idea than an answer I guess.
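To make that idea concrete, here is a minimal Swift sketch of the expanded hit test and snap-to-center behavior described above; the type name, the example touch margin, and the hitTest helper are assumptions for illustration (CGRectContainsPoint from Objective-C is rect.contains(_:) in Swift).

```swift
import UIKit

struct BarSelector {
    /// The actual 4px-high bar frames, in view coordinates.
    var barFrames: [CGRect]
    /// Extra vertical slop so a finger can hit a 4px bar (assumed value).
    var touchMargin: CGFloat = 18

    /// Returns the index of the bar whose expanded frame contains the touch,
    /// plus the point snapped to that bar's vertical center, or nil if nothing was hit.
    func hitTest(_ point: CGPoint) -> (index: Int, snappedPoint: CGPoint)? {
        for (index, frame) in barFrames.enumerated() {
            // Expand the tiny frame into something finger-sized before testing.
            let expanded = frame.insetBy(dx: 0, dy: -touchMargin)
            if expanded.contains(point) {
                return (index, CGPoint(x: point.x, y: frame.midY))
            }
        }
        return nil
    }
}
```

In touchesBegan: you would call hitTest(_:) with the touch location and, on a hit, begin dragging the matching bar from its snapped center point.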
John,
I would suggest a different approach. As you've discovered, touches in iOS are very imprecise. Apple usually suggests that the "hit box" for your controls be at least 40x40 points. I've gone as small as 30x30 points, but that starts to get hard.
What I would suggest you do is to factor your code so the app knows where the line is, and keeps track of it as a logical object. Then in your touch handler, interpret touches based on a large "buffer area" around the things you want the user to be able to move. If you just have a single horizontal bar, this should work great. Where you'll get into trouble is if you have multiple, thin horizontal bars that are close together. In that case you might need to rethink your app design and find another way to solve the problem.
As for the implementation details, you might add a pan gesture recognizer to your OpenGL view, and have it notify the OpenGL view of touch and drag actions. Then your OpenGL view can use knowledge of where your draggable objects are to decide how to interpret the touches.
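As a rough sketch of that last suggestion (assumed names, and reusing the hypothetical BarSelector from the previous sketch): attach a UIPanGestureRecognizer to the view hosting the OpenGL content and use the logical bar positions to decide what a drag means.

```swift
import UIKit

final class GLOverlayController: NSObject {
    private let glView: UIView        // the view hosting your OpenGL content (assumed)
    private var draggedBarIndex: Int?
    var selector: BarSelector         // logical model of where the bars are

    init(glView: UIView, selector: BarSelector) {
        self.glView = glView
        self.selector = selector
        super.init()
        let pan = UIPanGestureRecognizer(target: self, action: #selector(handlePan(_:)))
        glView.addGestureRecognizer(pan)
    }

    @objc private func handlePan(_ gesture: UIPanGestureRecognizer) {
        let location = gesture.location(in: glView)
        switch gesture.state {
        case .began:
            // Interpret the touch against the buffered hit areas, not the 4px bars themselves.
            draggedBarIndex = selector.hitTest(location)?.index
        case .changed:
            guard let index = draggedBarIndex else { return }
            // Move the logical bar; the GL render loop reads barFrames to draw it.
            selector.barFrames[index].origin.y = location.y - selector.barFrames[index].height / 2
        default:
            draggedBarIndex = nil
        }
    }
}
```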

Best Practice for Rendering Context of free hand drawing on iPad 3

I currently have a free hand drawing iPad app that adds lines to a mutable path via quad curves in the touches methods, then calls setNeedsDisplayInRect on the new area.
The problem is that when the drawing (path) gets rather large, it takes longer to redraw and begins to bog down. Also, whenever the user changes the brush size or color, the change gets applied to overlapping parts of the previously drawn path on redraw.
To counter this, I call renderInContext in a background thread in touchesEnded, and merge this with another UIImage in an imageview behind the draw view. Then clear the draw view.
This also helps so when the user hits save, the drawing is usually already rendered in a single UIImage - ready to go.
This works fine on other devices, but on the iPad 3 retina display the performance is really awful, and it tends to crash whenever the user lifts his finger multiple times while drawing quickly.
I am seeking any type of advice on best practice for handling this type of situation. Aside from adding additional views to render off of in the background, to prevent the main and background threads from accessing the same view at the same time (which sounds rather hack-ish), I feel like I'm beating a dead horse.
In my current app, I made a working implementation that works fine on iPad 2 as well as 3, regardless of path length or number of paths. It seems that the graphics card is better at drawing lots of small paths than a few large paths, and either one is faster than rendering an image into a context. So, what I do is: even if the user is continuously drawing, I break the path into many smaller paths and add those to an array. This approach gives me one advantage, and one disadvantage.
Advantage: The ability to zoom and redraw the image crisply
Disadvantage: Can't do pixel perfect erasing
As far as multiple colors, I made a subclass of UIBezierPath that includes a color property. Since colors are now serializable via NSCoding, they are easily saveable. In addition, I have a "stroke" object, which holds all of the paths the user created in one continuous stroke. This way I can handle undo / redo correctly.
Hope this info helps.
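Here is a minimal Swift sketch of the model described above, with my own (assumed) type names: a colored path segment plus a "stroke" grouping all the segments of one continuous gesture, so undo/redo can remove or restore whole strokes. (The original answer subclasses UIBezierPath to add a color property; composition is used here purely for brevity.)

```swift
import UIKit

/// One small path segment with its own color.
struct PathSegment {
    let path: UIBezierPath
    let color: UIColor
}

/// All the segments the user created in one continuous finger stroke.
struct Stroke {
    var segments: [PathSegment] = []
}

final class DrawingModel {
    private(set) var strokes: [Stroke] = []
    private var redoStack: [Stroke] = []

    func add(_ stroke: Stroke) {
        strokes.append(stroke)
        redoStack.removeAll()   // a new stroke invalidates the redo history
    }

    func undo() {
        guard let last = strokes.popLast() else { return }
        redoStack.append(last)
    }

    func redo() {
        guard let last = redoStack.popLast() else { return }
        strokes.append(last)
    }
}
```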

Menu from the Contre Jour app

I'm trying to do a menu like the one the "Contre Jour" game has, with 3 elements spinning in a circle when the user drags left and right. I'm using CALayers with CATransforms to position them in a 3D spinning wheel (no problem so far).
I need a way (maybe with NSTimers?) to calculate the in-between values, because Core Animation just interpolates values, but if you NSLog them it's just going to show the start and the end, or just the end. I need all the in-between values, and I need to snap the wheel into position when I release the finger (touches ended); there are 3 elements, so each one should be at 120 degrees.
My guess, and I am quite sure I'm correct, is that they are using a game engine such as Unity3D or Cocos2D or any other of the many available to manage their sprites, animations, textures, physics, and basically everything. Trying to replicate it outside of a game engine will most likely result in crummy performance and a lot of hair pulling. I would suggest looking into a dedicated game engine and giving it a shot there.
I am not sure I understand exactly what Contre Jour does with the spinners; anyway, I think that a reasonable approach for your case is using a UIPanGestureRecognizer to update the status of your spinning wheel according to the panning.
Now, it is not clear what you do to animate the spinning wheel (if you could provide some code, this would help understanding exactly what you are trying to do), but the idea would be this: instead of specifying an animation ending point far away from the starting point (and letting Core Animation do all the handling for you, even when the dragging has stopped), you would only modify the status of the spinning wheel in small increments.
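Here is a minimal Swift sketch of that incremental approach; the layer, the angle math, and the snap-on-end behavior are assumptions for illustration, not the original poster's code. A pan gesture nudges the rotation a little on every change, and when the gesture ends the angle snaps to the nearest 120-degree slot.

```swift
import UIKit

final class SpinnerController: NSObject {
    let wheelLayer: CALayer            // the layer holding the three menu items (assumed)
    private var currentAngle: CGFloat = 0

    init(wheelLayer: CALayer) {
        self.wheelLayer = wheelLayer
        super.init()
    }

    @objc func handlePan(_ gesture: UIPanGestureRecognizer) {
        guard let view = gesture.view else { return }
        switch gesture.state {
        case .changed:
            // Convert horizontal finger movement into a small rotation increment.
            let translation = gesture.translation(in: view)
            currentAngle += translation.x / view.bounds.width * .pi
            gesture.setTranslation(.zero, in: view)
            wheelLayer.transform = CATransform3DMakeRotation(currentAngle, 0, 1, 0)
        case .ended, .cancelled:
            // Snap to the nearest 120-degree slot (2π/3) so one item faces the user.
            let slot = CGFloat.pi * 2 / 3
            currentAngle = (currentAngle / slot).rounded() * slot
            wheelLayer.transform = CATransform3DMakeRotation(currentAngle, 0, 1, 0)
        default:
            break
        }
    }
}
```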
If your only issue is stopping the animation when the dragging stops, you could try calling removeAnimationForKey on your layer to halt a specific animation.
Look into CADisplayLink. This works very much like an NSTimer, except its refresh rate is tied to that of the display, so your animations will be smoother than if you were to use timers. This will allow you to calculate all the in-between values and update your control.
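For example, a bare-bones CADisplayLink setup might look like this (the class and method names are assumptions):

```swift
import UIKit

final class WheelUpdater: NSObject {
    private var displayLink: CADisplayLink?

    func start() {
        // Fires once per display refresh, unlike a plain NSTimer.
        let link = CADisplayLink(target: self, selector: #selector(tick(_:)))
        link.add(to: .main, forMode: .common)
        displayLink = link
    }

    func stop() {
        displayLink?.invalidate()
        displayLink = nil
    }

    @objc private func tick(_ link: CADisplayLink) {
        // Compute the in-between value for this frame and update the wheel here.
    }
}
```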
I'm not clear what you are asking, but I do have one insight for you: to get the in-between values of an in-flight animation, query the layer's presentationLayer property. The property that's being animated will have a value that's a close approximation of its on-screen appearance at the moment you fetch the value.
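For example (a sketch, assuming the wheel's rotation is animated on the transform.rotation.y key path), you can read the in-flight value from the presentation layer like this:

```swift
import UIKit

func currentRotation(of layer: CALayer) -> CGFloat {
    // The presentation layer reflects the on-screen state of an in-flight animation.
    guard let presentation = layer.presentation(),
          let rotation = presentation.value(forKeyPath: "transform.rotation.y") as? NSNumber
    else {
        return 0
    }
    return CGFloat(rotation.doubleValue)
}
```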

Free hand painting and erasing using UIBezierPath and CoreGraphics

I have been trying a lot but haven't found a solution yet. I have to implement painting and erasing on iOS, and I successfully implemented the painting logic using UIBezierPath. The problem is that for erasing I implemented the same logic as for painting, using kCGBlendModeClear, but now I can't redraw on the erased area. This is because on each pass through drawRect I have to stroke both the painting and the erasing paths. So is there any way to subtract the erasing path from the drawing path, to get the resultant path and then stroke it? I am very new to Core Graphics and looking forward to your replies and comments. Or is there any other logic to implement the same thing? I can't use an eraser in the background color because my background is textured.
You don't need to stroke the path every time; in fact, doing so is a huge performance hit. I guarantee if you try it on an iPad 3 you will be met with a nearly unresponsive screen after a few strokes. You only need to add and stroke the path once. After that, it will be stored as pixel data. So don't keep track of your strokes; just add them, stroke them, and get rid of them. Also look into using a CGLayer (you can draw to that outside the main loop, and only render it to your rect in the main loop, so it saves lots of time).
These are the steps that I use, and I am doing the exact same thing (I use a CGPath instead of UIBezierPath, but the idea is the same):
1) In touches began, store the touch point and set the context to either erase or draw, depending on what the user has selected.
2) In touches moved, if the point is a certain arbitrary distance away from the last point, then move to the last point (CGContextMoveToPoint) and draw a line to the new point (CGContextAddLineToPoint) in my CGLayer. Calculate the rectangle that was changed (i.e. contains the two points) and call setNeedsDisplayInRect: with that rectangle.
3) In drawRect render the CGLayer into the current window context ( UIGraphicsGetCurrentContext() ).
On an iPad 3 (the one that everyone has the most trouble with, due to its enormous pixel count) this process takes between 0.05 ms and 0.15 ms per render (depending on how fast you swipe). There is one caveat though: if you don't take the proper precautions, the entire frame rectangle will be redrawn even if you only use setNeedsDisplayInRect:. My hacky way to combat this (thanks to the dev forums) is described in my self-answered question here. Otherwise, if your view takes a long time to draw the entire frame (mine took an unacceptable 150 ms), you will get a short stutter under certain conditions while the view buffer gets recreated.
EDIT With the new info from your comments, it seems that the answer to this question will benefit you -> Use a CoreGraphic Stroke as Alpha Mask in iPhone App
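A condensed Swift sketch of steps 1-3 above; the view class, the minimum-movement check, and the dirty-rect padding are assumptions for illustration, and the CGLayer is created lazily on the first draw pass because it needs an existing context to be created from.

```swift
import UIKit

final class CanvasView: UIView {
    private var drawingLayer: CGLayer?
    private var lastPoint: CGPoint = .zero
    var isErasing = false   // set in touchesBegan based on the user's tool selection

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        lastPoint = touches.first?.location(in: self) ?? .zero
    }

    override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let point = touches.first?.location(in: self),
              let layerContext = drawingLayer?.context else { return }
        let dx = point.x - lastPoint.x
        let dy = point.y - lastPoint.y
        guard dx * dx + dy * dy > 4 else { return }   // assumed minimum movement of ~2px

        // Draw (or erase) the new segment into the offscreen CGLayer.
        layerContext.setLineWidth(4)
        layerContext.setLineCap(.round)
        layerContext.setBlendMode(isErasing ? .clear : .normal)
        layerContext.setStrokeColor(UIColor.black.cgColor)
        layerContext.move(to: lastPoint)
        layerContext.addLine(to: point)
        layerContext.strokePath()

        // Only the rectangle containing the two points needs redisplay.
        let dirty = CGRect(x: min(lastPoint.x, point.x), y: min(lastPoint.y, point.y),
                           width: abs(dx), height: abs(dy)).insetBy(dx: -4, dy: -4)
        lastPoint = point
        setNeedsDisplay(dirty)
    }

    override func draw(_ rect: CGRect) {
        guard let context = UIGraphicsGetCurrentContext() else { return }
        if drawingLayer == nil {
            // CGLayer must be created from an existing context, so do it here on first draw.
            drawingLayer = CGLayer(context, size: bounds.size, auxiliaryInfo: nil)
        }
        if let layer = drawingLayer {
            context.draw(layer, at: .zero)
        }
    }
}
```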
Hi, here is the code for making painting, erasing, undo, redo, and saving as a picture. You can check the sample code and implement it in your project.
Here

C# Faster Drawing Vector Graphics

In my application I want to draw polygons using the Windows CreateGraphics method, and later edit a polygon by allowing the user to select its points and re-position them.
I use the mouse move event to get the new position of the point being moved, and the Paint event to re-draw the polygon. The application is working, but when a point is moved the movement is not smooth. This is probably due to the large number of mouse move and paint events firing while the point is being moved.
I don't know whether the mouse move or the paint event is the performance hindrance.
Can anyone make a suggestion as to how to improve this?
Look at double buffering in C#; this can speed things up greatly.
From my Win32 experience (not .NET), the fastest vector graphic is a Metafile. I don't know if the C# System.Drawing.Imaging.Metafile is as fast as the Win32 one. It could be as fast as your 2D video hardware.
