Swift - Detect the user drawing through defined points/areas (iOS)

My app already lets the user draw on the screen. How can I define certain points on the screen and detect when the user draws through them? I've read about a few Swift methods but can't quite tell whether they apply to what I need, and I can't find any "collision" methods.

You can use CGRect's contains-point check (CGRectContainsPoint in the Swift of that era, rect.contains(_:) in modern Swift). However, I would recommend using a rectangle rather than a single point: it's very difficult to draw over one exact point. So define a CGRect for each area of interest, then test it against the point the user touched.
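A minimal sketch of that check in modern Swift, assuming a UIView subclass that already receives the drawing touches; targetAreas and areaWasHit are hypothetical names for illustration:

import UIKit

class DrawingView: UIView {

    // Define the areas to detect; a rect is far easier to hit than a single point.
    let targetAreas = [
        CGRect(x: 50, y: 50, width: 40, height: 40),
        CGRect(x: 200, y: 120, width: 40, height: 40)
    ]

    override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
        super.touchesMoved(touches, with: event)
        guard let point = touches.first?.location(in: self) else { return }
        // Check every defined area against the current touch point.
        for (index, area) in targetAreas.enumerated() where area.contains(point) {
            areaWasHit(index) // react to the stroke passing through this area
        }
    }

    func areaWasHit(_ index: Int) {
        print("Stroke passed through area \(index)")
    }
}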

How to properly use setNeedsDisplayInRect for iOS apps?

I'm on Yosemite 10.10.5 and Xcode 7, using Swift to make a game targeting iOS 8 and above.
EDIT: More details that might be useful: this is a 2D puzzle/arcade game where the player moves stones around to match them up. There is no 3D rendering at all. Drawing is already too slow, and I haven't even gotten to explosions with debris yet. There is also a level fade-in, which is very concerning. But this is all on the simulator so far; I don't have an actual iPhone to test with yet, and I'm betting the actual device will be at least a little faster.
I have my own Draw2D class, which is a type of UIView, set up as in this tutorial. I have a single NSTimer which initiates the following chain of calls in Draw2D:
[setNeedsDisplay]; // which calls drawRect, which is the master draw function of Draw2D
override func drawRect(rect: CGRect)
{
    scr_step(); // the master update function; loops through all objects and calls their individual update functions. I put it here so that updating and drawing are always in sync
    CNT = UIGraphicsGetCurrentContext(); // get the current drawing context
    switch (Realm) // based on what realm I'm in, call the draw function for that realm
    {
        case rlm.intro: scr_draw_intro();
        case rlm.mm: scr_draw_mm();
        case rlm.level: scr_draw_level(); // this one in particular loops through all objects and calls their individual draw functions
        default: return;
    }
    var i = AARR.count - 1; // loop through my own animation objects and draw them too; iterating backwards because sometimes they destroy themselves
    while (i >= 0)
    {
        let A = AARR[i];
        A.scr_draw();
        i -= 1;
    }
}
And all the drawing works fine, but slowly.
The problem is now I want to optimize drawing. I want to draw only in the dirty rectangles that need drawing, not the whole screen, which is what setNeedsDisplay is doing.
I could not find any tutorials or good example code for this. The closest I found was Apple's documentation here, but among other things it does not explain how to get a list of all the dirty rectangles so far. It also does not explicitly state whether the list of dirty rectangles is automatically cleared at the end of each call to drawRect.
It also does not explain whether I have to manually clip all drawing based on the rectangles. I found conflicting info about that around the web; apparently different iOS versions do it differently. In particular, if I have to manually clip things, then I don't see the point of Apple's core function in the first place. I could just maintain my own list of rectangles and manually compare each drawing destination rectangle against the dirty rectangles to see if I should draw anything. That would be a huge pain, however, because I have a background picture in each level and I would have to draw a piece of it behind every moving object. What I'm really hoping for is the proper way to use setNeedsDisplayInRect to let the core framework do automatic clipping for everything that gets drawn on the next draw cycle, so that it automatically draws only that piece of the background plus the moving object on top.
So I tried some experiments: First in my array of stones:
func scr_draw_stone()
{
    // the following 3 lines are new; I added them to try to draw in only dirty rectangles
    if (xvp != xv || yvp != yv) // if the stone's coordinates have changed from its previous coordinates
    {
        MyD.setNeedsDisplayInRect(CGRectMake(x, y, MyD.swc, MyD.shc)); // MyD.swc is Draw2D's current square width in points, maintained to softcode things for different screen sizes
    }
    MyD.img_stone?.drawInRect(CGRectMake(x, y, MyD.swc, MyD.shc)); // draw the plain stone
    img?.drawInRect(CGRectMake(x, y, MyD.swc, MyD.shc)); // draw the stone's icon
}
This did not seem to change anything. Things were drawing just as slow as before. So then I put it in brackets:
[MyD.setNeedsDisplayInRect(CGRectMake(x, y, MyD.swc, MyD.shc))];
I have no idea what the brackets do, but my original setNeedsDisplay was in brackets just like they said to do in the tutorial. So I tried it in my stone object, but it had no effect either.
So what do I need to do to make setNeedsDisplayInRect work properly?
Right now, I suspect there's some conditional check I need in my master draw function, something like:
if (ListOfDirtyRectangles.count == 0)
{
    [setNeedsDisplay]; // just redraw the whole view
}
else
{
    [setNeedsDisplayInRect(ListOfDirtyRectangles)];
}
However, I don't know the name of the built-in list of dirty rectangles. I found this saying the method name is getRectsBeingDrawn, but that is for Mac OS X; it doesn't exist in iOS.
Can anyone help me out? Am I on the right track with this? I'm still fairly new to Macs and iOS.
You should really avoid overriding drawRect if at all possible. Existing views/technologies take advantage of hardware capabilities to make things a lot faster than manually drawing in a graphics context can, including buffering the contents of views, using the GPU, etc. This is repeated many times in the "View Programming Guide for iOS".
If you have a background and other objects on top of that, you should probably use separate views or layers for those rather than redraw them.
You may also consider technologies such as SpriteKit, SceneKit, OpenGL ES, etc.
Beyond that, I'm not quite sure I understand your question. When you call setNeedsDisplayInRect, it will add that rect to those that need to be redrawn (possibly merging with rectangles that are already in the list). drawRect: will then be called a bit later to draw those rectangles one at a time.
The whole point of the setNeedsDisplayInRect / drawRect: separation is to make sure multiple requests to redraw a given part of the view are merged together, and drawing only happens once per redraw cycle.
You should not call your scr_step method in drawRect:, as it may be called multiple times in a single redraw cycle. This is clearly stated in the "View Programming Guide for iOS" (emphasis mine):
The implementation of your drawRect: method should do exactly one thing: draw your content. This method is not the place to be updating your application's data structures or performing any tasks not related to drawing. It should configure the drawing environment, draw your content, and exit as quickly as possible. And if your drawRect: method might be called frequently, you should do everything you can to optimize your drawing code and draw as little as possible each time the method is called.
Regarding clipping, the documentation of drawRect states that:
You should limit any drawing to the rectangle specified in the rect parameter. In addition, if the opaque property of your view is set to YES, your drawRect: method must totally fill the specified rectangle with opaque content.
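As an illustration of both points, here is a minimal sketch in modern Swift (where setNeedsDisplayInRect and drawRect: are spelled setNeedsDisplay(_:) and draw(_:)); GameObject, gameObjects, and frameRect are assumptions for illustration, not names from the question:

import UIKit

class GameObject {
    var frameRect = CGRect.zero
    func draw(in context: CGContext) { /* object-specific drawing */ }
}

class GameView: UIView {

    var gameObjects: [GameObject] = []

    // When an object moves, invalidate only its old and new rectangles.
    func objectMoved(_ object: GameObject, from oldFrame: CGRect) {
        setNeedsDisplay(oldFrame)          // redraw the background it uncovered
        setNeedsDisplay(object.frameRect)  // redraw it at its new position
    }

    override func draw(_ rect: CGRect) {
        guard let context = UIGraphicsGetCurrentContext() else { return }
        // UIKit has already clipped the context to `rect`; skipping objects that
        // don't intersect it just avoids issuing draw calls that would be
        // clipped away anyway.
        drawBackgroundPiece(in: context, rect: rect)
        for object in gameObjects where object.frameRect.intersects(rect) {
            object.draw(in: context)
        }
    }

    func drawBackgroundPiece(in context: CGContext, rect: CGRect) {
        // draw only the piece of the background image that intersects `rect`
    }
}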
Without any idea what your view shows, what the various methods you call do, or what actually takes time, it's difficult to provide much more insight into what you could do. Provide more details about your actual needs, and we may be able to help.

Interact with complex figure in iOS

I need to be able to interact with a representation of a cylinder that has many different parts on it. When the user taps one of the small rectangles, I need to display a popover related to that specific piece (form).
The next image demonstrates a realistic 3D approach. But, I repeat, I need to solve the problem; the 3D is NOT required (it would be really cool though). A representation that meets the functional needs will suffice.
The info needed to draw the parts comes from an API (size, position, etc.).
I don't really need it to be realistic. The simplest approximation would be to show the cylinder in a 2D representation, like a rectangle made out of interactable small rectangles.
So, as I mentioned, I think there are (as I see it) two opposite approaches: Realistic or Simplified
Is there a way to achieve a nice solution in the middle? What libraries, components, frameworks that I should look into?
My research has led me to SceneKit, but I still don't know whether I will be able to interact with it. Interaction is a very important part, as I need to display a popover when the user taps any small rectangle on the cylinder.
Thanks
You don't need any special frameworks to achieve an interaction like this. This effect can be achieved with standard UIKit, UIView, and a little trigonometry. You can actually draw exactly your example image using 2D math and drawing. My answer is not an exact formula, but it involves thinking about how the shapes are defined and breaking the problem down into manageable steps.
A cylinder can be defined by two offset circles representing the end pieces, connected at their radii. I will use an orthographic projection meaning the cylinder doesn't appear smaller as the depth extends into the background (but you could adapt to perspective if needed). You could draw this with CoreGraphics in a UIView drawRect.
A square slice represents an angular piece of the circle, offset by an amount smaller than the length of the cylinder, but in the same direction, as in the following diagram (sorry for the imprecise drawing).
This square slice you are interested in is the area outlined in solid red, outside the radius of the first circle, and inside the radius of the imaginary second circle (which is just offset from the first circle by whatever length you want the slice).
To draw this area you simply need to draw a path of the outline of each arc and connect the endpoints.
To check if a touch is inside one of these square slices:
Check if the angle of the touch point, measured from the circles' origin, falls between the slice's two edge angles.
Check if the touch point is outside the radius of the inside circle.
Check if the touch point is inside the radius of the outside circle. (Note what this means if the circles are more than a radius apart.)
To find a point at which to display the popover, you could average the endpoints of the slice, or find the middle angle between the two edges and offset by half the distance.
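A minimal Swift sketch of those three checks plus the popover point, under the assumption that each slice is stored as a center, two radii, and two edge angles (all names here are hypothetical):

import UIKit

struct SliceRegion {
    let center: CGPoint
    let innerRadius: CGFloat
    let outerRadius: CGFloat
    let startAngle: CGFloat   // radians
    let endAngle: CGFloat     // radians, assumed greater than startAngle

    func contains(_ point: CGPoint) -> Bool {
        let dx = point.x - center.x
        let dy = point.y - center.y
        let distance = sqrt(dx * dx + dy * dy)
        // checks 2 and 3: outside the inner circle, inside the outer circle
        guard distance >= innerRadius && distance <= outerRadius else { return false }
        // check 1: the touch angle lies between the slice's edge angles
        var angle = atan2(dy, dx)
        if angle < startAngle { angle += 2 * .pi } // normalize into the slice's range
        return angle >= startAngle && angle <= endAngle
    }

    // A point for the popover: the middle angle at the middle radius.
    var popoverAnchor: CGPoint {
        let midAngle = (startAngle + endAngle) / 2
        let midRadius = (innerRadius + outerRadius) / 2
        return CGPoint(x: center.x + midRadius * cos(midAngle),
                       y: center.y + midRadius * sin(midAngle))
    }
}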
Theoretically, doing this in SceneKit with either SpriteKit or UIKit popovers is ideal.
However, SceneKit (and SpriteKit) seem to be in a state of flux, wherein nobody from Apple is communicating with users about the raft of issues folks are currently having with both. Going from a relatively stable and performant SpriteKit in iOS 8.4 to a lot of lost performance in iOS 9 seems to be a common experience. SceneKit simply doesn't seem finished, and the documentation and community are both nearly non-existent as a result.
That being said... the theory is this:
Material IDs are what's used in traditional 3D apps to define areas of an object that have different materials. Somehow these Material IDs are called "elements" in SceneKit. I haven't been able to find much more about this.
It should be possible to detect the "element" that's underneath a touch on an object and respond accordingly. You should even be able to change the state/nature of the material on that element to indicate it's currently selected.
If you want a smooth, well-rounded cylinder as in your example, start with a cylinder made of only enough segments to define the material IDs you need for your touchable "rectangular" sections.
Later you can add a smoothing operation to the cylinder to make it round, and all the extra smoothing geometry in each quadrant of unique material ID should remain responsive, regardless of how you add this extra detail to smooth the presentation of the cylinder.
Idea for the "Simplified" version:
If this representation is okay, you can use a UICollectionView.
Each cell can have a defined size thanks to collectionView:layout:sizeForItemAtIndexPath:.
Each cell of the collection could then be a small rectangle representing a touchable part of the cylinder.
Use collectionView:didSelectItemAtIndexPath: to get the touch.
This will help you to display the popover at the right place:
CGRect rect = [collectionView layoutAttributesForItemAtIndexPath:indexPath].frame;
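For concreteness, a minimal modern-Swift sketch of this collection-view idea (the snippets above are Objective-C); cellSize and showPopover are hypothetical names:

import UIKit

class CylinderGridController: UICollectionViewController, UICollectionViewDelegateFlowLayout {

    let cellSize = CGSize(width: 60, height: 44) // one touchable "part" of the cylinder

    func collectionView(_ collectionView: UICollectionView,
                        layout collectionViewLayout: UICollectionViewLayout,
                        sizeForItemAt indexPath: IndexPath) -> CGSize {
        return cellSize
    }

    override func collectionView(_ collectionView: UICollectionView,
                                 didSelectItemAt indexPath: IndexPath) {
        // The selected cell's frame tells you where to anchor the popover.
        guard let attributes = collectionView.layoutAttributesForItem(at: indexPath) else { return }
        showPopover(from: attributes.frame, forPart: indexPath.item)
    }

    func showPopover(from rect: CGRect, forPart partIndex: Int) {
        // present a popover anchored at `rect` for the tapped part
    }
}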
Finally, you can choose the appropriate popover (if the app has to work on iPhone) here:
https://www.cocoacontrols.com/search?q=popover
Not perfect, but I think this is efficient!
Yes, SceneKit.
When the user performs a touch event, you already know the 2D coordinate on screen, so your only decision is whether or not to show a popover, even if no 3D model exists.
First, we can logically split the requirement into two pieces: determining the touched segment, and showing the right "color" on each segment.
I think the purpose of the 3D model is to determine which piece of data to show, if I understand you correctly. In that case, SCNView's hit test method will do most of the work for you. Perform a hit test, take the hit node and the hit's local 3D coordinate on that node, and you can then calculate which segment was hit by the touch and make the decision.
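A minimal sketch of that hit-test step in Swift; segmentIndex(for:on:) is a hypothetical helper standing in for your own segment math:

import UIKit
import SceneKit

func handleTap(_ gesture: UITapGestureRecognizer, in scnView: SCNView) {
    let location = gesture.location(in: scnView)
    // hitTest returns the 3D objects under the 2D touch point, nearest first
    guard let hit = scnView.hitTest(location, options: nil).first else { return }
    let node = hit.node                      // which object was touched
    let localPoint = hit.localCoordinates    // where on that object, in its own space
    let segment = segmentIndex(for: localPoint, on: node)
    print("tapped segment \(segment)")       // decide which popover to show
}

func segmentIndex(for point: SCNVector3, on node: SCNNode) -> Int {
    // e.g. derive an angle from point.x/point.z and a row from point.y (assumption)
    return 0
}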
Now, how to draw the surface of the cylinder would be the only remaining question, right? There are various ways to do this: for example, paint each image you need programmatically and attach it to the cylinder's material, or keep your image files on disk and use them as the material for the cylinder...
I think the problem would be basically solved.

Advanced custom control features in Swift

I'm working on building a custom control. Basically I want to allow the application to generate rectangles (positioned at x = 0 with a variable y value that increases as each rectangle is added).
I'd like them to respond to gestures, with two positions: closed, where the rectangle is mostly hidden, and open, where the rectangle is expanded fully so that it is entirely visible but still tethered to the side.
I've already designed an application with this in mind. Seeing as the rectangles will be generated by the users, I assume Core Graphics would be best for the job. Also, I want the rectangles to display different information based on their gesture-related position.
Is it possible to combine core graphics with these types of controls? I know this is asking a lot.
It's just that I'm having trouble determining how to combine each component in code.
Any advice would be greatly appreciated. Thanks!
Clearly, we're not here to write code for you, but a few thoughts:
You say that you assume Core Graphics would be best for the job. You definitely could use it, but you could also use CAShapeLayer.
So you might create a gesture recognizer whose handler:
Creates a CAShapeLayer when the gesture's state is UIGestureRecognizerStateBegan and adds it as a sublayer of the view's layer.
Replaces that shape layer's path property with the CGPath of a UIBezierPath built from the updated locations the handler captures while the gesture's state is UIGestureRecognizerStateChanged.
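A minimal sketch of such a handler in modern Swift (where those states are spelled .began and .changed); currentLayer and points are assumptions:

import UIKit

class ShapeDrawingController {
    private var currentLayer: CAShapeLayer?
    private var points: [CGPoint] = []

    @objc func handlePan(_ gesture: UIPanGestureRecognizer) {
        guard let view = gesture.view else { return }
        let point = gesture.location(in: view)

        switch gesture.state {
        case .began:
            // create a shape layer and add it as a sublayer of the view's layer
            let layer = CAShapeLayer()
            layer.strokeColor = UIColor.red.cgColor
            layer.fillColor = nil
            layer.lineWidth = 2
            view.layer.addSublayer(layer)
            currentLayer = layer
            points = [point]
        case .changed:
            // rebuild the path from all captured locations and replace the layer's path
            points.append(point)
            let path = UIBezierPath()
            path.move(to: points[0])
            for p in points.dropFirst() { path.addLine(to: p) }
            currentLayer?.path = path.cgPath
        default:
            break
        }
    }
}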
I'd suggest you take a crack at that (googling "CAShapeLayer tutorial" or "UIPanGestureRecognizer example" or what have you, if any of these concepts are unfamiliar).
If you really want to use Core Graphics, you would have a custom UIView subclass whose drawRect draws all of the rectangles. Conceptually it's very similar to the above, but you also have to write your own rectangle-drawing code in drawRect, rather than letting CAShapeLayer do that for you.

Drawing lines in cocos2d

I'm trying to draw lines in Cocos2d using touches.
I had a system where it would just add a small sprite where you touched, but it's working terribly. So I've been trying to find a way to draw actual lines using a method like ccDrawLine, but every tutorial I find seems to leave something out, and I just can't figure it out.
I've found this tutorial, Drawing line on touches moved in COCOS2D, but I don't understand a few things about it.
It seems to reference the same variable from two different files, and I don't understand how it's doing that (the naughtyTouchArray variable).
I can't find a complete guide on drawing lines, so sorry for the codeless question, but I'm getting frustrated.
Thanks.
The answer you've linked in your question provides a good solution to your problem. There are no "two different files", just two different methods of one layer. One method (ccTouchesMoved:withEvent:) handles touches and fills the array of points to be connected to each other, one-by-one, with lines. Per the cocos2d documentation, all drawing must be placed in the draw method of the node, so the other method (draw) just draws lines according to that array. Cocos2d is based on OpenGL and fully redraws the scene every tick, so you cannot draw just the new line; you have to draw all of them.
Alternatively, any other node can draw your array in its draw method, so you can simply pass the stored array of points from the layer that detects touches to that node.
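Since the linked tutorial's code is Objective-C cocos2d, here is the same store-points-then-redraw-everything pattern expressed in plain UIKit Swift, purely for illustration (this is not cocos2d API):

import UIKit

class LineView: UIView {
    private var points: [CGPoint] = []

    override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let point = touches.first?.location(in: self) else { return }
        points.append(point)   // the equivalent of filling naughtyTouchArray
        setNeedsDisplay()      // schedule a redraw
    }

    override func draw(_ rect: CGRect) {
        guard points.count > 1, let context = UIGraphicsGetCurrentContext() else { return }
        // like cocos2d's draw method, this redraws *all* stored lines every time
        context.setStrokeColor(UIColor.black.cgColor)
        context.setLineWidth(2)
        context.move(to: points[0])
        for point in points.dropFirst() { context.addLine(to: point) }
        context.strokePath()
    }
}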

Free hand painting and erasing using UIBezierPath and CoreGraphics

I have been trying a lot but haven't found a solution yet. I have to implement painting and erasing on iOS, and I successfully implemented the painting logic using UIBezierPath. The problem is erasing: I implemented the same logic as for painting, using kCGBlendModeClear, but I can't redraw on the erased area, because on each pass through drawRect I have to stroke both the painting and erasing paths. So is there any way to subtract the erasing path from the drawing path to get a resultant path and then stroke that? I am very new to Core Graphics and am looking forward to your replies and comments. Or any other logic to implement the same; I can't use an eraser in the background color because my background is textured.
You don't need to stroke the path every time; in fact, doing so is a huge performance hit. I guarantee if you try it on an iPad 3 you will be met with a nearly unresponsive screen after a few strokes. You only need to add and stroke the path once. After that, it will be stored as pixel data. So don't keep track of your strokes: just add them, stroke them, and get rid of them. Also look into using a CGLayer (you can draw to it outside the main loop, and only render it to your rect in the main loop, which saves lots of time).
These are the steps that I use, and I am doing the exact same thing (I use a CGPath instead of UIBezierPath, but the idea is the same):
1) In touches began, store the touch point and set the context to either erase or draw, depending on what the user has selected.
2) In touches moved, if the point is a certain arbitrary distance away from the last point, then move to the last point (CGContextMoveToPoint) and draw a line to the new point (CGContextAddLineToPoint) in my CGLayer. Calculate the rectangle that was changed (i.e. contains the two points) and call setNeedsDisplayInRect: with that rectangle.
3) In drawRect render the CGLayer into the current window context ( UIGraphicsGetCurrentContext() ).
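A minimal Swift sketch of those three steps, assuming a UIView subclass; the CGLayer is created lazily from the view's drawing context (so it exists after the first draw pass), and names like isErasing are assumptions:

import UIKit

class PaintingView: UIView {
    private var layerCanvas: CGLayer?
    private var lastPoint: CGPoint = .zero
    var isErasing = false

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        // step 1: store the touch point; draw vs. erase is held in isErasing
        lastPoint = touches.first?.location(in: self) ?? .zero
    }

    override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let point = touches.first?.location(in: self),
              let canvasContext = layerCanvas?.context else { return }
        // step 2: draw (or erase) the segment into the CGLayer, off the main draw path
        canvasContext.setBlendMode(isErasing ? .clear : .normal)
        canvasContext.setStrokeColor(UIColor.black.cgColor)
        canvasContext.setLineWidth(10)
        canvasContext.move(to: lastPoint)
        canvasContext.addLine(to: point)
        canvasContext.strokePath()
        // invalidate only the rectangle containing the two points (padded by half the line width)
        let dirty = CGRect(x: min(lastPoint.x, point.x) - 5,
                           y: min(lastPoint.y, point.y) - 5,
                           width: abs(point.x - lastPoint.x) + 10,
                           height: abs(point.y - lastPoint.y) + 10)
        lastPoint = point
        setNeedsDisplay(dirty)
    }

    override func draw(_ rect: CGRect) {
        guard let context = UIGraphicsGetCurrentContext() else { return }
        if layerCanvas == nil {
            layerCanvas = CGLayer(context, size: bounds.size, auxiliaryInfo: nil)
        }
        // step 3: render the accumulated CGLayer into the current window context
        if let canvas = layerCanvas {
            context.draw(canvas, at: .zero)
        }
    }
}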
On an iPad 3 (the one that everyone has the most trouble with due to its enormous pixel count) this process takes between 0.05 ms and 0.15 ms per render (depending on how fast you swipe). There is one caveat, though: if you don't take the proper precautions, the entire frame rectangle will be redrawn even if you only use setNeedsDisplayInRect:. My hacky way to combat this (thanks to the dev forums) is described in my self-answered question here. Otherwise, if your view takes a long time to draw the entire frame (mine took an unacceptable 150 ms), you will get a short stutter under certain conditions while the view buffer gets recreated.
EDIT With the new info from your comments, it seems that the answer to this question will benefit you -> Use a CoreGraphic Stroke as Alpha Mask in iPhone App
Hi, here is code for painting, erasing, undo, redo, and saving as a picture. You can check the sample code and adapt it to your project.
Here