In an iOS program I use a UILongPressGestureRecognizer on a large view. Once the long press has been triggered, I remove the large view, and create another thumbnail view centered under my finger. To the user it looks as if the large view has shrunk to a thumbnail that can then be moved.
Once this new thumbnail is created under my finger, I want to be able to move it somewhere else. Currently, however, I have to lift my finger and place it back down on the thumbnail before touchesBegan/touchesMoved messages are sent to it.
How can I ensure that touchesMoved messages start being sent to the newly created view without having to re-touch the screen? Or what other workaround should I use?
Is there a reason not to actually shrink the view, since that is the effect you seem to be after anyway? This would also let you easily add a short animation and get an "Apple-like" UX.
You can't do it without a touch-up. Once your finger goes down on the large view, that view will receive all the move events until you lift your finger.
But there is one trick: the large view continues to receive events as you move your finger on the screen. You can read the new coordinates from those events and apply them to the thumbnail. It will look as if you are moving the thumbnail, but in fact you are still interacting only with the large view.
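A minimal Swift sketch of this trick (the class, sizes, and colors are illustrative, not from the question): keep the long-press recognizer on the large view, hide that view rather than removing it so the in-flight gesture is not cancelled, and feed each new location to the thumbnail:

```swift
import UIKit

class DragViewController: UIViewController {
    let largeView = UIView()   // the full-size view the gesture starts on
    var thumbnail: UIView?     // created once the long press fires

    override func viewDidLoad() {
        super.viewDidLoad()
        largeView.frame = view.bounds
        largeView.backgroundColor = .systemBlue
        view.addSubview(largeView)
        let press = UILongPressGestureRecognizer(target: self,
                                                 action: #selector(handlePress(_:)))
        largeView.addGestureRecognizer(press)
    }

    @objc func handlePress(_ gesture: UILongPressGestureRecognizer) {
        let point = gesture.location(in: view)
        switch gesture.state {
        case .began:
            // Hide (don't remove) the large view so the recognizer keeps
            // delivering events, then create the thumbnail under the finger.
            largeView.isHidden = true
            let thumb = UIView(frame: CGRect(x: 0, y: 0, width: 80, height: 60))
            thumb.backgroundColor = .systemBlue
            thumb.center = point
            view.addSubview(thumb)
            thumbnail = thumb
        case .changed:
            // The large view's recognizer is still firing; just move the thumbnail.
            thumbnail?.center = point
        default:
            break
        }
    }
}
```

The user never touches the thumbnail at all; it simply tracks the coordinates reported by the large view's recognizer.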
Related
I need to build an app that displays an image. On the image there are many points where the user can tap, and depending on the location of the tap we need to take input. The tap locations are fixed.
The user can zoom the image, and I need to detect multiple taps (single tap, double tap, etc.).
The biggest problem we are facing is that many of the points are very close to each other, so when we tap on one point, a neighbouring point registers the tap instead.
Following is the image I need to work from.
I need to detect taps on all the red dots and make a decision based on which one was hit. The red dots will not be visible to the user.
Here is what I have tried:
I placed buttons over the image as shown. The problem is that when the user taps a button, either the button's tap event is not fired, or a different button than the one the user meant to tap receives it.
What I am thinking of doing now is putting the image in a scroll view, detecting taps on the scroll view, and then working out which point was hit from the tap coordinates.
Is there any easier way to detect tap?
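The coordinate-based idea above can be sketched in Swift like this (the `TapTarget` type and the tolerance value are hypothetical): store the fixed dot positions, and on each tap pick the nearest dot, accepting it only if it is within a tolerance so that crowded dots don't steal each other's taps:

```swift
import UIKit

struct TapTarget {
    let id: Int
    let location: CGPoint   // in the image view's coordinate space
}

/// Returns the target nearest to the tap point, but only if it lies within
/// `tolerance` points of it; otherwise the tap is ignored entirely, which
/// prevents a tap between two close dots from hitting the wrong one.
func target(for tap: CGPoint,
            in targets: [TapTarget],
            tolerance: CGFloat = 22) -> TapTarget? {
    let nearest = targets.min { a, b in
        hypot(a.location.x - tap.x, a.location.y - tap.y) <
        hypot(b.location.x - tap.x, b.location.y - tap.y)
    }
    guard let hit = nearest,
          hypot(hit.location.x - tap.x, hit.location.y - tap.y) <= tolerance
    else { return nil }
    return hit
}
```

With the image in a scroll view for zooming, convert the tap location into the image view's coordinate space before calling this, so the fixed dot positions remain valid at any zoom scale.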
Your requirement is a fairly complex one.
You will need help from Core Image here: process the image to extract its key details. "Morphological operations" can also help you detect objects in an image. Take a look at these links:
Core image processing
Morphological Operations
I have a UIWebView that loads a new request when the user scrolls a set distance past the end of, or before the beginning of, the content (i.e. into the area that "bounces" back), via the scroll view delegate.
Unfortunately, I have found that once the bounce animation starts, it has to complete, and completely bounce back to the edge of the content area, before the UIWebView will display the new content that it has loaded.
Is there any way to interrupt this so it doesn't need to bounce back before displaying the new content? I would prefer the old content simply to disappear without bouncing all the way back to its edge, or, failing that, to speed up the animation.
Things to note:
I do NOT want to turn off bouncing.
I have tried importing QuartzCore and calling -removeAllAnimations on every view I can think of, including the superview, and it doesn't help.
I have tested it, and the delay in displaying the new content is not caused by latency in loading the request.
There are a few UIViewAnimationOptions that look like they could be helpful, but I can't see where I would use them, and I don't really want to subclass UIWebView if I can help it.
Any ideas?
Thanks in advance.
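One thing worth trying (a sketch under the question's UIWebView setup, which is deprecated in current iOS): setting the scroll view's content offset without animation cancels any in-flight deceleration or bounce, so doing it when the new content finishes loading should snap the view out of the bounce immediately:

```swift
import UIKit

// Hypothetical UIWebViewDelegate method: when the new request has loaded,
// snap the web view's scroll view back to the edge without animation.
// A non-animated setContentOffset cancels the in-flight bounce animation,
// so the new content can be shown at once.
func webViewDidFinishLoad(_ webView: UIWebView) {
    webView.scrollView.setContentOffset(.zero, animated: false)
}
```

This keeps bouncing enabled in general; it only cuts the one bounce short at the moment the replacement content is ready.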
I know this has probably been asked before, but I've seen many approaches and I don't know which is best for me, so please don't just send me a link to another post unless it addresses my problem directly.
I have a controller with a UIView at the top, like a header (the header is larger than it appears because it is partially hidden above the top of the screen). On that view I have a UIButton which, on touch-up-inside, shows the entire header view; tapping it again returns it to its starting position (changing the frame with an animation).
I also want to be able to drag the view, but only along the y-axis (dragging up and down). I was thinking of adding the drag-inside/outside events to the button, but those don't give me the position of the finger. I also need to know when the user releases the drag, so the view can animate to one of its two possible states (fully shown or partially hidden).
Is this a touchesBegan/touchesMoved/touchesEnded thing? If so, please provide a code example. Alternatively, maybe it can be done with the drag event if I can save a CGPoint of the last touch.
I also want to do the same with another view on the left side; same idea, but that one moves along the x-axis. Any help is appreciated.
Look at using a UIPanGestureRecognizer to detect the touch movements. Use the translationInView: of the gesture to set the view y position. The translation is the total movement since the start of the gesture so you don't need to remember and accumulate the offset position yourself.
The main thing to worry about while implementing this is bounding the y position of the view so that no matter how far the user drags the view won't go too high or low on the screen.
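The two points above can be sketched together in Swift (class name and the y bounds are placeholder values for illustration): use the cumulative `translation(in:)`, clamp the y origin, and snap to the nearer state when the gesture ends:

```swift
import UIKit

class HeaderDragController: UIViewController {
    let headerView = UIView()
    // Allowed range for the header's y origin (assumed values):
    let minY: CGFloat = -200   // partially hidden above the screen
    let maxY: CGFloat = 0      // fully shown
    private var startY: CGFloat = 0

    @objc func handlePan(_ gesture: UIPanGestureRecognizer) {
        switch gesture.state {
        case .began:
            startY = headerView.frame.origin.y
        case .changed:
            // translation(in:) is cumulative from the start of the gesture,
            // so there is no need to accumulate offsets manually.
            let translation = gesture.translation(in: view)
            // Clamp so the view can't be dragged too high or too low.
            let newY = min(maxY, max(minY, startY + translation.y))
            headerView.frame.origin.y = newY
        case .ended, .cancelled:
            // Animate to the nearer of the two resting states.
            let midpoint = (minY + maxY) / 2
            let targetY = headerView.frame.origin.y > midpoint ? maxY : minY
            UIView.animate(withDuration: 0.25) {
                self.headerView.frame.origin.y = targetY
            }
        default:
            break
        }
    }
}
```

The left-side view from the question works identically; just clamp and translate `origin.x` instead of `origin.y`.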
Use a UIPanGestureRecognizer, that's a class dedicated to handling such drag/pan gestures.
Everything is described in Apple's documentation, including examples, so you should find your answer there.
There is also some sample code in Apple Developer Library that shows you how to use Gesture Recognizers if needed.
I am trying to achieve the following in Xcode: once a pan gesture has reached a certain point, I want to draw a draggable item and have the user's finger automatically keep dragging it around to move the new item. Everything works, except for one problem: the finger needs to be lifted and put down again on the new item when it is first drawn (I assume to trigger touchesBegan).
Does anyone know if there is a way to force-start touchesBegan if the finger is already down when the newly drawn item is first rendered (so the user doesn't have to lift and replace their finger)?
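There is no supported way to force-start touchesBegan mid-touch; the usual workaround is to keep the pan recognizer on the container view, so the same gesture keeps firing after the item appears and you simply move the item to each reported location. A sketch under assumed names and a hypothetical spawn threshold:

```swift
import UIKit

class CanvasViewController: UIViewController {
    var draggableItem: UIView?     // created mid-gesture
    let threshold: CGFloat = 100   // assumed pan distance that spawns the item

    override func viewDidLoad() {
        super.viewDidLoad()
        // The recognizer lives on the container, so it survives the moment
        // the new item is created and keeps reporting finger movement.
        let pan = UIPanGestureRecognizer(target: self,
                                         action: #selector(handlePan(_:)))
        view.addGestureRecognizer(pan)
    }

    @objc func handlePan(_ gesture: UIPanGestureRecognizer) {
        let point = gesture.location(in: view)
        let translation = gesture.translation(in: view)
        if draggableItem == nil,
           hypot(translation.x, translation.y) > threshold {
            // Spawn the draggable item directly under the finger.
            let item = UIView(frame: CGRect(x: 0, y: 0, width: 60, height: 60))
            item.backgroundColor = .systemRed
            item.center = point
            view.addSubview(item)
            draggableItem = item
        }
        // Once it exists, it follows the finger with no re-touch required.
        draggableItem?.center = point
    }
}
```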
I've got a UIView which the user drags on and off of the screen (like a drawer).
The UIView has a bunch of buttons, some always visible, others hidden until the user 'opens' the drawer.
For some reason, the UIButtons that are 'off screen' on the initial UIView presentation aren't being passed events when they're later moved onto screen.
While the others receive all the events all the time.
Seems buggy to me, I would have thought the SDK would handle all this itself?
I've got a very simple example which you can take a look at: http://cl.ly/2r1c0k2p361B3B1A461L
Thanks in advance.
After a bit more research, it turned out to be related to the view's clipping boundaries.
The buttons were being drawn, but were essentially outside the view's frame, so no touch events were registered.
More details on this question: Programmatically Created UIButtons Below Screen Not Working?
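If resizing the drawer's frame isn't an option, one workaround (a sketch; `DrawerView` is a hypothetical name) is to override `point(inside:with:)` so hit testing also considers subviews that lie outside the view's bounds:

```swift
import UIKit

/// A drawer view whose buttons may sit outside its bounds. By default,
/// hit testing stops at the view's own bounds, so out-of-bounds subviews
/// never receive touches even when they are visible (clipsToBounds = false).
class DrawerView: UIView {
    override func point(inside point: CGPoint, with event: UIEvent?) -> Bool {
        if super.point(inside: point, with: event) { return true }
        // Also treat touches landing on any subview as "inside".
        for subview in subviews {
            if subview.point(inside: convert(point, to: subview), with: event) {
                return true
            }
        }
        return false
    }
}
```

The cleaner fix, as the linked question suggests, is simply to make the drawer's frame large enough to contain all its buttons in every state.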