I need to build an app that displays an image with many fixed points on it. The user taps on those points, and depending on the tap location we need to take input.
The user can also zoom the image, and I need to detect multiple kinds of taps (single tap, double tap, etc.).
The biggest problem we are facing is that many of the points are very close to each other, so when we tap one point we often get a neighbouring point registered instead.
Below is the image I need to work with.
I need to detect taps on all of the red dots and make a decision based on which one was tapped. The red dots will not be visible to the user.
What I have tried so far:
I placed buttons over the image as shown. The problem is that when the user taps a button, either the button's tap event is not fired, or the button that responds is not the one the user appears to have tapped.
What I am thinking of doing now:
Put the image in a scroll view, detect taps on the scroll view's content, and then use the tap coordinates to work out which point was hit.
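Roughly, the idea would look something like this (the point coordinates and the tolerance below are placeholders, not my real values):

```swift
import UIKit

class TapPointsViewController: UIViewController, UIScrollViewDelegate {
    @IBOutlet weak var scrollView: UIScrollView!
    @IBOutlet weak var imageView: UIImageView!

    // Hypothetical fixed tap targets, in the image view's coordinate space.
    let tapPoints: [CGPoint] = [CGPoint(x: 120, y: 80), CGPoint(x: 135, y: 95)]
    let tolerance: CGFloat = 15   // how close a tap must be to count as a hit

    override func viewDidLoad() {
        super.viewDidLoad()
        scrollView.delegate = self
        scrollView.maximumZoomScale = 4

        let tap = UITapGestureRecognizer(target: self, action: #selector(handleTap(_:)))
        imageView.isUserInteractionEnabled = true
        imageView.addGestureRecognizer(tap)
    }

    func viewForZooming(in scrollView: UIScrollView) -> UIView? {
        return imageView
    }

    @objc func handleTap(_ gesture: UITapGestureRecognizer) {
        // Ask for the location in the image view's own coordinates,
        // so the current zoom scale doesn't break the math.
        let location = gesture.location(in: imageView)

        // Pick the nearest point within the tolerance, so two close dots
        // can't both claim the same tap.
        let hit = tapPoints
            .map { (point: $0, distance: hypot($0.x - location.x, $0.y - location.y)) }
            .filter { $0.distance <= tolerance }
            .min { $0.distance < $1.distance }

        if let hit = hit {
            print("Tapped the point at \(hit.point)")
        }
    }
}
```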
Is there an easier way to detect these taps?
Your requirement is a pretty complex one.
Here you can take help from Core Image. You need to process the image and extract its key details. "Morphological operations" will also help you detect objects in the image. Take a look at these links:
Core image processing
Morphological Operations
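For what it's worth, Core Image ships built-in morphology filters (iOS 11+). A minimal sketch that dilates the dots with the standard CIMorphologyMaximum filter might look like this (this only processes the image; the tap handling is separate):

```swift
import UIKit
import CoreImage

// Sketch only: grow (dilate) the red dots so they are easier to detect or hit.
func dilated(_ image: UIImage, radius: Float = 5) -> UIImage? {
    guard let ciImage = CIImage(image: image),
          let filter = CIFilter(name: "CIMorphologyMaximum") else { return nil }
    filter.setValue(ciImage, forKey: kCIInputImageKey)
    filter.setValue(radius, forKey: kCIInputRadiusKey)

    guard let output = filter.outputImage else { return nil }
    let context = CIContext()
    guard let cgImage = context.createCGImage(output, from: output.extent) else { return nil }
    return UIImage(cgImage: cgImage)
}
```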
I read through a few similar questions here, but most of them are for much older versions of Swift.
This tutorial shows how to create a gesture recognizer and works pretty well: https://www.ioscreator.com/tutorials/swipe-gesture-ios-tutorial-ios11
What I'd like to accomplish is to add functionality that would allow the user to swipe up or down after pressing a button, while still holding the button, and have my app react to the combination of the specific button being pressed and the upward or downward swipe gesture.
Here's the specific design I'm trying to implement. Basically I'd like the user to press the "A" button and then swipe up or down to get the "#" or "b".
Is this possible? The # & b could be image views or buttons (though if they're buttons, I don't want them to be pressable on their own). If this is a crazy design, I welcome suggestions for improvement.
You want to use a UILongPressGestureRecognizer (probably in conjunction with image views). It has the advantage that first it recognizes a finger held down in one spot (the "A") and then it tracks the movement of that finger (panning up to the sharp or down to the flat). Where the finger is held down — i.e., is it in the "A" or not — will determine whether to recognize in the first place. Then if you do recognize, you watch where the finger goes and decide whether it has entered the sharp or the flat.
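A rough sketch of that approach, assuming the "A", "#", and "b" views are direct subviews of the controller's root view (all names here are placeholders):

```swift
import UIKit

class NoteViewController: UIViewController, UIGestureRecognizerDelegate {
    // Placeholder outlets standing in for the "A", "#", and "b" views.
    @IBOutlet weak var aView: UIImageView!
    @IBOutlet weak var sharpView: UIImageView!
    @IBOutlet weak var flatView: UIImageView!

    override func viewDidLoad() {
        super.viewDidLoad()
        let press = UILongPressGestureRecognizer(target: self, action: #selector(handlePress(_:)))
        press.minimumPressDuration = 0.1   // recognize almost immediately
        press.delegate = self
        view.addGestureRecognizer(press)
    }

    // Only recognize the long press if the touch starts inside the "A".
    func gestureRecognizer(_ gestureRecognizer: UIGestureRecognizer,
                           shouldReceive touch: UITouch) -> Bool {
        return aView.frame.contains(touch.location(in: view))
    }

    @objc func handlePress(_ gesture: UILongPressGestureRecognizer) {
        guard gesture.state == .ended else { return }
        // Decide the note based on where the finger ended up.
        let location = gesture.location(in: view)
        if sharpView.frame.contains(location) {
            print("A sharp")
        } else if flatView.frame.contains(location) {
            print("A flat")
        } else {
            print("A")
        }
    }
}
```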
I ended up using a Pan Gesture Recognizer, and it worked out really well! I am simply using the y coordinate of the pan gesture to determine if the user is moving his/her finger up to the sharp or down to the flat.
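For anyone curious, the pan-based version boils down to something like this (the recognizer is attached to the "A" view; the 40-point threshold is arbitrary):

```swift
// Setup, e.g. in viewDidLoad (aView needs isUserInteractionEnabled = true if it's an image view):
// let pan = UIPanGestureRecognizer(target: self, action: #selector(handlePan(_:)))
// aView.addGestureRecognizer(pan)

@objc func handlePan(_ gesture: UIPanGestureRecognizer) {
    guard gesture.state == .ended else { return }
    // Vertical translation since the pan began; negative y means the finger moved up.
    let dy = gesture.translation(in: view).y
    if dy < -40 {
        print("sharp")     // swiped up
    } else if dy > 40 {
        print("flat")      // swiped down
    } else {
        print("natural")   // barely moved
    }
}
```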
So I have this project that I took from somebody else, and they have implemented this OneFingerRotationGestureRecognizer (https://github.com/melle/OneFingerRotationGestureDemo/blob/master/OneFingerRotationGestureDemo/OneFingerRotationGestureRecognizer.m) for a circular slider. Additionally, they have added a UITapGestureRecognizer on top of that, so you can tap a value within that circular slider and the value jumps to that specific one.
Now the problem is, when I drag the control just a very small amount (imagine putting your thumb onto the control and tilting it left/right), the UITapGestureRecognizer also fires! This is a problem, because I want to be able to grab the circular slider wherever I want (there is no handle or anything), and when I only drag it a little, the value just jumps to the spot where I did that small drag.
Somehow I need to cancel the tap gesture as soon as the OneFingerRotationGestureRecognizer starts registering touches. I tried what is described here: https://developer.apple.com/documentation/uikit/touches_presses_and_gestures/coordinating_multiple_gesture_recognizers/preferring_one_gesture_over_another?language=objc but didn't have any success with that :-(.
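For reference, this is roughly what I tried from that documentation page (the recognizer names are just illustrative):

```swift
// Make the tap wait until the rotation recognizer has failed,
// so a small drag is treated as a rotation rather than a tap.
tapRecognizer.require(toFail: oneFingerRotationRecognizer)
```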
What can I do? I'm afraid the solution is so simple that I just don't see it.
I am creating a camera roll feature similar to Snapchat, where the camera is the bottom layer and the camera roll appears after tapping a button. The camera roll does not occupy the whole screen; the search bar at the top is still visible, and after dismissing it with a downward swipe gesture the camera is still running. I am not sure how this effect is achieved. Is it done using a scroll view or a segue of some kind? Thank you for your help.
Below is the video in question
Snap Chat Camera Roll
Firstly, this type of effect cannot be achieved using the standard camera UIImagePickerController. You will need to create your own camera view.
Here is a good guide to get you started: https://github.com/codepath/ios_guides/wiki/Creating-a-Custom-Camera-View
You could also try using a custom library that can be easily customized such as: https://github.com/omergul/LLSimpleCamera/
Now, in terms of the actual visuals, I do not believe there is any actual segue/change of view controller involved. The camera view is probably always on screen (except perhaps when it is fully covered by the Memories screen); it is simply overlaid by other things.
The Memories screen is most likely 'presented' in a custom manner. The show/dismiss logic can be achieved by attaching a UIPanGestureRecognizer to the UIView and translating the view up and down on pan events. If the pan's y value passes a certain threshold up or down, the view automatically continues its animation to show or hide itself.
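A minimal sketch of that pan-driven show/dismiss logic (the view names and the one-third threshold are my own assumptions):

```swift
import UIKit

class CameraViewController: UIViewController {
    // Hypothetical overlay view for the camera roll / "Memories" screen,
    // laid out on top of the always-running camera preview.
    @IBOutlet weak var memoriesView: UIView!

    override func viewDidLoad() {
        super.viewDidLoad()
        let pan = UIPanGestureRecognizer(target: self, action: #selector(handlePan(_:)))
        memoriesView.addGestureRecognizer(pan)
    }

    @objc func handlePan(_ gesture: UIPanGestureRecognizer) {
        let translation = gesture.translation(in: view)
        switch gesture.state {
        case .changed:
            // Follow the finger, but only allow dragging downward.
            memoriesView.transform = CGAffineTransform(translationX: 0, y: max(0, translation.y))
        case .ended, .cancelled:
            // Past a threshold, animate the rest of the way off screen; otherwise snap back.
            let dismiss = translation.y > view.bounds.height / 3
            UIView.animate(withDuration: 0.25) {
                self.memoriesView.transform = dismiss
                    ? CGAffineTransform(translationX: 0, y: self.view.bounds.height)
                    : .identity
            }
        default:
            break
        }
    }
}
```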
I am making an app like Sudoku (a 9×9 grid of boxes), but each box has only a binary choice (on/off), and using buttons gave me horrible results. Can anyone give me a demo of a 9×9 (or 3×3) grid where the box at the tap location gets toggled (on/off)?
You could create a custom subclass of UIView that had an attached tap gesture recognizer and interpreted the tap location to figure out which cell is being tapped, but it would be a lot of work.
It would be better to have a custom view that contains a grid of buttons and to set up the button actions to do what you want.
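A rough sketch of that button-grid idea (layout and colours are only illustrative):

```swift
import UIKit

class BinaryGridView: UIView {
    private let size = 9                  // 9×9 grid
    private var states = [[Bool]]()       // on/off state for every cell

    override init(frame: CGRect) {
        super.init(frame: frame)
        setUpGrid()
    }

    required init?(coder: NSCoder) {
        super.init(coder: coder)
        setUpGrid()
    }

    private func setUpGrid() {
        states = Array(repeating: Array(repeating: false, count: size), count: size)

        // One vertical stack of rows, each row a horizontal stack of buttons.
        let rows = UIStackView()
        rows.axis = .vertical
        rows.distribution = .fillEqually
        rows.frame = bounds
        rows.autoresizingMask = [.flexibleWidth, .flexibleHeight]
        addSubview(rows)

        for row in 0..<size {
            let rowStack = UIStackView()
            rowStack.axis = .horizontal
            rowStack.distribution = .fillEqually
            for column in 0..<size {
                let button = UIButton(type: .system)
                button.tag = row * size + column   // encode the cell in the tag
                button.backgroundColor = .white
                button.layer.borderWidth = 0.5
                button.addTarget(self, action: #selector(toggle(_:)), for: .touchUpInside)
                rowStack.addArrangedSubview(button)
            }
            rows.addArrangedSubview(rowStack)
        }
    }

    @objc private func toggle(_ sender: UIButton) {
        let row = sender.tag / size, column = sender.tag % size
        states[row][column].toggle()
        sender.backgroundColor = states[row][column] ? .black : .white
    }
}
```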
You said "...using buttons gave me horrible results." Can you elaborate? That should be a good way to go, so any "horrible results" are more likely caused by something in your implementation than by buttons being the wrong approach.
In an iOS program I use a UILongPressGestureRecognizer on a large view. Once the long press has been triggered, I remove the large view, and create another thumbnail view centered under my finger. To the user it looks as if the large view has shrunk to a thumbnail that can then be moved.
Once this new thumbnail is created under my finger, I want to be able to move it somewhere else. However, currently, in order to move it I have to lift my finger and place it back down on the thumbnail before touchesBegan/touchesMoved messages are sent to it.
How can I ensure that touchesMoved messages start to be sent to the newly created view without having to re-touch the screen? Or what other workaround should I use?
Is there a reason not to actually shrink the view, as this is the effect you seem to be after anyway? This would also let you easily add a short animation to get an "Apple-like" UX.
You can't do it without lifting the finger. Once your finger goes down on the large view, that view will receive all of the move events until the finger is lifted.
But there is one trick: the large view keeps receiving events while you move your finger on the screen. You can take the new coordinates from those events and apply them to the thumbnail. It will look as if you are moving the thumbnail, but in fact you are still interacting only with the large view.
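A minimal sketch of that trick, assuming the long press recognizer is attached to the large view and that largeView / thumbnailView are properties of the view controller:

```swift
import UIKit

class DragViewController: UIViewController {
    // Assumed properties: the full-size view and the thumbnail that visually replaces it.
    @IBOutlet weak var largeView: UIView!
    @IBOutlet weak var thumbnailView: UIView!

    override func viewDidLoad() {
        super.viewDidLoad()
        thumbnailView.isHidden = true
        let press = UILongPressGestureRecognizer(target: self, action: #selector(handleLongPress(_:)))
        largeView.addGestureRecognizer(press)
    }

    @objc func handleLongPress(_ gesture: UILongPressGestureRecognizer) {
        switch gesture.state {
        case .began:
            // Don't remove largeView; just make it invisible so the touch it is
            // already tracking keeps being delivered to its recognizer.
            largeView.subviews.forEach { $0.isHidden = true }
            largeView.backgroundColor = .clear
            thumbnailView.center = gesture.location(in: view)
            thumbnailView.isHidden = false
        case .changed:
            // Mirror the finger position onto the thumbnail; the user appears to be
            // dragging the thumbnail, but the touch still belongs to largeView.
            thumbnailView.center = gesture.location(in: view)
        case .ended, .cancelled:
            thumbnailView.isHidden = true
        default:
            break
        }
    }
}
```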