On an iPhone, in the built-in Messages app, a text bubble keeps moving past the edge-of-the-screen limit if you keep pushing it with two fingers, even after it has seemingly hit that edge.
What is allowing that to happen?
How would I replicate that, so my code isn't so "fixed" and doesn't just stop exactly at the value where it's supposed to?
Also, is there a name for the "bouncy" text physics that we see on Android and iOS?
UIKit Dynamics is the high-level name. http://www.doubleencore.com/2013/09/ios-7-uikit-dynamics/
Don't make a boundary - make a constraint.
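For example, a rough sketch of that idea with UIAttachmentBehavior (all names here are placeholders): instead of clamping the bubble's position at the edge, attach it to its resting point with a spring and let the drag stretch the spring.

```swift
import UIKit

class BubbleViewController: UIViewController {
    let bubble = UIView(frame: CGRect(x: 100, y: 300, width: 120, height: 44))
    lazy var restingPoint = self.bubble.center
    lazy var animator = UIDynamicAnimator(referenceView: self.view)
    var attachment: UIAttachmentBehavior!

    override func viewDidLoad() {
        super.viewDidLoad()
        bubble.backgroundColor = .systemBlue
        bubble.layer.cornerRadius = 22
        view.addSubview(bubble)

        // A spring that always pulls the bubble toward its anchor.
        attachment = UIAttachmentBehavior(item: bubble, attachedToAnchor: restingPoint)
        attachment.damping = 0.5    // how quickly the bounce settles
        attachment.frequency = 2.0  // spring stiffness
        animator.addBehavior(attachment)

        bubble.addGestureRecognizer(
            UIPanGestureRecognizer(target: self, action: #selector(pan(_:))))
    }

    // Dragging moves the anchor, not the bubble: the spring drags the
    // bubble along elastically, and releasing lets it snap back instead
    // of stopping dead at a hard-coded coordinate.
    @objc func pan(_ gesture: UIPanGestureRecognizer) {
        switch gesture.state {
        case .changed:
            attachment.anchorPoint = gesture.location(in: view)
        case .ended, .cancelled:
            attachment.anchorPoint = restingPoint
        default:
            break
        }
    }
}
```

Tweaking damping and frequency changes how far it stretches and how hard it snaps back.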
Is it the case that you can't push the bubble 'outside of the screen', or is it more that you can't drag it further than the initial drag point?
I'd like to know the best way to detect a user's tap on a label.
The new iOS 15 Maps app allows a tap on e.g. a city's name and then shows information about that city.
I am now wondering if something similar can be done with Mapbox.
I know that there is a mapView.visibleFeatures(in: myRect) function that can somehow help here. So I can convert my finger location to a rect and then get all features there.
BUT... my city's label might be, let's say, 200 px wide. So I would need a fairly large rect to hit the point of my city label, and then I would also get all kinds of other labels that might be there, maybe even ones that are in the dataset but not visible.
Is there no way to ask the map what the frontmost element was when I tapped, so that when I tap on the far end of the label, I still get that ONE feature?
I am still using Mapbox V6.3... the latest before their last major update.
But if it's not possible with that version, an answer for the latest V10.something would also be great.
For v10, this example demonstrates how to identify features near a click. While the overall example is to a different end, the onMapClick function shows the method to find a feature and then build an annotation.
https://docs.mapbox.com/ios/maps/examples/view-annotation-marker/
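And for the v6.x SDK you're on, roughly the same idea can be sketched with visibleFeatures(in:styleLayerIdentifiers:). The layer identifier below is a guess, so check your style for the real one:

```swift
import Mapbox

class MapViewController: UIViewController {
    @IBOutlet var mapView: MGLMapView!  // assumes the tap gesture is attached to this map

    @objc func handleTap(_ gesture: UITapGestureRecognizer) {
        let point = gesture.location(in: mapView)
        // Pad the tap point so a wide label still registers near its far end.
        let rect = CGRect(x: point.x - 22, y: point.y - 22, width: 44, height: 44)
        // Restrict the query to the symbol layer holding city labels;
        // "settlement-label" is an assumption -- check your style's layer ids.
        let features = mapView.visibleFeatures(in: rect,
                                               styleLayerIdentifiers: ["settlement-label"])
        if let city = features.first {
            print(city.attribute(forKey: "name") ?? "unnamed feature")
        }
    }
}
```

As far as I know, this queries rendered features only, so labels hidden by collision shouldn't come back; there is no "frontmost" answer, though, so if you get several hits you'd still pick one yourself, e.g. the one closest to the tap point.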
I am making an application that works essentially like a simple drag-and-drop playground, with the command blocks on the left and a droppable area on the right. I want to make it fully compatible with VoiceOver, and I'm running into trouble with some of the accessibility aspects, since this is my first Swift application.
This is what the playground currently looks like: (App Screenshot)
My goal is to provide the users with audio cues/feedback while they are dragging the elements to help them figure out what part of the screen they are currently at. The ideal functionality would be exactly like what one uses when editing an iOS device's Home screen (the arrangement layout of the apps).
When trying to rearrange apps on the home screen with VoiceOver enabled, you hear a row/column alert when you are dragging an app over an open area. I want a similar type of feedback that says "Droppable Area" when you are over the correct area (see scenario 1).
When trying to rearrange apps on the home screen with VoiceOver enabled, you hear a sound when you tap on an area that has no app icon. (This also happens when you are not editing the layout and simply tap on an open area with no app.) I want that noise to be what you hear when you drag a command over an area that is not droppable (see scenario 2).
Any ideas on how this might be possible or good references to look at?
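One building block that might get you most of the way there is VoiceOver's announcement notification, posted whenever the drag crosses into or out of the droppable area. A minimal sketch, with placeholder names for your layout:

```swift
import UIKit

class PlaygroundViewController: UIViewController {
    // Placeholder: `dropArea` stands in for the droppable region in your layout.
    let dropArea = UIView()
    private var wasOverDropArea = false

    @objc func handleDrag(_ gesture: UIPanGestureRecognizer) {
        let overDropArea = dropArea.frame.contains(gesture.location(in: view))
        // Only announce on transitions, or VoiceOver will chatter constantly.
        guard overDropArea != wasOverDropArea else { return }
        wasOverDropArea = overDropArea
        // VoiceOver speaks the string when the dragged block crosses the edge.
        UIAccessibility.post(notification: .announcement,
                             argument: overDropArea ? "Droppable area"
                                                    : "Not a droppable area")
    }
}
```

For scenario 2's noise, I don't know of a public API for the exact Home screen sound; a second announcement (as above) or a short system sound would be the stand-in.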
I need a reliable way to get the dimensions of the screen.
I know about MediaQuery.of(context), but it removes the bottom padding when the bottom system UI is visible.
This appears to be impossible from within Dart at the moment on Android, due to Flutter ignoring the bottom system UI (i.e. the buttons).
I thought this might be a bug, but if you look closely at the documentation, it never states that window.physicalSize or MediaQueryData.size are the physical dimensions of the screen, but rather the size to which Flutter can render. That probably makes sense, or else every single app would have to make sure to take that into account.
So what you're going to have to do is use method channels to communicate with Android directly. I took a look already, and there don't appear to be any plugins doing this, so you could wrap it up into one if you feel ambitious. What you'll want to do is make a call to native and then get the physical screen size directly in Java code. If you do that, you'd probably be best off implementing it for iOS as well, although this same problem doesn't exist there (you could even do it directly in Flutter with an if/else).
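If you do the iOS side too, the native half of such a channel is only a few lines. A sketch (channel and method names are made up, and must match the Dart side):

```swift
import UIKit
import Flutter

@UIApplicationMain
class AppDelegate: FlutterAppDelegate {
    override func application(
        _ application: UIApplication,
        didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?
    ) -> Bool {
        let controller = window?.rootViewController as! FlutterViewController
        // Channel and method names here are invented for the example.
        let channel = FlutterMethodChannel(name: "app/displaymetrics",
                                           binaryMessenger: controller.binaryMessenger)
        channel.setMethodCallHandler { call, result in
            guard call.method == "getPhysicalScreenSize" else {
                result(FlutterMethodNotImplemented)
                return
            }
            // nativeBounds is the full screen in physical pixels,
            // regardless of what Flutter is allowed to render into.
            let size = UIScreen.main.nativeBounds.size
            result(["width": Double(size.width), "height": Double(size.height)])
        }
        GeneratedPluginRegistrant.register(with: self)
        return super.application(application, didFinishLaunchingWithOptions: launchOptions)
    }
}
```

The Android half would do the equivalent with DisplayMetrics in Java.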
Luckily, someone has done this before so you can use it as an example: https://github.com/magnatronus/flutter-displaymetrics. Assuming that displayMetrics gives you the right size.
Hope that helps and sorry I don't have a simpler answer for you!
When an app is running in a third of an iPad screen, there is a small drag handle at the top of its window. In iOS 10, dragging on that handle lets you switch what app is open there. In iOS 11, you can use it to change the app from taking up a third of the screen to floating over the rest of the screen.
My question: how do I know when this handle is present, or at least know that there's something taking up that space? I need to lay out my UI content around it without conflicting with it. It doesn't appear to work with iOS 11's Safe Area APIs.
See here for a sample project trying to put a label at the top of a window without overlaying the drag handle. Run it in a third of an iPad screen.
Start by duplicating the radar. This is definitely something that should be handled by the safe area magic.
The issue here is that the handle is rendered by SpringBoard, so you can only apply tricks to guess whether it is currently visible. You can determine whether the window is at the right size and in the expected location on screen, and then add some extra safe area insets. This is normally ill-advised for several reasons, such as not knowing all the cases where the handle appears, having to take right-to-left layouts into account, etc., but in this case the problem seems so egregious that I'm not sure I'd recommend leaving it as is.
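In code, that guessing could look something like this; every check and constant here is an eyeballed heuristic, not API:

```swift
import UIKit

// iOS 11+. A heuristic sketch only: guesses when the window is a
// third-of-screen / slide-over window and pads the top for the handle.
class HandleAwareViewController: UIViewController {
    override func viewDidLayoutSubviews() {
        super.viewDidLayoutSubviews()
        guard let window = view.window else { return }
        // Guess: a slide-over window is narrower than the screen, and
        // SpringBoard draws the drag handle at its top edge.
        let looksLikeSlideOver = window.bounds.width < UIScreen.main.bounds.width
        // 16 pt is an eyeballed value for the handle area, not anything official.
        additionalSafeAreaInsets.top = looksLikeSlideOver ? 16 : 0
    }
}
```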
Edit
One more option is to see if UIWindow.safeAreaInsets returns a correct value. UINavigationController is able to deduce the safe area correctly, so it is hiding there somewhere.
I am currently working on a new project and I need to detect whether my SpriteNode has been touched or not. That is not the problem [yet]. But my SpriteNode contains a PNG image, and due to this it also includes a translucent part. What I want to do is "delete"/ignore this non-visible part and only detect whether the visible part has been touched. I have tried several ways, but nothing really worked. I need a reliable way to do that (Swift 4 or 3). I have already read other posts, but they did not help.
Or maybe you have other ways to determine a touchable area from a PNG image than using a SpriteNode.
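One direction that might do what you want (a sketch, assuming the opaque region is a single connected shape): generate the physics body from the sprite's texture, which traces only the non-transparent pixels, and hit-test the physics world instead of the node's rectangle.

```swift
import SpriteKit

class GameScene: SKScene {
    // "shape" is a placeholder asset name for the PNG in question.
    let sprite = SKSpriteNode(imageNamed: "shape")

    override func didMove(to view: SKView) {
        sprite.position = CGPoint(x: frame.midX, y: frame.midY)
        if let texture = sprite.texture {
            // The body is traced from the non-transparent pixels of the
            // texture, so the translucent part of the PNG is excluded.
            sprite.physicsBody = SKPhysicsBody(texture: texture, size: sprite.size)
            sprite.physicsBody?.affectedByGravity = false
        }
        addChild(sprite)
    }

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let location = touches.first?.location(in: self) else { return }
        // Ask the physics world which body sits under the touch; only the
        // opaque outline of the sprite reports a hit.
        if physicsWorld.body(at: location)?.node == sprite {
            print("visible part of the sprite touched")
        }
    }
}
```

There is also an init(texture:alphaThreshold:size:) variant if the edges are soft; the other common route is reading the alpha of the pixel under the touch straight out of the image data.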