Rikulo - touchEnd check if ended inside bounds - dart

When "listening" to the touchEnd event of a View, how can I determine whether the touch ended inside its bounds?
In other words, I'm looking for the equivalent of iOS's touchUpInside.
Note: I'm adding a touchEnd handler rather than relying only on on.click because iOS devices have an intentional 300 ms click delay that I would like to eliminate (please point out if there is a better solution to this).

Regarding event handling, Rikulo UI doesn't do anything special; it only adds a thin layer on top of what the Dart API provides. After all, a view is simply a DIV element with some special attributes.
Here is the list of events we wrapped: Rikulo events. If anything is missing, please let us know (by posting an issue to GitHub).
Regarding simulation of touchUpInside, this one might help.
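Since a view is ultimately a DIV, one way to emulate touchUpInside is to compare the lifted finger's coordinates against the element's bounding box in the touchEnd handler. The sketch below is plain browser-style JavaScript, not Rikulo-specific API; the function and handler names are illustrative.

```javascript
// Decide whether a touch ended inside an element's bounds, emulating
// iOS's touchUpInside. `rect` would come from something like
// element.getBoundingClientRect() in the browser.
function endedInside(rect, touchX, touchY) {
  return touchX >= rect.left && touchX <= rect.left + rect.width &&
         touchY >= rect.top && touchY <= rect.top + rect.height;
}

// Hypothetical wiring: read the coordinates from the touchend event's
// changedTouches, since the lifted finger is no longer in `touches`.
// node.addEventListener('touchend', function (e) {
//   var t = e.changedTouches[0];
//   if (endedInside(node.getBoundingClientRect(), t.clientX, t.clientY)) {
//     // treat as "touch up inside"
//   }
// });
```

Because this fires on touchend rather than click, it also sidesteps the 300 ms click delay mentioned in the question.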

Related

OpenLayers: How to detect the map view is completely loaded?

I'm implementing map exporting functionality using OpenLayers 3.
But there is one problem: one cannot determine whether the map view is completely loaded or whether a few tiles are still missing.
It seems there is no such API or event. The closest is the tileloadstart / tileloadend pair. But OpenLayers loads tiles asynchronously, and tileloadstart is not fired until a tile actually starts loading - that is, a tile still waiting in the tile queue does not fire the event.
How can I detect that the map view is completely loaded?
The postrender event seems to do the trick, like this:
map.once('postrender', function(event) {
  doyourmagic();
});
This works at least from OpenLayers 3.8.2 onward. There is a fine answer there about the subject.
I eventually implemented the export function successfully. Below is the rough explanation.
1. Register tileloadstart, tileloadend, and tileloaderror event handlers on the ol.sources using ol.source.on(), and start maintaining a tile load count.
2. Register a postcompose event handler on the ol.Map using ol.Map.once().
3. Call ol.Map.renderSync(). This triggers the tile loading, so from this point on, if no tile is loading, it means all tiles have been loaded.
4. In the postcompose event handler, capture the map content from event.context using event.context.canvas.toDataURL(), and register a postrender function using event.frameState.postRenderFunctions.push() (a bit hacky).
5. In the registered postrender function, check the tile load count (which is maintained by the tileload* event handlers). If the count is not zero, abandon the captured content; otherwise, the capture is done.
6. On tileloadend and tileloaderror, if the tile load count becomes zero, retry from step 3 above.
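The tile-count bookkeeping that drives the retry logic above can be sketched as a small counter object. The counter itself is plain JavaScript; the names (createTileCounter, onIdle) are illustrative, and the OpenLayers wiring is shown only in comments since it depends on your sources.

```javascript
// Minimal sketch of the tile-load bookkeeping described in the steps
// above: count outstanding tile loads, and fire a callback whenever the
// count returns to zero (the cue to retry the capture from step 3).
function createTileCounter(onIdle) {
  var loading = 0;
  return {
    start: function () { loading += 1; },   // hook to tileloadstart
    end: function () {                      // hook to tileloadend / tileloaderror
      loading -= 1;
      if (loading === 0) onIdle();
    },
    isIdle: function () { return loading === 0; }
  };
}

// Hypothetical wiring against an OpenLayers source:
// var counter = createTileCounter(function () { map.renderSync(); });
// source.on('tileloadstart', counter.start);
// source.on('tileloadend', counter.end);
// source.on('tileloaderror', counter.end);
```

In the postrender function from step 5, counter.isIdle() would tell you whether the captured canvas is complete or must be abandoned.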
Meanwhile, OpenLayers now provides a much sought-after rendercomplete event, which may be handy.
Basically, to make sure everything is rendered on your map, you need to listen for load-end events for each layer you have on the map. For WMS and WFS layers this is clear, and I guess you know how to do it.
For the tile layers, check this example here.

Detect UI changes

I have a function that continuously takes screenshots of a UI element and then draws it itself. The UI element can change, so I take screenshots at very short intervals so that the second drawing doesn't lag behind (please don't question this and just assume that redrawing it is the right way; the use case is a bit more complicated).
However, taking the screenshot and invalidating the previous drawing to redraw it is quite an expensive operation, and most often it isn't needed, as the UI element doesn't update that often. Is there a way to detect when a UI element changes in such a way that it needs redrawing, including when the change happens in one of its subviews? One solution would be to copy its state and the states of all its descendants and then compare them, but that doesn't seem like a good solution either. iOS must know internally when it needs to redraw/update the views; is there any way to hook into this? Note that I tagged this UIKit and Core Animation. I suppose the way to go is Core Animation, but I'm open to a solution that uses either of these.

Delphi iOS touch points

I want to get the touch points in an iOS app under Delphi (preferably as an event).
Is there an interface for this that is exposed?
I found this:
Get touch points in UIScrollView through UITapGestureRecognizer.
But I don't know whether it's possible to convert this code.
I am trying to implement a number of on-screen sliders (audio faders) so that the user can slide their finger to move them up and down. I plan to put a transparent rectangle control on top of the sliders so that I can get multiple touch points and move more than one slider simultaneously.
After going down this rabbit hole about as far as possible, I am sad to say that it's not possible to get multiple touch points in Delphi. This is very shortsighted and appears to be a trivial oversight by Embarcadero.
Essentially, the Delphi UIView implementation does not expose the multipleTouchEnabled property, so the touch events inside FMX.Platform.iOS.pas will never receive more than one touch point.
I am posting this to save the next person the time that I put into finding this disappointing answer.

iOS: How to make the CADisplayLink's event called BEFORE actual screen draw?

I'm building the iOS implementation of a cross-platform UI library using UIKit. One of the library's primary functions is to allow the user to change a child control's size freely, with the parent control's size adapting automatically.
Since refreshing the parent's size every time a child's size changes is inefficient and unnecessary, I designed the UI system to refresh every "dirty" control's position, size, and other properties before the actual device draw/render happens. On iOS, I use CADisplayLink to call the refresh method, but then I discovered the event is called AFTER everything has been presented on screen, which causes the following problem:
User will see a "crashed" layout first. (The render happens first)
After a short period (CADisplayLink's event triggered), everything will return to normal.
At first I thought my way of using CADisplayLink was wrong, but the solution cannot be found anywhere, so I'm quite desperate right now (I'm going to hang myself!!)
Or maybe I shouldn't use CADisplayLink at all?
Please Help me!
PS. Since I'm building a cross-platform thing, I'm actually using MonoTouch, but I believe the basic concept is the same.
PS2. And since I'm using MonoTouch, which is C#, the description above may not fit the Objective-C world exactly (like the word "event"; I think the Obj-C equivalent is a selector, or something ^_^).
PS3. Also please pardon my poor English, feel free to ask me any questions if my description isn't clear enough.
Code here:
CADisplayLink _displayLink = CADisplayLink.Create(Update); // Update is my refresh method
_displayLink.AddToRunLoop(NSRunLoop.Current, NSRunLoop.NSDefaultRunLoopMode);
Should be easy enough to understand ^_^
From all the information I can gather, there is basically no way of doing that. So I have modified my layout code, which now applies the properties immediately after a value is set. Some optimization is still required, but there is no need to rely on CADisplayLink anymore.
Thanks anyway!

Touch/Drag Heuristics have changed in iOS 6. Any way to get the old behavior back?

Our iPad app uses a WebKit UI for a lot of the user interaction, and we are now fielding complaints from users that in iOS 6 the UI is ignoring their touches. We've done side-by-side comparisons and are now quite certain that whereas a touch-small-drag-release gesture in iOS 5 would trigger an onclick event, the same gesture in iOS 6 does not. Thus, in iOS 6, you need to be very careful never to move your finger while pressing a button in the UI. (Or perhaps they just changed the definition of "small" in small-drag.)
We believe that disabling multi-touch gestures in the Settings > General page improves things somewhat, although we're not convinced this isn't a placebo effect.
As a test, I tried removing the scroll-preventing:
document.body.addEventListener('touchmove', function(e){ e.preventDefault(); });
from our code, but it made no difference (other than making it really obvious that the drag events are dragging).
My next idea is to go through and change everywhere that we rely on onclick to instead rely on ontouchstart, but, well, yuck. (Particularly, yuck, in cases where we also need the same code to work in desktop browsers.)
Are we alone here? I'm not finding any complaints about this in my searches. Any clever ideas?
You are not alone!
We hit a similar problem in our new game Blue Pilot while extending support to the iPhone 5. We are now handling both events.
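One way to handle both events while re-creating the old, more lenient tap behavior is to measure the finger's drift yourself: fire the click logic from touchend whenever the movement stayed under a threshold, regardless of the platform's drag heuristics. The sketch below is plain JavaScript; the threshold value and all names (makeTapTracker, TAP_SLOP) are illustrative, not from the original post.

```javascript
// Treat a touchstart/touchend pair as a "tap" only if the finger moved
// less than `slop` pixels, approximating iOS 5's small-drag tolerance.
var TAP_SLOP = 10; // allowed drift in pixels; tune to taste

function makeTapTracker(slop) {
  var startX = 0, startY = 0;
  return {
    down: function (x, y) { startX = x; startY = y; },  // on touchstart
    up: function (x, y) {                               // on touchend
      return Math.abs(x - startX) <= slop &&
             Math.abs(y - startY) <= slop;              // true => run onclick logic
    }
  };
}

// Hypothetical DOM wiring (desktop browsers keep using plain onclick):
// var tracker = makeTapTracker(TAP_SLOP);
// el.addEventListener('touchstart', function (e) {
//   var t = e.touches[0];
//   tracker.down(t.clientX, t.clientY);
// });
// el.addEventListener('touchend', function (e) {
//   var t = e.changedTouches[0];
//   if (tracker.up(t.clientX, t.clientY)) { /* run the onclick handler */ }
// });
```

If you go this route, remember to suppress the synthesized click that follows touchend (e.g. with preventDefault) so the handler doesn't fire twice on touch devices.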
