The Appium Java client version 6.0.0 removed the driver.swipe(fromX, fromY, toX, toY, duration) API. From what I can tell, we are now supposed to use the TouchAction class to achieve the same thing, with code like the following:
(new TouchAction(driver))
    .press(PointOption.point(fromX, fromY))
    .waitAction(WaitOptions.waitOptions(Duration.ofMillis(1000)))
    .moveTo(PointOption.point(offsetX, offsetY))
    .release()
    .perform();
I think there is a gap here: the duration of the swipe sounds like something that should be passed to the moveTo() call, but there is no method overload that accepts it.
The code above performs the actions press, wait, move, release. What I would like to do instead is press, then immediately start moving so that the swipe gesture spans exactly 1 second, then release. What is the proper way to achieve this?
You need to chain the waitAction() after the moveTo() rather than after the press(). This makes the moveTo() gesture take as long as the duration you define in the waitAction():
(new TouchAction(driver))
    .press(PointOption.point(fromX, fromY))
    .moveTo(PointOption.point(offsetX, offsetY))
    .waitAction(WaitOptions.waitOptions(Duration.ofMillis(1000)))
    .release()
    .perform();
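If several tests need this, the chain can be wrapped in a small helper method. The sketch below just repeats the call sequence from the answer; the SwipeHelper class and swipe method names are illustrative, and it assumes a java-client 6.x AppiumDriver:

import java.time.Duration;

import io.appium.java_client.AppiumDriver;
import io.appium.java_client.TouchAction;
import io.appium.java_client.touch.WaitOptions;
import io.appium.java_client.touch.offset.PointOption;

public class SwipeHelper {

    // Press at the start point, move to the end point, hold for the given
    // duration (which, per the answer above, controls how long the swipe
    // takes), then release.
    public static void swipe(AppiumDriver driver, int fromX, int fromY,
                             int toX, int toY, Duration duration) {
        new TouchAction(driver)
                .press(PointOption.point(fromX, fromY))
                .moveTo(PointOption.point(toX, toY))
                .waitAction(WaitOptions.waitOptions(duration))
                .release()
                .perform();
    }
}

A one-second swipe then becomes SwipeHelper.swipe(driver, fromX, fromY, toX, toY, Duration.ofMillis(1000));.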
Related
Because of general psychosis, Apple put an update() call in SKScene, but they forgot to put one in SKSpriteNode.
Now, as far as all our testing can determine, in SpriteKit just using a customAction on a sprite seems to be exactly the same as running something in the scene's update().
func teste() {
    // k is assumed to be an Int property on this scene/node
    let a = SKAction.customAction(withDuration: 5.0) { [weak self] node, elapsedTime in
        print("Honest to goodness, this is the run loop. I think.")
        print("\(self?.k) \(elapsedTime)")
        self?.k += 1
    }
    run(a)
}
I have scoured the documentation to no avail.
Does anyone know if it's really true that customAction indeed runs every frame? Is it effectively and safely an update() call?
(Example: conceivably, there could be a horrific coincidence that they run it "every 1/60th" or something, and it's not really running on the same run loop as the scene.)
Or, since apparently it's really just Box2D, maybe someone can shed light on this from the Box2D milieu?
Resolution of the issue:
Thanks to Knight, we now know that using customAction "as the update call" is almost the same as putting the code in update(): it happens in the second phase of the frame cycle, which comes immediately after the update phase.
(If you prefer to have it actually happen in the update phase, you need the usual workaround in SpriteKit: call your own update function on the game objects from the update() call Apple provides in the scene.)
All actions happen in the action phase of the update cycle.
See https://developer.apple.com/documentation/spritekit/skscene
The problem
I'm trying to scroll in an Ionic app. I would like to scroll until an element is visible.
To test the procedure, I've written two sequential actions.
While testing, only the first one is executed; the second one throws the exception Support for this gesture is not yet implemented.
How do I scroll, like a user would, until an element is visible, if I cannot repeat actions?
Environment
Appium version 1.6.4-beta
Desktop OS/version used to run Appium: OSX Sierra
Real device or emulator/simulator: iPad Mini
Code To Reproduce Issue
TouchAction action = new TouchAction(this.driver);
Thread.sleep(5000);
// First swipe: runs as expected
action.press(150, 150).moveTo(0, 350).release().perform();
Thread.sleep(10000);
// Second swipe: throws "Support for this gesture is not yet implemented"
action.press(150, 150).moveTo(0, 350).release().perform();
The only possible, and horrible, solution that I've found is:
while (!element.isDisplayed()) {
    TouchAction action = new TouchAction(this.driver);
    action.press(150, 150).moveTo(0, 350).release().perform();
    Thread.sleep(5000);
}
Hopefully someone can suggest a cleaner solution.
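In the meantime, that loop can at least be bounded so it does not spin forever when the element never shows up. This is only a sketch of the same workaround, not a fix for the gesture exception; the scrollUntilVisible name and maxSwipes parameter are made up, and it assumes the same driver field and press(x, y)/moveTo(x, y) overloads used above:

// Hypothetical helper: repeats the swipe from the workaround above,
// but gives up after maxSwipes attempts instead of looping forever.
private void scrollUntilVisible(WebElement element, int maxSwipes) throws InterruptedException {
    for (int i = 0; i < maxSwipes && !element.isDisplayed(); i++) {
        new TouchAction(this.driver)
                .press(150, 150)
                .moveTo(0, 350)
                .release()
                .perform();
        Thread.sleep(5000);
    }
}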
I'm making an iOS app which lets you remotely control music playing in an app on your desktop.
One of the hardest problems is being able to update the position of the "tracker" (which shows the time position and duration of the currently playing song) correctly. There are several sources of input here:
1. At launch, the remote sends a network request to get the initial position and duration of the currently playing song.
2. When the user adjusts the position of the tracker using the remote, it sends a network request to the music app to change the position of the song.
3. If the user uses the app on the desktop to change the position of the tracker, the app sends a network request to the remote with the new position of the tracker.
4. If the song is currently playing, the position of the tracker is updated every 0.5 seconds or so.
At the moment, the tracker is a UISlider which is backed by a "Player" model. Whenever the user changes the position on the slider, it updates the model and sends a network request, like so:
In NowPlayingViewController.m
[[slider rac_signalForControlEvents:UIControlEventTouchUpInside] subscribeNext:^(UISlider *x) {
    [playerModel seekToPosition:x.value];
}];

[RACObserve(playerModel, position) subscribeNext:^(id x) {
    slider.value = playerModel.position;
}];
In PlayerModel.m:
@property (nonatomic) NSTimeInterval position;

- (void)seekToPosition:(NSTimeInterval)position
{
    self.position = position;
    [self.client newRequestWithMethod:@"seekTo" params:@[@(position)] callback:NULL];
}

- (void)receivedPlayerUpdate:(NSDictionary *)json
{
    self.position = [[json objectForKey:@"position"] doubleValue];
}
The problem is when a user "fiddles" with the slider and queues up a number of network requests, which all come back at different times. The user could have moved the slider again by the time a response is received, which would move the slider back to a previous value.
My question: How do I use ReactiveCocoa correctly in this example, ensuring that updates from the network are dealt with, but only if the user hasn't moved the slider since?
In your GitHub thread about this you say that you want to consider the remote's updates as canonical. That's good, because (as Josh Abernathy suggested there), RAC or not, you need to pick one of the two sources to take priority (or you need timestamps, but then you need a reference clock...).
Given your code and disregarding RAC, the solution is just setting a flag in seekToPosition: and unsetting it using a timer. Check the flag in receivedPlayerUpdate:, ignoring the update if it's set.
By the way, you should use the RAC() macro to bind your slider's value, rather than the subscribeNext: that you've got:
RAC(slider, value) = RACObserve(playerModel, position);
You can definitely construct a signal chain to do what you want, though. You've got four signals you need to combine.
For the last item (the periodic update) you can use RACSignal's +interval:onScheduler: method:
[[RACSignal interval:kPositionFetchSeconds
          onScheduler:[RACScheduler scheduler]] map:^(id _) {
    return /* Request position over network */;
}];
The map: just ignores the date that the interval:... signal produces, and fetches the position. Since your requests and messages from the desktop have equal priority, merge: those together:
[RACSignal merge:@[desktopPositionSignal, timedRequestSignal]];
You decided that you don't want either of those signals going through if the user has touched the slider, though. This can be accomplished in one of two ways. Using the flag I suggested, you could filter: that merged signal:
// Pass network updates through only while the user is not touching the slider
[mergedSignal filter:^BOOL (id _) { return !userFiddlingWithSlider; }];
Better than that -- avoiding extra state -- would be to build an operation out of a combination of throttle: and sample: that passes a value from a signal at a certain interval after another signal has not sent anything:
[mergedSignal sample:
[sliderSignal throttle:kUserFiddlingWithSliderInterval]];
(And you might, of course, want to throttle/sample the interval:onScheduler: signal in the same way -- before the merge -- in order to avoid unnecessary network requests.)
You can put this all together in PlayerModel, binding it to position. You'll just need to give the PlayerModel the slider's rac_signalForControlEvents:, and then merge in the slider value. Since you're using the same signal multiple places in one chain, I believe that you want to "multicast" it.
Finally, use startWith: to get your first item above, the initial position from the desktop app, into the stream.
RAC(self, position) =
    [[RACSignal merge:@[sampledSignal,
                        [sliderSignal map:^id(UISlider *slider) {
                            return @(slider.value);
                        }]]]
     startWith:/* Request position over network */];
The decision to break each signal out into its own variable or string them all together Lisp-style I'll leave to you.
Incidentally, I've found it helpful to actually draw out the signal chains when working on problems like this. I made a quick diagram for your scenario. It helps with thinking of the signals as entities in their own right, as opposed to worrying about the values that they carry.
What is the difference between TOUCH_BEGIN, TOUCH_OVER and TOUCH_ROLL_OVER in the TouchEvent for AS3?
I'm trying to find the correct one to use for an "on-tap"/while tapping state, and also one for after the button has been "tapped".
Think of TOUCH_BEGIN as the touch counterpart of MOUSE_DOWN, with an additional property, the index of the touching point (0-based), so that you won't mix up touch events. TOUCH_END is the touch equivalent of MOUSE_UP, and TOUCH_ROLL_OVER maps roughly to the MOUSE_DOWN condition combined with the MOUSE_OVER event.
I'd like to do the equivalent of chrome.tabs.onUpdated in Firefox. tabs.on('ready', function(tab){}) does not work because it does not detect the back button. How do I fire an action on every page load such that it also detects the back button using the Firefox SDK?
You'd have to use require('window-utils').WindowTracker to track all windows, filter for browser windows with the require('sdk/window/utils').isBrowser(window) method, and then listen for click events on the back button.
It's currently impossible, but will be possible in a future version of Firefox:
https://github.com/mozilla/addon-sdk/commit/e4ce238090a6e243c542c2b421f5906ef465acd0
A bit of a late answer, but for anyone reading this now (in 2016), it is now possible to do this using the SDK!
Using the high-level tabs API, you need to listen for the pageshow event. (More about this on MDN.)
An example:
var tabs = require("sdk/tabs");

tabs.on('pageshow', function(tab) {
    // Your code here
});
It is very similar to the load and ready events, the main difference being that it is also fired when a page is loaded from the BFCache (which it is when the back button is pressed).
I think the following snippet gives the functionality of chrome.tabs.onUpdated:
var tabs = require("sdk/tabs");

tabs.on('ready', function(tab) {
    console.log(tab.url);
});