This isn't exactly a critical bug, but I always feel weird shaking phones at my desk at work, even more so when it doesn't work the first time. And if we start talking about shaking iPad Pros, it just feels wrong.
On Android, I can run the following command:

adb shell input keyevent KEYCODE_MENU
Is there an iOS equivalent?
Thanks
Sadly no.
You can vote for it on Canny here. Until then, your best bet on iOS is to use a workaround, such as one of the ones suggested in the original GitHub issue, for example creating your own multi-touch shortcut for opening the dev menu, as seen here. It's not ideal, but it should work. (Code copied below.)
import React from 'react';
import {
  View,
  PanResponder,
  NativeModules,
} from 'react-native';

const DevMenuTrigger = ({children}) => {
  const {DevMenu} = NativeModules;
  const panResponder = PanResponder.create({
    onStartShouldSetPanResponder: (evt, gestureState) => {
      // Open the dev menu on a three-finger touch.
      if (gestureState.numberActiveTouches === 3) {
        DevMenu.show();
      }
      // Don't claim the responder; let the touch pass through to children.
      return false;
    },
  });
  return <View style={{flex: 1}} {...panResponder.panHandlers}>{children}</View>;
};
...
AppRegistry.registerComponent('myApp', () => () => <DevMenuTrigger><MyApp /></DevMenuTrigger>);
Related
I am running some tests via Nightwatch and Appium, and I have been unable to implement a scroll action on the iOS Simulator. My tests are set up to run in a Chrome, Safari, or Firefox browser, or on an iOS Simulator using Safari. The application is built with React, and I am using JavaScript. Everything runs smoothly until I have to scroll to a particular element on the screen.
When using the web browsers, all I need to do is call .click() on a specific element and that automatically brings the element into view, but on the iOS Simulator that does not appear to be the case.
This is what I have set up for iOS in my nightwatch.conf.js file. So far, every method I have found online has come up short. The Appium docs list several methods, but none of them would execute, or they failed silently. Does anyone have a possible solution or suggestion for how to properly execute a scroll with this setup? Thanks, much appreciated.
"ios": {
"selenium_start_process": false,
"selenium_port": 4723,
"selenium_host": "localhost",
"silent": true,
"automationName": "XCUITest",
"desiredCapabilities": {
"browserName": "safari",
"platformName": "iOS",
"platformVersion": "12.2",
"deviceName": "iPad Pro (9.7 -inch)"
}
},
Here is an example of what I have tried to implement. (The goal is to move to the button element and click it once it's in view; it's a dropdown.)
browser
  .assert.elementPresent('div[data-mr="quiz-dont-know"]')
  .click('div[data-mr="quiz-dont-know"]')
  .assert.elementPresent('div[data-mr="grey-info"]')

browser.pause(2000)

browser.element('css selector', 'div[data-mr="grey-info"]', function(button) {
  console.log("THIS ELEMENT IS " + button.value.ELEMENT);
  browser.moveTo(button.value.ELEMENT, 10, 10)
  browser.elementIdClick(button.value.ELEMENT);
})
  .pause(2000)
  .assert.elementPresent('div[data-mr="grey-info-options"]')
I think you want to use .scrollIntoView():
buttons.value.forEach((button, index) => {
  browser.elementIdText(button.ELEMENT, result => {
    if (result.value === 'None') {
      console.log("THIS ELEMENT IS " + button.ELEMENT);
      // Scroll the element into view inside the page context.
      browser.execute(function() {
        var elmnt = document.querySelector('[data-mr="grey-info"]');
        elmnt.scrollIntoView();
      });
      browser.elementIdClick(button.ELEMENT);
    }
  });
});
Piggybacking on Jortega's suggestion, I found a solution. Though his initial suggestion was unsuccessful, I wrestled with it a bit and found that if I use document.querySelector('[data-mr="grey-info"]') and assign its result to a variable, THEN I am able to call the scrollIntoView method. Much obliged.
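In other words, the working pattern looks roughly like this (a sketch rather than my exact test code):

browser.execute(function() {
  // Grab the element inside the page context first...
  var elmnt = document.querySelector('[data-mr="grey-info"]');
  // ...then calling scrollIntoView on the variable works.
  elmnt.scrollIntoView();
}, [], function() {
  // The element is now in view, so the click succeeds.
  browser.click('div[data-mr="grey-info"]');
});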
I want to trigger an action on a double right-click of the mouse while my Electron app is running in the background.
I read the documentation, and it seems there is no globalShortcut equivalent for mouse events.
Is there any other way to achieve this? Perhaps some Node module compatible with an Electron app?
Unfortunately, we can't achieve that yet.
As MarshallOfSound commented on the official issue:
"globalShortcut intercepts the key combination globally and prevents any application from receiving those key events. If you blocked apps from receiving mouse button presses things would break everywhere very quickly 👍"
https://github.com/electron/electron/issues/13964
For macOS, I'm currently using the Keyboard Maestro app.
I capture my mouse buttons with that app and trigger a globalShortcut key combination registered in my Electron app.
For Windows, AutoHotkey (AHK) might do the same job.
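The Electron side of that setup looks roughly like this (a sketch; Control+Shift+F12 is just an example of whatever combination you map the double right-click to in Keyboard Maestro or AHK):

const { app, globalShortcut } = require('electron');

app.on('ready', () => {
  // Keyboard Maestro / AHK sends this combination on double right-click.
  const registered = globalShortcut.register('Control+Shift+F12', () => {
    console.log('Double right-click forwarded by the external tool');
  });
  if (!registered) {
    console.log('globalShortcut registration failed');
  }
});

app.on('will-quit', () => {
  // Clean up the global registration on exit.
  globalShortcut.unregisterAll();
});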
I found this nice solution that works from plain HTML:
<script type="text/javascript">
  const {remote} = require('electron')
  const {Menu, MenuItem} = remote

  const menu = new Menu()

  // Build the menu one item at a time.
  menu.append(new MenuItem({
    label: 'MenuItem1',
    click() {
      console.log('item 1 clicked')
    }
  }))
  menu.append(new MenuItem({type: 'separator'}))
  menu.append(new MenuItem({label: 'MenuItem2', type: 'checkbox', checked: true}))
  menu.append(new MenuItem({
    label: 'MenuItem3',
    click() {
      console.log('item 3 clicked')
    }
  }))

  // Prevent Chromium's default right-click action and show our menu instead.
  window.addEventListener('contextmenu', (e) => {
    e.preventDefault()
    menu.popup(remote.getCurrentWindow())
  }, false)
</script>
Put this as the first item in your HTML body and it should work. At least it worked in my project.
Edit, because I forgot it: credit to Google for the answer (it was the sixth search result).
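One caveat, since this answer is a few years old: newer Electron versions removed the remote module from core, so on a recent release the same idea would need the @electron/remote package (an assumption about your setup; only the require lines change):

// Requires installing and initializing @electron/remote; see its docs.
const remote = require('@electron/remote')
const {Menu, MenuItem} = remote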
I want to simulate a mobile device in a desktop environment, but I can't find an argument that transforms mouse events into touch events.
How can I approach this? Any hint would be great. Thank you so much.
I think I figured it out. Dev environment:
node 7.4.0
Chromium 54.0.2840.101
Electron 1.5.1
const electron = require('electron')
const path = require('path')
const url = require('url')

const app = electron.app
const BrowserWindow = electron.BrowserWindow

let mainWindow;

function createWindow() {
  mainWindow = new BrowserWindow({
    width: 1024,
    height: 768,
    frame: false,
    x: -1920,
    y: 0,
    autoHideMenuBar: true,
    icon: 'assets/icons/win/app-win-icon.ico',
    // BrowserWindow options, not URL options: keep hidden until ready.
    show: false,
    backgroundColor: '#8e24aa'
  });

  try {
    // works with 1.1 too
    mainWindow.webContents.debugger.attach('1.2')
  } catch (err) {
    console.log('Debugger attach failed: ', err)
  }

  const isDebuggerAttached = mainWindow.webContents.debugger.isAttached()
  console.log('debugger attached? ', isDebuggerAttached)

  mainWindow.webContents.debugger.on('detach', (event, reason) => {
    console.log('Debugger detached due to: ', reason)
  });

  // This is where the magic happens: tell Chromium to emulate touch input.
  mainWindow.webContents.debugger.sendCommand('Emulation.setTouchEmulationEnabled', {
    enabled: true,
    configuration: 'mobile',
  });

  mainWindow.loadURL(url.format({
    pathname: path.join(__dirname, 'index.html'),
    protocol: 'file:',
    slashes: true,
  }));

  // Show the main window when it is loaded and ready to show
  mainWindow.once('ready-to-show', () => {
    mainWindow.show()
  })
}
// Listen for app to be ready
app.on('ready', createWindow);
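One addendum: in newer Chromium/Electron versions the touch-emulation command was split up, and converting mouse input to touch became its own DevTools protocol command. I haven't tested this across versions, so treat it as a pointer to check against the protocol docs for your Chromium build:

// Newer protocol versions: also emit touch events for mouse input.
mainWindow.webContents.debugger.sendCommand('Emulation.setEmitTouchEventsForMouse', {
  enabled: true,
  configuration: 'mobile',
});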
Take a look at this Electron GitHub issue or this Atom discussion for ideas on how to get touch working with Electron.
As far as how to approach it, I would look through the mouse events and touch events and wire up a function that combines the Electron API with the relevant Web APIs for mouse/touch, as in the sketch below.
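For example, something along these lines in the renderer process (a rough, untested sketch; it assumes a Chromium build that supports the Touch and TouchEvent constructors):

// Re-dispatch mouse events as synthetic touch events.
function forwardMouseAsTouch(type, mouseEvent) {
  const touch = new Touch({
    identifier: 0,
    target: mouseEvent.target,
    clientX: mouseEvent.clientX,
    clientY: mouseEvent.clientY,
  });
  const ended = type === 'touchend';
  mouseEvent.target.dispatchEvent(new TouchEvent(type, {
    touches: ended ? [] : [touch],
    targetTouches: ended ? [] : [touch],
    changedTouches: [touch],
    bubbles: true,
    cancelable: true,
  }));
}

window.addEventListener('mousedown', (e) => forwardMouseAsTouch('touchstart', e));
window.addEventListener('mousemove', (e) => forwardMouseAsTouch('touchmove', e));
window.addEventListener('mouseup', (e) => forwardMouseAsTouch('touchend', e));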
Looking at the web-contents API, the only thing you can do is to open the dev tools with:
// win being a BrowserWindow object
win.webContents.openDevTools();
Then you will have to manually click on the responsive tools (the smartphone icon), and you will get into the mode you want.
But I am afraid there is no way to do it programmatically, mostly because it is considered a development tool rather than a browser feature, so you will have the toolbar at the top and all of that. Not really something you want in production.
You can try Microsoft's Windows Simulator. You need to install Visual Studio 2019 with the Universal Windows Platform development workload, then run the simulator from:
C:\Program Files (x86)\Common Files\Microsoft Shared\Windows Simulator\16.0\Microsoft.Windows.Simulator.exe
I used it to test my Electron app, which needs to run well both on touchscreen devices and on devices without a touch screen, and it simulates touchscreen devices really well.
Do not forget to switch to touchscreen input on the right side of the panel; otherwise the default input will simulate a mouse pointer.
I'm trying to create a component using React Native like so:
export class IndicatorOverlay extends Component {
  render() {
    return (
      <View>
        <Text>text</Text>
      </View>
    );
  }
}
The above works, but when I try to make it stateless like so...
export default ({ text = 'text' }) => {
  return (
    <View>
      <Text>{text}</Text>
    </View>
  );
};
I get the following error:
Element type is invalid: expected a string (for built-in components)
or a class/function (for composite components) but got: undefined. You
likely forgot to export your component from the file it's defined in.
I'm sure I'm missing something basic, but I just can't see it. I use a similar stateless component in a React web app and it's fine.
Using react 16.0.0-alpha.6 and react-native 0.43.2, and am seeing this error in the iPhone simulator.
Hope someone can help :)
This is likely because the first example is a named export while the second one is a default export, so the way you need to import them differs.
Assuming you import your module like this:
import { IndicatorOverlay } from 'IndicatorOverlay';
you have two options. Either:
1) Change the way you import your module, since the stateless component is now a default export:
import IndicatorOverlay from 'IndicatorOverlay';
2) keep the import intact, but refactor your stateless component to something like this:
export const IndicatorOverlay = ({ text = 'text' }) => {
  return (
    <View>
      <Text>{text}</Text>
    </View>
  );
};
You can make it more DRY btw:
export const IndicatorOverlay = ({ text = 'text' }) => (
  <View>
    <Text>{text}</Text>
  </View>
);
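For completeness, with option 2 your existing named import keeps working unchanged. A quick usage sketch (the relative path and the text prop value are just placeholders):

import { IndicatorOverlay } from './IndicatorOverlay';

const Screen = () => <IndicatorOverlay text="Loading..." />;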
You can read more about imports and exports on MDN:
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/import
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/export
I'm running a React.js/Cordova/OnsenUI application that is intended to be used both in the browser and on mobile devices. I'd like the user to be able to scan a QR code, then jump to a screen in my application.
This is what the application looks like right now:
import React from 'react';
import {
  Navigator
} from 'react-onsenui';

import MainPage from './MainPage';
import Vendor from './Vendor';

const renderPage = (route, navigator) => (
  <route.component key={route.key} navigator={navigator} />
);

const App = () => (
  <Navigator
    renderPage={renderPage}
    initialRoute={{component: MainPage, key: 'MAIN_PAGE'}}
  />
);

export default App;
When I start up, depending on the URL, I might want to start with a Vendor component or a MainPage component.
I figured the easiest thing to do would be to dynamically create the initialRoute object based on the QR code that was scanned. Given that I might be on an iOS device, how do I know what URL was scanned? Or is there a different way I should be jumping to a specific screen when the app starts?
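To make the idea concrete, by "dynamically create the initialRoute object" I mean something like this (a sketch only; the hash-based check and the VENDOR key are made up, and on a device the scanned URL would presumably have to come from a Cordova plugin rather than window.location):

const getInitialRoute = () => {
  // In the browser, the scanned URL is simply the current location.
  const path = window.location.hash.replace('#', '');
  return path === 'vendor'
    ? {component: Vendor, key: 'VENDOR'}
    : {component: MainPage, key: 'MAIN_PAGE'};
};

const App = () => (
  <Navigator
    renderPage={renderPage}
    initialRoute={getInitialRoute()}
  />
);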