Creating Selenium node - appium

I am getting the following error while trying to attach my real Android device as a Selenium node to the hub:
Couldn't register this node: The hub is down or not responding: Connect to x.x.x.x:4445 [/x.x.x.x] failed: Network is unreachable
Any suggestions would be appreciated.
My config.json file:
{
"capabilities":
[
{
"browserName": "Android",
"version": "4.4.2",
"maxInstances": 3,
"platform": "ANDROID",
"deviceName": "emulator-5554"
}
],
"configuration":
{
"nodeTimeout": 120,
"port": 4728,
"hubPort": 4444,
"proxy": "org.openqa.grid.selenium.proxy.DefaultRemoteProxy",
"url": "127.0.0.1:4728/wd/hub",
"hub": "127.0.0.1:4444/grid/register",
"hubHost": "127.0.0.1",
"nodePolling": 2000,
"registerCycle": 10000,
"register": true,
"cleanUpCycle": 2000,
"timeout": 30000,
"maxSession": 1
}
}

Here is a config.json file that works fine for me with Selenium Grid, for both real Android devices and emulators. Give it a try.
{
"capabilities":
[
{
"browserName": "",
"version":"",
"maxInstances": 1,
"platform":"ANDROID"
}
],
"configuration":
{
"cleanUpCycle":2000,
"timeout":30000,
"proxy": "org.openqa.grid.selenium.proxy.DefaultRemoteProxy",
"url":"http://127.0.0.1:4720/wd/hub",
"host":"192.168.1.163",
"port": 4723,
"maxSession": 1,
"register": true,
"registerCycle": 5000,
"hubPort": 4444,
"hubHost": "192.168.1.163"
}
}
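For reference, the hub and the Appium node are typically started along these lines (the jar version and file path below are illustrative, not taken from the question):
# on the hub machine
java -jar selenium-server-standalone-2.53.1.jar -role hub -port 4444
# on the machine with the device attached, pointing Appium at the node config above
appium --nodeconfig /path/to/config.json -p 4723
With host and hubHost set to the machines' real LAN IPs (192.168.1.163 above) rather than 127.0.0.1, the node can reach the hub and register.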
I am getting the error message below.
error:
uncaughtException: Cannot read property 'url' of undefined date=Fri Oct 30 2015 10:31:30 GMT+0530 (IST), pid=2850, uid=1020000013, gid=1020000013, cwd=/home/sandeepreddy/.npm-packages/lib/node_modules/appium, execPath=/usr/bin/nodejs, version=v0.10.37, argv=[node, /usr/bin/appium, --nodeconfi**

Related

Not able to run test cases in nightwatch framework on ec2 amazon linux instance through jenkins

While running the test cases through Jenkins on an EC2 instance, I am getting an error message.
Here's my Nightwatch configuration:
{
"src_folders" : ["test"],
"globals_path": "globals.js",
"output_folder" : "reports",
"custom_commands_path" : "./commands",
"custom_assertions_path" : "./assertions",
"page_objects_path":"./pages",
"test_workers" : {
"enabled" : false,
"workers" : "auto"
},
"selenium" : {
"start_process" : true,
"server_path" : "./bin/selenium-server-standalone-4.0.0.jar",
"log_path" : "",
"port" : 4444,
"cli_args" : {
"webdriver.chrome.driver" : "./bin/chromedriver_linux"
}
},
"test_settings" : {
"default" : {
"request_timeout_options": {
"timeout": 100000
},
"videos": {
"enabled": false,
"delete_on_pass": false,
"path": "reports/videos",
"format": "mp4",
"resolution": "1280x720",
"fps": 15,
"display": ":",
"pixel_format": "yuv420p",
"inputFormat": "mjpeg"
},
"launch_url" : "http://localhost",
"selenium_port" : 4444,
"selenium_host" : "localhost",
"screenshots" : {
"enabled" : false,
"on_failure" : true,
"on_error" : true,
"path" : "./screenshots"
},
"end_session_on_fail" : true,
"skip_testcases_on_fail" : false,
"use_xpath" : true,
"globals" : {
"url" : "http://ec30-3-100-2-16.us-north-10.compute.amazonws.com:1000/login"
},
"desiredCapabilities": {
"browserName": "chrome",
"chromeOptions": {
"w3c": false,
"args" : ["headless","no-sandbox"]
},
"javascriptEnabled": true,
"acceptSslCerts": true
}
}
}
}
I am getting the error message below in the console:
Login Test Test Suite
==========================
- Connecting to localhost on port 4444...
Connected to localhost on port 4444 (31794ms).
Using: chrome (81.0.4044.129) on Linux platform.
Running: Verify user is able to login
POST /wd/hub/session/2a3ca3b508f6dda4d0933225c41824a4/url - ECONNRESET
Error: socket hang up
at connResetException (internal/errors.js:604:14)
at Socket.socketCloseListener (_http_client.js:400:25)
Error while running .navigateTo() protocol action: An unknown error has occurred.
POST /wd/hub/session/2a3ca3b508f6dda4d0933225c41824a4/elements - ECONNRESET
Error: socket hang up
at connResetException (internal/errors.js:604:14)
at Socket.socketCloseListener (_http_client.js:400:25)
Error while running .locateMultipleElements() protocol action: An unknown error has occurred.
I have installed the Chrome browser (81.0.4044.129) on the EC2 instance along with the matching Linux ChromeDriver.
Selenium server: selenium-server-standalone-4.0.0.jar
Note:
I configured Jenkins on my local machine (macOS) and it works fine there.
Please let me know if you need more information.
I believe the security group attached to the EC2 server doesn't have ICMP IPv4 inbound rules allowing access from the server running this Nightwatch script. Try adding your Nightwatch server's IP address to the ICMP IPv4 inbound rules of the EC2 instance you reference in the URL, or make it publicly accessible. I hope this resolves your issue.

Having trouble accessing /dev/serial0 on a Raspi from within a Module in Azure IoT Edge

I'm trying to set up a Module which will interact with /dev/serial0 on a Raspberry Pi B+ running Raspbian Stretch. I've used dtoverlay=pi3-miniuart-bt in /boot/config.txt to restore UART0/ttyAMA0 to GPIOs 14 and 15 (which is what my Raspi-based HW needs me to do).
I have attempted to make that device accessible to the Module using the following Container Create Options:
{
"HostConfig": {
"PortBindings": {
"1880/tcp": [
{
"HostPort": "1880"
}
]
},
"Privileged": true,
"Devices": [
{
"PathOnHost": "/dev/serial0",
"PathInContainer": "/dev/serial0",
"CgroupPermissions": "rwm"
},
{
"PathOnHost": "/dev/ttyAMA0",
"PathInContainer": "/dev/ttyAMA0",
"CgroupPermissions": "rwm"
},
{
"PathOnHost": "/dev/ttyS0",
"PathInContainer": "/dev/ttyS0",
"CgroupPermissions": "rwm"
}
]
}
}
I can see /dev/serial0 when I ssh in, but I can't see it from within the running Module:
pi@azure-iot-test:~ $ ls -l /dev/ser*
lrwxrwxrwx 1 root root 7 Sep 24 21:17 /dev/serial0 -> ttyAMA0
lrwxrwxrwx 1 root root 5 Sep 24 21:17 /dev/serial1 -> ttyS0
pi@azure-iot-test:~ $ sudo docker exec hub-nodered ls -l /dev/ser*
ls: /dev/serial0: No such file or directory
ls: /dev/serial1: No such file or directory
Any ideas?
Followup:
Additionally, I have tried the following ideas gleaned from here:
Adding "User": "node-red" to the root of the Container Create Options
Adding "User": "root" to the root of the Container Create Options
Adding "GroupAdd": "dialout" to "HostConfig": {...} in the Container Create Options
Followup #2
While I still can't interact with /dev/serial0, I am able to interact with /dev/ttyAMA0 using the following Container Create Options:
{
"HostConfig": {
"GroupAdd": [
"dialout"
],
"PortBindings": {
"1880/tcp": [
{
"HostPort": "80"
}
]
},
"Devices": [
{
"PathOnHost": "/dev/serial0",
"PathInContainer": "/dev/serial0",
"CgroupPermissions": "rwm"
},
{
"PathOnHost": "/dev/ttyAMA0",
"PathInContainer": "/dev/ttyAMA0",
"CgroupPermissions": "rwm"
}
]
}
}
The noteworthy items appear to be:
I didn't need "Privileged": true in "HostConfig"
I don't seem to need a "User" added
I needed "GroupAdd": ["dialout"] in "HostConfig"
So, while it's satisfying that I can interact with a serial device as I wanted to, it seems odd that I can't interact with /dev/serial0, which seems like it's "the recommended way" from the reading I've done.
Thanks to help and insight from Raymond Mouthaan over on the very helpful Node-RED Slack channel, I found my way to the following Container Create Options, which give me access to /dev/serial0:
{
"User": "node-red:dialout",
"HostConfig": {
"PortBindings": {
"1880/tcp": [
{
"HostPort": "80"
}
]
},
"Devices": [
{
"PathOnHost": "/dev/serial0",
"PathInContainer": "/dev/serial0",
"CgroupPermissions": "rwm"
}
]
}
}
This is different than the partial solution I found in "Followup #2" above in that I now do get access to /dev/serial0 as desired.
UPDATE: I originally posted this answer using "User": "root:dialout" rather than the "User": "node-red:dialout" you currently see above.
The first working solution was with "User": "root:root", but it seemed good to me to constrain to just the devices that are called out in Devices, which root:dialout seemed to do.
But I wondered whether I should be concerned security-wise with running as root at all.
Then I tried using node-red:dialout and it seems to be working perfectly, so I've updated the Container Create Options above to be what I think is the best answer.

Hyperledger Composer - multi host installation

I've been experimenting with Hyperledger Fabric, deployed across two VirtualBox Ubuntu VMs with Docker Swarm, but I have some issues when it comes to the Composer installation.
Network Setup:
Host1: Orderer, Peer1.Org1, Peer2.Org1, CLI
Host2: Peer1.Org2, Peer2.Org2
When it comes to the Fabric setup everything appears to be ok. I'm able to start the network, join the peers from the second host, and update the anchor peers (one for each organisation).
The Composer installation starts with creating and importing the business network card and then installing the .bna file onto the network.
The issues appear when I try to start the network:
Error: Error trying to start business network. Error: No valid responses from any peers.
Response from attempted peer comms was an error: Error: failed to execute transaction 14f90ad938da64fbbdb2923b07f4985251391937fc3fdc8bab19c2d13135ecd3: error starting container: error starting container: API error (500): Could not attach to network net_example: rpc error: code = NotFound desc = network net_example not found
Response from attempted peer comms was an error: Error: failed to execute transaction 14f90ad938da64fbbdb2923b07f4985251391937fc3fdc8bab19c2d13135ecd3: error starting container: error starting container: API error (500): Could not attach to network net_example: rpc error: code = NotFound desc = network net_example not found
Response from attempted peer comms was an error: Error: Failed to connect before the deadline
Response from attempted peer comms was an error: Error: Failed to connect before the deadline
When I inspect the first peer from org1, I'm seeing the following error:
2018-09-24 13:46:44.665 UTC [lscc] executeInstall -> INFO 03c Installed Chaincode [example-network] Version [0.0.1] to peer
2018-09-24 13:46:46.993 UTC [dockercontroller] Start -> ERRO 03d start-could not start container: API error (500): Could not attach to network net_example: rpc error: code = NotFound desc = network net_example not found
2018-09-24 13:46:47.008 UTC [chaincode] Launch -> ERRO 03e start failed: API error (500): Could not attach to network net_example: rpc error: code = NotFound desc = network net_example not found
error starting container
error starting container
2018-09-24 13:46:47.008 UTC [endorser] SimulateProposal -> ERRO 03f [mychannel][14f90ad9] failed to invoke chaincode name:"lscc" , error: API error (500): Could not attach to network net_example: rpc error: code = NotFound desc = network net_example not found
error starting container
error starting container
failed to execute transaction 14f90ad938da64fbbdb2923b07f4985251391937fc3fdc8bab19c2d13135ecd3
github.com/hyperledger/fabric/core/chaincode.(*ChaincodeSupport).Execute
/opt/gopath/src/github.com/hyperledger/fabric/core/chaincode/chaincode_support.go:181
github.com/hyperledger/fabric/core/endorser.(*SupportImpl).Execute
/opt/gopath/src/github.com/hyperledger/fabric/core/endorser/support.go:131
github.com/hyperledger/fabric/core/endorser.(*Endorser).callChaincode
/opt/gopath/src/github.com/hyperledger/fabric/core/endorser/endorser.go:173
github.com/hyperledger/fabric/core/endorser.(*Endorser).SimulateProposal
/opt/gopath/src/github.com/hyperledger/fabric/core/endorser/endorser.go:287
github.com/hyperledger/fabric/core/endorser.(*Endorser).ProcessProposal
/opt/gopath/src/github.com/hyperledger/fabric/core/endorser/endorser.go:501
github.com/hyperledger/fabric/core/handlers/auth/filter.(*expirationCheckFilter).ProcessProposal
/opt/gopath/src/github.com/hyperledger/fabric/core/handlers/auth/filter/expiration.go:61
github.com/hyperledger/fabric/core/handlers/auth/filter.(*filter).ProcessProposal
/opt/gopath/src/github.com/hyperledger/fabric/core/handlers/auth/filter/filter.go:31
github.com/hyperledger/fabric/protos/peer._Endorser_ProcessProposal_Handler
/opt/gopath/src/github.com/hyperledger/fabric/protos/peer/peer.pb.go:112
github.com/hyperledger/fabric/vendor/google.golang.org/grpc.(*Server).processUnaryRPC
/opt/gopath/src/github.com/hyperledger/fabric/vendor/google.golang.org/grpc/server.go:923
github.com/hyperledger/fabric/vendor/google.golang.org/grpc.(*Server).handleStream
/opt/gopath/src/github.com/hyperledger/fabric/vendor/google.golang.org/grpc/server.go:1148
github.com/hyperledger/fabric/vendor/google.golang.org/grpc.(*Server).serveStreams.func1.1
/opt/gopath/src/github.com/hyperledger/fabric/vendor/google.golang.org/grpc/server.go:637
runtime.goexit
/opt/go/src/runtime/asm_amd64.s:2361
2018-09-24 13:46:51.770 UTC [lscc] Invoke -> ERRO 040 error getting chaincode example-network on channel [mychannel]: could not find chaincode with name 'example-network'
This is my connectionProfile.json:
{
"name": "example-network",
"x-type": "hlfv1",
"version": "1.0.0",
"channels": {
"mychannel": {
"orderers": [
"orderer.example.com"
],
"peers": {
"peer0.manager.example.com": {
"endorsingPeer": true,
"chaincodeQuery": true,
"ledgerQuery": true,
"eventSource": true
},
"peer1.manager.example.com": {
"endorsingPeer": true,
"chaincodeQuery": true,
"ledgerQuery": true,
"eventSource": true
},
"peer0.sponsor.example.com": {
"endorsingPeer": true,
"chaincodeQuery": true,
"ledgerQuery": true,
"eventSource": true
},
"peer1.sponsor.example.com": {
"endorsingPeer": true,
"chaincodeQuery": true,
"ledgerQuery": true,
"eventSource": true
}
}
}
},
"organizations": {
"Manager": {
"mspid": "ManagerMSP",
"peers": [
"peer0.manager.example.com",
"peer1.manager.example.com"
],
"certificateAuthorities": [
"ca.manager.example.com"
]
},
"Sponsor": {
"mspid": "SponsorMSP",
"peers": [
"peer0.sponsor.example.com",
"peer1.sponsor.example.com"
],
"certificateAuthorities": [
"ca.sponsor.example.com"
]
}
},
"orderers": {
"orderer.example.com": {
"url": "grpcs://localhost:7050",
"grpcOptions": {
"ssl-target-name-override": "orderer.example.com"
},
"tlsCACerts": {
"pem": "INSERT_ORDERER_CA_CERT"
}
}
},
"peers": {
"peer0.manager.example.com": {
"url": "grpcs://localhost:7051",
"eventUrl": "grpcs://localhost:7053",
"grpcOptions": {
"ssl-target-name-override": "peer0.manager.example.com"
},
"tlsCACerts": {
"pem": "INSERT_MANAGER_CA_CERT"
}
},
"peer1.manager.example.com": {
"url": "grpcs://localhost:8051",
"eventUrl": "grpcs://localhost:8053",
"grpcOptions": {
"ssl-target-name-override": "peer1.manager.example.com"
},
"tlsCACerts": {
"pem": "INSERT_MANAGER_CA_CERT"
}
},
"peer0.sponsor.example.com": {
"url": "grpcs://10.0.0.113:9051",
"eventUrl": "grpcs://10.0.0.113:9053",
"grpcOptions": {
"ssl-target-name-override": "peer0.sponsor.example.com"
},
"tlsCACerts": {
"pem": "INSERT_SPONSOR_CA_CERT"
}
},
"peer1.sponsor.example.com": {
"url": "grpcs://10.0.0.112:10051",
"eventUrl": "grpcs://10.0.0.112:10053",
"grpcOptions": {
"ssl-target-name-override": "peer1.sponsor.example.com"
},
"tlsCACerts": {
"pem": "INSERT_SPONSOR_CA_CERT"
}
}
},
"certificateAuthorities": {
"ca.manager.example.com": {
"url": "https://localhost:7054",
"caName": "ca-manager",
"httpOptions": {
"verify": false
}
},
"ca.sponsor.example.com": {
"url": "https://10.0.0.111:8054",
"caName": "ca-sponsor",
"httpOptions": {
"verify": false
}
}
}
}
Does anybody know what I could try next?
The composer network start command is attempting to start a 'chaincode' container for each of the peers, and from the error you show, two of your peers cannot start these new containers.
The error looks like a Docker error - the containers cannot be attached to a network bridge called "net_example". I would guess that one of your docker-compose-xxx.yaml files defines the environment variable CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE, which determines which network bridge the new chaincode containers are attached to.
You will need to determine which network bridge your other Fabric containers are using, then set this variable in the .yaml to that bridge.
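As a sketch only (the service name and the network name net_basic below are placeholders; use whatever docker network ls shows your Fabric containers attached to), the relevant part of a peer's compose file would look roughly like this:
# docker-compose-peer.yaml (illustrative)
services:
  peer0.manager.example.com:
    environment:
      # must match the bridge/overlay network the rest of the Fabric containers use
      - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=net_basic
networks:
  default:
    external:
      name: net_basic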
This previous post has some discussion on multi-host fabric.
I don't think you need Docker Swarm for a multi-host Fabric setup.
I tried this tool from Altoros for easy setup of multi-host Fabric. It did work with Fabric 1.1. For 1.2 it did not work - I think it requires some changes.
https://www.altoros.com/blog/deploying-a-multi-node-hyperledger-fabric-network-in-5-steps/
There is another tool from Debut Infotech; you can try this too.
https://www.fabricdeployer.com/
Early on I tried setting up with Docker Swarm using the link below. It worked.
https://medium.com/@wahabjawed/hyperledger-fabric-on-multiple-hosts-a33b08ef24f

Get values from JSON in Ruby

I am trying to get the VolumeId and State of the volumes attached to the machines using the AWS API.
Code
#!/usr/local/bin/ruby
require "rubygems"
require "aws-sdk"
require "json"   # needed for JSON.parse on the CLI output below

# SDK client (its response is not used below; the raw JSON comes from the CLI call instead)
list = Aws::EC2::Client.new(region: "us-east-1")
volume = list.describe_volumes()

# Shell out to the AWS CLI and capture its JSON output as a string
volumes = %x( aws ec2 describe-volumes --region='us-east-1' )
puts volumes
Below is sample output of the command aws ec2 describe-volumes --region='us-east-1'.
Please help me get the VolumeId and State from it.
Sample output of the API (JSON):
{
"Volumes": [
{
"AvailabilityZone": "us-east-1d",
"Attachments": [
{
"AttachTime": "2015-02-02T07:31:36.000Z",
"InstanceId": "i-bca66353",
"VolumeId": "vol-892a2acd",
"State": "attached",
"DeleteOnTermination": true,
"Device": "/dev/sda1"
}
],
"Encrypted": false,
"VolumeType": "gp2",
"VolumeId": "vol-892a2acd",
"State": "in-use",
"Iops": 100,
"SnapshotId": "snap-df910966",
"CreateTime": "2015-02-02T07:31:36.380Z",
"Size": 8
}
]
}
For getting just the volume IDs:
JSON.parse(volumes)['Volumes'].map { |v| v["VolumeId"] }
For getting just the states (note the key is "State", capitalised as in the JSON above):
JSON.parse(volumes)['Volumes'].map { |v| v["State"] }
For getting a hash with volume IDs as keys and their states as values:
JSON.parse(volumes)['Volumes'].map { |v| [v["VolumeId"], v["State"]] }.to_h
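Alternatively, since the question already creates an Aws::EC2::Client, the shell-out and JSON parsing can be skipped entirely. A minimal sketch (same region assumed, not run against a live account):
require "aws-sdk"

ec2 = Aws::EC2::Client.new(region: "us-east-1")

# describe_volumes returns typed structs, so no JSON parsing is needed
vols = ec2.describe_volumes.volumes

ids         = vols.map(&:volume_id)                        # e.g. ["vol-892a2acd"]
states      = vols.map(&:state)                            # e.g. ["in-use"]
id_to_state = vols.map { |v| [v.volume_id, v.state] }.to_h

puts id_to_state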

Deploying docker container of Kibana 4 with port-mapping on Mesos/Marathon

I'm using mesos and marathon to deploy a container of Kibana 4. The JSON to deploy is:
{
"id": "/org/products/kibana/webapp",
"instances": 1,
"cpus": 1,
"mem": 768,
"uris": [],
"constraints": [
["hostname", "UNIQUE"]
],
"upgradeStrategy": {
"minimumHealthCapacity": 0.5
},
"healthChecks": [
{
"protocol": "HTTP",
"path": "/",
"portIndex": 0,
"initialDelaySeconds": 600,
"gracePeriodSeconds": 10,
"intervalSeconds": 30,
"timeoutSeconds": 120,
"maxConsecutiveFailures": 10
}
],
"env": {
"ES_HOST":"172.23.10.23",
"ES_PORT":"9200"
},
"container": {
"type": "DOCKER",
"docker": {
"image": "myregistry.local.com:5000/org/kibana:4.0.0",
"network": "BRIDGE",
"portMappings": [
{
"containerPort": 5601,
"hostPort": 0,
"servicePort": 50061,
"protocol": "tcp"
}
]
},
"volumes": [
{
"containerPath": "/etc/localtime",
"hostPath": "/etc/localtime",
"mode": "RO"
}
]
}
}
But when I POST it, the Kibana app never wakes up and the stderr log is:
I0227 12:22:44.666357 1149 exec.cpp:132] Version: 0.21.1
I0227 12:22:44.669059 1178 exec.cpp:206] Executor registered on slave 20150225-040104-1124079532-5050-952-S0
/kibana/src/index.js:58
throw error;
^
Error: listen EADDRNOTAVAIL
at errnoException (net.js:905:11)
at Server._listen2 (net.js:1024:19)
at listen (net.js:1065:10)
at net.js:1147:9
at asyncCallback (dns.js:68:16)
at Object.onanswer [as oncomplete] (dns.js:121:9)
After that I tried removing the port mapping, because I found some references indicating that it's a port or network configuration problem. The Kibana 4 web app then comes up fine, but I need a port mapping configured in order to access it over HTTP. I have no idea why Marathon has a problem with the network and portMappings config. Any help would be appreciated.
This is a nasty problem, and I encountered it as well (running Kibana 4 on Mesos + Marathon).
The short answer:
If you use current master of the Kibana repository, this won't happen - the relevant code has changed in the 4.1.0 snapshot which is master at the time of writing.
The long answer:
4.0.0 has this chunk of code in src/server/index.js:
var port = parseInt(process.env.PORT, 10) || config.port || 3000;
var host = process.env.HOST || config.host || '127.0.0.1';
Marathon supplies HOST and PORT environment variables by default, and the HOST variable is set to the Mesos slave's hostname. The above code makes Kibana try to bind to the IP address of the Mesos slave (which Marathon has placed in HOST), which will fail, as it's running inside a Docker container.
If you want to run 4.0.0, you can just patch these lines to:
var port = config.port || 3000;
var host = config.host || '127.0.0.1';
The code looks like this in master at the moment.
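Incidentally, the EADDRNOTAVAIL failure is easy to reproduce outside Kibana with a few lines of Node; the 172.23.10.23 address here is just an example of an address that no interface inside the container owns:
// repro.js - binding to an address no local interface owns fails with EADDRNOTAVAIL
var net = require('net');
var server = net.createServer();

server.on('error', function (err) {
  console.error(err.code); // EADDRNOTAVAIL
});

// inside the container only the container's own interfaces exist,
// so the slave's IP that Marathon put in HOST is not bindable
server.listen(5601, process.env.HOST || '172.23.10.23', function () {
  console.log('listening');
});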
