Selenium job configuration error in Jenkins

I am integrating a Protractor end-to-end test job into my Jenkins instance, which runs on CentOS 7. With the help of one of my test engineers, I created a config.js file and a Jenkins job for it.
I am getting the following error in the Jenkins console while executing this job:
+ cd '/var/lib/jenkins/workspace/UI Automation Test/UI-automation-tests/Test/steps'
+ protractor config.js
(node:11138) [DEP0022] DeprecationWarning: os.tmpDir() is deprecated. Use os.tmpdir() instead.
[07:12:43] I/launcher - Running 1 instances of WebDriver
[07:12:43] I/hosted - Using the selenium server at http://localhost:4444/wd/hub
[07:12:43] W/launcher - Ignoring uncaught error WebDriverError: unknown error: Chrome failed to start: exited abnormally
(unknown error: DevToolsActivePort file doesn't exist)
(The process started from chrome location /usr/bin/google-chrome is no longer running, so ChromeDriver is assuming that Chrome has crashed.)
(Driver info: chromedriver=2.44.609551 (5d576e9a44fe4c5b6a07e568f1ebc753f1214634),platform=Linux 3.10.0-862.2.3.el7.x86_64 x86_64) (WARNING: The server did not provide any stacktrace information)

This is a known (already reported) ChromeDriver bug. On Linux systems you can pass the following argument to avoid the error:
--disable-dev-shm-usage
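In a Protractor setup this flag is normally added to the Chrome arguments in config.js (under capabilities.chromeOptions.args). As a rough illustration only, here is how the same flag can be passed through ChromeOptions in a Java Selenium test; the extra --headless and --no-sandbox flags are an assumption about a typical Linux CI service account, not part of the original answer:

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.chrome.ChromeOptions;

public class JenkinsChromeExample {
    public static void main(String[] args) {
        ChromeOptions options = new ChromeOptions();
        // Works around the DevToolsActivePort error on Linux hosts
        options.addArguments("--disable-dev-shm-usage");
        // Often also needed when Chrome runs under a CI service account (assumption)
        options.addArguments("--headless", "--no-sandbox");
        WebDriver driver = new ChromeDriver(options);
        try {
            driver.get("https://example.com"); // placeholder URL
        } finally {
            driver.quit(); // release the browser and the ChromeDriver session
        }
    }
}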

Related

Chrome WebDriver fails when run from a Linux Docker container with error [1594905471.000][SEVERE]: bind() failed: Cannot assign requested address (99)

Scenario:
Running the tests from a Docker container, in a Docker container.
Results:
The tests fail with the errors below:
org.openqa.selenium.WebDriverException: unknown error: Chrome failed to start: exited abnormally.
(unknown error: DevToolsActivePort file doesn't exist)
(The process started from chrome location /usr/bin/google-chrome is no longer running, so ChromeDriver is assuming that Chrome has crashed.)
T E S T S
Running org.hobsoft.docker.mavenchrome.BrowserTest
Starting ChromeDriver 84.0.4147.30 (48b3e868b4cc0aa7e8149519690b6f6949e110a8-refs/branch-heads/4147#{#310}) on port 16079
Only local connections are allowed.
Please see https://chromedriver.chromium.org/security-considerations for suggestions on keeping ChromeDriver safe.
[1594905471.000][SEVERE]: bind() failed: Cannot assign requested address (99)
ChromeDriver was started successfully.
Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 1.28 sec <<< FAILURE! - in org.hobsoft.docker.mavenchrome.BrowserTest
canDuck(org.hobsoft.docker.mavenchrome.BrowserTest) Time elapsed: 1.274 sec <<< ERROR!
org.openqa.selenium.WebDriverException: unknown error: Chrome failed to start: exited abnormally.
(unknown error: DevToolsActivePort file doesn't exist)
(The process started from chrome location /usr/bin/google-chrome is no longer running, so ChromeDriver is assuming that Chrome has crashed.)
Build info: version: '3.141.59', revision: 'e82be7d358', time: '2018-11-14T08:17:03'
System info: host: '3cf859c01b07', ip: '172.17.0.2', os.name: 'Linux', os.arch: 'amd64', os.version: '4.19.76-linuxkit', java.version: '1.8.0_252'
Driver info: driver.version: ChromeDriver
remote stacktrace: #0 0x55bb64b8aea9
Root cause: The failure comes from trying to launch a full (non-headless) browser inside the container.
Fix: Launch Selenium's browser in headless mode in your tests.
For Java, this is shown below.
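// Headless mode lets Chrome run without a display, which is what the container lacks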
ChromeOptions options = new ChromeOptions().setHeadless(true);
WebDriver driver = new ChromeDriver(options);
Results: The tests now run successfully, without any errors.

Failed to execute fabric8 docker plugin

Running mvn clean install produces this error (on Windows):
[ERROR] Failed to execute goal io.fabric8:docker-maven-plugin:0.20.1:start (prepare-environment) on project integration-test: Execution prepare-environment of goal io.fabric8:docker-maven-plugin:0.20.1:start failed: Start-Job failed with unexpected exception: [sebp/elk:latest] "elk": Timeout after 120365 ms while waiting on url http://localhost:32774/

SonarQube server can not be reached by Jenkins using Docker

I have added a SonarQube Scanner analysis step to my Jenkins build, but the step fails:
[Test_gitlab] $ /var/jenkins_home/tools/hudson.plugins.sonar.SonarRunnerInstallation/http_INTERNAL_DOCKER_IP_ADDRESS_9000/bin/sonar-scanner -e -Dsonar.host.url=SERVER_IP_ADDRESS:9000 ******** -Dsonar.projectBaseDir=/var/jenkins_home/workspace/Test_gitlab
INFO: Option -e/--errors is no longer supported and will be ignored
INFO: Scanner configuration file: /var/jenkins_home/tools/hudson.plugins.sonar.SonarRunnerInstallation/http_INTERNAL_DOCKER_IP_ADDRESS_9000/conf/sonar-scanner.properties
INFO: Project root configuration file: NONE
INFO: SonarQube Scanner 2.8
INFO: Java 1.8.0_102 Oracle Corporation (64-bit)
INFO: Linux 3.10.0-327.10.1.el7.x86_64 amd64
INFO: User cache: /var/jenkins_home/.sonar/cache
ERROR: SonarQube server [SERVER_IP_ADDRESS:9000] can not be reached
INFO: ------------------------------------------------------------------------
INFO: EXECUTION FAILURE
INFO: ------------------------------------------------------------------------
INFO: Total time: 0.214s
INFO: Final Memory: 4M/209M
INFO: ------------------------------------------------------------------------
ERROR: Error during SonarQube Scanner execution
org.sonarsource.scanner.api.internal.ScannerException: Unable to execute SonarQube
at org.sonarsource.scanner.api.internal.IsolatedLauncherFactory$1.run(IsolatedLauncherFactory.java:84)
at org.sonarsource.scanner.api.internal.IsolatedLauncherFactory$1.run(IsolatedLauncherFactory.java:71)
at java.security.AccessController.doPrivileged(Native Method)
at org.sonarsource.scanner.api.internal.IsolatedLauncherFactory.createLauncher(IsolatedLauncherFactory.java:71)
at org.sonarsource.scanner.api.internal.IsolatedLauncherFactory.createLauncher(IsolatedLauncherFactory.java:67)
at org.sonarsource.scanner.api.EmbeddedScanner.doStart(EmbeddedScanner.java:218)
at org.sonarsource.scanner.api.EmbeddedScanner.start(EmbeddedScanner.java:156)
at org.sonarsource.scanner.cli.Main.execute(Main.java:72)
at org.sonarsource.scanner.cli.Main.main(Main.java:61)
Caused by: java.lang.IllegalStateException: Fail to download libraries from server
at org.sonarsource.scanner.api.internal.Jars.downloadFiles(Jars.java:93)
at org.sonarsource.scanner.api.internal.Jars.download(Jars.java:70)
at org.sonarsource.scanner.api.internal.JarDownloader.download(JarDownloader.java:39)
at org.sonarsource.scanner.api.internal.IsolatedLauncherFactory$1.run(IsolatedLauncherFactory.java:75)
... 8 more
Caused by: java.lang.IllegalArgumentException: unexpected url: SERVER_IP_ADDRESS:9000/batch_bootstrap/index
at org.sonarsource.scanner.api.internal.shaded.okhttp.Request$Builder.url(Request.java:141)
at org.sonarsource.scanner.api.internal.ServerConnection.callUrl(ServerConnection.java:109)
at org.sonarsource.scanner.api.internal.ServerConnection.downloadString(ServerConnection.java:98)
at org.sonarsource.scanner.api.internal.Jars.downloadFiles(Jars.java:78)
... 11 more
ERROR:
ERROR: Re-run SonarQube Scanner using the -X switch to enable full debug logging.
ERROR: SonarQube scanner exited with non-zero code: 1
Finished: FAILURE
My SonarQube Scanner is declared in Jenkins Global Tool Configuration. The name used is "http://SERVER_IP_ADDRESS:9000". This is the same address as the server base URL that I have set within SonarQube General Settings.
I'm using Docker: Jenkins is in a Docker container, and so is SonarQube.
The "unexpected url" mentioned in the stacktrace SERVER_IP_ADDRESS:9000/batch_bootstrap/index can be opened in a browser,
which displays sonar-scanner-engine-shaded-6.1.jar|SOME_LETTERS_AND_NUMBERS.
So why can't Jenkins reach the server?
I've also tried the Docker internal IP address, which can be found with:
docker inspect SONARQUBE_CONTAINER_ID | grep IP
Find the place where you define the "unexpected url" mentioned in the stack trace (the sonar.host.url parameter of the SonarQube Scanner) and prefix it with http://.
URLs must start with a scheme (see Wikipedia); a browser simply adds http:// by default, which is why the same address opens fine when you try it manually.
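For example, using the placeholder address from the question, the property should read as follows wherever it is defined (the Jenkins SonarQube configuration, a sonar-project.properties file, or a -D flag on the scanner command line):

sonar.host.url=http://SERVER_IP_ADDRESS:9000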

Random failure of creating a New Cassandra Cluster using OpsCenter

OpsCenter version: 5.1.0 and
DSE Version: 4.6.0
Creating a brand new cluster using OpsCenter directly gives us the following error. It occasionally works with the same settings, but about 95% of the time it fails with the same error. OpsCenter is running on its own box but shares the same security groups as the cluster instances. For good measure, I have opened up all TCP ports to all IPs. The following is the stack trace of the error from opscenterd.log:
*2015-03-19 10:06:12+0000 [] INFO: Starting provisioning process
2015-03-19 10:06:12+0000 [] INFO: Starting installation phase of cluster provisioning
2015-03-19 10:06:13+0000 [] WARN: HTTP request http://10.x.x.x:61621/alive? failed: Connection was refused by other side: 111: Connection refused.
2015-03-19 10:06:13+0000 [] INFO: Beginning install of OpsCenter agent to 54.x.x.x
2015-03-19 10:06:26+0000 [] WARN: HTTP request http://10.x.x.x:61621/alive? failed: Connection was refused by other side: 111: Connection refused.
2015-03-19 10:06:31+0000 [] INFO: Agent for ip 10.x.x.x is version None
2015-03-19 10:06:31+0000 [] INFO: Agent for ip 10.x.x.x is version u'5.1.0'
2015-03-19 10:07:23+0000 [] INFO: Successfully installed agent and dse on node 10.x.x.x
2015-03-19 10:07:23+0000 [] INFO: Beginning "stop" phase of cluster provisioning
2015-03-19 10:07:25+0000 [] WARN: Marking request '10.x.x.x: /ops/stop' (f6708fa2-b45f-42b4-b992-90a82b460ac7) as failed: /usr/sbin/service dse stop failed
exit status: 1
stdout:
log_daemon_msg is a shell function
Cassandra 2.0 and later require Java 7 or later.
2015-03-19 10:07:25+0000 [] ERROR: Failed to stop node 10.x.x.x: /usr/sbin/service dse stop failed
exit status: 1
stdout:
log_daemon_msg is a shell function
Cassandra 2.0 and later require Java 7 or later.
2015-03-19 10:07:25+0000 [] WARN: Marking request 'stop stage' (0b6fcb6b-96ba-404e-a484-b4b6b167b309) as failed: Failed to stop node 10.x.x.x: /usr/sbin/service dse stop failed
exit status: 1
stdout:
log_daemon_msg is a shell function
Cassandra 2.0 and later require Java 7 or later.
2015-03-19 10:07:25+0000 [] ERROR: Stop stage failed: Failed to stop node 10.x.x.x: /usr/sbin/service dse stop failed
exit status: 1
stdout:
log_daemon_msg is a shell function
Cassandra 2.0 and later require Java 7 or later.
2015-03-19 10:07:25+0000 [] WARN: Marking request 'provision' (daf1c15d-92e3-40b0-83ca-34d548ea835b) as failed: Stop stage failed: Failed to stop node 10.x.x.x: /usr/sbin/service dse stop failed
exit status: 1
stdout:
log_daemon_msg is a shell function
Cassandra 2.0 and later require Java 7 or later.
2015-03-19 10:07:25+0000 [] ERROR:
2015-03-19 10:07:25+0000 [] ERROR: Cluster provisioning failed: Exception: Stop stage failed: Failed to stop node 10.x.x.x: /usr/sbin/service dse stop failed
exit status: 1
stdout:
log_daemon_msg is a shell function
Cassandra 2.0 and later require Java 7 or later.
2015-03-19 10:07:25+0000 [] ERROR: Failed to provision cluster: Cluster provisioning failed: Exception: Stop stage failed: Failed to stop node 10.x.x.x: /usr/sbin/service dse stop failed
exit status: 1
stdout:
log_daemon_msg is a shell function
Cassandra 2.0 and later require Java 7 or later.
2015-03-19 10:07:25+0000 [] WARN: Marking request 28c021fd-d21a-4fed-bb5c-a4fe17d362e0 as failed: Cluster provisioning failed: Exception: Stop stage failed: Failed to stop node 10.x.x.x: /usr/sbin/service dse stop failed
exit status: 1
stdout:
log_daemon_msg is a shell function
Cassandra 2.0 and later require Java 7 or later.
2015-03-19 10:07:41+0000 [] WARN: Unable to find a matching cluster for node with IP [u'fe80:0:0:0:2000:aff:feeb:31c7%2', u'10.x.x.x', u'0:0:0:0:0:0:0:1%1', u'127.0.0.1']; the message was [u'5.1.0', u'/1947480708/conf']. This usually indicates that an OpsCenter agent is still running on an old node that was decommissioned or is part of a cluster that OpsCenter is no longer monitoring.
Appreciate any help!
Thanks in advance
Harsha
OpsCenter developer here. I make the OpsCenter provisioning features go zoom (or splat, occasionally, as you've seen). It is with sadness and shame that I must tell you that you're hitting a bug.
The Datastax AMI version 2.4 used by OpsCenter provisioning (https://github.com/riptano/ComboAMI/tree/2.4) does quite a bit of work at boot time via startup scripts. One of those tasks is to set up some gpg repository keys used to validate packages. Intermittently that process can fail, breaking package installs and leading to the series of errors that you're seeing. This failure is intermittent and has greatly increased in frequency recently. If you check /home/ubuntu/datastax-ami/ami.log you should see the gpg key failures that begin the rest of the failure chain.
Unfortunately, this error is pretty far down the technology stack and is difficult to work around manually. If you just need to provision a single cluster, you can retry until you get a good run. Otherwise your best bet is to manually launch the instances and use local provisioning to deploy DSE/DSC to their private IP addresses:
Launch instances using ami-ada2b6c4 (assuming you're in us-east-1)
Make sure to add the instances to the OpsCenterSecurity group.
Make sure you have the private half of the keypair you use (you'll need it during local provisioning)
On the instance data page, hit the advanced pulldown and add the following userdata as text "--raidonly --java7"
Do a local-provisioning run against the private IPs
Not a super-simple workaround. I wish your experience with OpsCenter this time around was more awesome. The good news is I'm on this bug and it will be fixed in an upcoming point release.
Edit: No longer necessary to manually remove /etc/security/limits.d/cassandra.conf
If it's just complaining about Java, then install Java 7; DataStax prefers the Oracle JDK and JRE. You might already have Java 7 and another version on your nodes, but Java 7 is not the default version. To change this, run:
sudo update-java-alternatives -s java-7-oracle
which is a command you can script to run over SSH so you don't have to log in to each node.

Appium Chrome driver port 9515

I am able to execute the script the first time with Appium, Selenium, and C# on an Android device, but whenever I try to run the script again I get the error below.
info: [CHROMEDRIVER STDERR] [0.028][SEVERE]: Could not bind socket to 0.0.0.0:9515
info: [CHROMEDRIVER] Port not available. Exiting...
info: Chromedriver exited with code 1
ERROR: debug: executing: "c:\android-sdk\platform-tools\adb.exe" -s 4d00b33d4ae241bf devices
info: [ADB] Getting connected devices...
info: [ADB] 1 device(s) connected
ERROR: debug: executing: "c:\android-sdk\platform-tools\adb.exe" -s 4d00b33d4ae241bf shell "am force-stop com.android.chrome"
ERROR: error: Chromedriver create session did not work. Status was 200 and body was {"sessionId":"79cdf9fec37fb4700e10ce34566a7e11","status":13,"value":{"message":"unknown error: Device 4d00b33d4ae241bf is already in use\n (Driver info: chromedriver=2.9.248315,platform=Windows NT 6.1 SP1 x86_64)"}}
ERROR: error: Failed to start an Appium session, err was: Error: Did not get session redirect from Chromedriver
But if I change the ChromeDriver port from 9515 to something else, the script executes. Then, whenever I want to run the script again, I have to change the ChromeDriver port to a new value. My OS is Windows 7. I need help with this.
I think you are not closing the driver instance after the script runs, so the acquired port is not freed for the next run.
Possible solutions:
Find the ChromeDriver process and stop it
Restart the Appium server
Call driver.quit() or its equivalent at the end of your script (see the sketch below)
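As the last item suggests, ending every run by quitting the driver releases the port ChromeDriver bound. Here is a minimal Java sketch; the question uses C#, where driver.Quit() plays the same role, and the capability values below are placeholders rather than the asker's actual setup:

import java.net.URL;
import org.openqa.selenium.remote.DesiredCapabilities;
import org.openqa.selenium.remote.RemoteWebDriver;

public class AppiumQuitExample {
    public static void main(String[] args) throws Exception {
        DesiredCapabilities caps = new DesiredCapabilities();
        caps.setCapability("platformName", "Android");
        caps.setCapability("deviceName", "4d00b33d4ae241bf"); // device id taken from the question's log
        caps.setCapability("browserName", "Chrome");

        // Assumed default Appium server address
        RemoteWebDriver driver = new RemoteWebDriver(new URL("http://127.0.0.1:4723/wd/hub"), caps);
        try {
            driver.get("https://example.com"); // placeholder for the actual test steps
        } finally {
            // Always quit, even when the test throws, so the ChromeDriver
            // session ends and port 9515 is free for the next run.
            driver.quit();
        }
    }
}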
