How can I run Selenium 2 Grid from an Ant build?

I'm working on modifying our existing Selenium Grid setup so that it will work with Selenium 2. The process of setting up the hub and nodes seems to be much simpler, but I'm having a problem getting it running in an Ant build the way I did before.
I've read through the wiki on Selenium 2 Grid and tried to set up the Ant build accordingly. My problem is that the first target runs and starts the hub, but then the other targets do not run and the build completes anyway. I'm attempting to run these on my own machine, with Selenium 1 (RC) JUnit tests and TestNG as the test runner.
The targets I have are as follows:
<taskdef resource="testngtasks" classpath="testng-${testng.version}.jar" />
<target name="start-hub" description="Start the Selenium Grid hub">
<java classpathref="runtime.classpath"
jar="${basedir}/selenium-server-standalone-${server.version}.jar"
fork="true"
spawn="true">
<arg value="-v" />
<arg value="-role" />
<arg value="hub" />
</java>
</target>
<target name="start-node"
description="Start the Selenium Grid node"
depends="start-hub">
<java classpathref="runtime.classpath"
jar="${basedir}/selenium-server-standalone-${server.version}.jar"
fork="true"
spawn="true">
<arg value="-role" />
<arg value="rc" />
<arg value="-hub" />
<arg value="http://localhost:4444/grid/register" />
<arg value="-port" />
<arg value="5555" />
<arg value="-browser" />
<arg value="browserName=firefox,version=3.6,maxInstances=5,platform=WINDOWS"/>
</java>
</target>
<target name="run-tests" description="Run the tests" depends="start-node">
<testng classpathref="runtime.classpath"
haltonfailure="true">
<sysproperty key="java.security.policy"
file="${grid.location}/lib/testng.policy" />
<arg value="testng.xml"/>
</testng>
</target>
It seems like the Ant process finishes after the first target runs. I looked for a way to start the servers in new windows, as the previous grid did, but the only option I saw was the exec task. I also tried running the hub with an exec task and the node as a java task; that resulted in the build stopping at the start-hub target rather than completing.
Is there a way I can get this running, or is there a better way to accomplish it?

Take a look at the way the Mozilla team does it here:
https://github.com/mozilla/moz-grid-config
Note that they're still using the Grid 1 node launchers, since Grid 2 is backwards-compatible in that respect, but it should give you an idea of how to handle this in Ant.
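If you want to keep everything in one Ant file, here is a minimal sketch (assuming the same standalone jar and the default hub port 4444): spawn the hub, then block with waitfor on the Grid console before the dependent targets continue.
<target name="start-hub" description="Start the Selenium Grid hub">
<java jar="${basedir}/selenium-server-standalone-${server.version}.jar"
fork="true"
spawn="true">
<arg value="-role" />
<arg value="hub" />
</java>
<!-- Block until the hub answers so dependent targets don't race ahead of it -->
<waitfor maxwait="30" maxwaitunit="second" timeoutproperty="hub.start.failed">
<http url="http://localhost:4444/grid/console" />
</waitfor>
<fail if="hub.start.failed" message="The Grid hub did not come up on port 4444" />
</target>
The same waitfor pattern can guard the start-node target before run-tests fires.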

Related

How to execute custom ant target using exec task?

In our project we are using a proprietary product that ships its own custom Ant executable to build the deployment artifact.
So, we have two build files, 'build-artifcat.xml' and 'build.xml':
'build-artifcat.xml' - contains targets that must be executed with the product's custom Ant executable.
'build.xml' - contains targets that can be executed by Apache Ant.
We are using Bamboo as our CI/CD tool, and it only provides a facility to execute Apache Ant targets.
So, we are planning to use the exec task to invoke targets from 'build-artifcat.xml' via the product's custom Ant executable.
I found the code below while checking older projects, but I am not sure how to specify the target to be executed from 'build-artifcat.xml'.
Can you please help?
<target name="buildArtifact">
<echo>Running ant exec: ${run.custom.ant.exec}</echo>
<exec dir="${run.ant.exec.dir}" executable="${run.custom.ant.exec}" error="${basedir}/${error.log.file}">
<arg value="-f" />
<arg file="${basedir}/build-artifcat.xml" />
<arg value="-DimportDir=${import.location}" />
<arg value="-DartifactDir=${artifact.dir}" />
<arg value="-data" />
<arg value="${temp.workspace}" />
<arg value="-vmargs" />
<arg value="-Xmx1024m -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=./crashes/my-heap-dump.hprof -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:./crashes/gc.log" />
<arg value="-verbose" />
</exec>
</target>
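If the product's custom Ant wrapper follows Apache Ant's command-line conventions (an assumption worth checking against its documentation), target names are passed as plain positional arguments after the -f option, so one extra <arg> per target is all that's needed. A sketch, where 'deployArtifact' is a hypothetical name standing in for whatever target 'build-artifcat.xml' actually defines:
<exec dir="${run.ant.exec.dir}" executable="${run.custom.ant.exec}" error="${basedir}/${error.log.file}">
<arg value="-f" />
<arg file="${basedir}/build-artifcat.xml" />
<!-- hypothetical target name; replace with the real target from build-artifcat.xml -->
<arg value="deployArtifact" />
<arg value="-DimportDir=${import.location}" />
<arg value="-DartifactDir=${artifact.dir}" />
</exec>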

JSCover - Excluding coverage files

I'm currently trying to get JSCover to exclude JS files that are used as libraries. I have a set of Ant targets below which will:
Start the JSCover server
Run the tests and generate a JSON report
Stop the server
Finally, I have a shell command to convert the JSON file to LCOV so that I can use it with SonarQube. I also get coverage in jscoverage.html, but it includes every file under web/, which is something I do not want.
Ant scripts below:
<target name="jstest-start">
<java jar=".../JSCover.jar" fork="true" spawn="true">
<arg value="-ws"/>
<arg value="--report-dir=coverage"/>
<arg value="--document-root=web"/>
<arg value="--port=8082"/>
<!-- Aim is to exclude folder js under web/ as it contains libraries, not source code. Currently does not work -->
<arg value="--no-instrument=web/js/"/>
<arg value="--no-instrument=js/"/>
</java>
<waitfor maxwait="5" maxwaitunit="second" checkevery="250" checkeveryunit="millisecond" timeoutproperty="failed">
<http url="http://localhost:8082/jscoverage.html"/>
</waitfor>
<fail if="failed"/>
</target>
<target name="jstest-run">
<exec dir="/usr/local/CI/phantomjs/bin/" executable="phantomjs" failonerror="true">
<arg line=".../run-jscover-qunit.js http://localhost:8082/index.html"/>
</exec>
</target>
<target name="jstest-stop">
<get src="http://localhost:8082/stop" dest="stop.txt" />
</target>
<target name="jstest" description="Run javascript tests">
<antcall target="jstest-start"/>
<antcall target="jstest-run"/>
<antcall target="jstest-stop"/>
</target>
So, what seems to be happening is that JSCover is recursively picking up all JS files, and I cannot prevent that from Sonar or from Ant.
Can anyone shed some light?
<arg value="--no-instrument=/js/"/>
should work, and to remove the test itself,
<arg value="--no-instrument=/test/"/>
The paths are as seen by the web-server, so the 'web' prefix in:
<arg value="--no-instrument=web/js/"/>
has no effect.
I resolved my own issue by correcting the shell command that generates the LCOV report:
java -cp JSCover.jar jscover.report.Main --format=LCOV /usr/local/CI/jenkins/workspace/PhantomJS/coverage/phantom/ /usr/local/CI/jenkins/workspace/PhantomJS/web/
Prior to this, the SRC-DIR and REPORT-DIR were the same, which was an error on my part. As far as I can understand, SRC-DIR should point to the source folder and REPORT-DIR should point to where the LCOV file exists.
I hope this helps someone.
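Since the rest of the workflow already lives in Ant, the LCOV conversion can be wrapped as one more target. This is only a sketch; the report and source directories are assumptions and should match the paths used above.
<target name="jstest-lcov" depends="jstest" description="Convert the JSON coverage report to LCOV">
<!-- First argument is JSCover's REPORT-DIR, second is the JS SRC-DIR -->
<java classname="jscover.report.Main" classpath=".../JSCover.jar" fork="true" failonerror="true">
<arg value="--format=LCOV" />
<arg value="coverage/phantom" />
<arg value="web" />
</java>
</target>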

Trouble passing argument to Ant exec task

I'm using Ant 1.8. I want to pass a property that I define in my build file to an exec command. Although I can see the property has a value in my echo statements, when I pass it to the script and print it out there, its value comes through as "${myco.test.root}", without being expanded. What is the correct way to pass the property's value to the script? Below is the relevant code from my build.xml file …
<target name="checkout-selenium-tests" depends="set-critical-path-test-suite,set-default-test-suite,check-local-folders-exist">
<echo message=" test root ${myco.test.root}" />
<stcheckout servername="${st.servername}"
serverport="${st.serverport}"
projectname="${st.myco.project}"
viewname="${st.myco.viewname}"
username="${st.username}"
password="${st.password}"
rootstarteamfolder="${myco.starteam.test.root}"
rootlocalfolder="${myco.test.root}"
forced="true"
deleteuncontrolled="true"
/>
<delete file="${myco.testsuite.file}" />
<echo message="test root ${myco.test.root}" />
<exec failonerror="true" executable="perl" dir="${scripts.dir}">
<arg value="generate_test_suite.pl" />
<arg value="My Tests" />
<arg value="${myco.test.root}" />
<arg value="${myco.testsuite.file}" />
</exec>
</target>
Thanks, - Dave
It actually looks good to me. Try running the build.xml with both the verbose and debug options turned on in Ant:
ant -d -v checkout-selenium-tests
That'll help track down where the error is coming from.
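If the verbose output doesn't make it obvious, another way to see exactly what the property holds at the moment exec runs is Ant's echoproperties task. A small sketch (the prefix is an assumption based on the property names above):
<!-- Dump every property starting with "myco." just before the exec call -->
<echoproperties prefix="myco." />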

How to quickly deploy assets to Amazon S3 with an Ant target?

What is the quickest way to deploy content to a CDN with an Ant target? My Ant target runs on a continuous integration server (Hudson). My current solution uses curl and is a bit slow. Should I use wput or something else, and how would I do that in Ant?
<target name="Deploy">
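<!-- Note: <for> is not a core Ant task; it comes from the ant-contrib library -->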
<for param="file">
<path>
<fileset dir="${basedir}/output" includes="**/*"/>
</path>
<sequential>
<echo> Deploy #{file} </echo>
<exec executable="curl">
<arg value="-F name=value"/> <!-- params for secure access -->
<arg value= "-F file=#{file}"/>
<arg value="http://cdn.com/project"/>
</exec>
</sequential>
</for>
</target>
Several ideas have come up to speed up the transfer of content to the CDN:
1) Max out the pipe bandwidth by using the parallel Ant task to transfer several mutually exclusive filesets simultaneously (see the sketch after this list). For example, if there are three sub-folders in the output folder, each can be given to a different parallel task, and each task would iterate through its files, calling curl on each one to transfer it to the CDN. http://ant.apache.org/manual/Tasks/parallel.html
2) Write a custom Ant task (bash script?) with local knowledge of the build, so that only the files changed by the last build get marked and transferred. This would prevent sending a file that is already on the CDN.
3) Read the remote directory from the CDN and use timestamps to determine which files to send. This may not be possible, depending on the CDN and whether it allows such queries. I was hoping wput could do this but I don't see an option for that. http://wput.sourceforge.net/wput.1.html
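For idea 1, a rough sketch of the parallel approach (the sub-folder names are assumptions, and the Deploy target would have to read its fileset dir from the ${deploy.dir} property instead of hard-coding output):
<target name="DeployParallel">
<parallel threadCount="3">
<!-- Each antcall pushes one mutually exclusive sub-folder of output/ -->
<antcall target="Deploy"><param name="deploy.dir" value="${basedir}/output/css"/></antcall>
<antcall target="Deploy"><param name="deploy.dir" value="${basedir}/output/js"/></antcall>
<antcall target="Deploy"><param name="deploy.dir" value="${basedir}/output/img"/></antcall>
</parallel>
</target>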
RESOLVED
I found a blog post titled "Deploying assets to Amazon S3 with Ant" which was extremely helpful. It uses the Python tool s3cmd ('s3cmd sync'), which only transfers files that don’t exist at the destination.
I ended up with this ant target:
<target name="s3Upload">
<property name="http.expires" value="Fri, 31 Dec 2011 12:00:00 GMT" />
<exec executable="${PYTHON_DIR}\python.exe" failonerror="true">
<arg value="${PYTHON_DIR}\Scripts\s3cmd" />
<arg value="--guess-mime-type" />
<arg value="--add-header=Cache-Control:public, max-age=630657344" />
<arg value="--add-header=Expires:${http.expires}" />
<arg value="--encoding=UTF-8" />
<arg value="--skip-existing" />
<arg value="--recursive" />
<arg value="--exclude=*.log" />
<arg value="--acl-public" />
<arg value="sync" />
<arg value="${CDN_DIR}/" />
<arg value="s3://my-project-cdn/" />
</exec>
</target>
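Note that s3cmd reads its AWS credentials from its own configuration file (created with s3cmd --configure), so that has to be set up on the build agent before this target can run.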

Inconsistent NoClassDefFoundError in subant java task

My Ant build script starts with a java task that uses fork="true":
<java fork="true"
classname="org.apache.tools.ant.launch.Launcher"
jvm="${java.home}/bin/java"
classpathref="class.path">
<arg value="-f" />
<arg value="${ant.file}" />
<arg value="generate" />
</java>
The <arg value="generate" /> points to another target in the same Ant build file.
That target starts another target containing a subant task that points to another file.
<subant verbose="true" target="replace">
<fileset dir="${basedir}" includes="refactor.xml" />
</subant>
That file, refactor.xml, again starts a java task with fork="true".
<java classpathref="class.path"
classname="namespace.Tool"
fork="true"/>
The strange behaviour is that everything works fine, except that once in a while I get a NoClassDefFoundError for the namespace.Tool class.
After, for example, closing and reopening the file, the error may disappear, but there is no reproducible pattern.
I tried avoiding the subant construction (which is only there to unclutter the build), but that doesn't help.
Finally, the class.path that is referenced looks like this:
<path id="class.path">
<pathelement location="../common/bin" />
<pathelement location="./bin" />
<fileset dir="${build.dir}">
<include name="...jar" />
</fileset>
</path>
Any ideas?
The cause was <pathelement location="./bin" />.
This bin folder was recompiled by Eclipse as soon as other steps in the sequence of Ant tasks, e.g. deleting a folder, triggered a rebuild; Eclipse's default setting is to recompile all code at such a moment.
As a result, the Ant process may or may not find a specific class in this bin folder, which produces the inconsistent NoClassDefFoundError.
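One way to avoid that race, as a sketch only (the src directory and property name are assumptions), is to let Ant compile into its own output directory and point class.path there instead of at Eclipse's bin folder:
<property name="ant.classes.dir" value="${basedir}/build/ant-classes" />
<target name="compile-for-ant">
<mkdir dir="${ant.classes.dir}" />
<!-- Compile into a directory Eclipse never touches -->
<javac srcdir="src" destdir="${ant.classes.dir}" includeantruntime="false">
<classpath>
<pathelement location="../common/bin" />
<fileset dir="${build.dir}">
<include name="...jar" />
</fileset>
</classpath>
</javac>
</target>
The <pathelement location="./bin" /> entry in class.path can then point at ${ant.classes.dir} instead, so an Eclipse rebuild mid-run no longer removes classes out from under the forked JVMs.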
