How to run a single Protractor test suite in Jenkins (Docker)

I am trying to run my Protractor tests in Jenkins and it works. The problem is that running the tests in a Docker container with multiple headless browsers doesn't work, because my tests use actions like hovering over an element. My idea was to use multiple test suites:
local: not headless, for presentations
external: headless (only test cases without actions like hovering), for Jenkins
I added the suites in the protractor.conf.js file.
Typing
protractor protractor.conf.js --suite local
in the terminal works fine and as expected, but my Jenkinsfile.feature says
npm run e2e..., which also runs the test suite I don't want to be executed.
So my Jenkins tests fail just because the wrong test suite is executed.
Replacing npm run e2e... with
protractor protractor.conf.js --suite local
in the Jenkinsfile.feature doesn't work either. I hope there is a way to tell Jenkins which suite to run. Thank you

Even though your question sounds like 'how can one develop a website', I'll try to walk you through the steps. But your question is too broad, especially given that you didn't provide your error stack or the contents of your package.json, Protractor config, Dockerfile and the Jenkins job you're running.
You have to break your task into multiple layers (this applies to any parameter you want to pass, headless mode or suite selection) and resolve them one by one.
Make sure Protractor itself can take this parameter when it is passed from the CLI. For example:
protractor protractor.conf.js --suite local
But don't forget to add a default value in case nothing is passed.
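For example, a minimal protractor.conf.js sketch (the spec paths and the SUITE environment variable are assumptions, not taken from your setup):
// protractor.conf.js - minimal sketch with two suites and a default
exports.config = {
  framework: 'jasmine',
  // suite names from the question; the spec paths are placeholders
  suites: {
    local: ['specs/**/*.spec.js'],
    external: ['specs/external/**/*.spec.js']
  },
  // default suite if nothing is passed; --suite on the CLI overrides this
  suite: process.env.SUITE || 'external',
  capabilities: { browserName: 'chrome' }
};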
Once the previous step is tested and works, go to the next layer, which in your case is Docker. Open the Dockerfile where you declare the image, add the environment variables you will be passing to Protractor, and define the command that runs your script once a container is spun up from your image.
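A hypothetical Dockerfile sketch of that idea (base image, paths and the SUITE variable name are assumptions, and it presumes the image already contains a browser/driver for headless runs):
FROM node:16
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
# default suite; can be overridden with "docker run -e SUITE=..."
ENV SUITE=external
# run the selected suite when a container is started from the image
CMD ["sh", "-c", "npx protractor protractor.conf.js --suite $SUITE"]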
When your Docker image works as expected, find out how to pass variables to it when spinning up a container. Normally it looks like the following:
docker run -e SUITE="regression" protractor_image
When you have the image on the Jenkins master or a slave, and you can run the tests from the CLI, you can work on your job. Depending on whether you use a pipeline job or a regular freestyle job, your steps may differ, but the logic remains the same: add a job and make it run the tests with hardcoded parameters first.
When the job works, add input parameters and make sure they are actually picked up.
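For a pipeline job, a rough Jenkinsfile sketch could look like this (the stage name and parameter choices are assumptions; protractor_image matches the docker run example above):
pipeline {
    agent any
    parameters {
        choice(name: 'SUITE', choices: ['external', 'local'], description: 'Protractor suite to run')
    }
    stages {
        stage('E2E tests') {
            steps {
                // the SUITE build parameter is forwarded into the container as an environment variable
                sh "docker run --rm -e SUITE=${params.SUITE} protractor_image"
            }
        }
    }
}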
I'm sure you'll have questions about each of these steps, so I'd suggest opening a new question for each one and providing more details there. Good luck

Related

Parameterize Performance Build in Jenkins

I am new to Jenkins. I am trying to monitor a performance test for my project.
I have my scripts in JMeter.
I have created a parameterized job in Jenkins with the following parameters:
Threads: 1
RampUp: 1
Loop: 40
I am using a Backend Listener to check the data in Grafana and AppDynamics.
Now when I start the build, the script runs only once, but I expect it to run 40 times (the build itself succeeds).
When I run it through JMeter directly, the script runs 40 times successfully, so I suppose there is some issue with Jenkins.
Please suggest how I can resolve this in Jenkins, as my project requirement is to run the script from Jenkins.
Thank you in advance!
It's hard to say what's wrong without seeing your Jenkins job and JMeter Thread Group configuration.
In order to apply external settings in JMeter you need to define the threads, ramp-up time and number of iterations in the Thread Group using JMeter Properties via the __P() function.
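For example (the default values after the comma are assumptions, pick whatever suits your plan):
Number of Threads (users): ${__P(threads,1)}
Ramp-up period (seconds): ${__P(rampup,1)}
Loop Count: ${__P(loops,40)}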
Once done, you will be able to override the values of these properties using the -J command-line argument, for example in Jenkins:
jmeter -Jthreads=1 -Jrampup=1 -Jloops=40 -n -t test.jmx -l result.jtl
This way you will be able to pass whatever number of virtual users/iterations without having to change your script.
More information: Apache JMeter Properties Customization Guide

How to deploy Drupal with Jenkins only if the tests are successful

I have some doubts about the correct configuration of Jenkins for continuous integration of a Drupal project, and I keep running into a contradiction.
Let me explain: the deployment essentially consists of executing:
cd /path/to/web/root
git pull
drush config:import
drush cache:rebuild
The tests are launched with the command
../vendor/bin/phpunit --verbose --log-junit ../tests_output/phpunit.xml -c ../phpunit.xml
The contradiction is that I do not understand when to run the tests.
Running them before the pull does not make sense because the latest changes are missing; running them after the pull means that, if any test fails, I should be able to restore the situation to what it was before the pull (and I'm not sure that's a safe operation).
I'm trying to run the tests directly in the Jenkins workspace, and for this I also created a separate database, but at the moment I get this error:
Drupal\Tests\field_example\Functional\TextWidgetTest::testSingleValueField
Behat\Mink\Exception\ElementNotFoundException: Button with id|name|label|value "Log in" not found
What could be the best strategy to follow?
So, your order seems OK: pull first, then run the tests.
However, you can have two Jenkins jobs: the first runs your tests, and the second runs ONLY if the first job completes without failure.
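If you use a pipeline job instead, the same gating fits in one Jenkinsfile, because a failed stage stops all later stages. A rough sketch (the phpunit and drush commands are taken from your question; the stage names and the -y flag are assumptions):
pipeline {
    agent any
    stages {
        stage('Test') {
            steps {
                // if PHPUnit exits non-zero, the build fails and Deploy never runs
                sh '../vendor/bin/phpunit --verbose --log-junit ../tests_output/phpunit.xml -c ../phpunit.xml'
            }
        }
        stage('Deploy') {
            steps {
                sh '''
                    cd /path/to/web/root
                    git pull
                    # -y so drush does not prompt in a non-interactive build
                    drush config:import -y
                    drush cache:rebuild
                '''
            }
        }
    }
}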
There are ways to get the exit status from scripts; see the following plugins/notes about that:
How to mark Jenkins builds as SUCCESS only on specific error exit values (other than 0)?
How to mark a build unstable in Jenkins when running shell scripts

How to add Jenkins job parameters to the system variables of a Windows PC

I am trying to figure out a way to pass the Jenkins job parameters shown on the job configure page to a slave node (a Windows PC).
If anyone knows how to add Jenkins job parameters to the system variables of a Windows PC, please do share it.
Consider the following use cases:
You set up a test job on Jenkins that accepts a distribution bundle as a parameter and performs tests against it. You want to have developers do local builds and let them submit those builds for test execution on Jenkins. In such a case, your parameter is a zip file that contains a distribution.
Your test suite takes so much time to run that in normal execution you can't afford to run the entire test cycle, so you want to control the portion of the tests to be executed. In such a case, your parameter is perhaps a string token that indicates the test suite to be run.
The parameters are available as environment variables, so e.g. a shell ($FOO, %FOO%) or Ant (${env.FOO}) can access these values.
Example:
my.prop=${env.BAR}
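On a Windows slave the same parameter can be read in an "Execute Windows batch command" build step, for instance (the parameter name FOO is just an example):
rem hypothetical batch build step on the Windows node
echo Job parameter FOO is %FOO%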
More info here.

Using Jenkins as a graphical frontend - parameterized builds

I have a few build scripts that can be run from the command line. I'd like to have a web UI to run them, and I thought about using Jenkins. I see that Jenkins jobs support parameters, and the defined parameters are set as environment variables in the build environment. However, I would rather not alter my scripts to accept input from environment variables; it would be easier to keep accepting input from the command line. I thought about adding the following shell command to the Jenkins job:
e.g. <build_script> --option1 $JENKINS_PARAM1 --option2 $JENKINS_PARAM2
Then, I would not need to alter my existing build scripts. Is that a common/recommended usage of Jenkins?
Yes, this seems perfectly fine to me.

Jenkins - Make pass/fail dependent on results from script command

I'm admittedly very new to using Jenkins, so I apologize if this is something simple I'm overlooking. In my Jenkins job, I have a bash command that runs a Python script. Everything runs correctly at the moment, and the Python script works. However, the script reports a pass or fail result after running (to clarify, a fail doesn't mean the script crashed, just that it ran through and, given the variables provided, determined that the result is wrong). I need the job to fail when the "fail" result is given, but I can't figure out a way to make the Jenkins pass/fail depend on anything other than whether everything runs properly. How can I set it up so that whether the job passes or fails depends on the Python script's output? Thanks in advance for your help!
So, from what I could figure out, the easiest way to do this is to modify the Python script so that it exits with a non-zero value when it produces the undesirable result and with zero otherwise; the success of the job will then mirror the success of the script.
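A minimal sketch of that idea (check_result and its logic are placeholders for whatever your script actually computes):
import sys

def check_result():
    # placeholder for the real logic; return True for "pass", False for "fail"
    return False

if __name__ == "__main__":
    if check_result():
        sys.exit(0)  # zero exit code: the shell step, and therefore the Jenkins job, succeeds
    else:
        sys.exit(1)  # any non-zero exit code makes the shell step, and therefore the job, fail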
