Trouble passing Jenkins parameters to run Protractor scripts

I am currently using 2 config files to run my Protractor scripts using Jenkins.
devConfig.ts and prodConfig.ts
These hold the dev credentials and URL and the prod credentials and URL respectively.
I have two Jenkins jobs that run different commands:
npm run tsc && protractor tmp/devConfig.js --suite devRegression
npm run tsc && protractor tmp/prodConfig.js --suite prodRegression
Instead of having two config files, is it possible to do this with a single one by passing parameters for the URL, credentials, suite and browser?
I was able to set up the parameters on Jenkins, but I am not able to pass them through to the Protractor scripts. Is there a straightforward way to construct these parameters and pass them on to Protractor?

For the Protractor side, check out the Protractor documentation on params.
Per that documentation, having this in your conf.js:
exports.config = {
  params: {
    login: {
      email: 'default',
      password: 'default'
    }
  },
  // * other config options *
};
you can pass any parameter to it on the command line as follows:
protractor conf.js --baseUrl='http://some.server.com' --params.login.email=example@gmail.com --params.login.password=foobar
so you end up having this in your specs:
describe('describe some test', function() {
  it('describe some step', function() {
    browser.get(browser.baseUrl);
    $('.email').sendKeys(browser.params.login.email);
    $('.password').sendKeys(browser.params.login.password);
  });
});
For Jenkins just construct the command as follows:
protractor conf.js --baseUrl=${url} --params.login.email=${email} --params.login.password=${password}
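Since the question also asks about the suite and browser, a single Jenkins build step could look roughly like the sketch below. The Jenkins parameter names (url, email, password, suite, browser) and the compiled config path tmp/conf.js are assumptions; --suite and --capabilities.browserName are standard Protractor CLI flags:
npm run tsc && protractor tmp/conf.js --baseUrl=${url} --suite=${suite} --capabilities.browserName=${browser} --params.login.email=${email} --params.login.password=${password}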
Another way, if you want to pass just one parameter, is to keep an object in your config.js that maps an environment name to all related params, like this:
let param_mapping = {
  prod: {
    url: "https://prod.app.com",
    email: "prod@gmail.com",
    password: "Test1234"
  },
  dev: {
    url: "https://dev.app.com",
    email: "dev@gmail.com",
    password: "Test1234"
  },
  stage: {
    url: "https://stage.app.com",
    email: "stage@gmail.com",
    password: "Test1234"
  }
};

let parameters = param_mapping[process.env.CUSTOM_ENV];

exports.config = {
  baseUrl: parameters.url,
  params: parameters,
  // ...
};
and then start your process with an environment variable:
CUSTOM_ENV=dev protractor protractor.conf.js
Please note I haven't tested this exact code just now, but I did test the logic a while ago, so this can serve as your approach.
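On the Jenkins side, the environment-variable approach maps nicely onto a choice parameter. A minimal sketch, assuming a Jenkins choice parameter named TARGET_ENV with values dev/prod/stage and a single compiled config at tmp/conf.js (both names are assumptions):
npm run tsc && CUSTOM_ENV=${TARGET_ENV} protractor tmp/conf.js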

Related

JMeter test error: URI does not specify a valid host name

I have configured JMeter in a Jenkins pipeline, but when the stage runs it appears the tests are not executing as expected, and in the console output I can see this error:
Non HTTP response message: URI does not specify a valid host name: http:/http:10.XXX.XXX.XXX:32518/account?field4=3025202645050&field7=generic01&field10=abc098 . The URL is being interpreted incorrectly, as http appears in it twice.
This is part of the Jenkins pipeline:
pipeline {
    agent any
    triggers {
        githubPush()
    }
    environment {
        .......
        REPOSITORY_URI = "${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_DEFAULT_REGION}.amazonaws.com/${IMAGE_REPO_NAME}"
    }
    stages {
        ......
        .......
        stage("UnitTest Report") {
            steps {
                publishHTML target: [
                    allowMissing: false,
                    alwaysLinkToLastBuild: true,
                    keepAll: true,
                    reportDir: '/var/lib/jenkins/workspace/FlexToEcocash_main/IntegrationTests/BuildReports/Coverage',
                    reportFiles: 'index.html',
                    reportName: 'Code Coverage'
                ]
                archiveArtifacts artifacts: 'IntegrationTests/BuildReports/Coverage/*.*'
            }
        }
        stage("Perfomance Test") {
            steps {
                build job: 'EcoToFlexPerfomanceTests'
            }
        }
    }
}
The "Perfomance Test" stage triggers a freestyle job named EcoToFlexPerfomanceTests, which is the one that runs the JMeter tests.
The console output shows the error above. Looking at the Performance Test reports, I am not sure they are showing the data correctly either; they seem not to be populated as of now.
Environment:
Debian 10 Buster
.Net 5 API running on k0s
What am I missing?
There is a problem with the URL of the HTTP Request sampler in your script. It should look like:
http://10.XXX.XXX.XXX:32518/account......
So use the Debug Sampler and View Results Tree listener combination to see all JMeter variables with their values, correct the URL of the sampler, and the problem should go away.
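If you want to avoid hardcoding the host at all, one hedged option is to drive it through JMeter properties (the property names host and port here are just examples): set the sampler's Protocol to http, Server Name to ${__P(host)}, Port to ${__P(port)}, and keep only the /account?... part in the Path field, then launch the test with:
jmeter -n -t plan.jmx -Jhost=10.XXX.XXX.XXX -Jport=32518
That keeps the protocol in exactly one place, so it cannot end up in the final URL twice.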

In Playwright, how to pass a baseUrl via the command line so my spec files don't have a hardcoded URL for the app I am testing?

In Protractor, I currently pass a flag on the command line that indicates the URL of the app I am testing. Now that we are switching to Playwright, I want to do something similar. Since the same tests will be used to test the app in different environments (dev, test, CI), I need a way to pass different URLs, and ideally I would like to control that via the command line.
UPDATE:
The baseURL option has been added in Playwright v1.13.0.
import { PlaywrightTestConfig } from '@playwright/test';

const config: PlaywrightTestConfig = {
  use: {
    baseURL: 'http://localhost:3000',
  },
};

export default config;
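With baseURL in place, specs can navigate with relative paths that Playwright resolves against it, so no URL is hardcoded in the tests. A minimal sketch (the test name and assertion are just illustrative):
import { test, expect } from '@playwright/test';

test('open the home page via baseURL', async ({ page }) => {
  // a relative path is resolved against use.baseURL from the config
  await page.goto('/');
  expect(await page.title()).toBeTruthy();
});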
In Playwright Test v1.12.0 there is no baseUrl property like the one in Protractor, but you can accomplish the same thing with an environment variable.
For instance:
import { test } from '@playwright/test';

const BASE_URL = process.env.URL;

test('verify title of the page', async ({ page }) => {
  await page.goto(BASE_URL);
});
and then run it on dev env:
URL=https://playwright.dev npx playwright test
or prod:
URL=https://playwright.prod npx playwright test
I added an environment variable to the playwright.config.js and it seems to work here.
import { PlaywrightTestConfig } from '@playwright/test';

const config: PlaywrightTestConfig = {
  use: {
    // note: the option is spelled baseURL (capital URL)
    baseURL: process.env.MY_CUSTOM_BASE_URL,
  },
};

export default config;
Then run it the same way as in the answer above:
MY_CUSTOM_BASE_URL=https://example.com npx playwright test
or
export MY_CUSTOM_BASE_URL=https://example.com
npx playwright test
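If you want a fallback for local runs when the variable is not set, here is a small hedged variation of the same config (the localhost URL is just a placeholder):
import { PlaywrightTestConfig } from '@playwright/test';

const config: PlaywrightTestConfig = {
  use: {
    // fall back to a local dev server when MY_CUSTOM_BASE_URL is not set
    baseURL: process.env.MY_CUSTOM_BASE_URL ?? 'http://localhost:3000',
  },
};

export default config;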

How to read the log file from within a pipeline?

I have a pipeline job that runs a maven build. In the "post" section of the pipeline, I want to get the log file so that I can perform some failure analysis on it using some regexes. I have tried the following:
def logContent = Jenkins.getInstance()
        .getItemByFullName(JOB_NAME)
        .getBuildByNumber(Integer.parseInt(BUILD_NUMBER))
        .logFile.text
Error for the above code:
Scripts not permitted to use staticMethod jenkins.model.Jenkins getInstance
currentBuild.rawBuild.getLogFile()
Error for the above code:
Scripts not permitted to use method hudson.model.Run getLogFile
From my research, when I encounter these, I should be able to go to the scriptApproval page and see a prompt to approve these scripts, but when I go to that page, there are no new prompts.
I've also tried loading the script in from a separate file and running it on a different node with no luck.
I'm not sure what else to try at this point, so that's why I'm here. Any help is greatly appreciated.
P.S. I'm aware of the Build Failure Analyzer (BFA) tool, and I've tried manually triggering the analysis early, but in order to do that I need to be able to access the log file, so I run into the same issue.
You can use the httpRequest pipeline step from the HTTP Request plugin:
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                echo 'Test fetch build log'
            }
            post {
                always {
                    script {
                        def logUrl = env.BUILD_URL + 'consoleText'
                        def response = httpRequest(
                            url: logUrl,
                            authentication: '<credentialsId of jenkins user>',
                            ignoreSslErrors: true
                        )
                        def log = response.content
                        echo 'Build log: ' + log
                    }
                }
            }
        }
    }
}
If your Jenkins job runs on a Linux machine, you can use curl to achieve the same goal.
pipeline {
    agent any
    stages {
        stage('Build') {
            environment {
                JENKINS_AUTH = credentials('<credentialsId of jenkins user>')
            }
            steps {
                sh 'pwd'
            }
            post {
                always {
                    script {
                        def logUrl = env.BUILD_URL + 'consoleText'
                        def cmd = 'curl -u ${JENKINS_AUTH} -k ' + logUrl
                        def log = sh(returnStdout: true, script: cmd).trim()
                        echo 'Build log: ' + log
                    }
                }
            }
        }
    }
}
Both approaches above require credentials in the "Username with password" format. For more detail about what that is and how to add it in Jenkins, please look here.
Currently this is not possible via the RunWrapper object that is made available. See https://issues.jenkins.io/browse/JENKINS-46376 for a request to add this.
So the only options are:
explicitly whitelisting the methods (see the sketch below), or
reading the log via the URL as described in the other answer, which requires either anonymous read access or proper credentials.
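For the first option, once an administrator has approved the signatures under Manage Jenkins > In-process Script Approval (for example RunWrapper.getRawBuild and Run.getLog), something like this untested sketch should work inside a script block:
script {
    // both calls below require in-process script approval
    def lastLines = currentBuild.rawBuild.getLog(200)            // last 200 log lines as a List<String>
    def suspicious = lastLines.findAll { it =~ /ERROR|FAILURE/ } // run your failure-analysis regexes here
    echo "Found ${suspicious.size()} suspicious lines"
}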

Problem when generating JUnit report on Jenkins Pipeline

I'm creating a post block in a Jenkins pipeline to publish test results using JUnit, HTML and Cobertura. The code looks like this:
post {
    always {
        publishTestResults(
            script: this,
            junit: [
                active: true,
                allowEmptyResults: true,
                archive: true,
                pattern: '**/reports/mocha.xml',
                updateResults: true
            ],
            cobertura: [
                active: true,
                allowEmptyResults: true,
                archive: true,
                pattern: '**/coverage/cobertura/cobertura-coverage.xml'
            ],
            html: [
                active: true,
                allowEmptyResults: true,
                archive: true,
                name: 'NYC/Mocha',
                path: '**/coverage/html'
            ],
            lcov: [
                active: true,
                allowEmptyResults: true,
                archive: true,
                name: 'LCOV Coverage',
                path: '**/coverage/lcov/lcov-report'
            ]
        )
        cobertura coberturaReportFile: 'coverage/cobertura/cobertura-coverage.xml'
        junit 'reports/mocha.xml'
        cleanWs()
        // deleteDir()
        script {
            FAILED_STAGE = env.STAGE_NAME
        }
    }
}
}
The problem is that when I execute the job on Jenkins, I receive this error message:
find . -wholename **/reports/mocha.xml -exec touch {} ;
touch: cannot touch './reports/mocha.xml': Permission denied
I suppose the issue is raised by the junit command. How can I solve this problem?
P.S.: The Jenkins server runs on Ubuntu. I tried to modify /etc/sudoers and add the line below to make Jenkins execute commands as root; it still did not solve my problem.
jenkins ALL=(ALL) NOPASSWD: ALL
From checking the code at https://github.com/SAP/jenkins-library/blob/5c13a0e2a20132336824c70b743c364bcb5341f4/vars/testsPublishResults.groovy#L136, it looks like you can avoid the issue by setting updateResults to false.
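In other words, only the junit part of the map needs to change; a minimal sketch of that fragment:
junit: [
    active: true,
    allowEmptyResults: true,
    archive: true,
    pattern: '**/reports/mocha.xml',
    // skip the find/touch step that fails with 'Permission denied'
    updateResults: false
],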
If you absolutely have to update the timestamp on the result file, open a terminal session, go to the project workspace as the jenkins user, try to run touch ./reports/mocha.xml, and debug it from there.

Passing parameter in Jenkinsfile to a shell command within a Docker container

I have a Jenkinsfile with a string parameter env_vars. With this parameter I want to define custom environment variables which I then set with a shell command inside the started Docker container. It is important that these environment variables are set at runtime.
This is my simple Jenkinsfile:
pipeline {
    options {
        timestamps()
    }
    agent {
        node {
            label 'master'
        }
    }
    parameters {
        string(name: 'env_vars', defaultValue: 'MY_USER_PASSWORD=abc MY_USER_NAME=def', description: 'the ENV variables to set before starting the tests')
    }
    stages {
        stage('TESTS') {
            steps {
                script {
                    withDockerRegistry([credentialsId: 'XXX', url: 'http://example.com']) {
                        withDockerContainer(image: 'myDockerImage:latest') {
                            withCredentials([string(credentialsId: 'cred1', variable: 'cred1'), string(credentialsId: 'cred2', variable: 'cred2')]) {
                                sh '''
                                    # here we go to run npm
                                    ${env_vars} npm run test -- chrome --tag=enabled
                                '''
                            }
                        }
                    }
                }
            }
        }
    }
}
And this is the error I get in Jenkins:
/var/lib/jenkins/jenkins3/jobs/zTestMG/workspace#tmp/durable-40340d0e/script.sh: line 4: MY_USER_PASSWORD=abc: command not found
One possible workaround is using eval for the shell command:
eval "${env_vars} npm run test -- chrome --tag=enabled"
But I don't want to use eval, because later I have to evaluate the result of the npm run command, and with eval I run into new problems.
How can I solve the problem to use the String parameter in the shell command within the Docker container?
I have found a possible solution. I replaced my single shell command with two separate ones:
export ${env_vars}
npm run ${run_script_method} -- ${browser} --tag=${tags}
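Put back into the Jenkinsfile from the question, the sh step would then look roughly like this untested sketch (run_script_method, browser and tags are the extra parameters introduced above):
sh '''
    # export turns "MY_USER_PASSWORD=abc MY_USER_NAME=def" into real environment variables
    export ${env_vars}
    npm run ${run_script_method} -- ${browser} --tag=${tags}
'''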
