RestAssured. Is it possible to log the body in a ResponseSpecification for all the tests? - rest-assured

I am trying to set up a @BeforeClass method for test inheritance. I always need the response body logged in all of my tests, so I tried this:
import static io.restassured.filter.log.LogDetail.*;

@BeforeClass
public void config() {
    RequestSpecification jsonServerRequestSpecification =
        new RequestSpecBuilder()
            .setBaseUri("http://localhost")
            .setPort(3000)
            .log(METHOD).log(URI).log(PARAMS)
            .setContentType(ContentType.JSON)
            .build();
    ResponseSpecification jsonServerResponseSpecification =
        new ResponseSpecBuilder()
            .expectContentType(ContentType.JSON)
            .log(STATUS).log(BODY)
            .build();
    requestSpecification = jsonServerRequestSpecification;
    responseSpecification = jsonServerResponseSpecification;
}
but after running one of my tests I'm getting this:
Request method: GET
Request URI: http://localhost:3000/comments?postId=1&id=1
Request params: <none>
Query params: postId=1
id=1
Form params: <none>
Path params: <none>
Multiparts: <none>
===============================================
Default Suite
Total tests run: 1, Passes: 1, Failures: 0, Skips: 0
===============================================
Process finished with exit code 0
As you can see, the body is not logged, although I expected it to be because of the @BeforeClass configuration.
RestAssured - v4.2.0 | Java - v8
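One workaround (a sketch, not from the original thread): instead of putting logging in the ResponseSpecification, register global logging filters, which apply to every request and response made through REST Assured regardless of which specs are used. `RequestLoggingFilter` and `ResponseLoggingFilter` are real REST Assured classes and both accept a `LogDetail` argument:

```java
import io.restassured.RestAssured;
import io.restassured.filter.log.LogDetail;
import io.restassured.filter.log.RequestLoggingFilter;
import io.restassured.filter.log.ResponseLoggingFilter;

public class LoggingConfig {

    // Sketch: global filters log every request and response, independent of
    // any RequestSpecification/ResponseSpecification being applied.
    public static void configureLogging() {
        RestAssured.filters(
            new RequestLoggingFilter(LogDetail.ALL),    // method, URI, params, headers, body
            new ResponseLoggingFilter(LogDetail.BODY)); // response body for every call
    }
}
```

Calling `configureLogging()` once from the @BeforeClass method should then print the response body for all tests in the hierarchy.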

Related

"Rest Assured" requests log: Duplicate requests prints

I am working with IntelliJ, although I don't know if that is important...
When I debug my code, which uses Rest Assured, every request is printed twice in the IntelliJ Run/Debug window.
For example:
@testR
Feature: tests Feature
Run Before Feature
**Request method: POST**
Request URI: https://10.188.10.30:443/auth/api/login
Proxy: <none>
Request params: <none>
Query params: <none>
Form params: username=admin
password=admin
**Request method: POST**
Request URI: https://10.188.10.30:443/auth/api/login
Proxy: <none>
Request params: <none>
Query params: <none>
Form params: username=admin
password=admin
10:34:44: Step: Given Login to tenant "system" with username "admin" and password "admin"(Scenario: New Login)
I defined my requestSpecification as:
requestSpecification =
    RestAssured
        .with()
        .baseUri(baseUri)
        .port(port)
        .filter(new ResponseLoggingFilter())
        .filter(new RequestLoggingFilter())
        .log().all();
You have registered two request-logging mechanisms, which is why every request is printed twice. Use only one of them, either
.filter(new RequestLoggingFilter())
or
.log().all();
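A corrected version of the specification might look like this (a sketch; `baseUri` and `port` are the placeholders from the question):

```java
import io.restassured.RestAssured;
import io.restassured.filter.log.RequestLoggingFilter;
import io.restassured.filter.log.ResponseLoggingFilter;
import io.restassured.specification.RequestSpecification;

public class SpecConfig {

    // Sketch: keep the two filters and drop .log().all(), so the request is
    // logged exactly once (by RequestLoggingFilter) and the response exactly
    // once (by ResponseLoggingFilter).
    static RequestSpecification build(String baseUri, int port) {
        return RestAssured
                .with()
                .baseUri(baseUri)
                .port(port)
                .filter(new ResponseLoggingFilter())
                .filter(new RequestLoggingFilter());
    }
}
```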

Trouble passing Jenkins parameters to run Protractor scripts

I am currently using 2 config files to run my Protractor scripts using Jenkins.
devConfig.ts and prodConfig.ts
These have the dev creds and URL and the prod creds and URL.
I have two Jenkins jobs that run different commands
npm run tsc && protractor tmp/devConfig.js --suite devRegression
npm run tsc && protractor tmp/prodConfig.js --suite prodRegression
Instead of having two config files, is it possible to use just one, by passing params for the URL, creds, suite, and browser?
I was able to set up the parameters in Jenkins (screenshots of the parameterized job are omitted here), but I am not able to pass them back to the Protractor scripts. Is there a straightforward way to construct these parameters and pass them on to Protractor?
For the Protractor side, check out this page.
Per its content, having this in your conf.js:
module.exports = {
    params: {
        login: {
            email: 'default',
            password: 'default'
        }
    },
    // * other config options *
}
you can pass any parameter to it on the command line as follows:
protractor --baseUrl='http://some.server.com' conf.js --params.login.email=example@gmail.com --params.login.password=foobar
so you end up having this in your specs:
describe('describe some test', function() {
    it('describe some step', function() {
        browser.get(browser.baseUrl);
        $('.email').sendKeys(browser.params.login.email);
        $('.password').sendKeys(browser.params.login.password);
    });
});
For Jenkins, just construct the command as follows:
protractor --baseUrl=${url} conf.js --params.login.email=${email} --params.login.password=${password}
Another way, if you want to pass just one parameter, is to have an object in your config.js that maps all related params, like this:
let param_mapping = {
    prod: {
        url: "https://prod.app.com",
        email: "prod@gmail.com",
        password: "Test1234"
    },
    dev: {
        url: "https://dev.app.com",
        email: "dev@gmail.com",
        password: "Test1234"
    },
    stage: {
        url: "https://stage.app.com",
        email: "stage@gmail.com",
        password: "Test1234"
    }
};

let parameters = param_mapping[process.env.CUSTOM_ENV];

exports.config = {
    baseUrl: parameters.url,
    params: parameters,
    // ...
};
and then start your process with an environment variable:
CUSTOM_ENV=dev protractor protractor.conf.js
Please note that I haven't tested this particular code recently, but I did test the logic a while ago, so this can be your approach.
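The environment-based lookup above can be sanity-checked outside Protractor with plain Node (a sketch with hypothetical values; only the mapping logic is exercised):

```javascript
// Minimal sketch of the env-based config lookup (hypothetical values).
const param_mapping = {
    prod: { url: "https://prod.app.com", email: "prod@gmail.com" },
    dev:  { url: "https://dev.app.com",  email: "dev@gmail.com" },
};

// Resolve a parameter set by environment name; fail loudly on typos so the
// Jenkins job aborts instead of running against "undefined".
function resolveParams(envName) {
    const parameters = param_mapping[envName];
    if (!parameters) {
        throw new Error(`Unknown CUSTOM_ENV: ${envName}`);
    }
    return parameters;
}

// Protractor would read the name from the environment, e.g. CUSTOM_ENV=dev.
const chosen = resolveParams(process.env.CUSTOM_ENV || "dev");
console.log(chosen.url);
```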

Jenkins: Failed tests: testApp(GitProject.gittest.AppTest): connection refused(..)

I am trying to use Jenkins in combination with Eclipse. When I run my code locally in Eclipse it succeeds, but when I run it with Jenkins (also on my machine) it gives me this error:
Failed tests: testApp(GitProject.gittest.AppTest): connection refused(..)
The thing is that with chromedriver it is fine, but when I run it with geckodriver it fails. I am running very simple code:
public class AppTest
{
    @Test
    public void testApp() throws InterruptedException
    {
        String exePath = "/Users/Shared/Jenkins/Home/geckodriver";
        System.setProperty("webdriver.gecko.driver", exePath);
        WebDriver driver = new FirefoxDriver();
        System.out.println(driver.manage().window().getSize());
        driver.get("https://www.apple.com");
        driver.manage().window().setSize(new Dimension(1024, 768));
        System.out.println(driver.manage().window().getSize());
    }
}

How to force pull Docker images in DC/OS?

For Docker orchestration, we were using Mesos and Chronos to schedule job runs.
Now we have dropped Chronos and are trying to set this up on DC/OS, using Mesos and Metronome.
In chronos, I could activate force pulling a docker image via its yml config:
container:
  type: docker
  image: registry.example.com:5001/the-app:production
  forcePullImage: true
Now, in DC/OS with Metronome and Mesos, I also want to force it to always pull the up-to-date image from the registry instead of relying on its cached version.
Yet the json config for docker seems limited:
"docker": {
"image": "registry.example.com:5001/the-app:production"
},
If I push a new image to the production tag, the old image is still used for the job run on Mesos.
Just for the sake of it, I tried adding the flag:
"docker": {
"image": "registry.example.com:5001/my-app:staging",
"forcePullImage": true
},
yet on the put request, I get an error:
http PUT example.com/service/metronome/v1/jobs/the-app < app-config.json
HTTP/1.1 422 Unprocessable Entity
Connection: keep-alive
Content-Length: 147
Content-Type: application/json
Date: Fri, 12 May 2017 09:57:55 GMT
Server: openresty/1.9.15.1
{
"details": [
{
"errors": [
"Additional properties are not allowed but found 'forcePullImage'."
],
"path": "/run/docker"
}
],
"message": "Object is not valid"
}
How can I make DC/OS always pull the up-to-date image? Or do I have to update the job definition with a unique image tag every time?
The Metronome API doesn't support this yet; see https://github.com/dcos/metronome/blob/master/api/src/main/resources/public/api/v1/schema/jobspec.schema.json
As this is currently not possible, I created a feature request asking for it.
In the meantime, I built a workaround that updates the image tag for all registered jobs, using TypeScript and the request-promise library.
Basically, I fetch all the jobs from the Metronome API, filter them by id starting with my app name, change the Docker image, and issue a PUT request to the Metronome API for each changed job to update its config.
Here's my solution:
const targetTag = 'stage-build-1501'; // currently hardcoded, should be set via jenkins run
const app = 'my-app';
const dockerImage = `registry.example.com:5001/${app}:${targetTag}`;

interface JobConfig {
    id: string;
    description: string;
    labels: object;
    run: {
        cpus: number,
        mem: number,
        disk: number,
        cmd: string,
        env: any,
        placement: any,
        artifacts: any[];
        maxLaunchDelay: number;
        docker: { image: string };
        volumes: any[];
        restart: any;
    };
}

const rp = require('request-promise');

const BASE_URL = 'http://example.com';
const METRONOME_URL = '/service/metronome/v1/jobs';
const JOBS_URL = BASE_URL + METRONOME_URL;

const jobsOptions = {
    uri: JOBS_URL,
    headers: {
        'User-Agent': 'Request-Promise',
    },
    json: true,
};

const createJobUpdateOptions = (jobConfig: JobConfig) => {
    return {
        method: 'PUT',
        body: jobConfig,
        uri: `${JOBS_URL}/${jobConfig.id}`,
        headers: {
            'User-Agent': 'Request-Promise',
        },
        json: true,
    };
};

rp(jobsOptions).then((jobs: JobConfig[]) => {
    const filteredJobs = jobs.filter((job: any) => {
        return job.id.includes('job-prefix.'); // I don't want to change the image of all jobs, only for the same application
    });
    filteredJobs.forEach((job: JobConfig) => {
        job.run.docker.image = dockerImage;
    });
    filteredJobs.forEach((updateJob: JobConfig) => {
        console.log(`${updateJob.id} to be updated!`);
        const requestOption = createJobUpdateOptions(updateJob);
        rp(requestOption).then((response: any) => {
            console.log(`Updated schedule for ${updateJob.id}`);
        });
    });
});
I had a similar problem where my image repo required authentication and I could not provide the necessary auth info using the Metronome syntax. I worked around this by specifying two commands instead of referencing the image directly:
docker --config /etc/.docker pull
docker --config /etc/.docker run
I think "forcePullImage": true should work within the docker dictionary.
Check:
https://mesosphere.github.io/marathon/docs/native-docker.html
and look at the "force pull" option.
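For comparison, a Marathon-style container definition where this key is accepted (a sketch based on the linked Marathon docs; note this is Marathon's app schema, not Metronome's job schema, which rejected the key in the question above):

```json
{
  "id": "/the-app",
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "registry.example.com:5001/the-app:production",
      "forcePullImage": true
    }
  }
}
```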

ALM Jenkins Configuration failed

I currently face the problem that my HP ALM test automations do not run when executed via the HP Application Automation Tools, although they run fine when I trigger them from within HP ALM.
This is the output of the job:
Does anybody know what "Execution status: Error, Message: Access is denied" means? Is there maybe some permission configuration missing in HP ALM?
Building in workspace D:\Tools\Jenkins\workspace\Dani-JenkinsWithQC
[Dani-JenkinsWithQC] $ D:\Tools\Jenkins\workspace\Dani-JenkinsWithQC\HpToolsLauncher.exe -paramfile props20022014150821066.txt
"Started..."
Timeout is set to: 5
Run mode is set to: RUN_REMOTE
============================================================================
Starting test set execution
Test set name: JenkinsIntegartionTest, Test set id: 2457
"Number of tests in set: "2
Test 1: [1]Login will run on host: si0vm839
Test 2: [1]Logout will run on host: si0vm839
"Scheduler started at:15.09.2015 15:08:28
-------------------------------------------------------------------------------------------------------
15.09.2015 15:08:29 Running: [1]Login
15.09.2015 15:08:29 Running test: [1]Login, Test id: 938, Test instance id: 1412
Test: [1]Login, Id: 1412, Execution status: Running
Test: [1]Login, Id: 1412, Execution status: Error, Message: Access is denied
15.09.2015 15:08:33 Test complete: [1]Login
-------------------------------------------------------------------------------------------------------
15.09.2015 15:08:33 Running: [1]Logout
15.09.2015 04:15:08:33 Running test: [1]Logout, Test id: 939, Test instance id: 1413
Test: [1]Logout, Id: 1413, Execution status: Running
Test: [1]Logout, Id: 1413, Execution status: Error, Message: Access is denied
==============
Job timed out!
==============
================================================
Run status: Job failed, total tests: 2, succeeded: 0, failures: 0, errors: 2
Build step 'Execute HP tests from HP ALM' changed build result to FAILURE
Finished: FAILURE
The user you trigger the run from within ALM and the user you configured in Jenkins may not be the same. Check here to see which user you're using when running from Jenkins. Then log in to ALM and check which group the user configured in Jenkins belongs to. Maybe that user is just a viewer, or belongs to a group that doesn't have permission to run tests.
There's no information about which plugin version you're using. The latest release of the Jenkins plugin is recommended; you can get it here.
