pytest: capture stdout/stderr at setup/teardown - docker

In my tests, I use a fixture that runs a web-server Docker container via docker-py with detach=True.
When a test fails, I want to output the container logs. In principle this is achieved with
print(container.logs().decode(), file=sys.stderr)
on fixture teardown, but then I get the logs even for successful tests, not only for failed ones as I would if I printed the logs in the test body.
What's the best way to output the logs so the behavior is similar to printing them in the test bodies as they come?
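One common pattern (a sketch, not necessarily the best way; the fixture name, image name and conftest.py placement below are placeholders) is to record each test phase's outcome with a makereport hookwrapper and have the fixture print the logs only when the test body failed:
# conftest.py
import sys
import docker
import pytest

@pytest.hookimpl(tryfirst=True, hookwrapper=True)
def pytest_runtest_makereport(item, call):
    # Attach each phase's report (setup/call/teardown) to the test item
    # so fixtures can inspect the outcome on teardown.
    outcome = yield
    rep = outcome.get_result()
    setattr(item, "rep_" + rep.when, rep)

@pytest.fixture
def web_server(request):
    client = docker.from_env()
    container = client.containers.run("nginx:alpine", detach=True)  # placeholder image
    yield container
    # Dump the container logs only if the test body failed.
    rep_call = getattr(request.node, "rep_call", None)
    if rep_call is not None and rep_call.failed:
        print(container.logs().decode(), file=sys.stderr)
    container.remove(force=True)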

Related

How does BusyBox evade my redirection of stdout, and can I work around it?

I have a BusyBox based system, and another one with vanilla Ubuntu LTS.
I made a C++ program which takes main()'s argv[1] as a command name, to fork() and execl() that command in the child process. Right before, I did dup2() to redirect the child's standard output, similar to this, so the parent process can read() its output.
Then, the text read from the child is written to the console, with markers, to see that it was the parent who output this.
Now if I run this program with, as its arg, "dd --help", then two different things happen:
on the Ubuntu system, the output clearly comes from the parent process (and only from it)
on the BusyBox system, the parent process reads back nothing, and the output of dd writes directly to the console, apparently bypassing my (attempt at) redirection.
Since all the little commands on the BusyBox system are symlinks to the one BusyBox executable and I thought there could be a problem, I also tried making a child process out of "busybox dd --help".
That changed nothing, though.
But: if I do "busybox --help", all of the output is caught by the child process and nothing "spilled besides" it. (note I left out the "sub command" dd here, only --help for BusyBox itself)
What's the reason for this happening, and (how) can I get this to work as intended, on the BusyBox system, too?
BusyBox writes its own output, e.g. when it is called with options like --help, to stdout. But the output of the commands it implements, like "dd", goes to stderr, even when it isn't an error, which I found rather unintuitive and hence didn't look down that alley at first.
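A quick way to confirm this split, if Python happens to be available on the BusyBox system (a sketch; it only assumes a busybox binary on PATH):
import subprocess
# Compare where "busybox --help" and "busybox dd --help" send their text.
for args in (["busybox", "--help"], ["busybox", "dd", "--help"]):
    result = subprocess.run(args, capture_output=True, text=True)
    print(args, "-> stdout:", len(result.stdout), "chars, stderr:", len(result.stderr), "chars")
If the applet help indeed lands on stderr, the fix on the C++ side is to dup2() the pipe onto file descriptor 2 as well as 1 before the execl(), so the parent captures both streams.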

Specify Taurus test as a Blazemeter Functional test

How do I tell Taurus that my (Postman/Newman) test is a Blazemeter Functional test, and not a Performance test? Below is my bzt.yaml I created with the help of https://gettaurus.org/docs/Postman/.
execution:
- executor: newman
  iterations: 1
  scenario: functional/simple
scenarios:
  functional/simple:
    script: my.postman_collection.json
reporting:
- module: blazemeter
modules:
  blazemeter:
    request-logging-limit: 20240
    public-report: false
    report-name: my-postman-collection
    test: newmantrials
    project: test
  final-stats:
    summary-labels: true
I run it using the Taurus Docker image:
docker run --rm -t -v `pwd`:/bzt-configs -v `pwd`/artifacts:/tmp/artifacts blazemeter/taurus:1.14.0 bzt.yaml -o modules.blazemeter.token="${token}"
When I log into the Blazemeter UI, I see that it's listed under the "Performance" tab, and looks like a performance test. I would like it to run as a Functional test to get more details on the request and response payloads.
I do not believe it's possible at the moment, because presently BlazeMeter functional tests are geared toward either straight API functional tests or GUI (Selenium) functional tests.
The problem is that from BlazeMeter's side, the file validator is failing to correctly identify the Postman/Newman JSON file (despite the YAML file referencing it properly). I reported this to the BlazeMeter R&D team fairly recently, so it's being looked into.
In the meantime though, I don't expect this to work in BlazeMeter. It likely won't correctly identify your Newman script, so you'll need to run it as a Performance test in the interim.
(Sorry for the bad news on this one -- Hopefully it'll get sorted soon!)
Feel free to bring this up with BlazeMeter support at support@blazemeter.com as well.

Running Google Cloud ML training job but getting no stdout output in logs

I've built a trainer, and when I submit the job, the job starts and the logs get populated. But none of my output to stdout ever appears in the log. I do get messages like "The TensorFlow library wasn't compiled to use AVX2 instructions..."
The entire job takes about 5 to 10 minutes on my laptop; I let it run for over an hour on the cloud server and still never saw any output (and the first line of output occurs almost immediately when I run it locally.)
I can run my job locally by invoking it directly, but I haven't been able to get it to run using the "gcloud local" command... when I do this, I get an error "No module named tensorflow"
The log message "The TensorFlow library wasn't compiled to use AVX2 instructions" indicates that log messages are flowing from TensorFlow to Cloud Logging. So most likely there is a problem with the way you have configured logging and as a result log messages aren't being correctly written to stderr/stdout.
The easiest way to debug this would be to create a simple example that reproduces the error.
I'd suggest writing a simple Python program that does nothing but log a message, and submitting that to the service to see whether the message is printed.
Something like the following
import logging
import time

if __name__ == "__main__":
    logging.getLogger().setLevel(logging.INFO)
    # Output logs for 5 minutes. We do this for 5 minutes just to ensure
    # the job doesn't terminate before logs can be flushed.
    for i in range(30):
        logging.info("This is an info message.")
        logging.error("This is an error message.")
        time.sleep(10)
For the issue importing TensorFlow when running locally, please take a look at this SO question, which has some suggestions on how to check the Python path used by gcloud and verify that it includes TensorFlow.
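If it helps, a quick sanity check (a sketch; run it with whichever Python interpreter gcloud ends up using) is a script along these lines:
import sys
# Show which interpreter is running and whether TensorFlow is importable from it.
print("Interpreter:", sys.executable)
try:
    import tensorflow as tf
    print("TensorFlow version:", tf.__version__)
except ImportError as err:
    print("TensorFlow is not importable from this interpreter:", err)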

How do I capture output from one Rundeck step to be used in a later step?

I'm attempting to build, launch, and link a set of docker containers using Rundeck. In short (for those not familiar with docker), when an image is launched, it returns a container ID. I would like to use this container ID in the launching of subsequent jobs.
When run from the command line, it would look something like this (example only!!):
# docker run -Pd 23ABCD45
34DEF123
# docker run -Pd --link 34DEF123:host1 ABC123EF
321CB456
(note the use of the first return value in the second command line)
At this point, there would be two containers running. The second would be linked to the first by the --link option, and it would be addressable using the hostname host1 from inside the second container. To be fair, docker generates (or may be given) a specific container name which can be used in place of the container id. I would prefer to use the container ID to avoid the hassle of having to create/track unique names.
I would like to be able to capture the output of the first command (the container ID) so that it can be reused in the second command. Is this possible?
Edit: These images are being used for testing immediately following a "docker build" (which also outputs a similar ID I would like to include in my chain) and might be followed by "docker rm" and "docker rmi" commands, so there are a number of uses for capturing this type of output and carrying it through a related set of operations. This is not just about launching/linking containers.
There is no built-in Rundeck feature that lets you pass the output of one job to another job as an input, but there are workarounds I've tried in the past, and I've settled on the second approach below.
1. Use a file to pass data
Save the ID/output into a tmp file in the first job
Have the second job read that file
Things can go wrong since you depend on a file, but careful error handling helps (see the sketch after this list).
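A minimal sketch of that handoff, written as Python inline script steps (the image IDs from the question and the /tmp path are only placeholders):
# Step in the first job: start the container and save its ID to a shared file.
import subprocess
container_id = subprocess.check_output(
    ["docker", "run", "-Pd", "23ABCD45"]
).decode().strip()
with open("/tmp/rundeck_container_id", "w") as f:
    f.write(container_id)

# Step in the second job: read the ID back and link against that container.
import subprocess
with open("/tmp/rundeck_container_id") as f:
    container_id = f.read().strip()
subprocess.check_call(
    ["docker", "run", "-Pd", "--link", container_id + ":host1", "ABC123EF"]
)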
2. Call two jobs using Rundeck CLI from another job
This is the approach I am using.
JobA prints out two random numbers:
echo $RANDOM;echo $RANDOM
JobB prints out the second random number produced by JobA, which is passed in as the option "number":
echo "$RD_OPTION_NUMBER is the number JobB received"
JobC calls JobA, saves the last line of its output to a variable and passes it to JobB:
#!/bin/bash
OUTPUT_FROM_JOB_A=`run -f --id <ID of JobA> | tail -n 1`
run -f --id <ID of JobB> -- -number $OUTPUT_FROM_JOB_A
Output:
[5394] execution status: succeeded
Job execution started:
[5395] JobB <https://hostname:4443/project/Project/execution/show/5395>
6186 is the number JobB received
[5395] execution status: succeeded
This is just a primitive code sample; you can do a lot with Python's subprocess module, or just use bash.
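For example, a Python take on JobC might look like the following (a sketch; the job IDs stay as placeholders and it assumes the same Rundeck CLI "run" command used above is on PATH):
import subprocess
# Run JobA, follow its output (-f) and keep only the last line.
output_a = subprocess.check_output(["run", "-f", "--id", "<ID of JobA>"]).decode()
last_line = output_a.strip().splitlines()[-1]
# Pass that value on to JobB as its "number" option.
subprocess.check_call(["run", "-f", "--id", "<ID of JobB>", "--", "-number", last_line])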

Unable to get JSCover and PhantomJS to run Jasmine test on Cloudbees

I am currently trying to run JSCover in web server mode to determine the coverage of my Jasmine tests that are executed in the PhantomJS headless browser. I am also using grunt+nodejs to kick off the tests.
The code I use in my gruntfile to start the JSCover server and execute phantomJS is:
// Start JSCover Server
var childProcess = require('child_process');
var JSCOVER_PORT = "43287";
var JAVA_HOME = process.env.JAVA_HOME;
var jsCoverChildArgs = [
    "-jar", "src/js/test/tools/JSCover-all.jar",
    "-ws",
    "--branch",
    "--port=" + JSCOVER_PORT,
    "--document-root=./",
    "--report-dir=target/",
    "--no-instrument=src/js/lib/",
    "--no-instrument=src/js/test/",
    "--no-instrument=src/js/test/lib/"
];
var jsCoverProc = childProcess.spawn(JAVA_HOME + "/bin/java", jsCoverChildArgs);

// Start PhantomJS
var phantomjs = require('phantomjs');
var binPath = phantomjs.path;
var childArgs = [
    'src/js/test/lib/phantomjs_jasminexml_runner.js',
    'http://localhost:' + JSCOVER_PORT + '/src/js/test/SpecRunner.html',
    'target/surefire-reports'
];
var runner = childProcess.execFile(binPath, childArgs);
runner.on('exit', function (code) {
    // Tests have finished, so clean up the processes
    var success = (code === 0);
    jsCoverProc.kill(); // kill the JSCover server now that we are done with it
    done(success);
});
However, when I run the web server on a Jenkins node on CloudBees and then run PhantomJS against it, I get one of the following errors:
Some tests start to run, but then the process fails:
A spec : should be able to have a mock lo-dash ...
Warning: Task "test" failed. Use --force to continue.
Aborted due to warnings.
Build step 'Execute shell' marked build as failure
Recording test results
Finished: FAILURE
PhantomJS is unable to access the JSCover server:
Running "test" task
phantomjs> Could not load 'http://127.0.0.1:43287/src/js/test/SpecRunner.html'.
Warning: Task "test" failed. Use --force to continue.
For the second error, I have tried to use different ports and hostnames that I set (e.g. 127.0.0.1 or localhost for hostnames, and 4327, 43287, etc. for ports). The ports are not being dynamically set at build time - I have them hardcoded in my grunt script.
Any thoughts on why the errors above might be occurring or why I am having issues running and accessing the JSCover server on a Cloudbees Jenkins node (but never on my local machine)?
When you launch JSCover as a separate process, it takes some time to come up. If we expect it to be up earlier than it actually is, errors like these are bound to occur.
Quoting from the great article: http://blog.johnryding.com/post/46757192364/javascript-code-coverage-with-phantomjs-jasmine-and
Now that I had a code coverage tool that met all of my requirements, the last part was to get this code to run as part of our Jenkins build (which utilizes a grunt script). This was easy to get running, but I encountered two errors that consistently broke my builds:
Sometimes phantomJS would fail to connect to the JSCover server
Sometimes phantomJS would connect to the server, but then give up executing my tests at a random point during the run.
These were really weird issues that only occurred on my team’s Jenkins nodes and were hard to diagnose - even though they turned out to be simple fixes.
For issue 1, that error was the result of my grunt script not waiting for JSCover to start before I executed phantomJS.
For the second issue, it turns out that my team was using a special jasmine test runner to help with producing XML files after tests completed. The problem with this file was that it had a function that waited for Jasmine to complete its execution, but utilized an extremely short timeout before it gave up running the tests. This was a problem with Jenkins + JSCover because it took a longer time for the tests to load and run now that they had to be loaded from a web server instead of straight from the file system. Fortunately, this fix was as easy as increasing the timeout.
I would say that you need to wait for a while after spawning JSCover. In the past, when I have spawned WebDriver, I then waited for it to become available (ideally you look for a response, sleep, and repeat until the spawned process is ready).
I.e. look for a valid HTTP response from 127.0.0.1:43287 before continuing (where "valid" simply means the server is up).
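The waiting itself is straightforward; here is the idea sketched in Python (in the grunt setup above you would write the equivalent retry loop in Node before launching PhantomJS; the URL and timeout values are placeholders):
import time
import urllib.error
import urllib.request

def wait_for_http(url, timeout=60, interval=1):
    # Poll the URL until the server answers at all, or give up after the timeout.
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            urllib.request.urlopen(url)
            return True    # normal response: the server is up
        except urllib.error.HTTPError:
            return True    # HTTP error response: still means the server is up
        except urllib.error.URLError:
            time.sleep(interval)    # connection refused etc.: not up yet, keep polling
    return False

if not wait_for_http("http://127.0.0.1:43287/src/js/test/SpecRunner.html"):
    raise SystemExit("JSCover server did not come up in time")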
