Why do my individually working BDD test cases fail when run in tag mode?

The test scenario I am dealing with is: log in to a web page => go to the profile tab => try different combinations to have a profile created, updated, deleted, etc. There is nothing special there.
This is also the first time I am using a BDD framework to create "feature" files and their step definitions in Python.
So far, I have created some test cases and was able to run them individually with commands like:
behave -f allure_behave.formatter:AllureFormatter -f pretty -o .\allure_reports features\profile.create.case1.feature
behave -f allure_behave.formatter:AllureFormatter -f pretty -o .\allure_reports features\profile.create.case2.feature
...
behave -f allure_behave.formatter:AllureFormatter -f pretty -o .\allure_reports features\profile.delete.case10.feature
But if I run a subset of my features by tag, such as:
behave -f allure_behave.formatter:AllureFormatter -f pretty --tags #create -o .\allure_reports features
it always executes the first feature, but the rest fail. What could be wrong?
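For reference, this is roughly how I would expect a tag run to look based on the behave docs (the @create tag is just how my features are tagged; I may well be getting the tag syntax wrong):
behave -f allure_behave.formatter:AllureFormatter -f pretty -o .\allure_reports --tags=@create features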
Thanks,
Jack

Related

How to fail an AppCenter build on script exit code?

I've got a bunch of scripts that get called when the appcenter-pre-build.sh is called. For example, one of them is a simple check to see if the current branch tag already exists on the repository.
#!/usr/bin/env bash
set -e # Exit immediately if a command exits with a non-zero status (failure)
# 1. Fetch tags
git fetch --tags
# See if the tag exists
if git tag --list | egrep -q "^$VERSION_TAG$"
then
    echo "Error: Found tag. Exiting."
    exit 1
else
    git tag $VERSION_TAG
    git push origin $VERSION_TAG
fi
If the tag is found, I want to abort the build in AppCenter and fail it. This worked perfectly fine when I was running everything through Xcode Server, but for some reason I cannot figure out how to abort the build upon failure of my script. I'm not seeing much documentation on this particular subject, and the AppCenter folk over at Microsoft are taking their sweet time getting back to me.
Anyone have experience with this and/or know how to fail an AppCenter build from their scripts? Thanks in advance for your thoughts!
Okay, figured it out. It looks like sending a curl request to cancel the build using the env variable "$APPCENTER_BUILD_ID" takes care of the issue. Exiting your script with a non-zero status does NOT work inside AppCenter.
Here's a sample of what to do. I just put it in a special "cancelAppCenterBuild.sh" script and called it in place of my exits.
API_TOKEN="<YourAppToken>"
OWNER_NAME="<YourOwnerOrOrganizationName>"
APP_NAME="<YourAppName>"
curl -iv "https://appcenter.ms/api/v0.1/apps/$OWNER_NAME/$APP_NAME/builds/$APPCENTER_BUILD_ID" \
-X PATCH \
-d "{\"status\":\"cancelling\"}" \
--header 'Content-Type: application/json' \
--header "X-API-Token: $API_TOKEN"
Pro tip: If you've ever renamed your app, the AppCenter servers can have issues referencing the new name. I was getting a 403 with a forbidden message. You might have to change your app name back to whatever the original name was, or just rebuild the app from scratch within AppCenter.

How to run repo from a script inside a container in a Jenkins job

I am unable to run repo non-interactively inside a container as part of a freestyle job.
It prompts for the user-name and email. I got round that by doing a git config --global inside the job.
But then it does the color test, and that hangs indefinitely.
Looking at the source code for repo I see this
if os.isatty(0) and os.isatty(1) and not self.manifest.IsMirror:
    if opt.config_name or self._ShouldConfigureUser():
        self._ConfigureUser()
    self._ConfigureColor()
So, I ran the following inside the container:
python -C "import os; print os.isatty(0), os.isatty(1)"
and, sure enough, it printed out True True
Looking at the Jenkins log, it launches the container with --tty specified, and there seems to be no way to configure that option.
I can't find a bash option to force a script to be run in a non-interactive shell. If I put the above python line in a file and execute it with almost any combination of commands and options, it still prints out True True
The only way I see something different is if I use I/O redirection
bash <a.sh
which prints out False True - i.e. stdin is not a tty, and
bash <a.sh >a.log
which prints False False.
For a complex script, are there any problems using the bash <script approach?
Does anyone know any jenkins magic to prevent docker being launched using --tty?
I know that the --tty is the culprit. I built the container locally and ran the following
$ docker run repotest python -c "import os;print os.isatty(0), os.isatty(1)"
False False
$ docker run --tty repotest python -c "import os;print os.isatty(0), os.isatty(1)"
True True
Running Versions:
repo: 1.12.37 (per Ubuntu 16.04 apt-get)
Jenkins: 2.149
Cloudbees Docker Plugin: 1.7.3
Container base is ubuntu:xenial
I'm using the "Build inside a docker container" option.
To run the bash script repo_script.sh "non-interactively", or more precisely without terminals associated with the standard streams, you could run your script simply as
repo_script.sh < /dev/null 2>&1 | cat
assuming you want to see the output the way you would see it when running simply repo_script.sh. By piping the standard output and error to a different process, the file descriptors appear to repo_script.sh as a pipe and not a TTY. You could also direct the output to a file, or even to /dev/null if you do not care about it:
log_file=/dev/null
repo_script.sh < /dev/null > "${log_file}" 2>&1
Running the script as
bash < repo_script.sh | cat
might work too, though it is a very unorthodox and, to my mind, hackish way of running a script just to break the association of a TTY with the standard input. From the script engine's point of view, reading a script program from a file is different from reading it from standard input (which typically, if it is a terminal, is not seekable), so there might be some subtle differences that could bite you in unexpected ways. This way also does not clearly communicate your intention to the next person who needs to understand your code, and may lead to partial hair loss in that person due to extraneous head scratching.
There is no need for any bash options; just using the output redirections from within the invoking shell, as described above, is an easy-to-comprehend, multi-platform standard convention for changing the standard stream associations.
P.S. I think it should be enough for your repo script to just test whether the standard input is a TTY. It looks to me like the author of that script did not think deeply enough there. There is simply no use waiting for input if you do not have a terminal device associated with standard input; from that you could determine that everything needs to run without user interaction, or stop with an error if that is not possible.
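As an illustration of that idea, the same guard in shell terms is just a stdin TTY test (a sketch, not the repo code; DEFAULT_NAME is a made-up fallback):
# Prompt only when stdin is a terminal; otherwise use a default or fail fast
# instead of hanging on a read that will never complete.
if [ -t 0 ]; then
    read -r -p "User name: " name
else
    name="${DEFAULT_NAME:?no terminal on stdin and no DEFAULT_NAME set}"
fi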

Can't get one makefile to exec another one

Some background: I have a project based on ESP-IDF, which has a complex built-in build system that you plug into with your own makefile (see their documentation on using it).
This works fine (apart from occasional horrendous build times), but now I wanted to add a build target for unit tests for a component, which requires building this component against another project (the unit-test-app).
So, I need another build target that calls another make with another makefile and directory. Something like this works fine:
make -C $(path to unit-test-app) \
    EXTRA_COMPONENT_DIRS=$(my component directory) \
    TEST_COMPONENTS=$(my component name) \
    ESPPORT=$(my serial port) \
    -j clean app-flash monitor
But only if I execute it from bash. If I try to execute it from another makefile, it breaks: either it does not find some header files (the include path differs between the main and the unit test project), or it ignores the change of project (the -C argument) and executes the main project build.
What I tried:
using $(MAKE), $(shell which $(MAKE)) and make in the custom target
using env -i $(shell which $(MAKE) ) -C ... with forwarding of the required environment variables to the child make
using bash -l make -C ... and bash -c make -C ...
What works but is a dirty hack: using echo $(MAKE) -C ... in the make target and then running $(make tests) from the command line.
As far as I can tell, the issue is that the parent makefile sets something up in the environment that I have not isolated the child makefile from. What else can I do to separate these two?
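One way to see what actually leaks is to dump the environment the child make sees in both cases and compare. A rough sketch (the file names are arbitrary):
# from the interactive bash session where the inner make works:
env | sort > /tmp/env_from_bash.txt
# from the outer makefile, add a recipe line such as:  env | sort > /tmp/env_from_make.txt
# then compare the two:
diff /tmp/env_from_bash.txt /tmp/env_from_make.txt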
UPDATE: I have created an example project that shows the issue more clearly; please look at the top Makefile of https://github.com/chanibal/esp-idf-template-with-unit-tests
I reproduced your situation as you describe it and everything works fine, both when I call the inner make from bash and when I call it from the outer make.
So there is something you are not telling us that is causing the failure.
On the other hand, I feel there are several irrelevant details in your description.
So, I suggest you try to further isolate the problem: remove irrelevant stuff and reproduce the problem only from the description in your question; while doing that, you will probably find out what is breaking. If not, then post here the minimal setup with all the other details needed for the failure to occur.
By the way, what you are doing is not good practice, so maybe just avoiding it would solve your problems.
What I mean is, there is one case and one case only, where recursive make is good practice: make -C ${directory}
where in directory you have a completely self-contained build, not using anything from the outside.
It seems this is not the case for you, because you seem to be passing some outside location variables. This kind of recursive make is bad practice and should be avoided.

Use all cores to make OpenCV 3 [duplicate]

Quick question: what is the compiler flag to allow g++ to spawn multiple instances of itself in order to compile large projects quicker (for example 4 source files at a time for a multi-core CPU)?
You can do this with make - with GNU make it is the -j flag (this will also help on a uniprocessor machine).
For example if you want 4 parallel jobs from make:
make -j 4
You can also run gcc in a pipe with
gcc -pipe
This will pipeline the compile stages, which will also help keep the cores busy.
If you have additional machines available too, you might check out distcc, which will farm compiles out to those as well.
There is no such flag, and having one runs against the Unix philosophy of having each tool perform just one function and perform it well. Spawning compiler processes is conceptually the job of the build system. What you are probably looking for is the -j (jobs) flag to GNU make, a la
make -j4
Or you can use pmake or similar parallel make systems.
People have mentioned make but bjam also supports a similar concept. Using bjam -jx instructs bjam to build up to x concurrent commands.
We use the same build scripts on Windows and Linux and using this option halves our build times on both platforms. Nice.
If you are using make, run it with -j. From man make:
-j [jobs], --jobs[=jobs]
    Specifies the number of jobs (commands) to run simultaneously. If there
    is more than one -j option, the last one is effective. If the -j option
    is given without an argument, make will not limit the number of jobs
    that can run simultaneously.
And most notably, if you want to script or detect the number of cores you have available (this can vary a lot depending on your environment, especially if you run in many environments), you can use the ubiquitous Python function cpu_count():
https://docs.python.org/3/library/multiprocessing.html#multiprocessing.cpu_count
Like this:
make -j $(python3 -c 'import multiprocessing as mp; print(int(mp.cpu_count() * 1.5))')
If you're asking why 1.5, I'll quote user artless-noise from a comment above:
The 1.5 number is because of the noted I/O bound problem. It is a rule of thumb. About 1/3 of the jobs will be waiting for I/O, so the remaining jobs will be using the available cores. A number greater than the cores is better and you could even go as high as 2x.
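If you would rather not shell out to Python, the same rule of thumb works with coreutils' nproc (integer arithmetic, so 1.5x becomes *3/2):
make -j "$(( $(nproc) * 3 / 2 ))"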
make will do this for you. Investigate the -j and -l switches in the man page. I don't think g++ is parallelizable.
distcc can also be used to distribute compiles not only on the current machine, but also on other machines in a farm that have distcc installed.
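A minimal distcc invocation might look like this (the host names are placeholders; distcc must be installed and reachable on each of them):
# compile locally and on two other build machines, keeping all cores busy
export DISTCC_HOSTS='localhost buildhost1 buildhost2'
make -j 12 CC='distcc gcc' CXX='distcc g++'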
I'm not sure about g++, but if you're using GNU Make then "make -j N" (where N is the number of threads make can create) will allow make to run multiple g++ jobs at the same time (so long as the files do not depend on each other).
GNU parallel
I was making a synthetic compilation benchmark and couldn't be bothered to write a Makefile, so I used:
sudo apt-get install parallel
ls | grep -E '\.c$' | parallel -t --will-cite "gcc -c -o '{.}.o' '{}'"
Explanation:
{.} takes the input argument and removes its extension
-t prints out the commands being run to give us an idea of progress
--will-cite removes the request to cite the software if you publish results using it...
parallel is so convenient that I could even do a timestamp check myself:
ls | grep -E '\.c$' | parallel -t --will-cite "\
if ! [ -f '{.}.o' ] || [ '{}' -nt '{.}.o' ]; then
gcc -c -o '{.}.o' '{}'
fi
"
xargs -P can also run jobs in parallel, but it is a bit less convenient to do the extension manipulation or run multiple commands with it: Calling multiple commands through xargs
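For comparison, a rough xargs equivalent of the first one-liner above; the sh -c wrapper is exactly the extension-manipulation inconvenience being referred to:
ls | grep -E '\.c$' | xargs -P "$(nproc)" -I{} sh -c 'gcc -c -o "${1%.c}.o" "$1"' _ {}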
Parallel linking was asked at: Can gcc use multiple cores when linking?
TODO: I think I read somewhere that compilation can be reduced to matrix multiplication, so maybe it is also possible to speed up single file compilation for large files. But I can't find a reference now.
Tested in Ubuntu 18.10.

Register for messages from collabd like XCSBuildService to receive Xcode Bots integration number

During an Xcode Bots build, each run gets an integration number assigned. This number does not show up in the build logs, but it would be convenient for creating HTTP links back to the individual Xcode Bots build in a CI environment or an enterprise app store.
In the /Library/Server/Xcode/Logs/xcsbuildd.log I found that XCSBuildService gets a message/event from collabd with a structure that contains the integration number.
Is there a way to register for the same event in my own (OS X) program and receive this message/event?
The only ugly way I have found so far to get the Xcode Bots integration number is to parse the xcsbuildd.log file, with the drawback that I can't be sure the latest integration number corresponds to my current build when several builds are executing in parallel.
This log file is also located in a read-protected folder, so I have to either change the permissions (ugh!) or use sudo (I don't really want to do that)!?
Example:
sudo grep -r "integration =" /Library/Server/Xcode/Logs/xcsbuildd.log | tail -1 | cut -d'=' -f 2| cut -d';' -f 1 |tr -d '\040\011\012\015'
This gives me the latest integration number, stripped of whitespace ...
Edit:
Just found out that if you include the following script in your scheme as a Build post-action, it will create a file (e.g. /Library/Server/Xcode/Data/BotRuns/Cache/22016b1e-2f91-4757-b3b8-322233e87587/source/integration_number.txt) with the integration number, without requiring sudo. Xcode Bots seems to serialize the different builds so that they are not executed in parallel, so the integration number written to the file can be used.
grep -r "integration =" /Library/Server/Xcode/Logs/xcsbuildd.log | tail -1 | cut -d'=' -f 2| cut -d';' -f 1 |tr -d '\040\011\012\015' > ${PROJECT_DIR}/integration_number.txt
Specifically for the case of getting the integration build number, I've added a simple answer here.
To repeat myself (and to keep this from being downvoted as a one-line answer):
I implemented this using a Shell Script Build Phase in my Xcode project. In my case, I used the integration number to set the internal version of my built product. My script looks like this:
if [ "the$XCS_INTEGRATION_NUMBER" == "the" ]; then
echo "Not an integration build…"
xcrun agvtool new-version "10.13"
else
echo "Setting integration build number: $XCS_INTEGRATION_NUMBER"
xcrun agvtool new-version "$XCS_INTEGRATION_NUMBER"
fi
Note that XCS_INTEGRATION_NUMBER exists by default in the Xcode Server build environment. If you want to simulate an integration build (for the purposes of this script), you can simply add it to your build settings as a custom variable.
