So, I need to integrate kcov into my GitLab CI to see code coverage on a test executable. The kcov documentation states that I need to run "kcov /path/to/outdir ./myexec" to generate an HTML report. However, even though the command succeeds, /path/to/outdir is still empty, and I don't know why, since the tests pass and kcov returns no errors.
Here is the .gitlab-ci.yml:
stage: coverage
dependencies:
  - Build
script:
  - mkdir build/test/kcov
  - cd build/test
  - kcov --include-path=../../src /kcov ./abuse-test
  - cd kcov
  - ls
artifacts:
  paths:
    - TP3/build
    - TP3/src
My test executable is abuse-test; it is generated via CMake/make and lives at TP3/build/test/abuse-test.
The console output of the CI job is the following:
on igl601-runner3 5d2b3c01
Using Docker executor with image depot.dinf.usherbrooke.ca:4567/e19-igl601/eq09/image_tp3 ...
Pulling docker image depot.dinf.usherbrooke.ca:4567/e19-igl601/eq09/image_tp3 ...
Using docker image sha256:c2cf0a7c10687670c7b28ee23ac06899de88ebb0d86e142bfbf65171147fc167 for depot.dinf.usherbrooke.ca:4567/e19-igl601/eq09/image_tp3 ...
Running on runner-5d2b3c01-project-223-concurrent-0 via dinf-prj-16...
Fetching changes...
Removing TP3/build/
HEAD is now at b2e1277 Update .gitlab-ci.yml
From https://depot.dinf.usherbrooke.ca/e19-igl601/eq09
b2e1277..7cf0af5 master -> origin/master
Checking out 7cf0af56 as master...
Skipping Git submodules setup
Downloading artifacts for Build (8552)...
Downloading artifacts from coordinator... ok id=8552 responseStatus=200 OK token=Pagxjp_C
$ cd TP3
$ mkdir build/test/kcov
$ cd build/test
$ kcov --include-path=../../src /kcov ./abuse-test
===============================================================================
All tests passed (3 assertions in 3 test cases)
$ cd kcov
$ ls
Uploading artifacts...
TP3/build: found 2839 matching files
TP3/src: found 211 matching files
Uploading artifacts to coordinator... ok id=8554 responseStatus=201 Created token=PxDHHjxf
Job succeeded
The kcov documentation states: "/path/to/outdir will contain lcov-style HTML output generated continuously while the application run"
And yet, when I browse the artifacts, I find nothing.
Hi, it looks like you're specifying /kcov as the outdir:
kcov --include-path=../../src /kcov ./abuse-test
Since you're working on a *nix-based system, the leading / implies an absolute path from the root of your filesystem.
The cd kcov step, on the other hand, assumes a relative path (down from your current directory) since it is missing the /.
So I guess changing your kcov command to:
kcov --include-path=../../src kcov ./abuse-test
would fix your issue.
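For completeness, a sketch of the corrected script section (assuming the job still starts from the TP3 directory, as in the log above):
script:
  - mkdir build/test/kcov
  - cd build/test
  # relative outdir: the build/test/kcov directory created above
  - kcov --include-path=../../src kcov ./abuse-test
  - cd kcov
  - ls
Since TP3/build is already listed under artifacts: paths:, the populated kcov directory should then show up in the uploaded artifacts.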
Related
In Jenkins I have a Docker container in which I want to run my Cypress tests (cucumber preprocessor).
All feature files are placed in myrepo/cypress/integration/features/**/*.feature.
That means I have multiple subfolders under the features folder, and the feature files are placed in each subfolder (e.g. myrepo/cypress/integration/features/admin/testadmin.feature).
In package.json I have defined the test script as:
"test": "node_modules\\.bin\\cypress run --spec \"cypress/integration/features/**/*.feature\""
and in jenkins trying to run it via:
sh 'docker exec image1 npm run test'
But it can't find the feature files.
Then I tried to specify the path directly in the command, switching to cypress run:
sh 'docker exec image1 ./node_modules/.bin/cypress run --spec "cypress/integration/features/**/*.feature" '
But I have this error:
Can't run because no spec files were found.
We searched for any files matching this glob pattern:
cypress/integration/features/**/*.feature
Relative to the project root folder: /myrepo
I also tried it without quote marks:
sh 'docker exec testrepo ./node_modules/.bin/cypress run --spec cypress/integration/features/**/*.feature'
Then it says:
Warning: It looks like you're passing --spec a space-separated list of arguments:
"cypress/integration/features/admin/testadmin.feature cypress/integration/features/admin/anothertest.feature ... etc.
This will work, but it's not recommended.
The most common cause of this warning is using an unescaped glob pattern. If you are
trying to pass a glob pattern, escape it using quotes...
We searched for any files matching this glob pattern:
cypress/integration/features/admin/testadmin.feature, cypress/integration/features/admin/anothertest.feature
Relative to the project root folder:/myrepo
So it seems my first solution with quote marks is correct, and it does know about the feature files when they are listed, yet it still says it can't find them.
So what am I doing wrong?
Well, I found out that I forgot to COPY the cypress folder into the Docker image in the Dockerfile. That solved everything :)
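For reference, a minimal sketch of the Dockerfile lines that were missing; the /myrepo working directory and the cypress.json file name are assumptions based on the paths in the question:
# assumed project root inside the image
WORKDIR /myrepo
# copy the Cypress config and the feature files into the image
# so that the --spec glob has something to match
COPY cypress.json ./cypress.json
COPY cypress ./cypress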
After having everything ready to deploy, I realized JHipster doesn't have a Dockerfile anymore and the packaging is done with jib. The generated gitlab-ci.yml has a docker-push stage with a command like this:
./mvnw jib:build -Djib.to.image=$IMAGE_TAG -Djib.to.auth.username=gitlab-ci-token -Djib.to.auth.password=$CI_BUILD_TOKEN
but it fails with
[ERROR] Failed to execute goal com.google.cloud.tools:jib-maven-plugin:0.9.11:build (default-cli) on project test-project: Obtaining project build output files failed; make sure you have compiled your project before trying to build the image. (Did you accidentally run "mvn clean jib:build" instead of "mvn clean compile jib:build"?): /builds/amatos/test-project/target/classes -> [Help 1]
As that failed, I tried to run the command locally like this:
./mvnw jib:build -Djib.to.image=registry.gitlab.com/amatos/test-project:v6.0.1 -Djib.to.auth.username=amatos -Djib.to.auth.password=password
but instead of trying to connect to GitLab's registry, it tries to connect to registry.hub.docker.com:
[INFO] Retrieving registry credentials for registry.hub.docker.com...
What I would like to know is: how do I make it connect to GitLab instead of Docker Hub?
In order to connect to a custom registry, I changed -Djib.to.image to -Dimage and it worked.
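For example, the local command from the question would become something like this (same credentials and tag, only the image property changed):
./mvnw jib:build -Dimage=registry.gitlab.com/amatos/test-project:v6.0.1 -Djib.to.auth.username=amatos -Djib.to.auth.password=password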
This is followed by jhipster/generator-jhipster issue 9761 which states:
as the docker-push is done in another stage, there is a missing - target/classes in the previous stage.
It is needed by jib. It should look like:
maven-package:
  stage: package
  script:
    - ./mvnw verify -Pprod -DskipTests -Dmaven.repo.local=$MAVEN_USER_HOME
  artifacts:
    paths:
      - target/*.jar
      - target/classes
    expire_in: 1 day
Possibly addressed by PR (merged) 9762, commit 50cc009, which is only in master, not yet referenced by any tag.
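A sketch of how the docker-push stage can then pick up those artifacts (the stage name and the dependencies entry are assumptions; the jib:build command is the one from the generated file quoted above):
docker-push:
  stage: release
  dependencies:
    - maven-package
  script:
    - ./mvnw jib:build -Djib.to.image=$IMAGE_TAG -Djib.to.auth.username=gitlab-ci-token -Djib.to.auth.password=$CI_BUILD_TOKEN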
I'm using a Bitbucket Docker pipeline to validate builds for an Android app on push. One of my dependencies is a private package which I am hosting on another Bitbucket repository. For builds on developer machines, I use Gradle's private Maven repository plugin, which can resolve my dependency with an encrypted username and password.
This works well for developer machines, but I want to avoid hard-coding usernames and passwords in the pipeline. Instead, since Bitbucket supports SSH keys across repositories for authentication, my pipeline script clones the repository with my private packages and copies them over to the Gradle cache. I have tried both:
/home/gradle/.gradle/caches/modules-2/files-2.1/com.mycompany
~/.gradle/caches/modules-2/files-2.1/com.mycompany
as caches. The clone and copy work just fine, as I can see the files in their respective directories in the cache by adding an ls to the pipeline, but gradlew still tries to go out to the internet (the other Bitbucket repository) to resolve the dependencies, as if there were no cache. Furthermore, I'm using the Gradle Docker image gradle:3.4.1 (the Gradle version in my project-level build.gradle file), but gradle build fails with a "google() is not a function" error.
Gradlew build fails trying to resolve my package's .pom file due to the missing username (because there is no gradle.properties in the pipeline). But why doesn't it use the cache instead of trying to go to the repository?
I have tried the standard java:8 Docker image and the Gradle Docker images up to 5.1.1, and I have tried copying the package files into various Gradle caches in the Docker image. I have also tried altering permissions with chmod 775, to no avail. I have also tried gradlew assembleDebug with the same results as gradlew build. I'm a bit new to Gradle, Docker and Bitbucket, so I'm not sure what is causing the issue.
image: gradle:3.4.1
pipelines:
  default:
    - step:
        caches:
          - gradle
          - android-sdk
        script:
          # Download and unzip android sdk
          - wget --quiet --output-document=android-sdk.zip https://dl.google.com/android/repository/sdk-tools-linux-3859397.zip
          - unzip -o -qq android-sdk.zip -d android-sdk
          # Define Android Home and add PATHs
          - export ANDROID_HOME="/opt/atlassian/pipelines/agent/build/android-sdk"
          - export PATH="$ANDROID_HOME/tools:$ANDROID_HOME/tools/bin:$ANDROID_HOME/platform-tools:$PATH"
          # Download packages.
          - yes | sdkmanager "platform-tools"
          - yes | sdkmanager "platforms;android-27"
          - yes | sdkmanager "build-tools;27.0.3"
          - yes | sdkmanager "extras;android;m2repository"
          - yes | sdkmanager "extras;google;m2repository"
          - yes | sdkmanager "extras;google;instantapps"
          - yes | sdkmanager --licenses
          # Build apk
          - git clone git@bitbucket.org:myorg/myrepo.git
          - scp -r myrepo/com/mycomp/* /home/gradle/.gradle/caches/modules-2/files-2.1/com.mycomp
          #- gradle build
          - ./gradlew build
          #- ./gradlew assembleDebug
definitions:
  caches:
    android-sdk: android-sdk
Gradlew build error:
> Could not resolve all files for configuration ':app:debugCompileClasspath'.
> Could not resolve com.mycomp:mypackage:1.0.1.
Required by:
project :app
> Could not resolve com.mycomp:mypackage:1.0.1.
> Could not get resource 'https://bitbucket.org/myorg/myrepo/raw/releases/com/mycomp/mypackage/1.0.11/package-1.0.1.pom'.
> Username may not be null
I'm trying to create a CI pipeline with GitHub, Travis CI and AWS ECS. When I push a commit to the master branch, I get an error in Travis CI: 'Could not parse .travis.yml'. I can't figure out where the problem is, and Travis doesn't provide more information about the error.
Here is the code I'm using:
.travis.yml
language: csharp
dist: trusty
sudo: required
mono: none
dotnet: 2.0.0
branches:
only:
- master
before_script:
- chmod -R a+x scripts
script:
- ./scripts/dotnet-build.sh
- ./scripts/dotnet-publish.sh
- ./scripts/docker-publish-travis.sh
dotnet-build.sh
dotnet restore
dotnet build
dotnet-publish.sh
dotnet publish ./BookMeMobi2 -c Release -o ./bin/Docker
docker-publish-travis.sh
pip install --user awscli
eval $(aws ecr get-login --no-include-email --region eu-central-1)
docker build -t bookmemobi2 .
docker ps
docker tag bookmemobi2:latest 601510060817.dkr.ecr.eu-central-1.amazonaws.com/bookmemobi2:latest
docker push 601510060817.dkr.ecr.eu-central-1.amazonaws.com/bookmemobi2:latest
I don't know where the problem is. Could you help me?
Use yamllint, which you can install locally, or just copy & paste into a web-based version.
With the example in the question, I get:
(<unknown>): found character that cannot start any token while scanning for the next token at line 7 column 1
There's a tab on line 7. See "https://stackoverflow.com/q/19975954".
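If you want to track the tab down locally, a quick sketch with GNU grep and sed (assuming you run it from the repository root):
# show every line of .travis.yml that contains a tab, with its line number
grep -nP '\t' .travis.yml
# replace each tab with two spaces, in place
sed -i 's/\t/  /g' .travis.yml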
I had a similar problem. In my case I was using Python to launch a couple of scripts. I placed them one after the other with a hyphen at the beginning, exactly as you did. After searching, I found out that I could place all of them on one line with "&" between each script and get rid of the hyphens.
What I had:
script:
- python test_Math_DC.py
- python test_Math_Moy.py
- python test_Math_Var.py
- python test_Math_SQRT.py
Changed to:
script: python test_Math_DC.py & python test_Math_Moy.py & python test_Math_Var.py & python test_Math_SQRT.py
In your case you could try:
script: ./scripts/dotnet-build.sh & ./scripts/dotnet-publish.sh & ./scripts/docker-publish-travis.sh
or something like this:
script: sh ./scripts/dotnet-build.sh & sh ./scripts/dotnet-publish.sh & sh ./scripts/docker-publish-travis.sh
And see how it works out.
The travis CLI tool has a linter:
gem install travis
However, it only gives warnings for the example. Also, it currently does not work with all features, for example stages.
$ travis lint
Warnings for .travis.yml:
[x] unexpected key mono, dropping
[x] unexpected key dotnet, dropping
According to the docs, a Git URL can be passed to the build command.
But what happens if the Git URL needs to reference a branch name? In other words, how do I do the equivalent of this:
git clone -b my-firefox-branch git@github.com:creack/docker-firefox.git
Start your URL with git:// (or https://) and simply append the branch name after #.
I just forked the OP's repo and created a branch to confirm it works (docker version 1.11.1):
root@box:~# docker build git://github.com/michielbdejong/docker-firefox#michielbdejong-patch-1
Sending build context to Docker daemon 52.22 kB
Step 1 : FROM ubuntu:12.04
12.04: Pulling from library/ubuntu
4edf76921243: Downloading
[==========> ] 9.633 MB/44.3 MB
^Croot@box:~#
See https://docs.docker.com/engine/reference/commandline/build/ for full docs.
So far, no, it can't.
Here's what I got:
$ docker build git@github.com:shawnzhu/docker-ruby.git#branch1
2014/12/04 08:19:04 Error trying to use git: exit status 128 (Cloning into '/var/folders/9q/bthxttfj2lq7jtz0b_f938gr0000gn/T/docker-build-git859493111'...
fatal: remote error:
is not a valid repository name
Email support@github.com for help
)
If you take a look at this line of the Docker CLI code, it only does a recursive git clone of the given Git repo URL (not even with --depth=1) when using docker build <git-repo-url>.
However, it could be an interesting improvement to Docker (if people want it), since #<branch-name> and #<commit> are popular suffixes for GitHub URLs adopted by lots of tools like npm and bower.
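For comparison, this is what that convention looks like in npm (hypothetical repository and branch, just to illustrate the # suffix):
# install a package straight from a Git branch
npm install git+https://github.com/someuser/somepackage.git#some-branch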
Well, it works more or less depending on the version.
For recent versions (docker-engine 1.5.0-0~trusty and later):
docker build https://github.com/rzr/iotjs.git#master
docker build https://github.com/rzr/iotjs.git
docker build github.com/rzr/iotjs.git
For older ones (docker.io 1.4-5ubuntu1 and earlier):
docker build https://github.com/rzr/iotjs.git
docker build git://github.com/rzr/iotjs.git
docker build github.com/rzr/iotjs.git
Maybe this can be handled in a helper script like:
curl -sL https://rawgit.com/rzr/iotjs/master/run.sh | bash -x -