I'm using a Bitbucket Docker pipeline to validate builds of an Android app on push. One of my dependencies is a private package hosted in another Bitbucket repository. For builds on developer machines, I use Gradle's private Maven repository plugin, which can resolve the dependency with an encrypted username and password.
This works well on developer machines, but I want to avoid hard-coding usernames and passwords in the pipeline. Since Bitbucket supports SSH keys across repositories for authentication, my pipeline script instead clones the repository that holds my private packages and copies the files into the Gradle cache. I have tried both
/home/gradle/.gradle/caches/modules-2/files-2.1/com.mycompany
~/.gradle/caches/modules-2/files-2.1/com.mycompany
as cache locations. The clone and copy work just fine: adding an ls to the pipeline shows the files in their respective directories in the cache. Yet gradlew still goes out to the network (the other Bitbucket repository) to resolve the dependencies, as if there were no cache at all. Furthermore, I'm using the Gradle Docker image gradle:3.4.1 (the Gradle version in my project-level build.gradle file), but gradle build fails with an error that google() is not a function.
gradlew build fails while trying to resolve my package's .pom file, complaining about a missing username (because there is no gradle.properties in the pipeline). But why doesn't it use the cache instead of going to the repository?
I have tried the standard java:8 Docker image and the Gradle images up to 5.1.1, and I have tried copying the package files into various Gradle caches inside the Docker image. I have also tried altering permissions with chmod 775, to no avail, and running gradlew assembleDebug gives the same results as gradlew build. I'm fairly new to Gradle, Docker, and Bitbucket, so I'm not sure what is causing the issue.
image: gradle:3.4.1

pipelines:
  default:
    - step:
        caches:
          - gradle
          - android-sdk
        script:
          # Download and unzip android sdk
          - wget --quiet --output-document=android-sdk.zip https://dl.google.com/android/repository/sdk-tools-linux-3859397.zip
          - unzip -o -qq android-sdk.zip -d android-sdk
          # Define Android Home and add PATHs
          - export ANDROID_HOME="/opt/atlassian/pipelines/agent/build/android-sdk"
          - export PATH="$ANDROID_HOME/tools:$ANDROID_HOME/tools/bin:$ANDROID_HOME/platform-tools:$PATH"
          # Download packages.
          - yes | sdkmanager "platform-tools"
          - yes | sdkmanager "platforms;android-27"
          - yes | sdkmanager "build-tools;27.0.3"
          - yes | sdkmanager "extras;android;m2repository"
          - yes | sdkmanager "extras;google;m2repository"
          - yes | sdkmanager "extras;google;instantapps"
          - yes | sdkmanager --licenses
          # Build apk
          - git clone git@bitbucket.org:myorg/myrepo.git
          - scp -r myrepo/com/mycomp/* /home/gradle/.gradle/caches/modules-2/files-2.1/com.mycomp
          #- gradle build
          - ./gradlew build
          #- ./gradlew assembleDebug

definitions:
  caches:
    android-sdk: android-sdk
Gradlew build error:
> Could not resolve all files for configuration ':app:debugCompileClasspath'.
   > Could not resolve com.mycomp:mypackage:1.0.1.
     Required by:
         project :app
      > Could not resolve com.mycomp:mypackage:1.0.1.
         > Could not get resource 'https://bitbucket.org/myorg/myrepo/raw/releases/com/mycomp/mypackage/1.0.1/package-1.0.1.pom'.
            > Username may not be null
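For reference, one way to avoid both the hard-coded credentials and the cache-copy workaround would be to recreate the gradle.properties the build expects from secured Bitbucket repository variables before running gradlew. This is only a sketch: the variable names (PRIVATE_REPO_USER, PRIVATE_REPO_PASSWORD) and property names (repoUsername, repoPassword) are assumptions and would have to match whatever the repository block in build.gradle actually reads.

# Sketch: recreate the credentials Gradle expects from secured pipeline variables.
# All names below are placeholders, not taken from the project.
mkdir -p ~/.gradle
echo "repoUsername=$PRIVATE_REPO_USER" >> ~/.gradle/gradle.properties
echo "repoPassword=$PRIVATE_REPO_PASSWORD" >> ~/.gradle/gradle.properties
./gradlew build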
I'm trying to set up a GitLab CI pipeline on a Docker runner.
I have a Docker runner with a Maven image (maven:3.8.6-openjdk-11) that I'm using for a pipeline that compiles a Maven project.
I have set up a Maven repository cache, but it does not seem to be working properly. Every time my pipeline runs, it downloads the dependencies needed to compile my project from my Nexus repository. I expected it to download only the "new" dependencies and to take the rest from the cache without downloading them.
Below is the content of the .gitlab-ci.yml file:
variables:
  MAVEN_OPTS: >-
    -Dhttps.protocols=TLSv1.2
    -Dmaven.repo.local=$CI_PROJECT_DIR/.m2/repository
    -Dorg.slf4j.simpleLogger.showDateTime=true
    -Djava.awt.headless=true

cache:
  paths:
    - .m2/repository/

build:
  tags:
    - tdev
  script:
    - mvn --settings $MAVEN_SETTINGS clean compile
On each run it downloads the dependencies:
[...]
Downloading from infogreffe: https://xxxx/nexus/repository/infogreffe/org/wildfly/core/wildfly-core-security/19.0.0.Final/wildfly-core-security-19.0.0.Final.jar
Downloaded from infogreffe: https://xxxx/nexus/repository/infogreffe/org/wildfly/core/wildfly-controller-client/19.0.0.Final/wildfly-controller-client-19.0.0.Final.jar (214 kB at 70 kB/s)
Downloading from infogreffe: https://xxxx/nexus/repository/infogreffe/org/wildfly/security/wildfly-elytron-auth/2.0.0.Final/wildfly-elytron-auth-2.0.0.Final.jar
Downloaded from infogreffe: https://xxxx/nexus/repository/infogreffe/com/oracle/ojdbc5/11.2.0.2.0/ojdbc5-11.2.0.2.0.jar (2.0 MB at 666 kB/s)
[...]
Note that I use a local cache.
Thanks.
I expected that, by using a cache for my Maven repository, it would not try to download dependencies on every pipeline run unless there are new ones. But maybe I have misunderstood how caching works...
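One quick way to see whether the restored cache is actually being used is to list the local repository and try an offline build. A minimal sketch, assuming the cache was restored into $CI_PROJECT_DIR/.m2/repository as configured above:

# List what the cache restored.
ls .m2/repository
# If the local repository is populated, an offline compile should succeed
# without any "Downloading from ..." lines.
mvn --settings "$MAVEN_SETTINGS" --offline clean compile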
I am attempting to refactor a Maven build process, and I am trying to populate a local Maven repository to help with this refactor.
My build depends on obsolete versions of jar files that exist only in a Maven repository on my network (not in Maven Central). Example: org.foo:example:1.7:jar
I have been attempting to run my Maven build in a Docker image with the hope that I could identify all of the obsolete components being pulled from my Maven repository.
My goal is to explicitly pull down dependencies from my Maven repository and then build the application using only Maven Central as an external repository.
I have a Dockerfile to run the build:
FROM maven:3-jdk-8 as build
WORKDIR /build
# This pom.xml file only references maven central.
COPY pom.xml .
# Explicitly download artifacts into /root/.m2/...
RUN mvn dependency:get -Dartifact=org.foo:example:1.7:jar \
    -DrepoUrl=https://my.maven.repo
# Run the build making use of the dependencies loaded into the local repo
RUN mvn install
Unfortunately, I see an error: could not resolve dependencies for project ... The following artifacts could not be resolved: org.foo:example:jar:1.7.
I presume there is some metadata in my local org.foo:example:1.7 POM that records its origin repository. I had hoped I could satisfy this dependency simply by pulling it into my local repository.
I also attempted to add the following flag
RUN mvn install --no-snapshot-updates
After further investigation, I discovered that the downloads in my .m2/repository contained files named _remote.repositories.
Solution: set -Dmaven.legacyLocalRepo=true when running dependency:get
With this flag, the _remote.repositories files are not created.
RUN mvn dependency:get -Dmaven.legacyLocalRepo=true \
    -Dartifact=org.foo:example:1.7:jar \
    -DrepoUrl=https://my.maven.repo
This question helped to provide a solution: _remote.repositories prevents maven from resolving remote parent
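An alternative sketch (not from the answer above) is to delete the _remote.repositories marker files after the explicit download, so that a later build which only knows about Maven Central will still accept the cached artifacts:

# Remove the origin-tracking markers written by dependency:get.
find /root/.m2/repository -name _remote.repositories -delete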
So, I need to integrate kcov into my GitLab CI to see code coverage for a test executable. The kcov documentation states that I need to run "kcov /path/to/outdir ./myexec" to generate an HTML report. However, even though the command succeeds, /path/to/outdir is still empty and I don't know why, since the tests pass and kcov returns no errors.
Here is the .gitlab-ci.yml:
stage: coverage
dependencies:
  - Build
script:
  - mkdir build/test/kcov
  - cd build/test
  - kcov --include-path=../../src /kcov ./abuse-test
  - cd kcov
  - ls
artifacts:
  paths:
    - TP3/build
    - TP3/src
My test executable is abuse-test; it is generated via CMake/make and lives in TP3/build/test/abuse-test.
The console output of the CI job is the following:
on igl601-runner3 5d2b3c01
Using Docker executor with image depot.dinf.usherbrooke.ca:4567/e19-igl601/eq09/image_tp3 ...
Pulling docker image depot.dinf.usherbrooke.ca:4567/e19-igl601/eq09/image_tp3 ...
Using docker image sha256:c2cf0a7c10687670c7b28ee23ac06899de88ebb0d86e142bfbf65171147fc167 for depot.dinf.usherbrooke.ca:4567/e19-igl601/eq09/image_tp3 ...
Running on runner-5d2b3c01-project-223-concurrent-0 via dinf-prj-16...
Fetching changes...
Removing TP3/build/
HEAD is now at b2e1277 Update .gitlab-ci.yml
From https://depot.dinf.usherbrooke.ca/e19-igl601/eq09
b2e1277..7cf0af5 master -> origin/master
Checking out 7cf0af56 as master...
Skipping Git submodules setup
Downloading artifacts for Build (8552)...
Downloading artifacts from coordinator... ok id=8552 responseStatus=200 OK token=Pagxjp_C
$ cd TP3
$ mkdir build/test/kcov
$ cd build/test
$ kcov --include-path=../../src /kcov ./abuse-test
===============================================================================
All tests passed (3 assertions in 3 test cases)
$ cd kcov
$ ls
Uploading artifacts...
TP3/build: found 2839 matching files
TP3/src: found 211 matching files
Uploading artifacts to coordinator... ok id=8554 responseStatus=201 Created token=PxDHHjxf
Job succeeded
The kcov documentation states: "/path/to/outdir will contain lcov-style HTML output generated continuously while the application runs."
And yet, when I browse the artifacts, I find nothing.
Hi, it looks like you're specifying /kcov as the outdir:
kcov --include-path=../../src /kcov ./abuse-test
Since you're working on a *nix-based system, the leading / implies an absolute path from the root of your filesystem.
The cd kcov step assumes a relative path (down from your current directory), since it is missing the /.
So I guess changing your kcov command to:
kcov --include-path=../../src kcov ./abuse-test
would fix your issue.
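To make the difference concrete, here is a small sketch of the two invocations as run from build/test; only the output-directory argument changes:

cd build/test
kcov --include-path=../../src /kcov ./abuse-test   # absolute path: writes to /kcov at the filesystem root
kcov --include-path=../../src kcov ./abuse-test    # relative path: writes to build/test/kcov, which sits inside TP3/build and is collected by the artifacts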
I have a single BitBucket repository containing the code for an Angular app in a folder called ui and a Node API in a folder called api.
My BitBucket pipeline runs ng test for the Angular app, but the node_modules folder isn't being cached correctly.
This is my BitBucket Pipeline yml file:
image: trion/ng-cli-karma

pipelines:
  default:
    - step:
        caches:
          - angular-node
        script:
          - cd ui
          - npm install
          - ng test --watch=false

definitions:
  caches:
    angular-node: /ui/node_modules
When the build runs it shows:
Cache "angular-node": Downloading
Cache "angular-node": Extracting
Cache "angular-node": Extracted
But when it performs the npm install step it says:
added 1623 packages in 41.944s
I am trying to speed up the build, and I can't work out why npm needs to install the dependencies if they are already contained in the cache that has just been restored.
My guess is that your cache position is not correct. There is a pre-configured node cache (named "node") that can simply be activated, so normally there is no need to define a custom cache. (Here the default cache fails because your node build is in a subfolder of the clone directory, so you do need a custom one.)
Cache positions are relative to the clone directory. Bitbucket clones into /opt/atlassian/pipelines/agent/build, which is probably why your absolute cache path did not work.
Simply making the cache reference relative should do the trick:
pipelines:
  default:
    - step:
        caches:
          - angular-node
        script:
          - cd ui
          - npm install
          - ng test --watch=false

definitions:
  caches:
    angular-node: ui/node_modules
That should fix your issue.
I have a CI setup with Jenkins and Artifactory for Java. I would also like to build and deploy deb packages. For building deb packages, I might use a Maven plugin (called from Gradle), e.g., http://mojo.codehaus.org/deb-maven-plugin/.
I am now investigating Debian repository implementations. I would like to deploy a private Debian repository to host my packages (http://wiki.debian.org/HowToSetupADebianRepository).
Are there any Jenkins plugins that would make it easier to deploy deb packages? Which Debian repository implementation should I use?
Just adding my 2 cents to this post.
Internally we use Freight (https://github.com/rcrowley/freight#readme) as our Debian/Ubuntu repository.
A lot of us tend to use fpm (https://github.com/jordansissel/fpm#readme) by Jordan Sissel for creating debs for internal use.
This can easily be scripted inside your source repository, as I do here:
https://github.com/stuart-warren/logit/blob/master/make-deb
#!/bin/bash
# SET SOME VARS
installdir='/usr/lib/logit'
NAME='logit-java'
VERSION='0.5.8'
ITERATION='1'
WEBSITE='https://github.com/stuart-warren/logit'
REPO='http://nexus.stuartwarren.com/nexus'
# REMOVE PREVIOUS BUILD IF PRESENT
echo "Delete ${installdir}"
rm -rf .${installdir}
# CREATE FOLDER STRUCTURE
echo "create base dir ${installdir}"
mkdir -p .${installdir}
# PUT FILES IN THE CORRECT LOCATIONS
wget ${REPO}/content/repositories/releases/com/stuartwarren/logit/${VERSION}/logit-${VERSION}-tomcatvalve.jar -O .${installdir}/logit-${VERSION}-tomcatvalve.jar
wget ${REPO}/content/repositories/releases/com/stuartwarren/logit/${VERSION}/logit-${VERSION}-jar-with-dependencies.jar -O .${installdir}/logit-${VERSION}-jar-with-dependencies.jar
wget https://raw.github.com/stuart-warren/logit/master/LICENSE -O .${installdir}/LICENCE
wget https://raw.github.com/stuart-warren/logit/master/README.md -O .${installdir}/README.md
pushd .${installdir}
ln -sf logit-${VERSION}-tomcatvalve.jar logit-tomcatvalve.jar
ln -sf logit-${VERSION}-jar-with-dependencies.jar logit-jar-with-dependencies.jar
popd
# REMOVE OLD PACKAGES
echo "Delete old packages"
rm ${NAME}_*_all.deb
# CREATE THE DEB
echo "Build new package"
fpm \
-n $NAME \
-v $VERSION \
--iteration ${ITERATION} \
-a all \
-m "Stuart Warren <stuart#stuartwarren.com>" \
--description "Library to extend Log4J 1.2 (plus now Logback 1.0,
Java.util.logging and Tomcat AccessLog Valve) by providing
json layouts (for logstash/greylog) and a zeromq appender" \
--url $WEBSITE \
--license 'Apache License, Version 2.0' \
--vendor 'stuartwarren.com' \
-t deb \
-s dir \
${installdir:1}
echo "Delete ${installdir}"
rm -rf .${installdir}
echo "Done!"
Obviously you could just copy in any compiled files directly rather than downloading them from a server (a Maven repo in my case).
Then you can SCP the deb up to some 'incoming' directory on your repository server.
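For example, the upload step can be as simple as the following sketch (the host name and incoming path are placeholders for your own repository server):

# Copy the freshly built package to the repository server's incoming directory.
scp logit-java_0.5.8-1_all.deb repo.example.com:/srv/freight/incoming/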
I am not aware of a Debian package plugin for Jenkins, and I didn't find the maven-deb-plugin suitable for my needs (see "What doesn't work" on the page you linked to). Where I have a Maven build job in Jenkins, I add a post-build shell step which increments the version in debian/changelog and runs dpkg-buildpackage -b -nc.
-nc suppresses the clean before the build, which is necessary because my debian/rules file would otherwise try to run the Maven targets to build the jars, which Jenkins has already done. Snippet from my debian/rules:
override_dh_auto_build: pre-built-stamp

pre-built-stamp:
	mvn package
	touch pre-built-stamp
So after the Maven steps, Jenkins runs the following:
touch pre-built-stamp
dpkg-buildpackage -b -nc
This part is personal preference, but I do not have Jenkins push built debs straight to my repository. Instead it saves the .deb and .changes files as build artifacts so I can use the Promoted Builds Plugin to sign the .changes file and copy it to the repository (rsync). This lets my developers download and test the deb out before approving it to be pushed to our staging repository. A second promotion can then be used to push the package to a live repository.
I chose reprepro as our repository manager. Its one major drawback is that it cannot handle more than one version of a package in a distribution at once, which makes rollbacks more painful. Aside from that, I have found it reliable and usable, and I now use it to completely mirror the main Debian repositories as well as to host my private repos.
Reprepro uses inoticoming to spot new incoming packages and verifies the signature on the changes file, ensuring that only Jenkins can add new packages.
I find some of the reprepro documentation online lacking, but I recommend installing it and reading the reprepro and inoticoming man pages.
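As a rough illustration of day-to-day usage (the base directory, distribution name, and package file below are assumptions, not taken from this answer):

# Import a package into the "stable" distribution of a repository rooted at /srv/reprepro.
reprepro -b /srv/reprepro includedeb stable mypackage_1.0-1_all.deb
# Show what is currently published for that distribution.
reprepro -b /srv/reprepro list stable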
Debian Package Builder Plugin for Jenkins
Yes, there is a plugin that helps with deploying Debian packages into package repositories. The Debian Package Builder plugin has two features: a build step (which you don't seem to need) and a post-build publishing step. Your target repositories are configured in the system configuration; just select one of them in the job configuration. The plugin uses dupload(1) under the hood.
As a repository manager for Debian packages I recommend Aptly. It is powerful, easy to use, well-documented and actively developed.