How should I set up my CI (Jenkins) for deb packages?

I have a CI setup with Jenkins and Artifactory for Java. I would also like to build and deploy deb packages. For building deb packages, I might use a Maven plugin (called from Gradle), e.g., http://mojo.codehaus.org/deb-maven-plugin/.
I am now investigating Debian repository implementations. I would like to deploy a private Debian repository to host my packages (http://wiki.debian.org/HowToSetupADebianRepository).
Are there any plugins for Jenkins that would make it easier to deploy deb packages? Which Debian repository implementation should I use?

Just adding my 2 cents to this post.
Internally we use Freight (https://github.com/rcrowley/freight#readme) as our Debian/Ubuntu repository.
A lot of us tend to use fpm (https://github.com/jordansissel/fpm#readme) by Jordan Sissel for creating debs for internal use.
This can be easily scripted inside your source repository like I do here:
https://github.com/stuart-warren/logit/blob/master/make-deb
#!/bin/bash
# SET SOME VARS
installdir='/usr/lib/logit'
NAME='logit-java'
VERSION='0.5.8'
ITERATION='1'
WEBSITE='https://github.com/stuart-warren/logit'
REPO='http://nexus.stuartwarren.com/nexus'
# REMOVE PREVIOUS BUILD IF PRESENT
echo "Delete ${installdir}"
rm -rf .${installdir}
# CREATE FOLDER STRUCTURE
echo "create base dir ${installdir}"
mkdir -p .${installdir}
# PUT FILES IN THE CORRECT LOCATIONS
wget ${REPO}/content/repositories/releases/com/stuartwarren/logit/${VERSION}/logit-${VERSION}-tomcatvalve.jar -O .${installdir}/logit-${VERSION}-tomcatvalve.jar
wget ${REPO}/content/repositories/releases/com/stuartwarren/logit/${VERSION}/logit-${VERSION}-jar-with-dependencies.jar -O .${installdir}/logit-${VERSION}-jar-with-dependencies.jar
wget https://raw.github.com/stuart-warren/logit/master/LICENSE -O .${installdir}/LICENCE
wget https://raw.github.com/stuart-warren/logit/master/README.md -O .${installdir}/README.md
pushd .${installdir}
ln -sf logit-${VERSION}-tomcatvalve.jar logit-tomcatvalve.jar
ln -sf logit-${VERSION}-jar-with-dependencies.jar logit-jar-with-dependencies.jar
popd
# REMOVE OLD PACKAGES
echo "Delete old packages"
rm ${NAME}_*_all.deb
# CREATE THE DEB
echo "Build new package"
fpm \
-n $NAME \
-v $VERSION \
--iteration ${ITERATION} \
-a all \
-m "Stuart Warren <stuart@stuartwarren.com>" \
--description "Library to extend Log4J 1.2 (plus now Logback 1.0,
Java.util.logging and Tomcat AccessLog Valve) by providing
json layouts (for logstash/greylog) and a zeromq appender" \
--url $WEBSITE \
--license 'Apache License, Version 2.0' \
--vendor 'stuartwarren.com' \
-t deb \
-s dir \
${installdir:1}
echo "Delete ${installdir}"
rm -rf .${installdir}
echo "Done!"
Obviously you could just copy in any compiled files directly rather than downloading them from a server (a Maven repo in my case).
Then you can SCP the deb up to some 'incoming' directory on your repository server.
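The scp-to-incoming step can be sketched as a small script. The hostname, incoming path and the apt/trusty target below are hypothetical, and the remote commands are only printed by default (set DRY_RUN=0 to actually run them):

```shell
#!/bin/sh
# Sketch: push a freshly built deb to a Freight-managed repo host.
# Host, paths and the apt/trusty distro are examples, not from the post.
DEB="logit-java_0.5.8-1_all.deb"
REPO_HOST="repo.example.com"
INCOMING="/var/tmp/incoming"

# Print commands instead of executing them unless DRY_RUN=0.
run() { if [ "${DRY_RUN:-1}" = "1" ]; then echo "$@"; else "$@"; fi; }

run scp "$DEB" "$REPO_HOST:$INCOMING/"
run ssh "$REPO_HOST" "freight add $INCOMING/$DEB apt/trusty && freight cache apt/trusty"
```

The dry-run wrapper makes it easy to review the exact commands in the Jenkins console log before letting the job touch the repository host.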

I am not aware of a Debian package plugin for Jenkins, and I didn't find the maven-deb-plugin suitable for my needs (see "What doesn't work" on the page you linked to). Where I have a Maven build job in Jenkins, I add a post-build shell step which increments the version in debian/changelog and runs dpkg-buildpackage -b -nc.
-nc suppresses the clean before the build, which is necessary because my debian/rules file would otherwise try to run the Maven targets to build the jars, which Jenkins has already done. Snippet from my debian/rules:
pre-built-stamp:
	mvn package
	touch pre-built-stamp

override_dh_auto_build: pre-built-stamp
So after the maven steps in Jenkins it runs the following
touch pre-built-stamp
dpkg-buildpackage -b -nc
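The changelog increment itself can be scripted with dch from devscripts. A minimal sketch, assuming an upstream-version-plus-Jenkins-build-number scheme (the version values and scheme are examples, not from the answer):

```shell
#!/bin/sh
# Sketch: derive the package version from the Jenkins build number and
# bump debian/changelog before building. The scheme is an example.
UPSTREAM="1.2.3"
BUILD_NUMBER="${BUILD_NUMBER:-7}"   # injected by Jenkins in a real job
VERSION="${UPSTREAM}-${BUILD_NUMBER}"
echo "$VERSION"

# In the real job (needs the devscripts package installed):
#   dch -b -v "$VERSION" --distribution unstable "Automated Jenkins build ${BUILD_NUMBER}"
#   touch pre-built-stamp
#   dpkg-buildpackage -b -nc
```

Using the Jenkins build number as the Debian revision keeps versions monotonically increasing, which matters because apt will not "upgrade" to a lower version.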
This part is personal preference, but I do not have Jenkins push built debs straight to my repository. Instead it saves the .deb and .changes files as build artifacts so I can use the Promoted Builds Plugin to sign the .changes file and copy it to the repository (rsync). This lets my developers download and test the deb out before approving it to be pushed to our staging repository. A second promotion can then be used to push the package to a live repository.
I chose reprepro as our repository manager. Its one major drawback is that it cannot hold more than one version of a package per distribution at once, which makes rollback more painful. Aside from this I have found it reliable and usable, and now use it both to completely mirror the main Debian repositories and to host my private repos.
Reprepro uses inoticoming to spot new incoming packages and verifies the signature on the changes file, ensuring that only Jenkins can add new packages.
I find some of the reprepro documentation online lacking, but I recommend installing it and reading the reprepro and inoticoming man pages.
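For reference, a minimal reprepro layout looks like the following. The codename, architectures and signing key ID are placeholders, and the includedeb command is shown as a comment since it needs reprepro installed:

```shell
#!/bin/sh
# Sketch: minimal reprepro repository skeleton (all values are placeholders).
REPO=./repo-example
mkdir -p "$REPO/conf"
cat > "$REPO/conf/distributions" <<'EOF'
Codename: stable
Architectures: amd64 all
Components: main
SignWith: ABCD1234
EOF

# With reprepro installed, a package is then added with:
#   reprepro -b "$REPO" includedeb stable mypackage_1.0_all.deb
```

The SignWith key ID is what ties into the signed-.changes workflow described above: reprepro will only accept uploads it can verify.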

Debian Package Builder Plugin for Jenkins

Yes, there is a plugin that helps with deploying Debian packages into package repositories. The Debian Package Builder Plugin has two features: a build step (which you don't seem to need) and a post-build publishing step. Your target repositories are configured in the system configuration; just select one of them in the job configuration. The plugin uses dupload(1) under the hood.
As a repository manager for Debian packages I recommend Aptly. It is powerful, easy to use, well-documented and actively developed.


How to reduce size when building from source

The Bazel build is 90 GB. I just want to use the commercial solvers. Is there any way to reduce the build size?
Check https://drake.mit.edu/from_source.html near "Building the Python Bindings". The section title is misleading: it works for building all of Drake, not just the Python bindings.
The docs should do a better job of showing the GUROBI settings, though.
Here's a working example:
git clone https://github.com/RobotLocomotion/drake.git
mkdir drake-build
cd drake-build
env GUROBI_HOME=/path/to/Downloads/gurobi951/linux64 \
cmake -DWITH_GUROBI=ON \
../drake
make install
The install ends up in drake-build/install by default.
If you'd like to build Drake as an external in your existing CMakeLists project, see also drake-external-examples/drake_cmake_external.

Isolate maven jar dependencies that are unavailable from maven central

I am attempting to refactor a maven build process, and I am trying to populate a local maven repository to help with this refactor.
My build depends on obsolete versions of jar files that exist only in a maven repo on my network (not in maven central). Example: org.foo:example:1.7:jar
I have been attempting to run my maven build in a docker image, with the hope that I could identify all of the obsolete components being pulled from my maven repository.
My goal is to explicitly pull down dependencies from my maven repo and then build the application using only maven central as an external repository.
I have a Dockerfile to run the build:
FROM maven:3-jdk-8 as build
WORKDIR /build
# This pom.xml file only references maven central.
COPY pom.xml .
# Explicitly download artifacts into /root/.m2/...
RUN mvn dependency:get -Dartifact=org.foo:example:1.7:jar \
    -DrepoUrl=https://my.maven.repo
# Run the build making use of the dependencies loaded into the local repo
RUN mvn install
Unfortunately, I see an error: could not resolve dependencies for project ... The following artifacts could not be resolved: org.foo:example:jar:1.7.
I presume there might be some metadata in my local org.foo:example:1.7:pom that is aware of its origin repository. I had hoped I could satisfy this dependency by pulling it into my local repository.
I also attempted to add the following flag
RUN mvn install --no-snapshot-updates
After further investigation, I discovered that the downloads in my .m2/repository contained files named _remote.repositories.
Solution: set -Dmaven.legacyLocalRepo=true when running dependency:get
With this flag, the _remote.repositories files are not created.
RUN mvn dependency:get -Dmaven.legacyLocalRepo=true \
    -Dartifact=org.foo:example:1.7:jar \
    -DrepoUrl=https://my.maven.repo
This question helped to provide a solution: _remote.repositories prevents maven from resolving remote parent
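If the artifacts were already fetched without the flag, another way out (assuming nothing else relies on the marker files) is simply to delete the _remote.repositories markers before the offline build. A sketch, demonstrated against a scratch directory rather than a real ~/.m2:

```shell
#!/bin/sh
# Sketch: strip _remote.repositories marker files from a local repository
# so Maven stops associating artifacts with their origin repo.
# Real usage would target "$HOME/.m2/repository"; this demo uses a scratch dir.
M2_REPO=./m2-demo
mkdir -p "$M2_REPO/org/foo/example/1.7"
touch "$M2_REPO/org/foo/example/1.7/_remote.repositories"

find "$M2_REPO" -name '_remote.repositories' -delete
```

This is equivalent in effect to -Dmaven.legacyLocalRepo=true for artifacts that are already on disk.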

Why doesn't gradlew build use local cache in docker (bitbucket) pipeline?

I'm using a Bitbucket docker pipeline to validate builds of my android app on push. One of my dependencies is a private package which I host on another Bitbucket repository. For builds on developer machines, I use Gradle's private maven repository plugin, which can resolve my dependency with an encrypted username and password.
This works well for developer machines, but I want to avoid hard-coding usernames and passwords in the pipeline. Instead, since Bitbucket supports SSH keys across repositories for authentication, my pipeline script clones the repository holding my private packages and copies them over to the gradle cache. I have tried both:
/home/gradle/.gradle/caches/modules-2/files-2.1/com.mycompany
~/.gradle/caches/modules-2/files-2.1/com.mycompany
as caches. The clone and copy work just fine, as I can see the files in their respective directories in the cache by adding an ls in the pipeline, but gradlew still goes out to the internet (the other Bitbucket repository) to resolve the dependencies, as if there were no cache. Further, I'm using the gradle docker image gradle:3.4.1 (the version of gradle in my project-level build.gradle file), but gradle build fails with google() is not a function.
Gradlew build fails trying to resolve my package's .pom file for lack of a username (because there is no gradle.properties in the pipeline). But why doesn't it use the cache instead of going to the repository?
I have tried the standard java:8 docker image, the gradle docker images up to 5.1.1, and copying the package files into various gradle caches in the docker image. I have also tried altering permissions with chmod 775, to no avail, and gradlew assembleDebug gives the same results as gradlew build. I'm a bit new to gradle, docker and bitbucket, so I'm not sure what is causing the issue.
image: gradle:3.4.1

pipelines:
  default:
    - step:
        caches:
          - gradle
          - android-sdk
        script:
          # Download and unzip android sdk
          - wget --quiet --output-document=android-sdk.zip https://dl.google.com/android/repository/sdk-tools-linux-3859397.zip
          - unzip -o -qq android-sdk.zip -d android-sdk
          # Define Android Home and add PATHs
          - export ANDROID_HOME="/opt/atlassian/pipelines/agent/build/android-sdk"
          - export PATH="$ANDROID_HOME/tools:$ANDROID_HOME/tools/bin:$ANDROID_HOME/platform-tools:$PATH"
          # Download packages.
          - yes | sdkmanager "platform-tools"
          - yes | sdkmanager "platforms;android-27"
          - yes | sdkmanager "build-tools;27.0.3"
          - yes | sdkmanager "extras;android;m2repository"
          - yes | sdkmanager "extras;google;m2repository"
          - yes | sdkmanager "extras;google;instantapps"
          - yes | sdkmanager --licenses
          # Build apk
          - git clone git@bitbucket.org:myorg/myrepo.git
          - scp -r myrepo/com/mycomp/* /home/gradle/.gradle/caches/modules-2/files-2.1/com.mycomp
          #- gradle build
          - ./gradlew build
          #- ./gradlew assembleDebug

definitions:
  caches:
    android-sdk: android-sdk
Gradlew build error:
> Could not resolve all files for configuration ':app:debugCompileClasspath'.
   > Could not resolve com.mycomp:mypackage:1.0.1.
     Required by:
         project :app
      > Could not resolve com.mycomp:mypackage:1.0.1.
         > Could not get resource 'https://bitbucket.org/myorg/myrepo/raw/releases/com/mycomp/mypackage/1.0.11/package-1.0.1.pom'.
            > Username may not be null
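One thing worth noting about the approach above: Gradle's caches/modules-2 directory is keyed by module metadata and content hashes, so dropping files into it by hand is not the same as declaring a repository, and Gradle will still try the declared remote. An alternative worth trying is to point Gradle at the cloned checkout as a file-based Maven repository via an init script. A sketch, where the checkout path and repo layout are assumptions, not from the original pipeline:

```shell
#!/bin/sh
# Sketch: expose the cloned package repository as a local file-based
# Maven repo through a Gradle init script, instead of faking the cache.
# The path below is hypothetical.
cat > init.gradle <<'EOF'
allprojects {
    repositories {
        maven { url 'file:///opt/atlassian/pipelines/agent/build/myrepo' }
    }
}
EOF

# In the pipeline, run the build with:
#   ./gradlew build --init-script init.gradle
```

Because the clone already lays files out as com/mycomp/..., the checkout root can serve directly as the repository root, and no credentials are needed for a file:// URL.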

How can I transfer all installed plugins to another Jenkins?

I want to install the same plugins in my local Jenkins that are already installed in another Jenkins,
and want to avoid installing all 50-60 odd plugins manually.
The official Jenkins documentation on installing plugins gives two ways of installing plugins:
Via the web interface
Save the downloaded *.hpi/*.jpi file into the $JENKINS_HOME/plugins directory.
So my answer to your question would be: copy the $JENKINS_HOME/plugins directory from server A to server B.
Don't forget to restart Jenkins afterwards!
There's another way that's ideal if you're using Jenkins inside a docker container. First, extract a list of installed plugins by running curl against your Jenkins domain in a terminal:
export JENKINS_URL=http://<jenkins_domain>
curl -sSL "$JENKINS_URL/pluginManager/api/xml?depth=1&xpath=/*/*/shortName|/*/*/version&wrapper=plugins" | perl -pe 's/.*?<shortName>([\w-]+).*?<version>([^<]+)()(<\/\w+>)+/\1 \2\n/g'|sed 's/ /:/'
This will return a list of installed plugins, formatted like this:
aws-credentials:1.15
aws-beanstalk-publisher-plugin:1.6.0
aws-java-sdk:1.10.45.2
Then you can run this script against the list saved in a txt file to install all the exported plugins or add it to the end of your Dockerfile like this:
# copy script to container's bin
ADD ./plugin.sh /usr/local/bin/plugins.sh
# copy plugins list to inside the container
COPY plugins.txt /plugins.txt
# runs it
RUN /usr/local/bin/plugins.sh /plugins.txt
Just remember to export your JENKINS_HOME variable before doing this.
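Given a name:version list like the one above, the download URLs on the update site follow a predictable pattern, so a small helper can turn each line into a URL. This is a sketch, not the official plugins.sh:

```shell
#!/bin/sh
# Sketch: map a "name:version" line to its download URL on the Jenkins
# update site, following the updates.jenkins-ci.org layout.
plugin_url() {
    name="${1%%:*}"
    version="${1##*:}"
    echo "https://updates.jenkins-ci.org/download/plugins/${name}/${version}/${name}.hpi"
}

plugin_url "aws-credentials:1.15"
```

Piping each printed URL through wget or curl into $JENKINS_HOME/plugins would reproduce the list on the target server.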

How to install a plugin in Jenkins manually

Installing a plugin from the Update center results in:
Checking internet connectivity Failed to connect to http://www.google.com/. Perhaps you need to configure HTTP proxy?
Deploy Plugin Failure - Details hudson.util.IOException2: Failed to download from http://updates.jenkins-ci.org/download/plugins/deploy/1.9/deploy.hpi
Is it possible to download the plugin and install it manually into Jenkins?
Yes, you can. Download the plugin (*.hpi file) and put it in the following directory:
<jenkinsHome>/plugins/
Afterwards you will need to restart Jenkins.
Download the plugin.
Inside Jenkins: Manage Jenkins → Manage Plugins → There is a tab called Advanced and on that page there is an option to upload a plugin (the extension of the file must be hpi).
Sometimes, when you download plugins you may get (.zip) files then just rename with (.hpi) and use the UI to install the plugin.
If you use Docker, you should read this file: https://github.com/cloudbees/jenkins-ci.org-docker/blob/master/plugins.sh
Example of a parent Dockerfile:
FROM jenkins
COPY plugins.txt /plugins.txt
RUN /usr/local/bin/plugins.sh /plugins.txt
plugins.txt
<name>:<version>
<name2>:<version2>
I have created a simple script that does the following:
Download one or more plugins to the plugin directory
Scan all plugins in that directory for missing dependencies
Download these dependencies as well
Loop until no open dependencies are left
The script requires no running Jenkins - I use it to provision a docker box.
https://gist.github.com/micw/e80d739c6099078ce0f3
Sometimes when you download plugins you may get (.zip) files; just rename them to (.hpi), then extract all the plugins and move them to the <jenkinsHome>/plugins/ directory.
Update for Docker: use the install-plugins.sh script. It takes a list of plugin names minus the '-plugin' extension. See the description here.
install-plugins.sh replaces the deprecated plugins.sh, which now warns:
WARN: plugins.sh is deprecated, please switch to install-plugins.sh
To use a plugins.txt as per plugins.sh see this issue and this workaround:
RUN /usr/local/bin/install-plugins.sh $(cat /usr/share/jenkins/plugins.txt | tr '\n' ' ')
Use https://updates.jenkins-ci.org/download/plugins/, the central update repository for Jenkins, to download plugins.
The accepted answer is accurate, but make sure that you also install all necessary dependencies as well. Installing using the CLI or web seems to take care of this, but my plugins were not showing up in the browser or using java -jar jenkins-cli.jar -s http://localhost:8080 list-plugins until I also installed the dependencies.
The answers given work for adding plugins. If you want to replace or update a bundled plugin like the credentials plugin, which has dependencies, then you have to use the frontend. To automate this I use:
curl -i -F file=@pluginfilename.hpi http://jenkinshost/jenkins/pluginManager/uploadPlugin
In my case, I needed to install a plugin to an offline build server that's running a Windows Server (version won't matter here). I already installed Jenkins on my laptop to test out changes in advance and it is running on localhost:8080 as a windows service.
So if you are willing to take the time to setup Jenkins on a machine with Internet connection and carry these changes to the offline server Jenkins (it works, confirmed by me!), these are steps you could follow:
Jenkins on my laptop: Open up Jenkins, http://localhost:8080
Navigator: Manage Jenkins | Download plugin without install option
Windows Explorer: Copy the downloaded plugin file that is located at "c:\program files (x86)\Jenkins\plugins" folder (i.e. role-strategy.jpi)
Paste it into a shared folder in the offline server
Stop the Jenkins Service (Offline Server Jenkins) through Component Services, Jenkins Service
Copy the plugin file (i.e. role-strategy.jpi) into "c:\program files (x86)\Jenkins\plugins" folder on the (Offline Jenkins) server
Restart Jenkins and voila! It should be installed.
This is a way to copy plugins from one Jenkins box to another.
Copy over the plugins directory:
scp -r jenkins-box.url.com:/var/lib/jenkins/plugins .
Compress the plugins:
tar cvfJ plugins.tar.xz plugins
Copy them over to the other Jenkins box:
scp plugins.tar.xz different-jenkins-box.url.com:
ssh different-jenkins-box.url.com "tar xvfJ plugins.tar.xz -C /var/lib/jenkins"
Restart Jenkins.
Use this link to download the latest version of a plugin's hpi: https://updates.jenkins-ci.org/download/plugins/
Then upload the plugin through 'Manage Plugins' in Jenkins.
To install plugin "git" with all its dependencies:
curl -XPOST http://localhost:8080/pluginManager/installNecessaryPlugins -d '<install plugin="git#current" />'
Here, the plugin installed is git ; the version, specified as #current is ignored by Jenkins. Jenkins is running on localhost port 8080, change this as needed. As far as I know, this is the simplest way to install a plugin with all its dependencies 'by hand'. Tested on Jenkins v1.644
RUN /usr/local/bin/install-plugins.sh amazon-ecs:1.37 configuration-as-code:1.47 workflow-aggregator:2.6 \
cloudbees-folder:6.15 antisamy-markup-formatter:2.1 build-timeout:1.20 credentials-binding:1.24
Cat out the plugins.txt and install in Dockerfile as above.
