I have a multi-container job which runs on k8s via the Kubernetes Jenkins plugin. Everything works great, but I am unable to junit or archiveArtifacts anything. I suspect it's because the files exist only in the container, but I'm not sure. The code is below:
def label = "foo-${UUID.randomUUID().toString()}"
podTemplate(
label: label,
containers: [
containerTemplate(name: 'c1', image: 'c1'),
containerTemplate(name: 'c2', image: 'c2'),
containerTemplate(name: 'c3', image: 'c3'),
],
volumes: [
hostPathVolume(mountPath: '/var/run/docker.sock', hostPath: '/var/run/docker.sock'),
],
) {
node(label) {
stage('test') {
container('c1') {
sh """
cd /some-path
./generate-junit-xml
"""
archiveArtifacts allowEmptyArchive: true, artifacts: '/some-path/foo.xml'
sh "cat /some-path/foo.xml"
}
}
}
}
def label = "foo-${UUID.randomUUID().toString()}"
podTemplate(
label: label,
namespace: 'jenkins',
imagePullSecrets: [ 'myreg' ],
containers: [
containerTemplate(name: 'c1', image: 'c1'),
containerTemplate(name: 'c2', image: 'c2'),
containerTemplate(name: 'c3', image: 'c3'),
],
volumes: [
hostPathVolume(mountPath: '/var/run/docker.sock', hostPath: '/var/run/docker.sock'),
],
) {
node(label) {
stage('test') {
container('c1') {
sh """
./something-that-generates-junit-foo-xml
"""
archiveArtifacts allowEmptyArchive: true, artifacts: '/abs/path/to/foo.xml'
sh "cat /abs/path/to/foo.xml"
}
}
}
}
The build log shows the following output:
[Pipeline] archiveArtifacts
Archiving artifacts
WARN: No artifacts found that match the file pattern "/some-path/foo.xml". Configuration error?
[Pipeline] sh
[test-pipeline] Running shell script
+ cat /some-path/unittest.xml
<?xml version="1.0" encoding="utf-8"?>...</xml>
I would appreciate your help!
Both junit and archiveArtifacts can only archive files that are inside the WORKSPACE, and the containers do not share any volumes with the host (where the Jenkins WORKSPACE lives) unless you explicitly mount them.
I solved this with the following steps (a combined sketch follows the list):
- adding an additional volume where I save the files
hostPathVolume(hostPath: '/tmp', mountPath: '/tmp')
- copying the files from /tmp to the WORKSPACE with the File Operations Plugin
dir("/tmp/screenshots") {
fileOperations([fileCopyOperation(excludes: '', flattenFiles: true, includes: '*.png', targetLocation: "${WORKSPACE}/screenshots")])
}
- archiving artifacts
archiveArtifacts allowEmptyArchive: true, artifacts: 'screenshots/**/*.png'
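Putting those steps together, a minimal sketch (the container name, image, and generating command are placeholders; whatever runs inside the container is assumed to write its files under /tmp/screenshots):
podTemplate(
    label: label,
    containers: [
        containerTemplate(name: 'c1', image: 'c1'),
    ],
    volumes: [
        // hostPath volume mounted into the pod's containers, so files written in 'c1'
        // are also visible where the fileOperations/archiveArtifacts steps run
        hostPathVolume(hostPath: '/tmp', mountPath: '/tmp'),
    ],
) {
    node(label) {
        stage('test') {
            container('c1') {
                // placeholder command, assumed to drop its output under /tmp/screenshots
                sh './generate-screenshots'
            }
            // copy from the shared volume into the workspace...
            dir('/tmp/screenshots') {
                fileOperations([fileCopyOperation(excludes: '', flattenFiles: true, includes: '*.png', targetLocation: "${WORKSPACE}/screenshots")])
            }
            // ...and archive with a workspace-relative pattern
            archiveArtifacts allowEmptyArchive: true, artifacts: 'screenshots/**/*.png'
        }
    }
}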
Copying the artifacts into the Jenkins workspace will solve it:
stage('test') {
container('c1') {
sh """
./something-that-generates-junit-foo-xml
"""
sh 'cp /abs/path/to/foo.xml ${WORKSPACE}/abs/foo.xml'
archiveArtifacts allowEmptyArchive: true, artifacts: 'abs/foo.xml'
sh "cat /abs/path/to/foo.xml"
}
}
You can copy the whole directory if you need all of its contents:
sh 'cp -r /abs/path/to/ ${WORKSPACE}/abs/to'
archiveArtifacts allowEmptyArchive: true, artifacts: 'abs/to/*.xml'
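The same workspace-relative paths work for the junit step the question also asks about (a sketch, assuming the copied files are JUnit XML reports):
sh 'cp -r /abs/path/to/ ${WORKSPACE}/abs/to'
// the junit step also only sees files inside the workspace
junit allowEmptyResults: true, testResults: 'abs/to/*.xml'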
TL;DR:
I would like to use ActiveChoice parameters in a Multibranch Pipeline where choices are defined in a YAML file in the same repository as the pipeline.
Context:
I have config.yaml with the following contents:
CLUSTER:
dev: 'Cluster1'
test: 'Cluster2'
production: 'Cluster3'
And my Jenkinsfile looks like:
pipeline {
agent {
dockerfile {
args '-u root'
}
}
stages {
stage('Parameters') {
steps {
script {
properties([
parameters([
[$class: 'ChoiceParameter',
choiceType: 'PT_SINGLE_SELECT',
description: 'Select the Environemnt from the Dropdown List',
filterLength: 1,
filterable: false,
name: 'Env',
script: [
$class: 'GroovyScript',
fallbackScript: [
classpath: [],
sandbox: true,
script:
"return['Could not get The environemnts']"
],
script: [
classpath: [],
sandbox: true,
script:
'''
// Here I would like to read the keys from config.yaml
return list
'''
]
]
]
])
])
}
}
}
stage("Loading pre-defined configs") {
steps{
script{
conf = readYaml file: "config.yaml";
}
}
}
stage("Gather Config Parameter") {
options {
timeout(time: 1, unit: 'HOURS')
}
input {
message "Please submit config parameter"
parameters {
choice(name: 'ENV', choices: ['dev', 'test', 'production'])
}
}
steps{
// Validation of input params goes here
script {
env.CLUSTER = conf.CLUSTER[ENV]
}
}
}
}
}
I added the last 2 stages just to show what I currently have working, but it's a bit ugly as a solution:
The job has to be built without parameters, so I don't have an easy way to track the values I used for each run.
I can't just build it with parameters and leave; I have to wait for the agent to start the job and reach the stage before it finally asks for input.
The choices are hardcoded.
The issue I'm currently facing is that config.yaml doesn't exist in the 'Parameters' stage since (as I understand) the repository hasn't been cloned yet. I also tried using
def yamlFile = readTrusted("config.yaml")
within the groovy code but it didn't work either.
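As far as I understand, readTrusted is a pipeline step meant to be called from the Jenkinsfile itself (it reads the file from the branch's SCM without a full checkout), and it isn't available inside the Active Choices Groovy script, which is probably why that attempt failed. For reference, a sketch of how I would normally use it at the top level of the Jenkinsfile:
// readTrusted fetches config.yaml from the branch's SCM without a full checkout;
// it only works as a pipeline step, not inside the parameter's sandboxed script
def yamlText = readTrusted 'config.yaml'
def conf = readYaml text: yamlText
echo "Environments: ${conf.CLUSTER.keySet()}"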
I think one solution could be to fetch the file with cURL, but I would need Git credentials and I'm not sure I'll have them at that stage.
Do you have any other ideas on how I could handle this situation?
I am trying to run my integration tests with Testcontainers on Jenkins, in a Kubernetes Docker-in-Docker (DinD) container.
Testcontainers version: 1.15.3
However, it always fails to get Container.getMappedPort(X) inside the DinD container.
It works absolutely fine on my local setup and manages to get the mapped port.
Has anyone encountered this issue before, or does anyone have a solution?
My Jenkinsfile:
#!groovy
def label = "debug-${UUID.randomUUID().toString()}"
podTemplate(label: label, slaveConnectTimeout: '10', containers: [
containerTemplate(
name: 'docker-in-docker',
image: 'cfzen/dind:java11',
privileged: true,
workingDir: '/home/jenkins/agent',
ttyEnabled: true,
command: 'cat',
envVars: [
envVar(key: 'TESTCONTAINERS_HOST_OVERRIDE', value: 'tcp://localhost:2375'),
envVar(key: 'TESTCONTAINERS_RYUK_DISABLED', value: 'true'),
]
),
containerTemplate(
name: 'helm-kubectl',
image: 'dtzar/helm-kubectl',
workingDir: '/home/jenkins/agent/',
ttyEnabled: true,
command: 'cat'
)
],
volumes: [hostPathVolume(mountPath: '/var/run/docker.sock', hostPath: '/var/run/docker.sock'),],
annotations: [
podAnnotation(key: 'iam.amazonaws.com/role',
value: 'arn:aws:iam::xxxxxxxxxxx')
],
)
{
node(label) {
deleteDir()
stage('Checkout') {
checkout scm
def shortCommit = sh(returnStdout: true, script: "git log -n 1 --pretty=format:'%h'").trim()
currentBuild.description = "${shortCommit}"
}
stage('Run Integration tests') {
container('docker-in-docker') {
withCredentials([
usernamePassword(credentialsId: 'jenkins-artifactory-credentials',
passwordVariable: 'ARTIFACTORY_SERVER_PASSWORD',
usernameVariable: 'ARTIFACTORY_SERVER_USERNAME')])
{
echo 'Run Integration tests'
sh("mvn -B clean verify -q -s mvn/local-settings.xml")
}
}
}
}
}
TestRunner:
@RunWith(CucumberWithSerenity.class)
@CucumberOptions(features = "classpath:features")
public final class RunCucumberIT {
@BeforeClass
public static void init(){
Containers.POSTGRES.start();
System.out.println("Exposed port of db is"+Containers.POSTGRES.getExposedPorts());
System.out.println("Assigned port of db is"+Containers.POSTGRES.getFirstMappedPort());
Containers.WIREMOCK.start();
Containers.S3.start();
}
private RunCucumberIT() {
}
}
It fails at Containers.POSTGRES.getFirstMappedPort() with:
Requested port (X) is not mapped
Hi all!
I have a problem with the Publish Over FTP plugin in pipeline code. I set APP_NAME in "environment {APP_NAME='123'}" at the top of the pipeline code, but the variable APP_NAME is not known by ftpPublisher, while variables such as BUILD_NUMBER and JOB_NAME are.
Can anyone help me? Thank you very much!
My Jenkins version is 2.164.2 and Publish Over FTP is 1.15.
pipeline {
environment {
APP_NAME='123'
}
......
stages {
stage('1. git pull') {
steps {
git(
branch: 'release',
credentialsId: '*****',
url : '*********',
changelog: true
)
sh "ls -lat"
}
}
stage('2. build') {
steps {
sh 'cnpm install'
sh 'bower install --allow-root'
sh 'gulp goluk:pro'
sh 'mkdir -p $APP_NAME target'
sh 'cp -rf dist/* $APP_NAME/'
sh 'tar jcvf $APP_NAME.tar.bz2 $APP_NAME/'
sh 'ls -lh'
sh 'mv $APP_NAME.tar.bz2 target/$APP_NAME.tar.bz2'
sh 'rm -rf $APP_NAME'
}
}
stage('3. send to ftp') {
steps {
sh 'printenv'
ftpPublisher(
masterNodeName: 'master' ,
paramPublish: [parameterName: ''],
alwaysPublishFromMaster: false,
continueOnError: false,
failOnError: false,
publishers: [
[ configName: 'ftpServer_250',
transfers: [
[ asciiMode: false,
cleanRemote: false,
excludes: '',
flatten: false,
makeEmptyDirs: true,
noDefaultExcludes: false,
patternSeparator: '[, ]+',
remoteDirectory: '${APP_NAME}/$BUILD_NUMBER($BUILD_ID)',
remoteDirectorySDF: false,
removePrefix: '',
sourceFiles: 'target/*.tar.bz2'
]
],
usePromotionTimestamp: false,
useWorkspaceInPromotion: false,
verbose: true
]
]
)
}
}
}
}
Change it to remoteDirectory: "${APP_NAME}/$BUILD_NUMBER($BUILD_ID)". With double quotes, Groovy interpolates APP_NAME before the value is handed to the plugin; with single quotes it is passed through literally, and the plugin does not know your pipeline environment variable.
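A minimal sketch of the corrected transfer entry (other transfer options trimmed; everything else in the ftpPublisher call stays as in the question):
publishers: [
    [ configName: 'ftpServer_250',
      transfers: [
          [ sourceFiles: 'target/*.tar.bz2',
            // double-quoted GString: Groovy interpolates APP_NAME (and the build variables)
            // before the value reaches the plugin
            remoteDirectory: "${APP_NAME}/$BUILD_NUMBER($BUILD_ID)"
          ]
      ]
    ]
]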
I'm trying to set up a Jenkins pipeline with a Jenkinsfile and docker-compose.
My docker-compose runs fine, but the next steps (the test stage in the Jenkinsfile) don't run.
How can I tell Jenkins "OK, the Docker container is fine, you can do the next thing" while preventing the container from stopping (this is why I put rails s at the end of the command)?
Here is the docker-compose.yml:
version: '3'
services:
db-test:
image: postgres
environment:
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=secret
- POSTGRES_DB=server_dev
volumes:
- ./tmp/db:/var/lib/postgresql/data
ports:
- "${POSTGRES_PORT}:5432"
web-test:
image: starefossen/ruby-node
command: bash -c "cd /app && bundle install && rake db:migrate && rails s"
volumes:
- /home/xero/jenkins/jenkins_home/workspace/project-open-source:/app # Workspace
- /home/cache/bundle:/usr/local/bundle # Cache gemfiles
- /home/cache/node_modules:/app/node_modules # Cache yarn files
- /home/xero/.ssh:/root/.ssh # SSH keys (for git)
ports:
- "3000:3000"
depends_on:
- db-test
And the Jenkinsfile:
pipeline {
agent any
options {
timeout(time: 1, unit: 'DAYS')
disableConcurrentBuilds()
}
stages {
stage("Init") {
agent any
steps { initialize() }
}
stage("Test") {
agent any
steps { test() }
}
}
}
def initialize() {
sh 'docker-compose -f docker-compose-jenkins.yml up --build --abort-on-container-exit'
}
def test() {
sh 'docker exec -ti web-test sh -c "cd app/ && bundle exec rspec -f documentation"'
}
Here is my solution: I used retry and sleep to wait for the Docker containers to start.
#!groovy
def message = "";
def author = "";
def getLastCommitMessage = {
message = sh(returnStdout: true, script: 'git log -1 --pretty=%B').trim()
}
def getGitAuthor = {
def commit = sh(returnStdout: true, script: 'git rev-parse HEAD')
author = sh(returnStdout: true, script: "git --no-pager show -s --format='%an' ${commit}").trim()
}
pipeline {
agent any
options {
timeout(time: 1, unit: 'DAYS')
disableConcurrentBuilds()
}
stages {
stage("Init RoR and DB") {
agent any
steps { initialize() }
}
stage("Tests") {
agent any
steps { test() }
post {
success {
publishHTML([allowMissing: false, alwaysLinkToLastBuild: false, keepAll: false, reportDir: '/var/jenkins_home/workspace/VPX-open-source/coverage/', reportFiles: 'index.html', reportName: 'RspecCoverage', reportTitles: ''])
publishHTML([allowMissing: false, alwaysLinkToLastBuild: false, keepAll: false, reportDir: '/var/jenkins_home/workspace/VPX-open-source/coverage/lcov-report', reportFiles: 'index.html', reportName: 'JestCoverage', reportTitles: ''])
publishHTML([allowMissing: false, alwaysLinkToLastBuild: false, keepAll: false, reportDir: '/var/jenkins_home/workspace/VPX-open-source/reports/', reportFiles: 'eslint.html', reportName: 'Eslint', reportTitles: ''])
publishHTML([allowMissing: false, alwaysLinkToLastBuild: false, keepAll: false, reportDir: '/var/jenkins_home/workspace/VPX-open-source/reports/', reportFiles: 'rubocop.html', reportName: 'Rubocop', reportTitles: ''])
publishHTML([allowMissing: false, alwaysLinkToLastBuild: false, keepAll: false, reportDir: '/var/jenkins_home/workspace/VPX-open-source/reports/rubycritic/', reportFiles: 'overview.html', reportName: 'Rubycritic', reportTitles: ''])
}
}
}
}
post {
failure {
script {
getLastCommitMessage()
getGitAuthor()
}
rocketSend channel: 'myproject-ci', emoji: ':x:', message: "Build failed - Commit : '${message}' by ${author}", rawMessage: true
}
}
}
def initialize() {
sh 'docker-compose -f docker-compose-jenkins.yml up --build --detach'
}
def test() {
try {
retry(3){
sleep 25
// {{.State.Health.Status}} (without the json helper) avoids surrounding quotes in the output
HEALTH = sh (
script: 'docker inspect -f \'{{.State.Health.Status}}\' vpx-web-test',
returnStdout: true
).trim()
echo "${HEALTH}"
// error() throws, which is what makes retry() run the block again (up to 3 times)
if(HEALTH != "healthy"){
error("vpx-web-test is not healthy yet: ${HEALTH}")
}
}
sh 'docker exec vpx-web-test sh -c "cd app/ && RAILS_ENV=test bundle exec rspec -f documentation"'
sh 'docker exec vpx-web-test sh -c "cd app/ && yarn test"'
sh 'docker exec vpx-web-test sh -c "cd app/ && yarn test --coverage > reports/jest-coverage.html"'
sh 'docker exec vpx-web-test sh -c "cd app/ && yarn lint --f html reports/eslint.html ; exit 0"'
sh 'docker exec vpx-web-test sh -c "cd app/ && rubycritic app/ --no-browser -p reports/rubycritic"'
sh 'docker exec vpx-web-test sh -c "cd app/ && rubocop app/ --format html -o reports/rubocop.html --fail-level error"'
}
catch (exc) {
error("Build failed")
}
finally{
sh 'docker-compose -f docker-compose-jenkins.yml down'
}
}
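If the image or compose service defines a healthcheck, an alternative to the fixed sleep/retry loop is to poll the health status with waitUntil under a timeout. A minimal sketch (the container name is the one from the answer above; waitForWeb is a hypothetical helper):
def waitForWeb() {
    // keep polling until the container reports healthy, but give up after 5 minutes
    timeout(time: 5, unit: 'MINUTES') {
        waitUntil {
            def health = sh(
                script: "docker inspect -f '{{.State.Health.Status}}' vpx-web-test",
                returnStdout: true
            ).trim()
            echo "vpx-web-test health: ${health}"
            return health == 'healthy'
        }
    }
}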
How do I set "Time in minutes to retain slave when idle" and "Max number of instances" when configuring podTemplate in a pipeline?
I can see these two config options under System -> Cloud -> Kubernetes, but I use a pipeline and couldn't figure out how to set them there.
My pipeline currently looks like this:
podTemplate(label: 'docker-go',
containers: [
containerTemplate(
name: 'jnlp',
image: 'docker.mydomain.com/library/jnlp-slave:2.62',
command: '',
args: '${computer.jnlpmac} ${computer.name}',
),
containerTemplate(name: 'docker', image: 'docker.mydomain.com/library/docker:1.12.6', ttyEnabled: true, command: 'cat'),
containerTemplate(name: 'golang', image: 'docker.mydomain.com/library/golang:1.8.3', ttyEnabled: true, command: '')
],
volumes: [hostPathVolume(hostPath: '/var/run/docker.sock', mountPath: '/var/run/docker.sock')]
) {
def image_tag = "docker.mydomain.com/deploy-demo/demo-go:v0.1"
def workdir = "/go/src/demo-go"
node('docker-go') {
stage('setup') {
}
stage('clone') {
}
stage('compile') {
}
stage('build and push image') {
}
}
}
OK, I figured it out.
Add these two options to podTemplate: idleMinutes (the "time in minutes to retain slave when idle") and instanceCap (the "max number of instances"):
idleMinutes: 10
instanceCap: 10
podTemplate(label: 'docker-go',
containers: [
containerTemplate(
name: 'jnlp',
image: 'docker.mydomain.com/library/jnlp-slave:2.62',
command: '',
args: '${computer.jnlpmac} ${computer.name}',
),
containerTemplate(name: 'docker', image: 'docker.mydomain.com/library/docker:1.12.6', ttyEnabled: true, command: 'cat'),
containerTemplate(name: 'golang', image: 'docker.mydomain.com/library/golang:1.8.3', ttyEnabled: true, command: '')
],
volumes: [hostPathVolume(hostPath: '/var/run/docker.sock', mountPath: '/var/run/docker.sock')],
idleMinutes: 10,
instanceCap: 10
) {
def image_tag = "docker.mydomain.com/deploy-demo/demo-go:v0.1"
def workdir = "/go/src/demo-go"
node('docker-go') {
stage('setup') {
}
stage('clone') {
}
stage('compile') {
}
stage('build and push image') {
}
}
}