I have now searched the Internet for two whole days without any progress. The answer might be just a click away, but I cannot find it. I'm blinded. I work on Windows 10, by the way...
The problem is that I want to create a simple Dockerfile that builds a Docker image of a Jenkins master with everything I want it to contain. For instance:
general Jenkins configuration
users
plugins
jobs
slaves
etc...
Now I have at least some parts working... I'm able to get rid of the initial "hello, welcome to Jenkins" wizard, define my first user, and install the plugins that I wrote down in a text file.
BUT when it comes to the freakin' Jenkins jobs, it seems to get a bit more complicated.
Since my Dockerfile takes a "base image" from jenkins/jenkins:lts-alpine, it creates a volume for /var/jenkins_home/, which seems to be where the jobs are stored. And it seems to be problematic to copy files to this folder. I tried adding a COPY instruction to my Dockerfile to copy the folders and files that are created when I manually create a job in Jenkins, but Jenkins does not read them for some reason, even after restarting/reloading from disk. The thing is, it actually works when copying the jobs after "installation", like docker cp jobs-on-my-machine container:/var/jenkins_home/jobs, but I don't want a lot of extra stuff outside my Dockerfile. I want to keep it as simple as possible. With this solution I will most likely not even be able to commit the change, since docker diff shows this output:
C:\Dev>docker diff 3
C /tmp
A /tmp/hsperfdata_jenkins
A /tmp/hsperfdata_jenkins/6
A /tmp/jetty-0.0.0.0-8080-war-_-any-6459623464825334329.dir
A /tmp/jna--1712433994
A /tmp/winstone4948624124562796293.jar
You see... nothing has changed in the /var/jenkins_home/jobs/ directory... Nothing trackable at all :(
Take a look at my Dockerfile content below:
# Pull the latest Jenkins docker image from Docker Hub.
# Currently Alpine is used because the Internet says it's safer :)
FROM jenkins/jenkins:lts-alpine
# Disable Jenkins setup wizard normally showing up during initial startup
ENV JAVA_OPTS="-Djenkins.install.runSetupWizard=false"
# Copy the groovy script creating the first user into the
# run-on-startup-directory into the docker image
COPY security.groovy /usr/share/jenkins/ref/init.groovy.d/security.groovy
# Copy the plugins text file into the docker image
COPY plugins.txt /usr/share/jenkins/ref/plugins.txt
# Run the Jenkins default install-plugins script to install
# plugins defined in the plugins text file
RUN /usr/local/bin/install-plugins.sh < /usr/share/jenkins/ref/plugins.txt
# Add the predefined Jenkins jobs _inside_ folder 'jenkins-jobs' into Jenkins
COPY jenkins-jobs /var/jenkins_home/jobs/
security.groovy content:
#!groovy
import jenkins.model.*
import hudson.security.*
import jenkins.security.s2m.AdminWhitelistRule
def instance = Jenkins.getInstance()
def hudsonRealm = new HudsonPrivateSecurityRealm(false)
hudsonRealm.createAccount("admin", "admin")
instance.setSecurityRealm(hudsonRealm)
def strategy = new FullControlOnceLoggedInAuthorizationStrategy()
instance.setAuthorizationStrategy(strategy)
instance.save()
Jenkins.instance.getInjector().getInstance(AdminWhitelistRule.class).setMasterKillSwitch(false)
plugins.txt:
cloudbees-folder
bouncycastle-api
structs
script-security
workflow-step-api
scm-api
workflow-api
junit
antisamy-markup-formatter
workflow-support
workflow-job
token-macro
build-timeout
credentials
ssh-credentials
plain-credentials
credentials-binding
timestamper
durable-task
workflow-durable-task-step
matrix-project
resource-disposer
ws-cleanup
ant
gradle
pipeline-milestone-step
jquery-detached
jackson2-api
ace-editor
workflow-scm-step
workflow-cps
pipeline-input-step
pipeline-stage-step
pipeline-graph-analysis
pipeline-rest-api
handlebars
momentjs
pipeline-stage-view
pipeline-build-step
pipeline-model-api
pipeline-model-extensions
apache-httpcomponents-client-4-api
jsch
git-client
git-server
workflow-cps-global-lib
display-url-api
mailer
branch-api
workflow-multibranch
authentication-tokens
docker-commons
docker-workflow
pipeline-stage-tags-metadata
pipeline-model-declarative-agent
workflow-basic-steps
pipeline-model-definition
workflow-aggregator
github-api
git
github
github-branch-source
pipeline-github-lib
mapdb-api
subversion
ssh-slaves
matrix-auth
pam-auth
ldap
email-ext
mercurial
In case someone else finds this question:
From the GitHub - jenkinsci/docker:
In such a derived image, you can customize your jenkins instance with hook scripts or additional plugins. For this purpose, use /usr/share/jenkins/ref as a place to define the default JENKINS_HOME content you wish the target installation to look like:
When jenkins container starts, it will check JENKINS_HOME has this reference content, and copy them there if required. It will not override such files, so if you upgraded some plugins from UI they won't be reverted on next start.
In case you do want to override, append '.override' to the name of the reference file. E.g. a file named /usr/share/jenkins/ref/config.xml.override will overwrite an existing config.xml file in JENKINS_HOME.
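For example, to force a particular top-level config.xml into JENKINS_HOME on every start, a single Dockerfile line using that override mechanism could look like this (assuming you have such a config.xml next to your Dockerfile):
COPY config.xml /usr/share/jenkins/ref/config.xml.override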
What I've done myself, and the thing that is working for me, is this: copy the jobs directory found on your Jenkins instance to where you are able to run docker build. Place the jobs directory from the Jenkins instance into another directory, e.g. jobs_jenkins, meaning you will have $PWD/jobs_jenkins/jobs
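As a rough sketch of that preparation step (the source path is just a placeholder for wherever your existing JENKINS_HOME lives):
# Run next to your Dockerfile
mkdir -p jobs_jenkins
cp -r /path/to/existing/jenkins_home/jobs jobs_jenkins/
# Result: $PWD/jobs_jenkins/jobs/<job-name>/config.xml, etc.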
Add the following line in your Dockerfile:
...
FROM jenkins/jenkins:lts-alpine
COPY jobs_jenkins /usr/share/jenkins/ref/
...
From my understanding, the Jenkins image itself runs a "startup script" that copies /usr/share/jenkins/ref into JENKINS_HOME, and stuff copied directly to /var/jenkins_home at build time will not be present...
For more information, read the README.md (from GitHub - jenkinsci/docker).
Related
I have a Dockerfile to create a Jenkins image where I install the locale:1.4 plugin via the method described elsewhere on SO.
The issue is that Jenkins has a tendency to display the UI using the locale set in the browser (da_DK).
I use the Locale plugin to set it to en_GB and enable "Ignore browser preference and force this language to all users" (it is easier to search for things on SO, etc., when the terms are in English).
My question is: how do I set this configuration in the Dockerfile?
My Dockerfile
# https://github.com/jenkinsci/docker
# Set executors
COPY executors.groovy /usr/share/jenkins/ref/init.groovy.d/executors.groovy
# Add plugins
COPY plugins.txt /usr/share/jenkins/plugins.txt
RUN /usr/local/bin/install-plugins.sh < /usr/share/jenkins/plugins.txt
# drop back to the regular jenkins user - good practice
USER jenkins
EXPOSE 8095
Digging around $JENKINS_HOME on another Jenkins installation, I found locale.xml.
Adding the following lines to the Dockerfile after the plugin installation solved it:
# add locale config
COPY locale.xml /var/jenkins_home/locale.xml
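For reference, the locale.xml I copy is along these lines (this is the format the Locale plugin normally writes; check an existing installation and adjust the values to your own setup):
<?xml version='1.0' encoding='UTF-8'?>
<hudson.plugins.locale.PluginImpl>
  <systemLocale>en_GB</systemLocale>
  <ignoreAcceptLanguage>true</ignoreAcceptLanguage>
</hudson.plugins.locale.PluginImpl>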
We are thinking of moving our CI from Jenkins to GitLab. We have several projects that share the same build workflow. Right now we use a shared library where the pipelines are defined, and the Jenkinsfile inside each project only calls a method from the shared library that defines the actual pipeline. So changes only have to be made at a single point, affecting several projects.
I am wondering if the same is possible with GitLab CI? As far as I have found out, it is not possible to define the .gitlab-ci.yml outside the repository. Is there another way to define a pipeline and share this config with several projects to simplify maintenance?
GitLab 11.7 introduces new include methods, such as include:file:
https://docs.gitlab.com/ee/ci/yaml/#includefile
include:
- project: 'my-group/my-project'
ref: master
file: '/templates/.gitlab-ci-template.yml'
This will allow you to create a new project on the same GitLab instance which contains a shared .gitlab-ci.yml.
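The shared file can then define hidden job templates that each project's own .gitlab-ci.yml extends after the include above; a minimal sketch (the template name and script are made up for illustration):
# /templates/.gitlab-ci-template.yml in my-group/my-project
.build-template:
  stage: build
  script:
    - ./build.sh

# in the consuming project's .gitlab-ci.yml, after the include shown above
build:
  extends: .build-template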
First let me start by saying: thank you for asking this question! It triggered me to search for a solution (again) after often wondering if this was even possible myself. We also have 20-30 projects that are nearly identical and have .gitlab-ci.yml files of about 400-500 lines that each have to be changed if one thing changes.
So I found a working solution:
Inspired by the Auto DevOps .gitlab-ci.yml template GitLab itself created, where they define all shared functions in one place and load them in every before_script, I came up with the following setup.
Multiple project repos (project-1, project-2) requiring a shared set of CI jobs / functions
A functions script containing all shared functions in a separate repo
Files
So, using a shared CI jobs script:
#!/bin/bash
function list_files {
ls -lah
}
function current_job_info {
echo "Running job $CI_JOB_ID on runner $CI_RUNNER_ID ($CI_RUNNER_DESCRIPTION) for pipeline $CI_PIPELINE_ID"
}
A common and generic .gitlab-ci.yml:
image: ubuntu:latest
before_script:
# Install curl
- apt-get update -qqq && apt-get install -qqqy curl
# Get shared functions script
- curl -s -o functions.sh https://gitlab.com/giix/demo-shared-ci-functions/raw/master/functions.sh
# Set permissions
- chmod +x functions.sh
# Run script and load functions
- . ./functions.sh
job1:
script:
- current_job_info
- list_files
You could copy-paste your file from project-1 to project-2 and it would be using the same shared GitLab CI functions.
These examples are pretty verbose for demonstration purposes; optimize them any way you like.
Lessons learned
So, after applying the construction above on a large scale (40+ projects), I want to share some lessons learned so you don't have to find out the hard way:
Version (tag/release) your shared CI functions script; changing one thing can now make all pipelines fail (see the sketch after this list).
Using different Docker images can be an issue because bash is required to load the functions (e.g. I use some Alpine-based images for CLI-tool-based jobs that only have sh by default).
Use project-based CI/CD secret variables to personalize build jobs for projects, like environment URLs, etc.
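For the versioning point above, the only change is to pin the curl download to a tag instead of master (v1.0.0 is just an example tag):
# Get a pinned version of the shared functions script
- curl -s -o functions.sh https://gitlab.com/giix/demo-shared-ci-functions/raw/v1.0.0/functions.sh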
Since GitLab 12.6, it's possible to define an external .gitlab-ci.yml file.
To customize the path:
Go to the project's Settings > CI / CD.
Expand the General pipelines section.
Provide a value in the Custom CI configuration path field.
Click Save changes.
...
If the CI configuration will be hosted on an external site, the URL link must end with .yml:
http://example.com/generate/ci/config.yml
If the CI configuration will be hosted in a different project within
GitLab, the path must be relative to the root directory in the other
project, with the group and project name added to the end:
.gitlab-ci.yml@mygroup/another-project
my/path/.my-custom-file.yml@mygroup/another-project
Use the include feature (available from GitLab 10.6):
https://docs.gitlab.com/ee/ci/yaml/#include
So, I always wanted to post what I came up with:
Right now we use a mixed approach of @stefan-van-gastel's idea of a shared CI library and the relatively new include feature of GitLab 11.7. We are very satisfied with this approach, as we can now manage our build pipeline for 40+ repositories in a single repository.
I have created a repository called ci_shared_library containing:
a shell script for every single build job, containing the execution logic for that step.
a pipeline.yml file containing the whole pipeline config. In the before_script we clone the ci_shared_library to /tmp/shared to be able to execute the scripts.
stages:
- test
- build
- deploy
- validate
services:
- docker:dind
before_script:
# Clear existing shared library
- rm -rf /tmp/shared
# Get shared library
- git clone https://oauth2:${GITLAB_TOKEN}@${SHARED_LIBRARY} /tmp/shared
- cd /tmp/shared && git checkout master && cd $CI_PROJECT_DIR
# Set permissions
- chmod -R +x /tmp/shared
# open access to registry
- docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY
test:
stage: test
script:
- /tmp/shared/test.sh
build:
stage: build
script:
- /tmp/shared/build.sh
artifacts:
paths:
- $CI_PROJECT_DIR/target/RPMS/x86_64/*.rpm
expire_in: 3h
only:
- develop
- /release/.*/
deploy:
stage: deploy
script:
- /tmp/shared/deploy.sh
artifacts:
paths:
- $CI_PROJECT_DIR/tmp/*
expire_in: 12h
only:
- develop
- /release/.*/
validate:
stage: validate
script:
- /tmp/shared/validate.sh
only:
- develop
- /release\/.*/
Every project that wants to use this pipeline config has to have a .gitlab-ci.yml. In this file, the only thing to do is import the shared pipeline.yml file from the ci_shared_library repo.
# .gitlab-ci.yml
include:
- project: 'ci_shared_library'
ref: master
file: 'pipeline.yml'
With this approach, really everything related to the pipeline lives in one single repository and is reusable. We have the whole pipeline template in one file, but I think it would even be possible to split it up so that every single job lives in its own YAML file. That way it would be more flexible, and one could create default jobs that can be combined differently for projects that have similar jobs but don't all need every job...
With GitLab 13.5 (October 2020), the include feature is even more useful:
Validate expanded GitLab CI/CD configuration with the API
Writing and debugging complex pipelines is not a trivial task. You can use the include keyword to help reduce the length of your pipeline configuration files.
However, if you wanted to validate your entire pipeline via the API previously, you had to validate each included configuration file separately which was complicated and time consuming.
Now you have the ability to validate a fully-expanded version of your pipeline configuration through the API, with all the include configuration included.
Debugging large configurations is now easier and more efficient.
See Documentation and Issue.
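A hedged example of what that API call can look like (the project ID and token are placeholders; this uses the project-level CI lint endpoint):
curl --header "PRIVATE-TOKEN: <your_access_token>" "https://gitlab.example.com/api/v4/projects/<project_id>/ci/lint"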
And:
See GitLab 13.6 (November 2020)
Include multiple CI/CD configuration files as a list
Previously, when adding multiple files to your CI/CD configuration using the include:file syntax, you had to specify the project and ref for each file. In this release, you now have the ability to specify the project, ref, and provide a list of files all at once. This prevents you from having to repeat yourself and makes your pipeline configuration less verbose.
See Documentation and Issue.
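With that syntax, the earlier include example can be shortened to something like this (file paths are illustrative):
include:
  - project: 'my-group/my-project'
    ref: master
    file:
      - '/templates/build.yml'
      - '/templates/deploy.yml'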
You could look into the concept of Dynamic Child pipeline.
It has evolved with GitLab 13.2 (July 2020):
Dynamically generate Child Pipeline configurations with Jsonnet
We released Dynamic Child Pipelines back in GitLab 12.9, which allow you to generate an entire .gitlab-ci.yml file at runtime.
This is a great solution for monorepos, for example, when you want runtime behavior to be even more dynamic.
We’ve now made it even easier to create CI/CD YAML at runtime by including a project template that demonstrates how to use Jsonnet to generate the YAML.
Jsonnet is a data templating language that provides functions, variables, loops, and conditionals that allow for fully parameterized YAML configuration.
See documentation and issue.
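A minimal sketch of such a dynamic child pipeline (the generator script name is a placeholder; with Jsonnet you would render generated-pipeline.yml from a .jsonnet template instead):
stages:
  - build
  - deploy

generate-config:
  stage: build
  script:
    - ./generate-ci.sh > generated-pipeline.yml
  artifacts:
    paths:
      - generated-pipeline.yml

run-child-pipeline:
  stage: deploy
  trigger:
    include:
      - artifact: generated-pipeline.yml
        job: generate-config
    strategy: depend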
I have a structure of artifacts in another build:
/
/bundle/docs
/bundle/bin
/bundle/bin/scripts
I want to copy all files and subdirectories from /bundle/bin into the current job's workspace subfolder 'product1'. I expect to see the contents of /bundle/bin in %WORKSPACE%/product1.
I've configured it like this:
Artifacts to copy: bundle/bin/**
But it creates %WORKSPACE%/product1/bundle/bin instead.
Is it possible?
Seems like that's just how the plugin works. Your options are:
Keep the same configuration and manipulate the directories afterwards using a shell mv (Linux) or cmd move (Windows) command. This is the workaround used in my environment (see the sketch after this list).
Check the "Flatten directories" option (but this will mix together the contents of /bundle/bin and /bundle/bin/scripts)
Improve the plugin and contribute your code to the community :-)
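For the first option, a sketch of the follow-up shell step on Linux (assuming the plugin has copied things into product1/bundle/bin as described above):
# Flatten the copied artifacts into product1/ and drop the extra directory levels
mv product1/bundle/bin/* product1/
rm -rf product1/bundle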
We are using the Artifact Deployer plugin in a Jenkins freestyle job, and recently Jenkins has been displaying warnings about the plugin no longer being safe to use.
According to its wiki site, this plugin is no longer being distributed.
Does anyone know of any alternative plugins, or ways in a freestyle job to copy content from one location to another (on the same node)?
Thanks
To copy all the contents of one directory into another directory on a Linux system, use the following command:
cp -a /path/to/source_dir/. /path/to/dest_dir/
You can add an Execute shell step in your job configuration's Build section and put the above command into it.
I have a Jenkins server running, and for a job I need to download a file that is in the jobs/builds/buildname folder.
How do I download that file from the Jenkins job?
If you use the workspace as suggested by the previous post, you can access it within a Pipeline:
sh "wget http://<servername:port>/job/<jobname>/ws/index.txt"
Or inside a script:
wget http://<servername:port>/job/<jobname>/ws/index.txt
Where index.txt is the file you want to download.
I rock a Unix-based development machine and a Unix-based Jenkins machine up in the cloud. This means I can use the scp command to download the remote file over an SSH connection. This is the anatomy of my scp commands:
scp -i <path/to/ssh.pem/file> <user>@<jenkins.remote.url>:<path/to/remote/file> <local/path/where/download/goes>
This works for directories too; for instance, I use it to download backups generated by the ThinBackup plugin.
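For a directory, the same anatomy just needs -r; for instance (the remote backup path is a placeholder for wherever ThinBackup writes its backups on your server):
scp -r -i <path/to/ssh.pem/file> <user>@<jenkins.remote.url>:<path/to/thinbackup/backups> <local/path/where/download/goes>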
You have already been given the answer for getting the file from the workspace:
http://<servername:port>/job/<jobname>/ws/filename.ext
Obviously, replace the stuff in <..> with values relevant to your setup, and make sure the anonymous user has access to read from the workspace, else you may have to log in.
The only other files you could access are those that are archived from previous job runs.
http://<servername:port>/job/<jobname>/<buildnumber>/artifact/filename.ext
Where <buildnumber> is the build number you see in the job build history, or one of the permalinks provided by Eldad (such as lastStableBuild). But this will only give access to archived artifacts.
You cannot arbitrarily access files from Jenkins's filesystem through the web interface... it wouldn't be very secure if it did let you.
The Jenkins job's build folder is meant for logging and plugin reports. You should not need to access it directly.
If you must, you can access it relative to the workspace: $WORKSPACE/../builds/$BUILD_ID/
You can also replace $BUILD_ID with one of the links Jenkins creates (see the example after this list):
lastFailedBuild
lastStableBuild
lastSuccessfulBuild
lastUnstableBuild
lastUnsuccessfulBuild
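For example, in an Execute shell build step (this relies on the permalink symlinks Jenkins keeps inside the builds directory; the copied file name is arbitrary):
# Copy the console log of the last successful build into the current workspace
cp "$WORKSPACE/../builds/lastSuccessfulBuild/log" "$WORKSPACE/last-successful.log"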
I hope this helps.
As others have pointed out, this path should work; I'd like to highlight that "ws" is a directory in Jenkins:
http://<servername:port>/job/<your job>/ws/<your file>
Install the lynx package (a command-line browser):
$ apt-get install lynx
or
$ yum install lynx
Then use the command:
# lynx http://<servername:port>/job/<jobname>/ws/file
Lynx will ask you to allow cookies, and if authentication is required it will direct you to the login page, just like a browser.