I'm using drake to orchestrate a workflow in which a change to an external Shiny app (stored in project_dir/shiny/app.R) should trigger a Docker build.
shiny_plan <- drake_plan(
  docker_build = system(command = "docker build shiny/. -t docker.com/my-dash")
)
How do I detect a change in app.R to trigger the docker_build target, given that drake does not itself create app.R?
You can put a file_in() wherever you want.
shiny_plan <- drake_plan(
  docker_build = {
    file_in("shiny/app.R")
    system(command = "docker build shiny/. -t docker.com/my-dash")
  }
)
Alternatively, you could make the Docker build depend on the ui and server objects. That way, the build will not rerun unnecessarily if all you change is comments or whitespace in your app code.
shiny_plan <- drake_plan(
  docker_build = {
    ui
    server
    system(command = "docker build shiny/. -t docker.com/my-dash")
  }
)
I'm trying to use Docker with a Jenkins scripted pipeline and have run into several problems.
If I call docker in an sh step, it fails with an error:
command not found: docker
I tried to fix it by changing the install settings in the Global Tool Configuration, but had no success with it.
I'm trying to use the Docker plugin now.
def run_my_stage(String name, String cmd, String commit) {
    return {
        stage(name) {
            node("builder") {
                docker.withRegistry("192.168.1.33:5000") {
                    def myimg = docker.image("my-img")
                    sh "docker pull ${myimg.imageName()}"
                    sh "docker run ${cmd}"
                }
            }
        }
    }
}
where cmd is --user=\$UID --rm -t -v ./build/:/home/user/build 192.168.1.33:5000/my-img
I use this code for parallel stages (the list of stages is generated dynamically), and I get this error:
java.net.MalformedURLException: no protocol: 192.168.1.33:5000
What is the proper usage of this plugin?
I found a lot of examples with withRun and other docker methods, but I don't need to run any commands inside the image; the command is in the Dockerfile (so it is built into my container).
The error itself has the answer :).
java.net.MalformedURLException: no protocol: 192.168.1.33:5000
You are missing the protocol in the custom registry URL. See https://jenkins.io/doc/book/pipeline/docker/#custom-registry
def run_my_stage(String name, String cmd, String commit) {
    return {
        stage(name) {
            node("builder") {
                docker.withRegistry("https://192.168.1.33:5000") {
                    def myimg = docker.image("my-img")
                    sh "docker pull ${myimg.imageName()}"
                    sh "docker run ${cmd}"
                }
            }
        }
    }
}
You are missing the protocol; the registry must be https://192.168.1.33:5000.
I also had a problem with the relative path, but a simple fix of prefixing the relative build path with pwd fixed it (see the sketch below). Thanks @yzT
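In pipeline code, that fix might look like the following (just a sketch: the image and mount target are taken from the question, and the exact quoting is an assumption):
def workdir = pwd()  // absolute path of the current workspace on the agent
sh "docker run --user=\$UID --rm -t -v ${workdir}/build/:/home/user/build 192.168.1.33:5000/my-img"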
I'm trying to list the folders containing a certain file on Jenkins and use this array later.
I read about findFiles but I can't find a way to use it in this situation.
The goal is that I need to cd into those folders in a loop and perform some actions.
I have only one Jenkins instance, where everything is running.
Use case:
I have a workspace which contains packages. I need to run some commands in some folders, and I can't do it from the root of my workspace. They may be in subfolders or sub-subfolders. The way I can identify a package is that it contains a package.xml (ROS). Also, I don't have any command that lists their paths.
If nothing else works, you can try running a normal Linux command like:
folders = sh(
script: "locate myfile",
returnStdout: true
)
Then split this to form an array and use the values like:
folders.split("\n")[1]
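Putting it together with the loop from the question, here is a sketch that uses find instead of locate (find walks the workspace directly and needs no pre-built index; the per-package command is a placeholder):
// collect the directory of every package.xml under the workspace, one per line
def out = sh(
    script: "find . -name package.xml -printf '%h\\n'",
    returnStdout: true
).trim()
out.split('\n').each { d ->
    dir(d) {
        sh 'ls'  // placeholder for the real per-package commands
    }
}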
// findFiles paths use forward slashes, so strip the trailing file name to get each package directory
def packageDirs = findFiles(glob: '**/package.xml')
    .findAll { f -> !f.directory }
    .collect { f -> f.path.contains('/') ? f.path.take(f.path.lastIndexOf('/')) : '.' }

packageDirs.each { d ->
    dir(d) {
        // Process each package here
    }
}
I'm working on building Jenkins pipeline for building a project with Gradle.
Jenkins has several slaves. All the slaves are connected to a NAS.
Some of the build steps run Gradle inside Docker containers while others run directly on the slaves.
The goal is to use as much cache as possible, but I have also run into deadlock issues such as:
Could not create service of type FileHasher using GradleUserHomeScopeServices.createCachingFileHasher().
> Timeout waiting to lock file hash cache (/home/slave/.gradle/caches/4.2/fileHashes). It is currently in use by another Gradle instance.
Due to the Gradle issue mentioned in the comment above, I copy the Gradle cache into the container at startup and write any changes back to the host at the end of the build:
pipeline {
    agent {
        docker {
            image '…'
            // Mount the Gradle cache in the container
            args '-v /var/cache/gradle:/tmp/gradle-user-home:rw'
        }
    }
    environment {
        HOME = '/home/android'
        GRADLE_CACHE = '/tmp/gradle-user-home'
    }
    stages {
        stage('Prepare container') {
            steps {
                // Copy the Gradle cache from the host, so we can write to it
                sh "rsync -a --include /caches --include /wrapper --exclude '/*' ${GRADLE_CACHE}/ ${HOME}/.gradle || true"
            }
        }
        …
    }
    post {
        success {
            // Write updates to the Gradle cache back to the host
            sh "rsync -au ${HOME}/.gradle/caches ${HOME}/.gradle/wrapper ${GRADLE_CACHE}/ || true"
        }
    }
}
I would like to have a post-build hook or similar, so that I can get the same output as e.g. the IRC plugin, but hand it to a script.
I was able to get all the info except for the actual build status. That just doesn't work, whether as a "Post-build script", "Post-build task", "Parameterized Trigger", and so on.
It is possible with some very ugly workarounds, but I wanted to ask in case someone has a nicer option, short of writing my own plugin.
It works as mentioned with the Groovy Postbuild plugin, but without any extra quoting within the string that gets executed. So I had to put the actual functionality into a shell script that makes a call to curl, which in turn needs quoting for the POST parameters and so on.
def result = manager.build.result
def build_number = manager.build.number
def env = manager.build.getEnvironment(manager.listener)
def build_url = env['BUILD_URL']
def build_branch = env['SVN_BRANCH']
def short_branch = ( build_branch =~ /branches\//).replaceFirst("")
def host = env['NODE_NAME']
def svn_rev = env['SVN_REVISION']
def job_name = manager.build.project.getName()
"/usr/local/bin/skypeStagingNotify.sh Deployed ${short_branch} on ${host} - ${result} - ${build_url}".execute()
Use a Groovy script in a post-build step via the Groovy Postbuild plugin. You can then access Jenkins internals via the Jenkins Java API. The plugin provides the script with the variable manager, which can be used to access important parts of the API (see the Usage section in the plugin documentation).
For example, here's how you can execute a simple external Python script on Windows and output its result (as well as the build result) to the build console:
def command = """cmd /c python -c "for i in range(1,5): print i" """
manager.listener.logger.println command.execute().text
def result = manager.build.result
manager.listener.logger.println "And the result is: ${result}"
For this I really like the Conditional BuildStep plugin. It's very flexible, and you can choose which actions to take based on build failure or success. For instance, you can use a conditional build step to send a notification on build failure.
You can also use a conditional build step to set an environment variable or write to a log file that you use in subsequent "execute shell" steps. So for instance, you might create a build with three steps: one step to compile code and run tests, another to set a STATUS="failed" environment variable, and then a third step which sends an email like "The build finished with a status: ${STATUS}" (a rough pipeline analogue is sketched below).
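In pipeline syntax, a rough analogue of that three-step idea might look like this (a sketch only, not the Conditional BuildStep plugin itself; the script name and email address are placeholders):
pipeline {
    agent any
    stages {
        stage('Build and test') {
            steps {
                sh './compile-and-test.sh'  // placeholder for the real build step
            }
        }
    }
    post {
        always {
            // currentBuild.currentResult plays the role of the STATUS variable
            mail to: 'team@example.com',
                 subject: "The build finished with a status: ${currentBuild.currentResult}",
                 body: "${env.BUILD_URL}"
        }
    }
}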
Really easy solution, maybe not too elegant, but it works!
1: Catch every build result you want to handle (in this case SUCCESS).
2: Inject an environment variable valued with the job status.
3: Do the same for any other status you care about (in this case I catch everything from aborted to unstable).
4: Afterwards you'll be able to use the value for whatever you want to do; in this case I'm passing it to an ANT script (or you can load it directly from ANT as an environment variable).
Hope it can help!
Groovy script solution:
Here I am using the Groovy script plugin to read the build status and set it into an environment variable, so the environment variable can be used in post-build scripts via the Post build task plugin.
Groovy script:
import hudson.EnvVars
import hudson.model.Environment

// the current build (this runs as a system Groovy script)
def build = Thread.currentThread().executable
def result = manager.build.result.toString()
// expose the result as BUILD_STATUS for later build steps
def vars = [BUILD_STATUS: result]
build.environments.add(0, Environment.create(new EnvVars(vars)))
Post-build script:
echo BUILD_STATUS="${BUILD_STATUS}"
Try the Post build task plugin. It lets you specify conditions based on the log output.
Basic solution (please don't laugh)
#!/bin/bash
STATUS='Not set'
if [ -n "$UPSTREAM_BUILD_DIR" ]; then
    ISFAIL=$(ls -l "/var/lib/jenkins/jobs/$UPSTREAM_BUILD_DIR/builds" | grep "lastFailedBuild\|lastUnsuccessfulBuild" | grep "$UPSTREAM_BUILD_NR")
    ISSUCCESS=$(ls -l "/var/lib/jenkins/jobs/$UPSTREAM_BUILD_DIR/builds" | grep "lastSuccessfulBuild\|lastStableBuild" | grep "$UPSTREAM_BUILD_NR")
    if [ -n "$ISFAIL" ]; then
        echo "$ISFAIL"
        STATUS='FAIL'
    elif [ -n "$ISSUCCESS" ]; then
        STATUS='SUCCESS'
    fi
fi
echo "$STATUS"
where
UPSTREAM_BUILD_DIR=$JOB_NAME
UPSTREAM_BUILD_NR=$BUILD_NUMBER
are passed from the upstream build.
Of course /var/lib/jenkins/jobs/ depends on your Jenkins installation.
I recently updated the configuration of one of my Hudson builds. The build history is out of sync. Is there a way to clear my build history?
Please and thank you
Use the script console (Manage Jenkins > Script Console) and something like this script to bulk delete a job's build history: https://github.com/jenkinsci/jenkins-scripts/blob/master/scriptler/bulkDeleteBuilds.groovy
That script assumes you want to only delete a range of builds. To delete all builds for a given job, use this (tested):
// change this variable to match the name of the job whose builds you want to delete
def jobName = "Your Job Name"
def job = Jenkins.instance.getItem(jobName)
job.getBuilds().each { it.delete() }
// uncomment these lines to reset the build number to 1:
//job.nextBuildNumber = 1
//job.save()
This answer is for Jenkins
Go to your Jenkins home page → Manage Jenkins → Script Console
Run the following script there. Change copy_folder to your project name
Code:
def jobName = "copy_folder"
def job = Jenkins.instance.getItem(jobName)
job.getBuilds().each { it.delete() }
job.nextBuildNumber = 1
job.save()
If you click Manage Hudson / Reload Configuration From Disk, Hudson will reload all the build history data.
If the data on disk is messed up, you'll need to go to your %HUDSON_HOME%\jobs\<projectname> directory and restore the build directories as they're supposed to be. Then reload config data.
If you're simply asking how to remove all build history, you can just delete the builds one by one via the UI if there are just a few, or go to the %HUDSON_HOME%\jobs\<projectname> directory and delete all the subdirectories there -- they correspond to the builds.
Afterwards restart the service for the changes to take effect.
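Incidentally, the reload can also be triggered from the Script Console. A one-line sketch, using the modern Jenkins API (assumed here to behave like Hudson's reload):
Jenkins.instance.reload()  // same effect as Manage Jenkins → Reload Configuration from Disk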
Here is another option: delete the builds with cURL.
$ curl -X POST http://jenkins-host.tld:8080/jenkins/job/myJob/[1-56]/doDeleteAll
The above deletes builds #1 through #56 of job myJob.
If authentication is enabled on the Jenkins instance, a user name and API token must be provided like this:
$ curl -u userName:apiToken -X POST http://jenkins-host.tld:8080/jenkins/job/myJob/[1-56]/doDeleteAll
The API token must be fetched from the /me/configure page in Jenkins. Just click on the "Show API Token..." button to display both the user name and the API token.
Edit: one might have to replace doDeleteAll with doDelete in the URLs above to make this work, depending on the configuration or the version of Jenkins used.
Here is how to delete all builds for all jobs, using Jenkins scripting:
def jobs = Jenkins.instance.projects.collect { it }
jobs.each { job -> job.getBuilds().each { it.delete() }}
You could modify the project configuration temporarily to save only the last 1 build, reload the configuration (which should trash the old builds), then change the configuration setting again to your desired value.
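If you would rather script that than click through the UI, here is a hypothetical Script Console sketch of the same idea (API names taken from hudson.model.Job and hudson.tasks.LogRotator; the job name is a placeholder):
def job = Jenkins.instance.getItemByFullName("Your Job Name")
def oldDiscarder = job.buildDiscarder
// arguments: daysToKeep, numToKeep, artifactDaysToKeep, artifactNumToKeep
job.setBuildDiscarder(new hudson.tasks.LogRotator(-1, 1, -1, -1))
job.logRotate()                      // discards all but the most recent build
job.setBuildDiscarder(oldDiscarder)  // restore the original setting
job.save()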
If you want to clear the build history of a MultiBranchProject (e.g. a pipeline),
go to your Jenkins home page → Manage Jenkins → Script Console and run the following script:
def projectName = "ProjectName"
def project = Jenkins.instance.getItem(projectName)
project.getItems().each { job ->
    job.getBuilds().each { it.delete() }
    job.nextBuildNumber = 1
    job.save()
}
This one is the best option available:
Jenkins.instance.getAllItems(AbstractProject.class).each {it -> Jenkins.instance.getItemByFullName(it.fullName).builds.findAll { it.number > 0 }.each { it.delete() } }
This code will delete all Jenkins job build history, using the Script Console.
In case the jobs are grouped into folders, it's possible to either give getItemByFullName a full name with forward slashes:
def job = Jenkins.instance.getItemByFullName("folder_name/job_name")
job.getBuilds().each { it.delete() }
job.nextBuildNumber = 1
job.save()
or traverse the hierarchy like this:
def folder = Jenkins.instance.getItem("folder_name")
def job = folder.getItem("job_name")
job.getBuilds().each { it.delete() }
job.nextBuildNumber = 1
job.save()
Deleting directly from the file system is not safe. You can run the script below to trim builds from all jobs (recursively), keeping the most recent numberOfBuildsToKeep builds of each.
def numberOfBuildsToKeep = 10
Jenkins.instance.getAllItems(AbstractItem.class).each {
    if (it.class.toString() != "class com.cloudbees.hudson.plugins.folder.Folder"
            && it.class.toString() != "class org.jenkinsci.plugins.workflow.multibranch.WorkflowMultiBranchProject") {
        println it.name
        def builds = it.getBuilds()
        for (int i = numberOfBuildsToKeep; i < builds.size(); i++) {
            builds.get(i).delete()
            println "Deleted " + builds.get(i)
        }
    }
}
Go to "Manage Jenkins" > "Script Console"
Run below:
def jobName = "build_name"
def job = Jenkins.instance.getItem(jobName)
job.getBuilds().each { it.delete() }
job.save()
Another easy way to clean builds is by adding the Discard Old Build plugin at the end of your jobs. Set a maximum number of builds to save and then run the job again:
https://wiki.jenkins-ci.org/display/JENKINS/Discard+Old+Build+plugin
Go to %HUDSON_HOME%\jobs\<projectname>, remove the builds directory, remove the lastStable and lastSuccessful links, and remove the nextBuildNumber file.
After doing the above steps, go to the following in the UI:
Jenkins → Manage Jenkins → Reload Configuration from Disk
It will do what you need.
If using the Script Console method, try the following instead; it takes into account jobs that are grouped into folder containers.
def jobName = "Your Job Name"
def job = Jenkins.instance.getItemByFullName(jobName)
or
def jobName = "My Folder/Your Job Name
def job = Jenkins.instance.getItemByFullName(jobName)
Navigate to %JENKINS_HOME%\jobs\jobName.
Open the file "nextBuildNumber" and change the number. After that, reload the Jenkins configuration. Note: "nextBuildNumber" contains the next build number that Jenkins will use.
Tested on Jenkins 2.293 on Linux. It will remove all the build logs but will not reset the build numbers.
cd /var/lib/jenkins/jobs
find . -name "builds" -exec rm -rf {} \;
Be careful with this command, because it executes rm -rf on each find result. You can run the following first to check that the results are only the builds folders of your jobs:
find . -name "builds"
If you are looking for a solution where your job is inside a folder, you can use the getItemByFullName function. It also supports whitespace in folder and job names.
def jobName = "folder_name/job_name"
def job = Jenkins.instance.getItemByFullName(jobName)
job.getBuilds().each { it.delete() }
job.nextBuildNumber = 1
job.save()