I would like to use the "input step" of Jenkins to upload a binary file to the current workspace.
However, the code below seems to upload the file to the Jenkins master, not to the workspace of the current job on the slave where the job is running.
Is there any way to fix that?
Preferably without having to add an executor on the master or clutter the master disk with files.
def inFile = input id: 'file1', message: 'Upload a file', parameters: [file(name: 'data.tmp', description: 'Choose a file')]
It seems Jenkins doesn't officially support uploading binary files yet, as you can see in JENKINS-27413. You can still make use of the input step to get a binary file into your workspace. We will be using a method to get this working, but we will not call it directly inside the Jenkinsfile; otherwise we will encounter errors related to In-process Script Approval. Instead, we will use a Global Shared Library, which is considered one of Jenkins' best practices.
Please follow these steps:
1) Create a shared library
Create a repository test-shared-library
Create a directory named vars in the above repository. Inside the vars directory, create a file copy_bin_to_wksp.groovy with the following content:
def inputGetFile(String savedfile = null) {
    def filedata = null
    def filename = null
    // Get the file using the input step; the upload lands in the build directory on the master
    // The filename is not included in the upload data, so optionally allow it to be specified
    if (savedfile == null) {
        def inputFile = input message: 'Upload file', parameters: [file(name: 'library_data_upload'), string(name: 'filename', defaultValue: 'demo-backend-1.0-SNAPSHOT.jar')]
        filedata = inputFile['library_data_upload']
        filename = inputFile['filename']
    } else {
        def inputFile = input message: 'Upload file', parameters: [file(name: 'library_data_upload')]
        filedata = inputFile
        filename = savedfile
    }
    // Read the upload's contents and write them into the workspace
    writeFile(file: filename, encoding: 'Base64', text: filedata.read().getBytes().encodeBase64().toString())
    // Remove the file from the master to avoid things like secret leakage
    filedata.delete()
    return filename
}
2) Configure Jenkins for accessing Shared Library in any pipeline job
Go to Manage Jenkins » Configure System » Global Pipeline Libraries section
Name the library whatever you want (in my case, my-shared-library)
Keep the default branch as master (this is the branch where I pushed my code)
No need to check/uncheck the checkboxes unless you know what you're doing
3) Access shared library in your job
In the Jenkinsfile, add the following code:
@Library('my-shared-library@master') _
node {
    // Use any file name in place of demo-backend-1.0-SNAPSHOT.jar, which I have used below
    def file_in_workspace = copy_bin_to_wksp.inputGetFile('demo-backend-1.0-SNAPSHOT.jar')
    sh "ls -ltR"
}
You're all set to run the job. :)
Note:
Make sure Script Security plugin is always up-to-date
How are Shared Libraries affected by Script Security?
Global Shared Libraries always run outside the sandbox. These libraries are considered "trusted:" they can run any methods in Java, Groovy, Jenkins internal APIs, Jenkins plugins, or third-party libraries. This allows you to define libraries which encapsulate individually unsafe APIs in a higher-level wrapper safe for use from any Pipeline. Beware that anyone able to push commits to this SCM repository could obtain unlimited access to Jenkins.
Folder-level Shared Libraries always run inside the sandbox. Folder-based libraries are not considered "trusted:" they run in the Groovy sandbox just like typical Pipelines.
Code Reference: James Hogarth's comment
Related
I am creating Jenkins pipelines for all our applications to build and deploy. I was able to achieve that, but all the deployment paths are hard-coded in the pipeline script.
We have around 8 applications and 5 environments, which means I need to specify 40 different deployment paths in the pipeline scripts.
I'd like to know: is there a better way to store the deployment paths? I thought about storing them in XML and reading that while doing the build, but I'm not sure about the implementation.
Looking for some ideas.
script {
    def msbuild = tool name: 'Msbuild', type: 'msbuild'
    def action = "${msbuild}\\msbuild.exe"
    def rootPath = "${WORKSPACE}\\test\\test"
    def slnPath = "${rootPath}\\test.sln"
    def binPath = "${rootPath}\\test\\bin"
    bat "nuget restore \"${slnPath}\""
    bat "\"${action}\" \"${slnPath}\""
    // robocopy uses exit codes below 8 for success, so don't let bat fail the build on them
    def rc = bat returnStatus: true, script: "robocopy \"${binPath}\" \"\\\\t.test.com\\test\" /MIR /XF"
    if (rc >= 8) {
        error "robocopy failed with exit code ${rc}"
    }
}
What I would do is use a config repository, having it configured this way:
Each application is a different repository (example: app_config)
Each environment is a different file
The environment file has the same name in every repository
Each environment file is YAML (key: value pairs)
Then in the Jenkins pipeline I would fetch the config repo, read the YAML using readYaml (check the step's usage and exact name; it has been a while since I used it), and load it into a map.
Then you use the values from the map, and that should help you.
The tricky part is how to match the code repositories and the config repositories. As I mentioned before, I would use the same name and append "_config".
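A minimal sketch of that approach, assuming the Pipeline Utility Steps plugin is installed; the repository URL, the file name dev.yaml, and the deploy_path key are illustrative:
node {
    // Fetch the config repository that matches this application
    dir('config') {
        git url: 'https://example.com/scm/app1_config.git', branch: 'master'
    }
    // readYaml is provided by the Pipeline Utility Steps plugin
    def cfg = readYaml file: 'config/dev.yaml'
    // Use the loaded values instead of hard-coded deployment paths
    echo "Deploying to ${cfg.deploy_path}"
}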
I had a requirement to build a console application, but I need to change some values in the appsettings.json file according to the environment and then build it. I am new to Jenkins and want to know how to achieve this.
For dev, change the values in the JSON file and build it -> for test, change the JSON values again and build it -> and so on until prod.
This can be done in multiple ways, for example (the common idea among them is to check the incoming branch):
You might find better ways to do it, but you can use this as a start.
Using bash, jq, and sponge through the sh step:
Create a JSON file as a template like the following (consider keeping this file in version control so it can be cloned on every build):
# settings.json
{
    "environment": "ENVIRONMENT_NAME",
    "appVersion": "APP_VERSION"
}
Check the branch name through an if condition and update the template accordingly:
jq '.environment = "branch_name"' settings.json | sponge settings.json
Use the customized settings.json in your application's code (a pipeline sketch tying these steps together appears after this list)
Using Config File Provider Plugin which can be used inside the Jenkins pipeline as the following (also update it based on the branch name)
configFileProvider([configFile(fileId: 'FILE_ID', targetLocation: 'FILE_LOCATION')]) {}
Check if the application framework can make use of environment variables.
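Here is a minimal pipeline sketch of the jq/sponge approach; it assumes jq and sponge (from moreutils) are installed on the agent, that BRANCH_NAME is provided by a multibranch pipeline, and an illustrative branch-to-environment mapping:
node {
    checkout scm
    // Map the incoming branch to an environment name (mapping is illustrative)
    def envName = (env.BRANCH_NAME == 'master') ? 'prod' : env.BRANCH_NAME
    // Rewrite the template in place; sponge lets jq write back to its own input file
    sh "jq '.environment = \"${envName}\"' settings.json | sponge settings.json"
    // ... build the application with the customized settings.json ...
}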
I've been asked to move some variable from a Groovy script out into a configuration file. I'm fine using something like :-
readFile('../xx-software.cfg').split('\n').each { fileName ->
    sh "wget ${theURL}${fileName}"
}
However, even though I have added xx-software.cfg into the same directory as my Groovy script, it does not become available for use within that Groovy script.
I hope this makes sense!?
How can I move my variables out into a config file to make it easier for the application support team to make future edits without changing the code?
There are a few approaches you could use.
Firstly, the file format for the configuration and how to read the data into variables. You could use the Java Properties format, YAML, or JSON; these are all handled by the Pipeline Utility Steps plugin. You can read the file with these steps:
readProperties
readYaml
readJSON
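For example, a Java Properties file could be read like this (the file name and keys are illustrative):
// xx-software.cfg might contain lines such as:
//   base_url=https://downloads.example.com/
//   file_name=installer.tar.gz
def config = readProperties file: 'xx-software.cfg'
sh "wget ${config['base_url']}${config['file_name']}"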
Next problem, how to get the file available to your pipeline so it can be read from the workspace using these steps. Possibilities are:
In source control with your pipeline code. It can be fetched with the pipeline.
In a separate source control for configuration, your pipeline will need a step to fetch it.
Use the Jenkins Config File Provider plugin. It has a step to provide a config file managed in Jenkins.
Provide it as a Custom Tool zipped archive from a binary server like Artifactory. You can use custom tool definition pipeline steps to make this available to the pipeline.
The Config File Provider option might provide an easy way to have a file that can be updated, but there won't be any version control of it.
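If you go the Config File Provider route, the two parts can be combined in one go; a sketch, where the fileId value is an illustrative assumption:
configFileProvider([configFile(fileId: 'xx-software-cfg', targetLocation: 'xx-software.cfg')]) {
    // The managed file is copied into the workspace, then read as usual
    def config = readProperties file: 'xx-software.cfg'
    echo "Loaded ${config.size()} settings"
}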
I have a Jenkins job that invokes a Gradle script to create a .war file from sources.
The gradle war command produces a file named Geo-1.0.5.war because build.gradle uses a version number:
war {
    baseName = 'Geo'
    version = '1.0.5'
}
This file will be copied and deployed to a Wildfly server through SSH using the "Publish Over SSH Plugin".
How can I tell the plugin that the war filename format is something like Geo-$gradle_version.war?
This is documented if you click the (?) help icon next to the "Source files" field within Jenkins:
The string is a comma separated list of includes for an Ant fileset eg. **/*.jar
(see Patterns in the Ant manual).
So in your case, you could use **/Geo-*.war as the source pattern.
This is also shown in the screenshot on the plugin wiki page, and in the Source Files and Examples sections on the linked "Publish Over…" documentation.
In your comment to this answer, you mention that you don't want to communicate that the filename is "something like Geo-$gradle_version.war" for uploading, but rather want to use the exact filename in a script being executed on the SSH host.
You could do this by adding an Execute Shell step which determines the filename, and exporting it as an environment variable using the EnvInject Plugin. For example:
f=$(basename `find . -name 'Geo-*.war'`)
echo WAR_FILENAME=${f} > env.properties
Then, by using an Inject Environment Variables step with its path set to env.properties, the WAR_FILENAME value will be added to the build environment, available for use by subsequent steps.
In the Exec Command field of the SSH-publishing step, you can then use ${WAR_FILENAME} to refer to the exact filename uploaded.
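For instance, the Exec Command field could contain something like the following (the remote paths are illustrative):
# Move the uploaded war into Wildfly's deployment directory on the remote host
cp /tmp/uploads/${WAR_FILENAME} /opt/wildfly/standalone/deployments/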
I have a Jenkins job that pulls source code from a GitHub public repo. I need to pass some files, such as instance-specific configuration files containing secrets, to the job and merge them with the source code prior to running the build, because these files are obviously inappropriate to put in public SCM. The Jenkins instance is a multi-tenanted shared service.
The config files don't change often, so I don't want to implement this using a file parameter, which forces the user to manually input the file on every run. Another reason the file parameter doesn't work is that some builds are triggered automatically by SCM.
I don't want to use the Config File Provider Plugin either, because the plugin requires Jenkins admin access, but I want users with job-level privileges to manage the files themselves.
Ideally the uploaded files would be saved alongside the job's config.xml instead of in the workspace, because I would like to delete the workspace after each build. I can write scripts to copy the files from the job config folder to the workspace.
Are there any solutions available? Thanks.
If the "special" files are being placed in a folder with say some access privileges to it, couldn't you either run a Pre-SCM-Buildstep to move the files with shell commands, or introduce a regular build step (i.e. after the SCM stuff and before the other build steps) that would also use shell commands to move files?