Upload multiple files using s3Upload in a Jenkins pipeline

Can we upload multiple files (not an entire folder) to S3 using s3Upload in a Jenkinsfile?
I was trying to upload all RPM files (*.rpm) in the root directory to S3 using the s3Upload function.

You can upload all the files with the following command in one line.
s3Upload(bucket:"my-bucket", path:'path/to/targetFolder/', includePathPattern:'**/*.svg', workingDir:'dist')
To explain further, you can create your own filtering based on the following two possibilities:
1. Include all files with a certain extension.
s3Upload(bucket:"my-bucket", path:'path/to/targetFolder/', includePathPattern:'**/*.svg', workingDir:'dist')
2. Include all files except those with a certain extension.
s3Upload(bucket:"my-bucket", path:'path/to/targetFolder/', includePathPattern:'**/*', workingDir:'dist', excludePathPattern:'**/*.svg')
Reference: https://github.com/jenkinsci/pipeline-aws-plugin (Check under s3Upload)
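Applied to the original question, a minimal sketch of the same approach for *.rpm files could look like the following (the bucket, path, and credentials ID are placeholders, and it assumes the pipeline-aws plugin is installed):
withAWS(credentials: 'aws-credentials-id') {
    // Upload every *.rpm found in the workspace root to the given prefix
    s3Upload(bucket: 'rpm-repo', path: 'path/to/targetFolder/', includePathPattern: '*.rpm', workingDir: '.')
}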

findFiles solved the issue. Below is the snippet used for the same.
files = findFiles(glob: '*.rpm')
files.each {
    println "RPM: ${it}"
    withAWS(credentials: '****************') {
        s3Upload(file: "${it}", bucket: 'rpm-repo', path: "${bucket_path}")
    }
}
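If the RPMs can also sit in subdirectories and you want to mirror their relative paths in the bucket, a variation along these lines should work; this is a sketch that assumes the FileWrapper path property from the Pipeline Utility Steps plugin and keeps the masked credentials ID:
def rpms = findFiles(glob: '**/*.rpm')
rpms.each { f ->
    // f.path is relative to the workspace, e.g. 'subdir/foo.rpm'
    withAWS(credentials: '****************') {
        s3Upload(file: f.path, bucket: 'rpm-repo', path: "${bucket_path}/${f.path}")
    }
}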

Refer to the AWS S3 CLI documentation, in particular the section 'Use of Exclude and Include Filters'.
Here is a way to upload multiple files of a particular type.
If you only want to upload files with a particular extension, you need to first exclude all files, then re-include the files with the particular extension. This command will upload only files ending with .jpg:
aws s3 cp /tmp/foo/ s3://bucket/ --recursive --exclude "*" --include "*.jpg"
This works with the AWS Command Line Interface.
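In a Jenkins pipeline, the same CLI call could be wrapped in an sh step; a sketch, assuming the AWS CLI is installed on the agent and credentials are supplied via withAWS (the credentials ID is a placeholder):
withAWS(credentials: 'aws-credentials-id') {
    // withAWS exposes the credentials as environment variables for the CLI
    sh 'aws s3 cp /tmp/foo/ s3://bucket/ --recursive --exclude "*" --include "*.jpg"'
}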

For pipelines, you need to wrap the iteration in a script block, like:
pipeline {
    environment {
        // Extract concise branch name.
        BRANCH = GIT_BRANCH.substring(GIT_BRANCH.lastIndexOf('/') + 1, GIT_BRANCH.length())
    }
    ...
    post {
        success {
            script {
                def artifacts = ['file1', 'dir2/file3']
                artifacts.each {
                    withAWS(credentials: 'my-aws-token', region: 'eu-west-1') {
                        s3Upload(
                            file: "build/${it}",
                            bucket: 'my-artifacts',
                            path: 'my-repo/',
                            metadatas: ["repo:${env.JOB_NAME}", "branch:${env.BRANCH}", "commit:${env.GIT_COMMIT}"]
                        )
                    }
                }
            }
        }
    }
}

Related

Jenkins DSL custom config file folder

We are using DSL to build/setup our Jenkins structure.
In it, we create our folder structure and then all our jobs within the folders.
The jobs end up in the correct folders by including the folder name in the job name:
pipelineJob('folder/subfolder/Job Name') {}
While the UI lets me create a config file within a folder, I cannot find a way within the DSL Groovy script hierarchy to put a custom config file into a folder.
While I can easily create a config file:
configFiles {
    customConfig {
        name('myCustom.yaml')
        id('59f394fc-40fe-489d-989c-7556c1a01153')
        content('yaml content goes here')
    }
}
There seems to be no way to put this file into a folder / subfolder.
While the Job DSL plugin does not offer an easy way to do this, you can use a configure block to modify the XML directly.
folder('Config-File Example') {
    description("Example of a Folder with a Config-File, created via Job DSL")
    configure { folder ->
        folder / 'properties' << 'org.jenkinsci.plugins.configfiles.folder.FolderConfigFileProperty'() {
            configs(class: 'sorted-set') {
                comparator(class: 'org.jenkinsci.plugins.configfiles.ConfigByIdComparator')
                'org.jenkinsci.plugins.configfiles.json.JsonConfig'() {
                    id 'my-config-file-id'
                    providerId 'org.jenkinsci.plugins.configfiles.json.JsonConfig'
                    name 'My Config-File Name'
                    comment 'This contains my awesome configuration data'
                    // Use special characters as-is, they will be encoded automatically
                    content '[ "1", \'2\', "<>$%&" ]'
                }
            }
        }
    }
}
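Once the folder-level config file exists, a job inside that folder can consume it, for example via the Config File Provider step; a sketch (the job name and target location are illustrative):
pipelineJob('Config-File Example/My Job') {
    definition {
        cps {
            script('''
                node {
                    // 'my-config-file-id' matches the id set in the configure block above
                    configFileProvider([configFile(fileId: 'my-config-file-id', targetLocation: 'my-config.json')]) {
                        sh 'cat my-config.json'
                    }
                }
            '''.stripIndent())
        }
    }
}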

how to call property file syntax and define in JOB DSL in jenkins

I want to use a property file in a DSL job that will take my project name for the job name, along with the SVN location. Does anyone know how to write this and what the syntax is?
For handling properties files stored outside your repository, you have a plugin called "Config File Provider Plugin".
You use it like this:
stage('Add Config files') {
    steps {
        configFileProvider([configFile(fileId: 'ID-of-file-in-jenkins', targetLocation: 'path/destinationfile')]) {
            // some block
        }
    }
}
It is capable of replacing tokens in JSON and XML, or the whole file (as in the example).
For handling data coming from SVN or the project name, you can access the environment variables. See this thread and this link.
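For example, variables such as JOB_NAME (always set by Jenkins) and SVN_URL (set by the Subversion plugin after checkout) can be read directly in the pipeline; a small sketch:
stage('Show environment') {
    steps {
        // JOB_NAME is provided by Jenkins; SVN_URL appears once the Subversion plugin has checked out
        echo "Job name: ${env.JOB_NAME}"
        echo "SVN location: ${env.SVN_URL}"
    }
}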

Gradle artifact dependency - Artifactory artifact - How to find the path

Building a Java/Groovy project, various tasks like compileJava, compileGroovy, test, etc. require various jar artifacts, which Gradle provides as long as you have properly defined what each task needs.
I'm making those available to the build script (build.gradle) and everything is working fine.
Another project that I'm working on requires not only jar artifacts but also an .xml file as an artifact for doing JIBX / XSLT transformation/processing.
My simple question:
- The Gradle build process knows how to fetch artifacts from Artifactory (as I have defined those Artifactory repositories in the init.d/common.gradle file) and during the build it feeds the compile/test etc. tasks with those jars. Now, if I have this .xml artifact uploaded to Artifactory as well, then:
a. How can I make the .xml artifact available in build.gradle so that I can perform some operation on it; for example, copy that .xml file to an x/y/z folder in the resulting project's jar/war file? The jar files I can access via project.configurations.compile.each or .find or something like that, but I'm not sure if I can access the .xml file the same way. The following code works fine for unjarring a jar file into the build/tmpJibx/ folder, i.e. if I need httpunit-1.1.1.jar during my build, then the following function, when called, will create/unjar this jar into the build/tmpJibx/httpunit folder.
// Unpack jar
def unpackJarItem( jarName ) {
    println 'unpacking: ' + jarName
    def dirName = "$buildDir/tmpJibx/$jarName"
    new File( dirName ).mkdirs()
    project.configurations.compile.find {
        def nameJar = it.name
        def iPos = nameJar.lastIndexOf( '-' )
        if( iPos > 0 ) {
            nameJar = nameJar.substring( 0, iPos )
            if( nameJar == jarName ) {
                def srcJar = it.toString()
                ant {
                    unjar( src: "$srcJar", dest: "$dirName" )
                }
            }
        }
    }
}
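For example, with httpunit-1.1.1.jar on the compile configuration, calling the function as below unpacks it into build/tmpJibx/httpunit:
// Unjars httpunit-1.1.1.jar from the compile configuration into build/tmpJibx/httpunit
unpackJarItem( 'httpunit' )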
Gradle maintains artifacts in its cache in the user's home directory (~/.gradle, or C:\Users\<user>\.gradle on Windows), under ...\caches\...\artifactory\...\filestore\...
All Im trying to achieve is:
If I can do something like below:
copy {
    into "build/war/WEB-INF/conf"
    from "......<THIS_PATH_IS_WHAT_Im_LOOKING_FOR>"
    include "thatxmlartifactfile.xml"
}
I tried defining the entry under the dependencies { ... } section, like below, but I'm not sure whether Gradle will automatically make it available somehow.
dependencies {
    compile 'groupid:artifactid:x.x.x'
    compile group: 'xxxx', artifac...: 'yyyy', version: 'x.x.x'
    // for example:
    compile 'httpunit:httpunit:1.1.1'
    jibxAnt 'groupidnameofxmlfile:artifactidnameofxmlfile:versionnumberofxml#xml'
    ...
}
It seems like I have to first copy that .xml from wherever Gradle knows it's available to some location, and then from that location to my target folder.
// Add libraries for acceptance tests
project.configurations.acceptanceTestCompile.each { File f ->
    if( f.isFile() ) {
        def nameJar = f.getName()
        def jarName = f.getName()
        def fileInc = true
        def iPos = nameJar.lastIndexOf( '-' )
        if( iPos > -1 ) {
            jarName = nameJar.substring( 0, iPos )
            // Here I can say that one of the files/entries will be that .xml file.
            // Now I have that in the jarName variable and I can play with it, right?
            // i.e. if jarName == name of that xml file, then
            copy {
                into "some/folder/location"
                from jarName
            }
        }
    }
}
The easiest solution is to commit the XML file to source control. If you put it under src/main/webapp/WEB-INF/conf/thatxmlartifactfile.xml, it will get included in the War automatically.
If you need to get the file from Artifactory, you can do so as follows:
configurations {
    jibx
}
dependencies {
    jibx "some.group:someArtifact:1.0#xml"
}
war {
    from { configurations.jibx.singleFile }
}
PS: It's often possible, and also preferable, to add files directly to the final archive, rather than going through intermediate copy steps.
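If the file needs to land in a specific directory inside the archive (the question mentions WEB-INF/conf), a child copy spec can place it there; a sketch built on the same jibx configuration:
war {
    // Copy the resolved .xml artifact into WEB-INF/conf inside the war
    from( configurations.jibx ) {
        into 'WEB-INF/conf'
    }
}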

Zip files/Directories in Groovy with AntBuilder

I am trying to zip files and directories in Groovy using AntBuilder. I have the following code:
def ant = new AntBuilder()
ant.zip(basedir: "./Testing", destfile:"${file}.zip",includes:file.name)
This zips the file "blah.txt", but not the file "New Text Document.txt". I think the issue is the spaces. I've tried the following:
ant.zip(basedir: "./Testing", destfile:"${file}.zip",includes:"${file.name}")
ant.zip(basedir: "./Testing", destfile:"${file}.zip",includes:"\"${file.name}\"")
Neither of the above resolved the issue. I'm using Ant because it will zip directories, and I don't have access to org.apache.commons.io.compression at work.
If you look at the docs for the ant zip task, the includes parameter is described as:
comma- or space-separated list of patterns of files that must be included
So you're right: it is the space separator that's breaking it.
You need to use the longer route to get this to work:
new AntBuilder().zip( destFile: "${file}.zip" ) {
fileset( dir: './Testing' ) {
include( name:file.name )
}
}
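Since the motivation was zipping directories, the same fileset form handles that case as well; a small sketch (the directory and archive names are illustrative):
def ant = new AntBuilder()
ant.zip( destFile: 'Testing.zip' ) {
    // Include everything under ./Testing, including files with spaces in their names
    fileset( dir: './Testing' ) {
        include( name: '**/*' )
    }
}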

Jenkins Continuous Integration with Amazon S3 - Everything is uploading to the root?

I'm running Jenkins and I have it successfully working with my GitHub account, but I can't get it working correctly with Amazon S3.
I installed the S3 plugin and when I run a build it successfully uploads to the S3 bucket I specify, but all of the uploaded files end up in the root of the bucket. I have a bunch of folders (such as /css, /js, and so on), but all of the files in those folders from GitHub end up in the root of my S3 bucket.
Is it possible to get the S3 plugin to upload and retain the folder structure?
It doesn't look like this is possible. Instead, I'm using s3cmd to do this. You must first install it on your server, and then in one of the bash scripts within a Jenkins job you can use:
s3cmd sync -r -P $WORKSPACE/ s3://YOUR_BUCKET_NAME
That will copy all of the files to your S3 account while maintaining the folder structure. The -P flag keeps read permissions for everyone (needed if you're using your bucket as a web server). This is a great solution using the sync feature, because it compares all your local files against the S3 bucket and only copies files that have changed (by comparing file sizes and checksums).
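In a pipeline job, the same command can be run from an sh step once s3cmd is installed and configured on the agent; a sketch (the bucket name is a placeholder):
stage('Publish to S3') {
    steps {
        // Requires s3cmd installed and configured (e.g. via ~/.s3cfg) on the agent
        sh 's3cmd sync -r -P "$WORKSPACE/" s3://YOUR_BUCKET_NAME'
    }
}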
I have never worked with the S3 plugin for Jenkins (but now that I know it exists, I might give it a try). However, looking at the code, it seems you can only do what you want with a workaround.
Here's what the actual plugin code does (taken from GitHub); I removed the parts of the code that are not relevant, for the sake of readability:
class hudson.plugins.s3.S3Profile, method upload:
final Destination dest = new Destination(bucketName,filePath.getName());
getClient().putObject(dest.bucketName, dest.objectName, filePath.read(), metadata);
Now if you take a look at the JavaDoc of hudson.FilePath.getName():
Gets just the file name portion without directories.
Now, take a look at the constructor of hudson.plugins.s3.Destination:
public Destination(final String userBucketName, final String fileName) {
    if (userBucketName == null || fileName == null)
        throw new IllegalArgumentException("Not defined for null parameters: "+userBucketName+","+fileName);
    final String[] bucketNameArray = userBucketName.split("/", 2);
    bucketName = bucketNameArray[0];
    if (bucketNameArray.length > 1) {
        objectName = bucketNameArray[1] + "/" + fileName;
    } else {
        objectName = fileName;
    }
}
The Destination class JavaDoc says:
The convention implemented here is that a / in a bucket name is used to construct a structure in the object name. That is, a put of file.txt to bucket name of "mybucket/v1" will cause the object "v1/file.txt" to be created in the mybucket.
Conclusion: the filePath.getName() call strips off any prefix you add to the file (S3 does not have directories, but rather prefixes; see this and this thread for more info). If you really need to put your files into a "folder" (i.e. give them a prefix that contains a slash (/)), I suggest adding this prefix to the end of your bucket name, as explained in the Destination class JavaDoc.
Yes, this is possible.
It looks like you'll need a separate instance of the S3 plugin for each folder destination, however.
"Source" is the file you're uploading.
"Destination bucket" is where you place your path.
Using Jenkins 1.532.2 and S3 Publisher Plug-In 0.5, the UI Configure Job screen rejects additional S3 publish entries. There would also be a significant maintenance benefit for us if the plugin recreated the workspace directory structure, as we'll have many directories to create.
Set up your Git plugin.
Set up your Bash script.
Everything in the folder marked as "*" will go to the bucket.
