I'm trying to set up the deployment of binary files (after tests) to Amazon S3. The Travis CI documentation is unhelpful because it doesn't mention where the artifacts should be generated/copied so that Travis can upload them to the specified bucket. Any idea? Is there any 'well-known' path where it looks for artifacts?
deploy:
  provider: s3
  access_key_id: "YOUR AWS ACCESS KEY"
  secret_access_key: "YOUR AWS SECRET KEY"
  bucket: "S3 Bucket"
  on:
    all_branches: true
The deployment looks at directories or files relative to the current working directory, which is normally the project's root unless it is changed as part of the build. If you don't specify a directory, Travis CI will deploy the entire project folder to S3.
So if your project creates an artifact in the dist directory, you can specify the relative path:
deploy:
  provider: s3
  local_dir: dist
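Putting this together, a fuller .travis.yml might look like the sketch below. Note that with the classic (dpl v1) deploy syntax you also need skip_cleanup: true, otherwise Travis resets the working copy before deploying and your build artifacts are discarded. The build command and bucket name here are placeholders, not taken from the question:

```yaml
script:
  - make build          # assumed to write binaries into dist/
deploy:
  provider: s3
  access_key_id: "YOUR AWS ACCESS KEY"
  secret_access_key: "YOUR AWS SECRET KEY"
  bucket: "S3 Bucket"
  local_dir: dist       # upload only the contents of dist/
  skip_cleanup: true    # keep build artifacts instead of resetting the working copy
  on:
    all_branches: true
```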
I have a file in a folder on my computer that I want to add to the Jenkins pipeline build folder before the testing stage, without adding this file to the repository. Is there a way to do that? The file is a config.yaml that contains a relevant version.
Hi, I'm using Jenkins to deploy files (the project is written in Vue.js). I'm using the Publish Over FTP Jenkins plugin. I run npm i && npm run build, and Jenkins creates a dist directory with my project. Now I need to send the contents of the dist directory to my server over FTP. This is my Publish Over FTP configuration:
Source files: dist/**
Remote directory: /public_html/
The FTP plugin sends the files to my server, but inside public_html I end up with a dist directory. I need to send the files without the dist directory itself; only the contents of dist should land in public_html.
I found the solution: I set the Remove prefix field to dist:
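The resulting Publish Over FTP configuration would look something like this (a sketch reconstructing the settings described above; field names follow the plugin's transfer-set form):

```
Source files:     dist/**
Remove prefix:    dist
Remote directory: /public_html/
```

With Remove prefix set, the plugin strips the dist/ portion from each file's path before uploading, so the contents of dist land directly in /public_html/.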
I need to store a .csv or .txt file and access it from the Jenkinsfile. I currently have a couple of files in the credential storage (logins, passwords, and so on), but this file just has to be stored on the Jenkins machine. I know I could upload it directly to the node, but I would prefer doing it in a similar fashion to the credentials (using the web interface).
You can save the file in Managed Files through the web interface: Manage Jenkins > Managed files > Add a new Config.
Each file saved there gets an auto-generated ID, or you can set your own. You can then use the Config File Provider Plugin to access your files from a Jenkins Pipeline using the file ID, like this:
configFileProvider([configFile(fileId: 'maven-settings', targetLocation: '/path/to/file/in/workspace')]) {}
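In a declarative pipeline that might look like the following sketch. The file ID 'build-versions' and the target name config.yaml are assumed examples, not values from the question:

```groovy
pipeline {
    agent any
    stages {
        stage('Load config') {
            steps {
                // Copies the managed file into the workspace for this block;
                // later steps in the block can read it like any workspace file.
                configFileProvider([configFile(fileId: 'build-versions',
                                               targetLocation: 'config.yaml')]) {
                    sh 'cat config.yaml'
                }
            }
        }
    }
}
```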
I'm working on a new CI proof of concept. I'm using TFS build and attempting to integrate jFrog Artifactory.
I'm trying to create a folder structure within my Artifactory repository like so:
[repository]/[sub-repository]/[Artifacts Folder]/[Versioned Artifact Folder]/[Versioned Artifact Zip Archive]
I've scripted the creation of the following correct structure in my Artifactory staging directory with PowerShell:
[Artifacts Folder]\[Versioned Artifact Folder]\[Versioned Artifact Zip Archive]
... and finally compressed my [Artifacts Folder] into a [Artifacts Folder].zip archive for deployment to Artifactory repository.
Now, although jFrog documentation indicates the introduction of an --explode option in jFrog 1.7 for this purpose, attempts to upload using this option returned an Incorrect Usage error:
2018-10-01T10:21:28.3168258Z running 'C:\jfrog\jfrog.exe' rt upload '[Artifactory Staging Directory]\[Artifacts Folder]\*' '[repository]/[sub-repository]/[Artifacts Folder]' --url=https://www.artifactrepository.xxx.net/artifactory --explode=true --user=******** --password=******** --props='build.number=[build_number];build.name=[build_name]'
2018-10-01T10:21:28.3168258Z
2018-10-01T10:21:28.3168258Z
2018-10-01T10:21:29.6761967Z Incorrect Usage.
2018-10-01T10:21:29.6761967Z
2018-10-01T10:21:29.6761967Z NAME:
2018-10-01T10:21:29.6761967Z jfrog rt upload - Upload files
2018-10-01T10:21:29.6761967Z
2018-10-01T10:21:29.6761967Z USAGE:
2018-10-01T10:21:29.6761967Z jfrog rt upload [command options] [arguments...]
2018-10-01T10:21:29.6761967Z
2018-10-01T10:21:29.6761967Z OPTIONS:
2018-10-01T10:21:29.6761967Z --url [Optional] Artifactory URL
2018-10-01T10:21:29.6761967Z --user [Optional] Artifactory username
2018-10-01T10:21:29.6761967Z --password [Optional] Artifactory password
2018-10-01T10:21:29.6761967Z --apikey [Optional] Artifactory API key
2018-10-01T10:21:29.6761967Z --ssh-key-path [Optional] SSH key file path
2018-10-01T10:21:29.6761967Z --props [Optional] List of properties in the form of "key1=value1;key2=value2,..." to be attached to the uploaded artifacts.
2018-10-01T10:21:29.6761967Z --deb [Optional] Used for Debian packages in the form of distribution/component/architecture.
2018-10-01T10:21:29.6917936Z --recursive [Default: true] Set to false if you do not wish to collect artifacts in sub-folders to be uploaded to Artifactory.
2018-10-01T10:21:29.6917936Z --flat [Default: true] If set to false, files are uploaded according to their file system hierarchy.
2018-10-01T10:21:29.6917936Z --regexp [Default: false] Set to true to use a regular expression instead of wildcards expression to collect files to upload.
2018-10-01T10:21:29.6917936Z --threads [Default: 3] Number of artifacts to upload in parallel.
2018-10-01T10:21:29.6917936Z --dry-run [Default: false] Set to true to disable communication with Artifactory.
2018-10-01T10:21:29.6917936Z
I'm using the jFrog Artifactory Deployer 2.1.1 TFS build task.
This command line option is described here: https://www.jfrog.com/confluence/display/CLI/CLI+for+JFrog+Artifactory#CLIforJFrogArtifactory-UploadingFiles
However, it seems that the jFrog.exe on our TFS servers doesn't understand the --explode command line option.
(Note: I am unsure what version of jFrog.exe is running on our build servers; currently awaiting details from responsible team, update to follow.)
Is the issue that the jFrog.exe version is older (pre-1.7) and does not support the --explode option? If so, is there an alternative way to upload multiple artifacts while preserving the staging folder structure?
(Note: I tried the --flat=false option, but then the staging folder hierarchy was preserved all the way back to the root; that's not what's required either.)
Any insights appreciated, thanks for looking.
In the end, we were able to work around the absence of the '--explode' command option by using placeholders like so:
In the jFrog Artifactory Deployer task:
Path to the Artifacts: [Artifacts Folder]\(**)\(*)
Target Repository: [repository]/[sub-repository]/[Artifacts Folder]/{1}/
The use of placeholders in this way accomplished the preservation of folder structure in the push to the Artifactory repository as required.
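For anyone using the JFrog CLI directly rather than the TFS task, the same placeholder mechanism applies: parenthesized wildcards in the source pattern are captured and can be reused as {1}, {2}, … in the target path. A sketch, keeping the bracketed placeholders from the question:

```
jfrog rt upload "[Artifacts Folder]/(*)/(*)" "[repository]/[sub-repository]/[Artifacts Folder]/{1}/"
```

Here {1} refers to the first captured group, i.e. the [Versioned Artifact Folder] name, so each zip lands under its own versioned folder in the repository.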
I'm running Jenkins v1.581 and Publish artifacts to SCP Repository v1.8.
I am able to successfully copy my artifacts over SCP to a destination directory, so I know that the server names, authentication, etc. are all correct.
My configuration looks something like this:
Source: tmp/distribution/target/deploy/opt/**
Destination: opt
When Jenkins puts the file over SCP it ends up in a directory structure of opt/tmp/distribution/target/deploy/opt/rest_of_path. It looks like it's keeping the original path of the file as it existed as an artifact and appending it to the destination path. This causes my artifacts to be deployed to an unexpected path.
My expectation is that they would end up as opt/rest_of_path. How do I fix this?
I replaced the Publish to SCP Repository with the Send build artifacts over SSH plugin. This plugin has an option for Remove prefix which does exactly what I wanted.
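The resulting transfer-set configuration would look something like this (a sketch based on the paths given above; field names follow the Publish Over SSH plugin's form):

```
Source files:     tmp/distribution/target/deploy/opt/**
Remove prefix:    tmp/distribution/target/deploy
Remote directory: /
```

With the prefix removed, the files arrive on the server as opt/rest_of_path instead of opt/tmp/distribution/target/deploy/opt/rest_of_path.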