Sending build artifacts over ssh is very slow - jenkins

I am new to Jenkins. I am trying to deploy PHP code to a production server via the "Publish Over SSH" plugin, which I enabled in "post-build actions". Everything works, but the transfer is far too slow (2 hours for a 40MB transfer).
Here is the scenario:
The entire project is set up locally. Total size is nearly 700MB.
All code is pushed to Bitbucket.
I configured the build in Jenkins with "Send build artifacts over SSH" as a post-build option. Inside the transfer set I added **/*.* for the source files option.
It is taking hours and hours to transfer the entire project. Within 2 hours it transferred only 140MB.
Is this normal? Do I need to change any settings? The network connection between the server Jenkins runs on and the production server is fine.

"rsync over ssh " solved the problem of code transfer to production server.Now its taking only 2-3 seconds for a build.

Yes, 2 hours for 40MB is unexpectedly slow. However, it is not uncommon to get abysmally slow archiving of artifacts from agent to master.
Below are links to the two open tickets I know of that track this. Many others have been reported and closed over the last decade, but in my environment I'm getting ~13Mbps transfers, despite the 10Gbps links we have between all nodes in our Jenkins cluster.
JENKINS-18276
JENKINS-63517

This isn't directly related to your question of sending artifacts via SSH, but it may help others who come upon this while trying to reduce the time it takes to archive artifacts in general. I used the Pipeline Utility Steps plugin to zip everything I wanted to archive into a single zip file, then unzipped it when I wanted to un-archive it later.
That reduced the time by about 50x for me, presumably because it was just one file to transfer/archive/unarchive instead of a bunch of small files, and zipping/unzipping took less than 1 second in my case, much less time than archiving the individual files separately.
zip zipFile: 'artifacts.zip'
archiveArtifacts 'artifacts.zip'
then
copyArtifacts(filter: '**/*', projectName: currentBuild.projectName, selector: specific(params.archived_build_number))
unzip zipFile: 'artifacts.zip', quiet: true
The zip step also has a parameter to archive the zip, but I showed it separately here to be explicit. Again, nothing about this is specific to your SSH scenario, but I thought it would be useful to post it anyway.
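For completeness, the same Pipeline Utility Steps zip step can do both in one go via its archive parameter:
zip zipFile: 'artifacts.zip', archive: true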

Related

Jenkins running out of storage space

I have seen many people mention using the Workspace Cleanup plugin and a corresponding script in the post-build actions of the pipeline.
We are using a multibranch pipeline here, and I'm facing challenges maintaining storage space, as it runs out on every build.
My management does not recommend deleting the whole workspace folder; instead we should choose specific files in the workspace and delete those.
Using the command du -sh ./*/ I can see the workspace storage is 2.5 GB, but when I cd into the workspace and run ls -lart, the files add up to only about 1 GB (not up to 2.5 GB).
Can someone tell me where I have gone wrong, and help me understand how to restore the space in Jenkins?
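Since selectively deleting files came up: as a hedged sketch, the Workspace Cleanup plugin's cleanWs step takes include/exclude patterns, so a post section can delete only the heavy files rather than the whole workspace. The build command and the *.tar.gz pattern here are placeholders for whatever is actually large in your workspace:
pipeline {
    agent any
    stages {
        stage('Build') {
            steps { sh './build.sh' }
        }
    }
    post {
        always {
            // delete only files matching the pattern; everything else stays in place
            cleanWs(patterns: [[pattern: '**/*.tar.gz', type: 'INCLUDE']])
        }
    }
}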

How to send a folder as an attachment in a Jenkins job email notification

I have a Jenkins job with workspace C:\hello_world\test_output.
In the test output folder there are two things: one folder and one HTML file. I want to send the test output folder as a zip file attachment from the Jenkins job, but I haven't been able to do it. Please help.
Think of it as two steps: 1) zipping the files; and 2) sending the attachment.
I've done this by installing 7-Zip, then running the command:
"C:\Program Files\7-Zip\7z.exe" a -r C:\hello_world\test_output.zip C:\job\test_output\* -mem=AES256
With the https://plugins.jenkins.io/email-ext plugin installed, there's a lot of flexibility, including the ability to send attachments.
Bear in mind that some mail hosts, Gmail for example, have started blocking things like executables, even when they are found within zip files. If you've got users on such a host, you might run into trouble through no fault of your own.
Apart from that, depending on the OS Jenkins is running on, you could add a post-build "Execute shell" or "Execute Windows batch command" step that calls the zip tool of your choice, and then send an e-mail with attachments using the email-ext plugin, for example.
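In pipeline form, a minimal sketch of those two steps might look like this; the 7-Zip path is as above, the recipient address is a placeholder, and attachmentsPattern is resolved relative to the workspace:
// zip the folder, then mail it as an attachment via email-ext
bat '"C:\\Program Files\\7-Zip\\7z.exe" a test_output.zip test_output\\*'
emailext subject: "Test output from ${env.JOB_NAME} #${env.BUILD_NUMBER}",
         body: 'Zipped test output attached.',
         to: 'you@example.com',
         attachmentsPattern: 'test_output.zip'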

Jenkins - Add download link to file if build fails

I'm a little new to Jenkins, and I can't seem to figure this out. I have access to a Jenkins server that uses slaves to perform build jobs.
If a build fails, it stores a generated zip archive in a persistent workspace directory for further debugging. The zip file is generated by a Python script that keeps only the last 3 failed builds to conserve disk space (i.e. 3 failed builds will result in 3 archives in the folder, but a fourth failed build will delete the oldest archive before adding the new one).
What I'm trying to do is add a download link to a failed Jenkins run to allow users to quickly download the zip file that was generated for that build. But I'm really confused as to how to approach this!
So I guess the question is, how could I add a download link to a Jenkins Run page to a file generated during that Run if it fails?
Example usage scenario:
1. I build some code :)
2. It fails :(
3. I download the zip file (from the Run page) with the generated debug files and find the fix :)
4. Space doesn't get filled up as zip files are kept only for the last 3 builds!
Any help would be greatly appreciated! Thank you! I'm happy to provide more information if needed ^^ I am currently trying to use a system groovy script to do this, but perhaps artifacts would be more appropriate? I really can't seem to find good documentation on this!
There are built-in methods in Jenkins that allow this workflow; see the sketch after this list:
you can archive any artifact (in this case the zip) as a post-build step
the data retention strategy can be configured in the job via Discard old builds (Advanced)
to send out customized mails on build failure with an embedded download link, review the Email Ext Plugin; it allows you to configure individual texts for e.g. build failures, where you could add a link to download the artifact
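If you are on a pipeline job, a minimal sketch of those three pieces, with the build command, paths, recipient, and retention count as placeholders:
pipeline {
    agent any
    options {
        // keep artifacts for only the last 3 builds
        buildDiscarder(logRotator(artifactNumToKeepStr: '3'))
    }
    stages {
        stage('Build') {
            steps { sh './build.sh' }
        }
    }
    post {
        failure {
            // the zip shows up as a download link on the run page under Build Artifacts
            archiveArtifacts artifacts: 'debug/*.zip', allowEmptyArchive: true
            emailext subject: "Build failed: ${env.JOB_NAME} #${env.BUILD_NUMBER}",
                     body: "Debug archive: ${env.BUILD_URL}artifact/",
                     to: 'team@example.com'
        }
    }
}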

How can I use Jenkins to detect the presence of a file on an SFTP server?

I want to use Jenkins to monitor an SFTP site, and fire off a build when a given file type appears on that site.
How can I do this?
OK, in that case you should have two jobs.
First job: running every N minutes, with a bash script as a build step:
wget ftp://login:password@ftp.example.org/file.txt
Then you can use the Run Condition Plugin (https://wiki.jenkins-ci.org/display/JENKINS/Run+Condition+Plugin), which runs a step on the condition that the file "file.txt" (downloaded or not on a previous run) exists.
After that you can trigger your next job if the file exists (or do anything else).
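A rough pipeline sketch of that two-step idea, assuming curl was built with sftp support (otherwise keep the wget call above); the URL, schedule, and downstream job name are placeholders:
pipeline {
    agent any
    triggers { cron('H/15 * * * *') }   // poll every ~15 minutes
    stages {
        stage('Check for file') {
            steps {
                script {
                    // returnStatus keeps this build green when the file is absent
                    def rc = sh(returnStatus: true,
                        script: 'curl -sS -o /dev/null sftp://login:password@sftp.example.org/file.txt')
                    if (rc == 0) {
                        build job: 'process-file'   // trigger the downstream job
                    }
                }
            }
        }
    }
}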
Like the previous answer, I would use two jobs, but instead of bash scripts I would use Python and sftp, since that makes dealing with SSH a bit easier.
https://pypi.python.org/pypi/pysftp

Jenkins perforce plugin: can I get it to do full sync if key files are missing?

We are using the Jenkins Perforce plugin reasonably successfully in a semi-continuous-integration setup. The "reasonably" is mostly because our builds are slow, but that is not related to Jenkins and more to do with our own code.
One of the main problems we have is that if files have been deleted outside the builds - e.g. if we are low on disk space and somebody "prunes" the builds on the build machines - the p4 plugin cannot directly handle that. In the mode we run it in (without the full-sync flag) it assumes that files synced on the previous run are still there.
This is covered under "quirks" on the plugin page - https://wiki.jenkins-ci.org/display/JENKINS/Perforce+Plugin - which suggests that you do a "One Time Force Sync" to get the workspace back to normal. However, we usually have a couple of build machines with each config for a bit of redundancy, so the machine the next build runs on is not always the one that had the problem. This also makes it tricky to add new machines to the pool.
I wondered if somebody had a better solution, e.g. if certain key files are missing (indicative of a build having been wiped), could it do a forced sync anyway?
OK, it took me a while, but I eventually twigged an approach that works, at least in our setup. Near the top of the job we check for the existence of a particular file. If it does not exist, we assume the workspace has been wiped (if it does exist, we assume the workspace is OK). If it has been wiped, we do the equivalent of a "Remove From Client" of all the files. We actually modify a few files during the build, so I've added a revert in for good measure. Not sure if it is generally required - I suspect not - but it should do no harm.
On a PC this means adding the following to the first build step:
IF NOT EXIST //%P4CLIENT%/sdk/ChangeLog.txt (
REM Remove from client. Throw away any files being modified
p4 revert %WORKSPACE%/...
p4 sync %WORKSPACE%/...#none > nul
)
On a Mac (and Linux boxes I assume):
if ! [ -e //$P4CLIENT/sdk/ChangeLog.txt ]
then
# Remove from client. Throw away any files being modified
p4 revert $WORKSPACE/...
p4 sync $WORKSPACE/...#none > /dev/null
fi
sdk/ChangeLog.txt is the file we assume marks a valid installation. What this effectively does is reset the environment so that the next sync is the equivalent of a forced sync.
