Understanding Jenkins' "Archive the artifacts" option and its use

I'm trying to understand what it does. Currently, this is the value that I see: dist/*.tgz
From what I understand, our Grunt script makes a tgz file. However, I don't know what Jenkins does with it.
I got the following error when I didn't specify any pattern:
ERROR: No artifacts are configured for archiving.
You probably forgot to set the file pattern, so please go back to the configuration and specify it.
If you really did mean to archive all the files in the workspace, please specify "**"
Build step 'Archive the artifacts' changed build result to FAILURE

Most importantly, it allows you to archive items from your job's workspace in a persistent and accessible way, linked to the specific build number.
E.g., if you have a job Build that compiles your sources into program.exe, archiving it linked to the build that produced it, and keeping it accessible for developers or other jobs, can come in very handy.
Additionally, archived artifacts are transferred to your Jenkins master, so your job can run on any slave, but your archived files will always be accessible, even when that particular slave is offline.
Also, with the right configuration and plugins, other projects can access archived artifacts from other projects. E.g., a job Deploy that uploads your program.exe to some location becomes as trivial as copying the archived artifact of the last successful build into its workspace for the upload.
There's quite a bit of information on SO already, e.g. here.
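For reference, the post-build action maps directly onto the archiveArtifacts step in a pipeline. A minimal declarative sketch, assuming the Grunt build from the question and the dist/*.tgz pattern (the stage name is illustrative):

    pipeline {
        agent any
        stages {
            stage('Build') {
                steps {
                    // Run the Grunt build that produces the tarball in dist/
                    sh 'grunt build'
                }
            }
        }
        post {
            success {
                // Equivalent of the "Archive the artifacts" post-build action:
                // files matching the pattern are copied to the master and
                // kept under this build number
                archiveArtifacts artifacts: 'dist/*.tgz', fingerprint: true
            }
        }
    }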

Related

DevOps Build and Pipeline Design Pattern - Need some advice on deploying many individual files

We have a PoC on deploying a file to an old mainframe. There are many types of deployments that we do but this question focuses on individual files. We are able to SSH into the mainframe and we have a deployment pipeline with the steps needed to get one file into the correct location.
The problem is we have over 54,000 of these individual files. During a release we may deploy as few as 1-5 files, or a large deployment may be 250 files. Each of them will have a different source and target destination. Some of them may be sourced from the same folder and deployed to the same folder, but that is not guaranteed.
We can make the assumption that the files are immutable. There are issues on both build and release to consider:
Build - what is the artifact? Do we use one artifact for each release, which could contain 1-250 files? We don't want to have 250 build scripts for a release; that much we know.
Release - how do we use the pipelines? If you batch the files together, is it a one-click deploy to that environment? How would you determine whether someone added a file to the release? I guess we would need a new build that would create a new pipeline?
There are a few other things that come up, like needing to check the status in our change management system to confirm that the ticket for that file is in an approvable status. That is currently a deployment step.
I'm not sure whether this is the "answer," but this is our take on it so far:
The Artifact
We are going to create a "release" data file. In this file there will be a list of the files going out with each deployment. We will organize the files by product line and create a branch of all files for a specific product. Then the build will read the data file and create the artifact from the list of files related to that release. We will also include the data file in the artifact.
Deployment
We will create a Parent/Child release process. The Parent script will loop through the data file and call the Child script. The Child script will deploy an individual file, represented by a row in the data file. To deploy to Production, only the Parent will be deployed; the Child will never be deployed individually.
Multiple Deployment Times/Dependencies
We have a requirement to deploy certain files at certain times. One production file deployment may be at 1 PM and another at 7 PM in the same release. To accommodate this, we will include the deployment time in the data file. After each file is deployed, we will somehow keep track that it has been deployed.
Change Management
We will do our change management system check in each Child script to make sure the file is ready to deploy. If an individual file is not approved, we will not stop processing; we will finish the deployment for any other files in the list that are approved, and then, as the last step in the deployment, fail the deploy. We need to make the "tracking" available to the teams so they can see what caused the deploy to fail.
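As a rough scripted-pipeline sketch of what the Parent loop could look like (the file name release-data.csv, its columns, and the child job name deploy-single-file are all made up for illustration):

    node {
        def failures = []
        // Hypothetical data file: one row per file, "sourcePath,targetPath,deployTime"
        def rows = readFile('release-data.csv').readLines()
        for (row in rows) {
            def parts = row.split(',')
            def source = parts[0]
            def target = parts[1]
            // The Child job performs the change-management check and the
            // actual single-file deploy; propagate: false keeps the loop going
            def childRun = build(job: 'deploy-single-file',
                                 propagate: false,
                                 parameters: [string(name: 'SOURCE', value: source),
                                              string(name: 'TARGET', value: target)])
            if (childRun.result != 'SUCCESS') {
                failures << source   // keep going, record the failure
            }
        }
        // Last step: fail the deploy only after all approved files went out
        if (failures) {
            error("Deployment failed for: ${failures.join(', ')}")
        }
    }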
Making some assumptions here, and this is the happy path, but perhaps this will help get you to the ultimate solution.
Have a master branch that has a products folder. This folder would then have subfolders for each product, which hold the files:
master/
  products/
    productA/
    productB/
    productN/
The dev team would work on files in separate fix branches, then merge into master via pull requests. You can set up policies and gates for auditing.
Create a build pipeline with a PowerShell script task that checks for deltas (possible example) in master and copies/publishes only those changes to an artifact destination folder with the same product subfolder layout (a Jenkins-style sketch of this delta check follows below).
Create a release pipeline that has a stage for each product and/or destination path on the mainframe. Each stage would have a custom task that copies the files from the appropriate product folder to the destination via SSH. You could even create a task group that gets re-used, then just use variables for folder paths, etc. NOTE: There will be quite a few stages, but that's what release pipelines are for :)
Schedule the release pipeline to run at the desired times. You can set up notifications on failures so someone or some process can investigate, retry, etc.
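The tooling above is TFS/Azure DevOps, but the delta check itself is tool-agnostic. As a rough Jenkins-flavoured sketch of the same idea, assuming git and the products/ layout above (the staging folder name is invented):

    node {
        checkout scm
        // GIT_PREVIOUS_SUCCESSFUL_COMMIT is set by the Jenkins git plugin;
        // fall back to the parent commit on a first build
        def base = env.GIT_PREVIOUS_SUCCESSFUL_COMMIT ?: 'HEAD~1'
        def changed = sh(script: "git diff --name-only ${base} HEAD -- products/",
                         returnStdout: true).trim()
        if (changed) {
            for (path in changed.readLines()) {
                // Copy each changed file, preserving the product subfolder layout
                sh "mkdir -p artifact-staging/\$(dirname '${path}') && cp '${path}' 'artifact-staging/${path}'"
            }
        }
        archiveArtifacts artifacts: 'artifact-staging/**', allowEmptyArchive: true
    }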

How to archive all the build versions (Artifacts) in target folder

Each time I generate my build through Jenkins, my existing jar file in the target folder is overwritten by Maven. For example: I have an existing version 1.0 in the Jenkins target folder; now if I create a new build with version 1.1, the previous version in my target folder gets overwritten.
I don't want that to happen; I want to archive all the versions (because we might provide some of the old features to a certain set of customers). I am just trying to understand whether there is a way to do this in a Jenkins pipeline. I'd prefer not to use plugins; it would be nice to do it the declarative way using a Jenkinsfile.
First of all, it's not the best solution to store your artifacts just in the target folder without copying them anywhere else. Usually all needed build artifacts are stored in Nexus or Artifactory repositories (of course, you can also copy them to some local directory). You can do that in a pipeline Jenkinsfile as well, but you still need to install the corresponding plugin. For example, for publishing artifacts to a Nexus repo, you can use the Nexus Platform Plugin; see this answer for details.
As for your target folder being overwritten, I'm not sure it's cleaned by Jenkins by default. To clean the workspace, you need to enable the Discard old builds option in the job configuration first.
It seems that you just execute the mvn clean ... command; that's why the target folder is cleaned, so I would recommend checking that first.
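If you just want every versioned jar to survive without extra plugins, a minimal declarative sketch (the Maven invocation and file pattern are assumptions) could look like this:

    pipeline {
        agent any
        stages {
            stage('Build') {
                steps {
                    // mvn clean wipes target/, but that's fine: the jar only
                    // needs to survive as an archived artifact
                    sh 'mvn clean package'
                }
            }
        }
        post {
            success {
                // Artifacts are stored on the master per build number, so the
                // 1.0 jar from an earlier build is never overwritten by 1.1
                archiveArtifacts artifacts: 'target/*.jar', fingerprint: true
            }
        }
    }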

How to keep artifacts directory during the multi-configuration build in TFS?

I have a solution with multiple projects, some of which are web apps. I have set up a multi-configuration build in TFS vNext that builds each single app, creates an MSDeploy package, gets the proper staging configuration files, and adds or replaces the files in the package archive.
I'd like to use the deployment files created as artifacts in a Release Management pipeline. The problem is that the artifacts directory is purged before each build (i.e. each build of a web application). At the end, only the artifacts of the last app that was built are left there.
I can certainly configure the step to copy the artifacts somewhere else, but then the question is how to delete them only at the very start of the build (and by that I mean the build of all projects).
Is there a way to disable purging of the artifacts directory, or to perform an operation only at the beginning of the build? Does anyone have similar experience?
Use the "Publish Artifacts" task to store the artifacts in a UNC location or in TFS itself so they're available for release.

Renaming a Jenkins Job

I have renamed a Jenkins job from the Jenkins GUI, changing the project name in the Configure menu and hitting Save afterwards.
However, the workspace name for this Jenkins job has not been changed. What I am finding is that upon job execution a new workspace is created with the new name, and none of the contents of the old workspace are copied over.
So the issue is that the contents of the old workspace are not copied to the new workspace.
What should I do instead?
I know there are several questions in SO in this area. However those do not answer my question.
Renaming job in jenkins/hudson
Rename a job in Jenkins
So please check this before marking this question as a duplicate.
I was able to work around this issue using the Use custom workspace option.
To change this location, choose Configure on the job and click the Advanced button in the Advanced Project Options section.
After opening the settings, you will find some more configuration options for your job. Look for the Use custom workspace option on the right-hand side and check the box.
Reference: Jenkins: Change Workspaces and Build Directory Locations
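For pipeline jobs, the equivalent of that checkbox is the customWorkspace option in the agent section. A minimal sketch, where the label and path are just examples:

    pipeline {
        agent {
            node {
                label 'built-in'
                // Point the job back at the old workspace directory after the rename
                customWorkspace '/var/lib/jenkins/workspace/my-old-job-name'
            }
        }
        stages {
            stage('Build') {
                steps {
                    sh 'ls -la'   // the old workspace contents are visible here
                }
            }
        }
    }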
Workspaces are volatile by nature and may reside on a build node which has gone offline, so your build job should not rely on files being present in the workspace. However, sometimes you will benefit from a speed-up by reusing unchanged files existing in the workspace and decide not to clean them.
When you start a build, a new workspace is (as you noted) created. This is the correct behaviour: you should not need to store files in your workspace between builds, but rather set up your system to load all sources from your VCS. This way you will always be able to make a fresh build from source; there are also a few options available to clear old files from the workspace.
If you do not want to populate the workspace from a source-code add-on, you can always use the custom shell script feature to run a few shell commands to copy the needed files.

Jenkins Node/Slave(Mac iOS build) - how to get the build package

I have set up Jenkins and everything is fine. It is connected (JNLP) and builds fine.
But how can I get the build back onto the master (the server hosting Jenkins)?
One option would be to run a script on the slave/node to copy the build, but since we have this very nice JNLP connection, my first thought was to get it through that connection.
Usually, you'd use the artifacts mechanism to save off the results of the build (the .app, for example) and then retrieve them in another script to take the next step; Jenkins takes care of storing them for you.
To save them off, add a post-build action to Archive the artifacts and then give the path of the artifacts you want to save (optionally excluding some elements, etc).
When I store artifacts for iPhone builds, I usually store the -dSYM.zip and .ipa files.
If you want to use them in another build step, you can then use the Copy Artifact Plugin to copy them as a pre-build step and then operate on them later (for example: if you want to manually release the .ipa and dSYM.zip files to TestFlightApp or HockeyApp or another distribution mechanism).
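In pipeline form the same flow might look roughly like the sketch below, shown as two separate jobs (job names and file patterns are assumptions; copyArtifacts comes from the Copy Artifact Plugin):

    // Build job, running on the Mac slave: archiving transfers the
    // outputs to the master, so they survive the slave going offline
    node('mac') {
        // ... xcodebuild / packaging steps ...
        archiveArtifacts artifacts: '*.ipa, *dSYM.zip'
    }

    // Deploy job: pull the archived files into this job's workspace
    node {
        copyArtifacts(projectName: 'ios-build', selector: lastSuccessful())
        // ... upload the .ipa and dSYM.zip to TestFlightApp/HockeyApp here ...
    }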
