Brief background. My team uses Jenkins for CI but (among other things) we pass our output to Azure DevOps Release pipelines, to be used for downstream testing. We currently have a working system for this, but it uses the Jenkins Team Foundation Server (TFS) plugin (to trigger the Azure DevOps release) and specific features of azure-pipelines-tasks (to pull the artifacts from Jenkins). The issue is that the former is deliberately disabled in recent versions of Jenkins (for licensing and security reasons) and the latter similarly has a bug talking to Jenkins. Basically we are stuck on Jenkins 2.263.1 with no sign that this will be rectified. It would seem prudent to use a completely different approach.
Intuitively, we need to be able to trigger the release pipeline programmatically. Additionally, I need to transfer the artifact - whether Jenkins would push it or Azure DevOps would pull it, I don't know; I'd guess the latter, as it is closer to the current setup. Either way, I am wondering if somebody already has instructions on how to do this, to save us re-inventing the wheel.
Install the TEE-CLC and invoke the CLI commands from a shell?
That seems to be MS's suggestion for their Azure plugins, which they have formally announced they are abandoning.
Of course, now they have quietly announced they won't maintain that either, so I guess the implied message is "go use GitHub?"
After going through the Azure REST API, chiefly here, and some experimentation, I can report what I got to work.
Because the original job was Traditional/Freestyle (the plugin did not support pipelines), I kept the job but replaced the plugin step with a bash script, run after the "archive" step I had in there anyway. The key statement in this script is:
curl -X POST -f \
-H "Authorization: Basic ${SERVICE_TOKEN_B64}" \
-H "Content-Type: application/json" \
-d "{\"definitionId\":${RELEASE_DEFINITION_ID},\"description\":\"Triggered by ${BUILD_TAG}\",\"isDraft\":false,\"artifacts\":[{\"alias\":\"_${JOB_BASE_NAME}\",\"instanceReference\":{\"id\":\"${BUILD_NUMBER}\",\"name\":\"${BUILD_DISPLAY_NAME}\"}}]" \
-o output.json \
"${AZURE_URL}/${ORGANISATION}/${PROJECT_NAME}/_apis/release/releases?api-version=5.0"
The various variables are mainly set in an "Inject Environment Variables" build step above.
AZURE_URL here is "https://vsrm.dev.azure.com", but could be something else, I guess, if you had your own enterprise instance of Azure DevOps. ORGANISATION and PROJECT_NAME should be obvious.
Less obvious is RELEASE_DEFINITION_ID. You can query this from the REST API, given the name of the release (pipeline) you are dealing with - essentially, "release definition" in the API equates to "release pipeline" in the Web UI. However, the simplest approach is to navigate in the Web UI to the release pipeline you are interested in (not an individual release job/instance). If you look at the URL for this, you will see definitionId=XXX as a parameter. You want the XXX value.
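If you would rather look it up programmatically, something like the following should do it - a sketch, assuming the same token and URL variables as in the main script, with "My Release Pipeline" as a placeholder name (the searchText parameter is from the release definitions REST documentation):

curl -sf \
-H "Authorization: Basic ${SERVICE_TOKEN_B64}" \
-o definitions.json \
"${AZURE_URL}/${ORGANISATION}/${PROJECT_NAME}/_apis/release/definitions?searchText=My%20Release%20Pipeline&api-version=5.0"
# Each entry in the returned "value" array has an "id" - that is the definitionId you want.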
SERVICE_TOKEN_B64 is a suitable PAT manipulated into a different format. What you actually want is to take the output of:
printf "%s"":$PAT" | base64
and save that as secret text in the Jenkins credentials system. Note that the plugin can do this on the fly (using the username and PAT from the credentials system), but if you do it in a bash script it will (typically) show the modified token in the log - which rather defeats the object.
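A quick way to sanity-check the stored token without echoing it is to call a harmless read-only endpoint - a sketch, noting that the core APIs live on dev.azure.com rather than the vsrm host used for releases:

curl -sf \
-H "Authorization: Basic ${SERVICE_TOKEN_B64}" \
"https://dev.azure.com/${ORGANISATION}/_apis/projects?api-version=5.0" > /dev/null \
&& echo "token OK" || echo "token rejected"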
This leaves the artifacts. It took me quite a while to discover that i) you need the artifacts block at all and ii) its payload is quite important. We have what I assume is a fairly standard setup, where there is a "Jenkins" artifact on the Azure release pipeline, with "Specify at time of release creation" for the Default Version - I didn't set the Azure release pipeline up, but I suspect that is all fairly standard too. With this setup, the artifact part is crucial: without it (or with it set to an empty array) I was getting 400 errors, which baffled me - I raised a parallel question here. Looking back, the scenario is fairly straightforward. The artifacts payload itself is:
[{"alias":"_${JOB_BASE_NAME}","instanceReference":{"id":"${BUILD_NUMBER}","name":"${BUILD_DISPLAY_NAME}"}}]
The alias value is the Artifact name in the Azure release pipeline - I am not sure if prefixing the job name with "_" is standard or just the convention used by the person who set the pipeline up. The other values could, I think, be treated as magic, but my guess is that "id" is the actual build number of the Jenkins job instance from which the artifact itself is pulled (the Jenkins job name comes from the Azure pipeline settings) and "name" is used for display/logging. As I said, this works for our purposes, but your usage may differ.
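If you are unsure of the alias, you can ask the API for the definition itself; the artifacts it returns include their aliases. A sketch, reusing the variables above:

curl -sf \
-H "Authorization: Basic ${SERVICE_TOKEN_B64}" \
"${AZURE_URL}/${ORGANISATION}/${PROJECT_NAME}/_apis/release/definitions/${RELEASE_DEFINITION_ID}?api-version=5.0" \
| grep -o '"alias":"[^"]*"'   # crude, but avoids assuming jq is installed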
Related
I have a repository that I can create a release for. I have Jenkins set up, and since Jenkins is hosted inside a firewall that restricts any communication from outside the network, the GitHub webhook doesn't work. Getting a reverse proxy to work is also a bit of a challenge for me. I understand that the GitHub webhook sends out a JSON payload and that I can qualify it based on release. But as I previously mentioned, this won't work because Jenkins and GitHub cannot talk to each other.
Therefore, I tried this solution: filtering the branches or tags that Jenkins will build on. The following are the things I tried, and none of them worked. Every time I run a build, Jenkins just builds it.
I also tried the below mentioned regex,
:refs\/tags\/(\d+\.\d+\.\d+)
I also tried [0-9] instead of \d. It built it every single time.
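For reference, the pattern above goes in the Git plugin's "Branch Specifier" field (the leading colon marks it as a regex). From what I have read, the job also needs a refspec that actually fetches tag refs, so the two fields together would look roughly like this - a sketch I have not confirmed:

Refspec (under Advanced):  +refs/tags/*:refs/remotes/origin/tags/*
Branch Specifier:          :refs\/tags\/(\d+\.\d+\.\d+)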
Am I missing something? Or is that how Jenkins works? Even though we qualify the builds to run only on certain tags or releases, if we click on Build Now, it just runs every single time?
My requirement is very simple. I want the Jenkins build to run only on the release I created, even if the release is 'n' commits behind master. How can I achieve this?
I have recently started to mess about with Jenkins and am unsure how to deploy my web app to a basic server. I've gotten into the Pipeline (https://jenkins.io/doc/book/pipeline/) and it seems like a fantastic way to work.
Where I'm a bit stuck is in two spots:
Once my repo is in my workspace within Jenkins, how do I prep it so I am only deploying the files necessary for the application? For example, I don't need my src/ directory or my Vagrantfile when I'm deploying things.
How do I deploy my app to the server? I see examples all over the place, but I am getting a bit lost since there seems to be so many ways to do this. I'm assuming scp or something like that...?
To build off of #2, is there a way to deploy web apps as transactions (in one shot) rather than file-by-file?
Please let me know if I can provide any information for potential answers!
I can't speak to your specific use case, but a common way to do this is the build-and-deploy model, where you have two Jenkins jobs. The "build" job will check out the source, run build commands such as Maven or make, and lastly "archive" the build artifacts. The latter is an option under the 'post-build actions' tab at the bottom.
In the "deploy" job, you will grab the artifacts of your choice. You can fetch a single file, all of them, and everything in between. This requires use of the 'Copy Artifact' plug-in and it allows you to copy files generated by other jobs. Now you can run your usual deploy script in the 'Execute Command' box. Most command line paradigms are supported out of the box such as setting environment variables.
The instructions above assume that you want to run your application off of a host that you've provisioned as a Jenkins slave.
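To make the deploy step concrete - a minimal sketch of what could go in that command box, assuming the archived output lives under build/, pushing to a placeholder host over ssh, and using a symlink swap so the cutover happens in one step (which also speaks to the asker's third point):

#!/bin/sh
set -e
# Hypothetical host and paths - adjust for your environment.
HOST="deploy@www.example.com"
RELEASE_DIR="/var/www/releases/build-${BUILD_NUMBER}"

# Copy only the built output (no src/, no Vagrantfile) to a versioned directory.
rsync -az --delete build/ "${HOST}:${RELEASE_DIR}/"

# Atomic-ish cutover: repoint the 'current' symlink in a single operation.
ssh "${HOST}" "ln -sfn '${RELEASE_DIR}' /var/www/current"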
Use artifacts as mentioned by Paul Back, or a 3rd-party Artifactory server as in the video.
This is always tricky and error-prone. Why not spin up a fresh server with the new release (verified by a human once)?
Jenkins & Ansible is the answer here. This is how I deploy to production, since I am in no need to use anything like Docker (too many issues with particular app) so have to run the app natively. Quick example would be
You monitor a specific branch in GitLab / GitHub or whatever else, and call a webhook on push / merge etc. on that branch. At that point you deal with anything you need to do by running a playbook from the Jenkins job that monitors that branch.
In my case Jenkins and Ansible run on the same server. Jenkins runs the Ansible playbook that does whatever I need to do.
For example, with Ansible I copy certain files that need to be there, run configs / change filenames etc., set up nginx, run composer,
you get the point.
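To illustrate, the shell step in that Jenkins job can be as small as a single ansible-playbook call - a sketch, with the inventory and playbook names being hypothetical:

# Run the deploy playbook; pass Jenkins build info through as Ansible variables.
ansible-playbook -i inventory/production deploy.yml \
--extra-vars "build_number=${BUILD_NUMBER} job_name=${JOB_NAME}"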
I am trying to wrap my head around this. Most CI/CD examples/projects have a single master that is always released, and use some variant of, e.g., git-flow to have a develop branch. Once tagged, the code goes to master.
Either way, master is always released to production.
But in the real world as I see it, there are human gates for release to production and other environments. What mechanism do you use to manage the deployment of different versions?
For example:
v1.5 is the current production release
v1.6 has passed all tests, artifacts are ready, it is tagged as valid, but business decides to deploy it only to staging, awaiting an opportune moment to deploy
v1.5 is deployed to a demo environment
v2.0 has also passed all tests, but is in UAT, subject to the customer being happy, as it is a major release
There could be many more such environments - production, staging, UAT, demo, demo2, etc.
What mechanism do you use to handle the tagging of a particular version for a particular environment, and the actual deployment thereof?
Although there are probably a few ways to do it, I use the Build Pipeline plugin https://wiki.jenkins-ci.org/display/JENKINS/Build+Pipeline+Plugin along with the Copy Artifact plugin https://wiki.jenkins-ci.org/display/JENKINS/Copy+Artifact+Plugin
With these, you can create individual jobs for each piece of your environment and link them all together.
So as in your example, the pipeline would look like:
Build -> Test and Deploy to UAT (2.0) -> Deploy to Staging (1.6) -> Demo (1.5) -> Prod (1.5)
Each piece represents a different build in Jenkins. The idea behind continuous integration is that you create the binaries once and carry them down the pipeline, changing only configuration along the way. In the build job, the artifacts are created and then archived. In any job after that, the artifact is picked up from the upstream job, some work is done, and then it gets re-archived for the next downstream job. So the "deploy to staging" job would go to the "Test and Deploy to UAT" job to get its binary. The entire concept of Continuous Delivery boils down to the build pipeline: http://en.wikipedia.org/wiki/Continuous_delivery (and yes, I did just cite Wikipedia).
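Concretely, the hand-off between two freestyle jobs looks roughly like this (a sketch; the job names and war path are hypothetical):

Build job:
    Post-build Actions -> Archive the artifacts: target/app.war
Test and Deploy to UAT job:
    Build step -> Copy artifacts from another project
        Project name:      Build
        Which build:       Latest successful build
        Artifacts to copy: target/app.war
    Post-build Actions -> Archive the artifacts: target/app.war   (re-archive for the next downstream job)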
As for tagging individual binaries for specific environments: that is, by definition, not continuous integration. A binary is supposed to be created in a way that it can easily be propagated from one environment to the next. So unfortunately, individual builds for specific environments can never be continuous delivery. You can use Jenkins as a CI server all you want, but if your process does not match, you will never achieve true continuous integration.
Branching, merging and check-ins always seem to be a touchy subject when it comes to Continuous Integration, so I won't go into it much. But a lot of people share the idea that: "If different members of the team are working on separate branches, then by definition they are not participating in the continuous integration process." http://eugenedvorkin.com/continuous-integration-strategies-for-branching-and-merging/
EDIT
For flagging specific builds, it sounds like you're looking to make use of this feature: https://wiki.jenkins-ci.org/display/JENKINS/Fingerprint ... which gets the job done effectively, giving you the entire life of any individual artifact. A slightly more complex solution would be Artifactory, which is essentially artifact source control.
I explained the concept of the deployment process above, and without information on your specific environment it is hard to go much further. But for me, for Java applications deployed to Tomcat containers, the Deploy plugin works great: https://wiki.jenkins-ci.org/display/JENKINS/Deploy+Plugin
You shouldn't have to worry about selecting which artifact to deploy. The pipeline should be set up to always deploy the latest artifact that was archived in its corresponding upstream job.
Maybe Docker can help you out with this issue. It is able to deploy images of projects to a specific environment. If that environment has a Docker client or a Docker daemon, you are able to request specific information about that environment and the project (to be) deployed on it.
Jenkins can still play a huge part in your pipeline for the integration part and you could let docker do the delivery part.
Docker: https://www.docker.com
Docker plugin for jenkins: https://wiki.jenkins-ci.org/display/JENKINS/Docker+build+step+plugin
Docker also has support for windows machines and .NET.
I have a project where part of the build process is to create a native library on a remote machine. This is currently a manual process outside of the CI builds made by Jenkins.
The setup in question is that the Jenkins master server builds a Git-based Maven project, which has a dependency on a native library that can only be built on a specific machine. Jenkins can't compile this module, and because of this, it is currently a manual process.
I would like to install a Jenkins slave on the machine that creates the native library, and have it return the compiled files to the Jenkins master, without handling any other parts of the build.
I am having trouble figuring out if this is even possible. The articles I have found on the subject discuss Jenkins slaves as a means of distributing the build, but I want the slave to take responsibility for a small part of the build process and nothing else. The Jenkins master should just send the build request to the slave and wait for the result, instead of trying to compile the code itself.
I do exactly the same. My setup is very similar to what Mark O'Connor and gaige are advising, and I am using the Copy Artifact plugin.
job A: produces a zip file on a Mac
job B, running on slave B (a Windows machine), takes the zip as input and produces an MSI
Here's the important part in the config of job B:
restrict job B to the proper slave using labels
make sure job B happens after job A
make sure artifacts from job A are sent to job B before your build
build your stuff
archive artifacts produced by job B
Delegating part of a job to a slave is something that would have to be done external to Jenkins, for example, using ssh.
However, as @kan indicates, you most likely want to extract the native library build as a separate job, and then have that job execute on a particular slave, or any slave that meets specific criteria.
To do this, my suggestion would be to use Labels in the node configurations to determine which slaves can be used for building that particular job.
In Jenkins > nodes > <slave node>, use the Labels property to set one-word labels that indicate your specific requirements, such as the OS or processor type.
Then, in the jobs that are node-specific, check Restrict where this project can be run and set the Label Expression to something that meets your criteria. If the criteria is simple, it will just be a single word, if you need a boolean, you can use those as well (such as OSX&&Lion in our case).
I believe this is all in the standard version of Jenkins, without need for a special plugin. Leave me a comment if it isn't and I'll try and diagnose which plugin enables this functionality.
This problem is solved by using a binary repository manager to centralize your software artifacts. Personally I use Nexus, but it could be something as dumb as a remote file system.
The idea is to publish the built artifact after each Jenkins job (if you don't like Nexus, you could use one of the "Publish over" plugins) and retrieve it as a build dependency in the next job.
This approach means it no longer matters where the build executes, and it has the added advantage of decoupling the build of each module component.
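As an illustration of the publish step, uploading to a Nexus raw hosted repository can be a one-liner in the job's shell step - a sketch, with the server URL, repository name and credential variables all being placeholders:

# Publish the compiled native library, versioned by Jenkins build number.
curl -f -u "${NEXUS_USER}:${NEXUS_PASS}" \
--upload-file build/libnative.so \
"https://nexus.example.com/repository/native-builds/libnative/${BUILD_NUMBER}/libnative.so"

The next job can then fetch that same URL (e.g. with curl -O) as its build dependency.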
I need to execute a few Jenkins jobs, such as "Release to Production", through the Jenkins UI using the logged-on user's credentials. The reason is that we have separate Support Team members who have access to the production boxes, and the Dev team members do not. So, in order to deploy any code base to production, all the Windows deploy commands (e.g. creating or updating files, folders etc.) need to be run with the credentials of a specific user who has access to the production box. That way, even if a Dev team member who doesn't have access to the production box but is a Jenkins admin executes the same job, it should fail with "Access Denied". The job should succeed only if it is run by a Support Team member with their credentials.
I tried using the parameterized plugin but wasn't able to pass the password successfully to the batch file that contains the MSDeploy instructions. Worse, the Jenkins console log displays the passed parameter in its output, which is a security issue.
I checked the Role-based security plugin, but that doesn't help me much. I just need a plugin that asks the user to provide their credentials before building the job, and uses those credentials to execute it, so that my MSDeploy command can deploy the code to the production boxes when a Support Team member builds that job. I wish there was support for impersonation.
Right now all the Jenkins jobs are executed using the service account that the Tomcat service hosting Jenkins is configured to run with.
Any help would be appreciated.
Just in case there is any confusion: a Jenkins job will always run as the same OS user. Matrix-based security applies to users who log into the Jenkins server, and controls features like creating or launching jobs.
You could configure the job to use a set of generic production credentials and then prevent your developers from invoking the job.
Perhaps a better approach would be to separate the process that builds the code from the one that deploys it. The following diagram (taken from the xebia-france project) demonstrates how some of my favourite tools, Rundeck and Nexus, can be integrated with Jenkins.
Finally, I highly recommend reading the following link:
Using Rundeck and Chef to build devops tool chains
Hi, I know I'm coming late to this thread, but I just ran into this issue and had a hard time solving it, so I thought I might share what I managed to set up.
First things first: if you want to run a Jenkins job "as a specific user" (with all the correct permissions), the easiest way is to run a Jenkins slave as this user.
Then you might very well stumble into the following: you probably want to run several slaves on the same Windows machine as Windows services. This is fine, as long as each slave has its own remote root directory and probably a specific "label" too.
Once you have managed to run your slave as a Windows service, launch the services console (run services.msc). Edit the newly created service's properties and go to the Log On tab. Select "Log on as: This account" and enter your account credentials.
Cheers :)
You can utilize the built-in Windows runas, or the PowerShell Invoke-Command cmdlet with -Credential. Both of these would store the username/password in plain text, so do think about the risks, but this gives you flexibility.
I'm surprised this doesn't have a better answer: set up an agent on another machine running as another service, and define the agent as a special "type" which picks up these jobs. Something along those lines is what I would expect, but I haven't seen an implementation like that in Jenkins (I'm new to Jenkins, so I was looking for an answer and found this thread).
Something else that could be considered, for someone more familiar with Jenkins: when you set the custom path to MSBuild, could you perhaps set that to runas /user:... msbuild.exe? I don't have a spare Jenkins server currently to try that on.