I would like to be able to centrally configure something like "build profiles" that I can apply to multiple projects in Jenkins.
For instance, I want to set up a compile, email, deploy chain to be used by several projects. When I change something in this chain, I want the change to be applied automatically to all linked projects.
Is there a convenient way to do this? I am also open to suggestions for other build systems, as long as they can deal with sbt projects.
I see there is an sbt plugin for Jenkins which looks popular, though I haven't used it.
I have used the Jenkins Job DSL plugin, which covers sbt out of the box. It works through a build step in a seed job that creates/regenerates other jobs (with an optional template).
The problem with having one generic job build separate projects is that all the job history gets merged together. I think it is better to use stand-alone jobs for each task, and the Job DSL lets you do that.
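A minimal sketch of such a seed-job script (the project names, Git URLs, sbt installation name and e-mail address are placeholders, and the exact sbt step signature may vary with your Job DSL version):

    // Seed job script: regenerating it re-applies the shared chain to every project
    def projects = ['service-a', 'service-b']                // hypothetical project list

    projects.each { name ->
        job("${name}-build") {
            scm {
                git("https://git.example.org/${name}.git")   // placeholder repo URL
            }
            steps {
                // compile/test/package with sbt, using a tool installation
                // configured under Manage Jenkins > Global Tool Configuration
                sbt('sbt-0.13', 'clean test package')
            }
            publishers {
                // e-mail the team; a deploy step could be added here as well
                mailer('team@example.org', true, false)
            }
        }
    }

Re-running the seed job regenerates every listed job, so changing this one script applies the change to all linked projects.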
TeamCity supports build configuration templates out of the box and recently added basic sbt support.
I am in the process of configuring Jenkins to deploy artifacts. I only need Apache Ant and Java to create the artifacts (both are available on the host machine) and no other external libraries. So I think using Maven would make it unnecessarily complex, as I have only two Ant files. I want to keep it as simple as possible.
What I want to achieve is:
1. Trigger a Jenkins job 'A' to build the artifact and deploy it to a Nexus repository.
2. Trigger another Jenkins job 'B' to take the same artifact generated in the step above and deploy it to the target environment.
Can anyone please help me identify the challenges with my approach and share some useful links for achieving this?
The short answer is yes, you can. Each of the components you mentioned can be used individually and integrated into your build pipeline. To be honest, your use case isn't a one-off and can be done easily if you start here.
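A rough sketch of what job A could look like as a declarative pipeline (the Nexus URL, repository path, credentials id and artifact names are placeholders, not values from the question):

    pipeline {
        agent any
        environment {
            // hypothetical Nexus repository URL
            NEXUS = 'https://nexus.example.org/repository/releases'
        }
        stages {
            stage('Build with Ant') {
                steps {
                    sh 'ant -f build.xml dist'   // assume this produces build/myapp.jar
                }
            }
            stage('Upload to Nexus') {
                steps {
                    // credentials stored in Jenkins under the id 'nexus-creds'
                    withCredentials([usernamePassword(credentialsId: 'nexus-creds',
                                                      usernameVariable: 'NEXUS_USER',
                                                      passwordVariable: 'NEXUS_PASS')]) {
                        sh '''
                            curl -u "$NEXUS_USER:$NEXUS_PASS" \
                                 --upload-file build/myapp.jar \
                                 "$NEXUS/com/example/myapp/1.0.0/myapp-1.0.0.jar"
                        '''
                    }
                }
            }
        }
    }

Job B can then download the same coordinates (with curl or a dependency manager) and deploy the file to the target environment, so both jobs work against the same published artifact.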
I am working with Jenkins, and we have quite a few projects that all use the same tasks, i.e. we set a few variables, change the version, restore packages, start SonarQube, build the solution, run unit/integration tests, stop SonarQube, etc. The only difference is something like {Solution_Name}; everything else is exactly the same.
My question is: is there a way to create one 'shared' job that does all that work, while the job for building the project passes the variables down to that shared worker job? What I'm looking for is a way to avoid creating all the tasks for every one of our services/components. It would be really nice if each of our services/components had only two tasks: one to set the variables and another to run the shared job.
Is this possible?
Thanks in advance.
You could potentially benefit from looking into the new pipelines as code feature.
https://jenkins.io/doc/book/pipeline/
With this pattern, you define your build pipeline in a Groovy script rather than in the Jenkins UI. The script is kept in the codebase of the project it builds, in a file called Jenkinsfile.
By checking this pipeline into a Git repository, you can create a minimal configuration on the Jenkins side and simply point it at a specific repo; it will then do whatever the pipeline says.
There are a few benefits to this approach if it works for your setup. The big one is that your build pipeline is fully versioned, just like the project it builds. The repository also becomes portable: it can be built on any Jenkins installation, across as many jobs as you like, as long as the pipeline plugins are installed.
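As a minimal sketch of how the shared steps could live in a Jenkinsfile with only the solution name varying per project (the commands, tool names and stages are placeholders for whatever your jobs actually run, and the SonarQube handling is omitted):

    pipeline {
        agent any
        environment {
            SOLUTION_NAME = 'MyService.sln'   // the only per-project difference
        }
        stages {
            stage('Restore packages') {
                steps { bat 'nuget restore "%SOLUTION_NAME%"' }
            }
            stage('Build') {
                steps { bat 'msbuild "%SOLUTION_NAME%" /p:Configuration=Release' }
            }
            stage('Tests') {
                steps { bat 'vstest.console.exe MyService.Tests.dll' }   // placeholder test runner
            }
        }
    }

If you copy this file between repositories, only the environment block changes; the common stages can also be moved into a Jenkins shared pipeline library so they are defined once.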
I have a project where part of the build process is to create a native library on a remote machine. This is currently a manual process outside of the CI builds made by Jenkins.
The setup in question is that the Jenkins master server builds a Git-based Maven project, which has a dependency on a native library that can only be built on a specific machine. Jenkins can't compile this module, and because of this it is currently a manual process.
I would like to install a Jenkins slave on the machine that creates the native library and have it return the compiled files to the Jenkins master, without handling any other parts of the build.
I am having trouble figuring out whether this is even possible. The articles I have found on the subject discuss Jenkins slaves as a means of distributing the build, but I want the slave to take responsibility for a small part of the build process and nothing else. The Jenkins master should just send the build request to the slave and wait for the result, instead of trying to compile the code itself.
I do exactly the same. My setup is very similar to what Mark O'Connor and gaige are advising, and I use the Copy Artifact plugin.
job A: produces a zip file on a Mac
job B: runs on slave B (a Windows machine), takes the zip as input and produces an MSI
Here's the important part in the config of job B:
restrict job B to the proper slave using labels
make sure job B happens after job A
make sure artifacts from job A are sent to job B before your build
build your stuff
archive artifacts produced by job B
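For reference, the same flow expressed as a pipeline looks roughly like this (a sketch only; the job name, label, file patterns and build script are placeholders, and it assumes the Copy Artifact plugin is installed):

    pipeline {
        agent { label 'windows' }        // restrict job B to the proper slave
        stages {
            stage('Fetch zip from job A') {
                steps {
                    copyArtifacts projectName: 'job-A', filter: '*.zip', selector: lastSuccessful()
                }
            }
            stage('Build MSI') {
                steps {
                    bat 'build_msi.bat'  // placeholder for whatever produces the MSI
                }
            }
        }
        post {
            success {
                archiveArtifacts artifacts: '*.msi'
            }
        }
    }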
Delegating part of a job to a slave is something that would have to be done external to Jenkins, for example, using ssh.
However, as #kan indicates, you most likely want to extract the native library build as a separate job and then have that job execute on a particular slave, or any slave that meets a specific criteria.
To do this, my suggestion would be to use Labels in the node configurations to determine which slaves can be used for building that particular job.
In Jenkins > nodes > <slave node>, use the Labels property to set one-word labels that indicate your specific requirements, such as the OS or processor type.
Then, in the jobs that are node-specific, check Restrict where this project can be run and set the Label Expression to something that meets your criteria. If the criteria are simple, it will just be a single word; if you need a boolean expression, you can use one as well (such as OSX&&Lion in our case).
I believe this is all in the standard version of Jenkins, without the need for a special plugin. Leave me a comment if it isn't and I'll try to work out which plugin enables this functionality.
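If you use pipeline jobs rather than freestyle jobs, the same label restriction can be expressed in the Jenkinsfile; a minimal sketch (label names and the build command are placeholders):

    pipeline {
        // run only on nodes carrying both labels, mirroring the
        // "Restrict where this project can be run" freestyle option
        agent { label 'OSX && Lion' }
        stages {
            stage('Build native library') {
                steps { sh './build-native.sh' }   // placeholder build command
            }
        }
    }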
This problem is solved by using a binary repository manager to centralize your software artifacts. Personally I use Nexus, but it could be something as dumb as a remote file system.
The idea is to publish the built artifact after each Jenkins job (if you don't like Nexus, you could use one of the "Publish Over" plugins) and retrieve it as a build dependency in the next job.
This approach means it no longer matters where the build executes, and it has the added advantage of decoupling the build of each module or component.
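A sketch of the "publish on the special machine, consume everywhere" idea as a pipeline (the label is a placeholder, and it assumes the native project is a Maven module whose distributionManagement points at your repository manager):

    pipeline {
        agent { label 'native-build-machine' }   // the only machine that can compile the library
        stages {
            stage('Build and publish native library') {
                steps {
                    // mvn deploy uploads the artifact to the repository manager,
                    // so the main build can resolve it like any other dependency
                    sh 'mvn -B clean deploy'
                }
            }
        }
    }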
I'm prototyping a new build system using Jenkins, Gradle, and Artifactory. There seem to be conflicting, or rather overlapping, features in these tools with regard to specifying the build artifacts and their destination. I see three paths going forward:
Specify the artifact settings on the particular task in Jenkins, using the Jenkins Artifactory plugin.
Specify the artifact settings in the Gradle build scripts, using the Gradle Artifactory plugin.
Specify generic maven repo settings in the Gradle build scripts, using the standard Gradle "maven" plugin.
I see pros and cons to all of these approaches, but none of them is missing a critical feature for our builds, as far as I can see.
To further my confusion, the Gradle Artifactory plugin wiki states:
Build Server Integration - When running Gradle builds in your continuous integration build server, it is recommended to use one of the Artifactory Plugins for Jenkins, TeamCity or Bamboo to configure resolution and publishing to Artifactory with build-info capturing, via your build server UI.
So, some questions to get the conversation going:
Does it make sense to clutter the build scripts with artifact logic? It might help to add that developers don't deploy. Currently, I only see build artifacts being uploaded from the Jenkins task.
Does leaving all of this build logic in the task configuration expose us to issues, in the event that the CI server is down?
What about version control for artifact changes done through the CI interface?
I've seen simple Bamboo configurations that specify the build artifacts through the CI server UI, rather than the POMs. Is this just a bad build practice?
Is there a killer tool integration feature that separates one of these approaches from the other?
How useful is the build info object? Is that only available in the Jenkins Artifactory plugin and not the Gradle Artifactory plugin?
I am really hoping to hear from existing users of these tools and what pitfalls/requirements may have led them to one of the approaches above (or perhaps even a better one that I haven't considered yet).
Does it make sense to clutter the build scripts with artifact logic? It might help to add that developers don't deploy. Currently, I only see build artifacts being uploaded from the Jenkins task.
I'd say that's the way to go. Your build server is the single point of truth, and only artifacts built in the build server should be deployed.
Does leaving all of this build logic in the task configuration expose us to issues, in the event that the CI server is down?
That one is simple: you shouldn't deploy while your CI server is down. Building on a local machine might produce wrong artifacts, which shouldn't be deployed.
What about version control for artifact changes done through the CI interface?
Not sure I understood your question.
I've seen simple Bamboo configurations that specify the build artifacts through the CI server UI, rather than the POMs. Is this just a bad build practice?
This configuration ignores Maven's ability to deploy, and I am not sure I can find a good scenario to justify it. The only thing I can think of is deferred deployment, but the Artifactory plugin can take care of that.
Is there a killer tool integration feature that separates one of these approaches from the other?
Now we got to the essence :)
Well, the advantage of defining what you deploy in your build script (in the case of Gradle) is the flexibility to fine-tune every aspect of the deployment (think about the dynamic properties you might want to add in certain cases). Another very serious advantage is that your build is source code, which means it is versioned in your version control.
The advantage of defining the deployment details in the build server configuration is that the build server is the only place the deployment should occur. So, if you don't have the deployment details in your build script, you know for sure it won't be deployed standalone.
So, how can you combine the two to get the advantages of both worlds?
Code your deployment logic in your Gradle script using the Artifactory plugin DSL. Provide details like the username and password from properties, which exist on the build server only.
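A minimal build.gradle sketch of that approach (the context URL, repository key, plugin version and property names are assumptions, not values from the answer):

    plugins {
        id 'java'
        id 'maven-publish'
        // assumption: a 4.x release of the Artifactory Gradle plugin; pin the one you use
        id 'com.jfrog.artifactory' version '4.33.1'
    }

    publishing {
        publications {
            mavenJava(MavenPublication) {
                from components.java               // publish the jar plus its POM
            }
        }
    }

    artifactory {
        contextUrl = 'https://artifactory.example.org/artifactory'
        publish {
            repository {
                repoKey = 'libs-release-local'
                // credentials live in gradle.properties on the build server only,
                // so a developer machine cannot deploy by accident
                username = project.findProperty('artifactory_user')
                password = project.findProperty('artifactory_password')
            }
            defaults {
                publications('mavenJava')
            }
        }
    }

The Jenkins job then simply runs gradle artifactoryPublish; on a developer machine the properties are absent, so there is nothing to authenticate a standalone deploy with.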
How useful is the build info object?
Extremely useful. The information in buildInfo is harvested during the build process, and buildInfo is the only place it exists. Having this information is the only way you will be able to reproduce this build in the future.
Is that only available in the Jenkins Artifactory plugin and not the Gradle Artifactory plugin?
The 'artifactory' and 'artifactory-publish' Gradle plugins both generate the buildInfo object, regardless of where they run (be it your local machine or the Jenkins build server).
Let's say I have this situation: I have three jobs. Job number one has two manually triggered downstream jobs (deploy to test and deploy to prod, for example). Something like this:
I want the deployment jobs (test-job-2, test-job-3) to require a password before they are triggered. How can I solve this with Jenkins?
The only option currently supported by the Build Pipeline Plugin is a manually triggered downstream job. But that job starts right after you click on it. I would like to require the user to manually enter some parameters (a password, for example).
Is there some workaround? I was thinking of using the Promoted Builds Plugin. The deployment jobs would run in a "dry run" mode - just checking that we have ssh access to the server and some other basic stuff - and then, in order to deploy, you would have to promote the build.
This approach isn't very nice though. Build pipeline and promoted builds plugins don't interact with each other very well.
This is not exactly what you want, but I guess it would somehow solve your problem.
View Job Filters
Using this feature in tandem with a security feature such as standard matrix-based security can help you create a view that shows different jobs depending on who is logged in.
I use different Jenkins servers to "complete the pipeline": a Build Publisher job publishes the last part of the pipeline to the other Jenkins instance, and I pick it up from there. Operations teams have access to the "prod" Jenkins system, and developers have access to the "dev" system.