Continuous Integration Clarification - Jenkins

I work on a team that maintains a Java website along with back-end Java jobs and shell script jobs.
After all developers complete their updates, only the relevant changes are committed to the source control system.
Later, Ant build scripts are run and WAR files are generated.
Along with these WAR files there will generally be shell scripts etc. to be copied to QA/PROD.
Then, one fine day, a team called the release management team transfers the code from our dev environment to QA/PROD.
Recently I came across continuous integration systems like Jenkins/Hudson.
Can these tools build all the changes committed and automatically transfer my code to QA/PROD?
BTW, I work in an AIX server environment and use Tomcat as the container.
I am mainly curious whether the tool will be able to copy my code to QA/PROD.
Please clarify.

The answer is almost certainly yes, depending on your particular setup for copying the code. There are a large number of plugins for this purpose on the appropriate Jenkins wiki page. You should be able to find something there for your needs.
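For illustration, here is a minimal Jenkins declarative pipeline sketch for that kind of flow. It assumes SSH access from the Jenkins node to the QA machine is already set up; the user, host name, paths, and Ant target below are hypothetical placeholders:

pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                // Run the existing Ant build to produce the WAR file(s)
                sh 'ant war'
            }
        }
        stage('Deploy to QA') {
            steps {
                // Copy the WAR and helper shell scripts over SSH.
                // 'deployer', 'qa-host', and all paths are placeholders.
                sh 'scp dist/mysite.war deployer@qa-host:/opt/tomcat/webapps/'
                sh 'scp scripts/*.sh deployer@qa-host:/opt/app/scripts/'
            }
        }
    }
}

Plugins such as Publish Over SSH wrap the same idea in a UI; for PROD, teams typically add a manual approval step rather than deploying automatically.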

Related

Jenkins Bitbucket SSDT Continuous Integration DevOps Process

CI/CD process with Jenkins, Bitbucket, and SSDT (SQL Server Data Tools).
Please list the steps to perform the CI/CD process,
including what plugins I need to install in Jenkins for SSDT (SSIS, ISPAC file) or a SQL database solution (DACPAC file).
This question is very broad, and as with everything related to databases, the best answer is "it depends". As far as I know, there are no plugins for either Jenkins or Bitbucket that work well with SSDT, so you'll need to implement all your actions yourself.
How the pipeline should look depends on your system. There are a lot of questions you'll need to answer first, and without knowing your exact situation it is very hard to suggest anything specific. Example questions:
How many environments do you have?
Do you have tests?
Can somebody change the state of the destination database manually, bypassing the CI/CD pipeline?
Will you run publish on every commit?
Do you trust SSDT to decide how to publish the database? (Most people would like to preview the script that will be executed on prod.)
After answering these questions, you'll know what you need. Then prepare the proper publish profile, exclude/ignore/add the object types you'd like to deploy, and use the MSBuild.exe and SqlPackage.exe command-line utilities. You'll run these utilities with a specific set of arguments and paths to the publish configs, DACPACs, etc. Both Bamboo and Jenkins support running command-line commands for that.
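As a rough sketch of what that can look like in a Jenkins declarative pipeline on a Windows agent (the project name, publish profile, and tool paths are hypothetical and depend on your Visual Studio/SQL Server installation):

pipeline {
    agent { label 'windows' }
    stages {
        stage('Build DACPAC') {
            steps {
                // Build the SSDT database project; produces MyDatabase.dacpac
                bat '"C:\\Program Files (x86)\\Microsoft Visual Studio\\2019\\Professional\\MSBuild\\Current\\Bin\\MSBuild.exe" MyDatabase.sqlproj /p:Configuration=Release'
            }
        }
        stage('Publish') {
            steps {
                // Generate a preview script first so the changes can be reviewed...
                bat '"C:\\Program Files\\Microsoft SQL Server\\150\\DAC\\bin\\SqlPackage.exe" /Action:Script /SourceFile:bin\\Release\\MyDatabase.dacpac /Profile:qa.publish.xml /OutputPath:preview.sql'
                // ...then publish using the same saved publish profile
                bat '"C:\\Program Files\\Microsoft SQL Server\\150\\DAC\\bin\\SqlPackage.exe" /Action:Publish /SourceFile:bin\\Release\\MyDatabase.dacpac /Profile:qa.publish.xml'
            }
        }
    }
}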

How to create a deployment pipeline using Jenkins

What does it mean to deploy code from dev to prod environments using Jenkins? Can anyone please help? I currently have the source code in my GitLab. I need to deploy this code from the dev environment to the prod environment.
Thanks in advance.
Source code present in GitLab is just the set of files needed to create a WAR/EAR/JAR to run the application.
It's the environment files, if present, that make the application behave slightly differently in each environment, i.e. DEV/PROD. The data that you see on DEV will not be the same as what you see on PROD (where the application is live), as developers tend to test and modify code/data to ensure that the application works as expected. This is fine on DEV but is a big no-no on PROD, as it will impact the business.
Deploying code from dev to prod environments just means building the application with the right environment files, e.g. DEV points to the xyz DB but PROD points to the abc DB.
All this can be achieved with Jenkins, and if your project uses Maven/Gradle, then you can achieve the above with a single command line. (A bit of googling will help you here.)
If your project doesn't involve Maven/Gradle, then you will have to replace the environment file each time a build happens, based on a parameter that can be passed from Jenkins.
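As a minimal sketch of the Maven case, assuming your pom.xml defines dev and prod profiles tied to the matching environment files (the repository URL and profile names are hypothetical):

pipeline {
    agent any
    parameters {
        // Pick which environment's configuration the build should use
        choice(name: 'TARGET_ENV', choices: ['dev', 'prod'], description: 'Target environment')
    }
    stages {
        stage('Build') {
            steps {
                git 'https://gitlab.example.com/myteam/myapp.git'
                // The "single line" build: Maven selects the environment profile
                sh "mvn clean package -P${params.TARGET_ENV}"
            }
        }
    }
}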
This whole process is part of DevOps culture. In simple terms it looks like this:
Developer pushes changes to source control (e.g. GitLab).
The build server (e.g. Jenkins) automatically downloads the latest changes and builds the application (i.e. creates setup files or just the binaries). Usually you run tests (unit, integration, automation tests, etc.). If something fails, developers get notified about it. This whole process is called continuous integration.
If everything went right, you can deploy your application to production. This part of the process is called continuous deployment.
It's a common strategy for web apps. For larger projects, a QA team tests the software, and the software gets deployed once the QA team approves it.
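Put together, that flow might look like the following minimal declarative pipeline; the build/test commands and the manual approval gate are illustrative, not a prescription:

pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                // Continuous integration: build the latest committed changes
                sh 'mvn clean package'
            }
        }
        stage('Test') {
            steps {
                // A failure here stops the pipeline and notifies developers
                sh 'mvn test'
            }
        }
        stage('Deploy to production') {
            steps {
                // Optional manual gate, e.g. for QA sign-off
                input message: 'Deploy this build to production?'
                sh './deploy.sh prod' // hypothetical deployment script
            }
        }
    }
}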

Should we use a different server for automation scripts?

This is not related to a code fix, but to the general approach for test automation.
I have test automation written in JavaScript which runs perfectly on my machine as well as on my local Jenkins.
Now, I want to use my company's server (CentOS) and Jenkins so that it is accessible to everyone in my organization.
Issue: the Node.js version on the company's server needs an update to run my automation, but the server team won't do it, since they are not sure whether functionality used by other teams may start to break because of the upgrade.
Have you faced this situation? Do you have different servers for core code and automation scripts? Please suggest.
This is a complex situation that really depends on many variables. I would recommend using an agent that contains the proper version of Node.js. With this solution you can leave the current build server as it is while still using the exact version of Node you need. This requires an extra server/VM running the Jenkins agent software, but it removes the need to change the master server.
The solution my company went with is Jenkins 2.x with declarative pipelines and ephemeral Docker containers for builds. This allows you to use any Docker image, such as the official Node image. You can pin a version and build with that. With this approach there is no need to worry about the version on the server; the Jenkins master doesn't even need to run builds itself.
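A minimal sketch of that approach, assuming the Docker Pipeline plugin is installed and the build node can run Docker (the image tag and commands are placeholders):

pipeline {
    // Run the whole build inside an ephemeral Node.js container,
    // so the Node version is pinned regardless of what the server has installed
    agent {
        docker { image 'node:18' }
    }
    stages {
        stage('Run automation') {
            steps {
                sh 'node --version' // verify the pinned version
                sh 'npm ci'
                sh 'npm test'
            }
        }
    }
}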

Build and Deploy a Web Application with TFS 2015 Build

We have just installed TFS 2015 (Update 1) on-premise and are trying to create a Continuous Integration/Build system using the new TFS Build system. The build works fine, and gives me a green light, but when I look at the default build it has only built the binaries from the bin directory, and there seems to be no easy way to deploy the app on-premise to a local server.
There are two deploy options, a filesystem copy and a PowerShell script, and it would certainly be easy enough to use them to copy files to a new server; but since the build only built the binaries, I don't see a tool to gather up the web artifacts (cshtml, images, scripts, CSS, etc.) for this.
After an exhaustive google search, I've only found one article which talks about this at:
http://www.deliveron.com/blog/building-websites-team-foundation-build-2015/
However, this uses WebDeploy and creates a rather messy deploy package.
How can I deploy the site (standard MVC web application, in fact my tests are using the default boilerplate site created by the create project wizard) complete with artifacts to a local server in the easiest possible way? I don't want to have to install WebDeploy on the servers, and would rather use PowerShell or something to deploy the final artifacts.
The build is just the standard Visual Studio build template, with 4 steps (Build, Test, Index & Publish, Publish Build Artifacts).
We use "Visual Studio Build" step and as Arguments for MSBuild we use following line:
/p:DeployOnBuild=True /p:PublishProfile=$(DeploymentConfiguration)
On the Variables tab, DeploymentConfiguration has to be configured. It must be the name of the publish profile (the filename of the .pubxml file). If the filename is Build.pubxml, the publish profile is Build.
For example:
/p:DeployOnBuild=True /p:PublishProfile=Build
I wanted to add that Ben Day has an excellent write-up that helped us package quickly and then release to multiple environments through Release Manager.
His MSBuild arguments look like this:
/p:DeployOnBuild=True /p:DeployDefaultTarget=WebPublish /p:WebPublishMethod=FileSystem /p:DeleteExistingFiles=True /p:publishUrl=$(build.artifactstagingdirectory)\for-deploy\website
The difference between this and the accepted answer is that this parameter set stages everything in an artifacts folder, and then saves it as part of the build. We can then deploy exactly the same code repeatedly.
We capture the web.env.config files alongside the for-deploy folder and then use XDT transforms in the release process to ensure everything gets updated for whichever environment we're deploying to. It works well for all our web projects.
We use WebDeploy/MSDeploy for 40+ applications and love it. We do install WebDeploy on all our servers so we can deploy more easily, but you could also use the Web Deploy On Demand feature, which doesn't require WebDeploy to be pre-installed.

TFS Build vs local build

I really don't understand the meaning of a TFS build, although MSDN provides many definitions.
For example, I have an ASP.NET project. If the local build passes on my local machine, I check in the code and everything is fine.
I then used to copy (publish) the code to the server; that's it.
Why do we need a TFS build? What is the difference between a TFS build and a local build? You might say there is a build history that can be reverted to an older version, but since the code is versioned, we can check it out, rebuild it on a local machine, and republish the project to the server.
When I was using TFS, I could run local builds on my local machine. And then when checking in code, TFS would automatically perform a build on the build server (this is specified via a build definition). In that case, the build server was located on the machine which housed the master copy of the TFS source repository.
It's not enough for each developer to build locally as they may not have the latest code. I think the point of a TFS build is that it will run a build on the build server which has all the latest code. I think the idea is that if the build is successful on the build server, then it's deemed safe to check in the code.
That's how I understood it anyway. It's useful if there are multiple developers working on a project. If there is only one developer on one machine, a separate build may not be necessary.
Did that answer your question or did I misunderstand?
CiaranG's answer is indeed one way to look at it.
The TFS build server also has the ability to build your code with signed third-party DLLs and put everything in one place, as is, every time there is a new version of your software. This can be useful for testers who need to test your software and don't need development tools.
Besides CiaranG's description of the continuous integration benefits, there are also security and cleanliness. Allowing production code to be built on developer machines, where viruses/malware may be present and the configuration is not known or documented, is just poor policy. By building on a protected server, from which no web surfing is ever done, you are ensuring a safe, clean, reproducible environment, which adds professionalism to your code deployments. TFS also adds reporting of build metrics over time, accountability, and archiving.
