Should we use a different server for automation scripts - Jenkins

This is not related to a code fix, but a general question about the approach to test automation.
I have test automation written in JavaScript which runs perfectly on my machine as well as on my local Jenkins.
Now I want to use my company's server (CentOS) and Jenkins so that it is accessible to everyone in my organization.
Issue: the Node.js version on the company's server needs an update to run my automation, but the server team won't do it, since they are not sure whether functionality used by other teams may start to break because of the upgrade.
Have you faced this situation? Do you have different servers for core code and automation scripts? Please suggest.

This is a complex situation that really depends on many variables. I would recommend using an agent that contains the proper version of Node.js. With this solution you can leave the current build server as it is, while using the exact version of Node you need. This will require an extra server/VM running the Jenkins agent software, but it removes the need to change the master server.
The solution my company went with is Jenkins 2.x with declarative pipelines and ephemeral Docker containers for builds. This allows you to use any Docker image, such as the official Node image: you can pin a version and build with that, so there is no need to worry about the version installed on the server. The Jenkins master doesn't even need to be able to build at all.
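For illustration, a minimal sketch of such a declarative Jenkinsfile; the image tag, stage names, and npm scripts are placeholders to adapt to your own suite:

```groovy
// Minimal declarative pipeline running inside an ephemeral Docker
// container with a pinned Node.js version (the tag is an example).
pipeline {
    agent {
        docker { image 'node:18-alpine' }
    }
    stages {
        stage('Install') {
            steps {
                sh 'npm ci'    // reproducible install from package-lock.json
            }
        }
        stage('Test') {
            steps {
                sh 'npm test'  // run the automation suite
            }
        }
    }
}
```

Because the container is created per build and discarded afterwards, nothing on the host's globally installed Node.js is touched.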


Run a GitLab CI pipeline in a Docker container

Absolute beginner in DevOps here. I have a GitLab repo that I would like to build, and whose tests I would like to run, in the GitLab CI pipeline.
So far I'm only testing locally on my machine with a specific runner. There's a lot of information out there and I'm starting to get lost with what to use and how to use it.
How would I go about creating a container with the tools that I need (VS compiler, cmake, git, etc.)?
My application contains an SDK that only works on Windows, so I'm not sure building on another platform would work at all; how do I select a Windows-based container?
How would I use that container in the YAML file in GitLab so that I can build my solution and run my tests?
Any specific documentation links or suggestions are welcome and appreciated.
How would I go about creating a container with the tools that I need (VS compiler, cmake, git, etc.)?
You can install those tools before the pipeline script runs. I usually do this in before_script.
If there are large-ish packages that need to be installed on every pipeline run, I'd recommend that you make your own image with all the required build dependencies, push it to GitLab, and then just use it as your job image.
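As a minimal sketch of the before_script approach (the job name, base image, and packages here are hypothetical examples, not your actual dependencies):

```yaml
# .gitlab-ci.yml sketch: install build tools in before_script,
# assuming a Debian-based job image.
build:
  image: debian:bookworm
  before_script:
    - apt-get update
    - apt-get install -y cmake git   # tools the job needs
  script:
    - cmake -B build
    - cmake --build build
```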
My application contains an SDK that only works on Windows, so I'm not sure building on another platform would work at all; how do I select a Windows-based container?
If you're using gitlab.com - Windows runners are currently in beta, but available for use.
SaaS runners on Windows are in beta and shouldn’t be used for production workloads.
During this beta period, the shared runner quota for CI/CD minutes applies for groups and projects in the same manner as Linux runners. This may change when the beta period ends, as discussed in this related issue.
If you're self-hosting - set up your own runner on Windows.
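On GitLab.com, jobs are routed to the shared Windows runners via tags; a sketch, using the tag names documented during the beta (double-check the current docs):

```yaml
# Sketch: route a job to GitLab.com's shared Windows runners using tags.
windows_build:
  tags:
    - shared-windows
    - windows
    - windows-1809
  script:
    - echo "Windows build/test commands go here"
```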
How would I use that container in the YAML file in GitLab so that I can build my solution and run my tests?
This really depends on:
the previous parts (whether you're on GitLab.com or self-hosted)
how your application is built
what infrastructure you have access to
What I'm trying to say is that I feel I can't give you a good answer without quite a bit more information.

How to write a Chef cookbook for multiple applications

Imagine Jenkins is generating 3 different distributions: one in JavaScript running on Node.js, another in Python running on Apache with a Python module, and another in Java using Spring Boot. How do you write a Chef cookbook to install all of them on on-premise infrastructure with a bare minimal Ubuntu Linux distribution? The scope of the problem involves capturing the trigger from Jenkins and then kick-starting Chef cookbooks to deploy these 3 apps. Based on configuration, either all 3 apps should be deployed on the same infrastructure, or on different deployment infrastructure.
So a few things:
How you write the cookbooks is entirely up to you. I've got some examples up at http://github.com/poise/application_examples for Python, Ruby, and Node, but that's just my take on the subject. Whenever you ask "How do I do X with Chef?", the answer is always "How would you do X without Chef?", and then automate that.
How to trigger deploys from Jenkins is a bit more fuzzy than that already-very-fuzzy answer. The simplest approach is to have Jenkins SSH in to each machine and run chef-client. However, this might have security implications you don't like. You can look at more dedicated command-push systems like MCollective, SaltStack, or maybe Chef Push Jobs (though I would skip that last one). You can also just set up your nodes to converge automatically every 5 minutes, so that all Jenkins does is update some data in the Chef Server to say which version to deploy, and then waits 10 minutes.
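A hedged sketch of that simplest option, with hypothetical host names and deploy user (assumes the Jenkins agent has SSH access to the nodes):

```groovy
// Jenkinsfile fragment: trigger a Chef converge on each target node over SSH.
stage('Converge nodes') {
    steps {
        sh 'ssh deploy@app-node-1 "sudo chef-client"'
        sh 'ssh deploy@app-node-2 "sudo chef-client"'
    }
}
```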
I have a similar case to yours, using TeamCity instead of Jenkins (but you can replicate similar behaviour).
In my case I use Policyfiles to manage my infrastructure, where I pass the build information as attributes so I can download artefacts in a recipe.
The main trick is having a build which is triggered by the services you mention (Python, Java, whatever), updating the attributes (build id, artefact name, ...) in the Policyfile, and committing the result to Git.
As a recap:
The build for your services is completed.
A build for your Policyfiles is triggered, updating the artefact download information.
Your usual build workflow runs.
In order to download the artefacts, you can use the remote_file Chef resource with a checksum, to avoid downloading the same file on each Chef run.
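A minimal sketch of that resource; the attribute names and paths are hypothetical and would be set from your Policyfile attributes:

```ruby
# Download a build artefact idempotently: if the local file already
# matches the checksum, Chef skips the download on subsequent runs.
remote_file '/opt/myapp/myapp.jar' do
  source node['myapp']['artefact_url']       # set by the triggering build
  checksum node['myapp']['artefact_sha256']  # SHA-256 of the artefact
  owner 'myapp'
  group 'myapp'
  mode '0644'
  action :create
end
```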

Should Jenkins be run inside the development/deployment environment or on a standalone box

I am using Vagrant to provide 'synchronised' and standardised development/test/UAT/staging and production environments.
I am now looking at how to standardise my CI build process. I like the look of Jenkins, but I am confused as to the best way to deploy it. Should I have it deployed on a stand-alone CI box, or install it on all the various environments?
I guess I am a little confused here. Any help much appreciated, thanks.
The standard approach is a stand-alone CI server shared by the development team. This common server (at a well-known URL) provides the development dashboard for the team and the only authorized way to publish into the release repository (developers are not allowed to publish directly).
You could go for extra credit and also set up an instance of Sonar, which in my opinion is much better suited as a development dashboard, providing a richer set of metrics, and which also serves as a historical record for development.
Finally, Jenkins is so simple to set up that there is nothing stopping developers from running their own instances. I find that with Sonar it matters less and less where a build is actually run, once the release credentials are properly controlled. In fact, this attitude is important, as it prevents the build server from turning into a delicate snowflake :-)
Update
There's a Vagrant plugin for Jenkins which might prove useful in running your current processes.
You're likely better off running Jenkins as a shared stand-alone server.
However, I highly recommend that you set up your builds in such a way that they can be run on each developer's machine locally as well. This is particularly key with unit-tests.
In our setup, we have a shared Jenkins server that executes all of our builds using NAnt. Each developer also has NAnt installed and can run the build and unit-test portions of the build freely. Ideally, integration tests could also be run, but we're not quite there yet; having them execute on the CI server still gives us the proper feedback, even if it takes a little longer to arrive.
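To make that concrete, a hedged sketch of a NAnt build file with separate build and test targets, so the CI server and developers invoke exactly the same thing (solution and assembly names are placeholders):

```xml
<?xml version="1.0"?>
<!-- NAnt build file sketch: the CI server runs "nant test",
     and developers run the same target locally. -->
<project name="MyApp" default="test">
  <target name="build">
    <exec program="msbuild.exe">
      <arg value="MyApp.sln" />
      <arg value="/p:Configuration=Release" />
    </exec>
  </target>
  <target name="test" depends="build">
    <exec program="nunit-console.exe">
      <arg value="MyApp.Tests\bin\Release\MyApp.Tests.dll" />
    </exec>
  </target>
</project>
```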

Continuous Integration Clarification

I work in a team which maintains a Java website and back-end Java jobs and shell script jobs.
After all developers complete their updates, only the relevant ones are committed to the source control system.
Later, Ant build scripts are run and WAR files are generated.
Along with these WAR files there will generally be shell scripts etc. to be copied to QA/PROD.
Then one fine day a team called the release management team will transfer the code from our dev environment to QA/PROD.
Recently I came across continuous integration systems like Jenkins/Hudson.
Can these tools build all the changes committed and automatically transfer my code to QA/PROD?
BTW, I work in an AIX server environment and use Tomcat as the container.
I am mostly curious whether the tool will be able to copy my code to QA/PROD.
Please clarify.
The answer is almost certainly yes, depending on your particular setup for copying the code. There is a large number of plugins for this purpose on the appropriate Jenkins wiki page. You should be able to find something there for your needs.
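As one hedged illustration of the shape this takes (host, user, and paths are hypothetical; plugins such as Publish Over SSH can do the same without shelling out):

```groovy
// Jenkinsfile fragment: copy the built WAR to a QA Tomcat host over scp.
stage('Deploy to QA') {
    steps {
        sh 'scp build/mysite.war deploy@qa-host:/opt/tomcat/webapps/'
    }
}
```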

Automated Build and Deploy of Windows Services

How would you implement an automated build and deploy system for Windows services? Things to keep in mind:
The service will have to be stopped on the target machine.
The service entry in the Windows registry might need to be created/updated.
Some, but not all, of the services might need to be automatically started.
I am willing to use TFS for this, but it isn't a requirement. The target machines will always be development machines; we won't be doing this for production servers.
The automated build part can be done in multiple ways - TFS, TeamCity (what we use), CruiseControl.NET, etc. That in turn could call a build script in NAnt (again, what we use), MSBuild, etc.
As for stopping and installing a service remotely, see How to create a Windows service by using Sc.exe. Note that you could shell/exec out to this from your build script if there isn't a built-in task. (I haven't tried this recently, so do a quick spike first to make sure it works in your environment.)
Alternatively, it's probably possible (and likely more elegant) in Windows PowerShell 2.0.
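A hedged PowerShell sketch of the stop/update/start cycle (service name, host, and paths are hypothetical; assumes PowerShell remoting is enabled on the target dev machine and the build drop is reachable from it):

```powershell
# Stop, update, and restart a Windows service on a remote dev machine.
Invoke-Command -ComputerName devbox01 -ScriptBlock {
    # Stop the service if it is already installed and running
    Stop-Service -Name 'MyService' -ErrorAction SilentlyContinue

    # Copy the new binaries from the build drop
    Copy-Item '\\buildshare\drops\MyService\*' 'C:\Services\MyService' -Recurse -Force

    # Create the service entry in the registry if it does not exist yet
    if (-not (Get-Service -Name 'MyService' -ErrorAction SilentlyContinue)) {
        sc.exe create MyService binPath= 'C:\Services\MyService\MyService.exe'
    }

    # Start only the services that should run automatically
    Start-Service -Name 'MyService'
}
```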
