Installing existing Jenkins as a Windows Service

We have two long-standing Jenkins instances that are not currently installed as services. The goal is to get them installed as services on their host machines. I have found and read the documentation on how to do that:
https://wiki.jenkins.io/display/JENKINS/Installing+Jenkins+as+a+Windows+service
My question is not really about how to do it; that appears to be simple enough. My concern is all the existing jobs and configurations that are there. Will they be affected by installing as a service? Will I need to essentially set up the Jenkins instance again? Are there any recommended precautions I should take? I currently have backups of each Jenkins instance, in case I need to back out. I'm hoping that someone out here has converted existing Jenkins setups to run as a service and can help ease my worries about losing the existing setups.
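For reference, the jobs and configuration live under JENKINS_HOME, so that directory is what's worth snapshotting before converting. A minimal backup sketch, where the C:\Jenkins path is an assumption (point it at each instance's real home):

```python
# Snapshot JENKINS_HOME before converting to a Windows service.
# C:\Jenkins is an assumed location; jobs/, config.xml and plugins/
# all live under JENKINS_HOME, so archiving it captures the whole setup.
import shutil
import time

jenkins_home = r"C:\Jenkins"
stamp = time.strftime("%Y%m%d-%H%M%S")
shutil.make_archive(f"jenkins-home-backup-{stamp}", "zip", jenkins_home)
```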
Thank you in advance!

Related

How to use a scheduler (cron) container to execute commands in other containers

I've spent a fair amount of time researching and I've not found a solution to my problem that I'm comfortable with. My app is working in a dockerized environment:
one container for the database;
one or more containers for the APP itself. Each container holds a specific version of the APP.
It's a multi-tenant application, so each client (or tenant) may be tied to only one version at a time (migrations should be handled per client, but that's not relevant here).
The problem is that I would like to have another container to handle scheduled jobs, like sending e-mails, processing some data, etc. The scheduler would then execute commands in the app containers. Projects like Ofelia look promising, but I would have to know the target container ahead of time. That's not possible, because I need to go to the database container to discover which version the client is on, in order to figure out which container the command should be executed in.
Is there a tool to help me here? Should I change the structure somehow? Any tips would be welcome.
Thanks.
So your question is that you want to get the APP's version info from the database container before scheduling jobs, right?
I think this relates to the business logic, not the dockerized environment. You have a few ways to solve the problem:
Check the network and make sure the containers can connect to each other.
The database should support remote calls; you can use them to get the version data.
You can use remote-execution tools, like SSH, to run the command in the right container.
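To make that concrete, here is a rough sketch of a dispatch script the scheduler container could run. It assumes a Postgres database, a tenants table with an app_version column, containers named app-<version>, and the Docker socket mounted into the scheduler container; all of these names are assumptions, not part of the original setup.

```python
# Look up a tenant's APP version in the database, then run the job
# inside the matching APP container via `docker exec`.
import subprocess

import psycopg2  # pip install psycopg2-binary


def run_for_tenant(tenant_id: str, command: list[str]) -> None:
    # Ask the database which APP version this tenant is on.
    conn = psycopg2.connect(host="db", dbname="app", user="app", password="secret")
    with conn, conn.cursor() as cur:
        cur.execute("SELECT app_version FROM tenants WHERE id = %s", (tenant_id,))
        row = cur.fetchone()
    if row is None:
        raise ValueError(f"unknown tenant {tenant_id}")

    container = f"app-{row[0]}"  # e.g. app-2.3.1
    # Execute the scheduled job inside that tenant's APP container.
    subprocess.run(["docker", "exec", container, *command], check=True)


run_for_tenant("acme", ["bin/send_emails"])
```

Mounting /var/run/docker.sock into the scheduler container is what lets it run docker exec against its siblings; treat that as a privileged capability.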

What are best practices for jenkins security?

What are the best practices for a jenkins installation like the one below?
I have a fairly small dedicated server with 16 GB of RAM and 2 TB of disk space, with hardware virtualization enabled. One use of it would be to host my own projects (open source), and applications such as a Git repository manager are already set up there.
I would like to set up jenkins there for automatic building, but I want to make it secure.
This installation is small enough to require only a master node, but I am planning to disable building on the master completely and to run a virtual machine as an agent, so that builds are isolated as much as possible on the same physical server and a job is unable to destroy the Jenkins master's data.
Should I go for master-only anyway? Or, if using a virtual machine agent, should it have one executor or several? I probably cannot isolate multiple parallel jobs running on one agent without using one agent per job, but maybe I am overthinking all this. Using one agent per job, at least in the case of virtual machines, would exhaust server resources (or money) very quickly.
You can use Jenkins' own database of users, which I have used in commercial settings, and it has worked perfectly well. If you have Active Directory, you can also integrate with that, with some extra effort, so people only have to remember one login.
Once users are logged in, you should handle authorisation via the Role Strategy plugin.
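As for the isolated-agent side of this setup, here is a rough sketch using the python-jenkins package; the URL, credentials, and node details are all assumptions. With the master's executor count set to 0 under Manage Nodes, builds will only ever land on the VM.

```python
# Register an isolated build VM as a single-executor JNLP agent.
import jenkins  # pip install python-jenkins

server = jenkins.Jenkins(
    "http://localhost:8080", username="admin", password="api-token"
)

# One executor means at most one job runs on the VM at a time,
# which sidesteps the parallel-job isolation question entirely.
server.create_node(
    "vm-agent-1",
    numExecutors=1,
    nodeDescription="Isolated build VM",
    remoteFS="/home/jenkins",
    labels="builds",
    launcher=jenkins.LAUNCHER_JNLP,
)
```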

CI / CD: Principles for deployment of environments

I am not a developer, but I am reading about CI/CD at the moment and wondering about good practices for automated code deployment. So far I have mostly read about deploying code to a pre-existing environment.
My question now is whether it is also good practice to use e.g. a Jenkins workflow to deploy an environment from scratch when a new build is created: for example, deploying an environment to test a newly created build, then deleting the environment again after testing.
I know that there are various plugins to interact with AWS, Azure etc. that could be used to develop a job for deployment of a virtual machine.
There are also plugins to trigger Puppet to deploy infra (as code) and there are plugins to invoke an infrastructure orchestration.
So everything is available to be able to deploy the infrastructure and middleware before deploying code (with some extra effort of course).
Is this something that is used in real life? How is it done?
The background of my question is my interest in full automation of development with as few clicks as possible, and cost saving in a pay-per-use model by not having idle machines.
My question now is whether it is also good practice to use e.g. a Jenkins workflow to deploy an environment from scratch when a new build is created
Yes, it is good practice to deploy an environment from scratch. Like you say, Jenkins and Jenkins Pipelines can certainly help with kicking off and orchestrating that process, depending on your specific requirements. Deploying a full environment from scratch is one of the hardest things to automate, and if that is automated, it implies that a lot of other things are automated too, such as infrastructure, application deployments, application configuration, and so on.
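As a rough illustration of the flow such a Jenkins job might script (a sketch, not a definitive implementation: the Terraform setup and the deploy/test scripts are assumptions):

```python
# Create-test-destroy cycle for an ephemeral build environment.
import subprocess


def sh(*args: str) -> None:
    subprocess.run(args, check=True)


def ephemeral_environment_build(build_id: str) -> None:
    env_name = f"ci-{build_id}"
    # 1. Provision the infrastructure from scratch (infrastructure as code).
    sh("terraform", "apply", "-auto-approve", f"-var=env_name={env_name}")
    try:
        # 2. Deploy the freshly built application into the new environment.
        sh("./deploy.sh", env_name)
        # 3. Run the tests against it.
        sh("./run_tests.sh", env_name)
    finally:
        # 4. Tear the environment down, pass or fail, so nothing sits idle.
        sh("terraform", "destroy", "-auto-approve", f"-var=env_name={env_name}")


ephemeral_environment_build("1234")
```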
Is this something that is used in real life?
Yes, definitely. A lot of shops do this. The simpler your environments, the easier it is; a startup with one backend app would have relatively little trouble achieving this Valhalla state. But even the creation of the most complex environments, with hundreds of interdependent applications, can be fully automated; it just takes more time and effort.
The background of my question is my interest in full automation of development with as few clicks as possible, and cost saving in a pay-per-use model by not having idle machines.
Yes, definitely. The "spin up and destroy" strategy benefits all hosting models (since, after full automation, no one ever has to wait for someone to manually provision an environment), but those using public clouds see even larger benefits in terms of cost (vs always leaving AWS environments running, for example).
I appreciate your thoughts.
Not a problem. I will note that this question doesn't fit Stack Overflow's question-and-answer sweet spot especially well, since it is quite general. In the future, I would recommend chatting with your developers, finding folks who are excited about this sort of thing, and formulating more specific questions once you are stuck in the weeds on something. Welcome to Stack Overflow!
All of this is used in various combinations; the objective is to deliver continuous value to the end user. My two cents:
Build & Release
It depends on what you are using. I personally recommend using what comes with your toolset. For example, VSTS (Visual Studio Team Services) offers a complete CI/CD pipeline. But if you have a unique need that can only be served by Jenkins, then use that; VSTS offers Jenkins integration out of the box.
IaC (Infrastructure as Code)
In addition to Puppet and similar tools, you can take advantage of Azure ARM (Azure Resource Manager) templates to build and destroy an environment. Again, see what is available out of the box with the toolset you have.
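For illustration, a minimal sketch of that build-and-destroy cycle driven through the Azure CLI; the resource group name, location, and template file are assumptions:

```python
# Build an environment from an ARM template, then destroy it again.
import subprocess


def az(*args: str) -> None:
    subprocess.run(["az", *args], check=True)


# Create a resource group and deploy the environment from a template.
az("group", "create", "--name", "uat-env", "--location", "westeurope")
az("deployment", "group", "create",
   "--resource-group", "uat-env",
   "--template-file", "environment.json")

# Destroying the environment is just deleting the resource group.
az("group", "delete", "--name", "uat-env", "--yes")
```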
Pay-per-use
What I have personally used is Azure DevTest Labs, with code deployed to the lab VMs via a CI/CD pipeline. You can then set an auto-start/auto-shutdown policy on a VM so it starts and stops at the times you provide. This is a great feature for saving cost on the resources being used and for replicating environments.
For example, a UAT environment might not be needed until QA has signed off. But using IaC you can quickly spin up the environment automatically and then have a one-click deployment set up to push code to UAT.
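A one-line sketch of the shutdown-policy idea via the Azure CLI's az vm auto-shutdown command (the resource group, VM name, and time are assumptions):

```python
# Schedule a daily auto-shutdown for a VM so it never idles overnight.
import subprocess

subprocess.run(
    ["az", "vm", "auto-shutdown",
     "--resource-group", "uat-env",
     "--name", "uat-vm",
     "--time", "1900"],  # shuts the VM down daily at 19:00 UTC
    check=True,
)
```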

Developing FrontEnd App without installing BackEnd

As I prepare for my team's growth, I have spent months searching for advice and good practices on welcoming a front-end developer.
We are running a Rails API on the back end and Angular on the front end. Right now we use two separate Git repos, one for the front end and one for the back end.
Both are hosted on Heroku.
But our developers work with a local back end. As we welcome new front-end developers, we don't want them to install the whole huge Rails package and configuration.
I have looked at different solutions and don't know which is possible or best:
Back end deployed on Heroku, with a tunnel for the front-end dev to access it (what about CORS?)
Deploying a Vagrant box or RailsBox? (we tried Docker and it didn't work at all)
Moving the front-end repo into the back-end repo (yes, I know that doesn't fix the main issue, but I'm wondering whether the repo split could be a blocker for our goal?)
I would definitely merge the repos. This will allow you to deploy a changeset (front end + back end) across the stack. Otherwise, deployment will get complicated at a stage where you don't need the extra complexity.
I think the best way is to have a one-liner for your front-end developer to install the backend environment. At uberVU we used Vagrant for this purpose, but anything works, in my opinion, as long as it's a one-liner and works across operating systems.
You have to keep a few facts in mind:
make the configuration tying the frontend to the backend as simple as possible, and have good defaults in place. You don't want frontend people wasting valuable time figuring out how to link the two after running them together
make sure that whatever solution you use updates the running backend automatically when they pull in new code. One very frequent mistake in our case was frontend people updating the code and then not seeing API endpoints working correctly. Something that watches for filesystem changes and restarts the backend daemons should work (see the sketch after this list). Be careful about whether that works correctly with shared filesystems between the host machine and the virtualization solution you choose
make sure that the virtualization solution you choose runs on Windows and macOS. While backend devs tend to use macOS and Linux, frontend developers are often big fans of Windows, and the latest versions are increasingly polished.
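For the watch-and-restart idea in the second point, here is a minimal sketch using the Python watchdog package; the watched path and the restart command are assumptions (a Vagrant VM running the Rails API as a systemd service):

```python
# Restart the backend inside the VM whenever Ruby code changes.
import subprocess
import time

from watchdog.events import FileSystemEventHandler  # pip install watchdog
from watchdog.observers import Observer


class RestartHandler(FileSystemEventHandler):
    def on_any_event(self, event):
        if event.src_path.endswith(".rb"):
            # Naive restart on every change; a real version would debounce.
            subprocess.run(
                ["vagrant", "ssh", "-c", "sudo systemctl restart rails-api"]
            )


observer = Observer()
observer.schedule(RestartHandler(), path="backend/", recursive=True)
observer.start()
try:
    while True:
        time.sleep(1)
finally:
    observer.stop()
    observer.join()
```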

How to deploy my Rails app to a shared host

I have been chipping away at this for the last couple of days, but it has become quite frustrating. My web host is JustHost and I have a shared hosting plan. I have found this translates directly to "strips you of all privileges" when it comes to setting up the deployment on the server.
I thought I had found some really good tutorials that might help me get my environment set up (I tried to follow Rob McLarty's blog posts on how to set up a production server and how to deploy a Rails app as best I could). The problem I found is that I couldn't (or didn't need to) install any of the recommended things, and when it came time to set things up, I didn't have permission over the files I was supposed to modify or permission to use the commands that were needed.
Naturally, I can see why this is important on a shared host. It would be silly if every individual on the host could restart an Apache server willy-nilly.
Regardless, I have found this to be the problem with most of my research into getting this deployment up and running. Simply put, I don't have permission to access most of the things that are recommended.
Does anyone know how to set up a deployment in such a restrictive environment? Even some resources, or a simple "you are thinking about this way too hard," to point me in the right direction would be really helpful.
I highly recommend OpenShift as a platform for deploying your Rails app in a shared environment. It gives you the option to start for free with a small amount of server resources, and it lets you spin up everything you need at first, like relational or NoSQL databases, load balancers, caches, etc.
Heroku is also a simple option. For a shared-host RoR deployment, I found a good reference: REFERENCE LINK. It will help you deploy an RoR app on a shared host (some modules may not be allowed in a shared server environment; you may have to go for a VPS to meet all the requirements).
