Starting an already existing VM with Jenkins on Google Cloud

I am trying to start a VM that already exists in Google Cloud with my Jenkins, to use it as a slave. The reason is that if I start from the template of this VM, I need to do a few things before I can use it with my Jenkins code.
Does anyone know how to start VMs that already exist in my VM pool in Google Cloud via Jenkins?

There are two approaches to this, depending on the operations you need to run on your machine beforehand that prevent you from simply recreating it.
The first, and possibly the most straightforward given the restriction that the machine already exists, would be talking directly to the GCE API in order to list and start the machine from Jenkins (using a build step).
Basically you can make requests to the GCE API to do operations with your instances. I suggest doing this using gcloud from within the Jenkins master node as it'll save you having to write your own client. It's straightforward as you only have to "install" it in your master and you can make it work safely using a service account.
Below is the outline of this approach:
Download the Cloud SDK to your master node following these release instructions.
You can do this once outside of Jenkins or directly in the build step; it doesn't matter as long as Jenkins and its user are able to access the binary.
Create the service account, generate authentication keys and give it permissions to interact with GCE.
Using a service account is the way to go as you can restrict its permissions to the operations that are relevant for you.
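A hedged sketch of that setup with the gcloud CLI (the account name, project ID, key path, and the choice of roles/compute.instanceAdmin.v1 as the role are all assumptions; adapt them to your project):

# Create a dedicated service account (name is a placeholder)
gcloud iam service-accounts create jenkins-vm-starter \
    --display-name "Jenkins VM starter"

# Let it manage Compute Engine instances in the project
gcloud projects add-iam-policy-binding my-project-id \
    --member "serviceAccount:jenkins-vm-starter@my-project-id.iam.gserviceaccount.com" \
    --role "roles/compute.instanceAdmin.v1"

# Generate a JSON key that Jenkins can read later
gcloud iam service-accounts keys create /var/lib/jenkins/gce-key.json \
    --iam-account jenkins-vm-starter@my-project-id.iam.gserviceaccount.com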
Once you get the service account that will be bound to your gcloud client, you'll need to set it up in Jenkins. You might want to do this in a build step (I'm using Groovy here but it should be easy to translate it to the UI):
stage('Start_machine'){
    steps{
        // Considering that you already installed gcloud on this node, but you can also curl it from here if that's convenient
        sh """
            # You can set this as a scoped env var in Jenkins or just hard-code it
            gcloud config set project ${GOOGLE_PROJECT_ID}
            # This needs a JSON key file location accessible by Jenkins, like: --key-file /var/lib/jenkins/..key.json
            gcloud auth activate-service-account --key-file ${GOOGLE_SERVICE_ACCOUNT_KEY}
            # Check the reference on this command: https://cloud.google.com/sdk/gcloud/reference/compute/instances/start
            gcloud compute instances start my_existing_instance
            echo "Instance started"
        """
    }
    post{
        always{
            echo "Result: ${currentBuild.result}"
        }
    }
}
Wrapping up: you basically create a service account that has permission to start your instances, download a client that can interact with the GCE API (gcloud), authenticate it, and start the instance, all from within your pipeline.
The second approach might be easier if there were no constraints regarding the preexisting machine.
Jenkins has a plugin for Compute Engine that will automatically spin up new workers whenever needed.
I know that you need to perform some operations before Jenkins sends work to these slave machines. However, I want to bring to your attention that this plugin also supports startup scripts.
So there's always the option to preload your operations there before the machine takes off; by the time it's ready, you might have everything done (see the sketch below).
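For illustration, a hedged sketch of such a startup script (the packages, bucket, and paths are pure placeholders for whatever your pre-work actually is):

#!/bin/bash
# Startup script attached to the Compute Engine plugin's instance template.
# Runs once when the worker boots, before Jenkins hands it any work.
set -e

# Placeholder pre-work: install the tools your builds expect
apt-get update && apt-get install -y openjdk-8-jdk git

# Placeholder pre-work: pull private configuration the jobs rely on
gsutil cp gs://my-bucket/agent-config.tar.gz /opt/
tar -xzf /opt/agent-config.tar.gz -C /opt/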
Hope this helps.

Related

How to scale down OpenShift/Kubernetes pods automatically on a schedule?

I have a requirement to scale down OpenShift pods at the end of each business day automatically.
How might I schedule this automatically?
OpenShift, like Kubernetes, is an API-driven application. Essentially all application functionality is exposed over the control-plane API running on the master hosts.
You can use any orchestration tool that is capable of making API calls to perform this activity. Information on calling the OpenShift API directly can be found in the official documentation in the REST API Reference Overview section.
Many orchestration tools have plugins that allow you to interact with the OpenShift/Kubernetes API more natively than making network calls directly. In the case of Jenkins, for example, there is the OpenShift Pipeline Jenkins plugin that allows you to perform OpenShift activities directly from Jenkins pipelines. In the case of Ansible, there is the k8s module.
If you combine this with Jenkins' capability to run jobs on a schedule, you have something that meets your requirements.
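For example, a minimal pipeline sketch (the cron spec, project, and DeploymentConfig names are assumptions) that scales an application down at 18:00 on weekdays:

pipeline {
    agent any
    // Fire at 18:00, Monday to Friday
    triggers { cron('0 18 * * 1-5') }
    stages {
        stage('Scale down') {
            steps {
                // Assumes the oc client is installed and already authenticated on this node
                sh 'oc scale dc/myapp --replicas=0 -n myproject'
            }
        }
    }
}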
For something much simpler, you could just schedule Ansible or bash scripts on a server via cron to execute the appropriate commands against the OpenShift API.
Executing these commands from within OpenShift would also be possible via the CronJob object.
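A minimal sketch of that cron approach (the server URL, token path, and object names are placeholders):

#!/bin/bash
# scale-down.sh - scheduled via cron, e.g.: 0 18 * * 1-5 /usr/local/bin/scale-down.sh
# Authenticate with a service account token, then scale the pods to zero
oc login https://openshift-master:8443 --token="$(cat /etc/openshift/sa-token)"
oc scale dc/myapp --replicas=0 -n myproject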

Deployment with Ansible in Jenkins pipeline

I have an Ansible playbook to deploy a Java application (jar) on AWS EC2. I would like to use it inside a Jenkins pipeline as the 'Deploy' step. To deploy on EC2, I need the private SSH key that was downloaded when the instance was created.
I have two choices:
Install ansible on the machine hosting Jenkins, insert the private SSH key in Jenkins, and use ansible-playbook plugin to deploy my app
Take a base docker image with ansible installed, extend it by inserting my private SSH key, and use this docker image to deploy my app
From a security point of view, which is best?
For option 1, it's recommended to create a new user account, e.g. jenkins, on the EC2 instance without sudo privileges, or at least with password-protected sudo.
It's also a good idea to use Ansible to manage those user accounts; it limits usage of the superuser key created by AWS.
As for option 2, Docker is a good fit for immutable deployment, which means the configuration should be determined before the image is built, so Ansible is not that useful in this scenario.
Different configurations mean different images to be created.
You might still use Ansible to manage those Dockerfiles, rather than invoking Ansible to interact with the application itself.
The two options differ more in how you design your system than in their security implications.
Do let me know if you need more clarification.
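For what it's worth, option 1 inside a pipeline usually reduces to something like this sketch (the credential ID, inventory, and playbook names are assumptions), using the SSH Agent plugin so the key is available to the process but never printed:

stage('Deploy') {
    steps {
        // 'ec2-deploy-key' is an assumed SSH private key credential stored in Jenkins
        sshagent(credentials: ['ec2-deploy-key']) {
            sh 'ansible-playbook -i inventories/prod deploy.yml'
        }
    }
}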

Jenkins slave runs as user

I have a Jenkins setup with multiple users who log in with the Active Directory plugin. This is useful so that each user can access his own tasks.
However, each user also has different permissions on the local network, such as access to different folders, etc. I have noticed that the permissions given to each task are not linked to the user but to the account under which the slave is running as a service. Is there a way to change that, so that the task is executed on the slave under the credentials (and hence permissions) of the user?
Thank you
The problem is: there is only one slave process running the different jobs assigned to that server by the Jenkins master.
So the slave itself runs as one user (generally, a dedicated account or a system account).
Since you can get the user ID as an environment variable (with a plugin like the JENKINS Build User Vars Plugin), you might consider configuring the job so that its build steps "run as" the user who triggered the build.
See for instance the JENKINS Authorize Project plugin.
However, as mentioned in this answer:
The "Authorize Project" plugin does not change the OS level user that is running commands.
It only sets the Jenkins user that is running the job and any downstream jobs, using Jenkins authentication (whatever it might be).
So you are left with build steps using runas or su -c commands in order to be sure that your task runs as the right user.
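On a Linux agent that could look like the following sketch (the user name and script path are placeholders, and the account running the slave needs a matching sudoers entry):

# "Execute shell" build step: run the actual task as the 'deploy' user
sudo -u deploy -H /opt/scripts/run-task.sh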
I had a similar issue, and I recall that for more control over projects I used the Role Strategy plugin and set up global security using LDAP servers (Active Directory should also be fine).
I also used the Authorize Project plugin.
Have a look; I hope it solves your problem. Let me know in the comments if you need any clarification.
You can partially fix your problem this way (see the sketch after these steps):
install the slave as a service using the Java Web Start method and JNLP
go to the Services control panel in Windows
under Properties -> Connection, replace the Local System account with a specific user
restart the service
This at least gives you the ability to use one account instead of the system account.
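The same change can be scripted; a hedged sketch using the built-in sc.exe (the service name depends on your install, and the spaces after obj= and password= are required by sc):

:: Point the slave service at a specific account instead of LocalSystem
sc config "jenkinsslave-c__jenkins" obj= ".\builduser" password= "MyPassword"
:: Restart the service so the change takes effect
net stop "jenkinsslave-c__jenkins" && net start "jenkinsslave-c__jenkins"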

How to start Jenkins slave on docker-cloud?

I have a Jenkins master defined as a stack on cloud.docker.com. I have also set up a couple of other stacks that contain the services I need to test against during my build process (some components use mongo, some use rabbitmq, etc.).
Docker Cloud (man, I wish they had picked a more unique name!) has a REST API to start stacks, and I've even written a script that will redeploy the stack based on its UUID, but I can't figure out how to get the Jenkins master to start the stack or how to execute my script.
The Jenkins slave setup plugin doesn't document how to attach the "setup" to a node, and none of the other plugins I looked at appeared to support either Docker Cloud or a way of calling arbitrary REST APIs on slave startup.
I've also tried just using the Docker daemon to launch containers directly, but Docker Cloud appears to remove images not associated with stacks or services on its managed node, and then the Jenkins Docker plugin complains that it can't find the slave image.
Everything is latest-and-greatest version-wise. The node itself is running on AWS and otherwise appears to function well.

How can we execute a Jenkins job using another user's credentials

I need to execute a few Jenkins jobs, such as "Release to Production", through the Jenkins UI using the logged-on user's credentials. The reason is that we have separate Support Team members who have access to the production boxes, and the Dev team members do not. So, in order to deploy any code base to production, all the Windows deploy commands (e.g. create or update files, folders, etc.) need to be run with the specific user credentials that have access to the production box. That way, even if a Dev team member who doesn't have access to the production box but is a Jenkins admin executes the same job, it should fail with "Access Denied". The job should succeed only if it is run by Support Team members with their credentials.
I tried using the parameterized build plugin, but I wasn't able to pass the password successfully to the batch file that contains the MSDeploy instructions. Worse, the Jenkins console log displays the parameter in its output, which is a security issue.
I checked the role-based security plugin, but that doesn't help me much. I just need a plugin that asks the user to provide their credentials before building the job and uses those credentials to execute it, so that my MSDeploy command can deploy the code to the production boxes when a Support Team member builds the job. I wish there was support for impersonation.
Right now all the Jenkins jobs are executed using the service account that the Tomcat service hosting Jenkins is configured to run under.
Any help would be appreciated.
Just in case there is any confusion: a Jenkins job will always run as the same OS user. Matrix-based security applies to users who log into the Jenkins server and controls features like creating or launching jobs.
You could configure the job to use a set of generic production credentials and then prevent your developers from invoking the job.
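As a sketch of that idea (the credential ID and MSDeploy arguments are assumptions), the Credentials Binding plugin can inject the generic production credentials while masking them in the console log:

stage('Release to Production') {
    steps {
        // 'prod-deploy' is an assumed username/password credential stored in Jenkins
        withCredentials([usernamePassword(credentialsId: 'prod-deploy',
                                          usernameVariable: 'DEPLOY_USER',
                                          passwordVariable: 'DEPLOY_PASS')]) {
            // Unlike plain job parameters, these values are masked in the build log
            bat 'msdeploy.exe -verb:sync -source:package=app.zip -dest:auto,userName=%DEPLOY_USER%,password=%DEPLOY_PASS%'
        }
    }
}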
Perhaps a better approach would be to separate the process that builds the code from the one that deploys the code. The following diagram (taken from the xebia-france project) demonstrates how some of my favourite tools, Rundeck and Nexus, can be integrated with Jenkins.
Finally, I highly recommend reading the following link:
Using Rundeck and Chef to build devops tool chains
Hi, I know I'm coming late to this thread, but I just ran into this issue and had a hard time solving it, so I thought I'd share what I managed to set up.
First things first: if you want to run a Jenkins job "as a specific user" (with all the correct permissions), the easiest way is to run a Jenkins SLAVE as this user.
Then you might very well stumble into the following: you probably want to run several slaves on the same Windows machine as Windows services. This is fine, as long as each slave has its own Remote root directory and probably a specific "label" too.
Once you've managed to run your slave as a Windows service, launch the services console (run services.msc). Edit the newly created service's properties, go to the Log On tab, select "Log on as: This account" and enter your account credentials.
Cheers :)
You can utilize the built-in Windows runas command or the PowerShell Invoke-Command cmdlet with -Credential. Both of these would store the username/password in plain text, so do think about the risks, but this gives you flexibility.
I'm surprised this doesn't have a better answer: set up an agent on another machine to run as another service, and define the agent as a special "type" which picks up the jobs. Something along those lines is what I would expect, but I haven't seen an implementation like that in Jenkins (I'm new to Jenkins, so I was looking for an answer and found this thread).
Something else that could be considered by someone more familiar with Jenkins: when you set the custom path to MSBuild, could you set it to runas /user:... msbuild.exe, perhaps? I don't currently have a spare Jenkins server to try that on.
