Suppose user ABC installed Jenkins on a Windows VM (IP x.x.x.12), created many jobs, and many clients are active in that environment. However, ABC's account is going to be disabled in a few days.
So ABC is handing his workspace environment over to user XYZ.
XYZ is setting up the existing environment on the same Windows VM (IP x.x.x.12) where Jenkins was installed.
Now, when XYZ tries to start the Jenkins server, it creates a new workspace for XYZ instead of using the existing one.
The goal is to use the existing workspace environment created by user ABC and configure it for user XYZ.
Note: many jobs are configured with user ABC in their scripts.
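For reference, the usual fix can be sketched as follows: Jenkins keeps all jobs and configuration under JENKINS_HOME, which defaults to a per-user directory, so pointing the new user's Jenkins at the old directory avoids creating a fresh workspace. The path below is an assumed example of ABC's old home; on Windows the variable would normally be set via System Properties rather than a shell profile.

```shell
# Point the new user's Jenkins at the old home instead of letting a fresh
# start create a new one (hypothetical path for illustration).
export JENKINS_HOME="/c/Users/ABC/.jenkins"
echo "Jenkins will use: ${JENKINS_HOME}"
# java -jar jenkins.war    # commented out: requires an actual jenkins.war
```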
Related
I have just started learning Jenkins deployment on Google Kubernetes Engine. I was able to successfully deploy an application to my GKE instance. However, I couldn't figure out how to manage Nodes and Clouds.
Any tutorial or guidance would be highly appreciated.
Underlying idea behind nodes: a single node may not be sufficient or effective for running multiple jobs, so to distribute the load, jobs are transferred to different nodes to achieve good performance.
Prerequisites
#1 : An instance (let's call it Dev) which hosts Jenkins (Git, Maven, Jenkins)
#2 : An instance (let's call it Slave) which will serve as the host machine for our new node
On the Slave machine you need to have Java installed.
A passwordless SSH connection should be established between the two instances.
To achieve this, enable password authentication, generate a key pair on the main machine (i.e., the Dev machine), and copy the public key to the Slave machine.
Create a directory "workspace" on the Slave machine (/home/ubuntu/workspace)
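The passwordless-connection steps above can be sketched as follows (the user name, host address, and key path are illustrative assumptions):

```shell
# On the Dev machine: generate a dedicated key pair for the agent connection.
ssh-keygen -t ed25519 -f ./jenkins_agent_key -N "" -q

# Install the public key on the Slave machine so the controller can log in
# without a password (hypothetical user/IP; requires password auth enabled once):
# ssh-copy-id -i ./jenkins_agent_key.pub ubuntu@x.x.x.20

# On the Slave machine: create the remote root directory for the node.
mkdir -p ./workspace     # in practice: /home/ubuntu/workspace
ls ./jenkins_agent_key.pub
```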
Now let's get started with the Jenkins part:
Go to Manage Jenkins > Manage Nodes and Clouds.
By default Jenkins contains only the master node.
To create a new node, use the "New Node" option on the screen.
Provide a name for the new node and mark it as a permanent agent.
Define the remote root directory: this is the directory you created on the Slave machine, e.g. a location like
"/home/ubuntu/workspace"
Provide a label of your choice; for example, let's use the label "slave_lab":
Label = slave_lab
Now define your launch method.
Let's select "Launch agent via execution of command on the master".
As the command, put:
ssh ubuntu@private_IP_of_slave java -jar slave.jar
Note: here by private_IP_of_slave I mean the IP of the machine which will be used for our new node.
Now we can proceed to configure jobs to run on our new node.
For that, open your job > select Configure.
Under the General tab, select
"Restrict where this project can be run" and provide the label "slave_lab".
Now when you run the job, it will be executed on the slave node, not on the master node.
The Jenkins cluster in my company runs builds as the root user.
How can I configure the cluster/builds to run as a different user, without root privileges?
Builds always run under the user that runs the node agent process. So your options are
Specify a different user for connecting the node, or
Switch to a different user during the build (e.g., via sudo in a shell build step). This is more flexible, but plugin-related code (like the SCM checkout) will still run under the root account.
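As a hedged sketch of the second option (the account name "builder" and the required sudoers entry are assumptions, not part of a stock Jenkins install):

```shell
# Inside an "Execute shell" build step: report who the agent runs as, then
# hand the actual work to a less-privileged account.
echo "agent process user: $(id -un)"
# sudo -u builder -H ./build.sh    # commented out: needs a sudoers entry
```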
Any agent can be configured to be launched as any user, so do that.
Advise your company's Jenkins admin to change Jenkins immediately to NOT run as root. It does not need root (it can still be a daemon/service, though), and running as root increases your risk exposure. We use the Java Service Wrapper (RUN_AS_USER=jenkins) on Unix. The new Windows installer prompts you for the account to use (don't use SYSTEM, despite it being the default).
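The RUN_AS_USER setting mentioned above goes in the Java Service Wrapper's shell launch script; a minimal sketch of the relevant line (the fragment file name here is only for illustration, and script names vary per install):

```shell
# Write just the relevant fragment for illustration; the real launch script
# ships with the Java Service Wrapper distribution.
cat > wrapper-launch.fragment <<'EOF'
# Run the wrapped Jenkins JVM as the unprivileged jenkins account
RUN_AS_USER=jenkins
EOF
grep '^RUN_AS_USER=' wrapper-launch.fragment
```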
I have a Jenkins setup with multiple users who log in via the Active Directory plugin. This is useful so that each user can access their own tasks.
However, each user also has different permissions on the local network, such as access to different folders. I have noticed that the permissions given to each task are linked not to the user but to the account under which the slave runs as a service. Is there a way to change that, so that the task is executed on the slave under the credentials (and hence permissions) of the user?
Thank you
The problem is that there is only one slave process running the different jobs assigned to that server by the Jenkins master.
So the slave itself runs as one user (generally a dedicated account or a system account).
Since you can get the user id as an environment variable (with a plugin like the Jenkins Build User Vars Plugin), you might consider configuring the job so that its build step "runs as" the user who triggered the build.
See for instance the Jenkins Authorize Project plugin.
However, as mentioned in this answer:
The "Authorize Project" plugin does not change the OS level user that is running commands.
It only sets the Jenkins user that is running the job and any downstream jobs, using Jenkins authentication (whatever it might be).
So you are left with a build step using runas or su -c commands in order to be sure that your task runs as the right user.
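Combining the two suggestions above (Build User Vars for the id, plus an explicit user switch in the step), a hedged sketch follows; the fallback account name and the privileges needed for su are assumptions:

```shell
# BUILD_USER_ID is exported by the Build User Vars plugin when the job runs;
# the fallback value here is only so the sketch runs standalone.
TARGET_USER="${BUILD_USER_ID:-alice}"
echo "task would run as: ${TARGET_USER}"
# su - "$TARGET_USER" -c '/path/to/task.sh'   # commented out: needs privileges
```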
I had a similar issue, and I recall that for more control over projects I used the Role Strategy plugin and set up global security using LDAP servers (Active Directory should also be fine).
I also used the Authorize Project plugin.
Have a look; I hope it solves your problem. Let me know in the comments section if you need any clarification.
You can partially fix your problem this way:
install the slave as a service using the Java Web Start method and JNLP
go to the Services control panel in Windows
under Properties > Log On, replace the Local System account with a specific user
restart the service
This at least gives you the ability to use a specific account instead of SYSTEM.
I have a lot of slaves (nearly 500) added to one Jenkins server (1.573) in production. I am trying to set up a new Jenkins server as a sandbox, and I need to move some slave machines from production to the sandbox environment. I tried to copy the slaves tag from the production Jenkins config.xml into the new sandbox Jenkins server's config.xml (same version as production) and then consolidate the details. But whenever I copy the details and click "Reload from server", the newly added slave details are all reset and the node list comes up empty.
Could anyone explain or suggest a way to accomplish the task described above?
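For what it's worth, on that Jenkins line the slave definitions live in the <slaves> element of $JENKINS_HOME/config.xml, and one common pitfall is editing the file while Jenkins is running, since Jenkins rewrites it from memory. A minimal extraction sketch with a stub file (real paths and node contents will differ):

```shell
# Stub standing in for the production $JENKINS_HOME/config.xml.
cat > config.xml <<'EOF'
<hudson>
  <slaves>
    <slave>
      <name>node-01</name>
      <remoteFS>/home/ubuntu/workspace</remoteFS>
    </slave>
  </slaves>
</hudson>
EOF
# Extract the whole <slaves> block to merge into the sandbox config.xml
# (do the merge with both servers stopped, then restart the sandbox).
sed -n '/<slaves>/,/<\/slaves>/p' config.xml > slaves-fragment.xml
grep -c '<name>' slaves-fragment.xml
```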
I'm working with Jenkins servers in three different environments: Development, Staging, and Production.
We work out the kinks in our Jenkins jobs in dev, test them in stage, and then finally move them to production. We do that by either replicating the job in the GUI (cut and paste) or tarring up the job directory and moving it to the next environment via the command line.
I'm wondering if the same kind of move can be done with the service accounts that run these jobs. I can see the user account directories and config files under /var/lib/jenkins/users. What I don't see are the security settings that get applied to the user from the "Configure Global Security" screen in the GUI.
For these service accounts, we have the minimal authorization of READ on Global and READ and BUILD on Jobs.
What I'd like to be able to do is prove out a service account in dev and then promote it to stage and prod from the command line, versus having to manually recreate the account in the GUI for each upstream environment. If the API key could also be moved along with it, that would be great.
Any thoughts or ideas?
User permissions are in config.xml under the Jenkins root folder, in the <authorizationStrategy> section.
This file contains other global settings as well, so copying it wholesale would not be advisable.
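A hedged sketch of the promotion itself: each user's own settings, including the API token, live in users/<name>/config.xml under JENKINS_HOME, while the permission grants sit in the <authorizationStrategy> section mentioned above. The service-account name and directory layout below are hypothetical, and the target Jenkins should be stopped before copying (so the files are not overwritten), then restarted.

```shell
# Stand-in directories for the two JENKINS_HOMEs (hypothetical layout).
mkdir -p dev/users/svc_deploy stage/users
printf '<user><fullName>svc_deploy</fullName></user>\n' \
  > dev/users/svc_deploy/config.xml

# Promote the account by copying its directory; the API token travels with it.
cp -r dev/users/svc_deploy stage/users/
cat stage/users/svc_deploy/config.xml
```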
Just a wild thought, but why not use a master-slave configuration and trigger builds on the desired remote machine based on some "environment" parameter? You can also look through the plugins section to see if you can find something useful, such as:
the Node Label Parameter plugin, which allows you to define and select the label of the node where you want the build to run
the Copy To Slave plugin, which facilitates copying files to and from a slave
That way you'll only have one job configuration which can be executed in different environments without too much hassle.