gitlab credentials for specific user - docker

I am having trouble figuring out how to add credentials for a deploy job in GitLab in a way that only a certain user or user group gets access to them.
What I want to do is the following. I have a GitLab instance with a GitLab CI configuration that adds a manually triggered job to deploy my code. The deploy job runs in a Docker image and deploys to a server via Fabric. This Fabric call needs an SSH private key to log into the target server. That still wouldn't prevent anyone from clicking the manual job, but at least it would fail for them because of the missing credentials.
I have now added the SSH private key as a secret variable. Unfortunately, this secret variable is visible to everyone who can trigger the build, which would be all developers, because I of course want them to be able to trigger the build jobs. I just don't want them to be able to trigger the deploy job, and I certainly don't want them to be able to access the SSH key, e.g. by checking in "echo $SECRET_VARIABLE". So in my ideal world, I would be able to add a secret variable to my account alone, which would only be set if I am the person who triggers the job.
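One partial, hedged workaround, assuming your GitLab version exposes the predefined $GITLAB_USER_LOGIN CI variable: have the deploy script itself refuse to run for anyone but the intended user. This only guards the manual job; it does not by itself hide the secret variable from jobs that do run (the user name below is a placeholder):

    # Guard at the top of the deploy job's script section
    # GITLAB_USER_LOGIN is a predefined GitLab CI variable; "allowed-deployer" is a placeholder
    if [ "$GITLAB_USER_LOGIN" != "allowed-deployer" ]; then
        echo "User $GITLAB_USER_LOGIN is not allowed to run the deploy job"
        exit 1
    fi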

Related

Bitbucket webhook is not triggering Jenkins job even though the webhook returns 200

I am using Bitbucket Cloud, and Jenkins is running on an EC2 instance on a private network.
The connection between Bitbucket and Jenkins is established: when I run the job manually, it finishes green. However, when I make changes in the repo and they get merged, the webhook-triggered build does not use my app password, and as a result the job fails.
I am getting an authentication failure, and it is basically asking me to use the app password. I have already created one, but the webhook-triggered build still does not use it. The webhook now gets a 200 response, which means it is able to reach the Jenkins server, but I am not sure why authentication still fails.
Can you confirm that:
Your credentials are correctly placed under the Credentials section of Jenkins. The username should be the username of the account you are using, and the password should be the app password (a quick way to verify the app password itself is sketched below, after this list).
The ID of the credentials should be used within your pipeline script anywhere you want access to the Bitbucket repository.
Ensure that Bitbucket has access to your EC2 instance that runs Jenkins.
Basically, these are the 3 points where authentication can fail. Checking each point should reveal the problem.
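If you want to rule out the app password itself (the first point above), a quick check from any shell might look like this; the user name, workspace, and repository are placeholders:

    # Lists the remote refs over HTTPS using the app password.
    # If this fails with an authentication error, the app password or its
    # repository read permission is the problem, not the webhook.
    git ls-remote https://your-username:your-app-password@bitbucket.org/your-workspace/your-repo.git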

GitLab Jenkins integration

I am using Jenkins as the CI server and GitLab as the code base; both are running in two separate Docker containers.
I have created a CI/CD pipeline in Jenkins and a GitLab repo, and also set up a webhook, which is working fine. Now I want to integrate Jenkins from GitLab and have entered the required details, but it shows a 401 error when I test. Please note that the entered details are verified and the credentials are working.
A 401 indicates that the request is unauthorized. So, stated another way: your credentials may be entered correctly, but they do not carry the permission that is needed.
You should make sure your API keys are generated with the appropriate scope and/or that the user account associated with the keys has the appropriate permissions.
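Assuming the Jenkins side is configured with a GitLab personal access token, one way to sanity-check that token outside of Jenkins (the host name and token are placeholders):

    # Calls the GitLab API directly with the personal access token.
    # A 401 here means the token itself is invalid or lacks the "api" scope,
    # independent of anything Jenkins does.
    curl --header "PRIVATE-TOKEN: <your-access-token>" "https://gitlab.example.com/api/v4/projects"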

Starting already existing VM with Jenkins on Google Cloud

I am trying to start a VM that already exists in Google Cloud with my Jenkins, to use it as a slave. The reason is that if I start this VM from its template, I need to do a few things before I can use my Jenkins code.
Does anyone know how to start VMs that already exist in my VM pool in Google Cloud via Jenkins?
There are two possible approaches to this, depending on which operations you need to run on your machine beforehand that prevent you from simply recreating it.
The first, and possibly the most straightforward given the restriction that the machine already exists, is to talk directly to the GCE API in order to list and start the machine from Jenkins (using a build step).
Basically, you can make requests to the GCE API to perform operations on your instances. I suggest doing this with gcloud from within the Jenkins master node, as it saves you having to write your own client. It's straightforward: you only have to "install" it on your master, and you can make it work safely using a service account.
Below is the outline of this approach:
Download the cloud-sdk to your master node following these release instructions.
You can do this once outside of Jenkins or directly in the build step; it doesn't matter, as long as Jenkins and its user are able to get the binary.
Create the service account, generate authentication keys, and give it permissions to interact with GCE.
Using a service account is the way to go, as you can restrict its permissions to just the operations that are relevant for you; a sketch of the commands involved follows below.
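For reference, the service-account setup could look roughly like this with the gcloud CLI; the account name and project id are illustrative, and roles/compute.instanceAdmin.v1 is one role that is sufficient to start and stop instances:

    # Create a dedicated service account for Jenkins
    gcloud iam service-accounts create jenkins-starter --display-name "Jenkins VM starter"

    # Generate a JSON key that the Jenkins master can read
    gcloud iam service-accounts keys create jenkins-starter-key.json \
        --iam-account jenkins-starter@my-project.iam.gserviceaccount.com

    # Grant only what is needed to start/stop instances
    gcloud projects add-iam-policy-binding my-project \
        --member "serviceAccount:jenkins-starter@my-project.iam.gserviceaccount.com" \
        --role "roles/compute.instanceAdmin.v1"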
Once you get the service account that will be bound to your gcloud client, you'll need to set it up in Jenkins. You might want to do this in a build step (I'm using Groovy here but it should be easy to translate it to the UI):
stage('Start_machine') {
    steps {
        // Assumes gcloud is already installed on this node, but you can also download it from here if that's more convenient
        sh '''
            # You can set GOOGLE_PROJECT_ID as a scoped env var in Jenkins or just hard-code it
            gcloud config set project ${GOOGLE_PROJECT_ID}
            # This needs a JSON key file location accessible by Jenkins, like: --key-file /var/lib/jenkins/..key.json
            gcloud auth activate-service-account --key-file ${GOOGLE_SERVICE_ACCOUNT_KEY}
            # Check the reference on this command: https://cloud.google.com/sdk/gcloud/reference/compute/instances/start
            gcloud compute instances start my_existing_instance
        '''
        echo "Instance started"
    }
    post {
        always {
            echo "Result: ${currentBuild.result}"
        }
    }
}
Wrapping up: you basically create a service account that has the permissions to start your instances, download a client that can interact with the GCE API (gcloud), authenticate it, and start the instance, all from within your pipeline.
The second approach might be easier if there were no constraints regarding the preexisting machine.
Jenkins has a plugin for Compute Engine that will automatically spin up new workers whenever needed.
I know that you need to perform some operations on these slave machines before Jenkins sends them any work. However, I want to bring to your attention that this plugin also supports startup scripts.
So there's always the option to preload your operations there; by the time the machine is up and ready, everything might already be done.
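A minimal sketch of the kind of startup script that could be attached to the worker for that purpose (the package is just an example of a one-time preparation step):

    #!/bin/bash
    # Runs once when the instance boots, before Jenkins assigns it any work
    apt-get update
    apt-get install -y openjdk-11-jre-headless   # example: whatever your builds require
    # ...any other one-time preparation steps go here...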
Hope this helps.

jenkins can ssh to another user, but pgp still thinks it's the jenkins user

I have a jenkins job that uses ssh to connect to the scheduler user on a quartz server; it can restart quartz as the scheduler user, and the processes and libraries appear to be owned by the scheduler, but whenever an encrypt/decrypt task is run, it thinks it's being called as the jenkins user instead of the scheduler.
id and env indicate that the remote shell is running as the scheduler user, so why does the encrypt task look in the jenkins .pgp directory? The only way for me to fix this is to ssh to the box myself, sudo to the scheduler, and restart the jobs. How do I get Jenkins to emulate this?
You would need to record your own private key in Jenkins, through the Jenkins SSH Credentials plugin.
That way, Jenkins would be able to use your own SSH credential when doing its SSH step, connecting to the quartz server as you instead of as jenkins.
I finally spotted the source of my problem, and I'm so embarrassed - the scheduler user's home directory was actually owned by jenkins, with the scheduler user as the group owner. No wonder pgp looked in the jenkins directory for its info. I must have changed ownership to jenkins earlier when I was setting things up, but that wasn't a very good idea.
Thank you for responding - it is great to have some company when one is confused and needing a different perspective.
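For anyone who hits the same symptom, a quick way to check for and repair that kind of ownership mismatch might be (the user name and paths are illustrative):

    # Show who owns the scheduler's home directory and its .pgp directory
    ls -ld /home/scheduler /home/scheduler/.pgp

    # Hand ownership back to the scheduler user and its group
    chown -R scheduler:scheduler /home/scheduler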

jenkins slave runs as user

I have a Jenkins setup with multiple users who log in with the Active Directory plugin. This is useful so that each user can access their own tasks.
However, each user also has different permissions on the local network, such as access to different folders. I have noticed that the permissions given to each task are not linked to the user but to the account under which the slave runs as a service. Is there a way to change that so that the task is executed on the slave under the credentials (and hence permissions) of the user?
Thank you
The problem is that there is only one slave process running the different jobs assigned to that machine by the Jenkins master.
So the slave itself runs as one user (generally, a dedicated account or a system account).
Since you can get the user id as an environment variable (with a plugin like the Jenkins Build User Vars plugin), you might consider configuring the job so that its build step "runs as" the user who triggered the build.
See for instance the Jenkins Authorize Project plugin.
However, as mentioned in this answer:
The "Authorize Project" plugin does not change the OS level user that is running commands.
It only sets the Jenkins user that is running the job and any downstream jobs, using Jenkins authentication (whatever it might be).
So you are left with a build step using runas or su -c commands in order to be sure that your task really does run as the right user.
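A minimal sketch of such a build step, assuming the Build User Vars plugin exposes the triggering user as BUILD_USER_ID and that the slave's account is allowed to sudo or su to that user (both are assumptions about your setup; ./deploy.sh is a placeholder):

    # Run the actual task as the user who triggered the build instead of the slave's account
    sudo -u "$BUILD_USER_ID" ./deploy.sh
    # or, with su:
    su - "$BUILD_USER_ID" -c "./deploy.sh"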
I had a similar issue, and I recall that for more control over projects I used the Role Strategy plugin and set up global security using LDAP servers (Active Directory should also be fine).
I also used the Authorize Project plugin.
Have a look; I hope it solves your problem. Let me know in the comments if anything needs clarification.
You can partially fix your problem this way:
install the slave as a service using the Java Web Start method and JNLP
go to the Services control panel in Windows
under Properties -> Connection, replace the Local System account with a specific user
restart the service
This at least gives you the ability to use one specific account instead of the system account.
