The session logging we have in /etc/profile can interfere with services that launch sub-shells as new users: it always starts an interactive terminal regardless of context, which can prevent certain key processes (e.g. Jenkins) from performing critical tasks.
We had a Jenkins version upgrade, and after the upgrade Jenkins no longer seems to be able to restart. Here's what's happening:
```
ubuntu@hostname:~$ sudo service jenkins status
Correct java version found
Jenkins Automation Server is not running
ubuntu@hostname:~$ sudo service jenkins start
Correct java version found
Starting Jenkins Automation Server jenkins
jenkins@hostname:~$
jenkins@hostname:~$
jenkins@hostname:~$ sudo service jenkins status
[sudo] password for jenkins:
jenkins@hostname:~$ exit
exit
[fail]
ubuntu@hostname:~$
```
Essentially, it seems that “service jenkins start” is somehow causing a session to be created, which drops it straight into the session-logging script. I suspect this is because /etc/profile contains a script-based session logger, and that Jenkins ends up executing that script when it su's into its own jenkins user.
What should I do to alleviate this?
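For reference, the usual way to keep a profile-based logger out of non-interactive shells is to guard it on the interactivity flag in $-. A minimal sketch, with a placeholder logger path rather than our actual script:
```
# /etc/profile fragment (sketch): only start the session logger for
# interactive shells, so non-interactive logins (service scripts, su -c,
# scp, cron) are left alone.
case "$-" in
    *i*)
        # Interactive shell: hand the session over to the logger.
        # /usr/local/bin/session-logger is a placeholder path.
        if [ -x /usr/local/bin/session-logger ] && [ -z "$SESSION_LOGGED" ]; then
            export SESSION_LOGGED=1
            exec /usr/local/bin/session-logger
        fi
        ;;
    *)
        # Non-interactive shell (e.g. Jenkins su'ing into its own user):
        # skip logging entirely.
        ;;
esac
```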
Related
We have a Jenkins matrix job with "SSH Agent" enabled in "Build Environment" with SSH credentials and a post-build action of "Execute Scripts On Matrix" with a shell command that runs ssh expecting to use the credentials stored by ssh-agent.
We recently upgraded from Jenkins v2.249.3 to v2.263.1 (and potentially upgraded some plugins at the same time, though I don't believe that we upgraded any of the ssh-related ones.) The aforementioned shell command now fails because it no longer has access to the ssh credentials it requires.
Comparing the build logs, we see a new call to ssh-agent -k in the Jenkins v2.263.1 parent job log immediately after the matrix children complete and before "[PostBuildScript] - [INFO] Executing post build scripts." that wasn't present with Jenkins v2.249.3.
It would appear that the agent is being killed before running the post-build operations by Jenkins v2.263.1 whereas it wasn't with Jenkins v2.249.3. I was unable to find a setting that controls this.
I entered JENKINS-64394 for this, but I wasn't really sure which components to label it with, which I suspect means that the right people haven't seen it. Does anyone here have any ideas?
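For context, the post-build shell step is essentially of this shape; it only works while the ssh-agent started by the SSH Agent plugin is still alive and SSH_AUTH_SOCK points at its socket (the host and script below are placeholders):
```
# Post-build "Execute Scripts On Matrix" shell step (sketch; host/script are placeholders)
echo "Agent socket: ${SSH_AUTH_SOCK:-<none>}"
ssh -o BatchMode=yes deploy@target.example.com "./run-deploy.sh"
```
With v2.263.1, the ssh-agent -k in the parent log suggests that socket is already gone by the time this step runs.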
I recently had some issues with a version upgrade of my Jenkins server. To update the server, the first step I took was to create a backup:
sudo tar -zcvf /tmp/jenkins.tgz /var/lib/jenkins
Then I copied the archive from server A and untarred it on another server, server B. I can see all of server A's files [workspace, config.xml, jobs] on server B in /var/lib/jenkins.
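For reference, the transfer and restore amounted to roughly the following (the server B hostname is a placeholder):
```
# On server A: archive the Jenkins home (the command above)
sudo tar -zcvf /tmp/jenkins.tgz /var/lib/jenkins

# Copy the archive to server B (hostname is a placeholder)
scp /tmp/jenkins.tgz ubuntu@server-b:/tmp/

# On server B: tar stripped the leading '/', so extracting from /
# puts everything back under /var/lib/jenkins
sudo tar -zxvf /tmp/jenkins.tgz -C /
```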
When I logged into the Jenkins box, it showed:
Jenkins detected that you appear to be running more than one instance of Jenkins that share the same home directory '/var/lib/jenkins'. This greatly confuses Jenkins and you will likely experience strange behaviors, so please correct the situation.
This Jenkins:
490566619 contextPath="" at 8779@jm1597185631ybr.cloud.phx3.gdg
Other Jenkins:
1998724099 contextPath="" at 20292@jm1584048540yxl.cloud.phx3.gdg
So, I stopped the jenkins service using:
sudo service jenkins stop
Then I restarted the service using:
sudo service jenkins restart
All the jobs suddenly started to appear. I have the following questions:
Why did the jobs start to show up rather than throwing the error about running multiple instances?
If the version is the only issue, why can't the newly provisioned server run the updated version? Is it that when I copy the files from server A, server B's files get overwritten and it therefore shows the same version-upgrade error?
I was able to install and run Jenkins on my Linux subsystem in Windows 10.
It listens on 8082.
But unfortunately, for an unknown reason, it hangs indefinitely after a few minutes (or, to be precise, after I've made a change in a job config and run a build).
Then, I checked in the terminal:
```
root@jup1t3r /h/navds# service jenkins status
Correct java version found
2 instances of jenkins are running at the moment
but the pidfile /var/run/jenkins/jenkins.pid is missing
root@jup1t3r /h/navds# service jenkins stop
Correct java version found
 * Stopping Jenkins Automation Server jenkins
   ...done.
root@jup1t3r /h/navds# service jenkins status
Correct java version found
2 instances of jenkins are running at the moment
but the pidfile /var/run/jenkins/jenkins.pid is missing
```
So there is no way to stop Jenkins. How can I restart it?
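For reference, with the pidfile gone, one way to find and stop the stray instances by hand is along these lines (a sketch, assuming the processes were started from jenkins.war):
```
# List the running Jenkins JVMs (the [j] keeps grep from matching itself)
ps aux | grep '[j]enkins.war'

# Kill them directly, since the init script has no pidfile to go on
sudo pkill -f 'jenkins.war'

# Start the service again so a fresh pidfile gets written
sudo service jenkins start
```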
We recently tried moving our Windows Jenkins slaves to run as a service instead of just running the slave agent jnlp file.
According to the Mercurial Plugin (https://wiki.jenkins-ci.org/display/JENKINS/Mercurial+Plugin),
The default installation runs the Windows service with the "local system" account, which does not seem to have enough privileges for hg to execute, so you could try running the Jenkins service with the same account as TortoiseHG, which will allow it to complete.
This we did, and it worked. For a while.
But sometimes, after a disconnect between the Jenkins slave and master, it would stop working. Jenkins would call Mercurial and it would hang, just like it did when the service was running with the "local system" account.
I could sometimes get it to start working again by restarting the Jenkins service on the slave, but sometimes I'd have to go back in and reset the service to run with an elevated account.
Has anybody else experienced anything like this? Is there any way to keep the Jenkins service running with elevated privileges?
I created a job in Jenkins and I want to build the project using Ansible. I want to run my command on several hosts (that's why I use Ansible). When I try to run the project, it fails with a permission error:
```
/home/ubuntu/install.sh -s -U ubuntu -f 5
FATAL: command execution failed
java.io.IOException: Cannot run program "/usr/bin/ansible" (in directory "/var/lib/jenkins/jobs/Standard Demo/workspace"): error=13, Permission denied
        at java.lang.ProcessBuilder.start(ProcessBuilder.java:1047)
        at hudson.Proc$LocalProc.<init>(Proc.java:244)
        at hudson.Proc$LocalProc.<init>(Proc.java:216)
```
Do you know what the problem is? I am logged into the Jenkins server as the admin user.
This is not an Ansible problem, it is a configuration issue in Jenkins. As others have noted, by default Jenkins will run as a "normal user" (typically jenkins). That is the user that jobs and steps (including shell scripts like the one you're calling) will run as. In your case, this user does not have sufficient permissions to run Ansible.
I don't recommend changing this default user because (a) there are good security reasons for this setup, and (b) it can actually be complex to do right, since you would have to sort out permissions for all of Jenkins to match the new user. However, it's quite easy to do things like run sudo from within a shell build step. Just use that (and a properly configured /etc/sudoers) to gain the permissions you need during the build.
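For example, a minimal sketch of that approach (the sudoers entry and inventory name are assumptions; tighten them to match your install):
```
# /etc/sudoers.d/jenkins-ansible  (edit with visudo)
# Let the jenkins user run ansible without a password, and nothing else.
jenkins ALL=(ALL) NOPASSWD: /usr/bin/ansible, /usr/bin/ansible-playbook

# Then, in the job's shell build step (inventory.ini is a placeholder):
sudo /usr/bin/ansible all -i inventory.ini -m ping
```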