Get the list of files that have changed - Puppet / Jira

We have our Jira installation managed by Puppet, so we have a Puppet script that installs Jira. After installation, a few files such as server.xml and setting.sh were changed manually on the server without going through Puppet.
Now we need to commit those changes back to the Puppet repo (managed by r10k). But how can we identify which files have changed compared to the versions in Puppet?

You shouldn't be changing files manually on the server and then committing them back into Puppet.
But how can we identify which files have changed compared to the versions in Puppet?
You can't.
This fundamentally breaks the idea of infrastructure as code and configuration management. The whole point of putting your configuration data into Puppet is to stop this behaviour, so that multiple people can always know what has changed, because the changes are tracked in version control.
Make all the changes inside a git repo and then test them with a Puppet run, potentially with --noop if you're worried the change may break JIRA.
You need to set up a workflow so that this is easy, rather than continuing to manipulate files on the server by hand and then expecting Puppet to understand what each person has done.
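A rough sketch of that workflow, assuming the Jira config is templated in a module inside your r10k control repo (the repository URL, branch name and file path below are placeholders):

    # On a workstation: fold the manual edit into the module, not into the live server
    git clone git@git.example.com:puppet/control-repo.git
    cd control-repo
    git checkout -b jira_tuning            # r10k turns this branch into a Puppet environment
    $EDITOR site/profile/templates/jira/server.xml.erb
    git commit -am "Fold manual server.xml change into the Jira profile"
    git push origin jira_tuning

    # On the Jira server: preview what Puppet would change without applying anything
    puppet agent --test --noop --environment jira_tuning
    # (add --show_diff to also print content diffs for managed files)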

Related

Jenkins Upgrade: What configuration should I be concerned about in the Jenkins WAR directory?

I am trying to automate Jenkins upgrades so they do not have to be hands-on. Some documentation recommends creating a batch file with instructions on the machine running Jenkins and creating a scheduled task to run the batch job. The site I found with a batch file is here, where it says:
It does delete the complete exploded war file from the deployment location, so be careful if you save any configuration files to that directory.
What configuration file would I have to worry about? No one I've talked to at my company knows of any configuration files held there, and they seem to think we have a pretty default setup, so what could I look for manually that would tell me whether or not I should be concerned?
We are running Jenkins on a Windows virtual box, I believe with Jenkins running as a service.
Alternatively, if the above method is not the easiest or best way to automate Jenkins upgrades, does anyone know a better way?
You can ignore this warning. I've never seen anything storing configuration files in that directory. It is intended to be used as a cache only.
If unsure, check your existing war directory for any files with timestamps newer than the installation time.
Here, on a busy Jenkins master, no files have been added or modified there over a period of several months (since initial war file explosion at installation time).
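For example, from a POSIX shell (Git Bash on the Windows host, or directly on a Linux master), something along these lines lists anything modified after the war was exploded; WAR_DIR is a placeholder, and web.xml's timestamp is only an approximation of the explosion time:

    WAR_DIR="/c/Jenkins/war"       # placeholder - point this at your exploded war directory
    find "$WAR_DIR" -type f -newer "$WAR_DIR/WEB-INF/web.xml" -print
    # no output means nothing has been added or changed there since installation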

Revert to original configuration in Jenkins

I have a hosted Jenkins server, which has a master node and a couple of slave configurations. Last night, the job that triggers the matrix-based build configuration failed. I did a restart and performed clean-up jobs via Jenkins, but none of those fixed the issue. The initial error that was logged was:
FATAL: hudson.remoting.RequestAbortedException: java.io.IOException: Unexpected termination of the channel
hudson.remoting.RequestAbortedException: hudson.remoting.RequestAbortedException: java.io.IOException: Unexpected termination of the channel
Following that, I performed a reload of the configuration from disk, followed by a manual restart via <jenkins_job_url>/restart, which made the build system even worse. The master went offline due to lack of space in the /tmp folder, which I fixed by cleaning it up. After that I noticed that the original slave server configuration was no longer there: slave-0 and slave-1 were still present, but slave-2 was gone and had been replaced with a slave-3 configuration. Slave-0 and slave-1 seem to be working fine, but slave-3's builds are failing with "Failed to mkdirs". Is there a way I could revert to the original configuration I started from? The steps I performed seemed to make sense at the time, but I had no idea they would have so many repercussions. Any help is appreciated.
UPDATE 1: I guess I should have used one of the configuration backup plugins available for Jenkins, but is there some specific directory other than $JENKINS_HOME where these configurations get stored?
You should always backup ${JENKINS_HOME} before doing major changes.
Even better is to have a job based on time trigger that will do this for you once in a while.
Other than that, only physically restoring the hard drive to a previous state will get your old configs back. Once a config is overwritten in Jenkins, it is gone, except when you are using the Job Config History plugin. Though keeping manually created backups is better, in my opinion: where's the insurance that Job Config History won't disappear along with the job configs? :)
Aside from that, the mentioned plugin tracks system config too.
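A minimal sketch of that kind of time-triggered backup job, e.g. as a shell build step (the paths, exclude pattern and retention count are assumptions; adapt it to a batch step on Windows):

    #!/bin/sh
    # Archive JENKINS_HOME (jobs, config.xml, plugins, ...) with a date stamp
    JENKINS_HOME="${JENKINS_HOME:-/var/lib/jenkins}"   # placeholder default
    BACKUP_DIR=/backups/jenkins                        # placeholder
    mkdir -p "$BACKUP_DIR"
    tar czf "$BACKUP_DIR/jenkins-home-$(date +%Y%m%d).tar.gz" \
        --exclude='*/workspace/*' \
        -C "$(dirname "$JENKINS_HOME")" "$(basename "$JENKINS_HOME")"
    # keep only the 14 most recent archives
    ls -1t "$BACKUP_DIR"/jenkins-home-*.tar.gz | tail -n +15 | xargs -r rm -f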
As mentioned by @Zloj, there is no easy way to repair the configuration once the changes get overwritten. I ended up fixing the issues by deleting the slaves that were not working, remapping the existing builds to new slaves that I created as copies of the ones that were working, reducing the number of builds (by removing the ones from the matrix that aren't required) and finally taking a backup via the thinBackup plugin (https://wiki.jenkins-ci.org/display/JENKINS/thinBackup) and storing the configuration in Stash :)
For Windows, just delete the .jenkins folder in your home directory. This will revert you to the original settings.
We have been using the SCM Sync Configuration plugin, and it has saved our butt many times. It stores all job configuration, including the global config, in Bitbucket. The latest plugin will say that it's no longer maintained, but I was able to pull the source code from GitHub and rebuild it ourselves.
One word of caution: don't use global variables for storing passwords and keys, as this plugin will sync them all to GitHub. Strictly use Jenkins Credentials.

Jenkins putting '$' characters in file/folder names, breaking automatic backups

I'm using Jenkins v1.546, hosted on a Windows Server 2008 R2 SP1 machine.
I've set up a fairly simple job for building a Maven Java project. It polls the SCM with no schedule and picks up remote build triggers, requiring an authentication token. It uses Subversion and performs clean checkouts with svn update. Additionally, it has a post-build step that archives some build artifacts (i.e., the resulting WAR and WSDLs).
The issue I'm experiencing is that the builds it stores on the filesystem contain invalid characters in their filenames. This causes our automatic backup process to blow up, because it cannot alter or remove the directories/files containing the '$'. I can't move or delete those folders or files myself either, but if I rename one and remove the $, things work fine. Also, if I try to follow one of the links with the $ in it, it doesn't resolve. None of the other jobs seem to do this - just mine, of course. Does anyone know why this may be occurring and what I can do to resolve it?
I've attached multiple screenshots that show the bad filename and my Jenkins job setup. I had to white out some company information. If I can provide any additional information to help troubleshoot this, just let me know.
Also, as an update, I did some additional research, looking through the changelogs for each released version of Jenkins since my version (latest is 1.557). I saw three possible issues in the changelogs that could be related, but it's hard for me to tell. I cannot simply upgrade our Jenkins to test out this theory, since I'll need to provide a reason for upgrading beyond a hunch.
https://issues.jenkins-ci.org/browse/JENKINS-21023
https://issues.jenkins-ci.org/browse/JENKINS-20534
https://issues.jenkins-ci.org/browse/JENKINS-21958
The $ is a perfectly valid character in a Windows directory name. You can manually make a folder with it and delete it without any problems.
The com.company$moduleName syntax is used by Jenkins Maven-style jobs to separate the modules of your build. If you don't see this structure for other people's jobs, it is because they are either not building a Maven-style job or don't have multiple modules in a single job.
What is strange, though, is that these are symlinks (I don't see that in my environment). It is possible that the location referenced by the symlink has been deleted but the link remains. In that case, you would not be able to navigate to the location through the link, which is what you are experiencing.
Is it possible that your backup software is deleting the target directories before deleting the links?
In any case, do a simple dir on the directory with the links to see what they link to, and then verify that those target locations exist. If they don't, you need to figure out who or what is deleting the links' targets.
Edit:
This seems to be more related to the issue that you are facing. Unfortunately, it's marked as "unresolved"
https://issues.jenkins-ci.org/browse/JENKINS-20725
The issue stems from the fact that the symlinks reference targets with / instead of \.
My Maven plugin version (not the Maven version) is 2.6. See if upgrading the Maven plugin in Jenkins helps you. Also, I am running Maven 3.2.2 from the automatic installers. Try that as well, as I don't see symlinks in my modules.

Jenkins missing jobs after removal of plugins

I have a Jenkins Server (1.510) on Win 2008 with ~100 jobs.
After installing and then uninstalling the CloudBees set of plugins (Plugin_1, Plugin_2) and restarting, I have the following issues:
Half of the jobs are now missing.
Many plugins are not functioning well; for example, the green-balls plugin is not working, and the entry to launch the backup plugin is missing.
Many built-in Jenkins entries, such as the new "Credentials" item, are missing from the "Jenkins Configure" menu.
Looking at the filesystem, I still see all the jobs.
I have already tried:
Using the reload configuration option
Reinstalling the plugins
Reinstalling the same version of Jenkins once again
Still the jobs are missing.
Any idea how to solve it?
Thanks,
Doron
When a job is loaded, many of the related Java classes get instantiated. If instantiation fails (usually because some plugin has been removed and a class is no longer available), the job is hidden.
I suspect you have accidentally removed some other plugin too.
Note: before actually doing anything, take a full backup! The easiest approach is to back up the entire Jenkins folder, where the jobs, configuration, etc. reside.
The easiest solution might be to install Jenkins from scratch, install the plugins you do need (see below for troubleshooting if you're missing some), then copy the jobs subfolder to the new Jenkins. It might be best to redo any configuration under Manage Jenkins by hand, but you can also just try copying the related XML config files.
If you are missing a plugin and can't figure out which one, look at the jenkins.out.log and jenkins.err.log log files and search for exceptions thrown after Jenkins starts. That may give you a clue about which plugin you are missing.
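For example, from a shell (the log location varies by install; on a Windows service install the two files typically sit in the Jenkins installation folder), the usual culprits show up with something like:

    grep -nE 'ClassNotFoundException|NoClassDefFoundError|CannotResolveClassException' \
        jenkins.out.log jenkins.err.log
    # the missing class names usually point straight at the plugin that was removed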
You can also try editing the job XML files to remove the build steps you identified from the exceptions (remember to take backups first!), then restart Jenkins or select "Reload configuration from disk" from the Manage Jenkins page.
If not solved, but you find relevant-looking exceptions or something else interesting, please update the question with details.
After I upgraded Jenkins, one of my jobs disappeared. I found out that although my job directory still existed, the config.xml file inside had somehow gone missing.
I restored this file from a backup, after updating all the plugins that needed updating, and reloaded the configuration, and the job reappeared in Jenkins.

Hudson build scripts location - recommendation?

I'm already finishing my project build automation :) with Hudson and NAnt.
My project structure is something like
$/Project
    build.scripts
        script1.build
        script2.build
        build.properties.xml
    Code
        Project1
        Project2
So Hudson downloads from the root $/Project to the workspace folder.
Everything is OK as long as the build scripts are in the workspace: I can run them very easily. What is bugging me is that, because the build scripts are inside the workspace, I can't schedule Hudson to run automatically, either on a timer or on SCM changes, because it will always detect changes to these files (note build.properties.xml, which I check out and check in at build time to store some stats).
Where do you recommend these files live, while still getting the advantage of having them source-controlled?
What I ended up doing is to NOT check in changes to those files. I changed my CI workflow to create another file (local to the workspace only) that the changes are written to.
This way, I still get the last-build info written somewhere I can pick it up, and I avoid the issue of Jenkins detecting the change.
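A sketch of that idea as a shell build step (the file and property names are made up); because the file stays local to the workspace and is never checked in, SCM polling no longer sees a change after every build:

    # Write build stats to a workspace-local file instead of the versioned build.properties.xml
    printf 'build.number=%s\nbuild.timestamp=%s\n' \
        "$BUILD_NUMBER" "$(date +%Y-%m-%dT%H:%M:%S)" > last.build.local.properties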
PS: I changed from Hudson to Jenkins since I saw that most plugins ran away from the former. The transition was too smooth to be true.

Resources