We are running gerrit 2.10.7, and we've been having occasional problems with corrupted objects not being repaired by gerrit gc, even though git gc repairs them fine.
On the other hand, I read that gerrit gc creates bitmaps which optimize other gerrit operations.
So I'm wondering if it wouldn't make sense to run both, especially since gerrit gc doesn't notify me on failures. The only way to confirm gerrit gc ran is to examine the logs.
If I do run both, should I run git gc first, then gerrit gc, or the other way around?
So, what do you do?
Also posted here: https://groups.google.com/forum/#!topic/repo-discuss/XXsoBhqQUiY
If you have corrupted objects which gerrit gc cannot fix but git gc can, I wonder if it would be worth creating an issue ticket for JGit (Gerrit uses JGit internally). But before doing that, I would suggest updating Gerrit to the most recent version and seeing if the issue still exists there.
Basically, running Gerrit gc on a regular basis is the suggested way. You can configure your Gerrit instance to run it automatically.
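For example, a minimal sketch of scheduling the built-in gc in gerrit.config (the [gc] section with startTime and interval is how I recall it from the Gerrit documentation; verify against your version's docs):

    [gc]
      startTime = Sat 03:00
      interval = 1 week

You can also trigger it on demand over SSH, e.g. ssh -p 29418 admin@gerrit.example.com gerrit gc --all, where the user and host are placeholders for your own.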
My former web developer set up my site so that it uses Jenkins and GitHub. I understand the very basics of GitHub and even less of Jenkins. But in theory, when I make minor text changes to my website, can't GitHub manage the process of pushing those changes to the server? Or is there some good reason that Jenkins is also involved?
Thank you.
Yes. It's not a must, but using both Jenkins and GitHub will make your life easier. GitHub and Jenkins are two tools that serve different functions.
GitHub will mainly help you manage your codebase, resolve conflicts, etc. It basically acts as a repository: you can commit your changes, get others' updates, and always stay up to date. There are tons of other advantages, but I'll keep it simple for clarity.
Jenkins is an open-source automation server. In your case, you can automate building the product. For example, if you have a test environment, or when you deploy changes to live, you can do all of that with just a click. You can build test and live environments separately, and with concepts like pipelines you can even integrate the build with tests, etc.
If you are only talking about your local environment, then yes, Git is enough, because you can build the project manually. But in production, having both Git and Jenkins is a handy option.
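To make that concrete, here is a minimal sketch of the manual deploy steps that a Jenkins job would otherwise automate (the directory and branch name are hypothetical):

    # On the web server: the manual equivalent of a Jenkins deploy job
    cd /var/www/mysite       # hypothetical deploy directory
    git pull origin main     # fetch the latest committed changes
    # ...then run any build or test steps by hand

With Jenkins, a GitHub webhook can trigger those same steps automatically on every push, so minor text changes reach the server without anyone logging in.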
Read more on Jenkins
We have our JIRA managed by Puppet, so we have a Puppet script to install JIRA. After the installation, a few files like server.xml and setting.sh were manually changed on the server without using Puppet.
We need to commit those changes back to the Puppet repo (r10k-managed). But how can we identify which files have changed compared to the files in Puppet?
You shouldn't be changing files manually on the server and committing them back into puppet.
But how can we identify which files have changed compared to the files in Puppet?
You can't.
This fundamentally breaks the idea of infrastructure as code and configuration management. The whole idea of putting your configuration data into puppet is to stop this behaviour so that multiple people can always know what has changed because the changes are tracked in version control.
Make all the changes inside a git repo and then test them using a Puppet run, potentially with --noop if you're worried this may break JIRA.
You need to get a workflow set up so that this is easy, not continue to manipulate files on a server by hand and then expect Puppet to understand what each person has done.
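To expand on the --noop suggestion above: a no-op run is the closest you can get to identifying the drift, because it reports every managed file that differs from what Puppet expects without changing anything. A minimal sketch:

    # Dry run: report what Puppet *would* change, with diffs, but change nothing
    puppet agent --test --noop --show_diff

Any file resource listed in that output differs between the server and your Puppet code; port those differences into the git repo, then re-run without --noop.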
I have a Jenkins server hosted, which has a master node and a couple of other slave configurations. Last night, the job that triggers the matrix-based build configuration failed. I did a restart and performed cleanup jobs via Jenkins, but none of those fixed the issue. The initial error that was logged was:
FATAL: hudson.remoting.RequestAbortedException: java.io.IOException: Unexpected termination of the channel
hudson.remoting.RequestAbortedException: hudson.remoting.RequestAbortedException: java.io.IOException: Unexpected termination of the channel
Following that, I performed a reload of the configuration from disk, followed by a manual restart via <jenkins_job_url>/restart, which made the build system even worse. The master went offline due to a lack of space in the /tmp folder, which I fixed by cleaning up the space. After that, I noticed that the original slave server configuration was no longer there: slave-0 and slave-1 were still present, but slave-2 was gone and had been replaced with a slave-3 configuration. Slave-0's and slave-1's builds seem to be working fine; however, slave-3's builds are failing with "Failed to mkdirs". Is there a way I could revert to the original configuration from where I started? The steps I performed seemed to make sense initially, but I had no idea they would have so many repercussions. Any help is appreciated.
UPDATE1: I guess I should have used some of the configuration backup plugins available in Jenkins, but is there some specific directory other than $JENKINS_HOME where these configurations get stored?
You should always back up ${JENKINS_HOME} before making major changes.
Even better, have a time-triggered job that does this for you periodically.
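As a minimal sketch, a nightly crontab entry could look like this (the backup path and exclusions are assumptions; note the escaped % that cron requires):

    # Nightly tarball of Jenkins configuration, skipping bulky workspaces and build records
    0 2 * * * tar czf /backups/jenkins-$(date +\%F).tar.gz -C /var/lib/jenkins --exclude=workspace --exclude=builds .

Restoring is then a matter of stopping Jenkins, unpacking the tarball over ${JENKINS_HOME}, and starting it again.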
Other than that, only physically restoring the hard drive to a previous state will get your old configs back. Once a config is overwritten in Jenkins, it is gone, except when you are using the Job Config History plugin. Though keeping manually created backups is better in my opinion: where's the insurance that JobConfigHistory won't disappear along with the job configs? :)
Aside from that, the mentioned plugin tracks system config too.
As mentioned by @Zloj, there is no easy way to repair things once the changes get overwritten. I ended up fixing the issues by deleting the slaves that were not working, remapping the existing builds to newer slaves that I created by copying the slaves that were working, reducing the number of builds (by removing the ones from the matrix that aren't required), and finally taking a backup via the https://wiki.jenkins-ci.org/display/JENKINS/thinBackup plugin and backing up the configuration at Stash :)
For Windows, just delete the .jenkins folder in your home directory. This will revert you to the original settings.
We have been using the SCM Sync Configuration plugin, and it has saved our butts many times. It stores all job configuration, including the global config, in Bitbucket. The latest plugin will say that it's no longer maintained, but I was able to pull the source code from GitHub and rebuild it ourselves.
One word of caution: don't use global variables for storing passwords and keys, because this plugin will sync them all to GitHub. Strictly use Jenkins Credentials.
I'm using Jenkins v1.546, hosted on a Windows Server 2008 R2 SP1 machine.
I've set up a fairly simple job for building a Maven Java project. It polls the SCM with no schedule and picks up remote build triggers, requiring an authentication token. It uses Subversion and performs clean checkouts with svn update. Additionally, it has a post-build step that archives some build artifacts (i.e., the resulting WAR and WSDLs).
The issue I'm experiencing is that the builds it stores on the filesystem contain invalid characters in their filenames. This causes our automatic backup process to blow up, since it is unable to alter or remove the directories/files containing the '$'. I cannot move/delete those folders or files either, but if I rename one and remove the $, then things work fine. Also, if I try to follow one of these links with the $ in it, it doesn't resolve. None of the other jobs seem to do this, just my job, of course. Does anyone know why this may be occurring and what I can do to resolve it?
I've attached multiple screenshots that show the bad filename and my Jenkins job setup. I had to white out some company information. If I can provide any additional information to help troubleshoot this, just let me know.
Also, as an update: I did some additional research, looking through the changelogs for each version of Jenkins released since mine (the latest is 1.557). I saw three possible issues in the changelogs that could be related, but it's hard for me to tell. I cannot simply upgrade our Jenkins to test out this theory, since I'll need to provide a reason for upgrading beyond a hunch.
https://issues.jenkins-ci.org/browse/JENKINS-21023
https://issues.jenkins-ci.org/browse/JENKINS-20534
https://issues.jenkins-ci.org/browse/JENKINS-21958
The $ is a perfectly valid character in a Windows directory name. You can manually make a folder with it, and delete it, without any problems.
The com.company$moduleName syntax is used by Jenkins Maven-style jobs to separate the modules of your build. If you don't see this structure for other people's jobs, it is because they are either not building a Maven job, or they don't have multiple modules in a single job.
What is strange, though, is that these are symlinks (I don't see that in my environment). It is possible that the location referenced by the symlink was deleted but the link remains. In that case, you would not be able to navigate to that location through the link (which is what you are experiencing).
Is it possible that your backup software is deleting the target directories before deleting the links?
In any case, do a simple dir on the directory with the links to see what they link to, and then verify that those target locations exist. If they don't, you need to figure out who/what is deleting the links' targets.
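As a sketch from a Windows command prompt (the job path is hypothetical):

    :: List only reparse points (symlinks/junctions); each entry shows its target in brackets
    dir /AL "C:\Program Files (x86)\Jenkins\jobs\MyJob\builds"

Entries marked <SYMLINKD> or <SYMLINK> print their targets; verify that each target actually exists on disk.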
Edit:
This seems to be more related to the issue that you are facing. Unfortunately, it's marked as "unresolved"
https://issues.jenkins-ci.org/browse/JENKINS-20725
The issue stems from the fact that the symlinks reference their targets with / instead of \.
My Maven plugin (not Maven version) is 2.6. See if upgrading your Maven plugin in Jenkins helps. Also, I am running Maven 3.2.2 from the automatic installers; try that, as I don't see symlinks in my modules.
I am on a Red Hat Linux box. I recently updated Jenkins to version 1.509, only to find that after doing so it has "forgotten" two of my jobs/projects. The jobs can still be found on my Jenkins machine under /var/lib/jenkins/jobs, but they no longer show up in the Jenkins GUI. I attempted to re-create them based on the configuration files I have, but I am not confident I have totally re-created the functionality they had.
I also tried to copy the job and/or rename it, hoping that would get Jenkins to see it, but no luck. I had tried cp -r /var/lib/jenkins/jobs/JOB1 /var/lib/jenkins/jobs/JOB2. I also restarted the service a number of times. Finally, I updated all of my plugins on the off chance that was somehow related.
So my question is "How can I get Jenkins to notice these jobs?" or, failing that, "Can I run these jobs from the terminal?"
NOTE: I am not discouraging others from upgrading Jenkins. After I upgraded, Jenkins did complain about a number of things that I didn't pay enough attention to, which I believe is what got me into this mess in the first place.
If I were you, I would try the Jenkins CLI (from $JENKINS_URL/cli) and use the create-job command, feeding the job configuration file to the CLI's stdin.
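A minimal sketch, assuming the default port and the job path from your question:

    # Re-register the on-disk job definition with Jenkins via the CLI
    java -jar jenkins-cli.jar -s http://localhost:8080/ create-job JOB1 < /var/lib/jenkins/jobs/JOB1/config.xml

If the CLI rejects the XML, the error message usually names the offending tag, which points at the plugin involved.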
If that does not help, I would inspect the Jenkins log files (you are saving Jenkins' stdout and stderr somewhere, right?) for any errors or clues. If the job failed to load because of some tag that you can guess is provided by a plugin, try removing that part from the config file.
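For example (the log path is the Red Hat package default, so verify it on your box):

    # Watch for load errors while restarting the Jenkins service
    sudo tail -f /var/log/jenkins/jenkins.log | grep -iE 'error|exception|JOB1'

Failures to load a job typically appear as stack traces that name the job's config.xml.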
If that does not help, I would upgrade Jenkins. I think there might be some fixes related to this in the LTS version changelog since 1.509.
And above all... if I were you, I would start making backups of the job configuration files.
I regularly back up the global config.xml, all the job config.xml files and all the plugins. Using these I can set up my Jenkins from scratch. And I do that to set up a test instance where I try any plugin or Jenkins core upgrade. If I see no problems after running a few of the trickiest builds, I know I can upgrade the production instance with much more confidence.
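As a sketch of that kind of backup (paths assume the default Red Hat layout; adjust to your ${JENKINS_HOME}):

    # Global config, per-job configs, and plugins: enough to rebuild from scratch
    cd /var/lib/jenkins
    tar czf /backups/jenkins-config.tar.gz config.xml jobs/*/config.xml plugins/

Unpacking this into a fresh ${JENKINS_HOME} on a test machine reproduces the setup without the build history.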