In the Gerrit Web UI, I do not see a way to purge changes that are in the "Abandoned" state.
While trying out Gerrit we created a good number of changes that now need to go away from the Web UI.
Is cleaning the database directly with SQL scripts the only alternative?
Gerrit: version 2.4.2
OS: RedHat Based
Please let me know if you require any additional information.
Thanks
Yes, purging the DB is the only way (currently) to completely delete changes. They will also still exist in the repository under the refs/changes/ namespace, but they won't show up in the Web UI once the database has been purged.
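If you also want those leftover refs gone from the bare repository, something like the following should work. This is only a rough sketch run on the Gerrit server; the repository path and change number are placeholders, not values from your setup:

# repository path and change number below are placeholders
cd /path/to/gerrit/git/myproject.git
# list every patch-set ref of change 1234 (the last two digits of the change number form the first path segment)
git for-each-ref --format='%(refname)' 'refs/changes/34/1234/*' |
while read ref; do
  # delete the ref from the bare repository
  git update-ref -d "$ref"
done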
How can I enable automatic fork syncing on Bitbucket Cloud? I can't find the option and have to keep the fork updated manually.
Thanks!
I originally found this article, but it seems to apply only to their Server product: https://confluence.atlassian.com/bitbucketserver/keeping-forks-synchronized-776639961.html
This article indicates that it's a process you will need to manage manually from your local clone:
https://confluence.atlassian.com/bitbucket/forking-a-repository-221449527.html
After you fork a repository, the original repository is likely to continue to evolve as other users commit changes to it. These changes do not appear in your fork. However, you can pull these changes into your fork later by syncing changes locally from the command line.
While this describes pulling upstream manually, you could probably script something to do this more automatically for your purposes. If I end up doing something like this for our team, I'll update this answer with more details or perhaps someone else will do the same.
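For what it's worth, the manual process could be wrapped in something like the sketch below. The remote URLs and branch name are assumptions; adjust them for your repositories:

# one-time setup: point an "upstream" remote at the original repository (URLs are placeholders)
git remote add upstream git@bitbucket.org:original-owner/original-repo.git

# each time you want to sync the fork
git fetch upstream
git checkout master
git merge upstream/master   # bring upstream changes into the local branch
git push origin master      # push the merged result back to your fork on Bitbucket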
Recently, in our enterprise production setup, it seems someone tried to set up a new job / test definition by copying an existing, identical job. However, they appear not to have saved it properly (my guess is that the browser was closed and the session lost).
The new job still got saved, even though it was not set to stable or active; we only found out because changes uploaded to Gerrit started failing in this newly created, partially configured job (the changes were in certain repos that matched certain TDD settings).
Question: the Jenkins system keeps no trace of who set this up in the 'configure versions' option. Is there any way to find out who created the job and when it was done?
No, Jenkins does not store that information by default.
If your Jenkins instance happens to be running behind an Apache or Nginx web server, there might be access logs that can help you. To find out when the job was created, you could look at when its config.xml file was created/modified.
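For example, something along these lines would show you the timestamps; the JENKINS_HOME path, job name, and log location here are placeholders for whatever your installation uses:

# placeholder paths: adjust JENKINS_HOME and the job name for your install
stat /var/lib/jenkins/jobs/my-new-job/config.xml

# if the instance sits behind a reverse proxy, grep its access logs for the job-creation request
grep 'createItem' /var/log/nginx/access.log*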
However, there are a few plugins that can add this functionality so that you won't have this problem again:
JobConfigHistory Plugin – Tracks changes in your job configurations and gives the ability to restore old versions.
Audit Trail Plugin – Keeps a log of who performed particular Jenkins operations, such as configuring jobs.
Is there any way to restore data in Neo4j?
I just lost all my data and want to restore Neo4j to its previous state.
Please help me with this.
Neo4J Server must be configured to run backups. If your server wasn't configured to create backups, then there is no way to restore your data using Neo4J. This is controlled by the Neo4J config option online_backup_enabled.
This feature is enabled by default in Neo4J 2.1.6 Enterprise. However, you have to run a backup manually in order for one to be created. So, unless you ran a backup, you aren't going to find one that was taken automatically anywhere on your system. Sorry :-(
In the future, you can configure and run backups following the Neo4J documentation.
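As a rough sketch (for a 2.x Enterprise install; the addresses and paths are assumptions), that comes down to enabling the online backup service in conf/neo4j.properties and then running the bundled backup tool:

# conf/neo4j.properties -- listen address is a placeholder
online_backup_enabled=true
online_backup_server=127.0.0.1:6362

# run a full backup from the command line (backup destination is a placeholder)
bin/neo4j-backup -host 127.0.0.1 -to /mnt/backups/graph.db

Scheduling that command via cron is one way to make sure a recent backup always exists.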
I am currently using TFS for source control of a web site code base. When I'm done making a change, I need to deploy the changes to a web server for review by the end user.
Generally the way I do this is to connect to that machine via RDP, open Visual Studio, and do a Get Latest to pull the changes...
However, this only works if I'm the only one working on the site. If someone else RDPs in to make changes, the site is locked to my TFS account and they can't change anything...
They could pull their own copy of the site onto their own machine via TFS and check in their changes there, but because so much of their work is done in the database (rather than in code), they would have to duplicate everything into the website every time they commit a change, so they prefer to work directly on the machine...
Is there any way to make this work, or a better way to set this up, so I can pull their changes into my local copy via TFS?
My biggest problem to overcome is that when I do a Get Latest on the web server via RDP, it locks the entire solution to my TFS account. When they log in over RDP with their own credentials, they can't make any changes because the files are checked in, and they can't check anything out because the solution is tied to my account.
If I can get past that I think we'd be okay.
Any info is appreciated; please let me know if I can provide more context. Thanks.
Can you set up a different TFS workspace for each user on your RDP machine? This should allow multiple users to use the TFS client to pull the same solution on the same machine without issue.
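As a rough sketch of what that could look like from the command line on the RDP machine (the collection URL, server path, and local folders are placeholders, and each user would run this under their own account):

rem create a workspace owned by the logged-in user (collection URL is a placeholder)
tf workspace /new ReviewSite-%USERNAME% /collection:http://tfsserver:8080/tfs/DefaultCollection /noprompt

rem map the site's server folder to a per-user local folder (paths are placeholders)
tf workfold /map "$/WebSite" "C:\sites\WebSite-%USERNAME%" /workspace:ReviewSite-%USERNAME%

rem pull the latest version into that folder
tf get "C:\sites\WebSite-%USERNAME%" /recursive

With separate workspaces, each user's Get Latest and checkouts only affect their own mapping, so one account no longer blocks the other.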
I am trying to migrate the setup here at the office from SVN to Git and am setting up Redmine as the host for our projects and issue management (Currently we use a version of Gforge + SVN). I should preface by saying that I'm an embedded C software developer by day and have basically zero experience with Rails or web apps, but I like trying new things so I volunteered to set up the project management tools which will take us into the future.
I have Redmine setup and am using Gitolite as the Git repo manager. Additionally, I am using the ericpaulbishop/redmine_git_hosting plugin to facilitate automatic public ssh key pushing to Gitolite and automatic repo creation when we register a new project. Everything seems to work except the repo view within the project does not keep track of the changesets. (The "History" is just empty, although when you view the files, it does show the latest version correctly)
I copied the post-receive hook from the plugin's contrib directory into the common hooks directory under .gitolite, but again I know little about Ruby and how these gitolite hooks work, so I don't know how to debug this. I notice there are log messages and things in the hook, but I have no idea where those are printed, etc...
I even tried the Howto on the Redmine wiki, HowTo setup automatic refresh of repositories in Redmine on commit:
#!/bin/sh
curl "http://<redmine url>/sys/fetch_changesets?key=<your service key>"
Any ideas on where I should start debugging? I've been able to resolve every problem up to this point, but I'm a little stuck now. The plugin doesn't make it obvious how this is supposed to work, and to be honest, I'm not even sure whether this is a problem with Redmine not reading the repo correctly (or at all), or with gitolite not communicating the way Redmine expects, etc...
I guess I could answer this...
I checked the issues on the plugin's GitHub page and found this one:
https://github.com/ericpaulbishop/redmine_git_hosting/issues/89
This was pretty much exactly my problem. It does appear to be a small bug in the plugin, but you can work around it by changing Max Cache Time to "1 minute or until next commit". That immediately fixed my problem. I simply left it like that, but one of the posters claimed you could then change it back to "until next commit" and it keeps working from then on...