How can I enable automatic fork syncing on Bitbucket Cloud? I can't find the option and have to keep the fork updated manually.
Thanks!
I originally found this article, but it seems to apply only to their Server product: https://confluence.atlassian.com/bitbucketserver/keeping-forks-synchronized-776639961.html
This article indicates that it's a process you will need to manage manually from your local clone:
https://confluence.atlassian.com/bitbucket/forking-a-repository-221449527.html
After you fork a repository, the original repository is likely to continue to evolve as other users commit changes to it. These changes do not appear in your fork. However, you can pull these changes into your fork later by syncing changes locally from the command line.
While this describes pulling upstream manually, you could probably script something to do this more automatically for your purposes. If I end up doing something like this for our team, I'll update this answer with more details or perhaps someone else will do the same.
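In the meantime, a rough sketch of what such a manual/scripted sync could look like (the upstream URL and branch name below are just placeholders):
# One-time setup: add the original repository as an "upstream" remote
git remote add upstream https://bitbucket.org/original-owner/original-repo.git
# Periodic sync: pull upstream changes into the fork's main branch and push them back
git fetch upstream
git checkout master
git merge upstream/master
git push origin master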
Just to give an example: with a tool like TeamCity (similar to Jenkins), one can submit a "remote run", where the server applies the delta/patch of what the user is about to commit and reports whether the change passes the set of checks configured for that project.
With GitHub and Jenkins, can such validation be achieved with any plugins out there, so that a broken build is avoided?
I know that with a pull request and status checks one can achieve a similar end result. But without committing/pushing to the remote Git repository, is there a way Jenkins can handle this validation and produce an initial result?
It isn't possible to have GitHub perform checks on data it doesn't have, so if you don't push the data to the remote server, GitHub won't know anything about it and therefore will do nothing.
Jenkins does have a REST API that you could use to do this, provided you equipped each developer with appropriate credentials. However, this is not a common situation and wouldn't be a recommended configuration.
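For completeness, a remote trigger over that API would look roughly like this (the server URL, job name, parameter, and the way the job obtains the local changes are all assumptions, and depending on the Jenkins version a CSRF crumb may also be required):
# Trigger a parameterized Jenkins build for locally generated changes,
# authenticating with the developer's username and API token
curl -X POST --user "alice:API_TOKEN" \
  "https://jenkins.example.com/job/precommit-check/buildWithParameters?PATCH_URL=https://example.com/changes.patch"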
You'd be better off with a script in the repository that users could install as a hook or invoke from a hook that would perform the testing you want. If your CI jobs run a script in your repository, then sharing code between them should be easy.
Note that you shouldn't mandate pre-commit hooks, since they can interfere with advanced users (who may make intentionally incomplete temporary commits) and they can be disabled by users. Any sort of required checks should be done as part of CI, where policy can be enforced appropriately.
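As a sketch, an opt-in hook that reuses the repository's own check script might look like this (the script path and the choice of a pre-push hook are assumptions):
#!/bin/sh
# .git/hooks/pre-push -- optional local check before pushing.
# Runs the same check script that CI uses; a non-zero exit aborts the push.
exec ./ci/run-checks.sh
Developers who want the early feedback can copy or symlink this into .git/hooks themselves, while CI keeps running the same script as the enforced check.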
Recently, in our enterprise production setup, it seems someone tried to set up a new job / test definition by copying an existing, identical job. However, they apparently did NOT save it properly (and, I'm guessing here, probably closed the browser so the session was lost).
The new job got saved anyway, even though it had not been made stable or active; we only knew about this because changes uploaded to Gerrit started failing in this newly created, partially configured job (because those changes were in certain repos that matched certain TDD settings).
Question: Jenkins does not keep a trace of who set up the job in the 'configure versions' option. Is there any way to find out who created the job and when it was done?
No, Jenkins does not store that information by default.
If your Jenkins instance happens to be running behind an Apache or Nginx web server, there might be access logs that can help you. To find out when the job was created, you could look at when its config.xml file was created/modified.
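For example, something along these lines on the Jenkins server (the Jenkins home path and job name are placeholders):
# When was this job's config.xml created or last changed?
stat "$JENKINS_HOME/jobs/the-new-job/config.xml"
# If a front-end web server logs requests, search its access log for hits on that job
grep "the-new-job" /var/log/nginx/access.log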
However, there are a few plugins that can add this functionality so that you won't have this problem again:
JobConfigHistory Plugin – Tracks changes in your job configurations and gives the ability to restore old versions.
Audit Trail Plugin – Keeps a log of who performed particular Jenkins operations, such as configuring jobs.
I'm beginning to suspect that this is not possible. I was hoping that I could set up custom access control in Gerrit so that a particular role (defined in TF) would not have read access to a specific branch in a repo.
However, it appears that users with this role are unable to clone the repo at all. I was hoping they'd be able to clone and just not be able to check out the restricted branch.
Just wondering if anyone else has encountered this and might be able to confirm the behaviour I'm seeing. I did see another thread here recommending gitolite for partial copies, but I'm restricted to using TF/Gerrit.
Thanks!
Gerrit comes with a default commit message hook to insert a Change-Id in the footer. The hook defines that the Change-Id should be inserted after Issue or Bug in the footer:
CHANGE_ID_AFTER="Bug|Issue"
Now I would like to modify this so it becomes:
CHANGE_ID_AFTER="Refs|Closes|Fixes"
This is because the issue tracker in use doesn't work with Bug or Issue but with an issue identifier starting with a #.
Now of course I can apply this locally but uniformity is desired across the complete development team. It would help a lot if Gerrit would serve this modified file.
I can't seem to find anything in Gerrit's docs that describes what I would like to achieve.
Since the hook is pulled in through scp I thought I might be able to upload a modified hook the same way. This of course didn't work.
I have administrative rights to Gerrit, so that can't be the issue.
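For reference, the local workaround I mean is roughly this (host, port, and user are placeholders):
# Fetch the stock commit-msg hook from the Gerrit server (the usual scp location)
scp -p -P 29418 user@gerrit.example.com:hooks/commit-msg .git/hooks/commit-msg
# Swap the footer keywords the hook looks for (GNU sed shown)
sed -i 's/^CHANGE_ID_AFTER="Bug|Issue"/CHANGE_ID_AFTER="Refs|Closes|Fixes"/' .git/hooks/commit-msg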
I am trying to migrate the setup here at the office from SVN to Git and am setting up Redmine as the host for our projects and issue management (Currently we use a version of Gforge + SVN). I should preface by saying that I'm an embedded C software developer by day and have basically zero experience with Rails or web apps, but I like trying new things so I volunteered to set up the project management tools which will take us into the future.
I have Redmine set up and am using Gitolite as the Git repo manager. Additionally, I am using the ericpaulbishop/redmine_git_hosting plugin to facilitate automatic public SSH key pushing to Gitolite and automatic repo creation when we register a new project. Everything seems to work, except that the repository view within the project does not keep track of the changesets. (The "History" is just empty, although when you view the files, it does show the latest version correctly.)
I copied the post-receive hook from the plugin's contrib directory to the .gitolite common hooks directory, but again, I know little about Ruby and how these gitolite hooks work, so I don't know how to debug this. I notice there are log messages and things in the hook, but I have no idea where those are printed, etc.
I even tried the Howto on the Redmine wiki, HowTo setup automatic refresh of repositories in Redmine on commit:
#!/bin/sh
curl "http://<redmine url>/sys/fetch_changesets?key=<your service key>"
Any ideas on where I start debugging? I've been able to resolve every problem up to this point, but I'm a little stuck now. The plugin doesn't make it obvious how this is supposed to work, and to be honest, I'm not even sure if this is a problem with Redmine not reading the repo correctly (or at all), or gitolite not communicating as Redmine expects, etc...
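For what it's worth, this is roughly how I've been poking at it so far (the URL, service key, and paths are placeholders for my real values):
# Call the refresh endpoint by hand and look at the HTTP response
curl -v "http://redmine.example.com/sys/fetch_changesets?key=MY_SERVICE_KEY"
# Watch Redmine's log while pushing a test commit to see whether anything arrives
tail -f /path/to/redmine/log/production.log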
I guess I could answer this...
I checked the issues on the plugin's GitHub page and found this one:
https://github.com/ericpaulbishop/redmine_git_hosting/issues/89
That was pretty much exactly my problem. It does appear to be a small bug in the plugin, but you can work around it by changing Max Cache Time to "1 minute or until next commit". This immediately fixed my problem. I simply left it like that, but one of the posters claimed that you could then change it back to "until next commit" and it keeps working from then on...