I have a remote server ABC with 3 repositories: jellybean, kitkat and lollipop.
Now I want to replicate only the lollipop repository to another server XYZ.
Can anyone help me with the replication.config?
How do I write replication.config?
The Gerrit documentation gives this example:
[remote "host-one"]
url = gerrit2#host-one.example.com:/some/path/${name}.git
[remote "pubmirror"]
url = mirror1.us.some.org:/pub/git/${name}.git
url = mirror2.us.some.org:/pub/git/${name}.git
url = mirror3.us.some.org:/pub/git/${name}.git
push = +refs/heads/*
push = +refs/tags/*
threads = 3
Should I mention both the host ABC URL and the mirror XYZ URL?
Please explain with an example.
To enable replication:
First, create a mirror of the existing repository on the slave server.
Then, in replication.config on the master, use the projects option to restrict replication to that repository (see the sketch below).
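A minimal sketch of what that could look like in replication.config on ABC, assuming XYZ is reachable over SSH as gerrit@xyz.example.com and its repositories live under /var/gerrit/git (both values are placeholders, not taken from your setup). Only the destination (XYZ) is listed, since the config lives on the master (ABC) itself, and the projects line limits replication to lollipop:
[remote "xyz"]
  url = gerrit@xyz.example.com:/var/gerrit/git/${name}.git
  push = +refs/heads/*
  push = +refs/tags/*
  projects = lollipop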
Right now I'm deploying to Cloud Run and running
gcloud run deploy myapp --tag pr123 --no-traffic
I can then access the app via
https://pr123---myapp-jo5dg6hkf-ez.a.run.app
Now I would like to have a custom domain mapping going to this tag. I know how to point a custom domain to the service but I don't know how to point it to the tagged version of my service.
Can I add labels to the DomainMapping that would cause the mapping to go to this version of my Cloud Run service? Or is there a routeName, e.g. myapp#pr123, that would do the trick?
In the end I would like to have
https://pr123.dev.mydomain.com
be the endpoint for this service.
With a custom domain, you configure DNS to point to a service, not to a revision/tag of the service, so you can't do it this way.
The solution is to use a load balancer with a serverless NEG. The most important part is to define the URL mask that maps the tag and service from the URL received by the load balancer.
I ended up building the load balancer with a network endpoint group (as suggested). For further reference, here is my Terraform snippet to create it. The <tag> part is then the traffic tag you assign to your revision.
resource "google_compute_region_network_endpoint_group" "api_neg" {
name = "api-neg"
network_endpoint_type = "SERVERLESS"
region = "europe-west3"
cloud_run {
service = data.google_cloud_run_service.api_dev.name
url_mask = "<tag>.preview.mydomain.com"
}
}
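For context, here is a rough sketch of how such a NEG might be attached to the load balancer's backend (the resource name is mine, and the URL map, certificate and forwarding rule are omitted). With this in place, a request to pr123.preview.mydomain.com matches the URL mask and is routed to the pr123 traffic tag of the service:
resource "google_compute_backend_service" "api_backend" {
  name = "api-backend"

  # Serverless NEGs don't use health checks, so none is attached here.
  backend {
    group = google_compute_region_network_endpoint_group.api_neg.id
  }
}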
So here's the thing:
We have two webhooks set up on the same repository in GitLab.
Webhook number 1 is set to the URL http://jenkins.local/project/job1 (builds the job from the master branch).
Webhook number 2 is set to the URL http://jenkins.local/project/job2 (builds the job from branch "1").
The issue we're trying to overcome is that whenever a merge request is opened,
both of those webhooks are triggered.
Is there a way to "configure" the webhooks to fire only when a merge request is made into the master / 1 branch?
I haven't found such a setting in Settings -> Integrations.
Webhook settings info
Currently, the option to restrict webhooks per branch is only available for Push events; for Merge request events there isn't a way to restrict/filter.
You have to filter it in your Jenkins job instead (i.e. control which job gets fired, if that's also what you're looking for), for example with the GitLab plugin's Job DSL triggers like this:
Job 1:
triggers {
  gitlabPush {
    buildOnMergeRequestEvents(true)
    targetBranchRegex('master')
  }
}
Job 2:
triggers {
  gitlabPush {
    buildOnMergeRequestEvents(true)
    targetBranchRegex('branch1')
  }
}
What is libgit2sharp's equivalent to the following git command?
git pull origin master --allow-unrelated-histories
I have a scenario where I have to merge a branch from another remote with an unrelated history. When I want to push the merged result to this remote, I get a NonFastForwardException.
So far I have implemented the following code:
using (var repo = new Repository(repoPath))
{
    Remote remote = repo.Network.Remotes.Add("newRemote", targetRepositoryUrl);
    repo.Branches.Update(repo.Branches["wikiMaster"], b => b.Remote = "newRemote");

    Commands.Pull(repo, sig, pullOptions);

    // Will throw "NonFastForwardException"
    repo.Network.Push(repo.Branches["wikiMaster"], pushOptions);
}
pullOptions and pushOptions contain only the credentials.
We use Bitbucket server and want to trigger a Jenkins build whenever something is pushed to Bitbucket.
I tried to set up everything according to this page:
https://wiki.jenkins.io/display/JENKINS/BitBucket+Plugin
So I created a Post Webhook in Bitbucket, pointing at the Jenkins Bitbucket plugin's endpoint.
Bitbucket successfully notifies the plugin when a push occurs. According to the Jenkins logs, the plugin then iterates over all jobs where "Build when a change is pushed to BitBucket" is checked, and tries to match that job's repo URL to the URL of the push that occurred.
So, if the repo URL is
https://jira.mycompany.com/stash/scm/PROJ/project.git, the plugin tries to match it against
https://jira.mycompany.com/stash/PROJ/project, which obviously fails.
As per official info from Atlassian, Bitbucket cannot be prevented from inserting the "/scm/" part in the path.
The corresponding code in the Bitbucket Jenkins plugin is in class com.cloudbees.jenkins.plugins.BitbucketPayloadProcessor:
private void processWebhookPayloadBitBucketServer(JSONObject payload) {
    JSONObject repo = payload.getJSONObject("repository");
    String user = payload.getJSONObject("actor").getString("username");
    String url = "";
    if (repo.getJSONObject("links").getJSONArray("self").size() != 0) {
        try {
            URL pushHref = new URL(repo.getJSONObject("links").getJSONArray("self").getJSONObject(0).getString("href"));
            url = pushHref.toString().replaceFirst(new String("projects.*"), new String(repo.getString("fullName").toLowerCase()));
            String scm = repo.has("scmId") ? repo.getString("scmId") : "git";
            probe.triggerMatchingJobs(user, url, scm, payload.toString());
        } catch (MalformedURLException e) {
            LOGGER.log(Level.WARNING, String.format("URL %s is malformed", url), e);
        }
    }
}
In the JSON payload that Bitbucket sends to the plugin, the actual checkout URL doesn't appear, only the link to the repository's Bitbucket page. The above method from the plugin appears to construct the checkout URL from that URL by removing everything after and including projects/ and adding the "full name" of the repo, resulting in the above wrong URL.
Is this a bug in the Jenkins plugin? If so, how can the plugin work for anyone?
I found the reason for the failure.
The issue is that the Bitbucket plugin for Jenkins does account for the /scm part in the path, but only if it's the first part after the host name.
If your Bitbucket server instance is configured not under its own domain but under a path of another service, matching the checkout URLs will fail.
Example:
https://bitbucket.foobar.com/scm/PROJ/myproject.git will work,
https://jira.foobar.com/stash/scm/PROJ/myproject.git will not work.
Someone who also had this problem has already created a fix for the plugin, the pull request for which is pending: JENKINS-49177: Now removing first occurrence of /scm
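To illustrate the idea behind that fix, here is a sketch (the method name is mine; this is not the actual patch): normalize the checkout URL by dropping the first /scm path segment wherever it appears, so that both of the URL forms above become comparable.
// Sketch only, not the plugin's real code.
static String stripScmSegment(String checkoutUrl) {
    // "https://jira.foobar.com/stash/scm/PROJ/myproject.git"
    //   -> "https://jira.foobar.com/stash/PROJ/myproject.git"
    return checkoutUrl.replaceFirst("/scm/", "/");
}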
I want to get information about the local git client that pushes to my git server, so I can use the username as the user in my authorization and the user then only needs to enter their password. How can I solve this? I can't get that information. I built this with a library; maybe someone could help me, and I would appreciate any help.
This is how I receive the git request on the server:
public ActionResult Smart(string username, string project, string service, string verb)
{
    switch (verb)
    {
        case "info/refs":
            return InfoRefs(username, project, service);
        case "git-upload-pack":
            return ExecutePack(username, project, "git-upload-pack");
        case "git-receive-pack":
            return ExecutePack(username, project, "git-receive-pack");
        default:
            return RedirectToAction("Tree", "Repository", new { Name = project });
    }
}
There is no such thing as a git username. There are the signatures for the author and committer for each commit and then there's whatever you made the user authenticate with.
The only way you're going to be able to know who initiated the push is to ask the authentication layer that you put in front of the git protocol. If you serve the repository over HTTP, that information is in the HTTP layer; if you use SSH, it's in the SSH layer.
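As a rough sketch for an ASP.NET MVC setup like the one in the question (the helper name is mine, and HTTP Basic authentication is an assumption, not something from the question): with Basic auth in front of the smart-HTTP endpoints, the pushing user is whatever the Authorization header carries, so you can read it there before dispatching to InfoRefs/ExecutePack.
using System;
using System.Text;
using System.Web;

// Sketch only: extract the username from an HTTP Basic Authorization header.
static string GetAuthenticatedUser(HttpRequestBase request)
{
    var header = request.Headers["Authorization"];
    if (string.IsNullOrEmpty(header) ||
        !header.StartsWith("Basic ", StringComparison.OrdinalIgnoreCase))
    {
        // No credentials sent yet; respond with 401 and a WWW-Authenticate header to prompt for them.
        return null;
    }

    // The Basic payload is base64("username:password").
    var decoded = Encoding.UTF8.GetString(Convert.FromBase64String(header.Substring("Basic ".Length)));
    var separator = decoded.IndexOf(':');
    return separator >= 0 ? decoded.Substring(0, separator) : decoded;
}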