Gerrit always shows the clone command like this (screenshot omitted), and the Gerrit server I set up looks like this (screenshot omitted). Does any plugin need to be installed? I have already installed the download-commands plugin.
This is my gerrit.config file:
[gerrit]
    basePath = git
    serverId = 8f64957d-327a-4099-93ad-dc3f6fb598fa
    canonicalWebUrl = http://192.168.1.188:8090
[database]
    type = h2
    database = /Users/wxkmac/Documents/gerrit/db/ReviewDB
[auth]
    type = HTTP
    #httpHeader = SM_USER
[receive]
    enableSignedPush = false
[sendemail]
    smtpServer = smtp.163.com
    smtpServerPort = 465
    smtpEncryption = ssl
    smtpUser = xxx@163.com
    smtpPass = xxxx
    sslVerify = false
    from = CodeReview <xxx@163.com>
[container]
    user = xxx
    javaHome = /Library/Java/JavaVirtualMachines/jdk1.8.0_121.jdk/Contents/Home/jre
[sshd]
    listenAddress = *:29418
[httpd]
    listenUrl = proxy-http://192.168.1.188:7788/
[cache]
    directory = cache
[download]
    command = checkout
    command = cherry_pick
    command = pull
    command = format_patch
    scheme = ssh
    scheme = http
    scheme = anon_http
    scheme = anon_git
    scheme = repo_download
Some items to check:
1) Make sure the download-commands plugin is installed and running without issues:
Check Plugins > installed
Check GERRIT-SITE/plugins
Restart Gerrit and check GERRIT-SITE/logs
2) Make sure you have set the download options in GERRIT-SITE/etc/gerrit.config
[download]
scheme = https
3) I think, in your case, you should also set the sshd.advertisedAddress option.
See more info here.
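For example, a minimal sketch (the host address here is an assumption taken from the canonicalWebUrl in your config; adjust host and port to whatever clients should actually use to reach SSH):
[sshd]
    listenAddress = *:29418
    advertisedAddress = 192.168.1.188:29418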
I have a create-time provisioner and a destroy-time provisioner. I've read that triggers might solve this problem, so they're integrated here; but while this succeeds in building the resources, it doesn't let Terraform destroy the module.thingx.null_resource.script-stuff resource.
I'm not sure I'm using triggers correctly here, and it's more confusing that the create-time apply works fine while the destroy-time apply fails with the mentioned error.
Here is the module's null resource that the error apparently refers to; it includes both the create-time and destroy-time provisioners:
resource "null_resource" "script-stuff" {
### -- testing triggers
triggers = {
dns_zones = var.dns_zones[each.key]
dnat_ip = google_compute_instance.server[each.key].network_interface.0.access_config.0.nat_ip
pem = tls_private_key.node_ssh[each.key].private_key_pem
} ### -- end testing
depends_on = [google_compute_instance.server, google_project_iam_member.list-sa]
for_each = var.list_map
provisioner "remote-exec" {
when = create
inline = [
"cat ${var.dns_zones[each.key]} > /dev/null",
"sensitive-script.sh --create"
]
connection {
type = "ssh"
host = google_compute_instance.server[each.key].network_interface[0].access_config[0].nat_ip
user = "common-user"
private_key = tls_private_key.node_ssh[each.key].private_key_pem
}
}
provisioner "remote-exec" {
when = destroy
inline = [
# "echo ${var.dns_zones[each.key]} > /dev/null", #<-- this doesn't work when terraform is destroying
"echo ${self.triggers.dns_zones[each.key]} > /dev/null",
"sensitive-script.sh --destroy"
]
connection {
type = "ssh"
#host = google_compute_instance.server[each.key].network_interface[0].access_config[0].nat_ip #<-- this doesn't work when terraform is destroying
host = self.triggers.dnat_ip
user = "common-user"
#private_key = tls_private_key.node_ssh[each.key].private_key_pem #<-- this doesn't work when terraform is destroying
private_key = self.triggers.pem
}
}
}
Destroy-time provisioners do not support variables, as explained in this GitHub issue:
Allow destroy-time provisioners to access variables
So you can't have any variable in "echo ${var.dns_zones[each.key]} > /dev/null".
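Note also that the triggers map already stores the per-instance value (it was indexed with each.key when the resource was created), so the destroy provisioner should reference it directly rather than indexing it a second time. A sketch of the destroy-time provisioner under that constraint (assuming var.dns_zones is a map of strings):

provisioner "remote-exec" {
  when = destroy
  inline = [
    # self.triggers.dns_zones is already the per-instance string;
    # indexing it again with [each.key] would fail
    "echo ${self.triggers.dns_zones} > /dev/null",
    "sensitive-script.sh --destroy"
  ]
  connection {
    type        = "ssh"
    host        = self.triggers.dnat_ip
    user        = "common-user"
    private_key = self.triggers.pem
  }
}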
I got a sample AWS CodePipeline working via the console, but need to get it set up via Terraform.
I have two problems, one minor and one major:
The GitHub stage fails until I go in and edit it via the console, even though I wind up not changing anything I already had set up in "owner" or "repo".
The more major item is that I keep getting CannotPullContainerError on the build step, which keeps anything else from happening. It says "repository does not exist or may require 'docker login'".
The repository DOES exist; from my Linux instance I used the command line to verify the very same 'docker login' and 'docker pull' commands that fail from AWS CodePipeline.
(I know: the buildspec.yml is stupidly insecure, but I wanted to get the prototype I had working the same way before I put in KMS.)
My buildspec.yml is simple:
version: 0.2
phases:
  pre_build:
    commands:
      - $(aws ecr get-login --no-include-email --region us-west-2)
      - docker pull 311541007646.dkr.ecr.us-west-2.amazonaws.com/agverdict-next:latest
  build:
    commands:
      - sudo apt install curl
      - curl -sL https://deb.nodesource.com/setup_8.x | sudo bash -
      - sudo apt install nodejs -y
      - mkdir /root/.aws
      - cp ./deployment/credentials /root/.aws/credentials
      - cd ./deployment
      - bash ./DeployToBeta.sh
Here's the Terraform that creates the pipeline. (No 'deploy' stage, as the 'build' shell script does that, from a previous incarnation.)
locals {
  github_owner           = "My-Employer"
  codebuild_compute_type = "BUILD_GENERAL1_LARGE"
  src_action_name        = "projectname-next"
  codebuild_envronment   = "int"
}

data "aws_caller_identity" "current" {}

provider "aws" {
  region = "us-west-2"
}

variable "aws_region" { default = "us-west-2" }

variable "github_token" {
  default     = "(omitted)"
  description = "GitHub OAuth token"
}

resource "aws_iam_role" "codebuild2" {
  name               = "${var.codebuild_service_role_name}"
  path               = "/projectname/"
  assume_role_policy = "${data.aws_iam_policy_document.codebuild_arpdoc.json}"
}

resource "aws_iam_role_policy" "codebuild2" {
  name   = "codebuild2_service_policy"
  role   = "${aws_iam_role.codebuild2.id}"
  policy = "${data.aws_iam_policy_document.codebuild_access.json}"
}

resource "aws_iam_role" "codepipeline2" {
  name               = "${var.codepipeline_service_role_name}"
  path               = "/projectname/"
  assume_role_policy = "${data.aws_iam_policy_document.codepipeline_arpdoc.json}"
}

resource "aws_iam_role_policy" "codepipeline" {
  name   = "codepipeline_service_policy"
  role   = "${aws_iam_role.codepipeline2.id}"
  policy = "${data.aws_iam_policy_document.codepipeline_access.json}"
}

resource "aws_codebuild_project" "projectname_next" {
  name           = "projectname-next"
  description    = "projectname_next_codebuild_project"
  build_timeout  = "60"
  service_role   = "${aws_iam_role.codebuild2.arn}"
  encryption_key = "arn:aws:kms:${var.aws_region}:${data.aws_caller_identity.current.account_id}:alias/aws/s3"

  artifacts {
    type = "CODEPIPELINE"
    name = "projectname-next-bld"
  }

  environment {
    compute_type    = "${local.codebuild_compute_type}"
    image           = "311541007646.dkr.ecr.us-west-2.amazonaws.com/projectname-next:latest"
    type            = "LINUX_CONTAINER"
    privileged_mode = false

    environment_variable {
      "name"  = "PROJECT_NAME"
      "value" = "projectname-next"
    }

    environment_variable {
      "name"  = "PROJECTNAME_ENV"
      "value" = "${local.codebuild_envronment}"
    }
  }

  source {
    type = "CODEPIPELINE"
  }
}

resource "aws_codepipeline" "projectname-next" {
  name     = "projectname-next-pipeline"
  role_arn = "${aws_iam_role.codepipeline2.arn}"

  artifact_store {
    location = "${var.aws_s3_bucket}"
    type     = "S3"
  }

  stage {
    name = "Source"

    action {
      name             = "Source"
      category         = "Source"
      owner            = "ThirdParty"
      provider         = "GitHub"
      version          = "1"
      output_artifacts = ["projectname-webapp"]

      configuration {
        Owner                = "My-Employer"
        Repo                 = "projectname-webapp"
        OAuthToken           = "${var.github_token}"
        Branch               = "deploybeta_bash"
        PollForSourceChanges = "false"
      }
    }
  }

  stage {
    name = "Build"

    action {
      name             = "projectname-webapp"
      category         = "Build"
      owner            = "AWS"
      provider         = "CodeBuild"
      input_artifacts  = ["projectname-webapp"]
      output_artifacts = ["projectname-webapp-bld"]
      version          = "1"

      configuration {
        ProjectName = "projectname-next"
      }
    }
  }
}
Thanks much for any insight whatsoever!
Both issues sound like permission problems.
CodePipeline's console is likely replacing the GitHub OAuth token (with one that works): https://docs.aws.amazon.com/codepipeline/latest/userguide/GitHub-authentication.html
Make sure the CodeBuild role (${aws_iam_role.codebuild2.arn} in the code you provided, I think) has permission to access ECR.
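A minimal sketch of the ECR pull permissions that role typically needs (the policy document and statement names below are made up for illustration; scope the resources down to your repository ARN where possible):

data "aws_iam_policy_document" "codebuild_ecr" {
  statement {
    sid = "AllowEcrPull"
    actions = [
      "ecr:GetAuthorizationToken",
      "ecr:BatchCheckLayerAvailability",
      "ecr:GetDownloadUrlForLayer",
      "ecr:BatchGetImage"
    ]
    # ecr:GetAuthorizationToken only supports "*"; the pull actions
    # can be narrowed to the repository ARN in a separate statement
    resources = ["*"]
  }
}

resource "aws_iam_role_policy" "codebuild_ecr" {
  name   = "codebuild2_ecr_pull"
  role   = "${aws_iam_role.codebuild2.id}"
  policy = "${data.aws_iam_policy_document.codebuild_ecr.json}"
}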
I am running my emqtt broker from a Docker image.
I am trying to catch all messages published on my broker, as well as all acknowledged messages. For this purpose I am trying to use the emq_web_hook plugin, but when I enable this plugin from the dashboard, the client disconnects and then is unable to connect to the broker again. I tried this with the default configuration:
web.hook.api.url = http://127.0.0.1
web.hook.rule.client.connected.1 = {"action": "on_client_connected"}
web.hook.rule.client.disconnected.1 = {"action": "on_client_disconnected"}
web.hook.rule.client.subscribe.1 = {"action": "on_client_subscribe"}
web.hook.rule.client.unsubscribe.1 = {"action": "on_client_unsubscribe"}
web.hook.rule.session.created.1 = {"action": "on_session_created"}
web.hook.rule.session.subscribed.1 = {"action": "on_session_subscribed"}
web.hook.rule.session.unsubscribed.1 = {"action": "on_session_unsubscribed"}
web.hook.rule.session.terminated.1 = {"action": "on_session_terminated"}
web.hook.rule.message.publish.1 = {"action": "on_message_publish"}
web.hook.rule.message.delivered.1 = {"action": "on_message_delivered"}
web.hook.rule.message.acked.1 = {"action": "on_message_acked"}
I also changed the URL to point at my Node server, but that did not work either; my endpoint was never called.
My dashboard is working, but I am unable to publish.
What am I doing wrong, or are there any steps I missed?
And how can I catch all those events?
I am confused; the documentation is not clear enough.
Thanks
I am using the default NixOS 17.09 channel and want to install an unfree package from the unstable channel.
I am using (import <nixos-unstable> {}).vscode to install vscode in this case, but I am getting the error that I must set ...allowUnfree = true;
It seems that the setting only applies to the default channel.
How can I set allowUnfree = true; on the unstable channel as well?
I found a solution (https://github.com/NixOS/nixpkgs/issues/25880#issuecomment-322855573).
It creates an alias for the unstable channel with the same config.
nixpkgs.config = {
  # Allow proprietary packages
  allowUnfree = true;

  # Create an alias for the unstable channel
  packageOverrides = pkgs: {
    unstable = import <nixos-unstable> {
      # pass the nixpkgs config to the unstable alias
      # to ensure `allowUnfree = true;` is propagated:
      config = config.nixpkgs.config;
    };
  };
};
Then you can use it like unstable.vscode instead of (import <nixos-unstable> {}).vscode.
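For instance, a sketch of installing the vscode package from the question this way (assuming the packageOverrides above are in your configuration.nix):

environment.systemPackages = with pkgs; [
  # resolved through the unstable alias, with allowUnfree applied
  unstable.vscode
];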
As an alternative example:
{ config, pkgs, ... }:
let
  unstable = import <unstable> {
    config = config.nixpkgs.config;
  };
in
{
  environment.systemPackages = with pkgs; [
    # google-chrome
    unstable.google-chrome
  ];

  nixpkgs.config.allowUnfree = true;
}
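Both variants assume the channel is actually registered under the name used in the angle brackets (<nixos-unstable> or <unstable>); for example:

sudo nix-channel --add https://nixos.org/channels/nixos-unstable nixos-unstable
sudo nix-channel --update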
I have a TFS 2013 XAML build process template that runs a PowerShell script (which pushes packages to NuGet).
The build activity WriteCustomSummaryInformation was added in TFS 2012 for XAML builds. I'd like to use this same activity, or implement the same functionality somehow, from my script (so that I can show which packages were published). How can I do this?
I figured it out by running the activity and looking at what it added to the build information.
Function New-CustomSummaryInformation($Build, $Message, $SectionHeader, $SectionName, $SectionPriority = 0)
{
    $CustomSummaryInformationType = 'CustomSummaryInformation'

    # Reuse the existing root node for custom summary information, if any
    $root = $Build.Information.Nodes | ? { $_.Type -eq $CustomSummaryInformationType } | select -First 1
    if (!$root)
    {
        $root = $Build.Information.CreateNode()
        $root.Type = $CustomSummaryInformationType
    }

    $node = $root.Children.CreateNode()
    $node.Type = $CustomSummaryInformationType
    $node.Fields['Message'] = $Message
    $node.Fields['SectionHeader'] = $SectionHeader
    $node.Fields['SectionName'] = $SectionName
    $node.Fields['SectionPriority'] = $SectionPriority
}

[void][Reflection.Assembly]::LoadWithPartialName('Microsoft.TeamFoundation.Client')
[void][Reflection.Assembly]::LoadWithPartialName('Microsoft.TeamFoundation.VersionControl.Client')
[void][Reflection.Assembly]::LoadWithPartialName('Microsoft.TeamFoundation.Build.Client')

$workspaceInfo = [Microsoft.TeamFoundation.VersionControl.Client.Workstation]::Current.GetLocalWorkspaceInfo($env:TF_BUILD_SOURCESDIRECTORY)
$tpc = New-Object Microsoft.TeamFoundation.Client.TfsTeamProjectCollection $workspaceInfo.ServerUri
$vcs = $tpc.GetService([Microsoft.TeamFoundation.VersionControl.Client.VersionControlServer])
$buildServer = $tpc.GetService([Microsoft.TeamFoundation.Build.Client.IBuildServer])

$buildDef = $buildServer.GetBuildDefinition("MyProject", "MyBuildDefn")
$build = $buildServer.GetBuild($buildDef.LastBuildUri)

New-CustomSummaryInformation $build -Message "This is a test message" -SectionHeader "This is the header displayed" -SectionName "ThisIsAnInternalKey"
$build.Information.Save()