Terraform private endpoint and subsequent apply results in destroy/recreate - terraform-provider-azure

resource "azurerm_synapse_managed_private_endpoint" "example" {
name = "example-endpoint"
synapse_workspace_id = azurerm_synapse_workspace.example.id
target_resource_id = azurerm_storage_account.example_connect.id
subresource_name = "blob"
depends_on = [azurerm_synapse_firewall_rule.example]
}
Question 1: A subsequent apply results in destroy/recreate and a changing IP address, resulting in a production issue?
Question 2: How to auto-approve the blob storage endpoint? "Approval State" is set to "Pending".

Question 1: A subsequent apply results in destroy/recreate and a changing IP address, resulting in a production issue?
A subsequent apply of a Terraform resource won't destroy and recreate the existing resource unless one of its arguments changes in a way that forces replacement. Terraform works on incremental changes: as the configuration changes, Terraform determines what changed and creates an incremental execution plan.
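If you want to guard against accidental replacement of the endpoint in production, a lifecycle block makes Terraform fail the plan instead of destroying the resource (a minimal sketch; keep the arguments from the resource above):
resource "azurerm_synapse_managed_private_endpoint" "example" {
  # ... arguments as above ...
  lifecycle {
    prevent_destroy = true # any plan that would destroy this resource now errors out
  }
}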

Related

Bazel: ttl for an artefact

I am writing a bazel rule, and one of the steps is acquiring an authentication token that will expire in some time. When I rebuild this target after that time, the step sees that nothing regarding getting that token has changed, so bazel uses a cached token.
Is there a way to take the TTL of that token into account? Or at least force that step to be rebuilt every time the build is run?
The problem here is that you actively want to write a rule that breaks Bazel's hermeticity guarantees.
I would advise generating the authentication token outside of Bazel and injecting it into the build. There are several options to inject your secret:
using --action_env=SECRET=$TOKEN as a command-line argument (possibly via a generated .bazelrc). This has the downside of invalidating your entire Bazel cache, as every rule has to re-execute when the token changes.
generating a secret.bzl somewhere containing a SECRET = "..." line that you can load() where you need it (see the sketch below).
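For the second option, a minimal sketch (file location is illustrative):
# secret.bzl, generated by your own tooling outside of Bazel
SECRET = "the-current-token"

# in a BUILD.bazel or .bzl file that needs the token
load("//:secret.bzl", "SECRET")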
If you don't want to generate the token outside of bazel, you can write a custom repository_rule() that generates a load()able file:
def _get_token_impl(repository_ctx):
    # Empty BUILD file so the generated repository is a valid package.
    repository_ctx.file(
        "BUILD.bazel",
        "",
        executable = False,
    )
    # Write the token into a load()able .bzl file; quote the value so it
    # stays a valid Starlark string constant.
    repository_ctx.file(
        "secret.bzl",
        'SECRET = "{}"'.format("..."),
        executable = False,
    )

get_token = repository_rule(
    implementation = _get_token_impl,
    local = True,  # important
)
The local = True here is important:
Indicate that this rule fetches everything from the local system and should be reevaluated at every fetch.
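To consume the rule, a sketch of the WORKSPACE and load() side (file paths and names are illustrative):
# WORKSPACE
load("//build:token.bzl", "get_token")  # wherever the rule above is defined
get_token(name = "auth_token")

# in a BUILD.bazel or .bzl file
load("@auth_token//:secret.bzl", "SECRET")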

Terraform for_each issue with data type

I have the following code to attach a snapshot policy to the existing disks of a particular instance:
data "alicloud_ecs_disks" "db_disks" {
instance_id = alicloud_instance.db.id
}
resource "alicloud_ecs_auto_snapshot_policy" "common" {
...
}
resource "alicloud_ecs_auto_snapshot_policy_attachment" "db" {
for_each = { for disk in data.alicloud_ecs_disks.db_disks.disks : disk.disk_id => disk }
auto_snapshot_policy_id = alicloud_ecs_auto_snapshot_policy.common.id
disk_id = each.key
}
When I run plan it works well, but after applying, the next plan fails with this error:
data.alicloud_ecs_disks.db_disks.disks is a list of object, known only after apply
│
│ The "for_each" value depends on resource attributes that cannot be
│ determined until apply, so Terraform cannot predict how many instances will
│ be created. To work around this, use the -target argument to first apply
│ only the resources that the for_each depends on.
What is the best option to work around this? Plan works on some machines and sometimes not. Thanks
The reason for your error is that your data.alicloud_ecs_disks.db_disks references alicloud_instance.db, which is presumably created in the same configuration. As the error message says, you can't use values in for_each that aren't known until apply.
The solution is to use the -target option to deploy your alicloud_instance.db first, and then, when it has been deployed, proceed with the rest of your infrastructure (see the sketch below).
If you don't want to split your deployment, then you have to re-architect your TF so that the for_each does not depend on any dynamic content.
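A minimal sketch of that two-step apply:
# step 1: create only the instance (and anything it depends on)
terraform apply -target=alicloud_instance.db
# step 2: apply the rest, now that the disk IDs are known
terraform apply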
The problem is that during the first apply the snapshot policy was also attached to the system disk. That forces re-creation of the whole instance on the next plan, and the instance ID is not known until apply.

PullRequest Build Validation with Jenkins and OnPrem Az-Devops

First off the setup in question:
A Jenkins instance with several build nodes and an on-prem Azure DevOps server containing the Git repositories.
The repo in question is too large to always build on push for all branches and all devs, so a small workaround was done:
The production branches have polling enabled twice a day (because of the testing duration, which is handled downstream, more builds would not help with quality).
All other branches have their automated building suppressed. Devs can still start builds manually for builds/deployments/unit tests if they so choose.
The Jenkinsfile is parameterized for which platforms to build; on prod* branches all platforms are true, on all other branches false.
This helps because otherwise the initial build of a feature branch would always build/deploy all platforms locally, which would put too much load on the server infrastructure.
I added a service endpoint for Jenkins in Azure DevOps and added a build-validation .yml. This basically works, because when I call the source branch of the pull request with the merge commit ID, I added a parameter
isPullRequestBuild which contains the ID of the PR.
snippet of the yml:
- task: JenkinsQueueJob@2
  inputs:
    serverEndpoint: 'MyServerEndpoint'
    jobName: 'MyJob'
    isMultibranchJob: true
    captureConsole: true
    capturePipeline: true
    isParameterizedJob: true
    multibranchPipelineBranch: $(System.PullRequest.SourceBranch)
    jobParameters: |
      stepsToPerform=Build
      runUnittest=true
      pullRequestID=$(System.PullRequest.PullRequestId)
Snippet of the Jenkinsfile:
def isPullRequest = false
if ( params.pullRequestID?.trim() )
{
    isPullRequest = true
    //do stuff to change how the pipeline should react.
}
In the Jenkinsfile I check whether the parameter is not empty, and if so I reset the platforms to build to basically all of them and run the unit tests.
The problem is: if the branch has never been built, Jenkins does not yet know the parameter on the first run, so it is ignored, building nothing and returning 0 because "nothing had to be done".
Is there any way to only run the Jenkins build if it hasn't run already?
Or is it possible to get information from the remote call whether this was the build with ID 1?
The only other option would be to call Jenkins via the web API and check for the last successful build, but in that case I would have to store the token somewhere in source control.
Am I missing something obvious here? I don't want to trigger the feature-branch builds to do nothing more than once, because devs could lose useful information about their started builds/deployments.
Any ideas appreciated
To whom it may concern with similar problems:
In the end I used the following workaround:
The Jenkins endpoint is called via a user that is only used for automated builds. So, in case this user triggered the build, I set everything up to run a pull-request validation, even if it is the first build. Along the lines of:
def causes = currentBuild.getBuildCauses('hudson.model.Cause$UserIdCause')
if (causes != null)
{
    def buildCauses = readJSON text: currentBuild.getBuildCauses('hudson.model.Cause$UserIdCause').toString()
    buildCauses.each
    {
        buildCause ->
        if (buildCause['userId'] == "theNameOfMyBuildUser")
        {
            triggeredByAzureDevops = true
        }
    }
}
getBuildCauses must be approved by a Jenkins admin (in-process script approval) for this to work.
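A sketch of how the flag might then be combined with the parameter check from the question (variable and parameter names are illustrative):
def triggeredByAzureDevops = false
// ... build-cause detection from above ...
if (triggeredByAzureDevops || params.pullRequestID?.trim())
{
    // treat this run as a pull-request validation, even on the very first build
    stepsToPerform = 'Build'
    runUnittest = true
}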

Jenkins/Gerrit: Multiple builds with different labels from one gerrit event

I created two labels in one of our projects that requires builds on Windows and Linux, so the project.config for that project now looks as follows:
[label "Verified"]
function = NoBlock
[label "Verified-Windows"]
function = MaxWithBlock
value = -1 Fails
value = 0 No score
value = +1 Verified
[label "Verified-Unix"]
function = MaxWithBlock
value = -1 Fails
value = 0 No score
value = +1 Verified
This works as intended. Submits require that one successful build reports Verified-Windows and the other one Verified-Unix [1].
However, the two builds are now triggered by the same Gerrit event (from 'different' servers, see note), but when they report back only one of the two labels 'survives'.
It seems as though the plugin collates the two messages that arrive into one comment and only accepts whichever label was the first one to be set.
Is this by design or a bug? Can I work around this?
This is using the older version of the trigger: 2.11.1
[1] I got this to work by adding more than one server and then reconfiguring the messages that are sent back via SSH to Gerrit. This is cumbersome and quite non-trivial. I think jobs should be able to override the label that a successful build will set on Gerrit.
This can be addressed by using more than one user name, so the verdicts on labels don't get mixed up. However, this is only partially satisfactory, since multiple server connections for the same server also duplicate events from the event stream.
I am currently working on a patch for the Gerrit Trigger plugin for Jenkins to address this issue and make using different labels more efficient.
Maybe you can solve this challenge by using a post-build Groovy script.
I provided an example in another answer: https://stackoverflow.com/a/32825278
To be more specific, as mentioned by arman1991:
Install the Groovy Postbuild Plugin:
https://wiki.jenkins-ci.org/display/JENKINS/Groovy+Postbuild+Plugin
Use the following example script as a post-build action in each of your jobs. Modify it to your needs for the Linux verification (see the variant after the script).
It will do for you:
collect necessary environment variables and status of the job
build feedback message
build ssh command
execute ssh command -> send feedback to gerrit
//Collect all environment variables of the current build job
def env = manager.build.getEnvironment(manager.listener)
//Get Gerrit Change Number
def change = env['GERRIT_CHANGE_NUMBER']
//Get Gerrit Patch Number
def patch = env['GERRIT_PATCHSET_NUMBER']
//Get Url to current job
def buildUrl = env['BUILD_URL']
//Build Url to console output
def buildConsoleUrl = buildUrl + "/console"
//Verification will be set to succeeded (+1) and a feedback message will be generated...
def result = +1
def message = "\\\"Verification for Windows succeeded - ${buildUrl}\\\""
//...except if the job failed (-1)...
if (manager.build.result.isWorseThan(hudson.model.Result.SUCCESS)){
result = -1
message = "\\\"Verification for Windows failed - ${buildUrl}\\\""
}
//...or job is aborted
if (manager.build.result == hudson.model.Result.ABORTED){
result = 0
message = "\\\"Verification for Windows aborted - ${buildConsoleUrl}\\\""
}
//Send Feedback to Gerrit via ssh
//-i - Path to private ssh key
def ssh_message = "ssh -i /path/to/jenkins/.ssh/key -p 29418 user#gerrit-host gerrit review ${change},${patch} --label=verified-Windows=${result} --message=${message}"
manager.listener.logger.println(new ProcessBuilder('bash','-c',"${ssh_message}").redirectErrorStream(true).start().text)
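For the Linux job, only the label and the message text need to change, e.g. (mirroring the line above; the label name is taken from the project.config in the question):
def ssh_message = "ssh -i /path/to/jenkins/.ssh/key -p 29418 user@gerrit-host gerrit review ${change},${patch} --label=Verified-Unix=${result} --message=${message}"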
I hope this will help you to solve your challenge without using the Gerrit Trigger Plugin to report the results.

How to queue another TFS (2012) Build from a TFS Build AND pass process parameters?

The product I work on comprises 3-4 separate (non-dependent) TFS builds.
I would like to create a single TFS build which queues the other 3-4 builds from within the process template AND, critically, passes process parameters to them. This build would wait for them all to complete and return an overall success/failure of the build.
So my questions are:
Can this be achieved by any existing 'standard' Workflow activities (my manager has had bad experiences with custom workflow activities)?
If not, I am able to 'shell out' to powershell. Can I achieve what I want from within Powershell (accessing the API)?
Maybe using TFSBuild.exe? But I can't find a way of passing the custom process parameters I need.
Any assistance or guidance would be appreciated.
UPDATE
The following PowerShell script will queue the build, but I'm still at a loss as to how to pass my custom process parameters :-(
function Get-BuildServer
{
    param($serverName = $(throw 'please specify a TFS server name'))

    [void][System.Reflection.Assembly]::LoadWithPartialName("Microsoft.TeamFoundation.Client")
    [void][System.Reflection.Assembly]::LoadWithPartialName("Microsoft.TeamFoundation.Build.Client")
    $tfs = [Microsoft.TeamFoundation.Client.TeamFoundationServerFactory]::GetServer($serverName)
    return $tfs.GetService([Microsoft.TeamFoundation.Build.Client.IBuildServer])
}

$buildserver = Get-BuildServer "http://tfsserver:8080/tfs/My%20Project%20Collection"
$teamProject = "ESI"
$buildDefinition = "iPrl_BuildMaster"
$definition = $buildserver.GetBuildDefinition($teamProject, $buildDefinition)
$request = $definition.CreateBuildRequest()
$buildserver.QueueBuild($request, "None")
Now, after googling, I have found the following C# code to update the verbosity, and, assuming it works the same way for my custom process parameters, I need to convert this to work with the above PowerShell script. Any ideas?
IDictionary<String, Object> paramValues = WorkflowHelpers.DeserializeProcessParameters(processParameters);
paramValues[ProcessParameterMetadata.StandardParameterNames.Verbosity] = buildVerbosity;
return WorkflowHelpers.SerializeProcessParameters(paramValues);
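For reference, a hedged sketch (untested) of what that could look like in the PowerShell script above; "MyCustomParameter" is a hypothetical parameter name, and the workflow assembly has to be loaded as well:
[void][System.Reflection.Assembly]::LoadWithPartialName("Microsoft.TeamFoundation.Build.Workflow")

# Deserialize the definition's process parameters, override values,
# then serialize them back onto the request before queuing it.
$paramValues = [Microsoft.TeamFoundation.Build.Workflow.WorkflowHelpers]::DeserializeProcessParameters($definition.ProcessParameters)
$paramValues["MyCustomParameter"] = "MyValue"
$request.ProcessParameters = [Microsoft.TeamFoundation.Build.Workflow.WorkflowHelpers]::SerializeProcessParameters($paramValues)
$buildserver.QueueBuild($request, "None")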
