Azure Elastic Premium Service Plan - IaC using Bicep - DevOps

I had an Azure DevOps pipeline that deploys IaC using Bicep templates. It was working a few days ago but has now stopped, and I now get an error. Inner Errors:
{"message": "Object reference not set to an instance of an object."}
The code for the template is as follows:
resource hostingPlan 'Microsoft.Web/serverfarms@2020-06-01' = {
  tags: {
    Project: project
  }
  name: hostingPlanName
  location: location
  sku: {
    name: 'EP1'
    tier: 'ElasticPremium'
  }
  kind: 'elastic'
}
It seems like I can't create an Elastic Premium plan anymore.
Can anyone help? Thanks.

Related

Terraform docker_registry_image error: 'unable to get digest: Got bad response from registry: 400 Bad Request'

I am trying to use CDK for Terraform (CDKTF) to build and push a Docker image to AWS ECR. I have decided to use the Terraform Docker provider for it. Here is my code:
class MyStack extends TerraformStack {
  constructor(scope: Construct, name: string) {
    super(scope, name);

    const usProvider = new aws.AwsProvider(this, "us-provider", {
      region: "us-east-1",
      defaultTags: {
        tags: {
          Project: "CV",
          Name: "CV",
        },
      },
    });

    const repo = new aws.ecr.EcrpublicRepository(this, "docker-repo", {
      provider: usProvider,
      repositoryName: "cv",
      forceDestroy: true,
    });

    const authToken = new aws.ecr.DataAwsEcrpublicAuthorizationToken(
      this,
      "auth-token",
      {
        provider: usProvider,
      }
    );

    new docker.DockerProvider(this, "docker-provider", {
      registryAuth: [
        {
          address: repo.repositoryUri,
          username: authToken.userName,
          password: authToken.password,
        },
      ],
    });

    new docker.RegistryImage(this, "image-on-public-ecr", {
      name: repo.repositoryUri,
      buildAttribute: {
        context: __dirname,
      },
    });
  }
}
But during deployment I get this error: Unable to create image, image not found: unable to get digest: Got bad response from registry: 400 Bad Request. It is still able to push to the registry, though; I can see the image in the AWS console.
I can't find any mistake in my code, and I don't understand the error. I hope you can help.
The Terraform execution model is built so that Terraform first gathers all the information it needs to determine the current state of your infrastructure, and then, in a second step, calculates the plan of changes that need to be applied to bring the current state to the one you described in your configuration.
This poses a problem here: the provider you declare uses information that is only available once the plan is being put into action; there is no repo URL / auth token before the ECR repo has been created.
There are different ways to solve this problem. You can make use of the cross-stack references / multi-stack feature and split the ECR repo creation into a separate TerraformStack that is deployed beforehand. You can then pass a value from that stack into your other stack and use it to configure the provider (see the sketch below).
Another way to solve this is to build and push your image outside of the Terraform Docker provider, through the null provider with a local-exec provisioner, as is done in the docker-aws-ecs E2E example.
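A minimal sketch of the multi-stack variant, reusing the class and property names from the snippet in the question (the stack names and the overall wiring are illustrative; the aws and docker bindings are assumed to be the same provider imports used above):

// Sketch only: "aws" and "docker" are the same provider bindings imported in the question.
import { Construct } from "constructs";
import { App, TerraformStack } from "cdktf";

// Stack 1: just the ECR Public repository and its authorization token.
class EcrRepoStack extends TerraformStack {
  public readonly repositoryUri: string;
  public readonly userName: string;
  public readonly password: string;

  constructor(scope: Construct, name: string) {
    super(scope, name);
    const usProvider = new aws.AwsProvider(this, "us-provider", {
      region: "us-east-1",
    });
    const repo = new aws.ecr.EcrpublicRepository(this, "docker-repo", {
      provider: usProvider,
      repositoryName: "cv",
      forceDestroy: true,
    });
    const authToken = new aws.ecr.DataAwsEcrpublicAuthorizationToken(
      this,
      "auth-token",
      { provider: usProvider }
    );
    this.repositoryUri = repo.repositoryUri;
    this.userName = authToken.userName;
    this.password = authToken.password;
  }
}

// Stack 2: the Docker provider and the image, configured from the values the
// first stack exposes; CDKTF turns these into cross-stack references.
class ImageStack extends TerraformStack {
  constructor(scope: Construct, name: string, repoStack: EcrRepoStack) {
    super(scope, name);
    new docker.DockerProvider(this, "docker-provider", {
      registryAuth: [
        {
          address: repoStack.repositoryUri,
          username: repoStack.userName,
          password: repoStack.password,
        },
      ],
    });
    new docker.RegistryImage(this, "image-on-public-ecr", {
      name: repoStack.repositoryUri,
      buildAttribute: { context: __dirname },
    });
  }
}

const app = new App();
const repoStack = new EcrRepoStack(app, "ecr-repo");
new ImageStack(app, "image", repoStack);
app.synth();

Deploying the repo stack first and the image stack afterwards ensures the repository URI and auth token exist before the Docker provider is configured.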

Bitbucket Cloud interceptor for Tekton EventListener

I'm creating an EventListener for my repo on Bitbucket Cloud and saw from the current example in the Tekton documentation that the Bitbucket interceptor only supports Bitbucket Server.
I've created the EventListener and it looks like this:
---
apiVersion: triggers.tekton.dev/v1alpha1
kind: EventListener
metadata:
  name: bitbucket-el
spec:
  serviceAccountName: tekton-triggers-admin
  triggers:
    - name: bitbucket-triggers
      interceptors:
        - bitbucket:
            secretRef:
              secretName: bitbucket-secret
              secretKey: secretToken
            eventTypes:
        - cel:
            filter: "header.match('X-Event-Key', 'repo:push')"
            overlays:
              - key: extensions.tag_name
                expression: "split(body.ref, '/')[2]"
              - key: extensions.mangledtag
                expression: "split(split(body.ref, '/')[2], '.')[0]+'-'+split(split(body.ref, '/')[2], '.')[1]+'-'+split(split(body.ref, '/')[2], '.')[2]"
      bindings:
        - ref: bitbucket-binding
      template:
        ref: bitbucket-template
and I pass it the token (bitbucket-secret) generated from the Bitbucket Cloud consumer secret, following this doc: https://support.atlassian.com/bitbucket-cloud/docs/use-oauth-on-bitbucket-cloud/
I used basic auth on the Ingress and the webhook returned 401 Unauthorized; after removing the basic auth and triggering the webhook with a push, I'm now seeing 403 Forbidden.
Thank you in advance
I spent a lot of time on this issue and finally fixed it by using the CEL expression interceptors, as follows.
In this Trigger, we use the overlays to add "X-Hub-Signature" to the body of the payload. The expression value (1234567 here) doesn't matter and can be anything; we are just adding the HMAC to the body so that we don't get an error.
Note: by default, there is no interceptor for Bitbucket Cloud.
apiVersion: triggers.tekton.dev/v1beta1
kind: Trigger
metadata:
  name: energy
spec:
  serviceAccountName: pipeline
  interceptors:
    - ref:
        name: "cel"
      params:
        - name: "filter"
          value: "header.match('X-Event-Key', 'repo:push')"
        - name: "overlays"
          value:
            - key: X-Hub-Signature
              expression: "1234567"
  bindings:
    - ref: energy
  template:
    ref: energy
I am trying to achieve the same thing: starting a build when a PR merge has been done in Bitbucket Cloud.
I was able to create the EventListener resource, but my pipeline is not triggered after merging a PR.
Looking at your example, I still have some questions:
How are the Git repository and the secret configured?
How can you specify a specific branch?
I was looking for a complete example but it seems like Tekton is just ignoring Bitbucket Cloud as a VCS ...
Kind regards,
Bregt

Azure DevOps pipeline REST API: how to pass a variable with SourceFolder for the CopyFiles task, self-hosted agent in a container

My setup is as follows:
I have a hosted pipeline whose first job copies files from the self-hosted agent, which is started as a Docker container.
The hosted pipeline is triggered with the pipeline "run" REST API:
https://learn.microsoft.com/en-us/rest/api/azure/devops/pipelines/runs/run%20pipeline?view=azure-devops-rest-6.0
This is what the body looks like now:
"resources": {
"repositories:": {
"self": {
"refName": "refs/heads/my_branch"
}
}
}
It is working great.
Now the relevant part of the hosted pipeline looks like this:
- job: self_hosted_connect
  timeoutInMinutes: 10
  pool: Default
  steps:
    - task: CopyFiles@2
      inputs:
        SourceFolder: '/home/copy_dir'
        Contents: '**'
        TargetFolder: '$(build.artifactstagingdirectory)'
This also works great.
My questions are:
1. I would like to send another parameter in the "run" REST API that contains the SourceFolder path, so that the CopyFiles task is dynamic and does not have a hardcoded SourceFolder path.
2. When I run the self-hosted agent from Docker, how do I tell the self-hosted agent to include a directory outside its working dir, so the pipeline will not fail with the error:
#[error]Unhandled: Not found SourceFolder: /home/copy_dir
UPDATE
I updated the request to:
{
  "resources": {
    "repositories": {
      "self": {
        "refName": "refs/heads/my_branch"
      }
    }
  },
  "templateParameters": {
    "Folderpath": "{/home/foo/my_dir}"
  }
}
but I'm getting an error:
{
  "$id": "1",
  "innerException": null,
  "message": "Unexpected parameter 'Folderpath'",
  "typeName": "Microsoft.Azure.Pipelines.WebApi.PipelineValidationException, Microsoft.Azure.Pipelines.WebApi",
  "typeKey": "PipelineValidationException",
  "errorCode": 0,
  "eventId": 3000
}
send in the "run" rest API another parameter that contains the SourceFolder path
We can use runtime parameters in the pipeline.
YAML sample:
parameters:
  - name: Folderpath
    displayName: 'configure Folder path'
    type: string
    default: {SourceFolder path}

steps:
  - task: CopyFiles@2
    inputs:
      SourceFolder: '${{ parameters.Folderpath }}'
      Contents: '**'
      TargetFolder: '$(build.artifactstagingdirectory)'
Request URL:
POST https://dev.azure.com/{organization}/{project}/_apis/pipelines/{pipelineId}/runs?api-version=6.0-preview.1
Request Body:
{
  "resources": {
    "repositories": {
      "self": {
        "refName": "refs/heads/{my_branch}"
      }
    }
  },
  "templateParameters": {
    "Folderpath": "{SourceFolder path}"
  }
}
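If you trigger the run from a script, here is a minimal sketch of that request (assuming Node 18+ for the built-in fetch and a personal access token in the AZDO_PAT environment variable; the organization, project, pipeline ID, branch, and folder values are placeholders):

// Sketch: queue a pipeline run and pass the Folderpath template parameter.
async function queueRun(): Promise<void> {
  const organization = "my-org";   // placeholder
  const project = "my-project";    // placeholder
  const pipelineId = 123;          // placeholder

  const url = `https://dev.azure.com/${organization}/${project}/_apis/pipelines/${pipelineId}/runs?api-version=6.0-preview.1`;

  const body = {
    resources: {
      repositories: {
        self: { refName: "refs/heads/my_branch" },
      },
    },
    templateParameters: {
      // Must match the parameter name declared in the pipeline YAML.
      Folderpath: "/home/copy_dir",
    },
  };

  const response = await fetch(url, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      // PAT auth: empty user name, token as password, base64-encoded.
      Authorization:
        "Basic " + Buffer.from(":" + process.env.AZDO_PAT).toString("base64"),
    },
    body: JSON.stringify(body),
  });

  console.log(response.status, await response.json());
}

queueRun();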
how do i tell the self-hosted agent to include the directory outside its working dir?
We can use a local folder path or Azure DevOps predefined variables to define the source folder.
Update 1
We should define the parameter in the YAML build; if not, we will get the error Unexpected parameter 'Folderpath'.
UPDATE 2
I would like it to take the real path (the one I pass in the request) on the disk where the self-hosted Docker container is running, not a path relative to the Docker working dir, but now it gives me this error:
[error]Unhandled: Not found SourceFolder: /azp/agent/_work/1/s/{/home/copy_dir}
where /azp is the Docker working dir.
I configured Docker following this link:
https://learn.microsoft.com/en-us/azure/devops/pipelines/agents/docker?view=azure-devops

Deploying Jenkins to AWS using cloudformation and secrets manager

My objective is to build Jenkins as a docker image and deploy it to AWS Elastic Beanstalk.
To build the docker image I am using the Configuration as Code plugin and injecting all secrets via environment variables in the Dockerfile.
What I am trying to figure out now is how to automate this deployment using CloudFormation or CodePipeline.
My question is:
Can I fetch secrets from AWS Secrets Manager using either CloudFormation or CodePipeline and inject them as environment variables in the deployment to Elastic Beanstalk?
Not sure why you want to do things this way in general, but couldn't you just use the AWS CLI to get the secrets from Secrets Manager directly from your Elastic Beanstalk instance?
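For illustration, the same lookup with the AWS SDK for JavaScript v3 instead of the CLI (a sketch only: the secret name jenkins/admin-credentials and the environment variable are hypothetical, and the instance profile must allow secretsmanager:GetSecretValue):

import {
  SecretsManagerClient,
  GetSecretValueCommand,
} from "@aws-sdk/client-secrets-manager";

// Sketch: read a secret at startup and expose it as an environment variable
// before the Jenkins configuration-as-code setup runs.
async function loadSecret(): Promise<void> {
  const client = new SecretsManagerClient({ region: "us-east-1" }); // adjust region
  const result = await client.send(
    new GetSecretValueCommand({ SecretId: "jenkins/admin-credentials" }) // hypothetical name
  );
  process.env.JENKINS_ADMIN_CREDENTIALS = result.SecretString ?? "";
}

loadSecret();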
CloudFormation templates can retrieve secrets from Secrets Manager. It is somewhat ugly, but it works pretty well. In general, I use a security.yaml nested stack to generate secrets for me in Secrets Manager, then retrieve them in other stacks.
I can't speak too much to Elastic Beanstalk, but if you are deploying it through CloudFormation, then this should help.
Generating a secret in Secrets Manager (CloudFormation security.yaml):
Parameters:
  DeploymentEnvironment:
    Type: String
    Description: Deployment environment, e.g. prod, stage, qa, dev, or userdev
    Default: "dev"
  ...
Resources:
  ...
  RegistryDbAdminCreds:
    Type: 'AWS::SecretsManager::Secret'
    Properties:
      Name: !Sub "RegistryDbAdminCreds-${DeploymentEnvironment}"
      Description: "RDS master uid/password for artifact registry database."
      GenerateSecretString:
        SecretStringTemplate: '{"username": "artifactadmin"}'
        GenerateStringKey: "password"
        PasswordLength: 30
        ExcludeCharacters: '"#/\+//:*`"'
      Tags:
        - Key: AppName
          Value: RegistryDbAdminCreds
Using the secret in another YAML file:
Parameters:
  DeploymentEnvironment:
    Type: String
    Description: Deployment environment, e.g. prod, stage, qa, dev, or userdev
    Default: "dev"
  ...
Resources:
  DB:
    Type: 'AWS::RDS::DBInstance'
    DependsOn: security
    Properties:
      Engine: postgres
      DBInstanceClass: db.t2.small
      DBName: quilt
      MasterUsername: !Sub '{{resolve:secretsmanager:RegistryDbAdminCreds-${DeploymentEnvironment}:SecretString:username}}'
      MasterUserPassword: !Sub '{{resolve:secretsmanager:RegistryDbAdminCreds-${DeploymentEnvironment}:SecretString:password}}'
      StorageType: gp2
      AllocatedStorage: "100"
      PubliclyAccessible: true
      DBSubnetGroupName: !Ref SubnetGroup
      MultiAZ: true
      VPCSecurityGroups:
        - !GetAtt "network.Outputs.VPCSecurityGroup"
      Tags:
        - Key: Name
          Value: !Join [ '-', [ !Ref StackName, "dbinstance", !Ref DeploymentEnvironment ] ]
The trick is in !Sub '{{resolve:secretsmanager:RegistryDbAdminCreds-${DeploymentEnvironment}:SecretString:username}}' and !Sub '{{resolve:secretsmanager:RegistryDbAdminCreds-${DeploymentEnvironment}:SecretString:password}}'

Override Default Jenkins Pipeline Node Block

My company has a small pipeline library that we implicitly load for every build. Is there a way to override the node { ... } block of every build transparently?
My specific case is that I'm provisioning Kubernetes slaves with the Kubernetes plugin, and I want to provide a default YAML template while allowing users to pick another template or override specific values. E.g.:
node {
    // Gets you a Pod with a DinD engine with a low CPU/Mem request/limit
}
Optionally overridden by name:
node('2-core') {
    // Gets you a Pod with a DinD engine with 2 CPU / more Mem request/limit
}
Or overridden with a template:
import com.foo.utils.PodTemplates

slaveTemplates = new PodTemplates()
slaveTemplates.bigPod {
    node {
        // Big node
    }
}
Or:
def label = "mypod-${UUID.randomUUID().toString()}"
podTemplate(label: label, yaml: """
apiVersion: v1
kind: Pod
metadata:
  labels:
    some-label: some-label-value
spec:
  containers:
  - name: redis
    image: redis
"""
) {
    node(label) {
        // Same small pod as before PLUS a redis container
    }
}
This seems trickiest, since you want the values of the parent to override the values of the child.
You can do this, but, in my opinion, it will lead to confusing behavior and possibly strange error cases.
For example:
echo.groovy

def call(String string) {
    steps.echo "Calling step echo: $string"
}
Jenkinsfile
echo 'hello'
Output:
Calling step echo: hello
There is a blog post here that demonstrates this a little more in depth.
Paid support for some pipeline restriction tools is offered by CloudBees, which might solve your use case.
The heaviest way to accomplish this is, of course, to write a plugin.
