Using ApplicationLoadBalancedFargateService with aws-codepipeline-actions - aws-cdk

// fargate
const ecsService = new patterns.ApplicationLoadBalancedFargateService(this, 'Service', {
cluster: cluster, // Required
publicLoadBalancer: true,
taskImageOptions: {
image: ecs.ContainerImage.fromRegistry('nginx')
}
});
// codepipeline artifact
const sourceOutput = new codepipeline.Artifact();
// pipeline
const pipeline = new codepipeline.Pipeline(this, 'Pipeline');
// pipeline stage: Source
pipeline.addStage({
stageName: 'Source',
actions: [
new codepipeline_actions.EcrSourceAction({
actionName: 'ecr_push',
repository: repository,
output: sourceOutput
})
]
});
// pipeline stage: Deploy
pipeline.addStage({
stageName: 'Deploy',
actions: [
new codepipeline_actions.EcsDeployAction({
actionName: 'Deploy',
input: sourceOutput,
service: ecsService
})
]
});
I am using the ecs-patterns ApplicationLoadBalancedFargateService construct to create the Fargate service.
But the service prop of codepipeline_actions.EcsDeployAction requires type ecs.BaseService, not the pattern construct itself.
How can I resolve this? Do I have to go back to building the Fargate service from scratch?
Any suggestion would be appreciated!

The ApplicationLoadBalancedFargateService higher-level pattern exposes a service property on the instance. The type of ecsService.service is FargateService, which implements the IBaseService interface. Your code should work if you change it to:
pipeline.addStage({
stageName: 'Deploy',
actions: [
new codepipeline_actions.EcsDeployAction({
actionName: 'Deploy',
input: sourceOutput,
service: ecsService.service, // <-
})
]
});
https://docs.aws.amazon.com/cdk/api/latest/docs/#aws-cdk_aws-ecs-patterns.ApplicationLoadBalancedFargateService.html#service
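For reference, the snippets above don't show their imports; a minimal sketch of what they might look like with CDK v2 (module paths differ under CDK v1, so treat these as assumptions):
import * as ecs from 'aws-cdk-lib/aws-ecs';
import * as ecr from 'aws-cdk-lib/aws-ecr';
import * as patterns from 'aws-cdk-lib/aws-ecs-patterns';
import * as codepipeline from 'aws-cdk-lib/aws-codepipeline';
import * as codepipeline_actions from 'aws-cdk-lib/aws-codepipeline-actions';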

Related

cdk watch command forces full deploy with unrelated error on file change

I'm developing a little CDKv2 script to instantiate a few AWS services.
I have some Lambda code deployed from the lambda/ folder and the frontend stored in a bucket populated from the frontend/ folder in the source.
I've noticed that whenever I make a change to any of the files inside these two folders, cdk watch returns the following error and falls back to performing a full redeploy (which is significantly slower).
Could not perform a hotswap deployment, because the CloudFormation template could not be resolved: Parameter or resource 'DomainPrefix' could not be found for evaluation
Falling back to doing a full deployment
Is there any way to make changes in these folders trigger only an update of the related bucket content or the related Lambda?
Here is the stack.ts for quick reference; just in case, you can also take a look at the repo.
export class CdkAuthWebappStack extends Stack {
constructor(scope: Construct, id: string, props?: StackProps) {
super(scope, id, props);
const domainPrefixParam = new CfnParameter(this, 'DomainPrefix', {
type: 'String',
description: 'You have to set it in google cloud as well', //(TODO: add link to explain properly)
default: process.env.DOMAIN_NAME || ''
})
const googleClientIdParam = new CfnParameter(this, 'GoogleClientId', {
type: 'String',
description: 'From google project',
noEcho: true,
default: process.env.GOOGLE_CLIENT_ID || ''
})
const googleClientSecretParam = new CfnParameter(this, 'GoogleClientSecret', {
type: 'String',
description: 'From google project',
noEcho: true,
default: process.env.GOOGLE_CLIENT_SECRET || ''
})
if(!domainPrefixParam.value || !googleClientIdParam.value || !googleClientSecretParam.value){
throw new Error('Make sure you initialized DomainPrefix, GoogleClientId and GoogleClientSecret in the stack parameters')
}
const s3frontend = new s3.Bucket(this, 'Bucket', {
bucketName: domainPrefixParam.valueAsString+'-frontend-bucket',
blockPublicAccess: s3.BlockPublicAccess.BLOCK_ALL,
encryption: s3.BucketEncryption.S3_MANAGED,
enforceSSL: true,
versioned: false,
removalPolicy: cdk.RemovalPolicy.DESTROY,
autoDeleteObjects: true,
websiteIndexDocument: "index.html",
});
//TODO: make sure this origin access identity is not legacy when deploying
const cfdistributionoriginaccessidentity = new cloudfront.OriginAccessIdentity(this, 'CFOriginAccessIdentity', {
comment: "Used to give bucket read to cloudfront"
})
const cfdistribution = new cloudfront.CloudFrontWebDistribution(this, 'CFDistributionFrontend', {
originConfigs: [
{
s3OriginSource: {
s3BucketSource: s3frontend,
originAccessIdentity: cfdistributionoriginaccessidentity
},
behaviors: [{
isDefaultBehavior: true,
allowedMethods: cloudfront.CloudFrontAllowedMethods.GET_HEAD_OPTIONS,
forwardedValues: {
queryString: true,
cookies: { forward: 'all' }
},
minTtl: cdk.Duration.seconds(0),
defaultTtl: cdk.Duration.seconds(3600),
maxTtl: cdk.Duration.seconds(86400)
}]
}
]
})
s3frontend.grantRead(cfdistributionoriginaccessidentity)
const cfdistributionpolicy = new iam.PolicyStatement({
effect: iam.Effect.ALLOW,
actions: ['cloudfront:CreateInvalidation'],
resources: [`arn:aws:cloudfront::${this.account}:distribution/${cfdistribution.distributionId}`]
});
const userpool = new cognito.UserPool(this, 'WebAppUserPool', {
userPoolName: 'web-app-user-pool',
selfSignUpEnabled: false
})
const userpoolidentityprovidergoogle = new cognito.UserPoolIdentityProviderGoogle(this, 'WebAppUserPoolIdentityGoogle', {
clientId: googleClientIdParam.valueAsString,
clientSecret: googleClientSecretParam.valueAsString,
userPool: userpool,
attributeMapping: {
email: cognito.ProviderAttribute.GOOGLE_EMAIL
},
scopes: [ 'email' ]
})
// this is used to make the hostedui reachable
userpool.addDomain('Domain', {
cognitoDomain: {
domainPrefix: domainPrefixParam.valueAsString
}
})
const CLOUDFRONT_PUBLIC_URL = `https://${cfdistribution.distributionDomainName}/`
const client = userpool.addClient('Client', {
oAuth: {
flows: {
authorizationCodeGrant: true
},
callbackUrls: [
CLOUDFRONT_PUBLIC_URL
],
logoutUrls: [
CLOUDFRONT_PUBLIC_URL
],
scopes: [
cognito.OAuthScope.EMAIL,
cognito.OAuthScope.OPENID,
cognito.OAuthScope.PHONE
]
},
supportedIdentityProviders: [
cognito.UserPoolClientIdentityProvider.GOOGLE
]
})
client.node.addDependency(userpoolidentityprovidergoogle)
// defines an AWS Lambda resource
const securedlambda = new lambda.Function(this, 'AuthorizedRequestsHandler', {
runtime: lambda.Runtime.NODEJS_14_X,
code: lambda.Code.fromAsset('lambda'),
handler: 'secured.handler'
});
const lambdaapiintegration = new apigw.LambdaIntegration(securedlambda)
const backendapigw = new apigw.RestApi(this, 'AuthorizedRequestAPI', {
restApiName: domainPrefixParam.valueAsString,
defaultCorsPreflightOptions: {
"allowOrigins": apigw.Cors.ALL_ORIGINS,
"allowMethods": apigw.Cors.ALL_METHODS,
}
})
const backendapiauthorizer = new apigw.CognitoUserPoolsAuthorizer(this, 'BackendAPIAuthorizer', {
cognitoUserPools: [userpool]
})
const authorizedresource = backendapigw.root.addMethod('GET', lambdaapiintegration, {
authorizer: backendapiauthorizer,
authorizationType: apigw.AuthorizationType.COGNITO
})
const s3deploymentfrontend = new s3deployment.BucketDeployment(this, 'DeployFrontEnd', {
sources: [
s3deployment.Source.asset('./frontend'),
s3deployment.Source.data('constants.js', `const constants = {domainPrefix:'${domainPrefixParam.valueAsString}', region:'${this.region}', cognito_client_id:'${client.userPoolClientId}', apigw_id:'${backendapigw.restApiId}'}`)
],
destinationBucket: s3frontend,
distribution: cfdistribution
})
new cdk.CfnOutput(this, 'YourPublicCloudFrontURL', {
value: CLOUDFRONT_PUBLIC_URL,
description: 'Navigate to the URL to access your deployed application'
})
}
}
Recording the solution from the comments:
Cause:
cdk watch apparently does not work with template parameters. I guess this is because the default --hotswap option bypasses CloudFormation and deploys instead via SDK commands.
Solution:
Remove the CfnParameters from the template. CDK recommends not using parameters in any case.
Perhaps cdk watch --no-hotswap would also work?
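For illustration, a minimal sketch (assuming the same stack as above) of replacing the CfnParameter with a value resolved at synth time, e.g. from an environment variable, so that the template no longer contains parameters:
// Hypothetical refactor: resolve the prefix at synth time instead of via a CfnParameter
const domainPrefix = process.env.DOMAIN_NAME;
if (!domainPrefix) {
  throw new Error('Set DOMAIN_NAME before synthesizing the stack');
}
const s3frontend = new s3.Bucket(this, 'Bucket', {
  bucketName: `${domainPrefix}-frontend-bucket`,
  // ...remaining bucket props unchanged from the stack above
});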

AWS CDK: Aurora + Fargate service

I am trying to create a Fargate service that connects to an Aurora Postgres DB through the CDK, but I am unable to: I get a connection error. This should be pretty straightforward, though. Does anybody have any resources?
export class myStack extends cdk.Stack {
constructor(scope: Construct, id: string, props?: cdk.StackProps) {
super(scope, id, props);
const DATABASE_NAME = "myDatabase";
const vpc = new ec2.Vpc(this, "myVpc");
const cluster = new ecs.Cluster(this, "myCluster", {
vpc: vpc
});
const databaseAdminSecret = new Secret(this, 'myCredentialsSecret', {
secretName: 'database-secret',
description: 'Database auto-generated user password',
generateSecretString: {
secretStringTemplate: JSON.stringify({username: 'boss'}),
generateStringKey: 'password',
passwordLength: 30,
excludeCharacters: "\"#/\\",
excludePunctuation: true,
}
});
const database = new rds.DatabaseCluster(this, 'myDatabase', {
engine: rds.DatabaseClusterEngine.auroraPostgres({
version: rds.AuroraPostgresEngineVersion.VER_14_5
}),
credentials: rds.Credentials.fromSecret(databaseAdminSecret),
instanceProps: {
vpc,
},
defaultDatabaseName: DATABASE_NAME,
port: 5432,
});
// Create a load-balanced Fargate service and make it public
const service = new ecs_patterns.ApplicationLoadBalancedFargateService(this, "myService", {
cluster: cluster, // Required
taskImageOptions: {
image: ecs.ContainerImage.fromRegistry('my-custom-image'),
environment: {
DB_URL: `jdbc:postgresql://${database.clusterEndpoint.socketAddress}/${DATABASE_NAME}`,
DB_USERNAME: databaseAdminSecret.secretValueFromJson('username').unsafeUnwrap().toString(),
DB_PASSWORD: databaseAdminSecret.secretValueFromJson('password').unsafeUnwrap().toString(),
},
},
});
// Allow the service to connect to the database
database.connections.allowDefaultPortFrom(service.service);
}
}
When I spin up this stack with cdk deploy, my Fargate service ends up dying because "Connection was refused".
What am I doing wrong?
Thanks!
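A minimal sketch of one variation (assuming the same constructs as above; not a confirmed fix for the connection error): pass the credentials to the container as ECS secrets instead of resolving them at synth time with unsafeUnwrap():
// Sketch: let ECS inject the credentials at runtime rather than baking them into the task definition
const service = new ecs_patterns.ApplicationLoadBalancedFargateService(this, "myService", {
  cluster: cluster,
  taskImageOptions: {
    image: ecs.ContainerImage.fromRegistry('my-custom-image'),
    environment: {
      DB_URL: `jdbc:postgresql://${database.clusterEndpoint.socketAddress}/${DATABASE_NAME}`,
    },
    secrets: {
      DB_USERNAME: ecs.Secret.fromSecretsManager(databaseAdminSecret, 'username'),
      DB_PASSWORD: ecs.Secret.fromSecretsManager(databaseAdminSecret, 'password'),
    },
  },
});
database.connections.allowDefaultPortFrom(service.service);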

Dynamically filling parameters from a file in a Jenkins pipeline

TL;DR:
I would like to use ActiveChoice parameters in a Multibranch Pipeline where choices are defined in a YAML file in the same repository as the pipeline.
Context:
I have config.yaml with the following contents:
CLUSTER:
dev: 'Cluster1'
test: 'Cluster2'
production: 'Cluster3'
And my Jenkinsfile looks like:
pipeline {
agent {
dockerfile {
args '-u root'
}
}
stages {
stage('Parameters') {
steps {
script {
properties([
parameters([
[$class: 'ChoiceParameter',
choiceType: 'PT_SINGLE_SELECT',
description: 'Select the Environment from the Dropdown List',
filterLength: 1,
filterable: false,
name: 'Env',
script: [
$class: 'GroovyScript',
fallbackScript: [
classpath: [],
sandbox: true,
script:
"return['Could not get The environemnts']"
],
script: [
classpath: [],
sandbox: true,
script:
'''
// Here I would like to read the keys from config.yaml
return list
'''
]
]
]
])
])
}
}
}
stage("Loading pre-defined configs") {
steps{
script{
conf = readYaml file: "config.yaml";
}
}
}
stage("Gather Config Parameter") {
options {
timeout(time: 1, unit: 'HOURS')
}
input {
message "Please submit config parameter"
parameters {
choice(name: 'ENV', choices: ['dev', 'test', 'production'])
}
}
steps{
// Validation of input params goes here
script {
env.CLUSTER = conf.CLUSTER[ENV]
}
}
}
}
}
I added the last two stages just to show what I currently have working, but it's a bit ugly as a solution:
The job has to be built without parameters, so I can't easily keep track of the values I used for each run.
I can't just build it with parameters and leave; I have to wait for the agent to start the job and reach the stage before it finally asks for input.
The choices are hardcoded.
The issue I'm currently facing is that config.yaml doesn't exist in the 'Parameters' stage since (as I understand) the repository hasn't been cloned yet. I also tried using
def yamlFile = readTrusted("config.yaml")
within the Groovy script, but it didn't work either.
I think one solution could be to fetch the file with cURL, but I would need Git credentials and I'm not sure I'll have them at that stage.
Do you have any other ideas on how I could handle this situation?

(@aws-cdk/pipelines) Build stage for application source code

I worked through the CDK Pipelines: Continuous delivery for AWS CDK applications tutorial, which gave an overview of creating a self-mutating CDK pipeline with the new CodePipeline API.
The tutorial creates a CodePipeline with the CDK source code automatically retrieved from a GitHub repo every time a change is pushed to the master branch. The CDK code defines a Lambda with a TypeScript handler defined alongside the CDK code.
For my use case, I would like to define a self-mutating CodePipeline that is also triggered whenever I push to a second repository containing my application source code. The second repository will also contain a buildspec that generates a Docker image with my application and uploads the image to ECR. The new image will then be deployed to Fargate clusters in the application stages of my pipeline.
I've created an ApplicationBuild stage after the PublishAssets stage, which includes a CodeBuild project. The CodeBuild project reads from my repository and builds / uploads the image to ECR; however, I need a way to link this CodeBuild to the deployment of the pipeline. It's not clear to me how to do this with the new cdk CodePipeline API.
In case anyone has the same problem, I was able to hack together a solution using the legacy CdkPipeline API following the archived version of the tutorial I mentioned in my question.
Here is a minimum viable pipeline stack that includes...
a CDK pipeline source action (in "Source" stage)
an application source action (in "Source" stage)
a CDK build action (in "Build" stage) + self-mutating pipeline ("UpdatePipeline" stage)
an application build action (in "Build" stage)
lib/cdkpipelines-demo-pipeline-stack.ts
import * as codepipeline from '@aws-cdk/aws-codepipeline';
import * as codepipeline_actions from '@aws-cdk/aws-codepipeline-actions';
import * as core from '@aws-cdk/core';
import {Construct, SecretValue, Stack, StackProps} from '@aws-cdk/core';
import {CdkPipeline, SimpleSynthAction} from "@aws-cdk/pipelines";
import * as iam from "@aws-cdk/aws-iam";
import * as ecr from "@aws-cdk/aws-ecr";
import * as codebuild from "@aws-cdk/aws-codebuild";
/**
* The stack that defines the application pipeline
*/
export class CdkpipelinesDemoPipelineStack extends Stack {
constructor(scope: Construct, id: string, props?: StackProps) {
super(scope, id, props);
const sourceArtifact = new codepipeline.Artifact();
const cloudAssemblyArtifact = new codepipeline.Artifact();
const pipeline = new CdkPipeline(this, 'Pipeline', {
// The pipeline name
pipelineName: 'MyServicePipeline',
cloudAssemblyArtifact,
// Where the source can be found
sourceAction: new codepipeline_actions.GitHubSourceAction({
actionName: 'GitHub',
output: sourceArtifact,
oauthToken: SecretValue.secretsManager('github-token'),
owner: 'OWNER',
repo: 'REPO',
}),
// How it will be built and synthesized
synthAction: SimpleSynthAction.standardNpmSynth({
sourceArtifact,
cloudAssemblyArtifact,
// We need a build step to compile the TypeScript Lambda
buildCommand: 'npm run build'
}),
});
const pipelineRole = pipeline.codePipeline.role;
// Add application source action
const appSourceArtifact = new codepipeline.Artifact();
const appSourceAction = this.createAppSourceAction(appSourceArtifact);
const sourceStage = pipeline.stage("Source");
sourceStage.addAction(appSourceAction);
// Add application build action
const codeBuildServiceRole = this.createCodeBuildServiceRole(this, pipelineRole);
const repository = this.createApplicationRepository(this, codeBuildServiceRole);
const pipelineProject = this.createCodeBuildPipelineProject(
this, codeBuildServiceRole, repository, 'REGION', 'ACCOUNT_ID');
const appBuildOutput = new codepipeline.Artifact();
const appBuildAction = this.createAppCodeBuildAction(
this, appSourceArtifact, appBuildOutput, pipelineProject, codeBuildServiceRole);
const buildStage = pipeline.stage("Build");
buildStage.addAction(appBuildAction);
// This is where we add the application stages...
}
createAppSourceAction(appSourceArtifact: codepipeline.Artifact): codepipeline_actions.GitHubSourceAction {
return new codepipeline_actions.GitHubSourceAction({
actionName: 'GitHub-App-Source',
output: appSourceArtifact,
oauthToken: SecretValue.secretsManager('github-token'),
owner: 'SOURCE-OWNER',
repo: 'SOURCE-REPO',
});
}
createCodeBuildServiceRole(scope: core.Construct, pipelineRole: iam.IRole): iam.Role {
const role = new iam.Role(scope, 'CodeBuildServiceRole', {
assumedBy: new iam.ServicePrincipal('codebuild.amazonaws.com'),
});
role.assumeRolePolicy?.addStatements(new iam.PolicyStatement({
sid: "PipelineAssumeCodeBuildServiceRole",
effect: iam.Effect.ALLOW,
actions: ["sts:AssumeRole"],
principals: [pipelineRole]
}));
// Required policies to create an AWS CodeBuild service role
role.addToPolicy(new iam.PolicyStatement({
sid: "CloudWatchLogsPolicy",
effect: iam.Effect.ALLOW,
actions: [
"logs:CreateLogGroup",
"logs:CreateLogStream",
"logs:PutLogEvents"
],
resources: ["*"]
}));
role.addToPolicy(new iam.PolicyStatement({
sid: "CodeCommitPolicy",
effect: iam.Effect.ALLOW,
actions: ["codecommit:GitPull"],
resources: ["*"]
}));
role.addToPolicy(new iam.PolicyStatement({
sid: "S3GetObjectPolicy",
effect: iam.Effect.ALLOW,
actions: [
"s3:GetObject",
"s3:GetObjectVersion"
],
resources: ["*"]
}));
role.addToPolicy(new iam.PolicyStatement({
sid: "S3PutObjectPolicy",
effect: iam.Effect.ALLOW,
actions: [
"s3:PutObject"
],
resources: ["*"]
}));
role.addToPolicy(new iam.PolicyStatement({
sid: "S3BucketIdentity",
effect: iam.Effect.ALLOW,
actions: [
"s3:GetBucketAcl",
"s3:GetBucketLocation"
],
resources: ["*"]
}));
// This statement allows CodeBuild to upload Docker images to Amazon ECR repositories.
// source: https://docs.aws.amazon.com/codebuild/latest/userguide/sample-docker.html#sample-docker-running
role.addToPolicy(new iam.PolicyStatement({
sid: "ECRUploadPolicy",
effect: iam.Effect.ALLOW,
actions: [
"ecr:BatchCheckLayerAvailability",
"ecr:CompleteLayerUpload",
"ecr:GetAuthorizationToken",
"ecr:InitiateLayerUpload",
"ecr:PutImage",
"ecr:UploadLayerPart"
],
resources: ["*"]
}));
return role;
}
createApplicationRepository(scope: core.Construct, codeBuildServiceRole: iam.Role): ecr.Repository {
const repository = new ecr.Repository(scope, 'Repository', {
repositoryName: 'cdkpipelines-demo-image-repository'
});
repository.grantPullPush(codeBuildServiceRole);
return repository;
}
createCodeBuildPipelineProject(scope: core.Construct,
codeBuildServiceRole: iam.Role,
repository: ecr.Repository,
region: string,
accountId: string): codebuild.PipelineProject {
return new codebuild.PipelineProject(scope, 'BuildProject', {
buildSpec: codebuild.BuildSpec.fromSourceFilename("buildspec.yml"),
environment: {
buildImage: codebuild.LinuxBuildImage.fromCodeBuildImageId("aws/codebuild/standard:4.0"),
privileged: true,
computeType: codebuild.ComputeType.SMALL,
environmentVariables: {
AWS_DEFAULT_REGION: {value: region},
AWS_ACCOUNT_ID: {value: accountId},
IMAGE_REPO_NAME: {value: repository.repositoryName},
IMAGE_TAG: {value: "latest"},
}
},
role: codeBuildServiceRole
});
}
createAppCodeBuildAction(scope: core.Construct,
input: codepipeline.Artifact,
output: codepipeline.Artifact,
pipelineProject: codebuild.PipelineProject,
serviceRole: iam.Role) {
return new codepipeline_actions.CodeBuildAction({
actionName: "App-Build",
checkSecretsInPlainTextEnvVariables: false,
input: input,
outputs: [output],
project: pipelineProject,
role: serviceRole,
type: codepipeline_actions.CodeBuildActionType.BUILD,
})
}
}
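A minimal sketch (not part of the original answer, and assuming the ECR repository or its name is made available to the stage) of how a service stack in one of the application stages could consume the image pushed by the App-Build action:
import * as ecs from '@aws-cdk/aws-ecs';
// Hypothetical: inside a Fargate service stack deployed by an application stage,
// reference the image that the App-Build action uploaded to the repository above.
const taskDefinition = new ecs.FargateTaskDefinition(this, 'TaskDef');
taskDefinition.addContainer('AppContainer', {
  image: ecs.ContainerImage.fromEcrRepository(repository, 'latest'),
});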

Testcontainers fails to get mapped port on Jenkins Kubernetes Docker-in-Docker

I am trying to run my integration tests with Testcontainers in a Jenkins Kubernetes Docker-in-Docker container.
Testcontainers version: 1.15.3
However, it always fails to get the mapped port via Container.getMappedPort(X) inside the DinD container.
It works absolutely fine on my local setup and manages to get the mapped port.
Has anyone encountered this issue before, or does anyone have a solution for this?
My Jenkins file
#!groovy
def label = "debug-${UUID.randomUUID().toString()}"
podTemplate(label: label, slaveConnectTimeout: '10', containers: [
containerTemplate(
name: 'docker-in-docker',
image: 'cfzen/dind:java11',
privileged: true,
workingDir: '/home/jenkins/agent',
ttyEnabled: true,
command: 'cat',
envVars: [
envVar(key: 'TESTCONTAINERS_HOST_OVERRIDE', value: 'tcp://localhost:2375'),
envVar(key: 'TESTCONTAINERS_RYUK_DISABLED', value: 'true'),
]
),
containerTemplate(
name: 'helm-kubectl',
image: 'dtzar/helm-kubectl',
workingDir: '/home/jenkins/agent/',
ttyEnabled: true,
command: 'cat'
)
],
volumes: [hostPathVolume(mountPath: '/var/run/docker.sock', hostPath: '/var/run/docker.sock'),],
annotations: [
podAnnotation(key: 'iam.amazonaws.com/role',
value: 'arn:aws:iam::xxxxxxxxxxx')
],
)
{
node(label) {
deleteDir()
stage('Checkout') {
checkout scm
def shortCommit = sh(returnStdout: true, script: "git log -n 1 --pretty=format:'%h'").trim()
currentBuild.description = "${shortCommit}"
}
stage('Run Integration tests') {
container('docker-in-docker') {
withCredentials([
usernamePassword(credentialsId: 'jenkins-artifactory-credentials',
passwordVariable: 'ARTIFACTORY_SERVER_PASSWORD',
usernameVariable: 'ARTIFACTORY_SERVER_USERNAME')])
{
echo 'Run Integration tests'
sh("mvn -B clean verify -q -s mvn/local-settings.xml")
}
}
}
}
}
TestRunner:
@RunWith(CucumberWithSerenity.class)
@CucumberOptions(features = "classpath:features")
public final class RunCucumberIT {
@BeforeClass
public static void init(){
Containers.POSTGRES.start();
System.out.println("Exposed port of db is"+Containers.POSTGRES.getExposedPorts());
System.out.println("Assigned port of db is"+Containers.POSTGRES.getFirstMappedPort());
Containers.WIREMOCK.start();
Containers.S3.start();
}
private RunCucumberIT() {
}
}
It fails at Containers.POSTGRES.getFirstMappedPort() with:
Requested port (X) is not mapped
