Does anyone know if there's an up to date example or tutorial for deploying a static website to CloudFront using the AWS CDK?
I'm also interested in using Lambda@Edge to do some path rewriting, as described here.
Last time I looked at this, it seemed there wasn't a way to specify a static destination bucket for S3 assets, so I started tracking this issue which appears to have been resolved.
I would appreciate any advice or documentation on best practices for this.
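For reference, the path rewriting I have in mind is the usual origin-request handler along these lines (my own sketch of what the linked post describes; the exact matching rules are assumptions):

// Sketch: CloudFront origin-request Lambda@Edge handler that maps "pretty" URLs to index.html.
// The conditions below are assumptions; adjust them to the site's routing.
export const handler = async (event: any) => {
    const request = event.Records[0].cf.request;
    if (request.uri.endsWith('/')) {
        request.uri += 'index.html';            // e.g. /blog/  -> /blog/index.html
    } else if (!request.uri.includes('.')) {
        request.uri += '/index.html';           // e.g. /blog   -> /blog/index.html
    }
    return request;
};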
Here's an official example using the AWS CDK in TypeScript: https://github.com/aws-samples/aws-cdk-examples/blob/master/typescript/static-site/static-site.ts
The relevant part is in static-site.ts:
const zone = route53.HostedZone.fromLookup(this, 'Zone', { domainName: props.domainName });
const siteDomain = props.siteSubDomain + '.' + props.domainName;
new cdk.CfnOutput(this, 'Site', { value: 'https://' + siteDomain });

// Content bucket
const siteBucket = new s3.Bucket(this, 'SiteBucket', {
    bucketName: siteDomain,
    websiteIndexDocument: 'index.html',
    websiteErrorDocument: 'error.html',
    publicReadAccess: true,

    // The default removal policy is RETAIN, which means that cdk destroy will not attempt to delete
    // the new bucket, and it will remain in your account until manually deleted. By setting the policy to
    // DESTROY, cdk destroy will attempt to delete the bucket, but will error if the bucket is not empty.
    removalPolicy: cdk.RemovalPolicy.DESTROY, // NOT recommended for production code
});
new cdk.CfnOutput(this, 'Bucket', { value: siteBucket.bucketName });

// TLS certificate
const certificateArn = new acm.DnsValidatedCertificate(this, 'SiteCertificate', {
    domainName: siteDomain,
    hostedZone: zone
}).certificateArn;
new cdk.CfnOutput(this, 'Certificate', { value: certificateArn });

// CloudFront distribution that provides HTTPS
const distribution = new cloudfront.CloudFrontWebDistribution(this, 'SiteDistribution', {
    aliasConfiguration: {
        acmCertRef: certificateArn,
        names: [ siteDomain ],
        sslMethod: cloudfront.SSLMethod.SNI,
        securityPolicy: cloudfront.SecurityPolicyProtocol.TLS_V1_1_2016,
    },
    originConfigs: [
        {
            s3OriginSource: {
                s3BucketSource: siteBucket
            },
            behaviors: [ { isDefaultBehavior: true } ],
        }
    ]
});
new cdk.CfnOutput(this, 'DistributionId', { value: distribution.distributionId });

// Route53 alias record for the CloudFront distribution
new route53.ARecord(this, 'SiteAliasRecord', {
    recordName: siteDomain,
    target: route53.AddressRecordTarget.fromAlias(new targets.CloudFrontTarget(distribution)),
    zone
});

// Deploy site contents to S3 bucket
new s3deploy.BucketDeployment(this, 'DeployWithInvalidation', {
    sources: [ s3deploy.Source.asset('./site-contents') ],
    destinationBucket: siteBucket,
    distribution,
    distributionPaths: ['/*'],
});
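For completeness, wiring that StaticSite construct into an app looks roughly like this (a sketch following the same example repo; the stack name and context keys are assumptions):

class MyStaticSiteStack extends cdk.Stack {
    constructor(parent: cdk.App, name: string, props: cdk.StackProps) {
        super(parent, name, props);

        new StaticSite(this, 'StaticSite', {
            domainName: this.node.tryGetContext('domain'),
            siteSubDomain: this.node.tryGetContext('subdomain'),
        });
    }
}

const app = new cdk.App();

new MyStaticSiteStack(app, 'MyStaticSite', {
    // HostedZone.fromLookup needs a concrete account/region at synth time,
    // and the certificate used by CloudFront must live in us-east-1.
    env: { account: process.env.CDK_DEFAULT_ACCOUNT, region: 'us-east-1' },
});

app.synth();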
Related
We are deploying a remote app written in Next.js and TypeScript; the host app is written in React only.
Currently the host app gets a 404 Not Found error because the remote app runs into this error in the build snapshot on Jenkins:
+ ls ./dist/static/chunks/remoteEntry.js
ls: cannot access './dist/static/chunks/remoteEntry.js': No such file or directory
script returned exit code 2
However, the file is generated locally and both apps are able to spin up in the local environment.
Here is our next.config.js:
const NextFederationPlugin = require('@module-federation/nextjs-mf');
const { exposedModules } = require('./lib/routes');
const version = process.env.VERSION_OVERRIDE || require('./package.json').version;
const deps = require('./package.json').dependencies;
// Note: This path needs to match with what's specified in CIRRUS_FRONTEND_ENTRYPOINT for www.
const assetBasePath = process.env.CDN_PATH ? `${process.env.CDN_PATH}${version}` : process.env.ASSET_BASE_PATH;
// Note: Heavily references module federation example meant for omnidirectional federation between Next apps.
// Changes mostly around path resolution due to our current resolution pattern via cdn
// https://github.com/module-federation/module-federation-examples/blob/master/nextjs/home/next.config.js
module.exports = {
webpack(config, options) {
Object.assign(config.experiments, { topLevelAwait: true });
// Integrated mode calls `next build` which has minimization by default. For local development, this is unnecessary.
if (process.env.NEXT_PUBLIC_ENVIRONMENT === 'INTEGRATED') {
config.optimization.minimize = false;
}
if (!options.isServer) {
console.log("Not Server");
config.output.publicPath = 'auto';
config.plugins.push(
new NextFederationPlugin({
name: 'cirrus',
filename: 'static/chunks/remoteEntry.js',
exposes: {
'./FederatedRouter': './lib/FederatedRouter',
...exposedModules
},
remoteType: 'var',
remotes: {},
shared: {
'@transcriptic/amino': {
requiredVersion: deps['@transcriptic/amino'],
singleton: true
},
react: {
requiredVersion: deps.react,
singleton: true
},
'react-dom': {
requiredVersion: deps['react-dom'],
singleton: true
},
'@strateos/micro-apps-utils': {
requiredVersion: deps['@strateos/micro-apps-utils'],
singleton: true
}
},
extraOptions: {
// We need to override the default module sharing behavior of this plugin as that assumes a nextjs host
// and thus next modules will be provided by the parent application.
// However, web is currently NOT a nextjs application, so that assumption is invalid
// for this child application. Note that this means we need to explicitly specify common modules such as `react`
// in the `shared` key above.
skipSharingNextInternals: true
}
})
);
} else {
console.log("Is Server");
}
return config;
},
// Note: Annoyingly, NextJS automatically appends a `_next` directory for assetPrefix
// but NOT public path so we'll have to manually include it here.
publicPath: `${assetBasePath}/_next/`,
// Note: If serving assets via CDN, assetPrefix is required to help resolve static assets.
// Also, NextJS automatically appends and expects a `_next` directory to the assetPrefix path.
// See https://nextjs.org/docs/api-reference/next.config.js/cdn-support-with-asset-prefix
assetPrefix: process.env.CDN_PATH ? assetBasePath : undefined,
distDir: 'dist',
// Use index react-router as fallback for resolving any pages that are not directly specified
async rewrites() {
return {
fallback: [
{
source: '/:path*',
destination: '/'
}
]
};
}
};
Tried upgrading Next.js from 12.1.6 to 12.2.2
Tried upgrading Webpack from 5.74.0 to 5.75.0
Cleaned the cache with sh 'yarn cache clean'
Tried clearing the environment with sh 'env -i PATH=$PATH make build-snapshot'
Compared the hash of node_modules via tar -cf - node_modules | md5sum
Downgraded "@module-federation/nextjs-mf" from 5.12.9 to 5.10.5
Verified file writing permissions
I'm developing an app in the AWS CDK which will create an ECS scheduled task, an EventBridge rule, and some IAM roles with inline policies. These resources are created in the same stack. For better control I decided to define the IAM roles and their inline policies explicitly (pseudocode is below).
//Role 1
private ExecutionRole(): iam.Role {
    const RolePolicy = new iam.PolicyDocument({
        statements: [
            new iam.PolicyStatement({
                resources: ['some:Resources*'],
                actions: ['some:Actions'],
                effect: iam.Effect.ALLOW
            })
        ]
    });
    const execRole = new iam.Role(this, 'Execution-Role', {
        assumedBy: new iam.ServicePrincipal('ecs-tasks.amazonaws.com'),
        inlinePolicies: {RolePolicy}
    });
    return execRole;
}

// Role 2
private EventBridgeRole(): iam.Role {
    const RolePolicy = new iam.PolicyDocument({
        statements: [
            new iam.PolicyStatement({
                resources: ['some:Resources'],
                actions: ['some:Actions'],
                effect: iam.Effect.ALLOW,
            })
        ],
    });
    const eventBridgeRole = new iam.Role(this, 'Event-Bridge-Role', {
        assumedBy: new iam.ServicePrincipal('events.amazonaws.com'),
        inlinePolicies: {RolePolicy},
    });
    return eventBridgeRole;
}
These functions are used in other functions which create the Task Definition and the EventBridge target:
// Task Definition
private FargateTaskDefinition(): ecs.FargateTaskDefinition {
    const taskDefName = 'blaBlaTaskDefinition';
    const taskRole = this.TaskRole();
    const taskExecRole = this.ExecutionRole();
    const taskDefinition = new ecs.FargateTaskDefinition(this, taskDefName, {
        cpu: 1024,
        family: taskDefName,
        memoryLimitMiB: 2048,
        taskRole: taskRole,
        executionRole: taskExecRole
    });
    return taskDefinition;
}

// Event Bridge Target
private EventBridgeTarget(): eventTargets.EcsTask {
    const vpc = 'some-vpc-id';
    const cluster = 'ecs-cluster-id';
    const subnets = 'some-subnets';
    const taskDefinition = this.FargateTaskDefinition();
    const ebTargetRole = this.EventBridgeRole();
    const ebTarget = new eventTargets.EcsTask({
        taskDefinition,
        cluster,
        role: ebTargetRole,
        platformVersion: 'platform-version',
        subnetSelection: { subnets },
    });
    return ebTarget;
}
When I perform cdk deploy, all resources are created, but the IAM roles contain an additional autogenerated policy statement. This is true for both roles:
[screenshot: autogenerated policy statements attached to the roles]
Is there any way to prevent the creation of these autogenerated policies?
Also, has anyone found a way to provide names for these policies?
I'd appreciate any advice. Thank you!
CDK constructs helpfully add required policies to roles. If instead you want explicit control over the policies that get added to a role, use the withoutPolicyUpdates method on a Role instance to return an "immutable" IRole that can be passed around:
Use the object returned by this method if you want this Role to be used by a construct without it automatically updating the Role's Policies. If you do, you are responsible for adding the correct statements to the Role's policies yourself.
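Applied to the code in the question, that might look roughly like this (a sketch; TaskRole and ExecutionRole are the question's own helpers, the rest is the normal iam.Role / ecs.FargateTaskDefinition API):

// Wrap the roles so constructs can no longer append their own statements.
const taskRole = this.TaskRole().withoutPolicyUpdates();
const taskExecRole = this.ExecutionRole().withoutPolicyUpdates();

const taskDefinition = new ecs.FargateTaskDefinition(this, 'blaBlaTaskDefinition', {
    cpu: 1024,
    memoryLimitMiB: 2048,
    taskRole: taskRole,
    // You are now responsible for adding every statement the task (and the
    // EventBridge target) needs to your inline policies yourself.
    executionRole: taskExecRole,
});

As for naming: the key you use in the inlinePolicies map ({RolePolicy} in your code) becomes the inline policy's name, so renaming that key renames the policy.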
Can anyone provide or point to an AWS CDK/TypeScript example provisioning an AWS AppConfig app/env/config consumed by a Lambda, please? I could not find anything meaningful online.
A little late to this party, but if anyone is coming to this question looking for a quick answer, here you go:
AppConfig Does Not Have Any Official L2 Constructs
This means that one has to work with L1 constructs currently vended here
This is ripe for someone authoring an L2/L3 construct (I am working on vending this custom construct soon - so keep an eye out on updates here).
This does not mean it's hard to use AppConfig in CDK; it's just that, unlike with L2/L3 constructs, we have to dig deeper into setting AppConfig up by reading its documentation.
A Very Simple Example (YMMV)
Here is a custom construct I have to setup AppConfig:
import {
CfnApplication,
CfnConfigurationProfile,
CfnDeployment,
CfnDeploymentStrategy,
CfnEnvironment,
CfnHostedConfigurationVersion,
} from "aws-cdk-lib/aws-appconfig";
import { Construct } from "constructs";
// 1. https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_appconfig-readme.html - there are no L2 constructs for AppConfig.
// 2. https://docs.aws.amazon.com/appconfig/latest/userguide/appconfig-working.html - this is the CDK code from this console setup guide.
export class AppConfig extends Construct {
constructor(scope: Construct, id: string) {
super(scope, id);
// create a new app config application.
const testAppConfigApp: CfnApplication = new CfnApplication(
this,
"TestAppConfigApp",
{
name: "TestAppConfigApp",
}
);
// you can customize this as per your needs.
const immediateDeploymentStrategy = new CfnDeploymentStrategy(
this,
"DeployStrategy",
{
name: "ImmediateDeployment",
deploymentDurationInMinutes: 0,
growthFactor: 100,
replicateTo: "NONE",
finalBakeTimeInMinutes: 0,
}
);
// setup an app config env
const appConfigEnv: CfnEnvironment = new CfnEnvironment(
this,
"AppConfigEnv",
{
applicationId: testAppConfigApp.ref,
// can be anything that makes sense for your use case.
name: "Production",
}
);
// setup config profile
const appConfigProfile: CfnConfigurationProfile = new CfnConfigurationProfile(
this,
"ConfigurationProfile",
{
name: "TestAppConfigProfile",
applicationId: testAppConfigApp.ref,
// we want AppConfig to manage the configuration profile, unless we need from SSM or S3.
locationUri: "hosted",
// This can also be "AWS.AppConfig.FeatureFlags"
type: "AWS.Freeform",
}
);
// Update AppConfig
const configVersion: CfnHostedConfigurationVersion = new CfnHostedConfigurationVersion(
this,
"HostedConfigurationVersion",
{
applicationId: testAppConfigApp.ref,
configurationProfileId: appConfigProfile.ref,
content: JSON.stringify({
someAppConfigResource: "SomeAppConfigResourceValue",
//... add more as needed.
}),
// https://www.rfc-editor.org/rfc/rfc9110.html#name-content-type
contentType: "application/json",
}
);
// Perform deployment.
new CfnDeployment(this, "Deployment", {
applicationId: testAppConfigApp.ref,
configurationProfileId: appConfigProfile.ref,
configurationVersion: configVersion.ref,
deploymentStrategyId: immediateDeploymentStrategy.ref,
environmentId: appConfigEnv.ref,
});
}
}
Here is what goes inside my lambda handler (please note that you need the Lambda layer for the AppConfig extension enabled; see more information here):
const http = require('http');
exports.handler = async (event) => {
const res = await new Promise((resolve, reject) => {
http.get(
"http://localhost:2772/applications/TestAppConfigApp/environments/Production/configurations/TestAppConfigProfile",
resolve
);
});
let configData = await new Promise((resolve, reject) => {
let data = '';
res.on('data', chunk => data += chunk);
res.on('error', err => reject(err));
res.on('end', () => resolve(data));
});
const parsedConfigData = JSON.parse(configData);
return parsedConfigData;
};
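If you define the consuming Lambda in CDK as well, attaching the AppConfig extension layer might look roughly like this (a sketch; the layer ARN is a placeholder, look up the region- and architecture-specific ARN in the AppConfig docs):

import * as lambda from "aws-cdk-lib/aws-lambda";

const configConsumer = new lambda.Function(this, "ConfigConsumer", {
    runtime: lambda.Runtime.NODEJS_18_X,
    handler: "index.handler",
    code: lambda.Code.fromAsset("lambda"),
    layers: [
        lambda.LayerVersion.fromLayerVersionArn(
            this,
            "AppConfigExtensionLayer",
            // Placeholder ARN: substitute the AWS AppConfig Lambda extension ARN for your region.
            "arn:aws:lambda:<region>:<account>:layer:AWS-AppConfig-Extension:<version>"
        ),
    ],
});

The function's role also needs permission to read the configuration (for example the appconfig:StartConfigurationSession and appconfig:GetLatestConfiguration actions).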
EDIT: As promised, I released a new npm library: https://www.npmjs.com/package/cdk-appconfig. As of this update it is still pre-release, but time permitting I will release a 1.x soon. This will allow people to check it out or fork it if need be. Please share feedback or open PRs if you'd like to collaborate.
I have no experience with AWS AppConfig, but I would start here and here.
I'm developing a Blazor WebAssembly app with PWA enabled, with the files appsettings.json, appsettings.Development.json and appsettings.Production.json. The last one is empty because it will contain secrets that are substituted when the production environment is deployed to a Kubernetes cluster.
I'm using k8s to deploy, and a Secret resource to replace the empty appsettings.Production.json file with an encrypted file, inside an nginx-based container with the published Blazor app.
Now I'm getting this issue in the browser:
When the application was built using docker build in a CI pipeline, the file was an empty JSON file, so the SHA recorded at build time does not match the hash of the file that replaces it at deployment.
My question is: how can I replace appsettings.Production.json during deployment, much later than the build process, without having the integrity check fail for that file?
The file blazor.boot.json does not contain any SHA for the appsettings.Production.json file:
{
"cacheBootResources": true,
"config": [
"appsettings.Development.json",
"appsettings.json",
"appsettings.Production.json"
],
"debugBuild": false,
"entryAssembly": "IrisTenantWeb",
"icuDataMode": 0,
"linkerEnabled": true,
"resources": {
"assembly": {
"Azure.Core.dll": "sha256-rzNx\/GlDpiutVRPzugT82owXvTopmiixMar68xLA6L8=",
// Bunch of .dlls,
"System.Private.CoreLib.dll": "sha256-S7l+o9J9ivjCunMa+Ms\/JO\/kVaXLW8KTAjq1eRjY4EA="
},
"lazyAssembly": null,
"pdb": null,
"runtime": {
"dotnet.timezones.blat": "sha256-SQvzbzBfueaAxSKIKE1khBH02NH2MJJaWDBav\/S5MSs=",
"dotnet.wasm": "sha256-YXYNlLeMqRPFVpY2KSDhleLkNk35d9KvzzwwKAoiftc=",
"icudt.dat": "sha256-m7NyeXyxM+CL04jr9ui1Z6pVfMWwhHusuz5qNZWpAwA=",
"icudt_CJK.dat": "sha256-91bygK5voY9lG5wxP0\/uj7uH5xljF9u7iWnSldT1Z\/g=",
"icudt_EFIGS.dat": "sha256-DPfeOLph83b2rdx40cKxIBcfVZ8abTWAFq+RBQMxGw0=",
"icudt_no_CJK.dat": "sha256-oM7Z6aN9jHmCYqDMCBwFgFAYAGgsH1jLC\/Z6DYeVmmk=",
"dotnet.5.0.5.js": "sha256-Dvb7uXD3+JPPqlsw2duS+FFNQDkFaxhIbSQWSnhODkM="
},
"satelliteResources": null
}
}
But the service-worker-assets.js file DOES contain a SHA computed for it:
self.assetsManifest = {
"assets": [
{
"hash": "sha256-EaNzjsIaBdpWGRyu2Elt6mv3X+48iD9gGaSN8xAm3ao=",
"url": "appsettings.Development.json"
},
{
"hash": "sha256-RIn54+RUdIs1IeshTgpWlNViz\/PZ\/1EctFaVPI9TTAA=",
"url": "appsettings.json"
},
{
"hash": "sha256-RIn54+RUdIs1IeshTgpWlNViz\/PZ\/1EctFaVPI9TTAA=",
"url": "appsettings.Production.json"
},
{
"hash": "sha256-OV+CP+ILUqNY7e7\/MGw1L5+Gi7EKCXEYNJVyBjbn44M=",
"url": "css\/app.css"
},
// ...
],
"version": "j39cUu6V"
};
NOTE: You can see that both appsettings.json and appsettings.Production.json have the same hash because they are both the empty JSON file {}. But in production the second one ends up with a computed hash of YM2gjmV5... and triggers the error.
I can't have different build processes for different environments, because that would not ensure that staging and production use the same build. I need to use the same Docker image but replace the file at deployment time.
I edited the wwwroot/service-worker.published.js file, whose first lines are as follows:
// Caution! Be sure you understand the caveats before publishing an application with
// offline support. See https://aka.ms/blazor-offline-considerations
self.importScripts('./service-worker-assets.js');
self.addEventListener('install', event => event.waitUntil(onInstall(event)));
self.addEventListener('activate', event => event.waitUntil(onActivate(event)));
self.addEventListener('fetch', event => event.respondWith(onFetch(event)));
const cacheNamePrefix = 'offline-cache-';
const cacheName = `${cacheNamePrefix}${self.assetsManifest.version}`;
const offlineAssetsInclude = [ /\.dll$/, /\.pdb$/, /\.wasm/, /\.html/, /\.js$/, /\.json$/, /\.css$/, /\.woff$/, /\.png$/, /\.jpe?g$/, /\.gif$/, /\.ico$/, /\.blat$/, /\.dat$/ ];
const offlineAssetsExclude = [ /^service-worker\.js$/ ];
async function onInstall(event) {
console.info('Service worker: Install');
// Fetch and cache all matching items from the assets manifest
const assetsRequests = self.assetsManifest.assets
.filter(asset => offlineAssetsInclude.some(pattern => pattern.test(asset.url)))
.filter(asset => !offlineAssetsExclude.some(pattern => pattern.test(asset.url)))
.map(asset => new Request(asset.url, { integrity: asset.hash }));
await caches.open(cacheName).then(cache => cache.addAll(assetsRequests));
}
...
I added an array of patterns, similar to offlineAssetsInclude and offlineAssetsExclude, to indicate which files should skip the integrity check:
...
const offlineAssetsInclude = [ /\.dll$/, /\.pdb$/, /\.wasm/, /\.html/, /\.js$/, /\.json$/, /\.css$/, /\.woff$/, /\.png$/, /\.jpe?g$/, /\.gif$/, /\.ico$/, /\.blat$/, /\.dat$/ ];
const offlineAssetsExclude = [ /^service-worker\.js$/ ];
const integrityExclude = [ /^appsettings\.Production\.json$/ ]; // <-- new variable
Then at onInstall, instead of always returning a Request with integrity set, I skipped it for excluded patterns:
...
async function onInstall(event) {
console.info('Service worker: Install');
// Fetch and cache all matching items from the assets manifest
const assetsRequests = self.assetsManifest.assets
.filter(asset => offlineAssetsInclude.some(pattern => pattern.test(asset.url)))
.filter(asset => !offlineAssetsExclude.some(pattern => pattern.test(asset.url)))
.map(asset => {
// Start of new code
const integrity =
integrityExclude.some(pattern => pattern.test(asset.url))
? null
: asset.hash;
return !!integrity
? new Request(asset.url, { integrity })
: new Request(asset.url);
// End of new code
});
await caches.open(cacheName).then(cache => cache.addAll(assetsRequests));
}
...
I'll wait for others to comment and propose other solutions, because the ideal answer would set the correct SHA hash for the file instead of ignoring it.
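For anyone who wants to go that other way, one option is to recompute the hash at deployment time and patch service-worker-assets.js inside the container before nginx serves it. A rough Node sketch (the wwwroot path is an assumption for an nginx-based image; adjust to your layout):

// patch-sw-assets.js - recompute the SHA-256 of the replaced file and rewrite its
// entry in service-worker-assets.js so the integrity check passes.
const crypto = require('crypto');
const fs = require('fs');

const wwwroot = '/usr/share/nginx/html';            // assumed publish location
const target = 'appsettings.Production.json';

const hash = 'sha256-' + crypto
    .createHash('sha256')
    .update(fs.readFileSync(`${wwwroot}/${target}`))
    .digest('base64');

const manifestPath = `${wwwroot}/service-worker-assets.js`;
const manifest = fs.readFileSync(manifestPath, 'utf8');

// Replace the stale hash that precedes the target file's "url" entry.
const patched = manifest.replace(
    new RegExp(`"hash":\\s*"[^"]*",(\\s*)"url":\\s*"${target.replace(/\./g, '\\.')}"`),
    `"hash": "${hash}",$1"url": "${target}"`
);
fs.writeFileSync(manifestPath, patched);

Run from the container entrypoint after the Secret is mounted, so the manifest always matches the file actually being served.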
I'm attempting to implement workbox to precache image and video assets on a website.
I've created a service worker file. It appears to be successfully referenced and used in my application. The service worker:
import { clientsClaim, setCacheNameDetails } from 'workbox-core';
import { precacheAndRoute, addRoute } from 'workbox-precaching';
const context = self; // eslint-disable-line no-restricted-globals
setCacheNameDetails({ precache: 'app' });
// eslint-disable-next-line no-restricted-globals, no-underscore-dangle
precacheAndRoute(self.__WB_MANIFEST);
context.addEventListener('install', (event) => {
event.waitUntil(
caches.open('dive').then((cache) => {
console.log(cache);
}),
);
});
context.addEventListener('activate', (event) => {
console.log('sw active');
});
context.addEventListener('fetch', async (event) => {
console.log(event.request.url);
});
context.addEventListener('message', ({ data }) => {
const { type, payload } = data;
if (type === 'cache') {
payload.forEach((url) => {
addRoute(url);
});
const manifest = payload.map((url) => (
{
url,
revision: null,
}
));
console.log('attempting to precache and route manifest', JSON.stringify(manifest));
precacheAndRoute(manifest);
}
});
context.skipWaiting();
clientsClaim();
The application uses workbox-window to load, reference and message the service worker. The app looks like:
import { Workbox } from 'workbox-window';
const workbox = new Workbox('/sw.js');
workbox.register();
workbox.messageSW({
type: 'cache',
payload: [
{ url: 'https://media0.giphy.com/media/Ju7l5y9osyymQ/giphy.gif' }
],
});
This project uses Vue with vue-cli. It has a webpack config which allows plugins to be added to webpack. The config looks like:
const { InjectManifest } = require('workbox-webpack-plugin');
const path = require('path');
module.exports = {
configureWebpack: {
plugins: [
new InjectManifest({
swSrc: path.join(__dirname, 'lib/services/Asset-Cache.serviceworker.js'),
swDest: 'Asset-Cache.serviceworker.js',
}),
],
},
};
I can see messages are successfully sent to the service worker and contain the correct payload, BUT the assets never show up in Chrome dev tools cache storage. I also never see any workbox logging related to the assets sent via messageSW. I've also tested by disabling my internet; the assets don't appear to be loading into the cache. What am I doing wrong here?
I found the workbox docs to be a little unclear, and have also tried deleting the message event handler from the service worker and sending messages to the service worker like this:
workbox.messageSW({
type: 'CACHE_URLS',
payload: { urlsToCache: [ 'https://media0.giphy.com/media/Ju7l5y9osyymQ/giphy.gif'] },
});
This also appears to have no effect on the cache.
The precache portion of precacheAndRoute() works by adding install and activate listeners to the current service worker, taking advantage of the service worker lifecycle. This will effectively be a no-op if the service worker has already finished installing and activating by the time it's called, which may be the case if you trigger it conditionally via a message handler.
We should probably warn folks about this ineffective usage of precacheAndRoute() in our Workbox development builds; I've filed an issue to track that.
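If the goal is just to get those URLs into Cache Storage when the message arrives (rather than full precache-and-route behavior), one option is to write to the cache directly in the message handler. A sketch using the same 'cache' message shape and the 'dive' cache name from the question (the payload handling is an assumption, since the question sends both strings and { url } objects):

context.addEventListener('message', (event) => {
    const { type, payload } = event.data;
    if (type === 'cache') {
        // Accept either plain URL strings or { url } objects.
        const urls = payload.map((entry) => (typeof entry === 'string' ? entry : entry.url));
        event.waitUntil(
            caches.open('dive').then((cache) => cache.addAll(urls)),
        );
    }
});

Note that this only populates the cache; you would still need a runtime caching route or your own fetch handler to serve those entries when offline.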