AWS CDK Code Pipelines: Why Can Local Obtain The Branch But CodeBuild Cannot?

My goal is to dynamically name resources to allow for multiple environments. For example, a "dev-accounts" table, and a "prod-accounts" table.
The issue I am facing is that CodeBuild cannot dynamically name resources, while local can. Following the example above, I am seeing "undefined-accounts" when viewing the logs in CodeBuild.
Code to obtain the environment by branch name:
export const getContext = (app: App): Promise<CDKContext> => {
  return new Promise(async (resolve, reject) => {
    try {
      const currentBranch = await gitBranch();
      const environment = app.node
        .tryGetContext("environments")
        .find((e: any) => e.branchName === currentBranch);
      const globals = app.node.tryGetContext("globals");
      return resolve({ ...globals, ...environment });
    } catch (error) {
      return reject("Cannot get context from getContext()");
    }
  });
};
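For reference, gitBranch() isn't shown in the question; presumably it reads the branch from local git metadata, along the lines of this hypothetical sketch:

import { execSync } from "child_process";

// Hypothetical stand-in for the gitBranch() helper used above: reads the
// checked-out branch name from the local git metadata.
const gitBranch = async (): Promise<string> =>
  execSync("git rev-parse --abbrev-ref HEAD").toString().trim();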
Further Explanation:
In the bin/template.ts file, I log the context with console.log after calling const context = await getContext(app);
Local CLI outcome:
{
  appName: 'appName',
  region: 'eu-west-1',
  accountId: '000000000',
  environment: 'dev',
  branchName: 'dev'
}
CodeBuild outcome:
{
  appName: 'appName',
  region: 'eu-west-1',
  accountId: '000000000'
}
Note I've removed sensitive information.
This is my CodePipeline, built with the CDK:
this.codePipeline = new CodePipeline(this, `${environment}-${appName}-`, {
  pipelineName: `${environment}-${appName}-`,
  selfMutation: true,
  crossAccountKeys: false,
  role: this.codePipelineRole,
  synth: new ShellStep("Deployment", {
    input: CodePipelineSource.codeCommit(this.codeRepository, environment, {
      codeBuildCloneOutput: true
    }),
    installCommands: ["npm i -g npm@latest"],
    commands: [
      "cd backend",
      "npm ci",
      "npm run build",
      "cdk synth",
    ],
    primaryOutputDirectory: "backend/cdk.out",
  })
});
By using the key/value codeBuildCloneOutput: true, I believe I am performing a full clone of the CodeCommit repository, and thus getting the git metadata.

CodeBuild exposes the CODEBUILD_SOURCE_VERSION environment variable. For CodeCommit, this is "the commit ID or branch name associated with the version of the source code to be built", so you can fall back to it when the git metadata isn't available:
const currentBranch = process.env.CODEBUILD_SOURCE_VERSION ?? await gitBranch();
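Note the caveat in the quoted description: the variable can hold a commit ID rather than a branch name. A defensive variant might look like this (a sketch; the resolveBranch helper and the refs/heads/ normalization are my assumptions, not part of the original answer):

// Prefer the branch name from CodeBuild, but fall back to local git metadata
// when the variable is missing or holds a raw 40-character commit ID.
const resolveBranch = async (): Promise<string> => {
  const sourceVersion = process.env.CODEBUILD_SOURCE_VERSION;
  if (sourceVersion && !/^[0-9a-f]{40}$/.test(sourceVersion)) {
    return sourceVersion.replace(/^refs\/heads\//, "");
  }
  return gitBranch();
};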

Related

remoteEntry.js not found on Jenkins

We are deploying a remote app written in NextJS and TypeScript; the host app is in React only.
Currently the host app gets a 404 Not Found error because the remote app runs into this error in the Build Snapshot on Jenkins:
+ ls ./dist/static/chunks/remoteEntry.js
ls: cannot access './dist/static/chunks/remoteEntry.js': No such file or directory
script returned exit code 2
However, the file is generated locally, and both apps are able to spin up in the local environment.
Here is our next.config.js:
const NextFederationPlugin = require('@module-federation/nextjs-mf');
const { exposedModules } = require('./lib/routes');
const version = process.env.VERSION_OVERRIDE || require('./package.json').version;
const deps = require('./package.json').dependencies;
// Note: This path needs to match what's specified in CIRRUS_FRONTEND_ENTRYPOINT for www.
const assetBasePath = process.env.CDN_PATH ? `${process.env.CDN_PATH}${version}` : process.env.ASSET_BASE_PATH;
// Note: Heavily references the module federation example meant for omnidirectional federation between Next apps.
// Changes are mostly around path resolution, due to our current resolution pattern via CDN.
// https://github.com/module-federation/module-federation-examples/blob/master/nextjs/home/next.config.js
module.exports = {
  webpack(config, options) {
    Object.assign(config.experiments, { topLevelAwait: true });
    // Integrated mode calls `next build`, which has minimization by default. For local development, this is unnecessary.
    if (process.env.NEXT_PUBLIC_ENVIRONMENT === 'INTEGRATED') {
      config.optimization.minimize = false;
    }
    if (!options.isServer) {
      console.log("Not Server");
      config.output.publicPath = 'auto';
      config.plugins.push(
        new NextFederationPlugin({
          name: 'cirrus',
          filename: 'static/chunks/remoteEntry.js',
          exposes: {
            './FederatedRouter': './lib/FederatedRouter',
            ...exposedModules
          },
          remoteType: 'var',
          remotes: {},
          shared: {
            '@transcriptic/amino': {
              requiredVersion: deps['@transcriptic/amino'],
              singleton: true
            },
            react: {
              requiredVersion: deps.react,
              singleton: true
            },
            'react-dom': {
              requiredVersion: deps['react-dom'],
              singleton: true
            },
            '@strateos/micro-apps-utils': {
              requiredVersion: deps['@strateos/micro-apps-utils'],
              singleton: true
            }
          },
          extraOptions: {
            // We need to override the default module-sharing behavior of this plugin, as it assumes a NextJS host
            // and thus that Next modules will be provided by the parent application.
            // However, web is currently NOT a NextJS application, so that assumption is invalid for this
            // child application. Note that this means we need to ensure we explicitly specify common modules
            // such as `react` in the `shared` key above.
            skipSharingNextInternals: true
          }
        })
      );
    } else {
      console.log("Is Server");
    }
    return config;
  },
  // Note: Annoyingly, NextJS automatically appends a `_next` directory to assetPrefix
  // but NOT to publicPath, so we have to manually include it here.
  publicPath: `${assetBasePath}/_next/`,
  // Note: If serving assets via CDN, assetPrefix is required to help resolve static assets.
  // Also, NextJS automatically appends and expects a `_next` directory on the assetPrefix path.
  // See https://nextjs.org/docs/api-reference/next.config.js/cdn-support-with-asset-prefix
  assetPrefix: process.env.CDN_PATH ? assetBasePath : undefined,
  distDir: 'dist',
  // Use the index react-router as a fallback for resolving any pages that are not directly specified.
  async rewrites() {
    return {
      fallback: [
        {
          source: '/:path*',
          destination: '/'
        }
      ]
    };
  }
};
What we've tried:
Upgrading NextJS from 12.1.6 to 12.2.2
Upgrading Webpack from 5.74.0 to 5.75.0
Cleaning the cache via sh 'yarn cache clean'
Clearing the env via sh 'env -i PATH=$PATH make build-snapshot'
Comparing hashes of node_modules via tar -cf - node_modules | md5sum
Downgrading "@module-federation/nextjs-mf" from 5.12.9 to 5.10.5
Verifying file write permissions

Playwright configuration: POM fixtures for projects

I need to configure Playwright to use different POM fixtures for different project settings.
All examples I find configure the POM while extending the base test. This works, but then Playwright uses the same POM fixture for all projects.
import { test as base } from '@playwright/test';

type TestOptions = {
  productDetailPom: ProductDetailPom
};

export const test = base.extend<TestOptions>({
  productDetailPom: async ({ browser }, use) => {
    await use(await ProductDetailPom.create(browser, 'url'));
  },
});
What I need are different POMs for each configured project. Is there a way to create a POM instance with the browser or page fixture for each project in the config?
// playwright.config.ts
const config: PlaywrightTestConfig<TestOptions> = {
  ...
  projects: [
    {
      name: 'proj1',
      use: {
        productDetailPom: new ProductDetailPom1(browser, 'url1') // POM instance 1
      }
    },
    {
      name: 'proj2',
      use: {
        productDetailPom: new ProductDetailPom2(browser, 'url2') // POM instance 2
      }
    }
  ],
};
I found no way to construct POMs in the config, but I did find a workaround. Here is what I came up with; please tell me if you have a better solution.
In short: the POM instances are created not in the config, but later. For instantiation I use a factory, which can be the same for all projects; only the data fed into the factory differs per project. (Constructing the instances in the config directly isn't possible, since fixtures like browser don't exist yet when the config is evaluated.)
In the config I add the data I need for construction: the POM class itself and the data that will get passed to the constructor. These data can differ per project.
// playwright.config.ts
const config: PlaywrightTestConfig<TestOptions> = {
  ...
  projects: [
    {
      name: 'proj1',
      use: {
        productDetailPomConstr: [ProductDetailPom1, 'url1']
      },
    },
    {
      name: 'proj2',
      use: {
        productDetailPomConstr: [ProductDetailPom2, 'url2']
      }
    }
  ]
};
For creating the POM instance I use a factory, which is the same for all projects and can therefore be added by extending the base test. This is where I get the browser instance from (you could also use any other fixture, like page).
import { test as base } from '@playwright/test';

type TestOptions = {
  pomFactory: PomFactory,
  productDetailPomConstr: [typeof ProductDetailPom, Record<string, any>]
};

export const test = base.extend<TestOptions>({
  pomFactory: async ({ browser }, use) => {
    const pomFactory = new PomFactory(browser);
    await use(pomFactory);
  },
  productDetailPomConstr: [null, { option: true }],
});
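The answer never shows the PomFactory itself; a minimal sketch, assuming each POM exposes a static async create like the ProductDetailPom.create(browser, 'url') call earlier, might look like this:

import type { Browser } from '@playwright/test';

// Hypothetical factory: instantiates whatever POM class the project config
// supplies, from the [PomClass, url] tuple stored in productDetailPomConstr.
class PomFactory {
  constructor(private browser: Browser) {}

  async create<T>([pomClass, url]: [any, any]): Promise<T> {
    return pomClass.create(this.browser, url);
  }
}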
In the test I can get both fixtures, the factory and the POM construction data, and instantiate my POM with them:
test('someTest', async ({ pomFactory, productDetailPomConstr }) => {
  const pdPom = await pomFactory.create<ProductDetailPom>(productDetailPomConstr);
});
Maybe this helps someone.

Error loading appsettings.Production.json due to digest integrity issue

I'm developing a Blazor WebAssembly app with PWA enabled, with the files appsettings.json, appsettings.Development.json and appsettings.Production.json. The last one is empty because it will contain secrets that are substituted when the production environment is deployed to a Kubernetes cluster.
I'm using k8s to deploy, with a Secret resource that replaces the empty appsettings.Production.json with an encrypted file inside an nginx-based container holding the published Blazor app.
Now I'm getting an integrity error in the browser. When the application was built using docker build in a CI pipeline, the file was an empty JSON file, so the SHA recorded by the build process no longer matches the hash computed for the replaced file.
My question is: how can I replace appsettings.Production.json during deployment, much later than the build process, without having the integrity check fail on that file?
The blazor.boot.json file does not contain any SHA for the appsettings.Production.json file:
{
  "cacheBootResources": true,
  "config": [
    "appsettings.Development.json",
    "appsettings.json",
    "appsettings.Production.json"
  ],
  "debugBuild": false,
  "entryAssembly": "IrisTenantWeb",
  "icuDataMode": 0,
  "linkerEnabled": true,
  "resources": {
    "assembly": {
      "Azure.Core.dll": "sha256-rzNx\/GlDpiutVRPzugT82owXvTopmiixMar68xLA6L8=",
      // Bunch of .dlls,
      "System.Private.CoreLib.dll": "sha256-S7l+o9J9ivjCunMa+Ms\/JO\/kVaXLW8KTAjq1eRjY4EA="
    },
    "lazyAssembly": null,
    "pdb": null,
    "runtime": {
      "dotnet.timezones.blat": "sha256-SQvzbzBfueaAxSKIKE1khBH02NH2MJJaWDBav\/S5MSs=",
      "dotnet.wasm": "sha256-YXYNlLeMqRPFVpY2KSDhleLkNk35d9KvzzwwKAoiftc=",
      "icudt.dat": "sha256-m7NyeXyxM+CL04jr9ui1Z6pVfMWwhHusuz5qNZWpAwA=",
      "icudt_CJK.dat": "sha256-91bygK5voY9lG5wxP0\/uj7uH5xljF9u7iWnSldT1Z\/g=",
      "icudt_EFIGS.dat": "sha256-DPfeOLph83b2rdx40cKxIBcfVZ8abTWAFq+RBQMxGw0=",
      "icudt_no_CJK.dat": "sha256-oM7Z6aN9jHmCYqDMCBwFgFAYAGgsH1jLC\/Z6DYeVmmk=",
      "dotnet.5.0.5.js": "sha256-Dvb7uXD3+JPPqlsw2duS+FFNQDkFaxhIbSQWSnhODkM="
    },
    "satelliteResources": null
  }
}
But the service-worker-assets.js file DOES contain a SHA computed for it:
self.assetsManifest = {
  "assets": [
    {
      "hash": "sha256-EaNzjsIaBdpWGRyu2Elt6mv3X+48iD9gGaSN8xAm3ao=",
      "url": "appsettings.Development.json"
    },
    {
      "hash": "sha256-RIn54+RUdIs1IeshTgpWlNViz\/PZ\/1EctFaVPI9TTAA=",
      "url": "appsettings.json"
    },
    {
      "hash": "sha256-RIn54+RUdIs1IeshTgpWlNViz\/PZ\/1EctFaVPI9TTAA=",
      "url": "appsettings.Production.json"
    },
    {
      "hash": "sha256-OV+CP+ILUqNY7e7\/MGw1L5+Gi7EKCXEYNJVyBjbn44M=",
      "url": "css\/app.css"
    },
    // ...
  ],
  "version": "j39cUu6V"
};
NOTE: You can see that both appsettings.json and appsettings.Production.json have the same hash, because they are both the empty JSON file {}. But in production the second one gets a computed hash of YM2gjmV5... and triggers the error.
I can't have different build processes for different environments, because that would not guarantee that staging and production use the same build. I need to use the same docker image but replace the file at deployment time.
I edited the wwwroot/service-worker.published.js file, whose first lines are as follows:
// Caution! Be sure you understand the caveats before publishing an application with
// offline support. See https://aka.ms/blazor-offline-considerations
self.importScripts('./service-worker-assets.js');
self.addEventListener('install', event => event.waitUntil(onInstall(event)));
self.addEventListener('activate', event => event.waitUntil(onActivate(event)));
self.addEventListener('fetch', event => event.respondWith(onFetch(event)));

const cacheNamePrefix = 'offline-cache-';
const cacheName = `${cacheNamePrefix}${self.assetsManifest.version}`;
const offlineAssetsInclude = [ /\.dll$/, /\.pdb$/, /\.wasm/, /\.html/, /\.js$/, /\.json$/, /\.css$/, /\.woff$/, /\.png$/, /\.jpe?g$/, /\.gif$/, /\.ico$/, /\.blat$/, /\.dat$/ ];
const offlineAssetsExclude = [ /^service-worker\.js$/ ];

async function onInstall(event) {
  console.info('Service worker: Install');
  // Fetch and cache all matching items from the assets manifest
  const assetsRequests = self.assetsManifest.assets
    .filter(asset => offlineAssetsInclude.some(pattern => pattern.test(asset.url)))
    .filter(asset => !offlineAssetsExclude.some(pattern => pattern.test(asset.url)))
    .map(asset => new Request(asset.url, { integrity: asset.hash }));
  await caches.open(cacheName).then(cache => cache.addAll(assetsRequests));
}
...
I added an array of patterns, similar to offlineAssetsInclude and offlineAssetsExclude, to indicate which files should skip the integrity check:
...
const offlineAssetsInclude = [ /\.dll$/, /\.pdb$/, /\.wasm/, /\.html/, /\.js$/, /\.json$/, /\.css$/, /\.woff$/, /\.png$/, /\.jpe?g$/, /\.gif$/, /\.ico$/, /\.blat$/, /\.dat$/ ];
const offlineAssetsExclude = [ /^service-worker\.js$/ ];
const integrityExclude = [ /^appsettings\.Production\.json$/ ]; // <-- new variable
Then in onInstall, instead of always returning a Request with integrity set, I skip it for excluded patterns:
...
async function onInstall(event) {
  console.info('Service worker: Install');
  // Fetch and cache all matching items from the assets manifest
  const assetsRequests = self.assetsManifest.assets
    .filter(asset => offlineAssetsInclude.some(pattern => pattern.test(asset.url)))
    .filter(asset => !offlineAssetsExclude.some(pattern => pattern.test(asset.url)))
    .map(asset => {
      // Start of new code
      const integrity =
        integrityExclude.some(pattern => pattern.test(asset.url))
          ? null
          : asset.hash;
      return !!integrity
        ? new Request(asset.url, { integrity })
        : new Request(asset.url);
      // End of new code
    });
  await caches.open(cacheName).then(cache => cache.addAll(assetsRequests));
}
...
I'll wait for others to comment and propose other solutions, because the ideal answer would set the correct SHA hash for the file instead of ignoring it.
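For what it's worth, a deploy-time script along these lines could recompute the hash instead of skipping the check. This is only a sketch under assumed file locations, not part of the original answer:

// Hypothetical Node script, run after the Secret replaces the file:
// recompute the SHA-256 of appsettings.Production.json and patch the matching
// entry in service-worker-assets.js so the integrity check keeps passing.
import { createHash } from "crypto";
import { readFileSync, writeFileSync } from "fs";

const file = "wwwroot/appsettings.Production.json";  // assumed path
const manifest = "wwwroot/service-worker-assets.js"; // assumed path

const hash =
  "sha256-" + createHash("sha256").update(readFileSync(file)).digest("base64");

const patched = readFileSync(manifest, "utf8").replace(
  /"hash":\s*"sha256-[^"]*"(,\s*"url":\s*"appsettings\.Production\.json")/,
  `"hash": "${hash}"$1`
);
writeFileSync(manifest, patched);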

Call yeoman generator from code with options

I created a yeoman generator with user interaction that can be called in the terminal (after running npm link) like this:
yo mygenerator --name test --path /test/path --project testproject
Now I want to include this generator in my vscode extension.
How can I call the yo generator from my TypeScript code when the generator is added as a package.json dependency?
So, something like this (pseudocode):
import { yo } from 'yeoman';
import mygenerator; // added as a dependency via package.json

const options = {
  name: 'test',
  path: '/test/path',
  project: 'testproject',
};

yo.exec(mygenerator, options, () => {
  console.log('yeoman finished');
});
Is something like this possible?
Here is a solution for that:
const yeoman = require('yeoman-environment');

const env = yeoman.createEnv();
const generatorPath = '../node_modules/generator-name/generators/app/index.js';
// Look up the generator by file path instead of by namespace.
env.getByPath(generatorPath);
env.on('error', (err: any) => {
  // handle error
});

const options = {
  env,
  'option1': option1,
  'option2': option2,
};

try {
  await env.run('name', options);
} catch (err) {
  // handle error
}
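A small refinement (my suggestion, not part of the original answer): resolving the generator through Node's module resolution avoids the brittle relative node_modules path:

// Hypothetical: let Node locate the installed generator package instead of
// hard-coding a '../node_modules/...' path.
const generatorPath = require.resolve('generator-name/generators/app');
env.getByPath(generatorPath);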

Gulp build with browserify environment variable

I'm looking to include either an environment variable or a file that my modules can access for conditional flow.
// contains env build specific data
// or value 'develop' || 'production'
var env = require('config');
I know I can access the command-line arguments with yargs, which is great, but I can't seem to find a way to get those arguments into my browserify build.
var browserify = require('browserify');

var bundleStream = browserify({
  cache: {},
  packageCache: {},
  fullPaths: false,
  entries: [filename],
  extensions: config.extensions,
  debug: config.debug,
  paths: ['./node_modules', './app/js/'],
  require: ['jquery', 'lodash']
});
var bundle = function() {
  bundleLogger.start(filename);
  return bundleStream
    .bundle()
    .on('error', handleErrors)
    .pipe(source(filename.replace('-app', '-bundle')))
    .pipe(gulp.dest(process.cwd()))
    .on('end', reportFinished)
    .pipe(browserSync.reload({
      stream: true
    }));
};
You could create a config.json file dynamically and then require it in your modules:
var fs = require('fs');
var gutil = require('gulp-util');

gulp.task('create-config', function(cb) {
  fs.writeFile('config.json', JSON.stringify({
    env: gutil.env.env,
    tacos: 'delicious'
  }), cb);
});

gulp.task('browserify', ['create-config'], function() {
  //...
});
In your modules:
var config = require('./config.json');

if (config.env === 'production') {
  //...
}
And on the command line:
gulp --env=production
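Side note, not from the original answer: the same flag can be read directly with yargs, which the question already mentions:

// Hypothetical equivalent of gutil.env using yargs: parse the --env flag
// straight from the command line.
var argv = require('yargs').argv;
console.log(argv.env); // 'production' when invoked as `gulp --env=production`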
