I am trying to build a Gatsby project with staging environment variables, but it always uses the production environment variables.
I followed this tutorial: Environment Variables | Gatsby.
This is my gatsby-config.js file:
let activeEnv = process.env.ACTIVE_ENV || process.env.NODE_ENV || 'development';

console.log(`Using environment config: '${activeEnv}'`);

require("dotenv").config({
  path: `.env.${activeEnv}`,
});

module.exports = {
  plugins: [
    {
      resolve: `gatsby-plugin-sass`,
      options: {
        precision: 8,
      },
    },
  ]
};
This is the command I am using to build:
"build:staging": "set ACTIVE_ENV='staging' && gatsby build",
When I run the above command it shows Using environment config: 'staging',
but after the build it uses the production variables.
My .env files
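Going by the values the build prints below, .env.staging presumably contains something like

API_URL=https://api.company.com/api/company/test
COMPANY_URL=https://company.test.com/test/

with .env.production holding the corresponding /prod URLs.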
After running ACTIVE_ENV='staging' gatsby build I got this:
$ ACTIVE_ENV='staging' gatsby build
success delete html and css files from previous builds — 0.058 s
⠁ Using environment config: 'staging'
{ API_URL: 'https://api.company.com/api/company/test',
COMPANY_URL: 'https://company.test.com/test/' }
success open and validate gatsby-config — 0.011 s
info One or more of your plugins have changed since the last time you ran
Gatsby. As a precaution, we're deleting your site's cache to ensure there's not any stale
data
success copy gatsby files — 0.044 s
success onPreBootstrap — 0.039 s
success source and transform nodes — 0.026 s
success building schema — 0.112 s
success createLayouts — 0.007 s
success createPages — 0.001 s
success createPagesStatefully — 0.082 s
success onPreExtractQueries — 0.001 s
success update schema — 0.072 s
success extract queries from components — 0.041 s
success run graphql queries — 0.015 s
success write out page data — 0.005 s
success write out redirect data — 0.001 s
success onPostBootstrap — 0.001 s
info bootstrap finished - 3.516 s
success Building CSS — 13.139 s
success Building production JavaScript bundles — 26.757 s
⢀ Building static HTML for pages{ API_URL:
'https://api.company.com/api/company/prod',
COMPANY_URL: 'https://company.test.com/prod/',
NODE_ENV: 'production',
PUBLIC_DIR: 'D:\\website/public' }
success Building static HTML for pages — 8.390 s
info Done building in 51.808 sec
I resolved this issue using the cross-env package. Now it is working fine.
Here is the command:
"build:staging": "cross-env ACTIVE_ENV=\"staging\" gatsby build",
Now when I run npm run build:staging it builds using .env.staging.
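For reference, the relevant part of the scripts section ends up looking something like this (the plain build entry is an assumption here, shown only for contrast):

"scripts": {
  "build": "gatsby build",
  "build:staging": "cross-env ACTIVE_ENV=\"staging\" gatsby build"
}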
We are trying to get our multi-stack application deployed using the CDK Pipelines library.
We have recently disabled the publishAssetsInParallel flag, because with the default setting our pipeline would create >20 FileAsset objects under the Assets stage, which AWS then complains about as too many CodeBuild projects running in parallel.
However, with this property now disabled, I'm getting the following error for the Assets stage:
[Container] 2022/11/14 12:04:24 Phase complete: DOWNLOAD_SOURCE State: FAILED
[Container] 2022/11/14 12:04:24 Phase context status code: YAML_FILE_ERROR Message: stat /codebuild/output/src112668013/src/buildspec-c866864112c35d54804951dbe96b99440c9b891fde-FileAsset.yaml: no such file or directory
I'm assuming this is supposed to be a build spec that is created by CDK Pipelines, as we didn't need to create a build spec when things were running in parallel.
Here is the current pipeline code:
const pipeline = new CodePipeline(this, 'Pipeline', {
  publishAssetsInParallel: false,
  selfMutation: false,
  pipelineName: fullStackName('Pipeline', app),
  synth: new CodeBuildStep('SynthStep', {
    input: CodePipelineSource.codeCommit(repo, repoBranchName, {codeBuildCloneOutput: true}),
    buildEnvironment: {computeType: ComputeType.MEDIUM},
    installCommands: [
      'npm install -g yarn',
      'yarn install',
      'cd apps/cloud-app',
      'yarn install',
      'yarn global add aws-cdk'
    ],
    commands: [
      'yarn build',
      'cdk synth'
    ],
    primaryOutputDirectory: 'apps/cloud-app/cdk.out'
  })
});
UPDATE:
I reverted the publishAssetsInParallel flag to its default setting to compare, and there seems to be a fundamental difference in how the FileAsset CodeBuild projects are created depending on this flag. With it enabled, when I inspect the build details for one of the FileAsset projects, I can see that the buildspec section contains a concrete build spec, e.g.:
{
  "version": "0.2",
  "phases": {
    "install": {
      "commands": [
        "npm install -g cdk-assets@2"
      ]
    },
    "build": {
      "commands": [
        "cdk-assets --path \"MyStack.assets.json\" --verbose publish \"2357296280127ce793d8dbb13e6c907db22f5dcc57a173ba77fcd19a76d8f444:12345678910-eu-west-2\""
      ]
    }
  }
}
With the flag disabled, the buildspec simply contains a pointer to a buildspec file as below, which it then fails to find...
buildspec-c866864112c35d54804951dbe96b99440c9b891fde-FileAsset.yaml
Self-mutation has to be enabled - currently, asset updates mutate the pipeline.
Reference: https://github.com/aws/aws-cdk/issues/9080
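In terms of the pipeline above, that means removing the selfMutation: false line (it defaults to true). A minimal sketch of the change, with the synth step unchanged from the question:

const pipeline = new CodePipeline(this, 'Pipeline', {
  publishAssetsInParallel: false,
  // selfMutation is omitted so it falls back to its default of true,
  // letting the pipeline update its own asset-publishing projects
  pipelineName: fullStackName('Pipeline', app),
  synth: synthStep, // the same CodeBuildStep('SynthStep', { ... }) as above
});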
I've scoured the internet and have bits and pieces but nothing is coming together for me. I have a local Drupal environment running with Lando. I've successfully installed and configured webpack. Everything is working except when I try to watch or hot reload.
When I run lando npm run build-dev (which currently uses webpack --watch) I can see my changes compiled successfully into the correct folder. However, when I refresh my Drupal site, I do not see those changes. The only time I see my updated JS changes is when I run lando drush cr to clear the cache. The same thing happens when I try to configure webpack-dev-server. I can get everything to watch for changes and compile correctly, but I cannot get my browser to reload my files; they stay cached. I'm at a loss.
I've tried configuring a proxy in my .lando.yml, and have tried different things with the config options for devServer. I'm just not getting a concise answer, and I don't have the knowledge to understand exactly what is happening. I believe it has to do with the Docker containers not being exposed to webpack (??), but I don't understand how to configure this properly.
These are the scripts I have set up in my package.json: build outputs my production-ready files into i_screamz/js/dist, build:dev starts a watch and compiles non-minified versions to i_screamz/js/dist-dev, and start is in here from trying to get the devServer to work. I'd like to get webpack-dev-server running as I'd love to have reloading working.
"scripts": {
"start": "npm run build:dev",
"build:dev": "webpack --watch --progress --config webpack.config.js",
"build": "NODE_ENV=production webpack --progress --config webpack.config.js"
},
This is my webpack.config.js - no Sass yet; this is just a working modular JS build at this point.
const path = require("path");
const BrowserSyncPlugin = require('browser-sync-webpack-plugin');
const isDevMode = process.env.NODE_ENV !== 'production';
module.exports = {
  mode: isDevMode ? 'development' : 'production',
  devtool: isDevMode ? 'source-map' : false,
  entry: {
    main: ['./src/index.js']
  },
  output: {
    filename: isDevMode ? 'main-dev.js' : 'main.js',
    path: isDevMode ? path.resolve(__dirname, 'js/dist-dev') : path.resolve(__dirname, 'js/dist'),
    publicPath: '/web/themes/custom/[MYSITE]/js/dist-dev'
  },
  module: {
    rules: [
      {
        test: /\.js$/,
        exclude: /node_modules/,
        use: {
          loader: 'babel-loader'
        }
      }
    ]
  },
  plugins: [
    new BrowserSyncPlugin({
      proxy: {
        target: 'http://[MYSITE].lndo.site/',
        proxyReq: [
          function(proxyReq) {
            proxyReq.setHeader('Cache-Control', 'no-cache, no-store');
          }
        ]
      },
      open: false,
      https: false,
      files: [
        {
          match: ['**/*.css', '**/*.js'],
          fn: (event, file) => {
            if (event == 'change') {
              const bs = require("browser-sync").get("bs-webpack-plugin");
              if (file.split('.').pop() == 'js') {
                bs.reload();
              } else {
                bs.stream();
              }
            }
          }
        }
      ]
    }, {
      // prevent BrowserSync from reloading the page
      // and let Webpack Dev Server take care of this
      reload: false,
      injectCss: true,
      name: 'bs-webpack-plugin'
    }),
  ],
  watchOptions: {
    aggregateTimeout: 300,
    ignored: ['**/*.woff', '**/*.json', '**/*.woff2', '**/*.jpg', '**/*.png', '**/*.svg', 'node_modules'],
  }
};
And here is the config I have set up in my .lando.yml - I did have the proxy key in here but it's been removed as I couldn't get it set up right.
name: [MYSITE]
recipe: pantheon
config:
  framework: drupal8
  site: [MYPANTHEONSITE]
services:
  node:
    type: node
    build:
      - npm install
tooling:
  drush:
    service: appserver
    env:
      DRUSH_OPTIONS_URI: "http://[MYSITE].lndo.site"
  npm:
    service: node
settings.local.php
<?php
/**
* Disable CSS and JS aggregation.
*/
$config['system.performance']['css']['preprocess'] = FALSE;
$config['system.performance']['js']['preprocess'] = FALSE;
I've updated my code files above to reflect a final working setup with webpack. The main answer was a setting in
/web/sites/default/settings.local.php
**Disable CSS & JS aggregation.**
$config['system.performance']['css']['preprocess'] = FALSE;
$config['system.performance']['js']['preprocess'] = FALSE;
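One thing worth double-checking: settings.local.php only takes effect if settings.php actually includes it. The stock Drupal snippet for that (shipped commented out) looks roughly like this:

if (file_exists($app_root . '/' . $site_path . '/settings.local.php')) {
  include $app_root . '/' . $site_path . '/settings.local.php';
}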
I found a working setup from saschaeggi and just tinkered around until I found this setting. So thank you! I also found more about what this means here. This issue took me way longer than I want to admit, and it was so simple. I don't know why the 'Disabling Caching css/js aggregation' page never came up when I was furiously googling a caching issue. Hopefully this answer helps anyone else in this very edge-case predicament.
I have webpack set up within my theme root folder with my Drupal theme files. I run everything with Lando, including npm. I found a nifty trick from thinkshout to switch the dist-dev and dist libraries for development / production builds.
I should note my setup does not include hot reloading, but I can at least compile my files, refresh immediately and see my changes. The issue I was having before is that I would have to stop my watches to run drush cr, and that workflow was ridiculous. I've never gotten hot reloading to work with either BrowserSync or Webpack Dev Server, and I might try again, but I need to move on with my life at this point.
I've also not included Sass yet, so these file paths will change to include compilation and output for both .scss and .js files, but this is the basic bare-minimum setup working.
I'm looking to use Playwright to test against a web page.
The system I'm working on has 4 different environments that we need to deploy against; for example, the test URLs may be
www.test1.com
www.test2.com
www.test3.com
www.test4.com
The first question is: how do I target the different environments? In my Playwright config I had a baseURL, but I need to override that.
In addition, each environment has different login credentials; how can I create and override these as parameters per environment?
Since Playwright v1.13.0, there is a baseURL option available. You can probably utilise it in this way.
In your Playwright config file, you can have this:
import { PlaywrightTestConfig } from '@playwright/test';

const config: PlaywrightTestConfig = {
  use: {
    baseURL: process.env.URL,
  },
};

export default config;
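With baseURL set, relative paths passed to page.goto() resolve against it, so a test only needs something like this (the /login path and test name are just examples):

import { test, expect } from '@playwright/test';

test('loads the login page', async ({ page }) => {
  // '/login' resolves against the baseURL taken from process.env.URL
  await page.goto('/login');
  await expect(page).toHaveURL(/login/);
});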
Now in the package.json file, you can have the environment variables set in the test commands for the various environments in scripts, like this:
...
"scripts": {
"start": "node app.js",
"test1": "URL=www.test1.com mocha --reporter spec",
"test2": "URL=www.test2.com mocha --reporter spec",
.
.
},
...
Similarly, you can set environment variables for the login credentials and pass them in the scripts the same way the URL is passed.
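For example, credentials can be prepended to the test command the same way (USERNAME=... PASSWORD=...) and then read inside a test; a rough sketch, where the variable names and selectors are just placeholders:

import { test } from '@playwright/test';

test('logs in', async ({ page }) => {
  await page.goto('/login');
  await page.fill('#username', process.env.USERNAME ?? '');
  await page.fill('#password', process.env.PASSWORD ?? '');
});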
Another approach to this is to use a Bash script. I use something like the following to run tests across environments, to ensure that my Playwright tests will work in all the environments they're run in:
#!/bin/bash
echo "Running tests against env 1";
ENV_URL=https://www.env1.com SOMESERVICE_ENV_URL=http://www.env1.com/scholarship npx playwright test $1;
echo "Running tests against env 2"
ENV_URL=https://env2.com SOMESERVICE_ENV_URL=http://env2.com:4008 npx playwright test $1;
echo "Running tests against env 3";
ENV_URL=http://localhost:3000 SOMESERVICE_ENV_URL=http://localhost:4008 npx playwright test $1;
And then run with ./myScript.sh myTest.test.ts
(In a Bash script, the first argument passed in is available via $1.)
For unknown reasons, my Jest tests seem to hang at the end of my CI run via Travis.
The Travis logs say the following:
Test Suites: 5 passed, 5 total
Tests: 31 passed, 31 total
Snapshots: 0 total
Time: 21.993s
Jest did not exit one second after the test run has completed.
This usually means that there are asynchronous operations that weren't stopped in your tests. Consider running Jest with --detectOpenHandles to troubleshoot this issue.
Note that --detectOpenHandles does not display anything.
As you can see, my tests pass but Jest does not exit, even though I do the following:
describe('[INTEGRATION] Index.test.js', () => {
  beforeAll(async () => {
    const rootDir = resolve(__dirname, '../..');
    let config = {};
    // ... config
    nuxt = new Nuxt(config);
    new Builder(nuxt).build();
    await nuxt.listen(3000, 'localhost');
    homePage = await nuxt.renderAndGetWindow('http://localhost:3000/');
  }, 30000);

  // This is called after every suite (even during the CI process)
  afterAll(() => {
    nuxt.close();
  });

  // ... my tests
});
My tests work fine locally but only do this during the CI process via Travis.
Here is the content of my Jest config file:
module.exports = {
  verbose: true,
  moduleFileExtensions: ['js', 'vue', 'json'],
  transform: {
    '^.+\\.js$': 'babel-jest',
    '.*\\.(vue)$': 'jest-vue-preprocessor',
  },
  setupTestFrameworkScriptFile: './jest.setup.js',
  silent: true,
};
With jest.setup.js containing jest.setTimeout(30000);.
Finally, here is my .travis.yml config file:
language: 'node_js'
node_js: '8'
cache:
directories:
- 'node_modules'
before_script:
- npm run build
- npm run lint
- npm install
- npm run generate
What could be causing this problem? The timeout shouldn't be the cause, as it is needed in order to execute all my integration tests, and I close my nuxt session after my integration test suites.
Nothing major has changed between yesterday, when it worked, and today.
I think you may need to make sure you wait for the nuxt.close() call to resolve. It looks like close is asynchronous and returns a promise that resolves when the connection is actually closed, so Jest probably finishes up before that asynchronous operation completes. And it's really a timing issue, so the CI machine may run things slightly differently, causing the close call to take longer than it does on your local machine.
Try changing your afterAll to something like this:
afterAll(async () => {
await nuxt.close();
});
I'm configuring Karma to work with Jenkins CI as described here.
My junitReporter.outputFile, test-results.xml, is always empty.
Per the docs (linked above): "Please note the test-results.xml files will be written to subdirectories named after the browsers the tests were run in inside the present working directory (and you will need to tell Jenkins where to find them)."
I'm using PhantomJS to run my tests. I do not see any subdirectories named after PhantomJS.
Any ideas?
I ended up adding karma-junit-reporter to plugins in my karma.conf.js file and everything started working, like so:
// Which plugins to enable
plugins: [
"karma-phantomjs-launcher",
"karma-jasmine",
"karma-junit-reporter"
],
// Continuous Integration mode
// if true, it capture browsers, run tests and exit
singleRun: true,
reporters: ['progress', 'junit'],
// the default configuration
junitReporter: {
outputDir: 'test', // results will be saved as $outputDir/$browserName.xml
outputFile: 'test-results.xml', // if included, results will be saved as $outputDir/$browserName/$outputFile
suite: '', // suite will become the package name attribute in xml testsuite element
useBrowserName: true // add browser name to report and classes names
},