Nuxt 3 environment variables for static site [duplicate] - environment-variables

I have a .env file in the project root, and in my Nuxt config I use variables to configure reCAPTCHA like this:
import dotenv from 'dotenv'
dotenv.config()

export default {
  modules: [
    ['@nuxtjs/recaptcha', {
      siteKey: process.env.RECAPTCHA_SITE_KEY,
      version: 3,
      size: 'compact'
    }],
  ]
}
and in .env like this:
RECAPTCHA_SITE_KEY=6L....
but the application always fails with this console error:
ReCaptcha error: No key provided
When I hard-code the reCAPTCHA key directly, like siteKey: '6L....', the app starts working, so I guess the problem is with reading .env values in nuxt.config.
Do you have any idea how to fix it?
EDIT:
I tried updating my nuxt.config following @kissu's recommendation and the example I found here: https://www.npmjs.com/package/@nuxtjs/recaptcha
Here is the new nuxt.config, which is also not working:
export default {
  modules: [
    '@nuxtjs/recaptcha',
  ],

  publicRuntimeConfig: {
    recaptcha: {
      siteKey: process.env.RECAPTCHA_SITE_KEY,
      version: 3,
      size: 'compact'
    }
  }
}

If your Nuxt version is 2.13 or above, you don't need to use @nuxtjs/dotenv or anything alike, because it is already baked into the framework.
To use some variables, you need to have a .env file at the root of your project. This one should be ignored by git. You can then input some keys there like
PUBLIC_VARIABLE="https://my-cool-website.com"
PRIVATE_TOKEN="1234qwer"
In your nuxt.config.js, you have to input those into 2 objects, depending on your use case, either publicRuntimeConfig or privateRuntimeConfig:
export default {
  publicRuntimeConfig: {
    myPublicVariable: process.env.PUBLIC_VARIABLE,
  },
  privateRuntimeConfig: {
    myPrivateToken: process.env.PRIVATE_TOKEN
  }
}
Differences: publicRuntimeConfig can basically be used anywhere, while privateRuntimeConfig can only be used during SSR (a key can only stay private if not shipped to the browser).
A popular use case for the privateRuntimeConfig is to use it for nuxtServerInit or during the build process (either yarn build or yarn generate) to populate the app with headless CMS' API calls.
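A minimal sketch of that nuxtServerInit pattern, assuming @nuxtjs/axios is installed (as in the plugin example below); the CMS URL and the setPosts mutation are made up for illustration:
// store/index.js
export const actions = {
  async nuxtServerInit({ commit }, { $axios, $config }) {
    // privateRuntimeConfig values are available on $config here because this only runs during SSR
    const posts = await $axios.$get('https://cms.example.com/posts', {
      headers: { Authorization: `Bearer ${$config.myPrivateToken}` }
    })
    commit('setPosts', posts)
  }
}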
More info can be found on this blog post: https://nuxtjs.org/blog/moving-from-nuxtjs-dotenv-to-runtime-config/
Then, you will be able to access it in any .vue file directly with
this.$config.myPublicVariable
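For example, a minimal component sketch (the page name is arbitrary):
// pages/index.vue, script section — uses the public runtime config defined above
export default {
  mounted() {
    // $config is injected by Nuxt, so it is available on `this` anywhere in the component
    console.log(this.$config.myPublicVariable)
  }
}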
You can access it in Nuxt's /plugins too, with this syntax:
export default ({ $axios, $config: { myPublicVariable } }) => {
  $axios.defaults.baseURL = myPublicVariable
}
If you need this variable for a Nuxt module or in any key in your nuxt.config.js file, write it directly with
process.env.PRIVATE_TOKEN
Sometimes the syntax may differ a bit; in that case, refer to your Nuxt module's documentation.
// for @nuxtjs/gtm
publicRuntimeConfig: {
  gtm: {
    id: process.env.GOOGLE_TAG_MANAGER_ID
  }
},
PS: if you do use target: server (default value), you can yarn build and yarn start to deploy your app to production. Then, change any environment variables that you'd like and yarn start again. There will be no need for a rebuild. Hence the name RuntimeConfig!
Nuxt3 update
As mentioned here and in the docs, you can use the following for Nuxt3
nuxt.config.js
import { defineNuxtConfig } from 'nuxt3'

export default defineNuxtConfig({
  runtimeConfig: {
    public: {
      secret: process.env.SECRET,
    }
  }
})
In any component
<script setup lang="ts">
const config = useRuntimeConfig()
config.secret
</script>
In a composable like /composables/test.js as shown in this comment
export default () => {
  const config = useRuntimeConfig()
  console.log(config.secret)
}
Here is the official doc for that part.

You can also use the env property with Nuxt
nuxt.config.js:
export default {
  // Environment variables
  env: {
    myVariable: process.env.NUXT_ENV_MY_VAR
  },
  ...
}
Then in your plugin:
const myVar = process.env.myVariable
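For example, a minimal plugin sketch (the file name and the injected $myVariable helper are hypothetical):
// plugins/my-variable.js
export default ({ app }, inject) => {
  // process.env.myVariable is replaced statically at build time via the env property above
  inject('myVariable', process.env.myVariable)
}
It is then available as this.$myVariable in components and as $myVariable in the context.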

It's very easy. Here is an example with axios/Nuxt.
Define your variable in the .env file:
baseUrl=http://localhost:1337
Add the variable to nuxt.config.js in an env object (and use it in the axios config):
export default {
  env: {
    baseUrl: process.env.baseUrl
  },
  axios: {
    baseURL: process.env.baseUrl
  },
}
Use the env variable in any file like so:
console.log(process.env.baseUrl)
Note that console.log(process.env) will output {} but console.log(process.env.baseUrl) will still output your value!
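That happens because the bundler inlines the values at build time: the full expression process.env.baseUrl is replaced with its literal value, while process.env itself is not populated. Roughly what ends up in the bundle (illustrative):
// what you write
console.log(process.env.baseUrl)
console.log(process.env)

// roughly what the bundler emits
console.log("http://localhost:1337")
console.log({})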

For Nuxt 3 RC 11, in the nuxt.config.ts file:
export default defineNuxtConfig({
  runtimeConfig: {
    public: {
      locale: {
        defaultLocale: process.env.NUXT_I18N_LOCALE,
        fallbackLocale: process.env.NUXT_I18N_FALLBACK_LOCALE,
      }
    }
  },
...
and in .env file:
NUXT_I18N_LOCALE=tr
NUXT_I18N_FALLBACK_LOCALE=en
The public: key is very important; without it, the value cannot be read on the client and you get an undefined error.
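Reading it back in a component or composable then looks roughly like this (a minimal sketch):
// e.g. inside <script setup> — reads the public config defined above
const config = useRuntimeConfig()
const defaultLocale = config.public.locale.defaultLocale // 'tr' with the .env above
const fallbackLocale = config.public.locale.fallbackLocale // 'en'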

For v3 there is a precise description in the official docs
You define a runtimeConfig entry in your nuxt.config.[ts,js] which works as the initial/default value:
export default defineNuxtConfig({
  runtimeConfig: {
    recaptchaSiteKey: 'default value' // This key is "private" and will only be available server-side
  }
})
You can also use env vars to initialize the runtimeConfig, but the value is written statically at build time.
But you can override the value at runtime by using the following env var:
NUXT_RECAPTCHA_SITE_KEY=SOMETHING DYNAMIC
If you need to use the config on client-side, you need to use the public property.
export default defineNuxtConfig({
  runtimeConfig: {
    public: {
      recaptchaSiteKey: 'default value' // will also be exposed to the client-side
    }
  }
})
Notice the PUBLIC part in the env var:
NUXT_PUBLIC_RECAPTCHA_SITE_KEY=SOMETHING DYNAMIC
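Accessing the values then looks roughly like this (a sketch; where exactly you call useRuntimeConfig is up to you):
// works on both client and server: public values live under `public`
const config = useRuntimeConfig()
console.log(config.public.recaptchaSiteKey)

// server-side only (e.g. a server route or Nitro plugin): top-level keys such as the
// private `recaptchaSiteKey` from the first snippet are also present on `config` there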

This is very strange, because we can't access process.env in Nuxt 3.
In Nuxt 3, we are encouraged to use the runtime config, but this is not always convenient, because the Nuxt application context is required.
In a situation where we have some plain library and we don't want to wrap it in a plugin or a composable, declaring global variables through Vite/webpack works best:
// nuxt.config.ts
export default defineNuxtConfig({
  vite: {
    define: {
      MY_API_URL: JSON.stringify(process.env.MY_API_URL)
    }
  }
})
And then you can use it in any file without jumping through hoops:
// some-file.ts
console.log('global var:', MY_API_URL) // replaced by Vite/webpack with the real value at build time
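If you are on TypeScript, you will likely also want an ambient declaration so the compiler knows about the global (a sketch; the file name is arbitrary):
// types/globals.d.ts — tells TypeScript about the Vite-defined constant
declare const MY_API_URL: string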

Related

How do I get AWS CDK stack env values for bootstrapping an environment?

AWS and other sources consider explicitly specifying the AWS account and region for each stack a best practice. I'm trying to write a CI pipeline that will bootstrap my environments. However, I'm not seeing any straightforward way to retrieve the stack's explicit env values from here:
regions.forEach((region) =>
  new DbUpdateStack(app, `${stackBaseName}-prd-${region}`, {
    env: {
      account: prdAccount,
      region: region
    },
    environment_instance: 'prd',
    vpc_id: undefined,
  })
);
E.g., base-name-prd-us-east-1 knows the region and account as defined in the code, but how do I access this from the command line without doing something hacky?
I need to run cdk bootstrap with those values and I don't want to duplicate them.
The Cloud Assembly module can introspect an App's stack environments. Synth the app, then instantiate a CloudAssembly class by pointing at the cdk output directory:
import * as cx_api from '@aws-cdk/cx-api';

(() => {
  const cloudAssembly = new cx_api.CloudAssembly('cdk.out');
  const appEnvironments = cloudAssembly.stacks.map(stack => stack.environment);
  console.log(appEnvironments);
})();
Result:
[
  {
    account: '123456789012',
    region: 'us-east-1',
    name: 'aws://123456789012/us-east-1',
  },
];
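You could then feed those environments into cdk bootstrap from the same script, for example (a sketch building on the snippet above; the output format is just one option):
// prints one `cdk bootstrap aws://ACCOUNT/REGION` line per unique stack environment
const targets = [...new Set(appEnvironments.map(env => env.name))];
targets.forEach(name => console.log(`cdk bootstrap ${name}`));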

CDK not updating

Running cdk deploy after updating my Stack:
export function createTaskXXXX (stackScope: Construct, workflowContext: WorkflowContext) {
  const lambdaXXXX = new lambda.Function(stackScope, 'XXXXFunction', {
    runtime: Globals.LAMBDA_RUNTIME,
    memorySize: Globals.LAMBDA_MEMORY_MAX,
    code: lambda.Code.fromAsset(CDK_MODULE_ASSETS_PATH),
    handler: 'xxxx-handler.handler',
    timeout: Duration.minutes(Globals.LAMBDA_DURATION_2MIN),
    environment: {
      YYYY_ENV: (workflowContext.production) ? 'prod' : 'test',
      YYYY_A_LOCATION: `s3://${workflowContext.S3ImportDataBucket}/adata-workflow/split-input/`,
      YYYY_B_LOCATION: `s3://${workflowContext.S3ImportDataBucket}/bdata-workflow/split-input/` // <--- added
    }
  })

  lambdaXXXX.addToRolePolicy(new iam.PolicyStatement({
    effect: Effect.ALLOW,
    actions: ['s3:PutObject'],
    resources: [
      `arn:aws:s3:::${workflowContext.S3ImportDataBucket}/adata-workflow/split-input/*`,
      `arn:aws:s3:::${workflowContext.S3ImportDataBucket}/bdata-workflow/split-input/*` // <---- added
    ]
  }))
}
I notice that those changes are not reflected in stack.template.json:
...
  "Runtime": "nodejs12.x",
  "Environment": {
    "Variables": {
      "YYYY_ENV": "test",
      "YYYY_A_LOCATION": "s3://.../adata-workflow/split-input/"
    }
  },
  "MemorySize": 3008,
  "Timeout": 120
}
...
I have cleaned cdk.out and tried deploy --force, but I never see any updates.
Is deleting the stack and redeploying the only remaining alternative, or am I missing something? I think at least synth should generate different results.
(I also changed to CDK 1.65.0 on my local system to match the package.json.)
Thanks.
EDITED: I cloned the project again, ran npm install and cdk synth, and finally saw the changes. I would like not to have to do this every time; any idea what could be blocking the correct synth generation?
EDITED 2: After a diff between the bad old project and the fresh clone where synth worked, I realized that some of my .ts project files (for example cdk.ts, my App definition) also had compiled replicas with .js and .d.ts extensions, such as cdk.js and cdk.d.ts. Could I have run some command by mistake that compiled the TypeScript? I will continue to investigate. Thanks for all the answers.
Because CDK uses CloudFormation, it computes a ChangeSet to determine what to update. This is to say, if it doesn't think anything has changed, it won't touch that resource.
This can, of course, be very annoying, as sometimes it thinks nothing has changed and doesn't update when there actually is a change. I find this most often with Layers, when using some form of makefile to generate the zips for the layers: even though it makes a 'new' zip, whatever compression/hash/etc. is used to decide whether the zip changed still reports it as the same.
You can get around this by putting a datetime in the resource's description. It's assigned at synth (which is part of cdk deploy), so using the current timestamp forces a change on every deployment.
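For example, a minimal sketch of that trick on a Lambda (construct and asset names are illustrative):
// embedding a timestamp in the description forces CloudFormation to see a diff on every deploy
const deployedAt = new Date().toISOString();

const fn = new lambda.Function(this, 'MyFunction', {
  runtime: lambda.Runtime.NODEJS_12_X,
  handler: 'index.handler',
  code: lambda.Code.fromAsset('lambda'),
  description: `MyFunction deployed at ${deployedAt}`, // changes at every synth
});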
You can also use cdk diff to see what it thinks the changes are.
And finally... always remember to save your file before deployments as, depending on your IDE, it may not be available to the command line ;)
Looking at the code I would expect it to update, and I don't know why it doesn't.
As a workaround, you can comment out the Lambda part and deploy once, then uncomment it and deploy again, which recreates the Lambda.
This is how I do it. Works nicely so far. Basically you can do the following:
Push your lambda code as a zip file to an s3 bucket. The bucket must have versioning enabled.
The CDK code below will do the following:
Create a custom resource. It basically calls s3.listObjectVersions for my lambda zip file in S3. I grab the first returned value, which seems to be the most recent object version all the time (I cannot confirm this with the documentation though). I also create a role for the custom resource.
Create the lambda and specify the code as the zip file in s3 AND THE OBJECT VERSION RETURNED BY THE CUSTOM RESOURCE! That is the most important part.
Create a new lambda version.
Then the lambda's code updates when you deploy the CDK stack!
const versionIdKey = 'Versions.0.VersionId';
const isLatestKey = 'Versions.0.IsLatest'
const now = new Date().toISOString();

const role = new Role(this, 'custom-resource-role', {
  assumedBy: new ServicePrincipal('lambda.amazonaws.com'),
});
role.addManagedPolicy(ManagedPolicy.fromAwsManagedPolicyName('AdministratorAccess')); // you can make this more specific

// I'm not 100% sure this gives you the most recent first, but it seems to be doing that every time for me. I can't find anything in the docs about it...
const awsSdkCall: AwsSdkCall = {
  action: "listObjectVersions",
  parameters: {
    Bucket: buildOutputBucket.bucketName, // s3 bucket with zip file containing lambda code.
    MaxKeys: 1,
    Prefix: LAMBDA_S3_KEY, // S3 key of zip file containing lambda code
  },
  physicalResourceId: PhysicalResourceId.of(buildOutputBucket.bucketName),
  region: 'us-east-1', // or whatever region
  service: "S3",
  outputPaths: [versionIdKey, isLatestKey]
};

const customResourceName = 'get-object-version'
const customResourceId = `${customResourceName}-${now}` // not sure if `now` is necessary...
const response = new AwsCustomResource(this, customResourceId, {
  functionName: customResourceName,
  installLatestAwsSdk: true,
  onCreate: awsSdkCall,
  onUpdate: awsSdkCall,
  policy: AwsCustomResourcePolicy.fromSdkCalls({resources: AwsCustomResourcePolicy.ANY_RESOURCE}), // you can make this more specific
  resourceType: "Custom::ListObjectVersions",
  role: role
})

const fn = new Function(this, 'my-lambda', {
  functionName: 'my-lambda',
  description: `${response.getResponseField(versionIdKey)}-${now}`,
  runtime: Runtime.NODEJS_14_X,
  memorySize: 1024,
  timeout: Duration.seconds(5),
  handler: 'index.handler',
  code: Code.fromBucket(buildOutputBucket, LAMBDA_S3_KEY, response.getResponseField(versionIdKey)), // This is where the magic happens. You tell CDK to use a specific S3 object version when updating the lambda.
  currentVersionOptions: {
    removalPolicy: RemovalPolicy.DESTROY,
  },
});

new Version(this, `version-${now}`, { // not sure if `now` is necessary...
  lambda: fn,
  removalPolicy: RemovalPolicy.DESTROY
})
Do note:
For this to work, you have to upload your lambda zip code to S3 before each cdk deploy. This can be the same code as before, but the S3 bucket versioning will create a new version. I use CodePipeline to do this as part of additional automation.

Set environment = 'staging' in Ember deployed with Capistrano 3

I've got a staging environment where I'd like to set a custom set of variables for deploying my Ember.js app and I'm drawing a blank on how to do it correctly. I am using the ember-cli-rails gem. According to the documentation for that:
EMBER_ENV: If set on the environment, the value of EMBER_ENV will be
passed to the ember process as the value of the --environment flag.
I'm just drawing a blank on how to set it on the "environment".
/project/frontend/config/environment.js
if (environment === 'test') {
  // Testem prefers this...
  ENV.baseURL = '/';
  ENV.locationType = 'none';

  // keep test console output quieter
  ENV.APP.LOG_ACTIVE_GENERATION = false;
  ENV.APP.LOG_VIEW_LOOKUPS = false;
  ENV.APP.rootElement = '#ember-testing';
}

if (environment === 'staging') {
  ENV.apiHost = '/app-data';
  ENV.contentSecurityPolicy = contentSecurityPolicy;
  ENV.torii = {
    providers: {
      'my-custom-oauth2': {
        apiKey: '1234123412341234123412341234',
        baseUrl: 'http://www.server.com:9108/oauth/authorize'
      }
    }
  };
}

if (environment === 'production') {
  ENV.apiHost = '/app-data';
  ENV.contentSecurityPolicy = contentSecurityPolicy;
Things I've tried so far:
Setting export EMBER_ENV='staging' in my deployer user's .profile
Setting set :default_env, { 'EMBER_ENV' => 'staging' } in my /config/deploy/staging.rb file.
Unfortunately, Ember CLI only supports test, development and production as valid environments.
The ENV object contains three important keys:
EmberENV can be used to define Ember feature flags (see the Feature
Flags guide).
APP can be used to pass flags/options to your
application instance.
environment contains the name of the current environment (development, production or test).
There are several unofficial solutions to this out there; some of them use ember-cli-deploy and change the settings based on the deployTarget value. Others use dotenv to manage each environment's settings.
We're dealing with this at the moment and couldn't find a solution that feels right.

Detecting environment with Meteor.js?

Anybody figure out syntax or a pattern yet for detecting hosting environment using Meteor.js? I've got Heroku buildpacks working, and have a dev/production environment, but I'm kind of drawing a blank on how to have my app detect which environment it's running in.
Is there a way to have node.js detect which port it's running on? I was hoping there might be something low-level like app.address().port, but that doesn't seem to work...
Edit: This is the solution that worked for me. Note that the following needs to be run on the server, so it needs to be included in server/server.js, or a similar file.
if (Meteor.is_server) {
  Meteor.startup(function () {
    // we want to be able to inspect the root_url, so we know which environment we're in
    console.log(JSON.stringify(process.env.ROOT_URL));

    // in case we want to inspect other process environment variables
    //console.log(JSON.stringify(process.env));
  });
}
Also created the following:
Meteor.methods({
  getEnvironment: function () {
    if (process.env.ROOT_URL == "http://localhost:3000") {
      return "development";
    } else {
      return "staging";
    }
  }
});
Which allows for the following on client side:
Meteor.call("getEnvironment", function (result) {
console.log("Your application is running in the " + result + "environment.");
});
Thanks Rahul!
You can inspect the process.env variable on the server to find information about the current environment, including the port:
{ TERM_PROGRAM: 'Apple_Terminal',
TERM: 'xterm-256color',
SHELL: '/bin/bash',
TMPDIR: '/var/folders/y_/212wz0cx5vs20yd7y2psnh7m0000gp/T/',
Apple_PubSub_Socket_Render: '/tmp/launch-hch25f/Render',
TERM_PROGRAM_VERSION: '309',
OLDPWD: '/usr/local/meteor/bin',
TERM_SESSION_ID: '3FE307A0-B8FC-41AD-B1EB-FCFA0B8B25D1',
USER: 'Rahul',
COMMAND_MODE: 'unix2003',
SSH_AUTH_SOCK: '/tmp/launch-gFCBXS/Listeners',
__CF_USER_TEXT_ENCODING: '0x1F6:0:0',
Apple_Ubiquity_Message: '/tmp/launch-QAWKHL/Apple_Ubiquity_Message',
PATH: '/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin',
PWD: '/Users/Rahul/Documents/Sites/test',
NODE_PATH: '/usr/local/meteor/lib/node_modules',
SHLVL: '1',
HOME: '/Users/Rahul',
LOGNAME: 'Rahul',
LC_CTYPE: 'UTF-8',
SECURITYSESSIONID: '186a4',
PORT: '3001',
MONGO_URL: 'mongodb://127.0.0.1:3002/meteor',
ROOT_URL: 'http://localhost:3000' }
There is a direct Meteor function:
Meteor.isDevelopment
see: https://docs.meteor.com/api/core.html#Meteor-isDevelopment
and for production:
Meteor.isProduction
both return a boolean
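A minimal usage sketch:
// both flags are plain booleans and work on the client and the server
if (Meteor.isDevelopment) {
  console.log('running in development');
} else if (Meteor.isProduction) {
  console.log('running in production');
}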
I used a variation of the above with the NODE_ENV variable. See here for more info:
http://meteorpedia.com/read/Environment_Variables#Checking%20the%20value%20of%20an%20Environment%20Variable
if Meteor.isServer
  Meteor.methods
    'getEnvironment': -> process.env.NODE_ENV

Meteor.call 'getEnvironment', (err, result) ->
  if result == 'development'
    console.log('In dev env')

Windows/Linux specific filesystem properties in Grails

I want to add Linux-based or Windows-based system properties in Grails, as my app needs to run on both. I know that we can add grails.config.locations locations specified in Config.groovy.
But I need an if/else condition for the right file to be picked.
The problem is Config.groovy has userHome, grailsHome, appName and appVersion; I would need something like osName.
Either I can go ahead with system properties, or, if somebody can tell me how these (and only these) properties are available in Config.groovy (through DefaultGrailsApplication or otherwise), that would be great.
Also, somewhat more elegant would be to make my service a user-defined Spring bean where I need those properties. Would that be a right and feasible approach? If yes, an example would help.
You could do something like this in your Config.groovy:
environments {
  development {
    if (System.properties["os.name"] == "Linux") {
      grails.config.locations = [ "file:$basedir/grails-app/conf/linux.properties" ]
    } else {
      grails.config.locations = [ "file:$basedir/grails-app/conf/windows.properties" ]
    }
  }
  ...
}
Alternatively, for a service-based approach, you could bundle up all the OS-specific behavior into implementations of a service interface. For example:
// OsPrinterService.groovy
interface OsPrinterService {
  void printOs();
}

// LinuxOsPrinterService.groovy
class LinuxOsPrinterService implements OsPrinterService {
  void printOs() { println "Linux" }
}

// WindowsOsPrinterService.groovy
class WindowsOsPrinterService implements OsPrinterService {
  void printOs() { println "Windows" }
}
Then instantiate the correct one in grails-app/conf/spring/resources.groovy like so:
beans = {
  if (System.properties["os.name"] == "Linux") {
    osPrinterService(LinuxOsPrinterService) {}
  } else {
    osPrinterService(WindowsOsPrinterService) {}
  }
}
Then the correct service will be automatically injected into your objects by Spring.
Create a custom environment for both Windows and Linux. Something like the following should work if placed in Config.groovy:
environments {
  productionWindows {
    filePath = "c:\\path"
  }
  productionLinux {
    filePath = "/var/dir"
  }
}
You should then be able to use the Grails config object to get the value of filePath regardless of whether you're on Windows or Linux. For more details on this, see section 3.2 of
http://www.grails.org/doc/1.0.x/guide/3.%20Configuration.html. If you wanted to create a war file to run on Linux, you would execute the following command:
grails -Dgrails.env=productionLinux war
And then, to get a file path you stored in Config.groovy for the specific environment you're running in:
def fileToOpen=Conf.config.filePath
fileToOpen will contain the value you assigned to filePath in your Config.groovy based on the environment you're currently running as, so when running with productionLinux as the environment it will contain the value /var/dir.
