How to externalize and consume environment variables from a Vue App:
Created with VueCLI3
Deployed in a docker container
Using NGINX
Some details:
The project is built once and deployed to test and live environments, so I want to externalize some variables that change across environments (URLs to call, domains, usernames, etc.). The classic approach of .env file variants with VUE_APP_-prefixed variables does not help here, because their values are injected into the code during the build stage: they are no longer variables once the app is built.
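To illustrate the problem, here is roughly what happens to a hypothetical VUE_APP_API_URL during a Vue CLI build (names and values made up):

// In the source, before the build:
const apiUrl = process.env.VUE_APP_API_URL

// In the built bundle, the whole expression has been replaced by a
// string literal, so it can no longer vary per environment:
const apiUrl = "https://test.example.com/api"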
While trying things out, I found a blog post that uses dotenv and some extra configuration, but I could not reconcile it with the configuration in the official Vue CLI 3 guide. The solution does not need to take a similar approach, though; I am just trying to find a way out.
Probably not useful information, but I am planning to define those environment variables in ConfigMaps in my Kubernetes configuration.
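For context, a rough sketch of what that could look like on the Kubernetes side (all names and values are hypothetical):

apiVersion: v1
kind: ConfigMap
metadata:
  name: vue-app-config
data:
  VUE_APP_KEY_1: "https://test.example.com/api"
  VUE_APP_KEY_2: "some-domain"

# and in the Pod/Deployment spec, exposed as environment variables:
containers:
  - name: vue-app
    image: my-vue-app:latest
    envFrom:
      - configMapRef:
          name: vue-app-config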
I think I've managed to overcome this. I am leaving the resolution here.
Define your environment-specific environment variables in .env.development (for development purposes) and also add them to the Pod configuration with corresponding values.
Add a configuration.js file somewhere in your Vue project's source folder. It acts as a wrapper that determines whether the runtime is development (local) or production (container). It is like the one shown here, but importing/configuring dotenv is not required:
export default class Configuration {
  static get EnvConfig () {
    return {
      // The placeholders must carry the VUE_APP_ prefix so that envsubst
      // (production) and the development check below both recognize them
      envKey1: '$VUE_APP_KEY_1',
      envKey2: '$VUE_APP_KEY_2'
    }
  }

  static value (key) {
    // If the key does not exist in the EnvConfig object of the class, return
    if (!this.EnvConfig.hasOwnProperty(key)) {
      console.error(`Configuration: There is no key named "${key}". Please add it to the Configuration class.`)
      return
    }

    // Get the value
    const value = this.EnvConfig[key]

    // If the value is empty, return
    if (!value) {
      console.error(`Configuration: Value for "${key}" is not defined`)
      return
    }

    if (!value.startsWith('$VUE_APP_')) {
      // The placeholder was already replaced: it seems we are in production (containerized)
      return value
    }

    // The placeholder was not replaced: it seems we are in development
    const envName = value.substring(1) // Remove $ and read the current value from process.env
    const envValue = process.env[envName]
    if (!envValue) {
      console.error(`Configuration: Environment variable "${envName}" is not defined`)
      return
    }

    return envValue
  }
}
Create an entrypoint.sh. With some modification, it would look as follows:
#!/bin/bash
function join_by { local IFS="$1"; shift; echo "$*"; }

# Find the Vue env vars
vars=$(env | grep VUE_APP_ | awk -F = '{print "$"$1}')
vars=$(join_by ',' $vars)
echo "Found variables: $vars"

for file in /app/js/app.*; do
  echo "Processing $file ..."
  # Use the existing JS file as a template
  cp $file $file.tmpl
  envsubst "$vars" < $file.tmpl > $file
  rm $file.tmpl
done

nginx -g 'daemon off;'
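To make what envsubst does here concrete, a standalone sketch (variable name and value made up):

$ export VUE_APP_KEY_1="https://test.example.com/api"
$ echo 'var url = "$VUE_APP_KEY_1"' | envsubst '$VUE_APP_KEY_1'
var url = "https://test.example.com/api"

Only the variables listed in the shell-format argument are substituted, which is why the script collects the VUE_APP_ names first.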
In your Dockerfile, add a CMD that runs the entrypoint.sh script above as a bootstrapping script during container startup. That way, every time you start a container, it reads the environment variables from the Pod configuration and injects them into the Configuration class shown in step 2.
# build stage
FROM node:lts-alpine as build-stage
# make the 'app' folder the current working directory
WORKDIR /app
# Copy package*.json and install dependencies in a separate step to enable caching
COPY package*.json ./
RUN npm install
# copy project files and folders to the current working directory
COPY ./ .
# build app for production with minification
RUN npm run build
# Production stage
FROM nginx as production-stage
RUN mkdir /app
# copy 'dist' content from the previous stage i.e. build
COPY --from=build-stage /app/dist /app
# copy nginx configuration
COPY nginx.conf /etc/nginx/nginx.conf
# Copy the bootstrapping script that injects environment-specific values at container startup
COPY entrypoint.sh entrypoint.sh
# Make the file executable
RUN chmod +x ./entrypoint.sh
CMD ["./entrypoint.sh"]
Finally, instead of process.env, use the wrapper Configuration class, e.g. Configuration.value('envKey1'). And voilà!
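For example, a component that previously read process.env directly would now do something like this (the import path depends on where you put configuration.js):

import Configuration from '@/configuration'

// Resolved from process.env in development, and from the
// envsubst-injected literal in the container
const key1 = Configuration.value('envKey1')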
Related
I have a NextJS application that accesses a database from the API.
When in development, I have a .env file that contains the host, port, username, password, and database. After running npm run dev, the API functionality works as expected. Even if I run npm run build and npm run start on my local machine, it works correctly.
The problem comes after I push to GitHub and have GitHub build the app and deploy it as a Docker container. For some reason, the Docker build does not pick up my environment variables loaded through an attached .env file.
To further elaborate: the environment variables are either on the dev machine or attached to the Docker container on the production server. They are mounted to the same place (the root directory of the project: /my-site/.env), but the production API does not work.
The environment variables are included in /lib/db.tsx:
import mysql from "mysql2/promise";

const executeQuery = async (query: string) => {
  try {
    const db = await mysql.createConnection({
      host: process.env.MYSQL_HOST,
      database: process.env.MYSQL_DATABASE,
      user: process.env.MYSQL_USER,
      password: process.env.MYSQL_PASSWORD,
    });
    const [results] = await db.execute(query, []);
    db.end();
    return results;
  } catch (error) {
    return { error };
  }
};

export default executeQuery;
This file is included in the API endpoints as:
import executeQuery from "../../../../lib/db";
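For reference, a typical API route using it would look roughly like this (route and query are made up); note that API routes run on the server at request time, so process.env is read at runtime here, not at build time:

import executeQuery from "../../../../lib/db";

export default async function handler(req, res) {
  // Query runs per request, with whatever env the server process has
  const results = await executeQuery("SELECT 1");
  res.status(200).json(results);
}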
Again, since it works on the development computer, I think the issue is with the building of the Docker container.
Here is my included Dockerfile:
FROM node:lts as dependencies
WORKDIR /my-site
COPY package.json yarn.lock ./
RUN yarn install --frozen-lockfile
FROM node:lts as builder
WORKDIR /my-site
COPY . .
COPY --from=dependencies /my-site/node_modules ./node_modules
RUN yarn build
FROM node:lts as runner
WORKDIR /my-site
ENV NODE_ENV production
# If you are using a custom next.config.js file, uncomment this line.
# COPY --from=builder /my-site/next.config.js ./
COPY --from=builder /my-site/public ./public
COPY --from=builder /my-site/.next ./.next
COPY --from=builder /my-site/node_modules ./node_modules
COPY --from=builder /my-site/package.json ./package.json
EXPOSE 3000
CMD ["yarn", "start"]
Any and all assistance is greatly appreciated!
Edit: Other things I have attempted:
Adding them as environment variables to the Docker container in the docker-compose file (under environment), and verifying that they are accessible inside the container using echo $MYSQL_USER.
Mounting the .env file inside of the .next folder
Mounting the .env file inside of the .next/server folder
I ended up solving my own issue after hours of trying to figure it out.
My solution was to create a .env.production file and commit it to git.
I also adjusted my Dockerfile to include: COPY --from=builder /my-site/.env.production ./
I am not a fan of that solution, as it involves pushing secrets to a repo, but it works.
Having read the Quasar framework's description for Handling process.env, I understand that it is possible to add environment variables when building the application for development or production.
You can even go one step further. Supply it with values taken from the
quasar dev/build env variables:
// quasar.config.js
build: {
  env: {
    FOO: process.env.FOO,
  }
}
Then, I can use that variable by using process.env.FOO.
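For example, anywhere in the app's source (the value is whatever FOO was when quasar dev/build ran):

// Logs the build-time value of FOO; changing the variable at runtime has no effect
console.log(process.env.FOO)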
For staging and production, however, I'm building a Docker image which runs an NGINX serving the final dist/spa folder. I'd like to pass an environment variable when deploying the application, so that I can configure the FOO variable depending on its value in the docker-compose.yml:
# staging
services:
  image: my-quasar-image
  environment:
    FOO: "STAGING"

# production
services:
  image: my-quasar-image
  environment:
    FOO: "PROD"
I have found a blog post which mentions that you could create a custom entrypoint.sh for the Docker image that reads env variables and adds them to the window object, but I wonder if there might be a more "elegant" solution.
The primary question is: is it possible to pass in (Docker) environment variables before the application starts and have them available on process.env?
This is how I sorted out my requirement; it works perfectly for my use case.
A quick review of what I wanted to do: be able to pass environment variables via a docker-compose file to a Vue.js application, so that different team members can test against different development APIs depending on their assignment (localhost if running the server locally, api-dev, api-staging, api-prod).
Update public/index.html so that the head contains the following:
<script>
// CONFIGURATIONS_PLACEHOLDER
</script>
There is no need to update vue.config.js as we are using the public folder for configuration.
Create a new file env.js to consume the runtime variables (keep it inside the src folder):
export default function getEnv(name) {
  return window?.configs?.[name] || process.env[name];
}
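Once the entrypoint script below has run, the script tag in index.html ends up containing something like the following, which is what getEnv reads first (values here match the docker-compose example further down):

window.configs = {
  "VUE_APP_VAR1": "my-app",
  "VUE_APP_VAR2": "8080"
}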
Create a new bash file set-env-variable.sh in the root folder of the app:
#!/bin/sh
JSON_STRING='window.configs = { \
  "VUE_APP_VAR1":"'"${VUE_APP_VAR1}"'", \
  "VUE_APP_VAR2":"'"${VUE_APP_VAR2}"'" \
}'
sed -i "s#// CONFIGURATIONS_PLACEHOLDER#${JSON_STRING}#" /usr/share/nginx/html/index.html
exec "$@"
Update the Dockerfile (assuming it's in the root folder of your Vue app):
# build stage
FROM node:lts-alpine as build-stage
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build
# production stage
FROM nginx:stable-alpine as production-stage
COPY --from=build-stage /app/dist /usr/share/nginx/html
COPY ./set-env-variable.sh /docker-entrypoint.d
RUN chmod +x /docker-entrypoint.d/set-env-variable.sh
RUN dos2unix /docker-entrypoint.d/set-env-variable.sh
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
Deployment
vue-app:
  ....
  volumes:
    - "./nginx/templates/:/etc/nginx/templates/"
  environment:
    VUE_APP_VAR1: my-app
    VUE_APP_VAR2: 8080
Consuming the variables in the Vue app:
import getEnv from "./service/env.js";
var myVar = getEnv("VUE_APP_VAR1");
I have this Dockerfile
FROM node:14.17.1
ARG GITHUB_TOKEN
ARG REACT_APP_BASE_URL
ARG DATABASE_URL
ARG BASE_URL
RUN mkdir /usr/src/app
WORKDIR /usr/src/app
ENV PATH /usr/src/app/node_modules/.bin:$PATH
ENV GITHUB_TOKEN=${GITHUB_TOKEN}
ENV REACT_APP_BASE_URL=${REACT_APP_BASE_URL}
ENV DATABASE_URL=${DATABASE_URL}
ENV BASE_URL=${BASE_URL}
ENV PORT 80
COPY . /usr/src/app
RUN npm install
RUN npm run build
EXPOSE 80
CMD ["npm", "start"]
But I don't like having to set each environment variable. Is it possible to make all of them available without setting them one by one?
We need to pay attention to two items before we continue:
As mentioned by @Lukman in the comments, a token is not a good thing to store in an image unless it is strictly for internal use; you decide.
Even if we do not specify the environment variables one by one in the Dockerfile, we still need to define them somewhere else, as the program itself cannot know which variables you really need.
If you have no problem with the above, let's go on. Basically, the idea is to define the environment variables (here ENV1 and ENV2 as examples) in a script, source that script in the container, and give the app a way to access these variables.
env.sh:
export ENV1=1
export ENV2=2
app.js:
#!/usr/bin/env node
var env1 = process.env.ENV1;
var env2 = process.env.ENV2;
console.log(env1);
console.log(env2);
entrypoint.sh:
#!/bin/bash
source /usr/src/app/env.sh
exec node /usr/src/app/app.js
Dockerfile:
FROM node:14.17.1
WORKDIR /usr/src/app
COPY . /usr/src/app
RUN chmod -R 755 /usr/src/app
CMD ["/usr/src/app/entrypoint.sh"]
Execution:
$ docker build -t abc:1 .
$ docker run --rm abc:1
1
2
Explanation:
We change CMD (or ENTRYPOINT) in the Dockerfile to use the customized entrypoint.sh. In this entrypoint.sh, we first source env.sh, which makes ENV1 and ENV2 visible to child processes of entrypoint.sh.
Then we use exec to replace the current process with node app.js, so node app.js becomes PID 1, while app.js can still read the environment defined in env.sh.
With the above, we no longer need to define the variables in the Dockerfile one by one, but our app can still read the environment.
Here's a different (easy) way.
Start by making your file. Here I'm dumping everything from my current environment, which is messy and not recommended, but it's a useful bit of code so I thought I'd add it:
env | sed 's/^/export /' > env.sh
Edit it so you only have what you need:
vi env.sh
Use the command below to share files into the container. Change pwd to whichever folder you want to share. Be careful: using this carelessly may result in sharing too many files.
sudo docker run -it -v `pwd`:`pwd` ubuntu
Assign appropriate file permissions. I'm using 777, which means anyone can read, write, and execute, for demonstration purposes; strictly speaking, read permission is enough, since the file is sourced rather than executed.
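For example (777 matches the demonstration above; a stricter mode works too):

chmod 777 env.sh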
Now run this command, and make sure you include the leading full stop:
. /LOCATION/env.sh
If you're not sure where your file is, just type pwd in the host console.
You can add those commands where appropriate to your Dockerfile to automate the process. If I recall correctly, there is a VOLUME instruction for Dockerfiles.
I have a problem when I try to run my app in a Docker container. It runs fine with a simple go run main.go, but whenever I build an image and run the container, I get the error panic: html/template: pattern matches no files: *.html, so I guessed GOPATH was not properly set in the container (though I use this same Dockerfile in other projects without any problems). I am a little lost here, since I have been using this method for a while without issues.
I am using Gin as the web framework.
The Dockerfile is:
FROM golang:alpine as builder
RUN apk update && apk add git && apk add ca-certificates
# For email certificate
RUN apk add -U --no-cache ca-certificates
COPY . $GOPATH/src/github.com/kiketordera/advanced-performance/
WORKDIR $GOPATH/src/github.com/kiketordera/advanced-performance/
RUN go get -d -v $GOPATH/src/github.com/kiketordera/advanced-performance
# For Cloud Server
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -ldflags="-w -s" -o /go/bin/advanced-performance $GOPATH/src/github.com/kiketordera/advanced-performance
FROM scratch
COPY --from=builder /go/bin/advanced-performance /advanced-performance
COPY --from=builder /go/src/github.com/kiketordera/advanced-performance/media/ /go/src/github.com/kiketordera/advanced-performance/media/
# For email certificate
VOLUME /etc/ssl/certs/ca-certificates.crt:/etc/ssl/certs/ca-certificates.crt
COPY --from=alpine /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
EXPOSE 8050/tcp
ENV GOPATH /go
ENTRYPOINT ["/advanced-performance"]
Main function is:
package main
import (
"fmt"
"net/http"
"github.com/gin-gonic/gin"
i18n "github.com/suisrc/gin-i18n"
"golang.org/x/text/language"
)
func main() {
// We create the instance for Gin
r := gin.Default()
/* Internationalization for showing the right language to match the browser's default settings
*/
bundle := i18n.NewBundle(
language.English,
"text/en.toml",
"text/es.toml",
)
// Tell Gin to use our middleware. This means that in every single request (GET, POST...), the call to i18n will be executed
r.Use(i18n.Serve(bundle))
// Path to the static files. /static is rendered in the HTML and /media is the link to the path to the images, svg, css.. the static files
r.StaticFS("/static", http.Dir("media"))
// Path to the HTML templates. * is a wildcard
r.LoadHTMLGlob("*.html")
// Redirects when users introduces a wrong URL
r.NoRoute(redirect)
// This get executed when the users gets into our website in the home domain ("/")
r.GET("/", renderHome)
r.POST("/", getForm)
// Listen and serve on 0.0.0.0:8080 (for windows "localhost:8080")
r.Run()
}
The full project can be found at https://github.com/kiketordera/advanced-performance; it is a simple website with i18n rendering and a POST form handler.
GOPATH is not relevant; it's used to "resolve import statements" and plays no role when running an executable (unless your code references it specifically!). The WORKDIR is the issue here.
FROM "clears any state created by previous instructions". This includes the WORKDIR. For example if you use the docker file:
FROM alpine:3.12
WORKDIR /test
COPY 1.txt .

FROM alpine:3.12
COPY 2.txt .
The final resulting image will have file 2.txt in the root folder (and no /test folder).
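You can check this quickly (image tag arbitrary; assumes 1.txt and 2.txt exist next to the Dockerfile):

$ docker build -t workdir-test .
$ docker run --rm workdir-test ls /test
ls: /test: No such file or directory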
In your Dockerfile you are copying the media folder to /go/src/github.com/kiketordera/advanced-performance/media/ on the assumption that the WORKDIR will be set; but that is not the case (it defaults to /). The simplest fix is to change COPY --from=builder /go/src/github.com/kiketordera/advanced-performance/media/ /go/src/github.com/kiketordera/advanced-performance/media/ to COPY --from=builder /go/src/github.com/kiketordera/advanced-performance/media/ /media/.
You are also accessing files from the root folder, so you need to copy these in with COPY --from=builder /go/src/github.com/kiketordera/advanced-performance/*.html / (or similar). Given that you are doing this, it's probably best to put everything (the exe, the HTML files and the media folder) into one folder (e.g. /app) to keep the root folder clean.
Note: There is no need to set GOPATH in the second image; as mentioned above it's not relevant when running the executable. I'd recommend using modules (support for GOPATH will probably be dropped in 1.17); this would also enable you to considerably shorten your paths!
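Putting those suggestions together, the final stage might look roughly like this (a sketch based on the paths above, not a drop-in replacement):

FROM scratch
# WORKDIR both creates /app and makes relative paths like "media" and "*.html" resolve there
WORKDIR /app
COPY --from=builder /go/bin/advanced-performance /app/advanced-performance
COPY --from=builder /go/src/github.com/kiketordera/advanced-performance/media/ /app/media/
COPY --from=builder /go/src/github.com/kiketordera/advanced-performance/*.html /app/
COPY --from=builder /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
EXPOSE 8050/tcp
ENTRYPOINT ["/app/advanced-performance"]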
Objective
I have an env variable script file that looks like:
#!/bin/sh
export FOO="public"
export BAR="private"
I would like to source the env variables so that they are available while the Docker image is being built. I am aware that I can use ARG and ENV with build args, but I have too many env variables and I am afraid that would be a lengthy list.
It's worth mentioning that I only need the env variables for one specific step in my Dockerfile (highlighted below), and I do not necessarily want them to be available in the built image after that.
What I have tried so far
I have tried having a script (envs.sh) that exports env vars like:
#!/bin/sh
export DOG="woof"
export CAT="meow"
My Dockerfile looks like:
FROM fishtownanalytics/dbt:0.18.1
# Define working directory
# Load ENV Vars
COPY envs.sh envs.sh
CMD ["sh", "envs.sh"]
# Install packages required
CMD ["sh", "-c", "envs.sh"]
RUN dbt deps # I need the env variables to be available for this step
# Exposing DBT Port
EXPOSE 8081
But that did not seem to work. How can I make env variables exported by a script available in my Dockerfile?
In the general case, you can't set environment variables in a RUN command: each RUN command runs a new shell in a new container, and any environment variables you set there will get lost at the end of that RUN step.
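A minimal illustration of that pitfall (FOO is a made-up variable):

FROM alpine:3.12
RUN export FOO=bar          # FOO exists only in this shell, which exits right here
RUN echo "FOO is '$FOO'"    # prints: FOO is '' -- the export did not survive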
However, you say you only need the variables at one specific step in your Dockerfile. In that special case, you can run the setup script and the actual command in the same RUN step:
FROM fishtownanalytics/dbt:0.18.1
COPY envs.sh envs.sh
RUN . ./envs.sh \
&& dbt deps
# Anything that envs.sh `export`ed is lost _after_ the RUN step
(CMD is irrelevant here: it only provides the default command that gets run when you launch a container from the built image, and doesn't have any effect on RUN steps. It also looks like the image declares an ENTRYPOINT so that you can only run dbt subcommands as CMD, not normal shell commands. I also use the standard . to read in a script file instead of source, since not every container has a shell that provides that non-standard extension.)
Your CMD call runs a new shell (sh) that defines those variables and then exits, leaving everything else unchanged. If you want the variables to apply to the process that actually needs them, source the script in the same shell invocation as the command itself (source is a shell builtin, not an executable, so the exec-form CMD ["source", "envs.sh"] cannot even start). For example, with your-command standing in for the real command:
CMD ["sh", "-c", ". ./envs.sh && exec your-command"]