Pass (Docker) environment variables into a Vue/Quasar application at runtime

Having read the Quasar framework's description for Handling process.env, I understand that it is possible to add environment variables when building the application for development or production.
You can even go one step further. Supply it with values taken from the
quasar dev/build env variables:
// quasar.config.js
build: {
  env: {
    FOO: process.env.FOO,
  }
}
Then, I can use that variable by using process.env.FOO.
For staging and production, however, I'm building a Docker image which runs an NGINX serving the final dist/spa folder. I'd like to pass an environment variable when deploying the application, so that I can configure the FOO variable depending on its value in the docker-compose.yml:
# staging
services:
  image: my-quasar-image
  environment:
    FOO: "STAGING"

# production
services:
  image: my-quasar-image
  environment:
    FOO: "PROD"
I have found a blog post which mentions that you could create a custom entrypoint.sh for the Docker image that reads env variables and adds them to the window object, but I wonder if there might be a more "elegant" solution.
The primary question is: Is it possible to pass in (Docker) environment variables before the application starts and which are then available on process.env?

This is how I sorted out my requirement; it works perfectly for my use case.
A quick review of what I wanted to do: be able to pass environment variables via a docker-compose file to a Vue.js application, so that different team members can test against different development APIs depending on their assignment (localhost if running the server locally, api-dev, api-staging, api-prod).
Update public/index.html to contain the following in the head:
<script>
// CONFIGURATIONS_PLACEHOLDER
</script>
There is no need to update vue.config.js as we are using the public folder for configuration.
Create a new file env.js to consume runtime variables (keep it inside the src folder):
export default function getEnv(name) {
  return window?.configs?.[name] || process.env[name];
}
Create a new shell script set-env-variable.sh in the root folder of the app:
#!/bin/sh
JSON_STRING='window.configs = { \
  "VUE_APP_VAR1":"'"${VUE_APP_VAR1}"'", \
  "VUE_APP_VAR2":"'"${VUE_APP_VAR2}"'" \
}'
sed -i "s#// CONFIGURATIONS_PLACEHOLDER#${JSON_STRING}#" /usr/share/nginx/html/index.html
exec "$@"
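Once the container starts and the script has run, the placeholder in index.html ends up looking roughly like this (values are illustrative):
<script>
window.configs = {
  "VUE_APP_VAR1": "my-app",
  "VUE_APP_VAR2": "8080"
}
</script>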
Update the Dockerfile (assuming it's in the root folder of your Vue app):
# build stage
FROM node:lts-alpine as build-stage
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build
# production stage
FROM nginx:stable-alpine as production-stage
COPY --from=build-stage /app/dist /usr/share/nginx/html
COPY ./set-env-variable.sh /docker-entrypoint.d
RUN chmod +x /docker-entrypoint.d/set-env-variable.sh
# dos2unix is not in the base image; install it (only needed if the script might have Windows line endings)
RUN apk add --no-cache dos2unix && dos2unix /docker-entrypoint.d/set-env-variable.sh
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
Deployment
vue-app:
  ....
  volumes:
    - "./nginx/templates/:/etc/nginx/templates/"
  environment:
    VUE_APP_VAR1: my-app
    VUE_APP_VAR2: 8080
Consuming variables in vue app
import getEnv from "./service/env.js";
var myVar = getEnv("VUE_APP_VAR1");

Related

Dockerized SvelteKit app: Hot reload not working

With help from the SO community I was finally able to dockerize my SvelteKit app and access it from the browser (this was an issue initially). So far so good, but now every time I make a code change I need to rebuild and redeploy my container, which obviously is not acceptable. Hot reload is not working; I've been trying multiple things I've found online but none of them have worked so far.
Here's my Dockerfile:
FROM node:19-alpine
# Set the Node environment to development to ensure all packages are installed
ENV NODE_ENV development
# Change our current working directory
WORKDIR /app
# Copy over `package.json` and lock files to optimize the build process
COPY package.json package-lock.json ./
# Install Node modules
RUN npm install
# Copy over rest of the project files
COPY . .
# Perhaps we need to build it for production, but apparently is not needed to run dev script.
# RUN npm run build
# Expose port 3000 for the SvelteKit app and 24678 for Vite's HMR
EXPOSE 3333
EXPOSE 8080
EXPOSE 24678
CMD ["npm", "run", "dev"]
My docker-compose:
version: "3.9"
services:
dmc-web:
build:
context: .
dockerfile: Dockerfile
container_name: dmc-web
restart: always
ports:
- "3000:3000"
- "3010:3010"
- "8080:8080"
- "5050:5050"
- "24678:24678"
volumes:
- ./:/var/www/html
the scripts from my package.json:
"scripts": {
"dev": "vite dev --host 0.0.0.0",
"build": "vite build",
"preview": "vite preview",
"test": "playwright test",
"lint": "prettier --check . && eslint .",
"format": "prettier --write ."
},
and my vite.config.js:
import { sveltekit } from '@sveltejs/kit/vite';
import { defineConfig } from 'vite';

export default defineConfig({
  plugins: [sveltekit()],
  server: {
    watch: {
      usePolling: true,
    },
    host: true, // needed for the DC port mapping to work
    strictPort: true,
    port: 8080,
  }
});
Any idea what I am missing? I can reach my app at http://localhost:8080 but cannot get the app to reload when a code change happens.
Thanks.
Solution
The workspace in question does not work simply because it does not bind-mount the source directory. Other than that, it has no problem whatsoever.
Here's working code at my github:
https://github.com/rabelais88/stackoverflow-answers/tree/main/74680419-svelte-docker-HMR
1. Proper bind mount in docker-compose
The docker-compose.yaml from the question only mounts the result of a previous build, not the current source files.
# 🚨 wrong
volumes:
  - ./:/var/www/html

# ✅ answer
volumes:
  # Avoid mounting the workspace root, because that may mount the
  # OS-specific node_modules folder or the build folder (.svelte-kit),
  # which conflict with the temporary results inside the container.
  # This is why many monorepos use a ./src folder.
  - ./src:/home/node/app/src
  - ./static:/home/node/app/static
  - ./vite.config.js:/home/node/app/vite.config.js
  - ./tsconfig.json:/home/node/app/tsconfig.json
  - ./svelte.config.js:/home/node/app/svelte.config.js
2. The Dockerfile should not include the source copy and the command
A Dockerfile does not always have to include a CMD. It is necessary when 1) the result has to be preserved, or 2) the process lifecycle is critical to the image. In this case, 1) the result is not certain because the source may not be complete at the moment of booting, and 2) the process lifecycle is not really important because the user may manually start or stop the container. The local development environment for VSCode + Docker, a.k.a. the VSCode devcontainer, also uses a sleep infinity command for this reason.
As mentioned above, the code should not be copied into the image because it would conflict with the bind-mounted files. To avoid the two colliding, just remove the COPY and CMD commands from the Dockerfile and add a command to docker-compose.yaml instead.
# dockerfile
# 🚨wrong
COPY package.json package-lock.json ./
RUN npm install
COPY . .
# ...
CMD ["npm", "run", "dev"]
# ✅answer
COPY package*.json ./
RUN npm install
# comment out COPY and CMD
# COPY . .
# ...
# CMD ["npm", "run", "dev"]
and add command to docker-compose
# docker-compose.yaml
services:
  svelte:
    # ...
    command: npm dev
The rest of the configs in the question are not necessary. You can check this out in my working demo on GitHub.
Edit
I just did this, but when running it I'm getting Error: Cannot find module '/app/npm dev'.
The answer uses arbitrary settings; the volumes and CMD may have to be changed accordingly.
For example:
# docker-compose.yaml
volumes:
  - ./src:/$YOUR_APP_DIR/src
  - ./static:/$YOUR_APP_DIR/static
# ...
I've used /home/node/app as the WORKDIR because /home/node is the home directory of the node user in the official Node Docker image. However, it is not necessary to use the same folder. If you're going to use /home/node/app, make sure to create the folder before use:
RUN mkdir -p /home/node/app
WORKDIR /home/node/app
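Putting the pieces above together, a minimal dev setup might look like this (a sketch following the folder names and ports used in this thread, not the exact files from the linked demo):
# Dockerfile (dev image: dependencies only, no source copy, no CMD)
FROM node:19-alpine
ENV NODE_ENV=development
RUN mkdir -p /home/node/app
WORKDIR /home/node/app
COPY package*.json ./
RUN npm install

# docker-compose.yaml
services:
  svelte:
    build: .
    command: npm run dev            # runs the package.json "dev" script
    ports:
      - "8080:8080"                 # vite dev server (see vite.config.js)
      - "24678:24678"               # vite HMR websocket
    volumes:
      - ./src:/home/node/app/src
      - ./static:/home/node/app/static
      - ./vite.config.js:/home/node/app/vite.config.js
      - ./svelte.config.js:/home/node/app/svelte.config.js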
Why are you using Docker for local development? Check this: https://stackoverflow.com/a/70159286/3957754
Node.js works very well even on Windows, so my advice is to develop directly on the host with a simple Node.js installation. Your hot reload should work.
Docker is for your test, staging or production servers, where you don't want hot reload because real users are using your web app. Hot reload is only for local development.
Container process
When a Docker container starts, it is linked to a live, foreground process. If this process ends or is restarted, the entire container will stop. Check this: https://stackoverflow.com/a/68593731/3957754
That's why Docker's goal is not related to hot reload at the source-code level.
Anyway
If setting up the workspace (Node.js, git, etc.) on your machine is too complex, you could use Docker with Node.js hot reload by following these steps:
Don't use a Dockerfile; use docker run ubuntu ... directly. You are working with Node.js, not with C#, PHP or similar follies.
At the root of your workspace (where package.json is), execute this:
docker run -it -p 8080:8080 -v $(pwd):/src node:19-alpine sh
The previous command will create a container not linked to a TCP process, so you will have a new sub-shell with Node.js 19 on Alpine, ready to use.
Execute the classic
cd /src
npm install
npm run dev
If your app works fine, hot reload should work. If it doesn't, try without Docker and share the result with us.

How to set environment variable in Docker

High level: I have a front-end web application which runs in one Docker container, and I made a second container for the MySQL database.
I picked an environment variable, mysqldb, and I need to set that variable to the IP address of the MySQL container. Part two: the web application has to know what IP address the MySQL container is running on (whatever it turns out to be, because the IP of the container will change), so it has to read that environment variable. So my question is: how do I set the variable so that when I run the program, the MySQL container runs and shows that the database I set up is working?
Dockerfile
FROM golang:1.19-bullseye AS build
WORKDIR /app
COPY ./ ./
RUN go build -o main ./
FROM debian:bullseye
COPY --from=build /app/main /usr/local/bin/main
#CMD[apt-get install mysql-clientmy]
CMD ["/usr/local/bin/main"]
makefile
build:
	go build -o bin/main main.go
run:
	go run main.go
runcontainer:
	docker run -d -p 9008:8080 tiny
compile:
	echo "Compiling for every OS and Platform"
	GOOS=linux GOARCH=arm go build -o bin/main-linux-arm main.go
	GOOS=linux GOARCH=arm64 go build -o bin/main-linux-arm64 main.go
	GOOS=freebsd GOARCH=386 go build -o bin/main-freebsd-386 main.go
part of my go program
func main() {
	linkList = map[string]string{}
	http.HandleFunc("/link", addLink)
	http.HandleFunc("/hpe/", getLink)
	http.HandleFunc("/", Home)
	ip := flag.String("i", "0.0.0.0", "")
	port := flag.String("p", "8080", "")
	flag.Parse()
	fmt.Printf("Listening on %s \n", net.JoinHostPort(*ip, *port))
	log.Fatal(http.ListenAndServe(net.JoinHostPort(*ip, *port), nil))
}
Yes, you can achieve this by using env variables in the Dockerfile or the Docker Compose file. By the way, don't use the IP of the db container; always use the hostname. The hostname is static, but the IP changes every time the container is recreated.
You can do it the following way in the Dockerfile itself (the example is from the Docker documentation):
FROM busybox
ENV FOO=/bar
WORKDIR ${FOO} # WORKDIR /bar
ADD . $FOO # ADD . /bar
COPY \$FOO /quux # COPY $FOO /quux
In case of a Docker-compose file, you can try to do the following:
version: '3'
services:
  mysqldb:
    container_name: mydb
    restart: always
    env_file:
      - db.env
You should change it according to your requirements.
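For reference, db.env is just a plain KEY=value file; for the official mysql image it might contain something like the following (values are placeholders):
MYSQL_ROOT_PASSWORD=changeme
MYSQL_DATABASE=mydb
MYSQL_USER=appuser
MYSQL_PASSWORD=changeme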
Addendum:
As far as I can understand what you're trying to achieve, you can solve this problem by using a db.env file in your Go project as well as in your docker-compose file.
Try to do the following:
Create a .env file in your Go project structure.
Add HOST=172.0.0.1.
Read the variable in your Go program, either using a third-party package like viper or simply using os.Getenv("HOST").
Create a docker-compose file and add both of the services you need.
You can look at the example that I provided earlier and create the services accordingly, specifying the same db.env under the docker-compose env_file key.
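As a minimal sketch of the Go side (the HOST variable name and the DSN format are illustrative; adapt them to your driver and compose service names):
package main

import (
	"fmt"
	"os"
)

func main() {
	// Read the DB host from the environment instead of hard-coding an IP.
	// In docker-compose this could be set to the MySQL service name, e.g. "mysqldb".
	host := os.Getenv("HOST")
	if host == "" {
		host = "127.0.0.1" // fallback when running outside Docker
	}
	// Illustrative MySQL DSN; user/password/dbname are placeholders.
	dsn := fmt.Sprintf("user:password@tcp(%s:3306)/mydb", host)
	fmt.Println("connecting with DSN:", dsn)
}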

NextJS environment variables not accessible in production build

I have a NextJS application that accesses a database from the API.
When in development, I have a .env file that contains the host, port, username, password, and database. After running npm run dev, the API functionality works as expected. Even if I run npm run build and npm run start on my local machine, it works correctly.
The problem comes after I push to Github and have Github build the app and deploy it as a docker container. For some reason, the docker build does not accept my environment variables loaded through an attached .env file.
To further elaborate, the environment variables are either on the dev machine or attached to the docker container on the production server. They are mounted to the same place (the root directory of the project: /my-site/.env) but the production API does not work.
The environment variables are included in /lib/db.tsx:
import mysql from "mysql2/promise";
const executeQuery = async (query: string) => {
  try {
    const db = await mysql.createConnection({
      host: process.env.MYSQL_HOST,
      database: process.env.MYSQL_DATABASE,
      user: process.env.MYSQL_USER,
      password: process.env.MYSQL_PASSWORD,
    });
    const [results] = await db.execute(query, []);
    db.end();
    return results;
  } catch (error) {
    return { error };
  }
};
export default executeQuery;
This file is included in the API endpoints as:
import executeQuery from "../../../../lib/db";
Again, since it works on the development computer, I think the issue is with the building of the docker container.
Here is my included Dockerfile:
FROM node:lts as dependencies
WORKDIR /my-site
COPY package.json yarn.lock ./
RUN yarn install --frozen-lockfile
FROM node:lts as builder
WORKDIR /my-site
COPY . .
COPY --from=dependencies /my-site/node_modules ./node_modules
RUN yarn build
FROM node:lts as runner
WORKDIR /my-site
ENV NODE_ENV production
# If you are using a custom next.config.js file, uncomment this line.
# COPY --from=builder /my-site/next.config.js ./
COPY --from=builder /my-site/public ./public
COPY --from=builder /my-site/.next ./.next
COPY --from=builder /my-site/node_modules ./node_modules
COPY --from=builder /my-site/package.json ./package.json
EXPOSE 3000
CMD ["yarn", "start"]
Any and all assistance is greatly appreciated!
Edit: Other things I have attempted:
Adding them as environment variables to the docker container in the docker-compose file (under environment) and verifying that they are accessible inside the container using echo $MYSQL_USER.
Mounting the .env file inside of the .next folder
Mounting the .env file inside of the .next/server folder
I ended up solving my own issue after hours of trying to figure it out.
My solution was to create a .env.production file and commit it to git.
I also adjusted my Dockerfile to include: COPY --from=builder /my-site/.env.production ./
I am not a fan of that solution, as it involves pushing secrets to a repo, but it works.
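For context, that added line just sits in the runner stage alongside the other COPY statements, something like:
# runner stage (excerpt)
COPY --from=builder /my-site/.env.production ./
COPY --from=builder /my-site/public ./public
COPY --from=builder /my-site/.next ./.next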

VueCLI3 app (nginx/docker) use environment specific variables

How to externalize and consume environment variables from a Vue App:
Created with VueCLI3
Deployed in a docker container
Using NGINX
Some details:
The project is built once and deployed to test and live environments. So, I want to externalize some variables which change across environments (like URLs to call, domains, usernames, etc.). The classic approach of .env file variants with VUE_APP_-prefixed variables does not help here, as their values are injected into the code during the build stage: they are no longer variables once the app is built.
Trying it out, I have found a blog post making use of dotenv and some extra configuration, but I could not put it together with the configuration in this VueCLI 3 official guide. The solution does not need to adopt a similar approach though; I am just trying to find a way out.
Probably not a useful information, but I am planning to define those environment variables in Config Maps in Kubernetes configuration.
I think I've managed to overcome this. I am leaving the resolution here.
Define your environment-specific environment variables in .env.development (for development purposes) and add them also to the Pod configuration with corresponding values.
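For example, a minimal .env.development might look like this (the key names are illustrative and must match the $VUE_APP_* placeholders used in the Configuration class in the next step):
VUE_APP_ENV_KEY_1=http://localhost:8080/api
VUE_APP_ENV_KEY_2=some-dev-value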
Add a configuration.js file somewhere in your Vue project source folder. It would act as a wrapper for determining whether the runtime is development (local) or production (container). It is like the one shown here, but importing/configuring dotenv is not required:
export default class Configuration {
  static get EnvConfig () {
    return {
      envKey1: '$VUE_APP_ENV_KEY_1',
      envKey2: '$VUE_APP_ENV_KEY_2'
    }
  }

  static value (key) {
    // If the key does not exist in the EnvConfig object of the class, return null
    if (!this.EnvConfig.hasOwnProperty(key)) {
      console.error(`Configuration: There is no key named "${key}". Please add it in the Configuration class.`)
      return
    }

    // Get the value
    const value = this.EnvConfig[key]

    // If the value is null, return
    if (!value) {
      console.error(`Configuration: Value for "${key}" is not defined`)
      return
    }

    if (!value.startsWith('$VUE_APP_')) {
      // value was already replaced, it seems we are in production (containerized).
      return value
    }

    // value was not replaced, it seems we are in development.
    const envName = value.substr(1) // Remove $ and get the current value from process.env
    const envValue = process.env[envName]
    if (!envValue) {
      console.error(`Configuration: Environment variable "${envName}" is not defined`)
      return
    }

    return envValue
  }
}
Create an entrypoint.sh. With some modification, it would look as follows:
#!/bin/bash
function join_by { local IFS="$1"; shift; echo "$*"; }

# Find vue env vars
vars=$(env | grep VUE_APP_ | awk -F = '{print "$"$1}')
vars=$(join_by ',' $vars)
echo "Found variables $vars"

for file in /app/js/app.*;
do
  echo "Processing $file ...";
  # Use the existing JS file as template
  cp $file $file.tmpl
  envsubst "$vars" < $file.tmpl > $file
  rm $file.tmpl
done

nginx -g 'daemon off;'
In your Dockerfile, add a CMD to run this entrypoint.sh script as a bootstrapping script during container startup, so that every time you start a container it gets the environment variables from the pod configuration and injects them into the Configuration class shown in step 2.
# build stage
FROM node:lts-alpine as build-stage
# make the 'app' folder the current working directory
WORKDIR /app
# Copy package*.json and install dependencies in a separate step to enable caching
COPY package*.json ./
RUN npm install
# copy project files and folders to the current working directory
COPY ./ .
# install dependencies and build app for production with minification
RUN npm run build
# Production stage
FROM nginx as production-stage
RUN mkdir /app
# copy 'dist' content from the previous stage i.e. build
COPY --from=build-stage /app/dist /app
# copy nginx configuration
COPY nginx.conf /etc/nginx/nginx.conf
# Copy the bootstrapping script that injects environment-specific values; it runs as the container command (CMD) below
COPY entrypoint.sh entrypoint.sh
# Make the file executable
RUN chmod +x ./entrypoint.sh
CMD ["./entrypoint.sh"]
Finally, instead of process.env use our wrapper configuration class like Configuration.value('envKey1'). And voila!
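For instance, assuming configuration.js lives under src/ so that the @ alias resolves to it:
// any component or service module
import Configuration from '@/configuration'

const backendUrl = Configuration.value('envKey1')
console.log('Backend URL:', backendUrl)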

Pass environment variables from docker-compose to vue app with nginx

I want to dockerize my vuejs app and to pass it environment variables from the docker-compose file.
I suspect the app gets the environment variables only at the build stage, so it does not get the environment variables from the docker-compose.
vue app:
process.env.FIRST_ENV_VAR
Dockerfile:
FROM alpine:3.7
RUN apk add --update nginx nodejs
RUN mkdir -p /tmp/nginx/vue-single-page-app
RUN mkdir -p /var/log/nginx
RUN mkdir -p /var/www/html
COPY nginx_config/nginx.conf /etc/nginx/nginx.conf
COPY nginx_config/default.conf /etc/nginx/conf.d/default.conf
WORKDIR /tmp/nginx/vue-single-page-app
COPY . .
RUN npm install
RUN npm run build
RUN cp -r dist/* /var/www/html
RUN chown nginx:nginx /var/www/html
CMD ["nginx", "-g", "daemon off;"]
docker-compose:
version: '3.6'
services:
  app:
    image: myRegistry/myProject:tag
    restart: always
    environment:
      - FIRST_ENV_VAR="first environment variable"
      - SECOND_ENV_VAR="second environment variable"
    ports:
      - 8080:8080
Is there any way to pass environment variables to a web application after the build stage?
In Vue.js apps you need to prefix the env variables with VUE_APP_,
so in your case it should be VUE_APP_FIRST_ENV_VAR.
Based on this https://medium.com/@rakhayyat/vuejs-on-docker-environment-specific-settings-daf2de660b9, I have made a silly npm package that helps to accomplish what you want.
Go to https://github.com/juanise/jvjr-docker-env and take a look to README file.
Basically just run npm install jvjr-docker-env. A new Dockerfile, entrypoint and json file will be added to your project.
Probably you will need to modify some directory and/or file names in the Dockerfile in order for it to work.
You can try this. The value of FIRST_ENV_VAR inside docker will be set to the value of FIRST_ENV_VAR_ON_HOST on your host system.
version: '3.6'
services:
  app:
    image: myRegistry/myProject:tag
    restart: always
    environment:
      - FIRST_ENV_VAR=$FIRST_ENV_VAR_ON_HOST
      - SECOND_ENV_VAR=$SECOND_ENV_VAR_ON_HOST
    ports:
      - 8080:8080
As you can see in the Docker docs (docker-compose reference, environment), the defined environment values are always available in the container, not only at the build stage.
You can check this by changing the CMD to execute the command env, which displays all environment variables in your container.
If your application is not getting the actual values of the env variables, it must be something else related to your app.
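For example, since docker-compose run overrides the image's CMD, you can do a one-off check without modifying anything (app is the service name from the compose file above):
docker-compose run --rm app env | grep ENV_VAR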
