This is my second post about this particular issue. I've since deleted that question because I've found a better way to explain what exactly I'd like to do.
Essentially, I'd like to pass command line arguments to docker-compose up and set them as environment variables in my Vue.js web application. The goal is to be able to change the environment variables without rebuilding the container every time.
I'm running into several issues with this. Here are my docker files:
Dockerfile for Vue.js application.
FROM node:latest as build-stage
WORKDIR /app
# Environment variable.
ENV VUE_APP_FOO=FOO
COPY package*.json ./
RUN npm install
COPY ./ .
RUN npm run build
FROM nginx as production-stage
RUN mkdir /app
COPY --from=build-stage /app/dist /app
COPY nginx.conf /etc/nginx/nginx.conf
VUE_APP_FOO is stored and accessible via Node's process.env object, and seems to be passed in at build time.
And my docker-compose.yml:
version: '3.5'
services:
ms-sql-server:
image: mcr.microsoft.com/mssql/server:2017-latest-ubuntu
ports:
- "1430:1433"
api:
image: # omitted (pulled from url)
restart: always
depends_on:
- ms-sql-server
environment:
DBServer: "ms-sql-server"
ports:
- "50726:80"
client:
image: # omitted(pulled from url)
restart: always
environment:
- VUE_APP_BAR="BAR"
depends_on:
- api
ports:
- "8080:80"
When I shell into the client container with docker exec -it <container_name> /bin/bash, the VUE_APP_BAR variable is present with the value "BAR". But the variable is not stored in the process.env object in my Vue application. Something odd seems to be happening with Node and its environment variables; it's as if Node is ignoring the container environment.
Is there any way for me to access the container-level variables set in docker-compose.yml inside my Vue.js application? Furthermore, is there any way to pass those variables as arguments to docker-compose up? Let me know if you need any clarification or more information.
So I figured out how to do this in a somewhat hacky way that works perfectly for my use case. A quick review of what I wanted to do: be able to pass environment variables via a docker-compose file to a Vue.js application, so that different team members can test against different development APIs depending on their assignment (localhost if running the server locally, api-dev, api-staging, api-prod).
The first step is to declare your variable in a JS file inside your Vue.js project (it can be defined anywhere), formatted like this:
export const serverURL = 'VUE_APP_SERVER_URL'
A quick note about the value of this string: it has to be completely unique across your entire project. If any other string or variable name in your application matches it, it will also get replaced by the docker environment variable we pass in with this method.
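For reference, a minimal sketch of how that constant might be consumed elsewhere in the app (the file name, import path, and endpoint are hypothetical):
// src/api.js (hypothetical)
import { serverURL } from "./config";

// After the entrypoint script below runs, serverURL holds the real URL
// substituted in place of the VUE_APP_SERVER_URL placeholder.
export function fetchUsers() {
  return fetch(`${serverURL}/users`).then((res) => res.json());
}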
Next we have to go over to our docker-compose.yml and declare our environment variable there:
docker-compose.yml
your_vuejs_client:
build: /path/to/vuejs-app
restart: always
environment:
VUE_APP_SERVER_URL: ${SERVER_URL}
ports:
- "8080:80"
Now when you run docker-compose up in your terminal, you should see something like this:
WARNING: The SERVER_URL variable is not set. Defaulting to a blank string.
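To make the warning go away, supply a value when bringing the stack up, e.g. inline in the shell (the URL is a placeholder):
SERVER_URL=https://api-dev.example.com docker-compose up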
After we have our docker-compose set up properly, we need to create an entrypoint script in the Vue.js application that runs before the app is launched via nginx. To do this, navigate back to your Vue.js directory and run touch entrypoint.sh to create a blank shell script. Open it up; this is what I have in mine:
entrypoint.sh
#!/bin/sh
ROOT_DIR=/usr/share/nginx/html

echo "Replacing env constants in JS"
for file in $ROOT_DIR/js/app.*.js* $ROOT_DIR/index.html $ROOT_DIR/precache-manifest*.js;
do
  echo "Processing $file ...";
  sed -i "s|VUE_APP_SERVER_URL|${VUE_APP_SERVER_URL}|g" "$file"
done

# Hand off to the Dockerfile's CMD (starts nginx)
exec "$@"
The sed line searches each of those files for the string VUE_APP_SERVER_URL and replaces every occurrence with the value of the environment variable passed in from docker-compose.yml.
Finally, we need to add some lines to our Vue.js application's Dockerfile so the entrypoint script we just created runs before nginx starts. With an ENTRYPOINT set, Docker passes the CMD as arguments to the script, which is why entrypoint.sh ends with exec "$@". So right before the CMD ["nginx", "-g", "daemon off;"] line in your Dockerfile, add the lines below:
VueJS Dockerfile
# Copy entrypoint script as /entrypoint.sh
COPY ./entrypoint.sh /entrypoint.sh
# Grant Linux permissions and run entrypoint script
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
After that, when you run docker-compose run -e SERVER_URL=yourserverapi.com/api, the serverURL constant we set in a JS file at the beginning will be replaced by whatever you supply in the docker-compose command. This was a pain to finally get working, but I hope it helps anyone facing similar troubles. The great thing is that you can add as many environment variables as you want: just add more lines to entrypoint.sh and define them in the Vue.js application and in your docker-compose file. Some of the ones I've used provide a different endpoint for the USPS API depending on whether the app is running locally or hosted in the cloud, different Maps API keys based on whether the instance is running in production or development, and so on.
I really hope this helps someone out; let me know if you have any questions and I can hopefully be of some help.
The client app runs in a web browser, but environment variables live on the server. The client needs a way to obtain the environment variable's value from the server.
To accomplish that, you have several options, including:
Leverage nginx to serve the environment variable itself, using an approach like one of these: nginx: use environment variables. This approach may be quick, and more dynamic or more static depending on your needs, but perhaps less formal and elegant. Or
Implement a server API (Node.js?) that reads the environment variable and returns it to the client over an AJAX call. This approach is elegant, dynamic, and API-centric. Or
Lastly, if the environment variable is static per nginx instance per deployment, you could build the static assets of the Vue app during deployment and hard-code the environment variable right there in the static assets. This approach is somewhat elegant but pollutes client code with server details and is somewhat static (it can only change on deployment).
As I posted here https://stackoverflow.com/a/63097312/4905563, I have developed a package that could help.
Try it with npm install jvjr-docker-env and take a look at the README.md for some examples of use.
Even though the question title asks how to consume environment variables on the Vue.js side, the questioner's goal is to configure the backend API endpoint dynamically, without rebuilding the Docker image.
I achieved this by using a reverse proxy.
For dev runs, configure the reverse proxy in vue.config.js, which is consumed by the vue-cli dev server.
For nginx runs, configure the reverse proxy in nginx.conf. You can use an nginx template to read environment variables.
This approach also eliminates the need for CORS configuration on the web-API server side, since the web API is called from the Vue app's web server, not from the browser directly.
A more thorough working sample can be found in this commit.
vue.config.js:
module.exports = {
devServer: {
proxy: {
'/api': {
target: 'http://host.docker.internal:5000',
},
},
},
};
nginx.conf:
...
http {
...
include /etc/nginx/conf.d/*.conf;
}
nginx.default.conf.template:
server {
listen 80;
...
location /api {
proxy_pass ${WEBAPI_ENDPOINT};
}
}
Dockerfile:
...
COPY nginx.conf /etc/nginx/nginx.conf
COPY nginx.default.conf.template /etc/nginx/templates/default.conf.template
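For completeness, a compose sketch that supplies the endpoint the template reads (the service names and URL are assumptions):
# docker-compose.yml (sketch)
services:
  client:
    build: .
    environment:
      WEBAPI_ENDPOINT: http://api:5000
    ports:
      - "8080:80"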
Add a static config.js script in index.html. This file is not processed by Webpack but included verbatim. Use your docker-compose file, Kubernetes manifest, AWS ECS task config, or similar to override that file at run time; a sketch follows the code samples below.
For example, in my Vue project:
public/config.js
// Run-time configuration. Override this in e.g. your Dockerfile, kubernetes pod or AWS ECS Task.
// Use only very simple, browser-compatible JS.
window.voxVivaConfig = {};
public/index.html
<!DOCTYPE html>
<html lang="en">
<head>
<!-- ... -->
<!-- Allow injection of run-time config -->
<script async src="config.js"></script>
<!-- ... -->
</head>
<body>
<div id="app" aria-busy="true">
<noscript><p>This app requires JavaScript.</p></noscript>
</div>
</body>
</html>
src/config.js
function getRunTimeConfig() {
if (typeof window.voxVivaConfig === "object") {
return window.voxVivaConfig || {};
}
return {};
}
export default Object.freeze({
appTitle: "My App",
backEndBaseUrl: process.env.VUE_APP_BACK_END_BASEURL || "https://example.com",
whatever: process.env.VUE_APP_WHATEVER || "",
/**
* Allow config specified at run time to override everything above.
*/
...getRunTimeConfig(),
});
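One way to override the file at run time with docker-compose is a bind mount (the image name and paths are assumptions), where config.prod.js assigns environment-specific values to window.voxVivaConfig:
# docker-compose.yml (sketch)
services:
  web:
    image: my-vue-app   # hypothetical image name
    ports:
      - "8080:80"
    volumes:
      # Replace the bundled public/config.js with a per-environment one
      - ./config.prod.js:/usr/share/nginx/html/config.js:ro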
Advantages
This puts all config in one place and lets you choose which config values are specified at compile time, build time, or run time, as you see fit.
I created a Next.js app which uses environment variables. I have the needed environment variables set as the system's environment variables (because it is a dockerized Next.js app).
# in terminal
echo $NEXT_PUBLIC_KEY_NAME
# >> value of key
but process.env.NEXT_PUBLIC_KEY_NAME is undefined in the app, but only when running in production mode. How can I access them? I can't seem to find any documentation on this on Next.js's website or anywhere else.
Next.js Solution
Next.js has built-in support for accomplishing what you want:
you just need to put your environment variables inside .env.local in your root folder.
Other than .env.local, you can also use .env, .env.development, and .env.production.
An example of .env.local:
DB_HOST=localhost
DB_USER=myuser
DB_PASS=mypassword
In your case, it will become:
NEXT_PUBLIC_KEY_NAME=[insert_what_you_want]
Voilà, you can access it from your Next.js app using process.env.
// pages/index.js
export async function getStaticProps() {
const db = await myDB.connect({
host: process.env.DB_HOST,
username: process.env.DB_USER,
password: process.env.DB_PASS,
})
// ...
}
You can read more from the source.
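Note that the example above reads variables server-side. For the NEXT_PUBLIC_-prefixed variable from the question, here is a minimal client-side sketch; keep in mind that Next.js inlines NEXT_PUBLIC_* values into the bundle at build time, so they must be set when next build runs, which is likely why a production image built without the variable shows undefined:
// pages/index.js (sketch)
export default function Home() {
  // Inlined at build time by Next.js, not read at run time
  return <p>Key: {process.env.NEXT_PUBLIC_KEY_NAME}</p>;
}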
Docker Solution
If the above solution is not the one you are looking for, then what you need is to set the env variable in Docker instead of in Next.js.
If you are using docker-compose file:
frontend:
image: frontend
build:
context: .
dockerfile: Dockerfile
environment:
- NEXT_PUBLIC_KEY_NAME=[insert_what_you_want]
If you run Docker manually, use the -e parameter:
docker run -e NEXT_PUBLIC_KEY_NAME=[insert_what_you_want] frontend bash
or use an env file with the docker command:
docker run --env-file ./env.list frontend bash
You can read more from the source.
I have a sample application using Node.js and React, so my project folder consists of a client folder and a server folder. The client folder was created using create-react-app.
I have created a Dockerfile for each folder, and I am using a docker-compose.yml at the root of the project.
Everything is working fine. Now I just want to host this application, and I am trying to use Jenkins.
Since I have little knowledge of the DevOps side, I have some doubts:
1) If I use two Dockerfiles, one for the client and one for the server, started by docker-compose.yml, will they run in two different containers or in a single container? From what I have read, I think there will be two containers; that's the point of the docker-compose.yml file. I'm a little confused about this.
2) Also, when I do sudo docker-compose up, it runs perfectly but shows "to create production build use npm run build". How can I change this based on the env? Do I need to create a different docker-compose.yml file for each environment? How can I use the same file but run npm start or npm run build based on the env?
3) Can I use the docker-compose.yml file for building the pipeline in Jenkins, or do I need a Dockerfile at the root of the project? I have seen most projects having a single Dockerfile. Is it that I can't use docker-compose.yml for hosting the application?
4) The reason I use NODE_COMMAND for the server in the command property of the docker-compose.yml file is that when I run the application locally I need auto reloading, so if I set NODE_COMMAND=nodemon in the terminal it will use that instead of running node index.js; in production it will run only node index.js if I don't set any NODE_COMMAND.
5) Do I need the CMD in the Dockerfiles of the client and server, since when I run docker-compose up it will take the command from docker-compose.yml? I think the docker-compose.yml file takes precedence. Is that right?
6) What is the use of volumes? Are they required in the docker-compose.yml file?
7) In the env file I am using API_HOST and APP_SERVER_PORT. How do they work internally with package.json? Is it doing the proxy thing? When we need to hit Node.js we usually give "proxy": "http://localhost:4000", but here how does it take http://server:4000? How does this work?
8) When we create containers we have ports like 3000, 3001, ... How does the container port match our application port? Do the exported environment variables and the ports in the docker-compose.yml file take care of that?
Please see the folder structure below:
movielisting
client
Dockerfile
package.json
package-lock.json
... other create-react-app folders like src..
server
Dockerfile
index.js
docker-compose.yml
.env
Dockerfile -- client
FROM node:10.15.1-alpine
#Create app directory and use it as the working directory
RUN mkdir -p /srv/app/client
WORKDIR /srv/app/client
COPY package.json /srv/app/client
COPY package-lock.json /srv/app/client
RUN npm install
COPY . /srv/app/client
CMD ["npm", "start"]
Dockerfile -- server
FROM node:10.15.1-alpine
#Create app directory
RUN mkdir -p /srv/app/server
WORKDIR /srv/app/server
COPY package.json /srv/app/server
COPY package-lock.json /srv/app/server
RUN npm install
COPY . /srv/app/server
CMD ["node", "index.js"]
docker-compose.yml -- root of project
version: "3"
services:
#########################
# Setup node container
#########################
server:
build: ./server
expose:
- ${APP_SERVER_PORT}
environment:
API_HOST: ${API_HOST}
APP_SERVER_PORT: ${APP_SERVER_PORT}
ports:
- ${APP_SERVER_PORT}:${APP_SERVER_PORT}
volumes:
- ./server:/srv/app/server
command: ${NODE_COMMAND:-node} index.js
##########################
# Setup client container
##########################
client:
build: ./client
environment:
- REACT_APP_PORT=${REACT_APP_PORT}
expose:
- ${REACT_APP_PORT}
ports:
- ${REACT_APP_PORT}:${REACT_APP_PORT}
volumes:
- ./client/src:/srv/app/client/src
- ./client/public:/srv/app/client/public
links:
- server
command: npm run start
.env
API_HOST="http://localhost:4000"
APP_SERVER_PORT=4000
REACT_APP_PORT=3000
package.json -- client
"proxy": "http://server:4000"
What all can I refactor?
Any help appreciated.
1) If I use two Dockerfiles, one for the client and one for the server, started by docker-compose.yml, will they run in two different containers or in a single container? From what I have read, I think there will be two containers; that's the point of the docker-compose.yml file. I'm a little confused about this.
Each Dockerfile builds its own Docker image, so in the end you will have two images: one for the React application and one for the Node.js backend. When docker-compose starts them, each service runs in its own container, so yes, two containers.
2) Also, when I do sudo docker-compose up, it runs perfectly but shows "to create production build use npm run build". How can I change this based on the env? Do I need to create a different docker-compose.yml file for each environment? How can I use the same file but run npm start or npm run build based on the env?
You need to run the React production build (npm run build) within the steps in its Dockerfile in order to use it as a production application. You can also use environment variables to customize the image during the build, using build-args, for example to pass a custom path or anything else; a sketch follows.
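For example, a minimal sketch of a production-oriented client Dockerfile using a build-arg (the REACT_APP_API_HOST name and the nginx serving stage are assumptions, not part of the original setup):
# client/Dockerfile (sketch): multi-stage production build
FROM node:10.15.1-alpine as build
WORKDIR /srv/app/client
# Build-arg lets you customize the bundle per environment at build time
ARG REACT_APP_API_HOST
ENV REACT_APP_API_HOST=$REACT_APP_API_HOST
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build

# Serve the static build with nginx
FROM nginx:alpine
COPY --from=build /srv/app/client/build /usr/share/nginx/html
You would build it with something like docker build --build-arg REACT_APP_API_HOST=https://api.example.com ./client (the URL is a placeholder).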
3) Can I use the docker-compose.yml file for building the pipeline in Jenkins, or do I need a Dockerfile at the root of the project? I have seen most projects having a single Dockerfile. Is it that I can't use docker-compose.yml for hosting the application?
It would be better to use the Dockerfile(s) with Jenkins to build your images, and keep the docker-compose.yml file(s) for deploying the application itself, without using the build keyword.
4) The reason I use NODE_COMMAND for the server in the command property of the docker-compose.yml file is that when I run the application locally I need auto reloading, so if I set NODE_COMMAND=nodemon in the terminal it will use that instead of running node index.js; in production it will run only node index.js if I don't set any NODE_COMMAND.
Using command inside the docker-compose.yml file overrides the CMD of the Dockerfile that was set during the build step.
5) Do I need the CMD in the Dockerfiles of the client and server, since when I run docker-compose up it will take the command from docker-compose.yml? I think the docker-compose.yml file takes precedence. Is that right?
Generally speaking, yes, you need it. However, as long as you override it from the docker-compose file, you might add it as a placeholder such as CMD ["node", "--help"].
6) What is the use of volumes? Are they required in the docker-compose.yml file?
Volumes are needed when you have files shared between containers, or when you need to keep data persistent on the host.
7) In the env file I am using API_HOST and APP_SERVER_PORT. How do they work internally with package.json? Is it doing the proxy thing? When we need to hit Node.js we usually give "proxy": "http://localhost:4000", but here how does it take http://server:4000? How does this work?
server is an alias for the Node.js container inside the Docker network once you start your application. And why is it named server? Because you have it inside your docker-compose.yml file in this part:
services:
server:
But of course you can change it by adding an alias to it under the networks keyword inside the docker-compose.yml file.
Note: React itself is client-side, which means it runs in the browser itself, so it won't be able to contact the Node.js application through the Docker network; you may use the IP itself, or use localhost and make the Node.js app accessible through localhost.
8) When we create containers we have ports like 3000, 3001, ... How does the container port match our application port? Do the exported environment variables and the ports in the docker-compose.yml file take care of that?
Docker itself does not know which port your application is using, so you have to make both of them use the same port. In Node.js this is achievable by using an environment variable, as in the sketch below.
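For example, a minimal sketch of server/index.js reading the port from the environment (assuming Express; the route is hypothetical):
// server/index.js (sketch)
const express = require('express');
const app = express();

// Same variable docker-compose passes in and publishes for this service
const port = process.env.APP_SERVER_PORT || 4000;

app.get('/api/ping', (req, res) => res.json({ ok: true })); // hypothetical route
app.listen(port, () => console.log(`Server listening on port ${port}`));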
For more details:
https://docs.docker.com/engine/reference/commandline/build/#set-build-time-variables---build-arg
https://docs.docker.com/compose/networking/
https://docs.docker.com/compose/compose-file/#aliases
https://docs.docker.com/compose/compose-file/#command
https://facebook.github.io/create-react-app/docs/deployment
If anyone is facing issues connecting React and Express, make sure there is NO localhost in the server API address in the client code
(e.g. http://localhost:5000/api should be changed to /api),
since the proxy entry is in the package.json file.
PS: if no entry is there, add
{
"proxy": "http://server:5000"
}
to package.json ('server' is your Express app's container name in the docker-compose file).
Finally made it work; thought I'd share this in case it helps anyone else.
I'm running OpenResty nginx within the official alpine-fat Docker image, and the OpenResty process starts as the nobody user.
I need to set an nginx variable with the following line:
set_by_lua $var 'return os.getenv("ENV_VAR")';
docker-compose.yml contains the following block:
build:
context: .
dockerfile: ./Dockerfile.nginx
environment:
- ENV_VAR=value
But the nginx worker process does not seem to get its value, and $var remains empty.
I tried adding export ENV_VAR=value to the /etc/profile file, but to no avail.
I also tried running OpenResty as the nginx user, but it can't see the value of the ENV_VAR variable either.
How can I make this work, if I can at all?
Try adding env ENV_VAR; to your nginx config. By default nginx discards all environment variables; this directive lets you preserve it. A combined sketch follows the quoted docs below.
From https://nginx.org/en/docs/ngx_core_module.html#env
Syntax: env variable[=value];
Default:
env TZ;
Context: main
By default, nginx removes all environment variables inherited from its parent process except the TZ variable. This directive allows preserving some of the inherited variables, changing their values, or creating new environment variables.
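Putting the env directive and the set_by_lua line together, a minimal nginx.conf sketch (the listen port and location block are assumptions):
# Main context: preserve ENV_VAR so worker processes inherit it
env ENV_VAR;
events {}
http {
    server {
        listen 80;
        location / {
            # os.getenv() can now see the container environment
            set_by_lua $var 'return os.getenv("ENV_VAR")';
        }
    }
}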
I use a simple Nginx docker container to serve some static files. I deploy this container to different environments (e.g. staging, production) using Kubernetes, which sets an environment variable on the container (let's say FOO_COUNT).
What's the easiest way of getting Nginx to pass the value of FOO_COUNT as a header on every response without having to build a different container for each environment?
Out of the box, nginx doesn't support environment variables inside most configuration blocks. But you can use envsubst, as explained in the official nginx Docker image.
Just create a configuration template that's available inside your nginx container, named e.g. /etc/nginx/conf.d/mysite.template.
It contains your nginx configuration, e.g.:
location / {
add_header X-Foo-Header "${FOO_COUNT}";
}
Then override the Dockerfile command with
/bin/bash -c "envsubst < /etc/nginx/conf.d/mysite.template > /etc/nginx/conf.d/default.conf && nginx -g 'daemon off;'"
If you're using a docker-compose.yml file, use command.
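For instance, a minimal compose sketch (service name and FOO_COUNT value are assumptions; $$ stops compose from interpolating, and quoting '$FOO_COUNT' tells envsubst to substitute only that variable, leaving nginx's own $variables intact):
# docker-compose.yml (sketch)
services:
  web:
    image: nginx
    environment:
      FOO_COUNT: "42"
    command: /bin/sh -c "envsubst '$$FOO_COUNT' < /etc/nginx/conf.d/mysite.template > /etc/nginx/conf.d/default.conf && nginx -g 'daemon off;'"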
Kubernetes provides a similar way to override the default command in the image.
This was made possible in nginx 1.19+; the functionality is now built in.
All you need to do is add your default.conf.template file (it looks for the *.template suffix) to /etc/nginx/templates, and the startup script will run envsubst on it and write the output to /etc/nginx/conf.d.
Read more in the docs, and see an example here: https://devopsian.net/notes/docker-nginx-template-env-vars/
I have configured SSL via certbot on the live server. I have a volume mapping for this in the nginx section of docker-compose.yml:
volumes:
...
- /etc/letsencrypt:/etc/letsencrypt
This works just fine on the live server, but I have a different setup on my local machine, where I run the app and view it at http://localhost. I suppose I don't need SSL on my local machine, so I can probably just exclude this part of the setup when running locally.
This case also makes me think I will potentially have to configure some other things differently locally vs live.
So the question is: how do I properly distinguish these differences between the local and live setups and apply them (semi)automatically depending on the environment?
Environment variables are typically a good way to create simple runtime portability like this, and many tools/apps/packages support runtime configuration through nothing more than environment variables. Unfortunately, nginx is not one of those apps.
For nginx, try something like this:
environment:
- NGINX_CONF=localhost.conf
Here, localhost.conf would be your nginx configuration for your local machine. Run an entrypoint.sh of some kind, and symlink the config specified by NGINX_CONF to wherever nginx will pick it up (usually /etc/nginx/conf.d or /etc/nginx/sites-enabled). Note the argument order: the target comes first, then the link name:
ln -s /app/nginx-confs/${NGINX_CONF} /etc/nginx/conf.d/running.conf
This assumes you have all of your confs copied into the container at /app/nginx-confs, but they can live wherever you like. The localhost.conf would serve your site as http://localhost.
For your live server, pass NGINX_CONF=liveserver.conf. This conf would serve https://www.liveserver.com or whatever your host is.
At this point, you can choose which nginx configuration you'll run when you start your container, using environment variables; a minimal entrypoint sketch follows. Even if you don't want to do it this way, hopefully it gets you moving in the right direction and thinking about environment variables as a way to configure things at runtime.
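A minimal entrypoint sketch of this approach (paths follow the example above; running.conf is assumed to be the file your nginx config includes):
#!/bin/sh
# entrypoint.sh (sketch): pick the conf named by $NGINX_CONF, then start nginx
ln -sf "/app/nginx-confs/${NGINX_CONF}" /etc/nginx/conf.d/running.conf
exec nginx -g 'daemon off;'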
There are other, more granular ways of managing nginx confs at runtime, such as confd or a templating engine like mustache. There are also roundabout ways within nginx using the env directive and set_by_lua, but that feels like the hackiest solution to me, so I prefer the others.
You can write a Makefile to generate different docker-compose.yml files depending on your needs.
Makefile:
# Makefile
-include config.mk
A_CONFIG_VAR ?= default_value
all: docker-compose.yml
docker-compose.yml: docker-compose.yml.in config.mk
	@echo 'Generating docker-compose.yml'
	@sed -e 's|##A_CONFIG_VAR##|$(A_CONFIG_VAR)|g' $< > $@
config.mk: put your configuration variables in this file.
# config.mk
A_CONFIG_VAR = "a_value"
docker-compose.yml.in: write an input docker-compose.yml file like so:
volumes:
- /path/to/somewhere:##A_CONFIG_VAR##
Change the contents of config.mk to suit your needs, and then run
make
This should generate a docker-compose.yml file for you.