How can I access environment variables in Vue that are passed to the container at runtime rather than at build time?
Stack is as follows:
VueCLI 3.0.5
Docker
Kubernetes
There are suggested solutions on Stack Overflow and elsewhere to use a .env file (and modes) to pass variables, but that happens at build time and gets baked into the Docker image.
I would like to pass the variable into Vue at run-time as follows:
Create a Kubernetes ConfigMap (I get this right)
Pass the ConfigMap value into a K8s pod env variable when running the deployment YAML file (I get this right; a sketch of steps 1 and 2 follows below)
Read from the env variable created above, e.g. VUE_APP_MyURL, and do something with that value in my Vue app (I DO NOT get this right)
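All names and values in this sketch are placeholders, not my real ones:

apiVersion: v1
kind: ConfigMap
metadata:
  name: my-config
data:
  myUrl: "https://example.com/api"

and in the Deployment's container spec:

env:
  - name: VUE_APP_ENV_MyURL
    valueFrom:
      configMapKeyRef:
        name: my-config
        key: myUrl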
I've tried the following in helloworld.vue:
<template>
<div>{{displayURL}}
<p>Hello World</p>
</div>
</template>
<script>
export default {
  data() {
    return {
      displayURL: ""
    }
  },
  mounted() {
    console.log("check 1")
    this.displayURL = process.env.VUE_APP_ENV_MyURL
    console.log(process.env.VUE_APP_ENV_MyURL)
    console.log("check 3")
  }
}
</script>
I get back "undefined" in the console log and nothing showing on the helloworld page.
I've also tried passing the value into a vue.config file and reading it from there. Same "undefined" result in console.log
<template>
<div>{{displayURL}}
<p>Hello World</p>
</div>
</template>
<script>
const vueconfig = require('../../vue.config');
export default {
  data() {
    return {
      displayURL: ""
    }
  },
  mounted() {
    console.log("check 1")
    this.displayURL = vueconfig.VUE_APP_MyURL
    console.log(vueconfig.VUE_APP_MyURL)
    console.log("check 3")
  }
}
</script>
With vue.config looking like this:
module.exports = {
  VUE_APP_MyURL: process.env.VUE_APP_ENV_MyURL
}
If I hardcode a value into VUE_APP_MyURL in the vue.config file it shows successfully on the helloworld page.
VUE_APP_ENV_MyURL is successfully populated with the correct value when I interrogate it with kubectl describe pod.
process.env.VUE_APP_MyURL doesn't seem to successfully retrieve the value.
For what it is worth... I am able to use process.env.VUE_APP_3rdURL successfully to pass values into a Node.js app at runtime.
Create a file config.js with your desired configuration. We will use that later to create a ConfigMap that we deploy to Kubernetes. Put it into your Vue.js project where your other JavaScript files are. Although we will exclude it later from minification, it is useful to have it there so that IDE tooling works with it.
const config = (() => {
  return {
    "VUE_APP_ENV_MyURL": "...",
  };
})();
Now make sure that your script is excluded from minification. To do that, create a file vue.config.js with the following content that preserves our config file.
const path = require("path");

module.exports = {
  publicPath: '/',
  configureWebpack: {
    module: {
      rules: [
        {
          test: /config.*config\.js$/,
          use: [
            {
              loader: 'file-loader',
              options: {
                name: 'config.js'
              },
            }
          ]
        }
      ]
    }
  }
}
In your index.html, add a script block to load the config file manually. Note that the config file won't be there as we just excluded it. Later, we will mount it from a ConfigMap into our container. In this example, we assume that we will mount it into the same directory as our HTML document.
<script src="<%= BASE_URL %>config.js"></script>
Change your code to use our runtime config:
this.displayURL = config.VUE_APP_ENV_MyURL || process.env.VUE_APP_ENV_MyURL
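In a component, that looks roughly like this (a sketch; config is the global created by the externally loaded config.js, and the typeof guard is only there so the code still runs if the file is missing):

<script>
export default {
  data() {
    return {
      displayURL: ""
    }
  },
  mounted() {
    // "config" is the global declared by the config.js loaded from index.html;
    // fall back to the build-time env value if it is not present
    this.displayURL =
      (typeof config !== "undefined" && config.VUE_APP_ENV_MyURL) ||
      process.env.VUE_APP_ENV_MyURL
  }
}
</script>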
In Kubernetes, create a ConfigMap that contains the content of your config file. Of course, you will want to read the content from your actual config file.
apiVersion: v1
kind: ConfigMap
metadata:
  ...
data:
  config.js: |
    var config = (() => {
      return {
        "VUE_APP_ENV_MyURL": "...",
      };
    })();
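If you would rather not maintain the JavaScript inside the YAML by hand, the ConfigMap can also be generated straight from the file (the name vue-config is just an example):

kubectl create configmap vue-config --from-file=config.js --dry-run=client -o yaml > configmap.yaml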
Reference the ConfigMap in your deployment. This mounts it as a file into your container. The directory at the mountPath already contains our minified index.html; we mount the config file that we referenced before alongside it.
apiVersion: apps/v1
kind: Deployment
metadata:
  ...
spec:
  ...
  template:
    ...
    spec:
      volumes:
        - name: config-volume
          configMap:
            name: ...
      containers:
        - ...
          volumeMounts:
            - name: config-volume
              mountPath: /usr/share/nginx/html/config.js
              subPath: config.js
Now you can access the config file at <Base URL>/config.js and you should see the exact content that you put into the ConfigMap entry. Your HTML document loads that config file as it loads the rest of your minified Vue.js code. Voila!
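A quick way to verify the mount worked (the host name is a placeholder):

curl http://my-app.example.com/config.js

You should get back the exact JavaScript from the ConfigMap rather than a 404 or your index.html.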
I am adding my working solution here for those who are still having trouble. I do think that @Hendrik M Halkow's answer is more elegant, though I couldn't manage to solve it that way, simply because of my lack of expertise in webpack and Vue. I just couldn't figure out where to put the config file and how to refer to it.
My approach is to build for production with constants (dummy values) in place of the environment variables, then replace those constants in the image using a custom entrypoint script. The solution goes like this.
I have encapsulated all configs into one file called app.config.js
export const clientId = process.env.VUE_APP_CLIENT_ID
export const baseURL = process.env.VUE_APP_API_BASE_URL
export default {
  clientId,
  baseURL,
}
This is used in the project just by importing the value from the config file:
import { baseURL } from '@/app.config';
Then I am using standard .env.[profile] files to set environment variables.
e.g. the .env.development
VUE_APP_API_BASE_URL=http://localhost:8085/radar-upload
VUE_APP_CLIENT_ID=test-client
Then for production I set string constants as values.
e.g. the .env.production
VUE_APP_API_BASE_URL=VUE_APP_API_BASE_URL
VUE_APP_CLIENT_ID=VUE_APP_CLIENT_ID
Please note that the value can be any unique string. To keep things readable, I simply use the environment variable's own name as the value. This will get compiled and bundled just like in development mode.
In my Dockerfile, I add an entrypoint that can read those constants and replace them with environment variable values.
My Dockerfile looks like this (it is pretty standard):
FROM node:10.16.3-alpine as builder
RUN mkdir /app
WORKDIR /app
COPY package*.json /app/
RUN npm install
COPY . /app/
RUN npm run build --prod
FROM nginx:1.17.3-alpine
# add init script
COPY ./docker/nginx.conf /etc/nginx/nginx.conf
WORKDIR /usr/share/nginx/html
COPY --from=builder /app/dist/ .
COPY ./docker/entrypoint.sh /entrypoint.sh
# expose internal port:80 and run init.sh
EXPOSE 80
ENTRYPOINT ["/entrypoint.sh"]
CMD ["nginx", "-g", "daemon off;"]
Then create a ./docker/entrypoint.sh file as below.
#!/bin/sh

ROOT_DIR=/usr/share/nginx/html

# Replace env vars in JavaScript files
echo "Replacing env constants in JS"
for file in $ROOT_DIR/js/app.*.js* $ROOT_DIR/index.html $ROOT_DIR/precache-manifest*.js;
do
  echo "Processing $file ...";
  sed -i 's|VUE_APP_API_BASE_URL|'${VUE_APP_API_BASE_URL}'|g' $file
  sed -i 's|VUE_APP_CLIENT_ID|'${VUE_APP_CLIENT_ID}'|g' $file
done

echo "Starting Nginx"
nginx -g 'daemon off;'
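To try the image outside Kubernetes first, the same substitution can be exercised with plain docker run (the image name and values here are examples, not the real ones):

docker run --rm -p 8080:80 \
  -e VUE_APP_API_BASE_URL=https://api.example.com/radar-upload \
  -e VUE_APP_CLIENT_ID=my-client \
  my-vue-app:latest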
This gives me a runtime-configurable image that I can run in many environments. I know it is a bit of a hack, but I have seen many people do it this way.
Hope this helps someone.
Create config file
In the public folder: public/config.js
const config = (() => {
  return {
    "VUE_CONFIG_APP_API": "...",
  };
})();
Update index.html
Update public/index.html to contain the following at the end of the head:
<!-- docker configurable variables -->
<script src="<%= BASE_URL %>config.js"></script>
There is no need to update vue.config.js as we are using the public folder for configuration.
ESLint
ESLint would give us an error about using an undefined variable. Therefore we define a global variable in the .eslintrc.js file:
globals: {
config: "readable",
},
Usage
E.g. in the store src/store/user.js:
export const actions = {
  async LoadUsers({ dispatch }) {
    return await dispatch(
      "axios/get",
      {
        url: config.VUE_CONFIG_APP_API + "User/List",
      },
      { root: true }
    );
  },
  ...
K8S ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: fe-config
  namespace: ...
data:
  config.js: |
    var config = (() => {
      return {
        "VUE_CONFIG_APP_API": "...",
      };
    })();
Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  ...
spec:
  ...
  template:
    ...
    spec:
      volumes:
        - name: config-volume
          configMap:
            name: fe-config
      containers:
        - ...
          volumeMounts:
            - name: config-volume
              mountPath: /usr/share/nginx/html/config.js
              subPath: config.js
I had the same problem in my current project and found out that it is not possible to access environment variables at runtime at the moment, so I ended up with the solution of creating .env files or local environment variables that, as you said, are used at build time.
I recently created a set of plugins to solve this problem elegantly:
No global namespace pollution.
No import statement is required.
Almost zero configuration.
Out-of-the-box support for Vue CLI (it also works with Webpack, Rollup and Vite; with CSR, SSR and SSG; and with unit testing tools; powered by Unplugin and Babel).
You can access environment variables (heavily inspired by Vite) this way:
// src/index.js
console.log(`Hello, ${import.meta.env.HELLO}.`);
In the production build, it will be temporarily replaced with a placeholder:
// dist/index.js
console.log(`Hello, ${"__import_meta_env_placeholder__".HELLO}.`);
Finally, you can use the built-in script to replace the placeholders with real environment variables from your system (e.g. reading environment variables in your k8s pod):
// dist/index.js
console.log(`Hello, ${{ HELLO: "import-meta-env" }.HELLO}.`);
// > Hello, import-meta-env.
You can find more info at https://iendeavor.github.io/import-meta-env/.
And there is a Docker setup example and Vue CLI setup example.
Hope this helps someone who needs this.
I got it to work with the solution proposed by @Hendrik M Halkow.
But I stored config.js in the static folder. By doing that, I don't have to worry about excluding the file from minification.
Then include it like this:
<script src="<%= BASE_URL %>static/config.js"></script>
and use this volume mount configuration:
...
volumeMounts:
  - name: config-volume
    mountPath: /usr/share/nginx/html/static/config.js
    subPath: config.js
If you are using Vue 3 + Vite 3 + TypeScript, you can do this:
Create app.config.ts (not JS):
export const clientId = import.meta.env.VITE_CLIENT_ID
export const baseURL = import.meta.env.VITE_API_BASE_URL
export default {
  clientId,
  baseURL,
}
Replace the values in the assets subdirectory (improved shell script):
#!/bin/sh
# see https://stackoverflow.com/questions/18185305/storing-bash-output-into-a-variable-using-eval
ROOT_DIR=/usr/share/nginx/html

# Replace env vars in JavaScript files
echo "Replacing env constants in JS"
keys="VITE_CLIENT_ID
VITE_API_BASE_URL"

for file in $ROOT_DIR/assets/index*.js* ;
do
  echo "Processing $file ...";
  for key in $keys
  do
    value=$(eval echo \$$key)
    echo "replace $key by $value"
    sed -i 's#'"$key"'#'"$value"'#g' $file
  done
done

echo "Starting Nginx"
nginx -g 'daemon off;'
In the Dockerfile, don't forget the RUN chmod u+x step:
# build
FROM node:lts-alpine as build-stage
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build
# production
FROM nginx:stable-alpine as production-stage
COPY --from=build-stage /app/dist /usr/share/nginx/html
COPY ./docker/nginx.conf /etc/nginx/conf.d/default.conf
COPY ./docker/entrypoint.sh /entrypoint.sh
COPY ./docker/entrypoint.sh /usr/local/bin/
RUN chmod u+x /usr/local/bin/entrypoint.sh
EXPOSE 80
ENTRYPOINT ["/usr/local/bin/entrypoint.sh"]
CMD ["nginx", "-g", "daemon off;"]
Then you will be able to import it in any TypeScript file in the project:
import { baseURL } from '@/app.config';
See the update of NehaM's response: Pass environment variable into a Vue App at runtime
Related
I deployed a MERN app to a DigitalOcean droplet with Docker. If I run my docker-compose.yml file locally on my PC, it works well. I have 2 containers: 1 backend, 1 frontend. If I try to compose up on the droplet, it seems the frontend is OK but can't communicate with the backend.
I use http-proxy-middleware; my setupProxy.js file:
const { createProxyMiddleware } = require('http-proxy-middleware');

module.exports = function (app) {
  app.use(
    '/api',
    createProxyMiddleware({
      target: 'http://0.0.0.0:5001',
      changeOrigin: true,
    })
  );
};
I tried target: 'http://main-be:5001' too, as main-be is the name of my backend container, but I get the same error. The only difference is that the Request URL is http://main-be:5001/api/auth/login in the Chrome DevTools network tab.
My docker-compose.yml file:
version: '3.4'

networks:
  main:

services:
  main-be:
    image: main-be:latest
    container_name: main-be
    ports:
      - '5001:5001'
    networks:
      main:
    volumes:
      - ./backend/config.env:/app/config.env
    command: 'npm run prod'
  main-fe:
    image: main-fe:latest
    container_name: main-fe
    networks:
      main:
    volumes:
      - ./frontend/.env:/app/.env
    ports:
      - '3000:3000'
    command: 'npm start'
My Dockerfile in the frontend folder:
FROM node:12.2.0-alpine
COPY . .
RUN npm ci
CMD ["npm", "start"]
My Dockerfile in the backend folder:
FROM node:12-alpine3.14
WORKDIR /app
COPY . .
RUN npm ci --production
CMD ["npm", "run", "prod"]
backend/package.json file:
"scripts": {
"start": "nodemon --watch --exec node --experimental-modules server.js",
"dev": "nodemon server.js",
"prod": "node server.js"
},
frontend/.env file:
SKIP_PREFLIGHT_CHECK=true
HOST=0.0.0.0
backend/config.env file:
DE_ENV=development
PORT=5001
My deploy.sh script to build the images and copy them to the droplet:
#build and save backend and frontend images
docker build -t main-be ./backend & docker build -t main-fe ./frontend
docker save -o ./main-be.tar main-be & docker save -o ./main-fe.tar main-fe

#deploy services
ssh root@46.111.119.161 "pwd && mkdir -p ~/apps/first && cd ~/apps/first && ls -al && echo 'im in' && rm main-be.tar && rm main-fe.tar &> /dev/null"
#::scp file
#scp ./frontend/.env root@46.111.119.161:~/apps/first/frontend

#upload main-be.tar and main-fe.tar to VM via ssh
scp ./main-be.tar ./main-fe.tar root@46.111.119.161:~/apps/thesis/
scp ./docker-compose.yml root@46.111.119.161:~/apps/first/

ssh root@46.111.119.161 "cd ~/apps/first && ls -1 *.tar | xargs --no-run-if-empty -L 1 docker load -i"
ssh root@46.111.119.161 "cd ~/apps/first && sudo docker-compose up"
frontend/src/utils/axios.js:
import axios from 'axios';
export const baseURL = 'http://localhost:5001';
export default axios.create({ baseURL });
frontend/src/utils/constants.js:
const API_BASE_ORIGIN = `http://localhost:5001`;
export { API_BASE_ORIGIN };
I have been trying for days but can't see where the problem is, so any help is highly appreciated.
I am no expert on MERN (we mainly run Angular & .NET), but I have to warn you of one thing. We had a similar issue when setting this up in the beginning: it worked locally in containers but not on our deployment servers, because we forgot a basic thing about web applications.
Frontend applications run in your browser, whereas when you deploy an application stack somewhere else, the rest of the services (APIs, DB and such) do not. So referencing your IP/DNS/localhost inside your application won't work, because from the browser's point of view there is nothing there. A container that contains a web application only serves the (client) files to your browser; the JS and the logic are then executed inside your browser, not in the container.
I suspect this might be affecting your ability to connect to the backend.
To solve this you have two options.
Create an HTTP proxy as an additional service that your FE calls (set up a domain and routing), for instance Nginx, Traefik, etc. That proxy can then reference your backend by its service name, since it lives in the same environment as the API (a minimal nginx sketch follows below).
Expose the HTTP port directly from the container and then your FE can call remoteServerIP:exposedPort and you will connect directly to the container's interface. (NOTE: I do not recommend this way for real use, only for testing direct connectivity without any proxy)
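For option 1, a minimal nginx sketch of the /api part (assuming nginx runs as another service on the same compose network, so the main-be service name resolves):

server {
    listen 80;

    # ... serve or proxy the frontend here ...

    location /api {
        # "main-be" resolves through the compose network's internal DNS
        proxy_pass http://main-be:5001;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}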
I've been attempting to create a managed instance group on GCP which consists of instances that host a custom Docker image. However, I'm struggling to figure out how to do this with Pulumi.
Reading Google's GCP documentation, it's possible to deploy instances that host a Docker container within a managed instance group via instance templates.
Practically with gcloud this looks like:
gcloud compute instance-templates create-with-container TEMPLATE_NAME --container-image DOCKER_IMAGE
Reading Pulumi's instance template documentation, however, it's not clear how to create an instance template that would do the same thing as the command above.
Is it possible in Pulumi to create a managed instance group where the instances host a custom docker image, or will I have to do something like create an instance template manually, and refer to that within my Pulumi script?
Here's a hybrid approach that utilises both gcloud and Pulumi.
At a high level:
Create a docker container and upload to the Google Container Registry
Create an instance template using gcloud
Create a managed instance group, referencing the instance template from within the Pulumi script
#1 Creating the Docker Container
Use CloudBuild to detect changes within a Git repo, create a docker container, and upload it to the Google Container Registry.
Within my repo I have a Dockerfile with instructions on how to build the container that will be used for my instances. I use Supervisord to start and monitor my application.
Here's how it looks:
# my-app-repo/Dockerfile
FROM ubuntu:22.04
RUN apt update
RUN apt -y install software-properties-common
RUN apt install -y supervisor
COPY supervisord.conf /etc/supervisord.conf
RUN chmod 0700 /etc/supervisord.conf
COPY ./my-app /home/my-app
RUN chmod u+x /home/my-app
# HTTPS
EXPOSE 443/tcp
# supervisord support
EXPOSE 9001/tcp
CMD ["supervisord", "-c", "/etc/supervisord.conf"]
The second part of this is to build the docker container and upload it to the Google Container Registry. I do this via CloudBuild. Here's the corresponding Pulumi code (building a Golang app):
Note: make sure you've connected the repo via the CloudBuild section of the GCP website first
const myImageName = pulumi.interpolate`gcr.io/${project}/my-image-name`

const buildTrigger = new gcp.cloudbuild.Trigger("my-app-build-trigger", {
  name: "my-app",
  description: "Builds My App image",
  build: {
    steps: [
      {
        name: "golang",
        id: "build-server",
        entrypoint: "bash",
        timeout: "300s",
        args: ["-c", "go build"],
      },
      {
        name: "gcr.io/cloud-builders/docker",
        id: "build-docker-image",
        args: [
          "build",
          "-t", pulumi.interpolate`${myImageName}:$BRANCH_NAME-$REVISION_ID`,
          "-t", pulumi.interpolate`${myImageName}:latest`,
          '.',
        ],
      },
    ],
    images: [myImageName]
  },
  github: {
    name: "my-app-repo",
    owner: "MyGithubUsername",
    push: {
      branch: "^main$"
    }
  },
});
#2 Creating an Instance Template
As I haven't been able to figure out how to easily create an instance template via Pulumi, I decided to use the Google SDK via the gcloud command-line tool.
gcloud compute instance-templates create-with-container my-template-name-01 \
--region us-central1 \
--container-image=gcr.io/my-project/my-image-name:main-e286d94217719c3be79aac1cbd39c0a629b84de3 \
--machine-type=e2-micro \
--network=my-network-name-59c9c08 \
--tags=my-tag-name \
--service-account=my-service-account#my-project.iam.gserviceaccount.com
I got the values above (container image, network name, etc.) simply by browsing my project on the GCP website.
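If you prefer the command line, the same values can be looked up with gcloud (the project and image names here are placeholders):

gcloud container images list-tags gcr.io/my-project/my-image-name
gcloud compute networks list
gcloud iam service-accounts list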
#3 Creating the Managed Instance Group
Having created an instance template, you can now reference it within your Pulumi script:
const myHealthCheck = new gcp.compute.HealthCheck("my-app-health-check", {
  checkIntervalSec: 5,
  timeoutSec: 5,
  healthyThreshold: 2,
  unhealthyThreshold: 5,
  httpHealthCheck: {
    requestPath: "/health-check",
    port: 80,
  },
});

const instanceGroupManager = new gcp.compute.InstanceGroupManager("my-app-instance-group", {
  baseInstanceName: "my-app-name-prefix",
  zone: hostZone,
  targetSize: 2,
  versions: [
    {
      name: "my-app",
      instanceTemplate: "https://www.googleapis.com/compute/v1/projects/my-project/global/instanceTemplates/my-template-name-01",
    },
  ],
  autoHealingPolicies: {
    healthCheck: myHealthCheck.id,
    initialDelaySec: 300,
  },
});
For completeness, I've also included another part of my Pulumi script which creates a backend service and connects it to the instance group created above via the InstanceGroupManager call. Note that the Load Balancer in this example is using TCP instead of HTTPS (My App is handling SSL connections and thus uses a TCP Network Load Balancer).
const backendService = new gcp.compute.RegionBackendService("my-app-backend-service", {
  region: hostRegion,
  enableCdn: false,
  protocol: "TCP",
  backends: [{
    group: instanceGroupManager.instanceGroup,
  }],
  healthChecks: defaultHttpHealthCheck.id,
  loadBalancingScheme: "EXTERNAL",
});

const myForwardingRule = new gcp.compute.ForwardingRule("my-app-forwarding-rule", {
  description: "HTTPS forwarding rule",
  region: hostRegion,
  ipAddress: myIPAddress.address,
  backendService: backendService.id,
  portRange: "443",
});
Note: Ideally step #2 would be done with Pulumi as well; however, I haven't worked that part out just yet.
I'm managing Kubernetes + nginx.
I'd like to install dynamic modules on the nginx that is provided by the Nginx Ingress Controller.
Those dynamic modules are not offered by the Nginx Ingress Controller's official ConfigMap options (https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/).
So I believe I need to build my own Docker image of the Nginx Ingress Controller.
(Could it be added here? https://github.com/kubernetes/ingress-nginx/blob/8951b7e22ad3952c549150f61d7346f272c563e1/images/nginx/rootfs/build.sh#L618-L632 )
Do you know how we can customize the controller and manage it with the helm chart? I'm thinking about making a fork of the controller's master repo on GitHub.
But I don't have any idea how to install a customized version of the controller with terraform + helm chart.
However, I would prefer a non-customized solution if possible (because of some annotation settings).
Environment:
Kubernetes
Nginx Ingress Controller is installed by helm chart + terraform
Nginx Ingress Controller -> https://github.com/kubernetes/ingress-nginx/tree/main/charts/ingress-nginx
Terraform:
resource "helm_release" "nginx-ingress-controller" {
  name      = "nginx-ingress-controller"
  chart     = "ingress-nginx/ingress-nginx"
  namespace = "kube-system"
  version   = "3.34.0"
}
dynamic modules
https://docs.nginx.com/nginx/admin-guide/dynamic-modules/dynamic-modules/
(the install process might use the --add-dynamic-module option, and then set load_module modules/something.so in nginx.conf via ingress.yaml)
Thank you.
TL;DR
Extend the official image with the dynamic modules, and update the helm_release terraform resource to set the controller.image.registry, controller.image.image, controller.image.tag, controller.image.digest, and controller.image.digestChroot for your custom image along with a controller.config.main-snippet to load the dynamic module(s) in the main context.
This is similar to my previous answer for building modules using the official nginx image. You can extend the ingress-nginx/controller image, build the modules in one stage, extend the official image with the dynamic modules in another stage, and use the image in your helm_release. An example of extending the ingress-nginx/controller with the echo-nginx-module:
Docker
ARG INGRESS_NGINX_CONTROLLER_VERSION
FROM registry.k8s.io/ingress-nginx/controller:${INGRESS_NGINX_CONTROLLER_VERSION} as build
ARG INGRESS_NGINX_CONTROLLER_VERSION
ENV INGRESS_NGINX_CONTROLLER_VERSION=${INGRESS_NGINX_CONTROLLER_VERSION}
USER root
RUN apk add \
automake \
ca-certificates \
curl \
gcc \
g++ \
make \
pcre-dev \
zlib-dev
# download the nginx sources matching the controller's nginx version
RUN NGINX_VERSION=$(nginx -V 2>&1 |sed -n -e 's/nginx version: //p' |cut -d'/' -f2); \
    mkdir -p /tmp/nginx && \
    curl -L "http://nginx.org/download/nginx-${NGINX_VERSION}.tar.gz" | tar -C /tmp/nginx --strip-components=1 -xz
WORKDIR /src/echo-nginx-module
RUN curl -L https://github.com/openresty/echo-nginx-module/archive/refs/tags/v0.63.tar.gz | tar --strip-components=1 -xz
WORKDIR /tmp/nginx
RUN ./configure --with-compat --add-dynamic-module=/src/echo-nginx-module && \
make modules
FROM registry.k8s.io/ingress-nginx/controller:${INGRESS_NGINX_CONTROLLER_VERSION}
COPY --from=build /tmp/nginx/objs/ngx_http_echo_module.so /etc/nginx/modules/
... build and push the image e.g.: docker build --rm -t myrepo/ingress-nginx/controller:v1.5.1-echo --build-arg INGRESS_NGINX_CONTROLLER_VERSION=v1.5.1 . && docker push myrepo/ingress-nginx/controller:v1.5.1-echo
Terraform
Update the terraform helm_release resource to install the chart using the custom image and add a main-snippet to set the load_module directive in the main context:
resource "helm_release" "ingress-nginx" {
  name       = "ingress-nginx"
  namespace  = "kube-system"
  repository = "https://kubernetes.github.io/ingress-nginx"
  chart      = "ingress-nginx"
  version    = "3.34.0"

  set {
    name  = "controller.image.registry"
    value = "myrepo"
  }
  set {
    name  = "controller.image.image"
    value = "ingress-nginx/controller"
  }
  set {
    name  = "controller.image.tag"
    value = "v1.5.1-echo"
  }
  set {
    name  = "controller.image.digest"
    value = "sha256:1b32b3e8c983ef4a32d87dead51fbbf2a2c085f1deff6aa27a212ca6beefcb72"
  }
  set {
    name  = "controller.image.digestChroot"
    value = "sha256:f2e1146adeadac8eebb251284f45f8569beef9c6ec834ae1335d26617da6af2d"
  }
  set {
    name  = "controller.config.main-snippet"
    value = <<EOF
load_module /etc/nginx/modules/ngx_http_echo_module.so;
EOF
  }
}
The controller.image.digest is the image RepoDigest: docker inspect myrepo/ingress-nginx/controller:v1.5.1-echo --format '{{range .RepoDigests}}{{println .}}{{end}}' |cut -d'#' -f2
The controller.image.digestChroot is the Parent sha: docker inspect myrepo/ingress-nginx/controller:v1.5.1-echo --format {{.Parent}}
Test
Create a nginx pod: kubectl run nginx --image=nginx
Expose the pod: kubectl expose pod nginx --port 80 --target-port 80
Create an ingress with a server-snippet:
cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    nginx.ingress.kubernetes.io/server-snippet: |
      location /hello {
        echo "hello, world!";
      }
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: echo.example.com
    http:
      paths:
      - path: /
        pathType: ImplementationSpecific
        backend:
          service:
            name: nginx
            port:
              number: 80
  tls:
  - hosts:
    - echo.example.com
    secretName: tls-echo
EOF
This uses cert-manager for TLS certificate issuance and external-dns for DNS management.
Test using curl:
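For example, assuming DNS for echo.example.com points at the ingress and the certificate has been issued:

curl https://echo.example.com/hello

which should return the hello, world! response produced by the server-snippet above.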
Please take a look at Cloud Native Buildpacks.
Images can be built directly from application source without additional instructions.
Maybe this nginx-buildpack solves your problem:
Loading dynamic modules
You can use templates to set the path to a dynamic module using the load_module directive.
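For reference, the directive that needs to end up in the main context of nginx.conf looks like this (the module filename is an assumption):

load_module modules/ngx_http_echo_module.so;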
PS. https://12factor.net/build-release-run
The twelve-factor app uses strict separation between the build, release, and run stages. For example, it is impossible to make changes to the code at runtime, since there is no way to propagate those changes back to the build stage.
As far as I have searched, there are several tips on placing dynamic variables in nginx/conf.d/default.conf. Instead, I want to substitute an env variable in the apiURLs.js file, which is loaded in my Vue files.
// apiURLs.js
export const apiUrls = {
  auth: "http://{{IP_ADDRESS}}:8826/auth/",
  service1: "http://{{IP_ADDRESS}}:8826/",
  service2: "http://{{IP_ADDRESS}}:8827/"
};

// in xxx.vue
let apiUrl = `${apiUrls.auth}login/`;
I tried envsubst in docker-compose.yml as follows:
environment:
  - IP_ADDRESS
command: /bin/sh -c "envsubst < /code/src/apiUrls.js | tee /code/src/apiUrls.js && nginx -g 'daemon off;'"
It appears to work, as I went into the docker container and confirmed that apiUrls.js had been replaced.
However, when I check the frontend app in the browser and inspect the API calls via the Chrome developer tools, it turns out that the API being called is still the raw URL string with the env argument, i.e. http://%24%7Bip_address%7D:8826/auth/login.
I suspect that the Vue project is pre-compiled when the docker image is built.
Is there some solution? Or is it possible to pass an env variable declared in nginx/conf.d/default.conf into apiURLs.js?
Your suspicion is correct. Any js file will almost certainly get processed into a new bundled file by webpack during the vue build, as opposed to being loaded at runtime.
If you can navigate to the Vue project inside your docker container and re-run the build, your env variables should update.
Check package.json in the vue project for different build options. You should get something like:
{
  "name": "vue-example",
  "version": "0.1.0",
  "private": true,
  "scripts": {
    "serve": "vue-cli-service serve",
    "build": "vue-cli-service build",
    "lint": "vue-cli-service lint"
  },
  //...
and be able to rebuild it with npm run build
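A rough way to do that in the running frontend container would be something like this (the container name and the /code path are assumptions based on the compose file above, and the image must still contain node/npm and the project source):

docker exec -it <frontend-container> sh -c 'cd /code && npm run build'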
(Also, in the name of stating the obvious, remember to disable the page cache in the Chrome dev tools when checking the updated site.)
I'm creating a docker image for our fluentd.
The image contains a file called http_forward.conf
It contains:
<store>
type http
endpoint_url ENDPOINTPLACEHOLDER
http_method post # default: post
serializer json # default: form
rate_limit_msec 100 # default: 0 = no rate limiting
raise_on_error true # default: true
authentication none # default: none
username xxx # default: ''
password xxx # default: '', secret: true
</store>
So this is in our image. But we want to use the image for all our environments, specified with environment variables.
So we create an environment variable for our environment:
ISSUE_SERVICE_URL = http://xxx.dev.xxx.xx/api/fluentdIssue
This env variable contains dev on our dev environment, uat on uat, etc.
Then we want to replace our ENDPOINTPLACEHOLDER with the value of our env variable. In bash we can use:
sed -i -- 's|ENDPOINTPLACEHOLDER|'"$ISSUE_SERVICE_URL"'|g' .../http_forward.conf
But how/when do we have to execute this command if we want to use this in our docker container? (we don't want to mount this file)
We did that via Ansible.
Put the file http_forward.conf in as a template, deploy the change depending on the environment, then mount the folder (including the conf file) into the docker container.
ISSUE_SERVICE_URL = http://xxx.{{ environment }}.xxx.xx/api/fluentdIssue
The playbook will be something like this (I haven't tested it):
- template: src=http_forward.conf.j2 dest=/config/http_forward.conf mode=0644
- docker:
    name: "fluentd"
    image: "xxx/fluentd"
    restart_policy: always
    volumes:
      - /config:/etc/fluent
In your Dockerfile you should have a line starting with CMD somewhere. You should add it there.
Or you can do it more cleanly: set the CMD line to call a script instead, for example CMD ./startup.sh. The file startup.sh will then contain your sed command followed by the command to start your fluentd (I assume that is currently the CMD).
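A minimal startup.sh along those lines (the config path and the fluentd start command are assumptions; adjust them to your image):

#!/bin/sh
# replace the placeholder with the environment-specific endpoint
sed -i 's|ENDPOINTPLACEHOLDER|'"$ISSUE_SERVICE_URL"'|g' /fluentd/etc/http_forward.conf
# then start fluentd in the foreground
exec fluentd -c /fluentd/etc/fluent.conf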