I've been reading other questions about getting Kubernetes environment variables to work in a Next.js app, but none of them has an accepted answer so far.
My app works fine using .env.local, but the variable comes back undefined when the app is deployed to K8s.
This is my next.config.js
module.exports = {
  env: {
    NEXT_PUBLIC_API_BASE_URL: process.env.NEXT_PUBLIC_API_BASE_URL,
  },
};
K8s environment:
Can anyone help me get that environment variable working in my Next.js app?
Right now I use a simple workaround: I add ARG and ENV to the Dockerfile and inject the value when I build the Docker image.
Dockerfile:
ARG NEXT_PUBLIC_API_BASE_URL
ENV NEXT_PUBLIC_API_BASE_URL=${NEXT_PUBLIC_API_BASE_URL}
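For reference, the build-time injection would look roughly like this (the image tag and value are illustrative):
docker build \
  --build-arg NEXT_PUBLIC_API_BASE_URL=https://api.example.com \
  -t my-next-app:latest .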
You should provide the env vars via a .env.local file, in the form of a ConfigMap (https://nextjs.org/docs/basic-features/environment-variables).
In Kubernetes you create the ConfigMap like so:
apiVersion: v1
kind: ConfigMap
metadata:
  name: env-local
data:
  .env: |-
    NEXT_PUBLIC_API_URL=http://your.domain.com/api
    API_URL=http://another.endpoint.com/serverSide
Then you mount that ConfigMap as a file into your deployment; it then becomes available at /app/.env.local:
apiVersion: apps/v1
kind: Deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: your-app
  template:
    metadata:
      labels:
        app: your-app
    spec:
      containers:
      - image: your/image:latest
        imagePullPolicy: Always
        name: your-app
        ports:
        volumeMounts:
        - mountPath: /app/.env.local
          name: env-local
          readOnly: true
          subPath: .env.local
      volumes:
      - configMap:
          defaultMode: 420
          items:
          - key: .env
            path: .env.local
          name: env-local
        name: env-local
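Assuming the two manifests are saved as configmap.yaml and deployment.yaml (file names are illustrative), apply them with:
kubectl apply -f configmap.yaml -f deployment.yaml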
What also worked, for me at least, for server-side vars was simply adding them as regular env vars in my deployment: https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/#define-an-environment-variable-for-a-container
apiVersion: v1
kind: Pod
metadata:
  name: your-app
  labels:
    purpose: demonstrate-envars
spec:
  containers:
  - name: your-app-container
    image: gcr.io/google-samples/node-hello:1.0
    env:
    - name: DEMO_GREETING
      value: "Hello from the environment"
    - name: DEMO_FAREWELL
      value: "Such a sweet sorrow"
To expose values to the client as well, I combined this with runtime configuration in next.config.js:
const withSvgr = require('next-svgr');

module.exports = {
  // Will only be available on the server side
  serverRuntimeConfig: {
    API_URL: process.env.API_URL,
  },
  // Will be available on both server and client
  publicRuntimeConfig: {
    NEXT_PUBLIC_API_URL: process.env.API_URL,
  },
};
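These values can then be read anywhere via Next.js's getConfig; a minimal sketch:
import getConfig from 'next/config';

const { serverRuntimeConfig, publicRuntimeConfig } = getConfig();

console.log(serverRuntimeConfig.API_URL);             // server side only
console.log(publicRuntimeConfig.NEXT_PUBLIC_API_URL); // server and client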
I spent a whole day experimenting with ways to get my vars into a Next.js app without exposing them in the repo. None of the clues mentioned above did the job, and neither did the official docs. I use GitLab CI/CD for the build stage and K8s for deployments. I finally made it work like so:
Create project variables in GitLab.
In .gitlab-ci.yml, reconstruct .env.local (since that's the only place you can get the vars from):
- docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
- touch .env.local
- echo "NEXT_PUBLIC_API_KEY='$NEXT_PUBLIC_API_KEY'" >> .env.local
...
- echo "NEXT_PUBLIC_MEASUREMENT_ID='$NEXT_PUBLIC_MEASUREMENT_ID'" >> .env.local
- docker build -t $IMAGE_TAG .
- docker push $IMAGE_TAG
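This works because the reconstructed .env.local ends up in the build context, so next build inlines the NEXT_PUBLIC_ values into the client bundle. A sketch of the relevant Dockerfile steps, assuming .env.local isn't excluded by .dockerignore:
COPY . .            # the generated .env.local is copied in along with the source
RUN npm run build   # NEXT_PUBLIC_ vars are inlined into the bundle at this step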
I ran into this issue and got it working with Docker while still respecting the 12-factor app rules. The TL;DR is that you need to modify your next.config.js and _app.js files as follows:
next.config.js
/** @type {import('next').NextConfig} */
const nextConfig = {
  publicRuntimeConfig: {
    // expose only the NEXT_PUBLIC_ variables from process.env
    processEnv: Object.fromEntries(
      Object.entries(process.env).filter(([key]) =>
        key.includes('NEXT_PUBLIC_')
      )
    ),
  },
}

module.exports = nextConfig
_app.js
import App from 'next/app'

function MyApp({ Component, pageProps }) {
  return <Component {...pageProps} />
}

// Only uncomment this method if you have blocking data requirements for
// every single page in your application. This disables the ability to
// perform automatic static optimization, causing every page in your app to
// be server-side rendered.
MyApp.getInitialProps = async (appContext) => {
  // calls page's `getInitialProps` and fills `appProps.pageProps`
  const appProps = await App.getInitialProps(appContext);
  return { ...appProps }
}

export default MyApp
To access the environment variables in any page or component simply add this:
import getConfig from 'next/config';

const {
  publicRuntimeConfig: { processEnv },
} = getConfig();
Here's an example of what a component would look like:
import getConfig from 'next/config';

const {
  publicRuntimeConfig: { processEnv },
} = getConfig();

const Header = () => {
  const { NEXT_PUBLIC_MESSAGE } = processEnv;
  return (
    <div>
      Hello, {NEXT_PUBLIC_MESSAGE}
    </div>
  )
}

export default Header;
The real issue is the way the Dockerfile starts the app: in order to load env vars at runtime, we need to start it with npm start.
I wrote an article with my findings if you want to get the full details of why and how it works: https://benmarte.com/blog/nextjs-in-docker/
I also made a sample repo which can be used as a template: https://github.com/benmarte/nextjs-docker
I will make a PR this week to the with-docker repo.
Kubernetes sets environment variables at runtime, but NEXT_PUBLIC_API_BASE_URL is resolved at BUILD TIME, not at RUN TIME.
That means the env var must be in the .env file when you run npm run build. According to the documentation, it's not possible to set that env var at runtime:
This inlining occurs at build time, so your various NEXT_PUBLIC_ envs need to be set when the project is built.
https://nextjs.org/docs/basic-features/environment-variables#exposing-environment-variables-to-the-browser
What you can do instead is implement getServerSideProps and return the value in the props.
There is also a more complex option to achieve runtime configuration; a workaround is described here: https://dev.to/akdevcraft/react-runtime-variables-49dc
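A minimal sketch of the getServerSideProps approach (page and prop names are illustrative):
// pages/index.js
export async function getServerSideProps() {
  // runs on the server per request, where the K8s env vars are visible
  return {
    props: { apiBaseUrl: process.env.NEXT_PUBLIC_API_BASE_URL || null },
  };
}

export default function Home({ apiBaseUrl }) {
  return <p>API base URL: {apiBaseUrl}</p>;
}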
Try removing the definition of the environment variable from your Dockerfile.
Then add the definition of the environment variable to your Deployment (or Pod, or ReplicaSet), for example:
...
spec:
  containers:
  - name: test-container
    image: gcr.io/kuar-demo/kuard-amd64:blue
    imagePullPolicy: Always
    env:
    - name: NEXT_PUBLIC_API_BASE_URL
      value: ANY_VALUE
...
Related
I'm using this Splunk image on Kubernetes (testing locally with minikube).
After applying the code below I'm facing the following error:
ERROR: Couldn't read "/opt/splunk/etc/splunk-launch.conf" -- maybe
$SPLUNK_HOME or $SPLUNK_ETC is set wrong?
My Splunk deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
name: splunk
labels:
app: splunk-app
tier: splunk
spec:
selector:
matchLabels:
app: splunk-app
track: stable
replicas: 1
template:
metadata:
labels:
app: splunk-app
tier: splunk
track: stable
spec:
volumes:
- name: configmap-inputs
configMap:
name: splunk-config
containers:
- name: splunk-client
image: splunk/splunk:latest
imagePullPolicy: Always
env:
- name: SPLUNK_START_ARGS
value: --accept-license --answer-yes
- name: SPLUNK_USER
value: root
- name: SPLUNK_PASSWORD
value: changeme
- name: SPLUNK_FORWARD_SERVER
value: splunk-receiver:9997
ports:
- name: incoming-logs
containerPort: 514
volumeMounts:
- name: configmap-inputs
mountPath: /opt/splunk/etc/system/local/inputs.conf
subPath: "inputs.conf"
---
apiVersion: v1
kind: ConfigMap
metadata:
name: splunk-config
data:
inputs.conf: |
[monitor:///opt/splunk/var/log/syslog-logs]
disabled = 0
index=my-index
I also tried to add these env variables, with no success:
- name: SPLUNK_HOME
  value: /opt/splunk
- name: SPLUNK_ETC
  value: /opt/splunk/etc
I've tested the image with the following Docker Compose configuration, and it ran successfully:
version: '3.2'
services:
  splunk-forwarder:
    hostname: splunk-client
    image: splunk/splunk:latest
    environment:
      SPLUNK_START_ARGS: --accept-license --answer-yes
      SPLUNK_USER: root
      SPLUNK_PASSWORD: changeme
    ports:
    - "8089:8089"
    - "9997:9997"
Saw this on the Splunk forum, but the answer did not help in my case.
Any ideas?
Edit #1:
Minikube version: upgraded from v0.33.1 to v1.2.0.
Full error log:
$ kubectl logs -l tier=splunk
splunk_common : Set first run fact -------------------------------------- 0.04s
splunk_common : Set privilege escalation user --------------------------- 0.04s
splunk_common : Set current version fact -------------------------------- 0.04s
splunk_common : Set splunk install fact --------------------------------- 0.04s
splunk_common : Set docker fact ----------------------------------------- 0.04s
Execute pre-setup playbooks --------------------------------------------- 0.04s
splunk_common : Setting upgrade fact ------------------------------------ 0.04s
splunk_common : Set target version fact --------------------------------- 0.04s
Determine captaincy ----------------------------------------------------- 0.04s
ERROR: Couldn't read "/opt/splunk/etc/splunk-launch.conf" -- maybe $SPLUNK_HOME or $SPLUNK_ETC is set wrong?
Edit #2: Adding the ConfigMap to the code above (it was originally removed from the question for the sake of brevity). This turned out to be the cause of the failure.
Based on the direction pointed out by @Amit-Kumar-Gupta, I'll try to give a full solution.
This PR change makes it so that containers cannot write to secret, configMap, downwardAPI, and projected volumes, since the runtime now mounts them as read-only.
This change has been in effect since v1.9.4 and can lead to issues for various applications that chown or otherwise manipulate their configs.
When Splunk boots, it registers all the config files in various locations on the filesystem under ${SPLUNK_HOME}, which in our case is /opt/splunk.
The error in my question reflects that Splunk failed to manipulate the relevant files in the /opt/splunk/etc directory because of the change in the mounting mechanism.
Now for the solution.
Instead of mounting the configuration file directly inside the /opt/splunk/etc directory we'll use the following setup:
We'll start the docker container with a default.yml file which will be mounted in /tmp/defaults/default.yml.
For that, we'll create the default.yml file with: docker run splunk/splunk:latest create-defaults > ./default.yml
Then, we'll go to the splunk: block and add a conf: sub-block under it:
splunk:
  conf:
    inputs:
      directory: /opt/splunk/etc/system/local
      content:
        monitor:///opt/splunk/var/log/syslog-logs:
          disabled: 0
          index: syslog-index
    outputs:
      directory: /opt/splunk/etc/system/local
      content:
        tcpout:splunk-receiver:
          server: splunk-receiver:9997
This setup will generate two files with a .conf suffix (remember that the sub-block starts with conf:), owned by the correct Splunk user and group.
The inputs: section will produce an inputs.conf with the following content:
[monitor:///opt/splunk/var/log/syslog-logs]
disabled = 0
index=syslog-index
In a similar way, the outputs: block will resemble the following:
[tcpout:splunk-receiver]
server=splunk-receiver:9997
This replaces passing an environment variable directly, like I did in the original code:
SPLUNK_FORWARD_SERVER: splunk-receiver:9997
Now everything is up and running (:
Full setup of the forwarder.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: splunk-forwarder
  labels:
    app: splunk-forwarder-app
    tier: splunk
spec:
  selector:
    matchLabels:
      app: splunk-forwarder-app
      track: stable
  replicas: 1
  template:
    metadata:
      labels:
        app: splunk-forwarder-app
        tier: splunk
        track: stable
    spec:
      volumes:
      - name: configmap-forwarder
        configMap:
          name: splunk-forwarder-config
      containers:
      - name: splunk-forwarder
        image: splunk/splunk:latest
        imagePullPolicy: Always
        env:
        - name: SPLUNK_START_ARGS
          value: --accept-license --answer-yes
        - name: SPLUNK_PASSWORD
          valueFrom:
            secretKeyRef:
              name: splunk-secret
              key: password
        volumeMounts:
        - name: configmap-forwarder
          mountPath: /tmp/defaults/default.yml
          subPath: "default.yml"
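The splunk-forwarder-config ConfigMap referenced above can be created from the generated file, for example:
kubectl create configmap splunk-forwarder-config --from-file=default.yml=./default.yml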
For further reading:
https://splunk.github.io/docker-splunk/ADVANCED.html
https://github.com/splunk/docker-splunk/blob/develop/docs/ADVANCED.md
https://www.splunk.com/blog/2018/12/17/deploy-splunk-enterprise-on-kubernetes-splunk-connect-for-kubernetes-and-splunk-insights-for-containers-beta-part-1.html
https://splunk.github.io/splunk-ansible/ADVANCED.html#inventory-script
https://static.rainfocus.com/splunk/splunkconf18/sess/1521146368312001VwQc/finalPDF/FN1089_DockerizingSplunkatScale_Final_1538666172485001Loc0.pdf
There are two questions here: (1) why are you seeing that error message, and (2) how to achieve the desired behaviour you're hoping to achieve that you're trying to express through your Deployment and ConfigMap. Unfortunately, I don't believe there's a "cloud-native" way to achieve what you want, but I can explain (1), why it's hard to do (2), and point you to something that might give you a workaround.
The error message:
ERROR: Couldn't read "/opt/splunk/etc/splunk-launch.conf" -- maybe $SPLUNK_HOME or $SPLUNK_ETC is set wrong?
does not (necessarily) imply that you've set those environment variables incorrectly. It means Splunk is looking for a file in that location and can't read one there, and it's hinting that maybe you've put the file somewhere else but forgot to tell Splunk (via the $SPLUNK_HOME or $SPLUNK_ETC environment variables) where to look.
The reason why it can't read /opt/splunk/etc/splunk-launch.conf is because, by default, the /opt/splunk directory would be populated with tons of subdirectories and files with various configurations, but because you're mounting a volume at /opt/splunk/etc/system/local/inputs.conf, nothing can be written to /opt/splunk.
If you simply don't mount that volume, or mount it somewhere else (e.g. /foo/inputs.conf) the Deployment will start fine. Of course the problem is that it won't know anything about your inputs.conf, and it'll use the default /opt/splunk/etc/system/local/inputs.conf it writes there.
I assume what you want to do is allow Splunk to generate all the directories and files it likes, you only want to set the contents of that one file. While there is a lot of nuance about how Kubernetes deals with volume mounts, in particular those coming from ConfigMaps, and in particular when using subPath, at the end of the day I don't think there's a clean way to do what you want.
I did an Internet search for "splunk kubernetes inputs.conf" and this was my first result: https://www.splunk.com/blog/2019/02/11/deploy-splunk-enterprise-on-kubernetes-splunk-connect-for-kubernetes-and-splunk-insights-for-containers-beta-part-2.html. This is from official splunk.com, and it's advising running things like kubectl cp and kubectl exec to:
"Exec" into the master pod, and run ... commands, to copy (configuration) into the (target) directory and chown to splunk user.
🤷🏾‍♂️
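If you go that route, the commands would look roughly like this (the pod name is hypothetical):
kubectl cp ./inputs.conf splunk-master-0:/opt/splunk/etc/system/local/inputs.conf
kubectl exec splunk-master-0 -- chown splunk:splunk /opt/splunk/etc/system/local/inputs.conf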
One solution that worked for me in a K8s deployment was:
Amend the image's Dockerfile with the lines below:
RUN chmod -R 755 /opt/ansible
RUN echo " ignore_errors: yes" >> /opt/ansible/roles/splunk_common/tasks/change_splunk_directory_owner.yml
Then use that same image in your deployment from your private repo with the env variables below:
# has to run as root, otherwise it won't let you write to $SPLUNK_HOME/S
env:
- name: SPLUNK_START_ARGS
  value: --accept-license --answer-yes --no-prompt
- name: SPLUNK_USER
  value: root
I've tried https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
and the base64-encoded solution in a yaml file (which is ultimately what I need to do) doesn't authenticate. (Apparently this is a common problem; if anyone has a yaml file that has it working I'd love to see it, or another method that allows secure deployment from a private repo, just so we don't get stuck in the X-Y problem.)
So I tried the following:
kubectl create secret generic registrykey --from-file=.dockerconfigjson=/home/dbosh/.docker/config.json --type=kubernetes.io/dockerconfigjson
with the deployment file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my_deployment
spec:
  selector:
    matchLabels:
      app: my_deployment
      tier: backend
      track: stable
  replicas: 7
  template:
    metadata:
      labels:
        app: my_deployment
        tier: backend
        track: stable
    spec:
      containers:
      - name: my_deployment
        image: "my_private_repo:image_name"
        ports:
        - name: http
          containerPort: 8082
      imagePullSecrets:
      - name: registrykey
However, whenever I try to deploy, I keep getting "pull access denied for my_private_repo, repository does not exist or may require 'docker login'".
To create the Docker login file, I have indeed logged in. I also tested again, logging in immediately before regenerating the secret and redeploying, and it still doesn't authenticate.
Any help appreciated please.
UPDATE (thanks to a useful comment):
It would appear that my config.json after logging in looks like this:
cat .docker/config.json
{
  "auths": {
    "https://index.docker.io/v1/": {}
  },
  "HttpHeaders": {
    "User-Agent": "Docker-Client/18.09.2 (linux)"
  },
  "credsStore": "secretservice"
}
There's no token for your private repo in the config.json file, only one for Docker Hub.
So you need to re-authenticate against your private registry:
docker logout <my_private_repo> && docker login <my_private_repo> -u <user> -p <pass> && cat ~/.docker/config.json
The output should then contain something like this:
"auths": {
"my_private_repo": {
"auth": "c3VraG92ZXJzsdfdsQXNocmV2b2h1czg4"
}
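Alternatively, you can create the pull secret directly instead of exporting config.json (which, with credsStore set, may keep the actual credentials in the OS credential helper rather than in the file). A sketch with placeholder values:
kubectl create secret docker-registry registrykey \
  --docker-server=<my_private_repo> \
  --docker-username=<user> \
  --docker-password=<pass> \
  --docker-email=<email>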
I tried to set variables in an Azure Pipelines release, which a Command Task in the release then uses to replace the variables' values in a Kubernetes .yaml file.
It works fine for me, but I need to prepare a separate Command Task for each variable to replace them one by one.
For example, I set the variables TESTING1_ (value: Test1), TESTING2_ (value: Test2) and TESTING3_ (value: Test3) in the pipeline's release. Then I used a Command Task to replace only TESTING1_ with $(TESTING1_) in the Kubernetes .yaml file. Below is the original environment setting in the .yaml file:
spec:
  containers:
  - name: devops
    env:
    - name: TESTING1
      value: TESTING1_
    - name: TESTING2
      value: $(TESTING2_)
After running the pipeline's release, the values printed out in Node.js were:
console.log(process.env.TESTING1); --> Test1
console.log(process.env.TESTING2); --> $(TESTING2_)
console.log(process.env.TESTING3); --> undefined
I think you should use ConfigMaps for that (and update the values in the ConfigMap when needed). You shouldn't be updating containers directly; this gives you flexibility and manageability. Example:
apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod
spec:
  containers:
  - name: test-container
    image: k8s.gcr.io/busybox
    command: [ "/bin/sh", "-c", "env" ]
    env:
    - name: SPECIAL_LEVEL_KEY
      valueFrom:
        configMapKeyRef:
          name: special-config
          key: special.how
Then, if some value changes, you update the ConfigMap and all the pods that reference it get the new value.
https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#define-container-environment-variables-using-configmap-data
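For completeness, a sketch of the special-config ConfigMap the Pod above references (the key and value follow the Kubernetes docs example):
apiVersion: v1
kind: ConfigMap
metadata:
  name: special-config
data:
  special.how: very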
I want to pass some values from a Kubernetes yaml file to the containers. These values will be read in my Java app using System.getenv("x_slave_host").
I have this Dockerfile:
FROM jetty:9.4
...
ARG slave_host
ENV x_slave_host $slave_host
...
$JETTY_HOME/start.jar -Djetty.port=9090
The Kubernetes yaml file contains this part, where I added an env section:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: master
spec:
  template:
    metadata:
      labels:
        app: master
    spec:
      volumes:
      - name: shared-data
        emptyDir: {}
      containers:
      - name: master
        image: xregistry.azurecr.io/Y:latest
        ports:
        - containerPort: 9090
        volumeMounts:
        - name: shared-data
          mountPath: ~/.X/experiment
      - env:
        - name: slave_host
          value: slavevalue
      - name: jupyter
        image: xregistry.azurecr.io/X:latest
        ports:
        - containerPort: 8000
        - containerPort: 8888
        volumeMounts:
        - name: shared-data
          mountPath: /var/folder/experiment
      imagePullSecrets:
      - name: acr-auth
Locally, when I did the same thing using Docker Compose, it worked using args. This is a snippet:
master:
  image: master
  build:
    context: ./master
    args:
    - slave_host=slavevalue
  ports:
  - "9090:9090"
So now I am trying to do the same thing but in Kubernetes. However, I am getting the following error (deploying it on Azure):
error: error validating "D:\\a\\r1\\a\\_X\\deployment\\kub-deploy.yaml": error validating data: field spec.template.spec.containers[1].name for v1.Container is required; if you choose to ignore these errors, turn validation off with --validate=false
In other words, how do I rewrite my Docker Compose file for Kubernetes and pass this argument?
Thanks!
The env section should be added under containers, like this:
containers:
- name: master
  env:
  - name: slave_host
    value: slavevalue
To elaborate on @Kun Li's answer: besides adding environment variables directly in the Deployment manifest, you can create a ConfigMap (or a Secret, depending on the data being stored) and reference it in your manifests. This is a good way of sharing the same environment variables across applications, compared to manually adding environment variables to several different applications.
Note that a ConfigMap can consist of one or more key: value pairs and is not limited to storing environment variables; that's just one of the use cases. And as I mentioned before, consider using a Secret if the data is sensitive.
Example of a ConfigMap manifest, in this case used for storing an environment variable:
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-env-var
data:
  slave_host: slavevalue
To create a ConfigMap holding one key=value pair using kubectl create:
kubectl create configmap my-env-var --from-literal=slave_host=slavevalue
To get hold of all environment variables configured in a ConfigMap use the following in your manifest:
containers:
- envFrom:
  - configMapRef:
      name: my-env-var
Or if you want to pick one specific environment variable from your ConfigMap containing several variables:
containers:
- env:
  - name: slave_host
    valueFrom:
      configMapKeyRef:
        name: my-env-var
        key: slave_host
See this page for more examples of using ConfigMap's in different situations.
I have this repo, and docker-compose up will launch the project, create 2 containers (a DB and API), and everything works.
Now I want to build and deploy to Kubernetes. I try docker-compose build, but it complains there's no Dockerfile. So I start writing a Dockerfile and then discover that Dockerfiles don't support loading ENV vars from an env_file or .env file. What gives? How am I expected to build this image? Could somebody please enlighten me?
What is the intended workflow for building a docker image with the appropriate environment variables?
Those environment variables shouldn't be set at the Docker build step, but when running the application on Kubernetes or docker-compose.
So:
Write a Dockerfile and place it in the root folder. Something like this:
FROM node
COPY package.json .
RUN npm install
COPY . .
ENTRYPOINT ["npm", "start"]
Modify docker-compose.yaml. In the image field you must specify the name for the image to be built. It should be something like this:
image: YOUR-DOCKERHUB-USERNAME/node-rest-auth-arangodb
There is no need to set user and working_dir.
Build the image with docker-compose build (you can also do this with docker build)
Now you can use docker-compose up to run your app locally, with the .env file
To deploy it on Kubernetes you need to publish your image to Docker Hub (unless you run Kubernetes locally):
docker push YOUR-DOCKERHUB-USERNAME/node-rest-auth-arangodb
Finally, create a Kubernetes manifest. Sadly, Kubernetes doesn't support env files as docker-compose does, so you'll need to set these variables manually in the manifest:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: platform-api
  labels:
    app: platform-api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: platform-api
  template:
    metadata:
      labels:
        app: platform-api
    spec:
      containers:
      - name: platform-api
        image: YOUR-DOCKERHUB-USERNAME/node-rest-auth-arangodb
        ports:
        - containerPort: 8080
        env:
        - name: NODE_ENV
          value: develop
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: platform-db
  labels:
    app: platform-db
spec:
  replicas: 1
  selector:
    matchLabels:
      app: platform-db
  template:
    metadata:
      labels:
        app: platform-db
    spec:
      containers:
      - name: arangodb
        image: arangodb/arangodb
        ports:
        - containerPort: 8529
        env:
        - name: ARANGO_ROOT_PASSWORD
          value: localhost
Deploy it with kubectl create -f <manifest>.yaml.
Please note that this code is just indicative; I don't know your exact use case. You can find more information in the docker-compose and Kubernetes docs and tutorials. Good luck!
I've updated the project on GitHub; it all works now, and the readme documents how to run it.
I realized that env vars are considered runtime vars, which is why --env-file is an option for docker run and not docker build. This must also (I assume) be why docker-compose.yml has the env_file option, which I assume just passes the file to docker run. And in Kubernetes, I think these are passed in from a ConfigMap. This is done so the image remains more portable; the same project can be run with different vars passed in, no rebuild required.
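To illustrate the runtime-vs-build-time point, a quick sketch (the image tag is illustrative):
# build once, with no env vars baked into the image
docker build -t node-rest-auth-arangodb .

# inject the variables at run time instead
docker run --env-file .env node-rest-auth-arangodb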
Thanks ignacio-millán for the input.