My project setup is like this:
project-ci (only docker)
nextjs
backend
Staging and Production are the same, so in the staging docker-compose.yml file I pass only this argument (both environments are built with npm run build first and then started with npm run start):
args:
NEXT_PUBLIC_URL: arbitrary_value
In the Dockerfile of the nextjs service I put these instructions:
ARG NEXT_PUBLIC_URL
ENV NEXT_PUBLIC_URL=$NEXT_PUBLIC_URL
so the variable should then be accessible in Next.js via process.env.NEXT_PUBLIC_URL.
So far, if I console.log(process.env.NEXT_PUBLIC_URL) in index.js, the value is always undefined. Any ideas what is wrong? I also checked the docs, but the result was still undefined:
https://nextjs.org/docs/api-reference/next.config.js/runtime-configuration
https://nextjs.org/docs/api-reference/next.config.js/environment-variables
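For context, Next.js inlines NEXT_PUBLIC_* variables into the bundle during npm run build, so in a Dockerfile the ARG/ENV pair has to appear before the build step. A minimal sketch of that ordering (the base image and project layout are assumptions, not from the question):

```dockerfile
FROM node:18-alpine
WORKDIR /app
# The ARG/ENV pair must come *before* RUN npm run build, because
# NEXT_PUBLIC_* values are baked into the bundle at build time:
ARG NEXT_PUBLIC_URL
ENV NEXT_PUBLIC_URL=$NEXT_PUBLIC_URL
COPY . .
RUN npm install
RUN npm run build
CMD ["npm", "run", "start"]
```

If the ARG/ENV lines sit after the build step, the build sees an empty value and undefined is what ends up in the bundle.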
You can access env variables using the publicRuntimeConfig property:
next.config.js
module.exports = {
  publicRuntimeConfig: {
    NEXT_PUBLIC_URL: process.env.NEXT_PUBLIC_URL,
  },
};
then:
import getConfig from "next/config";
const { publicRuntimeConfig } = getConfig();
console.log(publicRuntimeConfig.NEXT_PUBLIC_URL);
If it doesn't work, make sure the env variables actually exist inside the Docker container.
While Next.js has many pros, this is one con I have found with it. Because of its Automatic Static Optimization feature (https://nextjs.org/docs/advanced-features/automatic-static-optimization), env vars are replaced with their values at build time. This is not very well documented on their site, but after a lot of testing I figured out the solution.
So you have to tell your Docker image to run npm run build before npm run start when it starts up. The build will then consume the env vars currently present in the container and bake the values you wanted into the optimized output.
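A minimal sketch of that idea (the base image and npm script names are assumptions):

```dockerfile
FROM node:18-alpine
WORKDIR /app
COPY . .
RUN npm install
# Defer the build to container start-up so the env vars passed with
# `docker run -e` / compose `environment:` are visible when Next.js
# inlines them:
CMD ["sh", "-c", "npm run build && npm run start"]
```

The trade-off is a slow container start-up and a heavier image, since the build tooling stays in the final image.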
Cheers.
Related
I have an environment variable on my server (local raspberry pi) that stores a token that I need. When I run the docker container myself, it doesn't seem to have any problems getting the token from the environment variable. The variable is exported in my .bashrc and can be seen with echo.
When running a workflow through github actions with the same steps, the application cannot find the token.
The environment variable is consistently not there when I check for it. After thinking maybe it was having trouble accessing my .bashrc file, I made a github secret and have been trying to pass the value referencing that instead as you can see below.
I have these lines in my Dockerfile:
ARG MY_TOKEN
ENV MY_TOKEN=$MY_TOKEN
And this is my workflow yaml:
docker build --build-arg MY_TOKEN=${{ secrets.MY_TOKEN }} -t my_img ~/my_project
docker run -de MY_TOKEN --restart=always --name=my_container my_img
Any guidance will be greatly appreciated. I feel like I could get this to work by passing system arguments, but I'm unsure if that's good practice.
I'd feel better if someone could point out the bonehead mistake that's holding me back before I try something else.
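For what it's worth, a variant to consider (an assumption on my part, not from the thread): -e MY_TOKEN with no value copies the variable from the shell that invokes docker run, which in a workflow step is not your .bashrc environment, so the secret can instead be passed explicitly at run time:

```shell
# Hypothetical workflow step: inject the secret into the container's
# environment directly, without relying on the runner's shell or a
# build arg baked into the image.
docker run -d -e MY_TOKEN="${{ secrets.MY_TOKEN }}" --restart=always --name=my_container my_img
```

This also avoids leaving the token in an image layer, where a build arg can be recovered from the image history.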
I have an application with Vue 3 + Vite. I'm trying to configure the env variables in Docker Compose, but at the Vite configuration step the env vars were empty; I deduce that this step doesn't happen at run time and that the Vite config is evaluated at build time.
Furthermore, I see an option with .env files (.env.staging, .env.production, ...), but I'm interested in a more dynamic option: in Docker I want different configurations for dev, stg, pro, and so on.
The question is: does anyone know an alternative to .env files?
I'm trying to move some of our environment variables out of the .env file onto the operating system, and it seems to be working with some of the variables but not with others. It's a complicated issue so let me show while telling.
On my local Windows OS I set the variables COLORS_PRIMARY and TEST_LABEL.
I access both in nuxt.config.js and also in a page component.
// ./nuxt.config.js (irrelevant parts removed)
vuetify: {
  theme: {
    themes: {
      light: {
        primary: process.env.COLORS_PRIMARY || '#EB4E1A',
      },
    },
  },
},
publicRuntimeConfig: {
  TEST_LABEL: process.env.TEST_LABEL,
  COLORS_PRIMARY: process.env.COLORS_PRIMARY,
}
// ./pages/index.vue
<template>
  <div class="version-label">
    label: {{ $config.TEST_LABEL || 'config empty' }} |
    color: {{ $config.COLORS_PRIMARY || 'color empty' }}
  </div>
</template>
Both of these work fine locally - I can see their values, as set in Windows' Environment Variables panel, printed to the page and in View Page Source, and Vuetify parses COLORS_PRIMARY correctly, setting a button I have in another component to #FFFF00.
The trouble comes in when I deploy it with Docker to Kubernetes Engine and then try to pull the values from the Workload's YAML file, at which point only TEST_LABEL pulls through, but not COLORS_PRIMARY.
The weird part is that both values are visible in View Page Source and printed to the screen, but the button's background color is #EB4E1A - the fallback value set in nuxt.config.js in the vuetify.theme object.
From all of this I've gathered that, for some reason, client-side env values work perfectly fine, but server-side values (or at least values accessed in nuxt.config.js) aren't accessible, and I can't figure out why or how to overcome it.
My best guess is that it has something to do with how I dockerize the application, or how I use process.env.whatever in nuxt.config.js, so I reckon my configuration probably has to change somehow, but I'm not sure what to change it to.
The only resource I could find that almost touches on what I'm on about here is this dev.to article, but he's just talking about OS-based environment variables in general, and I can't figure out how to translate it into something useful to my situation.
Here's my Dockerfile:
FROM node:14.15.4-alpine
# create destination directory
RUN mkdir -p /usr/src/nuxt-app
WORKDIR /usr/src/nuxt-app
# update and install dependencies
RUN apk update && apk upgrade
RUN apk add git
# copy the app, note .dockerignore
COPY . /usr/src/nuxt-app/
RUN npm install
RUN npm run build
EXPOSE 8080
ENV NUXT_HOST=0.0.0.0
ENV NUXT_PORT=8080
CMD [ "npm", "start" ]
Basically, what I'm trying to do is replicate how Heroku handles environment variables, in GCP and Kubernetes Engine, so if you don't know how to answer my initial question, I'll equally appreciate any advice on other ways of storing and accessing env vars on GCP+KE.
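One thing worth noting (my own reading, not from the thread): everything in nuxt.config.js outside publicRuntimeConfig is evaluated during npm run build, which in the Dockerfile above happens at image build time, where the Workload YAML env isn't present yet; only the runtimeConfig values are read again at run time. A hedged sketch of passing the theme value at build time instead:

```dockerfile
# Hypothetical variant of the Dockerfile above: values read directly in
# nuxt.config.js (like vuetify.theme) are fixed at `npm run build` time,
# so they must be present during the image build, e.g. as a build arg.
FROM node:14.15.4-alpine
WORKDIR /usr/src/nuxt-app
COPY . /usr/src/nuxt-app/
ARG COLORS_PRIMARY
ENV COLORS_PRIMARY=$COLORS_PRIMARY
RUN npm install
RUN npm run build
ENV NUXT_HOST=0.0.0.0
ENV NUXT_PORT=8080
EXPOSE 8080
CMD [ "npm", "start" ]
```

Built with something like docker build --build-arg COLORS_PRIMARY='#FFFF00' . - env values set in the Workload YAML arrive too late to affect the Vuetify theme, which would explain why only the publicRuntimeConfig values change.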
I have a react app built with webpack that I want to deploy inside a docker container. I'm currently using the DefinePlugin to pass the api url for the app along with some other environment variables into the app during the build phase. The relevant part of my webpack config looks something like:
plugins: [
  new DefinePlugin({
    GRAPHQL_API_URL: JSON.stringify(process.env.GRAPHQL_API_URL),
    DEBUG: process.env.DEBUG,
    ...
  }),
  ...
]
Since this strategy requires the environment variables at build time, my Dockerfile is a little icky: I need to put the webpack build call inside the CMD instruction:
FROM node:10.16.0-alpine
WORKDIR /usr/app/
COPY . ./
RUN npm install
# EXPOSE and serve -l ports should match
EXPOSE 3000
CMD npm run build && npm run serve -- -l 3000
I'd love for the build step in webpack to be a layer in the docker container (a RUN command), so I could potentially clean out all the source files after the build succeeds, and so start up is faster. Is there a standard strategy for dealing with this issue of using information from the docker environment when you are only serving static files?
How do I use environment variables in a static site inside docker?
This question is broader than your specific problem, I think. The generic answer is that you can't, by the nature of the fact that the content is static. If you need the API URL to be dynamic and modifiable at runtime, then there needs to be some feature to support that. I'm not familiar enough with webpack to know if this can work, but there is a lot of information at the following link that might help you:
Passing environment-dependent variables in webpack
Is there a standard strategy for dealing with this issue of using information from the docker environment when you are only serving static files?
If you are happy to have the API URL baked into the image, then the standard strategy with static content in general is to use a multi-stage build. This generates the static content and then copies it into a new base image, leaving behind any dependencies that were required for the build.
https://docs.docker.com/develop/develop-images/multistage-build/
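As a sketch of that strategy applied to the Dockerfile above (the nginx base image and the build output directory are assumptions; adjust the output path to whatever your webpack config emits):

```dockerfile
# Stage 1: build the static bundle, with the env vars baked in at build time.
FROM node:10.16.0-alpine AS build
WORKDIR /usr/app/
COPY . ./
RUN npm install
ARG GRAPHQL_API_URL
ENV GRAPHQL_API_URL=$GRAPHQL_API_URL
RUN npm run build

# Stage 2: keep only the static output; node_modules and sources stay behind.
FROM nginx:alpine
COPY --from=build /usr/app/build /usr/share/nginx/html
EXPOSE 80
```

The final image contains only nginx and the built files, so it starts immediately and is far smaller than the single-stage version.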
I run some installation scripts via Docker; they change ~/.bashrc, but then I need to source it to use the installed commands in the RUN instructions below.
I tried the obvious RUN . ~/.bashrc and got a /bin/sh: 13: /root/.bashrc: shopt: not found error.
I tried RUN . ~/.profile and got mesg: ttyname failed: Inappropriate ioctl for device
I do not want to use ENV instructions. The point of having external installation scripts is to use them in non-Docker environments, for example when running unit tests locally. ENV instructions would duplicate environment setup which is already done in installation scripts.
You should not try to set up shell dotfiles in Docker. Many typical paths do not run them at all; for example:
# In a Dockerfile
CMD ["some", "command", "here"]
# From the command line
docker run myimage some command here
The Docker environment is, fundamentally, different from a standalone Linux system. In addition to shell dotfiles, a "home directory" isn't really a Docker concept; and if you have a multi-part process, on Docker it's standard to run each part in a separate container, whereas on standalone Linux you could use the init system to keep all of the parts running together. If you're expecting things to work exactly the same with exactly the same installation scripts, a virtual machine would be a better technological match for what you're attempting.
("Inappropriate ioctl for device" also suggests that there are things in the dotfiles that strongly expect to be run from an actual terminal, which you don't necessarily have at docker build time.)
My generic advice here is:
If possible, install things in the "system" directories within the image and avoid needing custom environment variable settings. (Don't use a version manager like nvm or rvm; don't use a Python virtual environment.)
If you do have to set environment variables, ENV is the way to do it.
If you really can't do either of the above, you can set environment variables in an ENTRYPOINT script before launching the main process; but if it's important to you that variables show up in docker inspect or docker exec shells, they won't be set there.
(Also remember that each RUN command launches a new container with a totally new shell environment. You can RUN . .profile; foo, but the environment variable settings won't carry through to the next RUN line.)
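To illustrate the ENTRYPOINT-script option, here is a minimal shell sketch (the file path and variable name are placeholders I've invented, not from the question):

```shell
#!/bin/sh
# Stand-in for an installer-generated environment file:
cat > /tmp/env.sh <<'EOF'
export MY_TOOL_HOME=/opt/mytool
EOF

# What an ENTRYPOINT wrapper would do: source the file so the variables
# are set, then hand off to the main process. Here we just print one:
. /tmp/env.sh
echo "$MY_TOOL_HOME"
```

In a real image this would be ENTRYPOINT ["/entrypoint.sh"] with the main command as CMD, and the script would end with exec "$@" so the process inherits the variables; as noted above, those variables still won't appear in docker inspect or a docker exec shell.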