I'm trying to move some of our environment variables out of the .env file and onto the operating system, and it works with some of the variables but not with others. It's a complicated issue, so let me show while telling.
On my local Windows OS I set the variables COLORS_PRIMARY and TEST_LABEL.
I access both in nuxt.config.js and also in a page component.
// ./nuxt.config.js (irrelevant parts removed)
vuetify: {
  theme: {
    themes: {
      light: {
        primary: process.env.COLORS_PRIMARY || '#EB4E1A',
      }
    }
  }
},
publicRuntimeConfig: {
  TEST_LABEL: process.env.TEST_LABEL,
  COLORS_PRIMARY: process.env.COLORS_PRIMARY,
}
// ./pages/index.vue
<template>
  <div class="version-label">
    label: {{ $config.TEST_LABEL || 'config empty' }} |
    color: {{ $config.COLORS_PRIMARY || 'color empty' }}
  </div>
</template>
Both of these work fine locally: I can see their values, as set in Windows' Environment Variables panel, printed to the page and in View Page Source, and Vuetify parses COLORS_PRIMARY correctly, setting a button I have in another component to #FFFF00.
The trouble comes when I deploy it with Docker to Kubernetes Engine and try to pull the values from the Workload's YAML file. At that point only TEST_LABEL pulls through; COLORS_PRIMARY does not.
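For reference, the values are set in the Workload's YAML roughly like this (the TEST_LABEL value here is illustrative):
env:
  - name: TEST_LABEL
    value: "hello-from-k8s"
  - name: COLORS_PRIMARY
    value: "#FFFF00"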
The weird part is that both values are visible in View Page Source and printed to the screen, but the button's background color is #EB4E1A - the fallback value set in nuxt.config.js in the vuetify.theme object.
From all of this I've gathered that for some reason client-side env values work perfectly fine but server-side values (or at least, values accessed in nuxt.config.js) aren't accessible, and I can't figure out why or how to overcome it.
My best guess is that it has something to do with how I dockerize the application, or with how I use process.env.whatever in nuxt.config.js, so I reckon my configuration probably has to change somehow, but I'm not sure what to change it to.
The only resource I could find that comes close to what I'm describing is this dev.to article, but it's about OS-based environment variables in general, and I can't figure out how to translate it into something useful for my situation.
Here's my Dockerfile:
FROM node:14.15.4-alpine
# create destination directory
RUN mkdir -p /usr/src/nuxt-app
WORKDIR /usr/src/nuxt-app
# update and install dependency
RUN apk update && apk upgrade
RUN apk add git
# copy the app, note .dockerignore
COPY . /usr/src/nuxt-app/
RUN npm install
RUN npm run build
EXPOSE 8080
ENV NUXT_HOST=0.0.0.0
ENV NUXT_PORT=8080
CMD [ "npm", "start" ]
Basically, what I'm trying to do is replicate how Heroku handles environment variables, but on GCP and Kubernetes Engine. So if you don't know how to answer my initial question, I'll equally appreciate any advice on other ways of storing and accessing env vars on GCP + Kubernetes Engine.
Related
My project setup is like this:
project-ci (only docker)
nextjs
backend
Staging and Production are the same, so in the docker-compose.yml file for staging I pass only this argument (both environments are built with npm run build first and then npm run start):
args:
  NEXT_PUBLIC_URL: arbitrary_value
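(In context, that sits under the service's build section, roughly like this — the service name and context path here are illustrative:)
services:
  nextjs:
    build:
      context: ./nextjs
      args:
        NEXT_PUBLIC_URL: arbitrary_value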
In the Dockerfile of the nextjs project I put these commands:
ARG NEXT_PUBLIC_URL
ENV NEXT_PUBLIC_URL=$NEXT_PUBLIC_URL
so the variable should then be accessible in Next.js as process.env.NEXT_PUBLIC_URL.
So far, if I try to console.log(process.env.NEXT_PUBLIC_URL) in index.js, the value is always undefined. Any ideas what is wrong? I also checked these docs, but the result was still undefined:
https://nextjs.org/docs/api-reference/next.config.js/runtime-configuration
https://nextjs.org/docs/api-reference/next.config.js/environment-variables
You can access env variables using the publicRuntimeConfig property:
next.config.js
module.exports = {
  publicRuntimeConfig: {
    NEXT_PUBLIC_URL: process.env.NEXT_PUBLIC_URL,
  }
};
then:
import getConfig from "next/config";
const { publicRuntimeConfig } = getConfig();
console.log(publicRuntimeConfig.NEXT_PUBLIC_URL);
If it doesn't work, make sure the env variables are actually set inside the Docker container.
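For example, a quick way to verify from the host (the container name is a placeholder):
docker exec -it <container_name> printenv | grep NEXT_PUBLIC_URL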
While Next.js has many pros, this is the one con that I have found with it. Because of its Automatic Static Optimization feature (https://nextjs.org/docs/advanced-features/automatic-static-optimization), env vars are converted into their values at build time. This is not very well documented on their site, but after a lot of testing I figured out the solution.
So you have to tell your Docker image to run npm run build when it starts up, before npm run start. The build will then consume the env vars currently present in the container and bake the values you wanted into the optimized output.
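A minimal sketch of what that looks like as the Dockerfile's final instruction:
# build at container startup so the runtime env vars get baked into the output
CMD ["sh", "-c", "npm run build && npm run start"]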
Cheers.
I have two Dockerfiles (and may have more) with the same list of environment variables in both files. Let's say:
ENV VAR1="value1"
ENV VAR2="value2"
ENV VAR3="value3"
Can I somehow move this setup into a single file that can be used by all the Dockerfiles that require it? I want to remove the duplication and have one common place for setting those variables.
You can split these out into a custom base image. That image would look like:
# Dockerfile.env
FROM ubuntu:18.04
# (or whatever else you're using)
ENV VAR1="value1"
ENV VAR2="value2"
ENV VAR3="value3"
# and that's all
You would have to manually build this in most situations
docker build -t my/env-base -f Dockerfile.env .
and then you can refer to it in the downstream Dockerfiles
FROM my/env-base
# the rest of the Dockerfile commands as normal
Tooling like Docker Compose won't really be aware of this image layering. There's no good way to list a base image that needs to be built as a dependency of other things, but shouldn't run a container on its own. If you do change these values you'll have to manually rebuild the base image, then rebuild the application images.
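A small wrapper script can at least pin down the rebuild order; this is just a sketch using the image and file names from above:
#!/bin/sh
# rebuild the shared env base image first, then all downstream images
docker build -t my/env-base -f Dockerfile.env .
docker-compose build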
You should also consider whether you need all of these environment variables. In other SO questions I see variables used for filesystem paths (which can be fixed in an isolated Docker image), usernames (not a Docker concept really), credentials (keep far away from the image, it's really easy to get them back out), versions, and URLs. You might be able to get away with using fixed values for these (use /app rather than $INSTALL_PATH), or have a sensible default in your application code.
I have a React app built with webpack that I want to deploy inside a Docker container. I'm currently using the DefinePlugin to pass the API URL for the app, along with some other environment variables, into the app during the build phase. The relevant part of my webpack config looks something like:
plugins: [
  new DefinePlugin({
    GRAPHQL_API_URL: JSON.stringify(process.env.GRAPHQL_API_URL),
    DEBUG: process.env.DEBUG,
    ...
  }),
  ...
]
Since this strategy requires the environment variables at build time, my Dockerfile is a little icky, since I need to actually put the webpack build call as part of the CMD command:
FROM node:10.16.0-alpine
WORKDIR /usr/app/
COPY . ./
RUN npm install
# EXPOSE and serve -l ports should match
EXPOSE 3000
CMD npm run build && npm run serve -- -l 3000
I'd love for the build step in webpack to be a layer in the Docker container (a RUN command), so I could potentially clean out all the source files after the build succeeds, and so startup is faster. Is there a standard strategy for dealing with this issue of using information from the Docker environment when you are only serving static files?
How do I use environment variables in a static site inside docker?
This question is broader than your specific problem, I think. The generic answer is: you can't, by the nature of the fact that the content is static. If you need the API URL to be dynamic and modifiable at runtime, then there needs to be some feature to support that. I'm not familiar enough with webpack to know if this can work, but there is a lot of information at the following link that might help you:
Passing environment-dependent variables in webpack
Is there a standard strategy for dealing with this issue of using information from the docker environment when you are only serving static files?
If you are happy to have the API URL baked into the image then the standard strategy with static content in general is to use a multistage build. This generates the static content and then copies it to a new base image, leaving behind any dependencies that were required for the build.
https://docs.docker.com/develop/develop-images/multistage-build/
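As a sketch of what that could look like here (assuming your webpack output lands in build/ and you're happy serving the result with nginx — both of those are assumptions):
# build stage: env vars are consumed here, at image build time
FROM node:10.16.0-alpine AS build
WORKDIR /usr/app/
COPY . ./
RUN npm install
ARG GRAPHQL_API_URL
ENV GRAPHQL_API_URL=$GRAPHQL_API_URL
RUN npm run build

# serve stage: only the static output is kept
FROM nginx:alpine
COPY --from=build /usr/app/build /usr/share/nginx/html
EXPOSE 80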
I'm working on building a website in Go, which is hosted on my home server via docker.
What I'm trying to do:
I make changes to my website/server locally, then push them to GitHub. I'd like to write a Dockerfile that pulls this data from my GitHub repo and builds the image, which my docker-compose file will then use to create the container.
Unfortunately, all of my attempts have been somewhat close but wrong.
FROM golang:1.8-onbuild
MAINTAINER <my info>
RUN go get <my github url>
ENV webserver_path /website/
ENV PATH $PATH: webserver_path
COPY website/ .
RUN go build .
ENTRYPOINT ./website
EXPOSE <ports>
This file is kind of a combination of a few small guides I found through Google searches, but none quite gave me the information I needed, and it never quite worked.
I'm hoping somebody with decent docker experience can just put a Dockerfile together for me to use as a guide so I can find what I'm doing wrong? I think what I'm looking for can be done in only a few lines, and mine is a little more verbose than needed.
ADDITIONAL BUT PROBABLY UNNECESSARY INFORMATION BELOW
Project layout:
Data: where my Go files are. (Sidenote: this was throwing errors when I tried to build the image, something about not being in the environment path. Not sure if that is helpful.)
Static: CSS, JS, Images
TPL: go template files
Main.go: launches server/website
There are several strategies:
Use a pre-built app. Build your app with the go build command for the target system architecture and OS (using the GOOS and GOARCH environment variables, for example), then use the COPY instruction to move the built binary (with assets and templates) into your WORKDIR, and finally run it via CMD or ENTRYPOINT (the latter is preferable). The Dockerfile for this example will look like:
FROM scratch
ENV PORT 8000
EXPOSE $PORT
COPY advent /
CMD ["/advent"]
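The binary itself would be built on the host beforehand, e.g.:
# CGO disabled so the statically linked binary can run on a scratch image
GOOS=linux GOARCH=amd64 CGO_ENABLED=0 go build -o advent .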
Build via the Dockerfile. A typical Dockerfile:
# Start from a Debian image with the latest version of Go installed
# and a workspace (GOPATH) configured at /go.
FROM golang
# Copy the local package files to the container's workspace.
ADD . /go/src/github.com/golang/example/outyet
# Build the outyet command inside the container.
# (You may fetch or manage dependencies here,
# either manually or with a tool like "godep".)
RUN go install github.com/golang/example/outyet
# Run the outyet command by default when the container starts.
ENTRYPOINT /go/bin/outyet
# Document that the service listens on port 8080.
EXPOSE 8080
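Then build and run it with something like:
docker build -t outyet .
docker run --rm -p 8080:8080 outyet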
Use GitHub. Build your app and push it to Docker Hub as a ready-to-use image.
GitHub supports webhooks, which can be used to do all sorts of things automagically when you push to a git repo. Since you're already running a web server on your home box, why don't you have GitHub send a POST request to it when it receives a commit on master, and have your home box re-download the git repo and restart web services from that?
I was able to solve my issue by creating an automated build through Docker Hub, and just using this for my Dockerfile:
FROM golang:onbuild
EXPOSE <ports>
It isn't exactly the correct answer to my question, but it is an effective workaround. The automated build connects with my GitHub repo the way I was hoping my Dockerfile would.
I have a custom module that I want to install on a container running the bitnami/magento Docker image within a Kubernetes cluster.
I am currently trying to install the module from a local directory via the container's Dockerfile:
# run bitnami's magento container
FROM bitnami/magento:2.2.5-debian-9
# add magento_code directory to the bitnami magento install
# ./magento_data/code contains the module, i.e. Foo/Bar
ADD ./magento_data/code /opt/bitnami/magento/htdocs/app/code
After building and running this image, the site pings back a 500 error. The pod logs show that Magento installs correctly, but it doesn't know what to do with the custom module:
Exception #0 (UnexpectedValueException): Setup version for module 'Foo_Bar' is not specified
Therefore, to get things working, I have to open a shell into the container and run some commands:
$ php /opt/bitnami/magento/htdocs/bin/magento setup:upgrade
$ chown -R bitnami:daemon /opt/bitnami/magento/htdocs
The first sorts out the Magento setup issue; the second ensures that the next time an HTTP request comes in, Magento is able to correctly generate any directories and files it needs.
This gives me a functioning container; however, Kubernetes is not able to rebuild this container on its own, as I am manually running a bunch of commands after Magento has installed.
I thought about running the above commands within the container's readinessProbe, but I'm not sure it would work, as I'm not 100% sure of the state of Magento when that is first called. It also seems very hacky.
Any advice on how to best set up custom modules within a bitnami/magento container would be much appreciated.
UPDATE:
Since opening this issue I've been discussing it further on Github: https://github.com/bitnami/bitnami-docker-magento/issues/82
I've got it working by using Composer instead of manually adding the module to the app/code directory.
I did this by first adding the module to Packagist, then storing my Magento Marketplace authentication details in auth.json:
{
  "http-basic": {
    "repo.magento.com": {
      "username": <MAGENTO_MARKETPLACE_PUBLIC_KEY>,
      "password": <MAGENTO_MARKETPLACE_PRIVATE_KEY>
    }
  }
}
You can get the public and private key values by creating a new access key within Marketplace. Place the file in the module's root, alongside your composer.json.
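If I remember the Composer CLI correctly, you can also have Composer write this file for you rather than creating it by hand:
composer config http-basic.repo.magento.com <MAGENTO_MARKETPLACE_PUBLIC_KEY> <MAGENTO_MARKETPLACE_PRIVATE_KEY>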
Once I had that I updated my Dockerfile to use the auth.json and require the custom module:
# run bitnami's magento container
FROM bitnami/magento:2.2.5
# Require custom modules
WORKDIR /opt/bitnami/magento/htdocs/
ADD ./auth.json .
RUN composer require foo/bar
I then completed a new install, creating the db container alongside the Magento container. However, it should also work fine with an existing db, so long as the module versions are the same.