I have a Next.js application that accesses a database from its API routes.
In development, I have a .env file that contains the host, port, username, password, and database name. After running npm run dev, the API functionality works as expected. Even if I run npm run build and npm run start on my local machine, it works correctly.
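For reference, the .env looks something like this (values are placeholders; the variable names match what /lib/db.tsx below reads, and MYSQL_PORT is only my guess at the port key's name):
MYSQL_HOST=127.0.0.1
MYSQL_PORT=3306
MYSQL_DATABASE=mydb
MYSQL_USER=myuser
MYSQL_PASSWORD=changeme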
The problem comes after I push to Github and have Github build the app and deploy it as a docker container. For some reason, the docker build does not accept my environment variables loaded through an attached .env file.
To further elaborate, the environment variables are either on the dev machine or attached to the docker container on the production server. They are mounted to the same place (the root directory of the project: /my-site/.env) but the production API does not work.
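For concreteness, the production container receives the file through a bind mount, along these lines (the host path and image name here are illustrative):
docker run -p 3000:3000 -v /srv/my-site/.env:/my-site/.env my-site:latest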
The environment variables are included in /lib/db.tsx:
import mysql from "mysql2/promise";

const executeQuery = async (query: string) => {
  try {
    const db = await mysql.createConnection({
      host: process.env.MYSQL_HOST,
      database: process.env.MYSQL_DATABASE,
      user: process.env.MYSQL_USER,
      password: process.env.MYSQL_PASSWORD,
    });
    const [results] = await db.execute(query, []);
    db.end();
    return results;
  } catch (error) {
    return { error };
  }
};

export default executeQuery;
This file is included in the API endpoints as:
import executeQuery from "../../../../lib/db";
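For context, a typical endpoint consuming it looks roughly like this (the handler body and query are made up for illustration):
import type { NextApiRequest, NextApiResponse } from "next";
import executeQuery from "../../../../lib/db";

export default async function handler(req: NextApiRequest, res: NextApiResponse) {
  // Hypothetical query; the real endpoints run their own SQL.
  const results = await executeQuery("SELECT 1 AS ok");
  res.status(200).json(results);
}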
Again, since it works on the development computer, I think the issue is with the building of the Docker container.
Here is my included Dockerfile:
FROM node:lts as dependencies
WORKDIR /my-site
COPY package.json yarn.lock ./
RUN yarn install --frozen-lockfile
FROM node:lts as builder
WORKDIR /my-site
COPY . .
COPY --from=dependencies /my-site/node_modules ./node_modules
RUN yarn build
FROM node:lts as runner
WORKDIR /my-site
ENV NODE_ENV production
# If you are using a custom next.config.js file, uncomment this line.
# COPY --from=builder /my-site/next.config.js ./
COPY --from=builder /my-site/public ./public
COPY --from=builder /my-site/.next ./.next
COPY --from=builder /my-site/node_modules ./node_modules
COPY --from=builder /my-site/package.json ./package.json
EXPOSE 3000
CMD ["yarn", "start"]
Any and all assistance is greatly appreciated!
Edit: Other things I have attempted:
Adding them as environment variables to the Docker container in the docker-compose file (under environment, roughly as sketched below) and verifying that they are accessible inside the container using echo $MYSQL_USER.
Mounting the .env file inside the .next folder.
Mounting the .env file inside the .next/server folder.
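The docker-compose attempt looked something like this (service and image names are placeholders):
services:
  my-site:
    image: my-site:latest
    environment:
      MYSQL_HOST: db.example.com
      MYSQL_DATABASE: mydb
      MYSQL_USER: myuser
      MYSQL_PASSWORD: changeme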
I ended up solving my own issue after hours of trying to figure it out.
My solution was to create a .env.production file and commit it to git.
I also adjusted my Dockerfile to include: COPY --from=builder /my-site/.env.production ./
I am not a fan of that solution, as it involves pushing secrets to a repo, but it works.
Having read the Quasar framework's description for Handling process.env, I understand that it is possible to add environment variables when building the application for development or production.
You can even go one step further. Supply it with values taken from the quasar dev/build env variables:
// quasar.config.js
build: {
  env: {
    FOO: process.env.FOO,
  }
}
Then I can use that variable via process.env.FOO.
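For example (a trivial sketch), anywhere in the app code:
console.log(process.env.FOO) // inlined at build time by Quasar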
For staging and production, however, I'm building a Docker image which runs an NGINX serving the final dist/spa folder. I'd like to pass an environment variable when deploying the application, so that I can configure the FOO variable depending on its value in the docker-compose.yml:
// staging
services:
  image: my-quasar-image
  environment:
    FOO: "STAGING"

// production
services:
  image: my-quasar-image
  environment:
    FOO: "PROD"
I have found a blog post which mentions that you could create a custom entrypoint.sh for the Docker image which reads env variables and adds them to the window object, but I wonder if there might be a more "elegant" solution.
The primary question is: is it possible to pass in (Docker) environment variables before the application starts, so that they are then available on process.env?
This is how I sorted out my requirement; it works perfectly for my use case.
A quick review of what I wanted to do: be able to pass environment variables via a docker-compose file to a Vue.js application, to allow different team members to test against different development APIs depending on their assignment (localhost if running the server locally, api-dev, api-staging, api-prod).
Update public/index.html to contain the following in the head:
<script>
  // CONFIGURATIONS_PLACEHOLDER
</script>
There is no need to update vue.config.js as we are using the public folder for configuration.
Create a new file env.js to consume runtime variables (keep it inside the src folder):
export default function getEnv(name) {
  return window?.configs?.[name] || process.env[name];
}
Create a new bash file set-env-variable.sh in the root folder of the app:
#!/bin/sh
# Build a JS snippet from the container's environment variables.
JSON_STRING='window.configs = { \
  "VUE_APP_VAR1":"'"${VUE_APP_VAR1}"'", \
  "VUE_APP_VAR2":"'"${VUE_APP_VAR2}"'" \
}'
# Replace the placeholder in the served index.html at container start.
sed -i "s#// CONFIGURATIONS_PLACEHOLDER#${JSON_STRING}#" /usr/share/nginx/html/index.html
exec "$@"
Update the Dockerfile (assuming it's in the root folder of your Vue app):
# build stage
FROM node:lts-alpine as build-stage
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build
# production stage
FROM nginx:stable-alpine as production-stage
COPY --from=build-stage /app/dist /usr/share/nginx/html
# the official nginx image runs scripts in /docker-entrypoint.d before starting nginx
COPY ./set-env-variable.sh /docker-entrypoint.d
RUN chmod +x /docker-entrypoint.d/set-env-variable.sh
# strips Windows line endings; if dos2unix is missing from the base image, install it first with: apk add --no-cache dos2unix
RUN dos2unix /docker-entrypoint.d/set-env-variable.sh
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
Deployment
vue-app:
  ....
  volumes:
    - "./nginx/templates/:/etc/nginx/templates/"
  environment:
    VUE_APP_VAR1: my-app
    VUE_APP_VAR2: 8080
Consuming variables in the Vue app:
import getEnv from "./service/env.js";
var myVar = getEnv("VUE_APP_VAR1");
Hi everyone, I'm facing a strange issue in creating a Docker image for a Remix app and using it inside a GitHub job.
I have this Dockerfile
FROM node:16-alpine as deps
WORKDIR /app
ADD package.json yarn.lock ./
RUN yarn install
# Build the app
FROM node:16-alpine as build
ENV NODE_ENV=production
WORKDIR /app
COPY --from=deps /app/node_modules /app/node_modules
COPY . .
RUN yarn run build
# Build production image
FROM node:16-alpine as runner
ENV NODE_ENV=production
ENV PORT=80
WORKDIR /app
COPY --from=deps /app/node_modules /app/node_modules
COPY --from=build /app/build /app/build
COPY --from=build /app/public /app/public
COPY --from=build /app/api /app/api
COPY . .
EXPOSE 80
CMD ["npm", "run", "start"]
If I build the image on my local machine, everything works fine, and I'm able to run the container and point at it.
I made a GitHub workflow to build the same image and push it to my Docker Hub.
But when the GitHub job runs, it always fails with this error:
Step 16/21 : COPY --from=build /app/build /app/build
COPY failed: stat app/build: file does not exist
My remix.run config is:
/**
 * @type {import('@remix-run/dev/config').AppConfig}
 */
module.exports = {
  appDirectory: "app",
  assetsBuildDirectory: "public/build",
  publicPath: "/build/",
  serverBuildDirectory: "api/_build",
  devServerPort: 8002,
  ignoredRouteFiles: [".*"],
};
Thanks in advance for any help
You are using the config option serverBuildDirectory: "api/_build", so when you run the Remix build, your server build output lands in api/_build/, not in the build/ directory.
In your Docker image, the last stage tries to copy the contents of the build/ directory from the previous stage named build, but that directory does not exist there; the server content is built into api/_build/ instead.
So you just don't need that line:
COPY --from=build /app/build /app/build
One possible reason why it works for you locally: you have a local directory named build/ that contains an old server build, perhaps from before you changed the serverBuildDirectory option, and it's probably ignored by your git repository.
When you build your Docker image, the stage named build copies everything from your local directory into its own environment with COPY . ., and so it picks up your local build/ directory. Then, during the last stage, that directory is copied into the final environment with COPY --from=build /app/build /app/build.
When you do the same thing in a fresh environment like GitHub Actions, the build/ directory does not exist in the environment that runs the docker command, so it's not copied from stage to stage, and you finally get an error trying to copy something that does not exist.
This artifact copying is probably not what you want: if you do the build inside your Docker image, you don't also want to copy build output in from your local environment.
To avoid these unwanted copies, you can add a .dockerignore file with at least the following entries:
node_modules
build
api/_build
public/build
I was trying to dockerize my existing simple Vue app, following this tutorial from the Vue website: https://v2.vuejs.org/v2/cookbook/dockerize-vuejs-app.html. I successfully created the image and the container. My problem is that when I edit my code, e.g. the "hello world" in App.vue, it does not automatically update (is that what they call hot reload?). Or should I migrate to the latest Vue so that it will work?
docker run -it --name=mynicevue -p 8080:8080 mynicevue/app
FROM node:lts-alpine
# install simple http server for serving static content
RUN npm install -g http-server
# make the 'app' folder the current working directory
WORKDIR /app
# copy both 'package.json' and 'package-lock.json' (if available)
COPY package*.json ./
# install project dependencies
RUN npm install
# copy project files and folders to the current working directory (i.e. 'app' folder)
COPY . .
# build app for production with minification
# RUN npm run build
EXPOSE 8080
CMD [ "http-server", "serve" ]
EDIT:
Still no luck. I commented out the npm run build. I also set up vue.config.js and added this code:
module.exports = {
  devServer: {
    watchOptions: {
      ignored: /node_modules/,
      aggregateTimeout: 300,
      poll: 1000,
    },
  }
};
Then I run the container like this:
docker run -it --name=mynicevue -v %cd%:/app -p 8080:8080 mynicevue/app
When the app launches, the browser shows a white screen and I get this error in the terminal:
"GET /" Error (404): "Not found"
Can someone please help me figure out what is wrong or missing in my Dockerfile so that I can run my Vue app using Docker?
Thank you in advance.
Okay, I tried your project locally, and here's how you do it.
Dockerfile
FROM node:lts-alpine
# bind your app to the gateway IP
ENV HOST=0.0.0.0
# make the 'app' folder the current working directory
WORKDIR /app
# copy both 'package.json' and 'package-lock.json' (if available)
COPY package*.json ./
# install project dependencies
RUN npm install
# copy project files and folders to the current working directory (i.e. 'app' folder)
COPY . .
EXPOSE 8080
ENTRYPOINT [ "npm", "run", "dev" ]
Use this command to run the docker image after you build it:
docker run -v ${PWD}/src:/app/src -p 8080:8080 -d mynicevue/app
Explanation
It seems that Vue is expecting your app to be bound to your gateway IP when it is served from within a container. Hence ENV HOST=0.0.0.0 inside the Dockerfile.
You need to mount your src directory to the running container's /app/src directory so that changes in your local filesystem are directly reflected and visible in the container itself.
The way to watch for file changes in Vue is npm run dev, hence ENTRYPOINT [ "npm", "run", "dev" ] in the Dockerfile.
If you tried the previous answers and it still doesn't work, try adding watch: { usePolling: true } to your vite.config.js file (file-change events from a host bind mount often don't reach the container, so polling is the reliable fallback):
import { defineConfig } from 'vite'
import vue from '@vitejs/plugin-vue'

// https://vitejs.dev/config/
export default defineConfig({
  plugins: [vue()],
  server: {
    host: true,
    port: 4173,
    watch: {
      usePolling: true
    }
  }
})
I am working on a Sails application. In my application I have used MySQL and Mongo adapters to connect to different databases, both hosted somewhere on the internet. The application works fine in my local environment. I am facing an issue once I add the project to a Docker container. I am able to generate the Docker image and run the container. When I call simple routes where no DB connection is involved, it works fine, but when I call TestController, which returns data from MongoDB, it gives me ReferenceError: Test is not defined. Here Test is a MongoDB entity.
Dockerfile:
FROM node:latest
ENV NODE_ENV production
WORKDIR /usr/src/app
COPY ["package.json", "./"]
RUN npm install --verbose --force && mv node_modules ../
COPY . .
EXPOSE 80
CMD npm start
TestController
/**
 * TestController
 * @description :: Server-side actions for handling incoming requests.
 *
 * @help :: See https://sailsjs.com/docs/concepts/actions
 */
module.exports = {
  index: async function (req, res) {
    var data = await Test.find(); // Here I am getting: ReferenceError: Test is not defined
    res.json(data);
  }
};
Routes.js
'GET /test': {controller:'test', action:'index'}
I found the issue: I was moving node_modules to the parent directory. (Most likely Sails discovers installable hooks such as sails-hook-orm by scanning node_modules directly under the app path, so with the folder moved up a level the ORM hook never loads and models like Test never get globalized.) The configuration below works for me:
FROM node:latest
ENV NODE_ENV production
WORKDIR /usr/src/app
COPY ["package.json", "./"]
RUN npm install --verbose --force
COPY . .
EXPOSE 80
CMD npm start
QUESTION: (edited: solution is added at the end of this post)
I have a VueJS project (developed with webpack) which I want to dockerize.
My Dockerfile looks like:
FROM node:8.11.1 as build-stage
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 8080
CMD ["npm", "run", "dev"]
which is basically following the flow from this post.
I also have a .dockerignore file, into which I copied the same entries as my .gitignore; it looks like:
.DS_Store
node_modules/
/dist/
npm-debug.log*
yarn-debug.log*
yarn-error.log*
# Editor directories and files
.idea
.vscode
*.suo
*.ntvs*
*.njsproj
*.sln
.git/
I have created a docker image with the command:
docker build -t test/my-image-name .
and then run it into a container with the command:
docker run -it -p 8080:8080 --rm --name my-container-name test/my-image-name
As a result of this last command, I got the same output in the terminal (what webpack / VueJS normally shows when debugging) as when I run the app locally.
BUT: in the end, the app is not loaded in the browser window.
If I run the commands docker images and docker ps, I can see that the image and the container are there, and while creating them I did not get any error messages.
I found this post and made a few attempts at changing the Dockerfile:
Option 1
FROM node:8.11.1 as build-stage
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 8080
ENTRYPOINT ["ng", "serve", "-H", "0.0.0.0"]
Option 2
FROM node:8.11.1 as build-stage
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
ENTRYPOINT ["ng", "serve", "-H", "0.0.0.0"]
EXPOSE 8080
CMD ["npm", "run", "dev"]
But it seems none of them works.
btw. my package.json file looks like:
"scripts": {
"dev": "webpack-dev-server --inline --progress --config build/webpack.dev.conf.js",
"start": "npm run dev",
"build": "node build/build.js"
}
So I'm wondering: how do I make the app open in the browser from the Docker image?
SOLUTION: not sure if this was the reason for the fix, but I did two things. As mentioned, I'm working with VueJS and webpack, so inside the file named config/index.js, which initially looked like:
module.exports = {
  dev: {
    // Paths
    assetsSubDirectory: 'static',
    assetsPublicPath: '/',
    proxyTable: {},

    // Various Dev Server settings
    host: 'localhost', // <---- this one
    port: 8080,
I changed the host property from 'localhost' into '0.0.0.0', removed the EXPOSE 8080 line from the Dockerfile (the initial Docker file from my question above) since I noticed that the port from the config file is used by default and also restarted the installed Docker tool on my local machine.