ERROR: Cannot find "/app/config/config.json". Have you run "sequelize init"? - docker

Dockerfile
FROM node:16.14.2-alpine as build
WORKDIR /myapp
COPY package*.json ./
RUN npm ci
COPY . ./
ENV NODE_ENV='dev'
RUN npm run build
FROM build
EXPOSE 3000
CMD ["node"] // list reduced to one item
// .sequelizerc
const path = require('path');

module.exports = {
  'config': path.resolve('/src', 'dbconfig.js'),
  'models-path': path.resolve('src', 'models')
};
Structure
- src
  - dbconfig.js
- .sequelizerc

When running on Docker I get the error:
ERROR: Cannot find "/app/config/config.json". Have you run "sequelize init"?

You have to define a .sequelizerc file in the root folder specifying the paths to your database config, migrations, and seeders folders; otherwise Sequelize will look for them in its default locations.
It will look somewhat like this:
// .sequelizerc file
module.exports = {
  'config': /* path to db config file */,
  'migrations-path': /* path to migrations folder */,
  'seeders-path': /* path to seeders folder */,
  'models-path': /* path to models folder */,
}
Note: After creating this file you have to rebuild the Docker image for it to take effect.
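For the structure in the question, a filled-in version could look like this (a sketch; it assumes migrations and seeders also live under src, and uses a relative 'src' rather than '/src' so the paths resolve against the working directory instead of the filesystem root):

// .sequelizerc
const path = require('path');

module.exports = {
  'config': path.resolve('src', 'dbconfig.js'),
  'models-path': path.resolve('src', 'models'),
  'migrations-path': path.resolve('src', 'migrations'),
  'seeders-path': path.resolve('src', 'seeders')
};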

Related

NextJS environment variables not accessible in production build

I have a NextJS application that accesses a database from the API.
When in development, I have a .env file that contains the host, port, username, password, and database. After running npm run dev, the API functionality works as expected. Even if I run npm run build and npm run start on my local machine, it works correctly.
The problem comes after I push to Github and have Github build the app and deploy it as a docker container. For some reason, the docker build does not accept my environment variables loaded through an attached .env file.
To further elaborate: the .env file with the environment variables exists both on the dev machine and attached to the Docker container on the production server. It is mounted to the same place (the root directory of the project: /my-site/.env), but the production API does not work.
The environment variables are included in /lib/db.tsx:
import mysql from "mysql2/promise";

const executeQuery = async (query: string) => {
  try {
    const db = await mysql.createConnection({
      host: process.env.MYSQL_HOST,
      database: process.env.MYSQL_DATABASE,
      user: process.env.MYSQL_USER,
      password: process.env.MYSQL_PASSWORD,
    });
    const [results] = await db.execute(query, []);
    db.end();
    return results;
  } catch (error) {
    return { error };
  }
};

export default executeQuery;
This file is included in the API endpoints as:
import executeQuery from "../../../../lib/db";
Again, since it works on the development computer, I think the issue is with the building of the Docker container.
Here is my included Dockerfile:
FROM node:lts as dependencies
WORKDIR /my-site
COPY package.json yarn.lock ./
RUN yarn install --frozen-lockfile
FROM node:lts as builder
WORKDIR /my-site
COPY . .
COPY --from=dependencies /my-site/node_modules ./node_modules
RUN yarn build
FROM node:lts as runner
WORKDIR /my-site
ENV NODE_ENV production
# If you are using a custom next.config.js file, uncomment this line.
# COPY --from=builder /my-site/next.config.js ./
COPY --from=builder /my-site/public ./public
COPY --from=builder /my-site/.next ./.next
COPY --from=builder /my-site/node_modules ./node_modules
COPY --from=builder /my-site/package.json ./package.json
EXPOSE 3000
CMD ["yarn", "start"]
Any and all assistance is greatly appreciated!
Edit: Other things I have attempted:
- Added them as environment variables to the Docker container in the docker compose file (under environment) and verified that they are accessible inside the container using echo $MYSQL_USER.
- Mounted the .env file inside the .next folder.
- Mounted the .env file inside the .next/server folder.
I ended up solving my own issue after hours of trying to figure it out.
My solution was to create a .env.production file and commit it to git.
I also adjusted my Dockerfile to include: COPY --from=builder /my-site/.env.production ./
I am not a fan of that solution, as it involves pushing secrets to a repo, but it works.
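For reference, the runner stage with that line added looks like this (a sketch assembled from the Dockerfile above; Next.js loads .env.production from the working directory when NODE_ENV is production, which is why the file has to sit next to package.json in the final image):

FROM node:lts as runner
WORKDIR /my-site
ENV NODE_ENV production
COPY --from=builder /my-site/.env.production ./
COPY --from=builder /my-site/public ./public
COPY --from=builder /my-site/.next ./.next
COPY --from=builder /my-site/node_modules ./node_modules
COPY --from=builder /my-site/package.json ./package.json
EXPOSE 3000
CMD ["yarn", "start"]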

Hot reload in Vue does not work inside a Docker container

I was trying to dockerize my existing simple Vue app, following this tutorial from the Vue website: https://v2.vuejs.org/v2/cookbook/dockerize-vuejs-app.html. I successfully created the image and the container. My problem is that when I edit my code, e.g. a "hello world" in App.vue, it does not automatically update (what they call hot reload?). Or should I migrate to the latest Vue so that it will work?
docker run -it --name=mynicevue -p 8080:8080 mynicevue/app
FROM node:lts-alpine
# install simple http server for serving static content
RUN npm install -g http-server
# make the 'app' folder the current working directory
WORKDIR /app
# copy both 'package.json' and 'package-lock.json' (if available)
COPY package*.json ./
# install project dependencies
RUN npm install
# copy project files and folders to the current working directory (i.e. 'app' folder)
COPY . .
# build app for production with minification
# RUN npm run build
EXPOSE 8080
CMD [ "http-server", "serve" ]
EDIT:
Still no luck. I commented out npm run build. I also set up vue.config.js and added this code:
module.exports = {
  devServer: {
    watchOptions: {
      ignored: /node_modules/,
      aggregateTimeout: 300,
      poll: 1000,
    },
  }
};
Then I run the container like this:
docker run -it --name=mynicevue -v %cd%:/app -p 8080:8080 mynicevue/app
When the app launches in the browser I get this error in the terminal, and the browser shows a white screen:
"GET /" Error (404): "Not found"
Can someone help me figure out what is wrong or missing in my Dockerfile so that I can run my Vue app using Docker?
Thank you in advance.
Okay, I tried your project locally and here's how to do it.
Dockerfile
FROM node:lts-alpine
# bind your app to the gateway IP
ENV HOST=0.0.0.0
# make the 'app' folder the current working directory
WORKDIR /app
# copy both 'package.json' and 'package-lock.json' (if available)
COPY package*.json ./
# install project dependencies
RUN npm install
# copy project files and folders to the current working directory (i.e. 'app' folder)
COPY . .
EXPOSE 8080
ENTRYPOINT [ "npm", "run", "dev" ]
Use this command to run the docker image after you build it:
docker run -v ${PWD}/src:/app/src -p 8080:8080 -d mynicevue/app
Explanation
It seems that Vue expects your app to be bound to your gateway IP when it is served from within a container, hence ENV HOST=0.0.0.0 inside the Dockerfile.
You need to mount your src directory to the running container's /app/src directory so that changes in your local filesystem are directly reflected and visible in the container itself.
Vue watches for file changes when run via npm run dev, hence ENTRYPOINT [ "npm", "run", "dev" ] in the Dockerfile.
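Note that this assumes your package.json defines a dev script that starts the dev server with hot reload; the exact command depends on how the project was scaffolded. For a Vue CLI project it might look like this (a sketch):

{
  "scripts": {
    "dev": "vue-cli-service serve"
  }
}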
If you tried the previous answers and it still doesn't work, try adding watch: { usePolling: true } to your vite.config.js file:
import { defineConfig } from 'vite'
import vue from '@vitejs/plugin-vue'

// https://vitejs.dev/config/
export default defineConfig({
  plugins: [vue()],
  server: {
    host: true,
    port: 4173,
    watch: {
      usePolling: true
    }
  }
})
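If you go this route, publish the configured port and keep your sources mounted so that polling has changes to pick up, e.g. (a sketch, assuming the image starts the Vite dev server):

docker run -v ${PWD}/src:/app/src -p 4173:4173 mynicevue/app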

Why my docker file does not copy the HTML files?

My directory is:
- Dockerfile
- app/
  - main.go
- media/
  - css/
  - html/
  - img/
  - svg/
Inside the html folder, I have subfolders to organise my HTML files, so the path to the HTML files is media/html/*/*.html
And I have my Dockerfile as follows:
FROM golang:alpine

# Set necessary environment variables needed for our image
ENV GO111MODULE=on \
    CGO_ENABLED=0 \
    GOOS=linux \
    GOARCH=amd64
# Copy the code into the container
COPY media .
# Move to working directory /build
WORKDIR /build
# Copy the code from app/ into the build folder in the container
COPY app .
# Configure the build (go.mod and go.sum are already copied with prior step)
RUN go mod download
# Build the application
RUN go build -o main .
WORKDIR /app
# Copy binary from build to main folder
RUN cp /build/main .
# Export necessary port
EXPOSE 8080
# Command to run when starting the container
CMD ["/app/main"]
and my main.go is:
package main

import (
    "net/http"

    "github.com/gin-gonic/gin"
)

func main() {
    // We create the instance for Gin
    r := gin.Default()
    // Path to the static files. /static is rendered in the HTML and /media is the path to the static files (images, svg, css...)
    r.StaticFS("/static", http.Dir("../media"))
    // Path to the HTML templates. * is a wildcard
    r.LoadHTMLGlob("../media/html/*/*.html")
    r.NoRoute(renderHome)
    // This gets executed when the user hits the home domain ("/")
    r.GET("/", renderHome)
    r.Run(":8080")
}

func renderHome(c *gin.Context) {
    c.HTML(http.StatusOK, "my-html.html", gin.H{})
}
The problem is that I can run my app in plain Go with go run main.go without issues, and I can build the Docker image without problems, but the moment I run a Docker container from the image, I get the error:
panic: html/template: pattern matches no files: ../media/html/*/*.html
The path is correct (proven by the fact that it runs in plain Go), so it seems that Docker is not copying the files correctly, or at least not into the right directory. What is failing? The full simple project can be found here
media is a bad choice for a Docker folder, because a typical Linux container already has a /media folder.
But that's not the root cause.
The root cause is that COPY media . copies the contents of the media folder into the current working directory, which is / because no WORKDIR has been set at that point. You probably want COPY media/ /media/ if you want to preserve the media folder itself (or set WORKDIR /media first).
As a debug tool, you can run your container with a shell as entrypoint to "look around" it without starting your app:
docker build . -t test
docker run -it --rm test sh
/app # ls /media
cdrom floppy usb
/app # ls -R /html
/html:
website
/html/website:
my-html.html
As you can see your media/html folder is located at /html.
Some more notes:
It's a good idea to move go mod download to before COPY app so that the downloaded modules can be cached:
FROM golang:alpine
WORKDIR /build
COPY app/go.mod app/go.sum ./
RUN go mod download
COPY app .
RUN go build -o main .
WORKDIR /app
RUN cp /build/main .
COPY media /media/
EXPOSE 8080
CMD ["/app/main"]
As a next step you can look into two-stage builds, so that you don't depend on the golang image for running the compiled app (it is only needed for building, really).
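A minimal sketch of that idea (untested; it keeps the paths from the Dockerfile above and runs the binary on a plain alpine base):

# build stage: compile the binary
FROM golang:alpine AS build
WORKDIR /build
COPY app/go.mod app/go.sum ./
RUN go mod download
COPY app .
RUN go build -o main .

# run stage: only the binary and the static files
FROM alpine
WORKDIR /app
COPY --from=build /build/main .
COPY media /media/
EXPOSE 8080
CMD ["/app/main"]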

VueCLI3 app (nginx/docker) use environment specific variables

How to externalize and consume environment variables from a Vue app:
- Created with VueCLI3
- Deployed in a Docker container
- Using NGINX
Some details:
The project is built once and deployed to test and live environments. So, I want to externalize some variables which change across environments (like URLs to call, domains, usernames, etc.). The classical approach of .env file variations with the VUE_APP_ prefix does not help here, as their values are injected into the code during the build stage: they are not variables anymore once the app is built.
Trying things out, I found a blog post making use of dotenv and some extra configuration, but I could not put it together with the configuration in the official Vue CLI 3 guide. The solution does not need to take a similar approach though; I am just trying to find a way out.
Probably not useful information, but I am planning to define those environment variables in Config Maps in the Kubernetes configuration.
I think I've managed to overcome this case. I am leaving the resolution here.
1. Define your environment-specific variables in .env.development (for development purposes) and also add them to the Pod configuration with corresponding values.
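For example (a sketch with hypothetical values; the names need the VUE_APP_ prefix so that both Vue CLI and the entrypoint script in step 3 pick them up):

# .env.development
VUE_APP_ENV_KEY_1=http://localhost:3000/api
VUE_APP_ENV_KEY_2=dev-user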
2. Add a configuration.js file somewhere in your Vue project source folder. It acts as a wrapper that determines whether the runtime is development (local) or production (container). It is like the one shown here, but importing/configuring dotenv is not required:
export default class Configuration {
  static get EnvConfig () {
    return {
      envKey1: '$VUE_APP_ENV_KEY_1',
      envKey2: '$VUE_APP_ENV_KEY_2'
    }
  }

  static value (key) {
    // If the key does not exist in the EnvConfig object of the class, return null
    if (!this.EnvConfig.hasOwnProperty(key)) {
      console.error(`Configuration: There is no key named "${key}". Please add it in the Configuration class.`)
      return
    }

    // Get the value
    const value = this.EnvConfig[key]

    // If the value is null, return
    if (!value) {
      console.error(`Configuration: Value for "${key}" is not defined`)
      return
    }

    if (!value.startsWith('$VUE_APP_')) {
      // value was already replaced, it seems we are in production (containerized).
      return value
    }

    // value was not replaced, it seems we are in development.
    const envName = value.substr(1) // Remove $ and get the current value from process.env
    const envValue = process.env[envName]
    if (!envValue) {
      console.error(`Configuration: Environment variable "${envName}" is not defined`)
      return
    }

    return envValue
  }
}
3. Create an entrypoint.sh. With some modification, it will look as follows:
#!/bin/bash

function join_by { local IFS="$1"; shift; echo "$*"; }

# Find the Vue env vars
vars=$(env | grep VUE_APP_ | awk -F = '{print "$"$1}')
vars=$(join_by ',' $vars)
echo "Found variables $vars"

for file in /app/js/app.*;
do
  echo "Processing $file ...";
  # Use the existing JS file as template
  cp $file $file.tmpl
  envsubst "$vars" < $file.tmpl > $file
  rm $file.tmpl
done

nginx -g 'daemon off;'
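To illustrate what the loop does, suppose the container is started with VUE_APP_ENV_KEY_1=https://api.example.com (a hypothetical value). envsubst then rewrites the placeholder that was baked into the bundle at build time:

# inside the minified app.*.js, before:  envKey1:"$VUE_APP_ENV_KEY_1"
# after entrypoint.sh has run:           envKey1:"https://api.example.com"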
4. In your Dockerfile, add a CMD to run this entrypoint.sh script as a bootstrapping step during container creation. That way, every time you start a container, it picks up the environment variables from the Pod configuration and injects them into the Configuration class shown in step 2.
# build stage
FROM node:lts-alpine as build-stage
# make the 'app' folder the current working directory
WORKDIR /app
# Copy package*.json and install dependencies in a separate step to enable caching
COPY package*.json ./
RUN npm install
# copy project files and folders to the current working directory
COPY ./ .
# install dependencies and build app for production with minification
RUN npm run build
# Production stage
FROM nginx as production-stage
RUN mkdir /app
# copy 'dist' content from the previous stage i.e. build
COPY --from=build-stage /app/dist /app
# copy nginx configuration
COPY nginx.conf /etc/nginx/nginx.conf
# Copy the bootstrapping script that injects environment-specific values; it runs as the container command below
COPY entrypoint.sh entrypoint.sh
# Make the file executable
RUN chmod +x ./entrypoint.sh
CMD ["./entrypoint.sh"]
5. Finally, instead of process.env, use the wrapper Configuration class, e.g. Configuration.value('envKey1'). And voila!
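For example, in a component or API client module (a sketch; the import path depends on where you put the file):

import Configuration from '@/configuration'

// Resolved from process.env in development,
// and from the envsubst-injected value in the container.
const apiBaseUrl = Configuration.value('envKey1')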

Multi-stage build cannot copy from previous stage - File not found

I have the docker file as follows:
FROM node:8 as builder
WORKDIR /usr/src/app
COPY ./src/register_form/package*.json .
RUN npm install
COPY ./src/register_form .
RUN yarn build
FROM tensorflow/tensorflow:1.10.0-gpu-py3
COPY --from=builder /usr/src/app/register_form/build/index.html /app/src/
WORKDIR /app
ENTRYPOINT ["python3"]
CMD ["/app/src/main.pyc"]
However, it cannot copy index.html from the builder stage, although when I list the folder in the first stage, the files are there.
The error is:
Step 8/22 : COPY --from=builder ./register_form/build/ /app/src/
COPY failed: stat /var/lib/docker/overlay2/5470e05501898502b3aa437639f975ca3e4bfb5a1e897281e62e07ab89866304/merged/register_form/build: no such file or directory
How can I fix this problem with the COPY --from=builder command?
I think you are misusing the COPY command. As the docs say:
If <src> is a directory, the entire contents of the directory are copied, including filesystem metadata.
Note: The directory itself is not copied, just its contents.
So your command COPY ./src/register_form . does NOT create a register_form folder in the container, but instead copies all its contents into the working directory. You can try adding:
RUN ls .
to your Dockerfile to make sure.
As noticed by @BMitch in the comments, you can explicitly set the destination folder name to achieve the expected result:
COPY ./src/register_form/ register_form/
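Applied to the Dockerfile above, the build could then look like this (a sketch; it keeps the register_form folder in the builder stage so the --from=builder path actually exists):

FROM node:8 as builder
WORKDIR /usr/src/app/register_form
COPY ./src/register_form/package*.json ./
RUN npm install
COPY ./src/register_form/ ./
RUN yarn build

FROM tensorflow/tensorflow:1.10.0-gpu-py3
# /usr/src/app/register_form/build/index.html now exists in the builder stage
COPY --from=builder /usr/src/app/register_form/build/index.html /app/src/
WORKDIR /app
ENTRYPOINT ["python3"]
CMD ["/app/src/main.pyc"]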

Resources