Dockerfile not being found in MERN application - docker

I know that others have asked this question here before; however, I have gone through those posts and tried the suggestions. I believe it's a tricky issue because everyone's files look different and vary in placement and paths, which I am not yet familiar with in Docker. Now, when I run docker-compose build, the program tells me:
Building server
Traceback (most recent call last):
  File "compose/cli/main.py", line 67, in main
  File "compose/cli/main.py", line 126, in perform_command
  File "compose/cli/main.py", line 302, in build
  File "compose/project.py", line 468, in build
  File "compose/project.py", line 450, in build_service
  File "compose/service.py", line 1147, in build
compose.service.BuildError: (<Service: server>, {'message': 'Cannot locate specified Dockerfile: ./client/Dockerfile'})

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "docker-compose", line 3, in <module>
  File "compose/cli/main.py", line 78, in main
TypeError: can only concatenate str (not "dict") to str
[34923] Failed to execute script docker-compose
I have tried placing the client's Dockerfile in the same directory as the docker-compose.yml file to eliminate path discrepancies; however, it still reports the same error. Please let me know if you have any suggestions. Thanks!
Here is my docker-compose.yml file:
version: "3.7"
services:
  server:
    build:
      context: ./server
      dockerfile: ./client/Dockerfile
    image: myapp-server
    container_name: myapp-node-server
    command: /usr/src/app/node_modules/.bin/nodemon server.js
    volumes:
      - ./server/:/usr/src/app
      - /usr/src/app/node_modules
    ports:
      - "5050:5050"
    depends_on:
      - mongo
    env_file: ./server/.env
    environment:
      - NODE_ENV=development
    networks:
      - app-network
  mongo:
    image: mongo
    volumes:
      - data-volume:/data/db
    ports:
      - "27017:27017"
    networks:
      - app-network
  client:
    build:
      context: ./client
      dockerfile: Dockerfile
    image: myapp-client
    container_name: myapp-react-client
    command: npm start
    volumes:
      - ./client/:/usr/app
      - /usr/app/node_modules
    depends_on:
      - server
    ports:
      - "3000:3000"
    networks:
      - app-network
networks:
  app-network:
    driver: bridge
volumes:
  data-volume:
  node_modules:
  web-root:
    driver: local
Here is the Dockerfile in the client folder
FROM node:10.16-alpine
WORKDIR /usr/src/app
COPY package.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD ["npm", "start"]
Here is the Dockerfile in the server folder
FROM node:10.16-alpine
# Create App Directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# Install Dependencies
COPY package*.json ./
RUN npm install --silent
# Copy app source code
COPY . .
# Exports
EXPOSE 5050
CMD ["npm","start"]

EDIT 1: The issue was an unusual path to the Dockerfiles: client/docker-mern-basic (you can see this in the VSCode file explorer for the client paths). It was resolved by making the context/dockerfile paths consistent and eliminating the extra docker-mern-basic directory. See the comments below and the sketch that follows.
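For reference, a minimal sketch of what the consistent build blocks could look like once the extra directory is gone (assuming each Dockerfile sits at the root of its own service folder):

services:
  server:
    build:
      context: ./server
      dockerfile: Dockerfile
  client:
    build:
      context: ./client
      dockerfile: Dockerfile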
EDIT 0: this doesn't solve the issue; I'll remove it if I can't find any other possible issues.
Your path for server.build.dockerfile isn't relative to your context. You're providing ./server as the folder to use as the "root", so Docker is actually looking for the path ./server/client/Dockerfile.
I think your issue is that you're not giving a path relative to your context:
services:
  server:
    build:
      context: ./server
      dockerfile: Dockerfile
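An alternative sketch, assuming docker-compose.yml sits at the project root: keep the root as the single build context and give each service a dockerfile path relative to that root. Note that COPY instructions inside the Dockerfiles would then resolve relative to the root context, so they would need adjusting (e.g. COPY server/ .).

services:
  server:
    build:
      context: .
      dockerfile: server/Dockerfile
  client:
    build:
      context: .
      dockerfile: client/Dockerfile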

Related

failed to load .env file but still gets the environment variables

Here is the code segment where I load the .env file and set the variables.
// init executes the initial configuration.
func init() {
    // loads the file for env. variables
    if err := godotenv.Load(".env"); err != nil {
        log.Printf("failed to load env file: %v\n", err)
    }
    // env variables are set based on values within the .env file.
    os.Setenv("dbname", os.Getenv("DB_NAME"))
    os.Setenv("username", os.Getenv("DB_USERNAME"))
    os.Setenv("pw", os.Getenv("DB_PASSWORD"))
    os.Setenv("dbport", os.Getenv("DB_PORT"))
    os.Setenv("server_port", os.Getenv("SERVER_PORT"))
    os.Setenv("hostname", os.Getenv("DB_CONTAINER_NAME"))
}
When I run docker-compose up --build server, everything works despite the following error:
server_1 | 2022/09/07 11:44:06 failed to load env file: open .env: no such file or directory
However, the environment variables are somehow set.
Here is my docker-compose.yml.
version: '3.8'
services:
  db:
    image: postgres:14.1-alpine
    container_name: ${DB_CONTAINER_NAME}
    restart: always
    environment:
      - POSTGRES_DB=${DB_NAME}
      - POSTGRES_USER=${DB_USERNAME}
      - POSTGRES_PASSWORD=${DB_PASSWORD}
    ports:
      - ${DB_PORT}:${DB_PORT}
    env_file:
      - .env
    volumes:
      - db:/var/lib/postgresql/data
      - ./psql/statements/create-tables.sql:/docker-entrypoint-initdb.d/create_table.sql
  server:
    build:
      context: .
      dockerfile: Dockerfile
    env_file: .env
    depends_on:
      - ${DB_CONTAINER_NAME}
    networks:
      - default
    ports:
      - ${SERVER_PORT}:${SERVER_PORT}
volumes:
  db:
    driver: local
And here is my Dockerfile for the Go application:
FROM golang:1.18
WORKDIR /src
COPY go.sum go.mod ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 go build -o /bin/app .
FROM alpine
COPY --from=0 /bin/app /bin/app
ENTRYPOINT ["/bin/app"]
When I alter some of the Dockerfile's content I get different errors. What might be the reason for this problem, where the file fails to load but the app still works?
It looks like the environment variables are set by Docker Compose from the .env file referenced in the YAML file with env_file: .env.
open .env: no such file or directory is printed from inside the Go app because the .env file is not available inside the container; there is no COPY/ADD instruction for it in the Dockerfile.
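If you do want godotenv to find the file at runtime, one option is to copy it into the final image. A minimal sketch, assuming the .env file sits in the build context next to the Dockerfile (note that baking credentials into an image is generally discouraged; relying on Compose's env_file alone is usually cleaner):

FROM golang:1.18
WORKDIR /src
COPY go.sum go.mod ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 go build -o /bin/app .

FROM alpine
COPY --from=0 /bin/app /bin/app
# copy the .env file so godotenv.Load(".env") can resolve it from the
# final container's working directory (/)
COPY .env .env
ENTRYPOINT ["/bin/app"]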

Force update shared volume in docker compose

My Dockerfile for the ui image is as follows:
FROM node:alpine as prodnode
WORKDIR /app
COPY ./package.json ./
RUN npm i
COPY . .
CMD ["npm", "run", "build"]
and my docker-compose file looks like this:
version: "3"
services:
  nginx:
    depends_on:
      - backend
      - ui
    restart: always
    volumes:
      - ./nginx/prod.conf:/etc/nginx/conf.d/default.conf
      - static:/usr/share/nginx/html
    build:
      context: ./nginx/
      dockerfile: Dockerfile
    ports:
      - "80:80"
  backend:
    build:
      context: ./backend/
      dockerfile: Dockerfile
    volumes:
      - /app/node_modules
      - ./backend:/app
    environment:
      - PGUSER=postgres
      - PGHOST=postgres
      - PGDATABASE=postgres
      - PGPASSWORD=postgres_password
      - PGPORT=5432
  ui:
    tty: true
    stdin_open: true
    environment:
      - CHOKIDAR_USEPOLLING=true
    build:
      context: ./ui/
      dockerfile: Dockerfile
    volumes:
      - /app/node_modules
      - ./ui:/app
      - static:/app/build
  postgres:
    image: "postgres:latest"
    environment:
      - POSTGRES_PASSWORD=postgres_password
volumes:
  static:
I am trying to build the static content and copy it from the ui container to the nginx container using a shared volume. Everything works as expected at first, but when I change the contents of ui and build again, the changes are not reflected. I tried the following:
docker-compose down
docker-compose up --build
docker-compose up
None of these replaces the static content with the new build.
Only when I remove the static volume like below
docker volume rm skeleton_static
and then do
docker-compose up --build
is the content replaced. How do I automatically replace the static contents on every docker-compose up or docker-compose up --build? Thanks.
Named volumes are presumed to hold user data in some format Docker can't understand; Docker never updates their content after they're originally created, and if you mount a volume over image content, the old content in the volume hides updated content in the image. As such, I'd avoid named volumes here.
It looks like in the setup you show, the ui container doesn't actually do anything: its main container process builds the application and then exits immediately. A multi-stage build is a more appropriate approach here; it lets you compile the application during the image build phase without declaring a do-nothing container or adding the complexity of named volumes.
# ui/Dockerfile
# First stage: build the application; note this is
# very similar to the existing Dockerfile
FROM node:alpine as prodnode
WORKDIR /app
COPY ./package.json ./
RUN npm i
COPY . .
# note RUN, not CMD, so the build happens at image-build time
RUN ["npm", "run", "build"]
# Second stage: nginx server serving that application
FROM nginx:latest
COPY --from=prodnode /app/build /usr/share/nginx/html
# use default CMD from the base image
In your docker-compose.yml file, you no longer need separate "build" and "serve" containers; they are combined.
version: "3.8"
services:
  backend:
    build: ./backend
    environment:
      - PGUSER=postgres
      - PGHOST=postgres
      - PGDATABASE=postgres
      - PGPASSWORD=postgres_password
      - PGPORT=5432
    depends_on:
      - postgres
    # no volumes:
  ui:
    build: ./ui
    depends_on:
      - backend
    ports:
      - '80:80'
    # no volumes:
  postgres:
    image: "postgres:latest"
    environment:
      - POSTGRES_PASSWORD=postgres_password
    volumes: # do persist database data
      - pgdata:/var/lib/postgresql/data
volumes:
  pgdata:
A similar problem will apply to the anonymous volume you've used for the backend service's node_modules directory, and it will ignore any changes to the package.json file. Since all of the application's code and library dependencies are already included in the image, I've deleted the volumes: block that would overwrite those.
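With those volumes: blocks removed, code and dependency changes are picked up by an ordinary rebuild (standard Compose commands, shown here as a usage sketch):

docker-compose build
docker-compose up -d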

docker-compose: how to automatically propagate changes (both frontend and backend)?

In Docker Compose, we have two services (a backend in Flask and a frontend in React) running at the same time in different directories. What are best practices for automatically updating the frontend or backend service when a change is made to the respective code?
In our case, we have:
frontend/
  index.html
  docker-compose.yml
  Dockerfile
  src/
    App.js
    index.js
    ..
And our backend is:
backend/
  app.py
  Dockerfile
  docker-compose.yml
This is our docker-compose.yml file:
version: '3.8'
services:
  frontend:
    image: node:alpine
    build:
      context: ../frontend
      dockerfile: ../frontend/Dockerfile
    command: npm start
    depends_on:
      - database # don't start until the database is up
      - app
    ports:
      - 3000:3000
    volumes:
      - .:/frontend
  app:
    image: python:3.9
    build:
      context: .
      dockerfile: ./Dockerfile
    command: app.py
    depends_on:
      - database # don't start until the database is up
    ports:
      - 8080:8080
    environment:
      - PGPASSWORD=magical_password
      - POSTGRESQL_PASSWORD=magical_password
      - POSTGRESQL_HOST=backend-database-1
      - POSTGRESQL_USER_NAME=unicorn_user
      - LOCAL_ENVIRONMENT=True
      - FLASK_ENV=development
      - REPLICATE_API_TOKEN
    volumes:
      - .:/app
  database:
    image: "postgres" # use latest official postgres version
    env_file:
      - database.env # configure postgres
    volumes:
      - database-data:/var/lib/postgresql/data/ # persist data even if container shuts down
      - ./schema.sql:/docker-entrypoint-initdb.d/schema.sql
    ports:
      - "5432:5432"
volumes:
  database-data: # named volumes can be managed easier using docker-compose
Typically, we reload the app (on change) almost instantly via the bind mount in the volumes: section. This approach correctly updates the backend service when the backend code changes, but not the frontend service. Also, we have two docker-compose files, one in frontend and one in backend, which we hope to learn how to consolidate.
Edit: These are the logs showing that reloading works for the backend (app_1 is the backend) but not for the frontend:
app_1 | * Detected change in '/app/app.py', reloading
app_1 | environ({'PATH': '/usr/local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin', 'HOSTNAME': '***', 'PGPASSWORD': '***', 'POSTGRESQL_PASSWORD': 'magical_password', 'POSTGRESQL_HOST': 'backend-database-1', 'POSTGRESQL_USER_NAME': '***', 'LOCAL_ENVIRONMENT': 'True', 'FLASK_ENV': 'development', 'LANG': 'C.UTF-8', 'GPG_KEY': '***', 'PYTHON_VERSION': '3.9.13', 'PYTHON_PIP_VERSION': '22.0.4', 'PYTHON_SETUPTOOLS_VERSION': '58.1.0', 'PYTHON_GET_PIP_URL': 'https://github.com/pypa/get-pip/raw/6ce3639da143c5d79b44f94b04080abf2531fd6e/public/get-pip.py', 'PYTHON_GET_PIP_SHA256': '***', 'HOST': '0.0.0.0', 'PORT': '8080', 'HOME': '/root', 'KMP_INIT_AT_FORK': 'FALSE', 'KMP_DUPLICATE_LIB_OK': 'True', 'WERKZEUG_SERVER_FD': '3', 'WERKZEUG_RUN_MAIN': 'true'})
app_1 | * Restarting with stat
app_1 | * Tip: There are .env or .flaskenv files present. Do "pip install python-dotenv" to use them.
app_1 | * Debugger is active!
app_1 | * Debugger PIN: 203-417-897
Edit 2: We followed the link suggested in the comments. We attempted setting both WATCHPACK_POLLING and CHOKIDAR_USEPOLLING to "true", but no luck. We also refactored our docker-compose file to sit outside the directories, like so:
docker-compose.yml
frontend/
  index.html
  Dockerfile
  src/
    App.js
    index.js
    ..
backend/
  app.py
  Dockerfile
Here is the new docker-compose file:
version: '3.8'
services:
  frontend:
    image: node:alpine
    build:
      context: ./frontend
      cache_from:
        - node:alpine
      dockerfile: ./Dockerfile
    command: npm start
    depends_on:
      - database # don't start until the database is up
      - app
    ports:
      - 3000:3000
    environment:
      - CHOKIDAR_USEPOLLING="true"
    volumes:
      - /app/node_modules
      - ./frontend:/app
  app:
    image: python:3.9
    build:
      context: ./backend
      cache_from:
        - python:3.9
      dockerfile: ./Dockerfile
    command: backend/app.py
    depends_on:
      - database # don't start until the database is up
    ports:
      - 8080:8080
    environment:
      - PGPASSWORD=magical_password
      - POSTGRESQL_PASSWORD=magical_password
      - POSTGRESQL_HOST=backend-database-1
      - POSTGRESQL_USER_NAME=unicorn_user
      - LOCAL_ENVIRONMENT=True
      - FLASK_ENV=development
      - REPLICATE_API_TOKEN
    volumes:
      - .:/app
  database:
    image: "postgres" # use latest official postgres version
    env_file:
      - backend/database.env # configure postgres
    volumes:
      - database-data:/var/lib/postgresql/data/ # persist data even if container shuts down
      - ./schema.sql:/backend/docker-entrypoint-initdb.d/schema.sql
    ports:
      - "5432:5432"
volumes:
  database-data: # named volumes can be managed easier using docker-compose
  app:
And here is our Dockerfile for the frontend:
FROM node:alpine
RUN mkdir -p /frontend
WORKDIR /frontend
# We copy just the package.json first to leverage Docker cache
COPY package.json /frontend
RUN npm install --legacy-peer-deps
COPY . /frontend
# Bind to all network interfaces so that it can be mapped to the host OS
ENV HOST=0.0.0.0 PORT=3000
EXPOSE ${PORT}
CMD ["npm", "start"]
and for the backend:
FROM python:3.9
# We copy just the requirements.txt first to leverage Docker cache
COPY ./requirements.txt /app/requirements.txt
WORKDIR /app
RUN pip3 install -r requirements.txt
COPY . /app
ENTRYPOINT [ "python" ]
# Bind to all network interfaces so that it can be mapped to the host OS
ENV HOST=0.0.0.0 PORT=8080
EXPOSE ${PORT}
# This runs the app in the container
CMD [ "app.py" ]
The backend still hot-reloads: every time we make a change, it is detected, picked up, and reflected immediately. But the frontend requires a restart with docker-compose down --volumes && docker-compose build --no-cache && docker-compose up, and we get no log output from docker-compose. It's like docker-compose can't see the changes.
Edit 3: Any help would be much appreciated!
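One detail worth double-checking, offered as an observation from the files shown rather than a confirmed fix: the frontend Dockerfile sets WORKDIR /frontend, but the compose file mounts the source at /app, so the bind mount never shadows the directory npm start actually runs in. Also, with Compose's list syntax, the quotes in CHOKIDAR_USEPOLLING="true" become part of the value. A sketch that aligns the paths and drops the quotes:

frontend:
  build:
    context: ./frontend
  command: npm start
  environment:
    - CHOKIDAR_USEPOLLING=true   # unquoted; in list syntax the quotes are passed through literally
    - WATCHPACK_POLLING=true     # newer webpack versions watch this variable instead
  volumes:
    - /frontend/node_modules     # matches WORKDIR /frontend in the Dockerfile
    - ./frontend:/frontend
  ports:
    - 3000:3000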

Docker container works, but fails when build from docker-compose

I have an application with 3 containers:
client - an Angular application,
gateway - a .NET Core application,
api - a .NET Core application
I am having trouble with the container hosting the angular application.
Here is my Dockerfile:
#stage 1
FROM node:alpine as node
WORKDIR /app
COPY . .
RUN npm install
RUN npm run build
#stage 2
FROM nginx:alpine
COPY --from=node /app/dist/caliber_client /usr/share/nginx/html
EXPOSE 80
and here is the docker compose file:
# Please refer https://aka.ms/HTTPSinContainer on how to setup an https developer certificate for your ASP .NET Core service.
version: '3.4'
services:
  calibergateway:
    image: calibergateway
    container_name: caliber-gateway
    build:
      context: .
      dockerfile: caliber_gateway/Dockerfile
    ports:
      - 7000:7000
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
    networks:
      - caliber-local
  caliberapi:
    image: caliberapi
    container_name: caliber-api
    build:
      context: .
      dockerfile: caliber_api/Dockerfile
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
    networks:
      - caliber-local
  caliberclient:
    image: caliber-client-image
    container_name: caliber-client
    build:
      context: .
      dockerfile: caliber_client/Dockerfile
    ports:
      - 7005:7005
    networks:
      - caliber-local
networks:
  caliber-local:
    external: true
When I build and run the angular container independently, I can connect and run the site, however if I try to build it with docker-compose, I get the following error:
enoent ENOENT: no such file or directory, open '/app/package.json'
I can see that npm cannot find the package.json, but I am copying the whole site into the /app directory in the Dockerfile, so I am not sure where the disconnect is.
Thank you.
In the Dockerfile, the left-hand side of COPY statements is always interpreted relative to the build: { context: } directory in the docker-compose.yml file (or the build: directory if there's not a nested argument, or the docker build directory argument; but in any case never anything outside this directory tree).
In a comment, you say
The package.json is one level deeper than the docker-compose.yml file. It is at the same level of the Dockerfile in the caliber_client folder.
Assuming the client application is self-contained, you can change the build definition to use the caliber_client subdirectory as the build context:
build:
  context: caliber_client
  dockerfile: Dockerfile
or, since dockerfile: Dockerfile is the default, the shorter
build: caliber_client
If it's important to you to use the parent directory as the build context (maybe you're including some shared files that you don't show in the question) then you can also change the Dockerfile to refer to the subdirectory.
# when the build: { context: } is the parent directory of this one
COPY caliber_client .
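For completeness, a sketch of what stage 1 could look like under that parent-directory context (assuming the Angular sources live entirely in caliber_client/):

#stage 1
FROM node:alpine as node
WORKDIR /app
# the left-hand side is relative to the parent-directory build context
COPY caliber_client .
RUN npm install
RUN npm run build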

How to create the directory in a Dockerfile

I'm struggling to create a directory in my Dockerfile below. When I enter the container after building the image, I can't find the "models" directory. The "ds" directory in the path /usr/src/app/ds/models is an application directory that was copied in. Could you please tell me what is wrong here?
FROM python:3.8
ENV PYTHONUNBUFFERED=1
ENV DISPLAY :0
WORKDIR /usr/src/app
COPY . .
RUN mkdir -p /usr/src/app/ds/models
My docker-compose.yaml file contains a volume:
version: '3.8'
services:
  app:
    build: .
    command:
      - /bin/bash
      - -c
      - python manage.py runserver 0.0.0.0:8000
    restart: always
    volumes:
      - .:/usr/src/app
    ports:
      - '8000:8000'
When your docker-compose.yml file says
volumes:
  - .:/usr/src/app
that host directory completely replaces the /usr/src/app directory from your image. This means pretty much nothing in your Dockerfile has an effect; if you try to deploy this setup to another system, you'll find you've never actually run the code built into the image.
I'd recommend deleting this block, and also the command: override (make it be the default CMD in the Dockerfile instead).
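A sketch of that Dockerfile, with the command from the compose file moved into a default CMD (same instructions as the original Dockerfile above, plus the CMD):

FROM python:3.8
ENV PYTHONUNBUFFERED=1
ENV DISPLAY :0
WORKDIR /usr/src/app
COPY . .
RUN mkdir -p /usr/src/app/ds/models
# default command replaces the command: override in docker-compose.yml
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]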
I need to download models to this directory
Mount only the specific directory you need into your container; don't overwrite the entire application tree. Potentially consider keeping that data directory in a different part of the filesystem.
version: '3.8'
services:
  app:
    build: .
    # no command:
    restart: always
    volumes:
      # only the models subdirectory, not the entire application
      - ./ds/models:/usr/src/app/ds/models
    ports:
      - '8000:8000'
