I have an application with 3 containers:
client - an Angular application,
gateway - a .NET Core application,
api - a .NET Core application
I am having trouble with the container hosting the Angular application.
Here is my Dockerfile:
#stage 1
FROM node:alpine as node
WORKDIR /app
COPY . .
RUN npm install
RUN npm run build
#stage 2
FROM nginx:alpine
COPY --from=node /app/dist/caliber_client /usr/share/nginx/html
EXPOSE 80
and here is the docker compose file:
# Please refer https://aka.ms/HTTPSinContainer on how to setup an https developer certificate for your ASP .NET Core service.
version: '3.4'
services:
  calibergateway:
    image: calibergateway
    container_name: caliber-gateway
    build:
      context: .
      dockerfile: caliber_gateway/Dockerfile
    ports:
      - 7000:7000
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
    networks:
      - caliber-local
  caliberapi:
    image: caliberapi
    container_name: caliber-api
    build:
      context: .
      dockerfile: caliber_api/Dockerfile
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
    networks:
      - caliber-local
  caliberclient:
    image: caliber-client-image
    container_name: caliber-client
    build:
      context: .
      dockerfile: caliber_client/Dockerfile
    ports:
      - 7005:7005
    networks:
      - caliber-local
networks:
  caliber-local:
    external: true
When I build and run the angular container independently, I can connect and run the site, however if I try to build it with docker-compose, I get the following error:
enoent ENOENT: no such file or directory, open '/app/package.json'
I can see that npm cannot find the package.json, but I am copying the whole site to the /app directory in the Dockerfile, so I am not sure where the disconnect is.
Thank you.
In the Dockerfile, the left-hand side of COPY statements is always interpreted relative to the build context: the build: { context: } directory in the docker-compose.yml file (the build: directory itself if there's no nested argument, or the directory argument to docker build). It can never reference anything outside this directory tree.
In a comment, you say
The package.json is one level deeper than the docker-compose.yml file. It is at the same level of the Dockerfile in the caliber_client folder.
Assuming the client application is self-contained, you can change the build definition to use the client subdirectory as the build context:
build:
  context: caliber_client
  dockerfile: Dockerfile
or, since dockerfile: Dockerfile is the default, the shorter
build: caliber_client
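Either spelling is equivalent to building the image directly from the subdirectory. A quick sanity check from the CLI (a sketch, reusing the image name from your compose file):
# caliber_client/ is now the build context, so COPY . . sees
# package.json at the context root and npm install can find it
docker build -t caliber-client-image ./caliber_client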
If it's important to you to use the parent directory as the build context (maybe you're including some shared files that you don't show in the question), then you can also change the Dockerfile to refer to the subdirectory:
# when the build: { context: } is the parent directory of this one
COPY caliber_client .
My Dockerfile for the ui image is as follows:
FROM node:alpine as prodnode
WORKDIR /app
COPY ./package.json ./
RUN npm i
COPY . .
CMD ["npm", "run", "build"]
and my docker-compose file looks like below:
version: "3"
services:
nginx:
depends_on:
- backend
- ui
restart: always
volumes:
- ./nginx/prod.conf:/etc/nginx/conf.d/default.conf
- static:/usr/share/nginx/html
build:
context: ./nginx/
dockerfile: Dockerfile
ports:
- "80:80"
backend:
build:
context: ./backend/
dockerfile: Dockerfile
volumes:
- /app/node_modules
- ./backend:/app
environment:
- PGUSER=postgres
- PGHOST=postgres
- PGDATABASE=postgres
- PGPASSWORD=postgres_password
- PGPORT=5432
ui:
tty: true
stdin_open: true
environment:
- CHOKIDAR_USEPOLLING=true
build:
context: ./ui/
dockerfile: Dockerfile
volumes:
- /app/node_modules
- ./ui:/app
- static:/app/build
postgres:
image: "postgres:latest"
environment:
- POSTGRES_PASSWORD=postgres_password
volumes:
static:
I am trying to build static content and copy it from the ui container to the nginx container using a shared volume. Everything works fine as expected, but when I change the contents of ui and build again, the changes are not reflected. I tried the following:
docker-compose down
docker-compose up --build
docker-compose up
None of them is replacing the static content with the new build.
Only when I remove the static volume, like below,
docker volume rm skeleton_static
and then do
docker-compose up --build
does the content change. How do I automatically replace the static contents on every docker-compose up or docker-compose up --build? Thanks.
Named volumes are presumed to hold user data in some format Docker can't understand; Docker never updates their content after they're originally created, and if you mount a volume over image content, the old content in the volume hides updated content in the image. As such, I'd avoid named volumes here.
It looks like in the setup you show, the ui container doesn't actually do anything: its main container process builds the application and then exits immediately. A multi-stage build is a more appropriate approach here; it lets you compile the application during the image-build phase without declaring a do-nothing container or adding the complexity of named volumes.
# ui/Dockerfile
# First stage: build the application; note this is
# very similar to the existing Dockerfile
FROM node:alpine as prodnode
WORKDIR /app
COPY ./package.json ./
RUN npm i
COPY . .
RUN ["npm", "run", "build"] # not CMD
# Second stage: nginx server serving that application
FROM nginx:latest
COPY --from=prodnode /app/build /usr/share/nginx/html
# use default CMD from the base image
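With this Dockerfile you can build and smoke-test the ui image on its own, independently of Compose (a sketch; the ui-prod tag is just an example name):
# build the combined build+serve image and run it locally
docker build -t ui-prod ./ui
docker run --rm -p 8080:80 ui-prod
# the compiled static files are now served by nginx on container port 80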
In your docker-compose.yml file, you no longer need separate "build" and "serve" containers; they are now combined into the single ui service.
version: "3.8"
services:
backend:
build: ./backend
environment:
- PGUSER=postgres
- PGHOST=postgres
- PGDATABASE=postgres
- PGPASSWORD=postgres_password
- PGPORT=5432
depends_on:
- postgres
# no volumes:
ui:
build: ./ui
depends_on:
- backend
ports:
- '80:80'
# no volumes:
postgres:
image: "postgres:latest"
environment:
- POSTGRES_PASSWORD=postgres_password
volumes: # do persist database data
- pgdata:/var/lib/postgresql/data
volumes:
pgdata:
A similar problem applies to the anonymous volume you've used for the backend service's node_modules directory: it ignores any changes to the package.json file. Since all of the application's code and library dependencies are already included in the image, I've deleted the volumes: blocks that would overwrite them.
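One caveat if you've already run the old setup: the previously created volumes will still be hanging around and can keep serving stale content, so remove them once when you switch over:
# one-time cleanup: remove containers plus the named and anonymous
# volumes left over from the old configuration, then rebuild
docker-compose down -v
docker-compose up --build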
I have this Dockerfile:
FROM openjdk:11
ENV JAR_FILE=target/*.jar
COPY ${JAR_FILE} app.jar
ENTRYPOINT ["java","-jar","/app.jar"]
The structure of my application is:
Demo:
--deployment:
    --Dockerfile
--src/
--docker-compose.yaml
--target:
    --app.jar
Code snippet from docker-compose file:
api:
  container_name: backend
  image: backend
  build:
    context: deployment/
  ports:
    - "8080:8080"
When I put the Dockerfile in the same directory as the docker-compose file and change the compose service to:
api:
  container_name: backend
  image: backend
  build: .
  ports:
    - "8080:8080"
it runs as expected. But I want to put the Dockerfile in the deployment folder, since that is where I keep the Helm chart and the other docker-compose files that use this Dockerfile.
My question is:
How can I specify the correct path to the target folder in the Dockerfile?
You cannot copy anything that is outside of the build context. If you want to keep the current project structure, a solution would be, in your compose file for the api service:
build:
  context: .
  dockerfile: deployment/Dockerfile
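With the context set to the project root, the existing COPY ${JAR_FILE} instruction resolves target/*.jar correctly, since target/ is now inside the context. For reference, the equivalent one-off CLI build would be (a sketch; the tag matches the image name in your compose file):
# run from the Demo/ project root: "." is the context and -f points
# at the Dockerfile inside deployment/
docker build -f deployment/Dockerfile -t backend .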
I am struggling to create a directory in my Dockerfile below. Entering the container after building the image, I can't find the "models" directory. The "ds" directory in the path "/usr/src/app/ds/models" is an application directory that was copied in. Could you please tell me what is wrong here?
FROM python:3.8
ENV PYTHONUNBUFFERED=1
ENV DISPLAY :0
WORKDIR /usr/src/app
COPY . .
RUN mkdir -p /usr/src/app/ds/models
My docker-compose.yaml file contains a volume:
version: '3.8'
services:
  app:
    build: .
    command:
      - /bin/bash
      - -c
      - python manage.py runserver 0.0.0.0:8000
    restart: always
    volumes:
      - .:/usr/src/app
    ports:
      - '8000:8000'
When your docker-compose.yml file says
volumes:
  - .:/usr/src/app
that host directory completely replaces the /usr/src/app directory from your image. This means pretty much nothing in your Dockerfile has an effect; if you deploy this setup to another system, you will find you have never actually run the code built into the image.
I'd recommend deleting this block, and also the command: override (make it the default CMD in the Dockerfile instead, as sketched below).
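For the command: override, that means appending a default command to your existing Dockerfile, something like:
# at the end of the Dockerfile: make the dev server the default
# command instead of overriding it in docker-compose.yml
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]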
In a comment, you say you need to download models to this directory.
Mount only the specific directory you need into your container; don't overwrite the entire application tree. Potentially consider keeping that data directory in a different part of the filesystem.
version: '3.8'
services:
  app:
    build: .
    # no command:
    restart: always
    volumes:
      # only the models subdirectory, not the entire application
      - ./ds/models:/usr/src/app/ds/models
    ports:
      - '8000:8000'
I need to configure docker-compose.yml in a way that will invalidate the local image's docker cache, based on a certain file's checksum.
If that's not possible, I'd like to be able to somehow version the docker-compose.yml or Dockerfile so that it would rebuild the Docker image of a specific service. I want to avoid pushing images to DockerHub, unless it's absolutely the only solution.
At all costs, I want to avoid bash scripts and, in general, writing imperative logic. I'm also not interested in CLI solutions, like passing additional flags to the docker-compose up command.
Context:
We use docker-compose during the development of our application.
Our app also has a Dockerfile for building it locally. We don't push Docker images to DockerHub; we just have the Dockerfile locally, and in docker-compose.yml we declare the source code and package.json (the file Node.js applications use to declare their dependencies) as volumes. Now, sometimes we modify package.json, and docker-compose up throws an error because the image is already built locally and the previous build doesn't contain the new dependencies. Since we pull dependencies during the build stage, I want to be able to tell docker-compose.yml to automatically build a new image whenever the package.json file has changed.
docker-compose.yml
version: "3.8"
services:
web:
build:
context: .
ports:
- "8000:8000"
command: npx nodemon -L app.js
volumes:
- ./app:/usr/src/app
- /usr/src/app/node_modules
env_file:
- .env
depends_on:
- mongo
mongo:
image: mongo:latest
container_name: mongo_db
volumes:
- ./config/init.sh:/docker-entrypoint-initdb.d/init.sh
- ./config/mongod.conf:/etc/mongod.conf
- ./logs:/var/log/mongodb/
- ./db:/data/db
env_file:
- .env
ports:
- "27017:27017"
restart: on-failure:5
command: ["mongod", "-f", "/etc/mongod.conf"]
volumes:
db-data:
mongo-config:
Dockerfile:
FROM node:14.15.1

# Create app directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app

# Install app dependencies
COPY package.json /usr/src/app
RUN npm install

EXPOSE 8000
CMD ["node", "app.js"]
https://docs.docker.com/compose/production/
Removing any volume bindings for application code, so that code stays
inside the container and can’t be changed from outside
I'd like to build an image for production with my app code.
I have a file docker-compose-prod.yml
version: '3'
services:
  ------
  nginx:
    build:
      context: ./docker/nginx
    image: my_nginx:v1
    ports:
      - 80:80
    volumes:
      - ./docker/app:/var/www/html
    depends_on:
      - php
  ------
The code of my app is located in ./docker/app.
The Dockerfile is located in ./docker/nginx, and the COPY command can't copy app code from outside the ./docker/nginx folder.
When I run the build command, I get an image without the app content in /var/www/html:
docker-compose -f docker-compose-prod.yml build
How can I build an image that includes my app code in this case?
You could pass the dockerfile in the build configuration: https://docs.docker.com/compose/compose-file/#dockerfile
That way you can change your build context to be ./docker and, in the Dockerfile, copy the app folder to /var/www/html. Then you no longer have to specify a volume when starting the app.
The correct config looks like:
version: '3'
services:
  ------
  nginx:
    build:
      context: ./docker
      dockerfile: nginx/Dockerfile-prod
    image: my_nginx:v1
    ports:
      - 80:80
  ------
And the Dockerfile-prod in ./docker/nginx:
...
COPY ./app /var/www/html
...
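After this change the image is self-contained, and rebuilding it picks up the current app code (a sketch of the build-and-run cycle):
# rebuild the production image and start it; the app code is baked
# into the image, so no volume mount is needed
docker-compose -f docker-compose-prod.yml build nginx
docker-compose -f docker-compose-prod.yml up -d nginx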