I'm trying to copy my ./dist directory after building my Angular app.
Here is my Dockerfile:
# Create image based off of the official Node 12 image
FROM node:12-alpine
RUN apk update && apk add --no-cache make git
RUN mkdir -p /home/project/frontend
# Change directory so that our commands run inside this new directory
WORKDIR /home/project/frontend
# Copy dependency definitions
COPY package*.json ./
RUN npm cache verify
## installing packages
RUN npm install
COPY ./ ./
RUN npm run build --output-path=./dist
COPY /dist /var/www/front
But when I run docker-compose build dashboard I get this error:
Service 'dashboard' failed to build: COPY failed: stat /var/lib/docker/tmp/docker-builderxxx/dist: no such file or directory
I don't know why. Is there something wrong?
Here is the docker-compose file as well, in case you need to check it:
...
dashboard:
container_name: dashboard
build: ./frontend
image: dashboard
restart: unless-stopped
networks:
- app-network
...
The Dockerfile COPY directive copies content from the build context (the host-system directory in the build: line) into the image. If you're just trying to move around content within the image, you can RUN cp or RUN mv to use the ordinary Linux shell commands instead.
RUN npm run build --output-path=./dist \
&& cp -a dist /var/www/front
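As an aside, if the eventual goal is to serve the built files from a web server, a multi-stage build sidesteps the COPY problem entirely, since COPY --from copies between images rather than from the host context. A minimal sketch, assuming the Angular build writes to ./dist and that nginx's default document root is acceptable:
# Stage 1: build the Angular app
FROM node:12-alpine AS build
WORKDIR /home/project/frontend
COPY package*.json ./
RUN npm install
COPY ./ ./
RUN npm run build --output-path=./dist
# Stage 2: copy only the built assets into a web-server image
FROM nginx:alpine
COPY --from=build /home/project/frontend/dist /usr/share/nginx/html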
Related
My Setup:
I have 3 services defined in my docker-compose.yml: frontend, backend, and postgresql. postgresql is pulled from Docker Hub.
frontend and backend are built from their own Dockerfiles. Most of the code in these Dockerfiles is the same; only the EXPOSE, ENTRYPOINT, CMD, and ARG values differ from each other. That is why I wanted to create a 'base Dockerfile' that these two services can "include".
Sadly, I found out I cannot simply "include" one Dockerfile in another Dockerfile; I have to create an image.
So I tried to create a base image for frontend and backend in my docker-compose.yml:
services:
frontend_base:
image: frontend_base_image
build:
context: ./
dockerfile: base.dockerfile
args:
- WORKDIR=/app/frontend/
- TOOLSDIR=${PWD}/docker/tools
- LOCALDIR=${PWD}/app/frontend/client
backend_base:
image: backend_base_image
build:
context: ./
dockerfile: base.dockerfile
args:
- WORKDIR=/app/backend/
- TOOLSDIR=${PWD}/docker/tools
- LOCALDIR=${PWD}/app/backend/api
frontend:
depends_on:
- frontend_base
# Some more stuff for the service
backend:
depends_on:
- backend_base
# Some more stuff for the service
My 'base-Dockerfile':
FROM node:18
# Set in docker-compose.yml-file
ARG WORKDIR
ARG TOOLSDIR
ARG LOCALDIR
ENV WORKDIR=${WORKDIR}
# Install dumb-init for the init system
RUN wget -O /usr/local/bin/dumb-init https://github.com/Yelp/dumb-init/releases/download/v1.2.5/dumb-init_1.2.5_x86_64
RUN chmod +x /usr/local/bin/dumb-init
WORKDIR ${WORKDIR}
RUN mkdir -p ${WORKDIR}
# Copy package.json to the current workdir (for npm install)
COPY ${LOCALDIR}/package*.json ${WORKDIR}
# Install all packages (referenced from package.json)
RUN npm install
COPY ${TOOLSDIR}/start.sh /usr/local/bin/start.sh
COPY ${LOCALDIR}/ ${WORKDIR}
The Problem I am facing:
My frontend and backend Dockerfiles try to pull the 'base-image' from docker.io
=> ERROR [docker-backend internal] load metadata for docker.io/library/backend_base_image:latest 0.9s
=> ERROR [docker-frontend internal] load metadata for docker.io/library/frontend_base_image:latest 0.9s
=> CANCELED [frontend_base_image internal] load metadata for docker.io/library/node:18
My Research:
I do not know if my approach is possible. I did not find many resources about this (integrated with docker-compose) online, only resources about building the images via the shell and then using them in a Dockerfile. I also tried that and ran into some other issues, where I could not provide the correct arguments to the base Dockerfile.
So first I wanted to find out whether it is possible with docker-compose at all.
I am sorry if this is super obvious and my question is dumb; I am relatively new to Docker.
We can use a multi-stage containerfile to define all three images in a single file:
FROM node:18 AS base
# Set in docker-compose.yml-file
ARG WORKDIR
ARG TOOLSDIR
ARG LOCALDIR
ENV WORKDIR=${WORKDIR}
# Install dumb-init for the init system
RUN wget -O /usr/local/bin/dumb-init https://github.com/Yelp/dumb-init/releases/download/v1.2.5/dumb-init_1.2.5_x86_64
RUN chmod +x /usr/local/bin/dumb-init
WORKDIR ${WORKDIR}
RUN mkdir -p ${WORKDIR}
# Copy package.json to the current workdir (for npm install)
COPY ${LOCALDIR}/package*.json ${WORKDIR}
# Install all packages (referenced from package.json)
RUN npm install
COPY ${TOOLSDIR}/start.sh /usr/local/bin/start.sh
COPY ${LOCALDIR}/ ${WORKDIR}
FROM base AS frontend
...
FROM base AS backend
...
In our docker-compose.yml, we can then build a specific stage for the frontend- and backend-service:
...
frontend:
image: frontend
build:
context: ./
target: frontend
dockerfile: base.dockerfile
...
backend:
image: backend
build:
context: ./
target: backend
dockerfile: base.dockerfile
...
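For reference, the same stages can also be built outside Compose with plain docker build and its --target flag (build args omitted here for brevity; pass them with --build-arg as needed):
docker build --target frontend -t frontend -f base.dockerfile .
docker build --target backend -t backend -f base.dockerfile .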
If you want a single base image with shared tools, you can do this almost exactly the way you describe, but the one caveat is that you can't describe the base image in the docker-compose.yml file. You need to run this separately from Compose:
docker build -t base-image -f base.dockerfile .
I would not try to install any application code in that base Dockerfile. Installing something like an init wrapper that needs to be shared across all of your application images does make sense there. I think it's fine to tie a Dockerfile to a specific source tree and image layout, and I don't typically recommend passing filesystem paths as ARGs.
# base.dockerfile
FROM node:18
RUN wget -O /usr/local/bin/dumb-init https://github.com/Yelp/dumb-init/releases/download/v1.2.5/dumb-init_1.2.5_x86_64 \
&& chmod +x /usr/local/bin/dumb-init
COPY docker/tools/start.sh /usr/local/bin/
ENTRYPOINT ["dumb-init", "--"]
CMD ["start.sh"]
The per-image Dockerfiles will look pretty similar – and like every other Node Dockerfile – but there's no harm in repeating this, in much the same way that your components probably have similar-looking but self-contained package.json files.
# */Dockerfile
FROM base-image
# WORKDIR also creates the directory if it doesn't exist
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY ./ ./
RUN npm run build
EXPOSE 3000
# CMD ["npm", "run", "start"] # if the start.sh from the base is wrong
Of note, this gives you some flexibility to change things if the two image setups aren't identical: if you need an additional build step, want to run a dev server, or want to package the frontend into a lighter-weight Nginx server.
In the Compose file you'd declare these normally with a build: block. Compose isn't aware of the base image and there's no way to tell it about it.
version: '3.8'
services:
frontend:
build: ./app/frontend/client
ports: ['3000:3000']
backend:
build: ./app/backend/api
ports: ['3001:3000']
One thing I've done here which at least reduces the number of variable references is to consistently use . as the current directory name. In the Compose file that's the directory containing the docker-compose.yml; on the left-hand side of COPY it's the build: context directory on the host; on the right-hand side of COPY it's the most recent WORKDIR. Using . where appropriate means you don't have to repeat the directory name, so you have a little flexibility if you do need to rearrange your source tree or container filesystem.
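To make that concrete, here is the recurring COPY line annotated (a recap of the rules above, not new behavior):
# left-hand '.' = the build context on the host (the build: directory)
# right-hand '.' = the current WORKDIR inside the image
WORKDIR /app
COPY . .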
Quick question regarding Dockerfile.
I've got a folder structure like so:
docker-compose.yml
client
src
package.json
Dockerfile
...etc
The client folder contains a React application, and the root is a Node.js server with TypeScript. I've created a Dockerfile like so:
FROM node
RUN mkdir -p /server/node_modules && chown -R node:node /server
WORKDIR /server
USER node
COPY package*.json ./
RUN npm install
COPY --chown=node:node . ./dist
RUN npm run build-application
COPY /src/views ./dist/src/views
COPY /src/public ./dist/src/public
EXPOSE 4000
CMD node dist/src/index.js
The npm run build-application command builds the client (npm run build --prefix ./client) and the server (rimraf dist && mkdir dist && tsc -p .). The problem is that Docker cannot find the client folder; the error is:
npm ERR! enoent ENOENT: no such file or directory, open '/server/client/package.json'
npm ERR! enoent This is related to npm not being able to find a file.
Can someone explain why? And how to fix this?
Docker compose file:
...
server:
build:
context: ./server
dockerfile: Dockerfile
image: mazosios-pedutes-server
container_name: mazosios-pedutes-server
restart: unless-stopped
networks:
- app-network
env_file:
- ./server/.env
ports:
- "4000:4000"
Since the error says that there is no client/package.json in /server, my question is the following one.
Is your ./client directory located within /server?
The Dockerfile WORKDIR instruction makes all commands that follow it execute within the directory that you pass to it as a parameter.
I guess if you add RUN tree -d (lists only nested directories) after the last COPY instruction, you will be able to see where your client directory is located, and you will be able to fix the path to it.
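For example, a throwaway debugging layer along those lines (a sketch; note that tree is not installed in the node base image by default, so plain ls avoids an extra install):
# Temporary debugging step: list everything that ended up in the image
RUN ls -laR /server | head -n 200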
I installed Go on Ubuntu 16.04. My GOPATH is /home/{username}/work.
I created a project in /home/{username}/work/src.
This is my project folder hierarchy.
project-name
services
configuration
api
main.go
Dockerfile
bff
api
main.go
Dockerfile
docker-compose.yml
favicon.ico
README.md
I can build and run it with my Dockerfile, but I can't build and bring it up with docker-compose.
I couldn't find any solution.
Configuration service dockerfile:
FROM golang:1.11.1-alpine3.8 as builder
RUN apk update && apk add git && go get gopkg.in/natefinch/lumberjack.v2
RUN mkdir -p /go/src/project-name/services/configuration
RUN CGO_ENABLED=0
RUN GOOS=linux
ADD . /go/src/project-name/services/configuration
ENV GOPATH /go
WORKDIR /go/src/project-name/services/configuration/api
RUN go get
RUN go build
FROM alpine
RUN apk update
RUN apk add curl
RUN mkdir -p /app
COPY --from=builder /go/src/project-name/services/configuration/api/ /app/
RUN chmod +x /app/api
WORKDIR /app
EXPOSE 5001
ENTRYPOINT ["/app/api"]
It works with the plain Dockerfile.
This is my docker-compose file:
version: '3.4'
services:
bff:
image: project-name/bff:${TAG:-latest}
build:
context: .
dockerfile: services/bff/Dockerfile
ports:
- "5000:5000"
container_name: bff
depends_on:
- configuration
configuration:
image: project-name/configuration:${TAG:-latest}
build:
context: .
dockerfile: services/configuration/Dockerfile
ports:
- "5001:5001"
container_name: configuration
It didn't work.
When the RUN go get command runs, it gives an error; the error is:
can't load package: package project-name/services/configuration/api: no Go files in /go/src/project-name/services/configuration/api
ERROR: Service 'configuration' failed to build: The command '/bin/sh -c go get' returned a non-zero code: 1
In your Dockerfile, you say
ADD . /go/src/project-name/services/configuration
which expects the build context directory on the host to contain the source files. But your docker-compose.yml file says
build:
context: .
dockerfile: services/configuration/Dockerfile
where the context directory is the root of your source control tree, not the specific Go source directory you're trying to build. If you change this to
build:
context: services/configuration
# Default value of "dockerfile: Dockerfile" will be right
it will likely work better.
In plain Docker commands, your current docker-compose.yml file says the equivalent of
cd $GOPATH/src/project-name
docker build -f services/configuration/Dockerfile .
But you're probably actually running
cd $GOPATH/src/project-name/services/configuration
docker build .
and which directory is the current directory matters.
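Putting that together, the configuration service in the docker-compose.yml would look something like this (a sketch; the bff service gets the same treatment):
configuration:
  image: project-name/configuration:${TAG:-latest}
  build:
    context: services/configuration
  ports:
    - "5001:5001"
  container_name: configuration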
When I build my app without using docker-compose :
docker build -t aspnetapp .
docker run -d -p 8080:80 --name myapp aspnetapp
the Docker container runs just fine. But when I try to run it using docker-compose up, the image builds successfully but the website gives me errors on loading.
dockerfile:
FROM microsoft/dotnet:sdk AS build-env
WORKDIR /app
RUN curl -sL https://deb.nodesource.com/setup_10.x | bash -
RUN apt-get install -y nodejs
# Copy csproj and restore as distinct layers
COPY *.csproj ./
RUN dotnet restore
# Copy everything else and build
COPY . ./
RUN dotnet publish -c Release -o out
# Build runtime image
FROM microsoft/dotnet:aspnetcore-runtime
WORKDIR /app
COPY --from=build-env /app/out .
ENTRYPOINT ["dotnet", "app.dll"]
docker-compose:
version: '3.4'
services:
app:
image: aspnetapp
build: ./app
ports:
- "8080:80"
AggregateException: One or more errors occurred. (One or more errors occurred. (Failed to start 'npm'. To resolve this:.
[1] Ensure that 'npm' is installed and can be found in one of the PATH directories.
Current PATH enviroment variable is: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
Make sure the executable is in one of those directories, or update your PATH.
Removing
environment:
  - ASPNETCORE_ENVIRONMENT=Development
from the compose override file fixed all this. I'm not sure why it was the problem; presumably, in the Development environment the app tries to launch the npm dev server at startup, and the aspnetcore-runtime image contains no Node or npm.
Is your Dockerfile in ./ or ./app? If it's in ./, change docker-compose.yml to build: ./
EDIT
I've just realised that you are specifying both image and build in your docker-compose.yml; get rid of the image.
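That is, something like this (a sketch of the suggestion above):
version: '3.4'
services:
  app:
    build: ./app
    ports:
      - "8080:80"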
I'm attempting to have a dev container and a "production" container built from a single Dockerfile. It already "works", but I do not have access to the dev container after the build (multi-stage intermediaries are cached, but not tagged in a useful way).
The Dockerfile is as-so:
# See https://github.com/facebook/flow/issues/3649 for why there
# is a separate image for running Flow ... :(
FROM node:8.9.4-slim AS graphql-dev
WORKDIR /graphql-api
ENV PATH /graphql-api/node_modules/.bin:$PATH
RUN apt update && apt install -y libelf1
ADD ./.babelrc /graphql-api/
ADD ./.eslintignore /graphql-api/
ADD ./.eslintrc /graphql-api/
ADD ./.flowconfig /graphql-api/
ADD ./.npmrc /graphql-api/
ADD ./*.json5 /graphql-api/
ADD ./lib/ /graphql-api/lib
ADD ./package.json /graphql-api/
ADD ./schema/ /graphql-api/schema
ADD ./yarn.lock /graphql-api/
RUN yarn install --production --silent && npm install --silent
CMD ["npm", "run", "lint-flow-test"]
# Cleans node_modules etc, see github.com/tj/node-prune
# this container contains no node, etc (golang:latest)
FROM golang:latest AS graphql-cleaner
WORKDIR /graphql-api
ENV PATH /graphql-api/node_modules/.bin:$PATH
COPY --from=graphql-dev graphql-api .
RUN go get github.com/tj/node-prune/cmd/node-prune
RUN node-prune
# Minimal end-container (Alpine 💖)
FROM node:8.9.4-alpine
WORKDIR /graphql-api
ENV PATH /graphql-api/node_modules/.bin:$PATH
COPY --from=graphql-cleaner graphql-api .
EXPOSE 3000
CMD ["npm", "start"]
Ideally I'd be able to start graphql-dev and the final container both with a docker-compose.yml, like so:
version: '3'
services:
graphql-dev:
image: graphql-dev
build: ./Dockerfile
volumes:
- ./lib:/graphql-api/lib
- ./schema:/graphql-api/schema
graphql-prod:
image: graphql
build: ./Dockerfile
The two final steps do the "shrinking" for the final build (it saves over 250MB for us) and are not really required except in the production build.
If I extract the Dockerfile into two, say Dockerfile.prod and Dockerfile.dev, then I have to manage dependencies between them, as I can't force prod to always build dev (can I?).
If I were somehow able to specify a target for the build in the docker-compose.yml file I could do it, but specifying a target under build in my yml file yields an error:
ERROR: The Compose file './docker-compose.yml' is invalid because:
services.graphql-dev.build contains unsupported option: 'target'
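For what it's worth, target is a valid option under build:, but only from Compose file format 3.4 onward (and with a docker-compose release new enough to know about it); version: '3' resolves to 3.0, which is why the option is rejected here. A sketch with the version bumped (volumes omitted; context: . assumes the Dockerfile sits next to docker-compose.yml):
version: '3.4'
services:
  graphql-dev:
    image: graphql-dev
    build:
      context: .
      target: graphql-dev
  graphql-prod:
    image: graphql
    build:
      context: .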