After running unit tests as part of a docker-compose build, a file created in the container is not showing up on my local filesystem.
I have the following Dockerfile:
# IDM.Test/Dockerfile
FROM microsoft/aspnetcore-build:2.0
WORKDIR /src
# Variables
ENV RESTORE_POLICY --no-restore
ENV IGNORE_WARNINGS -nowarn:msb3202,nu1503
# Restore
COPY IDM.sln ./
# Copying and restoring other projects...
COPY IDM.Test/IDM.Test.csproj IDM.Test/
RUN dotnet restore IDM.Test/IDM.Test.csproj $IGNORE_WARNINGS
# Copy
COPY . .
# Test
RUN dotnet test IDM.Test/IDM.Test.csproj -l "trx;LogFileName=test-results.xml"
RUN ls -alR
When running RUN ls -alR I can see that the file /src/IDM.Test/TestResults/test-results.xml is produced within the container. So far so good.
I'm using docker-compose -f docker-compose.test.yml build to start building.
The docker-compose looks like this:
version: '3'
services:
  idm.webapi:
    image: idmwebapi
    build:
      context: .
      dockerfile: IDM.Test/Dockerfile
    volumes:
      - ./IDM.Test/TestResults:/IDM.Test/TestResults/
I have created the folder IDM.Test/TestResults locally, but nothing appears after successfully running the docker-compose build command.
Any clues?
Maybe with this explanation we can solve it. Let me state some obvious things step by step, to avoid confusion. Container creation has two steps:
docker build / docker-compose build -> creates the image
docker run / docker compose up / docker-compose run -> creates the container
Volumes are mounted in the SECOND step (container creation), while your command dotnet test IDM.Test/IDM.Test.csproj -l "trx;LogFileName=test-results.xml" is executed in the first one (image creation).
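You can see the distinction with plain docker commands (the image tag and host path here are illustrative):
docker build -t idmtest -f IDM.Test/Dockerfile .                               # step 1: no volumes available
docker run -v "$PWD/IDM.Test/TestResults:/src/IDM.Test/TestResults" idmtest   # step 2: volume is mounted now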
If you create a folder inside the container at the same path where you've mounted a volume, the data in this new folder will only be available locally inside the container.
In short, my recommendations can be summarized in the following points:
Check that the destination folder of the mounted volume is not created during the build phase, i.e. there is no RUN mkdir /IDM.Test/TestResults/ in your Dockerfile.
Another small recommendation, not mandatory: I like to define volumes with absolute paths in the docker-compose file.
Don't execute commands in the Dockerfile that produce data you want outside, unless you specify the command as ENTRYPOINT or CMD rather than RUN.
In a Dockerfile, ENTRYPOINT or CMD (or command: in docker-compose) specify commands executed after building, when the container starts.
Try with this Dockerfile:
# IDM.Test/Dockerfile
FROM microsoft/aspnetcore-build:2.0
WORKDIR /src
# Variables
ENV RESTORE_POLICY --no-restore
ENV IGNORE_WARNINGS -nowarn:msb3202,nu1503
# Restore
COPY IDM.sln ./
# Copying and restoring other projects...
COPY IDM.Test/IDM.Test.csproj IDM.Test/
RUN dotnet restore IDM.Test/IDM.Test.csproj $IGNORE_WARNINGS
# Copy
COPY . .
# Test
CMD dotnet test IDM.Test/IDM.Test.csproj -l "trx;LogFileName=test-results.xml"
Or this docker-compose:
version: '3'
services:
  idm.webapi:
    image: idmwebapi
    build:
      context: .
      dockerfile: IDM.Test/Dockerfile
    volumes:
      - ./IDM.Test/TestResults:/src/IDM.Test/TestResults/
    command: >
      dotnet test IDM.Test/IDM.Test.csproj -l "trx;LogFileName=test-results.xml"
Note that the volume is mounted at /src/IDM.Test/TestResults/ so that it matches where dotnet test actually writes its results (your WORKDIR is /src).
After the container runs, you can check your generated files with ls.
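Putting it together (assuming the compose file above), the sequence would be:
docker-compose -f docker-compose.test.yml build   # builds the image; no volumes yet
docker-compose -f docker-compose.test.yml up      # creates the container, mounts the volume, runs the tests
ls -al ./IDM.Test/TestResults                     # test-results.xml should now appear here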
Related
I would like to have the files created during the build phase stored on my local machine.
I have this Dockerfile
FROM node:17-alpine as builder
WORKDIR '/app'
COPY ./package.json ./
RUN npm install
RUN npm i -g @angular/cli
COPY . .
RUN ng build foo --prod
RUN touch test.txt # this is just a test
CMD ["ng", "serve"] # just to keep the container running
I also created a shared volume via docker compose
services:
  client:
    build:
      dockerfile: Dockerfile.prod
      context: ./foo
    volumes:
      - /app/node_modules
      - ./foo:/app
If I attach a shell to the running container and run touch test.txt, the file is created on my local machine.
I can't understand why the files are not created during the build phase...
If I use a multi-stage Dockerfile, the dist folder is created in the container (just by adding the following to the Dockerfile), but I still can't see it on the local machine:
FROM nginx
EXPOSE 80
COPY --from=builder /app/dist/foo /usr/share/nginx/html
I can't understand why the files are not created during the build phase...
That's because the build phase doesn't involve volume mounting.
Mounting volumes only occurs when creating containers, not when building images. If you map a volume to an existing file or directory, Docker "overrides" the image's path, much like a traditional Linux mount. This means that, before the container is created, your image has everything from /app/* pre-packaged, which is why you're able to copy the contents in the multi-stage build.
However, since you defined a volume with - ./foo:/app in your docker-compose file, the container won't see those files anymore; instead, the /app folder will contain the current contents of your ./foo directory.
If you wish to copy the contents of the image to a mounted volume, you'll have to do it in the ENTRYPOINT, as it runs upon container instantiation, after the volumes are mounted.
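A minimal sketch of that idea (the entrypoint.sh name and the /output mount path are illustrative, not taken from your setup):
#!/bin/sh
# entrypoint.sh -- copy the build output into the mounted volume,
# then hand control to the container's original command
cp -r /app/dist/foo/. /output/
exec "$@"
with the corresponding Dockerfile lines:
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
CMD ["ng", "serve"]
and a bind mount such as - ./dist-local:/output added under volumes: in docker-compose.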
I have a NestJS monorepo project with the following structure:
...
apps
  app1
  app2
  app3
...
If I understand the idea correctly, I can run all the applications at the same time, i.e. I run a command and can access the apps at paths like http://my.domain/app1/, http://my.domain/app2/, http://my.domain/app3/, or in some similar way. And I need to put all the apps in a Docker container (or containers) and run them from there.
I haven't found anything about this process. Did I understand the idea correctly, and where can I learn more about deploying a NestJS monorepo project?
This is how I solved it:
apps
  app1
    Dockerfile
    ...
  app2
    Dockerfile
    ...
  app3
    Dockerfile
    ...
docker-compose.yml
Each Dockerfile does the same thing:
FROM node:16.15.0-alpine3.15 AS development
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build
FROM node:16.15.0-alpine3.15 AS production
ARG NODE_ENV=production
ENV NODE_ENV=${NODE_ENV}
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install --only=production --omit=dev
COPY --from=development /usr/src/app/dist ./dist
CMD ["npm", "run", "start-app1:prod"]
Where the last line starts the application, so adjust it to your project's naming, as shown below.
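For example (the script names are hypothetical), app2's Dockerfile would differ only in its final line:
CMD ["npm", "run", "start-app2:prod"]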
Later you should build each of the images in your CI/CD pipeline and deploy them separately. To run docker build from the root folder of the project, you just need to provide the Dockerfile path via the -f parameter, for example:
docker build -f apps/app1/Dockerfile -t app1:version1 .
docker build -f apps/app2/Dockerfile -t app2:version1 .
docker build -f apps/app3/Dockerfile -t app3:version1 .
To run them locally for tests, use docker-compose.yml:
version: '3.8'
services:
  app1:
    image: app1:version1
    ports:
      - 3000:3000 # set according to your project setup
  app2:
    ...
  app3:
    ...
And start everything by calling docker compose up.
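For local iteration you can also rebuild the images and start the services in one step:
docker compose up --build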
How can I mount a volume to store my .m2 repo so I don't have to download the internet on every build?
My build is a Multi stage build:
FROM maven:3.5-jdk-8 as BUILD
COPY . /usr/src/app
RUN mvn --batch-mode -f /usr/src/app/pom.xml clean package
FROM openjdk:8-jdk
COPY --from=BUILD /usr/src/app/target /opt/target
WORKDIR /opt/target
CMD ["/bin/bash", "-c", "find -type f -name '*.jar' | xargs java -jar"]
You can do that with Docker >18.09 and BuildKit. You need to enable BuildKit:
export DOCKER_BUILDKIT=1
Then you need to enable the experimental Dockerfile frontend features by adding this as the first line of the Dockerfile:
# syntax=docker/dockerfile:experimental
Afterwards you can call the RUN command with a cache mount. Cache mounts persist between builds:
RUN --mount=type=cache,target=/root/.m2 \
mvn --batch-mode -f /usr/src/app/pom.xml clean package
Although the answer from @Marek Obuchowicz is still valid, here is a small update.
First add this line to Dockerfile:
# syntax=docker/dockerfile:1
You can set the DOCKER_BUILDKIT inline like this:
DOCKER_BUILDKIT=1 docker build -t mytag .
I would also suggest splitting the dependency resolution and packaging phases, so you can take full advantage of Docker layer caching (if nothing changes in pom.xml, the cached layer with already-downloaded dependencies is reused). The full Dockerfile could look like this:
# syntax=docker/dockerfile:1
FROM maven:3.6.3-openjdk-17 AS MAVEN_BUILD
COPY ./pom.xml ./pom.xml
RUN --mount=type=cache,target=/root/.m2 mvn dependency:go-offline -B
COPY ./src ./src
RUN --mount=type=cache,target=/root/.m2 mvn package
FROM openjdk:17-slim-buster
EXPOSE 8080
COPY --from=MAVEN_BUILD /target/myapp-*.jar /app.jar
ENTRYPOINT ["java","-jar","/app.jar","-Xms512M","-Xmx2G","-Djava.security.egd=file:/dev/./urandom"]
I'm doing a multi-stage Docker build:
# Dockerfile
########## Build stage ##########
FROM golang:1.10 as build
ENV TEMP /go/src/github.com/my-id/my-go-project
WORKDIR $TEMP
COPY . .
RUN make build
########## Final stage ##########
FROM alpine:3.4
# ...
ENV HOME /home/$USER
ENV TEMP /go/src/github.com/my-id/my-go-project
COPY --from=build $TEMP/bin/my-daemon $HOME/bin/
RUN chown -R $USER:$GROUP $HOME
USER $USER
ENTRYPOINT ["my-daemon"]
and the Makefile contains in part:
build: bin
	go build -v -o bin/my-daemon cmd/my-daemon/main.go
bin:
	mkdir $@
This all works just fine with a docker build.
Now I want to use Codeship, so I have:
# codeship-services.yml
cachemanager:
  build:
    image: my-daemon
    dockerfile: Dockerfile
and:
# codeship-steps.yml
- name: my-daemon build
  tag: master
  service: my-service
  command: true
The issue is that if I do jet steps --master, it builds everything OK but then runs the container as if I had done a docker run. Why? I don't want it to do that.
It's as if I would have to have two separate Dockerfiles: one only for the build stage and one only for the run stage and use the former with jet. But then this defeats the point of Docker multi-stage builds.
I was able to solve this problem using multi-stage builds split into two different files following this guide: https://documentation.codeship.com/pro/common-issues/caching-multi-stage-dockerfile/
Basically, you take your existing Dockerfile and split it into two files like so, with the second referencing the first:
# Dockerfile.build-stage
FROM golang:1.10 as build-stage
ENV TEMP /go/src/github.com/my-id/my-go-project
WORKDIR $TEMP
COPY . .
RUN make build
# Dockerfile
FROM build-stage as build-stage
FROM alpine:3.4
# ...
ENV HOME /home/$USER
ENV TEMP /go/src/github.com/my-id/my-go-project
COPY --from=build-stage $TEMP/bin/my-daemon $HOME/bin/
RUN chown -R $USER:$GROUP $HOME
USER $USER
ENTRYPOINT ["my-daemon"]
Then, in your codeship-service.yml file:
# codeship-services.yml
cachemanager-build:
  build:
    dockerfile: Dockerfile.build-stage
cachemanager-app:
  build:
    image: my-daemon
    dockerfile: Dockerfile
And in your codeship-steps.yml file:
# codeship-steps.yml
- name: cachemanager build
  tag: master
  service: cachemanager-build
  command: <here you can run tests or linting>
- name: publish to registry
  tag: master
  service: cachemanager-app
  ...
I don't think you want to actually run the Dockerfile, because doing so will start your app. We use the second stage to push a smaller image to a registry.
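For the publish step, a Codeship Pro push step looks roughly like this (the image name, registry URL, and credentials path are placeholders; check the Codeship documentation for the exact options):
- name: publish to registry
  tag: master
  service: cachemanager-app
  type: push
  image_name: myorg/my-daemon
  registry: https://index.docker.io/v1/
  encrypted_dockercfg_path: dockercfg.encrypted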
I'm using docker and docker-compose to set up a build pipeline. I've got a front-end that's written in javascript and needs to be built before being used. The backend is written in go.
To make this component integrate with the rest of our docker-compose setup, I want to do the building in a docker image as well.
This is the flow I'm going for:
during build, do:
build the frontend stuff and put it in /output (which is bound to the output volume)
build the backend server
when running, do:
run the server; it has access to the build files in /output
I'm quite new to docker and docker-compose so I'm not sure if this is possible, or even the right thing to do.
For reference, here's my docker-compose.yml:
version: '2'
volumes:
  output:
    driver: local
services:
  frontend:
    build: .
    volumes:
      - output:/output
  backend:
    build: ./backend
    depends_on:
      - frontend
    volumes:
      - output:/output
and Dockerfile:
FROM node
# create working dir
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
ADD package.json /usr/src/app/package.json
# install packages
RUN npm install
COPY . /usr/src/app
# build frontend files and place results in /output
RUN npm build
RUN cp /usr/src/app/build/* /output
And backend/Dockerfile:
FROM go
# copy and build server
COPY . /usr/src/backend
WORKDIR /usr/src/backend
RUN go build
# run the server
ENTRYPOINT ["/usr/src/backend/main"]
Something is wrong here, but I do not know what. It seems as though the output of the build step is not persisted in the output volume. What can I do to fix this?
You cannot attach a volume during docker build.
The reason for this is that the goal of the docker build command is to build an image and nothing else; it doesn't need volumes, since the Dockerfile has ADD / COPY for getting files into the image.
To produce your output, you should create a script which does roughly the npm install ; npm build ; cp /usr/src/app/build/* /output from your current Dockerfile, and use this script as the ENTRYPOINT / CMD in your Dockerfile; a sketch follows.
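A minimal sketch of such a script (the file name is illustrative; note npm run build rather than npm build, since npm build does not run a package's build script):
#!/bin/sh
# build-frontend.sh -- runs at container start, after the output volume is mounted
npm install
npm run build
cp -r /usr/src/app/build/. /output/
In the frontend Dockerfile, the RUN npm build and RUN cp lines would then be replaced by CMD ["/usr/src/app/build-frontend.sh"].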
I'm not sure compose can orchestrate this ordering by itself, but in any case I find it clearer wrapped in a shell script that first executes the frontend builder container and then executes the backend container with the output directory as a volume.
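Such a wrapper could look like this (service names taken from the docker-compose.yml above):
#!/bin/sh
# run.sh -- build the frontend first, then start the backend
docker-compose run --rm frontend        # runs the build script, populating the output volume
docker-compose up --no-deps backend     # backend now sees /output filled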