My project structure looks as follows:
app/
  docker-compose.yml
  test_some_pytest.py  # this has some pytest code
  tests.Dockerfile
My tests.Dockerfile looks like this:
FROM python:3.4-alpine
RUN python --version
RUN pip --version
COPY . /APP
WORKDIR /APP
RUN pip install pytest
RUN ["pytest"]
and my docker-compose.yml looks like this:
services:
  tests:
    build:
      context: .
      dockerfile: tests.Dockerfile
When I run docker-compose up --build tests, pytest does run, but apparently somewhere else. It shows the following output:
.
.
.
Removing intermediate container 96f9a8ba43d2
---> 82c89715d4c0
Step 7/7 : RUN ["pytest"]
---> Running in c30ee497e5f5
============================= test session starts ==============================
platform linux -- Python 3.4.10, pytest-4.6.11, py-1.10.0, pluggy-0.13.1
rootdir: /python-test-calculator
collected 0 items
========================= no tests ran in 0.00 seconds =========================
The command 'pytest' returned a non-zero code: 5
ERROR: Service 'tests' failed to build : Build failed
If I use your tests.Dockerfile exactly as written, the following docker-compose.yaml:
version: "3"
services:
tests:
build:
context: .
dockerfile: tests.Dockerfile
And the following test_some_pytest.py:
def test_something():
    assert True
It successfully runs pytest when I run docker-compose build:
$ docker-compose build
Building tests
[...]
Step 7/7 : RUN ["pytest"]
---> Running in 8d8a1f44913f
============================= test session starts ==============================
platform linux -- Python 3.4.10, pytest-4.6.11, py-1.10.0, pluggy-0.13.1
rootdir: /APP
collected 1 item
test_some_pytest.py . [100%]
=========================== 1 passed in 0.01 seconds ===========================
Removing intermediate container 8d8a1f44913f
---> 055afd5b1f8d
Successfully built 055afd5b1f8d
Successfully tagged docker_tests:latest
You can see from the above output that pytest discovered and successfully ran 1 test.
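One more thing worth checking: the rootdir in your failing output is /python-test-calculator rather than /APP, which suggests the build isn't using the files you think it is. As a debugging sketch (not part of the original Dockerfile), a throwaway listing step just before the test run shows what actually gets copied into the image:

RUN ls -la /APP
RUN ["pytest"]

Also make sure you run docker-compose from the directory containing tests.Dockerfile and the test file, since context: . is resolved relative to docker-compose.yml.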
I'm trying to have Docker build my Node project in the same folder as the source.
This is my Dockerfile:
FROM node:16.8.0-alpine
WORKDIR /app
RUN rm -rf node_modules
RUN yarn install
RUN yarn build
And this is my docker-compose.yml file:
version: "3"
services:
vuebuild:
build: ./frontend
volumes:
- ./frontend:/app
But I'm getting this error:
> [5/5] RUN yarn dev:
#8 0.649 yarn run v1.22.5
#8 0.673 error Couldn't find a package.json file in "/app"
#8 0.673 info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
------
executor failed running [/bin/sh -c yarn dev]: exit code: 1
ERROR: Service 'vuebuild' failed to build : Build failed
Since you're trying to start a development environment (assuming you want to develop inside Alpine, but that's another story) rather than build a Docker image for your Node app, you could use this docker-compose.yaml file:
version: '3'
services:
  vuebuild:
    image: node:16.8.0-alpine
    command: sh -c "cd /app && yarn install && node index.js"
    volumes:
      - ./frontend:/app
Start the container:
$ docker-compose up
Then attach to it:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3cd8f0844e87 d2adab47ce8f "docker-entrypoint.s…" 7 seconds ago Up 6 seconds hello-node_vuebuild_1
$ docker exec -it hello-node_vuebuild_1 sh
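If you do want the project built into the image instead, keep in mind that volumes declared in docker-compose.yml are mounted at run time, not at build time, so RUN yarn install in your original Dockerfile sees an empty /app. A Dockerfile that copies the sources in before building might look like this (a sketch; it assumes a yarn.lock sits next to package.json, so drop it from the COPY if you don't have one):

FROM node:16.8.0-alpine
WORKDIR /app
# Copy the manifests first so the dependency layer is cached between builds
COPY package.json yarn.lock ./
RUN yarn install
# Then copy the rest of the sources and build
COPY . .
RUN yarn build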
Okay, my solution was to give up on using a Dockerfile and instead create a container that builds my project for me.
It won't cause me any problems, as I will only run docker-compose up -d once.
Probably not the best solution, but the only one I have found.
I'm using multi-stage building with a Dockerfile like this:
#####################################
## Build the client
#####################################
FROM node:12.19.0 as web-client-builder
WORKDIR /workspace
COPY web-client/package*.json ./
# Running npm install before we update our source allows us to take advantage
# of docker layer caching. We are excluding node_modules in .dockerignore
RUN npm ci
COPY web-client/ ./
RUN npm run test:ci
RUN npm run build
#####################################
## Host the client on a static server
#####################################
FROM nginx:1.19 as web-client
COPY --from=web-client-builder /workspace/nginx-templates /etc/nginx/templates/
COPY --from=web-client-builder /workspace/nginx.conf /etc/nginx/nginx.conf
COPY --from=web-client-builder /workspace/build /var/www/
#####################################
## Build the server
#####################################
FROM openjdk:11-jdk-slim as server-builder
WORKDIR /workspace
COPY build.gradle settings.gradle gradlew ./
COPY gradle ./gradle
COPY server/ ./server/
RUN ./gradlew --no-daemon :server:build
#####################################
## Start the server
#####################################
FROM openjdk:11-jdk-slim as server
WORKDIR /app
ARG JAR_FILE=build/libs/*.jar
COPY --from=server-builder /workspace/server/$JAR_FILE ./app.jar
ENTRYPOINT ["java","-jar","/app/app.jar"]
I also have a docker-compose.yml like this:
version: "3.8"
services:
server:
restart: always
container_name: server
build:
context: .
dockerfile: Dockerfile
target: server
image: server
ports:
- "8090:8080"
web-client:
restart: always
container_name: web-client
build:
context: .
dockerfile: Dockerfile
target: web-client
image: web-client
environment:
- LISTEN_PORT=80
ports:
- "8091:80"
The two images involved here, web-client and server, are completely independent. I'd like to take advantage of multi-stage build parallelization.
When I run docker-compose build (I'm on docker-compose 1.27.4), I get output like this
λ docker-compose build
Building server
Step 1/24 : FROM node:12.19.0 as web-client-builder
---> 1f560ce4ce7e
... etc ...
Step 6/24 : RUN npm run test:ci
---> Running in e9189b2bff1d
... Runs tests ...
... etc ...
Step 24/24 : ENTRYPOINT ["java","-jar","/app/app.jar"]
---> Using cache
---> 2ebe48e3b06e
Successfully built 2ebe48e3b06e
Successfully tagged server:latest
Building web-client
Step 1/11 : FROM node:12.19.0 as web-client-builder
---> 1f560ce4ce7e
... etc ...
Step 6/11 : RUN npm run test:ci
---> Using cache
---> 0f205b9549e0
... etc ...
Step 11/11 : COPY --from=web-client-builder /workspace/build /var/www/
---> Using cache
---> 31c4eac8c06e
Successfully built 31c4eac8c06e
Successfully tagged web-client:latest
Notice that my tests (npm run test:ci) run twice (Step 6/24 for the server target and then again at Step 6/11 for the web-client target). I'd like to understand why this is happening, but I guess it's not a huge problem, because at least it's cached by the time it gets around to the tests the second time.
Where this gets to be a bigger problem is when I try to run my build in parallel. Now I get output like this:
λ docker-compose build --parallel
Building server ...
Building web-client ...
Building server
Building web-client
Step 1/11 : FROM node:12.19.0 as web-client-builderStep 1/24 : FROM node:12.19.0 as web-client-builder
---> 1f560ce4ce7e
... etc ...
Step 6/24 : RUN npm run test:ci
---> e96afb9c14bf
Step 6/11 : RUN npm run test:ci
---> Running in c17deba3c318
---> Running in 9b0faf487a7d
> web-client#0.1.0 test:ci /workspace
> react-scripts test --ci --coverage --reporters=default --reporters=jest-junit --watchAll=false
> web-client#0.1.0 test:ci /workspace
> react-scripts test --ci --coverage --reporters=default --reporters=jest-junit --watchAll=false
... Now my tests run in parallel twice, and the output is interleaved for both parallel runs ...
It's clear that the tests are running twice now, because now that I'm running the builds in parallel, there's no chance for them to cache.
Can anyone help me understand this? I thought that one of the high points of docker multi-stage builds was that they were parallelizable, but this behavior doesn't make sense to me. What am I misunderstanding?
Note
I also tried enabling BuildKit for docker-compose. I had a harder time making sense of the output. I don't believe it was running things twice, but I'm also not sure that it was parallelizing. I need to dig more into it, but my main question stands: I'm hoping to understand why multi-stage builds don't run in parallel in the way I expected without BuildKit.
You can split this into two separate Dockerfiles. I might write a web-client/Dockerfile containing the first two stages (changing the relative COPY paths to ./), and leave the root-directory Dockerfile to build the server application. Then your docker-compose.yml file can point at these separate directories:
services:
  server:
    build: .  # equivalent to {context: ., dockerfile: Dockerfile}
  web-client:
    build: web-client
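The web-client/Dockerfile would then contain the first two stages of the original file, with the web-client/ prefixes dropped from the COPY paths, roughly:

FROM node:12.19.0 as web-client-builder
WORKDIR /workspace
COPY package*.json ./
RUN npm ci
COPY . ./
RUN npm run test:ci
RUN npm run build

FROM nginx:1.19 as web-client
COPY --from=web-client-builder /workspace/nginx-templates /etc/nginx/templates/
COPY --from=web-client-builder /workspace/nginx.conf /etc/nginx/nginx.conf
COPY --from=web-client-builder /workspace/build /var/www/

With that split, docker-compose build --parallel builds two genuinely independent Dockerfiles, so the web-client tests run only once.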
As @Stefano notes in their answer, multi-stage builds are more optimized around building a single final image, and in the "classic" builder they always run from the beginning up through the named target stage without any particular logic for where to start.
why multi-stage builds don't run in parallel in the way I expected without BuildKit.
That's the high point of BuildKit.
The main purpose of multi-stage builds in Docker is to produce smaller images by keeping only what's required for the application to work properly, e.g.
FROM node as builder
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci
COPY . .
RUN npm run build

FROM nginx
COPY --from=builder --chown=nginx /app/dist /var/www
All the development tools required for building the project are simply not copied into the final image. This translates into smaller final images.
EDIT:
From the BuildKit documentation:
BuildKit builds are based on a binary intermediate format called LLB that is used for defining the dependency graph for processes running part of your build. tl;dr: LLB is to Dockerfile what LLVM IR is to C.
In other words, BuildKit is able to evaluate the dependencies for each stage allowing parallel execution.
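As a concrete sketch of how to opt in from docker-compose (version 1.25.0 or newer), set the two documented environment variables before building; with BuildKit, only the stages your requested target actually depends on are executed, and shared stages are computed once:

$ DOCKER_BUILDKIT=1 COMPOSE_DOCKER_CLI_BUILD=1 docker-compose build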
I am sure I have run this command before, but I tested the following command in my terminal and got this error:
✗ docker run aa1112d76852 npm run test -- --coverage
ERRO[0001] error waiting for container: context canceled
docker: Error response from daemon: OCI runtime create failed: container_linux.go:344: starting container process caused "exec: \"npm\": executable file not found in $PATH": unknown.
I am concerned because this is the command, with the exception of the image ID, that I will be placing in my .travis.yml file. Where is the error in how I put it together this time?
This is my Dockerfile configuration:
FROM node:alpine as builder
WORKDIR '/app'
COPY package.json .
RUN npm install
COPY . .
RUN npm run build
FROM nginx
COPY --from=builder /app/build /usr/share/nginx/html
This is my docker-compose.yml file:
version: "3"
services:
web:
build: .
ports:
- "3000:3000"
volumes:
- /app/node_modules
- .:/app
So this worked previously because I was building it from Dockerfile.dev, which has this crucial last command:
FROM node:alpine
WORKDIR '/app'
COPY package.json .
RUN npm install
COPY . .
CMD ["npm", "run", "start"]
Whereas the new container I was using was built from Dockerfile which has this configuration:
FROM node:alpine as builder
WORKDIR '/app'
COPY package.json .
RUN npm install
COPY . .
RUN npm run build
FROM nginx
COPY --from=builder /app/build /usr/share/nginx/html
Notice the missing CMD ["npm", "run", "start"].
So the command should work in my .travis.yml file because I build it with my Dockerfile.dev like so:
before_install:
  - docker build -t danale/docker-react -f Dockerfile.dev .
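The matching script step would then look something like this (a sketch: the -e CI=true flag is an addition here, since react-scripts test runs in watch mode by default and CI=true makes it exit after a single run):

script:
  - docker run -e CI=true danale/docker-react npm run test -- --coverage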
Just for context, this question is related to the "Docker and Kubernetes: The Complete Guide" course on Udemy which is fairly well-recommended on Reddit etc.
This error occurs when you pass the image ID of the production build (created using the Dockerfile) rather than the ID of the image built from Dockerfile.dev:
root@ubuntu-docker:/home/paul/frontend# docker run USERNAME/docker-react npm run test
docker: Error response from daemon: OCI runtime create failed:
container_linux.go:345: starting container process caused "exec: \"npm\": executable
file not found in $PATH": unknown.
root@ubuntu-docker:/home/paul/frontend# docker run USERNAME/docker-react-dev npm run test
> frontend#0.1.0 test /app
> react-scripts test
PASS src/App.test.js
✓ renders without crashing (92ms)
Test Suites: 1 passed, 1 total
Tests: 1 passed, 1 total
Snapshots: 0 total
Time: 3.794s
Ran all test suites.
I believe it's because the production image's final stage is FROM nginx, so node and npm aren't present in the final image at all, which is why docker run with that image fails with "executable file not found in $PATH". The dev image, built FROM node:alpine, still has npm available, so the test command can run. I'm not familiar enough with React to get much deeper into it than that.
However, if anyone else (like the OP, and me) is doing the course and struggling with this error, I hope this helps.
I am trying to learn Docker. I have a Hello World Django server application. When I try to run my server using a Dockerfile, my server is unreachable. But when I use docker-compose, I am able to access it.
My question is why, especially when they are quite similar.
My Dockerfile:
FROM python:3
# Set the working directory to /bryne
WORKDIR /bryne
# Copy the current directory contents into the container at /bryne
ADD . /bryne
# Install any needed packages specified in requirements.txt
RUN pip install --trusted-host pypi.python.org -r requirements.txt
# EXPOSE port 8000 to allow communication to/from server
EXPOSE 8000
# CMD specifies the command to execute to start the server.
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
# done!
Commands used when running server using Dockerfile:
docker build -t swyne-latest
docker run swyne-latest
Result: Cannot access server at 127.0.0.1:8000
My docker-compose.yml:
version: '3'
services:
  web:
    build: .
    command: bash -c "python manage.py makemigrations && python manage.py migrate && python manage.py runserver 0.0.0.0:8000"
    container_name: swyne
    volumes:
      - .:/bryne
    ports:
      - "8000:8000"
Commands used when running server using docker-compose:
docker-compose up
Result: Able to access my server at 127.0.0.1:8000
Thanks
Edit: Output from Dockerfile build:
$ docker build -t swyne-latest .
Sending build context to Docker daemon 60.15MB
Step 1/6 : FROM python:3
3: Pulling from library/python
05d1a5232b46: Already exists
5cee356eda6b: Already exists
89d3385f0fd3: Already exists
80ae6b477848: Already exists
28bdf9e584cc: Already exists
523b203f62bd: Pull complete
e423ae9d5ac7: Pull complete
adc78e8180f7: Pull complete
60c9f1f1e6c6: Pull complete
Digest: sha256:5caeb1a2119661f053e9d9931c1e745d9b738e2f585ba16d88bc3ffcf4ad727b
Status: Downloaded newer image for python:3
---> 7a35f2e8feff
Step 2/6 : WORKDIR /bryne
---> Running in 9ee8283c6cc6
Removing intermediate container 9ee8283c6cc6
---> 5bbd14170c84
Step 3/6 : ADD . /bryne
---> 0128101457f5
Step 4/6 : RUN pip install --trusted-host pypi.python.org -r requirements.txt
---> Running in 55ab661b1b55
Collecting Django>=2.1 (from -r requirements.txt (line 1))
Downloading https://files.pythonhosted.org/packages/32/ab/22530cc1b2114e6067eece94a333d6c749fa1c56a009f0721e51c181ea53/Django-2.1.2-py3-none-any.whl (7.3MB)
Collecting pytz (from Django>=2.1->-r requirements.txt (line 1))
Downloading https://files.pythonhosted.org/packages/30/4e/27c34b62430286c6d59177a0842ed90dc789ce5d1ed740887653b898779a/pytz-2018.5-py2.py3-none-any.whl (510kB)
Installing collected packages: pytz, Django
Successfully installed Django-2.1.2 pytz-2018.5
Removing intermediate container 55ab661b1b55
---> dce5400552b2
Step 5/6 : EXPOSE 8000
---> Running in c74603a76b54
Removing intermediate container c74603a76b54
---> ee5ef2bf2999
Step 6/6 : CMD ["python", "manage.py", "runserver", "127.0.0.1:8000"]
---> Running in 4f5ea428f801
Removing intermediate container 4f5ea428f801
---> 368f73366b69
Successfully built 368f73366b69
Successfully tagged swyne-latest:latest
$ docker run swyne-latest
(no output)
It's expected that, unlike docker-compose up, docker run swyne-latest does not let you access the web application at 127.0.0.1:8000: the docker-compose.yml file (which is read by docker-compose but not by docker itself) specifies many parameters, in particular the port mapping, which would otherwise have to be passed as CLI options to docker run.
Could you try running docker run -p 8000:8000 swyne-latest instead?
Also, I guess the line command: bash -c "python manage.py makemigrations && python manage.py migrate && python manage.py runserver 0.0.0.0:8000" should probably be put inside the Dockerfile itself with a CMD or ENTRYPOINT directive, not in the docker-compose.yml file.
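For example (a sketch; note that the exec form of CMD doesn't expand shell operators such as &&, so the chain has to be wrapped in sh -c):

CMD ["sh", "-c", "python manage.py makemigrations && python manage.py migrate && python manage.py runserver 0.0.0.0:8000"]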
Actually, I've just taken a look at the output of your docker build command and there is an orthogonal issue:
the command
CMD ["python", "manage.py", "runserver", "127.0.0.1:8000"]
should be replaced with
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
(See this SO answer for more feedback on this issue, albeit in another language, Java instead of Python.)
As an aside, the complete command to build the image from the Dockerfile is not docker build -t swyne-latest but docker build -t swyne-latest . (with the final dot corresponding to the folder of the Docker build context).
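Putting these fixes together, the full build-and-run sequence would be:

$ docker build -t swyne-latest .
$ docker run -p 8000:8000 swyne-latest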
Everything I tried, following the Dockerfile and docker-compose references, to pass an environment variable to the Docker image did not work.
I want to make this env var available during docker build when using docker-compose.
On the Docker host I have:
export BUILD_VERSION=1.0
app.js
console.log('BUILD_VERSION: ' + process.env.BUILD_VERSION);
Dockerfile:
FROM node
ADD app.js /
ARG BUILD_VERSION
ENV BUILD_VERSION=$BUILD_VERSION
RUN echo Build Time: $BUILD_VERSION
RUN node /app.js
CMD echo Run Time: $BUILD_VERSION
docker-compose.yml:
version: '3'
services:
  app:
    build:
      context: .
      args:
        - BUILD_VERSION
If I build the image directly, the env var is passed fine:
docker build -t test --no-cache --build-arg BUILD_VERSION .
and is also available at run-time:
$ docker run --rm test
Run Time: 1.0
$ docker run --rm test node /app
BUILD_VERSION: 1.0
but not with docker compose.
docker-compose up --build
...
Step 5/7 : RUN echo Build Time: $BUILD_VERSION
---> Running in 6115161f33bf
Build Time:
---> c691c619018a
Removing intermediate container 6115161f33bf
Step 6/7 : RUN node /app.js
---> Running in f51831cc5e1e
BUILD_VERSION:
It's only available at run-time:
$ docker run --rm test
Run Time: 1.0
$ docker run --rm test node /app
BUILD_VERSION: 1.0
I also tried using environment in docker-compose.yml as below, which again only makes the variable available at run-time, not at build-time:
version: '3'
services:
  app:
    build:
      context: .
    environment:
      - BUILD_VERSION
Please advise, how can I make it work in the least convoluted way?
Your example is working for me.
Have you tried deleting the images and building again? Docker won't rebuild your image when environment variables change if the image is already in the cache.
You can delete them with:
docker-compose down --rmi all
Edit: here is how it works for me at build time:
$ cat Dockerfile
FROM alpine
ARG BUILD_VERSION
ENV BUILD_VERSION=$BUILD_VERSION
RUN echo Build Time: $BUILD_VERSION
$ cat docker-compose.yml
version: '3'
services:
  app:
    build:
      context: .
      args:
        - BUILD_VERSION
Build:
$ export BUILD_VERSION=122221
$ docker-compose up --build
Creating network "a_default" with the default driver
Building app
Step 1/4 : FROM alpine
latest: Pulling from library/alpine
8e3ba11ec2a2: Pull complete
Digest: sha256:7043076348bf5040220df6ad703798fd8593a0918d06d3ce30c6c93be117e430
Status: Downloaded newer image for alpine:latest
---> 11cd0b38bc3c
Step 2/4 : ARG BUILD_VERSION
---> Running in b0a1a79967a0
Removing intermediate container b0a1a79967a0
---> 9fa331d63f6d
Step 3/4 : ENV BUILD_VERSION=$BUILD_VERSION
---> Running in a602c27689a5
Removing intermediate container a602c27689a5
---> bf2181423c93
Step 4/4 : RUN echo Build Time: $BUILD_VERSION <<<<<< (*)
---> Running in 9d828cefcfab
Build Time: 122221
Removing intermediate container 9d828cefcfab
---> 2b3afa3d348c
Successfully built 2b3afa3d348c
Successfully tagged a_app:latest
Creating a_app_1 ... done
Attaching to a_app_1
a_app_1 exited with code 0
As the other answer mentioned, you can use docker-compose build --no-cache, and if you omit the service name ("app"), docker-compose will build all the services. To handle different build versions in the same docker-compose build, you can use different env vars, like:
$ cat docker-compose.yml
version: '3'
services:
  app1:
    build:
      context: .
      args:
        - BUILD_VERSION=$APP1_BUILD_VERSION
  app2:
    build:
      context: .
      args:
        - BUILD_VERSION=$APP2_BUILD_VERSION
Export:
$ export APP1_BUILD_VERSION=1.1.1
$ export APP2_BUILD_VERSION=2.2.2
Build:
$ docker-compose build
Building app1
Step 1/4 : FROM alpine
latest: Pulling from library/alpine
8e3ba11ec2a2: Pull complete
Digest: sha256:7043076348bf5040220df6ad703798fd8593a0918d06d3ce30c6c93be117e430
Status: Downloaded newer image for alpine:latest
---> 11cd0b38bc3c
Step 2/4 : ARG BUILD_VERSION
---> Running in 0b66093bc2ef
Removing intermediate container 0b66093bc2ef
---> 906130ee5da8
Step 3/4 : ENV BUILD_VERSION=$BUILD_VERSION
---> Running in 9d89b48c875d
Removing intermediate container 9d89b48c875d
---> ca2480695149
Step 4/4 : RUN echo Build Time: $BUILD_VERSION
---> Running in 52dec27874ec
Build Time: 1.1.1
Removing intermediate container 52dec27874ec
---> 1b3654924297
Successfully built 1b3654924297
Successfully tagged a_app1:latest
Building app2
Step 1/4 : FROM alpine
---> 11cd0b38bc3c
Step 2/4 : ARG BUILD_VERSION
---> Using cache
---> 906130ee5da8
Step 3/4 : ENV BUILD_VERSION=$BUILD_VERSION
---> Running in d29442339459
Removing intermediate container d29442339459
---> 8b26def5ef3a
Step 4/4 : RUN echo Build Time: $BUILD_VERSION
---> Running in 4b3de2d223e5
Build Time: 2.2.2
Removing intermediate container 4b3de2d223e5
---> 89033b10b61e
Successfully built 89033b10b61e
Successfully tagged a_app2:latest
You need to declare the argument in docker-compose.yml as shown below; it will then be populated from the env variable you pass:
version: '3'
services:
  app:
    build:
      context: .
      args:
        - BUILD_VERSION
Next, export the environment variable you need to pass:
$ export BUILD_VERSION=1.0
Now build the image using the command:
$ docker-compose build --no-cache --build-arg BUILD_VERSION=$BUILD_VERSION app
You can pass args for the build from the docker-compose file through to docker build. It is surprising that the env vars aren't used for both run and build.
# docker-compose.yml
version: '3'
services:
  app:
    build:
      context: .
      args:
        - BUILD_VERSION=${BUILD_VERSION}
    environment:
      - BUILD_VERSION
    volumes:
      ...
# Dockerfile
FROM node
ADD app.js /
ARG BUILD_VERSION
ENV BUILD_VERSION=$BUILD_VERSION
RUN echo Build Time: $BUILD_VERSION
RUN node /app.js
CMD echo Run Time: $BUILD_VERSION
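Putting it together, a quick sanity check (assuming the compose file above with the service named app) is to export the variable, rebuild without cache, and run the service:

$ export BUILD_VERSION=1.0
$ docker-compose build --no-cache
$ docker-compose run --rm app
Run Time: 1.0

The build log should also print Build Time: 1.0 at the RUN echo step.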