RUN command cannot access volumes - docker

It appears that RUN in a Dockerfile can't see my volume directory, whereas ENTRYPOINT can.
Here is an example with a Dockerfile and docker-compose.yml that exhibits the issue:
FROM microsoft/dotnet:2.0-sdk
EXPOSE 5000
ENV ASPNETCORE_ENVIRONMENT=Development
WORKDIR /src/testing
RUN dotnet restore
ENTRYPOINT ["dotnet", "run", "--urls=http://0.0.0.0:5000"]
docker-compose.yml:
version: "3.4"
services:
  doctnetcore-api-project:
    build: ./api/
    container_name: doctnetcore-api-project
    image: doctnetcore-api-project:development
    restart: 'always'
    networks:
      - mynetwork
    volumes:
      - /api/src:/src
networks:
  mywebmc:
    external:
      name: mynetwork
When I run docker-compose up I get the error shown below:
MSBUILD : error MSB1003: Specify a project or solution file. The current working directory does not contain a project or solution file.
ERROR: Service 'doctnetcore-api-project' failed to build: The command '/bin/sh -c dotnet restore' returned a non-zero code: 1
If I comment out RUN dotnet restore and run dotnet restore manually before running docker-compose, it works fine.
So it appears that RUN can't see my volume directory while ENTRYPOINT can.

The statements in a Dockerfile are executed at build time (docker build), and at that point no volumes are mounted.
In contrast, the ENTRYPOINT is executed when you run a container (docker run), which does have access to any mapped volumes.
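Because the bind mount only exists at run time, one common fix is to COPY the sources into the image so that dotnet restore has files to work on at build time; the volume then shadows that directory when the container runs. A sketch, assuming the project lives under ./api/src on the host (i.e. src/ inside the build context):

```dockerfile
FROM microsoft/dotnet:2.0-sdk
EXPOSE 5000
ENV ASPNETCORE_ENVIRONMENT=Development
# Bake the sources into the image for build-time steps;
# the bind mount from docker-compose.yml replaces them at run time.
COPY src/ /src/
WORKDIR /src/testing
RUN dotnet restore
ENTRYPOINT ["dotnet", "run", "--urls=http://0.0.0.0:5000"]
```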

Related

Flask App on Docker Image and Docker-compose.yml cannot import "app"

I tried to deploy a Flask app project. When I run docker-compose up --build directly it works, but when I build the image, save it, and load it somewhere else, the same docker-compose.yml file cannot run properly. The error is cannot import "app".
working area:
Dockerfile
docker-compose.yml
src/app.py
not working area:
image.tar file(which is using for docker load)
docker-compose.yml
Dockerfile:
RUN mkdir /app
COPY . /app
WORKDIR /app/src
COPY requirements.txt /app/src/requirements.txt
RUN pip install -r requirements.txt
ENV FLASK_APP="app.py"
EXPOSE 5005
CMD python -u -m flask run --host=0.0.0.0
docker-compose.yml:
version: "3"
services:
  app:
    restart: unless-stopped
    build: .
    ports:
      - "5005:5005"
    volumes:
      - .:/app
    expose:
      - 5005
src/app.py:
if __name__ == "__app__":
    # start up api
    app.run(port=5005, debug=True, host="0.0.0.0")
The result of docker-compose up in the not-working area is Error: cannot import "app".
The docker-compose.yml file wouldn't work on a machine where the application code doesn't exist, because of the volume mount defined here:
volumes:
  - .:/app
This volume mount tells Docker to take whatever is in the . directory of the host machine and mount it over the /app directory in the container. On the machine where the app doesn't run, you mentioned that the application code doesn't exist in this directory, so it makes sense that the application fails. If you remove the volume mount from the docker-compose.yml file on the second machine and run the container again, things should work normally as long as your application code was all copied into the image correctly. You can read more about Docker volumes here.
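On the second machine there is no source tree (and no Dockerfile) to build from, so a sketch of a compose file that runs the docker-load'ed image directly could look like this (the myflaskapp:latest tag is hypothetical; use whatever tag docker load reports):

```yaml
version: "3"
services:
  app:
    # run the loaded image instead of building; there is no Dockerfile here
    image: myflaskapp:latest
    restart: unless-stopped
    ports:
      - "5005:5005"
    # no volumes: the code COPY'd into the image at build time is used
```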

Dockerfile entrypoint conflicts with docker-compose context

I have a docker setup that does not have the Dockerfile or docker-compose at the root because there are many services.
build
  client.Dockerfile
deployments
  docker-compose.yml
web
  core
    scripts
      run.sh
docker-compose.yml:
version: "3.1"
services:
  client:
    build:
      context: ..
      dockerfile: ./build/client.Dockerfile
    volumes:
      - ./web/core:/app
    ports:
      - 3000:3000
      - 35729:35729
And then the Dockerfile:
FROM node:10.11
ADD web/core/yarn.lock /yarn.lock
ADD web/core/package.json /package.json
ENV NODE_PATH=/node_modules
ENV PATH=$PATH:/node_modules/.bin
RUN yarn
WORKDIR /app
ADD web/core /app
EXPOSE 3000
EXPOSE 35729
RUN cat /app/scripts/run.sh
ENTRYPOINT ["/bin/bash", "/app/scripts/run.sh"]
CMD ["start"]
Now, the RUN command displays the contents of the file, so it is there. However, when running docker-compose up I get: client_1 | /bin/bash: /app/scripts/run.sh: No such file or directory
I'm guessing it has something to do with the docker-compose context because when the dockerfile was at the root, it seemed to work fine.
I'm getting the feeling that docker is designed essentially to work only at the root.
Context:
I want a live reloading create-react-app server like this: https://www.peterbe.com/plog/how-to-create-react-app-with-docker.
I would like to setup my project this way: https://github.com/golang-standards/project-layout
Your volume is mounted from the wrong host path; this should fix the issue. Host paths in a compose file are resolved relative to the directory containing the compose file (here deployments/), not the build context, so ./web/core points at deployments/web/core, which doesn't exist. I created a similar folder structure and ran docker-compose -f ./deployments/docker-compose.yml up from the root folder; it works normally. The only thing I changed is the volume path:
volumes:
  - ../web/core:/app
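Putting it together, the corrected compose file might look like this (a sketch; remember that host paths resolve relative to the compose file's own directory, deployments/):

```yaml
version: "3.1"
services:
  client:
    build:
      context: ..                           # build context: project root
      dockerfile: ./build/client.Dockerfile
    volumes:
      # host path is relative to deployments/, hence the ../
      - ../web/core:/app
    ports:
      - 3000:3000
      - 35729:35729
```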

Asp.net core with linux docker container

For an ASP.NET Core project template in VS 2017, there is a docker-compose.ci.build.yml:
version: '3'
services:
  ci-build:
    image: microsoft/aspnetcore-build:1.0-2.0
    volumes:
      - .:/src
    working_dir: /src
    command: /bin/bash -c "dotnet restore ./WebAppCoreDockerTest.sln && dotnet publish ./WebAppCoreDockerTest.sln -c Release -o ./obj/Docker/publish"
docker-compose.yml:
version: '3'
services:
  webappcoredockertest:
    image: webappcoredockertest
    ports:
      - 8080:80
    build:
      context: ./WebAppCoreDockerTest
      dockerfile: Dockerfile
and Dockerfile:
FROM microsoft/aspnetcore:2.0
ARG source
WORKDIR /app
EXPOSE 80
RUN echo "Oh dang look at that $source"
COPY ${source:-obj/Docker/publish} .
ENTRYPOINT ["dotnet", "WebAppCoreDockerTest.dll"]
The Dockerfile uses the files in obj/Docker/publish (copied into the container) when the source argument is null, and docker-compose.ci.build.yml publishes the project to obj/Docker/publish, so I assume they are related.
So the question is: how do I use them together? (docker-compose.ci.build.yml publishes files to obj/Docker/publish and the Dockerfile uses the published files.)
The docker-compose.ci.build.yml file is for building the project, as you mentioned. It uses the aspnetcore-build base image which is much heftier than the aspnetcore image used by the Dockerfile. The Dockerfile gets used by the build section of the docker-compose.yml file.
Everything works together with these two steps:
docker-compose -f docker-compose.ci.build.yml run ci-build
docker-compose up --build
The -f option in the first command allows you to specify a compose file other than the default docker-compose.yml.
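The two steps can be wrapped in a small CI script (a sketch; it assumes both compose files sit in the current directory and that the publish step must succeed before the runtime image is built):

```shell
#!/bin/sh
set -e  # abort if the publish step fails

# 1. Publish the solution into ./obj/Docker/publish using the build image
docker-compose -f docker-compose.ci.build.yml run --rm ci-build

# 2. Build the slim runtime image from the published output and start it
docker-compose up --build -d
```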

named docker volume not updating using docker-compose

I'm trying to have one service build my client side and then share it with the server using a named volume. Every time I run docker-compose up --build I want the client side to build and update the named volume clientapp:. How do I do that?
docker-compose.yml
version: '2'
volumes:
  clientapp:
services:
  database:
    image: mongo:3.4
    volumes:
      - /data/db
      - /var/lib/mongodb
      - /var/log/mongodb
  client:
    build: ./client
    volumes:
      - clientapp:/usr/src/app/client
  server:
    build: ./server
    ports:
      - "3000:3000"
    environment:
      - DB_1_PORT_27017_TCP_ADDR=database
    volumes:
      - clientapp:/usr/src/app/client
    depends_on:
      - client
      - database
client Dockerfile
FROM node:6
ENV NPM_CONFIG_LOGLEVEL warn
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY package.json /usr/src/app
RUN npm install
COPY . /usr/src/app
# builds my application into /client
CMD ["npm", "build"]
By definition, a volume is a persistent directory that Docker won't touch except to perform an initial creation when it is empty. If this is your code, it probably shouldn't be in a volume.
With that said, you can:
Delete the volume between runs with docker-compose down -v; it will be recreated and initialized on the next docker-compose up -d.
Change your container startup scripts to copy the files from some other directory in the image to the volume location on startup.
Get rid of the volume and include the code directly in the image.
I'd recommend the last option.
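The second option could look like this hypothetical entrypoint script (the paths are assumptions; adjust them to wherever your build output lands inside the image):

```shell
#!/bin/sh
# Entrypoint sketch: copy the build output that was baked into the image
# over to the volume mount point, then hand control to the main command.
sync_volume() {
  src=$1   # directory inside the image holding the built files
  dest=$2  # the named-volume mount point
  mkdir -p "$dest"
  cp -R "$src/." "$dest/"
}
# In a real entrypoint you would finish with, e.g.:
#   sync_volume /usr/src/app/client-dist /usr/src/app/client
#   exec "$@"
```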
Imagine you shared your src folder like this:
...
volumes:
  - ./my_src:/path/to/docker/src
...
What worked for me was to chown the my_src folder:
chown -R $USER:$USER my_src
It turned out some files had been created by root and couldn't be modified from inside Docker.
Hope it helps!

Docker Compose not starting mongo service even main service depends on it

I am trying to construct a CI process to build, test, and publish my .NET Core app using docker-compose and bash scripts.
I have UnitTests, IntegrationTests, and XApi projects in a folder
and have created a Dockerfile and docker-compose.yml as shown below.
The IntegrationTests depend on mongointegration, so I added links and depends_on attributes to the testandpublish service in docker-compose.yml.
When I try docker-compose up or docker-compose up testandpublish,
it fails to connect to mongo (Dockerfile, step 10): the mongo service has not been started yet (I don't understand why).
If I change RUN to CMD in step 10, it can connect to mongo and docker-compose works fine. But then I cannot detect in my sh script whether the tests failed or succeeded, because it no longer breaks the docker-compose up command.
My question is: why doesn't Docker Compose start the mongointegration service? And if that is impossible, how can I tell that the testandpublish service failed? Thanks.
Structure:
XProject
  src
    Tests
      UnitTests
      IntegrationTests
    XApi
  Dockerfile
  docker-compose.yml
My Dockerfile content is (I have added line numbers to explain the problem):
1.FROM microsoft/dotnet:1.1.0-sdk-projectjson
2.COPY . /app
3.WORKDIR /app/src/Tests/UnitTests
4.RUN ["dotnet", "restore"]
5.RUN ["dotnet", "build"]
6.RUN ["dotnet", "test"]
7.WORKDIR /app/src/Tests/IntegrationTests
8.RUN ["dotnet", "restore"]
9.RUN ["dotnet", "build"]
10.RUN ["dotnet", "test"]
11.WORKDIR /app/src/XApi
12.RUN ["dotnet", "restore"]
13.RUN ["dotnet", "build"]
14.CMD ["dotnet", "publish", "-c", "Release", "-o", "publish"]
and my docker-compose.yml
version: "3"
services:
  testandpublish:
    build: .
    links:
      - mongointegration
    depends_on:
      - mongointegration
  mongointegration:
    image: mongo
    ports:
      - "27017:27017"
The image build phase and the container run phase are two very separate steps in docker-compose.
Build and Run Differences
The build phase creates each of the image layers from the steps in the Dockerfile. Each step runs in a standalone container. None of your service config, apart from the build: stanza specific to a service's build, is available during the build.
Once the image is built, it can be run as a container with the rest of your docker-compose service config.
Instead of running tests in your Dockerfile, you could create a script to use as the CMD that runs all your test steps in the container.
#!/bin/sh
set -uex
cd /app/src/Tests/UnitTests
dotnet restore
dotnet build
dotnet test
cd /app/src/Tests/IntegrationTests
dotnet restore
dotnet build
dotnet test
cd /app/src/XApi
dotnet restore
dotnet build
dotnet publish -c Release -o publish
If the microsoft/dotnet:1.1.0-sdk-projectjson image is Windows based you might need to convert this to the equivalent CMD or PS commands.
Container Dependencies
depends_on doesn't work quite as well as most people assume it will. In its simple form, depends_on only waits for the container to launch before moving on to starting the dependent container. It's not smart enough to wait for the process inside the container to be ready. Proper dependencies can be expressed with a healthcheck and a condition.
services:
  testandpublish:
    build: .
    links:
      - mongointegration
    depends_on:
      mongointegration:
        condition: service_healthy
  mongointegration:
    image: mongo
    ports:
      - "27017:27017"
    healthcheck:
      test: ["CMD", "docker-healthcheck"]
      interval: 30s
      timeout: 10s
      retries: 3
This uses the Docker health check script below, after it has been copied into the container via the Dockerfile.
#!/bin/bash
set -eo pipefail
host="$(hostname --ip-address || echo '127.0.0.1')"
if mongo --quiet "$host/test" --eval 'quit(db.runCommand({ ping: 1 }).ok ? 0 : 1)'; then
  exit 0
fi
exit 1
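For the healthcheck's ["CMD", "docker-healthcheck"] entry to work, the script has to exist inside the mongo image. A sketch of a custom mongo Dockerfile that adds it (the docker-healthcheck filename matches the test: entry above):

```dockerfile
FROM mongo
# Make the health check script available on the PATH for
# healthcheck: test: ["CMD", "docker-healthcheck"]
COPY docker-healthcheck /usr/local/bin/docker-healthcheck
RUN chmod +x /usr/local/bin/docker-healthcheck
```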
RUN steps are executed when Docker builds the image, and no other containers are available yet. The CMD step, in contrast, is executed at run time, when Docker Compose has already started the depending mongointegration container.