Docker container cannot copy file into volume - docker

I am pretty new to Docker, so I might be doing something truly wrong.
I need to share some files between Docker containers, using a Docker Compose file.
I have already created a volume like this:
docker volume create shared
After that I can inspect the created volume:
docker volume inspect shared
[
    {
        "CreatedAt": "2019-03-08T14:54:57-05:00",
        "Driver": "local",
        "Labels": {},
        "Mountpoint": "/var/lib/docker/volumes/shared/_data",
        "Name": "shared",
        "Options": {},
        "Scope": "local"
    }
]
My docker-compose.yaml file looks like this:
version: '3.1'

services:
  service1:
    build:
      context: Service1
      dockerfile: Dockerfile
    restart: always
    container_name: server1-server
    volumes:
      - shared:/shared

  service2:
    build:
      context: Service2
      dockerfile: Dockerfile
    restart: always
    container_name: server2-server
    volumes:
      - shared:/shared

volumes:
  shared:
    external: true
And the Dockerfile looks like this (just for testing purposes):
FROM microsoft/dotnet:2.2-sdk AS build-env
RUN echo "test" > /shared/test.info
When I issue a docker-compose up command I get this error:
/bin/sh: 1: cannot create /shared/test.info: Directory nonexistent
If I modify the Dockerfile to this:
FROM microsoft/dotnet:2.2-sdk AS build-env
WORKDIR /app
COPY *.csproj ./
RUN cp *.csproj /shared/
I get this error:
cp: cannot create regular file '/shared/': Not a directory
Any ideas how to achieve this?

A Dockerfile contains instructions to build an image. Only after the image is built can it be run as a container.
A volume, by contrast, is attached when a container is launched.
It thus makes no sense to use Dockerfile instructions to copy a file into a volume: no volume is mounted while the image is being built.
Volumes are generally used to share runtime data between containers, or to keep data after a container is stopped.
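If the goal is to get a file from the image into the shared volume, one sketch is to copy it when the container starts, since that is when the volume is actually mounted. The file path and the final dotnet command below are hypothetical placeholders:

```yaml
services:
  service1:
    build:
      context: Service1
      dockerfile: Dockerfile
    volumes:
      - shared:/shared
    # Copy a file baked into the image (hypothetical path /app/test.info)
    # into the mounted volume, then start the real process.
    command: sh -c "cp /app/test.info /shared/ && exec dotnet Service1.dll"

volumes:
  shared:
    external: true
```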

Related

Docker volume mounting empty

I'm making an image to host a PHP application. I'm using COPY to populate /var/www/html with the app files and declaring VOLUME /var/www/html to be able to mount the directory on the host and edit files like the config.
But:
When I mount the volume in docker-compose.yml, the directory is empty.
When I omit the volumes entry in docker-compose.yml and connect to the container's shell, the directory /var/www/html is populated.
I have already read tons of examples and documentation but, sincerely, I don't know what is wrong.
Dockerfile:
FROM php:8.0-apache-buster
LABEL description="OCOMON 3.2 Frontend (noDB)"
RUN docker-php-ext-install pdo_mysql
WORKDIR /var/www/html/ocomon
COPY ./ocomon .
VOLUME /var/www/html/ocomon
docker-compose.yml:
version: '3.5'

services:
  ocomon:
    image: ocomon:3.2
    container_name: ocomon
    volumes:
      - ./volumes/ocomon/ocomon:/var/www/ocomon
    ports:
      - 4682:80
Assuming your host's directory is ${PWD}/www/html, you only need to provide the volumes value in docker-compose.yml, and it should be:
Dockerfile:
FROM php:8.0-apache-buster
LABEL description="OCOMON 3.2 Frontend (noDB)"
RUN docker-php-ext-install pdo_mysql
WORKDIR /var/www/html/ocomon
COPY ./www/html/ocomon .
and:
docker-compose.yml:
version: '3.5'

services:
  ocomon:
    image: ocomon:3.2
    container_name: ocomon
    volumes:
      - ${PWD}/www/html:/var/www/html
    ports:
      - 4682:80
Explanation
The Dockerfile VOLUME instruction creates a volume in the image. Because it follows WORKDIR (which always creates the path if it does not exist), the VOLUME declaration shadows whatever was in the directory or was created by WORKDIR. Note that this happens at image build time.
A Docker Compose volumes entry mounts a directory from the host into the container. The syntax is volumes: - ${HOST_PATH}:${CONTAINER_PATH}. Note that this happens at container run time.
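The difference can be sketched at the command line (assuming the image above has been built as ocomon:3.2). A bind mount hides the image's files behind the host directory, while an empty named volume is populated from the image's content the first time it is mounted:

```shell
# Bind mount: the (possibly empty) host directory hides the image's files.
docker run --rm -v "$PWD/empty-dir:/var/www/html/ocomon" ocomon:3.2 ls /var/www/html/ocomon

# Named volume: on first use, Docker copies the image's files into the volume.
docker volume create ocomon-data
docker run --rm -v ocomon-data:/var/www/html/ocomon ocomon:3.2 ls /var/www/html/ocomon
```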

In Docker, how do I copy files from a local directory so that I can then copy those files into my Docker container?

I'm using Docker
Docker version 19.03.8, build afacb8b
I have the following docker-compose.yml file ...
version: "3.2"

services:
  sql-server-db:
    build: ./
    container_name: sql-server-db
    image: microsoft/mssql-server-linux:2017-latest
    ports:
      - "1433:1433"
    environment:
      SA_PASSWORD: "Password1!"
      ACCEPT_EULA: "Y"
and here is the Dockerfile it uses to build ...
FROM microsoft/mssql-server-linux:latest
# Create work directory
RUN mkdir -p /usr/work
WORKDIR /usr/work
# Copy all scripts into working directory
COPY . /usr/work/
# Grant permissions for the import-data script to be executable
RUN chmod +x /usr/work/import-data.sh
EXPOSE 1433
CMD /bin/bash ./entrypoint.sh
On my local machine, I have some files in a "../../scripts/myproject/*.sql" directory (the ".." are relative to the directory where my docker-compose.yml file is stored). Is there a way I can run "docker-compose up" and have those files copied into a directory from which I can then copy them into the container's "/usr/work" directory?
There are 2 ways to solve this, with one being easier than the other, but both have use cases.
The easy way
You could mount the directory directly to the container through the docker-compose like this:
version: "3.2"

services:
  sql-server-db:
    build: ./
    container_name: sql-server-db
    image: microsoft/mssql-server-linux:2017-latest
    ports:
      - "1433:1433"
    environment:
      SA_PASSWORD: "Password1!"
      ACCEPT_EULA: "Y"
    volumes:
      - ../../scripts/myproject:/path/to/dir
Note the added volumes compared to the yaml in your question. This will mount the myproject directory to /path/to/dir within the container. What this will also mean is that if the sql-server-db container writes to any of the files in /path/to/dir, then the file in myproject on the host machine will also change, since the files are mounted.
The less easy way
You could copy the files into the image during the build instead. This is a little harder, since a Docker build cannot copy files from parent directories of the build context; you need to set the context of the build stage to a different directory than the current one. The context determines which files are sent to the build stage, and by default it is the directory the Dockerfile resides in.
To take this approach, you need the following in your docker-compose.yml:
version: "3.2"

services:
  sql-server-db:
    build:
      context: ../..
      dockerfile: path/to/Dockerfile # the path to your Dockerfile, relative to the context
    container_name: sql-server-db
    image: microsoft/mssql-server-linux:2017-latest
    ports:
      - "1433:1433"
    environment:
      SA_PASSWORD: "Password1!"
      ACCEPT_EULA: "Y"
Above, the context is now ../.., so that you are able to copy files from two directories up. You can then copy the myproject directory in your Dockerfile like this:
FROM microsoft/mssql-server-linux:latest
COPY ./scripts/myproject /myfiles
The advantage of this approach is that the files are copied instead of being mounted, so the docker container can write whatever it wants to these files, without affecting the host machine.
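For reference, the compose build above corresponds roughly to this manual build command, assuming the Dockerfile sits next to docker-compose.yml:

```shell
# -f points at the Dockerfile; the final argument is the build context,
# two directories up, so ./scripts/myproject is inside it.
docker build -t sql-server-db -f ./Dockerfile ../..
```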

Docker volume clears folder in container on windows 10

I've created a simple Docker image with a Node.js server.
FROM node:12.16.1-alpine
WORKDIR /usr/src
COPY ./app/package.json .
RUN yarn
COPY ./app ./app
This works great and the service is running.
Now I'm trying to run the container with a volume for local development using Docker Compose:
version: "3.4"

services:
  web:
    image: my-node-app
    volumes:
      - ./app:/usr/src/app
    ports:
      - "8080:8080"
    command: ["yarn", "start"]
    build:
      context: .
      dockerfile: ./app/Dockerfile
This is my folder structure in the host:
The service works without the volume. When I add the volume, the /usr/src/app is empty (even though it is full as shown in the folder structure).
Inspecting the docker container I get the following mount config:
"Mounts": [
    {
        "Type": "bind",
        "Source": "/d/development/dockerNCo/app",
        "Destination": "/usr/src/app",
        "Mode": "rw",
        "RW": true,
        "Propagation": "rprivate"
    }
],
But still, browsing to the folder via the VS Code shell shows it as empty.
In addition, the command: docker volume ls shows an empty list.
I'm running docker 18.09.3 on windows 10.
Is there anything wrong with the configuration? How is it supposed to work?
Adding the volume to your service will hide all the files in /usr/src/app and mount the content of ./app from your host machine in their place. This also means that all files generated by running yarn during the image build will be lost, because they exist only in the image. This is the expected behaviour of adding a volume in Docker, and it is not a bug.
volumes:
  - ./app:/usr/src/app
Usually, for non-development environments, you don't need the volume here at all.
If you would like to see the generated files on your host, you need to run the yarn command from docker-compose (you can use an entrypoint).
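As a sketch, assuming the layout from the question, the command can be overridden so yarn runs against the bind-mounted directory at container start; node_modules then appears in ./app on the host as well:

```yaml
services:
  web:
    image: my-node-app
    volumes:
      - ./app:/usr/src/app
    working_dir: /usr/src/app
    # Install dependencies into the mounted directory, then start the app.
    command: sh -c "yarn && yarn start"
```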

Files inside Docker container not updating when I edit in host

I am using Docker, which is running fine.
I can start a Docker image using docker-compose.
docker-compose rm nodejs; docker-compose rm db; docker-compose up --build
I attached a shell to the Docker container using
docker exec -it nodejs_nodejs_1 bash
I can view files inside the container
(inside container)
cat server.js
Now when I edit the server.js file inside the host, I would like the file inside the container to change without having to restart Docker.
I have tried to add volumes to the docker-compose.yml file or to the Dockerfile, but somehow I cannot get it to work.
(Dockerfile, not working)
FROM node:10
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
VOLUMES ["/usr/src/app"]
EXPOSE 8080
CMD [ "npm", "run", "watch" ]
or
(docker-compose.yml, not working)
version: "3.3"

services:
  nodejs:
    build: ./nodejs-server
    ports:
      - "8001:8080"
    links:
      - db:db
    env_file:
      - ./.env-example
    volumes:
      - src:/usr/src/app

  db:
    build: ./mysql-server
    volumes:
      - ./mysql-server/data:/docker-entrypoint-initdb.d # A folder /mysql-server/data with a .sql file needs to exist
    env_file:
      - ./.env-example

volumes:
  src:
There is probably a simple guide somewhere, but I haven't found it yet.
If you want a copy of the files to be visible in the container, use a bind mount volume (aka host volume) instead of a named volume.
Assuming your docker-compose.yml file is in the root directory of the location that you want in /usr/src/app, then you can change your docker-compose.yml as follows:
version: "3.3"

services:
  nodejs:
    build: ./nodejs-server
    ports:
      - "8001:8080"
    links:
      - db:db
    env_file:
      - ./.env-example
    volumes:
      - .:/usr/src/app

  db:
    build: ./mysql-server
    volumes:
      - ./mysql-server/data:/docker-entrypoint-initdb.d # A folder /mysql-server/data with a .sql file needs to exist
    env_file:
      - ./.env-example
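With the bind mount in place, a quick check (using the container name from the question) is to edit a file on the host and read it back inside the container:

```shell
# Append a line on the host, then confirm it is visible in the container.
echo "// edited on host" >> server.js
docker exec nodejs_nodejs_1 tail -n 1 /usr/src/app/server.js
```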

dotnet watch run in container with multi-project solution

I'm trying to create a Dockerfile along with a docker-compose.yml file to run dotnet watch run on a multi project ASP.Net Core solution. The goal is to have a container watching for changes in all of the three projects.
My solution structure is this:
Nc.Application
Nc.Domain
Nc.Infrastructure
docker-compose.yml
Nc.Application contains the main project to run, and the other two folders are .NET Standard projects referenced by the main project. Inside Nc.Application I have a folder, Docker, with my Dockerfile.
Controllers
Docker
Development.Dockerfile
Properties
Program.cs
Startup.cs
...
My Dockerfile and compose file contain the following:
Development.Dockerfile
FROM microsoft/dotnet:2.1-sdk AS build
ENTRYPOINT [ "dotnet", "watch", "run", "--no-restore", "--urls", "http://0.0.0.0:5000" ]
docker-compose.yml
version: '3'

services:
  nc.api:
    container_name: ncapi_dev
    image: ncapi:dev
    build:
      context: ./Nc.Application
      dockerfile: Docker/Development.Dockerfile
    volumes:
      - ncapi.volume:.
    ports:
      - "5000:5000"
      - "5001:5001"

volumes:
  ncapi.volume:
When I try to run docker-compose up I get the following error:
ERROR: for f6d811109779_ncapi_dev Cannot create container for service nc.api: invalid volume specification: 'nc_ncapi.volume:.:rw': invalid mount config for type "volume": invalid mount path: '.' mount path must be absolute
ERROR: for nc.api Cannot create container for service nc.api: invalid volume specification: 'nc_ncapi.volume:.:rw': invalid mount config for type "volume": invalid mount path: '.' mount path must be absolute
ERROR: Encountered errors while bringing up the project.
I don't know what the path for the volume should be, as the idea is to create a container that does not directly contain the files, but instead watches files in a folder on my system.
Does anyone have any suggestions as to how to go about this?
EDIT:
I updated the WORKDIR in the Dockerfile to /app/Nc.Application, updated the volume path to ./:/app and removed the named volume volumes: ncapi.volume. However, I now receive the following error:
ncapi_dev | watch : Polling file watcher is enabled
ncapi_dev | watch : Started
ncapi_dev | /usr/share/dotnet/sdk/2.1.403/Sdks/Microsoft.NET.Sdk/targets/Microsoft.PackageDependencyResolution.targets(198,5): error NETSDK1004: Assets file '/app/Nc.Application/c:/Users/Christian/Documents/source/nc/Nc.Application/obj/project.assets.json' not found. Run a NuGet package restore to generate this file. [/app/Nc.Application/Nc.Application.csproj]
ncapi_dev |
ncapi_dev | The build failed. Please fix the build errors and run again.
ncapi_dev | watch : Exited with error code 1
ncapi_dev | watch : Waiting for a file to change before restarting dotnet...
Update: The latest VS Code Insiders introduced Remote Development, which allows you to work directly inside a container. It is worth checking out.
You shouldn't mount things at the root of the container. Use another mount point like /app. Also, you don't need a named volume but a bind mount for this situation.
Make changes like this
Development.Dockerfile
FROM microsoft/dotnet:2.1-sdk AS build
WORKDIR /app
ENTRYPOINT [ "dotnet", "watch", "run", "--no-restore", "--urls", "http://0.0.0.0:5000" ]
docker-compose.yml
version: '3'

services:
  nc.api:
    container_name: ncapi_dev
    image: ncapi:dev
    build:
      context: ./Nc.Application
      dockerfile: Docker/Development.Dockerfile
    volumes:
      - ./:/app
    ports:
      - "5000:5000"
      - "5001:5001"
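If the NETSDK1004 error from the edit persists because obj/ folders restored on the Windows host leak into the container through the bind mount, one common sketch is to mask them with anonymous volumes so restore artifacts stay container-local (project paths assumed from the solution layout above; you may also need to drop --no-restore so restore can run inside the container):

```yaml
services:
  nc.api:
    volumes:
      - ./:/app
      # Anonymous volumes keep the host's obj/ and bin/ (which contain
      # Windows-specific paths) from shadowing the container's own copies.
      - /app/Nc.Application/obj
      - /app/Nc.Application/bin
```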
