dotnet watch run in container with multi-project solution

I'm trying to create a Dockerfile along with a docker-compose.yml file to run dotnet watch run on a multi-project ASP.NET Core solution. The goal is to have a container watching for changes in all three projects.
My solution structure is this:
Nc.Application
Nc.Domain
Nc.Infrastructure
docker-compose.yml
Nc.Application contains the main project to run, and the other two folders are .NET Standard projects referenced by the main project. Inside Nc.Application I have a folder, Docker, containing my Dockerfile:
Controllers
Docker
Development.Dockerfile
Properties
Program.cs
Startup.cs
...
My Dockerfile and compose file contain the following:
Development.Dockerfile
FROM microsoft/dotnet:2.1-sdk AS build
ENTRYPOINT [ "dotnet", "watch", "run", "--no-restore", "--urls", "http://0.0.0.0:5000" ]
docker-compose.yml
version: '3'
services:
  nc.api:
    container_name: ncapi_dev
    image: ncapi:dev
    build:
      context: ./Nc.Application
      dockerfile: Docker/Development.Dockerfile
    volumes:
      - ncapi.volume:.
    ports:
      - "5000:5000"
      - "5001:5001"
volumes:
  ncapi.volume:
When I try to run docker-compose up I get the following error:
ERROR: for f6d811109779_ncapi_dev Cannot create container for service nc.api: invalid volume specification: 'nc_ncapi.volume:.:rw': invalid mount config for type "volume": invalid mount path: '.' mount path must be absolute
ERROR: for nc.api Cannot create container for service nc.api: invalid volume specification: 'nc_ncapi.volume:.:rw': invalid mount config for type "volume": invalid mount path: '.' mount path must be absolute
ERROR: Encountered errors while bringing up the project.
I don't know what the path for the volume should be, as the idea is to create a container that doesn't contain the files directly, but watches files in a folder on my system.
Does anyone have any suggestions as to how to go about this?
EDIT:
I updated WORKDIR in the Dockerfile to /app/Nc.Application, updated the volume path to ./:/app, and removed the named volume ncapi.volume. However, I now receive the following error:
ncapi_dev | watch : Polling file watcher is enabled
ncapi_dev | watch : Started
ncapi_dev | /usr/share/dotnet/sdk/2.1.403/Sdks/Microsoft.NET.Sdk/targets/Microsoft.PackageDependencyResolution.targets(198,5): error NETSDK1004: Assets file '/app/Nc.Application/c:/Users/Christian/Documents/source/nc/Nc.Application/obj/project.assets.json' not found. Run a NuGet package restore to generate this file. [/app/Nc.Application/Nc.Application.csproj]
ncapi_dev |
ncapi_dev | The build failed. Please fix the build errors and run again.
ncapi_dev | watch : Exited with error code 1
ncapi_dev | watch : Waiting for a file to change before restarting dotnet...

Update: the latest VS Code Insiders introduced Remote Development, which lets you work directly inside a container. Worth checking out.
You shouldn't mount things at the root of the container; use another mount point like /app. Also, you don't need a named volume for this situation, but a bind mount.
Make changes like this:
Development.Dockerfile
FROM microsoft/dotnet:2.1-sdk AS build
WORKDIR /app
ENTRYPOINT [ "dotnet", "watch", "run", "--no-restore", "--urls", "http://0.0.0.0:5000" ]
docker-compose.yml
version: '3'
services:
  nc.api:
    container_name: ncapi_dev
    image: ncapi:dev
    build:
      context: ./Nc.Application
      dockerfile: Docker/Development.Dockerfile
    volumes:
      - ./:/app
    ports:
      - "5000:5000"
      - "5001:5001"

Related

How can I add a file to my volume without writing a new file to the host?

I'm trying to run a Next.js project inside docker-compose. To take advantage of hot reloading, I'm mounting the entire project into the Docker container as a volume.
So far, so good!
This is where things are starting to get tricky: For this particular project, it turns out Apple Silicon users need a .babelrc file included in their dockerized app, but NOT in the files on their computer.
All other users do not need a .babelrc file at all.
To sum up, this is what I'd like to be able to do:
hot reload project (hence ./:/usr/src/app/)
have an environment variable write content to /usr/src/app/.babelrc.
not have a .babelrc in the host's project root.
My attempt at solving this was to include the .babelrc under ci-cd/.babelrc in the host file system.
Then I tried mounting the file as a volume, like - ./ci-cd/.babelrc:/usr/src/app/.babelrc. But then a .babelrc file gets written back to the root of the project on the host filesystem.
I also tried including COPY ./ci-cd/.babelrc /usr/src/app/.babelrc in the Dockerfile, but it seems to be overwritten by docker-compose's volumes property.
Here's my Dockerfile:
FROM node:14
WORKDIR /usr/src/app/
COPY package.json .
RUN npm install
And the docker-compose.yml:
version: "3.8"
services:
# Database image
psql:
image: postgres:13
restart: unless-stopped
ports:
- 5432:5432
# image for next.js project
webapp:
build: .
command: >
bash -c "npm run dev"
ports:
- 3002:3002
expose:
- 3002
depends_on:
- testing-psql
volumes:
- ./:/usr/src/app/
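One way to reconcile all three goals (a sketch, not an accepted answer): keep COPY ./ci-cd/.babelrc /usr/src/app/.babelrc in the Dockerfile, and bind-mount only the source subdirectories you actually edit instead of the whole project root, so the copied file is never shadowed by the mount. The directory names below are assumptions about the project layout:
volumes:
  # mount only the hot-reloaded sources; the .babelrc copied into the
  # image stays visible (pages/, public/, styles/ are hypothetical)
  - ./pages:/usr/src/app/pages
  - ./public:/usr/src/app/public
  - ./styles:/usr/src/app/styles
The trade-off is that files outside the mounted directories no longer hot-reload.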

Sharing named volumes in Docker Compose gives me 'device or resource busy' in one of my containers

Here is the problem I have.
My docker-compose file looks like this:
version: '3.7'
services:
  development:
    build:
      context: .
      dockerfile: build.Dockerfile
    volumes:
      - .:/development:z       # Source directory
      - buildarea:/buildarea:z # Build directory
  service:
    build:
      context: .
      dockerfile: run.Dockerfile
    volumes:
      - .:/development:z
      - buildarea:/buildarea:z # Build directory
    depends_on:
      - development
volumes:
  buildarea:
This is a C++ CMake project. I want to build my code inside the container, using /buildarea in the container as the build directory for CMake, and mount it as a named volume so it can be shared with the service container.
When I try this via the Docker CLI I get:
docker-compose build development
docker-compose run --rm development
[root@bc8c4d32d1a6 development]# ....make command...
OSError: [Errno 16] Device or resource busy: '/buildarea'
make: *** [Makefile:5: docker] Error 1
When I use VS Code Remote Containers and:
1. open the folder in a container
2. run the same build command
it works fine.
Why is that? Both of them use the development container, but it doesn't seem to work the same from the CLI as from VS Code.
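No answer is recorded here, but one plausible cause (an assumption, not from the thread): "Device or resource busy" (EBUSY) is what you get when a process tries to delete or rename a mount point itself, for example a clean step doing rm -rf /buildarea. A mount point can be emptied but not removed, so a workaround is to clean only its contents:
# clean the build tree without removing the mount point itself
rm -rf /buildarea/*
# or, to also catch hidden files:
find /buildarea -mindepth 1 -delete
VS Code Remote Containers may simply invoke the build with a different working directory or clean step, which would explain why it behaves differently from the CLI.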

Docker container works, but fails when built from docker-compose

I have an application with 3 containers:
client - an Angular application,
gateway - a .NET Core application,
api - a .NET Core application
I am having trouble with the container hosting the Angular application.
Here is my Dockerfile:
#stage 1
FROM node:alpine as node
WORKDIR /app
COPY . .
RUN npm install
RUN npm run build
#stage 2
FROM nginx:alpine
COPY --from=node /app/dist/caliber_client /usr/share/nginx/html
EXPOSE 80
and here is the docker compose file:
# Please refer https://aka.ms/HTTPSinContainer on how to setup an https developer certificate for your ASP .NET Core service.
version: '3.4'
services:
  calibergateway:
    image: calibergateway
    container_name: caliber-gateway
    build:
      context: .
      dockerfile: caliber_gateway/Dockerfile
    ports:
      - 7000:7000
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
    networks:
      - caliber-local
  caliberapi:
    image: caliberapi
    container_name: caliber-api
    build:
      context: .
      dockerfile: caliber_api/Dockerfile
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
    networks:
      - caliber-local
  caliberclient:
    image: caliber-client-image
    container_name: caliber-client
    build:
      context: .
      dockerfile: caliber_client/Dockerfile
    ports:
      - 7005:7005
    networks:
      - caliber-local
networks:
  caliber-local:
    external: true
When I build and run the Angular container independently, I can connect and run the site; however, if I try to build it with docker-compose, I get the following error:
enoent ENOENT: no such file or directory, open '/app/package.json'
I can see that npm cannot find the package.json, but I am copying the whole site to the /app directory in the Dockerfile, so I am not sure where the disconnect is.
Thank you.
In the Dockerfile, the left-hand side of COPY statements is always interpreted relative to the build: { context: } directory in the docker-compose.yml file (or the build: directory if there's no nested argument, or the docker build directory argument; in any case, never anything outside this directory tree).
In a comment, you say
The package.json is one level deeper than the docker-compose.yml file. It is at the same level of the Dockerfile in the caliber_client folder.
Assuming the client application is self-contained, you can change the build definition to use the caliber_client subdirectory as the build context:
build:
  context: caliber_client
  dockerfile: Dockerfile
or, since dockerfile: Dockerfile is the default, the shorter
build: caliber_client
If it's important to you to use the parent directory as the build context (maybe you're including some shared files that you don't show in the question), then you can also change the Dockerfile to refer to the subdirectory:
# when the build: { context: } is the parent directory of this one
COPY caliber_client .
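Putting it together, the first build stage would then look something like this (a sketch based on the Dockerfile in the question; stage 2 is unchanged):
#stage 1
FROM node:alpine as node
WORKDIR /app
# with the repository root as the build context, copy the client subdirectory
COPY caliber_client .
RUN npm install
RUN npm run build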

How to copy files inside container with docker-compose

I have a simple image that runs a jar file. That jar file inside the image needs a special configuration file in order to run.
In the same location as the docker-compose.yml I have a folder named "carrier", and under this folder I have that file.
The docker-compose.yml:
version: "3.3"
services:
web:
image: "myimage:1.80.0.0"
ports:
- "61003:61003"
volumes:
- ./carrier:/var/local/Config/
When I run docker-compose up it complains that the file is not there, so it doesn't copy it.
If I try another option, like I did in the .sh command, something like this:
volumes:
  - ./carrier:/var/local/Config/:shared
It complains about another error:
C:\Tasks\2246>docker-compose up
Removing 2246_web_1
Recreating 1fbf5d2bcea4_2246_web_1 ... error
ERROR: for 1fbf5d2bcea4_2246_web_1 Cannot start service web: path /host_mnt/c/Tasks/2246/carrier is mounted on / but it is not a shared mount
Can someone please help me?
Copy the files using a Dockerfile, like below:
FROM myimage:1.80.0.0
RUN mkdir -p /var/local/Config/
COPY carrier /var/local/Config/
EXPOSE 61003
docker-compose.yml
version: "3.3"
services:
web:
build:
dockerfile: Dockerfile
context: '.'
ports:
- "61003:61003"
Finally, run the command below to build the new image and start the container:
docker-compose up -d --build
You can use a Dockerfile if the volume does not copy the files.
Dockerfile:
FROM image
COPY files /var/local/Config/
EXPOSE 61003
docker-compose.yml:
version: "3.3"
services:
web:
build: . (path contains Dockerfile)
ports:
- "61003:61003"
volumes:
- ./carrier:/var/local/Config/
Remove the trailing /:
volumes:
  - ./carrier:/var/local/Config
I'm not sure, but you can try to set full access permissions for all users on /carrier:
chmod -R 777 /carrier
Thanks all for your answers.
It seems Docker finally warned me about Windows vs Linux file permissions when building the image directly with the Dockerfile (not through docker-compose):
SECURITY WARNING: You are building a Docker image from Windows against a non-Windows Docker host. All files and directories added to build context will have '-rwxr-xr-x' permissions. It is recommended to double check and reset permissions for sensitive files and directories.
Tried it on Linux and it works.

In Docker, how do I copy files from a local directory so that I can then copy those files into my Docker container?

I'm using Docker
Docker version 19.03.8, build afacb8b
I have the following docker-compose.yml file ...
version: "3.2"
services:
sql-server-db:
build: ./
container_name: sql-server-db
image: microsoft/mssql-server-linux:2017-latest
ports:
- "1433:1433"
environment:
SA_PASSWORD: "Password1!"
ACCEPT_EULA: "Y"
and here is the Dockerfile it uses to build:
FROM microsoft/mssql-server-linux:latest
# Create work directory
RUN mkdir -p /usr/work
WORKDIR /usr/work
# Copy all scripts into working directory
COPY . /usr/work/
# Grant permissions for the import-data script to be executable
RUN chmod +x /usr/work/import-data.sh
EXPOSE 1433
CMD /bin/bash ./entrypoint.sh
On my local machine, I have some files in a "../../scripts/myproject/*.sql" directory (the ".." is relative to the directory where my docker-compose.yml file is stored). Is there a way I can run "docker-compose up" and have those files copied into a directory from which I can then copy them into the container's "/usr/work" directory?
There are two ways to solve this, one easier than the other, but both have their use cases.
The easy way
You could mount the directory directly to the container through the docker-compose like this:
version: "3.2"
services:
sql-server-db:
build: ./
container_name: sql-server-db
image: microsoft/mssql-server-linux:2017-latest
ports:
- "1433:1433"
environment:
SA_PASSWORD: "Password1!"
ACCEPT_EULA: "Y"
volumes:
- ../../scripts/myproject:/path/to/dir
Note the added volumes section compared to the YAML in your question. This will mount the myproject directory to /path/to/dir within the container. It also means that if the sql-server-db container writes to any of the files in /path/to/dir, the corresponding file in myproject on the host machine will change too, since the files are mounted rather than copied.
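If you want the container to be able to read the scripts but never modify them on the host, a read-only bind mount (a standard compose option) is a small variation on the above:
volumes:
  - ../../scripts/myproject:/path/to/dir:ro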
The less easy way
You could copy the files directly during the build of the image. This is a little harder, since a Docker build can't copy from parent directories of its context. What needs to happen is that you set the build context to a different directory than the current one. The context determines which files are sent to the build; by default it is the directory the Dockerfile resides in.
To take this approach, you need the following in your docker-compose.yml:
version: "3.2"
services:
sql-server-db:
build:
context: ../..
dockerfile: path/to/Dockerfile # Here you should specify the path to your Dockerfile, this is a relative path from your context
container_name: sql-server-db
image: microsoft/mssql-server-linux:2017-latest
ports:
- "1433:1433"
environment:
SA_PASSWORD: "Password1!"
ACCEPT_EULA: "Y"
So above, the context is now ../.., which lets you copy files from two directories up. You can then copy the myproject directory in your Dockerfile like this:
FROM microsoft/mssql-server-linux:latest
COPY ./scripts/myproject /myfiles
The advantage of this approach is that the files are copied instead of mounted, so the Docker container can write whatever it wants to them without affecting the host machine.
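With either approach, you would then rebuild the image and start the container; a typical invocation, assuming the files above, is:
docker-compose build
docker-compose up -d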
