PROBLEM
I use docker-compose for development with Python/Flask. I want my host codebase to sync with the one inside the Docker container, but not...
SITUATION
My working directory structure is below:
.
├── Dockerfile
├── docker-compose.yml
├── app.py
└── requirements.txt
I made a bind mount from the host's current directory to the container's /app.
Dockerfile:
FROM python:3.7.3-alpine3.9
WORKDIR /app
COPY requirements.txt requirements.txt
RUN pip install --upgrade pip && \
pip install -r requirements.txt
COPY . .
CMD gunicorn -b 0.0.0.0:9000 -w 4 app:app
docker-compose.yml:
version: '3'
services:
web:
build: .
ports:
- "4649:9000"
volumes:
- .:/app
When I access http://localhost:4649 I see the correct response, so the Docker container is working well. However, the response doesn't update when I change app.py.
I inspected the container, and the result is below:
"Mounts": [
{
"Type": "bind",
"Source": "/Users/emp-mac-zakiooo/dev/jinja-pwa",
"Destination": "/app",
"Mode": "rw",
"RW": true,
"Propagation": "rprivate"
}
],
It looks correct, so I have no idea what's causing this problem 😇
OMG, I found that my files are correctly synced after all, but gunicorn had cached them, so I added --reload to the CMD in the Dockerfile, and that finally fixed it.
Thank you for helping, and soooo sorry for my foolishness...!
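For reference, the updated CMD with the --reload flag looks like this:

```dockerfile
# --reload makes gunicorn restart its workers when source files change,
# so edits on the bind mount show up without rebuilding the image
CMD gunicorn -b 0.0.0.0:9000 -w 4 --reload app:app
```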
You could achieve that by doing the following:
volumes:
mount:
driver: local
driver_opts:
type: nfs
o: addr=host.docker.internal,rw,nolock,hard,nfsvers=3
device: ":${PWD}/path"
In your volume declaration within a service, you can do the following:
version: '3'
services:
web:
build: .
ports:
- "4649:9000"
volumes:
- mount:/app
Add the following to /etc/exports to give Docker access to your NFS volumes:
<path to $WORKSPACE> 127.0.0.1 -alldirs -mapall=0:80
Once it's there, run sudo nfsd restart to pick up your changes.
If docker-compose ever stops responding when using NFS, restarting Docker usually fixes it.
I hope that this helps!
Related
Dockerfile:
FROM node:18.13.0
ENV WORK_DIR=/app
RUN mkdir -p ${WORK_DIR}
WORKDIR ${WORK_DIR}
RUN mkdir ${WORK_DIR}/data
RUN chmod -R 755 ${WORK_DIR}/data
COPY package*.json ./
RUN npm ci
COPY . .
docker-compose.yml:
version: '3.8'
services:
fetch:
container_name: fetch
build: .
command: sh -c "npx prisma migrate deploy && npm start"
restart: unless-stopped
depends_on:
- postgres
volumes:
- ./data:/app/data:z
The container fetches new files and saves them into a directory configured by the app running in the container, defaulting to data/. The issue is that they're all created as root and cannot be manipulated from the host. If I chown the dir on the host, it works, but any new files are then created as root again.
I've tried a couple of different variations of creating a new user in the Dockerfile and passing host user info into the compose file, but it always seems to result in a disconnect between the Dockerfile and the compose file. I'm trying to keep things as easy as docker compose up, if possible.
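One common shape for that "pass host user info into the compose file" approach (a sketch only; it assumes UID and GID are exported in the host shell, e.g. via export UID=$(id -u) GID=$(id -g), and may differ from what the poster actually tried):

```yaml
# docker-compose.yml (sketch): run the container process as the host user,
# so files written to the bind mount are owned by you instead of root
services:
  fetch:
    build: .
    user: "${UID}:${GID}"   # assumed to be exported in the host shell
    volumes:
      - ./data:/app/data:z
```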
I'm trying to download a PDF and save it inside my application, which is dockerized. The problem seems to be that I'm not allowed to write files inside the container. Here are the steps I took:
I first created a named volume myvol: docker volume create myvol. I then updated my docker-compose.yml file as follows:
version: "3"
services:
webpp:
container_name: webapp
image: webapp:latest
build: .
ports:
- "8000:8000"
volumes:
- myvol:/app
volumes:
myvol:
And for reference, my Dockerfile:
FROM python:3.7
WORKDIR /app
ADD ./requirements.txt .
RUN pip install -r requirements.txt
ADD . .
RUN groupadd -g 999 appuser && \
useradd -r -d /app -u 999 -g appuser appuser
RUN chown -R appuser:appuser /app
USER appuser
RUN mkdir webapp/_tmp; chmod a+rwx webapp/_tmp/
EXPOSE 8000
ENTRYPOINT ["python", "main.py"]
When I run my code, I get a permission denied error. The same happens if I exec into the container and try to write something; for example, wget http://duck.com also gives me a Permission denied error.
I'm unsure what's wrong, as even docker inspect looks correct...
"Mounts": [
{
"Type": "volume",
"Name": "myvol",
"Source": "/var/lib/docker/volumes/myvol/_data",
"Destination": "/app",
"Driver": "local",
"Mode": "rw",
"RW": true,
"Propagation": ""
}
],
...
"Volumes": {
"/app": {}
},
"WorkingDir": "/app",
"Entrypoint": [
"python",
"main.py"
],
...
The volumes section in your compose file (- myvol:/app) will override the permissions and everything contained in the /app folder, so you may want to delete it from your compose file.
If you want to use volumes to persist data, create a separate mount point rather than mounting over /app, and point to it in your Python code.
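A sketch of that suggestion, mounting the named volume only where data actually needs to persist (the /app/data path is an assumption, not from the original question):

```yaml
# docker-compose.yml (sketch): keep the image's /app intact, with its
# ownership set in the Dockerfile, and persist only a data subdirectory
services:
  webpp:
    build: .
    ports:
      - "8000:8000"
    volumes:
      - myvol:/app/data
volumes:
  myvol:
```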
I'm using AWS ECS repository for docker images.
My docker-compose.yml file looks like:
version: "3"
services:
my-front-end:
image: myFrontEndImage:myTag
links:
- my-back-end
ports:
- "8080:8080"
logging:
driver: 'json-file'
options:
max-size: "50m"
my-back-end:
image: myBackEndImage:myTag
ports:
- "3000:3000"
logging:
driver: 'json-file'
options:
max-size: "50m"
What I need is to be able to pass an environment variable from the docker-compose file into my Docker image.
What I tried was adding the lines for environment (following the example).
version: "3"
services:
my-front-end:
image: myFrontEndImage:myTag
links:
- my-back-end
environment:
- BACKEND_SERVER_PORT=3001
ports:
- "8080:8080"
logging:
driver: 'json-file'
options:
max-size: "50m"
my-back-end:
image: myBackEndImage:myTag
ports:
- "3000:3000"
logging:
driver: 'json-file'
options:
max-size: "50m"
Then, in my project (a VueJS project), I'm trying to access it via process.env.BACKEND_SERVER_PORT. But I do not see my value, and when I tried console.log(process.env); I saw that it only contains {NODE_ENV: "development"}.
So my question is: how do I pass the env variable from docker-compose into my Docker image so that I can use it inside my project?
Everything else in the project works fine; I've been working on it for a long time and the docker-compose file works. It's just that now, when I need to add this environment variable, I can't make it work.
EDIT: adding a few more files, per a request in the comments.
The Dockerfile for my-front-end looks like:
FROM node:8.11.1
WORKDIR /app
COPY package*.json ./
RUN npm i npm@latest -g && \
npm install
COPY . .
CMD ["npm", "start"]
As mentioned, this is a VueJS application, and here is the part of package.json you may be interested in:
"scripts": {
"dev": "webpack-dev-server --inline --progress --config build/webpack.dev.conf.js",
"start": "npm run dev",
"build": "node build/build.js"
},
And the Dockerfile for my-back-end looks like:
FROM node:8.11.1
WORKDIR /app/server
COPY package*.json ./
RUN npm i npm@latest -g && \
npm install
COPY . .
CMD ["npm", "start"]
My back-end is actually an express.js app that listens on a separate port; it lives in a server folder under the project root.
Here is the part of its package.json you may be interested in:
"scripts": {
"start": "nodemon src/app.js",
"test": "echo \"Error: no test specified\" && exit 1"
},
I think you are doing everything right in terms of configuring docker-compose. It seems there is a nuance in passing environment variables to a VueJS application.
According to the answers to this question, you need to name your variables VUE_APP_* to be able to read them from the client side.
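In other words (a sketch; the variable name is just the original one with the required prefix added):

```yaml
# docker-compose.yml (sketch): the VUE_APP_ prefix makes the variable
# visible to client-side Vue code
environment:
  - VUE_APP_BACKEND_SERVER_PORT=3001
```

You would then read it as process.env.VUE_APP_BACKEND_SERVER_PORT in the Vue code.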
As the title indicates, I have a container that is unable to bind from the host port to the container port. I tried searching for similar issues, but have not found any related to using dotnet watch in a Docker container since Microsoft introduced the microsoft/dotnet Docker repo with dotnet watch built into the SDK image.
Any suggestions as to what I am doing wrong are much appreciated.
Dockerfile
FROM microsoft/dotnet:2.1.301-sdk as build
ENV DOTNET_USE_POLLING_FILE_WATCHER 1
WORKDIR /app
COPY . .
RUN dotnet restore
EXPOSE 5000-5001
ENTRYPOINT [ "dotnet", "watch", "run", "--no-restore"]
docker-compose.yml
version: "3"
services:
esportapp:
container_name: esportapp
image: esportapp:dev
build:
context: .
dockerfile: Docker/dev.Dockerfile
volumes:
- esportapp.volume:/app
ports:
- "5000:5000"
- "5001:5001"
volumes:
esportapp.volume:
Complete error:
esportapp | Hosting environment: Development
esportapp | Content root path: /app
esportapp | Now listening on: https://localhost:5001
esportapp | Now listening on: http://localhost:5000
esportapp | Application started. Press Ctrl+C to shut down.
esportapp | warn: Microsoft.AspNetCore.Server.Kestrel[0]
esportapp | Unable to bind to https://localhost:5001 on the IPv6 loopback interface: 'Cannot assign requested address'.
esportapp | warn: Microsoft.AspNetCore.Server.Kestrel[0]
esportapp | Unable to bind to http://localhost:5000 on the IPv6 loopback interface: 'Cannot assign requested address'.
Just ran into this problem myself. I don't think dotnet watch run plays nicely with localhost-style URLs. Try setting your hosting URL to https://0.0.0.0:5000 in your container.
In the Dockerfile with:
ENTRYPOINT [ "dotnet", "watch", "run", "--no-restore", "--urls", "https://0.0.0.0:5000"]
Or in launchSettings.json like:
{
"profiles": {
"[Put your project name here]": {
"commandName": "Project",
"launchBrowser": true,
"environmentVariables": {
"ASPNETCORE_ENVIRONMENT": "Development",
"DOTNET_USE_POLLING_FILE_WATCHER": "true"
},
"applicationUrl": "https://0.0.0.0:5000/"
}
}
}
Now, to get it to automatically reload from within the container, you have to use the polling file watcher; that's what the second environment variable is for. (This is pretty common; you have to do the same with webpack, Angular, etc.)
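If you would rather set that variable from the compose file than from launchSettings.json, a sketch:

```yaml
# docker-compose.yml (sketch): enable the polling file watcher so
# dotnet watch notices changes made through the mounted volume
services:
  esportapp:
    environment:
      - DOTNET_USE_POLLING_FILE_WATCHER=true
```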
In your case, you need to change the esportapp.volume to a directory on your host:
volumes:
- ./:/app
That will map /app in your container to the docker-compose directory on your host. The problem you're facing is that the app is built into a named volume, so when you change a file in the source directory on the host, it doesn't actually change in that volume. With this fix, however, you'll run into the problem of the dotnet restore and dotnet watch inside the container changing your host's files. There's a fix for all of that, if you're interested...
My Usual .Net Core App Docker setup
To debug, run: docker-compose -f run.yml up --build
To build a release: docker-compose -f build.yml up --build
Project structure
/ # source control root
/build.yml # docker-compose file for building a release
/run.yml # docker-compose file for running locally & debugging
/project # an application
/project/build.Dockerfile # the docker container that will build "project" for release
/project/run.Dockerfile # the docker container that will build and run "project" locally for debugging
/project/.dockerignore # speeds up container builds by excluding large directories like "packages" or "node_modules"
/project/src # where I hide my source codez
/project/src/Project.sln
/project/src/Project/Project.csproj
/project/src/Project/Directory.Build.props # keeps a docker mapped volume from overwriting .dlls on your host
/project/src/Project.Data/Project.Data.csproj # typical .Net project structure
/web-api # another application...
Directory.Build.props (put this in the same folder as your .csproj; it keeps your dotnet watch run command from messing with the source directory on your host)
<Project>
<PropertyGroup>
<DefaultItemExcludes>$(DefaultItemExcludes);$(MSBuildProjectDirectory)/obj/**/*</DefaultItemExcludes>
<DefaultItemExcludes>$(DefaultItemExcludes);$(MSBuildProjectDirectory)/bin/**/*</DefaultItemExcludes>
</PropertyGroup>
<PropertyGroup Condition="'$(DOTNET_RUNNING_IN_CONTAINER)' == 'true'">
<BaseIntermediateOutputPath>$(MSBuildProjectDirectory)/obj/container/</BaseIntermediateOutputPath>
<BaseOutputPath>$(MSBuildProjectDirectory)/bin/container/</BaseOutputPath>
</PropertyGroup>
<PropertyGroup Condition="'$(DOTNET_RUNNING_IN_CONTAINER)' != 'true'">
<BaseIntermediateOutputPath>$(MSBuildProjectDirectory)/obj/local/</BaseIntermediateOutputPath>
<BaseOutputPath>$(MSBuildProjectDirectory)/bin/local/</BaseOutputPath>
</PropertyGroup>
</Project>
run.yml (docker-compose.yml for debugging)
version: "3.5"
services:
project:
build:
context: ./project
dockerfile: run.Dockerfile
ports:
- 5000:80
volumes:
- ./project/src/Project:/app
run.Dockerfile (the Dockerfile for debugging)
FROM microsoft/dotnet:2.1-sdk
# install the .net core debugger
RUN apt-get update
RUN apt-get -y --no-install-recommends install unzip
RUN apt-get -y --no-install-recommends install procps
RUN rm -rf /var/lib/apt/lists/*
RUN curl -sSL https://aka.ms/getvsdbgsh | bash /dev/stdin -v latest -l /vsdbg
VOLUME /app
WORKDIR /app
CMD dotnet watch run --urls http://0.0.0.0:80
build.yml (the docker-compose.yml for building release versions)
version: "3.5"
services:
project:
build:
context: ./project
dockerfile: build.Dockerfile
volumes:
- ./project:/app
build.Dockerfile (the Dockerfile for building release versions)
FROM microsoft/dotnet:2.1-sdk
VOLUME /app
# restore as a separate layer to speed up builds
WORKDIR /src
COPY src/Project/Project.csproj .
RUN dotnet restore
COPY src/Project/ .
CMD dotnet publish -c Release -o /app/out/
There's a simple solution: add the following two lines to your docker-compose.yml file, and the error will disappear.
environment:
- ASPNETCORE_URLS=https://+;http://+;
Just wanted to share one other issue I ran into recently while containerizing a set of projects, even with everything configured correctly. Hope it might help someone else.
It's very inconspicuous, but some of the projects had this in Program.cs:
.ConfigureAppConfiguration((hostingContext, config) =>
{
config.Sources.Clear();
...
This removes all of the configuration sources, and the ChainedConfigurationSource specifically, which leads to the error above.
I solved the same issue in the following way:
Added the following to appsettings.json to force Kestrel to listen on port 80.
"Kestrel": {
"EndPoints": {
"Http": {
"Url": "http://+:80"
}
}
}
Exposed the port in the Dockerfile:
ENV ASPNETCORE_URLS=http://+:80
EXPOSE 80
Ran the container using the command below:
docker run -p 8080:80 <image-name>:<tag>
The app is then exposed on http://localhost:8080/.
I'm trying to have one service build my client side and then share it with the server using a named volume. Every time I do docker-compose up --build, I want the client side to build and update the named volume clientapp:. How do I do that?
docker-compose.yml
version: '2'
volumes:
clientapp:
services:
database:
image: mongo:3.4
volumes:
- /data/db
- /var/lib/mongodb
- /var/log/mongodb
client:
build: ./client
volumes:
- clientapp:/usr/src/app/client
server:
build: ./server
ports:
- "3000:3000"
environment:
- DB_1_PORT_27017_TCP_ADDR=database
volumes:
- clientapp:/usr/src/app/client
depends_on:
- client
- database
client Dockerfile
FROM node:6
ENV NPM_CONFIG_LOGLEVEL warn
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY package.json /usr/src/app
RUN npm install
COPY . /usr/src/app
# builds my application into /client
CMD ["npm", "build"]
By definition, a volume is a persistent directory that Docker won't touch other than to perform an initial creation when it is empty. If this is your code, it probably shouldn't be a volume.
With that said, you can:
Delete the volume between runs with docker-compose down -v; it will be recreated and initialized on the next docker-compose up -d.
Change your container startup scripts to copy the files from some other directory in the image to the volume location on startup.
Get rid of the volume and include the code directly in the image.
I'd recommend the latter.
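A sketch of that last option, building the client at image-build time so the output is baked into the image and no shared volume is needed (the npm run build script is an assumption about the project's package.json):

```dockerfile
# client Dockerfile (sketch): build during docker build rather than in CMD,
# so /usr/src/app/client exists in the image itself and no volume is needed
FROM node:6
ENV NPM_CONFIG_LOGLEVEL warn
WORKDIR /usr/src/app
COPY package.json .
RUN npm install
COPY . .
RUN npm run build   # assumed "build" script; bakes the client into the image
```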
Imagine you shared your src folder like this:
...
volumes:
- ./my_src:/path/to/docker/src
...
What worked for me was to chown the my_src folder:
chown $USER:$USER -R my_src
It turned out some files had been created by root and couldn't be modified by Docker.
Hope it helps!