I am trying to use a Docker volume/bind mount so that I don't need to rebuild my project after every small change. I don't get any error, but changes to the local files are not visible in the container, so I still have to rebuild the project to get a new filesystem snapshot.
The following seemed to work for some people, so I have tried it too: restarting Docker and resetting credentials at Docker Desktop --> Settings --> Shared Drives.
Here is my docker-compose.yml file
version: '3'
services:
  web:
    build:
      context: .
      dockerfile: Dockerfile.dev
    ports:
      - "3000:3000"
    volumes:
      - /app/node_modules
      - .:/app
I have tried it through the Docker CLI too, but the problem persists:
docker build -f Dockerfile.dev .
docker run -p 3000:3000 -v /app/node_modules -v ${pwd}:/app image-id
Windows does copy the files in the current directory to the container, but they are not kept in sync afterwards.
I am using Windows 10 PowerShell and Docker version 18.09.2.
UPDATE:
I have checked the container contents using the command
docker exec -t -i container-id sh
and printed the file contents using the command
cat filename
From this it is clear that the files the container references have been updated, but I still don't understand why I have to restart the container to see the changes in the browser.
Shouldn't they be visible after just refreshing the tab?
I have installed rootless Docker on Ubuntu 20.04 (https://docs.docker.com/engine/security/rootless/).
I have downloaded the VSCodium AppImage from https://github.com/VSCodium/vscodium/releases/download/1.66.0/VSCodium-1.66.0-1648720116.glibc2.17-x86_64.AppImage.
I have shared the host directory containing this AppImage with the rootless Docker container, but it doesn't run. When I manually install (apt-get install) any GUI package (e.g. Firefox) inside the container, it runs successfully.
Output of the command docker-compose up vscodium:
Creating vscodium ... done
Attaching to vscodium
vscodium | codium: error while loading shared libraries: libnss3.so: cannot open shared object file: No such file or directory
vscodium exited with code 127
Content of docker-compose.yml:
version: "3"
services:
vscodium:
image: python:3.10.4-bullseye
entrypoint: custom-docker-entrypoint.sh
container_name: vscodium
environment:
- DISPLAY=${DISPLAY}
volumes:
- /tmp/.X11-unix:/tmp/.X11-unix:ro
- $HOME/.Xauthority:$HOME/.Xauthority:ro
- ./custom-docker-entrypoint.sh:/usr/local/bin/custom-docker-entrypoint.sh
- ./appImages/VSCodium.AppImage:/ide/VSCodium.AppImage
network_mode: host
Content of custom-docker-entrypoint.sh:
#!/bin/sh
chmod a+x /ide/VSCodium.AppImage
/ide/VSCodium.AppImage --appimage-extract-and-run
A few notes on running AppImages inside Docker:
AppImages require FUSE to run, which is usually not available/usable inside Docker.
Extract the AppImage contents and mount that folder into your Docker container instead.
For the missing libnss3.so, you will have to install that library in the environment where the AppImage actually runs (here, the container image). If it still doesn't work, report it to the AppImage author so they can include it in the bundle.
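As a rough sketch (libnss3 is the library the error reports as missing; the other package names are assumptions, being common Electron/Chromium dependencies, and python:3.10.4-bullseye is Debian-based, so apt-get is available), the entrypoint could install the missing libraries before launching the AppImage:
#!/bin/sh
# Sketch only: install libnss3 (reported missing) plus other typical Electron deps (assumed).
apt-get update && apt-get install -y --no-install-recommends libnss3 libgtk-3-0 libx11-xcb1 libasound2
chmod a+x /ide/VSCodium.AppImage
# --no-sandbox is often needed when running Electron apps as root inside a container
exec /ide/VSCodium.AppImage --appimage-extract-and-run --no-sandbox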
In a docker-compose.yml file I have defined the following service:
php:
  container_name: php
  build:
    context: ./container/php
    dockerfile: Dockerfile
  networks:
    - saasnet
  volumes:
    - ./services:/var/www/html
    - ./logs/php:/usr/local/etc/php-fpm.d/zz-log.conf
  environment:
    - "DB_PORT=3306"
    - "DB_HOST=database"
It all builds fine, and another service (nginx) using the same volume mapping, ./services:/var/www/html, finds PHP as expected, so it all works in the browser. So far, so good.
But now I want to go into the container because I want to run composer install from a certain directory inside the container. So I go into the container using:
docker run -it php bash
And I find myself in the container at /var/www/html, where I expect to be able to navigate as if I were in the ./services directory on my host machine, but ls at this point inside the container shows no files at all.
What am I missing or not understanding about how this works?
Your problem is that you are not specifying the volume on your run command: docker run is not aware of your docker-compose.yml. If you want to run it with all the options specified there, you need to either use docker-compose run or pass all the options to docker run:
docker-compose run php bash
docker run -it -e DB_PORT=3306 -e DB_HOST=database -v "$(pwd)/services:/var/www/html" -v "$(pwd)/logs/php:/usr/local/etc/php-fpm.d/zz-log.conf" php bash
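If the php service is already up from docker-compose up, you can also just open a shell in the existing container (with its volumes already mounted) instead of starting a new one:
docker-compose exec php bash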
I am trying to use Docker for local development. The problem is that when I make a change to my code, I have to run the following commands to see the updates locally:
docker-compose down
docker images # Copy the name of the image
docker rmi <IMAGE_NAME>
docker-compose up -d
That's quite a mouthful, and takes a while. (Possibly I could make it into a bash script, but do you think that is a good idea?)
My real question is: Is there a command that I can use (even manually each time) that will update the image & container? Or do I have to go through the entire workflow above every time I make a change in my code?
Just for reference, here is my Dockerfile and docker-compose.yml.
Dockerfile
FROM node:12.18.3
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build
EXPOSE 4000
CMD ["npm", "start"]
docker-compose.yml
version: "2"
services:
web:
build:
context: .
dockerfile: Dockerfile
container_name: web
restart: always
ports:
- "3000:3000"
depends_on:
- mongo
mongo:
container_name: mongo
image: mongo
volumes:
- ./data:/data/db
ports:
- "27017:27017"
Even though there are multiple good answers to this question, I think they missed the point, as the OP is asking about the local dev environment. The command I usually use in this situation is:
docker-compose up -d --build
If there aren't any errors in Dockerfile, it should rebuild all the images before bringing up the stack. It could be used in a shell script if needed.
#!/bin/bash
sudo docker-compose up -d --build
If you need to tear down the whole stack, you can have another script:
#!/bin/bash
sudo docker-compose down -v
The -v flag removes all the volumes so you can have a fresh start.
NOTE: In some cases, sudo might not be needed to run the command.
When a Docker image is built, the artifacts are copied into it, and no new change is reflected until you rebuild the image.
But
If it is only for local development, then you can leverage volume sharing to update the code inside the container at runtime. The idea is to share your app/repo directory on the host machine with /usr/src/app (as per your Dockerfile); with this approach, your code (and new changes) will appear both on the host and in the running container.
You will also need to restart the server on every change; for this you can run your app with nodemon, which watches for changes in the code and restarts the server (see the sketch after the compose snippet below).
Changes required in docker-compose.yml:
services:
  web:
    ...
    container_name: web
    ...
    volumes:
      - /path/in/host/machine:/usr/src/app
    ...
    ...
    ports:
      - "3000:3000"
    depends_on:
      - mongo
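One hedged way to wire in nodemon (assuming it is installed in the image, e.g. as a devDependency pulled in by npm install, and assuming the entry file is server.js, which is only a placeholder here) is to override the service command:
services:
  web:
    ...
    command: npx nodemon --legacy-watch server.js
The --legacy-watch flag makes nodemon poll for changes, which helps when the code lives on a Windows or macOS bind mount where filesystem events don't always reach the container.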
You may use Docker Swarm as an orchestration tool to apply rolling updates. Check Apply rolling updates to a service.
Basically you deploy the stack once (with a shell script if you like), and once your containers are running you can create a Jenkinsfile or configure a CI/CD pipeline that pulls the updated image and applies it to the running service with docker service update --image <NEW_IMAGE> <SERVICE>.
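For illustration only (the image and service names below are placeholders), such a pipeline step might run:
docker service update --image myregistry/web:1.2.3 mystack_web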
Here is a simplified version of my docker-compose.yml (it's the volume in buggy-service that does not behave as I expect):
version: '3.4'
services:
  local-db:
    image: postgres:9.6
    environment:
      - DB_NAME=${DB_NAME}
      # other env vars (not important)
    ports:
      - 5432:5432
    volumes:
      - ~/.docker-volumes/${DB_NAME}/postgresql/data:/var/lib/postgresql/data
      - postgresql:/docker-entrypoint-initdb.d
  buggy-service:
    build:
      context: .
      dockerfile: Dockerfile.test
      target: buggy-image
      args:
        # bunch of args (not important)
    volumes:
      - /Users/me/temp:/temp
volumes:
  postgresql:
    driver_opts:
      type: none
      device: /Users/me/postgresql
      o: bind
If I do docker-compose -f docker-compose.yml up -d local-db, a container for it starts up automatically and I find that /Users/me/postgresql on the host machine (Mac OSX) binds correctly to /docker-entrypoint-initdb.d with content synced.
However, if I do docker-compose -f docker-compose.yml up --build -d buggy-service, a container does not start up automatically.
Question: How do I get buggy-service to behave like local-db, i.e., start up automatically with the required volume mounted?
Here's the stripped down version of Dockerfile.test referenced by buggy-service:
FROM microsoft/dotnet:2.1-sdk-alpine AS buggy-image
# Bunch of ARG definitions (not important)
VOLUME /temp
# other stuff (not important)
ENTRYPOINT ["/bin/bash"]
# Other FROMs
Edit 1
A bit more info about what I’m trying to achieve...
The buggy-container I’m trying to get working uses .NET Core as the base image. Its purpose is to run dotnet test and generate coverage reports, which can then be consumed on the host, which may be either a local dev machine or a build server (in this case, Bitbucket Pipelines).
... followed by docker run -dit --name buggy-container buggy-image
This command creates a new container, not based on anything in the compose yml file. Without a volume specification, it will only get an anonymous volume, since you've defined the volume in the Dockerfile (I tend to recommend against defining a volume there). You can see the anonymous volumes with a docker volume ls command; they'll be the ones with a long unique ID and no reference to what they belong to.
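For reference, they can be listed and cleaned up like this (prune only touches volumes that no container references anymore):
docker volume ls      # anonymous volumes show up with a long hex ID as their name
docker volume prune   # remove volumes no longer referenced by any container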
To define a host volume from docker run, you need the -v flag:
docker run -dit -v /Users/me/temp:/temp --name buggy-container buggy-image
From your now changed question, you have a new issue. Your container specifies a single command to run in the entrypoint:
ENTRYPOINT ["/bin/bash"]
When bash runs, it reads input from stdin. When that input ends, as when you run a container with no input attached, bash will exit. When the process your container runs exits, the container exits. From the details available, I can't tell you what that command should be, but a good starting point is to look at other images on Docker Hub that perform a task similar to the one you're trying to run, and look at the Dockerfile they use (many Hub images point back to a GitHub repo with the full source).
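Since the stated goal is to run dotnet test and collect the results on the host, one possible direction (an assumption on my part, not something given in the question) is to make the entrypoint do a finite unit of work that writes into the mounted /temp directory instead of waiting on stdin, e.g. replacing the ENTRYPOINT of the buggy-image stage with:
# Run the tests and drop TRX result files into /temp, which is bind-mounted from the host
ENTRYPOINT ["dotnet", "test", "--logger", "trx", "--results-directory", "/temp"]
With the -v /Users/me/temp:/temp bind mount in place, the result files then land in /Users/me/temp on the host when the container finishes.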
I have a project with the following file structure:
- Dockerfile
- app/
  - file.txt
  - uploads/
The file.txt file contains Hello 1.
The Dockerfile generates the app image and is quite simple:
FROM busybox
COPY ./app /var/www/app
VOLUME /var/www/app/uploads
The generated image is pushed to Docker Hub on the michaelperrin/app-test repository.
On my server where the app is deployed, I have the following docker-compose.yml file:
version: '2'
services:
  app:
    image: michaelperrin/app-test:0.1.0
    working_dir: /var/www/app
    volumes:
      - /var/www/app
  nginx:
    image: nginx:1.11
    volumes_from:
      - app
    working_dir: /var/www/app
It defines two containers:
The app image.
An Nginx server, which has access to the app files.
The app is run with the docker-compose up -d command.
Running docker-compose exec nginx cat file.txt will therefore display:
Hello 1
Now, suppose I do the following steps:
Update the content of file.txt with Hello 2 on my local machine.
Build a new image of my app (that copies the new version of file.txt)
Tag it and push it on Docker Hub as version 0.2.0.
Change my docker-compose.yml file on the server to tell that I now use michaelperrin/app-test:0.2.0 for my app.
Run docker-compose up -d (and docker-compose restart to be sure).
Then the terminal outputs:
Status: Downloaded newer image for michaelperrin/app-test:0.2.0
Recreating apptest_app_1
Recreating apptest_nginx_1
And here is my problem:
If I run docker-compose exec nginx cat file.txt, it will still display Hello 1, not Hello 2.
The only solution I found was to do the following:
docker-compose stop app
docker-compose rm app
docker-compose up -d
Is there any better solution?
The problem with the rm solution is that it will remove all other files that could have been created inside the app container by my app, in the /var/www/app/uploads directory (despite the fact it is declared as a volume in the Dockerfile).
I think (and really hope) that this is not possible. You create an instance (a container) from your image with the state the image had at the moment it was built. You'd get unintended side effects if building a new image had an effect on existing containers.
Therefore you should remove the old containers and build fresh ones with the new image.
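A minimal sketch of that, assuming the image tag in docker-compose.yml has already been bumped to 0.2.0 (note that, like the rm approach, renewing the anonymous volume discards anything written to /var/www/app/uploads, so the uploads directory would need its own named volume or bind mount to survive):
docker-compose pull app
docker-compose up -d --force-recreate --renew-anon-volumes app
The --renew-anon-volumes (-V) flag is available in newer docker-compose releases; without it, Compose carries the old anonymous volume's contents over to the recreated container, which is why the stale file.txt keeps showing up.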