Docker container doesn't update when file changes are made on host

I'm using a Docker image as my dev environment for a specific version of PHP. I'm using PHP for a command-line script, so every time I change the script I'd like the change to be reflected in the container automatically.
I'm not sure if this is even possible; I assumed it was. I mostly use docker-compose, where I can easily add volumes to achieve this, but not in this instance with plain docker.
My Dockerfile:
FROM php:7.2-cli
COPY ./app /usr/src/app/
WORKDIR /usr/src/app
ENTRYPOINT [ "php" ]
CMD [ "./index.php" ]
I first run:
docker build -t app .
And then
docker run app
Everything works well. But if I change something in index.php I have to run the steps again. Is this expected behaviour or is there any way to have docker watch for changes?

Use a volume so that the files within the container reflect local changes:
docker run -v /home/user/location:/usr/src/app app
https://docs.docker.com/storage/volumes/#choose-the--v-or---mount-flag
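Since you mention docker-compose, the equivalent compose setup is a short sketch like this (the host path ./app and the service name app are assumptions matching the COPY line in your Dockerfile):
version: "3"
services:
  app:
    build: .
    volumes:
      - ./app:/usr/src/app
docker-compose run app (or docker-compose up) will then see edits to index.php immediately, because the bind mount shadows the files baked in by COPY.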

If you are editing a file using vim/sublime on your host, outside Docker, this is normal: vim/sublime does not simply "edit" that file in place, it creates a new file and renames it over the original, so a bind mount of that file keeps pointing at the old inode. See: https://github.com/moby/moby/issues/15793
Solution:
Sublime Text has an atomic_save setting for this, so adding "atomic_save": false to the user preferences fixed it (after a restart).
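The linked issue also covers Vim; there the usual fix is to make Vim overwrite the file in place instead of doing a rename-and-replace, e.g. in your ~/.vimrc (a config sketch):
" keep the original file (and its inode) when writing backups
set backupcopy=yes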
If you are using docker-compose, rebuild and restart with:
$ docker-compose up --build

Related

I want to write a Dockerfile where my container can load a db file from a directory, copy it to the application directory, and on exit write it back

I am working with a Golang application that saves its information inside an SQLite file, which resides at data/sqlite.db, in the same directory as the Dockerfile. My Dockerfile is something like this:
P.S.: guys, it's my very first Dockerfile, please be kind to me :(
FROM golang:1.16.4
ENV GIN_MODE=release
ENV PORT=8081
ADD . /go/src/multisig-svc
WORKDIR /go/src/multisig-svc
RUN go mod download
RUN go build -o bin/multisig-svc cmd/main.go
EXPOSE $PORT
ENTRYPOINT ./bin/multisig-svc
I deployed this application to the Google Cloud Platform, but somehow the container gets restarted there, and after that my db has vanished. So I researched and tried to use volumes.
I build the container using the command docker build -t svc . and then run it with docker run -p 8080:8081 -v data:/var/dump -it svc, but I cannot see the data folder being copied to the /var/dump directory. My basic idea is: whenever the container starts, it loads the db file from dump and copies it to the data directory so the application can use it, and when it exits it copies it back to the dump directory. I don't know if I am on the right track; any help would really be appreciated.
Edit:
The issue is that when no request arrives for 15 minutes, GCP shuts the container down and starts it again when the next request comes in. So the task is to somehow fetch the db file from the dump directory, update it while the application runs, and write it back to the dump dir for future use when the container goes down.
For a local run, or if you are running on a VM, you need to specify the absolute path of the directory you want to mount as a bind mount into your container. In this case something like this should work:
docker run -p 8080:8081 -v $(pwd)/data:/var/dump -it svc
When you don't specify an absolute path, the volume you're mounting into your running container is a named volume managed by the Docker daemon, and it is not located in a path related to your current working directory. You can find more information about how Docker volumes work here: https://docs.docker.com/storage/volumes/
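You can check what the daemon did with that named volume, and where it lives on the host, with:
$ docker volume inspect data
The Mountpoint field in the JSON output is the actual host path backing the volume (on Linux, typically under /var/lib/docker/volumes/).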
However, there are multiple environments on GCP (App Engine, Kubernetes, VMs), so depending on your environment you may need to adapt this solution.
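If you specifically want the copy-on-start/copy-on-exit behaviour from your question, a small wrapper entrypoint can do it. The following is only a sketch under the assumptions of your Dockerfile above (binary at ./bin/multisig-svc, db at data/sqlite.db, dump bind-mounted at /var/dump), not a tested setup:
#!/bin/sh
# entrypoint.sh: restore the db from the mounted dump on start,
# copy it back when the container receives a stop signal
mkdir -p data
cp /var/dump/sqlite.db data/sqlite.db 2>/dev/null || true
trap 'kill $PID 2>/dev/null; wait $PID; cp data/sqlite.db /var/dump/sqlite.db' TERM INT
./bin/multisig-svc &
PID=$!
wait $PID
Bear in mind this only helps if the container stops gracefully; if it is killed outright the final copy never happens, which is why writing the db straight to a mounted path (or using an external database) is usually safer.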

Docker Autoreload with CompileDaemon

I am working on trying to improve my development environment using Docker and Go but I am struggling to get auto reloading in my containers working when there is a file change. I am on Windows running Docker Desktop version 18.09.1 if that matters.
I am using CompileDaemon for reloading and my Dockerfile is defined as follows
FROM golang:1.11-alpine
RUN apk add --no-cache ca-certificates git
RUN go get github.com/githubnemo/CompileDaemon
WORKDIR /go/src/github.com/testrepo/app
COPY . .
EXPOSE 8080
ENTRYPOINT CompileDaemon -log-prefix=false -directory="." -build="go build /go/src/github.com/testrepo/app/cmd/api/main.go" -command="/go/src/github.com/testrepo/app/main"
My project structure follows
app
  api
    main.go
In my docker-compose file I have the correct volumes set, and the files are being updated in my container when I make changes locally.
The application is also started correctly using CompileDaemon on first load; it's just never restarted on file changes.
On first load I see...
Running build command!
Build ok.
Restarting the given command.
Then any changes I make do not result in a restart, even though I can connect to the container and see that the changes are reflected in the expected files.
Ensure that you have volumes mounted for the services you are using; that is what makes hot reloading work inside a Docker container.
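As a sketch, a compose service for this project could look like the following (service name and port are assumptions; the mount target matches the WORKDIR in your Dockerfile):
version: "3"
services:
  api:
    build: .
    ports:
      - "8080:8080"
    volumes:
      - .:/go/src/github.com/testrepo/app
One Windows-specific caveat: filesystem change notifications often do not propagate from the host into the container across Docker Desktop's shared filesystem, so a watcher relying on fsnotify may never fire even though the files themselves are updated. CompileDaemon has a polling mode for this case (check its README for the -polling flag in your version), which is worth trying.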
The proper way of installing CompileDaemon is:
RUN go install -mod=mod github.com/githubnemo/CompileDaemon
Reference: https://github.com/githubnemo/CompileDaemon/issues/45#issuecomment-1218054581

Developing in docker-compose. Getting the container to recognise code changes

I have a docker-container with a Python3 environment and various libraries installed.
I'm trying to develop a simple Python program against this environment.
So what I have is a volume with my source code outside the container which is ADDed and set as WORKDIR in the Dockerfile.
I'm then shelling into the container and trying to run the program on the command-line.
When I hit an error, I want to simply change the source in my editor which is outside the container, and run again.
However, when I do this, the executing code in the container doesn't seem to be taking any notice of the changes I made.
If I do
docker-compose up --build
and rebuild the container then it does.
Obviously this is very slow.
Surely it should be possible for the container to see changes to the code I'm working on without being rebuilt? If so, how do I make this happen?
Using ADD bakes files into a container image, so as you've noticed, updating files in a running application requires an entire container rebuild and restart. To get around this, you can mount a directory on your host machine over the path you've copied into your container using ADD.
To do this with Docker, you can use -v or --volume. Using Docker Compose, you can list the directory to be mounted under volumes:. For example, if you had the following in your Dockerfile:
# Copy app code into the container working directory
ADD /my/app/code /usr/app/src
You can then mount your live code over the baked-in files at container start time (note that directory paths must be absolute - you can use $PWD for this):
$ docker run -v /my/live/app/code:/usr/app/src python:latest
$ docker run -v "$PWD"/app/code:/usr/app/src python:latest
The docker-compose.yml equivalent is as follows:
my-service:
  image: python:latest
  volumes:
    - /my/live/app/code:/usr/app/src
    - ./relative/paths:/work/too
There's more about bind mounts in the documentation.
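Note also that relative host paths like ./relative/paths are only valid in docker-compose.yml, where they are resolved against the file's location; plain docker run requires absolute paths, as above. In the modern compose format the service sits under a top-level services: key, so the full file is a minimal sketch like:
version: "3"
services:
  my-service:
    image: python:latest
    volumes:
      - /my/live/app/code:/usr/app/src
      - ./relative/paths:/work/too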

Docker, copy files in production and use volume in development

I'm new to using Docker for development, but I wanted to try it in my latest project and have run into a couple of questions.
I have a scenario where I want to link the current project directory as a volume to a running Docker container in development mode, so that file changes can be made locally without restarting the container each time. To do this, I have the following command:
docker run --name app_instance -p 3100:80 -v $(pwd):/app appimage
In contrast, in production I want to copy files from the current project directory,
e.g. in the Dockerfile have ADD . /app (with a .dockerignore file to ignore certain folders). Also, I would like to mount a volume for persistent storage. For this scenario, I have the following command:
docker run --name app_instance -p 80:80 -v ./filestore:/app/filestore appimage
My problem is that with only one dockerfile, for the development command a volume will be mounted at /app and also files copied with ADD . /app. I haven't tested what happens in this scenario, but I am assuming it is incorrect to have both for the same destination.
My question is, what is the best practice to handle such a situation?
Solutions I have thought of:
Mount project folder to different path than /app during development and ignore the /app directory created in the container by the dockerfile
Have two docker files, one that copies the current project and one that does not.
My problem is that with only one dockerfile, for the development command a volume will be mounted at /app and also files copied with ADD . /app. I haven't tested what happens in this scenario, but I am assuming it is incorrect to have both for the same destination.
For this scenario, it works as follows:
a) Your code on the host is added to the /app folder in the container at docker build time.
b) Your local app directory is mounted over that folder at docker run time, so the container always sees your latest development code.
The bind mount hides the contents you added in the Dockerfile, so this meets your requirements. You should try it; there is no need for any complex solution.
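If you drive both modes with docker-compose, one common pattern is a production base file plus a development override, since docker-compose automatically layers docker-compose.override.yml on top of docker-compose.yml. A sketch (service name assumed):
# docker-compose.yml - production: baked-in code, persistent filestore
version: "3"
services:
  app:
    build: .
    volumes:
      - ./filestore:/app/filestore
# docker-compose.override.yml - development: mount live code over /app
version: "3"
services:
  app:
    volumes:
      - .:/app
A plain docker-compose up merges both files for development, while docker-compose -f docker-compose.yml up uses only the base file for production.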

Why do the changes I make in my working directory not show up in my Docker container?

I would like to run and test parse-dashboard via Docker, as documented in the readme.
I am getting the error message, "Parse Dashboard can only be remotely accessed via HTTPS." Normally, you can bypass this by adding the line "allowInsecureHTTP": true in your parse-dashboard-config.json file. But even if I have added this option to my config file, the same message is displayed.
I tried to edit the config file in the Docker container, whereupon I discovered that none of my local file changes were present in the container. It appeared as though my project was an unmodified version of the code from the GitHub repository.
Why do the changes that I make to the files in my working directory on the host machine not show up in the Docker container?
But what is uploaded to my Docker image is, in fact, the config file from my master branch.
It depends:
on what that "docker" is: the official Docker Hub or a private Docker registry?
on how it is uploaded: do you build an image and then use docker push, or do you simply do a git push back to your GitHub repo?
Basically, if you want to see the right files in the Docker container that you run, you must be sure to run an image you have built (docker build) from a Dockerfile which COPYs files from your current workspace.
If you do a docker build from a folder where your Git repo is checked out at the right branch, you will get an image with the right files.
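In commands, that workflow is a sketch like this (the registry and tag names are placeholders, not from the question):
$ git checkout master
$ docker build -t myregistry/parse-dashboard:latest .
$ docker push myregistry/parse-dashboard:latest
$ docker run -d -p 8080:4040 myregistry/parse-dashboard:latest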
The Dockerfile from the parse-dashboard repository you linked uses ADD . /src. This is a bad practice (because of the problems you're running into). Here are two different approaches you could take to work around it:
Rebuild the Image Each Time
Any time you change anything in the working directory (which the Dockerfile ADDs to /src), you need to rebuild for the change to take effect. The exception to this is src/Parse-Dashboard/parse-dashboard-config.json, which we'll mount in with a volume. The workflow would be nearly identical to the one in the readme:
$ docker build -t parse-dashboard .
$ docker run -d -p 8080:4040 -v "$PWD"/src/Parse-Dashboard/parse-dashboard-config.json:/src/Parse-Dashboard/parse-dashboard-config.json parse-dashboard
Use a Volume
If we're going to use a volume to do this, we don't even need the custom Dockerfile shipped with the project. We'll just use the official Node image, upon which the Dockerfile is based.
In this case, Docker will not run the build process for you, so you should do it yourself on the host machine before starting Docker:
$ npm install
$ npm run build
Now we can start the generic Node Docker image and ask it to serve our project directory.
$ docker run -d -p 8080:4040 -v "$PWD":/src node:4.7.2 sh -c "cd /src && npm run dashboard"
Changes will take effect immediately because you mount your project directory into the container as a volume. Because it's not done with ADD, you don't need to rebuild the image each time. We can use the generic node image because, if we're not ADDing a directory and running the build commands, there's nothing our image does differently from the official one.
