I have a project with the following file structure:
- Dockerfile
- app/
  - file.txt
  - uploads/
The file.txt file contains Hello 1.
The Dockerfile generates the app image and is quite simple:
FROM busybox
COPY ./app /var/www/app
VOLUME /var/www/app/uploads
The generated image is pushed to Docker Hub on the michaelperrin/app-test repository.
On my server where the app is deployed, I have the following docker-compose.yml file:
version: '2'
services:
  app:
    image: michaelperrin/app-test:0.1.0
    working_dir: /var/www/app
    volumes:
      - /var/www/app
  nginx:
    image: nginx:1.11
    volumes_from:
      - app
    working_dir: /var/www/app
It defines two containers:
- the app image;
- an Nginx server that has access to the app files.
The app is run with the docker-compose up -d command.
Running docker-compose exec nginx cat file.txt will therefore display:
Hello 1
Now, suppose I do the following steps:
Update the content of file.txt with Hello 2 on my local machine.
Build a new image of my app (that copies the new version of file.txt).
Tag it and push it to Docker Hub as version 0.2.0.
Change my docker-compose.yml file on the server to tell that I now use michaelperrin/app-test:0.2.0 for my app.
Run docker-compose up -d (and docker-compose restart to be sure).
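In shell form, those steps look roughly like this (a sketch; the first two commands run locally, the rest on the server):
docker build -t michaelperrin/app-test:0.2.0 .
docker push michaelperrin/app-test:0.2.0
# on the server, after updating docker-compose.yml:
docker-compose pull
docker-compose up -d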
Then the terminal outputs:
Status: Downloaded newer image for michaelperrin/app-test:0.2.0
Recreating apptest_app_1
Recreating apptest_nginx_1
And here is my problem:
If I run docker-compose exec nginx cat file.txt, it will still display Hello 1, not Hello 2.
The only solution I found was to do the following:
docker-compose stop app
docker-compose rm app
docker-compose up -d
Is there any better solution?
The problem with the rm solution is that it also removes any files my app created inside the app container, in the /var/www/app/uploads directory (despite the fact that it is declared as a volume in the Dockerfile).
I think (and really hope) that this is not possible. You create an instance (container) from your image with the state it had at the moment it was built. You would get unintended side effects if creating a new image affected existing containers.
Therefore you should remove the old containers and create fresh ones from the new image.
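One way to keep the uploads while still getting fresh app files (a sketch I'm adding, not from the original posts; it assumes Compose v2 named volumes): mount a named volume only on the uploads directory, so recreating the app container with the new image leaves the uploaded files untouched.
version: '2'
services:
  app:
    image: michaelperrin/app-test:0.2.0
    working_dir: /var/www/app
    volumes:
      - uploads:/var/www/app/uploads # named volume: survives container recreation
  nginx:
    image: nginx:1.11
    volumes_from:
      - app
    working_dir: /var/www/app
volumes:
  uploads: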
Related
I am very (read very) new to Docker so experimenting. I have created a very basic Dockerfile to pull in Laravel:
FROM composer:latest
RUN composer_version="$(composer --version)" && echo $composer_version
RUN composer global require laravel/installer
WORKDIR /var/www
RUN composer create-project --prefer-dist laravel/laravel site
My docker-compose.yml file looks like:
version: '3.7'
services:
  laravel:
    build:
      context: .
      dockerfile: laravel.dockerfile
    container_name: my_laravel
    network_mode: host
    restart: on-failure
    volumes:
      - ./site:/var/www/site
When I run docker-compose up, the ./site directory is created but its contents are empty. I've put this in docker-compose as I plan on including other things like nginx, mysql, php, etc.
The command:
docker run -v "/where/i/want/data/site:/var/www/site" my_laravel
Results in the same behaviour.
I know the install is successful because I modified my dockerfile with the following two lines appended to it:
WORKDIR /var/www/site
RUN ls -la
Which gives me the correct listing.
Clearly misunderstanding something here. Any help appreciated.
EDIT: So, I was able to get this to work... although it was slightly more difficult than just specifying a path.
You can accomplish this by specifying a volume in docker-compose.yml. The path to the directory (on the host) is labeled as device in the compose file. It appears that the root of the path has to be an actual volume (possibly a share would work), but the 'destination' of the path can be a directory on the specified volume.
I created a new volume called docker on my machine, but I suppose you could do this with your existing disk/volume.
I am on a Mac and this docker-compose.yml file worked for me:
version: '3.7'
services:
  nodemon-test:
    container_name: my-nodemon-test
    image: oze4/nodemon-docker-test
    ports:
      - "1337:1337"
    volumes:
      - docker_test_app:/app # see comment below on which name to use here
volumes:
  docker_test_app: # use this name under the service's `volumes:`
    name: docker_test_app
    driver: local
    driver_opts:
      o: bind
      type: none
      device: /Volumes/docker/docker_test_app
The container specified exists on my Docker Hub; this is the source code for it, just in case you are worried about anything malicious. I created it a couple of weeks ago to help someone else on Stack Overflow.
[Screenshot: files from the container shown on my machine (the host).]
You can read more about Docker Volume configs here if you would like.
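For reference, the same bind-backed named volume can also be created up front with the CLI (a sketch using the same paths as above):
docker volume create --driver local \
  --opt type=none \
  --opt o=bind \
  --opt device=/Volumes/docker/docker_test_app \
  docker_test_app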
ORIGINAL ANSWER:
It looks like you are trying to share the build directory with your host machine. After some testing, it appears Docker will overwrite the specified path in the container with the contents of the path on the host.
If you run docker logs my_laravel you should see an error about missing files at /var/www/site. So, even though the build is successful, once Docker mounts the directory from your machine (./site) onto the container (/var/www/site), it overwrites the path within the container with the contents of the path on your host, which is empty.
To test and make sure the contents of /var/www/site are in fact being overwritten, you can run docker exec -it my_laravel /bin/bash (you may need to replace /bin/bash with /bin/sh). This will give you command-line access inside the container. From there you can do ls -a /var/www/site.
Furthermore, you can also pre-stage ./site with a random test file in it (test.txt or whatever), then docker-compose up -d, then run the same docker exec command from the step above and see whether the staged test.txt file is now inside the container. This gives you definitive evidence that with bind mounts, the data on your host overwrites the data in the container.
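Concretely, that pre-stage test could look like this (a sketch reusing the names from the question):
echo "hello" > site/test.txt
docker-compose up -d
# test.txt should be listed; the files built into the image will not be
docker exec -it my_laravel ls -a /var/www/site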
With that being said, doing something like this to share a log directory will work. The volume path specified on the container is still overwritten; the difference is that the container writes to that path at runtime, and it doesn't rely on it for config files/app files.
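A log-sharing setup along those lines might look like this (a sketch; the ./logs path and Laravel's storage/logs location are my assumptions, not from the question):
services:
  laravel:
    build:
      context: .
      dockerfile: laravel.dockerfile
    volumes:
      - ./logs:/var/www/site/storage/logs # the container writes here at runtime; an empty host dir is fine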
Hope this helps.
I am trying to use a Docker volume/bind mount so that I don't need to rebuild my project after every small change. I do not get any error, but changes in the local files are not visible in the container, so I still have to rebuild the project to get the new filesystem snapshot.
The following solution seemed to work for some people, so I have tried restarting Docker and resetting credentials at Docker Desktop --> Settings --> Shared Drives.
Here is my docker-compose.yml file
version: '3'
services:
  web:
    build:
      context: .
      dockerfile: Dockerfile.dev
    ports:
      - "3000:3000"
    volumes:
      - /app/node_modules
      - .:/app
I have tried through the Docker CLI too, but the problem persists:
docker build -f Dockerfile.dev .
docker run -p 3000:3000 -v /app/node_modules -v ${pwd}:/app image-id
Windows does copy the files in the current directory to the container, but they are not kept in sync.
I am using Windows 10 PowerShell and Docker version 18.09.2.
UPDATE:
I have checked the container contents using the command
docker exec -t -i container-id sh
and printed the file contents using the command
cat filename
From this it is clear that the files the container references have been updated, but I still don't understand why I have to restart the container to see the changes in the browser.
Shouldn't they be apparent after just refreshing the tab?
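One possible explanation (my assumption, not from the original post): file-change events often don't propagate across Windows bind mounts, so watchers inside the container never fire. For webpack/create-react-app style dev servers, enabling polling is a common workaround:
services:
  web:
    build:
      context: .
      dockerfile: Dockerfile.dev
    environment:
      - CHOKIDAR_USEPOLLING=true # make the watcher poll the filesystem instead of waiting for events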
Short version:
I want to add files in a docker container in docker-compose or Dockerfile and I want to make it accessible from other containers that I made in docker-compose file. How can I do that?
Long version:
I have a Python app in a container that uses a .csv file to generate a POJO machine learning model.
I also have a Java app in a container that uses the POJO machine learning model and appends the .csv file. The Java app has a fileWatcher() method implemented.
The containers are made from the docker-compose file, which calls a Dockerfile for each of them. So I want to add the files this way, not with docker commands on the command line.
You can add the same named volume to different containers:
docker volume create --name volume_data
docker run -t -i -v volume_data:/public debian:jessie /bin/bash
docker run -t -i -v volume_data:/public2 debian:jessie /bin/bash
or as a docker-compose.yml:
services:
  assets:
    image: any_asset_image
    volumes:
      - assets:/public/assets
  proxy:
    image: nginx
    volumes:
      - assets:/public/assets
volumes:
  assets:
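To verify the sharing, something like this should work (a sketch with hypothetical container names; both containers see the same files through the same named volume):
docker run -d --name writer -v volume_data:/public debian:jessie sleep infinity
docker run -d --name reader -v volume_data:/public2 debian:jessie sleep infinity
docker exec writer sh -c 'echo hello > /public/shared.txt'
docker exec reader cat /public2/shared.txt # prints: hello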
I have, for example, two containers in Docker.
compose.yml:
version: '2'
services:
  nginx:
    image: local-nginx:0.3
    ports:
      - "81:81"
    volumes_from:
      - webapp
  webapp:
    image: local-webapp:0.65
The webapp Dockerfile:
FROM node:4.3.0
...
VOLUME /www
CMD npm run some_script
So what happens is that the webapp container shares the /www folder with nginx, and static files are served from the nginx container.
I'm starting my app with the command
docker-compose -f compose.yml up
Everything works fine, good. But when I want to run the application with another version of webapp, for example local-webapp:0.66, I change the version to 0.66 in compose.yml, stop the current containers, and run again:
docker-compose -f compose.yml up
But I still see the same version of webapp. When I go inside the nginx container, I still see the same files from the previous 0.65. To see the correct files, I must remove all containers and then run docker-compose -f compose.yml up again.
So, the question: how can I configure my compose.yml file to update the volume without removing all containers?
This is because Compose preserves volumes.
If you want new data, you have two options:
- don't use a volume
- remove the containers first
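In command form, the second option looks roughly like this (a sketch; rm -v also drops the container's anonymous volumes, so the new image's /www content is picked up):
docker-compose -f compose.yml stop webapp
docker-compose -f compose.yml rm -f -v webapp
docker-compose -f compose.yml up -d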
I'm a newbie to docker.
I want to create an image with my web application. I need some application server, e.g. wlp, then I need some database, e.g. postgres.
There is a Docker image for wlp and there is a Docker image for postgres.
So I created the following simple Dockerfile:
FROM websphere-liberty:javaee7
FROM postgres:latest
Now, maybe it's lame, but when I build this image
docker build -t wlp-db .
run the container
docker run -it --name wlp-db-test wlp-db
and check it
docker exec -it wlp-db-test /bin/bash
only postgres is running and wlp is not even there. The /opt directory is empty.
What am I missing?
You need to use a docker-compose file. This lets you bind two different containers running two different images: one holding your server and the other the database service.
Here is an example of a Node.js server container working with a MongoDB container.
First of all, I write the Dockerfile to configure the main container:
FROM node:latest
RUN mkdir /src
RUN npm install nodemon -g
WORKDIR /src
ADD app/package.json package.json
RUN npm install
EXPOSE 3000
CMD npm start
Then i Create the docker-compose file to configure both containers and link them
version: '3' # docker-compose version
services: # services are your different containers
  node_server: # first container, containing the Node.js server
    build: . # all of my source files are at the root path
    volumes: # volumes enable hot reload, for example
      - "./app:/src/app"
    ports: # bind the host port to the container port
      - "3030:3000"
    links: # link the first service to the named mongo service (see below)
      - "mongo:mongo"
  mongo: # declaration of the mongodb container
    image: mongo # using the mongo image
    ports: # port binding for mongodb is required
      - "27017:27017"
I hope this helped.
Each service should have its own image/Dockerfile. You start multiple containers and connect them over a network so they can communicate.
If you wish to compose multiple containers in one file, check out docker-compose, which is made for just that!
You can't use FROM multiple times in one file and expect both processes to run.
Each FROM creates the layers from its image, but there is only one entrypoint process, which here is Postgres, because it comes second.
This pattern is typically only done when you have some "setup" docker image, then a "runtime" image on top of it.
https://docs.docker.com/engine/userguide/eng-image/multistage-build/#use-multi-stage-builds
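For illustration, a multi-stage build in that spirit might look like this (a sketch; the Maven project layout and WAR name are my assumptions):
# build ("setup") stage
FROM maven:3 AS build
COPY . /src
WORKDIR /src
RUN mvn package

# runtime stage: only the WAR is carried over
FROM websphere-liberty:javaee7
COPY --from=build /src/target/app.war /config/dropins/app.war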
Also, what you're trying to do doesn't adhere very well to "microservices". Run the database separately from your application. Docker Compose can assist you with that, and almost all the examples on Docker's website use Postgres with some web app.
Plus, you're starting an empty database and server. You need to copy at least a WAR, for example, to run your server code
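A Compose file in that spirit might look like this (a sketch; the WAR path and the Postgres password are placeholders I'm assuming, not from the question):
version: '3'
services:
  wlp:
    image: websphere-liberty:javaee7
    ports:
      - "9080:9080"
    volumes:
      - ./myapp.war:/config/dropins/myapp.war # hypothetical WAR; dropins is Liberty's auto-deploy directory
    depends_on:
      - db
  db:
    image: postgres:latest
    environment:
      - POSTGRES_PASSWORD=changeme # required by recent postgres images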