I have, for example, two containers in Docker.
compose.yml:
version: '2'
services:
  nginx:
    image: local-nginx:0.3
    ports:
      - "81:81"
    volumes_from:
      - webapp
  webapp:
    image: local-webapp:0.65
webapp Dockerfile:
FROM node:4.3.0
...
VOLUME /www
CMD npm run some_script
So what happens is that the webapp container shares the /www folder with nginx, and the static files are served from the nginx container.
I start my app with the command
docker-compose -f compose.yml up
Everything works fine, good. But say I want to run the application with another version of the webapp, for example local-webapp:0.66.
I change the version to 0.66 in compose.yml, stop the current containers and run again:
docker-compose -f compose.yml up
But I still see the same version of the webapp. When I go inside the nginx container I still see the files from the previous 0.65. To see the correct files, I have to remove all containers and then run docker-compose -f compose.yml up again.
So, the question: how can I configure my compose.yml file so that the volume is updated without removing all containers?
This is because Compose preserves volumes: when it recreates the webapp container, the anonymous volume created by VOLUME /www on the old container is re-attached to the new one, so you keep seeing the 0.65 files.
If you want new data, you have two options:
don't use a volume
remove the containers first (see the sketch below)
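A minimal sketch of that second option, using the commands from the question (the -d flag just runs the stack in the background):

docker-compose -f compose.yml stop
docker-compose -f compose.yml rm -f webapp
docker-compose -f compose.yml up -d

Because the old webapp container is gone, Compose cannot carry its anonymous volume over: the recreated webapp container gets a fresh /www populated from the 0.66 image, and nginx (recreated because of volumes_from) sees the new files. The old anonymous volume is simply left behind as a dangling volume.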
Related
I'm running a docker-compose setup, and when I want to update files in my image I build a new Docker image. The problem is that the file I'm editing is located in a persistent volume: the new image contains the change, but since I'm not deleting the docker-compose volumes, the existing volume is reused by the new container, so the old file is still the one being used.
Running docker-compose down -v is not an option because I want to keep the other existing files in the volume (logs etc.).
I want to know if it is possible to do this without too many hacks, since I'm looking to automate it.
Example docker-compose.yml:
version: '3.3'
services:
  myService:
    image: myImage
    container_name: myContainer
    volumes:
      - data_volume:/var/data
volumes:
  data_volume:
NOTE: the process of making the change in my case:
docker-compose down
docker build -t myImage:t1 .
docker compose up -d
You could start a container, mount the volume, and execute a command to delete single files. Something like:
docker run -d --rm -v data_volume:/var/data myImage rm /var/data/[file to delete]
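Tying that into the update process from the question, a rough sketch of an automated version might look like this (assumptions: the Compose project is named myproject, so the named volume is actually called myproject_data_volume, and old-file.txt is just a hypothetical stale file to clean up):

docker-compose down
docker build -t myImage:t1 .
docker run --rm -v myproject_data_volume:/var/data myImage rm /var/data/old-file.txt
docker compose up -d

Only the stale file is deleted; everything else in the volume (logs etc.) is kept.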
I have 2 services: nginx and web
When I build the web image, I build the frontend via the command npm install && npm run build.
But I need the built files in both containers: in web and in nginx.
How can I share files between containers (images)? I can't simply use volumes, because they are only mounted at runtime.
The Dockerfile COPY directive can copy files from an arbitrary image. While it's most commonly used in multi-stage builds, you can use it with any image, even one you built yourself.
Say your docker-compose.yml file looks like:
version: '3.8'
services:
  web:
    build: .
    image: my/web
  nginx:
    build:
      context: .
      dockerfile: Dockerfile.nginx
    ports: ['8000:80']
Note that we've explicitly given the web image a name; also notice that there are no volumes: in this setup.
In the proxy image, we can then copy files out of that image:
# Dockerfile.nginx
FROM nginx
COPY --from=my/web /app/static /usr/share/nginx/html
The only complication here is that Compose doesn't know that one image is built off of the other. You'll probably have to manually tell it to rebuild the application image so that it gets built before the proxy image.
docker-compose build web
docker-compose build
docker-compose up -d
You can use this in a more production-oriented setup to deploy the application without having the code directly available. Create a base docker-compose.yml that names an image: for both containers, and add a separate docker-compose.override.yml file that has the build: blocks. After running docker-compose build twice as above, you can docker-compose push the built images, and then run this container stack on your production system, pulling the images from the registry, without a local copy of the source tree and without volumes.
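A minimal sketch of that split, keeping the image names from above (the my/nginx name is an assumption; in practice you would usually prefix both names with your registry, e.g. a hypothetical registry.example.com/my/web, so that docker-compose push has somewhere to push):

# docker-compose.yml (base file, used on every system)
version: '3.8'
services:
  web:
    image: my/web
  nginx:
    image: my/nginx
    ports: ['8000:80']

# docker-compose.override.yml (developer systems only; Compose reads it automatically)
version: '3.8'
services:
  web:
    build: .
  nginx:
    build:
      context: .
      dockerfile: Dockerfile.nginx

On the production system you copy only the base file and run docker-compose pull followed by docker-compose up -d.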
I am new to Docker and docker-compose. I am working on a project which uses both. After adding a line under the volumes key of docker-compose.yml, I did a docker restart:
docker restart service1
and the volume was automatically recognized. Below is the relevant part of the docker-compose.yml:
services:
  service1:
    build:
      context: .
      dockerfile: Dockerfile.service1
    container_name: service1_name
    hostname: service1_name
    volumes:
      - /etc/teste/service1/conf.d/:/usr/share/logstash/pipeline/ # I added this line
My question: why does docker restart recognize modifications in the docker-compose.yml file, and where can I find this "setting"?
Working with docker-compose, the workflow should be the following:
Create your docker-compose.yml
docker-compose up # this will build everything and run your services
If you make any changes to your compose file, running docker-compose build 'yourservicename' will rebuild that specific service.
The command "docker restart" as you can see in the help page is for restarting containers
Until now, I have used a local LAMP stack to develop my web projects and deploy them manually to the server. For the next project I want to use Docker and docker-compose to create a MariaDB, NGINX and a project container for easy development and deployment.
When developing I want my code directory on the host machine to be synchronised with the docker container. I know that could be achieved by running
docker run -dt --name containerName -v /path/on/host:/path/in/container <image>
in the cli as stated here, but I want to do that within a docker-compose v2 file.
So far I have a docker-compose.yml file looking like this:
version: '2'
services:
  db:
    #[...]
  myProj:
    build: ./myProj
    image: myProj
    depends_on:
      - db
    volumes:
      - myCodeVolume:/var/www
volumes:
  myCodeVolume:
How can I synchronise the /var/www directory in the container with my host machine (Ubuntu desktop, macOS or Windows)?
Thank you for your help.
It is pretty much the same way, you do the host:container mapping directly under the services.myProj.volumes key in your compose file:
version: '2'
services:
  ...
  myProj:
    ...
    volumes:
      - /path/to/file/on/host:/var/www
Note that the top-level volumes key is removed.
This file could be translated into:
docker create --link db -v /path/to/file/on/host:/var/www myProj
When docker-compose finds the top-level volumes section, it tries to docker volume create the keys under it first, before creating any containers. Those volumes can then be used to hold data that you want to persist across containers.
So, if I take your file for an example, it would translate into something like this:
docker volume create myCodeVolume
docker create --link db -v myCodeVolume:/var/www myProj
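Putting that together for your file, a sketch of the bind-mount variant (./myProj/www is an assumed host path; point it at wherever your code actually lives):

version: '2'
services:
  db:
    #[...]
  myProj:
    build: ./myProj
    image: myProj
    depends_on:
      - db
    volumes:
      - ./myProj/www:/var/www

Relative host paths are resolved against the directory containing the compose file, so the same file works on Ubuntu, macOS and Windows hosts.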
I have a project with the following file structure:
- Dockerfile
- app/
  - file.txt
  - uploads/
The file.txt file contains Hello 1.
The Dockerfile generates the app image and is quite simple:
FROM busybox
COPY ./app /var/www/app
VOLUME /var/www/app/uploads
The generated image is pushed to Docker Hub, in the michaelperrin/app-test repository.
On my server where the app is deployed, I have the following docker-compose.yml file:
version: '2'
services:
  app:
    image: michaelperrin/app-test:0.1.0
    working_dir: /var/www/app
    volumes:
      - /var/www/app
  nginx:
    image: nginx:1.11
    volumes_from:
      - app
    working_dir: /var/www/app
It defines two containers:
The app image.
An Nginx server that has access to the app files.
The app is run with the docker-compose up -d command.
Running docker-compose exec nginx cat file.txt will therefore display:
Hello 1
Now, suppose I do the following steps:
Update the content of file.txt with Hello 2 on my local machine.
Build a new image of my app (that copies the new version of file.txt).
Tag it and push it to Docker Hub as version 0.2.0.
Change my docker-compose.yml file on the server so that it now uses michaelperrin/app-test:0.2.0 for my app.
Run docker-compose up -d (and docker-compose restart to be sure).
Then the terminal outputs:
Status: Downloaded newer image for michaelperrin/app-test:0.2.0
Recreating apptest_app_1
Recreating apptest_nginx_1
And here is my problem:
If I run docker-compose exec nginx cat file.txt, it will still display Hello 1, not Hello 2.
The only solution I found was to do the following:
docker-compose stop app
docker-compose rm app
docker-compose up -d
Is there any better solution?
The problem with the rm solution is that it will remove all other files that could have been created inside the app container by my app, in the /var/www/app/uploads directory (despite the fact it is declared as a volume in the Dockerfile).
I think (and really hope) that this is not possible. You create an instance (a container) from your image with the state the image had at the moment it was built. It would have unintended side effects if building a new image could change existing containers.
Therefore you should remove the old containers and create fresh ones from the new image.
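In practice that is the sequence you already found; a brief note on what it does to the volume:

docker-compose stop app
docker-compose rm app
docker-compose up -d

Removing the app container detaches its anonymous /var/www/app volume (it is left behind as a dangling volume rather than deleted, since rm is run without -v), and the recreated app and nginx containers then see a fresh copy of the files from the 0.2.0 image.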