I have api and static services in a docker-compose file:
services:
  static:
    build: ./static
    volumes:
      - uploads:/app/uploads
  api:
    build: ./api

volumes:
  uploads:
The project structure looks like this:
api/
  Dockerfile
  uploads.js -> API handler for upload requests
static/
  Dockerfile
  uploads/ -> directory for saving static content
How can I access the uploads directory of the static service from the api service? With the compose file above, the uploads directory doesn't exist when I try to access it from the api service.
If you map the volume to the api service as well, it'll be available in both.
Like this:
services:
  static:
    build: ./static
    volumes:
      - uploads:/app/uploads
  api:
    build: ./api
    volumes:
      - uploads:/app/uploads

volumes:
  uploads:
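Once both services mount the named volume, a quick way to confirm it is shared (assuming the services build and start cleanly; adjust paths if yours differ) is to write a file from one container and list it from the other:
docker-compose up -d
# create a test file from the static service
docker-compose exec static touch /app/uploads/test.txt
# the same file should be visible from the api service
docker-compose exec api ls /app/uploads
Any file the api service writes to /app/uploads will likewise appear in the static container.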
I am new to Docker and have a problem I hope you can help with.
I have defined multiple services (HTTPSERV, IMED, etc.) in my docker-compose file; each service contains Python code and has a Dockerfile to run it. The Dockerfile also copies the required files into a host path defined in docker-compose. HTTPSERV and IMED must share a text file and expose it to an external user sending a GET request to HTTPSERV.
In docker-compose I have defined a local host directory and bound it to a named volume. The services and Dockerfiles are meant to share each service's files and run.
As soon as I run docker-compose, the first service copies its files into the "src" host directory and changes the permissions of the "src" folder, which prevents the other services from copying their own files. The next services then fail to find the appropriate files and the whole orchestration fails.
version: "3.9"
networks:
default:
ipam:
config:
- subnet: 172.28.0.2/20
services:
httpserv:
user: root
container_name: httpserver
build: ./HTTPSERV
volumes:
- myapp:/httpApp:rw
networks:
default:
ipv4_address: 172.28.0.5
ports:
- "8080:3000"
rabitQ:
user: root
container_name: rabitQ
image: rabbitmq:3.8-management
networks:
default:
ipv4_address: 172.28.0.4
ports:
- "9000:15672"
imed:
user: root
container_name: IMED-Serv
build: ./IMED
volumes:
- myapp:/imed:rw
networks:
- default
# restart: on-failure
orig:
user: root
container_name: ORIG-Serv
build: ./ORIG
volumes:
- myapp:/orig:rw
networks:
- default
# restart: on-failure
obse:
container_name: OBSE-Serv
build: ./OBSE
volumes:
- myapp:/obse:rw
networks:
- default
# restart: on-failure
depends_on:
- "httpserv"
links:
- httpserv
volumes:
myapp:
driver: local
driver_opts:
type: none
o: bind
device: /home/dockerfiles/hj/a3/src
The Dockerfile is similar for most of the services and looks like this:
FROM python:3.8-slim-buster
WORKDIR /imed
COPY . .
RUN pip install --no-cache-dir -r imed-requirements.txt
RUN chmod 777 ./imed.sh
CMD ["./imed.sh"]
The code runs as root, and the UserID and GroupID are set.
I also tried anonymous volumes, but the same issue happens.
In Docker it's often better to avoid "sharing files" as a first-class concept. Imagine running this in a clustered system like Kubernetes; if you have three copies of each service, and they're running in a cloud of a hundred systems, "sharing files" suddenly becomes difficult.
Given the infrastructure you've shown here, you have a couple of straightforward options:
Add an HTTP endpoint to update the file. You'd need to protect this endpoint in some way, since you don't want external callers accessing it; maybe filter it in an Nginx reverse proxy, or use a second HTTP server in the same process. The service that has the updated content would then call something like
r = requests.post('http://webserv/internal/file.txt', data=contents)
r.raise_for_status()
Add an HTTP endpoint to the service that owns the file. When the Web server service starts up, and periodically after that, it makes a request
r = requests.get('http://imed/file.txt')
You already have RabbitMQ in this stack; add a RabbitMQ consumer in a separate thread, and post the updated file content to a RabbitMQ topic.
There's some potential trouble with the "push" models if the Web server service restarts, since it won't be functional until the other service sends it the current version of the file.
If you really do want to do this with a shared filesystem, I'd recommend creating a dedicated directory and dedicated volume to do this. Do not mount a volume over your entire application code. (It will prevent docker build from updating the application, and in the setup you've shown, you're mounting the same application-code volume over every service, so you're running four copies of the same service instead of four different services.)
Adding in the shared volume, but removing a number of other unnecessary options, I'd rewrite the docker-compose.yml file as:
version: '3.9'
services:
  httpserv:
    build: ./HTTPSERV
    ports:
      - "8080:3000"
    volumes:
      - shared:/shared
  rabitQ:
    image: rabbitmq:3.8-management
    hostname: rabitQ # (RabbitMQ specifically needs this setting)
    ports:
      - "9000:15672"
  imed:
    build: ./IMED
    volumes:
      - shared:/shared
  orig:
    build: ./ORIG
  obse:
    build: ./OBSE
    depends_on:
      - httpserv
volumes:
  shared:
I'd probably have the producing service unconditionally copy the file into the volume on startup, and have the consuming service block on it being present. (Don't depend on Docker to initialize the volume; it doesn't work on Docker bind mounts, or on Kubernetes, or if the underlying image has changed.) So long as the files are world-readable the two services' user IDs don't need to match.
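As a rough sketch of that startup pattern (the /shared path matches the volume above; the file name and entrypoint scripts are placeholders, not from the original project):
producer-entrypoint.sh (in the service that owns the file):
#!/bin/sh
# unconditionally publish the current copy into the shared volume on every start
cp /app/file.txt /shared/file.txt
exec "$@"

consumer-entrypoint.sh (in the web server service):
#!/bin/sh
# block until the producer has published the file
until [ -f /shared/file.txt ]; do
  echo "waiting for /shared/file.txt" >&2
  sleep 2
done
exec "$@"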
I have a backup service in my docker-compose.yml file:
backup:
  build:
    context: .
    dockerfile: Dockerfile.backup
  volumes:
    - ./:/web
  depends_on:
    - web
It generates backups of my web app in /home/backups/ with a cron job. I would like to have access to a Dropbox folder inside that container. Then I'll change the backup destination folder to that folder.
I found this:
dropbox:
  image: janeczku/dropbox
  environment:
    DBOX_UID: 33
    DBOX_GID: 33
  volumes:
    - /local/directory/:/dbox/Dropbox
  restart: always
  container_name: dropbox
  network_mode: 'bridge'
But I don't know how to make these services work together the way I want.
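One way to wire them together is to mount the same storage into both containers, so the backup job writes directly into the directory the Dropbox client syncs. A sketch using a named volume (the dropbox-data name and mount path in the backup container are assumptions; you could equally bind-mount the same host directory into both services, and you'd point the cron job's backup destination at /dbox/Dropbox instead of /home/backups/):
services:
  backup:
    build:
      context: .
      dockerfile: Dockerfile.backup
    volumes:
      - ./:/web
      - dropbox-data:/dbox/Dropbox   # cron job writes backups here
    depends_on:
      - web
  dropbox:
    image: janeczku/dropbox
    environment:
      DBOX_UID: 33
      DBOX_GID: 33
    volumes:
      - dropbox-data:/dbox/Dropbox   # Dropbox client syncs this directory
    restart: always

volumes:
  dropbox-data:
Make sure the files the backup job writes are readable by the UID/GID the Dropbox client runs as (33 here), or the sync will fail.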
I am facing a problem with volumes_from in my docker-compose file.
I have three services: the first one contains my app files, the second is php-fpm, which takes its volumes from the data service, and the third is nginx.
My file looks like this:
version: '2'
services:
  cms_data:
    image: "image from private repository containing application files"
    container_name: "cms-data"
  php-fpm:
    image: "image from private repository containing PHP configuration"
    container_name: "php-fpm"
    env_file:
      - ../.env.production
    volumes_from:
      - cms_data
    working_dir: /iprice/octobercms
    expose:
      - 9000
    depends_on:
      - cms_data
    restart: "always"
  nginx:
    image: "image from private repository containing nginx configuration"
    container_name: "nginx"
    ports:
      - "80:80"
      - "443:443"
    links:
      - php-fpm
    volumes_from:
      - cms_data
    depends_on:
      - cms_data
    restart: "always"
The cms-data image has the files, which is correct, but the php-fpm container doesn't. Please help.
volumes_from mounts the volumes present on other containers. It does not create new volumes.
The cms-data container does not have any volumes associated with it, so volumes_from can't do anything. If you want to share a particular folder inside cms-data, first create a volume for that folder.
NOTE: binding a host folder over a container path hides the container's own contents behind whatever is in /path/on/host, so first copy the contents of the container folder to that host folder.
Run the current docker-compose as is so the containers start.
Copy the contents from the cms-data container to the host folder:
docker cp cms-data:/path/to/shared/folder /path/on/host
Make the following changes to the docker-compose file and restart.
services:
  cms_data:
    image: "image from private repository containing application files"
    container_name: "cms-data"
    volumes:
      - /path/on/host:/path/to/shared/folder
  ...
My docker-compose file defines two containers. I want one container to share a volume with the other container.
version: '3'
services:
  web-server:
    env_file: .env
    container_name: web-server
    image: web-server
    build:
      dockerfile: docker/Dockerfile
    ports:
      - 3000:3000
      - 3500:3500
    volumes:
      - static-content: /workspace/static
    command: sh /workspace/start.sh
  backend-server:
    volumes:
      - static-content: /workspace/static
volumes:
  static-content:
The compose file above declares two services, web-server and backend-server, and I declare the named volume static-content and mount it in both services. I get the error below when I run docker-compose -f docker-compose.yml up:
services.web-server.volumes contains an invalid type, it should be a string
services.backend-server.volumes contains an invalid type, it should be a string
So how can I share volumes through docker-compose?
You have an extra space in your volume string, which causes YAML to parse the entry as an array of name/value maps instead of an array of strings. Remove that space in your volume entries (see below) to prevent this error:
version: '3'
services:
  web-server:
    env_file: .env
    container_name: web-server
    image: web-server
    build:
      dockerfile: docker/Dockerfile
    ports:
      - 3000:3000
      - 3500:3500
    volumes:
      - static-content:/workspace/static
    command: sh /workspace/start.sh
  backend-server:
    volumes:
      - static-content:/workspace/static
volumes:
  static-content:
For more details, see the compose file section on volumes short syntax.
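To see what the space changes, here's how YAML reads each form (illustrative comments only):
# With a space: YAML parses the entry as a map, which Compose rejects
volumes:
  - static-content: /workspace/static   # -> {"static-content": "/workspace/static"}

# Without a space: YAML parses the entry as a plain string, which is what Compose expects
volumes:
  - static-content:/workspace/static    # -> "static-content:/workspace/static"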
You need to use the Docker volume syntax, without spaces:
<local_path>:<service_path>:<optional_rw_attributes>
For example:
./:/your_path/
will map the present working directory to /your_path
And this example:
./:/your_path/:ro
will map the present working directory to /your_path with read only permissions
Read these docs for more info:
https://docs.docker.com/compose/compose-file/#volume-configuration-reference
I'm trying to create an Nginx/PHP FPM setup with docker compose and am having issues with the version 3 volumes syntax/changes.
My Dockerfile:
FROM php:7-fpm
VOLUME /var/www/html
My docker-compose.yml:
version: "3"
services:
php:
build: .
volumes:
- ./html:/var/www/html
web:
image: nginx
links:
- php
ports:
- "8888:80"
volumes:
- php:/var/www/html
- ./default.conf:/etc/nginx/conf.d/default.conf
volumes:
php:
When I add an index.php file into ./html, I can view it at http://localhost:8888, but any static files (like CSS) return a 404 because Nginx cannot find them in its container (/var/www/html is empty in the nginx container). Version 3 compose files no longer support volumes_from, which is basically what I'm trying to replicate.
How can I get this to work with version 3?
To use named volumes for sharing files between containers, you need to define:
1) A volumes: section at the top level of the yml file, naming the volume:
volumes:
  php:
2) A volumes section on the first container, as you did (where the share will be mounted):
web:
  volumes:
    - php:/var/www/html # <volume_name>:<mount_point>
3) A volumes section on the second container (which the share is populated from):
php:
  volumes:
    - php:/var/www/html
4) (Optional) If you need to store the volume data on the host machine, you can use the local-persist Docker volume plugin. It lets you specify the volume driver and the path where your data will be stored:
volumes:
  php:
    driver: local-persist
    driver_opts:
      mountpoint: /path/on/host/machine/
In your case, you forgot to mount the named volume in the php container. Just replace
php:
  build: .
  volumes:
    - ./html:/var/www/html
to
php:
  build: .
  volumes:
    - php:/var/www/html
and use the local-persist Docker plugin if you need the shared files to live on the host.
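Putting those pieces together, the full compose file might look like this (a sketch assembled from the fragments above; the mountpoint path is a placeholder, and the local-persist driver is only needed if you want the shared data stored on the host):
version: "3"
services:
  php:
    build: .
    volumes:
      - php:/var/www/html
  web:
    image: nginx
    links:
      - php
    ports:
      - "8888:80"
    volumes:
      - php:/var/www/html
      - ./default.conf:/etc/nginx/conf.d/default.conf
volumes:
  php:
    driver: local-persist
    driver_opts:
      mountpoint: /path/on/host/machine/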