Does anyone know how to mount an NFS share into a Docker container using Docker Compose? I keep getting the same error each time I run 'docker compose up':
Error response from daemon: error while mounting volume '/var/lib/docker/volumes/test_data/_data': failed to mount local volume: mount /etc/hapee-2.2/certs:/var/lib/docker/volumes/test_data/_data, data: addr=10.15.50.27,nolock,soft: invalid argument
The exports file on the NFS server appears to be set up correctly. I've also tried removing '/home/' from the file path. The client IP address is redacted:
/etc/hapee-2.2/certs <ipaddressofpc>(rw,sync,no_subtree_check,no_root_squash)
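A quick way to sanity-check the export from the Docker host itself, assuming the standard NFS client tools (nfs-common/nfs-utils) are installed, is something like:
# list what the server actually exports to this host
showmount -e 10.15.50.27
# try the same mount by hand, with the same options Compose would pass
sudo mkdir -p /mnt/nfs-test
sudo mount -t nfs -o nolock,soft,ro 10.15.50.27:/etc/hapee-2.2/certs /mnt/nfs-test
ls /mnt/nfs-test
sudo umount /mnt/nfs-test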
I keep suspecting the docker-compose.yml, but I'm not sure what the issue with it would be:
version: '3.7'
services:
  hapee:
    image: haproxy:2.2
    container_name: test
    restart: always
    ports:
      - "80:80"
      - "443:443"
      - "8888:8888"
    volumes:
      - data:/etc/hapee-2.2/certs
    logging:
      options:
        max-size: 100m
        max-file: "3"
volumes:
  data:
    driver_opts:
      type: "nfs"
      o: "addr=10.15.50.27,nolock,soft,ro"
      device: "/etc/hapee-2.2/certs"
Alternatively, does anyone know another method of mounting SSL certificates into an HAProxy container with Docker Compose?
Thanks!
Related
I have the NFS server set up and the firewall opened for ports 111 and 2049.
I also have an NFS client, likewise configured for ports 111 and 2049.
The connection between the servers works fine on those ports, and when I mount manually from the NFS client it mounts successfully.
However, I want to create an NFS volume in my docker-compose file that mounts directly from the NFS server, but I'm receiving a connection timed out message:
ERROR: for web Cannot create container for service web: failed to mount local volume:
mount :/root/app/django-static:/var/lib/docker/volumes/django-static/_data, data:
addr=x.x.x.x: connection timed out
Here is my docker-compose file:
version: "3.2"
services:
proxy:
image: nginx-1-14:0.1
depends_on:
- web
restart: always
ports:
- "80:80"
volumes:
- nginx-config:/etc/nginx
- nginx-logs:/var/log/nginx
- django-static:/code/static
- django-media:/code/media
web:
image: django-app-v1
restart: always
ports:
- "8000:8000"
volumes:
- django-static:/code/static
- django-media:/code/media
environment:
- "DEBUG_MODE=False"
- "DJANGO_SECRET_KEY="
- "DB_HOST=x.x.x.x”
- "DB_PORT=5432"
- "DB_NAME=db"
- "DB_USERNAME=user"
- "DB_PASSWORD=password"
volumes:
nginx-config:
nginx-logs:
django-static:
driver_opts:
type: "nfs"
o: "addr=<NFS_IP>,rw"
device: ":/root/app/django-static"
django-media:
driver_opts:
type: "nfs"
o: "addr=<NFS_IP>,rw"
device: ":/root/app/django-media"
Here is my /etc/exports on the NFS server:
/root/app/django-media <NFS_client_IP>(rw,sync,no_root_squash)
/root/app/django-static <NFS_client_IP>(rw,sync,no_root_squash)
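After any change to /etc/exports, the export table needs to be re-read, and exportfs can confirm what is actually being served:
sudo exportfs -ra   # re-export everything after editing /etc/exports
sudo exportfs -v    # list current exports and their options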
I followed an article to set up NFS.
So NFS is configured correctly between the server and the client, but the issue is in Docker, as it can't access the NFS server.
Does it need a specific port, or other permissions in the /etc/exports file?
Thank you!
For anyone having the same issue:
it worked after I used NFSv4 in the volume definitions in docker-compose:
volumes:
  django-media:
    driver_opts:
      type: "nfs"
      o: "nfsvers=4,addr=<NFS_IP>"
      device: ":/root/app/django-media"
  django-static:
    driver_opts:
      type: "nfs"
      o: "nfsvers=4,addr=<NFS_IP>"
      device: ":/root/app/django-static"
I get permission denied errors when attempting to mount an NFS drive into a Docker container using a docker-compose file.
This error only occurs when running Docker for Windows; I am able to successfully mount the drive on an Ubuntu host.
My docker-compose file:
version: '2'
services:
  builder:
    image: some_image
    ports:
      - "8888:8080"
    volumes:
      - "nfsmountCC:</container/path>"
volumes:
  nfsmountCC:
    driver: local
    driver_opts:
      type: nfs
      o: addr=<nfs_IP_Address>
      device: ":</nfs/server/dir/path>"
Docker for Windows produces:
ERROR: for test_1 Cannot start service builder: b"error while mounting volume '/var/lib/docker/volumes/test-master_nfsmountCC/_data': error while mounting volume with options: type='nfs' device=':</nfs/server/dir/path>' o='addr=<nfs_IP_Address>': permission denied"
The following worked for me with Docker Toolbox on Windows 7, mounting an NFS volume from an Ubuntu server:
NFS server side:
allow the nfs and mountd services through your firewall (if you have one) on the NFS server
add the insecure option to each relevant entry of your '/etc/exports' file (see the example entry after the compose file below)
Docker client side: add the hard and nolock options to the NFS volume definition:
version: '3.7'
services:
  builder:
    image: some_image
    ports:
      - "8888:8080"
    volumes:
      - "nfsmountCC:</container/path>"
volumes:
  nfsmountCC:
    driver: local
    driver_opts:
      type: nfs
      o: "addr=<nfs_IP_Address>,rw,hard,nolock"
      device: ":</nfs/server/dir/path>"
I have successfully created a Windows Docker image and I can run the container with docker-compose. For full functionality, the application expects a folder location to exist; this is currently an NFS-attached drive containing a folder that the application reads from and writes to, and that is also used by other applications.
I have tried to find valid examples and documentation on Docker for Windows, Windows containers, and Docker Compose for volumes that are NFS.
This is what I am currently trying:
version: "3.4"
services:
service:
build:
context: .
dockerfile: ./Dockerfile
image: imageName
ports:
- 8085:80
environment:
-
volumes:
- type: volume
source: nfs_example
target: C:\example
volume:
nocopy: true
volumes:
nfs_example:
driver: local
driver_opts:
type: "nfs"
o: "addr=\\fileserver.domain,nolock,soft,rw"
device: ":\\Shared\\Folder"
The error message I get is:
ERROR: create service_nfs_example: options are not supported on this platform
NFS doesn't work in this case. I solved it by using SMB on the host and then mounting that volume:
New-SmbGlobalMapping -RemotePath \\contosofileserver\share1 -Credential (Get-Credential) -LocalPath G:
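The mapping is host-wide (that is what the 'global' part means), so before starting the stack it can be confirmed with:
Get-SmbGlobalMapping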
version: "3.4"
services:
service:
build:
context: .
dockerfile: ./Dockerfile
image: imageName
ports:
- 8085:80
environment:
-
volumes:
- type: bind
source: G:\share1
target: C:\inside\container
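Before wiring it into Compose, the mapped drive can be smoke-tested with a plain bind mount from docker run; the image name below is just a placeholder for whatever Windows base image you are using:
docker run --rm -v G:\share1:C:\inside\container <your_windows_image> cmd /c dir C:\inside\container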
Microsoft's documentation on Windows containers helped me achieve this.
I am trying to use the following docker-stack.yml file to deploy my services to my Docker Swarm version 17.06-ce. I want to use volumes to map the C:/logs directory on my Windows host machine to the /var/log directory inside my container.
version: '3.3'
services:
  myapi:
    image: mydomain/myimage
    ports:
      - "5000:80"
    volumes:
      - "c:/logs:/var/log/bridge"
When I remove the volumes section, my containers start fine. After adding the volume, the container never even attempts to start, i.e.:
docker container ps --all does not show my container.
docker events does not show the container trying to start.
The following command works for me, so I know that my syntax is correct:
docker run -it -v "c:/logs:/var/log/bridge" alpine
I've read the volumes documentation a few times now. Is the syntax for my volume correct? Is this a supported scenario? Is this a Docker bug?
docker run works, and with Compose file format version 2 you can do this kind of custom volume mounting with docker-compose.
With version 3 (stack files), you have to use named volumes, either at the default volume path or pointed at a custom path.
Here is the docker-compose file with a default volume:
version: "3.3"
services:
mysql:
image: mysql
volumes:
- db-data:/var/lib/mysql/data
networks:
- overlay
deploy:
mode: replicated
replicas: 2
endpoint_mode: dnsrr
volumes:
db-data:
The volume is mounted at the default path, /var/lib/docker/volumes/<volume_name>/_data.
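You can confirm exactly where the data lands with docker volume inspect; in a stack deployment the volume name is prefixed with the stack name ('mystack' below is a placeholder):
docker volume inspect --format '{{ .Mountpoint }}' mystack_db-data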
We also have the option of pointing the named volume at a custom host path:
version: "3.3"
services:
mysql:
image: mysql
volumes:
- db-data:/var/lib/mysql/data
networks:
- overlay
deploy:
mode: replicated
replicas: 2
endpoint_mode: dnsrr
volumes:
db-data:
driver: local
driver_opts:
o: bind
type: none
device: /home/ubuntu/db-data/
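One caveat with this bind-style definition: the local driver does not create the host directory for you, so the path has to exist on every node the replicas can be scheduled on before the stack is deployed, e.g.:
mkdir -p /home/ubuntu/db-data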
See also: Volumes for Services, Swarms, and Stack Files.
I want to set up ownCloud with Docker and Docker Compose. To achieve this I have a docker-compose.yml with 3 containers and their volumes.
version: '2'
services:
  nginx:
    build: ./nginx
    networks:
      - frontend
      - backend
    volumes:
      - owncloud:/var/www/html
  owncloud:
    build: ./owncloud
    networks:
      - backend
    volumes:
      - owncloud:/var/www/html
      - data:/data
  mysql:
    build: ./mariadb
    volumes:
      - mysql:/var/lib/mysql
    networks:
      - backend
volumes:
  owncloud:
    driver: local
  data:
    driver: local
  mysql:
    driver: local
networks:
  frontend:
    driver: bridge
  backend:
    driver: bridge
I also tried it without the data volume; ownCloud could not write to /data, or, without that volume, to /var/www/html/data. The log only shows timestamps from whenever I accessed ownCloud. Changing data:/data to a host path, /var/ownclouddata:/data, makes no difference.
The Dockerfiles each contain only one line: FROM <image>.
I've tried adding RUN mkdir /data, but it didn't fix anything.
You need to declare the volume in the Dockerfile, something like this:
VOLUME /data
Later, in your docker-compose file, you can either use a named volume as you did earlier or simply use a host path like this:
/mnt/test:/data
Here /mnt/test is the path on your host and /data is the path inside the container.
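As a quick sanity check of the bind mount itself, outside Compose, you can try writing to the host path from a throwaway container, for example:
docker run --rm -v /mnt/test:/data alpine sh -c 'touch /data/write-test && ls -l /data'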
Hope it helps!