I want to create a Redis container and need to mount the RDB and log files inside the container to the host machine. I have a docker-compose file as below:
version: '2'
services:
  redis:
    image: xxxx/redis
    container_name: redis
    restart: unless-stopped
    volumes:
      - /redis/data:/var/lib/redis
      - /redis/log:/var/log/redis
    ports:
      - "6379:6379"
If I mount the volumes as above, Redis does not run inside the container, and sometimes I get an "unable to save RDB file" error due to permission denied. If I create the container without the volume mounts, everything works fine.
After doing some searching I found that the ownership inside the container for the /var/lib/redis and /var/log/redis folders is redis:redis, but the ownership of the mounted volumes on the host machine is root:root. I can't change the ownership to redis on the host machine because the redis user is not available there. If I change the ownership to root:root inside the container, the RDB files are not saved.
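For illustration, the mismatch described above can be checked like this (container name and paths are from the compose file; the numeric redis UID of 999 is only an assumption about the xxxx/redis image):

docker exec redis ls -ld /var/lib/redis /var/log/redis   # redis:redis inside the container
ls -ld /redis/data /redis/log                            # root:root on the host
docker exec redis id redis                               # e.g. uid=999(redis) gid=999(redis)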
Can anyone please suggest how to fix this issue?
EDITED:
Rclone has a bucket mounted to the host directory /home/user/rclone. I want to access the contents of this directory inside the nextcloud docker instance, so I bind mount it to /var/www/html/data. With the shared propagation option, any changes made in the container should be reflected on the host, and vice versa.
I have set the permissions of /home/user/rclone to 777, and the content is visible with an ls command from the host. Once the docker container is restarted, an ls command from within the container does not show any files. Rclone is still running properly.
I suspect that because the named volume nextcloud is mounted at /var/www/html, the bind mount at /var/www/html/data is covered up (shadowed).
So I picked another directory inside the container, namely /mnt, and tried that instead. Still, no files show up with an ls command.
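One way to see which mounts the container actually has under /var/www/html is something like this (an illustrative check; the container name comes from the compose file below):

docker exec nextcloud mount | grep /var/www/html   # lists the named volume and any bind mounts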
My nextcloud docker-compose file (mysql does not have anything to do with this; showing the /var/www/html/data mount version only):
version: '2'
volumes:
  nextcloud:
  db:
services:
  db:
    image: mariadb
    restart: always
    command: --transaction-isolation=READ-COMMITTED --binlog-format=ROW
    volumes:
      - db:/var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=xxx
      - MYSQL_PASSWORD=xxx
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=nextcloud
    network_mode: npm_default
    container_name: db
  app:
    image: nextcloud:latest
    restart: always
    links:
      - db
    volumes:
      - nextcloud:/var/www/html
      - /home/user/rclone:/var/www/html/data:shared
    environment:
      - MYSQL_PASSWORD=xxx
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=nextcloud
      - MYSQL_HOST=db
      - NEXTCLOUD_TRUSTED_DOMAINS=xxx
    network_mode: npm_default
    container_name: nextcloud
Another way of putting it:
rclone cloud storage --> host --> docker --> nextcloud external storage
So the reason the nextcloud service cannot view the content is a permission problem.
If you do docker exec -it nextcloud bash to check the contents, they are there, because you are root.
So the proper solution, if you want to use a shared bind mount with the host directory, is to set the permissions to 666 so that other (non-root) users in the container can view the files.
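For example, on the host (a sketch based on the path from the question; o+rX is a variant of the 666 idea that also keeps directories traversable):

# give "other" users read access to files and read/traverse access to directories
chmod -R o+rX /home/user/rclone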
But in the end I figured out that volume plugins are a much better solution, so this approach is somewhat deprecated.
I stumbled across a problem with docker volumes while starting docker containers from a docker-compose file (MariaDB, RabbitMQ, Maven). I start them simply with docker-compose up -d (WITHOUT sudo).
My volumes are defined like this:
...
volumes:
  - ./production/mysql:/var/lib/mysql:z
...
Everything works fine and the ./production directory (where the volumes are mapped) is created.
But when I then try to restart the docker containers with down/up, I get the following error:
error checking context: 'no permission to read from '…/production/mysql/aria_log.00000001'
When I checked the mentioned file, I saw that it is owned by root:root and can only be read by root. This is because the file is generated by the root user inside the container. So I tried to use user namespaces, as mentioned in the docs.
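For reference, the user-namespace remapping mentioned above is usually enabled roughly like this (a sketch; it maps container root to a subordinate UID range on the host, requires a daemon restart, and would overwrite an existing daemon.json):

echo '{ "userns-remap": "default" }' | sudo tee /etc/docker/daemon.json
sudo systemctl restart docker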
Anyway the error still occurs. Any ideas or references?
Thanks.
Docker Compose File:
version: '3.8'
services:
  mysql:
    image: mariadb:latest
    restart: always
    env_file:
      - config.env
    volumes:
      - ./production/mysql:/var/lib/mysql:z
    environment:
      MYSQL_DATABASE: ${DATABASE_NAME}
      MYSQL_USER: ${DATABASE_USER}
      MYSQL_PASSWORD: ${DATABASE_PASSWORD}
      MYSQL_ROOT_PASSWORD: ${DATABASE_PASSWORD}
    networks:
      - testnetwork
networks:
  testnetwork:
The issue comes from the mapping between the host user/group IDs and the ones inside the container. One of the solutions is to use a named volume and avoid all this hassle, but you can also do the following:
Add user: ${UID}:${GID} to your service inside the docker-compose file.
Run UID=$(id -u) GID=$(id -g) docker-compose up. This way you make sure that the user in the container has the same UID/GID as the user on the host, and files created in the container will have the proper permissions.
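Put together, a minimal sketch of both steps for the mysql service from the compose file above (the override file is just one convenient way to add the user: line):

# docker-compose.override.yml is picked up automatically by docker-compose
cat > docker-compose.override.yml <<'EOF'
version: '3.8'
services:
  mysql:
    user: "${UID}:${GID}"
EOF

# 'env' is used because UID is a read-only variable in bash
env UID="$(id -u)" GID="$(id -g)" docker-compose up -d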
NOTE: Docker for Mac (using the osxfs driver) does this behind the scenes and you don't need to worry about users and groups.
Running the Docker daemon as a non-root user (rootless mode) can also be helpful for your purpose.
The full documentation is here.
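A rough sketch of the rootless setup on a systemd host (assuming the docker-ce-rootless-extras package is installed; see the rootless-mode documentation for the prerequisites):

# set up a per-user Docker daemon that does not run as root
dockerd-rootless-setuptool.sh install
# point the docker client at the per-user daemon socket
export DOCKER_HOST=unix:///run/user/$(id -u)/docker.sock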
I have a shared network folder, e.g.
\\pa02ptsdfs002.corp.lgd.afk\files\Public\chris\temp
There is a file in the shared folder that I would like to be visible to my dockerized application. The ultimate goal is to have my application pick up and process this file, then put it into a database.
I have a Dockerfile and docker-compose.yml, and I am thinking I will need to add a volume with the shared folder location (I'm not sure if this is the correct approach; this is where I need help!).
So far I've tried adding a volume in my yml, which threw an error when I did docker-compose up -d:
airflow:
  build: ./airflow
  image: digitalImage/airflow
  container_name: di-airflow
  environment:
    AIRFLOW__CORE__EXECUTOR: 'LocalExecutor'
    POSTGRES_USER: 'airflowStuff'
    POSTGRES_PASSWORD: 'postgresCreds'
    POSTGRES_HOST: 'host-postgres'
    POSTGRES_PORT: '5432'
    POSTGRES_DB: 'postgres-db'
    DATE_VALUE: '1 DEC 2020 00:00:00'
  volumes:
    - ./airflow/released_dags:/usr/local/airflow/dags
    - \\pa02ptsdfs002.corp.lgd.afk\files\Public\chris\temp:/usr/local/airflow/dags/inboundFiles
  networks:
    - di-airflowStuff
  ports:
    - 8081:8080
  depends_on:
    - postgres
ERROR: Cannot create container for service airflow: \pa02ptsdfs002.corp.lgd.afk\files\Public\chris\temp%! (EXTRA string=is not a valid Windows path)
P.S. I can access this shared folder location from my file explorer and Python without a problem.
You don't need docker-compose to mount an external volume into your container; just configure it when running the container:
docker run --name name -v path_host:path_in_container image:tag
both directories must exist
Microsoft recommends mapping shares to network drives (if you're running Docker on Windows):
https://learn.microsoft.com/en-us/virtualization/windowscontainers/manage-containers/persistent-storage#smb-mounts
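Alternatively, a rough sketch of exposing the SMB share to Docker as a named volume through the local driver's cifs support (the share path comes from the question; the username, password, volume name and container paths are assumptions, and the Docker host must be able to mount CIFS):

docker volume create \
  --driver local \
  --opt type=cifs \
  --opt device=//pa02ptsdfs002.corp.lgd.afk/files/Public/chris/temp \
  --opt o=username=chris,password=secret,vers=3.0 \
  inbound-files

# the named volume can then be mounted like any other volume
docker run --name di-airflow -v inbound-files:/usr/local/airflow/dags/inboundFiles digitalImage/airflow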
I have a problem mounting a WD MyCloud EX2 NAS as an NFS share for a Nextcloud and MariaDB container combination, using Docker Compose. When I run docker-compose up -d, here's the error I get:
Creating nextcloud_app_1 ... error
ERROR: for nextcloud_app_1 Cannot create container for service app: b"error while mounting volume with options: type='nfs' device=':/mnt/HD/HD_a/nextcloud' o='addr=192.168.1.73,rw': permission denied"
ERROR: for app Cannot create container for service app: b"error while mounting volume with options: type='nfs' device=':/mnt/HD/HD_a/nextcloud' o='addr=192.168.1.73,rw': permission denied"
ERROR: Encountered errors while bringing up the project.
Here's docker-compose.yml (all sensitive info replaced with <brackets>):
version: '2'
volumes:
  nextcloud:
    driver: local
    driver_opts:
      type: nfs
      o: addr=192.168.1.73,rw
      device: ":/mnt/HD/HD_a/nextcloud"
  db:
services:
  db:
    image: mariadb
    command: --transaction-isolation=READ-COMMITTED --binlog-format=ROW
    restart: always
    volumes:
      - db:/var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=<****>
      - MYSQL_PASSWORD=<****>
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=nextcloud
      - NEXTCLOUD_ADMIN_USER=<****>
      - NEXTCLOUD_ADMIN_PASSWORD=<****>
  app:
    image: nextcloud
    ports:
      - 8080:80
    links:
      - db
    volumes:
      - nextcloud:/var/www/html
    restart: always
I SSHed into the NAS box to check /etc/exports and, sure enough, it was using all_squash, so I changed that.
Here's the /etc/exports file on the NAS box:
"/nfs/nextcloud" 192.168.1.73(rw,no_root_squash,sync,no_wdelay,insecure_locks,insecure,no_subtree_check,anonuid=501,anongid=1000)
"/nfs/Public" 192.168.1.73(rw,no_root_squash,sync,no_wdelay,insecure_locks,insecure,no_subtree_check,anonuid=501,anongid=1000)
Then, I refreshed the service with exportfs -a
Nothing changed - docker-compose throws the same error. And I'm deleting all containers and images and redownloading the image every time I attempt the build.
I've read similar questions and done everything I can think of. I also know this is a container issue, because I can access the NFS share quite happily from the command line thanks to my settings in /etc/fstab.
What else should I be doing here?
In our case, we mount the NFS volume locally on the docker host, then bind mount that folder into the containers.
We are running Oracle Linux 7 with SELinux enabled.
We fixed it by adding the following parameter to the fs_mntops field in /etc/fstab (see https://man7.org/linux/man-pages/man5/fstab.5.html):
defaults,context="system_u:object_r:svirt_sandbox_file_t:s0"
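For example, the host side could end up looking roughly like this (the NAS address and export path come from the question; the mount point /mnt/nextcloud is an assumption), after which that folder is bind mounted into the containers:

sudo mkdir -p /mnt/nextcloud
# append the NFS mount, including the SELinux context, to /etc/fstab and mount it
echo '192.168.1.73:/nfs/nextcloud /mnt/nextcloud nfs defaults,context="system_u:object_r:svirt_sandbox_file_t:s0" 0 0' | sudo tee -a /etc/fstab
sudo mount /mnt/nextcloud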
Try checking from the command line inside the nextcloud folder with "ls -l /var/www/html" to see which users and groups can access it.
I fixed it by removing the anonuid=501,anongid=1000 entries in the NAS box's /etc/exports file. I had also managed to enter the wrong IP - the NAS box wasn't granting access to the Ubuntu computer that was trying to connect to it.
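For reference, a sketch of what the corrected export might look like (the docker host's real IP is an assumption here, shown as 192.168.1.50), followed by re-exporting:

# edit the existing entry in the NAS box's /etc/exports so it reads roughly:
#   "/nfs/nextcloud" 192.168.1.50(rw,no_root_squash,sync,no_wdelay,insecure_locks,insecure,no_subtree_check)
exportfs -a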
We are using a docker volume to store some static files from a folder on the docker host. When I restart the container, files which were updated or added in the host directory show up as changes in the docker volume.
However, when I delete a file from the host machine, the file is not deleted in the volume. I have to use a docker volume, as this is a shareable resource.
Following is my docker-compose file:
version: '2'
volumes:
  test-volume: {}
services:
  test:
    image: test-volume:test1
    volumes:
      - test-volume:/var/myapp
  test-gateway:
    image: test-gateway:latest
    ports:
      - "8080:80"
    volumes_from:
      - test
    volumes:
      - ./test-gateway/conf.d:/etc/nginx/conf.d
    environment:
      - SITE_ROOT=root
      - SITE_ROOT_ROUTE_FROM=/
      - SITE_ROOT_ROUTE_DIRECTORY=/var/myapp/static
So, if I remove a file from myapp and restart the docker container, the file is not deleted from the docker volume.
Is there anything I am missing?
In your compose file you are creating a named volume, test-volume. This volume is referenced in the test and test-gateway services.
This volume has nothing to do with folders on your docker host. Therefore you cannot delete files on the host and expect that to be reflected in the named volume.
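For reference, you can see where Docker actually keeps the named volume's data (the compose project prefix and the path shown are typical for a default Linux install):

docker volume inspect <project>_test-volume --format '{{ .Mountpoint }}'
# e.g. /var/lib/docker/volumes/<project>_test-volume/_data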
In case you want to map a folder on your docker host, you need to use the same syntax you used for mapping the conf.d folder:
volumes:
  - ./some_folder_on_host:/some_folder_in_container