I'm setting up a dev environment using Docker and WSL 2.
I don't have much experience in the topic, just basic knowledge of Linux terminal commands and the concept of Docker.
Now I'm trying to dockerize a Laravel application in an nginx container. The nginx image ships with a user called 'www-data', but my host has the user 'ahmed'.
Now, when I mount the application directory into the container, I get permission issues because the user inside the container has no rights to access the files owned by the user 'ahmed'.
Changing the ownership to www-data didn't work for me: the VS Code server (running remotely from WSL) can't update the files, because the permissions were set to another user, 'www-data'!
So what should I do in this case?
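One common way out of this (a sketch, not the only fix): rebuild the image so that www-data's UID/GID match your host user's. The Dockerfile below assumes a Debian-based image where the www-data user already exists; the tag nginx:1.25 and the default UID/GID of 1000 are assumptions, so check yours with id -u and id -g:

    FROM nginx:1.25
    ARG HOST_UID=1000
    ARG HOST_GID=1000
    # Remap www-data to the host user's UID/GID so files bind-mounted from
    # 'ahmed' are writable inside the container, and files the container
    # creates stay editable by the VS Code server on the WSL side.
    RUN groupmod -g "${HOST_GID}" www-data \
        && usermod -u "${HOST_UID}" www-data

Build it with docker build --build-arg HOST_UID=$(id -u) --build-arg HOST_GID=$(id -g) . so the values always match your WSL user; the same idea applies to a php-fpm image, which is usually where Laravel actually writes files.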
I am using an Ubuntu host (22.04) which runs a Docker container in which I defined my build environment (compiler, toolchain, USB devices). I created a volume share so that I can access the Git repo on my host from inside my container.
The problem is that when I compile a project and then need to do something on my host with the build artifacts (e.g. upload a binary to a web portal), the files belong to the root user (which is the only user in my Docker environment). Thus, I need to chmod specific files before I can access them on my host, which is annoying.
I tried to run the Docker image with a user name, but then VS Code is no longer able to install stuff when it connects to the Docker container.
Is there a way to get an active user in my container and still allow VS Code Remote - Containers to install extensions on connecting to the container? Or is there a better way to avoid chmodding all build results?
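For what it's worth, one pattern that has worked for others (a sketch under assumptions, not a verified fix for your setup): create a non-root user in the image with a UID/GID matching your host user and a real home directory, since the VS Code server installs itself into ~/.vscode-server and tends to fail when the user has no writable home. The name builder and UID/GID 1000 below are placeholders:

    FROM ubuntu:22.04
    ARG USERNAME=builder
    ARG UID=1000
    ARG GID=1000
    # A normal user with a home directory: build artifacts get this UID
    # (matching the host user), and the VS Code server can install its
    # bits under /home/builder/.vscode-server.
    RUN groupadd -g "${GID}" "${USERNAME}" \
        && useradd -m -u "${UID}" -g "${GID}" -s /bin/bash "${USERNAME}"
    USER ${USERNAME}

With this, files produced by the compiler inside the container belong to UID 1000 on the host as well, so no chmod should be needed.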
I am trying to run the Puppet Pupperware suite (all 3 services: puppet server / PuppetDB / DB server).
I am using the official YAML file provided by Puppet Labs for Docker Compose: https://github.com/puppetlabs/pupperware/blob/master/docker-compose.yml
When I run that YAML file with Docker Compose, however, I run into the following error (from docker-compose logs):
postgres_1 | ls: cannot open directory '/docker-entrypoint-initdb.d/': Permission denied
And as a result, the startup fails (only the puppet server comes up, but not the other ones).
My Docker host is a Fedora 33 virtual machine running inside a Proxmox environment. Proxmox runs on the physical host.
I have disabled SELinux, and I am running Docker (moby) rootless. My local user (uid 1000) can run docker without sudo.
I believe I need to set permissions in the container (probably via a Dockerfile), but I am not sure how to change that, and I am not sure how to use a Dockerfile and docker-compose simultaneously.
Thank you for your help.
The docker-compose file is from the Puppet 6 era. The Docker images that the Pupperware setup currently pulls are latest, which is Puppet 7.
I got my pre-existing setup functioning again by changing the image names to:
puppet/puppetserver:6.14.1
postgres:9.6
puppet/puppetdb:6.13.1
Maybe this works for you as well.
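If it helps, here is roughly how those pins would look in the compose file (a sketch; the service names are assumed to match the upstream pupperware docker-compose.yml, and everything else in the file stays unchanged):

    services:
      puppet:
        image: puppet/puppetserver:6.14.1
      postgres:
        image: postgres:9.6
      puppetdb:
        image: puppet/puppetdb:6.13.1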
Well, since it's been a month and you have no answers, I will try to help you with what I know.
You should put a Dockerfile in the root of your project. It contains the instructions the Docker daemon executes to build your image, including the commands run by the Linux inside the container during the build. docker-compose then reads your docker-compose.yml, builds the image from that Dockerfile, and starts the containers.
So to solve the permission problem, you should add a RUN instruction, which executes a Linux shell command at build time, and use it to fix the ownership and permissions of that folder.
Also, look at this answer.
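To make that concrete, here is a hedged sketch of how the two pieces could fit together; the file name Dockerfile.postgres is made up, and the chown target is the directory from your error message. One caveat: if /docker-entrypoint-initdb.d is bind-mounted from the host in the compose file, a build-time chown is shadowed by the mount, and you would fix the permissions on the host path instead.

    # Dockerfile.postgres
    FROM postgres:9.6
    # Give the postgres user in the image access to the init-scripts
    # directory that was failing with 'Permission denied'.
    RUN chown -R postgres:postgres /docker-entrypoint-initdb.d \
        && chmod -R 755 /docker-entrypoint-initdb.d

Then, in docker-compose.yml, point the service at that Dockerfile instead of a prebuilt image:

    services:
      postgres:
        build:
          context: .
          dockerfile: Dockerfile.postgres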
In my CentOS system, I added a user to the docker group, and I found that such a user can access any folder by attaching it to a container via docker run -it -v path-to-directory:directory-in-container. For example, I have a folder with mode 700 which can only be accessed by root, but if someone who doesn't have permission to access this folder runs a container and mounts the folder into it, he can access the folder inside the container. How can I prevent such a user from attaching unauthorized directories to a Docker container? My Docker version is 17.03.0-ce, and the OS is CentOS 7.0. Thanks!
You should refer to and follow container security principles when setting up permissions for external volumes. But if you simply want to test container features:
You can set the path-to-directory access mode to 777; yes, that is the world-readable-and-writable access mode. Then no additional owner/group or access-mode settings are needed for any container volume mapping.
chmod 777 /path/to/directory
The Docker daemon runs as root and normally starts the containers as root, with users inside mapping one-to-one to the host users, so anybody in the docker group has effective root permissions.
There is an option to tell dockerd to run the containers via subordinate users of a specific user; see https://docs.docker.com/engine/security/userns-remap/. That prevents full root access, but everybody accessing the Docker daemon will be running the containers under that user, and if that user is not them, they won't be able to usefully mount things in their home.
Also, I believe it is incompatible with --privileged containers, but of course those give you full root access via other means anyway.
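For reference, enabling the remapping is a daemon-side setting; a minimal sketch of /etc/docker/daemon.json, where the value "default" tells Docker to create and use a dockremap user with subordinate UID/GID ranges:

    {
      "userns-remap": "default"
    }

Restart the daemon afterwards (e.g. sudo systemctl restart docker) for the setting to take effect.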
I'm setting up docker-compose for my PHP project on my Mac. I mount a shared volume with all the code from the host machine into the container.
Obviously, I do not want to run the PHP container as the root user. But do I have another option?
I tried:
Changing the owner of the project files in the Dockerfile
Changing the owner in the entrypoint (a sketch of this approach follows below)
Both methods work fine until you create a file in the IDE; in that case the file appears to be owned by root inside the container, and then you need to restart the container (which is a horrible experience for development)
Changing the UID of the user www-data in the container to the UID of my host user
It didn't work, since the files are owned by root
Am I missing any points here?
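One more variant worth trying, combining the entrypoint and UID approaches you listed (a sketch, assuming a Debian-based PHP image where www-data exists, that the host passes HOST_UID/HOST_GID in as environment variables, and that gosu is installed in the image for the privilege drop):

    #!/bin/sh
    # entrypoint.sh: remap www-data at container start rather than build
    # time, so the mapping follows whatever UID/GID the host passes in.
    set -e
    if [ -n "${HOST_UID}" ]; then
        groupmod -g "${HOST_GID:-${HOST_UID}}" www-data
        usermod -u "${HOST_UID}" www-data
    fi
    # Drop from root to www-data and hand off to the real command
    # (e.g. php-fpm), so the main process never runs as root.
    exec gosu www-data "$@"

That said, as far as I know the file-sharing layer in Docker Desktop for Mac handles ownership inside bind mounts differently from native Linux, so results can differ; treat this as something to experiment with.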
I am facing a problem implementing NAS storage for containers in my datacenter. Here is the scenario: when the app (container) starts, it needs to create the NAS folder on the host and access its files from there. The folders should be created at run time. Things I tried:
Using -v, I connected the server folder and the container manually (but this fails when the container is moved to another server)
With the --volumes-from option I was able to connect one container's files to another (this was also done manually). (But is there any way, when the container starts, for it to create a storage folder in the other container and access the files?)
Your guesses/suggestions will be much appreciated.
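If your NAS speaks NFS, one approach (a sketch; the hostname nas.example.com, the export path /exports/app-data, and the image name my-app are placeholders) is to define the share as a named volume using the local driver's NFS support. Docker then performs the mount at container start on whichever server the container lands on:

    docker volume create \
      --driver local \
      --opt type=nfs \
      --opt o=addr=nas.example.com,rw \
      --opt device=:/exports/app-data \
      app-data

    # Use it like any other volume; the NFS mount happens when the
    # container starts, so the data follows the container between hosts.
    docker run -d -v app-data:/data my-app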