The only info about the rootless data dir on the official Docker Docs website is:
The data dir is set to ~/.local/share/docker by default. The data dir should not be on NFS.
However, I didn't find any option to change this during installation via dockerd-rootless-setuptool.sh install.
Change default data directory for Docker on non-root user (rootless mode).
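One option that should work, although it isn't mentioned in the docs excerpt above: rootless dockerd reads its configuration from ~/.config/docker/daemon.json (instead of /etc/docker/daemon.json), so the data directory can be moved by setting data-root there. The target path below is only an illustration:

~/.config/docker/daemon.json:
{
  "data-root": "/home/myuser/docker-data"
}

After editing the file, restart the rootless daemon (with the systemd user unit created by the setup tool, that is systemctl --user restart docker) and check the "Docker Root Dir" line in docker info.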
I'm running Docker Desktop on Windows 10. I used the repository for the Fonduer Tutorials to create an image to run with docker. This works fine so far and I am able to run the notebooks which are included in the repository.
I now would like to copy some jupyter notebooks and other data from the host to the container called fonduer-tutorials-jupyter-1 to be able to make use of the fonduer framework.
I am able to copy the files to the container and also to open the Jupyter notebooks, but unfortunately they open in read-only mode.
How can I copy files from the host to the container and still have permission to write to them on a Windows machine?
I read a lot about options like chown and other flags to use with COPY, but it seems like they're not available on Windows machines.
Let's assume my UID received with id -u is 1000 and my GID received with id -g is 2000 if that is relevant to a solution.
To avoid copying the files manually and to get around the access restrictions described above, a better solution is to map the host directory to a volume within the container via a .yml file, in this case docker-compose.yml. To do so, the following needs to be added to the .yml file:
volumes:
- [path to host directory]:[path inside the container where the files should be placed]
With this, the files are available both within the container and on the host.
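For illustration, a minimal docker-compose.yml sketch could look like the following; the service name, image name and both paths are placeholders, not values taken from the Fonduer repository:

services:
  jupyter:
    image: fonduer-tutorials-jupyter
    volumes:
      # host path (relative to the docker-compose.yml) : path inside the container
      - ./notebooks:/path/in/container/notebooks

Anything written on either side of the mapping is immediately visible on the other, so the notebooks stay writable and no docker cp step is needed.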
I'm new to Google Cloud and Docker and I can't for the life of me figure out how to copy directories from the Docker container (pushed to the Container Registry) to the Google Compute Engine instance. I think I need to mount the volume but I don't really know how. In the docker container the main directory is /app which has my files. Basically I want to do this to see the docker container's files in Google Cloud.
I assumed that if I ran docker pull [HOSTNAME]/[PROJECT-ID]/[IMAGE]:[TAG] inside the Cloud Shell, the files would show up somewhere in the Cloud Shell, i.e. in /var/lib/docker, but when I cd to /var/lib/docker and type ls I get:
ls: cannot open directory '.': Permission denied
Just to add, I've tried following the "Connecting to Cloud Storage buckets" tutorial: https://cloud.google.com/compute/docs/disks/gcs-buckets
But realised that this is for single files. Is it possible to copy over the whole root directory of the Docker image using gsutil? Do I need to use something else instead, like persistent disks?
You need to have Docker installed in order to run your images and, of course, to be able to copy anything from inside the image to your host filesystem.
Use docker cp CONTAINER:SRC_PATH DEST_PATH to copy files.
Have a look at the official Docker Documentation on how to use this command.
A similar topic was also discussed here on Stack Overflow and has a very good answer.
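As a sketch of how this can look in practice (the image reference placeholders are the same as above, and the temporary container name is made up), you can create a stopped container from the pulled image and copy the /app directory out of it:

docker pull [HOSTNAME]/[PROJECT-ID]/[IMAGE]:[TAG]
docker create --name tmp-copy [HOSTNAME]/[PROJECT-ID]/[IMAGE]:[TAG]
docker cp tmp-copy:/app ./app
docker rm tmp-copy

docker cp works on a stopped container as well, so the image never has to be started just to read its files.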
Do they use environment/config variables to link the persistent storage to the project-related Docker image?
So that every time a new VM is assigned, the Cloud Shell image can be run with those user-specific values?
I'm not sure I've caught all your questions and concerns. So, Cloud Shell consists of 2 parts:
The container that contains all the installed libraries, language support/SDKs, and binaries (Docker, for example). This container is stateless and you can change it (in the settings section of Cloud Shell) if you want to deploy a custom container. For example, that's what is done with Cloud Run Button for deploying a Cloud Run service automatically.
The volume dedicated to the current user, which is mounted in the Cloud Shell container.
By the way, you can easily deduce that everything you store outside the /home/<user> directory is stateless and does not persist. The /tmp directory, Docker images (pulled or created), ... all of these are lost when Cloud Shell starts on another VM.
Only the volume dedicated to the user is stateful, and it is limited to 5 GB. It's a Linux environment and you can customize the .profile and .bashrc files as you want. You can store keys in the .ssh/ directory and use all the other tricks you can do on Linux in your /home directory.
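As a small illustration of what persists and what does not (the alias and the image below are just examples):

# survives a new Cloud Shell VM: it lives in the user's home volume
echo 'alias ll="ls -la"' >> ~/.bashrc
# does not survive: images are stored outside /home/<user>
docker pull alpine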
When I start scm manager via docker:
docker run sdorra/scm-manager
How do I get scm manager to retrieve/store its configuration data and repositories from/to an existing directory on the main filesystem?
You can use Docker volumes to achieve this. With a Docker volume you can mount a folder from your host into your container. In the case of the scm-manager home directory, this could look like this:
docker run -v /host/path:/var/lib/scm sdorra/scm-manager:1.60
The left side of the "-v" parameter specifies the path on the host filesystem and the right side specifies the path in the container.
Note: The scm-manager Docker container uses a user with uid 1000, so you have to make sure that this user can read and write the volume: chown -R 1000:1000 /host/path.
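Putting it together, a possible sequence (with /host/path standing in for the real directory on your machine) is:

sudo mkdir -p /host/path
sudo chown -R 1000:1000 /host/path
docker run -v /host/path:/var/lib/scm sdorra/scm-manager:1.60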
When we install an application, it creates a whole file structure and also generates log files in the application's configured paths. When we run the same application in a Docker container, it also creates those files. How can we access those files? As far as I know, we can use the docker exec command with bash to interact through the command prompt or terminal, but is it possible to access the same files using WinSCP or any GUI-based 3rd-party tools?
You could mount a volume on the container, so the files the application generates "locally" inside the container can be accessed from the host. For example:
docker run -v host_dir:container_dir yourDocker (...)
Your Docker process will save the files to its local container_dir, and you can access them through host_dir on the host.
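For example, if the application inside the container writes its logs to /var/log/myapp (the paths and the image name here are made up for illustration), the mapping could be:

docker run -v /home/user/myapp-logs:/var/log/myapp my-app-image

The log files then appear under /home/user/myapp-logs on the host, where WinSCP or any other GUI tool can read them without going through docker exec.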