Copy docker volume to google compute engine instance - docker

I have a Google Compute Engine instance (Ubuntu 16.04 LTS).
I want to copy Docker volumes from my local machine to the Google Compute Engine instance. I tried to use the command given in How to copy docker volume from one machine to another?, but it didn't work. Please help.

Mount the volumes on your local machine so that the data is accessible at the file-system level.
Then make sure gcloud is set up properly on your local machine.
You can then use gcloud commands to copy the data from your local machine to the GCE instance.
Method to copy files from local machine to GCP Instances
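A minimal sketch of that flow, assuming a volume named my_volume and an instance named my-gce-instance in zone us-central1-a (all placeholder names):
# archive the volume's contents via a throwaway container on the local machine
docker run --rm -v my_volume:/data -v "$PWD":/backup ubuntu tar czf /backup/my_volume.tar.gz -C /data .
# copy the archive to the instance with gcloud
gcloud compute scp my_volume.tar.gz my-gce-instance:~ --zone=us-central1-a
# on the instance, unpack the archive into a volume there
docker run --rm -v my_volume:/data -v "$HOME":/backup ubuntu tar xzf /backup/my_volume.tar.gz -C /data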

Related

How to organize data into separate vhdx disks using Docker (on WSL2)?

I am running Docker on Windows 11 with WSL2 integration enabled. If I create a volume named "my_awesome_volume" in Docker Desktop then the following folder is created:
\\wsl.localhost\docker-desktop-data\data\docker\volumes\my_awesome_volume\_data
I understand that this data lives in the ext4.vhdx for docker-desktop-data. I don't want to store data for all of my containers here. I'd like to isolate some of my data into separate vhdx disks for portability and organization purposes. So, I created a new my_storage.vhdx for data storage, formatted it as ext4, and mounted it in WSL. This was successful, and I can access/read/write this storage from any distro on my system using the following path from within the distro:
/mnt/wsl/my_storage_folder
Or from Windows (any distro works but my example uses docker-desktop-data):
\\wsl.localhost\docker-desktop-data\mnt\wsl\my_storage_folder
I am unable to access this storage using a Docker volume, though.
I understand how to create a volume with access to the host file system, like this:
volumes:
- /c/my_host_folder/config:/config
Of course, performance is better if files aren't read from the Windows host, but if I do this:
volumes:
- my_awesome_volume:/config
My data ends up in the docker-desktop-data vhdx again.
Is it possible to create a Docker volume (in Windows w/ WSL2) that points to a folder in the my_storage.vhdx I created? How?
I tried to follow a couple of examples using the local driver options, but I couldn't get anything to work.
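For reference, the local-driver pattern those examples show can also be expressed as a docker volume create call; this is only a sketch (not verified against the WSL2 setup above), with my_storage_volume as a made-up name and the path taken from the question:
# bind a host path through the local driver (type=none + o=bind)
docker volume create --driver local --opt type=none --opt o=bind --opt device=/mnt/wsl/my_storage_folder my_storage_volume
# quick check that the volume resolves to the mounted vhdx path
docker run --rm -v my_storage_volume:/config ubuntu ls /config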

Copy docker configuration to other PC

I'm using Docker Compose to run ChirpStack on my Windows 10 machine. I need to reinstall the operating system; how can I keep the working ChirpStack Docker setup without creating a new one?
If all the base images you're using come from public repos and aren't saved only on your machine, you only need to save your Docker configuration. Since you're using Docker Compose, you can just copy the docker-compose.yml file to an external storage medium and you're all set. Unless you have some more dependencies that exist only on your computer, that's all the files you need.
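As a rough illustration of that advice (assuming nothing beyond the compose file itself is needed), on the fresh install you would put the saved docker-compose.yml back in a working directory and run:
# re-download the public base images referenced by the compose file
docker compose pull
# recreate the ChirpStack stack
docker compose up -d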

How does the data in HOME directory persist on cloud shell?

Do they use environment/config variables to link the persistent storage to the project-related Docker image, so that every time a new VM is assigned, the Cloud Shell image can be run with those user-specific values?
I'm not sure I've caught all your questions and concerns, so here is how Cloud Shell is put together. It has 2 parts:
The container that contains all the installed libraries, language support/SDKs, and binaries (Docker, for example). This container is stateless and you can change it (in the settings section of Cloud Shell) if you want to deploy a custom container. For example, that's what Cloud Run Button does to deploy a Cloud Run service automatically.
The volume dedicated to the current user, which is mounted in the Cloud Shell container.
By the way, you can easily deduce that everything you store outside the /home/<user> directory is stateless and does not persist. The /tmp directory, Docker images (pulled or built), ... all of these are lost when Cloud Shell starts on another VM.
Only the volume dedicated to the user is stateful, and it is limited to 5 GB. It's a Linux environment and you can customize the .profile and .bashrc files as you want. You can store keys in your .ssh/ directory and use all the other tricks you can do on Linux in your /home directory.
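A quick way to see that boundary for yourself (just an illustration of the behaviour described above, with made-up file names):
echo "kept" > ~/notes.txt        # under /home/<user>: still there after a new VM is assigned
echo "gone" > /tmp/scratch.txt   # outside /home/<user>: lost when Cloud Shell lands on another VM
docker pull alpine               # pulled images are likewise lost on a new VM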

docker container using nfs directory on remote host as volume

I have an application on my local host.
The application uses files from a directory on a remote host as its database.
I need to dockerize this application.
How can I use this directory?
I tried to use it as a volume, but it didn't work:
the files of the directory are inside the container, but the application doesn't recognize them.
If you somehow map the remote directory onto your local host, why not use the same technique inside Docker?
If for some reason you can't (let's say you don't want to install additional drivers in your container), you can still use volumes:
Let's say that on your local host the directory (which is somehow synchronized with the remote endpoint) is called /home/sync_folder. Then you start Docker in the following manner:
docker run -it -v /home/sync_folder:/shares ubuntu ls /shares
I've written ubuntu just as an example. ls /shares illustrates how to access the directory inside the container.
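If the remote directory is exported over NFS (as the question title suggests), another option is Docker's NFS support in the local volume driver; this is only a sketch, with the server address and export path as placeholders:
# create a volume backed directly by the NFS export
docker volume create --driver local --opt type=nfs --opt o=addr=192.168.1.10,rw --opt device=:/exported/data nfs_data
# mount it like any other named volume
docker run -it -v nfs_data:/shares ubuntu ls /shares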

GCloud: Copying Files from Local Machine into a Docker Container

Is there a straightforward way to copy files from a local machine into a docker container within a VM instance on Google Compute Engine?
I know gcloud compute ssh --container=XX is an easy way to execute commands on a container, but there's no analogous gcloud compute scp --container=XX. Note: I created this VM and docker container with the command gcloud alpha compute instances create-from-container ...
Note: better than just being able to transfer files, it would be nice to have rsync-type functionality.
Unfortunately, it looks like this isn't available without some setup on your part (and it's not in beta). Creating a volume map notwithstanding, you could do it by running sshd inside the container, listening on its own port mapped to the host:
gcloud compute firewall-rules create CONTAINER-XX-SSH-RULE --allow tcp:2022 --target-tags=XX-HOST
gcloud compute scp --port 2022 --recurse stuff/ user@XX-HOSTNAME:
or
scp -r -P 2022 stuff/ user@xx-host-ip:
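The container side is left implicit above; purely as a generic Docker illustration (not specific to create-from-container, and my-image-with-sshd is a placeholder for an image that already runs sshd on port 22), the host-port mapping would look roughly like:
docker run -d -p 2022:22 my-image-with-sshd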
I generally use an approach where I put object storage in between local machines and cloud VMs. On AWS I use s3 sync; on Google you can use gsutil rsync.
First, the data on a 'local' development machine gets pushed into object storage when I'm ready to deploy it.
(The data in question is a snapshot of a git repository + some binary files.)
(Sometimes the development machine in question is a laptop, sometimes my desktop, sometimes a cloud IDE. They all run git.)
Then the VM pulls content from object storage using s3 sync. I think you can do the same with gsutil to pull data from Google object storage into a Google container. (In fact it seems you can even rsync between clouds using gsutil).
This is my shoestring dev-ops environment. It's a little bit more work, but using object storage as a middleman for syncing snapshots of data between machines provides a bit of flexibility, a reproducible environment and peace of mind.
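A sketch of that round trip with gsutil (the bucket name and paths are placeholders):
# on the development machine, push the snapshot
gsutil -m rsync -r ./deploy_snapshot gs://my-staging-bucket/deploy_snapshot
# on the VM (or inside the container), pull it back down
gsutil -m rsync -r gs://my-staging-bucket/deploy_snapshot /srv/app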
