GCloud: Copying Files from Local Machine into a Docker Container

Is there a straightforward way to copy files from a local machine into a docker container within a VM instance on Google Compute Engine?
I know gcloud compute ssh --container=XX is an easy way to execute commands on a container, but there's no analogous gcloud compute scp --container=XX. Note: I created this VM and docker container with the command gcloud alpha compute instances create-from-container ...
Note: beyond simply being able to transfer files, it would be nice to have rsync-type functionality.

Unfortunately, it looks like this isn't available without some setup on your part (and it's not in beta): setting a volume mapping aside, you could do it by running sshd inside the container, listening on its own port mapped to the host:
gcloud compute firewall-rules create CONTAINER-XX-SSH-RULE --allow tcp:2022 --target-tags=XX-HOST
gcloud compute scp --port 2022 --recurse stuff/ user@XX-HOSTNAME:
or
scp -r -P 2022 stuff/ user@xx-host-ip:
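For completeness, a minimal sketch of the container side, assuming an image that runs sshd on port 22 (the image and container names here are hypothetical):
# On the VM host: publish the container's sshd on host port 2022
docker run -d --name xx -p 2022:22 my-image-with-sshd
# The firewall rule and scp commands above can then reach it on port 2022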

I generally use an approach where object storage sits in between local machines and cloud VMs. On AWS I use s3 sync; on Google you can use gsutil rsync.
First the data on a 'local' development machine gets pushed into object storage when I'm ready to deploy it.
(The data in question is a snapshot of a git repository plus some binary files.)
(Sometimes the development machine in question is a laptop, sometimes my desktop, sometimes a cloud IDE. They all run git.)
Then the VM pulls the content from object storage using s3 sync. I believe you can do the same with gsutil to pull data from Google Cloud Storage into a container on GCE. (In fact, it seems you can even rsync between clouds using gsutil.)
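For example, the push/pull could look roughly like this with gsutil (the bucket name and paths are placeholders):
# On the development machine: push a snapshot to a Cloud Storage bucket
gsutil -m rsync -r ./snapshot gs://my-staging-bucket/snapshot
# On the VM (or inside the container, if gsutil is installed there): pull it down
gsutil -m rsync -r gs://my-staging-bucket/snapshot /srv/snapshot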
This is my shoestring dev-ops environment. It's a little bit more work, but using object storage as a middleman for syncing snapshots of data between machines provides a bit of flexibility, a reproducible environment and peace of mind.

Related

Can a docker registry be copied from one machine to another?

Can a docker registry populated on one host be 'tree-copied' to another machine and be 'turned on' as a pre-populated docker registry served by the new host?
I am working on a project providing Platform-as-a-Service which includes a docker registry service. These run in disconnected environments (not connected to the Internet). One very time consuming aspect of each deployment is creating an empty registry and loading, tagging, and pushing hundreds of docker images (tens of gigabytes of data) from a compressed tar into the registry for each new deployment.
I am thinking it would be faster to do this differently. Instead of a tarball of docker files, could we, at 'build time', create and populate the docker registry and then compress that? At deploy time we would just unpack the registry into /var/lib/registry or wherever...
But, I don't know if any of the data in the registry is dependent upon, say, a machine ID, certificate, or other aspect of the host upon which the registry was first running.
It seems to me an equivalent question is, if I populate two docker registries running on different machines with the same set of docker images in the same order, will the file contents of the registry folder be the same (or similar, allowing for timestamps and such?)
Every time I search for "docker registry transfer" or "move docker registry to new machine" or similar terms, I am flooded with answers about moving single docker images from one machine or registry to another, but don't see anything about docker registry migration or portability.
I haven't had the time or resources to test this out; maybe someone already expert in docker registry structures could clue me in that this is practical (or say it absolutely will not work) so that I can make a better decision about whether to pursue getting the time and machines to demonstrate this approach.
Thank you.
I don't know if any of the data in the registry is dependent upon, say, a machine ID, certificate, or other aspect of the host upon which the registry was first running.
Some configuration will be tied to the registry, for example if you are running a secure registry by adding certs to it.
In that case, you will have to configure the registry on the new machine in the same manner as on the previous one; use a configuration manager (like Ansible) for that.
Instead of a tarball of docker files, could we at 'build time' create and populate the docker registry and then compress that? At deploy time we just unpack the registry into /var/lib/registry or wherever...
Adding to what @DazWilkin already mentioned in the comments, a storage backend can be configured for the registry, which can be:
filesystem: the rootdirectory default is /var/lib/registry
cloud-provider storage, if the registry is deployed on a private cloud (for example, an S3 bucket on AWS)
You can back up that storage (the rootdirectory in the filesystem case) or attach the same storage location to the new registry.
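As a sketch of what that could look like with filesystem storage (this assumes the official registry:2 image running as a container named "registry", which stores its data under /var/lib/registry inside the container; paths and names are placeholders):
# On the build machine: stop the registry and archive its storage directory
docker stop registry
tar -czf registry-data.tgz -C /var/lib/registry .
# On the disconnected host: restore the data and start the same registry version
mkdir -p /var/lib/registry
tar -xzf registry-data.tgz -C /var/lib/registry
docker run -d --name registry -p 5000:5000 -v /var/lib/registry:/var/lib/registry registry:2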
Words of caution
Use the exact same configuration and version of the docker registry on the new machine.

How does the data in HOME directory persist on cloud shell?

Do they use environment/config variables to link the persistent storage to the project-related docker image,
so that every time a new VM is assigned, the Cloud Shell image can be run with those user-specific values?
I'm not sure I've caught all your questions and concerns. Cloud Shell consists of 2 parts:
The container that contains all the installed libraries, language support/SDKs, and binaries (docker for example). This container is stateless and you can change it (in the settings section of Cloud Shell) if you want to deploy a custom container. For example, that is what Cloud Run Button does for deploying a Cloud Run service automatically.
The volume dedicated to the current user that is mounted in the Cloud Shell container.
From this, you can easily deduce that anything you store outside the /home/<user> directory is stateless and does not persist. The /tmp directory, docker images (pulled or created), ... all of these are lost when Cloud Shell starts on another VM.
Only the volume dedicated to the user is stateful, and it is limited to 5 GB. It's a Linux environment and you can customize the .profile and .bashrc files as you want. You can store keys in the ~/.ssh/ directory and use all the other tricks that you can do in your /home directory on Linux.

How to run container in a remote docker host with Jenkins

I have two servers:
Server A: Build server with Jenkins and Docker installed.
Server B: Production server with Docker installed.
I want to build a Docker image in Server A, and then run the corresponding container in Server B. The question is then:
What's the recommended way of running a container in Server B from Server A, once Jenkins is done with the docker build? Do I have to push the image to Docker hub to pull it in Server B, or can I somehow transfer the image directly?
I'm really not looking for specific Jenkins plugins or stuff, but rather, from a security and architecture standpoint, what's the best approach to accomplish this?
I've read a ton of posts and SO answers about this and have come to realize that there are plenty of ways to do it, but I'm still unsure what's the ultimate, most common way to do this. I've seen these alternatives:
Using docker-machine
Using Docker Restful Remote API
Using plain ssh root@server.b "docker run ..."
Using Docker Swarm (I'm a super noob, so I'm still unsure whether this is even an option for my use case)
Edit:
I run Servers A and B in Digital Ocean.
A Docker image can be saved to a regular tar archive:
docker image save -o <FILE> <IMAGE>
Docs here: https://docs.docker.com/engine/reference/commandline/image_save/
Then scp this tar archive to another host, and run docker load to load the image:
docker image load -i <FILE>
Docs here: https://docs.docker.com/engine/reference/commandline/image_load/
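As a variation, the two steps can be combined into a single stream over SSH (a sketch; the host name is a placeholder):
# Stream the image directly to Server B without an intermediate file
docker image save <IMAGE> | ssh user@server-b 'docker image load'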
This save-scp-load method is rarely used, though. The common approach is to set up a private Docker registry behind your firewall and push images to, or pull them from, that private registry. This doc describes how to deploy a container registry. Alternatively, you can choose a registry service provided by a third party, such as GitLab's container registry.
When using Docker registries, you only push/pull the layers that have changed.
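A sketch of that flow (the registry host and image name are placeholders):
# On Server A, after the Jenkins build
docker tag myapp:latest registry.example.com/myapp:latest
docker push registry.example.com/myapp:latest
# On Server B
docker pull registry.example.com/myapp:latest
docker run -d --name myapp registry.example.com/myapp:latest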
You can use the Docker REST API; the Jenkins HTTP Request plugin can be used to make the HTTP requests. You can also run Docker commands directly against a remote Docker host by setting the DOCKER_HOST environment variable. To export the environment variable to the current shell:
export DOCKER_HOST="tcp://your-remote-server.org:2375"
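With the variable set, subsequent docker commands in the job run against Server B's daemon, for example (image name is a placeholder):
docker run -d --name myapp myrepo/myapp:latest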
Please be aware of the security concerns when allowing TCP traffic. More info.
Another method is to use SSH Agent Plugin in Jenkins.

Copy docker volume to google compute engine instance

I have a google compute engine instance (Ubuntu 16.04 LTS).
I want to copy docker volumes from my local machine to the google compute engine instance. I tried to use the command given in - How to copy docker volume from one machine to another?
But it didn't work. Please help.
Mount the volumes on your local machine so that the data is accessible at the file-system level.
Then you need to have gcloud set up properly on your local machine.
Then you can use gcloud commands to copy the data from the local machine to the GCE instance.
Method to copy files from local machine to GCP Instances
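A sketch of that flow, assuming a named volume called mydata and placeholder instance/zone names:
# On the local machine: archive the volume's contents
docker run --rm -v mydata:/data -v "$(pwd)":/backup busybox tar czf /backup/mydata.tgz -C /data .
# Copy the archive to the GCE instance
gcloud compute scp mydata.tgz my-instance:~/ --zone us-central1-a
# On the instance: restore the archive into a volume there
docker run --rm -v mydata:/data -v "$HOME":/backup busybox tar xzf /backup/mydata.tgz -C /data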

How do I run Docker on Google Compute Engine?

What's the procedure for installing and running Docker on Google Compute Engine?
Until the recent GA release of Compute Engine, running Docker was not supported on GCE (due to kernel restrictions), but with the newly announced ability to deploy and use custom kernels, that restriction no longer applies and Docker now works great on GCE.
Thanks to proppy, the instructions for running Docker on Google Compute Engine are now documented for you here: http://docs.docker.io/en/master/installation/google/. Enjoy!
They now have a VM image with docker pre-installed:
$ gcloud compute instances create instance-name \
    --image projects/google-containers/global/images/container-vm-v20140522 \
    --zone us-central1-a \
    --machine-type f1-micro
https://developers.google.com/compute/docs/containers/container_vms
A little late, but I wanted to add an answer with a more detailed workflow and links, since answers are still rather scattered:
Create a Docker image
a. Locally
b. Using Google Container Builder
Push the local Docker image to Google Container Registry
docker tag <current name>:<current tag> gcr.io/<project name>/<new name>
gcloud docker -- push gcr.io/<project name>/<new name>
UPDATE
If you have upgraded to Docker client versions above 18.03, gcloud docker commands are no longer supported. Instead of the above push, use:
docker push gcr.io/<project name>/<new name>
If you have issues after upgrading, see more here.
Create a compute instance.
This command actually wraps up a number of steps. It creates a virtual machine (VM) instance using Google Compute Engine, which uses a Google-provided, container-optimized OS image. That image includes Docker and additional software responsible for starting our docker container. Our container image is then pulled from the Container Registry and run using docker run when the VM starts. Note: you still need to use docker attach even though the container is running. It's worth pointing out that only one container can be run per VM instance. Use Kubernetes to deploy multiple containers per VM (the steps are similar). Find more details on all the options in the links at the bottom of this post.
gcloud beta compute instances create-with-container <desired instance name> \
--zone <google zone> \
--container-stdin \
--container-tty \
--container-image <google repository path>:<tag> \
--container-command <command (in quotes)> \
--service-account <e-mail>
Tip: You can view available gcloud projects with gcloud projects list
SSH into the compute instance.
gcloud beta compute ssh <instance name> \
--zone <zone>
Stop or Delete the instance. If an instance is stopped, you will still be billed for resources such as static IPs and persistent disks. To avoid being billed at all, delete the instance.
a. Stop
gcloud compute instances stop <instance name>
b. Delete
gcloud compute instances delete <instance name>
Related Links:
More on deploying containers on VMs
More on zones
More create-with-container options
As of now, for just Docker, the Container-optimized OS is certainly the way to go:
gcloud compute images list --project=cos-cloud --no-standard-images
It comes with Docker and Kubernetes preinstalled. The only thing it lacks is the Cloud SDK command-line tools. (It also lacks python3, despite Google's announcement of the Python 2 sunset on 2020-01-01. Well, it's still 27 days to go...)
As an additional piece of information I wanted to share, I was searching for a standard image that would offer both docker and gcloud/gsutil preinstalled (and found none, oops). I do not think I'm alone in this boat, as gcloud is the thing you could hardly get by without on GCE¹.
My best find so far was the Ubuntu 18.04 image, which comes with its own (non-Debian) package manager, snap. The image comes with the Cloud SDK preinstalled, and Docker installs literally in a snap: 11 seconds on an F1 instance in an initial test, about 6s on an n1-standard-1. The only snag I hit was an error message that the docker authorization helper was not available; an attempt to add it with gcloud components install failed because the SDK was installed as a snap, too. However, the helper is actually there, only not in the PATH. The following is what got me both tools available in a single transient builder VM with the least amount of setup-script runtime, starting from the supported Ubuntu 18.04 LTS image²:
snap install docker
ln -s /snap/google-cloud-sdk/current/bin/docker-credential-gcloud /usr/bin
gcloud -q auth configure-docker
¹ I needed both for a Daisy workflow imaging a disk with both artifacts from GS buckets and a couple of huge, 2GB+ library images from the local gcr.io registry that were shared between the build (as cloud builder layers) and the runtime (where I had to create and extract containers onto the newly built image). But that's beside the point; one may need both tools for a multitude of possible reasons.
² Use gcloud compute images list --uri | grep ubuntu-1804 to get the most current one.
Google's GitHub site now offers a GCE image including docker: https://github.com/GoogleCloudPlatform/cloud-sdk-docker-image
It's as easy as:
creating a Compute Engine instance
curl https://get.docker.io | bash
Using docker-machine is another way to provision a Google Compute Engine instance with Docker.
docker-machine create \
--driver google \
--google-project $PROJECT \
--google-zone asia-east1-c \
--google-machine-type f1-micro $YOUR_INSTANCE
If you want to log in to this machine on Google Compute Engine, just use docker-machine ssh $YOUR_INSTANCE.
Refer to docker machine driver gce
There is now improved support for containers on GCE:
Google Compute Engine is extending its support for Docker containers. This release is an Open Preview of a container-optimized OS image that includes Docker and an open source agent to manage containers. Below, you'll find links to interact with the community interested in Docker on Google, open source repositories, and examples to get started. We look forward to hearing your feedback and seeing what you build.
Note that this is currently (as of 27 May 2014) in Open Preview:
This is an Open Preview release of containers on Virtual Machines. As a result, we may make backward-incompatible changes and it is not covered by any SLA or deprecation policy. Customers should take this into account when using this Open Preview release.
Running Docker on a GCE instance is not supported; the instance goes down and you are not able to log in again.
We can use the Docker image provided by GCE to create an instance instead.
If your Google Cloud virtual machine is based on Ubuntu, use the following command to install Docker:
sudo apt install docker.io
You may use this link: https://cloud.google.com/cloud-build/docs/quickstart-docker#top_of_page.
The linked page explains how to use Cloud Build to build a Docker image and push the image to Container Registry. You will first build the image using a Dockerfile and then build the same image using Cloud Build's build configuration file.
It's better to set this up while creating the compute instance:
Go to the VM instances page.
Click the Create instance button to create a new instance.
Under the Container section, check Deploy container image.
Specify a container image name under Container image and configure options to run the container if desired. For example, you can specify gcr.io/cloud-marketplace/google/nginx1:1.12 for the container image.
Click Create.
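If you prefer the CLI, a roughly equivalent command (instance name and zone are placeholders; the image is taken from the example above) would be:
gcloud compute instances create-with-container nginx-vm \
    --container-image gcr.io/cloud-marketplace/google/nginx1:1.12 \
    --zone us-central1-a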
Installing Docker on GCP Compute Engine VMs:
This is the link to GCP documentation on the topic:
https://cloud.google.com/compute/docs/containers#installing
It links to the Docker install guide; follow the instructions there depending on what type of Linux you have running in the VM.
