What's the procedure for installing and running Docker on Google Compute Engine?
Until the recent GA release of Compute Engine, running Docker on GCE was not supported (due to kernel restrictions). With the newly announced ability to deploy and use custom kernels, that restriction no longer applies, and Docker now works great on GCE.
Thanks to proppy, the instructions for running Docker on Google Compute Engine are now documented for you here: http://docs.docker.io/en/master/installation/google/. Enjoy!
There is now a VM image with Docker pre-installed.
$ gcloud compute instances create instance-name \
    --image projects/google-containers/global/images/container-vm-v20140522 \
    --zone us-central1-a \
    --machine-type f1-micro
https://developers.google.com/compute/docs/containers/container_vms
A little late, but I wanted to add an answer with a more detailed workflow and links, since answers are still rather scattered:
Create a Docker image
a. Locally
b. Using Google Container Builder
Push local Docker image to Google Container Repository
docker tag <current name>:<current tag> gcr.io/<project name>/<new name>
gcloud docker -- push gcr.io/<project name>/<new name>
UPDATE
If you have upgraded to Docker client versions above 18.03, gcloud docker commands are no longer supported. Instead of the above push, use:
docker push gcr.io/<project name>/<new name>
If you have issues after upgrading, see more here.
Create a compute instance.
This step actually abstracts away a number of operations. It creates a virtual machine (VM) instance on Google Compute Engine using a Google-provided, container-optimized OS image. That image includes Docker and additional software responsible for starting our Docker container. Our container image is then pulled from the Container Repository and run with docker run when the VM starts. Note: you still need to use docker attach even though the container is already running. It's worth pointing out that only one container can be run per VM instance. Use Kubernetes to deploy multiple containers per VM (the steps are similar). Find more details on all the options in the links at the bottom of this post.
gcloud beta compute instances create-with-container <desired instance name> \
--zone <google zone> \
--container-stdin \
--container-tty \
--container-image <google repository path>:<tag> \
--container-command <command (in quotes)> \
--service-account <e-mail>
Tip: You can view available gcloud projects with gcloud projects list
SSH into the compute instance.
gcloud beta compute ssh <instance name> \
--zone <zone>
Stop or delete the instance. If an instance is only stopped, you will still be billed for resources such as static IPs and persistent disks. To avoid being billed at all, delete the instance.
a. Stop
gcloud compute instances stop <instance name>
b. Delete
gcloud compute instances delete <instance name>
Related Links:
More on deploying containers on VMs
More on zones
More create-with-container options
As of now, for just Docker, the Container-optimized OS is certainly the way to go:
gcloud compute images list --project=cos-cloud --no-standard-images
It comes with Docker and Kubernetes preinstalled. The only thing it lacks is the Cloud SDK command-line tools. (It also lacks python3, despite Google's announcement of the Python 2 sunset on 2020-01-01. Well, there are still 27 days to go...)
As an additional piece of information I wanted to share, I was searching for a standard image that would offer both docker and gcloud/gsutil preinstalled (and found none, oops). I don't think I'm alone in this boat, as gcloud is something you can hardly do without on GCE¹.
My best find so far was the Ubuntu 18.04 image, which comes with its own (non-Debian) package manager, snap. The image ships with the Cloud SDK preinstalled, and Docker installs literally in a snap: 11 seconds on an f1-micro in an initial test, about 6 s on an n1-standard-1. The only snag I hit was an error message that the Docker authorization helper was not available; an attempt to add it with gcloud components install failed because the SDK was installed as a snap, too. However, the helper is actually there, only not in the PATH. The following got me both tools available in a single transient builder VM with the least amount of setup-script runtime, starting off the supported Ubuntu 18.04 LTS image²:
snap install docker
ln -s /snap/google-cloud-sdk/current/bin/docker-credential-gcloud /usr/bin
gcloud -q auth configure-docker
¹ I needed both for a Daisy workflow imaging a disk with both artifacts from GS buckets and a couple of huge, 2 GB+ library images from the local gcr.io registry that were shared between the build (as cloud builder layers) and the runtime (where I had to create and extract containers on the newly built image). But that's beside the point; one may need both tools for a multitude of possible reasons.
² Use gcloud compute images list --uri | grep ubuntu-1804 to get the most current one.
Google's GitHub site now offers a GCE image that includes Docker. https://github.com/GoogleCloudPlatform/cloud-sdk-docker-image
It's as easy as:
creating a Compute Engine instance
curl https://get.docker.io | bash
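The two steps above can be combined into one command by passing the install script as a startup script. This is only a sketch: the instance name and zone are illustrative, and it uses Docker's current convenience-script URL (get.docker.com) rather than the older one quoted above.

```
# Hypothetical one-shot example: create a VM and install Docker on first boot.
gcloud compute instances create my-docker-host \
    --zone us-central1-a \
    --metadata startup-script='#! /bin/bash
curl -fsSL https://get.docker.com | bash'
```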
Using docker-machine is another way to host your google compute instance with docker.
docker-machine create \
--driver google \
--google-project $PROJECT \
--google-zone asia-east1-c \
--google-machine-type f1-micro $YOUR_INSTANCE
If you want to log in to this machine on the Google Cloud compute instance, just use docker-machine ssh $YOUR_INSTANCE
Refer to the docker-machine GCE driver documentation.
There is now improved support for containers on GCE:
Google Compute Engine is extending its support for Docker containers. This release is an Open Preview of a container-optimized OS image that includes Docker and an open source agent to manage containers. Below, you'll find links to interact with the community interested in Docker on Google, open source repositories, and examples to get started. We look forward to hearing your feedback and seeing what you build.
Note that this is currently (as of 27 May 2014) in Open Preview:
This is an Open Preview release of containers on Virtual Machines. As a result, we may make backward-incompatible changes and it is not covered by any SLA or deprecation policy. Customers should take this into account when using this Open Preview release.
Running Docker directly on a stock GCE instance is not supported: the instance goes down and you are not able to log in again. We can instead use the Docker image provided by GCE to create an instance.
If your Google Cloud virtual machine is based on Ubuntu, use the following command to install Docker:
sudo apt install docker.io
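A common follow-up (hedged: these are the standard Docker post-install steps on Ubuntu, not anything GCE-specific) is to let your user run docker without sudo and then verify the install:

```
# Install Docker from Ubuntu's repositories, then allow the current
# user to run docker without sudo (requires logging out and back in).
sudo apt update
sudo apt install -y docker.io
sudo usermod -aG docker "$USER"
# Verify the installation
docker --version
```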
You may use this link: https://cloud.google.com/cloud-build/docs/quickstart-docker#top_of_page.
The said link explains how to use Cloud Build to build a Docker image and push the image to Container Registry. You will first build the image using a Dockerfile and then build the same image using the Cloud Build's build configuration file.
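As a rough sketch of what that quickstart walks through (the image name here is illustrative), the build configuration file looks something like this:

```yaml
# Hypothetical cloudbuild.yaml: build the image from the local Dockerfile
# and push it to Container Registry ($PROJECT_ID is substituted by Cloud Build).
steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/quickstart-image', '.']
images:
- 'gcr.io/$PROJECT_ID/quickstart-image'
```

You would then submit it with gcloud builds submit --config cloudbuild.yaml . from the source directory.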
It's better to set this up while creating the compute instance:
Go to the VM instances page.
Click the Create instance button to create a new instance.
Under the Container section, check Deploy container image.
Specify a container image name under Container image and configure options to run the container if desired. For example, you can specify gcr.io/cloud-marketplace/google/nginx1:1.12 for the container image.
Click Create.
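The console steps above can be sketched as an equivalent gcloud command (the instance name and zone are illustrative; the nginx image is the one from the example):

```
gcloud compute instances create-with-container nginx-vm \
    --zone us-central1-a \
    --container-image gcr.io/cloud-marketplace/google/nginx1:1.12
```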
Installing Docker on GCP Compute Engine VMs:
This is the link to GCP documentation on the topic:
https://cloud.google.com/compute/docs/containers#installing
It links to the Docker install guide; follow the instructions for the Linux distribution running in your VM.
Related
I am looking for any sample project to create a MarkLogic 3-node cluster directly from the ML Docker Hub image (https://hub.docker.com/_/marklogic) via Gradle on a single machine.
The idea is to automatically spin up different ML versions for dev environment setup.
The current three-node cluster example in the ml-gradle GitHub repo installs ML from the rpm installation package.
I would like to directly use the ML docker hub image instead.
MarkLogic on Docker Hub includes instructions for spinning up a cluster with this simple command:
docker-compose -f cluster.yml up -d --scale dnode=2
To run this, pull down the Docker image (you'll need an account on Docker Hub (free) and you'll need to do the checkout process to get access to the MarkLogic image (also free)). Then you can create the cluster.yml file using the example given on the setup instructions page on Docker Hub.
As @rjrudin points out, you can set up a Gradle task to do this.
ml-gradle is typically used to deploy an application to an existing ML cluster. To actually create the ML cluster, you use the "docker" executable. You can automate this via Gradle's Exec task if you wish, but doing so is outside the scope of ml-gradle which assumes that you already have an ML cluster setup.
My high-level architecture is described in Cloud Endpoints for gRPC.
The Server below is a Compute Engine instance with Docker installed, running two containers (the ESP, and my server):
As per Getting started with gRPC on Compute Engine, I SSH into the VM and install Docker on the instance (see Install Docker on the VM instance). Finally I pull down the two Docker Containers (ESP and my server) and run them.
I've been reading about Container-Optimized OS from Google.
Rather than provisioning an instance with an OS and then installing Docker, I could just provision the OS with a Container-Optimized OS, and then pull-down my containers and run them.
However the only gRPC tutorials are for gRPC on Kubernetes Engine, gRPC on Kubernetes, and gRPC on Compute Engine. There is no mention of Container OS.
Has anyone used Container OS with gRPC, or can anyone see why this wouldn't work?
Creating an instance for advanced scenarios looks relevant because it states:
Use this method to [...] deploy multiple containers, and to use
cloud-init for advanced configuration.
For context, I'm trying to move to CI/CD in Google Cloud, and removing the need to install Docker would be a step in that direction.
You can basically follow almost the same instructions in the Getting started with gRPC on Compute Engine guide to deploy your gRPC server with the ESP on Container-Optimized OS. In your case, just see Container-Optimized OS as an OS with pre-installed Docker (there are more features but, in your case, only this one is interesting).
It is possible to use cloud-init if you want to automate the startup of your Docker containers (gRPC server + ESP) when the VM instance starts. The following cloud-init.cfg file automates the startup of the same containers presented in the documentation examples (with bookstore sample app). You can replace the Creating a Compute Engine instance part with two steps.
Create a cloud-init config file
Create cloud-init.cfg with the following content:
#cloud-config
runcmd:
- docker network create --driver bridge esp_net
- docker run --detach --name=bookstore --net=esp_net gcr.io/endpointsv2/python-grpc-bookstore-server:1
- docker run --detach --name=esp --net=esp_net --publish=80:9000 gcr.io/endpoints-release/endpoints-runtime:1 --service=bookstore.endpoints.<YOUR_PROJECT_ID>.cloud.goog --rollout_strategy=managed --http2_port=9000 --backend=grpc://bookstore:8000
Just after starting the instance, cloud-init will read this configuration and:
create a Docker network (esp_net)
run the bookstore container
run the ESP container. In this container startup command, replace <YOUR_PROJECT_ID> with your project ID (or replace the whole --service option depending on your service name)
Create a Compute Engine instance with Container-Optimized OS
You can create the instance from the Console, or via the command line:
gcloud compute instances create instance-1 \
--zone=us-east1-b \
--machine-type=n1-standard-1 \
--tags=http-server,https-server \
--image=cos-73-11647-267-0 \
--image-project=cos-cloud \
--metadata-from-file user-data=cloud-init.cfg
The --metadata-from-file flag populates the user-data metadata with the contents of cloud-init.cfg. This cloud-init config is taken into account when the instance starts.
You can validate this works by:
SSHing into instance-1 and running docker ps to see your running containers (gRPC server + ESP). You may experience some delay between the startup of your instance and the startup of both containers.
Calling your gRPC service with a client. For example (always with the bookstore application presented in the docs):
INSTANCE_IP=$(gcloud compute instances describe instance-1 --zone us-east1-b --format="value(networkInterfaces[0].accessConfigs[0].natIP)")
python bookstore_client.py --host $INSTANCE_IP --port 80 # returns a valid response
Note that you can also choose to not use cloud-init. You can directly run the docker run commands (the same as in cloud-init.cfg file) on your VM with Container-Optimized OS, exactly like you would do on any other OS.
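For reference, the manual equivalent of the cloud-init runcmd section is just the same commands run by hand on the VM (as above, <YOUR_PROJECT_ID> must be replaced with your project ID):

```
# Manual equivalent of the cloud-init runcmd section.
docker network create --driver bridge esp_net
docker run --detach --name=bookstore --net=esp_net \
    gcr.io/endpointsv2/python-grpc-bookstore-server:1
docker run --detach --name=esp --net=esp_net --publish=80:9000 \
    gcr.io/endpoints-release/endpoints-runtime:1 \
    --service=bookstore.endpoints.<YOUR_PROJECT_ID>.cloud.goog \
    --rollout_strategy=managed \
    --http2_port=9000 \
    --backend=grpc://bookstore:8000
```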
I'm new to Docker. Most of the tutorials on Docker cover the same things. I'm afraid I'm just ending up with piles of questions, and no real answers. I've come here after my fair share of Googling; kindly help me out with these basic questions.
When we install Docker, where does it get installed? Is it on our local computer, or does it happen in the cloud?
Where do containers get pulled into? Is there a way I can see what is inside a container? (I'm using Ubuntu 18.04)
When we pull a Docker image or clone a repository from Git, where does this data get stored?
Looks like you are confused after reading too many documents. Let me try to put this in simple words. Hope this will help.
When we install Docker, where does it get installed? Is it on our local computer, or does it happen in the cloud?
We install the docker on VM be it you on-prem VM or cloud. You can install the docker on your laptop as well.
Where do containers get pulled into? Is there a way I can see what is inside a container? (I'm using Ubuntu 18.04)
This question reflects a terminology mix-up. We don't pull a container: we pull an image and run a container from it.
Quick terminology summary
Container -> a runnable instance created from an image; it packages an application's code, configuration, and dependencies.
Dockerfile -> where you write your commands, i.e. the blueprint for the image.
Image -> an image is built from a Dockerfile. You use an image to create and run containers.
Yes, you can get a shell inside the container. Use the command below:
docker exec -it <container-id> /bin/bash
When we pull a Docker image or clone a repository from Git, where does this data get stored?
You can pull open-source images from Docker Hub.
When you clone a Git project that is dockerized, look for a Dockerfile in that project and build your own image from it:
docker build -t <yourimagename:tag> .
When you build or pull an image, it gets stored locally.
Use the docker images command to list them.
Refer to a Docker cheat sheet for more commands to play with Docker.
The Docker daemon gets installed on your local machine, and everything you do with the Docker CLI gets executed on your local machine and its containers.
(Not sure about the first part of your question.) You can easily get inside your Docker containers with docker exec -it <container name> /bin/bash; for that you will need the container to be running. Check running containers with docker ps.
(Again, I do not entirely understand your question.) The images that you pull get stored on your local machine as well. You can see all the images present on your machine with docker images.
Let me know if this was helpful and if you need any further information.
Which GCP Compute Engine instance do Data Scientists use to build Docker images and push them to GCP Container Registry?
I am not allowed to have Docker installed on my laptop.
I cannot build on Cloud Shell because my image is too big (>5 GB).
I can do it on a Debian/Ubuntu VM, but I need to install Docker (reinstall the SDK, install Docker, and add a user).
I can do it on a Container-Optimized OS VM, but the SDK (needed to push the image to GCP Container Registry) has to be installed, as well as Python 2.
Is there another, easier option with everything already pre-installed (Docker and the SDK)? How do people do this in general? Are there other points I should take into account when choosing a Compute Engine instance?
Have a look at Google Cloud Build
With it, you can build (among other things) a container image merely by providing a Dockerfile (and any source files).
The service will build your image and push it to Google Container Registry:
https://cloud.google.com/cloud-build/docs/quickstart-docker
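In its simplest form, a build-and-push is a single command run from the directory containing your Dockerfile (the tag below is an example path; substitute your own project ID and image name):

```
# Build remotely with Cloud Build and push the result to Container Registry.
gcloud builds submit --tag gcr.io/my-project/my-image .
```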
I have followed this guide from the Google documentation in order to push a custom Docker image to Google Container Registry and then start a new GCE instance from that image. At first I wanted to try an anaconda3 public image from Docker Hub without any modification (in order to test).
So here is the steps I have done so far after installing gcloud and docker:
gcloud auth configure-docker -> configured Docker with my gcloud credentials
docker pull continuumio/anaconda3 -> pulled the public image
docker tag continuumio/anaconda3 eu.gcr.io/my-project-id/anaconda3 -> tagged the local image with the registry name as specified in the doc
docker push eu.gcr.io/my-project-id/anaconda3 -> pushed the image to GCR
Good! I am now able to see my image through the GCR interface, and also able to deploy it with GCE. I chose to deploy it on an f1-micro instance, Container-Optimized OS 67-10575.62.0 stable, 10 GB boot disk, Allow HTTP traffic, etc.
But when I connect with SSH to the freshly created VM instance, I can't find the anaconda3 libraries (which are supposed to be in /opt/conda). Instead, I see an /opt/google directory, which makes me think the image has not been deployed correctly and GCE is using a default image...
So I tried to check if the image was pushed correctly in GCR, so I decided to delete my local image and pull it once again from GCR:
docker rmi -f eu.gcr.io/my-project-id/anaconda3
docker pull eu.gcr.io/my-project-id/anaconda3:latest
I run the image:
docker run -t -i eu.gcr.io/my-project-id/anaconda3
and I can see that everything is fine: anaconda3 is installed correctly inside /opt/conda with all the tools needed (Pandas, NumPy, Jupyter Notebook, etc.)
I tried to find people with the same problem as me, without any success... maybe I have done something wrong in my process?
Thanks !
TL;DR My problem is that I have pushed an anaconda3 image on Google GCR, but when I launch a virtual instance with this image, I do not have anaconda on it
It's normal that you can't find anaconda libraries installed directly on the GCE instance.
Actually, when you choose to deploy a container image on a GCE VM instance, a Docker container is started from the image you provide (in your example, eu.gcr.io/my-project-id/anaconda3). The libraries are not installed on the host, but rather inside that container (run docker ps to see it, but normally it has the same name as your VM instance). If you run something like :
docker exec -it <docker_container_name> ls /opt/conda
Then you should see the anaconda libraries, only existing inside the container.
When you run docker run -t -i eu.gcr.io/my-project-id/anaconda3, you're actually starting the container and running an interactive bash session inside that container (see the CMD here). That's why you can find anaconda libraries : you are inside the container!
Containerization software (Docker here) provides isolation between your host and your containers. I suggest reading the documentation about containerization, Docker, and how to run containers on Container-Optimized OS.