Can Google's Container OS be used with gRPC on Compute Engine?

My high-level architecture is described in Cloud Endpoints for gRPC.
The Server below is a Compute Engine instance with Docker installed, running two containers (the ESP, and my server):
As per Getting started with gRPC on Compute Engine, I SSH into the VM and install Docker on the instance (see Install Docker on the VM instance). Finally, I pull down the two Docker containers (ESP and my server) and run them.
I've been reading about Container-Optimized OS from Google.
Rather than provisioning an instance with a stock OS and then installing Docker, I could provision the instance with Container-Optimized OS directly, then pull down my containers and run them.
However the only gRPC tutorials are for gRPC on Kubernetes Engine, gRPC on Kubernetes, and gRPC on Compute Engine. There is no mention of Container OS.
Has anyone used Container OS with gRPC, or can anyone see why this wouldn't work?
Creating an instance for advanced scenarios looks relevant because it states:
Use this method to [...] deploy multiple containers, and to use
cloud-init for advanced configuration.
For context, I'm trying to move to CI/CD in Google Cloud, and removing the need to install Docker would be a step in that direction.

You can follow almost the same instructions as in the Getting started with gRPC on Compute Engine guide to deploy your gRPC server with the ESP on Container-Optimized OS. In your case, just think of Container-Optimized OS as an OS with Docker pre-installed (it has more features but, in your case, only this one is interesting).
It is possible to use cloud-init if you want to automate the startup of your Docker containers (gRPC server + ESP) when the VM instance starts. The following cloud-init.cfg file automates the startup of the same containers presented in the documentation examples (with the bookstore sample app). You can replace the Creating a Compute Engine instance part with two steps.
Create a cloud-init config file
Create cloud-init.cfg with the following content:
#cloud-config
runcmd:
- docker network create --driver bridge esp_net
- docker run
  --detach
  --name=bookstore
  --net=esp_net
  gcr.io/endpointsv2/python-grpc-bookstore-server:1
- docker run
  --detach
  --name=esp
  --net=esp_net
  --publish=80:9000
  gcr.io/endpoints-release/endpoints-runtime:1
  --service=bookstore.endpoints.<YOUR_PROJECT_ID>.cloud.goog
  --rollout_strategy=managed
  --http2_port=9000
  --backend=grpc://bookstore:8000
Just after the instance starts, cloud-init will read this configuration and:
create the Docker network (esp_net)
run the bookstore container
run the ESP container. In this container's startup command, replace <YOUR_PROJECT_ID> with your project ID (or adjust the whole --service option to match your service name)
Create a Compute Engine instance with Container-Optimized OS
You can create the instance from the Console, or via the command line:
gcloud compute instances create instance-1 \
--zone=us-east1-b \
--machine-type=n1-standard-1 \
--tags=http-server,https-server \
--image=cos-73-11647-267-0 \
--image-project=cos-cloud \
--metadata-from-file user-data=cloud-init.cfg
The --metadata-from-file flag populates the instance's user-data metadata with the contents of cloud-init.cfg. This cloud-init config is taken into account when the instance starts.
You can validate that this works by:
SSHing into instance-1 and running docker ps to see your running containers (gRPC server + ESP). You may experience some delay between the startup of the instance and the startup of both containers
calling your gRPC service with a client. For example (again with the bookstore application presented in the docs):
INSTANCE_IP=$(gcloud compute instances describe instance-1 --zone us-east1-b --format="value(network_interfaces[0].accessConfigs.natIP)")
python bookstore_client.py --host $INSTANCE_IP --port 80 # returns a valid response
Note that you can also choose not to use cloud-init. You can directly run the docker run commands (the same ones as in the cloud-init.cfg file) on your Container-Optimized OS VM, exactly as you would on any other OS.
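For reference, running the containers by hand on the VM would look like the following (these are the same commands as in the cloud-init.cfg above, with <YOUR_PROJECT_ID> still to be replaced):

```shell
# Create the shared bridge network first
docker network create --driver bridge esp_net

# Start the gRPC bookstore server
docker run --detach --name=bookstore --net=esp_net \
    gcr.io/endpointsv2/python-grpc-bookstore-server:1

# Start the ESP, forwarding HTTP/2 on port 80 to the bookstore backend
docker run --detach --name=esp --net=esp_net --publish=80:9000 \
    gcr.io/endpoints-release/endpoints-runtime:1 \
    --service=bookstore.endpoints.<YOUR_PROJECT_ID>.cloud.goog \
    --rollout_strategy=managed \
    --http2_port=9000 \
    --backend=grpc://bookstore:8000
```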

Related

GCE doesn't deploy GCR image correctly

I have followed this guide from the Google documentation in order to push a custom Docker image to Google Container Registry and then be able to start a new GCE instance with this image. At first I wanted to try an unmodified anaconda3 public image from Docker Hub (in order to test).
So here are the steps I have done so far after installing gcloud and docker:
gcloud auth configure-docker -> configured docker with my gcloud crendentials
docker pull continuumio/anaconda3 -> pulled the public image
docker tag continuumio/anaconda3 eu.gcr.io/my-project-id/anaconda3 -> tagged the local image with the registry name as specified in the doc
docker push eu.gcr.io/my-project-id/anaconda3 -> pushed the image to GCR
Good! I am now able to see my image through the GCR interface, and also able to deploy it with GCE. I chose to deploy it on an f1-micro instance, Container-Optimized OS 67-10575.62.0 stable, 10 GB boot disk, Allow HTTP traffic, etc.
But when I connect with SSH to the freshly created VM instance, I can't find the anaconda3 libraries (which are supposed to be in /opt/conda). Instead, I can see a /opt/google directory, which makes me think that the image has not been deployed correctly and GCE is using a default image...
To check whether the image was pushed correctly to GCR, I deleted my local image and pulled it again from GCR:
docker rmi -f eu.gcr.io/my-project-id/anaconda3
docker pull eu.gcr.io/my-project-id/anaconda3:latest
Then I run the image:
docker run -t -i eu.gcr.io/my-project-id/anaconda3
and I can see that everything is fine: anaconda3 is installed correctly inside /opt/conda with all the tools needed (Pandas, NumPy, Jupyter notebook, etc.)
I tried to find people with the same problem as me, without any success... maybe I have done something wrong in my process?
Thanks !
TL;DR My problem is that I have pushed an anaconda3 image on Google GCR, but when I launch a virtual instance with this image, I do not have anaconda on it
It's normal that you can't find anaconda libraries installed directly on the GCE instance.
Actually, when you choose to deploy a container image on a GCE VM instance, a Docker container is started from the image you provide (in your example, eu.gcr.io/my-project-id/anaconda3). The libraries are not installed on the host, but rather inside that container (run docker ps to see it; normally it has the same name as your VM instance). If you run something like:
docker exec -it <docker_container_name> ls /opt/conda
Then you should see the anaconda libraries, only existing inside the container.
When you run docker run -t -i eu.gcr.io/my-project-id/anaconda3, you're actually starting the container and running an interactive bash session inside it (see the CMD here). That's why you can find the anaconda libraries: you are inside the container!
Containerization software (Docker here) provides isolation between your host and your containers. I suggest you read documentation about containerization, Docker, and how to run containers on Container-Optimized OS.
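A quick way to find the auto-started container's name and look inside it (these are standard docker CLI flags; the container name is whatever docker ps reports on your VM):

```shell
# List the names of all running containers
docker ps --format '{{.Names}}'

# Inspect the anaconda install inside the container
# (substitute the name reported by the previous command)
docker exec -it <docker_container_name> ls /opt/conda
```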

Start service using systemctl inside docker container

In my Dockerfile I am trying to install multiple services and want to have them all start up automatically when I launch the container.
One among the services is mysql and when I launch the container I don't see the mysql service starting up. When I try to start manually, I get the error:
Failed to get D-Bus connection: Operation not permitted
Dockerfile:
FROM centos:7
RUN yum -y install mariadb mariadb-server
COPY start.sh start.sh
CMD ["/bin/bash", "start.sh"]
My start.sh file:
service mariadb start
Docker build:
docker build --tag="pbellamk/mariadb" .
Docker run:
docker run -it -d --privileged=true pbellamk/mariadb bash
I have checked the centos:systemd image and that doesn't help either. How do I launch the container with the services started using systemctl/service commands?
When you do docker run with bash as the command, the init system (e.g. systemd) doesn't get started (nor does your start script, since the command you pass overrides the CMD in the Dockerfile). Try changing the command to /sbin/init, start the container in daemon mode with -d, and then look around in a shell using docker exec -it <container id> sh.
Docker is designed around the idea of a single service/process per container. Although it definitely supports running multiple processes in a container, and in no way stops you from doing that, you will eventually run into areas where multiple services in a container don't quite map to what Docker or external tools expect. Features like service scaling, or Docker Swarm across hosts, only support the concept of one service per container.
Docker Compose allows you to compose multiple containers into a single definition, which means you can use more of the standard, prebuilt containers (httpd, mariadb) rather than building your own. Compose definitions map to Docker Swarm services fairly easily. Also look at Kubernetes and Marathon/Mesos for managing groups of containers as a service.
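As a sketch of the Compose approach (service names, ports, and the password here are placeholders, not from the question):

```yaml
# docker-compose.yml - one service per container, using stock images
version: "3"
services:
  web:
    image: httpd:2.4
    ports:
      - "8080:80"
    depends_on:
      - db
  db:
    image: mariadb:10
    environment:
      MYSQL_ROOT_PASSWORD: example   # placeholder credential
```

Running docker-compose up then starts both containers, each with a single main process, on a shared network.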
Process management in Docker
It's possible to run systemd in a container, but it requires --privileged access to the host and the /sys/fs/cgroup volume mounted, so it may not be the best fit for most use cases.
The s6-overlay project provides a more Docker-friendly process-management system using s6.
It's fairly rare you actually need ssh access into a container, but if that's a hard requirement then you are going to be stuck building your own containers and using a process manager.
You can avoid running a systemd daemon inside a docker container altogether. You can even avoid writing a special start.sh script, which is another benefit of using the docker-systemctl-replacement script.
The docker systemctl.py can parse the normal *.service files to know how to start and stop services. If you register it as the CMD of an image, it will look for all the systemctl-enabled services; those will be started and stopped in the correct order.
The current test suite includes test cases for the LAMP stack, including CentOS, so it should run fine in your specific setup.
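A hedged sketch of how the question's Dockerfile might look with docker-systemctl-replacement as the CMD (the COPY assumes you have downloaded systemctl.py from that project into the build context; check the project's README for the exact recommended usage):

```dockerfile
FROM centos:7
RUN yum -y install mariadb mariadb-server
# systemctl.py comes from the docker-systemctl-replacement project
COPY systemctl.py /usr/bin/systemctl
# Mark the service as enabled so the replacement script starts it
RUN systemctl enable mariadb
# The replacement script acts as PID 1 and starts all enabled services
CMD ["/usr/bin/systemctl"]
```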
I found this project:
https://github.com/defn/docker-systemd
which can be used to create an image based on the stock Ubuntu image, but with systemd and multi-user mode.
My use case is the first one mentioned in its README. I use it to test the installer script of my application, which is installed as a systemd service. The installer creates a systemd service, then enables and starts it. I need CI tests for the installer: the test should build the installer, install the application on an Ubuntu system, and connect to the service from outside.
Without systemd the installer would fail, and writing the test with Vagrant would be much more difficult. So there are valid use cases for systemd in Docker.

Linking two applications in Cloud Foundry

My application is split into two applications (microservices?). One is deployed to CF as a Docker container, and the other is deployed as a regular WAR file (a Grails application). The Docker container is a Python/Flask application that exposes a REST API. The web app uses this REST API.
Previously I had these applications as two separate docker containers and would execute them using the docker --link flag but now I'm looking to put these applications in Cloud Foundry. In the docker world I used to do the following.
docker run -d --link python_container -p 8080:8080 --name war_container war_image
The above command would create environment variables in war_container exposing the IP address and port of the python_container. Using these environment variables, I could communicate with the python_container from my WAR application. These environment variables look like this:
PYTHON_CONTAINER_PORT_5000_TCP_ADDR=<ipaddr>
PYTHON_CONTAINER_PORT_5000_TCP_PORT=<port>
Question
Is it possible to do something similar in Cloud Foundry? I have already deployed the Docker container in CF. How can I link my WAR application in such a way that the environment variables linking it to the python_container are created as soon as I push the WAR file?
Currently I push the docker container to PCFDev using this:
cf push my-app -o 192.168.0.102:5002/my/container:latest
Then I push the war file using
cf push cf-sample -n cf-sample
The manifest.yml for cf-sample is:
---
applications:
- name: cfsample
memory: 1G
instances: 1
path: target/cf-sample-0.1.war
buildpack: java_buildpack
services:
- mysql
- rabbitmq
How would the Docker way work if you had multiple instances of your Python app, and hence multiple IPs/ports?
For PCF, you could just have the Java app talk to your Python app "through the front door": https://my-python-app.local.pcfdev.io.
You can also try to discover your Python app's IP and port (see https://docs.run.pivotal.io/devguide/deploy-apps/environment-variable.html#CF-INSTANCE-IP for instance) and then pass these values to your Java app as environment variables, but this is a very brittle solution.
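Either way, one way to wire the Python app's address into the Java app is via user-provided environment variables (cf set-env and cf restage are standard cf CLI commands; the variable names and the route here are illustrative, not from the question):

```shell
# Tell the Java app where to find the Python app "through the front door"
cf set-env cf-sample PYTHON_APP_URL https://my-python-app.local.pcfdev.io

# Restage so the Java app picks up the new environment variable
cf restage cf-sample
```

Unlike the Docker --link mechanism, these variables are set explicitly and survive restarts of either app, though they must be updated by hand if the route changes.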
If you're interested in direct container-to-container networking, you might be interested in reading about and giving feedback on this proposal.

Is it possible to use cloud-init and heat-cfntools inside a Docker container?

I want to use OpenStack Heat to create an application which consists of several Docker containers, and monitor some metrics of these containers, like: CPU/Mem utilization, and other application-specific metrics.
So is it possible to install cloud-init and heat-cfntools when preparing the Docker image via a Dockerfile, and then run a Docker container based on that image with cloud-init and heat-cfntools running in it?
Thanks!
So is it possible to install cloud-init and heat-cfntools when preparing the Docker image via a Dockerfile
It is possible to use cloud-init inside a Docker container, if you (a) have an image with cloud-init installed, (b) have the correct commands configured in your ENTRYPOINT or CMD script, and (c) your container is running in an environment with an available metadata service.
Of those requirements, (c) is probably the most problematic; unless you are booting containers using the nova-docker driver, it is unlikely that your containers will have access to the Nova metadata service.
I am not particularly familiar with heat-cfntools, although a quick glance at the code suggests that it may work without cloud-init by authenticating against the Heat CFN API using EC2-style credentials, which you would presumably need to provide via environment variables or something similar.
That said, it is typically much less necessary to run cloud-init inside Docker containers: if you need to customize an image, you build a new one based on it with a Dockerfile and re-deploy, or you specify any necessary additional configuration via environment variables.
If your tools require monitoring processes on the host, you'll probably want to run with
docker run --pid=host
This is a feature introduced in Docker Engine version 1.5.
See http://docs.docker.com/reference/run/#pid-settings

How do I run Docker on Google Compute Engine?

What's the procedure for installing and running Docker on Google Compute Engine?
Until the recent GA release of Compute Engine, running Docker was not supported on GCE (due to kernel restrictions), but with the newly announced ability to deploy and use custom kernels, that restriction is gone and Docker now works great on GCE.
Thanks to proppy, the instructions for running Docker on Google Compute Engine are now documented for you here: http://docs.docker.io/en/master/installation/google/. Enjoy!
They now have a VM image with Docker pre-installed.
$ gcloud compute instances create instance-name \
    --image projects/google-containers/global/images/container-vm-v20140522 \
    --zone us-central1-a \
    --machine-type f1-micro
https://developers.google.com/compute/docs/containers/container_vms
A little late, but I wanted to add an answer with a more detailed workflow and links, since answers are still rather scattered:
Create a Docker image
a. Locally
b. Using Google Container Builder
Push the local Docker image to Google Container Registry
docker tag <current name>:<current tag> gcr.io/<project name>/<new name>
gcloud docker -- push gcr.io/<project name>/<new name>
UPDATE
If you have upgraded to Docker client versions above 18.03, gcloud docker commands are no longer supported. Instead of the above push, use:
docker push gcr.io/<project name>/<new name>
If you have issues after upgrading, see more here.
Create a compute instance.
This process actually hides a number of steps. It creates a virtual machine (VM) instance using Google Compute Engine, which uses a Google-provided, container-optimized OS image. The image includes Docker and additional software responsible for starting our Docker container. Our container image is then pulled from the Container Registry and run using docker run when the VM starts. Note: you still need to use docker attach even though the container is running. It's worth pointing out that only one container can be run per VM instance. Use Kubernetes to deploy multiple containers per VM (the steps are similar). Find more details on all the options in the links at the bottom of this post.
gcloud beta compute instances create-with-container <desired instance name> \
--zone <google zone> \
--container-stdin \
--container-tty \
--container-image <google repository path>:<tag> \
--container-command <command (in quotes)> \
--service-account <e-mail>
Tip: You can view available gcloud projects with gcloud projects list.
SSH into the compute instance.
gcloud beta compute ssh <instance name> \
--zone <zone>
Stop or delete the instance. If an instance is stopped, you will still be billed for resources such as static IPs and persistent disks. To avoid being billed at all, delete the instance.
a. Stop
gcloud compute instances stop <instance name>
b. Delete
gcloud compute instances delete <instance name>
Related Links:
More on deploying containers on VMs
More on zones
More create-with-container options
As of now, for just Docker, the Container-optimized OS is certainly the way to go:
gcloud compute images list --project=cos-cloud --no-standard-images
It comes with Docker and Kubernetes preinstalled. The only thing it lacks is the Cloud SDK command-line tools. (It also lacks python3, despite Google's announcement of the Python 2 sunset on 2020-01-01. Well, there are still 27 days to go...)
As an additional piece of information I wanted to share, I was searching for a standard image that would offer both docker and gcloud/gsutil preinstalled (and found none, oops). I do not think I'm alone in this boat, as gcloud is something you can hardly do without on GCE¹.
My best find so far was the Ubuntu 18.04 image, which comes with its own (non-Debian) package manager, snap. The image has the Cloud SDK preinstalled, and Docker installs literally in a snap: 11 seconds on an f1 instance in an initial test, about 6 s on an n1-standard-1. The only snag I hit was an error message saying that the Docker authorization helper was not available; an attempt to add it with gcloud components install failed because the SDK was installed as a snap, too. However, the helper is actually there, only not in the PATH. The following is what got me both tools available in a single transient builder VM with the least amount of setup-script runtime, starting from the supported Ubuntu 18.04 LTS image²:
snap install docker
ln -s /snap/google-cloud-sdk/current/bin/docker-credential-gcloud /usr/bin
gcloud -q auth configure-docker
¹ I needed both for a Daisy workflow imaging a disk with both artifacts from GS buckets and a couple of huge, 2 GB+ library images from the local gcr.io registry that were shared between the build (as Cloud Builder layers) and the runtime (where I had to create and extract containers on the newly built image). But that's beside the point; one may need both tools for a multitude of possible reasons.
² Use gcloud compute images list --uri | grep ubuntu-1804 to get the most current one.
Google's GitHub site now offers a GCE image including Docker: https://github.com/GoogleCloudPlatform/cloud-sdk-docker-image
It's as easy as:
creating a Compute Engine instance
curl https://get.docker.io | bash
Using docker-machine is another way to host your Google Compute instance with Docker.
docker-machine create \
--driver google \
--google-project $PROJECT \
--google-zone asia-east1-c \
--google-machine-type f1-micro $YOUR_INSTANCE
If you want to log in to this machine on Google Cloud Compute Engine, just use docker-machine ssh $YOUR_INSTANCE.
Refer to the docker-machine GCE driver documentation.
There is now improved support for containers on GCE:
Google Compute Engine is extending its support for Docker containers. This release is an Open Preview of a container-optimized OS image that includes Docker and an open source agent to manage containers. Below, you'll find links to interact with the community interested in Docker on Google, open source repositories, and examples to get started. We look forward to hearing your feedback and seeing what you build.
Note that this is currently (as of 27 May 2014) in Open Preview:
This is an Open Preview release of containers on Virtual Machines. As a result, we may make backward-incompatible changes and it is not covered by any SLA or deprecation policy. Customers should take this into account when using this Open Preview release.
Running Docker on a GCE instance is not supported; the instance goes down and you are not able to log in again.
We can use the Docker image provided by GCE to create an instance.
If your Google Cloud virtual machine is based on Ubuntu, use the following command to install Docker:
sudo apt install docker.io
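After installing, you may want to verify the daemon and (optionally) let your user run docker without sudo; a standard post-install sketch (commands from the Docker and systemd CLIs, nothing project-specific):

```shell
# Check that the Docker daemon is running
sudo systemctl status docker --no-pager

# Allow the current user to run docker without sudo
# (log out and back in for the group change to take effect)
sudo usermod -aG docker $USER

# Smoke test: pull and run the hello-world image
sudo docker run --rm hello-world
```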
You may use this link: https://cloud.google.com/cloud-build/docs/quickstart-docker#top_of_page.
The link explains how to use Cloud Build to build a Docker image and push the image to Container Registry. You will first build the image using a Dockerfile, and then build the same image using Cloud Build's build configuration file.
It's better to set this up while creating the compute instance:
Go to the VM instances page.
Click the Create instance button to create a new instance.
Under the Container section, check Deploy container image.
Specify a container image name under Container image and configure options to run the container if desired. For example, you can specify gcr.io/cloud-marketplace/google/nginx1:1.12 for the container image.
Click Create.
Installing Docker on GCP Compute Engine VMs:
This is the link to GCP documentation on the topic:
https://cloud.google.com/compute/docs/containers#installing
That page links to the Docker install guide; follow the instructions for the Linux distribution running in your VM.
