I am using Ubuntu 20.04
while following this book -> https://www.oreilly.com/library/view/kubeflow-for-machine/9781492050117/
On page 17, it says the following (only the relevant parts), which I don't understand:
You will want to store container images in something called a
container registry. The container registry will be accessed by your Kubeflow cluster.
I am going to use Docker Hub as the container registry. Next,
"we'll assume that you've set your container registry via an environment variable
$CONTAINER_REGISTRY in your shell." NOTE: If you use a registry that isn't on Google Cloud
Platform, you will need to configure the Kubeflow Pipelines container builder to have access to
your registry by following the Kaniko configuration guide -> https://oreil.ly/88Ep-
First, I do not understand how to set the container registry through an environment variable. Am I supposed to give it a link/URL?
Second, I've gone through the Kaniko config guide and did everything as told -> creating config.json with "auth": "my password for Docker Hub". After that, the book says:
"To make sure your Docker installation is properly configured, you can write a one-line Dockerfile and
push it to your registry."
Example 2.7: Specify that the new container is built on top of Kubeflow's container
FROM gcr.io/kubeflow-images-public/tensorflow-2.1.0-notebook-cpu:1.0.0
Example 2.8: Build the new container and push it to the registry for use
IMAGE="${CONTAINER_REGISTRY}/kubeflow/test:v1"
docker build -t "${IMAGE}" -f Dockerfile . docker push "${IMAGE}"
I've created a Dockerfile with the code from Example 2.7 inside it, then ran the code from Example 2.8; however, it's not working.
Make sure that:
You set the environment variable using export CONTAINER_REGISTRY=docker.io/your_username in your terminal (or put it in your ~/.bash_profile and run source ~/.bash_profile).
Your .docker/config.json does not have your password in plain text but the base64 encoding of username:password, for example the output of echo -n 'username:password' | base64.
The docker build and the docker push are two separate commands; in your example they're run as one command, unlike in the book (see the consolidated sketch below).
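Putting those three points together, a minimal sketch, assuming a Docker Hub account named your_username (the username and password are placeholders):

# 1. Point the build at your registry (Docker Hub)
export CONTAINER_REGISTRY=docker.io/your_username

# 2. config.json wants the base64 of "username:password", not the raw password
echo -n 'your_username:your_password' | base64   # paste the output into the "auth" field

# 3. Build and push as two separate commands
IMAGE="${CONTAINER_REGISTRY}/kubeflow/test:v1"
docker build -t "${IMAGE}" -f Dockerfile .
docker push "${IMAGE}"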
I'm new to Docker. Most of the tutorials on Docker cover the same thing. I'm afraid I'm just ending up with piles of questions, and no answers really. I've come here after my fair share of Googling; kindly help me out with these basic questions.
When we install Docker, where does it get installed? Is it on our local computer or does it happen in the cloud?
Where do containers get pulled into? Is there a way I can see what is inside a container? (I'm using Ubuntu 18.04)
When we pull a Docker image or clone a repository from Git, where does this data get stored?
Looks like you are confused after reading too many documents. Let me try to put this in simple words. Hope this will help.
When we install Docker, where does it get installed? Is it on our local computer or does it happen in the cloud?
We install Docker on a VM, be it your on-prem VM or one in the cloud. You can install Docker on your laptop as well.
Where do containers get pulled into? Is there a way I can see what is inside a container? (I'm using Ubuntu 18.04)
This question comes down to terminology. We don't pull a container; we pull an image and run a container from it.
Quick terminology summary
Container -> a running instance of an image; containers let you easily package an application's code, configurations, and dependencies and run them in isolation.
Dockerfile -> a text file where you write your commands, the blueprint for the image.
Image -> an image is built from a Dockerfile; you use an image to create and run containers.
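To see how the three relate, a minimal sketch (the image name here is just an example):

# Dockerfile containing a single line:
#   FROM ubuntu:18.04

docker build -t myfirstimage:v1 .           # Dockerfile -> image
docker run -it myfirstimage:v1 /bin/bash    # image -> running container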
Yes, you can get a shell inside a running container. Use the command below:
docker exec -it <container-id> /bin/bash
When we pull a Docker image or clone a repository from Git, where does this data get stored?
You can pull open-source images from Docker Hub.
When you clone a Git project that is dockerized, you can look for a Dockerfile in that project and create your own image by building it:
docker build -t <yourimagename:tag> .
When you build or pull an image, it gets stored locally.
Use the docker images command to list them.
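For example, a quick sketch (ubuntu:18.04 is just a sample image):

docker pull ubuntu:18.04                 # download an image from Docker Hub
docker images                            # list images stored locally
docker info | grep "Docker Root Dir"     # show where Docker keeps its data (usually /var/lib/docker)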
The Docker daemon gets installed on your local machine, and everything you do with the Docker CLI gets executed on your local machine and its containers.
(not sure about the first part of your question). You can easily access your Docker containers with docker exec -it <container name> /bin/bash; for that you will need the container to be running. Check running containers with docker ps.
(again, I do not entirely understand your question) The images that you pull get stored on your local machine as well. You can see all the images present on your machine with docker images.
Let me know if it was helpful and if you need any further information.
While trying to create a service on a docker-machine Swarm, I got an "image doesn't exist" error on the manager node. When I checked with the docker images command on the manager node, no image was there, as expected. But on the machine where I built it, I have those images. I want to access these images on the manager node. I've read a few articles where it was mentioned that maybe I have to upload the image to Docker Hub and then pull it from there. But I want to access it locally. Is there any way to do this, as I'm a newbie to Docker?
This is the command I tried on my manager machine:
docker#manager:~$ docker service create --name "api-client" -p 4200:4200 api_client
This is my docker images output:
REPOSITORY TAG IMAGE ID CREATED SIZE
api_client latest 097b19c4deb8 27 hours ago 1.15GB
But in my docker#manager terminal, the docker images list is empty.
The problem is that there is no registry holding the image. The image needs to be pushed to a registry and pulled from it onto each node in the Swarm before it can run. In general you need to do the following (a consolidated sketch follows these steps):
Set up a registry. If you want a local registry there is a guide here, but it will be some hassle to get it up and running in the "insecure http" version. An easier way is to get yourself a free Docker Hub account and put your image there.
Tag your local image with the registry name. How to do this is shown in the guide above.
docker tag <local image> <repository>/<image:tag>
Log in to the registry (if in the cloud) and push your image to it:
docker login
docker push <repository>/<image>:<tag>
To run the image (your command)
docker service create --name "api-client" -p 4200:4200 <repository>/<image>:<tag>
You can also try to pull an image into the local cache of a node using:
docker pull <repository>/<image>:<tag>
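Putting the steps together, a minimal end-to-end sketch, assuming a free Docker Hub account named yourhubuser (a placeholder):

# on the machine where the image was built
docker tag api_client yourhubuser/api_client:v1
docker login
docker push yourhubuser/api_client:v1

# on the Swarm manager
docker service create --name "api-client" -p 4200:4200 yourhubuser/api_client:v1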
Is it possible to pull an image from another docker machine without having to set up a docker repository?
I have 2 docker machines for development and I would like to deploy an image on the second docker machine that I built with the first one.
Is this possible?
If you have created your Docker servers using docker-machine you could do a save/load using remote access to the Docker agents on each server (docker save/load transfers images; export/import operates on containers):
docker $(docker-machine config server1) save exampleimage:1.0 | docker $(docker-machine config server2) load
But... it would be a lot simpler to just rebuild the image on the second server, using the same Dockerfile.
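For example, a one-line sketch of that rebuild, assuming the same Dockerfile is in the current directory:

# build directly against server2's Docker daemon
docker $(docker-machine config server2) build -t exampleimage:1.0 .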
What's the procedure for installing and running Docker on Google Compute Engine?
Until the recent GA release of Compute Engine, running Docker was not supported on GCE (due to kernel restrictions), but with the newly announced ability to deploy and use custom kernels, that restriction no longer applies and Docker now works great on GCE.
Thanks to proppy, the instructions for running Docker on Google Compute Engine are now documented for you here: http://docs.docker.io/en/master/installation/google/. Enjoy!
They now have a VM image with Docker pre-installed.
$ gcloud compute instances create instance-name \
    --image projects/google-containers/global/images/container-vm-v20140522 \
    --zone us-central1-a \
    --machine-type f1-micro
https://developers.google.com/compute/docs/containers/container_vms
A little late, but I wanted to add an answer with a more detailed workflow and links, since answers are still rather scattered:
Create a Docker image
a. Locally
b. Using Google Container Builder
Push the local Docker image to Google Container Registry
docker tag <current name>:<current tag> gcr.io/<project name>/<new name>
gcloud docker -- push gcr.io/<project name>/<new name>
UPDATE
If you have upgraded to a Docker client version above 18.03, gcloud docker commands are no longer supported. Instead of the push above, use:
docker push gcr.io/<project name>/<new name>
If you have issues after upgrading, see more here.
Create a compute instance.
This step actually hides a number of sub-steps. It creates a virtual machine (VM) instance using Google Compute Engine, which uses a Google-provided, container-optimized OS image. The image includes Docker and additional software responsible for starting our Docker container. Our container image is then pulled from the Container Registry and run using docker run when the VM starts. Note: you still need to use docker attach even though the container is running. It's worth pointing out that only one container can be run per VM instance; use Kubernetes to deploy multiple containers per VM (the steps are similar). Find more details on all the options in the links at the bottom of this post.
gcloud beta compute instances create-with-container <desired instance name> \
--zone <google zone> \
--container-stdin \
--container-tty \
--container-image <google repository path>:<tag> \
--container-command <command (in quotes)> \
--service-account <e-mail>
Tip: You can view available gcloud projects with gcloud projects list.
SSH into the compute instance.
gcloud beta compute ssh <instance name> \
--zone <zone>
Stop or delete the instance. If an instance is stopped, you will still be billed for resources such as static IPs and persistent disks. To avoid being billed at all, delete the instance.
a. Stop
gcloud compute instances stop <instance name>
b. Delete
gcloud compute instances delete <instance name>
Related Links:
More on deploying containers on VMs
More on zones
More create-with-container options
As of now, for just Docker, the Container-optimized OS is certainly the way to go:
gcloud compute images list --project=cos-cloud --no-standard-images
It comes with Docker and Kubernetes preinstalled. The only thing it lacks is the Cloud SDK command-line tools. (It also lacks python3, despite Google's announcement of the Python 2 sunset on 2020-01-01. Well, it's still 27 days to go...)
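A sketch of creating an instance from that image family (the instance name and zone are placeholders):

gcloud compute instances create docker-host \
    --image-family cos-stable \
    --image-project cos-cloud \
    --zone us-central1-a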
As an additional piece of information I wanted to share: I was searching for a standard image that would offer both docker and gcloud/gsutil preinstalled (and found none, oops). I do not think I'm alone in this boat, as gcloud is the thing you can hardly go without on GCE¹.
My best find so far was the Ubuntu 18.04 image that comes with its own (non-Debian) package manager, snap. The image comes with the Cloud SDK preinstalled, and Docker installs literally in a snap: 11 seconds on an F1 instance in an initial test, about 6 s on an n1-standard-1. The only snag I hit was an error message that the Docker authorization helper was not available; an attempt to add it with gcloud components install failed because the SDK was installed as a snap, too. However, the helper is actually there, only not in the PATH. The following is what got me both tools available in a single transient builder VM with the least amount of setup-script runtime, starting off the supported Ubuntu 18.04 LTS image²:
snap install docker
ln -s /snap/google-cloud-sdk/current/bin/docker-credential-gcloud /usr/bin
gcloud -q auth configure-docker
¹ I needed both for a Daisy workflow imaging a disk with both artifacts from GS buckets and a couple of huge, 2 GB+ library images from the local gcr.io registry that were shared between the build (as cloud builder layers) and the runtime (where I had to create and extract containers to the newly built image). But that's beside the point; one may need both tools for a multitude of possible reasons.
² Use gcloud compute images list --uri | grep ubuntu-1804 to get the most current one.
Google's GitHub site now offers a GCE image that includes Docker: https://github.com/GoogleCloudPlatform/cloud-sdk-docker-image
It's as easy as:
creating a Compute Engine instance
running curl https://get.docker.io | bash
Using docker-machine is another way to provision a Google Compute instance with Docker.
docker-machine create \
--driver google \
--google-project $PROJECT \
--google-zone asia-east1-c \
--google-machine-type f1-micro $YOUR_INSTANCE
If you want to log in to this machine, just use docker-machine ssh $YOUR_INSTANCE.
Refer to the docker-machine GCE driver documentation.
There is now improved support for containers on GCE:
Google Compute Engine is extending its support for Docker containers. This release is an Open Preview of a container-optimized OS image that includes Docker and an open source agent to manage containers. Below, you'll find links to interact with the community interested in Docker on Google, open source repositories, and examples to get started. We look forward to hearing your feedback and seeing what you build.
Note that this is currently (as of 27 May 2014) in Open Preview:
This is an Open Preview release of containers on Virtual Machines. As a result, we may make backward-incompatible changes and it is not covered by any SLA or deprecation policy. Customers should take this into account when using this Open Preview release.
Running Docker on a plain GCE instance is not supported: the instance goes down and you are not able to log in again. We can instead use the Docker image provided by GCE to create an instance.
If your Google Cloud virtual machine is based on Ubuntu, use the following command to install Docker:
sudo apt install docker.io
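A quick verification sketch after the install (hello-world is Docker's standard test image):

sudo systemctl status docker      # confirm the daemon is running
sudo docker run hello-world       # pull and run Docker's test image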
You may use this link: https://cloud.google.com/cloud-build/docs/quickstart-docker#top_of_page.
The linked page explains how to use Cloud Build to build a Docker image and push the image to Container Registry. You first build the image using a Dockerfile, and then build the same image using Cloud Build's build configuration file.
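A minimal sketch of the flow that quickstart describes (the project and image names are placeholders):

# build from the local Dockerfile and push to Container Registry in one step
gcloud builds submit --tag gcr.io/<project-id>/<image-name> .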
It's better to set this up while creating the compute instance (the equivalent CLI sketch follows the steps below):
Go to the VM instances page.
Click the Create instance button to create a new instance.
Under the Container section, check Deploy container image.
Specify a container image name under Container image and configure options to run the container if desired. For example, you can specify gcr.io/cloud-marketplace/google/nginx1:1.12 for the container image.
Click Create.
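The equivalent CLI sketch of those console steps, reusing the example image above (the instance name is a placeholder):

gcloud compute instances create-with-container nginx-vm \
    --container-image gcr.io/cloud-marketplace/google/nginx1:1.12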
Installing Docker on GCP Compute Engine VMs:
This is the link to GCP documentation on the topic:
https://cloud.google.com/compute/docs/containers#installing
There it links to the Docker install guide; follow the instructions for whichever type of Linux is running in the VM. For example, the guide's convenience-script route is sketched below.
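A sketch of that convenience-script route (review the script before running it; get.docker.com is Docker's official install-script host):

curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh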