I was able to get into an Azure Kubernetes Service (AKS) node by referring to Connect to Azure Kubernetes Service (AKS) cluster nodes for maintenance or troubleshooting.
I am trying to list the images present on the worker node. Do I need to install something like nerdctl/crictl on the nodes, or is there another command, readily available on the nodes, that I can use?
In short, what's the alternative for Docker commands in AKS worker nodes?
The CONTAINER-RUNTIME is containerd://1.4.9+azure.
It seems that you are running these commands inside the container. Switch to the host with chroot /host and run them there.
The image I used is --image=mcr.microsoft.com/dotnet/runtime-deps:6.0.
You can try the ctr CLI tool, which comes prepackaged with containerd.
ctr -n <namespace> image list
Note: to find the available namespaces, run
ctr ns list
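On AKS, the images Kubernetes pulls typically live in the k8s.io namespace (an assumption about the default containerd setup; verify with the command above), so a typical invocation would be:
ctr -n k8s.io image list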
Check Why and how to use containerd from the command line. I am not sure whether it helps, but it does list containerd commands to try.
The reference is Debugging Kubernetes nodes with crictl.
Use these commands to check:
sudo crictl --help
sudo crictl ps
sudo crictl images
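Combining this with the chroot tip above, a session from a node debug container might look like this (assuming the node's root filesystem is mounted at /host, as it is when you use kubectl debug node/<node-name>):
chroot /host
crictl ps
crictl images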
I am looking for any sample project to create MarkLogic 3 Nodes Cluster directly from ML Docker Hub image (https://hub.docker.com/_/marklogic) via Gradle on a single machine.
The idea is to automatically spin up different ML versions for dev environment setup.
The current three-node cluster example in the ml-gradle GitHub repo installs ML from an rpm installation package.
I would like to directly use the ML docker hub image instead.
MarkLogic on Docker Hub includes instructions for spinning up a cluster with this simple command:
docker-compose -f cluster.yml up -d --scale dnode=2
To run this, pull down the Docker image: you'll need a free account on Docker Hub, and you'll need to complete the (also free) checkout process to get access to the MarkLogic image. Then you can create the cluster.yml file using the example given on the setup instructions page on Docker Hub.
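For orientation, a cluster.yml along these lines defines a bootstrap node plus a scalable dnode service. This is only a rough sketch; the image name and the environment variable names (MARKLOGIC_INIT, MARKLOGIC_ADMIN_USERNAME, MARKLOGIC_ADMIN_PASSWORD, MARKLOGIC_JOIN_CLUSTER) are assumptions based on the Docker Hub instructions and should be verified against the example there:
version: '3'
services:
  bootstrap:
    image: <MarkLogic image from Docker Hub>
    environment:
      - MARKLOGIC_INIT=true
      - MARKLOGIC_ADMIN_USERNAME=admin
      - MARKLOGIC_ADMIN_PASSWORD=admin
    ports:
      - "8000-8002:8000-8002"
  dnode:
    image: <MarkLogic image from Docker Hub>
    environment:
      - MARKLOGIC_INIT=true
      - MARKLOGIC_ADMIN_USERNAME=admin
      - MARKLOGIC_ADMIN_PASSWORD=admin
      - MARKLOGIC_JOIN_CLUSTER=true
    depends_on:
      - bootstrap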
As @rjrudin points out, you can set up a Gradle task to do this.
ml-gradle is typically used to deploy an application to an existing ML cluster. To actually create the ML cluster, you use the "docker" executable. You can automate this via Gradle's Exec task if you wish, but doing so is outside the scope of ml-gradle which assumes that you already have an ML cluster setup.
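A minimal sketch of such a Gradle task (the task name is hypothetical, and cluster.yml is assumed to sit in the project root):
task startMlCluster(type: Exec) {
    // Shell out to docker-compose to bring up one bootstrap node and two dnodes
    commandLine 'docker-compose', '-f', 'cluster.yml', 'up', '-d', '--scale', 'dnode=2'
}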
I'm using the new docker compose ECS integration to create an ecs context for deploying as described here and here. I selected during docker context create ecs my-context for it to use an existing AWS profile which has us-west-2 configured as its default region. However, docker compose up always results in it deploying to us-east-1. I tried exporting DEFAULT_AWS_REGION but that didn't work either. Is there a way to set the region in the context? It looks like the older docker ecs setup command asked for the region but that cmd is now deprecated.
I was able to fix this via the AWS CLI directly.
aws configure set default.region eu-central-1
Source: https://awscli.amazonaws.com/v2/documentation/api/latest/reference/configure/set.html
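If your context uses a named profile rather than the default one, the same setting can be applied per profile (the profile name my-profile is just an example):
aws configure set region us-west-2 --profile my-profile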
Docker Desktop for Windows and macOS come with the docker-desktop cluster. I'm trying to figure out how to either copy it, or make a new cluster based on it as a template. I like to have clusters for each project I'm working on so that things like PVC, PV and secrets are isolated, and I can just switch between them with kubectl config use-context project1. I've been looking through documentation and Google search results and haven't identified how to do this, or if it is possible. Any suggestions?
If there's a set of resources that you want to routinely deploy to new clusters, you can create a source-control repository that contains the YAML files you need. Then when you have a new cluster you can kubectl apply -f your directory of bootstrap artifacts. Using kind, for example:
kind create cluster --name dev2
kubectl apply -f ./bootstrap/
...
kind delete cluster --name dev2
If you need to configure or parameterize this setup in some way, packaging it as a Helm chart can make sense.
This approach also means avoiding the imperative-style kubectl create, kubectl run, and kubectl expose type commands. Create the YAML files you need, check them in, and use kubectl apply to install them.
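As an illustration, a manifest in that bootstrap directory might be as simple as this (the namespace and secret names are hypothetical):
apiVersion: v1
kind: Namespace
metadata:
  name: project1
---
apiVersion: v1
kind: Secret
metadata:
  name: project1-credentials
  namespace: project1
stringData:
  api-key: changeme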
It can be a little tricky to usefully export a cluster, and this isn't something that's commonly done. For example, if you have a Pod, was it created by a Deployment or directly through a YAML file? Was that PersistentVolume hand-created, or did a provisioner create it, and are its settings specific to a particular Kubernetes environment? Working from a reproducible source-controlled tree avoids these issues.
What's the procedure for installing and running Docker on Google Compute Engine?
Until the recent GA release of Compute Engine, running Docker was not supported on GCE (due to kernel restrictions), but with the newly announced ability to deploy and use custom kernels, that restriction no longer applies, and Docker now works great on GCE.
Thanks to proppy, the instructions for running Docker on Google Compute Engine are now documented for you here: http://docs.docker.io/en/master/installation/google/. Enjoy!
They now have a VM image with Docker pre-installed.
$ gcloud compute instances create instance-name \
--image projects/google-containers/global/images/container-vm-v20140522 \
--zone us-central1-a \
--machine-type f1-micro
https://developers.google.com/compute/docs/containers/container_vms
A little late, but I wanted to add an answer with a more detailed workflow and links, since answers are still rather scattered:
Create a Docker image
a. Locally
b. Using Google Container Builder
Push the local Docker image to Google Container Registry
docker tag <current name>:<current tag> gcr.io/<project name>/<new name>
gcloud docker -- push gcr.io/<project name>/<new name>
UPDATE
If you have upgraded to Docker client versions above 18.03, gcloud docker commands are no longer supported. Instead of the above push, use:
docker push gcr.io/<project name>/<new name>
If you have issues after upgrading, see more here.
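For example, with a hypothetical project my-project and a locally built image my-app:latest, the tag-and-push pair becomes:
docker tag my-app:latest gcr.io/my-project/my-app
docker push gcr.io/my-project/my-app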
Create a compute instance.
This process actually abstracts away a number of steps. It creates a virtual machine (VM) instance using Google Compute Engine, which uses a Google-provided, container-optimized OS image. That image includes Docker and additional software responsible for starting our Docker container. Our container image is then pulled from the Container Registry and run with docker run when the VM starts. Note: you still need docker attach even though the container is running. It's worth pointing out that only one container can be run per VM instance; use Kubernetes to deploy multiple containers per VM (the steps are similar). Find more details on all the options in the links at the bottom of this post.
gcloud beta compute instances create-with-container <desired instance name> \
--zone <google zone> \
--container-stdin \
--container-tty \
--container-image <google repository path>:<tag> \
--container-command <command (in quotes)> \
--service-account <e-mail>
Tip: You can view available gcloud projects with gcloud projects list.
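As a concrete (hypothetical) example, running the my-app image pushed above interactively:
gcloud beta compute instances create-with-container my-app-vm \
--zone us-central1-a \
--container-stdin \
--container-tty \
--container-image gcr.io/my-project/my-app:latest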
SSH into the compute instance.
gcloud beta compute ssh <instance name> \
--zone <zone>
Stop or delete the instance. If an instance is stopped, you will still be billed for resources such as static IPs and persistent disks. To avoid being billed at all, delete the instance.
a. Stop
gcloud compute instances stop <instance name>
b. Delete
gcloud compute instances delete <instance name>
Related Links:
More on deploying containers on VMs
More on zones
More create-with-container options
As of now, for just Docker, the Container-optimized OS is certainly the way to go:
gcloud compute images list --project=cos-cloud --no-standard-images
It comes with Docker and Kubernetes preinstalled. The only thing it lacks is the Cloud SDK command-line tools. (It also lacks python3, despite Google's announcement of the Python 2 sunset on 2020-01-01. Well, it's still 27 days to go...)
As an additional piece of information I wanted to share: I was searching for a standard image offering both docker and gcloud/gsutil preinstalled (and found none, oops). I do not think I'm alone in this boat, as gcloud is the one thing you can hardly do without on GCE¹.
My best find so far was the Ubuntu 18.04 image, which comes with its own (non-Debian) package manager, snap. The image has the Cloud SDK preinstalled, and Docker installs literally in a snap: 11 seconds on an F1 instance in an initial test, about 6 s on an n1-standard-1. The only snag I hit was an error message saying the Docker authorization helper was not available; an attempt to add it with gcloud components install failed because the SDK was installed as a snap, too. However, the helper is actually there, only not in the PATH. The following got me both tools available in a single transient builder VM with the least amount of setup-script runtime, starting from the supported Ubuntu 18.04 LTS image²:
snap install docker
ln -s /snap/google-cloud-sdk/current/bin/docker-credential-gcloud /usr/bin
gcloud -q auth configure-docker
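A quick sanity check that both tools ended up on the PATH after this:
docker --version
gcloud --version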
¹ I needed both for a Daisy workflow imaging a disk with artifacts from GS buckets as well as a couple of huge, 2 GB+ library images from the local gcr.io registry that were shared between the build (as cloud builder layers) and the runtime (where I had to create and extract containers onto the newly built image). But that's beside the point; one may need both tools for a multitude of possible reasons.
² Use gcloud compute images list --uri | grep ubuntu-1804 to get the most current one.
Google's GitHub site now offers a GCE image that includes Docker: https://github.com/GoogleCloudPlatform/cloud-sdk-docker-image
It's as easy as:
creating a Compute Engine instance
running curl https://get.docker.io | bash
Using docker-machine is another way to provision a Google Compute Engine instance with Docker.
docker-machine create \
--driver google \
--google-project $PROJECT \
--google-zone asia-east1-c \
--google-machine-type f1-micro $YOUR_INSTANCE
If you want to log in to this machine on Google Cloud, just use docker-machine ssh $YOUR_INSTANCE.
Refer to the docker-machine GCE driver documentation.
There is now improved support for containers on GCE:
Google Compute Engine is extending its support for Docker containers. This release is an Open Preview of a container-optimized OS image that includes Docker and an open source agent to manage containers. Below, you'll find links to interact with the community interested in Docker on Google, open source repositories, and examples to get started. We look forward to hearing your feedback and seeing what you build.
Note that this is currently (as of 27 May 2014) in Open Preview:
This is an Open Preview release of containers on Virtual Machines. As a result, we may make backward-incompatible changes and it is not covered by any SLA or deprecation policy. Customers should take this into account when using this Open Preview release.
Running Docker on a GCE instance is not supported. The instance goes down and you are not able to log in again.
We can use the Docker image provided by GCE to create an instance.
If your Google Cloud virtual machine is based on Ubuntu, use the following command to install Docker:
sudo apt install docker.io
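To confirm the installation worked, the standard smoke test is:
sudo docker run hello-world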
You may use this link: https://cloud.google.com/cloud-build/docs/quickstart-docker#top_of_page.
The link explains how to use Cloud Build to build a Docker image and push the image to Container Registry. You will first build the image using a Dockerfile, and then build the same image using Cloud Build's build configuration file.
It's better to set this up while creating the compute instance:
Go to the VM instances page.
Click the Create instance button to create a new instance.
Under the Container section, check Deploy container image.
Specify a container image name under Container image and configure options to run the container if desired. For example, you can specify gcr.io/cloud-marketplace/google/nginx1:1.12 for the container image.
Click Create.
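The same thing can be done from the command line with create-with-container, using the example image above (the instance name nginx-vm is arbitrary):
gcloud compute instances create-with-container nginx-vm \
--container-image gcr.io/cloud-marketplace/google/nginx1:1.12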
Installing Docker on GCP Compute Engine VMs:
This is the link to GCP documentation on the topic:
https://cloud.google.com/compute/docs/containers#installing
It links to the Docker install guide; follow the instructions for the type of Linux you have running in the VM.