GCP instance group doesn't start containers - docker

I have an instance template that is supposed to run my app in a container running on Google Cloud's Container-Optimized OS. When I create a single VM from this template it runs just fine, but when I use it to create an instance group the containers don't start.
According to the logs the machine didn't even try to start them.
I compared the output of gcloud compute instances describe <instance-name> for the instance that works against one of the instances in the MIG, but apart from some differences in the network interfaces, and some that stem from one instance being managed by an instance group and the other not, I don't see anything unusual.
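For reference, container startup on Container-Optimized OS is handled by the konlet agent, so its logs and the serial console are the places to look; the unit name and the zone placeholder below are my assumptions, not details from the original post:
# On an instance in the MIG (COS): logs of the container startup agent
sudo journalctl -u konlet-startup
# From a workstation: dump the instance's serial console output
gcloud compute instances get-serial-port-output <instance-name> --zone=<zone>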
I also noticed that when I SSH to the instance that works, I get this message:
########################[ Welcome ]########################
# You have logged in to the guest OS. #
# To access your containers use 'docker attach' command #
###########################################################
but when I SSH to one of the instances in the MIG, I don't see it.
Is there a problem with using the container-optimized OS in an instance group?
My instance template is defined as follows:
creationTimestamp: '2022-11-09T03:25:29.896-08:00'
description: ''
id: '757769630202081478'
kind: compute#instanceTemplate
name: server-using-docker-hub-1
properties:
  canIpForward: false
  confidentialInstanceConfig:
    enableConfidentialCompute: false
  description: ''
  disks:
  - autoDelete: true
    boot: true
    deviceName: server-using-docker-hub
    index: 0
    initializeParams:
      diskSizeGb: '10'
      diskType: pd-balanced
      sourceImage: projects/cos-cloud/global/images/cos-stable-101-17162-40-20
    kind: compute#attachedDisk
    mode: READ_WRITE
    type: PERSISTENT
  keyRevocationActionType: NONE
  labels:
    container-vm: cos-stable-101-17162-40-20
  machineType: e2-micro
  metadata:
    fingerprint: 76mZ3i--POo=
    items:
    - key: gce-container-declaration
      value: |-
        spec:
          containers:
          - name: server-using-docker-hub-1
            image: docker.io/rinbar/kwik-e-mart
            env:
            - name: AWS_ACCESS_KEY_ID
              value: <redacted>
            - name: AWS_SECRET_ACCESS_KEY
              value: <redacted>
            - name: SECRET_FOR_SESSION
              value: <redacted>
            - name: SECRET_FOR_USER
              value: <redacted>
            - name: MONGODBURL
              value: mongodb+srv://<redacted>@cluster0.<redacted>.mongodb.net/kwik-e-mart
            - name: DEBUG
              value: server:*
            - name: PORT
              value: '80'
            stdin: false
            tty: false
          restartPolicy: Always
        # This container declaration format is not public API and may change without notice. Please
        # use gcloud command-line tool or Google Cloud Console to run Containers on Google Compute Engine.
    kind: compute#metadata
  networkInterfaces:
  - kind: compute#networkInterface
    name: nic0
    network: https://www.googleapis.com/compute/v1/projects/rons-project-364411/global/networks/default
    stackType: IPV4_ONLY
    subnetwork: https://www.googleapis.com/compute/v1/projects/rons-project-364411/regions/me-west1/subnetworks/default
  reservationAffinity:
    consumeReservationType: ANY_RESERVATION
  scheduling:
    automaticRestart: true
    onHostMaintenance: MIGRATE
    preemptible: false
    provisioningModel: STANDARD
  serviceAccounts:
  - email: 629139871582-compute@developer.gserviceaccount.com
    scopes:
    - https://www.googleapis.com/auth/devstorage.read_only
    - https://www.googleapis.com/auth/logging.write
    - https://www.googleapis.com/auth/monitoring.write
    - https://www.googleapis.com/auth/servicecontrol
    - https://www.googleapis.com/auth/service.management.readonly
    - https://www.googleapis.com/auth/trace.append
  shieldedInstanceConfig:
    enableIntegrityMonitoring: true
    enableSecureBoot: false
    enableVtpm: true
  tags:
    items:
    - http-server
selfLink: https://www.googleapis.com/compute/v1/projects/rons-project-364411/global/instanceTemplates/server-using-docker-hub-1

I'm unable to replicate your issue; it worked for me.
I wonder whether your issue is container registry permissions. I don't use MIGs, but I assume a MIG's instances run as a service account and that perhaps yours doesn't have the appropriate permission to access the container registry.
The one caveat is that gcloud compute instance-templates create-with-container is confusing, and I was unable to work out how to use the --create-disk and --disk flags. I ended up using the Console to create the template. The Console's tool for generating the equivalent gcloud command is also incorrect (I submitted feedback).
Q="74331370"
PROJECT="$(whoami)-$(date +%y%m%d)-${Q}"
ZONE="us-west1-c"
TEMPLATE="tmpl"
GROUP="group"
IMAGE="gcr.io/kuar-demo/kuard-amd64:blue"
SIZE="2"
MIN=${SIZE}
MAX=${MIN}
# This command is confusing
# Ultimately I used the console to save time
gcloud compute instance-templates create-with-container ${TEMPLATE} \
--project=${PROJECT} \
--machine-type=f1-micro \
--tags=http-server \
--container-image=${IMAGE} \
--create-disk=image-project=cos-cloud,image-family=cos-stable,mode=rw,size=10,type=pd-balanced \
--disk=auto-delete=yes,boot=yes,device-name=${TEMPLATE}
gcloud beta compute instance-groups managed create ${GROUP} \
--project=${PROJECT} \
--base-instance-name=${GROUP} \
--size=${SIZE} \
--template=${TEMPLATE} \
--zone=${ZONE} \
--list-managed-instances-results=PAGELESS
gcloud beta compute instance-groups managed set-autoscaling ${GROUP} \
--project=${PROJECT} \
--zone=${ZONE} \
--min-num-replicas=${MIN} \
--max-num-replicas=${MAX} \
--mode=off
INSTANCES=$(\
gcloud compute instance-groups managed list-instances ${GROUP} \
--project=${PROJECT} \
--zone=${ZONE} \
--format="value(instance)")
for INSTANCE in ${INSTANCES}
do
gcloud compute ssh ${INSTANCE} \
--project=${PROJECT} \
--zone=${ZONE} \
--command="docker container ls"
done
Yields (edited for clarity):
CONTAINER ID   IMAGE      COMMAND    CREATED         STATUS         NAMES
dd902f2d5e29   ${IMAGE}   "/kuard"   4 minutes ago   Up 4 minutes   klt-tmpl-rqhp

CONTAINER ID   IMAGE      COMMAND    CREATED         STATUS         NAMES
0182f3e7f3dc   ${IMAGE}   "/kuard"   4 minutes ago   Up 4 minutes   klt-tmpl-azxs
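If registry permissions were the issue (mostly relevant for GCR/Artifact Registry rather than a public Docker Hub image), something like this should list the roles granted to the instance's service account; the project and service-account email are taken from your template:
gcloud projects get-iam-policy rons-project-364411 \
--flatten="bindings[].members" \
--filter="bindings.members:629139871582-compute@developer.gserviceaccount.com" \
--format="value(bindings.role)"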

Since the instances in the group have no external IP addresses, you need to enable Private Google Access or Cloud NAT to allow the instances to pull the container image from Container Registry / Artifact Registry / Docker Hub / any other container registry.
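For example, enabling Private Google Access on the subnet the MIG uses, or adding a Cloud NAT gateway, could look roughly like this (network, subnet and region are taken from the template above; the router and NAT names are placeholders of mine):
# Option A: Private Google Access (sufficient for gcr.io / Artifact Registry)
gcloud compute networks subnets update default \
--region=me-west1 \
--enable-private-ip-google-access
# Option B: Cloud NAT (needed to reach registries outside Google, e.g. Docker Hub)
gcloud compute routers create nat-router --network=default --region=me-west1
gcloud compute routers nats create nat-config \
--router=nat-router \
--region=me-west1 \
--auto-allocate-nat-external-ips \
--nat-all-subnet-ip-ranges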

Related

How to fetch secrets from vault to my jenkins configuration as code installation with helm?

I am trying to deploy Jenkins using Helm with JCasC to get Vault secrets. I am using a local minikube to create my k8s cluster and a local Vault instance on my machine (not in the k8s cluster).
Even though I am using initContainerEnv and ContainerEnv, I am not able to reach the Vault values. For the CASC_VAULT_TOKEN value I am using the Vault root token.
This is the helm command I run locally:
helm upgrade --install -f values.yml mijenkins jenkins/jenkins
And here is my values.yml file:
controller:
  installPlugins:
    # need to add this configuration-as-code due to a known jenkins issue: https://github.com/jenkinsci/helm-charts/issues/595
    - "configuration-as-code:1414.v878271fc496f"
    - "hashicorp-vault-plugin:latest"
  # passing initial environments values to docker basic container
  initContainerEnv:
    - name: CASC_VAULT_TOKEN
      value: "my-vault-root-token"
    - name: CASC_VAULT_URL
      value: "http://localhost:8200"
    - name: CASC_VAULT_PATHS
      value: "cubbyhole/jenkins"
    - name: CASC_VAULT_ENGINE_VERSION
      value: "2"
  ContainerEnv:
    - name: CASC_VAULT_TOKEN
      value: "my-vault-root-token"
    - name: CASC_VAULT_URL
      value: "http://localhost:8200"
    - name: CASC_VAULT_PATHS
      value: "cubbyhole/jenkins"
    - name: CASC_VAULT_ENGINE_VERSION
      value: "2"
  JCasC:
    configScripts:
      here-is-the-user-security: |
        jenkins:
          securityRealm:
            local:
              allowsSignup: false
              enableCaptcha: false
              users:
                - id: "${JENKINS_ADMIN_ID}"
                  password: "${JENKINS_ADMIN_PASSWORD}"
And in my local vault I can see/reach values:
> vault kv get cubbyhole/jenkins
============= Data =============
Key                       Value
---                       -----
JENKINS_ADMIN_ID          alan
JENKINS_ADMIN_PASSWORD    acosta
Does any of you have an idea what I could be doing wrong?
I haven't used Vault with Jenkins, so I'm not exactly sure about your particular situation, but I am very familiar with how finicky the Jenkins Helm chart is. I was able to configure my securityRealm (with the Google Login plugin) by first creating a k8s secret with the needed values:
kubectl create secret generic googleoauth --namespace jenkins \
--from-literal=clientid=${GOOGLE_OAUTH_CLIENT_ID} \
--from-literal=clientsecret=${GOOGLE_OAUTH_SECRET}
then passing those values into the Helm chart's values.yml via:
controller:
  additionalExistingSecrets:
    - name: googleoauth
      keyName: clientid
    - name: googleoauth
      keyName: clientsecret
then reading them into JCasC like so:
...
JCasC:
  configScripts:
    authentication: |
      jenkins:
        securityRealm:
          googleOAuth2:
            clientId: ${googleoauth-clientid}
            clientSecret: ${googleoauth-clientsecret}
In order for that to work the values.yml also needs to include the following settings:
serviceAccount:
  name: jenkins
rbac:
  readSecrets: true # allows jenkins serviceAccount to read k8s secrets
Note that I am running Jenkins as a k8s serviceAccount called jenkins in the jenkins namespace.
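A quick way to verify the RBAC piece could be (assuming the jenkins serviceAccount and namespace above):
kubectl auth can-i get secrets \
--as=system:serviceaccount:jenkins:jenkins \
--namespace=jenkins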
After debugging my Jenkins installation, I figured out that the main issue was neither my values.yml nor my JCasC integration, as I was able to see the ContainerEnv values when going inside my Jenkins pod with:
kubectl exec -ti mijenkins-0 -- sh
So I needed to expose my Vault server so that Jenkins could reach it; I used this Vault tutorial to achieve it. In brief, instead of running the usual:
vault server -dev
We need to use:
vault server -dev -dev-root-token-id root -dev-listen-address 0.0.0.0:8200
Then we need to export an environment variable for the vault CLI to address the Vault server.
export VAULT_ADDR=http://0.0.0.0:8200
After that, we need to determine the Vault address that we are going to point Jenkins at. To do that, we start a minikube SSH session:
minikube ssh
Within this SSH session, retrieve the value of the Minikube host.
$ dig +short host.docker.internal
192.168.65.2
After retrieving the value, we are going to retrieve the status of the Vault server to verify network connectivity.
$ dig +short host.docker.internal | xargs -I{} curl -s http://{}:8200/v1/sys/seal-status
And now we can connect our Jenkins pod to our Vault; we just need to change CASC_VAULT_URL to http://192.168.65.2:8200 in our main .yml file, like this:
- name: CASC_VAULT_URL
  value: "http://192.168.65.2:8200"

How to access minikube dashboard from external browser, deployed on gcloud compute engine

I created an Ubuntu instance on gcloud and installed minikube and all the required dependencies on it.
Now, when I run curl from the instance's terminal ("curl http://127.0.0.1:8080/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/"), I get the HTML response back.
But I want to access this URL from my laptop's browser.
I tried opening these ports in the firewall of the instance node:
tcp:8080,8085,443,80,8005,8006,8007,8009,8009,8010,7990,7992,7993,7946,4789,2376,2377
But I am still unable to access the above-mentioned URL when replacing 127.0.0.1 with my external IP (39.103.89.09),
i.e. http://39.103.89.09:8080/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
I believe I need to make some networking-related changes but don't know what.
I am very new to cloud computing and networking, so please help me.
I suspect that minikube binds to the VM's localhost interface, making it inaccessible from a remote machine.
There may be a way to run minikube such that it binds to 0.0.0.0 and then you may be able to use it remotely.
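For example, since the 127.0.0.1:8080 URL in the question looks like a kubectl proxy endpoint, the proxy itself can be told to listen on all interfaces (note this exposes the API unauthenticated, so the firewall rule should stay tightly scoped):
kubectl proxy --port=8080 --address='0.0.0.0' --accept-hosts='.*'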
Alternatively, you can keep the firewall limited to e.g. 22 and use SSH to port-forward the VM's port 8080 to your localhost. gcloud includes a helper for this too:
Ensure minikube is running on the VM
gcloud compute ssh ${INSTANCE} --project=${PROJECT} --zone=${ZONE} --ssh-flag="-L 8080:localhost:8080"
Try accessing Kubernetes endpoints from your local machine using localhost:8080/api/v1/...
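For example, with the port-forwarding SSH session still open, the dashboard URL from the question should answer locally:
curl http://localhost:8080/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/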
Update
OK, I created a Debian VM (n1-instance-2), installed docker and minikube.
SSH'd into the instance:
gcloud compute ssh ${INSTANCE} \
--zone=${ZONE} \
--project=${PROJECT}
Then minikube start
Then:
minikube kubectl -- get namespaces
NAME STATUS AGE
default Active 14s
kube-node-lease Active 16s
kube-public Active 16s
kube-system Active 16s
minikube appears (I'm unfamiliar with it) to run as a Docker container called minikube, and it exposes 4 ports to the VM's (!) localhost: 22, 2376, 5000, 8443. The latter is key.
To determine the port mapping, either eyeball it:
docker container ls \
--filter=name=minikube \
--format="{{.Ports}}" \
| tr , \\n
Returns something like:
127.0.0.1:32771->22/tcp
127.0.0.1:32770->2376/tcp
127.0.0.1:32769->5000/tcp
127.0.0.1:32768->8443/tcp
In this case, the port we're interested in is 32768
Or:
docker container inspect minikube \
--format="{{ (index (index .NetworkSettings.Ports \"8443/tcp\") 0).HostPort }}"
32768
Then, exit the shell and return using --ssh-flag:
gcloud compute ssh ${INSTANCE} \
--zone=${ZONE} \
--project=${PROJECT} \
--ssh-flag="-L 8443:localhost:32768"
NOTE: 8443 will be the port on your localhost; 32768 is the remote minikube port.
Then, from another shell on your local machine (and while the port-forwarding ssh continues in the other shell), pull the ca.crt, client.key and client.crt:
gcloud compute scp \
$(whoami)@${INSTANCE}:./.minikube/profiles/minikube/client.* \
${PWD} \
--zone=${ZONE} \
--project=${PROJECT}
gcloud compute scp \
$(whoami)@${INSTANCE}:./.minikube/ca.crt \
${PWD} \
--zone=${ZONE} \
--project=${PROJECT}
Now, create a config file, call it kubeconfig:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: ./ca.crt
    server: https://localhost:8443
  name: minikube
contexts:
- context:
    cluster: minikube
    user: minikube
  name: minikube
current-context: minikube
kind: Config
preferences: {}
users:
- name: minikube
  user:
    client-certificate: ./client.crt
    client-key: ./client.key
And, lastly:
KUBECONFIG=./kubeconfig kubectl get namespaces
Should yield:
NAME STATUS AGE
default Active 23m
kube-node-lease Active 23m
kube-public Active 23m
kube-system Active 23m

How to Helm -set an array of objects (array of maps)?

I am trying to install Jenkins with Helm onto a Kubernetes cluster, with TLS (cert-manager, Let's Encrypt).
The difficulty is that the key master.ingress.tls takes an array of objects.
helm install --name jenkins --namespace jenkins --set
master.serviceType=ClusterIP,master.ingress.enabled=true,
master.ingress.hostName=jenkins.mydomain.com,
master.ingress.annotations."certmanager\.k8s\.io\/cluster-issuer"=letsencrypt-prod,
master.ingress.tls={hosts[0]=jenkins.mydomain.com,
secretName=jenkins-cert} stable/jenkins
The relevant part is:
master.ingress.tls={hosts[0]=jenkins.mydomain.com,secretName=jenkins-cert}
Different errors arise with this and also if I try changing it:
no matches found:
master.serviceType=ClusterIP,master.ingress.enabled=true,master.ingress.hostName=jenkins.mydomain.com,master.ingress.annotations.certmanager.k8s.io/cluster-issuer=letsencrypt-prod,master.ingress.tls={master.ingress.tls[0].secretName=jenkins-cert}
or
release jenkins failed: Ingress in version "v1beta1" cannot be handled
as a Ingress: v1beta1.Ingress.Spec: v1beta1.IngressSpec.TLS:
[]v1beta1.IngressTLS: readObjectStart: expect { or n, but found ",
error found in #10 byte of ...|],"tls":["secretName|..., bigger
context
...|eName":"jenkins","servicePort":8080}}]}}],"tls":["secretName:jenkins-cert"]}}
Trying this returns the first error above.
Different solutions tried:
- {hosts[0]=jenkins.mydomain.com,secretName=jenkins-cert}
- {"hosts[0]=jenkins.mydomain.com","secretName=jenkins-cert"}
- {hosts[0]:jenkins.mydomain.com,secretName:jenkins-cert}
- "{hosts[0]=jenkins.mydomain.com,secretName=jenkins-cert}"
- master.ingress.tls[0].secretName=jenkins-cert
- {master.ingress.tls[0].hosts[0]=jenkins.mydomain.com,master.ingress.tls[0].secretName=jenkins-cert}
How to Helm -set this correctly?
This was solved by adding a custom my-values.yaml.
my-values.yaml:
master:
  jenkinsUrlProtocol: "https"
  ingress:
    enabled: true
    apiVersion: "extensions/v1beta1"
    labels: {}
    annotations:
      kubernetes.io/ingress.class: nginx
      kubernetes.io/tls-acme: "true"
      kubernetes.io/ssl-redirect: "true"
    hostName: jenkins.mydomain.com
    tls:
      - hosts:
          - jenkins.mydomain.com
        secretName: cert-name
Install command:
helm install --name jenkins -f my-values.yaml stable/jenkins
Paul Boone describes it in his blog article
For me and my chart it worked like this:
--set 'global.ingressTls[0].secretName=abc.example.com' --set 'global.ingressTls[0].hosts[0]=abc.example.com'
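Applied to the keys from the question, the equivalent would be something like this (a sketch; I haven't tested it against that exact chart version):
helm install --name jenkins --namespace jenkins stable/jenkins \
--set master.serviceType=ClusterIP \
--set master.ingress.enabled=true \
--set master.ingress.hostName=jenkins.mydomain.com \
--set 'master.ingress.tls[0].secretName=jenkins-cert' \
--set 'master.ingress.tls[0].hosts[0]=jenkins.mydomain.com'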

Run multiple podman containers, like docker-compose

I found a library that can replace docker-compose for podman, but it is still under development. So my question is: how can I run multiple containers together? Currently I am using a bash script to run them all, but that is only good for the first run; it doesn't handle updating the containers.
I'd prefer, at first, a way to do this with podman itself rather than with some other tool.
library (under development) --> https://github.com/muayyad-alsadi/podman-compose
I think the Kubernetes Pod concept is what you're looking for, or at least it allows you to run multiple containers together by following a well-established standard.
My first approach was, like yours, to do everything as commands to see it working, something like:
# Create a pod, publishing port 8080/TCP from internal 80/TCP
$ podman pod create \
--name my-pod \
--publish 8080:80/TCP \
--publish 8113:113/TCP
# Create a first container inside the pod
$ podman run --detach \
--pod my-pod \
--name cont1-name \
--env MY_VAR="my val" \
nginxdemos/hello
# Create a second container inside the pod
$ podman run --detach \
--pod my-pod \
--name cont2-name \
--env MY_VAR="my val" \
greboid/nullidentd
# Check by
$ podman container ls; podman pod ls
Now that you have a pod, you can export it as a Pod manifest by using podman generate kube my-pod > my-pod.yaml.
As soon as you try your own examples, you will see that not everything is exported as you would expect (networks or volumes, for instance), but at least it serves as a base you can continue to work from.
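In other words, the round trip looks like this (using the pod created above):
# Export the running pod as a Kubernetes manifest...
podman generate kube my-pod > my-pod.yaml
# ...which can later be recreated with 'podman play kube my-pod.yaml'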
Assuming the same example, the YAML Pod manifest my-pod.yaml looks like this:
# Created with podman-2.2.1
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: my-pod
  name: my-pod
spec:
  containers:
    # Create the first container: Dummy identd server on 113/TCP
    - name: cont2-name
      image: docker.io/greboid/nullidentd:latest
      command: [ "/usr/sbin/inetd", "-i" ]
      env:
        - name: MY_VAR
          value: my val
      # Ensure not to overlap other 'containerPort' values within this pod
      ports:
        - containerPort: 113
          hostPort: 8113
          protocol: TCP
      workingDir: /
    # Create a second container.
    - name: cont1-name
      image: docker.io/nginxdemos/hello:latest
      command: [ "nginx", "-g", "daemon off;" ]
      env:
        - name: MY_VAR
          value: my val
      # Ensure not to overlap other 'containerPort' values within this pod
      ports:
        - containerPort: 80
          hostPort: 8080
          protocol: TCP
      workingDir: /
  restartPolicy: Never
status: {}
When this file is used like this:
# Use a Kubernetes-compatible Pod manifest to create and run a pod
$ podman play kube my-pod.yaml
# Check
$ podman container ls; podman pod ls
# Output
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1a53a5c0f076 docker.io/nginxdemos/hello:latest nginx -g daemon o... 8 seconds ago Up 6 seconds ago 0.0.0.0:8080->80/tcp, 0.0.0.0:8113->113/tcp my-pod-cont1-name
351065b66b55 docker.io/greboid/nullidentd:latest /usr/sbin/inetd -... 10 seconds ago Up 6 seconds ago 0.0.0.0:8080->80/tcp, 0.0.0.0:8113->113/tcp my-pod-cont2-name
e61c68752e35 k8s.gcr.io/pause:3.2 14 seconds ago Up 7 seconds ago 0.0.0.0:8080->80/tcp, 0.0.0.0:8113->113/tcp b586ca581129-infra
POD ID NAME STATUS CREATED INFRA ID # OF CONTAINERS
b586ca581129 my-pod Running 14 seconds ago e61c68752e35 3
You will be able to access the 'Hello World' served by nginx at 8080, and the dummy identd server at 8113.
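A quick check from the host could then be (ports as published above; nc is assumed to be installed):
# nginx 'Hello World' on the published HTTP port
curl -s http://localhost:8080 | head
# dummy identd reachable on 8113
nc -z localhost 8113 && echo "8113 open"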

container labels in kubernetes

I am building my docker image with jenkins using:
docker build --build-arg VCS_REF=$GIT_COMMIT \
--build-arg BUILD_DATE=`date -u +"%Y-%m-%dT%H:%M:%SZ"` \
--build-arg BUILD_NUMBER=$BUILD_NUMBER -t $IMAGE_NAME\
I was using Docker, but I am migrating to k8s.
With docker I could access those labels via:
docker inspect --format "{{ index .Config.Labels \"$label\"}}" $container
How can I access those labels with Kubernetes ?
I am aware of adding those labels in .Metadata.labels of my yaml files, but I don't like that approach much because:
- it ties that information to the deployment and not to the container itself
- it can be modified at any time
...
kubectl describe pods
Thank you
Kubernetes doesn't expose that data. If it did, it would be part of the PodStatus API object (and its embedded ContainerStatus), which is one part of the Pod data that would get dumped out by kubectl get pod deployment-name-12345-abcde -o yaml.
You might consider encoding some of that data in the Docker image tag; for instance, if the CI system is building a tagged commit then use the source control tag name as the image tag, otherwise use a commit hash or sequence number. Another typical path is to use a deployment manager like Helm as the principal source of truth about deployments, and if you do that there can be a path from your CD system to Helm to Kubernetes that can pass along labels or annotations. You can also often set up software to know its own build date and source control commit ID at build time, and then expose that information via an informational-only API (like an HTTP GET /_version call or some such).
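For example, the build command from the question could encode the commit in the image tag, so the information survives into Kubernetes without relying on container labels (a sketch reusing the question's variables):
docker build \
--build-arg VCS_REF=$GIT_COMMIT \
--build-arg BUILD_DATE=$(date -u +"%Y-%m-%dT%H:%M:%SZ") \
--build-arg BUILD_NUMBER=$BUILD_NUMBER \
-t $IMAGE_NAME:$GIT_COMMIT .
# The tag is then visible from the pod spec:
kubectl get pod <pod-name> -o jsonpath='{.spec.containers[0].image}'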
I'll add another option.
I would suggest reading about the Recommended Labels by K8S:
Key                             Description
app.kubernetes.io/name          The name of the application
app.kubernetes.io/instance      A unique name identifying the instance of an application
app.kubernetes.io/version       The current version of the application (e.g., a semantic version, revision hash, etc.)
app.kubernetes.io/component     The component within the architecture
app.kubernetes.io/part-of       The name of a higher level application this one is part of
app.kubernetes.io/managed-by    The tool being used to manage the operation of an application
So you can use the labels to describe a pod:
apiVersion: v1 # (use apps/v1 if you go via a Deployment)
kind: Pod # Or via Deployment
metadata:
  labels:
    app.kubernetes.io/name: wordpress
    app.kubernetes.io/instance: wordpress-abcxzy
    app.kubernetes.io/version: "4.9.4"
    app.kubernetes.io/managed-by: helm
    app.kubernetes.io/component: server
    app.kubernetes.io/part-of: wordpress
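Once those labels are in place, they can be queried with standard label selectors, for example:
kubectl get pods -l app.kubernetes.io/name=wordpress,app.kubernetes.io/component=server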
And use the downward api (which works in a similar way to reflection in programming languages).
There are two ways to expose Pod and Container fields to a running Container:
1) Environment variables.
2) Volume files.
Below is an example of using volume files:
apiVersion: v1
kind: Pod
metadata:
  name: kubernetes-downwardapi-volume-example
  labels:
    version: 4.5.6
    component: database
    part-of: etl-engine
  annotations:
    build: two
    builder: john-doe
spec:
  containers:
    - name: client-container
      image: k8s.gcr.io/busybox
      command: ["sh", "-c"]
      args: # < ------ We're using the mounted volumes inside the container
        - while true; do
            if [[ -e /etc/podinfo/labels ]]; then
              echo -en '\n\n'; cat /etc/podinfo/labels; fi;
            if [[ -e /etc/podinfo/annotations ]]; then
              echo -en '\n\n'; cat /etc/podinfo/annotations; fi;
            sleep 5;
          done;
      volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
  volumes: # < -------- We're mounting in our example the pod's labels and annotations
    - name: podinfo
      downwardAPI:
        items:
          - path: "labels"
            fieldRef:
              fieldPath: metadata.labels
          - path: "annotations"
            fieldRef:
              fieldPath: metadata.annotations
Notice that in the example we accessed the labels and annotations that were passed and mounted to the /etc/podinfo path.
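You can also read the same mounted files directly, e.g. (pod name and mount path as in the example above):
kubectl exec kubernetes-downwardapi-volume-example -- cat /etc/podinfo/labels
kubectl exec kubernetes-downwardapi-volume-example -- cat /etc/podinfo/annotations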
Besides labels and annotations, the downward API exposes multiple additional fields, such as:
The pod's IP address.
The pod's service account name.
The node's name and IP.
A Container's CPU limit, CPU request, memory limit, and memory request.
See full list in here.
(*) A nice blog discussing the downward API.
(**) You can view all your pods labels with
$ kubectl get pods --show-labels
NAME ... LABELS
my-app-xxx-aaa pod-template-hash=...,run=my-app
my-app-xxx-bbb pod-template-hash=...,run=my-app
my-app-xxx-ccc pod-template-hash=...,run=my-app
fluentd-8ft5r app=fluentd,controller-revision-hash=...,pod-template-generation=2
fluentd-fl459 app=fluentd,controller-revision-hash=...,pod-template-generation=2
kibana-xyz-adty4f app=kibana,pod-template-hash=...
recurrent-tasks-executor-xaybyzr-13456 pod-template-hash=...,run=recurrent-tasks-executor
serviceproxy-1356yh6-2mkrw app=serviceproxy,pod-template-hash=...
Or viewing only specific label with $ kubectl get pods -L <label_name>.
