Empty Kubernetes VPA recommendations - docker

I have the following manifest:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-rec-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-rec-deployment
  template:
    metadata:
      labels:
        app: my-rec-deployment
    spec:
      containers:
      - name: my-rec-container
        image: nginx
---
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: my-rec-vpa
spec:
  targetRef:
    apiVersion: "apps/v1"
    kind: Deployment
    name: my-rec-deployment
  updatePolicy:
    updateMode: "Off"
When I run kubectl get vpa my-rec-vpa --output yaml, I get the following output for the recommendation:
status:
  conditions:
  - lastTransitionTime: "2022-05-17T11:10:32Z"
    status: "False"
    type: RecommendationProvided
  recommendation: {}
Why are the recommendations empty?
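One direction worth checking (my suggestion; no answer is attached to this question): the VPA recommender builds its recommendations from metrics-server data, so RecommendationProvided stays False if metrics-server is not installed or the recommender has not yet collected samples. Running kubectl top pods is a quick way to confirm metrics are flowing, and recommendations typically need a few minutes after the VPA is created. It can also help to give the container explicit resource requests as a baseline, e.g.:

spec:
  containers:
  - name: my-rec-container
    image: nginx
    resources:
      requests:
        cpu: 100m     # illustrative starting values
        memory: 50Mi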

Related

Can't pull image from private registry

I get this error (rpc error: code = NotFound desc = failed to pull and unpack image "docker.io/registryName/projectName:latest": failed to unpack image on snapshotter overlayfs: unexpected media type text/html) when trying to deploy with the following deployment.yml file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: <projectName>-deployment
  namespace: <projectNamespace>
spec:
  selector:
    matchLabels:
      app: <projectname>
  replicas: 1
  template:
    metadata:
      labels:
        app: <projectName>
    spec:
      containers:
      - name: private-reg
        image: <registry>/<projectName>:latest
        ports:
        - containerPort: 9000
      imagePullSecrets:
      - name: docker-hub-cred
I can't understand what's happening, because the same image works with a simple pod.yml:
apiVersion: v1
kind: Pod
metadata:
  name: <projectName>-pod
  namespace: <projectNamespace>
spec:
  containers:
  - name: <projectName>
    image: <registry>/<projectName>:latest
  imagePullSecrets:
  - name: docker-hub-cred
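A hedged observation, since no answer is attached here: "unexpected media type text/html" means the registry returned an HTML page (an error or login page) instead of an image manifest, which usually points at a wrong image path or tag, or an unauthenticated pull. Two quick checks: confirm the secret exists in the same namespace as the Deployment,

kubectl get secret docker-hub-cred --namespace <projectNamespace>

and confirm the exact image reference pulls on a machine where you are logged in (docker pull <registry>/<projectName>:latest). Also note the case mismatch between the selector (app: <projectname>) and the template label (app: <projectName>); if that is not just an artifact of anonymizing the manifest, the Deployment will be rejected once the pull issue is fixed, because labels are case-sensitive.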

Docker Desktop and Istio unable to access endpoint on macOS

I am trying to work on a sample project for Istio. I have two apps, demo1 and demo2.
demo app YAML:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-1-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo-1-app
  template:
    metadata:
      labels:
        app: demo-1-app
    spec:
      containers:
      - name: demo-1-app
        image: muzimil:demo-1
        ports:
        - containerPort: 8080
        imagePullPolicy: Never
---
apiVersion: v1
kind: Service
metadata:
  name: demo-1-app
spec:
  selector:
    app: demo-1-app
  ports:
  - port: 8080
    name: http
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: demo-1-app
  labels:
    account: demo-1-app
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-2-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo-2-app
  template:
    metadata:
      labels:
        app: demo-2-app
    spec:
      containers:
      - name: demo-2-app
        image: muzimil:demo2-1
        ports:
        - containerPort: 8080
        imagePullPolicy: Never
---
apiVersion: v1
kind: Service
metadata:
  name: demo-2-app
spec:
  selector:
    app: demo-2-app
  ports:
  - port: 8080
    name: http
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: demo-2-app
  labels:
    account: demo-2-app
And my gateway is this:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: demo-app-gateway
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: demo-service1
spec:
  hosts:
  - "*"
  gateways:
  - demo-app-gateway
  http:
  - match:
    - uri:
        exact: /demo1
    route:
    - destination:
        host: demo-1-app
        port:
          number: 8080
  - match:
    - uri:
        exact: /demo2
    route:
    - destination:
        host: demo-2-app
        port:
          number: 8080
I tried to hit the URL as both localhost/demo1/getDetails and 127.0.0.1/demo1/getDetails,
but I always get a 404.
istioctl analyze does not report any errors.
To access the application, either change the istio-ingressgateway service to NodePort or port-forward the Istio ingress gateway service. Edit the istio-ingressgateway service to change the service type:
type: NodePort
Kubernetes will assign a node port; you can then use the same node port value in the Istio gateway:
selector:
  istio: ingressgateway # use istio default controller
servers:
- port:
    number: <nodeportnumber>
    name: http
    protocol: HTTP
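If you would rather not edit the service, port-forwarding also exposes the gateway locally (a standard kubectl command, not from the original answer; the namespace assumes a default Istio install):

kubectl -n istio-system port-forward svc/istio-ingressgateway 8080:80

after which the routes should answer at localhost:8080/demo1 and localhost:8080/demo2.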

Kubectl create error "could not find expected"

I'm using Kubernetes version 1.20.5 with Docker 19.03.8 on a virtual machine. I'm trying to create a test ELK cluster with Kubernetes. When I run kubectl create I get the following error:
error parsing testserver.yaml: error converting YAML to JSON: yaml: line 17: could not find expected ':'
I keep checking but can't find where the missing ":" should be. I validated the YAML with a YAML linter and it reports the file as valid. The YAML file is like this:
#namespace define
apiVersion: v1
kind: Namespace
metadata:
  name: testlog
---
#esnodes
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: testnode1
  name: testnode1
  namespace: testlog
spec:
  replicas: 1
  selector:
    matchLabels:
      app: testnode1
  template:
    metadata:
      labels:
        app: testnode1
    spec:
      containers:
      - env:
        - name: ES_JAVA_OPTS
          value: -Xms768m -Xmx768m
        - name: MAX_LOCKED_MEMORY
          value: unlimited
        - name: bootstrap.memory_lock
          value: "true"
        - name: cluster.initial_master_nodes
          value: testnode1,testnode2,testnode3
        - name: cluster.name
          value: testcluster
        - name: discovery.seed_hosts
          value: testnode1,testnode2,testnode3
        - name: http.cors.allow-origin
          value: "*"
        - name: network.host
          value: 0.0.0.0
        - name: node.data
          value: "false"
        - name: node.name
          value: testnode1
        image: amazon/opendistro-for-elasticsearch:1.8.0
        name: testnode1
        securityContext:
          privileged: true
        volumeMounts:
        - mountPath: /usr/share/elasticsearch/data
          name: testnode1-claim0
      # restartPolicy: Always
      volumes:
      - name: testnode1-claim0
        hostPath:
          path: /logtest/es1
          type: DirectoryOrCreate
---
#es1 portservice
apiVersion: v1
kind: Service
metadata:
  name: testnode1-service
  namespace: testlog
  labels:
    app: testnode1
spec:
  type: NodePort
  ports:
  - port: 9200
    nodePort: 9201
    targetPort: 9200
    protocol: TCP
    name: testnode1-9200
  - port: 9300
    nodePort: 9301
    targetPort: 9300
    protocol: TCP
    name: testnode1-9300
  selector:
    app: testnode1
---
#es1 dns
apiVersion: v1
kind: Service
metadata:
  name: testnode1
  namespace: testlog
  labels:
    app: testnode1
spec:
  clusterIP: None
  selector:
    app: testnode1
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: testnode2
  name: testnode2
  namespace: testlog
spec:
  replicas: 1
  selector:
    matchLabels:
      app: testnode2
  template:
    metadata:
      labels:
        app: testnode2
    spec:
      containers:
      - env:
        - name: ES_JAVA_OPTS
          value: -Xms768m -Xmx768m
        - name: MAX_LOCKED_MEMORY
          value: unlimited
        - name: bootstrap.memory_lock
          value: "true"
        - name: cluster.initial_master_nodes
          value: testnode1,testnode2,testnode3
        - name: cluster.name
          value: testcluster
        - name: discovery.seed_hosts
          value: testnode1,testnode2,testnode3
        - name: http.cors.allow-origin
          value: "*"
        - name: network.host
          value: 0.0.0.0
        - name: node.data
          value: "true"
        - name: node.name
          value: testnode2
        image: amazon/opendistro-for-elasticsearch:1.8.0
        name: testnode2
        securityContext:
          privileged: true
        volumeMounts:
        - mountPath: /usr/share/elasticsearch/data
          name: testnode2-claim0
      # restartPolicy: Always
      volumes:
      - name: testnode2-claim0
        hostPath:
          path: /logtest/es2
          type: DirectoryOrCreate
----
#es1 dns
apiVersion: v1
kind: Service
metadata:
  name: testnode2
  namespace: testlog
  labels:
    app: testnode2
spec:
  clusterIP: None
  selector:
    app: testnode2
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: testnode3
  name: testnode3
  namespace: testlog
spec:
  replicas: 1
  selector:
    matchLabels:
      app: testnode3
  template:
    metadata:
      labels:
        app: testnode3
    spec:
      containers:
      - env:
        - name: ES_JAVA_OPTS
          value: -Xms768m -Xmx768m
        - name: MAX_LOCKED_MEMORY
          value: unlimited
        - name: bootstrap.memory_lock
          value: "true"
        - name: cluster.initial_master_nodes
          value: testnode1,testnode2,testnode3
        - name: cluster.name
          value: testcluster
        - name: discovery.seed_hosts
          value: testnode1,testnode2,testnode3
        - name: http.cors.allow-origin
          value: "*"
        - name: network.host
          value: 0.0.0.0
        - name: node.data
          value: "true"
        - name: node.name
          value: testnode3
        image: amazon/opendistro-for-elasticsearch:1.8.0
        name: testnode3
        securityContext:
          privileged: true
        volumeMounts:
        - mountPath: /usr/share/elasticsearch/data
          name: testnode3-claim0
      # restartPolicy: Always
      volumes:
      - name: testnode3-claim0
        hostPath:
          path: /logtest/es3
          type: DirectoryOrCreate
---
#es3 dns
apiVersion: v1
kind: Service
metadata:
  name: testnode3
  namespace: testlog
  labels:
    app: testnode3
spec:
  clusterIP: None
  selector:
    app: testnode3
---
#kibana dep
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: kibana
  name: kibana
  namespace: testlog
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kibana
  template:
    metadata:
      labels:
        app: kibana
    spec:
      containers:
      - env:
        - name: ELASTICSEARCH_HOSTS
          value: http://testnode1:9200
        - name: ELASTICSEARCH_URL
          value: http://testnode1:9200
        image: amazon/opendistro-for-elasticsearch-kibana:1.8.0
        name: kibana
      # restartPolicy: Always
---
#kibana dns
apiVersion: v1
kind: Service
metadata:
  name: kibana
  namespace: testlog
  labels:
    app: kibana
spec:
  clusterIP: None
  selector:
    app: kibana
---
#kibana port servi
apiVersion: v1
kind: Service
metadata:
  name: kibana-service
  namespace: testlog
  labels:
    app: kibana
spec:
  type: NodePort
  ports:
  - port: 5601
    nodePort: 5602
    targetPort: 5601
    protocol: TCP
    name: kibana
  selector:
    app: kibana
----
#elasticsearch-hq deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: elasticsearch-hq
  name: elasticsearch-hq
  namespace: testlog
spec:
  replicas: 1
  selector:
    matchLabels:
      app: elasticsearch-hq
  template:
    metadata:
      labels:
        app: elasticsearch-hq
    spec:
      containers:
      - image: elastichq/elasticsearch-hq
        name: elasticsearch-hq
      # restartPolicy: Always
---
#elasticsearch-hq port service
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch-hq-service
  namespace: testlog
  labels:
    app: elasticsearch-hq
spec:
  type: NodePort
  ports:
  - port: 8081
    nodePort: 8081
    targetPort: 5000
    protocol: TCP
    name: elasticsearch-hq
  selector:
    app: elasticsearch-hq
There are a couple of issues in the YAML file:
You have used four dashes (----) as the document separator in some places, whereas the separator is exactly three (---). YAML does not treat ---- as a separator; it parses it as the start of an ordinary scalar, which is what produces the could not find expected ':' error.
Once you fix the first issue, you'll see the following errors for the NodePort services, because the valid range for nodePort is 30000-32767:
Error from server (Invalid): error when creating "testserver.yaml": Service "testnode1-service" is invalid: spec.ports[0].nodePort: Invalid value: 9201: provided port is not in the valid range. The range of valid ports is 30000-32767
Error from server (Invalid): error when creating "testserver.yaml": Service "kibana-service" is invalid: spec.ports[0].nodePort: Invalid value: 5602: provided port is not in the valid range. The range of valid ports is 30000-32767
Error from server (Invalid): error when creating "testserver.yaml": Service "elasticsearch-hq-service" is invalid: spec.ports[0].nodePort: Invalid value: 8081: provided port is not in the valid range. The range of valid ports is 30000-32767
Fixing both errors resolves the YAML issues.
Below is the full working YAML file:
#namespace define
apiVersion: v1
kind: Namespace
metadata:
  name: testlog
---
#esnodes
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: testnode1
  name: testnode1
  namespace: testlog
spec:
  replicas: 1
  selector:
    matchLabels:
      app: testnode1
  template:
    metadata:
      labels:
        app: testnode1
    spec:
      containers:
      - env:
        - name: ES_JAVA_OPTS
          value: -Xms768m -Xmx768m
        - name: MAX_LOCKED_MEMORY
          value: unlimited
        - name: bootstrap.memory_lock
          value: "true"
        - name: cluster.initial_master_nodes
          value: testnode1,testnode2,testnode3
        - name: cluster.name
          value: testcluster
        - name: discovery.seed_hosts
          value: testnode1,testnode2,testnode3
        - name: http.cors.allow-origin
          value: "*"
        - name: network.host
          value: 0.0.0.0
        - name: node.data
          value: "false"
        - name: node.name
          value: testnode1
        image: amazon/opendistro-for-elasticsearch:1.8.0
        name: testnode1
        securityContext:
          privileged: true
        volumeMounts:
        - mountPath: /usr/share/elasticsearch/data
          name: testnode1-claim0
      # restartPolicy: Always
      volumes:
      - name: testnode1-claim0
        hostPath:
          path: /logtest/es1
          type: DirectoryOrCreate
---
#es1 portservice
apiVersion: v1
kind: Service
metadata:
  name: testnode1-service
  namespace: testlog
  labels:
    app: testnode1
spec:
  type: NodePort
  ports:
  - port: 9200
    nodePort: 31201
    targetPort: 9200
    protocol: TCP
    name: testnode1-9200
  - port: 9300
    nodePort: 31301
    targetPort: 9300
    protocol: TCP
    name: testnode1-9300
  selector:
    app: testnode1
---
#es1 dns
apiVersion: v1
kind: Service
metadata:
  name: testnode1
  namespace: testlog
  labels:
    app: testnode1
spec:
  clusterIP: None
  selector:
    app: testnode1
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: testnode2
  name: testnode2
  namespace: testlog
spec:
  replicas: 1
  selector:
    matchLabels:
      app: testnode2
  template:
    metadata:
      labels:
        app: testnode2
    spec:
      containers:
      - env:
        - name: ES_JAVA_OPTS
          value: -Xms768m -Xmx768m
        - name: MAX_LOCKED_MEMORY
          value: unlimited
        - name: bootstrap.memory_lock
          value: "true"
        - name: cluster.initial_master_nodes
          value: testnode1,testnode2,testnode3
        - name: cluster.name
          value: testcluster
        - name: discovery.seed_hosts
          value: testnode1,testnode2,testnode3
        - name: http.cors.allow-origin
          value: "*"
        - name: network.host
          value: 0.0.0.0
        - name: node.data
          value: "true"
        - name: node.name
          value: testnode2
        image: amazon/opendistro-for-elasticsearch:1.8.0
        name: testnode2
        securityContext:
          privileged: true
        volumeMounts:
        - mountPath: /usr/share/elasticsearch/data
          name: testnode2-claim0
      # restartPolicy: Always
      volumes:
      - name: testnode2-claim0
        hostPath:
          path: /logtest/es2
          type: DirectoryOrCreate
---
#es1 dns
apiVersion: v1
kind: Service
metadata:
  name: testnode2
  namespace: testlog
  labels:
    app: testnode2
spec:
  clusterIP: None
  selector:
    app: testnode2
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: testnode3
  name: testnode3
  namespace: testlog
spec:
  replicas: 1
  selector:
    matchLabels:
      app: testnode3
  template:
    metadata:
      labels:
        app: testnode3
    spec:
      containers:
      - env:
        - name: ES_JAVA_OPTS
          value: -Xms768m -Xmx768m
        - name: MAX_LOCKED_MEMORY
          value: unlimited
        - name: bootstrap.memory_lock
          value: "true"
        - name: cluster.initial_master_nodes
          value: testnode1,testnode2,testnode3
        - name: cluster.name
          value: testcluster
        - name: discovery.seed_hosts
          value: testnode1,testnode2,testnode3
        - name: http.cors.allow-origin
          value: "*"
        - name: network.host
          value: 0.0.0.0
        - name: node.data
          value: "true"
        - name: node.name
          value: testnode3
        image: amazon/opendistro-for-elasticsearch:1.8.0
        name: testnode3
        securityContext:
          privileged: true
        volumeMounts:
        - mountPath: /usr/share/elasticsearch/data
          name: testnode3-claim0
      # restartPolicy: Always
      volumes:
      - name: testnode3-claim0
        hostPath:
          path: /logtest/es3
          type: DirectoryOrCreate
---
#es3 dns
apiVersion: v1
kind: Service
metadata:
  name: testnode3
  namespace: testlog
  labels:
    app: testnode3
spec:
  clusterIP: None
  selector:
    app: testnode3
---
#kibana dep
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: kibana
  name: kibana
  namespace: testlog
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kibana
  template:
    metadata:
      labels:
        app: kibana
    spec:
      containers:
      - env:
        - name: ELASTICSEARCH_HOSTS
          value: http://testnode1:9200
        - name: ELASTICSEARCH_URL
          value: http://testnode1:9200
        image: amazon/opendistro-for-elasticsearch-kibana:1.8.0
        name: kibana
      # restartPolicy: Always
---
#kibana dns
apiVersion: v1
kind: Service
metadata:
  name: kibana
  namespace: testlog
  labels:
    app: kibana
spec:
  clusterIP: None
  selector:
    app: kibana
---
#kibana port servi
apiVersion: v1
kind: Service
metadata:
  name: kibana-service
  namespace: testlog
  labels:
    app: kibana
spec:
  type: NodePort
  ports:
  - port: 5601
    nodePort: 31602
    targetPort: 5601
    protocol: TCP
    name: kibana
  selector:
    app: kibana
---
#elasticsearch-hq deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: elasticsearch-hq
  name: elasticsearch-hq
  namespace: testlog
spec:
  replicas: 1
  selector:
    matchLabels:
      app: elasticsearch-hq
  template:
    metadata:
      labels:
        app: elasticsearch-hq
    spec:
      containers:
      - image: elastichq/elasticsearch-hq
        name: elasticsearch-hq
      # restartPolicy: Always
---
#elasticsearch-hq port service
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch-hq-service
  namespace: testlog
  labels:
    app: elasticsearch-hq
spec:
  type: NodePort
  ports:
  - port: 8081
    nodePort: 31081
    targetPort: 5000
    protocol: TCP
    name: elasticsearch-hq
  selector:
    app: elasticsearch-hq
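As a quick sanity check (my suggestion, not part of the original answer), you can validate the fixed file without creating anything:

kubectl apply --dry-run=server -f testserver.yaml

The client side catches YAML parse errors such as the stray ----, and the server side catches validation problems such as out-of-range nodePort values.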

Converting docker-compose to k8s manifest file

I am working on a task to migrate all applications from Docker containers to Kubernetes pods. I tried kompose, but its output is even more confusing.
Can someone please help me out here? I have run out of options to try.
Here is what my docker-compose file looks like:
version: '2'
services:
  auth_module:
    build: .
    extra_hosts:
      - "dockerhost:172.21.0.1"
    networks:
      - default
      - mongo
    ports:
      - 3000
networks:
  mongo:
    external:
      name: mongo_bridge_network
Kompose output:
apiVersion: v1
items:
- apiVersion: apps/v1
  kind: Deployment
  metadata:
    annotations:
      kompose.cmd: C:\ProgramData\chocolatey\lib\kubernetes-kompose\tools\kompose.exe convert -f docker-compose.yml -o kubemanifest.yaml
      kompose.version: 1.21.0 (992df58d8)
    creationTimestamp: null
    labels:
      io.kompose.service: auth-module
    name: auth-module
  spec:
    replicas: 1
    selector:
      matchLabels:
        io.kompose.service: auth-module
    strategy: {}
    template:
      metadata:
        annotations:
          kompose.cmd: C:\ProgramData\chocolatey\lib\kubernetes-kompose\tools\kompose.exe convert -f docker-compose.yml -o kubemanifest.yaml
          kompose.version: 1.21.0 (992df58d8)
        creationTimestamp: null
        labels:
          io.kompose.network/mongo_bridge_network: "true"
          io.kompose.service: auth-module
      spec:
        containers:
        - image: auth-module
          imagePullPolicy: ""
          name: auth-module
          resources: {}
        restartPolicy: Always
        serviceAccountName: ""
        volumes: null
  status: {}
- apiVersion: extensions/v1beta1
  kind: NetworkPolicy
  metadata:
    creationTimestamp: null
    name: mongo_bridge_network
  spec:
    ingress:
    - from:
      - podSelector:
          matchLabels:
            io.kompose.network/mongo_bridge_network: "true"
    podSelector:
      matchLabels:
        io.kompose.network/mongo_bridge_network: "true"
kind: List
metadata: {}
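For comparison, here is a minimal hand-written equivalent (a sketch under assumptions: the image is pushed somewhere reachable under the name auth-module, and in-cluster access replaces the mongo_bridge_network bridge; names are illustrative):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth-module
spec:
  replicas: 1
  selector:
    matchLabels:
      app: auth-module
  template:
    metadata:
      labels:
        app: auth-module
    spec:
      containers:
      - name: auth-module
        image: auth-module   # replace with your pushed image reference
        ports:
        - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: auth-module
spec:
  selector:
    app: auth-module
  ports:
  - port: 3000
    targetPort: 3000

Two caveats on the kompose output: it emits the NetworkPolicy as extensions/v1beta1, an API version removed in Kubernetes 1.16, so a current cluster needs networking.k8s.io/v1; and extra_hosts has no direct Deployment field, with hostAliases in the pod spec being the usual replacement.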

Virtual Kubelet with AKS

I followed the doc here.
When I tried to create a virtual service for Windows, I got this error:
The Deployment "nanoserver-iis" is invalid: spec.template.metadata.labels: Invalid value: map[string]string{"app":"nanoserver-iis"}: selector does not match template labels
kubectl get nodes
NAME                                               STATUS   ROLES   AGE   VERSION
aks-agentpool-27326293-0                           Ready    agent   15m   v1.11.3
virtual-kubelet-aci-connector-windows-westeurope   Ready    agent   9s    v1.11.2
virtual-kubelet-windows.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nanoserver-iis
spec:
  replicas: 1
  selector:
    matchLabels:
      app: aci-helloworld
  template:
    metadata:
      labels:
        app: nanoserver-iis
    spec:
      containers:
      - name: nanoserver-iis
        image: microsoft/iis:nanoserver
        ports:
        - containerPort: 80
      nodeSelector:
        kubernetes.io/hostname: virtual-kubelet-aci-connector-windows-westeurope
      tolerations:
      - key: virtual-kubelet.io/provider
        operator: Equal
        value: azure
        effect: NoSchedule
Try updating the deployment definition with the following. There is an inconsistency in the YAML definition: the labels don't match. The labels in the matchLabels field and the labels in the template's metadata field need to match, but in the deployment definition they are set to the different values aci-helloworld and nanoserver-iis, respectively.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nanoserver-iis
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nanoserver-iis
  template:
    metadata:
      labels:
        app: nanoserver-iis
    spec:
      containers:
      - name: nanoserver-iis
        image: microsoft/iis:nanoserver
        ports:
        - containerPort: 80
      nodeSelector:
        kubernetes.io/hostname: virtual-kubelet-aci-connector-windows-westeurope
      tolerations:
      - key: virtual-kubelet.io/provider
        operator: Equal
        value: azure
        effect: NoSchedule
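One extra caveat (mine, not from the original answer): in apps/v1 the selector of an existing Deployment is immutable, so if the broken Deployment was already created you may need to delete and recreate it rather than patch it in place:

kubectl delete deployment nanoserver-iis
kubectl apply -f virtual-kubelet-windows.yaml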
