AKS | How to integrate VerticalPodAutoscaler - azure-aks

I am trying to implement an example application for VerticalPodAutoscaler (VPA) and got this error
error: unable to recognize "foo.yaml": no matches for kind "VerticalPodAutoscaler" in version "autoscaling.k8s.io/v1beta2"
Source code referred to: https://medium.com/infrastructure-adventures/vertical-pod-autoscaler-deep-dive-limitations-and-real-world-examples-9195f8422724
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: bar
spec:
  targetRef:
    apiVersion: "apps/v1"
    kind: Deployment
    name: foo
  updatePolicy:
    updateMode: "Off"
I also tried combinations of v1 and vXbetaY, but none of them worked.
Debugging Done:
I tried to search specific examples for Azure AKS VPA but did not find any relevant documentation.
I ran kubectl api-resources | grep autoscaling and ONLY HorizontalPodAutoscaler is present in the list.
Is there anything I am missing to get VPA working on AKS?

Well, since it's a custom resource, you first need to install it: https://github.com/kubernetes/autoscaler/tree/master/vertical-pod-autoscaler#installation
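For reference, a rough sketch of that installation flow (assuming cluster-admin access and that the script location in the linked README has not changed), followed by a check that the CRD is now recognised and that the manifest from the question applies:

git clone https://github.com/kubernetes/autoscaler.git
cd autoscaler/vertical-pod-autoscaler
./hack/vpa-up.sh                                      # installs the VPA CRDs and controllers

kubectl api-resources | grep verticalpodautoscaler    # the kind should now be listed
kubectl apply -f foo.yaml                             # the VPA object from the question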

Related

Google Secret Manager secrets do not seem to work yet I can find nothing wrong

I have created a bunch of secrets using the documented CLI method like so:
echo "ak_prod_4kj56hv24hkjcg56hj2c34k5j3hbj3k124v5h243c" | gcloud secrets versions add some-api-key --data-file=-
I have set my YAML to read them at start-up; I know this works because my app code will throw if no value is configured.
spec:
  template:
    spec:
      containers:
      - image:
        env:
        - name: Some__ApiKey
          valueFrom:
            secretKeyRef:
              key: "1"
              name: some-api-key
But my code doesn't work. It was working on Azure, so this isn't a problem with my code. When I call the API, my key is rejected. A key is definitely configured (my code checks for that), and besides, Cloud Run fails if it cannot read its secrets.
The problem was due to whitespace at the end of the secret.
Somehow a single whitespace character had been introduced. Looking back over my CLI command history, it could be trailing whitespace after the --data-file=-.
Perhaps it's the space between the " | in Google's example.
The Google console GUI does not present the secret value in quotes and so it is almost impossible to tell this has happened.
One week just on this problem. One week. The cost of badly designed software/bad sample code.
It's actually the echo: without -n, echo appends a trailing newline, and that newline becomes part of the stored secret. You need echo -n.
echo -n "ak_prod_4kj56hv24hkjcg56hj2c34k5j3hbj3k124v5h243c" | gcloud secrets versions add some-api-key --data-file=-
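If you want to confirm what actually got stored, one option (using the secret name from the question) is to dump the raw bytes of the latest version and look for a trailing \n or space:

# prints each byte; a trailing newline shows up as \n at the very end
gcloud secrets versions access latest --secret=some-api-key | od -c | tail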

Is there any security risk in publicly publishing dockerconfigjson?

I was reading the documentation about kubernetes.io/dockerconfigjson
and I just have a question: is there any security risk in publicly publishing a dockerconfigjson? For example:
data:
  .dockerconfigjson: <base64>
Posted community wiki answer for better visibility. Feel free to expand it.
As suggested by David Maze's comment:
I'd expect that to usually contain credentials to access your Docker registry...so yes, it'd be a significant security exposure to publish it?
It's dangerous and not recommended, because a docker config.json imported into Kubernetes is mainly used for keeping the credentials used to pull images from a private registry.
Even if it's saved in base64 format, as in the example from the Kubernetes docs (and in your example too), it can easily be decoded:
my-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: myregistrykey
  namespace: awesomeapps
data:
  .dockerconfigjson: UmVhbGx5IHJlYWxseSByZWVlZWVlZWVlZWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWxsbGxsbGxsbGxsbGxsbGxsbGxsbGxsbGxsbGxsbGx5eXl5eXl5eXl5eXl5eXl5eXl5eSBsbGxsbGxsbGxsbGxsbG9vb29vb29vb29vb29vb29vb29vb29vb29vb25ubm5ubm5ubm5ubm5ubm5ubm5ubm5ubmdnZ2dnZ2dnZ2dnZ2dnZ2dnZ2cgYXV0aCBrZXlzCg==
type: kubernetes.io/dockerconfigjson
Let's decode it:
user@shell:~/ $ cat my-secret.yaml | yq e '.data.".dockerconfigjson"' - | base64 -d
Really really reeeeeeeeeeaaaaaaaaaaaaaaaaaaaaaaaaaaalllllllllllllllllllllllllllllllyyyyyyyyyyyyyyyyyyyy llllllllllllllooooooooooooooooooooooooooonnnnnnnnnnnnnnnnnnnnnnnnggggggggggggggggggg auth keys
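For illustration, a real .dockerconfigjson (decoded here with a made-up registry, username and password) typically looks like this, so it is effectively a plaintext registry login:

{
  "auths": {
    "registry.example.com": {
      "username": "myuser",
      "password": "mypassword",
      "auth": "bXl1c2VyOm15cGFzc3dvcmQ="
    }
  }
}

The auth field is just base64 of username:password, so anyone who can read the published Secret can log in to your registry.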

Creating a deployment in Redhat Openshift using a specific USER id

Redhat Openshift automatically creates a range of user ids that can be used in a given namespace, e.g.
$ oc describe namespace xyz-123
Name:         xyz-123
Labels:       <none>
Annotations:  xx.com/accountID: id-testcluster-account
              xx.com/type: System
              openshift.io/sa.scc.mcs: s0:c25,c20
              openshift.io/sa.scc.supplemental-groups: 1000640000/10000
              openshift.io/sa.scc.uid-range: 1000640000/10000
Here is the problem:
While creating the docker image, I am setting the USER id in the Dockerfile:
USER 1001121001:1001121001
I am specifying runAsUser in the Helm chart used to deploy this image:
runAsUser: 1001121001
When I try to create the deployment, it fails because the user ID 1001121001 does not fall in the range above, i.e. [1000640000, 1000640000+10000].
The deployment error:
$ oc get deployment abc-123 -n xyz-123 -o yaml
....
....
message: 'pods "abc-123-7f8fc74765-" is forbidden: unable to validate against any security context constraint: [spec.containers[0].securityContext.runAsUser: Invalid value: 1000321001: must be in the ranges: [1000660000, 1000669999]]'
....
....
Tried option 1:
Using anyuid works as described here: https://www.openshift.com/blog/a-guide-to-openshift-and-uids
But the document says:
"Upon closer inspection of the “anyuid” SCC, it is clear that any user and any group can be used by the Pod launched by a ServiceAccount with access to the “anyuid” SCC. The “RunAsAny” strategy is effectively skipping the default OpenShift restrictions and authorization allowing the Pod to choose any ID."
Hence, I don't want to use this anyuid option.
Tried option 2:
After creating a namespace, get the range allowed for that namespace, select an id (say 1000660000) from it, and use that while deploying by setting runAsUser: 1000660000.
But all files/folders in the docker image have their ownership/permissions set for USER 1001121001, while the container starts with id 1000660000, and hence there are issues running the container due to read/write/execute permissions.
To overcome this I would need to give o+rwx permissions to all the files, which is risky.
Is there any other way to specify a USER in Dockerfile and use the same USER id during deployment in Redhat Openshift?
$ oc version
Client Version: 4.6.9
Server Version: 4.6.9
Kubernetes Version: v1.19.0+7070803
Solution:
The suggestion from Ritesh worked.
I created a namespace with a predefined user id range covering the specific USER id 1001121001, and then created the deployment in that namespace:
apiVersion: v1
kind: Namespace
metadata:
  name: xyz-123
  annotations:
    openshift.io/sa.scc.mcs: 's0:c26,c5'
    openshift.io/sa.scc.supplemental-groups: 1001120001/10000
    openshift.io/sa.scc.uid-range: 1001120001/10000
If you are creating the namespace while doing the deployment, or before it, then you can use the following option. With this you can use runAsUser: 1001121001 (or any other user).
Define the yaml file:
apiVersion: v1
kind: Namespace
metadata:
  name: dspx-dummy-runtimeid
  annotations:
    openshift.io/sa.scc.mcs: <>
    openshift.io/sa.scc.supplemental-groups: <>
    openshift.io/sa.scc.uid-range: <>
Use kubectl apply -f <namespace.yaml> or oc apply -f <namespace.yaml>.
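A quick way to check that the annotations took effect before re-running the deployment (namespace name and range taken from the solution above, manifest saved as namespace.yaml):

oc apply -f namespace.yaml
oc describe namespace xyz-123 | grep -E 'uid-range|supplemental-groups'
# expected: openshift.io/sa.scc.uid-range: 1001120001/10000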

Kubernetes CreateContainerConfigError

I'm trying to deploy an app on k8s but I keep getting the following error
NAME       READY   STATUS                       RESTARTS   AGE
pod_name   1/2     CreateContainerConfigError   0          26m
If I try to see what went wrong using kubectl describe pod pod_name, I get the following error: Error from server (NotFound): pods "pod_name" not found
You didn't include the command that generated the output shown, so it's hard to tell. Perhaps you're looking at different namespaces?
One of the parameter keys in the file was misspelled, making the deploy fail. Unfortunately, the error message was not helpful...
CreateContainerConfigError means Kubernetes is unable to find the ConfigMap you have provided for a volume. Make sure both the pod and the ConfigMap are deployed in the same namespace.
Also cross-verify that the name of the ConfigMap you created and the ConfigMap name specified in the pod's volume definition are the same.
The message Error from server (NotFound): pods "pod_name" not found makes it very clear to me that you deployed your pod in a different namespace.
Check the value of namespace in your deployment yaml file and execute the command:
kubectl describe pod pod_name -n namespace_from_yaml_file
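If you are not sure which namespace the pod actually landed in, you can also search every namespace for it (pod_name as in the question):

kubectl get pods --all-namespaces | grep pod_name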

Verbose logging in Kubernetes deployment file

I am new to Kubernetes. I want to know what the 'v' in the following means:
spec:
  containers:
  - args:
    - -v=9
It seems to me it denotes verbose logging. Is there any documentation on the various logging levels available, i.e. what values the v argument can take?
Kubernetes uses glog. The available values are 1-4, as denoted in this doc
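The same flag works on kubectl itself, which is an easy way to see what the levels change (behaviour as described in the kubectl verbosity/debugging cheat sheet; exact output differs by version):

kubectl get pods --v=6   # roughly: logs the API requests kubectl makes (URLs and response codes)
kubectl get pods --v=9   # additionally dumps full request and response bodies without truncation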
