AKS: mount existing azure file share without manually providing storage key

I'm able to mount an existing Azure File Share in a pod by manually providing the storage key:
apiVersion: v1
kind: Secret
metadata:
  name: storage-secret
  namespace: azure
type: Opaque
data:
  azurestorageaccountname: Base64_Encoded_Value_Here
  azurestorageaccountkey: Base64_Encoded_Value_Here
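The pod then mounts the share by referencing that secret, e.g. via the in-tree azureFile volume (a sketch; pod, image, and share names are placeholders):
apiVersion: v1
kind: Pod
metadata:
  name: myapp
  namespace: azure
spec:
  containers:
    - name: myapp
      image: nginx
      volumeMounts:
        - name: myfileshare
          mountPath: /mnt/share
  volumes:
    - name: myfileshare
      azureFile:
        # secret created above; shareName is a hypothetical share
        secretName: storage-secret
        shareName: myfileshare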
It should also be possible for the storage key to be created automatically as a secret in AKS, provided the cluster has the right permissions.
-> I gave the AKS kubelet identity the "Storage Account Key Operator Service Role" and "Reader" roles.
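For reference, those role assignments can be expressed with the Azure CLI roughly like this (a sketch; the resource group, cluster, and storage account names are placeholders):
# Object ID of the AKS kubelet (agent pool) managed identity
KUBELET_ID=$(az aks show -g <resource-group> -n <cluster-name> \
  --query identityProfile.kubeletidentity.objectId -o tsv)
SCOPE="/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<account>"

# Allow the identity to list storage keys and read the account
az role assignment create --assignee "$KUBELET_ID" \
  --role "Storage Account Key Operator Service Role" --scope "$SCOPE"
az role assignment create --assignee "$KUBELET_ID" \
  --role "Reader" --scope "$SCOPE"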
The result is this error message:
Warning FailedMount 2m46s (x5 over 4m54s) kubelet MountVolume.SetUp failed for volume "myfileshare" : rpc error: code = Internal desc = accountName() or accountKey is empty
Warning FailedMount 44s (x5 over 4m53s) kubelet MountVolume.SetUp failed for volume "myfileshare" : rpc error: code = InvalidArgument desc = GetAccountInfo(csi-44a54edbcf...................) failed with error: could not get secret(azure-storage-account-mystorage-secret): secrets "azure-storage-account-mystorage-secret" not found
I also tried creating a custom StorageClass and a PersistentVolume (not a claim), but that changed nothing. Maybe I am on the wrong track.
Can somebody help?
Additional information:
My AKS is version 1.22.6 and I use a managed identity.

Related

cert-manager: Failed to register ACME account: invalid character '<' looking for beginning of value

I installed cert-manager using the Helm chart. I created a ClusterIssuer, but I see that it's in a failed state:
kubectl describe clusterissuer letsencrypt-staging
ErrRegisterACMEAccount Failed to register ACME account: invalid character '<' looking for beginning of value
What could be causing this invalid character '<'?
This error is most likely the result of an incorrect server URL: the URL you specified is returning HTML (hence the complaint about <).
Make sure that your server URL is https://acme-staging-v02.api.letsencrypt.org/directory and NOT just https://acme-staging-v02.api.letsencrypt.org/; the /directory path must be included in the URL.
So the ClusterIssuer should look like this (emphasis on .spec.acme.server):
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    email: name.surname@mycompany.com
    privateKeySecretRef:
      name: letsencrypt-staging
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    solvers:
      - dns01:
          route53:
            hostedZoneID: XXXXXXXXXXXXXX
            region: eu-north-1
        selector:
          dnsZones:
            - xxx.yyy.mycompany.com
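A quick way to verify the server value: the ACME directory endpoint should return a JSON document, not an HTML page.
# Should print JSON describing the ACME endpoints
curl -s https://acme-staging-v02.api.letsencrypt.org/directory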

How to skip TLS cert check for crictl (containerd CR) while pulling the images from private repository

I have installed k8s version 1.24, and containerd (containerd://1.5.9) is the container runtime for my setup (Ubuntu 20.04).
I have also installed docker on my VM and added my private repository under /etc/docker/daemon.json with the following change:
{ "insecure-registries" : ["myPvtRepo.com:5028"] }
When I run docker pull myPvtRepo:123/image after logging in to my private repo with docker login myPvtRepo:123, I am able to pull the images. But when I run the same pull with crictl pull myPvtRepo:123/image, I am facing:
E0819 06:49:01.200489  162610 remote_image.go:218] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"myPvtRepo.com:5028/centos:latest\": failed to resolve reference \"myPvtRepo.com:5028/centos:latest\": failed to do request: Head https://myPvtRepo.com:5028/v2/centos/manifests/latest: x509: certificate signed by unknown authority" image="myPvtRepo.com:5028/centos:latest"
FATA[0000] pulling image: rpc error: code = Unknown desc = failed to pull and unpack image "myPvtRepo.com:5028/centos:latest": failed to resolve reference "myPvtRepo.com:5028/centos:latest": failed to do request: Head https://myPvtRepo.com:5028/v2/centos/manifests/latest: x509: certificate signed by unknown authority
FYI, I have modified /etc/containerd/config.toml with the content below.
version = 2
[plugins."io.containerd.grpc.v1.cri".registry.configs."myPvtRepo.com:5028".tls]
  insecure_skip_verify = true
[plugins."io.containerd.grpc.v1.cri".registry.mirrors]
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
    endpoint = ["https://myPvtRepo.com:5028", "https://myPvtRepo.com:5038",
                "https://myPvtRepo.com:5037", "https://myPvtRepo.com:5039"]
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."IP:5000"]
    endpoint = ["http://IP:5000"]
I have also modified containerd's endpoint to point to containerd's sock.
Can you please help me understand and fix why, even after setting insecure_skip_verify = true for my private repository and restarting the containerd service, I am still getting this issue?
I got a solution: instead of skipping verification, add the registry's CA bundle to the system trust store so TLS verification succeeds:
cd /usr/local/share/ca-certificates/
curl -L --remote-name http://your-artifacts.com/xyz-bundle.crt
/usr/sbin/update-ca-certificates
This one worked for me.
Also make sure to update your endpoints under /etc/crictl.yaml
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: ""
timeout: 0
debug: false
pull-image-on-create: false
disable-pull-on-run: false
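To confirm crictl can actually reach containerd through that socket, a quick check:
# Prints runtime status and the CRI config if the endpoint is correct
sudo crictl info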
You will need to create a hosts.toml file for the private registry and add skip_verify = true.
ref: https://github.com/containerd/containerd/blob/main/docs/hosts.md
Steps:
create the folder: mkdir -p /etc/containerd/certs.d/<your registry>
add this config in /etc/containerd/config.toml:
[plugins."io.containerd.grpc.v1.cri".registry]
  config_path = "/etc/containerd/certs.d"
create and edit hosts.toml under the folder you just created:
server = "https://<your registry>"

[host."https://<your registry>"]
  capabilities = ["pull", "resolve"]
  skip_verify = true
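After editing /etc/containerd/config.toml, restart containerd and retry the pull (registry name taken from the question; adjust to yours):
sudo systemctl restart containerd
crictl pull myPvtRepo.com:5028/centos:latest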

jkube resource failed: Unknown type CRD

I am using JKube to deploy a Spring Boot hello-world application on my Kubernetes installation. I wanted to add a resource fragment defining a Traefik ingress route, but k8s:resource fails with "Unknown type 'ingressroute'".
IngressRoute has already been defined on the cluster using a custom resource definition.
How do I write my fragment?
The following works when I deploy it with kubectl.
# IngressRoute
---
kind: IngressRoute
apiVersion: traefik.containo.us/v1alpha1
metadata:
  name: demo
  namespace: default
spec:
  entryPoints:
    - web
  routes:
    - match: Host(`demo.domainname.com`)
      kind: Rule
      services:
        - name: demo
          port: 80
@Rohan Kumar
Thank you for your answer. I can build and deploy it, but as soon as I add a file to use my IngressRoute, the k8s:resource goal fails.
I added files, one for each CRD, with filenames ending in -cr.yml, and added the following to the pom file:
<resources>
  <customResourceDefinitions>
    <customResourceDefinition>traefikservices.traefik.containo.us</customResourceDefinition>
    <customResourceDefinition>tlsstores.traefik.containo.us</customResourceDefinition>
    <customResourceDefinition>tlsoptions.traefik.containo.us</customResourceDefinition>
    <customResourceDefinition>middlewares.traefik.containo.us</customResourceDefinition>
    <customResourceDefinition>ingressrouteudps.traefik.containo.us</customResourceDefinition>
    <customResourceDefinition>ingressroutetcps.traefik.containo.us</customResourceDefinition>
    <customResourceDefinition>ingressroutes.traefik.containo.us</customResourceDefinition>
  </customResourceDefinitions>
</resources>
Example CRD for IngressRoute:
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: ingressroutes.traefik.containo.us
spec:
  group: traefik.containo.us
  version: v1alpha1
  names:
    kind: IngressRoute
    plural: ingressroutes
    singular: ingressroute
  scope: Namespaced
But when running the k8s:resource goal I get the error:
Failed to execute goal org.eclipse.jkube:kubernetes-maven-plugin:1.0.2:resource (default-cli) on project demo:
Execution default-cli of goal org.eclipse.jkube:kubernetes-maven-plugin:1.0.2:resource failed: Unknown type
'ingressroute' for file 005-ingressroute.yml. Must be one of : pr, lr, pv, project, replicaset, cronjob, ds,
statefulset, clusterrolebinding, pvc, limitrange, imagestreamtag, replicationcontroller, is, rb, rc, ingress, route,
projectrequest, job, rolebinding, rq, template, serviceaccount, bc, rs, rbr, role, pod, oauthclient, ns,
resourcequota, secret, persistemtvolumeclaim, istag, customerresourcedefinition, sa, persistentvolume, crb,
clusterrb, crd, deploymentconfig, configmap, deployment, imagestream, svc, rolebindingrestriction, cj, cm,
buildconfig, daemonset, cr, crole, pb, clusterrole, pd, policybinding, service, namespace, dc
I'm from the Eclipse JKube team. We have improved CustomResource support a lot in our recent v1.2.0 release. Now you only need to worry about how you name your CustomResource fragment, and Eclipse JKube will detect the CustomResourceDefinition for the specified IngressRoute.
You need to name CustomResource fragments with a *-cr.yml suffix; this distinguishes them from standard Kubernetes resources. For example, I added your IngressRoute fragment to my src/main/jkube like this:
jkube-custom-resource-fragments : $ ls src/main/jkube/
ats-crd.yml crontab-crd.yml dummy-cr.yml podset-crd.yaml traefic-crd.yaml
ats-cr.yml crontab-cr.yml ingressroute-cr.yml second-dummy-cr.yml traefic-ingressroute2-cr.yml
crd.yaml dummy-crd.yml istio-crd.yaml test2-cr.yml virtualservice-cr.yml
jkube-custom-resource-fragments : $ ls src/main/jkube/traefic-ingressroute2-cr.yml
src/main/jkube/traefic-ingressroute2-cr.yml
Then you should be able to see your IngressRoute generated after the k8s:resource goal:
$ mvn k8s:resource
...
$ cat target/classes/META-INF/jkube/kubernetes.yml
You can then go ahead and apply these generated manifests to your Kubernetes Cluster with apply goal:
$ mvn k8s:apply
...
$ kubectl get ingressroute
NAME AGE
demo 17s
foo 16s
I tried all this on this reproducer project and it seemed to be working okay for me: https://github.com/r0haaaan/jkube-custom-resource-fragments

[ERROR SystemVerification]: could not unmarshal the JSON output of 'docker info'

I am trying to initialize a Kubernetes cluster with a kubeadm YAML config via Terraform; the initialization commands run from the instance user data. When I look in cloud-init-output.log I see this error, which I couldn't resolve.
Here is my config YAML file:
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.20.2
networking:
  serviceSubnet: "10.100.0.0/16"
  podSubnet: "10.244.0.0/16"
apiServer:
  extraArgs:
    cloud-provider: "aws"
controllerManager:
  extraArgs:
    cloud-provider: "aws"
---
apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
bootstrapTokens:
  - token: "urepc5.tzoz0wa8skdkiesf"
    description: "default kubeadm bootstrap token"
    ttl: "15m"
localAPIEndpoint:
  advertiseAddress: "10.0.0.226"
  bindPort: 6443
And the output of cloud-init:
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR SystemVerification]: could not unmarshal the JSON output of 'docker info':
WARNING: Error loading config file: .dockercfg: $HOME is not defined
{"ID":"NGGY:LW7B:UDDM:OZNZ:DJ7S:BVEL:AUWJ:RUQQ:4D73:ZYK2:I75V:6ZQL","Containers":0,"ContainersRunning":0,"ContainersPaused":0,"ContainersStopped":0,"Images":0,"Driver":"overlay2","DriverStatus":[["Backing Filesystem","extfs"],["Supports d_type","true"],["Native Overlay Diff","true"]],"Plugins":{"Volume":["local"],"Network":["bridge","host","ipvlan","macvlan","null","overlay"],"Authorization":null,"Log":["awslogs","fluentd","gcplogs","gelf","journald","json-file","local","logentries","splunk","syslog"]},"MemoryLimit":true,"SwapLimit":false,"KernelMemory":true,"KernelMemoryTCP":true,"CpuCfsPeriod":true,"CpuCfsQuota":true,"CPUShares":true,"CPUSet":true,"PidsLimit":true,"IPv4Forwarding":true,"BridgeNfIptables":true,"BridgeNfIp6tables":true,"Debug":false,"NFd":22,"OomKillDisable":true,"NGoroutines":34,"SystemTime":"2021-01-17T16:57:14.692026867Z","LoggingDriver":"json-file","CgroupDriver":"cgroupfs","CgroupVersion":"1","NEventsListener":0,"KernelVersion":"5.4.0-1029-aws","OperatingSystem":"Ubuntu 20.04.1 LTS","OSVersion":"20.04","OSType":"linux","Architecture":"x86_64","IndexServerAddress":"https://index.docker.io/v1/","RegistryConfig":{"AllowNondistributableArtifactsCIDRs":[],"AllowNondistributableArtifactsHostnames":[],"InsecureRegistryCIDRs":["127.0.0.0/8"],"IndexConfigs":{"docker.io":{"Name":"docker.io","Mirrors":[],"Secure":true,"Official":true}},"Mirrors":[]},"NCPU":2,"MemTotal":4124860416,"GenericResources":null,"DockerRootDir":"/var/lib/docker","HttpProxy":"","HttpsProxy":"","NoProxy":"","Name":"ip-10-0-0-226.ec2.internal","Labels":[],"ExperimentalBuild":false,"ServerVersion":"20.10.2","Runtimes":{"io.containerd.runc.v2":{"path":"runc"},"io.containerd.runtime.v1.linux":{"path":"runc"},"runc":{"path":"runc"}},"DefaultRuntime":"runc","Swarm":{"NodeID":"","NodeAddr":"","LocalNodeState":"inactive","ControlAvailable":false,"Error":"","RemoteManagers":null},"LiveRestoreEnabled":false,"Isolation":"","InitBinary":"docker-init","ContainerdCommit":{"ID":"269548fa27e0089a8b8278fc4fc781d7f65a939b","Expected":"269548fa27e0089a8b8278fc4fc781d7f65a939b"},"RuncCommit":{"ID":"ff819c7e9184c13b7c2607fe6c30ae19403a7aff","Expected":"ff819c7e9184c13b7c2607fe6c30ae19403a7aff"},"InitCommit":{"ID":"de40ad0","Expected":"de40ad0"},"SecurityOptions":["name=apparmor","name=seccomp,profile=default"],"Warnings":["WARNING: No swap limit support","WARNING: No blkio weight support","WARNING: No blkio weight_device support"],"ClientInfo":{"Debug":false,"Context":"default","Plugins":[{"SchemaVersion":"0.1.0","Vendor":"Docker Inc.","Version":"v0.9.1-beta3","ShortDescription":"Docker App","Experimental":true,"Name":"app","Path":"/usr/libexec/docker/cli-plugins/docker-app"},{"SchemaVersion":"0.1.0","Vendor":"Docker Inc.","Version":"v0.5.1-docker","ShortDescription":"Build with BuildKit","Name":"buildx","Path":"/usr/libexec/docker/cli-plugins/docker-buildx"}],"Warnings":null}}
: invalid character 'W' looking for beginning of value
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
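Note the first line of the captured output: kubeadm parses the JSON that 'docker info' prints, but the "WARNING: Error loading config file: .dockercfg: $HOME is not defined" line precedes the JSON, which is why unmarshalling fails at the leading 'W'. A likely workaround, assuming the user-data script runs without HOME set (the config path is a placeholder):
#!/bin/bash
# cloud-init user data runs as root but without a login environment,
# so $HOME is unset and the docker CLI emits the WARNING that
# pollutes the JSON kubeadm tries to parse.
export HOME=/root
kubeadm init --config /path/to/kubeadm-config.yaml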

Not able to replace AKS server APP ID Key for RBAC

I need to replace the AKS server AAD app secret key. I tried an ARM template incremental deployment to achieve this, but it failed with the following error:
New-AzureRmResourceGroupDeployment : 2:00:42 PM - Error: Code=PropertyChangeNotAllowed; Message=Provisioning of resource(s) for container service
test-aks-emea in resource group test-emea-kubernetes failed. Message: {
"code": "PropertyChangeNotAllowed",
"message": "Changing property 'aadProfile.serverAppSecret' is not allowed.",
"target": "aadProfile.serverAppSecret"
}.
Is there any other way we can replace the secret key without redeploying the cluster?
I have found a way to reset the AAD profile with the new secret key, using the resetAADProfile REST API POST method:
https://learn.microsoft.com/en-us/rest/api/aks/managedclusters/resetaadprofile
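For a concrete call, the POST can be issued with az rest, roughly like this (a sketch; the IDs, names, and api-version are placeholders to adapt to your cluster, and the body fields follow the linked resetAADProfile reference):
az rest --method post \
  --url "https://management.azure.com/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.ContainerService/managedClusters/<cluster>/resetAADProfile?api-version=2020-06-01" \
  --body '{
    "clientAppID": "<client-app-id>",
    "serverAppID": "<server-app-id>",
    "serverAppSecret": "<new-server-app-secret>",
    "tenantID": "<tenant-id>"
  }'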
