Since the ServiceAccount name is "prometheus" in prometheus-serviceaccount.yaml,
kind: ServiceAccount
apiVersion: v1
metadata:
  name: prometheus
  labels:
    app: prometheus
  namespace: default
do you think the subject name in prometheus-clusterrolebinding.yaml should be "prometheus" rather than the current "default"?
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: prometheus
  labels:
    app: prometheus
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: prometheus
subjects:
- kind: ServiceAccount
  name: default
  namespace: default
There are similar problems for prometheus-proxy.
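For reference, if the binding is indeed meant to target the "prometheus" ServiceAccount above rather than the namespace's default one, the subjects block would presumably read (a sketch of the suggested fix, not the file as currently shipped):
subjects:
- kind: ServiceAccount
  name: prometheus
  namespace: default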
I am trying to build a sample project for Istio. I have two apps, demo1 and demo2.
demoapp YAML:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-1-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo-1-app
  template:
    metadata:
      labels:
        app: demo-1-app
    spec:
      containers:
      - name: demo-1-app
        image: muzimil:demo-1
        ports:
        - containerPort: 8080
        imagePullPolicy: Never
---
apiVersion: v1
kind: Service
metadata:
  name: demo-1-app
spec:
  selector:
    app: demo-1-app
  ports:
  - port: 8080
    name: http
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: demo-1-app
  labels:
    account: demo-1-app
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-2-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo-2-app
  template:
    metadata:
      labels:
        app: demo-2-app
    spec:
      containers:
      - name: demo-2-app
        image: muzimil:demo2-1
        ports:
        - containerPort: 8080
        imagePullPolicy: Never
---
apiVersion: v1
kind: Service
metadata:
  name: demo-2-app
spec:
  selector:
    app: demo-2-app
  ports:
  - port: 8080
    name: http
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: demo-2-app
  labels:
    account: demo-2-app
And my gateway is this:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: demo-app-gateway
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: demo-service1
spec:
  hosts:
  - "*"
  gateways:
  - demo-app-gateway
  http:
  - match:
    - uri:
        exact: /demo1
    route:
    - destination:
        host: demo-1-app
        port:
          number: 8080
  - match:
    - uri:
        exact: /demo2
    route:
    - destination:
        host: demo-2-app
        port:
          number: 8080
I tried to hit the URL with both localhost/demo1/getDetails and 127.0.0.1/demo1/getDetails,
but I always get a 404.
istioctl analyze does not report any errors.
To access the application, either change the istio-ingressgateway Service to type NodePort or port-forward the istio ingress gateway Service. Edit the istio-ingressgateway Service to change the service type:
type: NodePort
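For example, the type can be switched with a one-line patch (this assumes Istio was installed into the usual istio-system namespace):
kubectl -n istio-system patch svc istio-ingressgateway -p '{"spec":{"type":"NodePort"}}'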
Kubernetes will then assign a node port, and you can provide the same node port value in the Istio gateway.
selector:
  istio: ingressgateway # use istio default controller
servers:
- port:
    number: <nodeportnumber>
    name: http
    protocol: HTTP
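Alternatively, port-forwarding reaches the gateway without changing the Service type at all (a sketch, again assuming the istio-system namespace):
kubectl -n istio-system port-forward svc/istio-ingressgateway 8080:80
curl http://localhost:8080/demo1/getDetails
With the NodePort route, the assigned port can be read from kubectl -n istio-system get svc istio-ingressgateway, and the app is then reachable at http://<node-ip>:<assigned-node-port>/demo1/getDetails.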
I have the following manifest:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-rec-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-rec-deployment
  template:
    metadata:
      labels:
        app: my-rec-deployment
    spec:
      containers:
      - name: my-rec-container
        image: nginx
---
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: my-rec-vpa
spec:
  targetRef:
    apiVersion: "apps/v1"
    kind: Deployment
    name: my-rec-deployment
  updatePolicy:
    updateMode: "Off"
When I run kubectl get vpa my-rec-vpa --output yaml, I get the following output for the recommendation:
status:
  conditions:
  - lastTransitionTime: "2022-05-17T11:10:32Z"
    status: "False"
    type: RecommendationProvided
  recommendation: {}
What is the problem with the recommendations?
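A couple of diagnostic commands are usually worth running here (a sketch; they assume the VPA components were installed into kube-system, which is the default for the upstream autoscaler manifests):
kubectl describe vpa my-rec-vpa
kubectl -n kube-system get pods | grep vpa
The first repeats the conditions above along with any events; the second confirms that the vpa-recommender pod is actually running, since without a running recommender the status stays at RecommendationProvided: False with an empty recommendation.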
I have a Kubernetes cluster that is running a Jenkins pod with a Service set up for MetalLB. Currently, when I try to hit the loadBalancerIP for the pod from outside my cluster, I am unable to. I also have a kube-verify pod running on the cluster with a Service that also uses MetalLB. When I try to hit that pod from outside my cluster, I can reach it with no problem.
When I switch the Service for the Jenkins pod to type NodePort it works, but as soon as I switch it back to type LoadBalancer it stops working. Both the Jenkins pod and the working kube-verify pod are running on the same node.
Cluster details:
The master node is running and is connected to my router wirelessly. On the master node I have dnsmasq set up along with iptables rules that forward the connection from the wireless port to the Ethernet port. The nodes are connected together via a switch over Ethernet. MetalLB is set up in layer 2 mode with an address pool on the same subnet as the IP address of the master node's wireless port. kube-proxy is set to use strictARP and IPVS mode.
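For reference, the strictARP/IPVS settings described above normally live in the kube-proxy ConfigMap as part of the KubeProxyConfiguration (a sketch of just the relevant fields, not copied from this cluster):
mode: "ipvs"
ipvs:
  strictARP: true
MetalLB's layer 2 mode requires strictARP to be enabled whenever kube-proxy runs in IPVS mode.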
Jenkins Manifest:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins-sa
  namespace: "devops-tools"
  labels:
    app: jenkins
    version: v1
    tier: backend
---
apiVersion: v1
kind: Secret
metadata:
  name: jenkins-secret
  namespace: "devops-tools"
  labels:
    app: jenkins
    version: v1
    tier: backend
type: Opaque
data:
  jenkins-admin-password: ***************
  jenkins-admin-user: ********
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: jenkins
  namespace: "devops-tools"
  labels:
    app: jenkins
    version: v1
    tier: backend
data:
  jenkins.yaml: |-
    jenkins:
      authorizationStrategy:
        loggedInUsersCanDoAnything:
          allowAnonymousRead: false
      securityRealm:
        local:
          allowsSignup: false
          enableCaptcha: false
          users:
          - id: "${jenkins-admin-username}"
            name: "Jenkins Admin"
            password: "${jenkins-admin-password}"
      disableRememberMe: false
      mode: NORMAL
      numExecutors: 0
      labelString: ""
      projectNamingStrategy: "standard"
      markupFormatter:
        plainText
      clouds:
      - kubernetes:
          containerCapStr: "10"
          defaultsProviderTemplate: "jenkins-base"
          connectTimeout: "5"
          readTimeout: 15
          jenkinsUrl: "jenkins-ui:8080"
          jenkinsTunnel: "jenkins-discover:50000"
          maxRequestsPerHostStr: "32"
          name: "kubernetes"
          serverUrl: "https://kubernetes"
          podLabels:
          - key: "jenkins/jenkins-agent"
            value: "true"
          templates:
          - name: "default"
            #id: eeb122dab57104444f5bf23ca29f3550fbc187b9d7a51036ea513e2a99fecf0f
            containers:
            - name: "jnlp"
              alwaysPullImage: false
              args: "^${computer.jnlpmac} ^${computer.name}"
              command: ""
              envVars:
              - envVar:
                  key: "JENKINS_URL"
                  value: "jenkins-ui:8080"
              image: "jenkins/inbound-agent:4.11-1"
              ttyEnabled: false
              workingDir: "/home/jenkins/agent"
            idleMinutes: 0
            instanceCap: 2147483647
            label: "jenkins-agent"
            nodeUsageMode: "NORMAL"
            podRetention: Never
            showRawYaml: true
            serviceAccount: "jenkins-sa"
            slaveConnectTimeoutStr: "100"
            yamlMergeStrategy: override
      crumbIssuer:
        standard:
          excludeClientIPFromCrumb: true
    security:
      apiToken:
        creationOfLegacyTokenEnabled: false
        tokenGenerationOnCreationEnabled: false
        usageStatisticsEnabled: true
    unclassified:
      location:
        adminAddress:
        url: jenkins-ui:8080
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: jenkins-pv-volume
  labels:
    type: local
spec:
  storageClassName: local-storage
  claimRef:
    name: jenkins-pv-claim
    namespace: devops-tools
  capacity:
    storage: 16Gi
  accessModes:
  - ReadWriteMany
  local:
    path: /mnt
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - heine-cluster1
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-pv-claim
  namespace: devops-tools
  labels:
    app: jenkins
    version: v1
    tier: backend
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 8Gi
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: jenkins-cr
rules:
- apiGroups: [""]
  resources: ["*"]
  verbs: ["*"]
---
# This role is used to allow Jenkins scheduling of agents via Kubernetes plugin.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: jenkins-role-schedule-agents
  namespace: devops-tools
  labels:
    app: jenkins
    version: v1
    tier: backend
rules:
- apiGroups: [""]
  resources: ["pods", "pods/exec", "pods/log", "persistentvolumeclaims", "events"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["pods", "pods/exec", "persistentvolumeclaims"]
  verbs: ["create", "delete", "deletecollection", "patch", "update"]
---
# The sidecar container which is responsible for reloading configuration changes
# needs permissions to watch ConfigMaps
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: jenkins-casc-reload
  namespace: devops-tools
  labels:
    app: jenkins
    version: v1
    tier: backend
rules:
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: jenkins-crb
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: jenkins-cr
subjects:
- kind: ServiceAccount
  name: jenkins-sa
  namespace: "devops-tools"
---
# We bind the role to the Jenkins service account. The role binding is created in the namespace
# where the agents are supposed to run.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: jenkins-schedule-agents
  namespace: "devops-tools"
  labels:
    app: jenkins
    version: v1
    tier: backend
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: jenkins-role-schedule-agents
subjects:
- kind: ServiceAccount
  name: jenkins-sa
  namespace: "devops-tools"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: jenkins-watch-configmaps
  namespace: "devops-tools"
  labels:
    app: jenkins
    version: v1
    tier: backend
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: jenkins-casc-reload
subjects:
- kind: ServiceAccount
  name: jenkins-sa
  namespace: "devops-tools"
---
apiVersion: v1
kind: Service
metadata:
  name: jenkins
  namespace: "devops-tools"
  labels:
    app: jenkins
    version: v1
    tier: backend
  annotations:
    metallb.universe.tf/address-pool: default
spec:
  type: LoadBalancer
  loadBalancerIP: 172.16.1.5
  ports:
  - name: ui
    port: 8080
    targetPort: 8080
  externalTrafficPolicy: Local
  selector:
    app: jenkins
---
apiVersion: v1
kind: Service
metadata:
  name: jenkins-agent
  namespace: "devops-tools"
  labels:
    app: jenkins
    version: v1
    tier: backend
spec:
  ports:
  - name: agents
    port: 50000
    targetPort: 50000
  selector:
    app: jenkins
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins
  namespace: "devops-tools"
  labels:
    app: jenkins
    version: v1
    tier: backend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jenkins
  template:
    metadata:
      labels:
        app: jenkins
        version: v1
        tier: backend
      annotations:
        checksum/config: c0daf24e0ec4e4cb59c8a66305181a17249770b37283ca8948e189a58e29a4a5
    spec:
      securityContext:
        runAsUser: 1000
        fsGroup: 1000
        runAsNonRoot: true
      containers:
      - name: jenkins
        image: "heineza/jenkins-master:2.323-jdk11-1"
        imagePullPolicy: Always
        args: [ "--httpPort=8080"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: JAVA_OPTS
          value: -Djenkins.install.runSetupWizard=false -Dorg.apache.commons.jelly.tags.fmt.timeZone=America/Chicago
        - name: JENKINS_SLAVE_AGENT_PORT
          value: "50000"
        ports:
        - containerPort: 8080
          name: ui
        - containerPort: 50000
          name: agents
        resources:
          limits:
            cpu: 2000m
            memory: 4096Mi
          requests:
            cpu: 50m
            memory: 256Mi
        volumeMounts:
        - mountPath: /var/jenkins_home
          name: jenkins-home
          readOnly: false
        - name: jenkins-config
          mountPath: /var/jenkins_home/jenkins.yaml
        - name: admin-secret
          mountPath: /run/secrets/jenkins-admin-username
          subPath: jenkins-admin-user
          readOnly: true
        - name: admin-secret
          mountPath: /run/secrets/jenkins-admin-password
          subPath: jenkins-admin-password
          readOnly: true
      serviceAccountName: "jenkins-sa"
      volumes:
      - name: jenkins-cache
        emptyDir: {}
      - name: jenkins-home
        persistentVolumeClaim:
          claimName: jenkins-pv-claim
      - name: jenkins-config
        configMap:
          name: jenkins
      - name: admin-secret
        secret:
          secretName: jenkins-secret
This Jenkins manifest is a modified version of what the Jenkins Helm chart generates. I redacted my Secret, but in the actual manifest there are base64-encoded strings. Also, the Docker image I created and use in the deployment is based on the Jenkins 2.323-jdk11 image; I just installed some plugins for Configuration as Code, Kubernetes, and Git. What could be preventing the Jenkins pod from being accessible outside of my cluster when using MetalLB?
By default, MetalLB does not allow re-using/sharing the same LoadBalancer IP address across services.
According to MetalLB documentation:
MetalLB respects the spec.loadBalancerIP parameter, so if you want your service to be set up with a specific address, you can request it by setting that parameter.
If MetalLB does not own the requested address, or if the address is already in use by another service, assignment will fail and MetalLB will log a warning event visible in kubectl describe service <service name>.[1]
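To check for such a warning in this setup, describing the Service from the manifest above should be enough (names taken from that manifest):
kubectl -n devops-tools describe service jenkins
Any MetalLB allocation failure shows up in the Events section of that output.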
In case you need to have several services on a single IP, you can enable selective IP sharing by adding the metallb.universe.tf/allow-shared-ip annotation to the services (see the sketch after this list).
The value of the annotation is a "sharing key." Services can share an IP address under the following conditions:
They both have the same sharing key.
They request the use of different ports (e.g. tcp/80 for one and tcp/443 for the other).
They both use the Cluster external traffic policy, or they both point to the exact same set of pods (i.e. the pod selectors are identical). [2]
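As a sketch of what that looks like (hypothetical second service, only the relevant fields shown):
apiVersion: v1
kind: Service
metadata:
  name: jenkins
  annotations:
    metallb.universe.tf/allow-shared-ip: "jenkins-shared"
spec:
  type: LoadBalancer
  loadBalancerIP: 172.16.1.5
  ports:
  - port: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: other-service
  annotations:
    metallb.universe.tf/allow-shared-ip: "jenkins-shared"
spec:
  type: LoadBalancer
  loadBalancerIP: 172.16.1.5
  ports:
  - port: 443
Both carry the same sharing key and request different ports, so MetalLB can place them on the same address.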
UPDATE
I tested your setup successfully with one minor difference:
I needed to remove externalTrafficPolicy: Local from the Jenkins Service spec.
Try this solution; if it still doesn't work, then it's a problem with your cluster environment.
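A quick way to try that without re-applying the whole manifest (a sketch; it simply resets the field to its default, Cluster):
kubectl -n devops-tools patch svc jenkins -p '{"spec":{"externalTrafficPolicy":"Cluster"}}'
With externalTrafficPolicy: Local, only nodes hosting a ready Jenkins pod serve the LoadBalancer traffic, which can interact badly with layer 2 announcement in some environments.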
How to assign RBAC permissions to the system:anonymous service account in Kubernetes?
To understand Kubernetes, I want to give the system:anonymous service account permission to review its own permissions using kubectl auth can-i --list.
I have created the following role and rolebinding:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: anonymous-review-access
rules:
- apiGroups:
  - authorization.k8s.io
  resources:
  - selfsubjectaccessreviews
  - selfsubjectrulesreviews
  verbs:
  - create
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: anonymous-review-access
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: anonymous-review-access
subjects:
- kind: ServiceAccount
  name: anonymous
  namespace: default
After kubectl apply -f ... the above, I'm still not allowed to review access permissions anonymously:
$ kubectl auth can-i --list --as=system:anonymous -n default
Error from server (Forbidden): selfsubjectrulesreviews.authorization.k8s.io is forbidden: User "system:anonymous" cannot create resource "selfsubjectrulesreviews" in API group "authorization.k8s.io" at the cluster scope
How can I create a proper role and rolebinding to view permissions as the system:anonymous service account?
system:anonymous is not a service account. Requests that are not rejected by other configured authentication methods are treated as anonymous requests and are given a username of system:anonymous and a group of system:unauthenticated. Bind the ClusterRole to that user instead:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: anonymous-review-access
rules:
- apiGroups:
  - authorization.k8s.io
  resources:
  - selfsubjectaccessreviews
  - selfsubjectrulesreviews
  verbs:
  - create
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: anonymous-review-access
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: anonymous-review-access
subjects:
- kind: User
  name: system:anonymous
  namespace: default
kubectl auth can-i --list --as=system:anonymous -n default
Resources                                       Non-Resource URLs   Resource Names   Verbs
selfsubjectaccessreviews.authorization.k8s.io   []                  []               [create]
selfsubjectrulesreviews.authorization.k8s.io    []                  []               [create]
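Equivalently (a variant not shown above), the binding could target the whole unauthenticated group rather than the single user:
subjects:
- kind: Group
  name: system:unauthenticated
  apiGroup: rbac.authorization.k8s.io
That grants the same create permission on the review APIs to every anonymous request.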
I'm trying to create two deployments, one for WordPress and the other for MySQL, which refer to two different persistent volumes.
Sometimes, while deleting and recreating the volumes and deployments, the MySQL deployment populates the WordPress volume (ending up with a database in the wordpress-volume directory).
This is clearer when you do kubectl get pv --namespace my-namespace:
mysql-volume 2Gi RWO Retain Bound flashart-it/wordpress-volume-claim manual 1h
wordpress-volume 2Gi RWO Retain Bound flashart-it/mysql-volume-claim manual
I'm pretty sure the settings are OK. Please find the YAML file below.
Persistent Volume Claims + Persistent Volumes
kind: PersistentVolume
apiVersion: v1
metadata:
  namespace: my-namespace
  name: mysql-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: /path/to/mount/mysql-volume
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  namespace: my-namespace
  name: mysql-volume-claim
spec:
  storageClassName: manual
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
---
kind: PersistentVolume
apiVersion: v1
metadata:
  namespace: my-namespace
  name: wordpress-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: /path/to/mount/wordpress-volume
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  namespace: my-namespace
  name: wordpress-volume-claim
spec:
  storageClassName: manual
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
Deployments
kind: Deployment
apiVersion: apps/v1
metadata:
  name: wordpress
  namespace: my-namespace
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: frontend
  strategy:
    type: Recreate
  template:
    metadata:
      namespace: my-namespace
      labels:
        app: wordpress
        tier: frontend
    spec:
      containers:
      - image: wordpress:5.0-php7.1-apache
        name: wordpress
        env:
          # ...
        ports:
          # ...
        volumeMounts:
        - name: wordpress-volume
          mountPath: /var/www/html
      volumes:
      - name: wordpress-volume
        persistentVolumeClaim:
          claimName: wordpress-volume-claim
---
kind: Deployment
apiVersion: apps/v1
metadata:
  namespace: my-namespace
  name: wordpress-mysql
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      namespace: my-namespace
      labels:
        app: wordpress
        tier: mysql
    spec:
      containers:
      - image: mysql:5.7
        name: mysql
        env:
          # ...
        ports:
          # ...
        volumeMounts:
        - name: mysql-volume
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-volume
        persistentVolumeClaim:
          claimName: mysql-volume-claim
This is expected behavior in Kubernetes. A PVC can bind to any available PV, as long as the storage class matches, the access mode matches, and the storage size is sufficient. Names are not used to match a PVC to a PV.
A possible solution for your scenario is to use a label selector on the PVC to filter qualified PVs.
First, add a label to the PV (in this case, app=mysql):
kind: PersistentVolume
apiVersion: v1
metadata:
  name: mysql-volume
  labels:
    app: mysql
Then, add a label selector to the PVC to filter PVs:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  namespace: my-namespace
  name: mysql-volume-claim
spec:
  storageClassName: manual
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
  selector:
    matchLabels:
      app: mysql
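The same pattern would presumably be applied on the WordPress side, reusing the names from the question: label wordpress-volume with app: wordpress and add the matching selector to wordpress-volume-claim. After re-applying, the binding can be checked with:
kubectl get pv --show-labels
kubectl -n my-namespace get pvc
Each claim should now report Bound against the volume carrying its label.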