RBAC: access denied when making requests from Kubeflow Notebooks - kubeflow

I tried the KServe example and ran into the problem of "RBAC: Access Denied". Here is what I did.
Add a user:
$ kubectl -n auth edit cm dex
Add the example information under staticPasswords:
- email: my_email@gmail.com
  hash: my_ps_hash
  userID: "myuserid"
  username: myusername
$ kubectl rollout restart deployment dex -n auth
Add a Profile to create the namespace:
$ vi profile.yaml
apiVersion: kubeflow.org/v1beta1
kind: Profile
metadata:
  name: test-namespace
spec:
  owner:
    kind: User
    name: my_email@gmail.com
  resourceQuotaSpec: {}
$ kubectl apply -f profile.yaml
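You can verify that the Profile and its namespace were created (my addition, not part of the original post):
$ kubectl get profile test-namespace
$ kubectl get namespace test-namespace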
Serve the example model
Kubeflow Central Dashboard - Models - +NEW MODEL SERVER
apiVersion: "serving.kserve.io/v1beta1"
kind: "InferenceService"
metadata:
annotations:
isdecar.istio.is/inject: "false"
name: "sklearn-iris"
spec:
predictor:
sklearn:
image: "kserve/sklearnserver:v0.9.0"
storageUri: "gs://kfserving-examples/models/sklearn/1.0/model"
$ kubectl get InferenceService -n test-namespace
NAME           URL                                                     READY   PREV   LATEST   PREVROLLEDOUTREVISION   LATESTREADYREVISION                    AGE
sklearn-iris   http://sklearn-iris-test2.test-namespace.example.com    True           100                              sklearn-iris-predictor-default-00001   81m
READY: True
Add a new Notebook
Kubeflow Central Dashboard - Notebooks - +New Notebook
and run this code:
import requests
from kserve import KServeClient, utils

sklearn_iris_input = dict(instances=[
    [6.8, 2.8, 4.8, 1.4],
    [6.0, 3.4, 4.5, 1.6]
])
namespace = utils.get_default_target_namespace()  # Getting the namespace works
service_name = "sklearn-iris"
kserve = KServeClient()
isvc_resp = kserve.get(service_name, namespace=namespace)  # This call works too
# http://sklearn-iris.test-namespace.svc.cluster.local/v1/models/sklearn-iris-test2:predict
isvc_url = isvc_resp['status']['address']['url']
# Pass the dict directly; json=json.dumps(...) would double-encode the payload
response = requests.post(isvc_url, json=sklearn_iris_input)
print(response.text)
RBAC: access denied
env:
kserve: 0.9.0
kubeflow: 1.6.0
kubernetes: v1.22.13, running on the master node (taint already removed)
Why is this message (RBAC: access denied) occurring?
Any help would be greatly appreciated!
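There is no accepted answer in this thread, but a commonly suggested workaround (an assumption on my part, not confirmed here) is that requests in a Dex-secured Kubeflow installation are rejected by the Istio authorization layer unless they carry a valid session. A minimal sketch, assuming you copy the authservice_session cookie from your browser after logging in to the Central Dashboard, and that INGRESS_HOST points at the istio-ingressgateway service:
# All values below are placeholders, not taken from this thread
SESSION="<value of the authservice_session browser cookie>"
INGRESS_HOST="<istio-ingressgateway address:port>"
curl -v \
  -H "Host: sklearn-iris.test-namespace.example.com" \
  -H "Cookie: authservice_session=${SESSION}" \
  -H "Content-Type: application/json" \
  -d '{"instances": [[6.8, 2.8, 4.8, 1.4], [6.0, 3.4, 4.5, 1.6]]}' \
  "http://${INGRESS_HOST}/v1/models/sklearn-iris:predict"
If this succeeds, the model itself is fine and the notebook request is failing authorization, not inference.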

Related

GCloud: Failed to pull image (400) - Permission "artifactregistry.repositories.downloadArtifacts" denied

My pod can't be created because of the following problem:
Failed to pull image "europe-west3-docker.pkg.dev/<PROJECT_ID>/<REPO_NAME>/my-app:1.0.0": rpc error: code = Unknown desc = Error response from daemon: Get https://europe-west3-docker.pkg.dev/v2/<PROJECT_ID>/<REPO_NAME>/my-app/manifests/1.0.0: denied: Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/<PROJECT_ID>/locations/europe-west3/repositories/<REPO_NAME>" (or it may not exist)
I've never experienced anything like it. Maybe someone can help me out.
Here is what I did:
I set up a standard Kubernetes cluster on Google Cloud in the zone europe-west3-a
I started to follow the steps described here https://cloud.google.com/kubernetes-engine/docs/tutorials/hello-app
I built the Docker image and pushed it to the Artifact Registry repository
I can confirm the repo and the image are present, both in the Google Console as well as pulling the image with docker
Now I want to deploy my app, here is the deployment file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  labels:
    app: my-app
spec:
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: europe-west3-docker.pkg.dev/<PROJECT_ID>/<REPO_NAME>/my-app:1.0.0
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
The pod fails to create due to the error mentioned above.
What am I missing?
I encountered the same problem, and was able to get it working by executing:
gcloud projects add-iam-policy-binding ${PROJECT} \
  --member=serviceAccount:${EMAIL} \
  --role=roles/artifactregistry.reader
with ${PROJECT} = the project name and ${EMAIL} = the default compute service account, e.g. something like 123456789012-compute@developer.gserviceaccount.com.
I suspect I may have removed some "excess permissions" too eagerly in the past.
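To confirm the binding took effect (my addition, not part of the original answer), you can filter the project's IAM policy for that service account:
gcloud projects get-iam-policy ${PROJECT} \
  --flatten="bindings[].members" \
  --filter="bindings.members:serviceAccount:${EMAIL}" \
  --format="table(bindings.role)"
It should list roles/artifactregistry.reader.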
I think the tutorial is in error.
I was able to get this working by:
Creating a Service Account and key
Assigning the account Artifact Registry permissions
Creating a Kubernetes secret representing the Service Account
Using imagePullSecrets
PROJECT=[[YOUR-PROJECT]]
REPO=[[YOUR-REPO]]
LOCATION=[[YOUR-LOCATION]]
NAMESPACE=[[YOUR-NAMESPACE]]
# Service Account and Kubernetes Secret name
ACCOUNT="artifact-registry" # Or ...
# Email address of the Service Account
EMAIL=${ACCOUNT}@${PROJECT}.iam.gserviceaccount.com
# Create Service Account
gcloud iam service-accounts create ${ACCOUNT} \
  --display-name="Read Artifact Registry" \
  --description="Used by GKE to read Artifact Registry repos" \
  --project=${PROJECT}
# Create Service Account key
gcloud iam service-accounts keys create ${PWD}/${ACCOUNT}.json \
  --iam-account=${EMAIL} \
  --project=${PROJECT}
# Grant the Service Account the Artifact Registry reader role
gcloud projects add-iam-policy-binding ${PROJECT} \
  --member=serviceAccount:${EMAIL} \
  --role=roles/artifactregistry.reader
# Create a Kubernetes Secret representing the Service Account
kubectl create secret docker-registry ${ACCOUNT} \
  --docker-server=https://${LOCATION}-docker.pkg.dev \
  --docker-username=_json_key \
  --docker-password="$(cat ${PWD}/${ACCOUNT}.json)" \
  --docker-email=${EMAIL} \
  --namespace=${NAMESPACE}
Then:
IMAGE="${LOCATION}-docker.pkg.dev/${PROJECT}/${REPO}/my-app:1.0.0"
echo "
apiVersion: apps/v1
kind: Deployment
metadata:
name: my-app
labels:
app: my-app
spec:
selector:
matchLabels:
app: my-app
template:
metadata:
labels:
app: my-app
spec:
imagePullSecrets:
- name: ${ACCOUNT}
containers:
- name: my-app
image: ${IMAGE}
imagePullPolicy: Always
ports:
- containerPort: 8080
" | kubectl apply --filename=- --namespace=${NAMESPACE}
NOTE There are other ways to achieve this.
You could use the cluster's default (Compute Engine) Service Account instead of a special-purpose Service Account as here, but the default Service Account is more broadly used, and granting it greater powers may be too broad.
You could add the imagePullSecrets to the GKE namespace's default service account. This would give any deployment in that namespace the ability to pull from the repository, which may also be too broad.
I think there's also a GKE-specific way to grant a cluster service account GCP (!) roles: Workload Identity. See the sketch below.
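A minimal sketch of that Workload Identity route (my addition; it assumes Workload Identity is enabled on the cluster, and KSA_NAME is a placeholder for your Kubernetes ServiceAccount):
# Allow the Kubernetes SA to impersonate the Google SA
gcloud iam service-accounts add-iam-policy-binding ${EMAIL} \
  --role=roles/iam.workloadIdentityUser \
  --member="serviceAccount:${PROJECT}.svc.id.goog[${NAMESPACE}/${KSA_NAME}]"
# Point the Kubernetes SA at the Google SA
kubectl annotate serviceaccount ${KSA_NAME} \
  --namespace=${NAMESPACE} \
  iam.gke.io/gcp-service-account=${EMAIL}
Caveat: this grants GCP roles to workloads running under that ServiceAccount; image pulls themselves are performed by the kubelet with the node's service account, so for the pull error above the IAM binding approaches are the relevant ones.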

K8s KFServing: Using a storage other than Google Cloud Storage [duplicate]

Is it possible to replace the usage of Google Cloud Storage buckets with an alternative on-premises solution so that it is possible to run e.g. Kubeflow Pipelines completely independent from the Google Cloud Platform?
Yes, it is possible. You can use MinIO; it's like S3/GS, but it runs on a persistent volume of your on-premises storage.
Here are the instructions on how to use it as a kfserving inference storage:
Validate that minio is running in your kubeflow installation:
$ kubectl get svc -n kubeflow | grep minio
minio-service   ClusterIP   10.101.143.255   <none>   9000/TCP   81d
Enable a tunnel for your minio:
$ kubectl port-forward svc/minio-service -n kubeflow 9000:9000
Forwarding from 127.0.0.1:9000 -> 9000
Forwarding from [::1]:9000 -> 9000
Browse http://localhost:9000 to get to the minio UI and create a bucket/upload your model. Credentials minio/minio123. Alternatively you can use the mc command to do it from your terminal:
$ mc ls minio/models/flowers/0001/
[2020-03-26 13:16:57 CET] 1.7MiB saved_model.pb
[2020-04-25 13:37:09 CEST] 0B variables/
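For the mc commands to work, the minio alias has to be registered first (my addition; the alias name and default credentials match the listing above — on older mc releases the equivalent command is mc config host add):
$ mc alias set minio http://localhost:9000 minio minio123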
Create a secret&serviceaccount for the minio access, note that the s3-endpoint defines the path to the minio, keyid&acceskey are the credentials encoded in base64:
$ kubectl get secret mysecret -n homelab -o yaml
apiVersion: v1
data:
  awsAccessKeyID: bWluaW8=
  awsSecretAccessKey: bWluaW8xMjM=
kind: Secret
metadata:
  annotations:
    serving.kubeflow.org/s3-endpoint: minio-service.kubeflow:9000
    serving.kubeflow.org/s3-usehttps: "0"
  name: mysecret
  namespace: homelab
$ kubectl get serviceAccount -n homelab sa -o yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: sa
  namespace: homelab
secrets:
- name: mysecret
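The base64 values above are just the default minio/minio123 credentials encoded; you can reproduce them like this:
$ echo -n "minio" | base64
bWluaW8=
$ echo -n "minio123" | base64
bWluaW8xMjM=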
Finally, create your inferenceservice as follows:
$ kubectl get inferenceservice tensorflow-flowers -n homelab -o yaml
apiVersion: serving.kubeflow.org/v1alpha2
kind: InferenceService
metadata:
  name: tensorflow-flowers
  namespace: homelab
spec:
  default:
    predictor:
      serviceAccountName: sa
      tensorflow:
        storageUri: s3://models/flowers

identity and authorization Istio adapter not working

I am trying to get a simple example of the App Identity and Access Adapter for Istio working on Minikube. I have followed the install instructions exactly, and my calls to the sample application go through as if the adapter is not even there.
platform: Minikube
Istio installed via istioctl
adapter installed via Helm
adapter pod is running
apiVersion: "security.cloud.ibm.com/v1"
kind: OidcConfig
metadata:
name: oidc-provider-config
namespace: default
spec:
authMethod: client_secret_basic
discoveryUrl: https://us-south.appid.cloud.ibm.com/oauth/v4/
clientId: ******************************
clientSecret: ******************************
apiVersion: security.cloud.ibm.com/v1
kind: Policy
metadata:
  name: sample-oidc-policy
  namespace: default
spec:
  targets:
  - serviceName: service/helloworld
    paths:
    - exact: /hello
      method: ALL
      policies:
      - policyType: oidc
        config: oidc-provider-config
Did you set global.disablePolicyChecks to false and did you enable mixer during Istio install?
Mixer is disabled by default now.
See https://istio.io/docs/reference/config/installation-options/#mixer-options
Update:
I was just able to resolve this issue on my setup by doing the following:
First check the status of disablePolicyCheck:
kubectl -n istio-system get cm istio -o jsonpath="{@.data.mesh}" | grep disablePolicyChecks
If this returns disablePolicyChecks: true, run:
istioctl manifest apply --set values.global.disablePolicyChecks=false \
  --set values.mixer.policy.enabled=true \
  --set values.pilot.policy.enabled=true
Running the following should now show the value of disablePolicyChecks as false:
kubectl -n istio-system get cm istio -o jsonpath="{@.data.mesh}" | grep disablePolicyChecks
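As an extra sanity check (my addition, not part of the original answer), a Mixer policy pod should now be running in istio-system; in Istio 1.x installs it is typically named istio-policy-*:
$ kubectl -n istio-system get pods | grep istio-policy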

How can you read a database port from application.properties with environment variables

I am very new to Spring Boot and application.properties. My problem is that I need to be flexible with my database port, because I have two different databases. Therefore I want to read the port from an environment variable. I tried the following:
spring.data.mongodb.uri = mongodb://project1:${db-password}@abc:12345/project
This code works fine if my database has the port 12345. But if I now try to read the port from an environment variable, there is a problem.
I tried this:
spring.data.mongodb.uri = mongodb://project1:${db-password}@abc:${port}/project
The problem is the following: I am using Kubernetes and Jenkins. The environment variable "port" is given to my program in my Kubernetes manifests, and this works fine for "db-password", but not for the port. My Jenkins says:
"The connection string contains an invalid host 'abd:${port}'. The port '${port}' is not a valid, it must be an integer between 0 and 65535"
So now to my question:
How can I read a port from an environment variable without getting this error?
Thank you in advance!
To inject environment variables into your pods you can do the following:
Configmap
You can create a ConfigMap and configure your pods to use it.
Steps required:
Create ConfigMap
Update/Create the deployment with ConfigMap
Test it
Create ConfigMap
Here is a simple ConfigMap to store your variable:
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-config
data:
  port: "12345"
To apply it and be able to use it, invoke the following command:
$ kubectl create -f example-configmap.yaml
The ConfigMap above stores the key port with a value of 12345; referencing it from a deployment (shown below) exposes it as an environment variable.
Check if ConfigMap was created successfully:
$ kubectl get configmap
Output should be like this:
NAME             DATA   AGE
example-config   1      21m
To get the detailed information you can check it with command:
$ kubectl describe configmap example-config
With output:
Name:         example-config
Namespace:    default
Labels:       <none>
Annotations:  <none>

Data
====
port:
----
12345

Events:  <none>
Update/Create the deployment with ConfigMap
Here is a simple deployment with the ConfigMap included:
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2 # tells deployment to run 2 pods matching the template
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        envFrom:
        - configMapRef:
            name: example-config
        ports:
        - containerPort: 80
The configuration responsible for using the ConfigMap:
envFrom:
- configMapRef:
    name: example-config
After that, you need to create your deployment with the command:
$ kubectl create -f configmap-test.yaml
And check if it's working:
$ kubectl get pods
With output:
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-84d6f58895-b4zvz   1/1     Running   0          23m
nginx-deployment-84d6f58895-dp4c7   1/1     Running   0          23m
Test it
To test whether the environment variable is working, you need to get inside the pod and check for yourself.
To do that, invoke the command:
$ kubectl exec -it NAME_OF_POD -- /bin/bash
Replace NAME_OF_POD with the appropriate pod name for your case.
After successfully getting into container run:
$ echo $port
It should show:
root@nginx-deployment-84d6f58895-b4zvz:/# echo $port
12345
Now you can use your environment variables inside pods.
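An alternative (my addition, not part of the original answer) is to pull just the one key instead of importing the whole ConfigMap with envFrom:
env:
- name: port
  valueFrom:
    configMapKeyRef:
      name: example-config
      key: port
This keeps the container's environment limited to the variables it actually needs.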

Error from server (BadRequest): container "espace-client-client" in pod "espace-client-client" is waiting to start: trying and failing to pull image

I deployed my first app on my Kubernetes prod cluster a month ago.
I could deploy my 2 services (front / back) from the GitLab registry.
Now I have pushed a new Docker image to the GitLab registry and would like to redeploy it in prod.
Here is my deployment file:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    reloader.stakater.com/auto: "true"
  labels:
    app: espace-client-client
  name: espace-client-client
  namespace: espace-client
spec:
  replicas: 1
  strategy: {}
  template:
    metadata:
      labels:
        app: espace-client-client
    spec:
      containers:
      - envFrom:
        - secretRef:
            name: espace-client-client-env
        image: registry.gitlab.com/xxx/espace_client/client:latest
        name: espace-client-client
        ports:
        - containerPort: 3000
        resources: {}
      restartPolicy: Always
      imagePullSecrets:
      - name: gitlab-registry
I have no clue what is inside gitlab-registry. I didn't create it myself, and the people who did have left the crew :( Nevertheless, I have all the permissions, so I only need to know what to put in the secret, and maybe delete it and recreate it.
It seems that the secret is based on my .docker/config.json:
➜ espace-client git:(k8s) ✗ kubectl describe secrets gitlab-registry
Name:         gitlab-registry
Namespace:    default
Labels:       <none>
Annotations:  <none>
Type:         kubernetes.io/dockerconfigjson

Data
====
.dockerconfigjson:  174 bytes
I tried to delete the existing secret and log out with:
docker logout registry.gitlab.com
kubectl delete secret gitlab-registry
Then login again:
docker login registry.gitlab.com -u myGitlabUser
Password:
Login Succeeded
and pull the image with:
docker pull registry.gitlab.com/xxx/espace_client/client:latest
which worked.
The file ~/.docker/config.json looks weird:
{
  "auths": {
    "registry.gitlab.com": {}
  },
  "HttpHeaders": {
    "User-Agent": "Docker-Client/18.09.6 (linux)"
  },
  "credsStore": "secretservice"
}
It doesn't seem to contain any credential...
Then I recreated my secret:
kubectl create secret generic gitlab-registry \
  --from-file=.dockerconfigjson=/home/julien/.docker/config.json \
  --type=kubernetes.io/dockerconfigjson
I also tried:
kubectl create secret docker-registry gitlab-registry \
  --docker-server=registry.gitlab.com \
  --docker-username=<your-name> \
  --docker-password=<your-pword> \
  --docker-email=<your-email>
and deploy again:
kubectl rollout restart deployment/espace-client-client -n espace-client
but I still have the same error:
Error from server (BadRequest): container "espace-client-client" in pod "espace-client-client-6c8b88f795-wcrlh" is waiting to start: trying and failing to pull image
You have to update the gitlab-registry secret, because this item is used by the kubelet to pull the protected image using credentials.
Please delete the old secret with kubectl -n yournamespace delete secret gitlab-registry and recreate it, supplying the credentials:
kubectl -n yournamespace create secret docker-registry my-secret \
  --docker-server=DOCKER_REGISTRY_SERVER \
  --docker-username=DOCKER_USER \
  --docker-password=DOCKER_PASSWORD \
  [--docker-email=DOCKER_EMAIL]
where:
- DOCKER_REGISTRY_SERVER is the GitLab Docker registry instance
- DOCKER_USER is the username of the robot account to pull images
- DOCKER_PASSWORD is the password attached to the robot account
You could ignore docker-email since it's not mandatory (note the square brackets).
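Two additional checks that may help (my additions, not part of the original answer). First, the describe output earlier shows the secret in the default namespace while the deployment lives in espace-client; imagePullSecrets must exist in the same namespace as the pod. Second, you can decode the stored Docker config to verify that real credentials made it into the secret:
kubectl -n espace-client get secret gitlab-registry \
  -o jsonpath="{.data.\.dockerconfigjson}" | base64 -d
If the output shows an empty auth entry (as in the local config.json above, where credsStore delegates credentials to the OS keychain), the secret will not work for the kubelet.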
