App Identity and Access Adapter for Istio not working

I am trying to get a simple example of the App Identity and Access Adapter for Istio working on Minikube. I have followed the install instructions exactly, yet my calls to the sample application go through as if the adapter were not even there.
platform: Minikube
Istio installed via istioctl
adapter installed via Helm
adapter pod is running
apiVersion: "security.cloud.ibm.com/v1"
kind: OidcConfig
metadata:
name: oidc-provider-config
namespace: default
spec:
authMethod: client_secret_basic
discoveryUrl: https://us-south.appid.cloud.ibm.com/oauth/v4/
clientId: ******************************
clientSecret: ******************************
apiVersion: security.cloud.ibm.com/v1
kind: Policy
metadata:
  name: sample-oidc-policy
  namespace: default
spec:
  targets:
    - serviceName: service/helloworld
      paths:
        - exact: /hello
          method: ALL
          policies:
            - policyType: oidc
              config: oidc-provider-config

Did you set global.disablePolicyChecks to false, and did you enable Mixer during the Istio install?
Mixer is disabled by default now.
See https://istio.io/docs/reference/config/installation-options/#mixer-options
Update:
I was just able to resolve this issue on my setup by doing the following:
First check the status of disablePolicyChecks:
kubectl -n istio-system get cm istio -o jsonpath="{.data.mesh}" | grep disablePolicyChecks
If this returns disablePolicyChecks: true, run:
istioctl manifest apply --set values.global.disablePolicyChecks=false \
  --set values.mixer.policy.enabled=true \
  --set values.pilot.policy.enabled=true
Running the following should now show disablePolicyChecks as false:
kubectl -n istio-system get cm istio -o jsonpath="{.data.mesh}" | grep disablePolicyChecks
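With policy checks enabled again, requests to the protected path should be intercepted by the adapter instead of passing straight through. A quick check, assuming the sample app is exposed through the Istio ingress gateway on the default HTTP NodePort 31380 of older Istio releases:
$ curl -si "http://$(minikube ip):31380/hello" | head -n 1
Instead of the application's normal response, the first line should now be a 302 redirect to the App ID authorization endpoint.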


RBAC: access denied when making requests from Kubeflow notebooks

I tried the KServe example and ran into the problem of RBAC: Access Denied.
Add a user:
$ kubectl -n auth edit cm dex
Add the example information in staticPasswords:
- email: my_email@gmail.com
  hash: my_ps_hash
  userID: "myuserid"
  username: myusername
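The hash field expects a bcrypt hash of the user's password. One way to generate it, assuming Python 3 with the bcrypt package installed (my_password is a placeholder):
python3 -c 'import bcrypt; print(bcrypt.hashpw(b"my_password", bcrypt.gensalt()).decode())'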
$ kubectl rollout restart deployment dex -n auth
Add a Profile to create the namespace:
$ vi profile.yaml
apiVersion: kubeflow.org/v1beta1
kind: Profile
metadata:
  name: test-namespace
spec:
  owner:
    kind: User
    name: my_email@gmail.com
  resourceQuotaSpec: {}
kubectl apply -f profile.yaml
Serve an example model:
kubeflow Central Dashboard - Models - +NEW MODEL SERVER
apiVersion: "serving.kserve.io/v1beta1"
kind: "InferenceService"
metadata:
annotations:
isdecar.istio.is/inject: "false"
name: "sklearn-iris"
spec:
predictor:
sklearn:
image: "kserve/sklearnserver:v0.9.0"
storageUri: "gs://kfserving-examples/models/sklearn/1.0/model"
$ kubectl get InferenceService -n test-namespace
NAME           URL                                                     READY   PREV   LATEST   PREVROLLEDOUTREVISION   LATESTREADYREVISION                     AGE
sklearn-iris   http://sklearn-iris-test2.test-namespace.example.com    True           100                              sklearn-iris-predictor-default-00001    81m
READY: True
Add New Notebook
kubeflow Central Dashboard - Notebooks - +New Notebook
and run this code:
from kserve import utils, KServeClient
import requests

sklear_iris_input = dict(instances = [
    [6.8, 2.8, 4.8, 1.4],
    [6.0, 3.4, 4.5, 1.6]
])
namespace = utils.get_default_target_namespace() # Get the namespace. This worked.
service_name = "sklearn-iris"
kserve = KServeClient()
isvc_resp = kserve.get(service_name, namespace = namespace) # This worked.
# http://sklearn-iris.test-namespace.svc.cluster.local/v1/models/sklearn-iris-test2:predict
isvc_url = isvc_resp['status']['address']['url']
# requests' json= parameter expects a dict; passing json.dumps(...) would double-encode the body
response = requests.post(isvc_url, json = sklear_iris_input)
print(response.text)
RBAC: access denied
env:
kserve: 0.9.0
kubeflow: 1.6.0
kubernetes: v1.22.13, running on the master node (taint already disabled)
Why is this message (RBAC: access denied) occurring?
Any help would be greatly appreciated!

How to make a deployment file for a kubernetes service that depends on images from Amazon ECR?

A colleague created a K8s cluster for me. I can run services in that cluster without any problem. However, I cannot run services that depend on an image from Amazon ECR, which I really do not understand. Probably I made a small mistake in my deployment file that causes this problem.
Here is my deployment file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-deployment
  labels:
    app: hello
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: hello
        image: xxxxxxxxx.yyy.ecr.eu-zzzzz.amazonaws.com/test:latest
        ports:
        - containerPort: 5000
Here is my service file:
apiVersion: v1
kind: Service
metadata:
  name: hello-svc
  labels:
    app: hello
spec:
  type: NodePort
  ports:
  - port: 5000
    nodePort: 30002
    protocol: TCP
  selector:
    app: hello
On the master node, I have run this to ensure Kubernetes knows about the deployment and the service:
kubectl create -f dep.yml
kubectl create -f service.yml
I used the K8s extension in VS Code to check the logs of my pods.
This is the error I get:
Error from server (BadRequest): container "hello" in pod
"hello-deployment-xxxx-49pbs" is waiting to start: trying and failing
to pull image.
Apparently, pulling is the issue. This does not happen when using a public image from the public Docker Hub. Logically, this would be a permissions issue, but it looks like it is not. I get no error message when running this command on the master node:
docker pull xxxxxxxxx.yyy.ecr.eu-zzzzz.amazonaws.com/test:latest
This command just pulls my image.
I am confused now. I can pull my image with docker pull on the master node, but K8s fails to do the pull. Am I missing something in my deployment file? Some property that says "repositoryIsPrivateButDoNotComplain"? I just do not get it.
How to fix this so K8s can easily use my image from Amazon ECR?
You should create and use secrets for the ECR authorization.
This is what you need to do.
Create a secret for the Kubernetes cluster by executing the shell script below from a machine that can access the AWS account in which the ECR registry is hosted. Change the placeholders to match your setup. Make sure the machine has the AWS CLI installed and AWS credentials configured. If you are on a Windows machine, run the script in a Cygwin or Git Bash console.
#!/bin/bash
ACCOUNT=<AWS_ACCOUNT_ID>
REGION=<REGION>
SECRET_NAME=<SECRET_NAME>
EMAIL=<SOME_DUMMY_EMAIL>
TOKEN=`/usr/local/bin/aws ecr --region=$REGION --profile <AWS_PROFILE> get-authorization-token --output text --query authorizationData[].authorizationToken | base64 -d | cut -d: -f2`

kubectl delete secret --ignore-not-found $SECRET_NAME
kubectl create secret docker-registry $SECRET_NAME \
  --docker-server=https://${ACCOUNT}.dkr.ecr.${REGION}.amazonaws.com \
  --docker-username=AWS \
  --docker-password="${TOKEN}" \
  --docker-email="${EMAIL}"
Change the deployment and add an imagePullSecrets section so that your pods use the secret when pulling the image from ECR:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-deployment
  labels:
    app: hello
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: hello
        image: xxxxxxxxx.yyy.ecr.eu-zzzzz.amazonaws.com/test:latest
        ports:
        - containerPort: 5000
      imagePullSecrets:
      - name: <SECRET_NAME>
Create the pods and service.
If it succeeds, the secret will still expire after 12 hours. To overcome that, set up a cron job that periodically recreates the secret on the Kubernetes cluster, using the same script given above.
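A minimal sketch of such a cron entry, assuming the script above is saved as /opt/refresh-ecr-secret.sh (a hypothetical path) on a machine with kubectl access and AWS credentials configured:
# refresh the ECR pull secret every 10 hours, well inside the 12-hour token lifetime
0 */10 * * * /opt/refresh-ecr-secret.sh >> /var/log/refresh-ecr-secret.log 2>&1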
For the 12-hour problem: if you are using Kubernetes 1.20, configure and use the kubelet image credential provider:
https://kubernetes.io/docs/tasks/kubelet-credential-provider/kubelet-credential-provider/
You need to enable the alpha feature gate KubeletCredentialProviders in your kubelet.
If you are on a lower Kubernetes version where this feature is not available, use https://medium.com/@damitj07/how-to-configure-and-use-aws-ecr-with-kubernetes-rancher2-0-6144c626d42c
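A minimal sketch of such a CredentialProviderConfig, assuming the AWS-provided ecr-credential-provider binary is installed on each node and the kubelet is started with --image-credential-provider-config and --image-credential-provider-bin-dir pointing at it (API group names below are for the alpha version; verify against your Kubernetes release):
apiVersion: kubelet.config.k8s.io/v1alpha1
kind: CredentialProviderConfig
providers:
  - name: ecr-credential-provider
    matchImages:
      - "*.dkr.ecr.*.amazonaws.com"
    defaultCacheDuration: "12h"
    apiVersion: credentialprovider.kubelet.k8s.io/v1alpha1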

K8s KFServing: Using a storage other than Google Cloud Storage [duplicate]

Is it possible to replace the usage of Google Cloud Storage buckets with an alternative on-premises solution, so that it is possible to run e.g. Kubeflow Pipelines completely independently of the Google Cloud Platform?
Yes, it is possible. You can use minio: it is like S3/GS, but it runs on a persistent volume of your on-premises storage.
Here are the instructions on how to use it as a KFServing inference storage:
Validate that minio is running in your kubeflow installation:
$ kubectl get svc -n kubeflow | grep minio
minio-service   ClusterIP   10.101.143.255   <none>   9000/TCP   81d
Enable a tunnel for your minio:
$ kubectl port-forward svc/minio-service -n kubeflow 9000:9000
Forwarding from 127.0.0.1:9000 -> 9000
Forwarding from [::1]:9000 -> 9000
Browse to http://localhost:9000 to reach the minio UI and create a bucket / upload your model. The credentials are minio/minio123. Alternatively, you can use the mc command to do it from your terminal:
$ mc ls minio/models/flowers/0001/
[2020-03-26 13:16:57 CET] 1.7MiB saved_model.pb
[2020-04-25 13:37:09 CEST] 0B variables/
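The listing above assumes an mc alias named minio pointing at the forwarded port; one way to set it up (on older mc releases the equivalent command is mc config host add):
$ mc alias set minio http://localhost:9000 minio minio123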
Create a secret&serviceaccount for the minio access, note that the s3-endpoint defines the path to the minio, keyid&acceskey are the credentials encoded in base64:
$ kubectl get secret mysecret -n homelab -o yaml
apiVersion: v1
data:
  awsAccessKeyID: bWluaW8=
  awsSecretAccessKey: bWluaW8xMjM=
kind: Secret
metadata:
  annotations:
    serving.kubeflow.org/s3-endpoint: minio-service.kubeflow:9000
    serving.kubeflow.org/s3-usehttps: "0"
  name: mysecret
  namespace: homelab
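For reference, the data values above are simply the default minio credentials encoded with base64:
$ echo -n 'minio' | base64
bWluaW8=
$ echo -n 'minio123' | base64
bWluaW8xMjM=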
$ kubectl get serviceAccount -n homelab sa -o yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: sa
  namespace: homelab
secrets:
- name: mysecret
Finally, create your inferenceservice as follows:
$ kubectl get inferenceservice tensorflow-flowers -n homelab -o yaml
apiVersion: serving.kubeflow.org/v1alpha2
kind: InferenceService
metadata:
  name: tensorflow-flowers
  namespace: homelab
spec:
  default:
    predictor:
      serviceAccountName: sa
      tensorflow:
        storageUri: s3://models/flowers
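Once the InferenceService reports READY, you can send it a prediction request through your cluster's ingress. A sketch, assuming the default KFServing URL scheme; the ingress address, host name, and input file are placeholders to adapt to your setup:
$ curl -H "Host: tensorflow-flowers.homelab.example.com" \
    http://<INGRESS_IP>:<INGRESS_PORT>/v1/models/tensorflow-flowers:predict \
    -d @./flowers-input.json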

Kubernetes: Error from server (NotFound): deployments.apps "kube-verify" not found

I set up a Kubernetes cluster in my private network and managed to deploy a test pod.
Now I want to expose an external IP for the service,
but when I run:
kubectl get deployments kube-verify
I get:
Error from server (NotFound): deployments.apps "kube-verify" not found
EDIT
OK, I am trying a new approach:
I have created a namespace called verify-cluster.
My deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: verify-cluster
  namespace: verify-cluster
  labels:
    app: verify-cluster
spec:
  replicas: 1
  selector:
    matchLabels:
      app: verify-cluster
  template:
    metadata:
      labels:
        app: verify-cluster
    spec:
      containers:
      - name: nginx
        image: nginx:1.18.0
        ports:
        - containerPort: 80
and service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: verify-cluster
  namespace: verify-cluster
spec:
  type: NodePort
  selector:
    app: verify-cluster
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
    nodePort: 30007
then I run:
kubectl create -f deployment.yaml
kubectl create -f service.yaml
then check with:
kubectl get all -n verify-cluster
but when I then want to check the deployment with:
kubectl get all -n verify-cluster
I get:
Error from server (NotFound): deployments.apps "verify-cluster" not found
I hope that makes it easier to reproduce.
EDIT 2
When I deploy it to the default namespace it runs right away, so the issue must be something with the namespace.
I guess that you might have forgotten to create the namespace:
File my-namespace.yaml:
apiVersion: v1
kind: Namespace
metadata:
  name: <insert-namespace-name-here>
Then:
kubectl create -f ./my-namespace.yaml
First you need to get the deployment with:
$ kubectl get deployment
NAME              READY   UP-TO-DATE   AVAILABLE   AGE
test-deployment   1/1     1            1           15m
If you have used a namespace, then:
$ kubectl get deployment -n your-namespace
Then use the exact name in further commands, for example:
kubectl scale deployment test-deployment --replicas=10
I have replicated the use case with the config files you provided, and everything works well on my end. Make sure that the namespace is created correctly, without any typos.
Alternatively, you can create the namespace using the command below:
kubectl create namespace <insert-namespace-name-here>
Refer to this documentation for detailed information on creating a namespace.
Another approach could be to apply your configuration directly to the requested namespace.
kubectl apply -f deployment.yml -n verify-cluster
kubectl apply -f service.yml -n verify-cluster
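Either way, a namespace-scoped query should then find the deployment:
kubectl get deployments -n verify-cluster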

How can you read a database port from application.properties with environment variables

I am very new to Spring Boot and application.properties. My problem is that I need to be very flexible with my database port, because I have two different databases. Therefore I want to read the port from an environment variable. I tried the following:
spring.data.mongodb.uri = mongodb://project1:${db-password}@abc:12345/project
This code works fine if my database has the port 12345. But if I now try to read the port from an environment variable, there is a problem.
I tried this:
spring.data.mongodb.uri = mongodb://project1:${db-password}@abc:${port}/project
The problem is the following: I am using K8s and Jenkins. The environment variable "port" is given to my program in my K8s setup, and this works fine for "db-password", but not for the port. My Jenkins says:
"The connection string contains an invalid host 'abd:${port}'. The port '${port}' is not a valid, it must be an integer between 0 and 65535"
So now to my question:
How can I read the port from an environment variable without getting this error?
Thank you in advance!
To inject an environment variable into the pods you can do the following:
Configmap
You can create a ConfigMap and configure your pods to use it.
Steps required:
Create ConfigMap
Update/Create the deployment with ConfigMap
Test it
Create ConfigMap
I provide a simple ConfigMap below to store your variables:
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-config
data:
  port: "12345"
To apply it and be able to use it, invoke the following command:
$ kubectl create -f example-configmap.yaml
The ConfigMap above will create the environment variable port with a value of 12345.
Check if ConfigMap was created successfully:
$ kubectl get configmap
The output should look like this:
NAME             DATA   AGE
example-config   1      21m
To get the detailed information you can check it with command:
$ kubectl describe configmap example-config
With output:
Name:         example-config
Namespace:    default
Labels:       <none>
Annotations:  <none>

Data
====
port:
----
12345

Events:  <none>
Update/Create the deployment with ConfigMap
I provide a simple deployment with the ConfigMap included:
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2 # tells deployment to run 2 pods matching the template
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        envFrom:
        - configMapRef:
            name: example-config
        ports:
        - containerPort: 80
The configuration responsible for using the ConfigMap:
envFrom:
- configMapRef:
    name: example-config
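If you would rather map only the single key instead of importing the whole ConfigMap, an equivalent sketch uses valueFrom with a configMapKeyRef:
env:
- name: port
  valueFrom:
    configMapKeyRef:
      name: example-config
      key: port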
After that you need to run your deployment with the command:
$ kubectl create -f configmap-test.yaml
And check if it's working:
$ kubectl get pods
With output:
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-84d6f58895-b4zvz   1/1     Running   0          23m
nginx-deployment-84d6f58895-dp4c7   1/1     Running   0          23m
Test it
To test that the environment variable is working, you need to get inside the pod and check for yourself.
To do that, invoke the command:
$ kubectl exec -it NAME_OF_POD -- /bin/bash
Replace NAME_OF_POD with the appropriate pod name for your case.
After successfully getting into the container, run:
$ echo $port
It should show:
root@nginx-deployment-84d6f58895-b4zvz:/# echo $port
12345
Now you can use your environment variables inside pods.
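With the variable injected, the placeholder in application.properties should resolve against it at startup:
spring.data.mongodb.uri = mongodb://project1:${db-password}@abc:${port}/project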
