K8s KFServing: Using a different storage backend than Google Cloud Storage

Is it possible to replace the usage of Google Cloud Storage buckets with an alternative on-premises solution, so that it is possible to run e.g. Kubeflow Pipelines completely independently of the Google Cloud Platform?

Yes, it is possible. You can use MinIO: it is like S3/GS, but it runs on a persistent volume of your on-premises storage.
Here are the instructions on how to use it as KFServing inference storage:
Validate that MinIO is running in your Kubeflow installation:
$ kubectl get svc -n kubeflow | grep minio
minio-service   ClusterIP   10.101.143.255   <none>   9000/TCP   81d
Open a port-forward tunnel to MinIO:
$ kubectl port-forward svc/minio-service -n kubeflow 9000:9000
Forwarding from 127.0.0.1:9000 -> 9000
Forwarding from [::1]:9000 -> 9000
Browse to http://localhost:9000 to reach the MinIO UI and create a bucket / upload your model. The default credentials are minio/minio123. Alternatively, you can use the mc command to do it from your terminal:
$ mc ls minio/models/flowers/0001/
[2020-03-26 13:16:57 CET] 1.7MiB saved_model.pb
[2020-04-25 13:37:09 CEST] 0B variables/
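If mc is not set up yet, a minimal sketch for pointing it at the port-forwarded endpoint and uploading a model (the alias name minio and the local ./flowers directory are placeholders; older mc releases use mc config host add instead of mc alias set):
# Register the port-forwarded MinIO endpoint under the alias "minio"
$ mc alias set minio http://localhost:9000 minio minio123
# Create the bucket and upload the SavedModel directory
$ mc mb minio/models
$ mc cp --recursive ./flowers/ minio/models/flowers/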
Create a Secret and a ServiceAccount for the MinIO access. Note that the s3-endpoint annotation defines the path to MinIO, and the key ID and access key are the credentials encoded in base64:
$ kubectl get secret mysecret -n homelab -o yaml
apiVersion: v1
data:
  awsAccessKeyID: bWluaW8=
  awsSecretAccessKey: bWluaW8xMjM=
kind: Secret
metadata:
  annotations:
    serving.kubeflow.org/s3-endpoint: minio-service.kubeflow:9000
    serving.kubeflow.org/s3-usehttps: "0"
  name: mysecret
  namespace: homelab
$ kubectl get serviceAccount -n homelab sa -o yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: sa
  namespace: homelab
secrets:
- name: mysecret
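For reference, the base64 values in the Secret are just the default MinIO credentials encoded like this:
$ echo -n "minio" | base64
bWluaW8=
$ echo -n "minio123" | base64
bWluaW8xMjM=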
Finally, create your InferenceService as follows:
$ kubectl get inferenceservice tensorflow-flowers -n homelab -o yaml
apiVersion: serving.kubeflow.org/v1alpha2
kind: InferenceService
metadata:
  name: tensorflow-flowers
  namespace: homelab
spec:
  default:
    predictor:
      serviceAccountName: sa
      tensorflow:
        storageUri: s3://models/flowers
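Once the InferenceService reports ready, a sketch of sending a prediction request through the Istio ingress gateway (INGRESS_HOST, INGRESS_PORT and input.json are placeholders for your environment):
$ MODEL_HOST=$(kubectl get inferenceservice tensorflow-flowers -n homelab -o jsonpath='{.status.url}' | cut -d/ -f3)
$ curl -H "Host: ${MODEL_HOST}" http://${INGRESS_HOST}:${INGRESS_PORT}/v1/models/tensorflow-flowers:predict -d @./input.json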

Related

identity and authorization Istio adapter not working

I am trying to get a simple example of the App Identity and Access Adapter for Istio working on Minikube. I have followed the install instructions exactly, and my calls to the sample application go through as if the adapter is not even there.
platform: Minikube
Istio installed via istioctl
adapter installed via Helm
adapter pod is running
apiVersion: "security.cloud.ibm.com/v1"
kind: OidcConfig
metadata:
name: oidc-provider-config
namespace: default
spec:
authMethod: client_secret_basic
discoveryUrl: https://us-south.appid.cloud.ibm.com/oauth/v4/
clientId: ******************************
clientSecret: ******************************
apiVersion: security.cloud.ibm.com/v1
kind: Policy
metadata:
name: sample-oidc-policy
namespace: default
spec:
targets:
-
serviceName: service/helloworld
paths:
- exact: /hello
method: ALL
policies:
- policyType: oidc
config: oidc-provider-config
Did you set global.disablePolicyChecks to false and did you enable mixer during Istio install?
Mixer is disabled by default now.
See https://istio.io/docs/reference/config/installation-options/#mixer-options
Update:
I was just able to resolve this issue on my setup by doing the following:
First, check the status of disablePolicyChecks:
kubectl -n istio-system get cm istio -o jsonpath="{@.data.mesh}" | grep disablePolicyChecks
If this returns disablePolicyChecks: true, run:
istioctl manifest apply --set values.global.disablePolicyChecks=false \
  --set values.mixer.policy.enabled=true \
  --set values.pilot.policy.enabled=true
Running the following should show the value of disablePolicyChecks as false
kubectl -n istio-system get cm istio -o jsonpath="{@.data.mesh}" | grep disablePolicyChecks
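As an extra sanity check, the mixer policy pod should now be running (the istio-policy deployment name may vary across Istio versions):
kubectl -n istio-system get pods | grep istio-policy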

How can you read a database port from application.properties with environment variables?

I am very new to Spring Boot and application.properties. My problem is that I need to be flexible with my database port, because I have two different databases. Therefore I want to read the port from an environment variable. I tried the following:
spring.data.mongodb.uri = mongodb://project1:${db-password}@abc:12345/project
This works fine if my database uses port 12345. But if I try to read the port from an environment variable, there is a problem.
I tried this:
spring.data.mongodb.uri = mongodb://project1:${db-password}@abc:${port}/project
The problem is the following: I am using k8s and Jenkins. The environment variable "port" is given to my program in my k8s setup, and this works fine for "db-password", but not for the port. My Jenkins says:
"The connection string contains an invalid host 'abc:${port}'. The port '${port}' is not valid, it must be an integer between 0 and 65535"
So now to my question:
How can I read a port from an environment variable, without getting this error?
Thank you in advance!
To inject environment variables into the pods you can do the following:
ConfigMap
You can create a ConfigMap and configure your pods to use it.
Steps required:
Create ConfigMap
Update/Create the deployment with ConfigMap
Test it
Create ConfigMap
I provided a simple ConfigMap below to store your variable:
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-config
data:
  port: "12345"
To apply it and be able to use it, invoke the following command:
$ kubectl create -f example-configmap.yaml
The ConfigMap above stores the key port with the value 12345; it becomes the environment variable port once the deployment below references it.
Check if ConfigMap was created successfully:
$ kubectl get configmap
Output should be like this:
NAME             DATA   AGE
example-config   1      21m
To get detailed information, you can check it with the command:
$ kubectl describe configmap example-config
With output:
Name:         example-config
Namespace:    default
Labels:       <none>
Annotations:  <none>

Data
====
port:
----
12345

Events:  <none>
Update/Create the deployment with ConfigMap
I provided a simple deployment with the ConfigMap included:
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2 # tells deployment to run 2 pods matching the template
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        envFrom:
        - configMapRef:
            name: example-config
        ports:
        - containerPort: 80
Configuration responsible for using ConfigMap:
envFrom:
- configMapRef:
    name: example-config
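If you prefer to map only the single key instead of everything in the ConfigMap, a valueFrom reference is an alternative (a sketch reusing the same example-config):
env:
- name: port
  valueFrom:
    configMapKeyRef:
      name: example-config
      key: port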
After that you need to run your deployment with the command:
$ kubectl create -f configmap-test.yaml
And check if it's working:
$ kubectl get pods
With output:
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-84d6f58895-b4zvz   1/1     Running   0          23m
nginx-deployment-84d6f58895-dp4c7   1/1     Running   0          23m
Test it
To test whether the environment variable is working, you need to get inside the pod and check for yourself.
To do that, invoke the command:
$ kubectl exec -it NAME_OF_POD -- /bin/bash
Replace NAME_OF_POD with the appropriate pod name for your case.
After successfully getting into the container, run:
$ echo $port
It should show:
root@nginx-deployment-84d6f58895-b4zvz:/# echo $port
12345
Now you can use your environment variables inside pods.
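With the variable present in the container, the placeholder from your question should resolve (same URI as before, assuming db-password is injected the same way):
spring.data.mongodb.uri=mongodb://project1:${db-password}@abc:${port}/project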

Error from server (BadRequest): container "espace-client-client" in pod "espace-client-client" is waiting to start: trying and failing to pull image

I deployed my first app on my Kubernetes prod cluster a month ago.
I could deploy my 2 services (front / back) from the GitLab registry.
Now I have pushed a new Docker image to the GitLab registry and would like to redeploy it in prod:
Here is my deployment file:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    reloader.stakater.com/auto: "true"
  labels:
    app: espace-client-client
  name: espace-client-client
  namespace: espace-client
spec:
  replicas: 1
  strategy: {}
  template:
    metadata:
      labels:
        app: espace-client-client
    spec:
      containers:
      - envFrom:
        - secretRef:
            name: espace-client-client-env
        image: registry.gitlab.com/xxx/espace_client/client:latest
        name: espace-client-client
        ports:
        - containerPort: 3000
        resources: {}
      restartPolicy: Always
      imagePullSecrets:
      - name: gitlab-registry
I have no clue what is inside gitlab-registry. I didn't create it myself, and the people who did have left the crew :( Nevertheless, I have all the permissions, so I only need to know what to put in the secret, and maybe delete and recreate it.
It seems that the secret is based on my .docker/config.json:
➜ espace-client git:(k8s) ✗ kubectl describe secrets gitlab-registry
Name:         gitlab-registry
Namespace:    default
Labels:       <none>
Annotations:  <none>

Type:  kubernetes.io/dockerconfigjson

Data
====
.dockerconfigjson:  174 bytes
I tried to delete the existing secret and log out with:
docker logout registry.gitlab.com
kubectl delete secret gitlab-registry
Then log in again:
docker login registry.gitlab.com -u myGitlabUser
Password:
Login Succeeded
and pull the image with:
docker pull registry.gitlab.com/xxx/espace_client/client:latest
which worked.
The file ~/.docker/config.json looks weird:
{
  "auths": {
    "registry.gitlab.com": {}
  },
  "HttpHeaders": {
    "User-Agent": "Docker-Client/18.09.6 (linux)"
  },
  "credsStore": "secretservice"
}
It doesn't seem to contain any credentials. That is expected: with credsStore set to secretservice, Docker keeps the credentials in the OS keyring instead of the file, so a secret created from this config.json carries no auth data.
Then I recreated my secret:
kubectl create secret generic gitlab-registry \
  --from-file=.dockerconfigjson=/home/julien/.docker/config.json \
  --type=kubernetes.io/dockerconfigjson
I also tried:
kubectl create secret docker-registry gitlab-registry --docker-server=registry.gitlab.com --docker-username=<your-name> --docker-password=<your-pword> --docker-email=<your-email>
and deploy again:
kubectl rollout restart deployment/espace-client-client -n espace-client
but I still have the same error:
Error from server (BadRequest): container "espace-client-client" in pod "espace-client-client-6c8b88f795-wcrlh" is waiting to start: trying and failing to pull image
You have to update the gitlab-registry secret because it is what lets the kubelet pull the protected image using your credentials.
Please delete the old secret with kubectl -n yournamespace delete secret gitlab-registry and recreate it, typing in your credentials (the name must match the imagePullSecrets entry in your deployment, i.e. gitlab-registry):
kubectl -n yournamespace create secret docker-registry gitlab-registry --docker-server=DOCKER_REGISTRY_SERVER --docker-username=DOCKER_USER --docker-password=DOCKER_PASSWORD[ --docker-email=DOCKER_EMAIL]
where:
- DOCKER_REGISTRY_SERVER is the GitLab Docker registry instance
- DOCKER_USER is the username of the robot account to pull images
- DOCKER_PASSWORD is the password attached to the robot account
You could ignore docker-email since it's not mandatory (note the square brackets).
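As a sanity check, you can decode the recreated secret to confirm it actually carries credentials, and make sure it lives in the same namespace as the deployment (espace-client here, not default):
kubectl -n espace-client get secret gitlab-registry \
  -o jsonpath='{.data.\.dockerconfigjson}' | base64 -d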

How to pull image from dockerhub in kubernetes?

I am planning to deploy an application on my Kubernetes cluster infra.
I pushed an image to a Docker Hub repo. How can I pull the image from Docker Hub?
One-line command to create a Docker registry secret:
kubectl create secret docker-registry regcred --docker-username=<your-name> --docker-password=<your-pword> --docker-email=<your-email> -n <your-namespace>
Then you can use it in your deployment file under spec:
spec:
  containers:
  - name: private-reg-container-name
    image: <your-private-image>
  imagePullSecrets:
  - name: regcred
More details:
https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/#create-a-secret-in-the-cluster-that-holds-your-authorization-token
Kubernetes runs docker pull pseudo/your-image:latest under the hood. The image field in Kubernetes resources is simply the Docker image to run.
spec:
  containers:
  - name: app
    image: pseudo/your-image:latest
    [...]
As the Docker image name contains no specific registry URL, the default is docker.io. Your image is in fact docker.io/pseudo/your-image:latest.
If your image is hosted in a private Docker Hub repo, you need to specify an image pull secret in the spec field.
spec:
  containers:
  - name: app
    image: pseudo/your-image:latest
  imagePullSecrets:
  - name: dockerhub-credential
Here is the documentation to create the secret containing your docker hub login: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
Using docker pull or kubectl set image.
Example YAML deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
Start the container and show the deployment status with kubectl get deployments.
Result:
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   3/3     3            3           18s
Now update the image in Kubernetes using set image:
kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1
and show the status of the image update with rollout:
kubectl rollout status deployment/nginx-deployment
Note: nginx is the name of the container (the name field):
containers:
- name: nginx
  image: nginx:1.14.2
nginx:1.16.1 is the image version on Docker Hub; it is advisable to bump the version when updating.
If you decide to discard the update and roll back to the previous revision, use rollout undo:
kubectl rollout undo deployment/nginx-deployment
For more information, see the documentation.
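If you need to roll back to a specific revision rather than just the previous one, rollout history plus --to-revision works (revision 1 here is illustrative):
kubectl rollout history deployment/nginx-deployment
kubectl rollout undo deployment/nginx-deployment --to-revision=1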
Create a docker registry secret:
#!/bin/bash
for ns in $(kubectl get namespaces | grep -v NAME | awk '{print $1}')
do
  kubectl create secret docker-registry docker.registry \
    --docker-username=<MyAccountName> \
    --docker-password='MyDockerHubPassword' -n $ns
done
Patch all the dynamic service accounts in all the namespaces with the secret you created in step 1:
for ns in $(kubectl get namespaces | grep -v NAME | awk '{print $1}')
do
  for sa in $(kubectl -n $ns get sa | grep -v SECRETS | awk '{print $1}')
  do
    kubectl patch serviceaccount $sa -p '{"imagePullSecrets": [{"name": "docker.registry"}]}' -n $ns
    if [ $? -eq 0 ]; then
      echo $ns $sa patched
    else
      echo Error patching $ns $sa
    fi
  done
done
You can patch only specific namespaces, if you wish.
Let me know how it goes.
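To spot-check that a service account actually picked up the pull secret (using the default account in the default namespace as an example):
kubectl -n default get serviceaccount default -o jsonpath='{.imagePullSecrets}'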

How to access private Docker Hub repository from Kubernetes on Vagrant

I am failing to pull from my private Docker Hub repository into my local Kubernetes setup running on Vagrant:
Container "hellonode" in pod "hellonode-n1hox" is waiting to start: image can't be
pulled
Failed to pull image "username/hellonode": Error: image username/hellonode:latest not found
I have set up Kubernetes locally via Vagrant as described here and created a secret named "dockerhub" with kubectl create secret docker-registry dockerhub --docker-server=https://registry.hub.docker.com/ --docker-username=username --docker-password=... --docker-email=... which I supplied as the image pull secret.
I am running Kubernetes 1.2.0.
To pull a private DockerHub hosted image from a Kubernetes YAML:
Run these commands:
DOCKER_REGISTRY_SERVER=docker.io
DOCKER_USER=<your Docker Hub username, same as when you `docker login`>
DOCKER_EMAIL=<your Docker Hub email, same as when you `docker login`>
DOCKER_PASSWORD=<your Docker Hub password, same as when you `docker login`>
kubectl create secret docker-registry myregistrykey \
  --docker-server=$DOCKER_REGISTRY_SERVER \
  --docker-username=$DOCKER_USER \
  --docker-password=$DOCKER_PASSWORD \
  --docker-email=$DOCKER_EMAIL
If your username on DockerHub is DOCKER_USER, and your private repo is called PRIVATE_REPO_NAME, and the image you want to pull is tagged as latest, create this example.yaml file:
apiVersion: v1
kind: Pod
metadata:
  name: whatever
spec:
  containers:
  - name: whatever
    image: DOCKER_USER/PRIVATE_REPO_NAME:latest
    imagePullPolicy: Always
    command: [ "echo", "SUCCESS" ]
  imagePullSecrets:
  - name: myregistrykey
Then run:
kubectl create -f example.yaml
Create a Kubernetes Secret:
apiVersion: v1
kind: Secret
metadata:
  name: repository-secret-key  # Secret names must be lowercase DNS-1123 names
data:
  .dockerconfigjson: <base64 encoded docker auth config>
type: kubernetes.io/dockerconfigjson
Then reference the secret in the pod or RC config. Example:
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
  - name: test-pod
    image: quay.io/example/hello:1.1
  imagePullSecrets:
  - name: repository-secret-key
Docker auth config (note the auths wrapper that the kubernetes.io/dockerconfigjson type expects):
{
  "auths": {
    "https://quay.io": {
      "email": ".",
      "auth": "<base64 encoded auth token>"
    }
  }
}
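One way to produce the base64 payloads above (GNU base64 shown, macOS uses base64 -i; this only works if your local config.json actually embeds the credentials instead of delegating to a credsStore):
# auth token: base64 of "username:password"
echo -n 'myuser:mypassword' | base64
# whole config for the .dockerconfigjson field
base64 -w0 ~/.docker/config.json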
Or
kubectl create secret docker-registry myregistrykey \
  --docker-server=DOCKER_REGISTRY_SERVER \
  --docker-username=DOCKER_USER \
  --docker-password=DOCKER_PASSWORD \
  --docker-email=DOCKER_EMAIL
I solved it using the following kubectl command:
kubectl create secret docker-registry your-key-name \
  --docker-server=docker.io \
  --docker-username=DOCKER_USER \
  --docker-password=DOCKER_PASSWORD \
  --docker-email=DOCKER_EMAIL
You can follow these instructions on how to configure nodes to authenticate to a private repository in order to make the kubelets' Docker use your credentials, or follow Phagun Baya's solution with imagePullSecrets, which applies to pods.
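For the node-level approach, the recipe boils down to logging in on each node and copying the resulting config into the kubelet's search path (a sketch assuming the default kubelet root dir /var/lib/kubelet and no credsStore on the node):
# on every node
docker login registry.hub.docker.com
sudo cp ~/.docker/config.json /var/lib/kubelet/config.json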
Just in case anyone else is stuck using kubectl from Windows -
set secretname="secret1"
set username="dockerhubUsername"
set pw="dockerhubPassword"
set email="dockerhubEmail@domain.com"
kubectl create secret docker-registry %secretname% --docker-username=%username% --docker-password=%pw% --docker-email=%email%
