Unable to run inference on a TensorFlow model hosted by the KFServing component in Kubeflow - kubeflow

Hi, I am using the KFServing v0.5.1 component to host a model. I am able to download and deploy the model from S3, but I am facing an issue when trying to access it.
After deployment, KFServing output the following endpoint:
http://recommendation-model.kubeflow-user-example-com.example.com
which I was not able to access from either outside or inside the node. After looking around, I changed my ingress-gateway from NodePort to LoadBalancer and added sslip.io to the config-domain ConfigMap of knative-serving, following the Knative DNS config guide:
<IPofMyLB>.sslip.io: ""
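For reference, the two changes amounted to roughly the following (istio-system and knative-serving are the default namespaces from my install; the IP placeholder is the same as above):
kubectl patch svc istio-ingressgateway -n istio-system -p '{"spec": {"type": "LoadBalancer"}}'
kubectl patch configmap config-domain -n knative-serving --type merge -p '{"data": {"<IPofMyLB>.sslip.io": ""}}'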
After that I tried to run inference against the model, but got no error or response from the server:
curl -d '{"instances": ["abc"]}' -X POST http://recommendation-model.kubeflow-user-example-com.<IPofMyLB>.sslip.io/v1/models/recommendation-model:predict
I then tried a plain GET on the inference endpoint, and the output is this:
curl -v -X GET http://recommendation-model.kubeflow-user-example-com.ELBIP.sslip.io
Note: Unnecessary use of -X or --request, GET is already inferred.
* Trying ELBIP...
* TCP_NODELAY set
* Connected to recommendation-model.kubeflow-user-example-com.ELBIP.sslip.io (ELBIP) port 80 (#0)
> GET / HTTP/1.1
> Host: recommendation-model.kubeflow-user-example-com.ELBIP.sslip.io
> User-Agent: curl/7.64.1
> Accept: */*
>
< HTTP/1.1 302 Found
< content-type: text/html; charset=utf-8
< location: /dex/auth?client_id=kubeflow-oidc-authservice&redirect_uri=%2Flogin%2Foidc&response_type=code&scope=profile+email+groups+openid&state=MTYyMjY0MjU5M3xFd3dBRUV4MVVERkljREpRVUc1SVdXeDFaVkk9fOEnkjCWGNj6WPOgFhv2BUwNSKHsYyBR2kyj9_0geX2f
< date: Wed, 02 Jun 2021 14:03:13 GMT
< content-length: 269
< x-envoy-upstream-service-time: 1
< server: istio-envoy
<
Found.
* Connection #0 to host recommendation-model.kubeflow-user-example-com.ELBIP.sslip.io left intact
* Closing connection 0
Model directory structure:
recommendation_model/
└── 1
├── assets
├── keras_metadata.pb
├── saved_model.pb
└── variables
├── variables.data-00000-of-00001
└── variables.index
Not sure how to get serving to work, as this is the last part of my pipeline.
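For reference, the model was deployed with an InferenceService manifest along these lines (the storageUri path and service-account name shown here are placeholders, not my exact values):
apiVersion: serving.kubeflow.org/v1beta1
kind: InferenceService
metadata:
  name: recommendation-model
  namespace: kubeflow-user-example-com
spec:
  predictor:
    serviceAccountName: s3-sa
    tensorflow:
      storageUri: s3://<my-bucket>/recommendation_model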

When visiting the InferenceService through the LoadBalancer, essentially the istio-ingressgateway, your request passes through an extra layer of control compared to the NodePort, dictated by the Istio security policy.
The response to your curl indicates that you have an Istio installation with Dex authentication.
The istio-dex guide has examples of how to set the cookie for authenticating your inference request.
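A minimal sketch of that approach, assuming the default Kubeflow authservice cookie name authservice_session: log in to the Kubeflow dashboard in a browser, copy the value of the authservice_session cookie, and send it with the request, for example:
SESSION="<authservice_session cookie value copied from the browser>"
curl -d '{"instances": ["abc"]}' \
  -H "Cookie: authservice_session=${SESSION}" \
  http://recommendation-model.kubeflow-user-example-com.<IPofMyLB>.sslip.io/v1/models/recommendation-model:predict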

Related

docker registry: https instead of http

I've just deployed a docker registry.
I'm able to get access to it using:
$ curl -I chart-example.local/v2/
HTTP/1.1 200 OK
Content-Length: 2
Content-Type: application/json; charset=utf-8
Date: Tue, 28 Jan 2020 20:10:35 GMT
Docker-Distribution-Api-Version: registry/2.0
Vary: Accept-Encoding
X-Content-Type-Options: nosniff
However, when I try to push a local image to it, I get this message:
$ docker push chart-example.local/feedly:latest
The push refers to repository [chart-example.local/feedly]
Get https://chart-example.local/v2/: x509: certificate has expired or is not yet valid
Why is Docker trying to access it using https instead of http?
Docker uses https by default for security. You can override this setting by modifying your daemon.json file with the following content. Do not use this setting in production.
{
"insecure-registries" : ["chart-example.local"]
}
See this link for more information: https://docs.docker.com/registry/insecure/
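After editing daemon.json, restart the Docker daemon so the setting takes effect; on a systemd-based Linux host that is typically:
sudo systemctl restart docker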

Websocket request error on Cloud Run on GKE

I just deployed a Cloud Run on GKE (version 1.13.6-gke.6) service following this: https://cloud.google.com/run/docs/deploying
The code was similar to https://cloud.google.com/appengine/docs/flexible/python/using-websockets-and-session-affinity
and it worked locally,
but my requests (via curl and JavaScript) return a 503 error:
< HTTP/1.1 503 Service Unavailable
< content-length: 85
< content-type: text/plain
< date: Fri, 14 Jun 2019 05:07:37 GMT
< server: istio-envoy
< connection: close
Any idea what I am missing?

"404 page not found" when exec `curl --unix-socket /var/run/docker.sock http:/containers/json`

docker version: 1.11.2
curl version: 7.50.3 (x86_64-pc-linux-gnu) libcurl/7.50.3 OpenSSL/1.0.1e zlib/1.2.7
/usr/local/sbin/bin/curl --unix-socket /var/run/docker.sock http://images/json -v
* Trying /var/run/docker.sock...
* Connected to images (/var/run/docker.sock) port 80 (#0)
> GET /json HTTP/1.1
> Host: images
> User-Agent: curl/7.50.3
> Accept: */*
>
< HTTP/1.1 404 Not Found
< Content-Type: text/plain; charset=utf-8
< X-Content-Type-Options: nosniff
< Date: Thu, 22 Sep 2016 06:11:52 GMT
< Content-Length: 19
<
404 page not found
* Curl_http_done: called premature == 0
* Connection #0 to host images left intact
Is there anything wrong with my Docker daemon? How can I get the container info from the Docker unix socket?
The Docker daemon is definitely running.
I followed this page: https://docs.docker.com/engine/reference/api/docker_remote_api/#/v1-23-api-changes, which suggests using curl 7.40 or later with the command curl --unix-socket /var/run/docker.sock http:/containers/json. You can see that this command contains an invalid URL, http:/containers/json.
Then I downloaded the newest curl, 7.50.3. The key to this problem is the curl version together with a complete URL; the command should be run like this:
curl --unix-socket /var/run/docker.sock http://localhost/images/json
For more detail, see this page: https://superuser.com/questions/834307/can-curl-send-requests-to-sockets. Hope it helps other people who are confused.
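The same URL form works for the containers endpoint from the title as well, for example:
curl --unix-socket /var/run/docker.sock http://localhost/containers/json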

Docker registry 2.0 api v2 access using auth token

I have a private Docker repository in quay.io and I want to get the list of repositories and the tags related to it.
So I use curl -IL https://quay.io/user/accountName/v2/_catalog
and it returns:
HTTP/1.1 401 UNAUTHORIZED
Server: nginx/1.9.5
Date: Thu, 18 Feb 2016 11:02:19 GMT
Content-Type: application/json
Content-Length: 117
Connection: keep-alive
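The generic registry v2 token flow is roughly the sketch below; the quay.io auth realm, the scope string, and the jq dependency are assumptions here, so check the Www-Authenticate header of the 401 response for the actual realm and service values:
# Request a bearer token for the repository (realm and scope assumed), then call the v2 API with it
TOKEN=$(curl -s -u "user:password" "https://quay.io/v2/auth?service=quay.io&scope=repository:user/accountName:pull" | jq -r .token)
curl -s -H "Authorization: Bearer $TOKEN" https://quay.io/v2/user/accountName/tags/list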

Error uploading image into Docker Registry using API v2

I am trying to upload a Docker image (tarball) into a private Docker registry using the following API.
I am following the documentation here: http://docs.docker.com/registry/spec/api/
Step 1: Initiate the upload and get the location URL
$ curl -v -X POST http://localhost:5000/v2/hello-world/blobs/uploads/
* About to connect() to localhost port 5000 (#0)
* Trying ::1...
* Connected to localhost (::1) port 5000 (#0)
> POST /v2/hello-world/blobs/uploads/ HTTP/1.1
> User-Agent: curl/7.29.0
> Host: localhost:5000
> Accept: */*
>
< HTTP/1.1 202 Accepted
< Content-Length: 0
< Docker-Distribution-Api-Version: registry/2.0
< Docker-Upload-Uuid: dd319793-f017-45b3-afe4-c8102363a8df
< Location: http://localhost:5000/v2/hello-world/blobs/uploads/dd319793-f017-45b3-afe4-c8102363a8df?_state=SB2605fFM_7KNkYTHjVrVQVT62dufwXNTw9QzO2_aRR7Ik5hbWUiOiJoZWxsby13b3JsZCIsIlVVSUQiOiJkZDMxOTc5My1mMDE3LTQ1YjMtYWZlNC1jODEwMjM2M2E4ZGYiLCJPZmZzZXQiOjAsIlN0YXJ0ZWRBdCI6IjIwMTUtMTAtMjdUMjI6MjU6MjEuMTM0MTI5MDNaIn0%3D
< Range: 0-0
< Date: Tue, 27 Oct 2015 22:25:21 GMT
< Content-Type: text/plain; charset=utf-8
<
* Connection #0 to host localhost left intact
Step 2: Use the location URL to upload the actual Docker image
$ curl -v -H "Content-type: application/octet-stream" -H "Content-Length: 13824" --data-binary @/tmp/hello-world.tar -X PUT -L "http://localhost:5000/v2/hello-world/blobs/uploads/dd319793-f017-45b3-afe4-c8102363a8df?_state=SB2605fFM_7KNkYTHjVrVQVT62dufwXNTw9QzO2_aRR7Ik5hbWUiOiJoZWxsby13b3JsZCIsIlVVSUQiOiJkZDMxOTc5My1mMDE3LTQ1YjMtYWZlNC1jODEwMjM2M2E4ZGYiLCJPZmZzZXQiOjAsIlN0YXJ0ZWRBdCI6IjIwMTUtMTAtMjdUMjI6MjU6MjEuMTM0MTI5MDNaIn0%3D&digest=tarsum.v2+sha256:97bbb955c700a6414fd48ae147986e9b42c0508c8a766cea61e7e3badf0f7dde"
* About to connect() to localhost port 5000 (#0)
* Trying ::1...
* Connected to localhost (::1) port 5000 (#0)
> PUT /v2/hello-world/blobs/uploads/dd319793-f017-45b3-afe4-c8102363a8df?_state=SB2605fFM_7KNkYTHjVrVQVT62dufwXNTw9QzO2_aRR7Ik5hbWUiOiJoZWxsby13b3JsZCIsIlVVSUQiOiJkZDMxOTc5My1mMDE3LTQ1YjMtYWZlNC1jODEwMjM2M2E4ZGYiLCJPZmZzZXQiOjAsIlN0YXJ0ZWRBdCI6IjIwMTUtMTAtMjdUMjI6MjU6MjEuMTM0MTI5MDNaIn0%3D&digest=tarsum.v2+sha256:97bbb955c700a6414fd48ae147986e9b42c0508c8a766cea61e7e3badf0f7dde HTTP/1.1
> User-Agent: curl/7.29.0
> Host: localhost:5000
> Accept: */*
> Content-type: application/octet-stream
> Content-Length: 13824
> Expect: 100-continue
>
* Done waiting for 100-continue
< HTTP/1.1 400 Bad Request
< Content-Type: application/json; charset=utf-8
< Docker-Distribution-Api-Version: registry/2.0
< Date: Tue, 27 Oct 2015 22:29:02 GMT
< Content-Length: 131
* HTTP error before end of send, stop sending
<
{"errors":[{"code":"DIGEST_INVALID","message":"provided digest did not match uploaded content","detail":"digest parsing failed"}]}
* Closing connection 0
Also, I am using sha256sum as follows:
$ sha256sum /tmp/hello-world.tar
97bbb955c700a6414fd48ae147986e9b42c0508c8a766cea61e7e3badf0f7dde /tmp/hello-world.tar
What am I possibly doing wrong here? How do I get around the DIGEST_INVALID error?
Here is a simple example that should get you on track. One likely culprit in the request above is the digest query parameter: registry v2 expects a plain sha256:<hex> digest of the exact bytes uploaded, and the + in tarsum.v2+sha256 would also need to be URL-encoded (%2B) to survive query-string parsing.
reponame=foo/bar
numBytes=10000000 # 10 megabytes of random test data
# Start an upload session; the registry answers 202 with a Location header pointing at the upload URL
uploadURL=$(curl -siL -X POST "https://registrydomain/v2/$reponame/blobs/uploads/" | grep 'Location:' | cut -d ' ' -f 2 | tr -d '[:space:]')
# Generate the blob, keep a copy in upload.tmp, and compute its sha256 digest
blobDigest="sha256:$(head -c $numBytes /dev/urandom | tee upload.tmp | shasum -a 256 | cut -d ' ' -f 1)"
echo "Uploading Blob of 10 Random Megabytes"
# Complete the upload by PUTting the blob to the upload URL with the digest appended
time curl -T upload.tmp --progress-bar "$uploadURL&digest=$blobDigest" > /dev/null
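If the upload succeeded, the blob should then be addressable by its digest; a quick sanity check (same hypothetical registrydomain as above) is:
curl -sI "https://registrydomain/v2/$reponame/blobs/$blobDigest"
which should come back 200 OK.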
