Just deployed a Cloud Run on GKE (version 1.13.6-gke.6) service following this: https://cloud.google.com/run/docs/deploying
The code was similar to https://cloud.google.com/appengine/docs/flexible/python/using-websockets-and-session-affinity
and it worked locally,
but my requests (via curl and JavaScript) return a 503 error:
< HTTP/1.1 503 Service Unavailable
< content-length: 85
< content-type: text/plain
< date: Fri, 14 Jun 2019 05:07:37 GMT
< server: istio-envoy
< connection: close
Any idea what I'm missing?
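(For context, a minimal sketch of the kind of request involved, sent straight at the Istio ingress gateway to take DNS out of the picture; the service name myservice, the default namespace, and the example.com domain below are placeholders rather than values from my deployment, and the gateway's namespace may differ per installation:)
# Look up the external IP of the Istio ingress gateway fronting the cluster
# (istio-system is an assumption; adjust to where Istio is installed).
GATEWAY_IP=$(kubectl get svc istio-ingressgateway -n istio-system \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
# Knative/Istio route on the Host header, so it has to match the service's assigned domain.
curl -v -H "Host: myservice.default.example.com" "http://$GATEWAY_IP/"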
Hi, I am using the KFServing v0.5.1 component for hosting the model. I am able to download and deploy the model from S3, but I'm facing an issue when I try to access it.
After deployment, KFServing outputted the following endpoint:
http://recommendation-model.kubeflow-user-example-com.example.com
which I was not able to access from outside or from inside the node. After looking around, I switched my ingress-gateway from NodePort to LoadBalancer and added sslip.io to the config-domain ConfigMap of knative-serving.
I followed knative-dns-config:
<IPofMyLB>.sslip.io: ""
After that I tried to run inference against the model, but I get no error and no response from the server:
curl -d '{"instances": ["abc"]}' -X POST http://recommendation-model.kubeflow-user-example-com.<IPofMyLB>.sslip.io/v1/models/recommendation-model:predict
When I simply GET the inference endpoint, the output is this:
curl -v -X GET http://recommendation-model.kubeflow-user-example-com.ELBIP.sslip.io
Note: Unnecessary use of -X or --request, GET is already inferred.
* Trying ELBIP...
* TCP_NODELAY set
* Connected to recommendation-model.kubeflow-user-example-com.ELBIP.sslip.io (ELBIP) port 80 (#0)
> GET / HTTP/1.1
> Host: recommendation-model.kubeflow-user-example-com.ELBIP.sslip.io
> User-Agent: curl/7.64.1
> Accept: */*
>
< HTTP/1.1 302 Found
< content-type: text/html; charset=utf-8
< location: /dex/auth?client_id=kubeflow-oidc-authservice&redirect_uri=%2Flogin%2Foidc&response_type=code&scope=profile+email+groups+openid&state=MTYyMjY0MjU5M3xFd3dBRUV4MVVERkljREpRVUc1SVdXeDFaVkk9fOEnkjCWGNj6WPOgFhv2BUwNSKHsYyBR2kyj9_0geX2f
< date: Wed, 02 Jun 2021 14:03:13 GMT
< content-length: 269
< x-envoy-upstream-service-time: 1
< server: istio-envoy
<
Found.
* Connection #0 to host recommendation-model.kubeflow-user-example-com.ELBIP.sslip.io left intact
* Closing connection 0
(base) ahsan@Ahsans-MacBook-Pro kfserving % curl -v -X GET http://recommendation-model.kubeflow-user-example-com.ELBIP.sslip.io
Note: Unnecessary use of -X or --request, GET is already inferred.
* Trying ELBIP...
* TCP_NODELAY set
* Connected to recommendation-model.kubeflow-user-example-com.ELBIP.sslip.io (ELBIP) port 80 (#0)
> GET / HTTP/1.1
> Host: recommendation-model.kubeflow-user-example-com.ELBIP.sslip.io
> User-Agent: curl/7.64.1
> Accept: */*
>
< HTTP/1.1 302 Found
< content-type: text/html; charset=utf-8
< location: /dex/auth?client_id=kubeflow-oidc-authservice&redirect_uri=%2Flogin%2Foidc&response_type=code&scope=profile+email+groups+openid&state=MTYyMjY0MzE3OXxFd3dBRUhOdmFrSk9iVkU0Wms1VmMzVnhXbkU9fECJ3_U0SaWkR441eIWq-AJbFAV29-2Bk8uxPAOxPJD0
< date: Wed, 02 Jun 2021 14:12:59 GMT
< content-length: 269
< x-envoy-upstream-service-time: 1
< server: istio-envoy
<
Found.
* Connection #0 to host recommendation-model.kubeflow-user-example-com.ELBIP.sslip.io left intact
* Closing connection 0
Model directory structure:
recommendation_model/
└── 1
├── assets
├── keras_metadata.pb
├── saved_model.pb
└── variables
├── variables.data-00000-of-00001
└── variables.index
Not sure how to get serving to work, as this is the last part of my pipeline.
When you visit the InferenceService through the LoadBalancer, i.e. essentially through the istio-ingressgateway, your request passes through an extra layer of control compared to the NodePort, and that layer is dictated by the Istio security policy.
The response to your curl indicates that you have an Istio installation with Dex authentication.
The istio-dex guide has examples of how to set the cookie for authenticating your inference request.
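As a rough sketch of that approach (assuming the stock Kubeflow authservice setup, where logging in to the dashboard through Dex stores an authservice_session cookie in the browser), the predict call from the question can be retried with that cookie attached:
# Placeholder: copy the value of the authservice_session cookie from the browser's
# developer tools after logging in to the Kubeflow dashboard through Dex.
SESSION="<paste-authservice_session-value>"
# Attach the session cookie so the Istio/Dex auth layer forwards the request
# to the InferenceService instead of redirecting to /dex/auth.
curl -v -H "Cookie: authservice_session=${SESSION}" \
  -H "Content-Type: application/json" \
  -d '{"instances": ["abc"]}' \
  http://recommendation-model.kubeflow-user-example-com.<IPofMyLB>.sslip.io/v1/models/recommendation-model:predict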
I've just deployed a docker registry.
I'm able to get access to it using:
$ curl -I chart-example.local/v2/
HTTP/1.1 200 OK
Content-Length: 2
Content-Type: application/json; charset=utf-8
Date: Tue, 28 Jan 2020 20:10:35 GMT
Docker-Distribution-Api-Version: registry/2.0
Vary: Accept-Encoding
X-Content-Type-Options: nosniff
However, when I try to push a local image to it, I get this message:
$ docker push chart-example.local/feedly:latest
The push refers to repository [chart-example.local/feedly]
Get https://chart-example.local/v2/: x509: certificate has expired or is not yet valid
Why is Docker trying to access it over https instead of http?
Docker uses https by default for security. You can override this setting by modifying your daemon.json file with the following content. Do not use this setting in production.
{
  "insecure-registries" : ["chart-example.local"]
}
See this link for more information: https://docs.docker.com/registry/insecure/
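A minimal sketch of applying this on a Linux host with systemd (the daemon.json path is the Docker default, and the heredoc below overwrites an existing file, so merge by hand if you already have one):
# Write the setting to the default daemon configuration path.
sudo tee /etc/docker/daemon.json > /dev/null <<'EOF'
{
  "insecure-registries": ["chart-example.local"]
}
EOF
# Restart the daemon so the change takes effect, then retry the push.
sudo systemctl restart docker
docker push chart-example.local/feedly:latest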
docker version: 1.11.2
curl version: 7.50.3 (x86_64-pc-linux-gnu) libcurl/7.50.3 OpenSSL/1.0.1e zlib/1.2.7
/usr/local/sbin/bin/curl --unix-socket /var/run/docker.sock http://images/json -v
* Trying /var/run/docker.sock...
* Connected to images (/var/run/docker.sock) port 80 (#0)
> GET /json HTTP/1.1
> Host: images
> User-Agent: curl/7.50.3
> Accept: */*
>
< HTTP/1.1 404 Not Found
< Content-Type: text/plain; charset=utf-8
< X-Content-Type-Options: nosniff
< Date: Thu, 22 Sep 2016 06:11:52 GMT
< Content-Length: 19
<
404 page not found
* Curl_http_done: called premature == 0
* Connection #0 to host images left intact
Is there anything wrong with my Docker daemon? How can I get the container info from the Docker unix socket?
The Docker daemon is definitely started.
I followed this page: https://docs.docker.com/engine/reference/api/docker_remote_api/#/v1-23-api-changes, which suggests using curl 7.40 or later with the command curl --unix-socket /var/run/docker.sock http:/containers/json. Note that this command contains an invalid URL, http:/containers/json.
Then I downloaded the newest curl, 7.50.3. The key to this problem is that for this curl version the URL needs a (dummy) hostname in front of the API path, so we should run it like this:
curl --unix-socket /var/run/docker.sock http://localhost/images/json
For more detail, see this page: https://superuser.com/questions/834307/can-curl-send-requests-to-sockets. I hope it helps other people who were confused.
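For completeness, here are the calls spelled out; localhost is only a dummy hostname that newer curl versions require, since the connection really goes over the unix socket:
# List images through the Docker unix socket.
curl --unix-socket /var/run/docker.sock http://localhost/images/json
# List containers the same way (add ?all=1 to include stopped ones).
curl --unix-socket /var/run/docker.sock "http://localhost/containers/json?all=1"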
I have a private Docker repository on quay.io and I want to get the list of repositories and tags related to it.
So I use curl -IL https://quay.io/user/accountName/v2/_catalog
It returns:
HTTP/1.1 401 UNAUTHORIZED
Server: nginx/1.9.5
Date: Thu, 18 Feb 2016 11:02:19 GMT
Content-Type: application/json
Content-Length: 117
Connection: keep-alive
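(For context, my understanding is that the 401 means quay.io wants the standard registry v2 token flow, which it advertises in the WWW-Authenticate header of that response. A rough sketch of that flow for listing the tags of one repository is below; the repository name myrepo and the robot-account credentials are placeholders. Is this the right approach?)
# Placeholders: a robot account accountName+robot with token ROBOT_TOKEN,
# and a repository accountName/myrepo. The token endpoint is taken from the
# WWW-Authenticate header that quay.io returns alongside the 401.
TOKEN=$(curl -s -u "accountName+robot:ROBOT_TOKEN" \
  "https://quay.io/v2/auth?service=quay.io&scope=repository:accountName/myrepo:pull" \
  | jq -r .token)
# List the tags of that repository with the bearer token.
curl -s -H "Authorization: Bearer $TOKEN" \
  "https://quay.io/v2/accountName/myrepo/tags/list"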
We want to check whether an image exists in the public registry (Docker Hub) automatically before we start a deployment. With the v1 API, we would just query https://index.docker.io/v1/repositories/gliderlabs/alpine/tags/3.2, for example.
But now the official API for the registry is v2, so what is the official way of checking the existence of an image in the public registry?
v1:
$ curl -i https://index.docker.io/v1/repositories/gliderlabs/alpine/tags/latest
HTTP/1.1 200 OK
Server: nginx/1.6.2
Date: Tue, 11 Aug 2015 10:02:09 GMT
Content-Type: application/json
Transfer-Encoding: chunked
Vary: Cookie
X-Frame-Options: SAMEORIGIN
Strict-Transport-Security: max-age=31536000
[{"pk": 20307475, "id": "5bd56d81"}, {"pk": 20355979, "id": "511136ea"}]
v2:
$ curl -i https://index.docker.io/v2/repositories/gliderlabs/alpine/tags/latest
HTTP/1.1 301 MOVED PERMANENTLY
Server: nginx/1.6.2
Date: Tue, 11 Aug 2015 10:04:20 GMT
Content-Type: text/html; charset=utf-8
Transfer-Encoding: chunked
X-Frame-Options: SAMEORIGIN
Location: https://index.docker.io/v2/repositories/gliderlabs/alpine/tags/latest/
Strict-Transport-Security: max-age=31536000
$ curl -i https://index.docker.io/v2/repositories/gliderlabs/alpine/tags/latest/
HTTP/1.1 301 MOVED PERMANENTLY
Server: nginx/1.6.2
Date: Tue, 11 Aug 2015 10:04:26 GMT
Content-Type: text/html; charset=utf-8
Transfer-Encoding: chunked
X-Frame-Options: SAMEORIGIN
Location: https://registry.hub.docker.com/v2/repositories/gliderlabs/alpine/tags/latest/
Strict-Transport-Security: max-age=31536000
$ curl -i https://registry.hub.docker.com/v2/repositories/gliderlabs/alpine/tags/latest/
HTTP/1.1 200 OK
Server: nginx/1.6.2
Date: Tue, 11 Aug 2015 10:04:34 GMT
Content-Type: application/json
Transfer-Encoding: chunked
Vary: Cookie
X-Frame-Options: SAMEORIGIN
Allow: GET, DELETE, HEAD, OPTIONS
Strict-Transport-Security: max-age=31536000
{"name": "latest", "full_size": 5250074, "id": 130839, "repository": 127805, "creator": 152141, "last_updater": 152141, "image_id": null, "v2": false}
Am I supposed to stick to the v1 URL even though it is now kind of deprecated, or use the v2 URLs even though there is no documentation for them? If I use v2, should I hit https://registry.hub.docker.com/v2/ directly, or still use https://index.docker.io/v1/ and follow the redirects?
Upstream's download-frozen-image-v2.sh script should be of some use, at least as a decent API example here (https://github.com/docker/docker/blob/6bf8844f1179108b9fabd271a655bf9eaaf1ee8c/contrib/download-frozen-image-v2.sh#L47-L54).
The main key is that you'll need to hit registry-1.docker.io instead of index.docker.io, and that you need a "token" from auth.docker.io (https://auth.docker.io/token?service=registry.docker.io&scope=repository:gliderlabs/alpine:pull), even if you're just requesting read-only access to a public repository. Once you've got that token, you can hit https://registry-1.docker.io/v2/gliderlabs/alpine/manifests/latest with an Authorization header, which will either return the JSON manifest of the image or error out with a 404.
token="$(curl -sSL "https://auth.docker.io/token?service=registry.docker.io&scope=repository:$image:pull" | jq --raw-output .token)"
curl -fsSL -H "Authorization: Bearer $token" "https://registry-1.docker.io/v2/$image/manifests/$digest"