Hi, I am developing a solution that creates and deploys services to Google Cloud Run using its REST API, authenticating with OAuth as a service account created for that purpose.
I am stuck on making the created services publicly available.
I was unable to find an API equivalent of the --allow-unauthenticated flag that gcloud offers.
The only way I found is to manually add allUsers as Cloud Run Invoker on each service I want publicly reachable, but I would like all the services deployed by that service account to be publicly reachable automatically.
I would like to know if there is a better (more automatic) way to achieve this.
Thanks in advance.
Firstly, you can't do this in only one call. You have to deploy the service and then grant allUsers on it. The CLI performs these 2 steps conveniently for you.
Anyway, when you are stuck like this, there is a useful trick: add --log-http to your gcloud command. That way, you will see all the HTTP API calls performed by the CLI.
If you do this when you deploy a new Cloud Run service, you will get tons of lines and, at some point, you will see this:
==== request start ====
uri: https://run.googleapis.com/v1/projects/gbl-imt-homerider-basguillaueb/locations/us-central1/services/predict2:setIamPolicy?alt=json
method: POST
== headers start ==
b'Authorization': --- Token Redacted ---
b'X-Goog-User-Project': b'gbl-imt-homerider-basguillaueb'
b'accept': b'application/json'
b'accept-encoding': b'gzip, deflate'
b'content-length': b'98'
b'content-type': b'application/json'
b'user-agent': b'google-cloud-sdk gcloud/299.0.0 command/gcloud.run.deploy invocation-id/61070d063a604fdda8e87ad63777e3ae environment/devshell environment-version/None interactive/True from-script/False python/3.7.3 term/screen (Linux 4.19.112+
)'
== headers end ==
== body start ==
{"policy": {"bindings": [{"members": ["allUsers"], "role": "roles/run.invoker"}], "etag": "ACAB"}}
== body end ==
---- response start ----
status: 200
-- headers start --
-content-encoding: gzip
cache-control: private
content-length: 159
content-type: application/json; charset=UTF-8
date: Wed, 08 Jul 2020 11:37:11 GMT
server: ESF
transfer-encoding: chunked
vary: Origin, X-Origin, Referer
x-content-type-options: nosniff
x-frame-options: SAMEORIGIN
x-xss-protection: 0
-- headers end --
-- body start --
{
  "version": 1,
  "etag": "BwWp7IdZGHs=",
  "bindings": [
    {
      "role": "roles/run.invoker",
      "members": [
        "allUsers"
      ]
    }
  ]
}
So, it's an additional API call that the CLI performs for you. You can find the API definition here.
If you want to perform the call manually, you can make a request like this:
curl -H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "content-type: application/json" -X POST \
-d '{"policy": {"bindings": [{"members": ["allUsers"], "role": "roles/run.invoker"}]}}' \
"https://run.googleapis.com/v1/projects/<PROJECT_ID>/locations/<REGION>/services/<SERVICE_NAME>:setIamPolicy"
I want to write a data point to my InfluxDB from a bash shell, but the timestamp seems to cause problems.
root@server:UP [~]# curl -i -XPOST 'http://localhost:8086/write?db=tbr' --data-binary 'codenarc maxPriority2Violations=917 maxPriority3Violations=3336 1593122400000000000'
HTTP/1.1 400 Bad Request
Content-Type: application/json
Request-Id: 03666c3f-b7b5-11ea-8659-02420a0a1b02
X-Influxdb-Build: OSS
X-Influxdb-Error: unable to parse 'codenarc maxPriority2Violations=917 maxPriority3Violations=3336 1593122400000000000': bad timestamp
X-Influxdb-Version: 1.7.10
X-Request-Id: 03666c3f-b7b5-11ea-8659-02420a0a1b02
Date: Fri, 26 Jun 2020 13:57:46 GMT
Content-Length: 129
{"error":"unable to parse 'codenarc maxPriority2Violations=917 maxPriority3Violations=3336 1593122400000000000': bad timestamp"}
This is how I created the timestamp in the first place
import java.time.*
import java.util.concurrent.TimeUnit

def date = LocalDateTime.of(2020, Month.JUNE, 26, 0, 0, 0)
def ms = date.atZone(ZoneId.systemDefault()).toInstant().toEpochMilli()
def ns = TimeUnit.NANOSECONDS.convert(ms, TimeUnit.MILLISECONDS)  // nanoseconds
So the timestamp is supposed to be in ns, and I ensured that. Why is InfluxDB giving me that error message? What's wrong with that timestamp?
Thanks!
It's not a timestamp issue. You have missed the comma between the fields you are passing. In the InfluxDB line protocol, multiple fields must be comma-separated with no spaces between them; single spaces separate the measurement (plus optional tags), the field set, and the timestamp. Here is your curl request, corrected; try now.
curl -i -XPOST 'http://localhost:8086/write?db=tbr' --data-binary 'codenarc maxPriority2Violations=917,maxPriority3Violations=3336 1593122400000000000'
One more example:
curl -i -XPOST 'http://localhost:8086/write?db=mydb' --data-binary 'cpu_load_short,host=server02,region=us-west value=0.55 1422568543702900257'
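If you want to confirm the point landed, you can query it back over the same HTTP API. A minimal sketch against the write above (assumes a local InfluxDB 1.x listening on 8086):
curl -G 'http://localhost:8086/query?db=tbr' --data-urlencode 'q=SELECT * FROM codenarc'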
I am trying to send a POST request to a site using Hyper 0.9. The request works with curl:
curl https://api.particle.io/v1/devices/secret/set_light -d args=0 -d access_token=secret
and Python:
import requests
r = requests.post("https://api.particle.io/v1/devices/secret/set_light",
data={"access_token": "secret", "args": "0"})
but my Rust implementation doesn't seem to go through, always yielding 400.
use hyper::client::Client;

// `client` was missing from the snippet; construct one first.
let client = Client::new();

let addr = "https://api.particle.io/v1/devices/secret/set_light";
let body = "access_token=secret&args=0";
let mut res = client.post(addr)
    .body(body)
    .send()
    .unwrap();
It is greatly beneficial to be aware of various tools for debugging HTTP problems like this. In this case, I used nc to start a dumb server so I could see the headers the HTTP client is sending (nc -l 5000). I modified the cURL and Rust examples to point to 127.0.0.1:5000 and this was the output:
cURL:
POST /v1/devices/secret/set_light HTTP/1.1
Host: 127.0.0.1:5000
User-Agent: curl/7.43.0
Accept: */*
Content-Length: 26
Content-Type: application/x-www-form-urlencoded
args=0&access_token=secret
Hyper:
POST /v1/devices/secret/set_light HTTP/1.1
Host: 127.0.0.1:5000
Content-Length: 26
access_token=secret&args=0
I don't have an account at particle.io to test with, but I'm guessing you need that Content-Type header. Setting a User-Agent would be good etiquette and the Accept header is really more for your benefit, so you might as well set them too.
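As a quick way to confirm the guess without touching the Rust code, you can strip the header from the working cURL call; curl drops a default header when you pass it with an empty value, so if the Content-Type really is the culprit, this should reproduce the 400:
curl -H "Content-Type:" https://api.particle.io/v1/devices/secret/set_light \
     -d args=0 -d access_token=secret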
I'm a newbie to Neo4j, which is at the moment my main candidate among all the graph databases out there. I'm writing a thesis about integrating a database into a smart city, and Neo4j is one of the best candidates for this purpose, if not the best.
However, I'm unable to get Neo4j's SPARQL plugin working. I'm also a newbie to Maven, but I was able to download the plugin from GitHub and compile it; however, I had to skip the tests to be able to compile it. Anyway, 'build success.'
I followed the instructions
http://neo4j-contrib.github.io/sparql-plugin/
and I suppose that I was able to insert the example quads (example 1) to my database:
curl -X POST -H Content-Type:application/json -H Accept:application/json --data-binary @sampledata.txt -v http://localhost:7474/db/data/ext/SPARQLPlugin/graphdb/insert_quad
Response:
* Hostname was NOT found in DNS cache
* Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 7474 (#0)
> POST /db/data/ext/SPARQLPlugin/graphdb/insert_quad HTTP/1.1
> User-Agent: curl/7.35.0
> Host: localhost:7474
> Content-Type:application/json
> Accept:application/json
> Content-Length: 130
>
* upload completely sent off: 130 out of 130 bytes
< HTTP/1.1 204 No Content
< Access-Control-Allow-Origin: *
* Server Jetty(9.0.5.v20130815) is not blacklisted
< Server: Jetty(9.0.5.v20130815)
<
* Connection #0 to host localhost left intact
However, I cannot find that quad in my database. I suppose that a query 'MATCH (n) RETURN n LIMIT 100' would show them, right? Anyway, I only find one node with one property, 'value: urn:com.tinkerpop.blueprints.pgm.oupls.sail:namespaces.' When I try querying (example 2):
curl -X POST -H Content-Type:application/json -H Accept:application/json --data-binary @sampledata.txt -v http://localhost:7474/db/data/ext/SPARQLPlugin/graphdb/execute_sparql
Response:
Hostname was NOT found in DNS cache
Trying 127.0.0.1...
Connected to localhost (127.0.0.1) port 7474 (#0)
POST /db/data/ext/SPARQLPlugin/graphdb/execute_sparql HTTP/1.1
User-Agent: curl/7.35.0
Host: localhost:7474
Content-Type:application/json
Accept:application/json
Content-Length: 74
upload completely sent off: 74 out of 74 bytes
HTTP/1.1 500 Server Error
Content-Type: application/json; charset=UTF-8
Access-Control-Allow-Origin: *
Content-Length: 3274
Server Jetty(9.0.5.v20130815) is not blacklisted
Server: Jetty(9.0.5.v20130815)
{ "message" :
"org.openrdf.query.algebra.Var.setConstant(Z)V", "exception" :
"NoSuchMethodError", "fullname" : "java.lang.NoSuchMethodError",
"stacktrace" : [
"org.openrdf.query.parser.sparql.TupleExprBuilder.createConstVar(TupleExprBuilder.java:340)",
"org.openrdf.query.parser.sparql.TupleExprBuilder.mapValueExprToVar(TupleExprBuilder.java:271)",
"org.openrdf.query.parser.sparql.TupleExprBuilder.visit(TupleExprBuilder.java:1512)",
"org.openrdf.query.parser.sparql.ast.ASTPathSequence.jjtAccept(ASTPathSequence.java:20)",
"org.openrdf.query.parser.sparql.TupleExprBuilder.visit(TupleExprBuilder.java:1323)",
"org.openrdf.query.parser.sparql.ast.ASTPathAlternative.jjtAccept(ASTPathAlternative.java:18)",
"org.openrdf.query.parser.sparql.TupleExprBuilder.visit(TupleExprBuilder.java:1875)",
"org.openrdf.query.parser.sparql.ast.ASTPropertyListPath.jjtAccept(ASTPropertyListPath.java:18)",
"org.openrdf.query.parser.sparql.ast.SimpleNode.childrenAccept(SimpleNode.java:157)",
"org.openrdf.query.parser.sparql.ASTVisitorBase.visit(ASTVisitorBase.java:979)",
"org.openrdf.query.parser.sparql.ast.ASTTriplesSameSubjectPath.jjtAccept(ASTTriplesSameSubjectPath.java:18)",
"org.openrdf.query.parser.sparql.ast.SimpleNode.childrenAccept(SimpleNode.java:157)",
"org.openrdf.query.parser.sparql.ASTVisitorBase.visit(ASTVisitorBase.java:421)",
"org.openrdf.query.parser.sparql.ast.ASTBasicGraphPattern.jjtAccept(ASTBasicGraphPattern.java:19)",
"org.openrdf.query.parser.sparql.TupleExprBuilder.visit(TupleExprBuilder.java:1144)",
"org.openrdf.query.parser.sparql.ast.ASTGraphPatternGroup.jjtAccept(ASTGraphPatternGroup.java:19)",
"org.openrdf.query.parser.sparql.ast.SimpleNode.childrenAccept(SimpleNode.java:157)",
"org.openrdf.query.parser.sparql.ASTVisitorBase.visit(ASTVisitorBase.java:1021)",
"org.openrdf.query.parser.sparql.ast.ASTWhereClause.jjtAccept(ASTWhereClause.java:19)",
"org.openrdf.query.parser.sparql.TupleExprBuilder.visit(TupleExprBuilder.java:389)",
"org.openrdf.query.parser.sparql.TupleExprBuilder.visit(TupleExprBuilder.java:228)",
"org.openrdf.query.parser.sparql.ast.ASTSelectQuery.jjtAccept(ASTSelectQuery.java:19)",
"org.openrdf.query.parser.sparql.TupleExprBuilder.visit(TupleExprBuilder.java:378)",
"org.openrdf.query.parser.sparql.TupleExprBuilder.visit(TupleExprBuilder.java:228)",
"org.openrdf.query.parser.sparql.ast.ASTQueryContainer.jjtAccept(ASTQueryContainer.java:21)",
"org.openrdf.query.parser.sparql.SPARQLParser.buildQueryModel(SPARQLParser.java:210)",
"org.openrdf.query.parser.sparql.SPARQLParser.parseQuery(SPARQLParser.java:164)",
"org.neo4j.server.plugin.sparql.SPARQLPlugin.executeSPARQL(SPARQLPlugin.java:68)",
"java.lang.reflect.Method.invoke(Method.java:606)",
"org.neo4j.server.plugins.PluginMethod.invoke(PluginMethod.java:61)",
"org.neo4j.server.plugins.PluginManager.invoke(PluginManager.java:159)",
"org.neo4j.server.rest.web.ExtensionService.invokeGraphDatabaseExtension(ExtensionService.java:312)",
"org.neo4j.server.rest.web.ExtensionService.invokeGraphDatabaseExtension(ExtensionService.java:134)",
"java.lang.reflect.Method.invoke(Method.java:606)",
"org.neo4j.server.rest.transactional.TransactionalRequestDispatcher.dispatch(TransactionalRequestDispatcher.java:139)",
"java.lang.Thread.run(Thread.java:744)" ]
Connection #0 to host localhost left intact
I'm wondering what could be causing this kind of behavior. I have tried the SPARQL plugin with many configurations, according to all the instructions I have found, but the plugin just refuses to work.
By the way, I'm wondering whether it is suitable for production use. Can someone comment on this?
Perhaps there are more ways than one to get RDF working in Neo4j? As I'm a newbie to Neo4j, I'm, of course, interested in solutions that are somewhat easy to install and tailor. Well, first and foremost, they should really work in production use.
Neo4j is working otherwise just great! REST API works just fine, but I would like to compare it with the SPARQL endpoint if possible.
I've uploaded a (pgp) file via the documents API, and changed its
visibility to public. However, I'm unable to download it publicly
using the contents link for that file.
Here are the relevant bits of the xml for the meta-data for the file in
question.
$ curl -H "GData-Version: 3.0" -H "Authorization: Bearer ..." https://docs.google.com/feeds/default/private/full
...
<content type="application/pgp-encrypted" src="https://doc-0c-c0-docs.googleusercontent.com/docs/securesc/tkl8gnmcm9fhm6fec3160bcgajgf0i18/opa6m1tmj5cufpvrj89bv4dt0q6696a4/1336514400000/04627947781497054983/04627947781497054983/0B_-KWHz80dDXZ2dYdEZ0dGw3akE?h=16653014193614665626&e=download&gd=true"/>
...
<gd:feedLink rel="http://schemas.google.com/acl/2007#accessControlList" href="https://docs.google.com/feeds/default/private/full/file%3A0B_-KWHz80dDXZ2dYdEZ0dGw3akE/acl"/>
$ curl -H "GData-Version: 3.0" -H "Authorization: Bearer ..." https://docs.google.com/feeds/default/private/full/file%3A0B_-KWHz80dDXZ2dYdEZ0dGw3akE/acl
...
<entry gd:etag="W/"DUcNRns4eCt7ImA9WhVVFUw."">
<id>https://docs.google.com/feeds/id/file%3A0B_-KWHz80dDXZ2dYdEZ0dGw3akE/acl/default</id>
...
<gAcl:role value="reader"/>
<gAcl:scope type="default"/>
...
The role/scope returned for the file in question is reader/default, indicating
it is public. (It also shows up with public shared access in the web UI.)
However, accessing
the src attribute in the content element results in:
$ curl --verbose 'https://doc-0c-c0-docs.googleusercontent.com/docs/securesc/tkl8gnmcm9fhm6fec3160bcgajgf0i18/opa6m1tmj5cufpvrj89bv4dt0q6696a4/1336514400000/04627947781497054983/04627947781497054983/0B_-KWHz80dDXZ2dYdEZ0dGw3akE?h=16653014193614665626&e=download&gd=true'
< HTTP/1.1 401 Unauthorized
< Server: HTTP Upload Server Built on May 7 2012 18:16:42 (1336439802)
< WWW-Authenticate: GoogleLogin realm="http://www.google.com/accounts"
< Date: Tue, 08 May 2012 22:48:37 GMT
< Expires: Tue, 08 May 2012 22:48:37 GMT
< Cache-Control: private, max-age=0
< Content-Length: 0
< Content-Type: text/html
It seems like you are trying to publish a document: https://developers.google.com/google-apps/documents-list/#publishing_documents_by_publishing_a_single_revision
Once you publish it, the link with rel set to "http://schemas.google.com/docs/2007#publish" will point to the published document on the web.
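As a rough sketch of fishing that link out of the metadata feed with the same curl as in the question (the feed is XML, so a real client should use a proper parser; the grep pattern is only illustrative and assumes the feed exposes it as an Atom <link> element):
curl -s -H "GData-Version: 3.0" -H "Authorization: Bearer ..." \
     "https://docs.google.com/feeds/default/private/full" \
  | grep -o '<link rel="http://schemas.google.com/docs/2007#publish"[^>]*>'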
Before I bang my head against all the issues myself I thought I'd run it by you guys and see if you could point me somewhere or pass along some tips.
I'm writing a really basic monitoring script to make sure some of my web applications are alive and answering. I'll fire it off out of cron and send alert emails if there's a problem.
So what I'm looking for are suggestions on what to watch out for. Grepping the output of wget will probably get me by, but I was wondering if there was a more programmatic way to get robust status information out of wget and my resulting web page.
This is a general kind of question, I'm just looking for tips from anybody who happens to have done this kind of thing before.
Check the exit code:
wget --timeout=10 -q -O /dev/null http://example.com/mypage   # plus whatever other flags you need
if [ $? -ne 0 ] ; then
    echo "there's a problem"   # mail logs, send an SMS, etc.
fi
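If you want more than pass/fail, wget's documented exit codes distinguish failure classes: 4 is a network failure and 8 means the server issued an error response (e.g. a 404). A sketch, with example.com/mypage as a placeholder:
wget --timeout=10 -q -O /dev/null http://example.com/mypage
case $? in
    0) ;;                                 # page is up
    4) echo "network failure" ;;          # DNS failure, connection refused, ...
    8) echo "server returned an error" ;; # e.g. a 404 or 500 response
    *) echo "some other wget failure" ;;
esac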
I prefer curl --head for this type of usage:
% curl --head http://stackoverflow.com/
HTTP/1.1 200 OK
Cache-Control: public, max-age=60
Content-Length: 359440
Content-Type: text/html; charset=utf-8
Expires: Tue, 05 Oct 2010 19:06:52 GMT
Last-Modified: Tue, 05 Oct 2010 19:05:52 GMT
Vary: *
Date: Tue, 05 Oct 2010 19:05:51 GMT
This will allow you to check the return status to make sure it's 200 (or whatever you're expecting it to be) and the content-length to make sure it's the expected value (or at least not zero.) And it will exit non-zero if there's any problem with the connection.
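A minimal sketch of scripting that check (example.com is a placeholder): -s silences the progress meter, -o /dev/null discards the headers, and -w '%{http_code}' prints only the status code.
status=$(curl -s -o /dev/null -w '%{http_code}' --head http://example.com/)
if [ "$status" -ne 200 ]; then
    echo "unexpected status: $status"   # alert here: mail logs, send an SMS, etc.
fi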
If you want to check for changes in the page content, pipe the output through md5 and then compare what you get to your pre-computed known value:
wget -O - http://stackoverflow.com | md5sum
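A sketch of that comparison; KNOWN_MD5 is a placeholder for the checksum you pre-computed:
KNOWN_MD5="put-your-pre-computed-checksum-here"
CURRENT_MD5=$(wget -q -O - http://stackoverflow.com | md5sum | cut -d' ' -f1)
if [ "$CURRENT_MD5" != "$KNOWN_MD5" ]; then
    echo "page content changed"   # alert here
fi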