Before I bang my head against all the issues myself, I thought I'd run it by you guys and see if you could point me somewhere or pass along some tips.
I'm writing a really basic monitoring script to make sure some of my web applications are alive and answering. I'll fire it off out of cron and send alert emails if there's a problem.
So what I'm looking for are suggestions on what to watch out for. Grepping the output of wget will probably get me by, but I was wondering if there was a more programmatic way to get robust status information out of wget and my resulting web page.
This is a general kind of question, I'm just looking for tips from anybody who happens to have done this kind of thing before.
Check the exit code:
# --quiet and -O /dev/null discard the output; add whatever other options you need
wget --timeout=10 --quiet -O /dev/null http://example.com/mypage
if [ $? -ne 0 ]; then
    : # there's a problem: mail logs, send an SMS, etc.
fi
I prefer curl --head for this type of usage:
% curl --head http://stackoverflow.com/
HTTP/1.1 200 OK
Cache-Control: public, max-age=60
Content-Length: 359440
Content-Type: text/html; charset=utf-8
Expires: Tue, 05 Oct 2010 19:06:52 GMT
Last-Modified: Tue, 05 Oct 2010 19:05:52 GMT
Vary: *
Date: Tue, 05 Oct 2010 19:05:51 GMT
This will allow you to check the return status to make sure it's 200 (or whatever you're expecting it to be) and the content-length to make sure it's the expected value (or at least not zero.) And it will exit non-zero if there's any problem with the connection.
If you want to check for changes in the page content, pipe the output through md5sum and then compare the digest to your pre-computed known value:
wget -O - http://stackoverflow.com | md5sum
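If grepping headers feels fragile, curl can also print just the numeric status code, which is easy to test in a script. A minimal sketch (the URL is a placeholder):

```shell
# Print only the HTTP status code; -s silences progress output,
# -o /dev/null discards the body. curl prints 000 if the connection fails.
code=$(curl -s -o /dev/null -w '%{http_code}' 'https://example.com/' || true)
if [ "$code" != "200" ]; then
    echo "unexpected status: $code"
fi
```

This pairs well with the exit-code check above: the exit code tells you whether the connection worked at all, and `%{http_code}` tells you what the server actually said.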
Hi, I am developing a solution that creates and deploys services to Google Cloud Run using its REST API, authenticating with OAuth as a service account created for that purpose.
I am stuck at making the created services available publicly.
I was unable to find a corresponding --allow-unauthenticated parameter as with gcloud to use from the API.
The only way I found is to manually add allUsers as Cloud Run Invoker on each service I want publicly reachable. But I would like all the services from that service account to automatically be publicly reachable.
I would like to know if there is a better (more automatic) way to achieve this.
Thanks in advance.
Firstly, you can't do this in only one command. You have to deploy the service and then grant allUsers on it. The CLI conveniently does these 2 steps for you.
Anyway, when you are stuck like this, there is a useful trick: add --log-http to your gcloud command. That way, you will see all the HTTP API calls performed by the CLI.
If you do this when you deploy a new Cloud Run service, you will get tons of lines and, at some point, you will see this:
==== request start ====
uri: https://run.googleapis.com/v1/projects/gbl-imt-homerider-basguillaueb/locations/us-central1/services/predict2:setIamPolicy?alt=json
method: POST
== headers start ==
b'Authorization': --- Token Redacted ---
b'X-Goog-User-Project': b'gbl-imt-homerider-basguillaueb'
b'accept': b'application/json'
b'accept-encoding': b'gzip, deflate'
b'content-length': b'98'
b'content-type': b'application/json'
b'user-agent': b'google-cloud-sdk gcloud/299.0.0 command/gcloud.run.deploy invocation-id/61070d063a604fdda8e87ad63777e3ae environment/devshell environment-version/None interactive/True from-script/False python/3.7.3 term/screen (Linux 4.19.112+
)'
== headers end ==
== body start ==
{"policy": {"bindings": [{"members": ["allUsers"], "role": "roles/run.invoker"}], "etag": "ACAB"}}
== body end ==
---- response start ----
status: 200
-- headers start --
-content-encoding: gzip
cache-control: private
content-length: 159
content-type: application/json; charset=UTF-8
date: Wed, 08 Jul 2020 11:37:11 GMT
server: ESF
transfer-encoding: chunked
vary: Origin, X-Origin, Referer
x-content-type-options: nosniff
x-frame-options: SAMEORIGIN
x-xss-protection: 0
-- headers end --
-- body start --
{
"version": 1,
"etag": "BwWp7IdZGHs=",
"bindings": [
{
"role": "roles/run.invoker",
"members": [
"allUsers"
]
}
]
}
So, this is an additional API call that the CLI performs for you. You can find the API definition here.
If you want to perform the call manually, you can make a request like this:
curl -H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "content-type: application/json" -X POST \
-d '{"policy": {"bindings": [{"members": ["allUsers"], "role": "roles/run.invoker"}]}}' \
"https://run.googleapis.com/v1/projects/<PROJECT_ID>/locations/<REGION>/services/<SERVICE_NAME>:setIamPolicy"
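Alternatively, if staying with the CLI is acceptable, the same grant can be scripted over every existing service. A sketch assuming the gcloud CLI is installed; REGION and PROJECT_ID are placeholders you must set:

```shell
# Grant allUsers the run.invoker role on every Cloud Run service in the
# given project and region, making them all publicly reachable.
for svc in $(gcloud run services list --platform=managed \
        --region "$REGION" --project "$PROJECT_ID" \
        --format 'value(metadata.name)'); do
    gcloud run services add-iam-policy-binding "$svc" \
        --platform=managed --region "$REGION" --project "$PROJECT_ID" \
        --member allUsers --role roles/run.invoker
done
```

Run after each deployment (e.g. as a final CI step), this approximates the "automatic" behaviour the question asks for without touching the REST API directly.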
I want to write a data point to my influxdb via bash shell but the timestamp seems to cause problems.
root#server:UP [~]# curl -i -XPOST 'http://localhost:8086/write?db=tbr' --data-binary 'codenarc maxPriority2Violations=917 maxPriority3Violations=3336 1593122400000000000'
HTTP/1.1 400 Bad Request
Content-Type: application/json
Request-Id: 03666c3f-b7b5-11ea-8659-02420a0a1b02
X-Influxdb-Build: OSS
X-Influxdb-Error: unable to parse 'codenarc maxPriority2Violations=917 maxPriority3Violations=3336 1593122400000000000': bad timestamp
X-Influxdb-Version: 1.7.10
X-Request-Id: 03666c3f-b7b5-11ea-8659-02420a0a1b02
Date: Fri, 26 Jun 2020 13:57:46 GMT
Content-Length: 129
{"error":"unable to parse 'codenarc maxPriority2Violations=917 maxPriority3Violations=3336 1593122400000000000': bad timestamp"}
This is how I created the timestamp in the first place
def date = LocalDateTime.of(2020, Month.JUNE, 26, 0, 0, 0)
def ms = date.atZone(ZoneId.systemDefault()).toInstant().toEpochMilli()
TimeUnit.NANOSECONDS.convert(ms, TimeUnit.MILLISECONDS)
So the timestamp is supposed to be in ns and I ensured that. Why is influxdb giving me that error message? What's wrong with that timestamp?
Thanks!
It's not actually a timestamp issue. In InfluxDB line protocol, multiple fields must be comma-separated with no spaces between them; because your two fields are separated by a space, InfluxDB parses maxPriority3Violations=3336 as the timestamp and rejects it. I've modified your curl request, try now:
curl -i -XPOST 'http://localhost:8086/write?db=tbr' --data-binary 'codenarc maxPriority2Violations=917,maxPriority3Violations=3336 1593122400000000000'
One more example:
curl -i -XPOST 'http://localhost:8086/write?db=mydb' --data-binary 'cpu_load_short,host=server02,region=us-west value=0.55 1422568543702900257'
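If you want to produce such a nanosecond timestamp from the shell rather than Groovy, GNU date can do it directly. A sketch (assumes GNU coreutils; uses UTC, whereas the Groovy snippet used the system default zone):

```shell
# %s gives epoch seconds, %N appends nanoseconds: together a 19-digit
# nanosecond timestamp suitable for InfluxDB line protocol
ts=$(date -u -d '2020-06-26 00:00:00' '+%s%N')
echo "codenarc maxPriority2Violations=917,maxPriority3Violations=3336 ${ts}"
```
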
I need to write an image parser for some website which will take images plus some other info and save them to my local folder.
So let's say we have image at this url :
https://i.stack.imgur.com/MiqEv.jpg
(this is someone's SO avatar)
So I want to save it to a local folder. Let's say to "~/test/image.png"
I found this link
And I tried this in my terminal:
rails console
require 'open-uri'
open('~/test/image.jpg', 'wb') do |file|
  file << open('https://i.stack.imgur.com/MiqEv.jpg').read
end
But my home/test folder is empty
And I got this output from console
#<File:~/test/image.jpg (closed)>
What do I do?
Also I tried this:
require 'open-uri'
download = open('https://i.stack.imgur.com/MiqEv.jpg')
IO.copy_stream(download, '~/test/image.jpg')
And got this output:
=> #https://i.stack.imgur.com/MiqEv.jpg>, #meta={"date"=>"Fri, 06 May 2016
11:58:05 GMT", "content-type"=>"image/jpeg", "content-length"=>"4276",
"connection"=>"keep-alive",
"set-cookie"=>"__cfduid=d7f982c0742bf40e58d626659c65a88841462535885;
expires=Sat, 06-May-17 11:58:05 GMT; path=/; domain=.imgur.com;
HttpOnly", "cache-control"=>"public, max-age=315360000",
"etag"=>"\"b75caf18a116034fc3541978de7bac5b\"", "expires"=>"Mon, 04
May 2026 11:58:05 GMT", "last-modified"=>"Thu, 28 Mar 2013 15:05:35
GMT", "x-amz-version-id"=>"TP7cpPcf0jWeW2t1gUz66VXYlevddAYh",
"cf-cache-status"=>"HIT", "vary"=>"Accept-Encoding",
"server"=>"cloudflare-nginx", "cf-ray"=>"29ec4221fdbf267e-FRA"},
#metas={"date"=>["Fri, 06 May 2016 11:58:05 GMT"],
"content-type"=>["image/jpeg"], "content-length"=>["4276"],
"connection"=>["keep-alive"],
"set-cookie"=>["__cfduid=d7f982c0742bf40e58d626659c65a88841462535885;
expires=Sat, 06-May-17 11:58:05 GMT; path=/; domain=.imgur.com;
HttpOnly"], "cache-control"=>["public, max-age=315360000"],
"etag"=>["\"b75caf18a116034fc3541978de7bac5b\""], "expires"=>["Mon, 04
May 2026 11:58:05 GMT"], "last-modified"=>["Thu, 28 Mar 2013 15:05:35
GMT"], "x-amz-version-id"=>["TP7cpPcf0jWeW2t1gUz66VXYlevddAYh"],
"cf-cache-status"=>["HIT"], "vary"=>["Accept-Encoding"],
"server"=>["cloudflare-nginx"], "cf-ray"=>["29ec4221fdbf267e-FRA"]},
#status=["200", "OK"]>
2.3.0 :244 > IO.copy_stream(download, '~/test/image.jpg') => 4276
But my folder is still empty.
What do I do??
The problem is that the file is not getting created. If you create the file using File.open or open and then pass it to IO.copy_stream, it will work.
Also, ~/ is not expanded by Ruby. You have to specify the whole path.
require 'open-uri'
download = open('https://i.stack.imgur.com/MiqEv.jpg')
file = open('/home/user/test/image.jpg', 'wb') # full path; '~' is not expanded
IO.copy_stream(download, file)
file.close
If you want a directory to be created as well, you will have to use Dir.mkdir. If you want to create nested directories, use FileUtils::mkdir_p. If it is difficult to use either, I would suggest using system 'mkdir dirname' or system 'mkdir -p dir1/dir2/dir3'
Dir.mkdir '/home/user/test' # doesn't work for nested folder creation
require 'fileutils'
FileUtils::mkdir_p '/home/user/test1/test2' # for nested directories
system 'mkdir ~/test' # shell command for directory creation
system 'mkdir -p ~/test1/test2' # shell command for nested directories
Hope this helps
If you are using Ubuntu, could you just use wget?
You can use both `wget 'https://i.stack.imgur.com/MiqEv.jpg'` and system("wget 'https://i.stack.imgur.com/MiqEv.jpg'"). Or system("wget 'https://i.stack.imgur.com/MiqEv.jpg' -O /your/path") to choose the output file.
Note: for the first command you need to wrap your command in backtick (`) signs. This makes Ruby call it as a system command.
Also, consider using /home/your_name instead of just ~. Also notice the leading / slash.
I am in a jam here, I am trying to decode this url,
http://www.fastpasstv.ms/redirect/?url=3uXn46ChlNzr7Z%2Fp3MrqycripNTi4JXVyp3h3N2r3sqo1N4%3D
This takes you to the
http://www.vidxden.com/ce8mfl8kd6oy
I have run base64 decoding on "3uXn46ChlNzr7Z%2Fp3MrqycripNTi4JXVyp3h3N2r3sqo1N4%3D" and it comes out as gibberish.
I am an imacros coder and need to decode this string for my client, Please help me out in this jam.
Regards
Ram
It doesn't decode to that URL, it decodes to a URL that when downloaded will redirect to your target:
$ python3 -c "import urllib.parse; print(urllib.parse.unquote('http://www.fastpasstv.ms/redirect/?url=3uXn46ChlNzr7Z/p3MrqycripNTi4JXVyp3h3N2r3sqo1N4='))"
http://www.fastpasstv.ms/redirect/?url=3uXn46ChlNzr7Z/p3MrqycripNTi4JXVyp3h3N2r3sqo1N4=
$ curl -sS http://www.fastpasstv.ms/redirect/?url=3uXn46ChlNzr7Z/p3MrqycripNTi4JXVyp3h3N2r3sqo1N4= -D /dev/tty
HTTP/1.1 302 Moved Temporarily
Server: nginx/1.0.11
Content-Type: text/html
Location: http://www.vidxden.com/ce8mfl8kd6oy
Content-Length: 0
Accept-Ranges: bytes
Date: Tue, 05 Jun 2012 22:54:31 GMT
Connection: keep-alive
Note the 302 status and the Location header
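If you only need the final destination rather than the intermediate hops, curl can follow the redirect itself and report the effective URL. A sketch (the fastpasstv link from the question may no longer resolve):

```shell
# -L follows redirects; %{url_effective} is the URL after the last hop
curl -s -o /dev/null -L -w '%{url_effective}\n' \
    'http://www.fastpasstv.ms/redirect/?url=3uXn46ChlNzr7Z%2Fp3MrqycripNTi4JXVyp3h3N2r3sqo1N4%3D'
```
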
I was curious, and ran some tests.
Feeding http://www.insidepro.com/hashes.php?lang=eng with http://www.vidxden.com/ce8mfl8kd6oy or www.vidxden.com/ce8mfl8kd6oy does not produce the URL to interpret.
Using http://code.google.com/p/hash-identifier/ on 3uXn46ChlNzr7Z/p3MrqycripNTi4JXVyp3h3N2r3sqo1N4= or 3uXn46ChlNzr7Z%2Fp3MrqycripNTi4JXVyp3h3N2r3sqo1N4%3D does not give any result either.
However: 3uXn46ChlNzr7Z seems to be used repeatedly by fastpasstv (google for it). And the rest (p3MrqycripNTi4JXVyp3h3N2r3sqo1N4) is said by Hash ID to possibly be an MD5 hash.
You will not easily decrypt the MD5 hash...
So, it seems you are out of luck for decrypting this. Just follow the redirect...
I've uploaded a (pgp) file via the documents API, and changed its
visibility to public. However, I'm unable to download it publicly
using the contents link for that file.
Here are the relevant bits of the xml for the meta-data for the file in
question.
$ curl -H "GData-Version: 3.0" -H "Authorization: Bearer ..." https://docs.google.com/feeds/default/private/full
...
<content type="application/pgp-encrypted" src="https://doc-0c-c0-docs.googleusercontent.com/docs/securesc/tkl8gnmcm9fhm6fec3160bcgajgf0i18/opa6m1tmj5cufpvrj89bv4dt0q6696a4/1336514400000/04627947781497054983/04627947781497054983/0B_-KWHz80dDXZ2dYdEZ0dGw3akE?h=16653014193614665626&e=download&gd=true"/>
...
<gd:feedLink rel="http://schemas.google.com/acl/2007#accessControlList" href="https://docs.google.com/feeds/default/private/full/file%3A0B_-KWHz80dDXZ2dYdEZ0dGw3akE/acl"/>
$ curl -H "GData-Version: 3.0" -H "Authorization: Bearer ..." https://docs.google.com/feeds/default/private/full/file%3A0B_-KWHz80dDXZ2dYdEZ0dGw3akE/acl
...
<entry gd:etag="W/"DUcNRns4eCt7ImA9WhVVFUw."">
<id>https://docs.google.com/feeds/id/file%3A0B_-KWHz80dDXZ2dYdEZ0dGw3akE/acl/default</id>
...
<gAcl:role value="reader"/>
<gAcl:scope type="default"/>
...
The role/scope returned for the file in question is reader/default, indicating
it is public. (It also shows up with public shared access in the web UI.)
However, accessing
the src attribute in the content element results in:
$ curl --verbose 'https://doc-0c-c0-docs.googleusercontent.com/docs/securesc/tkl8gnmcm9fhm6fec3160bcgajgf0i18/opa6m1tmj5cufpvrj89bv4dt0q6696a4/1336514400000/04627947781497054983/04627947781497054983/0B_-KWHz80dDXZ2dYdEZ0dGw3akE?h=16653014193614665626&e=download&gd=true'
< HTTP/1.1 401 Unauthorized
< Server: HTTP Upload Server Built on May 7 2012 18:16:42 (1336439802)
< WWW-Authenticate: GoogleLogin realm="http://www.google.com/accounts"
< Date: Tue, 08 May 2012 22:48:37 GMT
< Expires: Tue, 08 May 2012 22:48:37 GMT
< Cache-Control: private, max-age=0
< Content-Length: 0
< Content-Type: text/html
It seems like you are trying to publish a document: https://developers.google.com/google-apps/documents-list/#publishing_documents_by_publishing_a_single_revision
Once you publish it, the link with rel set to "http://schemas.google.com/docs/2007#publish" will point to the published document on the web.