How to decode this URL? It's pretty complicated

I am in a jam here: I am trying to decode this URL,
http://www.fastpasstv.ms/redirect/?url=3uXn46ChlNzr7Z%2Fp3MrqycripNTi4JXVyp3h3N2r3sqo1N4%3D
This takes you to
http://www.vidxden.com/ce8mfl8kd6oy
I have run base64 decoding on "3uXn46ChlNzr7Z%2Fp3MrqycripNTi4JXVyp3h3N2r3sqo1N4%3D" and it comes out as gibberish.
I am an iMacros coder and need to decode this string for my client. Please help me out of this jam.
Regards
Ram

It doesn't decode to that URL; it decodes to a URL that, when fetched, will redirect you to your target:
$ python3 -c "import urllib.parse; print(urllib.parse.unquote('http://www.fastpasstv.ms/redirect/?url=3uXn46ChlNzr7Z/p3MrqycripNTi4JXVyp3h3N2r3sqo1N4='))"
http://www.fastpasstv.ms/redirect/?url=3uXn46ChlNzr7Z/p3MrqycripNTi4JXVyp3h3N2r3sqo1N4=
$ curl -sS http://www.fastpasstv.ms/redirect/?url=3uXn46ChlNzr7Z/p3MrqycripNTi4JXVyp3h3N2r3sqo1N4= -D /dev/tty
HTTP/1.1 302 Moved Temporarily
Server: nginx/1.0.11
Content-Type: text/html
Location: http://www.vidxden.com/ce8mfl8kd6oy
Content-Length: 0
Accept-Ranges: bytes
Date: Tue, 05 Jun 2012 22:54:31 GMT
Connection: keep-alive
Note the 302 status and the Location header: the redirect is performed server-side, so there is nothing in the query string you need to decode.
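To double-check the asker's base64 attempt, here is a quick sketch (Python 3, standard library only) showing the parameter really does decode to gibberish:
import base64
import urllib.parse

# Percent-decode the query parameter, then base64-decode it.
token = urllib.parse.unquote("3uXn46ChlNzr7Z%2Fp3MrqycripNTi4JXVyp3h3N2r3sqo1N4%3D")
print(base64.b64decode(token))  # opaque high-bit bytes, not a readable URL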

I was curious and ran a few experiments.
Feeding http://www.insidepro.com/hashes.php?lang=eng with http://www.vidxden.com/ce8mfl8kd6oy or www.vidxden.com/ce8mfl8kd6oy does not produce the URL parameter.
Running http://code.google.com/p/hash-identifier/ on 3uXn46ChlNzr7Z/p3MrqycripNTi4JXVyp3h3N2r3sqo1N4= or 3uXn46ChlNzr7Z%2Fp3MrqycripNTi4JXVyp3h3N2r3sqo1N4%3D gives no result either.
However: 3uXn46ChlNzr7Z seems to be used repeatedly by fastpasstv (Google for it), and Hash ID says the rest (p3MrqycripNTi4JXVyp3h3N2r3sqo1N4) could be an MD5 hash.
You will not easily reverse an MD5 hash...
So it seems you are out of luck decrypting this. Just follow the redirect...
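If all you need is the final URL, a minimal sketch using the third-party requests library (the URL is the one from the question) captures the Location header without following it:
import requests

# Don't follow the redirect automatically; we just want the Location header.
url = "http://www.fastpasstv.ms/redirect/?url=3uXn46ChlNzr7Z%2Fp3MrqycripNTi4JXVyp3h3N2r3sqo1N4%3D"
resp = requests.get(url, allow_redirects=False)

print(resp.status_code)              # 302 in the transcript above
print(resp.headers.get("Location"))  # http://www.vidxden.com/ce8mfl8kd6oy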

Related

Writing to InfluxDB fails due to timestamp problem

I want to write a data point to my InfluxDB via the bash shell, but the timestamp seems to cause problems.
root#server:UP [~]# curl -i -XPOST 'http://localhost:8086/write?db=tbr' --data-binary 'codenarc maxPriority2Violations=917 maxPriority3Violations=3336 1593122400000000000'
HTTP/1.1 400 Bad Request
Content-Type: application/json
Request-Id: 03666c3f-b7b5-11ea-8659-02420a0a1b02
X-Influxdb-Build: OSS
X-Influxdb-Error: unable to parse 'codenarc maxPriority2Violations=917 maxPriority3Violations=3336 1593122400000000000': bad timestamp
X-Influxdb-Version: 1.7.10
X-Request-Id: 03666c3f-b7b5-11ea-8659-02420a0a1b02
Date: Fri, 26 Jun 2020 13:57:46 GMT
Content-Length: 129
{"error":"unable to parse 'codenarc maxPriority2Violations=917 maxPriority3Violations=3336 1593122400000000000': bad timestamp"}
This is how I created the timestamp in the first place (Groovy):
import java.time.LocalDateTime
import java.time.Month
import java.time.ZoneId
import java.util.concurrent.TimeUnit

def date = LocalDateTime.of(2020, Month.JUNE, 26, 0, 0, 0)
def ms = date.atZone(ZoneId.systemDefault()).toInstant().toEpochMilli()
TimeUnit.NANOSECONDS.convert(ms, TimeUnit.MILLISECONDS)
So the timestamp is supposed to be in ns, and I made sure of that. Why is InfluxDB giving me that error message? What's wrong with the timestamp?
Thanks!
It's not a timestamp issue. You are missing the comma between your two fields. InfluxDB line protocol is measurement[,tag_set] field_set timestamp, and the fields must be comma-separated with no spaces in between; without the comma, the parser reads maxPriority3Violations=3336 as the timestamp, which is why you get "bad timestamp". Here is your curl request corrected, try now:
curl -i -XPOST 'http://localhost:8086/write?db=tbr' --data-binary 'codenarc maxPriority2Violations=917,maxPriority3Violations=3336 1593122400000000000'
One more example:
curl -i -XPOST 'http://localhost:8086/write?db=mydb' --data-binary 'cpu_load_short,host=server02,region=us-west value=0.55 1422568543702900257'
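For comparison, a minimal Python sketch of the same write (assuming the third-party requests library and the db name from the question):
import requests

# Line protocol: measurement[,tag_set] field_set timestamp
# Fields are comma-separated with no spaces between them.
line = "codenarc maxPriority2Violations=917,maxPriority3Violations=3336 1593122400000000000"
resp = requests.post("http://localhost:8086/write", params={"db": "tbr"}, data=line)
print(resp.status_code)  # InfluxDB answers 204 No Content on a successful write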

Hyper POST request always yields 400

I am trying to send a POST request to a site using Hyper 0.9. The request works with curl:
curl https://api.particle.io/v1/devices/secret/set_light -d args=0 -d access_token=secret
and Python:
import requests
r = requests.post("https://api.particle.io/v1/devices/secret/set_light",
data={"access_token": "secret", "args": "0"})
but my Rust implementation doesn't seem to go through, always yielding 400.
use hyper::client::Client;

// A Client must be constructed before building the request.
let client = Client::new();
let addr = "https://api.particle.io/v1/devices/secret/set_light";
let body = "access_token=secret&args=0";
let mut res = client.post(addr)
    .body(body)
    .send()
    .unwrap();
It is greatly beneficial to be aware of various tools for debugging HTTP problems like this. In this case, I used nc to start a dumb server so I could see the headers the HTTP client is sending (nc -l 5000). I modified the cURL and Rust examples to point to 127.0.0.1:5000 and this was the output:
cURL:
POST /v1/devices/secret/set_light HTTP/1.1
Host: 127.0.0.1:5000
User-Agent: curl/7.43.0
Accept: */*
Content-Length: 26
Content-Type: application/x-www-form-urlencoded
args=0&access_token=secret
Hyper:
POST /v1/devices/secret/set_light HTTP/1.1
Host: 127.0.0.1:5000
Content-Length: 26
access_token=secret&args=0
I don't have an account at particle.io to test with, but I'm guessing you need that Content-Type header; in Hyper 0.9 that should be something along the lines of .header(ContentType::form_url_encoded()) with ContentType from hyper::header. Setting a User-Agent would be good etiquette, and the Accept header is really more for your benefit, so you might as well set them too.

Download link (via API) to publicly accessible file on Google Docs returns 401

I've uploaded a (pgp) file via the documents API, and changed its
visibility to public. However, I'm unable to download it publicly
using the contents link for that file.
Here are the relevant bits of the XML metadata for the file in question.
$ curl -H "GData-Version: 3.0" -H "Authorization: Bearer ..." https://docs.google.com/feeds/default/private/full
...
<content type="application/pgp-encrypted" src="https://doc-0c-c0-docs.googleusercontent.com/docs/securesc/tkl8gnmcm9fhm6fec3160bcgajgf0i18/opa6m1tmj5cufpvrj89bv4dt0q6696a4/1336514400000/04627947781497054983/04627947781497054983/0B_-KWHz80dDXZ2dYdEZ0dGw3akE?h=16653014193614665626&e=download&gd=true"/>
...
<gd:feedLink rel="http://schemas.google.com/acl/2007#accessControlList" href="https://docs.google.com/feeds/default/private/full/file%3A0B_-KWHz80dDXZ2dYdEZ0dGw3akE/acl"/>
$ curl -H "GData-Version: 3.0" -H "Authorization: Bearer ..." https://docs.google.com/feeds/default/private/full/file%3A0B_-KWHz80dDXZ2dYdEZ0dGw3akE/acl
...
<entry gd:etag='W/"DUcNRns4eCt7ImA9WhVVFUw."'>
<id>https://docs.google.com/feeds/id/file%3A0B_-KWHz80dDXZ2dYdEZ0dGw3akE/acl/default</id>
...
<gAcl:role value="reader"/>
<gAcl:scope type="default"/>
...
The role/scope returned for the file in question is reader/default, indicating
it is public. (It also shows up with public shared access in the web UI.)
However, accessing
the src attribute in the content element results in:
$ curl --verbose 'https://doc-0c-c0-docs.googleusercontent.com/docs/securesc/tkl8gnmcm9fhm6fec3160bcgajgf0i18/opa6m1tmj5cufpvrj89bv4dt0q6696a4/1336514400000/04627947781497054983/04627947781497054983/0B_-KWHz80dDXZ2dYdEZ0dGw3akE?h=16653014193614665626&e=download&gd=true'
< HTTP/1.1 401 Unauthorized
< Server: HTTP Upload Server Built on May 7 2012 18:16:42 (1336439802)
< WWW-Authenticate: GoogleLogin realm="http://www.google.com/accounts"
< Date: Tue, 08 May 2012 22:48:37 GMT
< Expires: Tue, 08 May 2012 22:48:37 GMT
< Cache-Control: private, max-age=0
< Content-Length: 0
< Content-Type: text/html
It seems like you are trying to publish a document: https://developers.google.com/google-apps/documents-list/#publishing_documents_by_publishing_a_single_revision
Once you publish it, the link with rel set to "http://schemas.google.com/docs/2007#publish" will point to the published document on the web.
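As a sketch of that lookup (assuming the third-party requests library; the resource ID is the one from the question, and the bearer token is elided as above), you could fetch the entry and pull out the publish link once it exists:
import requests
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"
entry_url = "https://docs.google.com/feeds/default/private/full/file%3A0B_-KWHz80dDXZ2dYdEZ0dGw3akE"
headers = {"GData-Version": "3.0", "Authorization": "Bearer ..."}

resp = requests.get(entry_url, headers=headers)
root = ET.fromstring(resp.content)

# The publish link only appears after the document has been published.
for link in root.iter(ATOM + "link"):
    if link.get("rel") == "http://schemas.google.com/docs/2007#publish":
        print(link.get("href"))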

POST with curl according to a particular format

I'm attempting to gather XML data automatically using curl, and my command so far is
curl -E keyStore.pem -d 'uid=myusername&password=mypassword&active=y&type=F' 'https://www.aidaptest.naimes.faa.gov/aidap/XmlNotamServlet HTTP/1.1/' -k
but it keeps on giving me a "Your browser sent a query this server could not understand." error.
I'm pretty sure that it's connecting since it's not rejecting me, but I don't know how to properly format the POST. Here's some documentation they gave me for the format of the POST request.
POST <URL>/aidap/XmlNotamServlet HTTP/1.1
Content-type: application/x-www-form-urlencoded
Content-length: <input_parameter's length>
<a blank line>
<input_parameter>
The input_parameter is the uid, password, and location_id part, and it is correct.
Am I doing this correctly from what you can see?
Something like this should do it.
curl -E keyStore.pem -v -X POST -i --header 'Content-Type: application/x-www-form-urlencoded' \
-d 'uid=myusername&password=mypassword&active=y&type=F' 'https://www.aidaptest.naimes.faa.gov/aidap/XmlNotamServlet'
I don't think you need HTTP/1.1 at the end of the URL in the command line. And even if you needed it, you certainly don't need the final / character before the closing single quote.
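If curl keeps fighting you, here is a minimal Python sketch of the same POST (assuming the third-party requests library; keyStore.pem must contain both the client certificate and its key, and verify=False mirrors curl's -k):
import requests

data = {"uid": "myusername", "password": "mypassword", "active": "y", "type": "F"}
resp = requests.post(
    "https://www.aidaptest.naimes.faa.gov/aidap/XmlNotamServlet",
    data=data,            # sent as application/x-www-form-urlencoded, header set automatically
    cert="keyStore.pem",  # client certificate, as with curl -E
    verify=False,         # skip server-certificate checks, as with curl -k
)
print(resp.status_code)
print(resp.text)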

Using wget to do monitoring probes

Before I bang my head against all the issues myself I thought I'd run it by you guys and see if you could point me somewhere or pass along some tips.
I'm writing a really basic monitoring script to make sure some of my web applications are alive and answering. I'll fire it off out of cron and send alert emails if there's a problem.
So what I'm looking for are suggestions on what to watch out for. Grepping the output of wget will probably get me by, but I was wondering if there was a more programmatic way to get robust status information out of wget and my resulting web page.
This is a general kind of question, I'm just looking for tips from anybody who happens to have done this kind of thing before.
Check the exit code:
wget -q --timeout=10 -O /dev/null http://example.com/mypage  # plus whatever other options you need
if [ $? -ne 0 ] ; then
    echo "there's a problem" # mail logs, send an SMS, etc.
fi
I prefer curl --head for this type of usage:
% curl --head http://stackoverflow.com/
HTTP/1.1 200 OK
Cache-Control: public, max-age=60
Content-Length: 359440
Content-Type: text/html; charset=utf-8
Expires: Tue, 05 Oct 2010 19:06:52 GMT
Last-Modified: Tue, 05 Oct 2010 19:05:52 GMT
Vary: *
Date: Tue, 05 Oct 2010 19:05:51 GMT
This will allow you to check the return status to make sure it's 200 (or whatever you're expecting it to be) and the Content-Length to make sure it's the expected value (or at least not zero). And curl will exit non-zero if there's any problem with the connection.
If you want to check for changes in the page content, pipe the output through md5sum and then compare the result to your pre-computed known value:
wget -O - http://stackoverflow.com | md5sum
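If you outgrow one-liners, a small Python sketch (assuming the third-party requests library; the URL and expected hash below are hypothetical placeholders) covers the same checks programmatically:
import hashlib
import sys

import requests

URL = "http://example.com/mypage"                   # hypothetical page to monitor
EXPECTED_MD5 = "d41d8cd98f00b204e9800998ecf8427e"   # hypothetical pre-computed hash

try:
    resp = requests.get(URL, timeout=10)
except requests.RequestException as exc:
    sys.exit("connection problem: %s" % exc)

if resp.status_code != 200:
    sys.exit("unexpected status: %d" % resp.status_code)

if hashlib.md5(resp.content).hexdigest() != EXPECTED_MD5:
    sys.exit("page content changed")

print("OK")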
