Get status code for a given URL with a timeout - ruby-on-rails

I am working on a Ruby on Rails application. I want to ping a URL with a given timeout and interval. I was able to get the status code using curl, and even with net/http, but without specifying a timeout.
This works fine without a timeout:
%x{curl -sL -w "%{http_code}" "URL" -o /dev/null}
Is there any other way, or a modification to this, that lets me specify a timeout?
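For reference, curl itself can bound the request with --connect-timeout and --max-time (-m), which could be added to the %x call above. On the net/http side, here is a minimal hedged sketch (the URL and the 30-second interval are placeholders; open_timeout and read_timeout bound the connect and read phases):
require 'net/http'
require 'uri'

def status_code(url, timeout: 5)
  uri = URI(url)
  Net::HTTP.start(uri.host, uri.port,
                  use_ssl: uri.scheme == 'https',
                  open_timeout: timeout,
                  read_timeout: timeout) do |http|
    # HEAD is usually enough for a status check; unlike curl -L,
    # this does not follow redirects, so a 301 is returned as-is.
    http.request_head(uri.request_uri).code
  end
rescue Net::OpenTimeout, Net::ReadTimeout
  nil # treat a timeout as "no status"
end

# The "interval" part is then just a loop:
# loop { puts(status_code('https://example.com') || 'timeout'); sleep 30 }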

Related

Let the cgi-fcgi command fail on a bad status code

I am trying to write a health check for Kubernetes. The application runs with php-fpm, so the check needs to speak FastCGI.
I have this script
#!/usr/bin/env bash
fcgi_env=(
SCRIPT_NAME=/var/www/html/public/index.php
SCRIPT_FILENAME=/var/www/html/public/index.php
REMOTE_ADDR=127.0.0.1
REQUEST_METHOD=GET
REQUEST_URI="$1"
PATH_INFO="$1"
)
env -i "${fcgi_env[@]}" cgi-fcgi -bind -connect 127.0.0.1:9000
The problem is that this shows me headers and body, but I am interested in the status code; more specifically, I want the check to fail if the status code is greater than 299.
I can set some custom header and grep for that and so on, but I would like a more solid approach. So the question is: how do I retrieve the status code, or even better have it fail on a bad status, similar to curl's --fail option?
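A hedged sketch of one possible approach in Ruby (a Kubernetes exec probe can run any executable): run cgi-fcgi with a clean environment and exit non-zero when the CGI Status: header reports 300 or above. This assumes php-fpm emits a Status: header for non-200 responses and omits it for a plain 200, which is common CGI behaviour.
#!/usr/bin/env ruby
require 'open3'

fcgi_env = {
  'SCRIPT_NAME'     => '/var/www/html/public/index.php',
  'SCRIPT_FILENAME' => '/var/www/html/public/index.php',
  'REMOTE_ADDR'     => '127.0.0.1',
  'REQUEST_METHOD'  => 'GET',
  'REQUEST_URI'     => ARGV[0].to_s,
  'PATH_INFO'       => ARGV[0].to_s
}

# Clean environment (like env -i) plus the FastCGI variables.
out, _err, status = Open3.capture3(fcgi_env, 'cgi-fcgi', '-bind', '-connect', '127.0.0.1:9000',
                                   unsetenv_others: true)
abort 'cgi-fcgi failed' unless status.success?

# No Status: header usually means 200 OK in a CGI response.
code = out[/^Status:\s*(\d+)/, 1]&.to_i || 200
exit(code < 300 ? 0 : 1)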

How to emulate curl requests in a Rails app?

I want to test that a PUT to an endpoint (products/:id) works, but when I try
curl -X PUT -d listing_id_created=True localhost:3000/products/27
it gives ActionController::InvalidAuthenticityToken, which I now realise is the expected result (no authenticity token is provided, because the PUT is coming from curl, which knows nothing about it).
So my question is: how do I run some simple curl PUTs (or any other verbs) to check that endpoints work correctly? Is the only solution to disable/skip the authenticity token?
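One commonly used approach (a sketch, not the only option; Rails also offers protect_from_forgery with: :null_session for API-style controllers) is to skip the CSRF check just for the actions you are exercising with curl:
class ProductsController < ApplicationController
  # verify_authenticity_token is the callback installed by
  # protect_from_forgery; skipping it lets curl's PUT through.
  # Remove this once real clients send the token.
  skip_before_action :verify_authenticity_token, only: :update

  def update
    # ...
  end
end
Alternatively, Rails integration/request tests exercise the same endpoints without fighting CSRF, since a default Rails test configuration turns forgery protection off.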

How to get an OpenShift session token using REST API calls

As part of an automated test suite I have to use OpenShift's REST APIs to send commands and get OpenShift's status. To authenticate these API calls I need to embed an authorization token in every call.
Currently, I get this token by executing the following commands with ssh on the machine where OpenShift is installed:
oc login --username=<uname> --password=<password>
oc whoami --show-token
I would like to stop using the oc tool completely and get this token using HTTP calls to the APIs, but I am not really able to find a document that explains how to do that. If I use the option --loglevel=10 when calling oc commands I can see the HTTP calls made by oc when logging in, but it is quite difficult for me to reverse-engineer the process from these logs.
In theory this is not something specific to OpenShift but rather to the OAuth protocol. I have found some documentation, like the one posted here, but I still find it difficult to implement without specific examples.
If that helps, I am developing this tool using Ruby (not Rails).
P.S. I know that normally for this type of job one should use Service Account tokens, but since this is a testing environment the OpenShift installation gets removed and reinstalled fairly often. This would force me to re-create the service account every time with the oc command line tool and again prevent me from automating the process.
I have found the answer in this GitHub issue.
Surprisingly, one curl command is enough to get the token:
curl -u joe:password -kv -H "X-CSRF-Token: xxx" 'https://master.cluster.local:8443/oauth/authorize?client_id=openshift-challenging-client&response_type=token'
The response is going to be an HTTP 302 trying to redirect to another URL. The redirection URL will contain the token, for example:
Location: https://master.cluster.local:8443/oauth/token/display#access_token=VO4dAgNGLnX5MGYu_wXau8au2Rw0QAqnwq8AtrLkMfU&expires_in=86400&token_type=bearer
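Since the question mentions Ruby, here is a hedged net/http sketch of the same flow, using the placeholder host and credentials from the curl example above: call /oauth/authorize with basic auth, do not follow the redirect, and pull access_token out of the Location header.
require 'net/http'
require 'openssl'
require 'uri'

uri = URI('https://master.cluster.local:8443/oauth/authorize' \
          '?client_id=openshift-challenging-client&response_type=token')

http = Net::HTTP.new(uri.host, uri.port)
http.use_ssl = true
http.verify_mode = OpenSSL::SSL::VERIFY_NONE # like curl -k, for test clusters only

req = Net::HTTP::Get.new(uri)
req.basic_auth('joe', 'password')
req['X-CSRF-Token'] = 'xxx'

res = http.request(req) # net/http does not follow the 302 by itself
token = res['Location'][/access_token=([^&]+)/, 1]
puts token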
You can use a token or a username/password combination.
To use username:password in a header, you can use Authorization: Basic. The oc client commands do simple authentication with your username and password in the header, like this:
curl -H "Authorization: Basic <SOMEHASH>"
where the hash is the base64-encoded username:password (try it with echo -n "username:password" | base64).
To use a token, you can obtain one with curl:
curl -H "Authorization: Basic $(echo -n username:password | base64)" https://openshift.example.com:8443/oauth/authorize\?response_type\=token\&client_id\=openshift-challenging-client
But the token is returned in an ugly format. You can grep it out:
... | grep -oP "access_token=\K[^&]*"
You need to use the correct URL for your OAuth server. In my case (OpenShift 4.7) it is:
https://oauth-openshift.apps.<clustername><domain>/oauth/authorize\?response_type\=token\&client_id\=openshift-challenging-client
You can look up the host with:
oc get route oauth-openshift -n openshift-authentication -o json | jq .spec.host
In case you are using OpenShift CRC, the URL is:
https://oauth-openshift.apps-crc.testing/oauth/authorize
Command to get the Token:
curl -v --insecure --user developer:developer --header "X-CSRF-Token: xxx" --url "https://oauth-openshift.apps-crc.testing/oauth/authorize?response_type=token&client_id=openshift-challenging-client" 2>&1 | grep -oP "access_token=\K[^&]*"
Note:
2>&1 is required because curl writes to standard error.
--insecure: because I have not set up a TLS certificate.
Adjust the user and password as needed (developer/developer is the standard user in CRC, and therefore good for testing).
The token is valid for 24 hours by default.
Export the token to an environment variable:
export TOKEN=$(curl -v --insecure --user developer:developer --header "X-CSRF-Token: xxx" --url "https://oauth-openshift.apps-crc.testing/oauth/authorize?response_type=token&client_id=openshift-challenging-client" 2>&1 | grep -oP "access_token=\K[^&]*")
Then use the token in, e.g., oc login:
oc login --token=$TOKEN --server=https://api.crc.testing:6443

Multi-file upload using curl to a Ruby on Rails app

I'm struggling with how to send an unknown number of files to my Rails app using curl.
This is my curl request to POST with one file:
curl -H 'Authorization: Token token=your_token' -X POST -F job[webapp_id]=2 -F job[file]=@test.txt localhost:3000/api/v0/jobs
It works.
I would like to allow the user to send as many files as they want, with something like:
-F job[files][]=@test1.txt -F job[files][]=@test2.txt
-F job[files[]]=@test1.txt -F job[files[]]=@test2.txt
But it's not working.
I also tried with :
-F job[files[0]]=@test.txt -F job[files[1]]=@test2.txt
Still not working. I think it's because I don't know how to tweak my permit parameters. I get an empty array.
Any idea how to do it in one request?
RestClient may be easier to work with.
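Expanding on that: with the file arguments spelled as @test1.txt, the -F job[files][]=@test1.txt -F job[files][]=@test2.txt form is the usual curl syntax, and an empty array on the Rails side usually points at strong parameters, which must permit the array explicitly. A hedged sketch of both sides (controller details and file names are assumptions based on the question):
# In the controller: permit an array of files under job[files].
def job_params
  params.require(:job).permit(:webapp_id, files: [])
end

# RestClient equivalent of the curl request, assuming rest-client
# flattens the nested array into job[files][] multipart parts:
require 'rest-client'

RestClient.post(
  'http://localhost:3000/api/v0/jobs',
  { job: { webapp_id: 2,
           files: [File.new('test1.txt', 'rb'), File.new('test2.txt', 'rb')] } },
  { Authorization: 'Token token=your_token' }
)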

Getting only response header from HTTP POST using cURL

One can request only the headers using HTTP HEAD, via option -I in curl(1).
$ curl -I /
Lengthy HTML response bodies are a pain to get in command-line, so I'd like to get only the header as feedback for my POST requests. However, HEAD and POST are two different methods.
How do I get cURL to display only response headers to a POST request?
-D, --dump-header <file>
Write the protocol headers to the specified file.
This option is handy to use when you want to store the headers
that a HTTP site sends to you. Cookies from the headers could
then be read in a second curl invocation by using the -b,
--cookie option! The -c, --cookie-jar option is however a better
way to store cookies.
and
-S, --show-error
When used with -s, --silent, it makes curl show an error message if it fails.
from the man page. So:
curl -sSL -D - www.acooke.org -o /dev/null
follows redirects, dumps the headers to stdout and sends the data to /dev/null (that's a GET, not a POST, but you can do the same thing with a POST: just add whatever option you're already using for POSTing data).
Note the - after the -D, which indicates that the output "file" is stdout.
The other answers require the response body to be downloaded. But there's a way to make a POST request that will only fetch the header:
curl -s -I -X POST http://www.google.com
An -I by itself performs a HEAD request which can be overridden by -X POST to perform a POST (or any other) request and still only get the header data.
The following command displays extra information:
curl -X POST http://httpbin.org/post -v > /dev/null
You can ask the server to send just the HEAD, instead of the full response:
curl -X HEAD -I http://httpbin.org/
Note: in some cases the server may send different headers for POST and HEAD, but in almost all cases the headers are the same.
For long response bodies (and various other similar situations), the solution I use is always to pipe to less, so
curl -i https://api.github.com/users | less
or
curl -s -D - https://api.github.com/users | less
will do the job.
Maybe it is a little extreme, but I am using this super-short version:
curl -svo. <URL>
Explanation:
-v prints debug information (which includes the headers)
-o. sends the page data (which we want to ignore) to a certain file, . in this case, which is a directory and therefore an invalid destination, so the output is discarded
-s no progress bar, no error information (otherwise you would see Warning: Failed to create the file .: Is a directory)
Warning: the command always fails in terms of exit code, reachable or not, so do not use it in, say, conditional statements in shell scripting...
Much easier: this also follows redirects.
curl -IL http://example.com/in-the-shadows
While the other answers have not worked for me in all situations, the best solution I could find (which works with POST as well), taken from here, is:
curl -vs 'https://some-site.com' 1> /dev/null
headcurl.cmd (Windows version)
curl -sSkv -o NUL %* 2>&1
I don't want a progress bar (-s),
but I do want errors (-S),
I'm not bothering about valid HTTPS certificates (-k),
and I'm getting high verbosity (-v) (this is about troubleshooting, isn't it?),
with no output (in a clean way).
Oh, and I want to forward stderr to stdout, so I can grep against the whole thing (since most or all of the output arrives on stderr).
%* means "pass all parameters of this script on" (see https://stackoverflow.com/a/980372/444255); usually that's just one parameter: the URL you are testing.
real-world example (on troubleshooting proxy issues):
C:\depot>headcurl google.ch | grep -i -e http -e cache
Hostname was NOT found in DNS cache
GET HTTP://google.ch/ HTTP/1.1
HTTP/1.1 301 Moved Permanently
Location: http://www.google.ch/
Cache-Control: public, max-age=2592000
X-Cache: HIT from company.somewhere.ch
X-Cache-Lookup: HIT from company.somewhere.ch:1234
Linux version
for your .bash_aliases / .bash_rc:
alias headcurl='curl -sSkv -o /dev/null $@ 2>&1'
