I am trying to do a GET request via SSL and I am getting response code 0 with a blank response body.
REQUEST:
Typhoeus::Request.get("https://www.example.com", headers: { 'Accept' => "application/json" }, ssl_verifypeer: false, userpwd: 'username' + ":" + 'pwd', sslversion: :sslv3)
RESPONSE:
"ETHON: performed EASY url= response_code=0 return_code=peer_failed_verification total_time=0.22201"
The same URL works from the terminal via cURL, and it also works on a Mac machine.
Context: I'm using Traefik as my reverse proxy to send HTTP requests to my backend Golang server, to which I've added some CORS handling. It works from Postman and when cURLing the HTTP GET request.
Problem: I'm getting a 404 error in the browser.
Axios call overriding Host
axios.create({
baseURL: 'http://localhost',
})
axios.defaults.headers['Host'] = 'dev.docker.local'
I got this error in the console:
refused to set unsafe header "Host"
Axios call overriding default Host using X-Host-Override
axios.create({
baseURL: 'http://localhost',
})
axios.defaults.headers['X-Host-Override'] = 'dev.docker.local'
Axios call setting default headers - seems like it's always using localhost as the Host
axios.create({
baseURL: 'http://localhost',
headers: {'Host': 'dev.docker.local'}
})
Setting CORS in the route handlers:
func About(w http.ResponseWriter, r *http.Request) {
    // CORS headers so the browser accepts the cross-origin response
    w.Header().Set("Content-Type", "application/json; charset=utf-8")
    w.Header().Set("Access-Control-Allow-Methods", "OPTIONS, GET")
    w.Header().Set("Access-Control-Allow-Origin", "*")
    w.Header().Set("Access-Control-Allow-Headers", "*")

    aboutStruct := about{
        Name: "Hello world",
    }
    w.WriteHeader(http.StatusOK)
    j, _ := json.Marshal(aboutStruct)
    w.Write(j)
}
I finally found a way to solve this problem for the browser: I needed to use dnsmasq to point docker.local to 127.0.0.1 and then set baseURL to dev.docker.local, with no need to override Host.
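For anyone taking the same route, a minimal dnsmasq rule for this (a sketch, assuming the standard dnsmasq address directive) would be:
address=/docker.local/127.0.0.1
With that in place, dev.docker.local and any other *.docker.local name resolve to 127.0.0.1, so the browser sends the expected Host header on its own and no axios override is needed.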
I am trying to transfer files from a Python program running on a local Anaconda installation to a local Jupyter instance inside a Docker container, using the Jupyter REST API.
I already managed to execute a requests.get() successfully after muddling through a bit on how to pass the token.
Now I would like to execute a requests.post() command to transfer the files.
Configuration:
local docker container running on docker toolbox for windows
docker version 17.04.0-ce, build 4845c56
tensorflow/tensorflow incl. Jupyter latest version install
jupyter_kernel_gateway==0.3.1
local Anaconda v. 4.3.14 running on a windows 10 machine
Code:
import os
import base64
import json
import requests

token = token_code_provided_by_jupyter_at_startup
api_url = "http://192.168.99.100:8888/api/contents"
# getting the file's data from disk and converting it into a JSON body
cwd = os.getcwd()
file_location = cwd+r'\Resources\Test\test_post.py'
payload = open(file_location, 'r').read()
b64payload = base64.encodestring(payload)
body = json.dumps({
    'content': b64payload,
    'name': 'test_post.py',
    'path': '/api/contents/',
    'format': 'base64',
    'type': 'file'
})
# getting the xsrf cookie
client = requests.session()
client.get('http://192.168.99.100:8888/')
csrftoken = client.cookies['_xsrf']
headers ={'Content-type': 'application/json', 'X-CSRFToken':csrftoken, 'Referer':'http://192.168.99.100:8888/api/contents', 'token':token}
response = requests.post(api_url, data=body, headers=headers, verify=True)
Error returned
[W 12:22:36.710 NotebookApp] 403 POST /api/contents (192.168.99.1): XSRF cookie does not match POST argument
[W 12:22:36.713 NotebookApp] 403 POST /api/contents (192.168.99.1) 4.17ms referer=http://192.168.99.100:8888/api/contents
My solution was inspired by @SaintNazaire's. In my Chrome browser I opened the cookie settings, found the repeated _xsrf items among the cookies, removed all of them, and refreshed Jupyter; after that everything went well.
Actually, there is no need for the _xsrf cookie when using a header token for authentication.
headers = {'Authorization': 'token ' + token}
Reference is made to the Jupyter notebook documentation.
http://jupyter-notebook.readthedocs.io/en/latest/security.html
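Putting that together with the code from the question, a minimal sketch of the upload using only the Authorization header could look like the following (it uses PUT to the target path, which is how the Contents API addresses a named file; the host, port, and filename are the ones from the question):
import base64
import json
import requests

token = "token_code_provided_by_jupyter_at_startup"  # placeholder
api_url = "http://192.168.99.100:8888/api/contents/test_post.py"

# read the file and base64-encode its contents for the Contents API
with open("test_post.py", "rb") as f:
    b64payload = base64.b64encode(f.read()).decode("ascii")

body = json.dumps({
    "content": b64payload,
    "name": "test_post.py",
    "path": "test_post.py",
    "format": "base64",
    "type": "file",
})

# token auth via the Authorization header; no _xsrf cookie is required
headers = {
    "Content-Type": "application/json",
    "Authorization": "token " + token,
}

response = requests.put(api_url, data=body, headers=headers)
print(response.status_code, response.text)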
I am trying to access a Google Cloud Storage bucket with axios to upload a file.
I set the CORS policy in the bucket to:
[
{
"origin": ["http://localhost:8000", "localhost"],
"responseHeader": ["Access-Control-Allow-Origin", "Content-Type"],
"method": ["GET", "HEAD", "DELETE", "PUT", "POST"],
"maxAgeSeconds": 3600
}
]
Then I generate a signed url using this gsutil command:
gsutil signurl -m RESUMABLE -d 1h my-key.json gs://test-bucket/
Then finally I send this axios POST request:
var startLink = "signed url from gsutil"
var data = {
  'Content-Length': 0,
  'Content-Type': 'text/plain',
  'x-goog-resumable': 'start',
  host: 'test-django-bucket.storage.googleapis.com',
};
axios.post(startLink, data)
  .then(function(response) {
    console.log(response);
  });
The result I get is:
<?xml version='1.0' encoding='UTF-8'?>
<Error>
  <Code>InvalidPolicyDocument</Code>
  <Message>The content of the form does not meet the conditions specified in the policy document.</Message>
  <Details>Missing policy</Details>
</Error>
What exactly have I done wrong here? I'm following the instructions found here.
Update:
A couple of notes on what I had to fix to get everything working, based on the answer from @BrandonYarbrough below:
First, the axios request was wrong; it should be:
var data = {
  headers: {
    'content-type': 'text/plain',
    'x-goog-resumable': 'start',
  }
};
axios.post(startLink, {}, data)
  .then(function(response) {
    console.log(response);
  });
Next I had to update the gsutil command, as described below, to:
gsutil signurl -m RESUMABLE -d 10h -c "text/plain" mykey.json gs://test-bucket
You need to give gsutil two other pieces of information to add to the signature: the Content-Type, and the name of the object you're creating. Try this command:
gsutil signurl -m RESUMABLE -d 1h -c "text/plain" my-key.json gs://test-bucket/object-name.txt
Also, gsutil will probably output a URL like "storage.googleapis.com/test-django-bucket/your_object?lotsOfUrlParameters". If you were to go to that URL while specifying a host header of "test-django-bucket.storage.googleapis.com", it would appear that you actually wanted an object called "test-django-bucket/your_object" inside of a bucket called "test-django-bucket". Either remove the host header and hit storage.googleapis.com directly, or edit the URL returned by gsutil to remove the "test-django-bucket" bit.
In addition, I think you're sending headers as data; axios headers are set using the "headers" config section.
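For reference, the same resumable-session initiation can be reproduced outside the browser. Here is a rough sketch using Python's requests (the signed URL is a placeholder for the gsutil signurl output):
import requests

signed_url = "<output of gsutil signurl>"  # placeholder

# POST with an empty body; the Content-Type and x-goog-resumable headers
# must match what the URL was signed for
resp = requests.post(signed_url, headers={
    "Content-Type": "text/plain",
    "x-goog-resumable": "start",
})

# on success, GCS returns the resumable session URI in the Location header
print(resp.status_code, resp.headers.get("Location"))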
I am using EmberJS to communicate with an MVC Web API using Auth0 for authorization. When I have a valid bearer token I can communicate and retrieve data from the Web API just fine. When the token expires, the Web API returns an expected 401 and the following message is displayed in the browser console:
No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'http://localhost:4200'
The ajaxError method of the RESTAdapter is called as expected, but the jqXHR.status field is 0.
export default DS.RESTAdapter.extend({
  namespace: 'api',
  host: webApiUrl,
  headers: function() {
    return {
      "Authorization": 'Bearer ' + localStorage.getItem('userToken'),
    };
  }.property().volatile(),
  ajaxError: function(jqXHR) {
    var error = this._super(jqXHR);
    if (jqXHR && jqXHR.status === 401) {
      Ember.Logger.info("Not Authorized");
    }
    Ember.Logger.info("Error " + jqXHR.status + " calling API redirecting to login.");
  }
});
Here is a sample of the response returned from the API:
HTTP/1.1 401 Unauthorized
Cache-Control: no-cache
Pragma: no-cache
Content-Type: application/json; charset=utf-8
Expires: -1
Server: Microsoft-IIS/8.0
X-AspNet-Version: 4.0.30319
X-SourceFiles: =?UTF-8?B?QzpcU291cmNlXFBheWNvckRldlxTb3VyY2VcSW50ZWdyYXRpb25cR2VuZXJhbFxNYWluXFBheWNvci5JbnRlZ3JhdGlvbi5PcGVyYXRpb25zXFBheWNvci5JbnRlZ3JhdGlvbi5PcGVyYXRpb25zLkFwaVxhcGlcbG9nZ2luZ0V2ZW50cw==?=
X-Powered-By: ASP.NET
Date: Fri, 30 Jan 2015 16:45:35 GMT
Content-Length: 927
I have tried XML and plain-text Content-Types, but the result is the same.
I don't believe this is an actual CORS issue because this problem only occurs when the API returns an error; otherwise I'm downloading and displaying the data just fine.
Does anyone know what the issue might be?
Thanks in advance.
I was having the same issue and it was a CORS issue.
I'm not sure what backend your API server uses, but mine is a Rails API, and the solution was to move the CORS middleware to the top of the middleware stack:
config.middleware.insert_before 0, 'Rack::Cors' do
  allow do
    origins '*'
    resource '*', :headers => :any, :methods => [:get, :post, :options]
  end
end
The issue is a bit confusing because, before fixing the problem, if I made a request using cURL I received the correct response with the right headers, etc.:
$ curl -I -X GET -H 'Accept: application/json' http://api.example.dev/foo
HTTP/1.1 401 Unauthorized
X-Frame-Options: SAMEORIGIN
X-XSS-Protection: 1; mode=block
X-Content-Type-Options: nosniff
X-Pacu-Media-Type: v1
Content-Type: application/json; charset=utf-8
X-Rack-CORS: preflight-hit; no-origin
X-Request-Id: def30b49-895f-4581-82ec-87bcfb6c44e5
X-Runtime: 0.010945
Date: Sun, 22 Feb 2015 03:30:35 GMT
Connection: close
and the correct error message
$ curl -X GET -H 'Accept: application/json' http://api.example.dev/foo
{"errors":[{"id":"01ac93be-ea7a-4d8e-b86b-9ea1f4136b11","title":"unauthorized","detail":"The access token is invalid","status":"401"}
The problem is that cURL is not making a cross-site request, and therefore CORS isn't needed. When attempting to connect using jQuery, the request would fail with the following error in the browser's dev console:
No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'http://localhost:4200' is therefore not allowed access. The response had HTTP status code 401.
The response status was 401 but the response body was empty and the jqXHR.status was set to 0.
Again, the reason was that my Rails backend's CORS middleware needed to be at the top of the Rack middleware stack. Since I am using the routes engine as the exceptions app (config.exceptions_app = self.routes), I needed the CORS middleware to load before ActionDispatch::ShowExceptions.
I am having trouble making a proxy call. How on earth do you make this happen?
Here is what I have so far:
proxy_addr = '162.243.105.128'
proxy_port = 6170
Net::HTTP::Proxy(proxy_addr, proxy_port).start('www.google.com') {|http| http}
I get:
#<#<Class:0x007f85d8a092d0> www.google.com:80 open=false>
When:
Net::HTTP::Proxy(proxy_addr, proxy_port).start('www.google.com') {|http| http.get('www.google.com')}
I get
#<Net::HTTPNotImplemented 501 Tor is not an HTTP Proxy readbody=true>
How do I make this work?
Tor is a SOCKS proxy, not an HTTP proxy.
I ran into the same issue. I don't think it is possible with plain Net::HTTP. Install the socksify gem (http://socksify.rubyforge.org/):
require 'socksify/http'

# route the request through the SOCKS proxy instead of treating it as an HTTP proxy
http = Net::HTTP::SOCKSProxy(proxy_addr, proxy_port)
puts http.get(URI('http://echoip.com'))