I am trying to access a Google Cloud Storage bucket with axios to upload a file.
I set the CORS policy in the bucket to:
[
  {
    "origin": ["http://localhost:8000", "localhost"],
    "responseHeader": ["Access-Control-Allow-Origin", "Content-Type"],
    "method": ["GET", "HEAD", "DELETE", "PUT", "POST"],
    "maxAgeSeconds": 3600
  }
]
Then I generate a signed url using this gsutil command:
gsutil signurl -m RESUMABLE -d 1h my-key.json gs://test-bucket/
Then finally I send this axios POST request:
var startLink = "signed url from gsutil"
var data = {
  'Content-Length': 0,
  'Content-Type': 'text/plain',
  'x-goog-resumable': 'start',
  host: 'test-django-bucket.storage.googleapis.com',
};
axios.post(startLink, data)
  .then(function(response) {
    console.log(response);
  });
The result I get is:
<?xml version='1.0' encoding='UTF-8'?>
<Error>
  <Code>InvalidPolicyDocument</Code>
  <Message>The content of the form does not meet the conditions specified in the policy document.</Message>
  <Details>Missing policy</Details>
</Error>
What exactly have I done wrong here? I'm following the instructions found here.
Update:
A couple of notes on what I had to fix to get everything working, with help from @BrandonYarbrough's answer below:
First the axios request was wrong, it should be:
var data = {
  headers: {
    'content-type': 'text/plain',
    'x-goog-resumable': 'start',
  }
};
axios.post(startLink, {}, data)
  .then(function(response) {
    console.log(response);
  });
Next I had to update the gsutil command as described below to:
gsutil signurl -m RESUMABLE -d 10h -c "text/plain" mykey.json gs://test-bucket
You need to give gsutil two other pieces of information to add to the signature: the Content-Type, and the name of the object you're creating. Try this command:
gsutil signurl -m RESUMABLE -d 1h -c "text/plain" my-key.json gs://test-bucket/object-name.txt
Also, gsutil will probably output a URL like "storage.googleapis.com/test-django-bucket/your_object?lotsOfUrlParameters". If you were to go to that URL while specifying a host header of "test-django-bucket.storage.googleapis.com", it would appear that you actually wanted an object called "test-django-bucket/your_object" inside of a bucket called "test-django-bucket". Either remove the host header and hit storage.googleapis.com directly, or edit the URL returned by gsutil to remove the "test-django-bucket" bit.
In addition, I think you're sending headers as data; axios headers are set using the "headers" config section.
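Putting those pieces together, a minimal sketch of the corrected flow could look like the following. The names signedUrl and fileContents are placeholders, and the Location-header step reflects how GCS resumable upload sessions generally work rather than code from the question; if you run this in a browser you will probably also need to add "Location" to the responseHeader list of the bucket's CORS configuration so axios is allowed to read it.
// Sketch only: start a resumable session against the signed URL, then PUT the data
var axios = require('axios');

var signedUrl = 'signed url from gsutil';   // assumed to be generated with -c "text/plain"
var fileContents = 'hello world';           // placeholder for the data you want to upload

axios.post(signedUrl, {}, {
  headers: {
    'Content-Type': 'text/plain',     // must match the -c value used when signing
    'x-goog-resumable': 'start'
  }
})
.then(function (startResponse) {
  // GCS answers the start request with the resumable session URI in the Location header
  var sessionUri = startResponse.headers.location;
  return axios.put(sessionUri, fileContents, {
    headers: { 'Content-Type': 'text/plain' }
  });
})
.then(function (uploadResponse) {
  console.log(uploadResponse.status); // expect 200 when the upload completes
});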
I am trying to submit a transaction to Hyperledger Sawtooth v1.0.1 using javascript to a validator running on localhost. The code for the post request is as below:
request.post({
  url: constants.API_URL + '/batches',
  body: batchListBytes,
  headers: { 'Content-Type': 'application/octet-stream' }
}, (err, response) => {
  if (err) {
    console.log(err);
    return cb(err);
  }
  console.log(response.body);
  return cb(null, response.body);
});
The transaction gets processed when submitted from a backend Node.js application, but it returns an OPTIONS http://localhost:8080/batches 405 (Method Not Allowed) error when submitted from the client. These are the options that I have tried:
Inject Access-Control-Allow-* headers into the response using an extension: The response still gives the same error
Remove the custom header to bypass the preflight request: This makes the validator throw an error as shown:
...
sawtooth-rest-api-default | KeyError: "Key not found: 'Content-Type'"
sawtooth-rest-api-default | [2018-03-15 08:07:37.670 ERROR web_protocol] Error handling request
sawtooth-rest-api-default | Traceback (most recent call last):
...
The unmodified POST request from the browser gets the following response headers from the validator:
HTTP/1.1 405 Method Not Allowed
Content-Type: text/plain; charset=utf-8
Allow: GET,HEAD,POST
Content-Length: 23
Date: Thu, 15 Mar 2018 08:42:01 GMT
Server: Python/3.5 aiohttp/2.3.2
So I guess the OPTIONS method is not handled by the validator. A GET request for the state goes through fine when the CORS headers are added. This issue also did not occur in Sawtooth v0.8.
I am using docker to start the validator, and the commands to start it are a slightly modified version of those given in the LinuxFoundationX: LFS171x course. The relevant commands are below:
bash -c \"\
sawadm keygen && \
sawtooth keygen my_key && \
sawset genesis -k /root/.sawtooth/keys/my_key.priv && \
sawadm genesis config-genesis.batch && \
sawtooth-validator -vv \
--endpoint tcp://validator:8800 \
--bind component:tcp://eth0:4004 \
--bind network:tcp://eth0:8800
Can someone please guide me as to how to solve this problem?
CORS issues are always the best.
What is CORS?
Your browser is trying to protect users from being directed to a page they think is the frontend for an API, but is actually fraudulent. Any time a web page tries to access an API on a different domain, that API will need to explicitly give the web page permission, or the browser will block the request. This is why you can query the API from Node.js (no browser), and can put the REST API address directly into your address bar (same domain). However, trying to go from localhost:3000 to localhost:8008, or from file://path/to/your/index.html to localhost:8008, is going to get blocked.
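For illustration only, "explicitly giving permission" typically means the API answering with CORS headers like the ones below. This is a hypothetical Express middleware with a placeholder origin, not something the Sawtooth REST API does (the next section explains why):
var express = require('express');
var app = express();

app.use(function (req, res, next) {
  // Whitelist the web page's origin (placeholder value) and what it may send
  res.set('Access-Control-Allow-Origin', 'http://localhost:3000');
  res.set('Access-Control-Allow-Methods', 'GET, POST, OPTIONS');
  res.set('Access-Control-Allow-Headers', 'Content-Type');
  if (req.method === 'OPTIONS') {
    return res.sendStatus(204); // answer the preflight request directly
  }
  next();
});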
Why doesn't the Sawtooth REST API handle OPTIONS requests?
The Sawtooth REST API does not know the domain you are going to run your web page from, so it can't whitelist it explicitly. It is possible to whitelist all domains, but this obviously destroys any protection CORS might give you. Rather than try to weigh the costs and benefits of this approach for all Sawtooth users everywhere, the decision was made to make the REST API as lightweight and security agnostic as possible. Any developer using it would be expected to put it behind a proxy server, and they can make whatever security decisions they need on that proxy layer.
So how do you fix it?
You need to set up a proxy server that will put the REST API and your web page on the same domain. There is no quick configuration option for this. You will have to set up an actual server. Obviously there are lots of ways to do this. If you are already familiar with Node, you could serve the page from Node.js, and then have the Node server proxy the API calls (a rough sketch of that option follows below). If you are already running all of the Sawtooth components with docker-compose though, it might be easier to use Docker and Apache.
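As a rough sketch of that Node.js option (this is my own example, not something Sawtooth prescribes; it assumes Express and the http-proxy-middleware package, and that the REST API is reachable at rest-api:8008 as in the Docker setup below):
var express = require('express');
var { createProxyMiddleware } = require('http-proxy-middleware');

var app = express();

// Serve the web app's static files from ./public on the same origin
app.use(express.static('public'));

// Forward anything under /api to the Sawtooth REST API, avoiding CORS entirely
app.use('/api', createProxyMiddleware({
  target: 'http://rest-api:8008',
  changeOrigin: true,
  pathRewrite: { '^/api': '' }   // strip the /api prefix before forwarding
}));

app.listen(8000, function () {
  console.log('Web app and proxied REST API at http://localhost:8000');
});
With that in place the web page can call /api/batches on its own origin, exactly like the Apache setup described next.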
Setting up an Apache Proxy with Docker
Create your Dockerfile
In the same directory as your web app create a text file called "Dockerfile" (no extension). Then make it look like this:
FROM httpd:2.4
RUN echo "\
LoadModule proxy_module modules/mod_proxy.so\n\
LoadModule proxy_http_module modules/mod_proxy_http.so\n\
ProxyPass /api http://rest-api:8008\n\
ProxyPassReverse /api http://rest-api:8008\n\
RequestHeader set X-Forwarded-Path \"/api\"\n\
" >>/usr/local/apache2/conf/httpd.conf
This is going to do a couple of things. First it will pull down the httpd image from Docker Hub, which is just a simple static file server. Then we use a bit of bash to add five lines to Apache's configuration file. These five lines import the proxy modules, tell Apache that we want to proxy http://rest-api:8008 to the /api route, and set the X-Forwarded-Path header so the REST API can properly build response URLs. Make sure that rest-api matches the actual name of the Sawtooth REST API service in your docker-compose file.
Modify your docker compose file
Now, to the docker compose YAML file you are running Sawtooth through, you want to add a new property under the services key:
services:
  my-web-page:
    build: ./path/to/web/dir/
    image: my-web-page
    container_name: my-web-page
    volumes:
      - ./path/to/web/dir/public/:/usr/local/apache2/htdocs/
    expose:
      - 80
    ports:
      - '8000:80'
    depends_on:
      - rest-api
This will build your Dockerfile located at ./path/to/web/dir/Dockerfile (relative to the docker compose file), and run it with its default command, which is to start up Apache. Apache will serve whatever files are located in /usr/local/apache2/htdocs/, so we'll use volumes to link the path to your web files on your host machine (i.e. ./path/to/web/dir/public/), to that directory in the container. This is basically an alias, so if you update your web app later, you don't need to restart this docker container to see the changes. Finally, ports will take the server, which is at port 80 inside the container, and forward it out to localhost:8000.
Running it all
Now you should be able to run:
docker-compose -f path/to/your/compose-file.yaml up
And it will start up your Apache server along with the Sawtooth REST API and validator and any other services you defined. If you go to http://localhost:8000, you should see your web page, and if you go to http://localhost:8000/api/blocks, you should see a JSON representation of the blocks on chain. More importantly you should be able to make the request from your web app:
request.post({
  url: 'api/batches',
  body: batchListBytes,
  headers: { 'Content-Type': 'application/octet-stream' }
}, (err, response) => console.log(response));
Whew. Sorry for the long response, but I'm not sure if it is possible to solve CORS any faster. Hopefully this helps.
The transaction header should have details like the address where the data will be saved. Here is an example which I have used and which works fine for me:
String payload = "create,0001,BLockchain CPU,Black,5000";
logger.info("Sending payload as - " + payload);

String payloadBytes = Utils.hash512(payload.getBytes()); // fix for invalid payload serialization
ByteString payloadByteString = ByteString.copyFrom(payload.getBytes());

String address = getAddress(IDEM, ITEM_ID); // get unique address for input and output
logger.info("Sending address as - " + address);

TransactionHeader txnHeader = TransactionHeader.newBuilder().clearBatcherPublicKey()
        .setBatcherPublicKey(publicKeyHex)
        .setFamilyName(IDEM) // Idem family
        .setFamilyVersion(VER)
        .addInputs(address)
        .setNonce("1")
        .addOutputs(address)
        .setPayloadSha512(payloadBytes)
        .setSignerPublicKey(publicKeyHex)
        .build();

ByteString txnHeaderBytes = txnHeader.toByteString();

byte[] txnHeaderSignature = privateKey.signMessage(txnHeaderBytes.toString()).getBytes();
String value = Signing.sign(privateKey, txnHeader.toByteArray());

Transaction txn = Transaction.newBuilder().setHeader(txnHeaderBytes).setPayload(payloadByteString)
        .setHeaderSignature(value).build();

BatchHeader batchHeader = BatchHeader.newBuilder().clearSignerPublicKey().setSignerPublicKey(publicKeyHex)
        .addTransactionIds(txn.getHeaderSignature()).build();
ByteString batchHeaderBytes = batchHeader.toByteString();

byte[] batchHeaderSignature = privateKey.signMessage(batchHeaderBytes.toString()).getBytes();
String value_batch = Signing.sign(privateKey, batchHeader.toByteArray());

Batch batch = Batch.newBuilder()
        .setHeader(batchHeaderBytes)
        .setHeaderSignature(value_batch)
        .setTrace(true)
        .addTransactions(txn)
        .build();

BatchList batchList = BatchList.newBuilder()
        .addBatches(batch)
        .build();

ByteString batchBytes = batchList.toByteString();

String serverResponse = Unirest.post("http://localhost:8008/batches")
        .header("Content-Type", "application/octet-stream")
        .body(batchBytes.toByteArray())
        .asString()
        .getBody();
Context: I'm using Traefik as my reverse proxy to send HTTP requests to my backend Golang server, to which I've added some CORS handling. It works from Postman and when I cURL the HTTP GET request.
Problem: I'm getting a 404 error in the browser.
Axios call overriding Host
axios.create({
  baseURL: 'http://localhost',
})
axios.defaults.headers['Host'] = 'dev.docker.local'
I got this error in the console:
refused to set unsafe header "Host"
Axios call overriding default Host using X-Host-Override
axios.create({
  baseURL: 'http://localhost',
})
axios.defaults.headers['X-Host-Override'] = 'dev.docker.local'
Axios call setting default headers - seems like it's always using localhost as the Host
axios.create({
  baseURL: 'http://localhost',
  headers: {'Host': 'dev.docker.local'}
})
Setting CORS in the route handlers:
func About(w http.ResponseWriter, r *http.Request) {
    w.Header().Set("Content-Type", "application/json; charset=utf-8")
    w.Header().Set("Access-Control-Allow-Methods", "OPTIONS, GET")
    w.Header().Set("Access-Control-Allow-Origin", "*")
    w.Header().Set("Access-Control-Allow-Headers", "*")

    aboutStruct := about{
        Name: "Hello world",
    }

    w.WriteHeader(http.StatusOK)
    j, _ := json.Marshal(aboutStruct)
    w.Write(j)
}
I finally found a way to solve this problem for the browser: I needed to use dnsmasq to point docker.local to 127.0.0.1 and then set the baseURL to dev.docker.local, with no need to override Host.
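For completeness, a minimal sketch of the final axios setup under that solution (it assumes dnsmasq already resolves dev.docker.local to 127.0.0.1, and /about is just the example route from the handler above):
var axios = require('axios');

// Traefik routes on the Host header, so simply use the real hostname as the baseURL
var api = axios.create({
  baseURL: 'http://dev.docker.local',
});

api.get('/about').then(function (response) {
  console.log(response.data);
});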
I am trying to transfer files from a Python program running on a local Anaconda installation to a local Jupyter instance running inside a Docker container, using the Jupyter REST API.
I already managed to execute a requests.get() successfully after muddling through a bit on how to pass the token.
Now I would like to execute a requests.post() command to transfer the files.
Configuration:
local Docker container running on Docker Toolbox for Windows
Docker version 17.04.0-ce, build 4845c56
tensorflow/tensorflow image (latest) incl. Jupyter
jupyter_kernel_gateway==0.3.1
local Anaconda v4.3.14 running on a Windows 10 machine
Code:
token = token_code_provided_by_jupyter_at_startup
api_url = "http://192.168.99.100:8888/api/contents"

# getting the file's data from disk and converting it into a JSON body
cwd = os.getcwd()
file_location = cwd + r'\Resources\Test\test_post.py'
payload = open(file_location, 'r').read()
b64payload = base64.encodestring(payload)

body = json.dumps({
    'content': b64payload,
    'name': 'test_post.py',
    'path': '/api/contents/',
    'format': 'base64',
    'type': 'file'
})

# getting the xsrf cookie
client = requests.session()
client.get('http://192.168.99.100:8888/')
csrftoken = client.cookies['_xsrf']

headers = {'Content-type': 'application/json', 'X-CSRFToken': csrftoken, 'Referer': 'http://192.168.99.100:8888/api/contents', 'token': token}

response = requests.post(api_url, data=body, headers=headers, verify=True)
Error returned
[W 12:22:36.710 NotebookApp] 403 POST /api/contents (192.168.99.1): XSRF cookie does not match POST argument
[W 12:22:36.713 NotebookApp] 403 POST /api/contents (192.168.99.1) 4.17ms referer=http://192.168.99.100:8888/api/contents
My solution was inspired by @SaintNazaire. In my Chrome browser, I opened the cookie folder and found repeated _xsrf items in Cookies. I removed all of them and refreshed Jupyter, and then everything went well.
Actually there is no need for an xsrf cookie when using a header token for authentication.
headers = {'Authorization': 'token ' + token}
Reference is made to the Jupyter Notebook documentation:
http://jupyter-notebook.readthedocs.io/en/latest/security.html
I am creating an API for a backend service with Rails 4.
The service needs to upload an image file to an Amazon S3 bucket.
I'd like to use a direct upload URL, so that the clients manage the uploads to S3 and the server is not kept busy.
Currently I have the following prototypical rails action
def create
  filename = params[:filename]
  s3_direct_post = S3_BUCKET.presigned_post(key: "offers/#{SecureRandom.uuid}/#{filename}", acl: 'public-read')
  s3p = s3_direct_post.fields
  url = "#{s3_direct_post.url}/#{filename}?X-Amz-Algorithm=#{s3p['x-amz-algorithm']}&X-Amz-Credential=#{s3p['x-amz-credential']}&X-Amz-Date=#{s3p['x-amz-date']}&X-Amz-Expires=3600&X-Amz-SignedHeaders=host&X-Amz-Signature=#{s3p['x-amz-signature']}"
  render json: {success: true, url: url}, status: :ok
end
This generates such an url:
https://my-bucket.s3.eu-central-1.amazonaws.com/test.png?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=MYKEY/20150420/eu-central-1/s3/aws4_request&X-Amz-Date=20150420T162603Z&X-Amz-Expires=3600&X-Amz-SignedHeaders=host&X-Amz-Signature=MYSIGNATURE
Now I try to post test.png to this URL with the following:
curl -v -T test.png "url"
and I get the following error response:
<Error><Code>SignatureDoesNotMatch</Code><Message>The request signature we calculated does not match the signature you provided. Check your key and signing method.</Message><AWSAccessKeyId>MYKEY</AWSAccessKeyId>...
I believe the problem comes from the fact that the specified X-Amz-SignedHeaders header is wrong. I am not sure which headers are used by default by the AWS SDK gem for Rails.
How should I change my url generation, so that a mobile client can just take the url and post a file to it?
Here is a solution:
In config/initializers/aws.rb:
AWS_CREDS = Aws::Credentials.new(ENV['AWS_ACCESS_KEY_ID'], ENV['AWS_SECRET_ACCESS_KEY'])

Aws.config.update({
  region: 'eu-central-1',
  credentials: AWS_CREDS
})

S3 = Aws::S3::Resource.new(region: 'eu-central-1')

S3_BUCKET_NAME = ENV['S3_BUCKET_NAME']
S3_BUCKET = S3.bucket(S3_BUCKET_NAME)
In your model/controller/concern/or whatever:
obj = S3_BUCKET.object("offers/#{user.id}/#{self.id}")
url = obj.presigned_url(:put) # or obj.presigned_url(:put, acl: 'public-read') if you want to make the file public
Then to upload you can use a mobile client or curl:
curl -X PUT -T file_to_upload "url from above"
Note that you will have to add the x-amz-acl: public-read header if you used the public-read acl option:
curl -H "x-amz-acl: public-read" -X PUT -T file_to_upload "url from above"
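For a JavaScript client, the equivalent of that curl call with axios could look roughly like this (presignedUrl and fileData are placeholders; the x-amz-acl header should only be sent if the URL was presigned with the acl: 'public-read' option, as noted above):
var axios = require('axios');

var presignedUrl = 'url from above';      // the presigned :put URL generated in Rails
var fileData = '...file contents...';     // e.g. a Buffer, Blob, or string

axios.put(presignedUrl, fileData, {
  headers: {
    // include this only if the URL was presigned with acl: 'public-read'
    'x-amz-acl': 'public-read'
  }
}).then(function (response) {
  console.log(response.status); // expect 200 on success
});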
I am trying to do a GET request over SSL, and I'm getting the response code as 0 and the response body as blank.
REQUEST
Typhoeus::Request.get(
  "https://www.example.com",
  headers: { 'Accept' => "application/json" },
  ssl_verifypeer: false,
  userpwd: 'username' + ":" + 'pwd',
  sslversion: :sslv3
)
RESPONSE:
"ETHON: performed EASY url= response_code=0 return_code=peer_failed_verification total_time=0.22201"
The same URL works from the terminal via cURL, and it also works on a Mac machine.