What is the easiest way to get an HTTP response from command-line Dart?

I am writing a command-line script in Dart. What's the easiest way to access (and GET) an HTTP resource?

Use the http package for easy command-line access to HTTP resources. While the core dart:io library has the primitives for HTTP clients (see HttpClient), the http package makes it much easier to GET, POST, etc.
First, add http to your pubspec's dependencies:
name: sample_app
description: My sample app.
dependencies:
  http: any
Install the package. Run this on the command line or via Dart Editor:
pub install
Import the package:
// inside your app
import 'package:http/http.dart' as http;
Make a GET request. The get() function returns a Future.
http.get('http://example.com/hugs').then((response) => print(response.body));
It's best practice to return the Future from the function that uses get():
Future getAndParse(String uri) {
  return http.get(uri) // use the uri argument, not a hardcoded URL
      .then((response) => JSON.parse(response.body));
}
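Callers can then chain on the returned Future. A minimal sketch of such a call site (the error handler is my addition, not part of the original answer):
getAndParse('http://example.com/hugs')
    .then((data) => print(data))
    .catchError((e) => print('Request failed: $e'));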
Unfortunately, I couldn't find any formal docs, so I had to look through the code (which does have good comments): https://code.google.com/p/dart/source/browse/trunk/dart/pkg/http/lib/http.dart
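As an aside not in the original answer: in current versions of the http package, get() takes a Uri rather than a String, so the equivalent call today looks roughly like this:
import 'package:http/http.dart' as http;

Future<void> main() async {
  // Uri.parse is required by the newer http package API
  final response = await http.get(Uri.parse('http://example.com/hugs'));
  print(response.body);
}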

This is the shortest command I could find:
curl -sL -w "%{http_code} %{url_effective}\\n" "URL" -o /dev/null
Here, -s silences curl's progress output, -L follows all redirects, -w prints the report using a custom format, and -o redirects curl's HTML output to /dev/null.
Here are the other special variables available in case you want to customize the output some more (a combined example follows the list):
url_effective
http_code
http_connect
time_total
time_namelookup
time_connect
time_pretransfer
time_redirect
time_starttransfer
size_download
size_upload
size_header
size_request
speed_download
speed_upload
content_type
num_connects
num_redirects
ftp_entry_path
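For instance, a sketch that pulls a few of the timing and size variables into one report (the URL is a placeholder):
curl -sL -w "status: %{http_code}\ntotal: %{time_total}s\ndownloaded: %{size_download} bytes\n" "https://example.com" -o /dev/null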

Related

Executing Request against Go function in Docker image

I am trying to create an AWS Lambda function written in Go. To do this, I've followed the steps provided [here]. I can successfully build my Docker image. However, when I run the image, I receive the following warning:
WARNING: The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
I am using an M1 MacBook. But, since it was a warning, I thought I would still try to submit a request. My .go file looks like this:
package main

import (
    "context"
    "fmt"

    "github.com/aws/aws-lambda-go/lambda"
)

type MyEvent struct {
    Name string `json:"name"`
}

func HandleRequest(ctx context.Context, name MyEvent) (string, error) {
    fmt.Println("Hello, world")
    return fmt.Sprintf("Hello %s!", name.Name), nil
}

func main() {
    lambda.Start(HandleRequest)
}
The cURL request looks like this:
curl -XPOST "http://localhost:9000/2015-03-31/functions/function/invocations" -d '{"name":"steve"}'
When I submit the request, the Terminal window just sits there, and I do not see anything written in the Docker console window. What am I doing wrong? I'm not seeing any errors, but I'm also not seeing any activity.

Hyperledger Sawtooth - Preflight error while submitting transaction

I am trying to submit a transaction to Hyperledger Sawtooth v1.0.1 from JavaScript to a validator running on localhost. The code for the POST request is below:
request.post({
  url: constants.API_URL + '/batches',
  body: batchListBytes,
  headers: { 'Content-Type': 'application/octet-stream' }
}, (err, response) => {
  if (err) {
    console.log(err);
    return cb(err);
  }
  console.log(response.body);
  return cb(null, response.body);
});
The transaction gets processed when submitted from a backend Node.js application, but it returns an OPTIONS http://localhost:8080/batches 405 (Method Not Allowed) error when submitted from the client. These are the options that I have tried:
Inject Access-Control-Allow-* headers into the response using an extension: The response still gives the same error
Remove the custom header to bypass preflight request: This makes the validator throw an error as shown:
...
sawtooth-rest-api-default | KeyError: "Key not found: 'Content-Type'"
sawtooth-rest-api-default | [2018-03-15 08:07:37.670 ERROR web_protocol] Error handling request
sawtooth-rest-api-default | Traceback (most recent call last):
...
The unmodified POST request from the browser gets the following response headers from the validator:
HTTP/1.1 405 Method Not Allowed
Content-Type: text/plain; charset=utf-8
Allow: GET,HEAD,POST
Content-Length: 23
Date: Thu, 15 Mar 2018 08:42:01 GMT
Server: Python/3.5 aiohttp/2.3.2
So I guess the OPTIONS method is not handled in the validator. A GET request for the state goes through fine when the CORS headers are added. This issue was also not present in Sawtooth v0.8.
I am using docker to start the validator, and the commands to start it are a slightly modified version of those given in the LinuxFoundationX: LFS171x course. The relevant commands are below:
bash -c \"\
sawadm keygen && \
sawtooth keygen my_key && \
sawset genesis -k /root/.sawtooth/keys/my_key.priv && \
sawadm genesis config-genesis.batch && \
sawtooth-validator -vv \
--endpoint tcp://validator:8800 \
--bind component:tcp://eth0:4004 \
--bind network:tcp://eth0:8800
Can someone please guide me as to how to solve this problem?
CORS issues are always the best.
What is CORS?
Your browser is trying to protect users from being directed to a page they think is the frontend for an API but is actually fraudulent. Any time a web page tries to access an API on a different domain, that API must explicitly give the web page permission, or the browser will block the request. This is why you can query the API from Node.js (no browser), and why you can put the REST API address directly into your address bar (same domain). However, trying to go from localhost:3000 to localhost:8008, or from file://path/to/your/index.html to localhost:8008, is going to get blocked.
Why doesn't the Sawtooth REST API handle OPTIONS requests?
The Sawtooth REST API does not know the domain you are going to run your web page from, so it can't whitelist it explicitly. It is possible to whitelist all domains, but this obviously destroys any protection CORS might give you. Rather than try to weigh the costs and benefits of this approach for all Sawtooth users everywhere, the decision was made to make the REST API as lightweight and security agnostic as possible. Any developer using it would be expected to put it behind a proxy server, and they can make whatever security decisions they need on that proxy layer.
So how do you fix it?
You need to set up a proxy server that will put the REST API and your web page on the same domain. There is no quick configuration option for this; you will have to set up an actual server. Obviously there are lots of ways to do this. If you are already familiar with Node, you could serve the page from Node.js and have the Node server proxy the API calls (a sketch follows). If you are already running all of the Sawtooth components with docker-compose, though, it might be easier to use Docker and Apache.
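For the Node route, here is a minimal sketch using Express with the http-proxy-middleware package (both package choices are mine, not part of the original answer; the ports follow the question):
const express = require('express');
const { createProxyMiddleware } = require('http-proxy-middleware');

const app = express();

// Serve the web app's static files from ./public
app.use(express.static('public'));

// Forward /api/* to the Sawtooth REST API so browser requests stay same-origin
app.use('/api', createProxyMiddleware({
  target: 'http://localhost:8008',
  pathRewrite: { '^/api': '' },
}));

app.listen(8000, () => console.log('Listening on http://localhost:8000'));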
Setting up an Apache Proxy with Docker
Create your Dockerfile
In the same directory as your web app create a text file called "Dockerfile" (no extension). Then make it look like this:
FROM httpd:2.4
RUN echo "\
LoadModule proxy_module modules/mod_proxy.so\n\
LoadModule proxy_http_module modules/mod_proxy_http.so\n\
ProxyPass /api http://rest-api:8008\n\
ProxyPassReverse /api http://rest-api:8008\n\
RequestHeader set X-Forwarded-Path \"/api\"\n\
" >>/usr/local/apache2/conf/httpd.conf
This is going to do a couple of things. First it pulls down the httpd image from Docker Hub, which is just a simple static server. Then we use a bit of bash to add five lines to Apache's configuration file. These five lines import the proxy modules, tell Apache that we want to proxy http://rest-api:8008 to the /api route, and set the X-Forwarded-Path header so the REST API can properly build response URLs. Make sure that rest-api matches the actual name of the Sawtooth REST API service in your docker compose file.
Modify your docker compose file
Now, to the docker compose YAML file you are running Sawtooth through, you want to add a new property under the services key:
services:
  my-web-page:
    build: ./path/to/web/dir/
    image: my-web-page
    container_name: my-web-page
    volumes:
      - ./path/to/web/dir/public/:/usr/local/apache2/htdocs/
    expose:
      - 80
    ports:
      - '8000:80'
    depends_on:
      - rest-api
This will build your Dockerfile located at ./path/to/web/dir/Dockerfile (relative to the docker compose file), and run it with its default command, which is to start up Apache. Apache will serve whatever files are located in /usr/local/apache2/htdocs/, so we'll use volumes to link the path to your web files on your host machine (i.e. ./path/to/web/dir/public/), to that directory in the container. This is basically an alias, so if you update your web app later, you don't need to restart this docker container to see the changes. Finally, ports will take the server, which is at port 80 inside the container, and forward it out to localhost:8000.
Running it all
Now you should be able to run:
docker-compose -f path/to/your/compose-file.yaml up
And it will start up your Apache server along with the Sawtooth REST API and validator and any other services you defined. If you go to http://localhost:8000, you should see your web page, and if you go to http://localhost:8000/api/blocks, you should see a JSON representation of the blocks on chain. More importantly you should be able to make the request from your web app:
request.post({
  url: 'api/batches',
  body: batchListBytes,
  headers: { 'Content-Type': 'application/octet-stream' }
}, (err, response) => console.log(response));
Whew. Sorry for the long response, but I'm not sure if it is possible to solve CORS any faster. Hopefully this helps.
The transaction header should include details such as the state addresses it will read from and write to. Here is an example which I have used and which works fine for me:
String payload = "create,0001,BLockchain CPU,Black,5000";
logger.info("Sending payload as - " + payload);
String payloadBytes = Utils.hash512(payload.getBytes()); // fix for invalid payload serialization
ByteString payloadByteString = ByteString.copyFrom(payload.getBytes());
String address = getAddress(IDEM, ITEM_ID); // get unique address for input and output
logger.info("Sending address as - " + address);

TransactionHeader txnHeader = TransactionHeader.newBuilder().clearBatcherPublicKey()
        .setBatcherPublicKey(publicKeyHex)
        .setFamilyName(IDEM) // Idem family
        .setFamilyVersion(VER)
        .addInputs(address)
        .setNonce("1")
        .addOutputs(address)
        .setPayloadSha512(payloadBytes)
        .setSignerPublicKey(publicKeyHex)
        .build();

ByteString txnHeaderBytes = txnHeader.toByteString();
byte[] txnHeaderSignature = privateKey.signMessage(txnHeaderBytes.toString()).getBytes(); // unused
String value = Signing.sign(privateKey, txnHeader.toByteArray());

Transaction txn = Transaction.newBuilder()
        .setHeader(txnHeaderBytes)
        .setPayload(payloadByteString)
        .setHeaderSignature(value)
        .build();

BatchHeader batchHeader = BatchHeader.newBuilder().clearSignerPublicKey()
        .setSignerPublicKey(publicKeyHex)
        .addTransactionIds(txn.getHeaderSignature())
        .build();
ByteString batchHeaderBytes = batchHeader.toByteString();
byte[] batchHeaderSignature = privateKey.signMessage(batchHeaderBytes.toString()).getBytes(); // unused
String value_batch = Signing.sign(privateKey, batchHeader.toByteArray());

Batch batch = Batch.newBuilder()
        .setHeader(batchHeaderBytes)
        .setHeaderSignature(value_batch)
        .setTrace(true)
        .addTransactions(txn)
        .build();

BatchList batchList = BatchList.newBuilder()
        .addBatches(batch)
        .build();
ByteString batchBytes = batchList.toByteString();

String serverResponse = Unirest.post("http://localhost:8008/batches")
        .header("Content-Type", "application/octet-stream")
        .body(batchBytes.toByteArray())
        .asString()
        .getBody();

Difficulty in sourcing tcl files from sharepoint

I have Tcl byte code on SharePoint, with a URL like
https://share.abc.com/sites/abc/test.tcl
I want to source this file in another Tcl file residing on my machine.
I don't want to copy the file from SharePoint.
Can anyone help me out here?
The source command only reads from the filesystem, but that can be a virtual filesystem. Thus, you can use the tclvfs package to make it so that HTTP sites can be mounted within the process, and then you can read from that.
# Add in HTTPS support
package require http
package require tls
::http::register https 443 ::tls::socket
# Mount the site; the vfs::urltype package won't work as it doesn't support https
package require vfs::http
# Double quotes only because of Stack Overflow highlighting sucking
vfs::http::Mount "https://share.abc.com/" /https.share.abc.com
# Load and evaluate the file
source /https.share.abc.com/sites/abc/test.tcl
This all assumes that you don't need any username/password credentials. If you do, you need to set them as part of the mount:
vfs::http::Mount "https://theuser:thepassword@share.abc.com/" /https.share.abc.com
Note that this currently requires that you're using HTTP Basic Auth (over HTTPS). That's sufficiently secure for almost any reasonable use.
This is quite a large stack of stuff. You can do it in rather less if you are willing to do some more of the work yourself:
package require base64
package require http
package require tls
::http::register https 443 ::tls::socket

proc source_https {url username password} {
    set auth "Basic [base64::encode ${username}:${password}]"
    set headers [list Authorization $auth]
    set tok [http::geturl $url -headers $headers]
    if {[http::ncode $tok] != 200} {
        # Cheap and nasty version...
        set msg [http::code $tok]
        http::cleanup $tok
        error "Problem with fetch: $msg"
    }
    set script [http::data $tok]
    http::cleanup $tok
    # These next two commands are effectively what [source] does (apart from I/O)
    info script $url
    uplevel 1 $script
}

source_https "https://share.abc.com/sites/abc/test.tcl" AzureDiamond hunter2

Trying to make curl requests in ruby

Is there a Ruby curl library that will allow me to duplicate this request:
curl -d '<hello xmlns="http://checkout.google.com/schema/2"/>' https://S_MERCHANT_ID:S_MERCHANT_KEY@sandbox.google.com/checkout/api/checkout/v2/request/Merchant/S_MERCHANT_ID
I have tried curb, but its PostField.content class is not cooperating with Google's Checkout API. Here is the code from my curb request:
c = Curl::Easy.new("https://MY_ID:MY_KEY@sandbox.google.com/checkout/api/checkout/v2/request/Merchant/MY_ID_AGAIN")
c.http_auth_types = :basic
c.username = 'MY_ID'
c.password = 'MY_KEY'
# c.headers["data"] = '<?xml version="1.0" encoding="UTF-8"?><hello xmlns="http://checkout.google.com/schema/2"/>'
c.http_post(Curl::PostField.content('', '<?xml version="1.0" encoding="UTF-8"?><hello xmlns="http://checkout.google.com/schema/2"/>'))
c.perform
I HAVE managed to get it working using Ruby's system command, but I'm not sure how to handle the response from it.
req = system("curl -d '<hello xmlns=\"http://checkout.google.com/schema/2\"/>' https://MY_ID:MY_KEY@sandbox.google.com/checkout/api/checkout/v2/request/Merchant/MY_ID")
I have been at it for two hours now. Any help would be greatly appreciated, thanks!
You can use IO.popen to read from the child process:
IO.popen(['curl', '-o', '-', '-d', ..., err: [:child, :out]]) do |io|
  response = io.read
end
This example combines standard out and standard error into one stream in the child process, and it forces curl to redirect output to standard out via -o. You would specify your other options in place of the ....
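For instance, filled in with the request from the question (a sketch; MY_ID and MY_KEY are the question's own placeholders):
cmd = ['curl', '-s', '-o', '-',
       '-d', '<hello xmlns="http://checkout.google.com/schema/2"/>',
       'https://MY_ID:MY_KEY@sandbox.google.com/checkout/api/checkout/v2/request/Merchant/MY_ID',
       err: [:child, :out]]
# IO.popen with a block returns the block's value, here the full response body
response = IO.popen(cmd) { |io| io.read }
puts response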
I always use the Rest Client gem for such use cases; it is very simple to use and has all the REST requests out of the box, with a whole batch of tuning parameters.
Your code will look like something similar to this:
url = "sandbox.google.com/checkout/api/checkout/v2/request/Merchant/#{S_MERCHANT_ID}"
credentials = "#{S_MERCHANT_ID}:#{S_MERCHANT_KEY}"
RestClient.post "https://#{credentials}@#{url}", '<hello xmlns="http://checkout.google.com/schema/2"/>'
Alternatively, you can use an HTTP request library such as Typhoeus (https://github.com/typhoeus/typhoeus). Is there anything that ties you to curl?
I would have curl put the result in a file, then open and read the file from Ruby (File.open).
Or use HTTParty.
I figured it out (YAAAAY!)
If anyone else is having this problem, here is the solution.
Executable commands work fine on the command line, but if you are trying to render the output of an executable command from a controller in Rails, make sure you use render :json instead of render :text to print the results.
For some reason, render :text was only outputting bits and pieces of my command's output (and driving me insane in the process).
For those of you trying to integrate with Google Checkout in Rails, here is how you make HTTP requests to Google:
First step: add rest-client to your Gemfile. Here is how to do it from the command line:
$ cd /path/to/your/rails/app
$ sudo nano Gemfile
Next, add the gem by placing the following line somewhere in your Gemfile:
gem "rest-client"
Next, run bundle install:
$ bundle install
Restart your server. If you're on Apache:
$ sudo service apache2 reload
If you're on WEBrick:
$ rails s
Then, in your controller (assuming you have Rails set up and can reach a controller from the browser), write the following code:
url = "https://YOUR_GOOGLE_CHECKOUT_MERCHANT_ID:YOUR_GOOGLE_CHECKOUT_KEY@sandbox.google.com/checkout/api/checkout/v2/request/Merchant/YOUR_GOOGLE_CHECKOUT_MERCHANT_ID"
req = RestClient.post(url, '<hello xmlns="http://checkout.google.com/schema/2"/>')
render :json => req
Please don't forget to replace YOUR_GOOGLE_CHECKOUT_MERCHANT_ID with your actual merchant ID and YOUR_GOOGLE_CHECKOUT_KEY with your actual Google Checkout key. If everything works, Google responds with something like:
<?xml version="1.0" encoding="UTF-8"?>
<bye xmlns="http://checkout.google.com/schema/2" serial-number="1dfc3b90-1fa6-47ea-a585-4d5482b6c785" />
(answer courtesy of nexo)

ffmpeg av_register_all() not working?

I'm attempting to use ffmpeg on iOS. When I call av_register_all() and then attempt to open a file using avformat_open_input(), I get the following error:
No URL Protocols are registered. Missing call to av_register_all()?
Has anyone seen this before? Any help would be much appreciated.
I had protocols disabled in the build script in which I built the static library.
If you want to see ffmpeg's list of available protocols, type:
./configure --list-protocols
and you will see output something like this:
applehttp https rtmps
cache md5 rtmpt
concat mmsh rtmpte
crypto mmst rtp
file pipe tcp
gopher rtmp tls
http rtmpe udp
httpproxy
If you have disabled everything in your ffmpeg configuration, it is usually enough in this case to enable the file protocol:
./configure \
...
--enable-protocol=file \
...
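As an aside not in the original answer, you can also verify at runtime which protocols your build actually contains, using avio_enum_protocols() from libavformat (a sketch for the pre-4.0, av_register_all() era of ffmpeg that this question concerns):
#include <stdio.h>
#include <libavformat/avformat.h>

int main(void) {
    av_register_all(); // registers formats and protocols in pre-4.0 ffmpeg
    void *opaque = NULL;
    const char *name;
    // second argument: 0 = input protocols, 1 = output protocols
    while ((name = avio_enum_protocols(&opaque, 0)) != NULL)
        printf("input protocol: %s\n", name);
    return 0;
}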
