I would like to get the Authorization Bearer header for OAuth purposes, but the docs are a bit confusing to read.
use nickel::{Nickel, JsonBody, HttpRouter, Request, Response, MiddlewareResult, MediaType};
// Get the full Authorization header from the incoming request headers
let auth_header = match request.origin.headers.get::<Authorization<Bearer>>() {
    Some(header) => header,
    None => panic!("No authorization header found")
};
This generates the error:
src/main.rs:84:56: 84:86 error: the trait hyper::header::HeaderFormat is not implemented for the type hyper::header::common::authorization::Authorization<hyper::header::common::authorization::Bearer> [E0277]
Looking at the implementation, it appears correct to me:
https://github.com/hyperium/hyper/blob/master/src/header/common/authorization.rs
impl<S: Scheme + Any> HeaderFormat for Authorization<S> where <S as FromStr>::Err: 'static {
    fn fmt_header(&self, f: &mut fmt::Formatter) -> fmt::Result {
        if let Some(scheme) = <S as Scheme>::scheme() {
            try!(write!(f, "{} ", scheme))
        };
        self.0.fmt_scheme(f)
    }
}
https://github.com/auth0/rust-api-example/issues/1
Looking at the documentation for Authorization, we can see that it does indeed implement Header:
impl<S: Scheme + Any> Header for Authorization<S>
where S::Err: 'static
So you were on the right track. My guess is that you are running into something more insidious: multiple versions of the same crate.
Specifically, the version of nickel that I compiled today (0.7.3) depends on hyper 0.6.16. However, if I add hyper = "*" to my Cargo.toml, then I also get the newest version of hyper, 0.7.0.
As unintuitive as it may seem, items from hyper 0.7 are not compatible with items from hyper 0.6. This is nothing specific about hyper either; it's true for all crates.
If you update your dependency to lock to the same version of hyper that nickel wants, then you should be good to go.
Cargo.toml
# ...
[dependencies]
hyper = "0.6.16"
nickel = "*"
src/main.rs
extern crate nickel;
extern crate hyper;
use hyper::header::{Authorization, Bearer};
use nickel::{HttpRouter, Request};
fn foo(request: Request) {
    // Get the full Authorization header from the incoming request headers
    let auth_header = match request.origin.headers.get::<Authorization<Bearer>>() {
        Some(header) => header,
        None => panic!("No authorization header found")
    };
}
fn main() {}
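If you also need the token string itself for the OAuth check: in hyper 0.6 the Bearer scheme stores it in a token field, and Authorization<S> dereferences to its scheme. Here is a minimal, hedged sketch building on the handler above (the helper name is mine, not from the original answer):
// Sketch only: pull the raw bearer token out of a nickel request,
// relying on hyper 0.6's Authorization<Bearer> as imported above.
fn bearer_token(request: &Request) -> Option<String> {
    request
        .origin
        .headers
        .get::<Authorization<Bearer>>()
        // Authorization<Bearer> derefs to Bearer; its `token` field is the raw string
        .map(|auth| auth.token.clone())
}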
What I want to do is exactly what the title says: I would like to know how I can receive data sent via POST in hyper. For example, suppose I execute the following command (with a hyper server running on port 8000):
curl -X POST -F "field=@/path/to/file.txt" -F "tool=curl" -F "other-file=@/path/to/other.jpg" http://localhost:8000
Now, I'm going to take part of the code from hyper's main page as an example:
use std::{convert::Infallible, net::SocketAddr};
use hyper::{Body, Request, Response, Server};
use hyper::service::{make_service_fn, service_fn};
async fn handle(_: Request<Body>) -> Result<Response<Body>, Infallible> {
    Ok(Response::new("Hello, World!".into()))
}

#[tokio::main]
async fn main() {
    let addr = SocketAddr::from(([127, 0, 0, 1], 8000));
    let make_svc = make_service_fn(|_conn| async {
        Ok::<_, Infallible>(service_fn(handle))
    });
    let server = Server::bind(&addr).serve(make_svc);
    if let Err(e) = server.await {
        eprintln!("server error: {}", e);
    }
}
So, with this basic code, how can I receive the POST data that my curl command above would send? How do I adapt my code to read the data? I've searched the internet, and what I found is that hyper doesn't actually split the request body depending on the HTTP method; it's all part of the same body. But I haven't been able to find a way to process data like the above with code like mine. Thanks in advance.
Edit
I tried the exact code given to me in the answer, that is, this code:
async fn handle(req: Request<Body>) -> Result<Response<Body>, Infallible> {
    let mut files = multipart::server::Multipart::from(req);
    .....
}
But I get this error:
expected struct `multipart::server::Multipart`, found struct `hyper::Request`
How can I solve that?
It is a single body, but the data is encoded in a format that contains the multiple files.
This is called multipart, and in order to parse the body correctly you need a multipart library such as https://crates.io/crates/multipart
For hyper integration you need to enable the hyper feature flag in Cargo.toml:
multipart = { version = "*", features = ["hyper"] }
Then
use std::io::Read; // needed for read_to_end

async fn handle(mut files: multipart::server::Multipart) -> Result<Response<Body>, Infallible> {
    files.foreach_entry(|mut field| {
        // contains name, filename, type ..
        println!("Info: {:?}", field.headers);
        // contains data
        let mut bytes: Vec<u8> = Vec::new();
        field.data.read_to_end(&mut bytes);
    });
    Ok(Response::new("Received the files!".into()))
}
You can also use it like this
async fn handle(req: Request<Body>) -> Result<Response<Body>, Infallible> {
    let mut files = multipart::server::Multipart::from(req);
    .....
}
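For completeness, here is a rough, hedged sketch of one way to wire the multipart crate to the async hyper 0.14 server from the question: buffer the whole body, pull the boundary out of the Content-Type header, and hand the bytes to Multipart::with_body (which accepts any std::io::Read plus a boundary string). The boundary parsing and error handling below are deliberately naive and only illustrative, not taken from the answer above:
use std::convert::Infallible;
use std::io::{Cursor, Read};

use hyper::{Body, Request, Response};

async fn handle(req: Request<Body>) -> Result<Response<Body>, Infallible> {
    // Naive boundary extraction from e.g. "multipart/form-data; boundary=----abc"
    let boundary = req
        .headers()
        .get("content-type")
        .and_then(|ct| ct.to_str().ok())
        .and_then(|ct| ct.split("boundary=").nth(1))
        .map(|b| b.trim_matches('"').to_string())
        .expect("not a multipart/form-data request");

    // Buffer the whole body in memory (fine for small uploads only)
    let bytes = hyper::body::to_bytes(req.into_body()).await.expect("failed to read body");

    // Multipart::with_body reads the buffered bytes like a blocking stream
    let mut files = multipart::server::Multipart::with_body(Cursor::new(bytes), boundary);
    files
        .foreach_entry(|mut field| {
            // contains name, filename, type ..
            println!("Info: {:?}", field.headers);
            // contains data
            let mut data: Vec<u8> = Vec::new();
            field.data.read_to_end(&mut data).expect("failed to read field data");
        })
        .expect("failed to iterate over multipart fields");

    Ok(Response::new("Received the files!".into()))
}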
I have a problem. I am using an ASP.NET Core 3 web API. The Angular 8 app client is generated with NSwag version 13.2.1.0. The specification is generated with Swashbuckle.AspNetCore 5.
The result I get is:
/**
 * @param body (optional)
 * @return Success
 */
seller(body: RegisterSellerRequest | undefined): Observable<TokenResponse> {
    let url_ = this.baseUrl + "/api/register/seller";
    url_ = url_.replace(/[?&]$/, "");

    const content_ = JSON.stringify(body);

    let options_: any = {
        body: content_,
        observe: "response",
        responseType: "blob",
        headers: new HttpHeaders({
            "Content-Type": "application/json-patch+json",
            "Accept": "application/json"
        })
    };
As you can see, responseType: "blob" is generated, and that's not good for our Angular app's interceptor.
Is there a way to set the response to be application/json?
In my controllers I set the swagger attributes like this:
[ApiExplorerSettings(GroupName = Constatns.PublicSwaggerGroup)]
[SwaggerOperation(OperationId = "registerSeller")]
[HttpPost("api/register/seller")]
[ValidateModel]
[AllowAnonymous]
[ProducesResponseType((int)HttpResponseType.OK, Type = typeof(TokenResponse))]
[ProducesResponseType((int)HttpResponseType.BadRequest)]
[Produces("application/json")]
public async Task<TokenResponse> RegisterSeller([FromBody] RegisterSellerRequest data)
{}
I think there is currently no simple way to change that. The simplest approach is to load everything as a blob and then transform it to JSON or binary depending on the response type. Changing that would mean that the generator templates get much more complicated.
I am new to Spring 5 and reactive programming. My problem is creating an export feature for the database via a REST API.
The user sends a GET request -> the server reads the data and returns it as a zip file. Because the zip file is large, I need to stream the data.
My code is as below:
@GetMapping(
    value = "/export",
    produces = ["application/octet-stream"],
    headers = [
        "Content-Disposition: attachment; filename=\"result.zip\"",
        "Content-Type: application/zip"])
fun streamData(): Flux<Resource> = service.export()
I use curl as below:
curl http://localhost/export -H "Accept: application/octet-stream"
But it always returns 406 Not Acceptable.
Can anyone help?
Thank you so much.
The headers attribute of the @GetMapping annotation does not declare headers to be written to the HTTP response; it declares mapping headers. This means that your @GetMapping annotation requires the HTTP request to contain the headers you've listed, which is why the request is not mapped to your controller handler.
Now, your handler return type does not look right: Flux<Resource> means that you intend to return 0..* Resource instances and that they should be serialized. In this case, a return type like ResponseEntity<Resource> is probably a better choice, since you'll be able to set response headers on the ResponseEntity and set its body with a Resource.
Is this right? I still feel this solution is not great, with the blockLast at the last line.
#GetMapping("/vehicle/gpsevent", produces = ["application/octet-stream"])
fun streamToZip(): ResponseEntity<FileSystemResource> {
val zipFile = FileSystemResource("result.zip")
val out = ZipOutputStream(FileOutputStream(zipFile.file))
return ResponseEntity
.ok().cacheControl(CacheControl.noCache())
.header("Content-Type", "application/octet-stream")
.header("Content-Disposition", "attachment; filename=result.zip")
.body(ieService.export()
.doOnNext { print(it.key.vehicleId) }
.doOnNext { it -> out.putNextEntry(ZipEntry(it.key.vehicleId.toString() + ".json")) }
.doOnNext { out.write(it.toJsonString().toByteArray(charset("UTF-8"))) }
.doOnNext { out.flush() }
.doOnNext { out.closeEntry() }
.map { zipFile }
.doOnComplete { out.close() }
.log()
.blockLast()
)
}
I'm receiving a standard request from an API. It looks something like this:
Its content type and length are:
But when this hits my Rails server, Rails responds with:
The reason I'm bringing this up is that the same request seems to work on SCORM Cloud's server. If I upload the exact same content to them and watch it in the debugger, I see it send out an application/json request with the same request payload, but with no unexpected token error.
Does a Rails application/json request have to be written a certain way that differs from other servers? Is there a proper way to rewrite this line in Rack Middleware to prevent this error?
Update
The JavaScript:
function _TCDriver_XHR_request (lrs, url, method, data, callback, ignore404, extraHeaders) {
    _TCDriver_Log("_TCDriver_XHR_request: " + url);
    var xhr,
        finished = false,
        xDomainRequest = false,
        ieXDomain = false,
        ieModeRequest,
        title,
        ticks = ['/', '-', '\\', '|'],
        location = window.location,
        urlParts,
        urlPort,
        result,
        extended,
        until,
        fullUrl = lrs.endpoint + url
    ;
    urlParts = fullUrl.toLowerCase().match(/^(.+):\/\/([^:\/]*):?(\d+)?(\/.*)?$/);

    // add extended LMS-specified values to the URL
    if (lrs.extended !== undefined) {
        extended = [];
        for (var prop in lrs.extended) {
            if (lrs.extended[prop] != null && lrs.extended[prop].length > 0) {
                extended.push(prop + "=" + encodeURIComponent(lrs.extended[prop]));
            }
        }
        if (extended.length > 0) {
            fullUrl += (fullUrl.indexOf("?") > -1 ? "&" : "?") + extended.join("&");
        }
    }

    // Consolidate headers
    var headers = {};
    headers["Content-Type"] = "application/json";
    headers["Authorization"] = lrs.auth;
    if (extraHeaders !== null) {
        for (var headerName in extraHeaders) {
            headers[headerName] = extraHeaders[headerName];
        }
    }

    // See if this really is a cross domain request
    xDomainRequest = (location.protocol.toLowerCase() !== urlParts[1] || location.hostname.toLowerCase() !== urlParts[2]);
    if (! xDomainRequest) {
        urlPort = (urlParts[3] === null ? (urlParts[1] === 'http' ? '80' : '443') : urlParts[3]);
        xDomainRequest = (urlPort === location.port);
    }

    // If it's not cross domain or we're not using IE, use the usual XmlHttpRequest
    if (! xDomainRequest || typeof XDomainRequest === 'undefined') {
        _TCDriver_Log("_TCDriver_XHR_request using XMLHttpRequest");
        xhr = new XMLHttpRequest();
        xhr.open(method, fullUrl, callback != null);
        for (var headerName in headers) {
            xhr.setRequestHeader(headerName, headers[headerName]);
        }
    }
    // Otherwise, use IE's XDomainRequest object
    else {
        _TCDriver_Log("_TCDriver_XHR_request using XDomainRequest");
        ieXDomain = true;
        ieModeRequest = _TCDriver_GetIEModeRequest(method, fullUrl, headers, data);
        xhr = new XDomainRequest();
        xhr.open(ieModeRequest.method, ieModeRequest.url);
    }
Rails is being "helpful" here and assuming that the client is correctly using "Content-Type" and passing a value that actually matches that content type. In other words, the payload in the request has to be parseable JSON, and the value being passed is not valid JSON.
Which is an entirely reasonable thing for it to do when you are implementing an in-house API that isn't intended for maximum interoperability. What Rails doesn't know is that an LRS's document storage is supposed to be "dumb" and basically allow the client to shove whatever it wants in and get whatever it wants out. That is why SCORM Cloud accepts the request: it just stores the content type and the contents, and then regurgitates them as-is on request.
The code you pasted is from a very old library that has a poor implementation of Content-Type handling. If this code is found anywhere other than in a relatively old version of a piece of content from one of the major e-learning authoring tools, then it should be updated to use a recent version of TinCanJS and improve the content-type handling.
As far as making this work on Rails, sorry, I don't have that much experience with it. Presumably there is a switch or something to turn off automatic request body parsing; at least, that's what most other frameworks I've used have.
Does a Rails application/json request have to be written a certain way that differs from other servers?
Not that I know of, no.
Is there a proper way to rewrite this line in Rack Middleware to prevent this error?
There might be a way, yes, maybe even without Rack middleware, although it's quite hard to help you without an actual request to work with.
I'm trying to delete an entry from the database via OData. I get the error message:
{"error":{"code":"","message":{"lang":"en-US","value":"Bad Request - Error in query syntax."}}}
my code:
function deleteMonthEntry() {
    var item = actMonthEntries.getItem(listIndex);
    var queryString = "Stundens(" + item.data.datensatz_id + ")?$format=json";
    var requestUrl = serviceUrl + queryString;
    WinJS.xhr({
        type: "delete",
        url: requestUrl,
        headers: {
            "Content-type": "application/json"
        }
    }).done(
        function complete(response) {
        },
        function (error) {
            console.log(error);
        }
    );
}
My request URL looks like this:
requestUrl = "http://localhost:51893/TimeSheetWebservice.svc/Stundens(305233)?$format=json"
Thanks
Marlowe
At least I found the solution:
I've entered a filter request to my service like this:
TimeSheetWebservice.svc/Stundens?$filter=datensatz_id eq 305221
This returned the correct entry with this link:
TimeSheetWebservice.svc/Stundens(305221M)
So if I enter an M after the ID, everything works fine. But I have no idea where this M comes from.
Can anyone tell me the reason for this M? It does not belong to the ID. The ID is:
305221
Marlowe
Are you sure the server you're talking to supports the $format query option? Many don't. I would try removing that part of the request URI and instead modifying your headers value to specify an Accept header:
headers: {
    "Content-Type": "application/json",
    "Accept": "application/json"
}
For servers where $format is allowed, giving it a json value is equivalent to providing an Accept header with the application/json MIME type.
In general, for a DELETE operation, the Accept header or $format value only matters for error cases. With a successful DELETE, the response payload body will be empty, so there's no need for the server to know about your format preference.