I'm storing images in a Fabric blockchain as base64 strings. Whenever I try to interact with the blockchain (update or add assets that include base64 strings) through the Composer REST server, it throws the error below:
Error: request entity too large
How can I increase the request size limit, and what other approaches could handle this issue?
I was able to increase the maximum request size of the REST server by editing server.js, located in path_to_rest_server/server/. I changed the following code,
// Support JSON encoded bodies.
app.middleware('parse', bodyParser.json());
// Support URL encoded bodies.
app.middleware('parse', bodyParser.urlencoded({
  extended: true,
}));
to,
// Support JSON encoded bodies.
app.middleware('parse', bodyParser.json({
  strict: false,
  limit: "10000kb"
}));
// Support URL encoded bodies.
app.middleware('parse', bodyParser.urlencoded({
  extended: true,
  limit: "10000kb"
}));
Here, 10000kb is the new maximum request size.
I'm new to ksqlDB, so I might be missing something obvious. My question relates to the chunked JSON output of a never-ending push query not being valid JSON. Let me elaborate.
In short, my setup is as follows. From a TypeScript/Node process I've defined a ksqlDB stream like this:
CREATE STREAM events (id VARCHAR, timestamp VARCHAR, location VARCHAR, events ARRAY<VARCHAR>) WITH (kafka_topic='mytopic', value_format='json', partitions=1);
The push query itself is created as a long-running REST stream (using axios):
const response = await axios.post(
  `http://ksqldb-server:8088/query-stream`,
  {
    sql: `SELECT * FROM events EMIT CHANGES;`,
    streamsProperties: {}
  },
  {
    headers: {
      'Content-Type': 'application/vnd.ksql.v1+json',
      Accept: 'application/vnd.ksql.v1+json',
    },
    responseType: 'stream',
  }
);
This works. When run, I first get the header row:
[{"header":{"queryId":"transient_EVENTS_2815830975103425962","schema":"`ID` STRING, `TIMESTAMP` STRING, `LOCATION` STRING, `EVENTS` ARRAY<STRING>"}}
Followed by new rows coming in one-by-one based on real-world events:
{"row":{"columns":["b82baad7-a87e-4617-b18a-1782b4cb49ce","2022-05-16 08:03:03","Home",["EventA","EventD"]]}},\n
Now, if this query ever completed, the concatenated output would probably be valid JSON (although the header row is missing a , at the end). Since it's a push query, however, it never completes, so I will never receive the closing ], which means the output will never be valid JSON. Also, I want to process events in real time; otherwise I could have written a pull query instead.
My expectation was that each new row would be parseable on its own with JSON.parse(). Instead, I've ended up calling JSON.parse(data.slice(0, -2)) to strip the trailing ,\n, which does not feel right to put into production.
What is the rationale behind outputting chunked JSON for push queries? It seems an illogical format to me for any use case.
And is there a way to alter the output of ksqlDB events to what I would expect? Maybe some header or attribute I'm missing?
Thanks for your insights!
You explicitly set application/vnd.ksql.v1+json as your desired response format in the headers:
headers: {
  'Content-Type': 'application/vnd.ksql.v1+json',
  Accept: 'application/vnd.ksql.v1+json',
},
application/vnd.ksql.v1+json means that the complete response will be a valid JSON document.
As you pointed out, this is impractical because the push query never completes. You should remove the headers or set them explicitly to the default, application/vnd.ksqlapi.delimited.v1, which means that every returned row is valid JSON on its own.
See https://docs.ksqldb.io/en/latest/developer-guide/ksqldb-rest-api/streaming-endpoint/#executing-pull-or-push-queries for more details.
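For illustration, a minimal sketch of the same query relying on the default delimited format; the endpoint, stream, and query are the ones from the question, and the line-splitting logic is just one way to consume the newline-delimited rows:
import axios from 'axios';

// Minimal sketch: with no explicit Accept header, /query-stream responds with the
// default application/vnd.ksqlapi.delimited.v1, i.e. one JSON document per line.
async function streamEvents(): Promise<void> {
  const response = await axios.post(
    `http://ksqldb-server:8088/query-stream`,
    {
      sql: `SELECT * FROM events EMIT CHANGES;`,
      streamsProperties: {}
    },
    { responseType: 'stream' }
  );

  let buffer = '';
  response.data.on('data', (chunk: Buffer) => {
    buffer += chunk.toString('utf8');
    let newline = buffer.indexOf('\n');
    while (newline >= 0) {
      const line = buffer.slice(0, newline).trim();
      buffer = buffer.slice(newline + 1);
      if (line.length > 0) {
        // The first line is the header (query id, column names/types);
        // subsequent lines are row arrays.
        console.log(JSON.parse(line));
      }
      newline = buffer.indexOf('\n');
    }
  });
}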
I would like to modify the response body returned by the backend.
As background I'll describe my specific problem (but I don't require a solution to the specific problem, just the method for manipulating a response body). I want to add a key-value pair to the response body based on the status code of the response, and I want to transform snake_case keys into camelCase keys.
For example, given a response with
status code: 401
body: {'detail_message': 'user is not logged in'}
I want to transform it to a response with
status code: 401
body: {'success': False, 'detailMessage': 'user is not logged in'}
The rule for success would be True for any status code below 400 and False for anything at or above 400.
Lua scripting can be used in my API gateway, which is KrakenD:
https://www.krakend.io/docs/endpoints/lua/
The documentation only includes examples for printing the response body and modifying headers, but not for modifying the response body.
I have no experience with Lua and only need it for this one task. I haven't been able to find online examples of response body manipulation that I could play with.
What methods do I need in order to add a key-value pair to a response body and to manipulate the keys in the response body?
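For what it's worth, here is a sketch of just the transformation logic in plain Lua; it deliberately ignores how KrakenD exposes the request/response objects, and all names are made up for illustration:
-- Sketch only: converts snake_case keys to camelCase and adds a success flag.
-- Reading/decoding and writing/encoding the body is left to KrakenD's Lua API.
local function snake_to_camel(key)
  return (key:gsub("_(%l)", function(c) return c:upper() end))
end

local function transform(body, status_code)
  local out = { success = status_code < 400 }
  for k, v in pairs(body) do
    out[snake_to_camel(k)] = v
  end
  return out
end

-- transform({ detail_message = "user is not logged in" }, 401)
-- --> { success = false, detailMessage = "user is not logged in" }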
Let's say we make a generic POST request with Python's requests.
req = requests.post('http://someapi.someservice.com', files=files)
req will be a Response object. In my case, the .content of the response can be very, very large, so I do not wish to read it all into memory. Luckily, requests provides the .iter_content iterator that allows iterating over the content. My question, though, is: does req already contain all of the response's content (in which case everything has already been read into memory), or does accessing .content or .iter_content initiate a download that actually fetches the content? This is important, because if assigning the POST request to a variable already reads the response's content into memory, then of course using .iter_content makes no difference.
You will need to set the stream parameter to True in your request in order to avoid downloading the entire content of the response into the response object:
req = requests.post('http://someapi.someservice.com', files=files, stream=True)
Excerpt from the documentation of Body Content Workflow:
By default, when you make a request, the body of the response is downloaded immediately. You can override this behaviour and defer downloading the response body until you access the Response.content attribute with the stream parameter... You can further control the workflow by use of the Response.iter_content() and Response.iter_lines() methods.
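Putting the two together, a small sketch; the URL, file handle, and chunk size are illustrative:
import requests

# Sketch: defer the body download with stream=True, then write it to disk chunk
# by chunk so the full content never has to sit in memory at once.
files = {'file': open('upload.bin', 'rb')}  # illustrative payload, as in the question

with requests.post('http://someapi.someservice.com', files=files, stream=True) as resp:
    resp.raise_for_status()
    with open('download.bin', 'wb') as fh:
        for chunk in resp.iter_content(chunk_size=8192):
            if chunk:  # skip keep-alive chunks
                fh.write(chunk)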
Problem
I have added support for HTTP compression in our self-hosted OWIN/Katana Web API OData 4 service, but I do not see how to support compression in the .NET client. I'm using OData libraries v6.5.0 and I need to support compression/decompression in the client (OData v4 Client Code Generator). I am using Deflate encoding for the compression via an ActionFilter. Everything compresses correctly on the server, as confirmed via Fiddler, but I do not know how to configure the client to support this now that the OData client uses the request and response pipelines instead of the now-defunct WritingRequest and ReceivingResponse events that once supported this very scenario.
Attempts
By experimentation I found that I can hook into the ReceivingResponse event on my DataServiceContext and then call ReceivingResponseEventArgs.ResponseMessage.GetStream(), but I don't know how to overwrite the message content correctly. If I call CopyTo() on the stream, I get a null reference exception at Microsoft.OData.Core.ODataMessageReader.DetectPayloadKind(). I presume this is because the stream was read to the end and the position needs to be set back to zero, but I cannot do that because the stream also throws an exception when setting the position back, saying it does not support seeking. I presume this is simply because the stream is read-only. Even if I could copy the stream and decompress it successfully, how do I modify the response message content with the decompressed content? I don't see any hooks for this at all in the RequestPipeline or ResponsePipeline. To clarify, I want to decompress the response message content and then set it for the materialization that occurs soon after; how might I do that? Extra credit for how to also send compressed requests to the OData service. Thanks!
The OData client uses HttpWebRequest and HttpWebResponse, which support compression well. Try setting the AutomaticDecompression property of the HttpWebRequest to Deflate or GZip in the SendingRequest2 pipeline event, like this:
private void OnSendingRequest_(object sender, SendingRequest2EventArgs args)
{
    if (!args.IsBatchPart) // The request message is not HttpWebRequestMessage in a batch part.
    {
        HttpWebRequest request = ((HttpWebRequestMessage)args.RequestMessage).HttpWebRequest;
        request.AutomaticDecompression = DecompressionMethods.GZip | DecompressionMethods.Deflate;
    }
}
Then, on the response side, HttpWebResponse will decompress the stream automatically, before the materialization work happens.
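For completeness, a short sketch of wiring the handler up; the container class name and the service URI are placeholders:
// Sketch: wire the handler to the generated context. "Container" stands in for
// the DataServiceContext subclass produced by the OData v4 Client Code Generator,
// and the service URI is a placeholder.
var context = new Container(new Uri("https://example.org/odata/"));
context.SendingRequest2 += OnSendingRequest_;
// Requests issued through this context now send Accept-Encoding: gzip, deflate
// and responses are decompressed transparently before materialization.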
I'm very new to the Erlang world and I'm trying to write a client for the Twitter Stream API. I'm using httpc:request to make a POST request and I constantly get a 401 error; I'm obviously doing something wrong with how I'm sending the request... What I have looks like this:
fetch_data() ->
    Method = post,
    URL = "https://stream.twitter.com/1.1/statuses/filter.json",
    Headers = "Authorization: OAuth oauth_consumer_key=\"XXX\", oauth_nonce=\"XXX\", oauth_signature=\"XXX%3D\", oauth_signature_method=\"HMAC-SHA1\", oauth_timestamp=\"XXX\", oauth_token=\"XXX-XXXXX\", oauth_version=\"1.0\"",
    ContentType = "application/json",
    Body = "{\"track\":\"keyword\"}",
    HTTPOptions = [],
    Options = [],
    R = httpc:request(Method, {URL, Headers, ContentType, Body}, HTTPOptions, Options),
    R.
At this point I'm confident there's no issue with the signature as the same signature works just fine when trying to access the API with curl. I'm guessing there's some issue with how I'm making the request.
The response I'm getting with the request made as shown above is:
{ok,{{"HTTP/1.1",401,"Unauthorized"},
[{"cache-control","must-revalidate,no-cache,no-store"},
{"connection","close"},
{"www-authenticate","Basic realm=\"Firehose\""},
{"content-length","1243"},
{"content-type","text/html"}],
"<html>\n<head>\n<meta http-equiv=\"Content-Type\" content=\"text/html; charset=utf-8\"/>\n<title>Error 401 Unauthorized</title>\n</head>\n<body>\n<h2>HTTP ERROR: 401</h2>\n<p>Problem accessing '/1.1/statuses/filter.json'. Reason:\n<pre> Unauthorized</pre>\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n</body>\n</html>\n"}}
When trying with curl I'm using this:
curl --request 'POST' 'https://stream.twitter.com/1.1/statuses/filter.json' --data 'track=keyword' --header 'Authorization: OAuth oauth_consumer_key="XXX", oauth_nonce="XXX", oauth_signature="XXX%3D", oauth_signature_method="HMAC-SHA1", oauth_timestamp="XXX", oauth_token="XXX-XXXX", oauth_version="1.0"' --verbose
and I'm getting the events just fine.
Any help on this would be greatly appreciated, new with Erlang and I've been pulling my hair out on this one for quite a while.
There are several issues with your code:
In Erlang you are encoding the parameters as a JSON body, while with curl you are encoding them as form data (application/x-www-form-urlencoded). The Twitter API expects the latter. In fact, you get a 401 because the OAuth signature does not match: you included the track=keyword parameter in the signature computation, while Twitter's server computes it without the JSON body, as it should per the OAuth RFC.
You are using httpc with default options. This will not work with the streaming API, as the stream never ends; you need to process results as they arrive. For this, pass the {sync, false} option to httpc, and see also the stream and receiver options. A corrected call is sketched after these points.
Finally, while httpc can work initially to access the Twitter streaming API, it brings little value compared to the code you need to develop around it to stream from the Twitter API. Depending on your needs, you might want to replace it with a simple client built directly on ssl, especially considering that ssl can decode HTTP packets (what is left for you is the HTTP chunked encoding).
For example, if your keywords are rare, you might get a timeout from httpc. Besides, without httpc it might be easier to update the list of keywords, or your code, with no downtime.
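To illustrate the first two points, here is a sketch of a corrected call with a form-encoded body and asynchronous streaming; the OAuth header is a placeholder and must be signed over track=keyword, and the inets/ssl applications are assumed to be started:
%% Sketch only: form-encoded body plus asynchronous streaming with httpc
%% (assumes the inets and ssl applications are already started).
fetch_data() ->
    URL = "https://stream.twitter.com/1.1/statuses/filter.json",
    Headers = [{"Authorization", "OAuth oauth_consumer_key=\"XXX\", ..."}],
    ContentType = "application/x-www-form-urlencoded",
    Body = "track=keyword",
    {ok, RequestId} = httpc:request(post, {URL, Headers, ContentType, Body},
                                    [], [{sync, false}, {stream, self}]),
    receive_chunks(RequestId).

receive_chunks(RequestId) ->
    receive
        {http, {RequestId, stream_start, _Headers}} ->
            receive_chunks(RequestId);
        {http, {RequestId, stream, BinBodyPart}} ->
            %% Each part may hold zero or more CRLF-delimited tweets.
            io:format("chunk: ~p~n", [BinBodyPart]),
            receive_chunks(RequestId);
        {http, {RequestId, stream_end, _Headers}} ->
            ok;
        {http, {RequestId, {error, Reason}}} ->
            {error, Reason}
    end.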
A streaming client based directly on ssl could be implemented as a gen_server (or a simple process, if you do not follow OTP principles), or even better a gen_fsm to implement reconnection strategies. You could proceed as follows; a minimal sketch follows these steps:
Connect using ssl:connect/3,4, specifying that you want the socket to decode the HTTP packets with {packet, http_bin} and that you want the socket configured in passive mode ({active, false}).
Send the HTTP request (preferably as an iolist of binaries) with ssl:send/2,3. It should span several lines separated by CRLF (\r\n): first the request line (GET /1.1/statuses/filter.json?... HTTP/1.1), then the headers, including the OAuth headers. Make sure you include Host: stream.twitter.com as well, and end with an empty line.
Receive the HTTP response. You can implement this with a loop (since the socket is in passive mode), calling ssl:recv/2,3 until you get http_eoh (end of headers). Note down whether the server will send you data chunked or not by looking at the Transfer-Encoding response header.
Configure the socket in active mode with ssl:setopts/2 and specify that you want packets as raw and data in binary format. In fact, if data is chunked, you could continue to use the socket in passive mode. You could also get data line by line or as strings. This is a matter of taste: raw is the safest bet, while line by line requires that you check the buffer size to prevent truncation of a long JSON-encoded tweet.
Receive data from Twitter as messages sent to your process, either with receive (simple process) or in the handle_info callback (if you implemented this with a gen_server). If data is chunked, you will first receive the chunk size, then the tweets, and finally the end of the chunk (cf. RFC 2616). Be prepared for tweets that span several chunks (i.e. maintain some kind of buffer). It is best to do minimal decoding in this process and send tweets to another process, possibly in binary format.
You should also handle errors and the socket being closed by Twitter. Make sure you follow Twitter's guidelines for reconnection.
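Along those lines, a minimal sketch as a bare process (rather than a gen_server or gen_fsm); the OAuth header and keyword are placeholders, and decoding of the chunked body is left out:
%% Sketch only: a bare-process streaming client following the steps above.
-module(twitter_stream_sketch).
-export([start/1]).

start(AuthHeader) ->
    {ok, _} = application:ensure_all_started(ssl),
    %% 1. Connect with HTTP packet decoding, passive mode.
    {ok, Sock} = ssl:connect("stream.twitter.com", 443,
                             [binary, {packet, http_bin}, {active, false}]),
    %% 2. Send the request as an iolist, lines separated by CRLF, empty line last.
    Body = <<"track=keyword">>,
    Req = [<<"POST /1.1/statuses/filter.json HTTP/1.1\r\n">>,
           <<"Host: stream.twitter.com\r\n">>,
           <<"Authorization: ">>, AuthHeader, <<"\r\n">>,
           <<"Content-Type: application/x-www-form-urlencoded\r\n">>,
           <<"Content-Length: ">>, integer_to_binary(byte_size(Body)),
           <<"\r\n\r\n">>, Body],
    ok = ssl:send(Sock, Req),
    %% 3. Read the status line and headers until http_eoh (passive mode loop).
    {ok, {http_response, _Version, 200, _Reason}} = ssl:recv(Sock, 0),
    ok = skip_headers(Sock),
    %% 4. Switch to raw binary packets and active mode.
    ok = ssl:setopts(Sock, [{packet, raw}, {active, true}]),
    %% 5. Receive socket data as messages; real code must decode the HTTP
    %%    chunked encoding and buffer tweets that span several chunks.
    loop(Sock, <<>>).

skip_headers(Sock) ->
    case ssl:recv(Sock, 0) of
        {ok, http_eoh} -> ok;
        {ok, {http_header, _, _Name, _, _Value}} -> skip_headers(Sock);
        {error, Reason} -> {error, Reason}
    end.

loop(Sock, Buffer) ->
    receive
        {ssl, Sock, Data} ->
            loop(Sock, <<Buffer/binary, Data/binary>>);
        {ssl_closed, Sock} ->
            {closed, Buffer};
        {ssl_error, Sock, Reason} ->
            {error, Reason}
    end.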