Payload length of x bytes exceeds the allowed limit of 8192 bytes - autodesk-designautomation

Is there a way to increase the maximum size of the workitem in the POST request? I have 7 parameters, which means 7 AWS pre-signed URLs, hence the size of the workitem during the POST.

Your pre-signed URLs are "abnormally long" because they include x-amz-security-token. Pre-signed URLs are already temporary (they contain an expiration), so I'm not sure why you are using temporary credentials to generate them.
Can you look into this?
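For illustration, here is a minimal sketch of generating a pre-signed URL with the AWS SDK for .NET using long-term IAM credentials rather than STS temporary credentials, so the URL carries no x-amz-security-token parameter; the bucket, key, and credential values are placeholders, not taken from the question:
using System;
using Amazon;
using Amazon.S3;
using Amazon.S3.Model;

// Long-term IAM user credentials (no session token), so the resulting
// URL omits the x-amz-security-token parameter and stays much shorter.
var s3 = new AmazonS3Client("ACCESS_KEY", "SECRET_KEY", RegionEndpoint.USEast1);

var request = new GetPreSignedUrlRequest
{
    BucketName = "my-bucket",       // placeholder
    Key = "inputs/param1.json",     // placeholder
    Verb = HttpVerb.GET,
    Expires = DateTime.UtcNow.AddHours(1)
};

string url = s3.GetPreSignedURL(request);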

Related

Logstash IIS Log Parsing: Cookie field truncated

I am parsing IIS logs with Logstash and noticed that the cookie field is being truncated in some cases (it displays "cookie" => "..." instead of the actual value).
I see that other cookies in events of similar length are being processed correctly. The entire event length does not exceed 4,000 characters, so I suppose everything should fit.
What could have gone wrong?
This seems to be by design. See this post.
Cause: This behavior is by design. The length of each IIS log field value is limited to 4096 bytes (4 KB). If one of the field values is greater than 4096 bytes, that value will be replaced with the three dots. In the above example, the client's Cookie was larger than 4096 bytes and was therefore replaced with (...).
Workaround: To work around this issue, use one of the following options:
Write your own custom logging module that does not have the 4096-byte field limitation.
OR
Reduce the size of the request or response header values to be logged so that they are less than 4096 bytes and will therefore not be replaced by the three dots.
You can instead add a custom field, set it to dump your value from server variables or a request header, and set maxCustomFieldLength. However, that seems to truncate the data to 4 KB rather than replacing it with "...". See this post. A slight improvement, but not 100% ideal.
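As a rough sketch of that approach (assuming IIS 8.5+ enhanced logging; the field name, source, and length value here are illustrative, not taken from the post above), the custom field can be declared in applicationHost.config or web.config:
<!-- Illustrative only: logs the Cookie request header into a custom
     field; maxCustomFieldLength controls the per-field cap for custom
     fields (overly long values are truncated, not replaced by "..."). -->
<system.applicationHost>
  <sites>
    <siteDefaults>
      <logFile>
        <customFields maxCustomFieldLength="8192">
          <add logFieldName="FullCookie"
               sourceName="Cookie"
               sourceType="RequestHeader" />
        </customFields>
      </logFile>
    </siteDefaults>
  </sites>
</system.applicationHost>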

GetWorkItemsAsync fails when it retrieves 1800 workitems

GetWorkItemsAsync fails when it retrieves 1800 workitems. Example:
int[] ids = (from WorkItem info in wlinks select info.Id).ToArray();
WorkItemTrackingHttpClient tfvcClient = _tfs.GetClient<WorkItemTrackingHttpClient>();
List<Microsoft.TeamFoundation.WorkItemTracking.WebApi.Models.WorkItem> dworkitems = tfvcClient.GetWorkItemsAsync(ids).Result;
If I pass an array of IDs with 90 elements, it works fine.
Is there a limit such that it can only retrieve n elements at a time, and how can we overcome this problem?
Yes, there is a limitation on the URL length; you will get this exception once the URL length has been exceeded.
So, as a workaround, limit your calls to an allowed range at a time (e.g. 200 IDs at a time) and call the query several times, as in the sketch below.
Unfortunately you’ve hit a limitation of the URL length. Once the URL length has been exceeded, the server just gets the truncated version, so odds are high that the truncated work item id is not valid. I recommend limiting your calls to 200 ids at a time.
Source: https://github.com/Microsoft/vsts-dotnet-samples/issues/49
Reference this thread for the limitation of the URL length: What is the maximum length of a URL in different browsers?
See also this similar thread: Is there any restriction for number of characters in TFS REST API?
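A minimal sketch of that batching workaround, reusing the ids array and tfvcClient from the question (the 200-ID batch size follows the recommendation above):
using System.Collections.Generic;
using System.Linq;
using Microsoft.TeamFoundation.WorkItemTracking.WebApi;
using Microsoft.TeamFoundation.WorkItemTracking.WebApi.Models;

const int BatchSize = 200;
var dworkitems = new List<WorkItem>();

// Request the work items in chunks so each request URL stays under the limit.
for (int i = 0; i < ids.Length; i += BatchSize)
{
    int[] batch = ids.Skip(i).Take(BatchSize).ToArray();
    dworkitems.AddRange(tfvcClient.GetWorkItemsAsync(batch).Result);
}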

Is it possible to append bytes to a blob in Azure Blob Storage?

I had a bit of trouble uploading large files (>2 GB) with ASP.NET MVC5, but I managed to fix it by splitting the file into packets with jQuery and uploading each packet separately. In my backend I want to upload those packets to Azure Blob Storage. Is there a way to append those bytes to an already existing blob? Most solutions I find on the internet advise downloading the blob, adding the bytes, and re-uploading it, but that seems like a waste of bandwidth since you download and re-upload the file every time.
Try using append blobs. There is a code sample at https://azure.microsoft.com/en-us/documentation/articles/storage-dotnet-how-to-use-blobs/#writing-to-an-append-blob. From that page:
An append blob is a new type of blob, introduced with version 5.x of the Azure storage client library for .NET. An append blob is optimized for append operations, such as logging. Like a block blob, an append blob is comprised of blocks, but when you add a new block to an append blob, it is always appended to the end of the blob. You cannot update or delete an existing block in an append blob. The block IDs for an append blob are not exposed as they are for a block blob.
Each block in an append blob can be a different size, up to a maximum of 4 MB, and an append blob can include a maximum of 50,000 blocks. The maximum size of an append blob is therefore slightly more than 195 GB (4 MB X 50,000 blocks).
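A minimal sketch of that approach, assuming the WindowsAzure.Storage client described above; the connection string, container and blob names, and packetBytes are placeholders:
using System.IO;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

CloudStorageAccount account = CloudStorageAccount.Parse(connectionString);
CloudBlobClient client = account.CreateCloudBlobClient();
CloudBlobContainer container = client.GetContainerReference("uploads");
CloudAppendBlob blob = container.GetAppendBlobReference("bigfile.dat");

// Create the append blob once, then append each packet as it arrives;
// every block lands at the end of the blob in call order.
if (!await blob.ExistsAsync())
    await blob.CreateOrReplaceAsync();

using (var stream = new MemoryStream(packetBytes))   // packetBytes: one uploaded packet
    await blob.AppendBlockAsync(stream);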
Azure Blob Storage supports re-composing new blobs from existing blocks in already uploaded (and committed) blobs and combining those existing blocks with new blocks you upload. The order of the blocks can be chosen freely, so you can append new blocks or insert new blocks.
Append blobs are mostly used for applications that continuously add data to an object.
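As a sketch of that block-list alternative (same client library; names and the packets list are placeholders), each uploaded packet becomes an uncommitted block, and committing the block list in your chosen order composes the final blob:
using System;
using System.Collections.Generic;
using System.IO;
using Microsoft.WindowsAzure.Storage.Blob;

CloudBlockBlob blockBlob = container.GetBlockBlobReference("bigfile.dat");
var blockIds = new List<string>();

for (int i = 0; i < packets.Count; i++)   // packets: the uploaded chunks
{
    // Block IDs must be Base64-encoded and the same length within a blob.
    string blockId = Convert.ToBase64String(BitConverter.GetBytes((long)i));
    using (var stream = new MemoryStream(packets[i]))
        await blockBlob.PutBlockAsync(blockId, stream, null);
    blockIds.Add(blockId);
}

// Committing the list makes the blob visible; reordering the IDs here
// reorders the blocks, which is how inserts are possible.
await blockBlob.PutBlockListAsync(blockIds);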
For an overview, check here.

Delphi WebAction - Request.ContentFields.Values['something'] size limit

I'm having an issue with my web DLL, compiled in XE5.
I use an AJAX POST via JavaScript to submit a base64 value for an image that was cropped.
The length is about 127,000 characters.
When logging Request.ContentFields.Values['Base64Image'] in Delphi, the total length was reduced to about 61,000, which then saves half an image.
Is there a size limit on Request.ContentFields.Text? If so, how do I overcome this? Can I save a base64 element directly from the HTML page?

What is the maximum URL length you can pass to the Wininet function, HttpOpenRequest?

There are some max length consts in WinInet.h:
...
//
// maximum field lengths (arbitrary)
//
#define INTERNET_MAX_HOST_NAME_LENGTH 256
#define INTERNET_MAX_USER_NAME_LENGTH 128
#define INTERNET_MAX_PASSWORD_LENGTH 128
#define INTERNET_MAX_PORT_NUMBER_LENGTH 5 // INTERNET_PORT is unsigned short
#define INTERNET_MAX_PORT_NUMBER_VALUE 65535 // maximum unsigned short value
#define INTERNET_MAX_PATH_LENGTH 2048
#define INTERNET_MAX_SCHEME_LENGTH 32 // longest protocol name length
#define INTERNET_MAX_URL_LENGTH (INTERNET_MAX_SCHEME_LENGTH \
+ sizeof("://") \
+ INTERNET_MAX_PATH_LENGTH)
...
HttpOpenRequest itself does not enforce a maximum URL length, but the server software you are targeting will likely impose one. (For reference, INTERNET_MAX_URL_LENGTH works out to 32 + sizeof("://") + 2048 = 2084 bytes, i.e. 2,083 characters plus a terminating null, which matches Internet Explorer's documented 2,083-character limit.)
Apache (Server)
My early attempts to measure the maximum URL length in web browsers bumped into a server URL length limit of approximately 4,000 characters, after which Apache produces a "413 Entity Too Large" error. I used the current, up-to-date Apache build found in Red Hat Enterprise Linux 4. The official Apache documentation only mentions an 8,192-byte limit on an individual field in a request.
Microsoft Internet Information Server (Server)
The default limit is 16,384 characters (yes, Microsoft's web server accepts longer URLs than Microsoft's web browser). This is configurable.
Perl HTTP::Daemon (Server)
Up to 8,000 bytes will work. Those constructing web application servers with Perl's HTTP::Daemon module will encounter a 16,384-byte limit on the combined size of all HTTP request headers. This does not include POST-method form data, file uploads, etc., but it does include the URL. In practice this resulted in a 413 error when a URL was significantly longer than 8,000 characters. This limitation can be easily removed. Look for all occurrences of 16x1024 in Daemon.pm and replace them with a larger value. Of course, this does increase your exposure to denial of service attacks.
(from Boutell.com)
I would suggest staying under 2,000 characters, but this KB article suggests Internet Explorer has a limit of 2,083 characters, which may well apply to your case too.
