Using the Microsoft Graph API, it is possible to add file attachments to messages as described here. However, since REST requests have a total size limit of 4 MB, this does not allow for very large attachments.
A resumable upload session allows for larger uploads that can be referenced by reference attachments providing a download link. However, these links are obviously short-lived, and we would like to clean up the files at some point.
Is there any way to create a message with persistent file attachments larger than 4 MB? I am thinking along the lines of a DriveItem-to-FileAttachment conversion here, but could not find anything on the topic. Help is very much appreciated!
As of May 2017 you can use the referenceAttachment resource type. This allows you to attach a DriveItem from OneDrive to a message:
POST https://graph.microsoft.com/beta/me/messages/AAMkAGE1M88AADUv0uFAAA=/attachments
Content-type: application/json
Content-length: 319
{
  "@odata.type": "#microsoft.graph.referenceAttachment",
  "name": "Personal pictures",
  "sourceUrl": "https://contoso.com/personal/mario_contoso_net/Documents/Pics",
  "providerType": "oneDriveConsumer",
  "permission": "Edit",
  "isFolder": true
}
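For completeness, the request above can be scripted. This is a minimal sketch using only the Python standard library; the helper names and the bearer-token plumbing are my own, while the endpoint and payload fields come from the example above.

```python
import json
import urllib.request

GRAPH_BASE = "https://graph.microsoft.com/beta"  # beta endpoint, as in the example above

def build_reference_attachment(name, source_url, provider="oneDriveConsumer",
                               permission="Edit", is_folder=False):
    """Build the JSON body for a microsoft.graph.referenceAttachment.
    Field values mirror the example above."""
    return {
        "@odata.type": "#microsoft.graph.referenceAttachment",
        "name": name,
        "sourceUrl": source_url,
        "providerType": provider,
        "permission": permission,
        "isFolder": is_folder,
    }

def add_reference_attachment(token, message_id, payload):
    """POST the attachment to the message. `token` is an OAuth bearer token."""
    req = urllib.request.Request(
        f"{GRAPH_BASE}/me/messages/{message_id}/attachments",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

The attached DriveItem stays in OneDrive, so the message only carries a link, which is what makes this work around the 4 MB request limit.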
I am trying to create the upload PUT request for the OneDrive API. It's the large-file "resumable upload" version, which requires createUploadSession.
I have read the Microsoft docs here. As a warning, the docs are VERY inaccurate and full of factual errors...
The docs simply say:
PUT https://sn3302.up.1drv.com/up/fe6987415ace7X4e1eF866337
Content-Length: 26
Content-Range: bytes 0-25/128

<bytes 0-25 of the file>
I am authenticated and have the upload session created, however when I pass the JSON body containing my binary file I receive this error:
{
  "error": {
    "code": "BadRequest",
    "message": "Property file in payload has a value that does not match schema.",
    .....
Can anyone point me at the schema definition? Or explain how the JSON should be constructed?
As a side question, am I right in using "application/json" for this at all? What format should the request use?
Just to confirm, I am able to see the temp file created ready and waiting on OneDrive for the upload, so I know I'm close.
Thanks for any help!
If you're uploading the entire file in a single request, then why use an upload session when you can use a simple PUT request?
url = https://graph.microsoft.com/v1.0/{user_id}/items/{parent_folder_ref_id}:/(unknown):/content
with a "Content-Type": "text/plain" header, and in the body simply put the file bytes.
If for some reason I don't understand you have to use a single-chunk upload session, then:
1. Create the upload session (you didn't specify any problems here, so I'm not elaborating).
2. Get the uploadUrl from the createUploadSession response and send a PUT request with the following headers:
2.1 "Content-Length": str(file_size_in_bytes)
2.2 "Content-Range": "bytes 0-{file_size_in_bytes - 1}/{file_size_in_bytes}"
2.3 "Content-Type": "text/plain"
3. Pass the file bytes in the body.
Note that in the PUT request the body is not JSON but simply bytes (as specified by the Content-Type header).
Also note that the max chunk size is 4MB, so if your file is larger than that, you will have to split it into more than one chunk.
Good luck!
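The steps above can be sketched in Python with only the standard library. The function names are my own; the `upload_url` is the `uploadUrl` value returned by createUploadSession (no Authorization header is needed on it), and the Content-Range format matches the docs fragment quoted in the question.

```python
import urllib.request

CHUNK_SIZE = 4 * 1024 * 1024  # stay at or below the 4 MB limit mentioned above

def content_range(offset, chunk_len, total):
    """Build the Content-Range value for one chunk, e.g. 'bytes 0-25/128'."""
    return f"bytes {offset}-{offset + chunk_len - 1}/{total}"

def upload_in_chunks(upload_url, data):
    """PUT the raw bytes chunk by chunk to the uploadUrl from createUploadSession.
    The body is raw bytes, NOT a JSON wrapper around them."""
    total = len(data)
    last_response = None
    for offset in range(0, total, CHUNK_SIZE):
        chunk = data[offset:offset + CHUNK_SIZE]
        req = urllib.request.Request(
            upload_url, data=chunk, method="PUT",
            headers={
                "Content-Length": str(len(chunk)),
                "Content-Range": content_range(offset, len(chunk), total),
            },
        )
        with urllib.request.urlopen(req) as resp:
            last_response = resp.read()  # final chunk returns the created item
    return last_response
```

The "does not match schema" error in the question is consistent with sending a JSON body: the session endpoint expects the bare bytes described by the Content-Range header.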
I have the following list of attachments to send in a mail:
210KBPDF.pdf
1MBPDF.pdf
4MBPDF.pdf
To send a total file size under 4 MB (1MBPDF, 210KBPDF) I can use this approach, and
to send the large file (4MBPDF) I'm using the solution provided here.
But when I try to send all three files (1MBPDF, 210KBPDF, 4MBPDF) together using the large-file approach (sample code), I get the following error...
com.microsoft.graph.http.GraphServiceException: Error code: ErrorAttachmentSizeShouldNotBeLessThanMinimumSize
Error message: Attachment size must be greater than the minimum size.
POST
https://graph.microsoft.com/v1.0/me/messages/AAMkAGNhNWJlNjdkLWNkZTUtNDE1Yy1hYzkxLTkyOWI1M2U3NGQzOABGAAAAAAASIVxVSsS8RI-T3F73mdJZBwANqxyKMlQbSqZO439E21_mAAAAAAEPAAANqxyKMlQbSqZO439E21_mAAAVRY2dAAA=/attachments/microsoft.graph.createUploadSession
SdkVersion : graph-java/v2.3.2 Authorization : [PII_REDACTED]
{"attachmentItem":{"attachmentType":"file","conten[...]
400 : Bad Request [...]
Please let me know if I'm making any mistake implementing this approach, or suggest a workaround to send multiple attachments of mixed sizes.
Thanks
Here is the approach I suggest you take to avoid any issues:
Create a draft email
Upload any attachment <3 MB via the add-attachment endpoint
Upload any attachment >3 MB via the large-upload endpoint
Send the email via the send endpoint
With that approach, instead of trying to upload the small attachments with the draft email creation, you'll avoid random failures when the total size of the base64-encoded small attachments exceeds the 4 MB per-request maximum on Microsoft Graph.
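The four steps above can be sketched as follows. This is a minimal outline, not the Graph SDK used in the question: the helper names are my own, the 3 MB cutoff comes from the steps above, and the large-file branch is left as a stub since it is covered by the other answers.

```python
import base64
import json
import urllib.request

GRAPH = "https://graph.microsoft.com/v1.0"
SMALL_LIMIT = 3 * 1024 * 1024  # 3 MB cutoff from the steps above

def graph_post(token, path, body):
    """POST a JSON body to Graph and return the parsed response (or None if empty)."""
    req = urllib.request.Request(
        GRAPH + path, data=json.dumps(body).encode("utf-8"), method="POST",
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        raw = resp.read()
    return json.loads(raw) if raw else None

def small_attachment(name, data):
    """JSON body for the add-attachment endpoint (attachments under the cutoff)."""
    return {"@odata.type": "#microsoft.graph.fileAttachment",
            "name": name,
            "contentBytes": base64.b64encode(data).decode("ascii")}

def send_with_attachments(token, draft, files):
    """`draft` is a message body; `files` is a list of (name, bytes) pairs."""
    msg = graph_post(token, "/me/messages", draft)           # 1. create draft
    for name, data in files:
        if len(data) < SMALL_LIMIT:                          # 2. small attachments
            graph_post(token, f"/me/messages/{msg['id']}/attachments",
                       small_attachment(name, data))
        else:                                                # 3. large attachments:
            pass  # createUploadSession + chunked PUTs, see the other answers
    graph_post(token, f"/me/messages/{msg['id']}/send", {})  # 4. send the draft
```

Uploading each attachment in its own request keeps every call under the per-request limit regardless of how the sizes mix.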
I have made several attempts to upload a GeoJSON FeatureCollection to the Azure Maps Data Service REST API.
https://learn.microsoft.com/en-us/rest/api/maps/data/uploadpreview
The JSON I tried came from http://geojson.xyz/ - namely the "populated places simple" file, which you can download here:
https://d2ad6b4ur7yvpq.cloudfront.net/naturalearth-3.3.0/ne_50m_populated_places_simple.geojson
1,249 points, 175KB.
On POSTing to /mapData/upload I get an HTTP 200 and a 'success' response message.
The response headers include a location; when I query it, I get a 200 back with this error message in the body:
{
  "error": {
    "code": "400 BadRequest",
    "message": "Upload request failed.\nYour data has been removed as we encountered the following problems with it:\nSystem.Threading.Tasks.Task`1[System.String[]]"
  }
}
Any ideas?
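For reference, the upload-then-poll flow described above can be sketched like this. Everything here is a sketch under assumptions: the query parameters follow the linked preview docs, and the status field checked in the loop is a guess that should be verified against a real response body.

```python
import json
import time
import urllib.request

def upload_url(key, data_format="geojson", api_version="1.0"):
    """Build the mapData/upload URL (parameter names per the linked preview docs)."""
    return ("https://atlas.microsoft.com/mapData/upload"
            f"?api-version={api_version}&dataFormat={data_format}"
            f"&subscription-key={key}")

def upload_and_poll(key, geojson_bytes, interval=2):
    """POST the FeatureCollection, then poll the Location header until it resolves."""
    req = urllib.request.Request(upload_url(key), data=geojson_bytes, method="POST",
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        status_url = resp.headers["Location"]  # long-running operation status URI
    while True:
        with urllib.request.urlopen(f"{status_url}&subscription-key={key}") as resp:
            body = json.load(resp)
        # "status"/"Running" are assumptions; inspect the real body on first run
        if body.get("status") not in (None, "Running"):
            return body
        time.sleep(interval)
```

Note that the initial 200 only means the upload was accepted; validation happens asynchronously, which is why the error only appears when polling the location URL.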
Richard, I wasn't able to repro your issue.
The file is indeed a valid GeoJSON file, and I was able to successfully upload ne_50m_populated_places_simple.geojson (downloaded from http://geojson.xyz/) using the Map Data Service Upload API.
Please give it another try, and feel free to let us know if you still see any issues.
The team is investigating, but they say this is often due to issues with the GeoJSON file. Try pasting your GeoJSON into this site: http://geojson.io/ If you see any red in the side panel, hover over it and it should provide some insights into the issue.
I am using Rails 5 as the backend of a mobile app. The problem I am trying to solve is receiving a request from the app containing information about the customer plus 2 photos. After a short investigation, two options emerged:
Send the files first in a multipart/form-data POST and return an ID to the client. After that, a "real" request is sent, and the server associates the ID (metadata) with the file.
Send the files in Base64-encoded format without changing the JSON header. Something like:
curl -X POST \
  -H "Content-Type: application/vnd.api+json" \
  -H "Cache-Control: no-cache" \
  -d '{
    "data": {
      "type": "identities",
      "attributes": {
        "param1": "first param",
        "param2": "second param",
        "image1": "data:image/png;base64,iVBORw0KGgoAAAANSU.....",
        "image2": "data:image/png;base64,iVBORw0KGgoAAAANSU....."
      }
    }
  }' "http://API_URL/identity"
My concerns about these 2 approaches respectively are:
Since we are expecting 2 files, should we make a request for each one, associating it with the ID? What is expected to happen if the second call doesn't reach the server or is not valid?
What amount of bytes should we accept? I was thinking of 10 MB, but I am not sure if this is a good idea and how the server will react. Is it a good idea to first validate the type and size of the file at the UI level (the mobile app)?
If someone can suggest something else, I would really appreciate it. Also, if you have any experience with this problem, please share useful references that you have used; I would appreciate them as well.
1) If you need 2 files in one request, pass 2 files as multipart/form-data; that is totally fine. If you use Base64, you encode everything first and then decode everything, which is not the best idea.
2) You should validate those files on both the front end and the back end. The max amount of bytes should be roughly max_file1_size + max_file2_size + max_other_fields_size + headers_size; it's less about guessing than measuring.
3) It would be a good option to use carrierwave, a nice gem, and you'll have a lot less room to mess up :)
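To illustrate option 1 (both photos in one multipart/form-data request), here is a sketch of building such a body by hand with only the standard library; the helper name and field names are illustrative, not part of any framework.

```python
import uuid

def multipart_body(fields, files):
    """Build a multipart/form-data body.
    `fields` maps form-field names to string values (the customer data);
    `files` is a list of (field_name, filename, content_type, bytes) tuples."""
    boundary = uuid.uuid4().hex
    parts = []
    for name, value in fields.items():
        parts.append(
            (f'--{boundary}\r\n'
             f'Content-Disposition: form-data; name="{name}"\r\n\r\n'
             f'{value}\r\n').encode("utf-8"))
    for name, filename, ctype, data in files:
        parts.append(
            (f'--{boundary}\r\n'
             f'Content-Disposition: form-data; name="{name}"; filename="{filename}"\r\n'
             f'Content-Type: {ctype}\r\n\r\n').encode("utf-8") + data + b"\r\n")
    parts.append(f"--{boundary}--\r\n".encode("utf-8"))
    return b"".join(parts), f"multipart/form-data; boundary={boundary}"
```

Because the file bytes go into the body untouched, this avoids the roughly 33% size inflation that Base64 encoding adds in option 2.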
I'm working with the Google Drive API for iOS (I don't use the Objective-C client library). I can update a file's data only with the "media" uploadType, and a file's data and/or metadata with the "multipart" uploadType, but I can't update anything with the "resumable" uploadType.
According to Google's API reference, the file-update request uses the PUT method, but a request with the "resumable" uploadType uses POST for the first request, which sends only the file's metadata; the second request, with PUT, sends the file's data. This confuses me a lot. Does anyone have any suggestions?
Finally I did it:
Request 1:
Header:
PUT https://www.googleapis.com/upload/drive/v2/files?uploadType=resumable
Content-Type: application/json; charset=UTF-8
X-Upload-Content-Type: video/mp4
X-Upload-Content-Length: 7421411
Body:
{
Meta-data of files to upload
}
After receiving the upload_id from the Location header of response 1, make the 2nd request:
Header:
PUT https://www.googleapis.com/upload/drive/v2/files?uploadType=resumable&upload_id=adZc...
Content-Length: 7421411
Content-Type: video/mp4
Body:
{
bytes of file to upload
}
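The two requests above can be expressed as request builders in Python. This is a sketch: the function names are mine, and I've added a file ID to the step-1 path since the v2 reference puts the ID there when updating an existing file; each built request would then be executed with `urllib.request.urlopen`.

```python
import json
import urllib.request

API = "https://www.googleapis.com/upload/drive/v2/files"

def initiate_request(token, file_id, mime, size, metadata):
    """Step 1: PUT the metadata; the session URI (with upload_id) comes back
    in the Location header of the response."""
    return urllib.request.Request(
        f"{API}/{file_id}?uploadType=resumable",
        data=json.dumps(metadata).encode("utf-8"), method="PUT",
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json; charset=UTF-8",
                 "X-Upload-Content-Type": mime,
                 "X-Upload-Content-Length": str(size)})

def bytes_request(session_uri, mime, data):
    """Step 2: PUT the raw file bytes to the session URI from step 1."""
    return urllib.request.Request(
        session_uri, data=data, method="PUT",
        headers={"Content-Type": mime,
                 "Content-Length": str(len(data))})
```

The key point matching the answer above: the body of step 1 is JSON metadata, while the body of step 2 is the raw bytes of the file, with the real MIME type.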
Are you using an XMLHttpRequest, or the Google Drive SDK API (request = gapi.client.request(); and request.execute())?
The most important question is whether you are able to send the next chunk. What is the point of doing a resumable upload with the full set of bytes? The point of resumable upload is to use chunking.
An upload is resumable from the next chunk, i.e. technically on the next chunk, the file size should increase. I don't mind if the server doesn't give me the number of bytes received; I need to know if the file is increasing. If the file size is increasing, I can simply offset the bytes to upload by the difference between the file size and the data I sent.
If, by the definition of what you are doing, resumable upload is supported by default, all you have to do is use:
'path': '/upload/drive/v2/files/' + NewFileID,
NewFileID can be any file ID, old or new. It will still update the file with new bytes.
So I am not sure what the advantage of what you are doing is. Did you figure out how to send the next chunk (256 KB default) and confirm that the file size on Google Drive changes to:
First ChunkSize + MetadataSize + New ChunkSize + MetadataSize
First ChunkSize: your initial chunk that you PUT, for example 256 KB
MetadataSize: space allocated by Google Drive for storing its metadata
New ChunkSize: your second chunk size (which should again be the 256 KB default)
MetadataSize: again, metadata stored by Google Drive for the next chunk
So the total size by the time both are processed should be > 512 KB.
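Rather than watching the stored file size grow, the resumable protocol lets you ask the server directly how many bytes it has received: an empty PUT with `Content-Range: bytes */total` returns a 308 whose Range header reports the received range. A sketch, assuming you already hold the session URI:

```python
import urllib.error
import urllib.request

def parse_range_header(rng):
    """'bytes=0-262143' -> 262144 (the next byte offset to send); None -> 0."""
    return int(rng.rsplit("-", 1)[1]) + 1 if rng else 0

def next_offset(session_uri, total_size):
    """Query the upload session for how much the server has received.
    A 2xx means the upload already completed; a 308 carries the Range header."""
    req = urllib.request.Request(
        session_uri, data=b"", method="PUT",
        headers={"Content-Length": "0",
                 "Content-Range": f"bytes */{total_size}"})
    try:
        urllib.request.urlopen(req)
        return total_size  # upload is already complete
    except urllib.error.HTTPError as err:
        if err.code == 308:  # Resume Incomplete
            return parse_range_header(err.headers.get("Range"))
        raise
```

This gives the exact resume offset without guessing from the file size, which also sidesteps the metadata-overhead question discussed above.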