When uploading a single file to OneDrive for Business, I can't find a way to set the created and modified dates.
When I use a resumable upload for files larger than 4 MB it works, but not for small single files.
I'm using the Microsoft Graph .NET SDK:
var client = await GetGraphClient(request);
using var stream = new FileStream(request.LocalPath, FileMode.Open, FileAccess.Read, FileShare.Read);
await client.Me.Drive.Root.ItemWithPath($"{ds.FolderPath}/{request.File.Name}").Content.Request().PutAsync<DriveItem>(stream);
How do I upload a file with this method and also set its created and modified dates?
There's currently no way to do this in a single request with OneDrive for Business - you'll have to follow the PUT-to-content request with a metadata update that includes the timestamps.
In the future the goal is to align with OneDrive Personal and allow multipart requests that include both the metadata and the file content, but this is not yet available.
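That follow-up metadata update sets the driveItem's fileSystemInfo facet. A minimal sketch in TypeScript (the asker uses the .NET SDK, but the request body is the same JSON either way; `graphClient` and `itemPath` are assumptions):

```typescript
// Builds the PATCH body for setting timestamps via the driveItem
// fileSystemInfo facet (documented on the Graph driveItem resource).
function buildTimestampPatch(created: Date, modified: Date) {
  return {
    fileSystemInfo: {
      createdDateTime: created.toISOString(),
      lastModifiedDateTime: modified.toISOString(),
    },
  };
}

// Usage (not executed here; graphClient and itemPath are assumptions):
//   await graphClient
//     .api(`/me/drive/root:/${itemPath}`)
//     .patch(buildTimestampPatch(createdDate, modifiedDate));
```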
I have an Excel Online document that can be edited by users in the Excel 365 web application.
I have an application that reads this Excel file with the Graph API.
I have successfully managed to read data from the file, but when a user changes the Excel file and Excel says it has been saved, if I read the file immediately with my application I get the old data.
I have to wait about 30 seconds to get the updated data. Is there anything I can do to avoid this latency?
Here is my call to get the data:
var range = await _graphClient.Drives[_driveId].Items[_itemId].Workbook.Worksheets[worksheetName]
    .Range(rangeAddress).UsedRange(true)
    .Request()
    .GetAsync();
I posted a similar question on the Graph API GitHub repo. They responded here:
https://github.com/microsoftgraph/msgraph-cli/issues/215#issuecomment-1379391739
They suggest using the eTag property to determine whether the workbook has changed since the last access. https://learn.microsoft.com/en-us/graph/api/resources/driveitem?view=graph-rest-1.0
They also suggest using the Excel session. https://learn.microsoft.com/en-us/graph/api/resources/excel?view=graph-rest-1.0#sessions-and-persistence
However, in my personal testing the session method didn't fix the delay in Excel saving the data on the backend.
They also mentioned:
"I cannot guarantee that these will work. I don't think this API was designed with real-time co-authoring in mind."
In our project we started to use the Graph API v1.0 introduced some time ago, and we wanted our users to be able to download *.eml files. We found that this is called the MIME stream on the Microsoft side.
Long story short:
we use TypeScript (TS) and the Graph Client library (@microsoft/microsoft-graph-client@^2.2.1);
We use the request
await graphClient.api(`/me/messages/${msgID}/$value`).getStream()
The problem we started to notice is that a 15 MB email (with some attachments of 3-5 MB) takes approximately 30 seconds to download, whereas if the email is downloaded via Outlook OWA (three dots on the email > download) it takes a few seconds.
Is there a way to increase the download speed using the Graph API?
I tried to download the MIME from Graph and expected it to be as fast as downloading the MIME from Outlook on the web, but it takes twice (maybe more) the amount of time.
EDITED:
As an example: downloading an eml file (13 MB) from https://outlook.live.com/mail
by a URI like https://attachment.outlook.live.net/owa/outlook_HEXNUM#outlook.com/service.svc/s/DownloadMessage?id=BASE64%3D&token=HUGETOKEN takes 2.5 s,
while downloading the eml using the Graph API by a URI like
client.api(`/me/messages/${encodeURIComponent(this._fixId(mailId))}/$value`).getStream() and reading the stream takes 20-30 s.
I am using the Microsoft Teams API to fetch public channel messages. With the messages I also get attachments, with a SharePoint URL as the content URL. Now I want to download that file, but the download fails. If I copy-paste the contentUrl directly into a browser it downloads the file, but if I make an HTTP call it fails. I cannot use the SharePoint Graph API (download by item ID) because I don't get an itemId here. Is there any way to download those files? Is any permission needed, or am I making a mistake in the HTTP call?
PS: I do see a couple of similar questions, but none has an accepted answer and all are a couple of years old.
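One documented Graph pattern that may fit here (an assumption, since the question doesn't confirm the URL type) is resolving the sharing URL through the /shares endpoint, which returns a driveItem without needing an itemId. The "u!" token encoding below follows the Graph shares documentation; the permission names in the comment are the usual delegated file-read scopes:

```typescript
// Encodes a sharing URL into the "u!" share token documented for the
// Graph /shares endpoint: base64-encode, strip padding, make URL-safe.
function encodeShareUrl(sharingUrl: string): string {
  const base64 = Buffer.from(sharingUrl, "utf8").toString("base64");
  return "u!" + base64.replace(/=+$/, "").replace(/\//g, "_").replace(/\+/g, "-");
}

// Usage (not executed): with a bearer token that has Files.Read.All or
// Sites.Read.All, download the file behind the Teams contentUrl via
//   GET /shares/{encodeShareUrl(contentUrl)}/driveItem/content
```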
I am working on a project where the user joins a "stream". During stream setup, the person who is creating the stream (the stream creator) can choose to either:
Upload all photos added to the stream by members to our hosting solution (S3)
Upload all photos added to the stream by members to the stream creator's own Dropbox authenticated folder
In the future I would like to add more storage providers (such as Drive, Onesky, etc.)
There are a couple of different questions I have regarding how to solve this.
What should the structure be in the database for photos? I currently only have photo_url, but that won't be easy to manage from a data perspective with pre-signed URLs and with the different ways a photo can be uploaded (S3, Dropbox, etc.)
How should the access tokens for each storage provider be stored? Remember that only the stream creator's access_token will be stored, and everyone on the stream will share that token when uploading photos
I will add iOS and web clients in the future that will upload directly to the storage provider and bypass the server to avoid heavy server load
As far as database storage, your application should dictate the structure based on the interface that you present both to the user and to the stream.
If you have users upload a photo and they don't get to choose the URI, and you don't have any hierarchy within a stream, then I'd recommend storing just an ID and a stream_id in your main photo table.
So at a minimum you might have something looking like
create table photos(id integer primary key, stream_id integer references streams(id) not null);
But you probably also want description and other information that is independent of storage.
The streams table would have all the generic information about a stream, but would have a polymorphic association to a class dependent on the type of stream. So you could use that association to get an instance of S3Stream or DropBoxStream based on what actual stream was used.
That instance (also an ActiveRecord resource) could store the access key, and for things like dropbox, the path to the folder etc. In addition, that instance could provide methods to construct a URI given your Photo object.
If a particular technology needs to cache signed URIs, then say the S3Stream object could reference a S3SignedUrl model where the URIs are signed.
If it turns out that the signed URL code is similar between DropBox and S3, then perhaps you have a single SignedUrl model.
When you design the iOS and Android clients, it is critical that they are not given access to the stream owner's access tokens. Instead, you'll need to do all the signing inside your server app. You wouldn't want a compromise of a device to lead to an exposed access token, creating billing problems as well as privacy exposures.
Hope this helps.
We set up a lot of Rails applications with different kinds of file storage behind them.
Yes, just a URL is not manageable in the future. To save a lot of time you could use gems like CarrierWave or Paperclip. They handle all the thumbnail generation and file validation. One approach is to upload the file from the client directly to S3 or Dropbox into a tmp folder and just tell your Rails app "Hey, here is the URL of a newly uploaded file", and Paperclip or CarrierWave will take care of the thumbnail generation and storage. (Example for Paperclip)
I don't know exactly how your stream works, so I cannot give a good answer to this -.-
With the setup I mentioned in 1. you should upload from your different clients directly to S3 or Dropbox etc., and after uploading, the client tells the Rails backend that it should import the file from that URL. (And before Paperclip or CarrierWave finish their processing you could use the tmp URL of the file to display something directly in your stream.)
I'm building a Ruby on Rails app, and I'd like to integrate some Office365 features.
For instance: I would like to download a file from OneDrive and then attach it to an email in order to send it via the Outlook REST API.
I found the get item content OneDrive REST API, but I don't understand how to use it.
I understand that I have to send a GET request (formatted as explained on msdn.microsoft.com) with Rails, which will then provide me a "pre-authenticated download URL" to download the file.
Then I will have to send a second GET request with this pre-authenticated download URL to start the download, but I don't understand how to handle the response in order to save the file into a variable.
How can I retrieve the file into a variable of my Ruby on Rails app, so that I can attach it to an email with the Outlook REST API and send it from my own Rails controller?
Also, while this workflow is really not optimized in terms of bandwidth and processing (3 REST API requests + 1 download + 1 upload), it will work.
However, if there exists a single REST API call that directly attaches a OneDrive file to an email and sends it, that would make my life a lot easier, save energy, save money for Microsoft's datacenters, and spare the planet's ecology.
Any tutorial, examples, or more explanatory doc would be much appreciated.
--- EDIT ---
Adding a link to the email is not desired, as the email may have to be sent to someone outside of Office365 users, and public links are a security issue for confidential documents.
Any help is welcome.
There isn't a single REST API call you can make currently to do what you want. Being able to easily attach a file from OneDrive to a new email message is a great scenario for the Microsoft Graph API, but it just isn't supported right now.
If you want to attach the file, you need to do as you mentioned, download the contents of the file, and then upload it again as an attachment to the message.
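That download-then-attach flow could be sketched as follows; the sendMail payload with an inline fileAttachment is from the Graph message docs, while the helper name, recipient, and file are illustrative (a Rails app would build the same JSON body):

```typescript
// Builds the /me/sendMail body with the downloaded file bytes inlined
// as a fileAttachment (shape from the Graph message/attachment docs).
function buildMailWithAttachment(
  subject: string,
  to: string,
  fileName: string,
  fileBytes: Buffer
) {
  return {
    message: {
      subject,
      toRecipients: [{ emailAddress: { address: to } }],
      attachments: [
        {
          "@odata.type": "#microsoft.graph.fileAttachment",
          name: fileName,
          contentBytes: fileBytes.toString("base64"),
        },
      ],
    },
  };
}

// Usage (not executed): GET /me/drive/items/{id}/content first, keep the
// response bytes in memory, then POST /me/sendMail with this body.
```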
However, I'd recommend sending a link to the file instead, even though you mentioned you don't want to do that. OneDrive for Business now supports "company shareable links" which are scoped to just the user's organization instead of being available totally anonymously.
Something else to consider: The security concerns of sending an anonymous link aren't that different than sending an attached file. In fact, the anonymous link can be more secure, because access to the file can be monitored and revoked in the future (unlike the attachment, which will always be out there).
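The "company shareable link" mentioned above comes from the driveItem createLink action scoped to the organization. A minimal sketch, assuming a Graph JS client and a hypothetical itemId:

```typescript
// Builds the body for the driveItem createLink action; scope
// "organization" yields the company shareable link described above.
function buildCreateLinkBody(scope: "organization" | "anonymous" = "organization") {
  return { type: "view", scope };
}

// Usage (not executed; graphClient and itemId are assumptions):
//   const link = await graphClient
//     .api(`/me/drive/items/${itemId}/createLink`)
//     .post(buildCreateLinkBody());
//   // link.link.webUrl goes into the mail body instead of an attachment
```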