OneDrive API - With whom is the item shared?

In a OneDrive for Business account I have shared files and folders, and I'm trying to get a list of emails/users with whom the items are shared.
Both
https://graph.microsoft.com/v1.0/me/drive/sharedWithMe
and
https://graph.microsoft.com/v1.0/me/drive/root/children
produce a similar result: I get the list of files, but the Permissions property is never present. All I see is whether the items are shared, but not with whom.
Now, I'm aware of /drive/items/{fileId}/permissions, but this would mean checking the files one by one. My app deals with a lot of files and I would really appreciate a way to get those permissions in bulk...
Is there such an option?

/sharedWithMe is actually the opposite of what you're looking for. These are not files you've shared with others but rather files others have shared with you.
As for your specific scenario: expanding permissions is unfortunately not supported on collections. In other words, it isn't possible to $expand=permissions on the /children collection; each file needs to be inspected separately.
You can, however, reduce the number of files you need to inspect by looking at the shared property. For example, if its scope property is set to users, you know the file was shared with specific users; if the shared property is null, you know the file is only available to the current user.
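That pre-filter can be sketched in a few lines (a minimal sketch; it assumes you already have the parsed JSON of a /children response, and the helper name is my own — the "shared"/"scope" keys follow the Microsoft Graph driveItem shape):

```ruby
# Given the parsed "value" array of a /children response, keep only the
# items whose shared facet says they were shared with specific users.
def shared_with_users(items)
  items.select { |item| item["shared"] && item["shared"]["scope"] == "users" }
end

items = [
  { "name" => "a.docx", "shared" => { "scope" => "users" } },
  { "name" => "b.docx" },                                      # not shared at all
  { "name" => "c.docx", "shared" => { "scope" => "anonymous" } }
]
shared_with_users(items).map { |i| i["name"] } # => ["a.docx"]
```

Only the items this filter keeps would then need a follow-up /permissions call.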
You can also reduce the number of calls you're making by using JSON batching. After constructing the list of shared files you want to check, you can use batching to process them in batches of up to 20 requests. This greatly reduces overhead and can dramatically improve overall performance.
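The batching step could look like this (a hypothetical sketch: the $batch endpoint and the 20-requests-per-batch limit come from the Graph documentation, but the helper name is my own, and actually POSTing the payloads with an access token is omitted):

```ruby
require "json"

# Build Microsoft Graph JSON batch payloads for a list of item IDs,
# 20 sub-requests per batch. Each payload would be POSTed to
# https://graph.microsoft.com/v1.0/$batch with an Authorization header.
def permission_batches(item_ids)
  item_ids.each_slice(20).map do |slice|
    {
      requests: slice.each_with_index.map do |item_id, i|
        {
          id:     (i + 1).to_s, # per-batch request id, must be unique
          method: "GET",
          url:    "/me/drive/items/#{item_id}/permissions"
        }
      end
    }
  end
end
```

For 45 shared files this yields three payloads (20 + 20 + 5), i.e. three round trips instead of 45.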

_api/web/onedriveshareditems?$top=100&$expand=SpItemUrl might just do the trick. This is the URL used by the web interface of OneDrive itself. Hope it helps.

Related

Uniquely identify files with same name and size but with different contents

We have a scenario in our project where files come from the client with the same file name, sometimes with the same file size too. Currently, when a file is uploaded, we check the new file name against the existing files in the database; if there is a match, we mark it as a duplicate and do not allow the upload at all. But now we have a requirement to check the content of files that have the same name, so we need a way to differentiate such files based on their contents. How do we do that efficiently, avoiding even a minute chance of error?
Rails 3.1, Ruby 1.9.3
Below is one option I have read from a web reference.
require 'digest'
digest_value = Digest::MD5.base64digest(File.read(file_path))
The above line will read the entire contents of the incoming file and generate a unique hash from it, right? We could then use that hash for unique file identification. But we have more than 500 users working simultaneously, 24/7, and most of them will be performing this operation. So if an incoming file is large (> 25 MB), the digest will take more time to read the whole contents and performance will suffer. What would be a better solution considering all these facts?
I have read the question and the comments, and I have to say the problem is not stated 100% correctly. It seems that what you need is to identify identical content. Period. Regardless of whether name and size are equal or not. Correct me if I am wrong, but you likely don't want to let users upload 100 duplicates of the same file just because they have 100 copies of it locally under different names.
So far, so good. I would use the following approach. The file name is not involved at all. The file size can serve as a fast uniqueness check (if the sizes differ, the files are definitely different).
Then one might allow the upload with an instant "OK" response. Afterwards, the server should run Digest::MD5 in the background, comparing the file against everything already uploaded. If it is a duplicate, the new copy of the file should be removed, but the name should remain on the filesystem as a symbolic link to the original.
That way you won't frustrate users: they can keep as many copies of a file as they want under different names, while disk usage stays at the lowest possible level.
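The background dedupe step might be sketched like this (hypothetical names throughout; a content-addressed store directory stands in for "comparing against all already uploaded", the fast size pre-check is omitted for brevity, and Digest::MD5.file is used because it streams the file in chunks rather than loading it whole the way File.read does):

```ruby
require "digest"
require "fileutils"

# Store an uploaded file in a content-addressed directory. Duplicates are
# replaced by symbolic links to the canonical copy, so every user-chosen
# file name survives while the bytes are stored only once.
def store_upload(upload_path, store_dir)
  digest   = Digest::MD5.file(upload_path).hexdigest # streams, no File.read
  original = File.join(store_dir, digest)
  if File.exist?(original)
    File.delete(upload_path)            # drop the duplicate bytes...
    File.symlink(original, upload_path) # ...but keep the name as a link
  else
    FileUtils.cp(upload_path, original) # first copy becomes canonical
  end
  original
end
```

Reading through the symlinked name still yields the original content, which is exactly the behavior the answer describes.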

What is the proper way for a program to open and write to a mapped drive without allowing the computer user to do so?

I am working with a program designed to record and display user-input data for tracking courses in a training process. One of the requirements was that we be able to keep a copy of each course's itinerary (in .pdf format) to display alongside the course. This program is being written in Delphi 7, expected to run on Windows 7 machines.
I've managed to get a remote location set up on the customer's main database (running CentOS 6), as a samba share, to store the files. However, I'm now running into a usability issue with the handling of the files in question.
The client doesn't want the process to go to a mapped drive; they've had problems in the past with individual users treating the mapped drive another set of programs require as personal drive space. However, without that, the only method I could come up with for saving/reading back the .pdf files was a direct path to the share (that is, setting the program to copy to/read from \\server\share\ directly) - which is garnering complaints that it takes too long.
What is the proper way to handle this? I've had several thoughts on the issue, but I can't determine which path would be the best to follow:
I know I could map the drive at the beginning of the program execution, then unmap it at the end, but that leaves it available for the end user to save to while the program is up, or if the program were to crash.
The direct 'write-to-share' method, bypassing the need for a mapped drive, as I've said, is considered too slow (probably because it's consistently a bit sluggish to display the files).
I don't have the ability to set a group policy on these machines, so I can't hide a drive that way - and I really don't think it's a wise idea for my program to attempt to change the registry on the user's machine, which also lets that out.
I considered trying to have the drive opened as a different user, but I'm not sure that helps - after looking at it, I'm thinking (perhaps inaccurately) that it wouldn't be any defense; the end user would still have access to the drive as opened during the use window.
Given that these four options seem to be less than usable, what is the correct way to handle these requirements?
I don't think this will work with a Samba share.
However, you could consider using (secure) FTP or, if there is a database, uploading the files as BLOBs.
That way you don't have to expose credentials to the user.

Solution For Monitoring and Maintaining App's Size on Disc

I'm building an app that makes extensive use of Core Data, and a lot of my models have UIImage and NSData properties (for images and videos). Since it's not a great idea to store that data directly in Core Data, I built a file manager class that writes the files into different buckets in the documents directory, depending on the context in which each file was created and its media type.
My question now is how to manage the documents directory. Is there a way to detect how much space the app has used out of its total allocated space? Additionally, what is the best way to clean those directories: do I check every time a file is written, only on app launch, etc.?
Is there a way to detect how much space the app has used up out of its total allocated space?
Apps don't have a limit on total allocated space, they're limited by the amount of space on the device. You can find out how much space you're using for these files by using NSFileManager to scan the directories. There are several methods that do this in different ways-- check out enumeratorAtPath:, for example. For each file, use a method like attributesOfItemAtPath:error: to get the file size.
Better would be to track the file sizes as you create and delete files. Keep a running total, stored in user defaults. When you create a new file, increase it by the size of the new data; when you remove a file, decrease it.
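The scan-the-directories bookkeeping is language-agnostic; here is the same idea sketched in Ruby (used only because it's the one language this page already shows code in), mirroring what an NSFileManager enumerator plus a per-file attributes lookup would do:

```ruby
# Walk a directory tree and total the size of every regular file,
# the equivalent of enumerating paths and summing each file's size
# attribute. Returns the usage in bytes.
def directory_usage(dir)
  Dir.glob(File.join(dir, "**", "*"))
     .select { |path| File.file?(path) } # skip subdirectories themselves
     .sum { |path| File.size(path) }
end
```

Because this rescans everything, it is the expensive fallback; the running-total approach above avoids repeating this walk on every check.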
Additionally, what is the best way to go about cleaning those directories; do I check every time a file is written or only on app launch, etc.?
If these files are local data that's inherently part of the associated Core Data object, the sensible approach is to delete a file when its Core Data object is deleted. The managed object needs the data file, so don't delete the file if you still use the object. That means there must be some way to link the two, but I'm assuming that's already true since you say that these files are used by managed objects somehow.
If the files are something like cached data that's easily re-created or re-downloaded, you should put them in the location returned by NSTemporaryDirectory(). Then iOS can delete them when it thinks the space is needed. You can also clear out old files whenever it seems appropriate, by scanning for older files or ones that haven't been used in a while (the details depend on exactly how you use the files).

Multiple uploads to a website simultaneously

I am building an ASP.NET website that accepts a PDF as input and processes it, generating an intermediate file with a particular name. I want to know how the server will handle this if multiple users are using the site at the same time.
How can I handle this? Will multi-threading do the job? What about the file names of the intermediate files I am generating; how can I make sure they won't overwrite each other? And how do I achieve good performance?
Sorry if the question is too basic for you.
I'm not into .NET, but it sounds like a generic problem anyway, so here are my two cents.
Like you said, multithreading (as different requests usually run in different threads) takes care of most problems of that kind, since every method invocation involves new objects run in a separate context.
There are exceptions, though:
- Singleton (global) objects whose any of their operations have side effects
- Other shared resources (files, etc.): this is exactly your case.
So in the case of files, I'd ponder these (mutually exclusive) alternatives:
(1) Never write the uploaded file to disk; instead, hold it in memory and process it there (e.g. as a byte array). In this case you're leveraging the thread-per-request protection. This cannot be applied if your files are really big.
(2) Choose highly randomized names (like UUIDs) and write the files to a temporary location, so the names won't clash if two users upload at the same time.
I'd go with (1) whenever possible.
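Option (2) can be sketched like this (a hypothetical helper; SecureRandom, the system temp directory, and the .pdf default are my own choices):

```ruby
require "securerandom"
require "tmpdir"

# Derive a collision-free path for each intermediate file by embedding a
# UUID in the name, so concurrent requests never fight over the same file.
def intermediate_path(extension = ".pdf")
  File.join(Dir.tmpdir, "upload-#{SecureRandom.uuid}#{extension}")
end
```

Two simultaneous requests each get their own path, so no locking or overwrite handling is needed for the intermediate files themselves.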
Best

How are you mapping database records to physical files such as image uploads?

37signals suggests ID partitioning to accomplish this:
http://37signals.com/svn/archives2/id_partitioning.php
Any suggestions would be more than welcome.
Thanks.
We use Paperclip for storing our files. It can do what you want pretty easily.
We use partitioning by date, so an image uploaded today would end up in 2009/12/10/image_12345.jpg. The path is stored in the DB for reference, and the path to the image folder (the parent of 2009) is placed in a config file. If we need to change things later, this makes it very easy.
You can partition by virtually anything. We partition by user in our designs, but it's an HR system so that makes sense (there's no way a user will have 32k file entries) and the files are clearly connected to users. For the media-library parts of the system, dividing by date or ID is more useful.
The catch is, you should store part of the file path in a database table (as suggested before), whether it's a date or a user hash/name (often further subdivided, e.g. u/user, j/john, j/jo/john). Then you don't have to worry about changing the division scheme later, as that only requires a database update.
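The date-partitioned layout described above can be sketched as follows (a minimal sketch; the helper name is my own, and only this relative path would be stored in the database, with the parent folder coming from config):

```ruby
require "date"

# Build the relative storage path for an upload: an image saved on a given
# date lands in YYYY/MM/DD/<filename>, matching the 2009/12/10/image_12345.jpg
# example above.
def partitioned_path(filename, date = Date.today)
  File.join(date.strftime("%Y/%m/%d"), filename)
end

partitioned_path("image_12345.jpg", Date.new(2009, 12, 10))
# => "2009/12/10/image_12345.jpg"
```

Swapping the strftime pattern for, say, an ID-based split is a one-line change, which is why keeping the scheme out of the stored paths' prefix matters.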
