I am trying to upload a file to a Sharepoint document folder using the createUploadSession method. My request looks like the below:
https://graph.microsoft.com/v1.0/sites/{MY-SITE}/drive/root:/cases/Test.txt:/createUploadSession
I then PUT the file contents using the provided uploadUrl in the response. While the upload is successful, no users can see the file in the folder. We are using Application permissions (not delegated) so there is no user directly assigned to the uploaded file. How do I attribute this file upload so other users can see the file? I am using Postman. I see examples of how to upload files, but none mention how to allow users to access the file once uploaded.
I tried applying a scope attribute in the body of the createUploadSession, but that did not work. JSON body below:
{
  "#microsoft.graph.conflictBehavior": "replace",
  "description": "description",
  "fileSystemInfo": { "#odata.type": "microsoft.graph.fileSystemInfo" },
  "scope": "users",
  "name": "Test.txt"
}
Any guidance would be appreciated.
Figured out this issue: the file needed to be checked in, using the checkin action on the DriveItem resource.
POST /drives/{driveId}/items/{itemId}/checkin
POST /groups/{groupId}/drive/items/{itemId}/checkin
POST /me/drive/items/{item-id}/checkin
POST /sites/{siteId}/drive/items/{itemId}/checkin
POST /users/{userId}/drive/items/{itemId}/checkin
Once checked in, users are able to view the file.
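For reference, the check-in call can be sketched in Ruby with net/http. The site ID, item ID, and token below are placeholders, not real values; the request is only built here, and the line that actually sends it is commented out because it needs a valid app-only token:

```ruby
require "net/http"
require "json"
require "uri"

# Placeholder IDs and token -- substitute your own values.
site_id = "SITE-ID"
item_id = "ITEM-ID"
token   = "ACCESS-TOKEN"

uri = URI("https://graph.microsoft.com/v1.0/sites/#{site_id}/drive/items/#{item_id}/checkin")

req = Net::HTTP::Post.new(uri)
req["Authorization"] = "Bearer #{token}"
req["Content-Type"]  = "application/json"
req.body = { comment: "Initial upload" }.to_json

# To actually send it (requires a valid app-only token):
# res = Net::HTTP.start(uri.host, uri.port, use_ssl: true) { |http| http.request(req) }
```

The same shape works for the /drives/{driveId}/items/{itemId}/checkin variant; only the path changes.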
Related
I have seen any number of tutorials on uploading a file to Active Storage locally, to S3, etc. using Rails views. But I cannot find a reliable source on how to upload a file to Active Storage through an API, say via Postman. What prerequisites are required to pass an attachment through an API via Postman?
Let me detail it.
Step 1: Through React or any frontend framework, I will choose a file to upload (a .pdf file); ultimately it should be saved somewhere.
Step 2: The chosen file should be passed through an API to the backend service, where it is to be saved in storage such as AWS S3.
How will the API request be sent to store the file? Can someone help me out with it?
It's very simple to do:
Open Postman.
Go to Body -> form-data.
Hover over the Key field and you will find an option to set its type to File.
Select the file and send the request.
Alternatively, send the file path in the raw JSON body, like this,
and then create the file object on the backend. Put a check on the file object: if it is blank, use the path sent like this:
// For temporary use on the backend only; these do not need to be sent in production
{
  "file_path": "/home/jolly/Downloads/pdf/pdf-5.pdf",
  "file_name": "pdf-5.pdf"
}
Then, in the code, use them to create the file object, which you can pass wherever you need to save it (S3 or Rails storage):
# Look up the file on disk from the params sent in the JSON body
file_path  = params[:file_path]
file_name  = params[:file_name]
image_path = Rails.root.join(file_path)
image_file = File.open(image_path)
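Since the question is about Active Storage, the snippet above could feed an attach call. Below is a minimal sketch, assuming a hypothetical Document model with has_one_attached :file; a temp file stands in for the path on disk, and the attach line itself is commented out because it needs a real Rails app:

```ruby
require "pathname"
require "tempfile"

# Stand-in for the controller params from the JSON body; the temp file
# stands in for /home/jolly/Downloads/pdf/pdf-5.pdf on disk.
tmp = Tempfile.new(["pdf-5", ".pdf"])
tmp.write("%PDF-1.4 sample")
tmp.close

params = { file_path: tmp.path, file_name: "pdf-5.pdf" }

image_path = Pathname.new(params[:file_path])
image_file = File.open(image_path)

# In a real controller you would then attach it, e.g. with Active Storage:
# doc = Document.new
# doc.file.attach(io: image_file,
#                 filename: params[:file_name],
#                 content_type: "application/pdf")
```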
When an Excel or Word file on SharePoint is opened, it is in edit mode by default. The Excel/Word Online menu has an option to set the file read-only, called "Protect Workbook" in Excel Online and "Protect Document" in Word Online, as shown in the screenshot.
The next time the file is opened in Excel/Word Online, it opens in read-only mode and shows an "Edit Anyway" button to switch to edit mode, which is exactly what I need.
The question is: how can I use the Graph API to programmatically set an Excel or Word document as read-only?
HTTP request
POST /workbook/worksheets/{id|name}/protection/protect
Request headers
- Authorization: Bearer {token}. Required.
- Workbook-Session-Id: Workbook session ID that determines whether changes are persisted. Optional.
Request body
In the request body, provide a JSON object with the following parameter.
- options (WorkbookWorksheetProtectionOptions): Optional. The sheet protection options.
Response
If successful, this method returns a 200 OK response code. It does not return anything in the response body.
See https://learn.microsoft.com/en-us/graph/api/worksheetprotection-protect?view=graph-rest-1.0&tabs=http for more information.
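Putting the pieces together, a protect call might look like the following Ruby sketch. The item ID, worksheet name, and token are placeholders, and allowFormatCells is just one example of a WorkbookWorksheetProtectionOptions property; the request is built but not sent:

```ruby
require "net/http"
require "json"
require "uri"

item_id = "ITEM-ID"        # placeholder drive item ID
token   = "ACCESS-TOKEN"   # placeholder OAuth token

uri = URI("https://graph.microsoft.com/v1.0/me/drive/items/#{item_id}" \
          "/workbook/worksheets/Sheet1/protection/protect")

req = Net::HTTP::Post.new(uri)
req["Authorization"] = "Bearer #{token}"
req["Content-Type"]  = "application/json"
req.body = { options: { allowFormatCells: false } }.to_json

# To send it (requires a valid token):
# res = Net::HTTP.start(uri.host, uri.port, use_ssl: true) { |http| http.request(req) }
```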
I am currently working with Azure API App services, and I am a little confused about generating Swagger metadata using Swashbuckle. I read the documentation below, which says that to see your metadata you add swagger/docs/v1 to the end of your API URL.
https://learn.microsoft.com/en-us/azure/app-service-api/app-service-api-dotnet-get-started
I did the same thing, and I am able to see my generated metadata in JSON form. But when I add swagger/help/v1 or swagger/help/v2 to the end of my API URL, I do not get any metadata in JSON form.
Is there a reason only swagger/docs/v1 works for generating Swagger metadata in JSON form, or are other paths such as swagger/help/v1 also allowed?
Swashbuckle's default path is /swagger/docs/v1, and the Swashbuckle docs show how to change that path.
httpConfiguration
.EnableSwagger("docs/help/{apiVersion}", c => c.SingleApiVersion("v1", "A title for your API"))
.EnableSwaggerUi("sandbox/{*assetPath}");
In this case the URL for the Swagger JSON will be docs/help/v1, and the URL for the swagger-ui will be sandbox/index.
I am using Transloadit to process image uploads with Rails. I have included all the fields (fields: "*") so they are submitted with the params. I would now like to use them in the assembly instructions to rename the files. See the relevant excerpt of the instructions:
"export": {
"use": [
"base",
"large",
"medium",
"thumb"
],
"robot": "/s3/store",
"key": "********",
"secret": "********",
"bucket": "********",
"path": "${unique_original_prefix}/${previous_step.name}/${fields.coach[name]}.${file.ext}"
}
However, this does not work. The resulting files are:
5e
/f88480973a11e49ecf65da10504cf1
/base
/.jpg
/large
/.jpg
/medium
/.jpg
/thumb
/.jpg
What am I doing wrong?
Bonus:
Also, is there a way to parameterize the field values with Transloadit, or should I just have a hidden input field that gets set to the proper value when the form is submitted? This, I guess, would also let me circumvent the first problem, but somehow it feels dirty.
You are using ${unique_original_prefix}, which works like this:
This is like ${unique_prefix} with the exception that two files that are encoding results of the same uploaded file (the original file) will have the same prefix value here.
And then for ${unique_prefix}:
A unique 33-character prefix used to avoid file name collisions, such as “f2/d3eeeb67479f11f8b091b04f6181ad”
Notice how both of these assembly variables create a two-character subdirectory, which is what you are seeing in your results. The rest of the path is as expected.
If you'd like a unique prefix without the two-char subfolders, please use ${assembly.id} instead of ${unique_original_prefix}.
About the bonus, you can just add a "fields" object in your assembly params (just like you are adding "auth" and "steps"). Those will also be available as fields. You could for example add your own unique prefix by sending it in that object and then using it as ${fields.my_custom_unique_prefix}. Just make sure you use "my_custom_unique_prefix" as the key in your fields object.
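The paragraph above can be sketched as a Ruby hash of assembly params. The credentials, step names, and export path here are illustrative placeholders, not a drop-in config; the point is only where the "fields" object sits and how its values become ${fields.*} variables:

```ruby
require "json"
require "securerandom"

# "fields" sits alongside "auth" and "steps"; its values become available
# as ${fields.*} assembly variables. All credentials here are placeholders.
assembly_params = {
  auth:  { key: "TRANSLOADIT-AUTH-KEY" },
  steps: {
    export: {
      robot: "/s3/store",
      path:  "${fields.my_custom_unique_prefix}/${previous_step.name}.${file.ext}"
    }
  },
  fields: { my_custom_unique_prefix: SecureRandom.hex(16) }
}

payload = JSON.generate(assembly_params)
```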
Kind regards,
Tim
Co-Founder Transloadit
#tim_kos
The flow is:
The user selects an image on the client.
Only the filename, content type, and size are sent to the server (e.g. "file.png", "image/png", "123123").
The response contains the fields and policies for uploading directly to S3 (e.g. "key": xxx, "acl": ...).
The problem is that if I rename "file.pdf" to "file.png" and then upload it, the data sent to the server before the upload to S3 is:
"file.png"
"image/png"
The server says "ok" and returns the S3 fields for the upload.
But the content type sent is not the real content type. How can I validate this on the server?
Thanks!
Example:
Testing the Redactor.js server-side code (https://github.com/dybskiy/redactor-js/blob/master/demo/scripts/image_upload.php), it checks the file content type. When I try uploading a fake image (test here: http://imperavi.com/redactor/), it does not allow the fake image. Exactly what I want!
But how is that possible? Look at the request params (it sends image/jpeg, which should appear valid):
When I was dealing with this question at work I found a solution using Mechanize.
Say you have an image url, url = "http://my.image.com"
Then you can use img = Mechanize.new.get(url)
The way to test whether img is really an image is by issuing the following test:
img.is_a?(Mechanize::Image)
If the image is not legitimate, this will return false.
There may be a way to load the image from a file instead of a URL; I am not sure, but I recommend looking at the Mechanize docs to check.
With older browsers there's nothing you can do, since there is no way for you to access the file contents or any metadata beyond its name.
With the HTML5 File API you can do better. For example,
document.getElementById("uploadInput").files[0].type
returns the MIME type of the first file. I don't believe the method used to perform this identification is mandated by the standard.
If this is insufficient, you could read the file locally with the FileReader APIs and perform whatever tests you require. This could be as simple as checking for the magic bytes present at the start of various file formats, or as thorough as fully validating that the file conforms to the relevant specification. MDN has a great article showing how to use various parts of these APIs.
Ultimately none of this would stop a malicious attempt.
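On the server side, which is what the question ultimately asks about, the same magic-byte idea works without trusting the client at all. A minimal Ruby sketch covering a few common signatures (the signature table is deliberately small; real sniffing libraries cover many more formats):

```ruby
# Sniff the leading "magic bytes" instead of trusting the client-supplied
# content type. Signatures below cover PNG, JPEG, and PDF.
MAGIC_BYTES = {
  "image/png"       => "\x89PNG\r\n\x1a\n".b,
  "image/jpeg"      => "\xFF\xD8\xFF".b,
  "application/pdf" => "%PDF-".b
}

# Returns the detected content type, or nil if no signature matches.
def sniff_content_type(data)
  MAGIC_BYTES.find { |_type, sig| data.b.start_with?(sig) }&.first
end

# A PDF renamed to .png still sniffs as a PDF:
sniff_content_type("%PDF-1.4 fake".b)  # => "application/pdf"
```

Comparing the sniffed type against the client-declared one before issuing the S3 policy would catch the renamed-extension case described in the question.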