We want to save documents to individual OneDrive folders.
Currently:
User "Tim" generates a customer overview (Last visits, Revenue etc.) in our ERP-Sytem from Customer "TomCompany" and it will be automatically saved in an FTP-Folder. He's now able to have a look on this file at customers site with Good Reader on his iPad.
Plan:
First step: the customer overview should be saved directly to OneDrive instead of an FTP folder.
Second step: every salesperson has their own OneDrive account, so the overview should be saved to that account using user parameters etc. (which is not a problem to manage in our ERP API).
The question is: is it possible to connect to OneDrive from a different system such as an ERP, i.e. "SaveFileToOneDrive with authentication"?
You can 'connect' to OneDrive through the given API with JavaScript.
Here is an example: https://dev.onedrive.com/sdk/js-v7/js-picker-save.htm .
You can then add the 'Save to OneDrive' button to every page where you need it.
In case you haven't come across them yet, there are more examples for the API here: https://dev.onedrive.com/sample-code.htm
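If the ERP system needs to push the file itself (rather than relying on a button the user clicks), the OneDrive API also supports a plain REST upload. Here is only a rough sketch, assuming you have already obtained an OAuth access token for the salesperson's account; the folder and file names are made up for illustration, and this simple PUT upload only suits smaller files:

```csharp
using System.IO;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class OneDriveUploader
{
    // Sketch only: assumes you already have an OAuth 2.0 access token
    // for the salesperson's OneDrive account.
    static async Task UploadAsync(string accessToken, string localPath)
    {
        using (var http = new HttpClient())
        {
            http.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Bearer", accessToken);

            // Hypothetical target folder and file name - adjust to your own naming scheme.
            var url = "https://api.onedrive.com/v1.0/drive/root:" +
                      "/CustomerOverviews/TomCompany.pdf:/content";

            using (var file = File.OpenRead(localPath))
            {
                var response = await http.PutAsync(url, new StreamContent(file));
                response.EnsureSuccessStatusCode();
            }
        }
    }
}
```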
Hope this helps you to solve your (for me still unknown) problem ;-)
I implemented my own Windows Live API because I ran into some problems with the standard Live API. It is based on the REST API, so there is a layer with objects (file, folder, etc.), and each object has its own functionality (e.g. the file object has methods to upload and download a file). A second layer handles communication with the server side: the object layer sends requests to the second layer, which sends them to the server; the server sends a response, and the second layer returns that response to the object layer.
I implemented the OneDrive functionality mainly because I was developing an application that uploads files to OneDrive.
It is very simple to use. I describe it on the project page: https://wlivefw.codeplex.com/
You sign in as the user whose OneDrive you want to use via a connection object. Then you need the id of the folder in which you want to create the new file. You create a file object with parent_id set to that folder id, a name (required) and a description (optional). Finally you call File.Create, passing the file object you created, a Stream object with the data of the source file, an OverWriteOption (overwrite an existing file, don't overwrite, or create with a new name), and a progress handler (a delegate to the method you want invoked when the progress changes).
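A rough sketch of what that flow could look like in code; the type and member names below are my reading of the description above, not copied from the library, so check the project page for the real API:

```csharp
// Approximation only: the type and member names come from the description above,
// not from the library itself - see https://wlivefw.codeplex.com/ for the real API.
var connection = new Connection("OAUTH-ACCESS-TOKEN");   // sign in as the OneDrive user
var folderId = "folder.abc123";                           // id of the target folder (hypothetical)

var file = new File
{
    Parent_id   = folderId,
    Name        = "report.pdf",                           // required
    Description = "Monthly report"                        // optional
};

using (var data = System.IO.File.OpenRead(@"C:\reports\report.pdf"))
{
    File.Create(
        file,                                              // the file object created above
        data,                                              // Stream with the source data
        OverWriteOption.Overwrite,                         // overwrite / don't overwrite / new name
        progress => Console.WriteLine(progress));          // invoked when progress changes
}
```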
File uploading is implemented via the BITS protocol, so you can upload files larger than 60 MB. Files are uploaded in fragments, so if uploading a fragment fails you can very easily send that fragment again: the exception thrown when the upload fails contains a delegate to a continuation method that resumes the upload from the last successful fragment.
I would like to keep improving this library, so the library is free to use, as is the source code. If you extend the library, please send me your changes and I will build a new version. Thank you, and I hope it is useful.
I am interested to know if it's possible to apply a sensitivity label to a document received via email and then save the document to a specific directory in OneDrive.
For example, let's say company xyz sends a mail with files attached that we must process. I would like the files to be removed from the mail, marked with a custom sensitivity label like xyz_secret, and then stored in a OneDrive folder called xyz_company.
So all the files in that folder eventually are labelled as per the customer.
Does anyone know if this is possible? The idea is that we can then apply DLP to our customers' files and ensure we can track them within the business.
Does anyone have any ideas? Is there an API for doing this, or a Power Automate method?
As far as I know, the 'Send an email' action in Power Automate does not currently support applying a sensitivity label to the email. That being said, you may need to implement your requirements through the REST API; please check this article and see if it helps:
https://joannecklein.com/2019/05/06/setting-a-retention-label-in-sharepoint-from-microsoft-flow/
I have a directory that contains a CSV file and avatar images.
The contents of the CSV file are as follows:
Id Name Avatar Dept School
1 Mark 01019.jpg Market None
2 John 21122.jpg Business None
3 Sam 33311.jpg IT None
....
....
50 James 9823.jpg IT USA
The avatar images are placed in the same folder as the CSV file.
What I want is that when a user uploads the CSV file, the info in the file is converted into business objects, say Person. I can upload and parse the CSV to get Id, Name, Dept and School, but of course I can't make it upload the avatar images (referenced in the CSV file) to the server in the same web request.
What are the possible ways to achieve this? Assume that I want to avoid zipping all the images plus the CSV into a single .zip file and then uploading it to the server.
Thanks.
I just love when people end their question by excluding the only possible solution.
The server (where your web application is running) has no direct access to the client (where the files are). The only thing the server can work with is what the client chooses to give it. So, your options are to have the user upload each image file individually, along with the CSV, or to zip it all up so they can send everything in a single upload. That's it. Period. At least with a standard web page.
You can of course create a Java applet or a Flash application that the user would authorize to access their filesystem to retrieve the necessary files. Essentially, the process is still the same, it's just the Java/Flash app would automatically do the file uploads instead of requiring the user to manually do them. However, both Java (on the web) and Flash are all but dead technologies at this point, so by using either of those, you're creating a dependency on something that is constantly exploited and not guaranteed to continue to receive security patches for the life of your application. Flash, in particular, has already been end-of-lifed, so Adobe will abandon support entirely within the next few years, max.
Long and short, tell your user to zip it up and upload a zip file.
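To illustrate the zip route, here is a rough server-side sketch, assuming a comma-separated file with the columns shown in the question; the class and folder names are just placeholders:

```csharp
using System.Collections.Generic;
using System.IO;
using System.IO.Compression;
using System.Linq;

public class Person
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string Avatar { get; set; }
    public string Dept { get; set; }
    public string School { get; set; }
}

public static class UploadProcessor
{
    // Sketch: extract the uploaded zip, parse the CSV, and resolve each avatar
    // image from the same extracted folder. Column order follows the sample above.
    public static List<Person> Process(Stream uploadedZip, string workingFolder)
    {
        Directory.CreateDirectory(workingFolder);
        using (var archive = new ZipArchive(uploadedZip, ZipArchiveMode.Read))
            archive.ExtractToDirectory(workingFolder);

        var csvPath = Directory.GetFiles(workingFolder, "*.csv").First();

        return File.ReadAllLines(csvPath)
            .Skip(1)                                       // skip the header row
            .Select(line => line.Split(','))
            .Select(cols => new Person
            {
                Id = int.Parse(cols[0]),
                Name = cols[1],
                Avatar = Path.Combine(workingFolder, cols[2]),  // image extracted alongside the CSV
                Dept = cols[3],
                School = cols[4]
            })
            .ToList();
    }
}
```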
So I've read the documentation for the Dropbox API, and it's quite rough around the edges, and I'm not sure that I can do this. Basically, what I want to do is have my app create a folder and give the user a link so that they can give it to other users. Once those users have that link, they can paste it into the app and it will let them see what's in the folder. They don't even need to be able to see it; the app just needs to be able to download files from the folder. It's pretty much exactly what a shared link does in normal Dropbox. Is this at all possible? And if so, how would one go about doing it?
This certainly sounds possible, but you would probably need your own server-side component. The basic idea as I see it:
user A links the app to their Dropbox account
the app has the user pick a folder
the app generates its own link for that folder and supplies it to user A
user A supplies the link to user B
user B supplies the link to the app
the app looks up user A via the link, and uses /metadata or /delta to list files from user A's account, and then /files (GET) to serve file content to user B (from user A's account).
The server-side component is important in order to avoid exposing user A's access token to any other user.
This would be much easier, and wouldn't require a server-side component, if Dropbox offered an API for Dropbox shared links themselves, since you could then just use /shares to get one and pass that around directly. It doesn't though, so I'll pass this along as a request.
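To make the flow concrete, here is a minimal server-side sketch using the v1 /metadata and /files endpoints mentioned above (the API version of the time); the link-to-token lookup is a hypothetical placeholder for your own storage:

```csharp
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

public class DropboxFolderProxy
{
    private readonly HttpClient _http = new HttpClient();

    // Hypothetical: maps an app-generated link id to the owning user's access token.
    private string LookupAccessToken(string linkId)
    {
        /* e.g. a database lookup keyed by the link id */
        return "USER-A-ACCESS-TOKEN";
    }

    // List the folder contents from user A's account on behalf of user B.
    public async Task<string> ListFolderAsync(string linkId, string folderPath)
    {
        var token = LookupAccessToken(linkId);
        var request = new HttpRequestMessage(HttpMethod.Get,
            "https://api.dropbox.com/1/metadata/auto/" + folderPath);
        request.Headers.Authorization = new AuthenticationHeaderValue("Bearer", token);

        var response = await _http.SendAsync(request);
        return await response.Content.ReadAsStringAsync();    // JSON listing of the folder
    }

    // Serve file content from user A's account to user B, without exposing the token.
    public async Task<byte[]> DownloadFileAsync(string linkId, string filePath)
    {
        var token = LookupAccessToken(linkId);
        var request = new HttpRequestMessage(HttpMethod.Get,
            "https://api-content.dropbox.com/1/files/auto/" + filePath);
        request.Headers.Authorization = new AuthenticationHeaderValue("Bearer", token);

        var response = await _http.SendAsync(request);
        return await response.Content.ReadAsByteArrayAsync(); // raw file content
    }
}
```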
In my application, I have a textarea input where users can type a note.
When they click Save, there is an AJAX call to Web Api that saves the note to the database.
I would like users to be able to attach multiple files to this note (Gmail style) before saving the note. It would be nice if the upload could start as soon as a file is attached, before the note is saved.
What is the best strategy for this?
P.S. I can't use jQuery fineuploader plugin or anything like that because I need to give the files unique names on the server before uploading them to Azure.
Is what I'm trying to do possible, or do I have to make the whole 'Note' a normal form post instead of an API call?
Thanks!
This approach is file-based, but you can apply the same logic to Azure Blob Storage containers if you wish.
What I normally do is give the user a unique GUID when they GET the AddNote page. I create a folder called:
C:\TemporaryUploads\UNIQUE-USER-GUID\
Then any files the user uploads at this stage get assigned to this folder:
C:\TemporaryUploads\UNIQUE-USER-GUID\file1.txt
C:\TemporaryUploads\UNIQUE-USER-GUID\file2.txt
C:\TemporaryUploads\UNIQUE-USER-GUID\file3.txt
When the user does a POST and I have confirmed that all validation has passed, I simply copy the files to the completed folder, with the newly generated note ID:
C:\NodeUploads\Note-100001\file1.txt
Then delete the C:\TemporaryUploads\UNIQUE-USER-GUID folder
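A minimal sketch of that "promote on save" step, using the paths from the example (the method name is just a placeholder):

```csharp
using System.IO;

public static class NoteAttachments
{
    // Sketch: called after the note passes validation and receives its real id.
    public static void PromoteTemporaryFiles(string userGuid, int noteId)
    {
        var tempFolder  = Path.Combine(@"C:\TemporaryUploads", userGuid);
        var finalFolder = Path.Combine(@"C:\NodeUploads", "Note-" + noteId);

        if (!Directory.Exists(tempFolder))
            return;                                        // nothing was uploaded

        Directory.CreateDirectory(finalFolder);

        foreach (var file in Directory.GetFiles(tempFolder))
            File.Copy(file, Path.Combine(finalFolder, Path.GetFileName(file)), overwrite: true);

        Directory.Delete(tempFolder, recursive: true);     // clean up the temporary folder
    }
}
```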
Cleaning Up
Now, that's all well and good for users who actually go ahead and save a note, but what about the ones who uploaded a file and then closed the browser? There are two options at this stage:
Have a background service clean up these files on a scheduled basis (daily, weekly, etc.). This is a good job for Azure WebJobs.
Clean up the old files via the web app each time a new note is saved. Not a great approach, as you're doing file I/O when there are potentially no files to delete.
Building on RGraham's answer, here's another approach you could take:
Create a blob container for storing note attachments. Let's call it note-attachments.
When the user opens the screen for creating a note, assign a GUID to the note.
When the user uploads a file, you just prefix the file name with this note id. So if a user uploads a file, say file1.txt, it gets saved into blob storage as note-attachments/{note id}/file1.txt.
Depending on your requirements, once you save the note you may move this blob to another blob container or keep it where it is. Since the blob has the note id in its name, searching for a note's attachments is easy.
For uploading files, I would recommend going directly from the browser to blob storage using AJAX, CORS and a Shared Access Signature. This way you avoid the data going through your servers. You may find these blog posts useful:
Revisiting Windows Azure Shared Access Signature
Windows Azure Storage and Cross-Origin Resource Sharing (CORS) – Lets Have Some Fun
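To illustrate the Shared Access Signature part, here is a minimal sketch using the classic WindowsAzure.Storage SDK; the container and blob names follow the scheme above, and the expiry time is just an example:

```csharp
using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

public static class SasHelper
{
    // Sketch: returns a short-lived, write-only URL the browser can PUT the attachment to directly.
    public static string GetUploadUrl(string connectionString, string noteId, string fileName)
    {
        var account   = CloudStorageAccount.Parse(connectionString);
        var client    = account.CreateCloudBlobClient();
        var container = client.GetContainerReference("note-attachments");
        var blob      = container.GetBlockBlobReference(noteId + "/" + fileName);

        var sas = blob.GetSharedAccessSignature(new SharedAccessBlobPolicy
        {
            Permissions = SharedAccessBlobPermissions.Write,
            SharedAccessExpiryTime = DateTimeOffset.UtcNow.AddMinutes(15)
        });

        // e.g. https://account.blob.core.windows.net/note-attachments/{note id}/file1.txt?sv=...
        return blob.Uri + sas;
    }
}
```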
I'm trying to do something a bit complicated and I'm not entirely sure how to go about it. Could you please give me some pointers on the tech I should use and how I should go about implementing this. Here's what I need to do:
Create an iOS app that allows the user to upload pictures from his camera roll and modify variables with sliders. (so far so good)
These variables and graphics are used to modify some HTML5 code (i.e. the graphics the user supplies are referenced by the HTML code, and the variables modify some preset variables in the script). (Do I just edit the code as a string?)
The code is put together and uploaded to a server where it is accessible at a unique URL. The user can save multiple times and each time it creates a new URL. (Do I need an FTP here?)
Your question is too general, but as far as I can help: yes, you have to create and edit some HTML source text, and append every object the user adds to the page as HTML, files, CSS, etc.
As for uploading: if you want the user to upload the site to their own FTP server or web hosting service, then yes, you need to establish an FTP connection to that server.
But if you want your user to upload the website to a space that you provide, then you need a server-side component and maybe some APIs; you could then use FTP or those APIs to create and update files on your server. It highly depends on the service you want to provide.