Uploading files in VSTS extension - upload

Referring to the link here:
https://learn.microsoft.com/en-us/vsts/extend/develop/data-storage.
Do the "documents" in the data storage refer to any file type?
Can we upload files via a VSTS extension?
I.e., is it possible to invoke a server-side implementation (e.g. ASPX or PHP) to store a file inside my extension?

As Jazimov said, you can't store files in VSTS extension data storage.
I recommend uploading the files to a VSTS repository through the REST API (e.g. Add a binary file) from your VSTS extension, then storing the necessary information (e.g. server path, file name, objectId) in data storage.

The Documents object is a collection of Document objects, which are serialized and stored as JSON.
When you ask whether "documents" can refer to any file type, the answer is no. Documents are not files. They start as C# objects that are serialized and then persisted to a data store. When retrieved, they are returned as JSON strings.
You can encode a file into your data structure before storing it, and the returned JSON will then contain your encoded file information. See "Binary Data in JSON String. Something better than Base64" for more details.
As for the last part of your question: of course you can invoke a service that uploads and downloads files. You would have to write that logic on your own; it is not part of a VSTS extension's data-storage subsystem.
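As a minimal sketch of the encoding idea above (all type and function names here are illustrative, not part of the VSTS SDK): the file bytes are Base64-encoded so they survive the JSON round trip, and the resulting object could then be saved as a document via the extension data service.

```typescript
// Sketch: wrapping a small file as a Base64 payload inside a data-storage
// "document". FileDocument, encodeFileDocument and decodeFileDocument are
// illustrative names, not part of the VSTS SDK.

interface FileDocument {
  id: string;            // document id used as the data-storage key
  fileName: string;
  contentType: string;
  contentBase64: string; // file bytes, Base64-encoded so they survive JSON
}

function encodeFileDocument(id: string, fileName: string,
                            contentType: string, bytes: Buffer): FileDocument {
  return { id, fileName, contentType, contentBase64: bytes.toString("base64") };
}

function decodeFileDocument(doc: FileDocument): Buffer {
  return Buffer.from(doc.contentBase64, "base64");
}

// Example round trip:
const doc = encodeFileDocument("doc1", "note.txt", "text/plain",
                               Buffer.from("hello"));
console.log(doc.contentBase64);                  // "aGVsbG8="
console.log(decodeFileDocument(doc).toString()); // "hello"
```

Keep the data-storage size limits in mind: this only makes sense for small payloads, which is why uploading real files to a repository (as suggested above) is usually the better option.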

Related

Uploading an image from iOS to Azure File Share

I have found documentation for uploading an image from iOS to a blob container in Azure - https://learn.microsoft.com/en-us/azure/storage/blobs/storage-ios-how-to-use-blob-storage
using the https://github.com/Azure/azure-storage-ios library
But I wish to upload directly to a file share. Is there a way to do this?
It needs to be implemented using SAS authentication.
Unfortunately I am not familiar with iOS programming, so I cannot provide iOS-specific code. However, you can use the steps below to write your own.
Assuming you have a SAS URL for the file share in which you wish to upload the file, you can simply use Azure Storage REST API to upload the file in a file share. You should be able to use built-in HTTP functionality in the programming language of your choice to do that.
Let's assume that you have a SAS URL for the file share in the following format: https://<account-name>.file.core.windows.net/<share-name>?<sas-token>.
The first thing you need to do is insert the name of the file you wish to upload into this SAS URL, so that you get a SAS URL for the file. It would look something like: https://<account-name>.file.core.windows.net/<share-name>/<file-name>?<sas-token>.
Next you need to create an empty file, using the Create File REST API operation. Do not worry about the Authorization request header there, as it is already covered by the SAS. The request headers you need to include are x-ms-type (which must be file) and x-ms-content-length (whose value should be the size of the file you want to upload). This creates an empty file with the same size as the file you want to upload.
Once this operation completes, upload the data into the empty file you just created, using the Put Range operation. The request headers you need to include are x-ms-write (update), x-ms-range (whose value should be bytes=0-<file-length - 1>) and Content-Length (the length of your file). The request body contains the file contents.
Using these steps you should be able to upload a file in a file share.
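The steps above can be sketched as plain request-building helpers (shown in TypeScript for illustration; on iOS the same URL and headers would go into an NSMutableURLRequest). The account, share and token values are placeholders.

```typescript
// Sketch of the two REST calls described above, assuming you already hold a
// share-level SAS URL. Helper names are illustrative; the headers follow the
// Azure Files "Create File" and "Put Range" REST operations.

function buildFileSasUrl(shareSasUrl: string, fileName: string): string {
  // Insert the file name between the share path and the SAS query string.
  const [base, sasToken] = shareSasUrl.split("?");
  return `${base}/${encodeURIComponent(fileName)}?${sasToken}`;
}

function createFileHeaders(fileLength: number): Record<string, string> {
  return {
    "x-ms-type": "file",                       // required by Create File
    "x-ms-content-length": String(fileLength), // final size of the empty file
  };
}

function putRangeHeaders(fileLength: number): Record<string, string> {
  return {
    "x-ms-write": "update",
    "x-ms-range": `bytes=0-${fileLength - 1}`, // whole file in one range
    "Content-Length": String(fileLength),
  };
}

const url = buildFileSasUrl(
  "https://myaccount.file.core.windows.net/myshare?sv=2015-02-21&sig=abc",
  "photo.mov");
console.log(url);
console.log(putRangeHeaders(1024)["x-ms-range"]); // "bytes=0-1023"
```

Note that Put Range accepts at most 4 MiB per call, so larger files need to be uploaded as several consecutive ranges.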

read write file properties with PropertyHandler Shell Extension

I'm trying to create a PropertyHandler shell extension.
What's the best way to embed properties like Title, Author, etc., so that the same file can be used on multiple computers or devices?
Is StgCreateStorageEx the way, or are there other ways to do it?
I ask because StgCreateStorageEx deals with NTFS files only, and I'm not sure whether the file keeps these properties when I open it on another device with the same PropertyHandler.
Is there any way to save properties inside my file?
The StgCreateStorageEx function creates a new storage object using the IStorage interface. This allows storing multiple data objects within a single binary file, see for example https://en.wikipedia.org/wiki/COM_Structured_Storage. So, technically, you can save almost anything in this file including embedded properties.
I don't think that this is limited to NTFS: The old Microsoft Office .doc format (and many other Microsoft products) use this storage format and work also with FAT32.
Whether you want to use this binary file format is a completely different question. As you did not provide any information about the content and format of your file, I cannot recommend anything specific. One alternative would be to store the content of your file in an XML file; properties like Title and Author could then be added easily.
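If you go the XML route, the property storage could be as simple as a small escaped fragment. The sketch below is purely illustrative, with hypothetical element and property names.

```typescript
// Illustrative only: storing document properties in a small XML fragment,
// as an alternative to COM structured storage. Property names are examples.

function escapeXml(value: string): string {
  return value.replace(/&/g, "&amp;").replace(/</g, "&lt;")
              .replace(/>/g, "&gt;").replace(/"/g, "&quot;");
}

function propertiesToXml(props: Record<string, string>): string {
  const items = Object.entries(props)
    .map(([name, value]) =>
      `  <property name="${escapeXml(name)}">${escapeXml(value)}</property>`)
    .join("\n");
  return `<properties>\n${items}\n</properties>`;
}

console.log(propertiesToXml({ Title: "Report", Author: "A & B" }));
```

Because the properties travel inside the file itself, they survive copying the file to another computer, independent of the file system.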

Copy only new added files from one folder to another, without moving the existing files from source folder

I am doing file integration using Mirth. There is a piece of software that generates HL7 files. I want to read data from those files without moving them to another destination. The next time I read data, the files whose data has already been read should be ignored (i.e., only read data from files generated after the last read).
I got this working by modifying the original filename; if I don't modify the filename, duplicate data is read.
Is there any solution to this problem, so that we only read data from newly generated files? I am using Mirth 3.5.1 and HL7 v2 messages.
Thanks in advance.
Thanks @daveloyall, I am posting your comment as an answer here.
When you rename a file at the time you process it, for example to add a .DONE suffix to the filename, you are adding information that can be used later. The part of the channel that reads files can be configured to skip files that have the .DONE suffix. You also add information if you move the files, or if you store the filenames in a database table. I don't know whether Mirth has an internal feature that tracks which HL7 messages it has already processed, but if such a feature exists, the keyword "deduplication" might be associated with it.
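The suffix idea can be sketched as follows (in Mirth itself this logic would live in the channel's file-reader filter and post-processing settings; the helper names here are illustrative).

```typescript
// Sketch of the ".DONE suffix" approach: pick only files that have not been
// processed yet, and compute the name a file should be renamed to afterwards.

const DONE_SUFFIX = ".DONE";

function filesToProcess(fileNames: string[]): string[] {
  // Skip anything already marked as processed.
  return fileNames.filter(name => !name.endsWith(DONE_SUFFIX));
}

function processedName(fileName: string): string {
  // Name to rename the file to once its data has been read.
  return fileName + DONE_SUFFIX;
}

const listing = ["adt_001.hl7", "adt_002.hl7.DONE", "oru_003.hl7"];
console.log(filesToProcess(listing));        // ["adt_001.hl7", "oru_003.hl7"]
console.log(processedName("oru_003.hl7"));   // "oru_003.hl7.DONE"
```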

Download all available (and new) files

I'm using NSURLSessionDownloadTask to download some .mov files from a web and storing them in my app.
Now what I'd like to achieve is to
download ALL files of certain type (in this case .mov) available on the page, without having to specify every file URL
download files ONLY if they are not already stored in my app.
Is there any way to achieve this?
You would have to scrape that HTML page to get all the URLs (.mov) you are looking for. You can use NSXMLParser if you want to write your own parser, or look for an existing library.
When you download a file, persist some metadata (e.g. the name or some unique identifier) in SQLite or Core Data, so that you can check whether a file has already been downloaded.
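Both steps can be sketched as small helpers (TypeScript here for illustration; the regex-based link extraction is a simplified stand-in for real HTML parsing such as NSXMLParser).

```typescript
// Sketch: collect .mov links from a page's HTML, then drop the ones whose
// names are already recorded in local metadata. The regex is a deliberate
// simplification of real HTML parsing.

function extractMovUrls(html: string): string[] {
  const matches = html.match(/https?:\/\/[^\s"']+\.mov/g);
  return matches ?? [];
}

function newDownloads(urls: string[], downloaded: Set<string>): string[] {
  return urls.filter(url => {
    const name = url.substring(url.lastIndexOf("/") + 1);
    return !downloaded.has(name); // skip files already stored locally
  });
}

const html = `<a href="http://example.com/videos/a.mov">a</a>
              <a href="http://example.com/videos/b.mov">b</a>`;
const urls = extractMovUrls(html);
console.log(newDownloads(urls, new Set(["a.mov"]))); // only b.mov remains
```

The set of downloaded names would come from whatever store you chose (SQLite, Core Data), refreshed before each download pass.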

Parse migration to heroku/aws regarding the image

I have successfully migrated my Parse DB to AWS, but the URLs of image files still look like http://files.parsetfss.com/77447afb-f681-4b55-afad-6bceeb2e155a/tfss-79297c86-bd48-4d7f-87ab-c43e02d1a8f3-photo.png.
This means the files are still on Parse's own S3 storage, so what will happen to those files after the Parse shutdown?
What is the way to migrate the images to new storage on my own AWS? I am worried because I have approximately 14.5k images on Parse.
Please provide your valuable guidance on this.
As you know, Parse Files is a feature allowing developers to upload files (up to 10 megabytes each) and reference those files from objects in Parse Core data classes or directly by URL provided in the API response from Parse.
Behind-the-scenes, Parse is uploading your files to a Parse-owned S3 bucket (files.parsetfss.com) and prefixing the file objects with your application “File key”.
To answer your questions directly: there are active solutions in the works, and
here's the latest on the migration and on the optional Parse File storage options post-migration.
How do I migrate my legacy Parse Files over to Parse Server?
Migrating legacy Parse Files from the Parse-owned S3 bucket to a developer-owned Parse Server: https://github.com/ParsePlatform/parse-server/issues/8
What NON database options do I have for storing my Parse Files after migrating to Parse Server?
Add support to upload Parse Files directly to Amazon Simple Storage (S3) via S3 adapter running Parse Server: https://github.com/ParsePlatform/parse-server/pull/113
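For illustration, a hedged sketch of what the S3-adapter wiring might look like in a self-hosted Parse Server. The package name, adapter signature, keys and bucket are assumptions based on the adapter's README; verify against the current documentation before use.

```typescript
// Sketch (not verified against current releases): wiring an S3 adapter into a
// self-hosted Parse Server so that NEW Parse Files land in your own bucket.
const ParseServer = require("parse-server").ParseServer;
const S3Adapter = require("parse-server-s3-adapter");

const api = new ParseServer({
  databaseURI: "mongodb://localhost:27017/dev",
  appId: "myAppId",
  masterKey: "myMasterKey",
  serverURL: "http://localhost:1337/parse",
  // New Parse Files are written through this adapter instead of GridStore:
  filesAdapter: new S3Adapter("S3_ACCESS_KEY", "S3_SECRET_KEY", "my-bucket"),
});
```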
Migration Considerations for Parse Files:
When a user first uploads a file, Parse service uploads it to files.parsetfss.com and responds with a link directly to the file. At this point, there is NO POINTER or METADATA referencing this file object in Parse Core or other data classes. The developer would need to keep a reference to this file in their own data class OR make another API call to create an object or update an existing object and associate the Parse File with that object. Otherwise, the file is orphaned. Parse does allow you to "Clean Up Files" in the App Settings of your application. This option will delete any files that are not referenced by any objects. Orphaned files can only be deleted by using the Master Key and there is currently no way to search ALL your uploaded Parse Files per account or application unless it’s associated with a class object.
What happens to EXISTING Parse Files during the migration to Parse Server?
During the migration, the files stay on Parse's S3 bucket, but the newly migrated Parse Server knows how to continue serving them up post migration. NO FILES HAVE BEEN MIGRATED! Only the pointers to the S3 bucket owned by Parse AND only if those files are associated with an object. So, if the developer DOES NOT MIGRATE the “legacy” pre-migration Parse Files from Parse prior to Parse shutdown in 2017, they could lose access to these files.
Parse and the open source Parse Server community are actively working on providing migration solutions. See here.
What happens to NEW Parse Files uploaded after the migration to Parse Server?
New Parse Files uploaded to a Parse Server after migration are hosted in MongoDB GridStore (Mongo). Only files uploaded through the api.parse.com API endpoint are hosted by Parse. In other words, if you migrated your app to Parse Server but have not updated the clients to use the new Parse Server API endpoint, those Parse Files will still be uploaded to the Parse-owned S3 bucket. For clients that upload Parse Files using the new Parse Server API endpoint, the files are stored directly in the developer's MongoDB database.
I hope you found this information useful.
