How to access files on Ceph directly as a URL

I need a storage system with the following requirements:
1. It should support data/service clustering
2. It should be open-source so that I can extend functionalities later if needed
3. It should support a file system, because I want to access some files via a public URL (direct access), so that I can store my scripts in those files and reference them directly.
4. It should support some kind of authentication.
5. It should be on-premises (not cloud).
Ceph seems to meet all of these criteria, but does it support public access to files via a URL (point 3)? It can generate temporary URLs, but I want permanent URLs for a few files.

You could run Nextcloud and have your data volume (and database, if you feel so inclined) stored on the Ceph cluster. It is open-source, it lets you set up direct links to files (including permanent links), and it supports authentication.
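For illustration only, here is a minimal sketch of creating such a permanent public link through Nextcloud's OCS Share API, assuming Python with the requests library; the hostname, credentials, and file path are placeholders and not part of the original answer:

```python
import requests

# Hypothetical Nextcloud instance and credentials.
NEXTCLOUD = "https://nextcloud.example.com"
AUTH = ("svc_user", "app-password")

# Create a public link (shareType=3) for a file already stored in Nextcloud.
resp = requests.post(
    f"{NEXTCLOUD}/ocs/v2.php/apps/files_sharing/api/v1/shares",
    auth=AUTH,
    headers={"OCS-APIRequest": "true"},
    data={"path": "/scripts/deploy.sh", "shareType": 3},
)
resp.raise_for_status()
print(resp.text)  # the response body contains the permanent share URL
```

A link created this way stays valid until the share is deleted, which matches the requirement for permanent URLs.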

Related

Azure Blob Storage File Paths

I'm going to be using the Azure Storage REST API to create and retrieve images uploaded by users of my iOS app. I'd like a directory structure something like
container_name/user_Id/group_Id/item_Id/image.jpg
Each user can have multiple group_Ids and each item can have multiple images.
Is this even possible and if so, should each user have their own container or have them all under one container?
Please note that Azure Blob Storage doesn't really have a directory structure on the server side. Instead, the structure is simply two levels: container and blob.
However, there is a workaround: you can name your blobs with a "virtual directory" prefix, just like container_name/user_Id/group_Id/item_Id in your example, and then list the blobs in your container with that prefix specified.
As mentioned by @Zhaoxing Lu - Microsoft: "By including path information in blob names, you can create a virtual directory structure you can organize and traverse as you would a traditional file system. The directory structure is virtual only--the only resources available in Blob storage are containers and blobs. However, the storage client library offers a CloudBlobDirectory object to refer to a virtual directory and simplify the process of working with blobs that are organized in this way."
For example, consider the following set of block blobs in a container named photos:
photo1.jpg
2010/architecture/description.txt
2010/architecture/photo3.jpg
2010/architecture/photo4.jpg
2011/architecture/photo5.jpg
2011/architecture/photo6.jpg
2011/architecture/description.txt
2011/photo7.jpg
Full documentation can be found here
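As an illustrative sketch (not part of the original answers), listing the blobs under one of those virtual directories with the Python azure-storage-blob package looks roughly like this; the connection string is a placeholder:

```python
from azure.storage.blob import BlobServiceClient

# Placeholder connection string; the container name matches the example above.
service = BlobServiceClient.from_connection_string("<connection-string>")
container = service.get_container_client("photos")

# Only blobs whose names start with the "virtual directory" prefix are returned.
for blob in container.list_blobs(name_starts_with="2010/architecture/"):
    print(blob.name)
```

The same prefix trick applies to the container_name/user_Id/group_Id/item_Id scheme from the question: list with name_starts_with set to whichever user/group/item prefix you want to traverse.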

Azure file storage/indexing solution

I'm developing a Web Application, and it is running as an Azure Web App. This application has a section in which a user can navigate a directory, and allows the user to open the files and browse sub-directories in said directory.
At the moment, the sub-directories and files are inside "~/Content/Documents", and I am browsing the directories by using Directory.GetFiles() and Directory.GetDirectories(), functions which are provided by System.IO.
The files in question are retrieved and downloaded several times a day, and there is no way to map each path manually one by one, seeing as there is a large quantity and they are subject to change.
However, it has become inconvenient to store the files within the web directory. So my two questions are:
What Azure service can I use to store and retrieve my files?
and
Which of these services provides the ability to index/map a path, which would fit with my web-app?
Please note that the users do not have the ability to edit or otherwise upload any of the files, and there is therefore no need for the service to allow non-authenticated uploads.
The newish Azure File Storage feature can be used to store files in Azure Storage and make them accessible via an SMB file share. This allows for legacy applications that require the use of a traditional file share for saving/retrieving files. It makes integration into existing applications easier without needing to completely rewrite the file-storage code.
https://azure.microsoft.com/en-us/blog/azure-file-storage-now-generally-available/
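Beyond the SMB share, Azure Files can also be reached through its SDKs. As an illustrative sketch (the package choice, connection string, share name, and directory path are assumptions, not from the original answer), enumerating a directory with the Python azure-storage-file-share package mirrors what Directory.GetFiles()/Directory.GetDirectories() did:

```python
from azure.storage.fileshare import ShareDirectoryClient

# Placeholder connection string, share name, and directory path.
dir_client = ShareDirectoryClient.from_connection_string(
    conn_str="<connection-string>",
    share_name="webfiles",
    directory_path="Content/Documents",
)

# Each entry reports whether it is a sub-directory or a file.
for item in dir_client.list_directories_and_files():
    kind = "directory" if item["is_directory"] else "file"
    print(kind, item["name"])
```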

ASP.Net MVC Bundle linked content files

I've been trying to reduce the amount of copying and pasting of content files across some of my projects and decided to go down the route of adding files as links from a central project.
The problem I have now is that the System.Web.Optimization.Bundle.AddFile(string virtualPath, bool throwIfNotExist = true) does not work as the file doesn't really exist in the directory I specified.
Does anyone have any pointers? Or maybe an alternative to linking content files?
Thanks.
I think you cannot access files outside of your web project with the virtual path system, and it might hurt when you want to deploy your app.
I suggest making a separate project for your static content with a custom domain, e.g. static.yourcompany.com, and referencing all of these files from that domain. This also has the advantage that the browser does not have to send authentication and tracking cookies with these requests, which might be faster if you have a lot of traffic. You can also set up your own CDN (http://www.maxcdn.com/pricing/) or store the files in Azure or Amazon AWS (which is more or less free for a small number of files and little traffic).
Another approach is to make some batch files to keep your content files synced.

How to store per-user temporary files?

I need to store some temporary files from my program sometimes; currently I use the AppData path, which works. However, I have just been trying my program on a non-admin (guest) account on Windows. This is resulting in errors because Windows is refusing me access to the AppData folder.
What would be the most ideal path to use instead of AppData, that even a user with the lowest permissions can use?
I tried Googling this one because I am sure I have seen an article on the Microsoft website that lists the different paths and requirements needed but I can't find it.
Thanks
If you want to store temporary files then use a sub-folder in the temporary directory. Use GetTempPath to find out where this is.
Note that on all modern versions of Windows, this folder is a per-user folder and is not shared between different users. If you want a location that is shared between all users then you need the CSIDL_COMMON_APPDATA folder. However, as you have discovered, standard users do not have rights to write in the folder. The standard approach is for the installation program to create a sub-folder with a permissive ACL that allows sufficient write access for standard users.
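As a hedged illustration of the per-user approach (the sub-folder and file names are placeholders, not from the original answer), here is a sketch in Python; tempfile.gettempdir() resolves to the same per-user temporary directory that GetTempPath returns on Windows, so even a guest account can write there:

```python
import os
import tempfile

# Per-user temp directory (e.g. %LOCALAPPDATA%\Temp on Windows); writable
# even for standard/guest accounts.
app_tmp = os.path.join(tempfile.gettempdir(), "MyApp")  # "MyApp" is a placeholder
os.makedirs(app_tmp, exist_ok=True)

scratch = os.path.join(app_tmp, "session.tmp")
with open(scratch, "w") as f:
    f.write("temporary data")
print("wrote", scratch)
```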

MongoDB's GridFS, Rails 3, X-Sendfile, and ACL's, HOW-TO?

I have a Rails 3 project that does file upload/download, with access rights (User has many Files, and can only read/write his own files).
If I store my files on a classic filesystem, I can check access to the file in my Rails app and then use the X-Sendfile header to let the web server deliver the file, if the user has access. In this way, a user can never access a file without permission, and the download is fast.
Can I make file downloads from GridFS as fast as with X-Sendfile, and skip the hassle of piping them through Rails/Rack?
Wouldn't piping them through Rails/Rack be horribly slow?
Can I make file downloads from GridFS as fast as with X-Sendfile, skip the hassle of piping them through Rails/Rack, AND ALSO keep the ability to enforce access rights?
Up until now I've found, or thought of, two possible solutions:
Use something like gridfs-fuse to mount the GridFS volume onto the local filesystem and use X-Sendfile just as always (see the sketch after this list).
Use something like nginx-gridfs, which is fast (written in C) and runs outside of Rails (it does not block my app's request-response cycle while downloading). The downside is that it's server-specific.
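For illustration only, here is a minimal sketch of the access-check-then-hand-off pattern the first option relies on. It is written in Python/Flask purely to keep the example self-contained (the original project is Rails, where the equivalent is send_file with config.action_dispatch.x_sendfile_header set); the mount point, route, and ACL check are placeholders:

```python
from flask import Flask, Response, abort

app = Flask(__name__)
GRIDFS_MOUNT = "/mnt/gridfs"  # hypothetical gridfs-fuse mount point


def user_owns_file(user_id, file_id):
    # Placeholder for the real ownership check ("User has many Files").
    return True


@app.route("/files/<file_id>")
def download(file_id):
    user_id = 42  # placeholder for the authenticated user
    if not user_owns_file(user_id, file_id):
        abort(403)
    # The app only decides; the front-end web server streams the file.
    # Apache's mod_xsendfile reads X-Sendfile; nginx uses X-Accel-Redirect.
    return Response(headers={"X-Sendfile": f"{GRIDFS_MOUNT}/{file_id}"})
```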
