I have an ASP.NET app from which I upload files to Azure Blob Storage. I know that Azure doesn't create real directory paths inside containers, just blobs, but you can emulate directories by putting a "/" in the blob name.
For example, I upload a list of files and my URIs look like this:
http://myaccount.windowsazure.blob.net/MyProtocolID-01/MyDocumentID-01/FileName01.jpg
http://myaccount.windowsazure.blob.net/MyProtocolID-01/MyDocumentID-01/FileName02.jpg
http://myaccount.windowsazure.blob.net/MyProtocolID-01/MyDocumentID-01/FileName03.jpg
My download method:
public RemoteFile Download(DownloadRequest request)
{
    var fileFinal = string.Format("{0}/{1}/{2}", request.IDProtocol, request.IDDocument, request.FileName);
    var blobBlock = InitializeDownload(fileFinal);

    if (!blobBlock.Exists())
    {
        throw new FileNotFoundException("Error");
    }

    var stream = new MemoryStream();
    blobBlock.DownloadToStream(stream);
    return File(request.FileName);
}

private CloudBlob InitializeDownload(string uri)
{
    var blobBlock = _blobClient.GetBlobReference(uri);
    return blobBlock;
}
This way I'm getting just one file, but I need to list and download all the files inside http://myaccount.windowsazure.blob.net/MyProtocolID-01/MyDocumentID-01/.
Thanks
Adding more details: you will need to use one of the listing APIs provided by the client library: CloudBlobContainer.ListBlobs(), CloudBlobContainer.ListBlobsSegmented(), or CloudBlobContainer.ListBlobsSegmentedAsync() (and their various overloads). You can specify the directory prefix, and the service will only enumerate blobs matching that prefix. You can then download each blob. You may also want to look at the 'useFlatBlobListing' argument, depending on your scenario.
http://msdn.microsoft.com/en-us/library/microsoft.windowsazure.storage.blob.cloudblobcontainer.listblobs.aspx
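For example, a minimal sketch of that approach with the same generation of client library as the code in the question; the container name "MyProtocolID-01" and prefix "MyDocumentID-01/" are assumptions taken from the URLs above:

// List every blob under the "MyDocumentID-01" virtual directory and download each one.
var container = _blobClient.GetContainerReference("MyProtocolID-01");

// useFlatBlobListing: true returns the blobs themselves rather than CloudBlobDirectory entries.
foreach (var item in container.ListBlobs("MyDocumentID-01/", useFlatBlobListing: true))
{
    var blob = (CloudBlockBlob)item;
    using (var stream = new MemoryStream())
    {
        blob.DownloadToStream(stream);
        // stream now holds the blob's contents; blob.Name is e.g. "MyDocumentID-01/FileName01.jpg"
    }
}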
In addition, AzCopy (see http://blogs.msdn.com/b/windowsazurestorage/archive/2012/12/03/azcopy-uploading-downloading-files-for-windows-azure-blobs.aspx) also supports downloading all blobs under a given directory path.
Since each blob is a separate web resource, the function above will download only one file. One thing you could do is list all the blobs with the logic above, download them to your server first, zip them, and then return that zip file to your end user.
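A hedged sketch of that list-then-zip approach (it uses ZipArchive from System.IO.Compression, available in .NET 4.5+; the container and prefix names are again assumptions from the question, and the final line only applies inside an MVC controller):

// List the blobs under the prefix, copy each one into a zip archive, and return the zip.
var container = _blobClient.GetContainerReference("MyProtocolID-01");
var zipStream = new MemoryStream();

using (var archive = new ZipArchive(zipStream, ZipArchiveMode.Create, leaveOpen: true))
{
    foreach (var item in container.ListBlobs("MyDocumentID-01/", useFlatBlobListing: true))
    {
        var blob = (CloudBlockBlob)item;
        // Keep only the file name portion as the entry name inside the zip.
        var entry = archive.CreateEntry(Path.GetFileName(blob.Name));
        using (var entryStream = entry.Open())
        {
            blob.DownloadToStream(entryStream);
        }
    }
}

zipStream.Position = 0;
return File(zipStream, "application/zip", "MyDocumentID-01.zip");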
Use AzCopy; these days it supports a lot of scenarios, including this one.
https://learn.microsoft.com/en-us/azure/storage/common/storage-use-azcopy-v10
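For example, downloading everything under a virtual directory with AzCopy v10 looks roughly like this (the account name, container, local path, and SAS token are all placeholders):

azcopy copy "https://myaccount.blob.core.windows.net/MyProtocolID-01/MyDocumentID-01?<SAS-token>" "C:\Downloads" --recursive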
I have a problem. I am using Team Foundation Server 2017 RTM. I have a build definition that deploys my app to a development server running Windows Server 2012 R2. My app allows users to upload images and PDFs. When this happens, a folder named Media is created in my project's root directory and the files are uploaded there. The problem is, whenever I queue a new build, this folder gets destroyed and none of the links to the media point to anything anymore. I am rather new at managing and setting up TFS, so I was wondering if there is any way I can preserve the contents of my Media folder whenever I queue a new build. Any ideas?
Ok, so I spent my whole day looking at this.
In my C# code I create a directory like so:
// -- Create a new file name that is unique
string fileExtension = Path.GetExtension(upload.FileName);
Guid fileGuid = Guid.NewGuid();
string fileName = fileGuid + fileExtension;
// -- Create the directory and upload the image to that directory
string mediaDirectory = Server.MapPath("~/Media/");
Directory.CreateDirectory(mediaDirectory);
string filePath = Path.Combine(mediaDirectory, fileName);
upload.SaveAs(filePath);
I would then set the image url on the Media object like:
string imageUrl = "/Media/" + fileName;
So now, instead of storing the image in the database, I am just storing the URL to the image.
This created a directory inside the app directory where I could store the files.
That worked, but as I mentioned, this directory gets destroyed every time I queue a new build. How I fixed this was to change where I store the images:
// -- Create a new file name that is unique
string fileExtension = Path.GetExtension(upload.FileName);
Guid fileGuid = Guid.NewGuid();
string fileName = fileGuid + fileExtension;
// -- Create the directory and upload the image to that directory
// The Media directory will be created on the C drive root
string mediaDirectory = @"c:\Media";
Directory.CreateDirectory(mediaDirectory);
string filePath = Path.Combine(mediaDirectory, fileName);
upload.SaveAs(filePath);
Now my Media folder is created on the server's C drive and won't be destroyed whenever I queue a new build. Since the app can't access files outside the app directory, I needed a way to access the files in that Media directory. What I did was create a new virtual directory in IIS that points to the Media folder, with the alias Media.
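If you'd rather script that instead of clicking through IIS Manager, something like the following should do it (a sketch using the WebAdministration PowerShell module; it assumes the site is literally named "Default Web Site"):

New-WebVirtualDirectory -Site "Default Web Site" -Name "Media" -PhysicalPath "C:\Media"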
This will now let me have access to all those files I put in the Media directory and will properly display the images when needed. I really hope this helps someone because I spent way too long looking at this.
According to your description: the build agent has a concept of a working directory. If you set Clean = true in the build definition, the previous build output is deleted when you queue a new build. I'm not sure where your Media folder is located, but avoid creating or putting it in a directory on the build agent such as Build.ArtifactStagingDirectory:
The local path on the agent where any artifacts are copied to before
being pushed to their destination. For example: c:\agent_work\1\a.
A typical way to use this folder is to publish your build artifacts
with the Copy files and Publish build artifacts steps.
Note: This directory is purged before each new build, so you don't have to clean it up yourself.
For more details about the folder paths in build/release, you can refer to this tutorial: Predefined variables.
I am working on a Firefox add-on which, among other things, generates thumbnails of websites for use by the add-on. So far I've been storing them by their image data URL using simple-storage. There are two problems with this: the storage space is limited, and sending very long strings around doesn't seem optimal (I assume the browser has optimized ways of loading image files, but maybe not data URLs). I think it shouldn't be a problem to save the files to disk; the question is where, though. I googled quite a bit and could not find anything. Is there a natural place for this? Are there any restrictions?
As of Firefox 32, the place to store data for your add-on is supposed to be: [profile]/extension-data/[add-on ID]. This was established by the resolution of "Bug 915838 - Provide add-ons a standard directory to store data, settings". There is a follow-on bug, "Bug 952304 - (JSONStore) JSON storage API for addons to use in storing data and settings" which is supposed to provide an API for easy access.
For the Addon-SDK, you can obtain the addon ID (which you define in package.json) with:
let self = require("sdk/self");
let addonID = self.id;
For XUL and restartless extensions, you should be able to get the ID of your addon (which you define in the install.rdf file) with:
Components.utils.import("resource://gre/modules/Services.jsm");
let addonID = Services.appInfo.ID
You can then do the following to generate a URI for a file in that directory:
let userProfileDirectoryPath = Components.classes["@mozilla.org/file/directory_service;1"]
                                         .getService(Components.interfaces.nsIProperties)
                                         .get("ProfD", Components.interfaces.nsIFile).path;

/**
 * Generate a URI for a filename in the extension's data directory under the
 * profile directory.
 */
function generateURIForFileInPrefExtensionDataDirectory(fileName) {
    // Account for the path separator being OS dependent.
    let toReturn = "file://" + userProfileDirectoryPath.replace(/\\/g, "/");
    return toReturn + "/extension-data/" + addonID + "/" + fileName;
}
The object myExtension.addonData is a copy I keep of the bootstrap data provided to the entry points in bootstrap.js.
I'm running Jenkins and I have it successfully working with my GitHub account, but I can't get it working correctly with Amazon S3.
I installed the S3 plugin, and when I run a build it successfully uploads to the S3 bucket I specify, but all of the uploaded files end up in the root of the bucket. I have a bunch of folders (such as /css, /js and so on), but all of the files in those folders from GitHub end up in the root of my S3 account.
Is it possible to get the S3 plugin to upload and retain the folder structure?
It doesn't look like this is possible. Instead, I'm using s3cmd to do this. You must first install it on your server, and then in one of the bash scripts within a Jenkins job you can use:
s3cmd sync -r -P $WORKSPACE/ s3://YOUR_BUCKET_NAME
That will copy all of the files to your S3 account maintaining the folder structure. The -P keeps read permissions for everyone (needed if you're using your bucket as a web server). This is a great solution using the sync feature, because it compares all your local files against the S3 bucket and only copies files that have changed (by comparing file sizes and checksums).
I have never worked with the S3 plugin for Jenkins (but now that I know it exists, I might give it a try). Looking at the code, though, it seems you can only do what you want using a workaround.
Here's what the actual plugin code does (taken from GitHub); I removed the parts of the code that are not relevant, for the sake of readability:
class hudson.plugins.s3.S3Profile, method upload:
final Destination dest = new Destination(bucketName,filePath.getName());
getClient().putObject(dest.bucketName, dest.objectName, filePath.read(), metadata);
Now if you take a look into hudson.FilePath.getName()'s JavaDoc:
Gets just the file name portion without directories.
Now, take a look into the hudson.plugins.s3.Destination's constructor:
public Destination(final String userBucketName, final String fileName) {
    if (userBucketName == null || fileName == null)
        throw new IllegalArgumentException("Not defined for null parameters: " + userBucketName + "," + fileName);

    final String[] bucketNameArray = userBucketName.split("/", 2);
    bucketName = bucketNameArray[0];
    if (bucketNameArray.length > 1) {
        objectName = bucketNameArray[1] + "/" + fileName;
    } else {
        objectName = fileName;
    }
}
The Destination class JavaDoc says:
The convention implemented here is that a / in a bucket name is used to construct a structure in the object name. That is, a put of file.txt to bucket name of "mybucket/v1" will cause the object "v1/file.txt" to be created in the mybucket.
Conclusion: the filePath.getName() call strips off any prefix you add to the file (S3 does not have real directories, only key prefixes; see this and this thread for more info). If you really need to put your files into a "folder" (i.e. give them a prefix that contains a slash), I suggest you add that prefix to the end of your bucket name, as explained in the Destination class JavaDoc: a destination bucket of "mybucket/css", for example, would produce objects named "css/<file name>".
Yes, this is possible.
It looks like you'll need a separate instance of the S3 plugin for each folder destination, however.
"Source" is the file you're uploading.
"Destination bucket" is where you place your path.
Using Jenkins 1.532.2 and S3 Publisher Plug-In 0.5, the UI Configure Job screen rejects additional S3 publish entries. There would also be a significant maintenance benefit for us if the plugin recreated the workspace directory structure, as we'll have many directories to create.
Set up your Git plugin.
Set up your Bash script.
Everything in your folder marked as "*" will go to the bucket.
I am currently struggling to upload multiple files from local storage to Azure Blob Storage. I was wondering if anyone could help me; below is the code I was previously using to upload a single zip file.
private void SaveZip(string id, string fileName, string contentType, byte[] data)
{
    // Create a blob in container and upload image bytes to it
    var blob = this.GetContainer().GetBlobReference(fileName);
    blob.Properties.ContentType = contentType;

    // Create some metadata for this image
    var metadata = new NameValueCollection();
    metadata["Id"] = id;
    metadata["Filename"] = fileName;
}

SaveZip(
    Guid.NewGuid().ToString(),
    zipFile.FileName,
    zipFile.PostedFile.ContentType,
    zipFile.FileBytes);
Thanks, Sami.
It's quite straightforward with Set-AzureStorageBlobContent from the Azure Storage PowerShell module.
ls -File -Recurse | Set-AzureStorageBlobContent -Container upload
MSDN documentation: http://msdn.microsoft.com/en-us/library/dn408487.aspx
I don't think there are any built-in methods you can use to upload multiple files to blob storage. What you can do is upload them one by one, or in parallel.
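As a rough sketch of the one-by-one approach, reusing the GetContainer() helper from your own code (the local folder path is just a placeholder):

// Upload every file in a local folder to the container, one blob per file.
var container = this.GetContainer();
foreach (var filePath in Directory.GetFiles(@"C:\FilesToUpload"))
{
    var blob = container.GetBlobReference(Path.GetFileName(filePath));
    using (var fileStream = File.OpenRead(filePath))
    {
        blob.UploadFromStream(fileStream);
    }
}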
If you're just starting to work with Blob Storage, I'd encourage you to take a look at the "How to" article we've published. Specifically, the section on "How to Upload a Blob into a Container" should be helpful. Beyond that, Shaun is correct - there is no built-in support in the StorageClient library for uploading multiple files at once, but you can certainly upload them one-by-one.
If your need is just to get it done, and not to make an app out of it, you should consider checking out Cloud Storage Studio.
Like CodeThug said, "You never do anything with the byte array".
You have to upload the data stream to the blob and you are done.
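In other words, a minimal sketch of the missing piece in SaveZip, using the same generation of client API as the question's code, would be something like:

// Actually push the bytes to the blob; without this call nothing is uploaded.
blob.UploadFromStream(new MemoryStream(data));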
I have a multimodule project:
Project
|--src
|  |--JavaFile.java
Web-Project
|--Web-Content
   |--images
   |  |--logo.PNG
   |--pages
   |--WEB-INF
The regular Java module contains src with all the Java files.
The dynamic web project module contains all the web-related stuff.
Eventually the regular Java module ends up as a jar file in the dynamic web module's lib folder.
Problem
After compilation, the Java code looks for the image file at c:\ibm\sdp\server completepath\logo.png rather than in the webapp context. The file is referenced in the Java code as below, for iText:
Image logo = Image.getInstance("/images/logo.PNG");
Please suggest how I can change my Java file to refer to the image. I am not allowed to change my project structure.
You need to use ServletContext#getResource() or, better, getResourceAsStream() for that. They return a URL and an InputStream, respectively, for the resource in the web content.
InputStream input = getServletContext().getResourceAsStream("/images/logo.PNG");
// ...
This way you're not dependent on where (and how!) the webapp has been deployed. Relying on absolute disk file system paths would only end up in a portability headache.
See also:
getResourceAsStream() vs FileInputStream
Update: as per the comments, you seem to be using iText (you should have clarified that a bit more in the question; I edited it). You can then use the Image#getInstance() method which takes a URL:
URL url = getServletContext().getResource("/images/logo.PNG");
Image image = Image.getInstance(url);
// ...
Update 2: as per the comments, you turn out to be sitting in the JSF context (you should have clarified that as well in the question). You should use ExternalContext#getResource() instead to get the URL:
URL url = FacesContext.getCurrentInstance().getExternalContext().getResource("/images/logo.PNG");
Image image = Image.getInstance(url);
// ...