LibGit2Sharp: Cloning from a network file share

I'm having issues cloning using the file transport when the remote is hosted on a network drive.
I downloaded the project and tried adding some test cases:
[Fact]
public void CanCloneALocalRepositoryFromANetworkDriveUri()
{
    var networkPath = @"file:///192.168.1.1/Share/TestRepo.git";
    var uri = new Uri(networkPath);
    AssertLocalClone(uri.AbsoluteUri, BareTestRepoPath);
}
That fails with:
LibGit2Sharp.LibGit2SharpException : failed to resolve path 'file://192.168.1.1/Share/TestRepo.git': The filename, directory name, or volume label syntax is incorrect.
I tried mapping a drive letter (Z:) to the share, and ran this:
[Fact]
public void CanCloneALocalRepositoryFromAMappedNetworkDrive()
{
    var networkPath = @"file:///Z:/TestRepo.git";
    var uri = new Uri(networkPath);
    AssertLocalClone(uri.AbsoluteUri, BareTestRepoPath);
}
That fails with:
LibGit2Sharp.LibGit2SharpException : failed to resolve path 'Z:/TestRepo.git': The system cannot find the path specified.
unless I set:
HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\EnableLinkedConnections
to a DWORD value of 1, as per this TechNet article - in which case the clone succeeds. However, this is not a viable solution in my situation, as it raises deployment issues in security-conscious environments.
It appears that LibGit2Sharp is not capable of cloning from a UNC path via a file:// URI. Have I understood correctly, and if so, is there any way to work around this?

The file:/// URL syntax is not appropriate for UNC paths. Just use a plain UNC path, e.g.:
\\192.168.1.1\Share\TestRepo.git
This works in LibGit2Sharp and the git command-line client as well.
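For example, here is a minimal sketch of cloning from the share with LibGit2Sharp's Repository.Clone; the share address and target directory below are placeholders:

using LibGit2Sharp;

class CloneFromShare
{
    static void Main()
    {
        // Placeholder paths: replace with your actual share and target directory.
        const string sourceUrl = @"\\192.168.1.1\Share\TestRepo.git";
        const string workDir = @"C:\Temp\TestRepoClone";

        // Repository.Clone returns the path of the newly created local repository.
        string clonedPath = Repository.Clone(sourceUrl, workDir);
        System.Console.WriteLine("Cloned to: " + clonedPath);
    }
}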

Related

Checkout using Libgit2sharp on an empty repo

Using git checkout -b <branchname> I am able to create a new branch in an empty repo and then start committing files on that branch. I am not able to achieve this via libgit2sharp. When I use repo.Checkout(branchName), it throws the following error:
LibGit2Sharp.NotFoundException: No valid git object identified by exists in the repository.
The current version of the native libgit2 library used by libgit2sharp requires a HEAD to exist, since it is used during branch creation. Using an empty (null) committish is valid in official git, which is why creating a new branch and checking it out works fine on a completely empty repo there. Maybe this will be covered in the next release, and/or it is an already known bug.
But either way, just create an initial commit that is empty in content and it works:
using System;
using System.IO;
using LibGit2Sharp;

namespace stackoverflow
{
    class MainClass
    {
        public static void Main (string[] args)
        {
            // Initialize a new, non-bare repository under the temp directory.
            var rPath = Path.Combine (Path.GetTempPath (), "StackOverFlow");
            var rootedPath = Repository.Init (rPath, false);

            using (var repo = new Repository (rootedPath))
            {
                // An empty initial commit gives HEAD something to point at,
                // so that creating and checking out the new branch succeeds.
                repo.Commit ("Initial Commit");
                repo.CreateBranch ("EmptyBranch");
                repo.Checkout ("EmptyBranch");
            }
        }
    }
}

Embed SWF file in Firefox add-on (no SDK)

I have a problem with a Firefox extension I'm developing.
I need to add an SWF file to the page. If I load it from a remote server, it works fine:
myObj2.setAttribute("data",'http://www.mySite.com/myFile.swf');
myPar1.setAttribute("value",'http://www.mySite.com/myFile.swf');
It works fine, but it is not accepted for review.
So I created a resource directory in the manifest:
resource ldvswf swf/
and changed the script into:
myObj2.setAttribute("data",'resource://ldvswf/myFile.swf');
myPar1.setAttribute("value",'resource://ldvswf/myFile.swf');
but it doesn't work. The resource://ldvswf folder is OK, as I tested it by loading an image and I can see it.
The reviewer wrote me that for Flash files it "requires doing so via a file: URL", but I don't know how to manage that. I tested:
'file: resource://ldvswf/myFile.swf'
'file://resource://ldvswf/myFile.swf'
'file://ldvswf/myFile.swf'
'file: ldvswf/myFile.swf'
And nothing works.
Any suggestion for the right path?
Thanks a lot!
Nadia
Update: the editor wrote me:
You need a file URL that points to an actual file. If your extension is unpacked, something like the following should do:
Services.io.newFileURI(Services.io.newURI("resource://ldvswf/myFile.swf", null, null)
.QueryInterface(Ci.nsIFileURL).file)
.spec
But I don't understand how to place it so that it replaces:
myObj2.setAttribute("data",'http://www.mySite.com/myFile.swf');
myPar1.setAttribute("value",'http://www.mySite.com/myFile.swf');
I made some tests like:
var file = Services.io.newFileURI(Services.io.newURI("resource://ldvswf/myFile.swf", null, null).QueryInterface(Ci.nsIFileURL).file).spec ;
myObj2.setAttribute("data",file);
myPar1.setAttribute("value",file);
But I get this error message:
Error: NS_NOINTERFACE: Component returned failure code: 0x80004002 (NS_NOINTERFACE) [nsIFileURL.file]
Have you tried using contentaccessible=yes in your chrome.manifest, like this:
content package_name content/ contentaccessible=yes
I tested it with an image in a local web page and it worked; I don't know about SWF:
<img src="chrome://package_name/skin/window.png"/>
Another approach is to get the file URI of the chrome:// (not resource://) file like this:
function chromeToPath (aPath) {
    if (!aPath || !(/^chrome:/.test(aPath))) {
        return null; // not a chrome URL
    }
    var Cc = Components.classes, Ci = Components.interfaces;
    // Resolve the chrome:// URL through the chrome registry to get its underlying URL.
    var ios = Cc['@mozilla.org/network/io-service;1'].getService(Ci["nsIIOService"]);
    var uri = ios.newURI(aPath, "UTF-8", null);
    var crs = Cc['@mozilla.org/chrome/chrome-registry;1'].getService(Ci["nsIChromeRegistry"]);
    var spec = crs.convertChromeURL(uri).spec;
    return spec;
}
usage:
chromeToPath('chrome://package_name/skin/window.png')
and it will return the file:// URI of the file.
If you test this, better make sure the extension is unpacked (unzipped) on install:
in install.rdf:
<em:unpack>true</em:unpack>
Try breaking the line up like this:
var file = Services.io.newURI("resource://ldvswf/myFile.swf", null, null).QueryInterface(Ci.nsIFileURL).file;
alert(file.path);
var fileUri = Services.io.newFileURI(file).spec;
alert(fileUri);
The first alert() should give a local system path like 'c:\folder\filename...', and the second should give a file:// URI. So what do you get?
Make sure you have a line in package.json:
"unpack": true,
at the same level as name, title, author, etc. Note that true does not have quotes.
(https://developer.mozilla.org/en-US/Add-ons/SDK/Tools/package_json)

How to download all files in an Azure Container Directory?

I have an ASP.NET app from which I upload files to Azure blob storage. I know that Azure doesn't create real directory structures inside containers, just blobs, but you can emulate directories by putting a "/" in the URI.
For example, I upload a list of files and my URIs look like this:
http://myaccount.windowsazure.blob.net/MyProtocolID-01/MyDocumentID-01/FileName01.jpg
http://myaccount.windowsazure.blob.net/MyProtocolID-01/MyDocumentID-01/FileName02.jpg
http://myaccount.windowsazure.blob.net/MyProtocolID-01/MyDocumentID-01/FileName03.jpg
My download method:
public RemoteFile Download(DownloadRequest request)
{
    var fileFinal = string.Format("{0}/{1}/{2}", request.IDProtocol, request.IDDocument, request.FileName);
    var blobBlock = InitializeDownload(fileFinal);
    if (!blobBlock.Exists())
    {
        throw new FileNotFoundException("Error");
    }
    var stream = new MemoryStream();
    blobBlock.DownloadToStream(stream);
    return File(request.FileName);
}

private CloudBlob InitializeDownload(string uri)
{
    var blobBlock = _blobClient.GetBlobReference(uri);
    return blobBlock;
}
This way I'm getting just one file, but I need to list and download all of the files inside http://myaccount.windowsazure.blob.net/MyProtocolID-01/MyDocumentID-01/
Thanks
Adding more details: you will need to use one of the listing APIs provided by the client library: CloudBlobContainer.ListBlobs(), CloudBlobContainer.ListBlobsSegmented(), or CloudBlobContainer.ListBlobsSegmentedAsync() (and their various overloads). You can specify the directory prefix, and the service will only enumerate blobs matching that prefix; you can then download each blob. You may also want to look at the 'useFlatBlobListing' argument, depending on your scenario.
http://msdn.microsoft.com/en-us/library/microsoft.windowsazure.storage.blob.cloudblobcontainer.listblobs.aspx
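As an illustration, here is a minimal sketch of listing by prefix and downloading each blob, assuming the older Microsoft.WindowsAzure.Storage client library; the connection string, container name, prefix, and local folder are placeholders based on the question's example URIs:

using System;
using System.IO;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

class DownloadDirectory
{
    static void Main()
    {
        // Placeholder connection string and names; replace with your own.
        var account = CloudStorageAccount.Parse("DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=...");
        var client = account.CreateCloudBlobClient();
        var container = client.GetContainerReference("MyProtocolID-01");

        // List every blob whose name starts with the "directory" prefix.
        // useFlatBlobListing: true returns the blobs themselves rather than CloudBlobDirectory entries.
        foreach (IListBlobItem item in container.ListBlobs("MyDocumentID-01/", useFlatBlobListing: true))
        {
            var blob = (CloudBlockBlob)item;
            var localPath = Path.Combine(@"C:\Downloads", Path.GetFileName(blob.Name));
            blob.DownloadToFile(localPath, FileMode.Create);
            Console.WriteLine("Downloaded " + blob.Name);
        }
    }
}

For containers with many blobs, the segmented variants (ListBlobsSegmented/ListBlobsSegmentedAsync) are preferable, since they return results one page at a time.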
AzCopy (see http://blogs.msdn.com/b/windowsazurestorage/archive/2012/12/03/azcopy-uploading-downloading-files-for-windows-azure-blobs.aspx) also supports this scenario of downloading all blobs under a given directory path.
Since each blob is a separate web resource, the function above will download only one file. One thing you could do is list all the blobs using the listing logic above, download them to your server first, zip them, and then return that zip file to your end user; a rough sketch of that approach follows.
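A minimal sketch of that zip-and-return idea, under the same assumptions as the listing sketch above (older Microsoft.WindowsAzure.Storage client library, placeholder names):

using System.IO;
using System.IO.Compression;
using Microsoft.WindowsAzure.Storage.Blob;

static class BlobZipper
{
    // Downloads every blob under the given prefix and returns them as one zip archive.
    public static byte[] ZipDirectory(CloudBlobContainer container, string prefix)
    {
        using (var zipStream = new MemoryStream())
        {
            using (var archive = new ZipArchive(zipStream, ZipArchiveMode.Create, leaveOpen: true))
            {
                foreach (IListBlobItem item in container.ListBlobs(prefix, useFlatBlobListing: true))
                {
                    var blob = (CloudBlockBlob)item;
                    var entry = archive.CreateEntry(Path.GetFileName(blob.Name));
                    using (var entryStream = entry.Open())
                    {
                        // Stream the blob contents straight into the zip entry.
                        blob.DownloadToStream(entryStream);
                    }
                }
            }
            return zipStream.ToArray();
        }
    }
}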
Use AzCopy; it now has a lot of support for this.
https://learn.microsoft.com/en-us/azure/storage/common/storage-use-azcopy-v10

Jenkins Continuous Integration with Amazon S3 - Everything is uploading to the root?

I'm running Jenkins and I have it successfully working with my GitHub account, but I can't get it working correctly with Amazon S3.
I installed the S3 plugin, and when I run a build it successfully uploads to the S3 bucket I specify, but all of the uploaded files end up in the root of the bucket. I have a bunch of folders (such as /css, /js and so on), but all of the files in those folders from GitHub end up in the root of my S3 account.
Is it possible to get the S3 plugin to upload and retain the folder structure?
It doesn't look like this is possible. Instead, I'm using s3cmd to do this. You must first install it on your server, and then in one of the bash scripts within a Jenkins job you can use:
s3cmd sync -r -P $WORKSPACE/ s3://YOUR_BUCKET_NAME
That will copy all of the files to your S3 account maintaining the folder structure. The -P keeps read permissions for everyone (needed if you're using your bucket as a web server). This is a great solution using the sync feature, because it compares all your local files against the S3 bucket and only copies files that have changed (by comparing file sizes and checksums).
I have never worked with the S3 plugin for Jenkins (but now that I know it exists, I might give it a try); however, looking at the code, it seems you can only do what you want using a workaround.
Here's what the actual plugin code does (taken from GitHub); I removed the parts of the code that are not relevant, for the sake of readability:
class hudson.plugins.s3.S3Profile, method upload:
final Destination dest = new Destination(bucketName,filePath.getName());
getClient().putObject(dest.bucketName, dest.objectName, filePath.read(), metadata);
Now if you take a look into hudson.FilePath.getName()'s JavaDoc:
Gets just the file name portion without directories.
Now, take a look into the hudson.plugins.s3.Destination's constructor:
public Destination(final String userBucketName, final String fileName) {
    if (userBucketName == null || fileName == null)
        throw new IllegalArgumentException("Not defined for null parameters: " + userBucketName + "," + fileName);
    final String[] bucketNameArray = userBucketName.split("/", 2);
    bucketName = bucketNameArray[0];
    if (bucketNameArray.length > 1) {
        objectName = bucketNameArray[1] + "/" + fileName;
    } else {
        objectName = fileName;
    }
}
The Destination class JavaDoc says:
The convention implemented here is that a / in a bucket name is used to construct a structure in the object name. That is, a put of file.txt to bucket name of "mybucket/v1" will cause the object "v1/file.txt" to be created in the mybucket.
Conclusion: the filePath.getName() call strips off any prefix (S3 does not have directories, only prefixes; see this thread and this one for more info) you add to the file. If you really need to put your files into a "folder" (i.e. give them a specific prefix that contains a slash (/)), I suggest you add that prefix to the end of your bucket name, as explained in the Destination class JavaDoc.
Yes, this is possible.
It looks like you'll need a separate instance of the S3 plugin for each folder destination, however.
"Source" is the file you're uploading.
"Destination bucket" is where you place your path.
Using Jenkins 1.532.2 and S3 Publisher Plug-In 0.5, the job configuration UI rejects additional S3 publish entries. There would also be a significant maintenance benefit for us if the plugin recreated the workspace directory structure, as we'll have many directories to create.
Set up your Git plugin.
Set up your Bash script.
Everything in your folder marked as "*" will go to the bucket.

How to upload files to Rackspace Cloud using Windows services

Using my Windows service (target framework = .NET Framework 4.0 Client Profile) I am trying to upload files to Rackspace Cloud Files.
I found some ASP.NET C# APIs here: https://github.com/rackspace/csharp-cloudfiles
but it looks like they are not compatible with Windows services.
Any clues how to make this work together?
It's a perfect library for working with Rackspace; I use it myself, and I'm sure it's not a problem to use it inside a Windows service. There may be issues with the .NET Framework Client Profile and com.mosso.cloudfiles.dll, but try with the Client Profile first.
I use the following code to upload files to Rackspace (Configuration is my configuration class; instead of 'Configuration.RackSpaceUserName' and 'Configuration.RackSpaceKey', use your own credentials):
private Connection CreateConnection()
{
    var userCredentials = new UserCredentials(Configuration.RackSpaceUserName, Configuration.RackSpaceKey);
    return new Connection(userCredentials);
}

public void SaveUniqueFile(string containerName, string fileName, Guid guid, byte[] buffer)
{
    string extension = Path.GetExtension(fileName);
    Connection connection = CreateConnection();
    MemoryStream stream = new MemoryStream(buffer);
    string uniqueFileName = String.Format("{0}{1}", guid, extension);
    connection.PutStorageItem(containerName, stream, uniqueFileName);
}
The Configuration class is something like this:
public class Configuration
{
    public static string RackSpaceUserName = "userName";
    public static string RackSpaceKey = "rackspaceKey";
}
If you don't want to use com.mosso.cloudfiles.dll, it's very easy to create your own driver for Rackspace, because to upload a file to Rackspace you actually just need to send a PUT request with an 'X-Auth-Token' header. You can also check the request structure using a Firefox plugin for viewing and uploading files to Rackspace, together with Firebug. A rough sketch of such a raw request is shown below.
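For illustration only (this is not the com.mosso.cloudfiles API), a minimal sketch of such a raw PUT using HttpWebRequest; the storage URL, token, container, and file path are placeholders, and in practice you would first obtain the token and storage URL from the Rackspace authentication endpoint:

using System;
using System.IO;
using System.Net;

class RawCloudFilesUpload
{
    static void Main()
    {
        // Placeholder values: the storage URL and token come from the authentication response.
        string storageUrl = "https://storage.clouddrive.com/v1/MossoCloudFS_xxxx";
        string authToken = "your-auth-token";
        string container = "mycontainer";
        string localFile = @"C:\Temp\example.jpg";

        // PUT <storageUrl>/<container>/<objectName> with the X-Auth-Token header uploads the object.
        var request = (HttpWebRequest)WebRequest.Create(
            storageUrl + "/" + container + "/" + Path.GetFileName(localFile));
        request.Method = "PUT";
        request.Headers.Add("X-Auth-Token", authToken);

        using (var requestStream = request.GetRequestStream())
        using (var fileStream = File.OpenRead(localFile))
        {
            fileStream.CopyTo(requestStream);
        }

        using (var response = (HttpWebResponse)request.GetResponse())
        {
            Console.WriteLine("Upload status: " + (int)response.StatusCode);
        }
    }
}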
I have an example in C# using that same library here:
https://github.com/chmouel/upload-to-cf-cs
It is a pretty simple CLI, but hopefully it should give you an idea of how to use it.
I've been at this for about an hour and weird things are happening in VS2010. Although I have referenced the DLL and IntelliSense is working, it cannot compile; it looks like the referenced DLL disappears.
So my recommendation, in case you run into the same issue, is to use the Rackspace build for .NET 3.5: csharp-cloudfiles-DOTNETv3.5-bin-2.0.0.0.zip
Just be sure to change your project to the same framework version. It works really well.
For your reference, the downloads page is here: https://github.com/rackspace/csharp-cloudfiles/downloads
