Copy database outside APK - xamarin.android

I am trying to copy a database from the Assets folder, but unfortunately I get the error: System.UnauthorizedAccessException: 'Access to the path "/storage/emulated/0/Northwind.sqlite" is denied.' I have already added the runtime permission. Could you tell me what I am doing wrong? Below is my source code:
string dbName = "Northwind.sqlite";
string dbPath = Path.Combine(Android.OS.Environment.ExternalStorageDirectory.ToString(), dbName);

// Check if your DB has already been extracted.
if (!File.Exists(dbPath))
{
    using (BinaryReader br = new BinaryReader(Android.App.Application.Context.Assets.Open(dbName)))
    {
        using (BinaryWriter bw = new BinaryWriter(new FileStream(dbPath, FileMode.Create)))
        {
            byte[] buffer = new byte[2048];
            int len = 0;
            while ((len = br.Read(buffer, 0, buffer.Length)) > 0)
            {
                bw.Write(buffer, 0, len);
            }
        }
    }
}

You could follow the steps below. It works well on my side.
Put your database in the Assets folder.
Set the Build Action to AndroidAsset.
You can then use the following code to copy the file from the Assets folder to the Android application folder:
// Android application default folder.
var dbFile = GetDefaultFolderPath();
// Check if the file already exists.
if (!File.Exists(dbFile))
{
    using (FileStream writeStream = new FileStream(dbFile, FileMode.OpenOrCreate, FileAccess.Write))
    {
        // Assets is coming from the current context.
        await Assets.Open(databaseName).CopyToAsync(writeStream);
    }
}
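GetDefaultFolderPath() is not defined in the snippet above. A minimal sketch, assuming the database should end up in the app's private data directory (which needs no storage permission, unlike the external storage path in the question), could be:

// Hypothetical helper: the app-private files directory is a safe default target.
string GetDefaultFolderPath()
{
    // SpecialFolder.Personal maps to the application's internal data directory on Android.
    string folder = System.Environment.GetFolderPath(System.Environment.SpecialFolder.Personal);
    return Path.Combine(folder, "Northwind.sqlite");
}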
Download the source file from the link below.
https://github.com/damienaicheh/CopyAssetsProject

Related

Compress Xamarin Android APK assemblies

Is it possible to compress Xamarin assemblies back into assemblies.blob after decompressing them with the decompress-assemblies command-line tool included in xamarin-android?
I got familiar with the structure of the assemblies.blob file from https://github.com/xamarin/xamarin-android/blob/main/Documentation/project-docs/AssemblyStores.md and used the following steps to modify a DLL inside that store file:
Decompress the file using the decompress-assemblies utility.
Patch the DLL that needs changing.
Read the descriptor index of the compressed DLL. To get it, use assembly-store-reader and write down the offset of the assembly that requires patching, open assemblies.blob in a hex editor, and navigate to that offset; the descriptor index is the uint immediately after the compression magic uint (XALZ). (See the sketch after these steps.)
Compress the DLL again using the AssemblyCompression class, passing the descriptor index to the AssemblyData constructor.
Add this method to AssemblyStoreReader to recreate assemblies.blob with the patched DLL:
internal void SaveStoreToStream(Stream output)
{
    EnsureStoreDataAvailable();

    // Load patched DLL.
    MemoryStream dllStream;
    using (var fs = File.Open("/location/of/patched.dll.lz4", FileMode.Open, FileAccess.Read))
    {
        dllStream = new MemoryStream();
        fs.CopyTo(dllStream);
    }

    // Index of the assembly to patch in the assemblies.blob file.
    var patchedAssemblyIndex = 17;
    uint offsetDiff = (uint)(Assemblies[patchedAssemblyIndex].DataSize - dllStream.Length);

    using (var bw = new BinaryWriter(output))
    {
        bw.Write(ASSEMBLY_STORE_MAGIC);
        bw.Write(ASSEMBLY_STORE_FORMAT_VERSION);
        bw.Write(LocalEntryCount);
        bw.Write(GlobalEntryCount);
        bw.Write(StoreID);

        foreach (AssemblyStoreAssembly assembly in Assemblies)
        {
            if (assembly.RuntimeIndex > patchedAssemblyIndex)
            {
                bw.Write(assembly.DataOffset - offsetDiff);
            }
            else
            {
                bw.Write(assembly.DataOffset);
            }

            if (assembly.RuntimeIndex == patchedAssemblyIndex)
            {
                bw.Write((uint)dllStream.Length);
            }
            else
            {
                bw.Write(assembly.DataSize);
            }

            bw.Write(assembly.DebugDataOffset);
            bw.Write(assembly.DebugDataSize);
            bw.Write(assembly.ConfigDataOffset);
            bw.Write(assembly.ConfigDataSize);
        }

        foreach (AssemblyStoreHashEntry entry in GlobalIndex32)
        {
            bw.Write(entry.Hash);
            bw.Write(entry.MappingIndex);
            bw.Write(entry.LocalStoreIndex);
            bw.Write(entry.StoreID);
        }

        foreach (AssemblyStoreHashEntry entry in GlobalIndex64)
        {
            bw.Write(entry.Hash);
            bw.Write(entry.MappingIndex);
            bw.Write(entry.LocalStoreIndex);
            bw.Write(entry.StoreID);
        }

        foreach (AssemblyStoreAssembly assembly in Assemblies)
        {
            if (assembly.RuntimeIndex == patchedAssemblyIndex)
            {
                bw.Write(dllStream.ToArray());
            }
            else
            {
                using (var stream = new MemoryStream())
                {
                    assembly.ExtractImage(stream);
                    bw.Write(stream.ToArray());
                }
            }
        }
    }
}
Write the stream passed to the above method to a new file; that file is the assemblies.blob containing the patched DLL.
Use apktool to recreate the APK, then align and sign it; it should then run with your patched DLL.
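As an aside, the descriptor index from step 3 can also be read programmatically. This is only a sketch, assuming the compressed-assembly layout described in the xamarin-android documentation (a 4-byte XALZ magic, then the descriptor index as a uint, then the uncompressed size); the file path is hypothetical:

// Sketch: read the descriptor index from a XALZ-compressed assembly extracted from assemblies.blob.
byte[] data = File.ReadAllBytes("/location/of/extracted/assembly.dll.lz4");
uint magic = BitConverter.ToUInt32(data, 0);            // expected to be the XALZ magic
uint descriptorIndex = BitConverter.ToUInt32(data, 4);  // value to pass to the AssemblyData constructor
uint uncompressedSize = BitConverter.ToUInt32(data, 8);
Console.WriteLine($"Descriptor index: {descriptorIndex}, uncompressed size: {uncompressedSize}");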

How do I increase the size of an Azure File Storage CloudFile before I know the file size?

I'm using Azure File Storage to store some files, and I want to create a zip file containing some of these files on the same Azure file share.
This is my code so far:
private void CreateZip(CloudFileDirectory directory) {
    if (directory == null) throw new ArgumentNullException(nameof(directory));

    var zipFilename = $"{directory.Name}.zip";
    var zip = directory.GetFileReference(zipFilename);
    if (!zip.Exists()) {
        zip.Create(0); // <-- I don't know what size it's gonna be!!
        using (var zipStream = zip.OpenWrite(null))
        using (var archive = new ZipArchive(zipStream, ZipArchiveMode.Create)) {
            foreach (var file in directory.ListFilesAndDirectories().OfType<CloudFile>()) {
                if (file.Name.Equals(zipFilename, StringComparison.InvariantCultureIgnoreCase))
                    continue;
                using (var fileStream = file.OpenRead()) {
                    var entry = archive.CreateEntry(file.Name);
                    using (var entryStream = entry.Open())
                        fileStream.CopyTo(entryStream); // <-- exception is thrown
                }
            }
        }
    }
}
The line zip.Create(0); creates an empty file. I then go on to use this file reference to create a zip file and add entries to it, but when it gets to fileStream.CopyTo(entryStream); it throws an exception with this message:
The remote server returned an error: (416) The range specified is invalid for the current size of the resource.
Presumably because the file size is 0 and it's unable to automatically increase the size.
I can create the file with int.MaxValue, but then I get a 2GB file. I can't even work out the size of each file I'm adding to the archive and extend the cloud file by that amount, because it's a zip and it's going to compress and change the file size.
How do I do this?
This issue is more related to System.IO.Compression. I have rewritten your code; please use a memory stream instead, as in the following code. It works fine on my side. Hope it gives you some tips.
public static void CreateZip(CloudFileDirectory directory)
{
    if (directory == null) throw new ArgumentNullException(nameof(directory));

    var zipFilename = $"{directory.Name}.zip";
    var zip = directory.GetFileReference(zipFilename);
    if (!zip.Exists())
    {
        //zip.Create(600000); // <-- no need to guess the size up front
        using (var memoryStream = new MemoryStream())
        {
            using (var archive = new ZipArchive(memoryStream, ZipArchiveMode.Create, true))
            {
                foreach (var file in directory.ListFilesAndDirectories().OfType<CloudFile>())
                {
                    if (file.Name.Equals(zipFilename, StringComparison.InvariantCultureIgnoreCase))
                        continue;

                    using (var fileStream = file.OpenRead())
                    {
                        var entry = archive.CreateEntry(file.Name, CompressionLevel.Optimal);
                        using (var entryStream = entry.Open())
                        {
                            fileStream.CopyTo(entryStream);
                        }
                    }
                }
            }

            memoryStream.Seek(0, SeekOrigin.Begin);
            zip.UploadFromStream(memoryStream);
        }
    }
}
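If the archives can get large, buffering the whole zip in a MemoryStream may be a concern. A variation on the same idea (my own sketch, not part of the original answer) is to build the zip in a temporary local file and upload that once the archive is finished, since by then the length is known:

// Sketch: build the zip on local disk, then upload it to the Azure file share.
string tempPath = Path.GetTempFileName();
using (var fileStream = new FileStream(tempPath, FileMode.Create, FileAccess.ReadWrite))
{
    using (var archive = new ZipArchive(fileStream, ZipArchiveMode.Create, true))
    {
        // ... add entries exactly as in the loop above ...
    }
    fileStream.Seek(0, SeekOrigin.Begin);
    zip.UploadFromStream(fileStream);
}
File.Delete(tempPath);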

Not able to properly download files from azure storage and data are lost too when downloading files

I have 2 files saved on Azure blob storage:
Abc.txt
Pqr.docx
Now I want to create a zip file of these 2 files and allow the user to download it.
I have saved this in my database table field like this:
Document: Abc,Pqr
Now when I click on download, I get a file with no data in it, and the file extensions are lost too.
I want the user to get the exact files (.txt, .docx) in the zip when they download it.
This is my code:
public ActionResult DownloadImagefilesAsZip()
{
    string documentUrl = repossitory.GetDocumentsUrlbyId(id); // output: Abc.txt,Pqr.Docx
    if (!string.IsNullOrEmpty(documentUrl))
    {
        string[] str = documentUrl.Split(',');
        if (str.Length > 1)
        {
            using (ZipFile zip = new ZipFile())
            {
                int cnt = 0;
                foreach (string t in str)
                {
                    if (!string.IsNullOrEmpty(t))
                    {
                        Stream s = this.GetFileContent(t);
                        zip.AddEntry("File" + cnt, s);
                    }
                    cnt++;
                }
                zip.Save(outputStream);
                outputStream.Position = 0;
                return File(outputStream, "application/zip", "all.zip");
            }
        }
    }
}

public Stream GetFileContent(string fileName)
{
    CloudBlobContainer container = this.GetCloudBlobContainer();
    CloudBlockBlob blockBlob = container.GetBlockBlobReference(fileName);
    var stream = new MemoryStream();
    blockBlob.DownloadToStream(stream);
    return stream;
}

public CloudBlobContainer GetCloudBlobContainer()
{
    CloudStorageAccount storageAccount = CloudStorageAccount.Parse(ConfigurationManager.AppSettings["StorageConnectionString"].ToString());
    CloudBlobClient blobclient = storageAccount.CreateCloudBlobClient();
    CloudBlobContainer blobcontainer = blobclient.GetContainerReference("Mystorage");
    if (blobcontainer.CreateIfNotExists())
    {
        blobcontainer.SetPermissions(new BlobContainerPermissions { PublicAccess = BlobContainerPublicAccessType.Blob });
    }
    blobcontainer.SetPermissions(new BlobContainerPermissions { PublicAccess = BlobContainerPublicAccessType.Blob });
    return blobcontainer;
}
I want the same files to be downloaded when the user downloads the zip file.
Can anybody help me with this?
I'm not a web dev, but hopefully this will help. This snippet of code is in a method where I download a list of blobs into a zip file archive using a stream. The list of files had the slashes in all directions, so there's code in here to fix this, and to make sure I'm getting the blob reference with the right text (no URL, and no opening slash if the blob is in a "folder").
I suspect your problem is not using a memory stream or a binary writer. Specificity helps sometimes. Good luck.
using (ZipArchive zipFile = ZipFile.Open(outputZipFileName, ZipArchiveMode.Create))
{
    foreach (string oneFile in listOfFiles)
    {
        // Need the filename, complete with relative path. Make it like a file name on disk, with backwards slashes.
        // It must also be relative, so it can't start with a slash. Remove one if found.
        string filenameInArchive = oneFile.Replace(@"/", @"\");
        if (filenameInArchive.Substring(0, 1) == @"\")
            filenameInArchive = filenameInArchive.Substring(1, filenameInArchive.Length - 1);

        // The blob needs slashes in the opposite direction.
        string blobFile = oneFile.Replace(@"\", @"/");
        // Take the first slash off of the (folder + file name) to access it directly in blob storage.
        if (blobFile.Substring(0, 1) == @"/")
            blobFile = oneFile.Substring(1, oneFile.Length - 1);

        var cloudBlockBlob = this.BlobStorageSource.GetBlobRef(blobFile);
        if (!cloudBlockBlob.Exists()) // checking just in case
        {
            // Go to the next file; should probably trace log this.
            // Add the file name with the fixed slashes rather than the raw, messed-up one,
            // so anyone looking at the list of files not found doesn't think it's because
            // the slashes are different.
            filesNotFound.Add(blobFile);
        }
        else
        {
            // The blob listing has files with forward slashes; that's what the zip file requires.
            // Also, the first character should not be a slash (removed above).
            ZipArchiveEntry newEntry = zipFile.CreateEntry(filenameInArchive, CompressionLevel.Optimal);
            using (MemoryStream ms = new MemoryStream())
            {
                // Download the blob to a memory stream.
                cloudBlockBlob.DownloadToStream(ms);
                // Write to the newEntry using a BinaryWriter, copying it 4k at a time.
                using (BinaryWriter entry = new BinaryWriter(newEntry.Open()))
                {
                    // Reset the memory stream's position to 0 and copy it to the zip stream in 4k chunks;
                    // this keeps the process from taking up a ton of memory.
                    ms.Position = 0;
                    byte[] buffer = new byte[4096];
                    bool copying = true;
                    while (copying)
                    {
                        int bytesRead = ms.Read(buffer, 0, buffer.Length);
                        if (bytesRead > 0)
                        {
                            entry.Write(buffer, 0, bytesRead);
                        }
                        else
                        {
                            entry.Flush();
                            copying = false;
                        }
                    }
                } // end using for BinaryWriter
            } // end using for MemoryStream
        } // if file exists in blob storage
    } // end foreach file
} // end of using ZipArchive
There are two things I noticed:
Once you read the blob contents into a stream, you are not resetting that stream's position to 0. Thus all files in your zip are zero bytes.
When calling AddEntry, you may want to specify the name of the blob there instead of "File"+cnt.
Please look at the code below. It's a console app that creates the zip file and writes it on the local file system.
static void SaveBlobsToZip()
{
    string[] str = new string[] { "CodePlex.png", "DocumentDB.png" };
    var account = new CloudStorageAccount(new StorageCredentials(accountName, accountKey), true);
    var blobClient = account.CreateCloudBlobClient();
    var container = blobClient.GetContainerReference("images");
    using (var fs = new FileStream("D:\\output.zip", FileMode.Create))
    {
        fs.Position = 0;
        using (var ms1 = new MemoryStream())
        {
            using (ZipFile zip = new ZipFile())
            {
                int cnt = 0;
                foreach (string t in str)
                {
                    var ms = new MemoryStream();
                    container.GetBlockBlobReference(t).DownloadToStream(ms);
                    ms.Position = 0; // This was missing from your code.
                    zip.AddEntry(t, ms); // You may want to give the name of the blob here.
                    cnt++;
                }
                zip.Save(ms1);
            }
            ms1.Position = 0;
            ms1.CopyTo(fs);
        }
    }
}
UPDATE
Here's the code in the MVC application (though I am not sure it is the best code :) but it works). I modified your code a little bit.
public ActionResult DownloadImagefilesAsZip()
{
    string[] str = new string[] { "CodePlex.png", "DocumentDB.png" }; // repossitory.GetDocumentsUrlbyId(id); // output: Abc.txt,Pqr.Docx
    CloudBlobContainer blobcontainer = GetCloudBlobContainer(); // azureStorageUtility.GetCloudBlobContainer();
    MemoryStream ms1 = new MemoryStream();
    using (ZipFile zip = new ZipFile())
    {
        int cnt = 0;
        foreach (string t in str)
        {
            var ms = new MemoryStream();
            CloudBlockBlob blockBlob = blobcontainer.GetBlockBlobReference(t);
            blockBlob.DownloadToStream(ms);
            ms.Position = 0; // This was missing from your code.
            zip.AddEntry(t, ms); // You may want to give the name of the blob here.
            cnt++;
        }
        zip.Save(ms1);
    }
    ms1.Position = 0;
    return File(ms1, "application/zip", "all.zip");
}
I have seen people using the ICSharpCode.SharpZipLib library; take a look at this piece of code:
public void ZipFilesToResponse(HttpResponseBase response, IEnumerable<Asset> files, string zipFileName)
{
    using (var zipOutputStream = new ZipOutputStream(response.OutputStream))
    {
        zipOutputStream.SetLevel(0); // 0 - store only to 9 - means best compression
        response.BufferOutput = false;
        response.AddHeader("Content-Disposition", "attachment; filename=" + zipFileName);
        response.ContentType = "application/octet-stream";

        foreach (var file in files)
        {
            var entry = new ZipEntry(file.FilenameSlug())
            {
                DateTime = DateTime.Now,
                Size = file.Filesize
            };
            zipOutputStream.PutNextEntry(entry);
            storageService.ReadToStream(file, zipOutputStream);
            response.Flush();
            if (!response.IsClientConnected)
            {
                break;
            }
        }
        zipOutputStream.Finish();
        zipOutputStream.Close();
    }
    response.End();
}
Taken from here generate a Zip file from azure blob storage files
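Note that storageService.ReadToStream is not shown in that snippet. A minimal sketch of such a helper, assuming the classic storage client used elsewhere in this thread and that the asset's slug matches the blob name, might be:

// Hypothetical helper: stream a blob's contents straight into the zip output stream.
public void ReadToStream(Asset file, Stream output)
{
    CloudBlobContainer container = GetCloudBlobContainer();
    CloudBlockBlob blockBlob = container.GetBlockBlobReference(file.FilenameSlug());
    blockBlob.DownloadToStream(output);
}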

Can an EML file open directly in Outlook, instead of being downloaded and then clicked to open in Outlook?

I have an ASP.NET MVC application, and the code below works fine.
The issue is that when navigating to the Email action in the browser, an EML file is downloaded; when we then click on that file, it opens with Outlook.
Is it possible for the EML file to open directly in Outlook when the action is called, instead of being downloaded and then clicked to open?
Code
public async Task<FileStreamResult> Email()
{
    string dummyEmail = "test@localhost.com";
    var mailMessage = new MailMessage();
    mailMessage.From = new MailAddress(dummyEmail);
    mailMessage.To.Add("dejan.caric@gmail.com");
    mailMessage.Subject = "Test subject";
    mailMessage.Body = "Test body";

    // Mark as draft.
    mailMessage.Headers.Add("X-Unsent", "1");

    // Download image and save it as attachment.
    using (var httpClient = new HttpClient())
    {
        var imageStream = await httpClient.GetStreamAsync(new Uri("http://dcaric.com/favicon.ico"));
        mailMessage.Attachments.Add(new Attachment(imageStream, "favicon.ico"));
    }

    var stream = new MemoryStream();
    ToEmlStream(mailMessage, stream, dummyEmail);
    stream.Position = 0;
    return File(stream, "message/rfc822", "test_email.eml");
}

private void ToEmlStream(MailMessage msg, Stream str, string dummyEmail)
{
    using (var client = new SmtpClient())
    {
        var id = Guid.NewGuid();
        var tempFolder = Path.Combine(Path.GetTempPath(), Assembly.GetExecutingAssembly().GetName().Name);
        tempFolder = Path.Combine(tempFolder, "MailMessageToEMLTemp");

        // Create a temp folder to hold just this .eml file so that we can find it easily.
        tempFolder = Path.Combine(tempFolder, id.ToString());
        if (!Directory.Exists(tempFolder))
        {
            Directory.CreateDirectory(tempFolder);
        }

        client.UseDefaultCredentials = true;
        client.DeliveryMethod = SmtpDeliveryMethod.SpecifiedPickupDirectory;
        client.PickupDirectoryLocation = tempFolder;
        client.Send(msg);

        // tempFolder should contain 1 eml file.
        var filePath = Directory.GetFiles(tempFolder).Single();

        // Create a new file and remove all lines that start with 'X-Sender:' or 'From:'.
        string newFile = Path.Combine(tempFolder, "modified.eml");
        using (var sr = new StreamReader(filePath))
        {
            using (var sw = new StreamWriter(newFile))
            {
                string line;
                while ((line = sr.ReadLine()) != null)
                {
                    if (!line.StartsWith("X-Sender:") &&
                        !line.StartsWith("From:") &&
                        // dummy email which is used if receiver address is empty
                        !line.StartsWith("X-Receiver: " + dummyEmail) &&
                        // dummy email which is used if receiver address is empty
                        !line.StartsWith("To: " + dummyEmail))
                    {
                        sw.WriteLine(line);
                    }
                }
            }
        }

        // Stream out the contents.
        using (var fs = new FileStream(newFile, FileMode.Open))
        {
            fs.CopyTo(str);
        }
    }
}
With Chrome you can make it automatically open certain files, once they are downloaded.
.EML should attempt to open in Outlook.
I am not sure about other browsers, but Chrome seemed to be the only one with this option.
It's not a perfect solution, because if someone downloaded an .EML file from another website in Chrome, it would open automatically as well.
I recommend having Chrome dedicated to your Web application.
You can certainly open a local .eml file with Outlook.
But in the context of a web application, you must first download it.

"A generic error occurred in GDI+" error while showing uploaded images

I am using the following code to show an image that has been saved in my database, from my ASP.NET MVC (C#) application:
public ActionResult GetSiteHeaderLogo()
{
    SiteHeader _siteHeader = new SiteHeader();
    Image imgImage = null;
    long userId = Utility.GetUserIdFromSession();
    if (userId > 0)
    {
        _siteHeader = this.siteBLL.GetSiteHeaderLogo(userId);
        if (_siteHeader.Logo != null && _siteHeader.Logo.Length > 0)
        {
            byte[] _imageBytes = _siteHeader.Logo;
            if (_imageBytes != null)
            {
                using (System.IO.MemoryStream imageStream = new System.IO.MemoryStream(_imageBytes))
                {
                    imgImage = Image.FromStream(imageStream);
                }
            }
            string sFileExtension = _siteHeader.FileName.Substring(_siteHeader.FileName.IndexOf('.') + 1,
                _siteHeader.FileName.Length - (_siteHeader.FileName.IndexOf('.') + 1));
            Response.ContentType = Utility.GetContentTypeByExtension(sFileExtension.ToLower());
            Response.Cache.SetCacheability(HttpCacheability.NoCache);
            Response.BufferOutput = false;
            if (imgImage != null)
            {
                ImageFormat _imageFormat = Utility.GetImageFormat(sFileExtension.ToLower());
                imgImage.Save(Response.OutputStream, _imageFormat);
                imgImage.Dispose();
            }
        }
    }
    return new EmptyResult();
}
It works fine when I upload the original image. But when I upload any downloaded image, it throws the following error:
System.Runtime.InteropServices.ExternalException: A generic error occurred in GDI+.
at System.Drawing.Image.Save(Stream stream, ImageCodecInfo encoder, EncoderParameters encoderParams)
at System.Drawing.Image.Save(Stream stream, ImageFormat format)
For example: when I upload the original image, it shows as the logo on my site. I then download that logo from the site, and when I re-upload the same downloaded image, it throws the above error. It seems very weird to me and I am not able to find out why it's happening. Any ideas on this?
I'd guess that your problem lies here:
using (System.IO.MemoryStream imageStream = new System.IO.MemoryStream(_imageBytes))
{
    imgImage = Image.FromStream(imageStream);
}
Because after using .FromStream, the Image owns the stream and might be very upset if you close it. To verify if that's the problem you can just try:
using (System.IO.MemoryStream imageStream = new System.IO.MemoryStream(_imageBytes))
{
    imgImage = new Bitmap(Image.FromStream(imageStream));
}
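Alternatively, a minimal sketch (my own variation, not from the answer above) that simply keeps the backing stream alive for the image's whole lifetime, since GDI+ may read from it lazily when the image is saved:

// Keep the MemoryStream open until the Image has been saved and disposed.
var imageStream = new System.IO.MemoryStream(_imageBytes);
imgImage = Image.FromStream(imageStream);
// ... later, after imgImage.Save(Response.OutputStream, _imageFormat) ...
imgImage.Dispose();
imageStream.Dispose();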
I've found that error usually comes from a file access problem. Sounds obvious, I realize, but double-check that the file path is correct and that the file exists, and also that the IIS process has permissions to that file.
