Recover a single asset's history - Hyperledger

I'm trying to recover the history of a single asset. The model is defined as follows:
namespace org.example.basic

asset SampleAsset identified by assetId {
  o String assetId
  --> SampleParticipant owner
  o String value
}

participant SampleParticipant identified by participantId {
  o String participantId
  o String firstName
  o String lastName
}

transaction GetAssetHistory {
  o String assetId
}

event SampleEvent {
  --> SampleAsset asset
  o String oldValue
  o String newValue
}
I generate a single participant and a new asset referencing that participant, and then I update the asset's value field. Reading about asset updates, I put together the following:
async function getAssetHistory(tx) {
  // How can I get a single asset history using the tx.assetId value??
  let historian = await businessNetworkConnection.getHistorian();
  let historianRecords = await historian.getAll();
  console.log(prettyoutput(historianRecords));
}
When I deploy the .bna file and call the function, I get the following: [screenshot]
In other functions I use the runtime API, but I don't know whether businessNetworkConnection is a runtime API call.
Any idea how I can get a single asset's history?
Is there any example on the internet?
UPDATE
I changed the way I recover a particular asset's history, as follows.
In the JS file:
/**
 * Sample read-only transaction
 * @param {org.example.trading.MyPartHistory} tx
 * @returns {org.example.trading.Trader[]} All transactions
 * @transaction
 */
async function participantHistory(tx) {
  console.log('1');
  const partId = tx.tradeId;
  console.log('2');
  const nativeSupport = tx.nativeSupport;
  // const partRegistry = await getParticipantRegistry('org.example.trading.Trader')
  console.log('3');
  const nativeKey = getNativeAPI().createCompositeKey('Asset:org.example.trading.Trader', [partId]);
  console.log('4');
  const iterator = await getNativeAPI().getHistoryForKey(nativeKey);
  let results = [];
  let res = { done: false };
  while (!res.done) {
    res = await iterator.next();
    if (res && res.value && res.value.value) {
      let val = res.value.value.toString('utf8');
      if (val.length > 0) {
        console.log("#debug val is " + val);
        results.push(JSON.parse(val));
      }
    }
    if (res && res.done) {
      try {
        await iterator.close();
      }
      catch (err) {
        // ignore errors while closing the iterator
      }
    }
  }
  var newArray = [];
  for (const item of results) {
    newArray.push(getSerializer().fromJSON(item));
  }
  console.log("#debug the results to be returned are as follows: ");
  return newArray; // returns something to my NodeJS client (called via REST API)
}
In the model file:
@commit(false)
@returns(Trader[])
transaction MyPartHistory {
  o String tradeId
}
I create a single asset and then update it with other values. But when I call MyPartHistory I get the following message:
Error: Native API not available in web runtime

Use of the native API is only available when you are running your business network in a real Fabric environment; you can't use it in the online Playground environment. You will have to set up a real Fabric environment and then run Playground locally, connecting to that Fabric, in order to test your business network.
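Once the network is deployed to a real Fabric, the read-only transaction can be invoked from a Node.js client with composer-client. Here is a minimal sketch, assuming Composer 0.20+ (where submitTransaction resolves to the @returns value) and a business network card named admin@trading-network; the card name and function name are hypothetical:

const { BusinessNetworkConnection } = require('composer-client');

async function getTraderHistory(tradeId) {
  const connection = new BusinessNetworkConnection();
  await connection.connect('admin@trading-network'); // hypothetical card name
  const factory = connection.getBusinessNetwork().getFactory();
  const tx = factory.newTransaction('org.example.trading', 'MyPartHistory');
  tx.tradeId = tradeId;
  // @commit(false) means nothing is committed; the @returns value comes back here
  const history = await connection.submitTransaction(tx);
  await connection.disconnect();
  return history;
}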

Related

How to fetch emails date-wise faster using the Gmail API in ASP.NET MVC with an OAuth token

Here I have written code that uses the Gmail API to fetch mail with a date filter.
I am able to fetch the MessageId and ThreadId using the first API call. I then put each messageId into a List and, in a foreach loop, send it to the second API to fetch the email body for that messageId. But the process of fetching messages from Gmail is very slow.
public async Task<ActionResult> DisplayEmailWithFilter (string fromDate, string toDate) {
    Message messageObj = new Message ();
    Example exampleObj = new Example ();
    List<GmailMessage> gmailMessagesList = new List<GmailMessage> ();
    GmailMessage gmailMessage = new GmailMessage ();
    var responseData = "";
    // dateFilter string parameter created with date values
    string dateFilter = "in:Inbox after:" + fromDate + " before:" + toDate;
    try {
        // calling Gmail API to get MessageID details by date filter
        client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue (scheme: "Bearer",
            parameter : Session["Token"].ToString ());
        HttpResponseMessage responseMessage = await client.GetAsync ("https://www.googleapis.com/gmail/v1/users/me/messages?q=" + dateFilter);
        if (responseMessage.IsSuccessStatusCode) {
            var data = responseMessage.Content;
        }
        try {
            responseData = responseMessage.Content.ReadAsStringAsync ().Result;
            // this JSON data is converted into a list object
            var msgList = JsonConvert.DeserializeObject<Root1> (responseData);
            // loop for fetching email message data by MessageID
            if (msgList.resultSizeEstimate != 0) {
                foreach (var msgItem in msgList.messages) {
                    messageObj.id = msgItem.id;
                    // calling the API with the MessageID parameter to fetch the respective message data
                    HttpResponseMessage responseMessageList = await client.GetAsync ("https://www.googleapis.com/gmail/v1/users/userId/messages/id?id=" + messageObj.id.ToString () + "&userId=me&format=full");
                    if (responseMessageList.IsSuccessStatusCode) {
                        var dataNew = responseMessageList.Content;
                        var responseDataNew = responseMessageList.Content.ReadAsStringAsync ().Result;
                        // converting the JSON string into an object
                        exampleObj = JsonConvert.DeserializeObject<Example> (responseDataNew);
                        gmailMessage.Body = exampleObj.snippet;
                        // fetching header values, comparing names to get the data
                        for (int i = 1; i < exampleObj.payload.headers.Count; i++) {
                            if (exampleObj.payload.headers[i].name.ToString () == "Date") {
                                gmailMessage.RecievedDate = exampleObj.payload.headers[i].value;
                            }
                            if (exampleObj.payload.headers[i].name.ToString () == "Subject") {
                                gmailMessage.Subject = exampleObj.payload.headers[i].value;
                            }
                            if (exampleObj.payload.headers[i].name.ToString () == "Message-ID") {
                                gmailMessage.SenderEmailID = exampleObj.payload.headers[i].value;
                            }
                            if (exampleObj.payload.headers[i].name.ToString () == "From") {
                                gmailMessage.SenderName = exampleObj.payload.headers[i].value;
                            }
                        }
                        // adding these values to the GmailMessage list object
                        gmailMessagesList.Add (
                            new GmailMessage {
                                Body = exampleObj.snippet,
                                SenderEmailID = gmailMessage.SenderEmailID,
                                RecievedDate = gmailMessage.RecievedDate,
                                SenderName = gmailMessage.SenderName,
                                Subject = gmailMessage.Subject,
                            });
                    }
                }
            }
        } catch (Exception e) {
            string errorMgs = e.Message.ToString ();
            throw;
        }
    } catch (Exception e) {
        string errorMgs = e.Message.ToString ();
        throw;
    }
    return View (gmailMessagesList);
}
I can fetch Gmail emails date-wise, but it takes a long time. How can I improve my code and make the fetching faster?
The query seems like the most you can do. If you know more about those emails, like a specific subject, or that they always come from the same sender, you can try to filter on that too, like you would in the Gmail interface.
Otherwise you are somewhat out of luck: you are limited by what Users.messages.list retrieves.
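For example, if the relevant messages always share a sender or a subject keyword, the date filter can be narrowed with ordinary Gmail search operators. A minimal sketch (the sender address and subject keyword are hypothetical):

// Narrow the date filter with additional Gmail search operators
string dateFilter = "in:Inbox after:" + fromDate + " before:" + toDate
    + " from:billing@example.com subject:invoice";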
If you need to escape the API limitations, retrieving the messages another way may be the way to go. Consider writing a small piece of code that retrieves messages over the IMAP protocol. Several questions on this topic may help you:
Reading Gmail messages using Python IMAP
Reading Gmail Email in Python
How can I get an email message's text content using Python?
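In C#, the same IMAP approach could look like the following sketch, which assumes the MailKit NuGet package and that IMAP access plus an app password (or OAuth2) are set up for the account:

using System;
using MailKit;
using MailKit.Net.Imap;
using MailKit.Search;

class ImapFetchExample {
    static void Main () {
        using (var client = new ImapClient ()) {
            // Gmail IMAP endpoint; requires IMAP enabled and an app password (or OAuth2)
            client.Connect ("imap.gmail.com", 993, useSsl: true);
            client.Authenticate ("user@gmail.com", "<app-password>"); // hypothetical credentials
            client.Inbox.Open (FolderAccess.ReadOnly);
            // Server-side date filter, roughly equivalent to the after:/before: operators
            var query = SearchQuery.DeliveredAfter (new DateTime (2019, 1, 1))
                .And (SearchQuery.DeliveredBefore (new DateTime (2019, 2, 1)));
            foreach (var uid in client.Inbox.Search (query)) {
                var message = client.Inbox.GetMessage (uid);
                Console.WriteLine (message.Date + " | " + message.Subject);
            }
            client.Disconnect (true);
        }
    }
}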

'Create copy of work item' via REST API for Azure DevOps?

I want to 'Create copy of work item', which is available via the UI, ideally via the API.
I know how to create a new work item, but the UI feature that carries over all current parent links / related links and all other details is quite useful.
The API for creating work items is documented here: https://learn.microsoft.com/en-us/rest/api/azure/devops/wit/work%20items/create?view=azure-devops-rest-5.1
Any help would be greatly appreciated.
We cannot just copy a work item, because it contains system fields that we should skip. Additionally, your process may have rules that block some fields during the creation step. Here is a small example to clone a work item through the REST API with https://www.nuget.org/packages/Microsoft.TeamFoundationServer.Client:
using System;
using System.Linq;
using Microsoft.TeamFoundation.WorkItemTracking.WebApi;
using Microsoft.TeamFoundation.WorkItemTracking.WebApi.Models;
using Microsoft.VisualStudio.Services.Common;
using Microsoft.VisualStudio.Services.WebApi;
using Microsoft.VisualStudio.Services.WebApi.Patch;
using Microsoft.VisualStudio.Services.WebApi.Patch.Json;

class Program
{
    static string[] systemFields = { "System.IterationId", "System.ExternalLinkCount", "System.HyperLinkCount", "System.AttachedFileCount", "System.NodeName",
        "System.RevisedDate", "System.ChangedDate", "System.Id", "System.AreaId", "System.AuthorizedAs", "System.State", "System.AuthorizedDate", "System.Watermark",
        "System.Rev", "System.ChangedBy", "System.Reason", "System.WorkItemType", "System.CreatedDate", "System.CreatedBy", "System.History", "System.RelatedLinkCount",
        "System.BoardColumn", "System.BoardColumnDone", "System.BoardLane", "System.CommentCount", "System.TeamProject" }; // system fields to skip

    static string[] customFields = { "Microsoft.VSTS.Common.ActivatedDate", "Microsoft.VSTS.Common.ActivatedBy", "Microsoft.VSTS.Common.ResolvedDate",
        "Microsoft.VSTS.Common.ResolvedBy", "Microsoft.VSTS.Common.ResolvedReason", "Microsoft.VSTS.Common.ClosedDate", "Microsoft.VSTS.Common.ClosedBy",
        "Microsoft.VSTS.Common.StateChangeDate" }; // unneeded fields to skip

    const string ChildRefStr = "System.LinkTypes.Hierarchy-Forward"; // there should be only one parent

    static void Main(string[] args)
    {
        string pat = "<pat>"; // https://learn.microsoft.com/en-us/azure/devops/organizations/accounts/use-personal-access-tokens-to-authenticate
        string orgUrl = "https://dev.azure.com/<org>";
        string newProjectName = "";
        int wiIdToClone = 0;

        VssConnection connection = new VssConnection(new Uri(orgUrl), new VssBasicCredential(string.Empty, pat));
        var witClient = connection.GetClient<WorkItemTrackingHttpClient>();

        CloneWorkItem(witClient, wiIdToClone, newProjectName, true);
    }

    private static void CloneWorkItem(WorkItemTrackingHttpClient witClient, int wiIdToClone, string NewTeamProject = "", bool CopyLink = false)
    {
        WorkItem wiToClone = (CopyLink)
            ? witClient.GetWorkItemAsync(wiIdToClone, expand: WorkItemExpand.Relations).Result
            : witClient.GetWorkItemAsync(wiIdToClone).Result;

        string teamProjectName = (NewTeamProject != "") ? NewTeamProject : wiToClone.Fields["System.TeamProject"].ToString();
        string wiType = wiToClone.Fields["System.WorkItemType"].ToString();

        JsonPatchDocument patchDocument = new JsonPatchDocument();

        foreach (var key in wiToClone.Fields.Keys) // copy fields
            if (!systemFields.Contains(key) && !customFields.Contains(key))
                if (NewTeamProject == "" ||
                    (NewTeamProject != "" && key != "System.AreaPath" && key != "System.IterationPath")) // do not copy area and iteration into another project
                    patchDocument.Add(new JsonPatchOperation()
                    {
                        Operation = Operation.Add,
                        Path = "/fields/" + key,
                        Value = wiToClone.Fields[key]
                    });

        if (CopyLink) // copy links
            foreach (var link in wiToClone.Relations)
            {
                if (link.Rel != ChildRefStr)
                {
                    patchDocument.Add(new JsonPatchOperation()
                    {
                        Operation = Operation.Add,
                        Path = "/relations/-",
                        Value = new
                        {
                            rel = link.Rel,
                            url = link.Url
                        }
                    });
                }
            }

        WorkItem clonedWi = witClient.CreateWorkItemAsync(patchDocument, teamProjectName, wiType).Result;
        Console.WriteLine("New work item: " + clonedWi.Id);
    }
}
Link to full project: https://github.com/ashamrai/AzureDevOpsExtensions/tree/master/CustomNetTasks/CloneWorkItem

Large File upload to ASP.NET Core 3.0 Web API fails due to Request Body Too Large

I have an ASP.NET Core 3.0 Web API endpoint that I have set up to allow me to post large audio files. I have followed the following directions from MS docs to set up the endpoint.
https://learn.microsoft.com/en-us/aspnet/core/mvc/models/file-uploads?view=aspnetcore-3.0#kestrel-maximum-request-body-size
When an audio file is uploaded to the endpoint, it is streamed to an Azure Blob Storage container.
My code works as expected locally.
When I push it to my production server in Azure App Service on Linux, the code does not work and errors with
Unhandled exception in request pipeline: System.Net.Http.HttpRequestException: An error occurred while sending the request. ---> Microsoft.AspNetCore.Server.Kestrel.Core.BadHttpRequestException: Request body too large.
Per advice from the above article, I have incrementally updated the Kestrel configuration with the following:
.ConfigureWebHostDefaults(webBuilder =>
{
    webBuilder.UseKestrel((ctx, options) =>
    {
        var config = ctx.Configuration;
        options.Limits.MaxRequestBodySize = 6000000000;
        options.Limits.MinRequestBodyDataRate =
            new MinDataRate(bytesPerSecond: 100,
                gracePeriod: TimeSpan.FromSeconds(10));
        options.Limits.MinResponseDataRate =
            new MinDataRate(bytesPerSecond: 100,
                gracePeriod: TimeSpan.FromSeconds(10));
        options.Limits.RequestHeadersTimeout =
            TimeSpan.FromMinutes(2);
    }).UseStartup<Startup>();
I also configured FormOptions to accept files up to 6000000000 bytes:
services.Configure<FormOptions>(options =>
{
    options.MultipartBodyLengthLimit = 6000000000;
});
And I also set up the API controller with the following attributes, per advice from the article:
[HttpPost("audio", Name="UploadAudio")]
[DisableFormValueModelBinding]
[GenerateAntiforgeryTokenCookie]
[RequestSizeLimit(6000000000)]
[RequestFormLimits(MultipartBodyLengthLimit = 6000000000)]
Finally, here is the action itself. This giant block of code is not indicative of how I want the code to be written, but I have merged it into one method as part of the debugging exercise.
public async Task<IActionResult> Audio()
{
    if (!MultipartRequestHelper.IsMultipartContentType(Request.ContentType))
    {
        throw new ArgumentException("The media file could not be processed.");
    }
    string mediaId = string.Empty;
    string instructorId = string.Empty;
    try
    {
        // process file first
        KeyValueAccumulator formAccumulator = new KeyValueAccumulator();
        var streamedFileContent = new byte[0];
        var boundary = MultipartRequestHelper.GetBoundary(
            MediaTypeHeaderValue.Parse(Request.ContentType),
            _defaultFormOptions.MultipartBoundaryLengthLimit
        );
        var reader = new MultipartReader(boundary, Request.Body);
        var section = await reader.ReadNextSectionAsync();
        while (section != null)
        {
            var hasContentDispositionHeader = ContentDispositionHeaderValue.TryParse(
                section.ContentDisposition, out var contentDisposition);
            if (hasContentDispositionHeader)
            {
                if (MultipartRequestHelper
                    .HasFileContentDisposition(contentDisposition))
                {
                    streamedFileContent =
                        await FileHelpers.ProcessStreamedFile(section, contentDisposition,
                            _permittedExtensions, _fileSizeLimit);
                }
                else if (MultipartRequestHelper
                    .HasFormDataContentDisposition(contentDisposition))
                {
                    var key = HeaderUtilities.RemoveQuotes(contentDisposition.Name).Value;
                    var encoding = FileHelpers.GetEncoding(section);
                    if (encoding == null)
                    {
                        return BadRequest($"The request could not be processed: Bad Encoding");
                    }
                    using (var streamReader = new StreamReader(
                        section.Body,
                        encoding,
                        detectEncodingFromByteOrderMarks: true,
                        bufferSize: 1024,
                        leaveOpen: true))
                    {
                        // The value length limit is enforced by
                        // MultipartBodyLengthLimit
                        var value = await streamReader.ReadToEndAsync();
                        if (string.Equals(value, "undefined",
                            StringComparison.OrdinalIgnoreCase))
                        {
                            value = string.Empty;
                        }
                        formAccumulator.Append(key, value);
                        if (formAccumulator.ValueCount >
                            _defaultFormOptions.ValueCountLimit)
                        {
                            return BadRequest($"The request could not be processed: Key Count limit exceeded.");
                        }
                    }
                }
            }
            // Drain any remaining section body that hasn't been consumed and
            // read the headers for the next section.
            section = await reader.ReadNextSectionAsync();
        }
        var form = formAccumulator;
        var file = streamedFileContent;
        var results = form.GetResults();
        instructorId = results["instructorId"];
        string title = results["title"];
        string firstName = results["firstName"];
        string lastName = results["lastName"];
        string durationInMinutes = results["durationInMinutes"];
        //mediaId = await AddInstructorAudioMedia(instructorId, firstName, lastName, title, Convert.ToInt32(duration), DateTime.UtcNow, DateTime.UtcNow, file);
        string fileExtension = "m4a";
        // Generate Container Name - InstructorSpecific
        string containerName = $"{firstName[0].ToString().ToLower()}{lastName.ToLower()}-{instructorId}";
        string contentType = "audio/mp4";
        FileType fileType = FileType.audio;
        string authorName = $"{firstName} {lastName}";
        string authorShortName = $"{firstName[0]}{lastName}";
        string description = $"{authorShortName} - {title}";
        long duration = (Convert.ToInt32(durationInMinutes) * 60000);
        // Generate new filename
        string fileName = $"{firstName[0].ToString().ToLower()}{lastName.ToLower()}-{Guid.NewGuid()}";
        DateTime recordingDate = DateTime.UtcNow;
        DateTime uploadDate = DateTime.UtcNow;
        long blobSize = long.MinValue;
        try
        {
            // Update file properties in storage
            Dictionary<string, string> fileProperties = new Dictionary<string, string>();
            fileProperties.Add("ContentType", contentType);
            // update file metadata in storage
            Dictionary<string, string> metadata = new Dictionary<string, string>();
            metadata.Add("author", authorShortName);
            metadata.Add("title", title);
            metadata.Add("description", description);
            metadata.Add("duration", duration.ToString());
            metadata.Add("recordingDate", recordingDate.ToString());
            metadata.Add("uploadDate", uploadDate.ToString());
            var fileNameWExt = $"{fileName}.{fileExtension}";
            var blobContainer = await _cloudStorageService.CreateBlob(containerName, fileNameWExt, "audio");
            try
            {
                MemoryStream fileContent = new MemoryStream(streamedFileContent);
                fileContent.Position = 0;
                using (fileContent)
                {
                    await blobContainer.UploadFromStreamAsync(fileContent);
                }
            }
            catch (StorageException e)
            {
                if (e.RequestInformation.HttpStatusCode == 403)
                {
                    return BadRequest(e.Message);
                }
                else
                {
                    return BadRequest(e.Message);
                }
            }
            try
            {
                foreach (var key in metadata.Keys.ToList())
                {
                    blobContainer.Metadata.Add(key, metadata[key]);
                }
                await blobContainer.SetMetadataAsync();
            }
            catch (StorageException e)
            {
                return BadRequest(e.Message);
            }
            blobSize = await StorageUtils.GetBlobSize(blobContainer);
        }
        catch (StorageException e)
        {
            return BadRequest(e.Message);
        }
        Media media = Media.Create(string.Empty, instructorId, authorName, fileName, fileType, fileExtension, recordingDate, uploadDate, ContentDetails.Create(title, description, duration, blobSize, 0, new List<string>()), StateDetails.Create(StatusType.STAGED, DateTime.MinValue, DateTime.UtcNow, DateTime.MaxValue), Manifest.Create(new Dictionary<string, string>()));
        // upload to MongoDB
        if (media != null)
        {
            var mapper = new Mapper(_mapperConfiguration);
            var dao = mapper.Map<ContentDAO>(media);
            try
            {
                await _db.Content.InsertOneAsync(dao);
            }
            catch (Exception)
            {
                mediaId = string.Empty;
            }
            mediaId = dao.Id.ToString();
        }
        else
        {
            // metadata wasn't stored; remove the blob
            await _cloudStorageService.DeleteBlob(containerName, fileName, "audio");
            return BadRequest($"An issue occurred during media upload: rolling back storage change");
        }
        if (string.IsNullOrEmpty(mediaId))
        {
            return BadRequest($"Could not add instructor media");
        }
    }
    catch (Exception ex)
    {
        return BadRequest(ex.Message);
    }
    var result = new { MediaId = mediaId, InstructorId = instructorId };
    return Ok(result);
}
I reiterate: this all works great locally. I do not run it in IIS Express; I run it as a console app.
I submit large audio files via my SPA app and Postman, and it works perfectly.
I am deploying this code to an Azure App Service on Linux (as a Basic B1).
Since the code works in my local development environment, I am at a loss as to what my next steps are. I have refactored this code a few times, but I suspect that it's environment related.
I cannot find anywhere that mentions that the level of App Service Plan is the culprit, so before I go out spending more money I wanted to see if anyone here had encountered this challenge and could provide advice.
UPDATE: I attempted upgrading to a Production App Service Plan to see if there was an undocumented gate for incoming traffic. Upgrading didn't work either.
Thanks in advance.
-A
Currently, as of 11/2019, there is a limitation with Azure App Service for Linux: its CORS functionality is enabled by default and cannot be disabled, AND it has a file size limitation that doesn't appear to be overridden by any of the published Kestrel configurations. The solution is to move the Web API app to an Azure App Service for Windows, where it works as expected.
I am sure there is some way to get around it if you know the magic combination of configurations, server settings, and CLI commands, but I need to move on with development.

How to do multiple updates for the same asset in a transaction

I want to run multiple update commands against the same asset in a single transaction, based on conditions.
This is my sample CTO file:
asset SampleAsset identified by id {
  o String id
  o Integer value
  o Integer value2
  o Integer value3
}

transaction SampleTransaction {
  o Integer value
}
This is my sample JS file:
async function sampleTransaction(tx) {
  var value = tx.value;
  await updateValue(value);
  if (value < MAX) { // MAX = 10000
    const assetRegistry1 = await getAssetRegistry('org.example.basic.SampleAsset');
    var data1 = await assetRegistry1.get("1");
    data1.value2 = MAX;
    await assetRegistry1.update(data1); // UpdateNo2
  }
  else {
    const assetRegistry1 = await getAssetRegistry('org.example.basic.SampleAsset');
    var data1 = await assetRegistry1.get("1");
    data1.value3 = value;
    await assetRegistry1.update(data1); // UpdateNo2
  }
}

async function updateValue(value) {
  const assetRegistry = await getAssetRegistry('org.example.basic.SampleAsset');
  var data = await assetRegistry.get("1");
  data.value = value;
  await assetRegistry.update(data); // UpdateNo1
}
With the above code, only the latest update (UpdateNo2) makes changes to the asset. What about the first update?
In Hyperledger Fabric, during proposal simulation, any writes made to keys cannot be read back. Hyperledger Composer is subject to that same limitation, both when it is used with a real Fabric implementation and when it is used in simulation mode (for example, when using the web connection in composer-playground).
This is the problem you are seeing in your TP function. Every time you perform
let data = await assetRegistry.get("1");
in the same transaction, you are getting the original asset; you don't get a version of the asset that was updated earlier in the transaction. So what is finally put into the world state when the transaction is committed will be only the last change you made, which is why only UpdateNo2 is being seen.
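To make the effect concrete, here is a minimal sketch of what the TP function sees inside one transaction (the asset id and values are hypothetical):

// Committed state: asset "1" has value = 5
let first = await assetRegistry.get("1");   // reads value = 5
first.value = 6;
await assetRegistry.update(first);          // staged write, not yet committed
let second = await assetRegistry.get("1");  // still reads value = 5, not 6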
Try something like this (Note I've not tested it)
async function sampleTransaction(tx) {
  const assetRegistry = await getAssetRegistry('org.example.basic.SampleAsset');
  const data = await assetRegistry.get("1");
  const value = tx.value;
  updateValue(data, value);
  if (value < MAX) { // MAX = 10000
    data.value2 = MAX;
  }
  else {
    data.value3 = value;
  }
  await assetRegistry.update(data);
}

function updateValue(data, value) {
  data.value = value;
}
(Note: I have left the function structure in just to show the equivalent, but updateValue can easily be removed.)

List all IndexedDB databases for one host in a Firefox add-on

I figured that if the devtools can list all created IndexedDB databases, then there should be an API to retrieve them...?
Does anyone know how I can get a list of the names with the help of the Firefox SDK?
I did dig into the code and looked at the source. Unfortunately there wasn't any convenient API that would pull out all the databases for one host.
The way they did it was to look around in the user profile folder for .sqlite files and run a SQL query against each one (multiple times, in case there is an ongoing transaction) to ask for the database name.
It came down to this piece of code:
// stripped-down version of: https://dxr.mozilla.org/mozilla-central/source/devtools/server/actors/storage.js
/* This Source Code Form is subject to the terms of the Mozilla Public
 * License, v. 2.0. If a copy of the MPL was not distributed with this
 * file, You can obtain one at http://mozilla.org/MPL/2.0/. */
"use strict";
const {async} = require("resource://gre/modules/devtools/async-utils");
const { setTimeout } = require("sdk/timers");
const promise = require("sdk/core/promise");
// A RegExp for characters that cannot appear in a file/directory name. This is
// used to sanitize the host name for indexed db to lookup whether the file is
// present in <profileDir>/storage/default/ location
const illegalFileNameCharacters = [
  "[",
  // Control characters \001 to \036
  "\\x00-\\x24",
  // Special characters
  "/:*?\\\"<>|\\\\",
  "]"
].join("");
const ILLEGAL_CHAR_REGEX = new RegExp(illegalFileNameCharacters, "g");
var OS = require("resource://gre/modules/osfile.jsm").OS;
var Sqlite = require("resource://gre/modules/Sqlite.jsm");

/**
 * An async method equivalent to setTimeout but using Promises
 *
 * @param {number} time
 *        The wait time in milliseconds.
 */
function sleep(time) {
  let deferred = promise.defer();
  setTimeout(() => {
    deferred.resolve(null);
  }, time);
  return deferred.promise;
}

var indexedDBHelpers = {
  /**
   * Fetches all the databases and their metadata for the given `host`.
   */
  getDBNamesForHost: async(function*(host) {
    let sanitizedHost = indexedDBHelpers.getSanitizedHost(host);
    let directory = OS.Path.join(OS.Constants.Path.profileDir, "storage",
                                 "default", sanitizedHost, "idb");
    let exists = yield OS.File.exists(directory);
    if (!exists && host.startsWith("about:")) {
      // try the moz-safe-about directory
      sanitizedHost = indexedDBHelpers.getSanitizedHost("moz-safe-" + host);
      directory = OS.Path.join(OS.Constants.Path.profileDir, "storage",
                               "permanent", sanitizedHost, "idb");
      exists = yield OS.File.exists(directory);
    }
    if (!exists) {
      return [];
    }
    let names = [];
    let dirIterator = new OS.File.DirectoryIterator(directory);
    try {
      yield dirIterator.forEach(file => {
        // Skip directories.
        if (file.isDir) {
          return null;
        }
        // Skip any non-sqlite files.
        if (!file.name.endsWith(".sqlite")) {
          return null;
        }
        return indexedDBHelpers.getNameFromDatabaseFile(file.path).then(name => {
          if (name) {
            names.push(name);
          }
          return null;
        });
      });
    } finally {
      dirIterator.close();
    }
    return names;
  }),

  /**
   * Removes any illegal characters from the host name to make it a valid file
   * name.
   */
  getSanitizedHost: function(host) {
    return host.replace(ILLEGAL_CHAR_REGEX, "+");
  },

  /**
   * Retrieves the proper indexed db database name from the provided .sqlite
   * file location.
   */
  getNameFromDatabaseFile: async(function*(path) {
    let connection = null;
    let retryCount = 0;
    // Content pages might have an open transaction for the same indexed db
    // which this sqlite file belongs to. In that case sqlite.openConnection
    // will throw. Thus we retry for some time to see if the lock is removed.
    while (!connection && retryCount++ < 25) {
      try {
        connection = yield Sqlite.openConnection({ path: path });
      } catch (ex) {
        // Continuously retrying is overkill; wait 100ms before the next try
        yield sleep(100);
      }
    }
    if (!connection) {
      return null;
    }
    let rows = yield connection.execute("SELECT name FROM database");
    if (rows.length != 1) {
      return null;
    }
    let name = rows[0].getResultByName("name");
    yield connection.close();
    return name;
  })
};
module.exports = indexedDBHelpers.getDBNamesForHost;
If anyone wants to use this, here is how you would use it:
var getDBNamesForHost = require("./getDBNamesForHost");
getDBNamesForHost("http://example.com").then(names => {
  console.log(names);
});
I think it would be cool if someone were to build an add-on that adds indexedDB.mozGetDatabaseNames to work the same way as indexedDB.webkitGetDatabaseNames. I'm not doing that... I'll leave it up to you if you want to. It would be a great dev tool to have ;)
