How to do multiple updates to the same asset in a transaction - Hyperledger

I want to perform multiple update commands on the same asset in a single transaction, based on conditions.
This is my sample CTO File:
asset SampleAsset identified by id {
  o String id
  o Integer value
  o Integer value2
  o Integer value3
}

transaction SampleTransaction {
  o Integer value
}
This is my sample JS file:
async function sampleTransaction(tx) {
    var value = tx.value;
    await updateValue(value);
    if (value < MAX) { // MAX = 10000
        const assetRegistry1 = await getAssetRegistry('org.example.basic.SampleAsset');
        var data1 = await assetRegistry1.get("1");
        data1.value2 = MAX;
        await assetRegistry1.update(data1); // UpdateNo2
    }
    else {
        const assetRegistry1 = await getAssetRegistry('org.example.basic.SampleAsset');
        var data1 = await assetRegistry1.get("1");
        data1.value3 = value;
        await assetRegistry1.update(data1); // UpdateNo2
    }
}

async function updateValue(value) {
    const assetRegistry = await getAssetRegistry('org.example.basic.SampleAsset');
    var data = await assetRegistry.get("1");
    data.value = value;
    await assetRegistry.update(data); // UpdateNo1
}
async function updateValue(value){
const assetRegistry = await getAssetRegistry('org.example.basic.SampleAsset');
var data = await assetRegistry.get("1");
data.value = value;
await assetRegistry.update(data); //UpdateNo1
}
With the above code, only the latest update (UpdateNo2) makes changes to the asset. What about the first update?

In Hyperledger Fabric, any writes made to keys during proposal simulation cannot be read back. Hyperledger Composer is subject to that same limitation, both when it is used with a real Fabric implementation and when it is used in simulation mode (for example, when using the web connection in composer-playground).
This is the problem you are seeing in your TP function. Every time you perform
let data = await assetRegistry.get("1");
in the same transaction, you get the original asset, not a version of the asset that was updated earlier in the transaction. So what is finally written to the world state when the transaction is committed is only the last change you made, which is why only UpdateNo2 is being seen.
Try something like this (note: I've not tested it):
async function sampleTransaction(tx) {
    const assetRegistry = await getAssetRegistry('org.example.basic.SampleAsset');
    const data = await assetRegistry.get("1");
    const value = tx.value;
    updateValue(data, value);
    if (value < MAX) { // MAX = 10000
        data.value2 = MAX;
    }
    else {
        data.value3 = value;
    }
    await assetRegistry.update(data);
}

function updateValue(data, value) {
    data.value = value;
}
(Note: I have left the function structure in just to show the equivalent, but updateValue can easily be removed.)
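Since updateValue is trivial, the same transaction processor can be written without it. A minimal inlined sketch of the answer above, under the same assumptions and equally untested:
async function sampleTransaction(tx) {
    const assetRegistry = await getAssetRegistry('org.example.basic.SampleAsset');
    const data = await assetRegistry.get("1");
    data.value = tx.value;             // UpdateNo1, applied in memory only
    if (tx.value < MAX) {              // MAX = 10000
        data.value2 = MAX;
    } else {
        data.value3 = tx.value;
    }
    await assetRegistry.update(data);  // one write carries all the changes
}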

Related

How to run 2 processes and exchange input/output between them multiple times?

I'm trying to do something like this:
Future<String> getOutAndAnswer(testcase) async {
  Process python = await Process.start('python', ['tasks/histogram/run.py']);
  Process java = await Process.start('java', ['solutions/Histogram.java']);
  String results = "";
  for (int i = 0; i < testcase; i++) {
    final String out = await python.stdout.transform(utf8.decoder).first;
    java.stdin.writeln(out);
    final String answer = await java.stdout.transform(utf8.decoder).first;
    python.stdin.writeln(answer);
    results += "($out, $answer)";
  }
  return results;
}
Basically, the Python program is responsible for generating the input for each test case; the Java program then takes the input and returns the answer, which is sent back to the Python program to check whether it's correct, and so on for every test case.
But when I try to use the above code I get an error saying I've already listened to the stream once:
Exception has occurred.
StateError (Bad state: Stream has already been listened to.)
Python program:
import os

CASE_DIR = os.path.join(os.path.dirname(__file__), "cases")

test_cases = next(os.walk(CASE_DIR))[2]
print(len(test_cases))
for case in sorted(test_cases):
    with open(os.path.join(CASE_DIR, case), 'r') as f:
        print(f.readline(), end='', flush=True)
        f.readline()
        expected_output = f.readline()
        user_output = input()
        if expected_output != user_output:
            raise ValueError("Wrong answer!")
print("EXIT", flush=True)
Java program:
import java.util.Scanner;

public class Histogram {
    public static void main(String[] args) {
        Scanner scanner = new Scanner(System.in);
        int t = scanner.nextInt();
        scanner.nextLine(); // consume the newline left behind by nextInt()
        for (int i = 0; i < t; i++) {
            String input = scanner.nextLine();
            String answer = calculateAnswer(input);
            System.out.println(answer);
        }
    }
}
Your issue is with .first, which listens to the stream, takes the first element, and then immediately cancels the subscription; the next call to .first tries to listen to the same single-subscription stream a second time. See the documentation here: https://api.dart.dev/stable/2.17.3/dart-async/Stream/first.html
You should instead listen once and define an onData callback to perform the steps. See the documentation for .listen() here: https://api.dart.dev/stable/2.17.3/dart-async/Stream/listen.html
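For illustration, a persistent subscription looks roughly like this (a sketch only; the helper name is made up, and driving a strict request/response loop from an onData callback needs extra state, which is why the iterator approach below is a better fit here):
import 'dart:convert';
import 'dart:io';

void pipePythonToJava(Process python, Process java) {
  // One subscription for the lifetime of the process; onData fires per line.
  python.stdout
      .transform(utf8.decoder)
      .transform(const LineSplitter())
      .listen((String line) {
        java.stdin.writeln(line);
      });
}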
You could try wrapping the stdout streams in StreamIterator<String>. You will have to give it a try to verify, but I think this is what you are looking for.
Future<String> getOutAndAnswer(int testcase) async {
  Process python = await Process.start('python', ['tasks/histogram/run.py']);
  Process java = await Process.start('java', ['solutions/Histogram.java']);
  String results = "";
  StreamIterator<String> pythonIterator = StreamIterator(
      python.stdout.transform(utf8.decoder).transform(LineSplitter()));
  StreamIterator<String> javaIterator = StreamIterator(
      java.stdout.transform(utf8.decoder).transform(LineSplitter()));
  for (int i = 0; i < testcase; i++) {
    if (await pythonIterator.moveNext()) {
      final String out = pythonIterator.current;
      if (out == 'EXIT') {
        break;
      }
      java.stdin.writeln(out);
      if (await javaIterator.moveNext()) {
        final String answer = javaIterator.current;
        python.stdin.writeln(answer);
        results += "($out, $answer)";
      }
    }
  }
  await pythonIterator.cancel();
  await javaIterator.cancel();
  return results;
}
You may need to add the following imports:
import 'dart:async';
import 'dart:convert';
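For completeness, a minimal caller might look like this (the count of 10 is a placeholder; it should match however many test cases your Python generator produces):
import 'dart:async';

Future<void> main() async {
  final results = await getOutAndAnswer(10);
  print(results);
}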

Recover single asset history

I'm trying to recover the history of a single asset. The model is defined as follows:
namespace org.example.basic

asset SampleAsset identified by assetId {
  o String assetId
  --> SampleParticipant owner
  o String value
}

participant SampleParticipant identified by participantId {
  o String participantId
  o String firstName
  o String lastName
}

transaction GetAssetHistory {
  o String assetId
}

event SampleEvent {
  --> SampleAsset asset
  o String oldValue
  o String newValue
}
I generate a single participant and a new asset referencing that participant, and then update the asset's value field. To read the asset's history I tried the following:
async function getAssetHistory(tx) {
    // How can I get a single asset's history using the tx.assetId value??
    let historian = await businessNetworkConnection.getHistorian();
    let historianRecords = await historian.getAll();
    console.log(prettyoutput(historianRecords));
}
When I deploy the .bna and call the function, I get the full list of historian records rather than the history of one asset (screenshot omitted).
In other functions I use the runtime API, but I don't know whether businessNetworkConnection is a runtime API call.
Any idea how I can get a single asset's history?
Any examples on the internet?
***************** UPDATE
I changed the way I recover a particular asset's history, as follows:
In the JS file:
/**
 * Sample read-only transaction
 * @param {org.example.trading.MyPartHistory} tx
 * @returns {org.example.trading.Trader[]} All transactions
 * @transaction
 */
async function participantHistory(tx) {
    console.log('1');
    const partId = tx.tradeId;
    console.log('2');
    const nativeSupport = tx.nativeSupport;
    // const partRegistry = await getParticipantRegistry('org.example.trading.Trader')
    console.log('3');
    const nativeKey = getNativeAPI().createCompositeKey('Asset:org.example.trading.Trader', [partId]);
    console.log('4');
    const iterator = await getNativeAPI().getHistoryForKey(nativeKey);
    let results = [];
    let res = { done: false };
    while (!res.done) {
        res = await iterator.next();
        if (res && res.value && res.value.value) {
            let val = res.value.value.toString('utf8');
            if (val.length > 0) {
                console.log("#debug val is " + val);
                results.push(JSON.parse(val));
            }
        }
        if (res && res.done) {
            try {
                iterator.close();
            }
            catch (err) {
            }
        }
    }
    var newArray = [];
    for (const item of results) {
        newArray.push(getSerializer().fromJSON(item));
    }
    console.log("#debug the results to be returned are as follows: ");
    return newArray; // returns something to my NodeJS client (called via REST API)
}
In the model file:
@commit(false)
@returns(Trader[])
transaction MyPartHistory {
  o String tradeId
}
I create a single asset and update it with other values. But when I call MyPartHistory I get the following message:
Error: Native API not available in web runtime
Use of the native API is only available when you are running your business network in a real Fabric environment. You can't use it in the online playground environment. You will have to set up a real Fabric environment and then run Playground locally, connecting to that Fabric, in order to test your business network.

Large file upload to ASP.NET Core 3.0 Web API fails due to Request Body Too Large

I have an ASP.NET Core 3.0 Web API endpoint that I have set up to allow me to post large audio files. I followed the directions from the MS docs below to set up the endpoint.
https://learn.microsoft.com/en-us/aspnet/core/mvc/models/file-uploads?view=aspnetcore-3.0#kestrel-maximum-request-body-size
When an audio file is uploaded to the endpoint, it is streamed to an Azure Blob Storage container.
My code works as expected locally.
When I push it to my production server in Azure App Service on Linux, the code does not work and errors with
Unhandled exception in request pipeline: System.Net.Http.HttpRequestException: An error occurred while sending the request. ---> Microsoft.AspNetCore.Server.Kestrel.Core.BadHttpRequestException: Request body too large.
Per advice from the above article, I have incrementally updated the Kestrel configuration with the following:
.ConfigureWebHostDefaults(webBuilder =>
{
    webBuilder.UseKestrel((ctx, options) =>
    {
        var config = ctx.Configuration;
        options.Limits.MaxRequestBodySize = 6000000000;
        options.Limits.MinRequestBodyDataRate =
            new MinDataRate(bytesPerSecond: 100,
                gracePeriod: TimeSpan.FromSeconds(10));
        options.Limits.MinResponseDataRate =
            new MinDataRate(bytesPerSecond: 100,
                gracePeriod: TimeSpan.FromSeconds(10));
        options.Limits.RequestHeadersTimeout =
            TimeSpan.FromMinutes(2);
    }).UseStartup<Startup>();
});
I also configured FormOptions to accept files up to 6,000,000,000 bytes:
services.Configure<FormOptions>(options =>
{
    options.MultipartBodyLengthLimit = 6000000000;
});
I also set up the API controller with the following attributes, per advice from the article:
[HttpPost("audio", Name="UploadAudio")]
[DisableFormValueModelBinding]
[GenerateAntiforgeryTokenCookie]
[RequestSizeLimit(6000000000)]
[RequestFormLimits(MultipartBodyLengthLimit = 6000000000)]
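For anyone reproducing this: [DisableFormValueModelBinding] is not a built-in attribute; it comes from the linked MS docs sample. Reconstructed from that sample (treat it as a sketch, not verbatim):
using System;
using Microsoft.AspNetCore.Mvc.Filters;
using Microsoft.AspNetCore.Mvc.ModelBinding;

[AttributeUsage(AttributeTargets.Class | AttributeTargets.Method)]
public class DisableFormValueModelBindingAttribute : Attribute, IResourceFilter
{
    public void OnResourceExecuting(ResourceExecutingContext context)
    {
        // Strip the form value providers so model binding doesn't try to
        // buffer the multipart body before the action can stream it itself.
        var factories = context.ValueProviderFactories;
        factories.RemoveType<FormValueProviderFactory>();
        factories.RemoveType<FormFileValueProviderFactory>();
        factories.RemoveType<JQueryFormValueProviderFactory>();
    }

    public void OnResourceExecuted(ResourceExecutedContext context)
    {
    }
}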
Finally, here is the action itself. This giant block of code is not indicative of how I want the code to be written but I have merged it into one method as part of the debugging exercise.
public async Task<IActionResult> Audio()
{
    if (!MultipartRequestHelper.IsMultipartContentType(Request.ContentType))
    {
        throw new ArgumentException("The media file could not be processed.");
    }

    string mediaId = string.Empty;
    string instructorId = string.Empty;
    try
    {
        // process file first
        KeyValueAccumulator formAccumulator = new KeyValueAccumulator();
        var streamedFileContent = new byte[0];

        var boundary = MultipartRequestHelper.GetBoundary(
            MediaTypeHeaderValue.Parse(Request.ContentType),
            _defaultFormOptions.MultipartBoundaryLengthLimit
        );
        var reader = new MultipartReader(boundary, Request.Body);
        var section = await reader.ReadNextSectionAsync();

        while (section != null)
        {
            var hasContentDispositionHeader = ContentDispositionHeaderValue.TryParse(
                section.ContentDisposition, out var contentDisposition);
            if (hasContentDispositionHeader)
            {
                if (MultipartRequestHelper
                    .HasFileContentDisposition(contentDisposition))
                {
                    streamedFileContent =
                        await FileHelpers.ProcessStreamedFile(section, contentDisposition,
                            _permittedExtensions, _fileSizeLimit);
                }
                else if (MultipartRequestHelper
                    .HasFormDataContentDisposition(contentDisposition))
                {
                    var key = HeaderUtilities.RemoveQuotes(contentDisposition.Name).Value;
                    var encoding = FileHelpers.GetEncoding(section);
                    if (encoding == null)
                    {
                        return BadRequest($"The request could not be processed: Bad Encoding");
                    }

                    using (var streamReader = new StreamReader(
                        section.Body,
                        encoding,
                        detectEncodingFromByteOrderMarks: true,
                        bufferSize: 1024,
                        leaveOpen: true))
                    {
                        // The value length limit is enforced by
                        // MultipartBodyLengthLimit
                        var value = await streamReader.ReadToEndAsync();
                        if (string.Equals(value, "undefined",
                            StringComparison.OrdinalIgnoreCase))
                        {
                            value = string.Empty;
                        }
                        formAccumulator.Append(key, value);
                        if (formAccumulator.ValueCount >
                            _defaultFormOptions.ValueCountLimit)
                        {
                            return BadRequest($"The request could not be processed: Key Count limit exceeded.");
                        }
                    }
                }
            }
            // Drain any remaining section body that hasn't been consumed and
            // read the headers for the next section.
            section = await reader.ReadNextSectionAsync();
        }

        var form = formAccumulator;
        var file = streamedFileContent;
        var results = form.GetResults();

        instructorId = results["instructorId"];
        string title = results["title"];
        string firstName = results["firstName"];
        string lastName = results["lastName"];
        string durationInMinutes = results["durationInMinutes"];
        //mediaId = await AddInstructorAudioMedia(instructorId, firstName, lastName, title, Convert.ToInt32(duration), DateTime.UtcNow, DateTime.UtcNow, file);

        string fileExtension = "m4a";

        // Generate Container Name - InstructorSpecific
        string containerName = $"{firstName[0].ToString().ToLower()}{lastName.ToLower()}-{instructorId}";
        string contentType = "audio/mp4";
        FileType fileType = FileType.audio;
        string authorName = $"{firstName} {lastName}";
        string authorShortName = $"{firstName[0]}{lastName}";
        string description = $"{authorShortName} - {title}";
        long duration = (Convert.ToInt32(durationInMinutes) * 60000);

        // Generate new filename
        string fileName = $"{firstName[0].ToString().ToLower()}{lastName.ToLower()}-{Guid.NewGuid()}";
        DateTime recordingDate = DateTime.UtcNow;
        DateTime uploadDate = DateTime.UtcNow;
        long blobSize = long.MinValue;
        try
        {
            // Update file properties in storage
            Dictionary<string, string> fileProperties = new Dictionary<string, string>();
            fileProperties.Add("ContentType", contentType);

            // update file metadata in storage
            Dictionary<string, string> metadata = new Dictionary<string, string>();
            metadata.Add("author", authorShortName);
            metadata.Add("title", title);
            metadata.Add("description", description);
            metadata.Add("duration", duration.ToString());
            metadata.Add("recordingDate", recordingDate.ToString());
            metadata.Add("uploadDate", uploadDate.ToString());

            var fileNameWExt = $"{fileName}.{fileExtension}";
            var blobContainer = await _cloudStorageService.CreateBlob(containerName, fileNameWExt, "audio");
            try
            {
                MemoryStream fileContent = new MemoryStream(streamedFileContent);
                fileContent.Position = 0;
                using (fileContent)
                {
                    await blobContainer.UploadFromStreamAsync(fileContent);
                }
            }
            catch (StorageException e)
            {
                if (e.RequestInformation.HttpStatusCode == 403)
                {
                    return BadRequest(e.Message);
                }
                else
                {
                    return BadRequest(e.Message);
                }
            }

            try
            {
                foreach (var key in metadata.Keys.ToList())
                {
                    blobContainer.Metadata.Add(key, metadata[key]);
                }
                await blobContainer.SetMetadataAsync();
            }
            catch (StorageException e)
            {
                return BadRequest(e.Message);
            }

            blobSize = await StorageUtils.GetBlobSize(blobContainer);
        }
        catch (StorageException e)
        {
            return BadRequest(e.Message);
        }

        Media media = Media.Create(string.Empty, instructorId, authorName, fileName, fileType, fileExtension, recordingDate, uploadDate, ContentDetails.Create(title, description, duration, blobSize, 0, new List<string>()), StateDetails.Create(StatusType.STAGED, DateTime.MinValue, DateTime.UtcNow, DateTime.MaxValue), Manifest.Create(new Dictionary<string, string>()));

        // upload to MongoDB
        if (media != null)
        {
            var mapper = new Mapper(_mapperConfiguration);
            var dao = mapper.Map<ContentDAO>(media);
            try
            {
                await _db.Content.InsertOneAsync(dao);
            }
            catch (Exception)
            {
                mediaId = string.Empty;
            }
            mediaId = dao.Id.ToString();
        }
        else
        {
            // metadata wasn't stored, remove blob
            await _cloudStorageService.DeleteBlob(containerName, fileName, "audio");
            return BadRequest($"An issue occurred during media upload: rolling back storage change");
        }

        if (string.IsNullOrEmpty(mediaId))
        {
            return BadRequest($"Could not add instructor media");
        }
    }
    catch (Exception ex)
    {
        return BadRequest(ex.Message);
    }

    var result = new { MediaId = mediaId, InstructorId = instructorId };
    return Ok(result);
}
I reiterate, this all works great locally. I do not run it in IISExpress, I run it as a console app.
I submit large audio files via my SPA app and Postman and it works perfectly.
I am deploying this code to an Azure App Service on Linux (as a Basic B1).
Since the code works in my local development environment, I am at a loss as to what my next steps are. I have refactored this code a few times, but I suspect the problem is environment-related.
I cannot find anything that says the App Service Plan tier is the culprit, so before I go out spending more money I wanted to see if anyone here has encountered this challenge and could provide advice.
UPDATE: I attempted upgrading to a Production App Service Plan to see if there was an undocumented gate for incoming traffic. Upgrading didn't work either.
Thanks in advance.
-A
Currently, as of 11/2019, there is a limitation with the Azure App Service for Linux. Its CORS functionality is enabled by default and cannot be disabled, AND it has a file size limitation that doesn't appear to be overridden by any of the published Kestrel configurations. The solution is to move the Web API app to an Azure App Service for Windows, where it works as expected.
I am sure there is some way to get around it if you know the magic combination of configurations, server settings, and CLI commands, but I need to move on with development.

Launch many TypeORM transactions in a loop (JavaScript)

In my Electron app I would like to inject data (like fixtures) when the app launches.
I use the typeorm library to manage my SQLite3 database connection.
I created a JSON file that represents TypeORM entities, and I would like to persist all of them in my DB with typeorm. For that, using a transaction seems more efficient.
I tried two different approaches, but the result is the same and I don't understand why. The error message is:
Error: Transaction already started for the given connection, commit current transaction before starting a new one
My first implementation:
async setAll(entity, data)
{
    let connection = await this.init()
    const queryRunner = connection.createQueryRunner()
    await queryRunner.connect()
    for (const [key, value] of Object.entries(data))
    {
        await typeorm.getManager().transaction(transactionalEntityManager =>
        {
        })
    }
}
My second implementation:
async setAll(entity, data)
{
    let connection = await this.init()
    const queryRunner = connection.createQueryRunner()
    await queryRunner.connect()
    for (const [key, value] of Object.entries(data))
    {
        let genre1 = new Genre()
        genre1.name = 'toto'
        genre1.identifier = 'gt'
        genre1.logo = ''
        genre1.isActivate = false
        await queryRunner.startTransaction()
        await queryRunner.manager.save(genre1)
        await queryRunner.commitTransaction()
        await queryRunner.release()
    }
}
NB: The second implementation correctly persists the first object, but not the others.
How can I manage many TypeORM transactions created in a loop to persist a lot of data?
Start one transaction before the loop, save every entity inside it, and commit (or roll back) once at the end, releasing the query runner only when you are done:
async setAll(entity, data) {
    let connection = await this.init()
    const queryRunner = connection.createQueryRunner()
    await queryRunner.connect()
    await queryRunner.startTransaction()
    try {
        for (const [key, value] of Object.entries(data)) {
            let genre1 = new Genre()
            genre1.name = 'toto'
            genre1.identifier = 'gt'
            genre1.logo = ''
            genre1.isActivate = false
            const newGenre = queryRunner.manager.create(Genre, genre1)
            await queryRunner.manager.save(newGenre)
        }
        await queryRunner.commitTransaction()
    } catch {
        await queryRunner.rollbackTransaction()
    } finally {
        await queryRunner.release()
    }
}

Run a Firebase Cloud Function when a key is a specific value

const functions = require('firebase-functions');
var IAPVerifier = require('iap_verifier');
const admin = require('firebase-admin');
admin.initializeApp(functions.config().firebase);

exports.verifyReceipt = functions.database.ref('/Customers/{uid}/updateReceipt')
    .onWrite(event => {
        const uid = event.params.uid;
        var receipt = event.data.val();
        (strReceipt).toString('base64');
        var client = new IAPVerifier('IAP_secretkey');
        client.verifyAutoRenewReceipt(receipt, true, function(valid, msg, data) {
            console.log(' RECEIPT');
            if (valid) {
                console.log('VALID RECEIPT');
                console.log('msg:' + msg);
                var strData = JSON.stringify(data);
                console.log('data"' + strData);
                const newReceiptRef = admin.database().ref(`/Customers/${uid}/`);
                newReceiptRef.update({'receiptData1': data});
                const recVerRef = admin.database().ref(`/Customers/${uid}/`);
                newReceiptRef.update({'updateReceipt': 0});
                // update status of payment in your system
            } else {
                console.log('INVALID RECEIPT');
                console.log('msg:' + msg);
                var strData = JSON.stringify(data);
                console.log('data"' + strData);
            }
        });
    });
This is my Node.js Cloud Function. The possible values for updateReceipt are 0 and 1. Is it possible to run the cloud function only when the value is 1?
Thanks.
There is no way to trigger the function only when a specific value is present.
I can think of two options:
1. Write the nodes to a different branch depending on the updateReceipt value.
2. Add an if to your code.
The second option is definitely the simplest:
exports.verifyReceipt =
    functions.database.ref('/Customers/{uid}/updateReceipt')
    .onWrite(event => {
        const uid = event.params.uid;
        // the trigger path points directly at updateReceipt,
        // so val() is the 0/1 flag itself
        const updateReceipt = event.data.val();
        if (updateReceipt === 1) {
            var client = new IAPVerifier('IAP_secretkey')
            ...
Alternatively, you can keep the updated receipt in a separate branch from the new receipts. That way you can trigger a function separately for just the new receipts.
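A minimal sketch of that first option (the /receiptsToVerify branch name is made up for illustration): clients write receipts that need checking to that branch, so the function only ever fires for new receipts and can clean up after itself:
exports.verifyNewReceipt = functions.database.ref('/receiptsToVerify/{uid}')
    .onWrite(event => {
        if (!event.data.exists()) return null; // ignore the removal we do below
        const uid = event.params.uid;
        const receipt = event.data.val();
        // verify the receipt as above, then record the result under the
        // customer and remove the pending entry from the trigger branch
        return admin.database().ref(`/Customers/${uid}/receiptData1`).set(receipt)
            .then(() => event.data.ref.remove());
    });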
