How to make a transaction when saving events to the event store - wolkenkit

I would like to use a transaction in wolkenkit-eventstore when saving events to the event store, and be able to roll back those events if something else fails. Is that possible?
I saw in the source code (in the saveEvents method) that you release the connection back to the pool:
try {
  const result = await connection.query({ name: `save events ${committedEvents.length}`, text, values });

  for (let i = 0; i < result.rows.length; i++) {
    committedEvents[i].event.metadata.position = Number(result.rows[i].position);
  }
} catch (ex) {
  if (ex.code === '23505' && ex.detail.startsWith('Key ("aggregateId", revision)')) {
    throw new Error('Aggregate id and revision already exist.');
  }
  throw ex;
} finally {
  connection.release();
}
in the finally step, so I can't get hold of that connection in any way.
Is there any way I can build a transaction-based system with wolkenkit-eventstore?

I'm one of the core developers of wolkenkit, so first of all thanks for bringing up this question 😊
Right now what you want is actually not possible, but nevertheless it could be a good idea to support this use case.
In wolkenkit the procedure is that the command handler publishes the events, and only if the command handler succeeds, the events are stored in the event store in an all-or-nothing approach.
To understand your use case better, you said you would like:
to rollback those events if something else fail[s]
What would this "something else" be?
Since this could be the start of a longer discussion, I think Stack Overflow is probably not the perfect place for it, so if you would like to talk to us about this feature, could you please open a feature request?
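For illustration, the behaviour the question asks for boils down to the classic begin/commit/rollback pattern. Here is a minimal, self-contained sketch in plain JavaScript; `InMemoryEventStore` and `saveEventsTransactionally` are hypothetical stand-ins invented for this sketch, not the wolkenkit-eventstore API:

```javascript
// Hypothetical in-memory event store illustrating all-or-nothing saves.
class InMemoryEventStore {
  constructor() {
    this.events = [];
  }

  // Save a batch of events atomically: if the follow-up action throws,
  // the newly appended events are discarded, mimicking a rollback.
  async saveEventsTransactionally(events, followUpAction) {
    const checkpoint = this.events.length; // remember state for rollback

    this.events.push(...events);
    try {
      await followUpAction();              // the "something else" that may fail
    } catch (err) {
      this.events.length = checkpoint;     // rollback: drop the new events
      throw err;
    }
    // commit: nothing to do, the events are already in place
  }
}
```

In a real database-backed store the checkpoint/rollback steps would instead be BEGIN, COMMIT, and ROLLBACK on the same connection, which is exactly what the released connection in saveEvents currently prevents.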

Related

Fovea in-app purchase plugin / several database entries

I have a problem with the in-app purchase Cordova plugin by Fovea.
I am kinda confused.
What I want to do, is that when the user chooses a product (monthly subscription), I do what I need to do, process payment and all that jazz, and when everything is done, I save an entry in my database, to indicate that the user is subscribed (and some more info).
However, when I use it, I see that I have not one but at least 10 saved entries, for the same user and the same product.
I have no idea why it does that, so I guess something is wrong with my code. Since I use the sandbox for testing on the iOS side, sometimes the pop-up (where you need to enter your password and confirm your purchase) just doesn't appear, and yet I end up with 10 to 20 entries saved in my database (I put that bit of code where the product is owned, then in the .finished event).
Can someone help me?
Here is the code
let produit = null;

if (!(window as any).store) {
  alert('Store indispo');
}

this.store.register({
  id: this.valeurEnvoi.app,
  alias: 'abonnement',
  type: store.PAID_SUBSCRIPTION,
});

console.log(this.valeurEnvoi.app);
this.store.refresh();

this.store.ready(() => {
  produit = this.store.get(this.valeurEnvoi.app);
  this.envoiLogs('récupération du produit', 'NO ID');
  if (produit.canPurchase) {
    this.store.order(produit);
  }
});

this.store.refresh();
// this.store.manageSubscriptions();

this.store.when(produit).updated((p) => {
  if (p.owned) {
  } else {
  }
});

this.store.refresh();
console.log(this.store.log);

this.store.error(error => {
  this.testLog = error.message;
  alert('erreur: ' + error.code + ' message: ' + error.message);
});

this.store.when(produit).approved((order) => {
  order.finish();
});

this.store.refresh();

this.store.when(produit).finished((order) => {
  const test = this.store.findInLocalReceipts(produit);
  alert(test.transactionId);
  this.confirmerAchatMobile();
});
Also, If I could have some pointer to get the transactionID, that would be great! Thank you!
If you see any strange stuff, let me know, it will help.
I took a look at the repo. refresh() appears to be deprecated:
/**
 * @deprecated - use store.initialize(), store.update() or store.restorePurchases()
 */
refresh() {
  throw new Error("use store.initialize() or store.update()");
}
source: https://github.com/j3k0/cordova-plugin-purchase/blob/master/www/store.js
Also, from a comment I infer that transactions are replayed if you call refresh() multiple times; could that cause your issue? "Notice that all previous transactions will be replayed if you call store.refresh() a second time in the lifetime of your app."
source: https://github.com/j3k0/cordova-plugin-purchase/issues/1298
There are quite a few (109) issues on the GitHub repo, so I think you might find your answer there. If not, I'd create an issue there; the maintainers are probably more likely to have an answer for you.
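Independently of the refresh() question, it is usually worth making the database write idempotent, so replayed store events cannot create duplicate rows. A minimal sketch in plain JavaScript; `makeIdempotentSaver` and `saveSubscription` are hypothetical names invented here, not part of the plugin's API:

```javascript
// Wraps a "save subscription" callback so that replayed store events
// with the same transactionId are ignored instead of saved again.
function makeIdempotentSaver(saveSubscription) {
  const seen = new Set(); // transaction ids already persisted this session

  return function onFinished(transaction) {
    if (seen.has(transaction.transactionId)) {
      return false; // duplicate replay: skip the database write
    }
    seen.add(transaction.transactionId);
    saveSubscription(transaction); // hypothetical call that writes one row
    return true;
  };
}
```

An in-memory guard only protects a single app session, so the safest fix is to also enforce uniqueness server-side, e.g. with a unique index on the transaction id column.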

Cloud functions in Firebase trigger are not part of the transaction update

I am using the realtime database and I am using transactions to ensure the integrity of my data set. In my example below I am updating currentTime on every update.
export const updateTime = functions.database.ref("/users/{userId}/projects/{projectId}")
  .onUpdate((snapshot) => {
    const beforeData = snapshot.before.val();
    const afterData = snapshot.after.val();

    if (beforeData.currentTime !== afterData.currentTime) {
      return Promise.resolve();
    } else {
      return snapshot.after.ref.update({currentTime: new Date().getTime()})
        .catch((err) => {
          console.error(err);
        });
    }
  });
It seems the cloud function is not part of the transaction, but instead triggers multiple updates in my clients, which I am trying to avoid.
For example, I watched this starter tutorial which replaces :pizza: with a pizza emoji. In my client I would see :pizza: for one frame before it gets replaced with the emoji. I know, the pizza tutorial is just an example, but I am running into a similar issue. Any advice is highly appreciated!
Cloud Functions don't run as part of the database transaction indeed. They run after the database has been updated, and receive "before" and "after" snapshots of the affected data.
If you want a Cloud Function to serve as an approval process, the idiomatic approach is to have the clients write to a different location (typically called a pending queue) that the function listens to. The function then performs whatever operation it wants, and writes the result to the final location.
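The pending-queue idea can be sketched as follows, in plain JavaScript with an in-memory object standing in for the two database locations; `writePending` and `onPendingWrite` are hypothetical names for this sketch, not the Firebase API (in Firebase, `onPendingWrite` would be an onCreate trigger on the pending path):

```javascript
// In-memory stand-in for the two database locations.
const db = { pending: {}, messages: {} };

// Clients write only to the pending queue, never to the final location.
function writePending(id, message) {
  db.pending[id] = message;
  onPendingWrite(id, message); // in Firebase: a function triggered on create
}

// "Cloud Function": processes the pending item and writes the final result.
function onPendingWrite(id, message) {
  const processed = message.replace(/:pizza:/g, '🍕'); // the transform step
  db.messages[id] = processed; // clients only ever observe the final location
  delete db.pending[id];       // clear the queue entry
}
```

Because clients subscribe to the final location only, they never see the unprocessed value for a frame, which is exactly the flicker the question describes.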

DataNucleus Memory/Cache Handling for large update/insert

We are running an application in a Spring context, using DataNucleus as our ORM and MySQL as our database.
Our application has a daily import job that loads a data feed into our database. The feed translates into around 1 million rows of inserts/updates. The import starts out performing very well, but performance degrades over time (as the number of executed queries increases), and at some point the application freezes or stops responding. We have to wait for the whole job to complete before the application responds again.
This behavior looks very much like a memory leak to us, and we have been looking hard at our code to catch any potential problem; however, the problem didn't go away. One interesting thing we found in the heap dump is that org.datanucleus.ExecutionContextThreadedImpl (or its HashSet/HashMap) holds 90% of our memory (5 GB) during the import (I have attached screenshots of the dump below). My research on the internet says this reference is the Level 1 cache (not sure if I am correct). My question is: during a large import, how can I limit/control the size of the Level 1 cache? Maybe ask DataNucleus not to cache during my import?
If that's not the L1 cache, what's the possible cause of my memory issue?
Our code uses a transaction for every insert to prevent locking large chunks of data in the database. It calls the flush method every 2000 inserts.
As a temporary fix, we moved our import process to run overnight, when no one is using our app. Obviously, this cannot go on forever. Could someone at least point us in the right direction so that we can do more research, in the hope of finding a fix?
It would also be great if someone has experience decoding heap dumps.
Your help would be very much appreciated by all of us here. Many thanks!
https://s3-ap-southeast-1.amazonaws.com/public-external/datanucleus_heap_dump.png
https://s3-ap-southeast-1.amazonaws.com/public-external/datanucleus_dump2.png
Code below - the caller of this method does not have a transaction. This method processes one import object per call, and we need to process around 100K of these objects daily.
@Override
@PreAuthorize("hasUserRole('ROLE_ADMIN')")
@Transactional(propagation = Propagation.REQUIRED)
public void processImport(ImportInvestorAccountUpdate account, String advisorCompanyKey) {
    ImportInvestorAccountDescriptor invAccDesc = account.getInvestorAccount();
    InvestorAccount invAcc = getInvestorAccountByImportDescriptor(invAccDesc, advisorCompanyKey);
    try {
        ParseReportingData parseReportingData = ctx.getBean(ParseReportingData.class);
        String baseCCY = invAcc.getBaseCurrency();
        Date valueDate = account.getValueDate();
        ArrayList<InvestorAccountInformationILAS> infoList = parseReportingData
                .getInvestorAccountInformationILAS(null, invAcc, valueDate, baseCCY);
        InvestorAccountInformationILAS info = infoList.get(0);
        PositionSnapshot snapshot = new PositionSnapshot();
        ArrayList<Position> posList = new ArrayList<Position>();
        Double totalValueInBase = 0.0;
        double totalQty = 0.0;
        for (ImportPosition importPos : account.getPositions()) {
            Asset asset = getAssetByImportDescriptor(importPos.getTicker());
            PositionInsurance pos = new PositionInsurance();
            pos.setAsset(asset);
            pos.setQuantity(importPos.getUnits());
            pos.setQuantityType(Position.QUANTITY_TYPE_UNITS);
            posList.add(pos);
        }
        snapshot.setPositions(posList);
        info.setHoldings(snapshot);
        log.info("persisting a new investorAccountInformation(source:"
                + invAcc.getReportSource() + ") on " + valueDate
                + " of InvestorAccount(key:" + invAcc.getKey() + ")");
        persistenceService.updateManagementEntity(invAcc);
    } catch (Exception e) {
        throw new DataImportException(invAcc == null ? null : invAcc.getKey(), advisorCompanyKey,
                e.getMessage());
    }
}
Do you use the same pm for the entire job?
If so, you may want to try to close and create new ones once in a while.
If not, this could be the L2 cache. What setting do you have for datanucleus.cache.level2.type? I think it's a weak map by default. You may want to try none for testing.
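For reference, disabling the L2 cache for testing is a single persistence property (the property name is from the DataNucleus documentation; whether it resolves this particular leak is an open question):

```properties
# datanucleus.properties - turn off the Level 2 cache for testing
datanucleus.cache.level2.type=none
```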

Using executeQuery/executeQueryLocally as intermediary between web service and cache

I'm new to breeze.js and I'm having a little trouble coming up with a good way to combine executeQuery and executeQueryLocally.
The use case is this: I want to use breeze data caching to hide the flakiness of a 3rd party web service. I'd like to come up with a pattern that queries the service and falls back to the cache if the service is unavailable when called.
I've been chewing on this for a couple of days now - any suggestions or advice would be appreciated!
I think this solution can be a good approach:
executeQuery = function(query) {
  operating(true);
  return manager.executeQuery(query).fail(fail);

  function fail(error) {
    // You can decide if you want to query locally depending on the type of error
    // Example: if (error.status === 404) ...
    return executeQueryLocally(query);
  }
};

executeQueryLocally = function(query) {
  return manager.executeQuery(query.using(FetchStrategy.FromLocalCache)).fail(fail);

  function fail(error) {
    // You can't get the information from the cache either,
    // so throw an error (or whatever you want)
    throw new Error('Impossible to get the requested data');
  }
};

// Example calling these methods
var getCustomers = function(resultArrayObservable, inlineCountObservable) {
  var query = new breeze.EntityQuery("Customers").inlineCount(true);
  return executeQuery(query).then(success);

  function success(data) {
    inlineCountObservable(data.inlineCount);
    resultArrayObservable(standarizeCustomerDtos(mapCustomerDtosToKos(data.results)));
  }
};
With this solution I have tried to make it easy to check, in every query, whether something is going wrong, and to avoid repeating code.
I hope this can help you.
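Stripped of the Breeze specifics, the idea above is just "try the remote query, and on failure retry against the cache". A minimal self-contained sketch; `fetchRemote` and `fetchFromCache` are hypothetical stand-ins for executing the query with FetchStrategy.FromServer and FetchStrategy.FromLocalCache:

```javascript
// Generic "remote first, cache on failure" helper. You may want to
// inspect the error (e.g. its status) before deciding to fall back.
function queryWithFallback(query, fetchRemote, fetchFromCache) {
  return fetchRemote(query).catch(() => fetchFromCache(query));
}
```

Keeping the fallback in one helper means every query gets the same flakiness handling without repeating the catch logic per call site.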

nsIProtocolHandler: trouble loading image for html page

I'm building an nsIProtocolHandler implementation in Delphi. (more here)
And it's working already. Data the module builds gets streamed over an nsIInputStream. I've got all the nsIRequest, nsIChannel and nsIHttpChannel methods and properties working.
I've started testing and I run into something strange. I have a page "a.html" with this simple HTML:
<img src="a.png">
Both "xxm://test/a.html" and "xxm://test/a.png" work in Firefox, and give above HTML or the PNG image data.
The problem is with displaying the HTML page, the image doesn't get loaded. When I debug, I see:
NewChannel gets called for a.png, (when Firefox is processing an OnDataAvailable notice on a.html),
NotificationCallbacks is set (I only need to keep a reference, right?)
RequestHeader "Accept" is set to "image/png,image/*;q=0.8,*/*;q=0.5"
but then, the channel object is released (most probably due to a zero reference count)
Looking at other requests, I would expect some other properties to get set (such as LoadFlags or OriginalURI) and AsyncOpen to get called, from where I can start getting the request responded to.
Does anybody recognise this? Am I doing something wrong? Perhaps with LoadFlags or the LoadGroup? I'm not sure when to call AddRequest and RemoveRequest on the LoadGroup, and peeping from nsHttpChannel and nsBaseChannel I'm not sure it's better to call RemoveRequest early or late (before or after OnStartRequest or OnStopRequest)?
Update: Checked on the freshly new Firefox 3.5, still the same
Update: To try to further isolate the issue, I tried "file://test/a1.html" with <img src="xxm://test/a.png" /> and still only get the above sequence of events. If I'm supposed to add this secondary request to a load group to get AsyncOpen called on it, I have no idea where to get a reference to one.
There's more: I find only one instance of the "Accept" string that gets added to the request headers; it queries for nsIHttpChannelInternal right after creating a new channel, but I don't even get this QueryInterface call through... (I posted it here)
Me again.
I am going to quote the same stuff from nsIChannel::asyncOpen():
If asyncOpen returns successfully, the channel is responsible for keeping itself alive until it has called onStopRequest on aListener or called onChannelRedirect.
If you go back to nsViewSourceChannel.cpp, there's one place where loadGroup->AddRequest is called and two places where loadGroup->RemoveRequest is being called.
nsViewSourceChannel::AsyncOpen(nsIStreamListener *aListener, nsISupports *ctxt)
{
    NS_ENSURE_TRUE(mChannel, NS_ERROR_FAILURE);

    mListener = aListener;

    /*
     * We want to add ourselves to the loadgroup before opening
     * mChannel, since we want to make sure we're in the loadgroup
     * when mChannel finishes and fires OnStopRequest()
     */
    nsCOMPtr<nsILoadGroup> loadGroup;
    mChannel->GetLoadGroup(getter_AddRefs(loadGroup));
    if (loadGroup)
        loadGroup->AddRequest(NS_STATIC_CAST(nsIViewSourceChannel*, this), nsnull);

    nsresult rv = mChannel->AsyncOpen(this, ctxt);

    if (NS_FAILED(rv) && loadGroup)
        loadGroup->RemoveRequest(NS_STATIC_CAST(nsIViewSourceChannel*, this),
                                 nsnull, rv);

    if (NS_SUCCEEDED(rv)) {
        mOpened = PR_TRUE;
    }

    return rv;
}
and
nsViewSourceChannel::OnStopRequest(nsIRequest *aRequest, nsISupports* aContext,
                                   nsresult aStatus)
{
    NS_ENSURE_TRUE(mListener, NS_ERROR_FAILURE);

    if (mChannel)
    {
        nsCOMPtr<nsILoadGroup> loadGroup;
        mChannel->GetLoadGroup(getter_AddRefs(loadGroup));
        if (loadGroup)
        {
            loadGroup->RemoveRequest(NS_STATIC_CAST(nsIViewSourceChannel*, this),
                                     nsnull, aStatus);
        }
    }

    return mListener->OnStopRequest(NS_STATIC_CAST(nsIViewSourceChannel*, this),
                                    aContext, aStatus);
}
Edit:
I have no clue about how Mozilla works internally, so I have to guess from reading some code. From the channel's point of view, once the original file is loaded, its job is done. If you want to load the secondary items linked in the file, like an image, you have to implement that in the listener. See TestPageLoad.cpp. It implements a crude parser and retrieves child items upon OnDataAvailable:
NS_IMETHODIMP
MyListener::OnDataAvailable(nsIRequest *req, nsISupports *ctxt,
                            nsIInputStream *stream,
                            PRUint32 offset, PRUint32 count)
{
    //printf(">>> OnDataAvailable [count=%u]\n", count);
    nsresult rv = NS_ERROR_FAILURE;
    PRUint32 bytesRead = 0;
    char buf[1024];

    if (ctxt == nsnull) {
        bytesRead = 0;
        rv = stream->ReadSegments(streamParse, &offset, count, &bytesRead);
    } else {
        while (count) {
            PRUint32 amount = PR_MIN(count, sizeof(buf));
            rv = stream->Read(buf, amount, &bytesRead);
            count -= bytesRead;
        }
    }

    if (NS_FAILED(rv)) {
        printf(">>> stream->Read failed with rv=%x\n", rv);
        return rv;
    }

    return NS_OK;
}
The important thing is that it calls streamParse(), which looks at src attribute of img and script element, and calls auxLoad(), which creates new channel with new listener and calls AsyncOpen().
uriList->AppendElement(uri);
rv = NS_NewChannel(getter_AddRefs(chan), uri, nsnull, nsnull, callbacks);
RETURN_IF_FAILED(rv, "NS_NewChannel");
gKeepRunning++;
rv = chan->AsyncOpen(listener, myBool);
RETURN_IF_FAILED(rv, "AsyncOpen");
Since it's passing in another instance of MyListener object in there, that can also load more child items ad infinitum like a Russian doll situation.
I think I found it (myself); take a close look at this page. Why it doesn't highlight that the UUID has changed over versions isn't clear to me, but it would explain why things fail when (or just before) calling QueryInterface on nsIHttpChannelInternal.
With the newer UUID, I'm getting better results. As I mentioned in an update to the question, I've posted this on bugzilla.mozilla.org; I'm curious what response I will get there.
