Managing transactions with AllegroGraph's Jena API

I'm uncertain as to the behaviour of the AllegroGraph triple store in regard to transactions. The tutorial talks about using two connections, but does not mention Jena models.
If I use Model's begin(), commit(), and abort() methods, do I still need to use two connections? How does a model interact with the connection's auto-commit setting?

The Jena tutorial doesn't include an example of transactions, but they are supported through the Model methods begin(), commit(), and abort().
You don't have to do anything manually with two connections. I'll work on clarifying the language in the tutorial.
As implemented, calling begin() invokes setAutoCommit(false) on the underlying connection.
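For reference, a minimal sketch of what that looks like in Java against the Jena Model interface. The model is assumed to be an AllegroGraph-backed one obtained through the AG Jena adapter (e.g. via AGGraphMaker/AGModel; exact class and import paths vary between client versions, and older clients use the com.hp.hpl.jena packages instead of org.apache.jena):

```java
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.Property;
import org.apache.jena.rdf.model.Resource;

public class TransactionSketch {

    // 'model' is assumed to be an AllegroGraph-backed Jena model; the same
    // begin/commit/abort calls apply to any Model that supports transactions.
    public static void addInTransaction(Model model, Resource subject,
                                        Property predicate, String value) {
        model.begin();            // internally switches the connection to setAutoCommit(false)
        try {
            model.add(subject, predicate, value);
            model.commit();       // pushes the buffered triples to the store
        } catch (RuntimeException e) {
            model.abort();        // rolls back everything since begin()
            throw e;
        }
    }
}
```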

Is Microsoft Orleans not really made to support legacy applications?

After a bunch of googling, I don't really see a good way to have Orleans work with an existing relational-database backend.
Every example that I have found relies on adding columns to deal with concurrency, and I haven't seen any samples of how to use Orleans with, as the typical example, the Northwind database or something similar.
This leads me to believe that Orleans is not really intended to be used in this way (if it were, I would expect someone somewhere to have created a sample app demonstrating it by now). Am I missing something? Has anyone seen a sample project or blog post explaining how to use, say, an existing EF context with Orleans? This needs to be done without adding additional columns. I am working with data that is controlled by multiple teams in a mission-critical system, so there is no way I will get approval to start adding columns to hundreds of tables.
As @Milney says, to my knowledge there is nothing special in Orleans that would prevent you from using a normal EF DbContext; no extra columns required.
If, on the other hand, your issue is that other applications are causing concurrency issues from outside Orleans, then I think you'll need to deal with them as you would in any application (e.g. with optimistic concurrency checks).
But it's possible I'm misunderstanding your use case.
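For what it's worth, the optimistic-concurrency idea looks roughly like the sketch below. It's shown with JPA/Hibernate in Java (only because the other sketches in this digest are Java); the EF counterpart would be marking existing properties as concurrency tokens ([ConcurrencyCheck] / IsConcurrencyToken), which likewise avoids adding columns. The entity and table here are hypothetical:

```java
import jakarta.persistence.Entity;
import jakarta.persistence.Id;
import org.hibernate.annotations.DynamicUpdate;
import org.hibernate.annotations.OptimisticLockType;
import org.hibernate.annotations.OptimisticLocking;

// Hypothetical mapping of an existing table that cannot be altered.
// OptimisticLockType.ALL makes Hibernate include every mapped column's old
// value in the UPDATE's WHERE clause, so a concurrent change made outside the
// application fails the update instead of being silently overwritten;
// no extra version column is needed.
@Entity
@DynamicUpdate
@OptimisticLocking(type = OptimisticLockType.ALL)
public class Customer {

    @Id
    private long id;

    private String name;

    private String region;

    // getters/setters omitted for brevity
}
```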

What is the best way to integrate data from ProcessMaker?

The question is relatively plain, but mainly directed to the ProcessMaker experts.
I need to extract batches of data from ProcessMaker to perform analysis later.
Currently, we have v3.3, which has a very well documented database model and a not-so-well-documented REST API.
Having no clue about the best approach, my assumption is that ProcessMaker developers are encouraged to use a direct database connection to fetch data batches.
However, looking ahead to the v4 upgrade, I see that the database model is no longer part of the official documentation, and neither is the "Data Integration" chapter. Everything points to using the REST API for any data work.
So, I am puzzled. Which way to go for v3.3 and v4? REST API or direct DB connection?
ProcessMaker 4 was designed and built as an API-first application. The idea is that everything that can and should be done through the application should be done via the API. In fact, this is the way all modern systems are designed. The days of accessing the database directly are gone, and for good reason. The API is a contract: it says that if you make a request in a certain way, you will get a certain response. On the other hand, we cannot guarantee that the database itself will always have the same tables. As a result, if you access the database directly and we then decide to change the database structure, you will be out of luck, and anything you built that accesses the database directly will potentially fail.
So the decision is clear. V4 is a modern architecture built with modern tooling. It performs and scales better than V3. It is the future of ProcessMaker. We highly recommend upgrading to this version, staying on our mainline, and using the API for all activities related to the data models.
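To make the recommendation concrete, here is a minimal batch-extraction sketch over the REST API (Java 11+, since the other sketches in this digest are Java). The base URL, route, paging parameters, and end-of-data check are hypothetical placeholders; check the API reference of your exact 3.3/4 release for the real endpoints and authentication flow:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ProcessMakerExport {

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        String baseUrl = "https://pm.example.com/api/1.0/workflow"; // hypothetical base URL
        String token = System.getenv("PM_ACCESS_TOKEN");            // OAuth2 bearer token

        int page = 1;
        while (true) {
            // Hypothetical paged "cases" listing; real routes and paging
            // parameters differ per ProcessMaker version.
            URI uri = URI.create(baseUrl + "/cases?page=" + page + "&per_page=500");
            HttpRequest request = HttpRequest.newBuilder(uri)
                    .header("Authorization", "Bearer " + token)
                    .header("Accept", "application/json")
                    .GET()
                    .build();

            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());
            if (response.statusCode() != 200) {
                throw new IllegalStateException("API error: " + response.statusCode());
            }

            String body = response.body();
            // Hand the JSON batch to the analysis pipeline here (parse, stage, load).
            System.out.println("Fetched page " + page + " (" + body.length() + " bytes)");

            if (body.equals("[]")) { // naive end-of-data check, for the sketch only
                break;
            }
            page++;
        }
    }
}
```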

Persistent relational projections in RailsEventStore

I am trying to build a CQRS and event sourced Rails 5.2.x application using RailsEventStore.
Now I would like to project my event stream into a relational model, ideally just using ActiveRecord and the PostgreSQL database I also used for my event store.
In the documentation of RailsEventStore I only found on-the-fly, non-persistent projections.
Is there any infrastructure available to continuously build and update a relational representation of an event stream? It needs to remember which events have already been applied to the relational model across restarts of the application.
In case you know how to do it, please let me know.
There is no out-of-the-box background process in RailsEventStore to support persistent projections the way the EventStore database does.
There are, however, pieces you can fit together to achieve something similar: event handlers and linking.
My colleague Rafał put together a few posts documenting this approach:
https://blog.arkency.com/using-streams-to-build-read-models/
https://blog.arkency.com/read-model-patterns-in-case-of-lack-of-order-guarantee/
If you'd like to implement such a projection as a separate background process rather than relying on event handlers (whether synchronous or not), then Distributed RailsEventStore with PgLinearizedRepository might be a good starting point.
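The underlying pattern is language-agnostic: apply each event to the relational read model and, in the same database transaction, record the position of the last event processed, so the projection can resume where it left off after a restart. A rough sketch of that idea (in Java/JDBC against PostgreSQL, matching the other sketches in this digest; table and column names are invented, and in RailsEventStore itself this is what the handler/linking approach from the posts above gives you):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class OrderCountProjection {

    // Applies one event and advances the checkpoint atomically, so a crash or
    // restart never double-applies or skips events. Assumes a seed row exists
    // in projection_checkpoints for 'order_counts'; names are illustrative only.
    public void apply(Connection db, long eventPosition, String orderId) throws Exception {
        db.setAutoCommit(false);
        try (PreparedStatement upsert = db.prepareStatement(
                 "INSERT INTO order_counts (order_id, events_seen) VALUES (?, 1) " +
                 "ON CONFLICT (order_id) DO UPDATE SET events_seen = order_counts.events_seen + 1");
             PreparedStatement checkpoint = db.prepareStatement(
                 "UPDATE projection_checkpoints SET last_position = ? " +
                 "WHERE projection_name = 'order_counts' AND last_position < ?")) {

            upsert.setString(1, orderId);
            upsert.executeUpdate();

            checkpoint.setLong(1, eventPosition);
            checkpoint.setLong(2, eventPosition);
            checkpoint.executeUpdate();

            db.commit();
        } catch (Exception e) {
            db.rollback();
            throw e;
        }
    }

    // On startup, read the checkpoint and resume consuming events after it.
    public long lastProcessedPosition(Connection db) throws Exception {
        try (PreparedStatement q = db.prepareStatement(
                 "SELECT last_position FROM projection_checkpoints WHERE projection_name = 'order_counts'");
             ResultSet rs = q.executeQuery()) {
            return rs.next() ? rs.getLong(1) : 0L;
        }
    }
}
```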

Handling database-backed async callbacks in Grails

I've been working on implementing an automated trading system in Grails based on Interactive Brokers' API (brief overview here: Grails - asynchronous communication with 3rd party API) for longer than I care to admit. This is a high-frequency trading strategy, so it's not as simple as placing an order for 100 shares and getting filled. There's a lot of R&D involved, so my architecture and design have been and continue to morph and evolve over time.
What has become clear over the past month or so is that the asynchronous nature of the API is killing me. I have a model of my intended position within Grails, but this does not automatically reflect the actual state at the brokerage. There is a process of creating orders, some of which get filled now, some later, and some never. There could be partial fills, canceled or rejected orders, or any number of other errors. And the asynchronous updates have turned into a nightmare of pessimistic locks, ugly relationships and dependencies between Positions, Intents, Orders, Transactions, etc. And still, with all that inelegant, smelly code, there are times when my internal model gets out of sync with the actual state of the brokerage account. And that is a very dangerous situation.
So, I'm realizing that I need some kind of async framework that will allow Grails and the IB API to maintain precisely the same state without fail. I am somewhat familiar with GPars, Akka, Promises, and Actors, but only on the surface; I have no hands-on experience with any of them. Just recently, I saw Parse's Bolt Framework Tasks and wondered if that might be a good fit. My need is not really for parallelism or multi-threading of computations or collections. All I am trying to do is make sure that the async callbacks from IB are properly reflected in the Grails domain classes at all times. And I'm hoping that the right framework will allow me to delete tons of ugly spaghetti code that I've written trying to solve this problem.
What I need is a recommendation on the right framework, model, or architecture that addresses this problem. I welcome any recommendations, whether or not I mentioned them above.

Pros and cons of database triggers vs Rails ActiveRecord callbacks?

I am writing a program using Ruby on Rails and PostgreSQL. The system generates a lot of reports which are frequently updated and frequently accessed by users. I am torn between whether I should use Postgres triggers to create the report tables (like Oracle materialized views) or the Rails built-in ActiveRecord callbacks. Has anyone got any thoughts or experiences on this?
Callbacks are useful in the following cases:
They keep all business logic in the Rails models, which eases maintainability.
They can reuse existing Rails model code.
They are easy to debug.
Ruby code is easier to write and read than SQL, which also helps maintainability.
Triggers are useful in the following cases:
Performance is a big concern; triggers are faster than callbacks.
If your concern is ease and cleanliness, use callbacks. If your concern is performance, use triggers.
We had the same problem, and since this is an interesting topic, I'll elaborate based on our choice/experience.
I think the concept is more complex than what is highlighted in the current answer.
Since we're talking about reports, I assume that the use case is updating of data warehousing tables - not a "generic" application (this assumption/distinction is crucial).
Firstly, the "easy to debug" idea is not [necessarily] true. In our case, it's actually counterproductive to think so.
In sufficiently complex settings (data-warehousing updates, millions of lines of code, a mid-sized or larger team), some types of callbacks are simply impossible to maintain, because there are so many places and ways the database can be updated that it becomes practically impossible to debug missed callbacks.
Triggers don't necessarily have to be designed as the "complex and fast" logic.
Specifically, triggers can also work as low-level callback logic and therefore stay simple and lean: they just forward the update events back to the Rails code.
To wrap up, in the use case mentioned, Rails callbacks should be avoided like the plague.
An efficient and effective design is to have RDBMS triggers add records to a queue table, with a Rails-side queueing system that acts on them (see the sketch below).
(Since this post is old, I'm curious what the OP's experience has been.)
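To make that design concrete, here is a rough sketch of the trigger-to-queue half and a polling consumer. It's shown in Java/JDBC only because the other sketches in this digest are Java; in a Rails application the consumer would naturally be a background job. All table, function, trigger, and column names are invented for the illustration, and the DDL uses PostgreSQL 11+ syntax:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Statement;

public class ReportRefreshQueue {

    // One-time setup: a queue table and a trigger that enqueues a row whenever
    // the (hypothetical) source table changes.
    static void install(Connection db) throws Exception {
        try (Statement s = db.createStatement()) {
            s.execute("CREATE TABLE IF NOT EXISTS report_refresh_queue (" +
                      " id bigserial PRIMARY KEY, order_id bigint NOT NULL," +
                      " enqueued_at timestamptz NOT NULL DEFAULT now())");
            s.execute("CREATE OR REPLACE FUNCTION enqueue_report_refresh() RETURNS trigger AS $$ " +
                      "BEGIN INSERT INTO report_refresh_queue(order_id) VALUES (NEW.id); RETURN NEW; END; " +
                      "$$ LANGUAGE plpgsql");
            s.execute("DROP TRIGGER IF EXISTS orders_report_refresh ON orders");
            s.execute("CREATE TRIGGER orders_report_refresh AFTER INSERT OR UPDATE ON orders " +
                      "FOR EACH ROW EXECUTE FUNCTION enqueue_report_refresh()");
        }
    }

    // The application-side consumer: grab a batch of queued rows (skipping rows
    // other workers hold), rebuild the affected report rows, then dequeue.
    static void drainOnce(Connection db) throws Exception {
        db.setAutoCommit(false);
        try (PreparedStatement pick = db.prepareStatement(
                 "SELECT id, order_id FROM report_refresh_queue " +
                 "ORDER BY id LIMIT 100 FOR UPDATE SKIP LOCKED");
             ResultSet rs = pick.executeQuery()) {

            while (rs.next()) {
                long queueId = rs.getLong("id");
                long orderId = rs.getLong("order_id");

                refreshReportFor(db, orderId); // application-specific rebuild logic

                try (PreparedStatement del = db.prepareStatement(
                         "DELETE FROM report_refresh_queue WHERE id = ?")) {
                    del.setLong(1, queueId);
                    del.executeUpdate();
                }
            }
            db.commit();
        } catch (Exception e) {
            db.rollback();
            throw e;
        }
    }

    static void refreshReportFor(Connection db, long orderId) {
        // placeholder: recompute the report rows that depend on this order
    }
}
```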
