Multi-master replication with both snapshot and synchronization replication - postgresql-12

We are setting up a multi-master replication setup using EDB Replication Server on Postgres. We are unable to create a publication because some of the tables created by the Keycloak application do not have primary keys defined, and Replication Server requires tables to have primary keys. Is there any way to overcome this issue?
It is not easy to add primary keys to tables created by the third-party Keycloak application, and we are not sure of the consequences.
In the same multi-master setup, can we have both synchronization replication and snapshot replication defined, but for different sets of tables? I assume snapshot replication wouldn't require a primary key to be defined on the tables.

At the moment, for MMR you can't create multiple publications.
On the other hand, with SMR you are allowed to create multiple publications, e.g. publication1 with tables that have primary keys (which will have both synchronization and snapshot support) and publication2 with tables without primary keys (which will only have snapshot support).
The other solution is to create an MMR publication1 for the tables that have primary keys in Database1, and an SMR publication2 for the tables without primary keys in Database2. (Note: you can't add the same database to SMR and MMR simultaneously.)

Related

Mirroring a table by 3rd party software based on timestamp and $systemId?

The 3rd-party web application processes some information from the Microsoft Dynamics NAV database on Azure SQL. To access the table, an external table is created as a bridge from another database (also on Azure SQL).
For performance reasons, the interesting part of the NAV table is to be mirrored to the 3rd-party database using a custom data pump.
The problem is that the table ([...$Sales price]) has a composite key, and not all of its parts are to be mirrored. The good news is that no record should ever be removed from the table (only INSERT and UPDATE operations). More good news is that a new $systemId field was added as a unique key within the table, to be combined with the timestamp field...
If the records in the source table are never deleted, is it correct to mirror the data based on the timestamp field (to find newly inserted or updated records) and on the $systemId field (to recognize whether it is an UPDATE of an existing record or an INSERT of a new one)?
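That approach (a timestamp/rowversion high-water mark to find changed rows, plus an upsert keyed on $systemId) seems workable as long as rows are never deleted. Below is a minimal sketch of such a data pump using Node's mssql package; the connection string, column names, and the SalesPriceMirror target table are placeholders, not the real NAV schema.

```ts
import * as sql from "mssql";

// Minimal sketch: pull rows changed since the last run, then upsert them by $systemId.
// Table and column names here are hypothetical stand-ins for the real NAV objects.
async function pumpChanges(lastTimestamp: Buffer): Promise<Buffer> {
  const pool = await sql.connect("Server=...;Database=...;Encrypt=true;...");

  // 1. Find rows inserted or updated since the last run.
  //    In NAV the 'timestamp' column is a rowversion, so '>' finds anything touched since the watermark.
  //    On the very first run pass Buffer.alloc(8) (all zeros) as lastTimestamp.
  const changed = await pool.request()
    .input("lastTs", sql.VarBinary(8), lastTimestamp)
    .query(
      "SELECT [$systemId] AS systemId, [timestamp] AS ts, [Unit Price] AS unitPrice " +
      "FROM [dbo].[Company$Sales price] WHERE [timestamp] > @lastTs");

  let maxTs = lastTimestamp;
  for (const row of changed.recordset) {
    // 2. Upsert into the mirror keyed on $systemId: UPDATE if the row exists, INSERT otherwise.
    await pool.request()
      .input("systemId", sql.UniqueIdentifier, row.systemId)
      .input("unitPrice", sql.Decimal(18, 2), row.unitPrice)
      .query(
        "MERGE dbo.SalesPriceMirror AS t " +
        "USING (SELECT @systemId AS systemId) AS s ON t.systemId = s.systemId " +
        "WHEN MATCHED THEN UPDATE SET t.unitPrice = @unitPrice " +
        "WHEN NOT MATCHED THEN INSERT (systemId, unitPrice) VALUES (@systemId, @unitPrice);");

    // Remember the highest rowversion seen so far as the next watermark.
    if (Buffer.compare(row.ts, maxTs) > 0) maxTs = row.ts;
  }
  return maxTs; // persist this value and pass it into the next run
}
```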

MDS Staging with automatic code creation

Hi everyone!
We're using MS SQL Master Data Services to organize our enterprise master data, and some of the entities we keep consist of data that we load from external sources almost as-is. We regularly update them using jobs or SSIS packages, placing the data into staging tables ([stg].[<name>_Leaf]) and starting the staging process using procedures named [stg].[udp_<name>_Leaf], as described in THIS and THIS topics about the staging process in MDS.
Sometimes the data we import from an external source is presented as a flat table, just a set of rows that we might want to reference from our other tables; then we simply load it and enjoy (in fact we place the data into the staging tables, call the stored procedure, and let MDS process it at whatever moment is convenient for the server, since the main workload of the staging process runs asynchronously via Service Broker).
But there are a lot of other, ugly but real-life, cases where the data we load is presented as a tree containing references to members that we have not loaded yet and are only about to place into the staging tables.
The problem is that in most cases we use the automatic code creation function (and we cannot use a non-surrogate code), so we are not able to set the referencing member's field value (where the referenced member's code must be placed) to a newly created member before that member is created and inserted into the base table and its code is generated and assigned.
As far as I can see, we could resolve this problem if we could reference a staging member by the staging table's ID, which is an IDENTITY column assigned right after the insert.
-OR-
If we could receive a callback from the staging process when the data is placed into our base tables and the codes are assigned. Then we'd calculate all references and update them (using the same staging process mechanism).
Currently we use a not-very-elegant workaround: generating GUIDs and using them as the Code value whenever this scenario comes up.
Can anyone offer anything more enterprise? (:
When loading hierarchical data, load the parent records into the staging table and then run the associated stored procedure to apply them to the entity table, where they will be assigned the automatically generated code.
Next, when loading the child records into the staging leaf table, look up the parent code using a subscription view of the parent entity.
When using automatically generated codes, I recommend you start at 10000 or 100000, as Excel sorts the codes as strings.
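A rough sketch of that two-pass flow is below, driven from Node's mssql package. The entity names (Category, Product), the etl.* source tables, the ExternalId/ParentCategory attributes, and the mdm.CategoryView subscription view are hypothetical placeholders for your own MDS model; the stg.*_Leaf columns and the udp_*_Leaf parameters follow the standard MDS staging conventions.

```ts
import * as sql from "mssql";

// Pass 1: stage the parents and let MDS generate their codes.
// Pass 2: stage the children, resolving the parent code via the subscription view.
async function loadHierarchy(pool: sql.ConnectionPool, batchTag: string) {
  await pool.request()
    .input("batchTag", sql.NVarChar(50), batchTag)
    .query(`
      -- Code is left NULL so MDS assigns the automatically generated code.
      INSERT INTO stg.Category_Leaf (ImportType, ImportStatus_ID, BatchTag, Name, ExternalId)
      SELECT 0, 0, @batchTag, SourceName, SourceId
      FROM etl.IncomingCategories;

      EXEC stg.udp_Category_Leaf @VersionName = N'VERSION_1', @LogFlag = 1, @BatchTag = @batchTag;
    `);

  // NOTE: the staging batch is applied asynchronously (via Service Broker), so in practice
  // you need to wait or poll ImportStatus_ID on stg.Category_Leaf before running pass 2.

  await pool.request()
    .input("batchTag", sql.NVarChar(50), batchTag)
    .query(`
      -- Children look the freshly generated parent Code up through the parent's subscription view.
      INSERT INTO stg.Product_Leaf (ImportType, ImportStatus_ID, BatchTag, Name, ParentCategory)
      SELECT 0, 0, @batchTag, p.SourceName, c.Code
      FROM etl.IncomingProducts p
      JOIN mdm.CategoryView c ON c.ExternalId = p.SourceCategoryId;

      EXEC stg.udp_Product_Leaf @VersionName = N'VERSION_1', @LogFlag = 1, @BatchTag = @batchTag;
    `);
}
```

The key idea is that the lookup joins on a stable business attribute carried from the source (ExternalId here), not on the MDS code, so the children never need to know the code before it exists.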

How to have document based authorization in Elasticsearch?

I'm trying to build a query in Elasticsearch that would only search documents with a certain custom ID. A User has many Opportunities and an Opportunity has many Tasks.
On the Opportunity, the primary key is labs_id__c
On the Task, the foreign key for an Opportunity is opportunity_id__c
I would like to build a query that only searches documents that reference a given Opportunity foreign key. The purpose of this is to only allow Users to search for related Tasks or related objects within their own Opportunities.
How do I do this?
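One common way is to enforce this in the application layer: resolve the Opportunity IDs the current User owns, then add them as a terms filter on opportunity_id__c so the free-text part of the query can only match the user's own Tasks. A minimal sketch with the official Node.js client (v8-style API) follows; the "tasks" index name, the description field, and how you obtain ownedOpportunityIds are assumptions, while the field names come from the question.

```ts
import { Client } from "@elastic/elasticsearch";

const client = new Client({ node: "http://localhost:9200" });

// Search Tasks, but only within the Opportunities owned by the current user.
async function searchUserTasks(ownedOpportunityIds: string[], text: string) {
  const result = await client.search({
    index: "tasks",
    query: {
      bool: {
        // the actual full-text part of the search
        must: [{ match: { description: text } }],
        // hard filter: only tasks whose opportunity_id__c belongs to the user
        filter: [{ terms: { opportunity_id__c: ownedOpportunityIds } }],
      },
    },
  });
  return result.hits.hits;
}
```

Because the terms clause sits in filter, it is cached and does not affect scoring; only the must clause contributes to relevance.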

Breeze server-generated keys of guids saving temporary client keys to database

I have set up a Breeze app with entities set to an autoGeneratedKeyType of Identity. My database uses char(32) guids for primary keys, which are generated as defaults on the PK columns; the columns are NOT set as identities.
Upon save, the record gets created in the database, but it saves with the temporary Breeze-generated keys of 'undefined-1', 'undefined-2', etc.
Can breeze handle this type of server-side key generation scheme? What are my options? I must stick with this key generation approach due to the design of the existing system. Generating guids on the client comes to mind...
Have you got any ideas?
If you are working with Guids, I think best practice is to generate them on the client. You can use the breeze.core.getUuid() method (currently undocumented) to generate client-side Guids.
If you really want to generate them on the server, then you will need to use a Breeze KeyGenerator. There is more information on this topic here: search for "Key Generator" within this page and within the API docs.
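A minimal sketch of the client-generated-Guid approach from the answer above; the 'Customer' entity, its 'id' key property, and the service name are placeholders for your own model.

```ts
import * as breeze from "breeze-client";

const manager = new breeze.EntityManager("api/breeze");

// Assumes metadata for 'Customer' has already been fetched into the manager.
const customer = manager.createEntity("Customer", {
  id: breeze.core.getUuid(), // generate the key on the client instead of waiting for the server
  name: "Acme",
});

manager.saveChanges();
```

For this to stick, you would likely also need to switch those entity types from autoGeneratedKeyType Identity to None, so Breeze sends the client-supplied value as the real key instead of treating it as temporary.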

MVC design - handle file upload before saving the record

We have an MVC web app which has a Claim management wizard (similar to typical order-entry functionality). A Claim can have multiple Items and each Item can have multiple files related to it.
Claim --> Items --> Files
While adding a new Claim, we want to allow the user to add one or more Items to it and also allow file uploads for those Items. But we want to keep everything in memory until the Claim is actually saved, so that if the user doesn't complete the Claim entry or discards it, no database interaction is done.
We're able to handle the data-level in-memory management via session: we serialize the Claim object (which also includes a Claim.Items property) into the session. But how do we manage files?
We store files in a <ClaimID>\<ItemID> folder, but while creating a new Claim in memory we don't have any IDs; they don't exist until the record is saved in the database (both are auto-increment ints).
For now, we have to restrict the user from uploading files until a Claim is saved.
Why not interact with the database? It sounds like you're intending to persist data between multiple requests to the application, and databases are good for that.
You don't have to persist it in the same tables or even in the same database instance as the more long-term persisted data. Maybe create tables or a database for "transient" data, or data that isn't intended to persist long-term until it reaches a certain state. Or perhaps store it in the same tables as the long-term data but otherwise track state to mark it as "incomplete" or in some other way transient.
You could have an off-line process which cleans up the old data from time to time. If deletes are costly on the long-term data tables then that would be a good reason to move the transient data to their own tables, optimized for lots of writes/deletes vs. lots of reads over time.
Alternatively, you could use a temporary ID for the in-memory objects to associate them with the files on disk prior to being persisted to the database. Perhaps even separate the file-associating ID from the record's primary ID. (I'm assuming that you're using an auto-incrementing integer for the primary key and that's the ID you need to get from the database.)
For example, you could have another identifier on the record which is a Guid (uniqueidentifier in SQL) for the purpose of linking the record to a file on the disk. That way you can create the identifier in-memory and persist it to the database without needing to initially interact with the database to generate the ID. This also has the added benefit of being able to re-associate with different files or otherwise change that identifier as needed without messing with keys.
A Guid shouldn't be used as a primary key in many cases, so you probably don't want to go that route. But having a separate ID could do the trick.
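To illustrate the separate file-correlation ID idea: the original app is ASP.NET MVC, but the shape of the solution is the same in any stack, so here is a short TypeScript sketch. The DraftClaim type, folder layout, and property names are hypothetical.

```ts
import { randomUUID } from "node:crypto";
import * as path from "node:path";

// The draft claim gets a file-correlation Guid up front, files are stored under
// that Guid while the claim is still in memory, and the Guid is persisted on the
// claim record at final save so the files can be re-associated.
interface DraftClaim {
  fileGroupId: string;               // generated in memory, no DB round-trip needed
  items: { description: string }[];
}

function newDraftClaim(): DraftClaim {
  return { fileGroupId: randomUUID(), items: [] };
}

function uploadFolderFor(claim: DraftClaim, itemIndex: number): string {
  // e.g. uploads/3f2c.../0 instead of uploads/<ClaimID>/<ItemID>
  return path.join("uploads", claim.fileGroupId, String(itemIndex));
}
```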
