Informix 10 replication queues not moving - informix

I'm using Informix IDS 10.00.UC6 on Solaris 10, with two machines having the same database schema and all tables replicating in both directions using Enterprise Replication, so in theory both databases should have the same content.
However, a problem has arisen where one direction of replication (Host A to Host B) continues to work correctly, but the other direction (Host B to Host A) does not. The symptoms are:
Changes made to a table on Host B do not propagate to Host A (as determined by changing a row on Host B and inspecting the table on Host A)
cdr list serv shows Active and Connected (both directions), but on Host B there is a queue of millions of bytes.
cdr list repl shows non-zero queues for several of the replicates.
cdr stats recvq on Host A shows nothing received from Host B recently.
cdr stats rqm shows data in the spool trg_send_stxn with flags SEND_Q, SPOOLED, PROGRESS_TABLE, NEED_ACK, SENDQ_MASK, SREP_TABLE.
There are no errors or relevant messages in online.log or cdr_mon.log, or any other place I can think to look.
Some of the tables are "out of sync" in that rows have conflicting data or are missing; this is for various reasons relating to past errors where one host was offline. However, even changes to tables with correct data on Host B are not propagated to Host A.
I did a cdr cleanstart on Host B yesterday after this problem was occurring in both directions, which did at least make the A -> B direction start working (the opposite of what I expected), and the queues on Host B were 0 at that time. After that cleanstart, some changes to tables (with correct data) would propagate to Host A, while changes to other tables on B would not. But today, no tables are propagating from B to A.
Before the cleanstart I had found by experimenting that sometimes deleting an individual replicate would reduce the size of the stuck queue but the queue remained stuck all the same; and sometimes, deleting a replicate would make the queue move for a time before being stuck again.
There is also a DR host that both A and B do one-way propagation to, and that is propagating correctly with no queue backup.
I'm at a loss now as to how to diagnose why the data in the replication queues is not moving. If there were sync errors (i.e. the replicated change could not be applied due to Host A data differing) I would expect log messages in online.log that the update was rejected, with information saved to $INFORMIXDIR/ats_dr and so on -- this has happened recently. It seems as if there must be something in the queue being refused but not cleared and not logged, blocking the queue. Host A has heavy live traffic and (thankfully) is correctly replicating to Host B, but not vice versa.
Any ideas of more things to try or ways to diagnose the problem would be most welcome.
Edit - may or may not be related to Retrieving or deleting a row with a blob in Informix 10, where it appears that the ER send spooler on Host B has corruption.

If there are any recent replication definition changes in your environment, I would look at the following for any clues. As Jonathan mentioned, IDS 10.0.xC6 is quite old and there were lots of additions to ER in later versions that make it even more robust and resilient to failures:
Host B being Receive Only in the replicates
ATS/RIS files

What does cdr error -a show (run on each server)?
(Just in case you don't have it ... a link to Version 10 ER manual:
http://publibfp.dhe.ibm.com/epubs/pdf/25122792.pdf)
Oh, and are ALL Servers "in time sync" (ntp)?
JJ

Regarding ATS/RIS files, can we assume all replicates do have these options on and this has already worked in the past?
What's in 'onstat -g rcv' (receive statistics) on A, and how does this output change over time?
What does 'onstat -g nif' say on B? Possibly a block in the transmission?
Can we assume both sides had been restarted at least once since the issue started so any internal thread confusion would have been resolved and ER re-initialized at least once on either side?
Is there possibly some huge transaction, from B underway to A, that's clogging replication, e.g. by filling up A's receive queue? Any space problems in queue sbspaces (or queue header dbspaces)?
I guess a cleanstart on B will resolve the problem, but of course a re-sync of all replicated tables would have to occur (since you already did a cleanstart on B, that's required anyway).
Andreas

Related

Will network interruption trigger monitor_node or link broken in Erlang/Elixir?

In a distributed situation, for example 3 nodes running on different machines, they are connected by default as a clique (full mesh) in Erlang/Elixir. We call them A, B and C (they are connected explicitly by calling net_kernel:connect_node/1). Suppose a network interruption between A and B happens.
1) Will the interruption between A and B trigger the linkage broken (spawn_link) between processes on A and B since we still have C as intermediate connected node. What about monitor_node (will that be triggered on A or B)?
2) Could we still send message from process of A to process of B since C works as intermediate connected node?
3) How does the membership components of Erlang/Elixir solve this situation? Will the connection be recovered and nothing bad really happens after all (no linkage broken, no monitor_node message returned just like everything are recovered immediately)?
Thank you for any consideration on this question!
1) Will the interruption between A and B trigger the linkage broken (spawn_link) between processes on A and B since we still have C as intermediate connected node. What about monitor_node (will that be triggered on A or B)?
The default behaviour of Erlang nodes is to connect transitively, meaning that when functions like net_kernel:connect_node/1 or net_adm:ping/1 are called from node A to B, if the connection is established, A will also try to connect to all nodes known by B, i.e. the list obtained when calling nodes() at node B.
2) Could we still send message from process of A to process of B since
C works as intermediate connected node?
It depends: if A is able to connect directly to B with the transitive behaviour mentioned above, then it doesn't make any difference. See below:
A ----- C ----- B
This is how you would imagine the links between your nodes if you connect A to C and C to B. But actually it will look like this:

A ----- C
 \     /
  \   /
    B
So even when node C is running, A won't go through it to reach B. But if going through C is the only physical way for A to reach B, then A and B won't be able to communicate anymore.
3) How does the membership components of Erlang/Elixir solve this
situation? Will the connection be recovered and nothing bad really
happens after all (no linkage broken, no monitor_node message returned
just like everything are recovered immediately)?
If a monitored node goes down, there will be a message of the form {nodedown, Node} sent to the monitoring process so that it can handle the failure. The connection will not be recovered unless the node itself recovers. If the failing node does not hold a critical role in the network for example, and other nodes can still communicate with each other, then you could say that nothing bad really happens.
But that would, in my opinion, be a pretty reckless way to see node failures; even if Erlang is said to be fault tolerant, it should not be considered fault accepting, i.e. one should always handle errors.
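The monitor_node flow described above can be sketched on a single node by simulating the runtime's notification; the node name 'b@host' is hypothetical:

```erlang
#!/usr/bin/env escript
%% Sketch: after monitor_node(Node, true), the runtime delivers
%% {nodedown, Node} when the connection to Node is lost. Here we
%% simulate that delivery by self-sending, so the snippet runs
%% without a second node.
main(_) ->
    Node = 'b@host',            %% hypothetical remote node name
    self() ! {nodedown, Node},  %% what the runtime would send on failure
    receive
        {nodedown, Down} ->
            io:format("node ~p went down~n", [Down])
    after 1000 ->
        io:format("no nodedown received~n")
    end.
```

In a real system the receive clause would trigger failover logic rather than a printout.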
Hope this helps :)
1) Will the interruption between A and B trigger the linkage broken
(spawn_link) between processes on A and B since we still have C as
intermediate connected node. What about monitor_node (will that be triggered on A or B)?
2) Could we still send message from process of A to process of B
since C works as intermediate connected node?
Each Erlang host runs a service named epmd (Erlang Port Mapper Daemon), which maps node names to the TCP ports the nodes listen on; connected nodes also cache each other's information (address, name).
That means each node keeps an info map about the other nodes it knows.
So if the network interruption recovers and the nodes themselves have not died (restarted), the nodes can communicate as before.
That covers the recoverable situation. Now for a situation where they cannot communicate: epmd itself going down. At that point, already-connected nodes keep each other's information, so they can still call each other. But after restarting epmd, newly created nodes cannot reach the old ones, because the old nodes' names are no longer registered with epmd.
3) How does the membership components of Erlang/Elixir solve this
situation? Will the connection be recovered and nothing bad really
happens after all (no linkage broken, no monitor_node message returned
just like everything are recovered immediately)?
monitor_node will receive a message {nodedown, Node} if the connection to it is lost.
spawn_link just links two processes; if the connection to the remote node is lost, the link is triggered with an exit signal (reason noconnection).

Is this the right way of building an Erlang network server for multi-client apps?

I'm building a small network server for a multi-player board game using Erlang.
This network server uses a local instance of Mnesia DB to store a session for each connected client app. Inside each client's record (session) stored in this local Mnesia, I store the client's PID and NODE (the node where a client is logged in).
I plan to deploy this network server on at least 2 connected servers (Node A & B).
So in order to allow a Client A who is logged in on Node A to search (query Mnesia) for a Client B who is logged in on Node B, I replicate the Mnesia session table from Node A to Node B, or vice versa.
After Client A queries the PID and NODE of the Client B, then Client A and B can communicate with each other directly.
Is this the right way of establishing connection between two client apps that are logged-in on two different Erlang nodes?
Creating a system where two or more nodes are perfectly in sync is by definition impossible. In practice however, you might get close enough that it works for your particular problem.
You don't say the exact reason behind running on two nodes, so I'm going to assume it is for scalability. With many nodes, your system will also be more available and fault-tolerant if you get it right. However, the problem could be simplified if you know you only ever will run in a single node, and need the other node as a hot-slave to take over if the master is unavailable.
To establish a connection between two processes on two different nodes, you need some global addressing (e.g. user id 123 is pid <0.123.0>). If you also care about only one process for User A running at a time, you also need a lock, or to allow only unique registrations in the addressing. If you also want to grow, you need a way to add more nodes, either while your system is running or when it is stopped.
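A minimal sketch of such addressing-plus-locking using the stdlib global module (the {user, 123} key is illustrative; run here on a single node, but the same calls work cluster-wide):

```erlang
#!/usr/bin/env escript
main(_) ->
    %% Register the calling process under a cluster-wide key.
    %% global enforces uniqueness, which doubles as a lock.
    yes = global:register_name({user, 123}, self()),
    %% Any node in the cluster could now look the pid up:
    Pid = global:whereis_name({user, 123}),
    true = (Pid =:= self()),
    %% A second registration under the same key is refused:
    no = global:register_name({user, 123}, self()),
    io:format("registered under {user, 123}~n").
```

Note that global replicates its registry to every connected node, so lookups are local reads, but registration cost grows with cluster size.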
Now, there are already some solutions out there that help solve your problem, with different trade-offs:
gproc in global mode allows registering a process under a given key (which gives you addressing and locking). This is distributed to the entire cluster with no single point of failure; however, the leader election (at least when I last looked at it) works only for nodes that were available when the system started. Adding new nodes requires an experimental version of gen_leader or stopping the system. Within your own code, if you know two players are only ever going to talk to each other, you could start them on the same node.
riak_core, allows you to build on top of the well-tested and proved architecture used in riak KV and riak search. It maps the keys into buckets in a fashion that allows you to add new nodes and have the keys redistributed. You can plug into this mechanism and move your processes. This approach does not let you decide where to start your processes, so if you have much communication between them, this will go across the network.
Using mnesia with distributed transactions allows you to guarantee that every node has the data before the transaction is committed; this would give you distribution of the addressing and locking, but you would have to do everything else on top of this (like releasing the lock). Note: I have never used distributed transactions in production, so I cannot tell you how reliable they are. Also, being distributed, expect latency. Note 2: you should check exactly how you would add more nodes and have the tables replicated, for example whether it is possible without stopping mnesia.
ZooKeeper/doozer/roll your own provides a centralized highly-available database which you may use to store the addressing. In this case you would need to handle unregistering yourself. Adding nodes while the system is running is easy from the addressing point of view, but you need some way for your application to learn about the new nodes and start spawning processes there.
Also, it is not necessary to store the node, as the pid contains enough information to send the messages directly to the correct node.
As a cool trick which you may already be aware of, pids may be serialized (as may all data within the VM) to a binary. Use term_to_binary/1 and binary_to_term/1 to convert between the actual pid inside the VM and a binary which you may store in anything that accepts binary data without mangling it.
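A round-trip sketch of that trick (single node; in practice the binary would go into Mnesia or any other store):

```erlang
#!/usr/bin/env escript
main(_) ->
    Pid = self(),
    %% Serialize the pid into the external term format.
    Bin = term_to_binary(Pid),
    true = is_binary(Bin),
    %% Deserializing yields a pid equal to the original.
    Pid = binary_to_term(Bin),
    io:format("pid round trip ok~n").
```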

Erlang remote procedure call module internals

I have several Erlang applications on Node A making rpc calls to Node B, on which I have Mnesia stored procedures (database querying functions) as well as my Mnesia DB. Occasionally, the number of simultaneous processes making rpc calls to Node B for data can rise to 150. I have several questions:
Question 1: For each rpc call to a remote node, does Node A make a completely new (say TCP/IP or UDP, or whatever they use at the transport) CONNECTION? Or is there only one connection that all rpc calls share (since Node A and Node B are connected [got to do with that epmd process])?
Question 2: If I have data-centric applications on one node, and a centrally managed Mnesia database on another, and these applications' tables share the same schema (which may be replicated, fragmented, indexed, etc.), which is the better option: to use rpc calls to fetch data from the data nodes to the application nodes, or to develop a whole new framework using, say, TCP/IP (the way the Scalaris guys did it for their failure detector) to cater for network latency problems?
Question 3: Has anyone out there ever tested or benchmarked rpc call efficiency in a way that can answer the following?
(a) What is the maximum number of simultaneous rpc calls an Erlang node can push onto another without breaking down?
(b) Is there a way of increasing this number, either by a system configuration or an operating system setting? (refer to OpenSolaris for x86 in your answer)
(c) Is there any other way for applications to request data from Mnesia running on remote Erlang nodes, other than rpc? (say CORBA, REST [requires HTTP end-to-end], Megaco, SOAP, etc.)
Mnesia runs over Erlang distribution, and in Erlang distribution there is only one tcp/ip connection between any pair of nodes (usually in a full mesh arrangement, so one connection for every pair of nodes). All rpc/internode communication happens over this distribution connection.
Additionally, it's guaranteed that message ordering is preserved between any pair of communicating processes over distribution. Ordering between more than two processes is not defined.
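A small sketch of rpc riding on the distribution layer; the local node stands in for remote Node B so the snippet runs stand-alone:

```erlang
#!/usr/bin/env escript
main(_) ->
    %% rpc:call/4 is an ordinary message exchange over the single
    %% distribution connection to the target node; no new socket is
    %% opened per call.
    3 = rpc:call(node(), erlang, '+', [1, 2]),
    %% Many concurrent calls to the same node multiplex over that one
    %% connection; rpc:multicall/4 fans out to a list of nodes.
    {[3], []} = rpc:multicall([node()], erlang, '+', [1, 2]),
    io:format("rpc over one connection ok~n").
```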
Mnesia gives you a lot of options for data placement. If you want your persistent storage on node B, but processing done on node A, you could have disc_only_copies of your tables on B and ram_copies on node A. That way applications on node A can get quick access to data, and you'll still get durable copies on node B.
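A sketch of declaring such placement; run here on a single node with ram_copies only, with the two-node layout (node names hypothetical) shown in the comment:

```erlang
#!/usr/bin/env escript
main(_) ->
    ok = mnesia:start(),
    %% In a two-node cluster the options could instead read:
    %%   {ram_copies, ['a@hostA']}, {disc_only_copies, ['b@hostB']}
    %% giving node A fast in-memory access and node B durable storage.
    {atomic, ok} = mnesia:create_table(item,
        [{ram_copies, [node()]}, {attributes, [id, qty]}]),
    ok = mnesia:dirty_write({item, 1, 10}),
    [{item, 1, 10}] = mnesia:dirty_read(item, 1),
    io:format("table placement sketch ok~n").
```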
I'm assuming that the network between A and B is a reliable LAN that is seldom going to partition (otherwise you're going to spend a bunch of time getting mnesia back online after a partition).
If both A and B are running mnesia, then I would let mnesia do all the RPC for me - this is what mnesia is built for and it has a number of optimizations. I wouldn't roll my own RPC or distribution mechanism without a very good reason.
As for benchmarks, this is entirely dependent on your hardware, mnesia schema and network between nodes (as well as your application's data access patterns). No one can give you these benchmarks, you have to run them yourself.
As for other RPC mechanisms for accessing mnesia, I don't think there are any out of the box, but there are many RPC libraries you could use to present the mnesia API to the network with a small amount of effort on your part.

Mnesia Clustering

If I am clustering 2 nodes together, from my experimenting and reading up online I understand that Node A will be like a "master" node and Node B will copy the tables over if I want them to. (Otherwise it will just access them remotely.)
What happens though if Node B goes down? Does it just recopy the data that's been changed since it was last up?
Also what happens if Node A goes down. Is Node B still usable? If so, if data is changed on Node B, does Node A copy it over to itself? My understanding so far is that Node A doesn't care about what Node B says, but someone please tell me I'm wrong.
Since the accepted answer is a link only answer, figured I would document this for anyone who comes along:
Mnesia doesn't quite work by having a primary-secondary architecture. Instead, some nodes have local copies of data, and some have remote copies. (You can see this by running mnesia:info() from the console. There is a list of remote tables, and a list for each of the local table types: ram_copies, disc_copies and disc_only_copies.)
If a node goes down, as long as there is some table with a local copy, operations involving that table are fine.
One of the down-sides with Mnesia is that it is subject to network partition events. If in your cluster the network connection between two nodes goes bad, then each one will think that the other node is down, and continue to write data. Recovery from this is complicated. In the more mundane case though, if one node goes down, then nodes with local copies of data continue along, and when the down-node recovers, it syncs back up with the cluster.

Online mnesia recovery from network partition [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Closed 10 years ago.
Is it possible to recover from a network partition in an mnesia cluster without restarting any of the nodes involved? If so, how does one go about it?
I'm interested specifically in knowing:
How this can be done with the standard OTP mnesia (v4.4.7)
What custom code, if any, one needs to write to make this happen (e.g. subscribe to mnesia running_partitioned_network events, determine a new master, merge records from non-master to master, force load tables from the new master, clear the running_partitioned_network event -- example code would be greatly appreciated).
Or, that mnesia categorically does not support online recovery and requires that the node(s) that are part of the non-master partition be restarted.
While I appreciate the pointers to general distributed systems theory, in this question I am interested in erlang/OTP mnesia only.
After some experimentation I've discovered the following:
Mnesia considers the network to be partitioned if, between two nodes, there is a node disconnect and a reconnect without an mnesia restart.
This is true even if no Mnesia read/write operations occur during the time of the disconnection.
Mnesia itself must be restarted in order to clear the partitioned network event - you cannot force_load_table after the network is partitioned.
Only Mnesia needs to be restarted in order to clear the network partitioned event. You don't need to restart the entire node.
Mnesia resolves the network partitioning by having the newly restarted Mnesia node overwrite its table data with data from another Mnesia node (the startup table load algorithm).
Generally nodes will copy tables from the node that's been up the longest (this was the behaviour I saw; I haven't verified that this is explicitly coded for and not a side-effect of something else). If you disconnect a node from a cluster, make writes in both partitions (the disconnected node and its old peers), shut down all nodes and start them all back up again, starting the disconnected node first, the disconnected node will be considered the master and its data will overwrite all the other nodes'. There is no table comparison/checksumming/quorum behaviour.
So to answer my question, one can perform semi online recovery by executing mnesia:stop(), mnesia:start() on the nodes in the partition whose data you decide to discard (which I'll call the losing partition). Executing the mnesia:start() call will cause the node to contact the nodes on the other side of the partition. If you have more than one node in the losing partition, you may want to set the master nodes for table loading to nodes in the winning partition - otherwise I think there is a chance it will load tables from another node in the losing partition and thus return to the partitioned network state.
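The stop/start-with-masters recipe above, as a sketch to run on a node in the losing partition (Winners is hypothetical; the local node stands in so the snippet is runnable stand-alone):

```erlang
#!/usr/bin/env escript
main(_) ->
    Winners = [node()],  %% in production: the winning partition's nodes
    ok = mnesia:start(),
    %% Point future table loads at the winning side only, so this node
    %% does not reload from another loser and re-enter the partitioned
    %% network state.
    Res = mnesia:set_master_nodes(Winners),
    io:format("set_master_nodes: ~p~n", [Res]),
    stopped = mnesia:stop(),
    ok = mnesia:start(),  %% tables now load from Winners
    io:format("recovery sketch ok~n").
```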
Unfortunately mnesia provides no support for merging/reconciling table contents during the startup table load phase, nor does it provide for going back into the table load phase once started.
A merge phase would be suitable for ejabberd in particular, as the node would still have user connections and thus know which user records it owns/should be the most up-to-date for (assuming one user connection per cluster). If a merge phase existed, the node could filter user-data tables, save all records for connected users, load tables as usual, and then write the saved records back to the mnesia cluster.
Sara's answer is great; also look at the article about CAP. The Mnesia developers sacrificed P for CA. If you need P, then you should choose which of C or A you want to sacrifice, and then choose another storage, for example CouchDB (sacrifices C) or Scalaris (sacrifices A).
It works like this. Imagine the sky full of birds. Take pictures until you have got all the birds.
Place the pictures on the table. Map the pictures over each other, so you see every bird exactly once. Do you see every bird? OK. Then you know that, at that time, the system was stable.
Record what all the birds sound like (messages) and take some more pictures. Then repeat.
If you have a node split, go back to the latest common stable snapshot, and try** to replay what happened after that. :)
It's better described in
"Distributed Snapshots: Determining Global States of Distributed Systems"
K. MANI CHANDY and LESLIE LAMPORT
** I think there is a problem deciding whose clock to go by when trying to replay what happened.
