DRBD split brain: best recovery scenario

I have two DRBD nodes (primary/secondary) and I am trying to resolve a split brain without losing any data.
Running: DRBD (8.9.10-2), Pacemaker, Corosync, PostgreSQL
My automatic split-brain recovery config:
net {
    after-sb-0pri discard-zero-changes;
    after-sb-1pri discard-secondary;
    after-sb-2pri disconnect;
    data-integrity-alg md5;
}
How can I find the most recently updated node? Is there a command or something similar?

How can I find the most recently updated node? Is there a command or something similar?
Unfortunately, you can't find this out using DRBD itself. You could check the logs on both servers and compare when each of them detected the split-brain situation and therefore disconnected.
Alternatively, mount the data on each server, compare it from a client's point of view, decide which server has the better data, and discard everything on the other node.
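If you want help comparing the logs, here is a minimal Python sketch, assuming DRBD kernel messages end up in /var/log/syslog (use /var/log/messages on RHEL-like systems) and contain the phrase "Split-Brain detected"; both assumptions may differ on your distribution. Run it on both nodes and compare the timestamps of the last hit:

# last_split_brain.py -- print the most recent DRBD split-brain detection logged on this node
LOGFILE = "/var/log/syslog"  # adjust for your distribution

last_hit = None
with open(LOGFILE, errors="replace") as log:
    for line in log:
        if "drbd" in line and "Split-Brain detected" in line:
            last_hit = line.rstrip()

if last_hit:
    print("Last split-brain detection:", last_hit)
else:
    print("No split-brain detection found in", LOGFILE)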


In Fish, how do you tweak things around to match special key bindings?

Context
So I finally gave Fish a try, and as one would expect I ran into some friction due to differences from my usual routines.
The most surprising thing for me, as for many others, was the absence of the bang operator. I'm fine with the loss of sudo !!, as the suggested function replacement seems even better to me; I named it gar, which means "to make, compel (someone to do something); to cause (something to be done)". However, I'll need a replacement for !<abc><enter>, which grabs the last history line starting with <abc> and runs it without further ado; suggestions are welcome.
Now, for the more personal things:
- I use a Typematrix 2030 keyboard
- I use a bépo layout
- I like to configure default finger position keys with the most used actions
Aims
As <enter> is well positioned on my keyboard and is semantically relevant for this, ideally I would like to achieve the following key bindings:
ctrl-enter: accept the whole suggestion and run it without further confirmation
ctrl-tab: accept the whole suggestion and wait for further edit
alt-enter: redo the last command without further confirmation
But according to xev it appears that, at least with Gnome Terminal, these combinations are not recognized. Are there terminals that support them? For now I have remapped these three to <ctrl>-i, <alt>-i and <alt>-I respectively:
bind --preset \ci forward-char execute
bind --preset \ei forward-char
bind --preset \eI forward-word
This works as expected, but it seems that the tab key now also maps to the first item. I guess that tab maps to <alt>-i at some point in the shell stack. I wasn't aware of that, so I don't know yet whether it will be possible for Fish to separate them.
To manage jobs, I also came up with
bind --preset \es fg
bind --preset \eS bg
The first works as expected, but the second one doesn't. With applications like vim, the binding should of course be handled in the application's own configuration. But for something as trivial as yes, <alt>-S won't work as expected, while <ctrl>-z continues to operate normally.
I would also like to bind some commands like ls -alh and git status --short to keys that execute them directly, showing the result below the currently edited line and letting me keep typing seamlessly, but I haven't found a way to do that yet.
Summary of remaining questions
So here are my more precise questions, summarised:
how do I bind the sleep signal to <alt>-S?
is there a terminal I can use where <alt>-<enter> and <ctrl>-<enter> work?
how do I seamlessly run a command while keeping the current line edit in place?
can you bind something to <alt>-i without altering <tab>?
how do I bind the sleep signal to <alt>-S?
What you are doing with bind \es fg is to alter a binding inside the shell.
But when you execute yes, the shell isn't currently in the foreground, so shell bindings don't apply.
What you'd have to do instead is change the terminal settings via stty susp \cs,
but fish resets the terminal settings when executing commands (so you can't accidentally break them and end up in an unusable environment), so there currently is no way to do this in fish.
can you bind something to <alt>-i without altering <tab>?
Sure. You bind \ei. Which is escape+i, which is alt-i (because in a terminal alt is escape).
Your problem is with ctrl-i, which in the way terminals encode control+character is tab. The application receives an actual tab character, and at that point the information has been lost.
is there a terminal I can use where <alt>-<enter> and <ctrl>-<enter> work?
Most terminals should send \e\r for alt-enter. ctrl-enter again is unencodable with the usual code (because \r is ctrl-m), just like ctrl-tab is.
Any fix for this requires the terminal to encode these combinations differently.
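If it helps to see why, here is a tiny Python illustration of the ASCII arithmetic a terminal performs for a control-modified letter (clearing the upper bits), which is why ctrl-i collapses into tab and ctrl-m into enter:

# A terminal encodes ctrl+<letter> as the letter's ASCII code with the upper bits cleared,
# so several "different" key combinations collapse into the same byte.
for key in "im":
    code = ord(key) & 0x1f  # the byte actually sent for ctrl-<key>
    print("ctrl-%s -> byte %d (%r)" % (key, code, chr(code)))
# prints:
#   ctrl-i -> byte 9 ('\t')    (same as tab)
#   ctrl-m -> byte 13 ('\r')   (same as enter)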
how do I seamlessly run a command while keeping the current line edit in place?
I don't know what you mean by this. I'm guessing you want fish to remain open and editable while a command also runs in the foreground. That can't work. There's no way to synchronize output from two commands to a terminal, not with cursor movement being what it is.

SPSS: Switch server by syntax command

Mostly, I run SPSS on a server. However, there are occasions when it needs to be run locally.
I haven't found a way to tell SPSS by syntax whether it should run on the server or locally. Any ideas how to solve that 'problem'?
There is no SPSS syntax to do that.
There may be methods in scripting to do it. From the Python Reference Guide for SPSS Statistics, I see this:
GetLocalServer Method
Returns an SpssServerConf object representing the local computer.
Syntax
SpssServerConf=SpssClient.GetLocalServer()
That would be the first thing to try.
I guess you could start the server locally and then use the following in a BEGIN PROGRAM ... END PROGRAM block to run stuff on the server:
Example: Connecting to a Server Using a Saved Configuration
import SpssClient
SpssClient.StartClient()
ServerConfList = SpssClient.GetConfiguredServers()
for i in range(ServerConfList.Size()):
    server = ServerConfList.GetItemAt(i)
    if server.GetServerName() == "myservername":
        server.ConnectWithSavedPassword()
SpssClient.StopClient()
SpssClient.GetConfiguredServers() gets an SpssServerConfList object that provides access to the list of configured servers.
The GetItemAt method of an SpssServerConfList object returns the SpssServerConf object at the specified index. Index values start from 0 and represent the order in which the servers were added to the list.
The ConnectWithSavedPassword method uses the connection information (domain, user ID, and password) to connect to the server.
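Putting those pieces together, here is a hypothetical sketch of switching between the local machine and a saved remote configuration at the top of a script. It is untested, uses only the methods quoted above, and "run_locally" and "myservername" are placeholders:

import SpssClient

run_locally = False  # placeholder: flip this, or derive it from sys.argv

SpssClient.StartClient()
try:
    if run_locally:
        # SpssServerConf object representing the local computer
        server = SpssClient.GetLocalServer()
    else:
        # find a configured remote server by name and connect with saved credentials
        ServerConfList = SpssClient.GetConfiguredServers()
        for i in range(ServerConfList.Size()):
            server = ServerConfList.GetItemAt(i)
            if server.GetServerName() == "myservername":
                server.ConnectWithSavedPassword()
                break
finally:
    SpssClient.StopClient()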

Error when performing schema changes in DSE 5.0

I am trying to get my head around using graphs for the first time, and as you can imagine, I am having a fair bit of trial and error.
Consequently, I am doing a lot of:
Create schema
Find a mistake / modelling error
Delete schema
Rinse and repeat
All of which is completely fine, except for the fact that I seem to constantly get the following error:
Schema migration interrupted. The migration operation will continue in the background.
Now, if I get this error when doing a schema.clear(), it actually doesn't continue in the background at all; it is lying!
I have to rerun the command, sometimes several times, to get the schema deleted.
And if that isn't annoying enough, I might end up with the following, too:
Script evaluation exceeded the configured threshold for the request: [149a3432-b1b3-45b7-8e68-d21c0325d877 - schema.clear()]
I have a single DC with two racks of two nodes each, as a training cluster.
I am using DSE 5.0.1.
I am using the GossipingPropertyFileSnitch.
(I also have the rack properties file for the above snitch type.)
And I have also ensured that I have run
:remote config timeout max
in the gremlin console, too...
So I am not really sure how it can complain about timing out, and since this is all in virtual machines on my local PC, used only by me, I don't understand how something is interrupting the command I just asked it to complete, either!
Thanks if anyone has any ideas!
-Gavin
With special thanks to Jeremy at DataStax, I have a solution for my timeout issue.
I still don't understand why it complained in the first place, given that I was the only person using the cluster, on virtual machines on my own PC... but nonetheless, I can now successfully complete commands in the gremlin console.
The required change is in dse.yaml:
alter the following configuration item to a value higher than the default of 30 seconds (I set it to 180 seconds).
realtime_evaluation_timeout: 180 sec

How do I create a single script file for when I do and don't want to collect TensorBoard statistics?

I want a single script that either collects TensorBoard data or not, depending on how I run it. I am aware that I can pass flags to tell my script how I want it to be run. I could even hard-code it in the script and just change the script manually.
Either solution has a bigger problem: I find myself having to write an if statement everywhere in my script where the summary writer operations should or shouldn't be run. For example, I would have to do something like:
if tb_sys_arg == 'tensorboard':
    merged = tf.merge_all_summaries()
and then, depending on the value of tb_sys_arg, run the summaries or not, as in:
if tb_sys_arg == 'tensorboard':
    merged = tf.merge_all_summaries()
else:
    train_writer = tf.train.SummaryWriter(tensorboard_data_dump_train, sess.graph)
This seems really silly to me, and I'd rather not have to do it. Is this the right way to do it? I just don't want to collect statistics every time I run my main script, but I also don't want to have two separate scripts.
As an anecdotal story, a few months ago I started using TensorBoard, and it seems I have been running my main file as follows:
python main.py --logdir=/tmp/mdl_logs
so that it collects TensorBoard data. But I realized that I don't think I need that last flag to collect TensorBoard data. It's been so long that now I forget whether I actually need it. I've been reading the documentation and tutorials, but it seems I don't need that last flag (it's only needed to run the web app, as in tensorboard --logdir=path/to/log-directory, right?). Have I been doing this wrong all this time?
You can launch the Supervisor without the "summary" service so that it won't run the summary nodes; see the "Launching fewer services" section of the Supervisor docs: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.train.Supervisor.md#launching-fewer-services
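For example, a minimal sketch (the logdir path and train_op are placeholders for your own setup; passing summary_op=None is what switches the summary service off):

import tensorflow as tf

# ... build your graph as usual; the summary ops can stay in it ...

# summary_op=None tells the Supervisor not to start the summary service,
# so no TensorBoard data is written for this run.
sv = tf.train.Supervisor(logdir='/tmp/mdl_logs', summary_op=None)

with sv.managed_session() as sess:
    while not sv.should_stop():
        sess.run(train_op)  # train_op is whatever your training step is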

BPEL, threads stuck on HashMap.getEntry?

I am new to SOA, and we have currently run into a problem when using BPEL to do some XML transformation.
We have 3 SOA projects that do something like:
Read input files (in text format) from a folder
Save the file content in the database and put a message on AQ
Read the file id from AQ, load the content from the database, and transform it to our internal XML format
Apply some business logic and transform the content back to text format
SOA project 1 does steps 1-2, project 2 does step 3, and project 3 does step 4.
We are doing a load test with 7000 input files.
The problem we experienced is that the memory use of the old generation keeps accumulating; although a major GC can reduce it, it keeps growing until it reaches 100%. Then no new BPEL instances can be created, and we hit transaction timeouts.
After analyzing the heap dump, we got the result below: it seems that BPELFactoryImpl holds a HashMap of more than 180 MB, and it keeps growing. Has anyone experienced something similar?
We use SOA version 12.1.3. This problem has blocked us for weeks; please help, thanks a lot.
Image of heap analysis
Guys, we finally got an answer on this: it was caused by a bug, according to Oracle Support, and we are waiting for the patch.
Thanks for your attention.
It's a bug. You should raise an SR referring to the stuck threads on:
at java.util.HashMap.getEntry(HashMap.java:465)
at java.util.HashMap.get(HashMap.java:417)
at oracle.xml.parser.v2.XMLNode.setUserData(XMLNode.java:2137)
at oracle.bpel.lang.v20.model.impl.ExtensibleElementImpl.doCreateElement(ExtensibleElementImpl.java:502)
at oracle.dp.entity.impl.EmFacadeObjectImpl.getElement(EmFacadeObjectImpl.java:35)
at oracle.bpel.lang.v20.model.impl.ExtensibleElementImpl.performDOMChange(ExtensibleElementImpl.java:707)
at oracle.bpel.lang.v20.model.impl.ExtensibleElementImpl.doOnChange(ExtensibleElementImpl.java:636)
at oracle.bpel.lang.v20.model.impl.ExtensibleElementImpl$DOMUpdater.notifyChanged(ExtensibleElementImpl.java:535)
at oracle.dp.notify.impl.NotifierImpl.emNotify(NotifierImpl.java:39)
at oracle.dp.entity.impl.EmHolderImpl.doNotifyOnSet(EmHolderImpl.java:53)
at oracle.dp.entity.impl.EmHolderImpl.set(EmHolderImpl.java:47)
at oracle.bpel.lang.v20.model.impl.CopyImpl.setTo(CopyImpl.java:115)
at com.collaxa.cube.engine.ext.bpel.v2.wmp.BPEL2xCallWMP$CallArgument$1.evaluate(BPEL2xCallWMP.java:190)
at com.collaxa.cube.engine.ext.bpel.v2.wmp.BPEL2xCallWMP.invokeMethod(BPEL2xCallWMP.java:103)
at com.collaxa.cube.engine.ext.bpel.v2.wmp.BPEL2xCallWMP.__executeStatements(BPEL2xCallWMP.java:62)
at com.collaxa.cube.engine.ext.bpel.common.wmp.BaseBPELActivityWMP.perform(BaseBPELActivityWMP.java:188)
at com.collaxa.cube.engine.CubeEngine.performActivity(CubeEngine.java:2880)
....
Bug 20857627 (20867804): Performance issue due to a large number of threads stuck in HashMap.get
