Is there a way to exclude certain interfaces/addresses from dbus-monitor output?

From the manpage of the dbus-monitor command, I know that I can pass match rules like dbus-monitor "type=...,sender=...,interface=..." to specify the type, sender, interface, etc. that I am interested in.
However, when a few programs generate heavy D-Bus traffic that I am not interested in, is there an option to filter that interface/program out of the output?
Thanks!

The dbus-daemon routes messages using message matching rules. You cannot define "message unmatching" rules; the D-Bus specification does not support anything like that. See the match-rule section of the D-Bus specification for more information.
To get the desired filtering behavior, I would suggest running grep on the output of dbus-monitor, as sketched below.
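For example, a minimal sketch of the grep approach (the interface name org.example.Noisy is hypothetical; substitute whatever busy interface you want to hide):

# Drop the header lines of messages from the noisy interface.
# Note that grep -v removes only the matching lines, not the whole
# multi-line message body that dbus-monitor prints after each header.
dbus-monitor | grep -v "interface=org.example.Noisy"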

Machine parseable error messages

(From https://groups.google.com/d/msg/bazel-discuss/cIBIP-Oyzzw/caesbhdEAAAJ)
What is the recommended way for rules to export information about failures, such that downstream tools can include them in UIs?
Example use case:
I ran bazel test //my:target, and one of the actions for //my:target fails because there is an unknown variable "usrname" in my/target.foo at line 7, column 10. The action would also like to report that "username" is a valid variable, that this is a possible misspelling, and thus to suggest adding an "e".
One way I have thought of doing this is to have my action produce a separate file, //my:target.errors, in a separate output group, and write machine-parseable data there in addition to the human-readable data on stdout.
I can then find all of these files and parse the data in them in downstream tools.
Is there any prior work on this, or does everything just try to parse the human readable output?
I recommend running the error checkers as extra actions (see the sketch below).
I don't think Bazel currently has hooks for custom error handlers like you describe. Please consider opening a feature request: https://github.com/bazelbuild/bazel/issues/new
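A minimal sketch of how extra actions are wired in from the command line (the label //tools:error_listener is hypothetical; it must name an action_listener rule you define in a BUILD file, which attaches an extra_action running your checker):

bazel test //my:target --experimental_action_listener=//tools:error_listener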

GLib commandline option parser - long entry descriptions

I have an application that uses GLib's command-line option parser to handle command-line arguments (as described in the GLib documentation).
What I've found is that the description for each option entry has to be very short in order to fit within the width of a standard-size terminal (when the application is called with the --help argument). If a description for an option is too long, it wraps around, and this looks pretty bad. Is there an accepted way to tidy this up?
For example, here's what part of the help output from my application looks like in an 80 character wide terminal window:
Application Options:
  -i, --ip-addr             Sets the IP address to which the video strea
ms will be sent. If this option is not used then the default IP address of 127.0
.0.1 is used.
  -p, --port                Sets the port to send the video streams to.
If not chosen this defaults to 1234.
Ideally it would look something like this:
Application Options:
  -i, --ip-addr             Sets the IP address to which the video
                            streams will be sent. If this option is not
                            used then the default IP address of
                            127.0.0.1 is used.
  -p, --port                Sets the port to send the video streams to.
                            If not chosen this defaults to 1234.
I could get the above result manually by working out the required length of each line of my option descriptions, then entering newlines and spaces into the strings by hand to get the right indentation. But this seems like a really rough approach, and I'm sure there must be a better, less time-consuming way of formatting the output.
I'm sure this problem must have come up before for others, but I haven't found a solution. Does anybody here know of a better way to get nicer formatting?
I have the exact same issue. At present I am using the crude fix of adding spaces. This however is not possible with the argument description (as opposed to the main description, which is printed at the end): if you add newlines to break the argument description, the alignment of the arguments that follow is messed up.

ITM: reporting the different situations

I am trying to produce a report in ITM 6.2.1 covering each piece of equipment and the situations running on it, along with some of the configuration info.
I need to list each piece of equipment, each situation, its formula, and the system command used to send mail. Is there a way to get this info without having to go manually into each equipment, situation, etc.?
Example:
Equip: equip01
Agent: LinuxOS
Situations: LINUX_FILE_SIZE, LINUX_UNIX_FS_CRITICAL, etc
Formula: FILE: '/local/file.err' SIZE: !=0,000
Action: System command: /usr/bin/mail oper@mail.com
Many thanks!
I think there are many ways to do this, but I would look into some shell scripting with tacmd commands like tacmd listsit -m AGENT and tacmd viewsit -s SITUATION. You can automate the work by combining the outputs of these commands and build a report that way, as sketched below.
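For example, a minimal sketch under those assumptions (equip01 is the managed system from the question; "tacmd login" must already have been run, and the awk field/row offsets depend on your tacmd output format):

#!/bin/sh
# Dump every situation on the managed system, with its full
# definition (formula, action, ...), into one report file.
for sit in $(tacmd listsit -m equip01 | awk 'NR>2 {print $1}'); do
    echo "=== $sit ===" >> itm_report.txt
    tacmd viewsit -s "$sit" >> itm_report.txt
done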
Also, there is a cool tool called "ITMSUPER" that connects to your ITM environment through SOAP calls and creates really useful reports about the entire environment. You should definitely take a look:
https://www.ibm.com/developerworks/mydeveloperworks/wikis/home/wiki/Use%20ITMSUPER%20to%20Solve%20ITM%20Issues/page/Some%20Useful%20Examples%20of%20ITMSUPER%20for%20Beginners?lang=en

Ant output to 2 different sources?

I'm running Ant with output fed to a log file:
ant -logfile file.txt target-name
I'd also like to print some simple progress information to the console though. The answer seems to be a BuildEvent listener that writes to the console every time a new target is hit, but the documentation explicitly states:
A listener must not access System.out and System.err directly since output on these streams is redirected by Ant's core to the build event system.
Did I miss something? Is there a way to do this?
Ant replaces the System.out and System.err streams to remap messages printed there through its own logging system.
That said, you can still get access to the actual OS streams by using java.io.FileDescriptor#out.
Actually, the answer is Log4jListener.
The Log4jListener documentation shows a sample log4j configuration for logging to both console and file. You can then use an <echo> task with an appropriate level parameter to selectively decide what gets printed to the console.
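For example, a minimal sketch of enabling the listener (the jar path is an assumption; point -lib at wherever your log4j jar actually lives, and the console/file routing is then controlled by your log4j configuration):

ant -lib /path/to/log4j.jar -listener org.apache.tools.ant.listener.Log4jListener target-name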
Thanks for the answers! I'm slow, but this is still something that I'd like to get right.
I've managed to get something working more or less as I want, using carej's suggested approach with the java.io.FileDescriptor#out stream and an Ant scriptdef like this:
<scriptdef name="progress-text" language="javascript">
    // Write to the real OS stderr stream, bypassing Ant's redirection
    // of System.out/System.err into its logging system.
    var output = new java.io.PrintStream(
        new java.io.FileOutputStream(java.io.FileDescriptor.err));
    output.println(self.text);
</scriptdef>
Now I'm just left wondering: how wise is this approach? Is there inherent risk in using the underlying OS streams directly?
EDIT:
Two points which might be useful to anyone else with a similar question:
This article has a very good description of the Ant I/O system: http://codefeed.com/blog/?p=68
java.lang.System does something very similar to set up System.out and System.err in the first place.
All of this gave me a little more confidence in this approach.

Examples of getting it wrong first, on purpose

I just caught myself doing something I do a lot, and wanted to generalize it, express it, share it and see who else is following this general practice, to find some other example situations where it might be relevant.
The general practice is getting something wrong first, on purpose, to establish that everything else is right before undertaking the current task.
What I was trying to do, specifically, was to find examples in our code base where the Dojo TextArea widget was used. I knew (because I had it in front of me - existence proof) that the TextBox widget was present in at least one file. So I looked first for what I knew was there:
grep -r digit.form.TextBox | grep -v svn
This wasn't right - I had made a common (for me) mistake of leaving off the star, so I fixed that:
grep -r digit.form.TextBox * | grep -v svn
which found no results! Quick comparison with the file I was looking at showed me I had misspelled "dijit":
grep -r dijit.form.TextBox * | grep -v svn
And now I got results. Cool; doing it wrong first on purpose meant my query was correct except for looking for the wrong thing, so now I could construct the right query:
grep -r dijit.form.TextArea * | grep -v svn
and be confident that when it gave me no results, it was because there are no such files, and not because I had malformed the query.
I'll add three other examples as answers; please add any others you're aware of.
TDD
The red-green-refactor cycle of test-driven development may be the archetype of this practice. With red, demonstrate that the functionality doesn't exist; then make it exist and demonstrate that you've done so by witnessing the green bar.
http://support.microsoft.com/kb/275085
This VBA routine turns off the "subdatasheets" property for every table in your MS Access database. The user is instructed to make sure error-handling is set to "Break only on unhandled errors." The routine identifies tables needing the fix by the error that is thrown. I'm not sure this precisely fits your question, but it's always interesting to me that the error is being used in a non-error way.
Here's an example from VBA:
I also use camel case when I Dim my variables - ThisIsAnExampleOfCamelCase. As soon as I leave the VBA code line, if Access doesn't change the lower-case variable to camel case, then I know I've got a typo. [Or Option Explicit isn't set, which was the topic of the original post.]
I also use this trick, several times an hour at least.
arrange - assert - act - assert
I sometimes like, in my tests, to add a counter-assertion before the action to show that the action is actually responsible for producing the desired outcome demonstrated by the concluding assertion.
When in doubt of my spelling, and of my editor's spell-checking
We use many editors. Many of them highlight misspelled words as I type them - some do not. I rely on automatic spell checking, but I can't always remember whether the editor of the moment has that feature. So I'll enter, say, "circuitx" and hit space. If it highlights, I'll back up over the space and the "x" and type another space - and learn that I spelled circuit correctly - but if it doesn't, I'll copy the word and paste it into a known spell-checker to see whether I did.
I'm not sure it's the best way to go about it, as it does not prevent you from misspelling the final command, for example typing "TestArea" or something like that instead of "TextArea" (your fingers just have to slip a little for such a mistake).
IMHO the best way is to run your "final" command, but on two sample files first: one containing the requested text, another that doesn't.
In other words, instead of running a "similar" command, run the real one, but over "similar" data.
(Not sure if this would be a good idea to try for real!)
For example, you might give the system to the users for testing and tell them the password to get started is "Apple".
You know the users are fully up and ready to test (everything is installed and connections to databases working) when they contact you and say the password doesn't work (it's actually "Orange").
