Restoring a failed Informix mirror chunk

What should be done if an Informix mirror chunk has failed and needs to be replaced with a new chunk?

One command that you could use is onspaces. When run by a DBSA with no arguments, its help output includes this information:
onspaces -m <spacename> { -p <path> -o <offset> -m <path> <offset> [-y] |
-f <filename> }
onspaces -r <spacename> [-y]
onspaces -s <spacename> -p <path> -o <offset> {-O | -D} [-y]
-m — Add mirroring to an existing DBspace, PLOGspace, BLOBspace or
SBLOBspace
-r — Turn mirroring off for a DBspace, PLOGspace, BLOBspace or SBLOBspace
-s — Change the status of a chunk
Managing mirroring for a complete dbspace
Clearly, you could turn mirroring off for the dbspace containing the down chunk (with -r) and then turn it back on with -m. You should investigate the Administrator's Reference, especially the section on ON-Spaces, and maybe the Administrator's Guide too.
The ON-Spaces section on mirroring contains the note:
The mirrored chunks should be on a different disk. You must mirror all the chunks at the same time.
and the syntax diagram allows multiple occurrences of the -p <path> -o <offset> -m <path> <offset> part of the synopsis. The -p and -o portions identify the existing chunk, and the -m portion identifies the new mirror chunk.
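For example, a minimal sketch of re-mirroring a dbspace with two chunks, assuming a dbspace named db_acct whose primary chunks are /dev/chunk1 and /dev/chunk2 at offset 0 and whose new mirror devices are /dev/mirror_chk1 and /dev/mirror_chk2 (all of these names and offsets are illustrative, not from the original question):
onspaces -m db_acct -p /dev/chunk1 -o 0 -m /dev/mirror_chk1 0 \
                    -p /dev/chunk2 -o 0 -m /dev/mirror_chk2 0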
The -f option allows you to put the per-chunk information into a text file.
This technique has the not-necessarily-desirable side effect of temporarily dropping all mirroring on the affected dbspace and then reinstating it.
Managing single mirror chunks
Further research reveals a way to recover a single chunk at a time. The Administrator's Guide has a section on:
Fault tolerance
with sub-sections:
Mirroring
Using mirroring
These cover the theory and practice of mirroring. In particular, you seem to need:
Take down a mirror chunk using onspaces
onspaces -s db_acct -p /dev/mirror_chk1 -o 0 -D
Recover a mirror chunk using onspaces
onspaces -s db_acct -p /dev/mirror_chk1 -o 0 -O
This lets you mark a specific chunk as down (if the system has not already marked it down) and then bring it back online, into recovery mode, once you've replaced the physical media. As noted in a comment, this is much easier when you use symlinks to name the device (file) that holds the data, and when you don't use non-zero offsets; for the most part, offsets are a relic of the days when big disk drives were 100 MiB or less.
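A rough sketch of that single-chunk recovery flow, assuming /dev/mirror_chk1 (the name used in the commands above) is itself a symlink and that the replacement device is /dev/rdsk/replacement_disk, a hypothetical path; the ownership and permission settings reflect the usual Informix requirement that chunk devices be owned by informix:informix with mode 660:
onspaces -s db_acct -p /dev/mirror_chk1 -o 0 -D      # mark the failed mirror chunk down
ln -sf /dev/rdsk/replacement_disk /dev/mirror_chk1   # repoint the symlink at the new device
chown informix:informix /dev/rdsk/replacement_disk   # chunks must be owned by informix
chmod 660 /dev/rdsk/replacement_disk
onspaces -s db_acct -p /dev/mirror_chk1 -o 0 -O      # bring it back online; recovery begins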

Related

Exporting encrypted SNMPv3 traps to JSON with TShark

I have a pcap file with recordings of encrypted SNMPv3 traps from Wireshark (version 3.2.2). To analyze the traps, I want to export the protocol data to JSON using tshark.
tshark.exe -T ek -Y "snmp" -P -V -x -r input.pcap > output.json
Currently, I supply the information needed to decrypt the packets via the "snmp_users" file in C:\Users\developer\AppData\Roaming\Wireshark.
# This file is automatically generated, DO NOT MODIFY.
,"snmp_user","SHA1","xxxxxx","AES","yyyyyyy"
Is it possible to supply the options via commandline?
I have tried:
tshark.exe -T ek -Y "snmp" -P -V -x -o "snmp.users_table.username:snmp_user" ...
But that causes an error:
tshark: -o flag "snmp.users_table.username:snmp_user" specifies unknown preference
Update 16.09.2020:
Option -Y was used instead of -J; the tshark man page describes it as:
-Y|--display-filter
Cause the specified filter (which uses the syntax of read/display
filters, rather than that of capture filters) to be applied before
printing a decoded form of packets or writing packets to a file.
You need to specify the option as a User Access Table, or uat, with the specific table identified by the name of the file, namely snmp_users. So, for example:
On Windows:
tshark.exe -o "uat:snmp_users:\"\",\"snmp_user\",\"SHA1\",\"xxxxxx\",\"AES\",\"yyyyyyy\"" -T ek -J "snmp" -P -V -x -r input.pcap > output.json
And on *nix:
tshark -o 'uat:snmp_users:"","snmp_user","SHA1","xxxxxx","AES","yyyyyyy"' -T ek -J "snmp" -P -V -x -r input.pcap > output.json
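As a quick sanity check (my suggestion, not part of the original answer), you can reuse the question's -Y display filter together with the same -o uat option and verbose output to confirm that the traps now decrypt, for example on *nix:
tshark -r input.pcap -o 'uat:snmp_users:"","snmp_user","SHA1","xxxxxx","AES","yyyyyyy"' -Y "snmp" -V | head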
Unfortunately, the Wireshark documentation is currently lacking when it comes to describing the uat option. There is a Google Summer of Code project underway in which Wireshark is participating, though, so perhaps the documentation will be improved here.

Perf doesn't show build-id of the application itself

When I run the following perf command to collect user-space data
$ perf record -e cycles:u -j any,u -a -o perf.data.user ./test
The output of build-id checking is
$ perf buildid-list -f -i perf.data.user
dbd41c586ea6789f3b998ed28be6ff37134e917a /lib/modules/4.19.125/build/vmlinux
b5381a457906d279073822a5ceb24c4bfef94ddb /lib/x86_64-linux-gnu/libc-2.23.so
ce17e023542265fc11d9bc8f534bb4f070493d30 /lib/x86_64-linux-gnu/libpthread-2.23.so
5d7b6259552275a3c17bd4c3fd05f5a6bf40caa5 /lib/x86_64-linux-gnu/ld-2.23.so
55a35b6df1526bf3d69586896785bf1df0bb4be6 [vdso]
59081d88e819c2fd3bcd8e58bc0c858c0ee2c3a9 /home/mahmood/bin/perf
8edd43fbf5a6d895172d205a7248a814e3b07bb2 /home/mahmood/kernel-4.19.125/test/test
2c0a469e1700fdd0b2f670737dabafeb9c38f909 /opt/glibc-2.23-install/libc.so
As you can see, /home/mahmood/kernel-4.19.125/test/test, which is the test binary, has a build-id. That is fine. However, when I run
$ perf record -e cycles:k -j any,k -a -o perf.data.kernel ./test
I don't see the application's build-id in the same output
$ perf buildid-list -f -i perf.data.kernel
dbd41c586ea6789f3b998ed28be6ff37134e917a /lib/modules/4.19.125/build/vmlinux
49b4a1a69bb9aebaca5147b9821e1a3a2ca759f3 /lib/modules/4.19.125/kernel/net/ipv4/netfilter/iptable_filter.ko
bb4e88298fe274b1bec7ba8ab24e6a9670b93d04 /lib/modules/4.19.125/kernel/net/ipv4/netfilter/nf_nat_ipv4.ko
ee37b9e0cc9b7be3ca543ecfeaa6bde28b77df7d /lib/modules/4.19.125/kernel/net/netfilter/nf_nat.ko
2bc71fd8d0c750aa3427a31639ce75a16a3c288c /lib/modules/4.19.125/kernel/net/netfilter/nf_conntrack.ko
e5dfa4829fe8f9ed3185b708225a5bab8d6d0afe /lib/modules/4.19.125/kernel/net/ipv4/netfilter/nf_defrag_ipv4.ko
5d52d35a5b99dd81fed002ba571a7afe32b26cbd /lib/modules/4.19.125/kernel/net/ipv4/netfilter/ip_tables.ko
d19830cb5c697cb2583d327c28aa3961c945005d /lib/modules/4.19.125/kernel/drivers/gpu/drm/nouveau/nouveau.ko
b816c95c09032342acd644128cf4d21251b3578a /lib/modules/4.19.125/kernel/drivers/net/ethernet/intel/igb/igb.ko
da3d32f0230efe8329fae49f9de60ddaeddf48a9 /lib/modules/4.19.125/kernel/drivers/ata/libahci.ko
b5381a457906d279073822a5ceb24c4bfef94ddb /lib/x86_64-linux-gnu/libc-2.23.so
55a35b6df1526bf3d69586896785bf1df0bb4be6 [vdso]
Is there any reason for that? I ask because my analyzer checks the build-ids and reports an error about the missing 8edd43fbf5a6d895172d205a7248a814e3b07bb2, which is my application itself. The analyzer has no problem with the former scenario.
UPDATE:
I think that since I am profiling kernel activity, there is no sign of the program itself in the recorded data. However, if I run perf record -e cycles -j any -a -o perf.data.kernel ./test, then I also see the program's build-id in addition to the kernel files.
dbd41c586ea6789f3b998ed28be6ff37134e917a /lib/modules/4.19.125/build/vmlinux
49b4a1a69bb9aebaca5147b9821e1a3a2ca759f3 /lib/modules/4.19.125/kernel/net/ipv4/netfilter/iptable_filter.ko
bb4e88298fe274b1bec7ba8ab24e6a9670b93d04 /lib/modules/4.19.125/kernel/net/ipv4/netfilter/nf_nat_ipv4.ko
ee37b9e0cc9b7be3ca543ecfeaa6bde28b77df7d /lib/modules/4.19.125/kernel/net/netfilter/nf_nat.ko
2bc71fd8d0c750aa3427a31639ce75a16a3c288c /lib/modules/4.19.125/kernel/net/netfilter/nf_conntrack.ko
e5dfa4829fe8f9ed3185b708225a5bab8d6d0afe /lib/modules/4.19.125/kernel/net/ipv4/netfilter/nf_defrag_ipv4.ko
5d52d35a5b99dd81fed002ba571a7afe32b26cbd /lib/modules/4.19.125/kernel/net/ipv4/netfilter/ip_tables.ko
d19830cb5c697cb2583d327c28aa3961c945005d /lib/modules/4.19.125/kernel/drivers/gpu/drm/nouveau/nouveau.ko
b816c95c09032342acd644128cf4d21251b3578a /lib/modules/4.19.125/kernel/drivers/net/ethernet/intel/igb/igb.ko
55a35b6df1526bf3d69586896785bf1df0bb4be6 [vdso]
59081d88e819c2fd3bcd8e58bc0c858c0ee2c3a9 /home/mahmood/bin/perf
8edd43fbf5a6d895172d205a7248a814e3b07bb2 /home/mahmood/kernel-4.19.125/test/test
b592b0baf11cf7172f25d71f5f69de2d762897cb /opt/glibc-2.23-install/lib/ld-2.23.so
2c0a469e1700fdd0b2f670737dabafeb9c38f909 /opt/glibc-2.23-install/libc.so
I'm not sure whether I'm right, though. Any comments are welcome.
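As a side note (my own suggestion, not part of the original post), a binary's build-id can also be read straight from its ELF notes with readelf, which is a handy cross-check against whatever perf recorded; the value should match the hash shown in the listings above:
$ readelf -n /home/mahmood/kernel-4.19.125/test/test | grep "Build ID"
    Build ID: 8edd43fbf5a6d895172d205a7248a814e3b07bb2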

grep all binary files in a folder to show only a specific word followed by numbers

I spent several hours trying to figure out what I am doing wrong. Thanks for any help in advance.
I want to grep the string toze359485948584 from multiple different binary files within a specific folder. The first part of the string stays the same, but the 12 digits after the word toze can change.
When I use
grep -a -o -E -r 'toze' /my folder/
I get the output toze
but when I use
grep -a -o -E -r 'toze[0-9]' /my folder/
I get no output at all.
The word toze is the same in all other binary files within that folder but the 12 digits following it are different from file to file.
Example of file:
:?5o2g0?2?76=1?7?5 clasFSCl??˹?t0?l?Ah?Ob??9??$[??Te?J? ????C?'fھ???ӽ?Agj?(m?r??q[4 '?E??'黼}v?seUC?ؑFh??0?-?:??ꅜP?~0?zMANP1?p?????cBMac60:30:d4:2d:0d:c2???ɜm0SrNm9I4l6?5?5?=?4!3L2?2?5}3
6?636?5{1(1?/?.uDX3X3JWLHG7F?????cWMac60:30:d4:2b:ef:ab?????c
/U/]-?5?6m+?.?-?*?*a-4;6'.?-?0x*?.?,00?faic??˵?i0toze359485948584??˹?t#0!inst00008010-001348443E100026?????d:08seid0040E3FF32F48800180401178969456532CBE6122F11BB554?????n*0(srvn :??j?^<?`m4,G????##???180718064325Z?????d0tsid928C7F80C073CA01???ٚR? 0?NvMR1???????T0DGSTo8En?HC??G??]???Q???????s0
,?0M/540K21 clasNvMR??˹?t0instF5?l?Ah?Ob??9??$[??Te?J? ????C?'fھ???ӽ?Agj?(m?r??q[4 '?E??'黼}v?seUC?ؑFh??0?-?:?????l?0?bbcl1?
RiMcP?SYS?Hs9v>B|B?AC?#?A?=$;U<?;?>?C?9?:E9?4X<7?:6?9?5-4?4?68?8?355L5$2
Because more than one digit follows the word, you can try something like:
grep -a -o -E -r 'toze[0-9].' "/my folder/"
If you are willing to loop over the files and handle them one by one, you can simplify the work with:
strings "$file" | grep -a -o -E 'toze[0-9].'
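Since the question says the word is always followed by exactly 12 digits, a tighter pattern (my suggestion, assuming GNU grep with -E) avoids the trailing wildcard; the folder path is quoted here because it contains a space:
grep -a -o -E -r 'toze[0-9]{12}' "/my folder/"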

How do I use tshark to print request-response pairs from a pcap file?

Given a pcap file, I'm able to extract a lot of information from the reconstructed HTTP request and responses using the neat filters provided by Wireshark. I've also been able to split the pcap file into each TCP stream.
The trouble I'm running into now is that, of all the cool filters I'm able to use with tshark, I can't find one that will let me print out the full request/response bodies. I'm calling something like this:
tshark -r dump.pcap -R "tcp.stream==123 and http.request" -T fields -e http.request.uri
Is there some filter name I can pass to -e to get the request/response body? The closest I've come is the -V flag, but it also prints out a bunch of information I don't necessarily want and would have to kludge out with a "dumb" filter.
If you are willing to switch to another tool, tcptrace can do this with the -e option. It also has an HTTP analysis extension (the -xHTTP option) that generates the HTTP request/response pairs for each TCP stream.
Here is a usage example:
tcptrace --csv -xHTTP -f'port=80' -lten capturefile.pcap
--csv to format the output as comma-separated values
-xHTTP for HTTP request/response analysis, written to 'http.times'; this also switches on -e to dump the TCP stream payloads, so you don't really need -e as well
-f'port=80' to filter out non-web traffic
-l for the long output form
-t to give progress indication
-n to turn off hostname resolution (much faster without it)
If you captured a pcap file, you can do the following to show all requests+responses.
filename="capture_file.pcap"
for stream in `tshark -r "$filename" -2 -R "tcp and (http.request or http.response)" -T fields -e tcp.stream | sort -n | uniq`; do
echo "==========BEGIN REQUEST=========="
tshark -q -r "$filename" -z follow,tcp,ascii,$stream;
echo "==========END REQUEST=========="
done;
I just made diyism's answer a bit easier to understand (you don't need sudo, and a multi-line script is, in my opinion, easier to read).
This probably wasn't an option when the question was asked but newer versions of tshark can "follow" conversations.
tshark -nr dump.pcap -qz follow,tcp,ascii,123
I know this is a super old question. I'm just adding this for anyone that ends up here looking for a current solution.
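On a related note (an assumption on my part, so check tshark -z help on your build), recent tshark versions also accept http as a follow type, which limits the output to the reassembled HTTP payloads of that stream:
tshark -nr dump.pcap -qz follow,http,ascii,123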
I use this one-liner to show the last 10 seconds of request and response bodies (https://gist.github.com/diyism/eaa7297cbf2caff7b851):
sudo tshark -a duration:10 -w /tmp/input.pcap;for stream in `sudo tshark -r /tmp/input.pcap -R "tcp and (http.request or http.response) and !(ip.addr==192.168.0.241)" -T fields -e tcp.stream | sort -n | uniq`; do sudo tshark -q -r /tmp/input.pcap -z follow,tcp,ascii,$stream; done;sudo rm /tmp/input.pcap

Spider a Website and Return URLs Only

I'm looking for a way to pseudo-spider a website. The key is that I don't actually want the content, but rather a simple list of URIs. I can get reasonably close to this idea with Wget using the --spider option, but when piping that output through a grep, I can't seem to find the right magic to make it work:
wget --spider --force-html -r -l1 http://somesite.com | grep 'Saving to:'
The grep filter seems to have absolutely no effect on the wget output. Have I got something wrong, or is there another tool I should try that's more geared towards providing this kind of limited result set?
UPDATE
So I just found out, offline, that wget writes to stderr by default. I missed that in the man pages (in fact, I still haven't found it, if it's in there). Once I redirected stderr to stdout, I got closer to what I need:
wget --spider --force-html -r -l1 http://somesite.com 2>&1 | grep 'Saving to:'
I'd still be interested in other/better means for doing this kind of thing, if any exist.
The absolute last thing I want to do is download and parse all of the content myself (i.e. create my own spider). Once I learned that Wget writes to stderr by default, I was able to redirect it to stdout and filter the output appropriately.
wget --spider --force-html -r -l2 $url 2>&1 \
| grep '^--' | awk '{ print $3 }' \
| grep -v '\.\(css\|js\|png\|gif\|jpg\)$' \
> urls.m3u
This gives me a list of the URIs of the content resources (resources that aren't images, CSS, or JS source files) that are spidered.
The output still needs to be streamlined slightly (it produces duplicates, as shown above), but it's almost there, and I haven't had to do any parsing myself.
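To deal with those duplicates, one small tweak (my addition, not part of the original answer) is to pipe the list through sort -u before writing it out:
wget --spider --force-html -r -l2 $url 2>&1 \
| grep '^--' | awk '{ print $3 }' \
| grep -v '\.\(css\|js\|png\|gif\|jpg\)$' \
| sort -u \
> urls.m3u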
Create a few regular expressions to extract the addresses from all <a href="(ADDRESS_IS_HERE)">.
Here is the solution I would use:
wget -q http://example.com -O - | \
tr "\t\r\n'" ' "' | \
grep -i -o '<a[^>]\+href[ ]*=[ \t]*"\(ht\|f\)tps\?:[^"]\+"' | \
sed -e 's/^.*"\([^"]\+\)".*$/\1/g'
This will output all http, https, ftp, and ftps links from a webpage. It will not give you relative URLs, only full URLs.
Explanation regarding the options used in the series of piped commands:
wget -q makes it not have excessive output (quiet mode).
wget -O - makes it so that the downloaded file is echoed to stdout, rather than saved to disk.
tr is the unix character translator, used in this example to translate newlines and tabs to spaces, as well as convert single quotes into double quotes so we can simplify our regular expressions.
grep -i makes the search case-insensitive
grep -o makes it output only the matching portions.
sed is the Stream EDitor unix utility which allows for filtering and transformation operations.
sed -e just lets you feed it an expression.
Running this little script on "http://craigslist.org" yielded quite a long list of links:
http://blog.craigslist.org/
http://24hoursoncraigslist.com/subs/nowplaying.html
http://craigslistfoundation.org/
http://atlanta.craigslist.org/
http://austin.craigslist.org/
http://boston.craigslist.org/
http://chicago.craigslist.org/
http://cleveland.craigslist.org/
...
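If you also need root-relative links (those that start with /), a variation on the same pipeline (my sketch, not part of the original answer) drops the scheme requirement from the grep and prefixes the base URL afterwards; page-relative paths and #fragment links are still not handled:
wget -q http://example.com -O - | \
tr "\t\r\n'" ' "' | \
grep -i -o '<a[^>]\+href[ ]*=[ \t]*"[^"]\+"' | \
sed -e 's/^.*"\([^"]\+\)".*$/\1/g' | \
sed -e 's|^/|http://example.com/|'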
I've used a tool called xidel
xidel http://server -e '//a/#href' |
grep -v "http" |
sort -u |
xargs -L1 -I {} xidel http://server/{} -e '//a/#href' |
grep -v "http" | sort -u
A little hackish, but it gets you closer! This is only the first level. Imagine packing this up into a self-recursive script!
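For what it's worth, here is a rough sketch of what that self-recursive script might look like, under the same assumptions as the answer above: http://server is a placeholder host, links containing "http" are skipped as off-site, everything else is treated as a path under the server root, and the depth limit is arbitrary. I've written the attribute selector with the standard XPath @href syntax.
#!/bin/bash
# Sketch only: depth-limited recursion over same-site links using xidel.
server="http://server"    # placeholder, as in the answer above
max_depth=2

spider() {
    local path="$1" depth="$2"
    [ "$depth" -gt "$max_depth" ] && return
    # List hrefs on this page, keep only the relative ones, and recurse into each.
    xidel "$server/$path" -e '//a/@href' | grep -v "http" | sort -u |
    while read -r link; do
        echo "$link"
        spider "$link" $((depth + 1))
    done
}

spider "" 1 | sort -u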
