importing hex stream into wireshark

I have a 64-byte hex stream of a frame:
000A959D6816000A959A651508004500002E000000004006AF160A010101C0A8000A11D71EC6000000000000000050000000AD840000000102030405CC904CE3
How can I import it into Wireshark and see the whole packet?
The Import from Hex Dump option doesn't seem to work in my case if I save this stream into a text file and load it.

Since this stream is plain hex rather than a hex dump, od can't operate on it directly. The solution is to convert the hex back to binary first, and then run od -Ax -tx1 -v [file] on the binary file:
xxd -r -p [hexfile] [binaryfile]
od -Ax -tx1 -v [binaryfile]
Note: Use the combination -r -p to read plain hexadecimal dumps without line number information and without a particular column layout.
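Putting the two steps together looks like this (the file names are illustrative, and the sample stream is shortened):

```shell
# Round trip: plain hex -> raw binary (xxd) -> od-style hex dump (od).
# frame.hex holds the pasted hex stream on a single line.
printf '000A959D6816000A959A6515' > frame.hex   # shortened sample stream
xxd -r -p frame.hex frame.bin                   # plain hex -> binary
od -Ax -tx1 -v frame.bin                        # binary -> hex dump
```

The resulting dump is in the format the Import from Hex Dump dialog accepts.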

A hex stream can be transformed into an od-like format by filtering it through a couple of coreutils. The output can be fed into text2pcap, for example, to also set a link-layer type.
{ echo -n "0000 "; echo $hex_stream | fold -w 2 | paste -sd ' '; } | text2pcap -l 147 - $file
hex_stream is the data to be dissected and file is the pcap file to be written by text2pcap. I use this as part of a script that generates a temporary pcap from a hex stream and invokes tshark to dissect it - this gives me the dissection result immediately with no manual intervention.
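The formatting half of that one-liner can be wrapped in a small helper; to_hexdump is my own name for it, not part of any tool:

```shell
# Hypothetical helper: turn a plain hex stream into the
# "offset byte byte ..." line that text2pcap reads on stdin.
to_hexdump() {
    printf '0000 '
    printf '%s' "$1" | fold -w 2 | paste -sd ' ' -
}

to_hexdump "08004500002E"
# e.g.: to_hexdump "$hex_stream" | text2pcap -l 147 - frame.pcap
```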
The How to Dissect Anything page in the Wireshark wiki has further information on dissecting arbitrary data.

If you format your hex string as shown on this page, you should be able to use the Import from Hex Dump dialog to import the file you've created.

Related

introduce parsing loop / refactor ugly code

I am writing a script that reads from a binary file, converts to ASCII, extracts/delimits two columns, and writes the result out to a .txt file.
I looked at this post to implement the binary-to-ASCII step, but as implemented in my script it seems to perform the above process only on the first row of the file.
How would I rewrite this to loop through all rows in the file?
My code is below.
# run the command script to extract the file
script.cmd
# Read the entire file to an array of bytes.
$bytes = [System.IO.File]::ReadAllBytes("filePath")
# Decode first 'n' number of bytes to a text assuming ASCII encoding.
$text = [System.Text.Encoding]::ASCII.GetString($bytes, 0, 999999)|
# only keep columns 0-22; 148-149; separate with comma delimiter
%{ "$($_[0..22] -join ''),$($_[147..147] -join '')"} |
# convert the file to .txt
set-content path\file.txt
Also, what is a more elegant way of writing this part so it just reads the length of the string, instead of pulling in up to 999999 bytes?
$text = [System.Text.Encoding]::ASCII.GetString($bytes, 0, 999999)|
You don't need to specify index and count. Simply use
[System.Text.Encoding]::ASCII.GetString($bytes).Split("`r`n",[System.StringSplitOptions]::RemoveEmptyEntries)
or
[System.Text.Encoding]::ASCII.GetString([System.IO.File]::ReadAllBytes("filePath")).Split("`r`n",[System.StringSplitOptions]::RemoveEmptyEntries)
I'm not sure why you would want to read it as bytes, when you could simply use Get-Content.

How to convert .txt files to .xls files using Informix 4GL code

I have a question to discuss. I am working on INFORMIX 4GL programs that produce output text files. This is an example of the output:
Lot No|Purchaser name|Billing|Payment|Deposit|Balance|
J1006|JAUHARI BIN HAMIDI|5285.05|4923.25|0.00|361.80|
J1007|LEE, CHIA-JUI AKA LEE, ANDREW J. R.|5366.15|5313.70|0.00|52.45|
J1008|NAZRIN ANEEZA BINTI NAZARUDDIN|5669.55|5365.30|0.00|304.25|
J1009|YAZID LUTFI BIN AHMAD LUTFI|3180.05|3022.30|0.00|157.75|
From that output text (.txt) file, we can open the data manually in Excel (.xls). Is there any 4GL code or command we can use to open the text file in Microsoft Excel automatically right after we run the program? If there are any ideas, please share with me. Thank you.
The output shown is in the normal Informix UNLOAD format, using the pipe as a delimiter between fields. The nearest approach to this for Excel is a CSV file with comma-separated values. Generating one of those from that output is a little fiddly. You need to enclose fields containing a comma inside double quotes. You need to use commas in place of pipes. And you might have to worry about backslashes too.
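For the simple cases, the pipe-to-comma step can be sketched with awk; this sketch quotes fields that contain commas but ignores backslash escapes and embedded quotes entirely, which is exactly the fiddly part a proper converter has to handle:

```shell
# Naive UNLOAD -> CSV: split on '|', quote comma-containing fields.
# The trailing '|' on each UNLOAD line leaves an empty last field,
# so the loop stops at NF-1.
awk -F'|' -v OFS=',' '{
    line = ""
    for (i = 1; i < NF; i++) {
        f = $i
        if (f ~ /,/) f = "\"" f "\""
        line = (i == 1 ? f : line OFS f)
    }
    print line
}' report.unl
```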
It is a moot point whether it is easier to do the conversion in I4GL or whether to use a program to do the conversion. I think the latter, so I wrote this script a couple of years ago:
#!/usr/bin/env perl
#
# @(#)$Id: unl2csv.pl,v 1.1 2011/05/17 10:20:09 jleffler Exp $
#
# Convert Informix UNLOAD format to CSV
use strict;
use warnings;
use Text::CSV;
use IO::Wrap;

my $csv = Text::CSV->new({ binary => 1 }) or die "Failed to create CSV handle ($!)";
my $dlm = defined $ENV{DBDELIMITER} ? $ENV{DBDELIMITER} : "|";
my $out = wraphandle(\*STDOUT);
my $rgx = qr/((?:[^\Q$dlm\E]|(?:\\.))*)\Q$dlm\E/sm;
# $csv->eol("\r\n");
while (my $line = <>)
{
    print "1: $line";
    MultiLine:
    while ($line eq "\\\n" || $line =~ m/[^\\](?:\\\\)*\\$/)
    {
        my $extra = <>;
        last MultiLine unless defined $extra;
        $line .= $extra;
    }
    my @fields = split_unload($line);
    $csv->print($out, \@fields);
}

sub split_unload
{
    my($line) = @_;
    my @fields;
    print "$line";
    while ($line =~ m/$rgx/g)
    {
        printf "%d: %s\n", scalar(@fields), $1;
        push @fields, $1;
    }
    return @fields;
}
__END__
=head1 NAME
unl2csv - Convert Informix UNLOAD to CSV format
=head1 SYNOPSIS
unl2csv [file ...]
=head1 DESCRIPTION
The unl2csv program converts a file from Informix UNLOAD file format to
the corresponding CSV (comma separated values) format.
The input delimiter is determined by the environment variable
DBDELIMITER, and defaults to the pipe symbol "|".
It is not assumed that each input line is terminated with a delimiter
(there are two variants of the UNLOAD format, one with and one without
the final delimiter).
=head1 EXAMPLES
Input:
10|12|excessive|cost \|of, living|
20|40|bou\\ncing tigger|grrrrrrrr|
Output:
10,12,"excessive","cost |of, living"
20,40,"bou\ncing tigger",grrrrrrrr
=head1 RESTRICTIONS
Since the unl2csv program does not know about binary blob data, it
cannot convert such data into the hex-encoded format that Informix
requires.
It can and does handle text blob data.
=head1 PRE-REQUISITES
Text::CSV_XS
=head1 AUTHOR
Jonathan Leffler <jleffler@us.ibm.com>
=cut
I generate Excel files from 4GL code by writing XML with the Excel progid (<?mso-application progid="Excel.Sheet"?>) so Excel opens it as a spreadsheet.
It's like writing HTML from 4GL, where you just write HTML code to a file; with Excel you write XML instead.

Turkish character encoding in gedit

I have a text written in Turkish, but it shows some strange characters, for example:
ý instead of ı, Ý instead of İ, etc. I tried converting the encoding to ISO 8859-9, but it didn't help.
If you're running a UNIX/Linux machine, try the following shell command:
you@somewhere:~$ file --mime yourfile.txt
It should output something like the snippet below, where iso-8859-1 is the actual character set your system assumes:
yourfile.txt: text/plain; charset=iso-8859-1
Now you can convert the file into some more flexible charset, like UTF-8:
you@somewhere:~$ iconv -f iso-8859-1 -t utf-8 yourfile.txt > converted.txt
The above snippet specifies both the charset to convert from (which should equal the output of the file command) and the charset to convert to. The result of converting yourfile.txt is then stored in converted.txt, which you should be able to open with gedit.
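In this particular case the bytes are probably fine and only the assumed charset is wrong: the byte 0xFD renders as ý in ISO-8859-1 but is ı in ISO-8859-9, and 0xDD is Ý versus İ. A quick sketch of the likely fix:

```shell
# Same bytes, two interpretations: reinterpreting as ISO-8859-9
# recovers the Turkish characters.
printf '\xfd\xdd' | iconv -f ISO-8859-9 -t UTF-8   # prints: ıİ
printf '\xfd\xdd' | iconv -f ISO-8859-1 -t UTF-8   # prints: ýÝ (the mojibake)
```

So if file reports iso-8859-1 but the text is Turkish, try converting from ISO-8859-9 instead.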
If that doesn't work, you may paste the output of the file command, as well as some real line of your file, in the comment section...

Addressing a specific occurrence of a character in sed

How do I remove or address a specific occurrence of a character in sed?
I'm editing a CSV file and I want to remove all text between the third and the fifth occurrence of the comma (that is, dropping fields four and five). Is there any way to achieve this using sed?
E.g:
% cat myfile
one,two,three,dropthis,dropthat,six,...
% sed -i 's/someregex//' myfile
% cat myfile
one,two,three,,six,...
If it is okay to consider cut command then:
$ cut -d, -f1-3,6- file
awk or any other tool that can split strings on delimiters is better suited for this job than sed.
$ cat file
1,2,3,4,5,6,7,8,9,10
Ruby(1.9+)
$ ruby -ne 's=$_.split(","); s[2,3]=nil ;puts s.compact.join(",") ' file
1,2,6,7,8,9,10
using awk
$ awk 'BEGIN{FS=OFS=","}{$3=$4=$5="";}{gsub(/,,*/,",")}1' file
1,2,6,7,8,9,10
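One caveat: the gsub(/,,*/,",") cleanup also swallows fields that were already empty. A sketch that rebuilds the record instead, keeping fields 1-3 and 6 onward as in the cut answer (still naive about quoted commas):

```shell
# Drop fields 4 and 5 by rebuilding the record, so genuinely empty
# fields elsewhere survive the edit.
awk 'BEGIN { FS = OFS = "," } {
    out = $1 OFS $2 OFS $3
    for (i = 6; i <= NF; i++) out = out OFS $i
    print out
}' file
```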
A real parser in action
#!/usr/bin/python
import csv
import sys
cr = csv.reader(open('my-data.csv', 'rb'))
cw = csv.writer(open('stripped-data.csv', 'wb'))
for row in cr:
    cw.writerow(row[0:3] + row[5:])
But do note the preface to the csv module:

The so-called CSV (Comma Separated Values) format is the most common import and export format for spreadsheets and databases. There is no “CSV standard”, so the format is operationally defined by the many applications which read and write it. The lack of a standard means that subtle differences often exist in the data produced and consumed by different applications. These differences can make it annoying to process CSV files from multiple sources. Still, while the delimiters and quoting characters vary, the overall format is similar enough that it is possible to write a single module which can efficiently manipulate such data, hiding the details of reading and writing the data from the programmer.
$ cat my-data.csv
1
1,2
1,2,3
1,2,3,4,
1,2,3,4,5
1,2,3,4,5,6
1,2,3,4,5,6,
1,2,,4,5,6
1,2,"3,3",4,5,6
1,"2,2",3,4,5,6
,,3,4,5
,,,4,5
,,,,5
$ python csvdrop.py
$ cat stripped-data.csv
1
1,2
1,2,3
1,2,3
1,2,3
1,2,3,6
1,2,3,6,
1,2,,6
1,2,"3,3",6
1,"2,2",3,6
,,3
,,
,,

ISQL 7.3 (SuSE): Ace report output to more than one file or stdout

Does anyone know how I could trick Ace into outputting to more than one file, or to a file and the display simultaneously, without having to write an external script? I.e., in the Ace spec: OUTPUT REPORT TO PIPE or OUTPUT REPORT TO "filename.out" > /dev/tty01a
For piping to multiple files, you can use:
OUTPUT
REPORT TO PIPE "tee file2 >file1"
You can do more than two files if you want to, courtesy of the abilities of the tee program. Clearly, if you want the output to go to standard output as well as to a file, you pipe it to tee without the '>' redirection.
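Standalone, what the PIPE clause runs looks like this (file names are illustrative):

```shell
# Copy the stream to two files and still pass it through to stdout.
echo "report body" | tee file1 file2
# Or to files only, discarding the terminal copy:
echo "report body" | tee file1 file2 > /dev/null
```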
You can get the output to a pager if you use:
OUTPUT
REPORT TO PIPE "tee file1 file2 | less"
