How to access ping's time - network-programming

When I ping some IP on Windows 7, the result looks like this:
"Reply from 113.164.49.22: bytes=32 time=256ms TTL=47". Is there any command to change the output so that it shows just the time?

ping /? gives help. It doesn't show an option that will display only the time.
It shouldn't be difficult to cobble together a PowerShell script that uses a regular expression to strip out just the time portion.
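As a rough sketch of the same idea in Lua (rather than PowerShell, assuming a Lua interpreter is available and English-language ping output), the time can be pulled out with a pattern match:
-- send one echo request and print only the round-trip time in ms
local p = io.popen("ping -n 1 113.164.49.22")   -- -n 1: a single request on Windows
for line in p:lines() do
  local ms = line:match("time[=<](%d+)ms")      -- picks out e.g. 256 from "time=256ms"
  if ms then print(ms) end
end
p:close()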

Related

Lua script to read from the serial port in OpenWRT

I have an OpenWrt router with an Arduino connected via a USB FTDI adapter.
The serial port is /dev/ttyUSB0.
The Arduino code prints some data.
The first part of the data is printed with delays via Serial.print(), for example:
Serial.begin(9600);
Serial.print(var1);
delay(1000);
Serial.print(var2);
delay(1000);
Serial.print(var3);
delay(1000);
And the second part is printed with Serial.println():
Serial.println("");
Serial.println(var4);
Serial.println(var5);
Serial.println(var6);
So when I open the serial port in a terminal I see something like this:
1
then a 1-second pause, then
1 2
another pause, and then
1 2 3
a last pause, and finally
1 2 3
4
5
6
This works in a terminal program and in the OpenWrt console, for example with screen /dev/ttyUSB0.
I need to make a Lua script that reads the serial port and prints the data in the same way. I have a simple script, but it doesn't work as expected.
rserial = io.open("/dev/ttyUSB0", "r")
while true do
  chain = nil
  while chain == nil do
    chain = rserial:read()
    print(chain)
  end
end
It shows all the data at once.
It doesn't show the first three variables one by one with the delays.
It seems this is because of rserial:read(), which reads until it receives a newline character.
This is stated in a similar question:
How to read from a serial port in lua
I tried to run the command that was advised there:
stty -F /dev/ttyUSB0 -icanon
but it doesn't help and I don't understand why.
Is there a way to fix this behavior via stty?
Or do I definitely need to use another serial library for the Lua script?
All of those libraries seem pretty outdated by now and I don't want to use outdated stuff.
From the Lua Reference Manual:
When called without formats, it uses a default format that reads the
next line (see below).
A line is everything in the buffer up to the next newline character.
So as long as you don't send a newline character, Lua will wait for one, as it has been told to do by calling read().
Once a newline character is received, read() returns and you get every character of that line at once.
Terminal programs usually print every byte as it arrives, showing what they receive in "real time".
So if you want the same behaviour, you cannot use read() without any arguments.
Use read(1) to read a single byte at a time without waiting for anything else.
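For example, a minimal sketch of the byte-by-byte approach (assuming the port has already been set up with something like stty -F /dev/ttyUSB0 9600 -icanon):
-- read /dev/ttyUSB0 one byte at a time and echo it immediately
local rserial = assert(io.open("/dev/ttyUSB0", "r"))
while true do
  local byte = rserial:read(1)   -- returns as soon as a single byte is available
  if byte then
    io.write(byte)               -- print without adding a newline
    io.flush()                   -- show it right away, like a terminal program does
  end
end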

Wireshark script to sum Length per Source IP

I am capturing the router interface on my Fritzbox modem and then using Wireshark to view it.
I'd like a script that filters a number of source IPs and then sums all the lengths (data quantity) associated with them, effectively giving me the total data usage of each IP address I monitor.
Conceptually it sounds simple, but after a look at Lua, I think I'm in over my head.
Thanks.
Maybe tshark can help you achieve your goal directly, without the need for a script at all? Have you tried something like:
tshark -r file.pcap -z endpoints,ip,"ip.src eq 1.1.1.1" -q
... where 1.1.1.1 represents the IP address of the endpoint you're interested in gathering statistics for? You can specify as many endpoints as you need by or'ing them together, or even use a subnet such as "ip.src eq 1.1.1.0/24", for example.
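If you do want to go the Lua route, a rough sketch of a tap listener that totals frame bytes per IPv4 source address might look like this (the file name iptotals.lua is just an example; run it with tshark -q -r file.pcap -X lua_script:iptotals.lua):
-- sum captured frame lengths per IPv4 source address
local ip_src = Field.new("ip.src")
local tap = Listener.new("ip")
local totals = {}

function tap.packet(pinfo, tvb)
  local src = ip_src()
  if src then
    local key = tostring(src)
    totals[key] = (totals[key] or 0) + pinfo.len   -- pinfo.len is the frame length
  end
end

function tap.draw()
  for addr, bytes in pairs(totals) do
    print(addr .. ": " .. bytes .. " bytes")
  end
end
With tshark -q reading a capture file, tap.draw() runs once after the whole file has been processed, so the totals are printed at the end.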

wrong timestamps in netflow data generated by ESXi

I have a problem with the "Date first seen" column in the result generated by nfdump. I have enabled NetFlow on an ESXi 5.5 host to send NetFlow data to a NetFlow collector. Up to now everything is OK, and I can capture the NetFlow data with nfcapd using the following command:
nfcapd -D -z -u netflow -p 9996 -n Esxi,192.168.20.54,/data/nfdump -S2 -e
But the problem is that when I filter the traffic with nfdump (e.g. with nfdump -R nfdump5/2016/ -c 10), I see "1970-01-01 03:30:00.000" in the "Date first seen" column for all entries! What should I do to get the right timestamps?
Any help is appreciated.
The NetFlow header has a timestamp for the whole datagram; most likely, their export is using the "first seen" field as an offset from that. It's possible nfdump isn't correctly interpreting that field; I'd recommend having a look at the capture in Wireshark, which I've found to be pretty reliable in decoding NetFlow. That will also let you examine the flow records directly to see if the timestamps are really coming across that small, or are just being misinterpreted. Just remember that if you're capturing NetFlow v9 or IPFIX, you'll need to make sure that your capture includes a template datagram.
If the ESXi's NetFlow isn't exporting timestamps correctly, you can also look into monitoring using a small virtual machine running a software flow exporter (there are a number of free ones available - just Google "free flow exporter") with an interface in promiscuous mode.
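As a concrete illustration of that offset relationship (assuming NetFlow v5/v9-style records, where the datagram header carries the export Unix time and the exporter's sysUptime, and the per-flow first/last fields are in milliseconds of sysUptime), the absolute start time works out roughly like this; the values below are made-up examples and this is only a sketch of the arithmetic, not nfdump's actual code:
-- header fields from the NetFlow datagram (example values)
local unix_secs  = 1459512000     -- export time of the datagram, Unix seconds
local sys_uptime = 86400000       -- exporter uptime in ms at export time
-- per-flow record field (example value)
local first_ms   = 86340000       -- flow start, in ms of exporter uptime

-- absolute flow start = export time minus how long before export the flow began
local first_abs = unix_secs - (sys_uptime - first_ms) / 1000
print(os.date("!%Y-%m-%d %H:%M:%S", first_abs))   -- one minute before export time
A near-epoch result like 1970-01-01 03:30:00 (epoch zero shifted by a local UTC+3:30 offset) usually means the tool ended up treating a value of zero, or close to it, as an absolute timestamp.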

Adding timestamps to the journal file?

I'm wondering if it's possible to add timestamps to the journal file.
It appears that a date and time are recorded when SPSS is started, but if you have the program open for longer periods of time (e.g. days), nothing further breaks the journal up unless the program is closed.
Having timestamps would make it much easier to find what I'm looking for when I look back through the journal.
This is what I use to insert timestamps into my output:
HOST COMMAND=['echo %time%'].
However the journal file only shows the syntax.
The journal file is kept flushed and closed by Statistics, so you can probably write to it from another process. I don't think the suggestion above will work, because it will write the code but not the output to the journal. However, using Python you could do something like this.
begin program.
import time
open(r"full path to your journal file", "a").write("* " + time.asctime() + "\n")
end program.
I can't see why it shouldn't work, unless you are not using a Windows operating system.
On a Unix-like system such as Linux or macOS, which runs bash (or another shell), you would instead use
HOST COMMAND=['date'].
If you have the Python extension installed, you could also use Python code to print the date and time (which would be a platform-independent solution).
BEGIN PROGRAM.
import time
print time.ctime()
END PROGRAM.

wireshark: Capture Data Layer Only

Is there a way to capture only the data layer and disregard the upper layers in Wireshark? If not, is there a different packet-dump utility that can do this? Preferably one file per packet!
What I am looking for: a utility that dumps only the data (payload) layer to a file.
This is programming related! What I really want to do is compare all of the datagrams in order to start to understand a third-party encoding/protocol. Ideally, what would be great is a hex-compare utility that compares multiple files!
You should try right-clicking on a packet and selecting "Follow TCP Stream". Then you can save the TCP conversation to a raw file for further processing. This way you won't get all the TCP/IP protocol junk.
There is a function to limit the capture size in Wireshark, but it seems that 68 bytes is the smallest value. There are options for starting new files after a certain number of kilobytes, megabytes, or gigabytes, but again the smallest is 1 kilobyte, so that's probably not useful.
I would suggest looking at the pcap library and rolling your own. I've done this in the past using the Perl Net::Pcap library, but it could easily be done in other languages too.
If you have Unix/Linux available you might also look into tcpdump. You can limit the amount of data captured with -s. For example, "-s 14" would typically get you just the Ethernet header, which I assume is what you mean by the data-link layer. There are also options for controlling how often files are created by specifying a file size with -C. So theoretically, if you set the file size to the capture size, you'll get one file per packet.
Using tshark, I was able to print the data only by decoding the traffic as telnet and printing the field telnet.data:
tshark -r file.pcap -d tcp.port==80,telnet -T fields -e telnet.data
GET /test.js HTTP/1.1\x0d\x0a,User-Agent: curl/7.35.0\x0d\x0a,Host: 127.0.0.1\x0d\x0a,Accept: */*\x0d\x0a,\x0d\x0a
HTTP/1.1 404 Not Found\x0d\x0a,Server: nginx/1.4.6 (Ubuntu)\x0d\x0a,Date: Fri, 15 Jan 2016 11:32:58 GMT\x0d\x0a,Content-Type: text/html\x0d\x0a,Content-Length: 177\x0d\x0a,Connection: keep-alive\x0d\x0a,\x0d\x0a,<html>\x0d\x0a,<head><title>404 Not Found</title></head>\x0d\x0a,<body bgcolor=\"white\">\x0d\x0a,<center><h1>404 Not Found</h1></center>\x0d\x0a,<hr><center>nginx/1.4.6 (Ubuntu)</center>\x0d\x0a,</body>\x0d\x0a,</html>\x0d\x0a
Not perfect, but it was good enough for what I needed. I hope it helps someone.
