SNMP out or in octets (Network Traffic)

I am writing software in C# that measures network traffic (in and out octets, i.e. bytes) via SNMP. I need to know how many bytes pass in 1000 seconds.
From my research it looks like the counter value sometimes gets reset or wraps around, because some results come out negative.
I am using .1.3.6.1.2.1.2.2.1.10 (ifInOctets) for the input stream of interface .139.
Over 1024 seconds it gives a result of -2.1 MBytes.
How can I get an accurate measurement of traffic (in or out)?
EDIT: This is the code I use for the calculation. It reads the value every second and accumulates the result.
private void timer1_Tick(object sender, EventArgs e)
{
    Cursor.Current = Cursors.WaitCursor;
    SnmpObject objSnmpObject, objSnmpIfSpeed;
    objSnmpObject = (SnmpObject)objSnmpManager.Get(".1.3.6.1.2.1.2.2.1.16.139"); // ifOutOctets of interface 139
    objSnmpIfSpeed = (SnmpObject)objSnmpManager.Get(".1.3.6.1.2.1.2.2.1.5.139"); // ifSpeed of interface 139
    if (GetResult() == 0)
    {
        float value = Int64.Parse(objSnmpObject.Value);
        float ifSpeed = Int64.Parse(objSnmpIfSpeed.Value);
        float Bytes = (value * 8 * 100 / ifSpeed);
        // float megaBytes = Bytes / 1024;
        sum += Bytes;
        tb_calc.Text = (sum.ToString() + " Bytes");
    }
    _gv_timeSec++;
    lb_timer.Text = _gv_timeSec.ToString();
    Cursor.Current = Cursors.Default;
}

1.3.6.1.2.1.2.2.1.10 is the OID for IF-MIB::ifInOctets, which is described by the MIB as a Counter32, a type with an upper limit of 2^32-1 (4294967295 decimal):
"The total number of octets received on the interface,
including framing characters.
Discontinuities in the value of this counter can occur at
re-initialization of the management system, and at other times as
indicated by the value of ifCounterDiscontinuityTime."
Quoting this SO answer:
a Counter32 has no defined initial value, so a single reading of
Counter32 has no information content. This is why you have to take two
(or more) readings to make sense of it. An example of this would be
the number of packets received on an ethernet interface. If you take a
reading and get back 4 million packets, you haven't learned anything:
the wire could have been pulled out of the interface for the past
year, or it could be passing millions of packets per second. You have
to take multiple readings to know anything.

I'd recommend ifHCInOctets (.1.3.6.1.2.1.31.1.1.1.6) and ifHCOutOctets (.1.3.6.1.2.1.31.1.1.1.10), which are the 64-bit versions of the OIDs mentioned by @k1eran.
Those counters don't wrap around as quickly when dealing with higher speeds.
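Whichever counter you poll, a single reading is meaningless on its own: you need two readings plus the elapsed time between them, and you have to allow for the counter wrapping around in between (that wrap is where the negative values in the question come from). A minimal sketch of the calculation, assuming you have already parsed two successive readings into integers; BytesPerSecond is just an illustrative helper name:

// Sketch: compute the byte rate from two successive counter readings,
// compensating for at most one Counter32 wrap between the polls.
static double BytesPerSecond(ulong previous, ulong current, double elapsedSeconds)
{
    const ulong counterModulus = 4294967296;   // 2^32 for Counter32 (ifInOctets/ifOutOctets)
    ulong delta = current >= previous
        ? current - previous
        : counterModulus - previous + current; // the counter wrapped between the two polls
    return delta / elapsedSeconds;
}

Polling ifInOctets (or, better, ifHCInOctets) once per second and feeding each pair of readings through a helper like this gives bytes per second; summing the deltas over 1000 seconds gives the total the question asks for, without the negative artefacts.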

Related

Simple Babymonitor with Bass.DLL

I am trying to program a simple Babymonitor for Windows (personal use).
The babymonitor should just detect the dB level of the microphone and trigger at a certain volume.
After some research, I found the Bass.dll library and came across its function BASS_ChannelGetLevel, which is great but seems to have limitations and doesn't fit my needs (the peak is returned as a DWORD value).
In the examples I found a livespec example which is "almost" what I need. The example uses BASS_ChannelGetData, but I don't quite know how to handle the returned array...
I want to keep it as simple as possible: Detect the volume from the microphone as dB or any other value (e.g. value 0-MAXINT).
How can this be done with the Bass.dll library?
BASS_ChannelGetLevel returns a value that is capped at 0 dB (the return value is 32768 in this case). If you adjust your source level (lower the microphone level in the sound card settings) then it will work just fine.
Another way, if you want an uncapped value, is to use the BASS_ChannelGetLevelEx function instead: it returns floating-point levels, where 1 is the maximum (0 dB) value that corresponds to BASS_ChannelGetLevel's 32768, but it can exceed 1 to detect sound levels above 0 dB, which is what you may need.
I also suggest monitoring the sound level for a while: trigger only if a certain level persists for at least 2-3 seconds (this way you will exclude false alarms).
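For the "persists for a few seconds" part, here is a rough sketch of the debounce logic, assuming you already sample a dB value on a timer (for example from the RMS snippet further down); threshold, holdTime, OnLevelSample and OnAlarm are placeholder names, not Bass.dll API:

// Sketch: raise the alarm only when the level stays above the threshold
// for a sustained period, to filter out short noise spikes.
DateTime? loudSince = null;
readonly TimeSpan holdTime = TimeSpan.FromSeconds(2);
const double threshold = -30.0;                   // dB, tune to your microphone and room

void OnLevelSample(double decibels)
{
    if (decibels >= threshold)
    {
        if (loudSince == null)
            loudSince = DateTime.UtcNow;          // the loud period starts now
        if (DateTime.UtcNow - loudSince.Value >= holdTime)
            OnAlarm();                            // placeholder: level held for 2+ seconds
    }
    else
    {
        loudSince = null;                         // dropped below the threshold, reset
    }
}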
Here is how you obtain the dB level given an input stream handle (streamHandle):
var peak = (double)Bass.BASS_ChannelGetLevel(streamHandle);
var decibels = 20 * Math.Log10(peak / Int32.MaxValue);
Alternatively, you can use the following to get the RMS (average) level. To get the RMS value, you have to pass a sample length into BASS_ChannelGetLevel. I'm using 20 milliseconds here, but you can play with the value to see what works best for your needs.
double decibels = 0;
var channelCount = 2; // assuming two channels
var sampleLengthMS = 20f;
var rmsLevels = new float[channelCount];
var rmsObtained = Bass.BASS_ChannelGetLevel(streamHandle, rmsLevels, sampleLengthMS / 1000f, BASSLevel.BASS_LEVEL_RMS);
if (rmsObtained)
    decibels = 20 * Math.Log10(rmsLevels[0]); // using the first channel (index 0), but you can read both if needed
else
    Console.WriteLine(Bass.BASS_ErrorGetCode());
Hope this helps.

lua dissector for custom protocol

I have written several Lua Dissectors for custom protocols we use and they work fine. In order to spot problems with missing packets I need to check the custom protocol sequence numbers against older packets.
The IP source and Destination addresses are always the same for device A to device B.
Inside this packet we have one custom ID.
Each ID has a sequence number so device B can determine if a packet is missing. The sequence number increments by 256 and rolls over when it reaches 65k.
I have tried using a global dictionary, but when you scroll up and down the trace the dissector is rerun and the values change.
A couple of lines below show where the information is stored:
ID = buffer(0,6):bitfield(12,12)
SeqNum = buffer(0,6):bitfield(32,16)
Ideally I would like each decoded frame to indicate whether the previous sequence number is more than 256 away, and to produce a table listing all these bad frames:
Src IP; Dst IP; ID; Seq
1 10.12.1.2; 10.12.1.3; 10; 0
2 10.12.1.2; 10.12.1.3; 11; 0
3 10.12.1.2; 10.12.1.3; 12; 0
4 10.12.1.2; 10.12.1.3; 11; 255
5 10.12.1.2; 10.12.1.3; 12; 255
6 10.12.1.2; 10.12.1.3; 10; 511 Packet with seq 255 is missing
I have now managed to get the dissector to check the current packet against previous packets by using a global array, where I store specific information about each frame. While dissecting the current packet I check the most recent packet and work my way back to the start to find a suitable packet.
dict[pinfo.number] = {frame = pinfo.number, dID = ID, dSEQNUM = SeqNum}
local frameCount = 0
local frameFound = false
while frameFound == false do
    if pinfo.number > frameCount then
        frameCount = frameCount + 1
        if dict[(pinfo.number - frameCount)] ~= nil then
            if dict[(pinfo.number - frameCount)].dID == dict[pinfo.number].dID then
                seq_difference = (dict[(pinfo.number)].dSEQNUM - dict[(pinfo.number - frameCount)].dSEQNUM)
                if seq_difference > 256 then
                    pinfo.cols.info = string.format('ID-%d SeqNum-%d missing packet(s) %d last frame %d ', ID, SeqNum, seq_difference, dict[(pinfo.number - frameCount)].frame)
                end
                frameFound = true
            end
        end
    else
        frameFound = true
    end
end
I'm not sure I see a question to answer? If you're asking "how can I avoid having to deal with the dissector being invoked multiple times and screwing up the previous decoding of the values" - the answer to that is using the pinfo.visited boolean. It will be false the first time a given packet is dissected, and true thereafter no matter how much clicking around the user does - until the file is reloaded or a new one loaded.
To handle the reloading/new-file case, you'd hook into the init() call for your proto by defining a myproto.init() function, and in that you'd clear your entire array table.
Also, you might want to google for related questions/answers on ask.wireshark.org, as that site is more frequently used for Wireshark Lua API questions. For example this question/answer is similar and related to your case.

Calculate PTS before frame encoding in FFmpeg

How do I calculate the correct PTS value for a frame before encoding with the FFmpeg C API?
For encoding I'm using the function avcodec_encode_video2 and then writing the result with av_interleaved_write_frame.
I found some formulas, but none of them work.
In the doxygen example they are using:
frame->pts = 0;
for (;;) {
    // encode & write frame
    // ...
    frame->pts += av_rescale_q(1, video_st->codec->time_base, video_st->time_base);
}
This blog says the formula must be like this:
(1 / FPS) * sample rate * frame number
Someone uses only the frame number to set the pts:
frame->pts = videoCodecCtx->frame_number;
Or an alternative way:
int64_t now = av_gettime();
frame->pts = av_rescale_q(now, (AVRational){1, 1000000}, videoCodecCtx->time_base);
And the last one:
// 40 * 90 means 40 ms and 90 because of the 90kHz by the standard for PTS-values.
frame->pts = encodedFrames * 40 * 90;
Which one is correct? I think the answer to this question will be helpful not only to me.
It's better to think about PTS more abstractly before trying code.
What you're doing is meshing 3 "time sets" together. The first is the time we're used to, based on 1000 ms per second, 60 seconds per minute, and so on. The second is the codec time for the particular codec you are using. Each codec has a certain way it wants to represent time, usually in a 1/number format, meaning that for every second there are "number" ticks. The third format works similarly to the second, except that it is the time base of the container you are using.
Some people prefer to start with actual time, others frame count, neither is "wrong".
Starting with a frame count you need to first convert it based on your frame rate. Note that all the conversions I mention use av_rescale_q(...). The purpose of this conversion is to turn a counter into time, so you rescale with your frame rate (usually the video stream time base). Then you have to convert that into the time_base of your video codec before encoding.
Similarly, with a real time, your first conversion needs to be from current_time - start_time scaled to your video codec time.
Anyone using only the frame counter is probably using a codec with a time_base equal to their frame rate. Most codecs do not work like this, and that hack is not portable. Example:
frame->pts = videoCodecCtx->frame_number; // BAD
Additionally, anyone using hardcoded numbers in their av_rescale_q is leveraging the fact that they know what their time_base is and this should be avoided. The code isn't portable to other video formats. Instead use video_st->time_base, video_st->codec->time_base, and output_ctx->time_base to figure things out.
I hope understanding it from a higher level will help you see which of those are "correct" and which are "bad practice". There is no single answer, but maybe now you can decide which approach is best for you.
Time is measured not in seconds or milliseconds or any standard unit. Instead, it is measured by the avCodecContext's timebase.
So if you set codecContext->time_base to 1/1, it means using seconds for measurement.
cctx->time_base = (AVRational){1, 1};
Assuming you want to encode at a steady 30 fps, the time at which a frame is encoded is framenumber * (1.0 / fps).
But once again, the PTS is also not measured in seconds or any standard unit. It's measured by avStream's time_base.
In the question, the author mentioned 90k as the standard resolution for PTS, but you will see that this is not always true. The exact resolution is saved in the AVStream. You can read it back like this:
if ((err = avformat_write_header(ofctx, NULL)) < 0) {
    std::cout << "Failed to write header" << err << std::endl;
    return -1;
}
av_dump_format(ofctx, 0, "test.webm", 1);
std::cout << stream->time_base.den << " " << stream->time_base.num << std::endl;
The value of stream->time_base is only populated after calling avformat_write_header.
Therefore, the right formula for calculating PTS is:
//The following assumes that codecContext->time_base = (AVRational){1, 1};
videoFrame->pts = frameduration * (frameCounter++) * stream->time_base.den / (stream->time_base.num * fps);
So really there are 3 components in the formula,
fps
codecContext->time_base
stream->time_base
so pts = fps*codecContext->time_base/stream->time_base
I have detailed my discovery here
There's also the option of setting it with frame->pts = av_frame_get_best_effort_timestamp(frame), but I'm not sure this is the correct approach either.

Calculating modbus RTU 3.5 character time

I am new to Modbus and am developing an application using Modbus RTU. I would like to know how to find out the RTU message frame separation time. The Modbus RTU specification mentions a 3.5-character time, but it gives no more detail on how to determine this interval. What are the steps to calculate the separation time?
Take a look at page 13 of the Modbus Serial Line Protocol and Implementation Guide V1.02
At the bottom you will find a remark explaining the inter-character time-out (t1.5) and inter-frame delay (t3.5) values.
For baud rates over 19200 the values are fixed. For slower baud rates they need to be calculated (extract from the SimpleModbusMaster library for Arduino):
// Modbus states that a baud rate higher than 19200 must use a fixed 750 us
// for inter character time out and 1.75 ms for a frame delay.
// For baud rates below 19200 the timing is more critical and has to be calculated.
// E.g. 9600 baud in a 10 bit packet is 960 characters per second
// In milliseconds this will be 960 characters per 1000ms. So for 1 character
// 1000ms/960characters is 1.04167ms per character and finally modbus states an
// intercharacter must be 1.5T or 1.5 times longer than a normal character and thus
// 1.5T = 1.04167ms * 1.5 = 1.5625ms. A frame delay is 3.5T.
if (baud > 19200)
{
    T1_5 = 750;
    T3_5 = 1750;
}
else
{
    T1_5 = 15000000/baud;
    T3_5 = 35000000/baud;
}
Modbus RTU uses an 11-bit character, regardless of whether parity is used or not (with no parity, a second stop bit takes its place). The formula for one character time is therefore 11 * 1000000 / baud_rate microseconds; this applies for baud rates <= 19200 bps. For baud rates > 19200 bps a fixed time is used: 1750 microseconds for the 3.5-character time and 750 microseconds for the 1.5-character time.
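For illustration, a small sketch of that calculation (written in C# like the first question above, purely as an example; ModbusSilentIntervals is a placeholder name, not part of any Modbus library):

// Sketch: inter-character (T1.5) and inter-frame (T3.5) delays in microseconds,
// using the 11-bit RTU character below 19200 baud and the fixed values above it.
static (double T1_5, double T3_5) ModbusSilentIntervals(int baud)
{
    if (baud > 19200)
        return (750, 1750);                     // fixed by the specification

    double charTimeUs = 11.0 * 1000000 / baud;  // one 11-bit character, in microseconds
    return (1.5 * charTimeUs, 3.5 * charTimeUs);
}

At 9600 baud this gives roughly 1719 µs for T1.5 and 4010 µs for T3.5.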

Reassembling packets in a Lua Wireshark Dissector

I'm trying to write a dissector for the Safari Remote Debug protocol which is based on bplists and have been reasonably successful (current code is here: https://github.com/andydavies/bplist-dissector).
I'm running into difficulty with reassembling packets though.
Normally the protocol sends a packet with 4 bytes containing the length of the next packet, then the packet with the bplist in.
Unfortunately some packets from the iOS simulator don't follow this convention and the four bytes are either tagged onto the front of the bplist packet, or onto the end of the previous bplist packet, or the data is multiple bplists.
I've tried reassembling them using desegment_len and desegment_offset as follows:
function p_bplist.dissector(buf, pkt, root)
    -- length of data packet
    local dataPacketLength = tonumber(buf(0, 4):uint())
    local desiredPacketLength = dataPacketLength + 4
    -- if not enough data indicate how much more we need
    if desiredPacketLength > buf:len() then
        pkt.desegment_len = dataPacketLength
        pkt.desegment_offset = 0
        return
    end
    -- have more than needed so set offset for next dissection
    if buf:len() > desiredPacketLength then
        pkt.desegment_len = DESEGMENT_ONE_MORE_SEGMENT
        pkt.desegment_offset = desiredPacketLength
    end
    -- copy data needed
    buffer = buf:range(4, dataPacketLength)
    ...
What I'm attempting to do here is always force the size bytes to be the first four bytes of a packet to be dissected, but it doesn't work: I still see a 4-byte packet followed by an x-byte packet.
I can think of other ways of managing the extra four bytes on the front, but the protocol contains a lookup table that's 32 bytes from the end of the packet, so I need a way of accurately splicing the packet into bplists.
Here's an example cap: http://www.cloudshark.org/captures/2a826ee6045b #338 is an example of a packet where the bplist size is at the start of the data and there are multiple plists in the data.
Am I doing this right (looking at other questions on SO and examples around the web, I seem to be) or is there a better way?
TCP Dissector packet-tcp.c has tcp_dissect_pdus(), which
Loop for dissecting PDUs within a TCP stream; assumes that a PDU
consists of a fixed-length chunk of data that contains enough information
to determine the length of the PDU, followed by rest of the PDU.
There is no such function in the Lua API, but it is a good example of how to do it.
One more example. I used this a year ago for tests:
local slicer = Proto("slicer", "Slicer")
function slicer.dissector(tvb, pinfo, tree)
    local offset = pinfo.desegment_offset or 0
    local len = get_len() -- for tests i used a constant, but can be taken from tvb
    while true do
        local nxtpdu = offset + len
        if nxtpdu > tvb:len() then
            pinfo.desegment_len = nxtpdu - tvb:len()
            pinfo.desegment_offset = offset
            return
        end
        tree:add(slicer, tvb(offset, len))
        offset = nxtpdu
        if nxtpdu == tvb:len() then
            return
        end
    end
end
local tcp_table = DissectorTable.get("tcp.port")
tcp_table:add(2506, slicer)

Resources