I am new to Modbus and am developing an application using Modbus RTU. I would like to know how to determine the RTU message frame separation time. The Modbus RTU specification mentions 3.5 character times, but gives no further detail on how to decide this interval. What are the steps to calculate the separation time?
Take a look at page 13 of the Modbus Serial Line Protocol and Implementation Guide V1.02
At the bottom you will find a remark explaining the inter-character time-out (t1.5) and inter-frame delay (t3.5) values.
For baud rates over 19200 the values are fixed. For slower baud rates they need to be calculated (extract from the SimpleModbusMaster library for Arduino):
// Modbus states that a baud rate higher than 19200 must use a fixed 750 us
// inter-character timeout and 1.75 ms for a frame delay.
// For baud rates below 19200 the timing is more critical and has to be calculated.
// E.g. 9600 baud with a 10 bit character gives 960 characters per second.
// In milliseconds this is 960 characters per 1000 ms, so one character takes
// 1000 ms / 960 characters = 1.04167 ms. Finally, Modbus states that the
// inter-character timeout must be 1.5T, i.e. 1.5 times one character time, thus
// 1.5T = 1.04167 ms * 1.5 = 1.5625 ms. A frame delay is 3.5T.
if (baud > 19200)
{
  T1_5 = 750;
  T3_5 = 1750;
}
else
{
  T1_5 = 15000000/baud;
  T3_5 = 35000000/baud;
}
Modbus RTU uses 11-bit characters, regardless of whether parity is used. The one-character time is therefore 11 * 1000000 / baud_rate microseconds; this applies for baud rates <= 19200 bps. For baud rates > 19200 bps, fixed times are used: 1750 microseconds for the 3.5-character time and 750 microseconds for the 1.5-character time.
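To make the arithmetic concrete, here is a minimal C++ sketch of how those intervals could be computed, assuming the 11-bit character framing described above; the function and variable names are just for this example:

// Minimal sketch: compute Modbus RTU t1.5 and t3.5 in microseconds,
// assuming 11-bit characters (start + 8 data + parity/stop + stop).
#include <cstdint>
#include <cstdio>

struct ModbusTiming {
    uint32_t t1_5_us;  // inter-character timeout
    uint32_t t3_5_us;  // inter-frame delay
};

ModbusTiming modbusTiming(uint32_t baud) {
    ModbusTiming t;
    if (baud > 19200) {
        // Fixed values mandated by the spec for high baud rates
        t.t1_5_us = 750;
        t.t3_5_us = 1750;
    } else {
        // One 11-bit character takes 11 * 1e6 / baud microseconds.
        // Integer math truncates slightly; round up if you need to be strict.
        uint32_t charTimeUs = 11UL * 1000000UL / baud;
        t.t1_5_us = charTimeUs * 3 / 2;   // 1.5 character times
        t.t3_5_us = charTimeUs * 7 / 2;   // 3.5 character times
    }
    return t;
}

int main() {
    ModbusTiming t = modbusTiming(9600);  // e.g. 9600 baud
    std::printf("t1.5 = %u us, t3.5 = %u us\n", t.t1_5_us, t.t3_5_us);
    // Roughly 1.7 ms and 4.0 ms at 9600 baud with 11-bit characters
}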
I'm attempting to sync recorded audio (from an AVAudioEngine inputNode) to an audio file that was playing during the recording process. The result should be like multitrack recording where each subsequent new track is synced with the previous tracks that were playing at the time of recording.
Because sampleTime differs between the AVAudioEngine's output and input nodes, I use hostTime to determine the offset of the original audio and the input buffers.
On iOS, I would assume that I'd have to use AVAudioSession's various latency properties (inputLatency, outputLatency, ioBufferDuration) to reconcile the tracks as well as the host time offset, but I haven't figured out the magic combination to make them work. The same goes for the various AVAudioEngine and Node properties like latency and presentationLatency.
On macOS, AVAudioSession doesn't exist (outside of Catalyst), meaning I don't have access to those numbers. Meanwhile, the latency/presentationLatency properties on the AVAudioNodes report 0.0 in most circumstances. On macOS, I do have access to AudioObjectGetPropertyData and can ask the system about kAudioDevicePropertyLatency, kAudioDevicePropertyBufferSize, kAudioDevicePropertySafetyOffset, etc., but am again at a bit of a loss as to what the formula is to reconcile all of these.
I have a sample project at https://github.com/jnpdx/AudioEngineLoopbackLatencyTest that runs a simple loopback test (on macOS, iOS, or Mac Catalyst) and shows the result. On my Mac, the offset between tracks is ~720 samples. On others' Macs, I've seen as much as 1500 samples offset.
On my iPhone, I can get it close to sample-perfect by using AVAudioSession's outputLatency + inputLatency. However, the same formula leaves things misaligned on my iPad.
What's the magic formula for syncing the input and output timestamps on each platform? I know it may be different on each, which is fine, and I know I won't get 100% accuracy, but I would like to get as close as possible before going through my own calibration process.
Here's a sample of my current code (full sync logic can be found at https://github.com/jnpdx/AudioEngineLoopbackLatencyTest/blob/main/AudioEngineLoopbackLatencyTest/AudioManager.swift):
//Schedule playback of original audio during initial playback
let delay = 0.33 * state.secondsToTicks
let audioTime = AVAudioTime(hostTime: mach_absolute_time() + UInt64(delay))
state.audioBuffersScheduledAtHost = audioTime.hostTime
...
//in the inputNode's inputTap, store the first timestamp
audioEngine.inputNode.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) { (pcmBuffer, timestamp) in
    if self.state.inputNodeTapBeganAtHost == 0 {
        self.state.inputNodeTapBeganAtHost = timestamp.hostTime
    }
}
...
//after playback, attempt to reconcile/sync the timestamps recorded above
let timestampToSyncTo = state.audioBuffersScheduledAtHost
let inputNodeHostTimeDiff = Int64(state.inputNodeTapBeganAtHost) - Int64(timestampToSyncTo)
let inputNodeDiffInSamples = Double(inputNodeHostTimeDiff) / state.secondsToTicks * inputFileBuffer.format.sampleRate //secondsToTicks is calculated using mach_timebase_info
//play the original metronome audio at sample position 0 and try to sync everything else up to it
let originalAudioTime = AVAudioTime(sampleTime: 0, atRate: renderingEngine.mainMixerNode.outputFormat(forBus: 0).sampleRate)
originalAudioPlayerNode.scheduleBuffer(metronomeFileBuffer, at: originalAudioTime, options: []) {
    print("Played original audio")
}
//play the tap of the input node at its determined sync time -- this _does not_ appear to line up in the result file
let inputAudioTime = AVAudioTime(sampleTime: AVAudioFramePosition(inputNodeDiffInSamples), atRate: renderingEngine.mainMixerNode.outputFormat(forBus: 0).sampleRate)
recordedInputNodePlayer.scheduleBuffer(inputFileBuffer, at: inputAudioTime, options: []) {
    print("Input buffer played")
}
When running the sample app, here's the result I get:
This answer is applicable to native macOS only
General Latency Determination
Output
In the general case the output latency for a stream on a device is determined by the sum of the following properties:
kAudioDevicePropertySafetyOffset
kAudioStreamPropertyLatency
kAudioDevicePropertyLatency
kAudioDevicePropertyBufferFrameSize
The device safety offset, stream, and device latency values should be retrieved for kAudioObjectPropertyScopeOutput.
On my Mac for the audio device MacBook Pro Speakers at 44.1 kHz this equates to 71 + 424 + 11 + 512 = 1018 frames.
Input
Similarly, the input latency is determined by the sum of the following properties:
kAudioDevicePropertySafetyOffset
kAudioStreamPropertyLatency
kAudioDevicePropertyLatency
kAudioDevicePropertyBufferFrameSize
The device safety offset, stream, and device latency values should be retrieved for kAudioObjectPropertyScopeInput.
On my Mac for the audio device MacBook Pro Microphone at 44.1 kHz this equates to 114 + 2404 + 40 + 512 = 3070 frames.
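For reference, here is a rough C++ sketch of how those components can be queried through the Core Audio C API. It assumes the default output device and the macOS 12+ kAudioObjectPropertyElementMain constant; the input side is analogous with kAudioObjectPropertyScopeInput and the default input device.

// Minimal sketch: query the latency components listed above on macOS.
// Link with -framework CoreAudio.
#include <CoreAudio/CoreAudio.h>
#include <cstdio>

static UInt32 getUInt32(AudioObjectID obj, AudioObjectPropertySelector sel,
                        AudioObjectPropertyScope scope) {
    AudioObjectPropertyAddress addr = { sel, scope, kAudioObjectPropertyElementMain };
    UInt32 value = 0, size = sizeof(value);
    AudioObjectGetPropertyData(obj, &addr, 0, nullptr, &size, &value);
    return value;
}

int main() {
    // Default output device (an assumption for this sketch)
    AudioObjectPropertyAddress defAddr = { kAudioHardwarePropertyDefaultOutputDevice,
                                           kAudioObjectPropertyScopeGlobal,
                                           kAudioObjectPropertyElementMain };
    AudioObjectID device = kAudioObjectUnknown;
    UInt32 size = sizeof(device);
    AudioObjectGetPropertyData(kAudioObjectSystemObject, &defAddr, 0, nullptr, &size, &device);

    const AudioObjectPropertyScope scope = kAudioObjectPropertyScopeOutput;
    UInt32 safety  = getUInt32(device, kAudioDevicePropertySafetyOffset,    scope);
    UInt32 devLat  = getUInt32(device, kAudioDevicePropertyLatency,         scope);
    UInt32 bufSize = getUInt32(device, kAudioDevicePropertyBufferFrameSize, scope);

    // kAudioStreamPropertyLatency lives on the stream object, not the device
    AudioObjectPropertyAddress streamsAddr = { kAudioDevicePropertyStreams, scope,
                                               kAudioObjectPropertyElementMain };
    AudioStreamID stream = kAudioObjectUnknown;
    size = sizeof(stream);  // the first stream is enough for this sketch
    AudioObjectGetPropertyData(device, &streamsAddr, 0, nullptr, &size, &stream);
    UInt32 streamLat = getUInt32(stream, kAudioStreamPropertyLatency,
                                 kAudioObjectPropertyScopeGlobal);

    std::printf("output latency ~= %u + %u + %u + %u = %u frames\n",
                safety, streamLat, devLat, bufSize,
                safety + streamLat + devLat + bufSize);
}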
AVAudioEngine
How the information above relates to AVAudioEngine is not immediately clear. Internally AVAudioEngine creates a private aggregate device and Core Audio essentially handles latency compensation for aggregate devices automatically.
During experimentation for this answer I've found that some (most?) audio devices don't report latency correctly. At least that is how it seems, which makes accurate latency determination nigh impossible.
I was able to get fairly accurate synchronization using my Mac's built-in audio using the following adjustments:
// Some non-zero value to get AVAudioEngine running
let startDelay = 0.1
// The original audio file start time
let originalStartingFrame: AVAudioFramePosition = AVAudioFramePosition(playerNode.outputFormat(forBus: 0).sampleRate * startDelay)
// The output tap's first sample is delivered to the device after the buffer is filled once
// A number of zero samples equal to the buffer size is produced initially
let outputStartingFrame: AVAudioFramePosition = Int64(state.outputBufferSizeFrames)
// The first output sample makes its way back into the input tap after accounting for all the latencies
let inputStartingFrame: AVAudioFramePosition = outputStartingFrame - Int64(state.outputLatency + state.outputStreamLatency + state.outputSafetyOffset + state.inputSafetyOffset + state.inputLatency + state.inputStreamLatency)
On my Mac the values reported by the AVAudioEngine aggregate device were:
// Output:
// kAudioDevicePropertySafetyOffset: 144
// kAudioDevicePropertyLatency: 11
// kAudioStreamPropertyLatency: 424
// kAudioDevicePropertyBufferFrameSize: 512
// Input:
// kAudioDevicePropertySafetyOffset: 154
// kAudioDevicePropertyLatency: 0
// kAudioStreamPropertyLatency: 2404
// kAudioDevicePropertyBufferFrameSize: 512
which equated to the following offsets:
originalStartingFrame = 4410
outputStartingFrame = 512
inputStartingFrame = -2625
I may not be able to answer your question, but I believe there is a property not mentioned in your question that does report additional latency information.
I've only worked at the HAL/AUHAL layers (never AVAudioEngine), but in discussions about computing the overall latencies, some audio device/stream properties come up: kAudioDevicePropertyLatency and kAudioStreamPropertyLatency.
Poking around a bit, I see those properties mentioned in the documentation for AVAudioIONode's presentationLatency property (https://developer.apple.com/documentation/avfoundation/avaudioionode/1385631-presentationlatency). I expect that the hardware latency reported by the driver will be there. (I suspect that the standard latency property reports the latency for an input sample to appear in the output of a "normal" node, and that the IO case is special.)
It's not in the context of AVAudioEngine, but here's one message from the CoreAudio mailing list that talks a bit about using the low level properties that may provide some additional background: https://lists.apple.com/archives/coreaudio-api/2017/Jul/msg00035.html
I have the following setup:
An FPGA sending out data over UART at a baud rate of 3 Mbps. The data transmitted is a chunk of 1024 bytes sent at a variable periodicity ranging from 20 ms to 200 ms. (So even in the worst case, the data rate is far below the 3 Mbps line rate.)
An FTDI 232RG
A piece of Python running on my computer (Windows), which basically: opens a COM port with pyserial at 3 Mbps, polls in_waiting until it reaches the size of a packet (1024 bytes), formats the received packet and prints it on the screen
The script works well for a low repetition frequency, but I face issues with higher repetition rates (typically 20 ms). When the periodicity is 20 ms I eventually end up getting a buffer overflow somewhere before the in_waiting check. I checked the timing of my Python loop and it takes about 4 ms. So it looks like something upstream (in the FTDI or Windows) feeds the pyserial buffer with more than one packet within the 4 ms following a packet.
I tried changing the FTDI latency in the driver (from 16ms default down to a few ms) but it does not seem to help.
I am currently clueless about what is happening. Would you have any advice about how to understand better what is happening?
Thanks for your help!
You could create a "loop" between TX and RX and run the following code (tested with an FT2232H, so most likely you need to change the identifier string):
import time
import serial
import serial.tools.list_ports
print([(x[0],x[2]) for x in serial.tools.list_ports.comports()])
port = [x[0] for x in serial.tools.list_ports.comports() if "FT4Q1LJFB" in x[2]][0]
ser = serial.Serial(port,12000000)
while True:
    t0 = time.time()
    counter = 0
    for i in range(1000):
        ser.write([1]*3000)
        recv = ser.read(ser.inWaiting())
        delta_t = time.time() - t0
        counter += len(recv)
    print(counter / delta_t)
For me the following output is shown
[('COM7', 'USB VID:PID=0403:6010 SER=FT4Q1LJFA'), ('COM8', 'USB VID:PID=0403:6010 SER=FT4Q1LJFB')]
0.0
0.0
0.0
0.0
96787.81184093593
1201991.0268273412
1201197.0857713912
1201166.9350959768
1201445.4072856384
You will notice that it is 0.0 in the beginning. This is because I connected RX and TX only after starting the program, resulting in a ramp-up of the received bytes. This is the "default" mode, meaning 8 bits + 1 start bit + 1 stop bit = 10 bits per word, which explains why "only" 1.2 Mbytes per second are transmitted.
I am writing software in C# that measures in/out octets (bytes) via SNMP. I need to know how many bytes pass in 1000 seconds.
According to my research, the counter sometimes wraps or gets reset, because some results come out negative.
I use .1.3.6.1.2.1.2.2.1.10 for the input octets on interface .139.
Over 1024 seconds it gives a result of -2.1 MBytes.
How can I get an accurate measurement of traffic (in or out)?
EDIT: This is the code I use for the calculations. It reads the value every second and accumulates the result.
private void timer1_Tick(object sender, EventArgs e)
{
    Cursor.Current = Cursors.WaitCursor;
    SnmpObject objSnmpObject, objSnmpIfSpeed;
    objSnmpObject = (SnmpObject)objSnmpManager.Get(".1.3.6.1.2.1.2.2.1.16.139");
    objSnmpIfSpeed = (SnmpObject)objSnmpManager.Get(".1.3.6.1.2.1.2.2.1.5.139");
    if (GetResult() == 0)
    {
        float value = Int64.Parse(objSnmpObject.Value);
        float ifSpeed = Int64.Parse(objSnmpIfSpeed.Value);
        float Bytes = (value * 8 * 100 / ifSpeed);
        // float megaBytes = Bytes / 1024;
        sum += Bytes;
        tb_calc.Text = (sum.ToString() + " Bytes");
    }
    _gv_timeSec++;
    lb_timer.Text = _gv_timeSec.ToString();
    Cursor.Current = Cursors.Default;
}
1.3.6.1.2.1.2.2.1.10 is the OID for IF-MIB::ifInOctets, which the MIB describes as a Counter32, which has an upper limit of 2^32-1 (4294967295 decimal).
"The total number of octets received on the interface,
including framing characters.
Discontinuities in the value of this counter can occur at
re-initialization of the management system, and at other times as
indicated by the value of ifCounterDiscontinuityTime."
Quoting this SO answer:
a Counter32 has no defined initial value, so a single reading of
Counter32 has no information content. This is why you have to take two
(or more) readings to make sense of it. An example of this would be
the number of packets received on an ethernet interface. If you take a
reading and get back 4 million packets, you haven't learned anything:
the wire could have been pulled out of the interface for the past
year, or it could be passing millions of packets per second. You have
to take multiple readings to know anything.
I'd recommend ifHCInOctets .1.3.6.1.2.1.31.1.1.1.6 and ifHCOutOctets .1.3.6.1.2.1.31.1.1.1.10, which are the 64-bit versions of the OIDs mentioned by @k1eran.
Those counters don't roll over as quickly when dealing with higher speeds.
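Whichever OIDs you poll, the rate has to come from the difference between two readings taken a known interval apart. Here is a minimal C++ sketch of that delta arithmetic (illustrative only, not tied to any particular SNMP library); unsigned subtraction handles a single Counter32 wrap:

// Minimal sketch: turn two Counter32 readings into a byte rate,
// handling one wrap at 2^32. Not tied to any SNMP library.
#include <cstdint>
#include <cstdio>

// Bytes per second between two readings taken intervalSeconds apart.
double octetRate(uint32_t previous, uint32_t current, double intervalSeconds) {
    // Unsigned subtraction naturally handles one wrap of a Counter32:
    // e.g. previous = 4294967000, current = 300 -> delta = 596.
    uint32_t delta = current - previous;
    return delta / intervalSeconds;
}

int main() {
    uint32_t first  = 4294967000u;  // reading just before the counter wraps
    uint32_t second = 300u;         // reading one second later, after the wrap
    std::printf("%.0f bytes/s\n", octetRate(first, second, 1.0));  // prints 596
}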
How to calculate correct PTS value for frame before encoding in FFmpeg C API?
For encoding I'm using function avcodec_encode_video2 and then writing it by av_interleaved_write_frame.
I found some formulas, but none of them work.
In doxygen example they are using
frame->pts = 0;
for (;;) {
// encode & write frame
// ...
frame->pts += av_rescale_q(1, video_st->codec->time_base, video_st->time_base);
}
This blog says that the formula must be like this:
(1 / FPS) * sample rate * frame number
Some people use only the frame number to set the PTS:
frame->pts = videoCodecCtx->frame_number;
Or an alternative way:
int64_t now = av_gettime();
frame->pts = av_rescale_q(now, (AVRational){1, 1000000}, videoCodecCtx->time_base);
And the last one:
// 40 * 90 means 40 ms and 90 because of the 90kHz by the standard for PTS-values.
frame->pts = encodedFrames * 40 * 90;
Which one is correct? I think the answer to this question will be helpful not only to me.
It's better to think about PTS more abstractly before trying code.
What you're doing is meshing 3 "time sets" together. The first is the time we're used to, based on 1000 ms per second, 60 seconds per minute, and so on. The second is the codec time for the particular codec you are using. Each codec has a certain way it wants to represent time, usually in a 1/number format, meaning that for every second there are "number" ticks. The third format works similarly to the second, except that it is the time base for the container you are using.
Some people prefer to start with actual time, others frame count, neither is "wrong".
Starting with a frame count, you first need to convert it based on your frame rate. Note that all conversions I speak of use av_rescale_q(...). The purpose of this conversion is to turn a counter into time, so you rescale with your frame rate (usually the video stream time base). Then you have to convert that into the time_base of your video codec before encoding.
Similarly, when starting with real time, your first conversion needs to be from (current_time - start_time) rescaled to your video codec's time base.
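As a sketch of that two-step conversion (frame_index, fps, codec_ctx, and video_st here stand in for your own counter, frame rate, AVCodecContext, and AVStream; this is an illustration, not the only correct way):

// Sketch of the two-step rescale: frame counter -> codec time_base before
// encoding, then codec time_base -> stream time_base on the output packet.
extern "C" {
#include <libavutil/mathematics.h>   // av_rescale_q
#include <libavcodec/avcodec.h>
}

// Called once per frame before avcodec_encode_video2 / avcodec_send_frame.
void stampFrame(AVFrame *frame, int64_t frame_index, int fps, AVCodecContext *codec_ctx) {
    // frame_index is counted in units of 1/fps, rescaled into the codec's time base
    frame->pts = av_rescale_q(frame_index, AVRational{1, fps}, codec_ctx->time_base);
}

// Called on each encoded packet before av_interleaved_write_frame.
void stampPacket(AVPacket *pkt, AVCodecContext *codec_ctx, AVStream *video_st) {
    // Converts pts/dts/duration from the codec time_base to the stream time_base
    av_packet_rescale_ts(pkt, codec_ctx->time_base, video_st->time_base);
}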
Anyone using only a frame counter is probably using a codec with a time_base equal to their frame rate. Most codecs do not work like this, and that hack is not portable. Example:
frame->pts = videoCodecCtx->frame_number; // BAD
Additionally, anyone using hardcoded numbers in their av_rescale_q is leveraging the fact that they know what their time_base is and this should be avoided. The code isn't portable to other video formats. Instead use video_st->time_base, video_st->codec->time_base, and output_ctx->time_base to figure things out.
I hope understanding it from a higher level will help you see which of those are "correct" and which are "bad practice". There is no single answer, but maybe now you can decide which approach is best for you.
Time is measured not in seconds or milliseconds or any standard unit. Instead, it is measured by the avCodecContext's timebase.
So if you set codecContext->time_base to 1/1, it means seconds are used for measurement.
cctx->time_base = (AVRational){1, 1};
Assume you want to encode at a steady 30 fps. Then the time at which a frame is encoded is framenumber * (1.0 / fps).
But once again, the PTS is also not measured in seconds or any standard unit. It's measured by avStream's time_base.
In the question, the author mentioned 90k as the standard resolution for PTS. But you will see that this is not always true. The exact resolution is saved in the AVStream. You can read it back with:
if ((err = avformat_write_header(ofctx, NULL)) < 0) {
    std::cout << "Failed to write header" << err << std::endl;
    return -1;
}
av_dump_format(ofctx, 0, "test.webm", 1);
std::cout << stream->time_base.den << " " << stream->time_base.num << std::endl;
The value of stream->time_base is only populated after calling avformat_write_header
Therefore, the right formula for calculating PTS is:
//The following assumes that codecContext->time_base = (AVRational){1, 1};
videoFrame->pts = frameduration * (frameCounter++) * stream->time_base.den / (stream->time_base.num * fps);
So really there are 3 components in the formula:
fps
codecContext->time_base
stream->time_base
so pts = fps*codecContext->time_base/stream->time_base
I have detailed my discovery here
There's also the option of setting it like frame->pts = av_frame_get_best_effort_timestamp(frame), but I'm not sure this is the correct approach either.
In the answer to the question on Stack Overflow and in the book here on page 52, I found that the usual getTickCount/getTickFrequency combination for measuring execution time gives the time in milliseconds. However, the OpenCV website says it is in seconds. I am confused. Please help...
There is no room for confusion, all the references you have given point to the same thing.
getTickCount gives you the number of clock cycles after a certain event, e.g. after the machine is switched on.
A = getTickCount() // A = no. of clock cycles from beginning, say 100
process(image) // do whatever process you want
B = getTickCount() // B = no. of clock cycles from beginning, say 150
C = B - A // C = no. of clock cycles for processing, 150-100 = 50,
// it is obvious, right?
Now you want to know how many seconds these clock cycles correspond to. For that, you need to know how long a single clock cycle takes, i.e. the clock_time_period. If you find that, simply multiply it by 50 to get the total time taken.
For that, OpenCV gives a second function, getTickFrequency(). It gives you the frequency, i.e. how many clock cycles there are per second. Take its reciprocal to get the time period of the clock:
time_period = 1/frequency.
Now you have the time_period of one clock cycle; multiply it by 50 to get the total time taken in seconds.
Now read all those references you have given once again, and you will get it.
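A minimal sketch of that arithmetic with OpenCV (the commented process step stands in for your own code):

// Minimal sketch: measure elapsed time in seconds with OpenCV's tick counter.
#include <opencv2/core.hpp>
#include <cstdint>
#include <iostream>

int main() {
    int64_t start = cv::getTickCount();     // ticks since some fixed event
    // ... process(image) would go here ...
    int64_t end = cv::getTickCount();
    // (end - start) ticks divided by ticks-per-second gives seconds
    double seconds = static_cast<double>(end - start) / cv::getTickFrequency();
    std::cout << "took " << seconds << " s (" << seconds * 1000.0 << " ms)" << std::endl;
}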
dwStartTimer=GetTickCount();
dwEndTimer=GetTickCount();
while((dwEndTimer-dwStartTimer)<wDelay) // delay is 5000 milliseconds
{
    Sleep(200);
    dwEndTimer=GetTickCount();
    if (PeekMessage (&uMsg, NULL, 0, 0, PM_REMOVE) > 0)
    {
        TranslateMessage (&uMsg);
        DispatchMessage (&uMsg);
    }
}