Discriminating between MSDU and MMPDU while using Native WiFi to retrieve the number of incoming frames? - wifi

I have some problems while using the Microsoft Native Wifi API.
I can use the WlanQueryInterface function to get the current value of the counters for received frames.
But the statistics cover not only MSDUs but also MMPDUs and MPDUs, which makes the result inaccurate.
As the documentation says:
The members of this structure are used to record MAC-level statistics for:
MAC service data unit (MSDU) packets.
MAC protocol data unit (MPDU) frames.
MPDU frame counters must include all MPDU fragments sent for an MSDU packet.
Management MPDU (MMPDU) frames.
So, is there any way to obtain only the number of received MSDUs? Thanks!
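For reference, here is a minimal sketch of how those counters are read, assuming a client handle already opened with WlanOpenHandle and a known interface GUID; note that the API hands back the combined counters described above and does not expose an MSDU-only figure:

    /* Sketch: query per-interface MAC frame counters via the Native Wifi API.
       Link against wlanapi.lib. */
    #include <windows.h>
    #include <wlanapi.h>
    #include <stdio.h>

    void print_rx_frames(HANDLE hClient, const GUID *ifGuid)
    {
        PWLAN_STATISTICS stats = NULL;
        DWORD size = 0;
        if (WlanQueryInterface(hClient, ifGuid, wlan_intf_opcode_statistics,
                               NULL, &size, (PVOID *)&stats, NULL) == ERROR_SUCCESS) {
            /* Unicast counters; multicast ones live in MacMcastCounters. */
            printf("Received frames (MSDU+MPDU+MMPDU combined): %llu\n",
                   stats->MacUcastCounters.ullReceivedFrameCount);
            WlanFreeMemory(stats);
        }
    }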

Related

What does CBATTError Code insufficientResources really mean?

I'm trying to send data over BLE from my iPhone to an ESP32 board. I'm developing on the Flutter platform and I'm using the flutter_reactive_ble library.
My iPhone can connect to the other device and it can also send 1 byte using the writeCharacteristicWithResponse function. But when I try to send my real data, which is large (>7000 bytes), it gives me this error:
flutter: Error occured when writing 9f714672-888c-4450-845f-602c1331cdeb :
Exception: GenericFailure<WriteCharacteristicFailure>(
code: WriteCharacteristicFailure.unknown,
message: "Error Domain=CBATTErrorDomain Code=17
"Resources are insufficient."
UserInfo={NSLocalizedDescription=Resources are insufficient.}")
I tried searching for this error but didn't find additional info, even on the Apple Developer website. It just says:
Resources are insufficient to complete the ATT request.
What does this error really mean? Which resources are insufficient, and how can I work around this problem?
This is almost certainly larger than this characteristic's maximum value length (which is probably on the order of tens of bytes, not thousands). Before writing, you need to call maximumWriteValueLength(for:) to see how much data can be written. If you're trying to send serial data over a characteristic (which is common, but not really what they were designed for), you'll need to break your data up into chunks and reassemble them on the peripheral. You will likely either need an "end" indicator of some kind, or you will need to send the length of the payload first so that the receiver knows how much to expect.
First of all, a characteristic value cannot be larger than 512 bytes. This is set by the ATT standard (Bluetooth Core Specification v5.3, Vol 3, Part F (ATT), section 3.2.9). This number has been set arbitrarily by the protocol designers and does not map to any technical limitation of the protocol.
So, don't send 7000 bytes in a single write. You need to keep it at most 512 to be standard compliant.
If you say that it works with another Bluetooth stack running on the GATT server, then I guess CoreBluetooth does not enforce/check the maximum length of 512 bytes on the client side (I haven't tested). Therefore I also guess the error code you see was sent by the remote device rather than by CoreBluetooth locally as a pre-check.
There are three different common ways of writing a characteristic on the protocol level (Bluetooth Core Specification v5.3, Vol 3, Part G (GATT), section 4.9 Characteristic Value Write):
Write Without Response (4.9.1)
Write Characteristic Value (4.9.3)
Write Long Characteristic Values (4.9.4)
Number one is unidirectional and does not result in a response packet. It uses a single ATT_WRITE_CMD packet where the value must be at most ATT_MTU-3 bytes in length. This length can be retrieved using maximumWriteValueLength(for:) with .withoutResponse. The requestMtu method in flutter_reactive_ble uses this method internally. If you execute many writes of this type rapidly, be sure to add flow control to avoid CoreBluetooth dropping outgoing packets before they are sent. This can be done through peripheralIsReadyToSendWriteWithoutResponse by simply always waiting for this callback after each write, before you write the next packet. Unfortunately, it seems flutter_reactive_ble does not implement this flow control mechanism.
Number two uses a single ATT_WRITE_REQ where the value must be at most ATT_MTU-3 bytes in length, just as above. Use the same approach as above to retrieve that maximum length (note that maximumWriteValueLength with .withResponse always returns 512 and is not what you want). Here however, either an ATT_WRITE_RSP will be returned on success or an error packet will be received with an error code. Only one ATT transaction can be outstanding at a time, which significantly lowers throughput compared to Write Without Response.
Number three uses a sequence of multiple ATT_PREPARE_WRITE_REQ packets (containing offset and value) followed by an ATT_EXECUTE_WRITE_REQ. The maximum length of the value in each chunk is ATT_MTU-5. Each _REQ packet also requires a corresponding _RSP packet before the sequence can continue (alternatively, an error code could be sent by the remote device). This approach is used when the characteristic value to be written is too long to be sent in a single ATT_WRITE_REQ.
For any of the above write methods, you are always also limited by the maximum attribute size of 512 bytes as per the specification.
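To make the arithmetic concrete, here is a tiny worked example; the ATT_MTU value of 185 is just an assumption (a value commonly negotiated by recent iPhones), not something given in the question:

    #include <stdio.h>

    int main(void)
    {
        unsigned att_mtu = 185;      /* assumed negotiated ATT_MTU */
        printf("ATT_WRITE_CMD / ATT_WRITE_REQ payload: %u bytes\n", att_mtu - 3);
        printf("ATT_PREPARE_WRITE_REQ chunk payload:   %u bytes\n", att_mtu - 5);
        printf("absolute cap per characteristic value: 512 bytes\n");
        return 0;
    }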
Any Bluetooth stack I know of transparently chooses between "Write Characteristic Value" and "Write Long Characteristic Values" when you tell it to write with response, depending on the value length and MTU. Server side it's a bit different. Some stacks put the burden on the user to combine all packets, but it seems NimBLE handles that on its own. From what I can see in the source code (https://github.com/apache/mynewt-nimble/blob/26ccb8af1f3ea6ad81d5d7cbb762747c6e06a24b/nimble/host/src/ble_att_svr.c#L2099) it can return the "Insufficient Resources" error code when it tries to allocate memory but fails (most likely due to too much buffered data). This is what might be happening for you. To answer your first actual question, the standard itself says nothing more about this error code than simply "Insufficient Resources to complete the request".
The error has nothing to do with LE Data Length extension, which is simply an optimization for a lower layer (BLE Link Layer) that does not affect the functionality of the host stack. The L2CAP layer will take care of the reassembling of smaller link layer packets if necessary, and must always support up to the negotiated MTU without overflowing any buffers.
Now, to answer your second question: if you send very large amounts of data (7000 bytes), you must divide the data into multiple chunks and come up with a way to reassemble them correctly. Each chunk is written as a full characteristic value. When you do this, be sure to send values of at most ATT_MTU-3 bytes (but never larger than 512 bytes), to avoid the inefficient overhead of "Write Long Characteristic Values". It's then up to your application code to make sure you don't run out of memory in case too much data is sent.
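A minimal sender-side sketch of that chunking scheme, in C for illustration. The 4-byte length prefix is one of the two framing options mentioned earlier, and send_chunk() is a hypothetical stand-in for whatever write call your stack provides:

    #include <stddef.h>
    #include <stdint.h>

    #define ATT_MTU 185u   /* assumption: query the real negotiated value at runtime */

    extern void send_chunk(const uint8_t *data, size_t len);   /* hypothetical */

    void send_payload(const uint8_t *payload, uint32_t total)
    {
        /* chunks must fit in ATT_MTU-3, and a value may never exceed 512 */
        size_t max_chunk = (ATT_MTU - 3 < 512) ? ATT_MTU - 3 : 512;
        uint8_t header[4];

        /* little-endian length prefix so the peripheral knows what to expect */
        header[0] = (uint8_t)(total);
        header[1] = (uint8_t)(total >> 8);
        header[2] = (uint8_t)(total >> 16);
        header[3] = (uint8_t)(total >> 24);
        send_chunk(header, sizeof header);

        for (uint32_t off = 0; off < total; off += (uint32_t)max_chunk) {
            size_t n = (total - off < max_chunk) ? total - off : max_chunk;
            send_chunk(payload + off, n);
        }
    }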

Automotive: How does an ECU tell that a CAN frame is part of the UDS protocol?

I have read a lot of specifications and still can't grasp a simple thing.
All UDS requests are encapsulated in ISO-TP packets, which are encapsulated in plain CAN frames, so an ECU constantly receives a stream of frames from the CAN bus.
How does the ECU decide that a given CAN frame is part of a higher-level protocol?
For example, if I send a Security Access request to the ECU, the CAN frame data will look like this:
02 27 01
How does the ECU determine that this is not just a chunk of data but part of the protocol?
I wasn't able to find anything resembling the ISO/OSI stack, where higher-level protocols "talk to each other" using headers so that we know how to decode the data packets.
The CAN message IDs that are used for specific protocols are defined per system.
In most cases OBD-II queries will be sent on CAN ID 7DFh, with responses from the different modules on higher IDs, but even that might differ on specific car models.
One way of figuring out the CAN IDs that are used for UDS-based communication is to send simple tester-present (SID 3Eh) messages and watch for CAN IDs which seem to produce an appropriate response.
UDS over CAN (DoCAN) is specified in ISO 15765-2, which describes the network and transport layers for functional (broadcast) and physical (point-to-point) communication between ECUs, or more precisely between control functions.
Plain CAN IDs don't implement any network functionality such as addressing. For that purpose the SAE J1939 network layer is used. In a J1939 network, each CAN client has an address and each functionality a parameter group number (PGN), all of which is encoded in a 29-bit CAN ID. For example, the CAN ID 0x18EF8081 transports a message from CAN client 0x81 to client 0x80 via PGN 0xEF; the 0x18 is the priority.
In UDS over CAN, the PGNs 0xDA (physical) and 0xDB (functional) are used for all communication. With that information you can implement a CAN ID filter which matches only the PGN part of the CAN ID, as in the sketch below.
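A small illustrative sketch in C, under the assumptions above (29-bit extended IDs, normal addressing): match the PGN field of the ID, then read the ISO-TP PCI nibble. For the asker's frame 02 27 01, the first byte says "single frame, 2 payload bytes", and the payload is SID 0x27 (SecurityAccess) with sub-function 0x01:

    #include <stdbool.h>
    #include <stdint.h>

    /* The PGN (PDU format) field sits in bits 16..23 of a 29-bit J1939-style ID */
    static bool is_uds_can_id(uint32_t can_id)
    {
        uint8_t pf = (can_id >> 16) & 0xFF;
        return pf == 0xDA || pf == 0xDB;   /* physical / functional addressing */
    }

    /* ISO-TP: the high nibble of byte 0 is the frame type; 0 = single frame.
       Requires at least 2 data bytes here so the SID can be extracted. */
    static bool parse_isotp_single_frame(const uint8_t *data, uint8_t dlc,
                                         uint8_t *sid, uint8_t *len)
    {
        if (dlc < 2 || (data[0] >> 4) != 0x0)
            return false;
        *len = data[0] & 0x0F;   /* 02 27 01 -> 2 payload bytes */
        *sid = data[1];          /* 0x27 = SecurityAccess */
        return true;
    }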

Why is it not safe to use Socket.ReceiveLength?

Well, even Embarcadero states that it is not guaranteed to return an accurate count of the bytes ready to be read from the socket buffer. But if you look at it, when you pass -1 to Socket.ReceiveBuf (which is what ReceiveLength wraps), it calls ioctlsocket with FIONREAD to determine the amount of data pending in the network's input buffer that can be read from socket s.
So how is it unsafe or bad?
e.g: ioctlsocket(Socket.SocketHandle, FIONREAD, Longint(i));
The documentation you mention specifically says (emphasis mine)
Note: ReceiveLength is not guaranteed to be accurate for streaming socket connections.
This means that the length is not known ahead of time because it's being supplied by a stream of data. Obviously, if you don't know how big the data is that's being sent ahead of time, you can't properly set the length the client should expect.
Consider it like generic code to copy a file. If you don't know ahead of time how big the file is you'll be copying, you can't predict how many bytes you'll be copying. In the case of the socket, the stream size that's supplying the socket isn't known in advance (for instance, for data being generated real-time and sent), so there's no way to inform the client socket how much to expect.
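One common way around this, sketched here in portable C (the same idea applies to Delphi's ReceiveBuf called in a loop): frame each message with an explicit length and read until that many bytes have arrived, instead of trusting FIONREAD. The recv_message name and the 4-byte network-order prefix are illustrative choices, not part of any API:

    #include <stdint.h>
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <arpa/inet.h>

    /* recv() on a stream socket may return fewer bytes than requested,
       so loop until the full count has arrived. */
    static int recv_all(int sock, void *buf, size_t len)
    {
        char *p = buf;
        while (len > 0) {
            ssize_t n = recv(sock, p, len, 0);
            if (n <= 0)
                return -1;           /* error or orderly shutdown */
            p += n;
            len -= (size_t)n;
        }
        return 0;
    }

    /* read one message framed as: 4-byte big-endian length + payload */
    static int recv_message(int sock, void *buf, uint32_t max)
    {
        uint32_t len;
        if (recv_all(sock, &len, sizeof len) < 0)
            return -1;
        len = ntohl(len);
        if (len > max)
            return -1;               /* peer announced more than buf can hold */
        return recv_all(sock, buf, len) < 0 ? -1 : (int)len;
    }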

Why is DirectShow dragging in unnecessary intermediate filters when making multiple input connections to my DirectShow Transform filter?

I have a DirectShow Transform filter written in Delphi 6 using the DSPACK component library. It is a simple audio mixer that creates a new input pin whenever a new connection is attempted. I say simple because once its media format is set, all connections to its input pins or its singular output pin are forced to conform to that media format. I build the filter chain manually, making all pin connections explicitly myself. I do not use any of the "intelligent rendering" calls, unless there is some way to trigger that unwanted behavior (in my case) accidentally.
NOTE: The Capture Filter is a standard DirectShow filter external to my application. My push source audio filter and simple audio mixer filters are being used as private, unregistered filters and are internal to my application.
I am having a weird problem that only occurs when I try to make multiple input connections to my mixer, which does indeed accept them. Currently, I am attempting to connect both a Capture Filter and my custom Push Source audio filter to my mixer filter. Whenever I try to do that the second upstream filter connection fails. Regardless of whether I connect the Capture Filter first or Push Source audio filter first, the second upstream filter connection always fails.
The first test I ran was to try connecting just the Capture Filter to the mixer. That worked fine.
The second test I ran was to try connecting just the Push Source audio filter to the mixer. That worked fine.
But as soon as I try to do both I get a "no combination of intermediate filters could be found" error. I did several hours of deep digging into the media negotiation calls hitting my filter from the graph builder and then I found the problem. For some reason, the filter graph is dragging the ancient "Indeo (R) Audio Software" codec into the chain.
I discovered this because, despite the fact that the codec had a media format that matched my filter in almost every regard (major type, sub type, format type, wave format parameters), it had an extra 2 bytes at the end of its pbFormat data member, and that was enough to fail the equality test, since that test compares the source and target pbFormat areas using the cbFormat value of each media type. The Indeo codec has a cbFormat value of 20 while my filter has a cbFormat value of 18, which is the size of a _tWAVEFORMATEX structure. In a way it's a good thing the Indeo pbFormat has that odd size, because the first 18 bytes of its 20-byte area were exactly equal to the pbFormat area of my mixer filter's supported media type. Without that anomaly I never would have known that ancient codec was being dragged in. I'm surprised it's being dragged in at all, since it has known exploits and vulnerabilities. What is most confusing is that this is happening on my mixer filter's output pin, not one of the input pins, and I had not made a single downstream connection yet when building up my pin connections.
Can anyone tell me why DirectShow is trying to drag in that codec, despite the fact that the media formats of both incoming filters, the Capture Filter and the Push Source filter, are identical and don't need any intermediate filters at all, since they match my mixer filter's input pins' supported format exactly? How can I fix this problem?
Also, I noticed that even in the single-filter attachment tests above that succeeded, my mixer's output pin was still being queried for media formats. Why is that, when, as I said, at this point in building up my pin connections I had not connected anything to the output pin of my mixer filter?
--------------------------- UPDATE: 1 ----------------------------
I have learned that you can avoid the "intelligent connection" behavior entirely by using IFilterGraph.ConnectDirect() instead of IGraphBuilder.Connect(). I switched over to ConnectDirect() and it turns out that the input pin on my mixer filter is coming back as "already connected". That may be what is causing the graph builder to drag in the Indeo codec filter. Now that I have this new diagnostic information I will correct the problem and update this post with my results.
--------------------------- RESOLUTION ----------------------------
The root problem in all of this was my re-use of the input pin I obtained from the first destination/downstream filter I connected to my simple audio mixer filter, at the top of my application code. In other words, my filter was working correctly, but I was not getting a fresh input pin for each upstream filter I tried to connect to it. Once I started doing that, the connection process worked fine. I don't know why the code behind the IGraphBuilder.Connect() interface tried to bring in the Indeo codec filter, perhaps something to do with trying to connect to an already-connected input pin, but it did. For my needs, I prefer the tight control that IFilterGraph.ConnectDirect() provides, since it eliminates any interference from the intelligent connection code in IGraphBuilder, but I can see how that code could become useful when video filters get involved.
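For anyone hitting the same thing, here is a rough sketch of the "fresh pin" idea in C (the original filter is Delphi/DSPACK; the COM calls are the same, only the syntax differs): enumerate the mixer's pins, pick one that reports itself unconnected, and pass it to ConnectDirect.

    #define COBJMACROS
    #include <dshow.h>

    /* find an input pin that is not yet connected; caller must Release it */
    static IPin *first_free_input_pin(IBaseFilter *filter)
    {
        IEnumPins *pins = NULL;
        IPin *pin = NULL;
        if (FAILED(IBaseFilter_EnumPins(filter, &pins)))
            return NULL;
        while (IEnumPins_Next(pins, 1, &pin, NULL) == S_OK) {
            PIN_DIRECTION dir;
            IPin *peer = NULL;
            IPin_QueryDirection(pin, &dir);
            /* ConnectedTo fails with VFW_E_NOT_CONNECTED on a free pin */
            if (dir == PINDIR_INPUT && FAILED(IPin_ConnectedTo(pin, &peer))) {
                IEnumPins_Release(pins);
                return pin;
            }
            if (peer) IPin_Release(peer);
            IPin_Release(pin);
        }
        IEnumPins_Release(pins);
        return NULL;
    }

    /* usage: IFilterGraph_ConnectDirect(graph, srcOutPin,
                                         first_free_input_pin(mixer), NULL); */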

Does a "transmitted bytes" event exist in the Linux kernel?

I need to write a rate limiter that will perform some work each time X bytes have been transmitted.
The straightforward way is to check the length of each transmitted packet, but I think that will be too slow for me.
Is there a way to use some kind of network event that is triggered by transmitted packets/bytes?
I think you may want to look at netfilter.
Using its (kernel-level) API, you can have your custom code triggered by network events, modify received messages before passing them on to the application, and so on.
http://www.netfilter.org/
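A rough sketch of a kernel module along those lines, assuming an IPv4-only view, a purely illustrative 1 MiB threshold, and the per-namespace hook API of recent kernels; the hook only observes outgoing packets on POST_ROUTING and never drops anything:

    #include <linux/module.h>
    #include <linux/netfilter.h>
    #include <linux/netfilter_ipv4.h>
    #include <linux/skbuff.h>
    #include <net/net_namespace.h>

    #define THRESHOLD (1U << 20)   /* act once per transmitted MiB (example) */

    static atomic64_t tx_bytes = ATOMIC64_INIT(0);

    static unsigned int count_tx(void *priv, struct sk_buff *skb,
                                 const struct nf_hook_state *state)
    {
        /* detect crossing a THRESHOLD multiple (assumes skb->len < THRESHOLD) */
        if (atomic64_add_return(skb->len, &tx_bytes) % THRESHOLD < skb->len)
            pr_info("another %u bytes transmitted\n", THRESHOLD);
        return NF_ACCEPT;          /* observe only, never drop */
    }

    static struct nf_hook_ops tx_hook = {
        .hook     = count_tx,
        .pf       = NFPROTO_IPV4,
        .hooknum  = NF_INET_POST_ROUTING,
        .priority = NF_IP_PRI_LAST,
    };

    static int __init tx_counter_init(void)
    {
        return nf_register_net_hook(&init_net, &tx_hook);
    }

    static void __exit tx_counter_exit(void)
    {
        nf_unregister_net_hook(&init_net, &tx_hook);
    }

    module_init(tx_counter_init);
    module_exit(tx_counter_exit);
    MODULE_LICENSE("GPL");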
It's protocol dependent, actually. But for TCP, you can setsockopt the SO_RCVLOWAT option to define the minimum number of bytes (the watermark) that must be available before a read operation completes.
If you need to enforce a maximum size too, adjust the receive buffer size using SO_RCVBUF.
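For completeness, setting that watermark looks like this (a minimal POSIX sketch; note that SO_RCVLOWAT applies to the receive side, so for a transmit-side limiter the netfilter route above is the closer fit):

    #include <stdio.h>
    #include <sys/socket.h>

    /* hold back recv()/select() readiness until at least `bytes` are buffered */
    int set_rcv_low_watermark(int sock, int bytes)
    {
        if (setsockopt(sock, SOL_SOCKET, SO_RCVLOWAT,
                       &bytes, sizeof bytes) < 0) {
            perror("setsockopt(SO_RCVLOWAT)");
            return -1;
        }
        return 0;
    }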
