After reading the book "Understanding Linux Network Internals", I came to know some concepts of how we get a packet from the network:
> When working in the interrupt-driven model, the NIC registers an interrupt handler;
> • This interrupt handler will be called when a frame is received;
> • Typically in the handler, we allocate an sk_buff by calling dev_alloc_skb();
> • Data is copied from the NIC's buffer into this newly created struct;
> • The NIC driver calls the generic reception routine netif_rx();
> • netif_rx() puts the frame in a per-CPU queue;
> • If the queue is full, the frame is dropped!
> • net_rx_action() makes its decision based on skb->protocol;
> • This function basically dequeues the frame and delivers a copy to every protocol handler;
> • ptype_all and ptype_base queues;
> • ip_rcv() will receive the IP datagram (if it is an IPv4 packet);
> • IP checksum, IP header checks, ...;
> • ip_rcv_finish() makes the routing decision (ip_forward() or ip_local_deliver()).
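To make that reception model concrete, here is a rough, untested sketch of what a NIC driver's receive interrupt handler typically looks like. The mydev_* names, the private struct and the register-read helper are invented for illustration; only dev_alloc_skb(), skb_reserve(), skb_put(), eth_type_trans() and netif_rx() are the real kernel APIs mentioned above:

/* Hypothetical NIC driver RX path (sketch only; mydev_* names are made up). */
#include <linux/netdevice.h>
#include <linux/etherdevice.h>
#include <linux/skbuff.h>
#include <linux/interrupt.h>
#include <linux/string.h>

struct mydev_priv {
    void *rx_buffer;                        /* buffer the NIC DMAs frames into */
};

static unsigned int mydev_read_rx_len(struct mydev_priv *priv);  /* reads the frame length from hardware */

static irqreturn_t mydev_interrupt(int irq, void *dev_id)
{
    struct net_device *dev = dev_id;
    struct mydev_priv *priv = netdev_priv(dev);
    unsigned int len = mydev_read_rx_len(priv);
    struct sk_buff *skb;

    /* Allocate an sk_buff large enough for the received frame. */
    skb = dev_alloc_skb(len + NET_IP_ALIGN);
    if (!skb) {
        dev->stats.rx_dropped++;            /* no memory: the frame is lost */
        return IRQ_HANDLED;
    }
    skb_reserve(skb, NET_IP_ALIGN);

    /* Copy the data from the NIC's buffer into the struct just created. */
    memcpy(skb_put(skb, len), priv->rx_buffer, len);

    /* Record the receiving device and the L3 protocol; skb->protocol is
     * what the stack later uses to pick the handler (e.g. ip_rcv()). */
    skb->dev = dev;
    skb->protocol = eth_type_trans(skb, dev);

    /* Hand the frame to the generic reception routine, which queues it
     * on a per-CPU backlog and schedules the RX softirq. */
    netif_rx(skb);

    return IRQ_HANDLED;
}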
Now I have some queries related to this:
Regarding the netif_rx() code at http://lxr.free-electrons.com/source/net/core/dev.c#L3075: which part of the code delivers the frame to the upper layer?
And regarding net_rx_action() at http://lxr.free-electrons.com/source/net/core/dev.c#L4041: which part of the code makes the decision based on skb->protocol?
And what is the packet transmission process, i.e. when is the sk_buff allocated, and so on?
Please guide me.
I was recently experimenting with ESP-NOW in MicroPython. Suddenly I ran into a problem when trying to run this code:
import network, espnow, time
wlan_sta = network.WLAN(network.STA_IF)
wlan_sta.active(True)
e = espnow.ESPNow()
e.active(True)
peer = b'\xff\xff\xff\xff\xff\xff' # MAC
e.add_peer(peer)
while True:
    e.send(peer, "ESP")
    time.sleep(1.1)  # seconds
I get the error OSError: -3.
The code worked on my ESP32 but not on the ESP8266; no clue why.
I tried reflashing my ESP8266, but that did not help either.
According to the documentation, you need to call wlan_sta.disconnect() after setting wlan_sta.active(True). This is the example from the docs:
import network
import espnow
# A WLAN interface must be active to send()/recv()
sta = network.WLAN(network.STA_IF) # Or network.AP_IF
sta.active(True)
sta.disconnect() # For ESP8266
e = espnow.ESPNow()
e.active(True)
peer = b'\xbb\xbb\xbb\xbb\xbb\xbb' # MAC address of peer's wifi interface
e.add_peer(peer)
e.send("Starting...") # Send to all peers
for i in range(100):
    e.send(peer, str(i)*20, True)
e.send(peer, b'end')  # The example in the docs is missing the `peer` argument.
If I run that example as written (well, correcting the second call to e.send as shown in the above code) and the corresponding receiver code, it all works just fine on a pair of ESP8266s running v1.19.1-espnow-6-g44f65965b.
Update: I think your problem is that the ESP8266 may not support the broadcast address. While the documentation suggests that the ESP8266 should be able to send to the broadcast address:
> All active ESP-Now clients will receive messages sent to their MAC address
> and all devices (except ESP8266 devices) will also receive messages sent to
> the broadcast MAC address (b'\xff\xff\xff\xff\xff\xff') or any multicast MAC
> address.
> All ESP-Now devices (including ESP8266 devices) can also send messages to the
> broadcast MAC address or any multicast MAC address.
It appears that this isn't the case. I'm able to use the example code from the docs when operating in unicast mode, but attempting to call e.add_peer with the broadcast address results in the same error you've reported.
I've opened issue #11 with this problem.
In conclusion, you can say that it is possible to use ESP-NOW on the ESP8266 in unicast mode, but not in broadcast/multicast mode.
Issue
- When using the ESP8266 wired up in this way, it will randomly disconnect the USB interface when it powers the relay. It may then reconnect, but this is sporadic.
- The code can be viewed below, but essentially the relay is powered for 300 ms, then the loop waits 10 seconds before repeating.
Wiring Diagram https://i.stack.imgur.com/4mycx.png
Tests:
I have swapped out the relay, pump, and ESP8266, as well as re-wiring the circuit multiple times to check for a short. I also have an integer incrementing every loop cycle; when the ESP8266 is able to reconnect, it prints this variable, which shows the board is not crashing:
Serial output
https://i.stack.imgur.com/ziM8g.png
I then modified the diagram so the 5 V power was not in parallel, but instead there were two different power sources, one for the ESP8266 and one for the pump circuit; however, the same issue was observed:
Test Wiring Diagram https://i.stack.imgur.com/7S0aP.png
Question:
Why does the USB disconnect when sending the control signal to the relay?
Is there a way to mitigate this?
Code:
int relayInput = 5;   // the input to the relay pin
int debug_test = 0;

void setup() {
  // put your setup code here, to run once:
  Serial.begin(115200);
  pinMode(relayInput, OUTPUT);  // initialize pin as OUTPUT
}

void loop() {
  // put your main code here, to run repeatedly:
  debug_test++;
  Serial.println(debug_test);

  digitalWrite(relayInput, HIGH);  // turn relay on
  Serial.println("Water on!");
  delay(300);

  digitalWrite(relayInput, LOW);   // turn relay off
  Serial.println("Water off!");
  Serial.println("Waiting 10 seconds");
  delay(10000);
}
Parts:
Pump - https://www.ebay.co.uk/itm/Mini-Water-Pump-DC-3V-4-5V-Fish-Tank-Fountain-Aquarium-Submersible-White-Parts/174211676084?hash=item288fd337b4:g:128AAOSwfQteYWF3
ESP8266 - https://www.amazon.co.uk/gp/product/B07F5FJSYZ/ref=ppx_yo_dt_b_search_asin_title?ie=UTF8&psc=1
Relay - https://www.amazon.co.uk/gp/product/B07BVXT1ZK/ref=ppx_yo_dt_b_search_asin_title?ie=UTF8&psc=1
OK, so researching into this, it seems that when the pump is on it draws more current than the PC's USB port can provide.
This will be used connected to an external power source, which should supply enough current; however, I also wanted the flexibility to connect it to a PC over a serial connection to troubleshoot.
So in the end something like this:
https://i.stack.imgur.com/MKD1h.png
You are driving a 5 V relay module with a 3.3 V output. This works perfectly for some people, but it depends on the relay module and the board, so this might be the problem. Alternatively, the relay may draw more than 12 mA, which is the maximum current an ESP8266 GPIO can deliver.
So I suggest you use an external power source for the relay and control it through the pin (D1 in your case).
Or just use a generic 5 V relay with an external 5 V power source and control it using a transistor; here is a circuit.
Additional information: https://electronics.stackexchange.com/questions/213051/how-do-i-use-a-5v-relay-with-a-3-3v-arduino-pro-mini?
Background Info:
I've implemented a Bluetooth LE Peripheral for OSX which exposes two characteristics (using CoreBluetooth). One is readable, and one is writable (both with indications on). I've implemented a Bluetooth LE Central on iOS which will read from the readable characteristic and write to the writable characteristic. I've set it up so that every time the characteristic value is read, the value is updated (in a way similar to this example). The transfer rates I get with this setup are pathetically slow (topping out at a measured sustained speed of roughly 340 bytes/second). This speed is the actual data, and not a measure including the packet details, ACKs and so on.
Problem:
This sustained speed is too slow. I've considered two solutions:
There's some parameter in CoreBluetooth that I've missed that will help me increase the speed.
I'll need to implement a custom Bluetooth LE service using the IOBluetooth classes instead of CoreBluetooth.
I believe I've exhausted option 1. I don't see any other parameters I can tweak. I'm limited to sending 20 bytes per message. Anything else and I get cryptic errors on the iOS device concerning Unknown Errors, Unlikely Errors, or the value being "Not Long". Since the demo project also indicates a 20 byte MTU, I'll accept that this likely isn't possible.
So I'm left with option 2. I'm trying to somehow modify the connection parameters for Bluetooth LE on OSX to hopefully allow me to increase the transfer speed (by setting the min and max conn intervals to be 20ms and 40ms respectively - as well as sending multiple BT packets per connection interval). It looks like providing my own SDP Service on IOBluetooth is the only way to achieve this on OSX. The problem with this is the documentation for how to do this is negligible to non-existent.
This tells me how to implement my own service (albeit using a deprecated API); however, it doesn't explain the required parameters for registering an SDP service. So I'm left wondering:
Where can I find the required parameters for this dictionary?
How do I define these parameters in a way to offer a Bluetooth LE service?
Is there any alternative to providing a Bluetooth LE Peripheral on OSX via another framework (Python library? Linux in a VM with access to the Bluetooth stack? I'd like to avoid this altogether.)
I decided my best course of action was to attempt to use Linux in a VM as there is more documentation available and access to the source code would hopefully guarantee that I could find a solution. For anyone who is also facing this problem, here's how you can issue a Connection Parameter Update Request on OS X (sort of).
Step 1
Install a Linux VM. I used Virtual Box with Linux Mint 15 (64-bit Cinnamon).
Step 2
Allow usage of the OS X Bluetooth device in your VM. Attempting to forward the Bluetooth USB Controller to your VM will give an error message. To allow this, you need to stop everything that is using the controller. On my machine, that included issuing the following commands from the command line:
sudo launchctl unload /System/Library/LaunchDaemons/com.apple.blued.plist
This will kill the OS X Bluetooth daemon. Attempting to kill blued from the Activity Monitor will just cause it to be automatically relaunched.
sudo kextunload -b com.apple.iokit.BroadcomBluetoothHostControllerUSBTransport
On my MacBook, I've got a Broadcom controller and this is the kernel module that OS X uses for it. Don't worry about issuing these commands. To undo the changes, you can power down and reboot your machine (note, in some cases when playing with the BT controller and it got into a bad state, I had to actually leave the machine powered down for ~10 seconds before rebooting to clear volatile memory).
If after running these two commands you still can't mount the BT controller, you can run kextstat | grep Bluetooth and see other Bluetooth related kernel modules and then try to unload them as well. I've got ones named IOBluetoothFamily and IOBluetoothSerialManager that don't need to be unloaded.
Step 3
Launch your VM and get your Linux BT stack. I checked out the bluez Git repo from here. I specifically grabbed the 5.14 release tag using git checkout tags/5.14 just to be sure it was at least a tagged version and less likely to be broken. 5.14 is the newest tag as of writing this answer.
Step 4
Build bluez. This was done using bootstrap, then configure, then make and make install. I used the --prefix=/opt/bluez flag on configure to prevent overwriting the installed Bluetooth stack. Also, I used the --enable-maintainer-mode configure flag for the reason stated in the next step. You also might need to use --disable-systemd to get it to configure. Bluez has a bunch of tools and utilities you can use for various things. In order to use the built Bluetooth daemon, you need to stop the system daemon using sudo service bluetooth stop. You can then launch the built one using sudo /opt/bluez/libexec/bluetooth/bluetoothd -n -d (this launches in non-daemon mode with debug output).
Step 5
Get your LE service running via bluez. You can look at bluez/plugins/gatt-example.c for how to do this. I directly modified this by removing the unnecessary code and using the battery service code as a template for my own service and characteristics. You need to recompile bluez to have this code added to the bluetooth daemon. One thing to note (that caused me a day or two of trouble getting this working) was that iOS caches the GATT service listing, and this is not read/refreshed on each connection. If you add a service or characteristic or change a UUID, you'll need to disable Bluetooth on your iOS device and then re-enable it. This is undocumented in Apple's docs and there is no programmatic way to do it.
Step 6
Unfortunately, this is where things get tricky. Bluez doesn't have support built-in for issuing the Connection Parameters Update Request using any of its utilities. I had to write it myself. I'm currently seeing if they want my code to be included in the bluez stack. I can't post the code currently as I'd need to first see if the bluez devs are interested in the code and then get approval from my workplace to give the code. However, I can currently explain what I did to enable support.
Step 7
Prime yourself on the Bluetooth Standard. Any version 4.0 or greater will have the details you need. Read the following sections.
See Vol. 2, Part E, 4.1 for Host to Controller HCI flow.
See Vol. 2, Part E, 5.4.2 for HCI ACL Data Packet format.
See Vol. 3, Part A, 4 for Signalling Packet format.
See Vol. 3, Part A, 4.20 for Connection Parameter Update Request format.
You're basically going to need to write the code to format the packets and then write them to the hci device. The HCI ACL Data Packet header contains 4 bytes. This is followed by the 4-byte L2CAP header carrying the signalling payload's length and channel ID. This is then followed by your signalling payload, which in my case was 12 bytes (for the Connection Parameter Update Request).
You can then write them to the device similar to hci_send_cmd in bluez/lib/hci.c. I did each packet header as its own struct and wrote them each as iovecs to the device. I put my new function in the hci.c file and exposed it with a function prototype in bluez/lib/hci_lib.h. I then modified bluez/tools/hcitool.c to allow me to call this method from the command line. In my case, I made it so that the command was nearly identical to the lecup command, as it requires the same parameters (lecup can't be used as it's meant to be called on the master side, not the slave).
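To make that byte layout concrete, here is a rough, untested sketch (this is not my actual code, which I can't post; the struct and helper names are invented for illustration). The only part borrowed from bluez is the idea of writing the pieces as iovecs to the raw HCI socket, in the spirit of hci_send_cmd() in bluez/lib/hci.c:

/* Sketch of the Connection Parameter Update Request as raw bytes.
 * Struct names and send_cpup() are hypothetical; all multi-byte fields
 * are little-endian on the wire, so writing the structs directly only
 * works as-is on a little-endian host. */
#include <stdint.h>
#include <sys/uio.h>

struct acl_hdr {                 /* 4-byte HCI ACL Data Packet header */
    uint16_t handle;             /* connection handle + PB/BC flag bits */
    uint16_t dlen;               /* length of everything that follows */
} __attribute__((packed));

struct l2cap_hdr {               /* 4-byte L2CAP basic header */
    uint16_t len;                /* length of the signalling payload (12) */
    uint16_t cid;                /* 0x0005 = LE signalling channel */
} __attribute__((packed));

struct conn_param_update_req {   /* 12-byte signalling packet */
    uint8_t  code;               /* 0x12 = Connection Parameter Update Request */
    uint8_t  ident;              /* identifier echoed in the response */
    uint16_t plen;               /* 8 = length of the four parameters below */
    uint16_t min_interval;       /* 16 * 1.25 ms = 20 ms */
    uint16_t max_interval;       /* 32 * 1.25 ms = 40 ms */
    uint16_t slave_latency;      /* in connection events */
    uint16_t timeout;            /* 200 * 10 ms = 2 s supervision timeout */
} __attribute__((packed));

/* Write the pieces to an already-open raw HCI socket, in the same
 * iovec style hci_send_cmd() uses for HCI commands. */
static int send_cpup(int hci_fd, uint16_t conn_handle)
{
    uint8_t type = 0x02;         /* HCI ACL data packet indicator (H4 framing) */
    struct conn_param_update_req req = {
        .code = 0x12, .ident = 1, .plen = 8,
        .min_interval = 16, .max_interval = 32,
        .slave_latency = 0, .timeout = 200,
    };
    struct l2cap_hdr l2  = { .len = sizeof(req), .cid = 0x0005 };
    struct acl_hdr   acl = { .handle = conn_handle,
                             .dlen = sizeof(l2) + sizeof(req) };
    struct iovec iv[4] = {
        { &type, 1 }, { &acl, sizeof(acl) },
        { &l2, sizeof(l2) }, { &req, sizeof(req) },
    };
    return writev(hci_fd, iv, 4);
}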
Recompiled all of this and then, voila, I can use my new command on hcitool to send the parameters to the bluetooth controller. After sending my command, it then re-negotiates with the iOS device as expected.
Comments
This process is not for the faint of heart. Hopefully, either this or some other method of setting the connection parameters will be added to bluez to simplify the process. Ideally, Apple will allow this via CoreBluetooth or IOBluetooth at some point as well (it may be possible, but it is undocumented / difficult; I gave up with the Apple libraries). I've journeyed down the rabbit hole and learned much more about the Bluetooth spec than I thought I'd have to, simply to change the connection parameters between a MacBook and an iPhone. Hopefully this will be helpful to somebody at some point (even if it's me checking back on how I did this).
I know I've left out a lot of details in this in order to keep it somewhat brief (i.e. usage on the bluez tools). Please comment if something isn't clear.
If you are implementing your Peripheral using CoreBluetooth, you can request somewhat customized connection parameters by calling -[CBPeripheralManager setDesiredConnectionLatency:forCentral:] to Low, Medium, or High (where Low latency means higher bandwidth). The documentation does not specify what this means, so we have to test it ourselves.
On an OSX Peripheral, when you set the desired latency to Low, the interval is still 22.5ms which is far from the minimum of 7.5ms.
On OSX Yosemite 10.10.4, this is what the CBPeripheralManagerConnectionLatency values mean:
Low: Min Interval: 18 (22.5ms), Max Interval: 18 (22.5ms), Slave Latency: 4 events, Timeout: 200 (2s).
Medium: Min Interval: 32 (40ms), Max Interval: 32 (40ms), Slave Latency: 6 events, Timeout: 200 (2s)
High: Min Interval: 160 (200ms), Max Interval: 160 (200ms), Slave Latency: 2 events, Timeout: 300 (3s)
Here is the code that I used to run a CBPeripheralManager on OSX. I used an Android device as central using BLE Explorer and dumped the Bluetooth traffic to a Btsnoop file.
// clang main.m -framework Foundation -framework IOBluetooth
#import <Foundation/Foundation.h>
#import <IOBluetooth/IOBluetooth.h>
@interface MyPeripheralManagerDelegate: NSObject<CBPeripheralManagerDelegate>
@property (nonatomic, assign) CBPeripheralManager* peripheralManager;
@property (nonatomic) CBPeripheralManagerConnectionLatency nextLatency;
@end

@implementation MyPeripheralManagerDelegate

+ (NSString*)stringFromCBPeripheralManagerState:(CBPeripheralManagerState)state {
    switch (state) {
        case CBPeripheralManagerStatePoweredOff: return @"PoweredOff";
        case CBPeripheralManagerStatePoweredOn: return @"PoweredOn";
        case CBPeripheralManagerStateResetting: return @"Resetting";
        case CBPeripheralManagerStateUnauthorized: return @"Unauthorized";
        case CBPeripheralManagerStateUnknown: return @"Unknown";
        case CBPeripheralManagerStateUnsupported: return @"Unsupported";
    }
}

+ (CBUUID*)LatencyCharacteristicUuid {
    return [CBUUID UUIDWithString:@"B81672D5-396B-4803-82C2-029D34319015"];
}

- (void)peripheralManagerDidUpdateState:(CBPeripheralManager *)peripheral {
    NSLog(@"CBPeripheralManager entered state %@", [MyPeripheralManagerDelegate stringFromCBPeripheralManagerState:peripheral.state]);
    if (peripheral.state == CBPeripheralManagerStatePoweredOn) {
        NSDictionary* dict = @{CBAdvertisementDataLocalNameKey: @"ConnLatencyTest"};
        // Generated with uuidgen
        CBUUID *serviceUuid = [CBUUID UUIDWithString:@"7AE48DEE-2597-4B4D-904E-A3E8C7735738"];
        CBMutableService* service = [[CBMutableService alloc] initWithType:serviceUuid primary:TRUE];
        // value:nil makes it a dynamic-valued characteristic
        CBMutableCharacteristic* latencyCharacteristic = [[CBMutableCharacteristic alloc] initWithType:MyPeripheralManagerDelegate.LatencyCharacteristicUuid properties:CBCharacteristicPropertyRead value:nil permissions:CBAttributePermissionsReadable];
        service.characteristics = @[latencyCharacteristic];
        [self.peripheralManager addService:service];
        [self.peripheralManager startAdvertising:dict];
        NSLog(@"startAdvertising. isAdvertising: %d", self.peripheralManager.isAdvertising);
    }
}

- (void)peripheralManagerDidStartAdvertising:(CBPeripheralManager *)peripheral
                                       error:(NSError *)error {
    if (error) {
        NSLog(@"Error advertising: %@", [error localizedDescription]);
    }
    NSLog(@"peripheralManagerDidStartAdvertising %d", self.peripheralManager.isAdvertising);
}

+ (CBPeripheralManagerConnectionLatency) nextLatencyAfter:(CBPeripheralManagerConnectionLatency)latency {
    switch (latency) {
        case CBPeripheralManagerConnectionLatencyLow: return CBPeripheralManagerConnectionLatencyMedium;
        case CBPeripheralManagerConnectionLatencyMedium: return CBPeripheralManagerConnectionLatencyHigh;
        case CBPeripheralManagerConnectionLatencyHigh: return CBPeripheralManagerConnectionLatencyLow;
    }
}

+ (NSString*)describeLatency:(CBPeripheralManagerConnectionLatency)latency {
    switch (latency) {
        case CBPeripheralManagerConnectionLatencyLow: return @"Low";
        case CBPeripheralManagerConnectionLatencyMedium: return @"Medium";
        case CBPeripheralManagerConnectionLatencyHigh: return @"High";
    }
}

- (void)peripheralManager:(CBPeripheralManager *)peripheral didReceiveReadRequest:(CBATTRequest *)request {
    if ([request.characteristic.UUID isEqualTo:MyPeripheralManagerDelegate.LatencyCharacteristicUuid]) {
        [self.peripheralManager setDesiredConnectionLatency:self.nextLatency forCentral:request.central];
        NSString* description = [MyPeripheralManagerDelegate describeLatency: self.nextLatency];
        request.value = [description dataUsingEncoding:NSUTF8StringEncoding];
        [self.peripheralManager respondToRequest:request withResult:CBATTErrorSuccess];
        NSLog(@"didReceiveReadRequest:latencyCharacteristic. Responding with %@", description);
        self.nextLatency = [MyPeripheralManagerDelegate nextLatencyAfter:self.nextLatency];
    } else {
        NSLog(@"didReceiveReadRequest: (unknown) %@", request);
    }
}
@end

int main(int argc, const char * argv[]) {
    @autoreleasepool {
        MyPeripheralManagerDelegate *peripheralManagerDelegate = [[MyPeripheralManagerDelegate alloc] init];
        CBPeripheralManager* peripheralManager = [[CBPeripheralManager alloc] initWithDelegate:peripheralManagerDelegate queue:nil];
        peripheralManagerDelegate.peripheralManager = peripheralManager;
        [[NSRunLoop currentRunLoop] run];
    }
    return 0;
}
I'm trying to study and understand operations of the Linux tcp/ip stack, specifically how 'ping' sends packets down and receives them.
Ping creates a raw socket in the AF_INET family, so I placed a printk() in inet_sendmsg() in net/ipv4/af_inet.c to print out the socket protocol name (RAW, UDP, etc.) and the address of the protocol-specific sendmsg function, which correctly appears to be raw_sendmsg() from net/ipv4/raw.c.
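For reference, the probe is just an extra printk, roughly like this (paraphrased, assuming the 2.6.31 signature of inet_sendmsg(); the message text is illustrative, not my exact line):

/* net/ipv4/af_inet.c (2.6.31-ish), with a debug printk added. */
int inet_sendmsg(struct kiocb *iocb, struct socket *sock, struct msghdr *msg,
                 size_t size)
{
    struct sock *sk = sock->sk;

    printk(KERN_DEBUG "inet_sendmsg: proto=%s sendmsg=%p\n",
           sk->sk_prot->name, sk->sk_prot->sendmsg);

    /* ... original function body (autobind check) ... */

    return sk->sk_prot->sendmsg(iocb, sk, msg, size);
}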
Now, I send a single packet and observe that I'm getting the printk from inet_sendmsg() twice. This puzzles me -- is it normal (does it have something to do with interrupts, etc.?), or is there something broken in the kernel?
Platform - ARM5te, kernel 2.6.31.8
Looking forward to hearing from you !
Mark
I'm attempting to receive a UDP Broadcast under Mono for Android and I am seeing no data coming in. This is somewhat perplexing because it works fine on the Galaxy Tab 7 and Galaxy Tab 10 (Android v 3.2) I have, but fails on an HTC G2 (Android v2.3.4).
The code is straightforward:
public void BeginDiscover()
{
    var packet = new DiscoverPacket();
    lock (m_syncRoot)
    {
        var localEndpoint = new IPEndPoint(m_local, 0);
        using (var udp = new UdpClient(localEndpoint))
        {
            var remoteEndpoint = new IPEndPoint(IPAddress.Broadcast, DiscoverPort);
            udp.Send(packet.Data, packet.Data.Length, remoteEndpoint);
            Thread.Sleep(100);
        }
    }
}
I have verified that the manifest includes this line:
<uses-permission android:name="android.permission.INTERNET" />
Though this is happening in Debug, so that should be implicitly set anyway.
Other very strange observations:
Again, this is working just fine on another type of device
The handler listening for UDP broadcasts (which is listening for the response) does see this outbound packet. The code for this listener is also straightforward:
[listener code]
private void Start()
{
    m_discoverListener = new UdpClient(DiscoverPort);
    m_discoverListener.BeginReceive(DiscoverCallback, m_discoverListener);
}

private void DiscoverCallback(IAsyncResult result)
{
    try
    {
        var ep = new IPEndPoint(IPAddress.Any, DiscoverPort);
        var data = m_discoverListener.EndReceive(result, ref ep);

        // filter out what we send
        var add = AddressWithoutPort(ep.Address);
        if (add == m_local.ToString()) return;

        // parse discover response
        // [clipped for clarity]
    }
    finally
    {
        m_discoverListener.BeginReceive(DiscoverCallback, m_discoverListener);
    }
}
Wireshark running on a separate PC on the same network does see the discover request packet (from above)
The "discovered" device is also seeing it, because Wireshark is also seeing the reply
The Android device UDP listener is not receiving the response packet
The only major difference between devices that I can think of (other than different OEMs implementing the platform) is that the G2 has a cellular radio built in and the Galaxy Tab does not. In my specific test case, I have no SIM card in the phone, though, so no cellular connection is being made. Note that the code above is explicitly using the local endpoint that is on the WiFi network.
Is there a known issue with UDP on the G2 specifically or generally on older implementations of the Android platform?
It took a bit of work as the UDP response in question is coming from a microcontroller on the device and I wanted to make absolutely certain that it wasn't an issue on the micro end (though I suspected it wasn't). I created a PC-based simulator for the microcontroller device that handles my Android UDP request and that sends back the exact same UDP response that the microcontroller does, then verified all of the traffic looks fine with Wireshark.
The net result is that I see the exact same behavior with the simulator. The Galaxy Tab 7 and 10 devices receive the UDP response no problem. The HTC G2 never does. This leads me to conclude that one of the following is true:
a) The HTC G2 specifically has an implementation bug preventing it from receiving (or at least passing along) UDP broadcasts on the network
or
b) The older Android build has this bug.
Until I find different hardware with the same Android version as the G2 (v2.3) I can't tell which is the case. In either event, it's a bug that makes this (and potentially other) hardware unusable for my specific solution.
I have a couple of applications on the market based on UDP communication.
I have problems with HTC phones not receiving the UDP broadcast packets sent from another device... if sent from the same device, the packets arrive.
So, I think the problem is in HTC, and I found a possible solution online (even though I have not tried it):
http://www.flattermann.net/2010/09/fix-udp-broadcasts-on-htc-phones-running-stock-firmware/