Scapy - retrieving RSSI from WiFi packets

I'm trying to get RSSI or signal strength from WiFi packets.
I also want the RSSI of WiFi probe requests (when somebody is searching for WiFi hotspots).
I managed to see it in Kismet logs, but that was only to make sure it is possible; I don't want to use Kismet all the time.
For 'full-time scanning' I'm using scapy. Does anybody know where I can find the RSSI or signal strength (in dBm) in the packets sniffed with scapy? I don't know how the whole packet is built, and there are a lot of 'hex' values which I don't know how to parse/interpret.
I'm sniffing on both interfaces: wlan0 (detecting when somebody connects to my hotspot) and mon.wlan0 (detecting when somebody is searching for hotspots).
The hardware (WiFi card) I use is based on a Prism chipset (ISL3886). However, the test with Kismet was run on Atheros (AR2413) and Intel iwl4965 cards.
Edit1:
Looks like I need to somehow access the information stored in the PrismHeader class:
http://trac.secdev.org/scapy/browser/scapy/layers/dot11.py
line 92?
Does anybody know how to access this information?
packet.show() and packet.show2() don't show anything from this class/layer.
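For reference, if the driver really did hand over Prism headers, the fields of that class could be read directly once the layer shows up in packet.show(). A minimal sketch, assuming the PrismHeader layer is present and that its signal field carries the value wanted here (verify the exact field names with ls(PrismHeader) on your scapy version); mon.wlan0 is the monitor interface from the question:

from scapy.all import sniff, PrismHeader

def print_prism_signal(packet):
    # Only meaningful if the capture actually carries Prism headers,
    # i.e. packet.show() lists a PrismHeader layer.
    if packet.haslayer(PrismHeader):
        print(packet[PrismHeader].signal)

sniff(iface="mon.wlan0", prn=print_prism_signal)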
Edit2:
After more digging, it appears that the interface just isn't set up correctly, which is why it doesn't collect all the necessary headers.
If I run Kismet and then sniff packets from that interface with scapy, there is more info in the packet:
###[ RadioTap dummy ]###
version= 0
pad= 0
len= 26
present= TSFT+Flags+Rate+Channel+dBm_AntSignal+Antenna+b14
notdecoded= '8`/\x08\x00\x00\x00\x00\x10\x02\x94\t\xa0\x00\xdb\x01\x00\x00'
...
Now I only need to set the interface correctly without using kismet.

Here is a valuable scapy extension that improves scapy.layers.dot11.Packet's parsing of the currently not-decoded fields.
https://github.com/ivanlei/airodump-iv/blob/master/airoiv/scapy_ex.py
Just use:
import scapy_ex
And:
packet.show()
It'll look like this:
###[ 802.11 RadioTap ]###
version = 0
pad = 0
RadioTap_len= 18
present = Flags+Rate+Channel+dBm_AntSignal+Antenna+b14
Flags = 0
Rate = 2
Channel = 1
Channel_flags= 160
dBm_AntSignal= -87
Antenna = 1
RX_Flags = 0
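Putting the two steps above together, a minimal sniff loop might look like the sketch below (assumptions: scapy_ex.py is on the import path and "mon0" is a placeholder for your monitor-mode interface):

from scapy.all import sniff
import scapy_ex  # patches scapy's RadioTap/802.11 parsing as described above

def show_radiotap(packet):
    packet.show()  # now prints the extra RadioTap fields shown above

sniff(iface="mon0", prn=show_radiotap, count=10)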

To summarize:
the signal strength was not visible because something was wrong with the way 'monitor mode' was set up (not all headers were passed/parsed by the sniffers). That monitor interface had been created by hostapd.
Now I'm setting monitor mode on the interface with airmon-ng; tcpdump and scapy then show these extra headers.
Edited: use scapy 2.4.1+ (or the GitHub dev version). The most recent versions now correctly decode the 'notdecoded' part.
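With scapy 2.4.1+ the RadioTap layer exposes the signal strength as a named field, so it can be read directly instead of slicing notdecoded as in the answers below. A minimal sketch, assuming a monitor-mode interface (here the placeholder "mon0"):

from scapy.all import sniff, RadioTap, Dot11

def print_rssi(packet):
    # With scapy >= 2.4.1 the RadioTap header is dissected, so the
    # signal strength is available as a named field when present.
    rssi = getattr(packet.getlayer(RadioTap), "dBm_AntSignal", None)
    if rssi is not None and packet.haslayer(Dot11):
        print(packet[Dot11].addr2, rssi, "dBm")

sniff(iface="mon0", prn=print_rssi)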

For some reason the packet structure has changed. Now dBm_AntSignal is the first element in notdecoded.
I am not 100% sure of this solution, but I used sig_str = -(256 - ord(packet.notdecoded[-2:-1])) to reach the first element, and I get values that seem to be dBm_AntSignal.
I am using OpenWrt on a TP-Link MR3020 with extroot and Edward Keeble's Passive Wifi Monitoring project with some modifications.
I use scapy_ex.py and got this information:
802.11 RadioTap
version = 0
pad = 0
RadioTap_len= 36
present = dBm_AntSignal+Lock_Quality+b22+b24+b25+b26+b27+b29
dBm_AntSignal= 32
Lock_Quality= 8

If someone still has the same issue, I think I have found the solution:
I believe this is the right cut for the RSSI value:
sig_str = -(256-ord(packet.notdecoded[-3:-2]))
and this one is for the noise level:
noise_str = -(256-ord(packet.notdecoded[-2:-1]))
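For reference, a small sniff callback built around these cuts could look like the sketch below. This is only a sketch: as the other answers show, the byte offsets depend on which radiotap fields your driver includes, so verify them against your own notdecoded dump first ("mon0" is a placeholder interface):

from scapy.all import sniff, Dot11

def handle(packet):
    if not packet.haslayer(Dot11):
        return
    raw = packet.notdecoded
    if len(raw) < 3:
        return
    sig_str = -(256 - ord(raw[-3:-2]))    # RSSI cut from this answer
    noise_str = -(256 - ord(raw[-2:-1]))  # noise cut from this answer
    print(packet[Dot11].addr2, "signal:", sig_str, "dBm", "noise:", noise_str, "dBm")

sniff(iface="mon0", prn=handle)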

The fact that it says "RadioTap" suggests that the device may supply Radiotap headers, not Prism headers, even though it has a Prism chipset. The p54 driver appears to be a "SoftMAC driver", in which case it'll probably supply Radiotap headers; are you using the p54 driver or the older prism54 driver?

I have a similar problem. I set up monitor mode with airmon-ng and I can see the dBm level in tcpdump, but whenever I try sig_str = -(256-ord(packet.notdecoded[-4:-3])) I get -256, because the value returned from notdecoded is 0. The packet structure looks like this:
version = 0
pad = 0
len = 36
present = TSFT+Flags+Rate+Channel+dBm_AntSignal+b14+b29+Ext
notdecoded= ' \x08\x00\x00\x00\x00\x00\x00\x1f\x02\xed\x07\x05
.......
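When that happens, it can help to dump the raw notdecoded bytes with their offsets and compare them against the dBm value tcpdump reports, so you can see where (or whether) the signal byte actually sits in your capture. A quick sketch:

def dump_notdecoded(packet):
    raw = packet.notdecoded
    # ord() works on one-byte slices in both Python 2 and 3
    print(" ".join("%d:%02x" % (i, ord(raw[i:i + 1])) for i in range(len(raw))))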

Related

Property power-supply for an external pwm-backlight IC

I have this LCD panel:
The LED panel's backlight is driven by the MIC2297 chip, which takes two signals:
BRT - a PWM signal for setting the brightness of the LCD's backlight LEDs.
BL_EN - a GPIO signal that enables or disables the LCD's backlight LEDs.
The MIC2297 is powered from +12 V.
I have connected this display to the BeagleBone Black's (BBB's) expansion connector and I am already running Linux on the BBB's AM335x.
In order to enable the backlight I have to define it properly in the device tree, i.e. the .dts file. Currently I have managed to set this up:
backlightt: backlight {
    compatible = "pwm-backlight";
    pwms = <&ehrpwm1 0 500000000>;
    power-supply = <>; // ???
    enable-gpios = <&gpio2 3 0>;
    brightness-levels = <0 4 8 16 32 64 128 255>;
    default-brightness-level = <7>;
};
What I don't understand is the power-supply property. How can I know which regulator to use? My device uses an external 12 V supply! This is really confusing! Why do we even have to specify a regulator?
I found the solution...
The pwm-backlight driver requires a "power-supply" property that points to some regulator inside the AM335x. This regulator is used to set the output voltage of the PWM, so we don't need to put any kind of voltage regulator between the AM335x and the backlight IC (which might only support a 1.5 V PWM input on some mobile devices). This is actually really useful.

pyserial issues with high baudrate FTDI

I have the following setup:
An FPGA sending out data over UART at a baud rate of 3 Mbps. The data transmitted is a chunk of 1024 bytes sent at a variable periodicity ranging from 20 ms to 200 ms. (So even in the worst case, the data rate is far below 3 Mbps.)
An FTDI 232RG.
A piece of Python running on my computer (Windows), which basically opens a COM port with pyserial at 3 Mbps, polls in_waiting until it reaches the size of a packet (1024 bytes), then formats the received packet and prints it on the screen.
The script works well at low repetition rates, but I face issues at higher ones (typically 20 ms). When the periodicity is 20 ms I eventually end up getting a buffer overflow somewhere before the in_waiting check. I checked the timing of my Python loop and it takes about 4 ms, so it looks like there is something upstream (in the FTDI or Windows) that feeds the pyserial buffer with more than one packet within the 4 ms following a packet.
I tried changing the FTDI latency in the driver (from the 16 ms default down to a few ms) but it does not seem to help.
I am currently clueless about what is happening. Would you have any advice on how to better understand what is going on?
Thanks for your help!
You could create a "loop" between TX and RX and run the following code (tested with an FT2232H, so most likely you need to change the identifier string):
import time
import serial
import serial.tools.list_ports

print([(x[0],x[2]) for x in serial.tools.list_ports.comports()])
port = [x[0] for x in serial.tools.list_ports.comports() if "FT4Q1LJFB" in x[2]][0]
ser = serial.Serial(port,12000000)

while True:
    t0 = time.time()
    counter = 0
    for i in range(1000):
        ser.write([1]*3000)
        recv = ser.read(ser.inWaiting())
        delta_t = time.time() - t0
        counter += len(recv)
    print(counter / delta_t)
For me, the following output is shown:
[('COM7', 'USB VID:PID=0403:6010 SER=FT4Q1LJFA'), ('COM8', 'USB VID:PID=0403:6010 SER=FT4Q1LJFB')]
0.0
0.0
0.0
0.0
96787.81184093593
1201991.0268273412
1201197.0857713912
1201166.9350959768
1201445.4072856384
You will notice that it is 0.0 at the beginning. This is because I connected RX and TX after starting the program, resulting in a ramp-up of the received bytes. This is the "default" mode, meaning 8 data bits + 1 start bit + 1 stop bit = 10 bits per word, which explains why "only" 1.2 MB per second are transmitted.
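Beyond checking raw throughput like this, it may also help on the receive side to enlarge the host buffers and drain the port continuously instead of waiting for a full packet before reading. A rough sketch under those assumptions ("COM7" is a placeholder for the actual port, set_buffer_size is a Windows-only pyserial call, and 1024 is the packet size from the question):

import serial

PACKET_SIZE = 1024  # packet size from the question

ser = serial.Serial("COM7", 3000000)  # placeholder port name, 3 Mbps
# Ask the driver for a larger RX buffer so short stalls in the
# Python loop are less likely to overflow it (Windows only).
ser.set_buffer_size(rx_size=1 << 16, tx_size=1 << 12)

buf = bytearray()
while True:
    # Read whatever is available (blocking for at least one byte)
    # instead of polling in_waiting until a whole packet has piled up.
    buf += ser.read(max(1, ser.in_waiting))
    while len(buf) >= PACKET_SIZE:
        packet, buf = buf[:PACKET_SIZE], buf[PACKET_SIZE:]
        # ... format/print the 1024-byte packet here ...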

Dropped packets even if rte_eth_rx_burst does not return a full burst

I have a weird drop problem, and to understand my question the best way is to have a look at this simple snippet:
while( 1 )
{
    if( config->running == false ) {
        break;
    }

    num_of_pkt = rte_eth_rx_burst( config->port_id,
                                   config->queue_idx,
                                   buffers,
                                   MAX_BURST_DEQ_SIZE);

    if( unlikely( num_of_pkt == MAX_BURST_DEQ_SIZE ) ) {
        rx_ring_full = true; //probably not the best name
    }

    if( likely( num_of_pkt > 0 ) )
    {
        pk_captured += num_of_pkt;

        num_of_enq_pkt = rte_ring_sp_enqueue_bulk(config->incoming_pkts_ring,
                                                  (void*)buffers,
                                                  num_of_pkt,
                                                  &rx_ring_free_space);
        //if num_of_enq_pkt == 0 free the mbufs..
    }
}
This loop is retrieving packets from the device and pushing them into a queue for further processing by another lcore.
When I run a test with a Mellanox card sending 20M (20878300) packets at 2.5 Mpps, the loop seems to miss some packets and pk_captured is always around 19M or so.
rx_ring_full is never true, which means that num_of_pkt is always < MAX_BURST_DEQ_SIZE, so according to the documentation I should not have drops at the HW level. Also, num_of_enq_pkt is never 0, which means that all the packets are enqueued.
Now, if I remove the rte_ring_sp_enqueue_bulk call from that snippet (and make sure to release all the mbufs), then pk_captured is always exactly equal to the number of packets I've sent to the NIC.
So it seems (though I can't quite accept this idea) that rte_ring_sp_enqueue_bulk is somehow too slow, and between one call to rte_eth_rx_burst and another some packets are dropped due to a full ring on the NIC. But then why is num_of_pkt (from rte_eth_rx_burst) always smaller than MAX_BURST_DEQ_SIZE (much smaller), as if there were always sufficient room for the packets?
Note, MAX_BURST_DEQ_SIZE is 512.
edit 1:
Perhaps this information might help: the drops also seem to be visible via rte_eth_stats_get, or, to be more correct, no drops are reported (imissed and ierrors are 0) but the value of ipackets equals my counter pk_captured (have the missing packets just disappeared??).
edit 2:
According to ethtool, rx_crc_errors_phy is zero and all the packets are received at the PHY level (rx_packets_phy is updated with the correct number of transferred packets).
The value of rx_nombuf from rte_eth_stats seems to contain garbage (this is a print from our test application):
OUT(4): Port 1 stats: ipkt:19439285,opkt:0,ierr:0,oerr:0,imiss:0, rxnobuf:2061021195718
For a transfer of 20M packets, as you can see, rxnobuf is either garbage OR has a meaning which I do not understand. The log is generated by:
log("Port %"PRIu8" stats: ipkt:%"PRIu64",opkt:%"PRIu64",ierr:%"PRIu64",oerr:%"PRIu64",imiss:%"PRIu64", rxnobuf:%"PRIu64,
config->port_id,
stats.ipackets, stats.opackets,
stats.ierrors, stats.oerrors,
stats.imissed, stats.rx_nombuf);
where stats came from rte_eth_stats_get.
The packets are not generated on the fly but replayed from an existing PCAP.
edit 3:
After the answer from Adriy (thanks!) I've included the xstats output for the Mellanox card. While reproducing the same problem with a smaller set of packets I can see that rx_mbuf_allocation_errors gets updated, but it seems to contain garbage:
OUT(4): rx_good_packets = 8094164
OUT(4): tx_good_packets = 0
OUT(4): rx_good_bytes = 4211543077
OUT(4): tx_good_bytes = 0
OUT(4): rx_missed_errors = 0
OUT(4): rx_errors = 0
OUT(4): tx_errors = 0
OUT(4): rx_mbuf_allocation_errors = 146536495542
Also those counters seems interesting:
OUT(4): tx_errors_phy = 0
OUT(4): rx_out_of_buffer = 257156
OUT(4): tx_packets_phy = 9373
OUT(4): rx_packets_phy = 8351320
Where rx_packets_phy is the exact amount of packets I've been sending, and summing up rx_out_of_buffer with rx_good_packets I get that exact amount. So it seems that the mbufs get depleted and some packets are dropped.
I made a tweak in the original code: now I'm making a copy of the mbuf from the RX ring (using link) and then releasing the memory immediately; further processing is done on the copy by another lcore. Sadly this does not fix the problem; it turns out that to solve it I have to disable the packet processing and also release the packet copy (on the other lcore), which makes no sense.
Well, will do a bit more investigation, but at least rx_mbuf_allocation_errors seems to need a fix here.
I guess debugging the rx_nombuf counter is the way to go. It might look like garbage, but in fact this counter does not reflect the number of dropped packets (like ierrors or imissed do), but rather the number of failed RX attempts.
Here is a snippet from MLX5 PMD:
uint16_t
mlx5_rx_burst(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
{
    [...]
    while (pkts_n) {
        [...]
        rep = rte_mbuf_raw_alloc(rxq->mp);
        if (unlikely(rep == NULL)) {
            ++rxq->stats.rx_nombuf;
            if (!pkt) {
                /*
                 * no buffers before we even started,
                 * bail out silently.
                 */
                break;
So, the plausible scenario for the issue is as follows:
There is a packet in RX queue.
There are no buffers in the corresponding mempool.
The application polls for new packets, i.e. calls in a loop: num_of_pkt = rte_eth_rx_burst(...)
Each time we call rte_eth_rx_burst(), the rx_nombuf counter gets increased.
Please also have a look at rte_eth_xstats_get(). For MLX5 PMD there is a hardware rx_out_of_buffer counter, which might confirm this theory.
The solution to the missing packets will be to change the ring API from bulk to burst. In DPDK there are two ring operation modes, bulk and burst. In bulk dequeue mode, if 32 elements are requested and only 31 are available, the API transfers none and returns 0; the burst variants transfer as many elements as they can and return that count. The same all-or-nothing behaviour applies to the bulk enqueue (rte_ring_sp_enqueue_bulk) used in the question.
I have faced similar issue too.

SP605 Spartan 6 DDR3 addressing

The following post is quite long but, since I have had trouble getting the SP605 board to interact properly with the DDR3 for over a month now, hopefully it will be useful to others in the same situation I find myself in. I am pretty certain it's a simple configuration or conceptual error, and I would be more than happy to have this resolved soon.
=== SCENARIO ===
I have created a USB-UART interface to communicate with the FPGA and control the DDR3. Using the IP generator in ISE, I generated a MIG wrapper and then designed the memory interface controller. I have referenced manuals ug388 and ug416, but I have not been able to make the DDR3 behave as expected.
=== PROBLEM STATEMENT ===
Playing around with the burst lengths for write and read commands, I am able to get data back from the DDR3, yet the addressing scheme does not seem to be correct as data is duplicated in addresses 0 and 1, 2 and 3, 4 and 5, and so forth. Also, whenever I write into address 0, for example, nothing changes. Then, when I write into address 1, both addresses 0 and 1 are updated with the data value I just sent. It seems I am "losing" half of the memory space due to this coupled effect.
=== DDR3 IP CONFIGURATION ===
The setup for the DDR3 using the IP generator – considering the SP605 board scenario – is listed below. In sum, I activated the DDR3 Bank 3 and configured Port0 to be 32-bit bidirectional.
Memory selection:
Enable AXI interface: unchecked
Use extended MCB performance range: unchecked
Memory type for bank 3: DDR3 SDRAM
Memory type for bank 1: none
Options for C3 – DDR3 SDRAM
Frequency: 400 MHz
Memory part: MT41J64M16XX-187E
Memory options for C3 – DDR3 SDRAM
Output driver impedance control: RZQ/6
RTT (nominal) – ODT: RZQ/4
Auto self refresh: enabled
Port configuration for C3 – DDR3 SDRAM
Two 32-bit bi-directional and four 32-bit unidirectional ports
Port0: checked
Port1: unchecked
Port2: unchecked
Port3: unchecked
Port4: unchecked
Port5: unchecked
Memory address mapping selection: row-bank-column
FPGA options for C3 – DDR3 SDRAM
Memory interface pin termination: Calibrated input termination
Select RZQ pin location: R7
Select ZIO pin location: W4
Debug signals for memory controller: disable
System clock: differential
=== DATA STRUCTURE ===
From Matlab, I send in a 64-bit command which should write or read the DDR3 based on the address and data provided in this command.
wire [00:00] cmd_instruction = usb_data[63:63]; // '0' = write; '1' = read
wire [27:00] cmd_address = usb_data[62:37]; // 26-bit address
wire [31:00] cmd_data = usb_data[31:00]; // 32-bit data
In ug388, the following can be extracted:
Page 20: The address is 26 bits wide.
C_MEM_ADDR_WIDTH = 13
C_MEM_BANKADDR_WIDTH = 3
C_MEM_NUM_COL_BITS = 10
C_P0_DATA_PORT_SIZE = 32 // 32-bit data ports
C_P0_MASK_SIZE = 4 // 4 bytes = 32 bits (1 mask bit = 1 entire data byte)
Pages 26-27: Command data structure.
pX_cmd_addr[29:0]: 30-bit address, however the last two bits should = "00" since every word (32 bits) is formed by 4 bytes.
pX_cmd_bl[5:0]: Burst length of 1 is obtained by setting this signal to 0.
pX_cmd_instr[2:0]: The only command instructions used are write="000" and read="001".
Page 28: Write data structure.
pX_wr_mask[PX_MASKSIZE-1:0]: 4-bit mask is set to "0000" so that all 4 bytes are always written into the memory.
=== SIGNAL ASSIGNMENTS ===
Using all this information, I assigned my signals in the following manner:
assign p0_mcb_cmd_instr = {2'b00, cmd_instruction};
assign p0_mcb_cmd_addr = {2'd0, cmd_address, 2'd0};
assign p0_mcb_cmd_bl = 6'd0;
assign p0_mcb_wr_data = cmd_data;
assign p0_mcb_wr_mask = 4'd0;
localparam C3_MEM_BURST_LEN = 8;
=== CONCLUSIONS ===
Based on the configuration, does anyone know what the expected behavior of my controller should be?
If any additional information is necessary for clarification, please let me know.
Thanks a lot,
Bruno.

Set mmc2 on beaglebone black

I am working with a Beaglebone Black and I would like to use the mmc2 slot.
According to the AM335x TRM, a BeagleBone Black should have 3 MMC interfaces available:
mmc0 (SD card);
mmc1 (2 GB flash);
mmc2.
I am trying to enable mmc2 via the device tree (and I am quite sure I have the right pin settings) but, when running
dmesg
I obtain:
/ocp/mmc#47810000: can't find DMA channel
omap_hsmmc mmc.11: unable to obtain RX DMA engine channel 65
By putting the oscilloscope probe on the header (e.g. the mmc2 clk signal), I do not see any transition.
I already removed R160 to make mmc2 cmd accessible, but I do not see any transitions there either.
I tried to enable it both by
echo > /sys/devices/..../slots
and by
capemgr.enable_partno
with no success.
I can see it in
/sys/devices/..../slots
(with the L meaning loaded), but there is no way to see any signal on the header.
I already googled it but answers are not clear at all.
Any ideas?
My
uname -a
is:
Linux beaglebone 3.8.13 #1 SMP Tue Jun 18 02:11:09 EDT 2013 armv7l GNU/Linux
Thanks for your help.
You need to map the mmc2 DMA events to some DMA channel, since these events are not direct-mapped.
I was not able to do this successfully using device tree overlays, so I made a change in
am335x-bone-common.dtsi directly (not sure this is the best way, though):
&edma {
    ti,edma-xbar-event-map = <32 12>, /* gpevt2 -> 12 */
                             <30 20>, /* xdma_event_intr2 -> 20 */
+                            <1 32>,
+                            <2 33>;
};
In the example above the event 1 (SDTXEVT2) was mapped to channel 32 and event 2 (SDRXEVT2) to channel 33.
In case you want to pick another open DMA channel, check Table 11-23 (Direct Mapped) and Table 11-24 (Crossbar Mapped) in the technical reference manual, Rev. J.
In your device tree overlay file add these channels in the mmc3 node:
dmas = <&edma 32
        &edma 33>;
dma-names = "tx", "rx";
