Why can't I capture CDP Packets with my app? - wireshark

So I'm having a strange issue trying to capture CDP packets. I wrote my own lightweight application utilizing the wpcap.dll and built a filter and a parser. Everything was working fine until I started testing with other computers. It was then I figured out that my application will NOT capture CDP packets if Wireshark is not running.
It is capturing and parsing packets fine; I can see them scrolling by, but I never get a CDP hit. My switches are set to advertise every 60 seconds, but I can leave my program open for 10 mins and then open Wireshark and get a CDP packet. If I close Wireshark I no longer see the CDP packets.
I'm using the filter 'ether[20:2] == 0x2000' looking for type '0x01E3'

For anyone else who finds this question, my issue was that I was not calling the capture in promiscuous mode. CDP packets are multicast and not directed to your computer's MAC. I knew this but for some reason overlooked it in my own program...
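An added example, not the asker's application code: it models why the frame never reached the filter. The sample frame, the host MAC and all function names are constructed for illustration, and the NIC receive filter is a simplified model.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

static const uint8_t sample_cdp[] = {         /* constructed frame; CDP checksum left 0 */
    0x01,0x00,0x0c,0xcc,0xcc,0xcc,            /* dst: Cisco CDP multicast address */
    0x02,0x00,0x00,0x00,0x00,0x01,            /* src: constructed switch MAC */
    0x00,0x13, 0xaa,0xaa,0x03,0x00,0x00,0x0c, /* 802.3 length, LLC/SNAP, Cisco OUI */
    0x20,0x00, 0x02,0xb4,0x00,0x00,           /* PID at bytes 20-21, then CDP header */
    0x00,0x01,0x00,0x07,'s','w','1'           /* TLV type 1, length 7 incl. header */
};

/* Same byte test as the capture filter ether[20:2] == 0x2000. */
int matches_filter(const uint8_t *f, size_t n) {
    return n >= 22 && f[20] == 0x20 && f[21] == 0x00;
}

/* Simplified NIC model: non-promiscuous hosts see own MAC, broadcast, joined groups. */
int nic_delivers(const uint8_t *f, const uint8_t *mac, int promisc, int joined) {
    static const uint8_t bcast[6] = {0xff,0xff,0xff,0xff,0xff,0xff};
    if (promisc || !memcmp(f, mac, 6) || !memcmp(f, bcast, 6)) return 1;
    return (f[0] & 0x01) && joined;           /* group bit of destination MAC */
}

/* Copy the first CDP TLV of `type` that fits into out; returns value length or -1. */
int cdp_tlv(const uint8_t *f, size_t n, unsigned type, char *out, size_t cap) {
    for (size_t off = 26; off + 4 <= n; ) {   /* TLVs start after the CDP header */
        unsigned t = (unsigned)f[off] << 8 | f[off + 1];
        size_t len = (size_t)f[off + 2] << 8 | f[off + 3];
        if (len < 4 || off + len > n) return -1;   /* length includes the header */
        if (t == type && len - 4 < cap) {
            memcpy(out, f + off + 4, len - 4);
            out[len - 4] = '\0';
            return (int)(len - 4);
        }
        off += len;
    }
    return -1;
}

int main(void) {
    const uint8_t host[6] = {0x02,0x00,0x00,0x00,0x00,0x99};  /* constructed host MAC */
    char id[32] = "";
    cdp_tlv(sample_cdp, sizeof sample_cdp, 1, id, sizeof id);
    printf("filter=%d normal=%d promisc=%d device=%s\n",
           matches_filter(sample_cdp, sizeof sample_cdp),
           nic_delivers(sample_cdp, host, 0, 0), nic_delivers(sample_cdp, host, 1, 0), id);
    return 0;
}
```

With libpcap/WinPcap, promiscuous mode is requested through the `promisc` argument of `pcap_open_live()`; a second capture tool that opens the adapter promiscuously changes what every listener on that adapter receives.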

Related

(Real) Monitor Mode in ESP8266

I know I can send 802.11 custom packets with wifi_send_pkt_freedom, and I'm using it without any problem.
But what about receiving? Yes, I can enable promiscuous mode and filter by MAC address. Doing that is perfect for sniffing, but not for communication, because the ESP8266 won't ACK the packets, and if I'm not sniffing and my ESP is the only receiver, it will make the transmitter repeat the packet many times. Yes, I can send it as broadcast or multicast to avoid waiting for the ACK, but I'll be missing the ACK/re-send functionality. In short, I would like to put the ESP WiFi interface in monitor mode (which is not promiscuous mode).
And yes, I can use ESPNOW, but for my application it won't suit my needs very well.
Thanks!
Román

UDP Packets IOS 9 LTE Network GCDASYNCUDP

I have a basestation (beaglebone running linux) at my home which is connected to multiple cameras. I connect my iPhone iOS9 to the basestation via TCP and the basestation will stream the video of each camera to a specific UDP port. All is well.
I want to be able to see the cameras when I am not connected to the local network. When my phone is connected to LTE, I have the iPhone connecting to the public address of my router via TCP and with port forwarding, all data is forwarded to the basestation. I am connecting and talking just like it was on the local network with the TCP client. All is well.
The problem is when the video is streamed via UDP on a specific port, no different than when on the local network, the basestation has no problem sending the packets, but the iPhone is not receiving anything. I am using GCDASYNCUDPSOCKET and my cellular carrier is Verizon.
I am wondering if this issue is due to Verizon blocking UDP packets? Or possibly there needs to be something else done other than just binding the iPhone UDP socket to a specific port and calling the beginreceiving function. I feel if it works on the local network, it should work on the cellular network.
I have also tried to ping the address of my cellphone from my computer which does not work. I am guessing the reason is because the iPhone has blocked this. It should be no different than pinging the address of google or anything else.
Please give me some insight on the possible issues or workarounds. I don't think I need to port forward the UDP since it is only outgoing and my Netgear router does not limit any outbound traffic (from the router to the iPhone). I was doing all this over TCP before trying to send the video via UDP. It is much slower waiting to receive acks for 5 cameras streaming live video. And when it doesn't receive a packet it backs up the buffer and causes more issues. Also, I had an issue with the TCP packets combining together, so then I had to implement some kind of custom ack which made the delay worse, or add an end of message, but then it slows down on parsing, and since I don't know exactly what data is coming it made things more difficult.
UDP is the way to go; I just cannot receive the packets at this time. My understanding is that a lot of games use UDP and they work on the LTE network, so I'm not quite sure what the problem is. Are there special UDP ports that only work with Verizon?

How does Linux kernel wifi driver determine when a connection is lost?

My understanding is that the current WiFi driver uses a rate control algorithm to choose a data rate within a small set of predetermined values to send packets over the WiFi medium. Different algorithms exist for this purpose. But how does this process work when the WiFi driver decides that the connection is lost and shuts down the connection altogether? Which part of the code should I read in an open source WiFi driver such as MadWiFi and the like?
The WiFi driver for your hardware which runs in Linux communicates with the WiFi chip which also runs a pretty complex firmware. The interface between the driver and the firmware is hardware specific. In some hardware the detection of connection loss events is done completely by the firmware and the driver only gets a "disconnected" event while in others the driver is also involved.
Regardless of who does what, disconnection usually occurs due to either:
1. Receiving a DEAUTH frame from the AP.
2. Detecting too many missing beacons. Beacons are WiFi frames sent periodically by the AP (for most APs every ~100ms). If you get too far from the AP or the AP was just powered off, you stop seeing the beacons in the air and usually you'll signal disconnection or try to roam to a different AP.
3. Too many failures on Tx of packets (i.e. not receiving ACK frames for too much traffic). This usually indicates that you've gone too far from the AP. It could be that you can "hear" the AP but it can't hear you anymore. In this case it also makes sense to signal a disconnection.
For example, you can look in the TI WiFi driver in the Linux kernel, drivers/net/wireless/ti/wlcore/events.c, at the function wlcore_event_beacon_loss().
In the cfg80211 architecture, assume we are in station mode.
The driver calls the kernel API cfg80211_send_disassoc() if we received a disassoc/deauth frame. This function will notify the corresponding application (e.g. wpa_supplicant) of a disconnect event.
On the other hand, when we decide to disconnect from the AP, the application (e.g. wpa_supplicant) can call the Linux kernel API cfg80211_disconnected(); it will trigger the corresponding driver ioctl function to finish the disconnection task.

ipad mdns/bonjour not responding

I am developing an app to get information on network devices.
I have seen two different iPads get into a state where they are not sending out bonjour/mDNS traffic.
I used Wireshark and did not see any broadcast traffic from the iPads at all.
I have a bonjour broadcast that other devices were responding to, but the two iPads in question did not respond.
After I shut down the iPad and restarted it, I was seeing normal bonjour traffic and they responded to my bonjour query just fine.
The iPads had been running for a long time without being shut down.
So, the question is: do iPads get into a funky state after they have been running for a long time where the mDNS service stops working?
Are there other causes for this to happen?
Is there any way to kick it other than shutting down to get it to respond again?
A suggestion: it may mostly be a problem in your program. I once met this case. My solution is like this:
Restart (re-alloc) the bonjour browse once the program has switched between "background" and "foreground"; of course, first stop the bonjour browse when going from "foreground" to "background". Thus solved.

Packet Sniffing from a BlackBerry app

I want to develop an app that does basic packet sniffing. So, I would like to know if packet sniffing is feasible from a BlackBerry.
I don't think this is possible. The most you can do is keep track of the number of packets sent and received over the radio, but not see the actual contents. See RadioInfo.getNumberOfPacketsReceived() and RadioInfo.getNumberOfPacketsSent().
This is tagged "blackberry simulator"; are you looking for an app to observe what the simulator is doing, or for an app in real-world mode?
Intercepting things going in and out of the sim is not too hard, especially if you acted as some kind of intermediary pipe between the bbsim and the mds-cs sim.
Packet sniffing on the device, though, I do not believe is possible at all, except over WiFi from a promiscuous node laptop sniffing next to it.
