I have a simple configuration of two XBees: one coordinator and one end device/router. The coordinator continuously sends data to the end device at 9600 bps without expecting any sort of response from it (I cannot increase the bps because of standardisation issues in my application). I managed to make it send data, but it arrives at the end device after a random number of seconds, which I do not want - it must ideally be instantaneous. Which XBee parameters do I need to modify in order to make the transmission much faster?
If you're using the XBee module in transparent mode (ATAP=0), then you want to look at ATRO, the packetization timeout value. This is the amount of idle time on the serial interface that the XBee waits for before considering a packet complete and ready to send.
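For example, here is a minimal sketch of dropping RO to its minimum from a host machine; it assumes pyserial and an XBee attached on /dev/ttyUSB0, both of which are my assumptions rather than anything from your setup:

```python
# Lower the packetization timeout (RO) on an XBee in transparent mode.
# Assumes pyserial is installed and the module is on /dev/ttyUSB0 at 9600 bps.
import time
import serial

port = serial.Serial("/dev/ttyUSB0", baudrate=9600, timeout=1)

def at_command(cmd):
    """Send one AT command and return the module's reply (e.g. 'OK')."""
    port.write((cmd + "\r").encode("ascii"))
    return port.read_until(b"\r").decode("ascii").strip()

# Enter command mode: one second of silence, then "+++", then another second.
time.sleep(1)
port.write(b"+++")
time.sleep(1)
print(port.read_until(b"\r").decode("ascii").strip())   # expect "OK"

print(at_command("ATRO0"))   # send a frame as soon as the serial line goes idle
print(at_command("ATWR"))    # write the setting to non-volatile memory
print(at_command("ATCN"))    # leave command mode
```

You can of course set the same parameter from XCTU instead; RO is measured in character times, so a small value means a frame goes out almost as soon as the serial line goes idle.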
If this is a sleeping end device, you may experience delays if it's sleeping and the coordinator is waiting for it to wake up before sending. Try configuring it as a router and see if that helps with the delay.
Note that the serial port speed (ATBD) of the coordinator and end device do not need to match. The XBee module buffers packets and always sends them over the air at 250kbps. When possible, you should run the serial interface at at least 115,200bps to minimize the latency and maximize the throughput of the wireless interface.
Finally, how are you handling addressing of your packets? Using 64-bit or 16-bit addresses? If 16-bit addresses, there could be discovery overhead, but that should go away after the first packet gets through.
And if you're not using modules with chip antennas, do you have antennas attached?
The problem was that I had not configured them in transparent mode; they were working in another mode, which was laggy.
I need to know how the computer handles Local Area Network (LAN) I/O interrupts at the processor. I have been looking for a while but can't seem to find anything. I came across some RJ-45 port information, but not much of what I specifically need. If someone has some information on how the CPU interrupts a running process to call the interrupt vector, and therefore the driver, plus how this process works, it would be much appreciated.
Thanks
Typically, the driver for the LAN card configures the card to issue an interrupt when the receive buffer gets close to full or the send buffer gets close to empty. Typically, these buffers live in system memory, and the network hardware uses DMA to pull transmitted packets from system memory and to store received packets in it.
When the interrupt triggers, some process on some core is typically interrupted and the network code begins executing. If it's a send interrupt and there are more packets to send, more packets are attached to the send buffer. If it's a receive interrupt, typically more packet buffers are attached to the receive buffer. The driver typically arranges for a "bottom half" to be dispatched to handle whatever other work needs to be done (such as processing the received packets), and then the interrupt completes.
There's a ton of possible variation based upon many factors, but this is the basic idea.
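To make the top-half/bottom-half split concrete, here is a toy model in Python (not real driver code; the packet data and "protocol stack" are just stand-ins): the interrupt handler only queues work, and a worker thread does the slower processing later.

```python
# Toy model only: Python threads standing in for interrupt context and the
# deferred-work context, with made-up packet data.
import queue
import threading
import time

rx_work = queue.Queue()

def process_packet(pkt):
    # Stand-in for the protocol stack; in a real kernel this is the slow part.
    print("protocol stack got:", pkt)

def interrupt_handler(dma_buffers):
    # Models the "top half": keep it short, just queue the completed buffers
    # for later processing (a real driver would also ack the NIC and refill
    # the receive ring here).
    rx_work.put(list(dma_buffers))

def bottom_half():
    # Models the "bottom half": runs outside interrupt context and does the
    # slower per-packet work at its leisure.
    while True:
        for pkt in rx_work.get():
            process_packet(pkt)

threading.Thread(target=bottom_half, daemon=True).start()
interrupt_handler([b"packet-1", b"packet-2"])  # pretend the NIC just raised an IRQ
time.sleep(0.1)                                # give the toy worker a moment to run
```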
I'm trying to optimize battery usage when networking. If I hold all my HTTP requests in an array, for example, and then send them all at once (emptying the array in a for loop), will the antenna turn on once to perform the 10 requests, or will it turn on and off n times? (I'm using NSURLRequest.)
Is there a way to batch-send requests at once, or is this basically already "batch" sending requests?
The documentation says nothing about how an iDevice's hardware handles multiple NSURLRequests. It may well be that the handling differs between models or OS versions (e.g. iPhone 4 vs iPhone 5).
You will have to use Instruments and research it on your own using Energy Diagnostics. However, this is rather simple. Here is a short plan for how to do it:
1. Connect the device to your development system.
2. Launch Xcode or Instruments.
3. On the device, choose Settings > Developer and turn on power logging.
4. Disconnect the device and perform the desired tests.
5. Reconnect the device.
6. In Instruments, open the Energy Diagnostics template.
7. Choose File > Import Energy Diagnostics from Device.
Moreover, have a look at Analyzing CPU Usage in Your App
The energy optimisation performed by the OS is not publicly known.
The exact handling depends on the particular interface: some interfaces have very low set-up/tear-down costs (e.g. Bluetooth LE), while others are quite cheap to run but take time to set up and tear down (e.g. 2G).
You generally have to take a course of action that gives the OS the best options possible, and then let it do what it can.
We can say a few things. It is unlikely that the connection is being powered up/down for individual packets, so the connection will be powered up when there is data to send, and kept up as long as you're trying to receive. The OS may be able to run at a lower power when it is just waiting, as it doesn't need to ACK packets, but it won't be able to power off completely.
Bottom line: if you send your requests sequentially, I believe the OS is unlikely to cycle power in between requests; if you send them in parallel, it almost certainly won't.
Is this a worthwhile optimisation? Depends how much of it you're doing.
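As a rough illustration only (plain Python with urllib rather than NSURLRequest, and placeholder endpoints), "emptying the array at once" just means firing the queued requests back to back, with no idle gaps in between for the radio to power down:

```python
# Flush a queue of HTTP requests in one burst rather than spread over time.
import urllib.request

queued_urls = [
    "https://example.com/api/a",   # placeholder endpoints
    "https://example.com/api/b",
    "https://example.com/api/c",
]

def flush_queue(urls):
    """Send every queued request sequentially, back to back, then empty the queue."""
    results = []
    for url in urls:
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                results.append(resp.read())
        except OSError as err:     # covers URLError/timeouts; good enough for a sketch
            results.append(err)
    urls.clear()                   # the queue is emptied in one pass
    return results

flush_queue(queued_urls)
```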
Possibly of interest: background downloads whereby the OS times your fetch when it knows it is going to do some other network activity anyway.
So I'm curious: is there any info on a router's config page or in its manual, or is there any way to measure its buffer? (I'm talking about the "memory" it has to keep packets in until the router can transmit them.)
You should be able to measure your outgoing buffer by sending lots of (UDP) data out as quickly as possible and seeing how much goes out before data starts getting dropped. You will need to send it really fast, and have something at the other end to capture it as it comes in; your send speed has to be a lot faster than your receive speed.
Your buffer will be a little smaller than that, since in the time it takes you to send the data, at least the first packet will have left the router.
Note that you will be measuring the smallest buffer between you and the remote end; if you have, for example, a firewall and a router in two separate devices, you don't really know which buffer you are testing.
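If it helps, here is a rough sketch of that outgoing test in Python. The address, port, packet count, and the artificially slow receiver are all placeholders/assumptions, and the test also exercises your own machines' socket buffers, so treat the result as a rough indication rather than an exact router figure.

```python
# Run "python3 bufprobe.py recv" on the receiving machine first, then
# "python3 bufprobe.py" on the sending machine. Numbers below are placeholders.
import socket
import sys
import time

HOST, PORT, COUNT, SIZE = "192.168.1.50", 9999, 5000, 1400   # receiver address, etc.

def send():
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for seq in range(COUNT):
        payload = str(seq).encode().ljust(SIZE, b"x")        # sequence number + padding
        s.sendto(payload, (HOST, PORT))

def receive():
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.bind(("", PORT))
    s.settimeout(5)
    seen = set()
    try:
        while True:
            data, _ = s.recvfrom(65535)
            seen.add(int(data.split(b"x", 1)[0]))
            time.sleep(0.01)   # artificially slow receiver so buffers along the path fill
    except socket.timeout:
        pass
    missing = sorted(set(range(COUNT)) - seen)
    first_drop = missing[0] if missing else "none"
    print(f"received {len(seen)} of {COUNT}; first dropped sequence number: {first_drop}")

if __name__ == "__main__":
    receive() if sys.argv[1:] == ["recv"] else send()
```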
Your incoming buffer is a lot more difficult to test, since you can't fill it fast enough from the internet side: the router can usually deliver the incoming data to you faster than it arrives, so that buffer never backs up.
I'm using Wireshark to monitor network traffic to test new software installed on a router. The router itself lets other networks (4G, mobile devices through USB, etc.) connect to it and enhance the speed of that router.
What I'm trying to do is disconnect the connected devices and discover whether there are any packet losses while doing this. I know I can simply use a filter such as "tcp.analysis.lost_segment" to track down lost packets, but how can I isolate the specific device that causes the packet loss, or even know whether a loss was caused by a disconnected device?
Also, what is the most stable method to test this with? To download a big file? To stream a video? Etc etc
All input is greatly appreciated
You can't detect lost packets solely with Wireshark or any other packet capture*.
Wireshark basically "records" what was seen on the line.
If a packet is lost, then by definition you will not see it on the line.
The * means I lied. Sort of. You can't detect them as such, but you can extremely strongly indicate them by taking simultaneous captures at/near both devices in the data exchange... and then comparing the two captures.
COMPUTER1<-->CAPTURE-MACHINE<-->NETWORK<-->CAPTURE-MACHINE<-->COMPUTER2
If you see the data leaving COMPUTER1, but never see it in the capture at COMPUTER2, there's your loss. (You could then move the capture machines one device closer on the network until you find the exact box/line losing your packets... or just analyze the devices in the network, e.g. their configs, error counters, etc.)
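If you go the two-capture route, one way to do the comparison is sketched below. It assumes tshark is installed; the file names are placeholders, and the tcp.seq_raw field needs a reasonably recent Wireshark (ip.id is an alternative key on older versions).

```python
# List TCP segments seen in the capture near COMPUTER1 but never in the one near COMPUTER2.
import subprocess

def segment_ids(pcap):
    """Return a set of (source address, raw sequence number) pairs found in a capture."""
    out = subprocess.run(
        ["tshark", "-r", pcap, "-Y", "tcp", "-T", "fields",
         "-e", "ip.src", "-e", "tcp.seq_raw"],
        capture_output=True, text=True, check=True).stdout
    return {tuple(line.split("\t")) for line in out.splitlines() if line.strip()}

sent = segment_ids("near_computer1.pcap")       # placeholder file names
arrived = segment_ids("near_computer2.pcap")
for src, seq in sorted(sent - arrived):
    print(f"segment from {src} with raw seq {seq} never reached the far capture")
```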
Alternatively, if you know exactly when the packet was sent, you could not prove but INDICATE its absence with a capture covering a minute or two before and after the packet was sent that does NOT contain that packet. Such an indicator may even stand on its own as sufficient to find the problem.
I would like to synchronize (to within 0.05 seconds) an event in the future on more than one iOS device, whether or not those devices have network access at that future time.
To do this, I believe I'll need to sync to a common clock between these devices, and the best way would be to get a precise time from an NTP server(s). I've looked at iOS-NTP, but the description says that it determines time with 1 second accuracy, and that doesn't quite do it for me.
One alternative is to get the (accurate?) timestamp from a GPS clock, but I'm not sure that the devices will always have the ability to get a GPS signal.
Any other suggestions?
iOS doesn't give you low level access to the GPS hardware, so you can't use that method.
If the devices are close to each other, you can establish a peer-to-peer wireless connection using GameKit. That should give you around 35ms latency over Bluetooth, and the latency should be consistent, so you could measure it a few times to get the clocks in sync.
If they're not close together... then I would set up a server and use that as the "real" time. With several calls you should be able to measure the latency between the device and the server, then calculate the clock offset with reasonable accuracy.
If possible use UDP instead of TCP, that way network congestion will drop packets instead of delaying them.
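Here is a sketch of that measurement in Python. The arithmetic is the classic NTP-style exchange; the server address and the wire format (the server echoing back its receive and send timestamps as two big-endian doubles) are assumptions, and on iOS you would do the same math with whatever socket API you use.

```python
# Estimate the offset between the local clock and a server clock over UDP.
import socket
import struct
import time

SERVER = ("time.example.com", 12345)   # hypothetical UDP time service you run yourself

def probe():
    """One NTP-style exchange: returns (round_trip, clock_offset) in seconds."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.settimeout(1)
    t0 = time.time()                         # client transmit time
    s.sendto(struct.pack("!d", t0), SERVER)
    data, _ = s.recvfrom(64)
    t3 = time.time()                         # client receive time
    t1, t2 = struct.unpack("!dd", data)      # server receive time, server transmit time
    rtt = (t3 - t0) - (t2 - t1)
    offset = ((t1 - t0) + (t2 - t3)) / 2     # positive: server clock is ahead of ours
    return rtt, offset

samples = []
for _ in range(8):
    try:
        samples.append(probe())
    except socket.timeout:
        pass                                  # UDP: a lost or late probe is simply skipped

if samples:
    # Trust the probe with the smallest round trip; it has the least queuing delay in it.
    best_rtt, offset = min(samples)
    print(f"estimated clock offset: {offset:+.4f} s (rtt {best_rtt:.4f} s)")
```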
You'll need to keep on re-calculating the offset though, since iOS devices change their clock frequently (I think it's part of the 3G/LTE network protocol? Jumping from one cell tower to another might update the clock).