I am trying to build a Wi-Fi trilateration project using 3 Raspberry Pis.
I can capture packets from all 3 Pis and send them to a web server, but even when a mobile device stays in one spot, I still get wild fluctuations in the signal strength readings.
I have been researching but can't really find a way to get consistent signal strength.
I have seen the fingerprinting method, but I would like to avoid it, since it requires a lot of setup and has to be recalibrated whenever the equipment is moved.
If you could point me in the right direction, I would appreciate it.
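A common first step is to smooth the RSSI in time before it reaches the trilateration math, since multipath fading makes individual readings jump around even for a stationary device. Below is a minimal Python sketch of that idea (the class, parameter values, and sample data are illustrative, not a drop-in fix; tune the window and alpha to your capture rate):

```python
from collections import deque
from statistics import median

class RssiSmoother:
    """Median filter followed by an exponential moving average."""

    def __init__(self, window=9, alpha=0.1):
        self.samples = deque(maxlen=window)  # sliding window of raw readings
        self.alpha = alpha                   # EMA weight given to new values
        self.ema = None

    def update(self, rssi_dbm):
        self.samples.append(rssi_dbm)
        med = median(self.samples)           # median rejects multipath spikes
        self.ema = med if self.ema is None else (
            self.alpha * med + (1 - self.alpha) * self.ema
        )
        return self.ema

# One smoother per Pi; feed it every reading for the target device.
smoother = RssiSmoother()
for raw in [-62, -75, -61, -63, -90, -62]:   # made-up readings in dBm
    print(round(smoother.update(raw), 1))
```

Smoothing alone won't remove the bias multipath adds to the distance estimate, but it should tame the fluctuation enough to tell whether trilateration is viable in your space.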
I need to implement a source-synchronous receiver in a Virtex-6 that receives data and a clock from a high-speed ADC.
For the SERDES module I need two clocks, which are basically the incoming clock buffered by BUFIO and BUFR (as recommended). I hope my picture makes the situation clear.
[Image: clock distribution]
My problem is that I have some IOBs that cannot be reached by the BUFIO because they are in a different, non-adjacent clock region.
A friend recommended using the MMCM and connecting the output to a BUFG, which can reach all IOBs.
Is this a good idea? Can't I connect my LVDS clock buffer directly to a BUFG, without an MMCM in between?
My knowledge of FPGA architecture and clock regions is still very limited, so it would be nice if anybody has some good ideas, wise words, or perhaps a solution they have worked out for a similar problem in the past.
It is quite common to use an MMCM for external clock inputs, if only to clean up the signal and provide some other nice features (like 90/180/270-degree phase shifts for quad-data-rate sampling).
With the 7-series, Xilinx introduced the multi-region clock buffer (BUFMR), which might help you here. Xilinx has published a nice answer record on which clock buffer to use when: 7 Series FPGA Design Assistant - Details on using different clocking buffers
I think your friend's suggestion is correct.
Also check this application note for some suggestions: LVDS Source Synchronous 7:1 Serialization and Deserialization Using Clock Multiplication
I'm having problems with a Spartan-6 (XC6SLX16-2CSG225I) and DDR (IS43R86400D) memory interface on some custom hardware. I've tried the same design on an SP601 dev board and everything works as expected.
Using the example project, when I enable soft_calibration, it never completes and calib_done stays low.
If I disable calibration, I can write to the memory perfectly as far as I can see. But when I try to read from it, I get a variable number of successful read commands before the Xilinx memory controller stops executing commands. Once this happens, the command FIFO fills up and stays full. The number of successful commands varies from 8 to 300.
I'm fairly convinced it's a timing issue, probably related to DQS centering. But because I can't get calibration to complete when enabled, I don't have continuous DQS Tuning. So I'm assuming it works with calibration disabled until the timing drifts.
Are there any obvious places I should be looking to find out why calibration fails?
I know this isn't a typical stack overflow question, so if it's an inappropriate place then I'll withdraw.
Thanks
Unfortunately, the calibration process just writes and reads content repeatedly while adjusting taps internally. It finds one edge where the test succeeds, then sweeps in the other direction to find the opposite edge, and finally settles on a tap somewhere in the middle.
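Conceptually, the tap sweep works something like the Python sketch below (the passes() callback is a stand-in for the controller's internal write/read test; the real MCB does all of this in hardware):

```python
def center_tap(taps, passes):
    """Sweep the delay taps, find both edges of the passing window,
    and settle on its midpoint."""
    results = [passes(t) for t in taps]  # True where a write/read test succeeds
    if True not in results:
        return None                      # no passing window at all: calibration hangs
    left = results.index(True)                            # first passing tap
    right = len(results) - 1 - results[::-1].index(True)  # last passing tap
    return taps[(left + right) // 2]     # center of the window

# Example: taps 3..10 pass, so calibration would settle near tap 6.
print(center_tap(range(16), lambda t: 3 <= t <= 10))
```

If the passing window is tiny or absent (bad timing, signal integrity, wrong speed grade), the sweep never finds both edges and calib_done never asserts.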
This is probably more HW-centric as well, so I'll post what I think and let someone else move the thread if needed.
1. Is it just this board, or do all of them do it? Have you checked? If it's one board and the RAM is a BGA package, it could be a bad solder job. Press your finger down slightly on the chip and see if you get different results. After this, it gets more HW-centric:
2. Does the FPGA image you are running on your custom board also work on your devkit? A lot of the time that isn't practical, I know, but I thought I would ask, as it rules out the possibility that the image you are using on the devkit has FPGA constraints you aren't getting in your custom image.
3. Check the length tolerances on the traces. There should have been a length-matching constraint, something like plus or minus 50 mils. No one likes to hear they need a board re-spin, but if those are out, it would explain a lot.
4. Signal integrity. Did you get your termination resistors in there, and are they the right values? Don't suppose you have an active probe?
5. Did you get the right DDR memory? Sometimes a different speed grade gets substituted, and that can cause all sorts of issues.
Slowing down the interface will usually help with items 4 and 5, so if you are just trying to get work done, you might ask for a new FPGA image with a slower clock.
I am fetching the user's location using CLLocationManager and calling a web service whenever the location updates in the background, but it makes the iPhone heat up and drains the battery. Does anyone have a solution for this?
Getting your position drains power; there are a few things you can do to mitigate that:
use significant location changes (good if you do not need a precise location at every moment)
limit the accuracy (lowering it can avoid engaging the GPS chip, which is the real battery drainer)
I do not understand the heat issue; yes, GPS makes the device hotter, but I've never experienced a restart due to heat.
Are you sure you are not also running into an expensive computational task? You can check this using the profiler in later versions of Xcode.
You can also set the distance filter. This keeps acquiring the position (so it will not reduce the battery drain), but it calls the delegate callback only when the distance threshold is reached.
iOS 6 also introduced the concept of deferring location updates in the background, which is probably the best solution, and it also helps manage the network traffic going out from your device.
In fact, you only have the choice between low location accuracy (1000 m) and high (3-6 m).
In the first case the GPS chip is disabled; in the second it is enabled.
If it is enabled and you need locations that precise, there is nothing you can do.
GPS needs power, and the battery lasts a bit more than 8 hours of full-precision location updates (measured on my iPhone 4).
Warming up is not a problem in itself; however, I cannot remember my phone ever warming up because of GPS (I will check that soon). It certainly never warms up so much that it restarts.
So your case is a bit strange; it could also be a defect of your device.
Another cause of warming up can be that you communicate with the server very often.
You can check that yourself: just download a decent GPS application and let it record a track.
If it gets hot too, your device might have a problem. (Or you live in an extremely hot environment and the sun is shining strongly on your phone.)
Also test with your network code disabled.
I'm using Wireshark to monitor network traffic to test new software installed on a router. The router lets other networks (4G, mobile devices through USB, etc.) connect to it to enhance its speed.
What I'm trying to do is disconnect the connected devices and find out whether any packets are lost when this happens. I know I can simply use the filter "tcp.analysis.lost_segment" to track down lost packets, but how can I isolate the specific device that causes the packet loss? Or even tell whether a loss happened because a device disconnected?
Also, what is the most reliable way to test this? Downloading a big file? Streaming a video?
All input is greatly appreciated
You can't detect lost packets solely with Wireshark or any other packet capture*.
Wireshark basically "records" what was seen on the line.
If a packet is lost, then by definition you will not see it on the line.
The * means I lied. Sort of. You can't detect them as such, but you can get an extremely strong indication by taking simultaneous captures at/near both devices in the data exchange... then comparing the two captures.
COMPUTER1<-->CAPTURE-MACHINE<-->NETWORK<-->CAPTURE-MACHINE<-->COMPUTER2
If you see the data leaving COMPUTER1 but never see it in the capture at COMPUTER2, there's your loss. (You could then move the capture machines one device closer on the network until you find the exact box/line losing your packets... or just analyze the devices in the network for things like configs, errors, etc.)
Alternatively, if you know exactly when the packet was sent, you could not prove but INDICATE its absence with a capture covering a minute or two before and after the send time that does NOT contain that packet. Such an indicator may even stand on its own as sufficient to find the problem.
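A rough sketch of that two-capture comparison in Python with scapy (the file names are hypothetical, and the matching key is approximate: in real traffic you would first filter down to the flow of interest, and retransmissions can hide a single loss):

```python
from scapy.all import rdpcap, IP, TCP  # pip install scapy

def tcp_keys(pcap_path):
    """Return a set of (src, dst, seq, payload_len) keys for the TCP segments."""
    keys = set()
    for pkt in rdpcap(pcap_path):
        if pkt.haslayer(IP) and pkt.haslayer(TCP):
            ip, tcp = pkt[IP], pkt[TCP]
            keys.add((ip.src, ip.dst, tcp.seq, len(tcp.payload)))
    return keys

sent = tcp_keys("near_computer1.pcap")      # capture taken next to COMPUTER1
received = tcp_keys("near_computer2.pcap")  # capture taken next to COMPUTER2

# Segments seen leaving COMPUTER1 that never show up at COMPUTER2.
for src, dst, seq, plen in sorted(sent - received):
    print(f"lost? {src} -> {dst} seq={seq} len={plen}")
```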
We have a critical need to lower the latency of our UDP listener on iOS.
We're implementing an alternative to RTP-MIDI that runs on iOS but relies on a simple UDP server to receive MIDI data. The problem we're having is that RTP-MIDI is able to receive and process messages around 20 ms faster than our simple UDP server on iOS.
We wrote 3 different code bases in order to eliminate the possibility that something else in the code was causing the unacceptable delays. In the end we concluded that there is a lag between the time when the iPad actually receives a packet and when that packet is presented to our application for reading.
We measured this with a scope. We put a pulse on one of the probes from the sending device every time it sent a Note-On command, and attached another probe to the audio output of the iPad. We triggered on the pulse and measured the time it took to hear the audio. The result was a reliable average of 45 ms, with a minimum of 38 ms and a maximum around 53 ms in rare situations.
We did the exact same test with RTP-MIDI (a far more verbose protocol) and it was 20 ms faster. The best hunch I have is that, being part of CoreMIDI, RTP-MIDI could be getting higher priority than our app, but simply acknowledging this doesn't help us. We really need to figure out how to fix this. We want our app to be just as fast as, if not faster than, RTP-MIDI, and I think this should be theoretically possible since our protocol will not be as messy. We've declared RTP-MIDI unacceptable for our application due to the poor design of its journal system.
The 3 code bases that were tested were:
1. An Objective-C implementation derived from the PGMidi example, which forwarded data received on UDP verbatim via virtual MIDI ports to GarageBand etc.
2. An Objective-C code base written by an experienced audio-engine developer, with a built-in low-latency sine-wave generator for output.
3. A Unity3D application with a Mono-based UDP listener and built-in sound-font synthesizer plugins.
All 3 implementations showed identical measurements on the scope test.
Any insights on how we can get our messages faster would be greatly appreciated.
NEWER INFORMATION in the search for answers:
I was digging around for answers and found this question, which seems to suggest that iOS might respond more quickly if the communication were TCP instead of UDP. This would take some effort to test on our part because our embedded system lacks TCP capabilities, only UDP. I am curious whether I could hold open a TCP connection for the sole purpose of keeping the Wi-Fi responsive. Crazy idea? I dunno. Has anyone tried this? I need this to be as real-time as possible.
Answering my own question here:
In order to keep the UDP latency down, it turns out, all I had to do was make sure the Wi-Fi doesn't go silent for more than 150 ms (or so). The exact timing requirements are unknown to me at this time; however, the initial tests were run with packets 500 ms apart, and that was too long. When I increased the packet rate to one every 150 ms, the UDP latency was on par with RTP-MIDI, giving us a total lag time of around 18 ms on average (vs. 45 ms), measured with the same technique I described in the original question.
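For anyone reproducing this, the likely mechanism is the Wi-Fi radio dropping into power-save mode when the link goes idle. Below is a minimal keepalive sender as a Python sketch for the transmit side (the address, port, and exact interval are illustrative; real MIDI traffic would be interleaved with, or simply replace, the dummy packets):

```python
import socket
import time

IPAD_ADDR = ("192.168.1.50", 5004)  # hypothetical address of the iOS UDP listener
KEEPALIVE_INTERVAL = 0.15           # seconds; 500 ms gaps were too long in our tests

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
while True:
    sock.sendto(b"\x00", IPAD_ADDR)  # 1-byte dummy packet the app can ignore
    time.sleep(KEEPALIVE_INTERVAL)
```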